The Sequential Auction Problem on eBay: An Empirical Analysis and a Solution *

Adam I. Juda
Division of Engineering and Applied Sciences, Harvard University; Harvard Business School
ajuda@hbs.edu

David C. Parkes
Division of Engineering and Applied Sciences, Harvard University
parkes@eecs.harvard.edu

* A preliminary version of this work appeared in the AMEC workshop in 2004.

ABSTRACT

Bidders on eBay have no dominant bidding strategy when faced with multiple auctions each offering an item of interest. As seen through an analysis of 1,956 auctions on eBay for a Dell E193FP LCD monitor, some bidders win auctions at prices higher than those of other available auctions, while others never win an auction despite placing bids in losing efforts that are greater than the closing prices of other available auctions. These misqueues in strategic behavior hamper the efficiency of the system, and in so doing limit the revenue potential for sellers. This paper proposes a novel options-based extension to eBay's proxy-bidding system that resolves this strategic issue for buyers in commoditized markets. An empirical analysis of eBay provides a basis for computer simulations that investigate the market effects of the options-based scheme, and demonstrates that the options-based scheme provides greater efficiency than eBay, while also increasing seller revenue.

Categories and Subject Descriptors: J.4 [Computer Applications]: Social and Behavioral Sciences - Economics

General Terms: Algorithms, Design, Economics

1. INTRODUCTION

Electronic markets represent an application of information systems that has generated significant new trading opportunities while allowing for the dynamic pricing of goods. In addition to marketplaces such as eBay, electronic marketplaces are increasingly used for business-to-consumer auctions (e.g., to sell surplus inventory [19]). Many authors have written about a future in which commerce is mediated by online, automated trading agents [10, 25, 1]. There is still little evidence of automated trading in e-markets, though. We believe that one leading place of resistance is the lack of provably optimal bidding strategies for any but the simplest of market designs. Without this, we do not expect individual consumers, or firms, to be confident in placing their business in the hands of an automated agent.

One of the most common examples today of an electronic marketplace is eBay, where the gross merchandise volume (i.e., the sum of all successfully closed listings) during 2005 was $44B. Among items listed on eBay, many are essentially identical. This is especially true in the Consumer Electronics category [9], which accounted for roughly $3.5B of eBay's gross merchandise volume in 2005. This presence of essentially identical items can expose bidders, and sellers, to risks because of the sequential auction problem. For example, Alice may want an LCD monitor, and could potentially bid in either a 1 o'clock or 3 o'clock eBay auction. While Alice would prefer to participate in whichever auction will have the lower winning price, she cannot determine beforehand which auction that may be, and could end up winning the "wrong" auction. This is a problem of multiple copies. Another problem bidders may face is the exposure problem.
As investigated by Bykowsky et al. [6], exposure problems exist when buyers desire a bundle of goods but may only participate in single-item auctions.[1] For example, if Alice values a video game console by itself for $200, a video game by itself for $30, and both a console and game for $250, Alice must determine how much of the $20 of synergy value she might include in her bid for the console alone. Both problems arise in eBay as a result of sequential auctions of single items coupled with patient bidders with substitute or complementary valuations.

[1] The exposure problem has been primarily investigated by Bykowsky et al. in the context of simultaneous single-item auctions. The problem is also a familiar one of online decision making.

Why might the sequential auction problem be bad? Complex games may lead to bidders employing costly strategies and making mistakes. Potential bidders who do not wish to bear such costs may choose not to participate in the market, inhibiting seller revenue opportunities. Additionally, among those bidders who do choose to participate, the mistakes made may lead to inefficient allocations, further limiting revenue opportunities. We are interested in creating modifications to eBay-style markets that simplify the bidder problem, leading to simple equilibrium strategies, and preferably better efficiency and revenue properties.

1.1 Options + Proxies: A Proposed Solution

Retail stores have developed policies to assist their customers in addressing sequential purchasing problems. Return policies alleviate the exposure problem by allowing customers to return goods at the purchase price. Price matching alleviates the multiple copies problem by allowing buyers to receive from sellers after purchase the difference between the price paid for a good and a lower price found elsewhere for the same good [7, 15, 18]. Furthermore, price matching can reduce the impact of exactly when a seller brings an item to market, as the price will in part be set by others selling the same item. These two retail policies provide the basis for the scheme proposed in this paper.[2]

[2] Prior work has shown price matching as a potential mechanism for colluding firms to set monopoly prices. However, in our context, auction prices will be matched, which are not explicitly set by sellers but rather by buyers' bids.

We extend the proxy bidding technology currently employed by eBay. Our super-proxy extension will take advantage of a new, real options-based, market infrastructure that enables simple, yet optimal, bidding strategies. The extensions are computationally simple, handle temporal issues, and retain seller autonomy in deciding when to enter the market and conduct individual auctions. A seller sells an option for a good, which will ultimately lead to either a sale of the good or the return of the option. Buyers interact through a proxy agent, defining a value on all possible bundles of goods in which they have interest, together with the latest time period in which they are willing to wait to receive the good(s). The proxy agents use this information to determine how much to bid for options, and follow a dominant bidding strategy across all relevant auctions. A proxy agent exercises options held when the buyer's patience has expired, choosing options that maximize the buyer's payoff given the reported valuation. All other options are returned to the market and not exercised. The options-based protocol makes truthful and immediate revelation to a proxy a dominant strategy for buyers, whatever the future auction dynamics.

We conduct an empirical analysis of eBay, collecting data on over four months of bids for Dell LCD screens (model E193FP) starting in the Summer of 2005.
LCD screens are a high-ticket item, for which we demonstrate evidence of the sequential bidding problem. We first infer a conservative model for the arrival time, departure time and value of bidders on eBay for LCD screens during this period. This model is used to simulate the performance of the options-based infrastructure, in order to make direct comparisons to the actual performance of eBay in this market. We also extend the work of Haile and Tamer [11] to estimate an upper bound on the distribution of value of eBay bidders, taking into account the sequential auction problem when making the adjustments. Using this estimate, one can approximate how much greater a bidder's true value is than the maximum bid they were observed to have placed on eBay. Based on this approximation, revenue generated in a simulation of the options-based scheme exceeds revenue on eBay for the comparable population and sequence of auctions by 14.8%, while the options-based scheme demonstrates itself as being 7.5% more efficient.

1.2 Related Work

A number of authors [27, 13, 28, 29] have analyzed the multiple copies problem, oftentimes in the context of categorizing or modeling sniping behavior for reasons other than those first brought forward by Ockenfels and Roth [20]. These papers perform equilibrium analysis in simpler settings, assuming bidders can participate in at most two auctions. Peters & Severinov [21] extend these models to allow buyers to consider an arbitrary number of auctions, and characterize a perfect Bayesian equilibrium. However, their model does not allow auctions to close at distinct times and does not consider the arrival and departure of bidders. Previous work has developed a data-driven approach toward a taxonomy of strategies employed by bidders in practice when facing multi-unit auctions, but has not considered the sequential bidding problem [26, 2]. Previous work has also sought to provide agents with smarter bidding strategies [4, 3, 5, 1]. Unfortunately, it seems hard to design artificial agents with equilibrium bidding strategies, even for a simple simultaneous ascending price auction.

Iwasaki et al. [14] have considered the role of options in the context of a single, monolithic, auction design to help bidders with marginal-increasing values avoid exposure in a multi-unit, homogeneous item auction problem. In other contexts, options have been discussed for selling coal mine leases [23], or as leveled commitment contracts for use in a decentralized market place [24]. Most similar to our work, Gopal et al. [9] use options for reducing the risks of buyers and sellers in the sequential auction problem. However, their work uses costly options and does not remove the sequential bidding problem completely.

Work on online mechanisms and online auctions [17, 12, 22] considers agents that can dynamically arrive and depart across time. We leverage a recent price-based characterization by Hajiaghayi et al. [12] to provide a dominant strategy equilibrium for buyers within our options-based protocol. The special case for single-unit buyers is equivalent to the protocol of Hajiaghayi et al., albeit with an options-based interpretation. Jiang and Leyton-Brown [16] use machine learning techniques for bid identification in online auctions.
2. EBAY AND THE DELL E193FP

The most common type of auction held on eBay is a single-item proxy auction. Auctions open at a given time and remain open for a set period of time (usually one week). Bidders bid for the item by giving a proxy a value ceiling. The proxy will bid on behalf of the bidder only as much as is necessary to maintain a winning position in the auction, up to the ceiling received from the bidder. Bidders may communicate with the proxy multiple times before an auction closes. In the event that a bidder's proxy has been outbid, a bidder may give the proxy a higher ceiling to use in the auction. eBay's proxy auction implements an incremental version of a Vickrey auction, with the item sold to the highest bidder for the second-highest bid plus a small increment.

[Figure 1: Histogram of the number of LCD auctions available to each bidder and the number of LCD auctions in which a bidder participates. Log-log scale; x-axis: number of auctions; y-axis: number of bidders; series: "Auctions Available" and "Auctions in Which Bid".]

The market analyzed in this paper is that of a specific model of an LCD monitor, a 19-inch Dell LCD, model E193FP. This market was selected for a variety of reasons, including:

• The mean price of the monitor was $240 (with standard deviation $32), so we believe it reasonable to assume that bidders on the whole are only interested in acquiring one copy of the item on eBay.[3]
• The volume transacted is fairly high, at approximately 500 units sold per month.
• The item is not usually bundled with other items.
• The item is typically sold as new, and so suitable for the price-matching of the options-based scheme.

Raw auction information was acquired via a PERL script. The script accesses the eBay search engine,[4] and returns all auctions containing the terms 'Dell' and 'LCD' that have closed within the past month.[5] Data was stored in a text file for post-processing. To isolate the auctions in the domain of interest, queries were made against the titles of eBay auctions that closed between 27 May, 2005 and 1 October, 2005.[6]

Figure 1 provides a general sense of how many LCD auctions occur while a bidder is interested in pursuing a monitor.[7] 8,746 bidders (86%) had more than one auction available between when they first placed a bid on eBay and the latest closing time of an auction in which they bid (with an average of 78 auctions available). Figure 1 also illustrates the number of auctions in which each bidder participates.

[3] For reference, Dell's October 2005 mail order catalogue quotes the price of the monitor as being $379 without a desktop purchase, and $240 as part of a desktop purchase upgrade.
[4] http://search.ebay.com
[5] The search is not case-sensitive.
[6] Specifically, the query found all auctions where the title contained all of the following strings: 'Dell', 'LCD' and 'E193FP', while excluding all auctions that contained any of the following strings: 'Dimension', 'GHZ', 'desktop', 'p4' and 'GB'. The exclusion terms were incorporated so that the only auctions analyzed would be those selling exclusively the LCD of interest. For example, the few bundled auctions selling both a Dell Dimension desktop and the E193FP LCD are excluded.
[7] As a reference, most auctions close on eBay between noon and midnight EDT, with almost two auctions for the Dell LCD monitor closing each hour on average during peak time periods. Bidders have an average observed patience of 3.9 days (with a standard deviation of 11.4 days).
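The title filter described in footnote 6 is straightforward to express; the sketch below is ours (not the original PERL script) and simply renders the same include/exclude logic in Python, with the term lists taken from the footnote.

```python
# Minimal sketch (not the original PERL script) of the footnote-6 title filter:
# keep an auction only if its title contains every required term and none of
# the exclusion terms. Matching is case-insensitive, as the eBay search was.
REQUIRED = ["dell", "lcd", "e193fp"]
EXCLUDED = ["dimension", "ghz", "desktop", "p4", "gb"]

def is_relevant(title: str) -> bool:
    t = title.lower()
    return all(term in t for term in REQUIRED) and not any(term in t for term in EXCLUDED)

# A bundled desktop listing is filtered out; a plain monitor listing is kept.
assert is_relevant("Dell E193FP 19in LCD Monitor - New")
assert not is_relevant("Dell Dimension 3000 desktop + E193FP LCD")
```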
Only 32.3% of bidders who had more than one auction available are observed to bid in more than one auction (bidding in 3.6 auctions on average). A simple regression analysis shows that bidders tend to submit maximal bids to an auction that are $1.22 higher after spending twice as much time in the system, as well as bids that are $0.27 higher in each subsequent auction. Among the 508 bidders that won exactly one monitor and participated in multiple auctions, 201 (40%) paid more than $10 above the closing price of another auction in which they bid, paying on average $35 more (standard deviation $21) than the closing price of the cheapest auction in which they bid but did not win. Furthermore, among the 2,216 bidders that never won an item despite participating in multiple auctions, 421 (19%) placed a losing bid in one auction that was more than $10 higher than the closing price of another auction in which they bid, submitting a losing bid on average $34 more (standard deviation $23) than the closing price of the cheapest auction in which they bid but did not win. Although these measures do not say that a bidder who lost could definitively have won (because we only consider the final winning price and not the bid of the winner to her proxy), or that a bidder who won could have secured a better price, this is at least indicative of some bidder mistakes.

3. MODELING THE SEQUENTIAL AUCTION PROBLEM

While the eBay analysis was for simple bidders who desire only a single item, let us now consider a more general scenario where people may desire multiple goods of different types, possessing general valuations over those goods. A minimal encoding of the model is sketched after the definitions below.

Consider a world with buyers (sometimes called bidders) $B$ and $K$ different types of goods $G_1, \ldots, G_K$. Let $T = \{0, 1, \ldots\}$ denote time periods. Let $L$ denote a bundle of goods, represented as a vector of size $K$, where $L_k \in \{0, 1\}$ denotes the quantity of good type $G_k$ in the bundle.[8] The type of a buyer $i \in B$ is $(a_i, d_i, v_i)$, with arrival time $a_i \in T$, departure time $d_i \in T$, and private valuation $v_i(L) \geq 0$ for each bundle of goods $L$ received between $a_i$ and $d_i$, and zero value otherwise. The arrival time models the period in which a buyer first realizes her demand and enters the market, while the departure time models the period in which a buyer loses interest in acquiring the good(s). In settings with general valuations, we need an additional assumption: an upper bound on the difference between a buyer's arrival and departure, denoted $\Delta_{Max}$. Buyers have quasi-linear utilities, so that the utility of buyer $i$ receiving bundle $L$ and paying $p$, in some period no later than $d_i$, is $u_i(L, p) = v_i(L) - p$.

Each seller $j \in S$ brings a single item $k_j$ to the market, has no intrinsic value and wants to maximize revenue. Seller $j$ has an arrival time, $a_j$, which models the period in which she is first interested in listing the item, while the departure time, $d_j$, models the latest period in which she is willing to consider having an auction for the item close. A seller will receive payment by the end of the reported departure of the winning buyer.

[8] We extend notation whereby a single item $k$ of type $G_k$ refers to a vector $L$ with $L_k = 1$.
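To make the notation concrete, the following minimal sketch (ours; the class and field names are illustrative, not from the paper) encodes a buyer type $(a_i, d_i, v_i)$ and the quasi-linear utility just defined.

```python
# Sketch of the buyer model: a type (a_i, d_i, v_i) with quasi-linear utility
# u_i(L, p) = v_i(L) - p for bundles received no later than the departure
# period, and zero value otherwise.
from dataclasses import dataclass
from typing import Callable, Tuple

Bundle = Tuple[int, ...]  # length-K vector with entries in {0, 1}

@dataclass
class Buyer:
    arrival: int                      # a_i
    departure: int                    # d_i
    value: Callable[[Bundle], float]  # v_i(L) >= 0

    def utility(self, bundle: Bundle, payment: float, period: int) -> float:
        if self.arrival <= period <= self.departure:
            return self.value(bundle) - payment
        return -payment  # no value outside [a_i, d_i]; payment is still owed

# Example: a single-item buyer for good type k=0 out of K=2 goods.
alice = Buyer(arrival=0, departure=5, value=lambda L: 240.0 if L[0] == 1 else 0.0)
print(alice.utility((1, 0), payment=230.0, period=3))  # 10.0
```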
We say an individual auction in a sequence is locally strategyproof (LSP) if truthful bidding is a dominant strategy for a buyer that can only bid in that auction. Consider the following example to see that LSP is insufficient for the existence of a dominant bidding strategy for buyers facing a sequence of auctions.

Example 1. Alice values one ton of Sand together with one ton of Stone at $2,000. Bob holds a Vickrey auction for one ton of Sand on Monday and a Vickrey auction for one ton of Stone on Tuesday. Alice has no dominant bidding strategy because she needs to know the price for Stone on Tuesday to know her maximum willingness to pay for Sand on Monday.

Definition 1 (The sequential auction problem). Given a sequence of auctions, despite each auction being locally strategyproof, a bidder has no dominant bidding strategy.

Consider a sequence of auctions. Generally, auctions selling the same item will be uncertainly-ordered, because a buyer will not know the ordering of closing prices among the auctions. Define the interesting bundles for a buyer as all bundles that could maximize the buyer's profit for some combination of auctions and bids of other buyers.[9] Within the interesting bundles, say that an item has uncertain marginal value if the marginal value of the item depends on the other goods held by the buyer.[10] Say that an item is oversupplied if there is more than one auction offering an item of that type. Say two bundles are substitutes if one of those bundles has the same value as the union of both bundles.[11]

Proposition 1. Given locally strategyproof single-item auctions, the sequential auction problem exists for a bidder if and only if either of the following two conditions is true: (1) within the set of interesting bundles (a) there are two bundles that are substitutes, (b) there is an item with uncertain marginal value, or (c) there is an item that is over-supplied; (2) a bidder faces competitors' bids that are conditioned on the bidder's past bids.

Proof. (Sketch.) (⇐) A bidder does not have a dominant strategy when (a) she does not know which bundle among substitutes to pursue, (b) she faces the exposure problem, or (c) she faces the multiple copies problem. Additionally, a bidder does not have a dominant strategy when she does not know how to optimally influence the bids of competitors. (⇒) By contradiction. A bidder has a dominant strategy to bid her constant marginal value for a given item in each auction available when conditions (1) and (2) are both false.

For example, the following buyers all face the sequential auction problem as a result of conditions (a), (b) and (c) respectively: a buyer who values one ton of Sand for $1,000, or one ton of Stone for $2,000, but not both Sand and Stone; a buyer who values one ton of Sand for $1,000, one ton of Stone for $300, and one ton of Sand and one ton of Stone for $1,500, and can participate in an auction for Sand before an auction for Stone; and a buyer who values one ton of Sand for $1,000 and can participate in many auctions selling Sand.

[9] Assume that the empty set is an interesting bundle.
[10] Formally, an item $k$ has uncertain marginal value if $|\{m : m = v_i(Q) - v_i(Q - k), \ \forall Q \subseteq L \in \text{InterestingBundle}, \ Q \supseteq k\}| > 1$.
[11] Formally, two bundles $A$ and $B$ are substitutes if $v_i(A \cup B) = \max(v_i(A), v_i(B))$, where $A \cup B = L$ with $L_k = \max(A_k, B_k)$.

4. SUPER PROXIES AND OPTIONS

The novel solution proposed in this work to resolve the sequential auction problem consists of two primary components: richer proxy agents, and options with price matching. In finance, a real option is a right to acquire a real good at a certain price, called the exercise price. For instance, Alice may obtain from Bob the right to buy Sand from him at an exercise price of $1,000. An option provides the right to purchase a good at an exercise price but not the obligation.
This flexibility allows buyers to put together a collection of options on goods and then decide which to exercise. Options are typically sold at a price called the option price. However, options obtained at a non-zero option price cannot generally support a simple, dominant bidding strategy, as a buyer must compute the expected value of an option to justify the cost [8]. This computation requires a model of the future, which in our setting requires a model of the bidding strategies and the values of other bidders. This is the very kind of game-theoretic reasoning that we want to avoid. Instead, we consider costless options with an option price of zero. This will require some care, as buyers are weakly better off with a costless option than without one, whatever its exercise price. However, multiple bidders pursuing options with no intention of exercising them would cause the efficiency of an auction for options to unravel. This is the role of the mandatory proxy agents, which intermediate between buyers and the market. A proxy agent forces a link between the valuation function used to acquire options and the valuation used to exercise options. If a buyer tells her proxy an inflated value for an item, she runs the risk of having the proxy exercise options at a price greater than her value.

4.1 Buyer Proxies

4.1.1 Acquiring Options

After her arrival, a buyer submits her valuation $\hat{v}_i$ (perhaps untruthfully) to her proxy in some period $\hat{a}_i \geq a_i$, along with a claim about her departure time $\hat{d}_i \geq \hat{a}_i$. All transactions are intermediated via proxy agents. Each auction is modified to sell an option on that good to the highest bidding proxy, with an initial exercise price set to the second-highest bid received.[12] When an option in which a buyer is interested becomes available for the first time, the proxy determines its bid by computing the buyer's maximum marginal value for the item, and then submits a bid in this amount. A proxy does not bid for an item when it already holds an option. The bid price is:

$bid_i^t(k) = \max_L \, [\hat{v}_i(L + k) - \hat{v}_i(L)]$   (1)

By having a proxy compute a buyer's maximum marginal value for an item and then bidding only that amount, a buyer's proxy will win any auction that could possibly be of benefit to the buyer and only lose those auctions that could never be of value to the buyer.

[12] The system can set a reserve price for each good, provided that the reserve is universal for all auctions selling the same item. Without a universal reserve price, price matching is not possible because of the additional restrictions on prices that individual sellers will accept.

Buyer   Type               Monday    Tuesday
Molly   (Mon, Tues, $8)    6_Nancy   6_Nancy → 4_Polly
Nancy   (Mon, Tues, $6)    -         4_Polly
Polly   (Mon, Tues, $4)    -         -

Table 1: Three-buyer example with each buyer wanting a single item and one auction occurring on Monday and Tuesday. X_Y denotes an option with exercise price X and bookkeeping that a proxy has prevented Y from currently possessing an option. → denotes the updating of exercise price and bookkeeping.

When a proxy wins an auction for an option, the proxy will store in its local memory the identity (which may be a pseudonym) of the proxy not holding an option because of the proxy's win (i.e., the proxy that it "bumped" from winning, if any). This information will be used for price matching.
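To illustrate Equation 1, here is a minimal sketch (ours, not the authors' implementation; the XOR representation and function names are illustrative) of the proxy's bid as the buyer's maximum marginal value for an item. For an XOR-valuation it suffices to try the empty bundle and each listed term with the item removed, which mirrors the $O(KM^2)$ bound given in Section 4.2.

```python
# Sketch of the Equation 1 bid: the proxy bids the buyer's maximum marginal
# value for item type k, i.e. max over bundles L of v(L + k) - v(L), for an
# XOR-valuation given as {frozenset of items: value}.

def xor_value(valuation, bundle):
    """v(S) = highest-valued XOR term contained in S (0 if none applies)."""
    return max((v for terms, v in valuation.items() if terms <= bundle), default=0.0)

def equation1_bid(valuation, k):
    """Maximum marginal value of item k over candidate bundles."""
    candidates = [frozenset()] + [terms - {k} for terms in valuation]
    return max(xor_value(valuation, L | {k}) - xor_value(valuation, L) for L in candidates)

# Alice from the introduction: console alone $200, game alone $30, both $250.
v_alice = {frozenset({"console"}): 200.0,
           frozenset({"game"}): 30.0,
           frozenset({"console", "game"}): 250.0}
print(equation1_bid(v_alice, "console"))  # 220.0 (its marginal value alongside the game)
print(equation1_bid(v_alice, "game"))     # 50.0
```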
4.1.2 Pricing Options

Sellers agree, by joining the market, to allow the proxy representing a buyer to adjust the exercise price of an option that it holds downwards if the proxy discovers that it could have achieved a better price by waiting to bid in a later auction for an option on the same good. To assist in the implementation of the price matching scheme, each proxy tracks future auctions for an option that it has already won and determines who would be bidding in that auction had the proxy delayed its entry into the market until this later auction. The proxy will request price matching from the seller that granted it an option if the proxy discovers that it could have secured a lower price by waiting. To reiterate, the proxy does not acquire more than one option for any good. Rather, it reduces the exercise price on its already issued option if a better deal is found.

The proxy is able to discover these deals by asking each future auction to report the identities of the bidders in that auction together with their bids. This needs to be enforced by eBay, as the central authority. The highest bidder in this later auction, across those whose identity is not stored in the proxy's memory for the given item, is exactly the bidder against whom the proxy would be competing had it delayed its entry until this auction. If this high bid is lower than the current option price held, the proxy price matches down to this high bid price.

After price matching, one of two adjustments will be made by the proxy for bookkeeping purposes. If the winner of the auction is the bidder whose identity has been in the proxy's local memory, the proxy will replace that local information with the identity of the bidder whose bid it just price matched, as that is now the bidder the proxy has prevented from obtaining an option. If the auction winner's identity is not stored in the proxy's local memory, the memory may be cleared. In this case, the proxy will simply price match against the bids of future auction winners on this item until the proxy departs.

Example 2 (Table 1). Molly's proxy wins the Monday auction, submitting a bid of $8 and receiving an option for $6. Molly's proxy adds Nancy to its local memory, as Nancy's proxy would have won had Molly's proxy not bid. On Tuesday, only Nancy's and Polly's proxies bid (as Molly's proxy holds an option), with Nancy's proxy winning an option for $4 and noting that it bumped Polly's proxy. At this time, Molly's proxy will price match its option down to $4 and replace Nancy with Polly in its local memory as per the price match algorithm, as Polly would be holding an option had Molly never bid.
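The price-matching and bookkeeping rule above can be sketched as follows; this is our own illustrative rendering for a single item type (class and method names are ours), not the authors' implementation.

```python
# Sketch (names ours) of the Section 4.1.2 price-matching rule for one item
# type. The proxy holds at most one option; after each later auction on the
# same item it may lower its exercise price and update its bookkeeping.

class ProxyOption:
    def __init__(self, exercise_price, bumped_bidder=None):
        self.exercise_price = exercise_price   # current option price held
        self.bumped = bumped_bidder            # bidder we prevented from winning, if any

    def observe_later_auction(self, bids, winner):
        """bids: {bidder_id: bid amount} in a later auction for the same item;
        winner: the id that won that auction."""
        # The would-be competitor is the highest bidder whose identity is not
        # the one stored in local memory.
        rivals = {b: amt for b, amt in bids.items() if b != self.bumped}
        if not rivals:
            return
        top_rival, top_bid = max(rivals.items(), key=lambda kv: kv[1])
        # Price match down if waiting would have been cheaper.
        if top_bid < self.exercise_price:
            self.exercise_price = top_bid
        # Bookkeeping: if the stored bidder finally won, it is now top_rival
        # whom we keep from holding an option; otherwise clear the memory.
        if winner == self.bumped:
            self.bumped = top_rival
        else:
            self.bumped = None

# Example 2: Molly wins Monday at $6, having bumped Nancy.
molly = ProxyOption(exercise_price=6.0, bumped_bidder="Nancy")
molly.observe_later_auction(bids={"Nancy": 6.0, "Polly": 4.0}, winner="Nancy")
print(molly.exercise_price, molly.bumped)  # 4.0 Polly
```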
4.1.3 Exercising Options

At the reported departure time, the proxy chooses which options to exercise. Therefore, a seller of an option must wait until period $\hat{d}_w$ for the option to be exercised and to receive payment, where $w$ is the winner of the option.[13] For bidder $i$, in period $\hat{d}_i$, the proxy chooses the option(s) that maximize the (reported) utility of the buyer:

$\theta_t^* = \arg\max_{\theta \subseteq \Theta} \, (\hat{v}_i(\gamma(\theta)) - \pi(\theta))$   (2)

where $\Theta$ is the set of all options held, $\gamma(\theta)$ are the goods corresponding to a set of options, and $\pi(\theta)$ is the sum of exercise prices for a set of options. All other options are returned.[14] No options are exercised when no combination of options has positive utility.

[13] While this appears restrictive on the seller, we believe it is not significantly different from what sellers on eBay currently endure in practice. An auction on eBay closes at a specific time, but a seller must wait until a buyer relinquishes payment before being able to realize the revenue, an amount of time that could easily be days (if payment is via a money order sent through courier) to much longer (if a buyer is slow but not overtly delinquent in remitting her payment).
[14] Presumably, an option returned will result in the seller holding a new auction for an option on the item it still possesses. However, the system will not allow a seller to re-auction an option until $\Delta_{Max}$ after the option had first been issued, in order to maintain a truthful mechanism.
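A direct, brute-force reading of Equation 2 is sketched below (our illustration, not the authors' code): at the reported departure the proxy enumerates subsets of the options it holds and exercises the surplus-maximizing subset, returning the rest.

```python
# Sketch of Equation 2: at departure, exercise the subset of held options that
# maximizes the reported value of the covered goods minus the summed exercise
# prices; exercise nothing if no subset yields positive surplus.
from itertools import chain, combinations

def best_exercise(options, value):
    """options: list of (item, exercise_price); value: function over frozensets
    of items (the reported valuation). Returns (chosen options, surplus)."""
    all_subsets = chain.from_iterable(combinations(options, r) for r in range(len(options) + 1))
    best, best_surplus = (), 0.0
    for subset in all_subsets:
        goods = frozenset(item for item, _ in subset)
        surplus = value(goods) - sum(price for _, price in subset)
        if surplus > best_surplus:
            best, best_surplus = subset, surplus
    return list(best), best_surplus

# A buyer holding options on Sand ($1,500) and Stone ($600), valuing either one
# alone (or both together) at $2,000: only the cheaper option is exercised.
value = lambda goods: 2000.0 if goods else 0.0
print(best_exercise([("Sand", 1500.0), ("Stone", 600.0)], value))
# ([('Stone', 600.0)], 1400.0)
```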
4.1.4 Why bookkeep and not match winning price?

One may believe that an alternative method for implementing a price matching scheme could be to simply have proxies match the lowest winning price they observe after winning an option. However, as demonstrated in Table 2, such a simple price matching scheme will not lead to a truthful system.

Truth:
Buyer   Type               Monday    Tuesday
Molly   (Mon, Mon, $8)     6_Nancy   -
Nancy   (Mon, Tues, $6)    -         4_Polly
Polly   (Mon, Tues, $4)    -         -

Misreport:
Buyer   Type               Monday    Tuesday
Molly   (Mon, Mon, $8)     -         -
Nancy   (Mon, Tues, $10)   8_Molly   8_Molly → 4_φ
Polly   (Mon, Tues, $4)    -         0_φ

Misreport & match low:
Buyer   Type               Monday    Tuesday
Molly   (Mon, Mon, $8)     -         -
Nancy   (Mon, Tues, $10)   8         8 → 0
Polly   (Mon, Tues, $4)    -         0

Table 2: Examples demonstrating why bookkeeping will lead to a truthful system whereas simply matching to the lowest winning price will not.

The first scenario in Table 2 demonstrates the outcome if all agents were to truthfully report their types. Molly would win the Monday auction and receive an option with an exercise price of $6 (subsequently exercising that option at the end of Monday), and Nancy would win the Tuesday auction and receive an option with an exercise price of $4 (subsequently exercising that option at the end of Tuesday).

The second scenario in Table 2 demonstrates the outcome if Nancy were to misreport her value for the good by reporting an inflated value of $10, using the proposed bookkeeping method. Nancy would win the Monday auction and receive an option with an exercise price of $8. On Tuesday, Polly would win the auction and receive an option with an exercise price of $0. Nancy's proxy would observe that the highest bid submitted on Tuesday among those proxies not stored in local memory is Polly's bid of $4, and so Nancy's proxy would price match the exercise price of its option down to $4. Note that the exercise price Nancy's proxy has obtained at the end of Tuesday is the same as when she truthfully revealed her type to her proxy.

The third scenario in Table 2 demonstrates the outcome if Nancy were to misreport her value for the good by reporting an inflated value of $10, if the price matching scheme were for proxies to simply match their option price to the lowest winning price at any time while they are in the system. Nancy would win the Monday auction and receive an option with an exercise price of $8. On Tuesday, Polly would win the auction and receive an option with an exercise price of $0. Nancy's proxy would observe that the lowest price on Tuesday was $0, and so Nancy's proxy would price match the exercise price of its option down to $0. Note that the exercise price Nancy's proxy has obtained at the end of Tuesday is lower than when she truthfully revealed her type to the proxy. Therefore, a price matching policy of simply matching the lowest price paid may not elicit truthful information from buyers.

4.2 Complexity of Algorithm

An XOR-valuation of size $M$ for buyer $i$ is a set of $M$ terms, $\langle L^1, v_i^1 \rangle, \ldots, \langle L^M, v_i^M \rangle$, that maps distinct bundles to values, where $i$ is interested in acquiring at most one such bundle. For any bundle $S$, $v_i(S) = \max_{L^m \subseteq S} (v_i^m)$.

Theorem 1. Given an XOR-valuation which possesses $M$ terms, there is an $O(KM^2)$ algorithm for computing all maximum marginal values, where $K$ is the number of different item types in which a buyer may be interested.

Proof. For each item type, recall Equation 1, which defines the maximum marginal value of an item. For each bundle $L$ in the $M$-term valuation, $v_i(L + k)$ may be found by iterating over the $M$ terms. Therefore, the number of terms explored to determine the maximum marginal value for any item is $O(M^2)$, and so the total number of bundle comparisons to be performed to calculate all maximum marginal values is $O(KM^2)$.

Theorem 2. The total memory required by a proxy for implementing price matching is $O(K)$, where $K$ is the number of distinct item types. The total work performed by a proxy to conduct price matching in each auction is $O(1)$.

Proof. By construction of the algorithm, the proxy stores one maximum marginal value for each item for bidding, of which there are $O(K)$; at most one buyer's identity for each item, of which there are $O(K)$; and one current option exercise price for each item, of which there are $O(K)$. For each auction, the proxy either submits a precomputed bid or price matches, both of which take $O(1)$ work.

4.3 Truthful Bidding to the Proxy Agent

Proxies transform the market into a direct revelation mechanism, where each buyer $i$ interacts with the proxy only once,[15] and does so by declaring a bid, $b_i$, which is defined as an announcement of her type, $(\hat{a}_i, \hat{d}_i, \hat{v}_i)$, where the announcement may or may not be truthful. We denote all received bids other than $i$'s as $b_{-i}$. Given bids, $b = (b_i, b_{-i})$, the market determines allocations, $x_i(b)$, and payments, $p_i(b) \geq 0$, to each buyer (using an online algorithm). A dominant strategy equilibrium for buyers requires that $v_i(x_i(b_i, b_{-i})) - p_i(b_i, b_{-i}) \geq v_i(x_i(b_i', b_{-i})) - p_i(b_i', b_{-i})$ for all $b_i' \neq b_i$ and all $b_{-i}$.

[15] For analysis purposes, we view the mechanism as an opaque market, so that the buyer cannot condition her bid on bids placed by others.

We now establish that it is a dominant strategy for a buyer to reveal her true valuation and true departure time to her proxy agent immediately upon arrival to the system. The proof builds on the price-based characterization of strategyproof single-item online auctions in Hajiaghayi et al. [12]. Define a monotonic and value-independent price function $ps_i(a_i, d_i, L, v_{-i})$ which can depend on the values of other agents $v_{-i}$. Price $ps_i(a_i, d_i, L, v_{-i})$ will represent the price available to agent $i$ for bundle $L$ in the mechanism if it announces arrival time $a_i$ and departure time $d_i$. The price is independent of the value $v_i$ of agent $i$, but can depend on $a_i$, $d_i$ and $L$ as long as it satisfies a monotonicity condition.

Definition 2. Price function $ps_i(a_i, d_i, L, v_{-i})$ is monotonic if $ps_i(a_i', d_i', L', v_{-i}) \leq ps_i(a_i, d_i, L, v_{-i})$ for all $a_i' \leq a_i$, all $d_i' \geq d_i$, all bundles $L' \subseteq L$ and all $v_{-i}$.
Lemma 1. An online combinatorial auction will be strategyproof (with truthful reports of arrival, departure and value a dominant strategy) when there exists a monotonic and value-independent price function, $ps_i(a_i, d_i, L, v_{-i})$, such that for all $i$, all $a_i, d_i \in T$ and all $v_i$, agent $i$ is allocated bundle $L^* = \arg\max_L \, [v_i(L) - ps_i(a_i, d_i, L, v_{-i})]$ in period $d_i$ and makes payment $ps_i(a_i, d_i, L^*, v_{-i})$.

Proof. Agent $i$ cannot benefit from reporting a later departure $\hat{d}_i$ because the allocation is made in period $\hat{d}_i$ and the agent would have no value for this allocation. Agent $i$ cannot benefit from reporting a later arrival $\hat{a}_i \geq a_i$ or earlier departure $\hat{d}_i \leq d_i$ because of price monotonicity. Finally, the agent cannot benefit from reporting some $\hat{v}_i \neq v_i$ because its reported valuation does not change the prices it faces and the mechanism maximizes its utility given its reported valuation and given the prices.

Lemma 2. At any given time, there is at most one buyer in the system whose proxy does not hold an option for a given item type because of buyer $i$'s presence in the system, and the identity of that buyer will be stored in $i$'s proxy's local memory at that time if such a buyer exists.

Proof. By induction. Consider the first proxy that a buyer prevents from winning an option. Either (a) the bumped proxy will leave the system having never won an option, or (b) the bumped proxy will win an auction in the future. If (a), the buyer's presence prevented exactly that one buyer from winning an option, but will not have prevented any other proxies from winning an option (as the buyer's proxy will not bid on additional options upon securing one), and will have had that bumped proxy's identity in its local memory by definition of the algorithm. If (b), the buyer has not prevented the bumped proxy from winning an option after all, but rather has prevented only the proxy that lost to the bumped proxy from winning (if any), whose identity will now be stored in the proxy's local memory by definition of the algorithm. For this new identity in the buyer's proxy's local memory, either scenario (a) or (b) will be true, ad infinitum.

Given this, we show that the options-based infrastructure implements a price-based auction with a monotonic and value-independent price schedule to every agent.

Theorem 3. Truthful revelation of valuation, arrival and departure is a dominant strategy for a buyer in the options-based market.

Proof. First, define a simple agent-independent price function $p_i^k(t, v_{-i})$ as the highest bid by the proxies not holding an option on an item of type $G_k$ at time $t$, not including the proxy representing $i$ herself and not including any proxies that would have already won an option had $i$ never entered the system (i.e., whose identity is stored in $i$'s proxy's local memory), or $\infty$ if there is no supply at $t$. This set of proxies is independent of any declaration $i$ makes to its proxy (as the set explicitly excludes the at most one proxy (see Lemma 2) that $i$ has prevented from holding an option), and each bid submitted by a proxy within this set is only a function of its own buyer's declared valuation (see Equation 1). Furthermore, $i$ cannot influence the supply she faces, as any options returned by bidders due to a price set by $i$'s proxy's bid will be re-auctioned after $i$ has departed the system. Therefore, $p_i^k(t, v_{-i})$ is independent of $i$'s declaration to its proxy.
Next, define $ps_i^k(\hat{a}_i, \hat{d}_i, v_{-i}) = \min_{\hat{a}_i \leq \tau \leq \hat{d}_i} [p_i^k(\tau, v_{-i})]$ (possibly $\infty$) as the minimum price over $p_i^k(t, v_{-i})$, which is clearly monotonic. By construction of price matching, this is exactly the price obtained by a proxy on any option that it holds at departure. Define $ps_i(\hat{a}_i, \hat{d}_i, L, v_{-i}) = \sum_{k=1}^{K} ps_i^k(\hat{a}_i, \hat{d}_i, v_{-i}) L_k$, which is monotonic in $\hat{a}_i$, $\hat{d}_i$ and $L$ since $ps_i^k(\hat{a}_i, \hat{d}_i, v_{-i})$ is monotonic in $\hat{a}_i$ and $\hat{d}_i$ and (weakly) greater than zero for each $k$. Given the set of options held at $\hat{d}_i$, which may be a subset of those items with non-infinite prices, the proxy exercises options to maximize the reported utility. Left to show is that all bundles that could not be obtained with options held are priced sufficiently high as to not be preferred. For each such bundle, either there is an item priced at $\infty$ (in which case the bundle would not be desired) or there must be an item in that bundle for which the proxy does not hold an option that was available. In all auctions for such an item there must have been a distinct bidder with a bid greater than $bid_i^t(k)$, which subsequently results in $ps_i^k(\hat{a}_i, \hat{d}_i, v_{-i}) > bid_i^t(k)$, and so the bundle without $k$ would be preferred to the bundle with $k$.

Theorem 4. The super proxy, options-based scheme is individually-rational for both buyers and sellers.

Proof. By construction, the proxy exercises the profit-maximizing set of options obtained, or no options if no set of options derives non-negative surplus. Therefore, buyers are guaranteed non-negative surplus by participating in the scheme. For sellers, the price of each option is based on a non-negative bid or zero.

5. EVALUATING THE OPTIONS / PROXY INFRASTRUCTURE

A goal of the empirical benchmarking, and a reason to collect data from eBay, is to try to build a realistic model of buyers from which to estimate seller revenue and other market effects under the options-based scheme. We simulate a sequence of auctions that match the timing of the Dell LCD auctions on eBay.[16] When an auction successfully closes on eBay, we simulate a Vickrey auction for an option on the item sold in that period. Auctions that do not successfully close on eBay are not simulated. We estimate the arrival, departure and value of each bidder on eBay from their observed behavior.[17] Arrival is estimated as the first time that a bidder interacts with the eBay proxy, while departure is estimated as the latest closing time among eBay auctions in which a bidder participates. We initially adopt a particularly conservative estimate for bidder value, estimating it as the highest bid a bidder was observed to make on eBay.

[16] When running the simulations, the results of the first and final ten days of auctions are not recorded, to reduce edge effects that come from viewing a discrete time window of a continuous process.
[17] For the 100 bidders that won multiple times on eBay, we have each one bid a constant marginal value for each additional item in each auction until the number of options held equals the total number of LCDs won on eBay, with each option available for price matching independently. This bidding strategy is not a dominant strategy (falling outside the type space possible for buyers on which the proof of truthfulness has been built), but is believed to be the most appropriate first-order action for simulation.

          Price     σ(Price)   Value   Surplus
eBay      $240.24   $32        $244    $4
Options   $239.66   $12        $263    $23

Table 3: Average price paid, standard deviation of prices paid, average bidder value among winners, and average winning bidder surplus on eBay for Dell E193FP LCD screens, as well as in the simulated options-based market using worst-case estimates of bidders' true value.

Table 3 compares the distribution of closing prices on eBay and in the simulated options scheme. While the average revenue in both schemes is virtually the same ($239.66 in the options scheme vs. $240.24 on eBay), the winners in the options scheme tend to value the item won 7% more than the winners on eBay ($263 in the options scheme vs. $244 on eBay).
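The basic simulation step described above, in which each successfully closed eBay auction is replayed as a Vickrey (second-price) auction for an option among the proxies present at that time, can be sketched as follows. This is our own simplified single-item rendering, not the authors' simulator, and the field names are illustrative.

```python
# Simplified sketch of one simulation step: a Vickrey auction for an option
# among the proxies of bidders who are present (arrived, not departed, and not
# already holding an option). The winner receives an option priced at the
# second-highest bid (0 if there is only one eligible bidder).

def vickrey_option_auction(close_time, bidders):
    """bidders: list of dicts with 'arrival', 'departure', 'bid', 'has_option'.
    Returns (winner, exercise_price) or (None, None) if nobody is eligible."""
    eligible = [b for b in bidders
                if b["arrival"] <= close_time <= b["departure"] and not b["has_option"]]
    if not eligible:
        return None, None
    eligible.sort(key=lambda b: b["bid"], reverse=True)
    winner = eligible[0]
    price = eligible[1]["bid"] if len(eligible) > 1 else 0.0
    winner["has_option"] = True
    return winner, price

# Two bidders present when an auction closes at time 10.
bidders = [{"arrival": 1, "departure": 20, "bid": 263.0, "has_option": False},
           {"arrival": 5, "departure": 12, "bid": 240.0, "has_option": False}]
print(vickrey_option_auction(10, bidders)[1])  # 240.0
```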
5.1 Bid Identification

We extend the work of Haile and Tamer [11] to sequential auctions to get a better view of underlying bidder values. Rather than assume an equilibrium behavior for bidders as in standard econometric techniques, Haile and Tamer do not attempt to model how bidders' true values get mapped into a bid in any given auction. Rather, in the context of repeated single-item auctions with distinct bidder populations, Haile and Tamer make only the following two assumptions when estimating the distribution of true bidder values:

1. Bidders do not bid more than they are willing to pay.
2. Bidders do not allow an opponent to win at a price they are willing to beat.

From the first of their two assumptions, given the bids placed by each bidder in each auction, Haile and Tamer derive a method for estimating an upper bound on the bidding population's true value distribution (i.e., the bound that lies above the true value distribution). From the second of their two assumptions, given the winning price of each auction, Haile and Tamer derive a method for estimating a lower bound on the bidding population's true value distribution. It is only the upper bound of the distribution that we utilize in our work.

Haile and Tamer assume that bidders only participate in a single auction, and require independence of the bidding population from auction to auction. Neither assumption is valid here: the former because bidders are known to bid in more than one auction, and the latter because the set of bidders in an auction is in all likelihood not a true i.i.d. sampling of the overall bidding population. In particular, those who win auctions are less likely to bid in successive auctions, while those who lose auctions are more likely to remain bidders in future auctions. In applying their methods we make the following adjustments:

• Within a given auction, each individual bidder's true willingness to pay is assumed weakly greater than the maximum bid that bidder submits across all auctions for that item (either past or future).
• When estimating the upper bound of the value distribution, if a bidder bids in more than one auction, randomly select one of the auctions in which the bidder bid, and only utilize that one observation during the estimation.[18]

[18] In current work, we assume that removing duplicate bidders is sufficient to make the buying populations independent i.i.d. draws from auction to auction. If one believes that certain portions of the population are drawn to certain auctions, then further adjustments would be required in order to utilize these techniques.

[Figure 2: CDF of maximum bids observed and upper bound estimate of the bidding population's distribution for maximum willingness to pay. The true population distribution lies below the estimated upper bound. (x-axis: value in dollars, 0-500; y-axis: CDF; series: observed max bids, upper bound of true value.)]
          Price     σ(Price)   Value   Surplus
eBay      $240.24   $32        $281    $40
Options   $275.80   $14        $302    $26

Table 4: Average price paid, standard deviation of prices paid, average bidder value among winners, and average winning bidder surplus on eBay for Dell E193FP LCD screens, as well as in the simulated options-based market using an adjusted Haile and Tamer estimate of bidders' true values being 15% higher than their maximum observed bid.

Figure 2 provides the distribution of maximum bids placed by bidders on eBay as well as the estimated upper bound of the true value distribution of bidders based on the extended Haile and Tamer method.[19] As can be seen, the smallest relative gap between the two curves meaningfully occurs near the 80th percentile, where the upper bound is 1.17 times the maximum bid. Therefore, we adopt a uniform scaling factor of 1.15 as a less conservative model of bidder values. We now present results from this less conservative analysis.

Table 4 shows the distribution of closing prices in auctions on eBay and in the simulated options scheme. The mean price in the options scheme is now significantly higher, 15% greater, than the prices on eBay ($276 in the options scheme vs. $240 on eBay), while the standard deviation of closing prices is lower among the options scheme auctions ($14 in the options scheme vs. $32 on eBay). Therefore, not only is the expected revenue stream higher, but the lower variance provides sellers a greater likelihood of realizing that higher revenue.

The efficiency of the options scheme remains higher than on eBay. The winners in the options scheme now have an average estimated value 7.5% higher, at $302. In an effort to better understand this efficiency, we formulated a mixed integer program (MIP) to determine a simple estimate of the allocative efficiency of eBay. The MIP computes the efficient value of the offline problem with full hindsight on all bids and all supply.[20] Using a scaling of 1.15, the total value allocated to eBay winners is estimated at $551,242, while the optimal value (from the MIP) is $593,301. This suggests an allocative efficiency of 92.9%: while the typical value of a winner on eBay is $281, an average value of $303 was possible.[21] Note the options-based scheme comes very close to achieving this level of efficiency [at 99.7% efficient in this estimate] even though it operates without the benefit of hindsight.

[19] The estimation of the points in the curve is a minimization over many variables, many of which can have small-numbers bias. Consequently, Haile and Tamer suggest using a weighted average over all terms $y_i$, namely $\sum_i y_i \exp(y_i \rho) / \sum_j \exp(y_j \rho)$, to approximate the minimum while reducing the small-number effects. We used $\rho = -1000$ and removed observations of auctions with 17 bidders or more, as they occurred very infrequently. However, some small-numbers bias still demonstrated itself in the plateau in our upper bound estimate around a value of $300.
[20] Buyers who won more than one item on eBay are cloned so that they appear to be multiple bidders of identical type.
[21] As long as one believes that every bidder's true value is a constant factor α away from their observed maximum bid, the 92.9% efficiency calculation holds for any value of α. In practice, this belief may not be reasonable. For example, if losing bidders tend to have true values close to their observed maximum bids while eBay winners have true values much greater than their observed maximum bids, then downward bias is introduced in the efficiency calculation at present.
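The smoothed minimum from footnote 19 — a weighted average of the terms $y_i$ with weights $\exp(y_i \rho)$ and $\rho = -1000$ — can be written directly. The sketch below is ours and only illustrates the formula; the shift by $\min(y)$ is a standard numerical-stability trick (not mentioned in the paper), and the second call uses a milder $\rho$ just to make the blending visible.

```python
# Sketch of the footnote-19 smoothed minimum: a weighted average of the terms
# y_i with weights exp(rho * y_i); rho = -1000 pushes essentially all weight
# onto the smallest term. Shifting by min(y) keeps exp() from underflowing.
import math

def smooth_min(ys, rho=-1000.0):
    shift = min(ys)
    weights = [math.exp(rho * (y - shift)) for y in ys]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)

print(smooth_min([250.0, 240.0, 260.0]))            # ~240.0
print(smooth_min([250.0, 240.0, 260.0], rho=-0.1))  # a softer blend, ~244
```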
Finally, although the typical winning bidder surplus decreases between eBay and the options-based scheme, some surplus redistribution would be possible because the total market efficiency is improved.[22]

[22] The increase in eBay winner surplus between Tables 3 and 4 is to be expected, as the α scaling strictly increases the estimated value of the eBay winners while holding the prices at which they won constant.

6. DISCUSSION

The biggest concern with our scheme is that proxy agents who may be interested in many different items may acquire many more options than they finally exercise. This can lead to efficiency loss. Notice that this is not an issue when bidders are only interested in a single item (as in our empirical study), or have linear-additive values on items. To fix this, we would prefer to have proxy agents use more caution in acquiring options and use a more adaptive bidding strategy than that in Equation 1. For instance, if a proxy is already holding an option with an exercise price of $3 on some item for which it has a value of $10, and it values some substitute item at $5, the proxy could reason that in no circumstance will it be useful to acquire an option on the second item. We formulate a more sophisticated bidding strategy along these lines.

Let $\Theta_t$ be the set of all options a proxy for bidder $i$ already possesses at time $t$. Let $\theta_t \subseteq \Theta_t$ be a subset of those options, the sum of whose exercise prices is $\pi(\theta_t)$, and the goods corresponding to those options being $\gamma(\theta_t)$. Let $\Pi(\theta_t) = \hat{v}_i(\gamma(\theta_t)) - \pi(\theta_t)$ be the (reported) available surplus associated with a set of options. Let $\theta_t^*$ be the set of options currently held that would maximize the buyer's surplus; i.e., $\theta_t^* = \arg\max_{\theta_t \subseteq \Theta_t} \Pi(\theta_t)$. Let the maximal willingness to pay for an item $k$ represent a price above which the agent knows it would never exercise an option on the item given the current options held. This can be computed as follows:

$bid_i^t(k) = \max_L \, [\,0, \ \min[\hat{v}_i(L + k) - \Pi(\theta_t^*), \ \hat{v}_i(L + k) - \hat{v}_i(L)]\,]$   (3)

where $\hat{v}_i(L + k) - \Pi(\theta_t^*)$ considers surplus already held, $\hat{v}_i(L + k) - \hat{v}_i(L)$ considers the marginal value of a good, and taking the $\max[0, \cdot]$ considers the overall use of pursuing the good.
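Equation 3 can be read as capping the Equation 1 bid by how much the item adds over the surplus the proxy has already locked in. A minimal sketch (ours; the helper and names are illustrative) follows; as the next paragraph notes, this refinement forfeits truthfulness, so the code only illustrates the formula, matching Example 3 below.

```python
# Sketch of Equation 3 (the refined bid of Section 6): the bid for item k is
# its maximum marginal value, additionally capped by how much the item adds
# over the surplus Pi(theta*) already guaranteed by options currently held.

def xor_value(valuation, bundle):
    return max((v for terms, v in valuation.items() if terms <= bundle), default=0.0)

def equation3_bid(valuation, k, guaranteed_surplus):
    candidates = [frozenset()] + [terms - {k} for terms in valuation]
    best = 0.0
    for L in candidates:
        with_k = xor_value(valuation, L | {k})
        capped = min(with_k - guaranteed_surplus, with_k - xor_value(valuation, L))
        best = max(best, capped)
    return best

# Alice in Example 3: Sand XOR Stone, each worth $2,000. Having already locked
# in $500 of surplus on a Sand option, her proxy bids only $1,500 for Stone.
v = {frozenset({"Sand"}): 2000.0, frozenset({"Stone"}): 2000.0}
print(equation3_bid(v, "Stone", guaranteed_surplus=500.0))  # 1500.0
```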
However, and somewhat counterintuitively, we are not able to implement this bidding scheme without forfeiting truthfulness. The $\Pi(\theta_t^*)$ term in Equation 3 (i.e., the amount of guaranteed surplus bidder $i$ has already obtained) can be influenced by proxy $j$'s bid. Therefore, bidder $j$ may have an incentive to misrepresent her valuation to her proxy if she believes doing so will cause $i$ to bid differently in the future in a manner beneficial to $j$. Consider the following example, in which the proxy scheme is refined to bid the maximum willingness to pay.

Example 3. Alice values either one ton of Sand or one ton of Stone for $2,000. Bob values either one ton of Sand or one ton of Stone for $1,500. All bidders have a patience of 2 days. On day one, a Sand auction is held, where Alice's proxy bids $2,000 and Bob's bids $1,500. Alice's proxy wins an option to purchase Sand for $1,500. On day two, a Stone auction is held, where Alice's proxy bids $1,500 [as she has already obtained a guaranteed $500 of surplus from winning a Sand option, and so reduces her Stone bid by this amount], and Bob's bids $1,500. Either Alice's proxy or Bob's proxy will win the Stone option. At the end of the second day, Alice's proxy holds an option with an exercise price of $1,500 to obtain a good valued at $2,000, and so obtains $500 in surplus.

Now, consider what would have happened had Alice declared that she valued only Stone.

Example 4. Alice declares valuing only Stone for $2,000. Bob values either one ton of Sand or one ton of Stone for $1,500. All bidders have a patience of 2 days. On day one, a Sand auction is held, where Bob's proxy bids $1,500. Bob's proxy wins an option to purchase Sand for $0. On day two, a Stone auction is held, where Alice's proxy bids $2,000, and Bob's bids $0 [as he has already obtained a guaranteed $1,500 of surplus from winning a Sand option, and so reduces his Stone bid by this amount]. Alice's proxy wins the Stone option for $0. At the end of the second day, Alice's proxy holds an option with an exercise price of $0 to obtain a good valued at $2,000, and so obtains $2,000 in surplus.

By misrepresenting her valuation (i.e., excluding her value of Sand), Alice was able to secure higher surplus by guiding Bob's bid for Stone to $0. An area of immediate further work by the authors is to develop a more sophisticated proxy agent that can allow for bidding of maximum willingness to pay (Equation 3) while maintaining truthfulness.

An additional, practical, concern with our proxy scheme is that we assume an available, trusted, and well understood method to characterize goods (and presumably the quality of goods). We envision this happening in practice by sellers defining a classification for their item upon entering the market, for instance via a UPC code. Just as on eBay, this would allow an opportunity for sellers to improve revenue by overstating the quality of their item ("new" vs. "like new"), and raises the issue of how well a reputation scheme could address this.

7. CONCLUSIONS

We introduced a new sales channel, consisting of an options-based and proxied auction protocol, to address the sequential auction problem that exists when bidders face multiple auctions for substitute and complementary goods. Our scheme provides bidders with a simple, dominant and truthful bidding strategy even though the market remains open and dynamic. In addition to exploring more sophisticated proxies that bid in terms of maximum willingness to pay, future work should aim to better model seller incentives and resolve the strategic problems facing sellers. For instance, does the options scheme change seller incentives from what they currently are on eBay?

Acknowledgments

We would like to thank Pai-Ling Yin. Helpful comments have been received from William Simpson, attendees at Harvard University's EconCS and ITM seminars, and anonymous reviewers. Thank you to Aaron L. Roth and KangXing Jin for technical support. All errors and omissions remain our own.

8. REFERENCES

[1] P. Anthony and N. R. Jennings. Developing a bidding agent for multiple heterogeneous auctions. ACM Trans. on Internet Technology, 2003.
[2] R. Bapna, P. Goes, A. Gupta, and Y. Jin. User heterogeneity and its impact on electronic auction market design: An empirical exploration. MIS Quarterly, 28(1):21-43, 2004.
[3] D. Bertsimas, J. Hawkins, and G. Perakis. Optimal bidding in on-line auctions. Working Paper, 2002.
[4] C. Boutilier, M. Goldszmidt, and B. Sabata. Sequential auctions for the allocation of resources with complementarities. In Proc. 16th International Joint Conference on Artificial Intelligence (IJCAI-99), pages 527-534, 1999.
[5] A. Byde, C. Preist, and N. R. Jennings. Decision procedures for multiple auctions. In Proc. 1st Int. Joint Conf. on Autonomous Agents and Multiagent Systems (AAMAS-02), 2002.
[6] M. M. Bykowsky, R. J. Cull, and J. O. Ledyard. Mutually destructive bidding: The FCC auction design problem. Journal of Regulatory Economics, 17(3):205-228, 2000.
[7] Y. Chen, C. Narasimhan, and Z. J. Zhang. Consumer heterogeneity and competitive price-matching guarantees. Marketing Science, 20(3):300-314, 2001.
[8] A. K. Dixit and R. S. Pindyck. Investment under Uncertainty. Princeton University Press, 1994.
[9] R. Gopal, S. Thompson, Y. A. Tung, and A. B. Whinston. Managing risks in multiple online auctions: An options approach. Decision Sciences, 36(3):397-425, 2005.
[10] A. Greenwald and J. O. Kephart. Shopbots and pricebots. In Proc. 16th International Joint Conference on Artificial Intelligence (IJCAI-99), pages 506-511, 1999.
[11] P. A. Haile and E. Tamer. Inference with an incomplete model of English auctions. Journal of Political Economy, 111(1), 2003.
[12] M. T. Hajiaghayi, R. Kleinberg, M. Mahdian, and D. C. Parkes. Online auctions with re-usable goods. In Proc. ACM Conf. on Electronic Commerce, 2005.
[13] K. Hendricks, I. Onur, and T. Wiseman. Preemption and delay in eBay auctions. University of Texas at Austin Working Paper, 2005.
[14] A. Iwasaki, M. Yokoo, and K. Terada. A robust open ascending-price multi-unit auction protocol against false-name bids. Decision Support Systems, 39:23-40, 2005.
[15] E. Gerstner and J. D. Hess. Price-matching policies: An empirical case. Managerial and Decision Economics, 12(4):305-315, 1991.
[16] A. X. Jiang and K. Leyton-Brown. Estimating bidders' valuation distributions in online auctions. In Workshop on Game Theory and Decision Theory (GTDT) at IJCAI, 2005.
[17] R. Lavi and N. Nisan. Competitive analysis of incentive compatible on-line auctions. In Proc. 2nd ACM Conf. on Electronic Commerce (EC-00), 2000.
[18] Y. J. Lin. Price matching in a model of equilibrium price dispersion. Southern Economic Journal, 55(1):57-69, 1988.
[19] D. Lucking-Reiley and D. F. Spulber. Business-to-business electronic commerce. Journal of Economic Perspectives, 15(1):55-68, 2001.
[20] A. Ockenfels and A. Roth. Last-minute bidding and the rules for ending second-price auctions: Evidence from eBay and Amazon auctions on the Internet. American Economic Review, 92(4):1093-1103, 2002.
[21] M. Peters and S. Severinov. Internet auctions with many traders. Journal of Economic Theory (forthcoming), 2005.
[22] R. Porter. Mechanism design for online real-time scheduling. In Proc. 5th ACM Conference on Electronic Commerce, pages 61-70. ACM Press, 2004.
[23] M. H. Rothkopf and R. Engelbrecht-Wiggans. Innovative approaches to competitive mineral leasing. Resources and Energy, 14:233-248, 1992.
[24] T. Sandholm and V. Lesser. Leveled commitment contracts and strategic breach. Games and Economic Behavior, 35:212-270, 2001.
[25] T. W. Sandholm and V. R. Lesser. Issues in automated negotiation and electronic commerce: Extending the Contract Net framework. In Proc. 1st International Conference on Multi-Agent Systems (ICMAS-95), pages 328-335, 1995.
[26] H. S. Shah, N. R. Joshi, A. Sureka, and P. R. Wurman. Mining for bidding strategies on eBay. Lecture Notes in Artificial Intelligence, 2003.
[27] M. Stryszowska. Late and multiple bidding in competing second price Internet auctions. EuroConference on Auctions and Market Design: Theory, Evidence and Applications, 2003.
[28] J. T.-Y. Wang. Is last minute bidding bad? UCLA Working Paper, 2003.
[29] R. Zeithammer. An equilibrium model of a dynamic auction marketplace. Working Paper, University of Chicago, 2005.
The Sequential Auction Problem on eBay: An Empirical Analysis and a Solution * ABSTRACT Bidders on eBay have no dominant bidding strategy when faced with multiple auctions each offering an item of interest. As seen through an analysis of 1,956 auctions on eBay for a Dell E193FP LCD monitor, some bidders win auctions at prices higher than those of other available auctions, while others never win an auction despite placing bids in losing efforts that are greater than the closing prices of other available auctions. These misqueues in strategic behavior hamper the efficiency of the system, and in so doing limit the revenue potential for sellers. This paper proposes a novel options-based extension to eBay's proxy-bidding system that resolves this strategic issue for buyers in commoditized markets. An empirical analysis of eBay provides a basis for computer simulations that investigate the market effects of the options-based scheme, and demonstrates that the options-based scheme provides greater efficiency than eBay, while also increasing seller revenue. 1. INTRODUCTION Electronic markets represent an application of information systems that has generated significant new trading opportu * A preliminary version of this work appeared in the AMEC workshop in 2004. nities while allowing for the dynamic pricing of goods. In addition to marketplaces such as eBay, electronic marketplaces are increasingly used for business-to-consumer auctions (e.g. to sell surplus inventory [19]). Many authors have written about a future in which commerce is mediated by online, automated trading agents [10, 25, 1]. There is still little evidence of automated trading in e-markets, though. We believe that one leading place of resistance is in the lack of provably optimal bidding strategies for any but the simplest of market designs. Without this, we do not expect individual consumers, or firms, to be confident in placing their business in the "hands" of an automated agent. One of the most common examples today of an electronic marketplace is eBay, where the gross merchandise volume (i.e., the sum of all successfully closed listings) during 2005 was $44B. Among items listed on eBay, many are essentially identical. This is especially true in the Consumer Electronics category [9], which accounted for roughly $3.5 B of eBay's gross merchandise volume in 2005. This presence of essentially identical items can expose bidders, and sellers, to risks because of the sequential auction problem. For example, Alice may want an LCD monitor, and could potentially bid in either a 1 o'clock or 3 o'clock eBay auction. While Alice would prefer to participate in whichever auction will have the lower winning price, she cannot determine beforehand which auction that may be, and could end up winning the "wrong" auction. This is a problem of multiple copies. Another problem bidders may face is the exposure problem. As investigated by Bykowsky et al. [6], exposure problems exist when buyers desire a bundle of goods but may only participate in single-item auctions .1 For example, if Alice values a video game console by itself for $200, a video game by itself for $30, and both a console and game for $250, Alice must determine how much of the $20 of synergy value she might include in her bid for the console alone. Both problems arise in eBay as a result of sequential auctions of single items coupled with patient bidders with substitutes or complementary valuations. Why might the sequential auction problem be bad? 
Complex games may lead to bidders employing costly strategies and making mistakes. Potential bidders who do not wish to bear such costs may choose not to participate in the 1The exposure problem has been primarily investigated by Bykowsky et al. in the context of simultaneous single-item auctions. The problem is also a familiar one of online decision making. market, inhibiting seller revenue opportunities. Additionally, among those bidders who do choose to participate, the mistakes made may lead to inefficient allocations, further limiting revenue opportunities. We are interested in creating modifications to eBay-style markets that simplify the bidder problem, leading to simple equilibrium strategies, and preferably better efficiency and revenue properties. 1.1 Options + Proxies: A Proposed Solution Retail stores have developed policies to assist their customers in addressing sequential purchasing problems. Return policies alleviate the exposure problem by allowing customers to return goods at the purchase price. Price matching alleviates the multiple copies problem by allowing buyers to receive from sellers after purchase the difference between the price paid for a good and a lower price found elsewhere for the same good [7, 15, 18]. Furthermore, price matching can reduce the impact of exactly when a seller brings an item to market, as the price will in part be set by others selling the same item. These two retail policies provide the basis for the scheme proposed in this paper .2 We extend the proxy bidding technology currently employed by eBay. Our "super" - proxy extension will take advantage of a new, real options-based, market infrastructure that enables simple, yet optimal, bidding strategies. The extensions are computationally simple, handle temporal issues, and retain seller autonomy in deciding when to enter the market and conduct individual auctions. A seller sells an option for a good, which will ultimately lead to either a sale of the good or the return of the option. Buyers interact through a proxy agent, defining a value on all possible bundles of goods in which they have interest together with the latest time period in which they are willing to wait to receive the good (s). The proxy agents use this information to determine how much to bid for options, and follow a dominant bidding strategy across all relevant auctions. A proxy agent exercises options held when the buyer's patience has expired, choosing options that maximize a buyer's payoff given the reported valuation. All other options are returned to the market and not exercised. The options-based protocol makes truthful and immediate revelation to a proxy a dominant strategy for buyers, whatever the future auction dynamics. We conduct an empirical analysis of eBay, collecting data on over four months of bids for Dell LCD screens (model E193FP) starting in the Summer of 2005. LCD screens are a high-ticket item, for which we demonstrate evidence of the sequential bidding problem. We first infer a conservative model for the arrival time, departure time and value of bidders on eBay for LCD screens during this period. This model is used to simulate the performance of the optionsbased infrastructure, in order to make direct comparisons to the actual performance of eBay in this market. We also extend the work of Haile and Tamer [11] to estimate an upper bound on the distribution of value of eBay bidders, taking into account the sequential auction problem when making the adjustments. 
Using this estimate, one can approximate how much greater a bidder's true value is 2Prior work has shown price matching as a potential mechanism for colluding firms to set monopoly prices. However, in our context, auction prices will be matched, which are not explicitly set by sellers but rather by buyers' bids. from the maximum bid they were observed to have placed on eBay. Based on this approximation, revenue generated in a simulation of the options-based scheme exceeds revenue on eBay for the comparable population and sequence of auctions by 14.8%, while the options-based scheme demonstrates itself as being 7.5% more efficient. 1.2 Related Work A number of authors [27, 13, 28, 29] have analyzed the multiple copies problem, often times in the context of categorizing or modeling sniping behavior for reasons other than those first brought forward by Ockenfels and Roth [20]. These papers perform equilibrium analysis in simpler settings, assuming bidders can participate in at most two auctions. Peters & Severinov [21] extend these models to allow buyers to consider an arbitrary number of auctions, and characterize a perfect Bayesian equilibrium. However, their model does not allow auctions to close at distinct times and does not consider the arrival and departure of bidders. Previous work have developed a data-driven approach toward developing a taxonomy of strategies employed by bidders in practice when facing multi-unit auctions, but have not considered the sequential bidding problem [26, 2]. Previous work has also sought to provide agents with smarter bidding strategies [4, 3, 5, 1]. Unfortunately, it seems hard to design artificial agents with equilibrium bidding strategies, even for a simple simultaneous ascending price auction. Iwasaki et al. [14] have considered the role of options in the context of a single, monolithic, auction design to help bidders with marginal-increasing values avoid exposure in a multi-unit, homogeneous item auction problem. In other contexts, options have been discussed for selling coal mine leases [23], or as leveled commitment contracts for use in a decentralized market place [24]. Most similar to our work, Gopal et al. [9] use options for reducing the risks of buyers and sellers in the sequential auction problem. However, their work uses costly options and does not remove the sequential bidding problem completely. Work on online mechanisms and online auctions [17, 12, 22] considers agents that can dynamically arrive and depart across time. We leverage a recent price-based characterization by Hajiaghayi et al. [12] to provide a dominant strategy equilibrium for buyers within our options-based protocol. The special case for single-unit buyers is equivalent to the protocol of Hajiaghayi et al., albeit with an options-based interpretation. Jiang and Leyton-Brown [16] use machine learning techniques for bid identification in online auctions. 2. EBAY AND THE DELL E193FP The most common type of auction held on eBay is a singleitem proxy auction. Auctions open at a given time and remain open for a set period of time (usually one week). Bidders bid for the item by giving a proxy a value ceiling. The proxy will bid on behalf of the bidder only as much as is necessary to maintain a winning position in the auction, up to the ceiling received from the bidder. Bidders may communicate with the proxy multiple times before an auction closes. In the event that a bidder's proxy has been outbid, a bidder may give the proxy a higher ceiling to use in the auction. 
eBay's proxy auction implements an incremental version of a Vickrey auction, with the item sold to the highest bidder for the second-highest bid plus a small increment. Figure 1: Histogram of number of LCD auctions available to each bidder and number of LCD auctions in which a bidder participates. The market analyzed in this paper is that of a specific model of an LCD monitor, a 19" Dell LCD model E193FP. This market was selected for a variety of reasons including: • The mean price of the monitor was $240 (with standard deviation $32), so we believe it reasonable to assume that bidders on the whole are only interested in acquiring one copy of the item on eBay .3 • The volume transacted is fairly high, at approximately 500 units sold per month. • The item is not usually bundled with other items. • The item is typically sold "as new," and so suitable for the price-matching of the options-based scheme. Raw auction information was acquired via a PERL script. The script accesses the eBay search engine,4 and returns all auctions containing the terms ` Dell' and ` LCD' that have closed within the past month .5 Data was stored in a text file for post-processing. To isolate the auctions in the domain of interest, queries were made against the titles of eBay auctions that closed between 27 May, 2005 through 1 October, 2005.6 Figure 1 provides a general sense of how many LCD auctions occur while a bidder is interested in pursuing a monitor .7 8,746 bidders (86%) had more than one auction available between when they first placed a bid on eBay and the 6Specifically, the query found all auctions where the title contained all of the following strings: ` Dell,' ` LCD' and ` E193FP,' while excluding all auctions that contained any of the following strings: ` Dimension,' ` GHZ,' ` desktop,' ` p4' and ` GB . ' The exclusion terms were incorporated so that the only auctions analyzed would be those selling exclusively the LCD of interest. For example, the few bundled auctions selling both a Dell Dimension desktop and the E193FP LCD are excluded. 7As a reference, most auctions close on eBay between noon and midnight EDT, with almost two auctions for the Dell LCD monitor closing each hour on average during peak time periods. Bidders have an average observed patience of 3.9 days (with a standard deviation of 11.4 days). latest closing time of an auction in which they bid (with an average of 78 auctions available). Figure 1 also illustrates the number of auctions in which each bidder participates. Only 32.3% of bidders who had more than one auction available are observed to bid in more than one auction (bidding in 3.6 auctions on average). A simple regression analysis shows that bidders tend to submit maximal bids to an auction that are $1.22 higher after spending twice as much time in the system, as well as bids that are $0.27 higher in each subsequent auction. Among the 508 bidders that won exactly one monitor and participated in multiple auctions, 201 (40%) paid more than $10 more than the closing price of another auction in which they bid, paying on average $35 more (standard deviation $21) than the closing price of the cheapest auction in which they bid but did not win. 
Furthermore, among the 2,216 bidders that never won an item despite participating in multiple auctions, 421 (19%) placed a losing bid in one auction that was more than $10 higher than the closing price of another auction in which they bid, submitting a losing bid on average $34 more (standard deviation $23) than the closing price of the cheapest auction in which they bid but did not win. Although these measures do not say a bidder that lost could have definitively won (because we only consider the final winning price and not the bid of the winner to her proxy), or a bidder that won could have secured a better price, this is at least indicative of some bidder mistakes. 3. MODELING THE SEQUENTIAL AUCTION PROBLEM While the eBay analysis was for simple bidders who desire only a single item, let us now consider a more general scenario where people may desire multiple goods of different types, possessing general valuations over those goods. Consider a world with buyers (sometimes called bidders) B and K different types of goods G1...GK. Let T = {0, 1, ...} denote time periods. Let L denote a bundle of goods, represented as a vector of size K, where Lk ∈ {0, 1} denotes the quantity of good type Gk in the bundle .8 The type of a buyer i ∈ B is (ai, di, vi), with arrival time ai ∈ T, departure time di ∈ T, and private valuation vi (L) ≥ 0 for each bundle of goods L received between ai and di, and zero value otherwise. The arrival time models the period in which a buyer first realizes her demand and enters the market, while the departure time models the period in which a buyer loses interest in acquiring the good (s). In settings with general valuations, we need an additional assumption: an upper bound on the difference between a buyer's arrival and departure, denoted ΔMax. Buyers have quasi-linear utilities, so that the utility of buyer i receiving bundle L and paying p, in some period no later than di, is ui (L, p) = vi (L) − p. Each seller j ∈ S brings a single item kj to the market, has no intrinsic value and wants to maximize revenue. Seller j has an arrival time, aj, which models the period in which she is first interested in listing the item, while the departure time, dj, models the latest period in which she is willing to consider having an auction for the item close. A seller will receive payment by the end of the reported departure of the winning buyer. 8We extend notation whereby a single item k of type Gk refers to a vector L: Lk = 1. We say an individual auction in a sequence is locally strategyproof (LSP) if truthful bidding is a dominant strategy for a buyer that can only bid in that auction. Consider the following example to see that LSP is insufficient for the existence of a dominant bidding strategy for buyers facing a sequence of auctions. EXAMPLE 1. Alice values one ton of Sand with one ton of Stone at $2, 000. Bob holds a Vickrey auction for one ton of Sand on Monday and a Vickrey auction for one ton of Stone on Tuesday. Alice has no dominant bidding strategy because she needs to know the price for Stone on Tuesday to know her maximum willingness to pay for Sand on Monday. DEFINITION 1. The sequential auction problem. Given a sequence of auctions, despite each auction being locally strategyproof, a bidder has no dominant bidding strategy. Consider a sequence of auctions. Generally, auctions selling the same item will be uncertainly-ordered, because a buyer will not know the ordering of closing prices among the auctions. 
Define the interesting bundles for a buyer as all bundles that could maximize the buyer's profit for some combination of auctions and bids of other buyers .9 Within the interesting bundles, say that an item has uncertain marginal value if the marginal value of an item depends on the other goods held by the buyer .10 Say that an item is oversupplied if there is more than one auction offering an item of that type. Say two bundles are substitutes if one of those bundles has the same value as the union of both bundles .11 PROPOSITION 1. Given locally strategyproof single-item auctions, the sequential auction problem exists for a bidder if and only if either of the following two conditions is true: (1) within the set of interesting bundles (a) there are two bundles that are substitutes, (b) there is an item with uncertain marginal value, or (c) there is an item that is over-supplied; (2) a bidder faces competitors' bids that are conditioned on the bidder's past bids. PROOF. (Sketch.) (⇐) A bidder does not have a dominant strategy when (a) she does not know which bundle among substitutes to pursue, (b) she faces the exposure problem, or (c) she faces the multiple copies problem. Additionally, a bidder does not have a dominant strategy when she does not how to optimally influence the bids of competitors. (⇒) By contradiction. A bidder has a dominant strategy to bid its constant marginal value for a given item in each auction available when conditions (1) and (2) are both false. For example, the following buyers all face the sequential auction problem as a result of condition (a), (b) and (c) respectively: a buyer who values one ton of Sand for $1,000, or one ton of Stone for $2,000, but not both Sand and Stone; a buyer who values one ton of Sand for $1,000, one ton of Stone for $300, and one ton of Sand and one ton of Stone for $1,500, and can participate in an auction for Sand before an auction for Stone; a buyer who values one ton of Sand for $1,000 and can participate in many auctions selling Sand. 9Assume that the empty set is an interesting bundle. 10Formally, an item k has uncertain marginal value if Ifm: m = vi (Q)--vi (Q--k), ` dQ C _ L E InterestingBundle, Q _ D kJJ> 1. 11Formally, two bundles A and B are substitutes if vi (A U B) = max (vi (A), vi (B)), where A U B = L where Lk = max (Ak, Bk). 4. "SUPER" PROXIES AND OPTIONS The novel solution proposed in this work to resolve the sequential auction problem consists of two primary components: richer proxy agents, and options with price matching. In finance, a real option is a right to acquire a real good at a certain price, called the exercise price. For instance, Alice may obtain from Bob the right to buy Sand from him at an exercise price of $1, 000. An option provides the right to purchase a good at an exercise price but not the obligation. This flexibility allows buyers to put together a collection of options on goods and then decide which to exercise. Options are typically sold at a price called the option price. However, options obtained at a non-zero option price cannot generally support a simple, dominant bidding strategy, as a buyer must compute the expected value of an option to justify the cost [8]. This computation requires a model of the future, which in our setting requires a model of the bidding strategies and the values of other bidders. This is the very kind of game-theoretic reasoning that we want to avoid. Instead, we consider costless options with an option price of zero. 
This will require some care as buyers are weakly better off with a costless option than without one, whatever its exercise price. However, multiple bidders pursuing options with no intention of exercising them would cause the efficiency of an auction for options to unravel. This is the role of the mandatory proxy agents, which intermediate between buyers and the market. A proxy agent forces a link between the valuation function used to acquire options and the valuation used to exercise options. If a buyer tells her proxy an inflated value for an item, she runs the risk of having the proxy exercise options at a price greater than her value. 4.1 Buyer Proxies 4.1.1 Acquiring Options After her arrival, a buyer submits her valuation ˆvi (perhaps untruthfully) to her proxy in some period ˆai> ai, along with a claim about her departure time ˆdi> ˆai. All transactions are intermediated via proxy agents. Each auction is modified to sell an option on that good to the highest bidding proxy, with an initial exercise price set to the second-highest bid received .12 When an option in which a buyer is interested becomes available for the first time, the proxy determines its bid by computing the buyer's maximum marginal value for the item, and then submits a bid in this amount. A proxy does not bid for an item when it already holds an option. The bid price is: By having a proxy compute a buyer's maximum marginal value for an item and then bidding only that amount, a buyer's proxy will win any auction that could possibly be of benefit to the buyer and only lose those auctions that could never be of value to the buyer. 12The system can set a reserve price for each good, provided that the reserve is universal for all auctions selling the same item. Without a universal reserve price, price matching is not possible because of the additional restrictions on prices that individual sellers will accept. Table 1: Three-buyer example with each wanting a sin gle item and one auction occurring on Monday and Tuesday. "XY" implies an option with exercise price X and bookkeeping that a proxy has prevented Y from currently possessing an option. "→" is the updating of exercise price and bookkeeping. When a proxy wins an auction for an option, the proxy will store in its local memory the identity (which may be a pseudonym) of the proxy not holding an option because of the proxy's win (i.e., the proxy that it ` bumped' from winning, if any). This information will be used for price matching. 4.1.2 Pricing Options Sellers agree by joining the market to allow the proxy representing a buyer to adjust the exercise price of an option that it holds downwards if the proxy discovers that it could have achieved a better price by waiting to bid in a later auction for an option on the same good. To assist in the implementation of the price matching scheme each proxy tracks future auctions for an option that it has already won and will determine who would be bidding in that auction had the proxy delayed its entry into the market until this later auction. The proxy will request price matching from the seller that granted it an option if the proxy discovers that it could have secured a lower price by waiting. To reiterate, the proxy does not acquire more than one option for any good. Rather, it reduces the exercise price on its already issued option if a better deal is found. The proxy is able to discover these deals by asking each future auction to report the identities of the bidders in that auction together with their bids. 
This needs to be enforced by eBay, as the central authority. The highest bidder in this later auction, across those whose identity is not stored in the proxy's memory for the given item, is exactly the bidder against whom the proxy would be competing had it delayed its entry until this auction. If this high bid is lower than the current option price held, the proxy "price matches" down to this high bid price. After price matching, one of two adjustments will be made by the proxy for bookkeeping purposes. If the winner of the auction is the bidder whose identity has been in the proxy's local memory, the proxy will replace that local information with the identity of the bidder whose bid it just price matched, as that is now the bidder the proxy has prevented from obtaining an option. If the auction winner's identity is not stored in the proxy's local memory the memory may be cleared. In this case, the proxy will simply price match against the bids of future auction winners on this item until the proxy departs. EXAMPLE 2 (TABLE 1). Molly's proxy wins the Monday auction, submitting a bid of $8 and receiving an option for $6. Molly's proxy adds Nancy to its local memory as Nancy's proxy would have won had Molly's proxy not bid. On Tuesday, only Nancy's and Polly's proxy bid (as Molly's proxy holds an option), with Nancy's proxy winning an op Table 2: Examples demonstrating why bookkeeping will lead to a truthful system whereas simply matching to the lowest winning price will not. tion for $4 and noting that it bumped Polly's proxy. At this time, Molly's proxy will price match its option down to $4 and replace Nancy with Polly in its local memory as per the price match algorithm, as Polly would be holding an option had Molly never bid. 4.1.3 Exercising Options At the reported departure time the proxy chooses which options to exercise. Therefore, a seller of an option must wait until period ˆdw for the option to be exercised and receive payment, where w was the winner of the option .13 For bidder i, in period ˆdi, the proxy chooses the option (s) that maximize the (reported) utility of the buyer: where Θ is the set of all options held, γ (θ) are the goods corresponding to a set of options, and π (θ) is the sum of exercise prices for a set of options. All other options are returned .14 No options are exercised when no combination of options have positive utility. 4.1.4 Why bookkeep and not match winning price? One may believe that an alternative method for implementing a price matching scheme could be to simply have proxies match the lowest winning price they observe after winning an option. However, as demonstrated in Table 2, such a simple price matching scheme will not lead to a truthful system. The first scenario in Table 2 demonstrates the outcome if all agents were to truthfully report their types. Molly 13While this appears restrictive on the seller, we believe it not significantly different than what sellers on eBay currently endure in practice. An auction on eBay closes at a specific time, but a seller must wait until a buyer relinquishes payment before being able to realize the revenue, an amount of time that could easily be days (if payment is via a money order sent through courier) to much longer (if a buyer is slow but not overtly delinquent in remitting her payment). 14Presumably, an option returned will result in the seller holding a new auction for an option on the item it still possesses. 
However, the system will not allow a seller to re-auction an option until ΔMax after the option had first been issued in order to maintain a truthful mechanism. would win the Monday auction and receive an option with an exercise price of $6 (subsequently exercising that option at the end of Monday), and Nancy would win the Tuesday auction and receive an option with an exercise price of $4 (subsequently exercising that option at the end of Tuesday). The second scenario in Table 2 demonstrates the outcome if Nancy were to misreport her value for the good by reporting an inflated value of $10, using the proposed bookkeeping method. Nancy would win the Monday auction and receive an option with an exercise price of $8. On Tuesday, Polly would win the auction and receive an option with an exercise price of $0. Nancy's proxy would observe that the highest bid submitted on Tuesday among those proxies not stored in local memory is Polly's bid of $4, and so Nancy's proxy would price match the exercise price of its option down to $4. Note that the exercise price Nancy's proxy has obtained at the end of Tuesday is the same as when she truthfully revealed her type to her proxy. The third scenario in Table 2 demonstrates the outcome if Nancy were to misreport her value for the good by reporting an inflated value of $10, if the price matching scheme were for proxies to simply match their option price to the lowest winning price at any time while they are in the system. Nancy would win the Monday auction and receive an option with an exercise price of $8. On Tuesday, Polly would win the auction and receive an option with an exercise price of $0. Nancy's proxy would observe that the lowest price on Tuesday was $0, and so Nancy's proxy would price match the exercise price of its option down to $0. Note that the exercise price Nancy's proxy has obtained at the end of Tuesday is lower than when she truthfully revealed her type to the proxy. Therefore, a price matching policy of simply matching the lowest price paid may not elicit truthful information from buyers. 4.2 Complexity of Algorithm An XOR-valuation of size M for buyer i is a set of M terms, <L1, v1i>... <LM, vMi>, that maps distinct bundles to values, where i is interested in acquiring at most one such bundle. For any bundle S, vi (S) = maxLm ⊆ S (vmi). THEOREM 1. Given an XOR-valuation which possesses M terms, there is an O (KM2) algorithm for computing all maximum marginal values, where K is the number of different item types in which a buyer may be interested. PROOF. For each item type, recall Equation 1 which defines the maximum marginal value of an item. For each bundle L in the M-term valuation, vi (L + k) may be found by iterating over the M terms. Therefore, the number of terms explored to determine the maximum marginal value for any item is O (M2), and so the total number of bundle comparisons to be performed to calculate all maximum marginal values is O (KM2). THEOREM 2. The total memory required by a proxy for implementing price matching is O (K), where K is the number of distinct item types. The total work performed by a proxy to conduct price matching in each auction is O (1). PROOF. By construction of the algorithm, the proxy stores one maximum marginal value for each item for bidding, of which there are O (K); at most one buyer's identity for each item, of which there are O (K); and one current option exercise price for each item, of which there are O (K). 
For each auction, the proxy either submits a precomputed bid or price matches, both of which take O (1) work. 4.3 Truthful Bidding to the Proxy Agent Proxies transform the market into a direct revelation mechanism, where each buyer i interacts with the proxy only once,15 and does so by declaring a bid, bi, which is defined as an announcement of her type, (ˆai, ˆdi, ˆvi), where the announcement may or may not be truthful. We denote all received bids other than i's as b − i. Given bids, b = (bi, b − i), the market determines allocations, xi (b), and payments, pi (b)> 0, to each buyer (using an online algorithm). A dominant strategy equilibrium for buyers requires that vi (xi (bi, b − i))--pi (bi, b − i)> vi (xi (b ~ i, b − i))--pi (b ~ i, b − i), ` db ~ i = ~ bi, ` db − i.We now establish that it is a dominant strategy for a buyer to reveal her true valuation and true departure time to her proxy agent immediately upon arrival to the system. The proof builds on the price-based characterization of strategyproof single-item online auctions in Hajiaghayi et al. [12]. Define a monotonic and value-independent price function psi (ai, di, L, v − i) which can depend on the values of other agents v − i. Price psi (ai, di, L, v − i) will represent the price available to agent i for bundle L in the mechanism if it announces arrival time ai and departure time di. The price is independent of the value vi of agent i, but can depend on ai, di and L as long as it satisfies a monotonicity condition. PROOF. Agent i cannot benefit from reporting a later departure ˆdi because the allocation is made in period ˆdi and the agent would have no value for this allocation. Agent i cannot benefit from reporting a later arrival ˆai> ai or earlier departure ˆdi <di because of price monotonicity. Finally, the agent cannot benefit from reporting some ˆvi = ~ vi because its reported valuation does not change the prices it faces and the mechanism maximizes its utility given its reported valuation and given the prices. LEMMA 2. At any given time, there is at most one buyer in the system whose proxy does not hold an option for a given item type because of buyer i's presence in the system, and the identity of that buyer will be stored in i's proxy's local memory at that time if such a buyer exists. bumped proxy will leave the system having never won an option, or (b) the bumped proxy will win an auction in the future. If (a), the buyer's presence prevented exactly that one buyer from winning an option, but will have not prevented any other proxies from winning an option (as the buyer's proxy will not bid on additional options upon securing one), and will have had that bumped proxy's identity in its local memory by definition of the algorithm. If (b), the buyer has not prevented the bumped proxy from winning an option after all, but rather has prevented only the proxy that lost to the bumped proxy from winning (if any), whose identity will now be stored in the proxy's local memory by definition of the algorithm. For this new identity in the buyer's proxy's local memory, either scenario (a) or (b) will be true, ad infinitum. Given this, we show that the options-based infrastructure implements a price-based auction with a monotonic and value-independent price schedule to every agent. THEOREM 3. Truthful revelation of valuation, arrival and departure is a dominant strategy for a buyer in the optionsbased market. PROOF. 
First, define a simple agent-independent price function pki (t, v_i) as the highest bid by the proxies not holding an option on an item of type Gk at time t, not including the proxy representing i herself and not including any proxies that would have already won an option had i never entered the system (i.e., whose identity is stored in i's proxy's local memory) (∞ if no supply at t). This set of proxies is independent of any declaration i makes to its proxy (as the set explicitly excludes the at most one proxy (see Lemma 2) that i has prevented from holding an option), and each bid submitted by a proxy within this set is only a function of their own buyer's declared valuation (see Equation 1). Furthermore, i cannot influence the supply she faces as any options returned by bidders due to a price set by i's proxy's bid will be re-auctioned after i has departed the system. Therefore, pki (t, v_i) is independent of i's declaration to its proxy. Next, define pski (ˆai, ˆdi, v_i) = minˆa, <τ <ˆd, [pki (τ, v_i)] (possibly ∞) as the minimum price over pki (t, v_i), which is clearly monotonic. By construction of price matching, this is exactly the price obtained by a proxy on any option that it holds at departure. Define psi (ˆai, ˆdi, L, v_i) = k = K k = 1 pski (ˆai, ˆdi, v_i) Lk, which is monotonic in ˆai, ˆdi and L since pski (ˆai, ˆdi, v_i) is monotonic in ˆai and ˆdi and (weakly) greater than zero for each k. Given the set of options held at ˆdi, which may be a subset of those items with non-infinite prices, the proxy exercises options to maximize the reported utility. Left to show is that all bundles that could not be obtained with options held are priced sufficiently high as to not be preferred. For each such bundle, either there is an item priced at ∞ (in which case the bundle would not be desired) or there must be an item in that bundle for which the proxy does not hold an option that was available. In all auctions for such an item there must have been a distinct bidder with a bid greater than bidti (k), which subsequently results in pski (ˆai, ˆdi, v_i)> bidti (k), and so the bundle without k would be preferred to the bundle. THEOREM 4. The super proxy, options-based scheme is individually-rational for both buyers and sellers. Table 3: Average price paid, standard deviation of prices paid, average bidder value among winners, and average winning bidder surplus on eBay for Dell E193FP LCD screens as well as the simulated options-based market using worst-case estimates of bidders' true value. PROOF. By construction, the proxy exercises the profit maximizing set of options obtained, or no options if no set of options derives non-negative surplus. Therefore, buyers are guaranteed non-negative surplus by participating in the scheme. For sellers, the price of each option is based on a non-negative bid or zero. 5. EVALUATING THE OPTIONS / PROXY INFRASTRUCTURE A goal of the empirical benchmarking and a reason to collect data from eBay is to try and build a realistic model of buyers from which to estimate seller revenue and other market effects under the options-based scheme. We simulate a sequence of auctions that match the timing of the Dell LCD auctions on eBay .16 When an auction successfully closes on eBay, we simulate a Vickrey auction for an option on the item sold in that period. Auctions that do not successfully close on eBay are not simulated. 
We estimate the arrival, departure and value of each bidder on eBay from their observed behavior .17 Arrival is estimated as the first time that a bidder interacts with the eBay proxy, while departure is estimated as the latest closing time among eBay auctions in which a bidder participates. We initially adopt a particularly conservative estimate for bidder value, estimating it as the highest bid a bidder was observed to make on eBay. Table 3 compares the distribution of closing prices on eBay and in the simulated options scheme. While the average revenue in both schemes is virtually the same ($239.66 in the options scheme vs. $240.24 on eBay), the winners in the options scheme tend to value the item won 7% more than the winners on eBay ($263 in the options scheme vs. $244 on eBay). 5.1 Bid Identification We extend the work of Haile and Tamer [11] to sequential auctions to get a better view of underlying bidder values. Rather than assume for bidders an equilibrium behavior as in standard econometric techniques, Haile and Tamer do not attempt to model how bidders' true values get mapped into a bid in any given auction. Rather, in the context of repeated 16When running the simulations, the results of the first and final ten days of auctions are not recorded to reduce edge effects that come from viewing a discrete time window of a continuous process. 17For the 100 bidders that won multiple times on eBay, we have each one bid a constant marginal value for each additional item in each auction until the number of options held equals the total number of LCDs won on eBay, with each option available for price matching independently. This bidding strategy is not a dominant strategy (falling outside the type space possible for buyers on which the proof of truthfulness has been built), but is believed to be the most appropriate first order action for simulation. Figure 2: CDF of maximum bids observed and upper bound estimate of the bidding population's distribution for maximum willingness to pay. The true population distribution lies below the estimated upper bound. single-item auctions with distinct bidder populations, Haile and Tamer make only the following two assumptions when estimating the distribution of true bidder values: 1. Bidders do not bid more than they are willing to pay. 2. Bidders do not allow an opponent to win at a price they are willing to beat. From the first of their two assumptions, given the bids placed by each bidder in each auction, Haile and Tamer derive a method for estimating an upper bound of the bidding population's true value distribution (i.e., the bound that lies above the true value distribution). From the second of their two assumptions, given the winning price of each auction, Haile and Tamer derive a method for estimating a lower bound of the bidding population's true value distribution. It is only the upper-bound of the distribution that we utilize in our work. Haile and Tamer assume that bidders only participate in a single auction, and require independence of the bidding population from auction to auction. Neither assumption is valid here: the former because bidders are known to bid in more than one auction, and the latter because the set of bidders in an auction is in all likelihood not a true i.i.d. sampling of the overall bidding population. In particular, those who win auctions are less likely to bid in successive auctions, while those who lose auctions are more likely to remain bidders in future auctions. 
In applying their methods we make the following adjustments: • Within a given auction, each individual bidder's true willingness to pay is assumed weakly greater than the maximum bid that bidder submits across all auctions for that item (either past or future). • When estimating the upper bound of the value distribution, if a bidder bids in more than one auction, randomly select one of the auctions in which the bidder bid, and only utilize that one observation during the estimation .18 Table 4: Average price paid, standard deviation of prices paid, average bidder value among winners, and average winning bidder surplus on eBay for Dell E193FP LCD screens as well as in the simulated options-based market using an adjusted Haile and Tamer estimate of bidders' true values being 15% higher than their maximum observed bid. Figure 2 provides the distribution of maximum bids placed by bidders on eBay as well as the estimated upper bound of the true value distribution of bidders based on the extended Haile and Tamer method .19 As can be seen, the smallest relative gap between the two curves meaningfully occurs near the 80th percentile, where the upper bound is 1.17 times the maximum bid. Therefore, adopted as a less conservative model of bidder values is a uniform scaling factor of 1.15. We now present results from this less conservative analysis. Table 4 shows the distribution of closing prices in auctions on eBay and in the simulated options scheme. The mean price in the options scheme is now significantly higher, 15% greater, than the prices on eBay ($276 in the options scheme vs. $240 on eBay), while the standard deviation of closing prices is lower among the options scheme auctions ($14 in the options scheme vs. $32 on eBay). Therefore, not only is the expected revenue stream higher, but the lower variance provides sellers a greater likelihood of realizing that higher revenue. The efficiency of the options scheme remains higher than on eBay. The winners in the options scheme now have an average estimated value 7.5% higher at $302. In an effort to better understand this efficiency, we formulated a mixed integer program (MIP) to determine a simple estimate of the allocative efficiency of eBay. The MIP computes the efficient value of the offline problem with full hindsight on all bids and all supply .20 Using a scaling of 1.15, the total value allocated to eBay winners is estimated at $551,242, while the optimal value (from the MIP) is $593,301. This suggests an allocative efficiency of 92.9%: while the typical value of a winner on eBay is $281, an average value of $303 was possible .21 Note the options-based tain auctions, then further adjustments would be required in order to utilize these techniques. 19The estimation of the points in the curve is a minimization over many variables, many of which can have smallnumbers bias. Consequently, Haile and Tamer suggest using a weighted average over all terms yi of ~ i yi exp (yiρ) j exp (yj ρ) to approximate the minimum while reducing the small number effects. We used p = − 1000 and removed observations of auctions with 17 bidders or more as they occurred very infrequently. However, some small numbers bias still demonstrated itself with the plateau in our upper bound estimate around a value of $300. 20Buyers who won more than one item on eBay are cloned so that they appear to be multiple bidders of identical type. 
21As long as one believes that every bidder's true value is a constant factor α away from their observed maximum bid, the 92.9% efficiency calculation holds for any value of α. In practice, this belief may not be reasonable. For example, if losing bidders tend to have true values close to their observed scheme comes very close to achieving this level of efficiency [at 99.7% efficient in this estimate] even though it operates without the benefit of hindsight. Finally, although the typical winning bidder surplus decreases between eBay and the options-based scheme, some surplus redistribution would be possible because the total market efficiency is improved .22 6. DISCUSSION The biggest concern with our scheme is that proxy agents who may be interested in many different items may acquire many more options than they finally exercise. This can lead to efficiency loss. Notice that this is not an issue when bidders are only interested in a single item (as in our empirical study), or have linear-additive values on items. To fix this, we would prefer to have proxy agents use more caution in acquiring options and use a more adaptive bidding strategy than that in Equation 1. For instance, if a proxy is already holding an option with an exercise price of $3 on some item for which it has value of $10, and it values some substitute item at $5, the proxy could reason that in no circumstance will it be useful to acquire an option on the second item. We formulate a more sophisticated bidding strategy along these lines. Let Θt be the set of all options a proxy for bidder i already possesses at time t. Let θt C _ Θt, be a subset of those options, the sum of whose exercise prices are π (θt), and the goods corresponding to those options being γ (θt). Let Π (θt) = ˆvi (γ (θt))--π (θt) be the (reported) available surplus associated with a set of options. Let θ * t be the set of options currently held that would maximize the buyer's surplus; i.e., θ * t = argmaxθtCΘt Π (θt). Let the maximal willingness to pay for an item k represent a price above which the agent knows it would never exercise an option on the item given the current options held. This can be computed as follows: (3) where ˆvi (L + k)--Π (θ * t) considers surplus already held, ˆvi (L + k)--ˆvi (L) considers the marginal value of a good, and taking the max [0,.] considers the overall use of pursuing the good. However, and somewhat counter intuitively, we are not able to implement this bidding scheme without forfeiting truthfulness. The Π (θ * t) term in Equation 3 (i.e., the amount of guaranteed surplus bidder i has already obtained) can be influenced by proxy j's bid. Therefore, bidder j may have the incentive to misrepresent her valuation to her proxy if she believes doing so will cause i to bid differently in the future in a manner beneficial to j. Consider the following example where the proxy scheme is refined to bid the maximum willingness to pay. EXAMPLE 3. Alice values either one ton of Sand or one ton of Stone for $2,000. Bob values either one ton of Sand or one ton of Stone for $1,500. All bidders have a patience maximum bids while eBay winners have true values much greater than their observed maximum bids then downward bias is introduced in the efficiency calculation at present. 22The increase in eBay winner surplus between Tables 3 and 4 is to be expected as the α scaling strictly increases the estimated value of the eBay winners while holding the prices at which they won constant. of 2 days. 
On day one, a Sand auction is held, where Alice's proxy bids $2,000 and Bob's bids $1,500. Alice's proxy wins an option to purchase Sand for $1,500. On day two, a Stone auction is held, where Alice's proxy bids $1,500 [as she has already obtained a guaranteed $500 of surplus from winning a Sand option, and so reduces her Stone bid by this amount], and Bob's bids $1,500. Either Alice's proxy or Bob's proxy will win the Stone option. At the end of the second day, Alice's proxy holds an option with an exercise price of $1,500 to obtain a good valued for $2,000, and so obtains $500 in surplus. Now, consider what would have happened had Alice declared that she valued only Stone. EXAMPLE 4. Alice declares valuing only Stone for $2,000. Bob values either one ton of Sand or one ton of Stone for $1,500. All bidders have a patience of 2 days. On day one, a Sand auction is held, where Bob's proxy bids $1,500. Bob's proxy wins an option to purchase Sand for $0. On day two, a Stone auction is held, where Alice's proxy bids $2,000, and Bob's bids $0 [as he has already obtained a guaranteed $1,500 of surplus from winning a Sand option, and so reduces his Stone bid by this amount]. Alice's proxy wins the Stone option for $0. At the end of the second day, Alice's proxy holds an option with an exercise price of $0 to obtain a good valued for $2,000, and so obtains $2,000 in surplus. By misrepresenting her valuation (i.e., excluding her value of Sand), Alice was able to secure higher surplus by guiding Bob's bid for Stone to $0. An area of immediate further work by the authors is to develop a more sophisticated proxy agent that can allow for bidding of maximum willingness to pay (Equation 3) while maintaining truthfulness. An additional, practical, concern with our proxy scheme is that we assume an available, trusted, and well understood method to characterize goods (and presumably the quality of goods). We envision this happening in practice by sellers defining a classification for their item upon entering the market, for instance via a UPC code. Just as in eBay, this would allow an opportunity for sellers to improve revenue by overstating the quality of their item ("new" vs. "like new"), and raises the issue of how well a reputation scheme could address this. 7. CONCLUSIONS We introduced a new sales channel, consisting of an optionsbased and proxied auction protocol, to address the sequential auction problem that exists when bidders face multiple auctions for substitutes and complements goods. Our scheme provides bidders with a simple, dominant and truthful bidding strategy even though the market remains open and dynamic. In addition to exploring more sophisticated proxies that bid in terms of maximum willingness to pay, future work should aim to better model seller incentives and resolve the strategic problems facing sellers. For instance, does the options scheme change seller incentives from what they currently are on eBay?
The Sequential Auction Problem on eBay: An Empirical Analysis and a Solution * ABSTRACT Bidders on eBay have no dominant bidding strategy when faced with multiple auctions each offering an item of interest. As seen through an analysis of 1,956 auctions on eBay for a Dell E193FP LCD monitor, some bidders win auctions at prices higher than those of other available auctions, while others never win an auction despite placing bids in losing efforts that are greater than the closing prices of other available auctions. These misqueues in strategic behavior hamper the efficiency of the system, and in so doing limit the revenue potential for sellers. This paper proposes a novel options-based extension to eBay's proxy-bidding system that resolves this strategic issue for buyers in commoditized markets. An empirical analysis of eBay provides a basis for computer simulations that investigate the market effects of the options-based scheme, and demonstrates that the options-based scheme provides greater efficiency than eBay, while also increasing seller revenue. 1. INTRODUCTION Electronic markets represent an application of information systems that has generated significant new trading opportu * A preliminary version of this work appeared in the AMEC workshop in 2004. nities while allowing for the dynamic pricing of goods. In addition to marketplaces such as eBay, electronic marketplaces are increasingly used for business-to-consumer auctions (e.g. to sell surplus inventory [19]). Many authors have written about a future in which commerce is mediated by online, automated trading agents [10, 25, 1]. There is still little evidence of automated trading in e-markets, though. We believe that one leading place of resistance is in the lack of provably optimal bidding strategies for any but the simplest of market designs. Without this, we do not expect individual consumers, or firms, to be confident in placing their business in the "hands" of an automated agent. One of the most common examples today of an electronic marketplace is eBay, where the gross merchandise volume (i.e., the sum of all successfully closed listings) during 2005 was $44B. Among items listed on eBay, many are essentially identical. This is especially true in the Consumer Electronics category [9], which accounted for roughly $3.5 B of eBay's gross merchandise volume in 2005. This presence of essentially identical items can expose bidders, and sellers, to risks because of the sequential auction problem. For example, Alice may want an LCD monitor, and could potentially bid in either a 1 o'clock or 3 o'clock eBay auction. While Alice would prefer to participate in whichever auction will have the lower winning price, she cannot determine beforehand which auction that may be, and could end up winning the "wrong" auction. This is a problem of multiple copies. Another problem bidders may face is the exposure problem. As investigated by Bykowsky et al. [6], exposure problems exist when buyers desire a bundle of goods but may only participate in single-item auctions .1 For example, if Alice values a video game console by itself for $200, a video game by itself for $30, and both a console and game for $250, Alice must determine how much of the $20 of synergy value she might include in her bid for the console alone. Both problems arise in eBay as a result of sequential auctions of single items coupled with patient bidders with substitutes or complementary valuations. Why might the sequential auction problem be bad? 
Complex games may lead to bidders employing costly strategies and making mistakes. Potential bidders who do not wish to bear such costs may choose not to participate in the 1The exposure problem has been primarily investigated by Bykowsky et al. in the context of simultaneous single-item auctions. The problem is also a familiar one of online decision making. market, inhibiting seller revenue opportunities. Additionally, among those bidders who do choose to participate, the mistakes made may lead to inefficient allocations, further limiting revenue opportunities. We are interested in creating modifications to eBay-style markets that simplify the bidder problem, leading to simple equilibrium strategies, and preferably better efficiency and revenue properties. 1.1 Options + Proxies: A Proposed Solution Retail stores have developed policies to assist their customers in addressing sequential purchasing problems. Return policies alleviate the exposure problem by allowing customers to return goods at the purchase price. Price matching alleviates the multiple copies problem by allowing buyers to receive from sellers after purchase the difference between the price paid for a good and a lower price found elsewhere for the same good [7, 15, 18]. Furthermore, price matching can reduce the impact of exactly when a seller brings an item to market, as the price will in part be set by others selling the same item. These two retail policies provide the basis for the scheme proposed in this paper .2 We extend the proxy bidding technology currently employed by eBay. Our "super" - proxy extension will take advantage of a new, real options-based, market infrastructure that enables simple, yet optimal, bidding strategies. The extensions are computationally simple, handle temporal issues, and retain seller autonomy in deciding when to enter the market and conduct individual auctions. A seller sells an option for a good, which will ultimately lead to either a sale of the good or the return of the option. Buyers interact through a proxy agent, defining a value on all possible bundles of goods in which they have interest together with the latest time period in which they are willing to wait to receive the good (s). The proxy agents use this information to determine how much to bid for options, and follow a dominant bidding strategy across all relevant auctions. A proxy agent exercises options held when the buyer's patience has expired, choosing options that maximize a buyer's payoff given the reported valuation. All other options are returned to the market and not exercised. The options-based protocol makes truthful and immediate revelation to a proxy a dominant strategy for buyers, whatever the future auction dynamics. We conduct an empirical analysis of eBay, collecting data on over four months of bids for Dell LCD screens (model E193FP) starting in the Summer of 2005. LCD screens are a high-ticket item, for which we demonstrate evidence of the sequential bidding problem. We first infer a conservative model for the arrival time, departure time and value of bidders on eBay for LCD screens during this period. This model is used to simulate the performance of the optionsbased infrastructure, in order to make direct comparisons to the actual performance of eBay in this market. We also extend the work of Haile and Tamer [11] to estimate an upper bound on the distribution of value of eBay bidders, taking into account the sequential auction problem when making the adjustments. 
Using this estimate, one can approximate how much greater a bidder's true value is than the maximum bid they were observed to have placed on eBay. (Prior work has shown price matching as a potential mechanism for colluding firms to set monopoly prices. However, in our context, auction prices will be matched, which are not explicitly set by sellers but rather by buyers' bids.) Based on this approximation, revenue generated in a simulation of the options-based scheme exceeds revenue on eBay for the comparable population and sequence of auctions by 14.8%, while the options-based scheme demonstrates itself as being 7.5% more efficient.
1.2 Related Work
A number of authors [27, 13, 28, 29] have analyzed the multiple copies problem, oftentimes in the context of categorizing or modeling sniping behavior for reasons other than those first brought forward by Ockenfels and Roth [20]. These papers perform equilibrium analysis in simpler settings, assuming bidders can participate in at most two auctions. Peters & Severinov [21] extend these models to allow buyers to consider an arbitrary number of auctions, and characterize a perfect Bayesian equilibrium. However, their model does not allow auctions to close at distinct times and does not consider the arrival and departure of bidders. Previous work has developed a data-driven approach toward a taxonomy of strategies employed by bidders in practice when facing multi-unit auctions, but has not considered the sequential bidding problem [26, 2]. Previous work has also sought to provide agents with smarter bidding strategies [4, 3, 5, 1]. Unfortunately, it seems hard to design artificial agents with equilibrium bidding strategies, even for a simple simultaneous ascending price auction. Iwasaki et al. [14] have considered the role of options in the context of a single, monolithic, auction design to help bidders with marginal-increasing values avoid exposure in a multi-unit, homogeneous item auction problem. In other contexts, options have been discussed for selling coal mine leases [23], or as leveled commitment contracts for use in a decentralized market place [24]. Most similar to our work, Gopal et al. [9] use options for reducing the risks of buyers and sellers in the sequential auction problem. However, their work uses costly options and does not remove the sequential bidding problem completely. Work on online mechanisms and online auctions [17, 12, 22] considers agents that can dynamically arrive and depart across time. We leverage a recent price-based characterization by Hajiaghayi et al. [12] to provide a dominant strategy equilibrium for buyers within our options-based protocol. The special case for single-unit buyers is equivalent to the protocol of Hajiaghayi et al., albeit with an options-based interpretation. Jiang and Leyton-Brown [16] use machine learning techniques for bid identification in online auctions.
2. EBAY AND THE DELL E193FP
3. MODELING THE SEQUENTIAL AUCTION PROBLEM
4. "SUPER" PROXIES AND OPTIONS
4.1 Buyer Proxies
4.1.1 Acquiring Options
4.1.2 Pricing Options
4.1.3 Exercising Options
4.1.4 Why bookkeep and not match winning price?
4.2 Complexity of Algorithm
4.3 Truthful Bidding to the Proxy Agent
5. EVALUATING THE OPTIONS / PROXY INFRASTRUCTURE
5.1 Bid Identification
6. DISCUSSION
7. CONCLUSIONS
We introduced a new sales channel, consisting of an options-based and proxied auction protocol, to address the sequential auction problem that exists when bidders face multiple auctions for substitute and complement goods.
Our scheme provides bidders with a simple, dominant and truthful bidding strategy even though the market remains open and dynamic. In addition to exploring more sophisticated proxies that bid in terms of maximum willingness to pay, future work should aim to better model seller incentives and resolve the strategic problems facing sellers. For instance, does the options scheme change seller incentives from what they currently are on eBay?
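To make the exercise step of the proxied options scheme (Section 1.1) concrete, the following sketch shows one way a proxy could choose which held options to exercise once its buyer's patience expires: it searches bundles of held options for the one that maximizes the reported value minus the total strike prices, and returns the rest unexercised. This is only an illustration under our own assumptions; the function and variable names are ours, the brute-force search is exponential in the number of options held, and the actual pricing and bookkeeping rules of the proposed protocol are those of Section 4, not encoded here.

    from itertools import combinations

    def exercise_options(valuation, held_options):
        # valuation: dict mapping a frozenset of goods to the buyer's reported value
        #            for that bundle (bundles not listed are treated as worth 0)
        # held_options: dict mapping a good to the strike price of the option held on it
        # Returns (bundle_to_exercise, payoff); options on goods outside the chosen
        # bundle are returned to the market unexercised.
        goods = list(held_options)
        best_bundle, best_payoff = frozenset(), 0.0  # exercising nothing is always allowed
        for r in range(1, len(goods) + 1):
            for combo in combinations(goods, r):
                bundle = frozenset(combo)
                payoff = valuation.get(bundle, 0.0) - sum(held_options[g] for g in combo)
                if payoff > best_payoff:
                    best_bundle, best_payoff = bundle, payoff
        return best_bundle, best_payoff

    # A buyer values a console at 200, a game at 30, and the pair at 250, and holds
    # options with strike prices 180 (console) and 40 (game): exercising both yields
    # a payoff of 30, which beats exercising the console option alone (20).
    valuation = {frozenset(["console"]): 200.0,
                 frozenset(["game"]): 30.0,
                 frozenset(["console", "game"]): 250.0}
    print(exercise_options(valuation, {"console": 180.0, "game": 40.0}))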
The Sequential Auction Problem on eBay: An Empirical Analysis and a Solution * ABSTRACT Bidders on eBay have no dominant bidding strategy when faced with multiple auctions each offering an item of interest. As seen through an analysis of 1,956 auctions on eBay for a Dell E193FP LCD monitor, some bidders win auctions at prices higher than those of other available auctions, while others never win an auction despite placing bids in losing efforts that are greater than the closing prices of other available auctions. These misqueues in strategic behavior hamper the efficiency of the system, and in so doing limit the revenue potential for sellers. This paper proposes a novel options-based extension to eBay's proxy-bidding system that resolves this strategic issue for buyers in commoditized markets. An empirical analysis of eBay provides a basis for computer simulations that investigate the market effects of the options-based scheme, and demonstrates that the options-based scheme provides greater efficiency than eBay, while also increasing seller revenue. 1. INTRODUCTION nities while allowing for the dynamic pricing of goods. In addition to marketplaces such as eBay, electronic marketplaces are increasingly used for business-to-consumer auctions (e.g. to sell surplus inventory [19]). We believe that one leading place of resistance is in the lack of provably optimal bidding strategies for any but the simplest of market designs. Among items listed on eBay, many are essentially identical. This presence of essentially identical items can expose bidders, and sellers, to risks because of the sequential auction problem. For example, Alice may want an LCD monitor, and could potentially bid in either a 1 o'clock or 3 o'clock eBay auction. This is a problem of multiple copies. Another problem bidders may face is the exposure problem. Both problems arise in eBay as a result of sequential auctions of single items coupled with patient bidders with substitutes or complementary valuations. Why might the sequential auction problem be bad? Complex games may lead to bidders employing costly strategies and making mistakes. Potential bidders who do not wish to bear such costs may choose not to participate in the 1The exposure problem has been primarily investigated by Bykowsky et al. in the context of simultaneous single-item auctions. The problem is also a familiar one of online decision making. market, inhibiting seller revenue opportunities. Additionally, among those bidders who do choose to participate, the mistakes made may lead to inefficient allocations, further limiting revenue opportunities. We are interested in creating modifications to eBay-style markets that simplify the bidder problem, leading to simple equilibrium strategies, and preferably better efficiency and revenue properties. 1.1 Options + Proxies: A Proposed Solution Retail stores have developed policies to assist their customers in addressing sequential purchasing problems. Return policies alleviate the exposure problem by allowing customers to return goods at the purchase price. Price matching alleviates the multiple copies problem by allowing buyers to receive from sellers after purchase the difference between the price paid for a good and a lower price found elsewhere for the same good [7, 15, 18]. Furthermore, price matching can reduce the impact of exactly when a seller brings an item to market, as the price will in part be set by others selling the same item. 
These two retail policies provide the basis for the scheme proposed in this paper .2 We extend the proxy bidding technology currently employed by eBay. Our "super" - proxy extension will take advantage of a new, real options-based, market infrastructure that enables simple, yet optimal, bidding strategies. The extensions are computationally simple, handle temporal issues, and retain seller autonomy in deciding when to enter the market and conduct individual auctions. A seller sells an option for a good, which will ultimately lead to either a sale of the good or the return of the option. The proxy agents use this information to determine how much to bid for options, and follow a dominant bidding strategy across all relevant auctions. A proxy agent exercises options held when the buyer's patience has expired, choosing options that maximize a buyer's payoff given the reported valuation. All other options are returned to the market and not exercised. The options-based protocol makes truthful and immediate revelation to a proxy a dominant strategy for buyers, whatever the future auction dynamics. We conduct an empirical analysis of eBay, collecting data on over four months of bids for Dell LCD screens (model E193FP) starting in the Summer of 2005. LCD screens are a high-ticket item, for which we demonstrate evidence of the sequential bidding problem. We first infer a conservative model for the arrival time, departure time and value of bidders on eBay for LCD screens during this period. This model is used to simulate the performance of the optionsbased infrastructure, in order to make direct comparisons to the actual performance of eBay in this market. We also extend the work of Haile and Tamer [11] to estimate an upper bound on the distribution of value of eBay bidders, taking into account the sequential auction problem when making the adjustments. Using this estimate, one can approximate how much greater a bidder's true value is 2Prior work has shown price matching as a potential mechanism for colluding firms to set monopoly prices. However, in our context, auction prices will be matched, which are not explicitly set by sellers but rather by buyers' bids. from the maximum bid they were observed to have placed on eBay. Based on this approximation, revenue generated in a simulation of the options-based scheme exceeds revenue on eBay for the comparable population and sequence of auctions by 14.8%, while the options-based scheme demonstrates itself as being 7.5% more efficient. 1.2 Related Work These papers perform equilibrium analysis in simpler settings, assuming bidders can participate in at most two auctions. Peters & Severinov [21] extend these models to allow buyers to consider an arbitrary number of auctions, and characterize a perfect Bayesian equilibrium. However, their model does not allow auctions to close at distinct times and does not consider the arrival and departure of bidders. Previous work have developed a data-driven approach toward developing a taxonomy of strategies employed by bidders in practice when facing multi-unit auctions, but have not considered the sequential bidding problem [26, 2]. Previous work has also sought to provide agents with smarter bidding strategies [4, 3, 5, 1]. Unfortunately, it seems hard to design artificial agents with equilibrium bidding strategies, even for a simple simultaneous ascending price auction. Iwasaki et al. 
[14] have considered the role of options in the context of a single, monolithic, auction design to help bidders with marginal-increasing values avoid exposure in a multi-unit, homogeneous item auction problem. In other contexts, options have been discussed for selling coal mine leases [23], or as leveled commitment contracts for use in a decentralized market place [24]. Most similar to our work, Gopal et al. [9] use options for reducing the risks of buyers and sellers in the sequential auction problem. However, their work uses costly options and does not remove the sequential bidding problem completely. Work on online mechanisms and online auctions [17, 12, 22] considers agents that can dynamically arrive and depart across time. We leverage a recent price-based characterization by Hajiaghayi et al. [12] to provide a dominant strategy equilibrium for buyers within our options-based protocol. The special case for single-unit buyers is equivalent to the protocol of Hajiaghayi et al., albeit with an options-based interpretation. Jiang and Leyton-Brown [16] use machine learning techniques for bid identification in online auctions. 7. CONCLUSIONS We introduced a new sales channel, consisting of an optionsbased and proxied auction protocol, to address the sequential auction problem that exists when bidders face multiple auctions for substitutes and complements goods. Our scheme provides bidders with a simple, dominant and truthful bidding strategy even though the market remains open and dynamic. In addition to exploring more sophisticated proxies that bid in terms of maximum willingness to pay, future work should aim to better model seller incentives and resolve the strategic problems facing sellers. For instance, does the options scheme change seller incentives from what they currently are on eBay?
I-54
Approximate and Online Multi-Issue Negotiation
This paper analyzes bilateral multi-issue negotiation between self-interested autonomous agents. The agents have time constraints in the form of both deadlines and discount factors. There are m > 1 issues for negotiation where each issue is viewed as a pie of size one. The issues are indivisible (i.e., individual issues cannot be split between the parties; each issue must be allocated in its entirety to either agent). Here different agents value different issues differently. Thus, the problem is for the agents to decide how to allocate the issues between themselves so as to maximize their individual utilities. For such negotiations, we first obtain the equilibrium strategies for the case where the issues for negotiation are known a priori to the parties. Then, we analyse their time complexity and show that finding the equilibrium offers is an NP-hard problem, even in a complete information setting. In order to overcome this computational complexity, we then present negotiation strategies that are approximately optimal but computationally efficient, and show that they form an equilibrium. We also analyze the relative error (i.e., the difference between the true optimum and the approximate). The time complexity of the approximate equilibrium strategies is O(nm/ε^2) where n is the negotiation deadline and ε the relative error. Finally, we extend the analysis to online negotiation where different issues become available at different time points and the agents are uncertain about their valuations for these issues. Specifically, we show that an approximate equilibrium exists for online negotiation and show that the expected difference between the optimum and the approximate is O(√m). These approximate strategies also have polynomial time complexity.
[ "approxim", "negoti", "time constraint", "equilibrium", "strategi", "rel error", "interact kei form", "multiag system", "disput agent", "gain from cooper", "protocol", "indivis issu", "game-theori", "onlin comput" ]
[ "P", "P", "P", "P", "P", "P", "M", "U", "M", "U", "U", "R", "U", "R" ]
Approximate and Online Multi-Issue Negotiation Shaheen S. Fatima Department of Computer Science University of Liverpool Liverpool L69 3BX, UK. shaheen@csc.liv.ac.uk Michael Wooldridge Department of Computer Science University of Liverpool Liverpool L69 3BX, UK. mjw@csc.liv.ac.uk Nicholas R. Jennings School of Electronics and Computer Science University of Southampton Southampton SO17 1BJ, UK. nrj@ecs.soton.ac.uk ABSTRACT This paper analyzes bilateral multi-issue negotiation between self-interested autonomous agents. The agents have time constraints in the form of both deadlines and discount factors. There are m > 1 issues for negotiation where each issue is viewed as a pie of size one. The issues are indivisible (i.e., individual issues cannot be split between the parties; each issue must be allocated in its entirety to either agent). Here different agents value different issues differently. Thus, the problem is for the agents to decide how to allocate the issues between themselves so as to maximize their individual utilities. For such negotiations, we first obtain the equilibrium strategies for the case where the issues for negotiation are known a priori to the parties. Then, we analyse their time complexity and show that finding the equilibrium offers is an NP-hard problem, even in a complete information setting. In order to overcome this computational complexity, we then present negotiation strategies that are approximately optimal but computationally efficient, and show that they form an equilibrium. We also analyze the relative error (i.e., the difference between the true optimum and the approximate). The time complexity of the approximate equilibrium strategies is O(nm/ε^2) where n is the negotiation deadline and ε the relative error. Finally, we extend the analysis to online negotiation where different issues become available at different time points and the agents are uncertain about their valuations for these issues. Specifically, we show that an approximate equilibrium exists for online negotiation and show that the expected difference between the optimum and the approximate is O(√m). These approximate strategies also have polynomial time complexity. Categories and Subject Descriptors I.2.11 [Distributed Artificial Intelligence]: Multiagent Systems General Terms Algorithms, Design, Theory 1. INTRODUCTION Negotiation is a key form of interaction in multiagent systems. It is a process in which disputing agents decide how to divide the gains from cooperation. Since this decision is made jointly by the agents themselves [20, 19, 13, 15], each party can only obtain what the other is prepared to allow them. Now, the simplest form of negotiation involves two agents and a single issue. For example, consider a scenario in which a buyer and a seller negotiate on the price of a good. To begin, the two agents are likely to differ on the price at which they believe the trade should take place, but through a process of joint decision-making they either arrive at a price that is mutually acceptable or they fail to reach an agreement. Since agents are likely to begin with different prices, one or both of them must move toward the other, through a series of offers and counter offers, in order to obtain a mutually acceptable outcome. However, before the agents can actually perform such negotiations, they must decide the rules for making offers and counter offers. That is, they must set the negotiation protocol [20].
On the basis of this protocol, each agent chooses its strategy (i.e., what offers it should make during the course of negotiation). Given this context, this work focuses on competitive scenarios with self-interested agents. For such cases, each participant defines its strategy so as to maximise its individual utility. However, in most bilateral negotiations, the parties involved need to settle more than one issue. For this case, the issues may be divisible or indivisible [4]. For the former, the problem for the agents is to decide how to split each issue between themselves [21]. For the latter, the individual issues cannot be divided. An issue, in its entirety, must be allocated to either of the two agents. Since the agents value different issues differently, they must come to terms about who will take which issue. To date, most of the existing work on multi-issue negotiation has focussed on the former case [7, 2, 5, 23, 11, 6]. However, in many real-world settings, the issues are indivisible. Hence, our focus here is on negotiation for indivisible issues. Such negotiations are very common in multiagent systems. For example, consider the case of task allocation between two agents. There is a set of tasks to be carried out and different agents have different preferences for the tasks. The tasks cannot be partitioned; a task must be carried out by one agent. The problem then is for the agents to negotiate about who will carry out which task. A key problem in the study of multi-issue negotiation is to determine the equilibrium strategies. An equally important problem, especially in the context of software agents, is to find the time complexity of computing the equilibrium offers. However, such computational issues have so far received little attention. As we will show, this is mainly due to the fact that existing work (described in Section 5) has mostly focused on negotiation for divisible issues and finding the equilibrium for this case is computationally easier than that for the case of indivisible issues. Our primary objective is, therefore, to answer the computational questions for the latter case for the types of situations that are commonly faced by agents in real-world contexts. Thus, we consider negotiations in which there is incomplete information and time constraints. Incompleteness of information on the part of negotiators is a common feature of most practical negotiations. Also, agents typically have time constraints in the form of both deadlines and discount factors. Deadlines are an essential element since negotiation cannot go on indefinitely, rather it must end within a reasonable time limit. Likewise, discount factors are essential since the goods may be perishable or their value may decline due to inflation. Moreover, the strategic behaviour of agents with deadlines and discount factors differs from those without (see [21] for single issue bargaining without deadlines and [23, 13] for bargaining with deadlines and discount factors in the context of divisible issues). Given this, we consider indivisible issues and first analyze the strategic behaviour of agents to obtain the equilibrium strategies for the case where all the issues for negotiation are known a priori to both agents. For this case, we show that the problem of finding the equilibrium offers is NP-hard, even in a complete information setting.
Then, in order to overcome the problem of time complexity, we present strategies that are approximately optimal but computationally efficient, and show that they form an equilibrium. We also analyze the relative error (i.e., the difference between the true optimum and the approximate). The time complexity of the approximate equilibrium strategies is O(nm/ε^2) where n is the negotiation deadline and ε the relative error. Finally, we extend the analysis to online negotiation where different issues become available at different time points and the agents are uncertain about their valuations for these issues. Specifically, we show that an approximate equilibrium exists for online negotiation and show that the expected difference between the optimum and the approximate is O(√m). These approximate strategies also have polynomial time complexity. In so doing, our contribution lies in analyzing the computational complexity of the above multi-issue negotiation problem, and finding the approximate and online equilibria. No previous work has determined these equilibria. Since software agents have limited computational resources, our results are especially relevant to such resource bounded agents. The remainder of the paper is organised as follows. We begin by giving a brief overview of single-issue negotiation in Section 2. In Section 3, we obtain the equilibrium for multi-issue negotiation and show that finding equilibrium offers is an NP-hard problem. We then present an approximate equilibrium and evaluate its approximation error. Section 4 analyzes online multi-issue negotiation. Section 5 discusses the related literature and Section 6 concludes.
2. SINGLE-ISSUE NEGOTIATION
We adopt the single issue model of [27] because this is a model where, during negotiation, the parties are allowed to make offers from a set of discrete offers. Since our focus is on indivisible issues (i.e., parties are allowed to make one of two possible offers: zero or one), our scenario fits in well with [27]. Hence we use this basic single issue model and extend it to multiple issues. Before doing so, we give an overview of this model and its equilibrium strategies. There are two strategic agents: a and b. Each agent has time constraints in the form of deadlines and discount factors. The two agents negotiate over a single indivisible issue (i). This issue is a 'pie' of size 1 and the agents want to determine who gets the pie. There is a deadline (i.e., a number of rounds by which negotiation must end). Let n ∈ N+ denote this deadline. The agents use an alternating offers protocol (as the one of Rubinstein [18]), which proceeds through a series of time periods. One of the agents, say a, starts negotiation in the first time period (i.e., t = 1) by making an offer (x_i = 0 or 1) to b. Agent b can either accept or reject the offer. If it accepts, negotiation ends in an agreement with a getting x_i and b getting y_i = 1 − x_i. Otherwise, negotiation proceeds to the next time period, in which agent b makes a counter-offer. This process of making offers continues until one of the agents either accepts an offer or quits negotiation (resulting in a conflict). Thus, there are three possible actions an agent can take during any time period: accept the last offer, make a new counter-offer, or quit the negotiation. An essential feature of negotiations involving alternating offers is that the agents' utilities decrease with time [21]. Specifically, the decrease occurs at each step of offer and counteroffer.
This decrease is represented with a discount factor denoted 0 < δ_i ≤ 1 for both agents. (Having a different discount factor for different agents only makes the presentation more involved without leading to any changes in the analysis of the strategic behaviour of the agents or the time complexity of finding the equilibrium offers. Hence we have a single discount factor for both agents.) Let [x_i^t, y_i^t] denote the offer made at time period t where x_i^t and y_i^t denote the share for agent a and b respectively. Then, for a given pie, the set of possible offers is:
{[x_i^t, y_i^t] : x_i^t = 0 or 1, y_i^t = 0 or 1, and x_i^t + y_i^t = 1}
At time t, if a and b receive a share of x_i^t and y_i^t respectively, then their utilities are:
u_i^a(x_i^t, t) = x_i^t × δ^{t−1} if t ≤ n, and 0 otherwise
u_i^b(y_i^t, t) = y_i^t × δ^{t−1} if t ≤ n, and 0 otherwise
The conflict utility (i.e., the utility received in the event that no deal is struck) is zero for both agents. For the above setting, the agents reason as follows in order to determine what to offer at t = 1. We let A(1) (B(1)) denote a's (b's) equilibrium offer for the first time period. Let agent a denote the first mover (i.e., at t = 1, a proposes to b who should get the pie). To begin, consider the case where the deadline for both agents is n = 1. If b accepts, the division occurs as agreed; if not, neither agent gets anything (since n = 1 is the deadline). Here, a is in a powerful position and is able to propose to keep 100 percent of the pie and give nothing to b. (It is possible that b may reject such a proposal. However, irrespective of whether b accepts or rejects the proposal, it gets zero utility, because the deadline is n = 1. Thus, we assume that b accepts a's offer.) Since the deadline is n = 1, b accepts this offer and agreement takes place in the first time period. Now, consider the case where the deadline is n = 2. In order to decide what to offer in the first round, a looks ahead to t = 2 and reasons backwards. Agent a reasons that if negotiation proceeds to the second round, b will take 100 percent of the pie by offering [0, 1] and leave nothing for a. Thus, in the first time period, if a offers b anything less than the whole pie, b will reject the offer. Hence, during the first time period, agent a offers [0, 1]. Agent b accepts this and an agreement occurs in the first time period. In general, if the deadline is n, negotiation proceeds as follows. As before, agent a decides what to offer in the first round by looking ahead as far as t = n and then reasoning backwards. Agent a's offer for t = 1 depends on who the offering agent is for the last time period. This, in turn, depends on whether n is odd or even. Since a makes an offer at t = 1 and the agents use the alternating offers protocol, the offering agent for the last time period is b if n is even and it is a if n is odd. Thus, depending on whether n is odd or even, a makes the following offer at t = 1:
A(1) = { OFFER [1, 0] IF n is ODD; ACCEPT IF b's TURN }
B(1) = { OFFER [0, 1] IF n is EVEN; ACCEPT IF a's TURN }
Agent b accepts this offer and negotiation ends in the first time period. Note that the equilibrium outcome depends on who makes the first move. Since we have two agents and either of them could move first, we get two possible equilibrium outcomes.
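As a sanity check on this backward-induction argument, here is a minimal sketch (Python; the function names are ours, not the paper's) that computes the single-issue equilibrium outcome: the whole pie goes to whichever agent would be the proposer at the deadline, and agreement is reached at t = 1.

    def single_issue_equilibrium(n, first_mover="a"):
        # Returns (share_a, share_b, agreement_time) for one indivisible pie.
        # The proposer at t = n can keep the whole pie, so the proposer at t = 1
        # concedes the pie to whoever moves last; the offer is accepted at once.
        last_mover = first_mover if n % 2 == 1 else ("b" if first_mover == "a" else "a")
        share_a = 1 if last_mover == "a" else 0
        return share_a, 1 - share_a, 1

    def utility(share, t, delta, n):
        # u(share, t) = share * delta**(t-1) if t <= n, else 0 (conflict utility is 0).
        return share * delta ** (t - 1) if t <= n else 0.0

    # With a as first mover: a keeps the pie when n is odd, b gets it when n is even.
    print(single_issue_equilibrium(1))   # (1, 0, 1)
    print(single_issue_equilibrium(2))   # (0, 1, 1)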
On the basis of the above equilibrium for single-issue negotiation with complete information, we first obtain the equilibrium for multiple issues and then show that computing these offers is a hard problem. We then present a time efficient approximate equilibrium.
3. MULTI-ISSUE NEGOTIATION
We first analyse the complete information setting. This section forms the base which we extend to the case of information uncertainty in Section 4. Here a and b negotiate over m > 1 indivisible issues. These issues are m distinct pies and the agents want to determine how to distribute the pies between themselves. Let S = {1, 2, ... , m} denote the set of m pies. As before, each pie is of size 1. Let the discount factor for issue c, where 1 ≤ c ≤ m, be 0 < δ_c ≤ 1. For each issue, let n denote each agent's deadline. In the offer for time period t (where 1 ≤ t ≤ n), agent a's (b's) share for each of the m issues is now represented as an m element vector x^t ∈ B^m (y^t ∈ B^m) where B denotes the set {0, 1}. Thus, if agent a's share for issue c at time t is x_c^t, then agent b's share is y_c^t = (1 − x_c^t). The shares for a and b are together represented as the package [x^t, y^t]. As is traditional in multi-issue utility theory, we define an agent's cumulative utility using the standard additive form [12]. The functions U^a : B^m × B^m × N+ → R and U^b : B^m × B^m × N+ → R give the cumulative utilities for a and b respectively at time t. These are defined as follows:
U^a([x^t, y^t], t) = Σ_{c=1}^m k_c^a u_c^a(x_c^t, t) if t ≤ n, and 0 otherwise    (1)
U^b([x^t, y^t], t) = Σ_{c=1}^m k_c^b u_c^b(y_c^t, t) if t ≤ n, and 0 otherwise    (2)
where k^a ∈ N_+^m denotes an m element vector of constants for agent a and k^b ∈ N_+^m that for b. Here N+ denotes the set of positive integers. These vectors indicate how the agents value different issues. For example, if k_c^a > k_{c+1}^a, then agent a values issue c more than issue c + 1. Likewise for agent b. In other words, the m issues are perfect substitutes (i.e., all that matters to an agent is its total utility for all the m issues and not that for any subset of them). In all the settings we study, the issues will be perfect substitutes. To begin, each agent has complete information about all negotiation parameters (i.e., n, m, k_c^a, k_c^b, and δ_c for 1 ≤ c ≤ m). Now, multi-issue negotiation can be done using different procedures. Broadly speaking, there are three key procedures for negotiating multiple issues [19]: 1. the package deal procedure where all the issues are settled together as a bundle, 2. the sequential procedure where the issues are discussed one after another, and 3. the simultaneous procedure where the issues are discussed in parallel. Between these three procedures, the package deal is known to generate Pareto optimal outcomes [19, 6]. Hence we adopt it here. We first give a brief description of the procedure and then determine the equilibrium strategies for it.
3.1 The package deal procedure
In this procedure, the agents use the same protocol as for single-issue negotiation (described in Section 2). However, an offer for the package deal includes a proposal for each issue under negotiation. Thus, for m issues, an offer includes m divisions, one for each issue. Agents are allowed to either accept a complete offer (i.e., all m issues) or reject a complete offer. An agreement can therefore take place either on all m issues or on none of them. As per the single-issue negotiation, an agent decides what to offer by looking ahead and reasoning backwards.
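Equations 1 and 2 are just discounted weighted sums, as the small sketch below illustrates (the helper name is ours; the numbers are those of Example 1 in Section 3.2).

    def cumulative_utility(k, delta, shares, t, n):
        # Additive cumulative utility of Equations 1-2 for one agent:
        # sum_c k_c * shares_c * delta_c**(t-1) if t <= n, else 0.
        if t > n:
            return 0.0
        return sum(kc * sc * dc ** (t - 1) for kc, sc, dc in zip(k, shares, delta))

    # m = 2, n = 2, delta = 1/2 for both issues, k^a = (3, 1), k^b = (1, 5):
    # the package "a gets issue 1, b gets issue 2" at t = 1 gives a a utility
    # of 3 and b a utility of 5.
    print(cumulative_utility([3, 1], [0.5, 0.5], [1, 0], 1, 2))  # 3.0
    print(cumulative_utility([1, 5], [0.5, 0.5], [0, 1], 1, 2))  # 5.0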
However, since an offer for the package deal includes a share for all the m issues, the agents can now make tradeoffs across the issues in order to maximise their cumulative utilities. For 1 ≤ c ≤ m, the equilibrium offer for issue c at time t is denoted as [a_c^t, b_c^t] where a_c^t and b_c^t denote the shares for agent a and b respectively. We denote the equilibrium package at time t as [a^t, b^t] where a^t ∈ B^m (b^t ∈ B^m) is an m element vector that denotes a's (b's) share for each of the m issues. Also, for 1 ≤ c ≤ m, δ_c is the discount factor for issue c. The symbols 0 and 1 denote m element vectors of zeroes and ones respectively. Note that for 1 ≤ t ≤ n, a_c^t + b_c^t = 1 (i.e., the sum of the agents' shares (at time t) for each pie is one). Finally, for time period t (for 1 ≤ t ≤ n) we let A(t) (respectively B(t)) denote the equilibrium strategy for agent a (respectively b).
3.2 Equilibrium strategies
As mentioned in Section 1, the package deal allows agents to make tradeoffs. We let TRADEOFFA (TRADEOFFB) denote agent a's (b's) function for making tradeoffs. We let P denote a set of parameters to the procedure TRADEOFFA (TRADEOFFB) where P = {k^a, k^b, δ, m}. Given this, the following theorem characterises the equilibrium for the package deal procedure.
THEOREM 1. For the package deal procedure, the following strategies form a Nash equilibrium. The equilibrium strategy for t = n is:
A(n) = { OFFER [1, 0] IF a's TURN; ACCEPT IF b's TURN }
B(n) = { OFFER [0, 1] IF b's TURN; ACCEPT IF a's TURN }
For all preceding time periods t < n, if [x^t, y^t] denotes the offer made at time t, then the equilibrium strategies are defined as follows:
A(t) = { OFFER TRADEOFFA(P, UB(t), t) IF a's TURN; If (U^a([x^t, y^t], t) ≥ UA(t)) ACCEPT else REJECT IF b's TURN }
B(t) = { OFFER TRADEOFFB(P, UA(t), t) IF b's TURN; If (U^b([x^t, y^t], t) ≥ UB(t)) ACCEPT else REJECT IF a's TURN }
where UA(t) = U^a([a^{t+1}, b^{t+1}], t + 1) and UB(t) = U^b([a^{t+1}, b^{t+1}], t + 1).
PROOF. We look ahead to the last time period (i.e., t = n) and then reason backwards. To begin, if negotiation reaches the deadline (n), then the agent whose turn it is takes everything and leaves nothing for its opponent. Hence, we get the strategies A(n) and B(n) as given in the statement of the theorem. In all the preceding time periods (t < n), the offering agent proposes a package that gives its opponent a cumulative utility equal to what the opponent would get from its own equilibrium offer for the next time period. During time period t, either a or b could be the offering agent. Consider the case where a makes an offer at t. The package that a offers at t gives b a cumulative utility of U^b([a^{t+1}, b^{t+1}], t + 1). However, since there is more than one issue, there is more than one package that gives b this cumulative utility. From among these packages, a offers the one that maximises its own cumulative utility (because it is a utility maximiser). Thus, the problem for a is to find the package [a^t, b^t] so as to:
maximize Σ_{c=1}^m k_c^a (1 − b_c^t) δ_c^{t−1}    (3)
such that Σ_{c=1}^m b_c^t k_c^b ≥ UB(t)
          b_c^t = 0 or 1 for 1 ≤ c ≤ m
where UB(t), δ_c^{t−1}, k_c^a, and k_c^b are constants and b_c^t (1 ≤ c ≤ m) is a variable. Assume that the function TRADEOFFA takes parameters P, UB(t), and t, to solve the maximisation problem given in Equation 3 and returns the corresponding package.
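A brute-force rendering of TRADEOFFA, solving Equation 3 literally by enumerating all 2^m packages, may help fix ideas; it is exponential in m, which is exactly why Section 3.3 turns to an approximation. The code and its names are our illustration, not the authors' implementation.

    from itertools import product

    def tradeoff_a(k_a, k_b, delta, t, ub_target):
        # Enumerate all allocations b^t in {0,1}^m, keep those satisfying the
        # constraint sum_c b^t_c * k^b_c >= ub_target, and among them return the
        # one maximizing a's utility sum_c k^a_c * (1 - b^t_c) * delta_c**(t-1).
        m = len(k_a)
        best = None
        for b_shares in product((0, 1), repeat=m):
            if sum(k_b[c] * b_shares[c] for c in range(m)) < ub_target:
                continue  # b would reject: it can do at least as well by waiting
            util_a = sum(k_a[c] * (1 - b_shares[c]) * delta[c] ** (t - 1) for c in range(m))
            if best is None or util_a > best[2]:
                best = (tuple(1 - b for b in b_shares), b_shares, util_a)
        return best  # (a's shares, b's shares, a's cumulative utility)

    # Example 1 below: UB(1) = 3, so at t = 1 agent a keeps issue 1 and concedes issue 2.
    print(tradeoff_a([3, 1], [1, 5], [0.5, 0.5], 1, 3.0))  # ((1, 0), (0, 1), 3.0)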
If there is more than one package that solves Equation 3, then TRADEOFFA returns any one of them (because agent a gets equal utility from all such packages and so does agent b). The function TRADEOFFB for agent b is analogous to that for a. On the other hand, the equilibrium strategy for the agent that receives an offer is as follows. For time period t, let b denote the receiving agent. Then, b accepts [x^t, y^t] if UB(t) ≤ U^b([x^t, y^t], t), otherwise it rejects the offer because it can get a higher utility in the next time period. The equilibrium strategy for a as receiving agent is defined analogously. In this way, we reason backwards and obtain the offers for the first time period. Thus, we get the equilibrium strategies (A(t) and B(t)) given in the statement of the theorem. The following example illustrates how the agents make tradeoffs using the above equilibrium strategies.
EXAMPLE 1. Assume there are m = 2 issues for negotiation, the deadline for both issues is n = 2, and the discount factor for both issues for both agents is δ = 1/2. Let k_1^a = 3, k_2^a = 1, k_1^b = 1, and k_2^b = 5. Let agent a be the first mover. By using backward reasoning, a knows that if negotiation reaches the second time period (which is the deadline), then b will get a hundred percent of both the issues. This gives b a cumulative utility of UB(2) = 1/2 + 5/2 = 3. Thus, in the first time period, if b gets anything less than a utility of 3, it will reject a's offer. So, at t = 1, a offers the package where it gets issue 1 and b gets issue 2. This gives a cumulative utility of 3 to a and 5 to b. Agent b accepts the package and an agreement takes place in the first time period.
The maximization problem in Equation 3 can be viewed as the 0-1 knapsack problem. (Note that for the case of divisible issues this is the fractional knapsack problem. The fractional knapsack problem is computationally easy; it can be solved in time polynomial in the number of items in the knapsack problem [17]. In contrast, the 0-1 knapsack problem is computationally hard.) In the 0-1 knapsack problem, we have a set of m items where each item has a profit and a weight. There is a knapsack with a given capacity. The objective is to fill the knapsack with items so as to maximize the cumulative profit of the items in the knapsack. This problem is analogous to the negotiation problem we want to solve (i.e., the maximization problem of Equation 3). Since k_c^a and δ_c^{t−1} are constants, maximizing Σ_{c=1}^m k_c^a (1 − b_c^t) δ_c^{t−1} is the same as minimizing Σ_{c=1}^m k_c^a b_c^t. Hence Equation 3 can be written as:
minimize Σ_{c=1}^m k_c^a b_c^t    (4)
such that Σ_{c=1}^m b_c^t k_c^b ≥ UB(t)
          b_c^t = 0 or 1 for 1 ≤ c ≤ m
Equation 4 is a minimization version of the standard 0-1 knapsack problem with m items where k_c^a represents the profit for item c, k_c^b the weight for item c, and UB(t) the knapsack capacity. (Note that for the standard 0-1 knapsack problem the weights, profits and the capacity are positive integers. However, a 0-1 knapsack problem with fractions and non-positive values can easily be transformed to one with positive integers in time linear in m using the methods given in [8, 17].) Example 1 was for two issues and so it was easy to find the equilibrium offers. But, in general, it is not computationally easy to find the equilibrium offers of Theorem 1. The following theorem proves this.
THEOREM 2. For the package deal procedure, the problem of finding the equilibrium offers given in Theorem 1 is NP-hard.
PROOF. Finding the equilibrium offers given in Theorem 1 requires solving the 0-1 knapsack problem given in Equation 4. Since the 0-1 knapsack problem is NP-hard [17], the problem of finding equilibrium for the package deal is also NP-hard.
3.3 Approximate equilibrium
Researchers in the area of algorithms have found time efficient methods for computing approximate solutions to 0-1 knapsack problems [10]. Hence we use these methods to find a solution to our negotiation problem.
At this stage, we would like to point out the main difference between solving the 0-1 knapsack problem and solving our negotiation problem. The 0-1 knapsack problem involves decision making by a single agent regarding which items to place in the knapsack. On the other hand, our negotiation problem involves two players and they are both strategic. Hence, in our case, it is not enough to just find an approximate solution to the knapsack problem, we must also show that such an approximation forms an equilibrium. The traditional approach for overcoming the computational complexity in finding an equilibrium has been to use an approximate equilibrium (see [14, 26] for example). In this approach, a strategy profile is said to form an approximate Nash equilibrium if neither agent can gain more than the constant ε by deviating. Hence, our aim is to use the solution to the 0-1 knapsack problem proposed in [10] and show that it forms an approximate equilibrium to our negotiation problem. Before doing so, we give a brief overview of the key ideas that underlie approximation algorithms. There are two key issues in the design of approximate algorithms [1]: 1. the quality of their solution, and 2. the time taken to compute the approximation. The quality of an approximate algorithm is determined by comparing its performance to that of the optimal algorithm and measuring the relative error [3, 1]. The relative error is defined as (z − z*)/z* where z is the approximate solution and z* the optimal one. In general, we are interested in finding approximate algorithms whose relative error is bounded from above by a certain constant ε, i.e.,
(z − z*)/z* ≤ ε    (5)
Regarding the second issue of time complexity, we are interested in finding fully polynomial approximation algorithms. An approximation algorithm is said to be fully polynomial if for any ε > 0 it finds a solution satisfying Equation 5 in time polynomially bounded by the size of the problem (for the 0-1 knapsack problem, the problem size is equal to the number of items) and by 1/ε [1]. For the 0-1 knapsack problem, Ibarra and Kim [10] presented a fully polynomial approximation method. This method is based on dynamic programming. It is a parametric method that takes ε as a parameter and for any ε > 0, finds a heuristic solution z with relative error at most ε, such that the time and space complexity grow polynomially with the number of items m and 1/ε. More specifically, the space and time complexity are both O(m/ε^2) and hence polynomial in m and 1/ε (see [10] for the detailed approximation algorithm and proof of time and space complexity). Since the Ibarra and Kim method is fully polynomial, we use it to solve our negotiation problem. This is done as follows. For agent a, let APRX-TRADEOFFA(P, UB(t), t, ε) denote a procedure that returns an approximate solution to Equation 4 using the Ibarra and Kim method. The procedure APRX-TRADEOFFB(P, UA(t), t, ε) for agent b is analogous.
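For intuition about how such a fully polynomial scheme works, here is a simple profit-scaling FPTAS for the standard (maximization) 0-1 knapsack problem. It is in the spirit of Ibarra and Kim's method but is not their algorithm: its running time is roughly O(m^3/ε) rather than O(m/ε^2), and the minimization form of Equation 4 would first have to be transformed into this standard form (the paper notes such transformations exist [8, 17]). All names are ours.

    def knapsack_fptas(profits, weights, capacity, eps):
        # Returns item indices with total weight <= capacity and total profit
        # >= (1 - eps) * optimum, via profit scaling plus dynamic programming.
        m = len(profits)
        items = [i for i in range(m) if weights[i] <= capacity]
        if not items:
            return []
        p_max = max(profits[i] for i in items)
        if p_max <= 0 or eps <= 0:
            return []
        scale = eps * p_max / m                      # scaling factor K
        scaled = {i: int(profits[i] // scale) for i in items}
        # min_weight[q] = (lightest total weight achieving scaled profit q, items used)
        min_weight = {0: (0.0, [])}
        for i in items:
            updates = {}
            for q, (w, chosen) in min_weight.items():
                nq, nw = q + scaled[i], w + weights[i]
                if nw > capacity:
                    continue
                if nw < min_weight.get(nq, (float("inf"),))[0] and nw < updates.get(nq, (float("inf"),))[0]:
                    updates[nq] = (nw, chosen + [i])
            min_weight.update(updates)
        best_q = max(min_weight)                     # largest scaled profit that fits
        return min_weight[best_q][1]

    # Toy run: the exact optimum is items 1 and 2 (profit 9, weight 5),
    # which the scheme recovers here.
    print(knapsack_fptas(profits=[6.0, 5.0, 4.0], weights=[4, 3, 2], capacity=5, eps=0.1))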
For 1 ≤ c ≤ m, the approximate equilibrium offer for issue c at time t is denoted as [ā_c^t, b̄_c^t] where ā_c^t and b̄_c^t denote the shares for agent a and b respectively. We denote the equilibrium package at time t as [ā^t, b̄^t] where ā^t ∈ B^m (b̄^t ∈ B^m) is an m element vector that denotes a's (b's) share for each of the m issues. Also, as before, for 1 ≤ c ≤ m, δ_c is the discount factor for issue c. Note that for 1 ≤ t ≤ n, ā_c^t + b̄_c^t = 1 (i.e., the sum of the agents' shares (at time t) for each pie is one). Finally, for time period t (for 1 ≤ t ≤ n) we let Ā(t) (respectively B̄(t)) denote the approximate equilibrium strategy for agent a (respectively b). The following theorem uses this notation and characterizes an approximate equilibrium for multi-issue negotiation.
THEOREM 3. For the package deal procedure, the following strategies form an approximate Nash equilibrium. The equilibrium strategy for t = n is:
Ā(n) = { OFFER [1, 0] IF a's TURN; ACCEPT IF b's TURN }
B̄(n) = { OFFER [0, 1] IF b's TURN; ACCEPT IF a's TURN }
For all preceding time periods t < n, if [x^t, y^t] denotes the offer made at time t, then the equilibrium strategies are defined as follows:
Ā(t) = { OFFER APRX-TRADEOFFA(P, UB(t), t, ε) IF a's TURN; If (U^a([x^t, y^t], t) ≥ UA(t)) ACCEPT else REJECT IF b's TURN }
B̄(t) = { OFFER APRX-TRADEOFFB(P, UA(t), t, ε) IF b's TURN; If (U^b([x^t, y^t], t) ≥ UB(t)) ACCEPT else REJECT IF a's TURN }
where UA(t) = U^a([ā^{t+1}, b̄^{t+1}], t + 1) and UB(t) = U^b([ā^{t+1}, b̄^{t+1}], t + 1). An agreement takes place at t = 1.
PROOF. As in the proof for Theorem 1, we use backward reasoning. We first obtain the strategies for the last time period t = n. It is straightforward to get these strategies; the offering agent gets a hundred percent of all the issues. Then for t = n − 1, the offering agent must solve the maximization problem of Equation 4 by substituting t = n − 1 in it. For agent a (b), this is done by APPROX-TRADEOFFA (APPROX-TRADEOFFB). These two functions are nothing but Ibarra and Kim's approximation method for solving the 0-1 knapsack problem. These two functions take ε as a parameter and use Ibarra and Kim's approximation method to return a package that approximately maximizes Equation 4. Thus, the relative error for these two functions is the same as that for Ibarra and Kim's method (i.e., it is at most ε, where ε is given in Equation 5). Assume that a is the offering agent for t = n − 1. Agent a must offer a package that gives b a cumulative utility equal to what it would get from its own approximate equilibrium offer for the next time period (i.e., U^b([ā^{t+1}, b̄^{t+1}], t + 1) where [ā^{t+1}, b̄^{t+1}] is the approximate equilibrium package for the next time period). Recall that for the last time period, the offering agent gets a hundred percent of all the issues. Since a is the offering agent for t = n − 1 and the agents use the alternating offers protocol, it is b's turn at t = n. Thus U^b([ā^{t+1}, b̄^{t+1}], t + 1) is equal to b's cumulative utility from receiving a hundred percent of all the issues. Using this utility as the capacity of the knapsack, a uses APPROX-TRADEOFFA and obtains the approximate equilibrium package for t = n − 1. On the other hand, if b is the offering agent at t = n − 1, it uses APPROX-TRADEOFFB to obtain the approximate equilibrium package. In the same way for t < n − 1, the offering agent (say a) uses APPROX-TRADEOFFA to find an approximate equilibrium package that gives b a utility of U^b([ā^{t+1}, b̄^{t+1}], t + 1).
By reasoning backwards, we obtain the offer for time period t = 1. If a (b) is the offering agent, it proposes the offer APPROX-TRADEOFFA(P, UB(1), 1, ε) (APPROX-TRADEOFFB(P, UA(1), 1, ε)). The receiving agent accepts the offer. This is because the relative error in its cumulative utility from the offer is at most ε. An agreement therefore takes place in the first time period.
THEOREM 4. The time complexity of finding the approximate equilibrium offer for the first time period is O(nm/ε^2).
PROOF. The time complexity of APPROX-TRADEOFFA and APPROX-TRADEOFFB is the same as the time complexity of the Ibarra and Kim method [10], i.e., O(m/ε^2). In order to find the equilibrium offer for the first time period using backward reasoning, APPROX-TRADEOFFA (or APPROX-TRADEOFFB) is invoked n times. Hence the time complexity of finding the approximate equilibrium offer for the first time period is O(nm/ε^2).
This analysis was done in a complete information setting. However, an extension of this analysis to an incomplete information setting where the agents have probability distributions over some uncertain parameter is straightforward, as long as the negotiation is done offline; i.e., the agents know their preference for each individual issue before negotiation begins. For instance, consider the case where different agents have different discount factors, and each agent is uncertain about its opponent's discount factor although it knows its own. This uncertainty is modelled with a probability distribution over the possible values for the opponent's discount factor and having this distribution as common knowledge to the agents. All our analysis for the complete information setting still holds for this incomplete information setting, except for the fact that an agent must now use the given probability distribution and find its opponent's expected utility instead of its actual utility. Hence, instead of analyzing an incomplete information setting for offline negotiation, we focus on online multi-issue negotiation.
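Before turning to the online setting, the backward-induction construction behind Theorems 1, 3, and 4 can be made concrete: the package for t = 1 is obtained by walking backwards from the deadline and repeatedly solving the offering agent's tradeoff problem. The sketch below uses an exact brute-force tradeoff (so it is exponential in m); substituting an ε-approximate knapsack routine such as the one sketched above for the inner enumeration gives the approximate equilibrium of Theorem 3. The acceptance test uses the receiver's utility at time t, as in the theorem statements; all names are our own.

    from itertools import product

    def equilibrium_offer(k_a, k_b, delta, n, first_mover="a"):
        # Returns the packages (a's shares, b's shares) offered and accepted at t = 1.
        m = len(k_a)

        def util(k, shares, t):
            # Cumulative utility of Equations 1-2 at time t (0 after the deadline).
            return sum(k[c] * shares[c] * delta[c] ** (t - 1) for c in range(m)) if t <= n else 0.0

        def tradeoff(offerer_k, receiver_k, t, receiver_target):
            # Offerer's best package that still gives the receiver its continuation utility.
            best = None
            for r_shares in product((0, 1), repeat=m):
                if util(receiver_k, r_shares, t) < receiver_target:
                    continue
                o_shares = tuple(1 - s for s in r_shares)
                if best is None or util(offerer_k, o_shares, t) > util(offerer_k, best[0], t):
                    best = (o_shares, r_shares)
            return best

        # At t = n the offering agent keeps every issue.
        offerer = first_mover if n % 2 == 1 else ("b" if first_mover == "a" else "a")
        a_shares = tuple([1] * m) if offerer == "a" else tuple([0] * m)
        b_shares = tuple(1 - s for s in a_shares)
        for t in range(n - 1, 0, -1):
            offerer = "b" if offerer == "a" else "a"
            if offerer == "a":
                target = util(k_b, b_shares, t + 1)
                a_shares, b_shares = tradeoff(k_a, k_b, t, target)
            else:
                target = util(k_a, a_shares, t + 1)
                b_shares, a_shares = tradeoff(k_b, k_a, t, target)
        return a_shares, b_shares

    # Example 1: a keeps issue 1, b gets issue 2, agreed in the first time period.
    print(equilibrium_offer(k_a=[3, 1], k_b=[1, 5], delta=[0.5, 0.5], n=2))  # ((1, 0), (0, 1))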
Since decisions about the output are made with incomplete knowledge about the entire input, an online algorithm often cannot produce an optimal solution. Such an algorithm can only approximate the performance of the optimal algorithm that sees all the inputs in advance. In the design of online algorithms, the main aim is to achieve a performance that is close to that of the optimal offline algorithm on each input. An online algorithm is said to be stochastic if it makes decisions on the basis of the probability distributions for the future inputs. The performance of stochastic online algorithms is assessed in terms of the expected difference between the optimum and the approximate solution (denoted E[z_m^* − z_m] where z_m^* is the optimal and z_m the approximate solution). Note that the subscript m is used to indicate the fact that this difference depends on m. We now describe the protocol for online negotiation and then obtain an approximate equilibrium. The protocol is defined as follows. Let agent a denote the first mover (since we focus on the package deal procedure, the first mover is the same for all the m issues).
Step 1. For c = 1, the agents are given the values of k_c^a and k_c^b. These two values are now common knowledge. (We assume common knowledge because it simplifies exposition. However, if k_c^a (k_c^b) is a's (b's) private knowledge, then our analysis will still hold but now an agent must find its opponent's expected utility on the basis of the p.d.f.s for k_c^a and k_c^b.)
Step 2. The agents settle issue c using the alternating offers protocol described in Section 2. Negotiation for issue c must end within n time periods from the start of negotiation on the issue. If an agreement is not reached within this time, then negotiation fails on this and on all remaining issues.
Step 3. The above steps are repeated for issues c = 2, 3, ... , m. Negotiation for issue c (2 ≤ c ≤ m) begins in the time period following an agreement on issue c − 1.
Thus, during time period t, the problem for the offering agent (say a) is to find the optimal offer for issue c on the basis of k_c^a and k_c^b and the probability distribution for k_i^a and k_i^b (c < i ≤ m). In order to solve this online negotiation problem we draw an analogy with the online knapsack problem. Before doing so, however, we give a brief overview of the online knapsack problem. In the online knapsack problem, there are m items. The agent must examine the m items one at a time according to the order they are input (i.e., as their profit and size coefficients become known). Hence, the algorithm is required to decide whether or not to include each item in the knapsack as soon as its weight and profit become known, without knowledge concerning the items still to be seen, except for their total number. Note that since the agents have a probability distribution over the weights and profits of the future items, this is a case of the stochastic online knapsack problem. Our online negotiation problem is analogous to the online knapsack problem. This analogy is described in detail in the proof for Theorem 5. Again, researchers in algorithms have developed time efficient approximate solutions to the online knapsack problem [16]. Hence we use this solution and show that it forms an equilibrium. The following theorem characterizes an approximate equilibrium for online negotiation. Here the agents have to choose a strategy without knowing the features of the future issues.
Because of this information incompleteness, the relevant equilibrium solution is that of a Bayes' Nash Equilibrium (BNE) in which each agent plays the best response to the other agents with respect to their expected utilities [18]. However, finding an agent's BNE strategy is analogous to solving the online 0-1 knapsack problem. Also, the online knapsack can only be solved approximately [16]. Hence the relevant equilibrium solution concept is approximate BNE (see [26] for example). The following theorem finds this equilibrium using procedures ONLINE-TRADEOFFA and ONLINE-TRADEOFFB which are defined in the proof of the theorem. For a given time period, we let z_m denote the approximately optimal solution generated by ONLINE-TRADEOFFA (or ONLINE-TRADEOFFB) and z_m^* the actual optimum.
THEOREM 5. For the package deal procedure, the following strategies form an approximate Bayes' Nash equilibrium. The equilibrium strategy for t = n is:
A(n) = { OFFER [1, 0] IF a's TURN; ACCEPT IF b's TURN }
B(n) = { OFFER [0, 1] IF b's TURN; ACCEPT IF a's TURN }
For all preceding time periods t < n, if [x^t, y^t] denotes the offer made at time t, then the equilibrium strategies are defined as follows:
A(t) = { OFFER ONLINE-TRADEOFFA(P, UB(t), t) IF a's TURN; If (U^a([x^t, y^t], t) ≥ UA(t)) ACCEPT else REJECT IF b's TURN }
B(t) = { OFFER ONLINE-TRADEOFFB(P, UA(t), t) IF b's TURN; If (U^b([x^t, y^t], t) ≥ UB(t)) ACCEPT else REJECT IF a's TURN }
where UA(t) = U^a([ā^{t+1}, b̄^{t+1}], t + 1) and UB(t) = U^b([ā^{t+1}, b̄^{t+1}], t + 1). An agreement on issue c takes place at t = c. For a given time period, the expected difference between the solution generated by the optimal strategy and that by the approximate strategy is E[z_m^* − z_m] = O(√m).
PROOF. As in Theorem 1 we find the equilibrium offer for time period t = 1 using backward induction. Let a be the offering agent for t = 1 for all the m issues. Consider the last time period t = n (recall from Step 2 of the online protocol that n is the deadline for completing negotiation on the first issue). Since the first mover is the same for all the issues, and the agents make offers alternately, the offering agent for t = n is also the same for all the m issues. Assume that b is the offering agent for t = n. As in Section 3, the offering agent for t = n gets a hundred percent of all the m issues. Since b is the offering agent for t = n, his utility for this time period is:
UB(n) = k_1^b δ_1^{n−1} + (1/2) Σ_{i=2}^m δ_i^{i(n−1)}    (6)
Recall that k_i^a and k_i^b (for c < i ≤ m) are not known to the agents. Hence, the agents can only find their expected utilities from the future issues on the basis of the probability distribution functions for k_i^a and k_i^b. However, during the negotiation for issue c the agents know k_c^a but not k_c^b (see Step 1 of the online protocol). Hence, a computes UB(n) as follows. Agent b's utility from issue c = 1 is k_1^b δ_1^{n−1} (which is the first term of Equation 6). Then, on the basis of the probability distribution functions for k_i^a and k_i^b, agent a computes b's expected utility from each future issue i as δ_i^{i(n−1)}/2 (since k_i^a and k_i^b are uniformly distributed on [0, 1]). Thus, b's expected cumulative utility from these m − c issues is (1/2) Σ_{i=2}^m δ_i^{i(n−1)} (which is the second term of Equation 6).
Now, in order to decide what to offer for issue c = 1, the offering agent for t = n − 1 (i.e., agent a) must solve the following online knapsack problem:
maximize Σ_{i=1}^m k_i^a (1 − b̄_i^t) δ_i^{n−1}    (7)
such that Σ_{i=1}^m k_i^b b̄_i^t ≥ UB(n)
          b̄_i^t = 0 or 1 for 1 ≤ i ≤ m
The only variables in the above maximization problem are b̄_i^t. Now, maximizing Σ_{i=1}^m k_i^a (1 − b̄_i^t) δ_i^{n−1} is the same as minimizing Σ_{i=1}^m k_i^a b̄_i^t since δ_i^{n−1} and k_i^a are constants. Thus, we write Equation 7 as:
minimize Σ_{i=1}^m k_i^a b̄_i^t    (8)
such that Σ_{i=1}^m k_i^b b̄_i^t ≥ UB(n)
          b̄_i^t = 0 or 1 for 1 ≤ i ≤ m
The above optimization problem is analogous to the online 0-1 knapsack problem. An algorithm to solve the online knapsack problem has already been proposed in [16]. This algorithm is called the fixed-choice online algorithm. It has time complexity linear in the number of items (m) in the knapsack problem. We use this to solve our online negotiation problem. Thus, our ONLINE-TRADEOFFA algorithm is nothing but the fixed-choice online algorithm and therefore has the same time complexity as the latter. This algorithm takes the values of k_i^a and k_i^b one at a time and generates an approximate solution to the above knapsack problem. The expected difference between the optimum and approximate solution is E[z_m^* − z_m] = O(√m) [16] (see [16] for the detailed fixed-choice online algorithm and a proof for E[z_m^* − z_m] = O(√m)). The fixed-choice online algorithm of [16] is a generalization of the basic greedy algorithm for the offline knapsack problem; the idea behind it is as follows. A threshold value is determined on the basis of the information regarding weights and profits for the 0-1 knapsack problem. The method then includes into the knapsack all items whose profit density (the profit density of an item is its profit per unit weight) exceeds the threshold until either the knapsack is filled or all the m items have been considered. In more detail, the algorithm ONLINE-TRADEOFFA works as follows. It first gets the values of k_1^a and k_1^b and finds b̄_c^t. Since we have a 0-1 knapsack problem, b̄_c^t can be either zero or one. Now, if b̄_c^t = 1 for t = n, then b̄_c^t must be one for 1 ≤ t < n (i.e., a must offer b̄_c^t = 1 at t = 1). If b̄_c^t = 1 for t = n, but a offers b̄_c^t = 0 at t = 1, then agent b gets less utility than what it expects from a's offer and rejects the proposal. Thus, if b̄_c^t = 1 for t = n, then the optimal strategy for a is to offer b̄_c^t = 1 at t = 1. Agent b accepts the offer. Thus, negotiation on the first issue starts at t = 1 and an agreement on it is also reached at t = 1. In the next time period (i.e., t = 2), negotiation proceeds to the next issue. The deadline for the second issue is n time periods from the start of negotiation on the issue. For c = 2, the algorithm ONLINE-TRADEOFFA is given the values of k_2^a and k_2^b and finds b̄_c^t as described above. Agent a offers b̄_c at t = 2 and b accepts. Thus, negotiation on the second issue starts at t = 2 and an agreement on it is also reached at t = 2. This process repeats for the remaining issues c = 3, ... , m. Thus, each issue is agreed upon in the same time period in which it starts. As negotiation for the next issue starts in the following time period (see Step 3 of the online protocol), agreement on issue i occurs at time t = i. On the other hand, if b is the offering agent at t = 1, he uses the algorithm ONLINE-TRADEOFFB which is defined analogously. Thus, irrespective of who makes the first move, all the m issues are settled at time t = m.
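The sketch below shows the general shape of such a threshold rule for an online knapsack: items arrive one at a time and are taken exactly when their profit density clears a pre-set threshold and they still fit. It is only an illustration of the idea; the actual fixed-choice algorithm of [16], including how the threshold is derived from the distributional information and the proof that E[z_m^* − z_m] = O(√m), is more involved, and mapping it onto ONLINE-TRADEOFFA's choice of b̄_c for each arriving issue follows the proof above. The names and the toy threshold are ours.

    import random

    def threshold_online_knapsack(stream, capacity, threshold):
        # stream: iterable of (profit, weight) pairs revealed one at a time.
        # Each item is accepted or rejected irrevocably on arrival: it is taken when
        # its profit density (profit per unit weight) clears the threshold and it fits.
        taken, remaining, total_profit = [], capacity, 0.0
        for index, (profit, weight) in enumerate(stream):
            if 0 < weight <= remaining and profit / weight >= threshold:
                taken.append(index)
                remaining -= weight
                total_profit += profit
        return taken, total_profit

    # Toy run with profits and weights drawn uniformly from [0, 1], mirroring the
    # assumption on the k_c^a and k_c^b coefficients in Section 4.
    random.seed(0)
    items = [(random.random(), random.random()) for _ in range(20)]
    print(threshold_online_knapsack(items, capacity=3.0, threshold=1.0))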
THEOREM 6. The time complexity of finding the approximate equilibrium offers of Theorem 5 is linear in m.

PROOF. The time complexity of ONLINE-TRADEOFFA and ONLINE-TRADEOFFB is the same as the time complexity of the fixed-choice online algorithm of [16]. Since the latter has time complexity linear in m, the time complexity of ONLINE-TRADEOFFA and ONLINE-TRADEOFFB is also linear in m.

It is worth noting that, for the 0-1 knapsack problem, the lower bound on the expected difference between the optimum and the solution found by any online algorithm is Ω(1) [16]. Thus, it follows that this lower bound also holds for our negotiation problem.

5. RELATED WORK

Work on multi-issue negotiation can be divided into two main types: that for indivisible issues and that for divisible issues. We first describe the existing work for the case of divisible issues. Since Schelling [24] first noted that the outcome of negotiation depends on the choice of negotiation procedure, much research effort has been devoted to the study of different procedures for negotiating multiple issues. However, most of this work has focussed on the sequential procedure [7, 2]. For this procedure, a key issue is the negotiation agenda. Here the term agenda refers to the order in which the issues are negotiated. The agenda is important because each agent's cumulative utility depends on the agenda; if we change the agenda then these utilities change. Hence, the agents must decide what agenda they will use. Now, the agenda can be decided before negotiating the issues (such an agenda is called exogenous) or it may be decided during the process of negotiation (such an agenda is called endogenous). For instance, Fershtman [7] analyzes sequential negotiation with an exogenous agenda. A number of researchers have also studied negotiations with an endogenous agenda [2]. In contrast to the above work, which mainly deals with sequential negotiation, [6] studies the equilibrium for the package deal procedure. However, all the above mentioned work differs from ours in that we focus on indivisible issues while the others focus on the case where each issue is divisible. Specifically, no previous work has determined an approximate equilibrium for multi-issue negotiation or for online negotiation.

Existing work for the case of indivisible issues has mostly dealt with the problem of allocating tasks (that cannot be partitioned) to a group of agents. The problem of task allocation has been previously studied in the context of coalitions involving more than two agents. For example, [25] analyze the problem for the case where the agents act so as to maximize the benefit of the system as a whole. In contrast, our focus is on two agents that are both self-interested and want to maximize their individual utilities. On the other hand, [22] focus on the use of contracts for task allocation to multiple self-interested agents, but that work concerns finding ways of decommitting contracts (after the initial allocation has been done) so as to improve an agent's utility. In contrast, our work focuses on negotiation regarding who will carry out which task. Finally, online and approximate mechanisms have been studied in the context of auctions [14, 9], but not for bilateral negotiations (which is the focus of our work).

6. CONCLUSIONS

This paper has studied bilateral multi-issue negotiation between self-interested autonomous agents with time constraints.
The issues are indivisible and different agents value different issues differently. Thus, the problem is for the agents to decide how to allocate the issues between themselves so as to maximize their individual utilities. Specifically, we first showed that finding the equilibrium offers is an NP-hard problem even in a complete information setting. We then presented approximately optimal negotiation strategies and showed that they form an equilibrium. These strategies have polynomial time complexity. We also analysed the difference between the true optimum and the approximate optimum. Finally, we extended the analysis to online negotiation where the issues become available at different time points and the agents are uncertain about the features of these issues. Specifically, we showed that an approximate equilibrium exists for online negotiation and analysed the approximation error. These approximate strategies also have polynomial time complexity.

There are several interesting directions for future work. First, for online negotiation, we assumed that the constants $k^a_c$ and $k^b_c$ are both uniformly distributed. It will be interesting to analyze the case where $k^a_c$ and $k^b_c$ have other, possibly different, probability distributions. Apart from this, we treated the number of issues as common knowledge to the agents. In future work, it will be interesting to treat the number of issues as uncertain.

7. REFERENCES

[1] G. Ausiello, P. Crescenzi, G. Gambosi, V. Kann, A. Marchetti-Spaccamela, and M. Protasi. Complexity and Approximation: Combinatorial Optimization Problems and Their Approximability Properties. Springer, 2003.
[2] M. Bac and H. Raff. Issue-by-issue negotiations: the role of information and time preference. Games and Economic Behavior, 13:125-134, 1996.
[3] A. Borodin and R. El-Yaniv. Online Computation and Competitive Analysis. Cambridge University Press, 1998.
[4] S. J. Brams. Fair Division: From Cake Cutting to Dispute Resolution. Cambridge University Press, 1996.
[5] L. A. Busch and I. J. Horstman. Bargaining frictions, bargaining procedures and implied costs in multiple-issue bargaining. Economica, 64:669-680, 1997.
[6] S. S. Fatima, M. Wooldridge, and N. R. Jennings. Multi-issue negotiation with deadlines. Journal of Artificial Intelligence Research, 27:381-417, 2006.
[7] C. Fershtman. The importance of the agenda in bargaining. Games and Economic Behavior, 2:224-238, 1990.
[8] F. Glover. A multiphase dual algorithm for the zero-one integer programming problem. Operations Research, 13:879-919, 1965.
[9] M. T. Hajiaghayi, R. Kleinberg, and D. C. Parkes. Adaptive limited-supply online auctions. In ACM Conference on Electronic Commerce (ACM EC-04), pages 71-80, New York, 2004.
[10] O. H. Ibarra and C. E. Kim. Fast approximation algorithms for the knapsack and sum of subset problems. Journal of the ACM, 22:463-468, 1975.
[11] R. Inderst. Multi-issue bargaining with endogenous agenda. Games and Economic Behavior, 30:64-82, 2000.
[12] R. Keeney and H. Raiffa. Decisions with Multiple Objectives: Preferences and Value Trade-offs. John Wiley, New York, 1976.
[13] S. Kraus. Strategic Negotiation in Multi-Agent Environments. The MIT Press, Cambridge, Massachusetts, 2001.
[14] D. Lehmann, L. I. O'Callaghan, and Y. Shoham. Truth revelation in approximately efficient combinatorial auctions. Journal of the ACM, 49(5):577-602, 2002.
[15] A. Lomuscio, M. Wooldridge, and N. R. Jennings. A classification scheme for negotiation in electronic commerce. International Journal of Group Decision and Negotiation, 12(1):31-56, 2003.
[16] A. Marchetti-Spaccamela and C. Vercellis. Stochastic online knapsack problems. Mathematical Programming, 68:73-104, 1995.
[17] S. Martello and P. Toth. Knapsack Problems: Algorithms and Computer Implementations. John Wiley and Sons, 1990.
[18] M. J. Osborne and A. Rubinstein. A Course in Game Theory. The MIT Press, 1994.
[19] H. Raiffa. The Art and Science of Negotiation. Harvard University Press, Cambridge, USA, 1982.
[20] J. S. Rosenschein and G. Zlotkin. Rules of Encounter. MIT Press, 1994.
[21] A. Rubinstein. Perfect equilibrium in a bargaining model. Econometrica, 50(1):97-109, January 1982.
[22] T. Sandholm and V. Lesser. Levelled commitment contracts and strategic breach. Games and Economic Behavior: Special Issue on AI and Economics, 35:212-270, 2001.
[23] T. Sandholm and N. Vulkan. Bargaining with deadlines. In AAAI-99, pages 44-51, Orlando, FL, 1999.
[24] T. C. Schelling. An essay on bargaining. American Economic Review, 46:281-306, 1956.
[25] O. Shehory and S. Kraus. Methods for task allocation via agent coalition formation. Artificial Intelligence Journal, 101(1-2):165-200, 1998.
[26] S. Singh, V. Soni, and M. Wellman. Computing approximate Bayes Nash equilibria in tree games of incomplete information. In Proceedings of the ACM Conference on Electronic Commerce (ACM EC-04), pages 81-90, New York, May 2004.
[27] I. Stahl. Bargaining Theory. Economics Research Institute, Stockholm School of Economics, Stockholm, 1972.
Approximate and Online Multi-Issue Negotiation ABSTRACT This paper analyzes bilateral multi-issue negotiation between selfinterested autonomous agents. The agents have time constraints in the form of both deadlines and discount factors. There are m> 1 issues for negotiation where each issue is viewed as a pie of size one. The issues are "indivisible" (i.e., individual issues cannot be split between the parties; each issue must be allocated in its entirety to either agent). Here different agents value different issues differently. Thus, the problem is for the agents to decide how to allocate the issues between themselves so as to maximize their individual utilities. For such negotiations, we first obtain the equilibrium strategies for the case where the issues for negotiation are known a priori to the parties. Then, we analyse their time complexity and show that finding the equilibrium offers is an NP-hard problem, even in a complete information setting. In order to overcome this computational complexity, we then present negotiation strategies that are approximately optimal but computationally efficient, and show that they form an equilibrium. We also analyze the relative error (i.e., the difference between the true optimum and the approximate). The time complexity of the approximate equilibrium strategies is O (nm / ~ 2) where n is the negotiation deadline and ~ the relative error. Finally, we extend the analysis to online negotiation where different issues become available at different time points and the agents are uncertain about their valuations for these issues. Specifically, we show that an approximate equilibrium exists for online negotiation and show that the expected difference between the optimum and the approximate is O (√ m). These approximate strategies also have polynomial time complexity. 1. INTRODUCTION Negotiation is a key form of interaction in multiagent systems. It is a process in which disputing agents decide how to divide the gains from cooperation. Since this decision is made jointly by the agents themselves [20, 19, 13, 15], each party can only obtain what the other is prepared to allow them. Now, the simplest form of negotiation involves two agents and a single issue. For example, consider a scenario in which a buyer and a seller negotiate on the price of a good. To begin, the two agents are likely to differ on the price at which they believe the trade should take place, but through a process of joint decision-making they either arrive at a price that is mutually acceptable or they fail to reach an agreement. Since agents are likely to begin with different prices, one or both of them must move toward the other, through a series of offers and counter offers, in order to obtain a mutually acceptable outcome. However, before the agents can actually perform such negotiations, they must decide the rules for making offers and counter offers. That is, they must set the negotiation protocol [20]. On the basis of this protocol, each agent chooses its strategy (i.e., what offers it should make during the course of negotiation). Given this context, this work focuses on competitive scenarios with self-interested agents. For such cases, each participant defines its strategy so as to maximise its individual utility. However, in most bilateral negotiations, the parties involved need to settle more than one issue. For this case, the issues may be divisible or indivisible [4]. For the former, the problem for the agents is to decide how to split each issue between themselves [21]. 
For the latter, the individual issues cannot be divided. An issue, in its entirety, must be allocated to either of the two agents. Since the agents value different issues differently, they must come to terms about who will take which issue. To date, most of the existing work on multi-issue negotiation has focussed on the former case [7, 2, 5, 23, 11, 6]. However, in many real-world settings, the issues are indivisible. Hence, our focus here is on negotiation for indivisible issues. Such negotiations are very common in multiagent systems. For example, consider the case of task allocation between two agents. There is a set of tasks to be carried out and different agents have different preferences for the tasks. The tasks cannot be partitioned; a task must be carried out by one agent. The problem then is for the agents to negotiate about who will carry out which task. A key problem in the study of multi-issue negotiation is to determine the equilibrium strategies. An equally important problem, especially in the context of software agents, is to find the time complexity of computing the equilibrium offers. However, such computational issues have so far received little attention. As we will show, this is mainly due to the fact that existing work (described in Section 5) has mostly focused on negotiation for divisible issues, and finding the equilibrium for this case is computationally easier than for the case of indivisible issues. Our primary objective is, therefore, to answer the computational questions for the latter case for the types of situations that are commonly faced by agents in real-world contexts. Thus, we consider negotiations in which there is incomplete information and time constraints. Incompleteness of information on the part of negotiators is a common feature of most practical negotiations. Also, agents typically have time constraints in the form of both deadlines and discount factors. Deadlines are an essential element since negotiation cannot go on indefinitely; rather, it must end within a reasonable time limit. Likewise, discount factors are essential since the goods may be perishable or their value may decline due to inflation. Moreover, the strategic behaviour of agents with deadlines and discount factors differs from that of agents without them (see [21] for single issue bargaining without deadlines and [23, 13] for bargaining with deadlines and discount factors in the context of divisible issues). Given this, we consider indivisible issues and first analyze the strategic behaviour of agents to obtain the equilibrium strategies for the case where all the issues for negotiation are known a priori to both agents. For this case, we show that the problem of finding the equilibrium offers is NP-hard, even in a complete information setting. Then, in order to overcome the problem of time complexity, we present strategies that are approximately optimal but computationally efficient, and show that they form an equilibrium. We also analyze the relative error (i.e., the difference between the true optimum and the approximate). The time complexity of the approximate equilibrium strategies is O(nm/ε²) where n is the negotiation deadline and ε the relative error. Finally, we extend the analysis to online negotiation where different issues become available at different time points and the agents are uncertain about their valuations for these issues.
Specifically, we show that an approximate equilibrium exists for online negotiation and show that the expected difference between the optimum and the approximate is O(√m). These approximate strategies also have polynomial time complexity. In so doing, our contribution lies in analyzing the computational complexity of the above multi-issue negotiation problem, and finding the approximate and online equilibria. No previous work has determined these equilibria. Since software agents have limited computational resources, our results are especially relevant to such resource-bounded agents. The remainder of the paper is organised as follows. We begin by giving a brief overview of single-issue negotiation in Section 2. In Section 3, we obtain the equilibrium for multi-issue negotiation and show that finding equilibrium offers is an NP-hard problem. We then present an approximate equilibrium and evaluate its approximation error. Section 4 analyzes online multi-issue negotiation. Section 5 discusses the related literature and Section 6 concludes. 2. SINGLE-ISSUE NEGOTIATION 3. MULTI-ISSUE NEGOTIATION 3.1 The package deal procedure 3.2 Equilibrium strategies 3.3 Approximate equilibrium 4. ONLINE MULTI-ISSUE NEGOTIATION 5. RELATED WORK Work on multi-issue negotiation can be divided into two main types: that for indivisible issues and that for divisible issues. We first describe the existing work for the case of divisible issues. Since Schelling [24] first noted that the outcome of negotiation depends on the choice of negotiation procedure, much research effort has been devoted to the study of different procedures for negotiating multiple issues. However, most of this work has focussed on the sequential procedure [7, 2]. For this procedure, a key issue is the negotiation agenda. Here the term agenda refers to the order in which the issues are negotiated. The agenda is important because each agent's cumulative utility depends on the agenda; if we change the agenda then these utilities change. Hence, the agents must decide what agenda they will use. Now, the agenda can be decided before negotiating the issues (such an agenda is called exogenous) or it may be decided during the process of negotiation (such an agenda is called endogenous). For instance, Fershtman [7] analyzes sequential negotiation with an exogenous agenda. A number of researchers have also studied negotiations with an endogenous agenda [2]. In contrast to the above work that mainly deals with sequential negotiation, [6] studies the equilibrium for the package deal procedure. However, all the above-mentioned work differs from ours in that we focus on indivisible issues while others focus on the case where each issue is divisible. Specifically, no previous work has determined an approximate equilibrium for multi-issue negotiation or for online negotiation. Existing work for the case of indivisible issues has mostly dealt with task allocation problems (for tasks that cannot be partitioned) to a group of agents.
The problem of task allocation has been previously studied in the context of coalitions involving more than two agents. For example, [25] analyzes the problem for the case where the agents act so as to maximize the benefit of the system as a whole. In contrast, our focus is on two agents, both of which are self-interested and want to maximize their individual utilities. On the other hand, [22] focuses on the use of contracts for task allocation to multiple self-interested agents, but that work concerns finding ways of decommitting contracts (after the initial allocation has been done) so as to improve an agent's utility. In contrast, our focus is on negotiation regarding who will carry out which task. Finally, online and approximate mechanisms have been studied in the context of auctions [14, 9] but not for bilateral negotiations (which is the focus of our work). 6. CONCLUSIONS This paper has studied bilateral multi-issue negotiation between self-interested autonomous agents with time constraints. The issues are indivisible and different agents value different issues differently. Thus, the problem is for the agents to decide how to allocate the issues between themselves so as to maximize their individual utilities. Specifically, we first showed that finding the equilibrium offers is an NP-hard problem even in a complete information setting. We then presented approximately optimal negotiation strategies and showed that they form an equilibrium. These strategies have polynomial time complexity. We also analysed the difference between the true optimum and the approximate optimum. Finally, we extended the analysis to online negotiation where the issues become available at different time points and the agents are uncertain about the features of these issues. Specifically, we showed that an approximate equilibrium exists for online negotiation and analysed the approximation error. These approximate strategies also have polynomial time complexity. There are several interesting directions for future work. First, for online negotiation, we assumed that the constants kac and kbc are both uniformly distributed. It will be interesting to analyze the case where kac and kbc have other, possibly different, probability distributions. Apart from this, we treated the number of issues as being common knowledge to the agents. In future, it will be interesting to treat the number of issues as uncertain.
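The indivisible-issue model described above lends itself to a small illustration. The sketch below is not the authors' algorithm; it is a minimal example under assumed names (utility_of_allocation, best_offer_bruteforce) and an assumed additive utility form (valuations discounted by delta^t and worth zero after the deadline). It shows how an agent's utility for a package of issues can be computed, and why searching over all 2^m allocations for the best acceptable offer blows up exponentially, which is the computational obstacle that motivates the approximate strategies.

from itertools import product

def utility_of_allocation(values, allocation, t, delta, deadline):
    """Additive, discounted utility of an allocation of indivisible issues.

    values[i]     -- this agent's valuation of issue i
    allocation[i] -- 1 if issue i is assigned to this agent, else 0
    t             -- period in which agreement is reached
    delta         -- discount factor in (0, 1]
    deadline      -- negotiation deadline n; no agreement in time => zero utility
    """
    if t > deadline:
        return 0.0
    return (delta ** t) * sum(v for v, a in zip(values, allocation) if a)

def best_offer_bruteforce(my_values, opp_values, opp_reservation, t, delta, deadline):
    """Enumerate all 2^m splits and keep the one that maximizes the proposer's
    utility while giving the responder at least its reservation utility.
    Exponential in m -- the blow-up the approximate strategies are designed to avoid."""
    m = len(my_values)
    best, best_u = None, float("-inf")
    for alloc in product((0, 1), repeat=m):       # alloc[i] = 1 -> proposer keeps issue i
        opp_alloc = tuple(1 - a for a in alloc)
        if utility_of_allocation(opp_values, opp_alloc, t, delta, deadline) < opp_reservation:
            continue                               # responder would reject this package
        u = utility_of_allocation(my_values, alloc, t, delta, deadline)
        if u > best_u:
            best, best_u = alloc, u
    return best, best_u

# Example: three issues valued differently by the two agents.
offer, u = best_offer_bruteforce([5, 1, 3], [2, 4, 1], opp_reservation=3.0,
                                 t=1, delta=0.9, deadline=4)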
Approximate and Online Multi-Issue Negotiation ABSTRACT This paper analyzes bilateral multi-issue negotiation between self-interested autonomous agents. The agents have time constraints in the form of both deadlines and discount factors. There are m > 1 issues for negotiation where each issue is viewed as a pie of size one. The issues are "indivisible" (i.e., individual issues cannot be split between the parties; each issue must be allocated in its entirety to either agent). Here different agents value different issues differently. Thus, the problem is for the agents to decide how to allocate the issues between themselves so as to maximize their individual utilities. For such negotiations, we first obtain the equilibrium strategies for the case where the issues for negotiation are known a priori to the parties. Then, we analyse their time complexity and show that finding the equilibrium offers is an NP-hard problem, even in a complete information setting. In order to overcome this computational complexity, we then present negotiation strategies that are approximately optimal but computationally efficient, and show that they form an equilibrium. We also analyze the relative error (i.e., the difference between the true optimum and the approximate). The time complexity of the approximate equilibrium strategies is O(nm/ε²) where n is the negotiation deadline and ε the relative error. Finally, we extend the analysis to online negotiation where different issues become available at different time points and the agents are uncertain about their valuations for these issues. Specifically, we show that an approximate equilibrium exists for online negotiation and show that the expected difference between the optimum and the approximate is O(√m). These approximate strategies also have polynomial time complexity. 1. INTRODUCTION Negotiation is a key form of interaction in multiagent systems. It is a process in which disputing agents decide how to divide the gains from cooperation. Now, the simplest form of negotiation involves two agents and a single issue. However, before the agents can actually perform such negotiations, they must decide the rules for making offers and counter offers. That is, they must set the negotiation protocol [20]. On the basis of this protocol, each agent chooses its strategy (i.e., what offers it should make during the course of negotiation). Given this context, this work focuses on competitive scenarios with self-interested agents. For such cases, each participant defines its strategy so as to maximise its individual utility. However, in most bilateral negotiations, the parties involved need to settle more than one issue. For this case, the issues may be divisible or indivisible [4]. For the former, the problem for the agents is to decide how to split each issue between themselves [21]. For the latter, the individual issues cannot be divided. An issue, in its entirety, must be allocated to either of the two agents. Since the agents value different issues differently, they must come to terms about who will take which issue. To date, most of the existing work on multi-issue negotiation has focussed on the former case [7, 2, 5, 23, 11, 6]. However, in many real-world settings, the issues are indivisible. Hence, our focus here is on negotiation for indivisible issues. Such negotiations are very common in multiagent systems. For example, consider the case of task allocation between two agents.
There is a set of tasks to be carried out and different agents have different preferences for the tasks. The tasks cannot be partitioned; a task must be carried out by one agent. The problem then is for the agents to negotiate about who will carry out which task. A key problem in the study of multi-issue negotiation is to determine the equilibrium strategies. An equally important problem, especially in the context of software agents, is to find the time complexity of computing the equilibrium offers. However, such computational issues have so far received little attention. As we will show, this is mainly due to the fact that existing work (described in Section 5) has mostly focused on negotiation for divisible issues, and finding the equilibrium for this case is computationally easier than for the case of indivisible issues. Thus, we consider negotiations in which there is incomplete information and time constraints. Incompleteness of information on the part of negotiators is a common feature of most practical negotiations. Also, agents typically have time constraints in the form of both deadlines and discount factors. Deadlines are an essential element since negotiation cannot go on indefinitely; rather, it must end within a reasonable time limit. Given this, we consider indivisible issues and first analyze the strategic behaviour of agents to obtain the equilibrium strategies for the case where all the issues for negotiation are known a priori to both agents. For this case, we show that the problem of finding the equilibrium offers is NP-hard, even in a complete information setting. We also analyze the relative error (i.e., the difference between the true optimum and the approximate). The time complexity of the approximate equilibrium strategies is O(nm/ε²) where n is the negotiation deadline and ε the relative error. Finally, we extend the analysis to online negotiation where different issues become available at different time points and the agents are uncertain about their valuations for these issues. Specifically, we show that an approximate equilibrium exists for online negotiation and show that the expected difference between the optimum and the approximate is O(√m). These approximate strategies also have polynomial time complexity. In so doing, our contribution lies in analyzing the computational complexity of the above multi-issue negotiation problem, and finding the approximate and online equilibria. No previous work has determined these equilibria. Since software agents have limited computational resources, our results are especially relevant to such resource-bounded agents. We begin by giving a brief overview of single-issue negotiation in Section 2. In Section 3, we obtain the equilibrium for multi-issue negotiation and show that finding equilibrium offers is an NP-hard problem. We then present an approximate equilibrium and evaluate its approximation error. Section 4 analyzes online multi-issue negotiation. 5. RELATED WORK Work on multi-issue negotiation can be divided into two main types: that for indivisible issues and that for divisible issues. We first describe the existing work for the case of divisible issues. However, most of this work has focussed on the sequential procedure [7, 2]. For this procedure, a key issue is the negotiation agenda. Here the term agenda refers to the order in which the issues are negotiated. The agenda is important because each agent's cumulative utility depends on the agenda; if we change the agenda then these utilities change.
Hence, the agents must decide what agenda they will use. Now, the agenda can be decided before negotiating the issues (such an agenda is called exogenous) or it may be decided during the process of negotiation (such an agenda is called endogenous). For instance, Fershtman [7] analyzes sequential negotiation with an exogenous agenda. A number of researchers have also studied negotiations with an endogenous agenda [2]. In contrast to the above work that mainly deals with sequential negotiation, [6] studies the equilibrium for the package deal procedure. However, all the above-mentioned work differs from ours in that we focus on indivisible issues while others focus on the case where each issue is divisible. Specifically, no previous work has determined an approximate equilibrium for multi-issue negotiation or for online negotiation. Existing work for the case of indivisible issues has mostly dealt with task allocation problems (for tasks that cannot be partitioned) to a group of agents. The problem of task allocation has been previously studied in the context of coalitions involving more than two agents. For example, [25] analyzes the problem for the case where the agents act so as to maximize the benefit of the system as a whole. In contrast, our focus is on two agents, both of which are self-interested and want to maximize their individual utilities. Moreover, our focus is on negotiation regarding who will carry out which task. Finally, online and approximate mechanisms have been studied in the context of auctions [14, 9] but not for bilateral negotiations (which is the focus of our work). 6. CONCLUSIONS This paper has studied bilateral multi-issue negotiation between self-interested autonomous agents with time constraints. The issues are indivisible and different agents value different issues differently. Thus, the problem is for the agents to decide how to allocate the issues between themselves so as to maximize their individual utilities. Specifically, we first showed that finding the equilibrium offers is an NP-hard problem even in a complete information setting. We then presented approximately optimal negotiation strategies and showed that they form an equilibrium. These strategies have polynomial time complexity. We also analysed the difference between the true optimum and the approximate optimum. Finally, we extended the analysis to online negotiation where the issues become available at different time points and the agents are uncertain about the features of these issues. Specifically, we showed that an approximate equilibrium exists for online negotiation and analysed the approximation error. These approximate strategies also have polynomial time complexity. There are several interesting directions for future work. First, for online negotiation, we assumed that the constants kac and kbc are both uniformly distributed. Apart from this, we treated the number of issues as being common knowledge to the agents. In future, it will be interesting to treat the number of issues as uncertain.
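The O(nm/ε²) approximate strategies trade a bounded relative error ε for polynomial time. The sketch below is not the authors' construction; it only illustrates the general flavour of such an approximation by applying the standard scale-and-round dynamic program to a knapsack-like version of the proposer's packaging problem (all names, including approx_best_package and eps, are our own).

def approx_best_package(my_values, opp_values, opp_reservation, eps=0.1):
    """Knapsack-style (1 - eps)-approximation of a proposer's packaging problem:
    keep the issues that maximize the proposer's value while leaving the responder
    a package worth at least opp_reservation.  Weights are the responder's values
    of the issues the proposer keeps; the capacity is what the responder can give up."""
    m = len(my_values)
    capacity = sum(opp_values) - opp_reservation
    if capacity < 0:
        return None                              # no package can satisfy the responder
    vmax = max(my_values) or 1
    K = eps * vmax / m                           # rounding step of the classic FPTAS
    scaled = [int(v // K) for v in my_values]
    top = sum(scaled)
    INF = float("inf")
    # min_weight[p] = least responder-value the proposer must keep to reach scaled profit p
    min_weight = [0.0] + [INF] * top
    choice = [[False] * m for _ in range(top + 1)]
    for i in range(m):
        for p in range(top, scaled[i] - 1, -1):  # 0/1 knapsack update, profits descending
            cand = min_weight[p - scaled[i]] + opp_values[i]
            if cand < min_weight[p]:
                min_weight[p] = cand
                choice[p] = choice[p - scaled[i]][:]
                choice[p][i] = True
    best_p = max(p for p in range(top + 1) if min_weight[p] <= capacity)
    return [i for i in range(m) if choice[best_p][i]]

The rounding makes the table size polynomial in m and 1/eps rather than exponential in m, which is the same accuracy-for-tractability trade-off that underlies the approximate equilibrium strategies, even though the exact construction in the paper differs.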
I-68
On Opportunistic Techniques for Solving Decentralized Markov Decision Processes with Temporal Constraints
Decentralized Markov Decision Processes (DEC-MDPs) are a popular model of agent-coordination problems in domains with uncertainty and time constraints, but they are very difficult to solve. In this paper, we improve a state-of-the-art heuristic solution method for DEC-MDPs, called OC-DEC-MDP, that has recently been shown to scale up to larger DEC-MDPs. Our heuristic solution method, called Value Function Propagation (VFP), combines two orthogonal improvements of OC-DEC-MDP. First, it speeds up OC-DEC-MDP by an order of magnitude by maintaining and manipulating a value function for each state (as a function of time) rather than a separate value for each pair of state and time interval. Furthermore, it achieves better solution qualities than OC-DEC-MDP because, as our analytical results show, it does not overestimate the expected total reward like OC-DEC-MDP. We test both improvements independently in a crisis-management domain as well as for other types of domains. Our experimental results demonstrate a significant speedup of VFP over OC-DEC-MDP as well as higher solution qualities in a variety of situations.
[ "decentr markov decis process", "decentr markov decis process", "tempor constraint", "agent-coordin problem", "valu function propag", "valu function propag", "decis-theoret model", "decentr partial observ markov decis process", "opportun cost", "polici iter", "rescu mission", "probabl function propag", "multipl", "heurist perform", "multi-agent system", "local optim solut" ]
[ "P", "P", "P", "P", "P", "P", "M", "M", "U", "U", "U", "M", "U", "M", "U", "M" ]
On Opportunistic Techniques for Solving Decentralized Markov Decision Processes with Temporal Constraints Janusz Marecki and Milind Tambe Computer Science Department University of Southern California 941 W 37th Place, Los Angeles, CA 90089 {marecki, tambe}@usc. edu ABSTRACT Decentralized Markov Decision Processes (DEC-MDPs) are a popular model of agent-coordination problems in domains with uncertainty and time constraints but very difficult to solve. In this paper, we improve a state-of-the-art heuristic solution method for DEC-MDPs, called OC-DEC-MDP, that has recently been shown to scale up to larger DEC-MDPs. Our heuristic solution method, called Value Function Propagation (VFP), combines two orthogonal improvements of OC-DEC-MDP. First, it speeds up OC-DECMDP by an order of magnitude by maintaining and manipulating a value function for each state (as a function of time) rather than a separate value for each pair of sate and time interval. Furthermore, it achieves better solution qualities than OC-DEC-MDP because, as our analytical results show, it does not overestimate the expected total reward like OC-DEC- MDP. We test both improvements independently in a crisis-management domain as well as for other types of domains. Our experimental results demonstrate a significant speedup of VFP over OC-DEC-MDP as well as higher solution qualities in a variety of situations. Categories and Subject Descriptors I.2.11 [Artificial Intelligence]: Distributed Artificial IntelligenceMulti-agent Systems General Terms Algorithms, Theory 1. INTRODUCTION The development of algorithms for effective coordination of multiple agents acting as a team in uncertain and time critical domains has recently become a very active research field with potential applications ranging from coordination of agents during a hostage rescue mission [11] to the coordination of Autonomous Mars Exploration Rovers [2]. Because of the uncertain and dynamic characteristics of such domains, decision-theoretic models have received a lot of attention in recent years, mainly thanks to their expressiveness and the ability to reason about the utility of actions over time. Key decision-theoretic models that have become popular in the literature include Decentralized Markov Decision Processes (DECMDPs) and Decentralized, Partially Observable Markov Decision Processes (DEC-POMDPs). Unfortunately, solving these models optimally has been proven to be NEXP-complete [3], hence more tractable subclasses of these models have been the subject of intensive research. In particular, Network Distributed POMDP [13] which assume that not all the agents interact with each other, Transition Independent DEC-MDP [2] which assume that transition function is decomposable into local transition functions or DEC-MDP with Event Driven Interactions [1] which assume that interactions between agents happen at fixed time points constitute good examples of such subclasses. Although globally optimal algorithms for these subclasses have demonstrated promising results, domains on which these algorithms run are still small and time horizons are limited to only a few time ticks. To remedy that, locally optimal algorithms have been proposed [12] [4] [5]. In particular, Opportunity Cost DEC-MDP [4] [5], referred to as OC-DEC-MDP, is particularly notable, as it has been shown to scale up to domains with hundreds of tasks and double digit time horizons. 
Additionally, OC-DEC-MDP is unique in its ability to address both temporal constraints and uncertain method execution durations, which is an important factor for real-world domains. OC-DEC-MDP is able to scale up to such domains mainly because, instead of searching for the globally optimal solution, it carries out a series of policy iterations; in each iteration it performs a value iteration that reuses the data computed during the previous policy iteration. However, OC-DEC-MDP is still slow, especially as the time horizon and the number of methods approach large values. The reason for the high runtimes of OC-DEC-MDP for such domains is a consequence of its huge state space, i.e., OC-DEC-MDP introduces a separate state for each possible pair of method and method execution interval. Furthermore, OC-DEC-MDP overestimates the reward that a method expects to receive for enabling the execution of future methods. This reward, also referred to as the opportunity cost, plays a crucial role in agent decision making, and as we show later, its overestimation leads to highly suboptimal policies. In this context, we present VFP (Value Function Propagation), an efficient solution technique for the DEC-MDP model with temporal constraints and uncertain method execution durations, that builds on the success of OC-DEC-MDP. VFP introduces our two orthogonal ideas: First, similarly to [7] [9] and [10], we maintain and manipulate a value function over time for each method rather than a separate value for each pair of method and time interval. Such a representation allows us to group the time points for which the value function changes at the same rate (i.e., its slope is constant), which results in fast, functional propagation of value functions. Second, we prove (both theoretically and empirically) that OC-DEC-MDP overestimates the opportunity cost, and to remedy that, we introduce a set of heuristics that correct the opportunity cost overestimation problem. This paper is organized as follows: In section 2 we motivate this research by introducing a civilian rescue domain where a team of fire-brigades must coordinate in order to rescue civilians trapped in a burning building. In section 3 we provide a detailed description of our DEC-MDP model with Temporal Constraints and in section 4 we discuss how one could solve the problems encoded in our model using globally optimal and locally optimal solvers. Sections 5 and 6 discuss the two orthogonal improvements to the state-of-the-art OC-DEC-MDP algorithm that our VFP algorithm implements. Finally, in section 7 we demonstrate empirically the impact of our two orthogonal improvements, i.e., we show that: (i) the new heuristics correct the opportunity cost overestimation problem, leading to higher quality policies, and (ii) by allowing for a systematic tradeoff of solution quality for time, the VFP algorithm runs much faster than the OC-DEC-MDP algorithm. 2. MOTIVATING EXAMPLE We are interested in domains where multiple agents must coordinate their plans over time, despite uncertainty in plan execution duration and outcome. One example domain is a large-scale disaster, like a fire in a skyscraper. Because there can be hundreds of civilians scattered across numerous floors, multiple rescue teams have to be dispatched, and radio communication channels can quickly become saturated and useless. In particular, small teams of fire-brigades must be sent on separate missions to rescue the civilians trapped in dozens of different locations.
Picture a small mission plan from Figure (1), where three fire-brigades have been assigned a task to rescue the civilians trapped at site B, accessed from site A (e.g., an office accessed from the floor). General fire-fighting procedures involve both: (i) putting out the flames, and (ii) ventilating the site to let the toxic, high-temperature gases escape, with the restriction that ventilation should not be performed too fast in order to prevent the fire from spreading. The team estimates that the civilians have 20 minutes before the fire at site B becomes unbearable, and that the fire at site A has to be put out in order to open the access to site B. As has happened in the past in large-scale disasters, communication often breaks down; hence we assume in this domain that there is no communication between the fire-brigades 1, 2 and 3 (denoted as FB1, FB2 and FB3). Consequently, FB2 does not know if it is already safe to ventilate site A, FB1 does not know if it is already safe to enter site A and start fighting the fire at site B, etc. We assign the reward 50 for evacuating the civilians from site B, and a smaller reward 20 for the successful ventilation of site A, since the civilians themselves might succeed in breaking out from site B. One can clearly see the dilemma that FB2 faces: it can only estimate the durations of the Fight fire at site A methods to be executed by FB1 and FB3, and at the same time FB2 knows that time is running out for the civilians. If FB2 ventilates site A too early, the fire will spread out of control, whereas if FB2 waits with the ventilation method for too long, the fire at site B will become unbearable for the civilians. In general, agents have to perform a sequence of such difficult decisions; in particular, the decision process of FB2 involves first choosing when to start ventilating site A, and then (depending on the time it took to ventilate site A) choosing when to start evacuating the civilians from site B. Such a sequence of decisions constitutes the policy of an agent, and it must be found fast because time is running out. Figure 1: Civilian rescue domain and a mission plan. Dotted arrows represent implicit precedence constraints within an agent. (We explain the EST and LET notation in section 3.) 3. MODEL DESCRIPTION We encode our decision problems in a model which we refer to as Decentralized MDP with Temporal Constraints. (One could also use the OC-DEC-MDP framework, which models both time and resource constraints.) Each instance of our decision problems can be described as a tuple ⟨M, A, C, P, R⟩, where M = {m1, ..., m|M|} is the set of methods and A = {A1, ..., A|A|} is the set of agents. Agents cannot communicate during mission execution. Each agent Ak is assigned a set Mk of methods, such that M1 ∪ ... ∪ M|A| = M and Mi ∩ Mj = ∅ for all i ≠ j. Also, each method of agent Ak can be executed only once, and agent Ak can execute only one method at a time. Method execution times are uncertain and P = {p1, ..., p|M|} is the set of distributions of method execution durations. In particular, pi(t) is the probability that the execution of method mi consumes time t. C is the set of temporal constraints in the system. Methods are partially ordered and each method has fixed time windows inside which it can be executed, i.e., C = C≺ ∪ C[ ], where C≺ is the set of predecessor constraints and C[ ] is the set of time window constraints. For c ∈ C≺, c = ⟨mi, mj⟩ means that method mi precedes method mj, i.e., execution of mj cannot start before mi terminates. In particular, for an agent Ak, all its methods form a chain linked by predecessor constraints.
We assume that the graph G = ⟨M, C≺⟩ is acyclic, does not have disconnected nodes (the problem cannot be decomposed into independent subproblems), and that its source and sink vertices identify the source and sink methods of the system. For c ∈ C[ ], c = ⟨mi, EST, LET⟩ means that execution of mi can only start after the Earliest Starting Time EST and must finish before the Latest End Time LET; we allow methods to have multiple disjoint time window constraints. Although the distributions pi can extend to infinite time horizons, given the time window constraints, the planning horizon Δ = max⟨m,τ,τ′⟩∈C[ ] τ′ is considered as the mission deadline. Finally, R = {r1, ..., r|M|} is the set of non-negative rewards, i.e., ri is obtained upon successful execution of mi. Since there is no communication allowed, an agent can only estimate the probabilities that its methods have already been enabled by other agents. Consequently, if mj ∈ Mk is the next method to be executed by the agent Ak and the current time is t ∈ [0, Δ], the agent has to decide whether to Execute the method mj (denoted as E) or to Wait (denoted as W). In case agent Ak decides to wait, it remains idle for an arbitrarily small time ε, and resumes operation at the same place (about to execute method mj) at time t + ε. In case agent Ak decides to Execute the next method, two outcomes are possible: Success: The agent Ak receives reward rj and moves on to its next method (if such a method exists) so long as the following conditions hold: (i) all the methods {mi | ⟨mi, mj⟩ ∈ C≺} that directly enable method mj have already been completed, (ii) execution of method mj started in some time window of method mj, i.e., there exists ⟨mj, τ, τ′⟩ ∈ C[ ] such that t ∈ [τ, τ′], and (iii) execution of method mj finished inside the same time window, i.e., agent Ak completed method mj in time less than or equal to τ′ − t. Failure: If any of the above-mentioned conditions does not hold, agent Ak stops its execution. Other agents may continue their execution, but methods mk ∈ {m | ⟨mj, m⟩ ∈ C≺} will never become enabled. The policy πk of an agent Ak is a function πk : Mk × [0, Δ] → {W, E}, and πk(⟨m, t⟩) = a means that if Ak is at method m at time t, it will choose to perform the action a. A joint policy π = [π1, ..., π|A|] is considered to be optimal (denoted as π∗) if it maximizes the sum of expected rewards for all the agents. 4. SOLUTION TECHNIQUES 4.1 Optimal Algorithms The optimal joint policy π∗ is usually found by using the Bellman update principle, i.e., in order to determine the optimal policy for method mj, optimal policies for methods mk ∈ {m | ⟨mj, m⟩ ∈ C≺} are used. Unfortunately, for our model, the optimal policy for method mj also depends on the policies for methods mi ∈ {m | ⟨m, mj⟩ ∈ C≺}. This double dependency results from the fact that the expected reward for starting the execution of method mj at time t also depends on the probability that method mj will be enabled by time t. Consequently, if time is discretized, one needs to consider Δ^|M| candidate policies in order to find π∗. Thus, globally optimal algorithms used for solving real-world problems are unlikely to terminate in reasonable time [11].
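To make the model concrete, here is a minimal sketch of the data structures it implies. The class and field names (TimeWindow, Method, DecMdpTC) are our own illustration, not part of the paper; durations are stored as discrete distributions over [0, Δ], matching the discretized setting used later in the experiments.

from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class TimeWindow:
    est: int   # Earliest Starting Time
    let: int   # Latest End Time

@dataclass
class Method:
    name: str
    agent: str                      # the agent A_k that owns this method
    reward: float                   # r_i, obtained on successful completion
    duration: List[float]           # duration[d] = p_i(d), discretized over [0, Delta]
    windows: List[TimeWindow] = field(default_factory=list)

@dataclass
class DecMdpTC:
    """DEC-MDP with Temporal Constraints: methods, precedence edges (C_prec),
    time windows (C_[]), duration distributions (P) and rewards (R)."""
    methods: Dict[str, Method]
    precedes: List[Tuple[str, str]]  # (m_i, m_j): m_i must finish before m_j starts
    horizon: int                     # Delta, the latest LET over all time windows

    def enablers(self, mj: str) -> List[str]:
        """Methods that directly enable m_j (condition (i) of a successful execution)."""
        return [mi for (mi, m) in self.precedes if m == mj]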
The complexity of our model could be reduced if we considered its more restricted version; in particular, if each method mj was allowed to be enabled at time points t ∈ Tj ⊂ [0, Δ], the Coverage Set Algorithm (CSA) [1] could be used. However, CSA complexity is double exponential in the size of Ti, and for our domains Tj can store all values ranging from 0 to Δ. 4.2 Locally Optimal Algorithms Following the limited applicability of globally optimal algorithms for DEC-MDPs with Temporal Constraints, locally optimal algorithms appear more promising. Specially, the OC-DEC-MDP algorithm [4] is particularly significant, as it has shown to easily scale up to domains with hundreds of methods. The idea of the OC-DECMDP algorithm is to start with the earliest starting time policy π0 (according to which an agent will start executing the method m as soon as m has a non-zero chance of being already enabled), and then improve it iteratively, until no further improvement is possible. At each iteration, the algorithm starts with some policy π, which uniquely determines the probabilities Pi,[τ,τ ] that method mi will be performed in the time interval [τ, τ ]. It then performs two steps: Step 1: It propagates from sink methods to source methods the values Vi,[τ,τ ], that represent the expected utility for executing method mi in the time interval [τ, τ ]. This propagation uses the probabilities Pi,[τ,τ ] from previous algorithm iteration. We call this step a value propagation phase. Step 2: Given the values Vi,[τ,τ ] from Step 1, the algorithm chooses the most profitable method execution intervals which are stored in a new policy π . It then propagates the new probabilities Pi,[τ,τ ] from source methods to sink methods. We call this step a probability propagation phase. If policy π does not improve π, the algorithm terminates. There are two shortcomings of the OC-DEC-MDP algorithm that we address in this paper. First, each of OC-DEC-MDP states is a pair mj, [τ, τ ] , where [τ, τ ] is a time interval in which method mj can be executed. While such state representation is beneficial, in that the problem can be solved with a standard value iteration algorithm, it blurs the intuitive mapping from time t to the expected total reward for starting the execution of mj at time t. Consequently, if some method mi enables method mj, and the values Vj,[τ,τ ]∀τ,τ ∈[0,Δ] are known, the operation that calculates the values Vi,[τ,τ ]∀τ, τ ∈ [0, Δ] (during the value propagation phase), runs in time O(I2 ), where I is the number of time intervals 3 . Since the runtime of the whole algorithm is proportional to the runtime of this operation, especially for big time horizons Δ, the OC- DECMDP algorithm runs slow. Second, while OC-DEC-MDP emphasizes on precise calculation of values Vj,[τ,τ ], it fails to address a critical issue that determines how the values Vj,[τ,τ ] are split given that the method mj has multiple enabling methods. As we show later, OC-DEC-MDP splits Vj,[τ,τ ] into parts that may overestimate Vj,[τ,τ ] when summed up again. As a result, methods that precede the method mj overestimate the value for enabling mj which, as we show later, can have disastrous consequences. In the next two sections, we address both of these shortcomings. 5. VALUE FUNCTION PROPAGATION (VFP) The general scheme of the VFP algorithm is identical to the OCDEC-MDP algorithm, in that it performs a series of policy improvement iterations, each one involving a Value and Probability Propagation Phase. 
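The iteration scheme shared by OC-DEC-MDP and VFP can be summarized with a short skeleton. The names below (policy_improvement_loop and the callables passed into it) are placeholders for the two propagation phases just described, not code from either algorithm; the skeleton only fixes the order of the phases and the stopping rule.

def policy_improvement_loop(model, initial_policy,
                            propagate_values,   # phase 1: (model, probs)  -> values
                            propagate_probs,    # phase 2: (model, policy) -> probs
                            extract_policy,     # (model, values) -> policy
                            max_iters=100):
    """Two-phase policy improvement: start from the earliest-starting-time policy,
    then alternate backward value propagation (reusing last iteration's probabilities)
    and forward probability propagation, until the policy stops changing."""
    policy = initial_policy
    probs = propagate_probs(model, policy)
    for _ in range(max_iters):
        values = propagate_values(model, probs)      # sinks -> sources
        new_policy = extract_policy(model, values)   # pick profitable execution times
        if new_policy == policy:                     # no improvement -> terminate
            return policy
        policy = new_policy
        probs = propagate_probs(model, policy)       # sources -> sinks
    return policy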
However, instead of propagating separate values, VFP maintains and propagates the whole functions, we therefore refer to these phases as the value function propagation phase and the probability function propagation phase. To this end, for each method mi ∈ M, we define three new functions: Value Function, denoted as vi(t), that maps time t ∈ [0, Δ] to the expected total reward for starting the execution of method mi at time t. Opportunity Cost Function, denoted as Vi(t), that maps time t ∈ [0, Δ] to the expected total reward for starting the execution of method mi at time t assuming that mi is enabled. Probability Function, denoted as Pi(t), that maps time t ∈ [0, Δ] to the probability that method mi will be completed before time t. Such functional representation allows us to easily read the current policy, i.e., if an agent Ak is at method mi at time t, then it will wait as long as value function vi(t) will be greater in the future. Formally: πk( mi, t ) = j W if ∃t >t such that vi(t) < vi(t ) E otherwise. We now develop an analytical technique for performing the value function and probability function propagation phases. 3 Similarly for the probability propagation phase 832 The Sixth Intl.. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 5.1 Value Function Propagation Phase Suppose, that we are performing a value function propagation phase during which the value functions are propagated from the sink methods to the source methods. At any time during this phase we encounter a situation shown in Figure 2, where opportunity cost functions [Vjn ]N n=0 of methods [mjn ]N n=0 are known, and the opportunity cost Vi0 of method mi0 is to be derived. Let pi0 be the probability distribution function of method mi0 execution duration, and ri0 be the immediate reward for starting and completing the execution of method mi0 inside a time interval [τ, τ ] such that mi0 τ, τ ∈ C[ ]. The function Vi0 is then derived from ri0 and opportunity costs Vjn,i0 (t) n = 1, ..., N from future methods. Formally: Vi0 (t) = 8 >>< >>: R τ −t 0 pi0 (t )(ri0 + PN n=0 Vjn,i0 (t + t ))dt if ∃ mi0 τ,τ ∈C[ ] such that t ∈ [τ, τ ] 0 otherwise (1) Note, that for t ∈ [τ, τ ], if h(t) := ri0 + PN n=0 Vjn,i0 (τ −t) then Vi0 is a convolution of p and h: vi0 (t) = (pi0 ∗h)(τ −t). Assume for now, that Vjn,i0 represents a full opportunity cost, postponing the discussion on different techniques for splitting the opportunity cost Vj0 into [Vj0,ik ]K k=0 until section 6. We now show how to derive Vj0,i0 (derivation of Vjn,i0 for n = 0 follows the same scheme). Figure 2: Fragment of an MDP of agent Ak. Probability functions propagate forward (left to right) whereas value functions propagate backward (right to left). Let V j0,i0 (t) be the opportunity cost of starting the execution of method mj0 at time t given that method mi0 has been completed. It is derived by multiplying Vi0 by the probability functions of all methods other than mi0 that enable mj0 . Formally: V j0,i0 (t) = Vj0 (t) · KY k=1 Pik (t). Where similarly to [4] and [5] we ignored the dependency of [Plk ]K k=1. Observe that V j0,i0 does not have to be monotonically decreasing, i.e., delaying the execution of the method mi0 can sometimes be profitable. Therefore the opportunity cost Vj0,i0 (t) of enabling method mi0 at time t must be greater than or equal to V j0,i0 . Furthermore, Vj0,i0 should be non-increasing. Formally: Vj0,i0 = min f∈F f (2) Where F = {f | f ≥ V j0,i0 and f(t) ≥ f(t ) ∀t<t }. 
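In the discretized setting, Equation (1) is a convolution of the duration distribution with the future reward, and Equation (2) is the smallest non-increasing function dominating the result. A minimal sketch of these two operations, with invented helper names and a single time window [est, let] for brevity, might look as follows.

def opportunity_cost(duration, reward, future_oc, est, let):
    """Discrete version of Eq. (1): V(t) = sum over d <= let - t of p(d) * (r + future_oc(t + d))
    for t inside the window [est, let], and 0 outside.
    duration[d]  -- probability that the method takes d time steps
    future_oc[t] -- summed opportunity cost passed back from the methods this one enables
    """
    horizon = len(future_oc)
    V = [0.0] * horizon
    for t in range(est, min(let, horizon - 1) + 1):
        total = 0.0
        for d, p in enumerate(duration):
            if t + d > let or t + d >= horizon:
                break                     # execution must finish inside the window
            total += p * (reward + future_oc[t + d])
        V[t] = total
    return V

def non_increasing_envelope(V):
    """Discrete version of Eq. (2): the suffix running maximum, i.e., the smallest
    non-increasing function that dominates V (waiting is worth the best later start)."""
    env = V[:]
    for t in range(len(env) - 2, -1, -1):
        env[t] = max(env[t], env[t + 1])
    return env

The forward probability propagation of Equation (3) has the same convolution structure, with the duration distribution convolved with the start-time distribution instead of the future reward.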
Knowing the opportunity cost Vi0 , we can then easily derive the value function vi0 . Let Ak be an agent assigned to the method mi0 . If Ak is about to start the execution of mi0 it means, that Ak must have completed its part of the mission plan up to the method mi0 . Since Ak does not know if other agents have completed methods [mlk ]k=K k=1 , in order to derive vi0 , it has to multiply Vi0 by the probability functions of all methods of other agents that enable mi0 . Formally: vi0 (t) = Vi0 (t) · KY k=1 Plk (t) Where the dependency of [Plk ]K k=1 is also ignored. We have consequently shown a general scheme how to propagate the value functions: Knowing [vjn ]N n=0 and [Vjn ]N n=0 of methods [mjn ]N n=0 we can derive vi0 and Vi0 of method mi0 . In general, the value function propagation scheme starts with sink nodes. It then visits at each time a method m, such that all the methods that m enables have already been marked as visited. The value function propagation phase terminates when all the source methods have been marked as visited. 5.2 Reading the Policy In order to determine the policy of agent Ak for the method mj0 we must identify the set Zj0 of intervals [z, z ] ⊂ [0, ..., Δ], such that: ∀t∈[z,z ] πk( mj0 , t ) = W. One can easily identify the intervals of Zj0 by looking at the time intervals in which the value function vj0 does not decrease monotonically. 5.3 Probability Function Propagation Phase Assume now, that value functions and opportunity cost values have all been propagated from sink methods to source nodes and the sets Zj for all methods mj ∈ M have been identified. Since value function propagation phase was using probabilities Pi(t) for methods mi ∈ M and times t ∈ [0, Δ] found at previous algorithm iteration, we now have to find new values Pi(t), in order to prepare the algorithm for its next iteration. We now show how in the general case (Figure 2) propagate the probability functions forward through one method, i.e., we assume that the probability functions [Pik ]K k=0 of methods [mik ]K k=0 are known, and the probability function Pj0 of method mj0 must be derived. Let pj0 be the probability distribution function of method mj0 execution duration, and Zj0 be the set of intervals of inactivity for method mj0 , found during the last value function propagation phase. If we ignore the dependency of [Pik ]K k=0 then the probability Pj0 (t) that the execution of method mj0 starts before time t is given by: Pj0 (t) = (QK k=0 Pik (τ) if ∃(τ, τ ) ∈ Zj0 s.t. t ∈ (τ, τ ) QK k=0 Pik (t) otherwise. Given Pj0 (t), the probability Pj0 (t) that method mj0 will be completed by time t is derived by: Pj0 (t) = Z t 0 Z t 0 ( ∂Pj0 ∂t )(t ) · pj0 (t − t )dt dt (3) Which can be written compactly as ∂Pj0 ∂t = pj0 ∗ ∂P j0 ∂t . We have consequently shown how to propagate the probability functions [Pik ]K k=0 of methods [mik ]K k=0 to obtain the probability function Pj0 of method mj0 . The general, the probability function propagation phase starts with source methods msi for which we know that Psi = 1 since they are enabled by default. We then visit at each time a method m such that all the methods that enable The Sixth Intl.. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 833 m have already been marked as visited. The probability function propagation phase terminates when all the sink methods have been marked as visited. 5.4 The Algorithm Similarly to the OC-DEC-MDP algorithm, VFP starts the policy improvement iterations with the earliest starting time policy π0 . 
Then at each iteration it: (i) Propagates the value functions [vi] |M| i=1 using the old probability functions [Pi] |M| i=1 from previous algorithm iteration and establishes the new sets [Zi] |M| i=1 of method inactivity intervals, and (ii) propagates the new probability functions [Pi ] |M| i=1 using the newly established sets [Zi] |M| i=1. These new functions [Pi ] |M| i=1 are then used in the next iteration of the algorithm. Similarly to OC-DEC-MDP, VFP terminates if a new policy does not improve the policy from the previous algorithm iteration. 5.5 Implementation of Function Operations So far, we have derived the functional operations for value function and probability function propagation without choosing any function representation. In general, our functional operations can handle continuous time, and one has freedom to choose a desired function approximation technique, such as piecewise linear [7] or piecewise constant [9] approximation. However, since one of our goals is to compare VFP with the existing OC-DEC- MDP algorithm, that works only for discrete time, we also discretize time, and choose to approximate value functions and probability functions with piecewise linear (PWL) functions. When the VFP algorithm propagates the value functions and probability functions, it constantly carries out operations represented by equations (1) and (3) and we have already shown that these operations are convolutions of some functions p(t) and h(t). If time is discretized, functions p(t) and h(t) are discrete; however, h(t) can be nicely approximated with a PWL function bh(t), which is exactly what VFP does. As a result, instead of performing O(Δ2 ) multiplications to compute f(t), VFP only needs to perform O(k · Δ) multiplications to compute f(t), where k is the number of linear segments of bh(t) (note, that since h(t) is monotonic, bh(t) is usually close to h(t) with k Δ). Since Pi values are in range [0, 1] and Vi values are in range [0, P mi∈M ri], we suggest to approximate Vi(t) with bVi(t) within error V , and Pi(t) with bPi(t) within error P . We now prove that the overall approximation error accumulated during the value function propagation phase can be expressed in terms of P and V : THEOREM 1. Let C≺ be a set of precedence constraints of a DEC-MDP with Temporal Constraints, and P and V be the probability function and value function approximation errors respectively. The overall error π = maxV supt∈[0,Δ]|V (t) − bV (t)| of value function propagation phase is then bounded by: |C≺| V + ((1 + P )|C≺| − 1) P mi∈M ri . PROOF. In order to establish the bound for π, we first prove by induction on the size of C≺, that the overall error of probability function propagation phase, π(P ) = maxP supt∈[0,Δ]|P(t) − bP(t)| is bounded by (1 + P )|C≺| − 1. Induction base: If n = 1 only two methods are present, and we will perform the operation identified by Equation (3) only once, introducing the error π(P ) = P = (1 + P )|C≺| − 1. Induction step: Suppose, that π(P ) for |C≺| = n is bounded by (1 + P )n − 1, and we want to prove that this statement holds for |C≺| = n. Let G = M, C≺ be a graph with at most n + 1 edges, and G = M, C≺ be a subgraph of G, such that C≺ = C≺ − { mi, mj }, where mj ∈ M is a sink node in G. From the induction assumption we have, that C≺ introduces the probability propagation phase error bounded by (1 + P )n − 1. We now add back the link { mi, mj } to C≺, which affects the error of only one probability function, namely Pj, by a factor of (1 + P ). 
Since probability propagation phase error in C≺ was bounded by (1 + P )n − 1, in C≺ = C≺ ∪ { mi, mj } it can be at most ((1 + P )n − 1)(1 + P ) < (1 + P )n+1 − 1. Thus, if opportunity cost functions are not overestimated, they are bounded by P mi∈M ri and the error of a single value function propagation operation will be at most Z Δ 0 p(t)( V +((1+ P ) |C≺| −1) X mi∈M ri) dt < V +((1+ P ) |C≺| −1) X mi∈M ri. Since the number of value function propagation operations is |C≺|, the total error π of the value function propagation phase is bounded by: |C≺| V + ((1 + P )|C≺| − 1) P mi∈M ri . 6. SPLITTING THE OPPORTUNITY COST FUNCTIONS In section 5 we left out the discussion about how the opportunity cost function Vj0 of method mj0 is split into opportunity cost functions [Vj0,ik ]K k=0 sent back to methods [mik ]K k=0 , that directly enable method mj0 . So far, we have taken the same approach as in [4] and [5] in that the opportunity cost function Vj0,ik that the method mik sends back to the method mj0 is a minimal, non-increasing function that dominates function V j0,ik (t) = (Vj0 · Q k ∈{0,...,K} k =k Pik )(t). We refer to this approach, as heuristic H 1,1 . Before we prove that this heuristic overestimates the opportunity cost, we discuss three problems that might occur when splitting the opportunity cost functions: (i) overestimation, (ii) underestimation and (iii) starvation. Consider the situation in Figure Figure 3: Splitting the value function of method mj0 among methods [mik ]K k=0. (3) when value function propagation for methods [mik ]K k=0 is performed. For each k = 0, ..., K, Equation (1) derives the opportunity cost function Vik from immediate reward rk and opportunity cost function Vj0,ik . If m0 is the only methods that precedes method mk, then V ik,0 = Vik is propagated to method m0, and consequently the opportunity cost for completing the method m0 at time t is equal to PK k=0 Vik,0(t). If this cost is overestimated, then an agent A0 at method m0 will have too much incentive to finish the execution of m0 at time t. Consequently, although the probability P(t) that m0 will be enabled by other agents by time t is low, agent A0 might still find the expected utility of starting the execution of m0 at time t higher than the expected utility of doing it later. As a result, it will choose at time t to start executing method m0 instead of waiting, which can have disastrous consequences. Similarly, if PK k=0 Vik,0(t) is underestimated, agent A0 might loose interest in enabling the future methods [mik ]K k=0 and just focus on 834 The Sixth Intl.. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) maximizing the chance of obtaining its immediate reward r0. Since this chance is increased when agent A0 waits4 , it will consider at time t to be more profitable to wait, instead of starting the execution of m0, which can have similarly disastrous consequences. Finally, if Vj0 is split in a way, that for some k, Vj0,ik = 0, it is the method mik that underestimates the opportunity cost of enabling method mj0 , and the similar reasoning applies. We call such problem a starvation of method mk. That short discussion shows the importance of splitting the opportunity cost function Vj0 in such a way, that overestimation, underestimation, and starvation problem is avoided. We now prove that: THEOREM 2. Heuristic H 1,1 can overestimate the opportunity cost. PROOF. We prove the theorem by showing a case where the overestimation occurs. 
For the mission plan from Figure (3), let H 1,1 split Vj0 into [V j0,ik = Vj0 · Q k ∈{0,...,K} k =k Pik ]K k=0 sent to methods [mik ]K k=0 respectively. Also, assume that methods [mik ]K k=0 provide no local reward and have the same time windows, i.e., rik = 0; ESTik = 0, LETik = Δ for k = 0, ..., K. To prove the overestimation of opportunity cost, we must identify t0 ∈ [0, ..., Δ] such that the opportunity cost PK k=0 Vik (t) for methods [mik ]K k=0 at time t ∈ [0, . . , Δ] is greater than the opportunity cost Vj0 (t). From Equation (1) we have: Vik (t) = Z Δ−t 0 pik (t )Vj0,ik (t + t )dt Summing over all methods [mik ]K k=0 we obtain: KX k=0 Vik (t) = KX k=0 Z Δ−t 0 pik (t )Vj0,ik (t + t )dt (4) ≥ KX k=0 Z Δ−t 0 pik (t )V j0,ik (t + t )dt = KX k=0 Z Δ−t 0 pik (t )Vj0 (t + t ) Y k ∈{0,...,K} k =k Pik (t + t )dt Let c ∈ (0, 1] be a constant and t0 ∈ [0, Δ] be such that ∀t>t0 and ∀k=0,. . ,K we have Q k ∈{0,...,K} k =k Pik (t) > c. Then: KX k=0 Vik (t0) > KX k=0 Z Δ−t0 0 pik (t )Vj0 (t0 + t ) · c dt Because Pjk is non-decreasing. Now, suppose there exists t1 ∈ (t0, Δ], such that PK k=0 R t1−t0 0 pik (t )dt > Vj0 (t0) c·Vj0 (t1) . Since decreasing the upper limit of the integral over positive function also decreases the integral, we have: KX k=0 Vik (t0) > c KX k=0 Z t1 t0 pik (t − t0)Vj0 (t )dt And since Vj0 (t ) is non-increasing we have: KX k=0 Vik (t0) > c · Vj0 (t1) KX k=0 Z t1 t0 pik (t − t0)dt (5) = c · Vj0 (t1) KX k=0 Z t1−t0 0 pik (t )dt > c · Vj0 (t1) Vj(t0) c · Vj(t1) = Vj(t0) 4 Assuming LET0 t Consequently, the opportunity cost PK k=0 Vik (t0) of starting the execution of methods [mik ]K k=0 at time t ∈ [0, . . , Δ] is greater than the opportunity cost Vj0 (t0) which proves the theorem.Figure 4 shows that the overestimation of opportunity cost is easily observable in practice. To remedy the problem of opportunity cost overestimation, we propose three alternative heuristics that split the opportunity cost functions: • Heuristic H 1,0 : Only one method, mik gets the full expected reward for enabling method mj0 , i.e., V j0,ik (t) = 0 for k ∈ {0, ..., K}\{k} and V j0,ik (t) = (Vj0 · Q k ∈{0,...,K} k =k Pik )(t). • Heuristic H 1/2,1/2 : Each method [mik ]K k=0 gets the full opportunity cost for enabling method mj0 divided by the number K of methods enabling the method mj0 , i.e., V j0,ik (t) = 1 K (Vj0 · Q k ∈{0,...,K} k =k Pik )(t) for k ∈ {0, ..., K}. • Heuristic bH 1,1 : This is a normalized version of the H 1,1 heuristic in that each method [mik ]K k=0 initially gets the full opportunity cost for enabling the method mj0 . To avoid opportunity cost overestimation, we normalize the split functions when their sum exceeds the opportunity cost function to be split. Formally: V j0,ik (t) = 8 >< >: V H 1,1 j0,ik (t) if PK k=0 V H 1,1 j0,ik (t) < Vj0 (t) Vj0 (t) V H 1,1 j0,ik (t) PK k=0 V H 1,1 j0,ik (t) otherwise Where V H 1,1 j0,ik (t) = (Vj0 · Q k ∈{0,...,K} k =k Pjk )(t). For the new heuristics, we now prove, that: THEOREM 3. Heuristics H 1,0 , H 1/2,1/2 and bH 1,1 do not overestimate the opportunity cost. PROOF. When heuristic H 1,0 is used to split the opportunity cost function Vj0 , only one method (e.g. mik ) gets the opportunity cost for enabling method mj0 . Thus: KX k =0 Vik (t) = Z Δ−t 0 pik (t )Vj0,ik (t + t )dt (6) And since Vj0 is non-increasing ≤ Z Δ−t 0 pik (t )Vj0 (t + t ) · Y k ∈{0,...,K} k =k Pjk (t + t )dt ≤ Z Δ−t 0 pik (t )Vj0 (t + t )dt ≤ Vj0 (t) The last inequality is also a consequence of the fact that Vj0 is non-increasing. 
For heuristic H 1/2,1/2 we similarly have: KX k=0 Vik (t) ≤ KX k=0 Z Δ−t 0 pik (t ) 1 K Vj0 (t + t ) Y k ∈{0,...,K} k =k Pjk (t + t )dt ≤ 1 K KX k=0 Z Δ−t 0 pik (t )Vj0 (t + t )dt ≤ 1 K · K · Vj0 (t) = Vj0 (t). For heuristic bH 1,1 , the opportunity cost function Vj0 is by definition split in such manner, that PK k=0 Vik (t) ≤ Vj0 (t). Consequently, we have proved, that our new heuristics H 1,0 , H 1/2,1/2 and bH 1,1 avoid the overestimation of the opportunity cost. The Sixth Intl.. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 835 The reason why we have introduced all three new heuristics is the following: Since H 1,1 overestimates the opportunity cost, one has to choose which method mik will receive the reward from enabling the method mj0 , which is exactly what the heuristic H 1,0 does. However, heuristic H 1,0 leaves K − 1 methods that precede the method mj0 without any reward which leads to starvation. Starvation can be avoided if opportunity cost functions are split using heuristic H 1/2,1/2 , that provides reward to all enabling methods. However, the sum of split opportunity cost functions for the H 1/2,1/2 heuristic can be smaller than the non-zero split opportunity cost function for the H 1,0 heuristic, which is clearly undesirable. Such situation (Figure 4, heuristic H 1,0 ) occurs because the mean f+g 2 of two functions f, g is not smaller than f nor g only if f = g. This is why we have proposed the bH 1,1 heuristic, which by definition avoids the overestimation, underestimation and starvation problems. 7. EXPERIMENTAL EVALUATION Since the VFP algorithm that we introduced provides two orthogonal improvements over the OC-DEC-MDP algorithm, the experimental evaluation we performed consisted of two parts: In part 1, we tested empirically the quality of solutions that an locally optimal solver (either OC-DEC-MDP or VFP) finds, given it uses different opportunity cost function splitting heuristic, and in part 2, we compared the runtimes of the VFP and OC-DEC- MDP algorithms for a variety of mission plan configurations. Part 1: We first ran the VFP algorithm on a generic mission plan configuration from Figure 3 where only methods mj0 , mi1 , mi2 and m0 were present. Time windows of all methods were set to 400, duration pj0 of method mj0 was uniform, i.e., pj0 (t) = 1 400 and durations pi1 , pi2 of methods mi1 , mi2 were normal distributions, i.e., pi1 = N(μ = 250, σ = 20), and pi2 = N(μ = 200, σ = 100). We assumed that only method mj0 provided reward, i.e. rj0 = 10 was the reward for finishing the execution of method mj0 before time t = 400. We show our results in Figure (4) where the x-axis of each of the graphs represents time whereas the y-axis represents the opportunity cost. The first graph confirms, that when the opportunity cost function Vj0 was split into opportunity cost functions Vi1 and Vi2 using the H 1,1 heuristic, the function Vi1 +Vi2 was not always below the Vj0 function. In particular, Vi1 (280) + Vi2 (280) exceeded Vj0 (280) by 69%. When heuristics H 1,0 , H 1/2,1/2 and bH 1,1 were used (graphs 2,3 and 4), the function Vi1 + Vi2 was always below Vj0 . We then shifted our attention to the civilian rescue domain introduced in Figure 1 for which we sampled all action execution durations from the normal distribution N = (μ = 5, σ = 2)). To obtain the baseline for the heuristic performance, we implemented a globally optimal solver, that found a true expected total reward for this domain (Figure (6a)). 
We then shifted our attention to the civilian rescue domain introduced in Figure 1, for which we sampled all action execution durations from the normal distribution $N(\mu=5,\sigma=2)$. To obtain the baseline for the heuristic performance, we implemented a globally optimal solver that found the true expected total reward for this domain (Figure 6a). We then compared this reward with the expected total reward found by a locally optimal solver guided by each of the discussed heuristics. Figure 6a, which plots on the y-axis the expected total reward of a policy, complements our previous results: the $H^{(1,1)}$ heuristic overestimated the expected total reward by 280%, whereas the other heuristics were able to guide the locally optimal solver close to the true expected total reward.

Part 2: We then chose $H^{(1,1)}$ to split the opportunity cost functions and conducted a series of experiments aimed at testing the scalability of VFP for various mission plan configurations, using the performance of the OC-DEC-MDP algorithm as a benchmark. We began the VFP scalability tests with the configuration from Figure 5a associated with the civilian rescue domain, for which method execution durations were extended to normal distributions $N(\mu=30,\sigma=5)$ and the deadline was extended to $\Delta=200$. We decided to test the runtime of the VFP algorithm running with three different levels of accuracy, i.e., different approximation parameters $\epsilon_P$ and $\epsilon_V$ were chosen such that the cumulative error of the solution found by VFP stayed within 1%, 5% and 10% of the solution found by the OC-DEC-MDP algorithm. We then ran both algorithms for a total of 100 policy improvement iterations.

Figure 5: Mission plan configurations: (a) civilian rescue domain, (b) chain of n methods, (c) tree of n methods with branching factor = 3 and (d) square mesh of n methods.

Figure 6: VFP performance in the civilian rescue domain.

Figure 6b shows the performance of the VFP algorithm in the civilian rescue domain (the y-axis shows the runtime in milliseconds). As we see, for this small domain, VFP runs 15% faster than OC-DEC-MDP when computing the policy with an error of less than 1%. For comparison, the globally optimal solver did not terminate within the first three hours of its runtime, which shows the strength of opportunistic solvers like OC-DEC-MDP.

We next decided to test how VFP performs in a more difficult domain, i.e., with methods forming a long chain (Figure 5b). We tested chains of 10, 20 and 30 methods, increasing at the same time the method time windows to 350, 700 and 1050 to ensure that later methods can be reached. We show the results in Figure 7a, where we vary on the x-axis the number of methods and plot on the y-axis the algorithm runtime (notice the logarithmic scale). As we observe, scaling up the domain reveals the high performance of VFP: within 1% error, it runs up to 6 times faster than OC-DEC-MDP.

We then tested how VFP scales up given that the methods are arranged into a tree (Figure 5c). In particular, we considered trees with a branching factor of 3 and depths of 2, 3 and 4, increasing at the same time the time horizon from 200 to 300, and then to 400. We show the results in Figure 7b. Although the speedups are smaller than in the case of a chain, the VFP algorithm still runs up to 4 times faster than OC-DEC-MDP when computing the policy with an error of less than 1%.

Figure 4: Visualization of heuristics for opportunity cost splitting.

Figure 7: Scalability experiments for OC-DEC-MDP and VFP for different network configurations.

We finally tested how VFP handles domains with methods arranged into an $n\times n$ mesh, i.e., $C_{\prec}=\{\langle m_{i,j}, m_{k,j+1}\rangle\}$ for $i=1,\dots,n$; $k=1,\dots,n$; $j=1,\dots,n-1$. In particular, we consider meshes of 3×3, 4×4, and 5×5 methods.
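Before turning to the mesh results, note that the benchmark topologies of Figure 5 are easy to generate programmatically. The sketch below (our own illustration; method identifiers are encoded as integers or index pairs, which is not the paper's notation) builds the precedence sets $C_\prec$ for the chain and for the $n\times n$ mesh exactly as defined above.

```python
def chain_constraints(n):
    """Chain of n methods (Figure 5b): m_1 -> m_2 -> ... -> m_n."""
    return [(i, i + 1) for i in range(1, n)]

def mesh_constraints(n):
    """n x n mesh (Figure 5d): every method in column j enables every method in
    column j+1, i.e. C_prec = {<m_{i,j}, m_{k,j+1}>} for i, k = 1..n and j = 1..n-1."""
    return [((i, j), (k, j + 1))
            for j in range(1, n)          # enabling column
            for i in range(1, n + 1)      # row of the enabling method
            for k in range(1, n + 1)]     # row of the enabled method

# The meshes used in Figure 7c:
for n in (3, 4, 5):
    print(f"{n}x{n} mesh: {n * n} methods, {len(mesh_constraints(n))} precedence constraints")
```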
For such configurations we have to greatly increase the time horizon, since the probabilities of enabling the final methods by a particular time decrease exponentially. We therefore vary the time horizons from 3000 to 4000, and then to 5000. We show the results in Figure 7c where, especially for larger meshes, the VFP algorithm runs up to one order of magnitude faster than OC-DEC-MDP while finding a policy that is within less than 1% of the policy found by OC-DEC-MDP.

8. CONCLUSIONS

Although the Decentralized Markov Decision Process (DEC-MDP) has been very popular for modeling agent-coordination problems, it is very difficult to solve, especially for real-world domains. In this paper, we improved a state-of-the-art heuristic solution method for DEC-MDPs, called OC-DEC-MDP, that has recently been shown to scale up to large DEC-MDPs. Our heuristic solution method, called Value Function Propagation (VFP), provided two orthogonal improvements over OC-DEC-MDP: (i) it sped up OC-DEC-MDP by an order of magnitude by maintaining and manipulating a value function for each method rather than a separate value for each pair of method and time interval, and (ii) it achieved better solution qualities than OC-DEC-MDP because it corrected the overestimation of the opportunity cost in OC-DEC-MDP.

In terms of related work, we have extensively discussed the OC-DEC-MDP algorithm [4]. Furthermore, as discussed in Section 4, there are globally optimal algorithms for solving DEC-MDPs with temporal constraints [1] [11]. Unfortunately, they fail to scale up to large-scale domains at present. Beyond OC-DEC-MDP, there are other locally optimal algorithms for DEC-MDPs and DEC-POMDPs [8] [12] [13], yet they have traditionally not dealt with uncertain execution times and temporal constraints. Finally, value function techniques have been studied in the context of single-agent MDPs [7] [9]. However, similarly to [6], they fail to address the lack of global state knowledge, which is a fundamental issue in decentralized planning.

Acknowledgments

This material is based upon work supported by the DARPA/IPTO COORDINATORS program and the Air Force Research Laboratory under Contract No. FA875005C0030. The authors also want to thank Sven Koenig and the anonymous reviewers for their valuable comments.

9. REFERENCES
[1] R. Becker, V. Lesser, and S. Zilberstein. Decentralized MDPs with Event-Driven Interactions. In AAMAS, pages 302-309, 2004.
[2] R. Becker, S. Zilberstein, V. Lesser, and C. V. Goldman. Transition-Independent Decentralized Markov Decision Processes. In AAMAS, pages 41-48, 2003.
[3] D. S. Bernstein, S. Zilberstein, and N. Immerman. The complexity of decentralized control of Markov decision processes. In UAI, pages 32-37, 2000.
[4] A. Beynier and A. Mouaddib. A polynomial algorithm for decentralized Markov decision processes with temporal constraints. In AAMAS, pages 963-969, 2005.
[5] A. Beynier and A. Mouaddib. An iterative algorithm for solving constrained decentralized Markov decision processes. In AAAI, pages 1089-1094, 2006.
[6] C. Boutilier. Sequential optimality and coordination in multiagent systems. In IJCAI, pages 478-485, 1999.
[7] J. Boyan and M. Littman. Exact solutions to time-dependent MDPs. In NIPS, pages 1026-1032, 2000.
[8] C. Goldman and S. Zilberstein. Optimizing information exchange in cooperative multi-agent systems, 2003.
[9] L. Li and M. Littman. Lazy approximation for solving continuous finite-horizon MDPs. In AAAI, pages 1175-1180, 2005.
[10] Y. Liu and S. Koenig. Risk-sensitive planning with one-switch utility functions: Value iteration. In AAAI, pages 993-999, 2005.
[11] D. Musliner, E. Durfee, J. Wu, D. Dolgov, R. Goldman, and M. Boddy. Coordinated plan management using multiagent MDPs. In AAAI Spring Symposium, 2006.
[12] R. Nair, M. Tambe, M. Yokoo, D. Pynadath, and S. Marsella. Taming decentralized POMDPs: Towards efficient policy computation for multiagent settings. In IJCAI, pages 705-711, 2003.
[13] R. Nair, P. Varakantham, M. Tambe, and M. Yokoo. Networked distributed POMDPs: A synergy of distributed constraint optimization and POMDPs. In IJCAI, pages 1758-1760, 2005.
On Opportunistic Techniques for Solving Decentralized Markov Decision Processes with Temporal Constraints ABSTRACT Decentralized Markov Decision Processes (DEC-MDPs) are a popular model of agent-coordination problems in domains with uncertainty and time constraints but very difficult to solve. In this paper, we improve a state-of-the-art heuristic solution method for DEC-MDPs, called OC-DEC-MDP, that has recently been shown to scale up to larger DEC-MDPs. Our heuristic solution method, called Value Function Propagation (VFP), combines two orthogonal improvements of OC-DEC-MDP. First, it speeds up OC-DECMDP by an order of magnitude by maintaining and manipulating a value function for each state (as a function of time) rather than a separate value for each pair of sate and time interval. Furthermore, it achieves better solution qualities than OC-DEC-MDP because, as our analytical results show, it does not overestimate the expected total reward like OC-DEC - MDP. We test both improvements independently in a crisis-management domain as well as for other types of domains. Our experimental results demonstrate a significant speedup of VFP over OC-DEC-MDP as well as higher solution qualities in a variety of situations. 1. INTRODUCTION The development of algorithms for effective coordination of multiple agents acting as a team in uncertain and time critical domains has recently become a very active research field with potential applications ranging from coordination of agents during a hostage rescue mission [11] to the coordination of Autonomous Mars Explo ration Rovers [2]. Because of the uncertain and dynamic characteristics of such domains, decision-theoretic models have received a lot of attention in recent years, mainly thanks to their expressiveness and the ability to reason about the utility of actions over time. Key decision-theoretic models that have become popular in the literature include Decentralized Markov Decision Processes (DECMDPs) and Decentralized, Partially Observable Markov Decision Processes (DEC-POMDPs). Unfortunately, solving these models optimally has been proven to be NEXP-complete [3], hence more tractable subclasses of these models have been the subject of intensive research. In particular, Network Distributed POMDP [13] which assume that not all the agents interact with each other, Transition Independent DEC-MDP [2] which assume that transition function is decomposable into local transition functions or DEC-MDP with Event Driven Interactions [1] which assume that interactions between agents happen at fixed time points constitute good examples of such subclasses. Although globally optimal algorithms for these subclasses have demonstrated promising results, domains on which these algorithms run are still small and time horizons are limited to only a few time ticks. To remedy that, locally optimal algorithms have been proposed [12] [4] [5]. In particular, Opportunity Cost DEC-MDP [4] [5], referred to as OC-DEC-MDP, is particularly notable, as it has been shown to scale up to domains with hundreds of tasks and double digit time horizons. Additionally, OC-DEC-MDP is unique in its ability to address both temporal constraints and uncertain method execution durations, which is an important factor for real-world domains. 
OC-DEC-MDP is able to scale up to such domains mainly because instead of searching for the globally optimal solution, it carries out a series of policy iterations; in each iteration it performs a value iteration that reuses the data computed during the previous policy iteration. However, OC-DEC-MDP is still slow, especially as the time horizon and the number of methods approach large values. The reason for high runtimes of OC-DEC-MDP for such domains is a consequence of its huge state space, i.e., OC-DEC-MDP introduces a separate state for each possible pair of method and method execution interval. Furthermore, OC-DEC-MDP overestimates the reward that a method expects to receive for enabling the execution of future methods. This reward, also referred to as the opportunity cost, plays a crucial role in agent decision making, and as we show later, its overestimation leads to highly suboptimal policies. In this context, we present VFP (= Value Function P ropagation), an efficient solution technique for the DEC-MDP model with temporal constraints and uncertain method execution durations, that builds on the success of OC-DEC-MDP. VFP introduces our two orthogonal ideas: First, similarly to [7] [9] and [10], we maintain and manipulate a value function over time for each method rather than a separate value for each pair of method and time interval. Such representation allows us to group the time points for which the value function changes at the same rate (= its slope is constant), which results in fast, functional propagation of value functions. Second, we prove (both theoretically and empirically) that OC-DEC - MDP overestimates the opportunity cost, and to remedy that, we introduce a set of heuristics, that correct the opportunity cost overestimation problem. This paper is organized as follows: In section 2 we motivate this research by introducing a civilian rescue domain where a team of fire - brigades must coordinate in order to rescue civilians trapped in a burning building. In section 3 we provide a detailed description of our DEC-MDP model with Temporal Constraints and in section 4 we discuss how one could solve the problems encoded in our model using globally optimal and locally optimal solvers. Sections 5 and 6 discuss the two orthogonal improvements to the state-of-the-art OC-DEC-MDP algorithm that our VFP algorithm implements. Finally, in section 7 we demonstrate empirically the impact of our two orthogonal improvements, i.e., we show that: (i) The new heuristics correct the opportunity cost overestimation problem leading to higher quality policies, and (ii) By allowing for a systematic tradeoff of solution quality for time, the VFP algorithm runs much faster than the OC-DEC-MDP algorithm 2. MOTIVATING EXAMPLE 3. MODEL DESCRIPTION The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 831 4. SOLUTION TECHNIQUES 4.1 Optimal Algorithms 4.2 Locally Optimal Algorithms 5. VALUE FUNCTION PROPAGATION (VFP) 832 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 5.1 Value Function Propagation Phase 0 otherwise 5.2 Reading the Policy 5.3 Probability Function Propagation Phase The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 833 5.4 The Algorithm 5.5 Implementation of Function Operations 6. SPLITTING THE OPPORTUNITY COST FUNCTIONS 834 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) The Sixth Intl. . Joint Conf. 
on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 835 7. EXPERIMENTAL EVALUATION 836 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 8. CONCLUSIONS Decentralized Markov Decision Process (DEC-MDP) has been very popular for modeling of agent-coordination problems, it is very difficult to solve, especially for the real-world domains. In this paper, we improved a state-of-the-art heuristic solution method for DEC-MDPs, called OC-DEC-MDP, that has recently been shown to scale up to large DEC-MDPs. Our heuristic solution method, called Value Function Propagation (VFP), provided two orthogonal improvements of OC-DEC-MDP: (i) It speeded up OC-DECMDP by an order of magnitude by maintaining and manipulating a value function for each method rather than a separate value for each pair of method and time interval, and (ii) it achieved better solution qualities than OC-DEC-MDP because it corrected the overestimation of the opportunity cost of OC-DEC-MDP. In terms of related work, we have extensively discussed the OCDEC-MDP algorithm [4]. Furthermore, as discussed in Section 4, there are globally optimal algorithms for solving DEC-MDPs with temporal constraints [1] [11]. Unfortunately, they fail to scaleup to large-scale domains at present time. Beyond OC-DEC-MDP, there are other locally optimal algorithms for DEC-MDPs and DECPOMDPs [8] [12], [13], yet, they have traditionally not dealt with uncertain execution times and temporal constraints. Finally, value function techniques have been studied in context of single agent MDPs [7] [9]. However, similarly to [6], they fail to address the lack of global state knowledge, which is a fundamental issue in decentralized planning.
On Opportunistic Techniques for Solving Decentralized Markov Decision Processes with Temporal Constraints ABSTRACT Decentralized Markov Decision Processes (DEC-MDPs) are a popular model of agent-coordination problems in domains with uncertainty and time constraints but very difficult to solve. In this paper, we improve a state-of-the-art heuristic solution method for DEC-MDPs, called OC-DEC-MDP, that has recently been shown to scale up to larger DEC-MDPs. Our heuristic solution method, called Value Function Propagation (VFP), combines two orthogonal improvements of OC-DEC-MDP. First, it speeds up OC-DECMDP by an order of magnitude by maintaining and manipulating a value function for each state (as a function of time) rather than a separate value for each pair of sate and time interval. Furthermore, it achieves better solution qualities than OC-DEC-MDP because, as our analytical results show, it does not overestimate the expected total reward like OC-DEC - MDP. We test both improvements independently in a crisis-management domain as well as for other types of domains. Our experimental results demonstrate a significant speedup of VFP over OC-DEC-MDP as well as higher solution qualities in a variety of situations. 1. INTRODUCTION ration Rovers [2]. Because of the uncertain and dynamic characteristics of such domains, decision-theoretic models have received a lot of attention in recent years, mainly thanks to their expressiveness and the ability to reason about the utility of actions over time. Key decision-theoretic models that have become popular in the literature include Decentralized Markov Decision Processes (DECMDPs) and Decentralized, Partially Observable Markov Decision Processes (DEC-POMDPs). Although globally optimal algorithms for these subclasses have demonstrated promising results, domains on which these algorithms run are still small and time horizons are limited to only a few time ticks. To remedy that, locally optimal algorithms have been proposed [12] [4] [5]. In particular, Opportunity Cost DEC-MDP [4] [5], referred to as OC-DEC-MDP, is particularly notable, as it has been shown to scale up to domains with hundreds of tasks and double digit time horizons. Additionally, OC-DEC-MDP is unique in its ability to address both temporal constraints and uncertain method execution durations, which is an important factor for real-world domains. OC-DEC-MDP is able to scale up to such domains mainly because instead of searching for the globally optimal solution, it carries out a series of policy iterations; in each iteration it performs a value iteration that reuses the data computed during the previous policy iteration. However, OC-DEC-MDP is still slow, especially as the time horizon and the number of methods approach large values. The reason for high runtimes of OC-DEC-MDP for such domains is a consequence of its huge state space, i.e., OC-DEC-MDP introduces a separate state for each possible pair of method and method execution interval. Furthermore, OC-DEC-MDP overestimates the reward that a method expects to receive for enabling the execution of future methods. This reward, also referred to as the opportunity cost, plays a crucial role in agent decision making, and as we show later, its overestimation leads to highly suboptimal policies. In this context, we present VFP (= Value Function P ropagation), an efficient solution technique for the DEC-MDP model with temporal constraints and uncertain method execution durations, that builds on the success of OC-DEC-MDP. 
VFP introduces our two orthogonal ideas: First, similarly to [7] [9] and [10], we maintain and manipulate a value function over time for each method rather than a separate value for each pair of method and time interval. Such a representation allows us to group the time points for which the value function changes at the same rate (i.e., its slope is constant), which results in fast, functional propagation of value functions. Second, we prove (both theoretically and empirically) that OC-DEC-MDP overestimates the opportunity cost, and to remedy that, we introduce a set of heuristics that correct the opportunity cost overestimation problem. In section 3 we provide a detailed description of our DEC-MDP model with Temporal Constraints, and in section 4 we discuss how one could solve the problems encoded in our model using globally optimal and locally optimal solvers. Sections 5 and 6 discuss the two orthogonal improvements to the state-of-the-art OC-DEC-MDP algorithm that our VFP algorithm implements. Finally, in section 7 we demonstrate empirically the impact of our two orthogonal improvements, i.e., we show that: (i) the new heuristics correct the opportunity cost overestimation problem, leading to higher quality policies, and (ii) by allowing for a systematic tradeoff of solution quality for time, the VFP algorithm runs much faster than the OC-DEC-MDP algorithm. 8. CONCLUSIONS Although the Decentralized Markov Decision Process (DEC-MDP) has been very popular for modeling agent-coordination problems, it is very difficult to solve, especially for real-world domains. In this paper, we improved a state-of-the-art heuristic solution method for DEC-MDPs, called OC-DEC-MDP, that has recently been shown to scale up to large DEC-MDPs. Our heuristic solution method, called Value Function Propagation (VFP), provided two orthogonal improvements of OC-DEC-MDP: (i) It speeded up OC-DEC-MDP by an order of magnitude by maintaining and manipulating a value function for each method rather than a separate value for each pair of method and time interval, and (ii) it achieved better solution qualities than OC-DEC-MDP because it corrected the overestimation of the opportunity cost of OC-DEC-MDP. In terms of related work, we have extensively discussed the OC-DEC-MDP algorithm [4]. Furthermore, as discussed in Section 4, there are globally optimal algorithms for solving DEC-MDPs with temporal constraints [1] [11]. Unfortunately, they fail to scale up to large-scale domains at present time. Beyond OC-DEC-MDP, there are other locally optimal algorithms for DEC-MDPs and DEC-POMDPs [8] [12] [13], yet they have traditionally not dealt with uncertain execution times and temporal constraints. Finally, value function techniques have been studied in the context of single-agent MDPs [7] [9].
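The speedup described above comes from representing each value as a function of time whose slope is piecewise constant, so that runs of time points with the same slope collapse into single segments. The following minimal sketch is not the authors' implementation; it only illustrates, under the assumption that a value function is stored as a list of breakpoints of a piecewise-linear function, how such grouping could work. The class and method names (PiecewiseLinearValue, compress) are hypothetical.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class PiecewiseLinearValue:
    """Value of one method as a function of time, stored as breakpoints (t, v) sorted by t."""
    points: List[Tuple[float, float]]

    def compress(self, tol: float = 1e-9) -> "PiecewiseLinearValue":
        """Merge consecutive time points over which the value changes at the same rate.

        Runs of time points with a constant slope collapse into one segment, so
        later propagation work grows with the number of distinct slopes rather
        than with the number of time intervals.
        """
        pts = self.points
        if len(pts) <= 2:
            return PiecewiseLinearValue(list(pts))
        kept = [pts[0]]
        for i in range(1, len(pts) - 1):
            (t0, v0), (t1, v1), (t2, v2) = kept[-1], pts[i], pts[i + 1]
            slope_left = (v1 - v0) / (t1 - t0)
            slope_right = (v2 - v1) / (t2 - t1)
            if abs(slope_left - slope_right) > tol:  # slope changes: keep this breakpoint
                kept.append(pts[i])
        kept.append(pts[-1])
        return PiecewiseLinearValue(kept)


if __name__ == "__main__":
    # Six time ticks but only two distinct slopes -> three breakpoints after compression.
    v = PiecewiseLinearValue([(0, 0.0), (1, 1.0), (2, 2.0), (3, 3.0), (4, 3.5), (5, 4.0)])
    print(v.compress().points)  # [(0, 0.0), (3, 3.0), (5, 4.0)]
```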
I-55
Searching for Joint Gains in Automated Negotiations Based on Multi-criteria Decision Making Theory
It is well established by conflict theorists and others that successful negotiation should incorporate creating value as well as claiming value. Joint improvements that bring benefits to all parties can be realised by (i) identifying attributes that are not of direct conflict between the parties, (ii) tradeoffs on attributes that are valued differently by different parties, and (iii) searching for values within attributes that could bring more gains to one party while not incurring too much loss on the other party. In this paper we propose an approach for maximising joint gains in automated negotiations by formulating the negotiation problem as a multi-criteria decision making problem and taking advantage of several optimisation techniques introduced by operations researchers and conflict theorists. We use a mediator to protect the negotiating parties from unnecessary disclosure of information to their opponent, while also allowing an objective calculation of maximum joint gains. We separate out attributes that take a finite set of values (simple attributes) from those with continuous values, and we show that for simple attributes, the mediator can determine the Pareto-optimal values. In addition we show that if none of the simple attributes strongly dominates the other simple attributes, then truth telling is an equilibrium strategy for negotiators during the optimisation of simple attributes. We also describe an approach for improving joint gains on non-simple attributes, by moving the parties in a series of steps, towards the Pareto-optimal frontier.
[ "autom negoti", "negoti", "creat valu", "claim valu", "mediat", "ineffici compromis", "dilemma", "concess", "deadlock situat", "uncertainti", "incomplet inform", "mcdm", "integr negoti", "multi-criterion decis make" ]
[ "P", "P", "P", "P", "P", "U", "U", "U", "U", "U", "M", "U", "M", "M" ]
Searching for Joint Gains in Automated Negotiations Based on Multi-criteria Decision Making Theory Quoc Bao Vo School of Computer Science and IT RMIT University, Australia vqbao@cs.rmit.edu.au Lin Padgham School of Computer Science and IT RMIT University, Australia linpa@cs.rmit.edu.au ABSTRACT It is well established by conflict theorists and others that successful negotiation should incorporate creating value as well as claiming value. Joint improvements that bring benefits to all parties can be realised by (i) identifying attributes that are not of direct conflict between the parties, (ii) tradeoffs on attributes that are valued differently by different parties, and (iii) searching for values within attributes that could bring more gains to one party while not incurring too much loss on the other party. In this paper we propose an approach for maximising joint gains in automated negotiations by formulating the negotiation problem as a multi-criteria decision making problem and taking advantage of several optimisation techniques introduced by operations researchers and conflict theorists. We use a mediator to protect the negotiating parties from unnecessary disclosure of information to their opponent, while also allowing an objective calculation of maximum joint gains. We separate out attributes that take a finite set of values (simple attributes) from those with continuous values, and we show that for simple attributes, the mediator can determine the Pareto-optimal values. In addition we show that if none of the simple attributes strongly dominates the other simple attributes, then truth telling is an equilibrium strategy for negotiators during the optimisation of simple attributes. We also describe an approach for improving joint gains on non-simple attributes, by moving the parties in a series of steps, towards the Pareto-optimal frontier. Categories and Subject Descriptors I.2.11 [Distributed Artificial Intelligence]: Multiagent Systems; K.4.4 [Computers and Society]: Electronic Commerce General Terms Algorithms, Design 1. INTRODUCTION Given that negotiation is perhaps one of the oldest activities in the history of human communication, it is perhaps surprising that conducted experiments on negotiations have shown that negotiators more often than not reach inefficient compromises [1, 21]. Raiffa [17] and Sebenius [20] provide analyses on the negotiators' failure to achieve efficient agreements in practice and their unwillingness to disclose private information due to strategic reasons. According to conflict theorists Lax and Sebenius [13], most negotiation actually involves both integrative and distributive bargaining, which they refer to as creating value and claiming value. They argue that negotiation necessarily includes both cooperative and competitive elements, and that these elements exist in tension. Negotiators face a dilemma in deciding whether to pursue a cooperative or a competitive strategy at a particular time during a negotiation. They refer to this problem as the Negotiator's Dilemma. We argue that the Negotiator's Dilemma is essentially information-based, due to the private information held by the agents. Such private information contains both the information that implies the agent's bottom lines (or, her walk-away positions) and the information that enforces her bargaining strength.
For instance, when bargaining to sell a house to a potential buyer, the seller would try to hide her actual reserve price as much as possible, for she hopes to reach an agreement at a much higher price than her reserve price. On the other hand, the outside options available to her (e.g. other buyers who have expressed genuine interest with fairly good offers) constitute information that improves her bargaining strength, which she would like to convey to her opponent. But at the same time, her opponent is well aware of the fact that it is her incentive to boost her bargaining strength and thus will not accept every piece of information she sends out unless it is substantiated by evidence. Coming back to the Negotiator's Dilemma, it is not always possible to separate the integrative bargaining process from the distributive bargaining process. In fact, more often than not, the two processes interplay with each other, making information manipulation become part of the integrative bargaining process. This is because a negotiator could use the information about his opponent's interests against her during the distributive negotiation process. That is, a negotiator may refuse to concede on an important conflicting issue by claiming that he has made a major concession (on another issue) to meet his opponent's interests even though the concession he made could be insignificant to him. For instance, few buyers would start bargaining with a dealer over a notebook computer by declaring that they are most interested in an extended warranty for the item and are therefore prepared to pay a high price to get it. Negotiation Support Systems (NSSs) and negotiating software agents (NSAs) have been introduced either to assist humans in making decisions or to enable automated negotiation to allow computer processes to engage in meaningful negotiation to reach agreements (see, for instance, [14, 15, 19, 6, 5]). However, because of the Negotiator's Dilemma and given even bargaining power and incomplete information, the following two undesirable situations often arise: (i) negotiators reach inefficient compromises, or (ii) negotiators engage in a deadlock situation in which both negotiators refuse to act on incomplete information and at the same time do not want to disclose more information. In this paper, we argue for the role of a mediator to resolve the above two issues. The mediator thus plays two roles in a negotiation: (i) to encourage cooperative behaviour among the negotiators, and (ii) to absorb the information disclosure by the negotiators to prevent negotiators from using uncertainty and private information as a strategic device. To take advantage of existing results in the negotiation analysis and operations research (OR) literatures [18], we employ multi-criteria decision making (MCDM) theory to allow the negotiation problem to be represented and analysed. Section 2 provides background on MCDM theory and the negotiation framework. Section 3 formulates the problem. In Section 4, we discuss our approach to integrative negotiation. Section 5 discusses future work and offers some concluding remarks. 2. BACKGROUND 2.1 Multi-criteria decision making theory Let A denote the set of feasible alternatives available to a decision maker M. As an act, or decision, a in A may involve multiple aspects, we usually describe an alternative a with a set of attributes j (j = 1, ..., m).
(Attributes are also referred to as issues, or decision variables.) A typical decision maker also has several objectives X1, ..., Xk. We assume that Xi (i = 1, ..., k) maps the alternatives to real numbers. Thus, a tuple (x1, ..., xk) = (X1(a), ..., Xk(a)) denotes the consequence of the act a to the decision maker M. By definition, objectives are statements that delineate the desires of a decision maker. Thus, M wishes to maximise his objectives. However, as discussed thoroughly by Keeney and Raiffa [8], it is quite likely that a decision maker's objectives will conflict with each other in that the improved achievement with one objective can only be accomplished at the expense of another. For instance, most businesses and public services have objectives like minimise cost and maximise the quality of services. Since better services can often only be attained for a price, these objectives conflict. Due to the conflicting nature of a decision maker's objectives, M usually has to settle at a compromise solution. That is, he may have to choose an act a ∈ A that does not optimise every objective. This is the topic of multi-criteria decision making theory. Part of the solution to this problem is that M has to try to identify the Pareto frontier in the consequence space {(X1(a), ..., Xk(a))} for a ∈ A. DEFINITION 1. (Dominance) Let x = (x1, ..., xk) and x′ = (x′1, ..., x′k) be two consequences. x dominates x′ iff xi ≥ x′i for all i, and the inequality is strict for at least one i. The Pareto frontier in a consequence space then consists of all consequences that are not dominated by any other consequence. This is illustrated in Fig. 1, in which an alternative consists of two attributes d1 and d2 and the decision maker tries to maximise the two objectives X1 and X2. A decision a ∈ A whose consequence does not lie on the Pareto frontier is inefficient. (Figure 1: The Pareto frontier, showing the alternative space, the consequence space, the Pareto frontier, and the optimal consequence.) While the Pareto frontier allows M to avoid taking inefficient decisions, M still has to decide which of the efficient consequences on the Pareto frontier is most preferred by him. MCDM theorists introduce a mechanism to allow the objective components of consequences to be normalised to the payoff valuations for the objectives. Consequences can then be ordered: if the gains in satisfaction brought about by C1 (in comparison to C2) equal the losses in satisfaction brought about by C1 (in comparison to C2), then the two consequences C1 and C2 are considered indifferent. M can now construct the set of indifference curves (in fact, given the k-dimensional space, these should be called indifference surfaces, but we will not bog down to that level of detail) in the consequence space (the dashed curves in Fig. 1). The most preferred indifference curve that intersects with the Pareto frontier is in focus: its intersection with the Pareto frontier is the sought-after consequence (i.e., the optimal consequence in Fig. 1). 2.2 A negotiation framework A multi-agent negotiation framework consists of: 1. A set of two negotiating agents N = {1, 2}. 2. A set of attributes Att = {α1, ..., αm} characterising the issues the agents are negotiating over. Each attribute α can take a value from the set Valα; 3. A set of alternative outcomes O. An outcome o ∈ O is represented by an assignment of values to the corresponding attributes in Att. 4. Agents' utility: Based on the theory of multiple-criteria decision making [8], we define the agents' utility as follows: • Objectives: Agent i has a set of ni objectives, or interests, denoted by j (j = 1, ..., ni).
To measure how much an outcome o fulfills an objective j to an agent i, we use objective functions: for each agent i, we define i's interests using the objective vector function fi = [fij] : O → Rni. • Value functions: Instead of directly evaluating an outcome o, agent i looks at how much his objectives are fulfilled and will make a valuation based on these more basic criteria. Thus, for each agent i, there is a value function σi : Rni → R. In particular, Raiffa [17] shows how to systematically construct an additive value function for each party involved in a negotiation. • Utility: Now, given an outcome o ∈ O, an agent i is able to determine its value, i.e., σi(fi(o)). However, a negotiation infrastructure is usually required to facilitate negotiation. This might involve other mechanisms and factors/parties, e.g., a mediator, a legal institution, participation fees, etc. The standard way to implement such a thing is to allow money and side-payments. In this paper, we ignore those side-effects and assume that agent i's utility function ui is normalised so that ui : O → [0, 1]. EXAMPLE 1. There are two agents, A and B. Agent A has a task T that needs to be done and also 100 units of a resource R. Agent B has the capacity to perform task T and would like to obtain at least 10 and at most 20 units of the resource R. Agent B is indifferent between any amounts from 10 to 20 units of the resource R. The objective functions for both agents A and B are cost and revenue, and they both aim at minimising costs while maximising revenues. Having T done generates for A a revenue rA,T, while doing T incurs a cost cB,T to B. Agent B obtains a revenue rB,R for each unit of the resource R, while providing each unit of the resource R costs agent A cA,R. Assuming that money transfer between agents is possible, the set Att then contains three attributes: • T, taking values from the set {0, 1}, indicates whether the task T is assigned to agent B; • R, taking values from the set of non-negative integers, indicates the amount of resource R being allocated to agent B; and • MT, taking values from R, indicates the payment p to be transferred from A to B. Consider the outcome o = [T = 1, R = k, MT = p], i.e., the task T is assigned to B, A allocates k units of the resource R to B, and A transfers p dollars to B. Then, costA(o) = k·cA,R + p and revA(o) = rA,T; and costB(o) = cB,T and revB(o) = k·rB,R + p if 10 ≤ k ≤ 20, and revB(o) = p otherwise. And, σi(costi(o), revi(o)) = revi(o) − costi(o), (i = A, B). 3. PROBLEM FORMALISATION Consider Example 1, and assume that rA,T = $150, cB,T = $100, rB,R = $10, and cA,R = $7. That is, the revenue generated for A exceeds the cost incurred by B to do task T, and B values resource R more highly than the cost for A to provide it. The optimal solution to this problem scenario is to assign task T to agent B and to allocate 20 units of resource R (i.e., the maximal amount of resource R required by agent B) from agent A to agent B. This outcome regarding the resource and task allocation problems leaves payoffs of $10 to agent A and $100 to agent B. (Certainly, without a money transfer to compensate agent A, this outcome is not a fair one.) Any other outcome would leave at least one of the agents worse off. In other words, the presented outcome is Pareto-efficient and should be part of the solution outcome for this problem scenario.
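To make the numbers in Example 1 and Section 3 concrete, the following minimal sketch computes the two agents' payoffs (revenue minus cost) for an outcome [T = 1, R = k, MT = p] using the figures quoted above. The function name and the parameter defaults are ours, not part of the paper.

```python
def payoffs(assign_task: int, k: int, p: float,
            r_AT: float = 150.0, c_BT: float = 100.0,
            r_BR: float = 10.0, c_AR: float = 7.0):
    """Payoffs (revenue - cost) of agents A and B for the outcome [T, R=k, MT=p]."""
    cost_A = k * c_AR + p
    rev_A = r_AT if assign_task else 0.0
    cost_B = c_BT if assign_task else 0.0
    rev_B = (k * r_BR + p) if 10 <= k <= 20 else p
    return rev_A - cost_A, rev_B - cost_B


if __name__ == "__main__":
    # The efficient allocation from Section 3: task assigned to B, 20 units of R, no transfer.
    print(payoffs(1, 20, 0.0))   # (10.0, 100.0)
    # A money transfer p only shifts payoff between the agents: (10 - p, 100 + p).
    print(payoffs(1, 20, 30.0))  # (-20.0, 130.0)
```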
However, as the agents still have to bargain over the amount of the money transfer p, neither agent would be willing to disclose their respective costs and revenues regarding the task T and the resource R. As a consequence, agents often do not achieve the optimal outcome presented above in practice. To address this issue, we introduce a mediator to help the agents discover better agreements than the ones they might try to settle on. Note that this problem is essentially the problem of searching for joint gains in a multilateral negotiation in which the involved parties hold strategic information, i.e., the integrative part in a negotiation. In order to help facilitate this process, we introduce the role of a neutral mediator. Before formalising the decision problems faced by the mediator and the negotiating agents, we discuss the properties of the solution outcomes to be achieved by the mediator. In a negotiation setting, the two typical design goals would be: • Efficiency: Prevent the agents from settling on an outcome that is not Pareto-optimal; and • Fairness: Avoid agreements that give most of the gains to a subset of agents while leaving the rest with too little. The above goals are axiomatised in Nash's seminal work [16] on cooperative negotiation games. Essentially, Nash advocates for the following properties to be satisfied by a solution to the bilateral negotiation problem: (i) it produces only Pareto-optimal outcomes; (ii) it is invariant to affine transformations of the consequence space; (iii) it is symmetric; and (iv) it is independent of irrelevant alternatives. A solution satisfying Nash's axioms is called a Nash bargaining solution. It then turns out that, by taking the negotiators' utilities as its objectives, the mediator itself faces a multi-criteria decision making problem. The issues faced by the mediator are: (i) the mediator requires access to the negotiators' utility functions, and (ii) making (fair) tradeoffs between different agents' utilities. Our methods allow the agents to repeatedly interact with the mediator so that a Nash solution outcome could be found by the parties. Informally, the problem faced by both the mediator and the negotiators is construction of the indifference curves. Why are the indifference curves so important? • To the negotiators, knowing the options available along indifference curves opens up opportunities to reach more efficient outcomes. For instance, consider an agent A who is presenting his opponent with an offer θA which she refuses to accept. Rather than having to concede, A could look at his indifference curve going through θA and choose another proposal θ′A. To him, θA and θ′A are indifferent, but θ′A could give some gains to B and thus will be more acceptable to B. In other words, the outcome θ′A is more efficient than θA to these two negotiators. • To the mediator, constructing indifference curves requires a measure of fairness between the negotiators. The mediator needs to determine how much utility it needs to take away from the other negotiators to give a particular negotiator a specific gain G (in utility). In order to search for integrative solutions within the outcome space O, we characterise the relationship between the agents over the set of attributes Att.
As the agents hold different objectives and have different capacities, it may be the case that changing between two values of a specific attribute implies different shifts in utility for the agents. However, the problem of finding the exact Pareto-optimal set (i.e., the set of outcomes whose consequences, in the consequence space, correspond to the Pareto frontier) is NP-hard [2]. Our approach is thus to solve this optimisation problem in two steps. In the first step, the more manageable attributes will be solved. These are attributes that take a finite set of values. The result of this step would be a subset of outcomes that contains the Pareto-optimal set. In the second step, we employ an iterative procedure that allows the mediator to interact with the negotiators to find joint improvements that move towards a Pareto-optimal outcome. This approach will not work unless the attributes from Att are independent. Most works on multi-attribute, or multi-issue, negotiation (e.g. [17]) assume that the attributes or the issues are independent, resulting in an additive value function for each agent. (Klein et al. [10] explore several implications of complex contracts in which attributes are possibly inter-dependent.) ASSUMPTION 1. Let i ∈ N and S ⊆ Att. Denote by ¯S the set Att \ S. Assume that vS and v′S are two assignments of values to the attributes of S and v1¯S, v2¯S are two arbitrary value assignments to the attributes of ¯S; then ui([vS, v1¯S]) − ui([vS, v2¯S]) = ui([v′S, v1¯S]) − ui([v′S, v2¯S]). That is, the utility function of agent i will be defined on the attributes from S independently of any value assignment to the other attributes. 4. MEDIATOR-BASED BILATERAL NEGOTIATIONS As discussed by Lax and Sebenius [13], under incomplete information the tension between creating and claiming value is the primary cause of inefficient outcomes. This can be seen most easily in negotiations involving two negotiators; during the distributive phase of the negotiation, the two negotiators' objectives are directly opposing each other. We will now formally characterise this relationship between negotiators by defining the opposition between two negotiating parties. The following exposition will be mainly reproduced from [9]. Assume for the moment that all attributes from Att take values from the set of real numbers R, i.e., Valj ⊆ R for all j ∈ Att. We further assume that the set O = ×j∈Att Valj of feasible outcomes is defined by constraints that all parties must obey and that O is convex. Now, an outcome o ∈ O is just a point in the m-dimensional space of real numbers. Then, the questions are: (i) from the point of view of an agent i, is o already the best outcome for i? (ii) if o is not the best outcome for i, then is there another outcome o′ such that o′ gives i a better utility than o and o′ does not cause a utility loss to the other agent j in comparison to o? The above questions can be answered by looking at the directions of improvement of the negotiating parties at o, i.e., the directions in the outcome space O into which their utilities increase at point o. Under the assumption that the parties' utility functions ui are differentiable concave, the set of all directions of improvement for a party at a point o can be defined in terms of his most preferred, or gradient, direction at that point.
When the gradient direction ∇ui(o) of agent i at point o directly opposes the gradient direction ∇uj(o) of agent j at point o, then the two parties strongly disagree at o and no joint improvements can be achieved for i and j in the locality surrounding o. Since opposition between the two parties can vary considerably over the outcome space (with one pair of outcomes considered highly antagonistic and another pair being highly cooperative), we need to describe the local properties of the relationship. We begin with the opposition at any point of the outcome space Rm. The following definition is reproduced from [9]: DEFINITION 2. 1. The parties are in local strict opposition at a point x ∈ Rm iff for all points x′ ∈ Rm that are sufficiently close to x (i.e., for some ε > 0, all x′ with ||x′ − x|| < ε), an increase of one utility can be achieved only at the expense of a decrease of the other utility. 2. The parties are in local non-strict opposition at a point x ∈ Rm iff they are not in local strict opposition at x, i.e., iff it is possible for both parties to raise their utilities by moving an infinitesimal distance from x. 3. The parties are in local weak opposition at a point x ∈ Rm iff ∇u1(x) · ∇u2(x) ≥ 0, i.e., iff the gradients at x of the two utility functions form an acute or right angle. 4. The parties are in local strong opposition at a point x ∈ Rm iff ∇u1(x) · ∇u2(x) < 0, i.e., iff the gradients at x form an obtuse angle. 5. The parties are in global strict (nonstrict, weak, strong) opposition iff for every x ∈ Rm they are in local strict (nonstrict, weak, strong) opposition. Global strict and nonstrict oppositions are complementary cases. Essentially, under global strict opposition the whole outcome space O becomes the Pareto-optimal set, as at no point in O can the negotiating parties make a joint improvement, i.e., every point in O is a Pareto-efficient outcome. In other words, under global strict opposition the outcome space O can be flattened out into a single line such that for each pair of outcomes x, y ∈ O, u1(x) < u1(y) iff u2(x) > u2(y), i.e., at every point in O, the gradients of the two utility functions point to the two different ends of the line. Intuitively, global strict opposition implies that there is no way to obtain joint improvements for both agents. As a consequence, the negotiation degenerates to a distributive negotiation, i.e., the negotiating parties should try to claim as large a share of the negotiation issues as possible, while the mediator should aim for the fairness of the division. On the other hand, global nonstrict opposition allows room for joint improvements, and all parties might be better off trying to realise the potential gains by reaching Pareto-efficient agreements. Weak and strong oppositions indicate different levels of opposition. The weaker the opposition, the more potential gains can be realised, making cooperation the better strategy to employ during negotiation. On the other hand, stronger opposition suggests that the negotiating parties tend to behave strategically, leading to misrepresentation of their respective objectives and utility functions and making joint gains more difficult to realise. We have been temporarily making the assumption that the outcome space O is a subset of Rm. In many real-world negotiations, this assumption would be too restrictive.
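A small sketch of the weak/strong opposition test of Definition 2, based only on the sign of the gradient dot product; the function name is ours and numpy is assumed. Strict opposition is treated as the limiting case of exactly opposite gradient directions.

```python
import numpy as np


def local_opposition(grad_u1: np.ndarray, grad_u2: np.ndarray) -> str:
    """Classify local opposition at a point from the two utility gradients (Definition 2).

    Weak opposition:   grad_u1 . grad_u2 >= 0  (acute or right angle)
    Strong opposition: grad_u1 . grad_u2 < 0   (obtuse angle)
    Strict opposition: the gradients point in exactly opposite directions.
    """
    dot = float(np.dot(grad_u1, grad_u2))
    norms = float(np.linalg.norm(grad_u1) * np.linalg.norm(grad_u2))
    if norms > 0 and np.isclose(dot, -norms):
        return "strict"
    return "weak" if dot >= 0 else "strong"


if __name__ == "__main__":
    print(local_opposition(np.array([1.0, 0.0]), np.array([0.0, 1.0])))    # weak (right angle)
    print(local_opposition(np.array([1.0, 0.0]), np.array([-1.0, -0.2])))  # strong
    print(local_opposition(np.array([1.0, 1.0]), np.array([-2.0, -2.0])))  # strict
```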
We will continue our exposition by lifting this restriction and allowing discrete attributes. However, as most negotiations involve only discrete issues with a bounded number of options, we will assume that each attribute takes values either from a finite set or from the set of real numbers R. In the rest of the paper, we will refer to attributes whose values are from finite sets as simple attributes and attributes whose values are from R as continuous attributes. The notions of local opposition, i.e., strict, nonstrict, weak and strong, are not applicable to outcome spaces that contain simple attributes, and nor are the notions of global weak and strong opposition. However, the notions of global strict and nonstrict opposition can be generalised for outcome spaces that contain simple attributes. DEFINITION 3. Given an outcome space O, the parties are in global strict opposition iff ∀x, y ∈ O, u1(x) < u1(y) iff u2(x) > u2(y). The parties are in global nonstrict opposition if they are not in global strict opposition. 4.1 Optimisation on simple attributes In order to extract the optimal values for a subset of attributes, in the first step of this optimisation process the mediator requests the negotiators to submit their respective utility functions over the set of simple attributes. Let Simp ⊆ Att denote the set of all simple attributes from Att. Note that, due to Assumption 1, agent i's utility function can be characterised as follows: ui([vSimp, v¯Simp]) = wi1 · ui,1([vSimp]) + wi2 · ui,2([v¯Simp]), where ¯Simp = Att \ Simp, ui,1 and ui,2 are the utility components of ui over the sets of attributes Simp and ¯Simp, respectively, and 0 < wi1, wi2 < 1 and wi1 + wi2 = 1. As attributes are independent of each other regarding the agents' utility functions, the optimisation problem over the attributes from Simp can be carried out by fixing ui,2([v¯Simp]) to a constant C and then searching for the optimal values within the set of attributes Simp. Now, how does the mediator determine the optimal values for the attributes in Simp? Several well-known optimisation strategies could be applicable here: • The utilitarian solution: The sum of the agents' utilities is maximised. Thus, the optimal values are the solution of the following optimisation problem: arg max_{v∈ValSimp} Σ_{i∈N} ui(v). • The Nash solution: The product of the agents' utilities is maximised. Thus, the optimal values are the solution of the following optimisation problem: arg max_{v∈ValSimp} Π_{i∈N} ui(v). • The egalitarian solution (aka. the maximin solution): The utility of the agent with minimum utility is maximised. Thus, the optimal values are the solution of the following optimisation problem: arg max_{v∈ValSimp} min_{i∈N} ui(v). The question now is of course whether a negotiator has the incentive to misrepresent his utility function. First of all, recall that the agents' utility functions are bounded, i.e., ∀o ∈ O: 0 ≤ ui(o) ≤ 1. Thus, the agents have no incentive to overstate their utility regarding an outcome o: if o is the most preferred outcome to an agent i, then he already assigns the maximal utility to o. On the other hand, if o is not the most preferred outcome to i, then by overstating the utility he assigns to o, the agent i runs the risk of having to settle on an agreement which would give him a lower payoff than he is supposed to receive.
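The three solution concepts just listed can be illustrated with a minimal sketch that searches a finite set of candidate value assignments using the utilities reported to the mediator. The function names are ours; the example reuses the single-attribute utilities used for illustration later in the text.

```python
from math import prod
from typing import Callable, Dict, Iterable, List

Utility = Callable[[str], float]  # utility reported for a candidate value assignment


def utilitarian(values: Iterable[str], utils: List[Utility]) -> str:
    """Maximise the sum of the agents' utilities."""
    return max(values, key=lambda v: sum(u(v) for u in utils))


def nash(values: Iterable[str], utils: List[Utility]) -> str:
    """Maximise the product of the agents' utilities."""
    return max(values, key=lambda v: prod(u(v) for u in utils))


def egalitarian(values: Iterable[str], utils: List[Utility]) -> str:
    """Maximise the utility of the worst-off agent (maximin)."""
    return max(values, key=lambda v: min(u(v) for u in utils))


if __name__ == "__main__":
    # One simple attribute with candidate values A, B, C, D.
    u1: Dict[str, float] = {"A": 0.4, "B": 0.7, "C": 0.9, "D": 1.0}
    u2: Dict[str, float] = {"A": 1.0, "B": 0.9, "C": 0.7, "D": 0.4}
    utils = [u1.get, u2.get]
    vals = ["A", "B", "C", "D"]
    print(utilitarian(vals, utils), nash(vals, utils), egalitarian(vals, utils))
```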
However, agents do have an incentive to understate their utility if the final settlement will be based on the above solutions alone. Essentially, the mechanism to prevent an agent from understating his utility regarding particular outcomes is to guarantee a certain measure of fairness for the final settlement. That is, the agents lose the incentive to be dishonest, as the gains obtained from taking advantage of the known solutions used to determine the settlement outcome would be offset by the fairness maintenance mechanism. First, we state an easy lemma. LEMMA 1. When Simp contains one single attribute, the agents have the incentive to understate their utility functions regarding outcomes that are not attractive to them. By way of illustration, consider the set Simp containing only one attribute that could take values from the finite set {A, B, C, D}. Assume that negotiator 1 assigns utilities of 0.4, 0.7, 0.9, and 1 to A, B, C, and D, respectively. Assume also that negotiator 2 assigns utilities of 1, 0.9, 0.7, and 0.4 to A, B, C, and D, respectively. If agent 1 misrepresents his utility function to the mediator by reporting utility 0 for all values A, B and C and utility 1 for value D, then agent 2, who plays honestly in his report to the mediator, will obtain the worst outcome D under any of the above solutions. Note that agent 1 does not need to know agent 2's utility function, nor does he need to know the strategy employed by agent 2. As long as he knows that the mediator is going to employ one of the above three solutions, the above misrepresentation is the dominant strategy for this game. However, when the set Simp contains more than one attribute and none of the attributes strongly dominates the other attributes, then the above problem diminishes by itself thanks to the integrative solution. We of course have to define clearly what it means for an attribute to strongly dominate other attributes. Intuitively, if most of an agent's utility concentrates on one of the attributes, then this attribute strongly dominates the other attributes. We again appeal to Assumption 1 on the additivity of utility functions to achieve a measure of fairness within this negotiation setting. Due to Assumption 1, we can characterise agent i's utility component over the set of attributes Simp by the following equation: ui,1([vSimp]) = Σ_{j∈Simp} wij · ui,j([vj])   (1) where Σ_{j∈Simp} wij = 1. Then, an attribute ℓ ∈ Simp strongly dominates the rest of the attributes in Simp (for agent i) iff wiℓ > Σ_{j∈Simp\{ℓ}} wij. Attribute ℓ is said to be strongly dominant (for agent i) w.r.t. the set of simple attributes Simp. The following theorem shows that if the set of attributes Simp does not contain a strongly dominant attribute, then the negotiators have no incentive to be dishonest. THEOREM 1. Given a negotiation framework, if for every agent the set of simple attributes does not contain a strongly dominant attribute, then truth-telling is an equilibrium strategy for the negotiators during the optimisation of simple attributes. So far, we have been concentrating on the efficiency issue while leaving the fairness issue aside. A fair framework not only supports a more satisfactory distribution of utility among the agents, but is also often a good measure to prevent misrepresentation of private information by the agents. Of the three solutions presented above, the utilitarian solution does not support fairness.
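The strong-dominance condition behind Theorem 1 reduces to a simple weight test: an attribute strongly dominates the rest iff its weight exceeds the sum of the remaining weights. A minimal sketch (names ours) follows; since the weights sum to 1, at most one attribute can pass the test.

```python
from typing import Dict, Optional


def strongly_dominant_attribute(weights: Dict[str, float]) -> Optional[str]:
    """Return the attribute whose weight exceeds the sum of all other weights, if any."""
    total = sum(weights.values())
    for attr, w in weights.items():
        if w > total - w:  # w_l > sum of the remaining weights
            return attr
    return None


if __name__ == "__main__":
    print(strongly_dominant_attribute({"colour": 0.2, "delivery": 0.3, "warranty": 0.5}))  # None
    print(strongly_dominant_attribute({"colour": 0.1, "delivery": 0.2, "warranty": 0.7}))  # warranty
```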
On the other hand, Nash [16] proves that the Nash solution satisfies the above four axioms for cooperative bargaining games and is considered a fair solution. The egalitarian solution is another mechanism to achieve fairness by essentially helping the worst off. The problem with these solutions, as discussed earlier, is that they are vulnerable to strategic behaviour when one of the attributes strongly dominates the rest of the attributes. However, there is yet another solution that aims to guarantee fairness, the minimax solution. That is, the utility of the agent with maximum utility is minimised. It is obvious that the minimax solution produces inefficient outcomes. However, to get around this problem (given that the Pareto-optimal set can be tractably computed), we can apply this solution over the Pareto-optimal set only. Let POSet ⊆ ValSimp be the Pareto-optimal subset of the simple outcomes; the minimax solution is defined to be the solution of the following optimisation problem: arg min_{v∈POSet} max_{i∈N} ui(v). While overall efficiency often suffers under a minimax solution, i.e., the sum of all agents' utilities is often lower than under other solutions, it can be shown that the minimax solution is less vulnerable to manipulation. THEOREM 2. Given a negotiation framework, under the minimax solution, if the negotiators are uncertain about their opponents' preferences, then truth-telling is an equilibrium strategy for the negotiators during the optimisation of simple attributes. That is, even when there is only one single simple attribute, if an agent is uncertain whether the other agent's most preferred resolution is also his own most preferred resolution, then he should opt for truth-telling as the optimal strategy. 4.2 Optimisation on continuous attributes When the attributes take values from infinite sets, we assume that they are continuous. This is similar to the common practice in operations research in which linear programming solutions/techniques are applied to integer programming problems. We denote the number of continuous attributes by k, i.e., Att = Simp ∪ ¯Simp and |¯Simp| = k. Then, the outcome space O can be represented as follows: O = (Π_{j∈Simp} Valj) × (Π_{l∈¯Simp} Vall), where Π_{l∈¯Simp} Vall ⊆ Rk is the continuous component of O. Let Oc denote the set Π_{l∈¯Simp} Vall. We will refer to Oc as the feasible set and assume that Oc is closed and convex. After carrying out the optimisation over the set of simple attributes, we are able to assign the optimal values to the simple attributes from Simp. Thus, we reduce the original problem to the problem of searching for optimal (and fair) outcomes within the feasible set Oc. Recall that, by Assumption 1, we can characterise agent i's utility function as follows: ui([v*Simp, v¯Simp]) = C + wi2 · ui,2([v¯Simp]), where C is the constant wi1 · ui,1([v*Simp]) and v*Simp denotes the optimal values of the simple attributes in Simp. Hence, without loss of generality (albeit with a blatant abuse of notation), we can take agent i's utility function as ui : Rk → [0, 1]. Accordingly, we will also take the set of outcomes under consideration by the agents to be the feasible set Oc. We now state another assumption to be used in this section: ASSUMPTION 2. The negotiators' utility functions can be described by continuously differentiable and concave functions ui : Rk → [0, 1], (i = 1, 2).
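A minimal sketch of the minimax rule restricted to the Pareto-optimal subset of the simple outcomes, as described above. It assumes the mediator holds the reported utilities as dictionaries over candidate value assignments; all names are ours.

```python
from typing import Dict, List


def pareto_subset(values: List[str], utils: List[Dict[str, float]]) -> List[str]:
    """Keep the value assignments whose utility vectors are not dominated."""
    def vec(v):
        return tuple(u[v] for u in utils)

    def dominates(a, b):
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

    return [v for v in values
            if not any(dominates(vec(w), vec(v)) for w in values if w != v)]


def minimax(values: List[str], utils: List[Dict[str, float]]) -> str:
    """Minimise the maximum utility, but only over the Pareto-optimal subset (POSet)."""
    candidates = pareto_subset(values, utils)
    return min(candidates, key=lambda v: max(u[v] for u in utils))


if __name__ == "__main__":
    u1 = {"A": 0.4, "B": 0.7, "C": 0.9, "D": 1.0}
    u2 = {"A": 1.0, "B": 0.9, "C": 0.7, "D": 0.4}
    # All four values are efficient here; B attains the smallest maximum utility (0.9).
    print(minimax(["A", "B", "C", "D"], [u1, u2]))  # B
```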
It should be emphasised that we do not assume that agents explicitly know their utility functions. For the method described in the following to work, we only assume that the agents know the relevant information, e.g., at a certain point within the feasible set Oc, the gradient direction of their own utility function and some section of their respective indifference curves. Assume that a tentative agreement (which is a point x ∈ Rk) is currently on the table; the process for the agents to jointly improve this agreement in order to reach a Pareto-optimal agreement can be described as follows. The mediator asks the negotiators to discreetly submit their respective gradient directions at x, i.e., ∇u1(x) and ∇u2(x). Note that the goal of the process to be described here is to search for agreements that are more efficient than the tentative agreement currently on the table. That is, we are searching for points x′ within the feasible set Oc such that moving to x′ from the current tentative agreement x brings more gains to at least one of the agents while not hurting any of the agents. Due to the assumption made above, i.e., the feasible set Oc is bounded, the conditions for an alternative x ∈ Oc to be efficient vary depending on the position of x. The following results are proved in [9]: Let B(x) = 0 denote the equation of the boundary of Oc, defining x ∈ Oc iff B(x) ≥ 0. An alternative x* ∈ Oc is efficient iff either A. x* is in the interior of Oc and the parties are in local strict opposition at x*, i.e., ∇u1(x*) = −γ∇u2(x*) (2) where γ > 0; or B. x* is on the boundary of Oc, and for some α, β ≥ 0: α∇u1(x*) + β∇u2(x*) = ∇B(x*) (3) We are now interested in answering the following questions: (i) What is the initial tentative agreement x0? (ii) How to find a more efficient agreement xh+1, given the current tentative agreement xh? 4.2.1 Determining a fair initial tentative agreement It should be emphasised that the choice of the initial tentative agreement affects the fairness of the final agreement to be reached by the presented method. For instance, if the initial tentative agreement x0 is chosen to be the most preferred alternative of one of the agents, then it is also a Pareto-optimal outcome, making it impossible to find any joint improvement from x0. However, if x0 is then chosen to be the final settlement and x0 turns out to be the worst alternative for the other agent, then this outcome is a very unfair one. Thus, it is important that the choice of the initial tentative agreement be sensibly made. Ehtamo et al [3] present several methods to choose the initial tentative agreement (called the reference point in their paper). However, their goal is to approximate the Pareto-optimal set by systematically choosing a set of reference points. Once an (approximate) Pareto-optimal set is generated, it is left to the negotiators to decide which of the generated Pareto-optimal outcomes is to be chosen as the final settlement. That is, distributive negotiation will then be required to settle the issue. We, on the other hand, are interested in a fair initial tentative agreement which is not necessarily efficient. Improving a given tentative agreement to yield a Pareto-optimal agreement is considered in the next section. For each attribute j ∈ ¯Simp, an agent i will be asked to discreetly submit three values (from the set Valj): the most preferred value, denoted by pvi,j, the least preferred value, denoted by wvi,j, and a value that gives i an approximately average payoff, denoted by avi,j.
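Condition (2) says an interior alternative is efficient exactly when the two gradients are anti-parallel, i.e., ∇u1(x*) = −γ∇u2(x*) for some γ > 0. A minimal numerical check of this condition (names ours, numpy assumed) might look as follows; the boundary case (3) is not covered by this sketch.

```python
import numpy as np


def interior_efficient(grad_u1: np.ndarray, grad_u2: np.ndarray, tol: float = 1e-8) -> bool:
    """Check the interior efficiency condition (2): grad_u1 = -gamma * grad_u2, gamma > 0.

    Equivalently, the two gradients are anti-parallel (cosine of their angle is -1).
    """
    n1, n2 = np.linalg.norm(grad_u1), np.linalg.norm(grad_u2)
    if n1 < tol or n2 < tol:  # a (near-)zero gradient: that agent cannot improve locally
        return True
    cosine = float(np.dot(grad_u1, grad_u2)) / (n1 * n2)
    return abs(cosine + 1.0) < tol


if __name__ == "__main__":
    print(interior_efficient(np.array([2.0, 4.0]), np.array([-1.0, -2.0])))  # True: gamma = 2
    print(interior_efficient(np.array([2.0, 4.0]), np.array([-1.0, 0.0])))   # False
```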
(Note that this is possible because the set Valj is bounded.) If pv1,j and pv2,j are sufficiently close, i.e., |pv1,j − pv2,j| < Δ for some pre-defined Δ > 0, then pv1,j and pv2,j are chosen to be the two core values, denoted by cv1 and cv2. Otherwise, between the two values pv1,j and av1,j, we eliminate the one that is closer to wv2,j; the remaining value is denoted by cv1. Similarly, we obtain cv2 from the two values pv2,j and av2,j. If cv1 = cv2, then cv1 is selected as the initial value for the attribute j as part of the initial tentative agreement. Otherwise, without loss of generality, we assume that cv1 < cv2. The mediator randomly selects p values mv1, ..., mvp from the open interval (cv1, cv2), where p ≥ 1. The mediator then asks the agents to submit their valuations over the set of values {cv1, cv2, mv1, ..., mvp}. The value for which the two agents' valuations are closest is selected as the initial value for the attribute j as part of the initial tentative agreement. The above procedure guarantees that the agents do not gain by behaving strategically. By performing the above procedure on every attribute j ∈ ¯Simp, we are able to identify the initial tentative agreement x0 such that x0 ∈ Oc. The next step is to compute a new tentative agreement from an existing tentative agreement so that the new one would be more efficient than the existing one. 4.2.2 Computing new tentative agreements Our procedure is a combination of the method of jointly improving direction introduced by Ehtamo et al [4] and a method we propose in the coming section. Basically, the idea is to see how strong the opposition between the parties is. If the two parties are in (local) weak opposition at the current tentative agreement xh, i.e., their improving directions at xh are close to each other, then the compromise direction proposed by Ehtamo et al [4] is likely to point to a better agreement for both agents. However, if the two parties are in local strong opposition at the current point xh, then it is unclear whether the compromise direction would really not hurt one of the agents whilst bringing some benefit to the other. We will first review the method proposed by Ehtamo et al [4] to compute the compromise direction for a group of negotiators at a given point x ∈ Oc. Ehtamo et al define a function T(x) that describes the mediator's choice for a compromise direction at x. For the case of two-party negotiations, the following bisecting function, denoted by T BS, can be defined over the interior set of Oc. Note that the closed set Oc contains two disjoint subsets: Oc = Oc0 ∪ OcB, where Oc0 denotes the set of interior points of Oc and OcB denotes the boundary of Oc. The bisecting compromise is defined by a function T BS : Oc0 → Rk, T BS(x) = ∇u1(x)/||∇u1(x)|| + ∇u2(x)/||∇u2(x)||, x ∈ Oc0. (4) Given the current tentative agreement xh (h ≥ 0), the mediator has to choose a point xh+1 along d = T(xh) so that all parties gain. Ehtamo et al then define a mechanism to generate a sequence of points and prove that when the generated sequence is bounded and when all generated points (from the sequence) belong to the interior set Oc0, then the sequence converges to a weakly Pareto-optimal agreement [4, pp. 59-60]. (Let S be the set of alternatives; x* is weakly Pareto optimal if there is no x ∈ S such that ui(x) > ui(x*) for all agents i.) As the above mechanism does not work at the boundary points of Oc, we will introduce a procedure that works everywhere in an alternative space Oc.
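Equation (4) is simply the sum of the two unit-normalised gradients, which bisects the angle between the agents' most preferred directions. A minimal sketch (names ours, numpy assumed):

```python
import numpy as np


def bisecting_direction(grad_u1: np.ndarray, grad_u2: np.ndarray) -> np.ndarray:
    """Bisecting compromise direction T^BS(x) of equation (4).

    Each gradient is normalised to unit length, so their sum bisects the angle
    between the two agents' most preferred directions at x.
    """
    d1 = grad_u1 / np.linalg.norm(grad_u1)
    d2 = grad_u2 / np.linalg.norm(grad_u2)
    return d1 + d2


if __name__ == "__main__":
    d = bisecting_direction(np.array([3.0, 0.0]), np.array([0.0, 1.0]))
    print(d)  # [1. 1.]: bisects the right angle between the two gradients
```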
Let x ∈ Oc and let θ(x) denote the angle between the gradients ∇u1(x) and ∇u2(x) at x. That is, θ(x) = arccos( (∇u1(x) · ∇u2(x)) / (||∇u1(x)|| ||∇u2(x)||) ). From Definition 2, it is obvious that the two parties are in local strict opposition (at x) iff θ(x) = π, they are in local strong opposition iff π ≥ θ(x) > π/2, and they are in local weak opposition iff π/2 ≥ θ(x) ≥ 0. Note also that the two vectors ∇u1(x) and ∇u2(x) define a hyperplane, denoted by h∇(x), in the k-dimensional space Rk. Furthermore, there are two indifference curves of agents 1 and 2 going through point x, denoted by IC1(x) and IC2(x), respectively. Let hT1(x) and hT2(x) denote the tangent hyperplanes to the indifference curves IC1(x) and IC2(x), respectively, at point x. The planes hT1(x) and hT2(x) intersect h∇(x) in the lines IS1(x) and IS2(x), respectively. Note that given a line L(x) going through the point x, there are two (unit) vectors from x along L(x) pointing in two opposite directions, denoted by L+(x) and L−(x). We can now informally explain our solution to the problem of searching for joint gains. When it is not possible to obtain a compromise direction for joint improvements at a point x ∈ Oc, either because the compromise vector points to the space outside of the feasible set Oc or because the two parties are in local strong opposition at x, we will consider moving along the indifference curve of one party while trying to improve the utility of the other party. As the mediator does not know the indifference curves of the parties, he has to use the tangent hyperplanes to the indifference curves of the parties at point x. Note that the tangent hyperplane to a curve is a useful approximation of the curve in the immediate vicinity of the point of tangency, x. We now describe an iteration step to reach the next tentative agreement xh+1 from the current tentative agreement xh ∈ Oc. A vector v whose tail is xh is said to be bounded in Oc if ∃λ > 0 such that xh + λv ∈ Oc. To start, the mediator asks the negotiators for their gradients ∇u1(xh) and ∇u2(xh), respectively, at xh. 1. If xh is a Pareto-optimal outcome according to equation 2 or equation 3, then the process is terminated. 2. If 1 ≥ ∇u1(xh) · ∇u2(xh) > 0 and the vector T BS(xh) is bounded in Oc, then the mediator chooses the compromise improving direction d = T BS(xh) and applies the method described by Ehtamo et al [4] to generate the next tentative agreement xh+1. 3. Otherwise, among the four vectors ISσi(xh), i = 1, 2 and σ ∈ {+, −}, the mediator chooses the vector that (i) is bounded in Oc, and (ii) is closest to the gradient of the other agent, ∇uj(xh) (j ≠ i). Denote this vector by T G(xh). That is, we will be searching for a point on the indifference curve of agent i, ICi(xh), while trying to improve the utility of agent j. Note that when xh is an interior point of Oc, the situation is symmetric for the two agents 1 and 2, and the mediator has the choice of either finding a point on IC1(xh) to improve the utility of agent 2, or finding a point on IC2(xh) to improve the utility of agent 1. To decide on which choice to make, the mediator has to compute the distribution of gains throughout the whole process to avoid giving more gains to one agent than to the other. Now, the point xh+1 to be generated lies somewhere on the intersection of ICi(xh) and the hyperplane defined by ∇ui(xh) and T G(xh).
This intersection is approximated by T G(xh). Thus, the sought-after point xh+1 can be generated by first finding a point yh along the direction of T G(xh) and then moving from yh in the direction of ∇ui(xh) until we intersect with ICi(xh). Mathematically, let ζ and ξ denote the vectors T G(xh) and ∇ui(xh), respectively; xh+1 is the solution to the following optimisation problem: max_{λ1,λ2∈L} uj(xh + λ1ζ + λ2ξ) s.t. xh + λ1ζ + λ2ξ ∈ Oc and ui(xh + λ1ζ + λ2ξ) = ui(xh), where L is a suitable interval of positive real numbers; e.g., L = {λ | λ > 0}, or L = {λ | a < λ ≤ b}, 0 ≤ a < b. Given an initial tentative agreement x0, the method described above allows a sequence of tentative agreements x1, x2, ... to be iteratively generated. The iteration stops whenever a weakly Pareto optimal agreement is reached. THEOREM 3. If the sequence of agreements generated by the above method is bounded, then the method converges to a point x* ∈ Oc that is weakly Pareto optimal. 5. CONCLUSION AND FUTURE WORK In this paper we have established a framework for negotiation that is based on MCDM theory for representing the agents' objectives and utilities. The focus of the paper is on integrative negotiation, in which agents aim to maximise joint gains, or create value. We have introduced a mediator into the negotiation in order to allow negotiators to disclose information about their utilities without providing this information to their opponents. Furthermore, the mediator also works toward the goal of achieving fairness of the negotiation outcome. That is, the approach that we describe aims for both efficiency, in the sense that it produces Pareto optimal outcomes (i.e. no aspect can be improved for one of the parties without worsening the outcome for another party), and also for fairness, choosing optimal solutions that distribute gains amongst the agents in some appropriate manner. We have developed a two-step process for addressing the NP-hard problem of finding a solution for a set of integrative attributes, which is within the Pareto-optimal set for those attributes. For simple attributes (i.e. those which have a finite set of values) we use known optimisation techniques to find a Pareto-optimal solution. In order to discourage agents from misrepresenting their utilities to gain an advantage, we look for solutions that are least vulnerable to manipulation. We have shown that as long as none of the simple attributes strongly dominates the others, truth telling is an equilibrium strategy for the negotiators during the stage of optimising simple attributes. For non-simple attributes we propose a mechanism that provides stepwise improvements to move the proposed solution in the direction of a Pareto-optimal solution. The approach presented in this paper is similar to the ideas behind negotiation analysis [18]. Ehtamo et al [4] present an approach to searching for joint gains in multi-party negotiations. The relation of their approach to our approach is discussed in the preceding section. Lai et al [12] provide an alternative approach to integrative negotiation. While their approach was clearly described for the case of two-issue negotiations, the generalisation to negotiations with more than two issues is not entirely clear. Zhang et al [22] discuss the use of integrative negotiation in agent organisations. They assume that agents are honest.
Their main result is an experiment showing that in some situations, agents' cooperativeness may not bring the most benefits to the organisation as a whole, without giving an explanation. Jonker et al [7] consider an approach to multi-attribute negotiation without the use of a mediator. Thus, their approach can be considered a complement of ours. Their experimental results show that agents can reach Pareto-optimal outcomes using their approach. The details of the approach have currently been shown only for bilateral negotiation, and while we believe they are generalisable to multiple negotiators, this work remains to be done. There is also future work to be done in more fully characterising the outcomes of the determination of values for the non-simple attributes. In order to provide a complete framework we are also working on the distributive phase using the mediator. Acknowledgement The authors acknowledge financial support by ARC Discovery Grant (2006-2009, grant DP0663147) and DEST IAP grant (2004-2006, grant CG040014). The authors would like to thank Lawrence Cavedon and the RMIT Agents research group for their helpful comments and suggestions. 6. REFERENCES [1] F. Alemi, P. Fos, and W. Lacorte. A demonstration of methods for studying negotiations between physicians and health care managers. Decision Science, 21:633-641, 1990. [2] M. Ehrgott. Multicriteria Optimization. Springer-Verlag, Berlin, 2000. [3] H. Ehtamo, R. P. Hamalainen, P. Heiskanen, J. Teich, M. Verkama, and S. Zionts. Generating Pareto solutions in a two-party setting: Constraint proposal methods. Management Science, 45(12):1697-1709, 1999. [4] H. Ehtamo, E. Kettunen, and R. P. Hamalainen. Searching for joint gains in multi-party negotiations. European Journal of Operational Research, 130:54-69, 2001. [5] P. Faratin. Automated Service Negotiation Between Autonomous Computational Agents. PhD thesis, University of London, 2000. [6] A. Foroughi. Minimizing negotiation process losses with computerized negotiation support systems. The Journal of Applied Business Research, 14(4):15-26, 1998. [7] C. M. Jonker, V. Robu, and J. Treur. An agent architecture for multi-attribute negotiation using incomplete preference information. J. Autonomous Agents and Multi-Agent Systems, (to appear). [8] R. L. Keeney and H. Raiffa. Decisions with Multiple Objectives: Preferences and Value Trade-Offs. John Wiley and Sons, Inc., New York, 1976. [9] G. Kersten and S. Noronha. Rational agents, contract curves, and non-efficient compromises. IEEE Systems, Man, and Cybernetics, 28(3):326-338, 1998. [10] M. Klein, P. Faratin, H. Sayama, and Y. Bar-Yam. Protocols for negotiating complex contracts. IEEE Intelligent Systems, 18(6):32-38, 2003. [11] S. Kraus, J. Wilkenfeld, and G. Zlotkin. Multiagent negotiation under time constraints. Artificial Intelligence Journal, 75(2):297-345, 1995. [12] G. Lai, C. Li, and K. Sycara. Efficient multi-attribute negotiation with incomplete information. Group Decision and Negotiation, 15:511-528, 2006. [13] D. Lax and J. Sebenius. The manager as negotiator: The negotiator's dilemma: Creating and claiming value, 2nd ed. In S. Goldberg, F. Sander & N. Rogers, editors, Dispute Resolution, 2nd ed., pages 49-62. Little Brown & Co., 1992. [14] M. Lomuscio and N. Jennings. A classification scheme for negotiation in electronic commerce. In Agent-Mediated Electronic Commerce: A European Agentlink Perspective. Springer-Verlag, 2001. [15] R. Maes and A. Moukas. Agents that buy and sell. Communications of the ACM, 42(3):81-91, 1999.
[16] J. Nash. Two-person cooperative games. Econometrica, 21(1):128-140, April 1953.
[17] H. Raiffa. The Art and Science of Negotiation. Harvard University Press, Cambridge, USA, 1982.
[18] H. Raiffa, J. Richardson, and D. Metcalfe. Negotiation Analysis: The Science and Art of Collaborative Decision Making. Belknap Press, Cambridge, MA, 2002.
[19] T. Sandholm. Agents in electronic commerce: Component technologies for automated negotiation and coalition formation. JAAMAS, 3(1):73-96, 2000.
[20] J. Sebenius. Negotiation analysis: A characterization and review. Management Science, 38(1):18-38, 1992.
[21] L. Weingart, E. Hyder, and M. Prietula. Knowledge matters: The effect of tactical descriptions on negotiation behavior and outcome. Technical report, Carnegie Mellon University, 1995.
[22] X. Zhang, V. R. Lesser, and T. Wagner. Integrative negotiation among agents situated in organizations. IEEE Transactions on Systems, Man, and Cybernetics, Part C, 36(1):19-30, 2006.
Searching for Joint Gains in Automated Negotiations Based on Multi-criteria Decision Making Theory ABSTRACT It is well established by conflict theorists and others that successful negotiation should incorporate "creating value" as well as "claiming value." Joint improvements that bring benefits to all parties can be realised by (i) identifying attributes that are not of direct conflict between the parties, (ii) tradeoffs on attributes that are valued differently by different parties, and (iii) searching for values within attributes that could bring more gains to one party while not incurring too much loss on the other party. In this paper we propose an approach for maximising joint gains in automated negotiations by formulating the negotiation problem as a multi-criteria decision making problem and taking advantage of several optimisation techniques introduced by operations researchers and conflict theorists. We use a mediator to protect the negotiating parties from unnecessary disclosure of information to their opponent, while also allowing an objective calculation of maximum joint gains. We separate out attributes that take a finite set of values (simple attributes) from those with continuous values, and we show that for simple attributes, the mediator can determine the Pareto-optimal values. In addition we show that if none of the simple attributes strongly dominates the other simple attributes, then truth telling is an equilibrium strategy for negotiators during the optimisation of simple attributes. We also describe an approach for improving joint gains on non-simple attributes, by moving the parties in a series of steps, towards the Pareto-optimal frontier. 1. INTRODUCTION Given that negotiation is perhaps one of the oldest activities in the history of human communication, it's perhaps surprising that conducted experiments on negotiations have shown that negotiators more often than not reach inefficient compromises [1, 21]. Raiffa [17] and Sebenius [20] provide analyses on the negotiators' failure to achieve efficient agreements in practice and their unwillingness to disclose private information due to strategic reasons. According to conflict theorists Lax and Sebenius [13], most negotiation actually involves both integrative and distributive bargaining which they refer to as" creating value" and" claiming value." They argue that negotiation necessarily includes both cooperative and competitive elements, and that these elements exist in tension. Negotiators face a dilemma in deciding whether to pursue a cooperative or a competitive strategy at a particular time during a negotiation. They refer to this problem as the Negotiator's Dilemma. We argue that the Negotiator's Dilemma is essentially informationbased, due to the private information held by the agents. Such private information contains both the information that implies the agent's bottom lines (or, her walk-away positions) and the information that enforces her bargaining strength. For instance, when bargaining to sell a house to a potential buyer, the seller would try to hide her actual reserve price as much as possible for she hopes to reach an agreement at a much higher price than her reserve price. On the other hand, the outside options available to her (e.g. other buyers who have expressed genuine interest with fairly good offers) consist in the information that improves her bargaining strength about which she would like to convey to her opponent. 
But at the same time, her opponent is well aware of the fact that it is her incentive to boost her bargaining strength and thus will not accept every information she sends out unless it is substantiated by evidence. Coming back to the Negotiator's Dilemma, it's not always possible to separate the integrative bargaining process from the distributive bargaining process. In fact, more often than not, the two processes interplay with each other making information manipulation become part of the integrative bargaining process. This is because a negotiator could use the information about his opponent's interests against her during the distributive negotiation process. That is, a negotiator may refuse to concede on an important conflicting issue by claiming that he has made a major concession (on another issue) to meet his opponent's interests even though the concession he made could be insignificant to him. For instance, few buyers would start a bargaining with a dealer over a deal for a notebook computer by declaring that he is most interested in an extended warranty for the item and therefore prepared to pay a high price to get such an extended warranty. Negotiation Support Systems (NSSs) and negotiating software agents (NSAs) have been introduced either to assist humans in making decisions or to enable automated negotiation to allow computer processes to engage in meaningful negotiation to reach agreements (see, for instance, [14, 15, 19, 6, 5]). However, because of the Negotiator's Dilemma and given even bargaining power and incomplete information, the following two undesirable situations often arise: (i) negotiators reach inefficient compromises, or (ii) negotiators engage in a deadlock situation in which both negotiators refuse to act upon with incomplete information and at the same time do not want to disclose more information. In this paper, we argue for the role of a mediator to resolve the above two issues. The mediator thus plays two roles in a negotiation: (i) to encourage cooperative behaviour among the negotiators, and (ii) to absorb the information disclosure by the negotiators to prevent negotiators from using uncertainty and private information as a strategic device. To take advantage of existing results in negotiation analysis and operations research (OR) literatures [18], we employ multi-criteria decision making (MCDM) theory to allow the negotiation problem to be represented and analysed. Section 2 provides background on MCDM theory and the negotiation framework. Section 3 formulates the problem. In Section 4, we discuss our approach to integrative negotiation. Section 5 discusses the future work with some concluding remarks. 2. BACKGROUND 2.1 Multi-criteria decision making theory Let A denote the set of feasible alternatives available to a decision maker M. As an act, or decision, a in A may involve multiple aspects, we usually describe the alternatives a with a set of attributes j; (j = 1,..., m). (Attributes are also referred to as issues, or decision variables.) A typical decision maker also has several objectives X1,..., Xk. We assume that Xi, (i = 1,..., k), maps the alternatives to real numbers. Thus, a tuple (x1,..., xk) = (X1 (a),..., Xk (a)) denotes the consequence of the act a to the decision maker M. By definition, objectives are statements that delineate the desires of a decision maker. Thus, M wishes to maximise his objectives. 
However, as discussed thoroughly by Keeney and Raiffa [8], it is quite likely that a decision maker's objectives will conflict with each other in that the improved achievement with one objective can only be accomplished at the expense of another. For instance, most businesses and public services have objectives like "minimise cost" and "maximise the quality of services." Since better services can often only be attained for a price, these objectives conflict. Due to the conflicting nature of a decision maker's objectives, M usually has to settle at a compromise solution. That is, he may have to choose an act a E A that does not optimise every objective. This is the topic of the multi-criteria decision making theory. Part of the solution to this problem is that M has to try to identify the Pareto frontier in the consequence space 1 (X1 (a),..., Xk (a))} aEA. DEFINITION 1. (Dominant) 2.2 A negotiation framework The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 509 3. PROBLEM FORMALISATION 510 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 4. MEDIATOR-BASED BILATERAL NEGOTIATIONS 4.1 Optimisation on simple attributes The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 511 512 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 4.2 Optimisation on continuous attributes 4.2.1 Determining a fair initial tentative agreement The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 513 4.2.2 Computing new tentative agreement 5. CONCLUSION AND FUTURE WORK In this paper we have established a framework for negotiation that is based on MCDM theory for representing the agents' objectives and utilities. The focus of the paper is on integrative negotiation in which agents aim to maximise joint gains, or "create value." max λ1, λ2 ∈ L 514 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) We have introduced a mediator into the negotiation in order to allow negotiators to disclose information about their utilities, without providing this information to their opponents. Furthermore, the mediator also works toward the goal of achieving fairness of the negotiation outcome. That is, the approach that we describe aims for both efficiency, in the sense that it produces Pareto optimal outcomes (i.e. no aspect can be improved for one of the parties without worsening the outcome for another party), and also for fairness, which chooses optimal solutions which distribute gains amongst the agents in some appropriate manner. We have developed a two step process for addressing the NP-hard problem of finding a solution for a set of integrative attributes, which is within the Pareto-optimal set for those attributes. For simple attributes (i.e. those which have a finite set of values) we use known optimisation techniques to find a Paretooptimal solution. In order to discourage agents from misrepresenting their utilities to gain an advantage, we look for solutions that are least vulnerable to manipulation. We have shown that as long as one of the simple attributes does not strongly dominate the others, then truth telling is an equilibrium strategy for the negotiators during the stage of optimising simple attributes. For non-simple attributes we propose a mechanism that provides stepwise improvements to move the proposed solution in the direction of a Paretooptimal solution. 
The approach presented in this paper is similar to the ideas behind negotiation analysis [18]. Ehtamo et al. [4] present an approach to searching for joint gains in multi-party negotiations. The relation of their approach to ours is discussed in the preceding section. Lai et al. [12] provide an alternative approach to integrative negotiation. While their approach was clearly described for the case of two-issue negotiations, the generalisation to negotiations with more than two issues is not entirely clear. Zhang et al. [22] discuss the use of integrative negotiation in agent organisations. They assume that agents are honest. Their main result is an experiment showing that in some situations, agents' cooperativeness may not bring the most benefits to the organisation as a whole, though they offer no explanation for this. Jonker et al. [7] consider an approach to multi-attribute negotiation without the use of a mediator. Thus, their approach can be considered a complement of ours. Their experimental results show that agents can reach Pareto-optimal outcomes using their approach. The details of the approach have currently been shown only for bilateral negotiation, and while we believe they are generalisable to multiple negotiators, this work remains to be done. There is also future work to be done in more fully characterising the outcomes of the determination of values for the non-simple attributes. In order to provide a complete framework, we are also working on the distributive phase using the mediator.
Searching for Joint Gains in Automated Negotiations Based on Multi-criteria Decision Making Theory ABSTRACT It is well established by conflict theorists and others that successful negotiation should incorporate "creating value" as well as "claiming value." Joint improvements that bring benefits to all parties can be realised by (i) identifying attributes that are not of direct conflict between the parties, (ii) tradeoffs on attributes that are valued differently by different parties, and (iii) searching for values within attributes that could bring more gains to one party while not incurring too much loss on the other party. In this paper we propose an approach for maximising joint gains in automated negotiations by formulating the negotiation problem as a multi-criteria decision making problem and taking advantage of several optimisation techniques introduced by operations researchers and conflict theorists. We use a mediator to protect the negotiating parties from unnecessary disclosure of information to their opponent, while also allowing an objective calculation of maximum joint gains. We separate out attributes that take a finite set of values (simple attributes) from those with continuous values, and we show that for simple attributes, the mediator can determine the Pareto-optimal values. In addition we show that if none of the simple attributes strongly dominates the other simple attributes, then truth telling is an equilibrium strategy for negotiators during the optimisation of simple attributes. We also describe an approach for improving joint gains on non-simple attributes, by moving the parties in a series of steps, towards the Pareto-optimal frontier. 1. INTRODUCTION Given that negotiation is perhaps one of the oldest activities in the history of human communication, it is surprising that experiments on negotiation have repeatedly shown that negotiators more often than not reach inefficient compromises [1, 21]. Raiffa [17] and Sebenius [20] provide analyses of negotiators' failure to achieve efficient agreements in practice and of their unwillingness to disclose private information for strategic reasons. According to conflict theorists Lax and Sebenius [13], most negotiation actually involves both integrative and distributive bargaining, which they refer to as "creating value" and "claiming value." They argue that negotiation necessarily includes both cooperative and competitive elements, and that these elements exist in tension. Negotiators face a dilemma in deciding whether to pursue a cooperative or a competitive strategy at a particular time during a negotiation. They refer to this problem as the Negotiator's Dilemma. We argue that the Negotiator's Dilemma is essentially information-based, due to the private information held by the agents. Such private information contains both the information that implies the agent's bottom lines (or her walk-away positions) and the information that underpins her bargaining strength. Coming back to the Negotiator's Dilemma, it is not always possible to separate the integrative bargaining process from the distributive bargaining process. In fact, more often than not, the two processes interplay with each other, making information manipulation part of the integrative bargaining process. This is because a negotiator could use the information about his opponent's interests against her during the distributive negotiation process.
Negotiation Support Systems (NSSs) and negotiating software agents (NSAs) have been introduced either to assist humans in making decisions or to enable automated negotiation, allowing computer processes to engage in meaningful negotiation to reach agreements (see, for instance, [14, 15, 19, 6, 5]). In this paper, we argue for the role of a mediator to resolve the above two issues. The mediator thus plays two roles in a negotiation: (i) to encourage cooperative behaviour among the negotiators, and (ii) to absorb the information disclosed by the negotiators, preventing them from using uncertainty and private information as a strategic device. To take advantage of existing results in the negotiation analysis and operations research (OR) literature [18], we employ multi-criteria decision making (MCDM) theory to allow the negotiation problem to be represented and analysed. Section 2 provides background on MCDM theory and the negotiation framework. Section 3 formulates the problem. In Section 4, we discuss our approach to integrative negotiation. Section 5 discusses future work and offers some concluding remarks. 2. BACKGROUND 2.1 Multi-criteria decision making theory Let A denote the set of feasible alternatives available to a decision maker M. Since an act, or decision, a in A may involve multiple aspects, we usually describe an alternative a by a set of attributes j (j = 1,..., m). (Attributes are also referred to as issues, or decision variables.) A typical decision maker also has several objectives X1,..., Xk. Thus, M wishes to maximise his objectives. However, as discussed thoroughly by Keeney and Raiffa [8], it is quite likely that a decision maker's objectives will conflict with each other, in that improved achievement on one objective can only be accomplished at the expense of another. For instance, most businesses and public services have objectives like "minimise cost" and "maximise the quality of services." Since better services can often only be attained for a price, these objectives conflict. Due to the conflicting nature of a decision maker's objectives, M usually has to settle for a compromise solution. That is, he may have to choose an act a ∈ A that does not optimise every objective. This is the subject of multi-criteria decision making theory. 5. CONCLUSION AND FUTURE WORK In this paper we have established a framework for negotiation that is based on MCDM theory for representing the agents' objectives and utilities. The focus of the paper is on integrative negotiation, in which agents aim to maximise joint gains, or "create value." We have introduced a mediator into the negotiation in order to allow negotiators to disclose information about their utilities, without providing this information to their opponents. Furthermore, the mediator also works toward the goal of achieving fairness of the negotiation outcome. We have developed a two-step process for addressing the NP-hard problem of finding a solution for a set of integrative attributes, which is within the Pareto-optimal set for those attributes. For simple attributes (i.e. those which have a finite set of values) we use known optimisation techniques to find a Pareto-optimal solution. In order to discourage agents from misrepresenting their utilities to gain an advantage, we look for solutions that are least vulnerable to manipulation.
We have shown that as long as one of the simple attributes does not strongly dominate the others, then truth telling is an equilibrium strategy for the negotiators during the stage of optimising simple attributes. For non-simple attributes we propose a mechanism that provides stepwise improvements to move the proposed solution in the direction of a Pareto-optimal solution. The approach presented in this paper is similar to the ideas behind negotiation analysis [18]. Ehtamo et al. [4] present an approach to searching for joint gains in multi-party negotiations. The relation of their approach to ours is discussed in the preceding section. Lai et al. [12] provide an alternative approach to integrative negotiation. While their approach was clearly described for the case of two-issue negotiations, the generalisation to negotiations with more than two issues is not entirely clear. Zhang et al. [22] discuss the use of integrative negotiation in agent organisations. They assume that agents are honest. Their main result is an experiment showing that in some situations, agents' cooperativeness may not bring the most benefits to the organisation as a whole, though they offer no explanation for this. Jonker et al. [7] consider an approach to multi-attribute negotiation without the use of a mediator. Thus, their approach can be considered a complement of ours. Their experimental results show that agents can reach Pareto-optimal outcomes using their approach. The details of the approach have currently been shown only for bilateral negotiation, and while we believe they are generalisable to multiple negotiators, this work remains to be done. There is also future work to be done in more fully characterising the outcomes of the determination of values for the non-simple attributes. In order to provide a complete framework, we are also working on the distributive phase using the mediator.
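The stepwise-improvement idea for non-simple (continuous) attributes can be sketched as a mediated procedure in which the mediator perturbs the current tentative agreement and keeps a perturbation only if neither party is made worse off and at least one strictly gains, so the agreement drifts toward the Pareto frontier. This is an illustrative sketch only: the random perturbations, step size, and the linear utilities below are invented and are not the paper's actual mechanism for computing new tentative agreements.

```python
import random
from typing import Callable, List

Utility = Callable[[List[float]], float]

def mediated_improvement(x: List[float], u1: Utility, u2: Utility,
                         steps: int = 2000, scale: float = 0.05,
                         seed: int = 0) -> List[float]:
    """Move a tentative agreement x toward the Pareto frontier by accepting
    only joint improvements: no party loses and at least one strictly gains."""
    rng = random.Random(seed)
    for _ in range(steps):
        candidate = [min(1.0, max(0.0, xi + rng.uniform(-scale, scale))) for xi in x]
        if (u1(candidate) >= u1(x) and u2(candidate) >= u2(x)
                and (u1(candidate) > u1(x) or u2(candidate) > u2(x))):
            x = candidate
    return x

# Hypothetical bilateral deal over two continuous attributes scaled to [0, 1].
u_buyer = lambda x: 1.0 - 0.7 * x[0] + 0.3 * x[1]
u_seller = lambda x: 0.6 * x[0] + 0.4 * x[1]
print(mediated_improvement([0.5, 0.5], u_buyer, u_seller))
```

Because only joint improvements are accepted, neither negotiator needs to reveal its utility function to the other; only the mediator queries the two utilities.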
J-38
Multi-Attribute Coalitional Games
We study coalitional games where the value of cooperation among the agents is solely determined by the attributes the agents possess, with no assumption as to how these attributes jointly determine this value. This framework allows us to model diverse economic interactions by picking the right attributes. We study the computational complexity of two coalitional solution concepts for these games -- the Shapley value and the core. We show how the positive results obtained in this paper imply comparable results for other games studied in the literature.
[ "multi-attribut coalit game", "coalit game", "cooper", "agent", "divers econom interact", "comput complex", "core", "shaplei valu", "graph", "multi-issu represent", "linear combin", "unrestrict aggreg of subgam", "polynomi function min-cost flow problem", "min-cost flow problem", "superaddit game", "coalit game theori", "multi-attribut model", "compact represent" ]
[ "P", "P", "P", "P", "P", "P", "P", "M", "U", "U", "U", "M", "U", "U", "M", "M", "R", "U" ]
Multi-Attribute Coalitional Games∗ Samuel Ieong † Computer Science Department Stanford University Stanford, CA 94305 sieong@cs.stanford.edu Yoav Shoham Computer Science Department Stanford University Stanford, CA 94305 shoham@cs.stanford.edu ABSTRACT We study coalitional games where the value of cooperation among the agents are solely determined by the attributes the agents possess, with no assumption as to how these attributes jointly determine this value. This framework allows us to model diverse economic interactions by picking the right attributes. We study the computational complexity of two coalitional solution concepts for these gamesthe Shapley value and the core. We show how the positive results obtained in this paper imply comparable results for other games studied in the literature. Categories and Subject Descriptors I.2.11 [Distributed Artificial Intelligence]: Multiagent systems; J.4 [Social and Behavioral Sciences]: Economics; F.2 [Analysis of Algorithms and Problem Complexity] General Terms Algorithms, Economics 1. INTRODUCTION When agents interact with one another, the value of their contribution is determined by what they can do with their skills and resources, rather than simply their identities. Consider the problem of forming a soccer team. For a team to be successful, a team needs some forwards, midfielders, defenders, and a goalkeeper. The relevant attributes of the players are their skills at playing each of the four positions. The value of a team depends on how well its players can play these positions. At a finer level, we can extend the model to consider a wider range of skills, such as passing, shooting, and tackling, but the value of a team remains solely a function of the attributes of its players. Consider an example from the business world. Companies in the metals industry are usually vertically-integrated and diversified. They have mines for various types of ores, and also mills capable of processing and producing different kinds of metal. They optimize their production profile according to the market prices for their products. For example, when the price of aluminum goes up, they will allocate more resources to producing aluminum. However, each company is limited by the amount of ores it has, and its capacities in processing given kinds of ores. Two or more companies may benefit from trading ores and processing capacities with one another. To model the metal industry, the relevant attributes are the amount of ores and the processing capacities of the companies. Given the exogenous input of market prices, the value of a group of companies will be determined by these attributes. Many real-world problems can be likewise modeled by picking the right attributes. As attributes apply to both individual agents and groups of agents, we propose the use of coalitional game theory to understand what groups may form and what payoffs the agents may expect in such models. Coalitional game theory focuses on what groups of agents can achieve, and thus connects strongly with e-commerce, as the Internet economies have significantly enhanced the abilities of business to identify and capitalize on profitable opportunities of cooperation. Our goal is to understand the computational aspects of computing the solution concepts (stable and/or fair distribution of payoffs, formally defined in Section 3) for coalitional games described using attributes. 
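As a toy illustration of a value that depends only on attributes, the sketch below scores a soccer squad by the best available skill in each of the four positions; the skill numbers and the particular scoring rule are invented for this example.

```python
from typing import Dict, List

POSITIONS = ["forward", "midfielder", "defender", "goalkeeper"]

def team_value(players: List[Dict[str, float]]) -> float:
    """Value depends only on the players' attributes (skill per position):
    here, the best available skill for each position, summed."""
    return sum(max((p.get(pos, 0.0) for p in players), default=0.0)
               for pos in POSITIONS)

squad = [
    {"forward": 0.9, "midfielder": 0.4},
    {"defender": 0.8, "goalkeeper": 0.7},
    {"midfielder": 0.85, "defender": 0.6},
]
print(team_value(squad))  # 0.9 + 0.85 + 0.8 + 0.7 = 3.25
```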
Our contributions can be summarized as follows: • We define a formal representation for coalitional games based on attributes, and relate this representation to others proposed in the literature. We show that when compared to other representations, there exists games for which a multi-attribute description can be exponentially more succinct, and for no game it is worse. • Given the generality of the model, positive results carry over to other representations. We discuss two positive results in the paper, one for the Shapley value and one for the core, and show how these imply related results in the literature. 170 • We study an approximation heuristic for the Shapley value when its exact values cannot be found efficiently. We provide an explicit bound on the maximum error of the estimate, and show that the bound is asymptotically tight. We also carry out experiments to evaluate how the heuristic performs on random instances.1 2. RELATED WORK Coalitional game theory has been well studied in economics [9, 10, 14]. A vast amount of literature have focused on defining and comparing solution concepts, and determining their existence and properties. The first algorithmic study of coalitional games, as far as we know, is performed by Deng and Papadimitriou in [5]. They consider coalitional games defined on graphs, where the players are the vertices and the value of coalition is determined by the sum of the weights of the edges spanned by these players. This can be efficiently modeled and generalized using attributes. As a formal representation, multi-attribute coalitional games is closely related to the multi-issue representation of Conitzer and Sandholm [3] and our work on marginal contribution networks [7]. Both of these representations are based on dividing a coalitional game into subgames (termed issues in [3] and rules in [7]), and aggregating the subgames via linear combination. The key difference in our work is the unrestricted aggregation of subgames: the aggregation could be via a polynomial function of the attributes, or even by treating the subgames as input to another computational problem such as a min-cost flow problem. The relationship of these models will be made clear after we define the multiattribute representation in Section 4. Another representation proposed in the literature is one specialized for superadditive games by Conitzer and Sandholm [2]. This representation is succinct, but to find the values of some coalitions may require solving an NP-hard problem. While it is possible for multi-attribute coalitional games to efficiently represent these games, it necessarily requires the solution to an NP-hard problem in order to find out the values of some coalitions. In this paper, we stay within the boundary of games that admits efficient algorithm for determining the value of coalitions. We will therefore not make further comparisons with [2]. The model of coalitional games with attributes has been considered in the works of Shehory and Kraus. They model the agents as possessing capabilities that indicates their proficiencies in different areas, and consider how to efficiently allocate tasks [12] and the dynamics of coalition formation [13]. Our work differs significantly as our focus is on reasoning about solution concepts. Our model also covers a wider scope as attributes generalize the notion of capabilities. Yokoo et al. 
have also considered a model of coalitional games where agents are modeled by sets of skills, and these skills in turn determine the value of coalitions [15]. There are two major differences between their work and ours. Firstly, Yokoo et al. assume that each skill is fundamentally different from another, hence no two agents may possess the same skill. Also, they focus on developing new solution concepts that are robust with respect to manipulation by agents. Our focus is on reasoning about traditional solution concepts. 1 We acknowledge that random instances may not be typical of what happens in practice, but given the generality of our model, it provides the most unbiased view. Our work is also related to the study of cooperative games with committee control [4]. In these games, there is usually an underlying set of resources each controlled by a (possibly overlapping) set of players known as the committee, engaged in a simple game (defined in Section 3). multiattribute coalitional games generalize these by considering relationship between the committee and the resources beyond simple games. We note that when restricted to simple games, we derive similar results to that in [4]. 3. PRELIMINARIES In this section, we will review the relevant concepts of coalitional game theory and its two most important solution concepts - the Shapley value and the core. We will then define the computational questions that will be studied in the second half of the paper. 3.1 Coalitional Games Throughout this paper, we assume that payoffs to groups of agents can be freely distributed among its members. This transferable utility assumption is commonly made in coalitional game theory. The canonical representation of a coalitional game with transferable utility is its characteristic form. Definition 1. A coalition game with transferable utility in characteristic form is denoted by the pair N, v , where • N is the set of agents; and • v : 2N → R is a function that maps each group of agents S ⊆ N to a real-valued payoff. A group of agents in a game is known as a coalition, and the entire set of agents is known as the grand coalition. An important class of coalitional games is the class of monotonic games. Definition 2. A coalitional game is monotonic if for all S ⊂ T ⊆ N, v(S) ≤ v(T). Another important class of coalitional games is the class of simple games. In a simple game, a coalition either wins, in which case it has a value of 1, or loses, in which case it has a value of 0. It is often used to model voting situations. Simple games are often assumed to be monotonic, i.e., if S wins, then for all T ⊇ S, T also wins. This coincides with the notion of using simple games as a model for voting. If a simple game is monotonic, then it is fully described by the set of minimal winning coalitions, i.e., coalitions S for which v(S) = 1 but for all coalitions T ⊂ S, v(T) = 0. An outcome in a coalitional game specifies the utilities the agents receive. A solution concept assigns to each coalitional game a set of reasonable outcomes. Different solution concepts attempt to capture in some way outcomes that are stable and/or fair. Two of the best known solution concepts are the Shapley value and the core. The Shapley value is a normative solution concept that prescribes a fair way to divide the gains from cooperation when the grand coalition is formed. The division of payoff to agent i is the average marginal contribution of agent i over all possible permutations of the agents. Formally, Definition 3. 
The Shapley value of agent i, φi(v), in game N, v is given by the following formula: φi(v) = Σ_{S⊆N\{i}} [|S|! (|N| − |S| − 1)! / |N|!] (v(S ∪ {i}) − v(S)). The core is a descriptive solution concept that focuses on outcomes that are stable. Stability under the core means that no set of players can jointly deviate to improve their payoffs. Definition 4. An outcome x ∈ R^|N| is in the core of the game N, v if for all S ⊆ N, Σ_{i∈S} xi ≥ v(S). Note that the core of a game may be empty, i.e., there may not exist any payoff vector that satisfies the stability requirement for the given game. 3.2 Computational Problems We will study the following three problems related to solution concepts in coalitional games. Problem 1. (Shapley Value) Given a description of the coalitional game and an agent i, compute the Shapley value of agent i. Problem 2. (Core Membership) Given a description of the coalitional game and a payoff vector x such that Σ_{i∈N} xi = v(N), determine if Σ_{i∈S} xi ≥ v(S) for all S ⊆ N. Problem 3. (Core Non-emptiness) Given a description of the coalitional game, determine if there exists any payoff vector x such that Σ_{i∈S} xi ≥ v(S) for all S ⊆ N, and Σ_{i∈N} xi = v(N). Note that the complexity of the above problems depends on how the game is described. All these problems will be easy if the game is described by its characteristic form, but only so because the description takes space exponential in the number of agents, and hence a simple brute-force approach takes time polynomial in the size of the input description. To properly understand the computational complexity questions, we have to look at compact representations. 4. FORMAL MODEL In this section, we will give a formal definition of multi-attribute coalitional games, and show how it is related to some of the representations discussed in the literature. We will also discuss some limitations of our proposed approach. 4.1 Multi-Attribute Coalitional Games A multi-attribute coalitional game (MACG) consists of two parts: a description of the attributes of the agents, which we term an attribute model, and a function that assigns values to combinations of attributes. Together, they induce a coalitional game over the agents. We first define the attribute model. Definition 5. An attribute model is a tuple N, M, A , where • N denotes the set of agents, of size n; • M denotes the set of attributes, of size m; • A ∈ R^{m×n}, the attribute matrix, describes the values of the attributes of the agents, with Aij denoting the value of attribute i for agent j. We can directly define a function that maps combinations of attributes to real values. However, for many problems, we can describe the function more compactly by computing it in two steps: we first compute an aggregate value for each attribute, then compute the values of combinations of attributes using only the aggregated information. Formally, Definition 6. An aggregating function (or aggregator) takes as input a row of the attribute matrix and a coalition S, and summarizes the attributes of the agents in S with a single number. We can treat it as a mapping from R^n × 2^N → R. Aggregators often perform basic arithmetic or logical operations. For example, an aggregator may compute the sum of the attributes, or evaluate a Boolean expression by treating the agents i ∈ S as true and j ∉ S as false. Analogous to the notion of simple games, we call an aggregator simple if its range is {0, 1}. For any aggregator, there is a set of relevant agents, and a set of irrelevant agents.
An agent i is irrelevant to aggregator aj if aj (S ∪ {i}) = aj (S) for all S ⊆ N. A relevant agent is one not irrelevant. Given the attribute matrix, an aggregator assigns a value to each coalition S ⊆ N. Thus, each aggregator defines a game over N. For aggregator aj , we refer to this induced game as the game of attribute j, and denote it with aj (A). When the attribute matrix is clear from the context, we may drop A and simply denote the game as aj . We may refer to the game as the aggregator when no ambiguities arise. We now define the second step of the computation with the help of aggregators. Definition 7. An aggregate value function takes as input the values of the aggregators and maps these to a real value. In this paper, we will focus on having one aggregator per attribute. Therefore, in what follows, we will refer to the aggregate value function as a function over the attributes. Note that when all aggregators are simple, the aggregate value function implicitly defines a game over the attributes, as it assigns a value to each set of attributes T ⊆ M. We refer to this as the game among attributes. We now define multi-attribute coalitional game. Definition 8. A multi-attribute coalitional game is defined by the tuple N, M, A, a, w , where • N, M, A is an attribute model; • a is a set of aggregators, one for each attribute; we can treat the set together as a vector function, mapping Rm×n × 2N → Rm • w : Rm → R is an aggregate value function. This induces a coalitional game with transferable payoffs N, v with players N and the value function defined by v(S) = w(a(A, S)) Note that MACG as defined is fully capable of representing any coalitional game N, v . We can simply take the set of attributes as equal to the set of agents, i.e., M = N, an identity matrix for A, aggregators of sums, and the aggregate value function w to be v. 172 4.2 An Example Let us illustrate how MACG can be used to represent a game with a simple example. Suppose there are four types of resources in the world: gold, silver, copper, and iron, that each agent is endowed with some amount of these resources, and there is a fixed price for each of the resources in the market. This game can be described using MACG with an attribute matrix A, where Aij denote the amount of resource i that agent j is endowed. For each resource, the aggregator sums together the amount of resources the agents have. Finally, the aggregate value function takes the dot product between the market price vector and the aggregate vector. Note the inherent flexibility in the model: only limited work would be required to update the game as the market price changes, or when a new agent arrives. 4.3 Relationship with Other Representations As briefly discussed in Section 2, MACG is closely related to two other representations in the literature, the multiissue representation of Conitzer and Sandholm [3], and our work on marginal contribution nets [7]. To make their relationships clear, we first review these two representations. We have changed the notations from the original papers to highlight their similarities. Definition 9. A multi-issue representation is given as a vector of coalitional games, (v1, v2, ... vm), each possibly with a varying set of agents, say N1, ... , Nm. The coalitional game N, v induced by multi-issue representation has player set N = Ëm i=1 Ni, and for each coalition S ⊆ N, v(S) = Èm i=1 v(S ∩ Ni). The games vi are assumed to be represented in characteristic form. Definition 10. 
A marginal contribution net is given as a set of rules (r1, r2, ... , rm), where rule ri has a weight wi, and a pattern pi that is a conjunction over literals (positive or negative). The agents are represented as literals. A coalition S is said to satisfy the pattern pi, if we treat the agents i ∈ S as true, an agent j /∈ S as false, pi(S) evaluates to true. Denote the set of literals involved in rule i by Ni. The coalitional game N, v induced by a marginal contribution net has player set N = Ëm i=1 Ni, and for each coalition S ⊆ N, v(S) = È i:pi(S)=true wi. From these definitions, we can see the relationships among these three representations clearly. An issue of a multi-issue representation corresponds to an attribute in MACG. Similarly, a rule of a marginal contribution net corresponds to an attribute in MACG. The aggregate value functions are simple sums and weighted sums for the respective representations. Therefore, it is clear that MACG will be no less succinct than either representation. However, MACG differs in two important way. Firstly, there is no restriction on the operations performed by the aggregate value function over the attributes. This is an important generalization over the linear combination of issues or rules in the other two approaches. In particular, there are games for which MACG can be exponentially more compact. The proof of the following proposition can be found in the Appendix. Proposition 1. Consider the parity game N, v where coalition S ⊆ N has value v(S) = 1 if |S| is odd, and v(S) = 0 otherwise. MACG can represent the game in O(n) space. Both multi-issue representation and marginal contribution nets requires O(2n ) space. A second important difference of MACG is that the attribute model and the value function is cleanly separated. As suggested in the example in Section 4.2, this often allows us more efficient update of the values of the game as it changes. Also, the same attribute model can be evaluated using different value functions, and the same value function can be used to evaluate different attribute model. Therefore, MACG is very suitable for representing multiple games. We believe the problems of updating games and representing multiple games are interesting future directions to explore. 4.4 Limitation of One Aggregator per Attribute Before focusing on one aggregator per attribute for the rest of the paper, it is natural to wonder if any is lost per such restriction. The unfortunate answer is yes, best illustrated by the following. Consider again the problem of forming a soccer team discussed in the introduction, where we model the attributes of the agents as their ability to take the four positions of the field, and the value of a team depends on the positions covered. If we first aggregate each of the attribute individually, we will lose the distributional information of the attributes. In other words, we will not be able to distinguish between two teams, one of which has a player for each position, the other has one player who can play all positions, but the rest can only play the same one position. This loss of distributional information can be recovered by using aggregators that take as input multiple rows of the attribute matrix rather than just a single row. Alternatively, if we leave such attributes untouched, we can leave the burden of correctly evaluating these attributes to the aggregate value function. 
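Setting the distributional subtleties aside, the basic evaluation pipeline of Definition 8 (per-attribute aggregators followed by an aggregate value function) can be sketched directly on the resource-and-price example of Section 4.2. The endowments and prices below are invented numbers used only for illustration.

```python
from typing import List, Set

# Attribute matrix A: rows = attributes (resources), columns = agents.
# A[i][j] is the amount of resource i held by agent j (invented numbers).
A: List[List[float]] = [
    [3.0, 0.0, 1.0],   # gold
    [0.0, 2.0, 2.0],   # silver
    [5.0, 1.0, 0.0],   # copper
    [0.0, 4.0, 3.0],   # iron
]
PRICES = [10.0, 4.0, 1.0, 2.0]   # exogenous market prices, one per resource

def aggregate(A: List[List[float]], S: Set[int]) -> List[float]:
    """One sum-aggregator per attribute: total amount of each resource in S."""
    return [sum(row[j] for j in S) for row in A]

def w(agg: List[float]) -> float:
    """Aggregate value function: dot product with the market prices."""
    return sum(p * x for p, x in zip(PRICES, agg))

def v(S: Set[int]) -> float:
    """Induced coalitional value v(S) = w(a(A, S))."""
    return w(aggregate(A, S))

print(v({0, 1}), v({0, 1, 2}))
```

Updating the game when market prices change only requires replacing PRICES; the attribute model is untouched, which is the separation discussed above.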
However, for many problems that we found in the literature, such as the transportation domain of [12] and the flow game setting of [4], the distribution of attributes does not affect the value of the coalitions. In addition, the problem may become unmanageably complex as we introduce more complicated aggregators. Therefore, we will focus on the representation as defined in Definition 8. 5. SHAPLEY VALUE In this section, we focus on computational issues of finding the Shapley value of a player in MACG. We first set up the problem with the use of oracles to avoid complexities arising from the aggregators. We then show that when attributes are linearly separable, the Shapley value can be efficiently computed. This generalizes the proofs of related results in the literature. For the non-linearly separable case, we consider a natural heuristic for estimating the Shapley value, and study the heuristic theoretically and empirically. 5.1 Problem Setup We start by noting that computing the Shapley value for simple aggregators can be hard in general. In particular, we can define aggregators to compute weighted majority over its input set of agents. As noted in [6], finding the Shapley value of a weighted majority game is #P-hard. Therefore, discussion of complexity of Shapley value for MACG with unrestricted aggregators is moot. Instead of placing explicit restriction on the aggregator, we assume that the Shapley value of the aggregator can be 173 answered by an oracle. For notation, let φi(u) denote the Shapley value for some game u. We make the following assumption: Assumption 1. For each aggregator aj in a MACG, there is an associated oracle that answers the Shapley value of the game of attribute j. In other words, φi(aj ) is known. For many aggregators that perform basic operations over its input, polynomial time oracle for Shapley value exists. This include operations such as sum, and symmetric functions when the attributes are restricted to {0, 1}. Also, when only few agents have an effect on the aggregator, brute-force computation for Shapley value is feasible. Therefore, the above assumption is reasonable for many settings. In any case, such abstraction allows us to focus on the aggregate value function. 5.2 Linearly Separable Attributes When the aggregate value function can be written as a linear function of the attributes, the Shapley value of the game can be efficiently computed. Theorem 1. Given a game N, v represented as a MACG N, M, A, a, w , if the aggregate value function can be written as a linear function of its attributes, i.e., w(a(A, S)) = m j=1 cj aj (A, S) The Shapley value of agent i in N, v is given by φi(v) = m j=1 cj φi(aj ) (1) Proof. First, we note that Shapley value satisfies an additivity axiom [11]. The Shapley value satisfies additivity, namely, φi(a + b) = φi(a) + φi(b), where N, a + b is a game defined to be (a + b)(S) = a(S) + b(S) for all S ⊆ N. It is also clear that Shapley value satisfies scaling, namely φi(αv) = αφi(v) where (αv)(S) = αv(S) for all S ⊆ N. Since the aggregate value function can be expressed as a weighted sum of games of attributes, φi(v) = φi(w(a)) = φi( m j=1 cjaj ) = m j=1 cjφi(aj ) Many positive results regarding efficient computation of Shapley value in the literature depends on some form of linearity. Examples include the edge-spanning game on graphs by Deng and Papadimitriou [5], the multi-issue representation of [3], and the marginal contribution nets of [7]. 
The key to determine if the Shapley value can be efficiently computed depends on the linear separability of attributes. Once this is satisfied, as long as the Shapley value of the game of attributes can be efficiently determined, the Shapley value of the entire game can be efficiently computed. Corollary 1. The Shapley value for the edge-spanning game of [5], games in multi-issue representation [3], and games in marginal contribution nets [7], can be computed in polynomial time. 5.3 Polynomial Combination of Attributes When the aggregate value function cannot be expressed as a linear function of its attributes, computing the Shapley value exactly is difficult. Here, we will focus on aggregate value function that can be expressed as some polynomial of its attributes. If we do not place a limit on the degree of the polynomial, and the game N, v is not necessarily monotonic, the problem is #P-hard. Theorem 2. Computing the Shapley value of a MACG N, M, A, a, w , when w can be an arbitrary polynomial of the aggregates a, is #P-hard, even when the Shapley value of each aggregator can be efficiently computed. The proof is via reduction from three-dimensional matching, and details can be found in the Appendix. Even if we restrict ourselves to monotonic games, and non-negative coefficients for the polynomial aggregate value function, computing the exact Shapley value can still be hard. For example, suppose there are two attributes. All agents in some set B ⊆ N possess the first attribute, and all agents in some set C ⊆ N possess the second, and B and C are disjoint. For a coalition S ⊆ N, the aggregator for the first evaluates to 1 if and only if |S ∩ B| ≥ b , and similarly, the aggregator for the second evaluates to 1 if and only if |S ∩ C| ≥ c . Let the cardinality of the sets B and C be b and c. We can verify that the Shapley value of an agent i in B equals φi = 1 b b −1 i=0 b i ¡ c c −1 ¡ b+c c +i−1 ¡ c − c + 1 b + c − c − i + 1 The equation corresponds to a weighted sum of probability values of hypergeometric random variables. The correspondence with hypergeometric distribution is due to sampling without replacement nature of Shapley value. As far as we know, there is no close-form formula to evaluate the sum above. In addition, as the number of attributes involved increases, we move to multi-variate hypergeometric random variables, and the number of summands grow exponentially in the number of attributes. Therefore, it is highly unlikely that the exact Shapley value can be determined efficiently. Therefore, we look for approximation. 5.3.1 Approximation First, we need a criteria for evaluating how well an estimate, ˆφ, approximates the true Shapley value, φ. We consider the following three natural criteria: • Maximum underestimate: maxi φi/ˆφi • Maximum overestimate: maxi ˆφi/φi • Total variation: 1 2 È i |φi − ˆφi|, or alternatively maxS | È i∈S φi − È i∈S ˆφi| The total variation criterion is more meaningful when we normalize the game to having a value of 1 for the grand coalition, i.e., v(N) = 1. We can also define additive analogues of the under- and overestimates, especially when the games are normalized. 174 We will assume for now that the aggregate value function is a polynomial over the attributes with non-negative coefficients. We will also assume that the aggregators are simple. We will evaluate a specific heuristic that is analogous to Equation (1). 
Suppose the aggregate value function can be written as a polynomial with p terms: w(a(A, S)) = Σ_{j=1}^{p} cj a_{j(1)}(A, S) a_{j(2)}(A, S) · · · a_{j(kj)}(A, S)   (2) For term j, the coefficient of the term is cj, its degree is kj, and the attributes involved in the term are j(1), ... , j(kj). We compute an estimate φ̂ of the Shapley value as φ̂i = Σ_{j=1}^{p} Σ_{l=1}^{kj} (cj/kj) φi(a_{j(l)})   (3) The idea behind the estimate is that for each term, we divide the value of the term equally among all its attributes. This is represented by the factor cj/kj. Then, for each attribute of an agent, we assign the player a share of value from the attribute. This share is determined by the Shapley value of the simple game of that attribute. Without considering the details of the simple games, this constitutes a fair (but blind) rule of sharing. 5.3.2 Theoretical analysis of heuristic We can derive a simple and tight bound for the maximum (multiplicative) underestimate of the heuristic estimate. Theorem 3. Given a game N, v represented as a MACG N, M, A, a, w , suppose w can be expressed as a polynomial function of its attributes (cf. Equation (2)). Let K = max_j kj, i.e., the maximum degree of the polynomial. Let φ̂ denote the estimated Shapley value using Equation (3), and φ denote the true Shapley value. For all i ∈ N, φi ≤ K φ̂i. Proof. We bound the maximum underestimate term by term. Let tj be the j-th term of the polynomial. We note that the term can be treated as a game among attributes, as it assigns a value to each coalition S ⊆ N. Without loss of generality, renumber attributes j(1) through j(kj) as 1 through kj, so that tj(S) = cj Π_{l=1}^{kj} al(A, S). To make the equations less cluttered, let B(N, S) = |S|! (|N| − |S| − 1)! / |N|!, and for a game a, let the contribution of agent i to a group S with i ∉ S be ∆i(a, S) = a(S ∪ {i}) − a(S). The true Shapley value of the game tj is φi(tj) = cj Σ_{S⊆N\{i}} B(N, S) ∆i(tj, S). For each coalition S with i ∉ S, ∆i(tj, S) = 1 if and only if for at least one attribute, say l∗, ∆i(a_{l∗}, S) = 1. Therefore, if we sum over all the attributes, we are sure to have included l∗: φi(tj) ≤ cj Σ_{l=1}^{kj} Σ_{S⊆N\{i}} B(N, S) ∆i(al, S) = kj Σ_{l=1}^{kj} (cj/kj) φi(al) = kj φ̂i(tj). Summing over the terms, we see that the worst-case underestimate is by a factor of the maximum degree. Without loss of generality, since the bound is multiplicative, we can normalize the game to having v(N) = 1. As a corollary, because the estimate cannot undervalue any set by more than a factor of K, we obtain a bound on the total variation: Corollary 2. The total variation between the estimated Shapley value and the true Shapley value, for a K-degree bounded polynomial aggregate value function, is at most (K − 1)/K. We can show that this bound is tight. Example 1. Consider a game with n players and K attributes. Let the first (n − 1) agents be members of the first (K − 1) attributes, and let each of the corresponding aggregators return 1 if any one of the first (n − 1) agents is present. Let the n-th agent be the sole member of the K-th attribute. The estimated Shapley value assigns ((K − 1)/K) · (1/(n − 1)) to each of the first (n − 1) agents and 1/K to the n-th agent. However, the true Shapley value of the n-th agent tends to 1 as n → ∞, and the total variation approaches (K − 1)/K. In general, we cannot bound how much φ̂ may overestimate the true Shapley value. The problem is that φ̂i may be non-zero for an agent i even though i has no influence over the outcome of the game when the attributes are multiplied together, as illustrated by the following example. Example 2.
Consider a game with 2 players and 2 attributes, and let the first agent be a member of both attributes, and the other agent a member of the second attribute only. For a coalition S, the first aggregator evaluates to 1 if agent 1 ∈ S, and the second aggregator evaluates to 1 if both agents are in S. While agent 2 is not a dummy with respect to the second attribute, it is a dummy with respect to the product of the attributes. Agent 2 will be assigned a value of 1/4 by the estimate. As mentioned, a simple monotonic game is fully described by its set of minimal winning coalitions. When the simple aggregators are represented as such, it is possible to check, in polynomial time, for agents that turn into dummies after the attributes are multiplied together. Therefore, we can improve the heuristic estimate in this special case. 5.3.3 Empirical evaluation Due to a lack of benchmark problems for coalitional games, we have tested the heuristic on random instances. We believe more meaningful results can be obtained when we have real instances to test this heuristic on. Our experiment is set up as follows. We control three parameters of the experiment: the number of players (6-10), the number of attributes (3-8), and the maximum degree of the polynomial (2-5). For each attribute, we randomly sample one to three minimal winning coalitions. We then randomly generate a polynomial of the desired maximum degree with a random number (3-12) of terms, each with a random positive weight. We normalize each game to have v(N) = 1. The results of the experiments are shown in Figure 1. [Figure 1: Experimental results. Panel (a): effect of maximum degree; panel (b): effect of number of attributes. Both panels plot total variation distance against the number of players.] The y-axis of the graphs shows the total variation, and the x-axis the number of players. Each datapoint is an average of approximately 700 random samples. Figure 1(a) explores the effect of the maximum degree and the number of players when the number of attributes is fixed (at six). As expected, the total variation increases as the maximum degree increases. On the other hand, there is only a very small increase in error as the number of players increases. The error is nowhere near the theoretical worst-case bound of 1/2 to 4/5 for polynomials of degrees 2 to 5. Figure 1(b) explores the effect of the number of attributes and the number of players when the maximum degree of the polynomial is fixed (at three). We first note that these three lines are quite tightly clustered together, suggesting that the number of attributes has relatively little effect on the error of the estimate. As the number of attributes increases, the total variation decreases. We think this is an interesting phenomenon. It is probably due to the precise construction required for the worst-case bound: as more attributes are available, we have more diverse terms in the polynomial, and the diversity pushes away from the worst-case bound. 6. CORE-RELATED QUESTIONS In this section, we look at the complexity of the two computational problems related to the core: Core Non-emptiness and Core Membership. We show that the non-emptiness of the core of the game among attributes and of the cores of the aggregators implies non-emptiness of the core of the game induced by the MACG.
We also show that there appears to be no such general relationship that relates the core memberships of the game among attributes, games of attributes, and game induced by MACG. 6.1 Problem Setup There are many problems in the literature for which the questions of Core Non-emptiness and Core Membership are known to be hard [1]. For example, for the edgespanning game that Deng and Papadimitriou studied [5], both of these questions are coNP-complete. As MACG can model the edge-spanning game in the same amount of space, these hardness results hold for MACG as well. As in the case for computing Shapley value, we attempt to find a way around the hardness barrier by assuming the existence of oracles, and try to build algorithms with these oracles. First, we consider the aggregate value function. Assumption 2. For a MACG N, M, A, a, w , we assume there are oracles that answers the questions of Core Nonemptiness, and Core Membership for the aggregate value function w. When the aggregate value function is a non-negative linear function of its attributes, the core is always non-empty, and core membership can be determined efficiently. The concept of core for the game among attributes makes the most sense when the aggregators are simple games. We will further assume that these simple games are monotonic. Assumption 3. For a MACG N, M, A, a, w , we assume all aggregators are monotonic and simple. We also assume there are oracles that answers the questions of Core Nonemptiness, and Core Membership for the aggregators. We consider this a mild assumption. Recall that monotonic simple games are fully described by their set of minimal winning coalitions (cf Section 3). If the aggregators are represented as such, Core Non-emptiness and Core Membership can be checked in polynomial time. This is due to the following well-known result regarding simple games: Lemma 1. A simple game N, v has a non-empty core if and only if it has a set of veto players, say V , such that v(S) = 0 for all S ⊇ V . Further, A payoff vector x is in the core if and only if xi = 0 for all i /∈ V . 6.2 Core Non-emptiness There is a strong connection between the non-emptiness of the cores of the games among attributes, games of the attributes, and the game induced by a MACG. Theorem 4. Given a game N, v represented as a MACG N, M, A, a, w , if the core of the game among attributes, 176 M, w , is non-empty, and the cores of the games of attributes are non-empty, then the core of N, v is non-empty. Proof. Let u be an arbitrary payoff vector in the core of the game among attributes, M, w . For each attribute j, let θj be an arbitrary payoff vector in the core of the game of attribute j. By Lemma 1, each attribute j must have a set of veto players; let this set be denoted by Pj . For each agent i ∈ N, let yi = È j ujθj i . We claim that this vector y is in the core of N, v . Consider any coalition S ⊆ N, v(S) = w(a(A, S)) ≤ j:S⊇P j uj (4) This is true because an aggregator cannot evaluate to 1 without all members of the veto set. For any attribute j, by Lemma 1, È i∈P j θj i = 1. Therefore, j:S⊇P j uj = j:S⊇P j uj i∈P j θj i = i∈S j:S⊇P j ujθj i ≤ i∈S yi Note that the proof is constructive, and hence if we are given an element in the core of the game among attributes, we can construct an element of the core of the coalitional game. From Theorem 4, we can obtain the following corollaries that have been previously shown in the literature. Corollary 3. 
The core of the edge-spanning game of [5] is non-empty when the edge weights are non-negative. Proof. Let the players be the vertices, and their attributes the edges incident on them. For each attribute, there is a veto set - namely, both endpoints of the edges. As previously observed, an aggregate value function that is a non-negative linear function of its aggregates has non-empty core. Therefore, the precondition of Theorem 4 is satisfied, and the edge-spanning game with non-negative edge weights has a non-empty core. Corollary 4 (Theorem 1 of [4]). The core of a flow game with committee control, where each edge is controlled by a simple game with a veto set of players, is non-empty. Proof. We treat each edge of the flow game as an attribute, and so each attribute has a veto set of players. The core of a flow game (without committee) has been shown to be non-empty in [8]. We can again invoke Theorem 4 to show the non-emptiness of core for flow games with committee control. However, the core of the game induced by a MACG may be non-empty even when the core of the game among attributes is empty, as illustrated by the following example. Example 3. Suppose the minimal winning coalition of all aggregators in a MACG N, M, A, a, w is N, then v(S) = 0 for all coalitions S ⊂ N. As long as v(N) ≥ 0, any nonnegative vector x that satisfies È i∈N xi = v(N) is in the core of N, v . Complementary to the example above, when all the aggregators have empty cores, the core of N, v is also empty. Theorem 5. Given a game N, v represented as a MACG N, M, A, a, w , if the cores of all aggregators are empty, v(N) > 0, and for each i ∈ N, v({i}) ≥ 0, then the core of N, v is empty. Proof. Suppose the core of N, v is non-empty. Let x be a member of the core, and pick an agent i such that xi > 0. However, for each attribute, since the core is empty, by Lemma 1, there are at least two disjoint winning coalitions. Pick the winning coalition Sj that does not include i for each attribute j. Let S∗ = Ë j Sj . Because S∗ is winning for all coalitions, v(S∗ ) = v(N). However, v(N) = j∈N xj = xi + j /∈N xj ≥ xi + j∈S∗ xj > j∈S∗ xj Therefore, v(S∗ ) > È j∈S∗ xj, contradicting the fact that x is in the core of N, v . We do not have general results regarding the problem of Core Non-emptiness when some of the aggregators have non-empty cores while others have empty cores. We suspect knowledge about the status of the cores of the aggregators alone is insufficient to decide this problem. 6.3 Core Membership Since it is possible for the game induced by the MACG to have a non-empty core when the core of the aggregate value function is empty (Example 3), we try to explore the problem of Core Membership assuming that the core of both the game among attributes, M, w , and the underlying game, N, v , is known to be non-empty, and see if there is any relationship between their members. One reasonable requirement is whether a payoff vector x in the core of N, v can be decomposed and re-aggregated to a payoff vector y in the core of M, w . Formally, Definition 11. We say that a vector x ∈ Rn ≥0 can be decomposed and re-aggregated into a vector y ∈ Rm ≥0 if there exists Z ∈ Rm×n ≥0 , such that yi = n j=1 Zij for all i xj = m i=1 Zij for all j We may refer Z as shares. When there is no restriction on the entries of Z, it is always possible to decompose a payoff vector x in the core of N, v to a payoff vector y in the core of M, w . 
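The constructive step in the proof of Theorem 4 can be sketched directly: given a core element u of the game among attributes and, for each attribute j, a core element θ^j of its aggregator (which, by Lemma 1, places all payoff on the veto set P_j), the vector y with y_i = Σ_j u_j θ_i^j lies in the core of the induced game. The veto sets and payoffs below are invented for illustration.

```python
from typing import Dict, List

def core_element_from_attributes(u: List[float],
                                 theta: List[Dict[int, float]]) -> Dict[int, float]:
    """Theorem 4's construction: y_i = sum_j u_j * theta^j_i, where theta^j is a
    core element of aggregator j, supported only on its veto players."""
    y: Dict[int, float] = {}
    for u_j, theta_j in zip(u, theta):
        for i, share in theta_j.items():
            y[i] = y.get(i, 0.0) + u_j * share
    return y

# Invented example: two attributes with veto sets {0, 1} and {2};
# u is a core element of the game among attributes.
u = [4.0, 6.0]
theta = [{0: 0.5, 1: 0.5},   # splits attribute 1's payoff across its veto set
         {2: 1.0}]           # attribute 2's payoff goes to its single veto player
print(core_element_from_attributes(u, theta))   # {0: 2.0, 1: 2.0, 2: 6.0}
```

Returning to the decomposition question: as noted above, when the entries of Z are unrestricted, such a decomposition and re-aggregation always exists.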
However, it seems reasonable to restrict that if an agent j is irrelevant to the aggregator i, i.e., i never changes the outcome of aggregator j, then Zij should be restricted to be 0. Unfortunately, this restriction is already too strong. Example 4. Consider a MACG N, M, A, a, w with two players and three attributes. Suppose agent 1 is irrelevant to attribute 1, and agent 2 is irrelevant to attributes 2 and 3. For any set of attributes T ⊆ M, let w be defined as w(T) = 0 if |T| = 0 or 1 6 if |T| = 2 10 if |T| = 3 177 Since the core of a game with a finite number of players forms a polytope, we can verify that the set of vectors (4, 4, 2), (4, 2, 4), and (2, 4, 4), fully characterize the core C of M, w . On the other hand, the vector (10, 0) is in the core of N, v . This vector cannot be decomposed and re-aggregated to a vector in C under the stated restriction. Because of the apparent lack of relationship among members of the core of N, v and that of M, w , we believe an algorithm for testing Core Membership will require more input than just the veto sets of the aggregators and the oracle of Core Membership for the aggregate value function. 7. CONCLUDING REMARKS Multi-attribute coalitional games constitute a very natural way of modeling problems of interest. Its space requirement compares favorably with other representations discussed in the literature, and hence it serves well as a prototype to study computational complexity of coalitional game theory for a variety of problems. Positive results obtained under this representation can easily be translated to results about other representations. Some of these corollary results have been discussed in Sections 5 and 6. An important direction to explore in the future is the question of efficiency in updating a game, and how to evaluate the solution concepts without starting from scratch. As pointed out at the end of Section 4.3, MACG is very naturally suited for updates. Representation results regarding efficiency of updates, and algorithmic results regarding how to compute the different solution concepts from updates, will both be very interesting. Our work on approximating the Shapley value when the aggregate value function is a non-linear function of the attributes suggests more work to be done there as well. Given the natural probabilistic interpretation of the Shapley value, we believe that a random sampling approach may have significantly better theoretical guarantees. 8. REFERENCES [1] J. M. Bilbao, J. R. Fern´andez, and J. J. L´opez. Complexity in cooperative game theory. http://www.esi.us.es/~mbilbao. [2] V. Conitzer and T. Sandholm. Complexity of determining nonemptiness of the core. In Proc. 18th Int. Joint Conf. on Artificial Intelligence, pages 613-618, 2003. [3] V. Conitzer and T. Sandholm. Computing Shapley values, manipulating value division schemes, and checking core membership in multi-issue domains. In Proc. 19th Nat. Conf. on Artificial Intelligence, pages 219-225, 2004. [4] I. J. Curiel, J. J. Derks, and S. H. Tijs. On balanced games and games with committee control. OR Spectrum, 11:83-88, 1989. [5] X. Deng and C. H. Papadimitriou. On the complexity of cooperative solution concepts. Math. Oper. Res., 19:257-266, May 1994. [6] M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman, New York, 1979. [7] S. Ieong and Y. Shoham. Marginal contribution nets: A compact representation scheme for coalitional games. In Proc. 6th ACM Conf. 
on Electronic Commerce, pages 193-202, 2005. [8] E. Kalai and E. Zemel. Totally balanced games and games of flow. Math. Oper. Res., 7:476-478, 1982. [9] A. Mas-Colell, M. D. Whinston, and J. R. Green. Microeconomic Theory. Oxford University Press, New York, 1995. [10] M. J. Osborne and A. Rubinstein. A Course in Game Theory. The MIT Press, Cambridge, Massachusetts, 1994. [11] L. S. Shapley. A value for n-person games. In H. W. Kuhn and A. W. Tucker, editors, Contributions to the Theory of Games II, number 28 in Annals of Mathematical Studies, pages 307-317. Princeton University Press, 1953. [12] O. Shehory and S. Kraus. Task allocation via coalition formation among autonomous agents. In Proc. 14th Int. Joint Conf. on Artificial Intelligence, pages 31-45, 1995. [13] O. Shehory and S. Kraus. A kernel-oriented model for autonomous-agent coalition-formation in general environments: Implentation and results. In Proc. 13th Nat. Conf. on Artificial Intelligence, pages 134-140, 1996. [14] J. von Neumann and O. Morgenstern. Theory of Games and Economic Behvaior. Princeton University Press, 1953. [15] M. Yokoo, V. Conitzer, T. Sandholm, N. Ohta, and A. Iwasaki. Coalitional games in open anonymous environments. In Proc. 20th Nat. Conf. on Artificial Intelligence, pages 509-515, 2005. Appendix We complete the missing proofs from the main text here. To prove Proposition 1, we need the following lemma. Lemma 2. Marginal contribution nets when all coalitions are restricted to have values 0 or 1 have the same representation power as an AND/OR circuit with negation at the literal level (i.e., AC0 circuit) of depth two. Proof. If a rule assigns a negative value in the marginal contribution nets, we can write the rule by a corresponding set of at most n rules, where n is the number of agents, such that each of which has positive values through application of De Morgan``s Law. With all values of the rules non-negative, we can treat the weighted summation step of marginal contribution nets can be viewed as an OR, and each rule as a conjunction over literals, possibly negated. This exactly match up with an AND/OR circuit of depth two. Proof (Proposition 1). The parity game can be represented with a MACG using a single attribute, aggregator of sum, and an aggregate value function that evaluates that sum modulus two. As a Boolean function, parity is known to require an exponential number of prime implicants. By Lemma 2, a prime implicant is the exact analogue of a pattern in a rule of marginal contribution nets. Therefore, to represent the parity function, a marginal contribution nets must be an exponential number of rules. Finally, as shown in [7], a marginal contribution net is at worst a factor of O(n) less compact than multi-issue representation. Therefore, multi-issue representation will also 178 take exponential space to represent the parity game. This is assuming that each issue in the game is represented in characteristic form. Proof (Theorem 2). An instance of three-dimensional matching is as follows [6]: Given set P ⊆ W × X × Y , where W , X, Y are disjoint sets having the same number q of elements, does there exist a matching P ⊆ P such that |P | = q and no two elements of P agree in any coordinate. For notation, let P = {p1, p2, ... , pK}. We construct a MACG N, M, A, a, w as follows: • M: Let attributes 1 to q correspond to elements in W , (q+1) to 2q correspond to elements in X, (2q+1) to 3q corresponds to element in Y , and let there be a special attribute (3q + 1). 
• N: Let player i correspond to pi, and let there be a special player ℓ.
• A: Let Aji = 1 if the element corresponding to attribute j is in pi. Thus, each of the first K columns has exactly three non-zero entries. We also set A(3q+1)ℓ = 1.
• a: For each aggregator j, aj(A(S)) = 1 if and only if the sum of row j of A(S) equals 1.
• w: The product over all aj.
In the game ⟨N, v⟩ that corresponds to this construction, v(S) = 1 if and only if all attributes are covered exactly once. Therefore, for ℓ ∉ T ⊆ N, v(T ∪ {ℓ}) − v(T) = 1 if and only if T covers attributes 1 to 3q exactly once. Since all such T, if any exist, must be of size q, the number of three-dimensional matchings is given by φℓ(v) · (K + 1)!/(q! (K − q)!).
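The concluding remarks suggest that a random-sampling approach may give better guarantees for approximating the Shapley value when the aggregate value function is non-linear in the attributes. The sketch below is not part of the paper; it is a minimal illustration of that suggestion on a small hypothetical MACG-style game (the sets B and C, the thresholds, and all function names are our own assumptions). It estimates Shapley values by averaging each player's marginal contribution over uniformly random orderings of the players, which follows directly from the probabilistic interpretation of the Shapley value.

```python
import random

def shapley_monte_carlo(players, value, num_samples=20000, seed=0):
    """Estimate Shapley values by averaging each player's marginal
    contribution over uniformly random orderings of the players."""
    rng = random.Random(seed)
    totals = {i: 0.0 for i in players}
    for _ in range(num_samples):
        order = list(players)
        rng.shuffle(order)
        coalition = set()
        prev = value(coalition)
        for i in order:
            coalition.add(i)
            cur = value(coalition)
            totals[i] += cur - prev
            prev = cur
    return {i: t / num_samples for i, t in totals.items()}

# Hypothetical MACG-style game: two simple aggregators (attributes) whose
# outputs are multiplied by the aggregate value function, so the game is
# not a linear function of its attributes.
B = {0, 1, 2}   # agents possessing attribute 1
C = {3, 4}      # agents possessing attribute 2

def value(S):
    a1 = 1 if len(S & B) >= 2 else 0   # attribute 1 won by two members of B
    a2 = 1 if len(S & C) >= 1 else 0   # attribute 2 won by one member of C
    return a1 * a2                     # aggregate value function: product

print(shapley_monte_carlo(sorted(B | C), value))
```

Because each random ordering yields an unbiased sample of every player's marginal contribution, the estimates converge to the true Shapley values, and standard concentration bounds quantify the number of samples needed for a given accuracy.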
I-57
Rumours and Reputation: Evaluating Multi-Dimensional Trust within a Decentralised Reputation System
In this paper we develop a novel probabilistic model of computational trust that explicitly deals with correlated multi-dimensional contracts. Our starting point is to consider an agent attempting to estimate the utility of a contract, and we show that this leads to a model of computational trust whereby an agent must determine a vector of estimates that represent the probability that any dimension of the contract will be successfully fulfilled, and a covariance matrix that describes the uncertainty and correlations in these probabilities. We present a formalism based on the Dirichlet distribution that allows an agent to calculate these probabilities and correlations from their direct experience of contract outcomes, and we show that this leads to superior estimates compared to an alternative approach using multiple independent beta distributions. We then show how agents may use the sufficient statistics of this Dirichlet distribution to communicate and fuse reputation within a decentralised reputation system. Finally, we present a novel solution to the problem of rumour propagation within such systems. This solution uses the notion of private and shared information, and provides estimates consistent with a centralised reputation system, whilst maintaining the anonymity of the agents, and avoiding bias and overconfidence.
[ "multi-dimension trust", "reput system", "correl", "dirichlet distribut", "rumour propag", "anonym", "overconfid", "trust model", "heurist", "probabl theori", "data fusion", "doubl count", "rumour propog" ]
[ "P", "P", "P", "P", "P", "P", "P", "R", "U", "M", "U", "U", "M" ]
Rumours and Reputation: Evaluating Multi-Dimensional Trust within a Decentralised Reputation System Steven Reece1, Alex Rogers2, Stephen Roberts1 and Nicholas R. Jennings2. 1 Department of Engineering Science, University of Oxford, Oxford, OX1 3PJ, UK. {reece,sjrob}@robots.ox.ac.uk. 2 Electronics and Computer Science, University of Southampton, Southampton, SO17 1BJ, UK. {acr,nrj}@ecs.soton.ac.uk. ABSTRACT In this paper we develop a novel probabilistic model of computational trust that explicitly deals with correlated multi-dimensional contracts. Our starting point is to consider an agent attempting to estimate the utility of a contract, and we show that this leads to a model of computational trust whereby an agent must determine a vector of estimates that represent the probability that any dimension of the contract will be successfully fulfilled, and a covariance matrix that describes the uncertainty and correlations in these probabilities. We present a formalism based on the Dirichlet distribution that allows an agent to calculate these probabilities and correlations from their direct experience of contract outcomes, and we show that this leads to superior estimates compared to an alternative approach using multiple independent beta distributions. We then show how agents may use the sufficient statistics of this Dirichlet distribution to communicate and fuse reputation within a decentralised reputation system. Finally, we present a novel solution to the problem of rumour propagation within such systems. This solution uses the notion of private and shared information, and provides estimates consistent with a centralised reputation system, whilst maintaining the anonymity of the agents, and avoiding bias and overconfidence. Categories and Subject Descriptors I.2.11 [Distributed Artificial Intelligence]: Intelligent agents General Terms Algorithms, Design, Theory 1. INTRODUCTION The role of computational models of trust within multi-agent systems in particular, and open distributed systems in general, has recently generated a great deal of research interest. In such systems, agents must typically choose between interaction partners, and in this context trust can be viewed to provide a means for agents to represent and estimate the reliability with which these interaction partners will fulfill their commitments. To date, however, much of the work within this area has used domain specific or ad-hoc trust metrics, and has focused on providing heuristics to evaluate and update these metrics using direct experience and reputation reports from other agents (see [8] for a review). Recent work has attempted to place the notion of computational trust within the framework of probability theory [6, 11]. This approach allows many of the desiderata of computational trust models to be addressed through principled means. In particular: (i) it allows agents to update their estimates of the trustworthiness of a supplier as they acquire direct experience, (ii) it provides a natural framework for agents to express their uncertainty in this trustworthiness, and, (iii) it allows agents to exchange, combine and filter reputation reports received from other agents. Whilst this approach is attractive, it is somewhat limited in that it has so far only considered single dimensional outcomes (i.e. whether the contract has succeeded or failed in its entirety). However, in many real world settings the success or failure of an interaction may be decomposed into several dimensions [7]. 
This presents the challenge of combining these multiple dimensions into a single metric over which a decision can be made. Furthermore, these dimensions will typically also exhibit correlations. For example, a contract within a supply chain may specify criteria for timeliness, quality and quantity. A supplier who is suffering delays may attempt a trade-off between these dimensions by supplying the full amount late, or supplying as much as possible (but less than the quantity specified within the contract) on time. Thus, correlations will naturally arise between these dimensions, and hence, between the probabilities that describe the successful fulfillment of each contract dimension. To date, however, no such principled framework exists to describe these multi-dimensional contracts, nor the correlations between these dimensions (although some ad-hoc models do exist - see section 2 for more details). To rectify this shortcoming, in this paper we develop a probabilistic model of computational trust that explicitly deals with correlated multi-dimensional contracts. The starting point for our work is to consider how an agent can estimate the utility that it will derive from interacting with a supplier. Here we use standard approaches from the literature of data fusion (since this is a well developed field where the notion of multi-dimensional correlated estimates is well established) to show that this naturally leads to a trust model where the agent must estimate probabilities and correlations over multiple dimensions (in this data fusion context, the multiple dimensions typically represent the physical coordinates of a target being tracked, and correlations arise through the operation and orientation of sensors). Building upon this, we then devise a novel trust model that addresses the three desiderata discussed above. In more detail, in this paper we extend the state of the art in four key ways: 1. We devise a novel multi-dimensional probabilistic trust model that enables an agent to estimate the expected utility of a contract, by estimating (i) the probability that each contract dimension will be successfully fulfilled, and (ii) the correlations between these estimates. 2. We present an exact probabilistic model based upon the Dirichlet distribution that allows agents to use their direct experience of contract outcomes to calculate the probabilities and correlations described above. We then benchmark this solution and show that it leads to good estimates. 3. We show that agents can use the sufficient statistics of this Dirichlet distribution in order to exchange reputation reports with one another. The sufficient statistics represent aggregations of their direct experience, and thus, express contract outcomes in a compact format with no loss of information. 4. We show that, while being efficient, the aggregation of contract outcomes can lead to double counting, and rumour propagation, in decentralised reputation systems. Thus, we present a novel solution based upon the idea of private and shared information. We show that it yields estimates consistent with a centralised reputation system, whilst maintaining the anonymity of the agents, and avoiding overconfidence. The remainder of this paper is organised as follows: in section 2 we review related work. In section 3 we present our notation for a single dimensional contract, before introducing our multi-dimensional trust model in section 4. 
In sections 5 and 6 we discuss communicating reputation, and present our solution to rumour propagation in decentralised reputation systems. We conclude in section 7. 2. RELATED WORK The need for a multi-dimensional trust model has been recognised by a number of researchers. Sabater and Sierra present a model of reputation, in which agents form contracts based on multiple variables (such as delivery date and quality), and define impressions as subjective evaluations of the outcome of these contracts. They provide heuristic approaches to combining these impressions to form a measure they call subjective reputation. Likewise, Griffiths decomposes overall trust into a number of different dimensions such as success, cost, timeliness and quality [4]. In his case, each dimension is scored as a real number that represents a comparative value with no strong semantic meaning. He develops an heuristic rule to update these values based on the direct experiences of the individual agent, and an heuristic function that takes the individual trust dimensions and generates a single scalar that is then used to select between suppliers. Whilst he comments that the trust values could have some associated confidence level, heuristics for updating these levels are not presented. Gujral et al. take a similar approach and present a trust model over multiple domain specific dimensions [5]. They define multi-dimensional goal requirements, and evaluate an expected payoff based on a supplier's estimated behaviour. These estimates are, however, simple aggregations over the direct experience of several agents, and there is no measure of the uncertainty. Nevertheless, they show that agents who select suppliers based on these multiple dimensions outperform those who consider just a single one. By contrast, a number of researchers have presented more principled computational trust models based on probability theory, albeit limited to a single dimension. Jøsang and Ismail describe the Beta Reputation System whereby the reputation of an agent is compiled from the positive and negative reports from other agents who have interacted with it [6]. The beta distribution represents a natural choice for representing these binary outcomes, and it provides a principled means of representing uncertainty. Moreover, they provide a number of extensions to this initial model including an approach to exchanging reputation reports using the sufficient statistics of the beta distribution, methods to discount the opinions of agents who themselves have low reputation ratings, and techniques to deal with reputations that may change over time. Likewise, Teacy et al. use the beta distribution to describe an agent's belief in the probability that another agent will successfully fulfill its commitments [11]. They present a formalism using a beta distribution that allows the agent to estimate this probability based upon its direct experience, and again they use the sufficient statistics of this distribution to communicate this estimate to other agents. They provide a number of extensions to this initial model, and, in particular, they consider that agents may not always truthfully report their trust estimates. Thus, they present a principled approach to detecting and removing inconsistent reports. Our work builds upon these more principled approaches. However, the starting point of our approach is to consider an agent that is attempting to estimate the expected utility of a contract. 
We show that estimating this expected utility requires that an agent must estimate the probability with which the supplier will fulfill its contract. In the single-dimensional case, this naturally leads to a trust model using the beta distribution (as per Jøsang and Ismail and Teacy et al.). However, we then go on to extend this analysis to multiple dimensions, where we use the natural extension of the beta distribution, namely the Dirichlet distribution, to represent the agent's belief over multiple dimensions. 3. SINGLE-DIMENSIONAL TRUST Before presenting our multi-dimensional trust model, we first introduce the notation and formalism that we will use by describing the more familiar single dimensional case. We consider an agent who must decide whether to engage in a future contract with a supplier. This contract will lead to some outcome, o, and we consider that o = 1 if the contract is successfully fulfilled, and o = 0 if not (we only consider binary contract outcomes here, although extending this to partial outcomes is part of our future work). In order for the agent to make a rational decision, it should consider the utility that it will derive from this contract. We assume that in the case that the contract is successfully fulfilled, the agent derives a utility u(o = 1), otherwise it receives no utility (clearly this can be extended to the case where some utility is derived from an unsuccessful outcome). Now, given that the agent is uncertain of the reliability with which the supplier will fulfill the contract, it should consider the expected utility that it will derive, E[U], and this is given by: E[U] = p(o = 1) u(o = 1) (1), where p(o = 1) is the probability that the supplier will successfully fulfill the contract. However, whilst u(o = 1) is known by the agent, p(o = 1) is not. The best the agent can do is to determine a distribution over possible values of p(o = 1) given its direct experience of previous contract outcomes. Given that it has been able to do so, it can then determine an estimate of the expected utility of the contract, E[E[U]] (this is often called the expected expected utility, and this is the notation that we adopt here [2]), and a measure of its uncertainty in this expected utility, Var(E[U]). This uncertainty is important since a risk averse agent may make a decision regarding a contract not only on its estimate of the expected utility of the contract, but also on the probability that the expected utility will exceed some minimum amount. These two properties are given by: E[E[U]] = p̂(o = 1) u(o = 1) (2) and Var(E[U]) = Var(p(o = 1)) u(o = 1)^2 (3), where p̂(o = 1) and Var(p(o = 1)) are the estimate and uncertainty of the probability that a contract will be successfully fulfilled, and are calculated from the distribution over possible values of p(o = 1) that the agent determines from its direct experience. The utility based approach that we present here provides an attractive motivation for this model of Teacy et al. [11]. Now, in the case of binary contract outcomes, the beta distribution is the natural choice to represent the distribution over possible values of p(o = 1) since within Bayesian statistics this is well known to be the conjugate prior for binomial observations [3]. 
By adopting the beta distribution, we can calculate p̂(o = 1) and Var(p(o = 1)) using standard results, and thus, if an agent observed N previous contracts of which n were successfully fulfilled, then: p̂(o = 1) = (n + 1)/(N + 2) and Var(p(o = 1)) = (n + 1)(N − n + 1)/((N + 2)^2 (N + 3)). Note that as expected, the greater the number of contracts the agent observes, the smaller the variance term Var(p(o = 1)), and, thus, the less the uncertainty regarding the probability that a contract will be successfully fulfilled, p̂(o = 1). 4. MULTI-DIMENSIONAL TRUST We now extend the description above, to consider contracts between suppliers and agents that are represented by multiple dimensions, and hence the success or failure of a contract can be decomposed into the success or failure of each separate dimension. Consider again the example of the supply chain that specifies the timeliness, quantity, and quality of the goods that are to be delivered. Thus, within our trust model o_a = 1 now indicates a successful outcome over dimension a of the contract and o_a = 0 indicates an unsuccessful one. A contract outcome, X, is now composed of a vector of individual contract part outcomes (e.g. X = {o_a = 1, o_b = 0, o_c = 0, ...}). Given a multi-dimensional contract whose outcome is described by the vector X, we again consider that in order for an agent to make a rational decision, it should consider the utility that it will derive from this contract. To this end, we can make the general statement that the expected utility of a contract is given by: E[U] = p(X) U(X)^T (4), where p(X) is a joint probability distribution over all possible contract outcomes, p(X) = (p(o_a = 1, o_b = 0, o_c = 0, ...), p(o_a = 1, o_b = 1, o_c = 0, ...), p(o_a = 0, o_b = 1, o_c = 0, ...), ...) (5), and U(X) is the vector of utilities derived from these possible outcomes, U(X) = (u(o_a = 1, o_b = 0, o_c = 0, ...), u(o_a = 1, o_b = 1, o_c = 0, ...), u(o_a = 0, o_b = 1, o_c = 0, ...), ...) (6). As before, whilst U(X) is known to the agent, the probability distribution p(X) is not. Rather, given the agent's direct experience of the supplier, the agent can determine a distribution over possible values for p(X). In the single dimensional case, a beta distribution was the natural choice over possible values of p(o = 1). In the multi-dimensional case, where p(X) itself is a vector of probabilities, the corresponding natural choice is the Dirichlet distribution, since this is a conjugate prior for multinomial proportions [3]. Given this distribution, the agent is then able to calculate an estimate of the expected utility of a contract. As before, this estimate is itself represented by an expected value given by: E[E[U]] = p̂(X) U(X)^T (7), and a variance, describing the uncertainty in this expected utility: Var(E[U]) = U(X) Cov(p(X)) U(X)^T (8), where Cov(p(X)) ≡ E[(p(X) − p̂(X))(p(X) − p̂(X))^T] (9). Thus, whilst the single dimensional case naturally leads to a trust model in which the agents attempt to derive an estimate of the probability that a contract will be successfully fulfilled, p̂(o = 1), along with a scalar variance that describes the uncertainty in this probability, Var(p(o = 1)), in this case the agents must derive an estimate of a vector of probabilities, p̂(X), along with a covariance matrix, Cov(p(X)), that represents the uncertainty in p(X) given the observed contractual outcomes. 
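Returning to the single-dimensional results above, the following short Python sketch (our own illustration of the standard beta-distribution formulas, not code from the paper) computes p̂(o = 1) and Var(p(o = 1)) from n successful contracts out of N observed, together with the resulting estimate and variance of the expected utility from equations (2) and (3).

def beta_estimate(n, N):
    # Mean and variance of p(o = 1) under the uniform Beta(1, 1) prior.
    p_hat = (n + 1.0) / (N + 2.0)
    var_p = (n + 1.0) * (N - n + 1.0) / ((N + 2.0) ** 2 * (N + 3.0))
    return p_hat, var_p

def expected_utility(n, N, u_success):
    # E[E[U]] and Var(E[U]) for a single-dimensional contract (equations 2 and 3).
    p_hat, var_p = beta_estimate(n, N)
    return p_hat * u_success, var_p * u_success ** 2

# Example: 8 of 10 observed contracts succeeded and u(o = 1) = 6.
print(expected_utility(8, 10, 6.0))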
At this point, it is interesting to note that the estimate in the single dimensional case, p̂(o = 1), has a clear semantic meaning in relation to trust; it is the agent's belief in the probability of a supplier successfully fulfilling a contract. However, in the multi-dimensional case the agent must determine p̂(X), and since this describes the probability of all possible contract outcomes, including those that are completely un-fulfilled, this direct semantic interpretation is not present. In the next section, we describe the exemplar utility function that we shall use in the remainder of this paper. 4.1 Exemplar Utility Function The approach described so far is completely general, in that it applies to any utility function of the form described above, and also applies to the estimation of any joint probability distribution. In the remainder of this paper, for illustrative purposes, we shall limit the discussion to the simplest possible utility function that exhibits a dependence upon the correlations between the contract dimensions. That is, we consider the case that expected utility is dependent only on the marginal probabilities of each contract dimension being successfully fulfilled, rather than the full joint probabilities: U(X) = (u(o_a = 1), u(o_b = 1), u(o_c = 1), ...) (10). Thus, p̂(X) is a vector estimate of the probability of each contract dimension being successfully fulfilled, and maintains the clear semantic interpretation seen in the single dimensional case: p̂(X) = (p̂(o_a = 1), p̂(o_b = 1), p̂(o_c = 1), ...) (11). The correlations between the contract dimensions affect the uncertainty in the expected utility. To see this, consider the covariance matrix that describes this uncertainty; Cov(p(X)) is now given by: Cov(p(X)) = [V_a C_ab C_ac ...; C_ab V_b C_bc ...; C_ac C_bc V_c ...; ...] (12). In this matrix, the diagonal terms, V_a, V_b and V_c, represent the uncertainties in p(o_a = 1), p(o_b = 1) and p(o_c = 1) within p(X). The off-diagonal terms, C_ab, C_ac and C_bc, represent the correlations between these probabilities. In the next section, we use the Dirichlet distribution to calculate both p̂(X) and Cov(p(X)) from an agent's direct experience of previous contract outcomes. We first illustrate why this is necessary by considering an alternative approach to modelling multi-dimensional contracts whereby an agent naïvely assumes that the dimensions are independent, and thus, it models each individually by separate beta distributions (as in the single dimensional case we presented in section 3). This is actually equivalent to setting the off-diagonal terms within the covariance matrix, Cov(p(X)), to zero. However, doing so can lead an agent to assume that its estimate of the expected utility of the contract is more accurate than it actually is. To illustrate this, consider a specific scenario with the following values: u(o_a = 1) = u(o_b = 1) = 1 and V_a = V_b = 0.2. In this case, Var(E[U]) = 0.4(1 + C_ab), and thus, if the correlation C_ab is ignored then the variance in the expected utility is 0.4. However, if the contract outcomes are completely correlated then C_ab = 1 and Var(E[U]) is actually 0.8. Thus, in order to have an accurate estimate of the variance of the expected contract utility, and to make a rational decision, it is essential that the agent is able to represent and calculate these correlation terms. 
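The numbers in this example can be reproduced with a few lines of Python (a hypothetical illustration of equation (8), not the authors' code; here C_ab is taken as the covariance term, so complete correlation corresponds to C_ab = sqrt(V_a V_b) = 0.2).

import numpy as np

u = np.array([1.0, 1.0])            # u(o_a = 1) = u(o_b = 1) = 1
V_a, V_b, C_ab = 0.2, 0.2, 0.2      # completely correlated contract outcomes

cov_full = np.array([[V_a, C_ab], [C_ab, V_b]])
cov_naive = np.diag([V_a, V_b])     # independent beta approximation

print(u @ cov_full @ u)             # Var(E[U]) = 0.8 when the correlation is modelled
print(u @ cov_naive @ u)            # Var(E[U]) = 0.4 when it is ignored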
In the next section, we describe how an agent may do so using the Dirichlet distribution. 4.2 The Dirichlet Distribution In this section, we describe how the agent may use its direct experience of previous contracts, and the standard results of the Dirichlet distribution, to determine an estimate of the probability that each contract dimension will be successfully fulfilled, p̂(X), and a measure of the uncertainties in these probabilities that expresses the correlations between the contract dimensions, Cov(p(X)). We first consider the calculation of p̂(X) and the diagonal terms of the covariance matrix Cov(p(X)). As described above, the derivation of these results is identical to the case of the single dimensional beta distribution, where out of N contract outcomes, n are successfully fulfilled. In the multi-dimensional case, however, we have a vector {n_a, n_b, n_c, ...} that represents the number of outcomes for which each of the individual contract dimensions were successfully fulfilled. Thus, in terms of the standard Dirichlet parameters where α_a = n_a + 1 and α_0 = N + 2, the agent can estimate the probability of this contract dimension being successfully fulfilled: p̂(o_a = 1) = α_a/α_0 = (n_a + 1)/(N + 2), and can also calculate the variance in any contract dimension: V_a = α_a(α_0 − α_a)/(α_0^2 (1 + α_0)) = (n_a + 1)(N − n_a + 1)/((N + 2)^2 (N + 3)). However, calculating the off-diagonal terms within Cov(p(X)) is more complex since it is necessary to consider the correlations between the contract dimensions. Thus, for each pair of dimensions (i.e. a and b), we must consider all possible combinations of contract outcomes, and thus we define n^{ab}_{ij} as the number of contract outcomes for which both o_a = i and o_b = j. For example, n^{ab}_{10} represents the number of contracts for which o_a = 1 and o_b = 0. Now, using the standard Dirichlet notation, we can define α^{ab}_{ij} ≡ n^{ab}_{ij} + 1 for all i and j taking values 0 and 1, and then, to calculate the cross-correlations between contract pairs a and b, we note that the Dirichlet distribution over pair-wise joint probabilities is: Prob(p_{ab}) = K_{ab} ∏_{i∈{0,1}} ∏_{j∈{0,1}} p(o_a = i, o_b = j)^{α^{ab}_{ij} − 1}, where ∑_{i∈{0,1}} ∑_{j∈{0,1}} p(o_a = i, o_b = j) = 1 and K_{ab} is a normalising constant [3]. From this we can derive pair-wise probability estimates and variances: E[p(o_a = i, o_b = j)] = α^{ab}_{ij}/α_0 (13) and V[p(o_a = i, o_b = j)] = α^{ab}_{ij}(α_0 − α^{ab}_{ij})/(α_0^2 (1 + α_0)) (14), where α_0 = ∑_{i∈{0,1}} ∑_{j∈{0,1}} α^{ab}_{ij} (15), and in fact, α_0 = N + 2, where N is the total number of contracts observed. Likewise, we can express the covariance in these pairwise probabilities in similar terms: C[p(o_a = i, o_b = j), p(o_a = m, o_b = n)] = −α^{ab}_{ij} α^{ab}_{mn}/(α_0^2 (1 + α_0)). Finally, we can use the expression p(o_a = 1) = ∑_{j∈{0,1}} p(o_a = 1, o_b = j) to determine the covariance C_ab. To do so, we first simplify the notation by defining V^{ab}_{ij} ≡ V[p(o_a = i, o_b = j)] and C^{ab}_{ijmn} ≡ C[p(o_a = i, o_b = j), p(o_a = m, o_b = n)]. The covariance for the probability of positive contract outcomes is then the covariance between ∑_{j∈{0,1}} p(o_a = 1, o_b = j) and ∑_{i∈{0,1}} p(o_a = i, o_b = 1), and thus: C_ab = C^{ab}_{1001} + C^{ab}_{1101} + C^{ab}_{1011} + V^{ab}_{11}. Thus, given a set of contract outcomes that represent the agent's previous interactions with a supplier, we may use the Dirichlet distribution to calculate the mean and variance of the probability of any contract dimension being successfully fulfilled (i.e. p̂(o_a = 1) and V_a). In addition, by a somewhat more complex procedure we can also calculate the correlations between these probabilities (i.e. C_ab). 
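The following Python sketch (our own illustrative code, not the authors' implementation) applies equations (13)-(15) and the covariance expressions above to the pairwise counts n^{ab}_{ij} for a single pair of dimensions, returning an estimate of p(o_a = 1), its variance as implied by the pairwise Dirichlet, and the cross-covariance C_ab.

import numpy as np

def dirichlet_pair_stats(counts):
    # counts[i][j] = number of contracts with o_a = i and o_b = j (i, j in {0, 1}).
    alpha = np.asarray(counts, dtype=float) + 1.0        # alpha^{ab}_{ij} = n^{ab}_{ij} + 1
    a0 = alpha.sum()                                     # alpha_0 (equation 15)

    mean = alpha / a0                                    # equation 13
    var = alpha * (a0 - alpha) / (a0 ** 2 * (1.0 + a0))  # equation 14

    def cov(ij, mn):
        # C^{ab}_{ijmn} for distinct cells (i, j) and (m, n).
        return -alpha[ij] * alpha[mn] / (a0 ** 2 * (1.0 + a0))

    p_a = mean[1, 0] + mean[1, 1]                        # p(o_a = 1) = sum_j p(o_a = 1, o_b = j)
    V_a = var[1, 0] + var[1, 1] + 2.0 * cov((1, 0), (1, 1))
    C_ab = cov((1, 0), (0, 1)) + cov((1, 1), (0, 1)) + cov((1, 0), (1, 1)) + var[1, 1]
    return p_a, V_a, C_ab

# Example: 12 contracts in which dimensions a and b tend to succeed or fail together.
print(dirichlet_pair_stats([[4, 1], [1, 6]]))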
This allows us to calculate an estimate of the probability that any contract dimension will be successfully fulfilled, p̂(X), and also represent the uncertainty and correlations in these probabilities by the covariance matrix, Cov(p(X)). In turn, these results may be used to calculate the estimate and uncertainty in the expected utility of the contract. In the next section we present empirical results that show that in practice this formalism yields significant improvements in these estimates compared to the naïve approximation using multiple independent beta distributions. 4.3 Empirical Comparison In order to evaluate the effectiveness of our formalism, and show the importance of the off-diagonal terms in Cov(p(X)), we compare two approaches: • Dirichlet Distribution: We use the full Dirichlet distribution, as described above, to calculate p̂(X) and Cov(p(X)) including all its off-diagonal terms that represent the correlations between the contract dimensions. • Independent Beta Distributions: We use independent beta distributions to represent each contract dimension, in order to calculate p̂(X), and then, as described earlier, we approximate Cov(p(X)) and ignore the correlations by setting all the off-diagonal terms to zero. Figure 1: Plots showing (i) the variance of the expected contract utility and (ii) the information content of the estimates computed using the Dirichlet distribution and multiple independent beta distributions. Results are averaged over 10^6 runs, and the error bars show the standard error in the mean. We consider a two-dimensional case where u(o_a = 1) = 6 and u(o_b = 1) = 2, since this allows us to plot p̂(X) and Cov(p(X)) as ellipses in a two-dimensional plane, and thus explain the differences between the two approaches. Specifically, we initially allocate the agent some previous contract outcomes that represent its direct experience with a supplier. The number of contracts is drawn uniformly between 10 and 20, and the actual contract outcomes are drawn from an arbitrary joint distribution intended to induce correlations between the contract dimensions. For each set of contracts, we use the approaches described above to calculate p̂(X) and Cov(p(X)), and hence, the variance in the expected contract utility, Var(E[U]). In addition, we calculate a scalar measure of the information content, I, of the covariance matrix Cov(p(X)), which is a standard way of measuring the uncertainty encoded within the covariance matrix [1]. More specifically, we calculate the determinant of the inverse of the covariance matrix: I = det(Cov(p(X))^{−1}) (16), and note that the larger the information content, the more precise p̂(X) will be, and thus, the better the estimate of the expected utility that the agent is able to calculate. Finally, we use the results presented in section 4.2 to calculate the actual correlation, ρ, associated with this particular set of contract outcomes: ρ = C_ab/√(V_a V_b) (17), where C_ab, V_a and V_b are calculated as described in section 4.2. Figure 2: Examples of p̂(X) and Cov(p(X)) plotted as second standard error ellipses. 
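Both comparison metrics are simple to evaluate; the sketch below (again our own illustration, with arbitrary example values rather than data from the paper) computes the information content of equation (16) for a full covariance matrix and for its diagonal approximation, together with the correlation of equation (17).

import numpy as np

def information_content(cov):
    # I = det(Cov(p(X))^{-1}), equation (16).
    return np.linalg.det(np.linalg.inv(cov))

V_a, V_b, C_ab = 0.012, 0.010, 0.008              # illustrative values only
cov_full = np.array([[V_a, C_ab], [C_ab, V_b]])
cov_naive = np.diag([V_a, V_b])                   # independent beta approximation

rho = C_ab / np.sqrt(V_a * V_b)                   # equation (17)
print(rho, information_content(cov_full), information_content(cov_naive))

As in figure 1, the full covariance matrix carries more information than its diagonal approximation whenever the dimensions are correlated.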
The results of this analysis are shown in figure 1. Here we show the values of I and Var(E[U]) calculated by the agents, plotted against the correlation of the contract outcomes, ρ, that constituted their direct experience. The results are averaged over 10^6 simulation runs. Note that as expected, when the dimensions of the contract outcomes are uncorrelated (i.e. ρ = 0), then both approaches give the same results. However, the value of using our formalism with the full Dirichlet distribution is shown when the correlation between the dimensions increases (either negatively or positively). As can be seen, if we approximate the Dirichlet distribution with multiple independent beta distributions, all of the correlation information contained within the covariance matrix, Cov(p(X)), is lost, and thus, the information content of the matrix is much lower. The loss of this correlation information leads the variance of the expected utility of the contract to be incorrect (either over or under estimated depending on the correlation; note that the plots are not smooth because, given a limited number of contract outcomes, the means of V_a and V_b do not vary smoothly with ρ), with the exact amount of mis-estimation depending on the actual utility function chosen (i.e. the values of u(o_a = 1) and u(o_b = 1)). In addition, in figure 2 we illustrate an example of the estimates calculated through both methods, for a single exemplar set of contract outcomes. We represent the probability estimates, p̂(X), and the covariance matrix, Cov(p(X)), in the standard way as an ellipse [1]. That is, p̂(X) determines the position of the center of the ellipse, and Cov(p(X)) defines its size and shape. Note that whilst the ellipse resulting from the full Dirichlet formalism accurately reflects the true distribution (samples of which are plotted as points), that calculated by using multiple independent beta distributions (and thus ignoring the correlations) results in a much larger ellipse that does not reflect the true distribution. The larger size of this ellipse is a result of the off-diagonal terms of the covariance matrix being set to zero, and corresponds to the agent miscalculating the uncertainty in the probability of each contract dimension being fulfilled. This, in turn, leads it to miscalculate the uncertainty in the expected utility of a contract (shown in figure 1 as Var(E[U])). 5. COMMUNICATING REPUTATION Having described how an individual agent can use its own direct experience of contract outcomes in order to estimate the probability that a multi-dimensional contract will be successfully fulfilled, we now go on to consider how agents within an open multi-agent system can communicate these estimates to one another. This is commonly referred to as reputation and allows agents with limited direct experience of a supplier to make rational decisions. Both Jøsang and Ismail, and Teacy et al. present models whereby reputation is communicated between agents using the sufficient statistics of the beta distribution [6, 11]. This approach is attractive since these sufficient statistics are simple aggregations of contract outcomes (more precisely, they are simply the total number of contracts observed, N, and the number of these that were successfully fulfilled, n). 
Under the probabilistic framework of the beta distribution, reputation reports in this form may simply be aggregated with an agent's own direct experience, in order to gain a more precise estimate based on a larger set of contract outcomes. We can immediately extend this approach to the multi-dimensional case considered here, by requiring that the agents exchange the sufficient statistics of the Dirichlet distribution instead of the beta distribution. In this case, for each pair of dimensions (i.e. a and b), the agents must communicate a vector of contract outcomes, N, which are the sufficient statistics of the Dirichlet distribution, given by: N = <n^{ab}_{ij}> ∀ a, b, i ∈ {0, 1}, j ∈ {0, 1} (18). Thus, an agent is able to communicate the sufficient statistics of its own Dirichlet distribution in terms of just 2d(d − 1) numbers (where d is the number of contract dimensions). For instance, in the case of three dimensions, N is given by: N = <n^{ab}_{00}, n^{ab}_{01}, n^{ab}_{10}, n^{ab}_{11}, n^{ac}_{00}, n^{ac}_{01}, n^{ac}_{10}, n^{ac}_{11}, n^{bc}_{00}, n^{bc}_{01}, n^{bc}_{10}, n^{bc}_{11}>, and, hence, large sets of contract outcomes may be communicated within a relatively small message size, with no loss of information. Again, agents receiving these sufficient statistics may simply aggregate them with their own direct experience in order to gain a more precise estimate of the trustworthiness of a supplier. Finally, we note that whilst it is not the focus of our work here, by adopting the same principled approach as Jøsang and Ismail, and Teacy et al., many of the techniques that they have developed (such as discounting reports from unreliable agents, and filtering inconsistent reports from selfish agents) may be directly applied within this multi-dimensional model. However, we now go on to consider a new issue that arises in both the single and multi-dimensional models, namely the problems that arise when such aggregated sufficient statistics are propagated within decentralised agent networks. 6. RUMOUR PROPAGATION WITHIN REPUTATION SYSTEMS In the previous section, we described the use of sufficient statistics to communicate reputation, and we showed that by aggregating contract outcomes together into these sufficient statistics, a large number of contract outcomes can be represented and communicated in a compact form. Whilst this is an attractive property, it can be problematic in practice, since the individual provenance of each contract outcome is lost in the aggregation. Thus, to ensure an accurate estimate, the reputation system must ensure that each observation of a contract outcome is included within the aggregated statistics no more than once. Within a centralised reputation system, where all agents report their direct experience to a trusted center, such double counting of contract outcomes is easy to avoid. However, in a decentralised reputation system, where agents communicate reputation to one another, and aggregate their direct experience with these reputation reports on-the-fly, avoiding double counting is much more difficult. Figure 3: Example of rumour propagation in a decentralised reputation system. For example, consider the case shown in figure 3 where three agents (a1 ... a3), each with some direct experience of a supplier, share reputation reports regarding this supplier. 
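As an aside, the assembly and aggregation of such sufficient statistics can be sketched in a few lines of Python (illustrative only; the dictionary keyed by (a, b, i, j) is our own choice of data structure, not something specified in the paper).

from itertools import combinations

def sufficient_statistics(outcomes, dims):
    # Pairwise counts n^{ab}_{ij} from a list of contract outcomes,
    # where each outcome is a dict such as {'a': 1, 'b': 0, 'c': 1}.
    stats = {(a, b, i, j): 0 for a, b in combinations(dims, 2)
             for i in (0, 1) for j in (0, 1)}
    for x in outcomes:
        for a, b in combinations(dims, 2):
            stats[(a, b, x[a], x[b])] += 1
    return stats

def aggregate(own, received):
    # Fuse a reputation report with direct experience by adding the counts.
    return {k: own[k] + received[k] for k in own}

dims = ['a', 'b', 'c']
direct = sufficient_statistics([{'a': 1, 'b': 1, 'c': 0}, {'a': 0, 'b': 0, 'c': 1}], dims)
report = sufficient_statistics([{'a': 1, 'b': 0, 'c': 0}], dims)
fused = aggregate(direct, report)   # 2d(d - 1) = 12 numbers for d = 3
print(len(fused), fused[('a', 'b', 1, 1)])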
If agent a1 were to provide its estimate to agents a2 and a3 in the form of the sufficient statistics of its Dirichlet distribution, then these agents can aggregate these contract outcomes with their own, and thus obtain more precise estimates. If at a later stage agent a2 were to send its aggregate vector of contract outcomes to agent a3, then agent a3, being unaware of the full history of exchanges, may attempt to combine these contract outcomes with its own aggregated vector. However, since both vectors contain a contribution from agent a1, these will be counted twice in the final aggregated vector, and will result in a biased and overconfident estimate. This is termed rumour propagation or data incest in the data fusion literature [9]. One possible solution would be to uniquely identify the source of each contract outcome, and then propagate each vector, along with its label, through the network. Agents can thus identify identical observations that have arrived through different routes, and after removing the duplicates, can aggregate these together to form their estimates. Whilst this appears to be attractive in principle, for a number of reasons, it is not always a viable solution in practice [12]. Firstly, agents may not actually wish to have their uniquely labelled contract outcomes passed around an open system, since such information may have commercial or practical significance that could be used to their disadvantage. As such, agents may only be willing to exchange identifiable contract outcomes with a small number of other agents (perhaps those that they have some sort of reciprocal relationship with). Secondly, the fact that there is no aggregation of the contract outcomes as they pass around the network means that the message size increases over time, and the ultimate size of these messages is bounded only by the number of agents within the system (possibly an extremely large number for a global system). Finally, it may actually be difficult to assign globally agreeable, consistent, and unique labels for each agent within an open system. In the next section, we develop a novel solution to the problem of rumour propagation within decentralised reputation systems. Our solution is based on an approach developed within the area of target tracking and data fusion [9]. It avoids the need to uniquely identify an agent, it allows agents to restrict the number of other agents who they reveal their private estimates to, and yet it still allows information to propagate throughout the network. 6.1 Private and Shared Information Our solution to rumour propagation within decentralised reputation systems introduces the notion of private information that an agent knows it has not communicated to any other agent, and shared information that has been communicated to, or received from, another agent. Thus, the agent can decompose its contract outcome vector, N, into two vectors, a private one, N_p, that has not been communicated to another agent, and a shared one, N_s, that has been shared with, or received from, another agent: N = N_p + N_s (19). Now, whenever an agent communicates reputation, it communicates both its private and shared vectors separately. Both the originating and receiving agents then update their two vectors appropriately. 
To understand this, consider the case that agent a_α sends its private and shared contract outcome vectors, N^α_p and N^α_s, to agent a_β, which itself has private and shared contract outcomes N^β_p and N^β_s. Each agent updates its vectors of contract outcomes according to the following procedure: • Originating Agent: Once the originating agent has sent both its shared and private contract outcome vectors to another agent, its private information is no longer private. Thus, it must remove the contract outcomes that were in its private vector, and add them into its shared vector: N^α_s ← N^α_s + N^α_p and N^α_p ← ∅. • Receiving Agent: The goal of the receiving agent is to accumulate the largest number of contract outcomes (since this will result in the most precise estimate) without including shared information from both itself and the other agent (since this may result in double counting of contract outcomes). It has two choices depending on the total number of contract outcomes within its own shared vector, N^β_s, and within that of the originating agent, N^α_s (note that this total may be calculated as N = n^{ab}_{00} + n^{ab}_{01} + n^{ab}_{10} + n^{ab}_{11}). Thus, it updates its vector according to the procedure below: - N^β_s > N^α_s: If the receiving agent's shared vector represents a greater number of contract outcomes than the shared vector of the originating agent, then the agent combines its shared vector with the private vector of the originating agent: N^β_s ← N^β_s + N^α_p, with N^β_p unchanged. - N^β_s < N^α_s: Alternatively, if the receiving agent's shared vector represents a smaller number of contract outcomes than the shared vector of the originating agent, then the receiving agent discards its own shared vector and forms a new one from both the private and shared vectors of the originating agent: N^β_s ← N^α_s + N^α_p, with N^β_p unchanged. In the case that N^β_s = N^α_s, either option is appropriate. Once the receiving agent has updated its sets, it uses the contract outcomes within both to form its trust estimate. If agents receive several vectors simultaneously, this approach generalises to the receiving agent using the largest shared vector, and the private vectors of itself and all the originating agents, to form its new shared vector. This procedure has a number of attractive properties. Firstly, since contract outcomes in an agent's shared vector are never combined with those in the shared vector of another agent, outcomes that originated from the same agent are never combined together, and thus, rumour propagation is completely avoided. However, since the receiving agent may discard its own shared vector, and adopt the shared vector of the originating agent, information is still propagated around the network. Moreover, since contract outcomes are aggregated together within the private and shared vectors, the message size is constant and does not increase as the number of interactions increases. Finally, an agent only communicates its own private contract outcomes to its immediate neighbours. If this agent subsequently passes it on, it does so as unidentifiable aggregated information within its shared information. Thus, an agent may limit the number of agents to which it is willing to reveal identifiable contract outcomes, and yet these contract outcomes can still propagate within the network, and thus, improve estimates of other agents. Next, we demonstrate empirically that these properties can indeed be realised in practice. 
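The update procedure itself can be sketched in a few lines of Python (a simplified illustration under the assumption that each vector is stored as an array of counts; the class and method names are our own). The comparison between shared vectors uses the total number of contract outcomes, as described above.

import numpy as np

class Agent:
    def __init__(self, private_counts, n_entries=12):
        self.private = np.array(private_counts, dtype=int)   # N_p: not yet communicated
        self.shared = np.zeros(n_entries, dtype=int)         # N_s: already exchanged

    def send(self):
        # Originating agent: after sending, private information becomes shared.
        message = (self.private.copy(), self.shared.copy())
        self.shared += self.private
        self.private[:] = 0
        return message

    def receive(self, message):
        # Receiving agent: keep the larger shared vector, add the sender's private one.
        other_private, other_shared = message
        if other_shared.sum() > self.shared.sum():
            self.shared = other_shared + other_private
        else:
            self.shared = self.shared + other_private
        # self.private is left unchanged.

    def estimate_counts(self):
        # All contract outcomes available for the trust estimate.
        return self.private + self.shared

a1 = Agent([2, 0, 1, 3], n_entries=4)
a2 = Agent([1, 1, 0, 0], n_entries=4)
a2.receive(a1.send())        # a2 now also benefits from a1's direct experience
print(a2.estimate_counts())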
6.2 Empirical Comparison In order to evaluate the effectiveness of this procedure we simulated random networks consisting of ten agents. Each agent has some direct experience of interacting with a supplier (as described in section 4.3). At each iteration of the simulation, it interacts with its immediate neighbours and exchanges reputation reports through the sufficient statistics of their Dirichlet distributions. We compare our solution to two of the most obvious decentralised alternatives: • Private and Shared Information: The agents follow the procedure described in the previous section. That is, they maintain separate private and shared vectors of contract outcomes, and at each iteration they communicate both these vectors to their immediate neighbours. • Rumour Propagation: The agents do not differentiate between private and shared contract outcomes. At the first iteration they communicate all of the contract outcomes that constitute their direct experience. In subsequent iterations, they propagate contract outcomes that they receive from any of the neighbours, to all their other immediate neighbours. • Private Information Only: The agents only communicate the contract outcomes that constitute their direct experience. In all cases, at each iteration, the agents use the Dirichlet distribution in order to calculate their trust estimates. We compare these three decentralised approaches to a centralised reputation system: • Centralised Reputation: All the agents pass their direct experience to a centralised reputation system that aggregates them together, and passes this estimate back to each agent. This centralised solution makes the most effective use of information available in the network. However, most real world problems demand decentralised solutions due to scalability, modularity and communication concerns. Thus, this centralised solution is included since it represents the optimal case, and allows us to benchmark our decentralised solution. The results of these comparisons are shown in figure 4. Here we show the sum of the information content of each agent's covariance matrix (calculated as discussed earlier in section 4.3), for each of these four different approaches. We first note that where private information only is communicated, there is no change in information after the first iteration. Once each agent has received the direct experience of its immediate neighbours, no further increase in information can be achieved. This represents the minimum communication, and it exhibits the lowest total information of the four cases. Next, we note that in the case of rumour propagation, the information content increases continually, and rapidly exceeds the centralised reputation result. The fact that the rumour propagation case incorrectly exceeds this limit indicates that it is continuously counting the same contract outcomes as they cycle around the network, in the belief that they are independent events. Finally, we note that using private and shared information represents a compromise between the private information only case and the centralised reputation case. Information is still allowed to propagate around the network; however, rumours are eliminated. As before, we also plot a single instance of the trust estimates from one agent (i.e. p̂(X) and Cov(p(X))) as a set of ellipses on a two-dimensional plane (along with samples from the true distribution). 
Figure 4: Sum of information over all agents as a function of the communication iteration. As expected, the centralised reputation system achieves the best estimate of the true distribution, since it uses the direct experience of all agents. The private information only case shows the largest ellipse since it propagates the least information around the network. The rumour propagation case shows the smallest ellipse, but it is inconsistent with the actual distribution p(X). Thus, propagating rumours around the network and double counting contract outcomes in the belief that they are independent events results in an overconfident estimate. However, we note that our solution, using separate vectors of private and shared information, allows us to propagate more information than the private information only case, whilst completely avoiding the problems of rumour propagation. Finally, we consider the effect that this has on the agents' calculation of the expected utility of the contract. We assume the same utility function as used in section 4.3 (i.e. u(o_a = 1) = 6 and u(o_b = 1) = 2), and in table 1 we present the estimate of the expected utility, and its standard deviation, calculated for all four cases by a single agent at iteration five (after communication has ceased to have any further effect for all methods other than rumour propagation). We note that the rumour propagation case is clearly inconsistent with the centralised reputation system, since its standard deviation is too small and does not reflect the true uncertainty in the expected utility, given the contract outcomes. However, we observe that our solution represents the closest case to the centralised reputation system, and thus succeeds in propagating information throughout the network, whilst also avoiding bias and overconfidence. The exact difference between it and the centralised reputation system depends upon the topology of the network, and the history of exchanges that take place within it. Figure 5: Instances of p̂(X) and Cov(p(X)) plotted as second standard error ellipses after 5 communication iterations. 7. CONCLUSIONS In this paper we addressed the need for a principled probabilistic model of computational trust that deals with contracts that have multiple correlated dimensions. Our starting point was an agent estimating the expected utility of a contract, and we showed that this leads to a model of computational trust that uses the Dirichlet distribution to calculate a trust estimate from the direct experience of an agent. We then showed how agents may use the sufficient statistics of this Dirichlet distribution to represent and communicate reputation within a decentralised reputation system, and we presented a solution to rumour propagation within these systems. Our future work in this area is to extend the exchange of reputation to the case where contracts are not homogeneous. That is, not all agents observe the same contract dimensions. This is a challenging extension, since in this case the sufficient statistics of the Dirichlet distribution cannot be used directly. However, by addressing this challenge, we hope to be able to apply these techniques to a setting in which a supplier provides a range of services whose failures are correlated, and agents only have direct experiences of different subsets of these services.
Table 1: Estimated expected utility and its standard error as calculated by a single agent after 5 communication iterations.
Method | E[E[U]] ± √Var(E[U])
Private and Shared Information | 3.18 ± 0.54
Rumour Propagation | 3.33 ± 0.07
Private Information Only | 3.20 ± 0.65
Centralised Reputation | 3.17 ± 0.42
8. ACKNOWLEDGEMENTS This research was undertaken as part of the ALADDIN (Autonomous Learning Agents for Decentralised Data and Information Networks) project and is jointly funded by a BAE Systems and EPSRC strategic partnership (EP/C548051/1). 9. REFERENCES [1] Y. Bar-Shalom, X. R. Li, and T. Kirubarajan. Estimation with Applications to Tracking and Navigation. Wiley Interscience, 2001. [2] C. Boutilier. The foundations of expected expected utility. In Proc. of the 4th Int. Joint Conf. on Artificial Intelligence, pages 285-290, Acapulco, Mexico, 2003. [3] M. Evans, N. Hastings, and B. Peacock. Statistical Distributions. John Wiley & Sons, Inc., 1993. [4] N. Griffiths. Task delegation using experience-based multi-dimensional trust. In Proc. of the 4th Int. Joint Conf. on Autonomous Agents and Multiagent Systems, pages 489-496, New York, USA, 2005. [5] N. Gujral, D. DeAngelis, K. K. Fullam, and K. S. Barber. Modelling multi-dimensional trust. In Proc. of the 9th Int. Workshop on Trust in Agent Systems, Hakodate, Japan, 2006. [6] A. Jøsang and R. Ismail. The beta reputation system. In Proc. of the 15th Bled Electronic Commerce Conf., pages 324-337, Bled, Slovenia, 2002. [7] E. M. Maximilien and M. P. Singh. Agent-based trust model involving multiple qualities. In Proc. of the 4th Int. Joint Conf. on Autonomous Agents and Multiagent Systems, pages 519-526, Utrecht, The Netherlands, 2005. [8] S. D. Ramchurn, D. Hunyh, and N. R. Jennings. Trust in multi-agent systems. Knowledge Engineering Review, 19(1):1-25, 2004. [9] S. Reece and S. Roberts. Robust, low-bandwidth, multi-vehicle mapping. In Proc. of the 8th Int. Conf. on Information Fusion, Philadelphia, USA, 2005. [10] J. Sabater and C. Sierra. REGRET: A reputation model for gregarious societies. In Proc. of the 4th Workshop on Deception, Fraud and Trust in Agent Societies, pages 61-69, Montreal, Canada, 2001. [11] W. T. L. Teacy, J. Patel, N. R. Jennings, and M. Luck. TRAVOS: Trust and reputation in the context of inaccurate information sources. Autonomous Agents and Multi-Agent Systems, 12(2):183-198, 2006. [12] S. Utete. Network Management in Decentralised Sensing Systems. PhD thesis, University of Oxford, UK, 1994.
Rumours and Reputation: Evaluating Multi-Dimensional Trust within a Decentralised Reputation System ABSTRACT In this paper we develop a novel probabilistic model of computational trust that explicitly deals with correlated multi-dimensional contracts. Our starting point is to consider an agent attempting to estimate the utility of a contract, and we show that this leads to a model of computational trust whereby an agent must determine a vector of estimates that represent the probability that any dimension of the contract will be successfully fulfilled, and a covariance matrix that describes the uncertainty and correlations in these probabilities. We present a formalism based on the Dirichlet distribution that allows an agent to calculate these probabilities and correlations from their direct experience of contract outcomes, and we show that this leads to superior estimates compared to an alternative approach using multiple independent beta distributions. We then show how agents may use the sufficient statistics of this Dirichlet distribution to communicate and fuse reputation within a decentralised reputation system. Finally, we present a novel solution to the problem of rumour propagation within such systems. This solution uses the notion of private and shared information, and provides estimates consistent with a centralised reputation system, whilst maintaining the anonymity of the agents, and avoiding bias and overconfidence. 1. INTRODUCTION The role of computational models of trust within multi-agent systems in particular, and open distributed systems in general, has recently generated a great deal of research interest. In such systems, agents must typically choose between interaction partners, and in this context trust can be viewed to provide a means for agents to represent and estimate the reliability with which these interaction partners will fulfill their commitments. To date, however, much of the work within this area has used domain specific or ad hoc trust metrics, and has focused on providing heuristics to evaluate and update these metrics using direct experience and reputation reports from other agents (see [8] for a review). Recent work has attempted to place the notion of computational trust within the framework of probability theory [6, 11]. This approach allows many of the desiderata of computational trust models to be addressed through principled means. In particular: (i) it allows agents to update their estimates of the trustworthiness of a supplier as they acquire direct experience, (ii) it provides a natural framework for agents to express their uncertainty this trustworthiness, and, (iii) it allows agents to exchange, combine and filter reputation reports received from other agents. Whilst this approach is attractive, it is somewhat limited in that it has so far only considered single dimensional outcomes (i.e. whether the contract has succeeded or failed in its entirety). However, in many real world settings the success or failure of an interaction may be decomposed into several dimensions [7]. This presents the challenge of combining these multiple dimensions into a single metric over which a decision can be made. Furthermore, these dimensions will typically also exhibit correlations. For example, a contract within a supply chain may specify criteria for timeliness, quality and quantity. 
A supplier who is suffering delays may attempt a trade-off between these dimensions by supplying the full amount late, or supplying as much as possible (but less than the quantity specified within the contract) on time. Thus, correlations will naturally arise between these dimensions, and hence, between the probabilities that describe the successful fulfillment of each contract dimension. To date, however, no such principled framework exists to describe these multi-dimensional contracts, nor the correlations between these dimensions (although some ad hoc models do exist--see section 2 for more details). To rectify this shortcoming, in this paper we develop a probabilistic model of computational trust that explicitly deals with correlated multi-dimensional contracts. The starting point for our work is to consider how an agent can estimate the utility that it will derive from interacting with a supplier. Here we use standard approaches from the literature of data fusion (since this is a well developed field where the notion of multi-dimensional correlated estimates is well established) to show that this naturally leads to a trust model where the agent must estimate probabilities and correlations over multiple dimensions (in this data fusion context, the multiple dimensions typically represent the physical coordinates of a target being tracked, and correlations arise through the operation and orientation of sensors). Building upon this, we then devise a novel trust model that addresses the three desiderata discussed above. In more detail, in this paper we extend the state of the art in four key ways: 1. We devise a novel multi-dimensional probabilistic trust model that enables an agent to estimate the expected utility of a contract, by estimating (i) the probability that each contract dimension will be successfully fulfilled, and (ii) the correlations between these estimates. 2. We present an exact probabilistic model based upon the Dirichlet distribution that allows agents to use their direct experience of contract outcomes to calculate the probabilities and correlations described above. We then benchmark this solution and show that it leads to good estimates. 3. We show that agents can use the sufficient statistics of this Dirichlet distribution in order to exchange reputation reports with one another. The sufficient statistics represent aggregations of their direct experience, and thus, express contract outcomes in a compact format with no loss of information. 4. We show that, while being efficient, the aggregation of contract outcomes can lead to double counting, and rumour propagation, in decentralised reputation systems. Thus, we present a novel solution based upon the idea of private and shared information. We show that it yields estimates consistent with a centralised reputation system, whilst maintaining the anonymity of the agents, and avoiding overconfidence. The remainder of this paper is organised as follows: in section 2 we review related work. In section 3 we present our notation for a single dimensional contract, before introducing our multi-dimensional trust model in section 4. In sections 5 and 6 we discuss communicating reputation, and present our solution to rumour propagation in decentralised reputation systems. We conclude in section 7. 2. RELATED WORK The need for a multi-dimensional trust model has been recognised by a number of researchers. 
Sabater and Sierra present a model of reputation, in which agents form contracts based on multiple variables (such as delivery date and quality), and define impressions as subjective evaluations of the outcome of these contracts. They provide heuristic approaches to combining these impressions to form a measure they call subjective reputation. Likewise, Griffiths decomposes overall trust into a number of different dimensions such as success, cost, timeliness and quality [4]. In his case, each dimension is scored as a real number that represents a comparative value with no strong semantic meaning. He develops an heuristic rule to update these values based on the direct experiences of the individual agent, and an heuristic function that takes the individual trust dimensions and generates a single scalar that is then used to select between suppliers. Whilst he comments that the trust values could have some associated confidence level, heuristics for updating these levels are not presented. Gujral et al. take a similar approach and present a trust model over multiple domain specific dimensions [5]. They define multidimensional goal requirements, and evaluate an expected payoff based on a supplier's estimated behaviour. These estimates are, however, simple aggregations over the direct experience of several agents, and there is no measure of the uncertainty. Nevertheless, they show that agents who select suppliers based on these multiple dimensions outperform those who consider just a single one. By contrast, a number of researchers have presented more principled computational trust models based on probability theory, albeit limited to a single dimension. Jøsang and Ismail describe the Beta Reputation System whereby the reputation of an agent is compiled from the positive and negative reports from other agents who have interacted with it [6]. The beta distribution represents a natural choice for representing these binary outcomes, and it provides a principled means of representing uncertainty. Moreover, they provide a number of extensions to this initial model including an approach to exchanging reputation reports using the sufficient statistics of the beta distribution, methods to discount the opinions of agents who themselves have low reputation ratings, and techniques to deal with reputations that may change over time. Likewise, Teacy et al. use the beta distribution to describe an agent's belief in the probability that another agent will successfully fulfill its commitments [11]. They present a formalism using a beta distribution that allows the agent to estimate this probability based upon its direct experience, and again they use the sufficient statistics of this distribution to communicate this estimate to other agents. They provide a number of extensions to this initial model, and, in particular, they consider that agents may not always truthfully report their trust estimates. Thus, they present a principled approach to detecting and removing inconsistent reports. Our work builds upon these more principled approaches. However, the starting point of our approach is to consider an agent that is attempting to estimate the expected utility of a contract. We show that estimating this expected utility requires that an agent must estimate the probability with which the supplier will fulfill its contract. In the single-dimensional case, this naturally leads to a trust model using the beta distribution (as per Jøsang and Ismail and Teacy et al.). 
However, we then go on to extend this analysis to multiple dimensions, where we use the natural extension of the beta distribution, namely the Dirichlet distribution, to represent the agent's belief over multiple dimensions. 3. SINGLE-DIMENSIONAL TRUST Before presenting our multi-dimensional trust model, we first introduce the notation and formalism that we will use by describing the more familiar single dimensional case. We consider an agent who must decide whether to engage in a future contract with a supplier. This contract will lead to some outcome, o, and we consider that o = 1 if the contract is successfully fulfilled, and o = 0 if not (note that we only consider binary contract outcomes, although extending this to partial outcomes is part of our future work). In order for the agent to make a rational decision, it should consider the utility that it will derive from this contract. We assume that in the case that the contract is successfully fulfilled, the agent derives a utility u (o = 1), otherwise it receives no utility. Now, given that the agent is uncertain of the reliability with which the supplier will fulfill the contract, it should consider the expected utility that it will derive, E [U], and this is given by: E [U] = p (o = 1) u (o = 1), where p (o = 1) is the probability that the supplier will successfully fulfill the contract. However, whilst u (o = 1) is known by the agent, p (o = 1) is not. The best the agent can do is to determine a distribution over possible values of p (o = 1) given its direct experience of previous contract outcomes. Given that it has been able to do so, it can then determine an estimate of the expected utility of the contract, E [E [U]], and a measure of its uncertainty in this expected utility, Var (E [U]). This uncertainty is important since a risk averse agent may make a decision regarding a contract, not only on its estimate of the expected utility of the contract, but also on the probability that the expected utility will exceed some minimum amount. These two properties are given by: E [E [U]] = ˆp (o = 1) u (o = 1) and Var (E [U]) = Var (p (o = 1)) u (o = 1)², where ˆp (o = 1) and Var (p (o = 1)) are the estimate and uncertainty of the probability that a contract will be successfully fulfilled, and are calculated from the distribution over possible values of p (o = 1) that the agent determines from its direct experience. The utility based approach that we present here provides an attractive motivation for this model of Teacy et al. [11]. Now, in the case of binary contract outcomes, the beta distribution is the natural choice to represent the distribution over possible values of p (o = 1) since within Bayesian statistics this is well known to be the conjugate prior for binomial observations [3]. By adopting the beta distribution, we can calculate ˆp (o = 1) and Var (p (o = 1)) using standard results, and thus, if an agent observed N previous contracts of which n were successfully fulfilled, then: ˆp (o = 1) = (n + 1) / (N + 2) and Var (p (o = 1)) = (n + 1)(N − n + 1) / ((N + 2)² (N + 3)). Note that as expected, the greater the number of contracts the agent observes, the smaller the variance term Var (p (o = 1)), and, thus, the less the uncertainty regarding the probability that a contract will be successfully fulfilled, ˆp (o = 1).
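As a concrete illustration of these single-dimensional estimates, the following minimal Python sketch (our own illustration, not code from the paper) computes ˆp (o = 1), Var (p (o = 1)) and the resulting estimate and uncertainty of the expected utility from N observed contracts, n of which were fulfilled.

```python
# Minimal sketch of the single-dimensional beta estimates described above.
# Illustrative only: a Beta(n + 1, N - n + 1) posterior (uniform prior), giving
# p_hat = (n + 1) / (N + 2) as in the text.

def beta_trust_estimate(n, N, u_success):
    """Return (p_hat, var_p, exp_utility, var_utility) for binary contract outcomes."""
    alpha, beta = n + 1, N - n + 1                 # beta posterior parameters
    p_hat = alpha / (alpha + beta)                 # estimate of p(o = 1)
    var_p = alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1))
    exp_utility = p_hat * u_success                # E[E[U]] = p_hat * u(o = 1)
    var_utility = var_p * u_success ** 2           # Var(E[U]) = Var(p(o = 1)) * u(o = 1)^2
    return p_hat, var_p, exp_utility, var_utility

# Example: 8 of 10 contracts fulfilled, utility of 6 on success.
print(beta_trust_estimate(8, 10, 6.0))
```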
4. MULTI-DIMENSIONAL TRUST We now extend the description above, to consider contracts between suppliers and agents that are represented by multiple dimensions, and hence the success or failure of a contract can be decomposed into the success or failure of each separate dimension. Consider again the example of the supply chain that specifies the timeliness, quantity, and quality of the goods that are to be delivered. Thus, within our trust model "oa = 1" now indicates a successful outcome over dimension a of the contract and "oa = 0" indicates an unsuccessful one. A contract outcome, X, is now composed of a vector of individual contract part outcomes (e.g. X = {oa = 1, ob = 0, oc = 0, ...}). Given a multi-dimensional contract whose outcome is described by the vector X, we again consider that in order for an agent to make a rational decision, it should consider the utility that it will derive from this contract. To this end, we can make the general statement that the expected utility of a contract is given by: E [U] = Σ_X U (X) p (X), where the sum is taken over all possible contract outcomes X. As before, whilst U (X) is known to the agent, the probability distribution p (X) is not. Rather, given the agent's direct experience of the supplier, the agent can determine a distribution over possible values for p (X). In the single dimensional case, a beta distribution was the natural choice over possible values of p (o = 1). In the multi-dimensional case, where p (X) itself is a vector of probabilities, the corresponding natural choice is the Dirichlet distribution, since this is a conjugate prior for multinomial proportions [3]. Given this distribution, the agent is then able to calculate an estimate of the expected utility of a contract. As before, this estimate is itself represented by an expected value given by: E [E [U]] = Σ_X U (X) ˆp (X), and a variance, describing the uncertainty in this expected utility: Var (E [U]) = Σ_X Σ_X' U (X) U (X') Cov (p (X), p (X')). Thus, whilst the single dimensional case naturally leads to a trust model in which the agents attempt to derive an estimate of the probability that a contract will be successfully fulfilled, ˆp (o = 1), along with a scalar variance that describes the uncertainty in this probability, Var (p (o = 1)), in this case, the agents must derive an estimate of a vector of probabilities, ˆp (X), along with a covariance matrix, Cov (p (X)), that represents the uncertainty in p (X) given the observed contractual outcomes. At this point, it is interesting to note that the estimate in the single dimensional case, ˆp (o = 1), has a clear semantic meaning in relation to trust; it is the agent's belief in the probability of a supplier successfully fulfilling a contract. However, in the multi-dimensional case the agent must determine ˆp (X), and since this describes the probability of all possible contract outcomes, including those that are completely un-fulfilled, this direct semantic interpretation is not present. In the next section, we describe the exemplar utility function that we shall use in the remainder of this paper. 4.1 Exemplar Utility Function The approach described so far is completely general, in that it applies to any utility function of the form described above, and also applies to the estimation of any joint probability distribution. In the remainder of this paper, for illustrative purposes, we shall limit the discussion to the simplest possible utility function that exhibits a dependence upon the correlations between the contract dimensions. That is, we consider the case that expected utility is dependent only on the marginal probabilities of each contract dimension being successfully fulfilled, rather than the full joint probabilities: E [U] = u (oa = 1) p (oa = 1) + u (ob = 1) p (ob = 1) + u (oc = 1) p (oc = 1) + ... Thus, ˆp (X) is a vector estimate of the probability of each contract dimension being successfully fulfilled, and maintains the clear semantic interpretation seen in the single dimensional case: ˆp (X) = {ˆp (oa = 1), ˆp (ob = 1), ˆp (oc = 1), ...}. 
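Under this exemplar utility function the estimate of the expected utility and its uncertainty reduce to a dot product and a quadratic form. The short sketch below (our own illustration with made-up values) shows this calculation; note that the off-diagonal (correlation) terms of Cov (p (X)) enter directly through the quadratic form, which is the point developed in the next paragraphs.

```python
import numpy as np

# Sketch: expected contract utility and its uncertainty under the exemplar utility
# function, E[U] = sum_a u(o_a = 1) p(o_a = 1), so that E[E[U]] = u . p_hat and
# Var(E[U]) = u^T Cov(p(X)) u. Values below are purely illustrative.

def contract_utility(u, p_hat, cov):
    u, p_hat, cov = (np.asarray(x, dtype=float) for x in (u, p_hat, cov))
    exp_utility = u @ p_hat                  # estimate of the expected utility
    var_utility = u @ cov @ u                # uncertainty, including correlation terms
    return exp_utility, var_utility

u = [6.0, 2.0]                               # utilities for fulfilling dimensions a and b
p_hat = [0.7, 0.6]                           # marginal estimates p_hat(o_a = 1), p_hat(o_b = 1)
cov = [[0.02, 0.01],                         # [[V_a, C_ab],
       [0.01, 0.03]]                         #  [C_ab, V_b]]
print(contract_utility(u, p_hat, cov))
```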
The correlations between the contract dimensions affect the uncertainty in the expected utility. To see this, consider the covariance matrix that describes this uncertainty, Cov (p (X)), which, for the three dimensions a, b and c, is given by: Cov (p (X)) = [[Va, Cab, Cac], [Cab, Vb, Cbc], [Cac, Cbc, Vc]]. In this matrix, the "diagonal" terms, Va, Vb and Vc, represent the uncertainties in p (oa = 1), p (ob = 1) and p (oc = 1) within p (X). The "off-diagonal" terms, Cab, Cac and Cbc, represent the correlations between these probabilities. In the next section, we use the Dirichlet distribution to calculate both ˆp (X) and Cov (p (X)) from an agent's direct experience of previous contract outcomes. We first illustrate why this is necessary by considering an alternative approach to modelling multi-dimensional contracts whereby an agent naïvely assumes that the dimensions are independent, and thus, it models each individually by separate beta distributions (as in the single dimensional case we presented in section 3). This is actually equivalent to setting the off-diagonal terms within the covariance matrix, Cov (p (X)), to zero. However, doing so can lead an agent to assume that its estimate of the expected utility of the contract is more accurate than it actually is. To illustrate this, consider a specific scenario with the following values: u (oa = 1) = u (ob = 1) = 1 and Va = Vb = 0.2. In this case, Var (E [U]) = 0.4 (1 + Cab), where Cab here denotes the correlation between the two dimensions, and thus, if the correlation Cab is ignored then the variance in the expected utility is 0.4. However, if the contract outcomes are completely correlated then Cab = 1 and Var (E [U]) is actually 0.8. Thus, in order to have an accurate estimate of the variance of the expected contract utility, and to make a rational decision, it is essential that the agent is able to represent and calculate these correlation terms. In the next section, we describe how an agent may do so using the Dirichlet distribution. 4.2 The Dirichlet Distribution In this section, we describe how the agent may use its direct experience of previous contracts, and the standard results of the Dirichlet distribution, to determine an estimate of the probability that each contract dimension will be successfully fulfilled, ˆp (X), and a measure of the uncertainties in these probabilities that expresses the correlations between the contract dimensions, Cov (p (X)). We first consider the calculation of ˆp (X) and the diagonal terms of the covariance matrix Cov (p (X)). As described above, the derivation of these results is identical to the case of the single dimensional beta distribution, where out of N contract outcomes, n are successfully fulfilled. In the multi-dimensional case, however, we have a vector {na, nb, nc, ...} that represents the number of outcomes for which each of the individual contract dimensions were successfully fulfilled. Thus, in terms of the standard Dirichlet parameters where αa = na + 1 and α0 = N + 2, the agent can estimate the probability of this contract dimension being successfully fulfilled: ˆp (oa = 1) = αa / α0, with variance Va = αa (α0 − αa) / (α0² (α0 + 1)). However, calculating the off-diagonal terms within Cov (p (X)) is more complex since it is necessary to consider the correlations between the contract dimensions. Thus, for each pair of dimensions (i.e. a and b), we must consider all possible combinations of contract outcomes, and thus we define nab_ij as the number of contract outcomes for which both oa = i and ob = j. For example, nab_10 represents the number of contracts for which oa = 1 and ob = 0. 
Now, using the standard Dirichlet notation, we can define αab_ij = nab_ij + 1/2 for all i and j taking values 0 and 1, and then, to calculate the cross-correlations between contract pairs a and b, we note that the Dirichlet distribution over pair-wise joint probabilities is: p ({p (oa = i, ob = j)}) = Kab Π_{i,j} p (oa = i, ob = j)^(αab_ij − 1), where the product is taken over i, j ∈ {0, 1} and Kab is a normalising constant [3]. From this we can derive pair-wise probability estimates and variances: ˆp (oa = i, ob = j) = αab_ij / α0 and V [p (oa = i, ob = j)] = αab_ij (α0 − αab_ij) / (α0² (α0 + 1)), and in fact, α0 = N + 2, where N is the total number of contracts observed. Likewise, we can express the covariance in these pairwise probabilities in similar terms: C [p (oa = i, ob = j), p (oa = m, ob = n)] = −αab_ij αab_mn / (α0² (α0 + 1)) for (i, j) ≠ (m, n). These pairwise variances and covariances allow us to determine the covariance Cab. To do so, we first simplify the notation by defining Vab_ij = V [p (oa = i, ob = j)] and Cab_ijmn = C [p (oa = i, ob = j), p (oa = m, ob = n)]. The covariance for the probability of positive contract outcomes is then the covariance between Σ_{j ∈ {0,1}} p (oa = 1, ob = j) and Σ_{i ∈ {0,1}} p (oa = i, ob = 1), and thus: Cab = Σ_{j ∈ {0,1}} Σ_{m ∈ {0,1}} Cab_1jm1. Thus, given a set of contract outcomes that represent the agent's previous interactions with a supplier, we may use the Dirichlet distribution to calculate the mean and variance of the probability of any contract dimension being successfully fulfilled (i.e. ˆp (oa = 1) and Va). In addition, by a somewhat more complex procedure we can also calculate the correlations between these probabilities (i.e. Cab). This allows us to calculate an estimate of the probability that any contract dimension will be successfully fulfilled, ˆp (X), and also represent the uncertainty and correlations in these probabilities by the covariance matrix, Cov (p (X)). In turn, these results may be used to calculate the estimate and uncertainty in the expected utility of the contract. In the next section we present empirical results that show that in practice this formalism yields significant improvements in these estimates compared to the naïve approximation using multiple independent beta distributions. 4.3 Empirical Comparison In order to evaluate the effectiveness of our formalism, and show the importance of the off-diagonal terms in Cov (p (X)), we compare two approaches: • Dirichlet Distribution: We use the full Dirichlet distribution, as described above, to calculate ˆp (X) and Cov (p (X)) including all its off-diagonal terms that represent the correlations between the contract dimensions. • Independent Beta Distributions: We use independent beta distributions to represent each contract dimension, in order to calculate ˆp (X), and then, as described earlier, we approximate Cov (p (X)) and ignore the correlations by setting all the off-diagonal terms to zero. Figure 1: Plots showing (i) the variance of the expected contract utility and (ii) the information content of the estimates computed using the Dirichlet distribution and multiple independent beta distributions. Results are averaged over 10^6 runs, and the error bars show the standard error in the mean. We consider a two-dimensional case where u (oa = 1) = 6 and u (ob = 1) = 2, since this allows us to plot ˆp (X) and Cov (p (X)) as ellipses in a two-dimensional plane, and thus explain the differences between the two approaches. Specifically, we initially allocate the agent some previous contract outcomes that represent its direct experience with a supplier. The number of contracts is drawn uniformly between 10 and 20, and the actual contract outcomes are drawn from an arbitrary joint distribution intended to induce correlations between the contract dimensions. 
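For concreteness, the Dirichlet calculation of section 4.2 for a single pair of dimensions might be implemented as in the sketch below. This is our own code, not the paper's; it assumes half a pseudo-count of prior per joint outcome so that α0 = N + 2, matching the marginal estimates given in the text.

```python
import numpy as np

# Sketch of the Dirichlet calculation of section 4.2 for two contract dimensions.
# Assumption: 1/2 pseudo-count per joint outcome, so alpha_0 = N + 2 and the marginal
# estimate reduces to (n_a + 1) / (N + 2), as in the text.

def dirichlet_pair_estimates(outcomes):
    """outcomes: list of (o_a, o_b) pairs with values in {0, 1}."""
    alpha = np.full((2, 2), 0.5)                  # prior pseudo-counts
    for oa, ob in outcomes:
        alpha[oa, ob] += 1                        # accumulate the counts n^ab_ij
    a0 = alpha.sum()

    def cov_cell(ij, mn):                         # standard Dirichlet (co)variances
        a_ij, a_mn = alpha[ij], alpha[mn]
        num = a_ij * (a0 - a_ij) if ij == mn else -a_ij * a_mn
        return num / (a0 ** 2 * (a0 + 1))

    p_a = alpha[1, :].sum() / a0                  # p_hat(o_a = 1) = sum_j p(1, j)
    p_b = alpha[:, 1].sum() / a0                  # p_hat(o_b = 1) = sum_i p(i, 1)
    V_a = sum(cov_cell((1, j), (1, k)) for j in (0, 1) for k in (0, 1))
    V_b = sum(cov_cell((i, 1), (k, 1)) for i in (0, 1) for k in (0, 1))
    C_ab = sum(cov_cell((1, j), (m, 1)) for j in (0, 1) for m in (0, 1))
    return (p_a, p_b), np.array([[V_a, C_ab], [C_ab, V_b]])

# Correlated outcomes: the two dimensions tend to succeed or fail together.
obs = [(1, 1)] * 8 + [(0, 0)] * 5 + [(1, 0), (0, 1)]
print(dirichlet_pair_estimates(obs))
```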
For each set of contracts, we use the approaches described above to calculate ˆp (X) and Cov (p (X)), and hence, the variance in the expected contract utility, Var (E [U]). In addition, we calculate a scalar measure of the information content, I, of the covariance matrix Cov (p (X)), which is a standard way of measuring the uncertainty encoded within the covariance matrix [1]. More specifically, we calculate the determinant of the inverse of the covariance matrix: I = |Cov (p (X))^−1|, and note that the larger the information content, the more precise ˆp (X) will be, and thus, the better the estimate of the expected utility that the agent is able to calculate. Finally, we use the results presented in section 4.2 to calculate the actual correlation, ρ, associated with this particular set of contract outcomes: ρ = Cab / √(Va Vb), where Cab, Va and Vb are calculated as described in section 4.2. The results of this analysis are shown in figure 1. Here we show the values of I and Var (E [U]) calculated by the agents, plotted against the correlation of the contract outcomes, ρ, that constituted their direct experience. The results are averaged over 10^6 simulation runs. Note that as expected, when the dimensions of the contract outcomes are uncorrelated (i.e. ρ = 0), then both approaches give the same results. However, the value of using our formalism with the full Dirichlet distribution is shown when the correlation between the dimensions increases (either negatively or positively). As can be seen, if we approximate the Dirichlet distribution with multiple independent beta distributions, all of the correlation information contained within the covariance matrix, Cov (p (X)), is lost, and thus, the information content of the matrix is much lower. The loss of this correlation information leads the variance of the expected utility of the contract to be incorrect (either over- or under-estimated depending on the correlation), with the exact amount of mis-estimation depending on the actual utility function chosen (i.e. the values of u (oa = 1) and u (ob = 1)). (Note that the plots are not smooth due to the fact that, given a limited number of contract outcomes, the means of Va and Vb do not vary smoothly with ρ.) In addition, in figure 2 we illustrate an example of the estimates calculated through both methods, for a single exemplar set of contract outcomes. We represent the probability estimates, ˆp (X), and the covariance matrix, Cov (p (X)), in the standard way as an ellipse [1]. That is, ˆp (X) determines the position of the center of the ellipse, and Cov (p (X)) defines its size and shape. Note that whilst the ellipse resulting from the full Dirichlet formalism accurately reflects the true distribution (samples of which are plotted as points), that calculated by using multiple independent Beta distributions (and thus ignoring the correlations) results in a much larger ellipse that does not reflect the true distribution. The larger size of this ellipse is a result of the off-diagonal terms of the covariance matrix being set to zero, and corresponds to the agent miscalculating the uncertainty in the probability of each contract dimension being fulfilled. This, in turn, leads it to miscalculate the uncertainty in the expected utility of a contract (shown in figure 1 as Var (E [U])). Figure 2: Examples of ˆp (X) and Cov (p (X)) plotted as second standard error ellipses. 
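The two summary statistics used in this comparison can be computed directly from the estimated covariance matrix, as in the short sketch below (the matrix values are made up for illustration; the information content is taken as the determinant of the inverse covariance matrix, as described above).

```python
import numpy as np

# Sketch of the summary statistics used in the comparison of section 4.3.

def information_content(cov):
    """I = determinant of the inverse of the covariance matrix."""
    return np.linalg.det(np.linalg.inv(np.asarray(cov, dtype=float)))

def correlation(cov):
    """rho = C_ab / sqrt(V_a * V_b) for a 2 x 2 covariance matrix."""
    cov = np.asarray(cov, dtype=float)
    return cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])

full_cov = [[0.02, 0.015], [0.015, 0.02]]    # Dirichlet estimate (correlations retained)
naive_cov = [[0.02, 0.0], [0.0, 0.02]]       # independent beta approximation
print(information_content(full_cov), information_content(naive_cov))
print(correlation(full_cov))
```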
5. COMMUNICATING REPUTATION Having described how an individual agent can use its own direct experience of contract outcomes in order to estimate the probability that a multi-dimensional contract will be successfully fulfilled, we now go on to consider how agents within an open multi-agent system can communicate these estimates to one another. This is commonly referred to as reputation and allows agents with limited direct experience of a supplier to make rational decisions. Both Jøsang and Ismail, and Teacy et al. present models whereby reputation is communicated between agents using the sufficient statistics of the beta distribution [6, 11]. This approach is attractive since these sufficient statistics are simple aggregations of contract outcomes (more precisely, they are simply the total number of contracts observed, N, and the number of these that were successfully fulfilled, n). Under the probabilistic framework of the beta distribution, reputation reports in this form may simply be aggregated with an agent's own direct experience, in order to gain a more precise estimate based on a larger set of contract outcomes. We can immediately extend this approach to the multi-dimensional case considered here, by requiring that the agents exchange the sufficient statistics of the Dirichlet distribution instead of the beta distribution. In this case, for each pair of dimensions (i.e. a and b), the agents must communicate a vector of contract outcomes, N, which are the sufficient statistics of the Dirichlet distribution, given by the set of counts {nab_11, nab_10, nab_01, nab_00} for that pair. Thus, an agent is able to communicate the sufficient statistics of its own Dirichlet distribution in terms of just 2d (d − 1) numbers (where d is the number of contract dimensions). For instance, in the case of three dimensions, N is given by: N = {nab_11, nab_10, nab_01, nab_00, nac_11, nac_10, nac_01, nac_00, nbc_11, nbc_10, nbc_01, nbc_00}, and, hence, large sets of contract outcomes may be communicated within a relatively small message size, with no loss of information. Again, agents receiving these sufficient statistics may simply aggregate them with their own direct experience in order to gain a more precise estimate of the trustworthiness of a supplier. Finally, we note that whilst it is not the focus of our work here, by adopting the same principled approach as Jøsang and Ismail, and Teacy et al., many of the techniques that they have developed (such as discounting reports from unreliable agents, and filtering inconsistent reports from selfish agents) may be directly applied within this multi-dimensional model. However, we now go on to consider a new issue that arises in both the single and multi-dimensional models, namely the problems that arise when such aggregated sufficient statistics are propagated within decentralised agent networks. 
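For illustration, such a reputation report might be encoded and fused as in the sketch below (a hypothetical encoding of our own, not the paper's message format): the sufficient statistics are simply the counts nab_ij for every pair of contract dimensions, so two reports are combined by element-wise addition, and for d = 3 dimensions the report contains 2d (d − 1) = 12 numbers.

```python
from collections import Counter
from itertools import combinations

# Sketch: sufficient statistics of the Dirichlet distribution as counts n^ab_ij for
# every pair of contract dimensions, fused by element-wise addition.

def sufficient_statistics(outcomes, dims):
    """outcomes: list of dicts mapping dimension name -> 0/1 outcome."""
    stats = Counter()
    for x in outcomes:
        for a, b in combinations(dims, 2):
            stats[(a, b, x[a], x[b])] += 1       # one count per pair and joint outcome
    return stats

def fuse(report_1, report_2):
    return report_1 + report_2                   # aggregation with no loss of information

dims = ("timeliness", "quantity", "quality")
own = sufficient_statistics([{"timeliness": 1, "quantity": 1, "quality": 0}], dims)
other = sufficient_statistics([{"timeliness": 0, "quantity": 1, "quality": 1}], dims)
print(fuse(own, other))
```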
6. RUMOUR PROPAGATION WITHIN REPUTATION SYSTEMS In the previous section, we described the use of sufficient statistics to communicate reputation, and we showed that by aggregating contract outcomes together into these sufficient statistics, a large number of contract outcomes can be represented and communicated in a compact form. Whilst this is an attractive property, it can be problematic in practice, since the individual provenance of each contract outcome is lost in the aggregation. Thus, to ensure an accurate estimate, the reputation system must ensure that each observation of a contract outcome is included within the aggregated statistics no more than once. Within a centralised reputation system, where all agents report their direct experience to a trusted center, such double counting of contract outcomes is easy to avoid. However, in a decentralised reputation system, where agents communicate reputation to one another, and aggregate their direct experience with these reputation reports on-the-fly, avoiding double counting is much more difficult. Figure 3: Example of rumour propagation in a decentralised reputation system. For example, consider the case shown in figure 3 where three agents (a1...a3), each with some direct experience of a supplier, share reputation reports regarding this supplier. If agent a1 were to provide its estimate to agents a2 and a3 in the form of the sufficient statistics of its Dirichlet distribution, then these agents can aggregate these contract outcomes with their own, and thus obtain more precise estimates. If at a later stage, agent a2 were to send its aggregate vector of contract outcomes to agent a3, then agent a3, being unaware of the full history of exchanges, may attempt to combine these contract outcomes with its own aggregated vector. However, since both vectors contain a contribution from agent a1, these will be counted twice in the final aggregated vector, and will result in a biased and overconfident estimate. This is termed rumour propagation or data incest in the data fusion literature [9]. One possible solution would be to uniquely identify the source of each contract outcome, and then propagate each vector, along with its label, through the network. Agents can thus identify identical observations that have arrived through different routes, and after removing the duplicates, can aggregate these together to form their estimates. Whilst this appears to be attractive in principle, for a number of reasons, it is not always a viable solution in practice [12]. Firstly, agents may not actually wish to have their uniquely labelled contract outcomes passed around an open system, since such information may have commercial or practical significance that could be used to their disadvantage. As such, agents may only be willing to exchange identifiable contract outcomes with a small number of other agents (perhaps those that they have some sort of reciprocal relationship with). Secondly, the fact that there is no aggregation of the contract outcomes as they pass around the network means that the message size increases over time, and the ultimate size of these messages is bounded only by the number of agents within the system (possibly an extremely large number for a global system). Finally, it may actually be difficult to assign globally agreeable, consistent, and unique labels for each agent within an open system. In the next section, we develop a novel solution to the problem of rumour propagation within decentralised reputation systems. Our solution is based on an approach developed within the area of target tracking and data fusion [9]. It avoids the need to uniquely identify an agent, it allows agents to restrict the number of other agents to whom they reveal their private estimates, and yet it still allows information to propagate throughout the network. 6.1 Private and Shared Information Our solution to rumour propagation within decentralised reputation systems introduces the notion of private information that an agent knows it has not communicated to any other agent, and shared information that has been communicated to, or received from, another agent. 
Thus, the agent can decompose its contract outcome vector, N, into two vectors, a private one, Np, that has not been communicated to another agent, and a shared one, Ns, that has been shared with, or received from, another agent: N = Np + Ns. Now, whenever an agent communicates reputation, it communicates both its private and shared vectors separately. Both the originating and receiving agents then update their two vectors appropriately. To understand this, consider the case that agent aα sends its private and shared contract outcome vectors, Npα and Nsα, to agent aβ that itself has private and shared contract outcomes Npβ and Nsβ. Each agent updates its vectors of contract outcomes according to the following procedure: • Originating Agent: Once the originating agent has sent both its shared and private contract outcome vectors to another agent, its private information is no longer private. Thus, it must remove the contract outcomes that were in its private vector, and add them into its shared vector: Nsα ← Nsα + Npα and Npα ← ∅. • Receiving Agent: The goal of the receiving agent is to accumulate the largest number of contract outcomes (since this will result in the most precise estimate) without including shared information from both itself and the other agent (since this may result in double counting of contract outcomes). It has two choices depending on the total number of contract outcomes (a total that may be calculated as N = nab_00 + nab_01 + nab_10 + nab_11) within its own shared vector, Nsβ, and within that of the originating agent, Nsα. Thus, it updates its vector according to the procedure below: – Nsβ > Nsα: If the receiving agent's shared vector represents a greater number of contract outcomes than that of the shared vector of the originating agent, then the agent combines its shared vector with the private vector of the originating agent: Nsβ ← Nsβ + Npα, with Npβ unchanged. – Nsβ < Nsα: Alternatively, if the receiving agent's shared vector represents a smaller number of contract outcomes than that of the shared vector of the originating agent, then the receiving agent discards its own shared vector and forms a new one from both the private and shared vectors of the originating agent: Nsβ ← Nsα + Npα, with Npβ unchanged. In the case that Nsβ = Nsα then either option is appropriate. Once the receiving agent has updated its sets, it uses the contract outcomes within both to form its trust estimate. If agents receive several vectors simultaneously, this approach generalises to the receiving agent using the largest shared vector, and the private vectors of itself and all the originating agents, to form its new shared vector.
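A compact sketch of this exchange procedure is given below. The class and variable names are our own illustration; contract outcome vectors are held as simple counters, and a vector's size is taken to be the total number of contract outcomes it represents.

```python
from collections import Counter

# Sketch of the private/shared exchange procedure of section 6.1.

class ReputationAgent:
    def __init__(self, direct_experience):
        self.private = Counter(direct_experience)   # N_p: not yet communicated
        self.shared = Counter()                     # N_s: communicated or received

    def send(self):
        """Return (N_p, N_s); afterwards the private outcomes are no longer private."""
        message = (Counter(self.private), Counter(self.shared))
        self.shared += self.private                 # N_s <- N_s + N_p
        self.private = Counter()                    # N_p <- empty
        return message

    def receive(self, message):
        other_private, other_shared = message
        # Keep whichever shared vector represents more contract outcomes, then add the
        # sender's private outcomes; the receiver's own private vector is unchanged.
        if sum(other_shared.values()) > sum(self.shared.values()):
            self.shared = Counter(other_shared)
        self.shared += other_private

    def estimate_counts(self):
        return self.private + self.shared           # counts used for the trust estimate

a1 = ReputationAgent({("a", "b", 1, 1): 4, ("a", "b", 0, 0): 1})
a2 = ReputationAgent({("a", "b", 1, 1): 2, ("a", "b", 1, 0): 3})
a2.receive(a1.send())
print(a2.estimate_counts())
```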
This procedure has a number of attractive properties. Firstly, since contract outcomes in an agent's shared vector are never combined with those in the shared vector of another agent, outcomes that originated from the same agent are never combined together, and thus, rumour propagation is completely avoided. However, since the receiving agent may discard its own shared vector, and adopt the shared vector of the originating agent, information is still propagated around the network. Moreover, since contract outcomes are aggregated together within the private and shared vectors, the message size is constant and does not increase as the number of interactions increases. Finally, an agent only communicates its own private contract outcomes to its immediate neighbours. If this agent subsequently passes them on, it does so as unidentifiable aggregated information within its shared information. Thus, an agent may limit the number of agents to which it is willing to reveal identifiable contract outcomes, and yet these contract outcomes can still propagate within the network, and thus, improve the estimates of other agents. Next, we demonstrate empirically that these properties can indeed be realised in practice. 6.2 Empirical Comparison In order to evaluate the effectiveness of this procedure we simulated random networks consisting of ten agents. Each agent has some direct experience of interacting with a supplier (as described in section 4.3). At each iteration of the simulation, each agent interacts with its immediate neighbours and exchanges reputation reports through the sufficient statistics of their Dirichlet distributions. We compare our solution to two of the most obvious decentralised alternatives: • Private and Shared Information: The agents follow the procedure described in the previous section. That is, they maintain separate private and shared vectors of contract outcomes, and at each iteration they communicate both these vectors to their immediate neighbours. • Rumour Propagation: The agents do not differentiate between private and shared contract outcomes. At the first iteration they communicate all of the contract outcomes that constitute their direct experience. In subsequent iterations, they propagate contract outcomes that they receive from any of their neighbours to all their other immediate neighbours. • Private Information Only: The agents only communicate the contract outcomes that constitute their direct experience. In all cases, at each iteration, the agents use the Dirichlet distribution in order to calculate their trust estimates. We compare these three decentralised approaches to a centralised reputation system: • Centralised Reputation: All the agents pass their direct experience to a centralised reputation system that aggregates them together, and passes this estimate back to each agent. This centralised solution makes the most effective use of information available in the network. However, most real world problems demand decentralised solutions due to scalability, modularity and communication concerns. Thus, this centralised solution is included since it represents the optimal case, and allows us to benchmark our decentralised solution. The results of these comparisons are shown in figure 4. Here we show the sum of the information content of each agent's covariance matrix (calculated as discussed earlier in section 4.3), for each of these four different approaches. We first note that where private information only is communicated, there is no change in information after the first iteration. Once each agent has received the direct experience of its immediate neighbours, no further increase in information can be achieved. This represents the minimum communication, and it exhibits the lowest total information of the four cases. Next, we note that in the case of rumour propagation, the information content increases continually, and rapidly exceeds the centralised reputation result. The fact that the rumour propagation case incorrectly exceeds this limit indicates that it is continuously counting the same contract outcomes as they cycle around the network, in the belief that they are independent events. 
Figure 4: Sum of information over all agents as a function of the communication iteration. Finally, we note that using private and shared information represents a compromise between the private information only case and the centralised reputation case. Information is still allowed to propagate around the network, however rumours are eliminated. As before, we also plot a single instance of the trust estimates from one agent (i.e. ˆp (X) and Cov (p (X))) as a set of ellipses on a two-dimensional plane (along with samples from the true distribution), as shown in figure 5. Figure 5: Instances of ˆp (X) and Cov (p (X)) plotted as second standard error ellipses after 5 communication iterations. As expected, the centralised reputation system achieves the best estimate of the true distribution, since it uses the direct experience of all agents. The private information only case shows the largest ellipse since it propagates the least information around the network. The rumour propagation case shows the smallest ellipse, but it is inconsistent with the actual distribution p (X). Thus, propagating rumours around the network and double counting contract outcomes in the belief that they are independent events results in an overconfident estimate. However, we note that our solution, using separate vectors of private and shared information, allows us to propagate more information than the private information only case, while completely avoiding the problems of rumour propagation. Finally, we consider the effect that this has on the agents' calculation of the expected utility of the contract. We assume the same utility function as used in section 4.3 (i.e. u (oa = 1) = 6 and u (ob = 1) = 2), and in table 1 we present the estimate of the expected utility, and its standard deviation, calculated for all four cases by a single agent at iteration five (after communication has ceased to have any further effect for all methods other than rumour propagation). Table 1: Estimated expected utility and its standard error as calculated by a single agent after 5 communication iterations. We note that the rumour propagation case is clearly inconsistent with the centralised reputation system, since its standard deviation is too small and does not reflect the true uncertainty in the expected utility, given the contract outcomes. However, we observe that our solution represents the closest case to the centralised reputation system, and thus succeeds in propagating information throughout the network, whilst also avoiding bias and overconfidence. The exact difference between it and the centralised reputation system depends upon the topology of the network, and the history of exchanges that take place within it. 7. CONCLUSIONS In this paper we addressed the need for a principled probabilistic model of computational trust that deals with contracts that have multiple correlated dimensions. Our starting point was an agent estimating the expected utility of a contract, and we showed that this leads to a model of computational trust that uses the Dirichlet distribution to calculate a trust estimate from the direct experience of an agent. We then showed how agents may use the sufficient statistics of this Dirichlet distribution to represent and communicate reputation within a decentralised reputation system, and we presented a solution to rumour propagation within these systems. Our future work in this area is to extend the exchange of reputation to the case where contracts are not homogeneous. That is, not all agents observe the same contract dimensions. This is a challenging extension, since in this case, the sufficient statistics of the Dirichlet distribution cannot be used directly. 
However, by addressing this challenge, we hope to be able to apply these techniques to a setting in which a supplier provides a range of services whose failures are correlated, and agents only have direct experiences of different subsets of these services.
Rumours and Reputation: Evaluating Multi-Dimensional Trust within a Decentralised Reputation System ABSTRACT In this paper we develop a novel probabilistic model of computational trust that explicitly deals with correlated multi-dimensional contracts. Our starting point is to consider an agent attempting to estimate the utility of a contract, and we show that this leads to a model of computational trust whereby an agent must determine a vector of estimates that represent the probability that any dimension of the contract will be successfully fulfilled, and a covariance matrix that describes the uncertainty and correlations in these probabilities. We present a formalism based on the Dirichlet distribution that allows an agent to calculate these probabilities and correlations from their direct experience of contract outcomes, and we show that this leads to superior estimates compared to an alternative approach using multiple independent beta distributions. We then show how agents may use the sufficient statistics of this Dirichlet distribution to communicate and fuse reputation within a decentralised reputation system. Finally, we present a novel solution to the problem of rumour propagation within such systems. This solution uses the notion of private and shared information, and provides estimates consistent with a centralised reputation system, whilst maintaining the anonymity of the agents, and avoiding bias and overconfidence. 1. INTRODUCTION The role of computational models of trust within multi-agent systems in particular, and open distributed systems in general, has recently generated a great deal of research interest. In such systems, agents must typically choose between interaction partners, and in this context trust can be viewed to provide a means for agents to represent and estimate the reliability with which these interaction partners will fulfill their commitments. To date, however, much of the work within this area has used domain specific or ad hoc trust metrics, and has focused on providing heuristics to evaluate and update these metrics using direct experience and reputation reports from other agents (see [8] for a review). Recent work has attempted to place the notion of computational trust within the framework of probability theory [6, 11]. This approach allows many of the desiderata of computational trust models to be addressed through principled means. In particular: (i) it allows agents to update their estimates of the trustworthiness of a supplier as they acquire direct experience, (ii) it provides a natural framework for agents to express their uncertainty this trustworthiness, and, (iii) it allows agents to exchange, combine and filter reputation reports received from other agents. Whilst this approach is attractive, it is somewhat limited in that it has so far only considered single dimensional outcomes (i.e. whether the contract has succeeded or failed in its entirety). However, in many real world settings the success or failure of an interaction may be decomposed into several dimensions [7]. This presents the challenge of combining these multiple dimensions into a single metric over which a decision can be made. Furthermore, these dimensions will typically also exhibit correlations. For example, a contract within a supply chain may specify criteria for timeliness, quality and quantity. 
A supplier who is suffering delays may attempt a trade-off between these dimensions by supplying the full amount late, or supplying as much as possible (but less than the quantity specified within the contract) on time. Thus, correlations will naturally arise between these dimensions, and hence, between the probabilities that describe the successful fulfillment of each contract dimension. To date, however, no such principled framework exists to describe these multi-dimensional contracts, nor the correlations between these dimensions (although some ad hoc models do exist--see section 2 for more details). To rectify this shortcoming, in this paper we develop a probabilistic model of computational trust that explicitly deals with correlated multi-dimensional contracts. The starting point for our work is to consider how an agent can estimate the utility that it will derive from interacting with a supplier. Here we use standard approaches from the literature of data fusion (since this is a well developed field where the notion of multi-dimensional correlated estimates is well established1) to show that this naturally leads to a trust model where the agent must estimate probabilities and correlations over 1In this context, the multiple dimensions typically represent the physical coordinates of a target being tracked, and correlations arise through the operation and orientation of sensors. multiple dimensions. Building upon this, we then devise a novel trust model that addresses the three desiderata discussed above. In more detail, in this paper we extend the state of the art in four key ways: 1. We devise a novel multi-dimensional probabilistic trust model that enables an agent to estimate the expected utility of a contract, by estimating (i) the probability that each contract dimension will be successfully fulfilled, and (ii) the correlations between these estimates. 2. We present an exact probabilistic model based upon the Dirichlet distribution that allows agents to use their direct experience of contract outcomes to calculate the probabilities and correlations described above. We then benchmark this solution and show that it leads to good estimates. 3. We show that agents can use the sufficient statistics of this Dirichlet distribution in order to exchange reputation reports with one another. The sufficient statistics represent aggregations of their direct experience, and thus, express contract outcomes in a compact format with no loss of information. 4. We show that, while being efficient, the aggregation of contract outcomes can lead to double counting, and rumour propagation, in decentralised reputation systems. Thus, we present a novel solution based upon the idea of private and shared information. We show that it yields estimates consistent with a centralised reputation system, whilst maintaining the anonymity of the agents, and avoiding overconfidence. The remainder of this paper is organised as follows: in section 2 we review related work. In section 3 we present our notation for a single dimensional contract, before introducing our multi-dimensional trust model in section 4. In sections 5 and 6 we discuss communicating reputation, and present our solution to rumour propagation in decentralised reputation systems. We conclude in section 7. 2. RELATED WORK The need for a multi-dimensional trust model has been recognised by a number of researchers. 
Sabater and Sierra present a model of reputation, in which agents form contracts based on multiple variables (such as delivery date and quality), and define impressions as subjective evaluations of the outcome of these contracts. They provide heuristic approaches to combining these impressions to form a measure they call subjective reputation. Likewise, Griffiths decomposes overall trust into a number of different dimensions such as success, cost, timeliness and quality [4]. In his case, each dimension is scored as a real number that represents a comparative value with no strong semantic meaning. He develops an heuristic rule to update these values based on the direct experiences of the individual agent, and an heuristic function that takes the individual trust dimensions and generates a single scalar that is then used to select between suppliers. Whilst, he comments that the trust values could have some associated confidence level, heuristics for updating these levels are not presented. Gujral et al. take a similar approach and present a trust model over multiple domain specific dimensions [5]. They define multidimensional goal requirements, and evaluate an expected payoff based on a supplier's estimated behaviour. These estimates are, however, simple aggregations over the direct experience of several agents, and there is no measure of the uncertainty. Nevertheless, they show that agents who select suppliers based on these multiple dimensions outperform those who consider just a single one. By contrast, a number of researchers have presented more principled computational trust models based on probability theory, albeit limited to a single dimension. Jøsang and Ismail describe the Beta Reputation System whereby the reputation of an agent is compiled from the positive and negative reports from other agents who have interacted with it [6]. The beta distribution represents a natural choice for representing these binary outcomes, and it provides a principled means of representing uncertainty. Moreover, they provide a number of extensions to this initial model including an approach to exchanging reputation reports using the sufficient statistics of the beta distribution, methods to discount the opinions of agents who themselves have low reputation ratings, and techniques to deal with reputations that may change over time. Likewise, Teacy et al. use the beta distribution to describe an agent's belief in the probability that another agent will successfully fulfill its commitments [11]. They present a formalism using a beta distribution that allows the agent to estimate this probability based upon its direct experience, and again they use the sufficient statistics of this distribution to communicate this estimate to other agents. They provide a number of extensions to this initial model, and, in particular, they consider that agents may not always truthfully report their trust estimates. Thus, they present a principled approach to detecting and removing inconsistent reports. Our work builds upon these more principled approaches. However, the starting point of our approach is to consider an agent that is attempting to estimate the expected utility of a contract. We show that estimating this expected utility requires that an agent must estimate the probability with which the supplier will fulfill its contract. In the single-dimensional case, this naturally leads to a trust model using the beta distribution (as per Jøsang and Ismail and Teacy et al.). 
However, we then go on to extend this analysis to multiple dimensions, where we use the natural extension of the beta distribution, namely the Dirichlet distribution, to represent the agent's belief over multiple dimensions. 3. SINGLE-DIMENSIONAL TRUST The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 1071 4. MULTI-DIMENSIONAL TRUST 4.1 Exemplar Utility Function 1072 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 4.2 The Dirichlet Distribution 4.3 Empirical Comparison 5. COMMUNICATING REPUTATION 1074 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 6. RUMOUR PROPAGATION WITHIN REPUTATION SYSTEMS 6.1 Private and Shared Information The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 1075 6.2 Empirical Comparison 1076 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 7. CONCLUSIONS In this paper we addressed the need for a principled probabilistic model of computational trust that deals with contracts that have multiple correlated dimensions. Our starting point was an agent estimating the expected utility of a contract, and we showed that this leads to a model of computational trust that uses the Dirichlet distribution to calculate a trust estimate from the direct experience of an agent. We then showed how agents may use the sufficient statistics of this Dirichlet distribution to represent and communicate reputation within a decentralised reputation system, and we presented a solution to rumour propagation within these systems. Our future work in this area is to extend the exchange of reputation to the case where contracts are not homogeneous. That is, not all agents observe the same contract dimensions. This is a challenging extension, since in this case, the sufficient statistics of the Dirichlet distribution cannot be used directly. However, by Figure 5: Instances of ˆp (X) and Cov (p (X)) plotted as second standard error ellipses after 5 communication iterations. Table 1: Estimated expected utility and its standard error as calculated by a single agent after 5 communication iterations. addressing this challenge, we hope to be able to apply these techniques to a setting in which a suppliers provides a range of services whose failures are correlated, and agents only have direct experiences of different subsets of these services.
Rumours and Reputation: Evaluating Multi-Dimensional Trust within a Decentralised Reputation System ABSTRACT In this paper we develop a novel probabilistic model of computational trust that explicitly deals with correlated multi-dimensional contracts. Our starting point is to consider an agent attempting to estimate the utility of a contract, and we show that this leads to a model of computational trust whereby an agent must determine a vector of estimates that represent the probability that any dimension of the contract will be successfully fulfilled, and a covariance matrix that describes the uncertainty and correlations in these probabilities. We present a formalism based on the Dirichlet distribution that allows an agent to calculate these probabilities and correlations from their direct experience of contract outcomes, and we show that this leads to superior estimates compared to an alternative approach using multiple independent beta distributions. We then show how agents may use the sufficient statistics of this Dirichlet distribution to communicate and fuse reputation within a decentralised reputation system. Finally, we present a novel solution to the problem of rumour propagation within such systems. This solution uses the notion of private and shared information, and provides estimates consistent with a centralised reputation system, whilst maintaining the anonymity of the agents, and avoiding bias and overconfidence. 1. INTRODUCTION The role of computational models of trust within multi-agent systems in particular, and open distributed systems in general, has recently generated a great deal of research interest. In such systems, agents must typically choose between interaction partners, and in this context trust can be viewed to provide a means for agents to represent and estimate the reliability with which these interaction partners will fulfill their commitments. Recent work has attempted to place the notion of computational trust within the framework of probability theory [6, 11]. This approach allows many of the desiderata of computational trust models to be addressed through principled means. This presents the challenge of combining these multiple dimensions into a single metric over which a decision can be made. Furthermore, these dimensions will typically also exhibit correlations. For example, a contract within a supply chain may specify criteria for timeliness, quality and quantity. Thus, correlations will naturally arise between these dimensions, and hence, between the probabilities that describe the successful fulfillment of each contract dimension. To date, however, no such principled framework exists to describe these multi-dimensional contracts, nor the correlations between these dimensions (although some ad hoc models do exist--see section 2 for more details). To rectify this shortcoming, in this paper we develop a probabilistic model of computational trust that explicitly deals with correlated multi-dimensional contracts. The starting point for our work is to consider how an agent can estimate the utility that it will derive from interacting with a supplier. multiple dimensions. Building upon this, we then devise a novel trust model that addresses the three desiderata discussed above. 1. We devise a novel multi-dimensional probabilistic trust model that enables an agent to estimate the expected utility of a contract, by estimating (i) the probability that each contract dimension will be successfully fulfilled, and (ii) the correlations between these estimates. 
2. We present an exact probabilistic model based upon the Dirichlet distribution that allows agents to use their direct experience of contract outcomes to calculate the probabilities and correlations described above. We then benchmark this solution and show that it leads to good estimates. 3. We show that agents can use the sufficient statistics of this Dirichlet distribution in order to exchange reputation reports with one another. The sufficient statistics represent aggregations of their direct experience, and thus, express contract outcomes in a compact format with no loss of information. 4. We show that, while being efficient, the aggregation of contract outcomes can lead to double counting, and rumour propagation, in decentralised reputation systems. Thus, we present a novel solution based upon the idea of private and shared information. We show that it yields estimates consistent with a centralised reputation system, whilst maintaining the anonymity of the agents, and avoiding overconfidence. The remainder of this paper is organised as follows: in section 2 we review related work. In section 3 we present our notation for a single dimensional contract, before introducing our multi-dimensional trust model in section 4. In sections 5 and 6 we discuss communicating reputation, and present our solution to rumour propagation in decentralised reputation systems. We conclude in section 7. 2. RELATED WORK The need for a multi-dimensional trust model has been recognised by a number of researchers. Sabater and Sierra present a model of reputation, in which agents form contracts based on multiple variables (such as delivery date and quality), and define impressions as subjective evaluations of the outcome of these contracts. They provide heuristic approaches to combining these impressions to form a measure they call subjective reputation. Likewise, Griffiths decomposes overall trust into a number of different dimensions such as success, cost, timeliness and quality [4]. He develops an heuristic rule to update these values based on the direct experiences of the individual agent, and an heuristic function that takes the individual trust dimensions and generates a single scalar that is then used to select between suppliers. Whilst, he comments that the trust values could have some associated confidence level, heuristics for updating these levels are not presented. Gujral et al. take a similar approach and present a trust model over multiple domain specific dimensions [5]. They define multidimensional goal requirements, and evaluate an expected payoff based on a supplier's estimated behaviour. These estimates are, however, simple aggregations over the direct experience of several agents, and there is no measure of the uncertainty. Nevertheless, they show that agents who select suppliers based on these multiple dimensions outperform those who consider just a single one. By contrast, a number of researchers have presented more principled computational trust models based on probability theory, albeit limited to a single dimension. Jøsang and Ismail describe the Beta Reputation System whereby the reputation of an agent is compiled from the positive and negative reports from other agents who have interacted with it [6]. The beta distribution represents a natural choice for representing these binary outcomes, and it provides a principled means of representing uncertainty. Likewise, Teacy et al. 
use the beta distribution to describe an agent's belief in the probability that another agent will successfully fulfill its commitments [11]. They present a formalism using a beta distribution that allows the agent to estimate this probability based upon its direct experience, and again they use the sufficient statistics of this distribution to communicate this estimate to other agents. They provide a number of extensions to this initial model, and, in particular, they consider that agents may not always truthfully report their trust estimates. Thus, they present a principled approach to detecting and removing inconsistent reports. Our work builds upon these more principled approaches. However, the starting point of our approach is to consider an agent that is attempting to estimate the expected utility of a contract. We show that estimating this expected utility requires that an agent must estimate the probability with which the supplier will fulfill its contract. In the single-dimensional case, this naturally leads to a trust model using the beta distribution (as per Jøsang and Ismail and Teacy et al.). However, we then go on to extend this analysis to multiple dimensions, where we use the natural extension of the beta distribution, namely the Dirichlet distribution, to represent the agent's belief over multiple dimensions. 7. CONCLUSIONS In this paper we addressed the need for a principled probabilistic model of computational trust that deals with contracts that have multiple correlated dimensions. Our starting point was an agent estimating the expected utility of a contract, and we showed that this leads to a model of computational trust that uses the Dirichlet distribution to calculate a trust estimate from the direct experience of an agent. We then showed how agents may use the sufficient statistics of this Dirichlet distribution to represent and communicate reputation within a decentralised reputation system, and we presented a solution to rumour propagation within these systems. Our future work in this area is to extend the exchange of reputation to the case where contracts are not homogeneous. That is, not all agents observe the same contract dimensions. This is a challenging extension, since in this case, the sufficient statistics of the Dirichlet distribution cannot be used directly. However, by Table 1: Estimated expected utility and its standard error as calculated by a single agent after 5 communication iterations.
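To make the Dirichlet-based estimate described above concrete, the following is a minimal sketch (in Python, and not the authors' implementation) of how per-dimension fulfilment probabilities and their covariance could be computed from counts of joint contract outcomes; the function name, the uniform prior, and the example counts are illustrative assumptions.

import numpy as np
from itertools import product

def dirichlet_trust(counts, num_dims, prior=1.0):
    # counts[i] is the number of observed contracts whose joint outcome matches the
    # i-th combination of per-dimension success (1) / failure (0).
    outcomes = np.array(list(product([0, 1], repeat=num_dims)), dtype=float)
    alpha = np.asarray(counts, dtype=float) + prior    # Dirichlet posterior parameters
    a0 = alpha.sum()
    mean = alpha / a0                                  # expected joint-outcome probabilities
    # Dirichlet covariance: Cov(p_i, p_j) = (delta_ij * mean_i - mean_i * mean_j) / (a0 + 1)
    cov_joint = (np.diag(mean) - np.outer(mean, mean)) / (a0 + 1.0)
    # Each dimension's fulfilment probability is a sum of joint-outcome probabilities,
    # so its mean and covariance follow by linearity.
    M = outcomes.T                                     # num_dims x 2^num_dims indicator matrix
    p_dim = M @ mean                                   # probability each dimension is fulfilled
    cov_dim = M @ cov_joint @ M.T                      # uncertainty and correlations between them
    return p_dim, cov_dim

# Illustrative two-dimensional contract (e.g., timeliness and quality):
# counts for joint outcomes (0,0), (0,1), (1,0), (1,1) after 20 interactions.
p, C = dirichlet_trust([3, 2, 1, 14], num_dims=2)

The returned vector and matrix play the roles of the probability estimates and covariance matrix described above; an expected contract utility can then be formed by weighting the probabilities with per-dimension utilities.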
I-43
Dynamics Based Control with an Application to Area-Sweeping Problems
In this paper we introduce Dynamics Based Control (DBC), an approach to planning and control of an agent in stochastic environments. Unlike existing approaches, which seek to optimize expected rewards (e.g., in Partially Observable Markov Decision Problems (POMDPs)), DBC optimizes system behavior towards specified system dynamics. We show that a recently developed planning and control approach, Extended Markov Tracking (EMT) is an instantiation of DBC. EMT employs greedy action selection to provide an efficient control algorithm in Markovian environments. We exploit this efficiency in a set of experiments that applied multitarget EMT to a class of area-sweeping problems (searching for moving targets). We show that such problems can be naturally defined and efficiently solved using the DBC framework, and its EMT instantiation.
[ "dynam base control", "dynam base control", "control", "area-sweep problem", "stochast environ", "partial observ markov decis problem", "system dynam", "extend markov track", "reward function", "multi-agent system", "target dynam", "action-select random", "tag game", "environ design level", "user level", "agent level", "robot" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "M", "M", "R", "U", "U", "M", "U", "M", "U" ]
Dynamics Based Control with an Application to Area-Sweeping Problems Zinovi Rabinovich Engineering and Computer Science Hebrew University of Jerusalem Jerusalem, Israel nomad@cs.huji.ac.il Jeffrey S. Rosenschein Engineering and Computer Science Hebrew University of Jerusalem Jerusalem, Israel jeff@cs.huji.ac.il Gal A. Kaminka The MAVERICK Group Department of Computer Science Bar Ilan University, Israel galk@cs.biu.ac.il ABSTRACT In this paper we introduce Dynamics Based Control (DBC), an approach to planning and control of an agent in stochastic environments. Unlike existing approaches, which seek to optimize expected rewards (e.g., in Partially Observable Markov Decision Problems (POMDPs)), DBC optimizes system behavior towards specified system dynamics. We show that a recently developed planning and control approach, Extended Markov Tracking (EMT) is an instantiation of DBC. EMT employs greedy action selection to provide an efficient control algorithm in Markovian environments. We exploit this efficiency in a set of experiments that applied multitarget EMT to a class of area-sweeping problems (searching for moving targets). We show that such problems can be naturally defined and efficiently solved using the DBC framework, and its EMT instantiation. Categories and Subject Descriptors I.2.8 [Problem Solving, Control Methods, and Search]: Control Theory; I.2.9 [Robotics]; I.2.11 [Distributed Artificial Intelligence]: Intelligent Agents General Terms Algorithms, Theory 1. INTRODUCTION Planning and control constitutes a central research area in multiagent systems and artificial intelligence. In recent years, Partially Observable Markov Decision Processes (POMDPs) [12] have become a popular formal basis for planning in stochastic environments. In this framework, the planning and control problem is often addressed by imposing a reward function, and computing a policy (of choosing actions) that is optimal, in the sense that it will result in the highest expected utility. While theoretically attractive, the complexity of optimally solving a POMDP is prohibitive [8, 7]. We take an alternative view of planning in stochastic environments. We do not use a (state-based) reward function, but instead optimize over a different criterion, a transition-based specification of the desired system dynamics. The idea here is to view planexecution as a process that compels a (stochastic) system to change, and a plan as a dynamic process that shapes that change according to desired criteria. We call this general planning framework Dynamics Based Control (DBC). In DBC, the goal of a planning (or control) process becomes to ensure that the system will change in accordance with specific (potentially stochastic) target dynamics. As actual system behavior may deviate from that which is specified by target dynamics (due to the stochastic nature of the system), planning in such environments needs to be continual [4], in a manner similar to classical closed-loop controllers [16]. Here, optimality is measured in terms of probability of deviation magnitudes. In this paper, we present the structure of Dynamics Based Control. We show that the recently developed Extended Markov Tracking (EMT) approach [13, 14, 15] is subsumed by DBC, with EMT employing greedy action selection, which is a specific parameterization among the options possible within DBC. EMT is an efficient instantiation of DBC. 
To evaluate DBC, we carried out a set of experiments applying multi-target EMT to the Tag Game [11]; this is a variant on the area sweeping problem, where an agent is trying to tag a moving target (quarry) whose position is not known with certainty. Experimental data demonstrates that even with a simple model of the environment and a simple design of target dynamics, high success rates can be produced both in catching the quarry, and in surprising the quarry (as expressed by the observed entropy of the controlled agent``s position). The paper is organized as follows. In Section 2 we motivate DBC using area-sweeping problems, and discuss related work. Section 3 introduces the Dynamics Based Control (DBC) structure, and its specialization to Markovian environments. This is followed by a review of the Extended Markov Tracking (EMT) approach as a DBC-structured control regimen in Section 4. That section also discusses the limitations of EMT-based control relative to the general DBC framework. Experimental settings and results are then presented in Section 5. Section 6 provides a short discussion of the overall approach, and Section 7 gives some concluding remarks and directions for future work. 790 978-81-904262-7-5 (RPS) c 2007 IFAAMAS 2. MOTIVATION AND RELATED WORK Many real-life scenarios naturally have a stochastic target dynamics specification, especially those domains where there exists no ultimate goal, but rather system behavior (with specific properties) that has to be continually supported. For example, security guards perform persistent sweeps of an area to detect any sign of intrusion. Cunning thieves will attempt to track these sweeps, and time their operation to key points of the guards'' motion. It is thus advisable to make the guards'' motion dynamics appear irregular and random. Recent work by Paruchuri et al. [10] has addressed such randomization in the context of single-agent and distributed POMDPs. The goal in that work was to generate policies that provide a measure of action-selection randomization, while maintaining rewards within some acceptable levels. Our focus differs from this work in that DBC does not optimize expected rewards-indeed we do not consider rewards at all-but instead maintains desired dynamics (including, but not limited to, randomization). The Game of Tag is another example of the applicability of the approach. It was introduced in the work by Pineau et al. [11]. There are two agents that can move about an area, which is divided into a grid. The grid may have blocked cells (holes) into which no agent can move. One agent (the hunter) seeks to move into a cell occupied by the other (the quarry), such that they are co-located (this is a successful tag). The quarry seeks to avoid the hunter agent, and is always aware of the hunter``s position, but does not know how the hunter will behave, which opens up the possibility for a hunter to surprise the prey. The hunter knows the quarry``s probabilistic law of motion, but does not know its current location. Tag is an instance of a family of area-sweeping (pursuit-evasion) problems. In [11], the hunter modeled the problem using a POMDP. A reward function was defined, to reflect the desire to tag the quarry, and an action policy was computed to optimize the reward collected over time. Due to the intractable complexity of determining the optimal policy, the action policy computed in that paper was essentially an approximation. 
In this paper, instead of formulating a reward function, we use EMT to solve the problem, by directly specifying the target dynamics. In fact, any search problem with randomized motion, the so-called class of area sweeping problems, can be described through specification of such target system dynamics. Dynamics Based Control provides a natural approach to solving these problems.
3. DYNAMICS BASED CONTROL
The specification of Dynamics Based Control (DBC) can be broken into three interacting levels: Environment Design Level, User Level, and Agent Level.
• Environment Design Level is concerned with the formal specification and modeling of the environment. For example, this level would specify the laws of physics within the system, and set its parameters, such as the gravitation constant.
• User Level in turn relies on the environment model produced by Environment Design to specify the target system dynamics it wishes to observe. The User Level also specifies the estimation or learning procedure for system dynamics, and the measure of deviation. In the museum guard scenario above, these would correspond to a stochastic sweep schedule, and a measure of relative surprise between the specified and actual sweeping.
• Agent Level in turn combines the environment model from the Environment Design level, the dynamics estimation procedure, the deviation measure and the target dynamics specification from the User Level, to produce a sequence of actions that create system dynamics as close as possible to the targeted specification.
As we are interested in the continual development of a stochastic system, such as happens in classical control theory [16] and continual planning [4], as well as in our example of museum sweeps, the question becomes how the Agent Level is to treat the deviation measurements over time. To this end, we use a probability threshold; that is, we would like the Agent Level to maximize the probability that the deviation measure will remain below a certain threshold. Specific action selection then depends on system formalization. One possibility would be to create a mixture of available system trends, much like that which happens in Behavior-Based Robotic architectures [1]. The other alternative would be to rely on the estimation procedure provided by the User Level: to utilize the Environment Design Level model of the environment to choose actions, so as to manipulate the dynamics estimator into believing that a certain dynamics has been achieved. Notice that this manipulation is not direct, but via the environment. Thus, for strong enough estimator algorithms, successful manipulation would mean a successful simulation of the specified target dynamics (i.e., beyond discerning via the available sensory input).
DBC levels can also have a back-flow of information (see Figure 1). For instance, the Agent Level could provide data about target dynamics feasibility, allowing the User Level to modify the requirement, perhaps focusing on attainable features of system behavior. Data would also be available about the system response to different actions performed; combined with a dynamics estimator defined by the User Level, this can provide an important tool for the environment model calibration at the Environment Design Level.
Figure 1: Data flow of the DBC framework (the Environment Design, User, and Agent levels exchange the environment model, ideal dynamics, estimator, dynamics feasibility, and system response data)
Extending upon the idea of Actor-Critic algorithms [5], DBC data flow can provide a good basis for the design of a learning algorithm. For example, the User Level can operate as an exploratory device for a learning algorithm, inferring an ideal dynamics target from the environment model at hand that would expose and verify the most critical features of system behavior. In this case, feasibility and system response data from the Agent Level would provide key information for an environment model update. In fact, the combination of feasibility and response data can provide a basis for the application of strong learning algorithms such as EM [2, 9].
3.1 DBC for Markovian Environments
For a Partially Observable Markovian Environment, DBC can be specified in a more rigorous manner. Notice how DBC discards rewards, and replaces them by another optimality criterion (structural differences are summarized in Table 1):
• Environment Design level is to specify a tuple < S, A, T, O, Ω, s0 >, where:
- S is the set of all possible environment states;
- s0 is the initial state of the environment (which can also be viewed as a probability distribution over S);
- A is the set of all possible actions applicable in the environment;
- T is the environment's probabilistic transition function: T : S × A → Π(S). That is, T(s'|a, s) is the probability that the environment will move from state s to state s' under action a;
- O is the set of all possible observations. This is what the sensor input would look like for an outside observer;
- Ω is the observation probability function: Ω : S × A × S → Π(O). That is, Ω(o|s', a, s) is the probability that one will observe o given that the environment has moved from state s to state s' under action a.
• User Level, in the case of a Markovian environment, operates on the set of system dynamics described by a family of conditional probabilities F = {τ : S × A → Π(S)}. Thus specification of target dynamics can be expressed by q ∈ F, and the learning or tracking algorithm can be represented as a function L : O × (A × O)∗ → F; that is, it maps sequences of observations and actions performed so far into an estimate τ ∈ F of system dynamics. There are many possible variations available at the User Level to define divergence between system dynamics; several of them are:
- Trace distance or L1 distance between two distributions p and q, defined by D(p(·), q(·)) = (1/2) Σ_x |p(x) − q(x)|
- Fidelity measure of distance, F(p(·), q(·)) = Σ_x sqrt(p(x) q(x))
- Kullback-Leibler divergence, D_KL(p(·) || q(·)) = Σ_x p(x) log (p(x) / q(x))
Notice that the latter two are not actually metrics over the space of possible distributions, but nevertheless have meaningful and important interpretations. For instance, Kullback-Leibler divergence is an important tool of information theory [3] that allows one to measure the price of encoding an information source governed by q, while assuming that it is governed by p. The User Level also defines the threshold of dynamics deviation probability θ.
• Agent Level is then faced with the problem of selecting a control signal function a∗ to satisfy a minimization problem as follows:
a∗ = arg min_a Pr(d(τ_a, q) > θ)
where d(τ_a, q) is a random variable describing deviation of the dynamics estimate τ_a, created by L under control signal a, from the ideal dynamics q.
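As an illustration of the User Level deviation measures listed above, the following is a minimal sketch (in Python, not code from the paper) of the trace (L1) distance, the fidelity, and the Kullback-Leibler divergence between two discrete distributions; the epsilon guard in the KL computation is an added assumption to keep the sketch numerically safe.

import numpy as np

def trace_distance(p, q):
    # D(p, q) = 1/2 * sum_x |p(x) - q(x)|
    return 0.5 * float(np.abs(p - q).sum())

def fidelity(p, q):
    # F(p, q) = sum_x sqrt(p(x) * q(x)); equals 1 exactly when p = q
    return float(np.sqrt(p * q).sum())

def kl_divergence(p, q, eps=1e-12):
    # D_KL(p || q) = sum_x p(x) * log(p(x) / q(x))
    p = np.clip(p, eps, None)
    q = np.clip(q, eps, None)
    return float((p * np.log(p / q)).sum())

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.5, 0.3, 0.2])
print(trace_distance(p, q), fidelity(p, q), kl_divergence(p, q))

Any of these could serve as the deviation measure d whose threshold-exceedance probability the Agent Level tries to keep small.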
Implicit in this minimization problem is that L is manipulated via the environment, based on the environment model produced by the Environment Design Level. 3.2 DBC View of the State Space It is important to note the complementary view that DBC and POMDPs take on the state space of the environment. POMDPs regard state as a stationary snap-shot of the environment; whatever attributes of state sequencing one seeks are reached through properties of the control process, in this case reward accumulation. This can be viewed as if the sequencing of states and the attributes of that sequencing are only introduced by and for the controlling mechanism-the POMDP policy. DBC concentrates on the underlying principle of state sequencing, the system dynamics. DBC``s target dynamics specification can use the environment``s state space as a means to describe, discern, and preserve changes that occur within the system. As a result, DBC has a greater ability to express state sequencing properties, which are grounded in the environment model and its state space definition. For example, consider the task of moving through rough terrain towards a goal and reaching it as fast as possible. POMDPs would encode terrain as state space points, while speed would be ensured by negative reward for every step taken without reaching the goalaccumulating higher reward can be reached only by faster motion. Alternatively, the state space could directly include the notion of speed. For POMDPs, this would mean that the same concept is encoded twice, in some sense: directly in the state space, and indirectly within reward accumulation. Now, even if the reward function would encode more, and finer, details of the properties of motion, the POMDP solution will have to search in a much larger space of policies, while still being guided by the implicit concept of the reward accumulation procedure. On the other hand, the tactical target expression of variations and correlations between position and speed of motion is now grounded in the state space representation. In this situation, any further constraints, e.g., smoothness of motion, speed limits in different locations, or speed reductions during sharp turns, are explicitly and uniformly expressed by the tactical target, and can result in faster and more effective action selection by a DBC algorithm. 4. EMT-BASED CONTROL AS A DBC Recently, a control algorithm was introduced called EMT-based Control [13], which instantiates the DBC framework. Although it provides an approximate greedy solution in the DBC sense, initial experiments using EMT-based control have been encouraging [14, 15]. EMT-based control is based on the Markovian environment definition, as in the case with POMDPs, but its User and Agent Levels are of the Markovian DBC type of optimality. • User Level of EMT-based control defines a limited-case target system dynamics independent of action: qEMT : S → Π(S). It then utilizes the Kullback-Leibler divergence measure to compose a momentary system dynamics estimator-the Extended Markov Tracking (EMT) algorithm. The algorithm keeps a system dynamics estimate τt EMT that is capable of explaining recent change in an auxiliary Bayesian system state estimator from pt−1 to pt, and updates it conservatively using Kullback-Leibler divergence. 
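The auxiliary Bayesian system state estimator is not spelled out in the text; a standard Bayesian filter over the < S, A, T, O, Ω > model of Section 3.1 is the natural reading, and a minimal sketch of one update step (array layouts and variable names are assumptions, not the authors' code) is:

import numpy as np

def belief_update(p_prev, a, o, T, Omega):
    # p_prev: belief over S at time t-1 (length |S| vector)
    # T[a][s, s2] = T(s2 | a, s); Omega[a][s, s2, o] = Omega(o | s2, a, s)
    joint = p_prev[:, None] * T[a]          # Pr(previous state s, next state s2)
    weighted = joint * Omega[a][:, :, o]    # weight by the observation likelihood
    p_new = weighted.sum(axis=0)            # marginalize out the previous state
    return p_new / p_new.sum()              # normalized belief p_t

The EMT step then looks for a conditional dynamics estimate that explains the move from p_{t-1} to p_t while staying close to the previous estimate, as formalized next.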
Since τ^t_EMT and p_{t−1}, p_t are respectively the conditional and marginal probabilities over the system's state space, explanation simply means that p_t(s') = Σ_s τ^t_EMT(s'|s) p_{t−1}(s), and the dynamics estimate update is performed by solving a minimization problem:
τ^t_EMT = H[p_t, p_{t−1}, τ^{t−1}_EMT] = arg min_τ D_KL(τ × p_{t−1} || τ^{t−1}_EMT × p_{t−1})
s.t. p_t(s') = Σ_s (τ × p_{t−1})(s', s) and p_{t−1}(s) = Σ_{s'} (τ × p_{t−1})(s', s)
Table 1: Structure of POMDP vs. Dynamics-Based Control in a Markovian Environment
- Environment Design level (both approaches): a tuple < S, A, T, O, Ω >, where S is the set of states, A the set of actions, T : S × A → Π(S) the transition function, O the observation set, and Ω : S × A × S → Π(O) the observation function.
- User level: MDP - a reward function r : S × A × S → R and reward remodeling F(π∗) → r; Markovian DBC - ideal dynamics q : S × A → Π(S), a dynamics estimator L(o_1, ..., o_t) → τ, and a deviation threshold θ.
- Agent level: MDP - π∗ = arg max_π E[Σ_t γ^t r_t]; Markovian DBC - π∗ = arg min_π Prob(d(τ, q) > θ).
• Agent Level in EMT-based control is suboptimal with respect to DBC (though it remains within the DBC framework), performing greedy action selection based on prediction of EMT's reaction. The prediction is based on the environment model provided by the Environment Design level, so that if we denote by T_a the environment's transition function limited to action a, and p_{t−1} is the auxiliary Bayesian system state estimator, then the EMT-based control choice is described by
a∗ = arg min_{a∈A} D_KL(H[T_a × p_t, p_t, τ^t_EMT] || q_EMT × p_{t−1})
Note that this follows the Markovian DBC framework precisely: the rewarding optimality of POMDPs is discarded, and in its place a dynamics estimator (EMT in this case) is manipulated via action effects on the environment to produce an estimate close to the specified target system dynamics. Yet as we mentioned, naive EMT-based control is suboptimal in the DBC sense, and has several additional limitations that do not exist in the general DBC framework (discussed in Section 4.2).
4.1 Multi-Target EMT
At times, there may exist several behavioral preferences. For example, in the case of museum guards, some art items are more heavily guarded, requiring that the guards stay more often in their close vicinity. On the other hand, no corner of the museum is to be left unchecked, which demands constant motion. Successful museum security would demand that the guards adhere to, and balance, both of these behaviors. For EMT-based control, this would mean facing several tactical targets {q_k}, k = 1, ..., K, and the question becomes how to merge and balance them. A balancing mechanism can be applied to resolve this issue. Note that EMT-based control, while selecting an action, creates a preference vector over the set of actions based on their predicted performance with respect to a given target. If these preference vectors are normalized, they can be combined into a single unified preference. This requires replacement of standard EMT-based action selection by the algorithm below [15]:
• Given: a set of target dynamics {q_k}, k = 1, ..., K, and a vector of weights w(k)
• Select an action as follows:
- For each action a ∈ A predict the future state distribution p̄^a_{t+1} = T_a ∗ p_t;
- For each action, compute D_a = H(p̄^a_{t+1}, p_t, PD_t);
- For each a ∈ A and tactical target q_k, denote V(a, k) = D_KL(D_a || q_k) taken with respect to p_t. Let V_k(a) = (1/Z_k) V(a, k), where Z_k = Σ_{a∈A} V(a, k) is a normalization factor.
- Select a∗ = arg min_a Σ_{k=1}^{K} w(k) V_k(a)
The weights vector w = (w_1, ..., w_K) allows the additional tuning of importance among target dynamics without the need to redesign the targets themselves. This balancing method is also seamlessly integrated into the EMT-based control flow of operation.
4.2 EMT-based Control Limitations
EMT-based control is a sub-optimal (in the DBC sense) representative of the DBC structure. It limits the User by forcing EMT to be its dynamic tracking algorithm, and replaces Agent optimization by greedy action selection. This kind of combination, however, is common for on-line algorithms. Although further development of EMT-based controllers is necessary, evidence so far suggests that even the simplest form of the algorithm possesses a great deal of power, and displays trends that are optimal in the DBC sense of the word. There are two further, EMT-specific, limitations to EMT-based control that are evident at this point. Both already have partial solutions and are subjects of ongoing research.
The first limitation is the problem of negative preference. In the POMDP framework for example, this is captured simply, through the appearance of values with different signs within the reward structure. For EMT-based control, however, negative preference means that one would like to avoid a certain distribution over system development sequences; EMT-based control, however, concentrates on getting as close as possible to a distribution. Avoidance is thus unnatural in native EMT-based control.
The second limitation comes from the fact that standard environment modeling can create pure sensory actions: actions that do not change the state of the world, and differ only in the way observations are received and the quality of observations received. Since the world state does not change, EMT-based control would not be able to differentiate between different sensory actions. Notice that both of these limitations of EMT-based control are absent from the general DBC framework, since it may have a tracking algorithm capable of considering pure sensory actions and, unlike Kullback-Leibler divergence, a distribution deviation measure that is capable of dealing with negative preference.
5. EMT PLAYING TAG
The Game of Tag was first introduced in [11]. It is a single agent problem of capturing a quarry, and belongs to the class of area sweeping problems. An example domain is shown in Figure 2.
Figure 2: Tag domain; an agent (A) attempts to seek and capture a quarry (Q)
The Game of Tag extremely limits the agent's perception, so that the agent is able to detect the quarry only if they are co-located in the same cell of the grid world. In the classical version of the game, co-location leads to a special observation, and the 'Tag' action can be performed. We slightly modify this setting: the moment that both agents occupy the same cell, the game ends. As a result, both the agent and its quarry have the same motion capability, which allows them to move in four directions: North, South, East, and West. These form a formal space of actions within a Markovian environment. The state space of the formal Markovian environment is described by the cross-product of the agent's and quarry's positions. For Figure 2, it would be S = {s0, ..., s23} × {s0, ..., s23}.
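Before specifying the quarry's motion model, it may help to restate the weighted greedy selection of Section 4.1 as code. The sketch below is only an illustration of that selection rule: it treats the EMT update H as a black-box callable supplied by the caller, reuses a small KL helper, and all names and array layouts are assumptions rather than the authors' implementation.

import numpy as np

def kl(p, q, eps=1e-12):
    p = np.clip(p, eps, None)
    q = np.clip(q, eps, None)
    return float((p * np.log(p / q)).sum())

def select_action(actions, T, p_t, tau_t, emt_update, targets, weights):
    # T[a][s, s2] = T(s2 | a, s); emt_update(p_next, p_now, tau_now) returns the
    # predicted conditional dynamics estimate D_a (an |S| x |S| matrix, rows indexed by s).
    V = np.zeros((len(actions), len(targets)))
    for i, a in enumerate(actions):
        p_next = T[a].T @ p_t                  # predicted next-state distribution under a
        D_a = emt_update(p_next, p_t, tau_t)   # predicted reaction of the tracker
        for k, q_k in enumerate(targets):      # p_t-weighted divergence from each target
            V[i, k] = sum(p_t[s] * kl(D_a[s], q_k[s]) for s in range(len(p_t)))
    V = V / V.sum(axis=0, keepdims=True)       # normalize each target's preference vector
    scores = V @ np.asarray(weights)           # combine preferences with the weights w(k)
    return actions[int(np.argmin(scores))]     # greedy choice over the weighted divergence

In the experiments reported below, a selection of this form is run at every step, with the weight vector balancing the catch, mobile, and stalk targets.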
The effects of an action taken by the agent are deterministic, but the environment in general has a stochastic response due to the motion of the quarry. With probability q0 it stays put, and with probability 1 − q0 it moves to an adjacent cell further away from the agent (in our experiments q0 was taken to be 0.2). So for the instance shown in Figure 2 and q0 = 0.1:
P(Q = s9 | Q = s9, A = s11) = 0.1
P(Q = s2 | Q = s9, A = s11) = 0.3
P(Q = s8 | Q = s9, A = s11) = 0.3
P(Q = s14 | Q = s9, A = s11) = 0.3
Although the evasive behavior of the quarry is known to the agent, the quarry's position is not. The only sensory information available to the agent is its own location.
We use EMT and directly specify the target dynamics. For the Game of Tag, we can easily formulate three major trends: catching the quarry, staying mobile, and stalking the quarry. This results in the following three target dynamics:
Tcatch(A_{t+1} = s_i | Q_t = s_j, A_t = s_a) ∝ 1 if s_i = s_j, and 0 otherwise
Tmobile(A_{t+1} = s_i | Q_t = s_o, A_t = s_j) ∝ 0 if s_i = s_j, and 1 otherwise
Tstalk(A_{t+1} = s_i | Q_t = s_o, A_t = s_j) ∝ 1 / dist(s_i, s_o)
Note that none of the above targets are directly achievable; for instance, if Q_t = s9 and A_t = s11, there is no action that can move the agent to A_{t+1} = s9 as required by the Tcatch target dynamics.
We ran several experiments to evaluate EMT performance in the Tag Game. Three configurations of the domain shown in Figure 3 were used, each posing a different challenge to the agent due to partial observability. In each setting, a set of 1000 runs was performed with a time limit of 100 steps. In every run, the initial position of both the agent and its quarry was selected at random; this means that as far as the agent was concerned, the quarry's initial position was uniformly distributed over the entire domain cell space. We also used two variations of the environment observability function. In the first version, the observability function mapped all joint positions of hunter and quarry into the position of the hunter as an observation. In the second, only those joint positions in which hunter and quarry occupied different locations were mapped into the hunter's location. The second version in fact utilized and expressed the fact that once hunter and quarry occupy the same cell the game ends. The results of these experiments are shown in Table 2.
Balancing [15] the catch, move, and stalk target dynamics described in the previous section by the weight vector [0.8, 0.1, 0.1], EMT produced stable performance in all three domains. Although direct comparisons are difficult to make, the EMT performance displayed notable efficiency vis-à-vis the POMDP approach. In spite of a simple and inefficient Matlab implementation of the EMT algorithm, the decision time for any given step averaged significantly below 1 second in all experiments. For the irregular open arena domain, which proved to be the most difficult, 1000 experiment runs bounded by 100 steps each, a total of 42411 steps, were completed in slightly under 6 hours. That is, over 4 × 10^4 online steps took an order of magnitude less time than the offline computation of the POMDP policy in [11]. The significance of this differential is made even more prominent by the fact that, should the environment model parameters change, the online nature of EMT would allow it to maintain its performance time, while the POMDP policy would need to be recomputed, requiring yet again a large overhead of computation. We also examined the cell-frequency entropy of the agent's behavior, an empirical measure computed from the trial data.
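Before turning to the results, note that the quarry model and the three target dynamics above translate directly into conditional probability tables. The sketch below is illustrative only: the neighbour list, the precomputed pairwise distance table, and the handling of cells with no "further away" neighbour are assumptions, not details given in the paper.

def quarry_transition(q_pos, a_pos, neighbors, dist, q0=0.2):
    # With probability q0 the quarry stays put; otherwise it moves uniformly to an
    # adjacent cell that is further away from the agent (if any such cell exists).
    away = [c for c in neighbors[q_pos] if dist[c][a_pos] > dist[q_pos][a_pos]]
    probs = {q_pos: q0 if away else 1.0}
    for c in away:
        probs[c] = (1.0 - q0) / len(away)
    return probs

def target_dynamics(cells, dist):
    # Tcatch: all mass on the quarry's cell; Tmobile: uniform over every cell except
    # the agent's current one; Tstalk: mass proportional to 1 / distance from the quarry.
    n = len(cells)
    T_catch = {(q, a): {q: 1.0} for q in cells for a in cells}
    T_mobile = {(q, a): {s: 1.0 / (n - 1) for s in cells if s != a}
                for q in cells for a in cells}
    T_stalk = {}
    for q in cells:
        for a in cells:
            w = {s: 1.0 / max(dist[s][q], 1) for s in cells}   # guard against zero distance
            z = sum(w.values())
            T_stalk[(q, a)] = {s: w[s] / z for s in cells}
    return T_catch, T_mobile, T_stalk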
Figure 3: These configurations of the Tag Game space were used: a) multiple dead-end, b) irregular open arena, c) circular corridor
Table 2: Performance of the EMT-based solution in three Tag Game domains and two observability models: I) omniposition quarry, II) quarry is not at hunter's position
Model I: Dead-ends - Capture 100%, E(Steps) 14.8, Time/Step 72 msec; Arena - Capture 80.2%, E(Steps) 42.4, Time/Step 500 msec; Circle - Capture 91.4%, E(Steps) 34.6, Time/Step 187 msec
Model II: Dead-ends - Capture 100%, E(Steps) 13.2, Time/Step 91 msec; Arena - Capture 96.8%, E(Steps) 28.67, Time/Step 396 msec; Circle - Capture 94.4%, E(Steps) 31.63, Time/Step 204 msec
As Figure 4 and Figure 5 show, empirical entropy grows with the length of interaction. For runs where the quarry was not captured immediately, the entropy reaches between 0.85 and 0.95 for different runs and scenarios (entropy was calculated using a log base equal to the number of possible locations within the domain, which scales the entropy into the range [0, 1] for all domains). As the agent actively seeks the quarry, the entropy never reaches its maximum. One characteristic of the entropy graph for the open arena scenario particularly caught our attention in the case of the omniposition quarry observation model. Near the maximum limit of trial length (100 steps), entropy suddenly dropped. Further analysis of the data showed that under certain circumstances, a fluctuating behavior occurs in which the agent faces equally viable versions of quarry-following behavior. Since the EMT algorithm has greedy action selection, and the state space does not encode any form of commitment (not even speed or acceleration), the agent is locked within a small portion of cells. It is essentially attempting to simultaneously follow several courses of action, all of which are consistent with the target dynamics. This behavior did not occur in our second observation model, since it significantly reduced the set of eligible courses of action, essentially contributing to tie-breaking among them.
6. DISCUSSION
The design of the EMT solution for the Tag Game exposes the core difference in approach to planning and control between EMT or DBC, on the one hand, and the more familiar POMDP approach, on the other. POMDP defines a reward structure to optimize, and influences system dynamics indirectly through that optimization. EMT discards any reward scheme, and instead measures and influences system dynamics directly. Thus for the Tag Game, we did not search for a reward function that would encode and express our preference over the agent's behavior, but rather directly set three (heuristic) behavior preferences as the basis for target dynamics to be maintained. Experimental data shows that these targets need not be directly achievable via the agent's actions. However, the ratio between EMT performance and achievability of target dynamics remains to be explored.
The Tag Game experiment data also revealed the different emphasis DBC and POMDPs place on the formulation of an environment state space. POMDPs rely entirely on the mechanism of reward accumulation maximization, i.e., formation of the action selection procedure to achieve necessary state sequencing. DBC, on the other hand, has two sources of sequencing specification: through the properties of an action selection procedure, and through direct specification within the target dynamics.
The importance of the second source was underlined by the Tag Game experiment data, in which the greedy EMT algorithm, applied to a POMDP-type state space specification, failed, since target description over such a state space was incapable of encoding the necessary behavior tendencies, e.g., tie-breaking and commitment to directed motion.
The structural differences between DBC (and EMT in particular) and POMDPs prohibit direct performance comparison, and place them on complementary tracks, each within a suitable niche. For instance, POMDPs could be seen as a much more natural formulation of economic sequential decision-making problems, while EMT is a better fit for continual demand for stochastic change, as happens in many robotic or embodied-agent problems.
The complementary properties of POMDPs and EMT can be further exploited. There is recent interest in using POMDPs in hybrid solutions [17], in which the POMDPs can be used together with other control approaches to provide results not easily achievable with either approach by itself. DBC can be an effective partner in such a hybrid solution. For instance, POMDPs have prohibitively large off-line time requirements for policy computation, but can be readily used in simpler settings to expose beneficial behavioral trends; this can serve as a form of target dynamics that are provided to EMT in a larger domain for on-line operation.
Figure 4: Observation Model I: Omniposition quarry. Entropy development with length of Tag Game for the three experiment scenarios: a) multiple dead-end, b) irregular open arena, c) circular corridor.
Figure 5: Observation Model II: quarry not observed at hunter's position. Entropy development with length of Tag Game for the three experiment scenarios: a) multiple dead-end, b) irregular open arena, c) circular corridor.
7. CONCLUSIONS AND FUTURE WORK
In this paper, we have presented a novel perspective on the process of planning and control in stochastic environments, in the form of the Dynamics Based Control (DBC) framework. DBC formulates the task of planning as support of a specified target system dynamics, which describes the necessary properties of change within the environment. Optimality of DBC plans of action is measured with respect to the deviation of actual system dynamics from the target dynamics. We show that a recently developed technique of Extended Markov Tracking (EMT) [13] is an instantiation of DBC. In fact, EMT can be seen as a specific case of DBC parameterization, which employs a greedy action selection procedure. Since EMT exhibits the key features of the general DBC framework, as well as polynomial time complexity, we used the multi-target version of EMT [15] to demonstrate that the class of area sweeping problems naturally lends itself to dynamics-based descriptions, as instantiated by our experiments in the Tag Game domain. As enumerated in Section 4.2, EMT has a number of limitations, such as difficulty in dealing with negative dynamic preference.
This prevents direct application of EMT to such problems as Rendezvous-Evasion Games (e.g., [6]). However, DBC in general has no such limitations, and readily enables the formulation of evasion games. In future work, we intend to proceed with the development of dynamics-based controllers for these problems. 8. ACKNOWLEDGMENT The work of the first two authors was partially supported by Israel Science Foundation grant #898/05, and the third author was partially supported by a grant from Israel``s Ministry of Science and Technology. 9. REFERENCES [1] R. C. Arkin. Behavior-Based Robotics. MIT Press, 1998. [2] J. A. Bilmes. A gentle tutorial of the EM algorithm and its application to parameter estimation for Gaussian mixture and Hidden Markov Models. Technical Report TR-97-021, Department of Electrical Engeineering and Computer Science, University of California at Berkeley, 1998. [3] T. M. Cover and J. A. Thomas. Elements of information theory. Wiley, 1991. [4] M. E. desJardins, E. H. Durfee, C. L. Ortiz, and M. J. Wolverton. A survey of research in distributed, continual planning. AI Magazine, 4:13-22, 1999. [5] V. R. Konda and J. N. Tsitsiklis. Actor-Critic algorithms. SIAM Journal on Control and Optimization, 42(4):1143-1166, 2003. [6] W. S. Lim. A rendezvous-evasion game on discrete locations with joint randomization. Advances in Applied Probability, 29(4):1004-1017, December 1997. [7] M. L. Littman, T. L. Dean, and L. P. Kaelbling. On the complexity of solving Markov decision problems. In Proceedings of the 11th Annual Conference on Uncertainty in Artificial Intelligence (UAI-95), pages 394-402, 1995. [8] O. Madani, S. Hanks, and A. Condon. On the undecidability of probabilistic planning and related stochastic optimization problems. Artificial Intelligence Journal, 147(1-2):5-34, July 2003. [9] R. M. Neal and G. E. Hinton. A view of the EM algorithm 796 The Sixth Intl.. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) that justifies incremental, sparse, and other variants. In M. I. Jordan, editor, Learning in Graphical Models, pages 355-368. Kluwer Academic Publishers, 1998. [10] P. Paruchuri, M. Tambe, F. Ordonez, and S. Kraus. Security in multiagent systems by policy randomization. In Proceeding of AAMAS 2006, 2006. [11] J. Pineau, G. Gordon, and S. Thrun. Point-based value iteration: An anytime algorithm for pomdps. In International Joint Conference on Artificial Intelligence (IJCAI), pages 1025-1032, August 2003. [12] M. L. Puterman. Markov Decision Processes. Wiley Series in Probability and Mathematical Statistics: Applied Probability and Statistics Section. Wiley-Interscience Publication, New York, 1994. [13] Z. Rabinovich and J. S. Rosenschein. Extended Markov Tracking with an application to control. In The Workshop on Agent Tracking: Modeling Other Agents from Observations, at the Third International Joint Conference on Autonomous Agents and Multiagent Systems, pages 95-100, New-York, July 2004. [14] Z. Rabinovich and J. S. Rosenschein. Multiagent coordination by Extended Markov Tracking. In The Fourth International Joint Conference on Autonomous Agents and Multiagent Systems, pages 431-438, Utrecht, The Netherlands, July 2005. [15] Z. Rabinovich and J. S. Rosenschein. On the response of EMT-based control to interacting targets and models. In The Fifth International Joint Conference on Autonomous Agents and Multiagent Systems, pages 465-470, Hakodate, Japan, May 2006. [16] R. F. Stengel. Optimal Control and Estimation. Dover Publications, 1994. [17] M. Tambe, E. 
Bowring, H. Jung, G. Kaminka, R. Maheswaran, J. Marecki, J. Modi, R. Nair, J. Pearce, P. Paruchuri, D. Pynadath, P. Scerri, N. Schurr, and P. Varakantham. Conflicts in teamwork: Hybrids to the
Dynamics Based Control with an Application to Area-Sweeping Problems ABSTRACT In this paper we introduce Dynamics Based Control (DBC), an approach to planning and control of an agent in stochastic environments. Unlike existing approaches, which seek to optimize expected rewards (e.g., in Partially Observable Markov Decision Problems (POMDPs)), DBC optimizes system behavior towards specified system dynamics. We show that a recently developed planning and control approach, Extended Markov Tracking (EMT) is an instantiation of DBC. EMT employs greedy action selection to provide an efficient control algorithm in Markovian environments. We exploit this efficiency in a set of experiments that applied multitarget EMT to a class of area-sweeping problems (searching for moving targets). We show that such problems can be naturally defined and efficiently solved using the DBC framework, and its EMT instantiation. 1. INTRODUCTION Planning and control constitutes a central research area in multiagent systems and artificial intelligence. In recent years, Partially Observable Markov Decision Processes (POMDPs) [12] have become a popular formal basis for planning in stochastic environments. In this framework, the planning and control problem is often addressed by imposing a reward function, and computing a policy (of choosing actions) that is optimal, in the sense that it will result in the highest expected utility. While theoretically attractive, the complexity of optimally solving a POMDP is prohibitive [8, 7]. We take an alternative view of planning in stochastic environments. We do not use a (state-based) reward function, but instead optimize over a different criterion, a transition-based specification of the desired system dynamics. The idea here is to view planexecution as a process that compels a (stochastic) system to change, and a plan as a dynamic process that shapes that change according to desired criteria. We call this general planning framework Dynamics Based Control (DBC). In DBC, the goal of a planning (or control) process becomes to ensure that the system will change in accordance with specific (potentially stochastic) target dynamics. As actual system behavior may deviate from that which is specified by target dynamics (due to the stochastic nature of the system), planning in such environments needs to be continual [4], in a manner similar to classical closed-loop controllers [16]. Here, optimality is measured in terms of probability of deviation magnitudes. In this paper, we present the structure of Dynamics Based Control. We show that the recently developed Extended Markov Tracking (EMT) approach [13, 14, 15] is subsumed by DBC, with EMT employing greedy action selection, which is a specific parameterization among the options possible within DBC. EMT is an efficient instantiation of DBC. To evaluate DBC, we carried out a set of experiments applying multi-target EMT to the Tag Game [11]; this is a variant on the area sweeping problem, where an agent is trying to "tag" a moving target (quarry) whose position is not known with certainty. Experimental data demonstrates that even with a simple model of the environment and a simple design of target dynamics, high success rates can be produced both in catching the quarry, and in surprising the quarry (as expressed by the observed entropy of the controlled agent's position). The paper is organized as follows. In Section 2 we motivate DBC using area-sweeping problems, and discuss related work. 
Section 3 introduces the Dynamics Based Control (DBC) structure, and its specialization to Markovian environments. This is followed by a review of the Extended Markov Tracking (EMT) approach as a DBC-structured control regimen in Section 4. That section also discusses the limitations of EMT-based control relative to the general DBC framework. Experimental settings and results are then presented in Section 5. Section 6 provides a short discussion of the overall approach, and Section 7 gives some concluding remarks and directions for future work. 2. MOTIVATION AND RELATED WORK Many real-life scenarios naturally have a stochastic target dynamics specification, especially those domains where there exists no ultimate goal, but rather system behavior (with specific properties) that has to be continually supported. For example, security guards perform persistent sweeps of an area to detect any sign of intrusion. Cunning thieves will attempt to track these sweeps, and time their operation to key points of the guards' motion. It is thus advisable to make the guards' motion dynamics appear irregular and random. Recent work by Paruchuri et al. [10] has addressed such randomization in the context of single-agent and distributed POMDPs. The goal in that work was to generate policies that provide a measure of action-selection randomization, while maintaining rewards within some acceptable levels. Our focus differs from this work in that DBC does not optimize expected rewards--indeed we do not consider rewards at all--but instead maintains desired dynamics (including, but not limited to, randomization). The Game of Tag is another example of the applicability of the approach. It was introduced in the work by Pineau et al. [11]. There are two agents that can move about an area, which is divided into a grid. The grid may have blocked cells (holes) into which no agent can move. One agent (the hunter) seeks to move into a cell occupied by the other (the quarry), such that they are co-located (this is a "successful tag"). The quarry seeks to avoid the hunter agent, and is always aware of the hunter's position, but does not know how the hunter will behave, which opens up the possibility for a hunter to surprise the prey. The hunter knows the quarry's probabilistic law of motion, but does not know its current location. Tag is an instance of a family of area-sweeping (pursuit-evasion) problems. In [11], the hunter modeled the problem using a POMDP. A reward function was defined, to reflect the desire to tag the quarry, and an action policy was computed to optimize the reward collected over time. Due to the intractable complexity of determining the optimal policy, the action policy computed in that paper was essentially an approximation. In this paper, instead of formulating a reward function, we use EMT to solve the problem, by directly specifying the target dynamics. In fact, any search problem with randomized motion, the socalled class of area sweeping problems, can be described through specification of such target system dynamics. Dynamics Based Control provides a natural approach to solving these problems. 3. DYNAMICS BASED CONTROL The specification of Dynamics Based Control (DBC) can be broken into three interacting levels: Environment Design Level, User Level, and Agent Level. • Environment Design Level is concerned with the formal specification and modeling of the environment. 
For example, this level would specify the laws of physics within the system, and set its parameters, such as the gravitation constant. • User Level in turn relies on the environment model produced by Environment Design to specify the target system dynamics it wishes to observe. The User Level also specifies the estimation or learning procedure for system dynamics, and the measure of deviation. In the museum guard scenario above, these would correspond to a stochastic sweep schedule, and a measure of relative surprise between the specified and actual sweeping. • Agent Level in turn combines the environment model from the Environment Design level, the dynamics estimation procedure, the deviation measure and the target dynamics specification from User Level, to produce a sequence of actions that create system dynamics as close as possible to the targeted specification. As we are interested in the continual development of a stochastic system, such as happens in classical control theory [16] and continual planning [4], as well as in our example of museum sweeps, the question becomes how the Agent Level is to treat the deviation measurements over time. To this end, we use a probability threshold--that is, we would like the Agent Level to maximize the probability that the deviation measure will remain below a certain threshold. Specific action selection then depends on system formalization. One possibility would be to create a mixture of available system trends, much like that which happens in Behavior-Based Robotic architectures [1]. The other alternative would be to rely on the estimation procedure provided by the User Level--to utilize the Environment Design Level model of the environment to choose actions, so as to manipulate the dynamics estimator into believing that a certain dynamics has been achieved. Notice that this manipulation is not direct, but via the environment. Thus, for strong enough estimator algorithms, successful manipulation would mean a successful simulation of the specified target dynamics (i.e., beyond discerning via the available sensory input). DBC levels can also have a back-flow of information (see Figure 1). For instance, the Agent Level could provide data about target dynamics feasibility, allowing the User Level to modify the requirement, perhaps focusing on attainable features of system behavior. Data would also be available about the system response to different actions performed; combined with a dynamics estimator defined by the User Level, this can provide an important tool for the environment model calibration at the Environment Design Level. Figure 1: Data flow of the DBC framework Extending upon the idea of Actor-Critic algorithms [5], DBC data flow can provide a good basis for the design of a learning algorithm. For example, the User Level can operate as an exploratory device for a learning algorithm, inferring an ideal dynamics target from the environment model at hand that would expose and verify most critical features of system behavior. In this case, feasibility and system response data from the Agent Level would provide key information for an environment model update. In fact, the combination of feasibility and response data can provide a basis for the application of strong learning algorithms such as EM [2, 9]. 3.1 DBC for Markovian Environments For a Partially Observable Markovian Environment, DBC can be specified in a more rigorous manner. 
Notice how DBC discards rewards, and replaces it by another optimality criterion (structural differences are summarized in Table 1): • Environment Design level is to specify a tuple <S, A, T, O, Ω, s0>, where:--S is the set of all possible environment states;--s0 is the initial state of the environment (which can also be viewed as a probability distribution over S); The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 791 ity that the environment will move from state s to state s' under action a;--O is the set of all possible observations. This is what the sensor input would look like for an outside observer; observe o given that the environment has moved from state s to state s' under action a. • User Level, in the case of Markovian environment, operates on the set of system dynamics described by a family of conditional probabilities F = {τ: S × A--* Π (S)}. Thus specification of target dynamics can be expressed by q ∈ F, and the learning or tracking algorithm can be represented as a function L: O × (A × O) *--* F; that is, it maps sequences of observations and actions performed so far into an estimate τ ∈ F of system dynamics. There are many possible variations available at the User Level to define divergence between system dynamics; several of them are: Notice that the latter two are not actually metrics over the space of possible distributions, but nevertheless have meaningful and important interpretations. For instance, KullbackLeibler divergence is an important tool of information theory [3] that allows one to measure the "price" of encoding an information source governed by q, while assuming that it is governed by p. The User Level also defines the threshold of dynamics deviation probability θ. • Agent Level is then faced with a problem of selecting a control signal function a * to satisfy a minimization problem as follows: where d (τa, q) is a random variable describing deviation of the dynamics estimate τa, created by L under control signal a, from the ideal dynamics q. Implicit in this minimization problem is that L is manipulated via the environment, based on the environment model produced by the Environment Design Level. 3.2 DBC View of the State Space It is important to note the complementary view that DBC and POMDPs take on the state space of the environment. POMDPs regard state as a stationary snap-shot of the environment; whatever attributes of state sequencing one seeks are reached through properties of the control process, in this case reward accumulation. This can be viewed as if the sequencing of states and the attributes of that sequencing are only introduced by and for the controlling mechanism--the POMDP policy. DBC concentrates on the underlying principle of state sequencing, the system dynamics. DBC's target dynamics specification can use the environment's state space as a means to describe, discern, and preserve changes that occur within the system. As a result, DBC has a greater ability to express state sequencing properties, which are grounded in the environment model and its state space definition. For example, consider the task of moving through rough terrain towards a goal and reaching it as fast as possible. POMDPs would encode terrain as state space points, while speed would be ensured by negative reward for every step taken without reaching the goal--accumulating higher reward can be reached only by faster motion. Alternatively, the state space could directly include the notion of speed. 
For POMDPs, this would mean that the same concept is encoded twice, in some sense: directly in the state space, and indirectly within reward accumulation. Now, even if the reward function would encode more, and finer, details of the properties of motion, the POMDP solution will have to search in a much larger space of policies, while still being guided by the implicit concept of the reward accumulation procedure. On the other hand, the tactical target expression of variations and correlations between position and speed of motion is now grounded in the state space representation. In this situation, any further constraints, e.g., smoothness of motion, speed limits in different locations, or speed reductions during sharp turns, are explicitly and uniformly expressed by the tactical target, and can result in faster and more effective action selection by a DBC algorithm. 4. EMT-BASED CONTROL AS A DBC Recently, a control algorithm was introduced called EMT-based Control [13], which instantiates the DBC framework. Although it provides an approximate greedy solution in the DBC sense, initial experiments using EMT-based control have been encouraging [14, 15]. EMT-based control is based on the Markovian environment definition, as in the case with POMDPs, but its User and Agent Levels are of the Markovian DBC type of optimality. • User Level of EMT-based control defines a limited-case target system dynamics independent of action: It then utilizes the Kullback-Leibler divergence measure to compose a momentary system dynamics estimator--the Extended Markov Tracking (EMT) algorithm. The algorithm keeps a system dynamics estimate τt EMT that is capable of explaining recent change in an auxiliary Bayesian system state estimator from pt-1 to pt, and updates it conservatively using Kullback-Leibler divergence. Since τt EMT and pt-1, t are respectively the conditional and marginal probabilities over the system's state space, "explanation" simply means that and the dynamics estimate update is performed by solving a--Trace distance or L1 distance between two distributions p and q defined by 792 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) Table 1: Structure of POMDP vs. Dynamics-Based Control in Markovian Environment performance with respect to a given target. If these preference vectors are normalized, they can be combined into a single unified preference. This requires replacement of standard EMT-based action selection by the algorithm below [15]: • Given: -- a set of target dynamics {qk} Kk = 1,--vector of weights w (k) • Select action as follows • Agent Level in EMT-based control is suboptimal with respect to DBC (though it remains within the DBC framework), performing greedy action selection based on prediction of EMT's reaction. The prediction is based on the environment model provided by the Environment Design level, so that if we denote by Ta the environment's transition function limited to action a, and pt − 1 is the auxiliary Bayesian system state estimator, then the EMT-based control choice is described by Note that this follows the Markovian DBC framework precisely: the rewarding optimality of POMDPs is discarded, and in its place a dynamics estimator (EMT in this case) is manipulated via action effects on the environment to produce an estimate close to the specified target system dynamics. 
Yet, as we mentioned, naive EMT-based control is suboptimal in the DBC sense, and it has several additional limitations that do not exist in the general DBC framework (these are discussed in Section 4.2).

4.1 Multi-Target EMT
At times, there may exist several behavioral preferences. For example, in the case of museum guards, some art items are more heavily guarded, requiring that the guards stay more often in their close vicinity. On the other hand, no corner of the museum is to be left unchecked, which demands constant motion. Successful museum security would demand that the guards adhere to, and balance, both of these behaviors. For EMT-based control, this would mean facing several tactical targets {qk}, k = 1, ..., K, and the question becomes how to merge and balance them. A balancing mechanism can be applied to resolve this issue. Note that EMT-based control, while selecting an action, creates a preference vector over the set of actions based on their predicted performance with respect to a given target. If these preference vectors are normalized, they can be combined into a single unified preference. This requires replacing the standard EMT-based action selection with the algorithm below [15]:

• Given: a set of target dynamics {qk}, k = 1, ..., K, and a vector of weights w(k);
• For each action a ∈ A, predict the future state distribution p̄t+1 = Ta ∗ pt;
• For each action and each target qk, compute the predicted deviation from qk as in the single-target case, and normalize the resulting per-target preference vectors over actions;
• Select the action with the best preference after combining the per-target preferences according to the weights w(k).

The weight vector w = (w1, ..., wK) allows additional "tuning of importance" among target dynamics without the need to redesign the targets themselves. This balancing method is also seamlessly integrated into the EMT-based control flow of operation.
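Continuing the sketch from Section 4, and under the same simplifications (one-step induced dynamics as a stand-in for the full EMT update), the weighted multi-target combination could look as follows; the normalization scheme and names are illustrative assumptions, and the details differ from the published balancing method.

import math

def _kl(p_row, q_row, eps=1e-12):
    return sum(pv * math.log(pv / max(q_row.get(s, 0.0), eps))
               for s, pv in p_row.items() if pv > 0)

def multi_target_choice(T, targets, weights, belief, actions, states):
    # targets: list of K action-independent target dynamics q_k.
    # weights: list of K non-negative importance weights w(k).
    combined = {a: 0.0 for a in actions}
    for q, w in zip(targets, weights):
        # Deviation of each action's induced dynamics from this target.
        dev = {a: sum(belief[s] * _kl(T[a][s], q[s]) for s in states)
               for a in actions}
        total = sum(dev.values()) or 1.0
        # Normalized per-target preference vector over actions, then weighted.
        for a in actions:
            combined[a] += w * dev[a] / total
    # Lower combined (weighted, normalized) deviation is better.
    return min(combined, key=combined.get)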
4.2 EMT-based Control Limitations
EMT-based control is a suboptimal (in the DBC sense) representative of the DBC structure. It limits the User by forcing EMT to be its dynamics tracking algorithm, and it replaces Agent optimization with greedy action selection. This kind of combination, however, is common for on-line algorithms. Although further development of EMT-based controllers is necessary, evidence so far suggests that even the simplest form of the algorithm possesses a great deal of power, and displays trends that are optimal in the DBC sense of the word.

There are two further, EMT-specific, limitations to EMT-based control that are evident at this point. Both already have partial solutions and are subjects of ongoing research. The first limitation is the problem of negative preference. In the POMDP framework, for example, this is captured simply, through the appearance of values with different signs within the reward structure. For EMT-based control, however, negative preference means that one would like to avoid a certain distribution over system development sequences; EMT-based control, in contrast, concentrates on getting as close as possible to a distribution. Avoidance is thus unnatural in native EMT-based control. The second limitation comes from the fact that standard environment modeling can create pure sensory actions: actions that do not change the state of the world, and differ only in the way observations are received and in the quality of the observations received. Since the world state does not change, EMT-based control is not able to differentiate between different sensory actions. Notice that both of these limitations of EMT-based control are absent from the general DBC framework, since it may have a tracking algorithm capable of considering pure sensory actions and, unlike Kullback-Leibler divergence, a distribution deviation measure that is capable of dealing with negative preference.

5. EMT PLAYING TAG
The Game of Tag was first introduced in [11]. It is a single-agent problem of capturing a quarry, and it belongs to the class of area sweeping problems. An example domain is shown in Figure 2.

Figure 2: Tag domain; an agent (A) attempts to seek and capture a quarry (Q)

The Game of Tag severely limits the agent's perception, so that the agent is able to detect the quarry only if they are co-located in the same cell of the grid world. In the classical version of the game, co-location leads to a special observation, and the `Tag' action can be performed. We slightly modify this setting: the moment both agents occupy the same cell, the game ends. Both the agent and its quarry have the same motion capability, which allows them to move in four directions: North, South, East, and West. These form a formal space of actions within a Markovian environment. The state space of this formal Markovian environment is described by the cross-product of the agent's and quarry's positions; for Figure 2, it would be S = {s0, ..., s23} × {s0, ..., s23}. The effects of an action taken by the agent are deterministic, but the environment in general has a stochastic response due to the motion of the quarry. With probability q0 the quarry stays put, and with probability 1 − q0 it moves to an adjacent cell further away from the agent (in our experiments q0 was taken to be 0.2). Although the evasive behavior of the quarry is known to the agent, the quarry's position is not. The only sensory information available to the agent is its own location.

We use EMT and directly specify the target dynamics. For the Game of Tag, we can easily formulate three major behavioral trends: catching the quarry, staying mobile, and stalking the quarry. This results in three corresponding target dynamics; we denote the catching target by Tcatch. Note that none of these targets is directly achievable; for instance, if Qt = s9 and At = s11, there is no action that can move the agent to At+1 = s9 as required by the Tcatch target dynamics.
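A minimal sketch of the environment's stochastic response described above follows. The grid layout, the use of Manhattan distance, the handling of a cornered quarry, and uniform tie-breaking among equally distant escape cells are assumptions made for illustration and are not specified in the paper.

import random

def quarry_step(quarry, agent, adjacent, q0=0.2):
    # With probability q0 the quarry stays put; otherwise it moves to an
    # adjacent cell that is further away from the agent than its current cell.
    if random.random() < q0:
        return quarry
    def manhattan(c1, c2):
        return abs(c1[0] - c2[0]) + abs(c1[1] - c2[1])
    current = manhattan(quarry, agent)
    escapes = [c for c in adjacent[quarry] if manhattan(c, agent) > current]
    if not escapes:                # assumption: a cornered quarry stays put
        return quarry
    return random.choice(escapes)  # assumption: uniform choice among escapes

def game_over(agent, quarry):
    # In the modified setting, the game ends the moment both occupy one cell.
    return agent == quarry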
We ran several experiments to evaluate EMT performance in the Tag Game. Three configurations of the domain, shown in Figure 3, were used, each posing a different challenge to the agent due to partial observability.

Figure 3: These configurations of the Tag Game space were used: a) multiple dead-end, b) irregular open arena, c) circular corridor

In each setting, a set of 1000 runs was performed with a time limit of 100 steps. In every run, the initial position of both the agent and its quarry was selected at random; this means that, as far as the agent was concerned, the quarry's initial position was uniformly distributed over the entire domain cell space. We also used two variations of the environment's observability function. In the first version, the observability function mapped all joint positions of hunter and quarry into the position of the hunter as an observation. In the second, only those joint positions in which hunter and quarry occupied different locations were mapped into the hunter's location. The second version in fact utilized and expressed the fact that once hunter and quarry occupy the same cell, the game ends. The results of these experiments are shown in Table 2.

Table 2: Performance of the EMT-based solution in three Tag Game domains and two observability models: I) omniposition quarry, II) quarry is not at hunter's position

Balancing [15] the catch, move, and stalk target dynamics described in the previous section by the weight vector [0.8, 0.1, 0.1], EMT produced stable performance in all three domains. Although direct comparisons are difficult to make, the EMT performance displayed notable efficiency vis-à-vis the POMDP approach. In spite of a simple and inefficient Matlab implementation of the EMT algorithm, the decision time for any given step averaged significantly below 1 second in all experiments. For the irregular open arena domain, which proved to be the most difficult, 1000 experiment runs bounded by 100 steps each, a total of 42411 steps, were completed in slightly under 6 hours. That is, over 4 × 10^4 online steps took an order of magnitude less time than the offline computation of the POMDP policy in [11]. The significance of this differential is made even more prominent by the fact that, should the environment model parameters change, the online nature of EMT would allow it to maintain its performance time, while the POMDP policy would need to be recomputed, requiring yet again a large overhead of computation.

We also measured cell frequency entropy, an empirical measure of behavior computed from the trial data. Entropy was calculated using a log base equal to the number of possible locations within the domain; this properly scales the entropy expression into the range [0, 1] for all domains. As Figure 4 and Figure 5 show, empirical entropy grows with the length of interaction. For runs where the quarry was not captured immediately, the entropy reaches between 0.85 and 0.95 for different runs and scenarios. As the agent actively seeks the quarry, the entropy never reaches its maximum.

Figure 4: Observation Model I: Omniposition quarry. Entropy development with length of Tag Game for the three experiment scenarios: a) multiple dead-end, b) irregular open arena, c) circular corridor.

Figure 5: Observation Model II: quarry not observed at hunter's position. Entropy development with length of Tag Game for the three experiment scenarios: a) multiple dead-end, b) irregular open arena, c) circular corridor.

One characteristic of the entropy graph for the open arena scenario particularly caught our attention in the case of the omniposition quarry observation model. Near the maximum limit of trial length (100 steps), entropy suddenly dropped. Further analysis of the data showed that under certain circumstances a fluctuating behavior occurs, in which the agent faces equally viable versions of quarry-following behavior. Since the EMT algorithm has greedy action selection, and the state space does not encode any form of commitment (not even speed or acceleration), the agent becomes locked within a small portion of cells. It is essentially attempting to simultaneously follow several courses of action, all of which are consistent with the target dynamics. This behavior did not occur in our second observation model, since that model significantly reduced the set of eligible courses of action, essentially contributing to tie-breaking among them.
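The entropy measure just described can be reproduced with a short routine; a minimal sketch follows, where the visit-log format (a flat list of visited cell identifiers) is an assumption made for illustration.

import math
from collections import Counter

def cell_frequency_entropy(visited_cells, num_locations):
    # Entropy of the empirical distribution of visited cells, using a log
    # base equal to the number of possible locations, so the result lies
    # in [0, 1] for every domain regardless of its size.
    counts = Counter(visited_cells)
    total = len(visited_cells)
    entropy = 0.0
    for c in counts.values():
        p = c / total
        entropy -= p * math.log(p, num_locations)
    return entropy

# Example: a 24-cell domain and a short trajectory of visited cells.
print(cell_frequency_entropy([0, 1, 2, 2, 3, 4, 4, 4], 24))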
6. DISCUSSION
The design of the EMT solution for the Tag Game exposes the core difference in approach to planning and control between EMT (or DBC in general), on the one hand, and the more familiar POMDP approach, on the other. A POMDP defines a reward structure to optimize, and influences system dynamics indirectly through that optimization. EMT discards any reward scheme, and instead measures and influences system dynamics directly. Thus for the Tag Game, we did not search for a reward function that would encode and express our preference over the agent's behavior, but rather directly set three (heuristic) behavior preferences as the basis for target dynamics to be maintained. Experimental data shows that these targets need not be directly achievable via the agent's actions. However, the relationship between EMT performance and the achievability of the target dynamics remains to be explored.

The Tag Game experiment data also revealed the different emphasis DBC and POMDPs place on the formulation of an environment state space. POMDPs rely entirely on the mechanism of reward accumulation maximization, i.e., on the formation of the action selection procedure, to achieve the necessary state sequencing. DBC, on the other hand, has two sources of sequencing specification: the properties of the action selection procedure, and direct specification within the target dynamics. The importance of the second source was underlined by the Tag Game experiment data, in which the greedy EMT algorithm, applied to a POMDP-type state space specification, failed, since a target description over such a state space was incapable of encoding the necessary behavior tendencies, e.g., tie-breaking and commitment to directed motion.

The structural differences between DBC (and EMT in particular) and POMDPs prohibit direct performance comparison, and place them on complementary tracks, each within a suitable niche. For instance, POMDPs could be seen as a much more natural formulation of economic sequential decision-making problems, while EMT is a better fit for continual demand for stochastic change, as happens in many robotic or embodied-agent problems.

The complementary properties of POMDPs and EMT can be further exploited. There is recent interest in using POMDPs in hybrid solutions [17], in which the POMDPs can be used together with other control approaches to provide results not easily achievable with either approach by itself. DBC can be an effective partner in such a hybrid solution. For instance, POMDPs have prohibitively large off-line time requirements for policy computation, but can be readily used in simpler settings to expose beneficial behavioral trends; this can serve as a form of target dynamics that are provided to EMT in a larger domain for on-line operation.

7. CONCLUSIONS AND FUTURE WORK
In this paper, we have presented a novel perspective on the process of planning and control in stochastic environments, in the form of the Dynamics Based Control (DBC) framework. DBC formulates the task of planning as support of a specified target system dynamics, which describes the necessary properties of change within the environment. Optimality of DBC plans of action is measured with respect to the deviation of actual system dynamics from the target dynamics. We show that the recently developed technique of Extended Markov Tracking (EMT) [13] is an instantiation of DBC. In fact, EMT can be seen as a specific case of DBC parameterization, which employs a greedy action selection procedure. Since EMT exhibits the key features of the general DBC framework, as well as polynomial time complexity, we used the multi-target version of EMT [15] to demonstrate that the class of area sweeping problems naturally lends itself to dynamics-based descriptions, as instantiated by our experiments in the Tag Game domain. As enumerated in Section 4.2, EMT has a number of limitations, such as difficulty in dealing with negative dynamic preference. This prevents direct application of EMT to such problems as Rendezvous-Evasion Games (e.g., [6]).
However, DBC in general has no such limitations, and readily enables the formulation of evasion games. In future work, we intend to proceed with the development of dynamics-based controllers for these problems.
Dynamics Based Control with an Application to Area-Sweeping Problems ABSTRACT In this paper we introduce Dynamics Based Control (DBC), an approach to planning and control of an agent in stochastic environments. Unlike existing approaches, which seek to optimize expected rewards (e.g., in Partially Observable Markov Decision Problems (POMDPs)), DBC optimizes system behavior towards specified system dynamics. We show that a recently developed planning and control approach, Extended Markov Tracking (EMT) is an instantiation of DBC. EMT employs greedy action selection to provide an efficient control algorithm in Markovian environments. We exploit this efficiency in a set of experiments that applied multitarget EMT to a class of area-sweeping problems (searching for moving targets). We show that such problems can be naturally defined and efficiently solved using the DBC framework, and its EMT instantiation. 1. INTRODUCTION Planning and control constitutes a central research area in multiagent systems and artificial intelligence. In recent years, Partially Observable Markov Decision Processes (POMDPs) [12] have become a popular formal basis for planning in stochastic environments. In this framework, the planning and control problem is often addressed by imposing a reward function, and computing a policy (of choosing actions) that is optimal, in the sense that it will result in the highest expected utility. While theoretically attractive, the complexity of optimally solving a POMDP is prohibitive [8, 7]. We take an alternative view of planning in stochastic environments. We do not use a (state-based) reward function, but instead optimize over a different criterion, a transition-based specification of the desired system dynamics. The idea here is to view planexecution as a process that compels a (stochastic) system to change, and a plan as a dynamic process that shapes that change according to desired criteria. We call this general planning framework Dynamics Based Control (DBC). In DBC, the goal of a planning (or control) process becomes to ensure that the system will change in accordance with specific (potentially stochastic) target dynamics. As actual system behavior may deviate from that which is specified by target dynamics (due to the stochastic nature of the system), planning in such environments needs to be continual [4], in a manner similar to classical closed-loop controllers [16]. Here, optimality is measured in terms of probability of deviation magnitudes. In this paper, we present the structure of Dynamics Based Control. We show that the recently developed Extended Markov Tracking (EMT) approach [13, 14, 15] is subsumed by DBC, with EMT employing greedy action selection, which is a specific parameterization among the options possible within DBC. EMT is an efficient instantiation of DBC. To evaluate DBC, we carried out a set of experiments applying multi-target EMT to the Tag Game [11]; this is a variant on the area sweeping problem, where an agent is trying to "tag" a moving target (quarry) whose position is not known with certainty. Experimental data demonstrates that even with a simple model of the environment and a simple design of target dynamics, high success rates can be produced both in catching the quarry, and in surprising the quarry (as expressed by the observed entropy of the controlled agent's position). The paper is organized as follows. In Section 2 we motivate DBC using area-sweeping problems, and discuss related work. 
Section 3 introduces the Dynamics Based Control (DBC) structure, and its specialization to Markovian environments. This is followed by a review of the Extended Markov Tracking (EMT) approach as a DBC-structured control regimen in Section 4. That section also discusses the limitations of EMT-based control relative to the general DBC framework. Experimental settings and results are then presented in Section 5. Section 6 provides a short discussion of the overall approach, and Section 7 gives some concluding remarks and directions for future work. 2. MOTIVATION AND RELATED WORK 3. DYNAMICS BASED CONTROL 3.1 DBC for Markovian Environments The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 791 3.2 DBC View of the State Space 4. EMT-BASED CONTROL AS A DBC 792 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 4.1 Multi-Target EMT 4.2 EMT-based Control Limitations The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 793 5. EMT PLAYING TAG 6. DISCUSSION 7. CONCLUSIONS AND FUTURE WORK In this paper, we have presented a novel perspective on the process of planning and control in stochastic environments, in the form of the Dynamics Based Control (DBC) framework. DBC formulates the task of planning as support of a specified target system dynamics, which describes the necessary properties of change within the environment. Optimality of DBC plans of action are measured The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 795 Figure 4: Observation Model I: Omniposition quarry. Entropy development with length of Tag Game for the three experiment scenarios: a) multiple dead-end, b) irregular open arena, c) circular corridor. Figure 5: Observation Model II: quarry not observed at hunter's position. Entropy development with length of Tag Game for the three experiment scenarios: a) multiple dead-end, b) irregular open arena, c) circular corridor. with respect to the deviation of actual system dynamics from the target dynamics. We show that a recently developed technique of Extended Markov Tracking (EMT) [13] is an instantiation of DBC. In fact, EMT can be seen as a specific case of DBC parameterization, which employs a greedy action selection procedure. Since EMT exhibits the key features of the general DBC framework, as well as polynomial time complexity, we used the multitarget version of EMT [15] to demonstrate that the class of area sweeping problems naturally lends itself to dynamics-based descriptions, as instantiated by our experiments in the Tag Game domain. As enumerated in Section 4.2, EMT has a number of limitations, such as difficulty in dealing with negative dynamic preference. This prevents direct application of EMT to such problems as Rendezvous-Evasion Games (e.g., [6]). However, DBC in general has no such limitations, and readily enables the formulation of evasion games. In future work, we intend to proceed with the development of dynamics-based controllers for these problems.
Dynamics Based Control with an Application to Area-Sweeping Problems ABSTRACT In this paper we introduce Dynamics Based Control (DBC), an approach to planning and control of an agent in stochastic environments. Unlike existing approaches, which seek to optimize expected rewards (e.g., in Partially Observable Markov Decision Problems (POMDPs)), DBC optimizes system behavior towards specified system dynamics. We show that a recently developed planning and control approach, Extended Markov Tracking (EMT) is an instantiation of DBC. EMT employs greedy action selection to provide an efficient control algorithm in Markovian environments. We exploit this efficiency in a set of experiments that applied multitarget EMT to a class of area-sweeping problems (searching for moving targets). We show that such problems can be naturally defined and efficiently solved using the DBC framework, and its EMT instantiation. 1. INTRODUCTION Planning and control constitutes a central research area in multiagent systems and artificial intelligence. In recent years, Partially Observable Markov Decision Processes (POMDPs) [12] have become a popular formal basis for planning in stochastic environments. In this framework, the planning and control problem is often We take an alternative view of planning in stochastic environments. We do not use a (state-based) reward function, but instead optimize over a different criterion, a transition-based specification of the desired system dynamics. We call this general planning framework Dynamics Based Control (DBC). In DBC, the goal of a planning (or control) process becomes to ensure that the system will change in accordance with specific (potentially stochastic) target dynamics. As actual system behavior may deviate from that which is specified by target dynamics (due to the stochastic nature of the system), planning in such environments needs to be continual [4], in a manner similar to classical closed-loop controllers [16]. Here, optimality is measured in terms of probability of deviation magnitudes. In this paper, we present the structure of Dynamics Based Control. We show that the recently developed Extended Markov Tracking (EMT) approach [13, 14, 15] is subsumed by DBC, with EMT employing greedy action selection, which is a specific parameterization among the options possible within DBC. EMT is an efficient instantiation of DBC. The paper is organized as follows. In Section 2 we motivate DBC using area-sweeping problems, and discuss related work. Section 3 introduces the Dynamics Based Control (DBC) structure, and its specialization to Markovian environments. This is followed by a review of the Extended Markov Tracking (EMT) approach as a DBC-structured control regimen in Section 4. That section also discusses the limitations of EMT-based control relative to the general DBC framework. Experimental settings and results are then presented in Section 5. Section 6 provides a short discussion of the overall approach, and Section 7 gives some concluding remarks and directions for future work. 7. CONCLUSIONS AND FUTURE WORK In this paper, we have presented a novel perspective on the process of planning and control in stochastic environments, in the form of the Dynamics Based Control (DBC) framework. DBC formulates the task of planning as support of a specified target system dynamics, which describes the necessary properties of change within the environment. Optimality of DBC plans of action are measured The Sixth Intl. . Joint Conf. 
Figure 4: Observation Model I: Omniposition quarry. Entropy development with length of Tag Game for the three experiment scenarios: a) multiple dead-end, b) irregular open arena, c) circular corridor. Figure 5: Observation Model II: quarry not observed at hunter's position. Entropy development with length of Tag Game for the three experiment scenarios: a) multiple dead-end, b) irregular open arena, c) circular corridor. with respect to the deviation of actual system dynamics from the target dynamics. We show that a recently developed technique of Extended Markov Tracking (EMT) [13] is an instantiation of DBC. In fact, EMT can be seen as a specific case of DBC parameterization, which employs a greedy action selection procedure. As enumerated in Section 4.2, EMT has a number of limitations, such as difficulty in dealing with negative dynamic preference. This prevents direct application of EMT to such problems as Rendezvous-Evasion Games (e.g., [6]). However, DBC in general has no such limitations, and readily enables the formulation of evasion games. In future work, we intend to proceed with the development of dynamics-based controllers for these problems.
I-42
A Complete Distributed Constraint Optimization Method For Non-Traditional Pseudotree Arrangements
Distributed Constraint Optimization (DCOP) is a general framework that can model complex problems in multi-agent systems. Several current algorithms that solve general DCOP instances, including ADOPT and DPOP, arrange agents into a traditional pseudotree structure. We introduce an extension to the DPOP algorithm that handles an extended set of pseudotree arrangements. Our algorithm correctly solves DCOP instances for pseudotrees that include edges between nodes in separate branches. The algorithm also solves instances with traditional pseudotree arrangements using the same procedure as DPOP. We compare our algorithm with DPOP using several metrics including the induced width of the pseudotrees, the maximum dimensionality of messages and computation, and the maximum sequential path cost through the algorithm. We prove that for some problem instances it is not possible to generate a traditional pseudotree using edge-traversal heuristics that will outperform a cross-edged pseudotree. We use multiple heuristics to generate pseudotrees and choose the best pseudotree in linear space-time complexity. For some problem instances we observe significant improvements in message and computation sizes compared to DPOP.
[ "distribut constraint optim", "pseudotre arrang", "agent", "maximum sequenti path cost", "cross-edg pseudotre", "multi-agent system", "edg-travers heurist", "job shop schedul", "resourc alloc", "teamwork coordin", "multi-valu util function", "global util", "distribut constraint satisfact and optim", "multi-agent coordin" ]
[ "P", "P", "P", "P", "P", "M", "M", "U", "U", "U", "U", "U", "M", "U" ]
A Complete Distributed Constraint Optimization Method For Non-Traditional Pseudotree Arrangements∗ James Atlas Computer and Information Sciences University of Delaware Newark, DE 19716 atlas@cis.udel.edu Keith Decker Computer and Information Sciences University of Delaware Newark, DE 19716 decker@cis.udel.edu ABSTRACT Distributed Constraint Optimization (DCOP) is a general framework that can model complex problems in multi-agent systems. Several current algorithms that solve general DCOP instances, including ADOPT and DPOP, arrange agents into a traditional pseudotree structure. We introduce an extension to the DPOP algorithm that handles an extended set of pseudotree arrangements. Our algorithm correctly solves DCOP instances for pseudotrees that include edges between nodes in separate branches. The algorithm also solves instances with traditional pseudotree arrangements using the same procedure as DPOP. We compare our algorithm with DPOP using several metrics including the induced width of the pseudotrees, the maximum dimensionality of messages and computation, and the maximum sequential path cost through the algorithm. We prove that for some problem instances it is not possible to generate a traditional pseudotree using edge-traversal heuristics that will outperform a cross-edged pseudotree. We use multiple heuristics to generate pseudotrees and choose the best pseudotree in linear space-time complexity. For some problem instances we observe significant improvements in message and computation sizes compared to DPOP. Categories and Subject Descriptors I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence-Multiagent Systems General Terms Algorithms 1. INTRODUCTION Many historical problems in the AI community can be transformed into Constraint Satisfaction Problems (CSP). With the advent of distributed AI, multi-agent systems became a popular way to model the complex interactions and coordination required to solve distributed problems. CSPs were originally extended to distributed agent environments in [9]. Early domains for distributed constraint satisfaction problems (DisCSP) included job shop scheduling [1] and resource allocation [2]. Many domains for agent systems, especially teamwork coordination, distributed scheduling, and sensor networks, involve overly constrained problems that are difficult or impossible to satisfy for every constraint. Recent approaches to solving problems in these domains rely on optimization techniques that map constraints into multi-valued utility functions. Instead of finding an assignment that satisfies all constraints, these approaches find an assignment that produces a high level of global utility. This extension to the original DisCSP approach has become popular in multi-agent systems, and has been labeled the Distributed Constraint Optimization Problem (DCOP) [1]. Current algorithms that solve complete DCOPs use two main approaches: search and dynamic programming. Search based algorithms that originated from DisCSP typically use some form of backtracking [10] or bounds propagation, as in ADOPT [3]. Dynamic programming based algorithms include DPOP and its extensions [5, 6, 7]. To date, both categories of algorithms arrange agents into a traditional pseudotree to solve the problem. It has been shown in [6] that any constraint graph can be mapped into a traditional pseudotree. However, it was also shown that finding the optimal pseudotree was NP-Hard. 
We began to investigate the performance of traditional pseudotrees generated by current edge-traversal heuristics. We found that these heuristics often produced little parallelism, as the pseudotrees tended to have high depth and low branching factors. We suspected that there could be other ways to arrange the pseudotrees that would provide increased parallelism and smaller message sizes. After exploring these other arrangements we found that cross-edged pseudotrees provide shorter depths and higher branching factors than the traditional pseudotrees. Our hypothesis was that these cross-edged pseudotrees would outperform traditional pseudotrees for some problem types.

In this paper we introduce an extension to the DPOP algorithm that handles an extended set of pseudotree arrangements which includes cross-edged pseudotrees. We begin with a definition of DCOP, traditional pseudotrees, and cross-edged pseudotrees. We then provide a summary of the original DPOP algorithm and introduce our DCPOP algorithm. We discuss the complexity of our algorithm as well as the impact of pseudotree generation heuristics. We then show that our Distributed Cross-edged Pseudotree Optimization Procedure (DCPOP) performs significantly better in practice than the original DPOP algorithm for some problem instances. We conclude with a selection of ideas for future work and extensions for DCPOP.

2. PROBLEM DEFINITION
DCOP has been formalized in slightly different ways in recent literature, so we will adopt the definition as presented in [6]. A Distributed Constraint Optimization Problem with n nodes and m constraints consists of the tuple <X, D, U> where:
• X = {x1, ..., xn} is a set of variables, each one assigned to a unique agent;
• D = {d1, ..., dn} is a set of finite domains, one for each variable;
• U = {u1, ..., um} is a set of utility functions such that each function involves a subset of variables in X and defines a utility for each combination of values among these variables.

An optimal solution to a DCOP instance consists of an assignment of values in D to X such that the sum of utilities in U is maximal. Problem domains that require minimum cost instead of maximum utility can map costs into negative utilities. The utility functions represent soft constraints but can also represent hard constraints by using arbitrarily large negative values. For this paper we only consider binary utility functions involving two variables. Higher order utility functions can be modeled with minor changes to the algorithm, but they also substantially increase the complexity.

2.1 Traditional Pseudotrees
Pseudotrees are a common structure used in search procedures to allow parallel processing of independent branches. As defined in [6], a pseudotree is an arrangement of a graph G into a rooted tree T such that vertices in G that share an edge are in the same branch in T. A back-edge is an edge between a node X and any node which lies on the path from X to the root (excluding X's parent). Figure 1 shows a pseudotree with four nodes, three edges (A-B, B-C, B-D), and one back-edge (A-C).
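To make the <X, D, U> tuple and the binary utility functions concrete before turning to pseudotree relationships, here is a minimal sketch of a DCOP instance and of evaluating a complete assignment; the class and field names are illustrative and are not part of the algorithms in the paper.

from typing import Callable, Dict, List, Tuple

class BinaryDCOP:
    def __init__(self,
                 domains: Dict[str, List[int]],
                 utilities: Dict[Tuple[str, str], Callable[[int, int], float]]):
        self.domains = domains        # D: variable name -> finite domain
        self.utilities = utilities    # U: (xi, xj) -> binary utility function

    def total_utility(self, assignment: Dict[str, int]) -> float:
        # Sum of all binary utility functions under a complete assignment.
        return sum(u(assignment[i], assignment[j])
                   for (i, j), u in self.utilities.items())

# Example: two variables with a soft "prefer equal values" constraint;
# a hard constraint could be modeled with an arbitrarily large negative value.
dcop = BinaryDCOP(
    domains={"x1": [0, 1], "x2": [0, 1]},
    utilities={("x1", "x2"): lambda a, b: 5.0 if a == b else 0.0},
)
print(dcop.total_utility({"x1": 1, "x2": 1}))  # 5.0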
Also defined in [6] are four types of relationships between nodes exist in a pseudotree: • P(X) - the parent of a node X: the single node higher in the pseudotree that is connected to X directly through a tree edge • C(X) - the children of a node X: the set of nodes lower in the pseudotree that are connected to X directly through tree edges • PP(X) - the pseudo-parents of a node X: the set of nodes higher in the pseudotree that are connected to X directly through back-edges (In Figure 1, A = PP(C)) • PC(X) - the pseudo-children of a node X: the set of nodes lower in the pseudotree that are connected to X directly through back-edges (In Figure 1, C = PC(A)) Figure 1: A traditional pseudotree. Solid line edges represent parent-child relationships and the dashed line represents a pseudo-parent-pseudo-child relationship. Figure 2: A cross-edged pseudotree. Solid line edges represent parent-child relationships, the dashed line represents a pseudoparent-pseudo-child relationship, and the dotted line represents a branch-parent-branch-child relationship. The bolded node, B, is the merge point for node E. 2.2 Cross-edged Pseudotrees We define a cross-edge as an edge from node X to a node Y that is above X but not in the path from X to the root. A cross-edged pseudotree is a traditional pseudotree with the addition of cross-edges. Figure 2 shows a cross-edged pseudotree with a cross-edge (D-E). In a cross-edged pseudotree we designate certain edges as primary. The set of primary edges defines a spanning tree of the nodes. The parent, child, pseudo-parent, and pseudo-child relationships from the traditional pseudotree are now defined in the context of this primary edge spanning tree. This definition also yields two additional types of relationships that may exist between nodes: • BP(X) - the branch-parents of a node X: the set of nodes higher in the pseudotree that are connected to X but are not in the primary path from X to the root (In Figure 2, D = BP(E)) • BC(X) - the branch-children of a node X: the set of nodes lower in the pseudotree that are connected to X but are not in any primary path from X to any leaf node (In Figure 2, E = BC(D)) 2.3 Pseudotree Generation 742 The Sixth Intl.. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) Current algorithms usually have a pre-execution phase to generate a traditional pseudotree from a general DCOP instance. Our DCPOP algorithm generates a cross-edged pseudotree in the same fashion. First, the DCOP instance < X, D, U > translates directly into a graph with X as the set of vertices and an edge for each pair of variables represented in U. Next, various heuristics are used to arrange this graph into a pseudotree. One common heuristic is to perform a guided depth-first search (DFS) as the resulting traversal is a pseudotree, and a DFS can easily be performed in a distributed fashion. We define an edge-traversal based method as any method that produces a pseudotree in which all parent/child pairs share an edge in the original graph. This includes DFS, breadth-first search, and best-first search based traversals. Our heuristics that generate cross-edged pseudotrees use a distributed best-first search traversal. 3. DPOP ALGORITHM The original DPOP algorithm operates in three main phases. The first phase generates a traditional pseudotree from the DCOP instance using a distributed algorithm. The second phase joins utility hypercubes from children and the local node and propagates them towards the root. 
The third phase chooses an assignment for each domain in a top down fashion beginning with the agent at the root node. The complexity of DPOP depends on the size of the largest computation and utility message during phase two. It has been shown that this size directly corresponds to the induced width of the pseudotree generated in phase one [6]. DPOP uses polynomial time heuristics to generate the pseudotree since finding the minimum induced width pseudotree is NP-hard. Several distributed edgetraversal heuristics have been developed to find low width pseudotrees [8]. At the end of the first phase, each agent knows its parent, children, pseudo-parents, and pseudo-children. 3.1 Utility Propagation Agents located at leaf nodes in the pseudotree begin the process by calculating a local utility hypercube. This hypercube at node X contains summed utilities for each combination of values in the domains for P(X) and PP(X). This hypercube has dimensional size equal to the number of pseudo-parents plus one. A message containing this hypercube is sent to P(X). Agents located at non-leaf nodes wait for all messages from children to arrive. Once the agent at node Y has all utility messages, it calculates its local utility hypercube which includes domains for P(Y), PP(Y), and Y. The local utility hypercube is then joined with all of the hypercubes from the child messages. At this point all utilities involving node Y are known, and the domain for Y may be safely eliminated from the joined hypercube. This elimination process chooses the best utility over the domain of Y for each combination of the remaining domains. A message containing this hypercube is now sent to P(Y). The dimensional size of this hypercube depends on the number of overlapping domains in received messages and the local utility hypercube. This dynamic programming based propagation phase continues until the agent at the root node of the pseudotree has received all messages from its children. 3.2 Value Propagation Value propagation begins when the agent at the root node Z has received all messages from its children. Since Z has no parents or pseudo-parents, it simply combines the utility hypercubes received from its children. The combined hypercube contains only values for the domain for Z. At this point the agent at node Z simply chooses the assignment for its domain that has the best utility. A value propagation message with this assignment is sent to each node in C(Z). Each other node then receives a value propagation message from its parent and chooses the assignment for its domain that has the best utility given the assignments received in the message. The node adds its domain assignment to the assignments it received and passes the set of assignments to its children. The algorithm is complete when all nodes have chosen an assignment for their domain. 4. DCPOP ALGORITHM Our extension to the original DPOP algorithm, shown in Algorithm 1, shares the same three phases. The first phase generates the cross-edged pseudotree for the DCOP instance. The second phase merges branches and propagates the utility hypercubes. The third phase chooses assignments for domains at branch merge points and in a top down fashion, beginning with the agent at the root node. For the first phase we generate a pseudotree using several distributed heuristics and select the one with lowest overall complexity. The complexity of the computation and utility message size in DCPOP does not directly correspond to the induced width of the cross-edged pseudotree. 
Instead, we use a polynomial time method for calculating the maximum computation and utility message size for a given cross-edged pseudotree. A description of this method and the pseudotree selection process appears in Section 5. At the end of the first phase, each agent knows its parent, children, pseudo-parents, pseudo-children, branch-parents, and branch-children. 4.1 Merging Branches and Utility Propagation In the original DPOP algorithm a node X only had utility functions involving its parent and its pseudo-parents. In DCPOP, a node X is allowed to have a utility function involving a branch-parent. The concept of a branch can be seen in Figure 2 with node E representing our node X. The two distinct paths from node E to node B are called branches of E. The single node where all branches of E meet is node B, which is called the merge point of E. Agents with nodes that have branch-parents begin by sending a utility propagation message to each branch-parent. This message includes a two dimensional utility hypercube with domains for the node X and the branch-parent BP(X). It also includes a branch information structure which contains the origination node of the branch, X, the total number of branches originating from X, and the number of branches originating from X that are merged into a single representation by this branch information structure (this number starts at 1). Intuitively when the number of merged branches equals the total number of originating branches, the algorithm has reached the merge point for X. In Figure 2, node E sends a utility propagation message to its branch-parent, node D. This message has dimensions for the domains of E and D, and includes branch information with an origin of E, 2 total branches, and 1 merged branch. As in the original DPOP utility propagation phase, an agent at leaf node X sends a utility propagation message to its parent. In DCPOP this message contains dimensions for the domains of P(X) and PP(X). If node X also has branch-parents, then the utility propagation message also contains a dimension for the domain of X, and will include a branch information structure. In Figure 2, node E sends a utility propagation message to its parent, node C. This message has dimensions for the domains of E and C, and includes branch information with an origin of E, 2 total branches, and 1 merged branch. When a node Y receives utility propagation messages from all of The Sixth Intl.. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 743 its children and branch-children, it merges any branches with the same origination node X. The merged branch information structure accumulates the number of merged branches for X. If the cumulative total number of merged branches equals the total number of branches, then Y is the merge point for X. This means that the utility hypercubes present at Y contain all information about the valuations for utility functions involving node X. In addition to the typical elimination of the domain of Y from the utility hypercubes, we can now safely eliminate the domain of X from the utility hypercubes. To illustrate this process, we will examine what happens in the second phase for node B in Figure 2. In the second phase Node B receives two utility propagation messages. The first comes from node C and includes dimensions for domains E, B, and A. It also has a branch information structure with origin of E, 2 total branches, and 1 merged branch. The second comes from node D and includes dimensions for domains E and B. 
It also has a branch information structure with origin of E, 2 total branches, and 1 merged branch. Node B then merges the branch information structures from both messages because they have the same origination, node E. Since the number of merged branches originating from E is now 2 and the total branches originating from E is 2, node B now eliminates the dimensions for domain E. Node B also eliminates the dimension for its own domain, leaving only information about domain A. Node B then sends a utility propagation message to node A, containing only one dimension for the domain of A. Although not possible in DPOP, this method of utility propagation and dimension elimination may produce hypercubes at node Y that do not share any domains. In DCPOP we do not join domain independent hypercubes, but instead may send multiple hypercubes in the utility propagation message sent to the parent of Y. This lazy approach to joins helps to reduce message sizes. 4.2 Value Propagation As in DPOP, value propagation begins when the agent at the root node Z has received all messages from its children. At this point the agent at node Z chooses the assignment for its domain that has the best utility. If Z is the merge point for the branches of some node X, Z will also choose the assignment for the domain of X. Thus any node that is a merge point will choose assignments for a domain other than its own. These assignments are then passed down the primary edge hierarchy. If node X in the hierarchy has branch-parents, then the value assignment message from P(X) will contain an assignment for the domain of X. Every node in the hierarchy adds any assignments it has chosen to the ones it received and passes the set of assignments to its children. The algorithm is complete when all nodes have chosen or received an assignment for their domain. 4.3 Proof of Correctness We will prove the correctness of DCPOP by first noting that DCPOP fully extends DPOP and then examining the two cases for value assignment in DCPOP. Given a traditional pseudotree as input, the DCPOP algorithm execution is identical to DPOP. Using a traditional pseudotree arrangement no nodes have branch-parents or branch-children since all edges are either back-edges or tree edges. Thus the DCPOP algorithm using a traditional pseudotree sends only utility propagation messages that contain domains belonging to the parent or pseudo-parents of a node. Since no node has any branch-parents, no branches exist, and thus no node serves as a merge point for any other node. Thus all value propagation assignments are chosen at the node of the assignment domain. For DCPOP execution with cross-edged pseudotrees, some nodes serve as merge points. We note that any node X that is not a merge point assigns its value exactly as in DPOP. The local utility hypercube at X contains domains for X, P(X), PP(X), and BC(X). As in DPOP the value assignment message received at X includes the values assigned to P(X) and PP(X). Also, since X is not a merge point, all assignments to BC(X) must have been calculated at merge points higher in the tree and are in the value assignment message from P(X). Thus after eliminating domains for which assignments are known, only the domain of X is left. The agent at node X can now correctly choose the assignment with maximum utility for its own domain. If node X is a merge point for some branch-child Y, we know that X must be a node along the path from Y to the root, and from P(Y) and all BP(Y) to the root. 
From the algorithm, we know that Y necessarily has all information from C(Y), PC(Y), and BC(Y) since it waits for their messages. Node X has information about all nodes below it in the tree, which would include Y, P(Y), BP(Y), and those PP(Y) that are below X in the tree. For any PP(Y) above X in the tree, X receives the assignment for the domain of PP(Y) in the value assignment message from P(X). Thus X has utility information about all of the utility functions of which Y is a part. By eliminating domains included in the value assignment message, node X is left with a local utility hypercube with domains for X and Y. The agent at node X can now correctly choose the assignments with maximum utility for the domains of X and Y. 4.4 Complexity Analysis The first phase of DCPOP sends one message to each P(X), PP(X), and BP(X). The second phase sends one value assignment message to each C(X). Thus, DCPOP produces a linear number of messages with respect to the number of edges (utility functions) in the cross-edged pseudotree and the original DCOP instance. The actual complexity of DCPOP depends on two additional measurements: message size and computation size. Message size and computation size in DCPOP depend on the number of overlapping branches as well as the number of overlapping back-edges. It was shown in [6] that the number of overlapping back-edges is equal to the induced width of the pseudotree. In a poorly constructed cross-edged pseudotree, the number of overlapping branches at node X can be as large as the total number of descendants of X. Thus, the total message size in DCPOP in a poorly constructed instance can be space-exponential in the total number of nodes in the graph. However, in practice a well constructed cross-edged pseudotree can achieve much better results. Later we address the issue of choosing well constructed crossedged pseudotrees from a set. We introduce an additional measurement of the maximum sequential path cost through the algorithm. This measurement directly relates to the maximum amount of parallelism achievable by the algorithm. To take this measurement we first store the total computation size for each node during phase two and three. This computation size represents the number of individual accesses to a value in a hypercube at each node. For example, a join between two domains of size 4 costs 4 ∗ 4 = 16. Two directed acyclic graphs (DAG) can then be drawn; one with the utility propagation messages as edges and the phase two costs at nodes, and the other with value assignment messages and the phase three costs at nodes. The maximum sequential path cost is equal to the sum of the longest path on each DAG from the root to any leaf node. 5. HEURISTICS In our assessment of complexity in DCPOP we focused on the worst case possibly produced by the algorithm. We acknowledge 744 The Sixth Intl.. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) Algorithm 1 DCPOP Algorithm 1: DCPOP(X; D; U) Each agent Xi executes: Phase 1: pseudotree creation 2: elect leader from all Xj ∈ X 3: elected leader initiates pseudotree creation 4: afterwards, Xi knows P(Xi), PP(Xi), BP(Xi), C(Xi), BC(Xi) and PC(Xi) Phase 2: UTIL message propagation 5: if |BP(Xi)| > 0 then 6: BRANCHXi ← |BP(Xi)| + 1 7: for all Xk ∈BP(Xi) do 8: UTILXi (Xk) ←Compute utils(Xi, Xk) 9: Send message(Xk,UTILXi (Xk),BRANCHXi ) 10: if |C(Xi)| = 0(i.e. 
Xi is a leaf node) then 11: UTILXi (P(Xi)) ← Compute utils(P(Xi),PP(Xi)) for all PP(Xi) 12: Send message(P(Xi), UTILXi (P(Xi)),BRANCHXi ) 13: Send message(PP(Xi), empty UTIL, empty BRANCH) to all PP(Xi) 14: activate UTIL Message handler() Phase 3: VALUE message propagation 15: activate VALUE Message handler() END ALGORITHM UTIL Message handler(Xk,UTILXk (Xi), BRANCHXk ) 16: store UTILXk (Xi),BRANCHXk (Xi) 17: if UTIL messages from all children and branch children arrived then 18: for all Bj ∈BRANCH(Xi) do 19: if Bj is merged then 20: join all hypercubes where Bj ∈UTIL(Xi) 21: eliminate Bj from the joined hypercube 22: if P(Xi) == null (that means Xi is the root) then 23: v ∗ i ← Choose optimal(null) 24: Send VALUE(Xi, v ∗ i) to all C(Xi) 25: else 26: UTILXi (P(Xi)) ← Compute utils(P(Xi), PP(Xi)) 27: Send message(P(Xi),UTILXi (P(Xi)), BRANCHXi (P(Xi))) VALUE Message handler(VALUEXi ,P(Xi)) 28: add all Xk ← v ∗ k ∈VALUEXi ,P(Xi) to agent view 29: Xi ← v ∗ i =Choose optimal(agent view) 30: Send VALUEXl , Xi to all Xl ∈C(Xi) that in real world problems the generation of the pseudotree has a significant impact on the actual performance. The problem of finding the best pseudotree for a given DCOP instance is NP-Hard. Thus a heuristic is used for generation, and the performance of the algorithm depends on the pseudotree found by the heuristic. Some previous research focused on finding heuristics to generate good pseudotrees [8]. While we have developed some heuristics that generate good cross-edged pseudotrees for use with DCPOP, our focus has been to use multiple heuristics and then select the best pseudotree from the generated pseudotrees. We consider only heuristics that run in polynomial time with respect to the number of nodes in the original DCOP instance. The actual DCPOP algorithm has worst case exponential complexity, but we can calculate the maximum message size, computation size, and sequential path cost for a given cross-edged pseudotree in linear space-time complexity. To do this, we simply run the algorithm without attempting to calculate any of the local utility hypercubes or optimal value assignments. Instead, messages include dimensional and branch information but no utility hypercubes. After each heuristic completes its generation of a pseudotree, we execute the measurement procedure and propagate the measurement information up to the chosen root in that pseudotree. The root then broadcasts the total complexity for that heuristic to all nodes. After all heuristics have had a chance to complete, every node knows which heuristic produced the best pseudotree. Each node then proceeds to begin the DCPOP algorithm using its knowledge of the pseudotree generated by the best heuristic. The heuristics used to generate traditional pseudotrees perform a distributed DFS traversal. The general distributed algorithm uses a token passing mechanism and a linear number of messages. Improved DFS based heuristics use a special procedure to choose the root node, and also provide an ordering function over the neighbors of a node to determine the order of path recursion. The DFS based heuristics used in our experiments come from the work done in [4, 8]. 5.1 The best-first cross-edged pseudotree heuristic The heuristics used to generate cross-edged pseudotrees perform a best-first traversal. A general distributed best-first algorithm for node expansion is presented in Algorithm 2. An evaluation function at each node provides the values that are used to determine the next best node to expand. 
Note that in this algorithm each node only exchanges its best value with its neighbors. In our experiments we used several evaluation functions that took as arguments an ordered list of ancestors and a node, which contains a list of neighbors (with each neighbor``s placement depth in the tree if it was placed). From these we can calculate branchparents, branch-children, and unknown relationships for a potential node placement. The best overall function calculated the value as ancestors−(branchparents+branchchildren) with the number of unknown relationships being a tiebreak. After completion each node has knowledge of its parent and ancestors, so it can easily determine which connected nodes are pseudo-parents, branchparents, pseudo-children, and branch-children. The complexity of the best-first traversal depends on the complexity of the evaluation function. Assuming a complexity of O(V ) for the evaluation function, which is the case for our best overall function, the best-first traversal is O(V · E) which is at worst O(n3 ). For each v ∈ V we perform a place operation, and find the next node to place using the getBestNeighbor operation. The place operation is at most O(V ) because of the sent messages. Finding the next node uses recursion and traverses only already placed The Sixth Intl.. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 745 Algorithm 2 Distributed Best-First Search Algorithm root ← electedleader next(root, ∅) place(node, parent) node.parent ← parent node.ancestors ← parent.ancestors ∪ parent send placement message (node, node.ancestors) to all neighbors of node next(current, previous) if current is not placed then place(current, previous) next(current, ∅) else best ← getBestNeighbor(current, previous) if best = ∅ then if previous = ∅ then terminate, all nodes are placed next(previous, ∅) else next(best, current) getBestNeighbor(current, previous) best ← ∅; score ← 0 for all n ∈ current.neighbors do if n! = previous then if n is placed then nscore ← getBestNeighbor(n, current) else nscore ← evaluate(current, n) if nscore > score then score ← nscore best ← n return best, score nodes, so it has O(V ) recursions. Each recursion performs a recursive getBestNeighbor operation that traverses all placed nodes and their neighbors. This operation is O(V · E), but results can be cached using only O(V ) space at each node. Thus we have O(V ·(V +V +V ·E)) = O(V 2 ·E). If we are smart about evaluating local changes when each node receives placement messages from its neighbors and cache the results the getBestNeighbor operation is only O(E). This increases the complexity of the place operation, but for all placements the total complexity is only O(V · E). Thus we have an overall complexity of O(V ·E+V ·(V +E)) = O(V ·E). 6. COMPARISON OF COMPLEXITY IN DPOP AND DCPOP We have already shown that given the same input, DCPOP performs the same as DPOP. We also have shown that we can accurately predict performance of a given pseudotree in linear spacetime complexity. If we use a constant number of heuristics to generate the set of pseudotrees, we can choose the best pseudotree in linear space-time complexity. We will now show that there exists a DCOP instance for which a cross-edged pseudotree outperforms all possible traditional pseudotrees (based on edge-traversal heuristics). In Figure 3(a) we have a DCOP instance with six nodes. 
6. COMPARISON OF COMPLEXITY IN DPOP AND DCPOP

We have already shown that given the same input, DCPOP performs the same as DPOP. We have also shown that we can accurately predict the performance of a given pseudotree in linear space-time complexity. If we use a constant number of heuristics to generate the set of pseudotrees, we can therefore choose the best pseudotree in linear space-time complexity. We will now show that there exists a DCOP instance for which a cross-edged pseudotree outperforms all possible traditional pseudotrees (based on edge-traversal heuristics).

In Figure 3(a) we have a DCOP instance with six nodes. This is a bipartite graph with each partition fully connected to the other partition.

Figure 3: (a) The DCOP instance (b) A traditional pseudotree arrangement for the DCOP instance (c) A cross-edged pseudotree arrangement for the DCOP instance

In Figure 3(b) we see a traditional pseudotree arrangement for this DCOP instance. It is easy to see that any edge-traversal based heuristic cannot expand two nodes from the same partition in succession. We also see that no node can have more than one child, because any such arrangement would be an invalid pseudotree. Thus any traditional pseudotree arrangement for this DCOP instance must take the form of Figure 3(b). We can see that the back-edges F-B and F-A overlap node C. Node C also has a parent E and a back-edge with D. Using the original DPOP algorithm (or DCPOP, since they are identical in this case), we find that the computation at node C involves five domains: A, B, C, D, and E. In contrast, the cross-edged pseudotree arrangement in Figure 3(c) requires a maximum of only four domains in any computation during DCPOP. Since node A is the merge point for branches from both B and C, we can see that each of the nodes D, E, and F has two overlapping branches. In addition, each of these nodes has node A as its parent. Using the DCPOP algorithm we find that the computation at node D (or E or F) involves four domains: A, B, C, and D (or E or F). Since no better traditional pseudotree arrangement can be created using an edge-traversal heuristic, we have shown that DCPOP can outperform DPOP even if we use the optimal pseudotree found through edge-traversal.

We acknowledge that pseudotree arrangements that allow parent-child relationships without an actual constraint can solve the problem in Figure 3(a) with a maximum computation size of four domains. However, current heuristics used with DPOP do not produce such pseudotrees, and such a heuristic would be difficult to distribute since each node would require information about nodes with which it has no constraint. Also, while we do not prove it here, cross-edged pseudotrees can produce smaller message sizes than such pseudotrees even when the computation size is similar. In practice, since finding the best pseudotree arrangement is NP-Hard, we find that heuristics that produce cross-edged pseudotrees often produce significantly smaller computation and message sizes.

7. EXPERIMENTAL RESULTS

Existing performance metrics for DCOP algorithms include the total number of messages, synchronous clock cycles, and message size. We have already shown that the total number of messages is linear with respect to the number of constraints in the DCOP instance. We also introduced the maximum sequential path cost (PC) as a measurement of the maximum amount of parallelism achievable by the algorithm. The maximum sequential path cost is equal to the sum of the computations performed on the longest path from the root to any leaf node. We also include as metrics the maximum computation size in number of dimensions (CD) and the maximum message size in number of dimensions (MD). To analyze the relative complexity of a given DCOP instance, we find the minimum induced width (IW) of any traditional pseudotree produced by a heuristic for the original DPOP.
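The PC metric can be computed with a simple recursion over a pseudotree once each node's computation cost is known. The sketch below is illustrative only: it measures a single phase over a tree given per-node costs that we invent for the example, whereas the paper sums the longest paths of both the UTIL and VALUE message DAGs.

    def max_sequential_path_cost(children, cost, root):
        # PC for one phase: the most expensive root-to-leaf chain of local computations.
        kids = children.get(root, [])
        if not kids:
            return cost[root]
        return cost[root] + max(max_sequential_path_cost(children, cost, c) for c in kids)

    # Invented example: each cost is the number of hypercube cells touched at that node
    # (e.g. a join of two domains of size 4 costs 16).
    children = {"A": ["B", "C"], "B": ["D"]}
    cost = {"A": 16, "B": 64, "C": 4, "D": 16}
    print(max_sequential_path_cost(children, cost, "A"))   # 16 + 64 + 16 = 96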
7.1 Generic DCOP instances

For our initial tests we randomly generated two sets of problems with 3000 cases in each. Each problem was generated by assigning a random number (picked from a range) of constraints to each variable. The generator then created binary constraints until each variable reached its maximum number of constraints. The first set uses 20 variables, and the best DPOP IW ranges from 1 to 16 with an average of 8.5. The second set uses 100 variables, and the best DPOP IW ranges from 2 to 68 with an average of 39.3. Since most of the problems in the second set were too complex to actually compute the solution, we took measurements of the metrics using the techniques described earlier in Section 5 without actually solving the problem. Results are shown for the first set in Table 1 and for the second set in Table 2.

For the two problem sets we split the cases into low density and high density categories. Low density cases consist of those problems that have a best DPOP IW less than or equal to half of the total number of nodes (e.g. IW ≤ 10 for the 20 node problems and IW ≤ 50 for the 100 node problems). High density problems consist of the remainder of the problem sets. In both Table 1 and Table 2 we list performance metrics for the original DPOP algorithm, the DCPOP algorithm using only cross-edged pseudotrees (DCPOP-CE), and the DCPOP algorithm using traditional and cross-edged pseudotrees (DCPOP-All). The pseudotrees used for DPOP were generated using 5 heuristics: DFS, DFS MCN, DFS CLIQUE MCN, DFS MCN DSTB, and DFS MCN BEC. These are all versions of the guided DFS traversal discussed in Section 5. The cross-edged pseudotrees used for DCPOP-CE were generated using 5 heuristics: MCN, LCN, MCN A-B, LCN A-B, and LCSG A-B. These are all versions of the best-first traversal discussed in Section 5. For both DPOP and DCPOP-CE we chose the best pseudotree produced by their respective 5 heuristics for each problem in the set. For DCPOP-All we chose the best pseudotree produced by all 10 heuristics for each problem in the set.

For the CD and MD metrics the value shown is the average number of dimensions. For the PC metric the value shown is the natural logarithm of the maximum sequential path cost (since the actual value grows exponentially with the complexity of the problem). The final row in both tables is a measurement of the improvement of DCPOP-All over DPOP. For the CD and MD metrics this value is a reduction in the number of dimensions. For the PC metric this value is a percentage reduction in the maximum sequential path cost (% = (DPOP − DCPOP) / DCPOP × 100). Notice that DCPOP-All outperforms DPOP on all metrics. This logically follows from our earlier assertion that given the same input, DCPOP performs exactly the same as DPOP. Thus, given the choice between the pseudotrees produced by all 10 heuristics, DCPOP-All will always outperform DPOP. Another trend we notice is that the improvement is greater for high density problems than for low density problems. We show this trend in greater detail in Figures 4, 5, and 6. Notice how the improvement increases as the complexity of the problem increases.

               Low Density              High Density
Algorithm      CD      MD      PC       CD      MD      PC
DPOP           7.81    6.81    3.78     13.34   12.34   5.34
DCPOP-CE       7.94    6.73    3.74     12.83   11.43   5.07
DCPOP-All      7.62    6.49    3.66     12.72   11.36   5.05
Improvement    0.18    0.32    13%      0.62    0.98    36%
Table 1: 20 node problems

               Low Density              High Density
Algorithm      CD      MD      PC       CD      MD      PC
DPOP           33.35   32.35   14.55    58.51   57.50   19.90
DCPOP-CE       33.49   29.17   15.22    57.11   50.03   20.01
DCPOP-All      32.35   29.57   14.10    56.33   51.17   18.84
Improvement    1.00    2.78    104%     2.18    6.33    256%
Table 2: 100 node problems

Figure 4: Computation Dimension Size
Figure 5: Message Dimension Size
Figure 6: Path Cost
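For reference, the following is a rough sketch of the kind of random problem generator described at the start of this subsection. The constraint-count range, the stopping rule when no legal pair of variables remains, and all names are assumptions on our part; utility values are omitted entirely.

    import random

    def generate_problem(num_vars, min_c, max_c, seed=0):
        rng = random.Random(seed)
        # Target number of constraints for each variable, picked from a range.
        target = {v: rng.randint(min_c, max_c) for v in range(num_vars)}
        degree = {v: 0 for v in range(num_vars)}
        edges = set()
        while True:
            open_vars = [v for v in range(num_vars) if degree[v] < target[v]]
            pairs = [(a, b) for i, a in enumerate(open_vars) for b in open_vars[i + 1:]
                     if (a, b) not in edges]
            if not pairs:
                break                   # every variable is at (or stuck below) its target
            a, b = rng.choice(pairs)
            edges.add((a, b))           # one new binary constraint
            degree[a] += 1
            degree[b] += 1
        return edges

    print(len(generate_problem(20, 2, 8)))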
7.2 Meeting Scheduling Problem

In addition to our initial generic DCOP tests, we ran a series of tests on the Meeting Scheduling Problem (MSP) as described in [6]. The problem setup includes a number of people that are grouped into departments. Each person must attend a specified number of meetings. Meetings can be held within departments or among departments, and can be assigned to one of eight time slots. The MSP maps to a DCOP instance where each variable represents the time slot at which a specific person will attend a specific meeting. All variables that belong to the same person have mutual exclusion constraints so that the person cannot attend more than one meeting during the same time slot. All variables that belong to the same meeting have equality constraints so that all of the participants choose the same time slot. Unary constraints are placed on each variable to account for a person's valuation of each meeting and time slot.

For our tests we generated 100 sample problems for each combination of agents and meetings. Results are shown in Table 3. The values in the first five columns represent (in left to right order) the total number of agents, the total number of meetings, the total number of variables, the average total number of constraints, and the average minimum IW produced by a traditional pseudotree. The last three columns show the same metrics we used for the generic DCOP instances, except that this time we only show the improvements of DCPOP-All over DPOP. Performance is better on average for all MSP instances, but again we see larger improvements for more complex problem instances.

                                       DCPOP Improvement
Ag     Mtg    Vars    Const    IW      CD      MD      PC
10     4      12      13.5     2.25    -0.01   -0.01   5.6%
30     14     44      57.6     3.63    0.09    0.09    10.9%
50     24     76      101.3    4.17    0.08    0.09    10.7%
100    49     156     212.9    5.04    0.16    0.20    30.0%
150    74     236     321.8    5.32    0.21    0.23    35.8%
200    99     316     434.2    5.66    0.18    0.22    29.5%
Table 3: Meeting Scheduling Problems
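To make the mapping concrete, here is a hedged Python sketch of the MSP-to-DCOP encoding described above. The data structures, the -1000 penalty standing in for a hard constraint, and the random unary valuations are illustrative choices of ours, not details taken from the paper.

    import itertools, random

    SLOTS = range(8)

    def msp_to_dcop(attendance, seed=0):
        """attendance: list of (person, meeting) pairs; each pair becomes one variable."""
        rng = random.Random(seed)
        variables = list(attendance)
        constraints = []
        for v1, v2 in itertools.combinations(variables, 2):
            if v1[0] == v2[0]:
                # Mutual exclusion: one person cannot attend two meetings in the same slot.
                constraints.append((v1, v2, lambda s1, s2: -1000 if s1 == s2 else 0))
            elif v1[1] == v2[1]:
                # Equality: every participant of a meeting must pick the same slot.
                constraints.append((v1, v2, lambda s1, s2: 0 if s1 == s2 else -1000))
        # Unary valuation of each meeting/time-slot combination for the person involved.
        unary = {v: {s: rng.randint(0, 10) for s in SLOTS} for v in variables}
        return variables, constraints, unary

    vars_, cons, unary = msp_to_dcop([("alice", "m1"), ("bob", "m1"), ("alice", "m2")])
    print(len(vars_), len(cons))    # 3 variables, 2 binary constraints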
8. CONCLUSIONS AND FUTURE WORK

We presented a complete, distributed algorithm that solves general DCOP instances using cross-edged pseudotree arrangements. Our algorithm extends the DPOP algorithm by adding additional utility propagation messages and introducing the concept of branch merging during the utility propagation phase. Our algorithm also allows value assignments to occur at higher level merge points for lower level nodes. We have shown that DCPOP fully extends DPOP by performing the same operations given the same input. We have also shown through examples and experimental data that DCPOP can achieve greater performance for some problem instances by extending the allowable input set to include cross-edged pseudotrees. We placed particular emphasis on the role that edge-traversal heuristics play in the generation of pseudotrees. We have shown that the performance penalty for running multiple pseudotree-generation heuristics is minimal, and that we can choose the best generated pseudotree in linear space-time complexity. Given the importance of a good pseudotree for performance, future work will include new heuristics to find better pseudotrees. Future work will also include adapting existing DPOP extensions [5, 7] that support different problem domains for use with DCPOP.

9. REFERENCES

[1] J. Liu and K. P. Sycara. Exploiting problem structure for distributed constraint optimization. In V. Lesser, editor, Proceedings of the First International Conference on Multi-Agent Systems, pages 246-254, San Francisco, CA, 1995. MIT Press.
[2] P. J. Modi, H. Jung, M. Tambe, W.-M. Shen, and S. Kulkarni. A dynamic distributed constraint satisfaction approach to resource allocation. Lecture Notes in Computer Science, 2239:685-700, 2001.
[3] P. J. Modi, W. Shen, M. Tambe, and M. Yokoo. An asynchronous complete method for distributed constraint optimization. In AAMAS 03, 2003.
[4] A. Petcu. FRODO: A framework for open/distributed constraint optimization. Technical Report 2006/001, Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland, 2006. http://liawww.epfl.ch/frodo/.
[5] A. Petcu and B. Faltings. A-DPOP: Approximations in distributed optimization. In poster in CP 2005, pages 802-806, Sitges, Spain, October 2005.
[6] A. Petcu and B. Faltings. DPOP: A scalable method for multiagent constraint optimization. In IJCAI 05, pages 266-271, Edinburgh, Scotland, August 2005.
[7] A. Petcu, B. Faltings, and D. Parkes. M-DPOP: Faithful distributed implementation of efficient social choice problems. In AAMAS 06, pages 1397-1404, Hakodate, Japan, May 2006.
[8] G. Ushakov. Solving meeting scheduling problems using distributed pseudotree-optimization procedure. Master's thesis, École Polytechnique Fédérale de Lausanne, 2005.
[9] M. Yokoo, E. H. Durfee, T. Ishida, and K. Kuwabara. Distributed constraint satisfaction for formalizing distributed problem solving. In International Conference on Distributed Computing Systems, pages 614-621, 1992.
[10] M. Yokoo, E. H. Durfee, T. Ishida, and K. Kuwabara. The distributed constraint satisfaction problem: Formalization and algorithms. Knowledge and Data Engineering, 10(5):673-685, 1998.
A Complete Distributed Constraint Optimization Method For Non-Traditional Pseudotree Arrangements * ABSTRACT Distributed Constraint Optimization (DCOP) is a general framework that can model complex problems in multi-agent systems. Several current algorithms that solve general DCOP instances, including ADOPT and DPOP, arrange agents into a traditional pseudotree structure. We introduce an extension to the DPOP algorithm that handles an extended set of pseudotree arrangements. Our algorithm correctly solves DCOP instances for pseudotrees that include edges between nodes in separate branches. The algorithm also solves instances with traditional pseudotree arrangements using the same procedure as DPOP. We compare our algorithm with DPOP using several metrics including the induced width of the pseudotrees, the maximum dimensionality of messages and computation, and the maximum sequential path cost through the algorithm. We prove that for some problem instances it is not possible to generate a traditional pseudotree using edge-traversal heuristics that will outperform a cross-edged pseudotree. We use multiple heuristics to generate pseudotrees and choose the best pseudotree in linear space-time complexity. For some problem instances we observe significant improvements in message and computation sizes compared to DPOP. 1. INTRODUCTION Many historical problems in the AI community can be transformed into Constraint Satisfaction Problems (CSP). With the advent of distributed AI, multi-agent systems became a popular way to model the complex interactions and coordination required to solve distributed problems. CSPs were originally extended to distributed agent environments in [9]. Early domains for distributed constraint satisfaction problems (DisCSP) included job shop scheduling [1] and resource allocation [2]. Many domains for agent systems, especially teamwork coordination, distributed scheduling, and sensor networks, involve overly constrained problems that are difficult or impossible to satisfy for every constraint. Recent approaches to solving problems in these domains rely on optimization techniques that map constraints into multi-valued utility functions. Instead of finding an assignment that satisfies all constraints, these approaches find an assignment that produces a high level of global utility. This extension to the original DisCSP approach has become popular in multi-agent systems, and has been labeled the Distributed Constraint Optimization Problem (DCOP) [1]. Current algorithms that solve complete DCOPs use two main approaches: search and dynamic programming. Search based algorithms that originated from DisCSP typically use some form of backtracking [10] or bounds propagation, as in ADOPT [3]. Dynamic programming based algorithms include DPOP and its extensions [5, 6, 7]. To date, both categories of algorithms arrange agents into a traditional pseudotree to solve the problem. It has been shown in [6] that any constraint graph can be mapped into a traditional pseudotree. However, it was also shown that finding the optimal pseudotree was NP-Hard. We began to investigate the performance of traditional pseudotrees generated by current edge-traversal heuristics. We found that these heuristics often produced little parallelism as the pseudotrees tended to have high depth and low branching factors. We suspected that there could be other ways to arrange the pseudotrees that would provide increased parallelism and smaller message sizes. 
After exploring these other arrangements we found that cross-edged pseudotrees provide shorter depths and higher branching factors than the traditional pseudotrees. Our hypothesis was that these crossedged pseudotrees would outperform traditional pseudotrees for some problem types. In this paper we introduce an extension to the DPOP algorithm that handles an extended set of pseudotree arrangements which include cross-edged pseudotrees. We begin with a definition of DCOP, traditional pseudotrees, and cross-edged pseudotrees. We then provide a summary of the original DPOP algorithm and introduce our DCPOP algorithm. We discuss the complexity of our algorithm as well as the impact of pseudotree generation heuristics. We then show that our Distributed Cross-edged Pseudotree Optimization Procedure (DCPOP) performs significantly better in practice than the original DPOP algorithm for some problem instances. We conclude with a selection of ideas for future work and extensions for DCPOP. 2. PROBLEM DEFINITION DCOP has been formalized in slightly different ways in recent literature, so we will adopt the definition as presented in [6]. A Distributed Constraint Optimization Problem with n nodes and m constraints consists of the tuple <X, D, U> where: • X = {x1,. . , xn} is a set of variables, each one assigned to a unique agent • D = {d1,. . , dn} is a set of finite domains for each variable • U = {u1,. . , um} is a set of utility functions such that each function involves a subset of variables in X and defines a utility for each combination of values among these variables An optimal solution to a DCOP instance consists of an assignment of values in D to X such that the sum of utilities in U is maximal. Problem domains that require minimum cost instead of maximum utility can map costs into negative utilities. The utility functions represent soft constraints but can also represent hard constraints by using arbitrarily large negative values. For this paper we only consider binary utility functions involving two variables. Higher order utility functions can be modeled with minor changes to the algorithm, but they also substantially increase the complexity. 2.1 Traditional Pseudotrees Pseudotrees are a common structure used in search procedures to allow parallel processing of independent branches. As defined in [6], a pseudotree is an arrangement of a graph G into a rooted tree T such that vertices in G that share an edge are in the same branch in T. A back-edge is an edge between a node X and any node which lies on the path from X to the root (excluding X's parent). Figure 1 shows a pseudotree with four nodes, three edges (A-B, B-C, BD), and one back-edge (A-C). Also defined in [6] are four types of relationships between nodes exist in a pseudotree: • P (X) - the parent of a node X: the single node higher in the pseudotree that is connected to X directly through a tree edge • C (X) - the children of a node X: the set of nodes lower in the pseudotree that are connected to X directly through tree edges • PP (X) - the pseudo-parents of a node X: the set of nodes higher in the pseudotree that are connected to X directly through back-edges (In Figure 1, A = PP (C)) • PC (X) - the pseudo-children of a node X: the set of nodes lower in the pseudotree that are connected to X directly through back-edges (In Figure 1, C = PC (A)) Figure 1: A traditional pseudotree. Solid line edges represent parent-child relationships and the dashed line represents a pseudo-parent-pseudo-child relationship. 
Figure 2: A cross-edged pseudotree. Solid line edges represent parent-child relationships, the dashed line represents a pseudoparent-pseudo-child relationship, and the dotted line represents a branch-parent-branch-child relationship. The bolded node, B, is the merge point for node E. 2.2 Cross-edged Pseudotrees We define a cross-edge as an edge from node X to a node Y that is above X but not in the path from X to the root. A cross-edged pseudotree is a traditional pseudotree with the addition of cross-edges. Figure 2 shows a cross-edged pseudotree with a cross-edge (D-E). In a cross-edged pseudotree we designate certain edges as primary. The set of primary edges defines a spanning tree of the nodes. The parent, child, pseudo-parent, and pseudo-child relationships from the traditional pseudotree are now defined in the context of this primary edge spanning tree. This definition also yields two additional types of relationships that may exist between nodes: • BP (X) - the branch-parents of a node X: the set of nodes higher in the pseudotree that are connected to X but are not in the primary path from X to the root (In Figure 2, D = BP (E)) • BC (X) - the branch-children of a node X: the set of nodes lower in the pseudotree that are connected to X but are not in any primary path from X to any leaf node (In Figure 2, E = BC (D)) 2.3 Pseudotree Generation 742 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) Current algorithms usually have a pre-execution phase to generate a traditional pseudotree from a general DCOP instance. Our DCPOP algorithm generates a cross-edged pseudotree in the same fashion. First, the DCOP instance <X, D, U> translates directly into a graph with X as the set of vertices and an edge for each pair of variables represented in U. Next, various heuristics are used to arrange this graph into a pseudotree. One common heuristic is to perform a guided depth-first search (DFS) as the resulting traversal is a pseudotree, and a DFS can easily be performed in a distributed fashion. We define an edge-traversal based method as any method that produces a pseudotree in which all parent/child pairs share an edge in the original graph. This includes DFS, breadth-first search, and best-first search based traversals. Our heuristics that generate cross-edged pseudotrees use a distributed best-first search traversal. 3. DPOP ALGORITHM The original DPOP algorithm operates in three main phases. The first phase generates a traditional pseudotree from the DCOP instance using a distributed algorithm. The second phase joins utility hypercubes from children and the local node and propagates them towards the root. The third phase chooses an assignment for each domain in a top down fashion beginning with the agent at the root node. The complexity of DPOP depends on the size of the largest computation and utility message during phase two. It has been shown that this size directly corresponds to the induced width of the pseudotree generated in phase one [6]. DPOP uses polynomial time heuristics to generate the pseudotree since finding the minimum induced width pseudotree is NP-hard. Several distributed edgetraversal heuristics have been developed to find low width pseudotrees [8]. At the end of the first phase, each agent knows its parent, children, pseudo-parents, and pseudo-children. 3.1 Utility Propagation Agents located at leaf nodes in the pseudotree begin the process by calculating a local utility hypercube. 
This hypercube at node X contains summed utilities for each combination of values in the domains for P (X) and PP (X). This hypercube has dimensional size equal to the number of pseudo-parents plus one. A message containing this hypercube is sent to P (X). Agents located at non-leaf nodes wait for all messages from children to arrive. Once the agent at node Y has all utility messages, it calculates its local utility hypercube which includes domains for P (Y), PP (Y), and Y. The local utility hypercube is then joined with all of the hypercubes from the child messages. At this point all utilities involving node Y are known, and the domain for Y may be safely eliminated from the joined hypercube. This elimination process chooses the best utility over the domain of Y for each combination of the remaining domains. A message containing this hypercube is now sent to P (Y). The dimensional size of this hypercube depends on the number of overlapping domains in received messages and the local utility hypercube. This dynamic programming based propagation phase continues until the agent at the root node of the pseudotree has received all messages from its children. 3.2 Value Propagation Value propagation begins when the agent at the root node Z has received all messages from its children. Since Z has no parents or pseudo-parents, it simply combines the utility hypercubes received from its children. The combined hypercube contains only values for the domain for Z. At this point the agent at node Z simply chooses the assignment for its domain that has the best utility. A value propagation message with this assignment is sent to each node in C (Z). Each other node then receives a value propagation message from its parent and chooses the assignment for its domain that has the best utility given the assignments received in the message. The node adds its domain assignment to the assignments it received and passes the set of assignments to its children. The algorithm is complete when all nodes have chosen an assignment for their domain. 4. DCPOP ALGORITHM Our extension to the original DPOP algorithm, shown in Algorithm 1, shares the same three phases. The first phase generates the cross-edged pseudotree for the DCOP instance. The second phase merges branches and propagates the utility hypercubes. The third phase chooses assignments for domains at branch merge points and in a top down fashion, beginning with the agent at the root node. For the first phase we generate a pseudotree using several distributed heuristics and select the one with lowest overall complexity. The complexity of the computation and utility message size in DCPOP does not directly correspond to the induced width of the cross-edged pseudotree. Instead, we use a polynomial time method for calculating the maximum computation and utility message size for a given cross-edged pseudotree. A description of this method and the pseudotree selection process appears in Section 5. At the end of the first phase, each agent knows its parent, children, pseudo-parents, pseudo-children, branch-parents, and branch-children. 4.1 Merging Branches and Utility Propagation In the original DPOP algorithm a node X only had utility functions involving its parent and its pseudo-parents. In DCPOP, a node X is allowed to have a utility function involving a branch-parent. The concept of a branch can be seen in Figure 2 with node E representing our node X. The two distinct paths from node E to node B are called branches of E. 
The single node where all branches of E meet is node B, which is called the merge point of E. Agents with nodes that have branch-parents begin by sending a utility propagation message to each branch-parent. This message includes a two dimensional utility hypercube with domains for the node X and the branch-parent BP (X). It also includes a branch information structure which contains the origination node of the branch, X, the total number of branches originating from X, and the number of branches originating from X that are merged into a single representation by this branch information structure (this number starts at 1). Intuitively when the number of merged branches equals the total number of originating branches, the algorithm has reached the merge point for X. In Figure 2, node E sends a utility propagation message to its branch-parent, node D. This message has dimensions for the domains of E and D, and includes branch information with an origin of E, 2 total branches, and 1 merged branch. As in the original DPOP utility propagation phase, an agent at leaf node X sends a utility propagation message to its parent. In DCPOP this message contains dimensions for the domains of P (X) and PP (X). If node X also has branch-parents, then the utility propagation message also contains a dimension for the domain of X, and will include a branch information structure. In Figure 2, node E sends a utility propagation message to its parent, node C. This message has dimensions for the domains of E and C, and includes branch information with an origin of E, 2 total branches, and 1 merged branch. When a node Y receives utility propagation messages from all of The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 743 its children and branch-children, it merges any branches with the same origination node X. The merged branch information structure accumulates the number of merged branches for X. If the cumulative total number of merged branches equals the total number of branches, then Y is the merge point for X. This means that the utility hypercubes present at Y contain all information about the valuations for utility functions involving node X. In addition to the typical elimination of the domain of Y from the utility hypercubes, we can now safely eliminate the domain of X from the utility hypercubes. To illustrate this process, we will examine what happens in the second phase for node B in Figure 2. In the second phase Node B receives two utility propagation messages. The first comes from node C and includes dimensions for domains E, B, and A. It also has a branch information structure with origin of E, 2 total branches, and 1 merged branch. The second comes from node D and includes dimensions for domains E and B. It also has a branch information structure with origin of E, 2 total branches, and 1 merged branch. Node B then merges the branch information structures from both messages because they have the same origination, node E. Since the number of merged branches originating from E is now 2 and the total branches originating from E is 2, node B now eliminates the dimensions for domain E. Node B also eliminates the dimension for its own domain, leaving only information about domain A. Node B then sends a utility propagation message to node A, containing only one dimension for the domain of A. Although not possible in DPOP, this method of utility propagation and dimension elimination may produce hypercubes at node Y that do not share any domains. 
In DCPOP we do not join domain independent hypercubes, but instead may send multiple hypercubes in the utility propagation message sent to the parent of Y. This lazy approach to joins helps to reduce message sizes. 4.2 Value Propagation As in DPOP, value propagation begins when the agent at the root node Z has received all messages from its children. At this point the agent at node Z chooses the assignment for its domain that has the best utility. If Z is the merge point for the branches of some node X, Z will also choose the assignment for the domain of X. Thus any node that is a merge point will choose assignments for a domain other than its own. These assignments are then passed down the primary edge hierarchy. If node X in the hierarchy has branch-parents, then the value assignment message from P (X) will contain an assignment for the domain of X. Every node in the hierarchy adds any assignments it has chosen to the ones it received and passes the set of assignments to its children. The algorithm is complete when all nodes have chosen or received an assignment for their domain. 4.3 Proof of Correctness We will prove the correctness of DCPOP by first noting that DCPOP fully extends DPOP and then examining the two cases for value assignment in DCPOP. Given a traditional pseudotree as input, the DCPOP algorithm execution is identical to DPOP. Using a traditional pseudotree arrangement no nodes have branch-parents or branch-children since all edges are either back-edges or tree edges. Thus the DCPOP algorithm using a traditional pseudotree sends only utility propagation messages that contain domains belonging to the parent or pseudo-parents of a node. Since no node has any branch-parents, no branches exist, and thus no node serves as a merge point for any other node. Thus all value propagation assignments are chosen at the node of the assignment domain. For DCPOP execution with cross-edged pseudotrees, some nodes serve as merge points. We note that any node X that is not a merge point assigns its value exactly as in DPOP. The local utility hypercube at X contains domains for X, P (X), PP (X), and BC (X). As in DPOP the value assignment message received at X includes the values assigned to P (X) and PP (X). Also, since X is not a merge point, all assignments to BC (X) must have been calculated at merge points higher in the tree and are in the value assignment message from P (X). Thus after eliminating domains for which assignments are known, only the domain of X is left. The agent at node X can now correctly choose the assignment with maximum utility for its own domain. If node X is a merge point for some branch-child Y, we know that X must be a node along the path from Y to the root, and from P (Y) and all BP (Y) to the root. From the algorithm, we know that Y necessarily has all information from C (Y), PC (Y), and BC (Y) since it waits for their messages. Node X has information about all nodes below it in the tree, which would include Y, P (Y), BP (Y), and those PP (Y) that are below X in the tree. For any PP (Y) above X in the tree, X receives the assignment for the domain of PP (Y) in the value assignment message from P (X). Thus X has utility information about all of the utility functions of which Y is a part. By eliminating domains included in the value assignment message, node X is left with a local utility hypercube with domains for X and Y. The agent at node X can now correctly choose the assignments with maximum utility for the domains of X and Y. 
4.4 Complexity Analysis The first phase of DCPOP sends one message to each P (X), PP (X), and BP (X). The second phase sends one value assignment message to each C (X). Thus, DCPOP produces a linear number of messages with respect to the number of edges (utility functions) in the cross-edged pseudotree and the original DCOP instance. The actual complexity of DCPOP depends on two additional measurements: message size and computation size. Message size and computation size in DCPOP depend on the number of overlapping branches as well as the number of overlapping back-edges. It was shown in [6] that the number of overlapping back-edges is equal to the induced width of the pseudotree. In a poorly constructed cross-edged pseudotree, the number of overlapping branches at node X can be as large as the total number of descendants of X. Thus, the total message size in DCPOP in a poorly constructed instance can be space-exponential in the total number of nodes in the graph. However, in practice a well constructed cross-edged pseudotree can achieve much better results. Later we address the issue of choosing well constructed crossedged pseudotrees from a set. We introduce an additional measurement of the maximum sequential path cost through the algorithm. This measurement directly relates to the maximum amount of parallelism achievable by the algorithm. To take this measurement we first store the total computation size for each node during phase two and three. This computation size represents the number of individual accesses to a value in a hypercube at each node. For example, a join between two domains of size 4 costs 4 ∗ 4 = 16. Two directed acyclic graphs (DAG) can then be drawn; one with the utility propagation messages as edges and the phase two costs at nodes, and the other with value assignment messages and the phase three costs at nodes. The maximum sequential path cost is equal to the sum of the longest path on each DAG from the root to any leaf node. 5. HEURISTICS In our assessment of complexity in DCPOP we focused on the worst case possibly produced by the algorithm. We acknowledge 744 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 2: elect leader from all Xj E X 3: elected leader initiates pseudotree creation 4: afterwards, Xi knows P (Xi), PP (Xi), BP (Xi), C (Xi), BC (Xi) and PC (Xi) Phase 2: UTIL message propagation 5: if IBP (Xi) I> 0 then 6: BRANCHXi +--IBP (Xi) I + 1 7: for all Xk EBP (Xi) do 8: UTILXi (Xk) +--Compute utils (Xi, Xk) 9: Send message (Xk, UTILXi (Xk), BRANCHXi) 10: if IC (Xi) I = 0 (i.e. 
Xi is a leaf node) then 11: UTILXi (P (Xi)) +--Compute utils (P (Xi), PP (Xi)) for all PP (Xi) 12: Send message (P (Xi), UTILXi (P (Xi)), BRANCHXi) 13: Send message (PP (Xi), empty UTIL, empty BRANCH) to all PP (Xi) 14: activate UTIL Message handler () Phase 3: VALUE message propagation 15: activate VALUE Message handler () END ALGORITHM UTIL Message handler (Xk, UTILXk (Xi), BRANCHXk) 16: store UTILXk (Xi), BRANCHXk (Xi) 17: if UTIL messages from all children and branch children arrived then 18: for all Bj EBRANCH (Xi) do 19: if Bj is merged then 20: join all hypercubes where Bj EUTIL (Xi) 21: eliminate Bj from the joined hypercube 22: if P (Xi) == null (that means Xi is the root) then 23: v * i +--Choose optimal (null) 24: Send VALUE (Xi, v * i) to all C (Xi) 25: else 26: UTILXi (P (Xi)) +--Compute utils (P (Xi), PP (Xi)) 27: Send message (P (Xi), UTILXi (P (Xi)), BRANCHXi (P (Xi))) VALUE Message handler (VALUEXi, P (Xi)) 28: add all Xk +--v * k EVALUEXi, P (Xi) to agent view 29: Xi +--v * i = Choose optimal (agent view) 30: Send VALUEXl, Xi to all Xl EC (Xi) that in real world problems the generation of the pseudotree has a significant impact on the actual performance. The problem of finding the best pseudotree for a given DCOP instance is NP-Hard. Thus a heuristic is used for generation, and the performance of the algorithm depends on the pseudotree found by the heuristic. Some previous research focused on finding heuristics to generate good pseudotrees [8]. While we have developed some heuristics that generate good cross-edged pseudotrees for use with DCPOP, our focus has been to use multiple heuristics and then select the best pseudotree from the generated pseudotrees. We consider only heuristics that run in polynomial time with respect to the number of nodes in the original DCOP instance. The actual DCPOP algorithm has worst case exponential complexity, but we can calculate the maximum message size, computation size, and sequential path cost for a given cross-edged pseudotree in linear space-time complexity. To do this, we simply run the algorithm without attempting to calculate any of the local utility hypercubes or optimal value assignments. Instead, messages include dimensional and branch information but no utility hypercubes. After each heuristic completes its generation of a pseudotree, we execute the measurement procedure and propagate the measurement information up to the chosen root in that pseudotree. The root then broadcasts the total complexity for that heuristic to all nodes. After all heuristics have had a chance to complete, every node knows which heuristic produced the best pseudotree. Each node then proceeds to begin the DCPOP algorithm using its knowledge of the pseudotree generated by the best heuristic. The heuristics used to generate traditional pseudotrees perform a distributed DFS traversal. The general distributed algorithm uses a token passing mechanism and a linear number of messages. Improved DFS based heuristics use a special procedure to choose the root node, and also provide an ordering function over the neighbors of a node to determine the order of path recursion. The DFS based heuristics used in our experiments come from the work done in [4, 8]. 5.1 The best-first cross-edged pseudotree heuristic The heuristics used to generate cross-edged pseudotrees perform a best-first traversal. A general distributed best-first algorithm for node expansion is presented in Algorithm 2. 
An evaluation function at each node provides the values that are used to determine the next best node to expand. Note that in this algorithm each node only exchanges its best value with its neighbors. In our experiments we used several evaluation functions that took as arguments an ordered list of ancestors and a node, which contains a list of neighbors (with each neighbor's placement depth in the tree if it was placed). From these we can calculate branchparents, branch-children, and unknown relationships for a potential node placement. The best overall function calculated the value as ancestors--(branchparents + branchchildren) with the number of unknown relationships being a tiebreak. After completion each node has knowledge of its parent and ancestors, so it can easily determine which connected nodes are pseudo-parents, branchparents, pseudo-children, and branch-children. The complexity of the best-first traversal depends on the complexity of the evaluation function. Assuming a complexity of O (V) for the evaluation function, which is the case for our best overall function, the best-first traversal is O (V • E) which is at worst O (n3). For each v E V we perform a place operation, and find the next node to place using the getBestNeighbor operation. The place operation is at most O (V) because of the sent messages. Finding the next node uses recursion and traverses only already placed The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 745 return best, score nodes, so it has O (V) recursions. Each recursion performs a recursive getBestNeighbor operation that traverses all placed nodes and their neighbors. This operation is O (V • E), but results can be cached using only O (V) space at each node. Thus we have O (V • (V + V + V • E)) = O (V2 • E). If we are smart about evaluating local changes when each node receives placement messages from its neighbors and cache the results the getBestNeighbor operation is only O (E). This increases the complexity of the place operation, but for all placements the total complexity is only O (V • E). Thus we have an overall complexity of O (V • E+V • (V + E)) = O (V • E). 6. COMPARISON OF COMPLEXITY IN DPOP AND DCPOP We have already shown that given the same input, DCPOP performs the same as DPOP. We also have shown that we can accurately predict performance of a given pseudotree in linear spacetime complexity. If we use a constant number of heuristics to generate the set of pseudotrees, we can choose the best pseudotree in linear space-time complexity. We will now show that there exists a DCOP instance for which a cross-edged pseudotree outperforms all possible traditional pseudotrees (based on edge-traversal heuristics). In Figure 3 (a) we have a DCOP instance with six nodes. This is a bipartite graph with each partition fully connected to the other Figure 3: (a) The DCOP instance (b) A traditional pseudotree arrangement for the DCOP instance (c) A cross-edged pseudotree arrangement for the DCOP instance partition. In Figure 3 (b) we see a traditional pseudotree arrangement for this DCOP instance. It is easy to see that any edgetraversal based heuristic cannot expand two nodes from the same partition in succession. We also see that no node can have more than one child because any such arrangement would be an invalid pseudotree. Thus any traditional pseudotree arrangement for this DCOP instance must take the form of Figure 3 (b). We can see that the back-edges F-B and F-A overlap node C. 
Node C also has a parent E, and a back-edge with D. Using the original DPOP algorithm (or DCPOP since they are identical in this case), we find that the computation at node C involves five domains: A, B, C, D, and E. In contrast, the cross-edged pseudotree arrangement in Figure 3 (c) requires only a maximum of four domains in any computation during DCPOP. Since node A is the merge point for branches from both B and C, we can see that each of the nodes D, E, and F have two overlapping branches. In addition each of these nodes has node A as its parent. Using the DCPOP algorithm we find that the computation at node D (or E or F) involves four domains: A, B, C, and D (or E or F). Since no better traditional pseudotree arrangement can be created using an edge-traversal heuristic, we have shown that DCPOP can outperform DPOP even if we use the optimal pseudotree found through edge-traversal. We acknowledge that pseudotree arrangements that allow parent-child relationships without an actual constraint can solve the problem in Figure 3 (a) with maximum computation size of four domains. However, current heuristics used with DPOP do not produce such pseudotrees, and such a heuristic would be difficult to distribute since each node would require information about nodes with which it has no constraint. Also, while we do not prove it here, cross-edged pseudotrees can produce smaller message sizes than such pseudotrees even if the computation size is similar. In practice, since finding the best pseudotree arrangement is NP-Hard, we find that heuristics that produce cross-edged pseudotrees often produce significantly smaller computation and message sizes. 7. EXPERIMENTAL RESULTS 746 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) Existing performance metrics for DCOP algorithms include the total number of messages, synchronous clock cycles, and message size. We have already shown that the total number of messages is linear with respect to the number of constraints in the DCOP instance. We also introduced the maximum sequential path cost (PC) as a measurement of the maximum amount of parallelism achievable by the algorithm. The maximum sequential path cost is equal to the sum of the computations performed on the longest path from the root to any leaf node. We also include as metrics the maximum computation size in number of dimensions (CD) and maximum message size in number of dimensions (MD). To analyze the relative complexity of a given DCOP instance, we find the minimum induced width (IW) of any traditional pseudotree produced by a heuristic for the original DPOP. 7.1 Generic DCOP instances For our initial tests we randomly generated two sets of problems with 3000 cases in each. Each problem was generated by assigning a random number (picked from a range) of constraints to each variable. The generator then created binary constraints until each variable reached its maximum number of constraints. The first set uses 20 variables, and the best DPOP IW ranges from 1 to 16 with an average of 8.5. The second set uses 100 variables, and the best DPOP IW ranged from 2 to 68 with an average of 39.3. Since most of the problems in the second set were too complex to actually compute the solution, we took measurements of the metrics using the techniques described earlier in Section 5 without actually solving the problem. Results are shown for the first set in Table 1 and for the second set in Table 2. 
For the two problem sets we split the cases into low density and high density categories. Low density cases consist of those problems that have a best DPOP IW less than or equal to half of the total number of nodes (e.g. IW <10 for the 20 node problems and IW <50 for the 100 node problems). High density problems consist of the remainder of the problem sets. In both Table 1 and Table 2 we have listed performance metrics for the original DPOP algorithm, the DCPOP algorithm using only cross-edged pseudotrees (DCPOP-CE), and the DCPOP algorithm using traditional and cross-edged pseudotrees (DCPOP-All). The pseudotrees used for DPOP were generated using 5 heuristics: DFS, DFS MCN, DFS CLIQUE MCN, DFS MCN DSTB, and DFS MCN BEC. These are all versions of the guided DFS traversal discussed in Section 5. The cross-edged pseudotrees used for DCPOP-CE were generated using 5 heuristics: MCN, LCN, MCN A-B, LCN A-B, and LCSG A-B. These are all versions of the best-first traversal discussed in Section 5. For both DPOP and DCPOP-CE we chose the best pseudotree produced by their respective 5 heuristics for each problem in the set. For DCPOP-All we chose the best pseudotree produced by all 10 heuristics for each problem in the set. For the CD and MD metrics the value shown is the average number of dimensions. For the PC metric the value shown is the natural logarithm of the maximum sequential path cost (since the actual value grows exponentially with the complexity of the problem). The final row in both tables is a measurement of improvement of DCPOP-All over DPOP. For the CD and MD metrics the value shown is a reduction in number of dimensions. For the PC metric the value shown is a percentage reduction in the maximum sequential path cost (% = DP OP − DCP OP DCP OP ∗ 100). Notice that DCPOPAll outperforms DPOP on all metrics. This logically follows from our earlier assertion that given the same input, DCPOP performs exactly the same as DPOP. Thus given the choice between the pseudotrees produced by all 10 heuristics, DCPOP-All will always out Table 1: 20 node problems Table 2: 100 node problems Figure 4: Computation Dimension Size Figure 5: Message Dimension Size Figure 6: Path Cost Table 3: Meeting Scheduling Problems perform DPOP. Another trend we notice is that the improvement is greater for high density problems than low density problems. We show this trend in greater detail in Figures 4, 5, and 6. Notice how the improvement increases as the complexity of the problem increases. 7.2 Meeting Scheduling Problem In addition to our initial generic DCOP tests, we ran a series of tests on the Meeting Scheduling Problem (MSP) as described in [6]. The problem setup includes a number of people that are grouped into departments. Each person must attend a specified number of meetings. Meetings can be held within departments or among departments, and can be assigned to one of eight time slots. The MSP maps to a DCOP instance where each variable represents the time slot that a specific person will attend a specific meeting. All variables that belong to the same person have mutual exclusion constraints placed so that the person cannot attend more than one meeting during the same time slot. All variables that belong to the same meeting have equality constraints so that all of the participants choose the same time slot. Unary constraints are placed on each variable to account for a person's valuation of each meeting and time slot. 
For our tests we generated 100 sample problems for each combination of agents and meetings. Results are shown in Table 3. The values in the first five columns represent (in left to right order), the total number of agents, the total number of meetings, the total number of variables, the average total number of constraints, and the average minimum IW produced by a traditional pseudotree. The last three columns show the same metrics we used for the generic DCOP instances, except this time we only show the improvements of DCPOP-All over DPOP. Performance is better on average for all MSP instances, but again we see larger improvements for more complex problem instances. 8. CONCLUSIONS AND FUTURE WORK We presented a complete, distributed algorithm that solves general DCOP instances using cross-edged pseudotree arrangements. Our algorithm extends the DPOP algorithm by adding additional utility propagation messages, and introducing the concept of branch merging during the utility propagation phase. Our algorithm also allows value assignments to occur at higher level merge points for lower level nodes. We have shown that DCPOP fully extends DPOP by performing the same operations given the same input. We have also shown through some examples and experimental data that DCPOP can achieve greater performance for some problem instances by extending the allowable input set to include cross-edged pseudotrees. We placed particular emphasis on the role that edge-traversal heuristics play in the generation of pseudotrees. We have shown that the performance penalty is minimal to generate multiple heuristics, and that we can choose the best generated pseudotree in linear space-time complexity. Given the importance of a good pseudotree for performance, future work will include new heuristics to find better pseudotrees. Future work will also include adapting existing DPOP extensions [5, 7] that support different problem domains for use with DCPOP.
A Complete Distributed Constraint Optimization Method For Non-Traditional Pseudotree Arrangements * ABSTRACT Distributed Constraint Optimization (DCOP) is a general framework that can model complex problems in multi-agent systems. Several current algorithms that solve general DCOP instances, including ADOPT and DPOP, arrange agents into a traditional pseudotree structure. We introduce an extension to the DPOP algorithm that handles an extended set of pseudotree arrangements. Our algorithm correctly solves DCOP instances for pseudotrees that include edges between nodes in separate branches. The algorithm also solves instances with traditional pseudotree arrangements using the same procedure as DPOP. We compare our algorithm with DPOP using several metrics including the induced width of the pseudotrees, the maximum dimensionality of messages and computation, and the maximum sequential path cost through the algorithm. We prove that for some problem instances it is not possible to generate a traditional pseudotree using edge-traversal heuristics that will outperform a cross-edged pseudotree. We use multiple heuristics to generate pseudotrees and choose the best pseudotree in linear space-time complexity. For some problem instances we observe significant improvements in message and computation sizes compared to DPOP. 1. INTRODUCTION Many historical problems in the AI community can be transformed into Constraint Satisfaction Problems (CSP). With the advent of distributed AI, multi-agent systems became a popular way to model the complex interactions and coordination required to solve distributed problems. CSPs were originally extended to distributed agent environments in [9]. Early domains for distributed constraint satisfaction problems (DisCSP) included job shop scheduling [1] and resource allocation [2]. Many domains for agent systems, especially teamwork coordination, distributed scheduling, and sensor networks, involve overly constrained problems that are difficult or impossible to satisfy for every constraint. Recent approaches to solving problems in these domains rely on optimization techniques that map constraints into multi-valued utility functions. Instead of finding an assignment that satisfies all constraints, these approaches find an assignment that produces a high level of global utility. This extension to the original DisCSP approach has become popular in multi-agent systems, and has been labeled the Distributed Constraint Optimization Problem (DCOP) [1]. Current algorithms that solve complete DCOPs use two main approaches: search and dynamic programming. Search based algorithms that originated from DisCSP typically use some form of backtracking [10] or bounds propagation, as in ADOPT [3]. Dynamic programming based algorithms include DPOP and its extensions [5, 6, 7]. To date, both categories of algorithms arrange agents into a traditional pseudotree to solve the problem. It has been shown in [6] that any constraint graph can be mapped into a traditional pseudotree. However, it was also shown that finding the optimal pseudotree was NP-Hard. We began to investigate the performance of traditional pseudotrees generated by current edge-traversal heuristics. We found that these heuristics often produced little parallelism as the pseudotrees tended to have high depth and low branching factors. We suspected that there could be other ways to arrange the pseudotrees that would provide increased parallelism and smaller message sizes. 
After exploring these other arrangements we found that cross-edged pseudotrees provide shorter depths and higher branching factors than the traditional pseudotrees. Our hypothesis was that these crossedged pseudotrees would outperform traditional pseudotrees for some problem types. In this paper we introduce an extension to the DPOP algorithm that handles an extended set of pseudotree arrangements which include cross-edged pseudotrees. We begin with a definition of DCOP, traditional pseudotrees, and cross-edged pseudotrees. We then provide a summary of the original DPOP algorithm and introduce our DCPOP algorithm. We discuss the complexity of our algorithm as well as the impact of pseudotree generation heuristics. We then show that our Distributed Cross-edged Pseudotree Optimization Procedure (DCPOP) performs significantly better in practice than the original DPOP algorithm for some problem instances. We conclude with a selection of ideas for future work and extensions for DCPOP. 2. PROBLEM DEFINITION 2.1 Traditional Pseudotrees 2.2 Cross-edged Pseudotrees 2.3 Pseudotree Generation 742 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 3. DPOP ALGORITHM 3.1 Utility Propagation 3.2 Value Propagation 4. DCPOP ALGORITHM 4.1 Merging Branches and Utility Propagation The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 743 4.2 Value Propagation 4.3 Proof of Correctness 4.4 Complexity Analysis 5. HEURISTICS 744 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 5.1 The best-first cross-edged pseudotree heuristic The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 745 6. COMPARISON OF COMPLEXITY IN DPOP AND DCPOP 7. EXPERIMENTAL RESULTS 746 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 7.1 Generic DCOP instances 7.2 Meeting Scheduling Problem 8. CONCLUSIONS AND FUTURE WORK We presented a complete, distributed algorithm that solves general DCOP instances using cross-edged pseudotree arrangements. Our algorithm extends the DPOP algorithm by adding additional utility propagation messages, and introducing the concept of branch merging during the utility propagation phase. Our algorithm also allows value assignments to occur at higher level merge points for lower level nodes. We have shown that DCPOP fully extends DPOP by performing the same operations given the same input. We have also shown through some examples and experimental data that DCPOP can achieve greater performance for some problem instances by extending the allowable input set to include cross-edged pseudotrees. We placed particular emphasis on the role that edge-traversal heuristics play in the generation of pseudotrees. We have shown that the performance penalty is minimal to generate multiple heuristics, and that we can choose the best generated pseudotree in linear space-time complexity. Given the importance of a good pseudotree for performance, future work will include new heuristics to find better pseudotrees. Future work will also include adapting existing DPOP extensions [5, 7] that support different problem domains for use with DCPOP.
I-56
Unifying Distributed Constraint Algorithms in a BDI Negotiation Framework
This paper presents a novel, unified distributed constraint satisfaction framework based on automated negotiation. The Distributed Constraint Satisfaction Problem (DCSP) is one that entails several agents to search for an agreement, which is a consistent combination of actions that satisfies their mutual constraints in a shared environment. By anchoring the DCSP search on automated negotiation, we show that several well-known DCSP algorithms are actually mechanisms that can reach agreements through a common Belief-Desire-Intention (BDI) protocol, but using different strategies. A major motivation for this BDI framework is that it not only provides a conceptually clearer understanding of existing DCSP algorithms from an agent model perspective, but also opens up the opportunities to extend and develop new strategies for DCSP. To this end, a new strategy called Unsolicited Mutual Advice (UMA) is proposed. Performance evaluation shows that the UMA strategy can outperform some existing mechanisms in terms of computational cycles.
[ "constraint", "algorithm", "bdi", "negoti", "distribut constraint satisfact problem", "dcsp", "share environ", "uma", "backtrack", "mediat", "resourc restrict", "privaci requir", "belief-desireintent model", "agent negoti" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "U", "U", "U", "U", "M", "R" ]
Unifying Distributed Constraint Algorithms in a BDI Negotiation Framework Bao Chau Le Dinh and Kiam Tian Seow School of Computer Engineering Nanyang Technological University Republic of Singapore {ledi0002,asktseow}@ntu. edu.sg ABSTRACT This paper presents a novel, unified distributed constraint satisfaction framework based on automated negotiation. The Distributed Constraint Satisfaction Problem (DCSP) is one that entails several agents to search for an agreement, which is a consistent combination of actions that satisfies their mutual constraints in a shared environment. By anchoring the DCSP search on automated negotiation, we show that several well-known DCSP algorithms are actually mechanisms that can reach agreements through a common Belief-Desire-Intention (BDI) protocol, but using different strategies. A major motivation for this BDI framework is that it not only provides a conceptually clearer understanding of existing DCSP algorithms from an agent model perspective, but also opens up the opportunities to extend and develop new strategies for DCSP. To this end, a new strategy called Unsolicited Mutual Advice (UMA) is proposed. Performance evaluation shows that the UMA strategy can outperform some existing mechanisms in terms of computational cycles. Categories and Subject Descriptors I.2.11 [Distributed Artificial Intelligence]: Intelligent Agents, Multiagent Systems General Terms Algorithms, Design, Experimentation 1. INTRODUCTION At the core of many emerging distributed applications is the distributed constraint satisfaction problem (DCSP) - one which involves finding a consistent combination of actions (abstracted as domain values) to satisfy the constraints among multiple agents in a shared environment. Important application examples include distributed resource allocation [1] and distributed scheduling [2]. Many important algorithms, such as distributed breakout (DBO) [3], asynchronous backtracking (ABT) [4], asynchronous partial overlay (APO) [5] and asynchronous weak-commitment (AWC) [4], have been developed to address the DCSP and provide the agent solution basis for its applications. Broadly speaking, these algorithms are based on two different approaches, either extending from classical backtracking algorithms [6] or introducing mediation among the agents. While there has been no lack of efforts in this promising research field, especially in dealing with outstanding issues such as resource restrictions (e.g., limits on time and communication) [7] and privacy requirements [8], there is unfortunately no conceptually clear treatment to prise open the model-theoretic workings of the various agent algorithms that have been developed. As a result, for instance, a deeper intellectual understanding on why one algorithm is better than the other, beyond computational issues, is not possible. In this paper, we present a novel, unified distributed constraint satisfaction framework based on automated negotiation [9]. Negotiation is viewed as a process of several agents searching for a solution called an agreement. The search can be realized via a negotiation mechanism (or algorithm) by which the agents follow a high level protocol prescribing the rules of interactions, using a set of strategies devised to select their own preferences at each negotiation step. 
Anchoring the DCSP search on automated negotiation, we show in this paper that several well-known DCSP algorithms [3] are actually mechanisms that share the same Belief-Desire-Intention (BDI) interaction protocol to reach agreements, but use different action or value selection strategies. The proposed framework provides not only a clearer understanding of existing DCSP algorithms from a unified BDI agent perspective, but also opens up the opportunities to extend and develop new strategies for DCSP. To this end, a new strategy called Unsolicited Mutual Advice (UMA) is proposed. Our performance evaluation shows that UMA can outperform ABT and AWC in terms of the average number of computational cycles for both the sparse and critical coloring problems [6]. The rest of this paper is organized as follows. In Section 2, we provide a formal overview of DCSP. Section 3 presents a BDI negotiation model by which a DCSP agent reasons. Section 4 presents the existing algorithms ABT, AWC and DBO as different strategies formalized on a common protocol. A new strategy called Unsolicited Mutual Advice is proposed in Section 5; our empirical results and discussion attempt to highlight the merits of the new strategy over existing ones. Section 6 concludes the paper and points to some future work. 2. DCSP: PROBLEM FORMALIZATION The DCSP [4] considers the following environment. • There are n agents with k variables x0, x1, · · · , xk−1, n ≤ k, which take values in the domains D0, D1, · · · , Dk−1, respectively. We define a partial function B over the product range {0, 1, . . . , (n−1)} × {0, 1, . . . , (k−1)} such that the fact that variable xj belongs to agent i is denoted by B(i, j)!. The exclamation mark '!' means 'is defined'. • There are m constraints c0, c1, · · · , cm−1 to be conjunctively satisfied. In a similar fashion as defined for B(i, j), we use E(l, j)!, (0 ≤ l < m, 0 ≤ j < k), to denote that xj is relevant to the constraint cl. The DCSP may be formally stated as follows. Problem Statement: ∀i, j (0 ≤ i < n, 0 ≤ j < k) where B(i, j)!, find the assignment xj = dj ∈ Dj such that ∀l (0 ≤ l < m) where E(l, j)!, cl is satisfied. A constraint may consist of different variables belonging to different agents. An agent cannot change or modify the assignment values of other agents' variables. Therefore, in cooperatively searching for a DCSP solution, the agents would need to communicate with one another, and adjust and re-adjust their own variable assignments in the process. 2.1 DCSP Agent Model In general, all DCSP agents must cooperatively interact, and essentially perform the assignment and reassignment of domain values to variables to resolve all constraint violations. If the agents succeed in their resolution, a solution is found. In order to engage in cooperative behavior, a DCSP agent needs five fundamental parameters, namely, (i) a variable [4] or a variable set [10], (ii) domains, (iii) priority, (iv) a neighbor list and (v) a constraint list. Each variable assumes a range of values called a domain. A domain value, which usually abstracts an action, is a possible option that an agent may take. Each agent has an assigned priority. These priority values help decide the order in which the agents revise or modify their variable assignments. An agent's priority may be fixed (static) or changing (dynamic) when searching for a solution. If an agent has more than one variable, each variable can be assigned a different priority, to help determine which variable assignment the agent should modify first.
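As an illustration of the formalization above, the following minimal Python sketch is ours and not from the paper (the class name DCSPInstance and its methods are assumptions): it stores the domains, the ownership map B, and the constraints with their relevance sets E, and checks whether a complete assignment satisfies every constraint. The example at the end uses the two-agent constraint x1 + x2 = 3 that reappears later in the discussion of mediation.

from typing import Callable, Dict, List, Tuple

class DCSPInstance:
    """Illustrative container for a DCSP: variables x_j with domains D_j,
    an ownership map B (variable -> agent), and constraints c_l over variable subsets."""

    def __init__(self):
        self.domains: Dict[str, List[int]] = {}      # D_j for each variable x_j
        self.owner: Dict[str, int] = {}              # B: variable -> owning agent i
        self.constraints: List[Tuple[List[str], Callable[..., bool]]] = []  # (scope E_l, predicate c_l)

    def add_variable(self, name: str, domain: List[int], agent: int) -> None:
        self.domains[name] = list(domain)
        self.owner[name] = agent

    def add_constraint(self, scope: List[str], predicate: Callable[..., bool]) -> None:
        # E(l, j) holds exactly for the variables listed in `scope`.
        self.constraints.append((scope, predicate))

    def satisfied(self, assignment: Dict[str, int]) -> bool:
        # A solution assigns every x_j a value d_j in D_j such that all c_l hold.
        return all(pred(*(assignment[v] for v in scope))
                   for scope, pred in self.constraints)

# Example: two agents, shared constraint x1 + x2 == 3 over the domain {1, 2}.
inst = DCSPInstance()
inst.add_variable("x1", [1, 2], agent=0)
inst.add_variable("x2", [1, 2], agent=1)
inst.add_constraint(["x1", "x2"], lambda a, b: a + b == 3)
print(inst.satisfied({"x1": 1, "x2": 2}))   # True
print(inst.satisfied({"x1": 1, "x2": 1}))   # False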
An agent which shares the same constraint with another agent is called the latter's neighbor. Each agent needs to refer to its list of neighbors during the search process. This list may also be kept unchanged or updated accordingly at runtime. Similarly, each agent maintains a constraint list. The agent needs to ensure that there is no violation of the constraints in this list. Constraints can be added to or removed from an agent's constraint list at runtime. As with an agent, a constraint can also be associated with a priority value. Constraints with a high priority are said to be more important than constraints with a lower priority. To distinguish it from the priority of an agent, the priority of a constraint is called its weight. 3. THE BDI NEGOTIATION MODEL The BDI model originates with the work of M. Bratman [11]. According to [12, Ch.1], the BDI architecture is based on a philosophical model of human practical reasoning, and draws out the process of reasoning by which an agent decides which actions to perform at consecutive moments when pursuing certain goals. Grounding the scope to the DCSP framework, the common goal of all agents is finding a combination of domain values to satisfy a set of predefined constraints. In automated negotiation [9], such a solution is called an agreement among the agents. Within this scope, we found that we were able to unearth the generic behavior of a DCSP agent and formulate it in a negotiation protocol, prescribed using the powerful concepts of BDI. Thus, our proposed negotiation model can be said to combine the BDI concepts with automated negotiation in a multiagent framework, allowing us to conceptually separate DCSP mechanisms into a common BDI interaction protocol and the adopted strategies. 3.1 The generic protocol Figure 1 shows the basic reasoning steps in an arbitrary round of negotiation that constitute the new protocol. [Figure 1: The BDI interaction protocol.] The solid line indicates the common component or transition which always exists regardless of the strategy used. The dotted line indicates the component or transition which may or may not appear depending on the adopted strategy. Two types of messages are exchanged through this protocol, namely, the info message and the negotiation message. An info message perceived is a message sent by another agent. The message will contain the current selected values and priorities of the variables of that sending agent. The main purpose of this message is to update the agent about the current environment. An info message is sent out at the end of one negotiation round (also called a negotiation cycle), and received at the beginning of the next round. A negotiation message is a message which may be sent within a round. This message is for mediation purposes. The agent may put different contents into this type of message as long as it is agreed among the group. The format of the negotiation message and when it is to be sent out are subject to the strategy. A negotiation message can be sent out at the end of one reasoning step and received at the beginning of the next step. Mediation is a step of the protocol that depends on whether the agent's interaction with others is synchronous or asynchronous. In a synchronous mechanism, mediation is required in every negotiation round. In an asynchronous one, mediation is needed only in a negotiation round when the agent receives a negotiation message. A more in-depth view of this mediation step is provided later in this section. The BDI protocol prescribes the skeletal structure for DCSP negotiation. We will show in Section 4 that several well-known DCSP mechanisms all inherit this generic model. The details of the six main reasoning steps for the protocol (see Figure 1) are described as follows for a DCSP agent. For a conceptually clearer description, we assume that there is only one variable per agent. • Percept. In this step, the agent receives info messages from its neighbors in the environment, and using its Percept function, returns an image P. This image contains the current values assigned to the variables of all agents in its neighbor list. The image P will drive the agent's actions in subsequent steps. The agent also updates its constraint list C using some criteria of the adopted strategy. • Belief. Using the image P and constraint list C, the agent will check if there is any violated constraint. If there is no violation, the agent will believe it is choosing a correct option and therefore will take no action. The agent will do nothing if it is in a local stable state - a snapshot of the variable assignments of the agent and all its neighbors by which they satisfy their shared constraints. When all agents are in their local stable states, the whole environment is said to be in a global stable state and an agreement is found. In case the agent finds its value in conflict with some of its neighbors', i.e., the combination of values assigned to the variables leads to a constraint violation, the agent will first try to reassign its own variable using a specific strategy. If it finds a suitable option which meets some criteria of the adopted strategy, the agent will believe it should change to the new option. However, it does not always happen that an agent can successfully find such an option. If no option can be found, the agent will believe it has no option, and therefore will request its neighbors to reconsider their variable assignments. To summarize, there are three types of beliefs that a DCSP agent can form: (i) it can change its variable assignment to improve the current situation, (ii) it cannot change its variable assignment and some constraint violations cannot be resolved and (iii) it need not change its variable assignment as all the constraints are satisfied. Once the beliefs are formed, the agent will determine its desires, which are the options that attempt to resolve the current constraint violations. • Desire. If the agent takes Belief (i), it will generate a list of its own suitable domain values as its desire set. If the agent takes Belief (ii), it cannot ascertain its desire set, but will generate a sublist of agents from its neighbor list, whom it will ask to reconsider their variable assignments. How this sublist is created depends on the strategy devised for the agent. In this situation, the agent will use a virtual desire set that it determines based on its adopted strategy. If the agent takes Belief (iii), it will have no desire to revise its domain value, and hence no intention. • Intention. The agent will select a value from its desire set as its intention. An intention is the best desired option that the agent assigns to its variable.
The criteria for selecting a desire as the agent's intention depend on the strategy used. Once the intention is formed, the agent may either proceed to the execution step, or undergo mediation. Again, the decision to do so is determined by some criteria of the adopted strategy. • Mediation. This is an important function of the agent, because if the agent executes its intention without performing intention mediation with its neighbors, the constraint violation between the agents may not be resolved. Take for example two agents with variables x1 and x2, associated with the same domain {1, 2}, whose shared constraint is (x1 + x2 = 3). If both variables are initialized with value 1, they will both concurrently switch between the values 2 and 1 in the absence of mediation between them. There are two types of mediation: local mediation and group mediation. In the former, the agents exchange their intentions. When an agent receives another's intention which conflicts with its own, the agent must mediate between the intentions, by either changing its own intention or informing the other agent to change its intention. In the latter, there is an agent which acts as a group mediator. This mediator will collect the intentions from the group - a union of the agent and its neighbors - and determine which intention is to be executed. The result of this mediation is passed back to the agents in the group. Following mediation, the agent may proceed to the next reasoning step to execute its intention or begin a new negotiation round. • Execution. This is the last step of a negotiation round. The agent will execute by updating its variable assignment if the intention obtained at this step is its own. Following execution, the agent will inform its neighbors about its new variable assignment and updated priority. To do so, the agent will send out an info message. 3.2 The strategy A strategy plays an important role in the negotiation process. Within the protocol, it will often determine the efficiency of the search process in terms of computational cycles and message communication costs. The design space when devising a strategy is influenced by the following dimensions: (i) asynchronous or synchronous, (ii) dynamic or static priority, (iii) dynamic or static constraint weight, (iv) number of negotiation messages to be communicated, (v) the negotiation message format and (vi) the completeness property. In other words, these dimensions provide technical considerations for a strategy design. 4. DCSP ALGORITHMS: BDI PROTOCOL + STRATEGIES In this section, we apply the proposed BDI negotiation model presented in Section 3 to expose the BDI protocol and the different strategies used for three well-known algorithms, ABT, AWC and DBO. All these algorithms assume that there is only one variable per agent. Under our framework, we call the strategies applied the ABT, AWC and DBO strategies, respectively.
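Before the individual strategies are formalized, the separation between the common protocol and a pluggable strategy can be sketched in code. The following Python fragment is an illustrative sketch of ours, not an implementation from the paper (the names Strategy, Agent and run_round are assumptions): one negotiation round walks through the six reasoning steps while delegating the belief, desire, intention and mediation decisions to a strategy object, which is exactly where ABT, AWC, DBO or UMA would plug in.

from abc import ABC, abstractmethod

class Strategy(ABC):
    """A pluggable value-selection strategy (ABT, AWC, DBO, UMA, ...)."""

    @abstractmethod
    def belief(self, agent, image):
        """Return a belief code, e.g. 0/1/2 as in the paper."""

    @abstractmethod
    def desire(self, agent, belief):
        """Return the desire set DS derived from the belief."""

    @abstractmethod
    def intention(self, agent, desires):
        """Pick one desired value, or None for a nil intention."""

    @abstractmethod
    def mediate(self, agent, intention, negotiation_msgs):
        """Possibly cancel or replace the intention after exchanging negotiation messages."""

class Agent:
    def __init__(self, name, domain, neighbors, strategy, priority=0):
        self.name, self.domain, self.neighbors = name, list(domain), list(neighbors)
        self.strategy, self.priority = strategy, priority
        self.value = self.domain[0]
        self.inbox_info, self.inbox_nego = [], []

    def run_round(self, send):
        # Percept: build the image P from the info messages received from neighbors.
        image = {m["sender"]: (m["value"], m["priority"]) for m in self.inbox_info}
        # Belief, Desire, Intention: all delegated to the adopted strategy.
        b = self.strategy.belief(self, image)
        ds = self.strategy.desire(self, b)
        it = self.strategy.intention(self, ds)
        # Mediation: the strategy decides what to do with any negotiation messages.
        it = self.strategy.mediate(self, it, self.inbox_nego)
        # Execution: update the variable and inform neighbors with info messages.
        # (Priority updates are omitted here for brevity.)
        if it is not None:
            self.value = it
            for n in self.neighbors:
                send(n, {"sender": self.name, "value": self.value,
                         "priority": self.priority})
        self.inbox_info.clear()
        self.inbox_nego.clear()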
To describe each strategy formally, the following mathematical notations are used: • n is the number of agents, m is the number of constraints; • xi denotes the variable held by agent i, (0 ≤ i < n); • Di denotes the domain of variable xi; Fi denotes the neighbor list of agent i; Ci denotes its constraint list; • pi denotes the priority of agent i; and Pi = {(xj = vj, pj = k) | agent j ∈ Fi, vj ∈ Dj is the current value assigned to xj and the priority value k is a positive integer} is the perception of agent i; • wl denotes the weight of constraint l, (0 ≤ l < m); • Si(v) is the total weight of the violated constraints in Ci when its variable has the value v ∈ Di. 4.1 Asynchronous Backtracking Figure 2 presents the BDI negotiation model incorporating the Asynchronous Backtracking (ABT) strategy. [Figure 2: BDI protocol with Asynchronous Backtracking strategy.] As mentioned in Section 3, for an asynchronous mechanism such as ABT, the mediation step is needed only in a negotiation round when an agent receives a negotiation message. For agent i, beginning initially with (wl = 1, (0 ≤ l < m); pi = i, (0 ≤ i < n)) and Fi containing all the agents who share constraints with agent i, its BDI-driven ABT strategy is described as follows. Step 1 - Percept: Update Pi upon receiving the info messages from the neighbors (in Fi). Update Ci to be the list of constraints which only involve agents in Fi that have equal or higher priority than this agent. Step 2 - Belief: The belief function GB(Pi, Ci) will return a value bi ∈ {0, 1, 2}, decided as follows: • bi = 0 when agent i can find an optimal option, i.e., if (Si(vi) ≠ 0 or vi is in the bad values list) and (∃a ∈ Di)(Si(a) = 0) with a not in a list of domain values called the bad values list. Initially this list is empty and it will be cleared when a neighbor of higher priority changes its variable assignment. • bi = 1 when it cannot find an optimal option, i.e., if (∀a ∈ Di)(Si(a) ≠ 0 or a is in the bad values list). • bi = 2 when its current variable assignment is an optimal option, i.e., if Si(vi) = 0 and vi is not in the bad values list. Step 3 - Desire: The desire function GD(bi) will return a desire set denoted by DS, decided as follows: • If bi = 0, then DS = {a | a ≠ vi, Si(a) = 0 and a is not in the bad values list}. • If bi = 1, then DS = ∅; the agent also finds agent k which is determined by {k | pk = min(pj) with agent j ∈ Fi and pk > pi}. • If bi = 2, then DS = ∅. Step 4 - Intention: The intention function GI(DS) will return an intention, decided as follows: • If DS ≠ ∅, then select an arbitrary value (say, vi) from DS as the intention. • If DS = ∅, then assign nil as the intention (to denote its lack thereof). Step 5 - Execution: • If agent i has a domain value as its intention, the agent will update its variable assignment with this value. • If bi = 1, agent i will send a negotiation message to agent k, then remove k from Fi and begin its next negotiation round. The negotiation message will contain the list of variable assignments of those agents in its neighbor list Fi that have a higher priority than agent i in the current image Pi. Mediation: When agent i receives a negotiation message, several sub-steps are carried out, as follows: • If the list of agents associated with the negotiation message contains agents which are not in Fi, it will add these agents to Fi, and request these agents to add itself to their neighbor lists. The request is considered as a type of negotiation message. • Agent i will first check if the sender agent is updated with its current value vi. The agent will add vi to its bad values list if it is so, or otherwise send its current value to the sender agent. Following this step, agent i proceeds to the next negotiation round. 4.2 Asynchronous Weak Commitment Search Figure 3 presents the BDI negotiation model incorporating the Asynchronous Weak Commitment (AWC) strategy. [Figure 3: BDI protocol with Asynchronous Weak-Commitment strategy.] The model is similar to that of incorporating the ABT strategy (see Figure 2). This is not surprising; AWC and ABT are found to be strategically similar, differing only in the details of some reasoning steps. The distinguishing point of AWC is that when the agent cannot find a suitable variable assignment, it will change its priority to the highest among its group members ({i} ∪ Fi). For agent i, beginning initially with (wl = 1, (0 ≤ l < m); pi = i, (0 ≤ i < n)) and Fi containing all the agents who share constraints with agent i, its BDI-driven AWC strategy is described as follows. Step 1 - Percept: This step is identical to the Percept step of ABT. Step 2 - Belief: The belief function GB(Pi, Ci) will return a value bi ∈ {0, 1, 2}, decided as follows: • bi = 0 when the agent can find an optimal option, i.e., if (Si(vi) ≠ 0, or the assignment xi = vi together with the current variable assignments of the neighbors in Fi who have higher priority forms a nogood [4] stored in a list called the nogood list) and ∃a ∈ Di, Si(a) = 0 (initially the nogood list is empty). • bi = 1 when the agent cannot find any optimal option, i.e., if ∀a ∈ Di, Si(a) ≠ 0. • bi = 2 when the current assignment is an optimal option, i.e., if Si(vi) = 0 and the current state is not a nogood in the nogood list. Step 3 - Desire: The desire function GD(bi) will return a desire set DS, decided as follows: • If bi = 0, then DS = {a | a ≠ vi, Si(a) = 0 and the number of constraint violations with lower priority agents is minimized}. • If bi = 1, then DS = {a | a ∈ Di and the number of violations of all relevant constraints is minimized}. • If bi = 2, then DS = ∅. Following, if bi = 1, agent i will find a list Ki of higher priority neighbors, defined by Ki = {k | agent k ∈ Fi and pk > pi}. Step 4 - Intention: This step is similar to the Intention step of ABT. However, for this strategy, the negotiation message will contain the variable assignments (of the current image Pi) for all the agents in Ki. This list of assignments is considered as a nogood. If the same negotiation message had been sent out before, agent i will have a nil intention. Otherwise, the agent will send the message and save the nogood in the nogood list. Step 5 - Execution: • If agent i has a domain value as its intention, the agent will update its variable assignment with this value. • If bi = 1, it will send the negotiation message to its neighbors in Ki, and set pi = max{pj} + 1, with agent j ∈ Fi. Mediation: This step is identical to the Mediation step of ABT, except that agent i will now add the nogood contained in the negotiation message received to its own nogood list. 4.3 Distributed Breakout Figure 4 presents the BDI negotiation model incorporating the Distributed Breakout (DBO) strategy. [Figure 4: BDI protocol with Distributed Breakout strategy.] Essentially, by this synchronous strategy, each agent will search iteratively for improvement by reducing the total weight of the violated constraints.
The iteration will continue until no agent can improve further, at which time, if some constraints remain violated, the weights of these constraints will be increased by 1 to help 'break out' from a local minimum. For agent i, beginning initially with (wl = 1, (0 ≤ l < m), pi = i, (0 ≤ i < n)) and Fi containing all the agents who share constraints with agent i, its BDI-driven DBO strategy is described as follows. Step 1 - Percept: Update Pi upon receiving the info messages from the neighbors (in Fi). Update Ci to be the list of its relevant constraints. Step 2 - Belief: The belief function GB(Pi, Ci) will return a value bi ∈ {0, 1, 2}, decided as follows: • bi = 0 when agent i can find an option to reduce the number of violations of the constraints in Ci, i.e., if ∃a ∈ Di, Si(a) < Si(vi). • bi = 1 when it cannot find any option to improve the situation, i.e., if ∀a ∈ Di, a ≠ vi, Si(a) ≥ Si(vi). • bi = 2 when its current assignment is an optimal option, i.e., if Si(vi) = 0. Step 3 - Desire: The desire function GD(bi) will return a desire set DS, decided as follows: • If bi = 0, then DS = {a | a ≠ vi, Si(a) < Si(vi) and (Si(vi) − Si(a)) is maximized}. (max{(Si(vi) − Si(a))} will be referenced by hmax_i in subsequent steps, and it defines the maximal reduction in constraint violations). • Otherwise, DS = ∅. Step 4 - Intention: The intention function GI(DS) will return an intention, decided as follows: • If DS ≠ ∅, then select an arbitrary value (say, vi) from DS as the intention. • If DS = ∅, then assign nil as the intention. Following, agent i will send its intention to all its neighbors. In return, it will receive intentions from these agents before proceeding to the Mediation step. Mediation: Agent i receives all the intentions from its neighbors. If it finds that the intention received from a neighbor agent j is associated with hmax_j > hmax_i, the agent will automatically cancel its current intention. Step 5 - Execution: • If agent i did not cancel its intention, it will update its variable assignment with the intended value. • If all intentions received and its own one are nil intentions, the agent will increase the weight of each currently violated constraint by 1. 5. THE UMA STRATEGY Figure 5 presents the BDI negotiation model incorporating the Unsolicited Mutual Advice (UMA) strategy. [Figure 5: BDI protocol with Unsolicited Mutual Advice strategy.] Unlike when using the strategies of the previous section, a DCSP agent using UMA will not only send out a negotiation message when concluding its Intention step, but also when concluding its Desire step. The negotiation message that it sends out to conclude the Desire step constitutes an unsolicited advice for all its neighbors. In turn, the agent will wait to receive unsolicited advices from all its neighbors, before proceeding on to determine its intention. For agent i, beginning initially with (wl = 1, (0 ≤ l < m), pi = i, (0 ≤ i < n)) and Fi containing all the agents who share constraints with agent i, its BDI-driven UMA strategy is described as follows. Step 1 - Percept: Update Pi upon receiving the info messages from the neighbors (in Fi). Update Ci to be the list of constraints relevant to agent i. Step 2 - Belief: The belief function GB(Pi, Ci) will return a value bi ∈ {0, 1, 2}, decided as follows: • bi = 0 when agent i can find an option to reduce the number of violations of the constraints in Ci, i.e., if ∃a ∈ Di such that Si(a) < Si(vi) and the assignment xi = a together with the current variable assignments of its neighbors does not form a local state stored in a list called the bad states list (initially this list is empty). • bi = 1 when it cannot find a value a such that a ∈ Di, Si(a) < Si(vi), and the assignment xi = a together with the current variable assignments of its neighbors does not form a local state stored in the bad states list. • bi = 2 when its current assignment is an optimal option, i.e., if Si(vi) = 0. Step 3 - Desire: The desire function GD(bi) will return a desire set DS, decided as follows: • If bi = 0, then DS = {a | a ≠ vi, Si(a) < Si(vi), (Si(vi) − Si(a)) is maximized, and the assignment xi = a together with the current variable assignments of agent i's neighbors does not form a state in the bad states list}. In this case, DS is called a set of voluntary desires. (max{(Si(vi) − Si(a))} will be referenced by hmax_i in subsequent steps, and it defines the maximal reduction in constraint violations. It is also referred to as an improvement.) • If bi = 1, then DS = {a | a ≠ vi, Si(a) is minimized, and the assignment xi = a together with the current variable assignments of agent i's neighbors does not form a state in the bad states list}. In this case, DS is called a set of reluctant desires. • If bi = 2, then DS = ∅. Following, if bi = 0, agent i will send a negotiation message containing hmax_i to all its neighbors. This message is called a voluntary advice. If bi = 1, agent i will send a negotiation message called a change advice to the neighbors in Fi who share the violated constraints with agent i. Agent i receives advices from all its neighbors and stores them in a list called A, before proceeding to the next step. Step 4 - Intention: The intention function GI(DS, A) will return an intention, decided as follows: • If there is a voluntary advice from an agent j which is associated with hmax_j > hmax_i, assign nil as the intention. • If DS ≠ ∅, DS is a set of voluntary desires and hmax_i is the biggest improvement among those associated with the voluntary advices received, select an arbitrary value (say, vi) from DS as the intention. This intention is called a voluntary intention. • If DS ≠ ∅, DS is a set of reluctant desires and agent i receives some change advices, select an arbitrary value (say, vi) from DS as the intention. This intention is called a reluctant intention. • If DS = ∅, then assign nil as the intention. Following, if the improvement hmax_i is the biggest improvement and equal to some improvements associated with the received voluntary advices, agent i will send its computed intention to all its neighbors. If agent i has a reluctant intention, it will also send this intention to all its neighbors. In both cases, agent i will attach the number of received change advices in the current negotiation round with its intention. In return, agent i will receive the intentions from its neighbors before proceeding to the Mediation step.
Mediation: If agent i does not send out its intention before this step, i.e., the agent has either a nil intention or a voluntary intention with the biggest improvement, it will proceed to the next step. Otherwise, agent i will select the best intention among all the intentions received, including its own (if any). The criteria to select the best intention are listed as follows, applied in descending order of importance. • A voluntary intention is preferred over a reluctant intention. • A voluntary intention (if any) with the biggest improvement is selected. • If there is no voluntary intention, the reluctant intention with the lowest number of constraint violations is selected. • The intention from an agent who has received a higher number of change advices in the current negotiation round is selected. • The intention from an agent with the highest priority is selected. If the selected intention is not agent i's intention, it will cancel its intention. Step 5 - Execution: If agent i does not cancel its intention, it will update its variable assignment with the intended value. Termination Condition: Since each agent does not have full information about the global state, it may not know when it has reached a solution, i.e., when all the agents are in a global stable state. Hence an observer is needed that will keep track of the negotiation messages communicated in the environment. Following a certain period of time when there is no more message communication (and this happens when all the agents have no more intention to update their variable assignments), the observer will inform the agents in the environment that a solution has been found. 5.1 An Example To illustrate how UMA works, consider a 2-color graph problem [6] as shown in Figure 6. [Figure 6: Example problem.] In this example, each agent has a color variable representing a node. There are 10 color variables sharing the same domain {Black, White}. The following records the outcome of each step in every negotiation round executed. Round 1: Step 1 - Percept: Each agent obtains the current color assignments of those nodes (agents) adjacent to it, i.e., its neighbors'. Step 2 - Belief: Agents which have positive improvements are agent 1 (this agent believes it should change its color to White), agent 2 (this agent believes it should change its color to White), agent 7 (this agent believes it should change its color to Black) and agent 10 (this agent believes it should change its value to Black). In this negotiation round, the improvements achieved by these agents are 1. Agents which do not have any improvements are agents 4, 5 and 8. Agents 3, 6 and 9 need not change as all their relevant constraints are satisfied. Step 3 - Desire: Agents 1, 2, 7 and 10 have the voluntary desire (White color for agents 1, 2 and Black color for agents 7, 10). These agents will send the voluntary advices to all their neighbors. Meanwhile, agents 4, 5 and 8 have the reluctant desires (White color for agent 4 and Black color for agents 5, 8). Agent 4 will send a change advice to agent 2 as agent 2 is sharing the violated constraint with it. Similarly, agents 5 and 8 will send change advices to agents 7 and 10 respectively. Agents 3, 6 and 9 do not have any desire to update their color assignments. Step 4 - Intention: Agents 2, 7 and 10 receive the change advices from agents 4, 5 and 8, respectively. They form their voluntary intentions. Agents 4, 5 and 8 receive the voluntary advices from agents 2, 7 and 10, hence they will not have any intention. Agents 3, 6 and 9 do not have any intention. Following, the intentions from the agents will be sent to all their neighbors. Mediation: Agent 1 finds that the intention from agent 2 is better than its intention. This is because, although both agents have voluntary intentions with an improvement of 1, agent 2 has received one change advice from agent 4 while agent 1 has not received any. Hence agent 1 cancels its intention. Agent 2 will keep its intention. Agents 7 and 10 keep their intentions since none of their neighbors has an intention. The rest of the agents do nothing in this step as they do not have any intention. Step 5 - Execution: Agent 2 changes its color to White. Agents 7 and 10 change their colors to Black. The new state after round 1 is shown in Figure 7. [Figure 7: The graph after round 1.] Round 2: Step 1 - Percept: The agents obtain the current color assignments of their neighbors. Step 2 - Belief: Agent 3 is the only agent who has a positive improvement, which is 1. It believes it should change its color to Black. Agent 2 does not have any positive improvement. The rest of the agents need not make any change as all their relevant constraints are satisfied. They will have no desire, and hence no intention. Step 3 - Desire: Agent 3 desires to change its color to Black voluntarily, hence it sends out a voluntary advice to its neighbor, i.e., agent 2. Agent 2 does not have any value for its reluctant desire set as the only option, the Black color, would bring agent 2 and its neighbors to the previous state which is known to be a bad state. Since agent 2 is sharing the constraint violation with agent 3, it sends a change advice to agent 3. Step 4 - Intention: Agent 3 will have a voluntary intention while agent 2 will not have any intention as it receives the voluntary advice from agent 3. Mediation: Agent 3 will keep its intention as its only neighbor, agent 2, does not have any intention. Step 5 - Execution: Agent 3 changes its color to Black. The new state after round 2 is shown in Figure 8. [Figure 8: The solution obtained.] Round 3: In this round, every agent finds that it has no desire and hence no intention to revise its variable assignment. Following, with no more negotiation message communication in the environment, the observer will inform all the agents that a solution has been found. 5.2 Performance Evaluation To facilitate credible comparisons with existing strategies, we measured the execution time in terms of computational cycles as defined in [4], and built a simulator that could reproduce the published results for ABT and AWC. The definition of a computational cycle is as follows. • In one cycle, each agent receives all the incoming messages, performs local computation and sends out a reply. • A message which is sent at time t will be received at time t + 1. The network delay is neglected. • Each agent has its own clock. The initial clock's value is 0. Agents attach their clock value as a time-stamp in the outgoing message and use the time-stamp in the incoming message to update their own clock's value. Four benchmark problems [6] were considered, namely, n-queens and node coloring for sparse, dense and critical graphs. For each problem, a finite number of test cases were generated for various problem sizes n.
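The cycle-based measurement just defined can be sketched in a few lines of Python. This is an illustrative sketch of ours, not the authors' simulator: it assumes each agent exposes a step(inbox, send) method that returns True while the agent still wants to negotiate, and it enforces the rule that a message sent in cycle t is delivered in cycle t + 1.

from collections import defaultdict

def run_simulation(agents, max_cycles=1000):
    """Count synchronous cycles until the system is quiescent or the cycle limit is hit.
    `agents` maps a name to an object with step(inbox, send) -> bool (assumed interface)."""
    mailboxes = defaultdict(list)                   # messages to be delivered next cycle
    for cycle in range(1, max_cycles + 1):
        inboxes, mailboxes = mailboxes, defaultdict(list)
        active = False
        for name, agent in agents.items():
            def send(to, msg, _out=mailboxes):      # sent at cycle t, arrives at t + 1
                _out[to].append(msg)
            active = agent.step(inboxes[name], send) or active
        if not active and not mailboxes:
            return cycle                            # quiescent: nothing to do, nothing in transit
    return max_cycles                               # counted as a failed run, as in the evaluation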
The maximum execution time was set to 10000 cycles for node coloring on critical graphs and 1000 cycles for the other problems. The simulator program was terminated after this period and the algorithm was considered to fail a test case if it did not find a solution by then. In such a case, the execution time for the test was counted as 1000 cycles. 5.2.1 Evaluation with the n-queens problem The n-queens problem is a traditional problem of constraint satisfaction. 10 test cases were generated for each problem size n ∈ {10, 50, 100}. Figure 9 shows the execution time for different problem sizes when ABT, AWC and UMA were run. [Figure 9: Relationship between execution time (cycles) and problem size (number of queens) for ABT, AWC and UMA.] 5.2.2 Evaluation with the graph coloring problem The graph coloring problem can be characterized by three parameters: (i) the number of colors k, (ii) the number of nodes/agents n and (iii) the number of links m. Based on the ratio m/n, the problem can be classified into three types [3]: (i) sparse (with m/n = 2), (ii) critical (with m/n = 2.7 or 4.7) and (iii) dense (with m/n = (n − 1)/4). For this problem, we did not include ABT in our empirical results as its failure rate was found to be very high. This poor performance of ABT was expected since the graph coloring problem is more difficult than the n-queens problem, on which ABT already did not perform well (see Figure 9). The sparse and dense (coloring) problem types are relatively easy while the critical type is difficult to solve. In the experiments, we fix k = 3. 10 test cases were created using the method described in [13] for each value of n ∈ {60, 90, 120}, for each problem type. The simulation results for each type of problem are shown in Figures 10 - 12. [Figure 10: Comparison between AWC and UMA, cycles versus number of nodes (sparse graph coloring). Figure 11: Comparison between AWC and UMA, cycles versus number of nodes (critical graph coloring). Figure 12: Comparison between AWC and UMA, cycles versus number of nodes (dense graph coloring).] 5.3 Discussion 5.3.1 Comparison with ABT and AWC Figure 10 shows that the average performance of UMA is slightly better than AWC for the sparse problem. UMA outperforms AWC in solving the critical problem as shown in Figure 11. It was observed that the latter strategy failed in some test cases. However, as seen in Figure 12, both the strategies are very efficient when solving the dense problem, with AWC showing slightly better performance. The performance of UMA, in the worst (time complexity) case, is similar to that of all evaluated strategies. The worst case occurs when all the possible global states of the search are reached. Since only a few agents have the right to change their variable assignments in a negotiation round, the number of redundant computational cycles and info messages is reduced. As we observe from the backtracking in ABT and AWC, the difference in the ordering of incoming messages can result in a different number of computational cycles to be executed by the agents.
5.3.2 Comparison with DBO The computational performance of UMA is arguably better than that of DBO for the following reasons: • UMA can guarantee that there will be a variable reassignment following every negotiation round whereas DBO cannot. • UMA introduces one more communication round trip (that of sending a message and awaiting a reply) than DBO, which occurs due to the need to communicate unsolicited advices. Although this increases the communication cost per negotiation round, we observed from our simulations that the overall communication cost incurred by UMA is lower due to the significantly lower number of negotiation rounds. • Using UMA, in the worst case, an agent will only take 2 or 3 communication round trips per negotiation round, following which the agent or its neighbor will do a variable assignment update. Using DBO, this number of round trips is uncertain as each agent might have to increase the weights of the violated constraints until an agent has a positive improvement; this could result in an infinite loop [3]. 6. CONCLUSION Applying automated negotiation to DCSP, this paper has proposed a protocol that prescribes the generic reasoning of a DCSP agent in a BDI architecture. Our work shows that several well-known DCSP algorithms, namely ABT, AWC and DBO, can be described as mechanisms sharing the same proposed protocol and differing only in the strategies employed for the reasoning steps per negotiation round as governed by the protocol. Importantly, this means that it might furnish a unified framework for DCSP that not only provides a clearer BDI agent-theoretic view of existing DCSP approaches, but also opens up the opportunities to enhance or develop new strategies. Towards the latter, we have proposed and formulated a new strategy - the UMA strategy. Empirical results and our discussion suggest that UMA is superior to ABT, AWC and DBO in some specific aspects. It was observed from our simulations that UMA possesses the completeness property. Future work will attempt to formally establish this property, as well as formalize other existing DCSP algorithms as BDI negotiation mechanisms, including the recent endeavor that employs a group mediator [5]. The idea of DCSP agents using different strategies in the same environment will also be investigated. 7. REFERENCES [1] P. J. Modi, H. Jung, M. Tambe, W.-M. Shen, and S. Kulkarni, Dynamic distributed resource allocation: A distributed constraint satisfaction approach, in Lecture Notes in Computer Science, 2001, p. 264. [2] H. Schlenker and U. Geske, Simulating large railway networks using distributed constraint satisfaction, in 2nd IEEE International Conference on Industrial Informatics (INDIN-04), 2004, pp. 441-446. [3] M. Yokoo, Distributed Constraint Satisfaction: Foundations of Cooperation in Multi-Agent Systems. Springer Verlag, 2000, Springer Series on Agent Technology. [4] M. Yokoo, E. H. Durfee, T. Ishida, and K. Kuwabara, The distributed constraint satisfaction problem: Formalization and algorithms, IEEE Transactions on Knowledge and Data Engineering, vol. 10, no. 5, pp. 673-685, September/October 1998. [5] R. Mailler and V. Lesser, Using cooperative mediation to solve distributed constraint satisfaction problems, in Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS-04), 2004, pp. 446-453. [6] E. Tsang, Foundations of Constraint Satisfaction. Academic Press, 1993. [7] R. Mailler, R. Vincent, V. Lesser, T. Middlekoop, and J.
Shen, Soft Real-Time, Cooperative Negotiation for Distributed Resource Allocation, AAAI Fall Symposium on Negotiation Methods for Autonomous Cooperative Systems, November 2001. [8] M. Yokoo, K. Suzuki, and K. Hirayama, Secure distributed constraint satisfaction: Reaching agreement without revealing private information, Artificial Intelligence, vol. 161, no. 1-2, pp. 229-246, 2005. [9] J. S. Rosenschein and G. Zlotkin, Rules of Encounter. The MIT Press, 1994. [10] M. Yokoo and K. Hirayama, Distributed constraint satisfaction algorithm for complex local problems, in Proceedings of the Third International Conference on Multiagent Systems (ICMAS-98), 1998, pp. 372-379. [11] M. E. Bratman, Intention, Plans, and Practical Reason. Harvard University Press, Cambridge, MA, 1987. [12] G. Weiss, Ed., Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence. The MIT Press, London, U.K., 1999. [13] S. Minton, M. D. Johnson, A. B. Philips, and P. Laird, Minimizing conflicts: A heuristic repair method for constraint satisfaction and scheduling problems, Artificial Intelligence, vol. 58, no. 1-3, pp. 161-205, 1992.
In the former, the agents exchange their intentions. When an agent receives another's intention which conflicts with its own, the agent must mediate between the intentions, by either changing its own intention or informing the other agent to change its intention. In the latter, there is an agent which acts as a group mediator. This mediator will collect the intentions from the group - a union of the agent and its neighbors - and determine which intention is to be executed. The result of this mediation is passed back to the agents in the group. Following mediation, the agent may proceed to the next reasoning step to execute its intention or begin a new negotiation round. • Execution. This is the last step of a negotiation round. The agent will execute by updating its variable assignment if the intention obtained at this step is its own. Following execution, the agent will inform its neighbors about its new variable assignment and updated priority. To do so, the agent will send out an info message. 3.2 The strategy A strategy plays an important role in the negotiation process. Within the protocol, it will often determine the efficiency of the Figure 2: BDI protocol with Asynchronous Backtracking strategy search process in terms of computational cycles and message communication costs. The design space when devising a strategy is influenced by the following dimensions: (i) asynchronous or synchronous, (ii) dynamic or static priority, (iii) dynamic or static constraint weight, (iv) number of negotiation messages to be communicated, (v) the negotiation message format and (vi) the completeness property. In other words, these dimensions provide technical considerations for a strategy design. 4. DCSP ALGORITHMS: BDI PROTOCOL + STRATEGIES In this section, we apply the proposed BDI negotiation model presented in Section 3 to expose the BDI protocol and the different strategies used for three well-known algorithms, ABT, AWC and DBO. All these algorithms assume that there is only one variable per agent. Under our framework, we call the strategies applied the ABT, AWC and DBO strategies, respectively. To describe each strategy formally, the following mathematical notations are used: • n is the number of agents, m is the number of constraints; • xi denotes the variable held by agent i, (0 <i <n); • Di denotes the domain of variable xi; Fi denotes the neighbor list of agent i; Ci denotes its constraint list; • pi denotes the priority of agent i; and Pi = {(xj = vj, pj = k) | agent j ∈ Fi, vj ∈ Dj is the current value assigned to xj and the priority value k is a positive integer} is the perception of agent i; • wl denotes the weight of constraint l, (0 <l <m); • Si (v) is the total weight of the violated constraints in Ci when its variable has the value v ∈ Di. 4.1 Asynchronous Backtracking Figure 2 presents the BDI negotiation model incorporating the Asynchronous Backtracking (ABT) strategy. As mentioned in Section 3, for an asynchronous mechanism that ABT is, the mediation step is needed only in a negotiation round when an agent receives a negotiation message. For agent i, beginning initially with (wl = 1, (0 <l <m); pi = i, (0 <i <n)) and Fi contains all the agents who share the constraints with agent i, its BDI-driven ABT strategy is described as follows. Step 1 - Percept: Update Pi upon receiving the info messages from the neighbors (in Fi). Update Ci to be the list of 526 The Sixth Intl. . Joint Conf. 
on Autonomous Agents and Multi-Agent Systems (AAMAS 07) constraints which only consists of agents in Fi that have equal or higher priority than this agent. Step 2 - Belief: The belief function GB (Pi, Ci) will return a value bi E {0, 1, 2}, decided as follows: • bi = 0 when agent i can find an optimal option, i.e., if (Si (vi) = ~ 0 or vi is in bad values list) and (zla E Di) (Si (a) = 0) and a is not in a list of domain values called bad values list. Initially this list is empty and it will be cleared when a neighbor of higher priority changes its variable assignment. • bi = 1 when it cannot find an optimal option, i.e., if (Va E Di) (Si (a) = ~ 0) or a is in bad values list. • bi = 2 when its current variable assignment is an optimal option, i.e., if Si (vi) = 0 and vi is not in bad value list. Step 3 - Desire: The desire function GD (bi) will return a desire set denoted by DS, decided as follows: • If bi = 0, then DS = {a | (a = ~ vi), (Si (a) = 0) and a is not in the bad value list}. • If bi = 1, then DS = 0, the agent also finds agent k which is determined by {k | pk = min (pj) with agent j E Fi and pk> pi}. • If bi = 2, then DS = 0. Step 4 - Intention: The intention function GI (DS) will return an intention, decided as follows: • If DS = ~ 0, then select an arbitrary value (say, v ~ i) from DS as the intention. • If DS = 0, then assign nil as the intention (to denote its lack thereof). Step 5 - Execution: • If agent i has a domain value as its intention, the agent will update its variable assignment with this value. • If bi = 1, agent i will send a negotiation message to agent k, then remove k from Fi and begin its next negotiation round. The negotiation message will contain the list of variable assignments of those agents in its neighbor list Fi that have a higher priority than agent i in the current image Pi. Mediation: When agent i receives a negotiation message, several sub-steps are carried out, as follows: • If the list of agents associated with the negotiation message contains agents which are not in Fi, it will add these agents to Fi, and request these agents to add itself to their neighbor lists. The request is considered as a type of negotiation message. • Agent i will first check if the sender agent is updated with its current value vi. The agent will add vi to its bad values list if it is so, or otherwise send its current value to the sender agent. Following this step, agent i proceeds to the next negotiation round. 4.2 Asynchronous Weak Commitment Search Figure 3 presents the BDI negotiation model incorporating the Asynchronous Weak Commitment (AWC) strategy. The model is similar to that of incorporating the ABT strategy (see Figure 2). This is not surprising; AWC and ABT are found to be strategically similar, differing only in the details of some reasoning steps. The distinguishing point of AWC is that when the agent cannot find a suitable variable assignment, it will change its priority to the highest among its group members ({i} U Fi). For agent i, beginning initially with (wl = 1, (0 <l <m); pi = i, (0 <i <n)) and Fi contains all the agents who share the constraints with agent i, its BDI-driven AWC strategy is described as follows. Step 1 - Percept: This step is identical to the Percept step of ABT. 
Step 2 - Belief: The belief function GB (Pi, Ci) will return a value bi E {0, 1, 2}, decided as follows: Figure 3: BDI protocol with Asynchronous WeakCommitment strategy • bi = 0 when the agent can find an optimal option i.e., if (Si (vi) = ~ 0 or the assignment xi = vi and the current variables assignments of the neighbors in Fi who have higher priority form a nogood [4]) stored in a list called nogood list and zla E Di, Si (a) = 0 (initially the list is empty). • bi = 1 when the agent cannot find any optimal option i.e., if Va E Di, Si (a) = ~ 0. • bi = 2 when the current assignment is an optimal option i.e., if Si (vi) = 0 and the current state is not a nogood in nogood list. Step 3 - Desire: The desire function GD (bi) will return a desire set DS, decided as follows: • If bi = 0, then DS = {a | (a = ~ vi), (Si (a) = 0) and the number of constraint violations with lower priority agents is minimized}. • If bi = 1, then DS = {a | a E Di and the number of violations of all relevant constraints is minimized}. • If bi = 2, then DS = 0. Following, if bi = 1, agent i will find a list Ki of higher priority neighbors, defined by Ki = {k | agent k E Fi and pk> pi}. Step 4 - Intention: This step is similar to the Intention step of ABT. However, for this strategy, the negotiation message will contain the variable assignments (of the current image Pi) for all the agents in Ki. This list of assignment is considered as a nogood. If the same negotiation message had been sent out before, agent i will have nil intention. Otherwise, the agent will send the message and save the nogood in the nogood list. Step 5 - Execution: • If agent i has a domain value as its intention, the agent will update its variable assignment with this value. • If bi = 1, it will send the negotiation message to its neighbors in Ki, and set pi = max {pj} + 1, with agent j E Fi. Mediation: This step is identical to the Mediation step of ABT, except that agent i will now add the nogood contained in the negotiation message received to its own nogood list. 4.3 Distributed Breakout Figure 4 presents the BDI negotiation model incorporating the Distributed Breakout (DBO) strategy. Essentially, by this synchronous strategy, each agent will search iteratively for improvement by reducing the total weight of the violated constraints. The iteration will continue until no agent can improve further, at which time if some constraints remain violated, the weights of Figure 4: BDI protocol with Distributed Breakout strategy these constraints will be increased by 1 to help ` breakout' from a local minimum. For agent i, beginning initially with (wl = 1, (0 <l <m), pi = i, (0 <i <n)) and Fi contains all the agents who share the constraints with agent i, its BDI-driven DBO strategy is described as follows. Step 1 - Percept: Update Pi upon receiving the info messages from the neighbors (in Fi). Update Ci to be the list of its relevant constraints. Step 2 - Belief: The belief function GB (Pi, Ci) will return a value bi ∈ {0, 1, 2}, decided as follows: • bi = 0 when agent i can find an option to reduce the number violations of the constraints in Ci, i.e., if ∃ a ∈ Di, Si (a) <Si (vi). • bi = 1 when it cannot find any option to improve situation, i.e., if ∀ a ∈ Di, a = ~ vi, Si (a) ≥ Si (vi). • bi = 2 when its current assignment is an optimal option, i.e., if Si (vi) = 0. Step 3 - Desire: The desire function GD (bi) will return a desire set DS, decided as follows: • If bi = 0, then DS = {a | a = ~ vi, Si (a) <Si (vi) and (Si (vi) − Si (a)) is maximized}. 
(max {(Si (vi) − Si (a))} will be referenced by hmax i in subsequent steps, and it defines the maximal reduction in constraint violations). • Otherwise, DS = 0. Step 4 - Intention: The intention function GI (DS) will return an intention, decided as follows: • If DS = ~ 0, then select an arbitrary value (say, v ~ i) from DS as the intention. • If DS = 0, then assign nil as the intention. Following, agent i will send its intention to all its neighbors. In return, it will receive intentions from these agents before proceeding to Mediation step. Mediation: Agent i receives all the intentions from its neighbors. If it finds that the intention received from a neighbor agent j is associated with hmax j> hmax i, the agent will automatically cancel its current intention. Step 5 - Execution: • If agent i did not cancel its intention, it will update its variable assignment with the intended value. Figure 5: BDI protocol with Unsolicited Mutual Advice strategy • If all intentions received and its own one are nil intention, the agent will increase the weight of each currently violated constraint by 1. 5. THE UMA STRATEGY Figure 5 presents the BDI negotiation model incorporating the Unsolicited Mutual Advice (UMA) strategy. Unlike when using the strategies of the previous section, a DCSP agent using UMA will not only send out a negotiation message when concluding its Intention step, but also when concluding its Desire step. The negotiation message that it sends out to conclude the Desire step constitutes an unsolicited advice for all its neighbors. In turn, the agent will wait to receive unsolicited advices from all its neighbors, before proceeding on to determine its intention. For agent i, beginning initially with (wl = 1, (0 <l <m), pi = i, (0 <i <n)) and Fi contains all the agents who share the constraints with agent i, its BDI-driven UMA strategy is described as follows. Step 1 - Percept: Update Pi upon receiving the info messages from the neighbors (in Fi). Update Ci to be the list of constraints relevant to agent i. Step 2 - Belief: The belief function GB (Pi, Ci) will return a value bi ∈ {0, 1, 2}, decided as follows: • bi = 0 when agent i can find an option to reduce the number violations of the constraints in Ci, i.e., if ∃ a ∈ Di, Si (a) <Si (vi) and the assignment xi = a and the current variable assignments of its neighbors do not form a local state stored in a list called bad states list (initially this list is empty). • bi = 1 when it cannot find a value a such as a ∈ Di, Si (a) <Si (vi), and the assignment xi = a and the current variable assignments of its neighbors do not form a local state stored in the bad states list. • bi = 2 when its current assignment is an optimal option, i.e., if Si (vi) = 0. Step 3 - Desire: The desire function GD (bi) will return a desire set DS, decided as follows: • If bi = 0, then DS = {a | a = ~ vi, Si (a) <Si (vi) and (Si (vi) − Si (a)) is maximized} and the assignment xi = a and the current variable assignments of agent i's neighbors do not form a state in the bad states list. In this case, DS is called a set of voluntary desires. max {(Si (vi) − Si (a))} will be referenced by hmax i in subsequent steps, and it defines 528 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) the maximal reduction in constraint violations. It is also referred to as an improvement). 
• If bi = 1, then DS = {a | a ≠ vi, Si(a) is minimized, and the assignment xi = a together with the current variable assignments of agent i's neighbors does not form a state in the bad states list}. In this case, DS is called a set of reluctant desires. • If bi = 2, then DS = ∅. Following, if bi = 0, agent i will send a negotiation message containing its improvement hmax i to all its neighbors. This message is called a voluntary advice. If bi = 1, agent i will send a negotiation message called a change advice to the neighbors in Fi who share the violated constraints with agent i. Agent i receives the advices from all its neighbors and stores them in a list called A before proceeding to the next step. Step 4 - Intention: The intention function GI(DS, A) will return an intention, decided as follows: • If there is a voluntary advice from an agent j which is associated with hmax j > hmax i, assign nil as the intention. • If DS ≠ ∅, DS is a set of voluntary desires and hmax i is the biggest improvement among those associated with the voluntary advices received, select an arbitrary value (say, v′i) from DS as the intention. This intention is called a voluntary intention. • If DS ≠ ∅, DS is a set of reluctant desires and agent i receives some change advices, select an arbitrary value (say, v′i) from DS as the intention. This intention is called a reluctant intention. • If DS = ∅, then assign nil as the intention. Following, if the improvement hmax i is the biggest improvement but is equalled by some of the improvements associated with the received voluntary advices, agent i will send its computed intention to all its neighbors. If agent i has a reluctant intention, it will also send this intention to all its neighbors. In both cases, agent i will attach to its intention the number of change advices it has received in the current negotiation round. In return, agent i will receive the intentions from its neighbors before proceeding to the Mediation step. Mediation: If agent i did not send out its intention before this step, i.e., the agent has either a nil intention or a voluntary intention with the biggest improvement, it will proceed to the next step. Otherwise, agent i will select the best intention among all the intentions received, including its own (if any). The criteria for selecting the best intention are listed below, applied in descending order of importance: • A voluntary intention is preferred over a reluctant intention. • The voluntary intention (if any) with the biggest improvement is selected. • If there is no voluntary intention, the reluctant intention with the lowest number of constraint violations is selected. • The intention from an agent who has received a higher number of change advices in the current negotiation round is selected. • The intention from the agent with the highest priority is selected. If the selected intention is not agent i's intention, it will cancel its intention. Step 5 - Execution: If agent i does not cancel its intention, it will update its variable assignment with the intended value. Termination Condition: Since each agent does not have full information about the global state, it may not know when a solution has been reached, i.e., when all the agents are in a global stable state. Hence an observer is needed to keep track of the negotiation messages communicated in the environment. After a certain period of time with no further message communication (which happens when no agent has any remaining intention to update its variable assignment), the observer will inform the agents in the environment that a solution has been found.
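As a concrete illustration of the Belief and Desire steps above, the following is a simplified, single-agent Python sketch of how a UMA agent classifies its options into voluntary desires, reluctant desires, or no desire, and computes the improvement hmax i that it would announce in a voluntary advice. The function and parameter names are ours, the bad states list is represented in a simplified way (the agent's own variable is keyed as 'self'), and the advice exchange, intention mediation and execution steps are omitted.

from typing import Dict, List, Tuple

def uma_belief_desire(current: int,
                      S: Dict[int, int],
                      bad_states: List[Dict[str, int]],
                      local_state: Dict[str, int]) -> Tuple[int, List[int], int]:
    """One agent's Belief/Desire reasoning under the UMA strategy (simplified).

    current     -- the value presently assigned to the agent's variable
    S           -- S_i(a): total weight of violated constraints for each a in the domain
    bad_states  -- local states already recorded in the 'bad states list'
    local_state -- the neighbours' current assignments

    Returns (belief, desire_set, improvement):
      belief 0 -> voluntary desires exist (a voluntary advice carrying `improvement`
                  would be sent to all neighbours)
      belief 1 -> only reluctant desires (change advices would be sent to the
                  neighbours sharing the violated constraints)
      belief 2 -> all relevant constraints satisfied (no desire, no intention)
    """
    def is_bad(a: int) -> bool:
        state = dict(local_state)
        state["self"] = a          # 'self' stands for the agent's own variable
        return state in bad_states

    if S[current] == 0:
        return 2, [], 0

    improving = [a for a in S if a != current and S[a] < S[current] and not is_bad(a)]
    if improving:
        best_gain = max(S[current] - S[a] for a in improving)      # h^max_i
        desires = [a for a in improving if S[current] - S[a] == best_gain]
        return 0, desires, best_gain

    candidates = [a for a in S if a != current and not is_bad(a)]
    if candidates:
        least = min(S[a] for a in candidates)
        return 1, [a for a in candidates if S[a] == least], 0
    return 1, [], 0   # no reluctant desire either (as for agent 2 in Round 2 below)

# Example: domain {0, 1, 2}; the current value violates constraints of total weight 2,
# while value 2 reduces that to 0, so the agent forms a voluntary desire with gain 2.
belief, desires, gain = uma_belief_desire(
    current=0, S={0: 2, 1: 2, 2: 0}, bad_states=[], local_state={"x2": 1})
print(belief, desires, gain)   # -> 0 [2] 2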
Figure 6: Example problem 5.1 An Example To illustrate how UMA works, consider a 2-color graph problem [6] as shown in Figure 6. In this example, each agent has a color variable representing a node. There are 10 color variables sharing the same domain {Black, White}. The following records the outcome of each step in every negotiation round executed. Round 1: Step 1 - Percept: Each agent obtains the current color assignments of those nodes (agents) adjacent to it, i.e., its neighbors'. Step 2 - Belief: Agents which have positive improvements are agent 1 (this agent believes it should change its color to White), agent 2 (this believes should change its color to White), agent 7 (this agent believes it should change its color to Black) and agent 10 (this agent believes it should change its value to Black). In this negotiation round, the improvements achieved by these agents are 1. Agents which do not have any improvements are agents 4, 5 and 8. Agents 3, 6 and 9 need not change as all their relevant constraints are satisfied. Step 3 - Desire: Agents 1, 2, 7 and 10 have the voluntary desire (White color for agents 1, 2 and Black color for agents 7, 10). These agents will send the voluntary advices to all their neighbors. Meanwhile, agents 4, 5 and 8 have the reluctant desires (White color for agent 4 and Black color for agents 5, 8). Agent 4 will send a change advice to agent 2 as agent 2 is sharing the violated constraint with it. Similarly, agents 5 and 8 will send change advices to agents 7 and 10 respectively. Agents 3, 6 and 9 do not have any desire to update their color assignments. Step 4 - Intention: Agents 2, 7 and 10 receive the change advices from agents 4, 5 and 8, respectively. They form their voluntary intentions. Agents 4, 5 and 8 receive the voluntary advices from agents 2, 7 and 10, hence they will not have any intention. Agents 3, 6 and 9 do not have any intention. Following, the intention from the agents will be sent to all their neighbors. Mediation: Agent 1 finds that the intention from agent 2 is better than its intention. This is because, although both agents have voluntary intentions with improvement of 1, agent 2 has received one change advice from agent 4 while agent 1 has not received any. Hence agent 1 cancels its intention. Agent 2 will keep its intention. Agents 7 and 10 keep their intentions since none of their neighbors has an intention. The rest of the agents do nothing in this step as they do not have any intention. Step 5 - Execution: Agent 2 changes its color to White. Agents 7 and 10 change their colors to Black. The new state after round 1 is shown in Figure 7. Round 2: Step 1 - Percept: The agents obtain the current color assignments of their neighbors. Step 2 - Belief: Agent 3 is the only agent who has a positive improvement which is 1. It believes it should change its Figure 7: The graph after round 1 color to Black. Agent 2 does not have any positive improvement. The rest of the agents need not make any change as all their relevant constraints are satisfied. They will have no desire, and hence no intention. Step 3 - Desire: Agent 3 desires to change its color to Black voluntarily, hence it sends out a voluntary advice to its neighbor, i.e., agent 2. Agent 2 does not have any value for its reluctant desire set as the only option, Black color, will bring agent 2 and its neighbors to the previous state which is known to be a bad state. Since agent 2 is sharing the constraint violation with agent 3, it sends a change advice to agent 3. 
Step 4 - Intention: Agent 3 will have a voluntary intention while agent 2 will not have any intention as it receives the voluntary advice from agent 3. Mediation: Agent 3 will keep its intention as its only neighbor, agent 2, does not have any intention. Step 5 - Execution: Agent 3 changes its color to Black. The new state after round 2 is shown in Figure 8. Round 3: In this round, every agent finds that it has no desire and hence no intention to revise its variable assignment. Following, with no more negotiation message communication in the environment, the observer will inform all the agents that a solution has been found. Figure 8: The solution obtained 5.2 Performance Evaluation To facilitate credible comparisons with existing strategies, we measured the execution time in terms of computational cycles as defined in [4], and built a simulator that could reproduce the published results for ABT and AWC. The definition of a computational cycle is as follows. • In one cycle, each agent receives all the incoming messages, performs local computation and sends out a reply. • A message which is sent at time t will be received at time t + 1. The network delay is neglected. • Each agent has it own clock. The initial clock's value is 0. Agents attach their clock value as a time-stamp in the outgoing message and use the time-stamp in the incoming message to update their own clock's value. Four benchmark problems [6] were considered, namely, n-queens and node coloring for sparse, dense and critical graphs. For each problem, a finite number of test cases were generated for various problem sizes n. The maximum execution time was set to Figure 9: Relationship between execution time and problem size 10000 cycles for node coloring for critical graphs and 1000 cycles for other problems. The simulator program was terminated after this period and the algorithm was considered to fail a test case if it did not find a solution by then. In such a case, the execution time for the test was counted as 1000 cycles. 5.2.1 Evaluation with n-queens problem The n-queens problem is a traditional problem of constraint satisfaction. 10 test cases were generated for each problem size n ∈ {10, 50 and 100}. Figure 9 shows the execution time for different problem sizes when ABT, AWC and UMA were run. 5.2.2 Evaluation with graph coloring problem The graph coloring problem can be characterized by three parameters: (i) the number of colors k, the number of nodes/agents n and the number of links m. Based on the ratio m/n, the problem can be classified into three types [3]: (i) sparse (with m/n = 2), (ii) critical (with m/n = 2.7 or 4.7) and (iii) dense (with m/n = (n − 1) / 4). For this problem, we did not include ABT in our empirical results as its failure rate was found to be very high. This poor performance of ABT was expected since the graph coloring problem is more difficult than the n-queens problem, on which ABT already did not perform well (see Figure 9). The sparse and dense (coloring) problem types are relatively easy while the critical type is difficult to solve. In the experiments, we fix k = 3. 10 test cases were created using the method described in [13] for each value of n ∈ {60, 90, 120}, for each problem type. The simulation results for each type of problem are shown in Figures 10 - 12. Figure 10: Comparison between AWC and UMA (sparse graph coloring) 5.3 Discussion 5.3.1 Comparison with ABT and AWC 530 The Sixth Intl. . Joint Conf. 
on Autonomous Agents and Multi-Agent Systems (AAMAS 07) Figure 11: Comparison between AWC and UMA (critical graph coloring) Figure 12: Comparison between AWC and UMA (dense graph coloring) Figure 10 shows that the average performance of UMA is slightly better than AWC for the sparse problem. UMA outperforms AWC in solving the critical problem as shown in Figure 11. It was observed that the latter strategy failed in some test cases. However, as seen in Figure 12, both the strategies are very efficient when solving the dense problem, with AWC showing slightly better performance. The performance of UMA, in the worst (time complexity) case, is similar to that of all evaluated strategies. The worst case occurs when all the possible global states of the search are reached. Since only a few agents have the right to change their variable assignments in a negotiation round, the number of redundant computational cycles and info messages is reduced. As we observe from the backtracking in ABT and AWC, the difference in the ordering of incoming messages can result in a different number of computational cycles to be executed by the agents. 5.3.2 Comparison with DBO The computational performance of UMA is arguably better than DBO for the following reasons: • UMA can guarantee that there will be a variable reassignment following every negotiation round whereas DBO cannot. • UMA introduces one more communication round trip (that of sending a message and awaiting a reply) than DBO, which occurs due to the need to communicate unsolicited advices. Although this increases the communication cost per negotiation round, we observed from our simulations that the overall communication cost incurred by UMA is lower due to the significantly lower number of negotiation rounds. • Using UMA, in the worst case, an agent will only take 2 or 3 communication round trips per negotiation round, following which the agent or its neighbor will do a variable assignment update. Using DBO, this number of round trips is uncertain as each agent might have to increase the weights of the violated constraints until an agent has a positive improvement; this could result in a infinite loop [3]. 6. CONCLUSION Applying automated negotiation to DCSP, this paper has proposed a protocol that prescribes the generic reasoning of a DCSP agent in a BDI architecture. Our work shows that several wellknown DCSP algorithms, namely ABT, AWC and DBO, can be described as mechanisms sharing the same proposed protocol, and only differ in the strategies employed for the reasoning steps per negotiation round as governed by the protocol. Importantly, this means that it might furnish a unified framework for DCSP that not only provides a clearer BDI agent-theoretic view of existing DCSP approaches, but also opens up the opportunities to enhance or develop new strategies. Towards the latter, we have proposed and formulated a new strategy - the UMA strategy. Empirical results and our discussion suggest that UMA is superior to ABT, AWC and DBO in some specific aspects. It was observed from our simulations that UMA possesses the completeness property. Future work will attempt to formally establish this property, as well as formalize other existing DSCP algorithms as BDI negotiation mechanisms, including the recent endeavor that employs a group mediator [5]. The idea of DCSP agents using different strategies in the same environment will also be investigated.
I-52
A Unified and General Framework for Argumentation-based Negotiation
This paper proposes a unified and general framework for argumentation-based negotiation, in which the role of argumentation is formally analyzed. The framework makes it possible to study the outcomes of an argumentation-based negotiation. It shows what an agreement is, how it is related to the theories of the agents, when it is possible, and how this can be attained by the negotiating agents in this case. It defines also the notion of concession, and shows in which situation an agent will make one, as well as how it influences the evolution of the dialogue.
[ "framework", "argument", "argument", "negoti", "outcom", "theori", "agent", "argument-base negoti", "concess notion", "decis make mechan", "solut", "inform", "belief" ]
[ "P", "P", "P", "P", "P", "P", "P", "M", "R", "M", "U", "U", "U" ]
A Unified and General Framework for Argumentation-based Negotiation Leila Amgoud IRIT - CNRS 118, route de Narbonne 31062, Toulouse, France amgoud@irit.fr Yannis Dimopoulos University of Cyprus 75 Kallipoleos Str. PO Box 20537, Cyprus yannis@cs.ucy.ac.cy Pavlos Moraitis Paris-Descartes University 45 rue des Saints-Pères 75270 Paris Cedex 06, France pavlos@math-info.univparis5.fr ABSTRACT This paper proposes a unified and general framework for argumentation-based negotiation, in which the role of argumentation is formally analyzed. The framework makes it possible to study the outcomes of an argumentation-based negotiation. It shows what an agreement is, how it is related to the theories of the agents, when it is possible, and how this can be attained by the negotiating agents in this case. It defines also the notion of concession, and shows in which situation an agent will make one, as well as how it influences the evolution of the dialogue. Categories and Subject Descriptors I.2.3 [Deduction and Theorem Proving]: Nonmonotonic reasoning and belief revision ; I.2.11 [Distributed Artificial Intelligence]: Intelligent agents General Terms Human Factors, Theory 1. INTRODUCTION Roughly speaking, negotiation is a process aiming at finding some compromise or consensus between two or several agents about some matters of collective agreement, such as pricing products, allocating resources, or choosing candidates. Negotiation models have been proposed for the design of systems able to bargain in an optimal way with other agents for example, buying or selling products in ecommerce. Different approaches to automated negotiation have been investigated, including game-theoretic approaches (which usually assume complete information and unlimited computation capabilities) [11], heuristic-based approaches which try to cope with these limitations [6], and argumentation-based approaches [2, 3, 7, 8, 9, 12, 13] which emphasize the importance of exchanging information and explanations between negotiating agents in order to mutually influence their behaviors (e.g. an agent may concede a goal having a small priority), and consequently the outcome of the dialogue. Indeed, the two first types of settings do not allow for the addition of information or for exchanging opinions about offers. Integrating argumentation theory in negotiation provides a good means for supplying additional information and also helps agents to convince each other by adequate arguments during a negotiation dialogue. Indeed, an offer supported by a good argument has a better chance to be accepted by an agent, and can also make him reveal his goals or give up some of them. The basic idea behind an argumentationbased approach is that by exchanging arguments, the theories of the agents (i.e. their mental states) may evolve, and consequently, the status of offers may change. For instance, an agent may reject an offer because it is not acceptable for it. However, the agent may change its mind if it receives a strong argument in favor of this offer. Several proposals have been made in the literature for modeling such an approach. However, the work is still preliminary. Some researchers have mainly focused on relating argumentation with protocols. They have shown how and when arguments in favor of offers can be computed and exchanged. Others have emphasized on the decision making problem. In [3, 7], the authors argued that selecting an offer to propose at a given step of the dialogue is a decision making problem. 
They have thus proposed an argumentationbased decision model, and have shown how such a model can be related to the dialogue protocol. In most existing works, there is no deep formal analysis of the role of argumentation in negotiation dialogues. It is not clear how argumentation can influence the outcome of the dialogue. Moreover, basic concepts in negotiation such as agreement (i.e. optimal solutions, or compromise) and concession are neither defined nor studied. This paper aims to propose a unified and general framework for argumentation-based negotiation, in which the role of argumentation is formally analyzed, and where the existing systems can be restated. In this framework, a negotiation dialogue takes place between two agents on a set O of offers, whose structure is not known. The goal of a negotiation is to find among elements of O, an offer that satisfies more or less 967 978-81-904262-7-5 (RPS) c 2007 IFAAMAS the preferences of both agents. Each agent is supposed to have a theory represented in an abstract way. A theory consists of a set A of arguments whose structure and origin are not known, a function specifying for each possible offer in O, the arguments of A that support it, a non specified conflict relation among the arguments, and finally a preference relation between the arguments. The status of each argument is defined using Dung``s acceptability semantics. Consequently, the set of offers is partitioned into four subsets: acceptable, rejected, negotiable and non-supported offers. We show how an agent``s theory may evolve during a negotiation dialogue. We define formally the notions of concession, compromise, and optimal solution. Then, we propose a protocol that allows agents i) to exchange offers and arguments, and ii) to make concessions when necessary. We show that dialogues generated under such a protocol terminate, and even reach optimal solutions when they exist. This paper is organized as follows: Section 2 introduces the logical language that is used in the rest of the paper. Section 3 defines the agents as well as their theories. In section 4, we study the properties of these agents'' theories. Section 5 defines formally an argumentation-based negotiation, shows how the theories of agents may evolve during a dialogue, and how this evolution may influence the outcome of the dialogue. Two kinds of outcomes: optimal solution and compromise are defined, and we show when such outcomes are reached. Section 6 illustrates our general framework through some examples. Section 7 compares our formalism with existing ones. Section 8 concludes and presents some perspectives. Due to lack of space, the proofs are not included. These last are in a technical report that we will make available online at some later time. 2. THE LOGICAL LANGUAGE In what follows, L will denote a logical language, and ≡ is an equivalence relation associated with it. From L, a set O = {o1, ... , on} of n offers is identified, such that oi, oj ∈ O such that oi ≡ oj. This means that the offers are different. Offers correspond to the different alternatives that can be exchanged during a negotiation dialogue. For instance, if the agents try to decide the place of their next meeting, then the set O will contain different towns. Different arguments can be built from L. The set Args(L) will contain all those arguments. By argument, we mean a reason in believing or of doing something. 
In [3], it has been argued that the selection of the best offer to propose at a given step of the dialogue is a decision problem. In [4], it has been shown that in an argumentation-based approach for decision making, two kinds of arguments are distinguished: arguments supporting choices (or decisions), and arguments supporting beliefs. Moreover, it has been acknowledged that the two categories of arguments are formally defined in different ways, and they play different roles. Indeed, an argument in favor of a decision, built both on an agent``s beliefs and goals, tries to justify the choice; whereas an argument in favor of a belief, built only from beliefs, tries to destroy the decision arguments, in particular the beliefs part of those decision arguments. Consequently, in a negotiation dialogue, those two kinds of arguments are generally exchanged between agents. In what follows, the set Args(L) is then divided into two subsets: a subset Argso(L) of arguments supporting offers, and a subset Argsb(L) of arguments supporting beliefs. Thus, Args(L) = Argso(L) ∪ Argsb(L). As in [5], in what follows, we consider that the structure of the arguments is not known. Since the knowledge bases from which arguments are built may be inconsistent, the arguments may be conflicting too. In what follows, those conflicts will be captured by the relation RL, thus RL ⊆ Args(L) × Args(L). Three assumptions are made on this relation: First the arguments supporting different offers are conflicting. The idea behind this assumption is that since offers are exclusive, an agent has to choose only one at a given step of the dialogue. Note that, the relation RL is not necessarily symmetric between the arguments of Argsb(L). The second hypothesis says that arguments supporting the same offer are also conflicting. The idea here is to return the strongest argument among these arguments. The third condition does not allow an argument in favor of an offer to attack an argument supporting a belief. This avoids wishful thinking. Formally: Definition 1. RL ⊆ Args(L) × Args(L) is a conflict relation among arguments such that: • ∀a, a ∈ Argso(L), s.t. a = a , a RL a • a ∈ Argso(L) and a ∈ Argsb(L) such that a RL a Note that the relation RL is not symmetric. This is due to the fact that arguments of Argsb(L) may be conflicting but not necessarily in a symmetric way. In what follows, we assume that the set Args(L) of arguments is finite, and each argument is attacked by a finite number of arguments. 3. NEGOTIATING AGENTS THEORIES AND REASONING MODELS In this section we define formally the negotiating agents, i.e. their theories, as well as the reasoning model used by those agents in a negotiation dialogue. 3.1 Negotiating agents theories Agents involved in a negotiation dialogue, called negotiating agents, are supposed to have theories. In this paper, the theory of an agent will not refer, as usual, to its mental states (i.e. its beliefs, desires and intentions). However, it will be encoded in a more abstract way in terms of the arguments owned by the agent, a conflict relation among those arguments, a preference relation between the arguments, and a function that specifies which arguments support offers of the set O. We assume that an agent is aware of all the arguments of the set Args(L). The agent is even able to express a preference between any pair of arguments. 
This does not mean that the agent will use all the arguments of Args(L), but it encodes the fact that when an agent receives an argument from another agent, it can interpret it correctly, and it can also compare it with its own arguments. Similarly, each agent is supposed to be aware of the conflicts between arguments. This also allows us to encode the fact that an agent can recognize whether the received argument is in conflict or not with its arguments. However, in its theory, only the conflicts between its own arguments are considered. Definition 2 (Negotiating agent theory). Let O be a set of n offers. A negotiating agent theory is a tuple A, F, , R, Def such that: • A ⊆ Args(L). 968 The Sixth Intl.. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) • F: O → 2A s.t ∀i, j with i = j, F(oi) ∩ F(oj) = ∅. Let AO = ∪F(oi) with i = 1, ... , n. • ⊆ Args(L) × Args(L) is a partial preorder denoting a preference relation between arguments. • R ⊆ RL such that R ⊆ A × A • Def ⊆ A × A such that ∀ a, b ∈ A, a defeats b, denoted a Def b iff: - a R b, and - not (b a) The function F returns the arguments supporting offers in O. In [4], it has been argued that any decision may have arguments supporting it, called arguments PRO, and arguments against it, called arguments CONS. Moreover, these two types of arguments are not necessarily conflicting. For simplicity reasons, in this paper we consider only arguments PRO. Moreover, we assume that an argument cannot support two distinct offers. However, it may be the case that an offer is not supported at all by arguments, thus F(oi) may be empty. Example 1. Let O = {o1, o2, o3} be a set of offers. The following theory is the theory of agent i: • A = {a1, a2, a3, a4} • F(o1) = {a1}, F(o2) = {a2}, F(o3) = ∅. Thus, Ao = {a1, a2} • = {(a1, a2), (a2, a1), (a3, a2), (a4, a3)} • R = {a1, a2), (a2, a1), (a3, a2), (a4, a3)} • Def = {(a4, a3), (a3, a2)} From the above definition of agent theory, the following hold: Property 1. • Def ⊆ R • ∀a, a ∈ F(oi), a R a 3.2 The reasoning model From the theory of an agent, one can define the argumentation system used by that agent for reasoning about the offers and the arguments, i.e. for computing the status of the different offers and arguments. Definition 3 (Argumentation system). Let A, F, , R, Def be the theory of an agent. The argumentation system of that agent is the pair A, Def . In [5], different acceptability semantics have been introduced for computing the status of arguments. These are based on two basic concepts, defence and conflict-free, defined as follows: Definition 4 (Defence/conflict-free). Let S ⊆ A. • S defends an argument a iff each argument that defeats a is defeated by some argument in S. • S is conflict-free iff there exist no a, a in S such that a Def a . Definition 5 (Acceptability semantics). Let S be a conflict-free set of arguments, and let T : 2A → 2A be a function such that T (S) = {a | a is defended by S}. • S is a complete extension iff S = T (S). • S is a preferred extension iff S is a maximal (w.r.t set ⊆) complete extension. • S is a grounded extension iff it is the smallest (w.r.t set ⊆) complete extension. Let E1, ... , Ex denote the different extensions under a given semantics. Note that there is only one grounded extension. It contains all the arguments that are not defeated, and those arguments that are defended directly or indirectly by nondefeated arguments. Theorem 1. Let A, Def the argumentation system defined as shown above. 1. It may have x ≥ 1 preferred extensions. 
Theorem 1. Let ⟨A, Def⟩ be the argumentation system defined as shown above.
1. It may have x ≥ 1 preferred extensions.
2. The grounded extension is S = ⋃_{i≥1} T^i(∅).

Note that when the grounded extension (or the preferred extension) is empty, this means that there is no acceptable offer for the negotiating agent.

Example 2. In example 1, there is one preferred extension, E = {a1, a2, a4}.

Now that the acceptability semantics is defined, we are ready to define the status of any argument.

Definition 6 (Argument status). Let ⟨A, Def⟩ be an argumentation system, and E1, ..., Ex its extensions under a given semantics. Let a ∈ A.
1. a is accepted iff a ∈ Ei, ∀ Ei with i = 1, ..., x.
2. a is rejected iff there is no Ei such that a ∈ Ei.
3. a is undecided iff a is neither accepted nor rejected. This means that a is in some extensions and not in others.

Note that A = {a | a is accepted} ∪ {a | a is rejected} ∪ {a | a is undecided}.

Example 3. In example 1, the arguments a1, a2 and a4 are accepted, whereas the argument a3 is rejected.

As said before, agents use argumentation systems for reasoning about offers. In a negotiation dialogue, agents propose and accept offers that are acceptable for them, and reject bad ones. In what follows, we will define the status of an offer. According to the status of arguments, one can define four statuses of the offers as follows:

Definition 7 (Offers status). Let o ∈ O.
• The offer o is acceptable for the negotiating agent iff ∃ a ∈ F(o) such that a is accepted. Oa = {oi ∈ O such that oi is acceptable}.
• The offer o is rejected for the negotiating agent iff ∀ a ∈ F(o), a is rejected. Or = {oi ∈ O such that oi is rejected}.
• The offer o is negotiable iff ∀ a ∈ F(o), a is undecided. On = {oi ∈ O such that oi is negotiable}.
• The offer o is non-supported iff it is neither acceptable, nor rejected, nor negotiable. Ons = {oi ∈ O such that oi is non-supported}.

Example 4. In example 1, the two offers o1 and o2 are acceptable since they are supported by accepted arguments, whereas the offer o3 is non-supported since it has no argument in its favor.

From the above definitions, the following results hold:

Property 2. Let o ∈ O.
• O = Oa ∪ Or ∪ On ∪ Ons.
• The set Oa may contain more than one offer.

From the above partition of the set O of offers, a preference relation between offers is defined. Let Ox and Oy be two subsets of O. Ox ⊳ Oy means that any offer in Ox is preferred to any offer in the set Oy. We can also write, for two offers oi, oj, oi ⊳ oj iff oi ∈ Ox, oj ∈ Oy and Ox ⊳ Oy.

Definition 8 (Preference between offers). Let O be a set of offers, and Oa, Or, On, Ons its partition. Oa ⊳ On ⊳ Ons ⊳ Or.

Example 5. In example 1, we have o1 ⊳ o3 and o2 ⊳ o3. However, o1 and o2 are indifferent.
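The partition of Definition 7 and the induced preference of Definition 8 are easy to mechanize once the extensions are known. The following Python sketch (illustrative names; it assumes the extensions under the chosen semantics have already been computed, e.g. with the grounded-extension sketch above) classifies arguments per Definition 6 and offers per Definition 7; on Example 1 it yields Oa = {o1, o2} and Ons = {o3}, matching Example 4.

def argument_status(a, extensions):
    """Definition 6 (sketch): accepted, rejected or undecided w.r.t. E1, ..., Ex."""
    membership = [a in E for E in extensions]
    if membership and all(membership):
        return "accepted"
    if not any(membership):
        return "rejected"
    return "undecided"

def partition_offers(O, F, extensions):
    """Definition 7 (sketch): split O into acceptable, rejected, negotiable, non-supported."""
    Oa, Or, On, Ons = set(), set(), set(), set()
    for o in O:
        statuses = {argument_status(a, extensions) for a in F.get(o, set())}
        if "accepted" in statuses:
            Oa.add(o)
        elif statuses == {"rejected"}:
            Or.add(o)
        elif statuses == {"undecided"}:
            On.add(o)
        else:                       # no supporting argument (or mixed statuses)
            Ons.add(o)
    return Oa, Or, On, Ons

# Example 1, whose only preferred extension is {a1, a2, a4}:
F = {"o1": {"a1"}, "o2": {"a2"}, "o3": set()}
print(partition_offers({"o1", "o2", "o3"}, F, [{"a1", "a2", "a4"}]))
# ({'o1', 'o2'}, set(), set(), {'o3'})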
4. THE STRUCTURE OF NEGOTIATION THEORIES
In this section, we study the properties of the system developed above. We first show that in the particular case where A = AO (i.e. all of the agent's arguments refer to offers), the corresponding argumentation system will return at least one non-empty preferred extension.

Theorem 2. Let ⟨A, Def⟩ be an argumentation system such that A = AO. Then the system returns at least one extension E such that |E| ≥ 1.

We now present some results that demonstrate the importance of indifference in negotiating agents, and more specifically its relation to acceptable outcomes. We first show that the set Oa may contain several offers when their corresponding accepted arguments are indifferent w.r.t. the preference relation ⪰.

Theorem 3. Let o1, o2 ∈ O. o1, o2 ∈ Oa iff ∃ a1 ∈ F(o1), ∃ a2 ∈ F(o2), such that a1 and a2 are accepted and are indifferent w.r.t. ⪰ (i.e. a1 ⪰ a2 and a2 ⪰ a1).

We now study acyclic preference relations, which are defined formally as follows.

Definition 9 (Acyclic relation). A relation R on a set A is acyclic if there is no sequence a1, a2, ..., an ∈ A, with n > 1, such that (ai, ai+1) ∈ R and (an, a1) ∈ R, with 1 ≤ i < n.

Note that acyclicity prohibits pairs of arguments a, b such that a ⪰ b and b ⪰ a, i.e., an acyclic preference relation disallows indifference.

Theorem 4. Let A be a set of arguments, R the attacking relation on A defined as R ⊆ A × A, and ⪰ an acyclic relation on A. Then for any pair of arguments a, b ∈ A such that (a, b) ∈ R, either (a, b) ∈ Def or (b, a) ∈ Def (or both).

The previous result is used in the proof of the following theorem, which states that acyclic preference relations sanction extensions that support exactly one offer.

Theorem 5. Let A be a set of arguments, and ⪰ an acyclic relation on A. If E is an extension of ⟨A, Def⟩, then |E ∩ AO| = 1.

An immediate consequence of the above is the following.

Property 3. Let A be a set of arguments such that A = AO. If the relation ⪰ on A is acyclic, then for each extension Ei of ⟨A, Def⟩, |Ei| = 1.

Another direct consequence of the above theorem is that in acyclic preference relations, arguments that support offers can participate in only one preferred extension.

Theorem 6. Let A be a set of arguments, and ⪰ an acyclic relation on A. Then the preferred extensions of ⟨A, Def⟩ are pairwise disjoint w.r.t. arguments of AO.

Using the above results we can prove the main theorem of this section, which states that negotiating agents with acyclic preference relations do not have acceptable offers.

Theorem 7. Let ⟨A, F, R, ⪰, Def⟩ be a negotiating agent such that A = AO and ⪰ is an acyclic relation. Then the set of accepted arguments w.r.t. ⟨A, Def⟩ is empty. Consequently, the set of acceptable offers Oa is empty as well.

5. ARGUMENTATION-BASED NEGOTIATION
In this section, we define formally a protocol that generates argumentation-based negotiation dialogues between two negotiating agents P and C. The two agents negotiate about an object whose possible values belong to a set O. This set O is supposed to be known and the same for both agents. For simplicity reasons, we assume that this set does not change during the dialogue. The agents are equipped with theories denoted respectively ⟨A^P, F^P, ⪰^P, R^P, Def^P⟩ and ⟨A^C, F^C, ⪰^C, R^C, Def^C⟩. Note that the two theories may be different in the sense that the agents may have different sets of arguments and different preference relations. Worse yet, they may have different arguments in favor of the same offers. Moreover, these theories may evolve during the dialogue.

5.1 Evolution of the theories
Before defining formally the evolution of an agent's theory, let us first introduce the notion of dialogue moves, or moves for short.

Definition 10 (Move). A move is a tuple mi = ⟨pi, ai, oi, ti⟩ such that:
• pi ∈ {P, C}
• ai ∈ Args(L) ∪ {θ} (in what follows, θ denotes the fact that no argument, or no offer, is given)
• oi ∈ O ∪ {θ}
• ti ∈ N* is the target of the move, such that ti < i

The function Player (resp. Argument, Offer, Target) returns the player of the move (i.e. pi) (resp. the argument of the move, i.e. ai, the offer oi, and the target of the move, ti). Let M denote the set of all the moves that can be built from ⟨{P, C}, Args(L), O⟩. Note that the set M is finite since Args(L) and O are assumed to be finite.
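A move is thus a plain record. The following Python sketch (an illustrative encoding of ours, not prescribed by the framework) shows one way to represent moves, with None standing for θ; the last line encodes the opening move ⟨P, θ, o1, 0⟩ used in the examples of Section 6.

from dataclasses import dataclass
from typing import Optional

THETA = None  # stands for θ: no argument / no offer is given

@dataclass(frozen=True)
class Move:
    """A dialogue move m_i = (p_i, a_i, o_i, t_i), cf. Definition 10."""
    player: str               # "P" or "C"
    argument: Optional[str]   # an argument identifier, or THETA
    offer: Optional[str]      # an offer identifier, or THETA
    target: int               # index of the targeted move; 0 for the opening move

m1 = Move(player="P", argument=THETA, offer="o1", target=0)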
Let us now see how an agent's theory evolves and why. The idea is that if an agent receives an argument from another agent, it will add the new argument to its theory. Moreover, since an argument may bring new information to the agent, new arguments can emerge. Let us take the following example:

Example 6. Suppose that an agent P has the following propositional knowledge base: ΣP = {x, y → z}. From this base one cannot deduce z. Let's assume that this agent receives the following argument {a, a → y} that justifies y. It is clear that now P can build an argument, say {a, a → y, y → z}, in favor of z.

In a similar way, if a received argument is in conflict with the arguments of the agent i, then those conflicts are also added to its relation R^i. Note that new conflicts may arise between the original arguments of the agent and the ones that emerge after adding the received arguments to its theory. Those new conflicts should also be considered. As a direct consequence of the evolution of the sets A^i and R^i, the defeat relation Def^i is also updated. The initial theory of an agent i (i.e. its theory before the dialogue starts) is denoted by ⟨A^i_0, F^i_0, ⪰^i_0, R^i_0, Def^i_0⟩, with i ∈ {P, C}. Besides, in this paper, we suppose that the preference relation ⪰^i of an agent does not change during the dialogue.

Definition 11 (Theory evolution). Let m1, ..., mt, ..., mj be a sequence of moves. The theory of an agent i at a step t > 0 is ⟨A^i_t, F^i_t, ⪰^i_t, R^i_t, Def^i_t⟩ such that:
• A^i_t = A^i_0 ∪ {ai | i = 1, ..., t, ai = Argument(mi)} ∪ A′ with A′ ⊆ Args(L)
• F^i_t : O → 2^{A^i_t}
• ⪰^i_t = ⪰^i_0
• R^i_t = R^i_0 ∪ {(ai, aj) | ai = Argument(mi), aj = Argument(mj), i, j ≤ t, and ai RL aj} ∪ R′ with R′ ⊆ RL
• Def^i_t ⊆ A^i_t × A^i_t

The above definition captures the monotonic aspect of an argument. Indeed, an argument cannot be removed. However, its status may change. An argument that is accepted at step t of the dialogue by an agent may become rejected at a later step t + i. Consequently, the status of offers also changes. Thus, the sets Oa, Or, On, and Ons may change from one step of the dialogue to another. That means, for example, that some offers could move from the set Oa to the set Or and vice-versa. Note that in the definition of R^i_t, the relation RL is used to denote a conflict between exchanged arguments. The reason is that such a conflict may not be in the set R^i of the agent i. Thus, in order to recognize such conflicts, we have supposed that the set RL is known to the agents. This allows us to capture the situation where an agent is able to prove an argument that it was unable to prove before, by incorporating in its beliefs some information conveyed through the exchange of arguments with another agent. This argument, unknown at the beginning of the dialogue, could give this agent the possibility to defeat an argument that it could not defeat using its initial arguments. This could even lead to a change in the status of these initial arguments, and that change would in turn change the status of the associated offers. In what follows, O^i_{t,x} denotes the set of offers of type x, where x ∈ {a, n, r, ns}, of the agent i at step t of the dialogue. In some places, we use for short the notation O^i_t to denote the partition of the set O at step t for agent i. Note that, in general, O^i_{t,x} ⊆ O^i_{t+1,x} does not hold.
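The update behind Definition 11 can be pictured with a small Python sketch. It is only indicative: the dictionary layout and function name are ours, emerging arguments (the set A′) are not modelled, and the preference relation is left untouched, as assumed above.

def receive_argument(theory, new_arg, RL):
    """Add a received argument, import the conflicts it induces from the
    language-level relation RL, and recompute the defeat relation."""
    theory["A"].add(new_arg)
    theory["R"] |= {(x, y) for (x, y) in RL
                    if x in theory["A"] and y in theory["A"]}
    theory["Def"] = {(a, b) for (a, b) in theory["R"]
                     if (b, a) not in theory["pref"]}
    return theory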
5.2 The notion of agreement
As said in the introduction, negotiation is a process aiming at finding an agreement about some matters. By agreement, one means a solution that satisfies to the largest possible extent the preferences of both agents. In case there is no such solution, we say that the negotiation fails. In what follows, we will discuss the different kinds of solutions that may be reached in a negotiation. The first one is the optimal solution. An optimal solution is the best offer for both agents. Formally:

Definition 12 (Optimal solution). Let O be a set of offers, and o ∈ O. The offer o is an optimal solution at a step t ≥ 0 iff o ∈ O^P_{t,a} ∩ O^C_{t,a}.

Such a solution does not always exist since agents may have conflicting preferences. Thus, agents make concessions by proposing/accepting less preferred offers.

Definition 13 (Concession). Let o ∈ O be an offer. The offer o is a concession for an agent i iff o ∈ O^i_x such that ∃ O^i_y ≠ ∅ with O^i_y ⊳ O^i_x.

During a negotiation dialogue, agents exchange first their most preferred offers, and if these are rejected, they make concessions. In this case, we say that their best offers are no longer defendable. In an argumentation setting, this means that the agent has already presented all its arguments supporting its best offers, and it has no counter-argument against the ones presented by the other agent. Formally:

Definition 14 (Defendable offer). Let ⟨A^i_t, F^i_t, ⪰^i_t, R^i_t, Def^i_t⟩ be the theory of agent i at a step t > 0 of the dialogue. Let o ∈ O such that ∃ j ≤ t with Player(mj) = i and Offer(mj) = o. The offer o is defendable by the agent i iff:
• ∃ a ∈ F^i_t(o), and ∄ k ≤ t s.t. Argument(mk) = a, or
• ∃ a ∈ A^i_t \ F^i_t(o) s.t. a Def^i_t b with
  - Argument(mk) = b, k ≤ t, and Player(mk) ≠ i
  - ∄ l ≤ t s.t. Argument(ml) = a

The offer o is said to be non-defendable otherwise, and ND^i_t is the set of non-defendable offers of agent i at step t.

5.3 Negotiation dialogue
Now that we have shown how the theories of the agents evolve during a dialogue, we are ready to define formally an argumentation-based negotiation dialogue. For that purpose, we first need to define the notion of a legal continuation.

Definition 15 (Legal move). A move m is a legal continuation of a sequence of moves m1, ..., ml iff ∄ j, k < l such that:
• Offer(mj) = Offer(mk), and
• Player(mj) ≠ Player(mk)

The idea here is that if the two agents present the same offer, then the dialogue should terminate, and there is no longer a possible continuation of the dialogue.
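Under a simplified reading of Definition 15, checking whether the dialogue can still be continued amounts to checking whether some offer has already been presented by both players. The Python sketch below (illustrative; moves are encoded as (player, argument, offer, target) tuples with None for θ) makes this test explicit.

def is_legal_continuation(history):
    """Return False as soon as the same offer has been put forward by both
    players in `history`; in that case the dialogue must terminate."""
    for j, (pj, _, oj, _) in enumerate(history):
        for (pk, _, ok, _) in history[j + 1:]:
            if oj is not None and oj == ok and pj != pk:
                return False
    return True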
Definition 16 (Argumentation-based negotiation). An argumentation-based negotiation dialogue d between two agents P and C is a non-empty sequence of moves m1, ..., ml such that:
• pi = P iff i is odd, and pi = C iff i is even
• Player(m1) = P, Argument(m1) = θ, Offer(m1) ≠ θ, and Target(m1) = 0 (the first move has no target)
• ∀ mi, if Offer(mi) ≠ θ, then Offer(mi) is at least as preferred (w.r.t. the relation of Definition 8) as any oj ∈ O \ (O^{Player(mi)}_{i,r} ∪ ND^{Player(mi)}_i)
• ∀ i = 1, ..., l, mi is a legal continuation of m1, ..., mi−1
• Target(mi) = mj such that j < i and Player(mi) ≠ Player(mj)
• If Argument(mi) ≠ θ, then:
  - if Offer(mi) ≠ θ then Argument(mi) ∈ F(Offer(mi))
  - if Offer(mi) = θ then Argument(mi) Def^{Player(mi)}_i Argument(Target(mi))
• ∄ i, j ≤ l with i ≠ j such that mi = mj
• ∄ m ∈ M such that m is a legal continuation of m1, ..., ml

Let D be the set of all possible dialogues. The first condition says that the two agents take turns. The second condition says that agent P starts the negotiation dialogue by presenting an offer. Note that, in the first turn, we suppose that the agent does not present an argument. This assumption is made for strategic reasons. Indeed, arguments are exchanged as soon as a conflict appears. The third condition ensures that agents exchange their best offers, but never the rejected ones. This condition also takes into account the concessions that an agent will have to make if it was established that a concession is the only option for it at the current state of the dialogue. Of course, as we have shown in a previous section, an agent may have several good or acceptable offers. In this case, the agent chooses one of them randomly. The fourth condition ensures that the moves are legal. This condition allows the dialogue to terminate as soon as an offer has been presented by both agents. The fifth condition allows agents to backtrack. The sixth condition says that an agent may send arguments in favor of offers, and in this case the offer should be stated in the same move. An agent can also send arguments in order to defeat arguments of the other agent. The next condition prevents repeating the same move. This is useful for avoiding loops. The last condition ensures that all the possible legal moves have been presented. The outcome of a negotiation dialogue is computed as follows:

Definition 17 (Dialogue outcome). Let d = m1, ..., ml be an argumentation-based negotiation dialogue. The outcome of this dialogue, denoted Outcome, is Outcome(d) = Offer(ml) iff ∃ j < l s.t. Offer(ml) = Offer(mj) and Player(ml) ≠ Player(mj). Otherwise, Outcome(d) = θ.

Note that when Outcome(d) = θ, the negotiation fails, and no agreement is reached by the two agents. However, if Outcome(d) ≠ θ, the negotiation succeeds, and a solution that is either optimal or a compromise is found.

Theorem 8. ∀ di ∈ D, the argumentation-based negotiation dialogue di terminates.

The above result is of great importance, since it shows that the proposed protocol avoids loops, and dialogues terminate. Another important result shows that the proposed protocol ensures that an optimal solution is reached if one exists. Formally:

Theorem 9 (Completeness). Let d = m1, ..., ml be an argumentation-based negotiation dialogue. If ∃ t ≤ l such that O^P_{t,a} ∩ O^C_{t,a} ≠ ∅, then Outcome(d) ∈ O^P_{t,a} ∩ O^C_{t,a}.

We show also that the proposed dialogue protocol is sound in the sense that, if a dialogue returns a solution, then that solution is for sure a compromise. In other words, that solution is a common agreement at a given step of the dialogue. We show also that if the negotiation fails, then there is no possible solution.

Theorem 10 (Soundness). Let d = m1, ..., ml be an argumentation-based negotiation dialogue.
1. If Outcome(d) = o (o ≠ θ), then ∃ t ≤ l such that o ∈ O^P_{t,x} ∩ O^C_{t,y}, with x, y ∈ {a, n, ns}.
2. If Outcome(d) = θ, then ∀ t ≤ l, O^P_{t,x} ∩ O^C_{t,y} = ∅, ∀ x, y ∈ {a, n, ns}.

A direct consequence of the above theorem is the following:

Property 4. Let d = m1, ..., ml be an argumentation-based negotiation dialogue. If Outcome(d) = θ, then ∀ t ≤ l,
• O^P_{t,r} = O^C_{t,a} ∪ O^C_{t,n} ∪ O^C_{t,ns}, and
• O^C_{t,r} = O^P_{t,a} ∪ O^P_{t,n} ∪ O^P_{t,ns}.
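The outcome of Definition 17 is straightforward to compute from the move sequence. Below is a Python sketch (illustrative encoding of ours: moves as (player, argument, offer, target) tuples, None for θ); applied to the first dialogue of Example 7 in the next section it returns o1.

def outcome(dialogue):
    """Definition 17 (sketch): the last offer, if it was already proposed
    earlier by the other player; otherwise None (standing for θ)."""
    if not dialogue:
        return None
    pl, _, ol, _ = dialogue[-1]
    if ol is None:
        return None
    for pj, _, oj, _ in dialogue[:-1]:
        if oj == ol and pj != pl:
            return ol
    return None

print(outcome([("P", None, "o1", 0), ("C", None, "o1", 1)]))  # o1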
6. ILLUSTRATIVE EXAMPLES
In this section we present some examples in order to illustrate our general framework.

Example 7 (No argumentation). Let O = {o1, o2} be the set of all possible offers. Let P and C be two agents, equipped with the same theory ⟨A, F, ⪰, R, Def⟩ such that A = ∅, F(o1) = F(o2) = ∅, ⪰ = ∅, R = ∅, Def = ∅. In this case, it is clear that the two offers o1 and o2 are non-supported. The proposed protocol (see Definition 16) will generate one of the following dialogues:

P: m1 = ⟨P, θ, o1, 0⟩
C: m2 = ⟨C, θ, o1, 1⟩
This dialogue ends with o1 as a compromise. Note that this solution is not considered as optimal since it is not an acceptable offer for the agents.

P: m1 = ⟨P, θ, o1, 0⟩
C: m2 = ⟨C, θ, o2, 1⟩
P: m3 = ⟨P, θ, o2, 2⟩
This dialogue ends with o2 as a compromise.

P: m1 = ⟨P, θ, o2, 0⟩
C: m2 = ⟨C, θ, o2, 1⟩
This dialogue also ends with o2 as a compromise.

The last possible dialogue is the following, which ends with o1 as a compromise.
P: m1 = ⟨P, θ, o2, 0⟩
C: m2 = ⟨C, θ, o1, 1⟩
P: m3 = ⟨P, θ, o1, 2⟩

Note that in the above example, since there is no exchange of arguments, the theories of both agents do not change. Let us now consider the following example.

Example 8 (Static theories). Let O = {o1, o2} be the set of all possible offers. The theory of agent P is ⟨A^P, F^P, ⪰^P, R^P, Def^P⟩ such that: A^P = {a1, a2}, F^P(o1) = {a1}, F^P(o2) = {a2}, ⪰^P = {(a1, a2)}, R^P = {(a1, a2), (a2, a1)}, Def^P = {(a1, a2)}. The argumentation system ⟨A^P, Def^P⟩ of this agent will return a1 as an accepted argument, and a2 as a rejected one. Consequently, the offer o1 is acceptable and o2 is rejected. The theory of agent C is ⟨A^C, F^C, ⪰^C, R^C, Def^C⟩ such that: A^C = {a1, a2}, F^C(o1) = {a1}, F^C(o2) = {a2}, ⪰^C = {(a2, a1)}, R^C = {(a1, a2), (a2, a1)}, Def^C = {(a2, a1)}. The argumentation system ⟨A^C, Def^C⟩ of this agent will return a2 as an accepted argument, and a1 as a rejected one. Consequently, the offer o2 is acceptable and o1 is rejected. The only possible dialogues that may take place between the two agents are the following:

P: m1 = ⟨P, θ, o1, 0⟩
C: m2 = ⟨C, θ, o2, 1⟩
P: m3 = ⟨P, a1, o1, 2⟩
C: m4 = ⟨C, a2, o2, 3⟩

The second possible dialogue is the following:
P: m1 = ⟨P, θ, o1, 0⟩
C: m2 = ⟨C, a2, o2, 1⟩
P: m3 = ⟨P, a1, o1, 2⟩
C: m4 = ⟨C, θ, o2, 3⟩

Both dialogues end with failure. Note that in both dialogues, the theories of both agents do not change. The reason is that the exchanged arguments are already known to both agents. The negotiation fails because the agents have conflicting preferences.
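The statuses claimed in Example 8 can be checked by hand, following Definitions 2 and 5; the short Python fragment below (variable names are ours) does it for agent P, and the computation for C is symmetric.

# Agent P of Example 8
R_P    = {("a1", "a2"), ("a2", "a1")}
pref_P = {("a1", "a2")}                                  # a1 strictly preferred to a2

Def_P = {(a, b) for (a, b) in R_P if (b, a) not in pref_P}
print(Def_P)                                             # {('a1', 'a2')}
# a2's only defeater a1 is itself undefeated, so the grounded extension is {a1}:
# a1 is accepted and a2 rejected, hence o1 is acceptable and o2 rejected for P.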
Let us now consider an example in which argumentation will allow agents to reach an agreement.

Example 9 (Dynamic theories). Let O = {o1, o2} be the set of all possible offers. The theory of agent P is ⟨A^P, F^P, ⪰^P, R^P, Def^P⟩ such that: A^P = {a1, a2}, F^P(o1) = {a1}, F^P(o2) = {a2}, ⪰^P = {(a1, a2), (a3, a1)}, R^P = {(a1, a2), (a2, a1)}, Def^P = {(a1, a2)}. The argumentation system ⟨A^P, Def^P⟩ of this agent will return a1 as an accepted argument, and a2 as a rejected one. Consequently, the offer o1 is acceptable and o2 is rejected. The theory of agent C is ⟨A^C, F^C, ⪰^C, R^C, Def^C⟩ such that: A^C = {a1, a2, a3}, F^C(o1) = {a1}, F^C(o2) = {a2}, ⪰^C = {(a1, a2), (a3, a1)}, R^C = {(a1, a2), (a2, a1), (a3, a1)}, Def^C = {(a1, a2), (a3, a1)}. The argumentation system ⟨A^C, Def^C⟩ of this agent will return a3 and a2 as accepted arguments, and a1 as a rejected one. Consequently, the offer o2 is acceptable and o1 is rejected. The following dialogue may take place between the two agents:

P: m1 = ⟨P, θ, o1, 0⟩
C: m2 = ⟨C, θ, o2, 1⟩
P: m3 = ⟨P, a1, o1, 2⟩
C: m4 = ⟨C, a3, θ, 3⟩
P: m5 = ⟨P, θ, o2, 4⟩

At step 4 of the dialogue, the agent P receives the argument a3 from C. Thus, its theory evolves as follows: A^P = {a1, a2, a3}, R^P = {(a1, a2), (a2, a1), (a3, a1)}, Def^P = {(a1, a2), (a3, a1)}. At this step, the argument a1, which was accepted, becomes rejected, and the argument a2, which was rejected at the beginning of the dialogue, becomes accepted. Thus, the offer o2 becomes acceptable for the agent, whereas o1 becomes rejected. At this step, the offer o2 is acceptable for both agents, thus it is an optimal solution. The dialogue ends by returning this offer as the outcome.

7. RELATED WORK
Argumentation has been integrated in negotiation dialogues since the early nineties, starting with Sycara [12]. In that work, the author emphasized the advantages of using argumentation in negotiation dialogues, and a specific framework was introduced. In [8], the different types of arguments that are used in a negotiation dialogue, such as threats and rewards, have been discussed. Moreover, a particular framework for negotiation has been proposed. In [9, 13], various other frameworks have been proposed. Even if all these frameworks are based on different logics, and use different definitions of arguments, they all have at their heart an exchange of offers and arguments. However, none of those proposals explains when arguments can be used within a negotiation, and how they should be dealt with by the agent that receives them. Thus the protocol for handling arguments was missing. Another limitation of the above frameworks is the fact that the argumentation frameworks they use are quite poor, since they rely on a very simple acceptability semantics. In [2] a negotiation framework that fills this gap has been suggested. A protocol that handles the arguments was proposed. However, the notion of concession is not modeled in that framework, and it is not clear what the status of the outcome of the dialogue is. Moreover, it is not clear how an agent chooses the offer to propose at a given step of the dialogue. In [1, 7], the authors have focused mainly on this decision problem. They have proposed an argumentation-based decision framework that is used by agents in order to choose the offer to propose or to accept during the dialogue. In that work, agents are supposed to have a beliefs base and a goals base. Our framework is more general since it does not impose any specific structure on the arguments, the offers, or the beliefs. The negotiation protocol is general as well. Thus this framework can be instantiated in different ways, creating in this manner different specific argumentation-based negotiation frameworks, all of them respecting the same properties. Our framework is also a unified one because frameworks like the ones presented above can be represented within it. For example, the decision-making mechanism proposed in [7] for the evaluation of arguments and therefore of offers, which is based on a priority relation between mutually attacked arguments, can be captured by the defeat relation proposed in our framework. This relation takes simultaneously into account the attacking and preference relations that may exist between two arguments.

8. CONCLUSIONS AND FUTURE WORK
In this paper we have presented a unified and general framework for argumentation-based negotiation. Like any other argumentation-based negotiation framework, as evoked in e.g. [10], our framework has all the advantages that argumentation-based negotiation approaches present when compared to the negotiation approaches based either on game-theoretic models (see e.g. [11]) or heuristics ([6]).
This work is a first attempt to formally define the role of argumentation in the negotiation process. More precisely, for the first time, it formally establishes the link that exists between the status of the arguments and the offers they support, it defines the notion of concession and shows how it influences the evolution of the negotiation, it determines how the theories of agents evolve during the dialogue, and it performs an analysis of the negotiation outcomes. It is also the first time that a study of the formal properties of the negotiation theories of the agents, as well as of an argumentative negotiation dialogue, is presented. Our future work concerns several points. A first point is to relax the assumption that the set of possible offers is the same for both agents. Indeed, it is more natural to assume that agents may have different sets of offers. During a negotiation dialogue, these sets will evolve. Arguments in favor of the new offers may be built from the agent theory. Thus, the set of offers will be part of the agent theory. Another possible extension of this work would be to allow agents to handle both arguments PRO and arguments CONS offers. This is more akin to the way humans make decisions. Considering both types of arguments will refine the evaluation of the offers' status. In the proposed model, a preference relation between offers is defined on the basis of the partition of the set of offers. This preference relation can be refined. For instance, among the acceptable offers, one may prefer the offer that is supported by the strongest argument. In [4], different criteria have been proposed for comparing decisions. Our framework can thus be extended by integrating those criteria. Another interesting point to investigate is that of considering negotiation dialogues between two agents with different profiles. By profile, we mean the criterion used by an agent to compare its offers.

9. REFERENCES
[1] L. Amgoud, S. Belabbes, and H. Prade. Towards a formal framework for the search of a consensus between autonomous agents. In Proceedings of the 4th International Joint Conference on Autonomous Agents and Multi-Agent Systems, pages 537-543, 2005.
[2] L. Amgoud, S. Parsons, and N. Maudet. Arguments, dialogue, and negotiation. In Proceedings of the 14th European Conference on Artificial Intelligence, 2000.
[3] L. Amgoud and H. Prade. Reaching agreement through argumentation: A possibilistic approach. In 9th International Conference on the Principles of Knowledge Representation and Reasoning, KR'2004, 2004.
[4] L. Amgoud and H. Prade. Explaining qualitative decision under uncertainty by argumentation. In 21st National Conference on Artificial Intelligence, AAAI'06, pages 16-20, 2006.
[5] P. M. Dung. On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial Intelligence, 77:321-357, 1995.
[6] N. R. Jennings, P. Faratin, A. R. Lomuscio, S. Parsons, and C. Sierra. Automated negotiation: Prospects, methods and challenges. International Journal of Group Decision and Negotiation, 2001.
[7] A. Kakas and P. Moraitis. Adaptive agent negotiation via argumentation. In Proceedings of the 5th International Joint Conference on Autonomous Agents and Multi-Agent Systems, pages 384-391, 2006.
[8] S. Kraus, K. Sycara, and A. Evenchik. Reaching agreements through argumentation: a logical model and implementation. Artificial Intelligence, 104:1-69, 1998.
[9] S. Parsons and N. R. Jennings. Negotiation through argumentation - a preliminary report. In Proceedings of the 2nd International Conference on Multi Agent Systems, pages 267-274, 1996.
[10] I. Rahwan, S. D. Ramchurn, N. R. Jennings, P. McBurney, S. Parsons, and E. Sonenberg. Argumentation-based negotiation. Knowledge Engineering Review, 18(4):343-375, 2003.
[11] J. Rosenschein and G. Zlotkin. Rules of Encounter: Designing Conventions for Automated Negotiation Among Computers. MIT Press, Cambridge, Massachusetts, 1994.
[12] K. Sycara. Persuasive argumentation in negotiation. Theory and Decision, 28:203-242, 1990.
[13] F. Tohmé. Negotiation and defeasible reasons for choice. In Proceedings of the Stanford Spring Symposium on Qualitative Preferences in Deliberation and Practical Reasoning, pages 95-102, 1997.
A Unified and General Framework for Argumentation-based Negotiation ABSTRACT This paper proposes a unified and general framework for argumentation-based negotiation, in which the role of argumentation is formally analyzed. The framework makes it possible to study the outcomes of an argumentation-based negotiation. It shows what an agreement is, how it is related to the theories of the agents, when it is possible, and how this can be attained by the negotiating agents in this case. It defines also the notion of concession, and shows in which situation an agent will make one, as well as how it influences the evolution of the dialogue. 1. INTRODUCTION Roughly speaking, negotiation is a process aiming at finding some compromise or consensus between two or several agents about some matters of collective agreement, such as pricing products, allocating resources, or choosing candidates. Negotiation models have been proposed for the design of systems able to bargain in an optimal way with other agents for example, buying or selling products in ecommerce. Different approaches to automated negotiation have been investigated, including game-theoretic approaches (which usu ally assume complete information and unlimited computation capabilities) [11], heuristic-based approaches which try to cope with these limitations [6], and argumentation-based approaches [2, 3, 7, 8, 9, 12, 13] which emphasize the importance of exchanging information and explanations between negotiating agents in order to mutually influence their behaviors (e.g. an agent may concede a goal having a small priority), and consequently the outcome of the dialogue. Indeed, the two first types of settings do not allow for the addition of information or for exchanging opinions about offers. Integrating argumentation theory in negotiation provides a good means for supplying additional information and also helps agents to convince each other by adequate arguments during a negotiation dialogue. Indeed, an offer supported by a good argument has a better chance to be accepted by an agent, and can also make him reveal his goals or give up some of them. The basic idea behind an argumentationbased approach is that by exchanging arguments, the theories of the agents (i.e. their mental states) may evolve, and consequently, the status of offers may change. For instance, an agent may reject an offer because it is not acceptable for it. However, the agent may change its mind if it receives a strong argument in favor of this offer. Several proposals have been made in the literature for modeling such an approach. However, the work is still preliminary. Some researchers have mainly focused on relating argumentation with protocols. They have shown how and when arguments in favor of offers can be computed and exchanged. Others have emphasized on the decision making problem. In [3, 7], the authors argued that selecting an offer to propose at a given step of the dialogue is a decision making problem. They have thus proposed an argumentationbased decision model, and have shown how such a model can be related to the dialogue protocol. In most existing works, there is no deep formal analysis of the role of argumentation in negotiation dialogues. It is not clear how argumentation can influence the outcome of the dialogue. Moreover, basic concepts in negotiation such as agreement (i.e. optimal solutions, or compromise) and concession are neither defined nor studied. 
This paper aims to propose a unified and general framework for argumentation-based negotiation, in which the role of argumentation is formally analyzed, and where the existing systems can be restated. In this framework, a negotiation dialogue takes place between two agents on a set O of offers, whose structure is not known. The goal of a negotiation is to find among elements of O, an offer that satisfies more or less the preferences of both agents. Each agent is supposed to have a theory represented in an abstract way. A theory consists of a set A of arguments whose structure and origin are not known, a function specifying for each possible offer in O, the arguments of A that support it, a non specified conflict relation among the arguments, and finally a preference relation between the arguments. The status of each argument is defined using Dung's acceptability semantics. Consequently, the set of offers is partitioned into four subsets: acceptable, rejected, negotiable and non-supported offers. We show how an agent's theory may evolve during a negotiation dialogue. We define formally the notions of concession, compromise, and optimal solution. Then, we propose a protocol that allows agents i) to exchange offers and arguments, and ii) to make concessions when necessary. We show that dialogues generated under such a protocol terminate, and even reach optimal solutions when they exist. This paper is organized as follows: Section 2 introduces the logical language that is used in the rest of the paper. Section 3 defines the agents as well as their theories. In section 4, we study the properties of these agents' theories. Section 5 defines formally an argumentation-based negotiation, shows how the theories of agents may evolve during a dialogue, and how this evolution may influence the outcome of the dialogue. Two kinds of outcomes: optimal solution and compromise are defined, and we show when such outcomes are reached. Section 6 illustrates our general framework through some examples. Section 7 compares our formalism with existing ones. Section 8 concludes and presents some perspectives. Due to lack of space, the proofs are not included. These last are in a technical report that we will make available online at some later time. 2. THE LOGICAL LANGUAGE In what follows, L will denote a logical language, and ≡ is an equivalence relation associated with it. From L, a set O = {o1,..., on} of n offers is identified, such that oi, oj ∈ O such that oi ≡ oj. This means that the offers are different. Offers correspond to the different alternatives that can be exchanged during a negotiation dialogue. For instance, if the agents try to decide the place of their next meeting, then the set O will contain different towns. Different arguments can be built from L. The set Args (L) will contain all those arguments. By argument, we mean a reason in believing or of doing something. In [3], it has been argued that the selection of the best offer to propose at a given step of the dialogue is a decision problem. In [4], it has been shown that in an argumentation-based approach for decision making, two kinds of arguments are distinguished: arguments supporting choices (or decisions), and arguments supporting beliefs. Moreover, it has been acknowledged that the two categories of arguments are formally defined in different ways, and they play different roles. 
Indeed, an argument in favor of a decision, built both on an agent's beliefs and goals, tries to justify the choice; whereas an argument in favor of a belief, built only from beliefs, tries to destroy the decision arguments, in particular the beliefs part of those decision arguments. Consequently, in a negotiation dialogue, those two kinds of arguments are generally exchanged between agents. In what follows, the set Args (L) is then divided into two subsets: a subset Argso (L) of arguments supporting offers, and a subset Argsb (L) of arguments supporting beliefs. Thus, Args (L) = Argso (L) ∪ Argsb (L). As in [5], in what follows, we consider that the structure of the arguments is not known. Since the knowledge bases from which arguments are built may be inconsistent, the arguments may be conflicting too. In what follows, those conflicts will be captured by the relation Rc, thus Rc ⊆ Args (L) × Args (L). Three assumptions are made on this relation: First the arguments supporting different offers are conflicting. The idea behind this assumption is that since offers are exclusive, an agent has to choose only one at a given step of the dialogue. Note that, the relation Rc is not necessarily symmetric between the arguments of Argsb (L). The second hypothesis says that arguments supporting the same offer are also conflicting. The idea here is to return the strongest argument among these arguments. The third condition does not allow an argument in favor of an offer to attack an argument supporting a belief. This avoids wishful thinking. Formally: • ∀ a, a' ∈ Argso (L), s.t. a = ~ a', a Rc a' • a ∈ Argso (L) and a' ∈ Argsb (L) such that a Rc a ' Note that the relation Rc is not symmetric. This is due to the fact that arguments of Argsb (L) may be conflicting but not necessarily in a symmetric way. In what follows, we assume that the set Args (L) of arguments is finite, and each argument is attacked by a finite number of arguments. 3. NEGOTIATING AGENTS THEORIES AND REASONING MODELS In this section we define formally the negotiating agents, i.e. their theories, as well as the reasoning model used by those agents in a negotiation dialogue. 3.1 Negotiating agents theories Agents involved in a negotiation dialogue, called negotiating agents, are supposed to have theories. In this paper, the theory of an agent will not refer, as usual, to its mental states (i.e. its beliefs, desires and intentions). However, it will be encoded in a more abstract way in terms of the arguments owned by the agent, a conflict relation among those arguments, a preference relation between the arguments, and a function that specifies which arguments support offers of the set O. We assume that an agent is aware of all the arguments of the set Args (L). The agent is even able to express a preference between any pair of arguments. This does not mean that the agent will use all the arguments of Args (L), but it encodes the fact that when an agent receives an argument from another agent, it can interpret it correctly, and it can also compare it with its own arguments. Similarly, each agent is supposed to be aware of the conflicts between arguments. This also allows us to encode the fact that an agent can recognize whether the received argument is in conflict or not with its arguments. However, in its theory, only the conflicts between its own arguments are considered. • A ⊆ Args (L). 968 The Sixth Intl. . Joint Conf. 
on Autonomous Agents and Multi-Agent Systems (AAMAS 07) • F: O--* 2 • 4 s.t ∀ i, j with i = ~ j, F (oi) ∩ F (oj) = 0. Let Ao = ∪ F (oi) with i = 1,..., n. •> - ⊆ Args (L) × Args (L) is a partial preorder denoting a preference relation between arguments. • R ⊆ Rc such that R ⊆ A × A • Def ⊆ A × A such that ∀ a, b ∈ A, a defeats b, denoted a Def b iff:--a R b, and--not (b> - a) The function F returns the arguments supporting offers in O. In [4], it has been argued that any decision may have arguments supporting it, called arguments PRO, and arguments against it, called arguments CONS. Moreover, these two types of arguments are not necessarily conflicting. For simplicity reasons, in this paper we consider only arguments PRO. Moreover, we assume that an argument cannot support two distinct offers. However, it may be the case that an offer is not supported at all by arguments, thus F (oi) may be empty. EXAMPLE 1. Let O = {o1, o2, o3} be a set of offers. The following theory is the theory of agent i: • A = {a1, a2, a3, a4} • F (o1) = {a1}, F (o2) = {a2}, F (o3) = 0. Thus, Ao = {a1, a2} •> - = {(a1, a2), (a2, a1), (a3, a2), (a4, a3)} • R = {a1, a2), (a2, a1), (a3, a2), (a4, a3)} • Def = {(a4, a3), (a3, a2)} From the above definition of agent theory, the following hold: PROPERTY 1. • Def ⊆ R • ∀ a, a' ∈ F (oi), a R a ' 3.2 The reasoning model From the theory of an agent, one can define the argumentation system used by that agent for reasoning about the offers and the arguments, i.e. for computing the status of the different offers and arguments. DEFINITION 3 (ARGUMENTATION SYSTEM). Let (A, F,> -, R, Def) be the theory of an agent. The argumentation system of that agent is the pair (A, Def). In [5], different acceptability semantics have been introduced for computing the status of arguments. These are based on two basic concepts, defence and conflict-free, defined as follows: DEFINITION 4 (DEFENCE/CONFLICT-FREE). Let S ⊆ A. • S defends an argument a iff each argument that defeats a is defeated by some argument in S. • S is conflict-free iff there exist no a, a' in S such that a Def a'. DEFINITION 5 (ACCEPTABILITY SEMANTICS). Let S be a conflict-free set of arguments, and let T: 2 • 4--* 2 • 4 be a function such that T (S) = {a | a is defended by S}. • S is a complete extension iff S = T (S). • S is a preferred extension iff S is a maximal (w.r.t set ⊆) complete extension. • S is a grounded extension iff it is the smallest (w.r.t set ⊆) complete extension. Let E1,..., Ex denote the different extensions under a given semantics. Note that there is only one grounded extension. It contains all the arguments that are not defeated, and those arguments that are defended directly or indirectly by nondefeated arguments. THEOREM 1. Let (A, Def) the argumentation system defined as shown above. 1. It may have x ≥ 1 preferred extensions. 2. The grounded extensions is S = Ui> 1 T (0). Note that when the grounded extension (or the preferred extension) is empty, this means that there is no acceptable offer for the negotiating agent. EXAMPLE 2. In example 1, there is one preferred extension, E = {a1, a2, a4}. Now that the acceptability semantics is defined, we are ready to define the status of any argument. 1. a is accepted iff a ∈ Ei, ∀ Ei with i = 1,..., x. 2. a is rejected iff Ei such that a ∈ Ei. 3. a is undecided iff a is neither accepted nor rejected. This means that a is in some extensions and not in others. Note that A = {a | a is accepted} ∪ {a | a is rejected} ∪ {a | a is undecided}. EXAMPLE 3. 
In example 1, the arguments a1, a2 and a4 are accepted, whereas the argument a3 is rejected. As said before, agents use argumentation systems for reasoning about offers. In a negotiation dialogue, agents propose and accept offers that are acceptable for them, and reject bad ones. In what follows, we will define the status of an offer. According to the status of arguments, one can define four statuses of the offers as follows: • The offer o is acceptable for the negotiating agent iff ∃ a ∈ F (o) such that a is accepted. Oa = {oi ∈ O, such that oi is acceptable}. • The offer o is rejected for the negotiating agent iff ∀ a ∈ F (o), a is rejected. Or = {oi ∈ O, such that oi is rejected}. The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 969 4. THE STRUCTURE OF NEGOTIATION THEORIES In this section, we study the properties of the system developed above. We first show that in the particular case where A = Ao (ie. all of the agent's arguments refer to offers), the corresponding argumentation system will return at least one non-empty preferred extension. THEOREM 2. Let ~ A, Def ~ an argumentation system such that A = Ao. Then the system returns at least one extension E, such that | E | ≥ 1. We now present some results that demonstrate the importance of indifference in negotiating agents, and more specifically its relation to acceptable outcomes. We first show that the set Oa may contain several offers when their corresponding accepted arguments are indifferent w.r.t the preference relation ~. THEOREM 3. Let o1, o2 ∈ O. o1, o2 ∈ Oa iff ∃ a1 ∈ F (o1), ∃ a2 ∈ F (o2), such that a1 and a2 are accepted and are indifferent w.r.t ~ (i.e. a ~ b and b ~ a). • The offer o is negotiable iff ∀ a ∈ F (o), a is undecided. On = {oi ∈ O, such that oi is negotiable}. • The offer o is non-supported iff it is neither acceptable, nor rejected or negotiable. Ons = {oi ∈ O, such that oi is non-supported offers}. EXAMPLE 4. In example 1, the two offers o1 and o2 are acceptable since they are supported by accepted arguments, whereas the offer o3 is non-supported since it has no argument in its favor. From the above definitions, the following results hold: PROPERTY 2. Let o ∈ O. • O = Oa ∪ Or ∪ On ∪ Ons. • The set Oa may contain more than one offer. From the above partition of the set O of offers, a preference relation between offers is defined. Let Ox and Oy be two subsets of O. Ox r> Oy means that any offer in Ox is preferred to any offer in the set Oy. We can write also for two offers oi, oj, oi r> oj iff oi ∈ Ox, oj ∈ Oy and Ox r> Oy. EXAMPLE 5. In example 1, we have o1 r> o3, and o2 r> o3. However, o1 and o2 are indifferent. We now study acyclic preference relations that are defined formally as follows. acyclic relation on A. Then the preferred extensions of ~ A, Def ~ are pairwise disjoint w.r.t arguments of Ao. Using the above results we can prove the main theorem of this section that states that negotiating agents with acyclic preference relations do not have acceptable offers. THEOREM 7. Let ~ A, F, R, ~, Def ~ be a negotiating agent such that A = Ao and ~ is an acyclic relation. Then the set of accepted arguments w.r.t ~ A, Def ~ is emtpy. Consequently, the set of acceptable offers, Oa is empty as well. 5. ARGUMENTATION-BASED NEGOTIATION In this section, we define formally a protocol that generates argumentation-based negotiation dialogues between two negotiating agents P and C. The two agents negotiate about an object whose possible values belong to a set O. 
This set O is supposed to be known and the same for both agents. For simplicity reasons, we assume that this set does not change during the dialogue. The agents are equipped with theories denoted respectively ~ AP, FP, ~ P, RP, DefP ~, and ~ AC, FC, ~ C, RC, DefC ~. Note that the two theories may be different in the sense that the agents may have different sets of arguments, and different preference relations. Worst yet, they may have different arguments in favor of the same offers. Moreover, these theories may evolve during the dialogue. 5.1 Evolution of the theories Before defining formally the evolution of an agent's theory, let us first introduce the notion of dialogue moves, or moves for short. • pi ∈ {P, C} • ai ∈ Args (L) ∪ θ1 970 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) • oi ∈ O ∪ θ • ti ∈ N ∗ is the target of the move, such that ti <i The function Player (resp. Argument, Offer, Target) returns the player of the move (i.e. pi) (resp. the argument of a move, i.e ai, the offer oi, and the target of the move, ti). Let M denote the set of all the moves that can be built from ~ {P, C}, Arg (L), O ~. Note that the set M is finite since Arg (L) and O are assumed to be finite. Let us now see how an agent's theory evolves and why. The idea is that if an agent receives an argument from another agent, it will add the new argument to its theory. Moreover, since an argument may bring new information for the agent, thus new arguments can emerge. Let us take the following example: EXAMPLE 6. Suppose that an agent P has the following propositional knowledge base: ΣP = {x, y → z}. From this base one cannot deduce z. Let's assume that this agent receives the following argument {a, a → y} that justifies y. It is clear that now P can build an argument, say {a, a → y, y → z} in favor of z. In a similar way, if a received argument is in conflict with the arguments of the agent i, then those conflicts are also added to its relation Ri. Note that new conflicts may arise between the original arguments of the agent and the ones that emerge after adding the received arguments to its theory. Those new conflicts should also be considered. As a direct consequence of the evolution of the sets Ai and Ri, the defeat relation Defi is also updated. The initial theory of an agent i, (i.e. its theory before the dialogue starts), is denoted by ~ Ai0, Fi0, ~ i0, Ri0, Defi0 ~, with i ∈ {P, C}. Besides, in this paper, we suppose that the preference relation ~ i of an agent does not change during the dialogue. DEFINITION 11 (THEORY EVOLUTION). Let m1,..., mt,..., mj be a sequence of moves. The theory of an agent i at a step t> 0 is: ~ Ai t, Fti, ~ i t, Ri t, Defit ~ such that: • Ait = Ai0 ∪ {ai, i = 1,..., t, ai = Argument (mi)} ∪ A' with A' ⊆ Args (L) t • Ft i = O → 2Ai • ~ it = ~ i0 • Rit = Ri0 ∪ {(ai, aj) | ai = Argument (mi), • Defit ⊆ Ait × Ait The above definition captures the monotonic aspect of an argument. Indeed, an argument cannot be removed. However, its status may change. An argument that is accepted at step t of the dialogue by an agent may become rejected at step t + i. Consequently, the status of offers also change. Thus, the sets Oa, Or, On, and Ons may change from one step of the dialogue to another. That means for example that some offers could move from the set Oa to the set Or and vice-versa. Note that in the definition of Rt, the relation RL is used to denote a conflict between exchanged arguments. 
The reason is that, such a conflict may not be in the set Ri of the agent i. Thus, in order to recognize such conflicts, we have supposed that the set RL is known to the agents. This allows us to capture the situation where an agent is able to prove an argument that it was unable to prove before, by incorporating in its beliefs some information conveyed through the exchange of arguments with another agent. This, unknown at the beginning of the dialogue argument, could give to this agent the possibility to defeat an argument that it could not by using its initial arguments. This could even lead to a change of the status of these initial arguments and this change would lead to the one of the associated offers' status. In what follows, Oit, x denotes the set of offers of type x, where x ∈ {a, n, r, ns}, of the agent i at step t of the dialogue. In some places, we can use for short the notation Oit to denote the partition of the set O at step t for agent i. Note that we have: not (Oit, x ⊆ Oit +1, x). 5.2 The notion of agreement As said in the introduction, negotiation is a process aiming at finding an agreement about some matters. By agreement, one means a solution that satisfies to the largest possible extent the preferences of both agents. In case there is no such solution, we say that the negotiation fails. In what follows, we will discuss the different kinds of solutions that may be reached in a negotiation. The first one is the optimal solution. An optimal solution is the best offer for both agents. Formally: DEFINITION 12 (OPTIMAL SOLUTION). Let O be a set of offers, and o ∈ O. The offer o is an optimal solution at a step t ≥ 0 iff o ∈ OPt, a ∩ OCt, a Such a solution does not always exist since agents may have conflicting preferences. Thus, agents make concessions by proposing/accepting less preferred offers. DEFINITION 13 (CONCESSION). Let o ∈ O be an offer. The offer o is a concession for an agent i iff o ∈ Oix such that ∃ Oiy = ∅, and Oiy ~ Oi x. During a negotiation dialogue, agents exchange first their most preferred offers, and if these last are rejected, they make concessions. In this case, we say that their best offers are no longer defendable. In an argumentation setting, this means that the agent has already presented all its arguments supporting its best offers, and it has no counter argument against the ones presented by the other agent. Formally: DEFINITION 14 (DEFENDABLE OFFER). Let ~ Ai t, Fti, ~ i t, Ri t, Defit ~ be the theory of agent i at a step t> 0 of the dialogue. Let o ∈ O such that ∃ j ≤ t with Player (mj) = i and offer (mj) = o. The offer o is defendable by the agent i iff: • ∃ a ∈ Fti (o), and k ≤ t s.t. Argument (mk) = a, or • ∃ a ∈ At \ Fti (o) s.t. a Defit b with--Argument (mk) = b, k ≤ t, and Player (mk) = i--l ≤ t, Argument (ml) = a The offer o is said non-defendable otherwise and NDit is the set of non-defendable offers of agent i at a step t. The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 971 5.3 Negotiation dialogue Now that we have shown how the theories of the agents evolve during a dialogue, we are ready to define formally an argumentation-based negotiation dialogue. For that purpose, we need to define first the notion of a legal continuation. DEFINITION 15 (LEGAL MOVE). 
A move m is a legal continuation of a sequence of moves m1,..., ml iff j, k <l, such that: • Offer (mj) = Offer (mk), and • Player (mj) = ~ Player (mk) The idea here is that if the two agents present the same offer, then the dialogue should terminate, and there is no longer possible continuation of the dialogue. DEFINITION 16 (ARGUMENTATION-BASED NEGOTIATION). An argumentation-based negotiation dialogue d between two agents P and C is a non-empty sequence of moves m1,..., ml such that: • pi = P iff i is even, and pi = C iff i is odd • Player (m1) = P, Argument (m1) = 0, Offer (m1) = ~ 0, and Target (m1) = 02 • ∀ i = 1,..., l, mi is a legal continuation of m1,..., mi − 1 • Target (mi) = mj such that j <i and Player (mi) = ~ Player (mj) • If Argument (mi) = ~ 0, then: The first condition says that the two agents take turn. The second condition says that agent P starts the negotiation dialogue by presenting an offer. Note that, in the first turn, we suppose that the agent does not present an argument. This assumption is made for strategical purposes. Indeed, arguments are exchanged as soon as a conflict appears. The third condition ensures that agents exchange their best offers, but never the rejected ones. This condition takes also into account the concessions that an agent will have to make if it was established that a concession is the only option for it at the current state of the dialogue. Of course, as we have shown in a previous section, an agent may have several good or acceptable offers. In this case, the agent chooses one of them randomly. The fourth condition ensures that the moves are legal. This condition allows to terminate the dialogue as soon as an offer is presented by both agents. The fifth condition allows agents to backtrack. The sixth 2The first move has no target. condition says that an agent may send arguments in favor of offers, and in this case the offer should be stated in the same move. An agent can also send arguments in order to defeat arguments of the other agent. The next condition prevents repeating the same move. This is useful for avoiding loops. The last condition ensures that all the possible legal moves have been presented. The outcome of a negotiation dialogue is computed as follows: DEFINITION 17 (DIALOGUE OUTCOME). Let d = m1,..., ml be a argumentation-based negotiation dialogue. The outcome of this dialogue, denoted Outcome, is Outcome (d) = Offer (ml) iff ∃ j <l s.t. Offer (ml) = Offer (mj), and Player (ml) = ~ Player (mj). Otherwise, Outcome (d) = 0. Note that when Outcome (d) = 0, the negotiation fails, and no agreement is reached by the two agents. However, if Outcome (d) = ~ 0, the negotiation succeeds, and a solution that is either optimal or a compromise is found. THEOREM 8. ∀ di ∈ D, the argumentation-based negotiation di terminates. The above result is of great importance, since it shows that the proposed protocol avoids loops, and dialogues terminate. Another important result shows that the proposed protocol ensures to reach an optimal solution if it exists. Formally: THEOREM 9 (COMPLETENESS). Let d = m1,..., ml be a argumentation-based negotiation dialogue. If ∃ t ≤ l such that OPt, a ∩ OCt, a = ~ ∅, then Outcome (d) ∈ OPt, a ∩ OCt, a. We show also that the proposed dialogue protocol is sound in the sense that, if a dialogue returns a solution, then that solution is for sure a compromise. In other words, that solution is a "common agreement" at a given step of the dialogue. 
We show also that if the negotiation fails, then there is no possible solution. THEOREM 10 (SOUNDNESS). Let d = m1,..., ml be a argumentation-based negotiation dialogue. 1. If Outcome (d) = o, (o = ~ 0), then ∃ t ≤ l such that o ∈ OP t, x ∩ OCt, y, with x, y ∈ {a, n, ns}. 2. If Outcome (d) = 0, then ∀ t ≤ l, OPt, x ∩ OCt, y = ∅, ∀ x, y ∈ {a, n, ns}. A direct consequence of the above theorem is the following: PROPERTY 4. Let d = m1,..., ml be a argumentationbased negotiation dialogue. If Outcome (d) = 0, then ∀ t ≤ l, • OP t, r = OC t, a ∪ OC t, n ∪ OCt, ns, and • OC t, r = OP t, a ∪ OP t, n ∪ OPt, ns. 6. ILLUSTRATIVE EXAMPLES In this section we will present some examples in order to illustrate our general framework. EXAMPLE 7 (NO ARGUMENTATION). Let O = {o1, o2} be the set of all possible offers. Let P and C be two agents, equipped with the same theory: A, F, ~, R, Def ~ such that A = ∅, F (o1) = F (o2) = ∅, ~ = ∅, R = ∅, Def = ∅. In this case, it is clear that the two offers o1 and o2 are nonsupported. The proposed protocol (see Definition 16) will generate one of the following dialogues: 972 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) This dialogue ends with o1 as a compromise. Note that this solution is not considered as optimal since it is not an acceptable offer for the agents. This dialogue also ends with o2 as a compromise. The last possible dialgue is the following that ends with o1 as a compromise. Note that in the above example, since there is no exchange of arguments, the theories of both agents do not change. Let us now consider the following example. EXAMPLE 8 (STATIC THEORIES). Let O = {o1, o2} be the set of all possible offers. The theory of agent P is ~ AP, FP, ~ P, RP, DefP ~ such that: AP = {a1, a2}, FP (o1) = {a1}, FP (o2) = {a2}, ~ P = {(a1, a2)}, RP = {(a1, a2), (a2, a1)}, DefP = {a1, a2}. The argumentation system ~ AP, DefP ~ of this agent will return a1 as an accepted argument, and a2 as a rejected one. Consequently, the offer o1 is acceptable and o2 is rejected. The theory of agent C is ~ AC, FC, ~ C, RC, DefC ~ such that: AC = {a1, a2}, FC (o1) = {a1}, FC (o2) = {a2}, ~ C = {(a2, a1)}, RC = {(a1, a2), (a2, a1)}, DefC = {a2, a1}. The argumentation system ~ AC, DefC ~ of this agent will return a2 as an accepted argument, and a1 as a rejected one. Consequently, the offer o2 is acceptable and o1 is rejected. The only possible dialogues that may take place between the two agents are the following: Both dialogues end with failure. Note that in both dialogues, the theories of both agents do not change. The reason is that the exchanged arguments are already known to both agents. The negotiation fails because the agents have conflicting preferences. Let us now consider an example in which argumentation will allow agents to reach an agreement. EXAMPLE 9 (DYNAMIC THEORIES). Let O = {o1, o2} be the set of all possible offers. The theory of agent P is ~ AP, FP, ~ P, RP, DefP ~ such that: AP = {a1, a2}, FP (o1) = {a1}, FP (o2) = {a2}, ~ P = {(a1, a2), (a3, a1)}, RP = {(a1, a2), (a2, a1)}, DefP = {(a1, a2)}. The argumentation system ~ AP, DefP ~ of this agent will return a1 as an accepted argument, and a2 as a rejected one. Consequently, the offer o1 is acceptable and o2 is rejected. The theory of agent C is ~ AC, FC, ~ C, RC, DefC ~ such that: AC = {a1, a2, a3}, FC (o1) = {a1}, FC (o2) = {a2}, ~ C = {(a1, a2), (a3, a1)}, RC = {(a1, a2), (a2, a1), (a3, a1)}, DefC = {(a1, a2), (a3, a1)}. 
The argumentation system ~ AC, DefC ~ of this agent will return a3 and a2 as accepted arguments, and a1 as a rejected one. Consequently, the offer o2 is acceptable and o1 is rejected. The following dialogue may take place between the two agents: At step 4 of the dialogue, the agent P receives the argument a3 from P. Thus, its theory evolves as follows: AP = {a1, a2, a3}, RP = {(a1, a2), (a2, a1), (a3, a1)}, DefP = {(a1, a2), (a3, a1)}. At this step, the argument a1 which was accepted will become rejected, and the argument a2 which was at the beginning of the dialogue rejected will become accepted. Thus, the offer o2 will be acceptable for the agent, whereas o1 will become rejected. At this step 4, the offer o2 is acceptable for both agents, thus it is an optimal solution. The dialogue ends by returning this offer as an outcome. 7. RELATED WORK Argumentation has been integrated in negotiation dialogues at the early nineties by Sycara [12]. In that work, the author has emphasized the advantages of using argumentation in negotiation dialogues, and a specific framework has been introduced. In [8], the different types of arguments that are used in a negotiation dialogue, such as threats and rewards, have been discussed. Moreover, a particular framework for negotiation have been proposed. In [9, 13], different other frameworks have been proposed. Even if all these frameworks are based on different logics, and use different definitions of arguments, they all have at their heart an exchange of offers and arguments. However, none of those proposals explain when arguments can be used within a negotiation, and how they should be dealt with by the agent that receives them. Thus the protocol for handling arguments was missing. Another limitation of the above frameworks is the fact that the argumentation frameworks they The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 973 use are quite poor, since they use a very simple acceptability semantics. In [2] a negotiation framework that fills the gap has been suggested. A protocol that handles the arguments was proposed. However, the notion of concession is not modeled in that framework, and it is not clear what is the status of the outcome of the dialogue. Moreover, it is not clear how an agent chooses the offer to propose at a given step of the dialogue. In [1, 7], the authors have focused mainly on this decision problem. They have proposed an argumentation-based decision framework that is used by agents in order to choose the offer to propose or to accept during the dialogue. In that work, agents are supposed to have a beliefs base and a goals base. Our framework is more general since it does not impose any specific structure for the arguments, the offers, or the beliefs. The negotiation protocol is general as well. Thus this framework can be instantiated in different ways by creating, in such manner, different specific argumentation-based negotiation frameworks, all of them respecting the same properties. Our framework is also a unified one because frameworks like the ones presented above can be represented within this framework. For example the decision making mechanism proposed in [7] for the evaluation of arguments and therefore of offers, which is based on a priority relation between mutually attacked arguments, can be captured by the relation defeat proposed in our framework. This relation takes simultaneously into account the attacking and preference relations that may exist between two arguments. 8. 
CONCLUSIONS AND FUTURE WORK In this paper we have presented a unified and general framework for argumentation-based negotiation. Like any other argumentation-based negotiation framework, as it is evoked in (e.g. [10]), our framework has all the advantages that argumentation-based negotiation approaches present when related to the negotiation approaches based either on game theoretic models (see e.g. [11]) or heuristics ([6]). This work is a first attempt to formally define the role of argumentation in the negotiation process. More precisely, for the first time, it formally establishes the link that exists between the status of the arguments and the offers they support, it defines the notion of concession and shows how it influences the evolution of the negotiation, it determines how the theories of agents evolve during the dialogue and performs an analysis of the negotiation outcomes. It is also the first time where a study of the formal properties of the negotiation theories of the agents as well as of an argumentative negotiation dialogue is presented. Our future work concerns several points. A first point is to relax the assumption that the set of possible offers is the same to both agents. Indeed, it is more natural to assume that agents may have different sets of offers. During a negotiation dialogue, these sets will evolve. Arguments in favor of the new offers may be built from the agent theory. Thus, the set of offers will be part of the agent theory. Another possible extension of this work would be to allow agents to handle both arguments PRO and CONS offers. This is more akin to the way human take decisions. Considering both types of arguments will refine the evaluation of the offers status. In the proposed model, a preference relation between offers is defined on the basis of the partition of the set of offers. This preference relation can be refined. For instance, among the acceptable offers, one may prefer the offer that is supported by the strongest argument. In [4], different criteria have been proposed for comparing decisions. Our framework can thus be extended by integrating those criteria. Another interesting point to investigate is that of considering negotiation dialogues between two agents with different profiles. By profile, we mean the criterion used by an agent to compare its offers.
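To make the status computations behind Examples 7-9 concrete, the following sketch (ours, and only one possible instantiation, since the framework leaves the acceptability semantics abstract) uses Dung's grounded semantics as the acceptability criterion and derives a rough version of the partition of offers. Run on the two theories of agent P in Example 9, it reproduces the flip of o1 and o2 that occurs once the argument a3 has been received.

def grounded_extension(args, defeats):
    # Iterate Dung's characteristic function from the empty set; for a
    # finite framework this converges to the grounded extension.
    accepted = set()
    while True:
        defended = {
            a for a in args
            if all(any((c, b) in defeats for c in accepted)
                   for b in args if (b, a) in defeats)
        }
        if defended == accepted:
            return accepted
        accepted = defended

def offer_status(offers, F, args, defeats):
    # Rough approximation of the four-way partition of the set of offers.
    acc = grounded_extension(args, defeats)
    rej = {a for a in args if any((b, a) in defeats for b in acc)}
    status = {}
    for o in offers:
        if not F[o]:
            status[o] = "non-supported"
        elif F[o] & acc:
            status[o] = "acceptable"
        elif F[o] <= rej:
            status[o] = "rejected"
        else:
            status[o] = "negotiable"
    return status

F = {"o1": {"a1"}, "o2": {"a2"}}     # support function of Example 9

# Agent P before receiving a3: only a1 defeats a2.
print(offer_status(["o1", "o2"], F, {"a1", "a2"}, {("a1", "a2")}))
# -> {'o1': 'acceptable', 'o2': 'rejected'}

# Agent P after receiving a3: a3 defeats a1, and the statuses flip.
print(offer_status(["o1", "o2"], F, {"a1", "a2", "a3"},
                   {("a1", "a2"), ("a3", "a1")}))
# -> {'o1': 'rejected', 'o2': 'acceptable'}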
A Unified and General Framework for Argumentation-based Negotiation ABSTRACT This paper proposes a unified and general framework for argumentation-based negotiation, in which the role of argumentation is formally analyzed. The framework makes it possible to study the outcomes of an argumentation-based negotiation. It shows what an agreement is, how it is related to the theories of the agents, when it is possible, and how this can be attained by the negotiating agents in this case. It defines also the notion of concession, and shows in which situation an agent will make one, as well as how it influences the evolution of the dialogue. 1. INTRODUCTION Roughly speaking, negotiation is a process aiming at finding some compromise or consensus between two or several agents about some matters of collective agreement, such as pricing products, allocating resources, or choosing candidates. Negotiation models have been proposed for the design of systems able to bargain in an optimal way with other agents for example, buying or selling products in ecommerce. Different approaches to automated negotiation have been investigated, including game-theoretic approaches (which usu ally assume complete information and unlimited computation capabilities) [11], heuristic-based approaches which try to cope with these limitations [6], and argumentation-based approaches [2, 3, 7, 8, 9, 12, 13] which emphasize the importance of exchanging information and explanations between negotiating agents in order to mutually influence their behaviors (e.g. an agent may concede a goal having a small priority), and consequently the outcome of the dialogue. Indeed, the two first types of settings do not allow for the addition of information or for exchanging opinions about offers. Integrating argumentation theory in negotiation provides a good means for supplying additional information and also helps agents to convince each other by adequate arguments during a negotiation dialogue. Indeed, an offer supported by a good argument has a better chance to be accepted by an agent, and can also make him reveal his goals or give up some of them. The basic idea behind an argumentationbased approach is that by exchanging arguments, the theories of the agents (i.e. their mental states) may evolve, and consequently, the status of offers may change. For instance, an agent may reject an offer because it is not acceptable for it. However, the agent may change its mind if it receives a strong argument in favor of this offer. Several proposals have been made in the literature for modeling such an approach. However, the work is still preliminary. Some researchers have mainly focused on relating argumentation with protocols. They have shown how and when arguments in favor of offers can be computed and exchanged. Others have emphasized on the decision making problem. In [3, 7], the authors argued that selecting an offer to propose at a given step of the dialogue is a decision making problem. They have thus proposed an argumentationbased decision model, and have shown how such a model can be related to the dialogue protocol. In most existing works, there is no deep formal analysis of the role of argumentation in negotiation dialogues. It is not clear how argumentation can influence the outcome of the dialogue. Moreover, basic concepts in negotiation such as agreement (i.e. optimal solutions, or compromise) and concession are neither defined nor studied. 
This paper aims to propose a unified and general framework for argumentation-based negotiation, in which the role of argumentation is formally analyzed, and where the existing systems can be restated. In this framework, a negotiation dialogue takes place between two agents on a set O of offers, whose structure is not known. The goal of a negotiation is to find among elements of O, an offer that satisfies more or less the preferences of both agents. Each agent is supposed to have a theory represented in an abstract way. A theory consists of a set A of arguments whose structure and origin are not known, a function specifying for each possible offer in O, the arguments of A that support it, a non specified conflict relation among the arguments, and finally a preference relation between the arguments. The status of each argument is defined using Dung's acceptability semantics. Consequently, the set of offers is partitioned into four subsets: acceptable, rejected, negotiable and non-supported offers. We show how an agent's theory may evolve during a negotiation dialogue. We define formally the notions of concession, compromise, and optimal solution. Then, we propose a protocol that allows agents i) to exchange offers and arguments, and ii) to make concessions when necessary. We show that dialogues generated under such a protocol terminate, and even reach optimal solutions when they exist. This paper is organized as follows: Section 2 introduces the logical language that is used in the rest of the paper. Section 3 defines the agents as well as their theories. In section 4, we study the properties of these agents' theories. Section 5 defines formally an argumentation-based negotiation, shows how the theories of agents may evolve during a dialogue, and how this evolution may influence the outcome of the dialogue. Two kinds of outcomes: optimal solution and compromise are defined, and we show when such outcomes are reached. Section 6 illustrates our general framework through some examples. Section 7 compares our formalism with existing ones. Section 8 concludes and presents some perspectives. Due to lack of space, the proofs are not included. These last are in a technical report that we will make available online at some later time. 2. THE LOGICAL LANGUAGE 3. NEGOTIATING AGENTS THEORIES AND REASONING MODELS 3.1 Negotiating agents theories 968 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 3.2 The reasoning model The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 969 4. THE STRUCTURE OF NEGOTIATION THEORIES 5. ARGUMENTATION-BASED NEGOTIATION 5.1 Evolution of the theories 970 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 5.2 The notion of agreement The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 971 5.3 Negotiation dialogue 6. ILLUSTRATIVE EXAMPLES 972 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 7. RELATED WORK Argumentation has been integrated in negotiation dialogues at the early nineties by Sycara [12]. In that work, the author has emphasized the advantages of using argumentation in negotiation dialogues, and a specific framework has been introduced. In [8], the different types of arguments that are used in a negotiation dialogue, such as threats and rewards, have been discussed. Moreover, a particular framework for negotiation have been proposed. In [9, 13], different other frameworks have been proposed. 
Even if all these frameworks are based on different logics, and use different definitions of arguments, they all have at their heart an exchange of offers and arguments. However, none of those proposals explain when arguments can be used within a negotiation, and how they should be dealt with by the agent that receives them. Thus the protocol for handling arguments was missing. Another limitation of the above frameworks is the fact that the argumentation frameworks they The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 973 use are quite poor, since they use a very simple acceptability semantics. In [2] a negotiation framework that fills the gap has been suggested. A protocol that handles the arguments was proposed. However, the notion of concession is not modeled in that framework, and it is not clear what is the status of the outcome of the dialogue. Moreover, it is not clear how an agent chooses the offer to propose at a given step of the dialogue. In [1, 7], the authors have focused mainly on this decision problem. They have proposed an argumentation-based decision framework that is used by agents in order to choose the offer to propose or to accept during the dialogue. In that work, agents are supposed to have a beliefs base and a goals base. Our framework is more general since it does not impose any specific structure for the arguments, the offers, or the beliefs. The negotiation protocol is general as well. Thus this framework can be instantiated in different ways by creating, in such manner, different specific argumentation-based negotiation frameworks, all of them respecting the same properties. Our framework is also a unified one because frameworks like the ones presented above can be represented within this framework. For example the decision making mechanism proposed in [7] for the evaluation of arguments and therefore of offers, which is based on a priority relation between mutually attacked arguments, can be captured by the relation defeat proposed in our framework. This relation takes simultaneously into account the attacking and preference relations that may exist between two arguments. 8. CONCLUSIONS AND FUTURE WORK In this paper we have presented a unified and general framework for argumentation-based negotiation. Like any other argumentation-based negotiation framework, as it is evoked in (e.g. [10]), our framework has all the advantages that argumentation-based negotiation approaches present when related to the negotiation approaches based either on game theoretic models (see e.g. [11]) or heuristics ([6]). This work is a first attempt to formally define the role of argumentation in the negotiation process. More precisely, for the first time, it formally establishes the link that exists between the status of the arguments and the offers they support, it defines the notion of concession and shows how it influences the evolution of the negotiation, it determines how the theories of agents evolve during the dialogue and performs an analysis of the negotiation outcomes. It is also the first time where a study of the formal properties of the negotiation theories of the agents as well as of an argumentative negotiation dialogue is presented. Our future work concerns several points. A first point is to relax the assumption that the set of possible offers is the same to both agents. Indeed, it is more natural to assume that agents may have different sets of offers. During a negotiation dialogue, these sets will evolve. 
Arguments in favor of the new offers may be built from the agent theory. Thus, the set of offers will be part of the agent theory. Another possible extension of this work would be to allow agents to handle both arguments PRO and CONS offers. This is more akin to the way human take decisions. Considering both types of arguments will refine the evaluation of the offers status. In the proposed model, a preference relation between offers is defined on the basis of the partition of the set of offers. This preference relation can be refined. For instance, among the acceptable offers, one may prefer the offer that is supported by the strongest argument. In [4], different criteria have been proposed for comparing decisions. Our framework can thus be extended by integrating those criteria. Another interesting point to investigate is that of considering negotiation dialogues between two agents with different profiles. By profile, we mean the criterion used by an agent to compare its offers.
A Unified and General Framework for Argumentation-based Negotiation ABSTRACT This paper proposes a unified and general framework for argumentation-based negotiation, in which the role of argumentation is formally analyzed. The framework makes it possible to study the outcomes of an argumentation-based negotiation. It shows what an agreement is, how it is related to the theories of the agents, when it is possible, and how this can be attained by the negotiating agents in this case. It defines also the notion of concession, and shows in which situation an agent will make one, as well as how it influences the evolution of the dialogue. 1. INTRODUCTION Negotiation models have been proposed for the design of systems able to bargain in an optimal way with other agents for example, buying or selling products in ecommerce. Different approaches to automated negotiation have been investigated, including game-theoretic approaches (which usu Indeed, the two first types of settings do not allow for the addition of information or for exchanging opinions about offers. Integrating argumentation theory in negotiation provides a good means for supplying additional information and also helps agents to convince each other by adequate arguments during a negotiation dialogue. Indeed, an offer supported by a good argument has a better chance to be accepted by an agent, and can also make him reveal his goals or give up some of them. The basic idea behind an argumentationbased approach is that by exchanging arguments, the theories of the agents (i.e. their mental states) may evolve, and consequently, the status of offers may change. For instance, an agent may reject an offer because it is not acceptable for it. However, the agent may change its mind if it receives a strong argument in favor of this offer. Several proposals have been made in the literature for modeling such an approach. However, the work is still preliminary. Some researchers have mainly focused on relating argumentation with protocols. They have shown how and when arguments in favor of offers can be computed and exchanged. Others have emphasized on the decision making problem. In [3, 7], the authors argued that selecting an offer to propose at a given step of the dialogue is a decision making problem. They have thus proposed an argumentationbased decision model, and have shown how such a model can be related to the dialogue protocol. In most existing works, there is no deep formal analysis of the role of argumentation in negotiation dialogues. It is not clear how argumentation can influence the outcome of the dialogue. Moreover, basic concepts in negotiation such as agreement (i.e. optimal solutions, or compromise) and concession are neither defined nor studied. This paper aims to propose a unified and general framework for argumentation-based negotiation, in which the role of argumentation is formally analyzed, and where the existing systems can be restated. In this framework, a negotiation dialogue takes place between two agents on a set O of offers, whose structure is not known. The goal of a negotiation is to find among elements of O, an offer that satisfies more or less the preferences of both agents. Each agent is supposed to have a theory represented in an abstract way. The status of each argument is defined using Dung's acceptability semantics. We show how an agent's theory may evolve during a negotiation dialogue. We define formally the notions of concession, compromise, and optimal solution. 
Then, we propose a protocol that allows agents i) to exchange offers and arguments, and ii) to make concessions when necessary. We show that dialogues generated under such a protocol terminate, and even reach optimal solutions when they exist. Section 3 defines the agents as well as their theories. In section 4, we study the properties of these agents' theories. Section 5 defines formally an argumentation-based negotiation, shows how the theories of agents may evolve during a dialogue, and how this evolution may influence the outcome of the dialogue. Two kinds of outcomes: optimal solution and compromise are defined, and we show when such outcomes are reached. Section 6 illustrates our general framework through some examples. Section 7 compares our formalism with existing ones. Section 8 concludes and presents some perspectives. 7. RELATED WORK Argumentation has been integrated in negotiation dialogues at the early nineties by Sycara [12]. In that work, the author has emphasized the advantages of using argumentation in negotiation dialogues, and a specific framework has been introduced. In [8], the different types of arguments that are used in a negotiation dialogue, such as threats and rewards, have been discussed. Moreover, a particular framework for negotiation have been proposed. In [9, 13], different other frameworks have been proposed. Even if all these frameworks are based on different logics, and use different definitions of arguments, they all have at their heart an exchange of offers and arguments. However, none of those proposals explain when arguments can be used within a negotiation, and how they should be dealt with by the agent that receives them. Thus the protocol for handling arguments was missing. Another limitation of the above frameworks is the fact that the argumentation frameworks they The Sixth Intl. . Joint Conf. In [2] a negotiation framework that fills the gap has been suggested. A protocol that handles the arguments was proposed. However, the notion of concession is not modeled in that framework, and it is not clear what is the status of the outcome of the dialogue. Moreover, it is not clear how an agent chooses the offer to propose at a given step of the dialogue. They have proposed an argumentation-based decision framework that is used by agents in order to choose the offer to propose or to accept during the dialogue. In that work, agents are supposed to have a beliefs base and a goals base. Our framework is more general since it does not impose any specific structure for the arguments, the offers, or the beliefs. The negotiation protocol is general as well. Thus this framework can be instantiated in different ways by creating, in such manner, different specific argumentation-based negotiation frameworks, all of them respecting the same properties. Our framework is also a unified one because frameworks like the ones presented above can be represented within this framework. This relation takes simultaneously into account the attacking and preference relations that may exist between two arguments. 8. CONCLUSIONS AND FUTURE WORK In this paper we have presented a unified and general framework for argumentation-based negotiation. This work is a first attempt to formally define the role of argumentation in the negotiation process. It is also the first time where a study of the formal properties of the negotiation theories of the agents as well as of an argumentative negotiation dialogue is presented. Our future work concerns several points. 
A first point is to relax the assumption that the set of possible offers is the same to both agents. Indeed, it is more natural to assume that agents may have different sets of offers. During a negotiation dialogue, these sets will evolve. Arguments in favor of the new offers may be built from the agent theory. Thus, the set of offers will be part of the agent theory. Another possible extension of this work would be to allow agents to handle both arguments PRO and CONS offers. This is more akin to the way human take decisions. Considering both types of arguments will refine the evaluation of the offers status. In the proposed model, a preference relation between offers is defined on the basis of the partition of the set of offers. This preference relation can be refined. For instance, among the acceptable offers, one may prefer the offer that is supported by the strongest argument. In [4], different criteria have been proposed for comparing decisions. Our framework can thus be extended by integrating those criteria. Another interesting point to investigate is that of considering negotiation dialogues between two agents with different profiles. By profile, we mean the criterion used by an agent to compare its offers.
I-46
Modular Interpreted Systems
We propose a new class of representations that can be used for modeling (and model checking) temporal, strategic and epistemic properties of agents and their teams. Our representations borrow the main ideas from interpreted systems of Halpern, Fagin et al.; however, they are also modular and compact in the way concurrent programs are. We also mention preliminary results on model checking alternating-time temporal logic for this natural class of models.
[ "modular interpret system", "model check", "model check", "open comput system", "tempor and strateg logic", "model methodolog", "multi-agent system", "higher level represent languag", "branch time", "comput tree logic ctl", "altern-time tempor logic", "kripk structur", "synchron concurr program", "reactiv modul" ]
[ "P", "P", "P", "M", "R", "M", "M", "M", "U", "M", "M", "U", "M", "U" ]
Modular Interpreted Systems Wojciech Jamroga Department of Informatics Clausthal University of Technology, Germany wjamroga@in.tu-clausthal.de Thomas Ågotnes Department of Computer Engineering Bergen University College, Norway tag@hib.no ABSTRACT We propose a new class of representations that can be used for modeling (and model checking) temporal, strategic and epistemic properties of agents and their teams. Our representations borrow the main ideas from interpreted systems of Halpern, Fagin et al.; however, they are also modular and compact in the way concurrent programs are. We also mention preliminary results on model checking alternating-time temporal logic for this natural class of models. Categories and Subject Descriptors I.2.11 [Artificial Intelligence]: Distributed Artificial IntelligenceMultiagent Systems; I.2.4 [Artificial Intelligence]: Knowledge Representation Formalisms and Methods-Modal logic General Terms Theory 1. INTRODUCTION The logical foundations of multi-agent systems have received much attention in recent years. Logic has been used to represent and reason about, e.g., knowledge [7], time [6], cooperation and strategic ability [3]. Lately, an increasing amount of research has focused on higher level representation languages for models of such logics, motivated mainly by the need for compact representations, and for representations that correspond more closely to the actual systems which are modeled. Multi-agent systems are open systems, in the sense that agents interact with an environment only partially known in advance. Thus, we need representations of models of multi-agent systems which are modular, in the sense that a component, such as an agent, can be replaced, removed, or added, without major changes to the representation of the whole model. However, as we argue in this paper, few existing representation languages are both modular, compact and computationally grounded on the one hand, and allow for representing properties of both knowledge and strategic ability, on the other. In this paper we present a new class of representations for models of open multi-agent systems, which are modular, compact and come with an implicit methodology for modeling and designing actual systems. The structure of the paper is as follows. First, in Section 2, we present the background of our work - that is, logics that combine time, knowledge, and strategies. More precisely: modal logics that combine branching time, knowledge, and strategies under incomplete information. We start with computation tree logic CTL, then we add knowledge (CTLK), and then we discuss two variants of alternating-time temporal logic (ATL): one for the perfect, and one for the imperfect information case. The semantics of logics like the ones presented in Section 2 are usually defined over explicit models (Kripke structures) that enumerate all possible (global) states of the system. However, enumerating these states is one of the things one mostly wants to avoid, because there are too many of them even for simple systems. Thus, we usually need representations that are more compact. Another reason for using a more specialized class of models is that general Kripke structures do not always give enough help in terms of methodology, both at the stage of design, nor at implementation. This calls for a semantics which is more grounded, in the sense that the correspondence between elements of the model, and the entities that are modeled, is more immediate. 
In Section 3, we present an overview of representations that have been used for modeling and model checking systems in which time, action (and possibly knowledge) are important; we mention especially representations used for theoretical analysis. We point out that the compact and/or grounded representations of temporal models do not play their role in a satisfactory way when agents'' strategies are considered. Finally, in Section 4, we present our framework of modular interpreted systems (MIS), and show where it fits in the picture. We conclude with a somewhat surprising hypothesis, that model checking ability under imperfect information for MIS can be computationally cheaper than model checking perfect information. Until now, almost all complexity results were distinctly in favor of perfect information strategies (and the others were indifferent). 2. LOGICS OF TIME, KNOWLEDGE, AND STRATEGIC ABILITY First, we present the logics CTL, CTLK, ATL and ATLir that are the starting point of our study. 2.1 Branching Time: CTL Computation tree logic CTL [6] includes operators for temporal properties of systems: i.e., path quantifier E (there is a path), together with temporal operators: f(in the next state), 2 (always from now on) and U (until).1 Every occurrence of a temporal operator is immediately preceded by exactly one path quantifier (this variant of the language is sometimes called vanilla CTL). Let Π be a set of atomic propositions with a typical element p. CTL formulae ϕ are defined as follows: ϕ ::= p | ¬ϕ | ϕ ∧ ϕ | E fϕ | E2ϕ | Eϕ U ϕ. The semantics of CTL is based on Kripke models M = St, R, π , which include a nonempty set of states St, a state transition relation R ⊆ St × St, and a valuation of propositions π : Π → P(St). A path λ in M refers to a possible behavior (or computation) of system M, and can be represented as an infinite sequence of states q0q1q2... such that qiRqi+1 for every i = 0, 1, 2, ... We denote the ith state in λ by λ[i]. A q-path is a path that starts in q. Interpretation of a formula in a state q in model M is defined as follows: M, q |= p iff q ∈ π(p); M, q |= ¬ϕ iff M, q |= ϕ; M, q |= ϕ ∧ ψ iff M, q |= ϕ and M, q |= ψ; M, q |= E fϕ iff there is a q-path λ such that M, λ[1] |= ϕ; M, q |= E2ϕ iff there is a q-path λ such that M, λ[i] |= ϕ for every i ≥ 0; M, q |= Eϕ U ψ iff there is a q-path λ and i ≥ 0 such that M, λ[i] |= ψ and M, λ[j] |= ϕ for every 0 ≤ j < i. 2.2 Adding Knowledge: CTLK CTLK [19] is a straightforward combination of CTL and standard epistemic logic [10, 7]. Let Agt = {1, ..., k} be a set of agents with a typical element a. Epistemic logic uses operators for representing agents'' knowledge: Kaϕ is read as agent a knows that ϕ. Models of CTLK extend models of CTL with epistemic indistinguishability relations ∼a⊆ St × St (one per agent). We assume that all ∼a are equivalences. The semantics of epistemic operators is defined as follows: M, q |= Kaϕ iff M, q |= ϕ for every q such that q ∼a q . Note that, when talking about agents'' knowledge, we implicitly assume that agents may have imperfect information about the actual current state of the world (otherwise the notion of knowledge would be trivial). This does not have influence on the way we model evolution of a system as a single unit, but it will become important when particular agents and their strategies come to the fore. 
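The clauses above are usually evaluated on an explicit Kripke model by the standard labelling procedure: E ϕ U ψ is a least fixpoint and E2ϕ a greatest fixpoint over the one-step predecessor operation. The Python fragment below is an unoptimized illustrative sketch (all names are ours), not the O(ml) algorithm of [5].

def pre_exists(states, R, target):
    # States with at least one R-successor inside target; R is a set of pairs.
    return {q for q in states if any((q, q2) in R for q2 in target)}

def check_EX(states, R, sat_phi):
    return pre_exists(states, R, sat_phi)

def check_EU(states, R, sat_phi, sat_psi):
    # E phi U psi: least fixpoint of  Z = psi or (phi and EX Z).
    result = set(sat_psi)
    while True:
        new = result | (set(sat_phi) & pre_exists(states, R, result))
        if new == result:
            return result
        result = new

def check_EG(states, R, sat_phi):
    # E2 phi: greatest fixpoint of  Z = phi and EX Z  (R assumed serial).
    result = set(sat_phi)
    while True:
        new = result & pre_exists(states, R, result)
        if new == result:
            return result
        result = new

# Toy model: q0 -> q1 -> q2 -> q2, with p true only in q2.
states = {"q0", "q1", "q2"}
R = {("q0", "q1"), ("q1", "q2"), ("q2", "q2")}
sat_p = {"q2"}
print(check_EU(states, R, states, sat_p))   # E true U p, i.e. E3p: all states
print(check_EG(states, R, sat_p))           # E2p: {'q2'}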
2.3 Agents and Their Strategies: ATL Alternating-time temporal logic ATL [3] is a logic for reasoning about temporal and strategic properties of open computational systems (multi-agent systems in particular). The language of ATL consists of the following formulae: ϕ ::= p | ¬ϕ | ϕ ∧ ϕ | A fϕ | A 2ϕ | A ϕ U ϕ. where A ⊆ Agt. Informally, A ϕ says that agents A have a collective strategy to enforce ϕ. It should be noted that the CTL path quantifiers A, E can be expressed with ∅ , Agt respectively. The semantics of ATL is defined in so called concurrent game structures (CGSs). A CGS is a tuple M = Agt, St, Act, d, o, Π, π , 1 Additional operators A (for every path) and 3 (sometime in the future) are defined in the usual way. consisting of: a set Agt = {1, ... , k} of agents; set St of states; valuation of propositions π : Π → P(St); set Act of atomic actions. Function d : Agt × St → P(Act) indicates the actions available to agent a ∈ Agt in state q ∈ St. Finally, o is a deterministic transition function which maps a state q ∈ St and an action profile α1, ... , αk ∈ Actk , αi ∈ d(i, q), to another state q = o(q, α1, ... , αk). DEFINITION 1. A (memoryless) strategy of agent a is a function sa : St → Act such that sa(q) ∈ d(a, q).2 A collective strategy SA for a team A ⊆ Agt specifies an individual strategy for each agent a ∈ A. Finally, the outcome of strategy SA in state q is defined as the set of all computations that may result from executing SA from q on: out(q, SA) = {λ = q0q1q2... | q0 = q and for every i = 1, 2, ... there exists αi−1 1 , ..., αi−1 k such that αi−1 a = SA(a)(qi−1) for each a ∈ A, αi−1 a ∈ d(a, qi−1) for each a /∈ A, and o(qi−1, αi−1 1 , ..., αi−1 k ) = qi}. The semantics of cooperation modalities is as follows: M, q |= A fϕ iff there is a collective strategy SA such that, for every λ ∈ out(q, SA), we have M, λ[1] |= ϕ; M, q |= A 2ϕ iff there exists SA such that, for every λ ∈ out(q, SA), we have M, λ[i] for every i ≥ 0; M, q |= A ϕ U ψ iff there exists SA such that for every λ ∈ out(q, SA) there is a i ≥ 0, for which M, λ[i] |= ψ, and M, λ[j] |= ϕ for every 0 ≤ j < i. 2.4 Agents with Imperfect Information: ATLir As ATL does not include incomplete information in its scope, it can be seen as a logic for reasoning about agents who always have complete knowledge about the current state of the whole system. ATLir [21] includes the same formulae as ATL, except that the cooperation modalities are presented with a subscript: A ir indicates that they address agents with imperfect information and imperfect recall. Formally, the recursive definition of ATLir formulae is: ϕ ::= p | ¬ϕ | ϕ ∧ ϕ | A ir fϕ | A ir2ϕ | A irϕ U ϕ Models of ATLir, concurrent epistemic game structures (CEGS), can be defined as tuples M = Agt, St, Act, d, o, ∼1, ..., ∼k, Π, π , where Agt, St, Act, d, o, Π, π is a CGS, and ∼1, ..., ∼k are epistemic (equivalence) relations. It is required that agents have the same choices in indistinguishable states: q ∼a q implies d(a, q) = d(a, q ). ATLir restricts the strategies that can be used by agents to uniform strategies, i.e. functions sa : St → Act, such that: (1) sa(q) ∈ d(a, q), and (2) if q ∼a q then sa(q) = sa(q ). A collective strategy is uniform if it contains only uniform individual strategies. Again, the function out(q, SA) returns the set of all paths that may result from agents A executing collective strategy SA from state q. 
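For the next-time operator, the cooperation modality can be decided at a single state by brute force: A can enforce ϕ in one step iff some joint action of A leads to a ϕ-state for every response of the remaining agents. The sketch below is ours and purely illustrative; d and o are encoded as plain dictionaries, and the example CGS is invented.

from itertools import product

def next_ability(q, A, agents, d, o, sat_phi):
    # Can coalition A force the next state into sat_phi from q?
    others = [i for i in agents if i not in A]
    for choice_A in product(*(d[i][q] for i in A)):
        good = True
        for choice_rest in product(*(d[i][q] for i in others)):
            alpha = {**dict(zip(A, choice_A)), **dict(zip(others, choice_rest))}
            profile = tuple(alpha[i] for i in agents)
            if o[(q, profile)] not in sat_phi:
                good = False
                break
        if good:
            return True
    return False

# Two agents; agent 1 can force 'win' from q0 by playing 'a', whatever 2 does.
agents = [1, 2]
d = {1: {"q0": ["a", "b"]}, 2: {"q0": ["x", "y"]}}
o = {("q0", ("a", "x")): "win", ("q0", ("a", "y")): "win",
     ("q0", ("b", "x")): "lose", ("q0", ("b", "y")): "lose"}
print(next_ability("q0", [1], agents, d, o, {"win"}))   # True
print(next_ability("q0", [2], agents, d, o, {"win"}))   # False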
The semantics of ATLir formulae can be defined as follows: M, q |= A ir fϕ iff there is a uniform collective strategy SA such that, for every a ∈ A, q such that q ∼a q , and λ ∈ out(SA, q ), we have M, λ[1] |= ϕ; 2 This is a deviation from the original semantics of ATL [3], where strategies assign agents'' choices to sequences of states, which suggests that agents can by definition recall the whole history of each game. While the choice of one or another notion of strategy affects the semantics of the full ATL ∗ , and most ATL extensions (e.g. for games with imperfect information), it should be pointed out that both types of strategies yield equivalent semantics for pure ATL (cf. [21]). 898 The Sixth Intl.. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) M, q |= A ir2ϕ iff there exists SA such that, for every a ∈ A, q such that q ∼a q , and λ ∈ out(SA, q ), we have M, λ[i] for every i ≥ 0; M, q |= A irϕ U ψ iff there exist SA such that, for every a ∈ A, q such that q ∼a q , and λ ∈ out(SA, q ), there is i ≥ 0 for which M, λ[i] |= ψ, and M, λ[j] |= ϕ for every 0 ≤ j < i. That is, A irϕ holds iff A have a uniform collective strategy, such that for every path that can possibly result from execution of the strategy according to at least one agent from A, ϕ is the case. 3. MODELS AND MODEL CHECKING In this section, we present and discuss various (existing) representations of systems that can be used for modeling and model checking. We believe that the two most important points of reference are in this case: (1) the modeling formalism (i.e., the logic and the semantics we use), and (2) the phenomenon, or more generally, the domain we are going to model (to which we will often refer as the real world). Our aim is a representation which is reasonably close to the real world (i.e., it is sufficiently compact and grounded), and still not too far away from the formalism (so that it e.g. easily allows for theoretical analysis of computational problems). We begin with discussing the merits of explicit modelsin our case, these are transition systems, concurrent game structures and CEGSs, presented in the previous section. 3.1 Explicit Models Obviously, an advantage of explicit models is that they are very close to the semantics of our logics (simply because they are the semantics). On the other hand, they are in many ways difficult to use to describe an actual system: • Exponential size: temporal models usually have an exponential number of states with respect to any higher-level description (e.g. Boolean variables, n-ary attributes etc.). Also, their size is exponential in the number of processes (or agents) if the evolution of a system results from joint (synchronous or asynchronous) actions of several active entities [15]. For CGSs the situation is even worse: here, also the number of transitions is exponential, even if we fix the number of states.3 In practice, this means that such representations are very seldom scalable. • Explicit models include no modularity. States in a model refer to global states of the system; transitions in the model correspond to global transitions as well, i.e., they represent (in an atomic way) everything that may happen in one single step, regardless of who has done it, to whom, and in what way. • Logics like ATL are often advertised as frameworks for modeling and reasoning about open computational systems. 
Ideally, one would like the elements of such a system to have as little interdependencies as possible, so that they can be plugged in and out without much hassle, for instance when we want to test various designs or implementations of the active component. In the case of a multi-agent system the 3 Another class of ATL models, alternating transition systems [2] represent transitions in a more succinct way. While we still have exponentially many states in an ATS, the number of transitions is simply quadratic wrt. to states (like for CTL models). Unfortunately, ATS are even less modular and harder to design than concurrent game structures, and they cannot be easily extended to handle incomplete information (cf. [9]). need is perhaps even more obvious. We do not only need to re-plug various designs of a single agent in the overall architecture; we usually also need to change (e.g., increase) the number of agents acting in a given environment without necessarily changing the design of the whole system. Unfortunately, ATL models are anything but open in this sense. Theoretical complexity results for explicit models are as follows. Model checking CTL and CTLK is P-complete, and can be done in time O(ml), where m is the number of transitions in the model, and l is the length of the formula [5]. Alternatively, it can be done in time O(n2 l), where n is the number of states. Model checking ATL is P-complete wrt. m, l and ΔP 3 -complete wrt. n, k, l (k being the number of agents) [3, 12, 16]. Model checking ATLir is ΔP 2complete wrt. m, l and ΔP 3 -complete wrt. n, k, l [21, 13]. 3.2 Compressed Representations Explicit representation of all states and transitions is inefficient in many ways. An alternative is to represent the state/transition space in a symbolic way [17, 18]. Such models offer some hope for feasible model checking properties of open/multi-agent systems, although it is well known that they are compact only in a fraction of all cases.4 For us, however, they are insufficient for another reason: they are merely optimized representations of explicit models. Thus, they are neither more open nor better grounded: they were meant to optimize implementation rather than facilitate design or modeling methodology. 3.3 Interpreted Systems Interpreted systems [11, 7] are held by many as a prime example of computationally grounded models of distributed systems. An interpreted system can be defined as a tuple IS = St1, ..., Stk, Stenv, R, π . St1, ..., Stk are local state spaces of agents 1, ..., k, and Stenv is the set of states of the environment. The set of global states is defined as St = St1 × ... × Stk × Stenv; R ⊆ St × St is a transition relation, and π : Π → P(St). While the transition relation encapsulates the (possible) evolution of the system over time, the epistemic dimension is defined by the local components of each global state: q1, ..., qk, qenv ∼i q1, ..., qk, qenv iff qi = qi . It is easy to see that such a representation is modular and compact as far as we are concerned with states. Moreover, it gives a natural (grounded) approach to knowledge, and suggests an intuitive methodology for modeling epistemic states. Unfortunately, the way transitions are represented in interpreted systems is neither compact, nor modular, nor grounded: the temporal aspect of the system is given by a joint transition function, exactly like in explicit models. 
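The contrast can be made explicit with a small sketch (ours, with an invented example): the epistemic dimension of an interpreted system costs essentially nothing to represent, since global states are tuples of local states and indistinguishability is equality of the respective local components, whereas the temporal dimension still has to be written down as a relation over all global states.

from itertools import product

# Local state spaces of two agents and the environment.
St1, St2, Stenv = ["idle", "busy"], ["idle", "busy"], ["ok", "fault"]

# Global states are simply tuples of local states: 2 * 2 * 2 = 8 of them.
global_states = list(product(St1, St2, Stenv))

def indist(i, q, q2):
    # Agent i (0-based) cannot distinguish q from q2 iff its own component agrees.
    return q[i] == q2[i]

print(indist(0, ("idle", "busy", "ok"), ("idle", "idle", "fault")))   # True

# The temporal aspect, in contrast, must still be listed over *global* states,
# e.g. as an explicit set of pairs -- exponential in the number of agents.
R = {(q, q) for q in global_states}   # placeholder: every state loops to itself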
This merging of the agents' local activity into a single joint transition function is not without a reason: if we separate activities of the agents too much, we cannot model interaction in the framework any more, and interaction is the most interesting thing here. But the bottom line is that the temporal dimension of an interpreted system has exponential representation. And it is almost as difficult to plug components in and out of an interpreted system, as for an ordinary CTL or ATL model, since the local activity of an agent is completely merged with its interaction with the rest of the system. 3.4 Concurrent Programs The idea of concurrent programs has been long known in the literature on distributed systems. Here, we use the formulation from [15]. A concurrent program P is composed of k concurrent processes, each described by a labeled transition system Pi = Sti, Acti, Ri, Πi, πi , where Sti is the set of local states of process i, Acti is the set of local actions, Ri ⊆ Sti × Acti × Sti is a transition relation, and Πi, πi are the set of local propositions and their valuation. The behavior of program P is given by the product automaton of P1, ..., Pk under the assumption that processes work asynchronously, actions are interleaved, and synchronization is obtained through common action names. Concurrent programs have several advantages. First of all, they are modular and compact. They allow for local modeling of components - much more so than interpreted systems (not only states, but also actions are local here). Moreover, they allow for representing explicit interaction between local transitions of reactive processes, like willful communication, and synchronization. On the other hand, they do not allow for representing implicit, incidental, or not entirely benevolent interaction between processes. For example, if we want to represent the act of pushing somebody, the pushed object must explicitly execute an action of being pushed, which seems somewhat ridiculous. Side effects of actions are also not easy to model. Still, this is a minor complaint in the context of CTL, because for temporal logics we are only interested in the flow of transitions, and not in the underlying actions. For temporal reasoning about k asynchronous processes with no implicit interaction, concurrent programs seem just about perfect. The situation is different when we talk about autonomous, proactive components (like agents), acting together (cooperatively or adversely) in a common environment - and we want to address their strategies and abilities. Now, particular actions are no less important than the resulting transitions. Actions may influence other agents' local states without their consent, they may have side effects on other agents' states etc. Passing messages and/or calling procedures is by no means the only way of interaction between agents. Moreover, the availability of actions (to an agent) should not depend on the actions that will be executed by other agents at the same time - these are the outcome states that may depend on these actions! Finally, we would often like to assume that agents act synchronously. In particular, all agents play simultaneously in concurrent game structures. But, assuming synchrony and autonomy of actions, synchronization can no longer be a means of coordination. 4 Representation R of an explicit model M is compact if the size of R is logarithmic with respect to the size of M.
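The product construction just described can be sketched directly: a private action moves only the process that owns it, while an action shared by several processes is taken jointly by all of them. The fragment below is only a schematic rendering of this idea (ours, with invented encodings), not the formal construction of [15].

def product_step(local_states, processes):
    # One global step of the asynchronous product with synchronization on
    # common action names. processes[i] = (actions_i, trans_i), where
    # trans_i maps (local state, action) -> next local state.
    all_actions = set().union(*(acts for acts, _ in processes))
    successors = []
    for a in sorted(all_actions):
        movers = [i for i, (acts, _) in enumerate(processes) if a in acts]
        # every process that knows action a must be able to take it now
        if all((local_states[i], a) in processes[i][1] for i in movers):
            nxt = list(local_states)
            for i in movers:
                nxt[i] = processes[i][1][(local_states[i], a)]
            successors.append((a, tuple(nxt)))
    return successors

# Two processes synchronizing on 'sync'; 'work' is local to process 0.
p0 = ({"work", "sync"}, {("s0", "work"): "s1", ("s1", "sync"): "s0"})
p1 = ({"sync"}, {("t0", "sync"): "t0"})
print(product_step(("s0", "t0"), [p0, p1]))   # [('work', ('s1', 't0'))]
print(product_step(("s1", "t0"), [p0, p1]))   # [('sync', ('s0', 't0'))]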
To sum up, we need a representation which is very much like concurrent programs, but allows for modeling agents that play synchronously, and which enables modeling more sophisticated interaction between agents'' actions. The first postulate is easy to satisfy, as we show in the following section. The second will be addressed in Section 4. We note that model checking CTL against concurrent programs is PSPACE-complete in the number of local states and the length of the formula [15]. 3.5 Synchronous CP and Simple Reactive Modules The semantics of ATL is based on synchronous models where availability of actions does not depend on the actions currently executed by the other players. A slightly different variant of concurrent programs can be defined via synchronous product of programs, so that all agents play simultaneously.5 Unfortunately, under such interpretation, no direct interaction between agents'' actions can be modeled at all. DEFINITION 2. A synchronous concurrent program consists of k concurrent processes Pi = Sti, Acti, Ri, Πi, πi with the follow5 The concept is not new, of course, and has already existed in folk knowledge, although we failed to find an explicit definition in the literature. ing unfolding to a CGS: Agt = {1, ..., k}, St = Qk i=1 Sti, Act = Sk i=1 Acti, d(i, q1, ..., qk ) = {αi | qi, αi, qi ∈ Ri for some qi ∈ Sti}, o( q1, ..., qk , α1, ..., αk) = q1, ..., qk such that qi, αi, qi ∈ Ri for every i; Π = Sk i=1 Πi, and π(p) = πi(p) for p ∈ Πi. We note that the simple reactive modules (SRML) from [22] can be seen as a particular implementation of synchronous concurrent programs. DEFINITION 3. A SRML system is a tuple Σ, Π, m1, ... , mk , where Σ = {1, ... , k} is a set of modules (or agents), Π is a set of Boolean variables, and, for each i ∈ Σ, we have mi = ctri, initi, updatei , where ctri ⊆ Π. Sets initi and updatei consist of guarded commands of the form φ ; v1 := ψ1; ... ; vk := ψk, where every vj ∈ ctri, and φ, ψ1, ... , ψk are propositional formulae over Π. It is required that ctr1, ... ctrk partitions Π. The idea is that agent i controls the variables ctri. The init guarded commands are used to initialize the controlled variables, while the update guarded commands can change their values in each round. A guarded command is enabled if the guard φ is true in the current state of the system. In each round an enabled update guarded command is executed: each ψj is evaluated against the current state of the system, and its logical value is assigned to vj. Several guarded commands being enabled at the same time model non-deterministic choice. Model checking ATL for SRML has been proved EXPTIMEcomplete in the size of the model and the length of the formula [22]. 3.6 Concurrent Epistemic Programs Concurrent programs (both asynchronous and synchronous) can be used to encode epistemic relations too - exactly in the same way as interpreted systems do [20]. That is, when unfolding a concurrent program to a model of CTLK or ATLir, we define that q1, ..., qk ∼i q1, ..., qk iff qi = qi . Model checking CTLK against concurrent epistemic programs is PSPACE-complete [20]. SRML can be also interpreted in the same way; then, we would assume that every agent can see only the variables he controls. Concurrent epistemic programs are modular and have a grounded semantics. They are usually compact (albeit not always: for example, an agent with perfect information will always blow up the size of such a program). 
Still, they inherit all the problems of concurrent programs with perfect information, discussed in Section 3.4: limited interaction between components, availability of local actions depending on the actual transition etc.. The problems were already important for agents with perfect information, but they become even more crucial when agents have only limited knowledge of the current situation. One of the most important applications of logics that combine strategic and epistemic properties is verification of communication protocols (e.g., in the context of security). Now, we may want to, e.g., check agents'' ability to pass an information between them, without letting anybody else intercept the message. The point is that the action of intercepting is by definition enabled; we just look for a protocol in which the transition of successful interception is never carried out. So, availability of actions must be independent of the actions chosen by the other agents under incomplete information. On the other hand, interaction is arguably the most interesting feature of multi-agent systems, and it is really hard to imagine models of strategic-epistemic logics, in which it is not possible to represent communication. 3.7 Reactive Modules Reactive modules [1] can be seen as a refinement of concurrent epistemic programs (primarily used by the MOCHA model checker [4]), but they are much more powerful, expressive and 900 The Sixth Intl.. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) grounded. We have already mentioned a very limited variant of RML (i.e., SRML). The vocabulary of RML is very close to implementations (in terms of general computational systems): the modules are essentially collections of variables, states are just valuations of variables; events/actions are variable updates. However, the sets of variables controlled by different agents can overlap, they can change over time etc.. Moreover, reactive modules support incomplete information (through observability of variables), although it is not the main focus of RML. Again, the relationship between sets of observable variables (and to sets of controlled variables) is mostly left up to the designer of a system. Agents can act synchronously as well as asynchronously. To sum up, RML define a powerful framework for modeling distributed systems with various kinds of synchrony and asynchrony. However, we believe that there is still a need for a simpler and slightly more abstract class of representations. First, the framework of RML is technically complicated, involving a number auxiliary concepts and their definitions. Second, it is not always convenient to represent all that is going on in a multi-agent system as reading and/or writing from/to program variables. This view of a multi-agent system is arguably close to its computer implementation, but usually rather distant from the real world domainhence the need for a more abstract, and more conceptually flexible framework. Third, the separation of the local complexity, and the complexity of interaction is not straightforward. Our new proposal, more in the spirit of interpreted systems, takes these observations as the starting point. The proposed framework is presented in Section 4. 4. 
MODULAR INTERPRETED SYSTEMS The idea behind distributed systems (multi-agent systems even more so) is that we deal with several loosely coupled components, where most of the processing goes on inside components (i.e., locally), and only a small fraction of the processing occurs between the components. Interaction is crucial (which makes concurrent programs an insufficient modeling tool), but it usually consumes much less of the agent``s resources than local computations (which makes the explicit transition tables of CGS, CEGS, and interpreted systems an overkill). Modular interpreted systems, proposed here, extrapolate the modeling idea behind interpreted systems in a way that allows for a tight control of the interaction complexity. DEFINITION 4. A modular interpreted system (MIS) is defined as a tuple S = Agt, env, Act, In , where Agt = {a1, ..., ak} is a set of agents, env is the environment, Act is a set of actions, and In is a set of symbols called interaction alphabet. Each agent has the following internal structure: ai = Sti, di, outi, ini, oi, Πi, πi , where: • Sti is a set of local states, • di : Sti → P(Act) defines local availability of actions; for convenience of the notation, we additionally define the set of situated actions as Di = { qi, α | qi ∈ Sti, α ∈ di(qi)}, • outi, ini are interaction functions; outi : Di → In refers to the influence that a given situated action (of agent ai) may possibly have on the external world, and ini : Sti ×Ink → In translates external manifestations of the other agents (and the environment) into the impression that they make on ai``s transition function depending on the local state of ai, • oi : Di × In → Sti is a (deterministic) local transition function, • Πi is a set of local propositions of agent ai where we require that Πi and Πj are disjunct when i = j, and • πi : Πi → P(Sti) is a valuation of these propositions. The environment env = Stenv, outenv, inenv, oenv, Πenv, πenv has the same structure as an agent except that it does not perform actions, and that thus outenv : Stenv → In and oenv : Stenv × In → Stenv. Within our framework, we assume that every action is executed by an actor, that is, an agent. As a consequence, every actor is explicitly represented in a MIS as an agent, just like in the case of CGS and CEGS. The environment, on the other hand, represents the (passive) context of agents'' actions. In practice, it serves to capture the aspects of the global state that are not observable by any of the agents. The input functions ini seem to be the fragile spots here: when given explicitly as tables, they have size exponential wrt. the number of agents (and linear wrt. the size of In). However, we can use, e.g., a construction similar to the one from [16] to represent interaction functions more compactly. DEFINITION 5. Implicit input function for state q ∈ Sti is given by a sequence ϕ1, η1 , ..., ϕn, ηn , where each ηj ∈ In is an interaction symbol, and each ϕj is a boolean combination of propositions ˆηi , with η ∈ In; ˆηi stands for η is the symbol currently generated by agent i. The input function is now defined as follows: ini(q, 1, ..., k, env) = ηj iff j is the lowest index such that {ˆ1 1, ..., ˆk k, ˆenv env} |= ϕj. It is required that ϕn ≡ , so that the mapping is effective. REMARK 1. Every ini can be encoded as an implicit input function, with each ϕj being of polynomial size with respect to the number of interaction symbols (cf. [16]). 
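Definition 4 translates almost literally into a data structure: an agent is a record of its local availability function, the two interaction functions and its local transition function, and a global step is computed from local data plus the exchanged interaction symbols only (this is exactly what the unfolding of Definition 6 below does). The sketch is ours; the field names and the toy agents are invented, and the environment is omitted for brevity.

from dataclasses import dataclass
from typing import Any, Callable, Sequence

@dataclass
class MISAgent:
    d:   Callable[[Any], list]             # local availability of actions
    out: Callable[[Any, Any], Any]         # situated action -> interaction symbol
    inp: Callable[[Any, Sequence], Any]    # (local state, others' symbols) -> symbol
    o:   Callable[[Any, Any, Any], Any]    # (local state, action, symbol) -> next state

def global_step(local_states, actions, agents):
    # 1. Every agent manifests its situated action to the others.
    emitted = [ag.out(q, a) for ag, q, a in zip(agents, local_states, actions)]
    nxt = []
    for i, (ag, q, a) in enumerate(zip(agents, local_states, actions)):
        others = emitted[:i] + emitted[i + 1:]    # own symbol is not fed back
        # 2. Translate the others' manifestations into a single impression ...
        gamma = ag.inp(q, others)
        # 3. ... and take a purely local transition.
        nxt.append(ag.o(q, a, gamma))
    return tuple(nxt)

# Two 'toggle' agents over local states {0, 1}: an agent flips its bit
# whenever some other agent manifests the symbol 'flip'.
toggle = MISAgent(
    d=lambda q: ["flip", "idle"],
    out=lambda q, a: a,
    inp=lambda q, others: "flip" if "flip" in others else "idle",
    o=lambda q, a, gamma: 1 - q if gamma == "flip" else q,
)
print(global_step((0, 0), ("flip", "idle"), [toggle, toggle]))   # (0, 1)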
Note that, for some domains, the MIS representation of a system requires exponentially many symbols in the interaction alphabet In. In such a case, the problem is inherent to the domain, and ini will have size exponential wrt. the number of agents.

4.1 Representing Agent Systems with MIS

Let Stg = (St1 × ... × Stk) × Stenv be the set of all possible global states generated by a modular interpreted system S.

DEFINITION 6. The unfolding of a MIS S for initial states Q ⊆ Stg to a CEGS cegs(S, Q) = ⟨Agt', St', Π', π', Act', d', o', ∼'1, ..., ∼'k⟩ is defined as follows:
• Agt' = {1, ..., k} and Act' = Act,
• St' is the set of global states from Stg which are reachable from some state in Q via the transition relation defined by o' (below),
• Π' = Π1 ∪ ... ∪ Πk ∪ Πenv,
• For each q = ⟨q1, ..., qk, qenv⟩ ∈ St' and i = 1, ..., k, env, we define q ∈ π'(p) iff p ∈ Πi and qi ∈ πi(p),
• d'(i, q) = di(qi) for global state q = ⟨q1, ..., qk, qenv⟩,
• The transition function is constructed as follows. Let q = ⟨q1, ..., qk, qenv⟩ ∈ St', and let α = ⟨α1, ..., αk⟩ be an action profile such that αi ∈ d'(i, q). We define inputi(q, α) = ini(qi, out1(q1, α1), ..., outi−1(qi−1, αi−1), outi+1(qi+1, αi+1), ..., outk(qk, αk), outenv(qenv)) for each agent i = 1, ..., k, and inputenv(q, α) = inenv(qenv, out1(q1, α1), ..., outk(qk, αk)). Then, o'(q, α) = ⟨o1(⟨q1, α1⟩, input1(q, α)), ..., ok(⟨qk, αk⟩, inputk(q, α)), oenv(qenv, inputenv(q, α))⟩,
• For each i = 1, ..., k: ⟨q1, ..., qk, qenv⟩ ∼'i ⟨q'1, ..., q'k, q'env⟩ iff qi = q'i. (This shows another difference between the environment and the agents: the environment does not possess knowledge.)

REMARK 2. Note that MISs can be used as representations of CGSs too. In that case, the epistemic relations ∼'i are simply omitted in the unfolding. We denote the unfolding of a MIS S for initial states Q into a CGS by cgs(S, Q).
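Read operationally, Definition 6 composes the agents' out, in and o functions to obtain each global transition. A minimal Python sketch of one such step, continuing the hypothetical data structures of the previous sketch (the 'env' argument is assumed to be any object with out, inp and o attributes of the types given in Definition 4):

from typing import Sequence, Tuple

GlobalState = Tuple[str, ...]        # (q_1, ..., q_k, q_env)

def global_step(agents, env, q: GlobalState, alpha: Sequence[str]) -> GlobalState:
    """One transition of the unfolding cegs(S, Q) / cgs(S, Q), cf. Definition 6."""
    k = len(agents)
    local, q_env = q[:k], q[k]
    # External manifestations out_i(q_i, alpha_i) of all situated actions, plus the environment's.
    outs = [agents[i].out((local[i], alpha[i])) for i in range(k)]
    out_env = env.out(q_env)
    new_local = []
    for i in range(k):
        # input_i(q, alpha): what agent i perceives of the *other* agents and the environment.
        others = tuple(outs[:i] + outs[i + 1:] + [out_env])
        new_local.append(agents[i].o((local[i], alpha[i]), agents[i].inp(local[i], others)))
    # The environment reacts to the manifestations of all agents.
    new_env = env.o(q_env, env.inp(q_env, tuple(outs)))
    return tuple(new_local) + (new_env,)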
Propositions 3 and 5 state that modular interpreted systems can be used as representations for explicit models of multi-agent systems. On the other hand, these representations are not always compact, as demonstrated by Propositions 7 and 8.

PROPOSITION 3. For every CEGS M, there is a MIS SM and a set of global states Q of SM such that cegs(SM, Q) is isomorphic to M. (We say that two CEGS are isomorphic if they only differ in the names of states and/or actions.)

PROOF. Let M = ⟨{1, ..., k}, St, Act, d, o, Π, π, ∼1, ..., ∼k⟩ be a CEGS. We construct a MIS SM = ⟨{a1, ..., ak}, env, Act, In⟩ with agents ai = ⟨Sti, di, outi, ini, oi, Πi, πi⟩ and environment env = ⟨Stenv, outenv, inenv, oenv, Πenv, πenv⟩, plus a set Q ⊆ Stg of global states, as follows.
• In = Act ∪ St ∪ (Act^(k−1) × St),
• Sti = {[q]∼i | q ∈ St} for 1 ≤ i ≤ k (i.e., Sti is the set of i's indistinguishability classes in M),
• Stenv = St,
• di([q]∼i) = d(i, q) for 1 ≤ i ≤ k (this is well-defined since d(i, q) = d(i, q') whenever q ∼i q'),
• outi([q]∼i, αi) = αi for 1 ≤ i ≤ k; outenv(q) = q,
• ini([q]∼i, α1, ..., αi−1, αi+1, ..., αk, qenv) = ⟨α1, ..., αi−1, αi+1, ..., αk, qenv⟩ for i ∈ {1, ..., k}; inenv(q, α1, ..., αk) = ⟨α1, ..., αk⟩; ini(x) and inenv(x) are arbitrary for other arguments x,
• oi(⟨[q]∼i, αi⟩, ⟨α1, ..., αi−1, αi+1, ..., αk, qenv⟩) = [o(qenv, α1, ..., αk)]∼i for 1 ≤ i ≤ k and αi ∈ di([q]∼i); oenv(q, ⟨α1, ..., αk⟩) = o(q, α1, ..., αk); oi and oenv are arbitrary for other arguments,
• Πi = ∅ for 1 ≤ i ≤ k, and Πenv = Π,
• πenv(p) = π(p),
• Q = {⟨[q]∼1, ..., [q]∼k, q⟩ : q ∈ St}.
Let M' = cegs(SM, Q) = ⟨Agt', St', Act', d', o', Π', π', ∼'1, ..., ∼'k⟩. We argue that M and M' are isomorphic by establishing a one-to-one correspondence between the respective sets of states, and showing that the other parts of the structures agree on corresponding states. First we show that, for any q̂' = ⟨[q']∼1, ..., [q']∼k, q'⟩ ∈ Q and any α = ⟨α1, ..., αk⟩ such that αi ∈ d'(i, q̂'), we have

o'(q̂', α) = ⟨[q]∼1, ..., [q]∼k, q⟩ where q = o(q', α).   (1)

Let q̂ = o'(q̂', α). Now, for any i: inputi(q̂', α) = ini([q']∼i, out1([q']∼1, α1), ..., outi−1([q']∼i−1, αi−1), outi+1([q']∼i+1, αi+1), ..., outk([q']∼k, αk), outenv(q')) = ini([q']∼i, α1, ..., αi−1, αi+1, ..., αk, q') = ⟨α1, ..., αi−1, αi+1, ..., αk, q'⟩. Similarly, we get that inputenv(q̂', α) = ⟨α1, ..., αk⟩. Thus we get that o'(q̂', α) = ⟨o1(⟨[q']∼1, α1⟩, input1(q̂', α)), ..., ok(⟨[q']∼k, αk⟩, inputk(q̂', α)), oenv(q', inputenv(q̂', α))⟩ = ⟨[o(q', α1, ..., αk)]∼1, ..., [o(q', α1, ..., αk)]∼k, o(q', α1, ..., αk)⟩. Thus, q̂ = ⟨[q]∼1, ..., [q]∼k, q⟩ for q = o(q', α1, ..., αk), which completes the proof of (1).

We now argue that St' = Q. Clearly, Q ⊆ St'. Let q̂ ∈ St'; we must show that q̂ ∈ Q. The argument is by induction on the length of the least o' path from Q to q̂. The base case, q̂ ∈ Q, is immediate. For the inductive step, q̂ = o'(q̂', α) for some q̂' ∈ Q, and then we have that q̂ ∈ Q by (1). Thus, St' = Q. Now we have a one-to-one correspondence between St and St': r ∈ St corresponds to ⟨[r]∼1, ..., [r]∼k, r⟩ ∈ St'. It remains to be shown that the other parts of the structures M and M' agree on corresponding states:
• Agt' = Agt,
• Act' = Act,
• Π' = Π1 ∪ ... ∪ Πk ∪ Πenv = Π,
• For p ∈ Π' = Π: ⟨[q']∼1, ..., [q']∼k, q'⟩ ∈ π'(p) iff q' ∈ πenv(p) iff q' ∈ π(p) (same valuations at corresponding states),
• d'(i, ⟨[q']∼1, ..., [q']∼k, q'⟩) = di([q']∼i) = d(i, q'),
• It follows immediately from (1), and the fact that Q = St', that o'(⟨[q']∼1, ..., [q']∼k, q'⟩, α) = ⟨[r']∼1, ..., [r']∼k, r'⟩ iff o(q', α) = r' (transitions on the same joint action in corresponding states lead to corresponding states),
• ⟨[q']∼1, ..., [q']∼k, q'⟩ ∼'i ⟨[r']∼1, ..., [r']∼k, r'⟩ iff [q']∼i = [r']∼i iff q' ∼i r' (the accessibility relations relate corresponding states),
which completes the proof.

COROLLARY 4. For every CEGS M, there is an ATLir-equivalent MIS S with initial states Q, that is, for every state q in M there is a state q' in cegs(S, Q) satisfying exactly the same ATLir formulae, and vice versa.

PROPOSITION 5. For every CGS M, there is a MIS SM and a set of global states Q of SM such that cgs(SM, Q) is isomorphic to M.

PROOF. Let M = ⟨Agt, St, Act, d, o, Π, π⟩ be given. Now, let M̂ = ⟨Agt, St, Act, d, o, Π, π, ∼1, ..., ∼k⟩ for some arbitrary accessibility relations ∼i over St. By Proposition 3, there exists a MIS SM̂ with global states Q such that M̂' = cegs(SM̂, Q) is isomorphic to M̂. Let M' be the CGS obtained by removing the accessibility relations from M̂'. Clearly, M' is isomorphic to M.

COROLLARY 6. For every CGS M, there is an ATL-equivalent MIS S with initial states Q. That is, for every state q in M there is a state q' in cgs(S, Q) satisfying exactly the same ATL formulae, and vice versa.
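The construction in the proof of Proposition 3 can also be read operationally: each agent stores only its indistinguishability class, broadcasts its action, and lets the environment (which stores the full global state of M) determine the successor. A rough Python sketch under the same assumptions as the earlier sketches (0-based agent indices; o, d and eq_class are hypothetical accessors for the given CEGS):

def mis_from_cegs(k, o, d, eq_class):
    """Sketch of the construction from the proof of Proposition 3.
    o(q, joint) is the CEGS transition function, d(i, q) its protocol, and
    eq_class(i, q) agent i's indistinguishability class [q]~i, e.g. a frozenset of states."""
    def make_agent(i):
        d_i   = lambda cls: d(i, next(iter(cls)))        # well defined: d(i, .) is constant on [q]~i
        out_i = lambda situated: situated[1]             # out_i([q]~i, a) = a
        in_i  = lambda cls, others: others               # one symbol packing the others' actions and q_env
        def o_i(situated, impression):
            other_acts, q_env = impression[:-1], impression[-1]
            joint = other_acts[:i] + (situated[1],) + other_acts[i:]
            return eq_class(i, o(q_env, joint))          # [o(q_env, a_1, ..., a_k)]~i
        return d_i, out_i, in_i, o_i
    # The environment remembers the full global state of M and mimics o directly.
    env_out = lambda q: q
    env_in  = lambda q, acts: acts
    env_o   = lambda q, acts: o(q, acts)
    return [make_agent(i) for i in range(k)], (env_out, env_in, env_o)

Composing these components as in the unfolding sketch after Definition 6 reproduces, state by state, the transition function of the original CEGS.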
PROPOSITION 7. The local state spaces in a MIS are not always compact with respect to the underlying concurrent epistemic game structure.

PROOF. Take a CEGS M in which agent i always has perfect information about the current global state of the system. When constructing a modular interpreted system S such that M = cegs(S, Q), we have that Sti must be isomorphic with St.

The above property is a part of the interpreted systems heritage. The next proposition stems from the fact that explicit models (and interpreted systems) allow for intensive interaction between agents.

PROPOSITION 8. The size of In in S is, in general, exponential with respect to the number of local states and local actions. This is the case even when epistemic relations are not relevant (i.e., when S is taken as a representation of an ordinary CGS).

PROOF. Consider a CGS M with agents Agt = {1, ..., k}, global states St = {q^1_0, ..., q^1_1} × ... × {q^k_0, ..., q^k_k}, and actions Act = {0, 1}, all enabled everywhere. The transition function is defined as o(⟨q^1_{j1}, ..., q^k_{jk}⟩, α1, ..., αk) = ⟨q^1_{l1}, ..., q^k_{lk}⟩, where l_i = (j_i + α1 + ... + αk) mod i. Note that M can be represented as a modular interpreted system with succinct local state spaces Sti = {q^i_0, ..., q^i_i}. Still, the current actions of all agents are relevant to determine the resulting local transition of agent i.
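For illustration, a small Python sketch (ours, with a hypothetical integer encoding of the states) of the transition function used in this proof; note that every agent's action enters every local update, so no agent's next local state can be computed without the full joint action:

from typing import Sequence, Tuple

def o(q: Sequence[int], alpha: Sequence[int]) -> Tuple[int, ...]:
    """Global transition of the CGS from the proof of Proposition 8:
    q[i-1] = j_i is the index of agent i's local state q^i_{j_i}; alpha[i-1] in {0, 1}."""
    s = sum(alpha)                                # every agent's action enters every local update
    return tuple((q[i - 1] + s) % i for i in range(1, len(q) + 1))

print(o((0, 1, 2), (1, 0, 1)))                    # -> (0, 1, 1)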
We will call the items In, outi, and ini the interaction layer of a modular interpreted system S; the other elements of S constitute the local layer of the MIS. In this paper we are ultimately interested in model checking complexity with respect to the size of the local layer. To this end, we will assume that the size of the interaction layer is polynomial in the number of local states and actions.

Note that, by Propositions 7 and 8, not every explicit model submits to compact representation with a MIS. Still, as we declared at the beginning of Section 4, we are mainly interested in a modeling framework for systems of loosely coupled components, where interaction is essential, but most processing is done locally anyway. More importantly, the framework of MIS allows for separating the interaction of agents from their local structure to a larger extent. Moreover, we can control and measure the complexity of each layer in a finer way than before. First, we can try to abstract from the complexity of a layer (e.g., like in this paper, by assuming that the other layer is kept within certain complexity bounds). Second, we can also measure the interaction complexity of different agents separately.

4.2 Modular Interpreted Systems vs. Simple Reactive Modules

In this section we show that simple reactive modules are (as we already suggested) a specific (and somewhat limited) implementation of modular interpreted systems. First, we define our (quite strong) notion of equivalence of representations.

DEFINITION 7. Two representations are equivalent if they unfold to isomorphic concurrent epistemic game structures. They are CGS-equivalent if they unfold to the same CGS.

PROPOSITION 9. For any SRML there is a CGS-equivalent MIS.

PROOF. Consider an SRML R with k modules and n variables. We construct S = ⟨Agt, Act, In⟩ with Agt = {a1, ..., ak}, Act = {⊤1, ..., ⊤n, ⊥1, ..., ⊥n}, and In = ∪i=1..k (Sti × Sti) (the local state spaces Sti will be defined in a moment). Let us assume without loss of generality that ctri = {x1, ..., xr}. Also, we consider all guarded commands of i to be of type γ⊤i,ψ : ψ ❀ xi := ⊤, or γ⊥i,ψ : ψ ❀ xi := ⊥. Now, agent ai in S has the following components: Sti = P(ctri) (i.e., local states of ai are valuations of the variables controlled by i); di(qi) = {⊤1, ..., ⊤r, ⊥1, ..., ⊥r}; outi(qi, α) = ⟨qi, qi⟩; ini(qi, ⟨q1, q1⟩, ..., ⟨qi−1, qi−1⟩, ⟨qi+1, qi+1⟩, ..., ⟨qk, qk⟩) = ⟨{xi ∈ ctri | q1, ..., qk |= ⋁γ⊤i,ψ ψ}, {xi ∈ ctri | q1, ..., qk |= ⋁γ⊥i,ψ ψ}⟩, i.e., the pair ⟨t, f⟩ of controlled variables that some enabled command can currently set to ⊤, respectively to ⊥. To define local transitions, we consider three cases. If t = f = ∅ (no update is enabled), then oi(qi, α, ⟨t, f⟩) = qi for every action α. If t ≠ ∅, we take an arbitrary x̂ ∈ t, and define oi(qi, ⊤j, ⟨t, f⟩) = qi ∪ {xj} if xj ∈ t, and qi ∪ {x̂} otherwise; oi(qi, ⊥j, ⟨t, f⟩) = qi \ {xj} if xj ∈ f, and qi ∪ {x̂} otherwise. Moreover, if t = ∅ ≠ f, we take an arbitrary x̂ ∈ f, and define oi(qi, ⊤j, ⟨t, f⟩) = qi ∪ {xj} if xj ∈ t, and qi \ {x̂} otherwise; oi(qi, ⊥j, ⟨t, f⟩) = qi \ {xj} if xj ∈ f, and qi \ {x̂} otherwise. Finally, Πi = ctri, and qi ∈ πi(xj) iff xj ∈ qi.

The above construction shows that SRML have a more compact representation of states than MIS: ri local variables of agent i give rise to 2^ri local states. In a way, reactive modules (both simple and "full") are two-level representations: first, the system is represented as a product of modules; next, each module can be seen as a product of its variables (together with their update operations). Note, however, that the specification of updates with respect to a single variable in an SRML may require guarded commands of total length O(2^(r1 + ... + rk)). Thus, the representation of transitions in SRML is (in the worst case) no more compact than in MIS, despite the two-level structure of SRML. We observe finally that MIS are more general, because in SRML the current actions of other agents have no influence on the outcome of agent i's current action (although the outcome can be influenced by other agents' current local states).
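A rough Python sketch of the translation used in the proof of Proposition 9 (hypothetical encodings of our own: guards are predicates over the set of currently true variables, and the actions ⊤j / ⊥j are encoded as pairs ('T', x) and ('F', x)):

from typing import Callable, Dict, FrozenSet, List, Tuple

Valuation = FrozenSet[str]                 # a local state: the controlled variables that are currently true
Guard = Callable[[FrozenSet[str]], bool]   # a guard, evaluated on the global valuation

def make_in_i(ctr: List[str],
              cmds_true: Dict[str, List[Guard]],
              cmds_false: Dict[str, List[Guard]]) -> Callable:
    """in_i of the MIS agent built from one SRML module: it returns the pair (t, f) of
    controlled variables that some enabled command may currently set to true / false."""
    def in_i(local: Valuation, others: Tuple[Valuation, ...]) -> Tuple[Valuation, Valuation]:
        glob = frozenset().union(local, *others)          # the global valuation q_1, ..., q_k
        t = frozenset(x for x in ctr if any(g(glob) for g in cmds_true.get(x, [])))
        f = frozenset(x for x in ctr if any(g(glob) for g in cmds_false.get(x, [])))
        return (t, f)
    return in_i

def o_i(local: Valuation, action: Tuple[str, str],
        impression: Tuple[Valuation, Valuation]) -> Valuation:
    """Local transition o_i, following the three cases of the proof; the 'arbitrary'
    variable x-hat is fixed deterministically as min(...)."""
    t, f = impression
    if not t and not f:                                   # no update enabled
        return local
    sign, x = action
    if t:                                                 # case t != {} (x-hat taken from t)
        fb = min(t)
        if sign == 'T':
            return local | {x} if x in t else local | {fb}
        return local - {x} if x in f else local | {fb}
    fb = min(f)                                           # case t == {} != f (x-hat taken from f)
    if sign == 'T':
        return local | {x} if x in t else local - {fb}
    return local - {x} if x in f else local - {fb}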
4.3 Model Checking Modular Interpreted Systems

One of our main aims was to study the complexity of symbolic model checking ATLir in a meaningful way. Following the reviewers' remarks, we state our complexity results only as conjectures. Preliminary proofs can be found in [14].

CONJECTURE 10. Model checking ATL for modular interpreted systems is EXPTIME-complete.

CONJECTURE 11. Model checking ATLir for the class of modular interpreted systems is PSPACE-complete.

A summary of complexity results for model checking temporal and strategic logics (with and without an epistemic component) is given in the table below. The table presents completeness results for various models and settings of input parameters. Symbols n, k, m stand for the number of states, agents and transitions in an explicit model; l is the length of the formula, and nlocal is the number of local states in a concurrent program or modular interpreted system. The new results, conjectured in this paper, are marked with an asterisk. Note that the result for model checking ATL against modular interpreted systems is an extension of the result from [22].

           m, l                 n, k, l              nlocal, k, l
  CTL      P [5]                P [5]                PSPACE [15]
  CTLK     P [5, 8]             P [5, 8]             PSPACE [20]
  ATL      P [3]                Δ^P_3 [12, 16]       EXPTIME*
  ATLir    Δ^P_2 [21, 13]       Δ^P_3 [13]           PSPACE*

If we are right, then the results for ATL and ATLir form an intriguing pattern. When we compare model checking agents with perfect vs. imperfect information, the first problem appears to be much easier against explicit models measured with the number of transitions; next, we get the same complexity class against explicit models measured with the number of states and agents; finally, model checking imperfect information turns out to be easier than model checking perfect information for modular interpreted systems. Why can it be so? First, a MIS unfolds into a CEGS and into a CGS in different ways. In the first case, the MIS is assumed to encode the epistemic relations explicitly (which makes it explode when we model agents with perfect, or almost perfect, information). In the latter case, the epistemic aspect is ignored, which gives some extra room for encoding the transition relation more efficiently. Another crucial factor is the number of available strategies (relative to the size of the input parameters). The number of all strategies is exponential in the number of global states; for uniform strategies, there are usually far fewer of them, but still exponentially many in general. Thus, the fact that perfect information strategies can be synthesized incrementally has a substantial impact on the complexity of the problem. However, measured in terms of local states and agents, the number of all strategies is doubly exponential, while there are "only" exponentially many uniform strategies, which settles the results in favor of imperfect information.

5. CONCLUSIONS

We have presented a new class of representations for open multi-agent systems. Our representations, called modular interpreted systems, are: modular, in the sense that components can be changed, replaced, removed or added, with as few changes to the whole representation as possible; more compact than traditional explicit representations; and grounded, in the sense that the correspondences between the primitives of the model and the entities being modeled are more immediate, giving a methodology for designing and implementing systems. We also conjecture that the complexity of model checking strategic ability for our representations is higher if we assume perfect information than if we assume imperfect information. The solutions proposed in this paper are not necessarily perfect (for example, the "impression" functions ini seem to be the main source of non-modularity in MIS, and can perhaps be improved), but we believe them to be a step in the right direction. We also do not mean to claim that our representations should replace more elaborate modeling languages like Promela or reactive modules. We only suggest that there is a need for compact, modular and reasonably grounded models that are more expressive than concurrent (epistemic) programs, and still allow for easier theoretical analysis than reactive modules. We also suggest that MIS might be better suited for modeling simple multi-agent domains, especially for human-oriented (as opposed to computer-oriented) design.

6. ACKNOWLEDGMENTS

We thank the anonymous reviewers and Andrzej Tarlecki for their helpful remarks. Thomas Ågotnes' work on this paper was supported by grants 166525/V30 and 176853/S10 from the Research Council of Norway.

7. REFERENCES

[1] R. Alur and T. A. Henzinger. Reactive modules. Formal Methods in System Design, 15(1):7-48, 1999.
[2] R. Alur, T. A. Henzinger, and O. Kupferman. Alternating-time Temporal Logic. Lecture Notes in Computer Science, 1536:23-60, 1998.
[3] R. Alur, T. A. Henzinger, and O. Kupferman. Alternating-time Temporal Logic. Journal of the ACM, 49:672-713, 2002.
[4] R. Alur, T. A. Henzinger, F. Y. C. Mang, S. Qadeer, S. K. Rajamani, and S. Tasiran. MOCHA user manual. In Proceedings of CAV'98, volume 1427 of Lecture Notes in Computer Science, pages 521-525, 1998.
[5] E. M. Clarke, E. A. Emerson, and A. P. Sistla. Automatic verification of finite-state concurrent systems using temporal logic specifications. ACM Transactions on Programming Languages and Systems, 8(2):244-263, 1986.
[6] E. A. Emerson and J. Y. Halpern. "Sometimes" and "not never" revisited: On branching versus linear time temporal logic. In Proceedings of the Annual ACM Symposium on Principles of Programming Languages, pages 151-178, 1982.
[7] R. Fagin, J. Y. Halpern, Y. Moses, and M. Y. Vardi. Reasoning about Knowledge. MIT Press: Cambridge, MA, 1995.
[8] M. Franceschet, A. Montanari, and M. de Rijke. Model checking for combined logics. In Proceedings of the 3rd International Conference on Temporal Logic (ICTL), 2000.
[9] V. Goranko and W. Jamroga. Comparing semantics of logics for multi-agent systems. Synthese, 139(2):241-280, 2004.
[10] J. Y. Halpern. Reasoning about knowledge: a survey. In D. M. Gabbay, C. J. Hogger, and J. A. Robinson, editors, The Handbook of Logic in Artificial Intelligence and Logic Programming, Volume IV, pages 1-34. Oxford University Press, 1995.
[11] J. Y. Halpern and R. Fagin. Modelling knowledge and action in distributed systems. Distributed Computing, 3(4):159-177, 1989.
[12] W. Jamroga and J. Dix. Do agents make model checking explode (computationally)? In M. Pěchouček, P. Petta, and L. Z. Varga, editors, Proceedings of CEEMAS 2005, volume 3690 of Lecture Notes in Computer Science, pages 398-407. Springer Verlag, 2005.
[13] W. Jamroga and J. Dix. Model checking abilities of agents: A closer look. Submitted, 2006.
[14] W. Jamroga and T. Ågotnes. Modular interpreted systems: A preliminary report. Technical Report IfI-06-15, Clausthal University of Technology, 2006.
[15] O. Kupferman, M. Y. Vardi, and P. Wolper. An automata-theoretic approach to branching-time model checking. Journal of the ACM, 47(2):312-360, 2000.
[16] F. Laroussinie, N. Markey, and G. Oreiby. Expressiveness and complexity of ATL. Technical Report LSV-06-03, CNRS & ENS Cachan, France, 2006.
[17] K. L. McMillan. Symbolic Model Checking: An Approach to the State Explosion Problem. Kluwer Academic Publishers, 1993.
[18] K. L. McMillan. Applying SAT methods in unbounded symbolic model checking. In Proceedings of CAV'02, volume 2404 of Lecture Notes in Computer Science, pages 250-264, 2002.
[19] W. Penczek and A. Lomuscio. Verifying epistemic properties of multi-agent systems via bounded model checking. In Proceedings of AAMAS'03, pages 209-216, New York, NY, USA, 2003. ACM Press.
[20] F. Raimondi and A. Lomuscio. The complexity of symbolic model checking temporal-epistemic logics. In L. Czaja, editor, Proceedings of CS&P'05, 2005.
[21] P. Y. Schobbens. Alternating-time logic with imperfect recall. Electronic Notes in Theoretical Computer Science, 85(2), 2004.
[22] W. van der Hoek, A. Lomuscio, and M. Wooldridge. On the complexity of practical ATL model checking. In P. Stone and G. Weiss, editors, Proceedings of AAMAS'06, pages 201-208, 2006.
Modular Interpreted Systems ABSTRACT We propose a new class of representations that can be used for modeling (and model checking) temporal, strategic and epistemic properties of agents and their teams. Our representations borrow the main ideas from interpreted systems of Halpern, Fagin et al.; however, they are also modular and compact in the way concurrent programs are. We also mention preliminary results on model checking alternating-time temporal logic for this natural class of models. 1. INTRODUCTION The logical foundations of multi-agent systems have received much attention in recent years. Logic has been used to represent and reason about, e.g., knowledge [7], time [6], cooperation and strategic ability [3]. Lately, an increasing amount of research has focused on higher level representation languages for models of such logics, motivated mainly by the need for compact representations, and for representations that correspond more closely to the actual systems which are modeled. Multi-agent systems are open systems, in the sense that agents interact with an environment only partially known in advance. Thus, we need representations of models of multi-agent systems which are modular, in the sense that a component, such as an agent, can be replaced, removed, or added, without major changes to the representation of the whole model. However, as we argue in this paper, few existing representation languages are both modular, compact and computationally grounded on the one hand, and allow for representing properties of both knowledge and strategic ability, on the other. In this paper we present a new class of representations for models of open multi-agent systems, which are modular, compact and come with an implicit methodology for modeling and designing actual systems. The structure of the paper is as follows. First, in Section 2, we present the background of our work--that is, logics that combine time, knowledge, and strategies. More precisely: modal logics that combine branching time, knowledge, and strategies under incomplete information. We start with computation tree logic CTL, then we add knowledge (CTLK), and then we discuss two variants of alternating-time temporal logic (ATL): one for the perfect, and one for the imperfect information case. The semantics of logics like the ones presented in Section 2 are usually defined over explicit models (Kripke structures) that enumerate all possible (global) states of the system. However, enumerating these states is one of the things one mostly wants to avoid, because there are too many of them even for simple systems. Thus, we usually need representations that are more compact. Another reason for using a more specialized class of models is that general Kripke structures do not always give enough help in terms of methodology, both at the stage of design, nor at implementation. This calls for a semantics which is more grounded, in the sense that the correspondence between elements of the model, and the entities that are modeled, is more immediate. In Section 3, we present an overview of representations that have been used for modeling and model checking systems in which time, action (and possibly knowledge) are important; we mention especially representations used for theoretical analysis. We point out that the compact and/or grounded representations of temporal models do not play their role in a satisfactory way when agents' strategies are considered. 
Finally, in Section 4, we present our framework of modular interpreted systems (MIS), and show where it fits in the picture. We conclude with a somewhat surprising hypothesis, that model checking ability under imperfect information for MIS can be computationally cheaper than model checking perfect information. Until now, almost all complexity results were distinctly in favor of perfect information strategies (and the others were indifferent). 2. LOGICS OF TIME, KNOWLEDGE, AND STRATEGIC ABILITY First, we present the logics CTL, CTLK, ATL and ATLir that are the starting point of our study. 2.1 Branching Time: CTL Computation tree logic CTL [6] includes operators for temporal properties of systems: i.e., path quantifier E ("there is a path"), to gether with temporal operators: O ("in the next state"), o ("always from now on") and U ("until").1 Every occurrence of a temporal operator is immediately preceded by exactly one path quantifier (this variant of the language is sometimes called "vanilla" CTL). Let Π be a set of atomic propositions with a typical element p. CTL formulae ϕ are defined as follows: The semantics of CTL is based on Kripke models M = ~ St, R, π ~, which include a nonempty set of states St, a state transition relation R ⊆ St × St, and a valuation of propositions π: Π → P (St). A path λ in M refers to a possible behavior (or computation) of system M, and can be represented as an infinite sequence of states q0q1q2...such that qiRqi +1 for every i = 0, 1, 2,.... We denote the ith state in λ by λ [i]. A q-path is a path that starts in q. Interpretation of a formula in a state q in model M is defined as follows: M, q | = p iff q ∈ π (p); 2.2 Adding Knowledge: CTLK CTLK [19] is a straightforward combination of CTL and standard epistemic logic [10, 7]. Let Agt = {1,..., k} be a set of agents with a typical element a. Epistemic logic uses operators for representing agents' knowledge: Kaϕ is read as "agent a knows that ϕ". Models of CTLK extend models of CTL with epistemic indistinguishability relations ∼ a ⊆ St × St (one per agent). We assume that all ∼ a are equivalences. The semantics of epistemic operators is defined as follows: M, q | = Kaϕ iff M, q | = ϕ for every q ~ such that q ∼ a q ~. Note that, when talking about agents' knowledge, we implicitly assume that agents may have imperfect information about the actual current state of the world (otherwise the notion of knowledge would be trivial). This does not have influence on the way we model evolution of a system as a single unit, but it will become important when particular agents and their strategies come to the fore. 2.3 Agents and Their Strategies: ATL Alternating-time temporal logic ATL [3] is a logic for reasoning about temporal and strategic properties of open computational systems (multi-agent systems in particular). The language of ATL consists of the following formulae: where A ⊆ Agt. Informally, ~ ~ A ~ ~ ϕ says that agents A have a collective strategy to enforce ϕ. It should be noted that the CTL path quantifiers A, E can be expressed with ~ ~ ∅ ~ ~, ~ ~ Agt ~ ~ respectively. The semantics of ATL is defined in so called concurrent game structures (CGSs). A CGS is a tuple 1Additional operators A ("for every path") and O ("sometime in the future") are defined in the usual way. consisting of: a set Agt = {1,..., k} of agents; set St of states; valuation of propositions π: Π → P (St); set Act of atomic actions. Function d: Agt × St → P (Act) indicates the actions available to agent a ∈ Agt in state q ∈ St. 
Finally, o is a deterministic transition function which maps a state q ∈ St and an action profile ~ α1,..., αk ~ ∈ Actk, αi ∈ d (i, q), to another state q ~ = o (q, α1,..., αk). 2.4 Agents with Imperfect Information: ATLir As ATL does not include incomplete information in its scope, it can be seen as a logic for reasoning about agents who always have complete knowledge about the current state of the whole system. ATLir [21] includes the same formulae as ATL, except that the cooperation modalities are presented with a subscript: ~ ~ A ~ ~ ir indicates that they address agents with imperfect information and imperfect recall. Formally, the recursive definition of ATLir formulae is: Models of ATLir, concurrent epistemic game structures (CEGS), can be defined as tuples M = ~ Agt, St, Act, d, o, ∼ 1,..., ∼ k, Π, π ~, where ~ Agt, St, Act, d, o, Π, π ~ is a CGS, and ∼ 1,..., ∼ k are epistemic (equivalence) relations. It is required that agents have the same choices in indistinguishable states: q ∼ a q ~ implies d (a, q) = d (a, q ~). ATLir restricts the strategies that can be used by agents to uniform strategies, i.e. functions sa: St → Act, such that: (1) sa (q) ∈ d (a, q), and (2) if q ∼ a q ~ then sa (q) = sa (q ~). A collective strategy is uniform if it contains only uniform individual strategies. Again, the function out (q, SA) returns the set of all paths that may result from agents A executing collective strategy SA from state q. The semantics of ATLir formulae can be defined as follows: M, q | = ~ ~ A ~ ~ ir Oϕ iff there is a uniform collective strategy SA such that, for every a ∈ A, q ~ such that q ∼ a q ~, and λ ∈ out (SA, q ~), we have M, λ [1] | = ϕ; 2This is a deviation from the original semantics of ATL [3], where strategies assign agents' choices to sequences of states, which suggests that agents can by definition recall the whole history of each game. While the choice of one or another notion of strategy affects the semantics of the full ATL ∗, and most ATL extensions (e.g. for games with imperfect information), it should be pointed out that both types of strategies yield equivalent semantics for "pure" ATL (cf. [21]). 898 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) M, q | = ((A)) ir ✷ ϕ iff there exists SA such that, for every a E A, q' such that q ∼ a q', and λ E out (SA, q'), we have M, λ [i] for every i> 0; M, q | = ((A)) irϕ U ψ iff there exist SA such that, for every a E A, q' such that q ∼ a q', and λ E out (SA, q'), there is i> 0 for which M, λ [i] | = ψ, and M, λ [j] | = ϕ for every 0 <j <i. That is, ((A)) irϕ holds iff A have a uniform collective strategy, such that for every path that can possibly result from execution of the strategy according to at least one agent from A, ϕ is the case. 3. MODELS AND MODEL CHECKING In this section, we present and discuss various (existing) representations of systems that can be used for modeling and model checking. We believe that the two most important points of reference are in this case: (1) the modeling formalism (i.e., the logic and the semantics we use), and (2) the phenomenon, or more generally, the domain we are going to model (to which we will often refer as the "real world"). Our aim is a representation which is reasonably close to the real world (i.e., it is sufficiently compact and grounded), and still not too far away from the formalism (so that it e.g. easily allows for theoretical analysis of computational problems). 
We begin with discussing the merits of "explicit" models--in our case, these are transition systems, concurrent game structures and CEGSs, presented in the previous section. 3.1 Explicit Models Obviously, an advantage of explicit models is that they are very close to the semantics of our logics (simply because they are the semantics). On the other hand, they are in many ways difficult to use to describe an actual system: • Exponential size: temporal models usually have an exponential number of states with respect to any higher-level description (e.g. Boolean variables, n-ary attributes etc.). Also, their size is exponential in the number of processes (or agents) if the evolution of a system results from joint (synchronous or asynchronous) actions of several active entities [15]. For CGSs the situation is even worse: here, also the number of transitions is exponential, even if we fix the number of states .3 In practice, this means that such representations are very seldom scalable. • Explicit models include no modularity. States in a model refer to global states of the system; transitions in the model correspond to global transitions as well, i.e., they represent (in an atomic way) everything that may happen in one single step, regardless of who has done it, to whom, and in what way. • Logics like ATL are often advertised as frameworks for modeling and reasoning about open computational systems. Ideally, one would like the elements of such a system to have as little interdependencies as possible, so that they can be "plugged" in and out without much hassle, for instance when we want to test various designs or implementations of the active component. In the case of a multi-agent system the 3Another class of ATL models, alternating transition systems [2] represent transitions in a more succinct way. While we still have exponentially many states in an ATS, the number of transitions is simply quadratic wrt. to states (like for CTL models). Unfortunately, ATS are even less modular and harder to design than concurrent game structures, and they cannot be easily extended to handle incomplete information (cf. [9]). need is perhaps even more obvious. We do not only need to "re-plug" various designs of a single agent in the overall architecture; we usually also need to change (e.g., increase) the number of agents acting in a given environment without necessarily changing the design of the whole system. Unfortunately, ATL models are anything but open in this sense. Theoretical complexity results for explicit models are as follows. Model checking CTL and CTLK is P-complete, and can be done in time O (ml), where m is the number of transitions in the model, and l is the length of the formula [5]. Alternatively, it can be done in time O (n2l), where n is the number of states. Model checking ATL is P-complete wrt. m, l and AP3 - complete wrt. n, k, l (k being the number of agents) [3, 12, 16]. Model checking ATLir is AP 2 complete wrt. m, l and AP3 - complete wrt. n, k, l [21, 13]. 3.2 Compressed Representations Explicit representation of all states and transitions is inefficient in many ways. An alternative is to represent the state/transition space in a symbolic way [17, 18]. Such models offer some hope for feasible model checking properties of open/multi-agent systems, although it is well known that they are compact only in a fraction of all cases .4 For us, however, they are insufficient for another reason: they are merely optimized representations of explicit models. 
Thus, they are neither more open nor better grounded: they were meant to optimize implementation rather than facilitate design or modeling methodology. 3.3 Interpreted Systems Interpreted systems [11, 7] are held by many as a prime example of computationally grounded models of distributed systems. An interpreted system can be defined as a tuple IS = (St1,..., Stk, Stenv, R, π). St1,..., Stk are local state spaces of agents 1,..., k, and Stenv is the set of states of the environment. The set of global states is defined as St = St1 x...x Stk x Stenv; R C St x St is a transition relation, and π: Π → P (St). While the transition relation encapsulates the (possible) evolution of the system over time, the epistemic dimension is defined by the local components of each global state: (q1,..., qk, qenv) ∼ i (q' 1,..., qk', qenv) iff qi = q ` i. It is easy to see that such a representation is modular and compact as far as we are concerned with states. Moreover, it gives a natural ("grounded") approach to knowledge, and suggests an intuitive methodology for modeling epistemic states. Unfortunately, the way transitions are represented in interpreted systems is neither compact, nor modular, nor grounded: the temporal aspect of the system is given by a joint transition function, exactly like in explicit models. This is not without a reason: if we separate activities of the agents too much, we cannot model interaction in the framework any more, and interaction is the most interesting thing here. But the bottom line is that the temporal dimension of an interpreted system has exponential representation. And it is almost as difficult to "plug" components in and out of an interpreted system, as for an ordinary CTL or ATL model, since the "local" activity of an agent is completely merged with his interaction with the rest of the system. 3.4 Concurrent Programs The idea of concurrent programs has been long known in the literature on distributed systems. Here, we use the formulation from [15]. A concurrent program P is composed of k concurrent processes, each described by a labeled transition system Pi = (Sti, Acti, Ri, Πi, πi), where Sti is the set of local states of process 4Representation R of an explicit model M is compact if the size of R is logarithmic with respect to the size of M. The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 899 i, Acti is the set of local actions, Ri C Sti x Acti x Sti is a transition relation, and Πi, πi are the set of local propositions and their valuation. The behavior of program P is given by the product automaton of P1,..., Pk under the assumption that processes work asynchronously, actions are interleaved, and synchronization is obtained through common action names. Concurrent programs have several advantages. First of all, they are modular and compact. They allow for "local" modeling of components--much more so than interpreted systems (not only states, but also actions are local here). Moreover, they allow for representing explicit interaction between local transitions of reactive processes, like willful communication, and synchronization. On the other hand, they do not allow for representing implicit, "incidental", or not entirely benevolent interaction between processes. For example, if we want to represent the act of pushing somebody, the pushed object must explicitly execute an action of "being pushed", which seems somewhat ridiculous. Side effects of actions are also not easy to model. 
Still, this is a minor complaint in the context of CTL, because for temporal logics we are only interested in the flow of transitions, and not in the underlying actions. For temporal reasoning about k asynchronous processes with no implicit interaction, concurrent programs seem just about perfect. The situation is different when we talk about autonomous, proactive components (like agents), acting together (cooperatively or adversely) in a common environment--and we want to address their strategies and abilities. Now, particular actions are no less important than the resulting transitions. Actions may influence other agents' local states without their consent, they may have side effects on other agents' states etc. . Passing messages and/or calling procedures is by no means the only way of interaction between agents. Moreover, the availability of actions (to an agent) should not depend on the actions that will be executed by other agents at the same time--these are the outcome states that may depend on these actions! Finally, we would often like to assume that agents act synchronously. In particular, all agents play simultaneously in concurrent game structures. But, assuming synchrony and autonomy of actions, synchronization can no longer be a means of coordination. To sum up, we need a representation which is very much like concurrent programs, but allows for modeling agents that play synchronously, and which enables modeling more sophisticated interaction between agents' actions. The first postulate is easy to satisfy, as we show in the following section. The second will be addressed in Section 4. We note that model checking CTL against concurrent programs is PSPACE-complete in the number of local states and the length of the formula [15]. 3.5 Synchronous CP and Simple Reactive Modules The semantics of ATL is based on synchronous models where availability of actions does not depend on the actions currently executed by the other players. A slightly different variant of concurrent programs can be defined via synchronous product of programs, so that all agents play simultaneously .5 Unfortunately, under such interpretation, no direct interaction between agents' actions can be modeled at all. DEFINITION 2. A synchronous concurrent program consists of k concurrent processes Pi = (Sti, Acti, Ri, Πi, πi) with the follow 5The concept is not new, of course, and has already existed in folk knowledge, although we failed to find an explicit definition in the literature. ing unfolding to a CGS: Agt = {1,..., k}, St = Qki = 1 Sti, Act = Ski = 1 Acti, d (i, (q1,..., qk)) = {αi | (qi, αi, q ~ i) E Ri for some q ~ i E Sti}, o ((q1,..., qk), α1,..., αk) = (q ~ 1,..., q ~ k) such that (qi, αi, q ~ i) E Ri for every i; Π = Ski = 1 Πi, and π (p) = πi (p) for p E Πi. We note that the simple reactive modules (SRML) from [22] can be seen as a particular implementation of synchronous concurrent programs. DEFINITION 3. A SRML system is a tuple (Σ, Π, m1,..., mk), where Σ = {1,..., k} is a set of modules (or agents), Π is a set of Boolean variables, and, for each i E Σ, we have mi = (ctri, initi, updatei), where ctri C Π. Sets initi and updatei consist of guarded commands of the form φ ❀ v ~ 1: = ψ1; ...; v ~ k: = ψk, where every vj E ctri, and φ, ψ1,..., ψk are propositional formulae over Π. It is required that ctr1,...ctrk partitions Π. The idea is that agent i controls the variables ctri. 
The init guarded commands are used to initialize the controlled variables, while the update guarded commands can change their values in each round. A guarded command is enabled if the guard φ is true in the current state of the system. In each round an enabled update guarded command is executed: each ψj is evaluated against the current state of the system, and its logical value is assigned to vj. Several guarded commands being enabled at the same time model non-deterministic choice. Model checking ATL for SRML has been proved EXPTIMEcomplete in the size of the model and the length of the formula [22]. 3.6 Concurrent Epistemic Programs Concurrent programs (both asynchronous and synchronous) can be used to encode epistemic relations too--exactly in the same way as interpreted systems do [20]. That is, when unfolding a concurrent program to a model of CTLK or ATLir, we define that (q1,..., qk)--i (q ~ 1,..., q ~ k) iff qi = q ~ i. Model checking CTLK against concurrent epistemic programs is PSPACE-complete [20]. SRML can be also interpreted in the same way; then, we would assume that every agent can see only the variables he controls. Concurrent epistemic programs are modular and have a "grounded" semantics. They are usually compact (albeit not always: for example, an agent with perfect information will always blow up the size of such a program). Still, they inherit all the problems of concurrent programs with perfect information, discussed in Section 3.4: limited interaction between components, availability of local actions depending on the actual transition etc. . The problems were already important for agents with perfect information, but they become even more crucial when agents have only limited knowledge of the current situation. One of the most important applications of logics that combine strategic and epistemic properties is verification of communication protocols (e.g., in the context of security). Now, we may want to, e.g., check agents' ability to pass an information between them, without letting anybody else intercept the message. The point is that the action of intercepting is by definition enabled; we just look for a protocol in which the transition of "successful interception" is never carried out. So, availability of actions must be independent of the actions chosen by the other agents under incomplete information. On the other hand, interaction is arguably the most interesting feature of multi-agent systems, and it is really hard to imagine models of strategic-epistemic logics, in which it is not possible to represent communication. 3.7 Reactive Modules Reactive modules [1] can be seen as a refinement of concurrent epistemic programs (primarily used by the MOCHA model checker [4]), but they are much more powerful, expressive and 900 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) grounded. We have already mentioned a very limited variant of RML (i.e., SRML). The vocabulary of RML is very close to implementations (in terms of general computational systems): the modules are essentially collections of variables, states are just valuations of variables; events/actions are variable updates. However, the sets of variables controlled by different agents can overlap, they can change over time etc. . Moreover, reactive modules support incomplete information (through observability of variables), although it is not the main focus of RML. 
Again, the relationship between sets of observable variables (and to sets of controlled variables) is mostly left up to the designer of a system. Agents can act synchronously as well as asynchronously. To sum up, RML define a powerful framework for modeling distributed systems with various kinds of synchrony and asynchrony. However, we believe that there is still a need for a simpler and slightly more abstract class of representations. First, the framework of RML is technically complicated, involving a number auxiliary concepts and their definitions. Second, it is not always convenient to represent all that is going on in a multi-agent system as reading and/or writing from/to program variables. This view of a multi-agent system is arguably close to its computer implementation, but usually rather distant from the real world domain--hence the need for a more abstract, and more conceptually flexible framework. Third, the separation of the "local" complexity, and the complexity of interaction is not straightforward. Our new proposal, more in the spirit of interpreted systems, takes these observations as the starting point. The proposed framework is presented in Section 4. 4. MODULAR INTERPRETED SYSTEMS The idea behind distributed systems (multi-agent systems even more so) is that we deal with several loosely coupled components, where most of the processing goes on inside components (i.e., locally), and only a small fraction of the processing occurs between the components. Interaction is crucial (which makes concurrent programs an insufficient modeling tool), but it usually consumes much less of the agent's resources than local computations (which makes the explicit transition tables of CGS, CEGS, and interpreted systems an overkill). Modular interpreted systems, proposed here, extrapolate the modeling idea behind interpreted systems in a way that allows for a tight control of the interaction complexity. DEFINITION 4. A modular interpreted system (MIS) is defined as a tuple where Agt = {a1,..., ak} is a set of agents, env is the environment, Act is a set of actions, and In is a set of symbols called interaction alphabet. Each agent has the following internal structure: ai = ~ Sti, di, outi, ini, oi, Πi, πi ~, where: • Sti is a set of local states, • di: Sti → P (Act) defines local availability of actions; for convenience of the notation, we additionally define the set of situated actions as Di = {~ qi, α ~ | qi ∈ Sti, α ∈ di (qi)}, • outi, ini are interaction functions; outi: Di → In refers to the influence that a given situated action (of agent ai) may possibly have on the external world, and ini: Sti × Ink → In translates external manifestations of the other agents (and the environment) into the "impression" that they make on ai's transition function depending on the local state of ai, • oi: Di × In → Sti is a (deterministic) local transition function, • Πi is a set of local propositions of agent ai where we require that Πi and Πj are disjunct when i = ~ j, and • πi: Πi → P (Sti) is a valuation of these propositions. The environment env = ~ Stenv, outenv, inenv, oenv, Πenv, πenv ~ has the same structure as an agent except that it does not perform actions, and that thus outenv: Stenv → In and oenv: Stenv × In → Stenv. Within our framework, we assume that every action is executed by an actor, that is, an agent. As a consequence, every actor is explicitly represented in a MIS as an agent, just like in the case of CGS and CEGS. 
The environment, on the other hand, represents the (passive) context of agents' actions. In practice, it serves to capture the aspects of the global state that are not observable by any of the agents. The input functions ini seem to be the fragile spots here: when given explicitly as tables, they have size exponential wrt. the number of agents (and linear wrt. the size of In). However, we can use, e.g., a construction similar to the one from [16] to represent interaction functions more compactly. DEFINITION 5. Implicit input function for state q ∈ Sti is given by a sequence ~ ~ ϕ1, η1 ~,..., ~ ϕn, ηn ~ ~, where each ηj ∈ In is an interaction symbol, and each ϕj is a boolean combination of propositions ˆηi, with η ∈ In; ˆηi stands for "η is the symbol currently generated by agent i". The input function is now defined as follows: ini (q, ~ 1,..., ~ k, ~ env) = ηj iff j is the lowest index such that {ˆ ~ 11,..., ˆ ~ kk, ˆ ~ env env} | = ϕj. It is required that ϕn ≡ ~, so that the mapping is effective. REMARK 1. Every ini can be encoded as an implicit inputfunction, with each ϕj being ofpolynomial size with respect to the number of interaction symbols (cf. [16]). Note that, for some domains, the MIS representation of a system requires exponentially many symbols in the interaction alphabet In. In such a case, the problem is inherent to the domain, and ini will have size exponential wrt the number of agents. 4.1 Representing Agent Systems with MIS Let Stg = (nki = 1 Sti) × Stenv be the set of all possible global states generated by a modular interpreted system S. DEFINITION 6. The unfolding of a MIS S for initial states Q ⊆ Stg to a CEGS cegs (S, Q) = ~ Agt', St', Π', π', Act', d', o', ∼' 1,..., ∼' k ~ is defined as follows: • Agt' = {1,..., k} and Act' = Act, • St' is the set of global states from Stg which are reachable from some state in Q via the transition relation defined by o' (below), • Π' = Uki = 1 Πi ∪ Πenv, • For each q = ~ q1,..., qk, qenv ~ ∈ St' and i = 1,..., k, env, we define q ∈ π' (p) iffp ∈ Πi and qi ∈ πi (p), • d' (i, q) = di (qi) for global state q = ~ q1,..., qk, qenv ~, • The transition function is constructed as follows. Let q = ~ q1,..., qk, qenv ~ ∈ St', and α = ~ α1,..., αk ~ be an action profile s.t. αi ∈ d' (i, q). We define inputi (q, α) = ini (qi, out1 (q1, α1),..., outi-1 (qi-1, αi-1), outi +1 (qi +1, αi +1),..., outk (qk, αk), outenv (qenv)) for each agent i = 1,..., k, and inputenv (q, α) = inenv (qenv, out1 (q1, α1),..., outk (qk, αk)). Then, o' (q, α) = ~ o1 (~ q1, α1 ~, input1 (q, α)),..., ok (~ qk, αk ~, inputk (q, α)), oenv (qenv, inputenv (q, α)) ~; The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 901 • For each i = 1,..., k: ~ q1,..., qk, qenv ~ ∼ ~ i ~ q ~ 1,..., q ~ k, q ~ env ~ iff qi = q ~ i. 6 REMARK 2. Note that MISs can be used as representations of CGSs too. In that case, epistemic relations ∼ ~ i are simply omitted in the unfolding. We denote the unfolding of a MIS S for initial states Q into a CGS by cgs (S, Q). Propositions 3 and 5 state that modular interpreted systems can be used as representations for explicit models of multi-agent systems. On the other hand, these representations are not always compact, as demonstrated by Propositions 7 and 8. PROPOSITION 3. For every CEGS M, there is a MIS SM and a set of global states Q of SM such that cegs (SM, Q) is isomorphic to M. 7 PROOF. Let M = ~ {1,..., k}, St, Act, d, o, Π, π, ∼ 1,..., ∼ k ~ be a CEGS. 
We construct a MIS SM = ~ {a1,..., ak}, env, Act, In ~ with agents ai = ~ Sti, di, outi, ini, oi, Πi, πi ~ and environment env = ~ Stenv, outenv, inenv, oenv, Πenv, πenv ~, plus a set Q ⊆ Stg of global states, as follows. • In = Act ∪ St ∪ (Actk − 1 × St), • Sti = {[q] ∼ i | q ∈ St} for 1 ≤ i ≤ k (i.e., Sti is the set of i's indistinguishability classes in M), • Stenv = St, • di ([q] ∼ i) = d (i, q) for 1 ≤ i ≤ k (this is well-defined since d (i, q) = d (i, q ~) whenever q ∼ i q ~), • outi ([q] ∼ i, αi) = αi for 1 ≤ i ≤ k; outenv (q) = q, • ini ([q] ∼ i, α1,..., αi − 1, αi +1,..., αk, qenv) = [o (q ~, α1,..., αk)] ∼ k, o (q ~, α1,..., αk) ~. Thus, qˆ = ~ [q] ∼ 1,..., [q] ∼ k, q ~ for q = o (q ~, α1,..., αk), which completes the proof of (1). We now argue that St ~ = Q. Clearly, Q ⊆ St ~. Let qˆ ∈ St ~; we must show that qˆ ∈ Q. The argument is on induction on the length of the least o ~ path from Q to ˆq. The base case, qˆ ∈ Q, is immediate. For the inductive step, qˆ = o ~ (ˆq ~, α) for some ˆq ~ ∈ Q, and then we have that qˆ ∈ Q by (1). Thus, St ~ = Q. Now we have a one-to-one correspondence between St and St ~: r ∈ St corresponds to ~ [r] ∼ 1,..., [r] ∼ k, r ~ ∈ St ~. It remains to be shown that the other parts of the structures M and M ~ agree on corresponding states: • Agt ~ = Agt, • Act ~ = Act, • Π ~ = Ski = 1 Πi ∪ Πenv = Π, • For p ∈ Π ~ = Π: ~ [q ~] ∼ 1,..., [q ~] ∼ k, q ~ ~ ∈ π ~ (p) iff q ~ ∈ πenv (p) iff q ~ ∈ π (p) (same valuations at corresponding states), • d ~ (i, ~ [q ~] ∼ 1,..., [q ~] ∼ k, q ~ ~) = di ([q ~] ∼ i) = d (i, q), • It follows immediately from (1), and the fact that Q = St ~, that o ~ (~ [q ~] ∼ 1,..., [q ~] ∼ k, q ~ ~, α) = ~ [r ~] ∼ 1,..., [r ~] ∼ k, r ~ ~ iff o (q ~, α) = r ~ (transitions on the same joint action in corresponding states lead to corresponding states), • ~ [q ~] ∼ 1,..., [q ~] ∼ k, q ~ ~ ∼ ~ i ~ [r ~] ∼ 1,..., [r ~] ∼ k, r ~ ~ iff [q ~] ∼ i = [r ~] ∼ i iff q ~ ∼ i r ~ (the accessibility relations relate corresponding states), which completes the proof. • oi (~ [q] ∼ i, αi ~, ~ α1,..., αi − 1, αi +1,..., αk, qenv ~) [o (qenv, α1,..., αk)] ∼ i for 1 ≤ i ≤ k and αi ∈ di ([q] ∼ i); oenv (q, ~ α1,..., αk ~) = o (q, α1,..., αk); oi and oenv are arbitrary for other arguments, COROLLARY 4. For every CEGS M, there is an ATLir-equivalent = MIS S with initial states Q, that is, for every state q in M there is a state q ~ in cegs (S, Q) satisfying exactly the same ATLir formulae, and vice versa. • Πi = ∅ for 1 ≤ i ≤ k, and Πenv = Π, • πenv (p) = π (p) • Q = {~ [q] ∼ 1,..., [q] ∼ k, q ~: q ∈ St} Let M ~ = cegs (SM, Q) = ~ Agt ~, St ~, Act ~, d ~, o ~, Π ~, π ~, ∼ ~ 1,..., ∼ ~ k ~. We argue that M and M ~ are isomorphic by establishing a oneto-one correspondence between the respective sets of states, and showing that the other parts of the structures agree on corresponding states. First we show that, for any ˆq ~ = ~ [q ~] ∼ 1,..., [q ~] ∼ k, q ~ ~ ∈ Q and any α = ~ α1,..., αk ~ such that αi ∈ d ~ (i, ˆq ~), we have Let qˆ = o ~ (ˆq ~, α). Now, for any i: inputi (ˆq ~, α) = ini ([q ~] ∼ i, out1 ([q ~] ∼ 1, α1),..., outi − 1 ([q ~] ∼ i − 1, αi − 1), outi +1 ([q ~] ∼ i +1, αi +1),..., outk ([q ~] ∼ k, αk), outenv (q ~)) = ini ([q ~] ∼ i, α1,..., αi − 1, αi +1, 6This shows another difference between the environment and the agents: the environment does not possess knowledge. 7We say that two CEGS are isomorphic if they only differ in the names of states and/or actions. PROPOSITION 5. 
For every CGS M, there is a MIS SM and a set of global states Q of SM such that cgs (SM, Q) is isomorphic to M. PROOF. Let M = ~ Agt, St, Act, d, o, Π, π ~ be given. Now, let Mˆ = ~ Agt, St, Act, d, o, Π, π, ∼ 1,..., ∼ k ~ for some arbitrary accessibility relations ∼ i over St. By Proposition 3, there exists a MIS SˆM with global states Q such that ˆM ~ = cegs (SˆM, Q) is isomorphic to ˆM. Let M ~ be the CGS obtained by removing the accessibility relations from ˆM ~. Clearly, M ~ is isomorphic to M. COROLLARY 6. For every CGS M, there is an ATL-equivalent MIS S with initial states Q. That is, for every state q in M there is a state q ~ in cgs (S, Q) satisfying exactly the same ATL formulae, and vice versa. PROPOSITION 7. The local state spaces in a MIS are not always compact with respect to the underlying concurrent epistemic game structure. PROOF. Take a CEGS M in which agent i has always perfect information about the current global state of the system. When constructing a modular interpreted system S such that M = cegs (S, Q), we have that Sti must be isomorphic with St. 902 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) The above property is a part of the interpreted systems heritage. The next proposition stems from the fact that explicit models (and interpreted systems) allow for intensive interaction between agents. PROPOSITION 8. The size of In in S is, in general, exponential with respect to the number of local states and local actions. This is the case even when epistemic relations are not relevant (i.e., when S is taken as a representation of an ordinary CGS). PROOF. Consider a CGS M with agents Agt = {1,..., k}, global states St = Qki = 1 {qi 0,..., qii}, and actions Act = {0, 1}, all enabled everywhere. The transition function is defined as o (~ q1j1,..., qkjk ~, α1,..., αk) = ~ q1l1,..., qklk ~, where li = (ji + α1 +...+ αk) mod i. Note that M can be represented as a modular interpreted system with succinct local state spaces Sti = {qi 0,..., qii}. Still, the current actions of all agents are relevant to determine the resulting local transition of agent i. We will call items In, outi, ini the interaction layer of a modular interpreted system S; the other elements of S constitute the local layer of the MIS. In this paper we are ultimately interested in model checking complexity with respect to the size of the local layer. To this end, we will assume that the size of interaction layer is polynomial in the number of local states and actions. Note that, by Propositions 7 and 8, not every explicit model submits to compact representation with a MIS. Still, as we declared at the beginning of Section 4, we are mainly interested in a modeling framework for systems of loosely coupled components, where interaction is essential, but most processing is done locally anyway. More importantly, the framework of MIS allows for separating the interaction of agents from their local structure to a larger extent. Moreover, we can control and measure the complexity of each layer in a finer way than before. First, we can try to abstract from the complexity of a layer (e.g. like in this paper, by assuming that the other layer is kept within certain complexity bounds). Second, we can also measure separately the interaction complexity of different agents. 4.2 Modular Interpreted Systems vs. 
4.2 Modular Interpreted Systems vs. Simple Reactive Modules In this section we show that simple reactive modules are (as we already suggested) a specific (and somewhat limited) implementation of modular interpreted systems. First, we define our (quite strong) notion of equivalence of representations. PROOF. Consider an SRML R with k modules and n variables. We construct S = ⟨Agt, Act, In⟩ with Agt = {a1,..., ak}, Act = {⊤1,..., ⊤n, ⊥1,..., ⊥n}, and In = ⋃_{i=1}^k Sti × Sti (the local state spaces Sti will be defined in a moment). Let us assume without loss of generality that ctri = {x1,..., xr}. Also, we consider all guarded commands of i to be of type γ⊤_{i,ψ}: ψ ❀ xi := ⊤, or γ⊥_{i,ψ}: ψ ❀ xi := ⊥. Now, agent ai in S has the following components: Sti = P(ctri) (i.e., local states of ai are valuations of the variables controlled by i); di(qi) = {⊤1,..., ⊤r, ⊥1,..., ⊥r}; outi(qi, α) = ⟨qi, qi⟩; ini(qi, ·) = ⟨{xj | ψ holds for some γ⊤_{i,ψ}: ψ ❀ xj := ⊤}, {xj | ψ holds for some γ⊥_{i,ψ}: ψ ❀ xj := ⊥}⟩. To define local transitions, we consider three cases. If t = f = ∅ (no update is enabled), then oi(qi, α, ⟨t, f⟩) = qi for every action α. If t ≠ ∅, we take an arbitrary x̂ ∈ t, and define oi(qi, ⊤j, ⟨t, f⟩) = qi ∪ {xj} if xj ∈ t, and qi ∪ {x̂} otherwise; oi(qi, ⊥j, ⟨t, f⟩) = qi \ {xj} if xj ∈ f, and qi ∪ {x̂} otherwise. Moreover, if t = ∅ ≠ f, we take an arbitrary x̂ ∈ f, and define oi(qi, ⊤j, ⟨t, f⟩) = qi ∪ {xj} if xj ∈ t, and qi \ {x̂} otherwise; oi(qi, ⊥j, ⟨t, f⟩) = qi \ {xj} if xj ∈ f, and qi \ {x̂} otherwise. Finally, Πi = ctri, and qi ∈ πi(xj) iff xj ∈ qi. The above construction shows that SRML have a more compact representation of states than MIS: ri local variables of agent i give rise to 2^{ri} local states. In a way, reactive modules (both simple and "full") are two-level representations: first, the system is represented as a product of modules; next, each module can be seen as a product of its variables (together with their update operations). Note, however, that the specification of updates with respect to a single variable in an SRML may require guarded commands of total length O(∑_{i=1}^k 2^{ri}). Thus, the representation of transitions in SRML is (in the worst case) no more compact than in MIS, despite the two-level structure of SRML. We observe finally that MIS are more general, because in SRML the current actions of other agents have no influence on the outcome of agent i's current action (although the outcome can be influenced by other agents' current local states).
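As a rough illustration of the translation above, the sketch below (Python; not part of the paper) encodes guarded commands, derives the pair ⟨t, f⟩ of currently enabled ⊤- and ⊥-updates that plays the role of agent i's impression, and implements the three-case local transition oi. The data encoding and all helper names are assumptions of mine, meant only to convey the idea.

```python
from dataclasses import dataclass
from typing import Callable, FrozenSet, Set, Tuple

Valuation = FrozenSet[str]        # a local state q_i: the set of variables that are true

@dataclass(frozen=True)
class GuardedCommand:             # gamma^T_{i,psi} / gamma^F_{i,psi}
    guard: Callable[[Set[str]], bool]   # psi, evaluated over the current global valuation
    var: str                            # the controlled variable x_j being assigned
    to_true: bool                       # True for "x_j := T", False for "x_j := F"

def enabled_updates(commands, global_valuation) -> Tuple[Valuation, Valuation]:
    """The pair (t, f) of variables whose T-/F-updates are currently enabled."""
    t = frozenset(c.var for c in commands if c.to_true and c.guard(global_valuation))
    f = frozenset(c.var for c in commands if not c.to_true and c.guard(global_valuation))
    return t, f

def o_i(q_i: Valuation, action: Tuple[str, str], t: Valuation, f: Valuation) -> Valuation:
    """Three-case local transition: action is ('T', x_j) or ('F', x_j)."""
    kind, x = action
    if not t and not f:                          # case 1: no update enabled
        return q_i
    if t:                                        # case 2: t non-empty, witness taken from t
        w = next(iter(t))
        if kind == 'T':
            return q_i | {x} if x in t else q_i | {w}
        return q_i - {x} if x in f else q_i | {w}
    w = next(iter(f))                            # case 3: t empty, f non-empty
    if kind == 'T':
        return q_i - {w}                         # x cannot belong to the empty t
    return q_i - {x} if x in f else q_i - {w}
```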
4.3 Model Checking Modular Interpreted Systems One of our main aims was to study the complexity of symbolic model checking ATLir in a meaningful way. Following the reviewers' remarks, we state our complexity results only as conjectures. Preliminary proofs can be found in [14]. CONJECTURE 10. Model checking ATL for modular interpreted systems is EXPTIME-complete. CONJECTURE 11. Model checking ATLir for the class of modular interpreted systems is PSPACE-complete. A summary of complexity results for model checking temporal and strategic logics (with and without epistemic component) is given in the table below. The table presents completeness results for various models and settings of input parameters. Symbols n, k, m stand for the number of states, agents and transitions in an explicit model; l is the length of the formula, and n_local is the number of local states in a concurrent program or modular interpreted system. The new results, conjectured in this paper, are printed in italics. Note that the result for model checking ATL against modular interpreted systems is an extension of the result from [22]. If we are right, then the results for ATL and ATLir form an intriguing pattern. When we compare model checking agents with perfect vs. imperfect information, the first problem appears to be much easier against explicit models measured with the number of transitions; next, we get the same complexity class against explicit models measured with the number of states and agents; finally, model checking imperfect information turns out to be easier than model checking perfect information for modular interpreted systems. Why can it be so? First, a MIS unfolds into CEGS and CGS in a different way. In the first case, the MIS is assumed to encode the epistemic relations explicitly (which makes it explode when we model agents with perfect, or almost perfect, information). In the latter case, the epistemic aspect is ignored, which gives some extra room for encoding the transition relation more efficiently. Another crucial factor is the number of available strategies (relative to the size of input parameters). The number of all strategies is exponential in the number of global states; for uniform strategies, there are usually far fewer of them, but still exponentially many in general. Thus, the fact that perfect information strategies can be synthesized incrementally has a substantial impact on the complexity of the problem. However, measured in terms of local states and agents, the number of all strategies is doubly exponential, while there are "only" exponentially many uniform strategies--which settles the results in favor of imperfect information. 5. CONCLUSIONS We have presented a new class of representations for open multiagent systems. Our representations, called modular interpreted systems, are: modular, in the sense that components can be changed, replaced, removed or added, with as little changes to the whole representation as possible; more compact than traditional explicit representations; and grounded, in the sense that the correspondences between the primitives of the model and the entities being modeled are more immediate, giving a methodology for designing and implementing systems. We also conjecture that the complexity of model checking strategic ability for our representations is higher if we assume perfect information than if we assume imperfect information. The solutions, proposed in this paper, are not necessarily perfect (for example, the "impression" functions ini seem to be the main source of non-modularity in MIS, and can be perhaps improved), but we believe them to be a step in the right direction. We also do not mean to claim that our representations should replace more elaborate modeling languages like Promela or reactive modules. We only suggest that there is a need for compact, modular and reasonably grounded models that are more expressive than concurrent (epistemic) programs, and still allow for easier theoretical analysis than reactive modules. We also suggest that MIS might be better suited for modeling simple multi-agent domains, especially for human-oriented (as opposed to computer-oriented) design.
Modular Interpreted Systems ABSTRACT We propose a new class of representations that can be used for modeling (and model checking) temporal, strategic and epistemic properties of agents and their teams. Our representations borrow the main ideas from interpreted systems of Halpern, Fagin et al.; however, they are also modular and compact in the way concurrent programs are. We also mention preliminary results on model checking alternating-time temporal logic for this natural class of models. 1. INTRODUCTION The logical foundations of multi-agent systems have received much attention in recent years. Logic has been used to represent and reason about, e.g., knowledge [7], time [6], cooperation and strategic ability [3]. Lately, an increasing amount of research has focused on higher level representation languages for models of such logics, motivated mainly by the need for compact representations, and for representations that correspond more closely to the actual systems which are modeled. Multi-agent systems are open systems, in the sense that agents interact with an environment only partially known in advance. Thus, we need representations of models of multi-agent systems which are modular, in the sense that a component, such as an agent, can be replaced, removed, or added, without major changes to the representation of the whole model. However, as we argue in this paper, few existing representation languages are both modular, compact and computationally grounded on the one hand, and allow for representing properties of both knowledge and strategic ability, on the other. In this paper we present a new class of representations for models of open multi-agent systems, which are modular, compact and come with an implicit methodology for modeling and designing actual systems. The structure of the paper is as follows. First, in Section 2, we present the background of our work--that is, logics that combine time, knowledge, and strategies. More precisely: modal logics that combine branching time, knowledge, and strategies under incomplete information. We start with computation tree logic CTL, then we add knowledge (CTLK), and then we discuss two variants of alternating-time temporal logic (ATL): one for the perfect, and one for the imperfect information case. The semantics of logics like the ones presented in Section 2 are usually defined over explicit models (Kripke structures) that enumerate all possible (global) states of the system. However, enumerating these states is one of the things one mostly wants to avoid, because there are too many of them even for simple systems. Thus, we usually need representations that are more compact. Another reason for using a more specialized class of models is that general Kripke structures do not always give enough help in terms of methodology, both at the stage of design, nor at implementation. This calls for a semantics which is more grounded, in the sense that the correspondence between elements of the model, and the entities that are modeled, is more immediate. In Section 3, we present an overview of representations that have been used for modeling and model checking systems in which time, action (and possibly knowledge) are important; we mention especially representations used for theoretical analysis. We point out that the compact and/or grounded representations of temporal models do not play their role in a satisfactory way when agents' strategies are considered. 
Finally, in Section 4, we present our framework of modular interpreted systems (MIS), and show where it fits in the picture. We conclude with a somewhat surprising hypothesis, that model checking ability under imperfect information for MIS can be computationally cheaper than model checking perfect information. Until now, almost all complexity results were distinctly in favor of perfect information strategies (and the others were indifferent). 2. LOGICS OF TIME, KNOWLEDGE, AND STRATEGIC ABILITY 2.1 Branching Time: CTL 2.2 Adding Knowledge: CTLK 2.3 Agents and Their Strategies: ATL 2.4 Agents with Imperfect Information: ATLir 3. MODELS AND MODEL CHECKING 3.1 Explicit Models 3.2 Compressed Representations 3.3 Interpreted Systems 3.4 Concurrent Programs 3.5 Synchronous CP and Simple Reactive Modules 3.6 Concurrent Epistemic Programs 3.7 Reactive Modules 4. MODULAR INTERPRETED SYSTEMS 4.1 Representing Agent Systems with MIS 4.2 Modular Interpreted Systems vs. Simple Reactive Modules 4.3 Model Checking Modular Interpreted Systems 5. CONCLUSIONS We have presented a new class of representations for open multiagent systems. Our representations, called modular interpreted systems, are: modular, in the sense that components can be changed, replaced, removed or added, with as little changes to the whole representation as possible; more compact than traditional explicit representations; and grounded, in the sense that the correspondences between the primitives of the model and the entities being modeled are more immediate, giving a methodology for designing and implementing systems. We also conjecture that the complexity of model checking strategic ability for our representations is higher if we assume perfect information than if we assume imperfect information. The solutions, proposed in this paper, are not necessarily perfect (for example, the "impression" functions ini seem to be the main source of non-modularity in MIS, and can be perhaps improved), but we believe them to be a step in the right direction. We also do not mean to claim that our representations should replace more elaborate modeling languages like Promela or reactive modules. We only suggest that there is a need for compact, modular and reasonably grounded models that are more expressive than concurrent (epistemic) programs, and still allow for easier theoretical analysis than reactive modules. We also suggest that MIS might be better suited for modeling simple multi-agent domains, especially for human-oriented (as opposed to computer-oriented) design.
Modular Interpreted Systems ABSTRACT We propose a new class of representations that can be used for modeling (and model checking) temporal, strategic and epistemic properties of agents and their teams. Our representations borrow the main ideas from interpreted systems of Halpern, Fagin et al.; however, they are also modular and compact in the way concurrent programs are. We also mention preliminary results on model checking alternating-time temporal logic for this natural class of models. 1. INTRODUCTION The logical foundations of multi-agent systems have received much attention in recent years. Logic has been used to represent and reason about, e.g., knowledge [7], time [6], cooperation and strategic ability [3]. Lately, an increasing amount of research has focused on higher level representation languages for models of such logics, motivated mainly by the need for compact representations, and for representations that correspond more closely to the actual systems which are modeled. Multi-agent systems are open systems, in the sense that agents interact with an environment only partially known in advance. Thus, we need representations of models of multi-agent systems which are modular, in the sense that a component, such as an agent, can be replaced, removed, or added, without major changes to the representation of the whole model. However, as we argue in this paper, few existing representation languages are both modular, compact and computationally grounded on the one hand, and allow for representing properties of both knowledge and strategic ability, on the other. In this paper we present a new class of representations for models of open multi-agent systems, which are modular, compact and come with an implicit methodology for modeling and designing actual systems. The structure of the paper is as follows. First, in Section 2, we present the background of our work--that is, logics that combine time, knowledge, and strategies. More precisely: modal logics that combine branching time, knowledge, and strategies under incomplete information. The semantics of logics like the ones presented in Section 2 are usually defined over explicit models (Kripke structures) that enumerate all possible (global) states of the system. Thus, we usually need representations that are more compact. This calls for a semantics which is more grounded, in the sense that the correspondence between elements of the model, and the entities that are modeled, is more immediate. In Section 3, we present an overview of representations that have been used for modeling and model checking systems in which time, action (and possibly knowledge) are important; we mention especially representations used for theoretical analysis. We point out that the compact and/or grounded representations of temporal models do not play their role in a satisfactory way when agents' strategies are considered. Finally, in Section 4, we present our framework of modular interpreted systems (MIS), and show where it fits in the picture. We conclude with a somewhat surprising hypothesis, that model checking ability under imperfect information for MIS can be computationally cheaper than model checking perfect information. Until now, almost all complexity results were distinctly in favor of perfect information strategies (and the others were indifferent). 5. CONCLUSIONS We have presented a new class of representations for open multiagent systems. 
We also conjecture that the complexity of model checking strategic ability for our representations is higher if we assume perfect information than if we assume imperfect information. We also do not mean to claim that our representations should replace more elaborate modeling languages like Promela or reactive modules. We only suggest that there is a need for compact, modular and reasonably grounded models that are more expressive than concurrent (epistemic) programs, and still allow for easier theoretical analysis than reactive modules. We also suggest that MIS might be better suited for modeling simple multi-agent domains, especially for human-oriented (as opposed to computer-oriented) design.
I-47
Operational Semantics of Multiagent Interactions
The social stance advocated by institutional frameworks and most multi-agent system methodologies has resulted in a wide spectrum of organizational and communicative abstractions which have found currency in several programming frameworks and software platforms. Still, these tools and frameworks are designed to support a limited range of interaction capabilities that constrain developers to a fixed set of particular, pre-defined abstractions. The main hypothesis motivating this paper is that the variety of multi-agent interaction mechanisms -- both, organizational and communicative, share a common semantic core. In the realm of software architectures, the paper proposes a connector-based model of multi-agent interactions which attempts to identify the essential structure underlying multi-agent interactions. Furthermore, the paper also provides this model with a formal execution semantics which describes the dynamics of social interactions. The proposed model is intended as the abstract machine of an organizational programming language which allows programmers to accommodate an open set of interaction mechanisms.
[ "oper semant", "multiag interact", "institut framework", "organiz and commun abstract", "pre-defin abstract", "softwar architectur", "formal execut semant", "social interact", "organiz program languag", "multi-agent interact connector-base model", "softwar connector", "structur oper semant" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "M", "M", "R" ]
Operational Semantics of Multiagent Interactions Juan M. Serrano University Rey Juan Carlos C/Tulipan S/N Madrid, Spain juanmanuel.serrano@urjc.es Sergio Saugar University Rey Juan Carlos C/Tulipan S/N Madrid, Spain sergio.saugar@urjc.es ABSTRACT The social stance advocated by institutional frameworks and most multi-agent system methodologies has resulted in a wide spectrum of organizational and communicative abstractions which have found currency in several programming frameworks and software platforms. Still, these tools and frameworks are designed to support a limited range of interaction capabilities that constrain developers to a fixed set of particular, pre-defined abstractions. The main hypothesis motivating this paper is that the variety of multi-agent interaction mechanisms - both, organizational and communicative, share a common semantic core. In the realm of software architectures, the paper proposes a connector-based model of multi-agent interactions which attempts to identify the essential structure underlying multi-agent interactions. Furthermore, the paper also provides this model with a formal execution semantics which describes the dynamics of social interactions. The proposed model is intended as the abstract machine of an organizational programming language which allows programmers to accommodate an open set of interaction mechanisms. Categories and Subject Descriptors I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence-multi-agent systems General Terms Languages, Theory, Design 1. INTRODUCTION The suitability of agent-based computing to manage the complex patterns of interactions naturally occurring in the development of large scale, open systems, has become one of its major assets over the last few years [26, 24, 15]. Particularly, the organizational or social stance advocated by institutional frameworks [2] and most multi-agent system (MAS) methodologies [26, 10], provides an excellent basis to deal with the complexity and dynamism of the interactions among system components. This approach has resulted in a wide spectrum of organizational and communicative abstractions, such as institutions, normative positions, power relationships, organizations, groups, scenes, dialogue games, communicative actions (CAs), etc., to effectively model the interaction space of MAS. This wealth of computational abstractions has found currency in several programming frameworks and software platforms (AMELI [9], MadKit [13], INGENIAS toolkit [18], etc.), which leverage multi-agent middlewares built upon raw ACL-based interaction mechanism [14], and minimize the gap between organizational metamodels and target implementation languages. Still, these tools and frameworks are designed to support a limited range of interaction capabilities that constrain developers to a fixed set of particular, pre-defined abstractions. The main hypothesis motivating this paper is that the variety of multi-agent interaction mechanisms - both, organizational and communicative, share a common semantic core. This paper thus focuses on the fundamental building blocks of multi-agent interactions: those which may be composed, extended or refined in order to define more complex organizational or communicative types of interactions. Its first goal is to carry out a principled analysis of multiagent interactions, departing from general features commonly ascribed to agent-based computing: autonomy, situatedness and sociality [26]. 
To approach this issue, we draw on the notion of connector, put forward within the field of software architectures [1, 17]. The outcome of this analysis will be a connector-based model of multi-agent interactions between autonomous, social and situated components, i.e. agents, attempting to identify their essential structure. Furthermore, the paper also provides this model with a formal execution semantics which describes the dynamics of multi-agent (or social) interactions. Structural Operational Semantics (SOS) [21], a common technique to specify the operational semantics of programming languages, is used for this purpose. The paper is structured as follows: first, the major entities and relationships which constitute the structure of social interactions are introduced. Next, the dynamics of social interactions will show how these entities and relationships evolve. Last, relevant work in the literature is discussed with respect to the proposal, limitations are addressed, and current and future work is described. 2. SOCIAL INTERACTION STRUCTURE From an architectural point of view, interactions between software components are embodied in software connectors: first-class entities defined on the basis of the different roles played by software components and the protocols that regulate their behaviour [1]. The roles of a connector represent its participants, such as the caller and callee roles of an RPC connector, or the sender and receiver roles in a message passing connector. The attachment operation binds a component to the role of a given connector. The analysis of social interactions introduced in this section gives rise to a new kind of social connector. It refines the generic model in several respects, attending to the features commonly ascribed to agent-based computing: • According to the autonomy feature, we may distinguish a first kind of participant (i.e. role) in a social interaction, so-called agents. Basically, agents are those software components which will be regarded as autonomous within the scope of the interaction1. • A second group of participants, so-called environmental resources, may be identified from the situatedness feature. Unlike agents, resources represent those non-autonomous components whose state may be externally controlled by other components (agents or resources) within the interaction. Moreover, the participation of resources in an interaction is not mandatory. • Last, according to the sociality of agents, the specification of social connector protocols - the glue linking agents among themselves and with resources - will rely on normative concepts such as permissions, obligations and empowerments [23]. Besides agents, resources and social protocols, two other kinds of entities are of major relevance in our analysis of social interactions: actions, which represent the way in which agents alter the environmental and social state of the interaction; and events, which represent the changes in the interaction resulting from the performance of actions or the activity of environmental resources. In the following, we describe the basic entities involved in social interactions. Each kind of entity T will be specified as a record type T ≡ ⟨l1 : T1, ..., ln : Tn⟩, possibly followed by a number of invariants, definitions, and the actions affecting their state. Instances or values v of a record type T will be represented as v = ⟨v1, ..., vn⟩ : T. The type Set T represents a collection of values drawn from type T. 1 Note that we think of the autonomy feature in a relative, rather than absolute, perspective. Basically, this means that software components counting as agents in a social interaction may behave non-autonomously in other contexts, e.g. in their interactions through human-user interfaces. This conceptualization of agenthood resembles the way in which objects are understood in CORBA: as any kind of software component (C, Prolog, Cobol, etc.) attached to an ORB.
The type Queue T represents a queue of values v : T waiting to be processed. The value v in the expression [v|_] : Queue T represents the head of the queue. The type Enum {v1, ..., vn} represents an enumeration type whose values are v1, ..., vn. Given some value v : T, the term v_l refers to the value of the field l of a record type T. Given some labels l1, l2, ..., the expression v_{l1,l2,...} is syntactic sugar for ((v_{l1})_{l2}) .... The special term nil will be used to represent the absence of proper value for an optional field, so that v_l = nil will be true in those cases and false otherwise. The formal model will be illustrated with several examples drawn from the design of a virtual organization to aid in the management of university courses. 2.1 Social Interactions Social interactions shall be considered as composite connectors [17], structured in terms of a tree of nested sub-interactions. Let``s consider an interaction representing a university course (e.g. on data structures). On the one hand, this interaction is actually a complex one, made up of lower-level interactions. For instance, within the scope of the course agents will participate in programming assignment groups, lectures, tutoring meetings, examinations and so on. Assignment groups, in turn, may hold a number of assignment submissions and test requests interactions. A test request may also be regarded as a complex interaction, ultimately decomposed in the atomic, or bottom-level interactions represented by communicative actions (e.g. request, agree, refuse, ...). On the other hand, courses are run within the scope of a particular degree (e.g. computer science), a higher-level interaction. Traversing upwards from a degree to its ancestors, we find its faculty, the university and, finally, the multi-agent community or agent society. The community is thus the top-level interaction which subsumes any other kind of multi-agent interaction2. The organizational and communicative interaction types identified above clearly differ in many ways. However, we may identify four major components in all of them: the participating agents, the resources that agents manipulate, the protocol regulating the agent activities and the sub-interaction space. Accordingly, we may specify the type I of social interactions, ranged over by the meta-variable i, as follows: I ≡ ⟨state : S_I, ini : A, mem : Set A, env : Set R, sub : Set I, prot : P, ch : CH⟩ def.: (1) i_context = i1 ⇔ i ∈ i1_sub inv.: (2) i_ini = nil ⇔ i_context = nil act.: setUp, join, create, destroy where the member and environment fields represent the agents (A) and local resources (R) participating in the interaction; the sub-interaction field, its set of inner interactions; and the protocol field the rules that govern the interaction (P). The event channel, to be described in the next section, allows the dispatching of local events to external interactions. The context of some interaction is defined as its super-interaction (def. 1), so that the context of the top-level interaction is nil. 2 In the context of this application, a one-to-one mapping between human users and software components attached to the community as agents would be a right choice.
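To relate the record notation of the type I above to executable terms, here is a minimal sketch in Python (the paper itself defines no code); the class, field and method names mirror the labels of the record type, but the encoding, the stub classes and the helper methods are illustrative assumptions only.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional, Set

class Agent: ...
class Resource: ...
class Protocol: ...
class Channel: ...

class SI(Enum):                       # the execution-state type S_I
    OPEN = "open"
    CLOSING = "closing"
    CLOSED = "closed"

@dataclass(eq=False)                  # identity hashing, so interactions can live in sets
class Interaction:                    # record type I
    state: SI = SI.OPEN
    ini: Optional[Agent] = None       # initiator; nil (None) only for the top-level community
    mem: Set[Agent] = field(default_factory=set)
    env: Set[Resource] = field(default_factory=set)
    sub: Set["Interaction"] = field(default_factory=set)
    prot: Optional[Protocol] = None
    ch: Optional[Channel] = None
    context: Optional["Interaction"] = None   # def. (1): i.context = i1 iff i in i1.sub

    def set_up(self, initiator: Agent) -> "Interaction":
        """Create an empty sub-interaction within this context (cf. the setUp action)."""
        child = Interaction(ini=initiator, context=self)
        self.sub.add(child)
        return child

    def inv_2_holds(self) -> bool:
        """inv. (2): the initiator is nil iff the interaction is the top-level community."""
        return (self.ini is None) == (self.context is None)
```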
The type S_I ≡ Enum {open, closing, closed} represents the possible execution states of the interaction. Any interaction, but the top-level one, is set up within the context of another interaction by an initiator agent. The initiator is thus a mandatory feature for any interaction different to the community (inv. 2). The life-cycle of the interaction begins in the open state. Its sets of agent and resource participants, initially empty, vary as agents join and leave the interaction, and as they create and destroy resources from its local environment. Eventually, the interaction may come to an end (according to the protocol``s rules), or be explicitly closed by some agent, thus prematurely disabling the activity of its participants. The transient closing state will be described in the next section. 2.2 Agents Components attach themselves as agents in social interactions with the purpose of achieving something. The purpose declared by some agent when it joins an interaction shall be regarded as the institutional goal that it purports to satisfy within that context3. The types of agents participating in a given interaction are primarily identified from their purposes. For instance, students are those agents participating in a course who purport to obtain a certificate in the course``s subject. Other members of the course include lecturers and teaching assistants. The type A of agents, ranged over by meta-variable a, is defined as follows: A ≡ ⟨state : S_A, player : A, purp : F, att : Queue ACT, ev : Queue E, obl : Set O⟩ def.: (3) a_context = i ⇔ a ∈ i_mem (4) a1 ∈ a_roles ⇔ a1_player = a (5) i ∈ a_partIn ⇔ a1 ∈ i_mem ∧ a1 ∈ a_roles act.: see where the purpose is represented as a well-formed boolean formula, of a generic type F, which evaluates to true if the purpose is satisfied and false otherwise. The context of some agent is defined as the interaction in which it participates (def. 3). The type S_A ≡ Enum {playing, leaving, succ, unsuc} represents the execution state of the agent. Its life-cycle begins in the playing state when its player agent joins the interaction, or some software component is attached as an agent to the multi-agent system (in this latter case, the player value is nil). The derived roles and partIn features represent the roles played by the agent and the contexts in which these roles are played (def. 4, 5)4. An agent may play roles at interactions within or outside the scope of its context. For instance, students of a course are played by student agents belonging to the (undergraduate) degree, whereas lecturers may be played by teachers of a given department and the assistant role may be played by students of a Ph.D. degree (both the department and the Ph.D. degrees are modelled as sub-interactions of the faculty). Components will normally attempt to perform different actions (e.g. to set up sub-interactions) in order to satisfy their purposes within some interaction. Moreover, components need to be aware of the current state of the interaction, so that they will also be capable of observing certain events from the interaction. Both, the visibility of the interaction
4 Free variables in the antecedents/consequents of implications shall be understood as universally/existentially quantified. and the attempts of members, are subject to the rules governing the interaction. The attempts and events fields of the agent structure represent the queues of attempts to execute some actions (ACT ), and the events (E) received by the agent which have not been observed yet. An agent may update its event queue by seeing the state of some entity of the community. The last field of the structure represents the obligations (O) of agents, to be described later. Eventually, the participation of some agent in the interaction will be over. This may either happen when certain conditions are met (specified by the protocol rules), or when the agent takes the explicit decision of leaving the interaction. In either case, the final state of the agent will be successful if its purpose was satisfied; unsuccessful otherwise. The transient leaving state will be described in the next section. 2.3 Resources Resources are software components which may represent different types of non-autonomous informational or computational entities. For instance, objectives, topics, assignments, grades and exams are different kinds of informational resources created by lecturers and assistants in the context of the course interaction. Students may also create programs to satisfy the requirements of some assignment. Other types of computational resources put at the disposal of students by teachers include compilers and interpreters. The type R of resources, ranged over by meta-variable r, can be specified by the following record type: R cr : A, owners : Set A, op : Set OP def. : (6) rcontext = i ⇔ r ∈ ienv act. : take, share, give, invoke Essentially, resources can be regarded as objects deployed in a social setting. This means that resources are created, accessed and manipulated by agents in a social interaction context (def. 6), according to the rules specified by its protocol. The mandatory feature creator represents the agent who created this resource. Moreover, resources may have owners. The ownership relationship between members and resources is considered as a normative device aimed at the simplification of the protocol``s rules that govern the interaction of agents and the environment. Members may gain ownership of some resource by taking it, and grant ownership to other agents by giving or sharing their own properties. For instance, the ownership of programs may be shared by several students if the assignment can be performed by groups of two or more students. The last operations feature represents the interface of the resource, consisting of a set of operations. A resource is structured around several public operations that participants may invoke, in accordance to the rules specified by the interaction``s protocol. The set of operations of a resource makes up its interface. 2.4 Protocols The protocol of any interaction is made up of the rules which govern its overall state and dynamics. The present specification abstracts away the particular formalism used to specify these rules, and focuses instead on several requirements concerning the structure and interface of protocols. Accordingly, the type P of protocols, ranged over by metaThe Sixth Intl.. Joint Conf. 
on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 891 variable p, is defined as follows5 : P emp : A × ACT → Boolean, perm : A × ACT → Boolean, obl :→ Set (A × Set O × Set E), monitor : E → Set A, finish :→ Boolean, over : A → Boolean def. : (7) pcontext = i ⇔ p = iprot inv. : (8) pfinish() ∧ s ∈ pcontext,sub ⇒ sprot,finish() (9) pfinish() ∧ a ∈ pcontext,mem ⇒ pover(a) (10) pover(a) ∧ ai ∈ aroles ⇒ acontext,prot,over i (ai) (11) αadd ∪ {a} ⊆ pmonitor( a, α, ) act. : Close, Leave We demand from protocols four major kinds of functions. Firstly, protocols shall include rules to identify the empowerments and permissions of any agent attempting to alter the state of the interaction (e.g. its members, the environment, etc.) through the execution of some action (e.g. join, create, etc.). Empowerments shall be regarded as the institutional capabilities which some agent possesses in order to satisfy its purpose. Corresponding rules, encapsulated by the empowered function field, shall allow to determine whether some agent is capable to perform a given action over the interaction. Empowerments may only be exercised under certain circumstances - that permissions specify. Permission rules shall allow to determine whether the attempt of an empowered agent to perform some particular action is satisfied or not (cf. permitted field). For instance, the course``s protocol specifies that the agents empowered to join the interaction as students are those students of the degree who have payed the fee established for the course``s subject, and own the certificates corresponding to its prerequisite subjects. Permission rules, in turn, specify that those students may only join the course in the admission stage. Hence, even if some student has paid the fee, the attempt to join the course will fail if the course has not entered the corresponding stage6 . Secondly, protocols shall allow to determine the obligations of agents towards the interaction. Obligations represent a normative device of social enforcement, fully compatible with the autonomy of agents, used to bias their behaviour in a certain direction. These kinds of rules shall allow to determine whether some agent must perform an action of a given type, as well as if some obligation was fulfilled, violated or needs to be revoked. The function obligations of the protocol structure thus identifies the agents whose obligation set must be updated. Moreover, it returns for each agent a collection of events representing the changes in the obligation set. For instance, the course``s protocol establishes that members of departments must join the course as teachers whenever they are assigned to the course``s subject. Thirdly, the protocol shall allow to specify monitoring rules for the different events originating within the interaction. Corresponding rules shall establish the set of agents that must be awared of some event. For instance, this func5 The formalization assumes that protocol``s functions implicitly recieve as input the interaction being regulated. 6 The hasPaidFee relationship between (degree) students and subject resources is represented by an additional, application-dependent field of the agent structure for this kind of roles. Similarly, the admission stage is an additional boolean field of the structure for school interactions. The generic types I, A, R and P are thus extendable. tionality is exploited by teachers in order to monitor the enrollment of students to the course. 
Last, the protocol shall allow to control the state of the interaction as well as the states of its members. Corresponding rules identify the conditions under which some interaction will be automatically finished, and whether the participation of some member agent will be automatically over. Thus, the function field finish returns true if the regulated interaction must finish its execution. If so happens, a well-defined set of protocols must ensure that its sub-interactions and members are finished as well (inv. 8,9). Similarly, the function over returns true if the participation of the specified member must be over. Well-formed protocols must ensure the consistency between these functions across playing roles (inv. 10)7 . For instance, the course``s protocol establishes that the participation of students is over when they gain ownership of the course``s certificate or the chances to get it are exhausted. It also establishes that the course must be finished when the admission stage has passed and all the students finished their participation. 3. SOCIAL INTERACTION DYNAMICS The dynamics of the multi-agent community is influenced by the external actions executed by software components and the protocols governing their interactions. This section focuses on the dynamics resulting from a particular kind of external action: the attempt of some component, attached to the community as an agent, to execute a given (internal) action. The description of other external actions concerning agents (e.g. observe the events from its event queue, enter or exit from the community) and resources (e.g. a timer resource may signal the pass of time) will be skipped. The processing of some attempt may give rise to changes in the scope of the target interaction, such as the instantiation of new participants (agents or resources) or the setting up of new sub-interactions. These resulting events may cause further changes in the state of other interactions (the target one included), namely, in its execution state as well as in the execution state, obligations and visibility of their members. This section will also describe the way in which these events are processed. The resulting dynamics described bellow allows for actions and events corresponding to different agents and interactions to be processed simultaneously. Due to lack of space, we only include some of the operational rules that formalise the execution semantics. 3.1 Attempt processing An attempt is defined by the structure AT T perf : A, act : ACT , where the performer represents the agent in charge of executing the specified action. This action is intended to alter the state of some target interaction (possibly, the performer``s context itself), and notify a collection of addressees of the changes resulting from a successful execution. Accordingly, the type ACT of actions, ranged over by meta-variable α, is specified as follows: ACT state : SACT , target : I, add : Set A def. : (12) αperf = a ⇔ α ∈ aatt 7 The close and leave actions update the finish and over function fields as explained in the next section. Additional actions, such as permit, forbid, empower, etc., to update other protocol``s fields are yet to be identified in future work. 892 The Sixth Intl.. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) where: the performer is formally defined as the agent who stores the action in its queue of attempts, and the state field represents the current phase of processing. 
This process goes through four major phases, as specified by the enumeration type SACT Enum {emp, perm, exec} : empowerment checking, permission checking and action execution, described in the sequel. 3.1.1 Empowerment checking The post-condition of an attempt consists of inserting the action in the queue of attempts of the specified performer. As rule 1 specifies8 , this will only be possible if the performer is empowered to execute that action according to the rules that govern the state of the target interaction. If this condition is not met, the attempt will simply be ignored. Moreover, the performer agent must be in the playing state (this pre-condition is also required for any rule concerning the processing of attempts). If these pre-conditions are satisfied the rule is fired and the processing of the action continues in the permission checking stage. For instance, when the software component attached as a student in a degree attempts to join as a student the course in which some subject is teached, the empowerment rules of the course interaction are checked. If the (degree) student has passed the course``s prerequisite subjects the join action will be inserted in its queue of attempts and considered for execution. αtarget,prot,emp(a, α) a = playing, , , qACT , , a,α :AT T −→ playing, , , qACT , , (1) W here : (α )state = perm (qACT ) = insert(α , qACT ) 3.1.2 Permissions checking The processing of the action resumes when the possible preceding actions in the performer``s queue of attempts are fully processed and removed from the queue. Moreover, there should be no pending events to be processed in the interaction, for these events may cause the member or the interaction to be finished (as will be shortly explained in the next sub-section). If these conditions are met the permissions to execute the given action (and notify the specified addressees) are checked (e.g. it will be checked whether the student paid the fee for the course``s subject). If the protocol of the target interaction grants permission, the processing of the attempt moves to the action execution stage (rule 2). Otherwise, the action is discharged and removed from the queue. Unlike unempowered attempts, a forbidden one will cause an event to be generated and transfered to the event channel for further processing. αstate = perm ∧ acontext,ch,in,ev = ∅ ∧ αtarget,prot,perm(a, α) a = playing, , , [α| ], , −→ playing, , , [α | ], , (2) W here : (α )state = exec 8 Labels of record instances are omitted to allow for more compact specifications. Moreover, note that record updates in where clauses only affect the specified fields. 3.1.3 Action execution The transitions fired in this stage are classified according to the different types of actions to be executed. The intended effects of some actions may directly be achieved in a single step, while others will required an indirect approach and possibly several execution steps. Actions of the first kind are constructive ones such as set up and join. The second group of actions include those, such as close and leave, whose effects are indirectly achieved by updating the interaction protocol. As an example of constructive action, let``s consider the execution of a set up action, whose type is defined as follows9 : SetUp ACT · new : I inv. : (13) αnew,mem = αnew,res = αnew,sub = ∅ (14) αnew,state = open where the new field represents the new interaction to be initiated. Its sets of participants (agents and resources) and sub-interactions must be empty (inv. 
13) and its state must be open (inv. 14). The setting up of the new interaction may thus affect its protocol and possible application-dependent fields (e.g. the subject of a course interaction). According to rule 3, the outcome of the execution is threefold: firstly, the performer``s attempt queue is updated so that the executing action is removed; secondly, the new interaction is added to the target``s set of sub-interactions (moreover, its initiator field is set to the performer agent); last, the event representing this change (which includes a description of the change, the agent that caused it and the action performed) is inserted in the output port of the target``s event channel. αstate = exec ∧ α : SetUp ∧ αnew = i a = playing, , , [α|qACT ], , −→ playing, , , qACT , , αtarget = open, , , , , sI , c −→ open, , , , , sI ∪ i , c (3) W here : (i )ini = a (c )out,ev = insert( a, α, sub(αtarget , i ) , cout,ev ) Let``s consider now the case of a close action. This action represents an attempt by the performer to force some interaction to finish, thus bypassing its current protocol rules (those concerning the finish function). The way to achieve this effect is to cause an update on the protocol so that the finish function returns true afterwards10 . Accordingly, we may specify this type of action as follows: Close ACT · upd : (→ Bool) → (→ Bool) inv. : (15) αtarget,state = open (16) αtarget,context = nil (17) αupd(αtarget,prot,finish)() where the inherited target field represents the interaction to be closed (which must be open and different to the topinteraction, according to invariants 15 and 16) and the new 9 The resulting type consists of the fields of the ACT record extended with an additional new field. 10 This strategy is also followed in the definition of leave and may also be used in the definition of other types of actions such as fire, permit, forbid, etc.. The Sixth Intl.. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 893 update field represents a proper higher-order function to update the target``s protocol (inv. 17). The transition which models the execution of this action, specified by rule 4, defines two effects in the target interaction: its protocol is updated and the event representing this change is inserted in its output port. This event will actually trigger the closing process of the interaction as described in the next subsection. αstate = exec ∧ α : Close a = playing, , , [α|qACT ], , −→ playing, , , qACT , , αtarget = open, , , , , p, c −→ open, , , , , p , c (4) W here : (p )finish = αupd (pfinish ) (c )out,ev = insert( a, α, finish(αtarget ) , cout,ev ) 3.2 Event Processing The processing of events is encapsulated in the event channels of interactions. Channels, ranged over by meta-variable c, are defined by two input and output ports, according to the following definition: CH out : OutP, in : InP inv. : (18) ccontext ∈ cout,disp( , , finish(ccontext) ) (19) ccontext ∈ cout,disp( , , over(a) ) (20) ccontext,sub ⊆ cout,disp(closing(ccontext)) (21) apartsIn ⊆ cout,disp(leaving(a)) (22) ccontext ∈ cout,disp(closed(i)) (23) {ccontext, aplayer,context} ⊆ cout,disp(left(a)) OutP ev : Queue E, disp : E → Set I, int : Set I, ag : Set A InP ev : Queue E, stage : Enum {int, mem, obl}, ag : Set A The output port stores and processes the events originated within the scope of the channel``s interaction. Its first purpose is to dispatch the local events to the agents identified by the protocol``s monitoring function. 
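The close action above achieves its effect indirectly, by rewriting the protocol's finish rule with a higher-order update. A minimal Python sketch of that idea follows (ProtocolStub and all names are mine, not the paper's; only the finish rule is modelled).

```python
from typing import Callable

FinishFn = Callable[[], bool]

def close_update(old_finish: FinishFn) -> FinishFn:
    """Higher-order update `upd` carried by a close action: the old finish rule
    is simply superseded by one that always answers True (cf. inv. 17), so the
    ordinary finishing machinery will eventually shut the interaction down."""
    return lambda: True

class ProtocolStub:
    """Stand-in for the protocol record; only the finish function field is kept."""
    def __init__(self) -> None:
        self.finish: FinishFn = lambda: False   # default rule: the interaction is not over

# effect of executing a close action on the target's protocol (cf. rule 4)
p = ProtocolStub()
p.finish = close_update(p.finish)
assert p.finish() is True
```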
Moreover, since these events may influence the results of the finishing, over and obligation functions of certain protocols, they will also be dispatched to the input ports of the interactions identified through a dispatching function - whose invariants will be explained later on. Thus, input ports serve as a coordination mechanism which activate the re-evaluation of the above functios whenever some event is received11 . Accordingly, the processing of some event goes through four major stages: event dispatching, interaction state update, member state update and obligations update. The first one takes place in the output port of the interaction in which the event originated, whereas the other ones execute in separate control threads associated to the input ports of the interactions to which the event was dispatched. 3.2.1 Event dispatching The processing of some event stored in the output port is triggered when all its preceding events have been dispatched. As a first step, the auxiliary int and ag fields are initialised 11 Alternatively, we may have assumed that interactions are fully aware of any change in the multi-agent community. In this scenario, interactions would trigger themselves without requiring any explicit notification. On the contrary, we adhere to the more realistic assumption of limited awareness. with the returned values of the dispatching and protocol``s monitoring functions, respectively (rule 5). Then, additional rules simply iterate over these collections until all agents and interactions have been notified (i.e., both sets are empty). Last, the event is removed from the queue and the auxiliary fields are re-set to nil. The dispatching function shall identify the set of interactions (possibly, empty) that may be affected by the event (which may include the channel``s interaction itself)12 . For instance, according to the finishing rule of university courses mentioned in the last section, the event representing the end of the admission stage, originated within the scope of the school interaction, will be dispatched to every course of the school``s degrees. Concerning the monitoring function, according to invariant 11 of protocols, if the event is generated as the result of an action performance, the agents to be notified will include the performer and addressees of that action. Thus, according to the monitoring rule of university courses, if a student of some degree joins a certain course and specifies a colleague as addressee of that action, the course``s teachers and itself will also be notified of the successful execution. ccontext,state s = open ∧ ccontext,prot,monitor s = mon cs = [e| ], d, nil, nil , −→ [e| ], , d(e), mon(e) , (5) 3.2.2 Interaction state update Input port activity is triggered when a new event is received. Irrespective of the kind of incoming event, the first processing action is to check whether the channel``s interaction must be finished. Thus, the dispatching of the finish event resulting from a close action (inv. 18) serves as a trigger of the closing procedure. If the interaction has not to be finished, the input port stage field is set to the member state update stage and the auxiliary ag field is initialised to the interaction members. Otherwise, we can consider two possible scenarios. In the first one, the interaction has no members and no sub-interactions. In this case, the interaction can be inmediately closed down. 
As rule 6 shows, the interaction is closed, removed from the context``s set of sub-interactions and a closed event is inserted in its output channel. According to invariant 22, this event will be later inserted to its input channel to allow for further treatment. cin,ev 1 = ∅ ∧ cin,stage 1 = int ∧ pfinish() , , , , {i} ∪ sI , , c −→ , , , , sI , , c i = , , ∅, , ∅, p, c1 −→ closed, , , , , , (6) W here : (c )out,ev = insert(closed(i), cout,ev ) In the second scenario, the interaction has some member or sub-interaction. In this case, clean-up is required prior to the disposal of the interaction (e.g. if the admission period ends and no student has matriculated for the course, teachers has to be finished before finishing the course itself). As rule 7 shows, the interaction is moved to the transient closing state and a corresponding event is inserted in the output port. According to invariant 20, the closing event will be dispatched to every sub-interaction in order to activate its closing procedure (guaranteed by invariant 8). Moreover, 12 This is essentially determined by the protocol rules of these interactions. The way in which the dispatching function is initialised and updated is out of the scope of this paper. 894 The Sixth Intl.. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) the stage and ag fields are properly initialised so that the process goes on in the next member state update stage. This stage will further initiate the leaving process of the members (according to invariant 9). cin,ev = ∅ ∧ cin,stage = int ∧ pfinish() ∧ (sA = ∅ ∨ sI = ∅) i = open, , sA, , sI , p, c −→ closing, , sA, , sI , p, c (7) W here : (c )out,ev = insert(closing(i), cout,ev ) (c )in,stage = mem (c )in,ag = sA Eventually, every member will leave the interaction and every sub-interaction will be closed. Corresponding events will be received by the interaction (according to invariants 23 and 22) so that the conditions of the first scenario will hold. 3.2.3 Member state update This stage simply iterates over the members of the interaction to check whether they must be finished according to the protocol``s over function. When all members have been checked, the stage field will be set to the next obligation update stage and the auxiliary ag field will be initalised with the agents identified by the protocol``s obligation update function. If some member has to end its participation in the interaction and it is not playing any role, it will be inmediately abandoned (successfully or unsuccessfully, according to the satisfaction of its purpose). The corresponding event will be forwarded to its interaction and to the interaction of its player agent to account for further changes (inv. 23). Otherwise, the member enters the transient leaving state, thus preventing any action performance. Then, it waits for the completion of the leaving procedures of its played roles, triggered by proper dispatching of the leaving event (inv. 21). 3.2.4 Obligations update In this stage, the obligations of agents (not necessaryly members of the interaction) towards the interaction are updated accordingly. When all the identified agents have been updated, the event is removed from the input queue and the stage field is set back to the interaction state update. For instance, when a course interaction receives an event representing the assignment of some department member to its subject, an obligation to join the course as a teacher is created for that member. 
4. DISCUSSION

This paper has attempted to expose a possible semantic core underlying the wide spectrum of interaction types between autonomous, social and situated software components. In the realm of software architectures, this core has been formalised as an operational model of social connectors, intended to describe both the basic structure and dynamics of multi-agent interactions, from the largest (the agent society itself) down to the smallest ones (communicative actions). Thus, top-level interactions may represent the kind of agent-web pursued by large-scale initiatives such as the Agentcities/openNet one [25]. Large-scale interactions, modelling complex aggregates of agent interactions such as those represented by e-institutions or virtual organizations [2, 26], are also amenable to being conceptualised as particular kinds of first-level social interactions. The last levels of the interaction tree may represent small-scale multi-agent interactions such as those represented by interaction protocols [11], dialogue games [16], or scenes [2]. Finally, bottom-level interactions may represent communicative actions. From this perspective, the member types of a CA include the speaker and possibly many listeners. The purpose of the speaker coincides with the illocutionary purpose of the CA [22], whereas the purpose of any listener is to declare that it (actually, the software component) successfully processed the meaning of the CA.

The analysis of social interactions put forward in this paper draws upon current proposals in the literature in several general respects, such as the institutional and organizational character of multi-agent systems [2, 26, 10, 7] and the normative perspective on multi-agent protocols [12, 23, 20]. These proposals, as well as others focusing on relevant abstractions such as power relationships, contracts, and trust and reputation mechanisms in organizational settings, could be further exploited in order to characterize more accurately the organizational character of some multi-agent interactions. Similarly, the conceptualization of communicative actions as atomic interactions may benefit from public semantics of communicative actions such as the one introduced in [3]. Last, the abstract model of protocols may be refined taking into account existing operational models of norms [12, 6]. These analyses shall result in new organizational and communicative abstractions obtained through a refinement and/or extension of the general model of social interactions. Thus, the proposed model is not intended to capture every organizational or communicative feature of multi-agent interactions, but to reveal their roots in basic interaction mechanisms. In turn, this would allow for the exploitation of common formalisms, particularly concerning protocols.

Unlike the development of individual agents, which has greatly benefited from the design of several agent programming languages [4], societal features of multi-agent systems are mostly implemented in terms of visual modelling [8, 18] and a fixed set of interaction abstractions. We argue that the current field of multi-agent system programming may greatly benefit from multi-agent programming languages that allow programmers to accommodate an open set of interaction mechanisms. The model of social interactions put forward in this paper is intended as the abstract machine of a language of this type.
This abstract machine would be independent of particular agent architectures and languages (i.e. software components may be programmed in a BDI language such as Jason [5] or in a non-agent-oriented language). On top of the presented execution semantics, current and future work aims at the specification of the type system [19] which allows the abstract machine to be programmed, the specification of the corresponding surface syntaxes (both textual and visual), and the design and implementation of a virtual machine over existing middleware technologies such as FIPA platforms or Web services. We also plan to study particular refinements and limitations to the proposed model, particularly with respect to the dispatching of events, the semantics of obligations, dynamic updates of protocols and rule formalisms. In this latter aspect, we plan to investigate the use of Answer Set Programming to specify the rules of protocols, attending to the role that incompleteness (rules may only specify either necessary or sufficient conditions, for instance), explicit negation (e.g. prohibitions) and defaults play in this domain.

5. ACKNOWLEDGMENTS

The authors thank the anonymous reviewers for their comments and suggestions. Research sponsored by the Spanish Ministry of Science and Education (MEC), project TIN200615455-C03-03.

6. REFERENCES

[1] R. Allen and D. Garlan. A Formal Basis for Architectural Connection. ACM Transactions on Software Engineering and Methodology, 6(3):213-249, June 1997.
[2] J. L. Arcos, M. Esteva, P. Noriega, J. A. Rodríguez, and C. Sierra. Engineering open environments with electronic institutions. Journal on Engineering Applications of Artificial Intelligence, 18(2):191-204, 2005.
[3] G. Boella, R. Damiano, J. Hulstijn, and L. W. N. van der Torre. Role-based semantics for agent communication: embedding of the 'mental attitudes' and 'social commitments' semantics. In AAMAS, pages 688-690, 2006.
[4] R. H. Bordini, L. Braubach, M. Dastani, A. E. F. Seghrouchni, J. J. G. Sanz, J. Leite, G. O'Hare, A. Pokahr, and A. Ricci. A survey of programming languages and platforms for multi-agent systems. Informatica, 30:33-44, 2006.
[5] R. H. Bordini, J. F. Hübner, and R. Vieira. Jason and the golden fleece of agent-oriented programming. In R. H. Bordini, M. Dastani, J. Dix, and A. El Fallah Seghrouchni, editors, Multi-Agent Programming: Languages, Platforms and Applications, chapter 1. Springer-Verlag, 2005.
[6] O. Cliffe, M. D. Vos, and J. A. Padget. Specifying and analysing agent-based social institutions using answer set programming. In EUMAS, pages 476-477, 2005.
[7] V. Dignum, J. Vázquez-Salceda, and F. Dignum. OMNI: Introducing social structure, norms and ontologies into agent organizations. In R. Bordini, M. Dastani, J. Dix, and A. Seghrouchni, editors, Programming Multi-Agent Systems: Second International Workshop, ProMAS 2004, volume 3346 of LNAI, pages 181-198. Springer, 2005.
[8] M. Esteva, D. de la Cruz, and C. Sierra. ISLANDER: an electronic institutions editor. In M. Gini, T. Ishida, C. Castelfranchi, and W. L. Johnson, editors, Proceedings of the First International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS'02), pages 1045-1052. ACM Press, July 2002.
[9] M. Esteva, B. Rosell, J. A. Rodríguez-Aguilar, and J. L. Arcos. AMELI: An agent-based middleware for electronic institutions.
In Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems, volume 1, pages 236-243, 2004.
[10] J. Ferber, O. Gutknecht, and F. Michel. From agents to organizations: An organizational view of multi-agent systems. In AOSE, pages 214-230, 2003.
[11] Foundation for Intelligent Physical Agents. FIPA Interaction Protocol Library Specification. http://www.fipa.org/repository/ips.html, 2003.
[12] A. García-Camino, J. A. Rodríguez-Aguilar, C. Sierra, and W. Vasconcelos. Norm-oriented programming of electronic institutions. In AAMAS, pages 670-672, 2006.
[13] O. Gutknecht and J. Ferber. The MadKit agent platform architecture. Lecture Notes in Computer Science, 1887:48-55, 2001.
[14] JADE. The JADE project home page. http://jade.cselt.it, 2005.
[15] M. Luck, P. McBurney, O. Shehory, and S. Willmott. Agent Technology: Computing as Interaction - A Roadmap for Agent-Based Computing. AgentLink III, 2005.
[16] P. McBurney and S. Parsons. A formal framework for inter-agent dialogues. In J. P. Müller, E. André, S. Sen, and C. Frasson, editors, Proceedings of the Fifth International Conference on Autonomous Agents, pages 178-179, Montreal, Canada, May 2001. ACM Press.
[17] N. R. Mehta, N. Medvidovic, and S. Phadke. Towards a taxonomy of software connectors. In Proceedings of the 22nd International Conference on Software Engineering, pages 178-187. ACM Press, June 2000.
[18] J. Pavón and J. Gómez-Sanz. Agent oriented software engineering with INGENIAS. In V. Marik, J. Muller, and M. Pechoucek, editors, Proceedings of the 3rd International Central and Eastern European Conference on Multi-Agent Systems. Springer Verlag, 2003.
[19] B. C. Pierce. Types and Programming Languages. The MIT Press, Cambridge, MA, 2002.
[20] J. Pitt, L. Kamara, M. Sergot, and A. Artikis. Voting in multi-agent systems. Feb. 27 2006.
[21] G. Plotkin. A structural approach to operational semantics. Technical Report DAIMI FN-19, Aarhus University, Sept. 1981.
[22] J. Searle. Speech Acts. Cambridge University Press, 1969.
[23] M. Sergot. A computational theory of normative positions. ACM Transactions on Computational Logic, 2(4):581-622, Oct. 2001.
[24] M. P. Singh. Agent-based abstractions for software development. In F. Bergenti, M.-P. Gleizes, and F. Zambonelli, editors, Methodologies and Software Engineering for Agent Systems, chapter 1, pages 5-18. Kluwer, 2004.
[25] S. Willmott et al. Agentcities / openNet testbed. http://x-opennet.net, 2004.
[26] F. Zambonelli, N. R. Jennings, and M. Wooldridge. Developing multiagent systems: The Gaia methodology. ACM Transactions on Software Engineering and Methodology, 12(3):317-370, July 2003.
I-53
A Randomized Method for the Shapley Value for the Voting Game
The Shapley value is one of the key solution concepts for coalition games. Its main advantage is that it provides a unique and fair solution, but its main problem is that, for many coalition games, the Shapley value cannot be determined in polynomial time. In particular, the problem of finding this value for the voting game is known to be #P-complete in the general case. However, in this paper, we show that there are some specific voting games for which the problem is computationally tractable. For other general voting games, we overcome the problem of computational complexity by presenting a new randomized method for determining the approximate Shapley value. The time complexity of this method is linear in the number of players. We also show, through empirical studies, that the percentage error for the proposed method is always less than 20% and, in most cases, less than 5%.
[ "uniqu and fair solut", "polynomi time", "approxim", "coalit format", "interact", "multi-agent system", "cooper game theori", "shaplei valu", "reach consensu mean", "randomis method", "gener function", "game-theori" ]
[ "P", "P", "P", "M", "U", "U", "M", "M", "U", "M", "M", "U" ]
A Randomized Method for the Shapley Value for the Voting Game Shaheen S. Fatima Department of Computer Science, University of Liverpool Liverpool L69 3BX, UK. shaheen@csc.liv.ac.uk Michael Wooldridge Department of Computer Science, University of Liverpool Liverpool L69 3BX, UK. mjw@csc.liv.ac.uk Nicholas R. Jennings School of Electronics and Computer Science University of Southampton Southampton SO17 1BJ, UK. nrj@ecs.soton.ac.uk ABSTRACT The Shapley value is one of the key solution concepts for coalition games. Its main advantage is that it provides a unique and fair solution, but its main problem is that, for many coalition games, the Shapley value cannot be determined in polynomial time. In particular, the problem of finding this value for the voting game is known to be #P-complete in the general case. However, in this paper, we show that there are some specific voting games for which the problem is computationally tractable. For other general voting games, we overcome the problem of computational complexity by presenting a new randomized method for determining the approximate Shapley value. The time complexity of this method is linear in the number of players. We also show, through empirical studies, that the percentage error for the proposed method is always less than 20% and, in most cases, less than 5%. Categories and Subject Descriptors I.2.11 [Distributed Artificial Intelligence]: Multiagent Systems General Terms Algorithms, Design, Theory 1. INTRODUCTION Coalition formation, a key form of interaction in multi-agent systems, is the process of joining together two or more agents so as to achieve goals that individuals on their own cannot, or to achieve them more efficiently [1, 11, 14, 13]. Often, in such situations, there is more than one possible coalition and a player's payoff depends on which one it joins. Given this, a key problem is to ensure that none of the parties in a coalition has any incentive to break away from it and join another coalition (i.e., the coalitions should be stable). However, in many cases there may be more than one solution (i.e., a stable coalition). In such cases, it becomes difficult to select a single solution from among the possible ones, especially if the parties are self-interested (i.e., they have different preferences over stable coalitions). In this context, cooperative game theory deals with the problem of coalition formation and offers a number of solution concepts that possess desirable properties like stability, fair division of joint gains, and uniqueness [16, 14]. Cooperative game theory differs from its non-cooperative counterpart in that for the former the players are allowed to form binding agreements and so there is a strong incentive to work together to receive the largest total payoff. Also, unlike non-cooperative game theory, cooperative game theory does not specify a game through a description of the strategic environment (including the order of players' moves and the set of actions at each move) and the resulting payoffs, but, instead, it reduces this collection of data to the coalitional form where each coalition is represented by a single real number: there are no actions, moves, or individual payoffs. The chief advantage of this approach, at least in multiple-player environments, is its practical usefulness.
Thus, many more real-life situations fit more easily into a coalitional form game, whose structure is more tractable than that of a non-cooperative game, whether that be in normal or extensive form, and it is for this reason that we focus on such forms in this paper. Given these observations, a number of multiagent systems researchers have used and extended cooperative game-theoretic solutions to facilitate automated coalition formation [20, 21, 18]. Moreover, in this work, one of the most extensively studied solution concepts is the Shapley value [19]. A player's Shapley value gives an indication of its prospects of playing the game: the higher the Shapley value, the better its prospects. The main advantage of the Shapley value is that it provides a solution that is both unique and fair (see Section 2.1 for a discussion of the property of fairness). However, while these are both desirable properties, the Shapley value has one major drawback: for many coalition games, it cannot be determined in polynomial time. For instance, finding this value for the weighted voting game is, in general, #P-complete [6]. A problem is #P-hard if solving it is as hard as counting satisfying assignments of propositional logic formulae [15, p442]. Since #P-completeness thus subsumes NP-completeness, this implies that computing the Shapley value for the weighted voting game will be intractable in general. In other words, it is practically infeasible to try to compute the exact Shapley value. However, the voting game has practical relevance to multi-agent systems as it is an important means of reaching consensus between multiple agents. Hence, our objective is to overcome the computational complexity of finding the Shapley value for this game. Specifically, we first show that there are some specific voting games for which the exact value can be computed in polynomial time. By identifying such games, we show, for the first time, when it is feasible to find the exact value and when it is not. For the computationally complex voting games, we present a new randomised method, along the lines of Monte-Carlo simulation, for computing the approximate Shapley value. The computational complexity of such games has typically been tackled using two main approaches. The first is to use generating functions [3]. This method trades time complexity for storage space. The second uses an approximation technique based on Monte Carlo simulation [12, 7]. However, the method we propose is more general than either of these (see Section 6 for details). Moreover, no work has previously analysed the approximation error. The approximation error relates to how close the approximation is to the true Shapley value. Specifically, it is the difference between the true and the approximate Shapley value. It is important to determine this error because the performance of an approximation method is evaluated in terms of two criteria: its time complexity, and its approximation error. Thus, our contribution also lies in providing, for the first time, an analysis of the percentage error in the approximate Shapley value. This analysis is carried out empirically. Our experiments show that the error is always less than 20%, and in most cases it is under 5%. Finally, our method has time complexity linear in the number of players and it does not require any arrays (i.e., it is economical in terms of both computing time and storage space).
Given this, and the fact that software agents have limited computational resources and therefore cannot compute the true Shapley value, our results are especially relevant to such resource bounded agents. The rest of the paper is organised as follows. Section 2 defines the Shapley value and describes the weighted voting game. In Section 3 we describe voting games whose Shapley value can be found in polynomial time. In Section 4, we present a randomized method for finding the approximate Shapley value and analyse its performance in Section 5. Section 6 discusses related literature. Finally, Section 7 concludes. 2. BACKGROUND We begin by introducing coalition games and the Shapley value and then define the weighted voting game. A coalition game is a game where groups of players (coalitions) may enforce cooperative behaviour between their members. Hence the game is a competition between coalitions of players, rather than between individual players. Depending on how the players measure utility, coalition game theory is split into two parts. If the players measure utility or the payoff in the same units and there is a means of exchange of utility such as side payments, we say the game has transferable utility; otherwise it has non-transferable utility. More formally, a coalition game with transferable utility, N, v , consists of: 1. a finite set (N = {1, 2, ... , n}) of players and 2. a function (v) that associates with every non-empty subset S of N (i.e., a coalition) a real number v(S) (the worth of S). For each coalition S, the number v(S) is the total payoff that is available for division among the members of S (i.e., the set of joint actions that coalition S can take consists of all possible divisions of v(S) among the members of S). Coalition games with nontransferable payoffs differ from ones with transferable payoffs in the following way. For the former, each coalition is associated with a set of payoff vectors that is not necessarily the set of all possible divisions of some fixed amount. The focus of this paper is on the weighted voting game (described in Section 2.1) which is a game with transferable payoffs. Thus, in either case, the players will only join a coalition if they expect to gain from it. Here, the players are allowed to form binding agreements, and so there is strong incentive to work together to receive the largest total payoff. The problem then is how to split the total payoff between or among the players. In this context, Shapley [19] constructed a solution using an axiomatic approach. Shapley defined a value for games to be a function that assigns to a game (N, v), a number ϕi(N, v) for each i in N. This function satisfies three axioms [17]: 1. Symmetry. This axiom requires that the names of players play no role in determining the value. 2. Carrier. This axiom requires that the sum of ϕi(N, v) for all players i in any carrier C equal v(C). A carrier C is a subset of N such that v(S) = v(S ∩ C) for any subset of players S ⊂ N. 3. Additivity. This axiom specifies how the values of different games must be related to one another. It requires that for any games ϕi(N, v) and ϕi(N, v ), ϕi(N, v)+ϕi(N, v ) = ϕi(N, v + v ) for all i in N. Shapley showed that there is a unique function that satisfies these three axioms. Shapley viewed this value as an index for measuring the power of players in a game. 
Like a price index or other market indices, the value uses averages (or weighted averages in some of its generalizations) to aggregate the power of players in their various cooperation opportunities. Alternatively, one can think of the Shapley value as a measure of the utility of risk-neutral players in a game. We first introduce some notation and then define the Shapley value. Let S denote the set N − {i} and fi : S → 2^(N−{i}) be a random variable that takes its values in the set of all subsets of N − {i}, and has the probability distribution function (g) defined as: g(fi(S) = S) = |S|! (n − |S| − 1)! / n!. The random variable fi is interpreted as the random choice of a coalition that player i joins. Then, a player's Shapley value is defined in terms of its marginal contribution. Thus, the marginal contribution of player i to coalition S with i ∉ S is a function Δiv that is defined as follows: Δiv(S) = v(S ∪ {i}) − v(S). Thus a player's marginal contribution to a coalition S is the increase in the value of S as a result of i joining it. DEFINITION 1. The Shapley value (ϕi) of the game ⟨N, v⟩ for player i is the expectation (E) of its marginal contribution to a coalition that is chosen randomly: ϕi(N, v) = E[Δiv ◦ fi] (1). The Shapley value is interpreted as follows. Suppose that all the players are arranged in some order, all orderings being equally likely. Then ϕi(N, v) is the expected marginal contribution, over all orderings, of player i to the set of players who precede him. The method for finding a player's Shapley value depends on the definition of the value function (v). This function is different for different games, but here we focus specifically on the weighted voting game for the reasons outlined in Section 1. 2.1 The weighted voting game We adopt the definition of the voting game given in [6]. Thus, there is a set of n players that may, for example, represent shareholders in a company or members in a parliament. The weighted voting game is then a game G = ⟨N, v⟩ in which v(S) = 1 if w(S) ≥ q and v(S) = 0 otherwise, for some q ∈ ℝ+ and w ∈ ℝ^N_+, where w(S) = Σ_{i∈S} wi for any coalition S. Thus wi is the number of votes that player i has and q is the number of votes needed to win the game (i.e., the quota). Note that for this game (denoted ⟨q; w1, ..., wn⟩), a player's marginal contribution is either zero or one. This is because the value of any coalition is either zero or one. A coalition with value zero is called a losing coalition and with value one a winning coalition. If a player's entry to a coalition changes it from losing to winning, then the player's marginal contribution for that coalition is one; otherwise it is zero. The main advantage of the Shapley value is that it gives a solution that is both unique and fair. The property of uniqueness is desirable because it leaves no ambiguity. The property of fairness relates to how the gains from cooperation are split between the members of a coalition. In this case, a player's Shapley value is proportional to the contribution it makes as a member of a coalition; the more contribution it makes, the higher its value. Thus, from a player's perspective, both uniqueness and fairness are desirable properties.
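To make Definition 1 and the weighted voting game concrete, the following minimal Python sketch (not from the paper; the example game ⟨4; 3, 2, 2⟩ and the function name shapley_weighted_voting are purely illustrative) computes exact Shapley values by enumerating the n! orderings of the players and averaging each player's marginal contributions, exactly as the interpretation above describes.

from itertools import permutations
from math import factorial
from fractions import Fraction

def shapley_weighted_voting(weights, q):
    # Exact Shapley values for the game <q; w1, ..., wn> (Definition 1):
    # average each player's marginal contribution over all n! orderings.
    n = len(weights)
    pivotal_counts = [0] * n
    for order in permutations(range(n)):
        total = 0
        for player in order:
            before = total              # weight of the players preceding `player`
            total += weights[player]
            # marginal contribution is 1 iff the player turns a losing
            # coalition (weight < q) into a winning one (weight >= q)
            if before < q <= total:
                pivotal_counts[player] += 1
    return [Fraction(c, factorial(n)) for c in pivotal_counts]

print(shapley_weighted_voting([3, 2, 2], q=4))   # [1/3, 1/3, 1/3]

For the illustrative game ⟨4; 3, 2, 2⟩ this yields (1/3, 1/3, 1/3), which shows that the value measures voting power rather than raw weight. Such brute-force enumeration is of course only feasible for very small n, which is what motivates the special cases and the randomized method discussed next.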
3. VOTING GAMES WITH POLYNOMIAL TIME SOLUTIONS Here we describe those voting games for which the Shapley value can be determined in polynomial time. This is achieved using the direct enumeration approach (i.e., listing all possible coalitions and finding a player's marginal contribution to each of them). We characterise such games in terms of the number of players and their weights. 3.1 All players have equal weight Consider the game ⟨q; j, ..., j⟩ with m parties. Each party has j votes. If q ≤ j, then there would be no need for the players to form a coalition. On the other hand, if q = mj (m = |N| is the number of players), only the grand coalition is possible. The interesting games are those for which the quota (q) satisfies the constraint (j + 1) ≤ q ≤ j(m − 1). For these games, the value of a coalition is one if the weight of the coalition is greater than or equal to q, otherwise it is zero. Let ϕ denote the Shapley value for a player. Consider any one player. This player can join a coalition as the ith member where 1 ≤ i ≤ m. However, the marginal contribution of the player is 1 only if it joins a coalition as the ⌈q/j⌉th member. In all other cases, its marginal contribution is zero. Thus, the Shapley value for each player is ϕ = 1/m. Since ϕ requires one division operation, it can be found in constant time (i.e., O(1)). 3.2 A single large party Consider a game in which there are two types of players: large (with weight wl > ws) and small (with weight ws). There is one large player and m small ones. The quota for this game is q; i.e., we have a game of the form ⟨q; wl, ws, ws, ..., ws⟩. The total number of players is (m + 1). The value of a coalition is one if the weight of the coalition is greater than or equal to q, otherwise it is zero. Let ϕl denote the Shapley value for the large player and ϕs that for each small player. We first consider ws = 1 and then ws > 1. The smallest possible value for q is wl + 1. This is because, if q ≤ wl, then the large party can win the election on its own without the need for a coalition. Thus, the quota for the game satisfies the constraint wl + 1 ≤ q ≤ m + wl − 1. Also, the lower and upper limits for wl are 2 and (q − 1) respectively. The lower limit is 2 because the weight of the large party has to be greater than each small one. Furthermore, the weight of the large party cannot be greater than q, since in that case, there would be no need for the large party to form a coalition. Recall that for our voting game, a player's marginal contribution to a coalition can only be zero or one. Consider the large party. This party can join a coalition as the ith member where 1 ≤ i ≤ (m + 1). However, the marginal contribution of the large party is one if it joins a coalition as the ith member where (q − wl) ≤ i < q. In all the remaining cases, its marginal contribution is zero. Thus, out of the total (m + 1) possible cases, its marginal contribution is one in wl cases. Hence, the Shapley value of the large party is ϕl = wl/(m + 1). In the same way, we obtain the Shapley value of the large party for the general case where ws > 1 as ϕl = (wl/ws)/(m + 1). Now consider a small player. We know that the sum of the Shapley values of all the m + 1 players is one. Also, since the small parties have equal weights, their Shapley values are the same. Hence, we get ϕs = (1 − ϕl)/m. Thus, both ϕl and ϕs can be computed in constant time. This is because both require a constant number of basic operations (addition, subtraction, multiplication, and division). In the same way, the Shapley value for a voting game with a single large party and multiple small parties can be determined in constant time.
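The constant-time result for the single-large-party case can be illustrated with the short sketch below (not part of the paper; it assumes ws = 1 and, additionally, that q ≤ m + 1 so that every one of the wl pivotal positions for the large party is actually available), evaluating the closed forms ϕl = wl/(m + 1) and ϕs = (1 − ϕl)/m directly.

from fractions import Fraction

def single_large_party_values(w_l, m):
    # Closed-form Shapley values for <q; w_l, 1, ..., 1> with one large party
    # (weight w_l) and m small parties of weight 1, under the constraints in
    # the text plus the assumption q <= m + 1 (so all w_l pivotal positions exist).
    phi_l = Fraction(w_l, m + 1)     # large party: w_l pivotal positions out of m + 1
    phi_s = (1 - phi_l) / m          # the m small parties split the remainder equally
    return phi_l, phi_s

# e.g. w_l = 3, m = 5, q = 4: the closed form gives (1/2, 1/10), which matches
# shapley_weighted_voting([3, 1, 1, 1, 1, 1], q=4) from the earlier enumeration sketch.
print(single_large_party_values(3, 5))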
3.3 Multiple large and small parties We now consider a voting game that has two player types: large and small (as in Section 3.2), but now there are multiple large and multiple small parties. The set of parties consists of ml large parties and ms small parties. The weight of each large party is wl and that of each small one is ws, where ws < wl. We show the computational tractability for this game by considering the following four possible scenarios: S1 q ≤ mlwl and q ≤ msws; S2 q ≤ mlwl and q ≥ msws; S3 q ≥ mlwl and q ≥ msws; S4 q ≥ mlwl and q ≤ msws. For the first scenario, consider a large player. In order to determine the Shapley value for this player, we need to consider the number of all possible coalitions that give it a marginal contribution of one. It is possible for the marginal contribution of this player to be one if it joins a coalition in which the number of large players is between zero and (q − 1)/wl. In other words, there are (q − 1)/wl + 1 such cases and we now consider each of them. Consider a coalition such that when the large player joins in, there are i large players and (q − iwl − 1)/ws small players already in it, and the remaining players join after the large player. Such a coalition gives the large player unit marginal contribution. Let C2l(i, q) denote the number of all such coalitions. To begin, consider the case i = 0: C2l(0, q) = C(ms, (q − 1)/ws) × ((q − 1)/ws)! × (ml + ms − (q − 1)/ws − 1)!, where C(y, x) denotes the number of possible combinations of x items from a set of y items. For i = 1, we get: C2l(1, q) = C(ml, 1) × C(ms, (q − wl − 1)/ws) × ((q − wl − 1)/ws)! × (ml + ms − (q − wl − 1)/ws − 1)!. In general, for i > 1, we get: C2l(i, q) = C(ml, i) × C(ms, (q − iwl − 1)/ws) × ((q − iwl − 1)/ws)! × (ml + ms − (q − wl − 1)/ws − 1)!. Thus the large player's Shapley value is: ϕl = Σ_{i=0}^{(q−1)/wl} C2l(i, q) / (ml + ms)!. For a given i, the time to find C2l(i, q) is O(T) where T = (mlms(q − iwl − 1)(ml + ms))/ws. Hence, the time to find the Shapley value is O(Tq/wl). In the same way, a small player's Shapley value is ϕs = Σ_{i=0}^{(q−1)/ws} C2s(i, q) / (ml + ms)!, and can be found in time O(Tq/ws). Likewise, the remaining three scenarios (S2 to S4) can be shown to have the same time complexity. 3.4 Three player types We now consider a voting game that has three player types: 1, 2, and 3. The set of parties consists of m1 players of type 1 (each with weight w1), m2 players of type 2 (each with weight w2), and m3 players of type 3 (each with weight w3). For this voting game, consider a player of type 1. It is possible for the marginal contribution of this player to be one if it joins a coalition in which the number of type 1 players is between zero and (q − 1)/w1. In other words, there are (q − 1)/w1 + 1 such cases and we now consider each of them. Consider a coalition such that when the type 1 player joins in, there are i type 1 players already in it. The remaining players join after the type 1 player.
Let C31(i, q) denote the number of all such coalitions that give a marginal contribution of one to the type 1 player, where: C31(i, q) = Σ_{i=0}^{(q−1)/w1} Σ_{j=0}^{(q−iw1−1)/w2} C21(j, q − iw1). Therefore the Shapley value of the type 1 player is: ϕ1 = Σ_{i=0}^{(q−1)/w1} C31(i, q) / (m1 + m2 + m3)!. The time complexity of finding this value is O(Tq²/(w1w2)) where T = (Π_{i=1}^{3} mi)(q − iwl − 1)(Σ_{i=1}^{3} mi)/(w2 + w3). Likewise, for the other two player types (2 and 3). Thus, we have identified games for which the exact Shapley value can be easily determined. However, the computational complexity of the above direct enumeration method increases with the number of player types. For a voting game with more than three player types, the time complexity of the above method is a polynomial of degree four or more. To deal with such situations, therefore, the following section presents a faster randomised method for finding the approximate Shapley value. 4. FINDING THE APPROXIMATE SHAPLEY VALUE We first give a brief introduction to randomized algorithms and then present our randomized method for finding the approximate Shapley value. Randomized algorithms are the most commonly used approach for finding approximate solutions to computationally hard problems. A randomized algorithm is an algorithm that, during some of its steps, performs random choices [2]. The random steps performed by the algorithm imply that by executing the algorithm several times with the same input we are not guaranteed to find the same solution. Now, since such algorithms generate approximate solutions, their performance is evaluated in terms of two criteria: their time complexity, and their error of approximation. The approximation error refers to the difference between the exact solution and its approximation. Against this background, we present a randomized method for finding the approximate Shapley value and empirically evaluate its error. We first describe the general voting game and then present our randomized algorithm. In its general form, a voting game has more than two types of players. Let wi denote the weight of player i. Thus, for m players and for quota q the game is of the form ⟨q; w1, w2, ..., wm⟩. The weights are specified in terms of a probability distribution function. For such a game, we want to find the approximate Shapley value. We let P denote a population of players. The players' weights in this population are defined by a probability distribution function. Irrespective of the actual probability distribution function, let μ be the mean weight for the population of players and ν the variance in the players' weights. From this population of players we randomly draw samples and find the sum of the players' weights in the sample using the following rule from Sampling Theory (see [8] p425): If w1, w2, ..., wn is a random sample of size n drawn from any distribution with mean μ and variance ν, then the sample sum has an approximate Normal distribution with mean nμ and variance ν/n (the larger the n, the better the approximation). R-SHAPLEYVALUE(P, μ, ν, q, wi). P: population of players; μ: mean weight of the population P; ν: variance in the weights for population P; q: quota for the voting game; wi: player i's weight. 1. Ti ← 0; a ← q − wi; b ← q − ε. 2. For X from 1 to m repeatedly do the following. 2.1. Select a random sample SX of size X from the population P. 2.2.
Evaluate expected marginal contribution (ΔXi) of player i to SX as: ΔXi ← (1/√(2πν/X)) ∫_a^b e^(−X(x−Xμ)²/(2ν)) dx. 2.3. Ti ← Ti + ΔXi. 3. Evaluate Shapley value of player i as: ϕi ← Ti/m. Table 1: Randomized algorithm to find the Shapley value for player i. We know from Definition 1 that the Shapley value for a player is the expectation (E) of its marginal contribution to a coalition that is chosen randomly. We use this rule to determine the Shapley value as follows. For player i with weight wi, let ϕi denote the Shapley value. Let X denote the size of a random sample drawn from a population in which the individual player weights have any distribution. The marginal contribution of player i to this random sample is one if the total weight of the X players in the sample is greater than or equal to a = q − wi but less than b = q − ε (where ε is an infinitesimally small quantity). Otherwise, its marginal contribution is zero. Thus, the expected marginal contribution of player i (denoted ΔXi) to the sample coalition is the area under the curve defined by N(Xμ, ν/X) in the interval [a, b]. This area is shown as the region B in Figure 1 (the dotted line in the figure is Xμ). Hence we get: ΔXi = (1/√(2πν/X)) ∫_a^b e^(−X(x−Xμ)²/(2ν)) dx (2), and the Shapley value is: ϕi = (1/m) Σ_{X=1}^{m} ΔXi (3). The above steps are described in Table 1. In more detail, Step 1 does the initialization. In Step 2, we vary X between 1 and m and repeatedly do the following. In Step 2.1, we randomly select a sample SX of size X from the population P. Player i's marginal contribution to the random coalition SX is found in Step 2.2. The average marginal contribution is found in Step 3, and this is the Shapley value for player i. THEOREM 1. The time complexity of the proposed randomized method is linear in the number of players. PROOF. As per Equation 3, ΔXi must be computed m times. This is done in the for loop of Step 2 in Table 1. Hence, the time complexity of computing a player's Shapley value is O(m). The following section analyses the approximation error for the proposed method.
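Before turning to the error analysis, here is a compact Python sketch of the estimator in Table 1 (an illustration, not the authors' code; the function names approx_shapley and normal_cdf are ours). It follows Equations 2 and 3 as printed: for each candidate coalition size X, the probability that the sample weight sum falls in [a, b) is read off the normal approximation N(Xμ, ν/X), and the estimate is the average of these m probabilities, which gives the O(m) running time of Theorem 1. The integral in Equation 2 is evaluated here via the normal cumulative distribution function rather than by numerical integration, and the explicit draw of SX in Step 2.1 is omitted because Step 2.2, as printed, only uses its size X together with μ and ν.

from math import erf, sqrt

def normal_cdf(x, mean, var):
    # CDF of the normal distribution N(mean, var)
    return 0.5 * (1.0 + erf((x - mean) / sqrt(2.0 * var)))

def approx_shapley(w_i, q, mu, nu, m):
    # Approximate Shapley value of a player with weight w_i in an m-player
    # voting game with quota q, where the population of weights has mean mu
    # and variance nu (Equations 2 and 3 as printed).
    a, b = q - w_i, q                      # marginal contribution is 1 iff the sum lies in [a, b)
    total = 0.0
    for X in range(1, m + 1):
        var = nu / X                       # sample-sum variance as used in Equation 2
        total += normal_cdf(b, X * mu, var) - normal_cdf(a, X * mu, var)
    return total / m

# e.g. approx_shapley(w_i=10, q=200, mu=10, nu=1, m=50)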
5. PERFORMANCE OF THE RANDOMIZED METHOD We first derive the formula for measuring the error in the approximate Shapley value and then conduct experiments for evaluating this error in a wide range of settings. However, before doing so, we introduce the idea of error. The concept of error relates to a measurement made of a quantity which has an accepted value [22, 4]. Obviously, it cannot be determined exactly how far off a measurement is from the accepted value; if this could be done, it would be possible to just give a more accurate, corrected value. Thus, error has to do with uncertainty in measurements that nothing can be done about. If a measurement is repeated, the values obtained will differ and none of the results can be preferred over the others. However, although it is not possible to do anything about such error, it can be characterized. As described in Section 4, we make measurements on samples that are drawn randomly from a given population (P) of players. Now, there are statistical errors associated with sampling which are unavoidable and must be lived with. Hence, if the result of a measurement is to have meaning it cannot consist of the measured value alone. An indication of how accurate the result is must be included also. Thus, the result of any physical measurement has two essential components: 1. a numerical value giving the best estimate possible of the quantity measured, and 2. the degree of uncertainty associated with this estimated value. For example, if the estimate of a quantity is x and the uncertainty is e(x), the quantity would lie in x ± e(x). For sampling experiments, the standard error is by far the most common way of characterising uncertainty [22]. Given this, the following section defines this error and uses it to evaluate the performance of the proposed randomized method. 5.1 Approximation error The accuracy of the above randomized method depends on its sampling error, which is defined as follows [22, 4]: DEFINITION 2. The sampling error (or standard error) is defined as the standard deviation for a set of measurements divided by the square root of the number of measurements. To this end, let e(σX) be the sampling error in the sum of the weights for a sample of size X drawn from the distribution N(Xμ, ν/X), where: e(σX) = √(ν/X)/√X = √ν/X (4). Let e(ΔXi) denote the error in the marginal contribution for player i (given in Equation 2). This error is obtained by propagating the error in Equation 4 to Equation 2. In Equation 2, a and b are the lower and upper limits for the sum of the players' weights for a coalition of size X. [Figure 1: A normal distribution for the sum of players' weights in a coalition of size X, showing the regions A, B, and C around the limits a and b, and the points Z1 and Z2 on the sum-of-weights axis.] [Figure 2: Performance of the randomized method for m = 10 players; axes: quota, weight, and percentage error in the Shapley value.] Since the error in this sum is e(σX), the actual values of a and b lie in the intervals a ± e(σX) and b ± e(σX) respectively. Hence, the error in Equation 2 is either the probability that the sum lies between the limits a − e(σX) and a (i.e., the area under the curve defined by N(Xμ, ν/X) between a − e(σX) and a, which is the shaded region A in Figure 1) or the probability that the sum of weights lies between the limits b and b + e(σX) (i.e., the area under the curve defined by N(Xμ, ν/X) between b and b + e(σX), which is the shaded region C in Figure 1). More specifically, the error is the maximum of these two probabilities: e(ΔXi) = (1/√(2πν/X)) × max(∫_{a−e(σX)}^{a} e^(−X(x−Xμ)²/(2ν)) dx, ∫_{b}^{b+e(σX)} e^(−X(x−Xμ)²/(2ν)) dx). On the basis of the above error, we find the error in the Shapley value by using the following standard error propagation rules [22]: R1 If x and y are two random variables with errors e(x) and e(y) respectively, then the error in the random variable z = x + y is given by e(z) = e(x) + e(y). R2 If x is a random variable with error e(x) and z = kx, where the constant k has no error, then the error in z is e(z) = |k|e(x). [Figure 3: Performance of the randomized method for m = 50 players; axes: quota, weight, and percentage error in the Shapley value.] Using the above rules, the error in the Shapley value (given in Equation 3) is obtained by propagating the error in Equation 4 to all coalitions between the sizes X = 1 and X = m. Let e(ϕi) denote this error, where: e(ϕi) = (1/m) Σ_{X=1}^{m} e(ΔXi). We analyze the performance of our method in terms of the percentage error PE in the approximate Shapley value, which is defined as follows: PE = 100 × e(ϕi)/ϕi (5). 5.2 Experimental Results We now compute the percentage error in the Shapley value using the above equation for PE.
Since this error depends on the parameters of the voting game, we evaluate it in a range of settings by systematically varying the parameters of the voting game. In particular, we conduct experiments in the following setting. For a player with weight w, the percentage error in a player's Shapley value depends on the following five parameters (see Equation 3): 1. The number of parties (m). 2. The mean weight (μ). 3. The variance in the players' weights (ν). 4. The quota for the voting game (q). 5. The given player's weight (w). We fix μ = 10 and ν = 1. This is because, for the normal distribution, μ = 10 ensures that for almost all the players the weight is positive, and ν = 1 is used most commonly in statistical experiments (ν can be higher or lower, but PE is increasing in ν; see Equations 4 and 5). We then vary m, q, and w as follows. We vary m between 5 and 100 (since beyond 100 we found that the error is close to zero), for each m we vary q between 4μ and mμ (we impose these limits because they ensure that the size of the winning coalition is more than one and less than m; see Section 3 for details), and for each q, we vary w between 1 and q − 1 (because a winning coalition must contain at least two players). [Figure 4: Performance of the randomized method for m = 100 players; axes: quota, weight, and percentage error in the Shapley value.] The results of these experiments are shown in Figures 2, 3, and 4. As seen in the figures, the maximum PE is around 20% and in most cases it is below 5%. We now analyse the effect of the three parameters w, q, and m on the percentage error in more detail. - Effect of w. The PE depends on e(σX) because, in Equation 5, the limits of integration depend on e(σX). The interval over which the first integration in Equation 5 is done is a − (a − e(σX)) = e(σX), and the interval over which the second one is done is (b + e(σX)) − b = e(σX). Thus, the interval is the same for both integrations and it is independent of wi. Note that each of the two functions that are integrated in Equation 5 is the same as the function that is integrated in Equation 2. Only the limits of the integration are different. Also, the interval over which the integration for the marginal contribution of Equation 2 is done is b − a = wi − ε (see Figure 1). The error in the marginal contribution is either the area of the shaded region A (between a − e(σX) and a) in Figure 1, or the shaded area C (between b and b + e(σX)). As per Equation 5, it is the maximum of these two areas. Since e(σX) is independent of wi, as wi increases, e(σX) remains unchanged. However, the area of the unshaded region B increases. Hence, as wi increases, the error in the marginal contribution decreases and PE also decreases. - Effect of q. For a given q, the Shapley value for player i is as given in Equation 3. We know that, for a sample of size X, the sum of the players' weights is distributed normally with mean Xμ and variance ν/X. Since 99% of a normal distribution lies within two standard deviations of its mean [8], player i's marginal contribution to a sample of size X is almost zero if a > Xμ + 2√(ν/X) or b < Xμ − 2√(ν/X). This is because the three regions A, B, and C (in Figure 1) then lie either to the right of Z2 or to the left of Z1.
However, player i's marginal contribution is greater than zero for those X for which the following constraint is satisfied: Xμ − 2√(ν/X) < a < b < Xμ + 2√(ν/X) (6). For this constraint, the three regions A, B, and C lie somewhere between Z1 and Z2. Since a = q − wi and b = q − ε, Equation 6 can also be written as: Xμ − 2√(ν/X) < q − wi < q − ε < Xμ + 2√(ν/X). The smallest X that satisfies the constraint in Equation 6 strictly increases with q. As X increases, the error in the sum of weights in a sample (i.e., e(σX) = √ν/X) decreases. Consequently, the error in a player's marginal contribution (see Equation 5) also decreases. This implies that as q increases, the error in the marginal contribution (and consequently the error in the Shapley value) decreases. - Effect of m. It is clear from Equation 4 that the error e(σX) is highest for X = 1 and it decreases with X. Hence, for small m, e(σ1) has a significant effect on PE. But as m increases, the effect of e(σ1) on PE decreases and, as a result, PE decreases. 6. RELATED WORK In order to overcome the computational complexity of finding the Shapley value, two main approaches have been proposed in the literature. One approach is to use generating functions [3]. This method is an exact procedure that overcomes the problem of time complexity, but its storage requirements are substantial: it requires huge arrays. It also has the limitation (not shared by other approaches) that it can only be applied to games with integer weights and quotas. The other method uses an approximation technique based on Monte Carlo simulation. In [12], for instance, the Shapley value is computed by considering a random sample from a large population of players. The method we propose differs from this in that they define the Shapley value by treating a player's number of swings (if a player can change a losing coalition to a winning one, then, for the player, the coalition is counted as a swing) as a random variable, while we treat the players' weights as random variables. In [12], however, the question remains how to get the number of swings from the definition of a voting game and what the time complexity of doing this is. Since the voting game is defined in terms of the players' weights and the number of swings is obtained from these weights, our method corresponds more closely to the definition of the voting game. Our method also differs from [7] in that while [7] presents a method for the case where all the players' weights are distributed normally, our method applies to any type of distribution for these weights. Thus, as stated in Section 1, our method is more general than [3, 12, 7]. Also, unlike all the above-mentioned work, we provide an analysis of the performance of our method in terms of the percentage error in the approximate Shapley value. A method for finding the Shapley value was also proposed in [5]. This method gives the exact Shapley value, but its time complexity is exponential. Furthermore, the method can be used only if the game is represented in a specific form (viz., the multi-issue representation), not otherwise. Finally, [9, 10] present a polynomial time method for finding the Shapley value. This method can be used if the coalition game is represented as a marginal contribution net. Furthermore, they assume that the Shapley value of a component of a given coalition game is given by an oracle, and on the basis of this assumption aggregate these values to find the value for the overall game.
In contrast, our method is independent The Sixth Intl.. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 965 of the representation and gives an approximate Shapley value in linear time, without the need for an oracle. 7. CONCLUSIONS AND FUTURE WORK Coalition formation is an important form of interaction in multiagent systems. An important issue in such work is for the agents to decide how to split the gains from cooperation between the members of a coalition. In this context, cooperative game theory offers a solution concept called the Shapley value. The main advantage of the Shapley value is that it provides a solution that is both unique and fair. However, its main problem is that, for many coalition games, the Shapley value cannot be determined in polynomial time. In particular, the problem of finding this value for the voting game is #P-complete. Although this problem is, in general #P-complete, we show that there are some specific voting games for which the Shapley value can be determined in polynomial time and characterise such games. By doing so, we have shown when it is computationally feasible to find the exact Shapley value. For other complex voting games, we presented a new randomized method for determining the approximate Shapley value. The time complexity of the proposed method is linear in the number of players. We analysed the performance of this method in terms of the percentage error in the approximate Shapley value. Our experiments show that the percentage error in the Shapley value is at most 20. Furthermore, in most cases, the error is less than 5%. Finally, we analyse the effect of the different parameters of the voting game on this error. Our study shows that the error decreases as 1. a player``s weight increases, 2. the quota increases, and 3. the number of players increases. Given the fact that software agents have limited computational resources and therefore cannot compute the true Shapley value, our results are especially relevant to such resource bounded agents. In future, we will explore the problem of determining the Shapley value for other commonly occurring coalition games like the production economy and the market economy. 8. REFERENCES [1] R. Aumann. Acceptable points in general cooperative n-person games. In Contributions to theTheory of Games volume IV. Princeton University Press, 1959. [2] G. Ausiello, P. Crescenzi, G. Gambosi, V. Kann, A. Marchetti-Spaccamela, and M. Protasi. Complexity and approximation: Combinatorial optimization problems and their approximability properties. Springer, 2003. [3] J. M. Bilbao, J. R. Fernandez, A. J. Losada, and J. J. Lopez. Generating functions for computing power indices efficiently. Top 8, 2:191-213, 2000. [4] P. Bork, H. Grote, D. Notz, and M. Regler. Data Analysis Techniques in High Energy Physics Experiments. Cambridge University Press, 1993. [5] V. Conitzer and T. Sandholm. Computing Shapley values, manipulating value division schemes, and checking core membership in multi-issue domains. In Proceedings of the National Conference on Artificial Intelligence, pages 219-225, San Jose, California, 2004. [6] X. Deng and C. H. Papadimitriou. On the complexity of cooperative solution concepts. Mathematics of Operations Research, 19(2):257-266, 1994. [7] S. S. Fatima, M. Wooldridge, and N. R. Jennings. An analysis of the shapley value and its uncertainty for the voting game. In Proc. 7th Int. Workshop on Agent Mediated Electronic Commerce, pages 39-52, 2005. [8] A. Francis. Advanced Level Statistics. 
Stanley Thornes Publishers, 1979. [9] S. Ieong and Y. Shoham. Marginal contribution nets: A compact representation scheme for coalitional games. In Proceedings of the Sixth ACM Conference on Electronic Commerce, pages 193-202, Vancouver, Canada, 2005. [10] S. Ieong and Y. Shoham. Multi-attribute coalition games. In Proceedings of the Seventh ACM Conference on Electronic Commerce, pages 170-179, Ann Arbor, Michigan, 2006. [11] J. P. Kahan and A. Rapoport. Theories of Coalition Formation. Lawrence Erlbaum Associates Publishers, 1984. [12] I. Mann and L. S. Shapley. Values for large games iv: Evaluating the electoral college exactly. Technical report, The RAND Corporation, Santa Monica, 1962. [13] A. MasColell, M. Whinston, and J. R. Green. Microeconomic Theory. Oxford University Press, 1995. [14] M. J. Osborne and A. Rubinstein. A Course in Game Theory. The MIT Press, 1994. [15] C. H. Papadimitriou. Computational Complexity. Addison Wesley Longman, 1994. [16] A. Rapoport. N-person Game Theory : Concepts and Applications. Dover Publications, Mineola, NY, 2001. [17] A. E. Roth. Introduction to the shapley value. In A. E. Roth, editor, The Shapley value, pages 1-27. University of Cambridge Press, Cambridge, 1988. [18] T. Sandholm and V. Lesser. Coalitions among computationally bounded agents. Artificial Intelligence Journal, 94(1):99-137, 1997. [19] L. S. Shapley. A value for n person games. In A. E. Roth, editor, The Shapley value, pages 31-40. University of Cambridge Press, Cambridge, 1988. [20] O. Shehory and S. Kraus. A kernel-oriented model for coalition-formation in general environments: Implemetation and results. In In Proceedings of the National Conference on Artificial Intelligence (AAAI-96), pages 131-140, 1996. [21] O. Shehory and S. Kraus. Methods for task allocation via agent coalition formation. Artificial Intelligence Journal, 101(2):165-200, 1998. [22] J. R. Taylor. An introduction to error analysis: The study of uncertainties in physical measurements. University Science Books, 1982. 966 The Sixth Intl.. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)
A Randomized Method for the Shapley Value for the Voting Game ABSTRACT The Shapley value is one of the key solution concepts for coalition games. Its main advantage is that it provides a unique and fair solution, but its main problem is that, for many coalition games, the Shapley value cannot be determined in polynomial time. In particular, the problem of finding this value for the voting game is known to be #P - complete in the general case. However, in this paper, we show that there are some specific voting games for which the problem is computationally tractable. For other general voting games, we overcome the problem of computational complexity by presenting a new randomized method for determining the approximate Shapley value. The time complexity of this method is linear in the number of players. We also show, through empirical studies, that the percentage error for the proposed method is always less than 20% and, in most cases, less than 5%. 1. INTRODUCTION Coalition formation, a key form of interaction in multi-agent systems, is the process of joining together two or more agents so as to achieve goals that individuals on their own cannot, or to achieve them more efficiently [1, 11, 14, 13]. Often, in such situations, there is more than one possible coalition and a player's payoff depends on which one it joins. Given this, a key problem is to ensure that none of the parties in a coalition has any incentive to break away from it and join another coalition (i.e., the coalitions should be stable). However, in many cases there may be more than one solution (i.e., a stable coalition). In such cases, it becomes difficult to select a single solution from among the possible ones, especially if the parties are self-interested (i.e., they have different preferences over stable coalitions). In this context, cooperative game theory deals with the problem of coalition formation and offers a number of solution concepts that possess desirable properties like stability, fair division ofjoint gains, and uniqueness [16, 14]. Cooperative game theory differs from its non-cooperative counterpart in that for the former the players are allowed to form binding agreements and so there is a strong incentive to work together to receive the largest total payoff. Also, unlike non-cooperative game theory, cooperative game theory does not specify a game through a description of the strategic environment (including the order of players' moves and the set of actions at each move) and the resulting payoffs, but, instead, it reduces this collection of data to the coalitional form where each coalition is represented by a single real number: there are no actions, moves, or individual payoffs. The chief advantage of this approach, at least in multiple-player environments, is its practical usefulness. Thus, many more real-life situations fit more easily into a coalitional form game, whose structure is more tractable than that of a non-cooperative game, whether that be in normal or extensive form and it is for this reason that we focus on such forms in this paper. Given these observations, a number of multiagent systems researchers have used and extended cooperative game-theoretic solutions to facilitate automated coalition formation [20, 21, 18]. Moreover, in this work, one of the most extensively studied solution concepts is the Shapley value [19]. A player's Shapley value gives an indication of its prospects of playing the game--the higher the Shapley value, the better its prospects. 
The main advantage of the Shapley value is that it provides a solution that is both unique and fair (see Section 2.1 for a discussion of the property of fairness). However, while these are both desirable properties, the Shapley value has one major drawback: for many coalition games, it cannot be determined in polynomial time. For instance, finding this value for the weighted voting game is, in general, #P - complete [6]. A problem is #P - hard if solving it is as hard as counting satisfying assignments of propositional logic formulae [15, p442]. Since #P - completeness thus subsumes NP-completeness, this implies that computing the Shapley value for the weighted voting game will be intractable in general. In other words, it is practically infeasible to try to compute the exact Shapley value. However, the voting game has practical relevance to multi-agent systems as it is an important means of reaching consensus between multiple agents. Hence, our objective is to overcome the computational complexity of finding the Shapley value for this game. Specifically, we first show that there are some specific voting games for which the exact value can be computed in polynomial time. By identifying such games, we show, for the first tme, when it is feasible to find the exact value and when it is not. For the computationally complex voting games, we present a new randomised method, along the lines of Monte-Carlo simulation, for computing the approximate Shapley value. The computational complexity of such games has typically been tackled using two main approaches. The first is to use generating functions [3]. This method trades time complexity for storage space. The second uses an approximation technique based on Monte Carlo simulation [12, 7]. However the method we propose is more general than either of these (see Section 6 for details). Moreover, no work has previously analysed the approximation error. The approximation error relates to how close the approximate is to the true Shapley value. Specifically, it is the difference between the true and the approximate Shapley value. It is important to determine this error because the performance of an approximation method is evaluated in terms of two criteria: its time complexity, and its approximation error. Thus, our contribution lies in also in providing, for the first time, an analysis of the percentage error in the approximate Shapley value. This analysis is carried out empirically. Our experiments show that the error is always less than 20%, and in most cases it is under 5%. Finally, our method has time complexity linear in the number of players and it does not require any arrays (i.e., it is economical in terms of both computing time and storage space). Given this, and the fact that software agents have limited computational resources and therefore cannot compute the true Shapley value, our results are especially relevant to such resource bounded agents. The rest of the paper is organised as follows. Section 2 defines the Shapley value and describes the weighted voting game. In Section 3 we describe voting games whose Shapley value can be found in polynomial time. In Section 4, we present a randomized method for finding the approximate Shapley value and analyse its performance in Section 5. Section 6 discusses related literature. Finally, Section 7 concludes. 2. BACKGROUND We begin by introducing coalition games and the Shapley value and then define the weighted voting game. 
A coalition game is a game where groups of players ("coalitions") may enforce cooperative behaviour between their members. Hence the game is a competition between coalitions of players, rather than between individual players. Depending on how the players measure utility, coalition game theory is split into two parts. If the players measure utility or the payoff in the same units and there is a means of exchange of utility such as side payments, we say the game has transferable utility; otherwise it has non-transferable utility. More formally, a coalition game with transferable utility, ~ N, v ~, consists of: 1. a finite set (N = {1, 2,..., n}) of players and 2. a function (v) that associates with every non-empty subset S of N (i.e., a coalition) a real number v (S) (the worth of S). For each coalition S, the number v (S) is the total payoff that is available for division among the members of S (i.e., the set of joint actions that coalition S can take consists of all possible divisions of v (S) among the members of S). Coalition games with nontransferable payoffs differ from ones with transferable payoffs in the following way. For the former, each coalition is associated with a set of payoff vectors that is not necessarily the set of all possible divisions of some fixed amount. The focus of this paper is on the weighted voting game (described in Section 2.1) which is a game with transferable payoffs. Thus, in either case, the players will only join a coalition if they expect to gain from it. Here, the players are allowed to form binding agreements, and so there is strong incentive to work together to receive the largest total payoff. The problem then is how to split the total payoff between or among the players. In this context, Shapley [19] constructed a solution using an axiomatic approach. Shapley defined a value for games to be a function that assigns to a game (N, v), a number ϕi (N, v) for each i in N. This function satisfies three axioms [17]: 1. Symmetry. This axiom requires that the names of players play no role in determining the value. 2. Carrier. This axiom requires that the sum of ϕi (N, v) for all players i in any carrier C equal v (C). A carrier C is a subset of N such that v (S) = v (S ∩ C) for any subset of players S ⊂ N. 3. Additivity. This axiom specifies how the values of different games must be related to one another. It requires that for any games ϕi (N, v) and ϕi (N, v'), ϕi (N, v) + ϕi (N, v) = ϕi (N, v + v) for all i in N. Shapley showed that there is a unique function that satisfies these three axioms. Shapley viewed this value as an index for measuring the power of players in a game. Like a price index or other market indices, the value uses averages (or weighted averages in some of its generalizations) to aggregate the power of players in their various cooperation opportunities. Alternatively, one can think of the Shapley value as a measure of the utility of risk neutral players in a game. We first introduce some notation and then define the Shapley value. Let S denote the set N − {i} and fi: S → 2N − {i} be a random variable that takes its values in the set of all subsets of N − {i}, and has the probability distribution function (g) defined as: The random variable fi is interpreted as the random choice of a coalition that player i joins. Then, a player's Shapley value is defined in terms of its marginal contribution. 
Thus, the marginal contribution of player i to coalition S with i ∈ / S is a function Div that is defined as follows: Thus a player's marginal contribution to a coalition S is the increase in the value of S as a result of i joining it. The Shapley value is interpreted as follows. Suppose that all the players are arranged in some order, all orderings being equally likely. Then ϕi (N, v) is the expected marginal contribution, over all orderings, of player i to the set of players who precede him. The method for finding a player's Shapley value depends on the definition of the value function (v). This function is different for different games, but here we focus specifically on the weighted voting game for the reasons outlined in Section 1. 960 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 2.1 The weighted voting game We adopt the definition of the voting game given in [6]. Thus, there is a set of n players that may, for example, represent shareholders in a company or members in a parliament. The weighted voting game is then a game G = (N, v) in which: for any coalition S. Thus wi is the number of votes that player i has and q is the number of votes needed to win the game (i.e., the quota). Note that for this game (denoted (q •, w1,..., wn)), a player's marginal contribution is either zero or one. This is because the value of any coalition is either zero or one. A coalition with value zero is called a "losing coalition" and with value one a "winning coalition". If a player's entry to a coalition changes it from losing to winning, then the player's marginal contribution for that coalition is one; otherwise it is zero. The main advantage of the Shapley value is that it gives a solution that is both unique and fair. The property of uniqueness is desirable because it leaves no ambiguity. The property of fairness relates to how the gains from cooperation are split between the members of a coalition. In this case, a player's Shapley value is proportional to the contribution it makes as a member of a coalition; the more contribution it makes, the higher its value. Thus, from a player's perspective, both uniqueness and fairness are desirable properties. 3. VOTING GAMES WITH POLYNOMIAL TIME SOLUTIONS Here we describe those voting games for which the Shapley value can be determined in polynomial time. This is achieved using the direct enumeration approach (i.e., listing all possible coalitions and finding a player's marginal contribution to each of them). We characterise such games in terms of the number of players and their weights. 3.1 All players have equal weight Consider the game (q •, j,..., j) with m parties. Each party has j votes. If q <j, then there would be no need for the players to form a coalition. On the other hand, if q = mj (m = IN I is the number of players), only the grand coalition is possible. The interesting games are those for which the quota (q) satisfies the constraint: (j + 1) <q <j (m--1). For these games, the value of a coalition is one if the weight of the coalition is greater than or equal to q, otherwise it is zero. Let ϕ denote the Shapley value for a player. Consider any one player. This player can join a coalition as the ith member where 1 <i <m. However, the marginal contribution of the player is 1 only if it joins a coalition as the rq/j1th member. In all other cases, its marginal contribution is zero. Thus, the Shapley value for each player ϕ = 1/m. 
Since ϕ requires one division operation, it can be found in constant time (i.e., 0 (1)). 3.2 A single large party Consider a game in which there are two types of players: large (with weight wl> ws) and small (with weight ws). There is one large player and m small ones. The quota for this game is q; i.e., we have a game of the form (q •, wl, ws, ws,..., ws). The total number of players is (m + 1). The value of a coalition is one if the weight of the coalition is greater than or equal to q, otherwise it is zero. Let ϕl denote the Shapley value for the large player and ϕs that for each small player. We first consider ws = 1 and then ws> 1. The smallest possible value for q is wl + 1. This is because, if q <wl, then the large party can win the election on its own without the need for a coalition. Thus, the quota for the game satisfies the constraint wl + 1 <q <m + wl--1. Also, the lower and upper limits for wl are 2 and (q--1) respectively. The lower limit is 2 because the weight of the large party has to be greater than each small one. Furthermore, the weight of the large party cannot be greater than q, since in that case, there would be no need for the large party to form a coalition. Recall that for our voting game, a player's marginal contribution to a coalition can only be zero or one. Consider the large party. This party can join a coalition as the ith member where 1 <i <(m + 1). However, the marginal contribution of the large party is one if it joins a coalition as the ith member where (q--wl) <i <q. In all the remaining cases, its marginal contribution is zero. Thus, out of the total (m + 1) possible cases, its marginal contribution is one in wl cases. Hence, the Shapley value of the large party is: ϕl = wl / (m + 1). In the same way, we obtain the Shapley value of the large party for the general case where ws> 1 as: Now consider a small player. We know that the sum of the Shapley values of all the m + 1 players is one. Also, since the small parties have equal weights, their Shapley values are the same. Hence, we get: Thus, both ϕl and ϕs can be computed in constant time. This is because both require a constant number of basic operations (addition, subtraction, multiplication, and division). In the same way, the Shapley value for a voting game with a single large party and multiple small parties can be determined in constant time. 3.3 Multiple large and small parties We now consider a voting game that has two player types: large and small (as in Section 3.2), but now there are multiple large and multiple small parties. The set of parties consists of ml large parties and ms small parties. The weight of each large party is wl and that of each small one is ws, where ws <wl. We show the computational tractability for this game by considering the following four possible scenarios: S1 q <mlwl and q <msws S2 q <mlwl and q> msws S3 q> mlwl and q> msws S4 q> mlwl and q <msws For the first scenario, consider a large player. In order to determine the Shapley value for this player, we need to consider the number of all possible coalitions that give it a marginal contribution of one. It is possible for the marginal contribution of this player to be one if it joins a coalition in which the number of large players is between zero and (q--1) / wl. In other words, there are (q--1) / wl + 1 such cases and we now consider each of them. The Sixth Intl. . Joint Conf. 
3.3 Multiple large and small parties
We now consider a voting game that has two player types, large and small (as in Section 3.2), but now there are multiple large and multiple small parties. The set of parties consists of ml large parties and ms small parties. The weight of each large party is wl and that of each small one is ws, where ws < wl. We show the computational tractability of this game by considering the following four possible scenarios: S1: q ≤ ml wl and q ≤ ms ws; S2: q ≤ ml wl and q > ms ws; S3: q > ml wl and q > ms ws; S4: q > ml wl and q ≤ ms ws. For the first scenario, consider a large player. In order to determine the Shapley value for this player, we need to consider the number of all possible coalitions that give it a marginal contribution of one. It is possible for the marginal contribution of this player to be one only if it joins a coalition in which the number of large players is between zero and (q − 1)/wl. In other words, there are ⌊(q − 1)/wl⌋ + 1 such cases, and we now consider each of them. Consider a coalition such that, when the large player joins it, there are i large players and ⌊(q − iwl − 1)/ws⌋ small players already in it, and the remaining players join after the large player. Such a coalition gives the large player unit marginal contribution. Let C2l(i, q) denote the number of all such coalitions, where C(y, x) denotes the number of possible combinations of x items from a set of y items; the cases i = 0, i = 1, and so on are counted in turn. Hence, the time to find the Shapley value is O(⌈q/wl⌉). In the same way, a small player's Shapley value can be found in time O(⌈q/ws⌉). Likewise, the remaining three scenarios (S2 to S4) can be shown to have the same time complexity.

3.4 Three player types
We now consider a voting game that has three player types: 1, 2, and 3. The set of parties consists of m1 players of type 1 (each with weight w1), m2 players of type 2 (each with weight w2), and m3 players of type 3 (each with weight w3). For this voting game, consider a player of type 1. It is possible for the marginal contribution of this player to be one only if it joins a coalition in which the number of type 1 players is between zero and (q − 1)/w1. In other words, there are ⌊(q − 1)/w1⌋ + 1 such cases, and we now consider each of them. Consider a coalition such that, when the type 1 player joins it, there are i type 1 players already in it; the remaining players join after the type 1 player. Let C3l(i, q) denote the number of all such coalitions that give a marginal contribution of one to the type 1 player. The time complexity of finding this value is O(⌈q²/(w1 w2)⌉). The same holds, likewise, for the other two player types (2 and 3). Thus, we have identified games for which the exact Shapley value can be easily determined. However, the computational complexity of the above direct enumeration method increases with the number of player types. For a voting game with more than three player types, the time complexity of the above method is a polynomial of degree four or more.
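For checking the counting arguments of this section on small games, the sketch below (ours; it is not the paper's closed-form combination counts) computes the exact Shapley value of one player in an arbitrary weighted voting game by counting, with a dynamic program over coalition weight, how many coalitions of each size the player turns from losing to winning. It is pseudo-polynomial in the quota rather than polynomial in the number of player types.

```python
from math import factorial

def exact_shapley_voting(weights, quota, i):
    # Exact Shapley value of player i in the game (q; w1, ..., wn).
    # count[s][w] = number of coalitions of size s drawn from the other
    # players whose total weight is w (weights >= quota are capped at quota,
    # which is safe because only weights below the quota can be swings).
    n = len(weights)
    others = [w for j, w in enumerate(weights) if j != i]
    count = [[0] * (quota + 1) for _ in range(n)]
    count[0][0] = 1
    for w in others:
        for s in range(n - 2, -1, -1):          # descending: use each player once
            for tot in range(quota, -1, -1):
                if count[s][tot]:
                    count[s + 1][min(quota, tot + w)] += count[s][tot]
    wi = weights[i]
    phi = 0.0
    for s in range(n):
        # Player i is pivotal for a coalition S iff q - wi <= w(S) <= q - 1.
        swings = sum(count[s][w] for w in range(max(0, quota - wi), quota))
        phi += swings * factorial(s) * factorial(n - 1 - s) / factorial(n)
    return phi

# Two large parties (weight 3), three small parties (weight 1), quota 5.
weights = [3, 3, 1, 1, 1]
print([round(exact_shapley_voting(weights, 5, i), 4) for i in range(len(weights))])
# [0.3, 0.3, 0.1333, 0.1333, 0.1333]
```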
To deal with such situations, therefore, the following section presents a faster randomised method for finding the approximate Shapley value.

4. FINDING THE APPROXIMATE SHAPLEY VALUE
We first give a brief introduction to randomized algorithms and then present our randomized method for finding the approximate Shapley value. Randomized algorithms are the most commonly used approach for finding approximate solutions to computationally hard problems. A randomized algorithm is an algorithm that, during some of its steps, performs random choices [2]. The random steps performed by the algorithm imply that, by executing the algorithm several times with the same input, we are not guaranteed to find the same solution. Now, since such algorithms generate approximate solutions, their performance is evaluated in terms of two criteria: their time complexity and their error of approximation. The approximation error refers to the difference between the exact solution and its approximation. Against this background, we present a randomized method for finding the approximate Shapley value and empirically evaluate its error. We first describe the general voting game and then present our randomized algorithm. In its general form, a voting game has more than two types of players. Let wi denote the weight of player i. Thus, for m players and for quota q the game is of the form (q; w1, w2, ..., wm). The weights are specified in terms of a probability distribution function. For such a game, we want to find the approximate Shapley value. We let P denote a population of players. The players' weights in this population are defined by a probability distribution function. Irrespective of the actual probability distribution function, let μ be the mean weight for the population of players and ν the variance in the players' weights. From this population of players we randomly draw samples and find the sum of the players' weights in the sample using the following rule from Sampling Theory (see [8] p425): if w1, w2, ..., wn is a random sample of size n drawn from any distribution with mean μ and variance ν, then the sample sum has an approximate Normal distribution with mean nμ and variance nν (the larger the n, the better the approximation).

R-SHAPLEYVALUE(P, μ, ν, q, wi)
P: Population of players
μ: Mean weight of the population P
ν: Variance in the weights for population P
q: Quota for the voting game
wi: Player i's weight
1. Ti ← 0; a ← q − wi; b ← q − ε
2. For X from 1 to m repeatedly do the following
2.1. Select a random sample SX of size X from the population P
2.2. Evaluate the expected marginal contribution ΔXi of player i to SX as in Equation 2 and set Ti ← Ti + ΔXi
3. Evaluate the Shapley value of player i as ϕi ← Ti/m
Table 1: Randomized algorithm to find the Shapley value for player i.

We know from Definition 1 that the Shapley value for a player is the expectation (E) of its marginal contribution to a coalition that is chosen randomly. We use this rule to determine the Shapley value as follows. For player i with weight wi, let ϕi denote the Shapley value. Let X denote the size of a random sample drawn from a population in which the individual player weights have any distribution. The marginal contribution of player i to this random sample is one if the total weight of the X players in the sample is greater than or equal to a = q − wi but at most b = q − ε (where ε is an infinitesimally small quantity); otherwise, its marginal contribution is zero. Thus, the expected marginal contribution of player i (denoted ΔXi) to the sample coalition is the area under the curve defined by N(Xμ, Xν) in the interval [a, b]. This area is shown as region B in Figure 1 (the dotted line in the figure is Xμ). Hence we get:
ΔXi = ∫_a^b N(Xμ, Xν) dx    (2)
and the Shapley value is:
ϕi = (1/m) Σ_{X=1}^{m} ΔXi    (3)
The above steps are described in Table 1. In more detail, Step 1 does the initialization. In Step 2, we vary X between 1 and m and repeatedly do the following. In Step 2.1, we randomly select a sample SX of size X from the population P. Player i's marginal contribution to the random coalition SX is found in Step 2.2. The average marginal contribution is found in Step 3; this is the Shapley value for player i.

THEOREM 1. The time complexity of the proposed randomized method is linear in the number of players.
PROOF. As per Equation 3, ΔXi must be computed m times. This is done in the for loop of Step 2 in Table 1. Hence, the time complexity of computing a player's Shapley value is O(m).
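A direct transcription of Table 1 might look like the following sketch (ours). It assumes Python's statistics.NormalDist for the normal curve and reads Step 2.2 as accumulating the area of region B between a and b under N(Xμ, Xν); note that, under this normal approximation, the drawn sample only instantiates the random coalition, since the area itself depends only on X, μ, and ν. All parameter values in the example are illustrative.

```python
import random
from statistics import NormalDist

def r_shapley_value(population, mu, var, quota, wi, m):
    # Randomized approximation of player i's Shapley value (Table 1).
    a, b = quota - wi, quota   # sample sum in [a, b) makes player i pivotal
    total = 0.0                # Ti in Table 1
    for X in range(1, m + 1):
        sample = random.sample(population, X)   # step 2.1: random coalition S_X
        # Step 2.2: expected marginal contribution of player i to S_X, i.e. the
        # area under N(X*mu, X*var) between a and b (b = q - epsilon, which for
        # a continuous CDF is the same as integrating up to q).
        dist = NormalDist(X * mu, (X * var) ** 0.5)
        total += dist.cdf(b) - dist.cdf(a)
    return total / m           # step 3

random.seed(0)
population = [random.gauss(10, 1) for _ in range(1000)]   # weights ~ N(10, 1)
print(r_shapley_value(population, mu=10, var=1, quota=60, wi=12, m=10))
```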
The following section analyses the approximation error for the proposed method.

5. PERFORMANCE OF THE RANDOMIZED METHOD
We first derive the formula for measuring the error in the approximate Shapley value and then conduct experiments to evaluate this error in a wide range of settings. However, before doing so, we introduce the idea of error. The concept of error relates to a measurement made of a quantity which has an accepted value [22, 4]. Obviously, it cannot be determined exactly how far off a measurement is from the accepted value; if this could be done, it would be possible to just give a more accurate, corrected value. Thus, error has to do with uncertainty in measurements that nothing can be done about. If a measurement is repeated, the values obtained will differ and none of the results can be preferred over the others. However, although it is not possible to do anything about such error, it can be characterized. As described in Section 4, we make measurements on samples that are drawn randomly from a given population (P) of players. Now, there are statistical errors associated with sampling which are unavoidable and must be lived with. Hence, if the result of a measurement is to have meaning, it cannot consist of the measured value alone; an indication of how "accurate" the result is must be included also. Thus, the result of any physical measurement has two essential components: 1. a numerical value giving the best "estimate" possible of the quantity measured, and 2. the degree of uncertainty associated with this estimated value. For example, if the estimate of a quantity is x and the uncertainty is e(x), the quantity would lie in x ± e(x). For sampling experiments, the standard error is by far the most common way of characterising uncertainty [22]. Given this, the following section defines this error and uses it to evaluate the performance of the proposed randomized method.

Figure 1: A normal distribution for the sum of players' weights in a coalition of size X.
Figure 2: Performance of the randomized method for m = 10 players.
Figure 3: Performance of the randomized method for m = 50 players.

5.1 Approximation error
The accuracy of the above randomized method depends on its sampling error, which is defined as follows [22, 4]:
DEFINITION 2. The sampling error (or standard error) is defined as the standard deviation for a set of measurements divided by the square root of the number of measurements.
To this end, let e(σX) be the sampling error in the sum of the weights for a sample of size X drawn from the distribution N(Xμ, Xν), where:
e(σX) = √(Xν)/X    (4)
Let e(ΔXi) denote the error in the marginal contribution for player i (given in Equation 2). This error is obtained by propagating the error in Equation 4 to Equation 2. In Equation 2, a and b are the lower and upper limits for the sum of the players' weights for a coalition of size X. Since the error in this sum is e(σX), the actual values of a and b lie in the intervals a ± e(σX) and b ± e(σX) respectively. Hence, the error in Equation 2 is either the probability that the sum lies between the limits a − e(σX) and a (i.e., the area under the curve defined by N(Xμ, Xν) between a − e(σX) and a, which is the shaded region A in Figure 1) or the probability that the sum of weights lies between the limits b and b + e(σX) (i.e., the area under the curve defined by N(Xμ, Xν) between b and b + e(σX), which is the shaded region C in Figure 1). More specifically, the error is the maximum of these two probabilities:
e(ΔXi) = max( ∫_{a−e(σX)}^{a} N(Xμ, Xν) dx , ∫_{b}^{b+e(σX)} N(Xμ, Xν) dx )    (5)
On the basis of the above error, we find the error in the Shapley value by using the following standard error propagation rules [22]. R1: if x and y are two random variables with errors e(x) and e(y) respectively, then the error in the random variable z = x + y is e(z) = √(e(x)² + e(y)²) (independent errors add in quadrature). R2: if z = kx, where the constant k has no error, then the error in z is e(z) = k e(x). Using the above rules, the error in the Shapley value (given in Equation 3) is obtained by propagating the error in Equation 4 to all coalitions between the sizes X = 1 and X = m. Let e(ϕi) denote this error; applying R1 and R2 to Equation 3 gives e(ϕi) = (1/m) √(Σ_{X=1}^{m} e(ΔXi)²). We analyze the performance of our method in terms of the percentage error PE in the approximate Shapley value, which is defined as the error expressed as a percentage of the Shapley value, i.e., PE = 100 · e(ϕi)/ϕi.
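Under this reading of Equations 4 and 5, the sampling error and the induced error in a single marginal contribution can be computed as follows (a sketch; the function name and example values are ours).

```python
from statistics import NormalDist

def marginal_contribution_error(X, mu, var, quota, wi):
    # e(sigma_X) = sqrt(X*var)/X (Equation 4), and e(Delta_Xi) as the larger
    # of the two shaded areas A and C of Figure 1 (Equation 5).
    e_sigma = (X * var) ** 0.5 / X
    a, b = quota - wi, quota
    dist = NormalDist(X * mu, (X * var) ** 0.5)
    area_a = dist.cdf(a) - dist.cdf(a - e_sigma)   # region A
    area_c = dist.cdf(b + e_sigma) - dist.cdf(b)   # region C
    return e_sigma, max(area_a, area_c)

print(marginal_contribution_error(X=5, mu=10, var=1, quota=60, wi=12))
```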
5.2 Experimental Results
We now compute the percentage error in the Shapley value using the above definition of PE. Since this error depends on the parameters of the voting game, we evaluate it in a range of settings by systematically varying these parameters. In particular, we conduct experiments in the following setting. For a player with weight w, the percentage error in the player's Shapley value depends on the following five parameters (see Equation 3):
1. The number of parties (m).
2. The mean weight (μ).
3. The variance in the players' weights (ν).
4. The quota for the voting game (q).
5. The given player's weight (w).
We fix μ = 10 and ν = 1. This is because, for the normal distribution, μ = 10 ensures that for almost all the players the weight is positive, and ν = 1 is used most commonly in statistical experiments (ν can be higher or lower, but PE is increasing in ν; see Equations 4 and 5). We then vary m, q, and w as follows. We vary m between 5 and 100 (since beyond 100 we found that the error is close to zero); for each m we vary q between 4μ and mμ (we impose these limits because they ensure that the size of the winning coalition is more than one and less than m; see Section 3 for details); and for each q, we vary w between 1 and q − 1 (because a winning coalition must contain at least two players).

Figure 4: Performance of the randomized method for m = 100 players.

The results of these experiments are shown in Figures 2, 3, and 4. As seen in the figures, the maximum PE is around 20% and in most cases it is below 5%.
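A sweep over these parameter settings can be scripted as below (our sketch, not the authors' code). It combines the per-coalition errors using rules R1 and R2 as read above and reports PE = 100 · e(ϕi)/ϕi; the particular (m, q, w) triples printed are illustrative.

```python
from statistics import NormalDist

def percentage_error(m, mu, var, quota, wi):
    # PE for one setting: errors e(Delta_Xi) combined in quadrature (R1)
    # and scaled by 1/m (R2), expressed as a percentage of phi_i.
    a, b = quota - wi, quota
    phi, err_sq = 0.0, 0.0
    for X in range(1, m + 1):
        sd = (X * var) ** 0.5
        dist = NormalDist(X * mu, sd)
        phi += dist.cdf(b) - dist.cdf(a)
        e_sigma = sd / X
        e_delta = max(dist.cdf(a) - dist.cdf(a - e_sigma),
                      dist.cdf(b + e_sigma) - dist.cdf(b))
        err_sq += e_delta ** 2
    phi /= m
    e_phi = (err_sq ** 0.5) / m
    return 100 * e_phi / phi if phi > 0 else float("inf")

for m in (10, 50, 100):
    for w in (1, 10, 25):
        q = 50   # one quota value in the range [4*mu, m*mu]
        print(f"m={m:3d} q={q} w={w:2d}  PE={percentage_error(m, 10, 1, q, w):6.2f}%")
```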
We now analyse the effect of the three parameters w, q, and m on the percentage error in more detail.

- Effect of w. The PE depends on e(σX) because, in Equation 5, the limits of integration depend on e(σX). The interval over which the first integration in Equation 5 is done is a − (a − e(σX)) = e(σX), and the interval over which the second one is done is (b + e(σX)) − b = e(σX). Thus, the interval is the same for both integrations and it is independent of wi. Note that each of the two functions that are integrated in Equation 5 is the same as the function that is integrated in Equation 2; only the limits of the integration are different. Also, the interval over which the integration for the marginal contribution of Equation 2 is done is b − a = wi − ε (see Figure 1). The error in the marginal contribution is either the area of the shaded region A (between a − e(σX) and a) in Figure 1, or the shaded area C (between b and b + e(σX)); as per Equation 5, it is the maximum of these two areas. Since e(σX) is independent of wi, as wi increases, e(σX) remains unchanged. However, the area of the unshaded region B increases. Hence, as wi increases, the error in the marginal contribution decreases relative to the marginal contribution itself, and PE also decreases.

- Effect of q. For a given q, the Shapley value for player i is as given in Equation 3. We know that, for a sample of size X, the sum of the players' weights is distributed normally with mean Xμ and variance Xν. Since over 95% of a normal distribution lies within two standard deviations of its mean [8], player i's marginal contribution to a sample of size X is almost zero if the interval [a, b] lies below Z1 = Xμ − 2√(Xν) or above Z2 = Xμ + 2√(Xν); this is because the three regions A, B, and C (in Figure 1) then lie either to the left of Z1 or to the right of Z2. However, player i's marginal contribution is greater than zero for those X for which the three regions A, B, and C lie somewhere between Z1 and Z2. Since a = q − wi and b = q − ε, this constraint can also be written in terms of q: as q increases, it is satisfied only by larger sample sizes X, and for larger X the error in the sum of weights in a sample (i.e., e(σX) = √(Xν)/X) decreases. Consequently, the error in a player's marginal contribution (see Equation 5) also decreases. This implies that as q increases, the error in the marginal contribution (and consequently the error in the Shapley value) decreases.

- Effect of m. It is clear from Equation 4 that the error e(σX) is highest for X = 1 and it decreases with X. Hence, for small m, e(σ1) has a significant effect on PE. But as m increases, the effect of e(σ1) on PE decreases and, as a result, PE decreases.

6. RELATED WORK
In order to overcome the computational complexity of finding the Shapley value, two main approaches have been proposed in the literature. One approach is to use generating functions [3]. This method is an exact procedure that overcomes the problem of time complexity, but its storage requirements are substantial: it requires huge arrays. It also has the limitation (not shared by other approaches) that it can only be applied to games with integer weights and quotas. The other method uses an approximation technique based on Monte Carlo simulation. In [12], for instance, the Shapley value is computed by considering a random sample from a large population of players. The method we propose differs from this in that [12] defines the Shapley value by treating a player's number of swings (if a player can change a losing coalition to a winning one, then, for the player, the coalition is counted as a swing) as a random variable, while we treat the players' weights as random variables. In [12], however, the question remains how to get the number of swings from the definition of a voting game and what the time complexity of doing this is. Since the voting game is defined in terms of the players' weights and the number of swings is obtained from these weights, our method corresponds more closely to the definition of the voting game. Our method also differs from [7] in that while [7] presents a method for the case where all the players' weights are distributed normally, our method applies to any type of distribution for these weights. Thus, as stated in Section 1, our method is more general than [3, 12, 7]. Also, unlike all the above mentioned work, we provide an analysis of the performance of our method in terms of the percentage error in the approximate Shapley value. A method for finding the Shapley value was also proposed in [5]. This method gives the exact Shapley value, but its time complexity is exponential. Furthermore, the method can be used only if the game is represented in a specific form (viz., the "multi-issue representation"), not otherwise. Finally, [9, 10] present a polynomial time method for finding the Shapley value. This method can be used if the coalition game is represented as a "marginal contribution net". Furthermore, they assume that the Shapley value of a component of a given coalition game is given by an oracle, and on the basis of this assumption aggregate these values to find the value for the overall game. In contrast, our method is independent
In contrast, our method is independent The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 965 of the representation and gives an approximate Shapley value in linear time, without the need for an oracle. 7. CONCLUSIONS AND FUTURE WORK Coalition formation is an important form of interaction in multiagent systems. An important issue in such work is for the agents to decide how to split the gains from cooperation between the members of a coalition. In this context, cooperative game theory offers a solution concept called the Shapley value. The main advantage of the Shapley value is that it provides a solution that is both unique and fair. However, its main problem is that, for many coalition games, the Shapley value cannot be determined in polynomial time. In particular, the problem of finding this value for the voting game is #P - complete. Although this problem is, in general #P - complete, we show that there are some specific voting games for which the Shapley value can be determined in polynomial time and characterise such games. By doing so, we have shown when it is computationally feasible to find the exact Shapley value. For other complex voting games, we presented a new randomized method for determining the approximate Shapley value. The time complexity of the proposed method is linear in the number of players. We analysed the performance of this method in terms of the percentage error in the approximate Shapley value. Our experiments show that the percentage error in the Shapley value is at most 20. Furthermore, in most cases, the error is less than 5%. Finally, we analyse the effect of the different parameters of the voting game on this error. Our study shows that the error decreases as 1. a player's weight increases, 2. the quota increases, and 3. the number of players increases. Given the fact that software agents have limited computational resources and therefore cannot compute the true Shapley value, our results are especially relevant to such resource bounded agents. In future, we will explore the problem of determining the Shapley value for other commonly occurring coalition games like the "production economy" and the "market economy".
I-45
Implementing Commitment-Based Interactions
Although agent interaction plays a vital role in MAS, and message-centric approaches to agent interaction have their drawbacks, present agent-oriented programming languages do not provide support for implementing agent interaction that is flexible and robust. Instead, messages are provided as a primitive building block. In this paper we consider one approach for modelling agent interactions: the commitment machines framework. This framework supports modelling interactions at a higher level (using social commitments), resulting in more flexible interactions. We investigate how commitment-based interactions can be implemented in conventional agent-oriented programming languages. The contributions of this paper are: a mapping from a commitment machine to a collection of BDI-style plans; extensions to the semantics of BDI programming languages; and an examination of two issues that arise when distributing commitment machines (turn management and race conditions) and solutions to these problems.
[ "agent interact", "commit machin framework", "commit machin", "social commit", "bdi", "race condit", "commit-base interact", "messagecentr approach", "agent-orient program languag", "bdi-style plan", "interact goal", "turn track", "belief manag method", "herm design", "netbil interact", "agent orient program languag", "belief desir intent" ]
[ "P", "P", "P", "P", "P", "P", "M", "M", "M", "M", "M", "M", "M", "U", "M", "M", "U" ]
Implementing Commitment-Based Interactions∗ Michael Winikoff School of Computer Science and IT RMIT University Melbourne, Australia michael.winikoff@rmit.edu.au

ABSTRACT
Although agent interaction plays a vital role in MAS, and message-centric approaches to agent interaction have their drawbacks, present agent-oriented programming languages do not provide support for implementing agent interaction that is flexible and robust. Instead, messages are provided as a primitive building block. In this paper we consider one approach for modelling agent interactions: the commitment machines framework. This framework supports modelling interactions at a higher level (using social commitments), resulting in more flexible interactions. We investigate how commitment-based interactions can be implemented in conventional agent-oriented programming languages. The contributions of this paper are: a mapping from a commitment machine to a collection of BDI-style plans; extensions to the semantics of BDI programming languages; and an examination of two issues that arise when distributing commitment machines (turn management and race conditions) and solutions to these problems.

Categories and Subject Descriptors
I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence - Multiagent systems; I.2.5 [Artificial Intelligence]: Programming Languages and Software

General Terms
Design

1. INTRODUCTION
Agents are social, and agent interaction plays a vital role in multiagent systems. Consequently, design and implementation of agent interaction is an important research topic. The standard approach for designing agent interactions is message-centric: interactions are defined by interaction protocols that give the permissible sequences of messages, specified using notations such as finite state machines, Petri nets, or Agent UML. It has been argued that this message-centric approach to interaction design is not a good match for intelligent agents. Intelligent agents should exhibit the ability to persist in achieving their goals in the face of failure (robustness) by trying different approaches (flexibility). On the other hand, when following an interaction protocol, an agent has limited flexibility and robustness: the ability to persistently try alternative means to achieving the interaction's aim is limited to those options that the protocol's designer provided, and in practice, message-centric design processes do not tend to lead to protocols that are flexible or robust. Recognising these limitations of the traditional approach to designing agent interactions, a number of approaches have been proposed in recent years that move away from message-centric interaction protocols, and instead consider designing agent interactions using higher-level concepts such as social commitments [8, 10, 18] or interaction goals [2]. There has also been work on richer forms of interaction in specific settings, such as teams of cooperative agents [5, 11]. However, although there has been work on designing flexible and robust agent interactions, there has been virtually no work on providing programming language support for implementing such interactions. Current Agent Oriented Programming Languages (AOPLs) do not provide support for implementing flexible and robust agent interactions using higher-level concepts than messages. Indeed, modern AOPLs [1], with virtually no exceptions, provide only simple message sending as the basis for implementing agent interaction.
This paper presents what, to the best of our knowledge, is the second AOPL to support high-level, flexible, and robust agent interaction implementation. The first such language, STAPLE, was proposed a few years ago [9], but is not described in detail, and is arguably impractical for use by non-specialists, due to its logical basis and heavy reliance on temporal and modal logic. This paper presents a scheme for extending BDI-like AOPLs to support direct implementation of agent interactions that are designed using Yolum & Singh's commitment machine (CM) framework [19]. In the remainder of this paper we briefly review commitment machines and present a simple abstraction of BDI AOPLs which lies in the common subset of languages such as Jason, 3APL, and CAN. We then present a scheme for translating commitment machines to this language, and indicate how the language needs to be extended to support this. We then extend our scheme to address a range of issues concerned with distribution, including turn tracking [7], and race conditions.

2. BACKGROUND
2.1 Commitment Machines
The aim of the commitment machine framework is to allow for the definition of interactions that are more flexible than traditional message-centric approaches. A Commitment Machine (CM) [19] specifies an interaction between entities (e.g. agents, services, processes) in terms of actions that change the interaction state. This interaction state consists of fluents (predicates that change value over time), but also social commitments, both base-level and conditional. A base-level social commitment is an undertaking by debtor A to creditor B to bring about condition p, denoted C(A, B, p). This is sometimes abbreviated to C(p), where it is not important to specify the identities of the entities in question. For example, a commitment by customer C to merchant M to make the fluent paid true would be written as C(C, M, paid). A conditional social commitment is an undertaking by debtor A to creditor B that should condition q become true, A will then commit to bringing about condition p. This is denoted by CC(A, B, q, p), and, where the identity of the entities involved is unimportant (or obvious), is abbreviated to CC(q → p), where the arrow is a reminder of the causal link between q becoming true and the creation of a commitment to make p true. For example, a commitment to make the fluent paid true once goods have been received would be written CC(goods → paid). The semantics of commitments (both base-level and conditional) is defined with rules that specify how commitments change over time. For example, the commitment C(p) (or CC(q → p)) is discharged when p becomes true; and the commitment CC(q → p) is replaced by C(p) when q becomes true. In this paper we use the more symmetric semantics proposed by [15] and subsequently reformalised by [14]. In brief, these semantics deal with a number of more complex cases, such as where commitments are created when conditions already hold: if p holds when CC(p → q) is meant to be created, then C(q) is created instead of CC(p → q). An interaction is defined by specifying the entities involved, the possible contents of the interaction state (both fluents and commitments), and (most importantly) the actions that each entity can perform along with the preconditions and effects of each action, specified as add and delete lists. A commitment machine (CM) defines a range of possible interactions that each start in some state1, and perform actions until reaching a final state. A final state is one that has no base-level commitments. One way of visualising the interactions that are possible with a given commitment machine is to generate the finite state machine corresponding to the CM. For example, figure 1 gives the FSM2 corresponding to the NetBill [18] commitment machine: a simple CM where a customer (C) and merchant (M) attempt to trade using the following actions3:
1 Unlike standard interaction protocols, or finite state machines, there is no designated initial state for the interaction.
2 The finite state machine is software-generated: the nodes and connections were computed by an implementation of the axioms (available from http://www.winikoff.net/CM) and were then laid out by graphviz (http://www.graphviz.org/).
3 We use the notation A(X) : P ⇒ E to indicate that action A is performed by entity X, has precondition P (with : P omitted if empty) and effect E.
• sendRequest(C) ⇒ request
• sendQuote(M) ⇒ offer, where offer ≡ promiseGoods ∧ promiseReceipt, promiseGoods ≡ CC(M, C, accept, goods), and promiseReceipt ≡ CC(M, C, pay, receipt)
• sendAccept(C) ⇒ accept, where accept ≡ CC(C, M, goods, pay)
• sendGoods(M) ⇒ promiseReceipt ∧ goods, where promiseReceipt ≡ CC(M, C, pay, receipt)
• sendEPO(C) : goods ⇒ pay
• sendReceipt(M) : pay ⇒ receipt.
The commitment accept is the customer's promise to pay once goods have been sent, promiseGoods is the merchant's promise to send the goods once the customer accepts, and promiseReceipt is the merchant's promise to send a receipt once payment has been made. As seen in figure 1, commitment machines can support a range of interaction sequences.
Figure 1: Finite State Machine for NetBill (shaded = final states)
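To illustrate how these commitment life-cycle rules drive the interaction state, here is a small sketch (ours, in Python rather than an AOPL; one reasonable reading of the rules, using the NetBill entity names): fluents plus base-level and conditional commitments, where C(x, y, p) is discharged once p holds and CC(x, y, q, p) becomes C(x, y, p) once q holds, or is discharged immediately if p already holds.

```python
class CMState:
    def __init__(self):
        self.fluents = set()
        self.base = set()          # base-level commitments: (debtor, creditor, p)
        self.conditional = set()   # conditional commitments: (debtor, creditor, q, p)

    def resolve(self):
        # Apply the commitment rules until nothing changes.
        changed = True
        while changed:
            changed = False
            for c in list(self.base):
                if c[2] in self.fluents:           # C(x, y, p) discharged once p holds
                    self.base.discard(c)
                    changed = True
            for cc in list(self.conditional):
                if cc[2] in self.fluents:          # CC(x, y, q, p): q now holds
                    self.conditional.discard(cc)
                    if cc[3] not in self.fluents:  # becomes C(x, y, p) unless p holds
                        self.base.add((cc[0], cc[1], cc[3]))
                    changed = True

    def apply(self, fluents=(), base=(), conditional=()):
        self.fluents.update(fluents)
        self.base.update(base)
        self.conditional.update(conditional)
        self.resolve()

    def is_final(self):
        return not self.base       # final state: no base-level commitments

# One NetBill run: sendAccept(C); sendGoods(M); sendEPO(C); sendReceipt(M).
s = CMState()
s.apply(conditional={("C", "M", "goods", "pay")})                       # accept
s.apply(fluents={"goods"}, conditional={("M", "C", "pay", "receipt")})  # sendGoods
s.apply(fluents={"pay"})                                                # sendEPO
s.apply(fluents={"receipt"})                                            # sendReceipt
print(s.is_final())   # True: all commitments discharged
```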
A final state is one that has no base-level commitments. One way of visualising the interactions that are possible with a given commitment machine is to generate the finite state machine corresponding to the CM. For example, figure 1 gives the FSM corresponding to the NetBill [18] commitment machine: a simple CM where a customer (C) and merchant (M) attempt to trade using the actions below. (The FSM in figure 1 is software-generated: the nodes and connections were computed by an implementation of the axioms, available from http://www.winikoff.net/CM, and were then laid out by graphviz, http://www.graphviz.org/. We use the notation A(X) : P ⇒ E to indicate that action A is performed by entity X, has precondition P (with ": P" omitted if empty) and effect E.)
• sendRequest(C) ⇒ request
• sendQuote(M) ⇒ offer, where offer ≡ promiseGoods ∧ promiseReceipt, promiseGoods ≡ CC(M, C, accept, goods) and promiseReceipt ≡ CC(M, C, pay, receipt)
• sendAccept(C) ⇒ accept, where accept ≡ CC(C, M, goods, pay)
• sendGoods(M) ⇒ promiseReceipt ∧ goods, where promiseReceipt ≡ CC(M, C, pay, receipt)
• sendEPO(C) : goods ⇒ pay
• sendReceipt(M) : pay ⇒ receipt
The commitment accept is the customer's promise to pay once goods have been sent, promiseGoods is the merchant's promise to send the goods once the customer accepts, and promiseReceipt is the merchant's promise to send a receipt once payment has been made. As seen in figure 1, commitment machines can support a range of interaction sequences.
Figure 1: Finite State Machine for NetBill (shaded = final states)
2.2 An Abstract Agent Programming Language Agent programming languages in the BDI tradition (e.g. dMARS, JAM, PRS, UM-PRS, JACK, AgentSpeak(L), Jason, 3APL, CAN, Jadex) define agent behaviour in terms of event-triggered plans, where each plan specifies what it is triggered by, under what situations it can be considered to be applicable (defined using a so-called context condition), and a plan body: a sequence of steps that can include posting events which in turn trigger further plans. Given a collection of plans and an event e that has been posted, the agent first collects all plan types that are triggered by that event (the relevant plans), then evaluates the context conditions of these plans to obtain a set of applicable plan instances. One of these is chosen and is executed. We now briefly define the formal syntax and semantics of a Simple Abstract (BDI) Agent Programming Language (SAAPL). This language is intended to be an abstraction that is in the common subset of such languages as Jason [1, Chapter 1], 3APL [1, Chapter 2], and CAN [16]. Thus, it is intentionally incomplete in some areas; for instance it doesn't commit to a particular mechanism for dealing with plan failure, since different mechanisms are used by different AOPLs. An agent program (denoted by Π) consists of a collection of plan clauses of the form e : C ← P where e is an event, C is a context condition (a logical formula over the agent's beliefs), and P is the plan body. The plan body is built up from the following constructs. We have the empty step ε, which always succeeds and does nothing, operations to add (+b) and delete (−b) beliefs, sending a message m to agent N (↑N m), where ↓N m is used as shorthand for the event corresponding to receiving message m from agent N, and posting an event (e). These can be sequenced (P; P).
C ::= b | C ∧ C | C ∨ C | ¬C | ∃x.C
P ::= ε | +b | −b | e | ↑N m | P; P
Formal semantics for this language is given in figure 2.
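As an informal illustration of the event-handling cycle just described (relevant plans, applicable plans, selection), here is a small Python sketch. The encoding of plan clauses and the example plans are assumptions of this sketch, not part of SAAPL itself; unification and plan-body execution are deliberately left abstract.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class PlanClause:
    trigger: str                      # the event e that the plan handles
    context: Callable[[set], bool]    # context condition C over the belief set
    body: List[str]                   # plan body P, here just a list of step labels

def handle_event(event: str, beliefs: set, program: List[PlanClause],
                 select=lambda options: options[0]):
    """Collect the relevant plans, keep the applicable ones, and pick one
    (the role played by the selection function SO in the text)."""
    relevant = [p for p in program if p.trigger == event]
    applicable = [p for p in relevant if p.context(beliefs)]
    return select(applicable) if applicable else None

# Two NetBill-style plans for the interact event; only the first is applicable here.
program = [
    PlanClause("interact", lambda b: "goods" in b and "pay" not in b,
               ["+pay", "send sendEPO"]),
    PlanClause("interact", lambda b: "pay" in b and "receipt" not in b,
               ["+receipt", "send sendReceipt"]),
]
print(handle_event("interact", {"goods"}, program).body)   # ['+pay', 'send sendEPO']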
The semantics in figure 2 is based on the semantics for AgentSpeak given by [12], which in turn is based on the semantics for CAN [16]. The semantics is in the style of Plotkin's Structural Operational Semantics, and assumes that operations exist that check whether a condition follows from a belief set, that add a belief to a belief set, and that delete a belief from a belief set. In the case of beliefs being a set of ground atoms these operations are respectively consequence checking (B |= C), and set addition (B ∪ {b}) and deletion (B \ {b}). More sophisticated belief management methods may be used, but are not considered here. We define a basic configuration S = ⟨Q, N, B, P⟩ where Q is a (global) message queue (modelled as a sequence where messages are added at one end and removed from the other end; the + operator is used to denote sequence concatenation), N is the name of the agent, B is the beliefs of the agent and P is the plan body being executed (i.e. the intention). We also define an agent configuration, where instead of a single plan body P there is a set of plan instances, Γ. Finally, a complete MAS is a pair ⟨Q, As⟩ of a global message queue Q and a set of agent configurations (without the queue, Q). The global message queue is a sequence of triplets of the form sender:recipient:message. A transition S0 −→ S1 specifies that executing S0 a single step yields S1. We annotate the arrow with an indication of whether the configuration in question is basic, an agent configuration, or a MAS configuration. The transition relation is defined using rules that are either unconditional, of the form S −→ S′, or conditional, with a premise and a conclusion (written in figure 2 as premise ⟹ conclusion). Note that there is non-determinism in SAAPL, e.g. the choice of plan to execute from a set of applicable plans. This is resolved by using selection functions: SO selects one of the applicable plan instances to handle a given event, SI selects which of the plan instances that can be executed should be executed next, and SA selects which agent should execute (a step) next.
Figure 2: Operational Semantics for SAAPL
⟨Q, N, B, +b⟩ −→Basic ⟨Q, N, B ∪ {b}, ε⟩
⟨Q, N, B, −b⟩ −→Basic ⟨Q, N, B \ {b}, ε⟩
Δ = {Piθ | (ti : ci ← Pi) ∈ Π ∧ tiθ = e ∧ B |= ciθ}  ⟹  ⟨Q, N, B, e⟩ −→Basic ⟨Q, N, B, SO(Δ)⟩
⟨Q, N, B, P1⟩ −→Basic ⟨Q′, N, B′, P′⟩  ⟹  ⟨Q, N, B, P1; P2⟩ −→Basic ⟨Q′, N, B′, P′; P2⟩
⟨Q, N, B, ε; P⟩ −→Basic ⟨Q, N, B, P⟩
⟨Q, N, B, ↑NB m⟩ −→Basic ⟨Q + N:NB:m, N, B, ε⟩
Q = NA:N:m + Q′  ⟹  ⟨Q, N, B, Γ⟩ −→Agent ⟨Q′, N, B, Γ ∪ {↓NA m}⟩
P = SI(Γ), ⟨Q, N, B, P⟩ −→Basic ⟨Q′, N, B′, P′⟩  ⟹  ⟨Q, N, B, Γ⟩ −→Agent ⟨Q′, N, B′, (Γ \ {P}) ∪ {P′}⟩
P = SI(Γ), P = ε  ⟹  ⟨Q, N, B, Γ⟩ −→Agent ⟨Q, N, B, Γ \ {P}⟩
⟨N, B, Γ⟩ = SA(As), ⟨Q, N, B, Γ⟩ −→Agent ⟨Q′, N, B′, Γ′⟩  ⟹  ⟨Q, As⟩ −→MAS ⟨Q′, (As ∪ {⟨N, B′, Γ′⟩}) \ {⟨N, B, Γ⟩}⟩
3. IMPLEMENTING COMMITMENT-BASED INTERACTIONS In this section we present a mapping from a commitment machine to a collection of SAAPL programs (one for each role). We begin by considering the simple case of two interacting agents, and assume that the agents take turns to act. In section 4 we relax these assumptions. Each action A(X) : P ⇒ E is mapped to a number of plans: there is a plan (for agent X) with context condition P that performs the action (i.e. applies the effects E to the agent's beliefs) and sends a message to the other agent, and a plan (for the other agent) that updates its state when a message is received from X. For example, given the action sendAccept(C) ⇒ accept we have the following plans, where each plan is preceded by "M:" or "C:" to indicate which agent that plan belongs to. Note that where the identity of the sender (respectively recipient) is obvious, i.e. the other agent, we abbreviate ↑N m to ↑m (resp. ↓N m to ↓m). Turn taking is captured through the event ı (short for "interact"): the agent that is active has an ı event that is being handled.
Handling the event involves sending a message to the other agent, and then doing nothing until a response is received.
C: ı : true ← +accept; ↑sendAccept.
M: ↓sendAccept : true ← +accept; ı.
If the action has a non-trivial precondition then there are two plans in the recipient: one to perform the action (if possible), and another to report an error if the action's precondition doesn't hold (we return to this in section 4). For example, the action sendReceipt(M) : pay ⇒ receipt generates the following plans:
M: ı : pay ← +receipt; ↑sendReceipt.
C: ↓sendReceipt : pay ← +receipt; ı.
C: ↓sendReceipt : ¬pay ← ... report error ...
In addition to these plans, we also need plans to start and finish the interaction. An interaction can be completed whenever there are no base-level commitments, so both agents have the following plans:
ı : ¬∃p.C(p) ← ↑done.
↓done : ¬∃p.C(p) ← ε.
↓done : ∃p.C(p) ← ... report error ...
An interaction is started by setting up an agent's initial beliefs, and then having it begin to interact. Exactly how to do this depends on the agent platform: e.g. the agent platform in question may offer a simple way to load beliefs from a file. A generic approach that is a little cumbersome, but is portable, is to send each of the agents involved in the interaction a sequence of init messages, each containing a belief to be added; and then send one of the agents a start message which begins the interaction. Both agents thus have the following two plans:
↓init(B) : true ← +B.
↓start : true ← ı.
Figure 3 gives the SAAPL programs for both merchant and customer that implement the NetBill protocol. For conciseness the error reporting plans are omitted. We now turn to refining the context conditions. There are three refinements that we consider. Firstly, we need to prevent performing actions that have no effect on the interaction state. Secondly, an agent may want to specify that certain actions that it is able to perform should not be performed unless additional conditions hold. For example, the customer may not want to agree to the merchant's offer unless the goods have a certain price or property. Thirdly, the context conditions of the plans that terminate the interaction need to be refined in order to avoid terminating the interaction prematurely. For each plan of the form ı : P ← +E; ↑m we replace the context condition P with the enhanced condition P ∧ P′ ∧ ¬E, where P′ is any additional conditions that the agent wishes to impose, and ¬E is the negation of the effects of the action. For example, the customer's payment plan becomes (assuming no additional conditions, i.e. no P′): ı : goods ∧ ¬pay ← +pay; ↑sendEPO.
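Since the mapping from an action specification to plans is mechanical (a point section 5 returns to when noting that the translation could be automated), it can be sketched as a small generator. The following Python fragment is illustrative only: the function and the textual plan format are assumptions of this sketch, and it covers the basic scheme plus the ¬E refinement just described, with no additional conditions P′.

def plans_for_action(name, performer, other, precond, effects):
    """Generate SAAPL-style plan text for a CM action A(performer): precond => effects.
    The performer handles the interact event ı, applies the effects and sends a
    message; the recipient applies the effects on receipt, or reports an error
    if the action's precondition does not hold."""
    add = "; ".join("+" + e for e in effects)
    not_effects = " ∧ ".join("¬" + e for e in effects)
    pre = precond if precond else "true"
    performer_ctx = f"{pre} ∧ {not_effects}"        # P ∧ ¬E (no additional P')
    plans = [
        f"{performer}: ı : {performer_ctx} ← {add}; ↑{name}.",
        f"{other}: ↓{name} : {pre} ← {add}; ı.",
    ]
    if precond:                                      # non-trivial precondition
        plans.append(f"{other}: ↓{name} : ¬{precond} ← ... report error ...")
    return plans

# Example: sendEPO(C) : goods => pay
for p in plans_for_action("sendEPO", "C", "M", "goods", ["pay"]):
    print(p)
# C: ı : goods ∧ ¬pay ← +pay; ↑sendEPO.
# M: ↓sendEPO : goods ← +pay; ı.
# M: ↓sendEPO : ¬goods ← ... report error ...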
For each plan of the form ↓m : P ← +E; ı we could add ¬E to the precondition, but this is redundant, since it is already checked by the performer of the action, and if the action has no effect then the sender won't perform it and send the message (see also the discussion in section 4).
Figure 3: SAAPL Implementation of NetBill
Customer's plans:
ı : true ← +request; ↑sendRequest.
ı : true ← +accept; ↑sendAccept.
ı : goods ← +pay; ↑sendEPO.
↓sendQuote : true ← +promiseGoods; +promiseReceipt; ı.
↓sendGoods : true ← +promiseReceipt; +goods; ı.
↓sendReceipt : pay ← +receipt; ı.
Merchant's plans:
ı : true ← +promiseGoods; +promiseReceipt; ↑sendQuote.
ı : true ← +promiseReceipt; +goods; ↑sendGoods.
ı : pay ← +receipt; ↑sendReceipt.
↓sendRequest : true ← +request; ı.
↓sendAccept : true ← +accept; ı.
↓sendEPO : goods ← +pay; ı.
Shared plans (i.e. plans of both agents):
ı : ¬∃p.C(p) ← ↑done.
↓done : ¬∃p.C(p) ← ε.
↓init(B) : true ← +B.
↓start : true ← ı.
Where accept ≡ CC(goods → pay), promiseGoods ≡ CC(accept → goods), promiseReceipt ≡ CC(pay → receipt) and offer ≡ promiseGoods ∧ promiseReceipt.
When specifying additional conditions (P′), some care needs to be taken to avoid situations where progress cannot be made because the only action(s) possible are prevented by additional conditions. One way of indicating preference between actions (in many agent platforms) is to reorder the agent's plans. This is clearly safe, since actions are not prevented, just considered in a different order. The third refinement of context conditions concerns the plans that terminate the interaction. In the Commitment Machine framework any state that has no base-level commitment is final, in that the interaction may end there (or it may continue). However, only some of these final states are desirable final states. Which final states are considered to be desirable depends on the domain and the desired interaction outcome. In the NetBill example, the desirable final state is one where the goods have been sent and paid for, and a receipt issued (i.e. goods ∧ pay ∧ receipt). In order to prevent an agent from terminating the interaction too early we add this as a precondition to the termination plan: ı : goods ∧ pay ∧ receipt ∧ ¬∃p.C(p) ← ↑done. Figure 4 shows the plans that are changed from figure 3.
Figure 4: SAAPL Implementation of NetBill with refined context conditions (changed plans only)
Customer's plans:
ı : ¬request ← +request; ↑sendRequest.
ı : ¬accept ← +accept; ↑sendAccept.
ı : goods ∧ ¬pay ← +pay; ↑sendEPO.
Merchant's plans:
ı : ¬offer ← +promiseGoods; +promiseReceipt; ↑sendQuote.
ı : ¬(promiseReceipt ∧ goods) ← +promiseReceipt; +goods; ↑sendGoods.
ı : pay ∧ ¬receipt ← +receipt; ↑sendReceipt.
Where accept ≡ CC(goods → pay), promiseGoods ≡ CC(accept → goods), promiseReceipt ≡ CC(pay → receipt) and offer ≡ promiseGoods ∧ promiseReceipt.
In order to support the realisation of CMs, we need to change SAAPL in a number of ways. These changes, which are discussed below, can be applied to existing BDI languages to make them "commitment machine supportive". We present the three changes, explain what they involve, and for each change explain how the change was implemented using the 3APL agent oriented programming language. The three changes are: 1. extending the beliefs of the agent so that they can contain commitments; 2. changing the definition of |= to encompass implied commitments; and
3. whenever a belief is added, updating existing commitments, according to the rules of commitment dynamics.
Extending the notion of beliefs to encompass commitments in fact requires no change in agent platforms that are prolog-like and support terms as beliefs (e.g. Jason, 3APL, CAN). However, other agent platforms do require an extension. For example, JACK, which is an extension of Java, would require changes to support commitments that can be nested. In the case of 3APL no change is needed to support this. Whenever a context condition contains commitments, determining whether the context condition is implied by the agent's beliefs (B |= C) needs to take into account the notion of implied commitments [15]. In brief, a commitment can be considered to follow from a belief set B if the commitment is in the belief set (C ∈ B), but also under other conditions. For example, a commitment to pay, C(pay), can be considered to be implied by a belief set containing pay, because the commitment may have held and been discharged when pay was made true. Similar rules apply for conditional commitments. These rules, which were introduced in [15], were subsequently re-formalised in a simpler form by [14], resulting in the four inference rules in the bottom part of figure 5. The change that needs to be made to SAAPL to support commitment machine implementations is to extend the definition of |= to include these four rules. For 3APL this was realised by having each agent include the following Prolog clauses:
holds(X) :- clause(X,true).
holds(c(P)) :- holds(P).
holds(c(P)) :- clause(cc(Q,P),true), holds(Q).
holds(cc(_,Q)) :- holds(Q).
holds(cc(_,Q)) :- holds(c(Q)).
The first clause simply says that anything holds if it is in the agent's beliefs (clause(X,true) is true if X is a fact). The remaining four clauses correspond respectively to the inference rules C1, C2, CC1 and CC2. To use these rules we then modify context conditions in our program so that instead of writing, for example, cc(m,c,pay,receipt) we write holds(cc(m,c,pay,receipt)).
Figure 5: New Operational Semantics
B′ = norm(B ∪ {b})  ⟹  ⟨Q, N, B, +b⟩ −→ ⟨Q, N, B′, ε⟩
function norm(B)
  B′ ← B
  for each b ∈ B do
    if b = C(p) ∧ B |= p then B′ ← B′ \ {b}
    elseif b = CC(p → q) then
      if B |= q then B′ ← B′ \ {b}
      elseif B |= p then B′ ← (B′ \ {b}) ∪ {C(q)}
      elseif B |= C(q) then B′ ← B′ \ {b}
      endif
    endif
  endfor
  return B′
end function
Inference rules (bottom part of the figure):
C1: B |= P ⟹ B |= C(P)
C2: CC(Q → P) ∈ B, B |= Q ⟹ B |= C(P)
CC1: B |= Q ⟹ B |= CC(P → Q)
CC2: B |= C(Q) ⟹ B |= CC(P → Q)
The final change is to update commitments when a belief is added. Formally, this is done by modifying the semantic rule for belief addition so that it applies an algorithm to update commitments. The modified rule and algorithm (which mirrors the definition of norm in [14]) can be found in the top part of figure 5. For 3APL this final change was achieved by manually inserting update() after updating beliefs, and defining the following rules for update():
update() <- c(P) AND holds(P) | {Deletec(P) ; update()},
update() <- cc(P,Q) AND holds(Q) | {Deletecc(P,Q) ; update()},
update() <- cc(P,Q) AND holds(P) | {Deletecc(P,Q) ; Addc(Q) ; update()},
update() <- cc(P,Q) AND holds(c(Q)) | {Deletecc(P,Q) ; update()},
update() <- true | Skip
where Deletec and Deletecc delete respectively a base-level and conditional commitment, and Addc adds a base-level commitment.
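For readers who want to experiment outside 3APL, the implied-commitment entailment (the second change) can also be rendered in a few lines of Python. This is a sketch under assumed conventions that do not come from the paper: fluents are strings, C(p) is encoded as the tuple ("C", p), CC(q → p) as ("CC", q, p), and commitment contents are assumed to be fluents (as in NetBill).

def holds(b, beliefs):
    """Entailment with implied commitments, mirroring the holds/1 clauses above
    (rules C1, C2, CC1 and CC2)."""
    if b in beliefs:                                      # plain membership
        return True
    if isinstance(b, tuple) and b[0] == "C":              # C(p)
        p = b[1]
        if holds(p, beliefs):                             # C1
            return True
        return any(x[0] == "CC" and x[2] == p and holds(x[1], beliefs)   # C2
                   for x in beliefs if isinstance(x, tuple))
    if isinstance(b, tuple) and b[0] == "CC":             # CC(q -> p)
        p = b[2]
        return holds(p, beliefs) or holds(("C", p), beliefs)             # CC1, CC2
    return False

beliefs = {"pay", ("CC", "pay", "receipt")}
print(holds(("C", "pay"), beliefs))      # True, by C1 (pay holds)
print(holds(("C", "receipt"), beliefs))  # True, by C2 (CC(pay -> receipt) and pay holds)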
One aspect that doesn't require a change is linking commitments and actions. This is because commitments don't trigger actions directly: they may trigger actions indirectly, but in general their effect is to prevent completion of an interaction while there are outstanding (base level) commitments.
Figure 6 shows the message sequences from a number of runs of a 3APL implementation of the NetBill commitment machine (source code is available from http://www.winikoff.net/CM). In order to illustrate the different possible interactions the code was modified so that each agent selected randomly from the actions that it could perform, and a number of runs were made with the customer as the initiator, and then with the merchant as the initiator. There are other possible sequences of messages, not shown, including the obvious one: request, quote, accept, goods, payment, receipt, and then done.
Figure 6: Sample runs from 3APL implementation (alternating turns)
One minor difference between the 3APL implementation and SAAPL concerns the semantics of messages. In the semantics of SAAPL (and of most AOPLs), receiving a message is treated as an event. However, in 3APL, receiving a message is modelled as the addition to the agent's beliefs of a fact indicating that the message was received [6]. Thus in the 3APL implementation we have PG rules that are triggered by these beliefs, rather than by any event. One issue with this approach is that the belief remains there, so we need to ensure that the belief in question is either deleted once handled, or that we modify preconditions of plans to avoid handling it more than once. In our implementation we delete these "received" beliefs when they are handled, to avoid duplicate handling of messages.
4. BEYOND TWO PARTICIPANTS Generalising to more than two interaction participants requires revisiting how turn management is done, since it is no longer possible to assume alternating turns [7]. In fact, perhaps surprisingly, even in the two participant setting, an alternating turn setup is an unreasonable assumption! For example, consider the path (in figure 1) from state 1 to 15 (sendGoods) then to state 12 (sendAccept). The result, in an alternating turn setup, is a dead-end: there is only a single possible action in state 12, namely sendEPO, but this action is done by the customer, and it is the merchant's turn to act! Figure 7 shows the FSM for NetBill with alternating initiative. A solution to this problem that works in this example, but doesn't generalise (consider, for example, the actions A1(C) ⇒ p, A2(C) ⇒ q, and A3(M) : p ∧ q ⇒ r), is to weaken the alternating turn taking regime by allowing an agent to act twice in a row if its second action is driven by a commitment. A general solution is to track whose turn it is to act. This can be done by working out which agents have actions that are able to be performed in the current state. If there is only a single active agent, then it is clearly that agent's turn to act. However, if more than one agent is active then somehow the agents need to work out who should act next. Working this out by negotiation is not a particularly good solution for two reasons. Firstly, this negotiation has to be done at every step of the interaction where more than one agent is active (in the NetBill, this applies to seven out of sixteen states), so it is highly desirable to have a light-weight mechanism for doing this. Secondly, it is not clear how the negotiation can avoid an infinite regress situation ("you go first", "no, you go first", ...) without imposing some arbitrary rule.
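The "work out which agents have actions that are able to be performed" step can be made concrete with a small sketch. The following Python fragment is illustrative only: the encoding of actions and the helper names are assumptions, and it checks only the original fluent preconditions P, not commitments or any additional agent-imposed conditions P′.

def enabled_actions(actions, fluents):
    """Actions whose original precondition P holds in the current state.
    `actions` maps an action name to (performer, precondition), where the
    precondition is a list of fluents that must all hold (None = no precondition)."""
    return {name: performer
            for name, (performer, pre) in actions.items()
            if all(p in fluents for p in (pre or []))}

def active_agents(actions, fluents):
    """The agents with at least one enabled action; if this is a singleton,
    it is unambiguously that agent's turn to act."""
    return set(enabled_actions(actions, fluents).values())

# NetBill actions, keeping only their fluent preconditions
netbill = {
    "sendRequest": ("C", None),
    "sendQuote":   ("M", None),
    "sendAccept":  ("C", None),
    "sendGoods":   ("M", None),
    "sendEPO":     ("C", ["goods"]),
    "sendReceipt": ("M", ["pay"]),
}
print(active_agents(netbill, {"goods"}))   # {'C', 'M'}: more than one active agent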
It is also possible to resolve who should act by imposing an arbitrary rule, for example, that the customer always acts in preference to the merchant, or that each agent has a numerical priority (perhaps determined by the order in which they joined the interaction?) that determines who acts. An alternative solution, which exploits the symmetrical properties of commitment machines, is to not try and manage turn taking.
Figure 7: NetBill with alternating initiative
Instead of tracking and controlling whose turn it is, we simply allow the agents to act freely, and rely on the properties of the interaction space to ensure that "things work out", a notion that we shall make precise, and prove, in the remainder of this section. The issue with having multiple agents be active simultaneously is that instead of all agents agreeing on the current interaction state, agents can be in different states. This can be visualised as each agent having its own copy of the FSM that it navigates through, where it is possible for agents to follow different paths through the FSM. The two specific issues that need to be addressed are: 1. Can agents end up in different final states? 2. Can an agent be in a position where an error occurs because it cannot perform an action corresponding to a received message? We will show that, because actions commute under certain assumptions, agents cannot end up in different final states, and furthermore, that errors cannot occur (again, under certain assumptions). By "actions commute" we mean that the state resulting from performing a sequence of actions A1 ... An is the same, regardless of the order in which the actions are performed. This means that even if agents take different paths through the FSM, they still end up in the same resulting state, because once all messages have been processed, all agents will have performed the same set of actions. This addresses the issue of ending up in different final states. We return to the possibility of errors occurring shortly. Definition 1 (Monotonicity) An action is monotonic if it does not delete any fluents or commitments (that is, directly delete them; it is fine to discharge commitments by adding fluents or commitments). A Commitment Machine is monotonic if all of its actions are monotonic. (Adapted from [14, Definition 6]) Theorem 1 If A1 and A2 are monotonic actions, then performing A1 followed by A2 has the same effect on the agent's beliefs as performing A2 followed by A1. (Adapted from [14, Theorem 2]). This assumes that both actions can be performed. However, it is possible for the performance of A1 to disable A2 from being done. For example, if A1 has the effect +p, and A2 has precondition ¬p, then although both actions may be enabled in the initial state, they cannot be performed in either order. We can prevent this by ensuring that actions' preconditions do not contain negation (or implication), since a monotonic action cannot result in a precondition that is negation-free becoming false. Note that this restriction only applies to the original action precondition, P, not to any additional preconditions imposed by the agent (P′). This is because only P is used to determine whether another agent is able to perform the action. Thus monotonic CMs with preconditions that do not contain negations have actions that commute.
However, in fact, the restriction to monotonic CMs is unnecessarily strong: all that is needed is that whenever there is a choice of agent that can act, then the possible actions are monotonic. If there is only a single agent that can act, then no restriction is needed on the actions: they may or may not be monotonic. Definition 2 (Locally Monotonic) A commitment machine is locally monotonic if for any state S either (a) only a single agent has actions that can be performed; or (b) all actions that can be performed in S are monotonic. Theorem 2 In a locally monotonic CM, once all messages have been processed, all agents will be in the same state. Furthermore, no errors can occur. Proof: Once all messages have been processed we have that all agents will have performed the same action set, perhaps in a different order. The essence of the proof is to argue that as long as agents haven't yet converged to the same state, all actions must be monotonic, and hence that these actions commute, and cannot disable any other actions. Consider the first point of divergence, where an agent performs action A and at the same time another agent (call it XB) performs action B. Clearly, this state has actions of more than one agent enabled, so, since the CM is locally monotonic, the relevant actions must be monotonic. Therefore, after doing A, the action B must still be enabled, and so the message to do B can be processed by updating the recipient agent's beliefs with the effects of B. Furthermore, because monotonic actions commute, the result of doing A before B is the same as doing B before A: doing A takes S to SA and then B takes SA to SAB, while doing B first takes S to SB and then A takes SB to the same state SAB. However, what happens if the next action after A is not B, but C? Because B is enabled, and C is not done by agent XB (see below), we must have that C is also monotonic, and hence (a) the result of doing A and B and C is the same regardless of the order in which the three actions are done; and (b) C doesn't disable B, so B can still be done after C: the orders B, A, C (through S, SB, SAB), A, B, C (through S, SA, SAB) and A, C, B (through S, SA, SAC) all reach the same state SABC. The reason why C cannot be done by XB is that messages are processed in the order of their arrival (we also assume that the communication medium does not deliver messages out of order, which is the case for, e.g., TCP). From the perspective of XB the action B was done before C, and therefore from any other agent's perspective the message saying that B was done must be received (and processed) before a message saying that C is done. This argument can be extended to show that once agents start taking different paths through the FSM all actions taken until the point where they converge on a single state must be monotonic, and hence it is always possible to converge (because actions aren't disabled), so the interaction is error free; and the resulting state once convergence occurs is the same (because monotonic actions commute). This theorem gives a strong theoretical guarantee that not doing turn management will not lead to disaster. This is analogous to proving that disabling all traffic lights would not lead to any accidents, and is only possible because the refined CM axioms are symmetrical. Based on this theorem the generic transformation from CM to code should allow agents to act freely, which is achieved by simply changing ı : P ∧ P′ ∧ ¬E ← +E; ↑A to ı : P ∧ P′ ∧ ¬E ← +E; ↑A; ı. For example, instead of ı : ¬request ← +request; ↑sendRequest we have ı : ¬request ← +request; ↑sendRequest; ı. One consequence of the theorem is that it is not necessary to ensure that agents process messages before continuing to interact.
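The commutation property that the proof relies on is easy to check concretely. The toy Python sketch below is illustrative only, with effects reduced to flat sets of added fluents: it applies two monotonic NetBill-style effect sets in both orders and confirms that the same belief set results.

def apply_monotonic(beliefs, effects):
    """A monotonic action only adds fluents/commitments (its delete list is empty)."""
    return beliefs | set(effects)

send_goods  = ["goods", "promiseReceipt"]   # effects of sendGoods
send_accept = ["accept"]                    # effects of sendAccept
start = {"request", "offer"}

one_order   = apply_monotonic(apply_monotonic(start, send_goods), send_accept)
other_order = apply_monotonic(apply_monotonic(start, send_accept), send_goods)
print(one_order == other_order)             # True: both orders give the same state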
However, in order to avoid unnecessary parallelism, which can make debugging harder, it may still be desirable to process messages before performing actions. Figure 8 shows a number of runs from the 3APL implementation that has been modified to allow free, non-alternating, interaction.
Figure 8: Sample runs from 3APL implementation (non-alternating turns)
5. DISCUSSION We have presented a scheme for mapping commitment machines to BDI platforms (using SAAPL as an exemplar), identified three changes that needed to be made to SAAPL to support CM-based interaction, and shown that turn management can be avoided in CM-based interaction, provided the CM is locally monotonic. The three changes to SAAPL, and the translation scheme from commitment machine to BDI plans, are both applicable to any BDI language. As we have mentioned in section 1, there has been some work on designing flexible and robust agent interaction, but virtually no work on implementing flexible and robust interactions. We have already discussed STAPLE [9, 10]. Another piece of work that is relevant is the work by Cheong and Winikoff on their Hermes methodology [2]. Although the main focus of their work is a pragmatic design methodology, they also provide guidelines for implementing Hermes designs using BDI platforms (specifically Jadex) [3]. However, since Hermes does not yield a design that is formal, it is only possible to generate skeleton code that then needs to be completed. Also, they do not address the turn taking issue: how to decide which agent acts when more than one agent is able to act. The work of Kremer and Flores (e.g. [8]) also uses commitments, and deals with implementation. However, they provide infrastructure support (CASA) rather than a programming language, and do not appear to provide assistance to a programmer seeking to implement agents. Although we have implemented the NetBill interaction using 3APL, the changes to the semantics were done by modifying our NetBill 3APL program, rather than by modifying the 3APL implementation itself. Clearly, it would be desirable to modify the semantics of 3APL (or of another language) directly, by changing the implementation. Also, although we have not done so, it should be clear that the translation from a CM to its implementation could easily be automated. Another area for further work is to look at how the assumptions required to ensure that actions commute can be relaxed. Finally, there is a need to perform empirical evaluation. There has already been some work on comparing Hermes with a conventional message-centric approach to designing interaction, and this has shown that using Hermes results in designs that are significantly more flexible and robust [4]. It would be interesting to compare commitment machines with Hermes, but, since commitment machines are a framework, not a design methodology, we need to compare Hermes with a methodology for designing interactions that results in commitment machines [13, 17]. 6. REFERENCES [1] R. H. Bordini, M. Dastani, J. Dix, and A. E. F. Seghrouchni, editors. Multi-Agent Programming: Languages, Platforms and Applications. Springer, 2005. [2] C. Cheong and M. Winikoff. Hermes: Designing goal-oriented agent interactions.
In Proceedings of the 6th International Workshop on Agent-Oriented Software Engineering (AOSE-2005), July 2005. [3] C. Cheong and M. Winikoff. Hermes: Implementing goal-oriented agent interactions. In Proceedings of the Third international Workshop on Programming Multi-Agent Systems (ProMAS), July 2005. [4] C. Cheong and M. Winikoff. Hermes versus prometheus: A comparative evaluation of two agent interaction design approaches. Submitted for publication, 2007. [5] P. R. Cohen and H. J. Levesque. Teamwork. Nous, 25(4):487-512, 1991. [6] M. Dastani, J. van der Ham, and F. Dignum. Communication for goal directed agents. In Proceedings of the Agent Communication Languages and Conversation Policies Workshop, 2002. [7] F. P. Dignum and G. A. Vreeswijk. Towards a testbed for multi-party dialogues. In Advances in Agent Communication, pages 212-230. Springer, LNCS 2922, 2004. [8] R. Kremer and R. Flores. Using a performative subsumption lattice to support commitment-based conversations. In F. Dignum, V. Dignum, S. Koenig, S. Kraus, M. P. Singh, and M. Wooldridge, editors, Autonomous Agents and Multi-Agent Systems (AAMAS), pages 114-121. ACM Press, 2005. [9] S. Kumar and P. R. Cohen. STAPLE: An agent programming language based on the joint intention theory. In Proceedings of the Third International Joint Conference on Autonomous Agents & Multi-Agent Systems (AAMAS 2004), pages 1390-1391. ACM Press, July 2004. [10] S. Kumar, M. J. Huber, and P. R. Cohen. Representing and executing protocols as joint actions. In Proceedings of the First International Joint Conference on Autonomous Agents and Multi-Agent Systems, pages 543 - 550, Bologna, Italy, 15 - 19 July 2002. ACM Press. [11] M. Tambe and W. Zhang. Towards flexible teamwork in persistent teams: Extended report. Journal of Autonomous Agents and Multi-agent Systems, 2000. Special issue on Best of ICMAS 98. [12] M. Winikoff. An AgentSpeak meta-interpreter and its applications. In Third International Workshop on Programming Multi-Agent Systems (ProMAS), pages 123-138. Springer, LNCS 3862 (post-proceedings, 2006), 2005. [13] M. Winikoff. Designing commitment-based agent interactions. In Proceedings of the 2006 IEEE/WIC/ACM International Conference on Intelligent Agent Technology (IAT-06), 2006. [14] M. Winikoff. Implementing flexible and robust agent interactions using distributed commitment machines. Multiagent and Grid Systems, 2(4), 2006. [15] M. Winikoff, W. Liu, and J. Harland. Enhancing commitment machines. In J. Leite, A. Omicini, P. Torroni, and P. Yolum, editors, Declarative Agent Languages and Technologies II, number 3476 in Lecture Notes in Artificial Intelligence (LNAI), pages 198-220. Springer, 2004. [16] M. Winikoff, L. Padgham, J. Harland, and J. Thangarajah. Declarative & procedural goals in intelligent agent systems. In Proceedings of the Eighth International Conference on Principles of Knowledge Representation and Reasoning (KR2002), Toulouse, France, 2002. [17] P. Yolum. Towards design tools for protocol development. In F. Dignum, V. Dignum, S. Koenig, S. Kraus, M. P. Singh, and M. Wooldridge, editors, Autonomous Agents and Multi-Agent Systems (AAMAS), pages 99-105. ACM Press, 2005. [18] P. Yolum and M. P. Singh. Flexible protocol specification and execution: Applying event calculus planning using commitments. In Proceedings of the 1st Joint Conference on Autonomous Agents and MultiAgent Systems (AAMAS), pages 527-534, 2002. [19] P. Yolum and M. P. Singh. 
Reasoning about commitments in the event calculus: An approach for specifying and executing protocols. Annals of Mathematics and Artificial Intelligence (AMAI), 2004.
Implementing Commitment-Based Interactions * ABSTRACT Although agent interaction plays a vital role in MAS, and messagecentric approaches to agent interaction have their drawbacks, present agent-oriented programming languages do not provide support for implementing agent interaction that is flexible and robust. Instead, messages are provided as a primitive building block. In this paper we consider one approach for modelling agent interactions: the commitment machines framework. This framework supports modelling interactions at a higher level (using social commitments), resulting in more flexible interactions. We investigate how commitmentbased interactions can be implemented in conventional agent-oriented programming languages. The contributions of this paper are: a mapping from a commitment machine to a collection of BDI-style plans; extensions to the semantics of BDI programming languages; and an examination of two issues that arise when distributing commitment machines (turn management and race conditions) and solutions to these problems. 1. INTRODUCTION Agents are social, and agent interaction plays a vital role in multiagent systems. Consequently, design and implementation of agent interaction is an important research topic. The standard approach for designing agent interactions is messagecentric: interactions are defined by interaction protocols that give the permissible sequences of messages, specified using notations such as finite state machines, Petri nets, or Agent UML. It has been argued that this message-centric approach to interaction design is not a good match for intelligent agents. Intelligent agents should exhibit the ability to persist in achieving their goals in the face of failure (robustness) by trying different approaches (flexibility). On the other hand, when following an interaction protocol, an agent has limited flexibility and robustness: the ability to persistently try alternative means to achieving the interaction's aim is limited to those options that the protocol's designer provided, and in practice, message-centric design processes do not tend to lead to protocols that are flexible or robust. Recognising these limitations of the traditional approach to designing agent interactions, a number of approaches have been proposed in recent years that move away from message-centric interaction protocols, and instead consider designing agent interactions using higher-level concepts such as social commitments [8, 10, 18] or interaction goals [2]. There has also been work on richer forms of interaction in specific settings, such as teams of cooperative agents [5, 11]. However, although there has been work on designing flexible and robust agent interactions, there has been virtually no work on providing programming language support for implementing such interactions. Current Agent Oriented Programming Languages (AOPLs) do not provide support for implementing flexible and robust agent interactions using higher-level concepts than messages. Indeed, modern AOPLs [1], with virtually no exceptions, provide only simple message sending as the basis for implementing agent interaction. This paper presents what, to the best of our knowledge, is the second AOPL to support high-level, flexible, and robust agent interaction implementation. The first such language, STAPLE, was proposed a few years ago [9], but is not described in detail, and is arguably impractical for use by non-specialists, due to its logical basis and heavy reliance on temporal and modal logic. 
This paper presents a scheme for extending BDI-like AOPLs to support direct implementation of agent interactions that are designed using Yolum & Singh's commitment machine (CM) framework [19]. In the remainder of this paper we briefly review commitment machines and present a simple abstraction of BDI AOPLs which lies in the common subset of languages such as Jason, 3APL, and CAN. We then present a scheme for translating commitment machines to this language, and indicate how the language needs to be extended to support this. We then extend our scheme to address a range of issues concerned with distribution, including turn tracking [7], and race conditions. 2. BACKGROUND 2.1 Commitment Machines The aim of the commitment machine framework is to allow for the definition of interactions that are more flexible than traditional message-centric approaches. A Commitment Machine (CM) [19] specifies an interaction between entities (e.g. agents, services, processes) in terms of actions that change the interaction state. This interact state consists of fluents (predicates that change value over time), but also social commitments, both base-level and conditional. A base-level social commitment is an undertaking by debtor A to creditor B to bring about condition p, denoted C (A, B, p). This is sometimes abbreviated to C (p), where it is not important to specify the identities of the entities in question. For example, a commitment by customer C to merchant M to make the fluent paid true would be written as C (C, M, paid). A conditional social commitment is an undertaking by debtor A to creditor B that should condition q become true, A will then commit to bringing about condition p. This is denoted by CC (A, B, q, p), and, where the identity of the entities involved is unimportant (or obvious), is abbreviated to CC (q p) where the arrow is a reminder of the causal link between q becoming true and the creation of a commitment to make p true. For example, a commitment to make the fluent paid true once goods have been received would be written CC (goods paid). The semantics of commitments (both base-level and conditional) is defined with rules that specify how commitments change over time. For example, the commitment C (p) (or CC (q p)) is discharged when p becomes true; and the commitment CC (q p) is replaced by C (p) when q becomes true. In this paper we use the more symmetric semantics proposed by [15] and subsequently reformalised by [14]. In brief, these semantics deal with a number of more complex cases, such as where commitments are created when conditions already hold: if p holds when CC (p q) is meant to be created, then C (q) is created instead of CC (p q). An interaction is defined by specifying the entities involved, the possible contents of the interaction state (both fluents and commitments), and (most importantly) the actions that each entity can perform along with the preconditions and effects of each action, specified as add and delete lists. A commitment machine (CM) defines a range of possible interactions that each start in some state1, and perform actions until reaching a final state. A final state is one that has no base-level commitments. One way of visualising the interactions that are possible with a given commitment machine is to generate the finite state machine corresponding to the CM. 
For example, figure 1 gives the FSM2 corresponding to the NetBill [18] commitment machine: a simple CM where a customer (C) and merchant (M) attempt to trade using the following actions3: 1Unlike standard interaction protocols, or finite state machines, there is no designated initial state for the interaction. 2The finite state machine is software-generated: the nodes and connections were computed by an implementation of the axioms (available from http://www.winikoff.net/CM) and were then laid out by graphviz (http://www.graphviz.org/). 3We use the notation A (X): P = * E to indicate that action A is performed by entity X, has precondition P (with ": P" omitted if empty) and effect E. • sendRequest (C) = * request • sendQuote (M) = * offer where offer - promiseGoods n promiseReceipt and promiseGoods - CC (M, C, accept, goods) and promiseReceipt - CC (M, C, pay, receipt) • sendAccept (C) = * accept where accept - CC (C, M, goods, pay) • sendGoods (M) = * promiseReceipt n goods where promiseReceipt - CC (M, C, pay, receipt) • sendEPO (C): goods = * pay • sendReceipt (M): pay = * receipt. The commitment accept is the customer's promise to pay once goods have been sent, promiseGoods is the merchant's promise to send the goods once the customer accepts, and promiseReceipt is the merchant's promise to send a receipt once payment has been made. As seen in figure 1, commitment machines can support a range of interaction sequences. 2.2 An Abstract Agent Programming Language Agent programming languages in the BDI tradition (e.g. dMARS, JAM, PRS, UM-PRS, JACK, AgentSpeak (L), Jason, 3APL, CAN, Jadex) define agent behaviour in terms of event-triggered plans, where each plan specifies what it is triggered by, under what situations it can be considered to be applicable (defined using a so-called context condition), and a plan body: a sequence of steps that can include posting events which in turn triggers further plans. Given a collection of plans and an event e that has been posted the agent first collects all plans types that are triggered by that event (the relevant plans), then evaluates the context conditions of these plans to obtain a set of applicable plan instances. One of these is chosen and is executed. We now briefly define the formal syntax and semantics of a Simple Abstract (BDI) Agent Programming Language (SAAPL). This language is intended to be an abstraction that is in the common subset of such languages as Jason [1, Chapter 1], 3APL [1, Chapter 2], and CAN [16]. Thus, it is intentionally incomplete in some areas, for instance it doesn't commit to a particular mechanism for dealing with plan failure, since different mechanisms are used by different AOPLs. An agent program (denoted by Π) consists of a collection of plan clauses of the form e: C <--P where e is an event, C is a context condition (a logical formula over the agent's beliefs), and P is the plan body. The plan body is built up from the following constructs. We have the empty step a which always succeeds and does nothing, operations to add (+ b) and delete (− b) beliefs, sending a message m to agent N (TN m), and posting an event4 (e). These can be sequenced (P; P). Formal semantics for this language is given in figure 2. This semantics is based on the semantics for AgentSpeak given by [12], which in turn is based on the semantics for CAN [16]. 
The semantics is in the style of Plotkin's Structural Operational Semantics, and assumes that operations exist that check whether a condition 4We use IN m as short hand for the event corresponding to receiving message m from agent N. 874 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) Figure 1: Finite State Machine for NetBill (shaded = final states) follows from a belief set, that add a belief to a belief set, and that delete a belief from a belief set. In the case of beliefs being a set of ground atoms these operations are respectively consequence checking (B | = C), and set addition (B ∪ {b}) and deletion (B \ {b}). More sophisticated belief management methods may be used, but are not considered here. We define a basic configuration S = ~ Q, N, B, P ~ where Q is a (global) message queue (modelled as a sequence5 where messages are added at one end and removed from the other end), N is the name of the agent, B is the beliefs of the agent and P is the plan body being executed (i.e. the intention). We also define an agent configuration, where instead of a single plan body P there is a set of plan instances, Γ. Finally, a complete MAS is a pair ~ Q, As ~ of a global message queue Q and a set of agent configurations (without the queue, Q). The global message queue is a sequence of triplets of the form sender: recipient: message. A transition S0 − → S1 specifies that executing S0 a single step yields S1. We annotate the arrow with an indication of whether the configuration in question is basic, an agent configuration, or a MAS configuration. The transition relation is defined using rules S' − → Sr of the form S − → S' or of the form S − → S' r; the latter are conditional with the top (numerator) being the premise and the bottom (denominator) being the conclusion. Note that there is non-determinism in SAAPL, e.g. the choice of plan to execute from a set of applicable plans. This is resolved by using selection functions: So selects one of the applicable plan instances to handle a given event, ST selects which of the plan instances that can be executed should be executed next, and S „ 4 selects which agent should execute (a step) next. 3. IMPLEMENTING COMMITMENT-BASED INTERACTIONS In this section we present a mapping from a commitment machine to a collection of SAAPL programs (one for each role). We begin by considering the simple case of two interacting agents, and 5The "+" operator is used to denote sequence concatenation. assume that the agents take turns to act. In section 4 we relax these assumptions. Each action A (X): P ⇒ E is mapped to a number of plans: there is a plan (for agent X) with context condition P that performs the action (i.e. applies the effects E to the agent's beliefs) and sends a message to the other agent, and a plan (for the other agent) that updates its state when a message is received from X. For example, given the action sendAccept (C) ⇒ accept we have the following plans, where each plan is preceded by "M:" or "C:" to indicate which agent that plan belongs to. Note that where the identify of the sender (respectively recipient) is obvious, i.e. the other agent, we abbreviate ↑ Nm to ↑ m (resp. ↓ N m to ↓ m). Turn taking is captured through the event ı (short for "interact"): the agent that is active has an ı event that is being handled. Handling the event involves sending a message to the other agent, and then doing nothing until a response is received. C: ı: true ← + accept; ↑ sendAccept. M: ↓ sendAccept: true ← + accept; ı. 
If the action has a non-trivial precondition then there are two plans in the recipient: one to perform the action (if possible), and another to report an error if the action's precondition doesn't hold (we return to this in section 4). For example, the action sendReceipt (M): pay ⇒ receipt generates the following plans: In addition to these plans, we also need plans to start and finish the interaction. An interaction can be completed whenever there are no base-level commitments, so both agents have the following plans: An interaction is started by setting up an agent's initial beliefs, and then having it begin to interact. Exactly how to do this depends on the agent platform: e.g. the agent platform in question may offer a simple way to load beliefs from a file. A generic approach that is a little cumbersome, but is portable, is to send each of the agents involved in the interaction a sequence of init messages, each The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 875 Figure 2: Operational Semantics for SAAPL containing a belief to be added; and then send one of the agents a start message which begins the interaction. Both agents thus have the following two plans: Figure 3 gives the SAAPL programs for both merchant and customer that implement the NetBill protocol. For conciseness the error reporting plans are omitted. We now turn to refining the context conditions. There are three refinements that we consider. Firstly, we need to prevent performing actions that have no effect on the interaction state. Secondly, an agent may want to specify that certain actions that it is able to perform should not be performed unless additional conditions hold. For example, the customer may not want to agree to the merchant's offer unless the goods have a certain price or property. Thirdly, the context conditions of the plans that terminate the interaction need to be refined in order to avoid terminating the interaction prematurely. For each plan of the form ı: P ← + E; ↑ m we replace the context condition P with the enhanced condition P ∧ P ~ ∧ ¬ E where P ~ is any additional conditions that the agent wishes to impose, and ¬ E is the negation of the effects of the action. For example, the customer's payment plan becomes (assuming no additional conditions, i.e. no P ~): ı: goods ∧ ¬ pay ← + pay; ↑ sendEPO. For each plan of the form ↓ m: P ← + E; ı we could add ¬ E to the precondition, but this is redundant, since it is already checked by the performer of the action, and if the action has no effect then Figure 3: SAAPL Implementation of NetBill the sender won't perform it and send the message (see also the discussion in section 4). When specifying additional conditions (P ~), some care needs to be taken to avoid situations where progress cannot be made because the only action (s) possible are prevented by additional conditions. One way of indicating preference between actions (in many agent platforms) is to reorder the agent's plans. This is clearly safe, since actions are not prevented, just considered in a different order. The third refinement of context conditions concerns the plans that terminate the interaction. In the Commitment Machine framework any state that has no base-level commitment is final, in that the interaction may end there (or it may continue). However, only some of these final states are desirable final states. Which final states are considered to be desirable depends on the domain and the desired interaction outcome. 
In the NetBill example, the desirable final state is one where the goods have been sent and paid for, and a receipt issued (i.e. goods ∧ pay ∧ receipt). In order to prevent an agent from terminating the interaction too early we add this as a precondition to the termination plan: Figure 4 shows the plans that are changed from figure 3. In order to support the realisation of CMs, we need to change SAAPL in a number of ways. These changes, which are discussed below, can be applied to existing BDI languages to make them "commitment machine supportive". We present the three changes, explain what they involve, and for each change explain how the change was implemented using the 3APL agent oriented programming language. The three changes are: 1. extending the beliefs of the agent so that they can contain commitments; 876 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) Figure 4: SAAPL Implementation of NetBill with refined context conditions (changed plans only) 2. changing the definition of "| =" to encompass implied commitments; and 3. whenever a belief is added, updating existing commitments, according to the rules of commitment dynamics. Extending the notion of beliefs to encompass commitments in fact requires no change in agent platforms that are prolog-like and support terms as beliefs (e.g. Jason, 3APL, CAN). However, other agent platforms do require an extension. For example, JACK, which is an extension of Java, would require changes to support commitments that can be nested. In the case of 3APL no change is needed to support this. Whenever a context condition contains commitments, determining whether the context condition is implied by the agent's beliefs (B | = C) needs to take into account the notion of implied commitments [15]. In brief, a commitment can be considered to follow from a belief set B if the commitment is in the belief set (C ∈ B), but also under other conditions. For example, a commitment to pay C (pay) can be considered to be implied by a belief set containing pay because the commitment may have held and been discharged when pay was made true. Similar rules apply for conditional commitments. These rules, which were introduced in [15] were subsequently re-formalised in a simpler form by [14] resulting in the four inference rules in the bottom part of figure 5. The change that needs to be made to SAAPL to support commitment machine implementations is to extend the definition of | = to include these four rules. For 3APL this was realised by having each agent include the following Prolog clauses: The first clause simply says that anything holds if it is in agent's beliefs (clause (X, true) is true if X is a fact). The remaining four clauses correspond respectively to the inference rules C1, C2, CC1 and CC2. To use these rules we then modify context conditions in our program so that instead of writing, for example, cc (m, c, pay, receipt) we write holds (cc (m, c, pay, receipt)). Figure 5: New Operational Semantics The final change is to update commitments when a belief is added. Formally, this is done by modifying the semantic rule for belief addition so that it applies an algorithm to update commitments. The modified rule and algorithm (which mirrors the definition of norm in [14]) can be found in the top part of figure 5. 
For 3APL this final change was achieved by manually inserting update () after updating beliefs, and defining the following rules for update (): where Deletec and Deletecc delete respectively a base-level and conditional commitment, and Addc adds a base-level commitment. One aspect that doesn't require a change is linking commitments and actions. This is because commitments don't trigger actions directly: they may trigger actions indirectly, but in general their effect is to prevent completion of an interaction while there are outstanding (base level) commitments. Figure 6 shows the message sequences from a number of runs of a 3APL implementation of the NetBill commitment machine6. In order to illustrate the different possible interactions the code was modified so that each agent selected randomly from the actions that it could perform, and a number of runs were made with the customer as the initiator, and then with the merchant as the initiator. There are other possible sequences of messages, not shown, Figure 6: Sample runs from 3APL implementation (alternating turns) including the obvious one: request, quote, accept, goods, payment, receipt, and then done. One minor difference between the 3APL implementation and SAAPL concerns the semantics of messages. In the semantics of SAAPL (and of most AOPLs), receiving a message is treated as an event. However, in 3APL, receiving a message is modelled as the addition to the agent's beliefs of a fact indicating that the message was received [6]. Thus in the 3APL implementation we have PG rules that are triggered by these beliefs, rather than by any event. One issue with this approach is that the belief remains there, so we need to ensure that the belief in question is either deleted once handled, or that we modify preconditions of plans to avoid handling it more than once. In our implementation we delete these "received" beliefs when they are handled, to avoid duplicate handling of messages. 4. BEYOND TWO PARTICIPANTS Generalising to more than two interaction participants requires revisiting how turn management is done, since it is no longer possible to assume alternating turns [7]. In fact, perhaps surprisingly, even in the two participant setting, an alternating turn setup is an unreasonable assumption! For example, consider the path (in figure 1) from state 1 to 15 (sendGoods) then to state 12 (sendAccept). The result, in an alternating turn setup, is a dead-end: there is only a single possible action in state 12, namely sendEPO, but this action is done by the customer, and it is the merchant's turn to act! Figure 7 shows the FSM for NetBill with alternating initiative. A solution to this problem that works in this example, but doesn't generalise7, is to weaken the alternating turn taking regime by allowing an agent to act twice in a row if its second action is driven by a commitment. A general solution is to track whose turn it is to act. This can be done by working out which agents have actions that are able to be performed in the current state. If there is only a single active agent, then it is clearly that agent's turn to act. However, if more than one agent is active then somehow the agents need to work out who should act next. Working this out by negotiation is not a particularly good solution for two reasons. 
Firstly, this negotiation has to be done at every step of the interaction where more than one agent is active (in the NetBill, this applies to seven out of sixteen states), so it is highly desirable to have a light-weight mechanism for doing this. Secondly, it is not clear how the negotiation can avoid an infinite regress situation ("you go first", "no, you go first", ...) without imposing some arbitrary rule. It is also possible to resolve who should act by imposing an arbitrary rule, for example, that the customer always acts in preference to the merchant, or that each agent has a numerical priority (perhaps determined by the order in which they joined the interaction?) that determines who acts. An alternative solution, which exploits the symmetrical properties of commitment machines, is to not try to manage turn taking. (Footnote 7: consider actions A1(C) ⇒ p, A2(C) ⇒ q, and A3(M): p ∧ q ⇒ r.)
[Figure 7: NetBill with alternating initiative]
Instead of tracking and controlling whose turn it is, we simply allow the agents to act freely, and rely on the properties of the interaction space to ensure that "things work out", a notion that we shall make precise, and prove, in the remainder of this section. The issue with having multiple agents active simultaneously is that, instead of all agents agreeing on the current interaction state, agents can be in different states. This can be visualised as each agent having its own copy of the FSM that it navigates through, where it is possible for agents to follow different paths through the FSM. The two specific issues that need to be addressed are: 1. Can agents end up in different final states? 2. Can an agent be in a position where an error occurs because it cannot perform an action corresponding to a received message? We will show that, because actions commute under certain assumptions, agents cannot end up in different final states, and furthermore, that errors cannot occur (again, under certain assumptions). By "actions commute" we mean that the state resulting from performing a sequence of actions A1 ... An is the same, regardless of the order in which the actions are performed. This means that even if agents take different paths through the FSM, they still end up in the same resulting state, because once all messages have been processed, all agents will have performed the same set of actions. This addresses the issue of ending up in different final states. We return to the possibility of errors occurring shortly. Definition 1 (Monotonicity) An action is monotonic if it does not delete any fluents or commitments (that is, directly delete them; it is fine to discharge commitments by adding fluents/commitments). A Commitment Machine is monotonic if all of its actions are monotonic. (Adapted from [14, Definition 6].) Theorem 1 If A1 and A2 are monotonic actions, then performing A1 followed by A2 has the same effect on the agent's beliefs as performing A2 followed by A1. (Adapted from [14, Theorem 2].) This assumes that both actions can be performed. However, it is possible for the performance of A1 to disable A2 from being done. For example, if A1 has the effect +p, and A2 has precondition ¬p, then although both actions may be enabled in the initial state, they cannot be performed in either order.
We can prevent this by ensuring that actions' preconditions do not contain negation (or implication), since a monotonic action cannot result in a precondition that is negation-free becoming false. Note that this restriction only applies to the original action precondition, P, not to any additional preconditions imposed by the agent (P'). This is because only P is used to determine whether another agent is able to perform the action. Thus monotonic CMs with preconditions that do not contain negations have actions that commute. However, in fact, the restriction to monotonic CMs is unnecessarily strong: all that is needed is that whenever there is a choice of agent that can act, then the possible actions are monotonic. If there is only a single agent that can act, then no restriction is needed on the actions: they may or may not be monotonic. Definition 2 (Locally Monotonic) A commitment machine is locally monotonic if for any state S either (a) only a single agent has actions that can be performed; or (b) all actions that can be performed in S are monotonic. Theorem 2 In a locally monotonic CM, once all messages have been processed, all agents will be in the same state. Furthermore, no errors can occur. Proof: Once all messages have been processed we have that all agents will have performed the same action set, perhaps in a different order. The essence of the proof is to argue that as long as agents haven't yet converged to the same state, all actions must be monotonic, and hence that these actions commute, and cannot disable any other actions. Consider the first point of divergence, where an agent performs action A and at the same time another agent (call it XB) performs action B. Clearly, this state has actions of more than one agent enabled, so, since the CM is locally monotonic, the relevant actions must be monotonic. Therefore, after doing A, the action B must still be enabled, and so the message to do B can be processed by updating the recipient agent's beliefs with the effects of B. Furthermore, because monotonic actions commute, the result of doing A before B is the same as doing B before A: However, what happens if the next action after A is not B, but C? Because B is enabled, and C is not done by agent XB (see below), we must have that C is also monotonic, and hence (a) the result of doing A and B and C is the same regardless of the order in which the three actions are done; and (b) C doesn't disable B, so B can still be done after C. The reason why C cannot be done by XB is that messages are processed in the order of their arrival9. From the perspective of XB the action B was done before C, and therefore from any other agent's perspective the message saying that B was done must be received (and processed) before a message saying that C is done. This argument can be extended to show that once agents start taking different paths through the FSM all actions taken until the point where they converge on a single state must be monotonic, and hence it is always possible to converge (because actions aren't disabled), so the interaction is error free; and the resulting state once convergence occurs is the same (because monotonic actions commute). This theorem gives a strong theoretical guarantee that not doing turn management will not lead to disaster. This is analogous to proving that disabling all traffic lights would not lead to any accidents, and is only possible because the refined CM axioms are symmetrical. 
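As a hedged illustration of how Definitions 1 and 2 could be checked mechanically, the sketch below treats the interaction state as a set of fluent/commitment labels and an action's effects as add and delete sets. The Action class, the set-based encoding, and the enumeration of reachable states are assumptions of the sketch, not part of the paper's formalism.

```python
from dataclasses import dataclass
from typing import FrozenSet, Iterable, List

@dataclass(frozen=True)
class Action:
    name: str
    actor: str
    pre: FrozenSet[str]      # precondition P (assumed negation-free, per the restriction above)
    add: FrozenSet[str]      # fluents/commitments added
    delete: FrozenSet[str]   # fluents/commitments deleted

def monotonic(a: Action) -> bool:
    """Definition 1: an action is monotonic if it deletes no fluents or commitments."""
    return not a.delete

def enabled(a: Action, state: FrozenSet[str]) -> bool:
    return a.pre <= state

def locally_monotonic(states: Iterable[FrozenSet[str]], actions: List[Action]) -> bool:
    """Definition 2: wherever more than one agent has an enabled action,
    every enabled action must be monotonic."""
    for state in states:
        enabled_here = [a for a in actions if enabled(a, state)]
        if len({a.actor for a in enabled_here}) > 1 and not all(map(monotonic, enabled_here)):
            return False
    return True
```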
Based on this theorem the generic transformation from CM to code should allow agents to act freely, which is achieved by simply changing ı: P ∧ P' ∧ ¬E ← +E; ↑A to ı: P ∧ P' ∧ ¬E ← +E; ↑A; ı, i.e. re-posting the event ı so that the agent continues to act. For example, instead of ı: ¬request ← +request; ↑sendRequest we have ı: ¬request ← +request; ↑sendRequest; ı. One consequence of the theorem is that it is not necessary to ensure that agents process messages before continuing to interact. However, in order to avoid unnecessary parallelism, which can make debugging harder, it may still be desirable to process messages before performing actions. Figure 8 shows a number of runs from the 3APL implementation that has been modified to allow free, non-alternating interaction.
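To make the free-interaction regime concrete, here is a small Python simulation (not the 3APL code) in which any agent whose action precondition holds may act at any time, with no turn management. The action table paraphrases the NetBill actions (sendRequest, sendQuote, sendAccept, sendGoods, sendEPO, sendReceipt); conditional commitments such as promiseGoods are flattened into opaque strings, and the "not yet performed" check stands in for the ¬E condition, so this only illustrates the scheduling behaviour, not the commitment semantics.

```python
import random

# action name -> (actor, precondition fluents, effect fluents); all effects are additions,
# so every action here is monotonic in the sense of Definition 1.
ACTIONS = {
    "sendRequest": ("C", set(),     {"request"}),
    "sendQuote":   ("M", set(),     {"offer", "promiseGoods", "promiseReceipt"}),
    "sendAccept":  ("C", set(),     {"accept"}),
    "sendGoods":   ("M", set(),     {"goods", "promiseReceipt"}),
    "sendEPO":     ("C", {"goods"}, {"pay"}),
    "sendReceipt": ("M", {"pay"},   {"receipt"}),
}
GOAL = {"goods", "pay", "receipt"}   # the desirable final state of the NetBill example

def run():
    state, done = set(), set()
    while not GOAL <= state:
        enabled = [(name, spec) for name, spec in ACTIONS.items()
                   if name not in done and spec[1] <= state]
        if not enabled:
            break
        name, (actor, _pre, effects) = random.choice(enabled)   # agents act freely
        state |= effects   # effects are broadcast; every agent eventually applies the same additions
        done.add(name)
        print(f"{actor} performs {name:<12} state = {sorted(state)}")

run()
```

Because every effect here is an addition, the runs converge to the same final state regardless of the order in which actions happen to be chosen, mirroring the guarantee of Theorem 2.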
Implementing Commitment-Based Interactions * ABSTRACT Although agent interaction plays a vital role in MAS, and messagecentric approaches to agent interaction have their drawbacks, present agent-oriented programming languages do not provide support for implementing agent interaction that is flexible and robust. Instead, messages are provided as a primitive building block. In this paper we consider one approach for modelling agent interactions: the commitment machines framework. This framework supports modelling interactions at a higher level (using social commitments), resulting in more flexible interactions. We investigate how commitmentbased interactions can be implemented in conventional agent-oriented programming languages. The contributions of this paper are: a mapping from a commitment machine to a collection of BDI-style plans; extensions to the semantics of BDI programming languages; and an examination of two issues that arise when distributing commitment machines (turn management and race conditions) and solutions to these problems. 1. INTRODUCTION Agents are social, and agent interaction plays a vital role in multiagent systems. Consequently, design and implementation of agent interaction is an important research topic. The standard approach for designing agent interactions is messagecentric: interactions are defined by interaction protocols that give the permissible sequences of messages, specified using notations such as finite state machines, Petri nets, or Agent UML. It has been argued that this message-centric approach to interaction design is not a good match for intelligent agents. Intelligent agents should exhibit the ability to persist in achieving their goals in the face of failure (robustness) by trying different approaches (flexibility). On the other hand, when following an interaction protocol, an agent has limited flexibility and robustness: the ability to persistently try alternative means to achieving the interaction's aim is limited to those options that the protocol's designer provided, and in practice, message-centric design processes do not tend to lead to protocols that are flexible or robust. Recognising these limitations of the traditional approach to designing agent interactions, a number of approaches have been proposed in recent years that move away from message-centric interaction protocols, and instead consider designing agent interactions using higher-level concepts such as social commitments [8, 10, 18] or interaction goals [2]. There has also been work on richer forms of interaction in specific settings, such as teams of cooperative agents [5, 11]. However, although there has been work on designing flexible and robust agent interactions, there has been virtually no work on providing programming language support for implementing such interactions. Current Agent Oriented Programming Languages (AOPLs) do not provide support for implementing flexible and robust agent interactions using higher-level concepts than messages. Indeed, modern AOPLs [1], with virtually no exceptions, provide only simple message sending as the basis for implementing agent interaction. This paper presents what, to the best of our knowledge, is the second AOPL to support high-level, flexible, and robust agent interaction implementation. The first such language, STAPLE, was proposed a few years ago [9], but is not described in detail, and is arguably impractical for use by non-specialists, due to its logical basis and heavy reliance on temporal and modal logic. 
This paper presents a scheme for extending BDI-like AOPLs to support direct implementation of agent interactions that are designed using Yolum & Singh's commitment machine (CM) framework [19]. In the remainder of this paper we briefly review commitment machines and present a simple abstraction of BDI AOPLs which lies in the common subset of languages such as Jason, 3APL, and CAN. We then present a scheme for translating commitment machines to this language, and indicate how the language needs to be extended to support this. We then extend our scheme to address a range of issues concerned with distribution, including turn tracking [7], and race conditions. 2. BACKGROUND 2.1 Commitment Machines The aim of the commitment machine framework is to allow for the definition of interactions that are more flexible than traditional message-centric approaches. A Commitment Machine (CM) [19] specifies an interaction between entities (e.g. agents, services, processes) in terms of actions that change the interaction state. This interact state consists of fluents (predicates that change value over time), but also social commitments, both base-level and conditional. A base-level social commitment is an undertaking by debtor A to creditor B to bring about condition p, denoted C (A, B, p). This is sometimes abbreviated to C (p), where it is not important to specify the identities of the entities in question. For example, a commitment by customer C to merchant M to make the fluent paid true would be written as C (C, M, paid). A conditional social commitment is an undertaking by debtor A to creditor B that should condition q become true, A will then commit to bringing about condition p. This is denoted by CC (A, B, q, p), and, where the identity of the entities involved is unimportant (or obvious), is abbreviated to CC (q p) where the arrow is a reminder of the causal link between q becoming true and the creation of a commitment to make p true. For example, a commitment to make the fluent paid true once goods have been received would be written CC (goods paid). The semantics of commitments (both base-level and conditional) is defined with rules that specify how commitments change over time. For example, the commitment C (p) (or CC (q p)) is discharged when p becomes true; and the commitment CC (q p) is replaced by C (p) when q becomes true. In this paper we use the more symmetric semantics proposed by [15] and subsequently reformalised by [14]. In brief, these semantics deal with a number of more complex cases, such as where commitments are created when conditions already hold: if p holds when CC (p q) is meant to be created, then C (q) is created instead of CC (p q). An interaction is defined by specifying the entities involved, the possible contents of the interaction state (both fluents and commitments), and (most importantly) the actions that each entity can perform along with the preconditions and effects of each action, specified as add and delete lists. A commitment machine (CM) defines a range of possible interactions that each start in some state1, and perform actions until reaching a final state. A final state is one that has no base-level commitments. One way of visualising the interactions that are possible with a given commitment machine is to generate the finite state machine corresponding to the CM. 
For example, figure 1 gives the FSM2 corresponding to the NetBill [18] commitment machine: a simple CM where a customer (C) and merchant (M) attempt to trade using the following actions3: 1Unlike standard interaction protocols, or finite state machines, there is no designated initial state for the interaction. 2The finite state machine is software-generated: the nodes and connections were computed by an implementation of the axioms (available from http://www.winikoff.net/CM) and were then laid out by graphviz (http://www.graphviz.org/). 3We use the notation A (X): P = * E to indicate that action A is performed by entity X, has precondition P (with ": P" omitted if empty) and effect E. • sendRequest (C) = * request • sendQuote (M) = * offer where offer - promiseGoods n promiseReceipt and promiseGoods - CC (M, C, accept, goods) and promiseReceipt - CC (M, C, pay, receipt) • sendAccept (C) = * accept where accept - CC (C, M, goods, pay) • sendGoods (M) = * promiseReceipt n goods where promiseReceipt - CC (M, C, pay, receipt) • sendEPO (C): goods = * pay • sendReceipt (M): pay = * receipt. The commitment accept is the customer's promise to pay once goods have been sent, promiseGoods is the merchant's promise to send the goods once the customer accepts, and promiseReceipt is the merchant's promise to send a receipt once payment has been made. As seen in figure 1, commitment machines can support a range of interaction sequences. 2.2 An Abstract Agent Programming Language Agent programming languages in the BDI tradition (e.g. dMARS, JAM, PRS, UM-PRS, JACK, AgentSpeak (L), Jason, 3APL, CAN, Jadex) define agent behaviour in terms of event-triggered plans, where each plan specifies what it is triggered by, under what situations it can be considered to be applicable (defined using a so-called context condition), and a plan body: a sequence of steps that can include posting events which in turn triggers further plans. Given a collection of plans and an event e that has been posted the agent first collects all plans types that are triggered by that event (the relevant plans), then evaluates the context conditions of these plans to obtain a set of applicable plan instances. One of these is chosen and is executed. We now briefly define the formal syntax and semantics of a Simple Abstract (BDI) Agent Programming Language (SAAPL). This language is intended to be an abstraction that is in the common subset of such languages as Jason [1, Chapter 1], 3APL [1, Chapter 2], and CAN [16]. Thus, it is intentionally incomplete in some areas, for instance it doesn't commit to a particular mechanism for dealing with plan failure, since different mechanisms are used by different AOPLs. An agent program (denoted by Π) consists of a collection of plan clauses of the form e: C <--P where e is an event, C is a context condition (a logical formula over the agent's beliefs), and P is the plan body. The plan body is built up from the following constructs. We have the empty step a which always succeeds and does nothing, operations to add (+ b) and delete (− b) beliefs, sending a message m to agent N (TN m), and posting an event4 (e). These can be sequenced (P; P). Formal semantics for this language is given in figure 2. This semantics is based on the semantics for AgentSpeak given by [12], which in turn is based on the semantics for CAN [16]. 
The semantics is in the style of Plotkin's Structural Operational Semantics, and assumes that operations exist that check whether a condition 4We use IN m as short hand for the event corresponding to receiving message m from agent N. 874 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) Figure 1: Finite State Machine for NetBill (shaded = final states) follows from a belief set, that add a belief to a belief set, and that delete a belief from a belief set. In the case of beliefs being a set of ground atoms these operations are respectively consequence checking (B | = C), and set addition (B ∪ {b}) and deletion (B \ {b}). More sophisticated belief management methods may be used, but are not considered here. We define a basic configuration S = ~ Q, N, B, P ~ where Q is a (global) message queue (modelled as a sequence5 where messages are added at one end and removed from the other end), N is the name of the agent, B is the beliefs of the agent and P is the plan body being executed (i.e. the intention). We also define an agent configuration, where instead of a single plan body P there is a set of plan instances, Γ. Finally, a complete MAS is a pair ~ Q, As ~ of a global message queue Q and a set of agent configurations (without the queue, Q). The global message queue is a sequence of triplets of the form sender: recipient: message. A transition S0 − → S1 specifies that executing S0 a single step yields S1. We annotate the arrow with an indication of whether the configuration in question is basic, an agent configuration, or a MAS configuration. The transition relation is defined using rules S' − → Sr of the form S − → S' or of the form S − → S' r; the latter are conditional with the top (numerator) being the premise and the bottom (denominator) being the conclusion. Note that there is non-determinism in SAAPL, e.g. the choice of plan to execute from a set of applicable plans. This is resolved by using selection functions: So selects one of the applicable plan instances to handle a given event, ST selects which of the plan instances that can be executed should be executed next, and S „ 4 selects which agent should execute (a step) next. 3. IMPLEMENTING COMMITMENT-BASED INTERACTIONS The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 875 876 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 4. BEYOND TWO PARTICIPANTS 878 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)
Implementing Commitment-Based Interactions * ABSTRACT Although agent interaction plays a vital role in MAS, and messagecentric approaches to agent interaction have their drawbacks, present agent-oriented programming languages do not provide support for implementing agent interaction that is flexible and robust. Instead, messages are provided as a primitive building block. In this paper we consider one approach for modelling agent interactions: the commitment machines framework. This framework supports modelling interactions at a higher level (using social commitments), resulting in more flexible interactions. We investigate how commitmentbased interactions can be implemented in conventional agent-oriented programming languages. The contributions of this paper are: a mapping from a commitment machine to a collection of BDI-style plans; extensions to the semantics of BDI programming languages; and an examination of two issues that arise when distributing commitment machines (turn management and race conditions) and solutions to these problems. 1. INTRODUCTION Agents are social, and agent interaction plays a vital role in multiagent systems. Consequently, design and implementation of agent interaction is an important research topic. The standard approach for designing agent interactions is messagecentric: interactions are defined by interaction protocols that give the permissible sequences of messages, specified using notations such as finite state machines, Petri nets, or Agent UML. It has been argued that this message-centric approach to interaction design is not a good match for intelligent agents. Intelligent agents should exhibit the ability to persist in achieving their goals in the face of failure (robustness) by trying different approaches (flexibility). Recognising these limitations of the traditional approach to designing agent interactions, a number of approaches have been proposed in recent years that move away from message-centric interaction protocols, and instead consider designing agent interactions using higher-level concepts such as social commitments [8, 10, 18] or interaction goals [2]. There has also been work on richer forms of interaction in specific settings, such as teams of cooperative agents [5, 11]. However, although there has been work on designing flexible and robust agent interactions, there has been virtually no work on providing programming language support for implementing such interactions. Current Agent Oriented Programming Languages (AOPLs) do not provide support for implementing flexible and robust agent interactions using higher-level concepts than messages. Indeed, modern AOPLs [1], with virtually no exceptions, provide only simple message sending as the basis for implementing agent interaction. This paper presents what, to the best of our knowledge, is the second AOPL to support high-level, flexible, and robust agent interaction implementation. This paper presents a scheme for extending BDI-like AOPLs to support direct implementation of agent interactions that are designed using Yolum & Singh's commitment machine (CM) framework [19]. In the remainder of this paper we briefly review commitment machines and present a simple abstraction of BDI AOPLs which lies in the common subset of languages such as Jason, 3APL, and CAN. We then present a scheme for translating commitment machines to this language, and indicate how the language needs to be extended to support this. 
We then extend our scheme to address a range of issues concerned with distribution, including turn tracking [7], and race conditions. 2. BACKGROUND 2.1 Commitment Machines The aim of the commitment machine framework is to allow for the definition of interactions that are more flexible than traditional message-centric approaches. A Commitment Machine (CM) [19] specifies an interaction between entities (e.g. agents, services, processes) in terms of actions that change the interaction state. This interact state consists of fluents (predicates that change value over time), but also social commitments, both base-level and conditional. A base-level social commitment is an undertaking by debtor A to creditor B to bring about condition p, denoted C (A, B, p). This is sometimes abbreviated to C (p), where it is not important to specify the identities of the entities in question. For example, a commitment by customer C to merchant M to make the fluent paid true would be written as C (C, M, paid). A conditional social commitment is an undertaking by debtor A to creditor B that should condition q become true, A will then commit to bringing about condition p. For example, a commitment to make the fluent paid true once goods have been received would be written CC (goods paid). The semantics of commitments (both base-level and conditional) is defined with rules that specify how commitments change over time. For example, the commitment C (p) (or CC (q p)) is discharged when p becomes true; and the commitment CC (q p) is replaced by C (p) when q becomes true. In this paper we use the more symmetric semantics proposed by [15] and subsequently reformalised by [14]. In brief, these semantics deal with a number of more complex cases, such as where commitments are created when conditions already hold: if p holds when CC (p q) is meant to be created, then C (q) is created instead of CC (p q). An interaction is defined by specifying the entities involved, the possible contents of the interaction state (both fluents and commitments), and (most importantly) the actions that each entity can perform along with the preconditions and effects of each action, specified as add and delete lists. A commitment machine (CM) defines a range of possible interactions that each start in some state1, and perform actions until reaching a final state. A final state is one that has no base-level commitments. One way of visualising the interactions that are possible with a given commitment machine is to generate the finite state machine corresponding to the CM. For example, figure 1 gives the FSM2 corresponding to the NetBill [18] commitment machine: a simple CM where a customer (C) and merchant (M) attempt to trade using the following actions3: 1Unlike standard interaction protocols, or finite state machines, there is no designated initial state for the interaction. The commitment accept is the customer's promise to pay once goods have been sent, promiseGoods is the merchant's promise to send the goods once the customer accepts, and promiseReceipt is the merchant's promise to send a receipt once payment has been made. As seen in figure 1, commitment machines can support a range of interaction sequences. 2.2 An Abstract Agent Programming Language Given a collection of plans and an event e that has been posted the agent first collects all plans types that are triggered by that event (the relevant plans), then evaluates the context conditions of these plans to obtain a set of applicable plan instances. 
One of these is chosen and is executed. We now briefly define the formal syntax and semantics of a Simple Abstract (BDI) Agent Programming Language (SAAPL). Thus, it is intentionally incomplete in some areas, for instance it doesn't commit to a particular mechanism for dealing with plan failure, since different mechanisms are used by different AOPLs. An agent program (denoted by Π) consists of a collection of plan clauses of the form e: C <--P where e is an event, C is a context condition (a logical formula over the agent's beliefs), and P is the plan body. The plan body is built up from the following constructs. We have the empty step a which always succeeds and does nothing, operations to add (+ b) and delete (− b) beliefs, sending a message m to agent N (TN m), and posting an event4 (e). These can be sequenced (P; P). Formal semantics for this language is given in figure 2. This semantics is based on the semantics for AgentSpeak given by [12], which in turn is based on the semantics for CAN [16]. The semantics is in the style of Plotkin's Structural Operational Semantics, and assumes that operations exist that check whether a condition 4We use IN m as short hand for the event corresponding to receiving message m from agent N. 874 The Sixth Intl. . Joint Conf. Figure 1: Finite State Machine for NetBill (shaded = final states) follows from a belief set, that add a belief to a belief set, and that delete a belief from a belief set. More sophisticated belief management methods may be used, but are not considered here. We also define an agent configuration, where instead of a single plan body P there is a set of plan instances, Γ. Finally, a complete MAS is a pair ~ Q, As ~ of a global message queue Q and a set of agent configurations (without the queue, Q). The global message queue is a sequence of triplets of the form sender: recipient: message. A transition S0 − → S1 specifies that executing S0 a single step yields S1. We annotate the arrow with an indication of whether the configuration in question is basic, an agent configuration, or a MAS configuration. The transition relation is defined using rules S' − → Sr Note that there is non-determinism in SAAPL, e.g. the choice of plan to execute from a set of applicable plans. This is resolved by using selection functions: So selects one of the applicable plan instances to handle a given event, ST selects which of the plan instances that can be executed should be executed next, and S „ 4 selects which agent should execute (a step) next.
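As a rough illustration of this plan-processing cycle (relevant plans, then applicable plans, then selection), here is a Python sketch. The PlanClause class, the use of a Python callable for the context condition, and the random stand-in for the selection function SO are all assumptions of the sketch, not part of SAAPL itself.

```python
from dataclasses import dataclass
from typing import Callable, List
import random

@dataclass
class PlanClause:
    event: str                        # e
    context: Callable[[set], bool]    # C, evaluated against the belief set
    body: List[str]                   # P, kept as an opaque list of steps for this sketch

def applicable_plans(program: List[PlanClause], event: str, beliefs: set) -> List[PlanClause]:
    relevant = [p for p in program if p.event == event]      # plans triggered by the event
    return [p for p in relevant if p.context(beliefs)]       # context condition entailed (B |= C)

def select_plan(options: List[PlanClause]) -> PlanClause:
    return random.choice(options)     # stands in for the selection function SO

# usage: handle the interaction event "i" when no request has been made yet
program = [PlanClause("i", lambda B: "request" not in B,
                      ["+request", "send sendRequest", "post i"])]
plans = applicable_plans(program, "i", beliefs=set())
print(select_plan(plans).body if plans else "no applicable plan")
```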
I-51
Learning and Joint Deliberation through Argumentation in Multi-Agent Systems
In this paper we will present an argumentation framework for learning agents (AMAL) designed for two purposes: (1) for joint deliberation, and (2) for learning from communication. The AMAL framework is completely based on learning from examples: the argument preference relation, the argument generation policy, and the counterargument generation policy are case-based techniques. For joint deliberation, learning agents share their experience by forming a committee to decide upon some joint decision. We experimentally show that the argumentation among committees of agents improves both the individual and joint performance. For learning from communication, an agent engages in arguing with other agents in order to contrast its individual hypotheses and receive counterexamples; the argumentation process improves their learning scope and individual performance.
[ "joint deliber", "argument", "argument framework", "learn agent", "learn from commun", "multi-agent system", "argument protocol", "collabor", "group", "predict accuraci", "case-base polici", "multi-agent learn", "case-base reason" ]
[ "P", "P", "P", "P", "P", "M", "M", "U", "U", "U", "M", "M", "U" ]
Learning and Joint Deliberation through Argumentation in Multi-Agent Systems Santi Ontañón CCL, Cognitive Computing Lab Georgia Institute of Technology Atlanta, GA 303322/0280 santi@cc.gatech.edu Enric Plaza IIIA, Artificial Intelligence Research Institute CSIC, Spanish Council for Scientific Research Campus UAB, 08193 Bellaterra, Catalonia (Spain) enric@iiia.csic.es ABSTRACT In this paper we will present an argumentation framework for learning agents (AMAL) designed for two purposes: (1) for joint deliberation, and (2) for learning from communication. The AMAL framework is completely based on learning from examples: the argument preference relation, the argument generation policy, and the counterargument generation policy are case-based techniques. For join deliberation, learning agents share their experience by forming a committee to decide upon some joint decision. We experimentally show that the argumentation among committees of agents improves both the individual and joint performance. For learning from communication, an agent engages into arguing with other agents in order to contrast its individual hypotheses and receive counterexamples; the argumentation process improves their learning scope and individual performance. Categories and Subject Descriptors I.2.6 [Artificial Intelligence]: Learning; I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence-Multiagent systems, Intelligent Agents 1. INTRODUCTION Argumentation frameworks for multi-agent systems can be used for different purposes like joint deliberation, persuasion, negotiation, and conflict resolution. In this paper we will present an argumentation framework for learning agents, and show that it can be used for two purposes: (1) joint deliberation, and (2) learning from communication. Argumentation-based joint deliberation involves discussion over the outcome of a particular situation or the appropriate course of action for a particular situation. Learning agents are capable of learning from experience, in the sense that past examples (situations and their outcomes) are used to predict the outcome for the situation at hand. However, since individual agents experience may be limited, individual knowledge and prediction accuracy is also limited. Thus, learning agents that are capable of arguing their individual predictions with other agents may reach better prediction accuracy after such an argumentation process. Most existing argumentation frameworks for multi-agent systems are based on deductive logic or some other deductive logic formalism specifically designed to support argumentation, such as default logic [3]). Usually, an argument is seen as a logical statement, while a counterargument is an argument offered in opposition to another argument [4, 13]; agents use a preference relation to resolve conflicting arguments. However, logic-based argumentation frameworks assume agents with preloaded knowledge and preference relation. In this paper, we focus on an Argumentation-based Multi-Agent Learning (AMAL) framework where both knowledge and preference relation are learned from experience. Thus, we consider a scenario with agents that (1) work in the same domain using a shared ontology, (2) are capable of learning from examples, and (3) communicate using an argumentative framework. Having learning capabilities allows agents effectively use a specific form of counterargument, namely the use of counterexamples. Counterexamples offer the possibility of agents learning during the argumentation process. 
Moreover, learning agents allow techniques that use learnt experience to generate adequate arguments and counterarguments. Specifically, we will need to address two issues: (1) how to define a technique to generate arguments and counterarguments from examples, and (2)how to define a preference relation over two conflicting arguments that have been induced from examples. This paper presents a case-based approach to address both issues. The agents use case-based reasoning (CBR) [1] to learn from past cases (where a case is a situation and its outcome) in order to predict the outcome of a new situation. We propose an argumentation protocol inside the AMAL framework at supports agents in reaching a joint prediction over a specific situation or problem - moreover, the reasoning needed to support the argumentation process will also be based on cases. In particular, we present two case-based measures, one for generating the arguments and counterarguments adequate to a particular situation and another for determining preference relation among arguments. Finally, we evaluate (1) if argumentation between learning agents can produce a joint prediction that improves over individual learning performance and (2) if learning from the counterexamples conveyed during the argumentation process increases the individual performance with precisely those cases being used while arguing among them. The paper is structured as follows. Section 2 discusses the relation among argumentation, collaboration and learning. Then Section 3 introduces our multi-agent CBR (MAC) framework and the notion of justified prediction. After that, Section 4 formally defines our argumentation framework. Sections 5 and 6 present our case-based preference relation and argument generation policies respectively. Later, Section 7 presents the argumentation protocol in our AMAL framework. After that, Section 8 presents an exemplification of the argumentation framework. Finally, Section 9 presents an empirical evaluation of our two main hypotheses. The paper closes with related work and conclusions sections. 2. ARGUMENTATION,COLLABORATION AND LEARNING Both learning and collaboration are ways in which an agent can improve individual performance. In fact, there is a clear parallelism between learning and collaboration in multi-agent systems, since both are ways in which agents can deal with their shortcomings. Let us show which are the main motivations that an agent can have to learn or to collaborate. • Motivations to learn: - Increase quality of prediction, - Increase efficiency, - Increase the range of solvable problems. • Motivations to collaborate: - Increase quality of prediction, - Increase efficiency, - Increase the range of solvable problems, - Increase the range of accessible resources. Looking at the above lists of motivation, we can easily see that learning and collaboration are very related in multi-agent systems. In fact, with the exception of the last item in the motivations to collaborate list, they are two extremes of a continuum of strategies to improve performance. An agent may choose to increase performance by learning, by collaborating, or by finding an intermediate point that combines learning and collaboration in order to improve performance. 
In this paper we will propose AMAL, an argumentation framework for learning agents, and will also also show how AMAL can be used both for learning from communication and for solving problems in a collaborative way: • Agents can solve problems in a collaborative way via engaging an argumentation process about the prediction for the situation at hand. Using this collaboration, the prediction can be done in a more informed way, since the information known by several agents has been taken into account. • Agents can also learn from communication with other agents by engaging an argumentation process. Agents that engage in such argumentation processes can learn from the arguments and counterexamples received from other agents, and use this information for predicting the outcomes of future situations. In the rest of this paper we will propose an argumentation framework and show how it can be used both for learning and for solving problems in a collaborative way. 3. MULTI-AGENT CBR SYSTEMS A Multi-Agent Case Based Reasoning System (MAC) M = {(A1, C1), ..., (An, Cn)} is a multi-agent system composed of A = {Ai, ..., An}, a set of CBR agents, where each agent Ai ∈ A possesses an individual case base Ci. Each individual agent Ai in a MAC is completely autonomous and each agent Ai has access only to its individual and private case base Ci. A case base Ci = {c1, ..., cm} is a collection of cases. Agents in a MAC system are able to individually solve problems, but they can also collaborate with other agents to solve problems. In this framework, we will restrict ourselves to analytical tasks, i.e. tasks like classification, where the solution of a problem is achieved by selecting a solution class from an enumerated set of solution classes. In the following we will note the set of all the solution classes by S = {S1, ..., SK }. Therefore, a case c = P, S is a tuple containing a case description P and a solution class S ∈ S. In the following, we will use the terms problem and case description indistinctly. Moreover, we will use the dot notation to refer to elements inside a tuple; e.g., to refer to the solution class of a case c, we will write c.S. Therefore, we say a group of agents perform joint deliberation, when they collaborate to find a joint solution by means of an argumentation process. However, in order to do so, an agent has to be able to justify its prediction to the other agents (i.e. generate an argument for its predicted solution that can be examined and critiqued by the other agents). The next section addresses this issue. 3.1 Justified Predictions Both expert systems and CBR systems may have an explanation component [14] in charge of justifying why the system has provided a specific answer to the user. The line of reasoning of the system can then be examined by a human expert, thus increasing the reliability of the system. Most of the existing work on explanation generation focuses on generating explanations to be provided to the user. However, in our approach we use explanations (or justifications) as a tool for improving communication and coordination among agents. We are interested in justifications since they can be used as arguments. For that purpose, we will benefit from the ability of some machine learning methods to provide justifications. 
A justification built by a CBR method after determining that the solution of a particular problem P was Sk is a description that contains the relevant information from the problem P that the CBR method has considered to predict Sk as the solution of P. In particular, CBR methods work by retrieving similar cases to the problem at hand, and then reusing their solutions for the current problem, expecting that since the problem and the cases are similar, the solutions will also be similar. Thus, if a CBR method has retrieved a set of cases C1, ..., Cn to solve a particular problem P, the justification built will contain the relevant information from the problem P that made the CBR system retrieve that particular set of cases, i.e. it will contain the relevant information that P and C1, ..., Cn have in common. For example, Figure 1 shows a justification built by a CBR system for a toy problem (in the following sections we will show justifications for real problems). In the figure, a problem has two attributes (Traffic_light and Cars_passing); the retrieval mechanism of the CBR system notices that by considering only the attribute Traffic_light, it can retrieve two cases that predict the same solution: wait. Thus, since only this attribute has been used, it is the only one appearing in the justification. The values of the rest of the attributes are irrelevant, since whatever their value the solution class would have been the same.
[Figure 1: An example of justification generation in a CBR system. Notice that, since the only relevant feature to decide is Traffic_light (the only one used to retrieve cases), it is the only one appearing in the justification.]
In general, the meaning of a justification is that all (or most of) the cases in the case base of an agent that satisfy the justification (i.e. all the cases that are subsumed by the justification) belong to the predicted solution class. In the rest of the paper, we will use ⊑ to denote the subsumption relation. In our work, we use LID [2], a CBR method capable of building symbolic justifications such as the one exemplified in Figure 1. When an agent provides a justification for a prediction, the agent generates a justified prediction: DEFINITION 3.1. A Justified Prediction is a tuple J = ⟨A, P, S, D⟩ where agent A considers S the correct solution for problem P, and that prediction is justified by a symbolic description D such that J.D ⊑ J.P. Justifications can have many uses for CBR systems [8, 9]. In this paper, we are going to use justifications as arguments, in order to allow learning agents to engage in argumentation processes. 4. ARGUMENTS AND COUNTERARGUMENTS For our purposes an argument α generated by an agent A is composed of a statement S and some evidence D supporting S as correct. In the remainder of this section we will see how this general definition of argument can be instantiated in the specific kinds of arguments that the agents can generate.
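A minimal Python rendering of the two kinds of information used as arguments, cases ⟨P, S⟩ and justified predictions ⟨A, P, S, D⟩, with subsumption read as "every attribute fixed by the description takes the same value in the problem". The attribute-value encoding is an assumption made purely for illustration.

```python
from dataclasses import dataclass
from typing import Dict

Description = Dict[str, str]   # attribute-value encoding assumed for this sketch

@dataclass
class Case:                    # c = <P, S>
    problem: Description
    solution: str

@dataclass
class JustifiedPrediction:     # J = <A, P, S, D>, with J.D subsuming J.P
    agent: str
    problem: Description
    solution: str
    justification: Description

def subsumes(d: Description, p: Description) -> bool:
    """d subsumes p if everything d fixes also holds in p."""
    return all(p.get(attr) == val for attr, val in d.items())

# the toy traffic-light example of Figure 1: only Traffic_light matters
J = JustifiedPrediction(agent="A1",
                        problem={"Traffic_light": "red", "Cars_passing": "no"},
                        solution="wait",
                        justification={"Traffic_light": "red"})
assert subsumes(J.justification, J.problem)
```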
In the context of MAC systems, agents argue about predictions for new problems and can provide two kinds of information: a) specific cases P, S , and b) justified predictions: A, P, S, D . Using this information, we can define three types of arguments: justified predictions, counterarguments, and counterexamples. A justified prediction α is generated by an agent Ai to argue that Ai believes that the correct solution for a given problem P is α.S, and the evidence provided is the justification α.D. In the example depicted in Figure 1, an agent Ai may generate the argument α = Ai, P, Wait, (Traffic_light = red) , meaning that the agent Ai believes that the correct solution for P is Wait because the attribute Traffic_light equals red. A counterargument β is an argument offered in opposition to another argument α. In our framework, a counterargument consists of a justified prediction Aj, P, S , D generated by an agent Aj with the intention to rebut an argument α generated by another agent Ai, that endorses a solution class S different from that of α.S for the problem at hand and justifies this with a justification D . In the example in Figure 1, if an agent generates the argument α = Ai, P, Walk, (Cars_passing = no) , an agent that thinks that the correct solution is Wait might answer with the counterargument β = Aj, P, Wait, (Cars_passing = no ∧ Traffic_light = red) , meaning that, although there are no cars passing, the traffic light is red, and the street cannot be crossed. A counterexample c is a case that contradicts an argument α. Thus a counterexample is also a counterargument, one that states that a specific argument α is not always true, and the evidence provided is the case c. Specifically, for a case c to be a counterexample of an argument α, the following conditions have to be met: α.D c and α.S = c.S, i.e. the case must satisfy the justification α.D and the solution of c must be different than the predicted by α. By exchanging arguments and counterarguments (including counterexamples), agents can argue about the correct solution of a given problem, i.e. they can engage a joint deliberation process. However, in order to do so, they need a specific interaction protocol, a preference relation between contradicting arguments, and a decision policy to generate counterarguments (including counterexamples). In the following sections we will present these elements. 5. PREFERENCE RELATION A specific argument provided by an agent might not be consistent with the information known to other agents (or even to some of the information known by the agent that has generated the justification due to noise in training data). For that reason, we are going to define a preference relation over contradicting justified predictions based on cases. Basically, we will define a confidence measure for each justified prediction (that takes into account the cases owned by each agent), and the justified prediction with the highest confidence will be the preferred one. The idea behind case-based confidence is to count how many of the cases in an individual case base endorse a justified prediction, and how many of them are counterexamples of it. The more the endorsing cases, the higher the confidence; and the more the counterexamples, the lower the confidence. Specifically, to assess the confidence of a justified prediction α, an agent obtains the set of cases in its individual case base that are subsumed by α.D. 
With them, an agent Ai obtains the Y (aye) and N (nay) values: • Y Ai α = |{c ∈ Ci| α.D c.P ∧ α.S = c.S}| is the number of cases in the agent``s case base subsumed by the justification α.D that belong to the solution class α.S, • NAi α = |{c ∈ Ci| α.D c.P ∧ α.S = c.S}| is the number of cases in the agent``s case base subsumed by justification α.D that do not belong to that solution class. The Sixth Intl.. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 977 + + + + + + - - + Figure 2: Confidence of arguments is evaluated by contrasting them against the case bases of the agents. An agent estimates the confidence of an argument as: CAi (α) = Y Ai α 1 + Y Ai α + NAi α i.e. the confidence on a justified prediction is the number of endorsing cases divided by the number of endorsing cases plus counterexamples. Notice that we add 1 to the denominator, this is to avoid giving excessively high confidences to justified predictions whose confidence has been computed using a small number of cases. Notice that this correction follows the same idea than the Laplace correction to estimate probabilities. Figure 2 illustrates the individual evaluation of the confidence of an argument, in particular, three endorsing cases and one counterexample are found in the case base of agents Ai, giving an estimated confidence of 0.6 Moreover, we can also define the joint confidence of an argument α as the confidence computed using the cases present in the case bases of all the agents in the group: C(α) = i Y Ai α 1 + i Y Ai α + NAi α Notice that, to collaboratively compute the joint confidence, the agents only have to make public the aye and nay values locally computed for a given argument. In our framework, agents use this joint confidence as the preference relation: a justified prediction α is preferred over another one β if C(α) ≥ C(β). 6. GENERATION OF ARGUMENTS In our framework, arguments are generated by the agents from cases, using learning methods. Any learning method able to provide a justified prediction can be used to generate arguments. For instance, decision trees and LID [2] are suitable learning methods. Specifically, in the experiments reported in this paper agents use LID. Thus, when an agent wants to generate an argument endorsing that a specific solution class is the correct solution for a problem P, it generates a justified prediction as explained in Section 3.1. For instance, Figure 3 shows a real justification generated by LID after solving a problem P in the domain of marine sponges identification. In particular, Figure 3 shows how when an agent receives a new problem to solve (in this case, a new sponge to determine its order), the agent uses LID to generate an argument (consisting on a justified prediction) using the cases in the case base of the agent. The justification shown in Figure 3 can be interpreted saying that the predicted solution is hadromerida because the smooth form of the megascleres of the spiculate skeleton of the sponge is of type tylostyle, the spikulate skeleton of the sponge has no uniform length, and there is no gemmules in the external features of the sponge. Thus, the argument generated will be α = A1, P, hadromerida, D1 . 6.1 Generation of Counterarguments As previously stated, agents may try to rebut arguments by generating counterargument or by finding counterexamples. Let us explain how they can be generated. An agent Ai wants to generate a counterargument β to rebut an argument α when α is in contradiction with the local case base of Ai. 
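Before turning to how counterarguments are generated, here is a compact sketch of the case-based preference relation just defined: the aye/nay counts, the individual confidence Y/(1 + Y + N), the joint confidence computed from the published (Y, N) pairs, and the counterexample test from the previous section. Cases are plain dicts here purely for illustration; the field names are assumptions of the sketch.

```python
from typing import Dict, List, Tuple

Description = Dict[str, str]          # same conventions as the earlier sketch

def subsumes(d: Description, p: Description) -> bool:
    return all(p.get(attr) == val for attr, val in d.items())

def aye_nay(case_base: List[dict], justification: Description, solution: str) -> Tuple[int, int]:
    """Y and N for one agent: endorsing cases vs. counterexamples of the argument."""
    covered = [c for c in case_base if subsumes(justification, c["problem"])]
    Y = sum(1 for c in covered if c["solution"] == solution)
    return Y, len(covered) - Y

def individual_confidence(case_base: List[dict], justification: Description, solution: str) -> float:
    Y, N = aye_nay(case_base, justification, solution)
    return Y / (1 + Y + N)            # Laplace-style correction in the denominator

def joint_confidence(per_agent_counts: List[Tuple[int, int]]) -> float:
    """per_agent_counts: the (Y_i, N_i) pairs, which are all the agents need to publish."""
    Y = sum(y for y, _ in per_agent_counts)
    N = sum(n for _, n in per_agent_counts)
    return Y / (1 + Y + N)

def is_counterexample(case: dict, justification: Description, solution: str) -> bool:
    """c is a counterexample of an argument iff the justification subsumes c and the solutions differ."""
    return subsumes(justification, case["problem"]) and case["solution"] != solution
```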
Moreover, while generating such counterargument β, Ai expects that β is preferred over α. For that purpose, we will present a specific policy to generate counterarguments based on the specificity criterion [10]. The specificity criterion is widely used in deductive frameworks for argumentation, and states that between two conflicting arguments, the most specific should be preferred since it is, in principle, more informed. Thus, counterarguments generated based on the specificity criterion are expected to be preferable (since they are more informed) to the arguments they try to rebut. However, there is no guarantee that such counterarguments will always win, since, as we have stated in Section 5, agents in our framework use a preference relation based on joint confidence. Moreover, one may think that it would be better that the agents generate counterarguments based on the joint confidence preference relation; however it is not obvious how to generate counterarguments based on joint confidence in an efficient way, since collaboration is required in order to evaluate joint confidence. Thus, the agent generating the counterargument should constantly communicate with the other agents at each step of the induction algorithm used to generate counterarguments (presently one of our future research lines). Thus, in our framework, when an agent wants to generate a counterargument β to an argument α, β has to be more specific than α (i.e. α.D < β.D). The generation of counterarguments using the specificity criterion imposes some restrictions over the learning method, although LID or ID3 can be easily adapted for this task. For instance, LID is an algorithm that generates a description starting from scratch and heuristically adding features to that term. Thus, at every step, the description is made more specific than in the previous step, and the number of cases that are subsumed by that description is reduced. When the description covers only (or almost only) cases of a single solution class LID terminates and predicts that solution class. To generate a counterargument to an argument α LID just has to use as starting point the description α.D instead of starting from scratch. In this way, the justification provided by LID will always be subsumed by α.D, and thus the resulting counterargument will be more specific than α. However, notice that LID may sometimes not be able to generate counterarguments, since LID may not be able to specialize the description α.D any further, or because the agent Ai has no case inCi that is subsumed by α.D. Figure 4 shows how an agent A2 that disagreed with the argument shown in Figure 3, generates a counterargument using LID. Moreover, Figure 4 shows the generation of a counterargument β1 2 for the argument α0 1 (in Figure 3) that is a specialization of α0 1. 978 The Sixth Intl.. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) Solution: hadromerida Justification: D1 Sponge Spikulate skeleton External features External features Gemmules: no Spikulate Skeleton Megascleres Uniform length: no Megascleres Smooth form: tylostyle Case Base of A1 LID New sponge P Figure 3: Example of a real justification generated by LID in the marine sponges data set. Specifically, in our experiments, when an agent Ai wants to rebut an argument α, uses the following policy: 1. Agent Ai uses LID to try to find a counterargument β more specific than α; if found, β is sent to the other agent as a counterargument of α. 2. 
If not found, then Ai searches for a counterexample c ∈ Ci of α. If a case c is found, then c is sent to the other agent as a counterexample of α. 3. If no counterexamples are found, then Ai cannot rebut the argument α. 7. ARGUMENTATION-BASED MULTI-AGENT LEARNING The interaction protocol of AMAL allows a group of agents A1, ..., An to deliberate about the correct solution of a problem P by means of an argumentation process. If the argumentation process arrives to a consensual solution, the joint deliberation ends; otherwise a weighted vote is used to determine the joint solution. Moreover, AMAL also allows the agents to learn from the counterexamples received from other agents. The AMAL protocol consists on a series of rounds. In the initial round, each agent states which is its individual prediction for P. Then, at each round an agent can try to rebut the prediction made by any of the other agents. The protocol uses a token passing mechanism so that agents (one at a time) can send counterarguments or counterexamples if they disagree with the prediction made by any other agent. Specifically, each agent is allowed to send one counterargument or counterexample each time he gets the token (notice that this restriction is just to simplify the protocol, and that it does not restrict the number of counterargument an agent can sent, since they can be delayed for subsequent rounds). When an agent receives a counterargument or counterexample, it informs the other agents if it accepts the counterargument (and changes its prediction) or not. Moreover, agents have also the opportunity to answer to counterarguments when they receive the token, by trying to generate a counterargument to the counterargument. When all the agents have had the token once, the token returns to the first agent, and so on. If at any time in the protocol, all the agents agree or during the last n rounds no agent has generated any counterargument, the protocol ends. Moreover, if at the end of the argumentation the agents have not reached an agreement, then a voting mechanism that uses the confidence of each prediction as weights is used to decide the final solution (Thus, AMAL follows the same mechanism as human committees, first each individual member of a committee exposes his arguments and discuses those of the other members (joint deliberation), and if no consensus is reached, then a voting mechanism is required). At each iteration, agents can use the following performatives: • assert(α): the justified prediction held during the next round will be α. An agent can only hold a single prediction at each round, thus is multiple asserts are send, only the last one is considered as the currently held prediction. • rebut(β, α): the agent has found a counterargument β to the prediction α. We will define Ht = αt 1, ..., αt n as the predictions that each of the n agents hold at a round t. Moreover, we will also define contradict(αt i) = {α ∈ Ht|α.S = αt i.S} as the set of contradicting arguments for an agent Ai in a round t, i.e. the set of arguments at round t that support a different solution class than αt i. The protocol is initiated because one of the agents receives a problem P to be solved. After that, the agent informs all the other agents about the problem P to solve, and the protocol starts: 1. At round t = 0, each one of the agents individually solves P, and builds a justified prediction using its own CBR method. Then, each agent Ai sends the performative assert(α0 i ) to the other agents. 
The protocol is initiated when one of the agents receives a problem P to be solved. After that, the agent informs all the other agents about the problem P to solve, and the protocol starts:
1. At round t = 0, each one of the agents individually solves P and builds a justified prediction using its own CBR method. Then, each agent Ai sends the performative assert(α_i^0) to the other agents. Thus, the agents know H^0 = (α_1^0, ..., α_n^0). Once all the predictions have been sent, the token is given to the first agent A1.
2. At each round t (other than 0), the agents check whether their arguments in H^t agree. If they do, the protocol moves to step 5. Moreover, if during the last n rounds no agent has sent any counterexample or counterargument, the protocol also moves to step 5. Otherwise, the agent Ai that owns the token tries to generate a counterargument for each of the opposing arguments in contradict(α_i^t) ⊆ H^t (see Section 6.1). Then, the counterargument β_i^t against the prediction α_j^t with the lowest confidence C(α_j^t) is selected (since α_j^t is the prediction most likely to be successfully rebutted).
• If β_i^t is a counterargument, then Ai locally compares α_i^t with β_i^t by assessing their confidence against its individual case base Ci (see Section 5; notice that Ai is comparing its previous argument with the counterargument that Ai itself has just generated and is about to send to Aj). If C_Ai(β_i^t) > C_Ai(α_i^t), then Ai considers that β_i^t is stronger than its previous argument, changes its argument to β_i^t by sending assert(β_i^t) to the rest of the agents (the intuition being that, since a counterargument is also an argument, Ai checks whether the new counterargument is a better argument than the one it was previously holding) and sends rebut(β_i^t, α_j^t) to Aj. Otherwise (i.e. C_Ai(β_i^t) ≤ C_Ai(α_i^t)), Ai will send only rebut(β_i^t, α_j^t) to Aj. In either situation the protocol moves to step 3.
• If β_i^t is a counterexample c, then Ai sends rebut(c, α_j^t) to Aj. The protocol moves to step 4.
• If Ai cannot generate any counterargument or counterexample, the token is sent to the next agent, a new round t + 1 starts, and the protocol moves to step 2.
Figure 4: Generation of a counterargument using LID in the sponges data set. From the case base of A2, LID produces the solution astrophorida with justification D2, a specialization that additionally requires a growing of type massive.
3. The agent Aj that has received the counterargument β_i^t locally compares it against its own argument α_j^t by locally assessing their confidence. If C_Aj(β_i^t) > C_Aj(α_j^t), then Aj will accept the counterargument as stronger than its own argument, and it will send assert(β_i^t) to the other agents. Otherwise (i.e. C_Aj(β_i^t) ≤ C_Aj(α_j^t)), Aj will not accept the counterargument, and will inform the other agents accordingly. In either situation a new round t + 1 starts, Ai sends the token to the next agent, and the protocol moves back to step 2.
4. The agent Aj that has received the counterexample c retains it in its case base and generates a new argument α_j^{t+1} that takes c into account, and informs the rest of the agents by sending assert(α_j^{t+1}) to all of them. Then, Ai sends the token to the next agent, a new round t + 1 starts, and the protocol moves back to step 2.
5. The protocol ends yielding a joint prediction, as follows: if the arguments in H^t agree, then their prediction is the joint prediction; otherwise a voting mechanism is used to decide the joint prediction. The voting mechanism uses the joint confidence measure as the voting weights, as follows:
S = argmax_{Sk ∈ S} Σ_{αi ∈ H^t : αi.S = Sk} C(αi)
Moreover, in order to avoid infinite iterations, if an agent sends the same argument or counterargument twice to the same agent, the message is not considered.
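The weighted vote of step 5 amounts to the following computation (a minimal sketch with names of our own; the joint confidence values are assumed to have been computed collaboratively from the aye and nay counts as described in Section 5):

    from collections import defaultdict

    def weighted_vote(held_predictions):
        # held_predictions: one (solution_class, joint_confidence) pair per argument
        # in H^t. Returns the class maximizing the summed joint confidence.
        scores = defaultdict(float)
        for solution_class, confidence in held_predictions:
            scores[solution_class] += confidence
        return max(scores, key=scores.get)

    # Example with three held predictions:
    H_t = [("hadromerida", 0.8), ("astrophorida", 0.7), ("hadromerida", 0.5)]
    print(weighted_vote(H_t))   # hadromerida, since 0.8 + 0.5 > 0.7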
8. EXEMPLIFICATION
Let us consider a system composed of three agents A1, A2 and A3. One of the agents, A1, receives a problem P to solve and decides to use AMAL to solve it. For that reason, it invites A2 and A3 to take part in the argumentation process. They accept the invitation, and the argumentation protocol starts. Initially, each agent generates its individual prediction for P and broadcasts it to the other agents. Thus, all of them can compute H^0 = (α_1^0, α_2^0, α_3^0). In particular, in this example:
• α_1^0 = (A1, P, hadromerida, D1)
• α_2^0 = (A2, P, astrophorida, D2)
• α_3^0 = (A3, P, axinellida, D3)
A1 starts owning the token and tries to generate counterarguments for α_2^0 and α_3^0, but does not succeed; however, it has one counterexample c13 for α_3^0. Thus, A1 sends the message rebut(c13, α_3^0) to A3. A3 incorporates c13 into its case base and tries to solve the problem P again, now taking c13 into consideration. A3 comes up with the justified prediction α_3^1 = (A3, P, hadromerida, D4), and broadcasts it to the rest of the agents with the message assert(α_3^1). Thus, all of them know the new H^1 = (α_1^0, α_2^0, α_3^1). Round 1 starts and A2 gets the token. A2 tries to generate counterarguments for α_1^0 and α_3^1 and only succeeds in generating a counterargument β_2^1 = (A2, P, astrophorida, D5) against α_3^1. The counterargument is sent to A3 with the message rebut(β_2^1, α_3^1). Agent A3 receives the counterargument and assesses its local confidence. The result is that the local confidence of the counterargument β_2^1 is lower than the local confidence of α_3^1. Therefore, A3 does not accept the counterargument, and thus H^2 = (α_1^0, α_2^0, α_3^1). Round 2 starts and A3 gets the token. A3 generates a counterargument β_3^2 = (A3, P, hadromerida, D6) for α_2^0 and sends it to A2 with the message rebut(β_3^2, α_2^0). Agent A2 receives the counterargument and assesses its local confidence. The result is that the local confidence of the counterargument β_3^2 is higher than the local confidence of α_2^0. Therefore, A2 accepts the counterargument and informs the rest of the agents with the message assert(β_3^2). After that, H^3 = (α_1^0, β_3^2, α_3^1). At round 3, since all the agents agree (all the justified predictions in H^3 predict hadromerida as the solution class), the protocol ends, and A1 (the agent that received the problem) considers hadromerida to be the joint solution for the problem P.
9. EXPERIMENTAL EVALUATION
Figure 5: Individual and joint accuracy for 2 to 5 agents (sponge and soybean data sets; bars for Individual, Voting, and AMAL).
In this section we empirically evaluate the AMAL argumentation framework. We have run experiments on two different data sets: soybean (from the UCI machine learning repository) and sponge (a relational data set). The soybean data set has 307 examples and 19 solution classes, while the sponge data set has 280 examples and 3 solution classes. In an experimental run, the data set is divided into two sets: the training set and the test set. The training set examples are distributed among 5 different agents without replication, i.e. there is no example shared by two agents.
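The partitioning of the training data used in these runs can be sketched in a few lines (an illustration under our own naming and seeding; it only reproduces the no-replication constraint, not the full cross-validation harness):

    import random

    def split_among_agents(training_cases, n_agents=5, seed=0):
        # Deal the training cases to n_agents individual case bases without
        # replication, so that no case is shared by two agents.
        cases = list(training_cases)
        random.Random(seed).shuffle(cases)
        return [cases[i::n_agents] for i in range(n_agents)]

    # Example: 100 training cases dealt to 5 agents (20 cases, i.e. 20%, each).
    case_bases = split_among_agents(range(100), n_agents=5)
    print([len(cb) for cb in case_bases])   # [20, 20, 20, 20, 20]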
In the testing stage, problems in the test set arrive randomly at one of the agents, whose goal is to predict the correct solution. The experiments are designed to test two hypotheses: (H1) that argumentation is a useful framework for joint deliberation and can improve over other typical methods such as voting; and (H2) that learning from communication improves the individual performance of a learning agent participating in an argumentation process. Moreover, we also expect that the improvement achieved from argumentation will increase as the number of agents participating in the argumentation increases (since more information will be taken into account). Concerning H1 (argumentation is a useful framework for joint deliberation), we ran 4 experiments, using 2, 3, 4, and 5 agents respectively (in all experiments each agent has 20% of the training data, since the training data is always distributed among 5 agents). Figure 5 shows the result of those experiments on the sponge and soybean data sets. Classification accuracy is plotted on the vertical axis, and the number of agents that took part in the argumentation process is shown on the horizontal axis. For each number of agents, three bars are shown: Individual, Voting, and AMAL. The Individual bar shows the average accuracy of the individual agents' predictions; the Voting bar shows the average accuracy of the joint prediction achieved by voting but without any argumentation; and finally the AMAL bar shows the average accuracy of the joint prediction using argumentation. The results shown are the average of five 10-fold cross-validation runs. Figure 5 shows that collaboration (Voting and AMAL) outperforms individual problem solving. Moreover, as we expected, the accuracy improves as more agents collaborate, since more information is taken into account. We can also see that AMAL always outperforms standard Voting, showing that joint decisions are based on the better information provided by the argumentation process. For instance, the joint accuracy for 2 agents in the sponge data set is 87.57% for AMAL and 86.57% for Voting (while individual accuracy is just 80.07%). Moreover, the improvement achieved by AMAL over Voting is even larger in the soybean data set. The reason is that the soybean data set is more difficult (in the sense that agents need more data to produce good predictions). These experimental results show that AMAL effectively exploits the opportunity for improvement: the accuracy is higher only because more agents have changed their opinion during argumentation (otherwise they would achieve the same result as Voting). Concerning H2 (learning from communication in argumentation processes improves individual prediction), we ran the following experiment: initially, we distributed 25% of the training set among the five agents; after that, the rest of the cases in the training set are sent to the agents one by one; when an agent receives a new training case, it has several options: it can discard it, it can retain it, or it can use it to engage in an argumentation process. Figure 6 shows the result of that experiment for the two data sets.
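Before turning to Figure 6, the per-case decision each agent makes in this second experiment can be sketched as follows (hypothetical names; run_argumentation is a placeholder for the AMAL protocol of Section 7 and here simply returns no counterexamples so that the sketch runs standalone):

    def run_argumentation(agent, case, committee):
        # Placeholder for an AMAL argumentation process about `case` with the agents
        # in `committee`; it would return the counterexamples received by `agent`.
        return []

    def handle_incoming_case(agent, new_case, committee, setting="LFC"):
        # One step of the H2 experiment. NL: ignore the case; L: retain it;
        # LFC: retain it and also retain counterexamples received while arguing.
        if setting == "NL":
            return
        agent["case_base"].append(new_case)
        if setting == "LFC":
            agent["case_base"].extend(run_argumentation(agent, new_case, committee))

    agent = {"case_base": []}
    for case in ["c1", "c2", "c3"]:
        handle_incoming_case(agent, case, committee=[], setting="LFC")
    print(len(agent["case_base"]))   # 3 in this stub (plus counterexamples in a real run)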
Figure 6: Learning from communication resulting from argumentation in a system composed of 5 agents (individual accuracy of the LFC, L, and NL settings as the percentage of training cases grows from 25% to 100%, for the sponge and soybean data sets).
Figure 6 contains three plots: NL (not learning) shows the accuracy of an agent with no learning at all; L (learning) shows the evolution of the individual classification accuracy when agents learn by retaining the training cases they individually receive (notice that when all the training cases have been retained, at 100%, the accuracy should be equal to that of Figure 5 for individual agents); and finally LFC (learning from communication) shows the evolution of the individual classification accuracy of learning agents that also learn by retaining the counterexamples received during argumentation (i.e. they learn both from training examples and from counterexamples). Figure 6 shows that if an agent Ai also learns from communication, Ai can significantly improve its individual performance with just a small number of additional cases (those selected as relevant counterexamples for Ai during argumentation). For instance, in the soybean data set, individual agents achieved an accuracy of 70.62% when they also learn from communication versus an accuracy of 59.93% when they only learn from their individual experience. The number of cases learnt from communication depends on the properties of the data set: in the sponge data set, agents retained only very few additional cases and significantly improved individual accuracy, retaining 59.96 cases on average (compared to the 50.4 cases retained if they do not learn from communication). In the soybean data set, more counterexamples need to be learnt to significantly improve individual accuracy: agents retain 87.16 cases on average (compared to 55.27 cases retained if they do not learn from communication). Finally, the fact that both data sets show a significant improvement points out the adaptive nature of the argumentation-based approach to learning from communication: the useful cases are selected as counterexamples (and no more than those needed), and they have the intended effect.
10. RELATED WORK
Concerning CBR in a multi-agent setting, the first research was on negotiated case retrieval [11] among groups of agents. Our work on multi-agent case-based learning started in 1999 [6]; later McGinty and Smyth [7] presented a multi-agent collaborative CBR approach (CCBR) for planning. Finally, another interesting approach is multi-case-base reasoning (MCBR) [5], which deals with distributed systems where there are several case bases available for the same task and addresses the problems of cross-case-base adaptation. The main difference is that our MAC approach is a way to distribute the Reuse process of CBR (using a voting system) while Retrieve is performed individually by each agent; the other multi-agent CBR approaches, however, focus on distributing the Retrieve process. Research on MAS argumentation focuses on several issues like a) logics, protocols and languages that support argumentation, b) argument selection and c) argument interpretation. Approaches for logics and languages that support argumentation include defeasible logic [4] and BDI models [13]. Although argument selection is a key aspect of automated argumentation (see [12] and [13]), most research has been focused on preference relations among arguments.
In our framework we have addressed both argument selection and preference relations using a case-based approach.
11. CONCLUSIONS AND FUTURE WORK
In this paper we have presented an argumentation-based framework for multi-agent learning. Specifically, we have presented AMAL, a framework that allows a group of learning agents to argue about the solution of a given problem, and we have shown how the learning capabilities can be used to generate arguments and counterarguments. The experimental evaluation shows that the increased amount of information provided to the agents by the argumentation process increases their predictive accuracy, especially when an adequate number of agents take part in the argumentation. The main contributions of this work are: a) an argumentation framework for learning agents; b) a case-based preference relation over arguments, based on computing an overall confidence estimation of arguments; c) a case-based policy to generate counterarguments and select counterexamples; and d) an argumentation-based approach for learning from communication. In the experiments presented here a learning agent retains all counterexamples submitted by the other agents; however, this is a very simple case retention policy, and we would like to experiment with more informed policies, with the goal that individual learning agents could significantly improve using only a small set of cases proposed by other agents. Finally, our approach is focused on lazy learning, and future work aims at incorporating eager inductive learning into the argumentative framework for learning from communication.
12. REFERENCES
[1] Agnar Aamodt and Enric Plaza. Case-based reasoning: Foundational issues, methodological variations, and system approaches. Artificial Intelligence Communications, 7(1):39-59, 1994.
[2] E. Armengol and E. Plaza. Lazy induction of descriptions for relational case-based learning. In ECML 2001, pages 13-24, 2001.
[3] Gerhard Brewka. Dynamic argument systems: A formal model of argumentation processes based on situation calculus. Journal of Logic and Computation, 11(2):257-282, 2001.
[4] Carlos I. Chesñevar and Guillermo R. Simari. Formalizing defeasible argumentation using labelled deductive systems. Journal of Computer Science & Technology, 1(4):18-33, 2000.
[5] D. Leake and R. Sooriamurthi. Automatically selecting strategies for multi-case-base reasoning. In S. Craw and A. Preece, editors, ECCBR 2002, pages 204-219, Berlin, 2002. Springer Verlag.
[6] Francisco J. Martín, Enric Plaza, and Josep-Lluis Arcos. Knowledge and experience reuse through communications among competent (peer) agents. International Journal of Software Engineering and Knowledge Engineering, 9(3):319-341, 1999.
[7] Lorraine McGinty and Barry Smyth. Collaborative case-based reasoning: Applications in personalized route planning. In I. Watson and Q. Yang, editors, ICCBR, number 2080 in LNAI, pages 362-376. Springer-Verlag, 2001.
[8] Santi Ontañón and Enric Plaza. Justification-based multiagent learning. In ICML 2003, pages 576-583. Morgan Kaufmann, 2003.
[9] Enric Plaza, Eva Armengol, and Santiago Ontañón. The explanatory power of symbolic similarity in case-based reasoning. Artificial Intelligence Review, 24(2):145-161, 2005.
[10] David Poole. On the comparison of theories: Preferring the most specific explanation. In IJCAI-85, pages 144-147, 1985.
[11] M. V. Nagendra Prasad, Victor R. Lesser, and Susan Lander. Retrieval and reasoning in distributed case bases. Technical report, UMass Computer Science Department, 1995.
[12] S. Kraus, K. Sycara, and A. Evenchik. Reaching agreements through argumentation: a logical model and implementation. Artificial Intelligence Journal, 104:1-69, 1998.
[13] S. Parsons, C. Sierra, and N. R. Jennings. Agents that reason and negotiate by arguing. Journal of Logic and Computation, 8:261-292, 1998.
[14] Bruce A. Wooley. Explanation component of software systems. ACM CrossRoads, 5.1, 1998.
Learning and Joint Deliberation through Argumentation in Multi-Agent Systems ABSTRACT In this paper we will present an argumentation framework for learning agents (AMAL) designed for two purposes: (1) for joint deliberation, and (2) for learning from communication. The AMAL framework is completely based on learning from examples: the argument preference relation, the argument generation policy, and the counterargument generation policy are case-based techniques. For join deliberation, learning agents share their experience by forming a committee to decide upon some joint decision. We experimentally show that the argumentation among committees of agents improves both the individual and joint performance. For learning from communication, an agent engages into arguing with other agents in order to contrast its individual hypotheses and receive counterexamples; the argumentation process improves their learning scope and individual performance. 1. INTRODUCTION Argumentation frameworks for multi-agent systems can be used for different purposes like joint deliberation, persuasion, negotiation, and conflict resolution. In this paper we will present an argumentation framework for learning agents, and show that it can be used for two purposes: (1) joint deliberation, and (2) learning from communication. Argumentation-based joint deliberation involves discussion over the outcome of a particular situation or the appropriate course of action for a particular situation. Learning agents are capable of learning from experience, in the sense that past examples (situations and their outcomes) are used to predict the outcome for the situation at hand. However, since individual agents experience may be limited, individual knowledge and prediction accuracy is also limited. Thus, learning agents that are capable of arguing their individual predictions with other agents may reach better prediction accuracy after such an argumentation process. Most existing argumentation frameworks for multi-agent systems are based on deductive logic or some other deductive logic formalism specifically designed to support argumentation, such as default logic [3]). Usually, an argument is seen as a logical statement, while a counterargument is an argument offered in opposition to another argument [4, 13]; agents use a preference relation to resolve conflicting arguments. However, logic-based argumentation frameworks assume agents with preloaded knowledge and preference relation. In this paper, we focus on an Argumentation-based Multi-Agent Learning (AMAL) framework where both knowledge and preference relation are learned from experience. Thus, we consider a scenario with agents that (1) work in the same domain using a shared ontology, (2) are capable of learning from examples, and (3) communicate using an argumentative framework. Having learning capabilities allows agents effectively use a specific form of counterargument, namely the use of counterexamples. Counterexamples offer the possibility of agents learning during the argumentation process. Moreover, learning agents allow techniques that use learnt experience to generate adequate arguments and counterarguments. Specifically, we will need to address two issues: (1) how to define a technique to generate arguments and counterarguments from examples, and (2) how to define a preference relation over two conflicting arguments that have been induced from examples. This paper presents a case-based approach to address both issues. 
The agents use case-based reasoning (CBR) [1] to learn from past cases (where a case is a situation and its outcome) in order to predict the outcome of a new situation. We propose an argumentation protocol inside the AMAL framework at supports agents in reaching a joint prediction over a specific situation or problem--moreover, the reasoning needed to support the argumentation process will also be based on cases. In particular, we present two case-based measures, one for generating the arguments and counterarguments adequate to a particular situation and another for determining preference relation among arguments. Finally, we evaluate (1) if argumentation between learning agents can produce a joint prediction that improves over individual learning performance and (2) if learning from the counterexamples conveyed during the argumentation process increases the individual performance with precisely those cases being used while arguing among them. The paper is structured as follows. Section 2 discusses the relation among argumentation, collaboration and learning. Then Sec tion 3 introduces our multi-agent CBR (MAC) framework and the notion of justified prediction. After that, Section 4 formally defines our argumentation framework. Sections 5 and 6 present our case-based preference relation and argument generation policies respectively. Later, Section 7 presents the argumentation protocol in our AMAL framework. After that, Section 8 presents an exemplification of the argumentation framework. Finally, Section 9 presents an empirical evaluation of our two main hypotheses. The paper closes with related work and conclusions sections. 2. ARGUMENTATION, COLLABORATION AND LEARNING Both learning and collaboration are ways in which an agent can improve individual performance. In fact, there is a clear parallelism between learning and collaboration in multi-agent systems, since both are ways in which agents can deal with their shortcomings. Let us show which are the main motivations that an agent can have to learn or to collaborate. • Motivations to learn:--Increase quality of prediction,--Increase efficiency,--Increase the range of solvable problems. • Motivations to collaborate:--Increase quality of prediction,--Increase efficiency,--Increase the range of solvable problems,--Increase the range of accessible resources. Looking at the above lists of motivation, we can easily see that learning and collaboration are very related in multi-agent systems. In fact, with the exception of the last item in the motivations to collaborate list, they are two extremes of a continuum of strategies to improve performance. An agent may choose to increase performance by learning, by collaborating, or by finding an intermediate point that combines learning and collaboration in order to improve performance. In this paper we will propose AMAL, an argumentation framework for learning agents, and will also also show how AMAL can be used both for learning from communication and for solving problems in a collaborative way: • Agents can solve problems in a collaborative way via engaging an argumentation process about the prediction for the situation at hand. Using this collaboration, the prediction can be done in a more informed way, since the information known by several agents has been taken into account. • Agents can also learn from communication with other agents by engaging an argumentation process. 
Agents that engage in such argumentation processes can learn from the argu ments and counterexamples received from other agents, and use this information for predicting the outcomes of future situations. In the rest of this paper we will propose an argumentation framework and show how it can be used both for learning and for solving problems in a collaborative way. 3. MULTI-AGENT CBR SYSTEMS A Multi-Agent Case Based Reasoning System (MAC) M = {(Al, Cl),..., (An, Cn)} is a multi-agent system composed of A = {Ai,..., An}, a set of CBR agents, where each agent Ai ∈ A possesses an individual case base Ci. Each individual agent Ai in a MAC is completely autonomous and each agent Ai has access only to its individual and private case base Ci. A case base Ci = {cl,..., cm} is a collection of cases. Agents in a MAC system are able to individually solve problems, but they can also collaborate with other agents to solve problems. In this framework, we will restrict ourselves to analytical tasks, i.e. tasks like classification, where the solution of a problem is achieved by selecting a solution class from an enumerated set of solution classes. In the following we will note the set of all the solution classes by S = {Sl,..., SK}. Therefore, a case c = (P, S) is a tuple containing a case description P and a solution class S ∈ S. In the following, we will use the terms problem and case description indistinctly. Moreover, we will use the dot notation to refer to elements inside a tuple; e.g., to refer to the solution class of a case c, we will write c.S. Therefore, we say a group of agents perform joint deliberation, when they collaborate to find a joint solution by means of an argumentation process. However, in order to do so, an agent has to be able to justify its prediction to the other agents (i.e. generate an argument for its predicted solution that can be examined and critiqued by the other agents). The next section addresses this issue. 3.1 Justified Predictions Both expert systems and CBR systems may have an explanation component [14] in charge of justifying why the system has provided a specific answer to the user. The line of reasoning of the system can then be examined by a human expert, thus increasing the reliability of the system. Most of the existing work on explanation generation focuses on generating explanations to be provided to the user. However, in our approach we use explanations (or justifications) as a tool for improving communication and coordination among agents. We are interested in justifications since they can be used as arguments. For that purpose, we will benefit from the ability of some machine learning methods to provide justifications. A justification built by a CBR method after determining that the solution of a particular problem P was Sk is a description that contains the relevant information from the problem P that the CBR method has considered to predict Sk as the solution of P. In particular, CBR methods work by retrieving similar cases to the problem at hand, and then reusing their solutions for the current problem, expecting that since the problem and the cases are similar, the solutions will also be similar. Thus, if a CBR method has retrieved a set of cases Cl,..., Cn to solve a particular problem P the justification built will contain the relevant information from the problem P that made the CBR system retrieve that particular set of cases, i.e. it will contain the relevant information that P and Cl,..., Cn have in common. 
For example, Figure 1 shows a justification build by a CBR system for a toy problem (in the following sections we will show justifications for real problems). In the figure, a problem has two attributes (Traffic_light, and Cars_passing), the retrieval mechanism of the CBR system notices that by considering only the attribute Traffic_light, it can retrieve two cases that predict the same solution: wait. Thus, since only this attribute has been used, it is the only one appearing in the justification. The values of the rest of attributes are irrelevant, since whatever their value the solution class would have been the same. 976 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) Figure 1: An example of justification generation in a CBR system. Notice that, since the only relevant feature to decide is Traffic_light (the only one used to retrieve cases), it is the only one appearing in the justification. In general, the meaning of a justification is that all (or most of) the cases in the case base of an agent that satisfy the justification (i.e. all the cases that are subsumed by the justification) belong to the predicted solution class. In the rest of the paper, we will use C to denote the subsumption relation. In our work, we use LID [2], a CBR method capable of building symbolic justifications such as the one exemplified in Figure 1. When an agent provides a justification for a prediction, the agent generates a justified prediction: Justifications can have many uses for CBR systems [8, 9]. In this paper, we are going to use justifications as arguments, in order to allow learning agents to engage in argumentation processes. 4. ARGUMENTS AND COUNTERARGUMENTS For our purposes an argument α generated by an agent A is composed of a statement S and some evidence D supporting S as correct. In the remainder of this section we will see how this general definition of argument can be instantiated in specific kind of arguments that the agents can generate. In the context of MAC systems, agents argue about predictions for new problems and can provide two kinds of information: a) specific cases (P, S), and b) justified predictions: (A, P, S, D). Using this information, we can define three types of arguments: justified predictions, counterarguments, and counterexamples. A justified prediction α is generated by an agent Ai to argue that Ai believes that the correct solution for a given problem P is α.S, and the evidence provided is the justification α.D. In the example depicted in Figure 1, an agent Ai may generate the argument α = (Ai, P, Wait, (Traffic-light = red)), meaning that the agent Ai believes that the correct solution for P is Wait because the attribute Traffic-light equals red. A counterargument β is an argument offered in opposition to another argument α. In our framework, a counterargument consists of a justified prediction (Aj, P, S', D') generated by an agent Aj with the intention to rebut an argument α generated by another agent Ai, that endorses a solution class S' different from that of α.S for the problem at hand and justifies this with a justification D'. In the example in Figure 1, if an agent generates the argument α = (Ai, P, Walk, (Cars-passing = no)), an agent that thinks that the correct solution is Wait might answer with the counterargument β = (Aj, P, Wait, (Cars-passing = no n Traffic-light = red)), meaning that, although there are no cars passing, the traffic light is red, and the street cannot be crossed. 
A counterexample c is a case that contradicts an argument α. Thus a counterexample is also a counterargument, one that states that a specific argument α is not always true, and the evidence provided is the case c. Specifically, for a case c to be a counterexample of an argument α, the following conditions have to be met: α.D C c and α.S = ~ c.S, i.e. the case must satisfy the justification α.D and the solution of c must be different than the predicted by α. By exchanging arguments and counterarguments (including counterexamples), agents can argue about the correct solution of a given problem, i.e. they can engage a joint deliberation process. However, in order to do so, they need a specific interaction protocol, a preference relation between contradicting arguments, and a decision policy to generate counterarguments (including counterexamples). In the following sections we will present these elements. 5. PREFERENCE RELATION A specific argument provided by an agent might not be consistent with the information known to other agents (or even to some of the information known by the agent that has generated the justification due to noise in training data). For that reason, we are going to define a preference relation over contradicting justified predictions based on cases. Basically, we will define a confidence measure for each justified prediction (that takes into account the cases owned by each agent), and the justified prediction with the highest confidence will be the preferred one. The idea behind case-based confidence is to count how many of the cases in an individual case base endorse a justified prediction, and how many of them are counterexamples of it. The more the endorsing cases, the higher the confidence; and the more the counterexamples, the lower the confidence. Specifically, to assess the confidence of a justified prediction α, an agent obtains the set of cases in its individual case base that are subsumed by α.D. With them, an agent Ai obtains the Y (aye) and N (nay) values: • Y Ai α = I1c E CiI α.D C c.P n α.S = c.S} I is the number of cases in the agent's case base subsumed by the justification α.D that belong to the solution class α.S, • NAi α = I1c E CiI α.D C c.P n α.S = ~ c.S} I is the number of cases in the agent's case base subsumed by justification α.D that do not belong to that solution class. The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 977 Figure 2: Confidence of arguments is evaluated by contrasting them against the case bases of the agents. An agent estimates the confidence of an argument as: 1 + Yα Ai + NAi α i.e. the confidence on a justified prediction is the number of endorsing cases divided by the number of endorsing cases plus counterexamples. Notice that we add 1 to the denominator, this is to avoid giving excessively high confidences to justified predictions whose confidence has been computed using a small number of cases. Notice that this correction follows the same idea than the Laplace correction to estimate probabilities. 
Figure 2 illustrates the individual evaluation of the confidence of an argument, in particular, three endorsing cases and one counterexample are found in the case base of agents Ai, giving an estimated confidence of 0.6 Moreover, we can also define the joint confidence of an argument α as the confidence computed using the cases present in the case bases of all the agents in the group: Notice that, to collaboratively compute the joint confidence, the agents only have to make public the aye and nay values locally computed for a given argument. In our framework, agents use this joint confidence as the preference relation: a justified prediction α is preferred over another one,3 if C (α) ≥ C (,3). 6. GENERATION OF ARGUMENTS In our framework, arguments are generated by the agents from cases, using learning methods. Any learning method able to provide a justified prediction can be used to generate arguments. For instance, decision trees and LID [2] are suitable learning methods. Specifically, in the experiments reported in this paper agents use LID. Thus, when an agent wants to generate an argument endorsing that a specific solution class is the correct solution for a problem P, it generates a justified prediction as explained in Section 3.1. For instance, Figure 3 shows a real justification generated by LID after solving a problem P in the domain of marine sponges identification. In particular, Figure 3 shows how when an agent receives a new problem to solve (in this case, a new sponge to determine its order), the agent uses LID to generate an argument (consisting on a justified prediction) using the cases in the case base of the agent. The justification shown in Figure 3 can be interpreted saying that "the predicted solution is hadromerida because the smooth form of the megascleres of the spiculate skeleton of the sponge is of type tylostyle, the spikulate skeleton of the sponge has no uniform length, and there is no gemmules in the external features of the sponge". Thus, the argument generated will be α = (Ar, P, hadromerida, Dr). 6.1 Generation of Counterarguments As previously stated, agents may try to rebut arguments by generating counterargument or by finding counterexamples. Let us explain how they can be generated. An agent Ai wants to generate a counterargument,3 to rebut an argument α when α is in contradiction with the local case base of Ai. Moreover, while generating such counterargument,3, Ai expects that,3 is preferred over α. For that purpose, we will present a specific policy to generate counterarguments based on the specificity criterion [10]. The specificity criterion is widely used in deductive frameworks for argumentation, and states that between two conflicting arguments, the most specific should be preferred since it is, in principle, more informed. Thus, counterarguments generated based on the specificity criterion are expected to be preferable (since they are more informed) to the arguments they try to rebut. However, there is no guarantee that such counterarguments will always win, since, as we have stated in Section 5, agents in our framework use a preference relation based on joint confidence. Moreover, one may think that it would be better that the agents generate counterarguments based on the joint confidence preference relation; however it is not obvious how to generate counterarguments based on joint confidence in an efficient way, since collaboration is required in order to evaluate joint confidence. 
Thus, the agent generating the counterargument should constantly communicate with the other agents at each step of the induction algorithm used to generate counterarguments (presently one of our future research lines). Thus, in our framework, when an agent wants to generate a counterargument,3 to an argument α,,3 has to be more specific than α (i.e. α.D ❁,3. D). The generation of counterarguments using the specificity criterion imposes some restrictions over the learning method, although LID or ID3 can be easily adapted for this task. For instance, LID is an algorithm that generates a description starting from scratch and heuristically adding features to that term. Thus, at every step, the description is made more specific than in the previous step, and the number of cases that are subsumed by that description is reduced. When the description covers only (or almost only) cases of a single solution class LID terminates and predicts that solution class. To generate a counterargument to an argument α LID just has to use as starting point the description α.D instead of starting from scratch. In this way, the justification provided by LID will always be subsumed by α.D, and thus the resulting counterargument will be more specific than α. However, notice that LID may sometimes not be able to generate counterarguments, since LID may not be able to specialize the description α.D any further, or because the agent Ai has no case inCi that is subsumed by α.D. Figure 4 shows how an agent A2 that disagreed with the argument shown in Figure 3, generates a counterargument using LID. Moreover, Figure 4 shows the generation of a counterargument,3 r2 for the argument α0r (in Figure 3) that is a specialization of α0r. 978 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) Figure 3: Example of a real justification generated by LID in the marine sponges data set. Specifically, in our experiments, when an agent Ai wants to rebut an argument α, uses the following policy: 1. Agent Ai uses LID to try to find a counterargument β more specific than α; if found, β is sent to the other agent as a counterargument of α. 2. If not found, then Ai searches for a counterexample c E Ci of α. If a case c is found, then c is sent to the other agent as a counterexample of α. 3. If no counterexamples are found, then Ai cannot rebut the argument α. 7. ARGUMENTATION-BASED MULTI-AGENT LEARNING The interaction protocol of AMAL allows a group of agents A1,..., An to deliberate about the correct solution of a problem P by means of an argumentation process. If the argumentation process arrives to a consensual solution, the joint deliberation ends; otherwise a weighted vote is used to determine the joint solution. Moreover, AMAL also allows the agents to learn from the counterexamples received from other agents. The AMAL protocol consists on a series of rounds. In the initial round, each agent states which is its individual prediction for P. Then, at each round an agent can try to rebut the prediction made by any of the other agents. The protocol uses a token passing mechanism so that agents (one at a time) can send counterarguments or counterexamples if they disagree with the prediction made by any other agent. 
Specifically, each agent is allowed to send one counterargument or counterexample each time he gets the token (notice that this restriction is just to simplify the protocol, and that it does not restrict the number of counterargument an agent can sent, since they can be delayed for subsequent rounds). When an agent receives a counterargument or counterexample, it informs the other agents if it accepts the counterargument (and changes its prediction) or not. Moreover, agents have also the opportunity to answer to counterarguments when they receive the token, by trying to generate a counterargument to the counterargument. When all the agents have had the token once, the token returns to the first agent, and so on. If at any time in the protocol, all the agents agree or during the last n rounds no agent has generated any counterargument, the protocol ends. Moreover, if at the end of the argumentation the agents have not reached an agreement, then a voting mechanism that uses the confidence of each prediction as weights is used to decide the final solution (Thus, AMAL follows the same mechanism as human committees, first each individual member of a committee exposes his arguments and discuses those of the other members (joint deliberation), and if no consensus is reached, then a voting mechanism is required). At each iteration, agents can use the following performatives: • assert (α): the justified prediction held during the next round will be α. An agent can only hold a single prediction at each round, thus is multiple asserts are send, only the last one is considered as the currently held prediction. • rebut (β, α): the agent has found a counterargument β to the prediction α. We will define Ht = (αt1,..., αtn) as the predictions that each of the n agents hold at a round t. Moreover, we will also define contradict (αti) = {α E Ht | α.S = ~ αti.S} as the set of contradicting arguments for an agent Ai in a round t, i.e. the set of arguments at round t that support a different solution class than αti. The protocol is initiated because one of the agents receives a problem P to be solved. After that, the agent informs all the other agents about the problem P to solve, and the protocol starts: 1. At round t = 0, each one of the agents individually solves P, and builds a justified prediction using its own CBR method. Then, each agent Ai sends the performative assert (α0i) to the other agents. Thus, the agents know H0 = (α0i,..., α0n). Once all the predictions have been sent the token is given to the first agent A1. 2. At each round t (other than 0), the agents check whether their arguments in Ht agree. If they do, the protocol moves to step 5. Moreover, if during the last n rounds no agent has sent any counterexample or counterargument, the protocol also moves to step 5. Otherwise, the agent Ai owner of the token tries to generate a counterargument for each of the opposing arguments in contradict (αti) C Ht (see Section 6.1). Then, the counterargument βti against the prediction αtj with the lowest confidence C (αtj) is selected (since αtj is the prediction more likely to be successfully rebutted). • If βti is a counterargument, then, Ai locally compares αti with βti by assessing their confidence against its individual case base Ci (see Section 5) (notice that Ai is comparing its previous argument with the counterargument that Ai itself has just generated and that is about The Sixth Intl. . Joint Conf. 
on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 979 Figure 4: Generation of a counterargument using LID in the sponges data set. to send to Aj). If CAi (,3 ti)> CAi (αti), then Ai considers that,3 ti is stronger than its previous argument, changes its argument to,3 ti by sending assert (,3 ti) to the rest of the agents (the intuition behind this is that since a counterargument is also an argument, Ai checks if the newly counterargument is a better argument than the one he was previously holding) and rebut (,3 ti, αtj) to Aj. Otherwise (i.e. CAi (,3 ti) <CAi (αti)), Ai will send only rebut (,3 ti, αtj) to Aj. In any of the two situations the protocol moves to step 3. • If,3 ti is a counterexample c, then Ai sends rebut (c, αtj) to Aj. The protocol moves to step 4. • If Ai cannot generate any counterargument or counterexample, the token is sent to the next agent, a new round t + 1 starts, and the protocol moves to state 2. 3. The agent Aj that has received the counterargument,3 ti, locally compares it against its own argument, αtj, by locally assessing their confidence. If CAj (,3 ti)> CAj (αtj), then Aj will accept the counterargument as stronger than its own argument, and it will send assert (,3 ti) to the other agents. Otherwise (i.e. CAj (,3 ti) <CAj (αtj)), Aj will not accept the counterargument, and will inform the other agents accordingly. Any of the two situations start a new round t + 1, Ai sends the token to the next agent, and the protocol moves back to state 2. 4. The agent Aj that has received the counterexample c retains it into its case base and generates a new argument αt +1 j that takes into account c, and informs the rest of the agents by sending assert (αt +1 j) to all of them. Then, Ai sends the token to the next agent, a new round t + 1 starts, and the protocol moves back to step 2. 5. The protocol ends yielding a joint prediction, as follows: if the arguments in Ht agree then their prediction is the joint prediction, otherwise a voting mechanism is used to decide the joint prediction. The voting mechanism uses the joint confidence measure as the voting weights, as follows: Moreover, in order to avoid infinite iterations, if an agent sends twice the same argument or counterargument to the same agent, the message is not considered. 8. EXEMPLIFICATION Let us consider a system composed of three agents A1, A2 and A3. One of the agents, A1 receives a problem P to solve, and decides to use AMAL to solve it. For that reason, invites A2 and A3 to take part in the argumentation process. They accept the invitation, and the argumentation protocol starts. Initially, each agent generates its individual prediction for P, and broadcasts it to the other agents. Thus, all of them can compute H0 = (α0 1, α02, α03). In particular, in this example: • α01 = (A1, P, hadromerida, D1) • α02 = (A2, P, astrophorida, D2) • α03 = (A3, P, axinellida, D3) A1 starts owning the token and tries to generate counterarguments for α02 and α0 3, but does not succeed, however it has one counterexample c13 for α0 3. Thus, A1 sends the the message rebut (c13, α03) to A3. A3 incorporates c13 into its case base and tries to solve the problem P again, now taking c13 into consideration. A3 comes up with the justified prediction α13 = (A3, P, hadromerida, D4), and broadcasts it to the rest of the agents with the message assert (α13). Thus, all of them know the new H1 = (α0 1, α02, α13). Round 1 starts and A2 gets the token. 
A2 tries to generate counterarguments for α01 and α13 and only succeeds to generate a counterargument,312 = (A2, P, astrophorida, D5) against α1 3. The counterargument is sent to A3 with the message rebut (,312, α13). Agent A3 receives the counterargument and assesses its local confidence. The result is that the individual confidence of the counterargument,31 2 is lower than the local confidence of α1 3. Therefore, A3 does not accept the counterargument, and thus H2 = (α0 1, α02, α13). Round 2 starts and A3 gets the token. A3 generates a counterargument,32 3 = (A3, P, hadromerida, D6) for α02 and sends it to A2 with the message rebut (,32 3, α02). Agent A2 receives the counterargument and assesses its local confidence. The result is that the local confidence of the counterargument,323 is higher than the local confidence of α0 2. Therefore, A2 accepts the counterargument and informs the rest of the agents with the message assert (,323). After that, H3 = (α0 1,,32 3, α13). At Round 3, since all the agents agree (all the justified predictions in H3 predict hadromerida as the solution class) The protocol ends, and A1 (the agent that received the problem) considers hadromerida as the joint solution for the problem P. 9. EXPERIMENTAL EVALUATION Figure 5: Individual and joint accuracy for 2 to 5 agents. In this section we empirically evaluate the AMAL argumentation framework. We have made experiments in two different data sets: soybean (from the UCI machine learning repository) and sponge (a relational data set). The soybean data set has 307 examples and 19 solution classes, while the sponge data set has 280 examples and 3 solution classes. In an experimental run, the data set is divided in 2 sets: the training set and the test set. The training set examples are distributed among 5 different agents without replication, i.e. there is no example shared by two agents. In the testing stage, problems in the test set arrive randomly to one of the agents, and their goal is to predict the correct solution. The experiments are designed to test two hypotheses: (H1) that argumentation is a useful framework for joint deliberation and can improve over other typical methods such as voting; and (H2) that learning from communication improves the individual performance of a learning agent participating in an argumentation process. Moreover, we also expect that the improvement achieved from argumentation will increase as the number of agents participating in the argumentation increases (since more information will be taken into account). Concerning H1 (argumentation is a useful framework for joint deliberation), we ran 4 experiments, using 2, 3, 4, and 5 agents respectively (in all experiments each agent has a 20% of the training data, since the training is always distributed among 5 agents). Figure 5 shows the result of those experiments in the sponge and soybean data sets. Classification accuracy is plotted in the vertical axis, and in the horizontal axis the number of agents that took part in the argumentation processes is shown. For each number of agents, three bars are shown: individual, Voting, and AMAL. The individual bar shows the average accuracy of individual agents predictions; the voting bar shows the average accuracy of the joint prediction achieved by voting but without any argumentation; and finally the AMAL bar shows the average accuracy of the joint prediction using argumentation. The results shown are the average of 5 10-fold cross validation runs. 
Figure 5 shows that collaboration (voting and AMAL) outperforms individual problem solving. Moreover, as we expected, the accuracy improves as more agents collaborate, since more information is taken into account. We can also see that AMAL always outperforms standard voting, proving that joint decisions are based on better information as provided by the argumentation process. For instance, the joint accuracy for 2 agents in the sponge data set is of 87.57% for AMAL and 86.57% for voting (while individual accuracy is just 80.07%). Moreover, the improvement achieved by AMAL over Voting is even larger in the soybean data set. The reason is that the soybean data set is more "difficult' (in the sense that agents need more data to produce good predictions). These experimental results show that AMAL effectively exploits the opportunity for improvement: the accuracy is higher only because more agents have changed their opinion during argumentation (otherwise they would achieve the same result as Voting). Concerning H2 (learning from communication in argumentation processes improves individual prediction), we ran the following experiment: initially, we distributed a 25% of the training set among the five agents; after that, the rest of the cases in the training set is sent to the agents one by one; when an agent receives a new training case, it has several options: the agent can discard it, the agent can retain it, or the agent can use it for engaging an argumentation process. Figure 6 shows the result of that experiment for the two data sets. Figure 6 contains three plots, where NL (not learning) shows accuracy of an agent with no learning at all; L (learning), shows the evolution of the individual classification accuracy when agents learn by retaining the training cases they individually receive (notice that when all the training cases have been retained at 100%, the accuracy should be equal to that of Figure 5 for individual agents); and finally LFC (learning from communication) shows the evolution of the individual classification accuracy of learning agents that also learn by retaining those counterexamples received during argumentation (i.e. they learn both from training examples and counterexamples). Figure 6 shows that if an agent Ai learns also from communication, Ai can significantly improve its individual performance with just a small number of additional cases (those selected as relevant counterexamples for Ai during argumentation). For instance, in the soybean data set, individual agents have achieved an accuracy of 70.62% when they also learn from communication versus an accuracy of 59.93% when they only learn from their individual experience. The number of cases learnt from communication depends on the properties of the data set: in the sponges data set, agents have retained only very few additional cases, and significantly improved individual accuracy; namely they retain 59.96 cases in average (compared to the 50.4 cases retained if they do not learn from communication). In the soybean data set more counterexamples are learnt to significantly improve individual accuracy, namely they retain 87.16 cases in average (compared to 55.27 cases retained if they do not learn from communication). Finally, the fact that both data sets show a significant improvement points out the adaptive nature of the argumentation-based approach to learning from communication: the useful cases are selected as counterexamples (and no more than those needed), and they have the intended effect. 10. 
RELATED WORK Concerning CBR in a multi-agent setting, the first research was on "negotiated case retrieval' [11] among groups of agents. Our work on multi-agent case-based learning started in 1999 [6]; later Mc Ginty and Smyth [7] presented a multi-agent collaborative CBR approach (CCBR) for planning. Finally, another interesting approach is multi-case-base reasoning (MCBR) [5], that deals with The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 981 Figure 6: Learning from communication resulting from argumentation in a system composed of 5 agents. distributed systems where there are several case bases available for the same task and addresses the problems of cross-case base adaptation. The main difference is that our MAC approach is a way to distribute the Reuse process of CBR (using a voting system) while Retrieve is performed individually by each agent; the other multiagent CBR approaches, however, focus on distributing the Retrieve process. Research on MAS argumentation focus on several issues like a) logics, protocols and languages that support argumentation, b) argument selection and c) argument interpretation. Approaches for logic and languages that support argumentation include defeasible logic [4] and BDI models [13]. Although argument selection is a key aspect of automated argumentation (see [12] and [13]), most research has been focused on preference relations among arguments. In our framework we have addressed both argument selection and preference relations using a case-based approach. 11. CONCLUSIONS AND FUTURE WORK In this paper we have presented an argumentation-based framework for multi-agent learning. Specifically, we have presented AMAL, a framework that allows a group of learning agents to argue about the solution of a given problem and we have shown how the learning capabilities can be used to generate arguments and counterarguments. The experimental evaluation shows that the increased amount of information provided to the agents by the argumentation process increases their predictive accuracy, and specially when an adequate number of agents take part in the argumentation. The main contributions of this work are: a) an argumentation framework for learning agents; b) a case-based preference relation over arguments, based on computing an overall confidence estimation of arguments; c) a case-based policy to generate counterarguments and select counterexamples; and d) an argumentation-based approach for learning from communication. Finally, in the experiments presented here a learning agent would retain all counterexamples submitted by the other agent; however, this is a very simple case retention policy, and we will like to experiment with more informed policies--with the goal that individual learning agents could significantly improve using only a small set of cases proposed by other agents. Finally, our approach is focused on lazy learning, and future works aims at incorporating eager inductive learning inside the argumentative framework for learning from communication.
Learning and Joint Deliberation through Argumentation in Multi-Agent Systems ABSTRACT In this paper we will present an argumentation framework for learning agents (AMAL) designed for two purposes: (1) for joint deliberation, and (2) for learning from communication. The AMAL framework is completely based on learning from examples: the argument preference relation, the argument generation policy, and the counterargument generation policy are case-based techniques. For joint deliberation, learning agents share their experience by forming a committee to decide upon some joint decision. We experimentally show that the argumentation among committees of agents improves both the individual and joint performance. For learning from communication, an agent engages in arguing with other agents in order to contrast its individual hypotheses and receive counterexamples; the argumentation process improves its learning scope and individual performance. 1. INTRODUCTION Argumentation frameworks for multi-agent systems can be used for different purposes such as joint deliberation, persuasion, negotiation, and conflict resolution. In this paper we will present an argumentation framework for learning agents, and show that it can be used for two purposes: (1) joint deliberation, and (2) learning from communication. Argumentation-based joint deliberation involves discussion over the outcome of a particular situation or the appropriate course of action for a particular situation. Learning agents are capable of learning from experience, in the sense that past examples (situations and their outcomes) are used to predict the outcome for the situation at hand. However, since an individual agent's experience may be limited, individual knowledge and prediction accuracy is also limited. Thus, learning agents that are capable of arguing their individual predictions with other agents may reach better prediction accuracy after such an argumentation process. Most existing argumentation frameworks for multi-agent systems are based on deductive logic or some other deductive logic formalism specifically designed to support argumentation, such as default logic [3]. Usually, an argument is seen as a logical statement, while a counterargument is an argument offered in opposition to another argument [4, 13]; agents use a preference relation to resolve conflicting arguments. However, logic-based argumentation frameworks assume agents with preloaded knowledge and preference relations. In this paper, we focus on an Argumentation-based Multi-Agent Learning (AMAL) framework where both knowledge and preference relation are learned from experience. Thus, we consider a scenario with agents that (1) work in the same domain using a shared ontology, (2) are capable of learning from examples, and (3) communicate using an argumentative framework. Having learning capabilities allows agents to effectively use a specific form of counterargument, namely the use of counterexamples. Counterexamples offer the possibility of agents learning during the argumentation process. Moreover, learning agents allow techniques that use learnt experience to generate adequate arguments and counterarguments. Specifically, we will need to address two issues: (1) how to define a technique to generate arguments and counterarguments from examples, and (2) how to define a preference relation over two conflicting arguments that have been induced from examples. This paper presents a case-based approach to address both issues. 
The agents use case-based reasoning (CBR) [1] to learn from past cases (where a case is a situation and its outcome) in order to predict the outcome of a new situation. We propose an argumentation protocol inside the AMAL framework that supports agents in reaching a joint prediction over a specific situation or problem; moreover, the reasoning needed to support the argumentation process will also be based on cases. In particular, we present two case-based measures, one for generating the arguments and counterarguments adequate to a particular situation and another for determining a preference relation among arguments. Finally, we evaluate (1) whether argumentation between learning agents can produce a joint prediction that improves over individual learning performance and (2) whether learning from the counterexamples conveyed during the argumentation process increases the individual performance, with precisely those cases being used while arguing. The paper is structured as follows. Section 2 discusses the relation among argumentation, collaboration and learning. Then Section 3 introduces our multi-agent CBR (MAC) framework and the notion of justified prediction. After that, Section 4 formally defines our argumentation framework. Sections 5 and 6 present our case-based preference relation and argument generation policies respectively. Later, Section 7 presents the argumentation protocol in our AMAL framework. After that, Section 8 presents an exemplification of the argumentation framework. Finally, Section 9 presents an empirical evaluation of our two main hypotheses. The paper closes with related work and conclusions sections. 2. ARGUMENTATION, COLLABORATION AND LEARNING 3. MULTI-AGENT CBR SYSTEMS 3.1 Justified Predictions 4. ARGUMENTS AND COUNTERARGUMENTS 5. PREFERENCE RELATION 6. GENERATION OF ARGUMENTS 6.1 Generation of Counterarguments 7. ARGUMENTATION-BASED MULTI-AGENT LEARNING 8. EXEMPLIFICATION 9. EXPERIMENTAL EVALUATION 10. RELATED WORK Concerning CBR in a multi-agent setting, the first research was on "negotiated case retrieval" [11] among groups of agents. Our work on multi-agent case-based learning started in 1999 [6]; later McGinty and Smyth [7] presented a multi-agent collaborative CBR approach (CCBR) for planning. Finally, another interesting approach is multi-case-base reasoning (MCBR) [5], which deals with distributed systems where several case bases are available for the same task and which addresses the problems of cross-case-base adaptation. [Figure 6: Learning from communication resulting from argumentation in a system composed of 5 agents.] The main difference is that our MAC approach is a way to distribute the Reuse process of CBR (using a voting system) while Retrieve is performed individually by each agent; the other multi-agent CBR approaches, however, focus on distributing the Retrieve process. Research on MAS argumentation focuses on several issues such as a) logics, protocols and languages that support argumentation, b) argument selection and c) argument interpretation. 
Approaches to logics and languages that support argumentation include defeasible logic [4] and BDI models [13]. Although argument selection is a key aspect of automated argumentation (see [12] and [13]), most research has been focused on preference relations among arguments. In our framework we have addressed both argument selection and preference relations using a case-based approach. 11. CONCLUSIONS AND FUTURE WORK In this paper we have presented an argumentation-based framework for multi-agent learning. Specifically, we have presented AMAL, a framework that allows a group of learning agents to argue about the solution of a given problem, and we have shown how the learning capabilities can be used to generate arguments and counterarguments. The experimental evaluation shows that the increased amount of information provided to the agents by the argumentation process increases their predictive accuracy, especially when an adequate number of agents take part in the argumentation. The main contributions of this work are: a) an argumentation framework for learning agents; b) a case-based preference relation over arguments, based on computing an overall confidence estimation of arguments; c) a case-based policy to generate counterarguments and select counterexamples; and d) an argumentation-based approach for learning from communication. Finally, in the experiments presented here a learning agent retains all counterexamples submitted by the other agent; however, this is a very simple case retention policy, and we would like to experiment with more informed policies, with the goal that individual learning agents could significantly improve using only a small set of cases proposed by other agents. Finally, our approach is focused on lazy learning, and future work aims at incorporating eager inductive learning inside the argumentative framework for learning from communication.
Learning and Joint Deliberation through Argumentation in Multi-Agent Systems ABSTRACT In this paper we will present an argumentation framework for learning agents (AMAL) designed for two purposes: (1) for joint deliberation, and (2) for learning from communication. The AMAL framework is completely based on learning from examples: the argument preference relation, the argument generation policy, and the counterargument generation policy are case-based techniques. For joint deliberation, learning agents share their experience by forming a committee to decide upon some joint decision. We experimentally show that the argumentation among committees of agents improves both the individual and joint performance. For learning from communication, an agent engages in arguing with other agents in order to contrast its individual hypotheses and receive counterexamples; the argumentation process improves its learning scope and individual performance. 1. INTRODUCTION Argumentation frameworks for multi-agent systems can be used for different purposes such as joint deliberation, persuasion, negotiation, and conflict resolution. In this paper we will present an argumentation framework for learning agents, and show that it can be used for two purposes: (1) joint deliberation, and (2) learning from communication. Learning agents are capable of learning from experience, in the sense that past examples (situations and their outcomes) are used to predict the outcome for the situation at hand. However, since an individual agent's experience may be limited, individual knowledge and prediction accuracy is also limited. Thus, learning agents that are capable of arguing their individual predictions with other agents may reach better prediction accuracy after such an argumentation process. Most existing argumentation frameworks for multi-agent systems are based on deductive logic or some other deductive logic formalism specifically designed to support argumentation, such as default logic [3]. However, logic-based argumentation frameworks assume agents with preloaded knowledge and preference relations. In this paper, we focus on an Argumentation-based Multi-Agent Learning (AMAL) framework where both knowledge and preference relation are learned from experience. Thus, we consider a scenario with agents that (1) work in the same domain using a shared ontology, (2) are capable of learning from examples, and (3) communicate using an argumentative framework. Having learning capabilities allows agents to effectively use a specific form of counterargument, namely the use of counterexamples. Counterexamples offer the possibility of agents learning during the argumentation process. Moreover, learning agents allow techniques that use learnt experience to generate adequate arguments and counterarguments. This paper presents a case-based approach to address both issues. The agents use case-based reasoning (CBR) [1] to learn from past cases (where a case is a situation and its outcome) in order to predict the outcome of a new situation. We propose an argumentation protocol inside the AMAL framework that supports agents in reaching a joint prediction over a specific situation or problem; moreover, the reasoning needed to support the argumentation process will also be based on cases. In particular, we present two case-based measures, one for generating the arguments and counterarguments adequate to a particular situation and another for determining a preference relation among arguments. 
Finally, we evaluate (1) whether argumentation between learning agents can produce a joint prediction that improves over individual learning performance and (2) whether learning from the counterexamples conveyed during the argumentation process increases the individual performance, with precisely those cases being used while arguing. The paper is structured as follows. Section 2 discusses the relation among argumentation, collaboration and learning. Then Section 3 introduces our multi-agent CBR (MAC) framework and the notion of justified prediction. After that, Section 4 formally defines our argumentation framework. Sections 5 and 6 present our case-based preference relation and argument generation policies respectively. Later, Section 7 presents the argumentation protocol in our AMAL framework. After that, Section 8 presents an exemplification of the argumentation framework. Finally, Section 9 presents an empirical evaluation of our two main hypotheses. The paper closes with related work and conclusions sections. 10. RELATED WORK Concerning CBR in a multi-agent setting, the first research was on "negotiated case retrieval" [11] among groups of agents. Our work on multi-agent case-based learning started in 1999 [6]; later McGinty and Smyth [7] presented a multi-agent collaborative CBR approach (CCBR) for planning. [Figure 6: Learning from communication resulting from argumentation in a system composed of 5 agents.] Research on MAS argumentation focuses on several issues such as a) logics, protocols and languages that support argumentation, b) argument selection and c) argument interpretation. Approaches to logics and languages that support argumentation include defeasible logic [4] and BDI models [13]. Although argument selection is a key aspect of automated argumentation (see [12] and [13]), most research has been focused on preference relations among arguments. In our framework we have addressed both argument selection and preference relations using a case-based approach. 11. CONCLUSIONS AND FUTURE WORK In this paper we have presented an argumentation-based framework for multi-agent learning. Specifically, we have presented AMAL, a framework that allows a group of learning agents to argue about the solution of a given problem, and we have shown how the learning capabilities can be used to generate arguments and counterarguments. The main contributions of this work are: a) an argumentation framework for learning agents; b) a case-based preference relation over arguments, based on computing an overall confidence estimation of arguments; c) a case-based policy to generate counterarguments and select counterexamples; and d) an argumentation-based approach for learning from communication. Finally, our approach is focused on lazy learning, and future work aims at incorporating eager inductive learning inside the argumentative framework for learning from communication.
I-50
Agents, Beliefs, and Plausible Behavior in a Temporal Setting
Logics of knowledge and belief are often too static and inflexible to be used on real-world problems. In particular, they usually offer no concept for expressing that some course of events is more likely to happen than another. We address this problem and extend CTLK (computation tree logic with knowledge) with a notion of plausibility, which allows for practical and counterfactual reasoning. The new logic CTLKP (CTLK with plausibility) includes also a particular notion of belief. A plausibility update operator is added to this logic in order to change plausibility assumptions dynamically. Furthermore, we examine some important properties of these concepts. In particular, we show that, for a natural class of models, belief is a KD45 modality. We also show that model checking CTLKP is PTIME-complete and can be done in time linear with respect to the size of models and formulae.
[ "belief", "plausibl", "logic", "comput tree logic", "reason", "plausibl updat oper", "plausibl notion", "belief notion", "multi-agent system", "indistinguish", "framework", "semant", "tempor logic" ]
[ "P", "P", "P", "P", "P", "P", "R", "R", "U", "U", "U", "U", "R" ]
Agents, Beliefs, and Plausible Behavior in a Temporal Setting Nils Bulling and Wojciech Jamroga Department of Informatics, Clausthal University of Technology, Germany {bulling,wjamroga}@in. tu-clausthal. de ABSTRACT Logics of knowledge and belief are often too static and inflexible to be used on real-world problems. In particular, they usually offer no concept for expressing that some course of events is more likely to happen than another. We address this problem and extend CTLK (computation tree logic with knowledge) with a notion of plausibility, which allows for practical and counterfactual reasoning. The new logic CTLKP (CTLK with plausibility) includes also a particular notion of belief. A plausibility update operator is added to this logic in order to change plausibility assumptions dynamically. Furthermore, we examine some important properties of these concepts. In particular, we show that, for a natural class of models, belief is a KD45 modality. We also show that model checking CTLKP is PTIME-complete and can be done in time linear with respect to the size of models and formulae. Categories and Subject Descriptors I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence-Multiagent Systems; I.2.4 [Artificial Intelligence]: Knowledge Representation Formalisms and Methods-Modal logic General Terms Theory 1. INTRODUCTION Notions like time, knowledge, and beliefs are very important for analyzing the behavior of agents and multi-agent systems. In this paper, we extend modal logics of time and knowledge with a concept of plausible behavior: this notion is added to the language of CTLK [19], which is a straightforward combination of the branching-time temporal logic CTL [4, 3] and standard epistemic logic [9, 5]. In our approach, plausibility can be seen as a temporal property of behaviors. That is, some behaviors of the system can be assumed plausible and others implausible, with the underlying idea that the latter should perhaps be ignored in practical reasoning about possible future courses of action. Moreover, behaviors can be formally understood as temporal paths in the Kripke structure modeling a multiagent system. As a consequence, we obtain a language to reason about what can (or must) plausibly happen. We propose a particular notion of beliefs (inspired by [20, 7]), defined in terms of epistemic relations and plausibility. The main intuition is that beliefs are facts that an agent would know if he assumed that only plausible things could happen. We believe that humans use such a concept of plausibility and practical beliefs quite often in their everyday reasoning. Restricting one``s reasoning to plausible possibilities is essential to make the reasoning feasible, as the space of all possibilities is exceedingly large in real life. We investigate some important properties of plausibility, knowledge, and belief in this new framework. In particular, we show that knowledge is an S5 modality, and that beliefs satisfy axioms K45 in general, and KD45 for the class of plausibly serial models. Finally, we show that the relationship between knowledge and belief for plausibly serial models is natural and reflects the initial intuition well. We also show how plausibility assumptions can be specified in the object language via a plausibility update operator, and we study properties of such updates. Finally, we show that model checking of the new logic is no more complex than model checking CTL and CTLK. 
Our ultimate goal is to come up with a logic that allows the study of strategies, time, knowledge, and plausible/rational behavior under both perfect and imperfect information. As combining all these dimensions is highly nontrivial (cf. [12, 14]) it seems reasonable to split this task. While this paper deals with knowledge, plausibility, and belief, the companion paper [11] proposes a general framework for multi-agent systems that regard game-theoretical rationality criteria like Nash equilibrium, Pareto optimality, etc.. The latter approach is based on the more powerful logic ATL [1]. The paper is structured as follows. Firstly, we briefly present branching-time logic with knowledge, CTLK. In Section 3 we present our approach to plausibility and formally define CTLK with plausibility. We also show how 582 978-81-904262-7-5 (RPS) c 2007 IFAAMAS temporal formulae can be used to describe plausible paths, and we compare our logic with existing related work. In Section 4, properties of knowledge, belief, and plausibility are explored. Finally, we present verification complexity results for CTLKP in Section 5. 2. BRANCHING TIME AND KNOWLEDGE In this paper we develop a framework for agents'' beliefs about how the world can (or must) evolve. Thus, we need a notion of time and change, plus a notion of what the agents are supposed to know in particular situations. CTLK [19] is a straightforward combination of the computation tree logic CTL [4, 3] and standard epistemic logic [9, 5]. CTL includes operators for temporal properties of systems: i.e., path quantifier E (there is a path), together with temporal operators: f(in the next state), 2 (always from now on) and U (until).1 Every occurrence of a temporal operator is preceded by exactly one path quantifier in CTL (this variant of the language is sometimes called vanilla CTL). Epistemic logic uses operators for representing agents'' knowledge: Kaϕ is read as agent a knows that ϕ. Let Π be a set of atomic propositions with a typical element p, and Agt = {1, ..., k} be a set of agents with a typical element a. The language of CTLK consists of formulae ϕ, given as follows: ϕ ::= p | ¬ϕ | ϕ ∧ ϕ | Eγ | Kaϕ γ ::= fϕ | 2 ϕ | ϕU ϕ. We will sometimes refer to formulae ϕ as (vanilla) state formulae and to formulae γ as (vanilla) path formulae. The semantics of CTLK is based on Kripke models M = Q, R, ∼1, ..., ∼k, π , which include a nonempty set of states Q, a state transition relation R ⊆ Q × Q, epistemic indistinguishability relations ∼a⊆ Q × Q (one per agent), and a valuation of propositions π : Π → P(Q). We assume that relation R is serial and that all ∼a are equivalence relations. A path λ in M refers to a possible behavior (or computation) of system M, and can be represented as an infinite sequence of states that follow relation R, that is, a sequence q0q1q2... such that qiRqi+1 for every i = 0, 1, 2, ... We denote the ith state in λ by λ[i]. The set of all paths in M is denoted by ΛM (if the model is clear from context, M will be omitted). A q-path is a path that starts from q, i.e., λ[0] = q. A q-subpath is a sequence of states, starting from q, which is a subpath of some path in the model, i.e. a sequence q0q1... such that q = q0 and there are q0 , ..., qi such that q0 ...qi q0q1... 
∈ ΛM. (Footnote 1: Additional operators A (for every path) and ♦ (sometime in the future) are defined in the usual way. Footnote 2: For CTLK models, λ is a q-subpath iff it is a q-path. It will not always be so when plausible paths are introduced.) The semantics of CTLK is defined as follows: M, q |= p iff q ∈ π(p); M, q |= ¬ϕ iff it is not the case that M, q |= ϕ; M, q |= ϕ ∧ ψ iff M, q |= ϕ and M, q |= ψ; M, q |= E fϕ iff there is a q-path λ such that M, λ[1] |= ϕ; M, q |= E2 ϕ iff there is a q-path λ such that M, λ[i] |= ϕ for every i ≥ 0; M, q |= EϕU ψ iff there is a q-path λ and i ≥ 0 such that M, λ[i] |= ψ, and M, λ[j] |= ϕ for every 0 ≤ j < i; M, q |= Kaϕ iff M, q′ |= ϕ for every q′ such that q ∼a q′. 3. EXTENDING TIME AND KNOWLEDGE WITH PLAUSIBILITY AND BELIEFS In this section we discuss the central concept of this paper, i.e. the concept of plausibility. First, we outline the idea informally. Then, we extend CTLK with the notion of plausibility by adding the plausible path operators Pl a and the physical path operator Ph to the logic. Formula Pl aϕ has the intended meaning: according to agent a, it is plausible that ϕ holds; formula Ph ϕ reads as: ϕ holds in all physically possible scenarios (i.e., even in implausible ones). The plausible path operator restricts statements only to those paths which are defined to be sensible, whereas the physical path operator generates statements about all paths that may theoretically occur. Furthermore, we define beliefs on top of plausibility and knowledge, as the facts that an agent would know if he assumed that only plausible things could happen. Finally, we discuss related work [7, 8, 20, 18, 16], and compare it with our approach. 3.1 The Concept of Plausibility It is well known how knowledge (or beliefs) can be modeled with Kripke structures. However, it is not so obvious how we can capture knowledge and beliefs in a sensible way in one framework. Clearly, there should be a connection between these two notions. Our approach is to use the notion of plausibility for this purpose. Plausibility can serve as a primitive concept that helps to define the semantics of beliefs, in a similar way as indistinguishability of states (represented by relation ∼a) is the semantic concept that underlies knowledge. In this sense, our work follows [7, 20]: essentially, beliefs are what an agent would know if he took only plausible options into account. In our approach, however, plausibility is explicitly seen as a temporal property. That is, we do not consider states (or possible worlds) to be more plausible than others but rather define some behaviors to be plausible, and others implausible. Moreover, behaviors can be formally understood as temporal paths in the Kripke structure modeling a multi-agent system. An actual notion of plausibility (that is, a particular set of plausible paths) can emerge in many different ways. It may result from observations and learning; an agent can learn from its observations and see specific patterns of events as plausible (a lot of people wear black shoes if they wear a suit). Knowledge exchange is another possibility (e.g., an agent a can tell agent b that player c always bluffs when he is smiling). Game theory, with its rationality criteria (undominated strategies, maxmin, Nash equilibrium, etc.), is another viable source of plausibility assumptions. Last but not least, folk knowledge can be used to establish plausibility-related classifications of behavior (players normally want to win a game, people want to live). 
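As a concrete reading of the CTLK clauses given in Section 2, the following is a minimal sketch of the standard labelling procedure over a finite, explicitly represented model; it assumes the state space is small enough to enumerate and is meant only to illustrate how the existential temporal operators and the knowledge operator can be evaluated (by fixpoint computations and by quantifying over ∼a-related states, respectively).

```python
from typing import Dict, Set, Tuple

State = str

class CTLKModel:
    """Finite Kripke model: states, serial transition relation R,
    epistemic indistinguishability relations (one per agent), valuation."""
    def __init__(self, states: Set[State],
                 trans: Set[Tuple[State, State]],
                 indist: Dict[str, Set[Tuple[State, State]]],
                 val: Dict[str, Set[State]]):
        self.states, self.trans, self.indist, self.val = states, trans, indist, val

    def prop(self, p: str) -> Set[State]:
        """Atomic proposition p: the states where p holds."""
        return self.val.get(p, set())

    def pre(self, targets: Set[State]) -> Set[State]:
        """States with at least one R-successor inside `targets`."""
        return {q for (q, q2) in self.trans if q2 in targets}

    def ex(self, phi: Set[State]) -> Set[State]:
        """E (next phi): some successor satisfies phi."""
        return self.pre(phi)

    def eg(self, phi: Set[State]) -> Set[State]:
        """E (always phi): greatest fixpoint of Z = phi & pre(Z)."""
        z = set(phi)
        while True:
            new = phi & self.pre(z)
            if new == z:
                return z
            z = new

    def eu(self, phi: Set[State], psi: Set[State]) -> Set[State]:
        """E (phi until psi): least fixpoint of Z = psi | (phi & pre(Z))."""
        z = set(psi)
        while True:
            new = psi | (phi & self.pre(z))
            if new == z:
                return z
            z = new

    def know(self, agent: str, phi: Set[State]) -> Set[State]:
        """K_a phi: every ~a-indistinguishable state satisfies phi."""
        rel = self.indist[agent]
        return {q for q in self.states
                if all(q2 in phi for (q1, q2) in rel if q1 == q)}
```

Since the plausible satisfaction relation of CTLKP restricts path quantification to a designated set of plausible paths, essentially the same procedure can be reused once plausible paths are represented finitely, which is, roughly, what the translation in Section 5 exploits.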
In any case, restricting the reasoning to plausible possibilities can be essential if we want to make the reasoning feasible, as the space of all possibilities (we call them physical possibilities in the rest of the paper) is exceedingly large in real life. Of course, this does not exclude a more extensive analysis in special cases, e.g. when our plausibility assumptions do not seem accurate any more, or when the cost of The Sixth Intl.. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 583 inaccurate assumptions can be too high (as in the case of high-budget business decisions). But even in these cases, we usually do not get rid of plausibility assumptions completely - we only revise them to make them more cautious.3 To formalize this idea, we extend models of CTLK with sets of plausible paths and add plausibility operators Pl a, physical paths operator Ph , and belief operators Ba to the language of CTLK. Now, it is possible to make statements that refer to plausible paths only, as well as statements that regard all paths that may occur in the system. 3.2 CTLK with Plausibility In this section, we extend the logic of CTLK with plausibility; we call the resulting logic CTLKP. Formally, the language of CTLKP is defined as: ϕ ::= p | ¬ϕ | ϕ ∧ ϕ | Eγ | Pl aϕ | Ph ϕ | Kaϕ | Baϕ γ ::= fϕ | 2 ϕ | ϕU ϕ. For instance, we may claim it is plausible to assume that a shop is closed after the opening hours, though the manager may be physically able to open it at any time: Pl aA2 (late → ¬open) ∧ Ph E♦ (late ∧ open). The semantics of CTLKP extends that of CTLK as follows. Firstly, we augment the models with sets of plausible paths. A model with plausibility is given as M = Q, R, ∼1, ..., ∼k, Υ1, ..., Υk, π , where Q, R, ∼1, ..., ∼k, π is a CTLK model, and Υa ⊆ ΛM is the set of paths in M that are plausible according to agent a. If we want to make it clear that Υa is taken from model M, we will write ΥM a . It seems worth emphasizing that this notion of plausibility is subjective and holistic. It is subjective because Υa represents agent a``s subjective view on what is plausible - and indeed, different agents may have different ideas on plausibility (i.e., Υa may differ from Υb). It is holistic because Υa represents agent a``s idea of the plausible behavior of the whole system (including the behavior of other agents). Remark 1. In our models, plausibility is also global, i.e., plausibility sets do not depend on the state of the system. Investigating systems, in which plausibility is relativized with respect to states (like in [7]), might be an interesting avenue of future work. However, such an approach - while obviously more flexible - allows for potentially counterintuitive system descriptions. For example, it might be the case that path λ is plausible in q = λ[0], but the set of plausible paths in q = λ[1] is empty. That is, by following plausible path λ we are bound to get to an implausible situation. But then, does it make sense to consider λ as plausible? Secondly, we use a non-standard satisfaction relation |=P , which we call plausible satisfaction. Let M be a CTLKP 3 That is, when planning to open an industrial plant in the UK, we will probably consider the possibility of our main contractor taking her life, but we will still not take into account the possibilities of: an invasion of UFO, England being destroyed by a meteorite, Fidel Castro becoming the British Prime Minister etc.. 
Note that this is fundamentally different from using a probabilistic model in which all these unlikely scenarios are assigned very low probabilities: in that case, they also have a very small influence on our final decision, but we must process the whole space of physical possibilities to evaluate the options. model and P ⊆ ΛM be an arbitrary subset of paths in M (not necessarily any ΥM a ). |=P restricts the evaluation of temporal formulae to the paths given in P only. The absolute satisfaction relation |= is defined as |=ΛM . Let on(P) be the set of all states that lie on at least one path in P, i.e. on(P) = {q ∈ Q | ∃λ ∈ P∃i (λ[i] = q)}. Now, the semantics of CTLKP can be given through the following clauses: M, q |=P p iff q ∈ π(p); M, q |=P ¬ϕ iff M, q |=P ϕ; M, q |=P ϕ ∧ ψ iff M, q |=P ϕ and M, q |=P ψ; M, q |=P E fϕ iff there is a q-subpath λ ∈ P such that M, λ[1] |=P ϕ; M, q |=P E2 ϕ iff there is a q-subpath λ ∈ P such that M, λ[i] |=P ϕ for every i ≥ 0; M, q |=P EϕU ψ iff there is a q-subpath λ ∈ P and i ≥ 0 such that M, λ[i] |=P ψ, and M, λ[j] |=P ϕ for every 0 ≤ j < i; M, q |=P Pl aϕ iff M, q |=Υa ϕ; M, q |=P Ph ϕ iff M, q |= ϕ; M, q |=P Kaϕ iff M, q |= ϕ for every q such that q ∼a q ; M, q |=P Baϕ iff for all q ∈ on(Υa) with q ∼a q , we have that M, q |=Υa ϕ. One of the main reasons for using the concept of plausibility is that we want to define agents'' beliefs out of more primitive concepts - in our case, these are plausibility and indistinguishability - in a way analogous to [20, 7]. If an agent knows that ϕ, he must be sure about it. However, beliefs of an agent are not necessarily about reliable facts. Still, they should make sense to the agent; if he believes that ϕ, then the formula should at least hold in all futures that he envisages as plausible. Thus, beliefs of an agent may be seen as things known to him if he disregards all non-plausible possibilities. We say that ϕ is M-true (M |= ϕ) if M, q |= ϕ for all q ∈ QM. ϕ is valid (|= ϕ) if M |= ϕ for all models M. ϕ is M-strongly true (M |≡ ϕ) if M, q |=P ϕ for all q ∈ QM and all P ⊆ ΛM. ϕ is strongly valid ( |≡ ϕ) if M |≡ ϕ for all models M. Proposition 2. Strong truth and strong validity imply truth and validity, respectively. The reverse does not hold. Ultimately, we are going to be interested in normal (not strong) validity, as parameterizing the satisfaction relation with a set P is just a technical device for propagating sets of plausible paths Υa into the semantics of nested formulae. The importance of strong validity, however, lies in the fact that |≡ ϕ ↔ ψ makes ϕ and ψ completely interchangeable, while the same is not true for normal validity. Proposition 3. Let Φ[ϕ/ψ] denote formula Φ in which every occurrence of ψ was replaced by ϕ. Also, let |≡ ϕ ↔ ψ. Then for all M, q, P: M, q |=P Φ iff M, q |=P Φ[ϕ/ψ] (in particular, M, q |= Φ iff M, q |= Φ[ϕ/ψ]). Note that |= ϕ ↔ ψ does not even imply that M, q |= Φ iff M, q |= Φ[ϕ/ψ]. 584 The Sixth Intl.. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) Figure 1: Guessing Robots game Example 1 (Guessing Robots). Consider a simple game with two agents a and b, shown in Figure 1. First, a chooses a real number r ∈ [0, 1] (without revealing the number to b); then, b chooses a real number r ∈ [0, 1]. The agents win the game (and collect EUR 1, 000, 000) if both chose 1, otherwise they lose. 
Formally, we model the game with a CTLKP model M, in which the set of states Q includes qs for the initial situation, states qr, r ∈ [0, 1], for the situations after a has chosen number r, and final states qw, ql for the winning and the losing situation, respectively. The transition relation is as follows: qsRqr and qrRql for all r ∈ [0, 1]; q1Rqw, qwRqw, and qlRql. Moreover, π(one) = {q1} and π(win) = {qw}. Player a has perfect information in the game (i.e., q ∼a q iff q = q ), but player b does not distinguish between states qr (i.e., qr ∼b qr for all r, r ∈ [0, 1]). Obviously, the only sensible thing to do for both agents is to choose 1 (using game-theoretical vocabulary, these strategies are strongly dominant for the respective players). Thus, there is only one plausible course of events if we assume that our players are rational, and hence Υa = Υb = {qsq1qwqw ...}. Note that, in principle, the outcome of the game is uncertain: M, qs |= ¬A♦ win∧¬A2 ¬win. However, assuming rationality of the players makes it only plausible that the game must end up with a win: M, qs |= Pla A♦ win ∧ Plb A♦ win, and the agents believe that this will be the case: M, qs |= BaA♦ win ∧ BbA♦ win. Note also that, in any of the states qr, agent b believes that a (being rational) has played 1: M, qr |= Bbone for all r ∈ [0, 1]. 3.3 Defining Plausible Paths with Formulae So far, we have assumed that sets of plausible paths are somehow given in models. In this section we present a dynamic approach where an actual notion of plausibility can be specified in the object language. Note that we want to specify (usually infinite) sets of infinite paths, and we need a finite representation of these structures. One logical solution is given by using path formulae γ. These formulae describe properties of paths; therefore, a specific formula can be used to characterize a set of paths. For instance, think about a country in Africa where it has never snowed. Then, plausible paths might be defined as ones in which it never snows, i.e., all paths that satisfy 2 ¬snows. Formally, let γ be a CTLK path formula. We define |γ|M to be the set of paths that satisfy γ in model M: | fϕ|M = {λ | M, λ[1] |= ϕ} |2 ϕ|M = {λ | ∀i (M, λ[i] |= ϕ)} |ϕ1U ϕ2|M = {λ | ∃i ` M, λ[i] |= ϕ2 ∧ ∀j(0 ≤ j < i ⇒ M, λ[j] |= ϕ1) ´ }. Moreover, we define the plausible paths model update as follows. Let M = Q, R, ∼1, ..., ∼k, Υ1, ..., Υk, π be a CTLKP model, and let P ⊆ ΛM be a set of paths. Then Ma,P = Q, R, ∼1, ..., ∼k, Υ1, ..., Υa−1, P, Υa+1, ..., Υk, π denotes model M with a``s set of plausible paths reset to P. Now we can extend the language of CTLKP with formulae (set-pla γ)ϕ with the intuitive reading: suppose that γ exactly characterizes the set of plausible paths, then ϕ holds, and formal semantics given below: M, q |=P (set-pla γ)ϕ iff Ma,|γ|M , q |=P ϕ. We observe that this update scheme is similar to the one proposed in [13]. 3.4 Comparison to Related Work Several modal notions of plausibility were already discussed in the existing literature [7, 8, 20, 18, 16]. In these papers, like in ours, plausibility is used as a primitive semantic concept that helps to define beliefs on top of agents'' knowledge. A similar idea was introduced by Moses and Shoham in [18]. Their work preceded both [7, 8] and [20]and although Moses and Shoham do not explicitly mention the term plausibility, it seems appropriate to summarize their idea first. 
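Referring back to Section 3.3 just above (before the comparison with related work that follows): plausibility sets are infinite sets of infinite paths, so in practice they have to be kept symbolically, for instance by storing per agent the path formula γ that defines the set. The following is a minimal sketch of that bookkeeping; the class and method names are illustrative assumptions rather than anything prescribed by the paper.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

# Vanilla path formulae are kept as small syntax trees; only the three shapes
# used in the paper (next, always, until) are represented here.
@dataclass(frozen=True)
class Next:
    phi: object          # "in the next state phi"

@dataclass(frozen=True)
class Always:
    phi: object          # "always from now on phi"

@dataclass(frozen=True)
class Until:
    phi1: object
    phi2: object         # "phi1 until phi2"

@dataclass
class PlausibilityAssignment:
    """Maps each agent to the path formula defining its plausible paths;
    None stands for 'all paths are plausible' (the initial assignment)."""
    gamma: Dict[str, Optional[object]] = field(default_factory=dict)

    def set_pla(self, agent: str, new_gamma: object) -> "PlausibilityAssignment":
        """The update (set-pla gamma) for one agent: only that agent's
        plausibility definition changes, mirroring the model update to
        M with the agent's plausible-path set reset to |gamma|_M."""
        updated = dict(self.gamma)
        updated[agent] = new_gamma
        return PlausibilityAssignment(updated)

# Example (hypothetical proposition name): after accepting the rationality
# argument of the Guessing Robots game, agent "b" could adopt
# "always rational_play" as its plausibility assumption:
#   assumptions = PlausibilityAssignment({"a": None, "b": None})
#   assumptions = assumptions.set_pla("b", Always("rational_play"))
```

This finite, per-agent representation is the same device that the model-checking procedure of Section 5 threads through its translation.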
Moses and Shoham: Beliefs as Conditional Knowledge In [18], beliefs are relativized with respect to a formula α (which can be seen as a plausibility assumption expressed in the object language). More precisely, worlds that satisfy α can be considered as plausible. This concept is expressed via symbols Bα i ϕ; the index i ∈ {1, 2, 3} is used to distinguish between three different implementations of beliefs. The first version is given by Bα 1 ϕ ≡ K(α → ϕ).4 A drawback of this version is that if α is false, then everything will be believed with respect to α. The second version overcomes this problem: Bα 2 ϕ ≡ K(α → ϕ) ∧ (K¬α → Kϕ); now ϕ is only believed if it is known that ϕ follows from assumption α, and ϕ must be known if assumption α is known to be false. Finally, Bα 3 ϕ ≡ K(α → ϕ) ∧ ¬K¬α: if the assumption α is known to be false, nothing should be believed with respect to α. The strength of these different notions is given as follows: Bα 3 ϕ implies Bα 2 ϕ, and Bα 2 ϕ implies Bα 1 ϕ. In this approach, belief is strongly connected to knowledge in the sense that belief is knowledge with respect to a given assumption. Friedman and Halpern: Plausibility Spaces The work of Friedman and Halpern [7] extends the concepts of knowledge and belief with an explicit notion of plausibility; i.e., some worlds are more plausible for an agent than others. To implement this idea, Kripke models are extended with function P which assigns a plausibility space P(q, a) = (Ω(q,a), (q,a)) to every state, or more generally every possible world q, and agent a. The plausibility space 4 Unlike in most approaches, K is interpreted over all worlds and not only over the indistinguishable worlds. The Sixth Intl.. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 585 is just a partially ordered subset of states/worlds; that is, Ω(q, a) ⊆ Q, and (q,a)⊆ Q ×Q is a reflexive and transitive relation. Let S, T ⊆ Ω(q,a) be finite subsets of states; now, T is defined to be plausible given S with respect to P(q, a), denoted by S →P (q,a) T, iff all minimal points/states in S (with respect to (q,a)) are also in T.5 Friedman and Halpern``s view to modal plausibility is closely related to probability and, more generally, plausibility measures. Logics of plausibility can be seen as a qualitative description of agents preferences/knowledge; logics of probability [6, 15], on the other hand, offer a quantitative description. The logic from [7] is defined by the following grammar: ϕ ::= p | ϕ∧ϕ | ¬ϕ | Kaϕ | ϕ →a ϕ, where the semantics of all operators except →a is given as usual, and formulae ϕ →a ψ have the meaning that ψ is true in the most plausible worlds in which ϕ holds. Formally, the semantics for →a is given as: M, q |= ϕ →a ψ iff Sϕ P (q,a) →P(q,a) Sψ P (q,a), where Sϕ (q,a) = {q ∈ Ω(q,a) | M, q |= ϕ} are the states in Ω(q,a) that satisfy ϕ. The idea of defining beliefs is given by the assumption that an agent believes in something if he knows that it is true in the most plausible worlds of Ω(q,a); formally, this can be stated as Baϕ ≡ Ka( →a ϕ). Friedman and Halpern have shown that the KD45 axioms are valid for operator Ba if plausibility spaces satisfy consistency (for all states q ∈ Q it holds that Ω(q,a) ⊆ { q ∈ Q | q ∼a q }) and normality (for all states q ∈ Q it holds that Ω(q,a) = ∅).6 A temporal extension of the language (mentioned briefly in [7], and discussed in more detail in [8]) uses the interpreted systems approach [10, 5]. 
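For comparison, the Friedman and Halpern notion of "T is plausible given S" described above has a direct reading over a finite plausibility space. The sketch below passes the preorder as a function and ignores the infinite-chain subtleties of footnote 5, so it is only an illustration of the definition, not of their full framework.

```python
from typing import Callable, Set, TypeVar

W = TypeVar("W")  # possible worlds / states of a plausibility space

def plausibly_given(S: Set[W], T: Set[W],
                    leq: Callable[[W, W], bool]) -> bool:
    """T is plausible given S w.r.t. a finite plausibility space:
    every minimal element of S (w.r.t. the reflexive, transitive
    preorder `leq`) also belongs to T."""
    minimal = {w for w in S
               if not any(leq(v, w) and not leq(w, v) for v in S)}
    return minimal <= T

# Example (hypothetical ordering): with worlds ranked by an integer
# implausibility score, one could take leq = lambda v, w: score[v] <= score[w].
```

Belief in their framework is then knowledge restricted to the most plausible worlds, Baϕ ≡ Ka(true →a ϕ), whereas in CTLKP belief quantifies over plausible paths rather than over a ranking of worlds.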
A system R is given by runs, where a run r : N → Q is a function from time moments (modeled by N) to global states, and a time point (r, i) is given by a time point i ∈ N and a run r. A global state is a combination of local states, one per agent. An interpreted system M = (R, π) is given by a system R and a valuation of propositions π. Epistemic relations are defined over time points, i.e., (r , m ) ∼a (r, m) iff agent a``s local states ra(m ) and ra(m) of (r , m ) and (r, m) are equal. Formulae are interpreted in a straightforward way with respect to interpreted systems, e.g. M, r, m |= Kaϕ iff M, r , m |= ϕ for all (r , m ) ∼a (r, m). Now, these are time points that play the role of possible worlds; consequently, plausibility spaces P(r,m,a) are assigned to each point (r, m) and agent a. Su et al.: KBC Logic Su et al. [20] have developed a multi-modal, computationally grounded logic with modalities K, B, and C (knowledge, belief, and certainty). The computational model consists of (global) states q = (qvis , qinv , qper , Qpls ) where the environment is divided into a visible (qvis ) and an invisible part (qinv ), and qper captures the agent``s perception of the visible part of the environment. External sources may provide the agent with information about the invisible part of a state, which results in a set of states Qpls that are plausible for the agent. Given a global state q, we additionally define V is(q) = qvis , Inv(q) = qinv , Per(q) = qper , and Pls(q) = Qpls . The 5 When there are infinite chains ... q3 q2 a q1, the definition is much more sophisticated. An interested reader is referred to [7] for more details. 6 Note that this normality is essentially seriality of states wrt plausibility spaces. semantics is given by an extension of interpreted systems [10, 5], here, it is called interpreted KBC systems. KBC formulae are defined as ϕ ::= p | ¬ϕ | ϕ ∧ ϕ | Kϕ | Bϕ | Cϕ. The epistemic relation ∼vis is captured in the following way: (r, i) ∼vis (r , i ) iff V is(r(i)) = V is(r (i )). The semantic clauses for belief and certainty are given below. M, r, i |= Bϕ iff M, r , i |= ϕ for all (r , i ) with V is(r (i )) = Per(r(i)) and Inv(r (i )) ∈ Pls(r(i)) M, r, i |= Cϕ iff M, r , i |= ϕ for all (r (i )) with V is(r (i )) = Per(r(i)) Thus, an agent believes ϕ if, and only if, ϕ is true in all states which look like what he sees now and seem plausible in the current state. Certainty is stronger: if an agent is certain about ϕ, the formula must hold in all states with a visible part equal to the current perception, regardless of whether the invisible part is plausible or not. The logic does not include temporal formulae, although it might be extended with temporal operators, as time is already present in KBC models. What Are the Differences to Our Logic? In our approach, plausibility is explicitly seen as a temporal property, i.e., it is a property of temporal paths rather than states. In the object language, this is reflected by the fact that plausibility assumptions are specified through path formulae. In contrast, the approach of [18] and [20] is static: not only the logics do not include operators for talking about time and/or change, but these are states that are assumed plausible or not in their semantics. The differences to [7, 8] are more subtle. Firstly, the framework of Friedman and Halpern is static in the sense that plausibility is taken as a property of (abstract) possible worlds. 
This formulation is flexible enough to allow for incorporating time; still, in our approach, time is inherent to plausibility rather than incidental. Secondly, our framework is more computationally oriented. The implementation of temporal plausibility in [7, 8] is based on the interpreted systems approach with time points (r, m) being subject to plausibility. As runs are included in time points, they can also be defined plausible or implausible.7 However, it also means that time points serve the role of possible worlds in the basic formulation, which yields Kripke structures with uncountable possible world spaces in all but the most trivial cases. Thirdly, [7, 8] build on linear time: a run (more precisely, a time moment (r, m)) is fixed when a formula is interpreted. In contrast, we use branching time with explicit quantification over temporal paths.8 We believe that branching time is more suitable for non-deterministic domains (cf. e.g. [4]), of which multi-agent systems are a prime example. Note that branching time makes our notion of belief different from Friedman and Halpern``s. Most notably, property Kϕ → Bϕ is valid in their approach, but not in ours: an agent may 7 Friedman and Halpern even briefly mention how plausibility of runs can be embedded in their framework. 8 To be more precise, time in [7] does implicitly branch at epistemic states. This is because (r, m) ∼a (r , m ) iff a``s local state corresponding to both time points is the same (ra(m) = ra(m )). In consequence, the semantics of Kaϕ can be read as for every run, and every moment on this run that yields the same local state as now, ϕ holds. 586 The Sixth Intl.. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) know that some course of events is in principle possible, without believing that it can really become the case (see Section 4.2). As Proposition 13 suggests, such a subtle distinction between knowledge and beliefs is possible in our approach because branching time logics allow for existential quantification over runs. Fourthly, while Friedman and Halpern``s models are very flexible, they also enable system descriptions that may seem counterintuitive. Suppose that (r, m) is plausible in itself (formally: (r, m) is minimal wrt (r,m,a)), but (r, m + 1) is not plausible in (r, m + 1). This means that following the plausible path makes it implausible (cf. Remark 1), which is even stranger in the case of linear time. Combining the argument with computational aspects, we suggest that our approach can be more natural and straightforward for many applications. Last but not least, our logic provides a mechanism for specifying (and updating) sets of plausible paths in the object language. Thus, plausibility sets can be specified in a succinct way, which is another feature that makes our framework computation-friendly. The model checking results from Section 5 are especially encouraging in this light. 4. PLAUSIBILITY, KNOWLEDGE, AND BELIEFS IN CTLKP In this section we study some relevant properties of plausibility, knowledge, and beliefs; in particular, axioms KDT45 are examined. But first, we identify two important subclasses of models with plausibility. A CTLKP model is plausibly serial (or p-serial) for agent a if every state of the system is part of a plausible path according to a, i.e. on(Υa) = Q. As we will see further, a weaker requirement is sometimes sufficient. We call a model weakly p-serial if every state has at least one indistinguishable counterpart which lies on a plausible path, i.e. 
for each q ∈ Q there is a q ∈ Q such that q ∼a q and q ∈ on(Υa). Obviously, p-seriality implies weak p-seriality. We get the following characterization of both model classes. Proposition 4. M is plausibly serial for agent a iff formula Pl aE f is valid in M. M is weakly p-serial for agent a iff ¬KaPl aA f⊥ is valid in M. 4.1 Axiomatic Properties Theorem 5. Axioms K, D, 4, and 5 for knowledge are strongly valid, and axiom T is valid. That is, modalities Ka form system S5 in the sense of normal validity, and KD45 in the sense of strong validity. We do not include proofs here due to lack of space. The interested reader is referred to [2], where detailed proofs are given. Proposition 6. Axioms K, 4, and 5 for beliefs are strongly valid. That is, we have: |≡ (Baϕ ∧ Ba(ϕ → ψ)) → Baψ, |≡ (Baϕ → BaBaϕ), and |≡ (¬Baϕ → Ba¬Baϕ). The next proposition concerns the consistency axiom D: Baϕ → ¬Ba¬ϕ. It is easy to see that the axiom is not valid in general: as we have no restrictions on plausibility sets Υa, it may be as well that Υa = ∅. In that case we have Baϕ ∧ Ba¬ϕ for all formulae ϕ, because the set of states to be considered becomes empty. However, it turns out that D is valid for a very natural class of models. Proposition 7. Axiom D for beliefs is not valid in the class of all CTLKP models. However, it is strongly valid in the class of weak p-serial models (and therefore also in the class of p-serial models). Moreover, as one may expect, beliefs do not have to be always true. Proposition 8. Axiom T for beliefs is not valid; i.e., |= (Baϕ → ϕ). The axiom is not even valid in the class of p-serial models. Theorem 9. Belief modalities Ba form system K45 in the class of all models, and KD45 in the class of weakly plausibly serial models (in the sense of both normal and strong validity). Axiom T is not even valid for p-serial models. 4.2 Plausibility, Knowledge, and Beliefs First, we investigate the relationship between knowledge and plausibility/physicality operators. Then, we look at the interaction between knowledge and beliefs. Proposition 10. Let ϕ be a CTLKP formula, and M be a CTLKP model. We have the following strong validities: (i) |≡ Pl aKaϕ ↔ Kaϕ (ii) |≡ Ph Kaϕ ↔ KaPh ϕ and |≡ KaPh ϕ ↔ Kaϕ We now want to examine the relationship between knowledge and belief. For instance, if agent a believes in something, he knows that he believes it. Or, if he knows a fact, he also believes that he knows it. On the other hand, for instance, an agent does not necessarily believe in all the things he knows. For example, we may know that an invasion from another galaxy is in principle possible (KaE♦ invasion), but if we do not take this possibility as plausible (¬Pl aE♦ invasion), then we reject the corresponding belief in consequence (¬BaE♦ invasion). Note that this property reflects the strong connection between belief and plausibility in our framework. Proposition 11. The following formulae are strongly valid: (i) Baϕ → KaBaϕ, (ii) KaBaϕ → Baϕ, (iii) Kaϕ → BaKaϕ. The following formulae are not valid: (iv) Baϕ → BaKaϕ, (v) Kaϕ → Baϕ The last invalidity is especially important: it is not the case that knowing something implies believing in it. This emphasizes that we study a specific concept of beliefs here. Note that its specific is not due to the plausibility-based definition of beliefs. The reason lies rather in the fact that we investigate knowledge, beliefs and plausibility in a temporal framework, as Proposition 12 shows. Proposition 12. 
Let ϕ be a CTLKP formula that does not include any temporal operators. Then Kaϕ → Baϕ is strongly valid, and in the class of p-serial models we have even that |≡ Kaϕ ↔ Baϕ. The Sixth Intl.. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 587 Moreover, it is important that we use branching time with explicit quantification over paths; this observation is formalized in Proposition 13. Definition 1. We define the universal sublanguage of CTLK in a way similar to [21]: ϕu ::= p | ¬p | ϕu ∧ ϕu | ϕu ∨ ϕu | Aγu | Kaϕu, γu ::= fϕu | 2 ϕu | ϕuU ϕu. We call such ϕu universal formulae, and γu universal path formulae. Proposition 13. Let ϕu be a universal CTLK formula. Then |≡ Kaϕu → Baϕu. The following two theorems characterize the relationship between knowledge and beliefs: first for the class of p-serial models, and then, finally, for all models. Theorem 14. The following formulae are strongly valid in the class of plausibly serial CTLKP models: (i) Baϕ ↔ KaPl aϕ, (ii) Kaϕ ↔ BaPh ϕ. Theorem 15. Formula Baϕ ↔ KaPl a(E f → ϕ) is strongly valid. Note that this characterization has a strong commonsense reading: believing in ϕ is knowing that ϕ plausibly holds in all plausibly imaginable situations. 4.3 Properties of the Update The first notable property of plausibility update is that it influences only formulae in which plausibility plays a role, i.e. ones in which belief or plausibility modalities occur. Proposition 16. Let ϕ be a CTLKP formula that does not include operators Pl a and Ba, and γ be a CTLKP path formula. Then, we have |≡ ϕ ↔ (set-pla γ)ϕ. What can be said about the result of an update? At first sight, formula (set-pla γ)Pl aAγ seems a natural characterization; however, it is not valid. This is because, by leaving the other (implausible) paths out of scope, we may leave out of |γ| some paths that were needed to satisfy γ (see the example in Section 4.2). We propose two alternative ways out: the first one restricts the language of the update similarly to [21]; the other refers to physical possibilities, in a way analogous to [13]. Proposition 17. The CTLKP formula (set-pla γ)Pl aAγ is not valid. However, we have the following validities: (i) |≡ (set-pla γu)Pl aAγu, where γu is a universal CTLK path formula from Definition 1. (ii) If ϕ, ϕ1, ϕ2 are arbitrary CTLK formulae, then: |≡ (set-pla fϕ)Pl aA f(Ph ϕ), |≡ (set-pla 2 ϕ)Pl aA2 (Ph ϕ), and |≡ (set-pla ϕ1U ϕ2)Pl aA(Ph ϕ1)U (Ph ϕ2). 5. VERIFICATION OF PLAUSIBILITY, TIME AND BELIEFS In this section we report preliminary results on model checking CTLKP formulae. Clearly, verifying CTLKP properties directly against models with plausibility does not make much sense, since these models are inherently infinite; what we need is a finite representation of plausibility sets. One such representation has been discussed in Section 3.3: plausibility sets can be defined by path formulae and the update operator (set-pla γ). We follow this idea here, studying the complexity of model checking CTLKP formulae against CTLK models (which can be seen as a compact representation of CTLKP models in which all the paths are assumed plausible), with the underlying idea that plausibility sets, when needed, must be defined explicitly in the object language. Below we sketch an algorithm that model-checks CTLKP formulae in time linear wrt the size of the model and the length of the formula. This means that we have extended CTLK to a more expressive language with no computational price to pay. 
First of all, we get rid of the belief operators (due to Theorem 15), replacing every occurrence of Baϕ with KaPl a(E f → ϕ). Now, let −→γ = γ1, ..., γk be a vector of vanilla path formulae (one per agent), with the initial vector −→γ0 = , ..., , and −→γ [γ /a] denoting vector −→γ , in which −→γ [a] is replaced with γ . Additionally, we define −→γ [0] = . We translate the resulting CTLKP formulae to ones without plausibility via function tr(ϕ) = tr−→γ0,0(ϕ), defined as follows: tr−→γ ,i(p) = p, tr−→γ ,i(ϕ1 ∧ ϕ2) = tr−→γ ,i(ϕ1) ∧ tr−→γ ,i(ϕ2), tr−→γ ,i(¬ϕ) = ¬tr−→γ ,i(ϕ), tr−→γ ,i(Kaϕ) = Ka tr−→γ ,0(ϕ), tr−→γ ,i(Pla ϕ) = tr−→γ ,a(ϕ), tr−→γ ,i((set-pla γ )ϕ) = tr−→γ [γ /a],i(ϕ), tr−→γ ,i(Ph ϕ) = tr−→γ ,0(ϕ), tr−→γ ,i( fϕ) = ftr−→γ ,i(ϕ), tr−→γ ,i(2 ϕ) = 2 tr−→γ ,i(ϕ), tr−→γ ,i(ϕ1U ϕ2) = tr−→γ ,i(ϕ1)U tr−→γ ,i(ϕ2), tr−→γ ,i(Eγ ) = E(−→γ [i] ∧ tr−→γ ,i(γ )). Note that the resulting sentences belong to the logic of CTLK+, that is CTL+ (where each path quantifier can be followed by a Boolean combination of vanilla path formulae)9 with epistemic modalities. The following proposition justifies the translation. Proposition 18. For any CTLKP formula ϕ without Ba, we have that M, q |=CTLKP ϕ iff M, q |=CTLK+ tr(ϕ). In general, model checking CTL+ (and also CTLK+) is ΔP 2 -complete. However, in our case, the Boolean combinations of path subformulae are always conjunctions of at most two non-negated elements, which allows us to propose the following model checking algorithm. First, subformulae are evaluated recursively: for every subformula ψ of ϕ, the set of states in M that satisfy ψ is computed and labeled with a new proposition pψ. Now, it is enough to define checking M, q |= ϕ for ϕ in which all (state) subformulae are propositions, with the following cases: Case M, q |= E(2 p ∧ γ): If M, q |= p, then return no. Otherwise, remove from M all the states that do not satisfy p (yielding a sparser model M ), and check the CTL formula Eγ in M , q with any CTL model-checker. Case M, q |= E( fp ∧ γ): Create M by adding a copy q of state q, in which only the transitions to states satisfying p are kept (i.e., M, q |= r iff M, q |= r; and q Rq iff qRq and M, q |= p). Then, check Eγ in M , q . 9 For the semantics of CTL+, and discussion of model checking complexity, cf. [17]. 588 The Sixth Intl.. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) Case M, q |= E(p1U p2 ∧ p3U p4): Note that this is equivalent to checking E(p1 ∧ p3)U (p2 ∧ Ep3U p4) ∨ E(p1 ∧ p3)U (p4 ∧ Ep1U p2), which is a CTL formula. Other cases: The above cases cover all possible formulas that begin with a path quantifier. For other cases, standard CTLK model checking can be used. Theorem 19. Model checking CTLKP against CTLK models is PTIME-complete, and can be done in time O(ml), where m is the number of transitions in the model, and l is the length of the formula to be checked. That is, the complexity is no worse than for CTLK itself. 6. CONCLUSIONS In this paper a notion of plausible behavior is considered, with the underlying idea that implausible options should be usually ignored in practical reasoning about possible future courses of action. We add the new notion of plausibility to the logic of CTLK [19], and obtain a language which enables reasoning about what can (or must) plausibly happen. As a technical device to define the semantics of the resulting logic, we use a non-standard satisfaction relation |=P that allows to propagate the current set of plausible paths into subformulae. 
Furthermore, we propose a non-standard notion of beliefs, defined in terms of indistinguishability and plausibility. We also propose how plausibility assumptions can be specified in the object language via a plausibility update operator (in a way similar to [13]). We use this new framework to investigate some important properties of plausibility, knowledge, beliefs, and updates. In particular, we show that knowledge is an S5 modality, and that beliefs satisfy axioms K45 in general, and KD45 for the class of plausibly serial models. We also prove that believing in ϕ is knowing that ϕ plausibly holds in all plausibly possible situations. That is, the relationship between knowledge and beliefs is very natural and reflects the initial intuition precisely. Moreover, the model checking results from Section 5 show that verification for CTLKP is no more complex than for CTL and CTLK. We would like to stress that we do not see this contribution as a mere technical exercise in formal logic. Human agents use a similar concept of plausibility and practical beliefs in their everyday reasoning in order to reduce the search space and make the reasoning feasible. As a consequence, we suggest that the framework we propose may prove suitable for modeling, design, and analysis resource-bounded agents in general. We would like to thank Juergen Dix for fruitful discussions, useful comments and improvements. 7. REFERENCES [1] R. Alur, T. A. Henzinger, and O. Kupferman. Alternating-time Temporal Logic. Journal of the ACM, 49:672-713, 2002. [2] N. Bulling and W. Jamroga. Agents, beliefs and plausible behavior in a temporal setting. Technical Report IfI-06-05, Clausthal Univ. of Technology, 2006. [3] E. A. Emerson. Temporal and modal logic. In J. van Leeuwen, editor, Handbook of Theoretical Computer Science, volume B, pages 995-1072. Elsevier, 1990. [4] E.A. Emerson and J.Y. Halpern. sometimes and not never revisited: On branching versus linear time temporal logic. Journal of the ACM, 33(1):151-178, 1986. [5] R. Fagin, J. Y. Halpern, Y. Moses, and M. Y. Vardi. Reasoning about Knowledge. MIT Press: Cambridge, MA, 1995. [6] R. Fagin and J.Y. Halpern. Reasoning about knowledge and probability. Journal of ACM, 41(2):340-367, 1994. [7] N. Friedman and J.Y. Halpern. A knowledge-based framework for belief change, Part I: Foundations. In Proceedings of TARK, pages 44-64, 1994. [8] N. Friedman and J.Y. Halpern. A knowledge-based framework for belief change, Part II: Revision and update. In Proceedings of KR``94, 1994. [9] J.Y. Halpern. Reasoning about knowledge: a survey. In Handbook of Logic in Artificial Intelligence and Logic Programming. Vol. 4: Epistemic and Temporal Reasoning, pages 1-34. Oxford University Press, Oxford, 1995. [10] J.Y. Halpern and R. Fagin. Modelling knowledge and action in distributed systems. Distributed Computing, 3(4):159-177, 1989. [11] W. Jamroga and N. Bulling. A general framework for reasoning about rational agents. In Proceedings of AAMAS``07, 2007. Short paper. [12] W. Jamroga and W. van der Hoek. Agents that know how to play. Fundamenta Informaticae, 63(2-3):185-219, 2004. [13] W. Jamroga, W. van der Hoek, and M. Wooldridge. Intentions and strategies in game-like scenarios. In Progress in Artificial Intelligence: Proceedings of EPIA 2005, volume 3808 of LNAI, pages 512-523. Springer Verlag, 2005. [14] W. Jamroga and Thomas ˚Agotnes. Constructive knowledge: What agents can achieve under incomplete information. Technical Report IfI-05-10, Clausthal University of Technology, 2005. [15] B.P. 
Kooi. Probabilistic dynamic epistemic logic. Journal of Logic, Language and Information, 12(4):381-408, 2003. [16] P. Lamarre and Y. Shoham. Knowledge, certainty, belief, and conditionalisation (abbreviated version). In Proceedings of KR'94, pages 415-424, 1994. [17] F. Laroussinie, N. Markey, and Ph. Schnoebelen. Model checking CTL+ and FCTL is hard. In Proceedings of FoSSaCS'01, volume 2030 of LNCS, pages 318-331. Springer, 2001. [18] Y. Moses and Y. Shoham. Belief as defeasible knowledge. Artificial Intelligence, 64(2):299-321, 1993. [19] W. Penczek and A. Lomuscio. Verifying epistemic properties of multi-agent systems via bounded model checking. In Proceedings of AAMAS'03, pages 209-216, New York, NY, USA, 2003. ACM Press. [20] K. Su, A. Sattar, G. Governatori, and Q. Chen. A computationally grounded logic of knowledge, belief and certainty. In Proceedings of AAMAS'05, pages 149-156. ACM Press, 2005. [21] W. van der Hoek, M. Roberts, and M. Wooldridge. Social laws in alternating time: Effectiveness, feasibility and synthesis. Synthese, 2005.
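As a concrete companion to the case analysis in Section 5 above, the three reductions for formulae that begin with a path quantifier can be written out as executable pseudocode. This is only a sketch under explicit assumptions: check_ctl stands in for any off-the-shelf explicit-state CTL model checker, holds tests whether a proposition labels a state, and the model operations used (states, successors, remove_states, add_copy_of, set_successors) are hypothetical conveniences of ours, not an API taken from the paper.

from copy import deepcopy

def check_E_always_and(model, q, p, gamma, check_ctl, holds):
    # Case M,q |= E(always p  and  gamma): if p fails at q, answer "no";
    # otherwise prune every state violating p and check E gamma in the sparser model.
    if not holds(model, q, p):
        return False
    pruned = deepcopy(model)
    pruned.remove_states([s for s in model.states if not holds(model, s, p)])
    return check_ctl(pruned, q, ("E", gamma))

def check_E_next_and(model, q, p, gamma, check_ctl, holds):
    # Case M,q |= E(next p  and  gamma): add a copy q' of q that keeps only the
    # transitions to p-states, then check E gamma at q'.
    extended = deepcopy(model)
    q_copy = extended.add_copy_of(q)  # q' carries the same propositional labelling as q
    extended.set_successors(
        q_copy, [s for s in model.successors(q) if holds(model, s, p)])
    return check_ctl(extended, q_copy, ("E", gamma))

def check_E_until_pair(model, q, p1, p2, p3, p4, check_ctl):
    # Case M,q |= E(p1 U p2  and  p3 U p4): rewrite into the equivalent CTL formula
    #   E (p1 and p3) U (p2 and E p3 U p4)   or   E (p1 and p3) U (p4 and E p1 U p2).
    left = ("EU", ("and", p1, p3), ("and", p2, ("EU", p3, p4)))
    right = ("EU", ("and", p1, p3), ("and", p4, ("EU", p1, p2)))
    return check_ctl(model, q, ("or", left, right))

Every other formula beginning with a path quantifier, as well as the Boolean and epistemic cases, is handled by standard CTLK model checking, exactly as stated in the algorithm above.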
Agents, Beliefs, and Plausible Behavior in a Temporal Setting ABSTRACT Logics of knowledge and belief are often too static and inflexible to be used on real-world problems. In particular, they usually offer no concept for expressing that some course of events is more likely to happen than another. We address this problem and extend CTLK (computation tree logic with knowledge) with a notion of plausibility, which allows for practical and counterfactual reasoning. The new logic CTLKP (CTLK with plausibility) includes also a particular notion of belief. A plausibility update operator is added to this logic in order to change plausibility assumptions dynamically. Furthermore, we examine some important properties of these concepts. In particular, we show that, for a natural class of models, belief is a KD45 modality. We also show that model checking CTLKP is PTIME-complete and can be done in time linear with respect to the size of models and formulae. 1. INTRODUCTION Notions like time, knowledge, and beliefs are very important for analyzing the behavior of agents and multi-agent systems. In this paper, we extend modal logics of time and knowledge with a concept of plausible behavior: this notion is added to the language of CTLK [19], which is a straightforward combination of the branching-time temporal logic CTL [4, 3] and standard epistemic logic [9, 5]. In our approach, plausibility can be seen as a temporal property of behaviors. That is, some behaviors of the system can be assumed plausible and others implausible, with the underlying idea that the latter should perhaps be ignored in practical reasoning about possible future courses of action. Moreover, behaviors can be formally understood as temporal paths in the Kripke structure modeling a multiagent system. As a consequence, we obtain a language to reason about what can (or must) plausibly happen. We propose a particular notion of beliefs (inspired by [20, 7]), defined in terms of epistemic relations and plausibility. The main intuition is that beliefs are facts that an agent would know if he assumed that only plausible things could happen. We believe that humans use such a concept of plausibility and "practical beliefs" quite often in their everyday reasoning. Restricting one's reasoning to plausible possibilities is essential to make the reasoning feasible, as the space of all possibilities is exceedingly large in real life. We investigate some important properties of plausibility, knowledge, and belief in this new framework. In particular, we show that knowledge is an S5 modality, and that beliefs satisfy axioms K45 in general, and KD45 for the class of plausibly serial models. Finally, we show that the relationship between knowledge and belief for plausibly serial models is natural and reflects the initial intuition well. We also show how plausibility assumptions can be specified in the object language via a plausibility update operator, and we study properties of such updates. Finally, we show that model checking of the new logic is no more complex than model checking CTL and CTLK. Our ultimate goal is to come up with a logic that allows the study of strategies, time, knowledge, and plausible/rational behavior under both perfect and imperfect information. As combining all these dimensions is highly nontrivial (cf. [12, 14]) it seems reasonable to split this task. 
While this paper deals with knowledge, plausibility, and belief, the companion paper [11] proposes a general framework for multi-agent systems that regard game-theoretical rationality criteria like Nash equilibrium, Pareto optimality, etc. . The latter approach is based on the more powerful logic ATL [1]. The paper is structured as follows. Firstly, we briefly present branching-time logic with knowledge, CTLK. In Section 3 we present our approach to plausibility and formally define CTLK with plausibility. We also show how temporal formulae can be used to describe plausible paths, and we compare our logic with existing related work. In Section 4, properties of knowledge, belief, and plausibility are explored. Finally, we present verification complexity results for CTLKP in Section 5. 2. BRANCHING TIME AND KNOWLEDGE In this paper we develop a framework for agents' beliefs about how the world can (or must) evolve. Thus, we need a notion of time and change, plus a notion of what the agents are supposed to know in particular situations. CTLK [19] is a straightforward combination of the computation tree logic CTL [4, 3] and standard epistemic logic [9, 5]. CTL includes operators for temporal properties of systems: i.e., path quantifier E ("there is a path"), together with temporal operators: ❢ ("in the next state"), ❑ ("always from now on") and U ("until").1 Every occurrence of a temporal operator is preceded by exactly one path quantifier in CTL (this variant of the language is sometimes called "vanilla" CTL). Epistemic logic uses operators for representing agents' knowledge: Kaϕ is read as "agent a knows that ϕ". Let Π be a set of atomic propositions with a typical element p, and Agt = {1,..., k} be a set of agents with a typical element a. The language of CTLK consists of formulae ϕ, given as follows: We will sometimes refer to formulae ϕ as ("vanilla") state formulae and to formulae γ as ("vanilla") path formulae. The semantics of CTLK is based on Kripke models M = ~ Q, R, ∼ 1,..., ∼ k, π ~, which include a nonempty set of states Q, a state transition relation R ⊆ Q × Q, epistemic indistinguishability relations ∼ a ⊆ Q × Q (one per agent), and a valuation of propositions π: Π → P (Q). We assume that relation R is serial and that all ∼ a are equivalence relations. A path λ in M refers to a possible behavior (or computation) of system M, and can be represented as an infinite sequence of states that follow relation R, that is, a sequence q0q1q2...such that qiRqi +1 for every i = 0, 1, 2,...We denote the ith state in λ by λ [i]. The set of all paths in M is denoted by ΛM (if the model is clear from context, M will be omitted). A q-path is a path that starts from q, i.e., λ [0] = q. A q-subpath is a sequence of states, starting from q, which is a subpath of some path in the model, i.e. a sequence q0q1...such that q = q0 and there are q0,..., qi such that q0...qiq0q1...∈ ΛM .2 The semantics of CTLK is defined as follows: 3. EXTENDING TIME AND KNOWLEDGE WITH PLAUSIBILITY AND BELIEFS In this section we discuss the central concept of this paper, i.e. the concept of plausibility. First, we outline the idea informally. Then, we extend CTLK with the notion of plausibility by adding plausible path operators Pl a and physical path operator Ph to the logic. Formula Pl aϕ has the intended meaning: according to agent a, it is plausible that ϕ holds; formula Ph ϕ reads as: ϕ holds in all "physically" possible scenarios (i.e., even in implausible ones). 
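Before continuing with the plausibility operators, it is worth recalling the grammar and semantic clauses of the underlying CTLK, which are the standard ones (cf. [4, 9, 19]). The rendering below is a reconstruction in ordinary LaTeX notation, with \bigcirc standing for the next-state operator and \Box for "always from now on"; it is not a verbatim quotation of the original displays.

% Syntax of "vanilla" CTLK
\[
  \varphi ::= p \mid \neg\varphi \mid \varphi \wedge \varphi \mid \mathsf{E}\gamma \mid K_a\varphi,
  \qquad
  \gamma ::= \bigcirc\varphi \mid \Box\varphi \mid \varphi\,\mathcal{U}\,\varphi .
\]
% Semantics over models M = <Q, R, ~_1, ..., ~_k, pi>
\begin{align*}
  M,q       &\models p                           &&\text{iff } q \in \pi(p);\\
  M,q       &\models \neg\varphi                 &&\text{iff } M,q \not\models \varphi;\\
  M,q       &\models \varphi \wedge \psi         &&\text{iff } M,q \models \varphi \text{ and } M,q \models \psi;\\
  M,q       &\models \mathsf{E}\gamma            &&\text{iff } M,\lambda \models \gamma \text{ for some } q\text{-path } \lambda;\\
  M,q       &\models K_a\varphi                  &&\text{iff } M,q' \models \varphi \text{ for every } q' \text{ such that } q \sim_a q';\\
  M,\lambda &\models \bigcirc\varphi             &&\text{iff } M,\lambda[1] \models \varphi;\\
  M,\lambda &\models \Box\varphi                 &&\text{iff } M,\lambda[i] \models \varphi \text{ for every } i \geq 0;\\
  M,\lambda &\models \varphi\,\mathcal{U}\,\psi  &&\text{iff } M,\lambda[i] \models \psi \text{ for some } i \geq 0
             \text{ and } M,\lambda[j] \models \varphi \text{ for every } 0 \leq j < i.
\end{align*}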
The plausible path operator restricts statements only to those paths which are defined to be "sensible", whereas the physical path operator generates statements about all paths that may theoretically occur. Furthermore, we define beliefs on top of plausibility and knowledge, as the facts that an agent would know if he assumed that only plausible things could happen. Finally, we discuss related work [7, 8, 20, 18, 16], and compare it with our approach. 3.1 The Concept of Plausibility It is well known how knowledge (or beliefs) can be modeled with Kripke structures. However, it is not so obvious how we can capture knowledge and beliefs in a sensible way in one framework. Clearly, there should be a connection between these two notions. Our approach is to use the notion of plausibility for this purpose. Plausibility can serve as a primitive concept that helps to define the semantics of beliefs, in a similar way as indistinguishability of states (represented by relation ∼ a) is the semantic concept that underlies knowledge. In this sense, our work follows [7, 20]: essentially, beliefs are what an agent would know if he took only plausible options into account. In our approach, however, plausibility is explicitly seen as a temporal property. That is, we do not consider states (or possible worlds) to be more plausible than others but rather define some behaviors to be plausible, and others implausible. Moreover, behaviors can be formally understood as temporal paths in the Kripke structure modeling a multi-agent system. An actual notion of plausibility (that is, a particular set of plausible paths) can emerge in many different ways. It may result from observations and learning; an agent can learn from its observations and see specific patterns of events as plausible ("a lot of people wear black shoes if they wear a suit"). Knowledge exchange is another possibility (e.g., an agent a can tell agent b that "player c always bluffs when he is smiling"). Game theory, with its rationality criteria (undominated strategies, maxmin, Nash equilibrium etc.) is another viable source of plausibility assumptions. Last but not least, folk knowledge can be used to establish plausibilityrelated classifications of behavior ("players normally want to win a game", "people want to live"). In any case, restricting the reasoning to plausible possibilities can be essential if we want to make the reasoning feasible, as the space of all possibilities (we call them "physical" possibilities in the rest of the paper) is exceedingly large in real life. Of course, this does not exclude a more extensive analysis in special cases, e.g. when our plausibility assumptions do not seem accurate any more, or when the cost of The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 583 inaccurate assumptions can be too high (as in the case of high-budget business decisions). But even in these cases, we usually do not get rid of plausibility assumptions completely--we only revise them to make them more cautious .3 To formalize this idea, we extend models of CTLK with sets of plausible paths and add plausibility operators Pl a, physical paths operator Ph, and belief operators Ba to the language of CTLK. Now, it is possible to make statements that refer to plausible paths only, as well as statements that regard all paths that may occur in the system. 3.2 CTLK with Plausibility In this section, we extend the logic of CTLK with plausibility; we call the resulting logic CTLKP. 
Formally, the language of CTLKP is defined as: For instance, we may claim it is plausible to assume that a shop is closed after the opening hours, though the manager may be physically able to open it at any time: Pl aA ❑ (late → ¬ open) ∧ Ph E ♦ (late ∧ open). The semantics of CTLKP extends that of CTLK as follows. Firstly, we augment the models with sets of plausible paths. A model with plausibility is given as where ~ Q, R, ∼ 1,..., ∼ k, π ~ is a CTLK model, and Ta ⊆ Λm is the set of paths in M that are plausible according to agent a. If we want to make it clear that Ta is taken from model M, we will write Tam. It seems worth emphasizing that this notion of plausibility is subjective and holistic. It is subjective because Ta represents agent a's subjective view on what is plausible--and indeed, different agents may have different ideas on plausibility (i.e., Ta may differ from Tb). It is holistic because Ta represents agent a's idea of the plausible behavior of the whole system (including the behavior of other agents). REMARK 1. In our models, plausibility is also global, i.e., plausibility sets do not depend on the state of the system. Investigating systems, in which plausibility is relativized with respect to states (like in [7]), might be an interesting avenue of future work. However, such an approach--while obviously more flexible--allows for potentially counterintuitive system descriptions. For example, it might be the case that path λ is plausible in q = λ [0], but the set of plausible paths in q' = λ [1] is empty. That is, by following plausible path λ we are bound to get to an implausible situation. But then, does it make sense to consider λ as plausible? Secondly, we use a non-standard satisfaction relation | =P, which we call plausible satisfaction. Let M be a CTLKP 3That is, when planning to open an industrial plant in the UK, we will probably consider the possibility of our main contractor taking her life, but we will still not take into account the possibilities of: an invasion of UFO, England being destroyed by a meteorite, Fidel Castro becoming the British Prime Minister etc. . Note that this is fundamentally different from using a probabilistic model in which all these unlikely scenarios are assigned very low probabilities: in that case, they also have a very small influence on our final decision, but we must process the whole space of physical possibilities to evaluate the options. model and P ⊆ Λm be an arbitrary subset of paths in M (not necessarily any Ta m). | =P restricts the evaluation of temporal formulae to the paths given in P only. The "absolute" satisfaction relation | = is defined as | = ΛM. Let on (P) be the set of all states that lie on at least one path in P, i.e. on (P) = {q ∈ Q | ∃ λ ∈ P ∃ i (λ [i] = q)}. Now, the semantics of CTLKP can be given through the following clauses: One of the main reasons for using the concept of plausibility is that we want to define agents' beliefs out of more primitive concepts--in our case, these are plausibility and indistinguishability--in a way analogous to [20, 7]. If an agent knows that ϕ, he must be "sure" about it. However, beliefs of an agent are not necessarily about reliable facts. Still, they should make sense to the agent; if he believes that ϕ, then the formula should at least hold in all futures that he envisages as plausible. Thus, beliefs of an agent may be seen as things known to him if he disregards all non-plausible possibilities. We say that ϕ is M-true (M | = ϕ) if M, q | = ϕ for all q ∈ Qm. 
ϕ is valid (| = ϕ) if M | = ϕ for all models M. ϕ is M-strongly true (M | ≡ ϕ) if M, q | =P ϕ for all q ∈ Qm and all P ⊆ Λm. ϕ is strongly valid (| ≡ ϕ) if M | ≡ ϕ for all models M. PROPOSITION 2. Strong truth and strong validity imply truth and validity, respectively. The reverse does not hold. Ultimately, we are going to be interested in normal (not strong) validity, as parameterizing the satisfaction relation with a set P is just a technical device for propagating sets of plausible paths Ta into the semantics of nested formulae. The importance of strong validity, however, lies in the fact that | ≡ ϕ ↔ ψ makes ϕ and ψ completely interchangeable, while the same is not true for normal validity. PROPOSITION 3. Let Φ [ϕ / ψ] denote formula Φ in which every occurrence of ψ was replaced by ϕ. Also, let | ≡ ϕ ↔ ψ. Then for all M, q, P: M, q | =P Φ iff M, q | =P Φ [ϕ / ψ] (in particular, M, q | = Φ iff M, q | = Φ [ϕ / ψ]). Note that | = ϕ ↔ ψ does not even imply that M, q | = Φ iff M, q | = Φ [ϕ / ψ]. 584 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) Figure 1: Guessing Robots game EXAMPLE 1 (GUESSING ROBOTS). Consider a simple game with two agents a and b, shown in Figure 1. First, a chooses a real number r ∈ [0, 1] (without revealing the number to b); then, b chooses a real number r' ∈ [0, 1]. The agents win the game (and collect EUR 1, 000, 000) if both chose 1, otherwise they lose. Formally, we model the game with a CTLKP model M, in which the set of states Q includes qs for the initial situation, states qr, r ∈ [0, 1], for the situations after a has chosen number r, and "final" states qw, ql for the winning and the losing situation, respectively. The transition relation is as follows: qsRqr and qrRql for all r ∈ [0, 1]; q1Rqw, qwRqw, and qlRql. Moreover, π (one) = {q1} and π (win) = {qw}. Player a has perfect information in the game (i.e., q ∼ a q' iff q = q'), but player b does not distinguish between states qr (i.e., qr ∼ b qr' for all r, r' ∈ [0, 1]). Obviously, the only sensible thing to do for both agents is to choose 1 (using game-theoretical vocabulary, these strategies are strongly dominant for the respective players). Thus, there is only one plausible course of events if we assume that our players are rational, and hence Υa = Υb = {qsq1qwqw ...}. Note that, in principle, the outcome of the game is uncertain: M, qs | = ¬ A0 win ∧ ¬ A ❑ ¬ win. However, assuming rationality of the players makes it only plausible that the game must end up with a win: M, qs | = Pla A0 win ∧ Plb A0 win, and the agents believe that this will be the case: M, qs | = BaA0 win ∧ BbA0 win. Note also that, in any of the states qr, agent b believes that a (being rational) has played 1: M, qr | = Bbone for all r ∈ [0, 1]. 3.3 Defining Plausible Paths with Formulae So far, we have assumed that sets of plausible paths are somehow given in models. In this section we present a dynamic approach where an actual notion of plausibility can be specified in the object language. Note that we want to specify (usually infinite) sets of infinite paths, and we need a finite representation of these structures. One logical solution is given by using path formulae γ. These formulae describe properties of paths; therefore, a specific formula can be used to characterize a set of paths. For instance, think about a country in Africa where it has never snowed. Then, plausible paths might be defined as ones in which it never snows, i.e., all paths that satisfy ❑ ¬ snows. 
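To connect this with a finite representation (a point that Section 5 returns to), a plausibility assumption of this simple invariant shape can be captured by pruning the model: the paths satisfying "always not snows" are exactly the paths of the submodel obtained by deleting every state where snows holds. The Python sketch below uses a dictionary-based model encoding and a function name of our own choosing, not anything from the paper.

def plausible_subgraph_for_always_not(model, prop):
    """Finite representation of the plausible paths defined by "always not prop":
    keep only the states where prop does not hold, and the transitions among them.
    States that lose all their successors lie on no plausible path and can be
    ignored (or pruned further, iteratively)."""
    keep = [s for s in model["states"] if prop not in model["label"][s]]
    return {
        "states": keep,
        "transitions": {s: [t for t in model["transitions"][s] if t in keep]
                        for s in keep},
        "label": {s: model["label"][s] for s in keep},
    }

# Toy weather model: every infinite path of the pruned submodel satisfies "always not snows".
weather = {
    "states": ["dry", "rainy", "snowy"],
    "transitions": {"dry": ["dry", "rainy", "snowy"],
                    "rainy": ["dry", "rainy"],
                    "snowy": ["dry"]},
    "label": {"dry": set(), "rainy": set(), "snowy": {"snows"}},
}
plausible = plausible_subgraph_for_always_not(weather, "snows")
assert plausible["states"] == ["dry", "rainy"]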
Formally, let γ be a CTLK path formula. We define | γ | M to be the set of paths that satisfy γ in model M: Moreover, we define the plausible paths model update as follows. Let M = Q, R, ∼ 1,..., ∼ k, Υ1,..., Υk, π be a CTLKP model, and let P ⊆ Λm be a set of paths. Then Ma, P = Q, R, ∼ 1,..., ∼ k, Υ1,..., Υa-1, P, Υa +1,..., Υk, π denotes model M with a's set of plausible paths reset to P. Now we can extend the language of CTLKP with formulae (set-pla γ) ϕ with the intuitive reading: "suppose that γ exactly characterizes the set of plausible paths, then ϕ holds", and formal semantics given below: M, q | =P (set-pla γ) ϕ iff Ma,1 γ1M, q | =P ϕ. We observe that this update scheme is similar to the one proposed in [13]. 3.4 Comparison to Related Work Several modal notions of plausibility were already discussed in the existing literature [7, 8, 20, 18, 16]. In these papers, like in ours, plausibility is used as a primitive semantic concept that helps to define beliefs on top of agents' knowledge. A similar idea was introduced by Moses and Shoham in [18]. Their work preceded both [7, 8] and [20]--and although Moses and Shoham do not explicitly mention the term "plausibility", it seems appropriate to summarize their idea first. Moses and Shoham: Beliefs as Conditional Knowledge In [18], beliefs are relativized with respect to a formula α (which can be seen as a plausibility assumption expressed in the object language). More precisely, worlds that satisfy α can be considered as plausible. This concept is expressed via symbols Bαi ϕ; the index i ∈ {1, 2, 3} is used to distinguish between three different implementations of beliefs. The first version is given by Bα1 ϕ ≡ K (α → ϕ).4 A drawback of this version is that if α is false, then everything will be believed with respect to α. The second version overcomes this problem: Bα2 ϕ ≡ K (α → ϕ) ∧ (K ¬ α → Kϕ); now ϕ is only believed if it is known that ϕ follows from assumption α, and ϕ must be known if assumption α is known to be false. Finally, Bα3 ϕ ≡ K (α → ϕ) ∧ ¬ K ¬ α: if the assumption α is known to be false, nothing should be believed with respect to α. The strength of these different notions is given as follows: Bα3 ϕ implies Bα2 ϕ, and Bα2 ϕ implies Bα1 ϕ. In this approach, belief is strongly connected to knowledge in the sense that belief is knowledge with respect to a given assumption. Friedman and Halpern: Plausibility Spaces The work of Friedman and Halpern [7] extends the concepts of knowledge and belief with an explicit notion of plausibility; i.e., some worlds are more plausible for an agent than others. To implement this idea, Kripke models are extended with function P which assigns a plausibility space P (q, a) = (Ω (q, a), ~ (q, a)) to every state, or more generally every possible world q, and agent a. The plausibility space 4Unlike in most approaches, K is interpreted over all worlds and not only over the indistinguishable worlds. The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 585 is just a partially ordered subset of states/worlds; that is, Ω (q, a) ⊆ Q, and ~ (q, a) ⊆ Q × Q is a reflexive and transitive relation. Let S, T ⊆ Ω (q, a) be finite subsets of states; now, T is defined to be plausible given S with respect to P (q, a), denoted by S → P (q, a) T, iff all minimal points/states in S (with respect to ~ (q, a)) are also in T. 5 Friedman and Halpern's view to modal plausibility is closely related to probability and, more generally, plausibility measures. 
Logics of plausibility can be seen as a qualitative description of agents preferences/knowledge; logics of probability [6, 15], on the other hand, offer a quantitative description. The logic from [7] is defined by the following grammar: ϕ:: = p | ϕ ∧ ϕ | ¬ ϕ | Kaϕ | ϕ → a ϕ, where the semantics of all operators except → a is given as usual, and formulae ϕ → a ψ have the meaning that ψ is true in the most plausible worlds in which ϕ holds. Formally, the semantics for → a is given as: M, q | = ϕ → a ψ iff SϕP (q, a) → P (q, a) SψP (q, a), where Sϕ (q, a) = {q' ∈ Ω (q, a) | M, q' | = ϕ} are the states in Ω (q, a) that satisfy ϕ. The idea of defining beliefs is given by the assumption that an agent believes in something if he knows that it is true in the most plausible worlds of Ω (q, a); formally, this can be stated as Baϕ ≡ Ka (~ → a ϕ). Friedman and Halpern have shown that the KD45 axioms are valid for operator Ba if plausibility spaces satisfy consistency (for all states q ∈ Q it holds that Ω (q, a) ⊆ {q' ∈ Q | q ∼ a q'}) and normality (for all states q ∈ Q it holds that Ω (q, a) = ∅).6 A temporal extension of the language (mentioned briefly in [7], and discussed in more detail in [8]) uses the interpreted systems approach [10, 5]. A system R is given by runs, where a run r: N → Q is a function from time moments (modeled by N) to global states, and a time point (r, i) is given by a time point i ∈ N and a run r. A global state is a combination of local states, one per agent. An interpreted system M = (R, π) is given by a system R and a valuation of propositions π. Epistemic relations are defined over time points, i.e., (r', m') ∼ a (r, m) iff agent a's local states r' a (m') and ra (m) of (r', m') and (r, m) are equal. Formulae are interpreted in a straightforward way with respect to interpreted systems, e.g. M, r, m | = Kaϕ iff M, r', m' | = ϕ for all (r', m') ∼ a (r, m). Now, these are time points that play the role of possible worlds; consequently, plausibility spaces P (r, m, a) are assigned to each point (r, m) and agent a. Su et al. [20] have developed a multi-modal, computationally grounded logic with modalities K, B, and C (knowledge, belief, and certainty). The computational model consists of (global) states q = (qvis, qinv, qper, Qpls) where the environment is divided into a visible (qvis) and an invisible part (qinv), and qper captures the agent's perception of the visible part of the environment. External sources may provide the agent with information about the invisible part of a state, which results in a set of states Qpls that are plausible for the agent. Given a global state q, we additionally define Vis (q) = qvis, Inv (q) = qinv, Per (q) = qper, and Pls (q) = Qpls. The 5When there are infinite chains...~ q3 ~ q2 ~ a q1, the definition is much more sophisticated. An interested reader is referred to [7] for more details. 6Note that this "normality" is essentially seriality of states wrt plausibility spaces. semantics is given by an extension of interpreted systems [10, 5], here, it is called interpreted KBC systems. KBC formulae are defined as ϕ:: = p | ¬ ϕ | ϕ ∧ ϕ | Kϕ | Bϕ | Cϕ. The epistemic relation ∼ vis is captured in the following way: (r, i) ∼ vis (r', i') iff Vis (r (i)) = Vis (r' (i')). The semantic clauses for belief and certainty are given below. Thus, an agent believes ϕ if, and only if, ϕ is true in all states which look like what he sees now and seem plausible in the current state. 
Certainty is stronger: if an agent is certain about ϕ, the formula must hold in all states with a visible part equal to the current perception, regardless of whether the invisible part is plausible or not. The logic does not include temporal formulae, although it might be extended with temporal operators, as time is already present in KBC models. What Are the Differences to Our Logic? In our approach, plausibility is explicitly seen as a temporal property, i.e., it is a property of temporal paths rather than states. In the object language, this is reflected by the fact that plausibility assumptions are specified through path formulae. In contrast, the approach of [18] and [20] is static: not only the logics do not include operators for talking about time and/or change, but these are states that are assumed plausible or not in their semantics. The differences to [7, 8] are more subtle. Firstly, the framework of Friedman and Halpern is static in the sense that plausibility is taken as a property of (abstract) possible worlds. This formulation is flexible enough to allow for incorporating time; still, in our approach, time is inherent to plausibility rather than incidental. Secondly, our framework is more computationally oriented. The implementation of temporal plausibility in [7, 8] is based on the interpreted systems approach with time points (r, m) being subject to plausibility. As runs are included in time points, they can also be defined plausible or implausible .7 However, it also means that time points serve the role of possible worlds in the basic formulation, which yields Kripke structures with uncountable possible world spaces in all but the most trivial cases. Thirdly, [7, 8] build on linear time: a run (more precisely, a time moment (r, m)) is fixed when a formula is interpreted. In contrast, we use branching time with explicit quantification over temporal paths .8 We believe that branching time is more suitable for non-deterministic domains (cf. e.g. [4]), of which multi-agent systems are a prime example. Note that branching time makes our notion of belief different from Friedman and Halpern's. Most notably, property Kϕ → Bϕ is valid in their approach, but not in ours: an agent may 7Friedman and Halpern even briefly mention how plausibility of runs can be embedded in their framework. 8To be more precise, time in [7] does implicitly branch at epistemic states. This is because (r, m) ∼ a (r', m') iff a's local state corresponding to both time points is the same (ra (m) = r' a (m')). In consequence, the semantics of Kaϕ can be read as "for every run, and every moment on this run that yields the same local state as now, ϕ holds". 586 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) know that some course of events is in principle possible, without believing that it can really become the case (see Section 4.2). As Proposition 13 suggests, such a subtle distinction between knowledge and beliefs is possible in our approach because branching time logics allow for existential quantification over runs. Fourthly, while Friedman and Halpern's models are very flexible, they also enable system descriptions that may seem counterintuitive. Suppose that (r, m) is plausible in itself (formally: (r, m) is minimal wrt ~ (r, m, a)), but (r, m + 1) is not plausible in (r, m + 1). This means that following the plausible path makes it implausible (cf. Remark 1), which is even stranger in the case of linear time. 
Combining the argument with computational aspects, we suggest that our approach can be more natural and straightforward for many applications. Last but not least, our logic provides a mechanism for specifying (and updating) sets of plausible paths in the object language. Thus, plausibility sets can be specified in a succinct way, which is another feature that makes our framework computation-friendly. The model checking results from Section 5 are especially encouraging in this light. 4. PLAUSIBILITY, KNOWLEDGE, AND BELIEFS IN CTLKP In this section we study some relevant properties of plausibility, knowledge, and beliefs; in particular, axioms KDT45 are examined. But first, we identify two important subclasses of models with plausibility. A CTLKP model is plausibly serial (or p-serial) for agent a if every state of the system is part of a plausible path according to a, i.e. on (Υa) = Q. As we will see further, a weaker requirement is sometimes sufficient. We call a model weakly p-serial if every state has at least one indistinguishable counterpart which lies on a plausible path, i.e. for each q ∈ Q there is a q ~ ∈ Q such that q ∼ a q ~ and q ~ ∈ on (Υa). Obviously, p-seriality implies weak p-seriality. We get the following characterization of both model classes. PROPOSITION 4. M is plausibly serial for agent a iff formula Pl aE 0 ~ is valid in M. M is weakly p-serial for agent a iff ¬ KaPl aA 0 ⊥ is valid in M. 4.1 Axiomatic Properties THEOREM 5. Axioms K, D, 4, and 5 for knowledge are strongly valid, and axiom T is valid. That is, modalities Ka form system S5 in the sense of normal validity, and KD45 in the sense of strong validity. We do not include proofs here due to lack of space. The interested reader is referred to [2], where detailed proofs are given. PROPOSITION 6. Axioms K, 4, and 5 for beliefs are strongly valid. That is, we have:| ≡ (Baϕ ∧ Ba (ϕ → ψ)) → Baψ, | ≡ (Baϕ → BaBaϕ), and | ≡ (¬ Baϕ → Ba ¬ Baϕ). The next proposition concerns the "consistency" axiom D: Baϕ → ¬ Ba ¬ ϕ. It is easy to see that the axiom is not valid in general: as we have no restrictions on plausibility sets Υa, it may be as well that Υa = ∅. In that case we have Baϕ ∧ Ba ¬ ϕ for all formulae ϕ, because the set of states to be considered becomes empty. However, it turns out that D is valid for a very natural class of models. PROPOSITION 7. Axiom D for beliefs is not valid in the class of all CTLKP models. However, it is strongly valid in the class of weak p-serial models (and therefore also in the class of p-serial models). Moreover, as one may expect, beliefs do not have to be always true. PROPOSITION 8. Axiom T for beliefs is not valid; i.e., ~ | = (Baϕ → ϕ). The axiom is not even valid in the class of p-serial models. THEOREM 9. Belief modalities Ba form system K45 in the class of all models, and KD45 in the class of weakly plausibly serial models (in the sense of both normal and strong validity). Axiom T is not even valid for p-serial models. 4.2 Plausibility, Knowledge, and Beliefs First, we investigate the relationship between knowledge and plausibility/physicality operators. Then, we look at the interaction between knowledge and beliefs. PROPOSITION 10. Let ϕ be a CTLKP formula, and M be a CTLKP model. We have the following strong validities: (i) | ≡ Pl aKaϕ ↔ Kaϕ (ii) | ≡ Ph Kaϕ ↔ KaPh ϕ and | ≡ KaPh ϕ ↔ Kaϕ We now want to examine the relationship between knowledge and belief. For instance, if agent a believes in something, he knows that he believes it. 
Or, if he knows a fact, he also believes that he knows it. On the other hand, for instance, an agent does not necessarily believe in all the things he knows. For example, we may know that an invasion from another galaxy is in principle possible (KaE ♦ invasion), but if we do not take this possibility as plausible (¬ Pl aE ♦ invasion), then we reject the corresponding belief in consequence (¬ BaE ♦ invasion). Note that this property reflects the strong connection between belief and plausibility in our framework. The last invalidity is especially important: it is not the case that knowing something implies believing in it. This emphasizes that we study a specific concept of beliefs here. Note that its specific is not due to the plausibility-based definition of beliefs. The reason lies rather in the fact that we investigate knowledge, beliefs and plausibility in a temporal framework, as Proposition 12 shows. PROPOSITION 12. Let ϕ be a CTLKP formula that does not include any temporal operators. Then Kaϕ → Baϕ is strongly valid, and in the class of p-serial models we have even that | ≡ Kaϕ ↔ Baϕ. The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 587 Moreover, it is important that we use branching time with explicit quantification over paths; this observation is formalized in Proposition 13. We call such ϕu universal formulae, and γu universal path The following two theorems characterize the relationship between knowledge and beliefs: first for the class of p-serial models, and then, finally, for all models. 4.3 Properties of the Update The first notable property of plausibility update is that it influences only formulae in which plausibility plays a role, i.e. ones in which belief or plausibility modalities occur. PROPOSITION 16. Let ϕ be a CTLKP formula that does not include operators Pl a and Ba, and γ be a CTLKP path formula. Then, we have | ≡ ϕ ↔ (set-pla γ) ϕ. What can be said about the result of an update? At first sight, formula (set-pla γ) Pl aAγ seems a natural characterization; however, it is not valid. This is because, by leaving the other (implausible) paths out of scope, we may leave out of | γ | some paths that were needed to satisfy γ (see the example in Section 4.2). We propose two alternative ways out: the first one restricts the language of the update similarly to [21]; the other refers to physical possibilities, in a way analogous to [13]. PROPOSITION 17. The CTLKP formula (set-pla γ) Pl aAγ is not valid. However, we have the following validities: (i) | ≡ (set-pla γu) Pl aAγu, where γu is a universal CTLK path formula from Definition 1. (ii) If ϕ, ϕ1, ϕ2 are arbitrary CTLK formulae, then:| ≡ (set-pla Oϕ) Pl aA O (Ph ϕ), | ≡ (set-pla ❑ ϕ) Pl aA ❑ (Ph ϕ), and | ≡ (set-pla ϕ1U ϕ2) Pl aA (Ph ϕ1) U (Ph ϕ2). 5. VERIFICATION OF PLAUSIBILITY, TIME AND BELIEFS In this section we report preliminary results on model checking CTLKP formulae. Clearly, verifying CTLKP properties directly against models with plausibility does not make much sense, since these models are inherently infinite; what we need is a finite representation of plausibility sets. One such representation has been discussed in Section 3.3: plausibility sets can be defined by path formulae and the update operator (set-pla γ). 
We follow this idea here, studying the complexity of model checking CTLKP formulae against CTLK models (which can be seen as a compact representation of CTLKP models in which all the paths are assumed plausible), with the underlying idea that plausibility sets, when needed, must be defined explicitly in the object language. Below we sketch an algorithm that model-checks CTLKP formulae in time linear wrt the size of the model and the length of the formula. This means that we have extended CTLK to a more expressive language with no computational price to pay. First of all, we get rid of the belief operators (due to Theorem 15), replacing every occurrence of Baϕ with KaPl a (E O ~ → ϕ). Now, let → − γ = ~ γ1,..., γk be a vector of "vanilla" path formulae (one per agent), with the initial vector − γ0 → = ~ ~,..., ~, and → − γ [γ' / a] denoting vector → − γ, in which → − γ [a] is replaced with γ'. Additionally, we define → − γ [0] = ~. We translate the resulting CTLKP formulae to ones without plausibility via function tr (ϕ) = tr − → γ0,0 (ϕ), defined as follows: Note that the resulting sentences belong to the logic of CTLK +, that is CTL + (where each path quantifier can be followed by a Boolean combination of "vanilla" path formulae) 9 with epistemic modalities. The following proposition justifies the translation. PROPOSITION 18. For any CTLKP formula ϕ without Ba, we have that M, q | = CTLKP ϕ iff M, q | = CTLK + tr (ϕ). In general, model checking CTL + (and also CTLK +) is ΔP2 - complete. However, in our case, the Boolean combinations of path subformulae are always conjunctions of at most two non-negated elements, which allows us to propose the following model checking algorithm. First, subformulae are evaluated recursively: for every subformula ψ of ϕ, the set of states in M that satisfy ψ is computed and labeled with a new proposition pψ. Now, it is enough to define checking M, q | = ϕ for ϕ in which all (state) subformulae are propositions, with the following cases: Case M, q | = E (❑ p ∧ γ): If M, q | = p, then return no. Otherwise, remove from M all the states that do not satisfy p (yielding a sparser model M'), and check the CTL formula Eγ in M', q with any CTL model-checker. Case M, q | = E (Op ∧ γ): Create M' by adding a copy q' of state q, in which only the transitions to states satisfying p are kept (i.e., M, q' | = r iff M, q | = r; and q' Rq' ' iff qRq' ' and M, q' ' | = p). Then, check Eγ in M', q'. 588 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) Case M, q | = E (p1U p2 ∧ p3U p4): Note that this is equivalent to checking E (p1 ∧ p3) U (p2 ∧ Ep3U p4) ∨ E (p1 ∧ p3) U (p4 ∧ Ep1U p2), which is a CTL formula. Other cases: The above cases cover all possible formulas that begin with a path quantifier. For other cases, standard CTLK model checking can be used. THEOREM 19. Model checking CTLKP against CTLK models is PTIME-complete, and can be done in time O (ml), where m is the number of transitions in the model, and l is the length of the formula to be checked. That is, the complexity is no worse than for CTLK itself. 6. CONCLUSIONS In this paper a notion of plausible behavior is considered, with the underlying idea that implausible options should be usually ignored in practical reasoning about possible future courses of action. We add the new notion of plausibility to the logic of CTLK [19], and obtain a language which enables reasoning about what can (or must) plausibly happen. 
As a technical device to define the semantics of the resulting logic, we use a non-standard satisfaction relation | =P that allows to propagate the "current" set of plausible paths into subformulae. Furthermore, we propose a non-standard notion of beliefs, defined in terms of indistinguishability and plausibility. We also propose how plausibility assumptions can be specified in the object language via a plausibility update operator (in a way similar to [13]). We use this new framework to investigate some important properties of plausibility, knowledge, beliefs, and updates. In particular, we show that knowledge is an S5 modality, and that beliefs satisfy axioms K45 in general, and KD45 for the class of plausibly serial models. We also prove that believing in ϕ is knowing that ϕ plausibly holds in all plausibly possible situations. That is, the relationship between knowledge and beliefs is very natural and reflects the initial intuition precisely. Moreover, the model checking results from Section 5 show that verification for CTLKP is no more complex than for CTL and CTLK. We would like to stress that we do not see this contribution as a mere technical exercise in formal logic. Human agents use a similar concept of plausibility and "practical" beliefs in their everyday reasoning in order to reduce the search space and make the reasoning feasible. As a consequence, we suggest that the framework we propose may prove suitable for modeling, design, and analysis resource-bounded agents in general. We would like to thank Juergen Dix for fruitful discussions, useful comments and improvements.
Agents, Beliefs, and Plausible Behavior in a Temporal Setting ABSTRACT Logics of knowledge and belief are often too static and inflexible to be used on real-world problems. In particular, they usually offer no concept for expressing that some course of events is more likely to happen than another. We address this problem and extend CTLK (computation tree logic with knowledge) with a notion of plausibility, which allows for practical and counterfactual reasoning. The new logic CTLKP (CTLK with plausibility) includes also a particular notion of belief. A plausibility update operator is added to this logic in order to change plausibility assumptions dynamically. Furthermore, we examine some important properties of these concepts. In particular, we show that, for a natural class of models, belief is a KD45 modality. We also show that model checking CTLKP is PTIME-complete and can be done in time linear with respect to the size of models and formulae. 1. INTRODUCTION Notions like time, knowledge, and beliefs are very important for analyzing the behavior of agents and multi-agent systems. In this paper, we extend modal logics of time and knowledge with a concept of plausible behavior: this notion is added to the language of CTLK [19], which is a straightforward combination of the branching-time temporal logic CTL [4, 3] and standard epistemic logic [9, 5]. In our approach, plausibility can be seen as a temporal property of behaviors. That is, some behaviors of the system can be assumed plausible and others implausible, with the underlying idea that the latter should perhaps be ignored in practical reasoning about possible future courses of action. Moreover, behaviors can be formally understood as temporal paths in the Kripke structure modeling a multiagent system. As a consequence, we obtain a language to reason about what can (or must) plausibly happen. We propose a particular notion of beliefs (inspired by [20, 7]), defined in terms of epistemic relations and plausibility. The main intuition is that beliefs are facts that an agent would know if he assumed that only plausible things could happen. We believe that humans use such a concept of plausibility and "practical beliefs" quite often in their everyday reasoning. Restricting one's reasoning to plausible possibilities is essential to make the reasoning feasible, as the space of all possibilities is exceedingly large in real life. We investigate some important properties of plausibility, knowledge, and belief in this new framework. In particular, we show that knowledge is an S5 modality, and that beliefs satisfy axioms K45 in general, and KD45 for the class of plausibly serial models. Finally, we show that the relationship between knowledge and belief for plausibly serial models is natural and reflects the initial intuition well. We also show how plausibility assumptions can be specified in the object language via a plausibility update operator, and we study properties of such updates. Finally, we show that model checking of the new logic is no more complex than model checking CTL and CTLK. Our ultimate goal is to come up with a logic that allows the study of strategies, time, knowledge, and plausible/rational behavior under both perfect and imperfect information. As combining all these dimensions is highly nontrivial (cf. [12, 14]) it seems reasonable to split this task. 
While this paper deals with knowledge, plausibility, and belief, the companion paper [11] proposes a general framework for multi-agent systems that regard game-theoretical rationality criteria like Nash equilibrium, Pareto optimality, etc. . The latter approach is based on the more powerful logic ATL [1]. The paper is structured as follows. Firstly, we briefly present branching-time logic with knowledge, CTLK. In Section 3 we present our approach to plausibility and formally define CTLK with plausibility. We also show how temporal formulae can be used to describe plausible paths, and we compare our logic with existing related work. In Section 4, properties of knowledge, belief, and plausibility are explored. Finally, we present verification complexity results for CTLKP in Section 5. 2. BRANCHING TIME AND KNOWLEDGE 3. EXTENDING TIME AND KNOWLEDGE WITH PLAUSIBILITY AND BELIEFS 3.1 The Concept of Plausibility The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 583 3.2 CTLK with Plausibility 584 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 3.3 Defining Plausible Paths with Formulae 3.4 Comparison to Related Work Moses and Shoham: Beliefs as Conditional Knowledge Friedman and Halpern: Plausibility Spaces The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 585 What Are the Differences to Our Logic? 586 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 4. PLAUSIBILITY, KNOWLEDGE, AND BELIEFS IN CTLKP 4.1 Axiomatic Properties 4.2 Plausibility, Knowledge, and Beliefs The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 587 4.3 Properties of the Update 5. VERIFICATION OF PLAUSIBILITY, TIME AND BELIEFS 588 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 6. CONCLUSIONS In this paper a notion of plausible behavior is considered, with the underlying idea that implausible options should be usually ignored in practical reasoning about possible future courses of action. We add the new notion of plausibility to the logic of CTLK [19], and obtain a language which enables reasoning about what can (or must) plausibly happen. As a technical device to define the semantics of the resulting logic, we use a non-standard satisfaction relation | =P that allows to propagate the "current" set of plausible paths into subformulae. Furthermore, we propose a non-standard notion of beliefs, defined in terms of indistinguishability and plausibility. We also propose how plausibility assumptions can be specified in the object language via a plausibility update operator (in a way similar to [13]). We use this new framework to investigate some important properties of plausibility, knowledge, beliefs, and updates. In particular, we show that knowledge is an S5 modality, and that beliefs satisfy axioms K45 in general, and KD45 for the class of plausibly serial models. We also prove that believing in ϕ is knowing that ϕ plausibly holds in all plausibly possible situations. That is, the relationship between knowledge and beliefs is very natural and reflects the initial intuition precisely. Moreover, the model checking results from Section 5 show that verification for CTLKP is no more complex than for CTL and CTLK. We would like to stress that we do not see this contribution as a mere technical exercise in formal logic. 
Human agents use a similar concept of plausibility and "practical" beliefs in their everyday reasoning in order to reduce the search space and make the reasoning feasible. As a consequence, we suggest that the framework we propose may prove suitable for the modeling, design, and analysis of resource-bounded agents in general. We would like to thank Juergen Dix for fruitful discussions, useful comments and improvements.
Agents, Beliefs, and Plausible Behavior in a Temporal Setting ABSTRACT Logics of knowledge and belief are often too static and inflexible to be used on real-world problems. In particular, they usually offer no concept for expressing that some course of events is more likely to happen than another. We address this problem and extend CTLK (computation tree logic with knowledge) with a notion of plausibility, which allows for practical and counterfactual reasoning. The new logic CTLKP (CTLK with plausibility) includes also a particular notion of belief. A plausibility update operator is added to this logic in order to change plausibility assumptions dynamically. Furthermore, we examine some important properties of these concepts. In particular, we show that, for a natural class of models, belief is a KD45 modality. We also show that model checking CTLKP is PTIME-complete and can be done in time linear with respect to the size of models and formulae. 1. INTRODUCTION Notions like time, knowledge, and beliefs are very important for analyzing the behavior of agents and multi-agent systems. In this paper, we extend modal logics of time and knowledge with a concept of plausible behavior: this notion is added to the language of CTLK [19], which is a straightforward combination of the branching-time temporal logic CTL [4, 3] and standard epistemic logic [9, 5]. In our approach, plausibility can be seen as a temporal property of behaviors. Moreover, behaviors can be formally understood as temporal paths in the Kripke structure modeling a multiagent system. As a consequence, we obtain a language to reason about what can (or must) plausibly happen. We propose a particular notion of beliefs (inspired by [20, 7]), defined in terms of epistemic relations and plausibility. The main intuition is that beliefs are facts that an agent would know if he assumed that only plausible things could happen. We believe that humans use such a concept of plausibility and "practical beliefs" quite often in their everyday reasoning. We investigate some important properties of plausibility, knowledge, and belief in this new framework. In particular, we show that knowledge is an S5 modality, and that beliefs satisfy axioms K45 in general, and KD45 for the class of plausibly serial models. Finally, we show that the relationship between knowledge and belief for plausibly serial models is natural and reflects the initial intuition well. We also show how plausibility assumptions can be specified in the object language via a plausibility update operator, and we study properties of such updates. Finally, we show that model checking of the new logic is no more complex than model checking CTL and CTLK. Our ultimate goal is to come up with a logic that allows the study of strategies, time, knowledge, and plausible/rational behavior under both perfect and imperfect information. The latter approach is based on the more powerful logic ATL [1]. The paper is structured as follows. Firstly, we briefly present branching-time logic with knowledge, CTLK. In Section 3 we present our approach to plausibility and formally define CTLK with plausibility. We also show how temporal formulae can be used to describe plausible paths, and we compare our logic with existing related work. In Section 4, properties of knowledge, belief, and plausibility are explored. Finally, we present verification complexity results for CTLKP in Section 5. 6. 
CONCLUSIONS In this paper a notion of plausible behavior is considered, with the underlying idea that implausible options should be usually ignored in practical reasoning about possible future courses of action. We add the new notion of plausibility to the logic of CTLK [19], and obtain a language which enables reasoning about what can (or must) plausibly happen. Furthermore, we propose a non-standard notion of beliefs, defined in terms of indistinguishability and plausibility. We also propose how plausibility assumptions can be specified in the object language via a plausibility update operator (in a way similar to [13]). We use this new framework to investigate some important properties of plausibility, knowledge, beliefs, and updates. In particular, we show that knowledge is an S5 modality, and that beliefs satisfy axioms K45 in general, and KD45 for the class of plausibly serial models. We also prove that believing in ϕ is knowing that ϕ plausibly holds in all plausibly possible situations. That is, the relationship between knowledge and beliefs is very natural and reflects the initial intuition precisely. Moreover, the model checking results from Section 5 show that verification for CTLKP is no more complex than for CTL and CTLK. We would like to stress that we do not see this contribution as a mere technical exercise in formal logic. Human agents use a similar concept of plausibility and "practical" beliefs in their everyday reasoning in order to reduce the search space and make the reasoning feasible.
H-38
DiffusionRank: A Possible Penicillin for Web Spamming
While the PageRank algorithm has proven to be very effective for ranking Web pages, the rank scores of Web pages can be manipulated. To handle the manipulation problem and to cast a new insight on the Web structure, we propose a ranking algorithm called DiffusionRank. DiffusionRank is motivated by the heat diffusion phenomena, which can be connected to Web ranking because the activities flow on the Web can be imagined as heat flow, the link from a page to another can be treated as the pipe of an air-conditioner, and heat flow can embody the structure of the underlying Web graph. Theoretically we show that DiffusionRank can serve as a generalization of PageRank when the heat diffusion coefficient γ tends to infinity. In such a case 1/γ = 0, DiffusionRank (PageRank) has low ability of anti-manipulation. When γ = 0, DiffusionRank obtains the highest ability of anti-manipulation, but in such a case, the web structure is completely ignored. Consequently, γ is an interesting factor that can control the balance between the ability of preserving the original Web and the ability of reducing the effect of manipulation. It is found empirically that, when γ = 1, DiffusionRank has a Penicillin-like effect on the link manipulation. Moreover, DiffusionRank can be employed to find group-to-group relations on the Web, to divide the Web graph into several parts, and to find link communities. Experimental results show that the DiffusionRank algorithm achieves the above mentioned advantages as expected.
[ "diffusionrank", "rank", "web spam", "pagerank", "web graph", "group-to-group relat", "link commun", "random graph", "keyword stuf", "link stuf", "machin learn", "link analysi", "seed select algorithm", "gaussian kernel smooth", "equal vote abil" ]
[ "P", "P", "P", "P", "P", "P", "P", "M", "U", "M", "U", "M", "M", "U", "M" ]
DiffusionRank: A Possible Penicillin for Web Spamming Haixuan Yang, Irwin King, and Michael R. Lyu Dept. of Computer Science and Engineering The Chinese University of Hong Kong Shatin, NT, Hong Kong {hxyang,king,lyu}@cse. cuhk.edu.hk ABSTRACT While the PageRank algorithm has proven to be very effective for ranking Web pages, the rank scores of Web pages can be manipulated. To handle the manipulation problem and to cast a new insight on the Web structure, we propose a ranking algorithm called DiffusionRank. DiffusionRank is motivated by the heat diffusion phenomena, which can be connected to Web ranking because the activities flow on the Web can be imagined as heat flow, the link from a page to another can be treated as the pipe of an air-conditioner, and heat flow can embody the structure of the underlying Web graph. Theoretically we show that DiffusionRank can serve as a generalization of PageRank when the heat diffusion coefficient γ tends to infinity. In such a case 1/γ = 0, DiffusionRank (PageRank) has low ability of anti-manipulation. When γ = 0, DiffusionRank obtains the highest ability of anti-manipulation, but in such a case, the web structure is completely ignored. Consequently, γ is an interesting factor that can control the balance between the ability of preserving the original Web and the ability of reducing the effect of manipulation. It is found empirically that, when γ = 1, DiffusionRank has a Penicillin-like effect on the link manipulation. Moreover, DiffusionRank can be employed to find group-to-group relations on the Web, to divide the Web graph into several parts, and to find link communities. Experimental results show that the DiffusionRank algorithm achieves the above mentioned advantages as expected. Categories and Subject Descriptors: H.3.3 [Information Systems]: Information Search and Retrieval; G2.2 [Discrete Mathematics]: Graph Theory General Terms: Algorithms. 1. INTRODUCTION While the PageRank algorithm [13] has proven to be very effective for ranking Web pages, inaccurate PageRank results are induced because of web page manipulations by people for commercial interests. The manipulation problem is also called the Web spam, which refers to hyperlinked pages on the World Wide Web that are created with the intention of misleading search engines [7]. It is reported that approximately 70% of all pages in the . biz domain and about 35% of the pages in the . us domain belong to the spam category [12]. The reason for the increasing amount of Web spam is explained in [12]: some web site operators try to influence the positioning of their pages within search results because of the large fraction of web traffic originating from searches and the high potential monetary value of this traffic. From the viewpoint of the Web site operators who want to increase the ranking value of a particular page for search engines, Keyword Stuffing and Link Stuffing are being used widely [7, 12]. From the viewpoint of the search engine managers, the Web spam is very harmful to the users'' evaluations and thus their preference to choosing search engines because people believe that a good search engine should not return irrelevant or low-quality results. There are two methods being employed to combat the Web spam problem. Machine learning methods are employed to handle the keyword stuffing. 
To successfully apply machine learning methods, we need to dig out some useful textual features for Web pages, to mark part of the Web pages as either spam or non-spam, then to apply supervised learning techniques to mark other pages. For example, see [5, 12]. Link analysis methods are also employed to handle the link stuffing problem. One example is the TrustRank [7], a link-based method, in which the link structure is utilized so that human labelled trusted pages can propagate their trust scores trough their links. This paper focuses on the link-based method. The rest of the materials are organized as follows. In the next section, we give a brief literature review on various related ranking techniques. We establish the Heat Diffusion Model (HDM) on various cases in Section 3, and propose DiffusionRank in Section 4. In Section 5, we describe the data sets that we worked on and the experimental results. Finally, we draw conclusions in Section 6. 2. LITERATURE REVIEW The importance of a Web page is determined by either the textual content of pages or the hyperlink structure or both. As in previous work [7, 13], we focus on ranking methods solely determined by hyperlink structure of the Web graph. All the mentioned ranking algorithms are established on a graph. For our convenience, we first give some notations. Denote a static graph by G = (V, E), where V = {v1, v2, ... , vn}, E = {(vi, vj) | there is an edge from vi to vj}. Ii and di denote the in-degree and the out-degree of page i respectively. 2.1 PageRank The importance of a Web page is an inherently subjective matter, which depends on the reader``s interests, knowledge and attitudes [13]. However, the average importance of all readers can be considered as an objective matter. PageRank tries to find such average importance based on the Web link structure, which is considered to contain a large amount of statistical data. The Web is modelled by a directed graph G in the PageRank algorithms, and the rank or importance xi for page vi ∈ V is defined recursively in terms of pages which point to it: xi = (j,i)∈E aijxj, where aij is assumed to be 1/dj if there is a link from j to i, and 0 otherwise. Or in matrix terms, x = Ax. When the concept of random jump is introduced, the matrix form is changed to x = [(1 − α)g1T + αA]x, (1) where α is the probability of following the actual link from a page, (1 − α) is the probability of taking a random jump, and g is a stochastic vector, i.e., 1T g = 1. Typically, α = 0.85, and g = 1 n 1 is one of the standard settings, where 1 is the vector of all ones [6, 13]. 2.2 TrustRank TrustRank [7] is composed of two parts. The first part is the seed selection algorithm, in which the inverse PageRank was proposed to help an expert of determining a good node. The second part is to utilize the biased PageRank, in which the stochastic distribution g is set to be shared by all the trusted pages found in the first part. Moreover, the initial input of x is also set to be g. The justification for the inverse PageRank and the solid experiments support its advantage in combating the Web spam. Although there are many variations of PageRank, e.g., a family of link-based ranking algorithms in [2], TrustRank is especially chosen for comparisons for three reasonss: (1) it is designed for combatting spamming; (2) its fixed parameters make a comparison easy; and (3) it has a strong theoretical relations with PageRank and DiffusionRank. 2.3 Manifold Ranking In [17], the idea of ranking on the data manifolds was proposed. 
The data points represented as vectors in Euclidean space are considered to be drawn from a manifold. From the data points on such a manifold, an undirected weighted graph is created, then the weight matrix is given by the Gaussian Kernel smoothing. While the manifold ranking algorithm achieves an impressive result on ranking images, the biased vector g and the parameter k in the general personalized PageRank in [17] are unknown in the Web graph setting; therefore we do not include it in the comparisons. 2.4 Heat Diffusion Heat diffusion is a physical phenomena. In a medium, heat always flow from position with high temperature to position with low temperature. Heat kernel is used to describe the amount of heat that one point receives from another point. Recently, the idea of heat kernel on a manifold is borrowed in applications such as dimension reduction [3] and classification [9, 10, 14]. In these work, the input data is considered to lie in a special structure. All the above topics are related to our work. The readers can find that our model is a generalization of PageRank in order to resist Web manipulation, that we inherit the first part of TrustRank, that we borrow the concept of ranking on the manifold to introduce our model, and that heat diffusion is a main scheme in this paper. 3. HEAT DIFFUSION MODEL Heat diffusion provides us with another perspective about how we can view the Web and also a way to calculate ranking values. In this paper, the Web pages are considered to be drawn from an unknown manifold, and the link structure forms a directed graph, which is considered as an approximation to the unknown manifold. The heat kernel established on the Web graph is considered as the representation of the relationship between Web pages. The temperature distribution after a fixed time period, induced by a special initial temperature distribution, is considered as the rank scores on the Web pages. Before establishing the proposed models, we first show our motivations. 3.1 Motivations There are two points to explain that PageRank is susceptible to web spam. • Over-democratic. There is a belief behind PageRank-all pages are born equal. This can be seen from the equal voting ability of one page: the sum of each column is equal to one. This equal voting ability of all pages gives the chance for a Web site operator to increase a manipulated page by creating a large number of new pages pointing to this page since all the newly created pages can obtain an equal voting right. • Input-independent. For any given non-zero initial input, the iteration will converge to the same stable distribution corresponding to the maximum eigenvalue 1 of the transition matrix. This input-independent property makes it impossible to set a special initial input (larger values for trusted pages and less values even negative values for spam pages) to avoid web spam. The input-independent feature of PageRank can be further explained as follows. P = [(1 − α)g1T + αA] is a positive stochastic matrix if g is set to be a positive stochastic vector (the uniform distribution is one of such settings), and so the largest eigenvalue is 1 and no other eigenvalue whose absolute value is equal to 1, which is guaranteed by the Perron Theorem [11]. Let y be the eigenvector corresponding to 1, then we have Py = y. Let {xk} be the sequence generated from the iterations xk+1 = Pxk, and x0 is the initial input. If {xk} converges to x, then xk+1 = Pxk implies that x must satisfy Px = x. 
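The iteration just described is easy to check numerically. Below is a minimal sketch (an illustration only, not the authors' implementation, which Section 5.3 notes was written in C and Matlab; the 3-page transition matrix and the two starting vectors are made up for this example). It builds P = (1 − α)g1T + αA as in Eq. (1) and runs xk+1 = Pxk from two different non-zero inputs; both runs converge to the same stable distribution, which is exactly the input-independence property being argued here.

```python
import numpy as np

# Hypothetical 3-page web; column j of A lists where page j's votes go (Eq. (1)).
A = np.array([[0.0, 0.5, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 0.5, 0.0]])          # each column sums to 1 (equal voting ability)
n, alpha = 3, 0.85
g = np.ones(n) / n                        # uniform teleportation vector
P = (1 - alpha) * np.outer(g, np.ones(n)) + alpha * A   # P = (1-alpha) g 1^T + alpha A

def power_iteration(x0, steps=200):
    x = np.asarray(x0, dtype=float)
    x = x / x.sum()                       # normalise so that 1^T x = 1
    for _ in range(steps):
        x = P @ x                         # x_{k+1} = P x_k
    return x

x_a = power_iteration([1.0, 0.0, 0.0])    # all initial weight on page 0
x_b = power_iteration([0.1, 0.1, 0.8])    # a very different initial input
print(x_a, x_b, np.allclose(x_a, x_b))    # the two limits coincide
```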
Since the only maximum eigenvalue is 1, we have x = cy where c is a constant, and if both x and y are normalized by their sums, then c = 1. The above discussions show that PageRank is independent of the initial input x0. In our opinion, g and α are objective parameters determined by the users'' behaviors and preferences. A, α and g are the true web structure. While A is obtained by a crawler and the setting α = 0.85 is accepted by the people, we think that g should be determined by a user behavior investigation, something like [1]. Without any prior knowledge, g has to be set as g = 1 n 1. TrustRank model does not follow the true web structure by setting a biased g, but the effects of combatting spamming are achieved in [7]; PageRank is on the contrary in some ways. We expect a ranking algorithm that has an effect of anti-manipulation as TrustRank while respecting the true web structure as PageRank. We observe that the heat diffusion model is a natural way to avoid the over-democratic and input-independent feature of PageRank. Since heat always flows from a position with higher temperatures to one with lower temperatures, points are not equal as some points are born with high temperatures while others are born with low temperatures. On the other hand, different initial temperature distributions will give rise to different temperature distributions after a fixed time period. Based on these considerations, we propose the novel DiffusionRank. This ranking algorithm is also motivated by the viewpoint for the Web structure. We view all the Web pages as points drawn from a highly complex geometric structure, like a manifold in a high dimensional space. On a manifold, heat can flow from one point to another through the underlying geometric structure in a given time period. Different geometric structures determine different heat diffusion behaviors, and conversely the diffusion behavior can reflect the geometric structure. More specifically, on the manifold, the heat flows from one point to another point, and in a given time period, if one point x receives a large amount of heat from another point y, we can say x and y are well connected, and thus x and y have a high similarity in the sense of a high mutual connection. We note that on a point with unit mass, the temperature and the heat of this point are equivalent, and these two terms are interchangeable in this paper. In the following, we first show the HDM on a manifold, which is the origin of HDM, but cannot be employed to the World Wide Web directly, and so is considered as the ideal case. To connect the ideal case and the practical case, we then establish HDM on a graph as an intermediate case. To model the real world problem, we further build HDM on a random graph as a practical case. Finally we demonstrate the DiffusionRank which is derived from the HDM on a random graph. 3.2 Heat Flow On a Known Manifold If the underlying manifold is known, the heat flow throughout a geometric manifold with initial conditions can be described by the following second order differential equation: ∂f(x,t) ∂t − ∆f(x, t) = 0, where f(x, t) is the heat at location x at time t, and ∆f is the Laplace-Beltrami operator on a function f. The heat diffusion kernel Kt(x, y) is a special solution to the heat equation with a special initial condition-a unit heat source at position y when there is no heat in other positions. 
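As a small aside (not taken from the paper), the simplest manifold on which Kt(x, y) is known in closed form is the real line, where the heat kernel of ft = fxx is Kt(x, y) = exp(−(x − y)2/(4t))/√(4πt). The short sketch below evaluates this closed form to show how a unit heat source at y spreads over time, i.e. how the connectivity between x and y decays with distance and widens with t; the positions and times used are arbitrary choices for illustration.

```python
import numpy as np

def heat_kernel_1d(x, y, t):
    """Closed-form heat kernel on the real line for f_t = f_xx:
    the heat at x at time t produced by a unit heat source placed at y at time 0."""
    return np.exp(-(x - y) ** 2 / (4.0 * t)) / np.sqrt(4.0 * np.pi * t)

source = 0.0                                  # position y of the unit heat source
xs = np.linspace(-3.0, 3.0, 7)                # positions at which we observe the heat
for t in (0.1, 1.0, 5.0):
    print(f"t = {t:>3}:", np.round(heat_kernel_1d(xs, source, t), 4))
# Small t: the heat is still concentrated near y; larger t: it spreads out,
# so K_t(x, y) behaves like a similarity between x and y that depends on t.
```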
Based on this, the heat kernel Kt(x, y) describes the heat distribution at time t diffusing from the initial unit heat source at position y, and thus describes the connectivity (which is considered as a kind of similarity) between x and y. However, it is very difficult to represent the World Wide Web as a regular geometry with a known dimension; even the underlying is known, it is very difficult to find the heat kernel Kt(x, y), which involves solving the heat equation with the delta function as the initial condition. This motivates us to investigate the heat flow on a graph. The graph is considered as an approximation to the underlying manifold, and so the heat flow on the graph is considered as an approximation to the heat flow on the manifold. 3.3 On an Undirected Graph On an undirected graph G, the edge (vi, vj) is considered as a pipe that connects nodes vi and vj. The value fi(t) describes the heat at node vi at time t, beginning from an initial distribution of heat given by fi(0) at time zero. f(t) (f(0)) denotes the vector consisting of fi(t) (fi(0)). We construct our model as follows. Suppose, at time t, each node i receives M(i, j, t, ∆t) amount of heat from its neighbor j during a period of ∆t. The heat M(i, j, t, ∆t) should be proportional to the time period ∆t and the heat difference fj(t) − fi(t). Moreover, the heat flows from node j to node i through the pipe that connects nodes i and j. Based on this consideration, we assume that M(i, j, t, ∆t) = γ(fj(t) − fi(t))∆t. As a result, the heat difference at node i between time t + ∆t and time t will be equal to the sum of the heat that it receives from all its neighbors. This is formulated as fi(t + ∆t) − fi(t) = j:(j,i)∈E γ(fj(t) − fi(t))∆t, (2) where E is the set of edges. To find a closed form solution to Eq. (2), we express it in a matrix form: (f(t + ∆t) − f(t))/∆t = γHf(t), where d(v) denotes the degree of the node v. In the limit ∆t → 0, it becomes d dt f(t) = γHf(t). Solving it, we obtain f(t) = eγtH f(0), especially we have f(1) = eγH f(0), Hij = −d(vj), j = i, 1, (vj, vi) ∈ E, 0, otherwise, (3) where eγH is defined as eγH = I+γH+ γ2 2! H2 + γ3 3! H3 +· · · . 3.4 On a Directed Graph The above heat diffusion model must be modified to fit the situation where the links between Web pages are directed. On one Web page, when the page-maker creates a link (a, b) to another page b, he actually forces the energy flow, for example, people``s click-through activities, to that page, and so there is added energy imposed on the link. As a result, heat flows in a one-way manner, only from a to b, not from b to a. Based on such consideration, we modified the heat diffusion model on an undirected graph as follows. On a directed graph G, the pipe (vi, vj) is forced by added energy such that heat flows only from vi to vj. Suppose, at time t, each node vi receives RH = RH(i, j, t, ∆t) amount of heat from vj during a period of ∆t. We have three assumptions: (1) RH should be proportional to the time period ∆t; (2) RH should be proportional to the the heat at node vj; and (3) RH is zero if there is no link from vj to vi. As a result, vi will receive j:(vj ,vi)∈E σjfj(t)∆t amount of heat from all its neighbors that points to it. On the other hand, node vi diffuses DH(i, t, ∆t) amount of heat to its subsequent nodes. We assume that: (1) The heat DH(i, t, ∆t) should be proportional to the time period ∆t. (2) The heat DH(i, t, ∆t) should be proportional to the the heat at node vi. (3) Each node has the same ability of diffusing heat. 
This fits the intuition that a Web surfer only has one choice to find the next page that he wants to browse. (4) The heat DH(i, t, ∆t) should be uniformly distributed to its subsequent nodes. The real situation is more complex than what we assume, but we have to make this simple assumption in order to make our model concise. As a result, node vi will diffuse γfi(t)∆t/di amount of heat to any of its subsequent nodes, and each of its subsequent node should receive γfi(t)∆t/di amount of heat. Therefore σj = γ/dj. To sum up, the heat difference at node vi between time t+∆t and time t will be equal to the sum of the heat that it receives, deducted by what it diffuses. This is formulated as fi(t + ∆t) − fi(t) = −γfi(t)∆t + j:(vj ,vi)∈E γ/djfj(t)∆t. Similarly, we obtain f(1) = eγH f(0), Hij = −1, j = i, 1/dj, (vj, vi) ∈ E, 0, otherwise. (4) 3.5 On a Random Directed Graph For real world applications, we have to consider random edges. This can be seen in two viewpoints. The first one is that in Eq. (1), the Web graph is actually modelled as a random graph, there is an edge from node vi to node vj with a probability of (1 − α)gj (see the item (1 − α)g1T ), and that the Web graph is predicted by a random graph [15, 16]. The second one is that the Web structure is a random graph in essence if we consider the content similarity between two pages, though this is not done in this paper. For these reasons, the model would become more flexible if we extend it to random graphs. The definition of a random graph is given below. Definition 1. A random graph RG = (V, P = (pij)) is defined as a graph with a vertex set V in which the edges are chosen independently, and for 1 ≤ i, j ≤ |V | the probability of (vi, vj) being an edge is exactly pij. The original definition of random graphs in [4], is changed slightly to consider the situation of directed graphs. Note that every static graph can be considered as a special random graph in the sense that pij can only be 0 or 1. On a random graph RG = (V, P), where P = (pij) is the probability of the edge (vi, vj) exists. In such a random graph, the expected heat difference at node i between time t + ∆t and time t will be equal to the sum of the expected heat that it receives from all its antecedents, deducted by the expected heat that it diffuses. Since the probability of the link (vj, vi) is pji, the expected heat flow from node j to node i should be multiplied by pji, and so we have fi(t + ∆t) − fi(t) = −γ fi(t) ∆t + j:(vj ,vi)∈E γpjifj(t)∆t/RD+ (vj), where RD+ (vi) is the expected out-degree of node vi, it is defined as k pik. Similarly we have f(1) = eγR f(0), Rij = −1, j = i; pji/RD+ (vj), j = i. (5) When the graph is large, a direct computation of eγR is time-consuming, and we adopt its discrete approximation: f(1) = (I + γ N R)N f(0). (6) The matrix (I+ γ N R)N in Eq. (6) and matrix eγR in Eq. (5) are called Discrete Diffusion Kernel and the Continuous Diffusion Kernel respectively. Based on the Heat Diffusion Models and their solutions, DiffusionRank can be established on undirected graphs, directed graphs, and random graphs. In the next section, we mainly focus on DiffusionRank in the random graph setting. 4. DIFFUSIONRANK For a random graph, the matrix (I + γ N R)N or eγR can measure the similarity relationship between nodes. Let fi(0)= 1, fj(0) = 0 if j = i, then the vector f(0) represent the unit heat at node vi while all other nodes has zero heat. For such f(0) in a random graph, we can find the heat distribution at time 1 by using Eq. (5) or Eq. (6). 
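To make Eqs. (4)–(6) concrete, here is an illustrative sketch (the 4-node directed graph is invented for this example, and this is not the paper's own implementation): it builds H as in Eq. (4), approximates f(1) = eγH f(0) with the Discrete Diffusion Kernel (I + (γ/N)H)N f(0) of Eq. (6), and compares the result with a truncated series for the continuous kernel; replacing H by R from Eq. (5) gives the random-graph case with the same loop.

```python
import numpy as np

def build_H(edges, n):
    """H of Eq. (4): H[i][i] = -1; H[i][j] = 1/d_j if (v_j, v_i) is an edge; 0 otherwise."""
    d = np.zeros(n)
    for j, i in edges:
        d[j] += 1.0                      # out-degree of the source node
    H = -np.eye(n)
    for j, i in edges:
        H[i, j] = 1.0 / d[j]
    return H

def discrete_diffusion(H, f0, gamma=1.0, N=100):
    """Discrete Diffusion Kernel of Eq. (6): f(1) ~ (I + gamma/N * H)^N f(0)."""
    f = np.asarray(f0, dtype=float).copy()
    for _ in range(N):
        f = f + (gamma / N) * (H @ f)
    return f

# Hypothetical directed graph (source, target): 0->1, 1->2, 2->0, 2->3, 3->0.
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 0)]
H = build_H(edges, n=4)
f0 = np.array([1.0, 0.0, 0.0, 0.0])      # unit heat on node 0, zero elsewhere
print(discrete_diffusion(H, f0, gamma=1.0, N=100))

# Continuous kernel e^{gamma H} f(0) via a truncated series, for comparison (gamma = 1).
K, term = np.eye(4), np.eye(4)
for k in range(1, 30):
    term = term @ H / k                   # the k-th term is H^k / k!
    K = K + term
print(K @ f0)
```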
The heat distribution is exactly the i−th row of the matrix of (I + γ N R)N or eγR . So the ith-row jth-column element hij in the matrix (I + γ∆tR)N or eγR means the amount of heat that vi can receive from vj from time 0 to 1. Thus the value hij can be used to measure the similarity from vj to vi. For a static graph, similarly the matrix (I + γ N H)N or eγH can measure the similarity relationship between nodes. The intuition behind is that the amount h(i, j) of heat that a page vi receives from a unit heat in a page vj in a unit time embodies the extent of the link connections from page vj to page vi. Roughly speaking, when there are more uncrossed paths from vj to vi, vi will receive more heat from vj; when the path length from vj to vi is shorter, vi will receive more heat from vj; and when the pipe connecting vj and vi is wide, the heat will flow quickly. The final heat that vi receives will depend on various paths from vj to vi, their length, and the width of the pipes. Algorithm 1 DiffusionRank Function Input: The transition matrix A; the inverse transition matrix U; the decay factor αI for the inverse PageRank; the decay factor αB for PageRank; number of iterations MI for the inverse PageRank; the number of trusted pages L; the thermal conductivity coefficient γ. Output: DiffusionRank score vector h. 1: s = 1 2: for i = 1 TO MI do 3: s = αI · U · s + (1 − αI ) · 1 n · 1 4: end for 5: Sort s in a decreasing order: π = Rank({1, ... , n}, s) 6: d = 0, Count = 0, i = 0 7: while Count ≤ L do 8: if π(i) is evaluated as a trusted page then 9: d(π(i)) = 1, Count + + 10: end if 11: i + + 12: end while 13: d = d/|d| 14: h = d 15: Find the iteration number MB according to λ 16: for i = 1 TO MB do 17: h = (1 − γ MB )h + γ MB (αB · A · h + (1 − αB) · 1 n · 1) 18: end for 19: RETURN h 4.1 Algorithm For the ranking task, we adopt the heat kernel on a random graph. Formally the DiffusionRank is described in Algorithm 1, in which,the element Uij in the inverse transition matrix U is defined to be 1/Ij if there is a link from i to j, and 0 otherwise. This trusted pages selection procedure by inverse PageRank is completely borrowed from TrustRank [7] except for a fix number of the size of the trusted set. Although the inverse PageRank is not perfect in its ability of determining the maximum coverage, it is appealing because of its polynomial execution time and its reasonable intuition-we actually inverse the original link when we try to build the seed set from those pages that point to many pages that in turn point to many pages and so on. In the algorithm, the underlying random graph is set as P = αB · A + (1 − αB) · 1 n · 1n×n, which is induced by the Web graph. As a result, R = −I + P. In fact, the more general setting for DiffusionRank is P = αB ·A+(1−αB)· 1 n ·g·1T . By such a setting, DiffusionRank is a generalization of TrustRank when γ tends to infinity and when g is set in the same way as TrustRank. However, the second part of TrustRank is not adopted by us. In our model, g should be the true teleportation determined by the user``s browse habits, popularity distribution over all the Web pages, and so on; P should be the true model of the random nature of the World Wide Web. Setting g according to the trusted pages will not be consistent with the basic idea of Heat Diffusion on a random graph. We simply set g = 1 only because we cannot find it without any priori knowledge. Remark. 
In a social network interpretation, DiffusionRank first recognizes a group of trusted people, who may not be highly ranked, but they know many other people. The initially trusted people are endowed with the power to decide who can be further trusted, but cannot decide the final voting results, and so they are not dictators. 4.2 Advantages Next we show the four advantages for DiffusionRank. 4.2.1 Two closed forms First, its solutions have two forms, both of which are closed form. One takes the discrete form, and has the advantage of fast computing while the other takes the continuous form, and has the advantage of being easily analyzed in theoretical aspects. The theoretical advantage has been shown in the proof of theorem in the next section. (a) Group to Group Relations (b) An undirected graph Figure 1: Two graphs 4.2.2 Group-group relations Second, it can be naturally employed to detect the groupgroup relation. For example, let G2 and G1 denote two groups, containing pages (j1, j2, ... , js) and (i1, i2, ... , it), respectively. Then u,v hiu,jv is the total amounts of heat that G1 receives from G2, where hiu,jv is the iu−th row jv−th column element of the heat kernel. More specifically, we need to first set f(0) for such an application as follows. In f(0) = (f1(0), f2(0), ... , fn(0))T , if i ∈ {j1, j2, ... , js}, then fi(0) = 1, and 0 otherwise. Then we employ Eq. (5) to calculate f(1) = (f1(1), f2(1), ... , fn(1))T , finally we sum those fj(1) where j ∈ {i1, i2, ... , it}. Fig. 1 (a) shows the results generated by the DiffusionRank. We consider five groups-five departments in our Engineering Faculty: CSE, MAE, EE, IE, and SE. γ is set to be 1, the numbers in Fig. 1 (a) are the amount of heat that they diffuse to each other. These results are normalized by the total number of each group, and the edges are ignored if the values are less than 0.000001. The group-to-group relations are therefore detected, for example, we can see that the most strong overall tie is from EE to IE. While it is a natural application for DiffusionRank because of the easy interpretation by the amount heat from one group to another group, it is difficult to apply other ranking techniques to such an application because they lack such a physical meaning. 4.2.3 Graph cut Third, it can be used to partition the Web graph into several parts. A quick example is shown below. The graph in Fig. 1 (b) is an undirected graph, and so we employ the Eq. (3). If we know that node 1 belongs to one community and that node 12 belongs to another community, then we can put one unit positive heat source on node 1 and one unit negative heat source on node 12. After time 1, if we set γ = 0.5, the heat distribution is [0.25, 0.16, 0.17, 0.16, 0.15, 0.09, 0.01, -0.04, -0.18 -0.21, -0.21, -0.34], and if we set γ = 1, it will be [0.17, 0.16, 0.17, 0.16, 0.16, 0.12, 0.02, -0.07, -0.18, -0.22, -0.24, -0.24]. In both settings, we can easily divide the graph into two parts: {1, 2, 3, 4, 5, 6, 7} with positive temperatures and {8, 9, 10, 11, 12} with negative temperatures. For directed graphs and random graphs, similarly we can cut them by employing corresponding heat solution. 4.2.4 Anti-manipulation Fourth, it can be used to combat manipulation. Let G2 contain trusted Web pages (j1, j2, ... , js), then for each page i, v hi,jv is the heat that page i receives from G2, and can be computed by the discrete approximation of Eq. (4) in the case of a static graph or Eq. 
(6) in the case of a random graph, in which f(0) is set to be a special initial heat distribution so that the trusted Web pages have unit heat while all the others have zero heat. In doing so, a manipulated Web page will get a lower rank unless it has strong in-links from the trusted Web pages, directly or indirectly. The situation is quite different for PageRank, because PageRank is input-independent, as we have shown in Section 3.1. Based on the fact that the connection from a trusted page to a bad page should be weak (fewer uncrossed paths, longer distances, and narrower pipes), we can say that DiffusionRank can resist web spam if we can select trusted pages. It is fortunate that the trusted-pages selection method in [7], the first part of TrustRank, can help us to fulfill this task. For such an application of DiffusionRank, the computational complexity of the Discrete Diffusion Kernel is the same as that of PageRank in the cases of both a static graph and a random graph. This can be seen from Eq. (6), by which we need N iterations and, for each iteration, one multiplication of a matrix and a vector, while in Eq. (1) we also need one multiplication of a matrix and a vector for each iteration. 4.3 The Physical Meaning of γ γ plays an important role in the anti-manipulation effect of DiffusionRank. γ is the thermal conductivity, i.e., the heat diffusion coefficient. If it has a high value, heat diffuses very quickly; conversely, if it is small, heat diffuses slowly. In the extreme case, if it is infinitely large, then heat diffuses from one node to the other nodes immediately, and this is exactly the case corresponding to PageRank. Next, we interpret this mathematically. Theorem 1. When γ tends to infinity and f(0) is not the zero vector, eγR f(0) is proportional to the stable distribution produced by PageRank. Let g = (1/n)1. By the Perron Theorem [11], we have shown that 1 is the largest eigenvalue of P = [(1 − α)g1T + αA], and that there is no other eigenvalue whose absolute value is equal to 1. Let x be the stable distribution, so that Px = x; x is the eigenvector corresponding to the eigenvalue 1. Assume that the n − 1 other eigenvalues of P satisfy |λ2| < 1, ..., |λn| < 1; then we can find an invertible matrix S = (x S1) such that S−1PS is an upper triangular matrix with diagonal entries 1, λ2, ..., λn. (7) Since eγR = eγ(−I+P) = S−1T1S, where T1 is an upper triangular matrix with diagonal entries 1, eγ(λ2−1), ..., eγ(λn−1), (8) all eigenvalues of the matrix eγR are 1, eγ(λ2−1), ..., eγ(λn−1). When γ → ∞, they become 1, 0, ..., 0, which means that 1 is the only nonzero eigenvalue of eγR when γ → ∞. We can see that when γ → ∞, eγR eγR f(0) = eγR f(0), and so eγR f(0) is an eigenvector of eγR when γ → ∞. On the other hand, eγR x = (I + γR + (γ2/2!)R2 + (γ3/3!)R3 + · · ·)x = Ix + γRx + (γ2/2!)R2x + (γ3/3!)R3x + · · · = x, since Rx = (−I + P)x = −x + x = 0; hence x is an eigenvector of eγR for any γ. Therefore both x and eγR f(0) are eigenvectors corresponding to the unique eigenvalue 1 of eγR when γ → ∞, and consequently x = c eγR f(0). By this theorem, we see that DiffusionRank is a generalization of PageRank. When γ = 0, the ranking value is most robust to manipulation, since no heat is diffused and the system is unchanged, but the Web structure is completely ignored, since eγR f(0) = e0R f(0) = If(0) = f(0); when γ = ∞, DiffusionRank becomes PageRank and can be manipulated easily. We expect an appropriate setting of γ that balances both. For this we have no theoretical result, but in practice we find that γ = 1 works well, as shown in Section 5.
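As a companion to Algorithm 1 above, here is a compact, runnable rendering of the same two phases (a sketch only: the toy graph, the is_trusted oracle standing in for the human evaluation step, and the use of dense NumPy matrices are assumptions of this illustration, not the authors' C/Matlab implementation mentioned in Section 5.3). Phase 1 is the inverse-PageRank seed selection; phase 2 is the discrete diffusion step h ← (1 − γ/MB)h + (γ/MB)(αB·A·h + (1 − αB)·(1/n)·1).

```python
import numpy as np

def diffusion_rank(A, U, is_trusted, alpha_I=0.85, alpha_B=0.85,
                   M_I=100, L=1, gamma=1.0, M_B=100):
    """Sketch of Algorithm 1 (DiffusionRank).

    A : transition matrix, A[i, j] = 1/d_j if page j links to page i, else 0.
    U : inverse transition matrix, U[i, j] = 1/I_j if page i links to page j, else 0.
    is_trusted(i) : oracle standing in for the human evaluation of candidate seeds.
    """
    n = A.shape[0]
    # Phase 1: inverse PageRank proposes seed candidates (lines 1-5 of Algorithm 1).
    s = np.ones(n)
    for _ in range(M_I):
        s = alpha_I * (U @ s) + (1 - alpha_I) * np.ones(n) / n
    candidates = np.argsort(-s)                    # decreasing inverse-PageRank score
    d, count = np.zeros(n), 0
    for i in candidates:                           # lines 6-12: pick L trusted pages
        if count >= L:
            break
        if is_trusted(i):
            d[i], count = 1.0, count + 1
    d = d / d.sum()                                # line 13: normalise the trust vector
    # Phase 2: heat diffusion from the trusted pages (lines 14-18).
    h = d.copy()
    for _ in range(M_B):
        h = (1 - gamma / M_B) * h + (gamma / M_B) * (
                alpha_B * (A @ h) + (1 - alpha_B) * np.ones(n) / n)
    return h

# Usage on a hypothetical 4-page web where only page 0 is trusted.
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 0)]   # (source, target) pairs
n = 4
A, U = np.zeros((n, n)), np.zeros((n, n))
out_deg, in_deg = np.zeros(n), np.zeros(n)
for j, i in edges:
    out_deg[j] += 1.0
    in_deg[i] += 1.0
for j, i in edges:
    A[i, j] = 1.0 / out_deg[j]
    U[j, i] = 1.0 / in_deg[i]
print(diffusion_rank(A, U, is_trusted=lambda i: i == 0))
```

In this sketch, setting γ equal to MB makes each step discard the previous h entirely, so the loop reduces to the PageRank-style iteration h ← αB·A·h + (1 − αB)·(1/n)·1, consistent with the γ → ∞ limit of Theorem 1, while γ = 0 leaves the trust vector d unchanged.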
Next we discuss how to determine the number of iterations if we employ the discrete heat kernel. 4.4 The Number of Iterations While we enjoy the advantage of the concise form of the exponential heat kernel, it is better for us to calculate DiffusionRank by employing Eq. (6) in an iterative way. Then the problem of determining N, the number of iterations, arises: for a given threshold ε, find N such that ||((I + (γ/N)R)N − eγR)f(0)|| < ε for any f(0) whose sum is one. Since it is difficult to solve this problem, we propose a heuristic motivated by the following observations. When R = −I + P, by Eq. (7) we have (I + (γ/N)R)N = (I + (γ/N)(−I + P))N = S−1T2S, where T2 is an upper triangular matrix with diagonal entries 1, (1 + γ(λ2 − 1)/N)N, ..., (1 + γ(λn − 1)/N)N. (9) Comparing Eq. (8) and Eq. (9), we observe that the eigenvalues of (I + (γ/N)R)N − eγR are 0 and (1 + γ(λi − 1)/N)N − eγ(λi−1) for i = 2, ..., n. We propose a heuristic method to determine N so that these differences are less than a threshold for the positive λs only. We also observe that if γ = 1 and λ < 1, then |(1 + γ(λ − 1)/N)N − eγ(λ−1)| < 0.005 if N ≥ 100, and |(1 + γ(λ − 1)/N)N − eγ(λ−1)| < 0.01 if N ≥ 30. So we can set N = 30, or N = 100, or other values according to different accuracy requirements. In this paper, we use the relatively accurate setting N = 100 to make the real eigenvalues of (I + (γ/N)R)N − eγR less than 0.005. 5. EXPERIMENTS In this section, we show the experimental data, the methodology, the settings, and the results. 5.1 Data Preparation Our input data consist of a toy graph, a middle-size real-world graph, and a large-size real-world graph. The toy graph is shown in Fig. 2 (a). The graph below it shows node 1 being manipulated by adding new nodes A, B, C, ... such that they all point to node 1 and node 1 points to them all. The data of the two real Web graphs were obtained from the domain of our institute in October 2004. The total numbers of pages found are 18,542 in the middle-size graph and 607,170 in the large-size graph, respectively. The middle-size graph is a subgraph of the large-size graph; both were obtained by the same crawler, one recorded at an earlier time and the other obtained when the crawler stopped. 5.2 Methodology The algorithms we run include PageRank, TrustRank and DiffusionRank. All the rank values are multiplied by the number of nodes so that the sum of the rank values is equal to the number of nodes. By this normalization, we can compare the results on graphs with different sizes, since the average rank value is one for any graph after such normalization. We will need the value difference and the pairwise order difference as comparison measures. Their definitions are as follows. Value Difference. The value difference between A = {Ai}ni=1 and B = {Bi}ni=1 is measured as Σni=1 |Ai − Bi|. Pairwise Order Difference. The order difference between A and B is measured as the number of significant order differences between A and B. The pair (A[i], A[j]) and (B[i], B[j]) is considered a significant order difference if one of the following cases happens: both A[i] > [<] A[j] + 0.1 and B[i] ≤ [≥] B[j]; or both A[i] ≤ [≥] A[j] and B[i] > [<] B[j] + 0.1 (the bracketed symbols denote the mirrored case). Figure 2: (a) The toy graph consisting of six nodes, and node 1 is being manipulated by adding new nodes A, B, C, ...
(b) The approximation tendency to PageRank by DiffusionRank 5.3 Experimental Set-up The experiments on the middle-size graph and the largesize graphs are conducted on the workstation, whose hardware model is Nix Dual Intel Xeon 2.2GHz with 1GB RAM and a Linux Kernel 2.4.18-27smp (RedHat7.3). In calculating DiffusionRank, we employ Eq. (6) and the discrete approximation of Eq. (4) for such graphs. The related tasks are implemented using C language. While in the toy graph, we employ the continuous diffusion kernel in Eq. (4) and Eq. (5), and implement related tasks using Matlab. For nodes that have zero out-degree (dangling nodes), we employ the method in the modified PageRank algorithm [8], in which dangling nodes of are considered to have random links uniformly to each node. We set α = αI = αB = 0.85 in all algorithms. We also set g to be the uniform distribution in both PageRank and DiffusionRank. For DiffusionRank, we set γ = 1. According to the discussions in Section 4.3 and Section 4.4, we set the iteration number to be MB = 100 in DiffusionRank, and for accuracy consideration, the iteration number in all the algorithms is set to be 100. 5.4 Approximation of PageRank We show that when γ tends to infinity, the value differences between DiffusionRank and PageRank tend to zero. Fig. 2 (b) shows the approximation property of DiffusionRank, as proved in Theorem 1, on the toy graph. The horizontal axis of Fig. 2 (b) marks the γ value, and vertical axis corresponds to the value difference between DiffusionRank and PageRank. All the possible trusted sets with L = 1 are considered. For L > 1, the results should be the linear combination of some of these curves because of the linearity of the solutions to heat equations. On other graphs, the situations are similar. 5.5 Results of Anti-manipulation In this section, we show how the rank values change as the intensity of manipulation increases. We measure the intensity of manipulation by the number of newly added points that point to the manipulated point. The horizontal axes of Fig. 3 stand for the numbers of newly added points, and vertical axes show the corresponding rank values of the manipulated nodes. To be clear, we consider all six situations. Every node in Fig. 
2 (a) is manipulated respectively, and its corresponding values for PageRank, TrustRank (TR) and DiffusionRank (DR) are shown in one of the six sub-figures in Fig. 3. Figure 3: The rank values of the manipulated nodes on the toy graph. Figure 4: (a) The rank values of the manipulated nodes on the middle-size graph; (b) The rank values of the manipulated nodes on the large-size graph. The vertical axes show which node is being manipulated. In each sub-figure, the trusted sets are computed as follows. The inverse PageRank yields the values [1.26, 0.85, 1.31, 1.36, 0.51, 0.71]. Let L = 1. If the manipulated node is not 4, then the trusted set is {4}; otherwise it is {3}. We observe that, in all the cases, the rank values of the manipulated node for DiffusionRank grow slowest as the number of newly added nodes increases. On the middle-size graph and the large-size graph, this conclusion is also true; see Fig. 4. Note that, in Fig. 4 (a), we choose four trusted sets (L = 1), on which we test DiffusionRank and TrustRank; the results are denoted by DiffusionRanki and TrustRanki (i = 0, 1, 2, 3 denotes the four trusted sets). In Fig. 4 (b), we choose one trusted set (L = 1). Moreover, in both Fig. 4 (a) and Fig. 4 (b), we show the results for DiffusionRank when we have no trusted set and trust all the pages before some of them are manipulated. We also test the order difference between the ranking order A before a page is manipulated and the ranking order PA after the page is manipulated. Because the number of pages changes after manipulation, we only compare the common part of A and PA. This experiment is used to test the stability of all these algorithms: the smaller the order difference, the more stable the algorithm, in the sense that only a smaller part of the order relations is affected by the manipulation. Figure 5 (a) shows how the order difference values change when we add new nodes that point to the manipulated node. We give several γ settings. We find that when γ = 1, the least order difference is achieved by DiffusionRank. It is interesting to point out that as γ increases, the order difference increases first; after reaching a maximum value, it decreases, and finally it tends to the PageRank results. We show this tendency in Fig. 5 (b), in which we choose three different settings: the numbers of manipulated nodes are 2,000, 5,000, and 10,000 respectively.
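For reference, here is a small sketch of the two comparison measures from Section 5.2 that these stability results rely on (illustrative only: the rank vectors below are invented, and the bracketed inequalities in the paper's definition are read here as "the ordered pair (i, j) counts whenever one ranking separates i from j by more than 0.1 while the other ranking does not keep i above j").

```python
import numpy as np

def value_difference(A, B):
    """Value difference of Section 5.2: sum over pages of |A_i - B_i|."""
    return float(np.abs(np.asarray(A) - np.asarray(B)).sum())

def pairwise_order_difference(A, B, margin=0.1):
    """Count significant order disagreements between two rank vectors.

    Reading of Section 5.2 used here (an assumption): the ordered pair (i, j)
    counts when one ranking puts i above j by more than `margin` while the
    other ranking does not keep i above j at all.
    """
    A, B = np.asarray(A, dtype=float), np.asarray(B, dtype=float)
    count = 0
    for i in range(len(A)):
        for j in range(len(A)):
            if i == j:
                continue
            if A[i] > A[j] + margin and B[i] <= B[j]:
                count += 1
            elif B[i] > B[j] + margin and A[i] <= A[j]:
                count += 1
    return count

# Hypothetical normalised rank vectors before and after a manipulation.
before = [1.4, 0.9, 1.1, 0.6]
after  = [0.9, 1.0, 1.2, 0.9]
print(value_difference(before, after), pairwise_order_difference(before, after))
```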
From these figures, we can see that when γ < 2, the values are less than those for PageRank, and that when γ > 20, the difference between PageRank and DiffusionRank is very small. After these investigations, we find that in all the graphs we tested, DiffusionRank (when γ = 1) is most robust to manipulation, both in value difference and in order difference. The trust set selection algorithm proposed in [7] is effective for both TrustRank and DiffusionRank. Figure 5: (a) Pairwise order difference on the middle-size graph (pairwise order difference vs. number of newly added points, for PageRank, TrustRank, and DiffusionRank with γ = 1, 2, 3, 4, 5, 15); the smaller it is, the more stable the algorithm; (b) The tendency of varying γ (pairwise order difference vs. γ, when 2,000, 5,000, and 10,000 nodes are added). 6. CONCLUSIONS We conclude that DiffusionRank is a generalization of PageRank, which is interesting in that the heat diffusion coefficient γ can balance the extent to which we want to model the original Web graph and the extent to which we want to reduce the effect of link manipulations. The experimental results show that we can actually achieve such a balance by setting γ = 1, although the best setting, including varying γi, is still under further investigation. This anti-manipulation feature enables DiffusionRank to be a candidate as a penicillin for Web spamming. Moreover, DiffusionRank can be employed to find group-group relations and to partition the Web graph into small communities. All these advantages can be achieved with the same computational complexity as PageRank. For the special application of anti-manipulation, DiffusionRank performs best, both in reduction effects and in its stability, among all the three algorithms. 7. ACKNOWLEDGMENTS We thank Patrick Lau, Zhenjiang Lin and Zenglin Xu for their help. This work is fully supported by two grants from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. CUHK4205/04E and Project No. CUHK4235/04E). 8. REFERENCES [1] E. Agichtein, E. Brill, and S. T. Dumais. Improving web search ranking by incorporating user behavior information. In E. N. Efthimiadis, S. T. Dumais, D. Hawking, and K. Järvelin, editors, Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), pages 19-26, 2006. [2] R. A. Baeza-Yates, P. Boldi, and C. Castillo. Generalizing pagerank: damping functions for link-based ranking algorithms. In E. N. Efthimiadis, S. T. Dumais, D. Hawking, and K. Järvelin, editors, Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), pages 308-315, 2006. [3] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6):1373-1396, Jun 2003. [4] B. Bollobás. Random Graphs. Academic Press Inc. (London), 1985. [5] C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. Hullender. Learning to rank using gradient descent. In Proceedings of the 22nd International Conference on Machine Learning (ICML), pages 89-96, 2005. [6] N. Eiron, K. S. McCurley, and J. A. Tomlin. Ranking the web frontier.
In Proceedings of the 13th World Wide Web Conference (WWW), pages 309-318, 2004. [7] Z. Gyöngyi, H. Garcia-Molina, and J. Pedersen. Combating web spam with trustrank. In M. A. Nascimento, M. T. Özsu, D. Kossmann, R. J. Miller, J. A. Blakeley, and K. B. Schiefer, editors, Proceedings of the Thirtieth International Conference on Very Large Data Bases (VLDB), pages 576-587, 2004. [8] S. D. Kamvar, T. H. Haveliwala, C. D. Manning, and G. H. Golub. Exploiting the block structure of the web for computing pagerank. Technical report, Stanford University, 2003. [9] R. I. Kondor and J. D. Lafferty. Diffusion kernels on graphs and other discrete input spaces. In C. Sammut and A. G. Hoffmann, editors, Proceedings of the Nineteenth International Conference on Machine Learning (ICML), pages 315-322, 2002. [10] J. Lafferty and G. Lebanon. Diffusion kernels on statistical manifolds. Journal of Machine Learning Research, 6:129-163, Jan 2005. [11] C. R. MacCluer. The many proofs and applications of Perron's theorem. SIAM Review, 42(3):487-498, 2000. [12] A. Ntoulas, M. Najork, M. Manasse, and D. Fetterly. Detecting spam web pages through content analysis. In Proceedings of the 15th International Conference on World Wide Web (WWW), pages 83-92, 2006. [13] L. Page, S. Brin, R. Motwani, and T. Winograd. The pagerank citation ranking: Bringing order to the web. Technical Report SIDL-WP-1999-0120 (version of 11/11/1999), Stanford Digital Library Technologies Project, 1999. [14] H. Yang, I. King, and M. R. Lyu. NHDC and PHDC: Non-propagating and propagating heat diffusion classifiers. In Proceedings of the 12th International Conference on Neural Information Processing (ICONIP), pages 394-399, 2005. [15] H. Yang, I. King, and M. R. Lyu. Predictive ranking: a novel page ranking approach by estimating the web structure. In Proceedings of the 14th International Conference on World Wide Web (WWW) - Special interest tracks and posters, pages 944-945, 2005. [16] H. Yang, I. King, and M. R. Lyu. Predictive random graph ranking on the web. In Proceedings of the IEEE World Congress on Computational Intelligence (WCCI), pages 3491-3498, 2006. [17] D. Zhou, J. Weston, A. Gretton, O. Bousquet, and B. Schölkopf. Ranking on data manifolds. In S. Thrun, L. Saul, and B. Schölkopf, editors, Advances in Neural Information Processing Systems 16 (NIPS 2003), 2004.
DiffusionRank: A Possible Penicillin for Web Spamming ABSTRACT While the PageRank algorithm has proven to be very effective for ranking Web pages, the rank scores of Web pages can be manipulated. To handle the manipulation problem and to cast a new insight on the Web structure, we propose a ranking algorithm called DiffusionRank. DiffusionRank is motivated by the heat diffusion phenomena, which can be connected to Web ranking because the activities flow on the Web can be imagined as heat flow, the link from a page to another can be treated as the pipe of an air-conditioner, and heat flow can embody the structure of the underlying Web graph. Theoretically we show that DiffusionRank can serve as a generalization of PageRank when the heat diffusion coefficient - y tends to infinity. In such a case 1 / - y = 0, DiffusionRank (PageRank) has low ability of anti-manipulation. When - y = 0, DiffusionRank obtains the highest ability of anti-manipulation, but in such a case, the web structure is completely ignored. Consequently, - y is an interesting factor that can control the balance between the ability of preserving the original Web and the ability of reducing the effect of manipulation. It is found empirically that, when - y = 1, DiffusionRank has a Penicillin-like effect on the link manipulation. Moreover, DiffusionRank can be employed to find group-to-group relations on the Web, to divide the Web graph into several parts, and to find link communities. Experimental results show that the DiffusionRank algorithm achieves the above mentioned advantages as expected. 1. INTRODUCTION While the PageRank algorithm [13] has proven to be very effective for ranking Web pages, inaccurate PageRank results are induced because of web page manipulations by peo ple for commercial interests. The manipulation problem is also called the Web spam, which refers to hyperlinked pages on the World Wide Web that are created with the intention of misleading search engines [7]. It is reported that approximately 70% of all pages in the. biz domain and about 35% of the pages in the. us domain belong to the spam category [12]. The reason for the increasing amount of Web spam is explained in [12]: some web site operators try to influence the positioning of their pages within search results because of the large fraction of web traffic originating from searches and the high potential monetary value of this traffic. From the viewpoint of the Web site operators who want to increase the ranking value of a particular page for search engines, Keyword Stuffing and Link Stuffing are being used widely [7, 12]. From the viewpoint of the search engine managers, the Web spam is very harmful to the users' evaluations and thus their preference to choosing search engines because people believe that a good search engine should not return irrelevant or low-quality results. There are two methods being employed to combat the Web spam problem. Machine learning methods are employed to handle the keyword stuffing. To successfully apply machine learning methods, we need to dig out some useful textual features for Web pages, to mark part of the Web pages as either spam or non-spam, then to apply supervised learning techniques to mark other pages. For example, see [5, 12]. Link analysis methods are also employed to handle the link stuffing problem. One example is the TrustRank [7], a link-based method, in which the link structure is utilized so that human labelled trusted pages can propagate their trust scores trough their links. 
This paper focuses on the link-based method. The rest of the materials are organized as follows. In the next section, we give a brief literature review on various related ranking techniques. We establish the Heat Diffusion Model (HDM) on various cases in Section 3, and propose DiffusionRank in Section 4. In Section 5, we describe the data sets that we worked on and the experimental results. Finally, we draw conclusions in Section 6. 2. LITERATURE REVIEW The importance of a Web page is determined by either the textual content of pages or the hyperlink structure or both. As in previous work [7, 13], we focus on ranking methods solely determined by hyperlink structure of the Web graph. All the mentioned ranking algorithms are established on a graph. For our convenience, we first give some notations. Denote a static graph by G = (V, E), where V = {v1, v2,..., vn}, E = {(vi, vj) | there is an edge from vi to vj}. Ii and di denote the in-degree and the out-degree of page i respectively. 2.1 PageRank The importance of a Web page is an inherently subjective matter, which depends on the reader's interests, knowledge and attitudes [13]. However, the average importance of all readers can be considered as an objective matter. PageRank tries to find such average importance based on the Web link structure, which is considered to contain a large amount of statistical data. The Web is modelled by a directed graph G in the PageRank algorithms, and the rank or "importance" which point to it: xi = E xi for page vi ∈ V is defined recursively in terms of pages (j, i) ∈ E aijxj, where aij is assumed to be 1/dj if there is a link from j to i, and 0 otherwise. Or in matrix terms, x = Ax. When the concept of "random jump" is introduced, the matrix form is changed to where α is the probability of following the actual link from a page, (1 − α) is the probability of taking a "random jump", and g is a stochastic vector, i.e., 1T g = 1. Typically, α = 0.85, and g = n1 1 is one of the standard settings, where 1 is the vector of all ones [6, 13]. 2.2 TrustRank TrustRank [7] is composed of two parts. The first part is the seed selection algorithm, in which the inverse PageRank was proposed to help an expert of determining a good node. The second part is to utilize the biased PageRank, in which the stochastic distribution g is set to be shared by all the trusted pages found in the first part. Moreover, the initial input of x is also set to be g. The justification for the inverse PageRank and the solid experiments support its advantage in combating the Web spam. Although there are many variations of PageRank, e.g., a family of link-based ranking algorithms in [2], TrustRank is especially chosen for comparisons for three reasonss: (1) it is designed for combatting spamming; (2) its fixed parameters make a comparison easy; and (3) it has a strong theoretical relations with PageRank and DiffusionRank. 2.3 Manifold Ranking In [17], the idea of ranking on the data manifolds was proposed. The data points represented as vectors in Euclidean space are considered to be drawn from a manifold. From the data points on such a manifold, an undirected weighted graph is created, then the weight matrix is given by the Gaussian Kernel smoothing. While the manifold ranking algorithm achieves an impressive result on ranking images, the biased vector g and the parameter k in the general personalized PageRank in [17] are unknown in the Web graph setting; therefore we do not include it in the comparisons. 
2.4 Heat Diffusion Heat diffusion is a physical phenomena. In a medium, heat always flow from position with high temperature to position with low temperature. Heat kernel is used to describe the amount of heat that one point receives from another point. Recently, the idea of heat kernel on a manifold is borrowed in applications such as dimension reduction [3] and classification [9, 10, 14]. In these work, the input data is considered to lie in a special structure. All the above topics are related to our work. The readers can find that our model is a generalization of PageRank in order to resist Web manipulation, that we inherit the first part of TrustRank, that we borrow the concept of ranking on the manifold to introduce our model, and that heat diffusion is a main scheme in this paper. 3. HEAT DIFFUSION MODEL Heat diffusion provides us with another perspective about how we can view the Web and also a way to calculate ranking values. In this paper, the Web pages are considered to be drawn from an unknown manifold, and the link structure forms a directed graph, which is considered as an approximation to the unknown manifold. The heat kernel established on the Web graph is considered as the representation of the relationship between Web pages. The temperature distribution after a fixed time period, induced by a special initial temperature distribution, is considered as the rank scores on the Web pages. Before establishing the proposed models, we first show our motivations. 3.1 Motivations There are two points to explain that PageRank is susceptible to web spam. • Over-democratic. There is a belief behind PageRank--all pages are born equal. This can be seen from the equal voting ability of one page: the sum of each column is equal to one. This equal voting ability of all pages gives the chance for a Web site operator to increase a manipulated page by creating a large number of new pages pointing to this page since all the newly created pages can obtain an equal voting right. • Input-independent. For any given non-zero initial input, the iteration will converge to the same stable distribution corresponding to the maximum eigenvalue 1 of the transition matrix. This input-independent property makes it impossible to set a special initial input (larger values for trusted pages and less values even negative values for spam pages) to avoid web spam. The input-independent feature of PageRank can be further explained as follows. P = [(1 − α) g1T + αA] is a positive stochastic matrix if g is set to be a positive stochastic vector (the uniform distribution is one of such settings), and so the largest eigenvalue is 1 and no other eigenvalue whose absolute value is equal to 1, which is guaranteed by the Perron Theorem [11]. Let y be the eigenvector corresponding to 1, then we have Py = y. Let {xk} be the sequence generated from the iterations xk +1 = Pxk, and x0 is the initial input. If {xk} converges to x, then xk +1 = Pxk implies that x must satisfy Px = x. Since the only maximum eigenvalue is 1, we have x = cy where c is a constant, and if both x and y are normalized by their sums, then c = 1. The above discussions show that PageRank is independent of the initial input x0. In our opinion, g and α are objective parameters determined by the users' behaviors and preferences. A, α and g are the "true" web structure. While A is obtained by a crawler and the setting α = 0.85 is accepted by the people, we think that g should be determined by a user behavior investigation, something like [1]. 
Without any prior knowledge, g has to be set as g = (1/n) 1. The TrustRank model does not follow the "true" Web structure, since it sets a biased g, but the effect of combating spamming is achieved in [7]; PageRank is the opposite in some ways. We expect a ranking algorithm that has an anti-manipulation effect like TrustRank while respecting the "true" Web structure like PageRank. We observe that the heat diffusion model is a natural way to avoid the over-democratic and input-independent features of PageRank. Since heat always flows from a position with a higher temperature to one with a lower temperature, points are not equal: some points are born with high temperatures while others are born with low temperatures. On the other hand, different initial temperature distributions give rise to different temperature distributions after a fixed time period. Based on these considerations, we propose the novel DiffusionRank. This ranking algorithm is also motivated by our viewpoint of the Web structure. We view all the Web pages as points drawn from a highly complex geometric structure, like a manifold in a high-dimensional space. On a manifold, heat can flow from one point to another through the underlying geometric structure in a given time period. Different geometric structures determine different heat diffusion behaviors, and conversely the diffusion behavior can reflect the geometric structure. More specifically, on the manifold, heat flows from one point to another, and in a given time period, if one point x receives a large amount of heat from another point y, we can say that x and y are well connected, and thus x and y have a high similarity in the sense of a high mutual connection. We note that at a point with unit mass, the temperature and the heat of this point are equivalent, and these two terms are interchangeable in this paper. In the following, we first show the HDM on a manifold, which is the origin of HDM but cannot be applied to the World Wide Web directly, and so is considered as the ideal case. To connect the ideal case and the practical case, we then establish HDM on a graph as an intermediate case. To model the real-world problem, we further build HDM on a random graph as the practical case. Finally we demonstrate DiffusionRank, which is derived from the HDM on a random graph. 3.2 Heat Flow On a Known Manifold If the underlying manifold is known, the heat flow throughout a geometric manifold with initial conditions can be described by the following second-order differential equation: ∂f(x, t)/∂t − Δf(x, t) = 0, where f(x, t) is the temperature at position x at time t, and Δf is the Laplace-Beltrami operator applied to a function f. The heat diffusion kernel K_t(x, y) is a special solution to the heat equation with a special initial condition: a unit heat source at position y when there is no heat at other positions. Based on this, the heat kernel K_t(x, y) describes the heat distribution at time t diffusing from the initial unit heat source at position y, and thus describes the connectivity (which is considered as a kind of similarity) between x and y. However, it is very difficult to represent the World Wide Web as a regular geometry with a known dimension; even if the underlying manifold is known, it is very difficult to find the heat kernel K_t(x, y), which involves solving the heat equation with the delta function as the initial condition. This motivates us to investigate the heat flow on a graph.
The graph is considered as an approximation to the underlying manifold, and so the heat flow on the graph is considered as an approximation to the heat flow on the manifold. 3.3 On an Undirected Graph On an undirected graph G, the edge (vi, vj) is considered as a pipe that connects nodes vi and vj. The value f_i(t) describes the heat at node vi at time t, beginning from an initial distribution of heat given by f_i(0) at time zero. f(t) (f(0)) denotes the vector consisting of the f_i(t) (f_i(0)). We construct our model as follows. Suppose that, at time t, each node i receives M(i, j, t, Δt) amount of heat from its neighbor j during a period of Δt. The heat M(i, j, t, Δt) should be proportional to the time period Δt and to the heat difference f_j(t) − f_i(t). Moreover, the heat flows from node j to node i through the pipe that connects nodes i and j. Based on this consideration, we assume that M(i, j, t, Δt) = γ (f_j(t) − f_i(t)) Δt. As a result, the heat difference at node i between time t + Δt and time t will be equal to the sum of the heat that it receives from all its neighbors. This is formulated as f_i(t + Δt) − f_i(t) = γ Σ_{j:(vi,vj)∈E} (f_j(t) − f_i(t)) Δt, (2) where E is the set of edges. To find a closed-form solution to Eq. (2), we express it in matrix form: (f(t + Δt) − f(t))/Δt = γ H f(t), where H_ij = 1 if (vi, vj) ∈ E and i ≠ j, H_ii = −d(v_i), and H_ij = 0 otherwise, and d(v) denotes the degree of the node v. In the limit Δt → 0, it becomes d f(t)/dt = γ H f(t). Solving it, we obtain f(t) = e^{γtH} f(0); in particular, we have f(1) = e^{γH} f(0). (3) 3.4 On a Directed Graph The above heat diffusion model must be modified to fit the situation where the links between Web pages are directed. On a Web page, when the page-maker creates a link (a, b) to another page b, he actually forces the energy flow, for example, people's click-through activities, to that page, and so there is added energy imposed on the link. As a result, heat flows in a one-way manner, only from a to b, not from b to a. Based on such considerations, we modify the heat diffusion model on an undirected graph as follows. On a directed graph G, the pipe (vi, vj) is forced by the added energy such that heat flows only from vi to vj. Suppose that, at time t, each node vi receives RH = RH(i, j, t, Δt) amount of heat from vj during a period of Δt. We have three assumptions: (1) RH should be proportional to the time period Δt; (2) RH should be proportional to the heat at node vj; and (3) RH is zero if there is no link from vj to vi. As a result, vi will receive Σ_{j:(vj,vi)∈E} σ_j f_j(t) Δt amount of heat from all its neighbors that point to it. On the other hand, node vi diffuses DH(i, t, Δt) amount of heat to its subsequent nodes. We assume that: (1) the heat DH(i, t, Δt) should be proportional to the time period Δt; (2) the heat DH(i, t, Δt) should be proportional to the heat at node vi; (3) each node has the same ability of diffusing heat, which fits the intuition that a Web surfer has only one choice of the next page that he wants to browse; and (4) the heat DH(i, t, Δt) should be uniformly distributed to its subsequent nodes. The real situation is more complex than what we assume, but we have to make these simple assumptions in order to keep our model concise. As a result, node vi will diffuse γ f_i(t) Δt / d_i amount of heat to each of its subsequent nodes, and each of its subsequent nodes should receive γ f_i(t) Δt / d_i amount of heat. Therefore σ_j = γ / d_j. To sum up, the heat difference at node vi between time t + Δt and time t will be equal to the sum of the heat that it receives, deducted by what it diffuses.
This is formulated as f_i(t + Δt) − f_i(t) = −γ f_i(t) Δt + Σ_{j:(vj,vi)∈E} (γ/d_j) f_j(t) Δt. Similarly, we obtain f(1) = e^{γH} f(0), (4) where H is now defined by H_ii = −1, H_ij = 1/d_j if (vj, vi) ∈ E, and H_ij = 0 otherwise. 3.5 On a Random Directed Graph For real-world applications, we have to consider random edges. This can be seen from two viewpoints. The first one is that in Eq. (1) the Web graph is actually modelled as a random graph: there is an edge from node vi to node vj with a probability of (1 − α) g_j (see the term (1 − α) g 1^T), and the Web graph is predicted by a random graph [15, 16]. The second one is that the Web structure is a random graph in essence if we consider the content similarity between two pages, though this is not done in this paper. For these reasons, the model becomes more flexible if we extend it to random graphs. The definition of a random graph is given below; the original definition of random graphs in [4] is changed slightly to consider the situation of directed graphs. A random graph RG = (V, P) is a graph on the node set V in which the edge (vi, vj) exists with probability p_ij, where P = (p_ij). Note that every static graph can be considered as a special random graph in the sense that p_ij can only be 0 or 1. In such a random graph, the expected heat difference at node i between time t + Δt and time t will be equal to the sum of the expected heat that it receives from all its antecedents, deducted by the expected heat that it diffuses. Since the probability of the link (vj, vi) is p_ji, the expected heat flow from node j to node i should be multiplied by p_ji, and so we have f_i(t + Δt) − f_i(t) = −γ f_i(t) Δt + Σ_j γ p_ji f_j(t) Δt / RD+(v_j), where RD+(v_i) is the expected out-degree of node vi, defined as Σ_k p_ik. Similarly we have f(1) = e^{γR} f(0), (5) where R_ii = −1 and R_ij = p_ji / RD+(v_j) for i ≠ j. When the graph is large, a direct computation of e^{γR} is time-consuming, and we adopt its discrete approximation: f(1) = (I + (γ/N) R)^N f(0). (6) The matrix (I + (γ/N) R)^N in Eq. (6) and the matrix e^{γR} in Eq. (5) are called the Discrete Diffusion Kernel and the Continuous Diffusion Kernel, respectively. Based on the Heat Diffusion Models and their solutions, DiffusionRank can be established on undirected graphs, directed graphs, and random graphs. In the next section, we mainly focus on DiffusionRank in the random graph setting. 4. DIFFUSIONRANK For a random graph, the matrix (I + (γ/N) R)^N or e^{γR} can measure the similarity relationship between nodes. Let f_i(0) = 1 and f_j(0) = 0 for j ≠ i; then the vector f(0) represents a unit heat at node vi while all other nodes have zero heat. For such an f(0) in a random graph, we can find the heat distribution at time 1 by using Eq. (5) or Eq. (6). The heat distribution is exactly the i-th row of the matrix (I + (γ/N) R)^N or e^{γR}. So the i-th-row, j-th-column element h_ij in the matrix (I + (γ/N) R)^N or e^{γR} means the amount of heat that vi can receive from vj from time 0 to 1. Thus the value h_ij can be used to measure the similarity from vj to vi. For a static graph, similarly the matrix (I + (γ/N) H)^N or e^{γH} can measure the similarity relationship between nodes. The intuition behind this is that the amount h(i, j) of heat that a page vi receives from a unit heat in a page vj in a unit time embodies the extent of the link connections from page vj to page vi. Roughly speaking, when there are more uncrossed paths from vj to vi, vi will receive more heat from vj; when the path length from vj to vi is shorter, vi will receive more heat from vj; and when the pipe connecting vj and vi is wide, the heat will flow quickly. The final heat that vi receives will depend on the various paths from vj to vi, their lengths, and the widths of the pipes.
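As an illustration of Eqs. (5) and (6), the following sketch (Python with NumPy/SciPy; the graph, γ and N are illustrative, and R is instantiated as −I + P for the random graph induced by Eq. (1), as Section 4.1 below does) compares the Continuous Diffusion Kernel e^{γR} with the Discrete Diffusion Kernel (I + (γ/N)R)^N:

import numpy as np
from scipy.linalg import expm

links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n, alpha, gamma, N = 4, 0.85, 1.0, 100

A = np.zeros((n, n))
for j, outs in links.items():
    for i in outs:
        A[i, j] = 1.0 / len(outs)
g = np.ones(n) / n

# Random graph induced by Eq. (1); R = -I + P as instantiated in Section 4.1.
P = (1 - alpha) * np.outer(g, np.ones(n)) + alpha * A
R = -np.eye(n) + P

f0 = np.zeros(n)
f0[0] = 1.0   # a unit heat source on node v_1, zero heat elsewhere

f_cont = expm(gamma * R).dot(f0)                                   # Eq. (5)
K_disc = np.linalg.matrix_power(np.eye(n) + (gamma / N) * R, N)    # Discrete Diffusion Kernel
f_disc = K_disc.dot(f0)                                            # Eq. (6)

print(np.max(np.abs(f_cont - f_disc)))   # tiny for N = 100: the two kernels nearly agree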
Algorithm 1 DiffusionRank Function
Input: the transition matrix A; the inverse transition matrix U; the decay factor αI for the inverse PageRank; the decay factor αB for PageRank; the number of iterations MI for the inverse PageRank; the number of trusted pages L; the thermal conductivity coefficient γ.
Output: the DiffusionRank score vector h.
1: s = 1
2: for i = 1 to MI do
3: s = αI · U · s + (1 − αI) · (1/n) · 1
4: end for
5: Sort s in decreasing order: π = Rank({1, ..., n}, s)
6: d = 0, Count = 0, i = 0
7: while Count ≤ L do
8: if π(i) is evaluated as a trusted page then
9: d(π(i)) = 1, Count++
10: end if
11: i++
12: end while
13: d = d / |d|
14: h = d
15: Find the iteration number MB according to λ
16: for i = 1 to MB do
17: h = (1 − γ/MB) h + (γ/MB) (αB · A · h + (1 − αB) · (1/n) · 1)
18: end for
19: return h
4.1 Algorithm For the ranking task, we adopt the heat kernel on a random graph. Formally, DiffusionRank is described in Algorithm 1, in which the element U_ij of the inverse transition matrix U is defined to be 1/I_j if there is a link from i to j, and 0 otherwise. This trusted-page selection procedure by the inverse PageRank is completely borrowed from TrustRank [7], except for a fixed number for the size of the trusted set. Although the inverse PageRank is not perfect in its ability of determining the maximum coverage, it is appealing because of its polynomial execution time and its reasonable intuition: we actually invert the original links when we try to build the seed set from those pages that point to many pages that in turn point to many pages, and so on. In the algorithm, the underlying random graph is set as P = αB · A + (1 − αB) · (1/n) · 1_{n×n}, which is induced by the Web graph. As a result, R = −I + P. In fact, the more general setting for DiffusionRank is P = αB · A + (1 − αB) · g · 1^T. With such a setting, DiffusionRank is a generalization of TrustRank when γ tends to infinity and when g is set in the same way as TrustRank. However, the second part of TrustRank is not adopted by us. In our model, g should be the true "teleportation" determined by the user's browsing habits, the popularity distribution over all the Web pages, and so on; P should be the true model of the random nature of the World Wide Web. Setting g according to the trusted pages would not be consistent with the basic idea of heat diffusion on a random graph. We simply set g = (1/n) 1 only because we cannot find the true g without any prior knowledge. Remark. In a social network interpretation, DiffusionRank first recognizes a group of trusted people, who may not be highly ranked, but who know many other people. The initially trusted people are endowed with the power to decide who can be further trusted, but they cannot decide the final voting results, and so they are not dictators. 4.2 Advantages Next we show four advantages of DiffusionRank. 4.2.1 Two closed forms First, its solutions have two forms, both of which are closed form. One takes the discrete form and has the advantage of fast computation, while the other takes the continuous form and has the advantage of being easily analyzed in theoretical terms. The theoretical advantage has been shown in the proof of the theorem in the next section. Figure 1: Two graphs 4.2.2 Group-group relations Second, it can be naturally employed to detect group-group relations. For example, let G2 and G1 denote two groups, containing pages (j1, j2, ..., js) and (i1, i2, ..., it), respectively.
Then Σ_{u,v} h_{i_u, j_v} is the total amount of heat that G1 receives from G2, where h_{i_u, j_v} is the i_u-th-row, j_v-th-column element of the heat kernel. More specifically, we first need to set f(0) for such an application as follows: in f(0) = (f_1(0), f_2(0), ..., f_n(0))^T, we set f_i(0) = 1 if i ∈ {j1, j2, ..., js}, and 0 otherwise. Then we employ Eq. (5) to calculate f(1) = (f_1(1), f_2(1), ..., f_n(1))^T, and finally we sum those f_j(1) with j ∈ {i1, i2, ..., it}. Fig. 1(a) shows the results generated by DiffusionRank. We consider five groups, namely five departments in our Engineering Faculty: CSE, MAE, EE, IE, and SE. γ is set to be 1, and the numbers in Fig. 1(a) are the amounts of heat that the groups diffuse to each other. These results are normalized by the total number of each group, and the edges are ignored if the values are less than 0.000001. The group-to-group relations are therefore detected; for example, we can see that the strongest overall tie is from EE to IE. While this is a natural application for DiffusionRank because of the easy interpretation in terms of the amount of heat flowing from one group to another, it is difficult to apply other ranking techniques to such an application because they lack such a physical meaning. 4.2.3 Graph cut Third, it can be used to partition the Web graph into several parts. A quick example is shown below. The graph in Fig. 1(b) is an undirected graph, and so we employ Eq. (3). If we know that node 1 belongs to one community and that node 12 belongs to another community, then we can put one unit of positive heat source on node 1 and one unit of negative heat source on node 12. After time 1, if we set γ = 0.5, the heat distribution is [0.25, 0.16, 0.17, 0.16, 0.15, 0.09, 0.01, -0.04, -0.18, -0.21, -0.21, -0.34], and if we set γ = 1, it is [0.17, 0.16, 0.17, 0.16, 0.16, 0.12, 0.02, -0.07, -0.18, -0.22, -0.24, -0.24]. In both settings, we can easily divide the graph into two parts: {1, 2, 3, 4, 5, 6, 7} with positive temperatures and {8, 9, 10, 11, 12} with negative temperatures. For directed graphs and random graphs, we can similarly cut them by employing the corresponding heat solutions. 4.2.4 Anti-manipulation Fourth, it can be used to combat manipulation. Let G2 contain the trusted Web pages (j1, j2, ..., js); then for each page i, Σ_v h_{i, j_v} is the heat that page i receives from G2, and it can be computed by the discrete approximation of Eq. (4) in the case of a static graph or by Eq. (6) in the case of a random graph, in which f(0) is set to be a special initial heat distribution so that the trusted Web pages have unit heat while all the others have zero heat. In doing so, a manipulated Web page will get a lower rank unless it has strong in-links from the trusted Web pages, directly or indirectly. The situation is quite different for PageRank because PageRank is input-independent, as we have shown in Section 3.1. Based on the fact that the connection from a trusted page to a "bad" page should be weak (fewer uncrossed paths, longer distances, and narrower pipes), we can say that DiffusionRank can resist Web spam if we can select trusted pages. It is fortunate that the trusted-page selection method in [7], the first part of TrustRank, can help us fulfill this task. For such an application of DiffusionRank, the computational complexity of the Discrete Diffusion Kernel is the same as that of PageRank in the cases of both a static graph and a random graph.
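Returning to the graph-cut use in Section 4.2.3: the cut-by-sign procedure is easy to reproduce on any small undirected graph. The sketch below uses a hypothetical six-node graph with two loosely connected triangles (the twelve-node graph of Fig. 1(b) is not reproduced here), builds H as in Section 3.3, and partitions nodes by the sign of their temperature after time 1; summing kernel entries over node groups in the same way gives the group-group quantities of Section 4.2.2.

import numpy as np
from scipy.linalg import expm

# A hypothetical undirected graph with two loosely connected triangles {0,1,2} and {3,4,5}.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]
n, gamma = 6, 1.0

# H for an undirected graph: H_ij = 1 if (v_i, v_j) is an edge, H_ii = -d(v_i), 0 otherwise.
H = np.zeros((n, n))
for i, j in edges:
    H[i, j] = H[j, i] = 1.0
deg = H.sum(axis=1)
H[np.arange(n), np.arange(n)] = -deg

f0 = np.zeros(n)
f0[0], f0[5] = 1.0, -1.0     # +1 heat source in one community, -1 in the other

f1 = expm(gamma * H).dot(f0)                     # Eq. (3): f(1) = e^{gamma H} f(0)
positive = [i for i in range(n) if f1[i] > 0]    # cut the graph by the sign of the temperature
negative = [i for i in range(n) if f1[i] <= 0]
print(f1)
print(positive, negative)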
This can be seen from Eq. (6): we need N iterations, and each iteration requires a multiplication of a matrix and a vector, while in Eq. (1) we also need a multiplication of a matrix and a vector for each iteration. 4.3 The Physical Meaning of γ γ plays an important role in the anti-manipulation effect of DiffusionRank. γ is the thermal conductivity, i.e., the heat diffusion coefficient. If it has a high value, heat diffuses very quickly; conversely, if it is small, heat diffuses slowly. In the extreme case, if it is infinitely large, then heat diffuses from one node to the other nodes immediately, and this is exactly the case corresponding to PageRank. Next, we interpret this mathematically. THEOREM 1. When γ tends to infinity and f(0) is not the zero vector, e^{γR} f(0) is proportional to the stable distribution produced by PageRank. Proof. Let g = (1/n) 1. By the Perron Theorem [11], we have shown that 1 is the largest eigenvalue of P = (1 − α) g 1^T + α A, and that there is no other eigenvalue whose absolute value is equal to 1. Let x be the stable distribution, so that Px = x; x is the eigenvector corresponding to the eigenvalue 1. Assume the n − 1 other eigenvalues of P are |λ_2| < 1, ..., |λ_n| < 1; then all the eigenvalues of the matrix e^{γR} are 1, e^{γ(λ_2 − 1)}, ..., e^{γ(λ_n − 1)}. When γ → ∞, they become 1, 0, ..., 0, which means that 1 is the only nonzero eigenvalue of e^{γR} when γ → ∞. We can see that when γ → ∞, e^{γR} e^{γR} f(0) = e^{γR} f(0), and so e^{γR} f(0) is an eigenvector of e^{γR} when γ → ∞. On the other hand, e^{γR} x = (I + γR + (γ^2/2!) R^2 + (γ^3/3!) R^3 + ...) x = Ix + γRx + (γ^2/2!) R^2 x + (γ^3/3!) R^3 x + ... = x, since Rx = (−I + P) x = −x + x = 0, and hence x is an eigenvector of e^{γR} for any γ. Therefore both x and e^{γR} f(0) are eigenvectors corresponding to the unique nonzero eigenvalue 1 of e^{γR} when γ → ∞, and consequently x = c e^{γR} f(0). By this theorem, we see that DiffusionRank is a generalization of PageRank. When γ = 0, the ranking values are most robust to manipulation, since no heat is diffused and the system is unchanged, but the Web structure is completely ignored because e^{γR} f(0) = e^{0·R} f(0) = I f(0) = f(0); when γ = ∞, DiffusionRank becomes PageRank and can be manipulated easily. We expect an appropriate setting of γ that can balance both. For this we have no theoretical result, but in practice we find that γ = 1 works well in Section 5. Next we discuss how to determine the number of iterations if we employ the discrete heat kernel. 4.4 The Number of Iterations While we enjoy the advantage of the concise form of the exponential heat kernel, it is better for us to calculate DiffusionRank by employing Eq. (6) in an iterative way. Then the problem of determining N, the number of iterations, arises: for a given threshold ε, find N such that ||((I + (γ/N) R)^N − e^{γR}) f(0)|| < ε for any f(0) whose sum is one. Since it is difficult to solve this problem, we propose a heuristic motivated by the following observation. Comparing the eigenvalues of (I + (γ/N) R)^N and e^{γR}, we observe that the eigenvalues of (I + (γ/N) R)^N − e^{γR} are (1 + γ(λ_i − 1)/N)^N − e^{γ(λ_i − 1)}. We propose a heuristic method that determines N so that the differences between these eigenvalues are less than a threshold for the positive λ's only. We also observe that, if γ = 1 and λ < 1, the difference |(1 + γ(λ − 1)/N)^N − e^{γ(λ − 1)}| becomes small as N grows, so N can be chosen according to different accuracy requirements. In this paper, we use the relatively accurate setting N = 100 to make the real eigenvalues of (I + (γ/N) R)^N − e^{γR} less than 0.005.
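Putting Sections 4.1-4.4 together, the following sketch mirrors Algorithm 1 with the Discrete Diffusion Kernel: inverse-PageRank seed selection, an initial heat vector concentrated on the trusted pages, and M_B diffusion iterations. The trusted-page oracle, the tiny demo graph and all parameter values are stand-ins for illustration; this is not the C/Matlab implementation used in the experiments below.

import numpy as np

def diffusion_rank(A, U, trusted, n, alpha_i=0.85, alpha_b=0.85,
                   m_i=100, L=1, gamma=1.0, m_b=100):
    """A sketch of Algorithm 1 with the Discrete Diffusion Kernel (illustrative only)."""
    one = np.ones(n) / n
    # Lines 1-4: inverse PageRank to score candidate seed pages.
    s = np.ones(n)
    for _ in range(m_i):
        s = alpha_i * U.dot(s) + (1 - alpha_i) * one
    # Lines 5-13: pick up to L trusted pages in decreasing order of s; 'trusted' plays
    # the role of the human evaluator, and d becomes the initial heat distribution.
    d = np.zeros(n)
    count = 0
    for idx in np.argsort(-s):
        if count >= L:
            break
        if trusted(idx):
            d[idx] = 1.0
            count += 1
    if d.sum() > 0:
        d = d / d.sum()
    # Lines 14-18: M_B iterations of h <- (1 - gamma/M_B) h + (gamma/M_B) P h,
    # with P = alpha_B A + (1 - alpha_B) (1/n) 1 1^T, i.e. the discrete kernel applied to d.
    h = d.copy()
    for _ in range(m_b):
        h = (1 - gamma / m_b) * h + (gamma / m_b) * (alpha_b * A.dot(h) + (1 - alpha_b) * one)
    return h

# Tiny demo on the hypothetical 4-page graph used in the earlier snippets; page 2 is trusted.
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n = 4
A = np.zeros((n, n))
U = np.zeros((n, n))
indeg = [0] * n
for j, outs in links.items():
    for i in outs:
        indeg[i] += 1
for j, outs in links.items():
    for i in outs:
        A[i, j] = 1.0 / len(outs)   # transition matrix A
        U[j, i] = 1.0 / indeg[i]    # inverse transition: for a link from j to i, U[j, i] = 1/I_i

print(diffusion_rank(A, U, trusted=lambda idx: idx == 2, n=n))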
5. EXPERIMENTS In this section, we show the experimental data, the methodology, the settings, and the results. 5.1 Data Preparation Our input data consist of a toy graph, a middle-size real-world graph, and a large-size real-world graph. The toy graph is shown in Fig. 2(a). The graph below it shows node 1 being manipulated by adding new nodes A, B, C, ... such that they all point to node 1 and node 1 points to them all. The data of the two real Web graphs were obtained from the domain of our institute in October 2004. The total numbers of pages found are 18,542 in the middle-size graph and 607,170 in the large-size graph, respectively. The middle-size graph is a subgraph of the large-size graph, and they were obtained by the same crawler: one was recorded by the crawler at an earlier time, and the other was obtained when the crawler stopped. 5.2 Methodology The algorithms we run include PageRank, TrustRank and DiffusionRank. All the rank values are multiplied by the number of nodes so that the sum of the rank values is equal to the number of nodes. By this normalization, we can compare the results on graphs with different sizes, since the average rank value is one for any graph after such normalization. We need the value difference and the pairwise order difference as comparison measures. Their definitions are as follows. Value Difference. The value difference between A = {A_i}_{i=1}^n and B = {B_i}_{i=1}^n is measured as Σ_{i=1}^n |A_i − B_i|. Pairwise Order Difference. The order difference between A and B is measured as the number of significant order differences between A and B. The pair (A[i], A[j]) and (B[i], B[j]) is considered as a significant order difference if one of the following cases happens: A[i] > A[j] + 0.1 while B[i] < B[j], or A[i] < A[j] while B[i] > B[j] + 0.1. Figure 2: (a) The toy graph consisting of six nodes, where node 1 is being manipulated by adding new nodes A, B, C, ...; (b) the approximation tendency to PageRank by DiffusionRank. Figure 3: The rank values of the manipulated nodes on the toy graph. 5.3 Experimental Set-up The experiments on the middle-size graph and the large-size graph are conducted on a workstation whose hardware model is Nix Dual Intel Xeon 2.2GHz with 1GB RAM and a Linux Kernel 2.4.18-27smp (RedHat 7.3). In calculating DiffusionRank, we employ Eq. (6) and the discrete approximation of Eq. (4) for such graphs. The related tasks are implemented using the C language. For the toy graph, we employ the continuous diffusion kernels in Eq. (4) and Eq. (5), and implement the related tasks using Matlab. For nodes that have zero out-degree (dangling nodes), we employ the method in the modified PageRank algorithm [8], in which dangling nodes are considered to have random links uniformly to each node. We set α = αI = αB = 0.85 in all algorithms. We also set g to be the uniform distribution in both PageRank and DiffusionRank. For DiffusionRank, we set γ = 1. According to the discussions in Section 4.3 and Section 4.4, we set the iteration number to MB = 100 in DiffusionRank, and for accuracy considerations, the iteration number in all the algorithms is set to 100. 5.4 Approximation of PageRank We show that when γ tends to infinity, the value differences between DiffusionRank and PageRank tend to zero. Fig. 2(b) shows the approximation property of DiffusionRank, as proved in Theorem 1, on the toy graph.
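Fig. 2(b) itself depends on the toy graph, but the same tendency can be observed on any small graph: as γ grows, the value difference (Section 5.2) between normalized DiffusionRank and PageRank shrinks towards zero, as Theorem 1 predicts. A sketch with an illustrative graph and a single trusted page:

import numpy as np
from scipy.linalg import expm

# Hypothetical 4-page graph (the toy graph of Fig. 2(a) is not reproduced here).
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n, alpha = 4, 0.85
A = np.zeros((n, n))
for j, outs in links.items():
    for i in outs:
        A[i, j] = 1.0 / len(outs)
g = np.ones(n) / n
P = (1 - alpha) * np.outer(g, np.ones(n)) + alpha * A
R = -np.eye(n) + P

# PageRank as the reference, normalized to sum to n as in Section 5.2.
x = np.ones(n) / n
for _ in range(100):
    x = P.dot(x)
pagerank = n * x / x.sum()

f0 = np.zeros(n)
f0[0] = 1.0   # a single trusted page, i.e. L = 1
for gamma in [0.5, 1, 2, 5, 10, 20, 50]:
    f1 = expm(gamma * R).dot(f0)
    diffusionrank = n * f1 / f1.sum()
    value_diff = float(np.abs(diffusionrank - pagerank).sum())   # value difference of Section 5.2
    print(gamma, round(value_diff, 6))   # shrinks towards 0 as gamma grows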
The horizontal axis of Fig. 2(b) marks the γ value, and the vertical axis corresponds to the value difference between DiffusionRank and PageRank. All the possible trusted sets with L = 1 are considered. For L > 1, the results should be linear combinations of some of these curves because of the linearity of the solutions to the heat equations. On other graphs, the situations are similar. 5.5 Results of Anti-manipulation In this section, we show how the rank values change as the intensity of manipulation increases. We measure the intensity of manipulation by the number of newly added points that point to the manipulated point. The horizontal axes of Fig. 3 stand for the numbers of newly added points, and the vertical axes show the corresponding rank values of the manipulated nodes. To be clear, we consider all six situations. Every node in Fig. 2(a) is manipulated in turn, and its corresponding values for PageRank, TrustRank (TR), and DiffusionRank (DR) are shown in one of the six sub-figures in Fig. 3. The vertical axes show which node is being manipulated. In each sub-figure, the trusted sets are computed as follows: the inverse PageRank yields the results [1.26, 0.85, 1.31, 1.36, 0.51, 0.71], and we let L = 1; if the manipulated node is not 4, then the trusted set is {4}, and otherwise it is {3}. We observe that in all the cases, the rank values of the manipulated node for DiffusionRank grow slowest as the number of newly added nodes increases. On the middle-size graph and the large-size graph, this conclusion also holds; see Fig. 4. Note that, in Fig. 4(a), we choose four trusted sets (L = 1), on which we test DiffusionRank and TrustRank; the results are denoted by DiffusionRank_i and TrustRank_i (i = 0, 1, 2, 3 denotes the four trusted sets). In Fig. 4(b), we choose one trusted set (L = 1). Moreover, in both Fig. 4(a) and Fig. 4(b), we show the results for DiffusionRank when we have no trusted set and we trust all the pages before some of them are manipulated. We also test the order difference between the ranking order A before the page is manipulated and the ranking order PA after the page is manipulated. Because the number of pages changes after manipulation, we only compare the common part of A and PA. Figure 4: (a) The rank values of the manipulated nodes on the middle-size graph; (b) the rank values of the manipulated nodes on the large-size graph. This experiment is used to test the stability of all these algorithms. The smaller the order difference, the more stable the algorithm, in the sense that only a smaller part of the order relations is affected by the manipulation. Figure 5(a) shows how the order difference values change when we add new nodes that point to the manipulated node. We give several γ settings. We find that when γ = 1, the least order difference is achieved by DiffusionRank. It is interesting to point out that as γ increases, the order difference increases first; after reaching a maximum value, it decreases, and finally it tends to the PageRank results. We show this tendency in Fig. 5(b), in which we choose three different settings: the numbers of manipulated nodes are 2,000, 5,000, and 10,000, respectively. From these figures, we can see that when γ < 2, the values are less than those for PageRank, and that when γ > 20, the difference between PageRank and DiffusionRank is very small. After these investigations, we find that in all the graphs we tested, DiffusionRank (with γ = 1) is the most robust to manipulation, both in value difference and in order difference.
The trusted-set selection algorithm proposed in [7] is effective for both TrustRank and DiffusionRank. Figure 5: (a) Pairwise order difference on the middle-size graph; the smaller it is, the more stable the algorithm; (b) the tendency as γ varies. 6. CONCLUSIONS We conclude that DiffusionRank is a generalization of PageRank, which is interesting in that the heat diffusion coefficient γ can balance the extent to which we want to model the original Web graph and the extent to which we want to reduce the effect of link manipulations. The experimental results show that we can actually achieve such a balance by setting γ = 1, although the best setting, including a varying γ, is still under further investigation. This anti-manipulation feature enables DiffusionRank to be a candidate penicillin for Web spamming. Moreover, DiffusionRank can be employed to find group-group relations and to partition the Web graph into small communities. All these advantages can be achieved with the same computational complexity as PageRank. For the special application of anti-manipulation, DiffusionRank performs best, both in reduction effects and in stability, among all three algorithms.
DiffusionRank: A Possible Penicillin for Web Spamming ABSTRACT While the PageRank algorithm has proven to be very effective for ranking Web pages, the rank scores of Web pages can be manipulated. To handle the manipulation problem and to cast a new insight on the Web structure, we propose a ranking algorithm called DiffusionRank. DiffusionRank is motivated by the heat diffusion phenomenon, which can be connected to Web ranking because the flow of activities on the Web can be imagined as heat flow, the link from one page to another can be treated as the pipe of an air-conditioner, and heat flow can embody the structure of the underlying Web graph. Theoretically we show that DiffusionRank can serve as a generalization of PageRank when the heat diffusion coefficient γ tends to infinity. In such a case 1/γ = 0, and DiffusionRank (PageRank) has a low ability of anti-manipulation. When γ = 0, DiffusionRank obtains the highest ability of anti-manipulation, but in such a case the Web structure is completely ignored. Consequently, γ is an interesting factor that can control the balance between the ability of preserving the original Web and the ability of reducing the effect of manipulation. It is found empirically that, when γ = 1, DiffusionRank has a Penicillin-like effect on link manipulation. Moreover, DiffusionRank can be employed to find group-to-group relations on the Web, to divide the Web graph into several parts, and to find link communities. Experimental results show that the DiffusionRank algorithm achieves the above-mentioned advantages as expected. 1. INTRODUCTION While the PageRank algorithm [13] has proven to be very effective for ranking Web pages, inaccurate PageRank results are induced because of Web page manipulations by people for commercial interests. The manipulation problem is also called Web spam, which refers to hyperlinked pages on the World Wide Web that are created with the intention of misleading search engines [7]. It is reported that approximately 70% of all pages in the .biz domain and about 35% of the pages in the .us domain belong to the spam category [12]. The reason for the increasing amount of Web spam is explained in [12]: some Web site operators try to influence the positioning of their pages within search results because of the large fraction of Web traffic originating from searches and the high potential monetary value of this traffic. From the viewpoint of the Web site operators who want to increase the ranking value of a particular page for search engines, Keyword Stuffing and Link Stuffing are being used widely [7, 12]. From the viewpoint of the search engine managers, Web spam is very harmful to the users' evaluations and thus to their preference in choosing search engines, because people believe that a good search engine should not return irrelevant or low-quality results. There are two methods being employed to combat the Web spam problem. Machine learning methods are employed to handle keyword stuffing. To successfully apply machine learning methods, we need to dig out some useful textual features for Web pages, to mark part of the Web pages as either spam or non-spam, and then to apply supervised learning techniques to mark the other pages; for example, see [5, 12]. Link analysis methods are also employed to handle the link stuffing problem. One example is TrustRank [7], a link-based method, in which the link structure is utilized so that human-labelled trusted pages can propagate their trust scores through their links.
H-77
Automatic Extraction of Titles from General Documents using Machine Learning
In this paper, we propose a machine learning approach to title extraction from general documents. By general documents, we mean documents that can belong to any one of a number of specific genres, including presentations, book chapters, technical papers, brochures, reports, and letters. Previously, methods have been proposed mainly for title extraction from research papers. It has not been clear whether it could be possible to conduct automatic title extraction from general documents. As a case study, we consider extraction from Office including Word and PowerPoint. In our approach, we annotate titles in sample documents (for Word and PowerPoint respectively) and take them as training data, train machine learning models, and perform title extraction using the trained models. Our method is unique in that we mainly utilize formatting information such as font size as features in the models. It turns out that the use of formatting information can lead to quite accurate extraction from general documents. Precision and recall for title extraction from Word is 0.810 and 0.837 respectively, and precision and recall for title extraction from PowerPoint is 0.875 and 0.895 respectively in an experiment on intranet data. Other important new findings in this work include that we can train models in one domain and apply them to another domain, and more surprisingly we can even train models in one language and apply them to another language. Moreover, we can significantly improve search ranking results in document retrieval by using the extracted titles.
[ "machin learn", "search", "titl extract", "genr", "automat titl extract", "format inform", "document retriev", "languag independ", "metada of document", "linguist featur", "comparison between model", "model gener", "extract titl us", "classifi", "inform extract", "metada extract" ]
[ "P", "P", "P", "P", "P", "P", "P", "M", "M", "M", "M", "R", "M", "U", "R", "M" ]
Automatic Extraction of Titles from General Documents using Machine Learning Yunhua Hu1 Computer Science Department Xi``an Jiaotong University No 28, Xianning West Road Xi'an, China, 710049 yunhuahu@mail.xjtu.edu.cn Hang Li, Yunbo Cao Microsoft Research Asia 5F Sigma Center, No. 49 Zhichun Road, Haidian, Beijing, China, 100080 {hangli,yucao}@microsoft. com Qinghua Zheng Computer Science Department Xi``an Jiaotong University No 28, Xianning West Road Xi'an, China, 710049 qhzheng@mail.xjtu.edu.cn Dmitriy Meyerzon Microsoft Corporation One Microsoft Way Redmond, WA, USA, 98052 dmitriym@microsoft.com ABSTRACT In this paper, we propose a machine learning approach to title extraction from general documents. By general documents, we mean documents that can belong to any one of a number of specific genres, including presentations, book chapters, technical papers, brochures, reports, and letters. Previously, methods have been proposed mainly for title extraction from research papers. It has not been clear whether it could be possible to conduct automatic title extraction from general documents. As a case study, we consider extraction from Office including Word and PowerPoint. In our approach, we annotate titles in sample documents (for Word and PowerPoint respectively) and take them as training data, train machine learning models, and perform title extraction using the trained models. Our method is unique in that we mainly utilize formatting information such as font size as features in the models. It turns out that the use of formatting information can lead to quite accurate extraction from general documents. Precision and recall for title extraction from Word is 0.810 and 0.837 respectively, and precision and recall for title extraction from PowerPoint is 0.875 and 0.895 respectively in an experiment on intranet data. Other important new findings in this work include that we can train models in one domain and apply them to another domain, and more surprisingly we can even train models in one language and apply them to another language. Moreover, we can significantly improve search ranking results in document retrieval by using the extracted titles. Categories and Subject Descriptors H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval - Search Process; H.4.1 [Information Systems Applications]: Office Automation - Word processing; D.2.8 [Software Engineering]: Metrics - complexity measures, performance measures General Terms Algorithms, Experimentation, Performance. 1. INTRODUCTION Metadata of documents is useful for many kinds of document processing such as search, browsing, and filtering. Ideally, metadata is defined by the authors of documents and is then used by various systems. However, people seldom define document metadata by themselves, even when they have convenient metadata definition tools [26]. Thus, how to automatically extract metadata from the bodies of documents turns out to be an important research issue. Methods for performing the task have been proposed. However, the focus was mainly on extraction from research papers. For instance, Han et al. [10] proposed a machine learning based method to conduct extraction from research papers. They formalized the problem as that of classification and employed Support Vector Machines as the classifier. They mainly used linguistic features in the model.1 In this paper, we consider metadata extraction from general documents. By general documents, we mean documents that may belong to any one of a number of specific genres. 
General documents are more widely available in digital libraries, intranets and the internet, and thus investigation on extraction from them is sorely needed. Research papers usually have well-formed styles and noticeable characteristics. In contrast, the styles of general documents can vary greatly. It has not been clarified whether a machine learning based approach can work well for this task. There are many types of metadata: title, author, date of creation, etc.. As a case study, we consider title extraction in this paper. General documents can be in many different file formats: Microsoft Office, PDF (PS), etc.. As a case study, we consider extraction from Office including Word and PowerPoint. We take a machine learning approach. We annotate titles in sample documents (for Word and PowerPoint respectively) and take them as training data to train several types of models, and perform title extraction using any one type of the trained models. In the models, we mainly utilize formatting information such as font size as features. We employ the following models: Maximum Entropy Model, Perceptron with Uneven Margins, Maximum Entropy Markov Model, and Voted Perceptron. In this paper, we also investigate the following three problems, which did not seem to have been examined previously. (1) Comparison between models: among the models above, which model performs best for title extraction; (2) Generality of model: whether it is possible to train a model on one domain and apply it to another domain, and whether it is possible to train a model in one language and apply it to another language; (3) Usefulness of extracted titles: whether extracted titles can improve document processing such as search. Experimental results indicate that our approach works well for title extraction from general documents. Our method can significantly outperform the baselines: one that always uses the first lines as titles and the other that always uses the lines in the largest font sizes as titles. Precision and recall for title extraction from Word are 0.810 and 0.837 respectively, and precision and recall for title extraction from PowerPoint are 0.875 and 0.895 respectively. It turns out that the use of format features is the key to successful title extraction. (1) We have observed that Perceptron based models perform better in terms of extraction accuracies. (2) We have empirically verified that the models trained with our approach are generic in the sense that they can be trained on one domain and applied to another, and they can be trained in one language and applied to another. (3) We have found that using the extracted titles we can significantly improve precision of document retrieval (by 10%). We conclude that we can indeed conduct reliable title extraction from general documents and use the extracted results to improve real applications. The rest of the paper is organized as follows. In section 2, we introduce related work, and in section 3, we explain the motivation and problem setting of our work. In section 4, we describe our method of title extraction, and in section 5, we describe our method of document retrieval using extracted titles. Section 6 gives our experimental results. We make concluding remarks in section 7. 2. RELATED WORK 2.1 Document Metadata Extraction Methods have been proposed for performing automatic metadata extraction from documents; however, the main focus was on extraction from research papers. 
The proposed methods fall into two categories: the rule-based approach and the machine learning based approach. Giuffrida et al. [9], for instance, developed a rule-based system for automatically extracting metadata from research papers in Postscript. They used rules like titles are usually located on the upper portions of the first pages and they are usually in the largest font sizes. Liddy et al. [14] and Yilmazel et al. [23] performed metadata extraction from educational materials using rule-based natural language processing technologies. Mao et al. [16] also conducted automatic metadata extraction from research papers using rules on formatting information. The rule-based approach can achieve high performance. However, it also has disadvantages. It is less adaptive and robust when compared with the machine learning approach. Han et al. [10], for instance, conducted metadata extraction with the machine learning approach. They viewed the problem as that of classifying the lines in a document into the categories of metadata and proposed using Support Vector Machines as the classifier. They mainly used linguistic information as features. They reported high extraction accuracy from research papers in terms of precision and recall. 2.2 Information Extraction Metadata extraction can be viewed as an application of information extraction, in which given a sequence of instances, we identify a subsequence that represents information in which we are interested. Hidden Markov Model [6], Maximum Entropy Model [1, 4], Maximum Entropy Markov Model [17], Support Vector Machines [3], Conditional Random Field [12], and Voted Perceptron [2] are widely used information extraction models. Information extraction has been applied, for instance, to part-of-speech tagging [20], named entity recognition [25] and table extraction [19]. 2.3 Search Using Title Information Title information is useful for document retrieval. In the system Citeseer, for instance, Giles et al. managed to extract titles from research papers and make use of the extracted titles in metadata search of papers [8]. In web search, the title fields (i.e., file properties) and anchor texts of web pages (HTML documents) can be viewed as `titles'' of the pages [5]. Many search engines seem to utilize them for web page retrieval [7, 11, 18, 22]. Zhang et al. found that web pages with well-defined metadata are more easily retrieved than those without well-defined metadata [24]. To the best of our knowledge, no research has been conducted on using titles extracted from general documents (e.g., Office documents) for search of the documents. 3. MOTIVATION AND PROBLEM SETTING We consider the issue of automatically extracting titles from general documents. By general documents, we mean documents that can belong to any one of a number of specific genres. The documents can be presentations, books, book chapters, technical papers, brochures, reports, memos, specifications, letters, announcements, or resumes. General documents are more widely available in digital libraries, intranets, and the internet, and thus investigation on title extraction from them is sorely needed. Figure 1 shows an estimate on the distributions of file formats on the intranet and the internet [15]. Office and PDF are the main file formats on the intranet. Even on the internet, the documents in these formats are still not negligible, given its extremely large size. In this paper, without loss of generality, we take Office documents as an example. Figure 1.
Distributions of file formats in internet and intranet. For Office documents, users can define titles as file properties using a feature provided by Office. We found in an experiment, however, that users seldom use the feature and thus titles in file properties are usually very inaccurate. That is to say, titles in file properties are usually inconsistent with the `true'' titles in the file bodies that are created by the authors and are visible to readers. We collected 6,000 Word and 6,000 PowerPoint documents from an intranet and the internet and examined how many titles in the file properties are correct. We found that surprisingly the accuracy was only 0.265 (cf., Section 6.3 for details). A number of reasons can be considered. For example, if one creates a new file by copying an old file, then the file property of the new file will also be copied from the old file. In another experiment, we found that Google uses the titles in file properties of Office documents in search and browsing, but the titles are not very accurate. We created 50 queries to search Word and PowerPoint documents and examined the top 15 results of each query returned by Google. We found that nearly all the titles presented in the search results were from the file properties of the documents. However, only 0.272 of them were correct. Actually, `true'' titles usually exist at the beginnings of the bodies of documents. If we can accurately extract the titles from the bodies of documents, then we can exploit reliable title information in document processing. This is exactly the problem we address in this paper. More specifically, given a Word document, we are to extract the title from the top region of the first page. Given a PowerPoint document, we are to extract the title from the first slide. A title sometimes consists of a main title and one or two subtitles. We only consider extraction of the main title. As baselines for title extraction, we use the method that always takes the first lines as titles and the method that always takes the lines in the largest font sizes as titles. Figure 2. Title extraction from Word document. Figure 3. Title extraction from PowerPoint document. Next, we define a `specification'' for human judgments in title data annotation. The annotated data will be used in training and testing of the title extraction methods. Summary of the specification: The title of a document should be identified on the basis of common sense, if there is no difficulty in the identification. However, there are many cases in which the identification is not easy. There are some rules defined in the specification that guide identification for such cases. The rules include: a title is usually in consecutive lines in the same format; a document can have no title; titles in images are not considered; a title should not contain words like `draft'', `whitepaper'', etc.; if it is difficult to determine which is the title, select the one in the largest font size; and if it is still difficult to determine which is the title, select the first candidate. (The specification covers all the cases we have encountered in data annotation.) Figures 2 and 3 show examples of Office documents from which we conduct title extraction. In Figure 2, `Differences in Win32 API Implementations among Windows Operating Systems'' is the title of the Word document. `Microsoft Windows'' on the top of this page is a picture and thus is ignored. In Figure 3, `Building Competitive Advantages through an Agile Infrastructure'' is the title of the PowerPoint document.
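The two baselines mentioned above are easy to state as code. A minimal sketch, assuming a document has already been pre-processed into lines carrying their text and font size (the class and attribute names are hypothetical):

from dataclasses import dataclass
from typing import List

@dataclass
class Line:
    text: str
    font_size: float   # hypothetical pre-processed attribute

def first_line_baseline(lines: List[Line]) -> str:
    """Baseline 1: always take the first non-empty line as the title."""
    for line in lines:
        if line.text.strip():
            return line.text.strip()
    return ""

def largest_font_baseline(lines: List[Line]) -> str:
    """Baseline 2: always take the line in the largest font size as the title."""
    candidates = [l for l in lines if l.text.strip()]
    if not candidates:
        return ""
    return max(candidates, key=lambda l: l.font_size).text.strip()

# Illustrative document: the first line is a header that Baseline 1 cannot skip.
doc = [Line("A hypothetical running header", 10.0),
       Line("Differences in Win32 API Implementations among Windows Operating Systems", 20.0),
       Line("Prepared by a hypothetical author", 11.0)]
print(first_line_baseline(doc))
print(largest_font_baseline(doc))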
We have developed a tool for annotation of titles by human annotators. Figure 4 shows a snapshot of the tool. Figure 4. Title annotation tool. 4. TITLE EXTRACTION METHOD 4.1 Outline Title extraction based on machine learning consists of training and extraction. The same pre-processing step occurs before training and extraction. During pre-processing, a number of units for processing are extracted from the top region of the first page of a Word document or the first slide of a PowerPoint document. If a line (lines are separated by `return'' symbols) only has a single format, then the line will become a unit. If a line has several parts and each of them has its own format, then each part will become a unit. Each unit will be treated as an instance in learning. A unit contains not only content information (linguistic information) but also formatting information. The input to pre-processing is a document and the output of pre-processing is a sequence of units (instances). Figure 5 shows the units obtained from the document in Figure 2. Figure 5. Example of units. In learning, the input is sequences of units where each sequence corresponds to a document. We take labeled units (labeled as title_begin, title_end, or other) in the sequences as training data and construct models for identifying whether a unit is title_begin, title_end, or other. We employ four types of models: Perceptron, Maximum Entropy (ME), Perceptron Markov Model (PMM), and Maximum Entropy Markov Model (MEMM). In extraction, the input is a sequence of units from one document. We employ one type of model to identify whether a unit is title_begin, title_end, or other. We then extract the units from the unit labeled with `title_begin'' to the unit labeled with `title_end''. The result is the extracted title of the document. The unique characteristic of our approach is that we mainly utilize formatting information for title extraction. Our assumption is that although general documents vary in styles, their formats have certain patterns and we can learn and utilize the patterns for title extraction. This is in contrast to the work by Han et al., in which only linguistic features are used for extraction from research papers. 4.2 Models The four models actually can be considered in the same metadata extraction framework. That is why we apply them together to our current problem. Each input is a sequence of instances x_1 x_2 ... x_k together with a sequence of labels y_1 y_2 ... y_k, where x_i and y_i represent an instance and its label, respectively (i = 1, 2, ..., k). Recall that an instance here represents a unit. A label represents title_begin, title_end, or other. Here, k is the number of units in a document. In learning, we train a model which can be generally denoted as a conditional probability distribution P(Y_1 ... Y_k | X_1 ... X_k), where X_i and Y_i denote random variables taking instance x_i and label y_i as values, respectively (i = 1, 2, ..., k). Figure 6. Metadata extraction model: a learning tool estimates the conditional distribution P(Y_1 ... Y_k | X_1 ... X_k) from labeled training sequences, and an extraction tool uses it to compute argmax P(y_1 ... y_k | x_1 ... x_k) for a new sequence of instances.
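A minimal sketch of the data structures implied by this outline: a document becomes a sequence of units carrying formatting and content information, each labeled title_begin, title_end, or other, and the extracted title is the span between the predicted title_begin and title_end. The attribute names are ours, not those of the actual system:

from dataclasses import dataclass
from typing import List

LABELS = ("title_begin", "title_end", "other")

@dataclass
class Unit:
    text: str              # content (linguistic) information
    font_size: float       # formatting information (an illustrative subset)
    bold: bool
    alignment: str
    label: str = "other"   # title_begin, title_end, or other; known only for training data

def extract_title(units: List[Unit], predicted: List[str]) -> str:
    """Return the text of the units from the predicted title_begin to the predicted title_end."""
    try:
        start = predicted.index("title_begin")
        end = predicted.index("title_end", start)
    except ValueError:
        return ""
    return " ".join(u.text for u in units[start:end + 1])

# Illustrative sequence of units and predicted labels for one document.
units = [Unit("Differences in Win32 API Implementations", 20.0, True, "center"),
         Unit("among Windows Operating Systems", 20.0, True, "center"),
         Unit("1. Introduction", 12.0, False, "left")]
print(extract_title(units, ["title_begin", "title_end", "other"]))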
We can also assume that the first-order Markov property holds for Y1, ..., Yk given X1, ..., Xk. Thus, we have

P(Y1 ... Yk | X1 ... Xk) = P(Y1 | X1) ... P(Yk | Yk-1, Xk)

Again, we obtain a number of classifiers. However, the classifiers are conditioned on the previous label. When we employ the Perceptron or Maximum Entropy model as a classifier, the models become a Perceptron Markov Model or Maximum Entropy Markov Model, respectively. That is to say, the two models are more precise, because they take the dependency between adjacent labels into account.

In extraction, given a new sequence of instances, we resort to one of the constructed models to assign a sequence of labels to the sequence of instances, i.e., perform extraction. For Perceptron and ME, we assign labels locally and combine the results globally later using heuristics. Specifically, we first identify the most likely title_begin. Then we find the most likely title_end within three units after the title_begin. Finally, we extract as a title the units between the title_begin and the title_end. For PMM and MEMM, we employ the Viterbi algorithm to find the globally optimal label sequence.

In this paper, for Perceptron, we actually employ an improved variant of it, called Perceptron with Uneven Margins [13]. This version of Perceptron can work well especially when the number of positive instances and the number of negative instances differ greatly, which is exactly the case in our problem. We also employ an improved version of the Perceptron Markov Model in which the Perceptron model is the so-called Voted Perceptron [2]. In addition, in training, the parameters of the model are updated globally rather than locally.
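The local-labeling-plus-heuristics decoding used with the Perceptron and ME classifiers can be sketched as follows. The score(unit, label) interface to a trained local classifier is an assumption of ours; the paper does not give code.

```python
# Sketch of the decoding heuristic for the non-Markovian models: pick the most
# likely title_begin, then the most likely title_end within three units after
# it, and return the units in between (inclusive). For PMM and MEMM the paper
# instead uses Viterbi decoding over the whole label sequence.
# score(unit, label) is an assumed interface returning a classifier score.

def extract_title_units(units, score):
    if not units:
        return []
    # Most likely title_begin over all units.
    begin = max(range(len(units)), key=lambda i: score(units[i], "title_begin"))
    # Most likely title_end within three units after the title_begin.
    window = range(begin, min(begin + 4, len(units)))
    end = max(window, key=lambda i: score(units[i], "title_end"))
    return units[begin:end + 1]
```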
4.3 Features

There are two types of features: format features and linguistic features. We mainly use the former. The features are used for both the title-begin and the title-end classifiers.

4.3.1 Format Features

Font Size: There are four binary features that represent the normalized font size of the unit (recall that a unit has only one type of font). If the font size of the unit is the largest in the document, then the first feature will be 1, otherwise 0. If the font size is the smallest in the document, then the fourth feature will be 1, otherwise 0. If the font size is above the average font size and not the largest in the document, then the second feature will be 1, otherwise 0. If the font size is below the average font size and not the smallest, the third feature will be 1, otherwise 0. It is necessary to conduct normalization on font sizes. For example, in one document the largest font size might be "12pt", while in another the smallest one might be "18pt".

Boldface: This binary feature represents whether or not the current unit is in boldface.

Alignment: There are four binary features that respectively represent the location of the current unit: "left", "center", "right", and "unknown alignment".

The following format features with respect to "context" play an important role in title extraction.

Empty Neighboring Unit: There are two binary features that represent, respectively, whether or not the previous unit and the current unit are blank lines.

Font Size Change: There are two binary features that represent, respectively, whether or not the font size of the previous unit and the font size of the next unit differ from that of the current unit.

Alignment Change: There are two binary features that represent, respectively, whether or not the alignment of the previous unit and the alignment of the next unit differ from that of the current one.

Same Paragraph: There are two binary features that represent, respectively, whether or not the previous unit and the next unit are in the same paragraph as the current unit.

4.3.2 Linguistic Features

The linguistic features are based on key words.

Positive Word: This binary feature represents whether or not the current unit begins with one of the positive words. The positive words include "title:", "subject:", and "subject line:". For example, in some documents the lines of titles and authors have the same formats. However, if lines begin with one of the positive words, then it is likely that they are title lines.

Negative Word: This binary feature represents whether or not the current unit begins with one of the negative words. The negative words include "To", "By", "created by", "updated by", etc. There are more negative words than positive words.

The above linguistic features are language dependent.

Word Count: A title should not be too long. We heuristically create four intervals: [1, 2], [3, 6], [7, 9] and [9, ∞) and define one feature for each interval. If the number of words in a title falls into an interval, then the corresponding feature will be 1; otherwise 0.

Ending Character: This feature represents whether the unit ends with ":", "-", or other special characters. A title usually does not end with such a character.

5. DOCUMENT RETRIEVAL METHOD

We describe our method of document retrieval using extracted titles. Typically, in information retrieval a document is split into a number of fields including body, title, and anchor text. A ranking function in search can use different weights for different fields of the document. Also, titles are typically assigned high weights, indicating that they are important for document retrieval. As explained previously, our experiment has shown that a significant number of documents actually have incorrect titles in the file properties, and thus in addition to using them we use the extracted titles as one more field of the document. By doing this, we attempt to improve the overall precision.

In this paper, we employ a modification of BM25 that allows field weighting [21]. As fields, we make use of body, title, extracted title, and anchor. First, for each term t in the query we count the term frequency tf_f,t in each field f of the document; each field frequency is then weighted according to the corresponding weight parameter w_f:

wtf_t = Σ_f w_f × tf_f,t

Similarly, we compute the document length as a weighted sum of the lengths of each field:

wdl = Σ_f w_f × dl_f

The average document length in the corpus, avwdl, becomes the average of all weighted document lengths. The score of a document is then

BM25F = Σ_t [ wtf_t × (k1 + 1) / ( k1 × ((1 - b) + b × wdl / avwdl) + wtf_t ) ] × log(N / n_t)

where N is the number of documents in the corpus and n_t is the number of documents containing term t. In our experiments we used k1 = 1.8 and b = 0.75. The weight for content (body) was 1.0, title was 10.0, anchor was 10.0, and extracted title was 5.0.
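A compact sketch of this field-weighted BM25 scoring is given below. It follows the formulas above with the parameters and weights reported in this section; the representation of a document as a dictionary of token lists is our own simplification, not the authors' implementation.

```python
import math

# Field weights used in the experiments (body 1.0, title 10.0, anchor 10.0,
# extracted title 5.0) and BM25 parameters k1 = 1.8, b = 0.75.
FIELD_WEIGHTS = {"body": 1.0, "title": 10.0, "anchor": 10.0, "extracted_title": 5.0}
K1, B = 1.8, 0.75

def weighted_doc_length(doc):
    # doc: dict mapping field name -> list of tokens
    return sum(w * len(doc.get(f, [])) for f, w in FIELD_WEIGHTS.items())

def weighted_tf(doc, term):
    return sum(w * doc.get(f, []).count(term) for f, w in FIELD_WEIGHTS.items())

def bm25f_score(query_terms, doc, corpus):
    """Field-weighted BM25 score of `doc`; `corpus` is a non-empty list of docs."""
    N = len(corpus)
    avwdl = sum(weighted_doc_length(d) for d in corpus) / N
    wdl = weighted_doc_length(doc)
    score = 0.0
    for t in query_terms:
        wtf = weighted_tf(doc, t)
        if wtf == 0:
            continue
        n_t = sum(1 for d in corpus if weighted_tf(d, t) > 0)
        idf = math.log(N / n_t)
        score += (wtf * (K1 + 1)) / (K1 * ((1 - B) + B * wdl / avwdl) + wtf) * idf
    return score
```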
6. EXPERIMENTAL RESULTS

6.1 Data Sets and Evaluation Measures

We used two data sets in our experiments. First, we downloaded and randomly selected 5,000 Word documents and 5,000 PowerPoint documents from an intranet of Microsoft. We call it MS hereafter. Second, we downloaded and randomly selected 500 Word and 500 PowerPoint documents from the DotGov and DotCom domains on the internet, respectively. Figure 7 shows the distributions of the genres of the documents. We see that the documents are indeed "general documents" as we define them.

Figure 7. Distributions of document genres.

Third, a data set in Chinese was also downloaded from the internet. It includes 500 Word documents and 500 PowerPoint documents in Chinese. We manually labeled the titles of all the documents, on the basis of our specification.

Not all the documents in the two data sets have titles. Table 1 shows the percentages of the documents having titles. We see that DotCom and DotGov have more PowerPoint documents with titles than MS. This might be because PowerPoint documents published on the internet are more formal than those on the intranet.

Table 1. The portion of documents with titles
Type         MS       DotCom    DotGov
Word         75.7%    77.8%     75.6%
PowerPoint   82.1%    93.4%     96.4%

In our experiments, we conducted evaluations on title extraction in terms of precision, recall, and F-measure. The evaluation measures are defined as follows:

Precision: P = A / (A + B)
Recall: R = A / (A + C)
F-measure: F1 = 2PR / (P + R)

Here, A, B, C, and D are the numbers of documents defined in Table 2.

Table 2. Contingency table with regard to title extraction
                 Is title    Is not title
Extracted        A           B
Not extracted    C           D

6.2 Baselines

We test the accuracies of the two baselines described in section 4.2. They are denoted as "largest font size" and "first line" respectively.

6.3 Accuracy of Titles in File Properties

We investigate how many titles in the file properties of the documents are reliable. We view the titles annotated by humans as true titles and test how many titles in the file properties can approximately match with the true titles. We use Edit Distance to conduct the approximate match. (Approximate match is only used in this evaluation. This is because human-annotated titles can sometimes be slightly different from the titles in file properties on the surface, e.g., they contain extra spaces.) Given string A and string B:

if ( (D == 0) or ( D / (La + Lb) < θ ) ) then string A = string B

where D is the Edit Distance between string A and string B, La is the length of string A, Lb is the length of string B, and θ = 0.1.

Table 3. Accuracies of titles in file properties
File Type    Domain    Precision   Recall   F1
Word         MS        0.299       0.311    0.305
Word         DotCom    0.210       0.214    0.212
Word         DotGov    0.182       0.177    0.180
PowerPoint   MS        0.229       0.245    0.237
PowerPoint   DotCom    0.185       0.186    0.186
PowerPoint   DotGov    0.180       0.182    0.181

6.4 Comparison with Baselines

We conducted title extraction from the first data set (Word and PowerPoint in MS). As the model, we used Perceptron. We conducted 4-fold cross-validation; thus, all the results reported here are those averaged over 4 trials. Tables 4 and 5 show the results. We see that Perceptron significantly outperforms the baselines. In the evaluation, we use exact matching between the true titles annotated by humans and the extracted titles.

Table 4. Accuracies of title extraction with Word
Model                         Precision   Recall   F1
Perceptron                    0.810       0.837    0.823
Baseline: largest font size   0.700       0.758    0.727
Baseline: first line          0.707       0.767    0.736

Table 5. Accuracies of title extraction with PowerPoint
Model                         Precision   Recall   F1
Perceptron                    0.875       0.895    0.885
Baseline: largest font size   0.844       0.887    0.865
Baseline: first line          0.639       0.671    0.655

We see that the machine learning approach can achieve good performance in title extraction. For Word documents, both precision and recall of the approach are 8 percent higher than those of the baselines. For PowerPoint, both precision and recall of the approach are 2 percent higher than those of the baselines.
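To make the evaluation procedure concrete, the sketch below computes the precision, recall, and F-measure of Section 6.1 from the contingency counts of Table 2, and implements the Edit Distance based approximate match of Section 6.3 with θ = 0.1. The implementation choices (a hand-rolled Levenshtein distance, the function names) are ours; the paper does not specify code.

```python
def prf(a, b, c):
    """Precision, recall, F1 from the contingency counts A, B, C of Table 2."""
    p = a / (a + b) if a + b else 0.0
    r = a / (a + c) if a + c else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

def edit_distance(s, t):
    # Standard dynamic-programming Levenshtein distance.
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (cs != ct)))
        prev = cur
    return prev[-1]

def approx_match(a, b, theta=0.1):
    """Approximate match of Section 6.3: D == 0 or D / (La + Lb) < theta."""
    d = edit_distance(a, b)
    return d == 0 or d / (len(a) + len(b)) < theta
```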
We conduct significance tests. The results are shown in Table 6. Here, "Largest" denotes the baseline of using the largest font size, and "First" denotes the baseline of using the first line. The results indicate that the improvements of machine learning over the baselines are statistically significant (in the sense that the p-value < 0.05).

Table 6. Sign test results
Document Type   Sign test between         p-value
Word            Perceptron vs. Largest    3.59e-26
Word            Perceptron vs. First      7.12e-10
PowerPoint      Perceptron vs. Largest    0.010
PowerPoint      Perceptron vs. First      5.13e-40

We see, from the results, that the two baselines can work well for title extraction, suggesting that font size and position information are the most useful features for title extraction. However, it is also obvious that using only these two features is not enough. There are cases in which all the lines have the same font size (i.e., the largest font size), or cases in which the lines with the largest font size only contain general descriptions like "Confidential", "White paper", etc. For those cases, the "largest font size" method cannot work well. For similar reasons, the "first line" method alone cannot work well, either. With the combination of different features (evidence in title judgment), Perceptron can outperform Largest and First.

We also investigated the performance of solely using linguistic features. We found that it does not work well. It seems that the format features play important roles and the linguistic features are supplements.

We conducted an error analysis on the results of Perceptron. We found that the errors fell into three categories. (1) About one third of the errors were related to "hard cases". In these documents, the layouts of the first pages were difficult to understand, even for humans. Figures 8 and 9 show examples. (2) Nearly one fourth of the errors were from documents which do not have true titles but only contain bullets. Since we conduct extraction from the top regions, it is difficult to get rid of these errors with the current approach. (3) Confusions between main titles and subtitles were another type of error. Since we only labeled the main titles as titles, the extractions of both titles were considered incorrect. This type of error does little harm to document processing like search, however.

Figure 8. An example Word document.
Figure 9. An example PowerPoint document.

6.5 Comparison between Models

To compare the performance of different machine learning models, we conducted another experiment. Again, we perform 4-fold cross-validation on the first data set (MS). Tables 7 and 8 show the results of all the four models. It turns out that Perceptron and PMM perform the best, followed by MEMM, and ME performs the worst. In general, the Markovian models perform better than or as well as their classifier counterparts. This seems to be because the Markovian models are trained globally, while the classifiers are trained locally. The Perceptron based models perform better than the ME based counterparts. This seems to be because the Perceptron based models are created to make better classifications, while ME models are constructed for better prediction.

Table 7. Comparison between different learning models for title extraction with Word
Model        Precision   Recall   F1
Perceptron   0.810       0.837    0.823
MEMM         0.797       0.824    0.810
PMM          0.827       0.823    0.825
ME           0.801       0.621    0.699

Table 8. Comparison between different learning models for title extraction with PowerPoint
Model        Precision   Recall   F1
Perceptron   0.875       0.895    0.885
MEMM         0.841       0.861    0.851
PMM          0.873       0.896    0.885
ME           0.753       0.766    0.759
6.6 Domain Adaptation

We apply the model trained with the first data set (MS) to the second data set (DotCom and DotGov). Tables 9-12 show the results.

Table 9. Accuracies of title extraction with Word in DotGov
Model                         Precision   Recall   F1
Perceptron                    0.716       0.759    0.737
Baseline: largest font size   0.549       0.619    0.582
Baseline: first line          0.462       0.521    0.490

Table 10. Accuracies of title extraction with PowerPoint in DotGov
Model                         Precision   Recall   F1
Perceptron                    0.900       0.906    0.903
Baseline: largest font size   0.871       0.888    0.879
Baseline: first line          0.554       0.564    0.559

Table 11. Accuracies of title extraction with Word in DotCom
Model                         Precision   Recall   F1
Perceptron                    0.832       0.880    0.855
Baseline: largest font size   0.676       0.753    0.712
Baseline: first line          0.577       0.643    0.608

Table 12. Performance of PowerPoint document title extraction in DotCom
Model                         Precision   Recall   F1
Perceptron                    0.910       0.903    0.907
Baseline: largest font size   0.864       0.886    0.875
Baseline: first line          0.570       0.585    0.577

From the results, we see that the models can be adapted to different domains well. There is almost no drop in accuracy. The results indicate that the patterns of title formats exist across different domains, and it is possible to construct a domain-independent model by mainly using formatting information.

6.7 Language Adaptation

We apply the model trained with the data in English (MS) to the data set in Chinese. Tables 13 and 14 show the results.

Table 13. Accuracies of title extraction with Word in Chinese
Model                         Precision   Recall   F1
Perceptron                    0.817       0.805    0.811
Baseline: largest font size   0.722       0.755    0.738
Baseline: first line          0.743       0.777    0.760

Table 14. Accuracies of title extraction with PowerPoint in Chinese
Model                         Precision   Recall   F1
Perceptron                    0.766       0.812    0.789
Baseline: largest font size   0.753       0.813    0.782
Baseline: first line          0.627       0.676    0.650

We see that the models can be adapted to a different language. There are only small drops in accuracy. Obviously, the linguistic features do not work for Chinese, but the effect of not using them is negligible. The results indicate that the patterns of title formats exist across different languages. From the domain adaptation and language adaptation results, we conclude that the use of formatting information is the key to a successful extraction from general documents.

6.8 Search with Extracted Titles

We performed experiments on using title extraction for document retrieval. As a baseline, we employed BM25 without using extracted titles. The ranking mechanism was as described in Section 5. The weights were heuristically set; we did not conduct optimization on the weights. The evaluation was conducted on a corpus of 1.3M documents crawled from the intranet of Microsoft, using 100 evaluation queries obtained from this intranet's search engine query logs. 50 queries were from the most popular set, while the other 50 queries were chosen randomly. Users were asked to provide judgments of the degree of document relevance on a scale of 1 to 5 (1 meaning detrimental, 2 - bad, 3 - fair, 4 - good, and 5 - excellent).
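The retrieval metrics reported next (precision @5, precision @10, and reciprocal rank under a relevance threshold) can be computed as in the sketch below. Treating "good or excellent" versus "excellent only" judgments as relevant mirrors the two relevance thresholds of Figure 10; the function names and the example judgments are ours.

```python
# Sketch of the ranking metrics of Section 6.8. `ranked_judgments` holds the
# graded judgments (1-5) of the documents returned for one query, in ranked
# order; `min_relevant_grade` maps the graded scale to binary relevance
# (e.g., 4 for "good or excellent", 5 for "excellent only").

def precision_at_k(ranked_judgments, k, min_relevant_grade):
    top = ranked_judgments[:k]
    return sum(1 for g in top if g >= min_relevant_grade) / k

def reciprocal_rank(ranked_judgments, min_relevant_grade):
    for rank, g in enumerate(ranked_judgments, 1):
        if g >= min_relevant_grade:
            return 1.0 / rank
    return 0.0

# Example with hypothetical judgments, counting good/excellent as relevant.
judgments = [3, 4, 2, 5, 1, 4, 3, 2, 1, 3]
print(precision_at_k(judgments, 10, 4), reciprocal_rank(judgments, 4))
```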
Figure 10 shows the results. In the chart, two sets of precision results were obtained by either considering good or excellent documents as relevant (left 3 bars, with relevance threshold 0.5), or by considering only excellent documents as relevant (right 3 bars, with relevance threshold 1.0).

Figure 10. Search ranking results.

Figure 10 shows different document retrieval results with different ranking functions in terms of precision @10, precision @5, and reciprocal rank:
• Blue bar - BM25 including the fields body, title (file property), and anchor text.
• Purple bar - BM25 including the fields body, title (file property), anchor text, and extracted title.

With the additional field of extracted title included in BM25, the precision @10 increased from 0.132 to 0.145, or by ~10%. Thus, it is safe to say that the use of extracted titles can indeed improve the precision of document retrieval.

7. CONCLUSION

In this paper, we have investigated the problem of automatically extracting titles from general documents. We have tried using a machine learning approach to address the problem. Previous work showed that the machine learning approach can work well for metadata extraction from research papers. In this paper, we showed that the approach can work for extraction from general documents as well. Our experimental results indicated that the machine learning approach can work significantly better than the baselines in title extraction from Office documents. Previous work on metadata extraction mainly used linguistic features in documents, while we mainly used formatting information. It appeared that using formatting information is a key for successfully conducting title extraction from general documents. We tried different machine learning models including Perceptron, Maximum Entropy, Maximum Entropy Markov Model, and Voted Perceptron. We found that the performance of the Perceptron models was the best. We applied models constructed in one domain to another domain and applied models trained in one language to another language. We found that the accuracies did not drop substantially across different domains and across different languages, indicating that the models were generic. We also attempted to use the extracted titles in document retrieval. We observed a significant improvement in document ranking performance for search when using extracted title information. All the above investigations were not conducted in previous work, and through our investigations we verified the generality and the significance of the title extraction approach.

8. ACKNOWLEDGEMENTS

We thank Chunyu Wei and Bojuan Zhao for their work on data annotation. We acknowledge Jinzhu Li for his assistance in conducting the experiments. We thank Ming Zhou, John Chen, Jun Xu, and the anonymous reviewers of JCDL'05 for their valuable comments on this paper.

9. REFERENCES
[1] Berger, A. L., Della Pietra, S. A., and Della Pietra, V. J. A maximum entropy approach to natural language processing. Computational Linguistics, 22:39-71, 1996.
[2] Collins, M. Discriminative training methods for hidden Markov models: theory and experiments with perceptron algorithms. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 1-8, 2002.
[3] Cortes, C. and Vapnik, V. Support-vector networks. Machine Learning, 20:273-297, 1995.
[4] Chieu, H. L. and Ng, H. T. A maximum entropy approach to information extraction from semi-structured and free text. In Proceedings of the Eighteenth National Conference on Artificial Intelligence, 768-791, 2002.
[5] Evans, D. K., Klavans, J. L., and McKeown, K. R. Columbia Newsblaster: multilingual news summarization on the Web. In Proceedings of the Human Language Technology Conference / North American Chapter of the Association for Computational Linguistics Annual Meeting, 1-4, 2004.
[6] Ghahramani, Z. and Jordan, M. I. Factorial hidden Markov models. Machine Learning, 29:245-273, 1997.
[7] Gheel, J. and Anderson, T. Data and metadata for finding and reminding. In Proceedings of the 1999 International Conference on Information Visualization, 446-451, 1999.
[8] Giles, C. L., Petinot, Y., Teregowda, P. B., Han, H., Lawrence, S., Rangaswamy, A., and Pal, N. eBizSearch: a niche search engine for e-Business. In Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 413-414, 2003.
[9] Giuffrida, G., Shek, E. C., and Yang, J. Knowledge-based metadata extraction from PostScript files. In Proceedings of the Fifth ACM Conference on Digital Libraries, 77-84, 2000.
[10] Han, H., Giles, C. L., Manavoglu, E., Zha, H., Zhang, Z., and Fox, E. A. Automatic document metadata extraction using support vector machines. In Proceedings of the Third ACM/IEEE-CS Joint Conference on Digital Libraries, 37-48, 2003.
[11] Kobayashi, M. and Takeda, K. Information retrieval on the Web. ACM Computing Surveys, 32:144-173, 2000.
[12] Lafferty, J., McCallum, A., and Pereira, F. Conditional random fields: probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning, 282-289, 2001.
[13] Li, Y., Zaragoza, H., Herbrich, R., Shawe-Taylor, J., and Kandola, J. S. The perceptron algorithm with uneven margins. In Proceedings of the Nineteenth International Conference on Machine Learning, 379-386, 2002.
[14] Liddy, E. D., Sutton, S., Allen, E., Harwell, S., Corieri, S., Yilmazel, O., Ozgencil, N. E., Diekema, A., McCracken, N., and Silverstein, J. Automatic metadata generation & evaluation. In Proceedings of the 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 401-402, 2002.
[15] Littlefield, A. Effective enterprise information retrieval across new content formats. In Proceedings of the Seventh Search Engine Conference, http://www.infonortics.com/searchengines/sh02/02prog.html, 2002.
[16] Mao, S., Kim, J. W., and Thoma, G. R. A dynamic feature generation system for automated metadata extraction in preservation of digital materials. In Proceedings of the First International Workshop on Document Image Analysis for Libraries, 225-232, 2004.
[17] McCallum, A., Freitag, D., and Pereira, F. Maximum entropy Markov models for information extraction and segmentation. In Proceedings of the Seventeenth International Conference on Machine Learning, 591-598, 2000.
[18] Murphy, L. D. Digital document metadata in organizations: roles, analytical approaches, and future research directions. In Proceedings of the Thirty-First Annual Hawaii International Conference on System Sciences, 267-276, 1998.
[19] Pinto, D., McCallum, A., Wei, X., and Croft, W. B. Table extraction using conditional random fields. In Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 235-242, 2003.
[20] Ratnaparkhi, A. Unsupervised statistical models for prepositional phrase attachment. In Proceedings of the Seventeenth International Conference on Computational Linguistics, 1079-1085, 1998.
[21] Robertson, S., Zaragoza, H., and Taylor, M. Simple BM25 extension to multiple weighted fields. In Proceedings of the ACM Thirteenth Conference on Information and Knowledge Management, 42-49, 2004.
[22] Yi, J. and Sundaresan, N. Metadata based Web mining for relevance. In Proceedings of the 2000 International Symposium on Database Engineering & Applications, 113-121, 2000.
[23] Yilmazel, O., Finneran, C. M., and Liddy, E. D. MetaExtract: an NLP system to automatically assign metadata. In Proceedings of the 2004 Joint ACM/IEEE Conference on Digital Libraries, 241-242, 2004.
[24] Zhang, J. and Dimitroff, A. Internet search engines' response to metadata Dublin Core implementation. Journal of Information Science, 30:310-320, 2004.
[25] Zhang, L., Pan, Y., and Zhang, T. Recognising and using named entities: focused named entity recognition using machine learning. In Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 281-288, 2004.
[26] http://dublincore.org/groups/corporate/Seattle/
Automatic Extraction of Titles from General Documents using Machine Learning ABSTRACT In this paper, we propose a machine learning approach to title extraction from general documents. By general documents, we mean documents that can belong to any one of a number of specific genres, including presentations, book chapters, technical papers, brochures, reports, and letters. Previously, methods have been proposed mainly for title extraction from research papers. It has not been clear whether it could be possible to conduct automatic title extraction from general documents. As a case study, we consider extraction from Office including Word and PowerPoint. In our approach, we annotate titles in sample documents (for Word and PowerPoint respectively) and take them as training data, train machine learning models, and perform title extraction using the trained models. Our method is unique in that we mainly utilize formatting information such as font size as features in the models. It turns out that the use of formatting information can lead to quite accurate extraction from general documents. Precision and recall for title extraction from Word is 0.810 and 0.837 respectively, and precision and recall for title extraction from PowerPoint is 0.875 and 0.895 respectively in an experiment on intranet data. Other important new findings in this work include that we can train models in one domain and apply them to another domain, and more surprisingly we can even train models in one language and apply them to another language. Moreover, we can significantly improve search ranking results in document retrieval by using the extracted titles. 1. INTRODUCTION Metadata of documents is useful for many kinds of document processing such as search, browsing, and filtering. Ideally, metadata is defined by the authors of documents and is then used by various systems. However, people seldom define document metadata by themselves, even when they have convenient metadata definition tools [26]. Thus, how to automatically extract metadata from the bodies of documents turns out to be an important research issue. Methods for performing the task have been proposed. However, the focus was mainly on extraction from research papers. For instance, Han et al. [10] proposed a machine learning based method to conduct extraction from research papers. They formalized the problem as that of classification and employed Support Vector Machines as the classifier. They mainly used linguistic features in the model. In this paper, we consider metadata extraction from general documents. By general documents, we mean documents that may belong to any one of a number of specific genres. General documents are more widely available in digital libraries, intranets and the internet, and thus investigation on extraction from them is sorely needed. Research papers usually have well-formed styles and noticeable characteristics. In contrast, the styles of general documents can vary greatly. It has not been clarified whether a machine learning based approach can work well for this task. There are many types of metadata: title, author, date of creation, etc. . As a case study, we consider title extraction in this paper. General documents can be in many different file formats: Microsoft Office, PDF (PS), etc. . As a case study, we consider extraction from Office including Word and PowerPoint. We take a machine learning approach. 
We annotate titles in sample documents (for Word and PowerPoint respectively) and take them as training data to train several types of models, and perform title extraction using any one type of the trained models. In the models, we mainly utilize formatting information such as font size as features. We employ the following models: Maximum Entropy Model, Perceptron with Uneven Margins, Maximum Entropy Markov Model, and Voted Perceptron. In this paper, we also investigate the following three problems, which did not seem to have been examined previously. (1) Comparison between models: among the models above, which model performs best for title extraction; (2) Generality of model: whether it is possible to train a model on one domain and apply it to another domain, and whether it is possible to train a model in one language and apply it to another language; (3) Usefulness of extracted titles: whether extracted titles can improve document processing such as search. Experimental results indicate that our approach works well for title extraction from general documents. Our method can significantly outperform the baselines: one that always uses the first lines as titles and the other that always uses the lines in the largest font sizes as titles. Precision and recall for title extraction from Word are 0.810 and 0.837 respectively, and precision and recall for title extraction from PowerPoint are 0.875 and 0.895 respectively. It turns out that the use of format features is the key to successful title extraction. (1) We have observed that Perceptron based models perform better in terms of extraction accuracies. (2) We have empirically verified that the models trained with our approach are generic in the sense that they can be trained on one domain and applied to another, and they can be trained in one language and applied to another. (3) We have found that using the extracted titles we can significantly improve precision of document retrieval (by 10%). We conclude that we can indeed conduct reliable title extraction from general documents and use the extracted results to improve real applications. The rest of the paper is organized as follows. In section 2, we introduce related work, and in section 3, we explain the motivation and problem setting of our work. In section 4, we describe our method of title extraction, and in section 5, we describe our method of document retrieval using extracted titles. Section 6 gives our experimental results. We make concluding remarks in section 7. 2. RELATED WORK 2.1 Document Metadata Extraction Methods have been proposed for performing automatic metadata extraction from documents; however, the main focus was on extraction from research papers. The proposed methods fall into two categories: the rule based approach and the machine learning based approach. Giuffrida et al. [9], for instance, developed a rule-based system for automatically extracting metadata from research papers in Postscript. They used rules like "titles are usually located on the upper portions of the first pages and they are usually in the largest font sizes". Liddy et al. [14] and Yilmazel el al. [23] performed metadata extraction from educational materials using rule-based natural language processing technologies. Mao et al. [16] also conducted automatic metadata extraction from research papers using rules on formatting information. The rule-based approach can achieve high performance. However, it also has disadvantages. 
It is less adaptive and robust when compared with the machine learning approach. Han et al. [10], for instance, conducted metadata extraction with the machine learning approach. They viewed the problem as that of classifying the lines in a document into the categories of metadata and proposed using Support Vector Machines as the classifier. They mainly used linguistic information as features. They reported high extraction accuracy from research papers in terms of precision and recall. 2.2 Information Extraction Metadata extraction can be viewed as an application of information extraction, in which given a sequence of instances, we identify a subsequence that represents information in which we are interested. Hidden Markov Model [6], Maximum Entropy Model [1, 4], Maximum Entropy Markov Model [17], Support Vector Machines [3], Conditional Random Field [12], and Voted Perceptron [2] are widely used information extraction models. Information extraction has been applied, for instance, to part-ofspeech tagging [20], named entity recognition [25] and table extraction [19]. 2.3 Search Using Title Information Title information is useful for document retrieval. In the system Citeseer, for instance, Giles et al. managed to extract titles from research papers and make use of the extracted titles in metadata search of papers [8]. In web search, the title fields (i.e., file properties) and anchor texts of web pages (HTML documents) can be viewed as ` titles' of the pages [5]. Many search engines seem to utilize them for web page retrieval [7, 11, 18, 22]. Zhang et al., found that web pages with well-defined metadata are more easily retrieved than those without well-defined metadata [24]. To the best of our knowledge, no research has been conducted on using extracted titles from general documents (e.g., Office documents) for search of the documents. 3. MOTIVATION AND PROBLEM SETTING 4. TITLE EXTRACTION METHOD 4.1 Outline 4.2 Models 4.3 Features 4.3.1 Format Features 4.3.2 Linguistic Features 5. DOCUMENT RETRIEVAL METHOD 6. EXPERIMENTAL RESULTS 6.1 Data Sets and Evaluation Measures 7. Distributions of document genres. 6.2 Baselines 6.3 Accuracy of Titles in File Properties 6.4 Comparison with Baselines 6.5 Comparison between Models 6.6 Domain Adaptation 6.7 Language Adaptation 6.8 Search with Extracted Titles 7. CONCLUSION In this paper, we have investigated the problem of automatically extracting titles from general documents. We have tried using a machine learning approach to address the problem. Previous work showed that the machine learning approach can work well for metadata extraction from research papers. In this paper, we showed that the approach can work for extraction from general documents as well. Our experimental results indicated that the machine learning approach can work significantly better than the baselines in title extraction from Office documents. Previous work on metadata extraction mainly used linguistic features in documents, while we mainly used formatting information. It appeared that using formatting information is a key for successfully conducting title extraction from general documents. We tried different machine learning models including Perceptron, Maximum Entropy, Maximum Entropy Markov Model, and Voted Perceptron. We found that the performance of the Perceptorn models was the best. We applied models constructed in one domain to another domain and applied models trained in one language to another language. 
We found that the accuracies did not drop substantially across different domains and across different languages, indicating that the models were generic. We also attempted to use the extracted titles in document retrieval. We observed a significant improvement in document ranking performance for search when using extracted title information. None of the above investigations were conducted in previous work, and through our investigations we verified the generality and the significance of the title extraction approach.
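To make the approach above concrete, the sketch below trains a plain perceptron that labels document lines as title or non-title from a handful of formatting features such as font size and boldness. It is a minimal illustration of the general idea only; the feature set, training loop and toy data are assumptions and do not reproduce the paper's actual models (Perceptron with uneven margins, MEMM, Voted Perceptron) or feature definitions.

# Illustrative sketch: a plain perceptron labeling document lines as
# title / non-title from formatting features, in the spirit of the approach
# described above. Feature choice and training loop are assumptions.

def features(line):
    """Map one line (a dict of formatting attributes) to a feature vector."""
    return [
        1.0,                                        # bias
        line["font_size"] / 24.0,                   # normalized font size
        1.0 if line["bold"] else 0.0,               # boldness
        1.0 if line["line_index"] == 0 else 0.0,    # first line of the document
        min(len(line["text"].split()), 20) / 20.0,  # length (titles tend to be short)
    ]

def train_perceptron(lines, labels, epochs=10, lr=1.0):
    w = [0.0] * 5
    for _ in range(epochs):
        for line, y in zip(lines, labels):          # y is +1 (title) or -1
            x = features(line)
            score = sum(wi * xi for wi, xi in zip(w, x))
            if y * score <= 0:                      # mistake-driven update
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
    return w

def is_title(w, line):
    return sum(wi * xi for wi, xi in zip(w, features(line))) > 0

# Tiny toy training set: the large, bold first line is the title.
lines = [
    {"text": "Annual Report 2004", "font_size": 22, "bold": True, "line_index": 0},
    {"text": "This document summarizes the results of ...", "font_size": 11,
     "bold": False, "line_index": 1},
]
w = train_perceptron(lines, labels=[+1, -1])
print([is_title(w, l) for l in lines])              # [True, False]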
Automatic Extraction of Titles from General Documents using Machine Learning ABSTRACT In this paper, we propose a machine learning approach to title extraction from general documents. By general documents, we mean documents that can belong to any one of a number of specific genres, including presentations, book chapters, technical papers, brochures, reports, and letters. Previously, methods have been proposed mainly for title extraction from research papers. It has not been clear whether it could be possible to conduct automatic title extraction from general documents. As a case study, we consider extraction from Office including Word and PowerPoint. In our approach, we annotate titles in sample documents (for Word and PowerPoint respectively) and take them as training data, train machine learning models, and perform title extraction using the trained models. Our method is unique in that we mainly utilize formatting information such as font size as features in the models. It turns out that the use of formatting information can lead to quite accurate extraction from general documents. Precision and recall for title extraction from Word is 0.810 and 0.837 respectively, and precision and recall for title extraction from PowerPoint is 0.875 and 0.895 respectively in an experiment on intranet data. Other important new findings in this work include that we can train models in one domain and apply them to another domain, and more surprisingly we can even train models in one language and apply them to another language. Moreover, we can significantly improve search ranking results in document retrieval by using the extracted titles. 1. INTRODUCTION Metadata of documents is useful for many kinds of document processing such as search, browsing, and filtering. Ideally, metadata is defined by the authors of documents and is then used by various systems. However, people seldom define document metadata by themselves, even when they have convenient metadata definition tools [26]. Thus, how to automatically extract metadata from the bodies of documents turns out to be an important research issue. Methods for performing the task have been proposed. However, the focus was mainly on extraction from research papers. For instance, Han et al. [10] proposed a machine learning based method to conduct extraction from research papers. They mainly used linguistic features in the model. In this paper, we consider metadata extraction from general documents. By general documents, we mean documents that may belong to any one of a number of specific genres. General documents are more widely available in digital libraries, intranets and the internet, and thus investigation on extraction from them is sorely needed. Research papers usually have well-formed styles and noticeable characteristics. In contrast, the styles of general documents can vary greatly. It has not been clarified whether a machine learning based approach can work well for this task. There are many types of metadata: title, author, date of creation, etc. . As a case study, we consider title extraction in this paper. General documents can be in many different file formats: Microsoft Office, PDF (PS), etc. . As a case study, we consider extraction from Office including Word and PowerPoint. We take a machine learning approach. We annotate titles in sample documents (for Word and PowerPoint respectively) and take them as training data to train several types of models, and perform title extraction using any one type of the trained models. 
In the models, we mainly utilize formatting information such as font size as features. Experimental results indicate that our approach works well for title extraction from general documents. Precision and recall for title extraction from Word are 0.810 and 0.837 respectively, and precision and recall for title extraction from PowerPoint are 0.875 and 0.895 respectively. It turns out that the use of format features is the key to successful title extraction. (1) We have observed that Perceptron based models perform better in terms of extraction accuracies. (3) We have found that using the extracted titles we can significantly improve precision of document retrieval (by 10%). We conclude that we can indeed conduct reliable title extraction from general documents and use the extracted results to improve real applications. The rest of the paper is organized as follows. In section 4, we describe our method of title extraction, and in section 5, we describe our method of document retrieval using extracted titles. Section 6 gives our experimental results. We make concluding remarks in section 7. 2. RELATED WORK 2.1 Document Metadata Extraction Methods have been proposed for performing automatic metadata extraction from documents; however, the main focus was on extraction from research papers. The proposed methods fall into two categories: the rule based approach and the machine learning based approach. Giuffrida et al. [9], for instance, developed a rule-based system for automatically extracting metadata from research papers in Postscript. They used rules like "titles are usually located on the upper portions of the first pages and they are usually in the largest font sizes". Liddy et al. [14] and Yilmazel el al. [23] performed metadata extraction from educational materials using rule-based natural language processing technologies. Mao et al. [16] also conducted automatic metadata extraction from research papers using rules on formatting information. The rule-based approach can achieve high performance. However, it also has disadvantages. It is less adaptive and robust when compared with the machine learning approach. Han et al. [10], for instance, conducted metadata extraction with the machine learning approach. They viewed the problem as that of classifying the lines in a document into the categories of metadata and proposed using Support Vector Machines as the classifier. They mainly used linguistic information as features. They reported high extraction accuracy from research papers in terms of precision and recall. 2.2 Information Extraction Metadata extraction can be viewed as an application of information extraction, in which given a sequence of instances, we identify a subsequence that represents information in which we are interested. Information extraction has been applied, for instance, to part-ofspeech tagging [20], named entity recognition [25] and table extraction [19]. 2.3 Search Using Title Information Title information is useful for document retrieval. In the system Citeseer, for instance, Giles et al. managed to extract titles from research papers and make use of the extracted titles in metadata search of papers [8]. To the best of our knowledge, no research has been conducted on using extracted titles from general documents (e.g., Office documents) for search of the documents. 7. CONCLUSION In this paper, we have investigated the problem of automatically extracting titles from general documents. We have tried using a machine learning approach to address the problem. 
Previous work showed that the machine learning approach can work well for metadata extraction from research papers. In this paper, we showed that the approach can work for extraction from general documents as well. Our experimental results indicated that the machine learning approach can work significantly better than the baselines in title extraction from Office documents. Previous work on metadata extraction mainly used linguistic features in documents, while we mainly used formatting information. It appeared that using formatting information is key to successfully conducting title extraction from general documents. We tried different machine learning models including Perceptron, Maximum Entropy, Maximum Entropy Markov Model, and Voted Perceptron. We found that the performance of the Perceptron models was the best. We applied models constructed in one domain to another domain and applied models trained in one language to another language. We also attempted to use the extracted titles in document retrieval. We observed a significant improvement in document ranking performance for search when using extracted title information. None of the above investigations were conducted in previous work, and through our investigations we verified the generality and the significance of the title extraction approach.
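The Markovian models referred to above label the whole sequence of lines jointly rather than one line at a time. The following sketch shows generic Viterbi decoding over two states (TITLE, OTHER), which is the decoding step such sequence models share; the emission and transition scores are toy values chosen only for illustration and are not taken from the paper.

# Generic Viterbi decoding over a sequence of document lines with two states
# (TITLE, OTHER), sketching how a Markovian model (e.g. an MEMM) labels lines
# jointly. Emission and transition scores here are toy log-scores; a real
# model would compute them from features of each line.

import math

STATES = ["TITLE", "OTHER"]

def viterbi(emission_scores, transition_scores):
    """emission_scores: list of {state: log-score} per line.
    transition_scores: {(prev_state, state): log-score}."""
    n = len(emission_scores)
    best = [{s: emission_scores[0][s] for s in STATES}]
    back = [{}]
    for t in range(1, n):
        best.append({})
        back.append({})
        for s in STATES:
            prev = max(STATES, key=lambda p: best[t - 1][p] + transition_scores[(p, s)])
            best[t][s] = best[t - 1][prev] + transition_scores[(prev, s)] + emission_scores[t][s]
            back[t][s] = prev
    last = max(STATES, key=lambda s: best[n - 1][s])    # trace back the best path
    path = [last]
    for t in range(n - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

# Three lines: the first looks like a title, the rest like body text.
emissions = [
    {"TITLE": math.log(0.8), "OTHER": math.log(0.2)},
    {"TITLE": math.log(0.3), "OTHER": math.log(0.7)},
    {"TITLE": math.log(0.1), "OTHER": math.log(0.9)},
]
transitions = {("TITLE", "TITLE"): math.log(0.3), ("TITLE", "OTHER"): math.log(0.7),
               ("OTHER", "TITLE"): math.log(0.1), ("OTHER", "OTHER"): math.log(0.9)}
print(viterbi(emissions, transitions))  # ['TITLE', 'OTHER', 'OTHER']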
H-63
Location based Indexing Scheme for DAYS
Data dissemination through wireless channels for broadcasting information to consumers is becoming quite common. Many dissemination schemes have been proposed, but most of them push data to wireless channels for general consumption. Push based broadcast [1] is essentially asymmetric, i.e., the volume of data flowing from the server to the users is much higher than the volume flowing from the users back to the server. A push based scheme requires some indexing which indicates when the data will be broadcast and its position in the broadcast. Access latency and tuning time are the two main parameters which may be used to evaluate an indexing scheme. Two of the important indexing schemes proposed earlier were the tree based and the exponential indexing schemes. None of these schemes was able to address the requirements of location dependent data (LDD), which is a highly desirable feature of data dissemination. In this paper, we discuss the broadcast of LDD in our project DAta in Your Space (DAYS), and propose a scheme for indexing LDD. We argue that this scheme, when applied to LDD, significantly improves performance in terms of tuning time over the above mentioned schemes. We prove our argument with the help of simulation results.
[ "index scheme", "index scheme", "index", "wireless channel", "locat depend data", "ldd", "wireless data dissemin", "locat base servic", "mobil user", "data broadcast system", "tree structur", "dai", "pull base data access", "wireless broadcast data map", "wireless data broadcast", "data stage" ]
[ "P", "P", "P", "P", "P", "P", "R", "M", "M", "M", "M", "U", "M", "M", "R", "M" ]
Location based Indexing Scheme for DAYS Debopam Acharya and Vijay Kumar 1 Computer Science and Informatics University of Missouri-Kansas City Kansas City, MO 64110 dargc(kumarv)@umkc. edu ABSTRACT Data dissemination through wireless channels for broadcasting information to consumers is becoming quite common. Many dissemination schemes have been proposed but most of them push data to wireless channels for general consumption. Push based broadcast [1] is essentially asymmetric, i.e., the volume of data being higher from the server to the users than from the users back to the server. Push based scheme requires some indexing which indicates when the data will be broadcast and its position in the broadcast. Access latency and tuning time are the two main parameters which may be used to evaluate an indexing scheme. Two of the important indexing schemes proposed earlier were tree based and the exponential indexing schemes. None of these schemes were able to address the requirements of location dependent data (LDD) which is highly desirable feature of data dissemination. In this paper, we discuss the broadcast of LDD in our project DAta in Your Space (DAYS), and propose a scheme for indexing LDD. We argue that this scheme, when applied to LDD, significantly improves performance in terms of tuning time over the above mentioned schemes. We prove our argument with the help of simulation results. Categories and Subject Descriptors H.3.1 [Information Systems]: Information Storage and Retrieval - content analysis and indexing; H.3.3 [Information Systems]: Information Storage and Retrieval - information search and retrieval. General Terms Algorithms, Performance, Experimentation 1. INTRODUCTION Wireless data dissemination is an economical and efficient way to make desired data available to a large number of mobile or static users. The mode of data transfer is essentially asymmetric, that is, the capacity of the transfer of data (downstream communication) from the server to the client (mobile user) is significantly larger than the client or mobile user to the server (upstream communication). The effectiveness of a data dissemination system is judged by its ability to provide user the required data at anywhere and at anytime. One of the best ways to accomplish this is through the dissemination of highly personalized Location Based Services (LBS) which allows users to access personalized location dependent data. An example would be someone using their mobile device to search for a vegetarian restaurant. The LBS application would interact with other location technology components or use the mobile user's input to determine the user's location and download the information about the restaurants in proximity to the user by tuning into the wireless channel which is disseminating LDD. We see a limited deployment of LBS by some service providers. But there are every indications that with time some of the complex technical problems such as uniform location framework, calculating and tracking locations in all types of places, positioning in various environments, innovative location applications, etc., will be resolved and LBS will become a common facility and will help to improve market productivity and customer comfort. In our project called DAYS, we use wireless data broadcast mechanism to push LDD to users and mobile users monitor and tune the channel to find and download the required data. 
A simple broadcast, however, is likely to cause significant performance degradation in the energy constrained mobile devices and a common solution to this problem is the use of efficient air indexing. The indexing approach stores control information which tells the user about the data location in the broadcast and how and when he could access it. A mobile user, thus, has some free time to go into the doze mode which conserves valuable power. It also allows the user to personalize his own mobile device by selectively tuning to the information of his choice. Access efficiency and energy conservation are the two issues which are significant for data broadcast systems. Access efficiency refers to the latency experienced when a request is initiated till the response is received. Energy conservation [7, 10] refers to the efficient use of the limited energy of the mobile device in accessing broadcast data. Two parameters that affect these are the tuning time and the access latency. Tuning time refers to the time during which the mobile unit (MU) remains in active state to tune the channel and download its required data. It can also be defined as the number of buckets tuned by the mobile device in active state to get its required data. Access latency may be defined as the time elapsed since a request has been issued till the response has been received. 1 This research was supported by a grant from NSF IIS-0209170. Several indexing schemes have been proposed in the past and the prominent among them are the tree based and the exponential indexing schemes [17]. The main disadvantages of the tree based schemes are that they are based on centralized tree structures. To start a search, the MU has to wait until it reaches the root of the next broadcast tree. This significantly affects the tuning time of the mobile unit. The exponential schemes facilitate index replication by sharing links in different search trees. For broadcasts with large number of pages, the exponential scheme has been shown to perform similarly as the tree based schemes in terms of access latency. Also, the average length of broadcast increases due to the index replication and this may cause significant increase in the access latency. None of the above indexing schemes is equally effective in broadcasting location dependent data. In addition to providing low latency, they lack properties which are used to address LDD issues. We propose an indexing scheme in DAYS which takes care of some these problems. We show with simulation results that our scheme outperforms some of the earlier indexing schemes for broadcasting LDD in terms of tuning time. The rest of the paper is presented as follows. In section 2, we discuss previous work related to indexing of broadcast data. Section 3 describes our DAYS architecture. Location dependent data, its generation and subsequent broadcast is presented in section 4. Section 5 discusses our indexing scheme in detail. Simulation of our scheme and its performance evaluation is presented in section 6. Section 7 concludes the paper and mentions future related work. 2. PREVIOUS WORK Several disk-based indexing techniques have been used for air indexing. Imielinski et al. [5, 6] applied the B+ index tree, where the leaf nodes store the arrival times of the data items. The distributed indexing method was proposed to efficiently replicate and distribute the index tree in a broadcast. Specifically, the index tree is divided into a replicated part and a non replicated part. 
Each broadcast consists of the replicated part and the nonreplicated part that indexes the data items immediately following it. As such, each node in the non-replicated part appears only once in a broadcast and, hence, reduces the replication cost and access latency while achieving a good tuning time. Chen et al. [2] and Shivakumar et al. [8] considered unbalanced tree structures to optimize energy consumption for non-uniform data access. These structures minimize the average index search cost by reducing the number of index searches for hot data at the expense of spending more on cold data. Tan and Yu discussed data and index organization under skewed broadcast Hashing and signature methods have also been suggested for wireless broadcast that supports equality queries [9]. A flexible indexing method was proposed in [5]. The flexible index first sorts the data items in ascending (or descending) order of the search key values and then divides them into p segments. The first bucket in each data segment contains a control index, which is a binary index mapping a given key value to the segment containing that key, and a local index, which is an m-entry index mapping a given key value to the buckets within the current segment. By tuning the parameters of p and m, mobile clients can achieve either a good tuning time or good access latency. Another indexing technique proposed is the exponential indexing scheme [17]. In this scheme, a parameterized index, called the exponential index is used to optimize the access latency or the tuning time. It facilitates index replication by linking different search trees. All of the above mentioned schemes have been applied to data which are non related to each other. These non related data may be clustered or non clustered. However, none of them has specifically addressed the requirements of LDD. Location dependent data are data which are associated with a location. Presently there are several applications that deal with LDD [13, 16]. Almost all of them depict LDD with the help of hierarchical structures [3, 4]. This is based on the containment property of location dependent data. The Containment property helps determining relative position of an object by defining or identifying locations that contains those objects. The subordinate locations are hierarchically related to each other. Thus, Containment property limits the range of availability or operation of a service. We use this containment property in our indexing scheme to index LDD. 3. DAYS ARCHITECTURE DAYS has been conceptualized to disseminate topical and nontopical data to users in a local broadcast space and to accept queries from individual users globally. Topical data, for example, weather information, traffic information, stock information, etc., constantly changes over time. Non topical data such as hotel, restaurant, real estate prices, etc., do not change so often. Thus, we envision the presence of two types of data distribution: In the first case, server pushes data to local users through wireless channels. The other case deals with the server sending results of user queries through downlink wireless channels. Technically, we see the presence of two types of queues in the pull based data access. One is a heavily loaded queue containing globally uploaded queries. The other is a comparatively lightly loaded queue consisting of locally uploaded queries. 
The DAYS architecture [12], as shown in Figure 1, consists of a Data Server, a Broadcast Scheduler, a DAYS Coordinator, a network of LEO satellites for global data delivery, and a Local broadcast space. Data is pushed into the local broadcast space so that users may tune into the wireless channels to access the data. The local broadcast space consists of a broadcast tower, mobile units and a network of data staging machines called the surrogates. Data staging in surrogates has earlier been investigated as a successful technique [12, 15] to cache users' related data. We believe that data staging can be used to drastically reduce the latency time for both the local broadcast data as well as global responses. Query requests in the surrogates may subsequently be used to generate the popularity patterns which ultimately decide the broadcast schedule [12]. Figure 1. DAYS architecture. Figure 2. Location structure of Starbucks, Plaza. 4. LOCATION DEPENDENT DATA (LDD) We argue that incorporating location information in wireless data broadcast can significantly decrease the access latency. This property becomes highly useful for a mobile unit, which has limited storage and processing capability. There are a variety of applications to obtain information about traffic, restaurant and hotel booking, fast food, gas stations, post offices, grocery stores, etc. If these applications are coupled with location information, then the search will be fast and highly cost effective. An important property of locations is Containment, which helps to determine the relative location of an object with respect to its parent that contains the object. Thus, Containment limits the range of availability of a data item. We use this property in our indexing scheme. The database contains the broadcast contents, which are converted into LDD [14] by associating them with respective locations so that they can be broadcast in a clustered manner. The clustering of LDD helps the user to locate information efficiently and supports the containment property. We present an example to justify our proposition. Example: Suppose a user issues the query "Starbucks Coffee in Plaza please" to access information about the Plaza branch of Starbucks Coffee in Kansas City. In a location independent setup, the system will list all Starbucks coffee shops in the Kansas City area. It is obvious that such responses will increase access latency and are not desirable. These can be managed efficiently if the server has location dependent data, i.e., a mapping between the data of a Starbucks coffee shop and its physical location. Also, for a query covering a range of Starbucks locations, a single query requesting locations for the entire region of Kansas City, as shown in Figure 2, will suffice. This will save an enormous amount of bandwidth by decreasing the number of messages, and at the same time will help prevent a scalability bottleneck in highly populated areas. 4.1 Mapping Function for LDD The example justifies the need for a mapping function to process location dependent queries. This will be especially important for pull based queries across the globe, for which the reply could be composed for different parts of the world. 
The mapping function is necessary to construct the broadcast schedule. We define Global Property Set (GPS) [11], Information Content (IC) set, and Location Hierarchy (LH) set where IC ⊆ GPS and LH ⊆ GPS to develop a mapping function. LH = {l1, l2, l3...,lk} where li represent locations in the location tree and IC = {ic1, ic2, ic3,...,icn} where ici represent information type. For example, if we have traffic, weather, and stock information are in broadcast then IC = {ictraffic, icweather, and icstock}. The mapping scheme must be able to identify and select an IC member and a LH node for (a) correct association, (b) granularity match, (c) and termination condition. For example, weather ∈ IC could be associated with a country or a state or a city or a town of LH. The granularity match between the weather and a LH node is as per user requirement. Thus, with a coarse granularity weather information is associated with a country to get country``s weather and with town in a finer granularity. If a town is the finest granularity, then it defines the terminal condition for association between IC and LH for weather. This means that a user cannot get weather information about subdivision of a town. In reality weather of a subdivision does not make any sense. We develop a simple heuristic mapping approach scheme based on user requirement. Let IC = {m1, m2,m3 . ,..., mk}, where mi represent its element and let LH = {n1, n2, n3, ..., nl}, where ni represents LH``s member. We define GPS for IC (GPSIC) ⊆ GPS and for LH (GPSLH) ⊆ GPS as GPSIC = {P1, P2,..., Pn}, where P1, P2, P3,..., Pn are properties of its members and GPSLH = {Q1, Q2,..., Qm} where Q1, Q2,..., Qm are properties of its members. The properties of a particular member of IC are a subset of GPSIC. It is generally true that (property set (mi∈ IC) ∪ property set (mj∈ IC)) ≠ ∅, however, there may be cases where the intersection is not null. For example, stock ∈ IC and movie ∈ IC rating do not have any property in common. We assume that any two or more members of IC have at least one common geographical property (i.e. location) because DAYS broadcasts information about those categories, which are closely tied with a location. For example, stock of a company is related to a country, weather is related to a city or state, etc.. We define the property subset of mi∈ IC as PSm i ∀ mi ∈ IC and PSm i = {P1, P2, ..., Pr} where r ≤ n. ∀ Pr {Pr ∈ PSm i → Pr∈ GPSIC} which implies that ∀ i, PSm i ⊆ GPSIC. The geographical properties of this set are indicative of whether mi ∈ IC can be mapped to only a single granularity level (i.e. a single location) in LH or a multiple granularity levels (i.e. more than one nodes in 19 the hierarchy) in LH. How many and which granularity levels should a mi map to, depends upon the level at which the service provider wants to provide information about the mi in question. Similarly we define a property subset of LH members as PSn j ∀ nj ∈ LH which can be written as PSn j ={Q1, Q2, Q3, ..., Qs} where s ≤ m. In addition, ∀ Qs {Qs∈ PSn j → Qs∈ GPSLH} which implies that ∀j, PSn j ⊆ GPSLH. The process of mapping from IC to LH is then identifying for some mx∈ IC one or more ny∈ LH such that PSmx ∩ PSnv ≠ φ. This means that when mx maps to ny and all children of ny if mx can map to multiple granularity levels or mx maps only to ny if mx can map to a single granularity level. We assume that new members can join and old member can leave IC or LH any time. 
The deletion of members from the IC space is simple, but the addition of members to the IC space is more restrictive. If we want to add a new member to the IC space, then we first define a property set for the new member, PSm_new = {P1, P2, P3, ..., Pt}, and add it to the IC space only if the condition ∀ Pw {Pw ∈ PSm_new → Pw ∈ GPSIC} is satisfied. This scheme has the additional benefit of allowing the information service providers to control what kind of information they wish to provide to the users. We present the following example to illustrate the mapping concept.
IC = {Traffic, Stock, Restaurant, Weather, Important history dates, Road conditions}
LH = {Country, State, City, Zip-code, Major-roads}
GPSIC = {Surface-mobility, Roads, High, Low, Italian-food, StateName, Temp, CityName, Seat-availability, Zip, Traffic-jams, Stock-price, CountryName, MajorRoadName, Wars, Discoveries, World}
GPSLH = {Country, CountrySize, StateName, CityName, Zip, MajorRoadName}
PS(ICStock) = {Stock-price, CountryName, High, Low}
PS(ICTraffic) = {Surface-mobility, Roads, High, Low, Traffic-jams, CityName}
PS(ICImportant dates in history) = {World, Wars, Discoveries}
PS(ICRoad conditions) = {Precipitation, StateName, CityName}
PS(ICRestaurant) = {Italian-food, Zip}
PS(ICWeather) = {StateName, CityName, Precipitation, Temperature}
PS(LHCountry) = {CountryName, CountrySize}
PS(LHState) = {StateName, State size}
PS(LHCity) = {CityName, City size}
PS(LHZipcode) = {ZipCodeNum}
PS(LHMajor roads) = {MajorRoadName}
Now, only PS(ICStock) ∩ PS(LHCountry) ≠ φ. In addition, PS(ICStock) indicates that Stock can map to only a single location, Country. When we consider the member Traffic of the IC space, only PS(ICTraffic) ∩ PS(LHCity) ≠ φ. As PS(ICTraffic) indicates that Traffic can map to only a single location, it maps only to City and none of its children. Note that, unlike Stock, mapping Traffic to Major roads, which is a child of City, would be meaningful; however, service providers have the right to control the granularity levels at which they want to provide information about a member of the IC space. PS(ICRoad conditions) ∩ PS(LHState) ≠ φ and PS(ICRoad conditions) ∩ PS(LHCity) ≠ φ, so Road conditions maps to State as well as City. As PS(ICRoad conditions) indicates that Road conditions can map to multiple granularity levels, Road conditions will also map to Zip-code and Major roads, which are the children of State and City. Similarly, Restaurant maps only to Zip-code, and Weather maps to State, City and their children, Major roads and Zip-code.
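A compact sketch of the mapping just illustrated is given below: an IC member is matched to the location hierarchy nodes whose property sets intersect its own, and a member that can map to multiple granularity levels is additionally mapped to the children of the matched nodes. The hierarchy, the property sets and the multi-level flags are abridged assumptions based on the example above, not the DAYS implementation.

# Sketch of the IC -> LH mapping described above: an information type maps to a
# location-hierarchy node when their property sets intersect; a type that can
# map to multiple granularity levels is also mapped to the node's children.
# The location tree and property sets below are abridged from the example.

LH_CHILDREN = {                      # hypothetical containment hierarchy
    "Country": ["State"],
    "State": ["City"],
    "City": ["Zip-code", "Major-roads"],
    "Zip-code": [], "Major-roads": [],
}

LH_PROPS = {
    "Country": {"CountryName"}, "State": {"StateName"}, "City": {"CityName"},
    "Zip-code": {"Zip"}, "Major-roads": {"MajorRoadName"},
}

IC_PROPS = {                         # property sets of some IC members (abridged)
    "Stock": {"Stock-price", "CountryName", "High", "Low"},
    "Traffic": {"Surface-mobility", "Roads", "Traffic-jams", "CityName"},
    "Road conditions": {"Precipitation", "StateName", "CityName"},
}

MULTI_LEVEL = {"Stock": False, "Traffic": False, "Road conditions": True}

def descendants(node):
    out = []
    for child in LH_CHILDREN[node]:
        out.append(child)
        out.extend(descendants(child))
    return out

def map_ic_to_lh(ic_member):
    matched = [n for n in LH_PROPS if IC_PROPS[ic_member] & LH_PROPS[n]]
    result = set(matched)
    if MULTI_LEVEL[ic_member]:       # also map to the children of the matches
        for n in matched:
            result.update(descendants(n))
    return sorted(result)

for m in IC_PROPS:
    print(m, "->", map_ic_to_lh(m))
# Stock -> ['Country']; Traffic -> ['City'];
# Road conditions -> ['City', 'Major-roads', 'State', 'Zip-code']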
5. LOCATION BASED INDEXING SCHEME This section discusses our location based indexing scheme (LBIS). The scheme is designed to conform to the LDD broadcast in our project DAYS. As discussed earlier, we use the containment property of LDD in the indexing scheme. This significantly limits the search for the required data to a particular portion of the broadcast. Thus, we argue that the scheme provides bounded tuning time. We now describe the architecture of our indexing scheme. Our scheme contains separate data buckets and index buckets. The index buckets are of two types. The first type is called the Major index. The Major index provides information about the types of data broadcasted. For example, if we intend to broadcast information like Entertainment, Weather, Traffic, etc., then the major index points to these major types of information and/or their main subtypes, the number of main subtypes varying from one information type to another. This strictly limits the number of accesses to a Major index. The Major index never points to the original data. It points to sub indexes called Minor indexes. The minor indexes are the indexes which actually point to the original data. We call these minor index pointers Location Pointers, as they point to data that are associated with a location. Thus, our search for a data item includes access to a major index and some minor indexes, the number of minor indexes varying with the type of information. Our indexing scheme therefore takes into account the hierarchical nature of the LDD and the Containment property, and requires our broadcast schedule to be clustered based on data type and location. The structure of the location hierarchy requires the use of different types of index at different levels. The structure and positions of the indexes strictly depend on the location hierarchy as described in our mapping scheme earlier. We illustrate the implementation of our scheme with an example; the rules for framing the index are mentioned subsequently. Figure 3. Location mapped information for broadcast. Figure 4. Data coupled with location based index. Example: Let us suppose that our broadcast content contains ICEntertainment and ICWeather, which are represented as shown in Figure 3. Ai represents an area of the city and Ri represents a road in a certain area. The leaves of the Weather structure represent four cities. The index structure is given in Figure 4, which shows the positions of the major and minor indexes and the data in the broadcast schedule. We propose the following rules for the creation of the air indexed broadcast schedule:
• The major index and the minor index are created.
• The major index contains the position and range of the different types of data items (Weather and Entertainment, Figure 3) and their categories. The sub categories of Entertainment, Movie and Restaurant, are also in the index. Thus, the major index contains Entertainment (E), Entertainment-Movie (EM), Entertainment-Restaurant (ER), and Weather (W). The tuple (S, L) represents the starting position (S) of the data item, and L represents the range of the item in terms of the number of data buckets.
• The minor index contains the variables A, R and a pointer Next. In our example (Figure 3), road R represents the first node of area A. The minor index is used to point to the actual data buckets present at the lowest levels of the hierarchy. In contrast, the major index points to a broader range of locations and so contains information about the main and sub categories of data.
• Index information is not incorporated in the data buckets. Index buckets are separate and contain only the control information.
• The number of major index buckets is m = #(IC), where IC = {ic1, ic2, ic3, ..., icn}, ici represents an information type, and # represents the cardinality of the Information Content set IC. In this example, IC = {icMovie, icWeather, icRestaurant}, so #(IC) = 3 and the number of major index buckets is 3.
• The mechanism to resolve a query is present in the Java based coordinator in the MU. For example, if a query Q is presented as Q(Entertainment, Movie, Road_1), then the resultant search will be for the EM information in the major index; we say Q → EM.
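As a rough illustration of the first two rules above, the sketch below derives the (S, L) entries of a major index from a clustered list of data buckets, registering each sub type (e.g. EM, ER) together with its enclosing type (E). The bucket layout and encodings are simplified assumptions; the resulting entries match the example values E (1, 8), EM (1, 4), ER (5, 4) and W (9, 4).

# Simplified sketch of building the major-index entries described in the rules
# above: for a clustered broadcast, each data type (and sub type) gets a tuple
# (S, L) giving its starting bucket and its length in buckets. The bucket list
# below is an illustrative layout; it assumes each type occupies a contiguous run.

# Clustered data buckets: (position, type), with sub types written as "E.M", "E.R".
buckets = [
    (1, "E.M"), (2, "E.M"), (3, "E.M"), (4, "E.M"),   # Entertainment-Movie
    (5, "E.R"), (6, "E.R"), (7, "E.R"), (8, "E.R"),   # Entertainment-Restaurant
    (9, "W"), (10, "W"), (11, "W"), (12, "W"),        # Weather
]

def major_index(buckets):
    """Return {type: (S, L)} for every sub type and its enclosing super type."""
    index = {}
    for pos, typ in buckets:
        parts = typ.split(".")
        # Register the full sub type and every prefix (e.g. "E" for "E.M").
        for i in range(1, len(parts) + 1):
            key = ".".join(parts[:i])
            if key not in index:
                index[key] = (pos, 1)
            else:
                start, _ = index[key]
                index[key] = (start, pos - start + 1)
    return index

print(major_index(buckets))
# {'E': (1, 8), 'E.M': (1, 4), 'E.R': (5, 4), 'W': (9, 4)}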
Our proposed index works as follows. Let us suppose that an MU issues a query which is represented by the Java coordinator present in the MU as "Restaurant information on Road 7". This is resolved by the coordinator as Q → ER, which means one has to search for the ER unit of index in the major index. Let us suppose that the MU logs into the channel at R2. The first index it receives is a minor index after R2. In this index, the value of the Next variable is 4, which means that the next major index is present after bucket 4. The MU may go into doze mode. It becomes active after bucket 4 and receives the major index. It searches for the ER information, which is the first entry in this index. It is now certain that the MU will get the position of the data bucket in the adjoining minor index. The second unit in the minor index depicts the position of the required data, R7. It tells that the data bucket is the first bucket in Area 4. The MU goes into doze mode again and becomes active after bucket 6. It gets the required data in the next bucket. We present the algorithm for searching the location based index below.
Algorithm 1. Location based Index Search in DAYS
1. Scan broadcast for the next index bucket, found = false
2. While (not found) do
3.   if bucket is Major Index then
4.     Find the Type & Tuple (S, L)
5.     if S is greater than 1, go into doze mode for S seconds
6.     end if
7.     Wake up at the Sth bucket and observe the Minor Index
8.   end if
9.   if bucket is Minor Index then
10.    if TypeRequested not equal to Typefound and (A, R)Requested not equal to (A, R)found then
11.      Go into doze mode till NEXT & repeat from step 3
12.    end if
13.    else find entry in Minor Index which points to data
14.    Compute time of arrival T of data bucket
15.    Go into doze mode till T
16.    Wake up at T and access data, found = true
17.    end else
18.  end if
19. end While
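The sketch below simulates the behavior of Algorithm 1 on a toy broadcast cycle: the client tunes a bucket, uses the control information in index buckets to doze past buckets it does not need, and counts how many buckets it had to read in active mode. The bucket encodings (relative offsets rather than absolute positions, a single tuple as the search key) are simplified assumptions rather than the exact DAYS bucket format.

# Compact simulation sketch of the search in Algorithm 1. The broadcast is a
# cyclic list of buckets; index buckets carry just enough control information
# for the client to doze between tuned buckets.

def search(broadcast, want, start):
    """Return (data, tuned) where tuned counts buckets read in active mode."""
    pos, tuned = start, 0
    while True:
        kind, payload = broadcast[pos % len(broadcast)]
        tuned += 1
        if kind == "major":
            pos += payload[want[0]]              # doze until the segment's minor index
        elif kind == "minor" and want in payload["entries"]:
            pos += payload["entries"][want]      # doze until the data bucket itself
            _, data = broadcast[pos % len(broadcast)]
            return data, tuned + 1
        elif kind == "minor":
            pos += payload["next"]               # doze until the next major index
        else:                                    # a data bucket: keep scanning
            pos += 1

# A toy cycle: [major][minor E][E data x2][minor W][W data x2], 7 buckets total.
broadcast = [
    ("major", {"E": 1, "W": 4}),                 # offsets to each minor index
    ("minor", {"next": 6, "entries": {("E", "R5"): 1, ("E", "R7"): 2}}),
    ("data", "Entertainment on R5"),
    ("data", "Entertainment on R7"),
    ("minor", {"next": 3, "entries": {("W", "KC"): 1, ("W", "SL"): 2}}),
    ("data", "Weather for KC"),
    ("data", "Weather for SL"),
]
print(search(broadcast, ("W", "KC"), start=0))   # ('Weather for KC', 3)
print(search(broadcast, ("E", "R7"), start=4))   # ('Entertainment on R7', 4)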
6. PERFORMANCE EVALUATION Conservation of energy is the main concern when we try to access data from a wireless broadcast. An efficient scheme should allow the mobile device to access its required data while staying active for a minimum amount of time; this saves a considerable amount of energy. Since items are distributed based on types and are mapped to suitable locations, we argue that our broadcast deals with clustered data types. The mobile unit has to access a larger major index and a relatively much smaller minor index to get information about the time of arrival of data. This is in contrast to the exponential scheme, where the indexes are of equal sizes. The example discussed and Algorithm 1 reveal that to access any data, we need to access the major index only once, followed by one or more accesses to the minor indexes. The number of minor index accesses depends on the number of internal locations. As the number of internal locations varies from item to item (for example, Weather is generally associated with a City, whereas Traffic is granulated up to the major and minor roads of a city), we argue that the structure of the location mapped information may be visualized as a forest, that is, a collection of general trees, the number of general trees depending on the types of information broadcasted and the depth of a tree depending on the granularity of the location information associated with the information. For our experiments, we assume the forest to be a collection of balanced M-ary trees. We further assume the M-ary trees to be full by assuming the presence of dummy nodes at different levels of a tree. Thus, if the number of data items (leaves) is d and m is the arity of the tree (each internal node having m children), then n = (m·d − 1)/(m − 1), where n is the number of vertices in the tree, and i = (d − 1)/(m − 1), where i is the number of internal vertices. The tuning time for a data item involves 1 unit of time required to access the major index plus the time required to access the data items present in the leaves of the tree. Thus, the tuning time with d data items is t = log_m(d) + 1, so the tuning time is bounded by O(log_m d). We compare our scheme with the distributed indexing and exponential schemes. We assume a flat broadcast and a number of pages varying from 5000 to 25000. The various simulation parameters are shown in Table 1.
Table 1. Simulation Parameters
P | Definition | Values
N | Number of data items | 5000 - 25000
m | Number of internal location nodes | 3, 4, 5, 6
B | Capacity of bucket without index (for exponential index) | 10, 64, 128, 256
i | Index base for exponential index | 2, 4, 6, 8
k | Index size for distributed tree | 8 bytes
Figures 5-8 show the relative tuning times of the three indexing algorithms, i.e., LBIS, the exponential scheme and the distributed tree scheme. Figure 5 shows the result for the number of internal location nodes m = 3. We can see that LBIS significantly outperforms both of the other schemes. The tuning time in LBIS ranges from approximately 6.8 to 8. This large tuning time is due to the fact that after reaching the lowest minor index, the MU may have to access a few buckets sequentially to get the required data bucket. We can see that the tuning time tends to become stable as the length of the broadcast increases. In Figure 6 we consider m = 4. Here we can see that the exponential and the distributed tree schemes perform almost similarly, though the former seems to perform slightly better as the broadcast length increases. A very interesting pattern is visible in Figure 7. For smaller broadcast sizes, LBIS seems to have a larger tuning time than the other two schemes, but as the length of the broadcast increases, it is clearly visible that LBIS outperforms the other two schemes. The distributed tree indexing shows behavior similar to LBIS. The tuning time in LBIS remains low because the algorithm allows the MU to skip some intermediate minor indexes. This allows the MU to move into lower levels directly after coming into active mode, thus saving valuable energy. This action is not possible in the distributed tree indexing, and hence we can observe that its tuning time is higher than that of the LBIS scheme, although it performs better than the exponential scheme. Figure 8, in contrast, shows that the tuning time in LBIS, though less than that of the other two schemes, tends to increase sharply as the broadcast length becomes greater than 15000 pages. This may be attributed both to the increase in the time required to scan the intermediate minor indexes and to the length of the broadcast. But we can observe that the slope of the LBIS curve is significantly smaller than that of the other two curves. The simulation results establish some facts about our location based indexing scheme. The scheme performs better than the other two schemes in terms of tuning time in most of the cases. As the length of the broadcast increases, the tuning time increases after a certain point as a result of the factors described above, but the scheme always performs better than the other two schemes. Due to the prescribed limit on the number of pages in the paper, we are unable to show more results, but the omitted results show a similar trend to the results depicted in Figures 5-8.
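The formulas above can be evaluated directly; the small sketch below prints the number of vertices, the number of internal vertices and the log_m(d) + 1 tuning time bound for values of d and m in the ranges used in the simulations. It only reproduces this arithmetic and makes no attempt to model the simulated schemes.

# Quick numeric check of the quantities above: for a full m-ary tree with d
# data items (leaves), the total number of vertices, the number of internal
# vertices, and the tuning time bound t = log_m(d) + 1. For arbitrary d the
# vertex counts are fractional estimates; they are integers only when d fits
# a full m-ary tree exactly.

import math

def tree_stats(d, m):
    n = (m * d - 1) / (m - 1)      # total vertices of the full m-ary tree
    i = (d - 1) / (m - 1)          # internal vertices
    t = math.log(d, m) + 1         # tuning time bound for one data item
    return n, i, t

for d in (5000, 15000, 25000):     # broadcast sizes used in the simulations
    for m in (3, 4, 5, 6):         # internal location node counts from Table 1
        n, i, t = tree_stats(d, m)
        print(f"d={d:>6} m={m}  vertices={n:>9.1f}  internal={i:>8.1f}  t={t:4.2f}")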
7. CONCLUSION AND FUTURE WORK In this paper we have presented a scheme for mapping wireless broadcast data to their locations. We have presented an example to show how the hierarchical structure of the location tree maps to the data to create LDD. We have presented a scheme called LBIS to index this LDD. We have used the containment property of LDD in the scheme, which limits the search to a narrow range of data in the broadcast, thus saving valuable energy in the device. The mapping of data to locations and the indexing scheme will be used in our DAYS project to create the push based architecture. LBIS has been compared with two other prominent indexing schemes, i.e., the distributed tree indexing scheme and the exponential indexing scheme. We showed in our simulations that the LBIS scheme has the lowest tuning time for broadcasts having a large number of pages, thus saving valuable battery power in the MU. In future work we will try to incorporate a pull based architecture in our DAYS project. Data from the server is available for access by global users. This may be done by putting a request to the source server. The query in this case is a global query. It is transferred from the user's source server to the destination server through the use of LEO satellites. We intend to use our LDD scheme and data staging architecture in the pull based architecture. We will show that the LDD scheme together with the data staging architecture significantly improves the latency for global as well as local queries. Figures 5-8. Average tuning time versus broadcast size (# buckets) for the distributed tree (Dist tree), exponential (Expo) and LBIS schemes. 8. REFERENCES [1] Acharya, S., Alonso, R., Franklin, M. and Zdonik, S. Broadcast disks: Data management for asymmetric communication environments. In Proceedings of ACM SIGMOD Conference on Management of Data, pages 199-210, San Jose, CA, May 1995. [2] Chen, M.S., Wu, K.L. and Yu, P.S. Optimizing index allocation for sequential data broadcasting in wireless mobile computing. IEEE Transactions on Knowledge and Data Engineering (TKDE), 15(1):161-173, January/February 2003. 
In Proceedings of the 8th Annual ACM International Conference on Mobile Computing and Networking (MobiCom``02), pages 160-171, Atlanta, GA, September 2002. [8] Shivakumar N. and Venkatasubramanian, S. Energy-efficient indexing for information dissemination in wireless systems. ACM/Baltzer Journal of Mobile Networks and Applications (MONET), 1(4):433-446, December 1996. [9] Tan K. L. and Yu, J. X. Energy efficient filtering of non uniform broadcast. In Proceedings of the 16th International Conference on Distributed Computing Systems (ICDCS``96), pages 520-527, Hong Kong, May 1996. [10] Viredaz, M. A., Brakmo, L. S. and Hamburgen, W. R. Energy management on handheld devices. ACM Queue, 1(7):44-52, October 2003. [11] Garg, N. Kumar, V., & Dunham, M.H. Information Mapping and Indexing in DAYS, 6th International Workshop on Mobility in Databases and Distributed Systems, in conjunction with the 14th International Conference on Database and Expert Systems Applications September 1-5, Prague, Czech Republic, 2003. [12] Acharya D., Kumar, V., & Dunham, M.H. InfoSpace: Hybrid and Adaptive Public Data Dissemination System for Ubiquitous Computing. Accepted for publication in the special issue of Pervasive Computing. Wiley Journal for Wireless Communications and Mobile Computing, 2004. [13] Acharya D., Kumar, V., & Prabhu, N. Discovering and using Web Services in M-Commerce, Proceedings for 5th VLDB Workshop on Technologies for E-Services, Toronto, Canada,2004. [14] Acharya D., Kumar, V. Indexing Location Dependent Data in broadcast environment. Accepted for publication, JDIM special issue on Distributed Data Management, 2005. [15] Flinn, J., Sinnamohideen, S., & Satyanarayan, M. Data Staging on Untrusted Surrogates, Intel Research, Pittsburg, Unpublished Report, 2003. [16] Seydim, A.Y., Dunham, M.H. & Kumar, V. Location dependent query processing, Proceedings of the 2nd ACM international workshop on Data engineering for wireless and mobile access, p.47-53, Santa Barbara, California, USA, 2001. [17] Xu, J., Lee, W.C., Tang., X. Exponential Index: A Parameterized Distributed Indexing Scheme for Data on Air. In Proceedings of the 2nd ACM/USENIX International Conference on Mobile Systems, Applications, and Services (MobiSys'04), Boston, MA, June 2004. 24
Location based Indexing Scheme for DAYS ABSTRACT Data dissemination through wireless channels for broadcasting information to consumers is becoming quite common. Many dissemination schemes have been proposed, but most of them push data to wireless channels for general consumption. Push based broadcast [1] is essentially asymmetric, i.e., the volume of data flowing from the server to the users is much higher than that flowing from the users back to the server. A push based scheme requires some indexing which indicates when the data will be broadcast and its position in the broadcast. Access latency and tuning time are the two main parameters which may be used to evaluate an indexing scheme. Two of the important indexing schemes proposed earlier were the tree based and the exponential indexing schemes. None of these schemes were able to address the requirements of location dependent data (LDD), which is a highly desirable feature of data dissemination. In this paper, we discuss the broadcast of LDD in our project DAta in Your Space (DAYS), and propose a scheme for indexing LDD. We argue that this scheme, when applied to LDD, significantly improves performance in terms of tuning time over the above mentioned schemes. We support our argument with the help of simulation results. 1. INTRODUCTION Wireless data dissemination is an economical and efficient way to make desired data available to a large number of mobile or static users. The mode of data transfer is essentially asymmetric, that is, the capacity of the transfer of data from the server to the client (downstream communication) is significantly larger than that from the client (mobile user) to the server (upstream communication). The effectiveness of a data dissemination system is judged by its ability to provide users with the required data anywhere and at any time. One of the best ways to accomplish this is through the dissemination of highly personalized Location Based Services (LBS), which allow users to access personalized location dependent data. An example would be someone using their mobile device to search for a vegetarian restaurant. The LBS application would interact with other location technology components or use the mobile user's input to determine the user's location, and download the information about the restaurants in proximity to the user by tuning into the wireless channel which is disseminating LDD. We see a limited deployment of LBS by some service providers. But there is every indication that, with time, some of the complex technical problems such as a uniform location framework, calculating and tracking locations in all types of places, positioning in various environments, innovative location applications, etc., will be resolved, and LBS will become a common facility that will help to improve market productivity and customer comfort. In our project called DAYS, we use a wireless data broadcast mechanism to push LDD to users, and mobile users monitor and tune the channel to find and download the required data. A simple broadcast, however, is likely to cause significant performance degradation in energy constrained mobile devices, and a common solution to this problem is the use of efficient air indexing. The indexing approach stores control information which tells the user about the data location in the broadcast and how and when it can be accessed. A mobile user, thus, has some free time to go into the doze mode, which conserves valuable power.
It also allows the user to personalize his own mobile device by selectively tuning to the information of his choice. Access efficiency and energy conservation are the two issues which are significant for data broadcast systems. Access efficiency refers to the latency experienced from the moment a request is initiated until the response is received. Energy conservation [7, 10] refers to the efficient use of the limited energy of the mobile device in accessing broadcast data. Two parameters that affect these are the tuning time and the access latency. Tuning time refers to the time during which the mobile unit (MU) remains in the active state to tune the channel and download its required data. It can also be defined as the number of buckets tuned by the mobile device in the active state to get its required data. Access latency may be defined as the time elapsed from the moment a request is issued until the response is received. Several indexing schemes have been proposed in the past, and the prominent among them are the tree based and the exponential indexing schemes [17]. The main disadvantage of the tree based schemes is that they are based on centralized tree structures. To start a search, the MU has to wait until it reaches the root of the next broadcast tree. This significantly affects the tuning time of the mobile unit. The exponential schemes facilitate index replication by sharing links in different search trees. For broadcasts with a large number of pages, the exponential scheme has been shown to perform similarly to the tree based schemes in terms of access latency. Also, the average length of the broadcast increases due to the index replication, and this may cause a significant increase in the access latency. None of the above indexing schemes is equally effective in broadcasting location dependent data. While they provide low latency, they lack properties which are needed to address LDD issues. We propose an indexing scheme in DAYS which takes care of some of these problems. We show with simulation results that our scheme outperforms some of the earlier indexing schemes for broadcasting LDD in terms of tuning time. The rest of the paper is organized as follows. In section 2, we discuss previous work related to indexing of broadcast data. Section 3 describes our DAYS architecture. Location dependent data, its generation and subsequent broadcast is presented in section 4. Section 5 discusses our indexing scheme in detail. Simulation of our scheme and its performance evaluation is presented in section 6. Section 7 concludes the paper and mentions future related work. 2. PREVIOUS WORK Several disk-based indexing techniques have been used for air indexing. Imielinski et al. [5, 6] applied the B+ index tree, where the leaf nodes store the arrival times of the data items. The distributed indexing method was proposed to efficiently replicate and distribute the index tree in a broadcast. Specifically, the index tree is divided into a replicated part and a non-replicated part. Each broadcast consists of the replicated part and the non-replicated part that indexes the data items immediately following it. As such, each node in the non-replicated part appears only once in a broadcast and, hence, reduces the replication cost and access latency while achieving a good tuning time. Chen et al. [2] and Shivakumar et al. [8] considered unbalanced tree structures to optimize energy consumption for non-uniform data access.
These structures minimize the average index search cost by reducing the number of index searches for hot data at the expense of spending more on cold data. Tan and Yu discussed data and index organization under skewed broadcast [9]. Hashing and signature methods have also been suggested for wireless broadcasts that support equality queries. A flexible indexing method was proposed in [5]. The flexible index first sorts the data items in ascending (or descending) order of the search key values and then divides them into p segments. The first bucket in each data segment contains a control index, which is a binary index mapping a given key value to the segment containing that key, and a local index, which is an m-entry index mapping a given key value to the buckets within the current segment. By tuning the parameters of p and m, mobile clients can achieve either a good tuning time or good access latency. Another indexing technique proposed is the exponential indexing scheme [17]. In this scheme, a parameterized index, called the exponential index, is used to optimize the access latency or the tuning time. It facilitates index replication by linking different search trees. All of the above mentioned schemes have been applied to data which are not related to each other. These unrelated data may be clustered or non-clustered. However, none of them has specifically addressed the requirements of LDD. Location dependent data are data which are associated with a location. Presently there are several applications that deal with LDD [13, 16]. Almost all of them depict LDD with the help of hierarchical structures [3, 4]. This is based on the containment property of location dependent data. The containment property helps determine the relative position of an object by defining or identifying the locations that contain it. The subordinate locations are hierarchically related to each other. Thus, the containment property limits the range of availability or operation of a service. We use this containment property in our indexing scheme to index LDD. 3. DAYS ARCHITECTURE DAYS has been conceptualized to disseminate topical and non-topical data to users in a local broadcast space and to accept queries from individual users globally. Topical data, for example, weather information, traffic information, stock information, etc., constantly changes over time. Non-topical data, such as hotel, restaurant, and real estate prices, do not change so often. Thus, we envision the presence of two types of data distribution: in the first case, the server pushes data to local users through wireless channels; in the other case, the server sends the results of user queries through downlink wireless channels. Technically, we see the presence of two types of queues in the pull based data access. One is a heavily loaded queue containing globally uploaded queries. The other is a comparatively lightly loaded queue consisting of locally uploaded queries. The DAYS architecture [12], as shown in figure 1, consists of a Data Server, Broadcast Scheduler, DAYS Coordinator, a network of LEO satellites for global data delivery, and a local broadcast space. Data is pushed into the local broadcast space so that users may tune into the wireless channels to access the data. The local broadcast space consists of a broadcast tower, mobile units and a network of data staging machines called the surrogates. Data staging in surrogates has been investigated earlier as a successful technique [12, 15] to cache users' related data.
We believe that data staging can be used to drastically reduce the latency for both the local broadcast data and the global responses. Query requests in the surrogates may subsequently be used to generate the popularity patterns which ultimately decide the broadcast schedule [12]. Figure 1. DAYS Architecture Figure 2. Location Structure of Starbucks, Plaza 4. LOCATION DEPENDENT DATA (LDD) We argue that incorporating location information in wireless data broadcast can significantly decrease the access latency. This property becomes highly useful for a mobile unit, which has limited storage and processing capability. There are a variety of applications to obtain information about traffic, restaurant and hotel booking, fast food, gas stations, post offices, grocery stores, etc. If these applications are coupled with location information, then the search will be fast and highly cost effective. An important property of locations is containment, which helps to determine the relative location of an object with respect to the parent that contains it. Thus, containment limits the range of availability of a data item. We use this property in our indexing scheme. The database contains the broadcast contents, which are converted into LDD [14] by associating them with their respective locations so that they can be broadcast in a clustered manner. The clustering of LDD helps the user to locate information efficiently and supports the containment property. We present an example to justify our proposition. Example: Suppose a user issues the query "Starbucks Coffee in Plaza please." to access information about the Plaza branch of Starbucks Coffee in Kansas City. In the case of a location independent setup, the system will list all Starbucks coffee shops in the Kansas City area. It is obvious that such responses will increase access latency and are not desirable. These can be managed efficiently if the server has location dependent data, i.e., a mapping between a Starbucks coffee shop's data and its physical location. Also, for a query covering a range of Starbucks locations, a single query requesting locations for the entire region of Kansas City, as shown in Figure 2, will suffice. This will save an enormous amount of bandwidth by decreasing the number of messages and at the same time will help prevent scalability bottlenecks in highly populated areas. 4.1 Mapping Function for LDD The example justifies the need for a mapping function to process location dependent queries. This will be especially important for pull based queries across the globe, for which the reply could be composed for different parts of the world. The mapping function is necessary to construct the broadcast schedule. To develop a mapping function, we define a Global Property Set (GPS) [11], an Information Content (IC) set, and a Location Hierarchy (LH) set, where IC ⊆ GPS and LH ⊆ GPS. LH = {l1, l2, l3, ..., lk}, where li represents a location in the location tree, and IC = {ic1, ic2, ic3, ..., icn}, where ici represents an information type. For example, if traffic, weather, and stock information are in the broadcast, then IC = {ictraffic, icweather, icstock}. The mapping scheme must be able to identify and select an IC member and a LH node for (a) correct association, (b) granularity match, and (c) termination condition. For example, weather ∈ IC could be associated with a country, a state, a city or a town of LH. The granularity match between weather and a LH node is as per user requirement.
Thus, with a coarse granularity, weather information is associated with a country to get the country's weather, and with a town at a finer granularity. If a town is the finest granularity, then it defines the terminal condition for the association between IC and LH for weather. This means that a user cannot get weather information about a subdivision of a town. In reality, the weather of a subdivision does not make any sense. We develop a simple heuristic mapping scheme based on user requirements. Let IC = {m1, m2, m3, ..., mk}, where mi represents an element of IC, and let LH = {n1, n2, n3, ..., nl}, where ni represents a member of LH. We define the GPS for IC (GPSIC) ⊆ GPS and for LH (GPSLH) ⊆ GPS as GPSIC = {P1, P2, ..., Pn}, where P1, P2, P3, ..., Pn are properties of its members, and GPSLH = {Q1, Q2, ..., Qm}, where Q1, Q2, ..., Qm are properties of its members. The properties of a particular member of IC are a subset of GPSIC. It is generally true that (property set (mi ∈ IC) ∩ property set (mj ∈ IC)) = ∅; however, there may be cases where the intersection is not null. For example, stock ∈ IC and movie rating ∈ IC do not have any property in common. We assume that any two or more members of IC have at least one common geographical property (i.e. location), because DAYS broadcasts information about those categories which are closely tied with a location. For example, the stock of a company is related to a country, weather is related to a city or state, etc. We define the property subset of mi ∈ IC as PSmi, ∀ mi ∈ IC, which implies that ∀ i, PSmi ⊆ GPSIC. The geographical properties of this set are indicative of whether mi ∈ IC can be mapped to only a single granularity level (i.e. a single location) in LH or to multiple granularity levels (i.e. more than one node in the hierarchy) in LH. How many and which granularity levels a given mi should map to depends upon the level at which the service provider wants to provide information about the mi in question. Similarly, we define the property subset of LH members as PSnj, ∀ nj ∈ LH. The process of mapping from IC to LH is then identifying, for some mx ∈ IC, one or more ny ∈ LH such that PSmx ∩ PSny ≠ ∅. This means that mx maps to ny and to all children of ny if mx can map to multiple granularity levels, or mx maps only to ny if mx can map to a single granularity level. We assume that new members can join and old members can leave IC or LH at any time. The deletion of members from the IC space is simple, but the addition of members to the IC space is more restrictive. If we want to add a new member to the IC space, then we first define a property set for the new member, PSnew_m = {P1, P2, P3, ..., Pt}, and add it to IC only if the condition ∀ Pw {Pw ∈ PSnew_m → Pw ∈ GPSIC} is satisfied. This scheme has an additional benefit of allowing the information service providers to have control over what kind of information they wish to provide to the users. We present the following example to illustrate the mapping concept. Here, only PS(ICStock) ∩ PSCountry ≠ ∅. In addition, PS(ICStock) indicates that Stock can map to only a single location, Country. When we consider the member Traffic of the IC space, only PS(ICTraffic) ∩ PSCity ≠ ∅. As PS(ICTraffic) indicates that Traffic can map to only a single location, it maps only to City and none of its children. Note that, unlike Stock, mapping Traffic to Major Roads, which is a child of City, would be meaningful. However, service providers have the right to control the granularity levels at which they want to provide information about a member of the IC space.
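To make the heuristic concrete, the following Python sketch walks through the mapping step under stated assumptions: the location tree, the per-member property sets, and the single/multiple granularity flag below are illustrative stand-ins chosen to mirror the running example, not the data structures of the actual DAYS implementation.

# Minimal sketch of the IC -> LH mapping heuristic (illustrative only).
# The location tree, property sets and the multi_level flag are hypothetical;
# the paper leaves these choices to the service provider.

LOCATION_TREE = {            # containment: parent -> children
    "Country": ["State"],
    "State":   ["City", "ZipCode"],
    "City":    ["MajorRoads"],
    "ZipCode": [],
    "MajorRoads": [],
}

# PS(n) for each LH node: here just the node's own granularity label.
PS_LH = {n: {n} for n in LOCATION_TREE}

# PS(m) for each IC member: geographical properties plus whether the member
# may map to multiple granularity levels (i.e. also to children of a match).
IC = {
    "Stock":          ({"Country"}, False),
    "Traffic":        ({"City"}, False),
    "RoadConditions": ({"State", "City"}, True),
    "Weather":        ({"State", "City"}, True),
    "Restaurant":     ({"ZipCode"}, False),
}

def descendants(node):
    """All nodes contained in `node` (containment property)."""
    out = []
    for child in LOCATION_TREE[node]:
        out.append(child)
        out.extend(descendants(child))
    return out

def map_ic_to_lh():
    mapping = {}
    for member, (ps_m, multi_level) in IC.items():
        # PS(mx) and PS(ny) must intersect for mx to map to ny
        targets = [n for n in LOCATION_TREE if ps_m & PS_LH[n]]
        if multi_level:                      # also map to the children of each match
            for n in list(targets):
                targets.extend(d for d in descendants(n) if d not in targets)
        mapping[member] = targets
    return mapping

if __name__ == "__main__":
    for member, locations in map_ic_to_lh().items():
        print(f"{member:15s} -> {locations}")

Under these assumed property sets the sketch reproduces the mappings of the example: Stock maps only to Country, Traffic only to City, while Road conditions, Weather and Restaurant come out as described next.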
PS(ICRoad conditions) ∩ PSState ≠ ∅ and PS(ICRoad conditions) ∩ PSCity ≠ ∅. So Road conditions maps to State as well as City. As PS(ICRoad conditions) indicates that Road conditions can map to multiple granularity levels, Road conditions will also map to Zip Code and Major Roads, which are the children of State and City. Similarly, Restaurant maps only to Zip Code, and Weather maps to State, City and their children, Major Roads and Zip Code. 5. LOCATION BASED INDEXING SCHEME This section discusses our location based indexing scheme (LBIS). The scheme is designed to conform to the LDD broadcast in our project DAYS. As discussed earlier, we use the containment property of LDD in the indexing scheme. This significantly limits the search for the required data to a particular portion of the broadcast. Thus, we argue that the scheme provides bounded tuning time. We now describe the architecture of our indexing scheme. Our scheme contains separate data buckets and index buckets. The index buckets are of two types. The first type is called the Major index. The Major index provides information about the types of data broadcast. For example, if we intend to broadcast information like Entertainment, Weather, Traffic etc., then the major index points to either these major types of information and/or their main subtypes of information, the number of main subtypes varying from one information type to another. This strictly limits the number of accesses to a Major index. The Major index never points to the original data. It points to the sub-indexes, called Minor indexes. The minor indexes are the indexes which actually point to the original data. We call these minor index pointers Location Pointers, as they point to the data which are associated with a location. Thus, our search for a data item includes accessing a major index and some minor indexes, the number of minor indexes varying depending on the type of information. Thus, our indexing scheme takes into account the hierarchical nature of the LDD and the containment property, and requires our broadcast schedule to be clustered based on data type and location. The structure of the location hierarchy requires the use of different types of index at different levels. The structure and positions of the indexes strictly depend on the location hierarchy, as described in our mapping scheme earlier. We illustrate the implementation of our scheme with an example. The rules for framing the index are mentioned subsequently. Figure 3. Location Mapped Information for Broadcast Figure 4. Data coupled with Location based Index Example: Let us suppose that our broadcast content contains ICentertainment and ICweather, which are represented as shown in Fig. 3. Ai represents an area of the city and Ri represents a road in a certain area. The leaves of the Weather structure represent four cities. The index structure is given in Fig. 4, which shows the positions of the major and minor indexes and the data in the broadcast schedule. We propose the following rules for the creation of the air indexed broadcast schedule: • The major index and the minor index are created. • The major index contains the position and range of different types of data items (Weather and Entertainment, Figure 3) and their categories. The sub-categories of Entertainment, Movie and Restaurant, are also in the index. Thus, the major index contains Entertainment (E), Entertainment-Movie (EM), Entertainment-Restaurant (ER), and Weather (W).
The tuple (S, L) represents the starting position (S) of the data item, and L represents the range of the item in terms of the number of data buckets. • The minor index contains the variables A, R and a pointer Next. In our example (Figure 3), road R represents the first node of area A. The minor index is used to point to the actual data buckets present at the lowest levels of the hierarchy. In contrast, the major index points to a broader range of locations, and so it contains information about the main and sub-categories of data. • Index information is not incorporated in the data buckets. Index buckets are separate, containing only the control information. • The number of major index buckets m = #(IC), IC = {ic1, ic2, ic3, ..., icn}, where ici represents an information type and # represents the cardinality of the Information Content set IC. In this example, IC = {icMovie, icWeather, icRestaurant} and so #(IC) = 3. Hence, the number of major index buckets is 3. • The mechanism to resolve a query is present in the Java based coordinator in the MU. For example, if a query Q is presented as Q (Entertainment, Movie, Road_1), then the resultant search will be for the EM information in the major index. We say Q → EM. Our proposed index works as follows: Let us suppose that an MU issues a query which is represented by the Java Coordinator present in the MU as "Restaurant information on Road 7". This is resolved by the coordinator as Q → ER. This means one has to search for the ER unit of the index in the major index. Let us suppose that the MU logs into the channel at R2. The first index it receives is a minor index after R2. In this index, the value of the Next variable is 4, which means that the next major index is present after bucket 4. The MU may go into doze mode. It becomes active after bucket 4 and receives the major index. It searches for the ER information, which is the first entry in this index. It is now certain that the MU will get the position of the data bucket in the adjoining minor index. The second unit in the minor index depicts the position of the required data R7. It indicates that the data bucket is the first bucket in Area 4. The MU goes into doze mode again and becomes active after bucket 6. It gets the required data in the next bucket. We present the algorithm for searching the location based index below.

Algorithm 1 Location based Index Search in DAYS
1. Scan broadcast for the next index bucket, found = false
2. While (not found) do
3.   if bucket is Major Index then
4.     Find the Type & Tuple (S, L)
5.     if S is greater than 1, go into doze mode for S seconds
6.     end if
7.     Wake up at the Sth bucket and observe the Minor Index
8.   end if
9.   if bucket is Minor Index then
10.    if TypeRequested not equal to Typefound and (A, R) Request not equal to (A, R) found then
11.      Go into doze mode till NEXT & repeat from step 3
12.    end if
13.    else find entry in Minor Index which points to data
14.    Compute time of arrival T of data bucket
15.    Go into doze mode till T
16.    Wake up at T and access data, found = true

6. PERFORMANCE EVALUATION Conservation of energy is the main concern when we try to access data from a wireless broadcast. An efficient scheme should allow the mobile device to access its required data by staying active for a minimum amount of time. This would save a considerable amount of energy. Since items are distributed based on types and are mapped to suitable locations, we argue that our broadcast deals with clustered data types.
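As a concrete illustration of the access pattern behind Algorithm 1, the following Python sketch replays a toy broadcast. The bucket layout, the index contents and the query are invented for illustration; only the scan, major-index, minor-index, doze, data sequence and the way tuned buckets are counted follow the algorithm, and details such as wrap-around and error handling are simplified.

# Toy simulation of the LBIS search pattern (illustrative only).
# The broadcast below is not the schedule of Figure 4; it merely has the
# same structure: one major index, one minor index, then data buckets.

MAJOR, MINOR, DATA = "major", "minor", "data"

def build_broadcast():
    # positions: 0 major index, 1 minor index, 2-5 data buckets
    return [
        (MAJOR, {"ER": 1, "EM": 1, "W": 1}),          # type -> position of minor index
        (MINOR, {("A4", "R7"): 5, "next_major": 0}),  # (area, road) -> data position
        (DATA, "ER A1"), (DATA, "ER A2"),
        (DATA, "ER A3"), (DATA, "ER R7 in A4"),
    ]

def lbis_search(broadcast, login_pos, wanted_type, wanted_loc):
    """Return (data, number of buckets tuned). Dozed-over buckets cost nothing."""
    tuned = 0
    pos = login_pos
    # 1. scan forward (with wrap-around) until any index bucket is found
    while broadcast[pos][0] == DATA:
        tuned += 1
        pos = (pos + 1) % len(broadcast)
    # 2. follow index pointers until the data bucket is reached
    while True:
        kind, payload = broadcast[pos]
        tuned += 1
        if kind == MAJOR:
            pos = payload[wanted_type]         # doze until the minor index
        elif kind == MINOR:
            if wanted_loc in payload:
                pos = payload[wanted_loc]      # doze until the data bucket
            else:
                pos = payload["next_major"]    # wrong segment: doze to next major index
        else:                                  # DATA bucket: done
            return payload, tuned

print(lbis_search(build_broadcast(), login_pos=2,
                  wanted_type="ER", wanted_loc=("A4", "R7")))

With the login position and query shown, the mobile unit tunes seven buckets in total; every other bucket in the broadcast is covered by a doze period, which is the behaviour the scheme relies on for energy savings.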
The mobile unit has to access a larger major index and a relatively much smaller minor index to get information about the time of arrival of the data. This is in contrast to the exponential scheme, where the indexes are of equal sizes. The example discussed and Algorithm 1 reveal that to access any data, we need to access the major index only once, followed by one or more accesses to the minor index. The number of minor index accesses depends on the number of internal locations. As the number of internal locations varies from item to item (for example, Weather is generally associated with a City whereas Traffic is granulated up to the major and minor roads of a city), we argue that the structure of the location mapped information may be visualized as a forest, which is a collection of general trees, the number of general trees depending on the types of information broadcast and the depth of a tree depending on the granularity of the location information associated with the information. For our experiments, we assume the forest is a collection of balanced m-ary trees. We further assume the m-ary trees to be full by assuming the presence of dummy nodes at different levels of a tree. Thus, if the number of data items is d and the fan-out of the tree is m, then n = (m*d - 1) / (m - 1), where n is the number of vertices in the tree, and i = (d - 1) / (m - 1), where i is the number of internal vertices. The tuning time for a data item involves 1 unit of time required to access the major index plus the time required to access the data items present in the leaves of the tree. Thus, the tuning time with d data items is t = logm(d) + 1. We can say that the tuning time is bounded by O(logm(d)). We compare our scheme with the distributed indexing and exponential schemes. We assume a flat broadcast and a number of pages varying from 5000 to 25000. The various simulation parameters are shown in Table 1. Figures 5-8 show the relative tuning times of the three indexing algorithms, i.e., LBIS, the exponential scheme and the distributed tree scheme. Figure 5 shows the results for the number of internal location nodes = 3. We can see that LBIS significantly outperforms both of the other schemes. The tuning time in LBIS ranges from approximately 6.8 to 8. This large tuning time is due to the fact that after reaching the lowest minor index, the MU may have to access a few buckets sequentially to get the required data bucket. We can see that the tuning time tends to become stable as the length of the broadcast increases. In figure 6 we consider m = 4. Here we can see that the exponential and the distributed tree schemes perform almost similarly, though the former seems to perform slightly better as the broadcast length increases. A very interesting pattern is visible in figure 7. For smaller broadcast sizes, LBIS seems to have a larger tuning time than the other two schemes. But as the length of the broadcast increases, it is clearly visible that LBIS outperforms the other two schemes. The distributed tree indexing shows behavior similar to LBIS. The tuning time in LBIS remains low because the algorithm allows the MU to skip some intermediate Minor Indexes. This allows the MU to move into lower levels directly after coming into active mode, thus saving valuable energy. This action is not possible in the distributed tree indexing, and hence we can observe that its tuning time is more than that of the LBIS scheme, although it performs better than the exponential scheme.
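The tree formulas used above can be checked numerically. The short Python sketch below assumes, as in the simulation set-up, full balanced m-ary trees padded with dummy nodes; the particular (m, d) pairs are arbitrary examples and do not correspond to the simulation parameters of Table 1.

# Numeric check of the full m-ary tree formulas (illustrative values only).

def tree_counts(m, d):
    """d data items at the leaves of a full m-ary tree with fan-out m."""
    internal = (d - 1) // (m - 1)          # i = (d - 1) / (m - 1)
    total    = (m * d - 1) // (m - 1)      # n = (m*d - 1) / (m - 1)
    height = 0                             # smallest h with m**h >= d, i.e. log_m(d)
    while m ** height < d:
        height += 1
    tuning = height + 1                    # t = log_m(d) + 1 (one major index access)
    return internal, total, tuning

for m, d in [(3, 27), (4, 256), (5, 125)]:
    i, n, t = tree_counts(m, d)
    print(f"m={m}, d={d}: internal vertices={i}, total vertices={n}, tuning bound={t}")

For example, m = 3 and d = 27 gives 13 internal vertices, 40 vertices in total and a tuning-time bound of 4 bucket accesses, consistent with the O(logm(d)) bound stated above.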
Figure 8, in contrast, shows us that the tuning time in LBIS, though less than that of the other two schemes, tends to increase sharply as the broadcast length becomes greater than 15000 pages. This may be attributed both to the increase in the time required to scan the intermediate Minor Indexes and to the length of the broadcast. But we can observe that the slope of the LBIS curve is significantly less than that of the other two curves. Table 1. Simulation Parameters The simulation results establish some facts about our location based indexing scheme. The scheme performs better than the other two schemes in terms of tuning time in most of the cases. As the length of the broadcast increases, after a certain point, though the tuning time increases as a result of the factors which we have described before, the scheme always performs better than the other two schemes. Due to the prescribed limit on the number of pages in the paper, we are unable to show more results. But these omitted results show similar trends to the results depicted in figures 5-8. 7. CONCLUSION AND FUTURE WORK In this paper we have presented a scheme for mapping wireless broadcast data to their locations. We have presented an example to show how the hierarchical structure of the location tree maps to the data to create LDD. We have presented a scheme called LBIS to index this LDD. We have used the containment property of LDD in the scheme, which limits the search to a narrow range of data in the broadcast, thus saving valuable energy in the device. The mapping of data to locations and the indexing scheme will be used in our DAYS project to create the push based architecture. LBIS has been compared with two other prominent indexing schemes, i.e., the distributed tree indexing scheme and the exponential indexing scheme. We showed in our simulations that the LBIS scheme has the lowest tuning time for broadcasts having a large number of pages, thus saving valuable battery power in the MU. Figures 5-8. Average tuning time versus broadcast size (#buckets) for the distributed tree, exponential and LBIS schemes. In future work we will incorporate a pull based architecture in our DAYS project. Data from the server is available for access by global users. This may be done by putting a request to the source server. The query in this case is a global query. It is transferred from the user's source server to the destination server through the use of LEO satellites. We intend to use our LDD scheme and data staging architecture in the pull based architecture. We will show that the LDD scheme together with the data staging architecture significantly improves the latency for global as well as local queries.
Location based Indexing Scheme for DAYS ABSTRACT Data dissemination through wireless channels for broadcasting information to consumers is becoming quite common. Many dissemination schemes have been proposed but most of them push data to wireless channels for general consumption. Push based broadcast [1] is essentially asymmetric, i.e., the volume of data being higher from the server to the users than from the users back to the server. Push based scheme requires some indexing which indicates when the data will be broadcast and its position in the broadcast. Access latency and tuning time are the two main parameters which may be used to evaluate an indexing scheme. Two of the important indexing schemes proposed earlier were tree based and the exponential indexing schemes. None of these schemes were able to address the requirements of location dependent data (LDD) which is highly desirable feature of data dissemination. In this paper, we discuss the broadcast of LDD in our project DAta in Your Space (DAYS), and propose a scheme for indexing LDD. We argue that this scheme, when applied to LDD, significantly improves performance in terms of tuning time over the above mentioned schemes. We prove our argument with the help of simulation results. 1. INTRODUCTION Wireless data dissemination is an economical and efficient way to make desired data available to a large number of mobile or static users. The mode of data transfer is essentially asymmetric, that is, the capacity of the transfer of data (downstream communication) from the server to the client (mobile user) is significantly larger than the client or mobile user to the server (upstream communication). The effectiveness of a data dissemination system is judged by its ability to provide user the required data at anywhere and at anytime. One of the best ways to accomplish this is through the dissemination of highly personalized Location Based Services (LBS) which allows users to access personalized location dependent data. An example would be someone using their mobile device to search for a vegetarian restaurant. The LBS application would interact with other location technology components or use the mobile user's input to determine the user's location and download the information about the restaurants in proximity to the user by tuning into the wireless channel which is disseminating LDD. We see a limited deployment of LBS by some service providers. But there are every indications that with time some of the complex technical problems such as uniform location framework, calculating and tracking locations in all types of places, positioning in various environments, innovative location applications, etc., will be resolved and LBS will become a common facility and will help to improve market productivity and customer comfort. In our project called DAYS, we use wireless data broadcast mechanism to push LDD to users and mobile users monitor and tune the channel to find and download the required data. A simple broadcast, however, is likely to cause significant performance degradation in the energy constrained mobile devices and a common solution to this problem is the use of efficient air indexing. The indexing approach stores control information which tells the user about the data location in the broadcast and how and when he could access it. A mobile user, thus, has some free time to go into the doze mode which conserves valuable power. 
It also allows the user to personalize his own mobile device by selectively tuning to the information of his choice. Access efficiency and energy conservation are the two issues which are significant for data broadcast systems. Access efficiency refers to the latency experienced when a request is initiated till the response is received. Energy conservation [7, 10] refers to the efficient use of the limited energy of the mobile device in accessing broadcast data. Two parameters that affect these are the tuning time and the access latency. Tuning time refers to the time during which the mobile unit (MU) remains in active state to tune the channel and download its required data. It can also be defined as the number of buckets tuned by the mobile device in active state to get its required data. Access latency may be defined as the time elapsed since a request has been issued till the response has been received. Several indexing schemes have been proposed in the past and the prominent among them are the tree based and the exponential indexing schemes [17]. The main disadvantages of the tree based schemes are that they are based on centralized tree structures. To start a search, the MU has to wait until it reaches the root of the next broadcast tree. This significantly affects the tuning time of the mobile unit. The exponential schemes facilitate index replication by sharing links in different search trees. For broadcasts with large number of pages, the exponential scheme has been shown to perform similarly as the tree based schemes in terms of access latency. Also, the average length of broadcast increases due to the index replication and this may cause significant increase in the access latency. None of the above indexing schemes is equally effective in broadcasting location dependent data. In addition to providing low latency, they lack properties which are used to address LDD issues. We propose an indexing scheme in DAYS which takes care of some these problems. We show with simulation results that our scheme outperforms some of the earlier indexing schemes for broadcasting LDD in terms of tuning time. The rest of the paper is presented as follows. In section 2, we discuss previous work related to indexing of broadcast data. Section 3 describes our DAYS architecture. Location dependent data, its generation and subsequent broadcast is presented in section 4. Section 5 discusses our indexing scheme in detail. Simulation of our scheme and its performance evaluation is presented in section 6. Section 7 concludes the paper and mentions future related work. 2. PREVIOUS WORK Several disk-based indexing techniques have been used for air indexing. Imielinski et al. [5, 6] applied the B + index tree, where the leaf nodes store the arrival times of the data items. The distributed indexing method was proposed to efficiently replicate and distribute the index tree in a broadcast. Specifically, the index tree is divided into a replicated part and a non replicated part. Each broadcast consists of the replicated part and the nonreplicated part that indexes the data items immediately following it. As such, each node in the non-replicated part appears only once in a broadcast and, hence, reduces the replication cost and access latency while achieving a good tuning time. Chen et al. [2] and Shivakumar et al. [8] considered unbalanced tree structures to optimize energy consumption for non-uniform data access. 
These structures minimize the average index search cost by reducing the number of index searches for hot data at the expense of spending more on cold data. Tan and Yu discussed data and index organization under skewed broadcast Hashing and signature methods have also been suggested for wireless broadcast that supports equality queries [9]. A flexible indexing method was proposed in [5]. The flexible index first sorts the data items in ascending (or descending) order of the search key values and then divides them into p segments. The first bucket in each data segment contains a control index, which is a binary index mapping a given key value to the segment containing that key, and a local index, which is an m-entry index mapping a given key value to the buckets within the current segment. By tuning the parameters of p and m, mobile clients can achieve either a good tuning time or good access latency. Another indexing technique proposed is the exponential indexing scheme [17]. In this scheme, a parameterized index, called the exponential index is used to optimize the access latency or the tuning time. It facilitates index replication by linking different search trees. All of the above mentioned schemes have been applied to data which are non related to each other. These non related data may be clustered or non clustered. However, none of them has specifically addressed the requirements of LDD. Location dependent data are data which are associated with a location. Presently there are several applications that deal with LDD [13, 16]. Almost all of them depict LDD with the help of hierarchical structures [3, 4]. This is based on the containment property of location dependent data. The Containment property helps determining relative position of an object by defining or identifying locations that contains those objects. The subordinate locations are hierarchically related to each other. Thus, Containment property limits the range of availability or operation of a service. We use this containment property in our indexing scheme to index LDD. 3. DAYS ARCHITECTURE 4. LOCATION DEPENDENT DATA (LDD) 4.1 Mapping Function for LDD 5. LOCATION BASED INDEXING SCHEME 6. PERFORMANCE EVALUATION
Location based Indexing Scheme for DAYS ABSTRACT Data dissemination through wireless channels for broadcasting information to consumers is becoming quite common. Many dissemination schemes have been proposed but most of them push data to wireless channels for general consumption. Push based broadcast [1] is essentially asymmetric, i.e., the volume of data being higher from the server to the users than from the users back to the server. Push based scheme requires some indexing which indicates when the data will be broadcast and its position in the broadcast. Access latency and tuning time are the two main parameters which may be used to evaluate an indexing scheme. Two of the important indexing schemes proposed earlier were tree based and the exponential indexing schemes. None of these schemes were able to address the requirements of location dependent data (LDD) which is highly desirable feature of data dissemination. In this paper, we discuss the broadcast of LDD in our project DAta in Your Space (DAYS), and propose a scheme for indexing LDD. We argue that this scheme, when applied to LDD, significantly improves performance in terms of tuning time over the above mentioned schemes. We prove our argument with the help of simulation results. 1. INTRODUCTION Wireless data dissemination is an economical and efficient way to make desired data available to a large number of mobile or static users. The mode of data transfer is essentially asymmetric, The effectiveness of a data dissemination system is judged by its ability to provide user the required data at anywhere and at anytime. One of the best ways to accomplish this is through the dissemination of highly personalized Location Based Services (LBS) which allows users to access personalized location dependent data. An example would be someone using their mobile device to search for a vegetarian restaurant. In our project called DAYS, we use wireless data broadcast mechanism to push LDD to users and mobile users monitor and tune the channel to find and download the required data. The indexing approach stores control information which tells the user about the data location in the broadcast and how and when he could access it. A mobile user, thus, has some free time to go into the doze mode which conserves valuable power. It also allows the user to personalize his own mobile device by selectively tuning to the information of his choice. Access efficiency and energy conservation are the two issues which are significant for data broadcast systems. Access efficiency refers to the latency experienced when a request is initiated till the response is received. Energy conservation [7, 10] refers to the efficient use of the limited energy of the mobile device in accessing broadcast data. Two parameters that affect these are the tuning time and the access latency. Tuning time refers to the time during which the mobile unit (MU) remains in active state to tune the channel and download its required data. It can also be defined as the number of buckets tuned by the mobile device in active state to get its required data. Access latency may be defined as the time elapsed since a request has been issued till the response has been received. Several indexing schemes have been proposed in the past and the prominent among them are the tree based and the exponential indexing schemes [17]. The main disadvantages of the tree based schemes are that they are based on centralized tree structures. 
To start a search, the MU has to wait until it reaches the root of the next broadcast tree. This significantly affects the tuning time of the mobile unit. The exponential schemes facilitate index replication by sharing links in different search trees. For broadcasts with large number of pages, the exponential scheme has been shown to perform similarly as the tree based schemes in terms of access latency. Also, the average length of broadcast increases due to the index replication and this may cause significant increase in the access latency. None of the above indexing schemes is equally effective in broadcasting location dependent data. In addition to providing low latency, they lack properties which are used to address LDD issues. We propose an indexing scheme in DAYS which takes care of some these problems. We show with simulation results that our scheme outperforms some of the earlier indexing schemes for broadcasting LDD in terms of tuning time. In section 2, we discuss previous work related to indexing of broadcast data. Section 3 describes our DAYS architecture. Location dependent data, its generation and subsequent broadcast is presented in section 4. Section 5 discusses our indexing scheme in detail. Simulation of our scheme and its performance evaluation is presented in section 6. Section 7 concludes the paper and mentions future related work. 2. PREVIOUS WORK Several disk-based indexing techniques have been used for air indexing. Imielinski et al. [5, 6] applied the B + index tree, where the leaf nodes store the arrival times of the data items. The distributed indexing method was proposed to efficiently replicate and distribute the index tree in a broadcast. Specifically, the index tree is divided into a replicated part and a non replicated part. Each broadcast consists of the replicated part and the nonreplicated part that indexes the data items immediately following it. As such, each node in the non-replicated part appears only once in a broadcast and, hence, reduces the replication cost and access latency while achieving a good tuning time. Chen et al. [2] and Shivakumar et al. [8] considered unbalanced tree structures to optimize energy consumption for non-uniform data access. These structures minimize the average index search cost by reducing the number of index searches for hot data at the expense of spending more on cold data. Tan and Yu discussed data and index organization under skewed broadcast Hashing and signature methods have also been suggested for wireless broadcast that supports equality queries [9]. A flexible indexing method was proposed in [5]. The flexible index first sorts the data items in ascending (or descending) order of the search key values and then divides them into p segments. By tuning the parameters of p and m, mobile clients can achieve either a good tuning time or good access latency. Another indexing technique proposed is the exponential indexing scheme [17]. In this scheme, a parameterized index, called the exponential index is used to optimize the access latency or the tuning time. It facilitates index replication by linking different search trees. All of the above mentioned schemes have been applied to data which are non related to each other. These non related data may be clustered or non clustered. Location dependent data are data which are associated with a location. This is based on the containment property of location dependent data. 
The Containment property helps determining relative position of an object by defining or identifying locations that contains those objects. The subordinate locations are hierarchically related to each other. Thus, Containment property limits the range of availability or operation of a service. We use this containment property in our indexing scheme to index LDD.
H-88
Controlling Overlap in Content-Oriented XML Retrieval
The direct application of standard ranking techniques to retrieve individual elements from a collection of XML documents often produces a result set in which the top ranks are dominated by a large number of elements taken from a small number of highly relevant documents. This paper presents and evaluates an algorithm that re-ranks this result set, with the aim of minimizing redundant content while preserving the benefits of element retrieval, including the benefit of identifying topic-focused components contained within relevant documents. The test collection developed by the INitiative for the Evaluation of XML Retrieval (INEX) forms the basis for the evaluation.
[ "xml", "rank", "inex", "inform retriev", "xml ir", "baselin retriev", "sog quantiz", "xml cumul gain metric", "cumul gain", "ideal gain vector", "xcg metric reward retriev", "multitext system", "re-rank algorithm", "term frequenc vector", "prioriti queue", "time complex", "extend tree travers routin" ]
[ "P", "P", "P", "M", "M", "M", "U", "M", "U", "U", "M", "U", "R", "U", "U", "U", "U" ]
Controlling Overlap in Content-Oriented XML Retrieval Charles L. A. Clarke School of Computer Science, University of Waterloo, Canada claclark@plg.uwaterloo.ca ABSTRACT The direct application of standard ranking techniques to retrieve individual elements from a collection of XML documents often produces a result set in which the top ranks are dominated by a large number of elements taken from a small number of highly relevant documents. This paper presents and evaluates an algorithm that re-ranks this result set, with the aim of minimizing redundant content while preserving the benefits of element retrieval, including the benefit of identifying topic-focused components contained within relevant documents. The test collection developed by the INitiative for the Evaluation of XML Retrieval (INEX) forms the basis for the evaluation. Categories and Subject Descriptors H.3.3 [Information Systems]: Information Storage and Retrieval-Information Search and Retrieval General Terms Algorithms, Measurement, Performance, Experimentation 1. INTRODUCTION The representation of documents in XML provides an opportunity for information retrieval systems to take advantage of document structure, returning individual document components when appropriate, rather than complete documents in all circumstances. In response to a user query, an XML information retrieval system might return a mixture of paragraphs, sections, articles, bibliographic entries and other components. This facility is of particular benefit when a collection contains very long documents, such as product manuals or books, where the user should be directed to the most relevant portions of these documents. <article> <fm> <atl>Text Compression for Dynamic Document Databases</atl> <au>Alistair Moffat</au> <au>Justin Zobel</au> <au>Neil Sharman</au> <abs><p><b>Abstract</b> For ...</p></abs> </fm> <bdy> <sec><st>INTRODUCTION</st> <ip1>Modern document databases...</ip1> <p>There are good reasons to compress...</p> </sec> <sec><st>REDUCING MEMORY REQUIREMENTS</st>... <ss1><st>2.1 Method A</st>... </sec> ... </bdy> </article> Figure 1: A journal article encoded in XML. Figure 1 provides an example of a journal article encoded in XML, illustrating many of the important characteristics of XML documents. Tags indicate the beginning and end of each element, with elements varying widely in size, from one word to thousands of words. Some elements, such as paragraphs and sections, may be reasonably presented to the user as retrieval results, but others are not appropriate. Elements overlap each other - articles contain sections, sections contain subsections, and subsections contain paragraphs. Each of these characteristics affects the design of an XML IR system, and each leads to fundamental problems that must be solved in a successful system. Most of these fundamental problems can be solved through the careful adaptation of standard IR techniques, but the problems caused by overlap are unique to this area [4,11] and form the primary focus of this paper. The article of figure 1 may be viewed as an XML tree, as illustrated in figure 2. Formally, a collection of XML documents may be represented as a forest of ordered, rooted trees, consisting of a set of nodes N and a set of directed edges E connecting these nodes. For each node x ∈ N, the notation x.parent refers to the parent node of x, if one exists, and the notation x.children refers to the set of child nodes of x. Figure 2: Example XML tree.
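A minimal sketch of this formal model is given below: nodes carry parent and children pointers, and a containment test captures the ancestor/descendant relation that gives rise to overlapping elements. The tiny tree constructed at the end mirrors a fragment of Figure 2; the class and function names are ours, introduced only for illustration, not part of the paper's system.

# Sketch of the forest-of-rooted-trees model and the overlap relation.

class Node:
    def __init__(self, tag, parent=None):
        self.tag = tag
        self.parent = parent          # x.parent
        self.children = []            # x.children
        if parent is not None:
            parent.children.append(self)

def contains(ancestor, node):
    """True if `node` lies in the subtree rooted at `ancestor` (or is it)."""
    while node is not None:
        if node is ancestor:
            return True
        node = node.parent
    return False

def overlaps(x, y):
    """Two elements overlap when one contains the other."""
    return contains(x, y) or contains(y, x)

# article -> bdy -> sec -> p, plus a front-matter sibling (fragment of Figure 2)
article = Node("article")
bdy     = Node("bdy", article)
sec     = Node("sec", bdy)
p       = Node("p", sec)
fm      = Node("fm", article)

print(overlaps(sec, p))    # True:  the paragraph is contained in the section
print(overlaps(sec, fm))   # False: disjoint subtrees of the same article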
Since an element may be represented by the node at its root, the output of an XML IR system may be viewed as a ranked list of the top-m nodes. The direct application of a standard relevance ranking technique to a set of XML elements can produce a result in which the top ranks are dominated by many structurally related elements. A high scoring section is likely to contain several high scoring paragraphs and to be contained in a high scoring article. For example, many of the elements in figure 2 would receive a high score on the keyword query text index compression algorithms. If each of these elements is presented to a user as an individual and separate result, she may waste considerable time reviewing and rejecting redundant content. One possible solution is to report only the highest scoring element along a given path in the tree, and to remove from the lower ranks any element containing it, or contained within it. Unfortunately, this approach destroys some of the possible benefits of XML IR. For example, an outer element may contain a substantial amount of information that does not appear in an inner element, but the inner element may be heavily focused on the query topic and provide a short overview of the key concepts. In such cases, it is reasonable to report elements which contain, or are contained in, higher ranking elements. Even when an entire book is relevant, a user may still wish to have the most important paragraphs highlighted, to guide her reading and to save time [6]. This paper presents a method for controlling overlap. Starting with an initial element ranking, a re-ranking algorithm adjusts the scores of lower ranking elements that contain, or are contained within, higher ranking elements, reflecting the fact that this information may now be redundant. For example, once an element representing a section appears in the ranking, the scores for the paragraphs it contains and the article that contains it are reduced. The inspiration for this strategy comes partially from recent work on structured document retrieval, where terms appearing in different fields, such as the title and body, are given different weights [20]. Extending that approach, the re-ranking algorithm varies weights dynamically as elements are processed. The remainder of the paper is organized as follows: After a discussion of background work and evaluation methodology, a baseline retrieval method is presented in section 4. This baseline method represents a reasonable adaptation of standard IR technology to XML. Section 5 then outlines a strategy for controlling overlap, using the baseline method as a starting point. A re-ranking algorithm implementing this strategy is presented in section 6 and evaluated in section 7. Section 8 discusses an extended version of the algorithm. 2. BACKGROUND This section provides a general overview of XML information retrieval and discusses related work, with an emphasis on the fundamental problems mentioned in the introduction. Much research in the area of XML retrieval views it from a traditional database perspective, being concerned with such problems as the implementation of structured query languages [5] and the processing of joins [1]. Here, we take a content oriented IR perspective, focusing on XML documents that primarily contain natural language data and queries that are primarily expressed in natural language.
We assume that these queries indicate only the nature of desired content, not its structure, and that the role of the IR system is to determine which elements best satisfy the underlying information need. Other IR research has considered mixed queries, in which both content and structural requirements are specified [2,6,14,17,23]. 2.1 Term and Document Statistics In traditional information retrieval applications the standard unit of retrieval is taken to be the document. Depending on the application, this term might be interpreted to encompass many different objects, including web pages, newspaper articles and email messages. When applying standard relevance ranking techniques in the context of XML IR, a natural approach is to treat each element as a separate document, with term statistics available for each [16]. In addition, most ranking techniques require global statistics (e.g. inverse document frequency) computed over the collection as a whole. If we consider this collection to include all elements that might be returned by the system, a specific occurrence of a term may appear in several different documents, perhaps in elements representing a paragraph, a subsection, a section and an article. It is not appropriate to compute inverse document frequency under the assumption that the term is contained in all of these elements, since the number of elements that contain a term depends entirely on the structural arrangement of the documents [13,23]. 2.2 Retrievable Elements While an XML IR system might potentially retrieve any element, many elements may not be appropriate as retrieval results. This is usually the case when elements contain very little text [10]. For example, a section title containing only the query terms may receive a high score from a ranking algorithm, but alone it would be of limited value to a user, who might prefer the actual section itself. Other elements may reflect the document``s physical, rather than logical, structure, which may have little or no meaning to a user. An effective XML IR system must return only those elements that have sufficient content to be usable and are able to stand alone as independent objects [15,18]. Standard document components such as paragraphs, sections, subsections, and abstracts usually meet these requirements; titles, italicized phrases, and individual metadata fields often do not. 2.3 Evaluation Methodology Over the past three years, the INitiative for the Evaluation of XML Retrieval (INEX) has encouraged research into XML information retrieval technology [7,8]. INEX is an experimental conference series, similar to TREC, with groups from different institutions completing one or more experimental tasks using their own tools and systems, and comparing their results at the conference itself. Over 50 groups participated in INEX 2004, and the conference has become as influential in the area of XML IR as TREC is in other IR areas. The research described in this paper, as well as much of the related work it cites, depends on the test collections developed by INEX. Overlap causes considerable problems with retrieval evaluation, and the INEX organizers and participants have wrestled with these problems since the beginning. While substantial progress has been made, these problem are still not completely solved. Kazai et al. [11] provide a detailed exposition of the overlap problem in the context of INEX retrieval evaluation and discuss both current and proposed evaluation metrics. 
Many of these metrics are applied to evaluate the experiments reported in this paper, and they are briefly outlined in the next section.

3. INEX 2004

Space limitations prevent the inclusion of more than a brief summary of INEX 2004 tasks and evaluation methodology. For detailed information, the proceedings of the conference itself should be consulted [8].

3.1 Tasks

For the main experimental tasks, INEX 2004 participants were provided with a collection of 12,107 articles taken from the IEEE Computer Society's magazines and journals between 1995 and 2002. Each document is encoded in XML using a common DTD, with the document of figures 1 and 2 providing one example. At INEX 2004, the two main experimental tasks were both adhoc retrieval tasks, investigating the performance of systems searching a static collection using previously unseen topics. The two tasks differed in the types of topics they used. For one task, the content-only or CO task, the topics consist of short natural language statements with no direct reference to the structure of the documents in the collection. For this task, the IR system is required to select the elements to be returned. For the other task, the content-and-structure or CAS task, the topics are written in an XML query language [22] and contain explicit references to document structure, which the IR system must attempt to satisfy. Since the work described in this paper is directed at the content-only task, where the IR system receives no guidance regarding the elements to return, the CAS task is ignored in the remainder of our description.

In 2004, 40 new CO topics were selected by the conference organizers from contributions provided by the conference participants. Each topic includes a short keyword query, which is executed over the collection by each participating group on their own XML IR system. Each group could submit up to three experimental runs consisting of the top m = 1500 elements for each topic.

3.2 Relevance Assessment

Since XML IR is concerned with locating those elements that provide complete coverage of a topic while containing as little extraneous information as possible, simple relevant vs. not relevant judgments are not sufficient. Instead, the INEX organizers adopted two dimensions for relevance assessment: The exhaustivity dimension reflects the degree to which an element covers the topic, and the specificity dimension reflects the degree to which an element is focused on the topic. A four-point scale is used in both dimensions. Thus, a (3,3) element is highly exhaustive and highly specific, a (1,3) element is marginally exhaustive and highly specific, and a (0,0) element is not relevant. Additional information on the assessment methodology may be found in Piwowarski and Lalmas [19], who provide a detailed rationale.

3.3 Evaluation Metrics

The principal evaluation metric used at INEX 2004 is a version of mean average precision (MAP), adjusted by various quantization functions to give different weights to different elements, depending on their exhaustivity and specificity values. One variant, the strict quantization function, gives a weight of 1 to (3,3) elements and a weight of 0 to all others. This variant is essentially the familiar MAP value, with (3,3) elements treated as relevant and all other elements treated as not relevant. Other quantization functions are designed to give partial credit to elements which are near misses, due to a lack of exhaustivity and/or specificity.
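For illustration, the strict quantization just described amounts to a simple indicator function over the two assessment values. The sketch below is our own paraphrase in Python, not the official inex_eval implementation:

    def strict_quantization(exhaustivity: int, specificity: int) -> float:
        # Strict quantization: only highly exhaustive, highly specific
        # (3,3) elements count as relevant; every other assessment
        # receives zero weight.
        return 1.0 if (exhaustivity, specificity) == (3, 3) else 0.0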
Both the generalized quantization function and the specificity-oriented generalization (sog) function credit elements according to their degree of relevance [11], with the second function placing greater emphasis on specificity. This paper reports results of this metric using all three of these quantization functions. Since this metric was first introduced at INEX 2002, it is generally referred to as the inex-2002 metric.

The inex-2002 metric does not penalize overlap. In particular, both the generalized and sog quantization functions give partial credit to a near miss even when a (3,3) element overlapping it is reported at a higher rank. To address this problem, Kazai et al. [11] propose an XML cumulated gain metric, which compares the cumulated gain [9] of a ranked list to an ideal gain vector. This ideal gain vector is constructed from the relevance judgments by eliminating overlap and retaining only the best element along a given path. Thus, the XCG metric rewards retrieval runs that avoid overlap. While XCG was not used officially at INEX 2004, a version of it is likely to be used in the future.

At INEX 2003, yet another metric was introduced to ameliorate the perceived limitations of the inex-2002 metric. This inex-2003 metric extends the definitions of precision and recall to consider both the size of reported components and the overlap between them. Two versions were created, one that considered only component size and another that considered both size and overlap. While the inex-2003 metric exhibits undesirable anomalies [11], and was not used in 2004, values are reported in the evaluation section to provide an additional instrument for investigating overlap.

4. BASELINE RETRIEVAL METHOD

This section provides an overview of the baseline XML information retrieval method currently used in the MultiText IR system, developed by the Information Retrieval Group at the University of Waterloo [3]. This retrieval method results from the adaptation and tuning of the Okapi BM25 measure [21] to the XML information retrieval task. The MultiText system performed respectably at INEX 2004, placing in the top ten under all of the quantization functions, and placing first when the quantization function emphasized exhaustivity.

To support retrieval from XML and other structured document types, the system provides generalized queries of the form:

rank X by Y

where X is a sub-query specifying a set of document elements to be ranked and Y is a vector of sub-queries specifying individual retrieval terms. For our INEX 2004 runs, the sub-query X specified a list of retrievable elements as those with tag names as follows:

abs app article bb bdy bm fig fm ip1 li p sec ss1 ss2 vt

This list includes bibliographic entries (bb) and figure captions (fig) as well as paragraphs, sections and subsections. Prior to INEX 2004, the INEX collection and the INEX 2003 relevance judgments were manually analyzed to select these tag names. Tag names were selected on the basis of their frequency in the collection, the average size of their associated elements, and the relative number of positive relevance judgments they received. Automating this selection process is planned as future work.

For INEX 2004, the term vector Y was derived from the topic by splitting phrases into individual words, eliminating stopwords and negative terms (those starting with -), and applying a stemmer.
For example, the keyword field of topic 166

+"tree edit distance" +XML -image

became the four-term query

"$tree" "$edit" "$distance" "$xml"

where the $ operator within a quoted string stems the term that follows it.

Our implementation of Okapi BM25 is derived from the formula of Robertson et al. [21] by setting parameters k2 = 0 and k3 = ∞. Given a term set Q, an element x is assigned the score

Σ_{t∈Q} w(1) qt (k1 + 1) xt / (K + xt)    (1)

where
w(1) = log((D − Dt + 0.5) / (Dt + 0.5))
D = number of documents in the corpus
Dt = number of documents containing t
qt = frequency that t occurs in the topic
xt = frequency that t occurs in x
K = k1((1 − b) + b · lx/lavg)
lx = length of x
lavg = average document length

Figure 3: Impact of k1 on inex-2002 mean average precision with b = 0.75 (INEX 2003 CO topics).

Prior to INEX 2004, the INEX 2003 topics and judgments were used to tune the b and k1 parameters, and the impact of this tuning is discussed later in this section. For the purposes of computing document-level statistics (D, Dt and lavg) a document is defined to be an article. These statistics are used for ranking all element types. Following the suggestion of Kamps et al. [10], the retrieval results are filtered to eliminate very short elements, those less than 25 words in length.

The use of article statistics for all element types might be questioned. This approach may be justified by viewing the collection as a set of articles to be searched using standard document-oriented techniques, where only articles may be returned. The score computed for an element is essentially the score it would receive if it were added to the collection as a new document, ignoring the minor adjustments needed to the document-level statistics. Nonetheless, we plan to examine this issue again in the future.

In our experience, the performance of BM25 typically benefits from tuning the b and k1 parameters to the collection, whenever training queries are available for this purpose. Prior to INEX 2004, we trained the MultiText system using the INEX 2003 queries. As a starting point we used the values b = 0.75 and k1 = 1.2, which perform well on TREC adhoc collections and are used as default values in our system. The results were surprising. Figure 3 shows the result of varying k1 with b = 0.75 on the MAP values under three quantization functions. In our experience, optimal values for k1 are typically in the range 0.0 to 2.0. In this case, large values are required for good performance. Between k1 = 1.0 and k1 = 6.0 MAP increases by over 15% under the strict quantization. Similar improvements are seen under the generalized and sog quantizations. In contrast, our default value of b = 0.75 works well under all quantization functions (figure 4). After tuning over a wide range of values under several quantization functions, we selected values of k1 = 10.0 and b = 0.80 for our INEX 2004 experiments, and these values are used for the experiments reported in section 7.

Figure 4: Impact of b on inex-2002 mean average precision with k1 = 10 (INEX 2003 CO topics).
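To make equation 1 concrete, the following sketch scores a single element against a query. It is a minimal Python re-implementation written for illustration, not the MultiText code; the function and argument names are our own, and the default parameter values are the tuned settings reported above (k1 = 10.0, b = 0.80), with elements shorter than 25 words discarded.

    import math

    def bm25_element_score(element_tf, element_len, query_tf, doc_freq,
                           num_docs, avg_doc_len, k1=10.0, b=0.80):
        # element_tf:  query term -> frequency in the element (xt)
        # element_len: length of the element in words (lx)
        # query_tf:    query term -> frequency in the topic (qt)
        # doc_freq:    query term -> number of articles containing the term (Dt)
        # num_docs:    number of articles in the corpus (D)
        # avg_doc_len: average article length (lavg)
        if element_len < 25:                 # filter very short elements
            return 0.0
        K = k1 * ((1 - b) + b * element_len / avg_doc_len)
        score = 0.0
        for t, qt in query_tf.items():
            xt = element_tf.get(t, 0)
            Dt = doc_freq.get(t, 0)
            w1 = math.log((num_docs - Dt + 0.5) / (Dt + 0.5))
            score += w1 * qt * (k1 + 1) * xt / (K + xt)
        return score

Note that the document-level statistics passed in (num_docs, doc_freq, avg_doc_len) are article statistics, as described above, even when the element being scored is a paragraph or a section.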
5. CONTROLLING OVERLAP

Starting with an element ranking generated by the baseline method described in the previous section, elements are re-ranked to control overlap by iteratively adjusting the scores of those elements containing or contained in higher ranking elements. At a conceptual level, re-ranking proceeds as follows:

1. Report the highest ranking element.
2. Adjust the scores of the unreported elements.
3. Repeat steps 1 and 2 until m elements are reported.

One approach to adjusting the scores of unreported elements in step 2 might be based on the Okapi BM25 scores of the involved elements. For example, assume a paragraph with score p is reported in step 1. In step 2, the section containing the paragraph might then have its score s lowered by an amount α · p to reflect the reduced contribution the paragraph should make to the section's score.

In a related context, Robertson et al. [20] argue strongly against the linear combination of Okapi scores in this fashion. That work considers the problem of assigning different weights to different document fields, such as the title and body associated with Web pages. A common approach to this problem scores the title and body separately and generates a final score as a linear combination of the two. Robertson et al. discuss the theoretical flaws in this approach and demonstrate experimentally that it can actually harm retrieval effectiveness. Instead, they apply the weights at the term frequency level, with an occurrence of a query term t in the title making a greater contribution to the score than an occurrence in the body. In equation 1, xt becomes α0 · yt + α1 · zt, where yt is the number of times t occurs in the title and zt is the number of times t occurs in the body.

Translating this approach to our context, the contribution of terms appearing in elements is dynamically reduced as they are reported. The next section presents and analyzes a simple re-ranking algorithm that follows this strategy. The algorithm is evaluated experimentally in section 7. One limitation of the algorithm is that the contribution of terms appearing in reported elements is reduced by the same factor regardless of the number of reported elements in which they appear. In section 8 the algorithm is extended to apply increasing weights, lowering the score, when a term appears in more than one reported element.

6. RE-RANKING ALGORITHM

The re-ranking algorithm operates over XML trees, such as the one appearing in figure 2. Input to the algorithm is a list of n elements ranked according to their initial BM25 scores. During the initial ranking the XML tree is dynamically re-constructed to include only those nodes with nonzero BM25 scores, so n may be considerably less than |N|. Output from the algorithm is a list of the top m elements, ranked according to their adjusted scores.

An element is represented by the node x ∈ N at its root. Associated with this node are fields storing the length of the element, term frequencies, and other information required by the re-ranking algorithm, as follows:

x.f - term frequency vector
x.g - term frequency adjustments
x.l - element length
x.score - current Okapi BM25 score
x.reported - boolean flag, initially false
x.children - set of child nodes
x.parent - parent node, if one exists

These fields are populated during the initial ranking process, and updated as the algorithm progresses. The vector x.f contains term frequency information corresponding to each term in the query.
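For concreteness, one possible Python representation of this node record is sketched below. The field names mirror the list above, but the class itself is our illustration and not part of the MultiText system; eq=False keeps identity-based hashing so that nodes can be stored in sets and priority queues.

    from dataclasses import dataclass, field
    from typing import Dict, List, Optional

    @dataclass(eq=False)          # identity-based equality and hashing
    class Node:
        # One XML element, as seen by the re-ranking algorithm.
        f: Dict[str, int]                                      # term frequency vector
        g: Dict[str, int] = field(default_factory=dict)        # term frequency adjustments
        l: int = 0                                             # element length in words
        score: float = 0.0                                     # current Okapi BM25 score
        reported: bool = False                                 # set once the element is reported
        children: List["Node"] = field(default_factory=list)   # child nodes
        parent: Optional["Node"] = field(default=None, repr=False)  # parent node, if any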
The vector x.g is initially zero and is updated by the algorithm as elements are reported. The score field contains the current BM25 score for the element, which will change as the values in x.g change. The score is computed using equation 1, with the xt value for each term determined by a combination of the values in x.f and x.g. Given a term t ∈ Q, let ft be the component of x.f corresponding to t, and let gt be the component of x.g corresponding to t, then:

xt = ft − α · gt    (2)

For processing by the re-ranking algorithm, nodes are stored in priority queues, ordered by decreasing score. Each priority queue PQ supports three operations:

PQ.front() - returns the node with greatest score
PQ.add(x) - adds node x to the queue
PQ.remove(x) - removes node x from the queue

When implemented using standard data structures, the front operation requires O(1) time, and the other operations require O(log n) time, where n is the size of the queue.

The core of the re-ranking algorithm is presented in figure 5. The algorithm takes as input the priority queue S containing the initial ranking, and produces the top-m re-ranked nodes in the priority queue F. After initializing F to be empty on line 1, the algorithm loops m times over lines 2-15, transferring at least one node from S to F during each iteration. At the start of each iteration, the unreported node at the front of S has the greatest adjusted score, and it is removed and added to F. The algorithm then traverses the node's ancestors (lines 12-14) and descendants (lines 8-10), adjusting the scores of these nodes.

1  F ← ∅
2  for i ← 1 to m do
3      x ← S.front()
4      S.remove(x)
5      x.reported ← true
6      F.add(x)
7
8      foreach y ∈ x.children do
9          Down(y)
10     end do
11
12     if x is not a root node then
13         Up(x, x.parent)
14     end if
15 end do

Figure 5: Re-Ranking Algorithm - As input, the algorithm takes a priority queue S, containing XML nodes ranked by their initial scores, and returns its results in priority queue F, ranked by adjusted scores.

The tree traversal routines, Up and Down, are given in figure 6. The Up routine removes each ancestor node from S, adjusts its term frequency values, recomputes its score, and adds it back into S. The adjustment of the term frequency values (line 3) adds to y.g only the previously unreported term occurrences in x. Re-computation of the score on line 4 uses equations 1 and 2. The Down routine performs a similar operation on each descendant. However, since the contents of each descendant are entirely contained in a reported element, its final score may be computed, and it is removed from S and added to F.

1  Up(x, y) ≡
2      S.remove(y)
3      y.g ← y.g + x.f − x.g
4      recompute y.score
5      S.add(y)
6      if y is not a root node then
7          Up(x, y.parent)
8      end if
9
10 Down(x) ≡
11     if not x.reported then
12         S.remove(x)
14         x.g ← x.f
15         recompute x.score
16         if x.score > 0 then
17             F.add(x)
18         end if
19         x.reported ← true
20         foreach y ∈ x.children do
21             Down(y)
22         end do
23     end if

Figure 6: Tree traversal routines called by the re-ranking algorithm.
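As a companion to figures 5 and 6, the following Python sketch traces the same logic using the Node record sketched earlier. It is an illustration under simplifying assumptions rather than the MultiText implementation: the candidate pool S is a plain set scanned linearly instead of the priority queues described above, score_fn is a hypothetical callable that evaluates equation 1 from the adjusted term frequencies of equation 2 (for instance, a wrapper around the bm25_element_score sketch with the collection statistics bound in), and ancestors that have already been reported are skipped, a check the extended routines of section 8 make explicit.

    def rerank(nodes, query_tf, alpha, m, score_fn):
        # nodes:    elements with non-zero initial scores (Node objects)
        # query_tf: query term -> frequency in the topic
        # alpha:    weight applied to already-reported term occurrences
        # m:        number of iterations of the reporting loop
        # score_fn: score_fn(xt, element_len) -> BM25 score (equation 1)
        S = set(nodes)
        F = []

        def rescore(x):
            xt = {t: x.f.get(t, 0) - alpha * x.g.get(t, 0) for t in query_tf}
            x.score = score_fn(xt, x.l)

        def up(x, y):                     # adjust an ancestor y of the reported node x
            if y.reported:                # already adjusted via this path (cf. section 8)
                return
            y.g = {t: y.g.get(t, 0) + x.f.get(t, 0) - x.g.get(t, 0) for t in query_tf}
            rescore(y)
            if y.parent is not None:
                up(x, y.parent)

        def down(x):                      # finalize a descendant x of the reported node
            if not x.reported:
                S.discard(x)
                x.g = dict(x.f)           # every occurrence in x has now been seen
                rescore(x)
                if x.score > 0:
                    F.append(x)
                x.reported = True
                for y in x.children:
                    down(y)

        for _ in range(m):
            if not S:
                break
            x = max(S, key=lambda n: n.score)   # front of S
            S.remove(x)
            x.reported = True
            F.append(x)
            for y in x.children:
                down(y)
            if x.parent is not None:
                up(x, x.parent)

        F.sort(key=lambda n: n.score, reverse=True)   # F ordered by adjusted score
        return F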
In order to determine the time complexity of the algorithm, first note that a node may be an argument to Down at most once. Thereafter, the reported flag of its parent is true. During each call to Down a node may be moved from S to F, requiring O(log n) time. Thus, the total time for all calls to Down is O(n log n), and we may temporarily ignore lines 8-10 of figure 5 when considering the time complexity of the loop over lines 2-15. During each iteration of this loop, a node and each of its ancestors are removed from a priority queue and then added back into a priority queue. Since a node may have at most h ancestors, where h is the maximum height of any tree in the collection, each of the m iterations requires O(h log n) time. Combining these observations produces an overall time complexity of O((n + mh) log n). In practice, re-ranking an INEX result set requires less than 200ms on a three-year-old desktop PC.

7. EVALUATION

None of the metrics described in section 3.3 is a close fit with the view of overlap advocated by this paper. Nonetheless, when taken together they provide insight into the behaviour of the re-ranking algorithm. The INEX evaluation packages (inex_eval and inex_eval_ng) were used to compute values for the inex-2002 and inex-2003 metrics. Values for the XCG metrics were computed using software supplied by its inventors [11].

Figure 7 plots the three variants of the inex-2002 MAP metric together with the XCG metric. Values for these metrics are plotted for values of α between 0.0 and 1.0. Recalling that the XCG metric is designed to penalize overlap, while the inex-2002 metric ignores overlap, the conflict between the metrics is obvious. The MAP values at one extreme (α = 0.0) and the XCG value at the other extreme (α = 1.0) represent retrieval performance comparable to the best systems at INEX 2004 [8,12].

Figure 7: Impact of α on XCG and inex-2002 MAP (INEX 2004 CO topics; assessment set I).

Figure 8 plots values of the inex-2003 MAP metric for two quantizations, with and without consideration of overlap. Once again, conflict is apparent, with the influence of α substantially lessened when overlap is considered.

Figure 8: Impact of α on inex-2003 MAP (INEX 2004 CO topics; assessment set I).

8. EXTENDED ALGORITHM

One limitation of the re-ranking algorithm is that a single weight α is used to adjust the scores of both the ancestors and descendants of reported elements. An obvious extension is to use different weights in these two cases. Furthermore, the same weight is used regardless of the number of times an element is contained in a reported element. For example, a paragraph may form part of a reported section and then form part of a reported article. Since the user may now have seen this paragraph twice, its score should be further lowered by increasing the value of the weight. Motivated by these observations, the re-ranking algorithm may be extended with a series of weights

1 = β0 ≥ β1 ≥ β2 ≥ ... ≥ βM ≥ 0

where βj is the weight applied to a node that has been a descendant of a reported node j times. Note that an upper bound on M is h, the maximum height of any XML tree in the collection. However, in practice M is likely to be relatively small (perhaps 3 or 4). Figure 9 presents replacements for the Up and Down routines of figure 6, incorporating this series of weights.
One extra field is required in each node, as follows:

x.j - down count

The value of x.j is initially set to zero in all nodes and is incremented each time Down is called with x as its argument. When computing the score of a node, the value of x.j selects the weight to be applied to the node by adjusting the value of xt in equation 1, as follows:

xt = βx.j · (ft − α · gt)    (3)

where ft and gt are the components of x.f and x.g corresponding to term t.

A few additional changes are required to extend Up and Down. The Up routine returns immediately (line 2) if its argument has already been reported, since term frequencies have already been adjusted in its ancestors. The Down routine does not report its argument, but instead recomputes its score and adds it back into S. A node cannot be an argument to Down more than M + 1 times, which in turn implies an overall time complexity of O((nM + mh) log n). Since M ≤ h and m ≤ n, the time complexity is also O(nh log n).

1  Up(x, y) ≡
2      if not y.reported then
3          S.remove(y)
4          y.g ← y.g + x.f − x.g
5          recompute y.score
6          S.add(y)
8          if y is not a root node then
9              Up(x, y.parent)
10         end if
11     end if
12
13 Down(x) ≡
14     if x.j < M then
15         x.j ← x.j + 1
16         if not x.reported then
17             S.remove(x)
18             recompute x.score
19             S.add(x)
20         end if
21         foreach y ∈ x.children do
22             Down(y)
23         end do
24     end if

Figure 9: Extended tree traversal routines.

9. CONCLUDING DISCUSSION

When generating retrieval results over an XML collection, some overlap in the results should be tolerated, and may be beneficial. For example, when a highly exhaustive and fairly specific (3,2) element contains a much smaller (2,3) element, both should be reported to the user, and retrieval algorithms and evaluation metrics should respect this relationship. The algorithm presented in this paper controls overlap by weighting the terms occurring in reported elements to reflect their reduced importance.

Other approaches may also help to control overlap. For example, when XML retrieval results are presented to users it may be desirable to cluster structurally related elements together, visually illustrating the relationships between them. While this style of user interface may help a user cope with overlap, the strategy presented in this paper continues to be applicable, by determining the best elements to include in each cluster.

At Waterloo, we continue to develop and test our ideas for INEX 2005. In particular, we are investigating methods for learning the α and βj weights. We are also re-evaluating our approach to document statistics and examining appropriate adjustments to the k1 parameter as term weights change [20].

10. ACKNOWLEDGMENTS

Thanks to Gabriella Kazai and Arjen de Vries for providing an early version of their software for computing the XCG metric, and thanks to Phil Tilker and Stefan Büttcher for their help with the experimental evaluation. In part, funding for this project was provided by IBM Canada through the National Institute for Software Research.

11. REFERENCES

[1] N. Bruno, N. Koudas, and D. Srivastava. Holistic twig joins: Optimal XML pattern matching. In Proceedings of the 2002 ACM SIGMOD International Conference on the Management of Data, pages 310-321, Madison, Wisconsin, June 2002.
[2] D. Carmel, Y. S. Maarek, M. Mandelbrod, Y. Mass, and A. Soffer. Searching XML documents via XML fragments. In Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 151-158, Toronto, Canada, 2003.
[3] C. L. A. Clarke and P. L. Tilker. MultiText experiments for INEX 2004. In INEX 2004 Workshop Proceedings, 2004. Published in LNCS 3493 [8].
[4] A. P. de Vries, G. Kazai, and M. Lalmas. Tolerance to irrelevance: A user-effort oriented evaluation of retrieval systems without predefined retrieval unit. In RIAO 2004 Conference Proceedings, pages 463-473, Avignon, France, April 2004.
[5] D. DeHaan, D. Toman, M. P. Consens, and M. T. Özsu. A comprehensive XQuery to SQL translation using dynamic interval encoding. In Proceedings of the 2003 ACM SIGMOD International Conference on the Management of Data, San Diego, June 2003.
[6] N. Fuhr and K. Großjohann. XIRQL: A query language for information retrieval in XML documents. In Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 172-180, New Orleans, September 2001.
[7] N. Fuhr, M. Lalmas, and S. Malik, editors. Initiative for the Evaluation of XML Retrieval. Proceedings of the Second Workshop (INEX 2003), Dagstuhl, Germany, December 2003.
[8] N. Fuhr, M. Lalmas, S. Malik, and Zoltán Szlávik, editors. Initiative for the Evaluation of XML Retrieval. Proceedings of the Third Workshop (INEX 2004), Dagstuhl, Germany, December 2004. Published as Advances in XML Information Retrieval, Lecture Notes in Computer Science, volume 3493, Springer, 2005.
[9] K. Järvelin and J. Kekäläinen. Cumulated gain-based evaluation of IR techniques. ACM Transactions on Information Systems, 20(4):422-446, 2002.
[10] J. Kamps, M. de Rijke, and B. Sigurbjörnsson. Length normalization in XML retrieval. In Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 80-87, Sheffield, UK, July 2004.
[11] G. Kazai, M. Lalmas, and A. P. de Vries. The overlap problem in content-oriented XML retrieval evaluation. In Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 72-79, Sheffield, UK, July 2004.
[12] G. Kazai, M. Lalmas, and A. P. de Vries. Reliability tests for the XCG and inex-2002 metrics. In INEX 2004 Workshop Proceedings, 2004. Published in LNCS 3493 [8].
[13] J. Kekäläinen, M. Junkkari, P. Arvola, and T. Aalto. TRIX 2004 - Struggling with the overlap. In INEX 2004 Workshop Proceedings, 2004. Published in LNCS 3493 [8].
[14] S. Liu, Q. Zou, and W. W. Chu. Configurable indexing and ranking for XML information retrieval. In Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 88-95, Sheffield, UK, July 2004.
[15] Y. Mass and M. Mandelbrod. Retrieving the most relevant XML components. In INEX 2003 Workshop Proceedings, Dagstuhl, Germany, December 2003.
[16] Y. Mass and M. Mandelbrod. Component ranking and automatic query refinement for XML retrieval. In INEX 2004 Workshop Proceedings, 2004. Published in LNCS 3493 [8].
[17] P. Ogilvie and J. Callan. Hierarchical language models for XML component retrieval. In INEX 2004 Workshop Proceedings, 2004. Published in LNCS 3493 [8].
[18] J. Pehcevski, J. A. Thom, and A. Vercoustre. Hybrid XML retrieval re-visited. In INEX 2004 Workshop Proceedings, 2004. Published in LNCS 3493 [8].
[19] B. Piwowarski and M. Lalmas. Providing consistent and exhaustive relevance assessments for XML retrieval evaluation. In Proceedings of the 13th ACM Conference on Information and Knowledge Management, pages 361-370, Washington, DC, November 2004.
[20] S. Robertson, H. Zaragoza, and M. Taylor. Simple BM25 extension to multiple weighted fields. In Proceedings of the 13th ACM Conference on Information and Knowledge Management, pages 42-50, Washington, DC, November 2004.
[21] S. E. Robertson, S. Walker, and M. Beaulieu. Okapi at TREC-7: Automatic ad-hoc, filtering, VLC and interactive track. In Proceedings of the Seventh Text REtrieval Conference, Gaithersburg, MD, November 1998.
[22] A. Trotman and B. Sigurbjörnsson. NEXI, now and next. In INEX 2004 Workshop Proceedings, 2004. Published in LNCS 3493 [8].
[23] J. Vittaut, B. Piwowarski, and P. Gallinari. An algebra for structured queries in Bayesian networks. In INEX 2004 Workshop Proceedings, 2004. Published in LNCS 3493 [8].
Controlling Overlap in Content-Oriented XML Retrieval ABSTRACT The direct application of standard ranking techniques to retrieve individual elements from a collection of XML documents often produces a result set in which the top ranks are dominated by a large number of elements taken from a small number of highly relevant documents. This paper presents and evaluates an algorithm that re-ranks this result set, with the aim of minimizing redundant content while preserving the benefits of element retrieval, including the benefit of identifying topic-focused components contained within relevant documents. The test collection developed by the INitiative for the Evaluation of XML Retrieval (INEX) forms the basis for the evaluation. 1. INTRODUCTION The representation of documents in XML provides an opportunity for information retrieval systems to take advantage of document structure, returning individual document components when appropriate, rather than complete documents in all circumstances. In response to a user query, an XML information retrieval system might return a mixture of paragraphs, sections, articles, bibliographic entries and other components. This facility is of particular benefit when a collection contains very long documents, such as product manuals or books, where the user should be directed to the most relevant portions of these documents. Figure 1: A journal article encoded in XML. Figure 1 provides an example of a journal article encoded in XML, illustrating many of the important characteristics of XML documents. Tags indicate the beginning and end of each element, with elements varying widely in size, from one word to thousands of words. Some elements, such as paragraphs and sections, may be reasonably presented to the user as retrieval results, but others are not appropriate. Elements overlap each other: articles contain sections, sections contain subsections, and subsections contain paragraphs. Each of these characteristics affects the design of an XML IR system, and each leads to fundamental problems that must be solved in a successful system. Most of these fundamental problems can be solved through the careful adaptation of standard IR techniques, but the problems caused by overlap are unique to this area [4,11] and form the primary focus of this paper. The article of figure 1 may be viewed as an XML tree, as illustrated in figure 2. Figure 2: Example XML tree. Formally, a collection of XML documents may be represented as a forest of ordered, rooted trees, consisting of a set of nodes N and a set of directed edges E connecting these nodes. For each node x ∈ N, the notation x.parent refers to the parent node of x, if one exists, and the notation x.children refers to the set of child nodes of x. Since an element may be represented by the node at its root, the output of an XML IR system may be viewed as a ranked list of the top-m nodes. The direct application of a standard relevance ranking technique to a set of XML elements can produce a result in which the top ranks are dominated by many structurally related elements. A high scoring section is likely to contain several high scoring paragraphs and to be contained in a high scoring article. For example, many of the elements in figure 2 would receive a high score on the keyword query "text index compression algorithms". If each of these elements is presented to a user as an individual and separate result, she may waste considerable time reviewing and rejecting redundant content.
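The forest-of-trees view described above translates directly into a small node structure. The sketch below is illustrative only; any field beyond parent and children (such as the tag label) is an assumption rather than notation from the paper.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass(eq=False)
class XMLNode:
    tag: str                                   # e.g. "article", "sec", "p"
    parent: Optional["XMLNode"] = None
    children: List["XMLNode"] = field(default_factory=list)

    def add_child(self, child: "XMLNode") -> "XMLNode":
        child.parent = self
        self.children.append(child)
        return child

    def ancestors(self):
        node = self.parent
        while node is not None:
            yield node
            node = node.parent

# An element is identified with the node at its root, so a retrieval run can
# be represented as a ranked list of the top-m XMLNode objects.
```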
One possible solution is to report only the highest scoring element along a given path in the tree, and to remove from the lower ranks any element containing it, or contained within it. Unfortunately, this approach destroys some of the possible benefits of XML IR. For example, an outer element may contain a substantial amount of information that does not appear in an inner element, but the inner element may be heavily focused on the query topic and provide a short overview of the key concepts. In such cases, it is reasonable to report elements which contain, or are contained in, higher ranking elements. Even when an entire book is relevant, a user may still wish to have the most important paragraphs highlighted, to guide her reading and to save time [6]. This paper presents a method for controlling overlap. Starting with an initial element ranking, a re-ranking algorithm adjusts the scores of lower ranking elements that contain, or are contained within, higher ranking elements, reflecting the fact that this information may now be redundant. For example, once an element representing a section appears in the ranking, the scores for the paragraphs it contains and the article that contains it are reduced. The inspiration for this strategy comes partially from recent work on structured documents retrieval, where terms appearing in different fields, such as the title and body, are given different weights [20]. Extending that approach, the re-ranking algorithm varies weights dynamically as elements are processed. The remainder of the paper is organized as follows: After a discussion of background work and evaluation methodology, a baseline retrieval method is presented in section 4. This baseline method represents a reasonable adaptation of standard IR technology to XML. Section 5 then outlines a strategy for controlling overlap, using the baseline method as a starting point. A re-ranking algorithm implementing this strategy is presented in section 6 and evaluated in section 7. Section 8 discusses an extended version of the algorithm. 2. BACKGROUND This section provides a general overview of XML information retrieval and discusses related work, with an emphasis on the fundamental problems mentioned in the introduction. Much research in the area of XML retrieval views it from a traditional database perspective, being concerned with such problems as the implementation of structured query languages [5] and the processing of joins [1]. Here, we take a "content oriented" IR perceptive, focusing on XML documents that primarily contain natural language data and queries that are primarily expressed in natural language. We assume that these queries indicate only the nature of desired content, not its structure, and that the role of the IR system is to determine which elements best satisfy the underlying information need. Other IR research has considered mixed queries, in which both content and structural requirements are specified [2, 6,14,17, 23]. 2.1 Term and Document Statistics In traditional information retrieval applications the standard unit of retrieval is taken to be the "document". Depending on the application, this term might be interpreted to encompass many different objects, including web pages, newspaper articles and email messages. When applying standard relevance ranking techniques in the context of XML IR, a natural approach is to treat each element as a separate "document", with term statistics available for each [16]. In addition, most ranking techniques require global statistics (e.g. 
inverse document frequency) computed over the collection as a whole. If we consider this collection to include all elements that might be returned by the system, a specific occurrence of a term may appear in several different "documents", perhaps in elements representing a paragraph, a subsection, a section and an article. It is not appropriate to compute inverse document frequency under the assumption that the term is contained in all of these elements, since the number of elements that contain a term depends entirely on the structural arrangement of the documents [13, 23]. 2.2 Retrievable Elements While an XML IR system might potentially retrieve any element, many elements may not be appropriate as retrieval results. This is usually the case when elements contain very little text [10]. For example, a section title containing only the query terms may receive a high score from a ranking algorithm, but alone it would be of limited value to a user, who might prefer the actual section itself. Other elements may reflect the document's physical, rather than logical, struc ture, which may have little or no meaning to a user. An effective XML IR system must return only those elements that have sufficient content to be usable and are able to stand alone as independent objects [15,18]. Standard document components such as paragraphs, sections, subsections, and abstracts usually meet these requirements; titles, italicized phrases, and individual metadata fields often do not. 2.3 Evaluation Methodology Over the past three years, the INitiative for the Evaluation of XML Retrieval (INEX) has encouraged research into XML information retrieval technology [7,8]. INEX is an experimental conference series, similar to TREC, with groups from different institutions completing one or more experimental tasks using their own tools and systems, and comparing their results at the conference itself. Over 50 groups participated in INEX 2004, and the conference has become as influential in the area of XML IR as TREC is in other IR areas. The research described in this paper, as well as much of the related work it cites, depends on the test collections developed by INEX. Overlap causes considerable problems with retrieval evaluation, and the INEX organizers and participants have wrestled with these problems since the beginning. While substantial progress has been made, these problem are still not completely solved. Kazai et al. [11] provide a detailed exposition of the overlap problem in the context of INEX retrieval evaluation and discuss both current and proposed evaluation metrics. Many of these metrics are applied to evaluate the experiments reported in this paper, and they are briefly outlined in the next section. 3. INEX 2004 3.1 Tasks 3.2 Relevance Assessment 3.3 Evaluation Metrics 4. BASELINE RETRIEVAL METHOD 6. RE-RANKING ALGORITHM 5. CONTROLLING OVERLAP 7. EVALUATION 8. EXTENDED ALGORITHM 9. CONCLUDING DISCUSSION When generating retrieval results over an XML collection, some overlap in the results should be tolerated, and may be beneficial. For example, when a highly exhaustive and fairly specific (3,2) element contains a much smaller (2,3) element, both should be reported to the user, and retrieval algorithms and evaluation metrics should respect this relationship. The algorithm presented in this paper controls overlap by weighting the terms occurring in reported elements to reflect their reduced importance. Other approaches may also help to control overlap. 
For example, when XML retrieval results are presented to users it may be desirable to cluster structurally related elements together, visually illustrating the relationships between them. While this style of user interface may help a user cope with overlap, the strategy presented in this paper continues to be applicable, by determining the best elements to include in each cluster. At Waterloo, we continue to develop and test our ideas for INEX 2005. In particular, we are investigating methods for learning the α and βj weights. We are also re-evaluating our approach to document statistics and examining appropriate adjustments to the k1 parameter as term weights change [20].
Controlling Overlap in Content-Oriented XML Retrieval ABSTRACT The direct application of standard ranking techniques to retrieve individual elements from a collection of XML documents often produces a result set in which the top ranks are dominated by a large number of elements taken from a small number of highly relevant documents. This paper presents and evaluates an algorithm that re-ranks this result set, with the aim of minimizing redundant content while preserving the benefits of element retrieval, including the benefit of identifying topic-focused components contained within relevant documents. The test collection developed by the INitiative for the Evaluation of XML Retrieval (INEX) forms the basis for the evaluation. 1. INTRODUCTION The representation of documents in XML provides an opportunity for information retrieval systems to take advantage of document structure, returning individual document components when appropriate, rather than complete documents in all circumstances. In response to a user query, an XML information retrieval system might return a mixture of paragraphs, sections, articles, bibliographic entries and other components. This facility is of particular benefit when a collection contains very long documents, such as product manuals or books, where the user should be directed to the most relevant portions of these documents. Figure 1: A journal article encoded in XML. Figure 1 provides an example of a journal article encoded in XML, illustrating many of the important characteristics of XML documents. Tags indicate the beginning and end of each element, with elements varying widely in size, from one word to thousands of words. Some elements, such as paragraphs and sections, may be reasonably presented to the user as retrieval results, but others are not appropriate. Elements overlap each other--articles contain sections, sections contain subsections, and subsections contain paragraphs. The article of figure 1 may be viewed as an XML tree, as illustrated in figure 2. Figure 2: Example XML tree. of x. Since an element may be represented by the node at its root, the output of an XML IR system may be viewed as a ranked list of the top-m nodes. The direct application of a standard relevance ranking technique to a set of XML elements can produce a result in which the top ranks are dominated by many structurally related elements. A high scoring section is likely to contain several high scoring paragraphs and to be contained in an high scoring article. For example, many of the elements in figure 2 would receive a high score on the keyword query "text index compression algorithms". If each of these elements are presented to a user as an individual and separate result, she may waste considerable time reviewing and rejecting redundant content. One possible solution is to report only the highest scoring element along a given path in the tree, and to remove from the lower ranks any element containing it, or contained within it. For example, an outer element may contain a substantial amount of information that does not appear in an inner element, but the inner element may be heavily focused on the query topic and provide a short overview of the key concepts. In such cases, it is reasonable to report elements which contain, or are contained in, higher ranking elements. This paper presents a method for controlling overlap. 
Starting with an initial element ranking, a re-ranking algorithm adjusts the scores of lower ranking elements that contain, or are contained within, higher ranking elements, reflecting the fact that this information may now be redundant. For example, once an element representing a section appears in the ranking, the scores for the paragraphs it contains and the article that contains it are reduced. The inspiration for this strategy comes partially from recent work on structured documents retrieval, where terms appearing in different fields, such as the title and body, are given different weights [20]. Extending that approach, the re-ranking algorithm varies weights dynamically as elements are processed. The remainder of the paper is organized as follows: After a discussion of background work and evaluation methodology, a baseline retrieval method is presented in section 4. This baseline method represents a reasonable adaptation of standard IR technology to XML. Section 5 then outlines a strategy for controlling overlap, using the baseline method as a starting point. A re-ranking algorithm implementing this strategy is presented in section 6 and evaluated in section 7. Section 8 discusses an extended version of the algorithm. 2. BACKGROUND This section provides a general overview of XML information retrieval and discusses related work, with an emphasis on the fundamental problems mentioned in the introduction. Here, we take a "content oriented" IR perceptive, focusing on XML documents that primarily contain natural language data and queries that are primarily expressed in natural language. We assume that these queries indicate only the nature of desired content, not its structure, and that the role of the IR system is to determine which elements best satisfy the underlying information need. 2.1 Term and Document Statistics In traditional information retrieval applications the standard unit of retrieval is taken to be the "document". Depending on the application, this term might be interpreted to encompass many different objects, including web pages, newspaper articles and email messages. When applying standard relevance ranking techniques in the context of XML IR, a natural approach is to treat each element as a separate "document", with term statistics available for each [16]. In addition, most ranking techniques require global statistics (e.g. inverse document frequency) computed over the collection as a whole. If we consider this collection to include all elements that might be returned by the system, a specific occurrence of a term may appear in several different "documents", perhaps in elements representing a paragraph, a subsection, a section and an article. It is not appropriate to compute inverse document frequency under the assumption that the term is contained in all of these elements, since the number of elements that contain a term depends entirely on the structural arrangement of the documents [13, 23]. 2.2 Retrievable Elements While an XML IR system might potentially retrieve any element, many elements may not be appropriate as retrieval results. This is usually the case when elements contain very little text [10]. For example, a section title containing only the query terms may receive a high score from a ranking algorithm, but alone it would be of limited value to a user, who might prefer the actual section itself. Other elements may reflect the document's physical, rather than logical, struc ture, which may have little or no meaning to a user. 
An effective XML IR system must return only those elements that have sufficient content to be usable and are able to stand alone as independent objects [15,18]. Standard document components such as paragraphs, sections, subsections, and abstracts usually meet these requirements; titles, italicized phrases, and individual metadata fields often do not. 2.3 Evaluation Methodology Over the past three years, the INitiative for the Evaluation of XML Retrieval (INEX) has encouraged research into XML information retrieval technology [7,8]. The research described in this paper, as well as much of the related work it cites, depends on the test collections developed by INEX. Overlap causes considerable problems with retrieval evaluation, and the INEX organizers and participants have wrestled with these problems since the beginning. Kazai et al. [11] provide a detailed exposition of the overlap problem in the context of INEX retrieval evaluation and discuss both current and proposed evaluation metrics. Many of these metrics are applied to evaluate the experiments reported in this paper, and they are briefly outlined in the next section. 9. CONCLUDING DISCUSSION When generating retrieval results over an XML collection, some overlap in the results should be tolerated, and may be beneficial. For example, when a highly exhaustive and fairly specific (3,2) element contains a much smaller (2,3) element, both should be reported to the user, and retrieval algorithms and evaluation metrics should respect this relationship. The algorithm presented in this paper controls overlap by weighting the terms occurring in reported elements to reflect their reduced importance. Other approaches may also help to control overlap. For example, when XML retrieval results are presented to users it may be desirable to cluster structurally related elements together, visually illustrating the relationships between them. While this style of user interface may help a user cope with overlap, the strategy presented in this paper continues to be applicable, by determining the best elements to include in each cluster. At Waterloo, we continue to develop and test our ideas for INEX 2005. In particular, we are investigating methods for learning the α and βj weights. We are also re-evaluating our approach to document statistics and examining appropriate adjustments to the k1 parameter as term weights change [20].
J-72
Applying Learning Algorithms to Preference Elicitation
We consider the parallels between the preference elicitation problem in combinatorial auctions and the problem of learning an unknown function from learning theory. We show that learning algorithms can be used as a basis for preference elicitation algorithms. The resulting elicitation algorithms perform a polynomial number of queries. We also give conditions under which the resulting algorithms have polynomial communication. Our conversion procedure allows us to generate combinatorial auction protocols from learning algorithms for polynomials, monotone DNF, and linear-threshold functions. In particular, we obtain an algorithm that elicits XOR bids with polynomial communication.
[ "learn", "learn algorithm", "learn", "prefer elicit", "parallel", "prefer elicit problem", "combinatori auction", "learn theori", "prefer elicit algorithm", "elicit algorithm", "polynomi", "result algorithm", "polynomi commun", "polynomi commun", "convers procedur", "combinatori auction protocol", "monoton dnf", "linear-threshold function", "xor bid", "queri polynomi number" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "R" ]
Applying Learning Algorithms to Preference Elicitation Sebastien M. Lahaie Division of Engineering and Applied Sciences Harvard University Cambridge, MA 02138 slahaie@eecs.harvard.edu David C. Parkes Division of Engineering and Applied Sciences Harvard University Cambridge, MA 02138 parkes@eecs.harvard.edu ABSTRACT We consider the parallels between the preference elicitation problem in combinatorial auctions and the problem of learning an unknown function from learning theory. We show that learning algorithms can be used as a basis for preference elicitation algorithms. The resulting elicitation algorithms perform a polynomial number of queries. We also give conditions under which the resulting algorithms have polynomial communication. Our conversion procedure allows us to generate combinatorial auction protocols from learning algorithms for polynomials, monotone DNF, and linear-threshold functions. In particular, we obtain an algorithm that elicits XOR bids with polynomial communication. Categories and Subject Descriptors F.2.0 [Analysis of Algorithms and Problem Complexity]: General; J.4 [Social and Behavioral Sciences]: Economics; I.2.6 [Artificial Intelligence]: Learning General Terms Algorithms, Economics, Theory 1. INTRODUCTION In a combinatorial auction, agents may bid on bundles of goods rather than individual goods alone. Since there are an exponential number of bundles (in the number of goods), communicating values over these bundles can be problematic. Communicating valuations in a one-shot fashion can be prohibitively expensive if the number of goods is only moderately large. Furthermore, it might even be hard for agents to determine their valuations for single bundles [14]. It is in the interest of such agents to have auction protocols which require them to bid on as few bundles as possible. Even if agents can efficiently compute their valuations, they might still be reluctant to reveal them entirely in the course of an auction, because such information may be valuable to their competitors. These considerations motivate the need for auction protocols that minimize the communication and information revelation required to determine an optimal allocation of goods. There has been recent work exploring the links between the preference elicitation problem in combinatorial auctions and the problem of learning an unknown function from computational learning theory [5, 19]. In learning theory, the goal is to learn a function via various types of queries, such as What is the function``s value on these inputs? In preference elicitation, the goal is to elicit enough partial information about preferences to be able to compute an optimal allocation. Though the goals of learning and preference elicitation differ somewhat, it is clear that these problems share similar structure, and it should come as no surprise that techniques from one field should be relevant to the other. We show that any exact learning algorithm with membership and equivalence queries can be converted into a preference elicitation algorithm with value and demand queries. The resulting elicitation algorithm guarantees elicitation in a polynomial number of value and demand queries. Here we mean polynomial in the number of goods, agents, and the sizes of the agents'' valuation functions in a given encoding scheme. Preference elicitation schemes have not traditionally considered this last parameter. We argue that complexity guarantees for elicitation schemes should allow dependence on this parameter. 
Introducing this parameter also allows us to guarantee polynomial worst-case communication, which usually cannot be achieved in the number of goods and agents alone. Finally, we use our conversion procedure to generate combinatorial auction protocols from learning algorithms for polynomials, monotone DNF, and linear-threshold functions. Of course, a one-shot combinatorial auction where agents provide their entire valuation functions at once would also have polynomial communication in the size of the agents'' valuations, and only require one query. The advantage of our scheme is that agents can be viewed as black-boxes that provide incremental information about their valuations. There is no burden on the agents to formulate their valuations in an encoding scheme of the auctioneer``s choosing. We expect this to be an important consideration in practice. Also, with our scheme entire revelation only happens in the worst-case. 180 For now, we leave the issue of incentives aside when deriving elicitation algorithms. Our focus is on the time and communication complexity of preference elicitation regardless of incentive constraints, and on the relationship between the complexities of learning and preference elicitation. Related work. Zinkevich et al. [19] consider the problem of learning restricted classes of valuation functions which can be represented using read-once formulas and Toolbox DNF. Read-once formulas can represent certain substitutabilities, but no complementarities, whereas the opposite holds for Toolbox DNF. Since their work is also grounded in learning theory, they allow dependence on the size of the target valuation as we do (though read-once valuations can always be succinctly represented anyway). Their work only makes use of value queries, which are quite limited in power. Because we allow ourselves demand queries, we are able to derive an elicitation scheme for general valuation functions. Blum et al. [5] provide results relating the complexities of query learning and preference elicitation. They consider models with membership and equivalence queries in query learning, and value and demand queries in preference elicitation. They show that certain classes of functions can be efficiently learned yet not efficiently elicited, and vice-versa. In contrast, our work shows that given a more general (yet still quite standard) version of demand query than the type they consider, the complexity of preference elicitation is no greater than the complexity of learning. We will show that demand queries can simulate equivalence queries until we have enough information about valuations to imply a solution to the elicitation problem. Nisan and Segal [12] study the communication complexity of preference elicitation. They show that for many rich classes of valuations, the worst-case communication complexity of computing an optimal allocation is exponential. Their results apply to the black-box model of computational complexity. In this model algorithms are allowed to ask questions about agent valuations and receive honest responses, without any insight into how the agents internally compute their valuations. This is in fact the basic framework of learning theory. Our work also addresses the issue of communication complexity, and we are able to derive algorithms that provide significant communication guarantees despite Nisan and Segal``s negative results. Their work motivates the need to rely on the sizes of agents'' valuation functions in stating worst-case results. 2. 
THE MODELS 2.1 Query Learning The query learning model we consider here is called exact learning from membership and equivalence queries, introduced by Angluin [2]. In this model the learning algorithm``s objective is to exactly identify an unknown target function f : X → Y via queries to an oracle. The target function is drawn from a function class C that is known to the algorithm. Typically the domain X is some subset of {0, 1}m , and the range Y is either {0, 1} or some subset of the real numbers Ê. As the algorithm progresses, it constructs a manifest hypothesis ˜f which is its current estimate of the target function. Upon termination, the manifest hypothesis of a correct learning algorithm satisfies ˜f(x) = f(x) for all x ∈ X. It is important to specify the representation that will be used to encode functions from C. For example, consider the following function from {0, 1}m to Ê: f(x) = 2 if x consists of m 1``s, and f(x) = 0 otherwise. This function may simply be represented as a list of 2m values. Or it may be encoded as the polynomial 2x1 · · · xm, which is much more succinct. The choice of encoding may thus have a significant impact on the time and space requirements of the learning algorithm. Let size(f) be the size of the encoding of f with respect to the given representation class. Most representation classes have a natural measure of encoding size. The size of a polynomial can be defined as the number of non-zero coefficients in the polynomial, for example. We will usually only refer to representation classes; the corresponding function classes will be implied. For example, the representation class of monotone DNF formulae implies the function class of monotone Boolean functions. Two types of queries are commonly used for exact learning: membership and equivalence queries. On a membership query, the learner presents some x ∈ X and the oracle replies with f(x). On an equivalence query, the learner presents its manifest hypothesis ˜f. The oracle either replies `YES'' if ˜f = f, or returns a counterexample x such that ˜f(x) = f(x). An equivalence query is proper if size( ˜f) ≤ size(f) at the time the manifest hypothesis is presented. We are interested in efficient learning algorithms. The following definitions are adapted from Kearns and Vazirani [9]: Definition 1. The representation class C is polynomialquery exactly learnable from membership and equivalence queries if there is a fixed polynomial p(·, ·) and an algorithm L with access to membership and equivalence queries of an oracle such that for any target function f ∈ C, L outputs after at most p(size(f), m) queries a function ˜f ∈ C such that ˜f(x) = f(x) for all instances x. Similarly, the representation class C is efficiently exactly learnable from membership and equivalence queries if the algorithm L outputs a correct hypothesis in time p(size(f), m), for some fixed polynomial p(·, ·). Here m is the dimension of the domain. Since the target function must be reconstructed, we also necessarily allow polynomial dependence on size(f). 2.2 Preference Elicitation In a combinatorial auction, a set of goods M is to be allocated among a set of agents N so as to maximize the sum of the agents'' valuations. Such an allocation is called efficient in the economics literature, but we will refer to it as optimal and reserve the term efficient to refer to computational efficiency. We let n = |N| and m = |M|. An allocation is a partition of the objects into bundles (S1, ... , Sn), such that Si ∩ Sj = ∅ for all distinct i, j ∈ N. 
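A minimal sketch of the exact-learning interaction from section 2.1 may help fix ideas before the elicitation setting is developed further. The oracle answers membership and equivalence queries for a hidden target; the deliberately naive lookup-table learner below is purely illustrative (real algorithms for classes such as sparse polynomials use far fewer queries), and every name in it is an assumption rather than notation from the paper.

```python
from typing import Callable, Dict, Optional, Tuple

class ExactLearningOracle:
    """Answers membership and equivalence queries for a hidden target f."""

    def __init__(self, target: Callable[[Tuple[int, ...]], float], domain):
        self.target = target
        self.domain = list(domain)      # e.g. all length-m 0/1 vectors (small m only)

    def membership(self, x) -> float:
        return self.target(x)

    def equivalence(self, hypothesis) -> Optional[Tuple[int, ...]]:
        """Return None for 'YES', or some x with hypothesis(x) != f(x)."""
        for x in self.domain:
            if hypothesis(x) != self.target(x):
                return x
        return None

def learn_by_patching(oracle: ExactLearningOracle) -> Dict:
    """Patch a lookup-table hypothesis one counterexample at a time until an
    equivalence query succeeds."""
    table: Dict[Tuple[int, ...], float] = {}
    hypothesis = lambda x: table.get(x, 0.0)
    while (cx := oracle.equivalence(hypothesis)) is not None:
        table[cx] = oracle.membership(cx)
    return table

# Example with m = 2 goods: the target is the polynomial 2*x1*x2.
from itertools import product
oracle = ExactLearningOracle(lambda x: 2.0 * x[0] * x[1], product((0, 1), repeat=2))
print(learn_by_patching(oracle))    # learns the single non-zero point (1, 1)
```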
Let Γ be the set of possible allocations. Each agent i ∈ N has a valuation function vi : 2M → Ê over the space of possible bundles. Each valuation vi is drawn from a known class of valuations Vi. The valuation classes do not need to coincide. We will assume that all the valuations considered are normalized, meaning v(∅) = 0, and that there are no externalities, meaning vi(S1, ..., Sn) = vi(Si), for all agents i ∈ N, for any allocation (S1, ..., Sn) ∈ Γ (that is, an agent cares only about the bundle allocated to her). Valuations satisfying these conditions are called general valuations.1 We 1 Often general valuations are made to satisfy the additional 181 also assume that agents have quasi-linear utility functions, meaning that agents'' utilities can be divided into monetary and non-monetary components. If an agent i is allocated bundle S at price p, it derives utility ui(S, p) = vi(S) − p. A valuation function may be viewed as a vector of 2m − 1 non-negative real-values. Of course there may also be more succinct representations for certain valuation classes, and there has been much research into concise bidding languages for various types of valuations [11]. A classic example which we will refer to again later is the XOR bidding language. In this language, the agent provides a list of atomic bids, which consist of a bundle together with its value. To determine the value of a bundle S given these bids, one searches for the bundle S of highest value listed in the atomic bids such that S ⊆ S. It is then the case that v(S) = v(S ). As in the learning theory setting, we will usually only refer to bidding languages rather than valuation classes, because the corresponding valuation classes will then be implied. For example, the XOR bidding language implies the class of valuations satisfying free-disposal, which is the condition that A ⊆ B ⇒ v(A) ≤ v(B). We let size(v1, ... , vn) = Èn i=1 size(vi). That is, the size of a vector of valuations is the size of the concatenation of the valuations'' representations in their respective encoding schemes (bidding languages). To make an analogy to computational learning theory, we assume that all representation classes considered are polynomially interpretable [11], meaning that the value of a bundle may be computed in polynomial time given the valuation function``s representation. More formally, a representation class (bidding language) C is polynomially interpretable if there exists an algorithm that given as input some v ∈ C and an instance x ∈ X computes the value v(x) in time q(size(v), m), for some fixed polynomial q(·, ·).2 In the intermediate rounds of an (iterative) auction, the auctioneer will have elicited information about the agents'' valuation functions via various types of queries. She will thus have constructed a set of manifest valuations, denoted ˜v1, ... , ˜vn.3 The values of these functions may correspond exactly to the true agent values, or they may for example be upper or lower bounds on the true values, depending on the types of queries made. They may also simply be default or random values if no information has been acquired about certain bundles. The goal in the preference elicitation problem is to construct a set of manifest valuations such that: arg max (S1,...,Sn)∈Γ i∈N ˜vi(Si) ⊆ arg max (S1,...,Sn)∈Γ i∈N vi(Si) That is, the manifest valuations provide enough information to compute an allocation that is optimal with respect to the true valuations. Note that we only require one such optimal allocation. 
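The XOR bidding language sketched above is easy to interpret mechanically: the value of a bundle is the largest value among the atomic bids it contains. The snippet below is a minimal illustration of that rule; the function and variable names are ours, not the paper's.

```python
from typing import Dict, FrozenSet

def xor_value(atomic_bids: Dict[FrozenSet[str], float], bundle: FrozenSet[str]) -> float:
    """v(S) = max value of any atomic bid whose bundle is contained in S
    (0 for the empty bundle or when no atomic bid applies)."""
    return max((w for B, w in atomic_bids.items() if B <= bundle), default=0.0)

# The single-item valuation over goods {a, b, c}: one atomic bid per item.
bids = {frozenset({"a"}): 1.0, frozenset({"b"}): 1.0, frozenset({"c"}): 1.0}
assert xor_value(bids, frozenset({"b", "c"})) == 1.0
assert xor_value(bids, frozenset()) == 0.0
```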
condition of free-disposal (monotonicity), but we do not need it at this point. 2 This excludes OR∗, assuming P ≠ NP, because interpreting bids from this language is NP-hard by reduction from weighted set-packing, and there is no well-studied representation class in learning theory that is clearly analogous to OR∗. 3 This view of iterative auctions is meant to parallel the learning setting. In many combinatorial auctions, manifest valuations are not explicitly maintained but rather simply implied by the history of bids. Two typical queries used in preference elicitation are value and demand queries. On a value query, the auctioneer presents a bundle S ⊆ M and the agent responds with her (exact) value for the bundle v(S) [8]. On a demand query, the auctioneer presents a vector of non-negative prices p ∈ ℝ^(2^m) over the bundles together with a bundle S. The agent responds 'YES' if it is the case that S ∈ arg max_{S′⊆M} (v(S′) − p(S′)), or otherwise presents a bundle S′ such that v(S′) − p(S′) > v(S) − p(S). That is, the agent either confirms that the presented bundle is most preferred at the quoted prices, or indicates a better one [15].4 Note that we include ∅ as a bundle, so the agent will only respond 'YES' if its utility for the proposed bundle is non-negative. Note also that communicating nonlinear prices does not necessarily entail quoting a price for every possible bundle. There may be more succinct ways of communicating this vector, as we show in section 5. We make the following definitions to parallel the query learning setting and to simplify the statements of later results: Definition 2. The representation classes V1, ... , Vn can be polynomial-query elicited from value and demand queries if there is a fixed polynomial p(·, ·) and an algorithm L with access to value and demand queries of the agents such that for any (v1, ... , vn) ∈ V1 × ... × Vn, L outputs after at most p(size(v1, ... , vn), m) queries an allocation (S1, ... , Sn) ∈ arg max_{(S1,...,Sn)∈Γ} Σ_{i∈N} vi(Si). Similarly, the representation class C can be efficiently elicited from value and demand queries if the algorithm L outputs an optimal allocation with communication p(size(v1, ... , vn), m), for some fixed polynomial p(·, ·). There are some key differences here with the query learning definition. We have dropped the term exactly, since the valuation functions need not be determined exactly in order to compute an optimal allocation. Also, an efficient elicitation algorithm requires polynomial communication, rather than polynomial time. This reflects the fact that communication rather than runtime is the bottleneck in elicitation. Computing an optimal allocation of goods even when given the true valuations is NP-hard for a wide range of valuation classes. It is thus unreasonable to require polynomial time in the definition of an efficient preference elicitation algorithm. We are happy to focus on the communication complexity of elicitation because this problem is widely believed to be more significant in practice than that of winner determination [11].5 4 This differs slightly from the definition provided by Blum et al. [5]. Their demand queries are restricted to linear prices over the goods, where the price of a bundle is the sum of the prices of its underlying goods. In contrast our demand queries allow for nonlinear prices, i.e., a distinct price for every possible bundle. This is why the lower bound in their Theorem 2 does not contradict our result that follows.
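For concreteness, here is a rough sketch of how a truthful agent would answer the two query types just defined. The brute-force enumeration of all bundles is for illustration only, and treating bundles missing from the price dictionary as having price zero is a convenience of the sketch, not part of the definition.

```python
from itertools import chain, combinations
from typing import Callable, Dict, FrozenSet, Iterable, Optional

def all_bundles(goods: Iterable[str]):
    items = list(goods)
    return (frozenset(c) for c in
            chain.from_iterable(combinations(items, r) for r in range(len(items) + 1)))

def value_query(v: Callable[[FrozenSet[str]], float], S: FrozenSet[str]) -> float:
    return v(S)                          # the agent reports its exact value

def demand_query(v: Callable[[FrozenSet[str]], float],
                 prices: Dict[FrozenSet[str], float],
                 proposed: FrozenSet[str],
                 goods: Iterable[str]) -> Optional[FrozenSet[str]]:
    """Return None ('YES') if the proposed bundle maximizes v(S') - p(S') over
    all bundles, including the empty bundle; otherwise return a better bundle."""
    surplus = lambda S: v(S) - prices.get(S, 0.0)
    best = max(all_bundles(goods), key=surplus)
    return None if surplus(proposed) >= surplus(best) else best
```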
5 Though the winner determination problem is NP-hard for general valuations, there exist many algorithms that solve it efficiently in practice. These range from special purpose algorithms [7, 16] to approaches using off-the-shelf IP solvers [1]. Since the valuations need not be elicited exactly, it is initially less clear whether the polynomial dependence on size(v1, ... , vn) is justified in this setting. Intuitively, this parameter is justified because we must learn valuations exactly when performing elicitation, in the worst case. We address this in the next section. 3. PARALLELS BETWEEN EQUIVALENCE AND DEMAND QUERIES We have described the query learning and preference elicitation settings in a manner that highlights their similarities. Value and membership queries are clear analogs. Slightly less obvious is the fact that equivalence and demand queries are also analogs. To see this, we need the concept of Lindahl prices. Lindahl prices are nonlinear and non-anonymous prices over the bundles. They are nonlinear in the sense that each bundle is assigned a price, and this price is not necessarily the sum of prices over its underlying goods. They are non-anonymous in the sense that two agents may face different prices for the same bundle of goods. Thus Lindahl prices are of the form pi(S), for all S ⊆ M, for all i ∈ N. Lindahl prices are presented to the agents in demand queries. When agents have normalized quasi-linear utility functions, Bikhchandani and Ostroy [4] show that there always exist Lindahl prices such that (S1, ... , Sn) is an optimal allocation if and only if Si ∈ arg max_{Si} (vi(Si) − pi(Si)) ∀i ∈ N (1) and (S1, ... , Sn) ∈ arg max_{(S1,...,Sn)∈Γ} Σ_{i∈N} pi(Si) (2) Condition (1) states that each agent is allocated a bundle that maximizes its utility at the given prices. Condition (2) states that the allocation maximizes the auctioneer's revenue at the given prices. The scenario in which these conditions hold is called a Lindahl equilibrium, or often a competitive equilibrium. We say that the Lindahl prices support the optimal allocation. It is therefore sufficient to announce supporting Lindahl prices to verify an optimal allocation. Once we have found an allocation with supporting Lindahl prices, the elicitation problem is solved. The problem of finding an optimal allocation (with respect to the manifest valuations) can be formulated as a linear program whose solutions are guaranteed to be integral [4]. The dual variables to this linear program are supporting Lindahl prices for the resulting allocation. The objective function of the dual program is: min_{pi(S)} (πs + Σ_{i∈N} πi) (3) with πi = max_{S⊆M} (˜vi(S) − pi(S)) and πs = max_{(S1,...,Sn)∈Γ} Σ_{i∈N} pi(Si). The optimal values of πi and πs correspond to the maximal utility to agent i with respect to its manifest valuation and the maximal revenue to the seller. There is usually a range of possible Lindahl prices supporting a given optimal allocation. The agents' manifest valuations are in fact valid Lindahl prices, and we refer to them as maximal Lindahl prices. Out of all possible vectors of Lindahl prices, maximal Lindahl prices maximize the utility of the auctioneer, in fact giving her the entire social welfare. Conversely, prices that maximize the Σ_{i∈N} πi component of the objective (the sum of the agents' utilities) are minimal Lindahl prices. Any Lindahl prices will do for our results, but some may have better elicitation properties than others.
Note that a demand query with maximal Lindahl prices is almost identical to an equivalence query, since in both cases we communicate the manifest valuation to the agent. We leave for future work the question of which Lindahl prices to choose to minimize preference elicitation. Considering now why demand and equivalence queries are direct analogs, first note that given the πi in some Lindahl equilibrium, setting pi(S) = max{0, ˜vi(S) − πi} (4) for all i ∈ N and S ⊆ M yields valid Lindahl prices. These prices leave every agent indifferent across all bundles with positive price, and satisfy condition (1). Thus demand queries can also implicitly communicate manifest valuations, since Lindahl prices will typically be an additive constant away from these by equality (4). In the following lemma we show how to obtain counterexamples to equivalence queries through demand queries. Lemma 1. Suppose an agent replies with a preferred bundle S′ when proposed a bundle S and supporting Lindahl prices p(S) (supporting with respect to the agent's manifest valuation). Then either ˜v(S) ≠ v(S) or ˜v(S′) ≠ v(S′). Proof. We have the following inequalities: ˜v(S) − p(S) ≥ ˜v(S′) − p(S′) ⇒ ˜v(S′) − ˜v(S) ≤ p(S′) − p(S) (5) and v(S′) − p(S′) > v(S) − p(S) ⇒ v(S′) − v(S) > p(S′) − p(S) (6) Inequality (5) holds because the prices support the proposed allocation with respect to the manifest valuation. Inequality (6) holds because the agent in fact prefers S′ to S given the prices, according to its response to the demand query. If it were the case that ˜v(S) = v(S) and ˜v(S′) = v(S′), these inequalities would represent a contradiction. Thus at least one of S and S′ is a counterexample to the agent's manifest valuation. Finally, we justify dependence on size(v1, ... , vn) in elicitation problems. Nisan and Segal (Proposition 1, [12]) and Parkes (Theorem 1, [13]) show that supporting Lindahl prices must necessarily be revealed in the course of any preference elicitation protocol which terminates with an optimal allocation. Furthermore, Nisan and Segal (Lemma 1, [12]) state that in the worst case agents' prices must coincide with their valuations (up to a constant), when the valuation class is rich enough to contain dual valuations (as will be the case with most interesting classes). Since revealing Lindahl prices is a necessary condition for establishing an optimal allocation, and since Lindahl prices contain the same information as valuation functions (in the worst case), allowing for dependence on size(v1, ... , vn) in elicitation problems is entirely natural.
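Equality (4) and Lemma 1 together suggest a small helper pair: construct candidate Lindahl prices from a manifest valuation and its equilibrium surplus, and extract a counterexample from a failed demand query with two value queries. The sketch below assumes the surplus π_i is already available from the allocation linear program; all names are illustrative.

```python
from typing import Callable, Dict, FrozenSet, Iterable, Optional, Tuple

def lindahl_prices(manifest_v: Callable[[FrozenSet[str]], float],
                   pi_i: float,
                   bundles: Iterable[FrozenSet[str]]) -> Dict[FrozenSet[str], float]:
    """Equality (4): p_i(S) = max(0, v~_i(S) - pi_i) for every bundle S."""
    return {S: max(0.0, manifest_v(S) - pi_i) for S in bundles}

def counterexample_from_demand(manifest_v: Callable[[FrozenSet[str]], float],
                               value_query: Callable[[FrozenSet[str]], float],
                               proposed: FrozenSet[str],
                               preferred: Optional[FrozenSet[str]]
                               ) -> Optional[Tuple[FrozenSet[str], float]]:
    """Lemma 1: if the agent names a preferred bundle, then the manifest value
    of the proposed or the preferred bundle must be wrong; two value queries
    reveal which, and the pair (bundle, true value) is the counterexample."""
    if preferred is None:
        return None                      # the agent answered 'YES'
    for S in (proposed, preferred):
        true_val = value_query(S)
        if manifest_v(S) != true_val:
            return S, true_val
    raise AssertionError("Lemma 1 guarantees one of the two values is wrong")
```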
If all agents reply `YES'', condition (1) holds. Condition (2) holds because the computed allocation is revenue-maximizing for the auctioneer, regardless of the agents'' true valuations. Thus an optimal allocation has been found. Otherwise, at least one of Si or Si is a counterexample to ˜vi, by Lemma 1. We identify a counterexample by performing value queries on both these bundles, and provide it to Ai as a response to its equivalence query. This procedure will halt, since in the worst-case all agent valuations will be learned exactly, in which case the optimal allocation and Lindahl prices will be accepted by all agents. The procedure performs a polynomial number of queries, since A1, ... , An are all polynomial-query learning algorithms. Note that the conversion procedure results in a preference elicitation algorithm, not a learning algorithm. That is, the resulting algorithm does not simply learn the valuations exactly, then compute an optimal allocation. Rather, it elicits partial information about the valuations through value queries, and periodically tests whether enough information has been gathered by proposing an allocation to the agents through demand queries. It is possible to generate a Lindahl equilibrium for valuations v1, ... , vn using an allocation and prices derived using manifest valuations ˜v1, ... , ˜vn, and finding an optimal allocation does not imply that the agents'' valuations have been exactly learned. The use of demand queries to simulate equivalence queries enables this early halting. We would not obtain this property with equivalence queries based on manifest valuations. 5. COMMUNICATION COMPLEXITY In this section, we turn to the issue of the communication complexity of elicitation. Nisan and Segal [12] show that for a variety of rich valuation spaces (such as general and submodular valuations), the worst-case communication burden of determining Lindahl prices is exponential in the number of goods, m. The communication burden is measured in terms of the number of bits transmitted between agents and auctioneer in the case of discrete communication, or in terms of the number of real numbers transmitted in the case of continuous communication. Converting efficient learning algorithms to an elicitation algorithm produces an algorithm whose queries have sizes polynomial in the parameters m and size(v1, ... , vn). Theorem 2. The representation classes V1, ... , Vn can be efficiently elicited from value and demand queries if they can each be efficiently exactly learned from membership and equivalence queries. Proof. The size of any value query is O(m): the message consists solely of the queried bundle. To communicate Lindahl prices to agent i, it is sufficient to communicate the agent``s manifest valuation function and the value πi, by equality (4). Note that an efficient learning algorithm never builds up a manifest hypothesis of superpolynomial size, because the algorithm``s runtime would then also be superpolynomial, contradicting efficiency. Thus communicating the manifest valuation requires size at most p(size(vi), m), for some polynomial p that upper-bounds the runtime of the efficient learning algorithm. Representing the surplus πi to agent i cannot require space greater than q(size(˜vi), m) for some fixed polynomial q, because we assume that the chosen representation is polynomially interpretable, and thus any value generated will be of polynomial size. 
We must also communicate to i its allocated bundle, so the total message size for a demand query is at most p(size(vi), m) + q(p(size(vi), m), m)+O(m). Clearly, an agent``s response to a value or demand query has size at most q(size(vi), m) + O(m). Thus the value and demand queries, and the responses to these queries, are always of polynomial size. An efficient learning algorithm performs a polynomial number of queries, so the total communication of the resulting elicitation algorithm is polynomial in the relevant parameters. There will often be explicit bounds on the number of membership and equivalence queries performed by a learning algorithm, with constants that are not masked by big-O notation. These bounds can be translated to explicit bounds on the number of value and demand queries made by the resulting elicitation algorithm. We upper-bounded the size of the manifest hypothesis with the runtime of the learning algorithm in Theorem 2. We are likely to be able to do much better than this in practice. Recall that an equivalence query is proper if size( ˜f) ≤ size(f) at the time the query is made. If the learning algorithm``s equivalence queries are all proper, it may then also be possible to provide tight bounds on the communication requirements of the resulting elicitation algorithm. Theorem 2 show that elicitation algorithms that depend on the size(v1, ... , vn) parameter sidestep Nisan and Segal``s [12] negative results on the worst-case communication complexity of efficient allocation problems. They provide guarantees with respect to the sizes of the instances of valuation functions faced at any run of the algorithm. These algorithms will fare well if the chosen representation class provides succinct representations for the simplest and most common of valuations, and thus the focus moves back to one of compact yet expressive bidding languages. We consider these issues below. 6. APPLICATIONS In this section, we demonstrate the application of our methods to particular representation classes for combinatorial valuations. We have shown that the preference elicitation problem for valuation classes V1, ... , Vn can be reduced 184 Given: exact learning algorithms A1, ... , An for valuations classes V1, ... , Vn respectively. Loop until there is a signal to halt: 1. Run A1, ... , An in parallel on their respective agents until each requires a response to an equivalence query, or has halted with the agent``s exact valuation. 2. Compute an optimal allocation (S1, ... , Sn) and corresponding Lindahl prices with respect to the manifest valuations ˜v1, ... , ˜vn determined so far. 3. Present the allocation and prices to the agents in the form of a demand query. 4. If they all reply `YES'', output the allocation and halt. Otherwise there is some agent i that has replied with some preferred bundle Si. Perform value queries on Si and Si to find a counterexample to ˜vi, and provide it to Ai. Figure 1: Converting learning algorithms to an elicitation algorithm. to the problem of finding an efficient learning algorithm for each of these classes separately. This is significant because there already exist learning algorithms for a wealth of function classes, and because it may often be simpler to solve each learning subproblem separately than to attack the preference elicitation problem directly. We can develop an elicitation algorithm that is tailored to each agent``s valuation, with the underlying learning algorithms linked together at the demand query stages in an algorithm-independent way. 
We show that existing learning algorithms for polynomials, monotone DNF formulae, and linear-threshold functions can be converted into preference elicitation algorithms for general valuations, valuations with free-disposal, and valuations with substitutabilities, respectively. We focus on representations that are polynomially interpretable, because the computational learning theory literature places a heavy emphasis on computational tractability [18]. In interpreting the methods we emphasize the expressiveness and succinctness of each representation class. The representation class, which in combinatorial auction terms defines a bidding language, must necessarily be expressive enough to represent all possible valuations of interest, and should also succinctly represent the simplest and most common functions in the class. 6.1 Polynomial Representations Schapire and Sellie [17] give a learning algorithm for sparse multivariate polynomials that can be used as the basis for a combinatorial auction protocol. The equivalence queries made by this algorithm are all proper. Specifically, their algorithm learns the representation class of t-sparse multivariate polynomials over the real numbers, where the variables may take on values either 0 or 1. A t-sparse polynomial has at most t terms, where a term is a product of variables, e.g. x1x3x4. A polynomial over the real numbers has coefficients drawn from the real numbers. Polynomials are expressive: every valuation function v : 2M → Ê+ can be uniquely written as a polynomial [17]. To get an idea of the succinctness of polynomials as a bidding language, consider the additive and single-item valuations presented by Nisan [11]. In the additive valuation, the value of a bundle is the number of goods the bundle contains. In the single-item valuation, all bundles have value 1, except ∅ which has value 0 (i.e. the agent is satisfied as soon as it has acquired a single item). It is not hard to show that the single-item valuation requires polynomials of size 2m − 1, while polynomials of size m suffice for the additive valuation. Polynomials are thus appropriate for valuations that are mostly additive, with a few substitutabilities and complementarities that can be introduced by adjusting coefficients. The learning algorithm for polynomials makes at most mti +2 equivalence queries and at most (mti +1)(t2 i +3ti)/2 membership queries to an agent i, where ti is the sparcity of the polynomial representing vi [17]. We therefore obtain an algorithm that elicits general valuations with a polynomial number of queries and polynomial communication.6 6.2 XOR Representations The XOR bidding language is standard in the combinatorial auctions literature. Recall that an XOR bid is characterized by a set of bundles B ⊆ 2M and a value function w : B → Ê+ defined on those bundles, which induces the valuation function: v(B) = max {B ∈B | B ⊆B} w(B ) (7) XOR bids can represent valuations that satisfy free-disposal (and only such valuations), which again is the property that A ⊆ B ⇒ v(A) ≤ v(B). The XOR bidding language is slightly less expressive than polynomials, because polynomials can represent valuations that do not satisfy free-disposal. However, XOR is as expressive as required in most economic settings. Nisan [11] notes that XOR bids can represent the single-item valuation with m atomic bids, but 2m − 1 atomic bids are needed to represent the additive valuation. 
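Before the formal statement, the query pattern used in Lemma 2 below can be sketched as follows: each counterexample bundle returned by an equivalence query is pruned one item at a time with membership (value) queries until an atomic bid of the target XOR valuation is isolated. Free-disposal is assumed, as it is for XOR bids; the interface names are ours, not the paper's.

```python
def learn_xor_bid(value_query, equivalence_query):
    """Sketch of the Lemma 2 procedure: grow a set of atomic bids until an
    equivalence query accepts the hypothesis.

    value_query(S) returns the target value v(S); equivalence_query(h) returns
    None for 'YES' or a bundle on which the hypothesis h is wrong."""
    atomic_bids = {}                                     # frozenset bundle -> value

    def hypothesis(S):
        return max((w for B, w in atomic_bids.items() if B <= S), default=0.0)

    while (S := equivalence_query(hypothesis)) is not None:
        T = set(S)
        v_T = value_query(frozenset(T))      # pruning never changes this value
        for item in list(T):                 # at most m membership queries
            if value_query(frozenset(T - {item})) == v_T:
                T.discard(item)              # the item is not needed for the value
        atomic_bids[frozenset(T)] = v_T      # (T, v(T)) is an atomic bid of the target
    return atomic_bids
```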
Since the opposite holds for polynomials, these two languages are incomparable in succinctness, and somewhat complementary for practical use. Blum et al. [5] note that monotone DNF formulae are the analogs of XOR bids in the learning theory literature. A monotone DNF formula is a disjunction of conjunctions in which the variables appear unnegated, for example x1x2 ∨ x3 ∨ x2x4x5. Note that such formulae can be represented as XOR bids where each atomic bid has value 1; thus XOR bids generalize monotone DNF formulae from Boolean to real-valued functions. These insights allow us to generalize a classic learning algorithm for monotone DNF ([3] Theorem 6 Note that Theorem 1 applies even if valuations do not satisfy free-disposal. 185 1, [18] Theorem B) to a learning algorithm for XOR bids.7 Lemma 2. An XOR bid containing t atomic bids can be exactly learned with t + 1 equivalence queries and at most tm membership queries. Proof. The algorithm will identify each atomic bid in the target XOR bid in turn. Initialize the manifest valuation ˜v to the bid that is identically zero on all bundles (this is an XOR bid containing 0 atomic bids). Present ˜v as an equivalence query. If the response is `YES'', we are done. Otherwise we obtain a bundle S for which v(S) = ˜v(S). Create a bundle T as follows. First initialize T = S. For each item i in T, check via a membership query whether v(T) = v(T − {i}). If so set T = T − {i}. Otherwise leave T as is and proceed to the next item. We claim that (T, v(T)) is an atomic bid of the target XOR bid. For each item i in T, we have v(T) = v(T − {i}). To see this, note that at some point when generating T, we had a ¯T such that T ⊆ ¯T ⊆ S and v( ¯T) > v( ¯T − {i}), so that i was kept in ¯T. Note that v(S) = v( ¯T) = v(T) because the value of the bundle S is maintained throughout the process of deleting items. Now assume v(T) = v(T − {i}). Then v( ¯T) = v(T) = v(T − {i}) > v( ¯T − {i}) which contradicts free-disposal, since T − {i} ⊆ ¯T − {i}. Thus v(T) > v(T − {i}) for all items i in T. This implies that (T, v(T)) is an atomic bid of v. If this were not the case, T would take on the maximum value of its strict subsets, by the definition of an XOR bid, and we would have v(T) = max i∈T { max T ⊆T −{i} v(T )} = max i∈T {v(T − {i})} < v(T) which is a contradiction. We now show that v(T) = ˜v(T), which will imply that (T, v(T)) is not an atomic bid of our manifest hypothesis by induction. Assume that every atomic bid (R, ˜v(R)) identified so far is indeed an atomic bid of v (meaning R is indeed listed in an atomic bid of v as having value v(R) = ˜v(R)). This assumption holds vacuously when the manifest valuation is initialized. Using the notation from (7), let ( ˜B, ˜w) be our hypothesis, and (B, w) be the target function. We have ˜B ⊆ B, and ˜w(B) = w(B) for B ∈ ˜B by assumption. Thus, ˜v(S) = max {B∈ ˜B | B⊆S} ˜w(B) = max {B∈ ˜B | B⊆S} w(B) ≤ max {B∈B | B⊆S} w(B) = v(S) (8) Now assume v(T) = ˜v(T). Then, ˜v(T) = v(T) = v(S) = ˜v(S) (9) The second equality follows from the fact that the value remains constant when we derive T from S. The last inequality holds because S is a counterexample to the manifest valuation. From equation (9) and free-disposal, we 7 The cited algorithm was also used as the basis for Zinkevich et al.``s [19] elicitation algorithm for Toolbox DNF. Recall that Toolbox DNF are polynomials with non-negative coefficients. For these representations, an equivalence query can be simulated with a value query on the bundle containing all goods. 
Applying Theorem 2, we therefore obtain the following corollary:

Theorem 3. The representation class of XOR bids can be efficiently elicited from value and demand queries.

This contrasts with Blum et al.'s negative results ([5], Theorem 2) stating that monotone DNF (and hence XOR bids) cannot be efficiently elicited when the demand queries are restricted to linear and anonymous prices over the goods.

6.3 Linear-Threshold Representations

Polynomials, XOR bids, and all languages based on the OR bidding language (such as XOR-of-OR, OR-of-XOR, and OR∗) fail to succinctly represent the majority valuation [11]. In this valuation, bundles have value 1 if they contain at least m/2 items, and value 0 otherwise. More generally, consider the r-of-S family of valuations where bundles have value 1 if they contain at least r items from a specified set of items S ⊆ M, and value 0 otherwise. The majority valuation is a special case of the r-of-S valuation with r = m/2 and S = M. These valuations are appropriate for representing substitutabilities: once a required set of items has been obtained, no other items can add value. Letting k = |S|, such valuations are succinctly represented by r-of-k threshold functions. These functions take the form of linear inequalities:

xi1 + ... + xik ≥ r,

where the function has value 1 if the inequality holds, and 0 otherwise. Here i1, ..., ik are the items in S. Littlestone's WINNOW 2 algorithm can learn such functions using equivalence queries only, using at most 8r² + 5k + 14kr ln m + 1 queries [10]. To provide this guarantee, r must be known to the algorithm, but S (and k) are unknown. The elicitation algorithm that results from WINNOW 2 uses demand queries only (value queries are not necessary here because the values of counterexamples are implied when there are only two possible values). Note that r-of-k threshold functions can always be succinctly represented in O(m) space. Thus we obtain an algorithm that can elicit such functions with a polynomial number of queries and polynomial communication, in the parameters n and m alone.
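To illustrate the counterexample-driven updates, here is a small Python sketch of a WINNOW 2-style learner for r-of-k threshold valuations. It is our illustration, not code from the paper or from Littlestone [10]; in particular the parameter choices (promotion/demotion factor 1 + 1/(2r) and threshold m) follow one standard presentation and are assumptions, and the brute-force counterexample oracle stands in for the demand queries of the elicitation setting.

```python
from itertools import chain, combinations

def winnow2(m, r, counterexample, alpha=None, theta=None):
    """Mistake-driven WINNOW 2 sketch for r-of-k threshold valuations."""
    # Assumed default parameters; one standard presentation uses a
    # promotion/demotion factor near 1 + 1/(2r) and threshold m.
    alpha = 1.0 + 1.0 / (2 * r) if alpha is None else alpha
    theta = float(m) if theta is None else theta
    w = [1.0] * m

    def hypothesis(bundle):
        return 1 if sum(w[i] for i in bundle) >= theta else 0

    while True:
        B = counterexample(hypothesis)      # a bundle the hypothesis misclassifies
        if B is None:
            return hypothesis
        if hypothesis(B) == 0:              # false negative: promote active items
            for i in B:
                w[i] *= alpha
        else:                               # false positive: demote active items
            for i in B:
                w[i] /= alpha

# Toy usage: the valuation is 1 iff a bundle holds at least r = 2 items of S_star.
m, r, S_star = 5, 2, {0, 2, 3}
target = lambda bundle: 1 if len(set(bundle) & S_star) >= r else 0
all_bundles = list(chain.from_iterable(combinations(range(m), n) for n in range(m + 1)))

def counterexample(hypothesis):
    # In the elicitation setting this role is played by demand queries; here we
    # simply scan the (tiny) domain for a disagreement.
    for B in all_bundles:
        if hypothesis(B) != target(B):
            return B
    return None

h = winnow2(m, r, counterexample)
assert all(h(B) == target(B) for B in all_bundles)
```

The multiplicative updates drive the weights of items outside S down and those inside S up until the hypothesis agrees with the target on every bundle; the bound quoted above caps the number of counterexamples, and hence of demand queries.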
7. CONCLUSIONS AND FUTURE WORK

We have shown that exact learning algorithms with membership and equivalence queries can be used as a basis for preference elicitation algorithms with value and demand queries. At the heart of this result is the fact that demand queries may be viewed as modified equivalence queries, specialized to the problem of preference elicitation. Our result allows us to apply the wealth of available learning algorithms to the problem of preference elicitation. A learning approach to elicitation also motivates a different approach to designing elicitation algorithms that decomposes neatly across agent types. If the designer knows beforehand what types of preferences each agent is likely to exhibit (mostly additive, many substitutes, etc.), she can design learning algorithms tailored to each agent's valuation and integrate them into an elicitation scheme. The resulting elicitation algorithm makes a polynomial number of queries, and requires only polynomial communication if the original learning algorithms are efficient. We do not require that agent valuations can be learned with value and demand queries. Equivalence queries can only be, and need only be, simulated up to the point where an optimal allocation has been computed. This is the preference elicitation problem.

Theorem 1 implies that elicitation with value and demand queries is no harder than learning with membership and equivalence queries, but it does not provide any asymptotic improvements over the learning algorithms' complexity. It would be interesting to find examples of valuation classes for which elicitation is easier than learning. Blum et al. [5] provide such an example when considering membership/value queries only (Theorem 4). In future work we plan to address the issue of incentives when converting learning algorithms to elicitation algorithms. In the learning setting, we usually assume that oracles provide honest responses to queries; in the elicitation setting, agents are usually selfish and may provide dishonest responses so as to maximize their utility. We also plan to implement the algorithms for learning polynomials and XOR bids as elicitation algorithms, and test their performance against other established combinatorial auction protocols [6, 15]. An interesting question here is: which Lindahl prices in the maximal-to-minimal range are best to quote in order to minimize information revelation? We conjecture that information revelation is reduced when moving from maximal to minimal Lindahl prices, that is, as we move demand queries further away from equivalence queries. Finally, it would be useful to determine whether the OR∗ bidding language [11] can be efficiently learned (and hence elicited), given this language's expressiveness and succinctness for a wide variety of valuation classes.

Acknowledgements

We would like to thank Debasis Mishra for helpful discussions. This work is supported in part by NSF grant IIS0238147.

8. REFERENCES

[1] A. Andersson, M. Tenhunen, and F. Ygge. Integer programming for combinatorial auction winner determination. In Proceedings of the Fourth International Conference on Multiagent Systems (ICMAS-00), 2000.
[2] D. Angluin. Learning regular sets from queries and counterexamples. Information and Computation, 75:87-106, November 1987.
[3] D. Angluin. Queries and concept learning. Machine Learning, 2:319-342, 1987.
[4] S. Bikhchandani and J. Ostroy. The Package Assignment Model. Journal of Economic Theory, 107(2), December 2002.
[5] A. Blum, J. Jackson, T. Sandholm, and M. Zinkevich. Preference elicitation and query learning. In Proc. 16th Annual Conference on Computational Learning Theory (COLT), Washington DC, 2003.
[6] W. Conen and T. Sandholm. Partial-revelation VCG mechanism for combinatorial auctions. In Proc. 18th National Conference on Artificial Intelligence (AAAI), 2002.
[7] Y. Fujishima, K. Leyton-Brown, and Y. Shoham. Taming the computational complexity of combinatorial auctions: Optimal and approximate approaches. In Proc. 16th International Joint Conference on Artificial Intelligence (IJCAI), pages 548-553, 1999.
[8] B. Hudson and T. Sandholm. Using value queries in combinatorial auctions. In Proc. 4th ACM Conference on Electronic Commerce (ACM-EC), San Diego, CA, June 2003.
[9] M. J. Kearns and U. V. Vazirani. An Introduction to Computational Learning Theory. MIT Press, 1994.
[10] N. Littlestone. Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning, 2:285-318, 1988.
[11] N. Nisan. Bidding and allocation in combinatorial auctions. In Proc. ACM Conference on Electronic Commerce, pages 1-12, 2000.
[12] N. Nisan and I. Segal. The communication requirements of efficient allocations and supporting Lindahl prices. Working Paper, Hebrew University, 2003.
[13] D. C. Parkes. Price-based information certificates for minimal-revelation combinatorial auctions. In Padget et al., editors, Agent-Mediated Electronic Commerce IV, LNAI 2531, pages 103-122. Springer-Verlag, 2002.
[14] D. C. Parkes. Auction design with costly preference elicitation. In Special Issues of Annals of Mathematics and AI on the Foundations of Electronic Commerce, Forthcoming (2003).
[15] D. C. Parkes and L. H. Ungar. Iterative combinatorial auctions: Theory and practice. In Proc. 17th National Conference on Artificial Intelligence (AAAI-00), pages 74-81, 2000.
[16] T. Sandholm, S. Suri, A. Gilpin, and D. Levine. CABOB: A fast optimal algorithm for combinatorial auctions. In Proc. 17th International Joint Conference on Artificial Intelligence (IJCAI), pages 1102-1108, 2001.
[17] R. Schapire and L. Sellie. Learning sparse multivariate polynomials over a field with queries and counterexamples. In Proceedings of the Sixth Annual ACM Workshop on Computational Learning Theory, pages 17-26. ACM Press, 1993.
[18] L. Valiant. A theory of the learnable. Commun. ACM, 27(11):1134-1142, Nov. 1984.
[19] M. Zinkevich, A. Blum, and T. Sandholm. On polynomial-time preference elicitation with value-queries. In Proc. 4th ACM Conference on Electronic Commerce (ACM-EC), San Diego, CA, June 2003.
Applying Learning Algorithms to Preference Elicitation ABSTRACT We consider the parallels between the preference elicitation problem in combinatorial auctions and the problem of learning an unknown function from learning theory. We show that learning algorithms can be used as a basis for preference elicitation algorithms. The resulting elicitation algorithms perform a polynomial number of queries. We also give conditions under which the resulting algorithms have polynomial communication. Our conversion procedure allows us to generate combinatorial auction protocols from learning algorithms for polynomials, monotone DNF, and linear-threshold functions. In particular, we obtain an algorithm that elicits XOR bids with polynomial communication. 1. INTRODUCTION In a combinatorial auction, agents may bid on bundles of goods rather than individual goods alone. Since there are an exponential number of bundles (in the number of goods), communicating values over these bundles can be problematic. Communicating valuations in a one-shot fashion can be prohibitively expensive if the number of goods is only moderately large. Furthermore, it might even be hard for agents to determine their valuations for single bundles [14]. It is in the interest of such agents to have auction protocols which require them to bid on as few bundles as possible. Even if agents can efficiently compute their valuations, they might still be reluctant to reveal them entirely in the course of an auction, because such information may be valuable to their competitors. These considerations motivate the need for auction protocols that minimize the communication and information revelation required to determine an optimal allocation of goods. There has been recent work exploring the links between the preference elicitation problem in combinatorial auctions and the problem of learning an unknown function from computational learning theory [5, 19]. In learning theory, the goal is to learn a function via various types of queries, such as "What is the function's value on these inputs?" In preference elicitation, the goal is to elicit enough partial information about preferences to be able to compute an optimal allocation. Though the goals of learning and preference elicitation differ somewhat, it is clear that these problems share similar structure, and it should come as no surprise that techniques from one field should be relevant to the other. We show that any exact learning algorithm with membership and equivalence queries can be converted into a preference elicitation algorithm with value and demand queries. The resulting elicitation algorithm guarantees elicitation in a polynomial number of value and demand queries. Here we mean polynomial in the number of goods, agents, and the sizes of the agents' valuation functions in a given encoding scheme. Preference elicitation schemes have not traditionally considered this last parameter. We argue that complexity guarantees for elicitation schemes should allow dependence on this parameter. Introducing this parameter also allows us to guarantee polynomial worst-case communication, which usually cannot be achieved in the number of goods and agents alone. Finally, we use our conversion procedure to generate combinatorial auction protocols from learning algorithms for polynomials, monotone DNF, and linear-threshold functions. 
Of course, a one-shot combinatorial auction where agents provide their entire valuation functions at once would also have polynomial communication in the size of the agents' valuations, and only require one query. The advantage of our scheme is that agents can be viewed as "black-boxes" that provide incremental information about their valuations. There is no burden on the agents to formulate their valuations in an encoding scheme of the auctioneer's choosing. We expect this to be an important consideration in practice. Also, with our scheme entire revelation only happens in the worst-case. For now, we leave the issue of incentives aside when deriving elicitation algorithms. Our focus is on the time and communication complexity of preference elicitation regardless of incentive constraints, and on the relationship between the complexities of learning and preference elicitation. Related work. Zinkevich et al. [19] consider the problem of learning restricted classes of valuation functions which can be represented using read-once formulas and Toolbox DNF. Read-once formulas can represent certain substitutabilities, but no complementarities, whereas the opposite holds for Toolbox DNF. Since their work is also grounded in learning theory, they allow dependence on the size of the target valuation as we do (though read-once valuations can always be succinctly represented anyway). Their work only makes use of value queries, which are quite limited in power. Because we allow ourselves demand queries, we are able to derive an elicitation scheme for general valuation functions. Blum et al. [5] provide results relating the complexities of query learning and preference elicitation. They consider models with membership and equivalence queries in query learning, and value and demand queries in preference elicitation. They show that certain classes of functions can be efficiently learned yet not efficiently elicited, and vice-versa. In contrast, our work shows that given a more general (yet still quite standard) version of demand query than the type they consider, the complexity of preference elicitation is no greater than the complexity of learning. We will show that demand queries can simulate equivalence queries until we have enough information about valuations to imply a solution to the elicitation problem. Nisan and Segal [12] study the communication complexity of preference elicitation. They show that for many rich classes of valuations, the worst-case communication complexity of computing an optimal allocation is exponential. Their results apply to the "black-box" model of computational complexity. In this model algorithms are allowed to ask questions about agent valuations and receive honest responses, without any insight into how the agents internally compute their valuations. This is in fact the basic framework of learning theory. Our work also addresses the issue of communication complexity, and we are able to derive algorithms that provide significant communication guarantees despite Nisan and Segal's negative results. Their work motivates the need to rely on the sizes of agents' valuation functions in stating worst-case results. 2. THE MODELS 2.1 Query Learning 2.2 Preference Elicitation 3. PARALLELS BETWEEN EQUIVALENCE AND DEMAND QUERIES 4. FROM LEARNING TO PREFERENCE ELICITATION 5. COMMUNICATION COMPLEXITY 6. APPLICATIONS 6.1 Polynomial Representations 6.2 XOR Representations 6.3 Linear-Threshold Representations 7. 
CONCLUSIONS AND FUTURE WORK We have shown that exact learning algorithms with membership and equivalence queries can be used as a basis for preference elicitation algorithms with value and demand queries. At the heart of this result is the fact that demand queries may be viewed as modified equivalence queries, specialized to the problem of preference elicitation. Our result allows us to apply the wealth of available learning algorithms to the problem of preference elicitation. A learning approach to elicitation also motivates a different approach to designing elicitation algorithms that decomposes neatly across agent types. If the designer knows beforehand what types of preferences each agent is likely to exhibit (mostly additive, many substitutes, etc.), she can design learning algorithms tailored to each agent's valuations and integrate them into an elicitation scheme. The resulting elicitation algorithm makes a polynomial number of queries, and requires only polynomial communication if the original learning algorithms are efficient. We do not require that agent valuations can be learned with value and demand queries. Equivalence queries can only be, and need only be, simulated up to the point where an optimal allocation has been computed. This is the preference elicitation problem. Theorem 1 implies that elicitation with value and demand queries is no harder than learning with membership and equivalence queries, but it does not provide any asymptotic improvements over the learning algorithms' complexity. It would be interesting to find examples of valuation classes for which elicitation is easier than learning. Blum et al. [5] provide such an example when considering membership/value queries only (Theorem 4). In future work we plan to address the issue of incentives when converting learning algorithms to elicitation algorithms. In the learning setting, we usually assume that oracles will provide honest responses to queries; in the elicitation setting, agents are usually selfish and will provide possibly dishonest responses so as to maximize their utility. We also plan to implement the algorithms for learning polynomials and XOR bids as elicitation algorithms, and test their performance against other established combinatorial auction protocols [6, 15]. An interesting question here is: which Lindahl prices in the maximal to minimal range are best to quote in order to minimize information revelation? We conjecture that information revelation is reduced when moving from maximal to minimal Lindahl prices, namely as we move demand queries further away from equivalence queries. Finally, it would be useful to determine whether the OR* bidding language [11] can be efficiently learned (and hence elicited), given this language's expressiveness and succinctness for a wide variety of valuation classes.
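As a rough illustration of the conversion just summarized, the sketch below drives one learning algorithm per agent, answers membership queries with value queries, and simulates each equivalence query with a demand query at prices that support the current hypotheses. This is a minimal sketch under stated assumptions: the Agent and Learner interfaces, the allocation routine, and the pricing routine are hypothetical stand-ins and are not the paper's actual constructions (which would use Lindahl-style, possibly bundle-specific prices).

import java.util.*;

// Minimal, self-contained sketch of the learning-to-elicitation conversion
// described in the text. All class and method names are hypothetical, and the
// allocation and pricing steps are only stubbed.
public class ElicitationFromLearning {

    // A black-box agent that answers value and demand queries.
    interface Agent {
        double value(Set<String> bundle);                    // value query
        Set<String> demand(Map<String, Double> itemPrices);  // demand query (item prices, a simplification)
    }

    // A per-agent learning algorithm maintaining a hypothesis valuation.
    // Its membership queries would be answered with value queries on the agent.
    interface Learner {
        double hypothesisValue(Set<String> bundle);          // current guess, used by the stubs below
        void addCounterexample(Set<String> bundle, double trueValue);
    }

    // Main loop: simulate each equivalence query with demand queries until the
    // hypotheses contain enough information to imply an optimal allocation.
    static List<Set<String>> elicit(List<Agent> agents, List<Learner> learners,
                                    Set<String> goods) {
        while (true) {
            // 1. Compute an allocation that is optimal w.r.t. the current hypotheses.
            List<Set<String>> allocation = optimalAllocation(learners, goods);
            // 2. Compute prices supporting that allocation (stub; a placeholder
            //    for the paper's price construction).
            Map<String, Double> prices = supportingPrices(learners, allocation, goods);
            // 3. Ask each agent a demand query at those prices. If every agent
            //    demands its allocated bundle, we are done.
            boolean allSatisfied = true;
            for (int i = 0; i < agents.size(); i++) {
                Set<String> demanded = agents.get(i).demand(prices);
                if (!demanded.equals(allocation.get(i))) {
                    // The demanded bundle plays the role of a counterexample,
                    // exactly as an equivalence query's answer would.
                    double trueValue = agents.get(i).value(demanded); // value query
                    learners.get(i).addCounterexample(demanded, trueValue);
                    allSatisfied = false;
                }
            }
            if (allSatisfied) {
                return allocation;
            }
        }
    }

    // Stub: winner determination over the hypothesis valuations.
    static List<Set<String>> optimalAllocation(List<Learner> learners, Set<String> goods) {
        throw new UnsupportedOperationException("allocation routine not sketched");
    }

    // Stub: supporting-price computation over the hypothesis valuations.
    static Map<String, Double> supportingPrices(List<Learner> learners,
                                                List<Set<String>> allocation,
                                                Set<String> goods) {
        throw new UnsupportedOperationException("pricing routine not sketched");
    }
}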
Applying Learning Algorithms to Preference Elicitation ABSTRACT We consider the parallels between the preference elicitation problem in combinatorial auctions and the problem of learning an unknown function from learning theory. We show that learning algorithms can be used as a basis for preference elicitation algorithms. The resulting elicitation algorithms perform a polynomial number of queries. We also give conditions under which the resulting algorithms have polynomial communication. Our conversion procedure allows us to generate combinatorial auction protocols from learning algorithms for polynomials, monotone DNF, and linear-threshold functions. In particular, we obtain an algorithm that elicits XOR bids with polynomial communication. 1. INTRODUCTION In a combinatorial auction, agents may bid on bundles of goods rather than individual goods alone. Communicating valuations in a one-shot fashion can be prohibitively expensive if the number of goods is only moderately large. Furthermore, it might even be hard for agents to determine their valuations for single bundles [14]. It is in the interest of such agents to have auction protocols These considerations motivate the need for auction protocols that minimize the communication and information revelation required to determine an optimal allocation of goods. There has been recent work exploring the links between the preference elicitation problem in combinatorial auctions and the problem of learning an unknown function from computational learning theory [5, 19]. In learning theory, the goal is to learn a function via various types of queries, such as "What is the function's value on these inputs?" In preference elicitation, the goal is to elicit enough partial information about preferences to be able to compute an optimal allocation. We show that any exact learning algorithm with membership and equivalence queries can be converted into a preference elicitation algorithm with value and demand queries. The resulting elicitation algorithm guarantees elicitation in a polynomial number of value and demand queries. Here we mean polynomial in the number of goods, agents, and the sizes of the agents' valuation functions in a given encoding scheme. Preference elicitation schemes have not traditionally considered this last parameter. We argue that complexity guarantees for elicitation schemes should allow dependence on this parameter. Introducing this parameter also allows us to guarantee polynomial worst-case communication, which usually cannot be achieved in the number of goods and agents alone. Finally, we use our conversion procedure to generate combinatorial auction protocols from learning algorithms for polynomials, monotone DNF, and linear-threshold functions. Of course, a one-shot combinatorial auction where agents provide their entire valuation functions at once would also have polynomial communication in the size of the agents' valuations, and only require one query. The advantage of our scheme is that agents can be viewed as "black-boxes" that provide incremental information about their valuations. There is no burden on the agents to formulate their valuations in an encoding scheme of the auctioneer's choosing. Also, with our scheme entire revelation only happens in the worst-case. For now, we leave the issue of incentives aside when deriving elicitation algorithms. 
Our focus is on the time and communication complexity of preference elicitation regardless of incentive constraints, and on the relationship between the complexities of learning and preference elicitation. Related work. Zinkevich et al. [19] consider the problem of learning restricted classes of valuation functions which can be represented using read-once formulas and Toolbox DNF. Their work only makes use of value queries, which are quite limited in power. Because we allow ourselves demand queries, we are able to derive an elicitation scheme for general valuation functions. Blum et al. [5] provide results relating the complexities of query learning and preference elicitation. They consider models with membership and equivalence queries in query learning, and value and demand queries in preference elicitation. They show that certain classes of functions can be efficiently learned yet not efficiently elicited, and vice-versa. In contrast, our work shows that given a more general (yet still quite standard) version of demand query than the type they consider, the complexity of preference elicitation is no greater than the complexity of learning. We will show that demand queries can simulate equivalence queries until we have enough information about valuations to imply a solution to the elicitation problem. Nisan and Segal [12] study the communication complexity of preference elicitation. They show that for many rich classes of valuations, the worst-case communication complexity of computing an optimal allocation is exponential. Their results apply to the "black-box" model of computational complexity. In this model algorithms are allowed to ask questions about agent valuations and receive honest responses, without any insight into how the agents internally compute their valuations. This is in fact the basic framework of learning theory. Our work also addresses the issue of communication complexity, and we are able to derive algorithms that provide significant communication guarantees despite Nisan and Segal's negative results. Their work motivates the need to rely on the sizes of agents' valuation functions in stating worst-case results. 7. CONCLUSIONS AND FUTURE WORK We have shown that exact learning algorithms with membership and equivalence queries can be used as a basis for preference elicitation algorithms with value and demand queries. At the heart of this result is the fact that demand queries may be viewed as modified equivalence queries, specialized to the problem of preference elicitation. Our result allows us to apply the wealth of available learning algorithms to the problem of preference elicitation. A learning approach to elicitation also motivates a different approach to designing elicitation algorithms that decomposes neatly across agent types. .) , she can design learning algorithms tailored to each agents' valuations and integrate them into an elicitation scheme. The resulting elicitation algorithm makes a polynomial number of queries, and makes polynomial communication if the original learning algorithms are efficient. We do not require that agent valuations can be learned with value and demand queries. Equivalence queries can only be, and need only be, simulated up to the point where an optimal allocation has been computed. This is the preference elicitation problem. 
Theorem 1 implies that elicitation with value and demand queries is no harder than learning with membership and equivalence queries, but it does not provide any asymptotic improvements over the learning algorithms' complexity. It would be interesting to find examples of valuation classes for which elicitation is easier than learning. Blum et al. [5] provide such an example when considering membership/value queries only (Theorem 4). In future work we plan to address the issue of incentives when converting learning algorithms to elicitation algorithms. In the learning setting, we usually assume that oracles will provide honest responses to queries; in the elicitation setting, agents are usually selfish and will provide possibly dishonest responses so as to maximize their utility. We also plan to implement the algorithms for learning polynomials and XOR bids as elicitation algorithms, and test their performance against other established combinatorial auction protocols [6, 15]. We conjecture that information revelation is reduced when moving from maximal to minimal Lindahl prices, namely as we move demand queries further away from equivalence queries.
C-56
A Hierarchical Process Execution Support for Grid Computing
Grid is an emerging infrastructure used to share resources among virtual organizations in a seamless manner and to provide breakthrough computing power at low cost. Nowadays there are dozens of academic and commercial products that allow execution of isolated tasks on grids, but few products support the enactment of long-running processes in a distributed fashion. In order to address such subject, this paper presents a programming model and an infrastructure that hierarchically schedules process activities using available nodes in a wide grid environment. Their advantages are automatic and structured distribution of activities and easy process monitoring and steering.
[ "hierarch process execut", "process execut", "grid comput", "distribut system", "distribut applic", "distribut process", "distribut schedul", "distribut execut", "distribut comput", "parallel comput", "parallel execut", "process descript", "grid architectur", "schedul algorithm", "process support", "distribut middlewar" ]
[ "P", "P", "P", "M", "M", "R", "R", "R", "R", "M", "M", "M", "M", "M", "R", "M" ]
A Hierarchical Process Execution Support for Grid Computing Fábio R. L. Cicerre Institute of Computing State University of Campinas Campinas, Brazil fcicerre@ic.unicamp.br Edmundo R. M. Madeira Institute of Computing State University of Campinas Campinas, Brazil edmundo@ic.unicamp.br Luiz E. Buzato Institute of Computing State University of Campinas Campinas, Brazil buzato@ic.unicamp.br ABSTRACT Grid is an emerging infrastructure used to share resources among virtual organizations in a seamless manner and to provide breakthrough computing power at low cost. Nowadays there are dozens of academic and commercial products that allow execution of isolated tasks on grids, but few products support the enactment of long-running processes in a distributed fashion. In order to address such subject, this paper presents a programming model and an infrastructure that hierarchically schedules process activities using available nodes in a wide grid environment. Their advantages are automatic and structured distribution of activities and easy process monitoring and steering. Categories and Subject Descriptors C.2.4 [Computer-Communication Networks]: Distributed Systems-distributed applications General Terms Design, Performance, Management, Algorithms 1. INTRODUCTION Grid computing is a model for wide-area distributed and parallel computing across heterogeneous networks in multiple administrative domains. This research field aims to promote resource sharing and to provide breakthrough computing power over this wide network of virtual organizations in a seamless manner [8]. Traditionally, as in Globus [6], Condor-G [9] and Legion [10], there is a minimal infrastructure that provides data resource sharing, computational resource utilization management, and distributed execution. Specifically, considering distributed execution, most existing grid infrastructures support the execution of isolated tasks, but they do not consider task interdependencies as found in processes (workflows) [12]. This deficiency hinders better scheduling algorithms, coordinated distributed execution, and automatic execution recovery. Few middleware infrastructures have been proposed that support process execution over the grid. In general, they model processes by interconnecting their activities through control and data dependencies. Among them, WebFlow [1] emphasizes an architecture to construct distributed processes; Opera-G [3] provides execution recovery and steering; GridFlow [5] focuses on improved scheduling algorithms that take advantage of activity dependencies; and SwinDew [13] supports totally distributed execution on peer-to-peer networks. However, such infrastructures contain scheduling algorithms that are either centralized by process [1, 3, 5] or completely distributed but difficult to monitor and control [13]. To address these constraints, this paper proposes a structured programming model for process description and a hierarchical process execution infrastructure. The programming model employs structured control flow to promote controlled and contextualized activity execution. Complementarily, the support infrastructure, which executes a process specification, takes advantage of the hierarchical structure of a specified process in order to distribute and schedule strongly dependent activities as a unit, allowing better execution performance and fault tolerance and providing localized communication.
The programming model and the support infrastructure, named Xavantes, are under implementation in order to show the feasibility of the proposed model and to demonstrate its two major advantages: to promote widely distributed process execution and scheduling, but in a controlled, structured and localized way. The next section describes the programming model, and Section 3 the support infrastructure for the proposed grid computing model. Section 4 demonstrates how the support infrastructure executes processes and distributes activities. Related works are presented and compared to the proposed model in Section 5. The last section concludes this paper, summarizing the advantages of the proposed hierarchical process execution support for the grid computing area, and lists some future work.
Figure 1: High-level framework of the programming model
2. PROGRAMMING MODEL The programming model designed for the grid computing architecture is very similar to that specified by the Business Process Execution Language (BPEL) [2]. Both describe processes in XML [4] documents, but the former specifies strictly synchronous and structured processes, and has more constructs for structured parallel control. The rationale behind its design is the possibility of hierarchically distributing process control and coordination based on structured constructs, in contrast to BPEL, which does not allow hierarchical composition of processes. In the proposed programming model, a process is a set of interdependent activities arranged to solve a certain problem. In detail, a process is composed of activities, subprocesses, and controllers (see Figure 1). Activities represent simple tasks that are executed on behalf of a process; subprocesses are processes executed in the context of a parent process; and controllers are control elements used to specify the execution order of these activities and subprocesses. Like structured languages, controllers can be nested and then determine the execution order of other controllers. Data are exchanged among process elements through parameters. They are passed by value, in case of simple objects, or by reference, if they are remote objects shared among elements of the same controller or process. External data can be accessed through data sources, such as relational databases or distributed objects. 2.1 Controllers Controllers are structured control constructs used to define the control flow of processes. There are sequential and parallel controllers. The sequential controller types are: block, switch, for and while. The block controller is a simple sequential construct, and the others mimic equivalent structured programming language constructs. Similarly, the parallel types are: par, parswitch, parfor and parwhile. They extend the respective sequential counterparts to allow parallel execution of process elements. All parallel controller types fork the execution of one or more process elements, and then wait for each execution to finish. Indeed, they contain a fork and a join of execution. To implement a conditional join, all parallel controller types contain an exit condition, evaluated each time an element execution finishes, in order to determine when the controller must end. The parfor and parwhile are the iterative versions of the parallel controller types. Both fork executions while the iteration condition is true.
This provides flexibility to determine, at run-time, the number of process elements to execute simultaneously. When compared to workflow languages, the parallel controller types represent structured versions of the workflow control constructs, because they can nest other controllers and also express the fixed and conditional forks and joins present in such languages. 2.2 Process Example This section presents an example of a prime number search application that receives a certain range of integers and returns a set of primes contained in this range. The whole computation is performed by a process, which uses a parallel controller to start and dispatch several concurrent activities of the same type, in order to find prime numbers. The portion of the XML document that describes the process and activity types is shown below.
<PROCESS_TYPE NAME="FindPrimes">
  <IN_PARAMETER TYPE="int" NAME="min"/>
  <IN_PARAMETER TYPE="int" NAME="max"/>
  <IN_PARAMETER TYPE="int" NAME="numPrimes"/>
  <IN_PARAMETER TYPE="int" NAME="numActs"/>
  <BODY>
    <PRE_CODE>
      setPrimes(new RemoteHashSet());
      parfor.setMin(getMin());
      parfor.setMax(getMax());
      parfor.setNumPrimes(getNumPrimes());
      parfor.setNumActs(getNumActs());
      parfor.setPrimes(getPrimes());
      parfor.setCounterBegin(0);
      parfor.setCounterEnd(getNumActs()-1);
    </PRE_CODE>
    <PARFOR NAME="parfor">
      <IN_PARAMETER TYPE="int" NAME="min"/>
      <IN_PARAMETER TYPE="int" NAME="max"/>
      <IN_PARAMETER TYPE="int" NAME="numPrimes"/>
      <IN_PARAMETER TYPE="int" NAME="numActs"/>
      <IN_PARAMETER TYPE="RemoteCollection" NAME="primes"/>
      <ITERATE>
        <PRE_CODE>
          int range = (getMax()-getMin()+1)/getNumActs();
          int minNum = range*getCounter()+getMin();
          int maxNum = minNum+range-1;
          if (getCounter() == getNumActs()-1) maxNum = getMax();
          findPrimes.setMin(minNum);
          findPrimes.setMax(maxNum);
          findPrimes.setNumPrimes(getNumPrimes());
          findPrimes.setPrimes(getPrimes());
        </PRE_CODE>
        <ACTIVITY TYPE="FindPrimes" NAME="findPrimes"/>
      </ITERATE>
    </PARFOR>
  </BODY>
  <OUT_PARAMETER TYPE="RemoteCollection" NAME="primes"/>
</PROCESS_TYPE>

<ACTIVITY_TYPE NAME="FindPrimes">
  <IN_PARAMETER TYPE="int" NAME="min"/>
  <IN_PARAMETER TYPE="int" NAME="max"/>
  <IN_PARAMETER TYPE="int" NAME="numPrimes"/>
  <IN_PARAMETER TYPE="RemoteCollection" NAME="primes"/>
  <CODE>
    for (int num=getMin(); num<=getMax(); num++) {
      // stop, required number of primes was found
      if (primes.size() >= getNumPrimes()) break;
      boolean prime = true;
      for (int i=2; i<num; i++) {
        if (num % i == 0) {
          prime = false;
          break;
        }
      }
      if (prime) {
        primes.add(new Integer(num));
      }
    }
  </CODE>
</ACTIVITY_TYPE>
Firstly, a process type that finds prime numbers, named FindPrimes, is defined. It receives, through its input parameters, a range of integers in which prime numbers have to be found, the number of primes to be returned, and the number of activities to be executed in order to perform this work. At the end, the found prime numbers are returned as a collection through its output parameter. This process contains a PARFOR controller aiming to execute a determined number of parallel activities. It iterates from 0 to getNumActs() - 1, which determines the number of activities, starting a parallel activity in each iteration. In this case, the controller divides the whole range of numbers into subranges of the same size and, in each iteration, starts a parallel activity that finds prime numbers in a specific subrange.
These activities receive a shared object by reference in order to store the prime numbers just found and to check whether the required number of primes has been reached. Finally, the activity type FindPrimes, used to find prime numbers in each subrange, is defined. It receives, through its input parameters, the range of numbers in which it has to find prime numbers, the total number of prime numbers to be found by the whole process, and, passed by reference, a collection object to store the found prime numbers. Between its CODE markers, there is simple code to find prime numbers, which iterates over the specified range and checks whether the current integer is prime. Additionally, in each iteration, the code checks whether the required number of primes, inserted in the primes collection by all concurrent activities, has been reached, and exits if so. The advantage of using controllers is that the support infrastructure can determine the point of execution the process is in, allowing automatic recovery and monitoring, and can instantiate and dispatch process elements only when there are enough computing resources available, reducing unnecessary overhead. Besides, due to their structured nature, controllers can be easily composed, and the support infrastructure can take advantage of this to distribute the nested controllers hierarchically to different machines over the grid, allowing enhanced scalability and fault tolerance.
Figure 2: Infrastructure architecture
3. SUPPORT INFRASTRUCTURE The support infrastructure comprises tools for specification, and services for execution and monitoring of structured processes in highly distributed, heterogeneous and autonomous grid environments. It has services to monitor availability of resources in the grid, to interpret processes and schedule activities and controllers, and to execute activities. 3.1 Infrastructure Architecture The support infrastructure architecture is composed of groups of machines and data repositories, which preserve their administrative autonomy. Generally, localized machines and repositories, such as in local networks or clusters, form a group. Each machine in a group must have a Java Virtual Machine (JVM) [11] and a Java Runtime Library, besides a combination of the following grid support services: group manager (GM), process coordinator (PC) and activity manager (AM). This combination determines what kind of group node it represents: a group server, a process server, or simply a worker (see Figure 2). In a group there are one or more group managers, but only one acts as primary and the others as replicas. They are responsible for maintaining availability information about group machines. Moreover, group managers maintain references to data resources of the group. They use group repositories to persist and recover the location of nodes and their availability. To control process execution, there are one or more process coordinators per group. They are responsible for instantiating and executing processes and controllers, selecting resources, and scheduling and dispatching activities to workers. In order to persist and recover process execution and data, and also to load process specifications, they use group repositories. Finally, in several group nodes there is an activity manager.
It is responsible for executing activities on the hosting machine on behalf of the group process coordinators, and for informing group managers about the current availability of the associated machine. Activity managers also have pending activity queues, containing activities to be executed. 3.2 Inter-group Relationships In order to model a real grid architecture, the infrastructure must comprise several, potentially all, local networks, as the Internet does. To satisfy this intent, local groups are connected to others, directly or indirectly, through their group managers (see Figure 3).
Figure 3: Inter-group relationships
Each group manager deals with the requests of its group (represented by dashed ellipses), in order to register local machines and maintain the corresponding availability information. Additionally, group managers communicate with group managers of other groups. Each group manager exports coarse availability information to group managers of adjacent groups and also receives requests from other external services to furnish detailed availability information. In this way, if there are resources available in external groups, it is possible to send processes, controllers and activities to these groups in order to execute them in external process coordinators and activity managers, respectively. 4. PROCESS EXECUTION In the proposed grid architecture, a process is specified in XML, using controllers to determine control flow; referencing other processes and activities; and passing objects to their parameters in order to define data flow. Once specified, the process is compiled into a set of classes, which represent specific process, activity and controller types. At this time, it can be instantiated and executed by a process coordinator. 4.1 Dynamic Model To execute a specified process, it must be instantiated by referencing its type on a process coordinator service of a specific group. Also, the initial parameters must be passed to it, and then it can be started. The process coordinator carries out the process by executing the process elements included in its body sequentially. If the element is a process or a controller, the process coordinator can choose to execute it in the same machine or to pass it to another process coordinator in a remote machine, if available. Otherwise, if the element is an activity, the coordinator passes it to an activity manager of an available machine. Process coordinators request the local group manager to find available machines that contain the required service, process coordinator or activity manager, in order to execute a process element. Then, the group manager can return a local machine, a machine in another group, or none, depending on the availability of such a resource in the grid. It returns an external worker (activity manager machine) if there are no available workers in the local group, and it returns an external process server (process coordinator machine) if there are no available process servers or workers in the local group. Obeying this rule, group managers try to find process servers in the same group as the available workers. This procedure is followed recursively by all process coordinators that execute subprocesses or controllers of a process.
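The selection rule just described can be summarized in a short sketch. All class and method names below are hypothetical (the paper gives no code for the group manager), and the branch taken when the local group has workers but no process server reflects one reading of the rule, namely that coordination then stays with the requesting coordinator.

import java.util.Optional;

// Sketch of the resource-selection rule, as a group manager might apply it.
// The availability lookups are stubs; in the actual infrastructure they would
// presumably consult the group repository and adjacent group managers.
class GroupManagerSketch {

    enum ServiceKind { PROCESS_COORDINATOR, ACTIVITY_MANAGER }

    // Placeholder reference to a machine hosting a given service.
    record Machine(String host, ServiceKind kind) {}

    Optional<Machine> localWorker() { return Optional.empty(); }
    Optional<Machine> localProcessServer() { return Optional.empty(); }
    Optional<Machine> externalWorker() { return Optional.empty(); }
    Optional<Machine> externalProcessServer() { return Optional.empty(); }

    // For an activity: prefer a local worker, otherwise an external one.
    Optional<Machine> findActivityManager() {
        return localWorker().or(this::externalWorker);
    }

    // For a subprocess or controller: prefer a local process server; go to an
    // external process server only when the local group has neither process
    // servers nor workers available, keeping coordination near the workers.
    Optional<Machine> findProcessCoordinator() {
        Optional<Machine> local = localProcessServer();
        if (local.isPresent()) return local;
        if (localWorker().isPresent()) {
            // None returned: the requesting coordinator keeps the element,
            // staying in the group where workers are available.
            return Optional.empty();
        }
        return externalProcessServer();
    }
}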
Therefore, because processes are structured by nesting process elements, the process execution is automatically distributed hierarchically through one or more grid groups according to the availability and locality of computing resources. The advantage of this distribution model is wide-area execution, which takes advantage of potentially all grid resources, and localized communication of process elements, because strongly dependent elements, which are under the same controller, are placed in the same or nearby groups. Besides, it supports easy monitoring and steering, due to its structured controllers, which maintain state and control over their inner elements. 4.2 Process Execution Example Revisiting the example shown in Section 2.2, a process type is specified to find prime numbers in a certain range of numbers. In order to solve this problem, it creates a number of activities using the parfor controller. Each activity, then, finds primes in a determined part of the range of numbers. Figure 4 shows an instance of this process type executing over the proposed infrastructure.
Figure 4: FindPrimes process execution
A FindPrimes process instance is created in an available process coordinator (PC), which begins executing the parfor controller. In each iteration of this controller, the process coordinator requests an available activity manager (AM) from the group manager (GM) in order to execute a new instance of the FindPrimes activity. If there is any AM available in this group or in an external one, the process coordinator sends the activity class and initial parameters to this activity manager and requests its execution. Otherwise, if no activity manager is available, the controller enters a wait state until an activity manager is made available, or is created. In parallel, whenever an activity finishes, its result is sent back to the process coordinator, which records it in the parfor controller. Then, the controller waits until all activities that have been started are finished, and it ends. At this point, the process coordinator verifies that there is no other process element to execute and finishes the process. 5. RELATED WORK There are several academic and commercial products that promise to support grid computing, aiming to provide interfaces, protocols and services to leverage the use of widely distributed resources in heterogeneous and autonomous networks. Among them, Globus [6], Condor-G [9] and Legion [10] are widely known. Aiming to standardize grid interfaces and services, the Open Grid Services Architecture (OGSA) [7] has been defined. The grid architectures generally have services that manage computing resources and distribute the execution of independent tasks on available ones. However, emerging architectures maintain task dependencies and automatically execute tasks in a correct order. They take advantage of these dependencies to provide automatic recovery, and better distribution and scheduling algorithms.
In order to surpass this limitation, systems like SwinDew [13] proposed a widely distributed process execution, in which each node knows where to execute the next activity or join activities in a peer-to-peer environment. In the specific area of activity distribution and scheduling, emphasized in this work, GridFlow [5] is noteworthy. It uses two-level scheduling: global and local. At the local level, it has services that predict computing resource utilization and activity duration. Based on this information, GridFlow employs a PERT-like technique that tries to forecast the activity execution start time and duration in order to better schedule activities on the available resources. The architecture proposed in this paper, which encompasses a programming model and an execution support infrastructure, is widely decentralized, unlike WebFlow and Opera-G, being more scalable and fault-tolerant. But, like the latter, it is designed to support execution recovery. Compared to SwinDew, the proposed architecture contains widely distributed process coordinators, which coordinate processes or parts of them, unlike SwinDew, where each node has a limited view of the process: only the activity that starts next. This makes it easier to monitor and control processes. Finally, the support infrastructure breaks a process and its subprocesses up for grid execution, allowing a group to request another group to coordinate and execute process elements on its behalf. This is different from GridFlow, which can execute a process in at most two levels, with the global level solely responsible for scheduling subprocesses in other groups. This can limit the overall performance of processes, and make the system less scalable. 6. CONCLUSION AND FUTURE WORK Grid computing is an emerging research field that intends to promote distributed and parallel computing over the wide-area network of heterogeneous and autonomous administrative domains in a seamless way, similar to what the Internet does for data sharing. There are several products that support execution of independent tasks over the grid, but only a few support the execution of processes with interdependent tasks. In order to address such subject, this paper proposes a programming model and a support infrastructure that allow the execution of structured processes in a widely distributed and hierarchical manner. This support infrastructure provides automatic, structured and recursive distribution of process elements over groups of available machines; better resource use, due to its on-demand creation of process elements; easy process monitoring and steering, due to its structured nature; and localized communication among strongly dependent process elements, which are placed under the same controller. These features contribute to better scalability, fault tolerance and control for process execution over the grid.
Finally, it would be interesting to investigate schemes for dynamic modification of processes over the grid, in order to evolve and adapt long-running processes to the continuously changing grid environment. 7. ACKNOWLEDGMENTS We would like to thank Paulo C. Oliveira, from the State Treasury Department of Sao Paulo, for his thorough review and insightful comments. 8. REFERENCES
[1] E. Akarsu, G. C. Fox, W. Furmanski, and T. Haupt. WebFlow: High-Level Programming Environment and Visual Authoring Toolkit for High Performance Distributed Computing. In Proceedings of Supercomputing (SC98), 1998.
[2] T. Andrews and F. Curbera. Specification: Business Process Execution Language for Web Services Version 1.1. IBM DeveloperWorks, 2003. Available at http://www-106.ibm.com/developerworks/library/wsbpel.
[3] W. Bausch. OPERA-G: A Microkernel for Computational Grids. PhD thesis, Swiss Federal Institute of Technology, Zurich, 2004.
[4] T. Bray and J. Paoli. Extensible Markup Language (XML) 1.0. XML Core WG, W3C, 2004. Available at http://www.w3.org/TR/2004/REC-xml-20040204.
[5] J. Cao, S. A. Jarvis, S. Saini, and G. R. Nudd. GridFlow: Workflow Management for Grid Computing. In Proceedings of the International Symposium on Cluster Computing and the Grid (CCGrid 2003), 2003.
[6] I. Foster and C. Kesselman. Globus: A Metacomputing Infrastructure Toolkit. Intl. J. Supercomputer Applications, 11(2):115-128, 1997.
[7] I. Foster, C. Kesselman, J. M. Nick, and S. Tuecke. The Physiology of the Grid: An Open Grid Services Architecture for Distributed Systems Integration. Open Grid Service Infrastructure WG, Global Grid Forum, 2002.
[8] I. Foster, C. Kesselman, and S. Tuecke. The Anatomy of the Grid: Enabling Scalable Virtual Organizations. The Intl. Journal of High Performance Computing Applications, 15(3):200-222, 2001.
[9] J. Frey, T. Tannenbaum, M. Livny, I. Foster, and S. Tuecke. Condor-G: A Computational Management Agent for Multi-institutional Grids. In Proceedings of the Tenth Intl. Symposium on High Performance Distributed Computing (HPDC-10). IEEE, 2001.
[10] A. S. Grimshaw and W. A. Wulf. Legion - A View from 50,000 Feet. In Proceedings of the Fifth Intl. Symposium on High Performance Distributed Computing. IEEE, 1996.
[11] T. Lindholm and F. Yellin. The Java Virtual Machine Specification. Sun Microsystems, second edition, 1999.
[12] B. R. Schulze and E. R. M. Madeira. Grid Computing with Active Services. Concurrency and Computation: Practice and Experience Journal, 5(16):535-542, 2004.
[13] J. Yan, Y. Yang, and G. K. Raikundalia. Enacting Business Processes in a Decentralised Environment with P2P-Based Workflow Support. In Proceedings of the Fourth Intl. Conference on Web-Age Information Management (WAIM 2003), 2003.
A Hierarchical Process Execution Support for Grid Computing ABSTRACT Grid is an emerging infrastructure used to share resources among virtual organizations in a seamless manner and to provide breakthrough computing power at low cost. Nowadays there are dozens of academic and commercial products that allow execution of isolated tasks on grids, but few products support the enactment of long-running processes in a distributed fashion. In order to address such subject, this paper presents a programming model and an infrastructure that hierarchically schedules process activities using available nodes in a wide grid environment. Their advantages are automatic and structured distribution of activities and easy process monitoring and steering. 1. INTRODUCTION Grid computing is a model for wide-area distributed and parallel computing across heterogeneous networks in multiple administrative domains. This research field aims to promote sharing of resources and provides breakthrough computing power over this wide network of virtual organizations in a seamless manner [8]. Traditionally, as in Globus [6], Condor-G [9] and Legion [10], there is a minimal infrastructure that provides data resource sharing, computational resource utilization management, and distributed execution. Specifically, considering distributed execution, most of the existing grid infrastructures supports execution of isolated tasks, but they do not consider their task interdependencies as in processes (workflows) [12]. This deficiency restricts better scheduling algorithms, distributed execution coordination and automatic execution recovery. There are few proposed middleware infrastructures that support process execution over the grid. In general, they model processes by interconnecting their activities through control and data dependencies. Among them, WebFlow [1] emphasizes an architecture to construct distributed processes; Opera-G [3] provides execution recovering and steering, GridFlow [5] focuses on improved scheduling algorithms that take advantage of activity dependencies, and SwinDew [13] supports totally distributed execution on peer-to-peer networks. However, such infrastructures contain scheduling algorithms that are centralized by process [1, 3, 5], or completely distributed, but difficult to monitor and control [13]. In order to address such constraints, this paper proposes a structured programming model for process description and a hierarchical process execution infrastructure. The programming model employs structured control flow to promote controlled and contextualized activity execution. Complementary, the support infrastructure, which executes a process specification, takes advantage of the hierarchical structure of a specified process in order to distribute and schedule strong dependent activities as a unit, allowing a better execution performance and fault-tolerance and providing localized communication. The programming model and the support infrastructure, named X avantes, are under implementation in order to show the feasibility of the proposed model and to demonstrate its two major advantages: to promote widely distributed process execution and scheduling, but in a controlled, structured and localized way. Next Section describes the programming model, and Section 3, the support infrastructure for the proposed grid computing model. Section 4 demonstrates how the support infrastructure executes processes and distributes activities. Related works are presented and compared to the proposed model in Section 5. 
The last Section concludes this paper encompassing the advantages of the proposed hierarchical process execution support for the grid computing area and lists some future works. Figure 1: High-level framework of the programming model 2. PROGRAMMING MODEL The programming model designed for the grid computing architecture is very similar to the specified to the Business Process Execution Language (BPEL) [2]. Both describe processes in XML [4] documents, but the former specifies processes strictly synchronous and structured, and has more constructs for structured parallel control. The rationale behind of its design is the possibility of hierarchically distribute the process control and coordination based on structured constructs, differently from BPEL, which does not allow hierarchical composition of processes. In the proposed programming model, a process is a set of interdependent activities arranged to solve a certain problem. In detail, a process is composed of activities, subprocesses, and controllers (see Figure 1). Activities represent simple tasks that are executed on behalf of a process; subprocesses are processes executed in the context of a parent process; and controllers are control elements used to specify the execution order of these activities and subprocesses. Like structured languages, controllers can be nested and then determine the execution order of other controllers. Data are exchanged among process elements through parameters. They are passed by value, in case of simple objects, or by reference, if they are remote objects shared among elements of the same controller or process. External data can be accessed through data sources, such as relational databases or distributed objects. 2.1 Controllers Controllers are structured control constructs used to define the control flow of processes. There are sequential and parallel controllers. The sequential controller types are: block, switch, for and while. The block controller is a simple sequential construct, and the others mimic equivalent structured programming language constructs. Similarly, the parallel types are: par, parswitch, parfor and parwhile. They extend the respective sequential counterparts to allow parallel execution of process elements. All parallel controller types fork the execution of one or more process elements, and then, wait for each execution to finish. Indeed, they contain a fork and a join of execution. Aiming to implement a conditional join, all parallel controller types contain an exit condition, evaluated all time that an element execution finishes, in order to determine when the controller must end. The parfor and parwhile are the iterative versions of the parallel controller types. Both fork executions while the iteration condition is true. This provides flexibility to determine, at run-time, the number of process elements to execute simultaneously. When compared to workflow languages, the parallel controller types represent structured versions of the workflow control constructors, because they can nest other controllers and also can express fixed and conditional forks and joins, present in such languages. 2.2 Process Example This section presents an example of a prime number search application that receives a certain range of integers and returns a set of primes contained in this range. The whole computation is made by a process, which uses a parallel controller to start and dispatch several concurrent activities of the same type, in order to find prime numbers. 
The portion of the XML document that describes the process and activity types is shown below. Firstly, a process type that finds prime numbers, named FindPrimes, is defined. It receives, through its input parameters, a range of integers in which prime numbers have to be found, the number of primes to be returned, and the number of activities to be executed in order to perform this work. At the end, the found prime numbers are returned as a collection through its output parameter. This process contains a PARFOR controller aiming to execute a determined number of parallel activities. It iterates from 0 to getNumActs () - 1, which determines the number of activities, starting a parallel activity in each iteration. In such case, the controller divides the whole range of numbers in subranges of the same size, and, in each iteration, starts a parallel activity that finds prime numbers in a specific subrange. These activities receive a shared object by reference in order to store the prime numbers just found and control if the required number of primes has been reached. Finally, it is defined the activity type, FindPrimes, used to find prime numbers in each subrange. It receives, through its input parameters, the range of numbers in which it has to find prime numbers, the total number of prime numbers to be found by the whole process, and, passed by reference, a collection object to store the found prime numbers. Between its CODE markers, there is a simple code to find prime numbers, which iterates over the specified range and verifies if the current integer is a prime. Additionally, in each iteration, the code verifies if the required number of primes, inserted in the primes collection by all concurrent activities, has been reached, and exits if true. The advantage of using controllers is the possibility of the support infrastructure determines the point of execution the process is in, allowing automatic recovery and monitoring, and also the capability of instantiating and dispatching process elements only when there are enough computing resources available, reducing unnecessary overhead. Besides, due to its structured nature, they can be easily composed and the support infrastructure can take advantage of this in order to distribute hierarchically the nested controllers to Figure 2: Infrastructure architecture different machines over the grid, allowing enhanced scalability and fault-tolerance. 3. SUPPORT INFRASTRUCTURE The support infrastructure comprises tools for specification, and services for execution and monitoring of structured processes in highly distributed, heterogeneous and autonomous grid environments. It has services to monitor availability of resources in the grid, to interpret processes and schedule activities and controllers, and to execute activities. 3.1 Infrastructure Architecture The support infrastructure architecture is composed of groups of machines and data repositories, which preserves its administrative autonomy. Generally, localized machines and repositories, such as in local networks or clusters, form a group. Each machine in a group must have a Java Virtual Machine (JVM) [11], and a Java Runtime Library, besides a combination of the following grid support services: group manager (GM), process coordinator (PC) and activity manager (AM). This combination determines what kind of group node it represents: a group server, a process server, or simply a worker (see Figure 2). 
In a group there are one or more group managers, but only one acts as primary and the others, as replicas. They are responsible to maintain availability information of group machines. Moreover, group managers maintain references to data resources of the group. They use group repositories to persist and recover the location of nodes and their availability. To control process execution, there are one or more process coordinators per group. They are responsible to instantiate and execute processes and controllers, select resources, and schedule and dispatch activities to workers. In order to persist and recover process execution and data, and also load process specification, they use group repositories. Finally, in several group nodes there is an activity manager. It is responsible to execute activities in the hosted machine on behalf of the group process coordinators, and to inform the current availability of the associated machine to group managers. They also have pendent activity queues, containing activities to be executed. 3.2 Inter-group Relationships In order to model real grid architecture, the infrastructure must comprise several, potentially all, local networks, like Internet does. Aiming to satisfy this intent, local groups are Figure 3: Inter-group relationships connected to others, directly or indirectly, through its group managers (see Figure 3). Each group manager deals with requests of its group (represented by dashed ellipses), in order to register local machines and maintain correspondent availability. Additionally, group managers communicate to group managers of other groups. Each group manager exports coarse availability information to group managers of adjacent groups and also receives requests from other external services to furnish detailed availability information. In this way, if there are resources available in external groups, it is possible to send processes, controllers and activities to these groups in order to execute them in external process coordinators and activity managers, respectively. 4. PROCESS EXECUTION In the proposed grid architecture, a process is specified in XML, using controllers to determine control flow; referencing other processes and activities; and passing objects to their parameters in order to define data flow. After specified, the process is compiled in a set of classes, which represent specific process, activity and controller types. At this time, it can be instantiated and executed by a process coordinator. 4.1 Dynamic Model To execute a specified process, it must be instantiated by referencing its type on a process coordinator service of a specific group. Also, the initial parameters must be passed to it, and then it can be started. The process coordinator carries out the process by executing the process elements included in its body sequentially. If the element is a process or a controller, the process coordinator can choose to execute it in the same machine or to pass it to another process coordinator in a remote machine, if available. Else, if the element is an activity, it passes to an activity manager of an available machine. Process coordinators request the local group manager to find available machines that contain the required service, process coordinator or activity manager, in order to execute a process element. Then, it can return a local machine, a machine in another group or none, depending on the availability of such resource in the grid. 
It returns an external worker (activity manager machine) if there are no available workers in the local group; and, it returns an external process server (process coordinator machine), if there are no available process servers or workers in the local group. Obeying this rule, group managers try to find process servers in the same group of the available workers. Such procedure is followed recursively by all process co Figure 4: FindPrimes process execution ordinators that execute subprocesses or controllers of a process. Therefore, because processes are structured by nesting process elements, the process execution is automatically distributed hierarchically through one or more grid groups according to the availability and locality of computing resources. The advantage of this distribution model is wide area execution, which takes advantage of potentially all grid resources; and localized communication of process elements, because strong dependent elements, which are under the same controller, are placed in the same or near groups. Besides, it supports easy monitoring and steering, due to its structured controllers, which maintain state and control over its inner elements. 4.2 Process Execution Example Revisiting the example shown in Section 2.2, a process type is specified to find prime numbers in a certain range of numbers. In order to solve this problem, it creates a number of activities using the parfor controller. Each activity, then, finds primes in a determined part of the range of numbers. Figure 4 shows an instance of this process type executing over the proposed infrastructure. A FindPrimes process instance is created in an available process coordinator (PC), which begins executing the parfor controller. In each iteration of this controller, the process coordinator requests to the group manager (GM) an available activity manager (AM) in order to execute a new instance of the FindPrimes activity. If there is any AM available in this group or in an external one, the process coordinator sends the activity class and initial parameters to this activity manager and requests its execution. Else, if no activity manager is available, then the controller enters in a wait state until an activity manager is made available, or is created. In parallel, whenever an activity finishes, its result is sent back to the process coordinator, which records it in the parfor controller. Then, the controller waits until all activities that have been started are finished, and it ends. At this point, the process coordinator verifies that there is no other process element to execute and finishes the process. 5. RELATED WORK There are several academic and commercial products that promise to support grid computing, aiming to provide interfaces, protocols and services to leverage the use of widely 91 Middleware 2004 Companion distributed resources in heterogeneous and autonomous networks. Among them, Globus [6], Condor-G [9] and Legion [10] are widely known. Aiming to standardize interfaces and services to grid, the Open Grid Services Architecture (OGSA) [7] has been defined. The grid architectures generally have services that manage computing resources and distribute the execution of independent tasks on available ones. However, emerging architectures maintain task dependencies and automatically execute tasks in a correct order. They take advantage of these dependencies to provide automatic recovery, and better distribution and scheduling algorithms. 
Following such model, WebFlow [1] is a process specification tool and execution environment constructed over CORBA that allows graphical composition of activities and their distributed execution in a grid environment. Opera-G [3], like WebFlow, uses a process specification language similar to the data flow diagram and workflow languages, but furnishes automatic execution recovery and limited steering of process execution. The previously referred architectures and others that enact processes over the grid have a centralized coordination. In order to surpass this limitation, systems like SwinDew [13] proposed a widely distributed process execution, in which each node knows where to execute the next activity or join activities in a peer-to-peer environment. In the specific area of activity distribution and scheduling, emphasized in this work, GridFlow [5] is remarkable. It uses a two-level scheduling: global and local. In the local level, it has services that predict computing resource utilization and activity duration. Based on this information, GridFlow employs a PERT-like technique that tries to forecast the activity execution start time and duration in order to better schedule them to the available resources. The architecture proposed in this paper, which encompasses a programming model and an execution support infrastructure, is widely decentralized, differently from WebFlow and Opera-G, being more scalable and fault-tolerant. But, like the latter, it is designed to support execution recovery. Comparing to SwinDew, the proposed architecture contains widely distributed process coordinators, which coordinate processes or parts of them, differently from SwinDew where each node has a limited view of the process: only the activity that starts next. This makes easier to monitor and control processes. Finally, the support infrastructure breaks the process and its subprocesses for grid execution, allowing a group to require another group for the coordination and execution of process elements on behalf of the first one. This is different from GridFlow, which can execute a process in at most two levels, having the global level as the only responsible to schedule subprocesses in other groups. This can limit the overall performance of processes, and make the system less scalable. 6. CONCLUSION AND FUTURE WORK Grid computing is an emerging research field that intends to promote distributed and parallel computing over the wide area network of heterogeneous and autonomous administrative domains in a seamless way, similar to what Internet does to the data sharing. There are several products that support execution of independent tasks over grid, but only a few supports the execution of processes with interdependent tasks. In order to address such subject, this paper proposes a programming model and a support infrastructure that allow the execution of structured processes in a widely distributed and hierarchical manner. This support infrastructure provides automatic, structured and recursive distribution of process elements over groups of available machines; better resource use, due to its on demand creation of process elements; easy process monitoring and steering, due to its structured nature; and localized communication among strong dependent process elements, which are placed under the same controller. These features contribute to better scalability, fault-tolerance and control for processes execution over the grid. 
Moreover, it opens the door to better scheduling algorithms, recovery mechanisms and dynamic modification schemes. The next step will be the implementation of a recovery mechanism that uses the execution and data state of processes and controllers to recover process execution. After that, it is desirable to advance the scheduling algorithm so that it forecasts machine use in the same or other groups and foresees the start times of process elements, in order to use this information to pre-allocate resources and thus obtain better process execution performance. Finally, it is interesting to investigate schemes for dynamic modification of processes over the grid, in order to evolve and adapt long-running processes to the continuously changing grid environment.
A Hierarchical Process Execution Support for Grid Computing ABSTRACT Grid is an emerging infrastructure used to share resources among virtual organizations in a seamless manner and to provide breakthrough computing power at low cost. Nowadays there are dozens of academic and commercial products that allow execution of isolated tasks on grids, but few products support the enactment of long-running processes in a distributed fashion. In order to address such subject, this paper presents a programming model and an infrastructure that hierarchically schedules process activities using available nodes in a wide grid environment. Their advantages are automatic and structured distribution of activities and easy process monitoring and steering. 1. INTRODUCTION Grid computing is a model for wide-area distributed and parallel computing across heterogeneous networks in multiple administrative domains. This research field aims to promote sharing of resources and provides breakthrough computing power over this wide network of virtual organizations in a seamless manner [8]. Traditionally, as in Globus [6], Condor-G [9] and Legion [10], there is a minimal infrastructure that provides data resource sharing, computational resource utilization management, and distributed execution. Specifically, considering distributed execution, most of the existing grid infrastructures supports execution of isolated tasks, but they do not consider their task interdependencies as in processes (workflows) [12]. This deficiency restricts better scheduling algorithms, distributed execution coordination and automatic execution recovery. There are few proposed middleware infrastructures that support process execution over the grid. In general, they model processes by interconnecting their activities through control and data dependencies. Among them, WebFlow [1] emphasizes an architecture to construct distributed processes; Opera-G [3] provides execution recovering and steering, GridFlow [5] focuses on improved scheduling algorithms that take advantage of activity dependencies, and SwinDew [13] supports totally distributed execution on peer-to-peer networks. However, such infrastructures contain scheduling algorithms that are centralized by process [1, 3, 5], or completely distributed, but difficult to monitor and control [13]. In order to address such constraints, this paper proposes a structured programming model for process description and a hierarchical process execution infrastructure. The programming model employs structured control flow to promote controlled and contextualized activity execution. Complementary, the support infrastructure, which executes a process specification, takes advantage of the hierarchical structure of a specified process in order to distribute and schedule strong dependent activities as a unit, allowing a better execution performance and fault-tolerance and providing localized communication. The programming model and the support infrastructure, named X avantes, are under implementation in order to show the feasibility of the proposed model and to demonstrate its two major advantages: to promote widely distributed process execution and scheduling, but in a controlled, structured and localized way. Next Section describes the programming model, and Section 3, the support infrastructure for the proposed grid computing model. Section 4 demonstrates how the support infrastructure executes processes and distributes activities. Related works are presented and compared to the proposed model in Section 5. 
The last Section concludes this paper encompassing the advantages of the proposed hierarchical process execution support for the grid computing area and lists some future works. Figure 1: High-level framework of the programming model 2. PROGRAMMING MODEL 2.1 Controllers 2.2 Process Example 3. SUPPORT INFRASTRUCTURE 3.1 Infrastructure Architecture 3.2 Inter-group Relationships 4. PROCESS EXECUTION 4.1 Dynamic Model 4.2 Process Execution Example 5. RELATED WORK There are several academic and commercial products that promise to support grid computing, aiming to provide interfaces, protocols and services to leverage the use of widely distributed resources in heterogeneous and autonomous networks. 6. CONCLUSION AND FUTURE WORK Grid computing is an emerging research field that intends to promote distributed and parallel computing over a wide-area network of heterogeneous and autonomous administrative domains in a seamless way, similar to what the Internet does for data sharing. There are several products that support the execution of independent tasks over the grid, but only a few support the execution of processes with interdependent tasks. In order to address this subject, this paper proposes a programming model and a support infrastructure that allow the execution of structured processes in a widely distributed and hierarchical manner. This support infrastructure provides automatic, structured and recursive distribution of process elements over groups of available machines; better resource use, due to its on-demand creation of process elements; easy process monitoring and steering, due to its structured nature; and localized communication among strongly dependent process elements, which are placed under the same controller. These features contribute to better scalability, fault tolerance and control for process execution over the grid. Moreover, it opens the door to better scheduling algorithms, recovery mechanisms and dynamic modification schemes. The next step will be the implementation of a recovery mechanism that uses the execution and data state of processes and controllers to recover process execution. After that, it is desirable to advance the scheduling algorithm so that it forecasts machine use in the same or other groups and foresees the start times of process elements, in order to use this information to pre-allocate resources and thus obtain better process execution performance. Finally, it is interesting to investigate schemes for dynamic modification of processes over the grid, in order to evolve and adapt long-running processes to the continuously changing grid environment.
A Hierarchical Process Execution Support for Grid Computing ABSTRACT Grid is an emerging infrastructure used to share resources among virtual organizations in a seamless manner and to provide breakthrough computing power at low cost. Nowadays there are dozens of academic and commercial products that allow execution of isolated tasks on grids, but few products support the enactment of long-running processes in a distributed fashion. In order to address such subject, this paper presents a programming model and an infrastructure that hierarchically schedules process activities using available nodes in a wide grid environment. Their advantages are automatic and structured distribution of activities and easy process monitoring and steering. 1. INTRODUCTION Grid computing is a model for wide-area distributed and parallel computing across heterogeneous networks in multiple administrative domains. This research field aims to promote sharing of resources and provides breakthrough computing power over this wide network of virtual organizations in a seamless manner [8]. Specifically, considering distributed execution, most of the existing grid infrastructures support execution of isolated tasks, but they do not consider their task interdependencies as in processes (workflows) [12]. This deficiency restricts better scheduling algorithms, distributed execution coordination and automatic execution recovery. There are few proposed middleware infrastructures that support process execution over the grid. In general, they model processes by interconnecting their activities through control and data dependencies. However, such infrastructures contain scheduling algorithms that are centralized by process [1, 3, 5], or completely distributed, but difficult to monitor and control [13]. In order to address such constraints, this paper proposes a structured programming model for process description and a hierarchical process execution infrastructure. The programming model employs structured control flow to promote controlled and contextualized activity execution. Complementary, the support infrastructure, which executes a process specification, takes advantage of the hierarchical structure of a specified process in order to distribute and schedule strong dependent activities as a unit, allowing a better execution performance and fault-tolerance and providing localized communication. The programming model and the support infrastructure, named Xavantes, are under implementation in order to show the feasibility of the proposed model and to demonstrate its two major advantages: to promote widely distributed process execution and scheduling, but in a controlled, structured and localized way. Next Section describes the programming model, and Section 3, the support infrastructure for the proposed grid computing model. Section 4 demonstrates how the support infrastructure executes processes and distributes activities. Related works are presented and compared to the proposed model in Section 5. The last Section concludes this paper encompassing the advantages of the proposed hierarchical process execution support for the grid computing area and lists some future works. Figure 1: High-level framework of the programming model 5. RELATED WORK There are several academic and commercial products that promise to support grid computing, aiming to provide interfaces, protocols and services to leverage the use of widely distributed resources in heterogeneous and autonomous networks. 6.
CONCLUSION AND FUTURE WORK There are several products that support the execution of independent tasks over the grid, but only a few support the execution of processes with interdependent tasks. In order to address this subject, this paper proposes a programming model and a support infrastructure that allow the execution of structured processes in a widely distributed and hierarchical manner. These features contribute to better scalability, fault tolerance and control for process execution over the grid. Moreover, it opens the door to better scheduling algorithms, recovery mechanisms and dynamic modification schemes. The next step will be the implementation of a recovery mechanism that uses the execution and data state of processes and controllers to recover process execution. Finally, it is interesting to investigate schemes for dynamic modification of processes over the grid, in order to evolve and adapt long-running processes to the continuously changing grid environment.
C-42
Demonstration of Grid-Enabled Ensemble Kalman Filter Data Assimilation Methodology for Reservoir Characterization
Ensemble Kalman filter data assimilation methodology is a popular approach for hydrocarbon reservoir simulations in energy exploration. In this approach, an ensemble of geological models and production data of oil fields is used to forecast the dynamic response of oil wells. The Schlumberger ECLIPSE software is used for these simulations. Since models in the ensemble do not communicate, a message-passing implementation is a good choice. Each model checks out an ECLIPSE license and therefore, parallelizability of reservoir simulations depends on the number of licenses available. We have Grid-enabled the ensemble Kalman filter data assimilation methodology for the TIGRE Grid computing environment. By pooling the licenses and computing resources across the collaborating institutions using the GridWay metascheduler and the TIGRE environment, the computational accuracy can be increased while reducing the simulation runtime. In this paper, we provide an account of our efforts in Grid-enabling the ensemble Kalman Filter data assimilation methodology. Potential benefits of this approach, observations and lessons learned will be discussed.
[ "ensembl kalman filter", "data assimil methodolog", "hydrocarbon reservoir simul", "energi explor", "tigr grid comput environ", "tigr", "grid comput", "cyberinfrastructur develop project", "high perform comput", "tigr grid middlewar", "strateg applic area", "gridwai metaschedul", "pool licens", "grid-enabl", "reservoir model", "enkf" ]
[ "P", "P", "P", "P", "P", "P", "P", "U", "M", "M", "U", "M", "R", "U", "R", "U" ]
Demonstration of Grid-Enabled Ensemble Kalman Filter Data Assimilation Methodology for Reservoir Characterization Ravi Vadapalli High Performance Computing Center Texas Tech University Lubbock, TX 79409 001-806-742-4350 Ravi.Vadapalli@ttu.edu Ajitabh Kumar Department of Petroleum Engineering Texas A&M University College Station, TX 77843 001-979-847-8735 akumar@tamu.edu Ping Luo Supercomputing Facility Texas A&M University College Station, TX 77843 001-979-862-3107 pingluo@sc.tamu.edu Shameem Siddiqui Department of Petroleum Engineering Texas Tech University Lubbock, TX 79409 001-806-742-3573 Shameem.Siddiqui@ttu.edu Taesung Kim Supercomputing Facility Texas A&M University College Station, TX 77843 001-979-204-5076 tskim@sc.tamu.edu ABSTRACT Ensemble Kalman filter data assimilation methodology is a popular approach for hydrocarbon reservoir simulations in energy exploration. In this approach, an ensemble of geological models and production data of oil fields is used to forecast the dynamic response of oil wells. The Schlumberger ECLIPSE software is used for these simulations. Since models in the ensemble do not communicate, message-passing implementation is a good choice. Each model checks out an ECLIPSE license and therefore, parallelizability of reservoir simulations depends on the number licenses available. We have Grid-enabled the ensemble Kalman filter data assimilation methodology for the TIGRE Grid computing environment. By pooling the licenses and computing resources across the collaborating institutions using GridWay metascheduler and TIGRE environment, the computational accuracy can be increased while reducing the simulation runtime. In this paper, we provide an account of our efforts in Gridenabling the ensemble Kalman Filter data assimilation methodology. Potential benefits of this approach, observations and lessons learned will be discussed. Categories and Subject Descriptors C 2.4 [Distributed Systems]: Distributed applications General Terms Algorithms, Design, Performance 1. INTRODUCTION Grid computing [1] is an emerging collaborative computing paradigm to extend institution/organization specific high performance computing (HPC) capabilities greatly beyond local resources. Its importance stems from the fact that ground breaking research in strategic application areas such as bioscience and medicine, energy exploration and environmental modeling involve strong interdisciplinary components and often require intercampus collaborations and computational capabilities beyond institutional limitations. The Texas Internet Grid for Research and Education (TIGRE) [2,3] is a state funded cyberinfrastructure development project carried out by five (Rice, A&M, TTU, UH and UT Austin) major university systems - collectively called TIGRE Institutions. The purpose of TIGRE is to create a higher education Grid to sustain and extend research and educational opportunities across Texas. TIGRE is a project of the High Performance Computing across Texas (HiPCAT) [4] consortium. The goal of HiPCAT is to support advanced computational technologies to enhance research, development, and educational activities. The primary goal of TIGRE is to design and deploy state-of-the-art Grid middleware that enables integration of computing systems, storage systems and databases, visualization laboratories and displays, and even instruments and sensors across Texas. 
The secondary goal is to demonstrate the TIGRE capabilities to enhance research and educational opportunities in strategic application areas of interest to the State of Texas. These are bioscience and medicine, energy exploration and air quality modeling. Vision of the TIGRE project is to foster interdisciplinary and intercampus collaborations, identify novel approaches to extend academic-government-private partnerships, and become a competitive model for external funding opportunities. The overall goal of TIGRE is to support local, campus and regional user interests and offer avenues to connect with national Grid projects such as Open Science Grid [5], and TeraGrid [6]. Within the energy exploration strategic application area, we have Grid-enabled the ensemble Kalman Filter (EnKF) [7] approach for data assimilation in reservoir modeling and demonstrated the extensibility of the application using the TIGRE environment and the GridWay [8] metascheduler. Section 2 provides an overview of the TIGRE environment and capabilities. Application description and the need for Grid-enabling EnKF methodology is provided in Section 3. The implementation details and merits of our approach are discussed in Section 4. Conclusions are provided in Section 5. Finally, observations and lessons learned are documented in Section 6. 2. TIGRE ENVIRONMENT The TIGRE Grid middleware consists of minimal set of components derived from a subset of the Virtual Data Toolkit (VDT) [9] which supports a variety of operating systems. The purpose of choosing a minimal software stack is to support applications at hand, and to simplify installation and distribution of client/server stacks across TIGRE sites. Additional components will be added as they become necessary. The PacMan [10] packaging and distribution mechanism is employed for TIGRE client/server installation and management. The PacMan distribution mechanism involves retrieval, installation, and often configuration of the packaged software. This approach allows the clients to keep current, consistent versions of TIGRE software. It also helps TIGRE sites to install the needed components on resources distributed throughout the participating sites. The TIGRE client/server stack consists of an authentication and authorization layer, Globus GRAM4-based job submission via web services (pre-web services installations are available up on request). The tools for handling Grid proxy generation, Grid-enabled file transfer and Grid-enabled remote login are supported. The pertinent details of TIGRE services and tools for job scheduling and management are provided below. 2.1. Certificate Authority The TIGRE security infrastructure includes a certificate authority (CA) accredited by the International Grid Trust Federation (IGTF) for issuing X. 509 user and resource Grid certificates [11]. The Texas Advanced Computing Center (TACC), University of Texas at Austin is the TIGRE``s shared CA. The TIGRE Institutions serve as Registration Authorities (RA) for their respective local user base. For up-to-date information on securing user and resource certificates and their installation instructions see ref [2]. The users and hosts on TIGRE are identified by their distinguished name (DN) in their X.509 certificate provided by the CA. A native Grid-mapfile that contains a list of authorized DNs is used to authenticate and authorize user job scheduling and management on TIGRE site resources. At Texas Tech University, the users are dynamically allocated one of the many generic pool accounts. 
This is accomplished through the Grid User Management System (GUMS) [12]. 2.2. Job Scheduling and Management The TIGRE environment supports GRAM4-based job submission via web services. The job submission scripts are generated using XML. The web services GRAM translates the XML scripts into target cluster specific batch schedulers such as LSF, PBS, or SGE. The high bandwidth file transfer protocols such as GridFTP are utilized for staging files in and out of the target machine. The login to remote hosts for compilation and debugging is only through GSISSH service which requires resource authentication through X.509 certificates. The authentication and authorization of Grid jobs are managed by issuing Grid certificates to both users and hosts. The certificate revocation lists (CRL) are updated on a daily basis to maintain high security standards of the TIGRE Grid services. The TIGRE portal [2] documentation area provides a quick start tutorial on running jobs on TIGRE. 2.3. Metascheduler The metascheduler interoperates with the cluster level batch schedulers (such as LSF, PBS) in the overall Grid workflow management. In the present work, we have employed GridWay [8] metascheduler - a Globus incubator project - to schedule and manage jobs across TIGRE. The GridWay is a light-weight metascheduler that fully utilizes Globus functionalities. It is designed to provide efficient use of dynamic Grid resources by multiple users for Grid infrastructures built on top of Globus services. The TIGRE site administrator can control the resource sharing through a powerful built-in scheduler provided by GridWay or by extending GridWay``s external scheduling module to provide their own scheduling policies. Application users can write job descriptions using GridWay``s simple and direct job template format (see Section 4 for details) or standard Job Submission Description Language (JSDL). See section 4 for implementation details. 2.4. Customer Service Management System A TIGRE portal [2] was designed and deployed to interface users and resource providers. It was designed using GridPort [13] and is maintained by TACC. The TIGRE environment is supported by open source tools such as the Open Ticket Request System (OTRS) [14] for servicing trouble tickets, and MoinMoin [15] Wiki for TIGRE content and knowledge management for education, outreach and training. The links for OTRS and Wiki are consumed by the TIGRE portal [2] - the gateway for users and resource providers. The TIGRE resource status and loads are monitored by the Grid Port Information Repository (GPIR) service of the GridPort toolkit [13] which interfaces with local cluster load monitoring service such as Ganglia. The GPIR utilizes cron jobs on each resource to gather site specific resource characteristics such as jobs that are running, queued and waiting for resource allocation. 3. ENSEMBLE KALMAN FILTER APPLICATION The main goal of hydrocarbon reservoir simulations is to forecast the production behavior of oil and gas field (denoted as field hereafter) for its development and optimal management. In reservoir modeling, the field is divided into several geological models as shown in Figure 1. For accurate performance forecasting of the field, it is necessary to reconcile several geological models to the dynamic response of the field through history matching [16-20]. Figure 1. Cross-sectional view of the Field. 
Vertical layers correspond to different geological models and the nails are oil wells whose historical information will be used for forecasting the production behavior. (Figure Ref: http://faculty.smu.edu/zchen/research.html). The EnKF is a Monte Carlo method that works with an ensemble of reservoir models. This method utilizes cross-covariances [21] between the field measurements and the reservoir model parameters (derived from several models) to estimate prediction uncertainties. The geological model parameters in the ensemble are sequentially updated with the goal of minimizing the prediction uncertainties. Historical production response of the field for over 50 years is used in these simulations. The main advantage of EnKF is that it can be readily linked to any reservoir simulator, and can assimilate the latest production data without the need to re-run the simulator from initial conditions. Researchers in Texas are large subscribers of the Schlumberger ECLIPSE [22] package for reservoir simulations. In reservoir modeling, each geological model checks out an ECLIPSE license. The simulation runtime of the EnKF methodology depends on the number of geological models used, the number of ECLIPSE licenses available, the production history of the field, and the propagated uncertainties in history matching. The overall EnKF workflow is shown in Figure 2. Figure 2. Ensemble Kalman Filter Data Assimilation Workflow. Each site has L licenses. At START, the master/control process (EnKF main program) reads the simulation configuration file for the number (N) of models, and model-specific input files. Then, N working directories are created to store the output files. At the end of each iteration, the master/control process collects the output files from the N models and post-processes cross-covariances [21] to estimate the prediction uncertainties. This information will be used to update the models (or input files) for the next iteration. The simulation continues until the production histories are exhausted. A typical EnKF simulation with N=50 and field histories of 50-60 years, in time steps ranging from three months to a year, takes about three weeks in a serial computing environment. In a parallel computing environment, there is no interprocess communication between the geological models in the ensemble. However, at the end of each simulation time-step, model-specific output files are to be collected for analyzing cross-covariances [21] and for preparing the next set of input files. Therefore, a master-slave model in a message-passing (MPI) environment is a suitable paradigm. In this approach, the geological models are treated as slaves and are distributed across the available processors. The master process collects model-specific output files, analyzes them and prepares the next set of input files for the simulation. Since each geological model checks out an ECLIPSE license, parallelizability of the simulation depends on the number of licenses available. When the available number of licenses is less than the number of models in the ensemble, one or more of the nodes in the MPI group have to handle more than one model in a serial fashion and therefore, it takes longer to complete the simulation.
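The update that the master process performs from these cross-covariances is not spelled out in the paper; the sketch below shows a standard perturbed-observation EnKF analysis step (cf. Evensen [7]) in NumPy, with the array shapes, the helper name and the diagonal measurement-error covariance chosen only for illustration.

# Sketch of one perturbed-observation EnKF analysis step (cf. Evensen [7]).
# Not code from this paper: shapes and the diagonal measurement-error covariance
# are illustrative assumptions.
import numpy as np

def enkf_update(X, Y, d, r_var, rng=None):
    """
    X : (n_param, N) ensemble of model parameters, one column per geological model
    Y : (n_obs, N)   simulated production data for each model (e.g., ECLIPSE output)
    d : (n_obs,)     observed production history at this assimilation step
    r_var : (n_obs,) measurement-error variances (diagonal of R)
    Returns the updated parameter ensemble.
    """
    rng = rng or np.random.default_rng(0)
    N = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)          # parameter anomalies
    B = Y - Y.mean(axis=1, keepdims=True)          # predicted-data anomalies
    C_xy = A @ B.T / (N - 1)                       # cross-covariance: parameters vs. data
    C_yy = B @ B.T / (N - 1)                       # predicted-data covariance
    K = C_xy @ np.linalg.inv(C_yy + np.diag(r_var))   # Kalman gain
    D = d[:, None] + rng.normal(0.0, np.sqrt(r_var)[:, None], size=Y.shape)
    return X + K @ (D - Y)                         # columns stay one-per-model

# Toy usage: 50 models, 200 uncertain parameters, 12 observations in this time step.
rng = np.random.default_rng(1)
X_a = enkf_update(X=rng.normal(size=(200, 50)), Y=rng.normal(size=(12, 50)),
                  d=np.zeros(12), r_var=np.full(12, 0.1))
print(X_a.shape)   # (200, 50)

Each column of X corresponds to one geological model in the ensemble, which is why the forward runs that produce Y can be farmed out independently, as discussed next.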
A Petroleum Engineering Department usually procures 10-15 ECLIPSE licenses, while at least a ten-fold increase in the number of licenses would be necessary for industry standard simulations. The number of licenses can be increased by involving several Petroleum Engineering Departments that support the ECLIPSE package. Since MPI does not scale very well for applications that involve remote compute clusters, and to get around the firewall issues with license servers across administrative domains, Grid-enabling the EnKF workflow seems to be necessary. With this motivation, we have implemented the Grid-enabled EnKF workflow for the TIGRE environment and demonstrated parallelizability of the application across TIGRE using the GridWay metascheduler. Further details are provided in the next section. 4. IMPLEMENTATION DETAILS To Grid-enable the EnKF approach, we have eliminated the MPI code for parallel processing and replaced it with N single-processor jobs (or sub-jobs), where N is the number of geological models in the ensemble. These model-specific sub-jobs were distributed across TIGRE sites that support the ECLIPSE package using the GridWay [8] metascheduler. For each sub-job, we have constructed a GridWay job template that specifies the executable, input and output files, and resource requirements. Since the TIGRE compute resources are not expected to change frequently, we have used a static resource discovery policy for GridWay, and the sub-jobs were scheduled dynamically across the TIGRE resources using GridWay. Figure 3 represents the sub-job template file for the GridWay metascheduler.
Figure 3. GridWay Sub-Job Template:
EXECUTABLE=runFORWARD
REQUIREMENTS=HOSTNAME=cosmos.tamu.edu | HOSTNAME=antaeus.hpcc.ttu.edu | HOSTNAME=minigar.hpcc.ttu.edu
ARGUMENTS=001
INPUT_FILES=001.in.tar
OUTPUT_FILES=001.out.tar
In Figure 3, the REQUIREMENTS flag is set to choose the resources that satisfy the application requirements; in the case of the EnKF application, for example, we need resources that support the ECLIPSE package. The ARGUMENTS flag specifies the model in the ensemble that will invoke ECLIPSE at a remote site. INPUT_FILES is prepared by the EnKF main program (or master/control process) and is transferred by GridWay to the remote site, where it is untarred and prepared for execution. Finally, OUTPUT_FILES specifies the name and location where the output files are to be written. The command-line features of GridWay were used to collect and process the model-specific outputs to prepare the new set of input files. This step mimics MPI process synchronization in the master-slave model. At the end of each iteration, the compute resources and licenses are committed back to the pool. Table 1 shows the sub-jobs in the TIGRE Grid via GridWay using the gwps command; for clarity, only selected columns are shown.
USER JID DM EM NAME HOST
pingluo 88 wrap pend enkf.jt antaeus.hpcc.ttu.edu/LSF
pingluo 89 wrap pend enkf.jt antaeus.hpcc.ttu.edu/LSF
pingluo 90 wrap actv enkf.jt minigar.hpcc.ttu.edu/LSF
pingluo 91 wrap pend enkf.jt minigar.hpcc.ttu.edu/LSF
pingluo 92 wrap done enkf.jt cosmos.tamu.edu/PBS
pingluo 93 wrap epil enkf.jt cosmos.tamu.edu/PBS
Table 1. Job scheduling across TIGRE using the GridWay metascheduler. DM: dispatch state, EM: execution state; JID is the job id and HOST corresponds to the site-specific cluster and its local batch scheduler. When a job is submitted to GridWay, it will go through a series of dispatch (DM) and execution (EM) states. For DM, the states include pend(ing), prol(og), wrap(per), epil(og), and done. DM=prol means the job has been scheduled to a resource and the remote working directory is in preparation. DM=wrap implies that GridWay is executing the wrapper, which in turn executes the application. DM=epil implies the job has finished running at the remote site and results are being transferred back to the GridWay server. Similarly, EM=pend implies the job is waiting in the queue for a resource, and the job is running when EM=actv. For a complete list of message flags and their descriptions, see the documentation in ref [8]. We have demonstrated the Grid-enabled EnKF runs using GridWay for the TIGRE environment. The jobs were chosen so that the runtime does not exceed half an hour. The simulation runs involved up to 20 jobs between the A&M and TTU sites, with TTU serving 10 licenses. For resource information, see Table 1. One of the main advantages of the Grid-enabled EnKF simulation is that both the resources and licenses are released back to the pool at the end of each simulation time step, unlike in the case of the MPI implementation, where licenses and nodes are locked until the completion of the entire simulation. However, the fact that each sub-job gets scheduled independently via GridWay could possibly incur another time delay caused by waiting in the queue for execution in each simulation time step. Such delays are not expected in the MPI implementation, where the node is blocked for processing sub-jobs (model-specific calculation) until the end of the simulation. There are two main scenarios for comparing the Grid and cluster computing approaches. Scenario I: The cluster is heavily loaded. The average waiting time of a job requesting a large number of CPUs is usually longer than the waiting time of jobs requesting a single CPU. Therefore, the overall waiting time could be shorter in the Grid approach, which requests a single CPU for each sub-job many times, compared to the MPI implementation, which requests a large number of CPUs at a single time. It is apparent that Grid scheduling is beneficial especially when the cluster is heavily loaded and the requested number of CPUs for the MPI job is not readily available. Scenario II: The cluster is relatively less loaded or largely available. Here the MPI implementation appears favorable compared to Grid scheduling. However, parallelizability of the EnKF application depends on the number of ECLIPSE licenses and, ideally, the number of licenses should be equal to the number of models in the ensemble. Therefore, if a single institution does not have a sufficient number of licenses, the cluster availability does not help as much as expected. Since a collaborative environment such as TIGRE can address both the compute and software resource requirements of the EnKF application, the Grid-enabled approach is still advantageous over the conventional MPI implementation in either of the above scenarios. 5. CONCLUSIONS AND FUTURE WORK TIGRE is a higher education Grid development project and its purpose is to sustain and extend research and educational opportunities across Texas. Within the energy exploration application area, we have Grid-enabled the MPI implementation of the ensemble Kalman filter data assimilation methodology for reservoir characterization. This task was accomplished by removing the MPI code for parallel processing and replacing it with single-processor jobs, one for each geological model in the ensemble. These single-processor jobs were scheduled across TIGRE via the GridWay metascheduler.
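As a simplified picture of how the master process can hand the N sub-jobs to GridWay, the sketch below writes one job template per ensemble member, mirroring the Figure 3 fields, and submits each one. The helper names are ours, and the gwsubmit -t invocation and any synchronization step (gwwait, or polling gwps as in Table 1) should be checked against the GridWay documentation [8]; this is an illustrative sketch, not the authors' implementation.

# Sketch only: generates per-model GridWay templates (cf. Figure 3) and submits them.
# Helper names are illustrative; verify the gwsubmit flags against the GridWay docs [8].
import subprocess
from pathlib import Path

HOSTS = ("cosmos.tamu.edu", "antaeus.hpcc.ttu.edu", "minigar.hpcc.ttu.edu")

def write_template(model_id, workdir):
    """Write the GridWay job template for one ensemble member (cf. Figure 3)."""
    tag = f"{model_id:03d}"
    lines = [
        "EXECUTABLE=runFORWARD",
        "REQUIREMENTS=" + " | ".join(f"HOSTNAME={h}" for h in HOSTS),
        f"ARGUMENTS={tag}",
        f"INPUT_FILES={tag}.in.tar",
        f"OUTPUT_FILES={tag}.out.tar",
    ]
    path = workdir / f"enkf_{tag}.jt"
    path.write_text("\n".join(lines) + "\n")
    return path

def submit_ensemble(n_models, workdir=Path(".")):
    """Submit one sub-job per geological model; GridWay picks a site with a free license."""
    for m in range(1, n_models + 1):
        template = write_template(m, workdir)
        subprocess.run(["gwsubmit", "-t", str(template)], check=True)
    # Synchronization and output collection (e.g., gwwait or parsing gwps, then
    # post-processing the NNN.out.tar files) would follow here, as described in Section 4.

if __name__ == "__main__":
    submit_ensemble(n_models=50)

Because every sub-job carries its own REQUIREMENTS line, the metascheduler is free to place each model wherever an ECLIPSE license and a CPU are available, which is exactly the pooling effect summarized next.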
We have demonstrated that by pooling licenses across TIGRE sites, more geological models can be handled in parallel and therefore conceivably better simulation accuracy. This approach has several advantages over MPI implementation especially when a site specific cluster is heavily loaded and/or the number licenses required for the simulation is more than those available at a single site. Towards the future work, it would be interesting to compare the runtime between MPI, and Grid implementations for the EnKF application. This effort could shed light on quality of service (QoS) of Grid environments in comparison with cluster computing. Another aspect of interest in the near future would be managing both compute and license resources to address the job (or processor)-to-license ratio management. 6. OBSERVATIONS AND LESSIONS LEARNED The Grid-enabling efforts for EnKF application have provided ample opportunities to gather insights on the visibility and promise of Grid computing environments for application development and support. The main issues are industry standard data security and QoS comparable to cluster computing. Since the reservoir modeling research involves proprietary data of the field, we had to invest substantial efforts initially in educating the application researchers on the ability of Grid services in supporting the industry standard data security through role- and privilege-based access using X.509 standard. With respect to QoS, application researchers expect cluster level QoS with Grid environments. Also, there is a steep learning curve in Grid computing compared to the conventional cluster computing. Since Grid computing is still an emerging technology, and it spans over several administrative domains, Grid computing is still premature especially in terms of the level of QoS although, it offers better data security standards compared to commodity clusters. It is our observation that training and outreach programs that compare and contrast the Grid and cluster computing environments would be a suitable approach for enhancing user participation in Grid computing. This approach also helps users to match their applications and abilities Grids can offer. In summary, our efforts through TIGRE in Grid-enabling the EnKF data assimilation methodology showed substantial promise in engaging Petroleum Engineering researchers through intercampus collaborations. Efforts are under way to involve more schools in this effort. These efforts may result in increased collaborative research, educational opportunities, and workforce development through graduate/faculty research programs across TIGRE Institutions. 7. ACKNOWLEDGMENTS The authors acknowledge the State of Texas for supporting the TIGRE project through the Texas Enterprise Fund, and TIGRE Institutions for providing the mechanism, in which the authors (Ravi Vadapalli, Taesung Kim, and Ping Luo) are also participating. The authors thank the application researchers Prof. Akhil Datta-Gupta of Texas A&M University and Prof. Lloyd Heinze of Texas Tech University for their discussions and interest to exploit the TIGRE environment to extend opportunities in research and development. 8. REFERENCES [1] Foster, I. and Kesselman, C. (eds.) 2004. The Grid: Blueprint for a new computing infrastructure (The Elsevier series in Grid computing) [2] TIGRE Portal: http://tigreportal.hipcat.net [3] Vadapalli, R. Sill, A., Dooley, R., Murray, M., Luo, P., Kim, T., Huang, M., Thyagaraja, K., and Chaffin, D. 2007. 
Demonstration of TIGRE environment for Grid enabled/suitable applications. 8th IEEE/ACM Int. Conf. on Grid Computing, Sept 19-21, Austin [4] The High Performance Computing across Texas Consortium http://www.hipcat.net [5] Pordes, R. Petravick, D. Kramer, B. Olson, D. Livny, M. Roy, A. Avery, P. Blackburn, K. Wenaus, T. Würthwein, F. Foster, I. Gardner, R. Wilde, M. Blatecky, A. McGee, J. and Quick, R. 2007. The Open Science Grid, J. Phys Conf Series http://www.iop.org/EJ/abstract/1742-6596/78/1/012057 and http://www.opensciencegrid.org [6] Reed, D.A. 2003. Grids, the TeraGrid and Beyond, Computer, vol 30, no. 1 and http://www.teragrid.org [7] Evensen, G. 2006. Data Assimilation: The Ensemble Kalman Filter, Springer [8] Herrera, J. Huedo, E. Montero, R. S. and Llorente, I. M. 2005. Scientific Programming, vol 12, No. 4. pp 317-331 [9] Avery, P. and Foster, I. 2001. The GriPhyN project: Towards petascale virtual data grids, technical report GriPhyN-200115 and http://vdt.cs.wisc.edu [10] The PacMan documentation and installation guide http://physics.bu.edu/pacman/htmls [11] Caskey, P. Murray, M. Perez, J. and Sill, A. 2007. Case studies in identify management for virtual organizations, EDUCAUSE Southwest Reg. Conf., Feb 21-23, Austin, TX. http://www.educause.edu/ir/library/pdf/SWR07058.pdf [12] The Grid User Management System (GUMS) https://www.racf.bnl.gov/Facility/GUMS/index.html [13] Thomas, M. and Boisseau, J. 2003. Building grid computing portals: The NPACI grid portal toolkit, Grid computing: making the global infrastructure a reality, Chapter 28, Berman, F. Fox, G. Thomas, M. Boisseau, J. and Hey, T. (eds), John Wiley and Sons, Ltd, Chichester [14] Open Ticket Request System http://otrs.org [15] The MoinMoin Wiki Engine http://moinmoin.wikiwikiweb.de [16] Vasco, D.W. Yoon, S. and Datta-Gupta, A. 1999. Integrating dynamic data into high resolution reservoir models using streamline-based analytic sensitivity coefficients, Society of Petroleum Engineers (SPE) Journal, 4 (4). [17] Emanuel, A. S. and Milliken, W. J. 1998. History matching finite difference models with 3D streamlines, SPE 49000, Proc of the Annual Technical Conf and Exhibition, Sept 2730, New Orleans, LA. [18] Nævdal, G. Johnsen, L.M. Aanonsen, S.I. and Vefring, E.H. 2003. Reservoir monitoring and Continuous Model Updating using Ensemble Kalman Filter, SPE 84372, Proc of the Annual Technical Conf and Exhibition, Oct 5-8, Denver, CO. [19] Jafarpour B. and McLaughlin, D.B. 2007. History matching with an ensemble Kalman filter and discrete cosine parameterization, SPE 108761, Proc of the Annual Technical Conf and Exhibition, Nov 11-14, Anaheim, CA [20] Li, G. and Reynolds, A. C. 2007. An iterative ensemble Kalman filter for data assimilation, SPE 109808, Proc of the SPE Annual Technical Conf and Exhibition, Nov 11-14, Anaheim, CA [21] Arroyo-Negrete, E. Devagowda, D. Datta-Gupta, A. 2006. Streamline assisted ensemble Kalman filter for rapid and continuous reservoir model updating. Proc of the Int. Oil & Gas Conf and Exhibition, SPE 104255, Dec 5-7, China [22] ECLIPSE Reservoir Engineering Software http://www.slb.com/content/services/software/reseng/index.a sp
Demonstration of Grid-Enabled Ensemble Kalman Filter Data Assimilation Methodology for Reservoir Characterization ABSTRACT Ensemble Kalman filter data assimilation methodology is a popular approach for hydrocarbon reservoir simulations in energy exploration. In this approach, an ensemble of geological models and production data of oil fields is used to forecast the dynamic response of oil wells. The Schlumberger ECLIPSE software is used for these simulations. Since models in the ensemble do not communicate, message-passing implementation is a good choice. Each model checks out an ECLIPSE license and therefore, parallelizability of reservoir simulations depends on the number licenses available. We have Grid-enabled the ensemble Kalman filter data assimilation methodology for the TIGRE Grid computing environment. By pooling the licenses and computing resources across the collaborating institutions using GridWay metascheduler and TIGRE environment, the computational accuracy can be increased while reducing the simulation runtime. In this paper, we provide an account of our efforts in Gridenabling the ensemble Kalman Filter data assimilation methodology. Potential benefits of this approach, observations and lessons learned will be discussed. 1. INTRODUCTION Grid computing [1] is an emerging "collaborative" computing paradigm to extend institution/organization specific high performance computing (HPC) capabilities greatly beyond local resources. Its importance stems from the fact that ground breaking research in strategic application areas such as bioscience and medicine, energy exploration and environmental modeling involve strong interdisciplinary components and often require intercampus collaborations and computational capabilities beyond institutional limitations. The Texas Internet Grid for Research and Education (TIGRE) [2,3] is a state funded cyberinfrastructure development project carried out by five (Rice, A&M, TTU, UH and UT Austin) major university systems - collectively called TIGRE Institutions. The purpose of TIGRE is to create a higher education Grid to sustain and extend research and educational opportunities across Texas. TIGRE is a project of the High Performance Computing across Texas (HiPCAT) [4] consortium. The goal of HiPCAT is to support advanced computational technologies to enhance research, development, and educational activities. The primary goal of TIGRE is to design and deploy state-of-the-art Grid middleware that enables integration of computing systems, storage systems and databases, visualization laboratories and displays, and even instruments and sensors across Texas. The secondary goal is to demonstrate the TIGRE capabilities to enhance research and educational opportunities in strategic application areas of interest to the State of Texas. These are bioscience and medicine, energy exploration and air quality modeling. Vision of the TIGRE project is to foster interdisciplinary and intercampus collaborations, identify novel approaches to extend academic-government-private partnerships, and become a competitive model for external funding opportunities. The overall goal of TIGRE is to support local, campus and regional user interests and offer avenues to connect with national Grid projects such as Open Science Grid [5], and TeraGrid [6]. 
Within the energy exploration strategic application area, we have Grid-enabled the ensemble Kalman Filter (EnKF) [7] approach for data assimilation in reservoir modeling and demonstrated the extensibility of the application using the TIGRE environment and the GridWay [8] metascheduler. Section 2 provides an overview of the TIGRE environment and capabilities. Application description and the need for Grid-enabling EnKF methodology is provided in Section 3. The implementation details and merits of our approach are discussed in Section 4. Conclusions are provided in Section 5. Finally, observations and lessons learned are documented in Section 6. 2. TIGRE ENVIRONMENT The TIGRE Grid middleware consists of minimal set of components derived from a subset of the Virtual Data Toolkit (VDT) [9] which supports a variety of operating systems. The purpose of choosing a minimal software stack is to support applications at hand, and to simplify installation and distribution of client/server stacks across TIGRE sites. Additional components will be added as they become necessary. The PacMan [10] packaging and distribution mechanism is employed for TIGRE client/server installation and management. The PacMan distribution mechanism involves retrieval, installation, and often configuration of the packaged software. This approach allows the clients to keep current, consistent versions of TIGRE software. It also helps TIGRE sites to install the needed components on resources distributed throughout the participating sites. The TIGRE client/server stack consists of an authentication and authorization layer, Globus GRAM4-based job submission via web services (pre-web services installations are available up on request). The tools for handling Grid proxy generation, Grid-enabled file transfer and Grid-enabled remote login are supported. The pertinent details of TIGRE services and tools for job scheduling and management are provided below. 2.1. Certificate Authority The TIGRE security infrastructure includes a certificate authority (CA) accredited by the International Grid Trust Federation (IGTF) for issuing X. 509 user and resource Grid certificates [11]. The Texas Advanced Computing Center (TACC), University of Texas at Austin is the TIGRE's shared CA. The TIGRE Institutions serve as Registration Authorities (RA) for their respective local user base. For up-to-date information on securing user and resource certificates and their installation instructions see ref [2]. The users and hosts on TIGRE are identified by their distinguished name (DN) in their X. 509 certificate provided by the CA. A native Grid-mapfile that contains a list of authorized DNs is used to authenticate and authorize user job scheduling and management on TIGRE site resources. At Texas Tech University, the users are dynamically allocated one of the many generic pool accounts. This is accomplished through the Grid User Management System (GUMS) [12]. 2.2. Job Scheduling and Management The TIGRE environment supports GRAM4-based job submission via web services. The job submission scripts are generated using XML. The web services GRAM translates the XML scripts into target cluster specific batch schedulers such as LSF, PBS, or SGE. The high bandwidth file transfer protocols such as GridFTP are utilized for staging files in and out of the target machine. The login to remote hosts for compilation and debugging is only through GSISSH service which requires resource authentication through X. 509 certificates. 
The authentication and authorization of Grid jobs are managed by issuing Grid certificates to both users and hosts. The certificate revocation lists (CRL) are updated on a daily basis to maintain high security standards of the TIGRE Grid services. The TIGRE portal [2] documentation area provides a quick start tutorial on running jobs on TIGRE. 2.3. Metascheduler The metascheduler interoperates with the cluster level batch schedulers (such as LSF, PBS) in the overall Grid workflow management. In the present work, we have employed GridWay [8] metascheduler--a Globus incubator project--to schedule and manage jobs across TIGRE. The GridWay is a light-weight metascheduler that fully utilizes Globus functionalities. It is designed to provide efficient use of dynamic Grid resources by multiple users for Grid infrastructures built on top of Globus services. The TIGRE site administrator can control the resource sharing through a powerful built-in scheduler provided by GridWay or by extending GridWay's external scheduling module to provide their own scheduling policies. Application users can write job descriptions using GridWay's simple and direct job template format (see Section 4 for details) or standard Job Submission Description Language (JSDL). See section 4 for implementation details. 2.4. Customer Service Management System A TIGRE portal [2] was designed and deployed to interface users and resource providers. It was designed using GridPort [13] and is maintained by TACC. The TIGRE environment is supported by open source tools such as the Open Ticket Request System (OTRS) [14] for servicing trouble tickets, and MoinMoin [15] Wiki for TIGRE content and knowledge management for education, outreach and training. The links for OTRS and Wiki are consumed by the TIGRE portal [2]--the gateway for users and resource providers. The TIGRE resource status and loads are monitored by the Grid Port Information Repository (GPIR) service of the GridPort toolkit [13] which interfaces with local cluster load monitoring service such as Ganglia. The GPIR utilizes "cron" jobs on each resource to gather site specific resource characteristics such as jobs that are running, queued and waiting for resource allocation. 3. ENSEMBLE KALMAN FILTER APPLICATION The main goal of hydrocarbon reservoir simulations is to forecast the production behavior of oil and gas field (denoted as field hereafter) for its development and optimal management. In reservoir modeling, the field is divided into several geological models as shown in Figure 1. For accurate performance forecasting of the field, it is necessary to reconcile several geological models to the dynamic response of the field through history matching [16-20]. Figure 1. Cross-sectional view of the Field. Vertical layers correspond to different geological models and the nails are oil wells whose historical information will be used for forecasting the production behavior. (Figure Ref: http://faculty.smu.edu/zchen/research.html). The EnKF is a Monte Carlo method that works with an ensemble of reservoir models. This method utilizes crosscovariances [21] between the field measurements and the reservoir model parameters (derived from several models) to estimate prediction uncertainties. The geological model parameters in the ensemble are sequentially updated with a goal to minimize the prediction uncertainties. Historical production response of the field for over 50 years is used in these simulations. 
The main advantage of EnKF is that it can be readily linked to any reservoir simulator, and can assimilate latest production data without the need to re-run the simulator from initial conditions. Researchers in Texas are large subscribers of the Schlumberger ECLIPSE [22] package for reservoir simulations. In the reservoir modeling, each geological model checks out an ECLIPSE license. The simulation runtime of the EnKF methodology depends on the number of geological models used, number of ECLIPSE licenses available, production history of the field, and propagated uncertainties in history matching. The overall EnKF workflow is shown Figure 2. Figure 2. Ensemble Kaman Filter Data Assimilation Workflow. Each site has L licenses. At START, the master/control process (EnKF main program) reads the simulation configuration file for number (N) of models, and model-specific input files. Then, N working directories are created to store the output files. At the end of iteration, the master/control process collects the output files from N models and post processes crosscovariances [21] to estimate the prediction uncertainties. This information will be used to update models (or input files) for the next iteration. The simulation continues until the production histories are exhausted. Typical EnKF simulation with N = 50 and field histories of 50-60 years, in time steps ranging from three months to a year, takes about three weeks on a serial computing environment. In parallel computing environment, there is no interprocess communication between the geological models in the ensemble. However, at the end of each simulation time-step, model-specific output files are to be collected for analyzing cross covariances [21] and to prepare next set of input files. Therefore, master-slave model in messagepassing (MPI) environment is a suitable paradigm. In this approach, the geological models are treated as slaves and are distributed across the available processors. The master process collects model-specific output files, analyzes and prepares next set of input files for the simulation. Since each geological model checks out an ECLIPSE license, parallelizability of the simulation depends on the number of licenses available. When the available number of licenses is less than the number of models in the ensemble, one or more of the nodes in the MPI group have to handle more than one model in a serial fashion and therefore, it takes longer to complete the simulation. A Petroleum Engineering Department usually procures 10-15 ECLIPSE licenses while at least ten-fold increase in the number of licenses would be necessary for industry standard simulations. The number of licenses can be increased by involving several Petroleum Engineering Departments that support ECLIPSE package. Since MPI does not scale very well for applications that involve remote compute clusters, and to get around the firewall issues with license servers across administrative domains, Grid-enabling the EnKF workflow seems to be necessary. With this motivation, we have implemented Grid-enabled EnKF workflow for the TIGRE environment and demonstrated parallelizability of the application across TIGRE using GridWay metascheduler. Further details are provided in the next section. 4. IMPLEMENTATION DETAILS To Grid-enable the EnKF approach, we have eliminated the MPI code for parallel processing and replaced with N single processor jobs (or sub-jobs) where, N is the number of geological models in the ensemble. 
These model-specific sub-jobs were distributed across TIGRE sites that support ECLIPSE package using the GridWay [8] metascheduler. For each sub-job, we have constructed a GridWay job template that specifies the executable, input and output files, and resource requirements. Since the TIGRE compute resources are not expected to change frequently, we have used static resource discovery policy for GridWay and the sub-jobs were scheduled dynamically across the TIGRE resources using GridWay. Figure 3 represents the sub-job template file for the GridWay metascheduler. Figure 3. GridWay Sub-Job Template In Figure 3, REQUIREMENTS flag is set to choose the resources that satisfy the application requirements. In the case of EnKF application, for example, we need resources that support ECLIPSE package. ARGUMENTS flag specifies the model in the ensemble that will invoke ECLIPSE at a remote site. INPUT_FILES is prepared by the EnKF main program (or master/control process) and is transferred by GridWay to the remote site where it is untared and is prepared for execution. Finally, OUTPUT_FILES specifies the name and location where the output files are to be written. The command-line features of GridWay were used to collect and process the model-specific outputs to prepare new set of input files. This step mimics MPI process synchronization in master-slave model. At the end of each iteration, the compute resources and licenses are committed back to the pool. Table 1 shows the sub-jobs in TIGRE Grid via GridWay using "gwps" command and for clarity, only selected columns were shown Table 1. Job scheduling across TIGRE using GridWay Metascheduler. DM: Dispatch state, EM: Execution state, JID is the job id and HOST corresponds to site specific cluster and its local batch scheduler. When a job is submitted to GridWay, it will go through a series of dispatch (DM) and execution (EM) states. For DM, the states include pend (ing), prol (og), wrap (per), epil (og), and done. DM =" prol" means the job has been scheduled to a resource and the remote working directory is in preparation. DM =" warp" implies that GridWay is executing the wrapper which in turn executes the application. DM =" epil" implies the job has finished running at the remote site and results are being transferred back to the GridWay server. Similarly, when EM =" pend" implies the job is waiting in the queue for resource and the job is running when EM =" actv". For complete list of message flags and their descriptions, see the documentation in ref [8]. We have demonstrated the Grid-enabled EnKF runs using GridWay for TIGRE environment. The jobs are so chosen that the runtime doesn't exceed more than a half hour. The simulation runs involved up to 20 jobs between A&M and TTU sites with TTU serving 10 licenses. For resource information, see Table I. One of the main advantages of Grid-enabled EnKF simulation is that both the resources and licenses are released back to the pool at the end of each simulation time step unlike in the case of MPI implementation where licenses and nodes are locked until the completion of entire simulation. However, the fact that each sub-job gets scheduled independently via GridWay could possibly incur another time delay caused by waiting in queue for execution in each simulation time step. Such delays are not expected in MPI implementation where the node is blocked for processing sub-jobs (model-specific calculation) until the end of the simulation. 
There are two main scenarios for comparing the Grid and cluster computing approaches. Scenario I: the cluster is heavily loaded. The average waiting time of a job requesting a large number of CPUs is usually longer than that of a job requesting a single CPU. Therefore, the overall waiting time could be shorter for the Grid approach, which repeatedly requests a single CPU for each sub-job, than for the MPI implementation, which requests a large number of CPUs at once. Grid scheduling is clearly beneficial when the cluster is heavily loaded and the number of CPUs requested for the MPI job is not readily available. Scenario II: the cluster is lightly loaded or largely available. Here the MPI implementation appears favorable compared to Grid scheduling. However, the parallelizability of the EnKF application depends on the number of ECLIPSE licenses, and ideally the number of licenses should equal the number of models in the ensemble. Therefore, if a single institution does not have a sufficient number of licenses, cluster availability does not help as much as expected. Since a collaborative environment such as TIGRE can address both the compute and the software resource requirements of the EnKF application, the Grid-enabled approach is still advantageous over the conventional MPI implementation in either scenario. (A toy back-of-the-envelope model of this trade-off is sketched at the end of Section 6.) 5. CONCLUSIONS AND FUTURE WORK TIGRE is a higher education Grid development project whose purpose is to sustain and extend research and educational opportunities across Texas. Within the energy exploration application area, we have Grid-enabled the MPI implementation of the ensemble Kalman filter data assimilation methodology for reservoir characterization. This task was accomplished by removing the MPI code for parallel processing and replacing it with single-processor jobs, one for each geological model in the ensemble. These single-processor jobs were scheduled across TIGRE via the GridWay metascheduler. We have demonstrated that by pooling licenses across TIGRE sites, more geological models can be handled in parallel, and therefore conceivably better simulation accuracy can be achieved. This approach has several advantages over the MPI implementation, especially when a site-specific cluster is heavily loaded and/or the number of licenses required for the simulation exceeds those available at a single site. As future work, it would be interesting to compare the runtimes of the MPI and Grid implementations of the EnKF application. This effort could shed light on the quality of service (QoS) of Grid environments in comparison with cluster computing. Another aspect of interest in the near future is managing both compute and license resources to address the job (or processor)-to-license ratio. 6. OBSERVATIONS AND LESSONS LEARNED The Grid-enabling efforts for the EnKF application have provided ample opportunities to gather insights on the visibility and promise of Grid computing environments for application development and support. The main issues are industry-standard data security and QoS comparable to cluster computing. Since the reservoir modeling research involves proprietary field data, we initially had to invest substantial effort in educating the application researchers on the ability of Grid services to support industry-standard data security through role- and privilege-based access using the X.509 standard. With respect to QoS, application researchers expect "cluster"-level QoS from Grid environments.
Also, there is a steep learning curve in Grid computing compared to conventional "cluster" computing. Because Grid computing is still an "emerging" technology that spans several administrative domains, it remains premature, especially in terms of the level of QoS, although it offers better data security standards than commodity clusters. It is our observation that training and outreach programs that compare and contrast the Grid and cluster computing environments would be a suitable approach for enhancing user participation in Grid computing. Such programs also help users match their applications to the capabilities Grids can offer. In summary, our efforts through TIGRE in Grid-enabling the EnKF data assimilation methodology showed substantial promise in engaging Petroleum Engineering researchers through intercampus collaborations. Efforts are under way to involve more schools in this effort. These efforts may result in increased collaborative research, educational opportunities, and workforce development through graduate/faculty research programs across TIGRE Institutions.
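As referenced in the Scenario I/II discussion above, the following is a minimal back-of-the-envelope Python sketch of the turnaround-time trade-off between the MPI and Grid-enabled approaches. It is purely illustrative: the queue-wait parameters, the license counts, and the assumption that per-step queue waits simply add up are modeling assumptions of ours, not measurements from TIGRE.

    # Toy comparison of MPI vs. Grid-enabled EnKF turnaround time (illustrative only).

    def mpi_turnaround(n_models, n_licenses, n_steps, step_hours, parallel_queue_wait_hours):
        # MPI: one large job waits once in the queue, then holds nodes and licenses
        # for the whole run. With fewer licenses than models, each time step is
        # serialized into ceil(n_models / n_licenses) waves.
        waves = -(-n_models // n_licenses)  # ceiling division
        return parallel_queue_wait_hours + n_steps * waves * step_hours

    def grid_turnaround(n_models, pooled_licenses, n_steps, step_hours, per_step_queue_wait_hours):
        # Grid: each time step resubmits independent single-CPU sub-jobs, so a queue
        # wait is (pessimistically) paid every step, but licenses pooled across sites
        # allow more models to run concurrently.
        waves = -(-n_models // pooled_licenses)
        return n_steps * (per_step_queue_wait_hours + waves * step_hours)

    if __name__ == "__main__":
        # Hypothetical numbers: 50 models, 200 steps of about 0.5 h each,
        # 10 local licenses vs. 30 pooled across TIGRE sites.
        print("MPI :", mpi_turnaround(50, 10, 200, 0.5, parallel_queue_wait_hours=24.0), "h")
        print("Grid:", grid_turnaround(50, 30, 200, 0.5, per_step_queue_wait_hours=0.5), "h")

Under these made-up numbers the pooled-license Grid run finishes sooner despite paying a queue wait at every step, which is the point of Scenario I; with ample licenses at a lightly loaded site the comparison reverses, as in Scenario II.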
Demonstration of Grid-Enabled Ensemble Kalman Filter Data Assimilation Methodology for Reservoir Characterization ABSTRACT Ensemble Kalman filter data assimilation methodology is a popular approach for hydrocarbon reservoir simulations in energy exploration. In this approach, an ensemble of geological models and production data of oil fields is used to forecast the dynamic response of oil wells. The Schlumberger ECLIPSE software is used for these simulations. Since models in the ensemble do not communicate, message-passing implementation is a good choice. Each model checks out an ECLIPSE license and therefore, parallelizability of reservoir simulations depends on the number licenses available. We have Grid-enabled the ensemble Kalman filter data assimilation methodology for the TIGRE Grid computing environment. By pooling the licenses and computing resources across the collaborating institutions using GridWay metascheduler and TIGRE environment, the computational accuracy can be increased while reducing the simulation runtime. In this paper, we provide an account of our efforts in Gridenabling the ensemble Kalman Filter data assimilation methodology. Potential benefits of this approach, observations and lessons learned will be discussed. 1. INTRODUCTION Grid computing [1] is an emerging "collaborative" computing paradigm to extend institution/organization specific high performance computing (HPC) capabilities greatly beyond local resources. Its importance stems from the fact that ground breaking research in strategic application areas such as bioscience and medicine, energy exploration and environmental modeling involve strong interdisciplinary components and often require intercampus collaborations and computational capabilities beyond institutional limitations. The Texas Internet Grid for Research and Education (TIGRE) [2,3] is a state funded cyberinfrastructure development project carried out by five (Rice, A&M, TTU, UH and UT Austin) major university systems - collectively called TIGRE Institutions. The purpose of TIGRE is to create a higher education Grid to sustain and extend research and educational opportunities across Texas. TIGRE is a project of the High Performance Computing across Texas (HiPCAT) [4] consortium. The goal of HiPCAT is to support advanced computational technologies to enhance research, development, and educational activities. The primary goal of TIGRE is to design and deploy state-of-the-art Grid middleware that enables integration of computing systems, storage systems and databases, visualization laboratories and displays, and even instruments and sensors across Texas. The secondary goal is to demonstrate the TIGRE capabilities to enhance research and educational opportunities in strategic application areas of interest to the State of Texas. These are bioscience and medicine, energy exploration and air quality modeling. Vision of the TIGRE project is to foster interdisciplinary and intercampus collaborations, identify novel approaches to extend academic-government-private partnerships, and become a competitive model for external funding opportunities. The overall goal of TIGRE is to support local, campus and regional user interests and offer avenues to connect with national Grid projects such as Open Science Grid [5], and TeraGrid [6]. 
Within the energy exploration strategic application area, we have Grid-enabled the ensemble Kalman Filter (EnKF) [7] approach for data assimilation in reservoir modeling and demonstrated the extensibility of the application using the TIGRE environment and the GridWay [8] metascheduler. Section 2 provides an overview of the TIGRE environment and capabilities. Application description and the need for Grid-enabling EnKF methodology is provided in Section 3. The implementation details and merits of our approach are discussed in Section 4. Conclusions are provided in Section 5. Finally, observations and lessons learned are documented in Section 6. 2. TIGRE ENVIRONMENT 2.1. Certificate Authority 2.2. Job Scheduling and Management 2.3. Metascheduler 2.4. Customer Service Management System 3. ENSEMBLE KALMAN FILTER APPLICATION 4. IMPLEMENTATION DETAILS 5. CONCLUSIONS AND FUTURE WORK 6. OBSERVATIONS AND LESSIONS LEARNED
Demonstration of Grid-Enabled Ensemble Kalman Filter Data Assimilation Methodology for Reservoir Characterization ABSTRACT Ensemble Kalman filter data assimilation methodology is a popular approach for hydrocarbon reservoir simulations in energy exploration. In this approach, an ensemble of geological models and production data of oil fields is used to forecast the dynamic response of oil wells. The Schlumberger ECLIPSE software is used for these simulations. Since models in the ensemble do not communicate, message-passing implementation is a good choice. Each model checks out an ECLIPSE license and therefore, parallelizability of reservoir simulations depends on the number licenses available. We have Grid-enabled the ensemble Kalman filter data assimilation methodology for the TIGRE Grid computing environment. By pooling the licenses and computing resources across the collaborating institutions using GridWay metascheduler and TIGRE environment, the computational accuracy can be increased while reducing the simulation runtime. In this paper, we provide an account of our efforts in Gridenabling the ensemble Kalman Filter data assimilation methodology. Potential benefits of this approach, observations and lessons learned will be discussed. 1. INTRODUCTION Grid computing [1] is an emerging "collaborative" computing paradigm to extend institution/organization specific high performance computing (HPC) capabilities greatly beyond local resources. The purpose of TIGRE is to create a higher education Grid to sustain and extend research and educational opportunities across Texas. TIGRE is a project of the High Performance Computing across Texas (HiPCAT) [4] consortium. The goal of HiPCAT is to support advanced computational technologies to enhance research, development, and educational activities. The primary goal of TIGRE is to design and deploy state-of-the-art Grid middleware that enables integration of computing systems, storage systems and databases, visualization laboratories and displays, and even instruments and sensors across Texas. The secondary goal is to demonstrate the TIGRE capabilities to enhance research and educational opportunities in strategic application areas of interest to the State of Texas. These are bioscience and medicine, energy exploration and air quality modeling. Vision of the TIGRE project is to foster interdisciplinary and intercampus collaborations, identify novel approaches to extend academic-government-private partnerships, and become a competitive model for external funding opportunities. Within the energy exploration strategic application area, we have Grid-enabled the ensemble Kalman Filter (EnKF) [7] approach for data assimilation in reservoir modeling and demonstrated the extensibility of the application using the TIGRE environment and the GridWay [8] metascheduler. Section 2 provides an overview of the TIGRE environment and capabilities. Application description and the need for Grid-enabling EnKF methodology is provided in Section 3. The implementation details and merits of our approach are discussed in Section 4. Conclusions are provided in Section 5. Finally, observations and lessons learned are documented in Section 6.
J-66
Expressive Negotiation over Donations to Charities
When donating money to a (say, charitable) cause, it is possible to use the contemplated donation as negotiating material to induce other parties interested in the charity to donate more. Such negotiation is usually done in terms of matching offers, where one party promises to pay a certain amount if others pay a certain amount. However, in their current form, matching offers allow for only limited negotiation. For one, it is not immediately clear how multiple parties can make matching offers at the same time without creating circular dependencies. Also, it is not immediately clear how to make a donation conditional on other donations to multiple charities, when the donator has different levels of appreciation for the different charities. In both these cases, the limited expressiveness of matching offers causes economic loss: it may happen that an arrangement that would have made all parties (donators as well as charities) better off cannot be expressed in terms of matching offers and will therefore not occur. In this paper, we introduce a bidding language for expressing very general types of matching offers over multiple charities. We formulate the corresponding clearing problem (deciding how much each bidder pays, and how much each charity receives), and show that it is NP-complete to approximate to any ratio even in very restricted settings. We give a mixed-integer program formulation of the clearing problem, and show that for concave bids, the program reduces to a linear program. We then show that the clearing problem for a subclass of concave bids is at least as hard as the decision variant of linear programming. Subsequently, we show that the clearing problem is much easier when bids are quasilinear-for surplus, the problem decomposes across charities, and for payment maximization, a greedy approach is optimal if the bids are concave (although this latter problem is weakly NP-complete when the bids are not concave). For the quasilinear setting, we study the mechanism design question. We show that an ex-post efficient mechanism is impossible even with only one charity and a very restricted class of bids. We also show that there may be benefits to linking the charities from a mechanism design standpoint.
[ "express negoti", "donat to chariti", "negoti materi", "bid languag", "concav bid", "linear program", "quasilinear", "mechan design", "chariti support", "express chariti donat", "combinatori auction", "econom effici", "bid framework", "donat-clear", "threshold bid", "payment willing function", "incent compat", "market clear" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "M", "R", "U", "R", "M", "U", "M", "M", "U", "M" ]
Expressive Negotiation over Donations to Charities∗ Vincent Conitzer Carnegie Mellon University 5000 Forbes Avenue Pittsburgh, PA 15213, USA conitzer@cs.cmu.edu Tuomas Sandholm Carnegie Mellon University 5000 Forbes Avenue Pittsburgh, PA 15213, USA sandholm@cs.cmu.edu ABSTRACT When donating money to a (say, charitable) cause, it is possible to use the contemplated donation as negotiating material to induce other parties interested in the charity to donate more. Such negotiation is usually done in terms of matching offers, where one party promises to pay a certain amount if others pay a certain amount. However, in their current form, matching offers allow for only limited negotiation. For one, it is not immediately clear how multiple parties can make matching offers at the same time without creating circular dependencies. Also, it is not immediately clear how to make a donation conditional on other donations to multiple charities, when the donator has different levels of appreciation for the different charities. In both these cases, the limited expressiveness of matching offers causes economic loss: it may happen that an arrangement that would have made all parties (donators as well as charities) better off cannot be expressed in terms of matching offers and will therefore not occur. In this paper, we introduce a bidding language for expressing very general types of matching offers over multiple charities. We formulate the corresponding clearing problem (deciding how much each bidder pays, and how much each charity receives), and show that it is NP-complete to approximate to any ratio even in very restricted settings. We give a mixed-integer program formulation of the clearing problem, and show that for concave bids, the program reduces to a linear program. We then show that the clearing problem for a subclass of concave bids is at least as hard as the decision variant of linear programming. Subsequently, we show that the clearing problem is much easier when bids are quasilinear-for surplus, the problem decomposes across charities, and for payment maximization, a greedy approach is optimal if the bids are concave (although this latter problem is weakly NP-complete when the bids are not concave). For the quasilinear setting, we study the mechanism design question. We show that an ex-post efficient mechanism is ∗ Supported by NSF under CAREER Award IRI-9703122, Grant IIS-9800994, ITR IIS-0081246, and ITR IIS-0121678. impossible even with only one charity and a very restricted class of bids. We also show that there may be benefits to linking the charities from a mechanism design standpoint. Categories and Subject Descriptors F.2 [Theory of Computation]: Analysis of Algorithms and Problem Complexity; J.4 [Computer Applications]: Social and Behavioral Sciences-Economics General Terms Algorithms, Economics, Theory 1. INTRODUCTION When money is donated to a charitable (or other) cause (hereafter referred to as charity), often the donating party gives unconditionally: a fixed amount is transferred from the donator to the charity, and none of this transfer is contingent on other events-in particular, it is not contingent on the amount given by other parties. Indeed, this is currently often the only way to make a donation, especially for small donating parties such as private individuals. However, when multiple parties support the same charity, each of them would prefer to see the others give more rather than less to this charity. 
In such scenarios, it is sensible for a party to use its contemplated donation as negotiating material to induce the others to give more. This is done by making the donation conditional on the others'' donations. The following example will illustrate this, and show that the donating parties as well as the charitable cause may simultaneously benefit from the potential for such negotiation. Suppose we have two parties, 1 and 2, who are both supporters of charity A. To either of them, it would be worth $0.75 if A received $1. It follows neither of them will be willing to give unconditionally, because $0.75 < $1. However, if the two parties draw up a contract that says that they will each give $0.5, both the parties have an incentive to accept this contract (rather than have no contract at all): with the contract, the charity will receive $1 (rather than $0 without a contract), which is worth $0.75 to each party, which is greater than the $0.5 that that party will have to give. Effectively, each party has made its donation conditional on the other party``s donation, leading to larger donations and greater happiness to all parties involved. 51 One method that is often used to effect this is to make a matching offer. Examples of matching offers are: I will give x dollars for every dollar donated., or I will give x dollars if the total collected from other parties exceeds y. In our example above, one of the parties can make the offer I will donate $0.5 if the other party also donates at least that much, and the other party will have an incentive to indeed donate $0.5, so that the total amount given to the charity increases by $1. Thus this matching offer implements the contract suggested above. As a real-world example, the United States government has authorized a donation of up to $1 billion to the Global Fund to fight AIDS, TB and Malaria, under the condition that the American contribution does not exceed one third of the total-to encourage other countries to give more [23]. However, there are several severe limitations to the simple approach of matching offers as just described. 1. It is not clear how two parties can make matching offers where each party``s offer is stated in terms of the amount that the other pays. (For example, it is not clear what the outcome should be when both parties offer to match the other``s donation.) Thus, matching offers can only be based on payments made by parties that are giving unconditionally (not in terms of a matching offer)-or at least there can be no circular dependencies.1 2. Given the current infrastructure for making matching offers, it is impractical to make a matching offer depend on the amounts given to multiple charities. For instance, a party may wish to specify that it will pay $100 given that charity A receives a total of $1000, but that it will also count donations made to charity B, at half the rate. (Thus, a total payment of $500 to charity A combined with a total payment of $1000 to charity B would be just enough for the party``s offer to take effect.) In contrast, in this paper we propose a new approach where each party can express its relative preferences for different charities, and make its offer conditional on its own appreciation for the vector of donations made to the different charities. 
Moreover, the amount the party offers to donate at different levels of appreciation is allowed to vary arbitrarily (it need not be a dollar-for-dollar (or n-dollar-for-dollar) matching arrangement, or an arrangement where the party offers a fixed amount provided a given (strike) total has been exceeded). Finally, there is a clear interpretation of what it means when multiple parties are making conditional offers that are stated in terms of each other. Given each combination of (conditional) offers, there is a (usually) unique solution which determines how much each party pays, and how much each charity is paid. However, as we will show, finding this solution (the clearing problem) requires solving a potentially difficult optimization problem. A large part of this paper is devoted to studying how difficult this problem is under different assumptions on the structure of the offers, and providing algorithms for solving it. 1 Typically, larger organizations match offers of private individuals. For example, the American Red Cross Liberty Disaster Fund maintains a list of businesses that match their customers' donations [8]. Towards the end of the paper, we also study the mechanism design problem of motivating the bidders to bid truthfully. In short, expressive negotiation over donations to charities is a new way in which electronic commerce can help the world. A web-based implementation of the ideas described in this paper can facilitate voluntary reallocation of wealth on a global scale. Additionally, optimally solving the clearing problem (and thereby generating the maximum economic welfare) requires the application of sophisticated algorithms. 2. COMPARISON TO COMBINATORIAL AUCTIONS AND EXCHANGES This section discusses the relationship between expressive charity donation and combinatorial auctions and exchanges. It can be skipped, but may be of interest to the reader with a background in combinatorial auctions and exchanges. In a combinatorial auction, there are m items for sale, and bidders can place bids on bundles of one or more items. The auctioneer subsequently labels each bid as winning or losing, under the constraint that no item can be in more than one winning bid, to maximize the sum of the values of the winning bids. (This is known as the clearing problem.) Variants include combinatorial reverse auctions, where the auctioneer is seeking to procure a set of items; and combinatorial exchanges, where bidders can both buy and sell items (even within the same bid). Other extensions include allowing for side constraints, as well as the specification of attributes of the items in bids. Combinatorial auctions and exchanges have recently become a popular research topic [20, 21, 17, 22, 9, 18, 13, 3, 12, 26, 19, 25, 2]. The problems of clearing expressive charity donation markets and clearing combinatorial auctions or exchanges are very different in formulation. Nevertheless, there are interesting parallels. One of the main reasons for the interest in combinatorial auctions and exchanges is that they allow for expressive bidding. A bidder can express exactly how much each different allocation is worth to her, and thus the globally optimal allocation may be chosen by the auctioneer. Compare this to a bidder having to bid on two different items in two different (one-item) auctions, without any way of expressing that (for instance) one item is worthless if the other item is not won.
In this scenario, the bidder may win the first item but not the second (because there was another high bid on the second item that she did not anticipate), leading to economic inefficiency. Expressive bidding is also one of the main benefits of the expressive charity donation market. Here, bidders can express exactly how much they are willing to donate for every vector of amounts donated to charities. This may allow bidders to negotiate a complex arrangement of who gives how much to which charity, which is beneficial to all parties involved; whereas no such arrangement may have been possible if the bidders had been restricted to using simple matching offers on individual charities. Again, expressive bidding is necessary to achieve economic efficiency. Another parallel is the computational complexity of the clearing problem. In order to achieve the full economic efficiency allowed by the market``s expressiveness (or even come close to it), hard computational problems must be solved in combinatorial auctions and exchanges, as well as in the charity donation market (as we will see). 52 3. DEFINITIONS Throughout this paper, we will refer to the offers that the donating parties make as bids, and to the donating parties as bidders. In our bidding framework, a bid will specify, for each vector of total payments made to the charities, how much that bidder is willing to contribute. (The contribution of this bidder is also counted in the vector of paymentsso, the vector of total payments to the charities represents the amount given by all donating parties, not just the ones other than this bidder.) The bidding language is expressive enough that no bidder should have to make more than one bid. The following definition makes the general form of a bid in our framework precise. Definition 1. In a setting with m charities c1, c2, ... , cm, a bid by bidder bj is a function vj : Rm → R. The interpretation is that if charity ci receives a total amount of πci , then bidder j is willing to donate (up to) vj(πc1 , πc2 , ... , πcm ). We now define possible outcomes in our model, and which outcomes are valid given the bids that were made. Definition 2. An outcome is a vector of payments made by the bidders (πb1 , πb2 , ... , πbn ), and a vector of payments received by the charities (πc1 , πc2 , ... , πcm ). A valid outcome is an outcome where 1. n j=1 πbj ≥ m i=1 πci (at least as much money is collected as is given away); 2. For all 1 ≤ j ≤ n, πbj ≤ vj(πc1 , πc2 , ... , πcm ) (no bidder gives more than she is willing to). Of course, in the end, only one of the valid outcomes can be chosen. We choose the valid outcome that maximizes the objective that we have for the donation process. Definition 3. An objective is a function from the set of all outcomes to R.2 After all bids have been collected, a valid outcome will be chosen that maximizes this objective. One example of an objective is surplus, given by n j=1 πbj − m i=1 πci . The surplus could be the profits of a company managing the expressive donation marketplace; but, alternatively, the surplus could be returned to the bidders, or given to the charities. Another objective is total amount donated, given by m i=1 πci . (Here, different weights could also be placed on the different charities.) Finding the valid outcome that maximizes the objective is a (nontrivial) computational problem. We will refer to it as the clearing problem. The formal definition follows. Definition 4 (DONATION-CLEARING). We are given a set of n bids over charities c1, c2, ... , cm. 
Additionally, we are given an objective function. We are asked to find an objective-maximizing valid outcome. How difficult the DONATION-CLEARING problem is depends on the types of bids used and the language in which they are expressed. This is the topic of the next section. 2 In general, the objective function may also depend on the bids, but the objective functions under consideration in this paper do not depend on the bids. The techniques presented in this paper will typically generalize to objectives that take the bids into account directly. 4. A SIMPLIFIED BIDDING LANGUAGE Specifying a general bid in our framework (as defined above) requires being able to specify an arbitrary real-valued function over Rm . Even if we restricted the possible total payment made to each charity to the set {0, 1, 2, ... , s}, this would still require a bidder to specify (s+1)m values. Thus, we need a bidding language that will allow the bidders to at least specify some bids more concisely. We will specify a bidding language that only represents a subset of all possible bids, which can be described concisely.3 To introduce our bidding language, we will first describe the bidding function as a composition of two functions; then we will outline our assumptions on each of these functions. First, there is a utility function uj : Rm → R, specifying how much bidder j appreciates a given vector of total donations to the charities. (Note that the way we define a bidder``s utility function, it does not take the payments the bidder makes into account.) Then, there is a donation willingness function wj : R → R, which specifies how much bidder j is willing to pay given her utility for the vector of donations to the charities. We emphasize that this function does not need to be linear, so that utilities should not be thought of as expressible in dollar amounts. (Indeed, when an individual is donating to a large charity, the reason that the individual donates only a bounded amount is typically not decreasing marginal value of the money given to the charity, but rather that the marginal value of a dollar to the bidder herself becomes larger as her budget becomes smaller.) So, we have wj(uj(πc1 , πc2 , ... , πcm )) = vj(πc1 , πc2 , ... , πcm ), and we let the bidder describe her functions uj and wj separately. (She will submit these functions as her bid.) Our first restriction is that the utility that a bidder derives from money donated to one charity is independent of the amount donated to another charity. Thus, uj(πc1 , πc2 , ... , πcm ) = m i=1 ui j(πci ). (We observe that this does not imply that the bid function vj decomposes similarly, because of the nonlinearity of wj.) Furthermore, each ui j must be piecewise linear. An interesting special case which we will study is when each ui j is a line: ui j(πci ) = ai jπci . This special case is justified in settings where the scale of the donations by the bidders is small relative to the amounts the charities receive from other sources, so that the marginal use of a dollar to the charity is not affected by the amount given by the bidders. The only restriction that we place on the payment willingness functions wj is that they are piecewise linear. One interesting special case is a threshold bid, where wj is a step function: the bidder will provide t dollars if her utility exceeds s, and otherwise 0. 
Another interesting case is when such a bid is partially acceptable: the bidder will provide t dollars if her utility exceeds s; but if her utility is u < s, she is still willing to provide ut s dollars. One might wonder why, if we are given the bidders'' utility functions, we do not simply maximize the sum of the utilities rather than surplus or total donated. There are several reasons. First, because affine transformations do not affect utility functions in a fundamental way, it would be possi3 Of course, our bidding language can be trivially extended to allow for fully expressive bids, by also allowing bids from a fully expressive bidding language, in addition to the bids in our bidding language. 53 ble for a bidder to inflate her utility by changing its units, thereby making her bid more important for utility maximization purposes. Second, a bidder could simply give a payment willingness function that is 0 everywhere, and have her utility be taken into account in deciding on the outcome, in spite of her not contributing anything. 5. AVOIDING INDIRECT PAYMENTS In an initial implementation, the approach of having donations made out to a center, and having a center forward these payments to charities, may not be desirable. Rather, it may be preferable to have a partially decentralized solution, where the donating parties write out checks to the charities directly according to a solution prescribed by the center. In this scenario, the center merely has to verify that parties are giving the prescribed amounts. Advantages of this include that the center can keep its legal status minimal, as well as that we do not require the donating parties to trust the center to transfer their donations to the charities (or require some complicated verification protocol). It is also a step towards a fully decentralized solution, if this is desirable. To bring this about, we can still use the approach described earlier. After we clear the market in the manner described before, we know the amount that each donator is supposed to give, and the amount that each charity is supposed to receive. Then, it is straightforward to give some specification of who should give how much to which charity, that is consistent with that clearing. Any greedy algorithm that increases the cash flow from any bidder who has not yet paid enough, to any charity that has not yet received enough, until either the bidder has paid enough or the charity has received enough, will provide such a specification. (All of this is assuming that bj πbj = ci πci . In the case where there is nonzero surplus, that is, bj πbj > ci πci , we can distribute this surplus across the bidders by not requiring them to pay the full amount, or across the charities by giving them more than the solution specifies.) Nevertheless, with this approach, a bidder may have to write out a check to a charity that she does not care for at all. (For example, an environmental activist who was using the system to increase donations to a wildlife preservation fund may be required to write a check to a group supporting a right-wing political party.) This is likely to lead to complaints and noncompliance with the clearing. We can address this issue by letting each bidder specify explicitly (before the clearing) which charities she would be willing to make a check out to. These additional constraints, of course, may change the optimal solution. 
In general, checking whether a given centralized solution (with zero surplus) can be accomplished through decentralized payments when there are such constraints can be modeled as a MAX-FLOW problem. In the MAX-FLOW instance, there is an edge from the source node s to each bidder bj, with a capacity of πbj (as specified in the centralized solution); an edge from each bidder bj to each charity ci that the bidder is willing to donate money to, with a capacity of ∞; and an edge from each charity ci to the target node t with capacity πci (as specified in the centralized solution). In the remainder of this paper, all our hardness results apply even to the setting where there is no constraint on which bidders can pay to which charity (that is, even the problem as it was specified before this section is hard). We also generalize our clearing algorithms to the partially decentralized case with constraints. 6. HARDNESS OF CLEARING THE MARKET In this section, we will show that the clearing problem is completely inapproximable, even when every bidder``s utility function is linear (with slope 0 or 1 in each charity``s payments), each bidder cares either about at most two charities or about all charities equally, and each bidder``s payment willingness function is a step function. We will reduce from MAX2SAT (given a formula in conjunctive normal form (where each clause has two literals) and a target number of satisfied clauses T, does there exist an assignment of truth values to the variables that makes at least T clauses true?) , which is NP-complete [7]. Theorem 1. There exists a reduction from MAX2SAT instances to DONATION-CLEARING instances such that 1. If the MAX2SAT instance has no solution, then the only valid outcome is the zero outcome (no bidder pays anything and no charity receives anything); 2. Otherwise, there exists a solution with positive surplus. Additionally, the DONATION-CLEARING instances that we reduce to have the following properties: 1. Every ui j is a line; that is, the utility that each bidder derives from any charity is linear; 2. All the ui j have slope either 0 or 1; 3. Every bidder either has at most 2 charities that affect her utility (with slope 1), or all charities affect her utility (with slope 1); 4. Every bid is a threshold bid; that is, every bidder``s payment willingness function wj is a step function. Proof. The problem is in NP because we can nondeterministically choose the payments to be made and received, and check the validity and objective value of this outcome. In the following, we will represent bids as follows: ({(ck, ak)}, s, t) indicates that uk j (πck ) = akπck (this function is 0 for ck not mentioned in the bid), and wj(uj) = t for uj ≥ s, wj(uj) = 0 otherwise. To show NP-hardness, we reduce an arbitrary MAX2SAT instance, given by a set of clauses K = {k} = {(l1 k, l2 k)} over a variable set V together with a target number of satisfied clauses T, to the following DONATION-CLEARING instance. Let the set of charities be as follows. For every literal l ∈ L, there is a charity cl. Then, let the set of bids be as follows. For every variable v, there is a bid bv = ({(c+v, 1), (c−v, 1)}, 2, 1 − 1 4|V | ). For every literal l, there is a bid bl = ({(cl, 1)}, 2, 1). For every clause k = {l1 k, l2 k} ∈ K, there is a bid bk = ({(cl1 k , 1), (cl2 k , 1)}, 2, 1 8|V ||K| ). Finally, there is a single bid that values all charities equally: b0 = ({(c1, 1), (c2, 1), ... , (cm, 1)}, 2|V |+ T 8|V ||K| , 1 4 + 1 16|V ||K| ). We show the two instances are equivalent. 
First, suppose there exists a solution to the MAX2SAT instance. If in this solution, l is true, then let πcl = 2 + T 8|V |2|K| ; otherwise πcl = 0. Also, the only bids that are not accepted (meaning the threshold is not met) are the bl where l is false, and the bk such that both of l1 k, l2 k are false. First we show that no bidder whose bid is accepted pays more than she is willing to. For each bv, either c+v or c−v receives at least 2, so this bidder``s threshold has been met. 54 For each bl, either l is false and the bid is not accepted, or l is true, cl receives at least 2, and the threshold has been met. For each bk, either both of l1 k, l2 k are false and the bid is not accepted, or at least one of them (say li k) is true (that is, k is satisfied) and cli k receives at least 2, and the threshold has been met. Finally, because the total amount received by the charities is 2|V | + T 8|V ||K| , b0``s threshold has also been met. The total amount that can be extracted from the accepted bids is at least |V |(1− 1 4|V | )+|V |+T 1 8|V ||K| + 1 4 + 1 16|V ||K| ) = 2|V |+ T 8|V ||K| + 1 16|V ||K| > 2|V |+ T 8|V ||K| , so there is positive surplus. So there exists a solution with positive surplus to the DONATION-CLEARING instance. Now suppose there exists a nonzero outcome in the DONATION-CLEARING instance. First we show that it is not possible (for any v ∈ V ) that both b+v and b−v are accepted. For, this would require that πc+v + πc−v ≥ 4. The bids bv, b+v, b−v cannot contribute more than 3, so we need another 1 at least. It is easily seen that for any other v , accepting any subset of {bv , b+v , b−v } would require that at least as much is given to c+v and c−v as can be extracted from these bids, so this cannot help. Finally, all the other bids combined can contribute at most |K| 1 8|V ||K| + 1 4 + 1 16|V ||K| < 1. It follows that we can interpret the outcome in the DONATION-CLEARING instance as a partial assignment of truth values to variables: v is set to true if b+v is accepted, and to false if b−v is accepted. All that is left to show is that this partial assignment satisfies at least T clauses. First we show that if a clause bid bk is accepted, then either bl1 k or bl2 k is accepted (and thus either l1 k or l2 k is set to true, hence k is satisfied). If bk is accepted, at least one of cl1 k and cl2 k must be receiving at least 1; without loss of generality, say it is cl1 k , and say l1 k corresponds to variable v1 k (that is, it is +v1 k or −v1 k). If cl1 k does not receive at least 2, bl1 k is not accepted, and it is easy to check that the bids bv1 k , b+v1 k , b−v1 k contribute (at least) 1 less than is paid to c+v1 k and c+v1 k . But this is the same situation that we analyzed before, and we know it is impossible. All that remains to show is that at least T clause bids are accepted. We now show that b0 is accepted. Suppose it is not; then one of the bv must be accepted. (The solution is nonzero by assumption; if only some bk are accepted, the total payment from these bids is at most |K| 1 8|V ||K| < 1, which is not enough for any bid to be accepted; and if one of the bl is accepted, then the threshold for the corresponding bv is also reached.) For this v, bv1 k , b+v1 k , b−v1 k contribute (at least) 1 4|V | less than the total payments to c+v and c−v. Again, the other bv and bl cannot (by themselves) help to close this gap; and the bk can contribute at most |K| 1 8|V ||K| < 1 4|V | . It follows that b0 is accepted. 
Now, in order for b0 to be accepted, a total of 2|V |+ T 8|V ||K| must be donated. Because is not possible (for any v ∈ V ) that both b+v and b−v are accepted, it follows that the total payment by the bv and the bl can be at most 2|V | − 1 4 . Adding b0``s payment of 1 4 + 1 16|V ||K| to this, we still need T − 1 2 8|V ||K| from the bk. But each one of them contributes at most 1 8|V ||K| , so at least T of them must be accepted. Corollary 1. Unless P=NP, there is no polynomial-time algorithm for approximating DONATION-CLEARING (with either the surplus or the total amount donated as the objective) within any ratio f(n), where f is a nonzero function of the size of the instance. This holds even if the DONATIONCLEARING structures satisfy all the properties given in Theorem 1. Proof. Suppose we had such a polynomial time algorithm, and applied it to the DONATION-CLEARING instances that were reduced from MAX2SAT instances in Theorem 1. It would return a nonzero solution when the MAX2SAT instance has a solution, and a zero solution otherwise. So we can decide whether arbitrary MAX2SAT instances are satisfiable this way, and it would follow that P=NP. (Solving the problem to optimality is NP-complete in many other (noncomparable or even more restricted) settings as well-we omit such results because of space constraint.) This should not be interpreted to mean that our approach is infeasible. First, as we will show, there are very expressive families of bids for which the problem is solvable in polynomial time. Second, NP-completeness is often overcome in practice (especially when the stakes are high). For instance, even though the problem of clearing combinatorial auctions is NP-complete [20] (even to approximate [21]), they are typically solved to optimality in practice. 7. MIXED INTEGER PROGRAMMING FORMULATION In this section, we give a mixed integer programming (MIP) formulation for the general problem. We also discuss in which special cases this formulation reduces to a linear programming (LP) formulation. In such cases, the problem is solvable in polynomial time, because linear programs can be solved in polynomial time [11]. The variables of the MIP defining the final outcome are the payments made to the charities, denoted by πci , and the payments extracted from the bidders, πbj . In the case where we try to avoid direct payments and let the bidders pay the charities directly, we add variables πci,bj indicating how much bj pays to ci, with the constraints that for each ci, πci ≤ bj πci,bj ; and for each bj, πbj ≥ ci πci,bj . Additionally, there is a constraint πci,bj = 0 whenever bidder bj is unwilling to pay charity ci. The rest of the MIP can be phrased in terms of the πci and πbj . The objectives we have discussed earlier are both linear: surplus is given by n j=1 πbj − m i=1 πci , and total amount donated is given by m i=1 πci (coefficients can be added to represent different weights on the different charities in the objective). The constraint that the outcome should be valid (no deficit) is given simply by: n j=1 πbj ≥ m i=1 πci . For every bidder, for every charity, we define an additional utility variable ui j indicating the utility that this bidder derives from the payment to this charity. The bidder``s total 55 utility is given by another variable uj, with the constraint that uj = m i=1 ui j. Each ui j is given as a function of πci by the (piecewise linear) function provided by the bidder. 
In order to represent this function in the MIP formulation, we will merely place upper bounding constraints on ui j, so that it cannot exceed the given functions. The MIP solver can then push the ui j variables all the way up to the constraint, in order to extract as much payment from this bidder as possible. In the case where the ui j are concave, this is easy: if (sl, tl) and (sl+1, tl+1) are endpoints of a finite linear segment in the function, we add the constraint that ui j ≤ tl + πci −sl sl+1−sl (tl+1 − tl). If the final (infinite) segment starts at (sk, tk) and has slope d, we add the constraint that ui j ≤ tk + d(πci − sk). Using the fact that the function is concave, for each value of πci , the tightest upper bound on ui j is the one corresponding to the segment above that value of πci , and therefore these constraints are sufficient to force the correct value of ui j. When the function is not concave, we require (for the first time) some binary variables. First, we define another point on the function: (sk+1, tk+1) = (sk + M, tk + dM), where d is the slope of the infinite segment and M is any upper bound on the πcj . This has the effect that we will never be on the infinite segment again. Now, let xi,j l be an indicator variable that should be 1 if πci is below the lth segment of the function, and 0 otherwise. To effect this, first add a constraint k l=0 xi,j l = 1. Now, we aim to represent πci as a weighted average of its two neighboring si,j l . For 0 ≤ l ≤ k + 1, let λi,j l be the weight on si,j l . We add the constraint k+1 l=0 λi,j l = 1. Also, for 0 ≤ l ≤ k + 1, we add the constraint λi,j l ≤ xl−1 +xl (where x−1 and xk+1 are defined to be zero), so that indeed only the two neighboring si,j l have nonzero weight. Now we add the constraint πci = k+1 l=0 si,j l λi,j l , and now the λi,j l must be set correctly. Then, we can set ui j = k+1 l=0 ti,j l λi,j l . (This is a standard MIP technique [16].) Finally, each πbj is bounded by a function of uj by the (piecewise linear) function provided by the bidder (wj). Representing this function is entirely analogous to how we represented ui j as a function of πci . (Again we will need binary variables only if the function is not concave.) Because we only use binary variables when either a utility function ui j or a payment willingness function wj is not concave, it follows that if all of these are concave, our MIP formulation is simply a linear program-which can be solved in polynomial time. Thus: Theorem 2. If all functions ui j and wj are concave (and piecewise linear), the DONATION-CLEARING problem can be solved in polynomial time using linear programming. Even if some of these functions are not concave, we can simply replace each such function by the smallest upper bounding concave function, and use the linear programming formulation to obtain an upper bound on the objectivewhich may be useful in a search formulation of the general problem. 8. WHY ONE CANNOT DO MUCH BETTER THAN LINEAR PROGRAMMING One may wonder if, for the special cases of the DONATIONCLEARING problem that can be solved in polynomial time with linear programming, there exist special purpose algorithms that are much faster than linear programming algorithms. In this section, we show that this is not the case. We give a reduction from (the decision variant of) the general linear programming problem to (the decision variant of) a special case of the DONATION-CLEARING problem (which can be solved in polynomial time using linear programming). 
(The decision variant of an optimization problem asks the binary question: Can the objective value exceed o?) Thus, any special-purpose algorithm for solving the decision variant of this special case of the DONATIONCLEARING problem could be used to solve a decision question about an arbitrary linear program just as fast. (And thus, if we are willing to call the algorithm a logarithmic number of times, we can solve the optimization version of the linear program.) We first observe that for linear programming, a decision question about the objective can simply be phrased as another constraint in the LP (forcing the objective to exceed the given value); then, the original decision question coincides with asking whether the resulting linear program has a feasible solution. Theorem 3. The question of whether an LP (given by a set of linear constraints4 ) has a feasible solution can be modeled as a DONATION-CLEARING instance with payment maximization as the objective, with 2v charities and v + c bids (where v is the number of variables in the LP, and c is the number of constraints). In this model, each bid bj has only linear ui j functions, and is a partially acceptable threshold bid (wj(u) = tj for u ≥ sj, otherwise wj(u) = utj sj ). The v bids corresponding to the variables mention only two charities each; the c bids corresponding to the constraints mention only two times the number of variables in the corresponding constraint. Proof. For every variable xi in the LP, let there be two charities, c+xi and c−xi . Let H be some number such that if there is a feasible solution to the LP, there is one in which every variable has absolute value at most H. In the following, we will represent bids as follows: ({(ck, ak)}, s, t) indicates that uk j (πck ) = akπck (this function is 0 for ck not mentioned in the bid), and wj(uj) = t for uj ≥ s, wj(uj) = uj t s otherwise. For every variable xi in the LP, let there be a bid bxi = ({(c+xi , 1), (c−xi , 1)}, 2H, 2H − c v ). For every constraint i rj i xi ≤ sj in the linear program, let there be a bid bj = ({(c−xi , rj i )}i:r j i >0 ∪ {(c+xi , −rj i )}i:r j i <0 , ( i |rj i |)H − sj, 1). Let the target total amount donated be 2vH. Suppose there is a feasible solution (x∗ 1, x∗ 2, ... , x∗ v) to the LP. Without loss of generality, we can suppose that |x∗ i | ≤ H for all i. Then, in the DONATION-CLEARING instance, 4 These constraints must include bounds on the variables (including nonnegativity bounds), if any. 56 for every i, let πc+xi = H + x∗ i , and let πc−xi = H − x∗ i (for a total payment of 2H to these two charities). This allows us to extract the maximum payment from the bids bxi -a total payment of 2vH − c. Additionally, the utility of bidder bj is now i:r j i >0 rj i (H − x∗ i ) + i:r j i <0 −rj i (H + x∗ i ) = ( i |rj i |)H − i rj i x∗ i ≥ ( i |rj i |)H − sj (where the last inequality stems from the fact that constraint j must be satisfied in the LP solution), so it follows we can extract the maximum payment from all the bidders bj, for a total payment of c. It follows that we can extract the required 2vH payment from the bidders, and there exists a solution to the DONATION-CLEARING instance with a total amount donated of at least 2vH. Now suppose there is a solution to the DONATIONCLEARING instance with a total amount donated of at least vH. Then the maximum payment must be extracted from each bidder. From the fact that the maximum payment must be extracted from each bidder bxi , it follows that for each i, πc+xi + πc−xi ≥ 2H. 
Because the maximum extractable total payment is 2vH, it follows that for each i, πc+xi + πc−xi = 2H. Let x∗ i = πc+xi − H = H − πc−xi . Then, from the fact that the maximum payment must be extracted from each bidder bj, it follows that ( i |rj i |)H − sj ≤ i:r j i >0 rj i πc−xi + i:r j i <0 −rj i πc+xi = i:r j i >0 rj i (H − x∗ i ) + i:r j i <0 −rj i (H + x∗ i ) = ( i |rj i |)H − i rj i x∗ i . Equivalently, i rj i x∗ i ≤ sj. It follows that the x∗ i constitute a feasible solution to the LP. 9. QUASILINEAR BIDS Another class of bids of interest is the class of quasilinear bids. In a quasilinear bid, the bidder``s payment willingness function is linear in utility: that is, wj = uj. (Because the units of utility are arbitrary, we may as well let them correspond exactly to units of money-so we do not need a constant multiplier.) In most cases, quasilinearity is an unreasonable assumption: for example, usually bidders have a limited budget for donations, so that the payment willingness will stop increasing in utility after some point (or at least increase slower in the case of a softer budget constraint). Nevertheless, quasilinearity may be a reasonable assumption in the case where the bidders are large organizations with large budgets, and the charities are a few small projects requiring relatively little money. In this setting, once a certain small amount has been donated to a charity, a bidder will derive no more utility from more money being donated from that charity. Thus, the bidders will never reach a high enough utility for their budget constraint (even when it is soft) to take effect, and thus a linear approximation of their payment willingness function is reasonable. Another reason for studying the quasilinear setting is that it is the easiest setting for mechanism design, which we will discuss shortly. In this section, we will see that the clearing problem is much easier in the case of quasilinear bids. First, we address the case where we are trying to maximize surplus (which is the most natural setting for mechanism design). The key observation here is that when bids are quasilinear, the clearing problem decomposes across charities. Lemma 1. Suppose all bids are quasilinear, and surplus is the objective. Then we can clear the market optimally by clearing the market for each charity individually. That is, for each bidder bj, let πbj = ci πbi j . Then, for each charity ci, maximize ( bj πbi j ) − πci , under the constraint that for every bidder bj, πbi j ≤ ui j(πci ). Proof. The resulting solution is certainly valid: first of all, at least as much money is collected as is given away, because bj πbj − ci πci = bj ci πbi j − ci πci = ci (( bj πbi j ) − πci )-and the terms of this summation are the objectives of the individual optimization problems, each of which can be set at least to 0 (by setting all the variables are set to 0), so it follows that the expression is nonnegative. Second, no bidder bj pays more than she is willing to, because uj −πbj = ci ui j(πci )− ci πbi j = ci (ui j(πci )−πbi j )-and the terms of this summation are nonnegative by the constraints we imposed on the individual optimization problems. All that remains to show is that the solution is optimal. Because in an optimal solution, we will extract as much payment from the bidders as possible given the πci , all we need to show is that the πci are set optimally by this approach. Let π∗ ci be the amount paid to charity πci in some optimal solution. 
If we change this amount to πci and leave everything else unchanged, this will only affect the payment that we can extract from the bidders because of this particular charity, and the difference in surplus will be bj ui j(πci ) − ui j(π∗ ci ) − πci + π∗ ci . This expression is, of course, 0 if πci = π∗ ci . But now notice that this expression is maximized as a function of πci by the decomposed solution for this charity (the terms without πci in them do not matter, and of course in the decomposed solution we always set πbi j = ui j(πci )). It follows that if we change πci to the decomposed solution, the change in surplus will be at least 0 (and the solution will still be valid). Thus, we can change the πci one by one to the decomposed solution without ever losing any surplus. Theorem 4. When all bids are quasilinear and surplus is the objective, DONATION-CLEARING can be done in linear time. Proof. By Lemma 1, we can solve the problem separately for each charity. For charity ci, this amounts to maximizing ( bj ui j(πci )) − πci as a function of πci . Because all its terms are piecewise linear functions, this whole function is piecewise linear, and must be maximized at one of the points where it is nondifferentiable. It follows that we need only check all the points at which one of the terms is nondifferentiable. Unfortunately, the decomposing lemma does not hold for payment maximization. Proposition 1. When the objective is payment maximization, even when bids are quasilinear, the solution obtained by decomposing the problem across charities is in general not optimal (even with concave bids). 57 Proof. Consider a single bidder b1 placing the following quasilinear bid over two charities c1 and c2: u1 1(πc1 ) = 2πci for 0 ≤ πci ≤ 1, u1 1(πc1 ) = 2 + πci −1 4 otherwise; u2 1(πc2 ) = πci 2 . The decomposed solution is πc1 = 7 3 , πc2 = 0, for a total donation of 7 3 . But the solution πc1 = 1, πc2 = 2 is also valid, for a total donation of 3 > 7 3 . In fact, when payment maximization is the objective, DONATION-CLEARING remains (weakly) NP-complete in general. (In the remainder of the paper, proofs are omitted because of space constraint.) Theorem 5. DONATION-CLEARING is (weakly) NPcomplete when payment maximization is the objective, even when every bid is concerns only one charity (and has a stepfunction utility function for this charity), and is quasilinear. However, when the bids are also concave, a simple greedy clearing algorithm is optimal. Theorem 6. Given a DONATION-CLEARING instance with payment maximization as the objective where all bids are quasilinear and concave, consider the following algorithm. Start with πci = 0 for all charities. Then, letting γci = d bj ui j (πci ) dπci (at nondifferentiable points, these derivatives should be taken from the right), increase πc∗ i (where c∗ i ∈ arg maxci γci ), until either γc∗ i is no longer the highest (in which case, recompute c∗ i and start increasing the corresponding payment), or bj uj = ci πci and γc∗ i < 1. Finally, let πbj = uj. (A similar greedy algorithm works when the objective is surplus and the bids are quasilinear and concave, with as only difference that we stop increasing the payments as soon as γc∗ i < 1.) 10. INCENTIVE COMPATIBILITY Up to this point, we have not discussed the bidders'' incentives for bidding any particular way. 
10. INCENTIVE COMPATIBILITY

Up to this point, we have not discussed the bidders' incentives for bidding any particular way. Specifically, the bids may not truthfully reflect the bidders' preferences over charities, because a bidder may bid strategically, misrepresenting her preferences in order to obtain a result that is better for herself. This means the mechanism is not strategy-proof. (We will show some concrete examples of this shortly.) This is not too surprising, because the mechanism described so far is, in a sense, a first-price mechanism, where the mechanism will extract as much payment from a bidder as her bid allows. Such mechanisms (for example, first-price auctions, where winners pay the value of their bids) are typically not strategy-proof: if a bidder reports her true valuation for an outcome, then if this outcome occurs, the payment the bidder will have to make will offset her gains from the outcome completely. Of course, we could try to change the rules of the game (which outcome, that is, which payment vector to charities, we select for which bid vector, and which bidder pays how much) in order to make bidding truthfully beneficial, and to make the outcome better with regard to the bidders' true preferences. This is the field of mechanism design. In this section, we will briefly discuss the options that mechanism design provides for the expressive charity donation problem.

10.1 Strategic bids under the first-price mechanism

We first point out some reasons for bidders to misreport their preferences under the first-price mechanism described in the paper up to this point. First of all, even when there is only one charity, it may make sense to underbid one's true valuation for the charity. For example, suppose a bidder would like a charity to receive a certain amount x, but does not care if the charity receives more than that. Additionally, suppose that the other bids guarantee that the charity will receive at least x no matter what bid the bidder submits (and the bidder knows this). Then the bidder is best off not bidding at all (or submitting a utility for the charity of 0), to avoid having to make any payment. (This is known in economics as the free rider problem [14].)

With multiple charities, another kind of manipulation may occur, where the bidder attempts to steer others' payments towards her preferred charity. Suppose that there are two charities, and three bidders. The first bidder bids $u^1_1(\pi_{c_1}) = 1$ if $\pi_{c_1} \ge 1$, $u^1_1(\pi_{c_1}) = 0$ otherwise; $u^2_1(\pi_{c_2}) = 1$ if $\pi_{c_2} \ge 1$, $u^2_1(\pi_{c_2}) = 0$ otherwise; and $w_1(u_1) = u_1$ if $u_1 \le 1$, $w_1(u_1) = 1 + \frac{1}{100}(u_1 - 1)$ otherwise. The second bidder bids $u^1_2(\pi_{c_1}) = 1$ if $\pi_{c_1} \ge 1$, $u^1_2(\pi_{c_1}) = 0$ otherwise; $u^2_2(\pi_{c_2}) = 0$ (always); and $w_2(u_2) = \frac{1}{4}u_2$ if $u_2 \le 1$, $w_2(u_2) = \frac{1}{4} + \frac{1}{100}(u_2 - 1)$ otherwise. Now, the third bidder's true preferences are accurately represented (in the sense that, if the bidder were forced to pay the full amount that her bid allows for a particular vector of payments to charities, she would be indifferent between this and not participating in the mechanism at all; compare bidding truthfully in a first-price auction) by the bid $u^1_3(\pi_{c_1}) = 1$ if $\pi_{c_1} \ge 1$, $u^1_3(\pi_{c_1}) = 0$ otherwise; $u^2_3(\pi_{c_2}) = 3$ if $\pi_{c_2} \ge 1$, $u^2_3(\pi_{c_2}) = 0$ otherwise; and $w_3(u_3) = \frac{1}{3}u_3$ if $u_3 \le 1$, $w_3(u_3) = \frac{1}{3} + \frac{1}{100}(u_3 - 1)$ otherwise. Now, it is straightforward to check that, if the third bidder bids truthfully, then regardless of whether the objective is surplus maximization or total amount donated, charity 1 will receive at least 1, and charity 2 will receive less than 1. The same is true if bidder 3 does not place a bid at all (as in the previous type of manipulation); hence bidder 3's utility will be 1 in this case.
But now, if bidder 3 reports $u^1_3(\pi_{c_1}) = 0$ everywhere; $u^2_3(\pi_{c_2}) = 3$ if $\pi_{c_2} \ge 1$, $u^2_3(\pi_{c_2}) = 0$ otherwise (this part of the bid is truthful); and $w_3(u_3) = \frac{1}{3}u_3$ if $u_3 \le 1$, $w_3(u_3) = \frac{1}{3}$ otherwise; then charity 2 will receive at least 1, and bidder 3 will have to pay at most $\frac{1}{3}$. Because up to this amount of payment, one unit of money corresponds to three units of utility to bidder 3, it follows that his utility is now at least $3 - 1 = 2 > 1$. We observe that in this case, the strategic bidder is not only affecting how much the bidders pay, but also how much the charities receive.

10.2 Mechanism design in the quasilinear setting

There are four reasons why the mechanism design approach is likely to be most successful in the setting of quasilinear preferences. First, historically, mechanism design has been most successful when the quasilinear assumption could be made. Second, because of this success, some very general mechanisms have been discovered for the quasilinear setting (for instance, the VCG mechanisms [24, 4, 10], or the dAGVA mechanism [6, 1]) which we could apply directly to the expressive charity donation problem. Third, as we saw in Section 9, the clearing problem is much easier in this setting, and thus we are less likely to run into computational trouble for the mechanism design problem. Fourth, as we will show shortly, the quasilinearity assumption in some cases allows for decomposing the mechanism design problem over the charities (as it did for the simple clearing problem). Moreover, in the quasilinear setting (unlike in the general setting), it makes sense to pursue social welfare (the sum of the utilities) as the objective, because now 1) units of utility correspond directly to units of money, so that we do not have the problem of the bidders arbitrarily scaling their utilities; and 2) it is no longer possible to give a payment willingness function of 0 while still affecting the donations through a utility function.

Before presenting the decomposition result, we introduce some terms from game theory. A type is a preference profile that a bidder can have and can report (thus, a type report is a bid). Incentive compatibility (IC) means that bidders are best off reporting their preferences truthfully, either regardless of the others' types (in dominant strategies) or in expectation over them (in Bayes-Nash equilibrium). Individual rationality (IR) means agents are at least as well off participating in the mechanism as not participating, either regardless of the others' types (ex post) or in expectation over them (ex interim). A mechanism is budget balanced if there is no flow of money into or out of the system, either in general (ex post) or in expectation over the type reports (ex ante). A mechanism is efficient if it (always) produces the efficient allocation of wealth to charities.

Theorem 7. Suppose all agents' preferences are quasilinear.
Furthermore, suppose that there exists a single-charity mechanism M that, for a certain subclass P of (quasilinear) preferences, under a given solution concept S (implementation in dominant strategies or in Bayes-Nash equilibrium) and a given notion of individual rationality R (ex post, ex interim, or none), satisfies a certain notion of budget balance (ex post, ex ante, or none), and is ex-post efficient. Then there exists such a mechanism for any number of charities.

Two mechanisms that satisfy efficiency (and can in fact be applied directly to the multiple-charity problem without use of the previous theorem) are the VCG mechanism (which is incentive compatible in dominant strategies) and the dAGVA mechanism (which is incentive compatible only in Bayes-Nash equilibrium). Each of them, however, has a drawback that would probably make it impractical in the setting of donations to charities. The VCG mechanism is not budget balanced. The dAGVA mechanism does not satisfy ex-post individual rationality. In the next subsection, we will investigate whether we can do better in the setting of donations to charities.

10.3 Impossibility of efficiency

In this subsection, we show that even in a very restricted setting, and with minimal requirements on IC and IR constraints, it is impossible to create a mechanism that is efficient.

Theorem 8. There is no mechanism which is ex-post budget balanced, ex-post efficient, and ex-interim individually rational with Bayes-Nash equilibrium as the solution concept (even with only one charity, only two quasilinear bidders, and identical type distributions, uniform over two types, with either both utility functions being step functions or both utility functions being concave piecewise linear functions).

The case of step functions in this theorem corresponds exactly to the case of a single, fixed-size, nonexcludable public good (the public good being that the charity receives the desired amount), for which such an impossibility result is already known [14]. Many similar results are known, probably the most famous of which is the Myerson-Satterthwaite impossibility result, which proves the impossibility of efficient bilateral trade under the same requirements [15]. Theorem 7 indicates that there is no reason to decide on donations to multiple charities under a single mechanism (rather than a separate one for each charity) when an efficient mechanism with the desired properties exists for the single-charity case. However, because under the requirements of Theorem 8 no such mechanism exists, there may be a benefit to bringing the charities under the same umbrella. The next proposition shows that this is indeed the case.

Proposition 2. There exist settings with two charities where there exists no ex-post budget balanced, ex-post efficient, and ex-interim individually rational mechanism with Bayes-Nash equilibrium as the solution concept for either charity alone; but there exists an ex-post budget balanced, ex-post efficient, and ex-post individually rational mechanism with dominant strategies as the solution concept for both charities together. (This holds even when the conditions are the same as in Theorem 8, apart from the fact that there are now two charities.)

11. CONCLUSION

We introduced a bidding language for expressing very general types of matching offers over multiple charities.
We formulated the corresponding clearing problem (deciding how much each bidder pays, and how much each charity receives), and showed that it is NP-complete to approximate to any ratio even in very restricted settings. We gave a mixed-integer program formulation of the clearing problem, and showed that for concave bids (where utility functions and payment willingness functions are concave), the program reduces to a linear program and can hence be solved in polynomial time. We then showed that the clearing problem for a subclass of concave bids is at least as hard as the decision variant of linear programming, suggesting that we cannot do much better than a linear programming implementation for such bids. Subsequently, we showed that the clearing problem is much easier when bids are quasilinear (where payment willingness functions are linear): for surplus, the problem decomposes across charities, and for payment maximization, a greedy approach is optimal if the bids are concave (although this latter problem is weakly NP-complete when the bids are not concave). For the quasilinear setting, we studied the mechanism design question of making the bidders report their preferences truthfully rather than strategically. We showed that an ex-post efficient mechanism is impossible even with only one charity and a very restricted class of bids. We also showed that even though the clearing problem decomposes over charities in the quasilinear setting, there may be benefits to linking the charities from a mechanism design standpoint.

There are many directions for future research. One is to build a web-based implementation of the (first-price) mechanism proposed in this paper. Another is to study the computational scalability of our MIP/LP approach. It is also important to identify other classes of bids (besides concave ones) for which the clearing problem is tractable. Much crucial work remains to be done on the mechanism design problem. Finally, are there good iterative mechanisms for charity donation (compare, for example, iterative mechanisms in the combinatorial auction setting [19, 25, 2])?

12. REFERENCES

[1] K. Arrow. The property rights doctrine and demand revelation under incomplete information. In M. Boskin, editor, Economics and Human Welfare. New York Academic Press, 1979.
[2] L. M. Ausubel and P. Milgrom. Ascending auctions with package bidding. Frontiers of Theoretical Economics, 1, 2002. No. 1, Article 1.
[3] Y. Bartal, R. Gonen, and N. Nisan. Incentive compatible multi-unit combinatorial auctions. In Theoretical Aspects of Rationality and Knowledge (TARK IX), Bloomington, Indiana, USA, 2003.
[4] E. H. Clarke. Multipart pricing of public goods. Public Choice, 11:17-33, 1971.
[5] V. Conitzer and T. Sandholm. Complexity of mechanism design. In Proceedings of the 18th Annual Conference on Uncertainty in Artificial Intelligence (UAI-02), pages 103-110, Edmonton, Canada, 2002.
[6] C. d'Aspremont and L. A. Gérard-Varet. Incentives and incomplete information. Journal of Public Economics, 11:25-45, 1979.
[7] M. R. Garey, D. S. Johnson, and L. Stockmeyer. Some simplified NP-complete graph problems. Theoretical Computer Science, 1:237-267, 1976.
[8] D. Goldburg and S. McElligott. Red Cross statement on official donation locations. 2001. Press release, http://www.redcross.org/press/disaster/ds pr/011017legitdonors.html.
[9] R. Gonen and D. Lehmann. Optimal solutions for multi-unit combinatorial auctions: Branch and bound heuristics. In Proceedings of the ACM Conference on Electronic Commerce (ACM-EC), pages 13-20, Minneapolis, MN, Oct. 2000.
[10] T. Groves. Incentives in teams. Econometrica, 41:617-631, 1973.
[11] L. Khachiyan. A polynomial algorithm in linear programming. Soviet Math. Doklady, 20:191-194, 1979.
[12] R. Lavi, A. Mu'alem, and N. Nisan. Towards a characterization of truthful combinatorial auctions. In Proceedings of the Annual Symposium on Foundations of Computer Science (FOCS), 2003.
[13] D. Lehmann, L. I. O'Callaghan, and Y. Shoham. Truth revelation in rapid, approximately efficient combinatorial auctions. Journal of the ACM, 49(5):577-602, 2002. Early version appeared in ACM-EC-99.
[14] A. Mas-Colell, M. Whinston, and J. R. Green. Microeconomic Theory. Oxford University Press, 1995.
[15] R. Myerson and M. Satterthwaite. Efficient mechanisms for bilateral trading. Journal of Economic Theory, 28:265-281, 1983.
[16] G. L. Nemhauser and L. A. Wolsey. Integer and Combinatorial Optimization. John Wiley & Sons, 1999. Section 4, page 11.
[17] N. Nisan. Bidding and allocation in combinatorial auctions. In Proceedings of the ACM Conference on Electronic Commerce (ACM-EC), pages 1-12, Minneapolis, MN, 2000.
[18] N. Nisan and A. Ronen. Computationally feasible VCG mechanisms. In Proceedings of the ACM Conference on Electronic Commerce (ACM-EC), pages 242-252, Minneapolis, MN, 2000.
[19] D. C. Parkes. iBundle: An efficient ascending price bundle auction. In Proceedings of the ACM Conference on Electronic Commerce (ACM-EC), pages 148-157, Denver, CO, Nov. 1999.
[20] M. H. Rothkopf, A. Pekeč, and R. M. Harstad. Computationally manageable combinatorial auctions. Management Science, 44(8):1131-1147, 1998.
[21] T. Sandholm. Algorithm for optimal winner determination in combinatorial auctions. Artificial Intelligence, 135:1-54, Jan. 2002. Conference version appeared at the International Joint Conference on Artificial Intelligence (IJCAI), pp. 542-547, Stockholm, Sweden, 1999.
[22] T. Sandholm, S. Suri, A. Gilpin, and D. Levine. CABOB: A fast optimal algorithm for combinatorial auctions. In Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence (IJCAI), pages 1102-1108, Seattle, WA, 2001.
[23] J. Tagliabue. Global AIDS Funds Is Given Attention, but Not Money. The New York Times, June 1, 2003. Reprinted on http://www.healthgap.org/press releases/a03/060103 NYT HGAP G8 fund.html.
[24] W. Vickrey. Counterspeculation, auctions, and competitive sealed tenders. Journal of Finance, 16:8-37, 1961.
[25] P. R. Wurman and M. P. Wellman. AkBA: A progressive, anonymous-price combinatorial auction. In Proceedings of the ACM Conference on Electronic Commerce (ACM-EC), pages 21-29, Minneapolis, MN, Oct. 2000.
[26] M. Yokoo. The characterization of strategy/false-name proof combinatorial auction protocols: Price-oriented, rationing-free protocol. In Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence (IJCAI), Acapulco, Mexico, Aug. 2003.
Theorem 7 indicates that there is no reason to decide on donations to multiple charities under a single mechanism (rather than a separate one for each charity), when an efficient mechanism with the desired properties exists for the single-charity case. However, because under the requirements of Theorem 8, no such mechanism exists, there may be a benefit to bringing the charities under the same umbrella. The next proposition shows that this is indeed the case. PROPOSITION 2. There exist settings with two charities where there exists no ex-post budget balanced, ex-post efficient, and ex-interim individually rational mechanism with Bayes-Nash equilibrium as the solution concept for either charity alone; but there exists an ex-post budget balanced, ex-post efficient, and ex-post individually rational mechanism with dominant strategies as the solution concept for both charities together. (Even when the conditions are the same as in Theorem 8, apart from the fact that there are now two charities.) 11. CONCLUSION We introduced a bidding language for expressing very general types of matching offers over multiple charities. We formulated the corresponding clearing problem (deciding how much each bidder pays, and how much each charity receives), and showed that it is NP-complete to approximate to any ratio even in very restricted settings. We gave a mixed-integer program formulation of the clearing problem, and showed that for concave bids (where utility functions and payment willingness function are concave), the program reduces to a linear program and can hence be solved in polynomial time. We then showed that the clearing problem for a subclass of concave bids is at least as hard as the decision variant of linear programming, suggesting that we cannot do much better than a linear programming implementation for such bids. Subsequently, we showed that the clearing problem is much easier when bids are quasilinear (where payment willingness functions are linear)--for surplus, the problem decomposes across charities, and for payment maximization, a greedy approach is optimal if the bids are concave (although this latter problem is weakly NP-complete when the bids are not concave). For the quasilinear setting, we studied the mechanism design question of making the bidders report their preferences truthfully rather than strategically. We showed that an ex-post efficient mechanism is impossible even with only one charity and a very restricted class of bids. We also showed that even though the clearing problem decomposes over charities in the quasilinear setting, there may be benefits to linking the charities from a mechanism design standpoint. There are many directions for future research. One is to build a web-based implementation of the (first-price) mechanism proposed in this paper. Another is to study the computational scalability of our MIP/LP approach. It is also important to identify other classes of bids (besides concave ones) for which the clearing problem is tractable. Much crucial work remains to be done on the mechanism design problem. Finally, are there good iterative mechanisms for charity donation? 6
Expressive Negotiation over Donations to Charities ∗ ABSTRACT When donating money to a (say, charitable) cause, it is possible to use the contemplated donation as negotiating material to induce other parties interested in the charity to donate more. Such negotiation is usually done in terms of matching offers, where one party promises to pay a certain amount if others pay a certain amount. However, in their current form, matching offers allow for only limited negotiation. For one, it is not immediately clear how multiple parties can make matching offers at the same time without creating circular dependencies. Also, it is not immediately clear how to make a donation conditional on other donations to multiple charities, when the donator has different levels of appreciation for the different charities. In both these cases, the limited expressiveness of matching offers causes economic loss: it may happen that an arrangement that would have made all parties (donators as well as charities) better off cannot be expressed in terms of matching offers and will therefore not occur. In this paper, we introduce a bidding language for expressing very general types of matching offers over multiple charities. We formulate the corresponding clearing problem (deciding how much each bidder pays, and how much each charity receives), and show that it is NP-complete to approximate to any ratio even in very restricted settings. We give a mixed-integer program formulation of the clearing problem, and show that for concave bids, the program reduces to a linear program. We then show that the clearing problem for a subclass of concave bids is at least as hard as the decision variant of linear programming. Subsequently, we show that the clearing problem is much easier when bids are quasilinear--for surplus, the problem decomposes across charities, and for payment maximization, a greedy approach is optimal if the bids are concave (although this latter problem is weakly NP-complete when the bids are not concave). For the quasilinear setting, we study the mechanism design question. We show that an ex-post efficient mechanism is ∗ Supported by NSF under CAREER Award IRI-9703122, Grant IIS-9800994, ITR IIS-0081246, and ITR IIS-0121678. impossible even with only one charity and a very restricted class of bids. We also show that there may be benefits to linking the charities from a mechanism design standpoint. 1. INTRODUCTION When money is donated to a charitable (or other) cause (hereafter referred to as charity), often the donating party gives unconditionally: a fixed amount is transferred from the donator to the charity, and none of this transfer is contingent on other events--in particular, it is not contingent on the amount given by other parties. Indeed, this is currently often the only way to make a donation, especially for small donating parties such as private individuals. However, when multiple parties support the same charity, each of them would prefer to see the others give more rather than less to this charity. In such scenarios, it is sensible for a party to use its contemplated donation as negotiating material to induce the others to give more. This is done by making the donation conditional on the others' donations. The following example will illustrate this, and show that the donating parties as well as the charitable cause may simultaneously benefit from the potential for such negotiation. Suppose we have two parties, 1 and 2, who are both supporters of charity A. 
To either of them, it would be worth $0.75 if A received $1. It follows neither of them will be willing to give unconditionally, because $0.75 <$1. However, if the two parties draw up a contract that says that they will each give $0.5, both the parties have an incentive to accept this contract (rather than have no contract at all): with the contract, the charity will receive $1 (rather than $0 without a contract), which is worth $0.75 to each party, which is greater than the $0.5 that that party will have to give. Effectively, each party has made its donation conditional on the other party's donation, leading to larger donations and greater happiness to all parties involved. One method that is often used to effect this is to make a matching offer. Examples of matching offers are: "I will give x dollars for every dollar donated." , or "I will give x dollars if the total collected from other parties exceeds y." In our example above, one of the parties can make the offer "I will donate $0.5 if the other party also donates at least that much", and the other party will have an incentive to indeed donate $0.5, so that the total amount given to the charity increases by $1. Thus this matching offer implements the contract suggested above. As a real-world example, the United States government has authorized a donation of up to $1 billion to the Global Fund to fight AIDS, TB and Malaria, under the condition that the American contribution does not exceed one third of the total--to encourage other countries to give more [23]. However, there are several severe limitations to the simple approach of matching offers as just described. 1. It is not clear how two parties can make matching offers where each party's offer is stated in terms of the amount that the other pays. (For example, it is not clear what the outcome should be when both parties offer to match the other's donation.) Thus, matching offers can only be based on payments made by parties that are giving unconditionally (not in terms of a matching offer)--or at least there can be no circular dependencies .1 2. Given the current infrastructure for making matching offers, it is impractical to make a matching offer depend on the amounts given to multiple charities. For instance, a party may wish to specify that it will pay $100 given that charity A receives a total of $1000, but that it will also count donations made to charity B, at half the rate. (Thus, a total payment of $500 to charity A combined with a total payment of $1000 to charity B would be just enough for the party's offer to take effect.) In contrast, in this paper we propose a new approach where each party can express its relative preferences for different charities, and make its offer conditional on its own appreciation for the vector of donations made to the different charities. Moreover, the amount the party offers to donate at different levels of appreciation is allowed to vary arbitrarily (it does need to be a dollar-for-dollar (or n-dollarfor-dollar) matching arrangement, or an arrangement where the party offers a fixed amount provided a given (strike) total has been exceeded). Finally, there is a clear interpretation of what it means when multiple parties are making conditional offers that are stated in terms of each other. Given each combination of (conditional) offers, there is a (usually) unique solution which determines how much each party pays, and how much each charity is paid. 
However, as we will show, finding this solution (the clearing problem) requires solving a potentially difficult optimization problem. A large part of this paper is devoted to studying how difficult this problem is under different assumptions on the structure of the offers, and providing algorithms for solving it. 1Typically, larger organizations match offers of private individuals. For example, the American Red Cross Liberty Disaster Fund maintains a list of businesses that match their customers' donations [8]. Towards the end of the paper, we also study the mechanism design problem of motivating the bidders to bid truthfully. In short, expressive negotiation over donations to charities is a new way in which electronic commerce can help the world. A web-based implementation of the ideas described in this paper can facilitate voluntary reallocation of wealth on a global scale. Aditionally, optimally solving the clearing problem (and thereby generating the maximum economic welfare) requires the application of sophisticated algorithms. 2. COMPARISON TO COMBINATORIAL AUCTIONS AND EXCHANGES 3. DEFINITIONS 4. A SIMPLIFIED BIDDING LANGUAGE 5. AVOIDING INDIRECT PAYMENTS 6. HARDNESS OF CLEARING THE MARKET 7. MIXED INTEGER PROGRAMMING FORMULATION 8. WHY ONE CANNOT DO MUCH BETTER THAN LINEAR PROGRAMMING 9. QUASILINEAR BIDS 10. INCENTIVE COMPATIBILITY 10.1 Strategic bids under the first-price mechanism 10.3 Impossibility of efficiency 11. CONCLUSION We introduced a bidding language for expressing very general types of matching offers over multiple charities. We formulated the corresponding clearing problem (deciding how much each bidder pays, and how much each charity receives), and showed that it is NP-complete to approximate to any ratio even in very restricted settings. We gave a mixed-integer program formulation of the clearing problem, and showed that for concave bids (where utility functions and payment willingness function are concave), the program reduces to a linear program and can hence be solved in polynomial time. We then showed that the clearing problem for a subclass of concave bids is at least as hard as the decision variant of linear programming, suggesting that we cannot do much better than a linear programming implementation for such bids. Subsequently, we showed that the clearing problem is much easier when bids are quasilinear (where payment willingness functions are linear)--for surplus, the problem decomposes across charities, and for payment maximization, a greedy approach is optimal if the bids are concave (although this latter problem is weakly NP-complete when the bids are not concave). For the quasilinear setting, we studied the mechanism design question of making the bidders report their preferences truthfully rather than strategically. We showed that an ex-post efficient mechanism is impossible even with only one charity and a very restricted class of bids. We also showed that even though the clearing problem decomposes over charities in the quasilinear setting, there may be benefits to linking the charities from a mechanism design standpoint. There are many directions for future research. One is to build a web-based implementation of the (first-price) mechanism proposed in this paper. Another is to study the computational scalability of our MIP/LP approach. It is also important to identify other classes of bids (besides concave ones) for which the clearing problem is tractable. Much crucial work remains to be done on the mechanism design problem. 
Finally, are there good iterative mechanisms for charity donation?
Expressive Negotiation over Donations to Charities ∗ ABSTRACT When donating money to a (say, charitable) cause, it is possible to use the contemplated donation as negotiating material to induce other parties interested in the charity to donate more. Such negotiation is usually done in terms of matching offers, where one party promises to pay a certain amount if others pay a certain amount. However, in their current form, matching offers allow for only limited negotiation. For one, it is not immediately clear how multiple parties can make matching offers at the same time without creating circular dependencies. Also, it is not immediately clear how to make a donation conditional on other donations to multiple charities, when the donator has different levels of appreciation for the different charities. In both these cases, the limited expressiveness of matching offers causes economic loss: it may happen that an arrangement that would have made all parties (donators as well as charities) better off cannot be expressed in terms of matching offers and will therefore not occur. In this paper, we introduce a bidding language for expressing very general types of matching offers over multiple charities. We formulate the corresponding clearing problem (deciding how much each bidder pays, and how much each charity receives), and show that it is NP-complete to approximate to any ratio even in very restricted settings. We give a mixed-integer program formulation of the clearing problem, and show that for concave bids, the program reduces to a linear program. We then show that the clearing problem for a subclass of concave bids is at least as hard as the decision variant of linear programming. Subsequently, we show that the clearing problem is much easier when bids are quasilinear--for surplus, the problem decomposes across charities, and for payment maximization, a greedy approach is optimal if the bids are concave (although this latter problem is weakly NP-complete when the bids are not concave). For the quasilinear setting, we study the mechanism design question. We show that an ex-post efficient mechanism is ∗ Supported by NSF under CAREER Award IRI-9703122, Grant IIS-9800994, ITR IIS-0081246, and ITR IIS-0121678. impossible even with only one charity and a very restricted class of bids. We also show that there may be benefits to linking the charities from a mechanism design standpoint. 1. INTRODUCTION Indeed, this is currently often the only way to make a donation, especially for small donating parties such as private individuals. However, when multiple parties support the same charity, each of them would prefer to see the others give more rather than less to this charity. In such scenarios, it is sensible for a party to use its contemplated donation as negotiating material to induce the others to give more. This is done by making the donation conditional on the others' donations. The following example will illustrate this, and show that the donating parties as well as the charitable cause may simultaneously benefit from the potential for such negotiation. Suppose we have two parties, 1 and 2, who are both supporters of charity A. To either of them, it would be worth $0.75 if A received $1. It follows neither of them will be willing to give unconditionally, because $0.75 <$1. Effectively, each party has made its donation conditional on the other party's donation, leading to larger donations and greater happiness to all parties involved. 
One method that is often used to effect this is to make a matching offer. Examples of matching offers are: "I will give x dollars for every dollar donated." Thus this matching offer implements the contract suggested above. However, there are several severe limitations to the simple approach of matching offers as just described. 1. It is not clear how two parties can make matching offers where each party's offer is stated in terms of the amount that the other pays. (For example, it is not clear what the outcome should be when both parties offer to match the other's donation.) Thus, matching offers can only be based on payments made by parties that are giving unconditionally (not in terms of a matching offer)--or at least there can be no circular dependencies .1 2. Given the current infrastructure for making matching offers, it is impractical to make a matching offer depend on the amounts given to multiple charities. For instance, a party may wish to specify that it will pay $100 given that charity A receives a total of $1000, but that it will also count donations made to charity B, at half the rate. (Thus, a total payment of $500 to charity A combined with a total payment of $1000 to charity B would be just enough for the party's offer to take effect.) In contrast, in this paper we propose a new approach where each party can express its relative preferences for different charities, and make its offer conditional on its own appreciation for the vector of donations made to the different charities. Finally, there is a clear interpretation of what it means when multiple parties are making conditional offers that are stated in terms of each other. Given each combination of (conditional) offers, there is a (usually) unique solution which determines how much each party pays, and how much each charity is paid. However, as we will show, finding this solution (the clearing problem) requires solving a potentially difficult optimization problem. A large part of this paper is devoted to studying how difficult this problem is under different assumptions on the structure of the offers, and providing algorithms for solving it. 1Typically, larger organizations match offers of private individuals. For example, the American Red Cross Liberty Disaster Fund maintains a list of businesses that match their customers' donations [8]. Towards the end of the paper, we also study the mechanism design problem of motivating the bidders to bid truthfully. In short, expressive negotiation over donations to charities is a new way in which electronic commerce can help the world. Aditionally, optimally solving the clearing problem (and thereby generating the maximum economic welfare) requires the application of sophisticated algorithms. 11. CONCLUSION We introduced a bidding language for expressing very general types of matching offers over multiple charities. We formulated the corresponding clearing problem (deciding how much each bidder pays, and how much each charity receives), and showed that it is NP-complete to approximate to any ratio even in very restricted settings. We gave a mixed-integer program formulation of the clearing problem, and showed that for concave bids (where utility functions and payment willingness function are concave), the program reduces to a linear program and can hence be solved in polynomial time. 
We then showed that the clearing problem for a subclass of concave bids is at least as hard as the decision variant of linear programming, suggesting that we cannot do much better than a linear programming implementation for such bids. For the quasilinear setting, we studied the mechanism design question of making the bidders report their preferences truthfully rather than strategically. We showed that an ex-post efficient mechanism is impossible even with only one charity and a very restricted class of bids. We also showed that even though the clearing problem decomposes over charities in the quasilinear setting, there may be benefits to linking the charities from a mechanism design standpoint. One is to build a web-based implementation of the (first-price) mechanism proposed in this paper. Another is to study the computational scalability of our MIP/LP approach. It is also important to identify other classes of bids (besides concave ones) for which the clearing problem is tractable. Much crucial work remains to be done on the mechanism design problem. Finally, are there good iterative mechanisms for charity donation? 6
C-81
Adaptive Duty Cycling for Energy Harvesting Systems
Harvesting energy from the environment is feasible in many applications to ameliorate the energy limitations in sensor networks. In this paper, we present an adaptive duty cycling algorithm that allows energy harvesting sensor nodes to autonomously adjust their duty cycle according to the energy availability in the environment. The algorithm has three objectives, namely (a) achieving energy neutral operation, i.e., energy consumption should not be more than the energy provided by the environment, (b) maximizing the system performance based on an application utility model subject to the above energy-neutrality constraint, and (c) adapting to the dynamics of the energy source at run-time. We present a model that enables harvesting sensor nodes to predict future energy opportunities based on historical data. We also derive an upper bound on the maximum achievable performance assuming perfect knowledge about the future behavior of the energy source. Our methods are evaluated using data gathered from a prototype solar energy harvesting platform and we show that our algorithm can utilize up to 58% more environmental energy compared to the case when harvesting-aware power management is not used.
[ "duti cycl", "energi harvest system", "energi harvest", "sensor network", "energi neutral oper", "environment energi", "power manag", "harvest-awar power manag", "perform scale", "duti cycl rate", "network latenc", "sampl frequenc", "solar panel", "energi track", "storag buffer", "power scale", "low power design" ]
[ "P", "P", "P", "P", "P", "P", "P", "M", "M", "M", "M", "U", "M", "M", "U", "M", "M" ]
Adaptive Duty Cycling for Energy Harvesting Systems Jason Hsu, Sadaf Zahedi, Aman Kansal, Mani Srivastava Electrical Engineering Department University of California Los Angeles {jasonh,kansal,szahedi,mbs} @ ee.ucla.edu Vijay Raghunathan NEC Labs America Princeton, NJ vijay@nec-labs.com ABSTRACT Harvesting energy from the environment is feasible in many applications to ameliorate the energy limitations in sensor networks. In this paper, we present an adaptive duty cycling algorithm that allows energy harvesting sensor nodes to autonomously adjust their duty cycle according to the energy availability in the environment. The algorithm has three objectives, namely (a) achieving energy neutral operation, i.e., energy consumption should not be more than the energy provided by the environment, (b) maximizing the system performance based on an application utility model subject to the above energyneutrality constraint, and (c) adapting to the dynamics of the energy source at run-time. We present a model that enables harvesting sensor nodes to predict future energy opportunities based on historical data. We also derive an upper bound on the maximum achievable performance assuming perfect knowledge about the future behavior of the energy source. Our methods are evaluated using data gathered from a prototype solar energy harvesting platform and we show that our algorithm can utilize up to 58% more environmental energy compared to the case when harvesting-aware power management is not used. Categories and Subject Descriptors C.2.4 [Computer Systems Organization]: Computer Communication Networks-Distributed Systems General Terms Algorithms, Design 1. INTRODUCTION Energy supply has always been a crucial issue in designing battery-powered wireless sensor networks because the lifetime and utility of the systems are limited by how long the batteries are able to sustain the operation. The fidelity of the data produced by a sensor network begins to degrade once sensor nodes start to run out of battery power. Therefore, harvesting energy from the environment has been proposed to supplement or completely replace battery supplies to enhance system lifetime and reduce the maintenance cost of replacing batteries periodically. However, metrics for evaluating energy harvesting systems are different from those used for battery powered systems. Environmental energy is distinct from battery energy in two ways. First it is an inexhaustible supply which, if appropriately used, can allow the system to last forever, unlike the battery which is a limited resource. Second, there is an uncertainty associated with its availability and measurement, compared to the energy stored in the battery which can be known deterministically. Thus, power management methods based on battery status are not always applicable to energy harvesting systems. In addition, most power management schemes designed for battery-powered systems only account for the dynamics of the energy consumers (e.g., CPU, radio) but not the dynamics of the energy supply. Consequently, battery powered systems usually operate at the lowest performance level that meets the minimum data fidelity requirement in order to maximize the system life. Energy harvesting systems, on the other hand, can provide enhanced performance depending on the available energy. In this paper, we will study how to adapt the performance of the available energy profile. 
There exist many techniques to accomplish performance scaling at the node level, such as radio transmit power adjustment [1], dynamic voltage scaling [2], and the use of low power modes [3]. However, these techniques require hardware support and may not always be available on resource constrained sensor nodes. Alternatively, a common performance scaling technique is duty cycling. Low power devices typically provide at least one low power mode in which the node is shut down and the power consumption is negligible. In addition, the rate of duty cycling is directly related to system performance metrics such as network latency and sampling frequency. We will use duty cycle adjustment as the primitive performance scaling technique in our algorithms. 2. RELATED WORK Energy harvesting has been explored for several different types of systems, such as wearable computers [4], [5], [6], sensor networks [7], etc.. Several technologies to extract energy from the environment have been demonstrated including solar, motion-based, biochemical, vibration-based [8], [9], [10], [11], and others are being developed [12], [13]. While several energy harvesting sensor node platforms have been prototyped [14], [15], [16], there is a need for systematic power management techniques that provide performance guarantees during system operation. The first work to take environmental energy into account for data routing was [17], followed by [18]. While these works did demonstrate that environment aware decisions improve performance compared to battery aware decisions, their objective was not to achieve energy neutral operation. Our proposed techniques attempt to maximize system performance while maintaining energy-neutral operation. 3. SYSTEM MODEL The energy usage considerations in a harvesting system vary significantly from those in a battery powered system, as mentioned earlier. We propose the model shown in Figure 1 for designing energy management methods in a harvesting system. The functions of the various blocks shown in the figure are discussed below. The precise methods used in our system to achieve these functions will be discussed in subsequent sections. Harvested Energy Tracking: This block represents the mechanisms used to measure the energy received from the harvesting device, such as the solar panel. Such information is useful for determining the energy availability profile and adapting system performance based on it. Collecting this information requires that the node hardware be equipped with the facility to measure the power generated from the environment, and the Heliomote platform [14] we used for evaluating the algorithms has this capability. Energy Generation Model: For wireless sensor nodes with limited storage and processing capabilities to be able to use the harvested energy data, models that represent the essential components of this information without using extensive storage are required. The purpose of this block is to provide a model for the energy available to the system in a form that may be used for making power management decisions. The data measured by the energy tracking block is used here to predict future energy availability. A good prediction model should have a low prediction error and provide predicted energy values for durations long enough to make meaningful performance scaling decisions. Further, for energy sources that exhibit both long-term and short-term patterns (e.g., diurnal and climate variations vs. 
weather patterns for solar energy), the model must be able to capture both characteristics. Such a model can also use information from external sources such as a local weather forecast service to improve its accuracy. Energy Consumption Model: It is also important to have detailed information about the energy usage characteristics of the system, at various performance levels. For general applicability of our design, we will assume that only one sleep mode is available. We assume that the power consumption in the sleep and active modes is known. It may be noted that for low power systems with more advanced capabilities such as dynamic voltage scaling (DVS), multiple low power modes, and the capability to shut down system components selectively, the power consumption in each of the states and the resultant effect on application performance should be known to make power management decisions. Energy Storage Model: This block represents the model for the energy storage technology. Since all the generated energy may not be used instantaneously, the harvesting system will usually have some energy storage technology. Storage technologies (e.g., batteries and ultra-capacitors) are non-ideal, in that there is some energy loss while storing and retrieving energy from them. These characteristics must be known to efficiently manage energy usage and storage. This block also includes the system capability to measure the residual stored energy. Most low power systems use batteries to store energy and provide residual battery status. This is commonly based on measuring the battery voltage which is then mapped to the residual battery energy using the known charge to voltage relationship for the battery technology in use. More sophisticated methods which track the flow of energy into and out of the battery are also available. Harvesting-aware Power Management: The inputs provided by the previously mentioned blocks are used here to determine the suitable power management strategy for the system. Power management could be carried out to meet different objectives in different applications. For instance, in some systems, the harvested energy may marginally supplement the battery supply and the objective may be to maximize the system lifetime. A more interesting case is when the harvested energy is used as the primary source of energy for the system with the objective of achieving indefinitely long system lifetime. In such cases, the power management objective is to achieve energy neutral operation. In other words, the system should only use as much energy as harvested from the environment and attempt to maximize performance within this available energy budget. 4. THEORETICALLY OPTIMAL POWER MANAGEMENT We develop the following theory to understand the energy neutral mode of operation. Let us define Ps(t) as the energy harvested from the environment at time t, and the energy being consumed by the load at that time is Pc(t). Further, we model the non-ideal storage buffer by its round-trip efficiency η (strictly less than 1) and a constant leakage power Pleak. Using this notation, applying the rule of energy conservation leads to the following inequality: B0 + η ∫0^T [Ps(t) − Pc(t)]^+ dt − ∫0^T [Pc(t) − Ps(t)]^+ dt − ∫0^T Pleak dt ≥ 0 (1) where B0 is the initial battery level and the function [X]^+ = X if X > 0 and zero otherwise.
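As a side illustration, inequality (1) can be checked numerically on discretized traces. The sketch below is not part of the system described in this paper; the traces, parameter values, and function names are hypothetical.

# Discrete-time check of the energy-neutrality condition (1): the stored energy
# B0 + eta*sum[Ps-Pc]^+ - sum[Pc-Ps]^+ - sum(Pleak) must stay nonnegative for
# every prefix of the trace.
def energy_neutral(Ps, Pc, dt, eta, P_leak, B0):
    stored = B0
    for ps, pc in zip(Ps, Pc):
        stored += eta * max(ps - pc, 0.0) * dt   # surplus routed through the buffer
        stored -= max(pc - ps, 0.0) * dt         # deficit drawn from the buffer
        stored -= P_leak * dt                    # constant leakage
        if stored < 0:
            return False
    return True

# Example with made-up numbers: 24 hours of a crude day/night solar profile (mW).
hours = range(24)
Ps = [40.0 if 6 <= h <= 18 else 0.0 for h in hours]
Pc = [15.0] * 24
print(energy_neutral(Ps, Pc, dt=3600, eta=0.7, P_leak=0.1, B0=50_000.0))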
DEFINITION 1 (ρ,σ1,σ2) function: A non-negative, continuous and bounded function P(t) is said to be a (ρ,σ1,σ2) function if and only if for any value of finite real number T, the following is satisfied: ρT − σ2 ≤ ∫T P(t) dt ≤ ρT + σ1 (2), where the integral is computed over any time window of length T. This function can be used to model both energy sources and loads. If the harvested energy profile Ps(t) is a (ρ1,σ1,σ2) function, then the average rate of available energy over long durations becomes ρ1, and the burstiness is bounded by σ1 and σ2. Similarly, Pc(t) can be modeled as a (ρ2,σ3) function, where ρ2 and σ3 are used to place an upper bound on power consumption (the inequality on the right side) while there are no minimum power consumption constraints. The condition for energy neutrality, equation (1), leads to the following theorem, based on the energy production, consumption, and energy buffer models discussed above. THEOREM 1 (ENERGY NEUTRAL OPERATION): Consider a harvesting system in which the energy production profile is characterized by a (ρ1, σ1, σ2) function, the load is characterized by a (ρ2, σ3) function and the energy buffer is characterized by parameters η for storage efficiency, and Pleak for leakage power. The following conditions are sufficient for the system to achieve energy neutrality: ρ2 ≤ ηρ1 − Pleak (3) B0 ≥ ησ2 + σ3 (4) B ≥ B0 (5) where B0 is the initial energy stored in the buffer and provides a lower bound on the capacity of the energy buffer B. The proof is presented in our prior work [19]. To adjust the duty cycle D using our performance scaling algorithm, we assume the following relation between duty cycle and the perceived utility of the system to the user: Suppose the utility of the application to the user is represented by U(D) when the system operates at a duty cycle D. Then, U(D) = 0 if D < Dmin, U(D) = k1·D + β if Dmin ≤ D ≤ Dmax, and U(D) = k2 if D > Dmax. This is a fairly general and simple model and the specific values of Dmin and Dmax may be determined as per application requirements. As an example, consider a sensor node designed to detect intrusion across a periphery. In this case, a linear increase in duty cycle translates into a linear increase in the detection probability. The fastest and the slowest speeds of the intruders may be known, leading to a minimum and maximum sensing delay tolerable, which results in the relevant Dmax and Dmin for the sensor node. (Figure 1. System model for an energy harvesting system.) While there may be cases where the relationship between utility and duty cycle may be non-linear, in this paper, we restrict our focus on applications that follow this linear model. In view of the above models for the system components and the required performance, the objective of our power management strategy is to adjust the duty cycle D(i) dynamically so as to maximize the total utility U(D) over a period of time, while ensuring energy neutral operation for the sensor node.
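For concreteness, the sufficient conditions (3)-(5) and the utility model above can be evaluated directly; the following small sketch is only an illustration, with invented parameter values and function names.

# Check of the energy-neutrality conditions (3)-(5) and the piecewise-linear
# utility model U(D); all numbers below are assumptions for illustration.
def energy_neutral_conditions(rho1, sigma1, sigma2, rho2, sigma3, eta, P_leak, B0, B):
    return (rho2 <= eta * rho1 - P_leak and   # condition (3)
            B0 >= eta * sigma2 + sigma3 and   # condition (4)
            B >= B0)                          # condition (5)

def utility(D, Dmin, Dmax, k1, k2, beta):
    if D < Dmin:
        return 0.0
    if D <= Dmax:
        return k1 * D + beta
    return k2                                 # choosing k2 = k1*Dmax + beta keeps U continuous

print(energy_neutral_conditions(20, 5, 5, 10, 2, 0.7, 0.1, 10, 20))
print(utility(0.5, Dmin=0.3, Dmax=0.8, k1=1.0, k2=0.9, beta=0.1))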
Before discussing the performance scaling methods for harvesting aware duty cycle adaptation, let us first consider the optimal power management strategy that is possible for a given energy generation profile. For the calculation of the optimal strategy, we assume complete knowledge of the energy availability profile at the node, including the availability in the future. The calculation of the optimal is a useful tool for evaluating the performance of our proposed algorithm. This is particularly useful for our algorithm since no prior algorithms are available to serve as a baseline for comparison. Suppose the time axis is partitioned into discrete slots of duration ΔT, and the duty cycle adaptation calculation is carried out over a window of Nw such time slots. We define the following energy profile variables, with the index i ranging over {1,..., Nw}: Ps(i) is the power output from the harvested source in time slot i, averaged over the slot duration, Pc is the power consumption of the load in active mode, and D(i) is the duty cycle used in slot i, whose value is to be determined. B(i) is the residual battery energy at the beginning of slot i. Following this convention, the battery energy left after the last slot in the window is represented by B(Nw+1). The values of these variables will depend on the choice of D(i). The energy used directly from the harvested source and the energy stored and used from the battery must be accounted for differently. Figure 2 shows two possible cases for Ps(i) in a time slot. Ps(i) may either be less than or higher than Pc, as shown on the left and right respectively. When Ps(i) is lower than Pc, some of the energy used by the load comes from the battery, while when Ps(i) is higher than Pc, all the energy used is supplied directly from the harvested source. The crosshatched area shows the energy that is available for storage into the battery while the hashed area shows the energy drawn from the battery. (Figure 2. Two possible cases for energy calculations.) We can write the energy used from the battery in any slot i as: B(i) − B(i+1) = ΔT·D(i)·[Pc − Ps(i)]^+ − η·ΔT·Ps(i)·(1 − D(i)) − η·ΔT·D(i)·[Ps(i) − Pc]^+ (6) In equation (6), the first term on the right hand side measures the energy drawn from the battery when Ps(i) < Pc, the next term measures the energy stored into the battery when the node is in sleep mode, and the last term measures the energy stored into the battery in active mode if Ps(i) > Pc. For energy neutral operation, we require the battery at the end of the window of Nw slots to be greater than or equal to the starting battery. Clearly, battery level will go down when the harvested energy is not available and the system is operated from stored energy. However, the window Nw is judiciously chosen such that over that duration, we expect the environmental energy availability to complete a periodic cycle. For instance, in the case of solar energy harvesting, Nw could be chosen to be a twenty-four hour duration, corresponding to the diurnal cycle in the harvested energy. This is an approximation since an ideal choice of the window size would be infinite, but a finite size must be used for analytical tractability. Further, the battery level cannot be negative at any time, and this is ensured by having a large enough initial battery level B0 such that node operation is sustained even in the case of total blackout during a window period.
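A small sketch of the battery recursion in equation (6), useful for checking a candidate duty-cycle schedule, is given below; it is our own illustration and all numbers are hypothetical.

# Battery evolution per equation (6): given Ps(i), Pc, eta and a duty-cycle
# schedule D(i), return the residual battery levels B(1),...,B(Nw+1).
def battery_evolution(B0, Ps, D, Pc, eta, dT):
    B = [B0]
    for ps, d in zip(Ps, D):
        drawn  = dT * d * max(Pc - ps, 0.0)          # drawn from battery while active, Ps(i) < Pc
        stored = eta * dT * ps * (1.0 - d)           # stored while asleep
        stored += eta * dT * d * max(ps - Pc, 0.0)   # stored while active, Ps(i) > Pc
        B.append(B[-1] - drawn + stored)             # equation (6) rearranged for B(i+1)
    return B

Ps = [0, 0, 30, 60, 40, 0]                 # hypothetical per-slot solar power, mW
D  = [0.3, 0.3, 0.8, 0.8, 0.8, 0.3]
print(battery_evolution(B0=5000.0, Ps=Ps, D=D, Pc=20.0, eta=0.7, dT=1800))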
Stating the above constraints quantitatively, we can express the calculation of the optimal duty cycles as the optimization problem below:
max Σi=1..Nw D(i) (7)
subject to:
B(i) − B(i+1) = ΔT·D(i)·[Pc − Ps(i)]^+ − η·ΔT·Ps(i)·(1 − D(i)) − η·ΔT·D(i)·[Ps(i) − Pc]^+ (8)
B(1) = B0 (9)
B(Nw + 1) ≥ B0 (10)
D(i) ≥ Dmin ∀ i ∈ {1,...,Nw} (11)
D(i) ≤ Dmax ∀ i ∈ {1,...,Nw} (12)
The solution to the optimization problem yields the duty cycles that must be used in every slot and the evolution of residual battery over the course of Nw slots. Note that while the constraints above contain the non-linear function [x]^+, the quantities occurring within that function are all known constants. The variable quantities occur only in linear terms and hence the above optimization problem can be solved using standard linear programming techniques, available in popular optimization toolboxes.
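For example, the window optimization (7)-(12) can be prototyped with an off-the-shelf LP solver. The sketch below uses scipy.optimize.linprog and is only an illustration of the formulation, not the implementation used in this paper; the traces and parameter values are hypothetical.

# LP formulation of (7)-(12): variables are D(1..Nw) and B(1..Nw+1); the
# [.]^+ terms are known constants, so constraint (8) is linear in D and B.
import numpy as np
from scipy.optimize import linprog

def optimal_duty_cycles(Ps, Pc, eta, dT, B0, Dmin, Dmax):
    Nw = len(Ps)
    n = Nw + (Nw + 1)
    c = np.concatenate([-np.ones(Nw), np.zeros(Nw + 1)])        # maximize sum of D(i)

    A_eq = np.zeros((Nw + 1, n)); b_eq = np.zeros(Nw + 1)
    for i in range(Nw):
        a = max(Pc - Ps[i], 0.0)                                # [Pc - Ps(i)]^+
        b = max(Ps[i] - Pc, 0.0)                                # [Ps(i) - Pc]^+
        # (8) rearranged: -B(i) + B(i+1) + D(i)*dT*(a + eta*Ps(i) - eta*b) = eta*dT*Ps(i)
        A_eq[i, i] = dT * (a + eta * Ps[i] - eta * b)
        A_eq[i, Nw + i] = -1.0
        A_eq[i, Nw + i + 1] = 1.0
        b_eq[i] = eta * dT * Ps[i]
    A_eq[Nw, Nw] = 1.0; b_eq[Nw] = B0                           # (9): B(1) = B0

    A_ub = np.zeros((1, n)); A_ub[0, -1] = -1.0                 # (10): B(Nw+1) >= B0
    b_ub = np.array([-B0])

    bounds = [(Dmin, Dmax)] * Nw + [(None, None)] * (Nw + 1)    # (11), (12)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.x[:Nw] if res.success else None

Ps = [0, 0, 10, 40, 60, 50, 20, 0]          # hypothetical per-slot solar power, mW
print(optimal_duty_cycles(Ps, Pc=25.0, eta=0.7, dT=1800.0,
                          B0=50_000.0, Dmin=0.3, Dmax=0.8))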
5. HARVESTING-AWARE POWER MANAGEMENT We now present a practical algorithm for power management that may be used for adapting the performance based on harvested energy information. This algorithm attempts to achieve energy neutral operation without using knowledge of the future energy availability and maximizes the achievable performance within that constraint. The harvesting-aware power management strategy consists of three parts. The first part is an instantiation of the energy generation model which tracks past energy input profiles and uses them to predict future energy availability. The second part computes the optimal duty cycles based on the predicted energy, and this step uses our computationally tractable method to solve the optimization problem. The third part consists of a method to dynamically adapt the duty cycle in response to the observed energy generation profile in real time. This step is required since the observed energy generation may deviate significantly from the predicted energy availability, and energy neutral operation must be ensured with the actual energy received rather than the predicted values. 5.1. Energy Prediction Model We use a prediction model based on an Exponentially Weighted Moving-Average (EWMA). The method is designed to exploit the diurnal cycle in solar energy but at the same time adapt to the seasonal variations. A historical summary of the energy generation profile is maintained for this purpose. While the storage data size is limited to a vector length of Nw values in order to minimize the memory overheads of the power management algorithm, the window size is effectively infinite as each value in the history window depends on all the observed data up to that instant. The window size is chosen to be 24 hours and each time slot is taken to be 30 minutes, as the variation in generated power by the solar panel using this setting is less than 10% between adjacent slots. This yields Nw = 48. Smaller slot durations may be used at the expense of a higher Nw. The historical summary maintained is derived as follows. On a typical day, we expect the energy generation to be similar to the energy generation at the same time on the previous days. The value of energy generated in a particular slot is maintained as a weighted average of the energy received in the same time-slot during all observed days. The weights are exponential, resulting in decaying contribution from older data. More specifically, the historical average maintained for each slot is given by: x̄k = α·xk + (1 − α)·x̄k−1, where α is the value of the weighting factor, xk is the observed value of energy generated in the slot, and x̄k−1 is the previously stored historical average. In this model, the importance of each day relative to the previous one remains constant because the same weighting factor is used for all days. The average value derived for a slot is treated as an estimate of the predicted energy value for the slot corresponding to the subsequent day. This method helps the historical average values adapt to the seasonal variations in energy received on different days. One of the parameters to be chosen in the above prediction method is the parameter α, which is a measure of the rate of shift in the energy pattern over time. Since this parameter is affected by the characteristics of the energy source and the sensor node location, the system should have a training period during which this parameter is determined. To determine a good value of α, we collected energy data over 72 days and compared the average error of the prediction method for various values of α. The error based on the different values of α is shown in Figure 3. This curve suggests an optimum value of α = 0.15 for minimum prediction error and this value will be used in the remainder of this paper. (Figure 3. Choice of prediction parameter.)
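A minimal sketch of this per-slot EWMA predictor is given below; the class name, units, and synthetic observations are our own, and only α = 0.15 and the 48-slot window follow the text.

# Per-slot EWMA predictor: one historical average per slot of the day, updated
# with the weighting factor alpha as each day's observation arrives.
class EwmaPredictor:
    def __init__(self, n_slots=48, alpha=0.15):
        self.alpha = alpha
        self.avg = [0.0] * n_slots

    def predict(self, slot):
        """Predicted energy for this slot on the next day."""
        return self.avg[slot]

    def update(self, slot, observed):
        """x_bar_k = alpha * x_k + (1 - alpha) * x_bar_(k-1), applied per slot."""
        self.avg[slot] = self.alpha * observed + (1.0 - self.alpha) * self.avg[slot]

p = EwmaPredictor()
for day in range(3):                         # feed a few synthetic days of data
    for slot in range(48):
        p.update(slot, observed=50.0 if 12 <= slot < 36 else 0.0)
print(p.predict(24), p.predict(0))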
5.2. Low-complexity Solution The energy values predicted for the next window of Nw slots are used to calculate the desired duty cycles for the next window, assuming the predicted values match the observed values in the future. Since our objective is to develop a practical algorithm for embedded computing systems, we present a simplified method to solve the linear programming problem presented in Section 4. To this end, we define the sets S and D as follows: S = { i | Ps(i) − Pc ≥ 0 } and D = { i | Pc − Ps(i) > 0 }. The two sets differ by the condition of whether the node operation can be sustained entirely from environmental energy. In the case that the energy produced from the environment is not sufficient, the battery will be discharged to supplement the remaining energy. Next we sum up both sides of (6) over the entire Nw window and rewrite it with the new notation: B(1) − B(Nw+1) = Σi∈D ΔT·D(i)·[Pc − Ps(i)] − η·Σi=1..Nw ΔT·Ps(i) + η·Σi=1..Nw ΔT·Ps(i)·D(i) − η·Σi∈S ΔT·D(i)·[Ps(i) − Pc]. The term on the left hand side is actually the battery energy used over the entire window of Nw slots, which can be set to 0 for energy neutral operation. After some algebraic manipulation, this yields: Σi=1..Nw Ps(i) = Σi∈D D(i)·( Pc/η + Ps(i)·(1 − 1/η) ) + Σi∈S D(i)·Pc (13) The term on the left hand side is the total energy received in the Nw slots. The first term on the right hand side can be interpreted as the total energy consumed during the D slots and the second term is the total energy consumed during the S slots. We can now replace the three constraints (8), (9), and (10) in the original problem with (13), restating the optimization problem as follows: max Σi=1..Nw D(i), subject to Σi=1..Nw Ps(i) = Σi∈D D(i)·( Pc/η + Ps(i)·(1 − 1/η) ) + Σi∈S D(i)·Pc, D(i) ≥ Dmin ∀ i ∈ {1,...,Nw}, and D(i) ≤ Dmax ∀ i ∈ {1,...,Nw}. This form facilitates a low complexity solution that does not require a general linear programming solver. Since our objective is to maximize the total system utility, it is preferable to set the duty cycle to Dmin for time slots where the utility per unit energy is the least. On the other hand, we would also like the time slots with the highest Ps to operate at Dmax because of the better efficiency of using energy directly from the energy source. Combining these two characteristics, we define the utility coefficient for each slot i as follows: W(i) = 1/Pc for i ∈ S, and W(i) = 1/( Pc/η + Ps(i)·(1 − 1/η) ) for i ∈ D, where W(i) is a representation of how efficient the energy usage in a particular time slot i is. A larger W(i) indicates more system utility per unit energy in slot i and vice versa. The algorithm starts by assuming D(i) = Dmin for i = {1,...,Nw} because of the minimum duty cycle requirement, and computes the remaining system energy R by: R = Σi=1..Nw Ps(i) − Σi∈D D(i)·( Pc/η + Ps(i)·(1 − 1/η) ) − Σi∈S D(i)·Pc (14) A negative R concludes that the optimization problem is infeasible, meaning the system cannot achieve energy neutrality even at the minimum duty cycle. In this case, the system designer is responsible for increasing the environmental energy availability (e.g., by using larger solar panels). If R is positive, it means the system has excess energy that is not being used, and this may be allocated to increase the duty cycle beyond Dmin for some slots. Since our objective is to maximize the total system utility, the most efficient way to allocate the excess energy is to assign duty cycle Dmax to the slots with the highest W(i). So, the coefficients W(i) are arranged in decreasing order and duty cycle Dmax is assigned to the slots beginning with the largest coefficients until the excess energy available, R (recomputed by (14) in every iteration), is insufficient to assign Dmax to another slot. The remaining energy, RLast, is used to increase the duty cycle to some value between Dmin and Dmax in the slot with the next lower coefficient. Denoting this slot with index j, the duty cycle is given by: D(j) = Dmin + RLast/Pc if j ∈ S, and D(j) = Dmin + RLast/( Pc/η + Ps(j)·(1 − 1/η) ) if j ∈ D. The above solution to the optimization problem requires only simple arithmetic calculations and one sorting step, which can be easily implemented on an embedded platform, as opposed to implementing a general linear program solver.
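The allocation just described can be sketched in a few lines; the code below is an illustration under the paper's notation (with the per-slot energy cost written as the reciprocal of W(i)), and the predicted profile and parameters are hypothetical.

# Low-complexity allocation of Section 5.2: start every slot at Dmin, then
# spend the surplus R of equation (14) on the slots with the largest W(i),
# filling each to Dmax and giving the leftover R_Last to the next slot.
def greedy_duty_cycles(Ps_pred, Pc, eta, Dmin, Dmax):
    Nw = len(Ps_pred)
    # energy cost of one unit of duty cycle in slot i (reciprocal of W(i))
    cost = [Pc if Ps_pred[i] >= Pc else Pc / eta + Ps_pred[i] * (1.0 - 1.0 / eta)
            for i in range(Nw)]
    D = [Dmin] * Nw
    R = sum(Ps_pred) - sum(Dmin * cost[i] for i in range(Nw))   # equation (14)
    if R < 0:
        return None                      # infeasible even at Dmin
    for i in sorted(range(Nw), key=lambda i: cost[i]):          # largest W(i) first
        need = (Dmax - Dmin) * cost[i]   # energy needed to raise this slot to Dmax
        if R >= need:
            D[i] = Dmax
            R -= need
        else:
            D[i] = Dmin + R / cost[i]    # partial increase with the leftover R_Last
            break
    return D

Ps_pred = [0, 0, 10, 40, 60, 50, 20, 0]  # hypothetical predicted solar power, mW
print(greedy_duty_cycles(Ps_pred, Pc=25.0, eta=0.7, Dmin=0.3, Dmax=0.8))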
5.3. Slot-by-slot Continual Duty-cycle Adaptation

The observed energy values may vary greatly from the predicted ones, for example due to the effect of clouds or other sudden changes. It is therefore important to adapt the duty cycles calculated from the predicted values to the actual energy measurements in real time, to ensure energy neutrality. Denote by D(i), i = {1, ..., Nw}, the initial duty-cycle assignments for each time slot computed using the predicted energy values. First we compute the difference between the predicted power level Ps(i) and the actual power level observed, Ps'(i), in every slot i. Then the excess energy in slot i, denoted by X, can be obtained as follows:

\[ X = \begin{cases} P_s'(i) - P_s(i) & \text{if } P_s'(i) > P_c \\[1ex] \big[ P_s'(i) - P_s(i) \big] - D(i)\big[ P_s'(i) - P_s(i) \big]\Big(1 - \dfrac{1}{\eta}\Big) & \text{if } P_s'(i) \le P_c \end{cases} \]

The upper term accounts for the energy difference when the actual received energy is more than the power drawn by the load. On the other hand, if the energy received is less than Pc, we also need to account for the extra energy drawn from the battery by the load, which is a function of the duty cycle used in time slot i and the battery efficiency factor η. When more energy is received than predicted, X is positive and that excess energy is available for use in the subsequent slots, while if X is negative, that energy must be compensated for in subsequent slots.

CASE I: X < 0. In this case, we want to reduce the duty cycles used in the future slots in order to make up for this shortfall of energy. Since our objective function is to maximize the total system utility, we reduce the duty cycles of the time slots with the smallest normalized utility coefficients W(i). This is accomplished by first sorting the coefficients W(j), where j > i, in ascending order, and then iteratively reducing D(j) to Dmin until the total reduction in energy consumption equals X.

CASE II: X > 0. Here, we want to increase the duty cycles used in the future to utilize the excess energy received in the recent time slot. In contrast to Case I, the duty cycles of the future time slots with the highest utility coefficients W(i) should be increased first in order to maximize the total system utility. Suppose the duty cycle is changed by d in slot j. Define the corresponding energy cost R(j, d) as follows:

\[ R(j, d) = \begin{cases} d \cdot P_c & \text{if } P_s(j) > P_c \\[1ex] d \cdot \left( \dfrac{P_c}{\eta} + P_s(j)\Big(1 - \dfrac{1}{\eta}\Big) \right) & \text{if } P_s(j) \le P_c \end{cases} \]

The precise procedure to adapt the duty cycle to account for the above factors is presented in Algorithm 1. This calculation is performed at the end of every slot to set the duty cycle for the next slot.

ALGORITHM 1. Pseudocode for the duty-cycle adaptation algorithm.

Input:  D: initial duty cycles; X: excess energy due to the prediction error; P: predicted energy profile; i: index of the current time slot.
Output: D: updated duty cycles for one or more subsequent slots.

AdaptDutyCycle() -- at the end of each time slot do:
  if X > 0:
    Wsorted := W(1), ..., W(Nw) sorted in descending order;  Q := indices of Wsorted
    for k = 1 to |Q|:
      if Q(k) <= i or D(Q(k)) >= Dmax: continue        // slot already passed, or already at Dmax
      delta := Dmax - D(Q(k))
      if R(Q(k), delta) < X:                           // enough excess to raise this slot to Dmax
        D(Q(k)) := Dmax;  X := X - R(Q(k), delta)
      else:                                            // X insufficient to reach Dmax
        if P(Q(k)) > Pc:  D(Q(k)) := D(Q(k)) + X / Pc
        else:             D(Q(k)) := D(Q(k)) + X / ( Pc/η + P(Q(k))(1 - 1/η) )
        X := 0;  break
  if X < 0:
    Wsorted := W(1), ..., W(Nw) sorted in ascending order;  Q := indices of Wsorted
    for k = 1 to |Q|:
      if Q(k) <= i or D(Q(k)) <= Dmin: continue        // slot already passed, or already at Dmin
      delta := Dmin - D(Q(k))
      if R(Q(k), delta) > X:                           // reducing this slot to Dmin does not cover the whole deficit
        D(Q(k)) := Dmin;  X := X - R(Q(k), delta)
      else:                                            // the remaining deficit is absorbed within this slot
        if P(Q(k)) > Pc:  D(Q(k)) := D(Q(k)) + X / Pc
        else:             D(Q(k)) := D(Q(k)) + X / ( Pc/η + P(Q(k))(1 - 1/η) )
        X := 0;  break

We claim that our duty-cycling algorithm is energy neutral because a surplus of energy in the previous time slot always translates into additional energy opportunity for future time slots, and vice versa. The claim may be violated in cases of severe energy shortage, especially towards the end of a window: for example, a large deficit in the energy supply cannot be restored if there is no further energy input until the end of the window. In such a case, this offset is carried over to the next window so that long-term energy neutrality is still maintained.
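For completeness, here is a short Python sketch (ours, not the authors' code) that compresses the per-slot adaptation of Algorithm 1 above into a surplus/deficit redistribution pass; the helper names and the exact tie-breaking are our own choices, and it omits the bookkeeping an embedded implementation would need.

def _cost(ps, pc, eta):
    # Same per-slot energy cost as in Section 5.2.
    return pc if ps >= pc else pc / eta + ps * (1.0 - 1.0 / eta)

def excess_energy(ps_pred, ps_obs, duty, pc, eta):
    # X > 0 when more energy arrived than predicted, X < 0 on a shortfall.
    diff = ps_obs - ps_pred
    if ps_obs > pc:
        return diff
    return diff - duty * diff * (1.0 - 1.0 / eta)

def adapt_remaining(duty, ps_pred, i, x, pc, eta, d_min, d_max):
    # Spend a surplus on the cheapest (highest-W) future slots first; recover a
    # shortfall from the most expensive (lowest-W) future slots first.
    future = sorted(range(i + 1, len(duty)),
                    key=lambda j: _cost(ps_pred[j], pc, eta), reverse=(x < 0))
    for j in future:
        if x == 0.0:
            break
        cost = _cost(ps_pred[j], pc, eta)
        target = d_max if x > 0 else d_min
        step = cost * (target - duty[j])           # energy cost of the full move
        if abs(step) <= abs(x):
            duty[j] = target                       # full move, keep going
            x -= step
        else:
            duty[j] += x / cost                    # partial move exhausts x
            x = 0.0
    return duty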
6. EVALUATION

Our adaptive duty-cycling algorithm was evaluated using an actual solar energy profile measured with a sensor node called Heliomote, which is capable of harvesting solar energy [14]. This platform tracks not only the generated energy but also the energy flow into and out of the battery, providing an accurate estimate of the stored energy. The energy harvesting platform was deployed in a residential area in Los Angeles from the beginning of June through the middle of August, for a total of 72 days. The sensor node used is a Mica2 mote running at a fixed 40% duty cycle with an initially full battery. Battery voltage and net current from the solar panels are sampled with a period of 10 seconds. The energy generation profile for that duration, measured by tracking the output current from the solar cell, is shown in Figure 4 on both continuous and diurnal scales. We can observe that although the energy profile varies from day to day, it still exhibits a general pattern over several days.

Figure 4. Solar energy profile in mA (left: continuous, over the 72 days; right: diurnal, by hour of day).

6.1. Prediction Model

We first evaluate the performance of the prediction model, which is judged by the absolute error between the predicted and the actual energy profile. Figure 5 shows the average error of each time slot, in mA, over the entire 72 days. Generally, the amount of error is larger during the daytime, because that is when the weather can cause deviations in the received energy, while the predictions made for the night time are mostly correct.

Figure 5. Average predictor error (mA) versus time of day (hours).

6.2. Adaptive Duty-cycling Algorithm

Prior methods to optimize performance while achieving energy neutral operation using harvested energy are scarce. Instead, we compare the performance of our algorithm against two extremes: the theoretical optimum, calculated assuming complete knowledge about future energy availability, and a simple approach which attempts to achieve energy neutrality using a fixed duty cycle without accounting for battery inefficiency. The optimal duty cycles are calculated for each slot using the future knowledge of the energy actually received in that slot. For the simple approach, the duty cycle is kept constant within each day and is computed by taking the ratio of the predicted energy availability to the maximum usage,

\[ D = \eta \cdot \sum_{i \in \{1, \ldots, N_w\}} P_s(i) \,\Big/\, \big( N_w \cdot P_c \big) \]

which guarantees that the sensor node will never deplete its battery when running at this duty cycle.

We then compare the performance of our algorithm to the two extremes with varying battery efficiency. Figure 6 shows the results, using Dmax = 0.8 and Dmin = 0.3. The battery efficiency is varied from 0.5 to 1 on the x-axis, and the solar energy utilizations achieved by the three algorithms are shown on the y-axis; this is the fraction of the net received energy that is used to perform useful work rather than being lost to storage inefficiency. As can be seen from the figure, the battery efficiency factor has a great impact on the performance of the three different approaches. All three approaches converge to 100% utilization if we have a perfect battery (η = 1), that is, if no energy is lost by storing it in the battery. When battery inefficiency is taken into account, both the adaptive and the optimal approach achieve a much better solar energy utilization rate than the simple one. Additionally, the results also show that our adaptive duty-cycle algorithm performs extremely close to the optimal.
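As a reference point for the comparison, the baseline duty cycle and the utilization measure can be sketched as follows in Python. This is our own illustrative reading: clamping to [Dmin, Dmax] is omitted because the text does not state it, and the utilization accounting is an assumption rather than a formula given in the paper.

def baseline_duty_cycle(ps_pred, pc, eta):
    # Simple approach: D = eta * sum(Ps) / (Nw * Pc), a fixed day-long duty cycle
    # that never spends more than the (efficiency-derated) predicted input.
    return eta * sum(ps_pred) / (len(ps_pred) * pc)

def solar_utilization(duty, ps_obs, pc):
    # One plausible accounting of the metric: fraction of the received energy
    # that ends up powering the load rather than being lost in storage.
    received = sum(ps_obs)
    used = sum(pc * d for d in duty)
    return used / received if received > 0 else 0.0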
Figure 6. Solar energy utilization (%) achieved by the optimal, adaptive, and simple approaches with respect to the battery round-trip efficiency η.

We also compare the performance of our algorithm with different values of Dmin and Dmax for η = 0.7, which is typical of NiMH batteries. These results are shown in Table 1 as the percentage of energy saved by the optimal and adaptive approaches; this is the energy which would normally be wasted in the simple approach. The figures and the table indicate that our real-time algorithm is able to achieve a performance very close to the optimal feasible. In addition, these results show that environmental energy harvesting with appropriate power management can achieve much better utilization of the environmental energy.

Table 1. Energy saved by the adaptive and optimal approaches (relative to the simple approach).

  Dmax      0.8     0.8     0.8     0.5     0.9     1.0
  Dmin      0.05    0.1     0.3     0.2     0.2     0.2
  Adaptive  51.0%   48.2%   42.3%   29.4%   54.7%   58.7%
  Optimal   52.3%   49.6%   43.7%   36.7%   56.6%   60.8%

7. CONCLUSIONS

We discussed various issues in power management for systems powered using environmentally harvested energy. Specifically, we designed a method for optimizing performance subject to the constraint of energy neutral operation. We also derived a theoretically optimal bound on the performance and showed that our proposed algorithm operates very close to the optimal. The proposals were evaluated using real data collected with an energy harvesting sensor node deployed in an outdoor environment. Our method has significant advantages over currently used methods, which are based on a conservative estimate of the duty cycle and can only provide sub-optimal performance. However, this work is only the first step towards optimal solutions for energy neutral operation. It is designed for a specific power scaling method based on adapting the duty cycle. Several other power scaling methods, such as DVS, submodule power switching, and the use of multiple low power modes, are also available. It is thus of interest to extend our methods to exploit these advanced capabilities.

8. ACKNOWLEDGEMENTS

This research was funded in part through support provided by DARPA under the PAC/C program, the National Science Foundation (NSF) under award #0306408, and the UCLA Center for Embedded Networked Sensing (CENS). Any opinions, findings, conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of DARPA, NSF, or CENS.

REFERENCES
[1] R. Ramanathan and R. Hain. Topology control of multihop wireless networks using transmit power adjustment. In Proc. IEEE Infocom, vol. 2, pp. 404-413, March 26-30, 2000.
[2] T. A. Pering, T. D. Burd, and R. W. Brodersen. The simulation and evaluation of dynamic voltage scaling algorithms. In Proc. ACM ISLPED, pp. 76-81, 1998.
[3] L. Benini and G. De Micheli. Dynamic Power Management: Design Techniques and CAD Tools. Kluwer Academic Publishers, Norwell, MA, 1997.
[4] J. Kymissis, C. Kendall, J. Paradiso, and N. Gershenfeld. Parasitic power harvesting in shoes. In ISWC, pp. 132-139. IEEE Computer Society Press, October 1998.
[5] N. S. Shenck and J. A. Paradiso. Energy scavenging with shoe-mounted piezoelectrics. IEEE Micro, 21(3):30-42, May-June 2001.
[6] T. Starner. Human-powered wearable computing. IBM Systems Journal, 35(3-4), 1996.
[7] M. Rahimi, H. Shah, G. S. Sukhatme, J. Heidemann, and D. Estrin. Studying the feasibility of energy harvesting in a mobile sensor network. In ICRA, 2003.
[8] C. Melhuish. The EcoBot project. www.ias.uwe.ac.uk/energy autonomy/EcoBot web page.html.
[9] J. M. Rabaey, M. J. Ammer, J. L. da Silva Jr., D. Patel, and S. Roundy. PicoRadio supports ad hoc ultra-low power wireless networking. IEEE Computer, pp. 42-48, July 2000.
[10] J. A. Paradiso and M. Feldmeier. A compact, wireless, self-powered pushbutton controller. In ACM Ubicomp, pp. 299-304, Atlanta, GA, USA, September 2001. Springer-Verlag, Berlin Heidelberg.
[11] S. E. Wright, D. S. Scott, J. B. Haddow, and M. A. Rosen. The upper limit to solar energy conversion. Vol. 1, pp. 384-392, July 2000.
[12] DARPA energy harvesting projects. http://www.darpa.mil/dso/trans/energy/projects.html.
[13] W. Weber. Ambient intelligence: industrial research on a visionary concept. In Proceedings of the 2003 International Symposium on Low Power Electronics and Design, pp. 247-251. ACM Press, 2003.
[14] V. Raghunathan, A. Kansal, J. Hsu, J. Friedman, and M. B. Srivastava. Design considerations for solar energy harvesting wireless embedded systems. In IPSN/SPOTS, April 2005.
[15] X. Jiang, J. Polastre, and D. Culler. Perpetual environmentally powered sensor networks. In IPSN/SPOTS, April 25-27, 2005.
[16] C. Park, P. H. Chou, and M. Shinozuka. DuraNode: wireless networked sensor for structural health monitoring. To appear in Proceedings of the 4th IEEE International Conference on Sensors, Irvine, CA, Oct. 31 - Nov. 1, 2005.
[17] A. Kansal and M. B. Srivastava. An environmental energy harvesting framework for sensor networks. In International Symposium on Low Power Electronics and Design, pp. 481-486. ACM Press, 2003.
[18] T. Voigt, H. Ritter, and J. Schiller. Utilizing solar power in wireless sensor networks. In LCN, 2003.
[19] A. Kansal, J. Hsu, S. Zahedi, and M. B. Srivastava. Power management in energy harvesting sensor networks. Technical Report TR-UCLA-NESL-200603-02, Networked and Embedded Systems Laboratory, UCLA, March 2006.
I-37
A Framework for Agent-Based Distributed Machine Learning and Data Mining
This paper proposes a framework for agent-based distributed machine learning and data mining based on (i) the exchange of meta-level descriptions of individual learning processes among agents and (ii) online reasoning about learning success and learning progress by learning agents. We present an abstract architecture that enables agents to exchange models of their local learning processes and introduces a number of different methods for integrating these processes. This allows us to apply existing agent interaction mechanisms to distributed machine learning tasks, thus leveraging the powerful coordination methods available in agent-based computing, and enables agents to engage in meta-reasoning about their own learning decisions. We apply this architecture to a real-world distributed clustering application to illustrate how the conceptual framework can be used in practical systems in which different learners may be using different datasets, hypotheses and learning algorithms. We report on experimental results obtained using this system, review related work on the subject, and discuss potential future extensions to the framework.
[ "agent", "distribut machin learn", "machin learn", "data mine", "individu learn process", "meta-reason", "distribut cluster applic", "framework and architectur", "unsupervis cluster", "bayesian classifi", "consensusbas method", "commun and coordin", "autonom learn agent", "histor inform", "multiag learn" ]
[ "P", "P", "P", "P", "P", "P", "P", "R", "M", "U", "M", "M", "M", "U", "M" ]
A Framework for Agent-Based Distributed Machine Learning and Data Mining Jan Tozicka Gerstner Laboratory Czech Technical University Technick``a 2, Prague, 166 27 Czech Republic tozicka@labe.felk.cvut.cz Michael Rovatsos School of Informatics The University of Edinburgh Edinburgh EH8 9LE United Kingdom mrovatso@inf.ed.ac.uk Michal Pechoucek Gerstner Laboratory Czech Technical University Technick``a 2, Prague, 166 27 Czech Republic pechouc@labe.felk.cvut.cz ABSTRACT This paper proposes a framework for agent-based distributed machine learning and data mining based on (i) the exchange of meta-level descriptions of individual learning processes among agents and (ii) online reasoning about learning success and learning progress by learning agents. We present an abstract architecture that enables agents to exchange models of their local learning processes and introduces a number of different methods for integrating these processes. This allows us to apply existing agent interaction mechanisms to distributed machine learning tasks, thus leveraging the powerful coordination methods available in agent-based computing, and enables agents to engage in meta-reasoning about their own learning decisions. We apply this architecture to a real-world distributed clustering application to illustrate how the conceptual framework can be used in practical systems in which different learners may be using different datasets, hypotheses and learning algorithms. We report on experimental results obtained using this system, review related work on the subject, and discuss potential future extensions to the framework. General Terms Theory Categories and Subject Descriptors I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence-Multiagent Systems 1. INTRODUCTION In the areas of machine learning and data mining (cf. [14, 17] for overviews), it has long been recognised that parallelisation and distribution can be used to improve learning performance. Various techniques have been suggested in this respect, ranging from the low-level integration of independently derived learning hypotheses (e.g. combining different classifiers to make optimal classification decisions [4, 7], model averaging of Bayesian classifiers [8], or consensusbased methods for integrating different clusterings [11]), to the high-level combination of learning results obtained by heterogeneous learning agents using meta-learning (e.g. [3, 10, 21]). All of these approaches assume homogeneity of agent design (all agents apply the same learning algorithm) and/or agent objectives (all agents are trying to cooperatively solve a single, global learning problem). Therefore, the techniques they suggest are not applicable in societies of autonomous learners interacting in open systems. In such systems, learners (agents) may not be able to integrate their datasets or learning results (because of different data formats and representations, learning algorithms, or legal restrictions that prohibit such integration [11]) and cannot always be guaranteed to interact in a strictly cooperative fashion (discovered knowledge and collected data might be economic assets that should only be shared when this is deemed profitable; malicious agents might attempt to adversely influence others'' learning results, etc.). Examples for applications of this kind abound. Many distributed learning domains involve the use of sensitive data and prohibit the exchange of this data (e.g. 
exchange of patient data in distributed brain tumour diagnosis [2]) - however, they may permit the exchange of local learning hypotheses among different learners. In other areas, training data might be commercially valuable, so that agents would only make it available to others if those agents could provide something in return (e.g. in remote ship surveillance and tracking, where the different agencies involved are commercial service providers [1]). Furthermore, agents might have a vested interest in negatively affecting other agents'' learning performance. An example for this is that of fraudulent agents on eBay which may try to prevent reputationlearning agents from the construction of useful models for detecting fraud. Viewing learners as autonomous, self-directed agents is the only appropriate view one can take in modelling these distributed learning environments: the agent metaphor becomes a necessity as oppossed to preferences for scalability, dynamic data selection, interactivity [13], which can also be achieved through (non-agent) distribution and parallelisation in principle. Despite the autonomy and self-directedness of learning agents, many of these systems exhibit a sufficient overlap in terms of individual learning goals so that beneficial cooperation might be possible if a model for flexible interaction between autonomous learners was available that allowed agents to 1. exchange information about different aspects of their own learning mechanism at different levels of detail without being forced to reveal private information that should not be disclosed, 2. decide to what extent they want to share information about their own learning processes and utilise information provided by other learners, and 3. reason about how this information can best be used to improve their own learning performance. Our model is based on the simple idea that autonomous learners should maintain meta-descriptions of their own learning processes (see also [3]) in order to be able to exchange information and reason about them in a rational way (i.e. with the overall objective of improving their own learning results). Our hypothesis is a very simple one: If we can devise a sufficiently general, abstract view of describing learning processes, we will be able to utilise the whole range of methods for (i) rational reasoning and (ii) communication and coordination offered by agent technology so as to build effective autonomous learning agents. To test this hypothesis, we introduce such an abstract architecture (section 2) and implement a simple, concrete instance of it in a real-world domain (section 3). We report on empirical results obtained with this implemented system that demonstrate the viability of our approach (section 4). Finally, we review related work (section 5) and conclude with a summary, discussion of our approach and outlook to future work on the subject (section 6). 2. ABSTRACT ARCHITECTURE Our framework is based on providing formal (meta-level) descriptions of learning processes, i.e. representations of all relevant components of the learning machinery used by a learning agent, together with information about the state of the learning process. 
To ensure that this framework is sufficiently general, we consider the following general description of a learning problem: Given data D ⊆ 𝒟 taken from an instance space 𝒟, a hypothesis space ℋ and an (unknown) target function c ∈ ℋ, derive a function h ∈ ℋ that approximates c as well as possible according to some performance measure g : ℋ → Q, where Q is a set of possible levels of learning performance. (By requiring c ∈ ℋ we are ensuring that the learning problem can be solved in principle using the given hypothesis space.) This very broad definition includes a number of components of a learning problem for which more concrete specifications can be provided if we want to be more precise. For the cases of classification and clustering, for example, we can further specify the above as follows: Learning data can be described in both cases as 𝒟 = [A1] × · · · × [An], where [Ai] is the domain of the ith attribute and the set of attributes is A = {1, ... , n}. For the hypothesis space we obtain ℋ ⊆ {h | h : 𝒟 → {0, 1}} in the case of classification (i.e. a subset of the set of all possible classifiers, the nature of which depends on the expressivity of the learning algorithm used) and ℋ ⊆ {h | h : 𝒟 → ℕ, h is total with range {1, ... , k}} in the case of clustering (i.e. a subset of all sets of possible cluster assignments that map data points to a finite number of clusters numbered 1 to k). For classification, g might be defined in terms of the numbers of false negatives and false positives with respect to some validation set V ⊆ 𝒟, and clustering might use various measures of cluster validity to evaluate the quality of a current hypothesis, so that Q = ℝ in both cases (but other sets of learning quality levels can be imagined). Next, we introduce a notion of learning step, which imposes a uniform basic structure on all learning processes that are supposed to exchange information using our framework. For this, we assume that each learner is presented with a finite set of data D = ⟨d1, ... , dk⟩ in each step (this is an ordered set to express that the order in which the samples are used for training matters) and employs a training/update function f : ℋ × 𝒟* → ℋ which updates h given a series of samples d1, ... , dk. In other words, one learning step always consists of applying the update function to all samples in D exactly once. We define a learning step as a tuple l = ⟨D, H, f, g, h⟩ where we require that H ⊆ ℋ and h ∈ H. The intuition behind this definition is that each learning step completely describes one learning iteration as shown in Figure 1: in step t, the learner updates the current hypothesis ht−1 with data Dt, and evaluates the resulting new hypothesis ht according to the current performance measure gt. (Figure 1: A generic model of a learning step — the training function ft maps training set Dt and hypothesis ht−1 to the new hypothesis ht, which the performance measure gt maps to solution quality qt.) Such a learning step is equivalent to the following steps of computation: 1. train the algorithm on all samples in D (once), i.e. calculate ft(ht−1, Dt) = ht, 2. calculate the quality gt(ht) of the resulting hypothesis ht. We denote the set of all possible learning steps by L. For ease of notation, we denote the components of any l ∈ L by D(l), H(l), f(l), g(l) and h(l), respectively. The reason why such learning step specifications use a subset H of ℋ instead of ℋ itself is that learners often have explicit knowledge about which hypotheses are effectively ruled out by f given h in the future (if this is not the case, we can still set H = ℋ). A learning process is a finite, non-empty sequence l = l1 → l2 → ... → ln of learning steps such that ∀1 ≤ i < n : h(li+1) = f(li)(h(li), D(li)),
i.e. the only requirement the transition relation → ⊆ L × L makes is that the new hypothesis is the result of training the old hypothesis on all available sample data that belongs to the current step. We denote the set of all possible learning processes by 𝕃 (ignoring, for ease of notation, the fact that this set depends on ℋ, 𝒟 and the spaces of possible training and evaluation functions f and g). The performance trace associated with a learning process l is the sequence (q1, ... , qn) ∈ Q^n where qi = g(li)(h(li)), i.e. the sequence of quality values calculated by the performance measures of the individual learning steps on the respective hypotheses. Such specifications allow agents to provide a self-description of their learning process. However, in communication among learning agents, it is often useful to provide only partial information about one's internal learning process rather than its full details, e.g. when advertising this information in order to enter information exchange negotiations with others. For this purpose, we will assume that learners describe their internal state in terms of sets of learning processes (in the sense of disjunctive choice) which we call learning process descriptions (LPDs) rather than by giving precise descriptions about a single, concrete learning process. This allows us to describe properties of a learning process without specifying its details exhaustively. As an example, the set {l ∈ 𝕃 | ∀l′ = l[i]. |D(l′)| ≤ 100} describes all processes that have a training set of at most 100 samples (where all the other elements are arbitrary). Likewise, {l ∈ 𝕃 | ∀l′ = l[i]. D(l′) = {d}} is equivalent to just providing information about a single sample {d} and no other details about the process (this can be useful to model, for example, data received from the environment). Therefore, we use ℘(𝕃), that is the set of all LPDs, as the basis for designing content languages for communication in the protocols we specify below. In practice, the actual content language chosen will of course be more restricted and allow only for a special type of subsets of 𝕃 to be specified in a compact way, and its choice will be crucial for the interactions that can occur between learning agents. For our examples below, we simply assume explicit enumeration of all possible elements of the respective sets and function spaces (D, H, etc.) extended by the use of wildcard symbols ∗ (so that our second example above would become ({d}, ∗, ∗, ∗, ∗)). 2.1 Learning agents In our framework, a learning agent is essentially a meta-reasoning function that operates on information about learning processes and is situated in an environment co-inhabited by other learning agents. This means that it is not only capable of meta-level control on how to learn, but in doing so it can take information into account that is provided by other agents or the environment.
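To make the preceding definitions concrete, here is a small Python sketch of learning steps, learning processes and wildcard-based learning process descriptions. It is an illustrative rendering of the formalism only; all class and function names are our own and not part of the paper.

# Sketch of the learning-step / learning-process / LPD formalism (assumed names).
from dataclasses import dataclass
from typing import Any, Callable, List, Optional, Sequence

Hypothesis = Any
Sample = Any

@dataclass
class LearningStep:
    """One learning step l = (D, H, f, g, h) as defined above."""
    D: Sequence[Sample]                                      # ordered training data of this step
    H: Optional[set]                                         # restricted hypothesis space (None ~ unrestricted)
    f: Callable[[Hypothesis, Sequence[Sample]], Hypothesis]  # training/update function
    g: Callable[[Hypothesis], float]                         # performance measure
    h: Hypothesis                                            # hypothesis after this step

def extend(process: List[LearningStep], D, f, g) -> List[LearningStep]:
    """Append a new step whose hypothesis results from training the previous one on D."""
    h_new = f(process[-1].h, D)          # train the old hypothesis on the new data
    return process + [LearningStep(D, None, f, g, h_new)]

def performance_trace(process: List[LearningStep]) -> List[float]:
    """The sequence (q1, ..., qn) with qi = g(li)(h(li))."""
    return [step.g(step.h) for step in process]

def lpd_matches(pattern: tuple, step: LearningStep) -> bool:
    """Wildcard-based LPD matching: '*' matches any component of the step."""
    values = (step.D, step.H, step.f, step.g, step.h)
    return all(p == "*" or p == v for p, v in zip(pattern, values))

# e.g. ({'d'}, '*', '*', '*', '*') describes any step trained exactly on the sample set {'d'}.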
Although purely cooperative or hybrid cases are possible, for the purposes of this paper we will assume that agents are purely self-interested, and that while there may be a potential for cooperation considering how agents can mutually improve each others' learning performance, there is no global mechanism that can enforce such cooperative behaviour. (Note that our outlook is not only different from common, cooperative models of distributed machine learning and data mining, but also delineates our approach from multiagent learning systems in which agents learn about other agents [25], i.e. the learning goal itself is not affected by agents' behaviour in the environment.) Formally speaking, an agent's learning function is a function which, given a set of histories of previous learning processes (of oneself and potentially of learning processes about which other agents have provided information), outputs a learning step which is its next learning action. In the most general sense, our learning agent's internal learning process update can hence be viewed as a function λ : ℘(𝕃) → L × ℘(𝕃) which takes a set of learning histories of oneself and others as inputs and computes a new learning step to be executed while updating the set of known learning process histories (e.g. by appending the new learning action to one's own learning process and leaving all information about others' learning processes untouched). Note that in λ({l1, ... , ln}) = (l, {l′1, ... , l′n}) some elements li of the input learning process set may be descriptions of new learning data received from the environment. The λ-function can essentially be freely chosen by the agent as long as one requirement is met, namely that the learning data that is being used always stems from what has been previously observed. More formally, ∀{l1, ... , ln} ∈ ℘(𝕃): λ({l1, ... , ln}) = (l, {l′1, ... , l′n}) ⇒ (D(l) ∪ ⋃_{l′=l′i[j]} D(l′)) ⊆ ⋃_{l=li[j]} D(l), i.e. whatever λ outputs as a new learning step and updated set of learning histories, it cannot invent new data; it has to work with the samples that have been made available to it earlier in the process through the environment or from other agents (and it can of course re-train on previously used data). The goal of the agent is to output an optimal learning step in each iteration given the information that it has. One possibility of specifying this is to require that ∀{l1, ... , ln} ∈ ℘(𝕃): λ({l1, ... , ln}) = (l, {l′1, ... , l′n}) ⇒ l = arg max_{l′∈L} g(l′)(h(l′)), but since it will usually be unrealistic to compute the optimal next learning step in every situation, it is more useful to simply use g(l′)(h(l′)) as a running performance measure to evaluate how well the agent is performing. This is too abstract and unspecific for our purposes: While it describes what agents should do (transform the settings for the next learning step in an optimal way), it doesn't specify how this can be achieved in practice.
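The following Python sketch shows one (purely illustrative) way such a λ-function could be realised: it enforces the data-provenance requirement and simply re-trains greedily on all observed data. The greedy strategy, the dict-based step representation and the convention that histories[0] is the agent's own history are our own assumptions, not the paper's prescription.

# Toy λ : ℘(L) -> L x ℘(L); each learning step is kept as a dict {"D","f","g","h"}.
from itertools import chain

def observed_samples(histories):
    """Union of all training data D(l) occurring in the known learning histories."""
    return set(chain.from_iterable(step["D"] for process in histories for step in process))

def lambda_update(histories, f, g):
    """Return the next learning step and the updated set of histories.
    Strategy (assumed): greedily re-train the latest own hypothesis on everything seen."""
    own = list(histories[0])                           # convention: first history is our own
    allowed = observed_samples(histories)              # λ may not 'invent' unseen data
    D_next = sorted(allowed, key=repr)                 # deterministic training order
    h_next = f(own[-1]["h"], D_next)                   # update the current hypothesis
    step = {"D": D_next, "f": f, "g": g, "h": h_next}
    assert set(step["D"]) <= allowed                   # provenance requirement from the definition
    return step, [own + [step]] + [list(p) for p in histories[1:]]

# Example with a trivial 'learning algorithm' that just remembers all samples:
f = lambda h, D: set(h) | set(D)
g = lambda h: float(len(h))
step, updated = lambda_update([[{"D": ["d1"], "f": f, "g": g, "h": {"d1"}}],
                               [{"D": ["d2"], "f": f, "g": g, "h": {"d2"}}]], f, g)
print(step["h"], g(step["h"]))    # -> {'d1', 'd2'} 2.0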
2.2 Integrating learning process information To specify how an agent's learning process can be affected by integrating information received from others, we need to flesh out the details of how the learning steps it will perform can be modified using incoming information about learning processes described by other agents (this includes the acquisition of new learning data from the environment as a special case). In the most general case, we can specify this in terms of the potential modifications to the existing information about learning histories that can be performed using new information. For ease of presentation, we will assume that agents are stationary learning processes that can only record the previously executed learning step and only exchange information about this one individual learning step (our model can be easily extended to cater for more complex settings). Let lj = ⟨Dj, Hj, fj, gj, hj⟩ be the current state of agent j when receiving a learning process description li = ⟨Di, Hi, fi, gi, hi⟩ from agent i (for the time being, we assume that this is a specific learning step and not a more vague, disjunctive description of properties of the learning step of i). Considering all possible interactions at an abstract level, we basically obtain a matrix of possibilities for modifications of j's learning step specification as shown in Table 1. (Table 1: Matrix of integration functions for messages sent from learner i to j — rows correspond to the components Di, Hi, fi, gi, hi of i's description, columns to the components Dj, Hj, fj, gj, hj of j's learning step; each cell holds a family of integration functions, e.g. p^{D→D}_1(Di, Dj), ... , p^{D→D}_{k_{D→D}}(Di, Dj) in the (Di, Dj) cell and p^{g→h}_1(gi, hj), ... , p^{g→h}_{k_{g→h}}(gi, hj) in the (gi, hj) cell, while the gj column contains only n/a entries.) In this matrix, each entry specifies a family of integration functions p^{c→c′}_1, ... , p^{c→c′}_{k_{c→c′}}, where c, c′ ∈ {D, H, f, g, h}, which define how agent j's component c′j will be modified using the information ci provided about (the same or a different component of) i's learning step by applying p^{c→c′}_r(ci, c′j) for some r ∈ {1, ... , k_{c→c′}}. To put it more simply, the collections of p-functions an agent j uses specify how it will modify its own learning behaviour using information obtained from i. For the diagonal of this matrix, which contains the most common ways of integrating new information in one's own learning model, obvious ways of modifying one's own learning process include replacing c′j by ci or ignoring ci altogether. More complex/subtle forms of learning process integration include: • Modification of Dj: append Di to Dj; filter out all elements from Dj which also appear in Di; append Di to Dj discarding all elements with attributes outside ranges which affect gj, or those elements already correctly classified by hj; • Modification of Hj: use the union/intersection of Hi and Hj; alternatively, discard elements of Hj that are inconsistent with Dj in the process of intersection or union, or filter out elements that cannot be obtained using fj (unless fj is modified at the same time) • Modification of fj: modify parameters or background knowledge of fj using information about fi; assess their relevance by simulating previous learning steps on Dj using gj and discard those that do not help improve own performance • Modification of hj: combine hj with hi using (say) logical or mathematical operators; make the use of hi contingent on a pre-integration assessment of its quality using own data Dj and gj While this list does not include fully fledged, concrete integration operations for learning processes, it is indicative of the broad range of interactions between individual agents' learning processes that our framework enables. Note that the list does not include any modifications to gj.
This is because we do not allow modifications to the agent's own quality measure as this would render the model of rational (learning) action useless (if the quality measure is relative and volatile, we cannot objectively judge learning performance). Also note that some of the above examples require consulting other elements of lj than those appearing as arguments of the p-operations; we omit these for ease of notation, but emphasise that information-rich operations will involve consulting many different aspects of lj. Apart from operations along the diagonal of the matrix, more exotic integration operations are conceivable that combine information about different components. In theory we could fill most of the matrix with entries for them, but for lack of space we list only a few examples: • Modification of Dj using fi: pre-process samples using fi, e.g. to achieve intermediate representations that fj can be applied to • Modification of Dj using hi: filter out samples from Dj that are covered by hi and build hj using fj only on remaining samples • Modification of Hj using fi: filter out hypotheses from Hj that are not realisable using fi • Modification of hj using gi: if hj is composed of several sub-components, filter out those sub-components that do not perform well according to gi • ... Finally, many messages received from others describing properties of their learning processes will contain information about several elements of a learning step, giving rise to yet more complex operations that depend on which kinds of information are available.
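To illustrate what such integration operations might look like in code, here are two simple p-functions in Python: a D→D operation that appends the sender's samples while optionally discarding those the receiver already classifies correctly, and an h→h operation gated by a pre-integration quality check. Both are sketches under assumed names and data layouts, not operations prescribed by the paper.

# Illustrative integration operations ("p-functions") agent j might apply to
# information received from agent i (names and details are assumptions).
def p_data_append(D_i, D_j, h_j=None, correctly_classified=None):
    """p^{D->D}: append i's samples to j's data, optionally skipping samples that
    j's current hypothesis already classifies correctly."""
    merged = list(D_j)
    for d in D_i:
        if correctly_classified and h_j is not None and correctly_classified(h_j, d):
            continue                      # already handled by own hypothesis
        if d not in merged:
            merged.append(d)
    return merged

def p_hypothesis_combine(h_i, h_j, g_j):
    """p^{h->h}: adopt i's hypothesis only if a pre-integration assessment with
    j's own quality measure g_j says it helps; otherwise keep one's own."""
    return h_i if g_j(h_i) > g_j(h_j) else h_j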
Figure 2: Screenshot of our simulation system, displaying online vessel tracking data for the North Sea region. 3. APPLICATION EXAMPLE 3.1 Domain description As an illustration of our framework, we present an agent-based data mining system for clustering-based surveillance using AIS (Automatic Identification System [1]) data. In our application domain, different commercial and governmental agencies track the journeys of ships over time using AIS data which contains structured information automatically provided by ships equipped with shipborne mobile AIS stations to shore stations, other ships and aircraft. This data contains the ship's identity, type, position, course, speed, navigational status and other safety-related information. Figure 2 shows a screenshot of our simulation system. It is the task of AIS agencies to detect anomalous behaviour so as to alarm police/coastguard units to further investigate unusual, potentially suspicious behaviour. Such behaviour might include things such as deviation from the standard routes between the declared origin and destination of the journey, unexpected close encounters between different vessels on sea, or unusual patterns in the choice of destination over multiple journeys, taking the type of vessel and reported freight into account. While the reasons for such unusual behaviour may range from pure coincidence or technical problems to criminal activity (such as smuggling, piracy, terrorist/military attacks), it is obviously useful to pre-process the huge amount of vessel (tracking) data that is available before engaging in further analysis by human experts. To support this automated pre-processing task, software used by these agencies applies clustering methods in order to identify outliers and flag those as potentially suspicious entities to the human user. However, many agencies active in this domain are competing enterprises and use their (partially overlapping, but distinct) datasets and learning hypotheses (models) as assets and hence cannot be expected to collaborate in a fully cooperative way to improve overall learning results. Considering that this is the reality of the domain in the real world, it is easy to see that a framework like the one we have suggested above might be useful to tap the cooperation potential that current systems leave unexploited. 3.2 Agent-based distributed learning system design To describe a concrete design for the AIS domain, we need to specify the following elements of the overall system: 1. The datasets and clustering algorithms available to individual agents, 2. the interaction mechanism used for exchanging descriptions of learning processes, and 3. the decision mechanism agents apply to make learning decisions. Regarding 1., our agents are equipped with their own private datasets in the form of vessel descriptions. Learning samples are represented by tuples containing data about individual vessels in terms of attributes A = {1, ... , n} including things such as width, length, etc. with real-valued domains ([Ai] = ℝ for all i). In terms of learning algorithm, we consider clustering with a fixed number of k clusters using the k-means and k-medoids clustering algorithms [5] (fixed meaning that the learning algorithm will always output k clusters; however, we allow agents to change the value of k over different learning cycles). This means that the hypothesis space can be defined as H = {⟨c1, ... , ck⟩ | ci ∈ ℝ^|A|}, i.e. the set of all possible sets of k cluster centroids in |A|-dimensional Euclidean space. For each hypothesis h = ⟨c1, ... , ck⟩ and any data point d ∈ [A1] × · · · × [An], given domain [Ai] for the ith attribute of each sample, the assignment to clusters is given by C(⟨c1, ... , ck⟩, d) = arg min_{1≤j≤k} |d − cj|, i.e. d is assigned to that cluster whose centroid is closest to the data point in terms of Euclidean distance. For evaluation purposes, each dataset pertaining to a particular agent i is initially split into a training set Di and a validation set Vi. Then, we generate a set of fake vessels Fi such that |Fi| = |Vi|. These two sets are used to assess the agent's ability to detect suspicious vessels. For this, we assign a confidence value r(h, d) to every ship d: r(h, d) = 1 / |d − c_{C(h,d)}|, where C(h, d) is the index of the nearest centroid. Based on this measure, we classify any vessel in Fi ∪ Vi as fake if its r-value is below the median of all the confidences r(h, d) for d ∈ Fi ∪ Vi. With this, we can compute the quality gi(h) ∈ ℝ as the ratio between all correctly classified vessels and all vessels in Fi ∪ Vi.
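The following is a minimal Python sketch of the cluster assignment C(h, d), the confidence r(h, d) and the quality measure gi(h) just defined. The median-threshold rule follows the text; the function names, data layout and the zero-distance guard are assumptions of ours.

# Sketch of the model-quality machinery used in the AIS application (assumed names).
import math
from statistics import median

def assign_cluster(centroids, d):
    """C(h, d): index of the centroid closest to data point d (Euclidean distance)."""
    dists = [math.dist(d, c) for c in centroids]
    return dists.index(min(dists))

def confidence(centroids, d):
    """r(h, d) = 1 / |d - c_{C(h,d)}| (inverse distance to the nearest centroid)."""
    nearest = centroids[assign_cluster(centroids, d)]
    return 1.0 / (math.dist(d, nearest) or 1e-9)   # guard against zero distance (assumption)

def quality(centroids, validation, fakes):
    """g_i(h): fraction of vessels in V_i ∪ F_i classified correctly, where a vessel
    is flagged as fake iff its confidence is below the median confidence."""
    pool = [(d, False) for d in validation] + [(d, True) for d in fakes]
    confs = [confidence(centroids, d) for d, _ in pool]
    thresh = median(confs)
    correct = sum((c < thresh) == is_fake for (_, is_fake), c in zip(pool, confs))
    return correct / len(pool)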
As concerns 2., we use a simple Contract-Net Protocol (CNP) [20] based hypothesis trading mechanism: Before each learning iteration, agents issue (publicly broadcasted) Calls-For-Proposals (CfPs), advertising their own numerical model quality. In other words, the initiator of a CNP describes its own current learning state as (∗, ∗, ∗, gi(h), ∗) where h is their current hypothesis/model. We assume that agents are sincere when advertising their model quality, but note that this quality might be of limited relevance to other agents as they may specialise on specific regions of the data space not related to the test set of the sender of the CfP. Subsequently, (some) agents may issue bids in which they advertise, in turn, the quality of their own model. If the bids (if any) are accepted by the initiator of the protocol who issued the CfP, the agents exchange their hypotheses and the next learning iteration ensues. To describe what is necessary for 3., we have to specify (i) under which conditions agents submit bids in response to a CfP, (ii) when they accept bids in the CNP negotiation process, and (iii) how they integrate the received information in their own learning process. Concerning (i) and (ii), we employ a very simple rule that is identical in both cases: let g be one's own model quality and g′ that advertised by the CfP (or highest bid, respectively). If g′ > g we respond to the CfP (accept the bid); otherwise we respond to the CfP (accept the bid) with probability P(g′/g) and ignore (reject) it else. If two agents make a deal, they exchange their learning hypotheses (models). In our experiments, g and g′ are calculated by an additional agent that acts as a global validation mechanism for all agents (in a more realistic setting a comparison mechanism for different g functions would have to be provided). As for (iii), each agent uses a single model merging operator taken from the following two classes of operators (hj is the receiver's own model and hi is the provider's model): • p^{h→h}(hi, hj): - m-join: The m best clusters (in terms of coverage of Dj) from hypothesis hi are appended to hj. - m-select: The set of the m best clusters (in terms of coverage of Dj) from the union hi ∪ hj is chosen as a new model. (Unlike m-join this method does not prefer own clusters over others'.) • p^{h→D}(hi, Dj): - m-filter: The m best clusters (as above) from hi are identified and appended to a new model formed by applying j's own learning algorithm fj to those samples not covered by these clusters. Whenever m is large enough to encompass all clusters, we simply write join or filter for them. In section 4 we analyse the performance of each of these two classes for different choices of m. It is noteworthy that this agent-based distributed data mining system is one of the simplest conceivable instances of our abstract architecture. While we have previously applied it also to a more complex market-based architecture using Inductive Logic Programming learners in a transport logistics domain [22], we believe that the system described here is complex enough to illustrate the key design decisions involved in using our framework and provides simple example solutions for these design issues.
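As a rough sketch of these design choices in Python, the snippet below implements the bid/accept rule and the m-join and m-select operators. The paper does not fix the form of the acceptance probability P or how "coverage of Dj" is computed, so we assume P(x) = x and a simple radius-based coverage count; all names are illustrative.

# Sketch of the CNP decision rule and two model-merging operators (assumptions noted).
import math
import random

def respond_to_cfp(own_g, advertised_g, rng=random.random):
    """Bid/accept rule: always if the advertised quality is better, otherwise with a
    probability depending on the quality ratio (we assume P(x) = x for illustration)."""
    if advertised_g > own_g:
        return True
    return rng() < (advertised_g / own_g if own_g > 0 else 0.0)

def coverage(centroid, D_j, radius=1.0):
    """Assumed proxy for 'coverage of D_j': own samples within `radius` of the centroid."""
    return sum(math.dist(d, centroid) <= radius for d in D_j)

def m_join(h_j, h_i, D_j, m):
    """m-join: append the m best clusters of h_i (by coverage of D_j) to h_j."""
    best = sorted(h_i, key=lambda c: coverage(c, D_j), reverse=True)[:m]
    return list(h_j) + best

def m_select(h_j, h_i, D_j, m):
    """m-select: keep the m best clusters of h_i ∪ h_j, own and foreign alike."""
    pool = list(h_j) + list(h_i)
    return sorted(pool, key=lambda c: coverage(c, D_j), reverse=True)[:m]

An m-filter variant would additionally re-run the local clustering algorithm fj on the samples not covered by the selected clusters, as described in the operator list above.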
4. EXPERIMENTAL RESULTS Figure 3 shows results obtained from simulations with three learning agents in the above system using the k-means and k-medoids clustering methods respectively. We partition the total dataset of 300 ships into three disjoint sets of 100 samples each and assign each of these to one learning agent. The Single Agent baseline learns from the whole dataset. The parameter k is set to 10 as this is the optimal value for the total dataset according to the Davies-Bouldin index [9]. For m-select we assume m = k which achieves a constant model size. (Figure 3: Performance results obtained for different integration operations in homogeneous learner societies using the k-means (top) and k-medoids (bottom) methods.) For m-join and m-filter we assume m = 3 to limit the extent to which models increase over time. During each experiment the learning agents receive ship descriptions in batches of 10 samples. Between these batches, there is enough time to exchange the models among the agents and recompute the models if necessary. Each ship is described using width, length, draught and speed attributes with the goal of learning to detect which vessels have provided fake descriptions of their own properties. The validation set contains 100 real and 100 randomly generated fake ships. To generate sufficiently realistic properties for fake ships, their individual attribute values are taken from randomly selected ships in the validation set (so that each fake sample is a combination of attribute values of several existing ships). In these experiments, we are mainly interested in investigating whether a simple form of knowledge sharing between self-interested learning agents could improve agent performance compared to a setting of isolated learners. Thereby, we distinguish between homogeneous learner societies where all agents use the same clustering algorithm and heterogeneous ones where different agents use different algorithms. As can be seen from the performance plots in Figure 3 (homogeneous case) and Figure 4 (heterogeneous case, two agents use the same method and one agent uses the other), this is clearly the case for the (unrestricted) join and filter integration operations (m = k) in both cases. This is quite natural, as these operations amount to sharing all available model knowledge among agents (under appropriate constraints depending on how beneficial the exchange seems to the agents). We can see that the quality of these operations is very close to the Single Agent that has access to all training data. For the restricted (m < k) m-join, m-filter and m-select methods we can also observe an interesting distinction, namely that these perform similarly to the isolated learner case in homogeneous agent groups but better than isolated learners in more heterogeneous societies. (Figure 4: Performance results obtained for different integration operations in heterogeneous societies with the majority of learners using the k-means (top) and k-medoids (bottom) methods.) This suggests that heterogeneous learners are able to benefit even from rather limited knowledge sharing (and this is what using a rather small m = 3 amounts to given that k = 10) while this is not always true for homogeneous agents. This nicely illustrates how different learning or data mining algorithms can specialise on different parts of the problem space and then integrate their local results to achieve better individual performance. Apart from these obvious performance benefits, integrating partial learning results can also have other advantages: The m-filter operation, for example, decreases the number of learning samples and thus can speed up the learning process. The relative number of filtered examples measured in our experiments is shown in the following table:
              k-means    k-medoids
  filtering   30-40 %    10-20 %
  m-filtering 20-30 %    5-15 %
The overall conclusion we can draw from these initial experiments with our architecture is that since a very simplistic application of its principles has proven capable of improving the performance of individual learning agents, it is worthwhile investigating more complex forms of information exchange about learning processes among autonomous learners. 5.
RELATED WORK We have already mentioned work on distributed (nonagent) machine learning and data mining in the introductory chapter, so in this section we shall restrict ourselves to approaches that are more closely related to our outlook on distributed learning systems. Very often, approaches that are allegedly agent-based completely disregard agent autonomy and prescribe local decision-making procedures a priori. A typical example for this type of system is the one suggested by Caragea et al. [6] which is based on a distributed support-vector machine approach where agents incrementally join their datasets together according to a fixed distributed algorithm. A similar example is the work of Weiss [24], where groups of classifier agents learn to organise their activity so as to optimise global system behaviour. The difference between this kind of collaborative agentbased learning systems [16] and our own framework is that these approaches assume a joint learning goal that is pursued collaboratively by all agents. Many approaches rely heavily on a homogeneity assumption: Plaza and Ontanon [15] suggest methods for agentbased intelligent reuse of cases in case-based reasoning but is only applicable to societies of homogeneous learners (and coined towards a specific learning method). An agentbased method for integrating distributed cluster analysis processes using density estimation is presented by Klusch et al. [13] which is also specifically designed for a particular learning algorithm. The same is true of [22, 23] which both present market-based mechanisms for aggregating the output of multiple learning agents, even though these approaches consider more interesting interaction mechanisms among learners. A number of approaches for sharing learning data [18] have also been proposed: Grecu and Becker [12] suggest an exchange of learning samples among agents, and Ghosh et al. [11] is a step in the right direction in terms of revealing only partial information about one``s learning process as it deals with limited information sharing in distributed clustering. Papyrus [3] is a system that provides a markup language for meta-description of data, hypotheses and intermediate results and allows for an exchange of all this information among different nodes, however with a strictly cooperative goal of distributing the load for massively distributed data mining tasks. The MALE system [19] was a very early multiagent learning system in which agents used a blackboard approach to communicate their hypotheses. Agents were able to critique each others'' hypotheses until agreement was reached. However, all agents in this system were identical and the system was strictly cooperative. The ANIMALS system [10] was used to simulate multistrategy learning by combining two or more learning techniques (represented by heterogeneous agents) in order to overcome weaknesses in the individual algorithms, yet it was also a strictly cooperative system. As these examples show and to the best of our knowledge, there have been no previous attempts to provide a framework that can accommodate both independent and heterogeneous learning agents and this can be regarded as the main contribution of our work. 6. CONCLUSION In this paper, we outlined a generic, abstract framework for distributed machine learning and data mining. This framework constitutes, to our knowledge, the first attempt 684 The Sixth Intl.. Joint Conf. 
on Autonomous Agents and Multi-Agent Systems (AAMAS 07) to capture complex forms of interaction between heterogeneous and/or self-interested learners in an architecture that can be used as the foundation for implementing systems that use complex interaction and reasoning mechanisms to enable agents to inform and improve their learning abilities with information provided by other learners in the system, provided that all agents engage in a sufficiently similar learning activity. To illustrate that the abstract principles of our architecture can be turned into concrete, computational systems, we described a market-based distributed clustering system which was evaluated in the domain of vessel tracking for purposes of identifying deviant or suspicious behaviour. Although our experimental results only hint at the potential of using our architecture, they underline that what we are proposing is feasible in principle and can have beneficial effects even in its most simple instantiation. Yet there is a number of issues that we have not addressed in the presentation of the architecture and its empirical evaluation: Firstly, we have not considered the cost of communication and made the implicit assumption that the required communication comes for free. This is of course inadequate if we want to evaluate our method in terms of the total effort required for producing a certain quality of learning results. Secondly, we have not experimented with agents using completely different learning algorithms (e.g. symbolic and numerical). In systems composed of completely different agents the circumstances under which successful information exchange can be achieved might be very different from those described here, and much more complex communication and reasoning methods may be necessary to achieve a useful integration of different agents'' learning processes. Finally, more sophisticated evaluation criteria for such distributed learning architectures have to be developed to shed some light on what the right measures of optimality for autonomously reasoning and communicating agents should be. These issues, together with a more systematic and thorough investigation of advanced interaction and communication mechanisms for distributed, collaborating and competing agents will be the subject of our future work on the subject. Acknowledgement: We gratefully acknowledge the support of the presented research by Army Research Laboratory project N62558-03-0819 and Office for Naval Research project N00014-06-1-0232. 7. REFERENCES [1] http://www.aislive.com. [2] http://www.healthagents.com. [3] S. Bailey, R. Grossman, H. Sivakumar, and A. Turinsky. Papyrus: A System for Data Mining over Local and Wide Area Clusters and Super-Clusters. In Proc. of the Conference on Supercomputing. 1999. [4] E. Bauer and R. Kohavi. An Empirical Comparison of Voting Classification Algorithms: Bagging, Boosting, and Variants. Machine Learning, 36, 1999. [5] P. Berkhin. Survey of Clustering Data Mining Techniques, Technical Report, Accrue Software, 2002. [6] D. Caragea, A. Silvescu, and V. Honavar. Agents that Learn from Distributed Dynamic Data sources. In Proc. of the Workshop on Learning Agents, 2000. [7] N. Chawla and S. E. abd L. O. Hall. Creating ensembles of classifiers. In Proceedings of ICDM 2001, pages 580-581, San Jose, CA, USA, 2001. [8] D. Dash and G. F. Cooper. Model Averaging for Prediction with Discrete Bayesian Networks. Journal of Machine Learning Research, 5:1177-1203, 2004. [9] D. L. Davies and D. W. Bouldin. 
A Cluster Separation Measure. IEEE Transactions on Pattern Analysis and Machine Intelligence, 4:224-227, 1979. [10] P. Edwards and W. Davies. A Heterogeneous Multi-Agent Learning System. In Proceedings of the Special Interest Group on Cooperating Knowledge Based Systems, pages 163-184, 1993. [11] J. Ghosh, A. Strehl, and S. Merugu. A Consensus Framework for Integrating Distributed Clusterings Under Limited Knowledge Sharing. In NSF Workshop on Next Generation Data Mining, 99-108, 2002. [12] D. L. Grecu and L. A. Becker. Coactive Learning for Distributed Data Mining. In Proceedings of KDD-98, pages 209-213, New York, NY, August 1998. [13] M. Klusch, S. Lodi, and G. Moro. Agent-based distributed data mining: The KDEC scheme. In AgentLink, number 2586 in LNCS. Springer, 2003. [14] T. M. Mitchell. Machine Learning, pages 29-36. McGraw-Hill, New York, 1997. [15] S. Ontanon and E. Plaza. Recycling Data for Multi-Agent Learning. In Proc. of ICML-05, 2005. [16] L. Panait and S. Luke. Cooperative multi-agent learning: The state of the art. Autonomous Agents and Multi-Agent Systems, 11(3):387-434, 2005. [17] B. Park and H. Kargupta. Distributed Data Mining: Algorithms, Systems, and Applications. In N. Ye, editor, Data Mining Handbook, pages 341-358, 2002. [18] F. J. Provost and D. N. Hennessy. Scaling up: Distributed machine learning with cooperation. In Proc. of AAAI-96, pages 74-79. AAAI Press, 1996. [19] S. Sian. Extending learning to multiple agents: Issues and a model for multi-agent machine learning (ma-ml). In Y. Kodratoff, editor, Machine LearningEWSL-91, pages 440-456. Springer-Verlag, 1991. [20] R. Smith. The contract-net protocol: High-level communication and control in a distributed problem solver. IEEE Transactions on Computers, C-29(12):1104-1113, 1980. [21] S. J. Stolfo, A. L. Prodromidis, S. Tselepis, W. Lee, D. W. Fan, and P. K. Chan. Jam: Java Agents for Meta-Learning over Distributed Databases. In Proc. of the KDD-97, pages 74-81, USA, 1997. [22] J. Toˇziˇcka, M. Jakob, and M. Pˇechouˇcek. Market-Inspired Approach to Collaborative Learning. In Cooperative Information Agents X (CIA 2006), volume 4149 of LNCS, pages 213-227. Springer, 2006. [23] Y. Z. Wei, L. Moreau, and N. R. Jennings. Recommender systems: a market-based design. In Proceedings of AAMAS-03), pages 600-607, 2003. [24] G. Weiß. A Multiagent Perspective of Parallel and Distributed Machine Learning. In Proceedings of Agents``98, pages 226-230, 1998. [25] G. Weiss and P. Dillenbourg. What is ``multi'' in multi-agent learning? Collaborative-learning: Cognitive and Computational Approaches, 64-80, 1999. The Sixth Intl.. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 685
A Framework for Agent-Based Distributed Machine Learning and Data Mining ABSTRACT This paper proposes a framework for agent-based distributed machine learning and data mining based on (i) the exchange of meta-level descriptions of individual learning processes among agents and (ii) online reasoning about learning success and learning progress by learning agents. We present an abstract architecture that enables agents to exchange models of their local learning processes and introduces a number of different methods for integrating these processes. This allows us to apply existing agent interaction mechanisms to distributed machine learning tasks, thus leveraging the powerful coordination methods available in agent-based computing, and enables agents to engage in meta-reasoning about their own learning decisions. We apply this architecture to a real-world distributed clustering application to illustrate how the conceptual framework can be used in practical systems in which different learners may be using different datasets, hypotheses and learning algorithms. We report on experimental results obtained using this system, review related work on the subject, and discuss potential future extensions to the framework. 1. INTRODUCTION In the areas of machine learning and data mining (cf. [14, 17] for overviews), it has long been recognised that parallelisation and distribution can be used to improve learning performance. Various techniques have been suggested in this respect, ranging from the low-level integration of independently derived learning hypotheses (e.g. combining different classifiers to make optimal classification decisions [4, 7], model averaging of Bayesian classifiers [8], or consensusbased methods for integrating different clusterings [11]), to the high-level combination of learning results obtained by heterogeneous learning "agents" using meta-learning (e.g. [3, 10, 21]). All of these approaches assume homogeneity of agent design (all agents apply the same learning algorithm) and/or agent objectives (all agents are trying to cooperatively solve a single, global learning problem). Therefore, the techniques they suggest are not applicable in societies of autonomous learners interacting in open systems. In such systems, learners (agents) may not be able to integrate their datasets or learning results (because of different data formats and representations, learning algorithms, or legal restrictions that prohibit such integration [11]) and cannot always be guaranteed to interact in a strictly cooperative fashion (discovered knowledge and collected data might be economic assets that should only be shared when this is deemed profitable; malicious agents might attempt to adversely influence others' learning results, etc.). Examples for applications of this kind abound. Many distributed learning domains involve the use of sensitive data and prohibit the exchange of this data (e.g. exchange of patient data in distributed brain tumour diagnosis [2])--however, they may permit the exchange of local learning hypotheses among different learners. In other areas, training data might be commercially valuable, so that agents would only make it available to others if those agents could provide something in return (e.g. in remote ship surveillance and tracking, where the different agencies involved are commercial service providers [1]). Furthermore, agents might have a vested interest in negatively affecting other agents' learning performance. 
An example for this is that of fraudulent agents on eBay which may try to prevent reputationlearning agents from the construction of useful models for detecting fraud. Viewing learners as autonomous, self-directed agents is the only appropriate view one can take in modelling these distributed learning environments: the agent metaphor be comes a necessity as oppossed to preferences for scalability, dynamic data selection, "interactivity" [13], which can also be achieved through (non-agent) distribution and parallelisation in principle. Despite the autonomy and self-directedness of learning agents, many of these systems exhibit a sufficient overlap in terms of individual learning goals so that beneficial cooperation might be possible if a model for flexible interaction between autonomous learners was available that allowed agents to 1. exchange information about different aspects of their own learning mechanism at different levels of detail without being forced to reveal private information that should not be disclosed, 2. decide to what extent they want to share information about their own learning processes and utilise information provided by other learners, and 3. reason about how this information can best be used to improve their own learning performance. Our model is based on the simple idea that autonomous learners should maintain meta-descriptions of their own learning processes (see also [3]) in order to be able to exchange information and reason about them in a rational way (i.e. with the overall objective of improving their own learning results). Our hypothesis is a very simple one: If we can devise a sufficiently general, abstract view of describing learning processes, we will be able to utilise the whole range of methods for (i) rational reasoning and (ii) communication and coordination offered by agent technology so as to build effective autonomous learning agents. To test this hypothesis, we introduce such an abstract architecture (section 2) and implement a simple, concrete instance of it in a real-world domain (section 3). We report on empirical results obtained with this implemented system that demonstrate the viability of our approach (section 4). Finally, we review related work (section 5) and conclude with a summary, discussion of our approach and outlook to future work on the subject (section 6). 2. ABSTRACT ARCHITECTURE Our framework is based on providing formal (meta-level) descriptions of learning processes, i.e. representations of all relevant components of the learning machinery used by a learning agent, together with information about the state of the learning process. To ensure that this framework is sufficiently general, we consider the following general description of a learning problem: Given data D ⊆ D taken from an instance space D, a hypothesis space H and an (unknown) target function c ∈ H1, derive a function h ∈ H that approximates c as well as possible according to some performance measure g: H → Q where Q is a set of possible levels of learning performance. This very broad definition includes a number of components of a learning problem for which more concrete specifications can be provided if we want to be more precise. For the cases of classification and clustering, for example, we can further specify the above as follows: Learning data can be described in both cases as D = × n i = 1 [Ai] where [Ai] is the domain of the ith attribute and the set of attributes is A = {1,..., n}. For the hypothesis space we obtain in the case of classification (i.e. 
a subset of the set of all possible classifiers, the nature of which depends on the expressivity of the learning algorithm used) and in the case of clustering (i.e. a subset of all sets of possible cluster assignments that map data points to a finite number of clusters numbered 1 to k). For classification, g might be defined in terms of the numbers of false negatives and false positives with respect to some validation set V ⊆ 𝒟, and clustering might use various measures of cluster validity to evaluate the quality of a current hypothesis, so that Q = ℝ in both cases (but other sets of learning quality levels can be imagined). Next, we introduce a notion of learning step, which imposes a uniform basic structure on all learning processes that are supposed to exchange information using our framework. For this, we assume that each learner is presented with a finite set of data D = ⟨d1, ... , dk⟩ in each step (this is an ordered set to express that the order in which the samples are used for training matters) and employs a training/update function f : ℋ × 𝒟* → ℋ which updates h given a series of samples d1, ... , dk. In other words, one learning step always consists of applying the update function to all samples in D exactly once. We define a learning step as a tuple l = ⟨D, H, f, g, h⟩ where we require that H ⊆ ℋ and h ∈ H. The intuition behind this definition is that each learning step completely describes one learning iteration as shown in Figure 1: in step t, the learner updates the current hypothesis ht−1 with data Dt, and evaluates the resulting new hypothesis ht according to the current performance measure gt. (Figure 1: A generic model of a learning step.) Such a learning step is equivalent to the following steps of computation: 1. train the algorithm on all samples in D (once), i.e. calculate ft(ht−1, Dt) = ht, 2. calculate the quality gt(ht) of the resulting hypothesis ht. We denote the set of all possible learning steps by L. For ease of notation, we denote the components of any l ∈ L by D(l), H(l), f(l) and g(l), respectively. The reason why such learning step specifications use a subset H of ℋ instead of ℋ itself is that learners often have explicit knowledge about which hypotheses are effectively ruled out by f given h in the future (if this is not the case, we can still set H = ℋ). A learning process is a finite, non-empty sequence l = l1 → l2 → ... → ln of learning steps such that ∀1 ≤ i < n : h(li+1) = f(li)(h(li), D(li)), i.e. the only requirement the transition relation → ⊆ L × L makes is that the new hypothesis is the result of training the old hypothesis on all available sample data that belongs to the current step. We denote the set of all possible learning processes by 𝕃 (ignoring, for ease of notation, the fact that this set depends on ℋ, 𝒟 and the spaces of possible training and evaluation functions f and g). The performance trace associated with a learning process l is the sequence (q1, ... , qn) ∈ Q^n where qi = g(li)(h(li)), i.e. the sequence of quality values calculated by the performance measures of the individual learning steps on the respective hypotheses. Such specifications allow agents to provide a self-description of their learning process. However, in communication among learning agents, it is often useful to provide only partial information about one's internal learning process rather than its full details, e.g. when advertising this information in order to enter information exchange negotiations with others.
For this purpose, we will assume that learners describe their internal state in terms of sets of learning processes (in the sense of disjunctive choice) which we call learning process descriptions (LPDs) rather than by giving precise descriptions about a single, concrete learning process. This allows us to describe properties of a learning process without specifying its details exhaustively. As an example, the set {l ∈ 𝕃 | ∀l′ = l[i]. |D(l′)| ≤ 100} describes all processes that have a training set of at most 100 samples (where all the other elements are arbitrary). Likewise, {l ∈ 𝕃 | ∀l′ = l[i]. D(l′) = {d}} is equivalent to just providing information about a single sample {d} and no other details about the process (this can be useful to model, for example, data received from the environment). Therefore, we use ℘(𝕃), that is the set of all LPDs, as the basis for designing content languages for communication in the protocols we specify below. In practice, the actual content language chosen will of course be more restricted and allow only for a special type of subsets of 𝕃 to be specified in a compact way, and its choice will be crucial for the interactions that can occur between learning agents. For our examples below, we simply assume explicit enumeration of all possible elements of the respective sets and function spaces (D, H, etc.) extended by the use of wildcard symbols * (so that our second example above would become ({d}, *, *, *, *)). 2.1 Learning agents In our framework, a learning agent is essentially a meta-reasoning function that operates on information about learning processes and is situated in an environment co-inhabited by other learning agents. This means that it is not only capable of meta-level control on "how to learn", but in doing so it can take information into account that is provided by other agents or the environment. Although purely cooperative or hybrid cases are possible, for the purposes of this paper we will assume that agents are purely self-interested, and that while there may be a potential for cooperation considering how agents can mutually improve each others' learning performance, there is no global mechanism that can enforce such cooperative behaviour. (Note that our outlook is not only different from common, cooperative models of distributed machine learning and data mining, but also delineates our approach from multiagent learning systems in which agents learn about other agents [25], i.e. the learning goal itself is not affected by agents' behaviour in the environment.) Formally speaking, an agent's learning function is a function which, given a set of histories of previous learning processes (of oneself and potentially of learning processes about which other agents have provided information), outputs a learning step which is its next "learning action". In the most general sense, our learning agent's internal learning process update can hence be viewed as a function λ : ℘(𝕃) → L × ℘(𝕃) which takes a set of learning "histories" of oneself and others as inputs and computes a new learning step to be executed while updating the set of known learning process histories (e.g. by appending the new learning action to one's own learning process and leaving all information about others' learning processes untouched). Note that in λ({l1, ... , ln}) = (l, {l′1, ... , l′n}) some elements li of the input learning process set may be descriptions of new learning data received from the environment. The λ-function can essentially be freely chosen by the agent as long as one requirement is met, namely that the learning data that is being used always stems from what has been previously observed. More formally, ∀{l1, ... , ln} ∈ ℘(𝕃): λ({l1, ... , ln}) = (l, {l′1, ... , l′n}) ⇒ (D(l) ∪ ⋃_{l′=l′i[j]} D(l′)) ⊆ ⋃_{l=li[j]} D(l), i.e.
The goal of the agent is to output an optimal learning step in each iteration given the information that it has. One possibility of specifying this is to require that the agent always select a next learning step whose resulting quality g(l')(h(l')) is maximal among the learning steps admissible given its current information; but since it will usually be unrealistic to compute the optimal next learning step in every situation, it is more useful to simply use g(l')(h(l')) as a running performance measure to evaluate how well the agent is performing. This is too abstract and unspecific for our purposes: while it describes what agents should do (transform the settings for the next learning step in an optimal way), it doesn't specify how this can be achieved in practice.
² Note that our outlook is not only different from common, cooperative models of distributed machine learning and data mining, but also delineates our approach from multiagent learning systems in which agents learn about other agents [25]; i.e. the learning goal itself is not affected by agents' behaviour in the environment.
2.2 Integrating learning process information To specify how an agent's learning process can be affected by integrating information received from others, we need to flesh out the details of how the learning steps it will perform can be modified using incoming information about learning processes described by other agents (this includes the acquisition of new learning data from the environment as a special case). In the most general case, we can specify this in terms of the potential modifications to the existing information about learning histories that can be performed using new information. For ease of presentation, we will assume that agents are stationary learning processes that can only record the previously executed learning step and only exchange information about this one individual learning step (our model can easily be extended to cater for more complex settings). Let lj = (Dj, Hj, fj, gj, hj) be the current "state" of agent j when receiving a learning process description li = (Di, Hi, fi, gi, hi) from agent i (for the time being, we assume that this is a specific learning step and not a more vague, disjunctive description of properties of the learning step of i). Considering all possible interactions at an abstract level, we basically obtain a matrix of possibilities for modifications of j's learning step specification, as shown in Table 1.
Table 1: Matrix of integration functions for messages sent from learner i to j.
In this matrix, each entry specifies a family of integration functions p^{c→c'}_1, ..., p^{c→c'}_{k(c→c')}, where c, c' ∈ {D, H, f, g, h}; these define how agent j's component c'j will be modified using the information ci provided about (the same or a different) component of i's learning step, namely by applying p^{c→c'}_r(ci, c'j) for some r ∈ {1, ..., k(c→c')}. To put it more simply, the collection of p-functions an agent j uses specifies how it will modify its own learning behaviour using information obtained from i. For the diagonal of this matrix, which contains the most common ways of integrating new information into one's own learning model, obvious ways of modifying one's own learning process include replacing c'j by ci or ignoring ci altogether.
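As a minimal illustration of such diagonal operators for the data component (a sketch written by us, not part of the framework itself), the snippet below shows the trivial "replace c'j by ci" case next to a slightly richer operator that appends only previously unseen samples; the more subtle variants are discussed next.

    import java.util.ArrayList;
    import java.util.LinkedHashSet;
    import java.util.List;

    // Sketch of two diagonal p^{D->D} integration operators from Table 1 (names are ours).
    final class IntegrationOperators {

        // Replace own data with the sender's data (the trivial "replace c'j by ci" case).
        static <D> List<D> replaceData(List<D> received, List<D> own) {
            return new ArrayList<>(received);
        }

        // Append received samples to own data, discarding exact duplicates already present.
        static <D> List<D> appendNewSamples(List<D> received, List<D> own) {
            LinkedHashSet<D> merged = new LinkedHashSet<>(own);  // keeps insertion order
            merged.addAll(received);
            return new ArrayList<>(merged);
        }

        public static void main(String[] args) {
            List<String> own = List.of("v1", "v2"), received = List.of("v2", "v3");
            System.out.println(appendNewSamples(received, own)); // [v1, v2, v3]
        }
    }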
More complex/subtle forms of learning process integration include:
• Modification of Dj: append Di to Dj; filter out all elements from Dj which also appear in Di; or append Di to Dj while discarding all elements with attributes outside ranges which affect gj, or those elements already correctly classified by hj;
• Modification of Hj: use the union/intersection of Hi and Hj; alternatively, discard elements of Hj that are inconsistent with Dj in the process of intersection or union, or filter out elements that cannot be obtained using fj (unless fj is modified at the same time);
• Modification of fj: modify parameters or background knowledge of fj using information about fi; assess their relevance by simulating previous learning steps on Dj using gj and discard those that do not help improve own performance;
• Modification of hj: combine hj with hi using (say) logical or mathematical operators; make the use of hi contingent on a "pre-integration" assessment of its quality using own data Dj and gj.
While this list does not include fully fledged, concrete integration operations for learning processes, it is indicative of the broad range of interactions between individual agents' learning processes that our framework enables. Note that the list does not include any modifications to gj. This is because we do not allow modifications to the agent's own quality measure, as this would render the model of rational (learning) action useless (if the quality measure is relative and volatile, we cannot objectively judge learning performance). Also note that some of the above examples require consulting other elements of lj than those appearing as arguments of the p-operations; we omit these for ease of notation, but emphasise that information-rich operations will involve consulting many different aspects of lj. Apart from operations along the diagonal of the matrix, more "exotic" integration operations are conceivable that combine information about different components. In theory we could fill most of the matrix with entries for them, but for lack of space we list only a few examples:
• Modification of Dj using fi: pre-process the samples in Dj using fi, e.g. to obtain intermediate representations that fj can be applied to;
• Modification of Dj using hi: filter out samples from Dj that are covered by hi and build hj using fj only on the remaining samples;
• Modification of Hj using fi: filter out hypotheses from Hj that are not realisable using fi;
• Modification of hj using gi: if hj is composed of several sub-components, filter out those sub-components that do not perform well according to gi;
• ...
Finally, many messages received from others describing properties of their learning processes will contain information about several elements of a learning step, giving rise to yet more complex operations that depend on which kinds of information are available.
Figure 2: Screenshot of our simulation system, displaying online vessel tracking data for the North Sea region.
3. APPLICATION EXAMPLE 3.1 Domain description As an illustration of our framework, we present an agent-based data mining system for clustering-based surveillance using AIS (Automatic Identification System [1]) data.
In our application domain, different commercial and governmental agencies track the journeys of ships over time using AIS data, which contains structured information automatically provided by ships equipped with shipborne mobile AIS stations to shore stations, other ships and aircraft. This data contains the ship's identity, type, position, course, speed, navigational status and other safety-related information. Figure 2 shows a screenshot of our simulation system. It is the task of AIS agencies to detect anomalous behaviour so as to alert police/coastguard units to further investigate unusual, potentially suspicious behaviour. Such behaviour might include deviation from the standard routes between the declared origin and destination of a journey, unexpected "close encounters" between different vessels at sea, or unusual patterns in the choice of destination over multiple journeys, taking the type of vessel and reported freight into account. While the reasons for such unusual behaviour may range from pure coincidence or technical problems to criminal activity (such as smuggling, piracy, or terrorist/military attacks), it is obviously useful to pre-process the huge amount of vessel (tracking) data that is available before engaging in further analysis by human experts. To support this automated pre-processing task, software used by these agencies applies clustering methods in order to identify outliers and flag them as potentially suspicious entities to the human user. However, many agencies active in this domain are competing enterprises and use their (partially overlapping, but distinct) datasets and learning hypotheses (models) as assets, and hence cannot be expected to collaborate in a fully cooperative way to improve overall learning results. Considering that this is the reality of the domain, it is easy to see that a framework like the one we have suggested above might be useful to exploit the cooperation potential that is not exploited by current systems. 3.2 Agent-based distributed learning system design To describe a concrete design for the AIS domain, we need to specify the following elements of the overall system: 1. the datasets and clustering algorithms available to individual agents, 2. the interaction mechanism used for exchanging descriptions of learning processes, and 3. the decision mechanism agents apply to make learning decisions. Regarding 1., our agents are equipped with their own private datasets in the form of vessel descriptions. Learning samples are represented by tuples containing data about individual vessels in terms of attributes A = {1, ..., n}, including things such as width, length, etc., with real-valued domains (dom(Ai) = R for all i). In terms of learning algorithm, we consider clustering with a fixed number of k clusters using the k-means and k-medoids clustering algorithms [5] ("fixed" meaning that the learning algorithm will always output k clusters; however, we allow agents to change the value of k over different learning cycles). This means that the hypothesis space can be defined as H = {⟨c1, ..., ck⟩ | ci ∈ R^|A|}, i.e. the set of all possible sets of k cluster centroids in |A|-dimensional Euclidean space. For each hypothesis h = ⟨c1, ..., ck⟩ and any data point d ∈ dom(A1) × ... × dom(An), the assignment to clusters is determined by the distance |d − cC(h,d)|, where C(h, d) is the index of the nearest centroid, i.e. d is assigned to that cluster whose centroid is closest to the data point in terms of Euclidean distance. For evaluation purposes, each dataset pertaining to a particular agent i is initially split into a training set Di and a validation set Vi. Then, we generate a set of "fake" vessels Fi such that |Fi| = |Vi|. These two sets assess the agent's ability to detect "suspicious" vessels. For this, we assign a confidence value r(h, d) to every ship d, derived from the distance between d and its nearest cluster centroid. Based on this measure, we classify any vessel in Fi ∪ Vi as fake if its r-value is below the median of all the confidences r(h, d) for d ∈ Fi ∪ Vi. With this, we can compute the quality gi(h) ∈ R as the ratio between all correctly classified vessels and all vessels in Fi ∪ Vi. As concerns 2., we use a simple Contract-Net Protocol (CNP) [20] based "hypothesis trading" mechanism: before each learning iteration, agents issue (publicly broadcast) Calls-For-Proposals (CfPs), advertising their own numerical model quality. In other words, the "initiator" of a CNP describes its own current learning state as (∗, ∗, ∗, gi(h), ∗), where h is its current hypothesis/model. We assume that agents are sincere when advertising their model quality, but note that this quality might be of limited relevance to other agents, as they may specialise on specific regions of the data space not related to the test set of the sender of the CfP. Subsequently, (some) agents may issue bids in which they advertise, in turn, the quality of their own model. If the bids (if any) are accepted by the initiator of the protocol who issued the CfP, the agents exchange their hypotheses and the next learning iteration ensues. To describe what is necessary for 3., we have to specify (i) under which conditions agents submit bids in response to a CfP, (ii) when they accept bids in the CNP negotiation process, and (iii) how they integrate the received information into their own learning process. Concerning (i) and (ii), we employ a very simple rule that is identical in both cases: let g be one's own model quality and g' that advertised by the CfP (or the highest bid, respectively). If g' > g, we respond to the CfP (accept the bid); else we respond (accept) with probability P(g'/g) and ignore (reject) it otherwise. If two agents make a deal, they exchange their learning hypotheses (models). In our experiments, g and g' are calculated by an additional agent that acts as a global validation mechanism for all agents (in a more realistic setting, a comparison mechanism for different g functions would have to be provided). As for (iii), each agent uses a single model merging operator taken from the following two classes of operators (hj is the receiver's own model and hi is the provider's model):
• p^{h→h}(hi, hj):
-- m-join: the m best clusters (in terms of coverage of Dj) from hypothesis hi are appended to hj.
-- m-select: the set of the m best clusters (in terms of coverage of Dj) from the union hi ∪ hj is chosen as the new model. (Unlike m-join, this method does not prefer own clusters over others'.)
• p^{h→D}(hi, Dj):
-- m-filter: the m best clusters (as above) from hi are identified and appended to a new model formed by applying the agent's own learning algorithm fj to those samples not covered by these clusters.
Whenever m is large enough to encompass all clusters, we simply write join or filter for them.
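A minimal sketch of how the accept rule and one of the merge operators could look in code is given below (our own illustration, not the authors' implementation). We assume cluster centroids represented as double arrays, quality values in (0, 1], and read P(g'/g) simply as the ratio g'/g, which is one possible choice; m-join and m-filter can be built analogously from the same coverage measure.

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.IdentityHashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Random;

    // Sketch of the CNP bid/accept rule and the m-select merge operator.
    final class HypothesisTrading {
        static final Random RNG = new Random();

        // Accept a CfP/bid if the advertised quality is better; otherwise accept with probability g'/g.
        static boolean accept(double ownQuality, double advertisedQuality) {
            if (advertisedQuality > ownQuality) return true;
            return RNG.nextDouble() < advertisedQuality / ownQuality;
        }

        // m-select: choose the m clusters with the best coverage of own data from the union of both models.
        static List<double[]> mSelect(List<double[]> own, List<double[]> received,
                                      List<double[]> ownData, int m) {
            List<double[]> union = new ArrayList<>(own);
            union.addAll(received);
            Map<double[], Integer> cov = new IdentityHashMap<>();
            for (double[] c : union) cov.put(c, coverage(c, union, ownData));
            union.sort(Comparator.comparingInt(c -> -cov.get(c)));
            return new ArrayList<>(union.subList(0, Math.min(m, union.size())));
        }

        // Coverage of a centroid: number of own samples whose nearest centroid (within 'all') it is.
        static int coverage(double[] centroid, List<double[]> all, List<double[]> data) {
            int n = 0;
            for (double[] d : data)
                if (all.stream().min(Comparator.comparingDouble(c -> dist(c, d))).orElse(centroid) == centroid) n++;
            return n;
        }

        static double dist(double[] a, double[] b) {
            double s = 0;
            for (int i = 0; i < a.length; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
            return Math.sqrt(s);
        }
    }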
In section 4 we analyse the performance of each of these two classes for different choices of m. It is noteworthy that this agent-based distributed data mining system is one of the simplest conceivable instances of our abstract architecture. While we have previously applied it also to a more complex market-based architecture using Inductive Logic Programming learners in a transport logistics domain [22], we believe that the system described here is complex enough to illustrate the key design decisions involved in using our framework and provides simple example solutions for these design issues. 4. EXPERIMENTAL RESULTS Figure 3 shows results obtained from simulations with three learning agents in the above system using the k-means and k-medoids clustering methods respectively.
Figure 3: Performance results obtained for different integration operations in homogeneous learner societies using the k-means (top) and k-medoids (bottom) methods.
We partition the total dataset of 300 ships into three disjoint sets of 100 samples each and assign each of these to one learning agent. The Single Agent is learning from the whole dataset. The parameter k is set to 10 as this is the optimal value for the total dataset according to the Davies-Bouldin index [9]. For m-select we assume m = k, which achieves a constant model size. For m-join and m-filter we assume m = 3 to limit the extent to which models increase over time. During each experiment the learning agents receive ship descriptions in batches of 10 samples. Between these batches, there is enough time to exchange the models among the agents and recompute the models if necessary. Each ship is described using width, length, draught and speed attributes with the goal of learning to detect which vessels have provided fake descriptions of their own properties. The validation set contains 100 real and 100 randomly generated fake ships. To generate sufficiently realistic properties for fake ships, their individual attribute values are taken from randomly selected ships in the validation set (so that each fake sample is a combination of attribute values of several existing ships). In these experiments, we are mainly interested in investigating whether a simple form of knowledge sharing between self-interested learning agents could improve agent performance compared to a setting of isolated learners. Thereby, we distinguish between homogeneous learner societies where all agents use the same clustering algorithm and heterogeneous ones where different agents use different algorithms. As can be seen from the performance plots in Figure 3 (homogeneous case) and 4 (heterogeneous case, two agents use the same method and one agent uses the other), this is clearly the case for the (unrestricted) join and filter integration operations (m = k) in both cases. This is quite natural, as these operations amount to sharing all available model knowledge among agents (under appropriate constraints depending on how beneficial the exchange seems to the agents). We can see that the quality of these operations is very close to the Single Agent that has access to all training data.
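As an aside, the fake-vessel generation used for the validation sets described above can be sketched as follows (our own illustration; the attribute values are invented, the attribute order follows the paper: width, length, draught, speed). Each fake ship draws every attribute from a randomly chosen real ship, so fakes are combinations of values of existing vessels.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    // Sketch of fake-vessel generation: mix attribute values of randomly selected real ships.
    final class FakeShipGenerator {
        public static void main(String[] args) {
            Random rng = new Random(42);
            List<double[]> real = List.of(
                    new double[]{30, 200, 10, 14}, new double[]{12, 80, 5, 20}, new double[]{45, 320, 15, 12});
            List<double[]> fakes = new ArrayList<>();
            for (int i = 0; i < real.size(); i++) {          // |F| = |V|
                double[] fake = new double[4];
                for (int a = 0; a < 4; a++)                  // pick each attribute from a random real ship
                    fake[a] = real.get(rng.nextInt(real.size()))[a];
                fakes.add(fake);
            }
            System.out.println("generated " + fakes.size() + " fake vessels");
        }
    }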
Figure 4: Performance results obtained for different integration operations in heterogeneous societies with the majority of learners using the k-means (top) and k-medoids (bottom) methods.
For the restricted (m < k) m-join, m-filter and m-select methods we can also observe an interesting distinction, namely that these perform similarly to the isolated learner case in homogeneous agent groups but better than isolated learners in more heterogeneous societies. This suggests that heterogeneous learners are able to benefit even from rather limited knowledge sharing (and this is what using a rather small m = 3 amounts to given that k = 10), while this is not always true for homogeneous agents. This nicely illustrates how different learning or data mining algorithms can specialise on different parts of the problem space and then integrate their local results to achieve better individual performance. Apart from these obvious performance benefits, integrating partial learning results can also have other advantages: the m-filter operation, for example, decreases the number of learning samples and thus can speed up the learning process. The relative number of filtered examples measured in our experiments is shown in the following table:
                 k-means    k-medoids
    filtering    30-40%     10-20%
    m-filtering  20-30%     5-15%
The overall conclusion we can draw from these initial experiments with our architecture is that, since a very simplistic application of its principles has proven capable of improving the performance of individual learning agents, it is worthwhile investigating more complex forms of information exchange about learning processes among autonomous learners. 5. RELATED WORK We have already mentioned work on distributed (non-agent) machine learning and data mining in the introductory chapter, so in this section we shall restrict ourselves to approaches that are more closely related to our outlook on distributed learning systems. Very often, approaches that are allegedly agent-based completely disregard agent autonomy and prescribe local decision-making procedures a priori. A typical example of this type of system is the one suggested by Caragea et al. [6], which is based on a distributed support-vector machine approach where agents incrementally join their datasets together according to a fixed distributed algorithm. A similar example is the work of Weiss [24], where groups of classifier agents learn to organise their activity so as to optimise global system behaviour. The difference between this kind of collaborative agent-based learning system [16] and our own framework is that these approaches assume a joint learning goal that is pursued collaboratively by all agents. Many approaches rely heavily on a homogeneity assumption: Plaza and Ontanon [15] suggest methods for agent-based intelligent reuse of cases in case-based reasoning, but their approach is only applicable to societies of homogeneous learners (and is geared towards a specific learning method). An agent-based method for integrating distributed cluster analysis processes using density estimation is presented by Klusch et al. [13], which is also specifically designed for a particular learning algorithm. The same is true of [22, 23], which both present market-based mechanisms for aggregating the output of multiple learning agents, even though these approaches consider more interesting interaction mechanisms among learners.
A number of approaches for sharing learning data [18] have also been proposed: Grecu and Becker [12] suggest an exchange of learning samples among agents, and the work of Ghosh et al. [11] is a step in the right direction in terms of revealing only partial information about one's learning process, as it deals with limited information sharing in distributed clustering. Papyrus [3] is a system that provides a markup language for meta-description of data, hypotheses and intermediate results and allows for an exchange of all this information among different nodes, however with a strictly cooperative goal of distributing the load for massively distributed data mining tasks. The MALE system [19] was a very early multiagent learning system in which agents used a blackboard approach to communicate their hypotheses. Agents were able to critique each others' hypotheses until agreement was reached. However, all agents in this system were identical and the system was strictly cooperative. The ANIMALS system [10] was used to simulate multistrategy learning by combining two or more learning techniques (represented by heterogeneous agents) in order to overcome weaknesses in the individual algorithms, yet it was also a strictly cooperative system. As these examples show, and to the best of our knowledge, there have been no previous attempts to provide a framework that can accommodate both independent and heterogeneous learning agents, and this can be regarded as the main contribution of our work. 6. CONCLUSION In this paper, we outlined a generic, abstract framework for distributed machine learning and data mining. This framework constitutes, to our knowledge, the first attempt to capture complex forms of interaction between heterogeneous and/or self-interested learners in an architecture that can be used as the foundation for implementing systems that use complex interaction and reasoning mechanisms to enable agents to inform and improve their learning abilities with information provided by other learners in the system, provided that all agents engage in a sufficiently similar learning activity. To illustrate that the abstract principles of our architecture can be turned into concrete, computational systems, we described a market-based distributed clustering system which was evaluated in the domain of vessel tracking for purposes of identifying deviant or suspicious behaviour. Although our experimental results only hint at the potential of using our architecture, they underline that what we are proposing is feasible in principle and can have beneficial effects even in its most simple instantiation. Yet there are a number of issues that we have not addressed in the presentation of the architecture and its empirical evaluation: firstly, we have not considered the cost of communication and made the implicit assumption that the required communication "comes for free". This is of course inadequate if we want to evaluate our method in terms of the total effort required for producing a certain quality of learning results. Secondly, we have not experimented with agents using completely different learning algorithms (e.g. symbolic and numerical).
In systems composed of completely different agents, the circumstances under which successful information exchange can be achieved might be very different from those described here, and much more complex communication and reasoning methods may be necessary to achieve a useful integration of different agents' learning processes. Finally, more sophisticated evaluation criteria for such distributed learning architectures have to be developed to shed some light on what the right measures of optimality for autonomously reasoning and communicating agents should be. These issues, together with a more systematic and thorough investigation of advanced interaction and communication mechanisms for distributed, collaborating and competing agents, will be the subject of our future work. Acknowledgement: We gratefully acknowledge the support of the presented research by Army Research Laboratory project N62558-03-0819 and Office for Naval Research project N00014-06-1-0232.
A Framework for Agent-Based Distributed Machine Learning and Data Mining ABSTRACT This paper proposes a framework for agent-based distributed machine learning and data mining based on (i) the exchange of meta-level descriptions of individual learning processes among agents and (ii) online reasoning about learning success and learning progress by learning agents. We present an abstract architecture that enables agents to exchange models of their local learning processes and introduces a number of different methods for integrating these processes. This allows us to apply existing agent interaction mechanisms to distributed machine learning tasks, thus leveraging the powerful coordination methods available in agent-based computing, and enables agents to engage in meta-reasoning about their own learning decisions. We apply this architecture to a real-world distributed clustering application to illustrate how the conceptual framework can be used in practical systems in which different learners may be using different datasets, hypotheses and learning algorithms. We report on experimental results obtained using this system, review related work on the subject, and discuss potential future extensions to the framework. 1. INTRODUCTION In the areas of machine learning and data mining (cf. [14, 17] for overviews), it has long been recognised that parallelisation and distribution can be used to improve learning performance. Various techniques have been suggested in this respect, ranging from the low-level integration of independently derived learning hypotheses (e.g. combining different classifiers to make optimal classification decisions [4, 7], model averaging of Bayesian classifiers [8], or consensusbased methods for integrating different clusterings [11]), to the high-level combination of learning results obtained by heterogeneous learning "agents" using meta-learning (e.g. [3, 10, 21]). All of these approaches assume homogeneity of agent design (all agents apply the same learning algorithm) and/or agent objectives (all agents are trying to cooperatively solve a single, global learning problem). Therefore, the techniques they suggest are not applicable in societies of autonomous learners interacting in open systems. In such systems, learners (agents) may not be able to integrate their datasets or learning results (because of different data formats and representations, learning algorithms, or legal restrictions that prohibit such integration [11]) and cannot always be guaranteed to interact in a strictly cooperative fashion (discovered knowledge and collected data might be economic assets that should only be shared when this is deemed profitable; malicious agents might attempt to adversely influence others' learning results, etc.). Examples for applications of this kind abound. Many distributed learning domains involve the use of sensitive data and prohibit the exchange of this data (e.g. exchange of patient data in distributed brain tumour diagnosis [2])--however, they may permit the exchange of local learning hypotheses among different learners. In other areas, training data might be commercially valuable, so that agents would only make it available to others if those agents could provide something in return (e.g. in remote ship surveillance and tracking, where the different agencies involved are commercial service providers [1]). Furthermore, agents might have a vested interest in negatively affecting other agents' learning performance. 
An example for this is that of fraudulent agents on eBay which may try to prevent reputationlearning agents from the construction of useful models for detecting fraud. Viewing learners as autonomous, self-directed agents is the only appropriate view one can take in modelling these distributed learning environments: the agent metaphor be comes a necessity as oppossed to preferences for scalability, dynamic data selection, "interactivity" [13], which can also be achieved through (non-agent) distribution and parallelisation in principle. Despite the autonomy and self-directedness of learning agents, many of these systems exhibit a sufficient overlap in terms of individual learning goals so that beneficial cooperation might be possible if a model for flexible interaction between autonomous learners was available that allowed agents to 1. exchange information about different aspects of their own learning mechanism at different levels of detail without being forced to reveal private information that should not be disclosed, 2. decide to what extent they want to share information about their own learning processes and utilise information provided by other learners, and 3. reason about how this information can best be used to improve their own learning performance. Our model is based on the simple idea that autonomous learners should maintain meta-descriptions of their own learning processes (see also [3]) in order to be able to exchange information and reason about them in a rational way (i.e. with the overall objective of improving their own learning results). Our hypothesis is a very simple one: If we can devise a sufficiently general, abstract view of describing learning processes, we will be able to utilise the whole range of methods for (i) rational reasoning and (ii) communication and coordination offered by agent technology so as to build effective autonomous learning agents. To test this hypothesis, we introduce such an abstract architecture (section 2) and implement a simple, concrete instance of it in a real-world domain (section 3). We report on empirical results obtained with this implemented system that demonstrate the viability of our approach (section 4). Finally, we review related work (section 5) and conclude with a summary, discussion of our approach and outlook to future work on the subject (section 6). 2. ABSTRACT ARCHITECTURE 2.1 Learning agents 680 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 2.2 Integrating learning process information The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 681 3. APPLICATION EXAMPLE 3.1 Domain description 3.2 Agent-based distributed learning system design 682 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 4. EXPERIMENTAL RESULTS 5. RELATED WORK We have already mentioned work on distributed (nonagent) machine learning and data mining in the introductory chapter, so in this section we shall restrict ourselves to approaches that are more closely related to our outlook on distributed learning systems. Very often, approaches that are allegedly agent-based completely disregard agent autonomy and prescribe local decision-making procedures a priori. A typical example for this type of system is the one suggested by Caragea et al. [6] which is based on a distributed support-vector machine approach where agents incrementally join their datasets together according to a fixed distributed algorithm. 
A similar example is the work of Weiss [24], where groups of classifier agents learn to organise their activity so as to optimise global system behaviour. The difference between this kind of collaborative agentbased learning systems [16] and our own framework is that these approaches assume a joint learning goal that is pursued collaboratively by all agents. Many approaches rely heavily on a homogeneity assumption: Plaza and Ontanon [15] suggest methods for agentbased intelligent reuse of cases in case-based reasoning but is only applicable to societies of homogeneous learners (and coined towards a specific learning method). An agentbased method for integrating distributed cluster analysis processes using density estimation is presented by Klusch et al. [13] which is also specifically designed for a particular learning algorithm. The same is true of [22, 23] which both present market-based mechanisms for aggregating the output of multiple learning agents, even though these approaches consider more interesting interaction mechanisms among learners. A number of approaches for sharing learning data [18] have also been proposed: Grecu and Becker [12] suggest an exchange of learning samples among agents, and Ghosh et al. [11] is a step in the right direction in terms of revealing only partial information about one's learning process as it deals with limited information sharing in distributed clustering. Papyrus [3] is a system that provides a markup language for meta-description of data, hypotheses and intermediate results and allows for an exchange of all this information among different nodes, however with a strictly cooperative goal of distributing the load for massively distributed data mining tasks. The MALE system [19] was a very early multiagent learning system in which agents used a blackboard approach to communicate their hypotheses. Agents were able to critique each others' hypotheses until agreement was reached. However, all agents in this system were identical and the system was strictly cooperative. The ANIMALS system [10] was used to simulate multistrategy learning by combining two or more learning techniques (represented by heterogeneous agents) in order to overcome weaknesses in the individual algorithms, yet it was also a strictly cooperative system. As these examples show and to the best of our knowledge, there have been no previous attempts to provide a framework that can accommodate both independent and heterogeneous learning agents and this can be regarded as the main contribution of our work. 6. CONCLUSION In this paper, we outlined a generic, abstract framework for distributed machine learning and data mining. This framework constitutes, to our knowledge, the first attempt 684 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) to capture complex forms of interaction between heterogeneous and/or self-interested learners in an architecture that can be used as the foundation for implementing systems that use complex interaction and reasoning mechanisms to enable agents to inform and improve their learning abilities with information provided by other learners in the system, provided that all agents engage in a sufficiently similar learning activity. To illustrate that the abstract principles of our architecture can be turned into concrete, computational systems, we described a market-based distributed clustering system which was evaluated in the domain of vessel tracking for purposes of identifying deviant or suspicious behaviour. 
Although our experimental results only hint at the potential of using our architecture, they underline that what we are proposing is feasible in principle and can have beneficial effects even in its most simple instantiation. Yet there is a number of issues that we have not addressed in the presentation of the architecture and its empirical evaluation: Firstly, we have not considered the cost of communication and made the implicit assumption that the required communication "comes for free". This is of course inadequate if we want to evaluate our method in terms of the total effort required for producing a certain quality of learning results. Secondly, we have not experimented with agents using completely different learning algorithms (e.g. symbolic and numerical). In systems composed of completely different agents the circumstances under which successful information exchange can be achieved might be very different from those described here, and much more complex communication and reasoning methods may be necessary to achieve a useful integration of different agents' learning processes. Finally, more sophisticated evaluation criteria for such distributed learning architectures have to be developed to shed some light on what the right measures of optimality for autonomously reasoning and communicating agents should be. These issues, together with a more systematic and thorough investigation of advanced interaction and communication mechanisms for distributed, collaborating and competing agents will be the subject of our future work on the subject. Acknowledgement: We gratefully acknowledge the support of the presented research by Army Research Laboratory project N62558-03-0819 and Office for Naval Research project N00014-06-1-0232.
A Framework for Agent-Based Distributed Machine Learning and Data Mining ABSTRACT This paper proposes a framework for agent-based distributed machine learning and data mining based on (i) the exchange of meta-level descriptions of individual learning processes among agents and (ii) online reasoning about learning success and learning progress by learning agents. We present an abstract architecture that enables agents to exchange models of their local learning processes and introduces a number of different methods for integrating these processes. This allows us to apply existing agent interaction mechanisms to distributed machine learning tasks, thus leveraging the powerful coordination methods available in agent-based computing, and enables agents to engage in meta-reasoning about their own learning decisions. We apply this architecture to a real-world distributed clustering application to illustrate how the conceptual framework can be used in practical systems in which different learners may be using different datasets, hypotheses and learning algorithms. We report on experimental results obtained using this system, review related work on the subject, and discuss potential future extensions to the framework. 1. INTRODUCTION In the areas of machine learning and data mining (cf. [14, 17] for overviews), it has long been recognised that parallelisation and distribution can be used to improve learning performance. All of these approaches assume homogeneity of agent design (all agents apply the same learning algorithm) and/or agent objectives (all agents are trying to cooperatively solve a single, global learning problem). Therefore, the techniques they suggest are not applicable in societies of autonomous learners interacting in open systems. Examples for applications of this kind abound. Many distributed learning domains involve the use of sensitive data and prohibit the exchange of this data (e.g. exchange of patient data in distributed brain tumour diagnosis [2])--however, they may permit the exchange of local learning hypotheses among different learners. Furthermore, agents might have a vested interest in negatively affecting other agents' learning performance. An example for this is that of fraudulent agents on eBay which may try to prevent reputationlearning agents from the construction of useful models for detecting fraud. Viewing learners as autonomous, self-directed agents is the only appropriate view one can take in modelling these distributed learning environments: the agent metaphor be Despite the autonomy and self-directedness of learning agents, many of these systems exhibit a sufficient overlap in terms of individual learning goals so that beneficial cooperation might be possible if a model for flexible interaction between autonomous learners was available that allowed agents to 1. exchange information about different aspects of their own learning mechanism at different levels of detail without being forced to reveal private information that should not be disclosed, 2. decide to what extent they want to share information about their own learning processes and utilise information provided by other learners, and 3. reason about how this information can best be used to improve their own learning performance. Our model is based on the simple idea that autonomous learners should maintain meta-descriptions of their own learning processes (see also [3]) in order to be able to exchange information and reason about them in a rational way (i.e. 
with the overall objective of improving their own learning results). Our hypothesis is a very simple one: If we can devise a sufficiently general, abstract view of describing learning processes, we will be able to utilise the whole range of methods for (i) rational reasoning and (ii) communication and coordination offered by agent technology so as to build effective autonomous learning agents. To test this hypothesis, we introduce such an abstract architecture (section 2) and implement a simple, concrete instance of it in a real-world domain (section 3). We report on empirical results obtained with this implemented system that demonstrate the viability of our approach (section 4). Finally, we review related work (section 5) and conclude with a summary, discussion of our approach and outlook to future work on the subject (section 6). 5. RELATED WORK We have already mentioned work on distributed (nonagent) machine learning and data mining in the introductory chapter, so in this section we shall restrict ourselves to approaches that are more closely related to our outlook on distributed learning systems. Very often, approaches that are allegedly agent-based completely disregard agent autonomy and prescribe local decision-making procedures a priori. A typical example for this type of system is the one suggested by Caragea et al. [6] which is based on a distributed support-vector machine approach where agents incrementally join their datasets together according to a fixed distributed algorithm. A similar example is the work of Weiss [24], where groups of classifier agents learn to organise their activity so as to optimise global system behaviour. The difference between this kind of collaborative agentbased learning systems [16] and our own framework is that these approaches assume a joint learning goal that is pursued collaboratively by all agents. Many approaches rely heavily on a homogeneity assumption: Plaza and Ontanon [15] suggest methods for agentbased intelligent reuse of cases in case-based reasoning but is only applicable to societies of homogeneous learners (and coined towards a specific learning method). An agentbased method for integrating distributed cluster analysis processes using density estimation is presented by Klusch et al. [13] which is also specifically designed for a particular learning algorithm. The same is true of [22, 23] which both present market-based mechanisms for aggregating the output of multiple learning agents, even though these approaches consider more interesting interaction mechanisms among learners. Papyrus [3] is a system that provides a markup language for meta-description of data, hypotheses and intermediate results and allows for an exchange of all this information among different nodes, however with a strictly cooperative goal of distributing the load for massively distributed data mining tasks. The MALE system [19] was a very early multiagent learning system in which agents used a blackboard approach to communicate their hypotheses. Agents were able to critique each others' hypotheses until agreement was reached. However, all agents in this system were identical and the system was strictly cooperative. The ANIMALS system [10] was used to simulate multistrategy learning by combining two or more learning techniques (represented by heterogeneous agents) in order to overcome weaknesses in the individual algorithms, yet it was also a strictly cooperative system. 
As these examples show and to the best of our knowledge, there have been no previous attempts to provide a framework that can accommodate both independent and heterogeneous learning agents and this can be regarded as the main contribution of our work. 6. CONCLUSION In this paper, we outlined a generic, abstract framework for distributed machine learning and data mining. This framework constitutes, to our knowledge, the first attempt 684 The Sixth Intl. . Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) to capture complex forms of interaction between heterogeneous and/or self-interested learners in an architecture that can be used as the foundation for implementing systems that use complex interaction and reasoning mechanisms to enable agents to inform and improve their learning abilities with information provided by other learners in the system, provided that all agents engage in a sufficiently similar learning activity. To illustrate that the abstract principles of our architecture can be turned into concrete, computational systems, we described a market-based distributed clustering system which was evaluated in the domain of vessel tracking for purposes of identifying deviant or suspicious behaviour. This is of course inadequate if we want to evaluate our method in terms of the total effort required for producing a certain quality of learning results. Secondly, we have not experimented with agents using completely different learning algorithms (e.g. symbolic and numerical). In systems composed of completely different agents the circumstances under which successful information exchange can be achieved might be very different from those described here, and much more complex communication and reasoning methods may be necessary to achieve a useful integration of different agents' learning processes. Finally, more sophisticated evaluation criteria for such distributed learning architectures have to be developed to shed some light on what the right measures of optimality for autonomously reasoning and communicating agents should be. These issues, together with a more systematic and thorough investigation of advanced interaction and communication mechanisms for distributed, collaborating and competing agents will be the subject of our future work on the subject.
C-80
Consistency-preserving Caching of Dynamic Database Content
With the growing use of dynamic web content generated from relational databases, traditional caching solutions for throughput and latency improvements are ineffective. We describe a middleware layer called Ganesh that reduces the volume of data transmitted without semantic interpretation of queries or results. It achieves this reduction through the use of cryptographic hashing to detect similarities with previous results. These benefits do not require any compromise of the strict consistency semantics provided by the back-end database. Further, Ganesh does not require modifications to applications, web servers, or database servers, and works with closed-source applications and databases. Using two benchmarks representative of dynamic web sites, measurements of our prototype show that it can increase end-to-end throughput by as much as twofold for non-data intensive applications and by as much as tenfold for data intensive ones.
[ "databas content", "relat databas", "tempor local", "cach dynam databas content", "hash-base techniqu", "redund", "natur chunk boundari", "proxi", "jdbc driver", "resultset object", "bboard benchmark", "read-write oper", "reciperesultset", "content address storag", "relat databas system", "databas cach", "wide area network", "bandwidth optim" ]
[ "P", "P", "U", "R", "U", "U", "U", "U", "U", "U", "M", "U", "U", "M", "M", "R", "U", "U" ]
Consistency-preserving Caching of Dynamic Database Content∗ Niraj Tolia and M. Satyanarayanan Carnegie Mellon University {ntolia,satya}@cs. cmu.edu ABSTRACT With the growing use of dynamic web content generated from relational databases, traditional caching solutions for throughput and latency improvements are ineffective. We describe a middleware layer called Ganesh that reduces the volume of data transmitted without semantic interpretation of queries or results. It achieves this reduction through the use of cryptographic hashing to detect similarities with previous results. These benefits do not require any compromise of the strict consistency semantics provided by the back-end database. Further, Ganesh does not require modifications to applications, web servers, or database servers, and works with closed-source applications and databases. Using two benchmarks representative of dynamic web sites, measurements of our prototype show that it can increase end-to-end throughput by as much as twofold for non-data intensive applications and by as much as tenfold for data intensive ones. Categories and Subject Descriptors C.2.4 [Computer-Communication Networks]: Distributed Systems; H.2.4 [Database Management]: Systems General Terms Design, Performance 1. INTRODUCTION An increasing fraction of web content is dynamically generated from back-end relational databases. Even when database content remains unchanged, temporal locality of access cannot be exploited because dynamic content is not cacheable by web browsers or by intermediate caching servers such as Akamai mirrors. In a multitiered architecture, each web request can stress the WAN link between the web server and the database. This causes user experience to be highly variable because there is no caching to insulate the client from bursty loads. Previous attempts in caching dynamic database content have generally weakened transactional semantics [3, 4] or required application modifications [15, 34]. We report on a new solution that takes the form of a databaseagnostic middleware layer called Ganesh. Ganesh makes no effort to semantically interpret the contents of queries or their results. Instead, it relies exclusively on cryptographic hashing to detect similarities with previous results. Hash-based similarity detection has seen increasing use in distributed file systems [26, 36, 37] for improving performance on low-bandwidth networks. However, these techniques have not been used for relational databases. Unlike previous approaches that use generic methods to detect similarity, Ganesh exploits the structure of relational database results to yield superior performance improvement. One faces at least three challenges in applying hash-based similarity detection to back-end databases. First, previous work in this space has traditionally viewed storage content as uninterpreted bags of bits with no internal structure. This allows hash-based techniques to operate on long, contiguous runs of data for maximum effectiveness. In contrast, relational databases have rich internal structure that may not be as amenable to hash-based similarity detection. Second, relational databases have very tight integrity and consistency constraints that must not be compromised by the use of hash-based techniques. Third, the source code of commercial databases is typically not available. This is in contrast to previous work which presumed availability of source code. 
Our experiments show that Ganesh, while conceptually simple, can improve performance significantly at bandwidths representative of today's commercial Internet. On benchmarks modeling multitiered web applications, the throughput improvement was as high as tenfold for data-intensive workloads. For workloads that were not data-intensive, throughput improvements of up to twofold were observed. Even when bandwidth was not a constraint, Ganesh had low overhead and did not hurt performance. Our experiments also confirm that exploiting the structure present in database results is crucial to this performance improvement. 2. BACKGROUND 2.1 Dynamic Content Generation As the World Wide Web has grown, many web sites have decentralized their data and functionality by pushing them to the edges of the Internet. Today, eBusiness systems often use a three-tiered architecture consisting of a front-end web server, an application server, and a back-end database server. Figure 1 illustrates this architecture.
Figure 1: Multi-Tier Architecture
The first two tiers can be replicated close to a concentration of clients at the edge of the Internet. This improves user experience by lowering end-to-end latency and reducing exposure to backbone traffic congestion. It can also increase the availability and scalability of web services. Content that is generated dynamically from the back-end database cannot be cached in the first two tiers. While databases can be easily replicated in a LAN, this is infeasible in a WAN because of the difficult task of simultaneously providing strong consistency, availability, and tolerance to network partitions [7]. As a result, databases tend to be centralized to meet the strong consistency requirements of many eBusiness applications such as banking, finance, and online retailing [38]. Thus, the back-end database is usually located far from many sets of first and second-tier nodes [2]. In the absence of both caching and replication, WAN bandwidth can easily become a limiting factor in the performance and scalability of data-intensive applications. 2.2 Hash-Based Systems Ganesh's focus is on efficient transmission of results by discovering similarities with the results of previous queries. As SQL queries can generate large results, hash-based techniques lend themselves well to the problem of efficiently transferring these large results across bandwidth constrained links. The use of hash-based techniques to reduce the volume of data transmitted has emerged as a common theme of many recent storage systems, as discussed in Section 8.2. These techniques rely on some basic assumptions. Cryptographic hash functions are assumed to be collision-resistant. In other words, it is computationally intractable to find two inputs that hash to the same output. The functions are also assumed to be one-way; that is, finding an input that results in a specific output is computationally infeasible. Menezes et al. [23] provide more details about these assumptions. The above assumptions allow hash-based systems to assume that collisions do not occur. Hence, they are able to treat the hash of a data item as its unique identifier. A collection of data items effectively becomes content-addressable, allowing a small hash to serve as a codeword for a much larger data item in permanent storage or network transmission.
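As a concrete illustration of this content addressability (our own sketch, not Ganesh source code), the snippet below uses the standard java.security.MessageDigest API to derive a SHA-1 digest for a result fragment and then uses that digest as its storage key; the example row contents are invented.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.HashMap;
    import java.util.Map;

    // Sketch of hash-based content addressing: a SHA-1 digest identifies a result fragment.
    final class ContentAddress {
        static String sha1Hex(byte[] data) throws Exception {
            byte[] digest = MessageDigest.getInstance("SHA-1").digest(data);
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) hex.append(String.format("%02x", b));
            return hex.toString();
        }

        public static void main(String[] args) throws Exception {
            Map<String, byte[]> store = new HashMap<>();           // content-addressable fragment store
            byte[] row = "42|ACME Widget|19.99".getBytes(StandardCharsets.UTF_8);
            String key = sha1Hex(row);                             // the 20-byte digest stands in for the row
            store.put(key, row);
            System.out.println(key + " -> " + new String(store.get(key), StandardCharsets.UTF_8));
        }
    }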
The assumption that collisions are so rare as to be effectively non-existent has recently come under fire [17]. However, as explained by Black [5], we believe that these issues do not form a concern for Ganesh. All communication is between trusted parts of the system and an adversary has no way to force Ganesh to accept invalid data. Further, Ganesh does not depend critically on any specific hash function. While we currently use SHA-1, replacing it with a different hash function would be simple. There would be no impact on performance as stronger hash functions (e.g. SHA256) only add a few extra bytes and the generated hashes are still orders of magnitude smaller than the data items they represent. No re-hashing of permanent storage is required since Ganesh only uses hashing on volatile data. 3. DESIGN AND IMPLEMENTATION Ganesh exploits redundancy in the result stream to avoid transmitting result fragments that are already present at the query site. Redundancy can arise naturally in many different ways. For example, a query repeated after a certain interval may return a different result because of updates to the database; however, there may be significant commonality between the two results. As another example, a user who is refining a search may generate a sequence of queries with overlapping results. When Ganesh detects redundancy, it suppresses transmission of the corresponding result fragments. Instead, it transmits a much smaller digest of those fragments and lets the query site reconstruct the result through hash lookup in a cache of previous results. In effect, Ganesh uses computation at the edges to reduce Internet communication. Our description of Ganesh focuses on four aspects. We first explain our approach to detecting similarity in query results. Next, we discuss how the Ganesh architecture is completely invisible to all components of a multi-tier system. We then describe Ganesh``s proxy-based approach and the dataflow for detecting similarity. 3.1 Detecting Similarity One of the key design decisions in Ganesh is how similarity is detected. There are many potential ways to decompose a result into fragments. The optimal way is, of course, the one that results in the smallest possible object for transmission for a given query``s results. Finding this optimal decomposition is a difficult problem because of the large space of possibilities and because the optimal choice depends on many factors such as the contents of the query``s result, the history of recent results, and the cache management algorithm. When an object is opaque, the use of Rabin fingerprints [8, 30] to detect common data between two objects has been successfully shown in the past by systems such as LBFS [26] and CASPER [37]. Rabin fingerprinting uses a sliding window over the data to compute a rolling hash. Assuming that the hash function is uniformly distributed, a chunk boundary is defined whenever the lower order bits of the hash value equal some predetermined value. The number of lower order bits used defines the average chunk size. These sub-divided chunks of the object become the unit of comparison for detecting similarity between different objects. As the locations of boundaries found by using Rabin fingerprints is stochastically determined, they usually fail to align with any structural properties of the underlying data. The algorithm therefore deals well with in-place updates, insertions and deletions. However, it performs poorly in the presence of any reordering of data. 
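The following sketch illustrates the idea of content-defined chunking with a sliding-window rolling hash. It is our simplification, using a naive polynomial hash in place of a true Rabin fingerprint, and the window size and mask are arbitrary choices; it is only meant to show how boundaries are derived from the data itself rather than from fixed offsets.

    import java.util.ArrayList;
    import java.util.List;

    // Simplified sketch of content-defined chunking: a rolling hash over a sliding window
    // declares a chunk boundary whenever its low-order bits are zero.
    final class RollingChunker {
        static final int WINDOW = 48;
        static final int MASK = (1 << 13) - 1;                     // expected chunk size around 8 KB

        static List<Integer> boundaries(byte[] data) {
            long removeFactor = 1;
            for (int i = 0; i < WINDOW; i++) removeFactor *= 31;   // 31^WINDOW (mod 2^64)
            List<Integer> cuts = new ArrayList<>();
            long hash = 0;
            for (int i = 0; i < data.length; i++) {
                hash = hash * 31 + (data[i] & 0xff);               // add the newest byte
                if (i >= WINDOW) {
                    hash -= removeFactor * (data[i - WINDOW] & 0xff);  // drop the byte leaving the window
                    if ((hash & MASK) == 0) cuts.add(i + 1);           // boundary: low-order bits match
                }
            }
            cuts.add(data.length);                                 // last chunk ends at the end of the data
            return cuts;
        }
    }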
Figure 2 shows an example where two results, A and B, consisting of three rows, have the same data but have different sort attributes. In the extreme case, Rabin fingerprinting might be unable to find any similar data due to the way it detects chunk boundaries.

[Figure 2: Rabin Fingerprinting vs. Ganesh's Chunking]

Fortunately, Ganesh can use domain specific knowledge for more precise boundary detection. The information we exploit is that a query's result reflects the structure of a relational database where all data is organized as tables and rows. It is therefore simple to check for similarity with previous results at two granularities: first the entire result, and then individual rows. The end of a row in a result serves as a natural chunk boundary. It is important to note that using the tabular structure in results only involves shallow interpretation of the data. Ganesh does not perform any deeper semantic interpretation such as understanding data types, result schema, or integrity constraints. Tuning Rabin fingerprinting for a workload can also be difficult. If the average chunk size is too large, chunks can span multiple result rows. However, selecting a smaller average chunk size increases the amount of metadata required to describe the results. This, in turn, would decrease the savings obtained via its use. Rabin fingerprinting also needs two computationally-expensive passes over the data: once to determine chunk boundaries and once again to generate cryptographic hashes for the chunks. Ganesh only needs a single pass for hash generation as the chunk boundaries are provided by the data's natural structure. The performance comparison in Section 6 shows that Ganesh's row-based algorithm outperforms Rabin fingerprinting. Given that previous work has already shown that Rabin fingerprinting performs better than gzip [26], we do not compare Ganesh to compression algorithms in this paper.

3.2 Transparency
The key factor influencing our design was the need for Ganesh to be completely transparent to all components of a typical eBusiness system: web servers, application servers, and database servers. Without this, Ganesh stands little chance of having a significant real-world impact. Requiring modifications to any of the above components would raise the barrier for entry of Ganesh into an existing system, and thus reduce its chances of adoption. Preserving transparency is simplified by the fact that Ganesh is purely a performance enhancement, not a functionality or usability enhancement. We chose agent interposition as the architectural approach to realizing our goal. This approach relies on the existence of a compact programming interface that is already widely used by target software. It also relies on a mechanism to easily add new code without disrupting existing module structure. These conditions are easily met in our context because of the popularity of Java as the programming language for eBusiness systems. The Java Database Connectivity (JDBC) API [32] allows Java applications to access a wide variety of databases and even other tabular data repositories such as flat files. Access to these data sources is provided by JDBC drivers that translate between the JDBC API and the database communication mechanism. Figure 3(a) shows how JDBC is typically used in an application.
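For readers unfamiliar with the API, the hedged Java fragment below shows the usage pattern that Figure 3(a) depicts: the application codes against the standard JDBC interfaces, and the vendor-supplied driver behind DriverManager handles all communication with the database. The connection URL, credentials, and query are placeholders, not values from our benchmarks.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Typical JDBC usage: the application never sees the wire protocol; the
    // driver registered for the URL (here, hypothetically, MySQL Connector/J on
    // the classpath) mediates all communication with the remote database.
    public class JdbcExample {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:mysql://db.example.com:3306/bboard";   // placeholder URL
            try (Connection conn = DriverManager.getConnection(url, "user", "password");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT id, title FROM stories")) {
                while (rs.next()) {
                    System.out.println(rs.getInt("id") + ": " + rs.getString("title"));
                }
            }
        }
    }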
As the JDBC interface is standardized, one can substitute one JDBC driver for another without application modifications. The JDBC driver thus becomes the natural module to exploit for code interposition. As shown in Figure 3(b), the native JDBC driver is replaced with a Ganesh JDBC driver that presents the same standardized interface. The Ganesh driver maintains an in-memory cache of result fragments from previous queries and performs reassembly of results. At the database, we add a new process called the Ganesh proxy. This proxy, which can be shared by multiple front-end nodes, consists of two parts: code to detect similarity in result fragments and the original native JDBC driver that communicates with the database. The use of a proxy at the database makes Ganesh database-agnostic and simplifies prototyping and experimentation. Ganesh is thus able to work with a wide range of databases and applications, requiring no modifications to either.

[Figure 3: Native vs. Ganesh Architecture; (a) Native Architecture: the web and application server uses the native JDBC driver to reach the database across the WAN; (b) Ganesh's Interposition-based Architecture: the web and application server uses the Ganesh JDBC driver, which communicates across the WAN with the Ganesh proxy and its native JDBC driver in front of the database]

3.3 Proxy-Based Caching
The native JDBC driver shown in Figure 3(a) is a lightweight code component supplied by the database vendor. Its main function is to mediate communication between the application and the remote database. It forwards queries, buffers entire results, and responds to application requests to view parts of results. The Ganesh JDBC driver shown in Figure 3(b) presents the application with an interface identical to that provided by the native driver. It provides the ability to reconstruct results from compact hash-based descriptions sent by the proxy. To perform this reconstruction, the driver maintains an in-memory cache of recently-received results. This cache is only used as a source of result fragments in reconstructing results. No attempt is made by the Ganesh driver or proxy to track database updates. The lack of cache consistency does not hurt correctness as a description of the results is always fetched from the proxy; at worst, there will be no performance benefit from using Ganesh. Stale data will simply be paged out of the cache over time. The Ganesh proxy accesses the database via the native JDBC driver, which remains unchanged between Figures 3(a) and (b). The database is thus completely unaware of the existence of the proxy. The proxy does not examine any queries received from the Ganesh driver but passes them to the native driver. Instead, the proxy is responsible for inspecting database output received from the native driver, detecting similar results, and generating hash-based encodings of these results whenever enough similarity is found. While this architecture does not decrease the load on a database, as mentioned earlier in Section 2.1, it is much easier to replicate databases for scalability in a LAN than in a WAN. To generate a hash-based encoding, the proxy must be aware of what result fragments are available in the Ganesh driver's cache. One approach is to be optimistic, and to assume that all result fragments are available. This will result in the smallest possible initial transmission of a result. However, in cases where there is little overlap with previous results, the Ganesh driver will have to make many calls to the proxy during reconstruction to fetch missing result fragments.
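The reconstruction path just described, and detailed further in Section 3.4, can be sketched as follows. This is our own simplified illustration rather than the Ganesh driver's actual code: RecipeEntry, ProxyStub, and the cache layout are hypothetical, and the real driver operates on JDBC ResultSet objects rather than strings. It shows why a poorly predicted recipe is costly: every unrecognized hash turns into a round trip to the proxy.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Simplified view of result reconstruction at the Ganesh driver. A recipe
    // entry is either a verbatim row or the SHA-1 hash of a row the proxy
    // predicted to be cached here. Hashes are resolved against the local
    // fragment cache; each miss costs an extra round trip to the proxy.
    public class RecipeDecoder {

        public static final class RecipeEntry {
            final boolean isHash;
            final String value;                        // hex SHA-1 digest or verbatim row
            public RecipeEntry(boolean isHash, String value) {
                this.isHash = isHash;
                this.value = value;
            }
        }

        public interface ProxyStub {
            String fetchFragment(String hash);         // remote call to the Ganesh proxy
        }

        private final Map<String, String> fragmentCache = new HashMap<>();   // hash -> row
        private final ProxyStub proxy;

        public RecipeDecoder(ProxyStub proxy) {
            this.proxy = proxy;
        }

        public List<String> reconstruct(List<RecipeEntry> recipe) throws Exception {
            List<String> rows = new ArrayList<>();
            for (RecipeEntry e : recipe) {
                if (!e.isHash) {
                    rows.add(e.value);                          // verbatim row; cache it for reuse
                    fragmentCache.put(sha1Hex(e.value), e.value);
                } else {
                    String row = fragmentCache.get(e.value);    // hash: local lookup first
                    if (row == null) {
                        row = proxy.fetchFragment(e.value);     // miss: fetch the fragment remotely
                        fragmentCache.put(e.value, row);
                    }
                    rows.add(row);
                }
            }
            return rows;
        }

        private static String sha1Hex(String row) throws Exception {
            byte[] digest = MessageDigest.getInstance("SHA-1").digest(row.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) hex.append(String.format("%02x", b));
            return hex.toString();
        }
    }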
To avoid this situation, the proxy loosely tracks the state of the Ganesh driver's cache. Since both components are under our control, it is relatively simple to do this without resorting to gray-box techniques or explicit communication for maintaining cache coherence. Instead, the proxy simulates the Ganesh driver's cache management algorithm and uses this to maintain a list of hashes for which the Ganesh driver is likely to possess the result fragments. In case of mistracking, there will be no loss of correctness but there will be extra round-trip delays to fetch the missing fragments. If the client detects loss of synchronization with the proxy, it can ask the proxy to reset the state shared between them. Also note that the proxy does not need to keep the result fragments themselves, only their hashes. This allows the proxy to remain scalable even when it is shared by many front-end nodes.

[Figure 4: Dataflow for Result Handling; at the proxy, a GaneshOutputStream converts each ResultSet into either a recipe or the full data before it crosses the network, and at the Ganesh JDBC driver, a GaneshInputStream converts incoming recipes back into complete ResultSet objects]

3.4 Encoding and Decoding Results
The Ganesh proxy receives database output as Java objects from the native JDBC driver. It examines this output to see if a Java object of type ResultSet is present. The JDBC interface uses this data type to store results of database queries. If a ResultSet object is found, it is shrunk as discussed below. All other Java objects are passed through unmodified. As discussed in Section 3.1, the proxy uses the row boundaries defined in the ResultSet to partition it into fragments consisting of single result rows. All ResultSet objects are converted into objects of a new type called RecipeResultSet. We use the term recipe for this compact description of a database result because of its similarity to a file recipe in the CASPER file system [37]. The conversion replaces each result fragment that is likely to be present in the Ganesh driver's cache by a SHA-1 hash of that fragment. Previously unseen result fragments are retained verbatim. The proxy also retains hashes for the new result fragments as they will be present in the driver's cache in the future. Note that the proxy only caches hashes for result fragments and does not cache recipes. The proxy constructs a RecipeResultSet by checking for similarity at the entire result and then the row level. If the entire result is predicted to be present in the Ganesh driver's cache, the RecipeResultSet is simply a single hash of the entire result. Otherwise, it contains hashes for those rows predicted to be present in that cache; all other rows are retained verbatim. If the proxy estimates an overall space savings, it will transmit the RecipeResultSet. Otherwise the original ResultSet is transmitted. The RecipeResultSet objects are transformed back into ResultSet objects by the Ganesh driver. Figure 4 illustrates ResultSet handling at both ends. Each SHA-1 hash found in a RecipeResultSet is looked up in the local cache of result fragments. On a hit, the hash is replaced by the corresponding fragment. On a miss, the driver contacts the Ganesh proxy to fetch the fragment. All previously unseen result fragments that were retained verbatim by the proxy are hashed and added to the result cache.
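A simplified sketch of the proxy-side conversion is shown below. It is illustrative rather than the actual Ganesh implementation: the whole-result check and the space-savings estimate are omitted, rows are modeled as strings, and the LRU bound simply mirrors the 100,000-item driver cache used in our experiments.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    // Sketch of recipe construction at the proxy. Each result row that the
    // driver is predicted to hold is replaced by its SHA-1 hash; unseen rows are
    // sent verbatim and their hashes are remembered for future queries.
    public class RecipeBuilder {

        /** One recipe entry: either the hash of a predicted-cached row or the row itself. */
        public static final class Entry {
            public final boolean isHash;
            public final String value;
            Entry(boolean isHash, String value) { this.isHash = isHash; this.value = value; }
        }

        // LRU view of the hashes the driver's cache is believed to hold.
        private final Map<String, Boolean> predictedDriverCache =
            new LinkedHashMap<String, Boolean>(16, 0.75f, true) {
                @Override protected boolean removeEldestEntry(Map.Entry<String, Boolean> eldest) {
                    return size() > 100_000;   // mirror the driver's 100,000-item cache limit
                }
            };

        public List<Entry> buildRecipe(List<String> rows) throws Exception {
            List<Entry> recipe = new ArrayList<>();
            for (String row : rows) {
                String hash = sha1Hex(row);
                if (predictedDriverCache.get(hash) != null) {
                    recipe.add(new Entry(true, hash));    // driver reconstructs the row locally
                } else {
                    recipe.add(new Entry(false, row));    // send verbatim; driver will cache it
                    predictedDriverCache.put(hash, Boolean.TRUE);
                }
            }
            return recipe;
        }

        private static String sha1Hex(String row) throws Exception {
            byte[] digest = MessageDigest.getInstance("SHA-1")
                                         .digest(row.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) hex.append(String.format("%02x", b));
            return hex.toString();
        }
    }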
There should be very few misses if the proxy has accurately tracked the Ganesh driver's cache state. A future optimization would be to batch the fetch of missing fragments. This would be valuable when there are many small missing fragments in a high-latency WAN. Once the transformation is complete, the fully reconstructed ResultSet object is passed up to the application.

4. EXPERIMENTAL VALIDATION
Three questions follow from the goals and design of Ganesh:
• First, can performance be improved significantly by exploiting similarity across database results?
• Second, how important is Ganesh's structural similarity detection relative to Rabin fingerprinting's similarity detection?
• Third, is the overhead of the proxy-based design acceptable?
Our evaluation answers these questions through controlled experiments with the Ganesh prototype. This section describes the benchmarks used, our evaluation procedure, and the experimental setup. Results of the experiments are presented in Sections 5, 6, and 7.

4.1 Benchmarks
Our evaluation is based on two benchmarks [18] that have been widely used by other researchers to evaluate various aspects of multi-tier [27] and eBusiness architectures [9]. The first benchmark, BBOARD, is modeled after Slashdot, a technology-oriented news site. The second benchmark, AUCTION, is modeled after eBay, an online auction site. In both benchmarks, most content is dynamically generated from information stored in a database. Details of the datasets used can be found in Table 1.

Table 1: Benchmark Dataset Details
  BBOARD:  2.0 GB (500,000 Users, 12,000 Stories, 3,298,000 Comments)
  AUCTION: 1.3 GB (1,000,000 Users, 34,000 Items)

4.1.1 The BBOARD Benchmark
The BBOARD benchmark, also known as RUBBoS [18], models Slashdot, a popular technology-oriented web site. Slashdot aggregates links to news stories and other topics of interest found elsewhere on the web. The site also serves as a bulletin board by allowing users to comment on the posted stories in a threaded conversation form. It is not uncommon for a story to gather hundreds of comments in a matter of hours. The BBOARD benchmark is similar to the site and models the activities of a user, including read-only operations such as browsing the stories of the day, browsing story categories, and viewing comments as well as write operations such as new user registration, adding and moderating comments, and story submission. The benchmark consists of three different phases: a short warm-up phase, a runtime phase representing the main body of the workload, and a short cool-down phase. In this paper we only report results from the runtime phase. The warm-up phase is important in establishing dynamic system state, but measurements from that phase are not significant for our evaluation. The cool-down phase is solely for allowing the benchmark to shut down. The warm-up, runtime, and cool-down phases are 2, 15, and 2 minutes respectively. The number of simulated clients was set to 400, 800, 1200, and 1600. The benchmark is available in a Java Servlets and PHP version and has different datasets; we evaluated Ganesh using the Java Servlets version and the Expanded dataset. The BBOARD benchmark defines two different workloads. The first, the Authoring mix, consists of 70% read-only operations and 30% read-write operations. The second, the Browsing mix, contains only read-only operations and does not update the database.
4.1.2 The AUCTION Benchmark
The AUCTION benchmark, also known as RUBiS [18], models eBay, the online auction site. The eBay web site is used to buy and sell items via an auction format. The main activities of a user include browsing, selling, or bidding for items. Modeling the activities on this site, this benchmark includes read-only activities such as browsing items by category and by region, as well as read-write activities such as bidding for items, buying and selling items, and leaving feedback. As with BBOARD, the benchmark consists of three different phases. The warm-up, runtime, and cool-down phases for this experiment are 1.5, 15, and 1 minutes respectively. We tested Ganesh with four client configurations where the number of test clients was set to 400, 800, 1200, and 1600. The benchmark is available in an Enterprise Java Bean (EJB), Java Servlets, and PHP version and has different datasets; we evaluated Ganesh with the Java Servlets version and the Expanded dataset. The AUCTION benchmark defines two different workloads. The first, the Bidding mix, consists of 70% read-only operations and 30% read-write operations. The second, the Browsing mix, contains only read-only operations and does not update the database.

4.2 Experimental Procedure
Both benchmarks involve a synthetic workload of clients accessing a web server. The number of clients emulated is an experimental parameter. Each emulated client runs an instance of the benchmark in its own thread, using a matrix to transition between different benchmark states. The matrix defines a stochastic model with probabilities of transitioning between the different states that represent typical user actions. An example transition is a user logging into the AUCTION system and then deciding on whether to post an item for sale or bid on active auctions. Each client also models user think time between requests. The think time is modeled as an exponential distribution with a mean of 7 seconds. We evaluate Ganesh along two axes: number of clients and WAN bandwidth. Higher loads are especially useful in understanding Ganesh's performance when the CPU or disk of the database server or proxy is the limiting factor. A previous study has shown that approximately 50% of the wide-area Internet bottlenecks observed had an available bandwidth under 10 Mb/s [1]. Based on this work, we focus our evaluation on the WAN bandwidth of 5 Mb/s with 66 ms of round-trip latency, representative of severely constrained network paths, and 20 Mb/s with 33 ms of round-trip latency, representative of a moderately constrained network path. We also report Ganesh's performance at 100 Mb/s with no added round-trip latency. This bandwidth, representative of an unconstrained network, is especially useful in revealing any potential overhead of Ganesh in situations where WAN bandwidth is not the limiting factor. For each combination of number of clients and WAN bandwidth, we measured results from the two configurations listed below:
• Native: This configuration corresponds to Figure 3(a). Native avoids Ganesh's overhead in using a proxy and performing Java object serialization.
• Ganesh: This configuration corresponds to Figure 3(b).
For a given number of clients and WAN bandwidth, comparing these results to the corresponding Native results gives the performance benefit due to the Ganesh middleware system. The metric used to quantify the improvement in throughput is the number of client requests that can be serviced per second. The metric used to quantify Ganesh's overhead is the average response time for a client request. For all of the experiments, the Ganesh driver used by the application server used a cache size of 100,000 items (as Java lacks a sizeof() operator, Java caches limit their size based on the number of objects; the size of cache dumps taken at the end of the experiments never exceeded 212 MB). The proxy was effective in tracking the Ganesh driver's cache state; for all of our experiments the miss rate on the driver never exceeded 0.7%.

4.3 Experimental Setup
The experimental setup used for the benchmarks can be seen in Figure 5.

[Figure 5: Experimental Setup; test clients connect to the front-end web and application server, which communicates through a NetEm router with the Ganesh proxy and the back-end database server]

All machines were 3.2 GHz Pentium 4s (with HyperThreading enabled). With the exception of the database server, all machines had 2 GB of SDRAM and ran the Fedora Core Linux distribution. The database server had 4 GB of SDRAM. We used Apache's Tomcat as both the application server that hosted the Java Servlets and the web server. Both benchmarks used Java Servlets to generate the dynamic content. The database server used the open source MySQL database. For the native JDBC drivers, we used the Connector/J drivers provided by MySQL. The application server used Sun's Java Virtual Machine as the runtime environment for the Java Servlets. The sysstat tool was used to monitor the CPU, network, disk, and memory utilization on all machines. The machines were connected by a switched gigabit Ethernet network. As shown in Figure 5, the front-end web and application server was separated from the proxy and database server by a NetEm router [16]. This router allowed us to control the bandwidth and latency settings on the network. The NetEm router is a standard PC with two network cards running the Linux Traffic Control and Network Emulation software. The bandwidth and latency constraints were only applied to the link between the application server and the database for the native case and between the application server and the proxy for the Ganesh case. There is no communication between the application server and the database with Ganesh as all data flows through the proxy. As our focus was on the WAN link between the application server and the database, there were no constraints on the link between the simulated test clients and the web server.

5. THROUGHPUT AND RESPONSE TIME
In this section, we address the first question raised in Section 4: Can performance be improved significantly by exploiting similarity across database results? To answer this question, we use results from the BBOARD and AUCTION benchmarks. We use two metrics to quantify the performance improvement obtainable through the use of Ganesh: throughput, from the perspective of the web server, and average response time, from the perspective of the client. Throughput is measured in terms of the number of client requests that can be serviced per second.

5.1 BBOARD Results and Analysis
5.1.1 Authoring Mix
Figures 6 (a) and (b) present the average number of requests serviced per second and the average response time for these requests as perceived by the clients for BBOARD's Authoring Mix.

[Figure 6: BBOARD Benchmark - Throughput and Average Response Time; panels: (a) Throughput: Authoring Mix, (b) Response Time: Authoring Mix, (c) Throughput: Browsing Mix, (d) Response Time: Browsing Mix; response times plotted on a log scale. Mean of three trials; the maximum standard deviation for throughput and response time was 9.8% and 11.9% of the corresponding mean.]

As Figure 6 (a) shows, Native easily saturates the 5 Mb/s link. At 400 clients, the Native solution delivers 29 requests/sec with an average response time of 8.3 seconds. Native's throughput drops with an increase in test clients as clients time out due to congestion at the application server.
Usability studies have shown that response times above 10 seconds cause the user to move on to other tasks [24]. Based on these numbers, increasing the number of test clients makes the Native system unusable. Ganesh at 5 Mb/s, however, delivers a twofold improvement with 400 test clients and a fivefold improvement at 1200 clients. Ganesh's performance drops slightly at 1200 and 1600 clients as the network is saturated. Compared to Native, Figure 6 (b) shows that Ganesh's response times are substantially lower with sub-second response times at 400 clients. Figure 6 (a) also shows that for 400 and 800 test clients Ganesh at 5 Mb/s has the same throughput and average response time as Native at 20 Mb/s. Only at 1200 and 1600 clients does Native at 20 Mb/s deliver higher throughput than Ganesh at 5 Mb/s. Comparing both Ganesh and Native at 20 Mb/s, we see that Ganesh is no longer bandwidth constrained and delivers up to a twofold improvement over Native at 1600 test clients. As Ganesh does not saturate the network with higher test client configurations, at 1600 test clients, its average response time is 0.1 seconds rather than Native's 7.7 seconds. As expected, there are no visible gains from Ganesh at the higher bandwidth of 100 Mb/s where the network is no longer the bottleneck. Ganesh, however, still tracks Native in terms of throughput.

5.1.2 Browsing Mix
Figures 6 (c) and (d) present the average number of requests serviced per second and the average response time for these requests as perceived by the clients for BBOARD's Browsing Mix. Regardless of the test client configuration, Figure 6 (c) shows that Native's throughput at 5 Mb/s is limited to 10 reqs/sec. Ganesh at 5 Mb/s with 400 test clients delivers more than a sixfold increase in throughput. The improvement increases to over an elevenfold increase at 800 test clients before Ganesh saturates the network. Further, Figure 6 (d) shows that Native's average response time of 35 seconds at 400 test clients makes the system unusable. These high response times further increase with the addition of test clients. Even with the 1600 test client configuration Ganesh delivers an acceptable average response time of 8.2 seconds. Due to the data-intensive nature of the Browsing mix, Ganesh at 5 Mb/s surprisingly performs much better than Native at 20 Mb/s.
Further, as shown in Figure 6 (d), while the average response time for Native at 20 Mb/s is acceptable at 400 test clients, it is unusable with 800 test clients with an average response time of 15.8 seconds. Like the 5 Mb/s case, this response time increases with the addition of extra test clients. Ganesh at 20 Mb/s and both Native and Ganesh at 100 Mb/s are not bandwidth limited. However, performance plateaus after 1200 test clients due to the database CPU being saturated.

5.1.3 Filter Variant
We were surprised by the Native performance from the BBOARD benchmark. At the bandwidth of 5 Mb/s, Native performance was lower than what we had expected. It turned out the benchmark code that displays stories read all the comments associated with the particular story from the database and only then did some postprocessing to select the comments to be displayed. While this is exactly the behavior of SlashCode, the code base behind the Slashdot web site, we decided to modify the benchmark to perform some pre-filtering at the database. This modified benchmark, named the Filter Variant, models a developer who applies optimizations at the SQL level to transfer less data. In the interests of brevity, we only briefly summarize the results from the Authoring mix.

[Figure 7: BBOARD Benchmark - Filter Variant - Throughput and Average Response Time; panels: (a) Throughput: Authoring Mix, (b) Response Time: Authoring Mix. Mean of three trials; the maximum standard deviation for throughput and response time was 7.2% and 11.5% of the corresponding mean.]

For the Authoring mix, at 800 test clients at 5 Mb/s, Figure 7 (a) shows that Native's throughput increases by 85% when compared to the original benchmark while Ganesh's improvement is smaller at 15%. Native's performance drops above 800 clients as the test clients time out due to high response times. The most significant gain for Native is seen at 20 Mb/s. At 1600 test clients, when compared to the original benchmark, Native sees a 73% improvement in throughput and a 77% reduction in average response time. While Ganesh sees no improvement when compared to the original, it still processes 19% more requests/sec than Native. Thus, while the optimizations were more helpful to Native, Ganesh still delivers an improvement in performance.

5.2 AUCTION Results and Analysis
5.2.1 Bidding Mix
Figures 8 (a) and (b) present the average number of requests serviced per second and the average response time for these requests as perceived by the clients for AUCTION's Bidding Mix.

[Figure 8: AUCTION Benchmark - Throughput and Average Response Time; panels: (a) Throughput: Bidding Mix, (b) Response Time: Bidding Mix, (c) Throughput: Browsing Mix, (d) Response Time: Browsing Mix; response times plotted on a log scale. Mean of three trials; the maximum standard deviation for throughput and response time was 2.2% and 11.8% of the corresponding mean.]

As mentioned earlier, the Bidding mix consists of a mixture of read and write operations. The AUCTION benchmark is not as data intensive as BBOARD. Therefore, most of the gains are observed at the lower bandwidth of 5 Mb/s. Figure 8 (a) shows that the increase in throughput due to Ganesh ranges from 8% at 400 test clients to 18% with 1600 test clients. As seen in Figure 8 (b), the average response times for Ganesh are significantly lower than Native's, ranging from a decrease of 84% at 800 test clients to 88% at 1600 test clients.
Figure 8 (a) also shows that with a fourfold increase of bandwidth from 5 Mb/s to 20 Mb/s, Native is no longer bandwidth constrained and there is no performance difference between Ganesh and Native. With the higher test client configurations, we did observe that the bandwidth used by Ganesh was lower than Native. Ganesh might still be useful in these non-constrained scenarios if bandwidth is purchased on a metered basis. Similar results are seen for the 100 Mb/s scenario.

5.2.2 Browsing Mix
For AUCTION's Browsing Mix, Figures 8 (c) and (d) present the average number of requests serviced per second and the average response time for these requests as perceived by the clients. Again, most of the gains are observed at lower bandwidths. At 5 Mb/s, Native and Ganesh deliver similar throughput and response times with 400 test clients. While the throughput for both remains the same at 800 test clients, Figure 8 (d) shows that Ganesh's average response time is 62% lower than Native's. Native saturates the link at 800 clients and adding extra test clients only increases the average response time. Ganesh, regardless of the test client configuration, is not bandwidth constrained and maintains the same response time. At 1600 test clients, Figure 8 (c) shows that Ganesh's throughput is almost twice that of Native. At the higher bandwidths of 20 and 100 Mb/s, neither Ganesh nor Native is bandwidth limited and both deliver equivalent throughput and response times.

6. STRUCTURAL VS. RABIN SIMILARITY
In this section, we address the second question raised in Section 4: How important is Ganesh's structural similarity detection relative to Rabin fingerprinting-based similarity detection? To answer this question, we used microbenchmarks and the BBOARD and AUCTION benchmarks. As Ganesh always performed better than Rabin fingerprinting, we only present a subset of the results here in the interests of brevity.

6.1 Microbenchmarks
Two microbenchmarks show an example of the effects of data reordering on the Rabin fingerprinting algorithm. In the first microbenchmark, SelectSort1, a query with a specified sort order selects 223.6 MB of data spread over approximately 280 K rows. The query is then repeated with a different sort attribute. While the same number of rows and the same data is returned, the order of rows is different. In such a scenario, one would expect a large amount of similarity to be detected between both results.

Table 2: Similarity Microbenchmarks
  Benchmark     Orig. Size   Ganesh Size   Rabin Size
  SelectSort1   223.6 MB     5.4 MB        219.3 MB
  SelectSort2   223.6 MB     5.4 MB        223.6 MB

As Table 2 shows, Ganesh's row-based algorithm achieves a 97.6% reduction while the Rabin fingerprinting algorithm, with the average chunk size parameter set to 4 KB, only achieves a 1% reduction. The reason, as shown earlier in Figure 2, is that with Rabin fingerprinting, the spans of data between two consecutive boundaries usually cross row boundaries. With the order of the rows changing in the second result and the Rabin fingerprints now spanning different rows, the algorithm is unable to detect significant similarity. The small gain seen is mostly for those single rows that are large enough to be broken into multiple chunks. SelectSort2, another microbenchmark, executed the same queries but increased the minimum chunk size of the Rabin fingerprinting algorithm. As can be seen in Table 2, even the small gain from the previous microbenchmark disappears as the minimum chunk size was greater than the average row size.
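The following self-contained Java example, which is ours and not part of the benchmark suite, captures why row-granularity hashing is immune to this reordering: the two results contain the same rows in different orders, so their sets of per-row SHA-1 hashes are identical, whereas a chunker whose boundaries ignore row structure sees largely unfamiliar byte sequences in the reordered result.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    // Result A and result B hold the same three rows under different sort
    // orders. Hashing at row granularity yields the same hash set for both, so
    // every row of B can be served from a cache populated by A.
    public class ReorderingDemo {
        public static void main(String[] args) throws Exception {
            List<String> resultA = Arrays.asList("1|apple|0.50", "2|banana|0.25", "3|cherry|2.00");
            List<String> resultB = Arrays.asList("2|banana|0.25", "3|cherry|2.00", "1|apple|0.50");

            Set<String> hashesA = rowHashes(resultA);
            Set<String> hashesB = rowHashes(resultB);
            System.out.println("identical hash sets: " + hashesA.equals(hashesB)); // prints true
        }

        static Set<String> rowHashes(List<String> rows) throws Exception {
            Set<String> hashes = new HashSet<>();
            for (String row : rows) {
                byte[] digest = MessageDigest.getInstance("SHA-1")
                                             .digest(row.getBytes(StandardCharsets.UTF_8));
                StringBuilder hex = new StringBuilder();
                for (byte b : digest) hex.append(String.format("%02x", b));
                hashes.add(hex.toString());
            }
            return hashes;
        }
    }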
While one can partially address these problems by dynamically varying the parameters of the Rabin fingerprinting algorithm, this can be computationally expensive, especially in the presence of changing workloads.

6.2 Application Benchmarks
We ran the BBOARD benchmark described in Section 4.1.1 on two versions of Ganesh: the first with Rabin fingerprinting used as the chunking algorithm and the second with Ganesh's row-based algorithm. Rabin's results for the Browsing Mix are normalized to Ganesh's results and presented in Figure 9.

[Figure 9: Normalized Comparison of Ganesh vs. Rabin - BBOARD Browsing Mix; panels: (a) Normalized Throughput, (b) Normalized Response Time. For throughput, a normalized result greater than 1 implies that Rabin is better; for response time, a normalized result greater than 1 implies that Ganesh is better. Mean of three trials; the maximum standard deviation for throughput and response time was 9.1% and 13.9% of the corresponding mean.]

As Figure 9 (a) shows, at 5 Mb/s, independent of the test client configuration, Rabin significantly underperforms Ganesh. This happens because of a combination of two reasons. First, as outlined in Section 3.1, Rabin finds less similarity as it does not exploit the result's structural information. Second, this benchmark contained some queries that generated large results. In this case, Rabin, with a small average chunk size, generated a large number of objects that evicted other useful data from the cache. In contrast, Ganesh was able to detect these large rows and correspondingly increase the size of the chunks. This was confirmed as cache statistics showed that Ganesh's hit ratio was roughly three times that of Rabin. Throughput measurements at 20 Mb/s were similar with the exception of Rabin's performance with 400 test clients. In this case, Ganesh was not network limited and, in fact, the throughput was the same as 400 clients at 5 Mb/s. Rabin, however, took advantage of the bandwidth increase from 5 to 20 Mb/s to deliver a slightly better performance. At 100 Mb/s, Rabin's throughput was similar to Ganesh's as bandwidth was no longer a bottleneck. The normalized response time, presented in Figure 9 (b), shows similar trends. At 5 and 20 Mb/s, the addition of test clients decreases the normalized response time as Ganesh's average response time increases faster than Rabin's. However, at no point does Rabin outperform Ganesh. Note that at 400 and 800 clients at 100 Mb/s, Rabin does have a higher overhead even when it is not bandwidth constrained. As mentioned in Section 3.1, this is due to the fact that Rabin has to hash each ResultSet twice. The overhead disappears with 1200 and 1600 clients as the database CPU is saturated and limits the performance of both Ganesh and Rabin.

7. PROXY OVERHEAD
In this section, we address the third question raised in Section 4: Is the overhead of Ganesh's proxy-based design acceptable?
To answer this question, we concentrate on its performance at the higher bandwidths. Our evaluation in Section 5 showed that Ganesh, when compared to Native, can deliver a substantial throughput improvement at lower bandwidths. It is only at higher bandwidths that any overhead in latency, measured by the average response time for a client request, or in throughput, measured by the number of client requests that can be serviced per second, would be visible. Looking at the Authoring mix of the original BBOARD benchmark, there are no visible gains from Ganesh at 100 Mb/s. Ganesh, however, still tracks Native in terms of throughput. While the average response time is higher for Ganesh, the absolute difference is between 0.01 and 0.04 seconds and would be imperceptible to the end-user. The Browsing mix shows an even smaller difference in average response times. The results from the filter variant of the BBOARD benchmark are similar. Even for the AUCTION benchmark, the difference between Native and Ganesh's response time at 100 Mb/s was never greater than 0.02 seconds. The only exception to the above results was seen in the filter variant of the BBOARD benchmark where Ganesh at 1600 test clients added 0.85 seconds to the average response time. Thus, even for much faster networks where the WAN link is not the bottleneck, Ganesh always delivers throughput equivalent to Native. While some extra latency is added by the proxy-based design, it is usually imperceptible.

8. RELATED WORK
To the best of our knowledge, Ganesh is the first system that combines the use of hash-based techniques with caching of database results to improve throughput and response times for applications with dynamic content. We also believe that it is the first system to demonstrate the benefits of using structural information for detecting similarity. In this section, we first discuss alternative approaches to caching dynamic content and then examine other uses of hash-based primitives in distributed systems.

8.1 Caching Dynamic Content
At the database layer, a number of systems have advocated middle-tier caching where parts of the database are replicated at the edge or server [3, 4, 20]. These systems either cache entire tables in what is essentially a replicated database or use materialized views from previous query replies [19]. They require tight integration with the back-end database to ensure a time bound on the propagation of updates. These systems are also usually targeted towards workloads that do not require strict consistency and can tolerate stale data.
Further, unlike Ganesh, some of these mid-tier caching solutions [2, 3] suffer from the complexity of having to participate in query planning and distributed query processing. Gao et al. [15] propose using a distributed object replication architecture where the data store's consistency requirements are adapted on a per-application basis. These solutions require substantial developer resources and detailed understanding of the application being modified. While systems that attempt to automate the partitioning and replication of an application's database exist [34], they do not provide full transaction semantics. In comparison, Ganesh does not weaken any of the semantics provided by the underlying database. Recent work in the evaluation of edge caching options for dynamic web sites [38] has suggested that, without careful planning, employing complex offloading strategies can hurt performance. Instead, the work advocates an architecture in which all tiers except the database should be offloaded to the edge. Our evaluation of Ganesh has shown that it would benefit these scenarios. To improve database scalability, C-JDBC [10], SSS [22], and Ganymed [28] also advocate the use of an interposition-based architecture to transparently cluster and replicate databases at the middleware level. The approaches of these architectures and Ganesh are complementary and they would benefit each other. Moving up to the presentation layer, there has been widespread adoption of fragment-based caching [14], which improves cache utilization by separately caching different parts of generated web pages. While fragment-based caching works at the edge, a recent proposal moves web page assembly to the clients to optimize content delivery [31]. While Ganesh is not used at the presentation layer, the same principles have been applied in Duplicate Transfer Detection [25] to increase web cache efficiency as well as for web access across bandwidth-limited links [33].

8.2 Hash-based Systems
The past few years have seen the emergence of many systems that exploit hash-based techniques. At the heart of all these systems is the idea of detecting similarity in data without requiring interpretation of that data. This simple yet elegant idea relies on cryptographic hashing, as discussed earlier in Section 2. Successful applications of this idea span a wide range of storage systems. Examples include peer-to-peer backup of personal computing files [11], storage-efficient archiving of data [29], and finding similar files [21]. Spring and Wetherall [35] apply similar principles at the network level. Using synchronized caches at both ends of a network link, duplicated data is replaced by smaller tokens for transmission and then restored at the remote end. This and other hash-based systems such as the CASPER [37] and LBFS [26] filesystems, and Layer-2 bandwidth optimizers such as Riverbed and Peribit, use Rabin fingerprinting [30] to discover spans of commonality in data. This approach is especially useful when data items are modified in-place through insertions, deletions, and updates. However, as Section 6 shows, the performance of this technique can show a dramatic drop in the presence of data reordering. Ganesh instead uses row boundaries as dividers for detecting similarity. The most aggressive use of hash-based techniques is by systems that use hashes as the primary identifiers for objects in persistent storage.
Storage systems such as CFS [12] and PAST [13] that have been built using distributed hash tables fall into this category. Single Instance Storage [6] and Venti [29] are other examples of such systems. As discussed in Section 2.2, the use of cryptographic hashes for addressing persistent data represents a deeper level of faith in their collision-resistance than that assumed by Ganesh. If time reveals shortcomings in the hash algorithm, the effort involved in correcting the flaw is much greater. In Ganesh, it is merely a matter of replacing the hash algorithm.

9. CONCLUSION
The growing use of dynamic web content generated from relational databases places increased demands on WAN bandwidth. Traditional caching solutions for bandwidth and latency reduction are often ineffective for such content. This paper shows that the impact of WAN accesses to databases can be substantially reduced through the Ganesh architecture without any compromise of the database's strict consistency semantics. The essence of the Ganesh architecture is the use of computation at the edges to reduce communication through the Internet. Ganesh is able to use cryptographic hashes to detect similarity with previous results and send compact recipes of results rather than full results. Our design uses interposition to achieve complete transparency: clients, application servers, and database servers are all unaware of Ganesh's presence and require no modification. Our experimental evaluation confirms that Ganesh, while conceptually simple, can be highly effective in improving throughput and response time. Our results also confirm that exploiting the structure present in database results to detect similarity is crucial to this performance improvement.

10. REFERENCES
[1] AKELLA, A., SESHAN, S., AND SHAIKH, A. An empirical evaluation of wide-area internet bottlenecks. In Proc. 3rd ACM SIGCOMM Conference on Internet Measurement (Miami Beach, FL, USA, Oct. 2003), pp. 101-114. [2] ALTINEL, M., BORNHÖVD, C., KRISHNAMURTHY, S., MOHAN, C., PIRAHESH, H., AND REINWALD, B. Cache tables: Paving the way for an adaptive database cache. In Proc. of 29th VLDB (Berlin, Germany, 2003), pp. 718-729. [3] ALTINEL, M., LUO, Q., KRISHNAMURTHY, S., MOHAN, C., PIRAHESH, H., LINDSAY, B. G., WOO, H., AND BROWN, L. Dbcache: Database caching for web application servers. In Proc. 2002 ACM SIGMOD (2002), pp. 612-612. [4] AMIRI, K., PARK, S., TEWARI, R., AND PADMANABHAN, S. Dbproxy: A dynamic data cache for web applications. In Proc. IEEE International Conference on Data Engineering (ICDE) (Mar. 2003). [5] BLACK, J. Compare-by-hash: A reasoned analysis. In Proc. 2006 USENIX Annual Technical Conference (Boston, MA, May 2006), pp. 85-90. [6] BOLOSKY, W. J., CORBIN, S., GOEBEL, D., AND DOUCEUR, J. R. Single instance storage in windows 2000. In Proc. 4th USENIX Windows Systems Symposium (Seattle, WA, Aug. 2000), pp. 13-24. [7] BREWER, E. A. Lessons from giant-scale services. IEEE Internet Computing 5, 4 (2001), 46-55. [8] BRODER, A., GLASSMAN, S., MANASSE, M., AND ZWEIG, G. Syntactic clustering of the web. In Proc. 6th International WWW Conference (1997). [9] CECCHET, E., CHANDA, A., ELNIKETY, S., MARGUERITE, J., AND ZWAENEPOEL, W. Performance comparison of middleware architectures for generating dynamic web content. In Proc. Fourth ACM/IFIP/USENIX International Middleware Conference (Rio de Janeiro, Brazil, June 2003).
[10] CECCHET, E., MARGUERITE, J., AND ZWAENEPOEL, W. C-JDBC: Flexible database clustering middleware. In Proc. 2004 USENIX Annual Technical Conference (Boston, MA, June 2004). [11] COX, L. P., MURRAY, C. D., AND NOBLE, B. D. Pastiche: Making backup cheap and easy. In OSDI: Symposium on Operating Systems Design and Implementation (2002). [12] DABEK, F., KAASHOEK, M. F., KARGER, D., MORRIS, R., AND STOICA, I. Wide-area cooperative storage with CFS. In 18th ACM Symposium on Operating Systems Principles (Banff, Canada, Oct. 2001). [13] DRUSCHEL, P., AND ROWSTRON, A. PAST: A large-scale, persistent peer-to-peer storage utility. In HotOS VIII (Schloss Elmau, Germany, May 2001), pp. 75-80. [14] Edge side includes. http://www.esi.org. [15] GAO, L., DAHLIN, M., NAYATE, A., ZHENG, J., AND IYENGAR, A. Application specific data replication for edge services. In WWW ``03: Proc. Twelfth International Conference on World Wide Web (2003), pp. 449-460. [16] HEMMINGER, S. Netem - emulating real networks in the lab. In Proc. 2005 Linux Conference Australia (Canberra, Australia, Apr. 2005). [17] HENSON, V. An analysis of compare-by-hash. In Proc. 9th Workshop on Hot Topics in Operating Systems (HotOS IX) (May 2003), pp. 13-18. [18] Jmob benchmarks. http://jmob.objectweb.org/. [19] LABRINIDIS, A., AND ROUSSOPOULOS, N. Balancing performance and data freshness in web database servers. In Proc. 29th VLDB Conference (Sept. 2003). [20] LARSON, P.-A., GOLDSTEIN, J., AND ZHOU, J. Transparent mid-tier database caching in sql server. In Proc. 2003 ACM SIGMOD (2003), pp. 661-661. [21] MANBER, U. Finding similar files in a large file system. In Proc. USENIX Winter 1994 Technical Conference (San Fransisco, CA, 17-21 1994), pp. 1-10. [22] MANJHI, A., AILAMAKI, A., MAGGS, B. M., MOWRY, T. C., OLSTON, C., AND TOMASIC, A. Simultaneous scalability and security for data-intensive web applications. In Proc. 2006 ACM SIGMOD (June 2006), pp. 241-252. [23] MENEZES, A. J., VANSTONE, S. A., AND OORSCHOT, P. C. V. Handbook of Applied Cryptography. CRC Press, 1996. [24] MILLER, R. B. Response time in man-computer conversational transactions. In Proc. AFIPS Fall Joint Computer Conference (1968), pp. 267-277. [25] MOGUL, J. C., CHAN, Y. M., AND KELLY, T. Design, implementation, and evaluation of duplicate transfer detection in http. In Proc. First Symposium on Networked Systems Design and Implementation (San Francisco, CA, Mar. 2004). [26] MUTHITACHAROEN, A., CHEN, B., AND MAZIERES, D. A low-bandwidth network file system. In Proc. 18th ACM Symposium on Operating Systems Principles (Banff, Canada, Oct. 2001). [27] PFEIFER, D., AND JAKSCHITSCH, H. Method-based caching in multi-tiered server applications. In Proc. Fifth International Symposium on Distributed Objects and Applications (Catania, Sicily, Italy, Nov. 2003). [28] PLATTNER, C., AND ALONSO, G. Ganymed: Scalable replication for transactional web applications. In Proc. 5th ACM/IFIP/USENIX International Conference on Middleware (2004), pp. 155-174. [29] QUINLAN, S., AND DORWARD, S. Venti: A new approach to archival storage. In Proc. FAST 2002 Conference on File and Storage Technologies (2002). [30] RABIN, M. Fingerprinting by random polynomials. In Harvard University Center for Research in Computing Technology Technical Report TR-15-81 (1981). [31] RABINOVICH, M., XIAO, Z., DOUGLIS, F., AND KALMANEK, C. Moving edge side includes to the real edge - the clients. In Proc. 4th USENIX Symposium on Internet Technologies and Systems (Seattle, WA, Mar. 2003). [32] REESE, G. 
Database Programming with JDBC and Java, 1st ed. O'Reilly, June 1997. [33] RHEA, S., LIANG, K., AND BREWER, E. Value-based web caching. In Proc. Twelfth International World Wide Web Conference (May 2003). [34] SIVASUBRAMANIAN, S., ALONSO, G., PIERRE, G., AND VAN STEEN, M. Globedb: Autonomic data replication for web applications. In WWW '05: Proc. 14th International World-Wide Web conference (May 2005). [35] SPRING, N. T., AND WETHERALL, D. A protocol-independent technique for eliminating redundant network traffic. In Proc. of ACM SIGCOMM (Aug. 2000). [36] TOLIA, N., HARKES, J., KOZUCH, M., AND SATYANARAYANAN, M. Integrating portable and distributed storage. In Proc. 3rd USENIX Conference on File and Storage Technologies (San Francisco, CA, Mar. 2004). [37] TOLIA, N., KOZUCH, M., SATYANARAYANAN, M., KARP, B., PERRIG, A., AND BRESSOUD, T. Opportunistic use of content addressable storage for distributed file systems. In Proc. 2003 USENIX Annual Technical Conference (San Antonio, TX, June 2003), pp. 127-140. [38] YUAN, C., CHEN, Y., AND ZHANG, Z. Evaluation of edge caching/offloading for dynamic content delivery. In WWW '03: Proc. Twelfth International Conference on World Wide Web (2003), pp. 461-471.
Our experiments show that Ganesh, while conceptually simple, can improve performance significantly at bandwidths representative of today's commercial Internet. On benchmarks modeling multitiered web applications, the throughput improvement was as high as tenfold for data-intensive workloads. For workloads that were not data-intensive, throughput improvements of up to twofold were observed. Even when bandwidth was not a constraint, Ganesh had low overhead and did not hurt performance. Our experiments also confirm that exploiting the structure present in database results is crucial to this performance improvement. 2. BACKGROUND 2.1 Dynamic Content Generation As the World Wide Web has grown, many web sites have decentralized their data and functionality by pushing them to the edges of the Internet. Today, eBusiness systems often use a three-tiered architecture consisting of a front-end web server, an application server, and a back-end database server. Figure 1 illustrates this architecture. The first two tiers can be replicated close to a concentration of clients at the edge of the Internet. This improves user experience by lowering end-to-end latency and reducing exposure Figure 1: Multi-Tier Architecture to backbone traffic congestion. It can also increase the availability and scalability of web services. Content that is generated dynamically from the back-end database cannot be cached in the first two tiers. While databases can be easily replicated in a LAN, this is infeasible in a WAN because of the difficult task of simultaneously providing strong consistency, availability, and tolerance to network partitions [7]. As a result, databases tend to be centralized to meet the strong consistency requirements of many eBusiness applications such as banking, finance, and online retailing [38]. Thus, the back-end database is usually located far from many sets of first and second-tier nodes [2]. In the absence of both caching and replication, WAN bandwidth can easily become a limiting factor in the performance and scalability of data-intensive applications. 2.2 Hash-Based Systems Ganesh's focus is on efficient transmission of results by discovering similarities with the results of previous queries. As SQL queries can generate large results, hash-based techniques lend themselves well to the problem of efficiently transferring these large results across bandwidth constrained links. The use of hash-based techniques to reduce the volume of data transmitted has emerged as a common theme of many recent storage systems, as discussed in Section 8.2. These techniques rely on some basic assumptions. Cryptographic hash functions are assumed to be collision-resistant. In other words, it is computationally intractable to find two inputs that hash to the same output. The functions are also assumed to be one-way; that is, finding an input that results in a specific output is computationally infeasible. Menezes et al. [23] provide more details about these assumptions. The above assumptions allow hash-based systems to assume that collisions do not occur. Hence, they are able to treat the hash of a data item as its unique identifier. A collection of data items effectively becomes content-addressable, allowing a small hash to serve as a codeword for a much larger data item in permanent storage or network transmission. The assumption that collisions are so rare as to be effectively non-existent has recently come under fire [17]. 
However, as explained by Black [5], we believe that these issues do not form a concern for Ganesh. All communication is between trusted parts of the system and an adversary has no way to force Ganesh to accept invalid data. Further, Ganesh does not depend critically on any specific hash function. While we currently use SHA-1, replacing it with a different hash function would be simple. There would be no impact on performance as stronger hash functions (e.g., SHA-256) only add a few extra bytes and the generated hashes are still orders of magnitude smaller than the data items they represent. No re-hashing of permanent storage is required since Ganesh only uses hashing on volatile data.

3. DESIGN AND IMPLEMENTATION

Ganesh exploits redundancy in the result stream to avoid transmitting result fragments that are already present at the query site. Redundancy can arise naturally in many different ways. For example, a query repeated after a certain interval may return a different result because of updates to the database; however, there may be significant commonality between the two results. As another example, a user who is refining a search may generate a sequence of queries with overlapping results. When Ganesh detects redundancy, it suppresses transmission of the corresponding result fragments. Instead, it transmits a much smaller digest of those fragments and lets the query site reconstruct the result through hash lookup in a cache of previous results. In effect, Ganesh uses computation at the edges to reduce Internet communication.

Our description of Ganesh focuses on four aspects. We first explain our approach to detecting similarity in query results. Next, we discuss how the Ganesh architecture is completely invisible to all components of a multi-tier system. We then describe Ganesh's proxy-based approach and the dataflow for detecting similarity.

3.1 Detecting Similarity

One of the key design decisions in Ganesh is how similarity is detected. There are many potential ways to decompose a result into fragments. The optimal decomposition is, of course, the one that results in the smallest possible object for transmission for a given query's results. Finding this optimal decomposition is a difficult problem because of the large space of possibilities and because the optimal choice depends on many factors such as the contents of the query's result, the history of recent results, and the cache management algorithm.

When an object is opaque, systems such as LBFS [26] and CASPER [37] have successfully used Rabin fingerprints [8, 30] to detect data common to two objects. Rabin fingerprinting uses a sliding window over the data to compute a rolling hash. Assuming that the hash function is uniformly distributed, a chunk boundary is defined whenever the lower-order bits of the hash value equal some predetermined value. The number of lower-order bits used defines the average chunk size. These sub-divided chunks of the object become the unit of comparison for detecting similarity between different objects. As the locations of boundaries found by using Rabin fingerprints are stochastically determined, they usually fail to align with any structural properties of the underlying data. The algorithm therefore deals well with in-place updates, insertions, and deletions. However, it performs poorly in the presence of any reordering of data. Figure 2 shows an example where two results, A and B, consisting of three rows, have the same data but have different sort attributes.
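To make the boundary-detection idea concrete, the sketch below splits a byte array into content-defined chunks: a boundary is declared wherever the low-order bits of a rolling hash over a small window are zero, subject to a minimum chunk length. This is only a simplified stand-in for true Rabin fingerprinting (it uses a toy polynomial rolling hash rather than Rabin's irreducible-polynomial arithmetic), and the class name and parameter values are ours rather than the prototype's.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    // Simplified content-defined chunking: a chunk boundary is declared whenever
    // the low-order bits of a rolling hash over a sliding window equal zero.
    public class SimpleChunker {
        private static final int WINDOW = 48;        // bytes covered by the rolling window
        private static final int MASK = 0xFFF;       // 12 low-order bits -> roughly 4 KB average chunks
        private static final int MIN_CHUNK = 512;    // guard against degenerate, tiny chunks
        private static final long POW = pow31(WINDOW);

        public static List<byte[]> chunk(byte[] data) {
            List<byte[]> chunks = new ArrayList<byte[]>();
            long hash = 0;
            int start = 0;
            for (int i = 0; i < data.length; i++) {
                hash = hash * 31 + (data[i] & 0xFF);          // slide the new byte in
                if (i >= WINDOW) {
                    hash -= POW * (data[i - WINDOW] & 0xFF);  // slide the oldest byte out
                }
                boolean boundary = (hash & MASK) == 0 && (i - start + 1) >= MIN_CHUNK;
                if (boundary || i == data.length - 1) {
                    chunks.add(Arrays.copyOfRange(data, start, i + 1));
                    start = i + 1;
                }
            }
            return chunks;
        }

        private static long pow31(int n) {
            long p = 1;
            for (int i = 0; i < n; i++) {
                p *= 31;                                       // wraps modulo 2^64, which is acceptable for a rolling hash
            }
            return p;
        }
    }

With a 12-bit boundary mask, chunks average roughly 4 KB, which matches the average chunk size used for the comparison in Section 6; the chunks produced this way would then be hashed in a second pass, which is exactly the extra cost discussed next.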
In the extreme case, Rabin fingerprinting might be unable to find any similar data due to the way it detects chunk boundaries. Fortunately, Ganesh can use domain-specific knowledge for more precise boundary detection. The information we exploit is that a query's result reflects the structure of a relational database where all data is organized as tables and rows. It is therefore simple to check for similarity with previous results at two granularities: first the entire result, and then individual rows. The end of a row in a result serves as a natural chunk boundary. It is important to note that using the tabular structure in results only involves shallow interpretation of the data. Ganesh does not perform any deeper semantic interpretation such as understanding data types, result schema, or integrity constraints.

Figure 2: Rabin Fingerprinting vs. Ganesh's Chunking

Tuning Rabin fingerprinting for a workload can also be difficult. If the average chunk size is too large, chunks can span multiple result rows. However, selecting a smaller average chunk size increases the amount of metadata required to describe the results. This, in turn, would decrease the savings obtained via its use. Rabin fingerprinting also needs two computationally expensive passes over the data: once to determine chunk boundaries and again to generate cryptographic hashes for the chunks. Ganesh only needs a single pass for hash generation as the chunk boundaries are provided by the data's natural structure. The performance comparison in Section 6 shows that Ganesh's row-based algorithm outperforms Rabin fingerprinting. Given that previous work has already shown that Rabin fingerprinting performs better than gzip [26], we do not compare Ganesh to compression algorithms in this paper.

3.2 Transparency

The key factor influencing our design was the need for Ganesh to be completely transparent to all components of a typical eBusiness system: web servers, application servers, and database servers. Without this, Ganesh stands little chance of having a significant real-world impact. Requiring modifications to any of the above components would raise the barrier to entry of Ganesh into an existing system, and thus reduce its chances of adoption. Preserving transparency is simplified by the fact that Ganesh is purely a performance enhancement, not a functionality or usability enhancement.

We chose agent interposition as the architectural approach to realizing our goal. This approach relies on the existence of a compact programming interface that is already widely used by target software. It also relies on a mechanism to easily add new code without disrupting existing module structure. These conditions are easily met in our context because of the popularity of Java as the programming language for eBusiness systems. The Java Database Connectivity (JDBC) API [32] allows Java applications to access a wide variety of databases and even other tabular data repositories such as flat files. Access to these data sources is provided by JDBC drivers that translate between the JDBC API and the database communication mechanism. Figure 3 (a) shows how JDBC is typically used in an application. As the JDBC interface is standardized, one can substitute one JDBC driver for another without application modifications. The JDBC driver thus becomes the natural module to exploit for code interposition. As shown in Figure 3 (b), the native JDBC driver is replaced with a Ganesh JDBC driver that presents the same standardized interface.
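Because JDBC hides the driver behind a standard interface, interposing Ganesh amounts to loading a different driver class and pointing the JDBC URL at the proxy; the application logic itself does not change. The fragment below is purely illustrative: the Ganesh driver class name and URL prefix are invented for this sketch (the prototype's actual names are not given in the text), while the commented-out lines show the stock MySQL Connector/J configuration of that era.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class DriverSwapExample {
        public static void main(String[] args) throws Exception {
            // Native configuration: the vendor-supplied MySQL Connector/J driver.
            //   Class.forName("com.mysql.jdbc.Driver");
            //   String url = "jdbc:mysql://db.example.com/bboard";

            // Ganesh configuration: only the driver class and URL change.
            // (Both names below are hypothetical and serve only to show the swap.)
            Class.forName("ganesh.jdbc.GaneshDriver");
            String url = "jdbc:ganesh://proxy.example.com/bboard";

            try (Connection conn = DriverManager.getConnection(url, "user", "password");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT id, title FROM stories")) {
                while (rs.next()) {
                    // Application code is oblivious to whether rows were shipped
                    // verbatim or reconstructed from cached fragments.
                    System.out.println(rs.getInt("id") + ": " + rs.getString("title"));
                }
            }
        }
    }

In a typical deployment the driver class and URL live in a configuration file, so the swap needs no recompilation at all.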
The Ganesh driver maintains an in-memory cache of result fragments from previous queries and performs reassembly of results. At the database, we add a new process called the Ganesh proxy. This proxy, which can be shared by multiple front-end nodes, consists of two parts: code to detect similarity in result fragments and the original native JDBC driver that communicates with the database. The use of a proxy at the database makes Ganesh database-agnostic and simplifies prototyping and experimentation. Ganesh is thus able to work with a wide range of databases and applications, requiring no modifications to either.

Figure 3: Native vs. Ganesh Architecture

3.3 Proxy-Based Caching

The native JDBC driver shown in Figure 3 (a) is a lightweight code component supplied by the database vendor. Its main function is to mediate communication between the application and the remote database. It forwards queries, buffers entire results, and responds to application requests to view parts of results. The Ganesh JDBC driver shown in Figure 3 (b) presents the application with an interface identical to that provided by the native driver. It provides the ability to reconstruct results from compact hash-based descriptions sent by the proxy. To perform this reconstruction, the driver maintains an in-memory cache of recently received results. This cache is only used as a source of result fragments in reconstructing results. No attempt is made by the Ganesh driver or proxy to track database updates. The lack of cache consistency does not hurt correctness as a description of the results is always fetched from the proxy; at worst, there will be no performance benefit from using Ganesh. Stale data will simply be paged out of the cache over time.

The Ganesh proxy accesses the database via the native JDBC driver, which remains unchanged between Figures 3 (a) and (b). The database is thus completely unaware of the existence of the proxy. The proxy does not examine any queries received from the Ganesh driver but passes them to the native driver. Instead, the proxy is responsible for inspecting database output received from the native driver, detecting similar results, and generating hash-based encodings of these results whenever enough similarity is found. While this architecture does not decrease the load on a database, as mentioned earlier in Section 2.1, it is much easier to replicate databases for scalability in a LAN than in a WAN.

To generate a hash-based encoding, the proxy must be aware of what result fragments are available in the Ganesh driver's cache. One approach is to be optimistic, and to assume that all result fragments are available. This will result in the smallest possible initial transmission of a result. However, in cases where there is little overlap with previous results, the Ganesh driver will have to make many calls to the proxy during reconstruction to fetch missing result fragments. To avoid this situation, the proxy loosely tracks the state of the Ganesh driver's cache. Since both components are under our control, it is relatively simple to do this without resorting to gray-box techniques or explicit communication for maintaining cache coherence. Instead, the proxy simulates the Ganesh driver's cache management algorithm and uses this to maintain a list of hashes for which the Ganesh driver is likely to possess the result fragments. In case of mistracking, there will be no loss of correctness but there will be extra round-trip delays to fetch the missing fragments.
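One plausible way to realize this loose tracking is for the driver and the proxy to run identical cache bookkeeping over the fragment hashes, so the proxy can predict the driver's contents without any coherence traffic. The sketch below assumes a simple LRU policy with a fixed entry count; the text does not spell out the prototype's actual replacement policy or parameters, so treat the class and its details as illustrative only.

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Both the Ganesh driver and the Ganesh proxy could run an instance of this class.
    // The driver maps hash -> result fragment elsewhere; the proxy only needs the key
    // set, so a placeholder value suffices. As long as both sides record the same
    // sequence of fragment hashes, their eviction order stays identical.
    public class HashLruTracker {
        private final Map<String, Boolean> hashes;

        public HashLruTracker(final int capacity) {
            this.hashes = new LinkedHashMap<String, Boolean>(16, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<String, Boolean> eldest) {
                    return size() > capacity;   // evict least-recently-used entries past the capacity
                }
            };
        }

        // Called for every fragment hash sent to (or generated at) the driver.
        public void recordUse(String sha1Hex) {
            hashes.put(sha1Hex, Boolean.TRUE);
        }

        // The proxy consults this before deciding to ship a hash instead of data.
        public boolean likelyCached(String sha1Hex) {
            return hashes.containsKey(sha1Hex);
        }
    }

Because only recorded insertions drive the eviction order here, the two sides stay in step as long as they observe the same sequence of fragment hashes; any divergence merely costs extra fetches, as noted above.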
If the client detects loss of synchronization with the proxy, it can ask the proxy to reset the state shared between them. Also note that the proxy does not need to keep the result fragments themselves, only their hashes. This allows the proxy to remain scalable even when it is shared by many front-end nodes.

Figure 4: Dataflow for Result Handling

3.4 Encoding and Decoding Results

The Ganesh proxy receives database output as Java objects from the native JDBC driver. It examines this output to see if a Java object of type ResultSet is present. The JDBC interface uses this data type to store results of database queries. If a ResultSet object is found, it is shrunk as discussed below. All other Java objects are passed through unmodified.

As discussed in Section 3.1, the proxy uses the row boundaries defined in the ResultSet to partition it into fragments consisting of single result rows. All ResultSet objects are converted into objects of a new type called RecipeResultSet. We use the term "recipe" for this compact description of a database result because of its similarity to a file recipe in the CASPER file system [37]. The conversion replaces each result fragment that is likely to be present in the Ganesh driver's cache by a SHA-1 hash of that fragment. Previously unseen result fragments are retained verbatim. The proxy also retains hashes for the new result fragments as they will be present in the driver's cache in the future. Note that the proxy only caches hashes for result fragments and does not cache recipes.

The proxy constructs a RecipeResultSet by checking for similarity at the entire result and then the row level. If the entire result is predicted to be present in the Ganesh driver's cache, the RecipeResultSet is simply a single hash of the entire result. Otherwise, it contains hashes for those rows predicted to be present in that cache; all other rows are retained verbatim. If the proxy estimates an overall space savings, it will transmit the RecipeResultSet. Otherwise, the original ResultSet is transmitted.

The RecipeResultSet objects are transformed back into ResultSet objects by the Ganesh driver. Figure 4 illustrates ResultSet handling at both ends. Each SHA-1 hash found in a RecipeResultSet is looked up in the local cache of result fragments. On a hit, the hash is replaced by the corresponding fragment. On a miss, the driver contacts the Ganesh proxy to fetch the fragment. All previously unseen result fragments that were retained verbatim by the proxy are hashed and added to the result cache. There should be very few misses if the proxy has accurately tracked the Ganesh driver's cache state. A future optimization would be to batch the fetch of missing fragments. This would be valuable when there are many small missing fragments in a high-latency WAN. Once the transformation is complete, the fully reconstructed ResultSet object is passed up to the application.
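As a rough illustration of this per-row encoding, the sketch below walks a java.sql.ResultSet, hashes each row with SHA-1, and emits either the digest or the verbatim row depending on what the driver is predicted to hold. The real RecipeResultSet is a Java object that also handles whole-result hashes, typed column values, and serialization details; the string-based row serialization, separator character, and "H:"/"R:" tags here are inventions of this sketch, not the prototype's encoding.

    import java.security.MessageDigest;
    import java.sql.ResultSet;
    import java.sql.ResultSetMetaData;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Set;

    public class RecipeEncoder {

        // One entry per row: either "H:" + a SHA-1 hex digest (row predicted to be
        // cached at the driver) or "R:" + the row's text (shipped verbatim).
        public static List<String> encode(ResultSet rs, Set<String> hashesDriverLikelyHas)
                throws Exception {
            ResultSetMetaData meta = rs.getMetaData();
            int columns = meta.getColumnCount();
            MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
            List<String> recipe = new ArrayList<String>();

            while (rs.next()) {
                // Naive row serialization: concatenate column values with a separator.
                StringBuilder row = new StringBuilder();
                for (int c = 1; c <= columns; c++) {
                    row.append(rs.getString(c)).append('\u0001');
                }
                String rowText = row.toString();
                String digest = toHex(sha1.digest(rowText.getBytes("UTF-8")));

                if (hashesDriverLikelyHas.contains(digest)) {
                    recipe.add("H:" + digest);          // ship only the compact hash
                } else {
                    recipe.add("R:" + rowText);         // ship the row itself
                    hashesDriverLikelyHas.add(digest);  // the driver will cache it after this transfer
                }
            }
            return recipe;
        }

        private static String toHex(byte[] bytes) {
            StringBuilder sb = new StringBuilder(bytes.length * 2);
            for (byte b : bytes) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        }
    }

The driver-side decode is the mirror image: hashes are looked up in the local fragment cache and verbatim rows are hashed and inserted into it, as described above.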
4. EXPERIMENTAL VALIDATION

Three questions follow from the goals and design of Ganesh:

• First, can performance be improved significantly by exploiting similarity across database results?

• Second, how important is Ganesh's structural similarity detection relative to Rabin fingerprinting's similarity detection?

• Third, is the overhead of the proxy-based design acceptable?

Our evaluation answers these questions through controlled experiments with the Ganesh prototype. This section describes the benchmarks used, our evaluation procedure, and the experimental setup. Results of the experiments are presented in Sections 5, 6, and 7.

4.1 Benchmarks

Our evaluation is based on two benchmarks [18] that have been widely used by other researchers to evaluate various aspects of multi-tier [27] and eBusiness architectures [9]. The first benchmark, BBOARD, is modeled after Slashdot, a technology-oriented news site. The second benchmark, AUCTION, is modeled after eBay, an online auction site. In both benchmarks, most content is dynamically generated from information stored in a database. Details of the datasets used can be found in Table 1.

Table 1: Benchmark Dataset Details
BBOARD: 500,000 Users, 12,000 Stories, 3,298,000 Comments
AUCTION: 1,000,000 Users, 34,000 Items

4.1.1 The BBOARD Benchmark

The BBOARD benchmark, also known as RUBBoS [18], models Slashdot, a popular technology-oriented web site. Slashdot aggregates links to news stories and other topics of interest found elsewhere on the web. The site also serves as a bulletin board by allowing users to comment on the posted stories in a threaded conversation form. It is not uncommon for a story to gather hundreds of comments in a matter of hours. The BBOARD benchmark is similar to the site and models the activities of a user, including read-only operations such as browsing the stories of the day, browsing story categories, and viewing comments, as well as write operations such as new user registration, adding and moderating comments, and story submission.

The benchmark consists of three different phases: a short warm-up phase, a runtime phase representing the main body of the workload, and a short cool-down phase. In this paper we only report results from the runtime phase. The warm-up phase is important in establishing dynamic system state, but measurements from that phase are not significant for our evaluation. The cool-down phase is solely for allowing the benchmark to shut down. The warm-up, runtime, and cool-down phases are 2, 15, and 2 minutes respectively. The number of simulated clients was 400, 800, 1200, or 1600. The benchmark is available in Java Servlets and PHP versions and has different datasets; we evaluated Ganesh using the Java Servlets version and the Expanded dataset.

The BBOARD benchmark defines two different workloads. The first, the Authoring mix, consists of 70% read-only operations and 30% read-write operations. The second, the Browsing mix, contains only read-only operations and does not update the database.

4.1.2 The AUCTION Benchmark

The AUCTION benchmark, also known as RUBiS [18], models eBay, the online auction site. The eBay web site is used to buy and sell items via an auction format. The main activities of a user include browsing, selling, or bidding for items. Modeling the activities on this site, this benchmark includes read-only activities such as browsing items by category and by region, as well as read-write activities such as bidding for items, buying and selling items, and leaving feedback. As with BBOARD, the benchmark consists of three different phases. The warm-up, runtime, and cool-down phases for this experiment are 1.5, 15, and 1 minutes respectively. We tested Ganesh with four client configurations where the number of test clients was set to 400, 800, 1200, and 1600.
The benchmark is available in Enterprise Java Bean (EJB), Java Servlets, and PHP versions and has different datasets; we evaluated Ganesh with the Java Servlets version and the Expanded dataset. The AUCTION benchmark defines two different workloads. The first, the Bidding mix, consists of 70% read-only operations and 30% read-write operations. The second, the Browsing mix, contains only read-only operations and does not update the database.

4.2 Experimental Procedure

Both benchmarks involve a synthetic workload of clients accessing a web server. The number of clients emulated is an experimental parameter. Each emulated client runs an instance of the benchmark in its own thread, using a matrix to transition between different benchmark states. The matrix defines a stochastic model with probabilities of transitioning between the different states that represent typical user actions. An example transition is a user logging into the AUCTION system and then deciding on whether to post an item for sale or bid on active auctions. Each client also models user think time between requests. The think time is modeled as an exponential distribution with a mean of 7 seconds.

We evaluate Ganesh along two axes: number of clients and WAN bandwidth. Higher loads are especially useful in understanding Ganesh's performance when the CPU or disk of the database server or proxy is the limiting factor. A previous study has shown that approximately 50% of the wide-area Internet bottlenecks observed had an available bandwidth under 10 Mb/s [1]. Based on this work, we focus our evaluation on the WAN bandwidth of 5 Mb/s with 66 ms of round-trip latency, representative of severely constrained network paths, and 20 Mb/s with 33 ms of round-trip latency, representative of a moderately constrained network path. We also report Ganesh's performance at 100 Mb/s with no added round-trip latency. This bandwidth, representative of an unconstrained network, is especially useful in revealing any potential overhead of Ganesh in situations where WAN bandwidth is not the limiting factor.

For each combination of number of clients and WAN bandwidth, we measured results from the two configurations listed below:

• Native: This configuration corresponds to Figure 3 (a). Native avoids Ganesh's overhead in using a proxy and performing Java object serialization.

• Ganesh: This configuration corresponds to Figure 3 (b). For a given number of clients and WAN bandwidth, comparing these results to the corresponding Native results gives the performance benefit due to the Ganesh middleware system.

The metric used to quantify the improvement in throughput is the number of client requests that can be serviced per second. The metric used to quantify Ganesh's overhead is the average response time for a client request. For all of the experiments, the Ganesh driver used by the application server used a cache size of 100,000 items. The proxy was effective in tracking the Ganesh driver's cache state; for all of our experiments the miss rate on the driver never exceeded 0.7%.

4.3 Experimental Setup

The experimental setup used for the benchmarks can be seen in Figure 5. All machines were 3.2 GHz Pentium 4s (with Hyper-Threading enabled). With the exception of the database server, all machines had 2 GB of SDRAM and ran the Fedora Core Linux distribution. The database server had 4 GB of SDRAM. We used Apache's Tomcat as both the web server and the application server that hosted the Java Servlets. Both benchmarks used Java Servlets to generate the dynamic content. The database server used the open source MySQL database. For the native JDBC drivers, we used the Connector/J drivers provided by MySQL.

Figure 5: Experimental Setup
The application server used Sun's Java Virtual Machine as the runtime environment for the Java Servlets. The sysstat tool was used to monitor the CPU, network, disk, and memory utilization on all machines. The machines were connected by a switched gigabit Ethernet network. As shown in Figure 5, the front-end web and application server was separated from the proxy and database server by a NetEm router [16]. This router allowed us to control the bandwidth and latency settings on the network. The NetEm router is a standard PC with two network cards running the Linux Traffic Control and Network Emulation software. The bandwidth and latency constraints were only applied to the link between the application server and the database for the native case and between the application server and the proxy for the Ganesh case. There is no communication between the application server and the database with Ganesh as all data flows through the proxy. As our focus was on the WAN link between the application server and the database, there were no constraints on the link between the simulated test clients and the web server.

5. THROUGHPUT AND RESPONSE TIME

In this section, we address the first question raised in Section 4: Can performance be improved significantly by exploiting similarity across database results? To answer this question, we use results from the BBOARD and AUCTION benchmarks. We use two metrics to quantify the performance improvement obtainable through the use of Ganesh: throughput, from the perspective of the web server, and average response time, from the perspective of the client. Throughput is measured in terms of the number of client requests that can be serviced per second.

5.1 BBOARD Results and Analysis

Figure 6: BBOARD Benchmark - Throughput and Average Response Time

5.1.1 Authoring Mix

Figures 6 (a) and (b) present the average number of requests serviced per second and the average response time for these requests as perceived by the clients for BBOARD's Authoring Mix. As Figure 6 (a) shows, Native easily saturates the 5 Mb/s link. At 400 clients, the Native solution delivers 29 requests/sec with an average response time of 8.3 seconds. Native's throughput drops with an increase in test clients as clients time out due to congestion at the application server. Usability studies have shown that response times above 10 seconds cause the user to move on to other tasks [24]. Based on these numbers, increasing the number of test clients makes the Native system unusable. Ganesh at 5 Mb/s, however, delivers a twofold improvement with 400 test clients and a fivefold improvement at 1200 clients. Ganesh's performance drops slightly at 1200 and 1600 clients as the network is saturated. Compared to Native, Figure 6 (b) shows that Ganesh's response times are substantially lower, with sub-second response times at 400 clients.

Figure 6 (a) also shows that for 400 and 800 test clients Ganesh at 5 Mb/s has the same throughput and average response time as Native at 20 Mb/s. Only at 1200 and 1600 clients does Native at 20 Mb/s deliver higher throughput than Ganesh at 5 Mb/s. Comparing both Ganesh and Native at 20 Mb/s, we see that Ganesh is no longer bandwidth constrained and delivers up to a twofold improvement over Native at 1600 test clients. As Ganesh does not saturate the network with higher test client configurations, at 1600 test clients, its average response time is 0.1 seconds rather than Native's 7.7 seconds.
As expected, there are no visible gains from Ganesh at the higher bandwidth of 100 Mb/s where the network is no longer the bottleneck. Ganesh, however, still tracks Native in terms of throughput.

5.1.2 Browsing Mix

Figures 6 (c) and (d) present the average number of requests serviced per second and the average response time for these requests as perceived by the clients for BBOARD's Browsing Mix. Regardless of the test client configuration, Figure 6 (c) shows that Native's throughput at 5 Mb/s is limited to 10 reqs/sec. Ganesh at 5 Mb/s with 400 test clients delivers more than a sixfold increase in throughput. The improvement grows to over elevenfold at 800 test clients before Ganesh saturates the network. Further, Figure 6 (d) shows that Native's average response time of 35 seconds at 400 test clients makes the system unusable. These high response times further increase with the addition of test clients. Even with the 1600 test client configuration Ganesh delivers an acceptable average response time of 8.2 seconds.

Due to the data-intensive nature of the Browsing mix, Ganesh at 5 Mb/s surprisingly performs much better than Native at 20 Mb/s. Further, as shown in Figure 6 (d), while the average response time for Native at 20 Mb/s is acceptable at 400 test clients, it is unusable with 800 test clients with an average response time of 15.8 seconds. Like the 5 Mb/s case, this response time increases with the addition of extra test clients. Ganesh at 20 Mb/s and both Native and Ganesh at 100 Mb/s are not bandwidth limited. However, performance plateaus after 1200 test clients due to the database CPU being saturated.

5.1.3 Filter Variant

We were surprised by the Native performance from the BBOARD benchmark. At the bandwidth of 5 Mb/s, Native performance was lower than what we had expected. It turned out that the benchmark code that displays stories read all the comments associated with the particular story from the database and only then did some post-processing to select the comments to be displayed. While this is exactly the behavior of SlashCode, the code base behind the Slashdot web site, we decided to modify the benchmark to perform some pre-filtering at the database. This modified benchmark, named the Filter Variant, models a developer who applies optimizations at the SQL level to transfer less data. In the interests of brevity, we only briefly summarize the results from the Authoring mix.

Figure 7: BBOARD Benchmark - Filter Variant - Throughput and Average Response Time

For the Authoring mix, at 800 test clients at 5 Mb/s, Figure 7 (a) shows that Native's throughput increases by 85% when compared to the original benchmark while Ganesh's improvement is smaller at 15%. Native's performance drops above 800 clients as the test clients time out due to high response times. The most significant gain for Native is seen at 20 Mb/s. At 1600 test clients, when compared to the original benchmark, Native sees a 73% improvement in throughput and a 77% reduction in average response time. While Ganesh sees no improvement when compared to the original, it still processes 19% more requests/sec than Native. Thus, while the optimizations were more helpful to Native, Ganesh still delivers an improvement in performance.

5.2 AUCTION Results and Analysis

5.2.1 Bidding Mix

Figures 8 (a) and (b) present the average number of requests serviced per second and the average response time for these requests as perceived by the clients for AUCTION's Bidding Mix.
As mentioned earlier, the Bidding mix consists of a mixture of read and write operations. The AUCTION benchmark is not as data-intensive as BBOARD. Therefore, most of the gains are observed at the lower bandwidth of 5 Mb/s. Figure 8 (a) shows that the increase in throughput due to Ganesh ranges from 8% at 400 test clients to 18% with 1600 test clients. As seen in Figure 8 (b), the average response times for Ganesh are significantly lower than Native's, ranging from a decrease of 84% at 800 test clients to 88% at 1600 test clients. Figure 8 (a) also shows that with a fourfold increase of bandwidth from 5 Mb/s to 20 Mb/s, Native is no longer bandwidth constrained and there is no performance difference between Ganesh and Native. With the higher test client configurations, we did observe that the bandwidth used by Ganesh was lower than Native's. Ganesh might still be useful in these non-constrained scenarios if bandwidth is purchased on a metered basis. Similar results are seen for the 100 Mb/s scenario.

5.2.2 Browsing Mix

For AUCTION's Browsing Mix, Figures 8 (c) and (d) present the average number of requests serviced per second and the average response time for these requests as perceived by the clients. Again, most of the gains are observed at lower bandwidths. At 5 Mb/s, Native and Ganesh deliver similar throughput and response times with 400 test clients. While the throughput for both remains the same at 800 test clients, Figure 8 (d) shows that Ganesh's average response time is 62% lower than Native's. Native saturates the link at 800 clients and adding extra test clients only increases the average response time. Ganesh, regardless of the test client configuration, is not bandwidth constrained and maintains the same response time. At 1600 test clients, Figure 8 (c) shows that Ganesh's throughput is almost twice that of Native. At the higher bandwidths of 20 and 100 Mb/s, neither Ganesh nor Native is bandwidth limited, and they deliver equivalent throughput and response times.

6. STRUCTURAL VS. RABIN SIMILARITY

In this section, we address the second question raised in Section 4: How important is Ganesh's structural similarity detection relative to Rabin fingerprinting-based similarity detection? To answer this question, we used microbenchmarks and the BBOARD and AUCTION benchmarks. As Ganesh always performed better than Rabin fingerprinting, we only present a subset of the results here in the interests of brevity.

6.1 Microbenchmarks

Table 2: Similarity Microbenchmarks

Two microbenchmarks show an example of the effects of data reordering on the Rabin fingerprinting algorithm. In the first microbenchmark, SelectSort1, a query with a specified sort order selects 223.6 MB of data spread over approximately 280 K rows. The query is then repeated with a different sort attribute. While the same number of rows and the same data is returned, the order of rows is different. In such a scenario, one would expect a large amount of similarity to be detected between both results. As Table 2 shows, Ganesh's row-based algorithm achieves a 97.6% reduction while the Rabin fingerprinting algorithm, with the average chunk size parameter set to 4 KB, only achieves a 1% reduction. The reason, as shown earlier in Figure 2, is that with Rabin fingerprinting, the spans of data between two consecutive boundaries usually cross row boundaries. With the order of the rows changing in the second result and the Rabin fingerprints now spanning different rows, the algorithm is unable to detect significant similarity.
The small gain seen is mostly for those single rows that are large enough to be broken into multiple chunks. SelectSort2, another microbenchmark, executed the same queries but increased the minimum chunk size of the Rabin fingerprinting algorithm. As can be seen in Table 2, even the small gain from the previous microbenchmark disappears as the minimum chunk size was greater than the average row size. While one can partially address these problems by dynamically varying the parameters of the Rabin fingerprinting algorithm, this can be computationally expensive, especially in the presence of changing workloads.

Figure 8: AUCTION Benchmark - Throughput and Average Response Time ((c) Throughput: Browsing Mix; (d) Response Time: Browsing Mix; mean of three trials; the maximum standard deviation for throughput and response time was 2.2% and 11.8% of the corresponding mean)

6.2 Application Benchmarks

We ran the BBOARD benchmark described in Section 4.1.1 on two versions of Ganesh: the first with Rabin fingerprinting used as the chunking algorithm and the second with Ganesh's row-based algorithm. Rabin's results for the Browsing Mix are normalized to Ganesh's results and presented in Figure 9. As Figure 9 (a) shows, at 5 Mb/s, independent of the test client configuration, Rabin significantly underperforms Ganesh. This happens because of a combination of two reasons. First, as outlined in Section 3.1, Rabin finds less similarity as it does not exploit the result's structural information. Second, this benchmark contained some queries that generated large results. In this case, Rabin, with a small average chunk size, generated a large number of objects that evicted other useful data from the cache. In contrast, Ganesh was able to detect these large rows and correspondingly increase the size of the chunks. This was confirmed as cache statistics showed that Ganesh's hit ratio was roughly three times that of Rabin.

Throughput measurements at 20 Mb/s were similar with the exception of Rabin's performance with 400 test clients. In this case, Ganesh was not network limited and, in fact, the throughput was the same as 400 clients at 5 Mb/s. Rabin, however, took advantage of the bandwidth increase from 5 to 20 Mb/s to deliver slightly better performance. At 100 Mb/s, Rabin's throughput was nearly identical to Ganesh's as bandwidth was no longer a bottleneck.

The normalized response time, presented in Figure 9 (b), shows similar trends. At 5 and 20 Mb/s, the addition of test clients decreases the normalized response time as Ganesh's average response time increases faster than Rabin's. However, at no point does Rabin outperform Ganesh. Note that at 400 and 800 clients at 100 Mb/s, Rabin does have a higher overhead even when it is not bandwidth constrained. As mentioned in Section 3.1, this is due to the fact that Rabin has to hash each ResultSet twice. The overhead disappears with 1200 and 1600 clients as the database CPU is saturated and limits the performance of both Ganesh and Rabin.

7. PROXY OVERHEAD

In this section, we address the third question raised in Section 4: Is the overhead of Ganesh's proxy-based design acceptable? To answer this question, we concentrate on its performance at the higher bandwidths. Our evaluation in Section 5 showed that Ganesh, when compared to Native, can deliver a substantial throughput improvement at lower bandwidths.
It is only at higher bandwidths that any overhead in latency, measured by the average response time for a client request, or in throughput, measured by the number of client requests that can be serviced per second, would become visible. Looking at the Authoring mix of the original BBOARD benchmark, there are no visible gains from Ganesh at 100 Mb/s. Ganesh, however, still tracks Native in terms of throughput. While the average response time is higher for Ganesh, the absolute difference is between 0.01 and 0.04 seconds and would be imperceptible to the end-user. The Browsing mix shows an even smaller difference in average response times. The results from the filter variant of the BBOARD benchmark are similar. Even for the AUCTION benchmark, the difference between Native and Ganesh's response time at 100 Mb/s was never greater than 0.02 seconds. The only exception to the above results was seen in the filter variant of the BBOARD benchmark where Ganesh at 1600 test clients added 0.85 seconds to the average response time. Thus, even for much faster networks where the WAN link is not the bottleneck, Ganesh always delivers throughput equivalent to Native. While some extra latency is added by the proxy-based design, it is usually imperceptible.

Figure 9: Normalized Comparison of Ganesh vs. Rabin - BBOARD Browsing Mix (for throughput, a normalized result greater than 1 implies that Rabin is better; for response time, a normalized result greater than 1 implies that Ganesh is better; mean of three trials; the maximum standard deviation for throughput and response time was 9.1% and 13.9% of the corresponding mean)

8. RELATED WORK

To the best of our knowledge, Ganesh is the first system that combines the use of hash-based techniques with caching of database results to improve throughput and response times for applications with dynamic content. We also believe that it is the first system to demonstrate the benefits of using structural information for detecting similarity. In this section, we first discuss alternative approaches to caching dynamic content and then examine other uses of hash-based primitives in distributed systems.

8.1 Caching Dynamic Content

At the database layer, a number of systems have advocated middle-tier caching where parts of the database are replicated at the edge or server [3, 4, 20]. These systems either cache entire tables in what is essentially a replicated database or use materialized views from previous query replies [19]. They require tight integration with the back-end database to ensure a time bound on the propagation of updates. These systems are also usually targeted towards workloads that do not require strict consistency and can tolerate stale data. Further, unlike Ganesh, some of these mid-tier caching solutions [2, 3] suffer from the complexity of having to participate in query planning and distributed query processing. Gao et al. [15] propose using a distributed object replication architecture where the data store's consistency requirements are adapted on a per-application basis. These solutions require substantial developer resources and detailed understanding of the application being modified. While systems that attempt to automate the partitioning and replication of an application's database exist [34], they do not provide full transaction semantics. In comparison, Ganesh does not weaken any of the semantics provided by the underlying database.
Recent work in the evaluation of edge caching options for dynamic web sites [38] has suggested that, without careful planning, employing complex offloading strategies can hurt performance. Instead, the work advocates an architecture in which all tiers except the database should be offloaded to the edge. Our evaluation of Ganesh has shown that it would benefit these scenarios. To improve database scalability, C-JDBC [10], SSS [22], and Ganymed [28] also advocate the use of an interposition-based architecture to transparently cluster and replicate databases at the middleware level. The approaches of these architectures and Ganesh are complementary and they would benefit each other.

Moving up to the presentation layer, there has been widespread adoption of fragment-based caching [14], which improves cache utilization by separately caching different parts of generated web pages. While fragment-based caching works at the edge, a recent proposal moves web page assembly to the clients to optimize content delivery [31]. While Ganesh is not used at the presentation layer, the same principles have been applied in Duplicate Transfer Detection [25] to increase web cache efficiency as well as for web access across bandwidth-limited links [33].

8.2 Hash-based Systems

The past few years have seen the emergence of many systems that exploit hash-based techniques. At the heart of all these systems is the idea of detecting similarity in data without requiring interpretation of that data. This simple yet elegant idea relies on cryptographic hashing, as discussed earlier in Section 2. Successful applications of this idea span a wide range of storage systems. Examples include peer-to-peer backup of personal computing files [11], storage-efficient archiving of data [29], and finding similar files [21]. Spring and Wetherall [35] apply similar principles at the network level. Using synchronized caches at both ends of a network link, duplicated data is replaced by smaller tokens for transmission and then restored at the remote end. This and other hash-based systems such as the CASPER [37] and LBFS [26] filesystems, and Layer-2 bandwidth optimizers such as Riverbed and Peribit, use Rabin fingerprinting [30] to discover spans of commonality in data. This approach is especially useful when data items are modified in-place through insertions, deletions, and updates. However, as Section 6 shows, the performance of this technique can show a dramatic drop in the presence of data reordering. Ganesh instead uses row boundaries as dividers for detecting similarity.

The most aggressive use of hash-based techniques is by systems that use hashes as the primary identifiers for objects in persistent storage. Storage systems such as CFS [12] and PAST [13] that have been built using distributed hash tables fall into this category. Single Instance Storage [6] and Venti [29] are other examples of such systems. As discussed in Section 2.2, the use of cryptographic hashes for addressing persistent data represents a deeper level of faith in their collision-resistance than that assumed by Ganesh. If time reveals shortcomings in the hash algorithm, the effort involved in correcting the flaw is much greater. In Ganesh, it is merely a matter of replacing the hash algorithm.

9. CONCLUSION

The growing use of dynamic web content generated from relational databases places increased demands on WAN bandwidth. Traditional caching solutions for bandwidth and latency reduction are often ineffective for such content.
This paper shows that the impact of WAN accesses to databases can be substantially reduced through the Ganesh architecture without any compromise of the database's strict consistency semantics. The essence of the Ganesh architecture is the use of computation at the edges to reduce communication through the Internet. Ganesh is able to use cryptographic hashes to detect similarity with previous results and send compact recipes of results rather than full results. Our design uses interposition to achieve complete transparency: clients, application servers, and database servers are all unaware of Ganesh's presence and require no modification. Our experimental evaluation confirms that Ganesh, while conceptually simple, can be highly effective in improving throughput and response time. Our results also confirm that exploiting the structure present in database results to detect similarity is crucial to this performance improvement.
Consistency-preserving Caching of Dynamic Database Content * ABSTRACT With the growing use of dynamic web content generated from relational databases, traditional caching solutions for throughput and latency improvements are ineffective. We describe a middleware layer called Ganesh that reduces the volume of data transmitted without semantic interpretation of queries or results. It achieves this reduction through the use of cryptographic hashing to detect similarities with previous results. These benefits do not require any compromise of the strict consistency semantics provided by the back-end database. Further, Ganesh does not require modifications to applications, web servers, or database servers, and works with closed-source applications and databases. Using two benchmarks representative of dynamic web sites, measurements of our prototype show that it can increase end-to-end throughput by as much as twofold for non-data intensive applications and by as much as tenfold for data intensive ones. 1. INTRODUCTION An increasing fraction of web content is dynamically generated from back-end relational databases. Even when database content remains unchanged, temporal locality of access cannot be exploited because dynamic content is not cacheable by web browsers or by intermediate caching servers such as Akamai mirrors. In a multitiered architecture, each web request can stress the WAN link between the web server and the database. This causes user experience to be highly variable because there is no caching to insu * This research was supported by the National Science Foundation (NSF) under grant number CCR-0205266. Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF or Carnegie Mellon University. late the client from bursty loads. Previous attempts in caching dynamic database content have generally weakened transactional semantics [3, 4] or required application modifications [15, 34]. We report on a new solution that takes the form of a databaseagnostic middleware layer called Ganesh. Ganesh makes no effort to semantically interpret the contents of queries or their results. Instead, it relies exclusively on cryptographic hashing to detect similarities with previous results. Hash-based similarity detection has seen increasing use in distributed file systems [26, 36, 37] for improving performance on low-bandwidth networks. However, these techniques have not been used for relational databases. Unlike previous approaches that use generic methods to detect similarity, Ganesh exploits the structure of relational database results to yield superior performance improvement. One faces at least three challenges in applying hash-based similarity detection to back-end databases. First, previous work in this space has traditionally viewed storage content as uninterpreted bags of bits with no internal structure. This allows hash-based techniques to operate on long, contiguous runs of data for maximum effectiveness. In contrast, relational databases have rich internal structure that may not be as amenable to hash-based similarity detection. Second, relational databases have very tight integrity and consistency constraints that must not be compromised by the use of hash-based techniques. Third, the source code of commercial databases is typically not available. This is in contrast to previous work which presumed availability of source code. 
Our experiments show that Ganesh, while conceptually simple, can improve performance significantly at bandwidths representative of today's commercial Internet. On benchmarks modeling multitiered web applications, the throughput improvement was as high as tenfold for data-intensive workloads. For workloads that were not data-intensive, throughput improvements of up to twofold were observed. Even when bandwidth was not a constraint, Ganesh had low overhead and did not hurt performance. Our experiments also confirm that exploiting the structure present in database results is crucial to this performance improvement. 2. BACKGROUND 2.1 Dynamic Content Generation As the World Wide Web has grown, many web sites have decentralized their data and functionality by pushing them to the edges of the Internet. Today, eBusiness systems often use a three-tiered architecture consisting of a front-end web server, an application server, and a back-end database server. Figure 1 illustrates this architecture. The first two tiers can be replicated close to a concentration of clients at the edge of the Internet. This improves user experience by lowering end-to-end latency and reducing exposure Figure 1: Multi-Tier Architecture to backbone traffic congestion. It can also increase the availability and scalability of web services. Content that is generated dynamically from the back-end database cannot be cached in the first two tiers. While databases can be easily replicated in a LAN, this is infeasible in a WAN because of the difficult task of simultaneously providing strong consistency, availability, and tolerance to network partitions [7]. As a result, databases tend to be centralized to meet the strong consistency requirements of many eBusiness applications such as banking, finance, and online retailing [38]. Thus, the back-end database is usually located far from many sets of first and second-tier nodes [2]. In the absence of both caching and replication, WAN bandwidth can easily become a limiting factor in the performance and scalability of data-intensive applications. 2.2 Hash-Based Systems Ganesh's focus is on efficient transmission of results by discovering similarities with the results of previous queries. As SQL queries can generate large results, hash-based techniques lend themselves well to the problem of efficiently transferring these large results across bandwidth constrained links. The use of hash-based techniques to reduce the volume of data transmitted has emerged as a common theme of many recent storage systems, as discussed in Section 8.2. These techniques rely on some basic assumptions. Cryptographic hash functions are assumed to be collision-resistant. In other words, it is computationally intractable to find two inputs that hash to the same output. The functions are also assumed to be one-way; that is, finding an input that results in a specific output is computationally infeasible. Menezes et al. [23] provide more details about these assumptions. The above assumptions allow hash-based systems to assume that collisions do not occur. Hence, they are able to treat the hash of a data item as its unique identifier. A collection of data items effectively becomes content-addressable, allowing a small hash to serve as a codeword for a much larger data item in permanent storage or network transmission. The assumption that collisions are so rare as to be effectively non-existent has recently come under fire [17]. 
However, as explained by Black [5], we believe that these issues do not form a concern for Ganesh. All communication is between trusted parts of the system and an adversary has no way to force Ganesh to accept invalid data. Further, Ganesh does not depend critically on any specific hash function. While we currently use SHA-1, replacing it with a different hash function would be simple. There would be no impact on performance as stronger hash functions (e.g. SHA256) only add a few extra bytes and the generated hashes are still orders of magnitude smaller than the data items they represent. No re-hashing of permanent storage is required since Ganesh only uses hashing on volatile data. 3. DESIGN AND IMPLEMENTATION 3.1 Detecting Similarity 3.2 Transparency 3.3 Proxy-Based Caching 3.4 Encoding and Decoding Results 4. EXPERIMENTAL VALIDATION 4.1 Benchmarks 4.1.1 The BBOARD Benchmark 4.1.2 The AUCTION Benchmark 4.2 Experimental Procedure 4.3 Experimental Setup 5. THROUGHPUT AND RESPONSE TIME 5.1 BBOARD Results and Analysis 5.1.1 Authoring Mix 5.1.2 Browsing Mix 5.1.3 Filter Variant 5.2 AUCTION Results and Analysis 5.2.1 Bidding Mix 5.2.2 Browsing Mix 6. STRUCTURAL VS. RABIN SIMILARITY 6.1 Microbenchmarks 6.2 Application Benchmarks 7. PROXY OVERHEAD 8. RELATED WORK To the best of our knowledge, Ganesh is the first system that combines the use of hash-based techniques with caching of database results to improve throughput and response times for applications with dynamic content. We also believe that it is also the first system to demonstrate the benefits of using structural information for For throughput, a normalized result greater than 1 implies that Rabin is better, For response time, a normalized result greater than 1 implies that Ganesh is better. Mean of three trials. The maximum standard deviation for throughput and response time was 9.1% and 13.9% of the corresponding mean. Figure 9: Normalized Comparison of Ganesh vs. Rabin - BBOARD Browsing Mix detecting similarity. In this section, we first discuss alternative approaches to caching dynamic content and then examine other uses of hash-based primitives in distributed systems. 8.1 Caching Dynamic Content At the database layer, a number of systems have advocated middletier caching where parts of the database are replicated at the edge or server [3, 4, 20]. These systems either cache entire tables in what is essentially a replicated database or use materialized views from previous query replies [19]. They require tight integration with the back-end database to ensure a time bound on the propagation of updates. These systems are also usually targeted towards workloads that do not require strict consistency and can tolerate stale data. Further, unlike Ganesh, some of these mid-tier caching solutions [2, 3], suffer from the complexity of having to participate in query planing and distributed query processing. Gao et al. [15] propose using a distributed object replication architecture where the data store's consistency requirements are adapted on a per-application basis. These solutions require substantial developer resources and detailed understanding of the application being modified. While systems that attempt to automate the partitioning and replication of an application's database exist [34], they do not provide full transaction semantics. In comparison, Ganesh does not weaken any of the semantics provided by the underlying database. 
Recent work in the evaluation of edge caching options for dynamic web sites [38] has suggested that, without careful planning, employing complex offloading strategies can hurt performance. Instead, the work advocates for an architecture in which all tiers except the database should be offloaded to the edge. Our evaluation of Ganesh has shown that it would benefit these scenarios. To improve database scalability, C-JDBC [10], SSS [22], and Ganymed [28] also advocate the use of an interposition-based architecture to transparently cluster and replicate databases at the middleware level. The approaches of these architectures and Ganesh are complementary and they would benefit each other. Moving up to the presentation layer, there has been widespread adoption of fragment-based caching [14], which improves cache utilization by separately caching different parts of generated web pages. While fragment-based caching works at the edge, a recent proposal has proposed moving web page assembly to the clients to optimize content delivery [31]. While Ganesh is not used at the presentation layer, the same principles have been applied in Duplicate Transfer Detection [25] to increase web cache efficiency as well as for web access across bandwidth limited links [33]. 8.2 Hash-based Systems The past few years have seen the emergence of many systems that exploit hash-based techniques. At the heart of all these systems is the idea of detecting similarity in data without requiring interpretation of that data. This simple yet elegant idea relies on cryptographic hashing, as discussed earlier in Section 2. Successful applications of this idea span a wide range of storage systems. Examples include peer-to-peer backup of personal computing files [11], storage-efficient archiving of data [29], and finding similar files [21]. Spring and Wetherall [35] apply similar principles at the network level. Using synchronized caches at both ends of a network link, duplicated data is replaced by smaller tokens for transmission and then restored at the remote end. This and other hash-based systems such as the CASPER [37] and LBFS [26] filesystems, and Layer-2 bandwidth optimizers such as Riverbed and Peribit use Rabin fingerprinting [30] to discover spans of commonality in data. This approach is especially useful when data items are modified in-place through insertions, deletions, and updates. However, as Section 6 shows, the performance of this technique can show a dramatic drop in the presence of data reordering. Ganesh instead uses row boundaries as dividers for detecting similarity. The most aggressive use of hash-based techniques is by systems that use hashes as the primary identifiers for objects in persistent storage. Storage systems such as CFS [12] and PAST [13] that have been built using distributed hash tables fall into this category. Single Instance Storage [6] and Venti [29] are other examples of such systems. As discussed in Section 2.2, the use of cryptographic hashes for addressing persistent data represents a deeper level of faith in their collision-resistance than that assumed by Ganesh. If time reveals shortcomings in the hash algorithm, the effort involved in correcting the flaw is much greater. In Ganesh, it is merely a matter of replacing the hash algorithm. 9. CONCLUSION The growing use of dynamic web content generated from relational databases places increased demands on WAN bandwidth. Traditional caching solutions for bandwidth and latency reduction are often ineffective for such content. 
This paper shows that the impact of WAN accesses to databases can be substantially reduced through the Ganesh architecture without any compromise of the database's strict consistency semantics. The essence of the Ganesh architecture is the use of computation at the edges to reduce communication through the Internet. Ganesh is able to use cryptographic hashes to detect similarity with previous results and send compact recipes of results rather than full results. Our design uses interposition to achieve complete transparency: clients, application servers, and database servers are all unaware of Ganesh's presence and require no modification. Our experimental evaluation confirms that Ganesh, while conceptually simple, can be highly effective in improving throughput and response time. Our results also confirm that exploiting the structure present in database results to detect similarity is crucial to this performance improvement.
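The "recipe" idea mentioned in the conclusion can be sketched as follows. This is a simplification for illustration, not Ganesh's wire format or API; the bookkeeping of which hashes the edge proxy holds is assumed away by sharing a simple in-memory set.

```python
# Minimal sketch of sending "recipes" instead of full results: rows the proxy
# already holds are replaced by their hashes, and the proxy reassembles the
# exact result locally. All names and data structures here are illustrative.
import hashlib

def row_hash(row):
    return hashlib.sha1(repr(row).encode()).digest()

def encode(result_rows, known_hashes):
    """Database-side proxy: build a compact recipe for the result."""
    recipe = []
    for row in result_rows:
        key = row_hash(row)
        if key in known_hashes:
            recipe.append(("ref", key))      # 20-byte reference instead of the row
        else:
            recipe.append(("lit", row))      # a new row crosses the WAN once
            known_hashes.add(key)
    return recipe

def decode(recipe, row_cache):
    """Edge-side proxy: reconstruct exactly the rows the database produced."""
    rows = []
    for kind, payload in recipe:
        if kind == "lit":
            row_cache[row_hash(payload)] = payload
            rows.append(payload)
        else:
            rows.append(row_cache[payload])  # cache hit: no row was shipped
    return rows

known, cache = set(), {}
print(decode(encode([("alice", 1), ("bob", 2)], known), cache))
print(decode(encode([("alice", 1), ("carol", 3)], known), cache))  # ("alice", 1) is not resent
```

Because the recipe preserves row order and content exactly, the reconstructed result is identical to what the database returned, which is consistent with the paper's claim that strict consistency is unaffected.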
Consistency-preserving Caching of Dynamic Database Content * ABSTRACT With the growing use of dynamic web content generated from relational databases, traditional caching solutions for throughput and latency improvements are ineffective. We describe a middleware layer called Ganesh that reduces the volume of data transmitted without semantic interpretation of queries or results. It achieves this reduction through the use of cryptographic hashing to detect similarities with previous results. These benefits do not require any compromise of the strict consistency semantics provided by the back-end database. Further, Ganesh does not require modifications to applications, web servers, or database servers, and works with closed-source applications and databases. Using two benchmarks representative of dynamic web sites, measurements of our prototype show that it can increase end-to-end throughput by as much as twofold for non-data intensive applications and by as much as tenfold for data intensive ones. 1. INTRODUCTION An increasing fraction of web content is dynamically generated from back-end relational databases. Even when database content remains unchanged, temporal locality of access cannot be exploited because dynamic content is not cacheable by web browsers or by intermediate caching servers such as Akamai mirrors. In a multitiered architecture, each web request can stress the WAN link between the web server and the database. Previous attempts in caching dynamic database content have generally weakened transactional semantics [3, 4] or required application modifications [15, 34]. We report on a new solution that takes the form of a databaseagnostic middleware layer called Ganesh. Ganesh makes no effort to semantically interpret the contents of queries or their results. Instead, it relies exclusively on cryptographic hashing to detect similarities with previous results. Hash-based similarity detection has seen increasing use in distributed file systems [26, 36, 37] for improving performance on low-bandwidth networks. However, these techniques have not been used for relational databases. Unlike previous approaches that use generic methods to detect similarity, Ganesh exploits the structure of relational database results to yield superior performance improvement. One faces at least three challenges in applying hash-based similarity detection to back-end databases. First, previous work in this space has traditionally viewed storage content as uninterpreted bags of bits with no internal structure. This allows hash-based techniques to operate on long, contiguous runs of data for maximum effectiveness. In contrast, relational databases have rich internal structure that may not be as amenable to hash-based similarity detection. Second, relational databases have very tight integrity and consistency constraints that must not be compromised by the use of hash-based techniques. Third, the source code of commercial databases is typically not available. This is in contrast to previous work which presumed availability of source code. Our experiments show that Ganesh, while conceptually simple, can improve performance significantly at bandwidths representative of today's commercial Internet. On benchmarks modeling multitiered web applications, the throughput improvement was as high as tenfold for data-intensive workloads. Even when bandwidth was not a constraint, Ganesh had low overhead and did not hurt performance. 
Our experiments also confirm that exploiting the structure present in database results is crucial to this performance improvement. 2. BACKGROUND 2.1 Dynamic Content Generation Today, eBusiness systems often use a three-tiered architecture consisting of a front-end web server, an application server, and a back-end database server. Figure 1 illustrates this architecture. The first two tiers can be replicated close to a concentration of clients at the edge of the Internet. This improves user experience by lowering end-to-end latency and reducing exposure [...]. [Figure 1: Multi-Tier Architecture] It can also increase the availability and scalability of web services. Content that is generated dynamically from the back-end database cannot be cached in the first two tiers. As a result, databases tend to be centralized to meet the strong consistency requirements of many eBusiness applications such as banking, finance, and online retailing [38]. Thus, the back-end database is usually located far from many sets of first and second-tier nodes [2]. In the absence of both caching and replication, WAN bandwidth can easily become a limiting factor in the performance and scalability of data-intensive applications. 2.2 Hash-Based Systems Ganesh's focus is on efficient transmission of results by discovering similarities with the results of previous queries. As SQL queries can generate large results, hash-based techniques lend themselves well to the problem of efficiently transferring these large results across bandwidth constrained links. The use of hash-based techniques to reduce the volume of data transmitted has emerged as a common theme of many recent storage systems, as discussed in Section 8.2. These techniques rely on some basic assumptions. Cryptographic hash functions are assumed to be collision-resistant. In other words, it is computationally intractable to find two inputs that hash to the same output. The functions are also assumed to be one-way; that is, finding an input that results in a specific output is computationally infeasible. Menezes et al. [23] provide more details about these assumptions. The above assumptions allow hash-based systems to assume that collisions do not occur. Hence, they are able to treat the hash of a data item as its unique identifier. A collection of data items effectively becomes content-addressable, allowing a small hash to serve as a codeword for a much larger data item in permanent storage or network transmission. However, as explained by Black [5], we believe that these issues do not form a concern for Ganesh. All communication is between trusted parts of the system and an adversary has no way to force Ganesh to accept invalid data. Further, Ganesh does not depend critically on any specific hash function. While we currently use SHA-1, replacing it with a different hash function would be simple. No re-hashing of permanent storage is required since Ganesh only uses hashing on volatile data. 8. RELATED WORK To the best of our knowledge, Ganesh is the first system that combines the use of hash-based techniques with caching of database results to improve throughput and response times for applications with dynamic content. We also believe that it is the first system to demonstrate the benefits of using structural information for detecting similarity. [Figure 9: Normalized Comparison of Ganesh vs. Rabin - BBOARD Browsing Mix. For throughput, a normalized result greater than 1 implies that Rabin is better; for response time, a normalized result greater than 1 implies that Ganesh is better. Mean of three trials.]
In this section, we first discuss alternative approaches to caching dynamic content and then examine other uses of hash-based primitives in distributed systems. 8.1 Caching Dynamic Content At the database layer, a number of systems have advocated middle-tier caching where parts of the database are replicated at the edge or server [3, 4, 20]. These systems either cache entire tables in what is essentially a replicated database or use materialized views from previous query replies [19]. They require tight integration with the back-end database to ensure a time bound on the propagation of updates. These systems are also usually targeted towards workloads that do not require strict consistency and can tolerate stale data. Further, unlike Ganesh, some of these mid-tier caching solutions [2, 3] suffer from the complexity of having to participate in query planning and distributed query processing. Gao et al. [15] propose using a distributed object replication architecture where the data store's consistency requirements are adapted on a per-application basis. These solutions require substantial developer resources and detailed understanding of the application being modified. While systems that attempt to automate the partitioning and replication of an application's database exist [34], they do not provide full transaction semantics. In comparison, Ganesh does not weaken any of the semantics provided by the underlying database. Instead, the work advocates for an architecture in which all tiers except the database should be offloaded to the edge. Our evaluation of Ganesh has shown that it would benefit these scenarios. The approaches of these architectures and Ganesh are complementary and they would benefit each other. While fragment-based caching works at the edge, a recent proposal has proposed moving web page assembly to the clients to optimize content delivery [31]. 8.2 Hash-based Systems The past few years have seen the emergence of many systems that exploit hash-based techniques. At the heart of all these systems is the idea of detecting similarity in data without requiring interpretation of that data. This simple yet elegant idea relies on cryptographic hashing, as discussed earlier in Section 2. Successful applications of this idea span a wide range of storage systems. Using synchronized caches at both ends of a network link, duplicated data is replaced by smaller tokens for transmission and then restored at the remote end. This approach is especially useful when data items are modified in-place through insertions, deletions, and updates. However, as Section 6 shows, the performance of this technique can show a dramatic drop in the presence of data reordering. Ganesh instead uses row boundaries as dividers for detecting similarity. The most aggressive use of hash-based techniques is by systems that use hashes as the primary identifiers for objects in persistent storage. Storage systems such as CFS [12] and PAST [13] that have been built using distributed hash tables fall into this category. Single Instance Storage [6] and Venti [29] are other examples of such systems. As discussed in Section 2.2, the use of cryptographic hashes for addressing persistent data represents a deeper level of faith in their collision-resistance than that assumed by Ganesh. If time reveals shortcomings in the hash algorithm, the effort involved in correcting the flaw is much greater. In Ganesh, it is merely a matter of replacing the hash algorithm.
9. CONCLUSION The growing use of dynamic web content generated from relational databases places increased demands on WAN bandwidth. Traditional caching solutions for bandwidth and latency reduction are often ineffective for such content. This paper shows that the impact of WAN accesses to databases can be substantially reduced through the Ganesh architecture without any compromise of the database's strict consistency semantics. The essence of the Ganesh architecture is the use of computation at the edges to reduce communication through the Internet. Ganesh is able to use cryptographic hashes to detect similarity with previous results and send compact recipes of results rather than full results. Our design uses interposition to achieve complete transparency: clients, application servers, and database servers are all unaware of Ganesh's presence and require no modification. Our experimental evaluation confirms that Ganesh, while conceptually simple, can be highly effective in improving throughput and response time. Our results also confirm that exploiting the structure present in database results to detect similarity is crucial to this performance improvement.
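The trade-off between Rabin-style chunking of the serialized result and Ganesh's row-boundary hashing, discussed in the related-work section above, can be illustrated with a toy experiment. Fixed-size spans stand in for content-defined chunking purely to keep the sketch short, so the exact numbers mean nothing beyond the qualitative point about reordering.

```python
# Toy illustration (not the Section 6 experiment): hashing at row boundaries is
# unaffected by row reordering, while hashing contiguous spans of the serialized
# result loses almost all matches once rows move around.
import hashlib, random

def rows(n):
    return [f"row-{i:05d}," + "x" * 100 for i in range(n)]

def row_hashes(rs):
    return {hashlib.sha1(r.encode()).digest() for r in rs}

def span_hashes(rs, span=4096):
    blob = "".join(rs).encode()
    return {hashlib.sha1(blob[i:i + span]).digest() for i in range(0, len(blob), span)}

original = rows(2000)
reordered = original[:]
random.shuffle(reordered)

for name, hashes in (("row boundaries", row_hashes), ("contiguous spans", span_hashes)):
    a, b = hashes(original), hashes(reordered)
    print(f"{name}: {len(a & b) / len(a):.0%} of hashes still match after reordering")
```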
J-67
Mechanism Design for Online Real-Time Scheduling
For the problem of online real-time scheduling of jobs on a single processor, previous work presents matching upper and lower bounds on the competitive ratio that can be achieved by a deterministic algorithm. However, these results only apply to the non-strategic setting in which the jobs are released directly to the algorithm. Motivated by emerging areas such as grid computing, we instead consider this problem in an economic setting, in which each job is released to a separate, self-interested agent. The agent can then delay releasing the job to the algorithm, inflate its length, and declare an arbitrary value and deadline for the job, while the center determines not only the schedule, but the payment of each agent. For the resulting mechanism design problem (in which we also slightly strengthen an assumption from the non-strategic setting), we present a mechanism that addresses each incentive issue, while only increasing the competitive ratio by one. We then show a matching lower bound for deterministic mechanisms that never pay the agents.
[ "mechan design", "schedul", "competit ratio", "determinist algorithm", "non-strateg set", "deadlin", "determinist mechan", "job onlin schedul", "import ratio", "zero laxiti", "onlin algorithm", "quasi-linear function", "incent compat", "individu ration", "profit deviat", "monoton", "game theori" ]
[ "P", "P", "P", "P", "P", "P", "P", "R", "M", "U", "R", "U", "M", "U", "U", "U", "U" ]
Mechanism Design for Online Real-Time Scheduling Ryan Porter∗ Computer Science Department Stanford University Stanford, CA 94305 rwporter@stanford.edu ABSTRACT For the problem of online real-time scheduling of jobs on a single processor, previous work presents matching upper and lower bounds on the competitive ratio that can be achieved by a deterministic algorithm. However, these results only apply to the non-strategic setting in which the jobs are released directly to the algorithm. Motivated by emerging areas such as grid computing, we instead consider this problem in an economic setting, in which each job is released to a separate, self-interested agent. The agent can then delay releasing the job to the algorithm, inflate its length, and declare an arbitrary value and deadline for the job, while the center determines not only the schedule, but the payment of each agent. For the resulting mechanism design problem (in which we also slightly strengthen an assumption from the non-strategic setting), we present a mechanism that addresses each incentive issue, while only increasing the competitive ratio by one. We then show a matching lower bound for deterministic mechanisms that never pay the agents. Categories and Subject Descriptors I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence-Multiagent systems; J.4 [Social and Behavioral Sciences]: Economics; F.1.2 [Computation by Abstract Devices]: Modes of Computation-Online computation General Terms Algorithms, Economics, Design, Theory 1. INTRODUCTION We consider the problem of online scheduling of jobs on a single processor. Each job is characterized by a release time, a deadline, a processing time, and a value for successful completion by its deadline. The objective is to maximize the sum of the values of the jobs completed by their respective deadlines. The key challenge in this online setting is that the schedule must be constructed in real-time, even though nothing is known about a job until its release time. Competitive analysis [6, 10], with its roots in [12], is a well-studied approach for analyzing online algorithms by comparing them against the optimal offline algorithm, which has full knowledge of the input at the beginning of its execution. One interpretation of this approach is as a game between the designer of the online algorithm and an adversary. First, the designer selects the online algorithm. Then, the adversary observes the algorithm and selects the sequence of jobs that maximizes the competitive ratio: the ratio of the value of the jobs completed by an optimal offline algorithm to the value of those completed by the online algorithm. Two papers paint a complete picture in terms of competitive analysis for this setting, in which the algorithm is assumed to know k, the maximum ratio between the value densities (value divided by processing time) of any two jobs. For k = 1, [4] presents a 4-competitive algorithm, and proves that this is a lower bound on the competitive ratio for deterministic algorithms. The same paper also generalizes the lower bound to (1 + √ k)2 for any k ≥ 1, and [15] then presents a matching (1 + √ k)2 -competitive algorithm. The setting addressed by these papers is completely nonstrategic, and the algorithm is assumed to always know the true characteristics of each job upon its release. However, in domains such as grid computing (see, for example, [7, 8]) this assumption is invalid, because buyers of processor time choose when and how to submit their jobs. 
Furthermore, sellers not only schedule jobs but also determine the amount that they charge buyers, an issue not addressed in the non-strategic setting. Thus, we consider an extension of the setting in which each job is owned by a separate, self-interested agent. Instead of being released to the algorithm, each job is now released only to its owning agent. Each agent now has four different ways in which it can manipulate the algorithm: it decides when to submit the job to the algorithm after the true release time, it can artificially inflate the length of the job, and it can declare an arbitrary value and deadline for the job. Because the agents are self-interested, they will choose to manipulate the algorithm if doing so will cause 61 their job to be completed; and, indeed, one can find examples in which agents have incentive to manipulate the algorithms presented in [4] and [15]. The addition of self-interested agents moves the problem from the area of algorithm design to that of mechanism design [17], the science of crafting protocols for self-interested agents. Recent years have seen much activity at the interface of computer science and mechanism design (see, e.g., [9, 18, 19]). In general, a mechanism defines a protocol for interaction between the agents and the center that culminates with the selection of an outcome. In our setting, a mechanism will take as input a job from each agent, and return a schedule for the jobs, and a payment to be made by each agent to the center. A basic solution concept of mechanism design is incentive compatibility, which, in our setting, requires that it is always in each agent``s best interests to immediately submit its job upon release, and to truthfully declare its value, length, and deadline. In order to evaluate a mechanism using competitive analysis, the adversary model must be updated. In the new model, the adversary still determines the sequence of jobs, but it is the self-interested agents who determine the observed input of the mechanism. Thus, in order to achieve a competitive ratio of c, an online mechanism must both be incentive compatible, and always achieve at least 1 c of the value that the optimal offline mechanism achieves on the same sequence of jobs. The rest of the paper is structured as follows. In Section 2, we formally define and review results from the original, non-strategic setting. After introducing the incentive issues through an example, we formalize the mechanism design setting in Section 3. In Section 4 we present our first main result, a ((1 + √ k)2 + 1)-competitive mechanism, and formally prove incentive compatibility and the competitive ratio. We also show how we can simplify this mechanism for the special case in which k = 1 and each agent cannot alter the length of its job. Returning the general setting, we show in Section 5 that this competitive ratio is a lower bound for deterministic mechanisms that do not pay agents. Finally, in Section 6, we discuss related work other than the directly relevant [4] and [15], before concluding with Section 7. 2. NON-STRATEGIC SETTING In this section, we formally define the original, non-strategic setting, and recap previous results. 2.1 Formulation There exists a single processor on which jobs can execute, and N jobs, although this number is not known beforehand. Each job i is characterized by a tuple θi = (ri, di, li, vi), which denotes the release time, deadline, length of processing time required, and value, respectively. 
The space Θi of possible tuples is the same for each job and consists of all θi such that ri, di, li, vi ∈ + (thus, the model of time is continuous). Each job is released at time ri, at which point its three other characteristics are known. Nothing is known about the job before its arrival. Each deadline is firm (or, hard), which means that no value is obtained for a job that is completed after its deadline. Preemption of jobs is allowed, and it takes no time to switch between jobs. Thus, job i is completed if and only if the total time it executes on the processor before di is at least li. Let θ = (θ1, ... , θN ) denote the vector of tuples for all jobs, and let θ−i = (θ1, ... , θi−1, θi+1, ... , θN ) denote the same vector without the tuple for job i. Thus, (θi, θ−i) denotes a complete vector of tuples. Define the value density ρi = vi li of job i to be the ratio of its value to its length. For an input θ, denote the maximum and minimum value densities as ρmin = mini ρi and ρmax = maxi ρi. The importance ratio is then defined to be ρmax ρmin , the maximal ratio of value densities between two jobs. The algorithm is assumed to always know an upper bound k on the importance ratio. For simplicity, we normalize the range of possible value densities so that ρmin = 1. An online algorithm is a function f : Θ1 × ... × ΘN → O that maps the vector of tuples (for any number N) to an outcome o. An outcome o ∈ O is simply a schedule of jobs on the processor, recorded by the function S : + → {0, 1, ... , N}, which maps each point in time to the active job, or to 0 if the processor is idle. To denote the total elapsed time that a job has spent on the processor at time t, we will use the function ei(t) = t 0 µ(S(x) = i)dx, where µ(·) is an indicator function that returns 1 if the argument is true, and zero otherwise. A job``s laxity at time t is defined to be di − t − li + ei(t) , the amount of time that it can remain inactive and still be completed by its deadline. A job is abandoned if it cannot be completed by its deadline (formally, if di −t+ei(t) < li). Also, overload S(·) and ei(·) so that they can also take a vector θ as an argument. For example, S(θ, t) is shorthand for the S(t) of the outcome f(θ), and it denotes the active job at time t when the input is θ. Since a job cannot be executed before its release time, the space of possible outcomes is restricted in that S(θ, t) = i implies ri ≤ t. Also, because the online algorithm must produce the schedule over time, without knowledge of future inputs, it must make the same decision at time t for inputs that are indistinguishable at this time. Formally, let θ(t) denote the subset of the tuples in θ that satisfy ri ≤ t. The constraint is then that θ(t) = θ (t) implies S(θ, t) = S(θ , t). The objective function is the sum of the values of the jobs that are completed by their respective deadlines: W(o, θ) = i vi · µ(ei(θ, di) ≥ li) . Let W∗ (θ) = maxo∈O W(o, θ) denote the maximum possible total value for the profile θ. In competitive analysis, an online algorithm is evaluated by comparing it against an optimal offline algorithm. Because the offline algorithm knows the entire input θ at time 0 (but still cannot start each job i until time ri), it always achieves W∗ (θ). An online algorithm f(·) is (strictly) c-competitive if there does not exist an input θ such that c · W(f(θ), θ) < W∗ (θ). An algorithm that is c-competitive is also said to achieve a competitive ratio of c. We assume that there does not exist an overload period of infinite duration. 
A period of time [ts, tf] is overloaded if the sum of the lengths of the jobs whose release time and deadline both fall within the time period exceeds the duration of the interval (formally, if tf − ts ≤ Σ_{i : ts ≤ ri, di ≤ tf} li). Without such an assumption, it is not possible to achieve a finite competitive ratio [15]. 2.2 Previous Results In the non-strategic setting, [4] presents a 4-competitive algorithm called TD1 (version 2) for the case of k = 1, while [15] presents a (1 + √k)^2-competitive algorithm called Dover for the general case of k ≥ 1. Matching lower bounds for deterministic algorithms for both of these cases were shown in [4]. In this section we provide a high-level description of TD1 (version 2) using an example. TD1 (version 2) divides the schedule into intervals, each of which begins when the processor transitions from idle to busy (call this time tb), and ends with the completion of a job. The first active job of an interval may have laxity; however, for the remainder of the interval, preemption of the active job is only considered when some other job has zero laxity. For example, when the input is the set of jobs listed in Table 1, the first interval is the complete execution of job 1 over the range [0.0, 0.9]. No preemption is considered during this interval, because job 2 has laxity until time 1.5. Then, a new interval starts at tb = 0.9 when job 2 becomes active. Before job 2 can finish, preemption is considered at time 4.8, when job 3 is released with zero laxity. In order to decide whether to preempt the active job, TD1 (version 2) uses two more variables: te and p_loss. The former records the latest deadline of a job that would be abandoned if the active job executes to completion (or, if no such job exists, the time that the active job will finish if it is not preempted). In this case, te = 17.0. The value te − tb represents an upper bound on the amount of possible execution time lost to the optimal offline algorithm due to the completion of the active job. The other variable, p_loss, is equal to the length of the first active job of the current interval. Because in general this job could have laxity, the offline algorithm may be able to complete it outside of the range [tb, te]. If the algorithm completes the active job and this job's length is at least (te − tb + p_loss)/4, then the algorithm is guaranteed to be 4-competitive for this interval (note that k = 1 implies that all jobs have the same value density and thus that lengths can be used to compute the competitive ratio). Because this is not the case at time 4.8 (since (te − tb + p_loss)/4 = (17.0 − 0.9 + 4.0)/4 > 4.0 = l2), the algorithm preempts job 2 for job 3, which then executes to completion.
Table 1: Input used to recap TD1 (version 2) [4].
Job  ri   di    li    vi
1    0.0  0.9   0.9   0.9
2    0.5  5.5   4.0   4.0
3    4.8  17.0  12.2  12.2
3. MECHANISM DESIGN SETTING However, false information about job 2 would cause TD1 (version 2) to complete this job. For example, if job 2's deadline were declared as ˆd2 = 4.7, then it would have zero laxity at time 0.7. At this time, the algorithm would preempt job 1 for job 2, because (te − tb + p_loss)/4 = (4.7 − 0.0 + 1.0)/4 > 0.9 = l1. Job 2 would then complete before the arrival of job 3. (Footnote 1: While it would be easy to alter the algorithm to recognize that this is not possible for the jobs in Table 1, our example does not depend on the use of p_loss.)
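The preemption test just traced can be re-checked numerically. The sketch below is only the arithmetic of that check for k = 1, using the Table 1 jobs and the values quoted in the text; it is not an implementation of TD1 (version 2).

```python
# Sketch of the preemption test described above; it is only the arithmetic of
# TD1 (version 2)'s check for k = 1, not an implementation of the algorithm.
def preempt_active(t_b, t_e, p_loss, active_length):
    """Keep the active job only if completing it already guarantees
    4-competitiveness for the current interval; otherwise preempt."""
    return active_length < (t_e - t_b + p_loss) / 4.0

# Time 4.8, truthful declarations: job 3 arrives with zero laxity while job 2
# (length 4.0) is active, with t_b = 0.9, t_e = 17.0, p_loss = 4.0.
print(preempt_active(0.9, 17.0, 4.0, 4.0))   # True: job 2 is preempted for job 3

# Time 0.7 under the false declaration d2 = 4.7: job 2 now has zero laxity while
# job 1 (length 0.9) is active, with t_b = 0.0, t_e = 4.7, p_loss = 1.0.
print(preempt_active(0.0, 4.7, 1.0, 0.9))    # True: job 1 is preempted for job 2
```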
2 While we will not describe the significantly more complex In order to address incentive issues such as this one, we need to formalize the setting as a mechanism design problem. In this section we first present the mechanism design formulation, and then define our goals for the mechanism. 3.1 Formulation There exists a center, who controls the processor, and N agents, where the value of N is unknown by the center beforehand. Each job i is owned by a separate agent i. The characteristics of the job define the agent``s type θi ∈ Θi. At time ri, agent i privately observes its type θi, and has no information about job i before ri. Thus, jobs are still released over time, but now each job is revealed only to the owning agent. Agents interact with the center through a direct mechanism Γ = (Θ1, ... , ΘN , g(·)), in which each agent declares a job, denoted by ˆθi = (ˆri, ˆdi, ˆli, ˆvi), and g : Θ1×...×ΘN → O maps the declared types to an outcome o ∈ O. An outcome o = (S(·), p1, ... , pN ) consists of a schedule and a payment from each agent to the mechanism. In a standard mechanism design setting, the outcome is enforced at the end of the mechanism. However, since the end is not well-defined in this online setting, we choose to model returning the job if it is completed and collecting a payment from each agent i as occurring at ˆdi, which, according to the agent``s declaration, is the latest relevant point of time for that agent. That is, even if job i is completed before ˆdi, the center does not return the job to agent i until that time. This modelling decision could instead be viewed as a decision by the mechanism designer from a larger space of possible mechanisms. Indeed, as we will discuss later, this decision of when to return a completed job is crucial to our mechanism. Each agent``s utility, ui(g(ˆθ), θi) = vi · µ(ei(ˆθ, di) ≥ li) · µ( ˆdi ≤ di) − pi(ˆθ), is a quasi-linear function of its value for its job (if completed and returned by its true deadline) and the payment it makes to the center. We assume that each agent is a rational, expected utility maximizer. Agent declarations are restricted in that an agent cannot declare a length shorter than the true length, since the center would be able to detect such a lie if the job were completed. On the other hand, in the general formulation we will allow agents to declare longer lengths, since in some settings it may be possible add unnecessary work to a job. However, we will also consider a restricted formulation in which this type of lie is not possible. The declared release time ˆri is the time that the agent chooses to submit job i to the center, and it cannot precede the time ri at which the job is revealed to the agent. The agent can declare an arbitrary deadline or value. To summarize, agent i can declare any type ˆθi = (ˆri, ˆdi, ˆli, ˆvi) such that ˆli ≥ li and ˆri ≥ ri. While in the non-strategic setting it was sufficient for the algorithm to know the upper bound k on the ratio ρmax ρmin , in the mechanism design setting we will strengthen this assumption so that the mechanism also knows ρmin (or, equivalently, the range [ρmin, ρmax] of possible value densities).3 Dover , we note that it is similar in its use of intervals and its preference for the active job. Also, we note that the lower bound we will show in Section 5 implies that false information can also benefit a job in Dover . 3 Note that we could then force agent declarations to satisfy ρmin ≤ ˆvi ˆli ≤ ρmax. 
However, this restriction would not 63 While we feel that it is unlikely that a center would know k without knowing this range, we later present a mechanism that does not depend on this extra knowledge in a restricted setting. The restriction on the schedule is now that S(ˆθ, t) = i implies ˆri ≤ t, to capture the fact that a job cannot be scheduled on the processor before it is declared to the mechanism. As before, preemption of jobs is allowed, and job switching takes no time. The constraints due to the online mechanism``s lack of knowledge of the future are that ˆθ(t) = ˆθ (t) implies S(ˆθ, t) = S(ˆθ , t), and ˆθ( ˆdi) = ˆθ ( ˆdi) implies pi(ˆθ) = pi(ˆθ ) for each agent i. The setting can then be summarized as follows. 1Overview of the Setting: for all t do The center instantiates S(ˆθ, t) ← i, for some i s.t. ˆri ≤ t if ∃i, (ri = t) then θi is revealed to agent i if ∃i, (t ≥ ri) and agent i has not declared a job then Agent i can declare any job ˆθi, s.t. ˆri = t and ˆli ≥ li if ∃i, ( ˆdi = t) ∧ (ei(ˆθ, t) ≥ li) then Completed job i is returned to agent i if ∃i, ( ˆdi = t) then Center sets and collects payment pi(ˆθ) from agent i 3.2 Mechanism Goals Our aim as mechanism designer is to maximize the value of completed jobs, subject to the constraints of incentive compatibility and individual rationality. The condition for (dominant strategy) incentive compatibility is that for each agent i, regardless of its true type and of the declared types of all other agents, agent i cannot increase its utility by unilaterally changing its declaration. Definition 1. A direct mechanism Γ satisfies incentive compatibility (IC) if ∀i, θi, θi, ˆθ−i : ui(g(θi, ˆθ−i), θi) ≥ ui(g(θi, ˆθ−i), θi) From an agent perspective, dominant strategies are desirable because the agent does not have to reason about either the strategies of the other agents or the distribution from the which other agent``s types are drawn. From a mechanism designer perspective, dominant strategies are important because we can reasonably assume that an agent who has a dominant strategy will play according to it. For these reasons, in this paper we require dominant strategies, as opposed to a weaker equilibrium concept such as Bayes-Nash, under which we could improve upon our positive results.4 decrease the lower bound on the competitive ratio. 4 A possible argument against the need for incentive compatibility is that an agent``s lie may actually improve the schedule. In fact, this was the case in the example we showed for the false declaration ˆd2 = 4.7. However, if an agent lies due to incorrect beliefs over the future input, then the lie could instead make the schedule the worse (for example, if job 3 were never released, then job 1 would have been unnecessarily abandoned). Furthermore, if we do not know the beliefs of the agents, and thus cannot predict how they will lie, then we can no longer provide a competitive guarantee for our mechanism. While restricting ourselves to incentive compatible direct mechanisms may seem limiting at first, the Revelation Principle for Dominant Strategies (see, e.g., [17]) tells us that if our goal is dominant strategy implementation, then we can make this restriction without loss of generality. The second goal for our mechanism, individual rationality, requires that agents who truthfully reveal their type never have negative utility. The rationale behind this goal is that participation in the mechanism is assumed to be voluntary. Definition 2. 
A direct mechanism Γ satisfies individual rationality (IR) if ∀i, θi, ˆθ−i, ui(g(θi, ˆθ−i), θi) ≥ 0. Finally, the social welfare function that we aim to maximize is the same as the objective function of the non-strategic setting: W(o, θ) = Σi vi · µ(ei(θ, di) ≥ li). As in the non-strategic setting, we will evaluate an online mechanism using competitive analysis to compare it against an optimal offline mechanism (which we will denote by Γoffline). An offline mechanism knows all of the types at time 0, and thus can always achieve W∗(θ). (Footnote 5: Another possibility is to allow only the agents to know their types at time 0, and to force Γoffline to be incentive compatible so that agents will truthfully declare their types at time 0. However, this would not affect our results, since executing a VCG mechanism (see, e.g., [17]) at time 0 both satisfies incentive compatibility and always maximizes social welfare.) Definition 3. An online mechanism Γ is (strictly) c-competitive if it satisfies IC and IR, and if there does not exist a profile of agent types θ such that c · W(g(θ), θ) < W∗(θ). 4. RESULTS In this section, we first present our main positive result: a ((1 + √k)^2 + 1)-competitive mechanism (Γ1). After providing some intuition as to why Γ1 satisfies individual rationality and incentive compatibility, we first formally prove these two properties and then the competitive ratio. We then consider a special case in which k = 1 and agents cannot lie about the length of their job, which allows us to alter this mechanism so that it no longer requires either knowledge of ρmin or the collection of payments from agents. Unlike TD1 (version 2) and Dover, Γ1 gives no preference to the active job. Instead, it always executes the available job with the highest priority: (ˆvi + √k · ei(ˆθ, t) · ρmin). Each agent whose job is completed is then charged the lowest value that it could have declared such that its job still would have been completed, holding constant the rest of its declaration. By the use of a payment rule similar to that of a second-price auction, Γ1 satisfies both IC with respect to values and IR. We now argue why it satisfies IC with respect to the other three characteristics. Declaring an improved job (i.e., declaring an earlier release time, a shorter length, or a later deadline) could possibly decrease the payment of an agent. However, the first two lies are not possible in our setting, while the third would cause the job, if it is completed, to be returned to the agent after the true deadline. This is the reason why it is important to always return a completed job at its declared deadline, instead of at the point at which it is completed.
Mechanism 1 (Γ1):
  Execute S(ˆθ, ·) according to Algorithm 1
  for all i do
    if ei(ˆθ, ˆdi) ≥ ˆli {Agent i's job is completed} then
      pi(ˆθ) ← arg min_{vi ≥ 0} (ei(((ˆri, ˆdi, ˆli, vi), ˆθ−i), ˆdi) ≥ ˆli)
    else
      pi(ˆθ) ← 0
Algorithm 1:
  for all t do
    Avail ← {i | (t ≥ ˆri) ∧ (ei(ˆθ, t) < ˆli) ∧ (ei(ˆθ, t) + ˆdi − t ≥ ˆli)} {Set of all released, non-completed, non-abandoned jobs}
    if Avail ≠ ∅ then
      S(ˆθ, t) ← arg max_{i ∈ Avail} (ˆvi + √k · ei(ˆθ, t) · ρmin) {Break ties in favor of lower ˆri}
    else
      S(ˆθ, t) ← 0
It remains to argue why an agent does not have incentive to worsen its job. The only possible effects of an inflated length are delaying the completion of the job and causing it to be abandoned, and the only possible effects of an earlier declared deadline are causing it to be abandoned and causing it to be returned earlier (which has no effect on the agent's utility in our setting).
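To make Algorithm 1 and the critical-value payment rule concrete, here is a small discrete-time simulation. It is a sketch rather than the mechanism verbatim: time is discretized, the three example jobs are invented, and the payment is located by binary search, which is justified by the monotonicity of completion in the declared value (Lemma 5 below).

```python
# Discrete-time sketch of Gamma_1 as described above: always run the available
# job with the highest priority v_i + sqrt(k)*e_i*rho_min, and charge each
# completed job (approximately) the smallest value for which it still completes.
# The time step, horizon, and example jobs are assumptions made for illustration.
import math

def schedule(jobs, k=1.0, rho_min=1.0, horizon=20.0, dt=0.01):
    """jobs: list of (r, d, l, v) tuples. Returns elapsed processing time per job."""
    elapsed = [0.0] * len(jobs)
    t = 0.0
    while t < horizon:
        avail = [i for i, (r, d, l, v) in enumerate(jobs)
                 if t >= r and elapsed[i] < l and elapsed[i] + d - t >= l]
        if avail:
            # Highest priority; ties broken in favor of the earlier release.
            active = min(avail, key=lambda j: (-(jobs[j][3] + math.sqrt(k) * elapsed[j] * rho_min),
                                               jobs[j][0]))
            elapsed[active] += dt
        t += dt
    return elapsed

def completed(jobs, elapsed):
    return [i for i, (r, d, l, v) in enumerate(jobs) if elapsed[i] >= l]

def payment(jobs, winner, iters=30, **kw):
    """Critical value: smallest declared value for which `winner` still completes,
    holding the rest of its declaration and all other declarations fixed."""
    lo, hi = 0.0, jobs[winner][3]
    for _ in range(iters):
        mid = (lo + hi) / 2
        trial = list(jobs)
        r, d, l, _ = trial[winner]
        trial[winner] = (r, d, l, mid)
        if winner in completed(trial, schedule(trial, **kw)):
            hi = mid
        else:
            lo = mid
    return hi

jobs = [(0.0, 8.0, 5.0, 6.0),    # must win the contested slot at t = 0 to finish by 8
        (0.0, 5.5, 5.0, 5.0),
        (6.0, 12.0, 3.0, 3.5)]
elapsed = schedule(jobs)
for i in completed(jobs, elapsed):
    print(f"job {i} completed; pays about {payment(jobs, i):.2f}")
```

In this example the first job pays roughly the value of the competing job it had to beat, while the uncontested third job pays essentially nothing, which is the second-price flavor of the payment rule described above.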
On the other hand, it is less obvious why agents do not have incentive to declare a later release time. Consider a variant mechanism, call it Γ1′, that differs from Γ1 in that it does not preempt the active job i unless there exists another job j such that (ˆvi + √k · li · ρmin) < ˆvj. Note that as an active job approaches completion in Γ1, its condition for preemption approaches that of Γ1′. However, the types in Table 2 for the case of k = 1 show why an agent may have incentive to delay the arrival of its job under Γ1′. Job 1 becomes active at time 0, and job 2 is abandoned upon its release at time 6, because 10 + 10 = v1 + l1 > v2 = 13. Then, at time 8, job 1 is preempted by job 3, because 10 + 10 = v1 + l1 < v3 = 22. Job 3 then executes to completion, forcing job 1 to be abandoned. However, job 2 had more weight than job 1, and would have prevented job 3 from being executed if it had been the active job at time 8, since 13 + 13 = v2 + l2 > v3 = 22. Thus, if agent 1 had falsely declared ˆr1 = 20, then job 3 would have been abandoned at time 8, and job 1 would have completed over the range [20, 30].
Table 2: Jobs used to show why a slightly altered version of Γ1 would not be incentive compatible with respect to release times.
Job  ri  di  li  vi
1    0   30  10  10
2    6   19  13  13
3    8   30  22  22
Intuitively, Γ1 avoids this problem because of two properties. First, when a job becomes active, it must have a greater priority than all other available jobs. Second, because a job's priority can only increase through the increase of its elapsed time, ei(ˆθ, t), the rate of increase of a job's priority is independent of its characteristics. These two properties together imply that, while a job is active, there cannot exist a time at which its priority is less than the priority that one of these other jobs would have achieved by executing on the processor instead. 4.1 Proof of Individual Rationality and Incentive Compatibility After presenting the (trivial) proof of IR, we break the proof of IC into lemmas. Theorem 1. Mechanism Γ1 satisfies individual rationality. Proof. For arbitrary i, θi, ˆθ−i, if job i is not completed, then agent i pays nothing and thus has a utility of zero; that is, pi(θi, ˆθ−i) = 0 and ui(g(θi, ˆθ−i), θi) = 0. On the other hand, if job i is completed, then its value must exceed agent i's payment. Formally, ui(g(θi, ˆθ−i), θi) = vi − arg min_{vi ≥ 0}(ei(((ri, di, li, vi), ˆθ−i), di) ≥ li) ≥ 0 must hold, since the true value vi itself satisfies the condition. To prove IC, we need to show that for an arbitrary agent i, and an arbitrary profile ˆθ−i of declarations of the other agents, agent i can never gain by making a false declaration ˆθi ≠ θi, subject to the constraints that ˆri ≥ ri and ˆli ≥ li. We start by showing that, regardless of ˆvi, if truthful declarations of ri, di, and li do not cause job i to be completed, then worse declarations of these variables (that is, declarations that satisfy ˆri ≥ ri, ˆli ≥ li and ˆdi ≤ di) can never cause the job to be completed. We break this part of the proof into two lemmas, first showing that it holds for the release time, regardless of the declarations of the other variables, and then for length and deadline. Lemma 2. In mechanism Γ1, the following condition holds for all i, θi, ˆθ−i: ∀ ˆvi, ˆli ≥ li, ˆdi ≤ di, ˆri ≥ ri, ei(((ˆri, ˆdi, ˆli, ˆvi), ˆθ−i), ˆdi) ≥ ˆli =⇒ ei(((ri, ˆdi, ˆli, ˆvi), ˆθ−i), ˆdi) ≥ ˆli. Proof.
Assume by contradiction that this condition does not hold- that is, job i is not completed when ri is truthfully declared, but is completed for some false declaration ˆri ≥ ri. We first analyze the case in which the release time is truthfully declared, and then we show that job i cannot be completed when agent i delays submitting it to the center. Case I: Agent i declares ˆθi = (ri, ˆdi, ˆli, ˆvi). First, define the following three points in the execution of job i. • Let ts = arg mint S((ˆθi, ˆθ−i), t) = i be the time that job i first starts execution. • Let tp = arg mint>ts S((ˆθi, ˆθ−i), t) = i be the time that job i is first preempted. • Let ta = arg mint ei((ˆθi, ˆθ−i), t) + ˆdi − t < ˆli be the time that job i is abandoned. 65 If ts and tp are undefined because job i never becomes active, then let ts = tp = ta . Also, partition the jobs declared by other agents before ta into the following three sets. • X = {j|(ˆrj < tp ) ∧ (j = i)} consists of the jobs (other than i) that arrive before job i is first preempted. • Y = {j|(tp ≤ ˆrj ≤ ta )∧(ˆvj > ˆvi + √ k·ei((ˆθi, ˆθ−i), ˆrj)} consists of the jobs that arrive in the range [tp , ta ] and that when they arrive have higher priority than job i (note that we are make use of the normalization). • Z = {j|(tp ≤ ˆrj ≤ ta )∧(ˆvj ≤ ˆvi + √ k ·ei((ˆθi, ˆθ−i), ˆrj)} consists of the jobs that arrive in the range [tp , ta ] and that when they arrive have lower priority than job i. We now show that all active jobs during the range (tp , ta ] must be either i or in the set Y . Unless tp = ta (in which case this property trivially holds), it must be the case that job i has a higher priority than an arbitrary job x ∈ X at time tp , since at the time just preceding tp job x was available and job i was active. Formally, ˆvx + √ k · ex((ˆθi, ˆθ−i), tp ) < ˆvi + √ k · ei((ˆθi, ˆθ−i), tp ) must hold.6 We can then show that, over the range [tp , ta ], no job x ∈ X runs on the processor. Assume by contradiction that this is not true. Let tf ∈ [tp , ta ] be the earliest time in this range that some job x ∈ X is active, which implies that ex((ˆθi, ˆθ−i), tf ) = ex((ˆθi, ˆθ−i), tp ). We can then show that job i has a higher priority at time tf as follows: ˆvx+ √ k·ex((ˆθi, ˆθ−i), tf ) = ˆvx+ √ k·ex((ˆθi, ˆθ−i), tp ) < ˆvi + √ k · ei((ˆθi, ˆθ−i), tp ) ≤ ˆvi + √ k · ei((ˆθi, ˆθ−i), tf ), contradicting the fact that job x is active at time tf . A similar argument applies to an arbitrary job z ∈ Z, starting at it release time ˆrz > tp , since by definition job i has a higher priority at that time. The only remaining jobs that can be active over the range (tp , ta ] are i and those in the set Y . Case II: Agent i declares ˆθi = (ˆri, ˆdi, ˆli, ˆvi), where ˆri > ri. We now show that job i cannot be completed in this case, given that it was not completed in case I. First, we can restrict the range of ˆri that we need to consider as follows. Declaring ˆri ∈ (ri, ts ] would not affect the schedule, since ts would still be the first time that job i executes. Also, declaring ˆri > ta could not cause the job to be completed, since di − ta < ˆli holds, which implies that job i would be abandoned at its release. Thus, we can restrict consideration to ˆri ∈ (ts , ta ]. In order for declaring ˆθi to cause job i to be completed, a necessary condition is that the execution of some job yc ∈ Y must change during the range (tp , ta ], since the only jobs other than i that are active during that range are in Y . 
Let tc = arg mint∈(tp,ta][∃yc ∈ Y, (S((ˆθi, ˆθ−i), t) = yc ) ∧ (S((ˆθi, ˆθ−i), t) = yc )] be the first time that such a change occurs. We will now show that for any ˆri ∈ (ts , ta ], there cannot exist a job with higher priority than yc at time tc , contradicting (S((ˆθi, ˆθ−i), t) = yc ). First note that job i cannot have a higher priority, since there would have to exist a t ∈ (tp , tc ) such that ∃y ∈ 6 For simplicity, when we give the formal condition for a job x to have a higher priority than another job y, we will assume that job x``s priority is strictly greater than job y``s, because, in the case of a tie that favors x, future ties would also be broken in favor of job x. Y, (S((ˆθi, ˆθ−i), t) = y) ∧ (S((ˆθi, ˆθ−i), t) = i), contradicting the definition of tc . Now consider an arbitrary y ∈ Y such that y = yc . In case I, we know that job y has lower priority than yc at time tc ; that is, ˆvy + √ k·ey((ˆθi, ˆθ−i), tc ) < ˆvyc + √ k·eyc ((ˆθi, ˆθ−i), tc ). Thus, moving to case II, job y must replace some other job before tc . Since ˆry ≥ tp , the condition is that there must exist some t ∈ (tp , tc ) such that ∃w ∈ Y ∪{i}, (S((ˆθi, ˆθ−i), t) = w) ∧ (S((ˆθi, ˆθ−i), t) = y). Since w ∈ Y would contradict the definition of tc , we know that w = i. That is, the job that y replaces must be i. By definition of the set Y , we know that ˆvy > ˆvi + √ k · ei((ˆθi, ˆθ−i), ˆry). Thus, if ˆry ≤ t, then job i could not have executed instead of y in case I. On the other hand, if ˆry > t, then job y obviously could not execute at time t, contradicting the existence of such a time t. Now consider an arbitrary job x ∈ X. We know that in case I job i has a higher priority than job x at time ts , or, formally, that ˆvx + √ k · ex((ˆθi, ˆθ−i), ts ) < ˆvi + √ k · ei((ˆθi, ˆθ−i), ts ). We also know that ˆvi + √ k·ei((ˆθi, ˆθ−i), tc ) < ˆvyc + √ k · eyc ((ˆθi, ˆθ−i), tc ). Since delaying i``s arrival will not affect the execution up to time ts , and since job x cannot execute instead of a job y ∈ Y at any time t ∈ (tp , tc ] by definition of tc , the only way for job x``s priority to increase before tc as we move from case I to II is to replace job i over the range (ts , tc ]. Thus, an upper bound on job x``s priority when agent i declares ˆθi is: ˆvx+ √ k· ex((ˆθi, ˆθ−i), ts )+ei((ˆθi, ˆθ−i), tc )−ei((ˆθi, ˆθ−i), ts ) < ˆvi + √ k· ei((ˆθi, ˆθ−i), ts )+ei((ˆθi, ˆθ−i), tc )−ei((ˆθi, ˆθ−i), ts ) = ˆvi + √ k · ei((ˆθi, ˆθ−i), tc ) < ˆvyc + √ k · eyc ((ˆθi, ˆθ−i), tc ). Thus, even at this upper bound, job yc would execute instead of job x at time tc . A similar argument applies to an arbitrary job z ∈ Z, starting at it release time ˆrz. Since the sets {i}, X, Y, Z partition the set of jobs released before ta , we have shown that no job could execute instead of job yc , contradicting the existence of tc , and completing the proof. Lemma 3. In mechanism Γ1, the following condition holds for all i, θi, ˆθ−i: ∀ ˆvi, ˆli ≥ li, ˆdi ≤ di, ei ((ri, ˆdi, ˆli, ˆvi), ˆθ−i), ˆdi ≥ ˆli =⇒ ei ((ri, di, li, ˆvi), ˆθ−i), ˆdi ≥ li Proof. Assume by contradiction there exists some instantiation of the above variables such that job i is not completed when li and di are truthfully declared, but is completed for some pair of false declarations ˆli ≥ li and ˆdi ≤ di. Note that the only effect that ˆdi and ˆli have on the execution of the algorithm is on whether or not i ∈ Avail. Specifically, they affect the two conditions: (ei(ˆθ, t) < ˆli) and (ei(ˆθ, t) + ˆdi − t ≥ ˆli). 
Because job i is completed when ˆli and ˆdi are declared, the former condition (for completion) must become false before the latter. Since truthfully declaring li ≤ ˆli and di ≥ ˆdi will only make the former condition become false earlier and the latter condition become false later, the execution of the algorithm will not be affected when moving to truthful declarations, and job i will be completed, a contradiction. We now use these two lemmas to show that the payment for a completed job can only increase by falsely declaring worse ˆli, ˆdi, and ˆri. 66 Lemma 4. In mechanism Γ1, the following condition holds for all i, θi, ˆθ−i: ∀ ˆli ≥ li, ˆdi ≤ di, ˆri ≥ ri, arg min vi≥0 ei ((ˆri, ˆdi, ˆli, vi), ˆθ−i), ˆdi ≥ ˆli ≥ arg min vi≥0 ei ((ri, di, li, vi), ˆθ−i), di ≥ li Proof. Assume by contradiction that this condition does not hold. This implies that there exists some value vi such that the condition (ei(((ˆri, ˆdi, ˆli, vi), ˆθ−i), ˆdi) ≥ ˆli) holds, but (ei(((ri, di, li, vi), ˆθ−i), di) ≥ li) does not. Applying Lemmas 2 and 3: (ei(((ˆri, ˆdi, ˆli, vi), ˆθ−i), ˆdi) ≥ ˆli) =⇒ (ei(((ri, ˆdi, ˆli, vi), ˆθ−i), ˆdi) ≥ ˆli) =⇒ (ei(((ri, di, li, vi), ˆθ−i), di) ≥ li), a contradiction. Finally, the following lemma tells us that the completion of a job is monotonic in its declared value. Lemma 5. In mechanism Γ1, the following condition holds for all i, ˆθi, ˆθ−i: ∀ ˆvi ≥ ˆvi, ei ((ˆri, ˆdi, ˆli, ˆvi), ˆθ−i), ˆdi ≥ ˆli =⇒ ei ((ˆri, ˆdi, ˆli, ˆvi), ˆθ−i), ˆdi ≥ ˆli The proof, by contradiction, of this lemma is omitted because it is essentially identical to that of Lemma 2 for ˆri. In case I, agent i declares (ˆri, ˆdi, ˆli, ˆvi) and the job is not completed, while in case II he declares (ˆri, ˆdi, ˆli, ˆvi) and the job is completed. The analysis of the two cases then proceeds as before- the execution will not change up to time ts because the initial priority of job i decreases as we move from case I to II; and, as a result, there cannot be a change in the execution of a job other than i over the range (tp , ta ]. We can now combine the lemmas to show that no profitable deviation is possible. Theorem 6. Mechanism Γ1 satisfies incentive compatibility. Proof. For an arbitrary agent i, we know that ˆri ≥ ri and ˆli ≥ li hold by assumption. We also know that agent i has no incentive to declare ˆdi > di, because job i would never be returned before its true deadline. Then, because the payment function is non-negative, agent i``s utility could not exceed zero. By IR, this is the minimum utility it would achieve if it truthfully declared θi. Thus, we can restrict consideration to ˆθi that satisfy ˆri ≥ ri, ˆli ≥ li, and ˆdi ≤ di. Again using IR, we can further restrict consideration to ˆθi that cause job i to be completed, since any other ˆθi yields a utility of zero. If truthful declaration of θi causes job i to be completed, then by Lemma 4 any such false declaration ˆθi could not decrease the payment of agent i. On the other hand, if truthful declaration does not cause job i to be completed, then declaring such a ˆθi will cause agent i to have negative utility, since vi < arg minvi≥0 ei(((ri, di, li, vi), ˆθ−i), ˆdi) ≥ li ≤ arg minvi≥0 ei(((ˆri, ˆdi, ˆli, vi), ˆθ−i), ˆdi) ≥ ˆli holds by Lemmas 5 and 4, respectively. 4.2 Proof of Competitive Ratio The proof of the competitive ratio, which makes use of techniques adapted from those used in [15], is also broken into lemmas. Having shown IC, we can assume truthful declaration (ˆθ = θ). 
Since we have also shown IR, in order to prove the competitive ratio it remains to bound the loss of social welfare against Γoffline. Denote by (1, 2, ... , F) the sequence of jobs completed by Γ1. Divide time into intervals If = (topen f , tclose f ], one for each job f in this sequence. Set tclose f to be the time at which job f is completed, and set topen f = tclose f−1 for f ≥ 2, and topen 1 = 0 for f = 1. Also, let tbegin f be the first time that the processor is not idle in interval If . Lemma 7. For any interval If , the following inequality holds: tclose f − tbegin f ≤ (1 + 1√ k ) · vf Proof. Interval If begins with a (possibly zero length) period of time in which the processor is idle because there is no available job. Then, it continuously executes a sequence of jobs (1, 2, ... , c), where each job i in this sequence is preempted by job i + 1, except for job c, which is completed (thus, job c in this sequence is the same as job f is the global sequence of completed jobs). Let ts i be the time that job i begins execution. Note that ts 1 = tbegin f . Over the range [tbegin f , tclose f ], the priority (vi+ √ k·ei(θ, t)) of the active job is monotonically increasing with time, because this function linearly increases while a job is active, and can only increase at a point in time when preemption occurs. Thus, each job i > 1 in this sequence begins execution at its release time (that is, ts i = ri), because its priority does not increase while it is not active. We now show that the value of the completed job c exceeds the product of √ k and the time spent in the interval on jobs 1 through c−1, or, more formally, that the following condition holds: vc ≥ √ k c−1 h=1(eh(θ, ts h+1) − eh(θ, ts h)). To show this, we will prove by induction that the stronger condition vi ≥ √ k i−1 h=1 eh(θ, ts h+1) holds for all jobs i in the sequence. Base Case: For i = 1, v1 ≥ √ k 0 h=1 eh(θ, ts h+1) = 0, since the sum is over zero elements. Inductive Step: For an arbitrary 1 ≤ i < c, we assume that vi ≥ √ k i−1 h=1 eh(θ, ts h+1) holds. At time ts i+1, we know that vi+1 ≥ vi + √ k · ei(θ, ts i+1) holds, because ts i+1 = ri+1. These two inequalities together imply that vi+1 ≥√ k i h=1 eh(θ, ts h+1), completing the inductive step. We also know that tclose f − ts c ≤ lc ≤ vc must hold, by the simplifying normalization of ρmin = 1 and the fact that job c``s execution time cannot exceed its length. We can thus bound the total execution time of If by: tclose f − tbegin f = (tclose f −ts c)+ c−1 h=1(eh(θ, ts h+1)−eh(θ, ts h)) ≤ (1+ 1√ k )vf . We now consider the possible execution of uncompleted jobs by Γoffline. Associate each job i that is not completed by Γ1 with the interval during which it was abandoned. All jobs are now associated with an interval, since there are no gaps between the intervals, and since no job i can be abandoned after the close of the last interval at tclose F . Because the processor is idle after tclose F , any such job i would become active at some time t ≥ tclose F , which would lead to the completion of some job, creating a new interval and contradicting the fact that IF is the last one. 67 The following lemma is equivalent to Lemma 5.6 of [15], but the proof is different for our mechanism. Lemma 8. For any interval If and any job i abandoned in If , the following inequality holds: vi ≤ (1 + √ k)vf . Proof. Assume by contradiction that there exists a job i abandoned in If such that vi > (1 + √ k)vf . At tclose f , the priority of job f is vf + √ k · lf < (1 + √ k)vf . 
Because the priority of the active job monotonically increases over the range [tbegin f , tclose f ], job i would have a higher priority than the active job (and thus begin execution) at some time t ∈ [tbegin f , tclose f ]. Again applying monotonicity, this would imply that the priority of the active job at tclose f exceeds (1 + √ k)vf , contradicting the fact that it is (1 + √ k)vf . As in [15], for each interval If , we give Γoffline the following gift: k times the amount of time in the range [tbegin f , tclose f ] that it does not schedule a job. Additionally, we give the adversary vf , since the adversary may be able to complete this job at some future time, due to the fact that Γ1 ignores deadlines. The following lemma is Lemma 5.10 in [15], and its proof now applies directly. Lemma 9. [15] With the above gifts the total net gain obtained by the clairvoyant algorithm from scheduling the jobs abandoned during If is not greater than (1 + √ k) · vf . The intuition behind this lemma is that the best that the adversary can do is to take almost all of the gift of k ·(tclose f −tbegin f ) (intuitively, this is equivalent to executing jobs with the maximum possible value density over the time that Γ1 is active), and then begin execution of a job abandoned by Γ1 right before tclose f . By Lemma 8, the value of this job is bounded by (1 + √ k) · vf . We can now combine the results of these lemmas to prove the competitive ratio. Theorem 10. Mechanism Γ1 is (1+ √ k)2+1 -competitive. Proof. Using the fact that the way in which jobs are associated with the intervals partitions the entire set of jobs, we can show the competitive ratio by showing that Γ1 is (1+ √ k)2 +1 -competitive for each interval in the sequence (1, ... , F). Over an arbitrary interval If , the offline algorithm can achieve at most (tclose f −tbegin f )·k+vf +(1+ √ k)vf , from the two gifts and the net gain bounded by Lemma 9. Applying Lemma 7, this quantity is then bounded from above by (1+ 1√ k )·vf ·k+vf +(1+ √ k)vf = ((1+ √ k)2 +1)·vf . Since Γ1 achieves vf , the competitive ratio holds. 4.3 Special Case: Unalterable length and k=1 While so far we have allowed each agent to lie about all four characteristics of its job, lying about the length of the job is not possible in some settings. For example, a user may not know how to alter a computational problem in a way that both lengthens the job and allows the solution of the original problem to be extracted from the solution to the altered problem. Another restriction that is natural in some settings is uniform value densities (k = 1), which was the case considered by [4]. If the setting satisfies these two conditions, then, by using Mechanism Γ2, we can achieve a competitive ratio of 5 (which is the same competitive ratio as Γ1 for the case of k = 1) without knowledge of ρmin and without the use of payments. The latter property may be necessary in settings that are more local than grid computing (e.g., within a department) but in which the users are still self-interested.7 Mechanism 2 Γ2 Execute S(ˆθ, ·) according to Algorithm 2 for all i do pi(ˆθ) ← 0 Algorithm 2 for all t do Avail ← {i|(t ≥ ˆri)∧(ei(ˆθ, t) < li)∧(ei(ˆθ, t)+ ˆdi−t ≥ li)} if Avail = ∅ then S(ˆθ, t) ← arg maxi∈Avail(li + ei(ˆθ, t)) {Break ties in favor of lower ˆri} else S(ˆθ, t) ← 0 Theorem 11. When k = 1, and each agent i cannot falsely declare li, Mechanism Γ2 satisfies individual rationality and incentive compatibility. Theorem 12. 
Theorem 11. When k = 1, and each agent i cannot falsely declare li, Mechanism Γ2 satisfies individual rationality and incentive compatibility.

Theorem 12. When k = 1, and each agent i cannot falsely declare li, Mechanism Γ2 is 5-competitive.

Since this mechanism is essentially a simplification of Γ1, we omit proofs of these theorems. Basically, the fact that k = 1 and ˆli = li both hold allows Γ2 to substitute the priority (li + ei(ˆθ, t)) for the priority used in Γ1; and, since ˆvi is ignored, payments are no longer needed to ensure incentive compatibility.

5. COMPETITIVE LOWER BOUND

We now show that the competitive ratio of (1 + √k)^2 + 1 achieved by Γ1 is a lower bound for deterministic online mechanisms. To do so, we will appeal to a third requirement on a mechanism, non-negative payments (NNP), which requires that the center never pays an agent (formally, ∀i, ˆθ, pi(ˆθ) ≥ 0). Unlike IC and IR, this requirement is not standard in mechanism design. We note, however, that both Γ1 and Γ2 satisfy it trivially, and that, in the following proof, zero only serves as a baseline utility for an agent, and could be replaced by any non-positive function of ˆθ−i.

The proof of the lower bound uses an adversary argument similar to that used in [4] to show a lower bound of (1 + √k)^2 in the non-strategic setting, with the main novelty lying in the perturbation of the job sequence and the related incentive compatibility arguments. We first present a lemma relating to the recurrence used for this argument, with the proof omitted due to space constraints.

Lemma 13. For any k ≥ 1, for the recurrence defined by l_{i+1} = λ · li − k · Σ_{h=1}^{i} lh and l1 = 1, where (1 + √k)^2 − 1 < λ < (1 + √k)^2, there exists an integer m ≥ 1 such that (lm + k · Σ_{h=1}^{m−1} lh) / lm > λ.
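As a quick numeric illustration of Lemma 13, the following sketch iterates the recurrence until the stated ratio first exceeds λ. The choice k = 1 and λ = 3.75 is an arbitrary assumption that satisfies (1 + √k)^2 − 1 < λ < (1 + √k)^2.

```python
def first_m(k, lam, max_iter=10_000):
    lengths = [1.0]                                   # l_1 = 1
    for _ in range(max_iter):
        l_m, prefix = lengths[-1], sum(lengths[:-1])  # prefix = sum_{h=1}^{m-1} l_h
        if (l_m + k * prefix) / l_m > lam:
            return len(lengths), lengths
        lengths.append(lam * l_m - k * sum(lengths))  # l_{m+1} = lam*l_m - k*sum_{h<=m} l_h
    raise RuntimeError("no such m found within max_iter")

m, lengths = first_m(k=1.0, lam=3.75)
print("m =", m, "with lengths", [round(l, 2) for l in lengths])
```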
Theorem 14. There does not exist a deterministic online mechanism that satisfies NNP and that achieves a competitive ratio less than (1 + √k)^2 + 1.

Proof. Assume by contradiction that there exists a deterministic online mechanism Γ that satisfies NNP and that achieves a competitive ratio of c = (1 + √k)^2 + 1 − ε for some ε > 0 (and, by implication, satisfies IC and IR as well). Since a competitive ratio of c implies a competitive ratio of c + x, for any x > 0, we assume without loss of generality that ε < 1. First, we will construct a profile of agent types θ using an adversary argument. After possibly slightly perturbing θ to assure that a strictness property is satisfied, we will then use a more significant perturbation of θ to reach a contradiction.

We now construct the original profile θ. Pick an α such that 0 < α < ε, and define δ = α / (ck + 3k). The adversary uses two sequences of jobs: minor and major. Minor jobs i are characterized by li = δ, vi = k · δ, and zero laxity. The first minor job is released at time 0, and ri = d_{i−1} for all i > 1. The sequence stops whenever Γ completes any job. Major jobs also have zero laxity, but they have the smallest possible value ratio (that is, vi = li). The lengths of the major jobs that may be released, starting with i = 1, are determined by the following recurrence relation:

l_{i+1} = (c − 1 + α) · li − k · Σ_{h=1}^{i} lh,   with l1 = 1.

The bounds on α imply that (1 + √k)^2 − 1 < c − 1 + α < (1 + √k)^2, which allows us to apply Lemma 13. Let m be the smallest positive number such that (lm + k · Σ_{h=1}^{m−1} lh) / lm > c − 1 + α. The first major job has a release time of 0, and each major job i > 1 has a release time of ri = d_{i−1} − δ, just before the deadline of the previous job. The adversary releases major job i ≤ m if and only if each major job j < i was executed continuously over the range [rj, r_{j+1}]. No major job is released after job m.

In order to achieve the desired competitive ratio, Γ must complete some major job f, because Γoffline can always at least complete major job 1 (for a value of 1), and Γ can complete at most one minor job (for a value of α / (c + 3) < 1/c). Also, in order for this job f to be released, the processor time preceding rf can only be spent executing major jobs that are later abandoned. If f < m, then major job f + 1 will be released and it will be the final major job. Γ cannot complete job f + 1, because rf + lf = df > r_{f+1}. Therefore, θ consists of major jobs 1 through f + 1 (or, f, if f = m), plus minor jobs from time 0 through time df.

We now possibly perturb θ slightly. By IR, we know that vf ≥ pf(θ). Since we will later need this inequality to be strict, if vf = pf(θ), then change θf to θ'f, where r'f = rf, but v'f, l'f, and d'f are all incremented by δ over their respective values in θf. By IC, job f must still be completed by Γ for the profile (θ'f, θ−f). If not, then by IR and NNP we know that pf(θ'f, θ−f) = 0, and thus that uf(g(θ'f, θ−f), θ'f) = 0. However, agent f could then increase its utility by falsely declaring the original type θf, receiving a utility of uf(g(θf, θ−f), θ'f) = v'f − pf(θ) = δ > 0, violating IC. Furthermore, agent f must be charged the same amount (that is, pf(θ'f, θ−f) = pf(θ)), due to a similar incentive compatibility argument. Thus, for the remainder of the proof, assume that vf > pf(θ).

We now use a more substantial perturbation of θ to complete the proof. If f < m, then define θ''f to be identical to θf, except that d''f = d_{f+1} + lf, allowing job f to be completely executed after job f + 1 is completed. If f = m, then instead set d''f = df + lf. IC requires that for the profile (θ''f, θ−f), Γ still executes job f continuously over the range [rf, rf + lf], thus preventing job f + 1 from being completed. Assume by contradiction that this were not true. Then, at the original deadline df, job f is not completed. Consider the possible profile (θ''f, θ−f, θx), which differs from the new profile only in the addition of a job x which has zero laxity, rx = df, and vx = lx = max(d''f − df, (c + 1) · (lf + l_{f+1})). Because this new profile is indistinguishable from (θ''f, θ−f) to Γ before time df, it must schedule jobs in the same way until df. Then, in order to achieve the desired competitive ratio, it must execute job x continuously until its deadline, which is by construction at least as late as the new deadline d''f of job f. Thus, job f will not be completed, and, by IR and NNP, it must be the case that pf(θ''f, θ−f, θx) = 0 and uf(g(θ''f, θ−f, θx), θ''f) = 0. Using the fact that θ is indistinguishable from (θf, θ−f, θx) up to time df, if agent f falsely declared its type to be the original θf, then its job would be completed by df and it would be charged pf(θ). Its utility would then increase to uf(g(θf, θ−f, θx), θ''f) = vf − pf(θ) > 0, contradicting IC.

While Γ's execution must be identical for both (θf, θ−f) and (θ''f, θ−f), Γoffline can take advantage of the change.
If f < m, then Γ achieves a value of at most lf + δ (the value of job f if it were perturbed), while Γoffline achieves a value of at least k · (Σ_{h=1}^{f} lh − 2δ) + l_{f+1} + lf by executing minor jobs until r_{f+1}, followed by job f + 1 and then job f (we subtract two δ's instead of one because the last minor job before r_{f+1} may have to be abandoned). Substituting in for l_{f+1}, the competitive ratio is then at least:

  (k · (Σ_{h=1}^{f} lh − 2δ) + l_{f+1} + lf) / (lf + δ)
  = (k · Σ_{h=1}^{f} lh − 2k·δ + (c − 1 + α) · lf − k · Σ_{h=1}^{f} lh + lf) / (lf + δ)
  = (c · lf + (α · lf − 2k·δ)) / (lf + δ)
  ≥ (c · lf + ((ck + 3k)·δ − 2k·δ)) / (lf + δ)
  > c.

If instead f = m, then Γ achieves a value of at most lm + δ, while Γoffline achieves a value of at least k · (Σ_{h=1}^{m} lh − 2δ) + lm by completing minor jobs until dm = rm + lm, and then completing job m. The competitive ratio is then at least:

  (k · (Σ_{h=1}^{m} lh − 2δ) + lm) / (lm + δ)
  = (k · Σ_{h=1}^{m−1} lh − 2k·δ + k·lm + lm) / (lm + δ)
  > ((c − 1 + α) · lm − 2k·δ + k·lm) / (lm + δ)
  = ((c + k − 1) · lm + (α·lm − 2k·δ)) / (lm + δ)
  > c.
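As a sanity check on the two inequality chains above, the following sketch recomputes the major-job lengths and verifies both bounds numerically under illustrative (assumed) parameters k = 1, ε = 0.5, and α = 0.25.

```python
import math

# A numeric spot-check of the two ratio bounds that close the proof of Theorem 14,
# under illustrative (assumed) parameters.
k, eps, alpha = 1.0, 0.5, 0.25
c = (1 + math.sqrt(k)) ** 2 + 1 - eps          # the assumed competitive ratio of Gamma
delta = alpha / (c * k + 3 * k)                # delta = alpha / (ck + 3k)
lam = c - 1 + alpha                            # coefficient of the major-job recurrence

# Major job lengths l_1, ..., l_m from l_{i+1} = lam*l_i - k*sum_{h<=i} l_h, l_1 = 1,
# stopping at the first m satisfying the ratio condition of Lemma 13.
lengths = [1.0]
while (lengths[-1] + k * sum(lengths[:-1])) / lengths[-1] <= lam:
    lengths.append(lam * lengths[-1] - k * sum(lengths))
m = len(lengths)

# Case f < m: (k*(sum_{h<=f} l_h - 2*delta) + l_{f+1} + l_f) / (l_f + delta) > c.
for f in range(1, m):
    l_f, l_next = lengths[f - 1], lengths[f]
    ratio = (k * (sum(lengths[:f]) - 2 * delta) + l_next + l_f) / (l_f + delta)
    assert ratio > c

# Case f = m: (k*(sum_{h<=m} l_h - 2*delta) + l_m) / (l_m + delta) > c.
l_m = lengths[-1]
assert (k * (sum(lengths) - 2 * delta) + l_m) / (l_m + delta) > c
print("both bounds exceed c =", round(c, 3), "for m =", m)
```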
6. RELATED WORK

In this section we describe related work other than the two papers ([4] and [15]) on which this paper is based. Recent work related to this scheduling domain has focused on competitive analysis in which the online algorithm uses a faster processor than the offline algorithm (see, e.g., [13, 14]).

Mechanism design was also applied to a scheduling problem in [18]. In their model, the center owns the jobs in an offline setting, and it is the agents who can execute them. The private information of an agent is the time it will require to execute each job. Several incentive compatible mechanisms are presented that are based on approximation algorithms for the computationally infeasible optimization problem. This paper also launched the area of algorithmic mechanism design, in which the mechanism must satisfy computational requirements in addition to the standard incentive requirements. A growing sub-field in this area is multicast cost-sharing mechanism design (see, e.g., [1]), in which the mechanism must efficiently determine, for each agent in a multicast tree, whether the agent receives the transmission and the price it must pay. For a survey of this and other topics in distributed algorithmic mechanism design, see [9].

Online execution presents a different type of algorithmic challenge, and several other papers study online algorithms or mechanisms in economic settings. For example, [5] considers an online market clearing setting, in which the auctioneer matches buy and sell bids (which are assumed to be exogenous) that arrive and expire over time. In [2], a general method is presented for converting an online algorithm into an online mechanism that is incentive compatible with respect to values. Truthful declaration of values is also considered in [3] and [16], which both consider multi-unit online auctions. The main difference between the two is that the former considers the case of a digital good, which thus has unlimited supply. It is pointed out in [16] that their results continue to hold when the setting is extended so that bidders can delay their arrival. The only other paper we are aware of that addresses the issue of incentive compatibility in a real-time system is [11], which considers several variants of a model in which the center allocates bandwidth to agents who declare both their value and their arrival time. A dominant strategy IC mechanism is presented for the variant in which every point in time is essentially independent, while a Bayes-Nash IC mechanism is presented for the variant in which the center's current decision affects the cost of future actions.

7. CONCLUSION

In this paper, we considered an online scheduling domain for which algorithms with the best possible competitive ratio had been found, but for which new solutions were required when the setting is extended to include self-interested agents. We presented a mechanism that is incentive compatible with respect to release time, deadline, length and value, and that only increases the competitive ratio by one. We also showed how this mechanism could be simplified when k = 1 and each agent cannot lie about the length of its job. We then showed a matching lower bound on the competitive ratio that can be achieved by a deterministic mechanism that never pays the agents.

Several open problems remain in this setting. One is to determine whether the lower bound can be strengthened by removing the restriction of non-negative payments. Also, while we feel that it is reasonable to strengthen the assumption of knowing the maximum possible ratio of value densities (k) to knowing the actual range of possible value densities, it would be interesting to determine whether there exists a ((1 + √k)^2 + 1)-competitive mechanism under the original assumption. Finally, randomized mechanisms provide an unexplored area for future work.

8. REFERENCES
[1] A. Archer, J. Feigenbaum, A. Krishnamurthy, R. Sami, and S. Shenker, Approximation and collusion in multicast cost sharing, Games and Economic Behavior (to appear).
[2] B. Awerbuch, Y. Azar, and A. Meyerson, Reducing truth-telling online mechanisms to online optimization, Proceedings of the 35th Symposium on the Theory of Computing, 2003.
[3] Z. Bar-Yossef, K. Hildrum, and F. Wu, Incentive-compatible online auctions for digital goods, Proceedings of the 13th Annual ACM-SIAM Symposium on Discrete Algorithms, 2002.
[4] S. Baruah, G. Koren, D. Mao, B. Mishra, A. Raghunathan, L. Rosier, D. Shasha, and F. Wang, On the competitiveness of on-line real-time task scheduling, Journal of Real-Time Systems 4 (1992), no. 2, 125-144.
[5] A. Blum, T. Sandholm, and M. Zinkevich, Online algorithms for market clearing, Proceedings of the 13th Annual ACM-SIAM Symposium on Discrete Algorithms, 2002.
[6] A. Borodin and R. El-Yaniv, Online computation and competitive analysis, Cambridge University Press, 1998.
[7] R. Buyya, D. Abramson, J. Giddy, and H. Stockinger, Economic models for resource management and scheduling in grid computing, The Journal of Concurrency and Computation: Practice and Experience 14 (2002), 1507-1542.
[8] N. Camiel, S. London, N. Nisan, and O. Regev, The popcorn project: Distributed computation over the internet in java, 6th International World Wide Web Conference, 1997.
[9] J. Feigenbaum and S. Shenker, Distributed algorithmic mechanism design: Recent results and future directions, Proceedings of the 6th International Workshop on Discrete Algorithms and Methods for Mobile Computing and Communications, 2002, pp. 1-13.
[10] A. Fiat and G. Woeginger (editors), Online algorithms: The state of the art, Springer Verlag, 1998.
[11] E. Friedman and D. Parkes, Pricing WiFi at Starbucks: Issues in online mechanism design, EC'03, 2003.
[12] R. L. Graham, Bounds for certain multiprocessor anomalies, Bell System Technical Journal 45 (1966), 1563-1581.
[13] B. Kalyanasundaram and K. Pruhs, Speed is as powerful as clairvoyance, Journal of the ACM 47 (2000), 617-643.
[14] C. Koo, T. Lam, T. Ngan, and K. To, On-line scheduling with tight deadlines, Theoretical Computer Science 295 (2003), 251-261.
[15] G. Koren and D. Shasha, D-over: An optimal on-line scheduling algorithm for overloaded real-time systems, SIAM Journal of Computing 24 (1995), no. 2, 318-339.
[16] R. Lavi and N. Nisan, Competitive analysis of online auctions, EC'00, 2000.
[17] A. Mas-Colell, M. Whinston, and J. Green, Microeconomic theory, Oxford University Press, 1995.
[18] N. Nisan and A. Ronen, Algorithmic mechanism design, Games and Economic Behavior 35 (2001), 166-196.
[19] C. Papadimitriou, Algorithms, games, and the internet, STOC, 2001, pp. 749-753.
Mechanism Design for Online Real-Time Scheduling ABSTRACT For the problem of online real-time scheduling of jobs on a single processor, previous work presents matching upper and lower bounds on the competitive ratio that can be achieved by a deterministic algorithm. However, these results only apply to the non-strategic setting in which the jobs are released directly to the algorithm. Motivated by emerging areas such as grid computing, we instead consider this problem in an economic setting, in which each job is released to a separate, self-interested agent. The agent can then delay releasing the job to the algorithm, inflate its length, and declare an arbitrary value and deadline for the job, while the center determines not only the schedule, but the payment of each agent. For the resulting mechanism design problem (in which we also slightly strengthen an assumption from the non-strategic setting), we present a mechanism that addresses each incentive issue, while only increasing the competitive ratio by one. We then show a matching lower bound for deterministic mechanisms that never pay the agents. 1. INTRODUCTION We consider the problem of online scheduling of jobs on a single processor. Each job is characterized by a release time, a deadline, a processing time, and a value for successful completion by its deadline. The objective is to maximize the sum of the values of the jobs completed by their respective deadlines. The key challenge in this online setting is that the schedule must be constructed in real-time, even though nothing is known about a job until its release time. Competitive analysis [6, 10], with its roots in [12], is a well-studied approach for analyzing online algorithms by comparing them against the optimal offline algorithm, which has full knowledge of the input at the beginning of its execution. One interpretation of this approach is as a game between the designer of the online algorithm and an adversary. First, the designer selects the online algorithm. Then, the adversary observes the algorithm and selects the sequence of jobs that maximizes the competitive ratio: the ratio of the value of the jobs completed by an optimal offline algorithm to the value of those completed by the online algorithm. Two papers paint a complete picture in terms of competitive analysis for this setting, in which the algorithm is assumed to know k, the maximum ratio between the value densities (value divided by processing time) of any two jobs. For k = 1, [4] presents a 4-competitive algorithm, and proves that this is a lower bound on the competitive ratio for deterministic algorithms. The same paper also generalizes the \ / lower bound to (1 + k) 2 for any k ≥ 1, and [15] then \ / presents a matching (1 + k) 2-competitive algorithm. The setting addressed by these papers is completely nonstrategic, and the algorithm is assumed to always know the true characteristics of each job upon its release. However, in domains such as grid computing (see, for example, [7, 8]) this assumption is invalid, because "buyers" of processor time choose when and how to submit their jobs. Furthermore, "sellers" not only schedule jobs but also determine the amount that they charge buyers, an issue not addressed in the non-strategic setting. Thus, we consider an extension of the setting in which each job is owned by a separate, self-interested agent. Instead of being released to the algorithm, each job is now released only to its owning agent. 
Each agent now has four different ways in which it can manipulate the algorithm: it decides when to submit the job to the algorithm after the true release time, it can artificially inflate the length of the job, and it can declare an arbitrary value and deadline for the job. Because the agents are self-interested, they will choose to manipulate the algorithm if doing so will cause their job to be completed; and, indeed, one can find examples in which agents have incentive to manipulate the algorithms presented in [4] and [15]. The addition of self-interested agents moves the problem from the area of algorithm design to that of mechanism design [17], the science of crafting protocols for self-interested agents. Recent years have seen much activity at the interface of computer science and mechanism design (see, e.g., [9, 18, 19]). In general, a mechanism defines a protocol for interaction between the agents and the center that culminates with the selection of an outcome. In our setting, a mechanism will take as input a job from each agent, and return a schedule for the jobs, and a payment to be made by each agent to the center. A basic solution concept of mechanism design is incentive compatibility, which, in our setting, requires that it is always in each agent's best interests to immediately submit its job upon release, and to truthfully declare its value, length, and deadline. In order to evaluate a mechanism using competitive analysis, the adversary model must be updated. In the new model, the adversary still determines the sequence of jobs, but it is the self-interested agents who determine the observed input of the mechanism. Thus, in order to achieve a competitive ratio of c, an online mechanism must both be incentive compatible, and always achieve at least c1 of the value that the optimal offline mechanism achieves on the same sequence of jobs. The rest of the paper is structured as follows. In Section 2, we formally define and review results from the original, non-strategic setting. After introducing the incentive issues through an example, we formalize the mechanism design setting in Section 3. In Section 4 we present our first √ main result, a ((1 + k) 2 + 1) - competitive mechanism, and formally prove incentive compatibility and the competitive ratio. We also show how we can simplify this mechanism for the special case in which k = 1 and each agent cannot alter the length of its job. Returning the general setting, we show in Section 5 that this competitive ratio is a lower bound for deterministic mechanisms that do not pay agents. Finally, in Section 6, we discuss related work other than the directly relevant [4] and [15], before concluding with Section 7. 2. NON-STRATEGIC SETTING In this section, we formally define the original, non-strategic setting, and recap previous results. 2.1 Formulation There exists a single processor on which jobs can execute, and N jobs, although this number is not known beforehand. Each job i is characterized by a tuple θi = (ri, di, li, vi), which denotes the release time, deadline, length of processing time required, and value, respectively. The space Θi of possible tuples is the same for each job and consists of all θi such that ri, di, li, vi ∈ ~ + (thus, the model of time is continuous). Each job is released at time ri, at which point its three other characteristics are known. Nothing is known about the job before its arrival. Each deadline is firm (or, hard), which means that no value is obtained for a job that is completed after its deadline. 
Preemption of jobs is allowed, and it takes no time to switch between jobs. Thus, job i is completed if and only if the total time it executes on the processor before di is at least li. Let θ = (θ1,..., θN) denote the vector of tuples for all jobs, and let θ_i = (θ1,..., θi_1, θi +1,..., θN) denote the same vector without the tuple for job i. Thus, (θi, θ_i) denotes a complete vector of tuples. Define the value density ρi = vi li of job i to be the ratio of its value to its length. For an input θ, denote the maximum and minimum value densities as ρmin = mini ρi and ρmax = maxi ρi. The importance ratio is then defined to be ρmaxρmin, the maximal ratio of value densities between two jobs. The algorithm is assumed to always know an upper bound k on the importance ratio. For simplicity, we normalize the range of possible value densities so that ρmin = 1. An online algorithm is a function f: Θ1 ×...× ΘN → O that maps the vector of tuples (for any number N) to an outcome o. An outcome o ∈ O is simply a schedule of jobs on the processor, recorded by the function S: ~ + → {0, 1,..., N}, which maps each point in time to the active job, or to 0 if the processor is idle. To denote the total elapsed time that a job has spent on the processor at time t, we will use the function ei (t) = f0 t µ (S (x) = i) dx, whereµ (·) is an indicator function that job's laxity at time t is defined to be (di − t − li + ei (t)), returns 1 if the argument is true, and zero otherwise. A the amount of time that it can remain inactive and still be completed by its deadline. A job is abandoned if it cannot be completed by its deadline (formally, if di − t + ei (t) <li). Also, overload S (·) and ei (·) so that they can also take a vector θ as an argument. For example, S (θ, t) is shorthand for the S (t) of the outcome f (θ), and it denotes the active job at time t when the input is θ. Since a job cannot be executed before its release time, the space of possible outcomes is restricted in that S (θ, t) = i implies ri ≤ t. Also, because the online algorithm must produce the schedule over time, without knowledge of future inputs, it must make the same decision at time t for inputs that are indistinguishable at this time. Formally, let θ (t) denote the subset of the tuples in θ that satisfy ri ≤ t. The constraint is then that θ (t) = θ' (t) implies S (θ, t) = S (θ', t). The objective function is the sum of the values of the jobs that are completed by their respective deadlines: W (o, θ) = E (vi · µ (ei (θ, di) ≥ li)). Let W * (θ) = maxoEO W (o, θ) i denote the maximum possible total value for the profile θ. In competitive analysis, an online algorithm is evaluated by comparing it against an optimal offline algorithm. Because the offline algorithm knows the entire input θ at time 0 (but still cannot start each job i until time ri), it always achieves W * (θ). An online algorithm f (·) is (strictly) c-competitive if there does not exist an input θ such that c · W (f (θ), θ) <W * (θ). An algorithm that is c-competitive is also said to achieve a competitive ratio of c. We assume that there does not exist an overload period of infinite duration. A period of time [ts, tf] is overloaded if the sum of the lengths of the jobs whose release time and tion of the interval (formally, if tf − ts ≤ E deadline both fall within the time period exceeds the durai1 (ts <ri, di <tf) li). Without such an assumption, it is not possible to achieve a finite competitive ratio [15]. 
2.2 Previous Results In the non-strategic setting, [4] presents a 4-competitive algorithm called TD1 (version 2) for the case of k = 1, while √ [15] presents a (1 + k) 2-competitive algorithm called Dover for the general case of k ≥ 1. Matching lower bounds for deterministic algorithms for both of these cases were shown µ (in [4]. In this section we provide a high-level description of TD1 (version 2) using an example. TD1 (version 2) divides the schedule into intervals, each of which begins when the processor transitions from idle to busy (call this time tb), and ends with the completion of a job. The first active job of an interval may have laxity; however, for the remainder of the interval, preemption of the active job is only considered when some other job has zero laxity. For example, when the input is the set of jobs listed in Table 1, the first interval is the complete execution of job 1 over the range [0.0, 0.9]. No preemption is considered during this interval, because job 2 has laxity until time 1.5. Then, a new interval starts at tb = 0.9 when job 2 becomes active. Before job 2 can finish, preemption is considered at time 4.8, when job 3 is released with zero laxity. In order to decide whether to preempt the active job, TD1 (version 2) uses two more variables: te and p loss. The former records the latest deadline of a job that would be abandoned if the active job executes to completion (or, if no such job exists, the time that the active job will finish if it is not preempted). In this case, te = 17.0. The value te − tb represents the an upper bound on the amount of possible execution time "lost" to the optimal offline algorithm due to the completion of the active job. The other variable, p loss, is equal to the length of the first active job of the current interval. Because in general this job could have laxity, the offline algorithm may be able to complete it outside of the range [tb, te].1 If the algorithm completes the active job and this job's length is at least te − tb + p loss, then the algorithm is guaranteed to be 4-competitive for this interval (note that k = 1 implies that all jobs have the same value density and thus that lengths can used to compute the competitive ratio). Because this is not case at time 4.8 Table 1: Input used to recap TD1 (version 2) [4]. The up and down arrows represent ri and di, respectively, while the length of the box equals li. 3. MECHANISM DESIGN SETTING However, false information about job 2 would cause TD1 (version 2) to complete this job. For example, if job 2's deadˆd2 = 4.7, then it would have zero laxity at time 0.7. At this time, the algorithm would preempt job Job 2 would then complete before the arrival of job 3.2 In order to address incentive issues such as this one, we need to formalize the setting as a mechanism design problem. In this section we first present the mechanism design formulation, and then define our goals for the mechanism. 3.1 Formulation There exists a center, who controls the processor, and N agents, where the value of N is unknown by the center beforehand. Each job i is owned by a separate agent i. The characteristics of the job define the agent's type θi ∈ Θi. At time ri, agent i privately observes its type θi, and has no information about job i before ri. Thus, jobs are still released over time, but now each job is revealed only to the owning agent. 
Agents interact with the center through a direct mechanism Γ = (Θ1,..., ΘN, g (·)), in which each agent declares a ˆθi = (ˆri, ˆdi, ˆli, ˆvi), and g: Θ1 ×...× ΘN → O maps the declared types to an outcome o ∈ O. An outcome o = (S (·), p1,..., pN) consists of a schedule and a payment from each agent to the mechanism. In a standard mechanism design setting, the outcome is enforced at the end of the mechanism. However, since the end is not well-defined in this online setting, we choose to model returning the job if it is completed and collecting a payment from each agent i as occurring at ˆdi, which, according to the agent's declaration, is the latest relevant point of time for that agent. That is, even if job i is completed before ˆdi, the center does not return the job to agent i until that time. This modelling decision could instead be viewed as a decision by the mechanism designer from a larger space of possible mechanisms. Indeed, as we will discuss later, this decision of when to return a completed job is crucial to our mechanism. Each agent's utility, ui (g (ˆθ), θi) = vi · µ (ei (ˆθ, di) ≥ li) · ˆdi ≤ di) − pi (ˆθ), is a quasi-linear function of its value for its job (if completed and returned by its true deadline) and the payment it makes to the center. We assume that each agent is a rational, expected utility maximizer. Agent declarations are restricted in that an agent cannot declare a length shorter than the true length, since the center would be able to detect such a lie if the job were completed. On the other hand, in the general formulation we will allow agents to declare longer lengths, since in some settings it may be possible add unnecessary work to a job. However, we will also consider a restricted formulation in which this type of lie is not possible. The declared release time ˆri is the time that the agent chooses to submit job i to the center, and it cannot precede the time ri at which the job is revealed to the agent. The agent can declare an arbitrary deadline or value. To summarize, agent i can declare any type ˆθi = (ˆri, ˆdi, ˆli, ˆvi) such that ˆli ≥ li and ˆri ≥ ri. While in the non-strategic setting it was sufficient for the algorithm to know the upper bound k on the ratio ρmax ρmin, in the mechanism design setting we will strengthen this assumption so that the mechanism also knows ρmin (or, equivalently, the range [ρmin, ρmax] of possible value densities).3 Dover, we note that it is similar in its use of intervals and its preference for the active job. Also, we note that the lower bound we will show in Section 5 implies that false information can also benefit a job in Dover. 3Note that we could then force agent declarations to satisfy ρmin ≤ ˆvi ˆli ≤ ρmax. However, this restriction would not While we feel that it is unlikely that a center would know k without knowing this range, we later present a mechanism that does not depend on this extra knowledge in a restricted setting. The restriction on the schedule is now that S (ˆθ, t) = i implies ˆri <t, to capture the fact that a job cannot be scheduled on the processor before it is declared to the mechanism. As before, preemption of jobs is allowed, and job switching takes no time. The constraints due to the online mechanism's lack of knowledge of the future are that ˆθ (t) = ˆθ ~ (t) implies S (ˆθ, t) = S (ˆθ ~, t), and ˆθ (ˆdi) = ˆθ ~ (ˆdi) implies pi (ˆθ) = pi (ˆθ ~) for each agent i. The setting can then be summarized as follows. 
Overview1 of the Setting: for all t do The center instantiates S (ˆθ, t) +--i, for some i s.t. ˆri <t if 3i, (ri = t) then θi is revealed to agent i Center sets and collects payment pi (ˆθ) from agent i 3.2 Mechanism Goals Our aim as mechanism designer is to maximize the value of completed jobs, subject to the constraints of incentive compatibility and individual rationality. The condition for (dominant strategy) incentive compatibility is that for each agent i, regardless of its true type and of the declared types of all other agents, agent i cannot increase its utility by unilaterally changing its declaration. From an agent perspective, dominant strategies are desirable because the agent does not have to reason about either the strategies of the other agents or the distribution from the which other agent's types are drawn. From a mechanism designer perspective, dominant strategies are important because we can reasonably assume that an agent who has a dominant strategy will play according to it. For these reasons, in this paper we require dominant strategies, as opposed to a weaker equilibrium concept such as Bayes-Nash, under which we could improve upon our positive results .4 decrease the lower bound on the competitive ratio. 4A possible argument against the need for incentive compatibility is that an agent's lie may actually improve the schedule. In fact, this was the case in the example we showed for the false declaration ˆd2 = 4.7. However, if an agent lies due to incorrect beliefs over the future input, then the lie could instead make the schedule the worse (for example, if job 3 were never released, then job 1 would have been unnecessarily abandoned). Furthermore, if we do not know the beliefs of the agents, and thus cannot predict how they will lie, then we can no longer provide a competitive guarantee for our mechanism. While restricting ourselves to incentive compatible direct mechanisms may seem limiting at first, the Revelation Principle for Dominant Strategies (see, e.g., [17]) tells us that if our goal is dominant strategy implementation, then we can make this restriction without loss of generality. The second goal for our mechanism, individual rationality, requires that agents who truthfully reveal their type never have negative utility. The rationale behind this goal is that participation in the mechanism is assumed to be voluntary. Finally, the social welfare function that we aim to maximize is the same as the objective function of the non-strategic strategic setting, we will evaluate an online mechanism using competitive analysis to compare it against an optimal offline mechanism (which we will denote by Γoff line). An offline mechanism knows all of the types at time 0, and thus can always achieve W ∗ (θ).5 4. RESULTS ((1 + In this section, we first present our main positive result: a ✓ k) 2 +1) - competitive mechanism (Γ1). After providing some intuition as to why Γ1 satisfies individual rationality and incentive compatibility, we formally prove first these two properties and then the competitive ratio. We then consider a special case in which k = 1 and agents cannot lie about the length of their job, which allows us to alter this mechanism so that it no longer requires either knowledge of ρmin or the collection of payments from agents. Unlike TD1 (version 2) and Dover, Γ1 gives no preference to the active job. Instead, it always executes the avail ✓ able job with the highest priority: (ˆvi + k • ei (ˆθ, t) • ρmin). 
Each agent whose job is completed is then charged the lowest value that it could have declared such that its job still would have been completed, holding constant the rest of its declaration. By the use of a payment rule similar to that of a secondprice auction, Γ1 satisfies both IC with respect to values and IR. We now argue why it satisfies IC with respect to the other three characteristics. Declaring an "improved" job (i.e., declaring an earlier release time, a shorter length, or a later deadline) could possibly decrease the payment of an agent. However, the first two lies are not possible in our setting, while the third would cause the job, if it is completed, to be returned to the agent after the true deadline. This is the reason why it is important to always return a completed job at its declared deadline, instead of at the point at which it is completed. 5Another possibility is to allow only the agents to know their types at time 0, and to force Γof f line to be incentive compatible so that agents will truthfully declare their types at time 0. However, this would not affect our results, since executing a VCG mechanism (see, e.g., [17]) at time 0 both satisfies incentive compatibility and always maximizes social welfare. It remains to argue why an agent does not have incentive to "worsen" its job. The only possible effects of an inflated length are delaying the completion of the job and causing it to be abandoned, and the only possible effects of an earlier declared deadline are causing to be abandoned and causing it to be returned earlier (which has no effect on the agent's utility in our setting). On the other hand, it is less obvious why agents do not have incentive to declare a later release time. Consider a mechanism Γ ~ 1 that differs from Γ1 in that it does not preempt the active job i unless there exists an √ other job j such that (ˆvi + k · li (ˆθ, t) · ρmin) <ˆvj. Note that as an active job approaches completion in Γ1, its condition for preemption approaches that of Γ ~ 1. However, the types in Table 2 for the case of k = 1 show why an agent may have incentive to delay the arrival of its job under Γ ~ 1. Job 1 becomes active at time 0, and job 2 is abandoned upon its release at time 6, because 10 + 10 = v1 + l1> v2 = 13. Then, at time 8, job 1 is preempted by job 3, because 10 + 10 = v1 + l1 <v3 = 22. Job 3 then executes to completion, forcing job 1 to be abandoned. However, job 2 had more "weight" than job 1, and would have prevented job 3 from being executed if it had been the active job at time 8, since 13 + 13 = v2 + l2> v3 = 22. Thus, if agent 1 had falsely declared ˆr1 = 20, then job 3 would have been abandoned at time 8, and job 1 would have completed over the range [20, 30]. Table 2: Jobs used to show why a slightly altered version of Γ1 would not be incentive compatible with respect to release times. Intuitively, Γ1 avoids this problem because of two properties. First, when a job becomes active, it must have a greater priority than all other available jobs. Second, because a job's priority can only increase through the increase of its elapsed time, ei (ˆθ, t), the rate of increase of a job's priority is independent of its characteristics. These two properties together imply that, while a job is active, there cannot exist a time at which its priority is less than the priority that one of these other jobs would have achieved by executing on the processor instead. 
4.1 Proof of Individual Rationality and Incentive Compatibility After presenting the (trivial) proof of IR, we break the proof of IC into lemmas. THEOREM 1. Mechanism Γ1 satisfies individual rationality. PROOF. For arbitrary i, θi, ˆθ − i, if job i is not completed, then agent i pays nothing and thus has a utility of zero; that is, pi (θi, ˆθ − i) = 0 and ui (g (θi, ˆθ − i), θi) = 0. On the other hand, if job i is completed, then its value must exceed agent i's payment. Formally, ui (g (θi, ˆθ − i), θi) = vi − arg minvii ≥ 0 (ei (((ri, di, li, v ~ i), ˆθ − i), di) ≥ li) ≥ 0 must hold, since v ~ i = vi satisfies the condition. To prove IC, we need to show that for an arbitrary agent i, and an arbitrary profile ˆθ − i of declarations of the other agents, agent i can never gain by making a false declaration ˆθi = ~ θi, subject to the constraints that ˆri ≥ ri and ˆli ≥ li. We start by showing that, regardless of ˆvi, if truthful declarations of ri, di, and li do not cause job i to be completed, then "worse" declarations of these variables (that is, declarations that satisfy ˆri ≥ ri, ˆli ≥ li and ˆdi ≤ di) can never cause the job to be completed. We break this part of the proof into two lemmas, first showing that it holds for the release time, regardless of the declarations of the other variables, and then for length and deadline. PROOF. Assume by contradiction that this condition does not hold--that is, job i is not completed when ri is truthfully declared, but is completed for some false declaration ˆri ≥ ri. We first analyze the case in which the release time is truthfully declared, and then we show that job i cannot be completed when agent i delays submitting it to the center. Case I: Agent i declares ˆθ ~ i = (ri, ˆdi, ˆli, ˆvi). First, define the following three points in the execution of job i. • Let ts = arg mint (S ((ˆθ ~ i, ˆθ − i), t) = i) be the time that job i first starts execution. • Let tp = arg mint> ts (S ((ˆθ ~ i, ˆθ − i), t) = ~ i) be the time that job i is first preempted. • Let ta = arg mint (ei ((ˆθ ~ i, ˆθ − i), t) + ˆdi − t <ˆli) be the time that job i is abandoned. If ts and tp are undefined because job i never becomes active, then let ts = tp = ta. Also, partition the jobs declared by other agents before ta into the following three sets. • X = {j | (ˆrj <tp) ∧ (j = ~ i)} consists of the jobs (other than i) that arrive before job i is first preempted. √ • Y = {j | (tp ≤ ˆrj ≤ ta) ∧ (ˆvj> ˆvi + k · ei ((ˆθ' i, ˆθ_i), ˆrj)} consists of the jobs that arrive in the range [tp, ta] and that when they arrive have higher priority than job i (note that we are make use of the normalization). √ • Z = {j | (tp ≤ ˆrj ≤ ta) ∧ (ˆvj ≤ ˆvi + k · ei ((ˆθ' i, ˆθ_i), ˆrj)} consists of the jobs that arrive in the range [tp, ta] and that when they arrive have lower priority than job i. We now show that all active jobs during the range (tp, ta] must be either i or in the set Y. Unless tp = ta (in which case this property trivially holds), it must be the case that job i has a higher priority than an arbitrary job x ∈ X at time tp, since at the time just preceding tp job x was available and job i was active. Formally, ˆvx + k · ex ((ˆθ' i, ˆθ_i), tp) <ˆvi + k · ei ((ˆθ' i, ˆθ_i), tp) must hold .6 We can then show that, over the range [tp, ta], no job x ∈ X runs on the processor. Assume by contradiction that this is not true. Let tf ∈ [tp, ta] be the earliest time in this range that some job x ∈ X is active, which implies that ex ((ˆθ ` i, ˆθ_i), tf) = ex ((ˆθ ` i, ˆθ_i), tp). 
ˆθi = (ˆri, ˆdi, ˆli, ˆvi), where ˆri> ri. We now show that job i cannot be completed in this case, given that it was not completed in case I. First, we can restrict the range of ˆri that we need to consider as follows. Declaring ˆri ∈ (ri, ts] would not affect the schedule, since ts would still be the first time that job i executes. Also, declaring ˆri> ta could not cause the job to be completed, since di − ta <ˆli holds, which implies that job i would be abandoned at its release. Thus, we can restrict consideration to ˆri ∈ (ts, ta]. In order for declaring ˆθi to cause job i to be completed, a necessary condition is that the execution of some job yc ∈ Y must change during the range (tp, ta], since the only jobs other than i that are active during that range are in Y. ˆθ_i), t) = yc) ∧ (S ((ˆθi, ˆθ_i), t) = ~ yc)] be the first time that such a change occurs. We will now show that for any ˆri ∈ (ts, ta], there cannot exist a job with higher priority than yc at time tc, contradicting (S ((ˆθi, ˆθ_i), t) = ~ yc). First note that job i cannot have a higher priority, since there would have to exist a t ∈ (tp, tc) such that ∃ y ∈ 6For simplicity, when we give the formal condition for a job x to have a higher priority than another job y, we will assume that job x's priority is strictly greater than job y's, because, in the case of a tie that favors x, future ties would also be broken in favor of job x. the definition of tc. Now consider an arbitrary y ∈ Y such that y = ~ yc. In case I, we know that job y has lower priority than yc at time tc; Thus, moving to case II, job y must replace some other job before tc. Since ˆry ≥ tp, the condition is that there must exist some t ∈ (tp, tc) such that ∃ w ∈ Y ∪ {i}, (S ((ˆθ ` i, ˆθ_i), t) = w) ∧ (S ((ˆθi, ˆθ_i), t) = y). Since w ∈ Y would contradict the definition of tc, we know that w = i. That is, the job that y replaces must be i. By definition of the set Y, we know that √ ˆvy> ˆvi + k · ei ((ˆθ' i, ˆθ_i), ˆry). Thus, if ˆry ≤ t, then job i could not have executed instead of y in case I. On the other hand, if ˆry> t, then job y obviously could not execute at time t, contradicting the existence of such a time t. Now consider an arbitrary job x ∈ X. We know that in case I job i has a higher priority than job x at time ˆvyc + k · eyc ((ˆθ' i, ˆθ_i), tc). Since delaying i's arrival will not affect the execution up to time ts, and since job x cannot execute instead of a job y ∈ Y at any time t ∈ (tp, tc] by definition of tc, the only way for job x's priority to increase before tc as we move from case I to II is to replace job i over the range (ts, tc]. Thus, an upper bound on job x's priority when agent i declares ˆθi is: Thus, even at this upper bound, job yc would execute instead of job x at time tc. A similar argument applies to an arbitrary job z ∈ Z, starting at it release time ˆrz. Since the sets {i}, X, Y, Z partition the set of jobs released before ta, we have shown that no job could execute instead of job yc, contradicting the existence of tc, and completing the proof. LEMMA 3. In mechanism Γ1, the following condition holds for all i, θi, ˆθ_i: ∀ ˆvi, ˆli ≥ li, ˆdi ≤ di, ~ ei ~ ((ri, ˆdi, ˆli, ˆvi), ˆθ_i), ˆdi ~ ≥ ˆli ~ = ⇒ ~ ei ~ ~ ((ri, di, li, ˆvi), ˆθ_i), ˆdi ~ ≥ li PROOF. Assume by contradiction there exists some instantiation of the above variables such that job i is not completed when li and di are truthfully declared, but is completed for some pair of false declarations ˆli ≥ li and ˆdi ≤ di. 
Note that the only effect that ˆdi and ˆli have on the execution of the algorithm is on whether or not i ∈ Avail. Specifically, they affect the two conditions: (ei (ˆθ, t) <ˆli) and (ei (ˆθ, t) + ˆdi − t ≥ ˆli). Because job i is completed when ˆli and ˆdi are declared, the former condition (for completion) must become false before the latter. Since truthfully declaring li ≤ ˆli and di ≥ ˆdi will only make the former condition become false earlier and the latter condition become false later, the execution of the algorithm will not be affected when moving to truthful declarations, and job i will be completed, a contradiction. We now use these two lemmas to show that the payment for a completed job can only increase by falsely declaring "worse" ˆli, ˆdi, and ˆri. We can then show that job i has a higher priority at time tf as √ √ follows: ˆvx + k · ex ((ˆθ' i, ˆθ_i), tf) = ˆvx + k · ex ((ˆθ' i, ˆθ_i), tp) <√ √ ˆvi + k · ei ((ˆθ' i, ˆθ_i), tp) ≤ ˆvi + k · ei ((ˆθ' i, ˆθ_i), tf), contradicting the fact that job x is active at time tf. A similar argument applies to an arbitrary job z ∈ Z, starting at it release time ˆrz> tp, since by definition job i has a higher priority at that time. The only remaining jobs that can be active over the range (tp, ta] are i and those in the set Y. Case II: Agent i declares Let tc = arg mintE (tp, ta] [∃ yc ∈ Y, (S (( PROOF. Assume by contradiction that this condition does not hold. This implies that there exists some value v ~ i such that the condition (ei (((ˆri, ˆdi, ˆli, v ~ i), ˆθ_i), ˆdi) ≥ ˆli) holds, but (ei (((ri, di, li, v ~ i), ˆθ_i), di) ≥ li) does not. Applying Lemmas 2 and 3: (ei (((ˆri, ˆdi, ˆli, v ~ i), ˆθ_i), ˆdi) ≥ ˆli) = ⇒ Finally, the following lemma tells us that the completion of a job is monotonic in its declared value. ~ ei ~ ~ ((ˆri, ˆdi, ˆli, ˆv ~ i), ˆθ_i), ˆdi ~ ≥ ˆli The proof, by contradiction, of this lemma is omitted because it is essentially identical to that of Lemma 2 for ˆri. In case I, agent i declares (ˆri, ˆdi, ˆli, ˆv ~ i) and the job is not completed, while in case II he declares (ˆri, ˆdi, ˆli, ˆvi) and the job is completed. The analysis of the two cases then proceeds as before--the execution will not change up to time ts because the initial priority of job i decreases as we move from case I to II; and, as a result, there cannot be a change in the execution of a job other than i over the range (tp, ta]. We can now combine the lemmas to show that no profitable deviation is possible. THEOREM 6. Mechanism Γ1 satisfies incentive compatibility. PROOF. For an arbitrary agent i, we know that ˆri ≥ ri and ˆli ≥ li hold by assumption. We also know that agent i has no incentive to declare ˆdi> di, because job i would never be returned before its true deadline. Then, because the payment function is non-negative, agent i's utility could not exceed zero. By IR, this is the minimum utility it would achieve if it truthfully declared θi. Thus, we can restrict consideration to ˆθi that satisfy ˆri ≥ ri, ˆli ≥ li, and ˆdi ≤ di. Again using IR, we can further restrict consideration to ˆθi that cause job i to be completed, since any other ˆθi yields a utility of zero. If truthful declaration of θi causes job i to be completed, then by Lemma 4 any such false declaration ˆθi could not decrease the payment of agent i. On the other hand, if truthful declaration does not cause job i to be completed, then declaring such a ˆθi will cause agent i to have negative Lemmas 5 and 4, respectively. 
4.2 Proof of Competitive Ratio The proof of the competitive ratio, which makes use of techniques adapted from those used in [15], is also broken into lemmas. Having shown IC, we can assume truthful declaration (ˆθ = θ). Since we have also shown IR, in order to prove the competitive ratio it remains to bound the loss of social welfare against Γoff line. Denote by (1, 2,..., F) the sequence of jobs completed by Γ1. Divide time into intervals If = (topen f, tclose f], one for each job f in this sequence. Set tclose f to be the time at which job f is completed, and set topen f be the first time that the processor is not idle in interval If. LEMMA 7. For any interval If, the following inequality holds: tclose f − tbegin f ≤ (1 + √ 1k) · vf PROOF. Interval If begins with a (possibly zero length) period of time in which the processor is idle because there is no available job. Then, it continuously executes a sequence of jobs (1, 2,..., c), where each job i in this sequence is preempted by job i + 1, except for job c, which is completed (thus, job c in this sequence is the same as job f is the global sequence of completed jobs). Let tsi be the time that job i begins execution. Note that ts1 = tbegin f. √ Over the range [tbegin f, tclose f], the priority (vi + k · ei (θ, t)) of the active job is monotonically increasing with time, because this function linearly increases while a job is active, and can only increase at a point in time when preemption occurs. Thus, each job i> 1 in this sequence begins execution at its release time (that is, tsi = ri), because its priority does not increase while it is not active. We now show that the value of the completed job c ex √ ceeds the product of k and the time spent in the interval on jobs 1 through c − 1, or, more formally, that the following √ k ~ c_1 condition holds: vc ≥ h = 1 (eh (θ, ts h +1) − eh (θ, tsh)). To show this, we will prove by induction that the stronger con √ k ~ i_1 Base Case: For i = 1, v1 ≥ h = 1 eh (θ, tsh +1) = 0, since the sum is over zero elements. Inductive Step: For an arbitrary 1 ≤ i <c, we assume √ k ~ i_1 that vi ≥ h = 1 eh (θ, tsh +1) holds. At time tsi +1, we √ know that vi +1 ≥ vi + k · ei (θ, tsi +1) holds, because tsi +1 = ri +1. These two inequalities together imply that vi +1 ≥ √ k ~ ih = 1 eh (θ, tsh +1), completing the inductive step. We also know that tclose f − tsc ≤ lc ≤ vc must hold, by the simplifying normalization of ρmin = 1 and the fact that job c's execution time cannot exceed its length. We can thus bound the total execution time of If by: tclose We now consider the possible execution of uncompleted jobs by Γoff line. Associate each job i that is not completed by Γ1 with the interval during which it was abandoned. All jobs are now associated with an interval, since there are no gaps between the intervals, and since no job i can be abandoned after the close of the last interval at tclose F. Because the processor is idle after tclose F, any such job i would become active at some time t ≥ tclose F, which would lead to the completion of some job, creating a new interval and contradicting the fact that IF is the last one. LEMMA 5. In mechanism Γ1, the following condition holds for all i, ˆθi, ˆθ_i: ∀ ˆv ~ i ≥ ˆvi, ~ ei ~ ((ˆri, ˆdi, ˆli, ˆvi), ˆθ_i), ˆdi ~ ≥ ˆli ~ = ⇒ The following lemma is equivalent to Lemma 5.6 of [15], but the proof is different for our mechanism. √ in If, the following inequality holds: vi ≤ (1 + k) vf. PROOF. 
Assume by contradiction that there exists a job the priority of job f is vf + k · lf <(1 + k) vf. Because the priority of the active job monotonically increases over the range [tbegin f, tclose f], job i would have a higher priority than the active job (and thus begin execution) at some time t ∈ [tbeginf, tclose f]. Again applying monotonicity, this would imply that the priority of the active job at tclose As in [15], for each interval If, we give Γoff line the following "gift": k times the amount of time in the range [tbegin f, tclose f] that it does not schedule a job. Additionally, we "give" the adversary vf, since the adversary may be able to complete this job at some future time, due to the fact that Γ1 ignores deadlines. The following lemma is Lemma 5.10 in [15], and its proof now applies directly. LEMMA 9. [15] With the above gifts the total net gain obtained by the clairvoyant algorithm from scheduling the jobs √ abandoned during If is not greater than (1 + k) · vf. The intuition behind this lemma is that the best that the adversary can do is to take almost all of the "gift" of f) (intuitively, this is equivalent to executing jobs with the maximum possible value density over the time that Γ1 is active), and then begin execution of a job abandoned by Γ1 right before tclose f. By Lemma 8, the value of √ this job is bounded by (1 + k) · vf. We can now combine the results of these lemmas to prove the competitive ratio. THEOREM 10. Mechanism Γ1 is ((1 + √ k) 2 +1) - competitive. PROOF. Using the fact that the way in which jobs are associated with the intervals partitions the entire set of jobs, we can show the competitive ratio by showing that Γ1 is ((1 + √ k) 2 +1) - competitive for each interval in the sequence (1,..., F). Over an arbitrary interval If, the offline algo √ rithm can achieve at most (tclose f − tbegin f) · k + vf + (1 + k) vf, from the two gifts and the net gain bounded by Lemma 9. Applying Lemma 7, this quantity is then bounded from √ √ above by (1 + √ 1 k) · vf · k + vf + (1 + k) vf = ((1 + k) 2 +1) · vf. Since Γ1 achieves vf, the competitive ratio holds. 4.3 Special Case: Unalterable length and k = 1 While so far we have allowed each agent to lie about all four characteristics of its job, lying about the length of the job is not possible in some settings. For example, a user may not know how to alter a computational problem in a way that both lengthens the job and allows the solution of the original problem to be extracted from the solution to the altered problem. Another restriction that is natural in some settings is uniform value densities (k = 1), which was the case considered by [4]. If the setting satisfies these two conditions, then, by using Mechanism Γ2, we can achieve a competitive ratio of 5 (which is the same competitive ratio as Γ1 for the case of k = 1) without knowledge of ρmin and without the use of payments. The latter property may be necessary in settings that are more local than grid computing (e.g., within a department) but in which the users are still self-interested .7 THEOREM 11. When k = 1, and each agent i cannot falsely declare li, Mechanism Γ2 satisfies individual rationality and incentive compatibility. THEOREM 12. When k = 1, and each agent i cannot falsely declare li, Mechanism Γ2 is 5-competitive. Since this mechanism is essentially a simplification of Γ1, we omit proofs of these theorems. 
Basically, the fact that k = 1 and ˆli = li both hold allows Γ2 to substitute the priority (li + ei (ˆθ, t)) for the priority used in Γ1; and, since ˆvi is ignored, payments are no longer needed to ensure incentive compatibility. 5. COMPETITIVE LOWER BOUND √ We now show that the competitive ratio of (1 + k) 2 + 1 achieved by Γ1 is a lower bound for deterministic online mechanisms. To do so, we will appeal to third requirement on a mechanism, non-negative payments (NNP), which requires that the center never pays an agent (formally, ∀ i, ˆθ, pi (ˆθi) ≥ 0). Unlike IC and IR, this requirement is not standard in mechanism design. We note, however, that both Γ1 and Γ2 satisfy it trivially, and that, in the following proof, zero only serves as a baseline utility for an agent, and could be replaced by any non-positive function of ˆ θ − i. The proof of the lower bound uses an adversary argument similar to that used in [4] to show a lower bound of (1 + √ k) 2 in the non-strategic setting, with the main novelty lying in the perturbation of the job sequence and the related incentive compatibility arguments. We first present a lemma relating to the recurrence used for this argument, with the proof omitted due to space constraints. tive ratio less than (1 + k) 2 + 1. PROOF. Assume by contradiction that there exists a deterministic online mechanism Γ that satisfies NNP and that \ / achieves a competitive ratio of c = (1 + k) 2 +1--e for some e> 0 (and, by implication, satisfies IC and IR as well). Since a competitive ratio of c implies a competitive ratio of c + x, for any x> 0, we assume without loss of generality that e <1. First, we will construct a profile of agent types θ using an adversary argument. After possibly slightly perturbing θ to assure that a strictness property is satisfied, we will then use a more significant perturbation of θ to reach a contradiction. We now construct the original profile θ. Pick an α such that 0 <α <e, and define δ = α ck +3 k. The adversary uses two sequences of jobs: minor and major. Minor jobs i are characterized by li = δ, vi = k • δ, and zero laxity. The first minor job is released at time 0, and ri = di − 1 for all i> 1. The sequence stops whenever Γ completes any job. Major jobs also have zero laxity, but they have the smallest possible value ratio (that is, vi = li). The lengths of the major jobs that may be released, starting with i = 1, are determined by the following recurrence relation. The first major job has a release time of 0, and each major job i> 1 has a release time of ri = di − 1--δ, just before the deadline of the previous job. The adversary releases major job i <m if and only if each major job j <i was executed continuously over the range [ri, ri +1]. No major job is released after job m. In order to achieve the desired competitive ratio, Γ must complete some major job f, because Γoff line can always at least complete major job 1 (for a value of 1), and Γ can complete at most one minor job (for a value of α c +3 <c1). Also, in order for this job f to be released, the processor time preceding rf can only be spent executing major jobs that are later abandoned. If f <m, then major job f +1 will be released and it will be the final major job. Γ cannot complete job f + 1, because rf + lf = df> rf +1. Therefore, θ consists of major jobs 1 through f +1 (or, f, if f = m), plus minor jobs from time 0 through time df. We now possibly perturb θ slightly. By IR, we know that vf> pf (θ). 
Since we will later need this inequality to be strict, if vf = pf(θ), then change θf to θ'f, where r'f = rf, but v'f, l'f, and d'f are all incremented by δ over their respective values in θf. By IC, job f must still be completed by Γ for the profile (θ'f, θ−f). If not, then by IR and NNP we know that pf(θ'f, θ−f) = 0, and thus that uf(g(θ'f, θ−f), θ'f) = 0. However, agent f could then increase its utility by falsely declaring the original type θf, receiving a utility of uf(g(θf, θ−f), θ'f) = v'f − pf(θ) = δ > 0, violating IC. Furthermore, agent f must be charged the same amount (that is, pf(θ'f, θ−f) = pf(θ)), due to a similar incentive compatibility argument. Thus, for the remainder of the proof, assume that vf > pf(θ). We now use a more substantial perturbation of θ to complete the proof. If f < m, then define θ''f to be identical to θf, except that d''f = df+1 + lf, allowing job f to be completely executed after job f+1 is completed. If f = m, then instead set d''f = df + lf. IC requires that for the profile (θ''f, θ−f), Γ still executes job f continuously over the range [rf, rf + lf], thus preventing job f+1 from being completed. Assume by contradiction that this were not true. Then, at the original deadline df, job f is not completed. Consider the possible profile (θ''f, θ−f, θx), which differs from the new profile only in the addition of a job x which has zero laxity, rx = df, and vx = lx = max(d''f − df, (c + 1) · (lf + lf+1)). Because this new profile is indistinguishable from (θ''f, θ−f) to Γ before time df, it must schedule jobs in the same way until df. Then, in order to achieve the desired competitive ratio, it must execute job x continuously until its deadline, which is by construction at least as late as the new deadline d''f of job f. Thus, job f will not be completed, and, by IR and NNP, it must be the case that pf(θ''f, θ−f, θx) = 0 and uf(g(θ''f, θ−f, θx), θ''f) = 0. Using the fact that θ is indistinguishable from (θf, θ−f, θx) up to time df, if agent f falsely declared its type to be the original θf, then its job would be completed by df and it would be charged pf(θ). Its utility would then increase to uf(g(θf, θ−f, θx), θ''f) = vf − pf(θ) > 0, contradicting IC. While Γ's execution must be identical for both (θf, θ−f) and (θ''f, θ−f), Γoffline can take advantage of the change. If f < m, then Γ achieves a value of at most lf + δ (the value of job f if it were perturbed), while Γoffline achieves a value of at least k · (l1 + · · · + lf − 2δ) + lf+1 + lf by executing minor jobs until rf+1, followed by job f+1 and then job f (we subtract two δ's instead of one because the last minor job before rf+1 may have to be abandoned). Substituting in for lf+1 using the recurrence shows that the resulting competitive ratio exceeds c, a contradiction. If instead f = m, then Γ achieves a value of at most lm + δ, while Γoffline achieves a value of at least k · (l1 + · · · + lm − 2δ) + lm by executing minor jobs until dm and then completing job m. The competitive ratio in this case again exceeds c, completing the contradiction. 6. RELATED WORK In this section we describe related work other than the two papers ([4] and [15]) on which this paper is based. Recent work related to this scheduling domain has focused on competitive analysis in which the online algorithm uses a faster processor than the offline algorithm (see, e.g., [13, 14]). Mechanism design was also applied to a scheduling problem in [18]. In their model, the center owns the jobs in an offline setting, and it is the agents who can execute them.
The private information of an agent is the time it will require to execute each job. Several incentive compatible mechanisms are presented that are based on approximation algorithms for the computationally infeasible optimization problem. This paper also launched the area of algorithmic mechanism design, in which the mechanism must satisfy computational requirements in addition to the standard incentive requirements. A growing sub-field in this area is multicast cost-sharing mechanism design (see, e.g., [1]), in which the mechanism must efficiently determine, for each agent in a multicast tree, whether the agent receives the transmission and the price it must pay. For a survey of this and other topics in distributed algorithmic mechanism design, see [9]. Online execution presents a different type of algorithmic challenge, and several other papers study online algorithms or mechanisms in economic settings. For example, [5] considers an online market clearing setting, in which the auctioneer matches buy and sell bids (which are assumed to be exogenous) that arrive and expire over time. In [2], a general method is presented for converting an online algorithm into an online mechanism that is incentive compatible with respect to values. Truthful declaration of values is also considered in [3] and [16], which both consider multi-unit online auctions. The main difference between the two is that the former considers the case of a digital good, which thus has unlimited supply. It is pointed out in [16] that their results continue to hold when the setting is extended so that bidders can delay their arrival. The only other paper we are aware of that addresses the issue of incentive compatibility in a real-time system is [11], which considers several variants of a model in which the center allocates bandwidth to agents who declare both their value and their arrival time. A dominant strategy IC mechanism is presented for the variant in which every point in time is essentially independent, while a Bayes-Nash IC mechanism is presented for the variant in which the center's current decision affects the cost of future actions. 7. CONCLUSION In this paper, we considered an online scheduling domain for which algorithms with the best possible competitive ratio had been found, but for which new solutions were required when the setting is extended to include self-interested agents. We presented a mechanism that is incentive compatible with respect to release time, deadline, length and value, and that only increases the competitive ratio by one. We also showed how this mechanism could be simplified when k = 1 and each agent cannot lie about the length of its job. We then showed a matching lower bound on the competitive ratio that can be achieved by a deterministic mechanism that never pays the agents. Several open problems remain in this setting. One is to determine whether the lower bound can be strengthened by removing the restriction of non-negative payments. Also, while we feel that it is reasonable to strengthen the assumption of knowing the maximum possible ratio of value densities (k) to knowing the actual range of possible value densities, it would be interesting to determine whether there exists a ((1 + √k)² + 1)-competitive mechanism under the original assumption. Finally, randomized mechanisms provide an unexplored area for future work.
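To make the priority rule behind the k = 1 special case more concrete, here is a minimal discrete-time sketch of a preemptive scheduler that always runs the released, unfinished job with the largest priority li + ei(t) (declared length plus processor time already received), crediting a job's value only if it finishes by its deadline. This is our own illustration under simplifying assumptions (integer time steps, our own tie-breaking and demo instance), not the paper's exact Mechanism Γ2.

    # Sketch of a priority-driven preemptive scheduler in the spirit of the
    # k = 1 special case: priority(i) = l_i + e_i(t).  All names and the demo
    # instance below are illustrative assumptions, not taken from the paper.
    from dataclasses import dataclass

    @dataclass
    class Job:
        name: str
        release: int      # r_i
        deadline: int     # d_i
        length: int       # l_i
        value: int        # v_i (with k = 1, value density is uniform)
        elapsed: int = 0  # e_i(t): processor time received so far

    def run_priority_scheduler(jobs, horizon):
        """Simulate unit time steps; return (completed job names, total value)."""
        completed, total_value = [], 0
        for t in range(horizon):
            ready = [j for j in jobs if j.release <= t and j.elapsed < j.length]
            if not ready:
                continue
            # Preempt in favor of the highest-priority ready job.
            active = max(ready, key=lambda j: j.length + j.elapsed)
            active.elapsed += 1
            if active.elapsed == active.length and t + 1 <= active.deadline:
                completed.append(active.name)
                total_value += active.value
        return completed, total_value

    # A zero-laxity job released at 0 is preempted (and lost) in favor of a
    # longer, higher-priority job released at time 1.
    demo = [Job("a", 0, 3, 3, 3), Job("b", 1, 9, 5, 5)]
    print(run_priority_scheduler(demo, horizon=10))   # (['b'], 5)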
Mechanism Design for Online Real-Time Scheduling ABSTRACT For the problem of online real-time scheduling of jobs on a single processor, previous work presents matching upper and lower bounds on the competitive ratio that can be achieved by a deterministic algorithm. However, these results only apply to the non-strategic setting in which the jobs are released directly to the algorithm. Motivated by emerging areas such as grid computing, we instead consider this problem in an economic setting, in which each job is released to a separate, self-interested agent. The agent can then delay releasing the job to the algorithm, inflate its length, and declare an arbitrary value and deadline for the job, while the center determines not only the schedule, but the payment of each agent. For the resulting mechanism design problem (in which we also slightly strengthen an assumption from the non-strategic setting), we present a mechanism that addresses each incentive issue, while only increasing the competitive ratio by one. We then show a matching lower bound for deterministic mechanisms that never pay the agents. 1. INTRODUCTION We consider the problem of online scheduling of jobs on a single processor. Each job is characterized by a release time, a deadline, a processing time, and a value for successful completion by its deadline. The objective is to maximize the sum of the values of the jobs completed by their respective deadlines. The key challenge in this online setting is that the schedule must be constructed in real-time, even though nothing is known about a job until its release time. Competitive analysis [6, 10], with its roots in [12], is a well-studied approach for analyzing online algorithms by comparing them against the optimal offline algorithm, which has full knowledge of the input at the beginning of its execution. One interpretation of this approach is as a game between the designer of the online algorithm and an adversary. First, the designer selects the online algorithm. Then, the adversary observes the algorithm and selects the sequence of jobs that maximizes the competitive ratio: the ratio of the value of the jobs completed by an optimal offline algorithm to the value of those completed by the online algorithm. Two papers paint a complete picture in terms of competitive analysis for this setting, in which the algorithm is assumed to know k, the maximum ratio between the value densities (value divided by processing time) of any two jobs. For k = 1, [4] presents a 4-competitive algorithm, and proves that this is a lower bound on the competitive ratio for deterministic algorithms. The same paper also generalizes the lower bound to (1 + √k)² for any k ≥ 1, and [15] then presents a matching (1 + √k)²-competitive algorithm. The setting addressed by these papers is completely non-strategic, and the algorithm is assumed to always know the true characteristics of each job upon its release. However, in domains such as grid computing (see, for example, [7, 8]) this assumption is invalid, because "buyers" of processor time choose when and how to submit their jobs. Furthermore, "sellers" not only schedule jobs but also determine the amount that they charge buyers, an issue not addressed in the non-strategic setting. Thus, we consider an extension of the setting in which each job is owned by a separate, self-interested agent. Instead of being released to the algorithm, each job is now released only to its owning agent.
Each agent now has four different ways in which it can manipulate the algorithm: it decides when to submit the job to the algorithm after the true release time, it can artificially inflate the length of the job, and it can declare an arbitrary value and deadline for the job. Because the agents are self-interested, they will choose to manipulate the algorithm if doing so will cause their job to be completed; and, indeed, one can find examples in which agents have incentive to manipulate the algorithms presented in [4] and [15]. The addition of self-interested agents moves the problem from the area of algorithm design to that of mechanism design [17], the science of crafting protocols for self-interested agents. Recent years have seen much activity at the interface of computer science and mechanism design (see, e.g., [9, 18, 19]). In general, a mechanism defines a protocol for interaction between the agents and the center that culminates with the selection of an outcome. In our setting, a mechanism will take as input a job from each agent, and return a schedule for the jobs, and a payment to be made by each agent to the center. A basic solution concept of mechanism design is incentive compatibility, which, in our setting, requires that it is always in each agent's best interests to immediately submit its job upon release, and to truthfully declare its value, length, and deadline. In order to evaluate a mechanism using competitive analysis, the adversary model must be updated. In the new model, the adversary still determines the sequence of jobs, but it is the self-interested agents who determine the observed input of the mechanism. Thus, in order to achieve a competitive ratio of c, an online mechanism must both be incentive compatible, and always achieve at least 1/c of the value that the optimal offline mechanism achieves on the same sequence of jobs. The rest of the paper is structured as follows. In Section 2, we formally define and review results from the original, non-strategic setting. After introducing the incentive issues through an example, we formalize the mechanism design setting in Section 3. In Section 4 we present our first main result, a ((1 + √k)² + 1)-competitive mechanism, and formally prove incentive compatibility and the competitive ratio. We also show how we can simplify this mechanism for the special case in which k = 1 and each agent cannot alter the length of its job. Returning to the general setting, we show in Section 5 that this competitive ratio is a lower bound for deterministic mechanisms that do not pay agents. Finally, in Section 6, we discuss related work other than the directly relevant [4] and [15], before concluding with Section 7. 2. NON-STRATEGIC SETTING 2.1 Formulation 2.2 Previous Results 3. MECHANISM DESIGN SETTING 3.1 Formulation Overview of the Setting 3.2 Mechanism Goals 4. RESULTS 4.1 Proof of Individual Rationality and Incentive Compatibility 4.2 Proof of Competitive Ratio 4.3 Special Case: Unalterable length and k = 1 5. COMPETITIVE LOWER BOUND 6. RELATED WORK In this section we describe related work other than the two papers ([4] and [15]) on which this paper is based. Recent work related to this scheduling domain has focused on competitive analysis in which the online algorithm uses a faster processor than the offline algorithm (see, e.g., [13, 14]). Mechanism design was also applied to a scheduling problem in [18]. In their model, the center owns the jobs in an offline setting, and it is the agents who can execute them.
The private information of an agent is the time it will require to execute each job. Several incentive compatible mechanisms are presented that are based on approximation algorithms for the computationally infeasible optimization problem. This paper also launched the area of algorithmic mechanism design, in which the mechanism must sat isfy computational requirements in addition to the standard incentive requirements. A growing sub-field in this area is multicast cost-sharing mechanism design (see, e.g., [1]), in which the mechanism must efficiently determine, for each agent in a multicast tree, whether the agent receives the transmission and the price it must pay. For a survey of this and other topics in distributed algorithmic mechanism design, see [9]. Online execution presents a different type of algorithmic challenge, and several other papers study online algorithms or mechanisms in economic settings. For example, [5] considers an online market clearing setting, in which the auctioneer matches buy and sells bids (which are assumed to be exogenous) that arrive and expire over time. In [2], a general method is presented for converting an online algorithm into an online mechanism that is incentive compatible with respect to values. Truthful declaration of values is also considered in [3] and [16], which both consider multi-unit online auctions. The main difference between the two is that the former considers the case of a digital good, which thus has unlimited supply. It is pointed out in [16] that their results continue to hold when the setting is extended so that bidders can delay their arrival. The only other paper we are aware of that addresses the issue of incentive compatibility in a real-time system is [11], which considers several variants of a model in which the center allocates bandwidth to agents who declare both their value and their arrival time. A dominant strategy IC mechanism is presented for the variant in which every point in time is essentially independent, while a Bayes-Nash IC mechanism is presented for the variant in which the center's current decision affects the cost of future actions. 7. CONCLUSION In this paper, we considered an online scheduling domain for which algorithms with the best possible competitive ratio had been found, but for which new solutions were required when the setting is extended to include self-interested agents. We presented a mechanism that is incentive compatible with respect to release time, deadline, length and value, and that only increases the competitive ratio by one. We also showed how this mechanism could be simplified when k = 1 and each agent cannot lie about the length of its job. We then showed a matching lower bound on the competitive ratio that can be achieved by a deterministic mechanism that never pays the agents. Several open problems remain in this setting. One is to determine whether the lower bound can be strengthened by removing the restriction of non-negative payments. Also, while we feel that it is reasonable to strengthen the assumption of knowing the maximum possible ratio of value densities (k) to knowing the actual range of possible value densities, it would be interesting to determine whether there √ exists a ((1 + k) 2 + 1) - competitive mechanism under the original assumption. Finally, randomized mechanisms provide an unexplored area for future work.
Mechanism Design for Online Real-Time Scheduling ABSTRACT For the problem of online real-time scheduling of jobs on a single processor, previous work presents matching upper and lower bounds on the competitive ratio that can be achieved by a deterministic algorithm. However, these results only apply to the non-strategic setting in which the jobs are released directly to the algorithm. Motivated by emerging areas such as grid computing, we instead consider this problem in an economic setting, in which each job is released to a separate, self-interested agent. The agent can then delay releasing the job to the algorithm, inflate its length, and declare an arbitrary value and deadline for the job, while the center determines not only the schedule, but the payment of each agent. For the resulting mechanism design problem (in which we also slightly strengthen an assumption from the non-strategic setting), we present a mechanism that addresses each incentive issue, while only increasing the competitive ratio by one. We then show a matching lower bound for deterministic mechanisms that never pay the agents. 1. INTRODUCTION We consider the problem of online scheduling of jobs on a single processor. Each job is characterized by a release time, a deadline, a processing time, and a value for successful completion by its deadline. The objective is to maximize the sum of the values of the jobs completed by their respective deadlines. The key challenge in this online setting is that the schedule must be constructed in real-time, even though nothing is known about a job until its release time. One interpretation of this approach is as a game between the designer of the online algorithm and an adversary. First, the designer selects the online algorithm. Then, the adversary observes the algorithm and selects the sequence of jobs that maximizes the competitive ratio: the ratio of the value of the jobs completed by an optimal offline algorithm to the value of those completed by the online algorithm. Two papers paint a complete picture in terms of competitive analysis for this setting, in which the algorithm is assumed to know k, the maximum ratio between the value densities (value divided by processing time) of any two jobs. For k = 1, [4] presents a 4-competitive algorithm, and proves that this is a lower bound on the competitive ratio for deterministic algorithms. The same paper also generalizes the \ / lower bound to (1 + k) 2 for any k ≥ 1, and [15] then \ / presents a matching (1 + k) 2-competitive algorithm. The setting addressed by these papers is completely nonstrategic, and the algorithm is assumed to always know the true characteristics of each job upon its release. Furthermore, "sellers" not only schedule jobs but also determine the amount that they charge buyers, an issue not addressed in the non-strategic setting. Thus, we consider an extension of the setting in which each job is owned by a separate, self-interested agent. Instead of being released to the algorithm, each job is now released only to its owning agent. Because the agents are self-interested, they will choose to manipulate the algorithm if doing so will cause their job to be completed; and, indeed, one can find examples in which agents have incentive to manipulate the algorithms presented in [4] and [15]. The addition of self-interested agents moves the problem from the area of algorithm design to that of mechanism design [17], the science of crafting protocols for self-interested agents. 
Recent years have seen much activity at the interface of computer science and mechanism design (see, e.g., [9, 18, 19]). In general, a mechanism defines a protocol for interaction between the agents and the center that culminates with the selection of an outcome. In our setting, a mechanism will take as input a job from each agent, and return a schedule for the jobs, and a payment to be made by each agent to the center. A basic solution concept of mechanism design is incentive compatibility, which, in our setting, requires that it is always in each agent's best interests to immediately submit its job upon release, and to truthfully declare its value, length, and deadline. In order to evaluate a mechanism using competitive analysis, the adversary model must be updated. In the new model, the adversary still determines the sequence of jobs, but it is the self-interested agents who determine the observed input of the mechanism. Thus, in order to achieve a competitive ratio of c, an online mechanism must both be incentive compatible, and always achieve at least c1 of the value that the optimal offline mechanism achieves on the same sequence of jobs. The rest of the paper is structured as follows. In Section 2, we formally define and review results from the original, non-strategic setting. After introducing the incentive issues through an example, we formalize the mechanism design setting in Section 3. In Section 4 we present our first √ main result, a ((1 + k) 2 + 1) - competitive mechanism, and formally prove incentive compatibility and the competitive ratio. We also show how we can simplify this mechanism for the special case in which k = 1 and each agent cannot alter the length of its job. Returning the general setting, we show in Section 5 that this competitive ratio is a lower bound for deterministic mechanisms that do not pay agents. 6. RELATED WORK In this section we describe related work other than the two papers ([4] and [15]) on which this paper is based. Recent work related to this scheduling domain has focused on competitive analysis in which the online algorithm uses a faster processor than the offline algorithm (see, e.g., [13, 14]). Mechanism design was also applied to a scheduling problem in [18]. In their model, the center owns the jobs in an offline setting, and it is the agents who can execute them. The private information of an agent is the time it will require to execute each job. Several incentive compatible mechanisms are presented that are based on approximation algorithms for the computationally infeasible optimization problem. This paper also launched the area of algorithmic mechanism design, in which the mechanism must sat isfy computational requirements in addition to the standard incentive requirements. For a survey of this and other topics in distributed algorithmic mechanism design, see [9]. Online execution presents a different type of algorithmic challenge, and several other papers study online algorithms or mechanisms in economic settings. For example, [5] considers an online market clearing setting, in which the auctioneer matches buy and sells bids (which are assumed to be exogenous) that arrive and expire over time. In [2], a general method is presented for converting an online algorithm into an online mechanism that is incentive compatible with respect to values. Truthful declaration of values is also considered in [3] and [16], which both consider multi-unit online auctions. 
The only other paper we are aware of that addresses the issue of incentive compatibility in a real-time system is [11], which considers several variants of a model in which the center allocates bandwidth to agents who declare both their value and their arrival time. 7. CONCLUSION In this paper, we considered an online scheduling domain for which algorithms with the best possible competitive ratio had been found, but for which new solutions were required when the setting is extended to include self-interested agents. We presented a mechanism that is incentive compatible with respect to release time, deadline, length and value, and that only increases the competitive ratio by one. We also showed how this mechanism could be simplified when k = 1 and each agent cannot lie about the length of its job. We then showed a matching lower bound on the competitive ratio that can be achieved by a deterministic mechanism that never pays the agents. Several open problems remain in this setting. One is to determine whether the lower bound can be strengthened by removing the restriction of non-negative payments. Finally, randomized mechanisms provide an unexplored area for future work.
J-73
Competitive Algorithms for VWAP and Limit Order Trading
We introduce new online models for two important aspects of modern financial markets: Volume Weighted Average Price trading and limit order books. We provide an extensive study of competitive algorithms in these models and relate them to earlier online algorithms for stock trading.
[ "competit algorithm", "vwap", "onlin model", "modern financi market", "onlin algorithm", "stock trade", "volum weight averag price trade model", "limit order book trade model", "trade sequenc", "share", "market order", "onlin trade", "competit analysi" ]
[ "P", "P", "P", "P", "P", "P", "R", "R", "M", "U", "R", "R", "M" ]
Competitive Algorithms for VWAP and Limit Order Trading Sham M. Kakade Computer and Information Science University of Pennsylvania kakade@linc.cis.upenn.edu Michael Kearns Computer and Information Science University of Pennsylvania mkearns@cis.upenn.edu Yishay Mansour Computer Science Tel Aviv University mansour@post.tau.ac.il Luis E. Ortiz Computer and Information Science University of Pennsylvania leortiz@linc.cis.upenn.edu ABSTRACT We introduce new online models for two important aspects of modern financial markets: Volume Weighted Average Price trading and limit order books. We provide an extensive study of competitive algorithms in these models and relate them to earlier online algorithms for stock trading. Categories and Subject Descriptors F.2 [Analysis of Algorithms and Problem Complexity]: Miscellaneous; J.4 [Social and Behavioral Sciences]: Economics General Terms Algorithms, Economics 1. INTRODUCTION While popular images of Wall Street often depict swashbuckling traders boldly making large gambles on just their market intuitions, the vast majority of trading is actually considerably more technical and constrained. The constraints often derive from a complex combination of business, regulatory and institutional issues, and result in certain kinds of standard trading strategies or criteria that invite algorithmic analysis. One of the most common activities in modern financial markets is known as Volume Weighted Average Price, or VWAP, trading. Informally, the VWAP of a stock over a specified market period is simply the average price paid per share during that period, so the price of each transaction in the market is weighted by its volume. In VWAP trading, one attempts to buy or sell a fixed number of shares at a price that closely tracks the VWAP. Very large institutional trades constitute one of the main motivations behind VWAP activity. A typical scenario goes as follows. Suppose a very large mutual fund holds 3% of the outstanding shares of a large, publicly traded company - a huge fraction of the shares - and that this fund``s manager decides he would like to reduce this holding to 2% over a 1-month period. (Such a decision might be forced by the fund``s own regulations or other considerations.) Typically, such a fund manager would be unqualified to sell such a large number of shares in the open market - it requires a professional broker to intelligently break the trade up over time, and possibly over multiple exchanges, in order to minimize the market impact of such a sizable transaction. Thus, the fund manager would approach brokerages for help in selling the 1%. The brokerage will typically alleviate the fund manager``s problem immediately by simply buying the shares directly from the fund manager, and then selling them off laterbut what price should the brokerage pay the fund manager? Paying the price on the day of the sale is too risky for the brokerage, as they need to sell the shares themselves over an extended period, and events beyond their control (such as wars) could cause the price to fall dramatically. The usual answer is that the brokerage offers to buy the shares from the fund manager at a per-share price tied to the VWAP over some future period - in our example, the brokerage might offer to buy the 1% at a per-share price of the coming month``s VWAP minus 1 cent. The brokerage now has a very clean challenge: by selling the shares themselves over the next month in a way that exactly matches the VWAP, a penny per share is earned in profits. 
If they can beat the VWAP by a penny, they make two cents per share. Such small-margin, high-volume profits can be extremely lucrative for a large brokerage. The importance of the VWAP has led to many automated VWAP trading algorithms - indeed, every major brokerage has at least one VWAP box, 189 Price Volume Model Order Book Model Macroscopic Distribution Model OWT Θ(log(R)) (From[3]) O(log(R) log(N)) 2E(Pbins maxprice ) 2(1 + )E(Pbins maxprice ) for -approx of Pbins maxprice Θ(log(Q)) (same as above plus...) VWAP Θ(log(R)) O(log(R) log(N)) (from above) 2E(Pbins vol ) Ω(Q) fixed schedule O(log(Q)) for large N (1 + )2E(Pbins vol ) for -approx. of Pbins vol 1 for volume in [N, QN] Figure 1: The table summarizes the results presented in this paper. The rows represent results for either the OWT or VWAP criterion. The columns represent which model we are working in. The entry in the table is the competitive ratio between our algorithm and an optimal algorithm, and the closer the ratio is to 1 the better. The parameter R represents a bound on the maximum to the minimum price fluctuation and the parameter Q represents a bound on the maximum to minimum volume fluctuation in the respective model. (See Section 4 for a description of the Macroscopic Distribution Model.) All the results for the OWT trading criterion (which is a stronger criterion) directly translate to the VWAP criterion. However, in the VWAP setting, considering a restriction on the maximum to the minimum volume fluctuation Q, leads to an additional class of results which depends on Q. and some small companies focus exclusively on proprietary VWAP trading technology. In this paper, we provide the first study of VWAP trading algorithms in an online, competitive ratio setting. We first formalize the VWAP trading problem in a basic online model we call the price-volume model, which can be viewed as a generalization of previous theoretical online trading models incorporating market volume information. In this model, we provide VWAP algorithms and competitive ratios, and compare this setting with the one-way trading (OWT) problem studied in [3]. Our most interesting results, however, examine the VWAP trading problem in a new online trading model capturing the important recent phenomenon of limit order books in financial markets. Briefly, a limit buy or sell order specifies both the number of shares and the desired price, and will only be executed if there is a matching party on the opposing side, according to a well-defined matching procedure used by all the major exchanges. While limit order books (the list of limit orders awaiting possible future execution) have existed since the dawn of equity exchanges, only very recently have these books become visible to traders in real time, thus opening the way to trading algorithms of all varieties that attempt to exploit this rich market microstructure data. Such data and algorithms are a topic of great current interest on Wall Street [4]. We thus introduce a new online trading model incorporating limit order books, and examine both the one-way and VWAP trading problems in it. Our results are summarized in Figure 1 (see the caption for a summary). 2. THEPRICE-VOLUMETRADINGMODEL We now present a trading model which includes both price and volume information about the sequence of trades. 
While this model is a generalization of previous formalisms for online trading, it makes an infinite liquidity assumption which fails to model the negative market impact that trading a large number of shares typically has. This will be addressed in the order book model studied in the next section. A note on terminology: throughout the paper (unless otherwise specified), we shall use the term market to describe all activity or orders other than those of the algorithm under consideration. The setting we consider can be viewed as a game between our algorithm and the market. 2.1 The Model In the price-volume trading model, we assume that the intraday trading activity in a given stock is summarized by a discrete sequence of price and volume pairs (pt, vt) for t = 1, ... , T. Here t = 0 corresponds to the day``s market open, and t = T to the close. While there is nothing technically special about the time horizon of a single day, it is particularly consistent with limit order book trading on Wall Street. The pair (pt, vt) represents the fact that a total of vt shares were traded at an (average) price per share pt in the market between time t − 1 and t. Realistically, we should imagine the number of intervals T being reasonably large, so that it is sensible to assign a common approximate price to all shares traded within an interval. In the price-volume model, we shall make an infinite liquidity assumption for our trading algorithms. More precisely, in this online model, we see the price-volume sequence one pair at a time. Following the observation of (pt, vt), we are permitted to sell any (possibly fractional) number of shares nt at the price pt. Let us assume that our goal is to sell N shares over the course of the day. Hence, at each time, we must select a (possibly fractional) number of shares nt to sell at price pt, subject to the global constraint T t=1 nt = N. It is thus assumed that if we have left over shares to sell after time T − 1, we are forced to sell them at the closing price of the market - that is, nT = N − T −1 t=1 nt is sold at pT . In this way we are certain to sell exactly N shares over the course of the day; the only thing an algorithm must do is determine the schedule of selling based on the incoming market price-volume stream. Any algorithm which sells fractional volumes can be converted to a randomized algorithm which only sells integral volumes with the same expected number of shares sold. If we keep the hard constraint of selling exactly N shares, we might incur an additional slight loss in the conversion. (Note that we only allow fractional volumes in the price-volume model, where liquidity is not an issue. In the order book model to follow, we do not allow fractional volumes.) In VWAP trading, the goal of an online algorithm A which sells exactly N shares is not to maximize profits per se, but to track the market VWAP. The market VWAP for an intraday trading sequence S = (p1, v1), ... , (pT , vT ) is simply the average price paid per share over the course of the trading 190 day, ie VWAPM (S) = T t=1 ptvt /V where V is the total daily volume, i.e., V = T t=1 vt.. If on the sequence S, the algorithm A sells its N stocks using the volume sequence n1, ... nT , then we analogously define the VWAP of A on market sequence S by VWAPA(S) = T t=1 ptnt /N . Note that the market VWAP does not include the shares that the algorithm sells. 
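As a quick illustration of the two benchmarks just defined, the following sketch computes VWAPM(S) and VWAPA(S) from a price-volume sequence and a selling schedule; the function names and the toy numbers are our own, and the formulas are simply the definitions above.

    # Compute the market VWAP and the algorithm's VWAP in the price-volume
    # model.  `day` is the sequence of (p_t, v_t) pairs; `sales` is the
    # algorithm's schedule n_t with sum(sales) = N.  Illustrative helper only.
    def market_vwap(day):
        return sum(p * v for p, v in day) / sum(v for _, v in day)

    def algorithm_vwap(day, sales):
        assert len(day) == len(sales)
        return sum(p * n for (p, _), n in zip(day, sales)) / sum(sales)

    # Two-period example: N = 100 shares sold with a fixed 50/50 split.
    day = [(10.0, 1000), (12.0, 3000)]
    sales = [50, 50]
    print(market_vwap(day))            # 11.5
    print(algorithm_vwap(day, sales))  # 11.0, so VWAP_M / VWAP_A is about 1.045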
The VWAP competitive ratio of A with respect to a set of sequences Σ is then RVWAP(A) = max S∈Σ {VWAPM (S)/VWAPA(S)} In the case that A is randomized, we generalize the definition above by taking an expectation over VWAPA(S) inside the max. We note that unlike on Wall Street, our definition of VWAPM does not take our own trading into account. It is easy to see that this makes it a more challenging criterion to track. In contrast to the VWAP, another common measure of the performance of an online selling algorithm would be its one-way trading (OWT) competitive ratio [3] with respect to a set of sequences Σ: ROWT(A) = max S∈Σ max 1≤t≤T {pt/VWAPA(S)} where the algorithms performance is compared to the largest individual price appearing in the sequence S. In both VWAP and OWT, we are comparing the average price per share received by a selling algorithm to some measure of market performance. In the case of OWT, we compare to the rather ambitious benchmark of the high price of the day, ignoring volumes entirely. In VWAP trading, we have the more modest goal of comparing favorably to the overall market average of the day. As we shall see, there are some important commonalities and differences to these two approaches. For now we note one simple fact: on any specific sequence S, VWAPA(S) may be larger that VWAPM (S). However, RVWAP(A) cannot be smaller than 1, since on any sequence S in which all price pt are identical, it is impossible to get a better average share per price. Thus, for all algorithms A, both RVWAP(A) and ROWT(A) are larger than 1, and the closer to 1 they are, the better A is tracking its respective performance measure. 2.2 VWAP Results in the Price-Volume Model As in previous work on online trading, it is generally not possible to obtain finite bounds on competitive ratios with absolutely no assumptions on the set of sequences Σbounds on the maximum variation in price or volume are required, depending on the exact setting. We thus introduce the following two assumptions. 2.2.0.1 Volume Variability Assumption. . Let 0 < Vmin ≤ Vmax be known positive constants, and define Q = Vmax /Vmin . For all intraday trading sequences S ∈ Σ, the total daily volume V ∈ [Vmin , Vmax ]. 2.2.0.2 Price Variability Assumption. . Let 0 < pmin ≤ pmax be known positive constants, and define R = pmax/pmin. For all intraday trading sequences S ∈ Σ, the prices satisfy pt ∈ [pmin, pmax], for all t = 1, ... , T. Competitive ratios are generally taken over all sets Σ consistent with at least one of these assumptions. To gain some intuition consider the two trivial cases of R = 1 and Q = 1. In the case of R = 1 (where there is no fluctuation in price), any schedule is optimal. In the case of Q = 1 (where the total volume V over the trading period is known), we can gain a competitive ratio of 1 by selling vt V N shares after each time period. For the OWT problem in the price-volume model, volumes are irrelevant for the performance criterion, but for the VWAP criterion they are central. For the OWT problem under the price variability assumption, the results of [3] established that the optimal competitive ratio was Θ(log(R)). Our first result establishes that the optimal competitive ratio for VWAP under the volume variability assumption is Θ(log(Q)) and is achieved by an algorithm that ignores the price data. Theorem 1. In the price-volume model under the volume variability assumption, there exists an online algorithm A for selling N shares achieving competitive ratio RVWAP(A) ≤ 2 log(Q). 
In addition, if only the volume variability (and not the price variability) assumption holds, any online algorithm A for selling N shares has RVWAP(A) = Ω(log(Q)). Proof. (Sketch) For the upper bound, the idea is similar to the price reservation algorithm of [3] for the OWT problem, and similar in spirit to the general technique of classify and select [1]. Consider algorithms which use a parameter ˆV , which is interpreted as an estimate for the total volume for the day. Then at each time t, if the market price and volume is (pt, vt), the algorithm sells a fraction vt/ ˆV of its shares. We consider a family of log(Q) such algorithms, where algorithm Ai uses ˆV = Vmin 2i−1 . Clearly, one of the Ai has a competitive ratio of 2. We can derive an O(log(Q)) VWAP competitive ratio by running these algorithms in parallel, and letting each algorithm sell N/ log(Q) shares. (Alternatively, we can randomly select one Ai and guarantee the same expected competitive ratio.) We now sketch the proof of the lower bound, which relates performance in the VWAP and OWT problems. Any algorithm that is c-competitive in the VWAP setting (under fixed Q) is 3c-competitive in the OWT setting with R = Q/2. To show this, we take any sequence S of prices for the OWT problem, and convert it into a price-volume sequence for the VWAP problem. The prices in the VWAP sequence are the same as in S. To construct the volumes in the VWAP sequence, we segment the prices in S into log(R) intervals [2i−1 pmin , 2i pmin ). Suppose pt ∈ [2i−1 pmin , 2i pmin ), and this is the first time in S that a price has fallen in this interval. Then in the VWAP sequence we set the volume vt = 2i−1 . If this is not the first visit to the interval containing pt, we set vt = 0. Assume that the maximum price in S is pmax . The VWAP of our sequence is at least pmax /3. Since we had a c competitive algorithm, its average sell is at least pmax /3c. The lower bound now follows using the lower bound in [3]. An alternative approach to VWAP is to ignore the volumes in favor of prices, and apply an algorithm for the OWT problem. Note that the lower bound in this theorem, unlike in the previous one, only assumes a price variation bound. 191 Theorem 2. In the price-volume model under the price variability assumption, there exists an online algorithm A for selling N shares achieving competitive ratio RVWAP(A) = O(log(R)). In addition, if only the price variability (and not the volume variability) assumption holds, any online A for selling N shares has RVWAP(A) = Ω(log(R)). Proof. (Sketch) Follows immediately from the results of [3] for OWT: the upper bound from the simple fact that for any sequence S, VWAPA(S) is less than max1≤t≤T {pt}, and the lower bound from a reduction to OWT. Theorems 1 and 2 demonstrate that one can achieve logarithmic VWAP competitive ratios under the assumption of either bounded variability of total volume or bounded variability of maximum price. If both assumptions hold, it is possible to give an algorithm accomplishing the minimum of log(Q) and log(R). This flexibility of approach derives from the fact that the VWAP is a quantity in which both prices and volumes matter, as opposed to OWT. 2.3 RelatedResultsinthePrice-VolumeModel All of the VWAP algorithms we have discussed so far make some use of the daily data (pt, vt) as it unfolds, using either the price or volume information. In contrast, a fixed schedule VWAP algorithm has a predetermined distribution {f1, f2, ... 
fT }, and simply sells ftN shares at time t, independent of (pt, vt). Fixed schedule VWAP algorithms, or slight variants of them, are surprisingly common on Wall Street, and the schedule is usually derived from historical intraday volume data. Our next result demonstrates that such algorithms can perform considerably worse than dynamically adaptive algorithms in terms of the worst case competitive ratio. Theorem 3. In the price-volume model under both the volume and price variability assumptions, any fixed schedule VWAP algorithm A for selling N shares has sell VWAP competitive ratio RVWAP(A) = Ω(min(T, R)). The proofs of all the results in this subsection are in the Appendix. So far our emphasis has been on VWAP algorithms that must sell exactly N shares. In many realistic circumstances, however, there is actually some flexibility in the precise number of shares to be sold. For instance, this is true at large brokerages, where many separate VWAP trades may be pooled and executed by a common algorithm, and the firm would be quite willing to carry a small position of unsold shares overnight if it resulted in better execution prices. The following theorem (which interestingly has no analogue for the OWT problem) demonstrates that this trade-off in shares sold and performance can be realized dramatically in our model. It states that if we are willing to let the number of shares sold vary with Q, we can in fact achieve a VWAP competitive ratio of 1. Theorem 4. In the price-volume model under the volume variability assumption, there exists an algorithm A that always sells between N and QN shares and that the average price per sold share is exactly VWAPM (S). In many online problems, there is a clear distinction between benefit problems and cost problems [2]. In the VWAP setting, selling shares is a benefit problem, and buying shares is a cost problem. The definitions of the competitive ratios, Rbuy VWAP(A) and Rbuy OWT(A), for algorithms which Figure 2: Sample Island order books for MSFT. buy exactly N shares are maxS∈Σ{VWAPA(S)/VWAPM (S)} and maxS∈Σ maxt{VWAPA(S)/pt} respectively. Eventhough Theorem 4 also holds for buying, in general, the competitive ratio of the buy (cost) problem is much higher, as stated in the following theorem. Theorem 5. In the price-volume model under the volume and price variability assumptions, there exists an online algorithm A for buying N shares achieving buy VWAP competitive ratio Rbuy VWAP(A) = O(min{Q, √ R}). In addition any online algorithm A for buying N shares has buy VWAP competitive ratio Rbuy VWAP(A) = Ω(min{Q, √ R}). 3. A LIMIT ORDER BOOK TRADING MODEL Before we can define our online trading model based on limit order books, we give some necessary background on the detailed mechanics of financial markets, which are sometimes referred to as market microstructure. We then provide results and algorithms for both the OWT and VWAP problems. 192 3.1 Background on Limit Order Books and Market Microstructure A fundamental distinction in stock trading is that between a limit order and a market order. Suppose we wish to purchase 1000 shares of Microsoft (MSFT) stock. In a limit order, we specify not only the desired volume (1000 shares), but also the desired price. Suppose that MSFT is currently trading at roughly $24.07 a share (see Figure 2, which shows an actual snapshot of a recent MSFT order book on Island (www.island.com), a well-known electronic exchange for NASDAQ stocks), but we are only willing to buy the 1000 shares at $24.04 a share or lower. 
We can choose to submit a limit order with this specification, and our order will be placed in a queue called the buy order book, which is ordered by price, with the highest offered unexecuted buy price at the top (often referred to as the bid). If there are multiple limit orders at the same price, they are ordered by time of arrival (with older orders higher in the book). In the example provided by Figure 2, our order would be placed immediately after the extant order for 5,503 shares at $24.04; though we offer the same price, this order has arrived before ours. Similarly, a sell order book for sell limit orders (for instance, we might want to sell 500 shares of MSFT at $24.10 or higher) is maintained, this time with the lowest sell price offered (often referred to as the ask). Thus, the order books are sorted from the most competitive limit orders at the top (high buy prices and low sell prices) down to less competitive limit orders. The bid and ask prices (which again, are simply the prices in the limit orders at the top of the buy and sell books, respectively) together are sometimes referred to as the inside market, and the difference between them as the spread. By definition, the order books always consist exclusively of unexecuted orders - they are queues of orders hopefully waiting for the price to move in their direction. How then do orders get executed? There are two methods. First, any time a market order arrives, it is immediately matched with the most competitive limit orders on the opposing book. Thus, a market order to buy 2000 shares is matched with enough volume on the sell order book to fill the 2000 shares. For instance, in the example of Figure 2, such an order would be filled by the two limit sell orders for 500 shares at $24.069, the 500 shares at $24.07, the 200 shares at $24.08, and then 300 of the 1981 shares at $24.09. The remaining 1681 shares of this last limit order would remain as the new top of the sell limit order book. Second, if a buy (sell, respectively) limit order comes in above the ask (below the bid, respectively) price, then the order is matched with orders on the opposing books. It is important to note that the prices of executions are the prices specified in the limit orders already in the books, not the prices of the incoming order that is immediately executed. Every market or limit order arrives atomically and instantaneously - there is a strict temporal sequence in which orders arrive, and two orders can never arrive simultaneously. This gives rise to the definition of the last price of the exchange, which is simply the last price at which the exchange executed an order. It is this quantity that is usually meant when people casually refer to the (ticker) price of a stock. Note that a limit buy (sell, respectively) order with a price of infinity (0, respectively) is effectively a market order. We shall thus assume without loss of generality that all orders are placed as limit order. Although limit orders which are unexecuted may be removed by the party which placed them, for simplicity, we assume that limit orders are never removed from the books. We refer the reader to [4] for further discussion of modern electronic exchanges and market microstructure. 3.2 The Model The online order book trading model is intended to capture the realistic details of market microstructure just discussed in a competitive ratio setting. In this refined model, a day``s market activity is described by a sequence of limit orders (pt, vt, bt). 
Here bt is a bit indicating whether the order is a buy or sell order, while pt is the limit order price and vt the number of shares desired. Following the arrival of each such limit order, an online trading algorithm is permitted to place its own limit order. These two interleaved sources (market and algorithm) of limit orders are then simply operated on according to the matching process described in Section 3.1. Any limit order that is not immediately executable according to this process is placed in the appropriate (buy or sell) book for possible future execution; arriving orders that can be partially or fully executed are so executed, with any residual shares remaining on the respective book. The goal of a VWAP or OWT selling algorithm is essentially the same as in the price-volume model, but the context has changed in the following two fundamental ways. First, the assumption of infinite liquidity in the price-volume model is eliminated entirely. The number of shares available at any given price is restricted to the total volume of limit orders offering that price. Second, all incoming orders, and therefore the complete limit order books, are assumed to be visible to the algorithm. This is consistent with modern electronic financial exchanges, and indeed is the source of much current interest on Wall Street [4]. In general, the definition of competitive ratios in the order book model is complicated by the fact that now our algorithm``s activity influences the sequence of executed prices and volumes. We thus first define the execution sequence determined by a limit order sequence (placed by the market and our algorithm). Let S = (p1, v1, b1), ... , (pT , vT , bT ) be a limit order sequence placed by the market, and let S = (p1, v1, b1), ... , (pT , vT , bT ) be a limit order sequence placed by our algorithm (unless otherwise specified, all bt are of the sell type). Let merge(S, S ) be the merged sequence (p1, v1, b1), (p1, v1, b1), ... , (pT , vT , bT ), (pT , vT , bT ), which is the time sequence of orders placed by the market and algorithm. Note that the algorithm has the option of not placing an order, which we can view as a zero volume order. If we conducted the order book maintenance and order execution process described in Section 3.1 on the sequence merge(S, S ), at irregular intervals a trade occurs for some number of shares and some price. In each executed trade, the selling party is either the market or the algorithm. Let execM (S, S ) = (q1, w1), ... , (qT , wT ) be the sequence of executions where the market (that is, a party other than the algorithm) was the selling party, where the qt are the execution prices and wt the execution volumes. Similarly, we define execA(S, S ) = (r1, x1), ... , (rT , xT ) to be the sequence of executions in which the algorithm was the selling party. Thus, execA(S, S ) ∪ execM (S, S ) is the set of all executions. We generally expect T to be (possibly much) smaller than T . The revenue of the algorithm and the market are defined 193 as: REVM (S, S ) ≡ T t=1 qtwt , REVA(S, S ) ≡ T t=1 rtxt Note that both these quantities are solely determined by the execution sequences execM (S, S ) and execA(S, S ), respectively. For an algorithm A which is constrained to sell exactly N shares, we define the OWT competitive ratio of A, ROWT(A), as the maximum ratio (under any S ∈ Σ) of the revenue obtained by A, as compared to the revenue obtained by an optimal offline algorithm A∗ . 
More formally, for A∗ which is constrained to sell exactly N shares, we define ROWT(A) = max S∈Σ max A∗ REVA∗ (S S∗ ) REVA(S, S ) where S∗ is the limit order sequence placed by A∗ on S. If the algorithm A is randomized then we take the appropriate expectation with respect to S ∼ A. We define the VWAP competitive ratio, RVWAP(A), as the maximum ratio (under any S ∈ Σ) between the market and algorithm VWAPs. More formally, define VWAPM (S, S ) as REVM (S, S )/ T t=1 wt, where the denominator is just the total executed volume of orders placed by the market. Similarly, we define VWAPA(S, S ) as REVA(S, S )/N, since we assume the algorithm sells no more than N shares (this definition implicitly assumes that A gets a 0 price for unsold shares). The VWAP competitive ratio of A is then: RVWAP(A) = max S∈Σ {VWAPM (S, S )/VWAPA(S, S )} where S is the online sequence of limit orders generated by A in response to the sequence S. 3.3 OWT Results in the Order Book Model For the OWT problem in the order book model, we introduce a more subtle version of the price variability assumption. This is due to the fact that our algorithm``s trading can impact the high and low prices of the day. For the assumption below, note that execM (S, ∅) is the sequence of executions without the interaction of our algorithm. 3.3.0.3 Order Book Price Variability Assumption. . Let 0 < pmin ≤ pmax be known positive constants, and define R = pmax/pmin. For all intraday trading sequences S ∈ Σ, the prices pt in the sequence execM (S, ∅) satisfy pt ∈ [pmin, pmax], for all t = 1, ... , T. Note that this assumption does not imply that the ratios of high to low prices under the sequences execM (S, S ) or execA(S, S ) are bounded by R. In fact, the ratio in the sequence execA(S, S ) could be infinite if the algorithm ends up selling some stocks at a 0 price. Theorem 6. In the order book model under the order book price variability assumption, there exists an online algorithm A for selling N shares achieving sell OWT competitive ratio ROWT(A) = 2 log(R) log(N). Proof. The algorithm A works by guessing a price p in the set {pmin2i : 1 ≤ i ≤ log(R)} and placing a sell limit order for all N shares at the price p at the beginning of the day. (Alternatively, algorithm A can place log(R) sell limit orders, where the i-th one has price 2i pmin and volume N/ log(R).) By placing an order at the beginning of the day, the algorithm undercuts all sell orders that will be placed during the day for a price of p or higher, meaning the algorithm``s N shares must be filled first at this price. Hence, if there were k shares that would have been sold at price p or higher without our activity, then A would sell at least kp shares. We define {pj} to be the multiset of prices of individual shares that are either executed or are buy limit order shares that remained unexecuted, excluding the activity of our algorithm (that is, assuming our algorithm places no orders). Assume without loss of generality that p1 ≥ p2 ≥ .... Consider guessing the kth highest such price, pk. If an order for N shares is placed at the day``s start at price pk, then we are guaranteed to obtain a return of kpk. Let k∗ = argmaxk{kpk}. We can view our algorithm as attempting to guess pk∗ , and succeeding if the guess p satisfies p ∈ [pk∗ /2, pk∗ ]. Hence, we are 2 log(R) competitive with the quantity max1≤k≤N kpk. Note that ρ ≡ N i=1 pi = N i=1 1 i ipi ≤ max 1≤k≤N kpk N i=1 1 i ≤ log(N) max 1≤k≤N kpk where ρ is defined as the sum of the top N prices pi without A``s involvement. 
Similarly, let {p'j} be the multiset of prices of individual executed shares, or the prices of unexecuted buy order shares, but now including the orders placed by some selling algorithm A'. We now wish to show that for all algorithms A' which sell N shares, REVA' ≤ Σ_{i=1}^{N} p'i ≤ ρ. Essentially, this inequality states the intuitive idea that a selling algorithm can only lower executed or unmatched buy order share prices. To prove this, we use induction to show that the removal of the activity of a selling algorithm causes these prices to increase. First, remove the last share in the last sell order placed by either A' or the market on an arbitrary sequence merge(S, S') - by this we mean, take the last sell order placed by A' or the market and decrease its volume by one share. After this modification, the top N prices p'1, ..., p'N will not decrease. This is because either this sell order share was not executed, in which case the claim is trivially true, or, if it was executed, the removal of this sell order share leaves an additional unexecuted buy order share of equal or higher price. For induction, assume that if we remove a share from any sell order that was placed, by A' or the market, at or after time t then the top N prices do not decrease. We now show that if we remove a share from the last sell order that was placed by A' or the market before time t, then the top N prices do not decrease. If this sell order share was not executed, then the claim is trivially true. Else, if the sell order share was executed, then the claim is true because by removing this executed share from the sell order either: i) the corresponding buy order share (of equal or higher value) is unmatched on the remainder of the sequence, in which case the claim is true; or ii) this buy order matches some sell order share at an equal or higher price, which has the effect of removing a share from a sell order on the remainder of the sequence, and, by the inductive assumption, this can only increase prices. Hence, we have proven that for all A' which sell N shares, REVA' ≤ ρ. We have now established that our revenue satisfies

2 log(R) E_{S'∼A}[REVA(S, S')] ≥ max_{1≤k≤N} {k pk} ≥ ρ/log(N) ≥ max_{A'} {REVA'}/log(N),

where A' performs an arbitrary sequence of N sell limit orders.

3.4 VWAP Results in the Order Book Model

The OWT algorithm from Theorem 6 can be applied to obtain the following VWAP result:

Corollary 7. In the order book model under the order book price variability assumption, there exists an online algorithm A for selling N shares achieving sell VWAP competitive ratio RVWAP(A) = O(log(R) log(N)).

We now make a rather different assumption on the sequences S.

3.4.0.4 Bounded Order Volume and Max Price Assumption. The set of sequences Σ satisfies the following two properties. First, we assume that each order placed by the market is of volume less than γ, which we view as a mild assumption since typically single orders on the market are not of high volume (due to liquidity issues). This assumption allows our algorithm to place at least one limit order at a time interleaved with approximately γ market executions. Second, we assume that there is large volume in the sell order books below the price pmax, which means that no orders placed by the market will be executed above the price pmax. The simplest way to instantiate this latter assumption in the order book model is to assume that each sequence S ∈ Σ starts by placing a huge number of sell orders (more than Vmax) at price pmax.
Although this assumption has a maximum price parameter, it does not imply that the price ratio R is finite, since it does not imply any lower bound on the prices of buy or executed shares (aside from the trivial one of 0).

Theorem 8. Consider the order book model under the bounded order volume and max price assumption. There exists an algorithm A such that, after exactly γN market executions have occurred, A has sold at most N shares and

REVA(S, S')/N = VWAPA(S, S') ≥ (1 − ε) VWAPM(S, S') − pmax/(εN),

where S' is a sequence of N sell limit orders generated by A when observing S.

Proof. The algorithm divides the trading day into volume intervals whose real-time duration may vary. For each period i in which γ shares have been executed in the market, the algorithm computes the market VWAP of only those shares traded in period i; let us denote this by VWAPi. Following this ith volume interval, the algorithm places a limit order to sell exactly one share at a price close to VWAPi. More precisely, the algorithm only places orders at the discrete prices (1 − ε)pmax, (1 − ε)^2 pmax, .... Following volume interval i, the algorithm places a limit order to sell one share at the discretized price that is closest to VWAPi, but which is strictly smaller. For the analysis, we begin by noting that if all of the algorithm's limit orders are executed during the day, the total revenue received by the algorithm would be at least (1 − ε)VWAPM(S, S')N. To see this, it suffices to note that VWAPM(S, S') is a uniform mixture of the VWAPi (since by definition they each cover the same amount of market volume); and if all the algorithm's limit orders were executed, they each received more than (1 − ε)VWAPi dollars for the interval i they followed. We now count the potential lost revenue of the algorithm due to unexecuted limit orders. By the assumption that individual orders are placed with volume less than γ, our algorithm is able to place a limit order during every block of γ market shares traded. Hence, after γN market executions have occurred, A has placed N orders in the market. Note that there can be at most one limit order (and thus, at most one share) left unexecuted at each level of the discretized price ladder defined above. This is because following interval i, the algorithm places its limit order strictly below VWAPi, so if VWAPj ≥ VWAPi for j > i, this limit order must have been executed. Thus unexecuted limit orders bound the VWAPs of the remainder of the day, resulting in at most one unexecuted order per price level. A bound on the lost revenue is thus the sum of the discretized prices: Σ_{i=1}^{∞} (1 − ε)^i pmax ≤ pmax/ε. Clearly our algorithm has sold at most N shares. Note that as N becomes large, VWAPA approaches (1 − ε) times the market VWAP. If we knew that the final total volume of the market executions is V, then we can set γ = V/N, assuming that γ >> 1. If we have only an upper and lower bound on V we should be able to guess and incur a logarithmic loss. The following assumption tries to capture the market volume variability.

3.4.0.5 Order Book Volume Variability Assumption. We now assume that the total volume (which includes the shares executed by both our algorithm and the market) is variable within some known region and that the market volume will be greater than our algorithm's volume. More formally, for all S ∈ Σ, assume that the total volume V of shares traded in execM(S, S'), for any sequence S' of N sell limit orders, satisfies 2N ≤ Vmin ≤ V ≤ Vmax. Let Q = Vmax/Vmin.
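To illustrate the strategy in the proof of Theorem 8, here is a small Python sketch that consumes the stream of market executions and yields the one-share sell limit orders the algorithm would place just below each γ-share block's VWAP. The function names are our own, and as a simplification an execution that straddles a block boundary is credited entirely to the block it completes.

from typing import Iterable, Iterator, Tuple

def rung_below(target: float, pmax: float, eps: float) -> float:
    # Largest price of the form (1 - eps)^k * pmax, k >= 1, strictly below target
    # (assumes 0 < target <= pmax).
    p = (1.0 - eps) * pmax
    while p >= target:
        p *= (1.0 - eps)
    return p

def vwap_tracking_orders(executions: Iterable[Tuple[float, int]],
                         gamma: int, eps: float, pmax: float) -> Iterator[Tuple[float, int]]:
    """Yield (price, volume) sell limit orders: one share just below each block's VWAP."""
    block_value, block_volume = 0.0, 0
    for price, volume in executions:
        block_value += price * volume
        block_volume += volume
        if block_volume >= gamma:                 # block i of roughly gamma market shares is complete
            vwap_i = block_value / block_volume   # market VWAP of this block only
            yield (rung_below(vwap_i, pmax, eps), 1)
            block_value, block_volume = 0.0, 0

With γ set to roughly V/N (or to the guessed volume discussed next), iterating over the market side of the execution stream produces the N one-share orders whose executed and unexecuted price rungs are counted in the proof.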
The following corollary is derived using a constant ε = 1/2 and observing that if we set γ such that V ≤ γN ≤ 2V then our algorithm will place between N/2 and N limit orders.

Corollary 9. In the order book model, if the bounded order volume and max price assumption and the order book volume variability assumption hold, there exists an online algorithm A for selling at most N shares such that

VWAPA(S, S') ≥ (1/(4 log(Q))) VWAPM(S, S') − 2pmax/N.

[Figure 3 (plots omitted): histograms of total daily Island volume for QQQ (log(Q)=4.71, E=3.77), JNPR (log(Q)=5.66, E=3.97), MCHP (log(Q)=5.28, E=3.86), and CHKP (log(Q)=6.56, E=4.50).]

Figure 3: Here we present bounds from Section 4 based on the empirical volume distributions for four real stocks: QQQ, MCHP, JNPR, and CHKP. The plots show histograms for the total daily volumes transacted on Island for these stocks, in the last year and a half, along with the corresponding values of log(Q) and E(P^bins_vol) (denoted by 'E'). We assume that the minimum and maximum daily volumes in the data correspond to Vmin and Vmax, respectively. The worst-case competitive ratio bounds (which are twice log(Q)) of our algorithm for those stocks are 9.42, 10.56, 11.32, and 13.20, respectively. The corresponding bounds on the competitive ratio performance of our algorithm under the volume distribution model (which are twice E(P^bins_vol)) are better: 7.54, 7.72, 7.94, and 9.00, respectively (a 20-40% relative improvement). Using a finer volume binning along with a slightly more refined bound on the competitive ratio, we can construct algorithms that, using the empirical volume distribution given as correct, guarantee even better competitive ratios of 2.76, 2.73, 2.75, and 3.17, respectively for those stocks (details omitted).

4. MACROSCOPIC DISTRIBUTION MODELS

We conclude our results with a return to the price-volume model, where we shall introduce some refined methods of analysis for online trading algorithms. We leave the generalization of these methods to the order book model for future work. The competitive ratios defined so far measure performance relative to some baseline criterion in the worst case over all market sequences S ∈ Σ. It has been observed in many online settings that such worst-case metrics can yield pessimistic results, and various relaxations have been considered, such as permitting a probability distribution over the input sequence. We now consider distributional models that are considerably weaker than assuming a distribution over complete market sequences S ∈ Σ. In the volume distribution model, we assume only that there exists a distribution Pvol over the total volume V traded in the market for the day, and then examine the worst-case competitive ratio over sequences consistent with the randomly chosen volume. More precisely, we define

RVWAP(A, Pvol) = E_{V∼Pvol}[ max_{S∈seq(V)} VWAPM(S)/VWAPA(S) ].

Here V ∼ Pvol denotes that V is chosen with respect to distribution Pvol, and seq(V) ⊂ Σ is the set of all market sequences (p1, v1), ..., (pT, vT) satisfying Σ_{t=1}^{T} vt = V. Similarly, for OWT, we can define

ROWT(A, Pmaxprice) = E_{p∼Pmaxprice}[ max_{S∈seq(p)} p/VWAPA(S) ].

Here Pmaxprice is a distribution over just the maximum price of the day, and we then examine worst-case sequences consistent with this price (seq(p) ⊂ Σ is the set of all market sequences satisfying max_{1≤t≤T} pt = p). Analogous buy-side definitions can be given.
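For reference, the two quantities entering these ratios are straightforward to compute from a price-volume sequence and a selling schedule; the helper names below are our own, and the sketch simply restates the definitions.

def market_vwap(seq):
    # seq is a list of (price, volume) pairs; VWAP_M(S) = sum_t p_t v_t / sum_t v_t
    total_value = sum(p * v for p, v in seq)
    total_volume = sum(v for _, v in seq)
    return total_value / total_volume

def algorithm_vwap(prices, shares_sold, N):
    # VWAP_A = revenue / N; unsold shares (if any) implicitly earn a price of 0
    revenue = sum(p * u for p, u in zip(prices, shares_sold))
    return revenue / N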
We emphasize that in these models, only the distribution of the total daily volume or of the maximum price is known to the algorithm. We also note that our probabilistic assumptions on S are considerably weaker than typical statistical finance models, which would posit a detailed stochastic model for the step-by-step evolution of (pt, vt). Here we instead permit only a distribution over crude, macroscopic measures of the entire day's market activity, such as the total volume and high price, and analyze the worst-case performance consistent with these crude measures. For this reason, we refer to such settings as the macroscopic distribution model. The work of El-Yaniv et al. [3] examines distributional assumptions similar to ours, but they emphasize the worst-case choices for the distributions as well, and show that this leads to results no better than the original worst-case analysis over all sequences. In contrast, we feel that the analysis of specific distributions Pvol and Pmaxprice is natural in many financial contexts, and our preliminary experimental results show significant improvements when this rather crude distributional information is taken into account (see Figure 3). Our results in the VWAP setting examine the cases where these distributions are known exactly or only approximately. Similar results can be obtained for macroscopic distributions of maximum daily price for the one-way trading setting.

4.1 Results in the Macroscopic Distribution Model

We begin by noting that the algorithms examined so far work by binning total volumes or maximum prices into bins of exponentially increasing size, and then guessing the index of the bin in which the actual quantity falls. It is thus natural that the macroscopic distribution model performance of such algorithms (which are common in competitive analysis) might depend on the distribution of the true bin index. In the remainder, we assume that Q is a power of 2 and the base of the logarithm is 2. Let Pvol denote the distribution of total daily market volume. We define the related distribution P^bins_vol over bin indices i as follows: for all i = 1, ..., log(Q) − 1, P^bins_vol(i) is equal to the probability, under Pvol, that the daily volume falls in the interval [Vmin 2^{i−1}, Vmin 2^i), and P^bins_vol(log(Q)) is the probability of the last interval [Vmax/2, Vmax]. We define E as

E(P^bins_vol) ≡ (E_{i∼P^bins_vol}[ 1/√(P^bins_vol(i)) ])^2 = (Σ_{i=1}^{log(Q)} √(P^bins_vol(i)))^2.

Since the support of P^bins_vol has only log(Q) elements, E(P^bins_vol) can vary from 1 (for distributions Pvol that place all of their weight in only one of the log(Q) intervals between Vmin, 2Vmin, 4Vmin, ..., Vmax) to log(Q) (for distributions Pvol in which the total daily volume is equally likely to fall in any one of these intervals). Note that distributions Pvol of this latter type are far from uniform over the entire range [Vmin, Vmax].

Theorem 10. In the volume distribution model under the volume variability assumption, there exists an online algorithm A for selling N shares that, using only knowledge of the total volume distribution Pvol, achieves RVWAP(A, Pvol) ≤ 2E(P^bins_vol).

All proofs in this section are provided in the appendix. As a concrete example, consider the case in which Pvol is the uniform distribution over [Vmin, Vmax]. In that case, P^bins_vol is exponentially increasing and peaks at the last bin, which, having the largest width, also has the largest weight. In this case E(P^bins_vol) is a constant (i.e., independent of Q), leading to a constant competitive ratio. On the other hand, if Pvol is exponential, then P^bins_vol is uniform, leading to an O(log(Q)) competitive ratio, just as in the more adversarial price-volume setting discussed earlier.
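E(P^bins_vol) is easy to estimate from a sample of total daily volumes; the following is a minimal Python sketch (the function names are ours), assuming Vmin and Vmax are taken from the data and Q = Vmax/Vmin is treated as a power of 2.

import math
from typing import List

def bin_distribution(daily_volumes: List[float], vmin: float, vmax: float) -> List[float]:
    # P^bins_vol(i): empirical probability that the daily volume falls in [vmin*2^(i-1), vmin*2^i),
    # with the last bin absorbing [vmax/2, vmax].
    n_bins = max(1, int(round(math.log2(vmax / vmin))))   # log(Q) bins
    counts = [0] * n_bins
    for v in daily_volumes:
        i = int(math.floor(math.log2(v / vmin))) + 1      # bin index of this day's volume
        counts[min(max(i, 1), n_bins) - 1] += 1
    total = len(daily_volumes)
    return [c / total for c in counts]

def E_of_bins(p_bins: List[float]) -> float:
    # E(P^bins_vol) = (sum_i sqrt(P^bins_vol(i)))^2; lies between 1 and log(Q).
    return sum(math.sqrt(p) for p in p_bins) ** 2

Twice E_of_bins(bin_distribution(volumes, vmin, vmax)) is then the distribution-dependent bound of Theorem 10, which is how numbers like those quoted for the stocks in Figure 3 can be produced.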
In Figure 3, we provide additional specific bounds obtained for empirical total daily volume distributions computed for some real stocks. We now examine the setting in which Pvol is unknown, but an approximation ˜Pvol is available. Let us define

C(P^bins_vol, ˜P^bins_vol) = (Σ_{j=1}^{log(Q)} √(˜P^bins_vol(j))) (Σ_{i=1}^{log(Q)} P^bins_vol(i)/√(˜P^bins_vol(i))).

C is minimized at C(P^bins_vol, P^bins_vol) = E(P^bins_vol), and C may be infinite if ˜P^bins_vol(i) is 0 when P^bins_vol(i) > 0.

Theorem 11. In the volume distribution model under the volume variability assumption, there exists an online algorithm A for selling N shares that, using only knowledge of an approximation ˜Pvol of Pvol, achieves RVWAP(A, Pvol) ≤ 2C(P^bins_vol, ˜P^bins_vol).

As an example of this result, suppose our approximation obeys (1/α)P^bins_vol(i) ≤ ˜P^bins_vol(i) ≤ αP^bins_vol(i) for all i, for some α > 1. Thus our estimated bin index probabilities are all within a factor of α of the truth. Then it is easy to show that C(P^bins_vol, ˜P^bins_vol) ≤ αE(P^bins_vol), so according to Theorems 10 and 11 our penalty for using the approximate distribution is a factor of α in competitive ratio.

5. REFERENCES

[1] B. Awerbuch, Y. Bartal, A. Fiat, and A. Rosén. Competitive non-preemptive call control. In Proc. 5th ACM-SIAM Symp. on Discrete Algorithms, pages 312-320, 1994.
[2] A. Borodin and R. El-Yaniv. Online Computation and Competitive Analysis. Cambridge University Press, 1998.
[3] R. El-Yaniv, A. Fiat, R. M. Karp, and G. Turpin. Optimal search and one-way trading online algorithms. Algorithmica, 30:101-139, 2001.
[4] M. Kearns and L. Ortiz. The Penn-Lehman automated trading project. IEEE Intelligent Systems, 2003. To appear.

6. APPENDIX

6.1 Proofs from Subsection 2.3

Proof. (Sketch of Theorem 3) W.l.o.g., assume that Q = 1 and the total volume is V. Consider the time t where the fixed schedule f sells the least, so that ft ≤ N/T. Consider the sequences where at time t we have pt = pmax and vt = V, and for times t' ≠ t we have pt' = pmin and vt' = 0. The VWAP is pmax, while the fixed schedule collects revenue at most (N/T)pmax + (N − N/T)pmin, i.e., an average price per share of at most pmax/T + pmin, giving a ratio of Ω(min(T, R)).

Proof. (Sketch of Theorem 4) The algorithm simply sells ut = (vt/Vmin)N shares at time t. The total number of shares sold U is clearly more than N, and U = Σ_t ut = Σ_t (vt/Vmin)N = (V/Vmin)N ≤ QN. The average price is VWAPA(S) = (Σ_t pt ut)/U = Σ_t pt (vt/V) = VWAPM(S), where we used the fact that ut/U = vt/V.

Proof. (of Theorem 5) We start with the proof of the lower bound. Consider the following scenario. For the first T' time units (for some T' < T) we have a price of √R pmin, and a total volume of Vmin. We observe how many shares the online algorithm has bought. If it has bought more than half of the shares, the remaining time steps have price pmin and volume Vmax − Vmin. Otherwise, the remaining time steps have price pmax and negligible volume. In the first case the online algorithm has paid at least √R pmin/2 while the VWAP is at most √R pmin/Q + pmin. Therefore, in this case the competitive ratio is Ω(Q). In the second case the online algorithm has to buy at least half of the shares at pmax, so its average cost is at least pmax/2. The market VWAP is √R pmin = pmax/√R, hence the competitive ratio is Ω(√R). For the upper bound, we can get a √R competitive ratio by buying all the shares once the price drops below √R pmin.
The Q upper bound is derived by running an algorithm that assumes the volume is Vmin. The online algorithm then pays some average price p (the VWAP of the first Vmin shares of market volume), while the overall market VWAP will be at least p/Q.

6.2 Proofs from Section 4

Proof. (Sketch of Theorem 10) We use the idea of guessing the total volume from Theorem 1, but now allow for the possibility of an arbitrary (but known) distribution over the total volume. In particular, consider constructing a distribution G^bins_vol over a set of volume values using Pvol and use it to guess the total volume V. Let the algorithm guess V̂ = Vmin 2^i with probability G^bins_vol(i). Then note that, for any price-volume sequence S, if V ∈ [Vmin 2^{i−1}, Vmin 2^i], VWAPA(S) ≥ G^bins_vol(i) VWAPM(S)/2. This implies an upper bound on RVWAP(A, Pvol) in terms of G^bins_vol. We then get that choosing G^bins_vol(i) ∝ √(P^bins_vol(i)) minimizes the upper bound, which leads to the upper bound stated in the theorem.

Proof. (Sketch of Theorem 11) Replace Pvol with ˜Pvol in the expression for G^bins_vol in the proof sketch for the last result.
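To make the guessing rule in this sketch concrete, here is a small Python sketch of the volume-guess construction; the function names are ours, p_bins stands for the bin distribution P^bins_vol (computed, for instance, as in the earlier sketch), and the normalization G^bins_vol(i) ∝ √(P^bins_vol(i)) follows the proof sketch above.

import math
import random
from typing import List

def guess_distribution(p_bins: List[float]) -> List[float]:
    # G^bins_vol(i) proportional to sqrt(P^bins_vol(i)); assumes p_bins is a valid distribution
    weights = [math.sqrt(p) for p in p_bins]
    z = sum(weights)
    return [w / z for w in weights]

def sample_volume_guess(p_bins: List[float], vmin: float) -> float:
    # Draw a bin index i with probability G^bins_vol(i) and guess V_hat = vmin * 2^i.
    g = guess_distribution(p_bins)
    i = random.choices(range(1, len(g) + 1), weights=g, k=1)[0]
    return vmin * (2 ** i)

The selling algorithm then behaves as if the day's total volume were this guess, selling a vt/V̂ fraction of its N shares after each market step as in Theorem 1; this is the randomization whose expected performance the proof bounds by 2E(P^bins_vol).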
Competitive Algorithms for VWAP and Limit Order Trading ABSTRACT We introduce new online models for two important aspects of modern financial markets: Volume Weighted Average Price trading and limit order books. We provide an extensive study of competitive algorithms in these models and relate them to earlier online algorithms for stock trading. 1. INTRODUCTION While popular images of Wall Street often depict swashbuckling traders boldly making large gambles on just their market intuitions, the vast majority of trading is actually considerably more technical and constrained. The constraints often derive from a complex combination of business, regulatory and institutional issues, and result in certain kinds of "standard" trading strategies or criteria that invite algorithmic analysis. One of the most common activities in modern financial markets is known as Volume Weighted Average Price, or VWAP, trading. Informally, the VWAP of a stock over a specified market period is simply the average price paid per share during that period, so the price of each transaction in the market is weighted by its volume. In VWAP trading, one attempts to buy or sell a fixed number of shares at a price that closely tracks the VWAP. Very large institutional trades constitute one of the main motivations behind VWAP activity. A typical scenario goes as follows. Suppose a very large mutual fund holds 3% of the outstanding shares of a large, publicly traded company--a huge fraction of the shares--and that this fund's manager decides he would like to reduce this holding to 2% over a 1-month period. (Such a decision might be forced by the fund's own regulations or other considerations.) Typically, such a fund manager would be unqualified to sell such a large number of shares in the open market--it requires a professional broker to intelligently break the trade up over time, and possibly over multiple exchanges, in order to minimize the market impact of such a sizable transaction. Thus, the fund manager would approach brokerages for help in selling the 1%. The brokerage will typically alleviate the fund manager's problem immediately by simply buying the shares directly from the fund manager, and then selling them off later--but what price should the brokerage pay the fund manager? Paying the price on the day of the sale is too risky for the brokerage, as they need to sell the shares themselves over an extended period, and events beyond their control (such as wars) could cause the price to fall dramatically. The usual answer is that the brokerage offers to buy the shares from the fund manager at a per-share price tied to the VWAP over some future period--in our example, the brokerage might offer to buy the 1% at a per-share price of the coming month's VWAP minus 1 cent. The brokerage now has a very clean challenge: by selling the shares themselves over the next month in a way that exactly matches the VWAP, a penny per share is earned in profits. If they can beat the VWAP by a penny, they make two cents per share. Such small-margin, high-volume profits can be extremely lucrative for a large brokerage. The importance of the VWAP has led to many automated VWAP trading algorithms--indeed, every major brokerage has at least one" VWAP box", Figure 1: The table summarizes the results presented in this paper. The rows represent results for either the OWT or VWAP criterion. The columns represent which model we are working in. 
The entry in the table is the competitive ratio between our algorithm and an optimal algorithm, and the closer the ratio is to 1 the better. The parameter R represents a bound on the maximum to the minimum price fluctuation and the parameter Q represents a bound on the maximum to minimum volume fluctuation in the respective model. (See Section 4 for a description of the Macroscopic Distribution Model.) All the results for the OWT trading criterion (which is a stronger criterion) directly translate to the VWAP criterion. However, in the VWAP setting, considering a restriction on the maximum to the minimum volume fluctuation Q, leads to an additional class of results which depends on Q. and some small companies focus exclusively on proprietary VWAP trading technology. In this paper, we provide the first study of VWAP trading algorithms in an online, competitive ratio setting. We first formalize the VWAP trading problem in a basic online model we call the price-volume model, which can be viewed as a generalization of previous theoretical online trading models incorporating market volume information. In this model, we provide VWAP algorithms and competitive ratios, and compare this setting with the one-way trading (OWT) problem studied in [3]. Our most interesting results, however, examine the VWAP trading problem in a new online trading model capturing the important recent phenomenon of limit order books in financial markets. Briefly, a limit buy or sell order specifies both the number of shares and the desired price, and will only be executed if there is a matching party on the opposing side, according to a well-defined matching procedure used by all the major exchanges. While limit order books (the list of limit orders awaiting possible future execution) have existed since the dawn of equity exchanges, only very recently have these books become visible to traders in real time, thus opening the way to trading algorithms of all varieties that attempt to exploit this rich market microstructure data. Such data and algorithms are a topic of great current interest on Wall Street [4]. We thus introduce a new online trading model incorporating limit order books, and examine both the one-way and VWAP trading problems in it. Our results are summarized in Figure 1 (see the caption for a summary). 2. THE PRICE-VOLUME TRADING MODEL We now present a trading model which includes both price and volume information about the sequence of trades. While this model is a generalization of previous formalisms for online trading, it makes an infinite liquidity assumption which fails to model the negative market impact that trading a large number of shares typically has. This will be addressed in the order book model studied in the next section. A note on terminology: throughout the paper (unless otherwise specified), we shall use the term "market" to describe all activity or orders other than those of the algorithm under consideration. The setting we consider can be viewed as a game between our algorithm and the market. 2.1 The Model In the price-volume trading model, we assume that the intraday trading activity in a given stock is summarized by a discrete sequence of price and volume pairs (pt, vt) for t = 1,..., T. Here t = 0 corresponds to the day's market open, and t = T to the close. While there is nothing technically special about the time horizon of a single day, it is particularly consistent with limit order book trading on Wall Street. 
The pair (pt, vt) represents the fact that a total of vt shares were traded at an (average) price per share pt in the market between time t − 1 and t. Realistically, we should imagine the number of intervals T being reasonably large, so that it is sensible to assign a common approximate price to all shares traded within an interval. In the price-volume model, we shall make an infinite liquidity assumption for our trading algorithms. More precisely, in this online model, we see the price-volume sequence one pair at a time. Following the observation of (pt, vt), we are permitted to sell any (possibly fractional) number of shares nt at the price pt. Let us assume that our goal is to sell N shares over the course of the day. Hence, at each time, we must select a (possibly fractional) number of shares nt to sell at price pt, subject to the global constraint Tt = 1 nt = N. It is thus assumed that if we have "left over" the closing price of the market--that is, nT = N − ET − 1 shares to sell after time T − 1, we are forced to sell them at t = 1 nt is sold at pT. In this way we are certain to sell exactly N shares over the course of the day; the only thing an algorithm must do is determine the schedule of selling based on the incoming market price-volume stream. Any algorithm which sells fractional volumes can be converted to a randomized algorithm which only sells integral volumes with the same expected number of shares sold. If we keep the hard constraint of selling exactly N shares, we might incur an additional slight loss in the conversion. (Note that we only allow fractional volumes in the price-volume model, where liquidity is not an issue. In the order book model to follow, we do not allow fractional volumes.) In VWAP trading, the goal of an online algorithm A which sells exactly N shares is not to maximize profits per se, but to track the market VWAP. The market VWAP for an intraday trading sequence S = (p1, v1),..., (pT, vT) is simply the average price paid per share over the course of the trading where V is the total daily volume, i.e., V = ETt = 1 vt. . If on the sequence S, the algorithm A sells its N stocks using the volume sequence n1,...nT, then we analogously define the VWAP of A on market sequence S by Note that the market VWAP does not include the shares that the algorithm sells. The VWAP competitive ratio of A with respect to a set of sequences Σ is then In the case that A is randomized, we generalize the definition above by taking an expectation over VWAPA (S) inside the max. We note that unlike on Wall Street, our definition of VWAPM does not take our own trading into account. It is easy to see that this makes it a more challenging criterion to track. In contrast to the VWAP, another common measure of the performance of an online selling algorithm would be its one-way trading (OWT) competitive ratio [3] with respect to a set of sequences Σ: where the algorithms performance is compared to the largest individual price appearing in the sequence S. In both VWAP and OWT, we are comparing the average price per share received by a selling algorithm to some measure of market performance. In the case of OWT, we compare to the rather ambitious benchmark of the high price of the day, ignoring volumes entirely. In VWAP trading, we have the more modest goal of comparing favorably to the overall market average of the day. As we shall see, there are some important commonalities and differences to these two approaches. 
For now we note one simple fact: on any specific sequence S, VWAPA (S) may be larger that VWAPM (S). However, RVWAP (A) cannot be smaller than 1, since on any sequence S in which all price pt are identical, it is impossible to get a better average share per price. Thus, for all algorithms A, both RVWAP (A) and ROWT (A) are larger than 1, and the closer to 1 they are, the better A is tracking its respective performance measure. 2.2 VWAP Results in the Price-Volume Model As in previous work on online trading, it is generally not possible to obtain finite bounds on competitive ratios with absolutely no assumptions on the set of sequences Σ--bounds on the maximum variation in price or volume are required, depending on the exact setting. We thus introduce the following two assumptions. 2.2.0.1 Volume Variability Assumption. . Let 0 <Vmin <Vmax be known positive constants, and define Q = Vmax/Vmin. For all intraday trading sequences S ∈ Σ, the total daily volume V ∈ [Vmin, Vmax]. 2.2.0.2 Price Variability Assumption. . Let 0 <pmin <pmax be known positive constants, and define R = pmax/pmin. For all intraday trading sequences S ∈ Σ, the prices satisfy pt ∈ [pmin, pmax], for all t = 1,..., T. Competitive ratios are generally taken over all sets Σ consistent with at least one of these assumptions. To gain some intuition consider the two trivial cases of R = 1 and Q = 1. In the case of R = 1 (where there is no fluctuation in price), any schedule is optimal. In the case of Q = 1 (where the total volume V over the trading period is known), we can gain a competitive ratio of 1 by selling vtV N shares after each time period. For the OWT problem in the price-volume model, volumes are irrelevant for the performance criterion, but for the VWAP criterion they are central. For the OWT problem under the price variability assumption, the results of [3] established that the optimal competitive ratio was Θ (log (R)). Our first result establishes that the optimal competitive ratio for VWAP under the volume variability assumption is Θ (log (Q)) and is achieved by an algorithm that ignores the price data. THEOREM 1. In the price-volume model under the volume variability assumption, there exists an online algorithm A for selling N shares achieving competitive ratio RVWAP (A) <2 log (Q). In addition, if only the volume variability (and not the price variability) assumption holds, any online algorithm A for selling N shares has RVWAP (A) = Ω (log (Q)). PROOF. (Sketch) For the upper bound, the idea is similar to the price reservation algorithm of [3] for the OWT problem, and similar in spirit to the general technique of classify and select [1]. Consider algorithms which use a parameter Vˆ, which is interpreted as an estimate for the total volume for the day. Then at each time t, if the market price and volume is (pt, vt), the algorithm sells a fraction Vˆ of its shares. We consider a family of log (Q) such algorithms, where algorithm Ai uses Vˆ = Vmin2i-1. Clearly, one of the Ai has a competitive ratio of 2. We can derive an O (log (Q)) VWAP competitive ratio by running these algorithms in parallel, and letting each algorithm sell N / log (Q) shares. (Alternatively, we can randomly select one Ai and guarantee the same expected competitive ratio.) We now sketch the proof of the lower bound, which relates performance in the VWAP and OWT problems. Any algorithm that is c-competitive in the VWAP setting (under fixed Q) is 3c-competitive in the OWT setting with R = Q/2. 
To show this, we take any sequence S of prices for the OWT problem, and convert it into a price-volume sequence for the VWAP problem. The prices in the VWAP sequence are the same as in S. To construct the volumes in the VWAP sequence, we segment the prices in S into log (R) intervals [2i-1pmin, 2ipmin). Suppose pt ∈ [2i-1pmin, 2ipmin), and this is the first time in S that a price has fallen in this interval. Then in the VWAP sequence we set the volume vt = 2i-1. If this is not the first visit to the interval containing pt, we set vt = 0. Assume that the maximum price in S is pmax. The VWAP of our sequence is at least pmax/3. Since we had a c competitive algorithm, its average sell is at least pmax/3c. The lower bound now follows using the lower bound in [3]. An alternative approach to VWAP is to ignore the volumes in favor of prices, and apply an algorithm for the OWT problem. Note that the lower bound in this theorem, unlike in the previous one, only assumes a price variation bound. vt / THEOREM 2. In the price-volume model under the price variability assumption, there exists an online algorithm A for selling N shares achieving competitive ratio RVWAP (A) = O (log (R)). In addition, if only the price variability (and not the volume variability) assumption holds, any online A for selling N shares has RVWAP (A) = 0 (log (R)). PROOF. (Sketch) Follows immediately from the results of [3] for OWT: the upper bound from the simple fact that for any sequence S, VWAPA (S) is less than max1 ≤ t ≤ T {pt}, and the lower bound from a reduction to OWT. Theorems 1 and 2 demonstrate that one can achieve logarithmic VWAP competitive ratios under the assumption of either bounded variability of total volume or bounded variability of maximum price. If both assumptions hold, it is possible to give an algorithm accomplishing the minimum of log (Q) and log (R). This "flexibility" of approach derives from the fact that the VWAP is a quantity in which both prices and volumes matter, as opposed to OWT. 2.3 Related Results in the Price-Volume Model All of the VWAP algorithms we have discussed so far make some use of the daily data (pt, vt) as it unfolds, using either the price or volume information. In contrast, a fixed schedule VWAP algorithm has a predetermined distribution {f1, f2,...fT}, and simply sells ftN shares at time t, independent of (pt, vt). Fixed schedule VWAP algorithms, or slight variants of them, are surprisingly common on Wall Street, and the schedule is usually derived from historical intraday volume data. Our next result demonstrates that such algorithms can perform considerably worse than dynamically adaptive algorithms in terms of the worst case competitive ratio. THEOREM 3. In the price-volume model under both the volume and price variability assumptions, any fixed schedule VWAP algorithm A for selling N shares has sell VWAP competitive ratio RVWAP (A) = 0 (min (T, R)). The proofs of all the results in this subsection are in the Appendix. So far our emphasis has been on VWAP algorithms that must sell exactly N shares. In many realistic circumstances, however, there is actually some flexibility in the precise number of shares to be sold. For instance, this is true at large brokerages, where many separate VWAP trades may be pooled and executed by a common algorithm, and the firm would be quite willing to carry a small position of unsold shares overnight if it resulted in better execution prices. 
The following theorem (which interestingly has no analogue for the OWT problem) demonstrates that this trade-off in shares sold and performance can be realized dramatically in our model. It states that if we are willing to let the number of shares sold vary with Q, we can in fact achieve a VWAP competitive ratio of 1. THEOREM 4. In the price-volume model under the volume variability assumption, there exists an algorithm A that always sells between N and QN shares and that the average price per sold share is exactly VWAPM (S). In many online problems, there is a clear distinction between "benefit" problems and "cost" problems [2]. In the VWAP setting, selling shares is a benefit problem, and buying shares is a cost problem. The definitions of the competitive ratios, Rbuy VWAP (A) and Rbuy OWT (A), for algorithms which Figure 2: Sample Island order books for MSFT. buy exactly N shares are maxS ∈ Σ {VWAPA (S) / VWAPM (S)} and maxS ∈ Σ maxt {VWAPA (S) / pt} respectively. Eventhough Theorem 4 also holds for buying, in general, the competitive ratio of the buy (cost) problem is much higher, as stated in the following theorem. THEOREM 5. In the price-volume model under the volume and price variability assumptions, there exists an online algorithm A for buying N shares achieving buy VWAP com √ 3. A LIMIT ORDER BOOK TRADING MODEL Before we can define our online trading model based on limit order books, we give some necessary background on the detailed mechanics of financial markets, which are sometimes referred to as market microstructure. We then provide results and algorithms for both the OWT and VWAP problems. 3.1 Background on Limit Order Books and Market Microstructure A fundamental distinction in stock trading is that between a limit order and a market order. Suppose we wish to purchase 1000 shares of Microsoft (MSFT) stock. In a limit order, we specify not only the desired volume (1000 shares), but also the desired price. Suppose that MSFT is currently trading at roughly $24.07 a share (see Figure 2, which shows an actual snapshot of a recent MSFT order book on Island (www.island.com), a well-known electronic exchange for NASDAQ stocks), but we are only willing to buy the 1000 shares at $24.04 a share or lower. We can choose to submit a limit order with this specification, and our order will be placed in a queue called the buy order book, which is ordered by price, with the highest offered unexecuted buy price at the top (often referred to as the bid). If there are multiple limit orders at the same price, they are ordered by time of arrival (with older orders higher in the book). In the example provided by Figure 2, our order would be placed immediately after the extant order for 5,503 shares at $24.04; though we offer the same price, this order has arrived before ours. Similarly, a sell order book for sell limit orders (for instance, we might want to sell 500 shares of MSFT at $24.10 or higher) is maintained, this time with the lowest sell price offered (often referred to as the ask). Thus, the order books are sorted from the most competitive limit orders at the top (high buy prices and low sell prices) down to less competitive limit orders. The bid and ask prices (which again, are simply the prices in the limit orders at the top of the buy and sell books, respectively) together are sometimes referred to as the inside market, and the difference between them as the spread. 
By definition, the order books always consist exclusively of unexecuted orders--they are queues of orders hopefully waiting for the price to move in their direction. How then do orders get executed? There are two methods. First, any time a market order arrives, it is immediately matched with the most competitive limit orders on the opposing book. Thus, a market order to buy 2000 shares is matched with enough volume on the sell order book to fill the 2000 shares. For instance, in the example of Figure 2, such an order would be filled by the two limit sell orders for 500 shares at $24.069, the 500 shares at $24.07, the 200 shares at $24.08, and then 300 of the 1981 shares at $24.09. The remaining 1681 shares of this last limit order would remain as the new top of the sell limit order book. Second, if a buy (sell, respectively) limit order comes in above the ask (below the bid, respectively) price, then the order is matched with orders on the opposing books. It is important to note that the prices of executions are the prices specified in the limit orders already in the books, not the prices of the incoming order that is immediately executed. Every market or limit order arrives atomically and instantaneously--there is a strict temporal sequence in which orders arrive, and two orders can never arrive simultaneously. This gives rise to the definition of the last price of the exchange, which is simply the last price at which the exchange executed an order. It is this quantity that is usually meant when people casually refer to the (ticker) price of a stock. Note that a limit buy (sell, respectively) order with a price of infinity (0, respectively) is effectively a market order. We shall thus assume without loss of generality that all orders are placed as limit order. Although limit orders which are unexecuted may be removed by the party which placed them, for simplicity, we assume that limit orders are never removed from the books. We refer the reader to [4] for further discussion of modern electronic exchanges and market microstructure. 3.2 The Model The online order book trading model is intended to capture the realistic details of market microstructure just discussed in a competitive ratio setting. In this refined model, a day's market activity is described by a sequence of limit orders (pt, vt, bt). Here bt is a bit indicating whether the order is a buy or sell order, while pt is the limit order price and vt the number of shares desired. Following the arrival of each such limit order, an online trading algorithm is permitted to place its own limit order. These two interleaved sources (market and algorithm) of limit orders are then simply operated on according to the matching process described in Section 3.1. Any limit order that is not immediately executable according to this process is placed in the appropriate (buy or sell) book for possible future execution; arriving orders that can be partially or fully executed are so executed, with any residual shares remaining on the respective book. The goal of a VWAP or OWT selling algorithm is essentially the same as in the price-volume model, but the context has changed in the following two fundamental ways. First, the assumption of infinite liquidity in the price-volume model is eliminated entirely. The number of shares available at any given price is restricted to the total volume of limit orders offering that price. Second, all incoming orders, and therefore the complete limit order books, are assumed to be visible to the algorithm. 
This is consistent with modern electronic financial exchanges, and indeed is the source of much current interest on Wall Street [4]. In general, the definition of competitive ratios in the order book model is complicated by the fact that now our algorithm's activity influences the sequence of executed prices and volumes. We thus first define the execution sequence determined by a limit order sequence (placed by the market and our algorithm). Let S = (pl, vl, bl),..., (pT, vT, bT) be a limit order sequence placed by the market, and let S' = (p' l, v' l, b' l),..., (p' T, v' T, b' T) be a limit order sequence placed by our algorithm (unless otherwise specified, all b' t are of the sell type). Let merge (S, S') be the merged sequence (pl, vl, bl), (p' l, v' l, b' l),..., (pT, vT, bT), (p' T, v' T, b' T), which is the time sequence of orders placed by the market and algorithm. Note that the algorithm has the option of not placing an order, which we can view as a zero volume order. If we conducted the order book maintenance and order execution process described in Section 3.1 on the sequence merge (S, S'), at irregular intervals a trade occurs for some number of shares and some price. In each executed trade, the selling party is either the market or the algorithm. Let execM (S, S') = (ql, wl),..., (qT 1, wT 1) be the sequence of executions where the market (that is, a party other than the algorithm) was the selling party, where the qt are the execution prices and wt the execution volumes. Similarly, we define execA (S, S') = (rl, xl),..., (rT 11, xT 11) to be the sequence of executions in which the algorithm was the selling party. Thus, execA (S, S') ∪ execM (S, S') is the set of all executions. We generally expect T' ' to be (possibly much) smaller than T'. The revenue of the algorithm and the market are defined as: Note that both these quantities are solely determined by the execution sequences execM (S, S') and execA (S, S'), respectively. For an algorithm A which is constrained to sell exactly N shares, we define the OWT competitive ratio of A, ROWT (A), as the maximum ratio (under any S E Σ) of the revenue obtained by A, as compared to the revenue obtained by an optimal offline algorithm A *. More formally, for A * which is constrained to sell exactly N shares, we define where S * is the limit order sequence placed by A * on S. If the algorithm A is randomized then we take the appropriate expectation with respect to S'--A. We define the VWAP competitive ratio, RVWAP (A), as the maximum ratio (under any S E Σ) between the market and algorithm VWAPs. More formally, define VWAPM (S, S') as REVM (S, S') / ~ T ~ t = 1 wt, where the denominator is just the total executed volume of orders placed by the market. Similarly, we define VWAPA (S, S') as REVA (S, S') / N, since we assume the algorithm sells no more than N shares (this definition implicitly assumes that A gets a 0 price for unsold shares). The VWAP competitive ratio of A is then: where S' is the online sequence of limit orders generated by A in response to the sequence S. 3.3 OWT Results in the Order Book Model For the OWT problem in the order book model, we introduce a more subtle version of the price variability assumption. This is due to the fact that our algorithm's trading can impact the high and low prices of the day. For the assumption below, note that execM (S, 0) is the sequence of executions without the interaction of our algorithm. 3.3.0.3 Order Book Price Variability Assumption. . 
Let 0 <pmin <pmax be known positive constants, and define R = pmax/pmin. For all intraday trading sequences S E Σ, the prices pt in the sequence execM (S, 0) satisfy pt E [pmin, pmax], for all t = 1,..., T. Note that this assumption does not imply that the ratios of high to low prices under the sequences execM (S, S') or execA (S, S') are bounded by R. In fact, the ratio in the sequence execA (S, S') could be infinite if the algorithm ends up selling some stocks at a 0 price. PROOF. The algorithm A works by guessing a price p in the set {pmin2i: 1 <i <log (R)} and placing a sell limit order for all N shares at the price p at the beginning of the day. (Alternatively, algorithm A can place log (R) sell limit orders, where the i-th one has price 2ipmin and volume N / log (R).) By placing an order at the beginning of the day, the algorithm undercuts all sell orders that will be placed during the day for a price of p or higher, meaning the algorithm's N shares must be filled first at this price. Hence, if there were k shares that would have been sold at price p or higher without our activity, then A would sell at least kp shares. We define {pj} to be the multiset of prices of individual shares that are either executed or are buy limit order shares that remained unexecuted, excluding the activity of our algorithm (that is, assuming our algorithm places no orders). Assume without loss of generality that p1> p2>... . Consider guessing the kth highest such price, pk. If an order for N shares is placed at the day's start at price pk, then we are guaranteed to obtain a return of kpk. Let k * = argmaxk {kpk}. We can view our algorithm as attempting to guess pk ∗, and succeeding if the guess p satisfies p E [pk ∗ / 2, pk ∗]. Hence, we are 2 log (R) competitive with the quantity max1 <k <N kpk. Note that where ρ is defined as the sum of the top N prices pi without A's involvement. Similarly, let {pj'} be the multiset of prices of individual executed shares, or the prices of unexecuted buy order shares, but now including the orders placed by some selling algorithm A'. We now wish to show that for all algorithms A' which sell N shares, REVA ~ <~ N i = 1 p' i <ρ. Essentially, this inequality states the intuitive idea that a selling algorithm can only lower executed or unmatched buy order share prices. To prove this, we use induction to show that the removal of the activity of a selling algorithm causes these prices to increase. First, remove the last share in the last sell order placed by either A' or the market on an arbitrary sequence merge (S, S')--by this we mean, take the last sell order placed by A' or the market and decrease its volume by one share. After this modification, the top N prices p' 1...p 'N will not decrease. This is because either this sell order share was not executed, in which case the claim is trivially true, or, if it was executed, the removal of this sell order share leaves an additional unexecuted buy order share of equal or higher price. For induction, assume that if we remove a share from any sell order that was placed, by A' or the market, at or after time t then the top N prices do not decrease. We now show that if we remove a share from the last sell order that was placed by A' or the market before time t, then the top N prices do not decrease. If this sell order share was not executed, then the claim is trivially true. 
Else, if the sell order share was executed, then claim is true because by removing this executed share from the sell order either: i) the corresponding buy order share (of equal or higher value) is unmatched on the remainder of the sequence, in which case the claim is true; or ii) this buy order matches some sell order share at an equal or higher price, which has the effect of removing a share from a sell order on the remainder of the sequence, and, by the inductive assumption, this can only increase prices. Hence, we have proven that for all A' which sell N shares REVA ≤ ρ. We have now established that our revenue satisfies where A' performs an arbitrary sequence of N sell limit orders. 3.4 VWAP Results in the Order Book Model The OWT algorithm from Theorem 6 can be applied to obtain the following VWAP result: COROLLARY 7. In the order book model under the order book price variability assumption, there exists an online algorithm A for selling N shares achieving sell VWAP competitive ratio RVWAP (A) = O (log (R) log (N)). We now make a rather different assumption on the sequences S. 3.4.0.4 Bounded Order Volume and Max Price Assumption. The set of sequences Σ satisfies the following two properties. First, we assume that each order placed by the market is of volume less than γ, which we view as a mild assumption since typically single orders on the market are not of high volume (due to liquidity issues). This assumption allows our algorithm to place at least one limit order at a time interleaved with approximately γ market executions. Second, we assume that there is "large" volume in the sell order books below the price pmax, which means that no orders placed by the market will be executed above the price pmax. The simplest way to instantiate this latter assumption in the order book model is to assume that each sequence S ∈ Σ starts by placing a huge number of sell orders (more than Vmax) at price pmax. Although this assumption has a maximum price parameter, it does not imply that the price ratio R is finite, since it does not imply any lower bound on the prices of buy or executed shares (aside from the trivial one of 0). THEOREM 8. Consider the order book model under the bounded order volume and max price assumption. There exists an algorithm A in which after exactly γN market executions have occurred, then A has sold at most N shares and where S' is a sequence of N sell limit orders generated by A when observing S. PROOF. The algorithm divides the trading day into volume intervals whose real-time duration may vary. For each period i in which γ shares have been executed in the market, the algorithm computes the market VWAP of only those shares traded in period i; let us denote this by VWAPi. Following this ith volume interval, the algorithm places a limit order to sell exactly one share at a price "close" to VWAPi. More precisely, the algorithm only places orders at the discrete prices (1 − ~) pmax, (1 − ~) 2pmax,.... Following volume interval i, the algorithm places a limit order to sell one share at the discretized price that is closest to VWAPi, but which is strictly smaller. For the analysis, we begin by noting that if all of the algorithm's limit orders are executed during the day, the total revenue received by the algorithm would be at least (1 − ~) VWAPM (S, S') N. 
To see this, it suffices to note that VWAPM (S, S') is a uniform mixture of the VWAPi (since by definition they each cover the same amount of market volume); and if all the algorithm's limit orders were executed, they each received more than (1 − ~) VWAPi dollars for the interval i they followed. We now count the potential "lost revenue" of the algorithm due to unexecuted limit orders. By the assumption that individual orders are placed with volume less than γ, then our algorithm is able to place a limit order during every block of γ shares have been traded. Hence, after γN market orders have been executed, A has placed N orders in the market. Note that there can be at most one limit order (and thus, at most one share) left unexecuted at each level of the discretized price ladder defined above. This is because following interval i, the algorithm places its limit order strictly below VWAPi, so if VWAPj ≥ VWAPi for j> i, this limit order must have been executed. Thus unexecuted limit orders bound the VWAPs of the remainder of the day, resulting in at most one unexecuted order per price level. cretized prices: E ∞ A bound on the lost revenue is thus the sum of the disi = 1 (1 − ~) ipmax ≤ pmax / ~. Clearly our algorithm has sold at most N shares. Note that as N becomes large, VWAPA approaches 1 − ~ times the market VWAP. If we knew that the final total volume of the market executions is V, then we can set γ = V/N, assuming that γ>> 1. If we have only an upper and lower bound on V we should be able to "guess" and incur a logarithmic loss. The following assumption tries to capture the market volume variability. 3.4.0.5 Order Book Volume Variability Assumption. We now assume that the total volume (which includes the shares executed by both our algorithm and the market) is variable within some known region and that the market volume will be greater than our algorithms volume. More formally, for all S ∈ Σ, assume that the total volume V of shares traded in execM (S, S'), for any sequence S' of N sell limit orders, satisfies 2N ≤ Vmin ≤ V ≤ Vmax. Let Figure 3: Here we present bounds from Section 4 based on the empirical volume distributions for four real stocks: QQQ, MCHP, JNPR, and CHKP. The plots show histograms for the total daily volumes transacted on Island for these stocks, in the last year and a half, along with the corresponding values of log (Q) and #(P bins vol) (denoted by' E'). We assume that the minimum and maximum daily volumes in the data correspond to Vmin and Vmax, respectively. The worst-case competitive ratio bounds (which are twice log (Q)) of our algorithm for those stocks are 9.42, 10.56, 11.32, and 13.20, respectively. The corresponding bounds on the competitive ratio performance of our algorithm under the volume distribution model (which are twice #(P bins vol)) are better: 7.54, 7.72, 7.94, and 9.00, respectively (a 20 − 40% relative improvement). Using a finer volume binning along with a slightly more refined bound on the competitive ratio, we can construct algorithms that, using the empirical volume distribution given as correct, guarantee even better competitive ratios of 2.76, 2.73, 2.75, and 3.17, respectively for those stocks (details omitted). 4. MACROSCOPIC DISTRIBUTION MODELS We conclude our results with a return to the price-volume model, where we shall introduce some refined methods of analysis for online trading algorithms. We leave the generalization of these methods to the order book model for future work. 
The competitive ratios defined so far measure performance relative to some baseline criterion in the worst case over all market sequences S ∈ Σ. It has been observed in many online settings that such worst-case metrics can yield pessimistic results, and various relaxations have been considered, such as permitting a probability distribution over the input sequence. We now consider distributional models that are considerably weaker than assuming a distribution over complete market sequences S ∈ Σ. In the volume distribution model, we assume only that there exists a distribution Pvol over the total volume V traded in the market for the day, and then examine the worst-case competitive ratio over sequences consistent with the randomly chosen volume. More precisely, we define S ∈ seq (V) VWAPA (S) Here V ∼ Pvol denotes that V is chosen with respect to distribution Pvol, and seq (V) ⊂ Σ is the set of all market sequences (P1, V1),..., (PT, VT) satisfying Tt = 1 Vt = V. Similarly, for OWT, we can define Here Pmaxprice is a distribution over just the maximum price of the day, and we then examine worst-case sequences consistent with this price (seq (P) ⊂ Σ is the set of all market sequences satisfying max1 <t <T Pt = P). Analogous buy-side definitions can be given. We emphasize that in these models, only the distribution of maximum volume and price is known to the algorithm. We also note that our probabilistic assumptions on S are considerably weaker than typical statistical finance models, which would posit a detailed stochastic model for the step-by-step evolution of (Pt, Vt). Here we instead permit only a distribution over crude, macroscopic measures of the entire day's market activity, such as the total volume and high price, and analyze the worst-case performance consistent with these crude measures. For this reason, we refer to such settings as the macroscopic distribution model. The work of El-Yaniv et al. [3] examines distributional assumptions similar to ours, but they emphasize the worst ROWT (A, Pmaxprice) = Ep_Pmaxprice case choices for the distributions as well, and show that this leads to results no better than the original worst-case analysis over all sequences. In contrast, we feel that the analysis of specific distributions Pvol and Pmaxprice is natural in many financial contexts and our preliminary experimental results show significant improvements when this rather crude distributional information is taken into account (see Figure 3). Our results in the VWAP setting examine the cases where these distributions are known exactly or only approximately. Similar results can be obtained for macroscopic distributions of maximum daily price for the one-way trading setting. 4.1 Results in the Macroscopic Distribution Model We begin by noting that the algorithms examined so far work by binning total volumes or maximum prices into bins of exponentially increasing size, and then "guessing" the index of the bin in which the actual quantity falls. It is thus natural that the macroscopic distribution model performance of such algorithms (which are common in competitive analysis) might depend on the distribution of the true bin index. In the remaining, we assume that Q is a power of 2 and the base of the logarithm is 2. Let Pvol denote the distribution of total daily market volume. 
We define the related distribution P_vol^bins over bin indices i as follows: for all i = 1, ..., log(Q) − 1, P_vol^bins(i) = Pr_{V∼Pvol}[ V ∈ [Vmin·2^{i−1}, Vmin·2^{i}) ], and P_vol^bins(log(Q)) = Pr_{V∼Pvol}[ V ∈ [Vmax/2, Vmax] ]. We define E(P_vol^bins) = ( Σ_{i=1}^{log(Q)} (P_vol^bins(i))^{1/2} )^2. Since the support of P_vol^bins has only log(Q) elements, E(P_vol^bins) can vary from 1 (for distributions Pvol that place all of their weight in only one of the log(Q) intervals between Vmin, 2Vmin, 4Vmin, ..., Vmax) to log(Q) (for distributions Pvol in which the total daily volume is equally likely to fall in any one of these intervals). Note that distributions Pvol of this latter type are far from uniform over the entire range [Vmin, Vmax]. THEOREM 10. In the volume distribution model under the volume variability assumption, there exists an online algorithm A for selling N shares that, using only knowledge of the total volume distribution Pvol, achieves R_VWAP(A, Pvol) ≤ 2E(P_vol^bins). All proofs in this section are provided in the appendix. As a concrete example, consider the case in which Pvol is the uniform distribution over [Vmin, Vmax]. In that case, P_vol^bins is exponentially increasing and peaks at the last bin, which, having the largest width, also has the largest weight. In this case E(P_vol^bins) is a constant (i.e., independent of Q), leading to a constant competitive ratio. On the other hand, if Pvol is exponential, then P_vol^bins is uniform, leading to an O(log(Q)) competitive ratio, just as in the more adversarial price-volume setting discussed earlier. In Figure 3, we provide additional specific bounds obtained for empirical total daily volume distributions computed for some real stocks. We now examine the setting in which Pvol is unknown, but an approximation P̃vol is available. Let us define C(P_vol^bins, P̃_vol^bins) = ( Σ_{j=1}^{log(Q)} (P̃_vol^bins(j))^{1/2} ) · Σ_{i=1}^{log(Q)} P_vol^bins(i)/(P̃_vol^bins(i))^{1/2}. THEOREM 11. In the volume distribution model under the volume variability assumption, there exists an online algorithm A for selling N shares that, using only knowledge of an approximation P̃vol of Pvol, achieves R_VWAP(A, Pvol) ≤ 2C(P_vol^bins, P̃_vol^bins). As an example of this result, suppose our approximation obeys (1/α)·P_vol^bins(i) ≤ P̃_vol^bins(i) ≤ α·P_vol^bins(i) for all i, for some α > 1. Thus our estimated bin index probabilities are all within a factor of α of the truth. Then it is easy to show that C(P_vol^bins, P̃_vol^bins) ≤ α·E(P_vol^bins), so according to Theorems 10 and 11 our penalty for using the approximate distribution is a factor of α in competitive ratio.
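To make the kind of computation behind Figure 3 concrete, the following Python sketch bins a history of total daily volumes into the exponentially sized intervals above, estimates P_vol^bins, and compares the worst-case bound 2·log(Q) with the distributional bound 2·E(P_vol^bins). It uses the form E(P) = (Σ_i √P(i))² given above, and the volume numbers at the bottom are made up for illustration (they are not Island data).

# Sketch: estimate P_vol^bins from daily volume data and compare the
# worst-case bound 2*log(Q) with the distributional bound 2*E(P_vol^bins).
# Bin i (0-indexed here) covers [vmin * 2^i, vmin * 2^(i+1)).
import math

def volume_bins(daily_volumes):
    vmin, vmax = min(daily_volumes), max(daily_volumes)
    n_bins = max(1, math.ceil(math.log2(vmax / vmin)))    # about log(Q) bins
    counts = [0] * n_bins
    for v in daily_volumes:
        i = min(n_bins - 1, int(math.log2(v / vmin)))
        counts[i] += 1
    return [c / len(daily_volumes) for c in counts]

def E(p_bins):
    return sum(math.sqrt(p) for p in p_bins) ** 2

volumes = [2.1e6, 3.4e6, 1.2e6, 5.8e6, 2.9e6, 8.0e6, 1.9e6, 4.4e6]   # illustrative
p = volume_bins(volumes)
print("worst-case bound, 2*log(Q):       ", 2 * len(p))
print("distributional bound, 2*E(P_bins):", round(2 * E(p), 2))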
Competitive Algorithms for VWAP and Limit Order Trading ABSTRACT We introduce new online models for two important aspects of modern financial markets: Volume Weighted Average Price trading and limit order books. We provide an extensive study of competitive algorithms in these models and relate them to earlier online algorithms for stock trading. 1. INTRODUCTION While popular images of Wall Street often depict swashbuckling traders boldly making large gambles on just their market intuitions, the vast majority of trading is actually considerably more technical and constrained. The constraints often derive from a complex combination of business, regulatory and institutional issues, and result in certain kinds of "standard" trading strategies or criteria that invite algorithmic analysis. One of the most common activities in modern financial markets is known as Volume Weighted Average Price, or VWAP, trading. Informally, the VWAP of a stock over a specified market period is simply the average price paid per share during that period, so the price of each transaction in the market is weighted by its volume. In VWAP trading, one attempts to buy or sell a fixed number of shares at a price that closely tracks the VWAP. Very large institutional trades constitute one of the main motivations behind VWAP activity. A typical scenario goes as follows. Suppose a very large mutual fund holds 3% of the outstanding shares of a large, publicly traded company--a huge fraction of the shares--and that this fund's manager decides he would like to reduce this holding to 2% over a 1-month period. (Such a decision might be forced by the fund's own regulations or other considerations.) Typically, such a fund manager would be unqualified to sell such a large number of shares in the open market--it requires a professional broker to intelligently break the trade up over time, and possibly over multiple exchanges, in order to minimize the market impact of such a sizable transaction. Thus, the fund manager would approach brokerages for help in selling the 1%. The brokerage will typically alleviate the fund manager's problem immediately by simply buying the shares directly from the fund manager, and then selling them off later--but what price should the brokerage pay the fund manager? Paying the price on the day of the sale is too risky for the brokerage, as they need to sell the shares themselves over an extended period, and events beyond their control (such as wars) could cause the price to fall dramatically. The usual answer is that the brokerage offers to buy the shares from the fund manager at a per-share price tied to the VWAP over some future period--in our example, the brokerage might offer to buy the 1% at a per-share price of the coming month's VWAP minus 1 cent. The brokerage now has a very clean challenge: by selling the shares themselves over the next month in a way that exactly matches the VWAP, a penny per share is earned in profits. If they can beat the VWAP by a penny, they make two cents per share. Such small-margin, high-volume profits can be extremely lucrative for a large brokerage. The importance of the VWAP has led to many automated VWAP trading algorithms--indeed, every major brokerage has at least one" VWAP box", Figure 1: The table summarizes the results presented in this paper. The rows represent results for either the OWT or VWAP criterion. The columns represent which model we are working in. 
The entry in the table is the competitive ratio between our algorithm and an optimal algorithm, and the closer the ratio is to 1 the better. The parameter R represents a bound on the maximum to the minimum price fluctuation and the parameter Q represents a bound on the maximum to minimum volume fluctuation in the respective model. (See Section 4 for a description of the Macroscopic Distribution Model.) All the results for the OWT trading criterion (which is a stronger criterion) directly translate to the VWAP criterion. However, in the VWAP setting, considering a restriction on the maximum to the minimum volume fluctuation Q leads to an additional class of results which depends on Q. and some small companies focus exclusively on proprietary VWAP trading technology. In this paper, we provide the first study of VWAP trading algorithms in an online, competitive ratio setting. We first formalize the VWAP trading problem in a basic online model we call the price-volume model, which can be viewed as a generalization of previous theoretical online trading models incorporating market volume information. In this model, we provide VWAP algorithms and competitive ratios, and compare this setting with the one-way trading (OWT) problem studied in [3]. Our most interesting results, however, examine the VWAP trading problem in a new online trading model capturing the important recent phenomenon of limit order books in financial markets. Briefly, a limit buy or sell order specifies both the number of shares and the desired price, and will only be executed if there is a matching party on the opposing side, according to a well-defined matching procedure used by all the major exchanges. While limit order books (the list of limit orders awaiting possible future execution) have existed since the dawn of equity exchanges, only very recently have these books become visible to traders in real time, thus opening the way to trading algorithms of all varieties that attempt to exploit this rich market microstructure data. Such data and algorithms are a topic of great current interest on Wall Street [4]. We thus introduce a new online trading model incorporating limit order books, and examine both the one-way and VWAP trading problems in it. Our results are summarized in Figure 1 (see the caption for a summary). 2. THE PRICE-VOLUME TRADING MODEL 2.1 The Model 2.2 VWAP Results in the Price-Volume Model 2.2.0.1 Volume Variability Assumption. 2.2.0.2 Price Variability Assumption. 2.3 Related Results in the Price-Volume Model 3. A LIMIT ORDER BOOK TRADING MODEL 3.1 Background on Limit Order Books and Market Microstructure 3.2 The Model 3.3 OWT Results in the Order Book Model 3.3.0.3 Order Book Price Variability Assumption. 3.4 VWAP Results in the Order Book Model 3.4.0.5 Order Book Volume Variability Assumption. 4. MACROSCOPIC DISTRIBUTION MODELS 4.1 Results in the Macroscopic Distribution Model
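Since the VWAP of a period is just the volume-weighted average transaction price, a few lines of Python make the definition and the penny-per-share arithmetic of the brokerage scenario concrete. The trade list and share count below are invented purely for illustration.

# Sketch: VWAP = volume-weighted average transaction price over the period.
trades = [(30.10, 500), (30.25, 1200), (29.95, 800), (30.40, 300)]   # (price, shares)
vwap = sum(p * v for p, v in trades) / sum(v for _, v in trades)
print(f"market VWAP = {vwap:.4f}")

# The brokerage scenario: buying from the fund at (VWAP - $0.01) and selling at
# exactly the VWAP earns one cent per share; beating the VWAP by a cent earns two.
shares = 1_000_000
print("profit if the sale matches the VWAP:    ", shares * 0.01)
print("profit if the sale beats the VWAP by 1c:", shares * 0.02)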
Competitive Algorithms for VWAP and Limit Order Trading ABSTRACT We introduce new online models for two important aspects of modern financial markets: Volume Weighted Average Price trading and limit order books. We provide an extensive study of competitive algorithms in these models and relate them to earlier online algorithms for stock trading. 1. INTRODUCTION While popular images of Wall Street often depict swashbuckling traders boldly making large gambles on just their market intuitions, the vast majority of trading is actually considerably more technical and constrained. The constraints often derive from a complex combination of business, regulatory and institutional issues, and result in certain kinds of "standard" trading strategies or criteria that invite algorithmic analysis. One of the most common activities in modern financial markets is known as Volume Weighted Average Price, or VWAP, trading. Informally, the VWAP of a stock over a specified market period is simply the average price paid per share during that period, so the price of each transaction in the market is weighted by its volume. In VWAP trading, one attempts to buy or sell a fixed number of shares at a price that closely tracks the VWAP. Very large institutional trades constitute one of the main motivations behind VWAP activity. A typical scenario goes as follows. Thus, the fund manager would approach brokerages for help in selling the 1%. The brokerage will typically alleviate the fund manager's problem immediately by simply buying the shares directly from the fund manager, and then selling them off later--but what price should the brokerage pay the fund manager? The usual answer is that the brokerage offers to buy the shares from the fund manager at a per-share price tied to the VWAP over some future period--in our example, the brokerage might offer to buy the 1% at a per-share price of the coming month's VWAP minus 1 cent. The brokerage now has a very clean challenge: by selling the shares themselves over the next month in a way that exactly matches the VWAP, a penny per share is earned in profits. If they can beat the VWAP by a penny, they make two cents per share. Such small-margin, high-volume profits can be extremely lucrative for a large brokerage. The importance of the VWAP has led to many automated VWAP trading algorithms--indeed, every major brokerage has at least one" VWAP box", Figure 1: The table summarizes the results presented in this paper. The rows represent results for either the OWT or VWAP criterion. The columns represent which model we are working in. The entry in the table is the competitive ratio between our algorithm and an optimal algorithm, and the closer the ratio is to 1 the better. The parameter R represents a bound on the maximum to the minimum price fluctuation and the parameter Q represents a bound on the maximum to minimum volume fluctuation in the respective model. (See Section 4 for a description of the Macroscopic Distribution Model.) All the results for the OWT trading criterion (which is a stronger criterion) directly translate to the VWAP criterion. However, in the VWAP setting, considering a restriction on the maximum to the minimum volume fluctuation Q, leads to an additional class of results which depends on Q. and some small companies focus exclusively on proprietary VWAP trading technology. In this paper, we provide the first study of VWAP trading algorithms in an online, competitive ratio setting. 
We first formalize the VWAP trading problem in a basic online model we call the price-volume model, which can be viewed as a generalization of previous theoretical online trading models incorporating market volume information. In this model, we provide VWAP algorithms and competitive ratios, and compare this setting with the one-way trading (OWT) problem studied in [3]. Our most interesting results, however, examine the VWAP trading problem in a new online trading model capturing the important recent phenomenon of limit order books in financial markets. Such data and algorithms are a topic of great current interest on Wall Street [4]. We thus introduce a new online trading model incorporating limit order books, and examine both the one-way and VWAP trading problems in it. Our results are summarized in Figure 1 (see the caption for a summary).
C-57
Congestion Games with Load-Dependent Failures: Identical Resources
We define a new class of games, congestion games with load-dependent failures (CGLFs), which generalizes the well-known class of congestion games, by incorporating the issue of resource failures into congestion games. In a CGLF, agents share a common set of resources, where each resource has a cost and a probability of failure. Each agent chooses a subset of the resources for the execution of his task, in order to maximize his own utility. The utility of an agent is the difference between his benefit from successful task completion and the sum of the costs over the resources he uses. CGLFs possess two novel features. It is the first model to incorporate failures into congestion settings, which results in a strict generalization of congestion games. In addition, it is the first model to consider load-dependent failures in such framework, where the failure probability of each resource depends on the number of agents selecting this resource. Although, as we show, CGLFs do not admit a potential function, and in general do not have a pure strategy Nash equilibrium, our main theorem proves the existence of a pure strategy Nash equilibrium in every CGLF with identical resources and nondecreasing cost functions.
[ "congest game", "load-depend failur", "load-depend failur", "ident resourc", "failur probabl", "potenti function", "pure strategi nash equilibrium", "nash equilibrium", "nondecreas cost function", "localeffect game", "resourc cost function", "real-valu function", "load-depend resourc failur" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "M", "R", "M", "R" ]
Congestion Games with Load-Dependent Failures: Identical Resources Michal Penn Technion - IIT Haifa, Israel mpenn@ie.technion.ac.il Maria Polukarov Technion - IIT Haifa, Israel pmasha@tx.technion.ac.il Moshe Tennenholtz Technion - IIT Haifa, Israel moshet@ie.technion.ac.il ABSTRACT We define a new class of games, congestion games with loaddependent failures (CGLFs), which generalizes the well-known class of congestion games, by incorporating the issue of resource failures into congestion games. In a CGLF, agents share a common set of resources, where each resource has a cost and a probability of failure. Each agent chooses a subset of the resources for the execution of his task, in order to maximize his own utility. The utility of an agent is the difference between his benefit from successful task completion and the sum of the costs over the resources he uses. CGLFs possess two novel features. It is the first model to incorporate failures into congestion settings, which results in a strict generalization of congestion games. In addition, it is the first model to consider load-dependent failures in such framework, where the failure probability of each resource depends on the number of agents selecting this resource. Although, as we show, CGLFs do not admit a potential function, and in general do not have a pure strategy Nash equilibrium, our main theorem proves the existence of a pure strategy Nash equilibrium in every CGLF with identical resources and nondecreasing cost functions. Categories and Subject Descriptors C.2.4 [Computer-Communication Networks]: Distributed Systems; I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence -multiagent systems General Terms Theory, Economics 1. INTRODUCTION We study the effects of resource failures in congestion settings. This study is motivated by a variety of situations in multi-agent systems with unreliable components, such as machines, computers etc.. We define a model for congestion games with load-dependent failures (CGLFs) which provides simple and natural description of such situations. In this model, we are given a finite set of identical resources (service providers) where each element possesses a failure probability describing the probability of unsuccessful completion of its assigned tasks as a (nondecreasing) function of its congestion. There is a fixed number of agents, each having a task which can be carried out by any of the resources. For reliability reasons, each agent may decide to assign his task, simultaneously, to a number of resources. Thus, the congestion on the resources is not known in advance, but is strategy-dependent. Each resource is associated with a cost, which is a (nonnegative) function of the congestion experienced by this resource. The objective of each agent is to maximize his own utility, which is the difference between his benefit from successful task completion and the sum of the costs over the set of resources he uses. The benefits of the agents from successful completion of their tasks are allowed to vary across the agents. The resource cost function describes the cost suffered by an agent for selecting that resource, as a function of the number of agents who have selected it. Thus, it is natural to assume that these functions are nonnegative. In addition, in many real-life applications of our model the resource cost functions have a special structure. In particular, they can monotonically increase or decrease with the number of the users, depending on the context. 
The former case is motivated by situations where high congestion on a resource causes longer delay in its assigned tasks execution and as a result, the cost of utilizing this resource might be higher. A typical example of such situation is as follows. Assume we need to deliver an important package. Since there is no guarantee that a courier will reach the destination in time, we might send several couriers to deliver the same package. The time required by each courier to deliver the package increases with the congestion on his way. In addition, the payment to a courier is proportional to the time he spends in delivering the package. Thus, the payment to the courier increases when the congestion increases. The latter case (decreasing cost functions) describes situations where a group of agents using a particular resource have an opportunity to share its cost among the group``s members, or, the cost of 210 using a resource decreases with the number of users, according to some marketing policy. Our results We show that CGLFs and, in particular, CGLFs with nondecreasing cost functions, do not admit a potential function. Therefore, the CGLF model can not be reduced to congestion games. Nevertheless, if the failure probabilities are constant (do not depend on the congestion) then a potential function is guaranteed to exist. We show that CGLFs and, in particular, CGLFs with decreasing cost functions, do not possess pure strategy Nash equilibria. However, as we show in our main result, there exists a pure strategy Nash equilibrium in any CGLF with nondecreasing cost functions. Related work Our model extends the well-known class of congestion games [11]. In a congestion game, every agent has to choose from a finite set of resources, where the utility (or cost) of an agent from using a particular resource depends on the number of agents using it, and his total utility (cost) is the sum of the utilities (costs) obtained from the resources he uses. An important property of these games is the existence of pure strategy Nash equilibria. Monderer and Shapley [9] introduced the notions of potential function and potential game and proved that the existence of a potential function implies the existence of a pure strategy Nash equilibrium. They observed that Rosenthal [11] proved his theorem on congestion games by constructing a potential function (hence, every congestion game is a potential game). Moreover, they showed that every finite potential game is isomorphic to a congestion game; hence, the classes of finite potential games and congestion games coincide. Congestion games have been extensively studied and generalized. In particular, Leyton-Brown and Tennenholtz [5] extended the class of congestion games to the class of localeffect games. In a local-effect game, each agent``s payoff is effected not only by the number of agents who have chosen the same resources as he has chosen, but also by the number of agents who have chosen neighboring resources (in a given graph structure). Monderer [8] dealt with another type of generalization of congestion games, in which the resource cost functions are player-specific (PS-congestion games). He defined PS-congestion games of type q (q-congestion games), where q is a positive number, and showed that every game in strategic form is a q-congestion game for some q. Playerspecific resource cost functions were discussed for the first time by Milchtaich [6]. 
He showed that simple and strategysymmetric PS-congestion games are not potential games, but always possess a pure strategy Nash equilibrium. PScongestion games were generalized to weighted congestion games [6] (or, ID-congestion games [7]), in which the resource cost functions are not only player-specific, but also depend on the identity of the users of the resource. Ackermann et al. [1] showed that weighted congestion games admit pure strategy Nash equilibria if the strategy space of each player consists of the bases of a matroid on the set of resources. Much of the work on congestion games has been inspired by the fact that every such game has a pure strategy Nash equilibrium. In particular, Fabrikant et al. [3] studied the computational complexity of finding pure strategy Nash equilibria in congestion games. Intensive study has also been devoted to quantify the inefficiency of equilibria in congestion games. Koutsoupias and Papadimitriou [4] proposed the worst-case ratio of the social welfare achieved by a Nash equilibrium and by a socially optimal strategy profile (dubbed the price of anarchy) as a measure of the performance degradation caused by lack of coordination. Christodoulou and Koutsoupias [2] considered the price of anarchy of pure equilibria in congestion games with linear cost functions. Roughgarden and Tardos [12] used this approach to study the cost of selfish routing in networks with a continuum of users. However, the above settings do not take into consideration the possibility that resources may fail to execute their assigned tasks. In the computer science context of congestion games, where the alternatives of concern are machines, computers, communication lines etc., which are obviously prone to failures, this issue should not be ignored. Penn, Polukarov and Tennenholtz were the first to incorporate the issue of failures into congestion settings [10]. They introduced a class of congestion games with failures (CGFs) and proved that these games, while not being isomorphic to congestion games, always possess Nash equilibria in pure strategies. The CGF-model significantly differs from ours. In a CGF, the authors considered the delay associated with successful task completion, where the delay for an agent is the minimum of the delays of his successful attempts and the aim of each agent is to minimize his expected delay. In contrast with the CGF-model, in our model we consider the total cost of the utilized resources, where each agent wishes to maximize the difference between his benefit from a successful task completion and the sum of his costs over the resources he uses. The above differences imply that CGFs and CGLFs possess different properties. In particular, if in our model the resource failure probabilities were constant and known in advance, then a potential function would exist. This, however, does not hold for CGFs; in CGFs, the failure probabilities are constant but there is no potential function. Furthermore, the procedures proposed by the authors in [10] for the construction of a pure strategy Nash equilibrium are not valid in our model, even in the simple, agent-symmetric case, where all agents have the same benefit from successful completion of their tasks. Our work provides the first model of congestion settings with resource failures, which considers the sum of congestiondependent costs over utilized resources, and therefore, does not extend the CGF-model, but rather generalizes the classic model of congestion games. 
Moreover, it is the first model to consider load-dependent failures in the above context. Organization The rest of the paper is organized as follows. In Section 2 we define our model. In Section 3 we present our results. In 3.1 we show that CGLFs, in general, do not have pure strategy Nash equilibria. In 3.2 we focus on CGLFs with nondecreasing cost functions (nondecreasing CGLFs). We show that these games do not admit a potential function. However, in our main result we show the existence of pure strategy Nash equilibria in nondecreasing CGLFs. Section 4 is devoted to a short discussion. Many of the proofs are omitted from this conference version of the paper, and will appear in the full version. 2. THE MODEL The scenarios considered in this work consist of a finite set of agents where each agent has a task that can be carried out by any element of a set of identical resources (service providers). The agents simultaneously choose a subset of the resources in order to perform their tasks, and their aim is to maximize their own expected payoff, as described in the sequel. Let N be a set of n agents (n ∈ ℕ), and let M be a set of m resources (m ∈ ℕ). Agent i ∈ N chooses a strategy σi ∈ Σi which is a (potentially empty) subset of the resources. That is, Σi is the power set of the set of resources: Σi = P(M). Given a subset S ⊆ N of the agents, the set of strategy combinations of the members of S is denoted by ΣS = ×_{i∈S} Σi, and the set of strategy combinations of the complement subset of agents is denoted by Σ−S (Σ−S = Σ_{N\S} = ×_{i∈N\S} Σi). The set of pure strategy profiles of all the agents is denoted by Σ (Σ = ΣN). Each resource is associated with a cost, c(·), and a failure probability, f(·), each of which depends on the number of agents who use this resource. We assume that the failure probabilities of the resources are independent. Let σ = (σ1, ..., σn) ∈ Σ be a pure strategy profile. The (m-dimensional) congestion vector that corresponds to σ is h^σ = (h^σ_e)_{e∈M}, where h^σ_e = |{i ∈ N : e ∈ σi}|. The failure probability of a resource e is a monotone nondecreasing function f : {1, ..., n} → [0, 1) of the congestion experienced by e. The cost of utilizing resource e is a function c : {1, ..., n} → ℝ+ of the congestion experienced by e. The outcome for agent i ∈ N is denoted by xi ∈ {S, F}, where S and F, respectively, indicate whether the task execution succeeded or failed. We say that the execution of agent i's task succeeds if the task of agent i is successfully completed by at least one of the resources chosen by him. The benefit of agent i from his outcome xi is denoted by Vi(xi), where Vi(S) = vi, a given (nonnegative) value, and Vi(F) = 0. The utility of agent i from strategy profile σ and his outcome xi, ui(σ, xi), is the difference between his benefit from the outcome (Vi(xi)) and the sum of the costs of the resources he has used: ui(σ, xi) = Vi(xi) − Σ_{e∈σi} c(h^σ_e). The expected utility of agent i from strategy profile σ, Ui(σ), is, therefore: Ui(σ) = (1 − Π_{e∈σi} f(h^σ_e)) · vi − Σ_{e∈σi} c(h^σ_e), where 1 − Π_{e∈σi} f(h^σ_e) denotes the probability of successful completion of agent i's task. We use the convention that Π_{e∈∅} f(h^σ_e) = 1. Hence, if agent i chooses an empty set σi = ∅ (does not assign his task to any resource), then his expected utility, Ui(∅, σ−i), equals zero. 3. PURE STRATEGY NASH EQUILIBRIA IN CGLFS In this section we present our results on CGLFs. We investigate the property of the (non-)existence of pure strategy Nash equilibria in these games.
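Before turning to those results, the expected-utility expression just defined is easy to evaluate directly. The short Python sketch below does so; the instance at the bottom is invented for illustration, and f and c are just arbitrary nondecreasing functions of the congestion.

# Sketch of the expected utility in a CGLF with identical resources:
# U_i(sigma) = (1 - prod_{e in sigma_i} f(h_e)) * v_i - sum_{e in sigma_i} c(h_e).
from math import prod

def congestion(profile, resources):
    """profile: dict agent -> set of chosen resources."""
    return {e: sum(1 for s in profile.values() if e in s) for e in resources}

def expected_utility(i, profile, v, f, c, resources):
    h = congestion(profile, resources)
    chosen = profile[i]
    if not chosen:                      # empty strategy: expected utility 0
        return 0.0
    success = 1.0 - prod(f(h[e]) for e in chosen)
    return success * v[i] - sum(c(h[e]) for e in chosen)

resources = {"e1", "e2"}
profile = {1: {"e1"}, 2: {"e1", "e2"}, 3: set()}        # illustrative instance
v = {1: 2.0, 2: 5.0, 3: 1.0}
f = lambda k: min(0.9, 0.1 * k)                         # nondecreasing failure prob
c = lambda k: 0.3 * k                                   # nondecreasing cost
for i in profile:
    print(i, round(expected_utility(i, profile, v, f, c, resources), 4))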
We show that this class of games does not, in general, possess pure strategy equilibria. Nevertheless, if the resource cost functions are nondecreasing then such equilibria are guaranteed to exist, despite the non-existence of a potential function. 3.1 Decreasing Cost Functions We start by showing that the class of CGLFs and, in particular, the subclass of CGLFs with decreasing cost functions, does not, in general, possess Nash equilibria in pure strategies. Consider a CGLF with two agents (N = {1, 2}) and two resources (M = {e1, e2}). The cost function of each resource is given by c(x) = 1/x^x, where x ∈ {1, 2} (so c(1) = 1 and c(2) = 0.25), and the failure probabilities are f(1) = 0.01 and f(2) = 0.26. The benefits of the agents from successful task completion are v1 = 1.1 and v2 = 4. Below we present the payoff matrix of the game (rows are agent 1's strategies, columns are agent 2's strategies; each cell lists U1, U2):

σ1 \ σ2:      ∅              {e1}              {e2}              {e1, e2}
∅             0, 0           0, 2.96           0, 2.96           0, 1.9996
{e1}          0.089, 0       0.564, 2.71       0.089, 2.96       0.564, 2.7396
{e2}          0.089, 0       0.089, 2.96       0.564, 2.71       0.564, 2.7396
{e1, e2}      −0.90011, 0    −0.15286, 2.71    −0.15286, 2.71    0.52564, 3.2296

Table 1: Example for non-existence of pure strategy Nash equilibria in CGLFs. It can be easily seen that for every pure strategy profile σ in this game there exist an agent i and a strategy σ'i ∈ Σi such that Ui(σ−i, σ'i) > Ui(σ). That is, every pure strategy profile in this game is not in equilibrium. However, if the cost functions in a given CGLF do not decrease in the number of users, then, as we show in the main result of this paper, a pure strategy Nash equilibrium is guaranteed to exist. 3.2 Nondecreasing Cost Functions This section focuses on the subclass of CGLFs with nondecreasing cost functions (henceforth, nondecreasing CGLFs). We show that nondecreasing CGLFs do not, in general, admit a potential function. Therefore, these games are not congestion games. Nevertheless, we prove that all such games possess pure strategy Nash equilibria. 3.2.1 The (Non-)Existence of a Potential Function Recall that Monderer and Shapley [9] introduced the notions of potential function and potential game, where a potential game is defined to be a game that possesses a potential function. A potential function is a real-valued function over the set of pure strategy profiles, with the property that the gain (or loss) of an agent shifting to another strategy while the other agents' strategies are kept unchanged equals the corresponding increment of the potential function. The authors [9] showed that the classes of finite potential games and congestion games coincide. Here we show that the class of CGLFs and, in particular, the subclass of nondecreasing CGLFs, does not admit a potential function, and therefore is not included in the class of congestion games. However, for the special case of constant failure probabilities, a potential function is guaranteed to exist. To prove these statements we use the following characterization of potential games [9]. A path in Σ is a sequence τ = (σ^0 → σ^1 → · · ·) such that for every k ≥ 1 there exists a unique agent, say agent i, such that σ^k = (σ^{k−1}_{−i}, σ'i) for some σ'i ≠ σ^{k−1}_i in Σi. A finite path τ = (σ^0 → σ^1 → · · · → σ^K) is closed if σ^0 = σ^K. It is a simple closed path if in addition σ^l ≠ σ^k for every 0 ≤ l ≠ k ≤ K − 1. The length of a simple closed path is defined to be the number of distinct points in it; that is, the length of τ = (σ^0 → σ^1 → · · · → σ^K) is K. Theorem 1.
[9] Let G be a game in strategic form with a vector U = (U1, ..., Un) of utility functions. For a finite path τ = (σ^0 → σ^1 → · · · → σ^K), let U(τ) = Σ_{k=1}^{K} [U_{i_k}(σ^k) − U_{i_k}(σ^{k−1})], where i_k is the unique deviator at step k. Then, G is a potential game if and only if U(τ) = 0 for every simple closed path τ of length 4. Load-Dependent Failures Based on Theorem 1, we present the following counterexample that demonstrates the non-existence of a potential function in CGLFs. We consider the following agent-symmetric game G in which two agents (N = {1, 2}) wish to assign a task to two resources (M = {e1, e2}). The benefit from a successful task completion of each agent equals v, and the failure probability function strictly increases with the congestion. Consider the simple closed path of length 4 which is formed by α = (∅, {e2}), β = ({e1}, {e2}), γ = ({e1}, {e1, e2}), δ = (∅, {e1, e2}):

σ1 \ σ2:   {e2}                                  {e1, e2}
∅          U1 = 0                                U1 = 0
           U2 = (1 − f(1)) v − c(1)              U2 = (1 − f(1)^2) v − 2c(1)
{e1}       U1 = (1 − f(1)) v − c(1)              U1 = (1 − f(2)) v − c(2)
           U2 = (1 − f(1)) v − c(1)              U2 = (1 − f(1)f(2)) v − c(1) − c(2)

Table 2: Example for non-existence of potentials in CGLFs. Therefore, U1(α) − U1(β) + U2(β) − U2(γ) + U1(γ) − U1(δ) + U2(δ) − U2(α) = v (1 − f(1)) (f(1) − f(2)) ≠ 0. Thus, by Theorem 1, nondecreasing CGLFs do not admit potentials. As a result, they are not congestion games. However, as presented in the next section, the special case in which the failure probabilities are constant always possesses a potential function. Constant Failure Probabilities We show below that CGLFs with constant failure probabilities always possess a potential function. This follows from the fact that the expected benefit (revenue) of each agent in this case does not depend on the choices of the other agents. In addition, for each agent, the sum of the costs over his chosen subset of resources equals the payoff of an agent choosing the same strategy in the corresponding congestion game. Assume we are given a game G with constant failure probabilities. Let τ = (α → β → γ → δ → α) be an arbitrary simple closed path of length 4. Let i and j denote the active agents (deviators) in τ and z ∈ Σ−{i,j} be a fixed strategy profile of the other agents. Let α = (xi, xj, z), β = (yi, xj, z), γ = (yi, yj, z), δ = (xi, yj, z), where xi, yi ∈ Σi and xj, yj ∈ Σj. Then, U(τ) = Ui(xi, xj, z) − Ui(yi, xj, z) + Uj(yi, xj, z) − Uj(yi, yj, z) + Ui(yi, yj, z) − Ui(xi, yj, z) + Uj(xi, yj, z) − Uj(xi, xj, z) = (1 − f^{|xi|}) vi − Σ_{e∈xi} c(h^{(xi,xj,z)}_e) − ... − (1 − f^{|xj|}) vj + Σ_{e∈xj} c(h^{(xi,xj,z)}_e) = [(1 − f^{|xi|}) vi − ... − (1 − f^{|xj|}) vj] − [Σ_{e∈xi} c(h^{(xi,xj,z)}_e) − ... − Σ_{e∈xj} c(h^{(xi,xj,z)}_e)]. Notice that (1 − f^{|xi|}) vi − ... − (1 − f^{|xj|}) vj = 0, as a telescoping sum. The remaining sum equals 0, by applying Theorem 1 to congestion games, which are known to possess a potential function. Thus, by Theorem 1, G is a potential game. We note that the above result holds also for the more general settings with non-identical resources (having different failure probabilities and cost functions) and general cost functions (not necessarily monotone and/or nonnegative). 3.2.2 The Existence of a Pure Strategy Nash Equilibrium In the previous section, we have shown that CGLFs and, in particular, nondecreasing CGLFs, do not admit a potential function, but this fact, in general, does not contradict the existence of an equilibrium in pure strategies.
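The non-existence claim of Section 3.1 can also be checked mechanically. The following Python sketch brute-forces the Table 1 game (hard-coding c(1) = 1 and c(2) = 0.25 from c(x) = 1/x^x, together with the f and v values of that example) and confirms that every one of the 16 pure profiles admits a profitable unilateral deviation, i.e., that the game has no pure strategy Nash equilibrium.

# Sketch: brute-force check that the Table 1 CGLF has no pure Nash equilibrium.
from itertools import chain, combinations
from math import prod

resources = ("e1", "e2")
strategies = [frozenset(s) for s in chain.from_iterable(
    combinations(resources, r) for r in range(len(resources) + 1))]
f = {1: 0.01, 2: 0.26}          # failure probability by congestion
c = {1: 1.0, 2: 0.25}           # cost by congestion, c(x) = 1 / x**x
v = (1.1, 4.0)                  # benefits of agents 0 and 1

def utility(i, profile):
    h = {e: sum(1 for s in profile if e in s) for e in resources}
    mine = profile[i]
    if not mine:
        return 0.0
    return (1 - prod(f[h[e]] for e in mine)) * v[i] - sum(c[h[e]] for e in mine)

equilibria = []
for s0 in strategies:
    for s1 in strategies:
        prof = (s0, s1)
        stable = all(
            utility(i, prof) >= utility(i, tuple(dev if j == i else prof[j] for j in (0, 1)))
            for i in (0, 1) for dev in strategies)
        if stable:
            equilibria.append(prof)
print("pure Nash equilibria found:", equilibria)    # expected: []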
In this section, we present and prove the main result of this paper (Theorem 2) which shows the existence of pure strategy Nash equilibria in nondecreasing CGLFs. Theorem 2. Every nondecreasing CGLF possesses a Nash equilibrium in pure strategies. The proof of Theorem 2 is based on Lemmas 4, 7 and 8, which are presented in the sequel. We start with some definitions and observations that are needed for their proofs. In particular, we present the notions of A-, D- and S-stability and show that a strategy profile is in equilibrium if and only if it is A-, D- and S-stable. Furthermore, we prove the existence of such a profile in any given nondecreasing CGLF. Definition 3. For any strategy profile σ ∈ Σ and for any agent i ∈ N, the operation of adding precisely one resource to his strategy, σi, is called an A-move of i from σ. Similarly, the operation of dropping a single resource is called a D-move, and the operation of switching one resource with another is called an S-move. Clearly, if agent i deviates from strategy σi to strategy σ'i by applying a single A-, D- or S-move, then max{|σi \ σ'i|, |σ'i \ σi|} = 1, and vice versa, if max{|σi \ σ'i|, |σ'i \ σi|} = 1 then σ'i is obtained from σi by applying exactly one such move. For simplicity of exposition, for any pair of sets A and B, let µ(A, B) = max{|A \ B|, |B \ A|}. The following lemma implies that any strategy profile, in which no agent wishes unilaterally to apply a single A-, D- or S-move, is a Nash equilibrium. More precisely, we show that if there exists an agent who benefits from a unilateral deviation from a given strategy profile, then there exists a single A-, D- or S-move which is profitable for him as well. Lemma 4. Given a nondecreasing CGLF, let σ ∈ Σ be a strategy profile which is not in equilibrium, and let i ∈ N such that ∃xi ∈ Σi for which Ui(σ−i, xi) > Ui(σ). Then, there exists yi ∈ Σi such that Ui(σ−i, yi) > Ui(σ) and µ(yi, σi) = 1. Therefore, to prove the existence of a pure strategy Nash equilibrium, it suffices to look for a strategy profile for which no agent wishes to unilaterally apply an A-, D- or S-move. Based on the above observation, we define A-, D- and S-stability as follows. Definition 5. A strategy profile σ is said to be A-stable (resp., D-stable, S-stable) if there are no agents with a profitable A- (resp., D-, S-) move from σ. Similarly, we define a strategy profile σ to be DS-stable if there are no agents with a profitable D- or S-move from σ. The set of all DS-stable strategy profiles is denoted by Σ0. Obviously, the profile (∅, ..., ∅) is DS-stable, so Σ0 is not empty. Our goal is to find a DS-stable profile for which no profitable A-move exists, implying this profile is in equilibrium. To describe how we achieve this, we define the notions of light (heavy) resources and (nearly-) even strategy profiles, which play a central role in the proof of our main result. Definition 6. Given a strategy profile σ, resource e is called σ-light if h^σ_e ∈ arg min_{e'∈M} h^σ_{e'} and σ-heavy otherwise. A strategy profile σ with no heavy resources will be termed even. A strategy profile σ satisfying |h^σ_e − h^σ_{e'}| ≤ 1 for all e, e' ∈ M will be termed nearly-even. Obviously, every even strategy profile is nearly-even. In addition, in a nearly-even strategy profile, all heavy resources (if they exist) have the same congestion. We also observe that the profile (∅, ..., ∅) is even (and DS-stable), so the subset of even, DS-stable strategy profiles is not empty.
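These single-move notions are straightforward to check on small instances, and by Lemma 4 a profile that admits no profitable A-, D- or S-move is a Nash equilibrium. The Python sketch below enumerates the three kinds of moves for each agent; the instance and the functions f and c are invented for illustration.

# Sketch: check A-, D- and S-stability (Definitions 3 and 5) by enumerating
# all single-resource additions, drops and swaps for every agent.
from math import prod

def U(i, profile, v, f, c):
    h = {}
    for s in profile.values():
        for e in s:
            h[e] = h.get(e, 0) + 1
    mine = profile[i]
    if not mine:
        return 0.0
    return (1 - prod(f(h[e]) for e in mine)) * v[i] - sum(c(h[e]) for e in mine)

def single_moves(strategy, resources):
    moves = []
    for a in resources - strategy:
        moves.append(strategy | {a})                    # A-move
    for d in strategy:
        moves.append(strategy - {d})                    # D-move
        for a in resources - strategy:
            moves.append((strategy - {d}) | {a})        # S-move
    return moves

def is_ads_stable(profile, resources, v, f, c):
    for i, s in profile.items():
        base = U(i, profile, v, f, c)
        for s_new in single_moves(s, resources):
            if U(i, {**profile, i: s_new}, v, f, c) > base + 1e-12:
                return False
    return True

resources = frozenset({"e1", "e2", "e3"})
profile = {1: frozenset({"e1"}), 2: frozenset({"e2"}), 3: frozenset({"e3"})}
v = {1: 3.0, 2: 3.0, 3: 3.0}
f = lambda k: min(0.95, 0.2 * k)
c = lambda k: 0.5 * k
print("A-, D- and S-stable:", is_ads_stable(profile, resources, v, f, c))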
Based on the above observations, we define two types of A-moves that are used in the sequel. Suppose σ ∈ Σ0 is a nearly-even DS-stable strategy profile. For each agent i ∈ N, let ei ∈ arg min_{e∈M\σi} h^σ_e. That is, ei is a lightest resource not chosen previously by i. Then, if there exists any profitable A-move for agent i, then the A-move with ei is profitable for i as well. This is since if agent i wishes to unilaterally add a resource, say a ∈ M \ σi, then Ui(σ−i, σi ∪ {a}) > Ui(σ). Hence, (1 − Π_{e∈σi} f(h^σ_e) · f(h^σ_a + 1)) vi − Σ_{e∈σi} c(h^σ_e) − c(h^σ_a + 1) > (1 − Π_{e∈σi} f(h^σ_e)) vi − Σ_{e∈σi} c(h^σ_e) ⇒ vi Π_{e∈σi} f(h^σ_e) > c(h^σ_a + 1)/(1 − f(h^σ_a + 1)) ≥ c(h^σ_{ei} + 1)/(1 − f(h^σ_{ei} + 1)) ⇒ Ui(σ−i, σi ∪ {ei}) > Ui(σ). If no agent wishes to change his strategy in this manner, i.e. Ui(σ) ≥ Ui(σ−i, σi ∪ {ei}) for all i ∈ N, then by the above Ui(σ) ≥ Ui(σ−i, σi ∪ {a}) for all i ∈ N and a ∈ M \ σi. Hence, σ is A-stable and by Lemma 4, σ is a Nash equilibrium strategy profile. Otherwise, let N(σ) denote the subset of all agents for which there exists ei such that a unilateral addition of ei is profitable. Let a ∈ arg min_{ei : i∈N(σ)} h^σ_{ei}. Let also i ∈ N(σ) be the agent for which ei = a. If a is σ-light, then let σ' = (σ−i, σi ∪ {a}). In this case we say that σ' is obtained from σ by a one-step addition of resource a, and a is called an added resource. If a is σ-heavy then there exists a σ-light resource b and an agent j such that a ∈ σj and b ∉ σj. Then let σ' = (σ−{i,j}, σi ∪ {a}, (σj \ {a}) ∪ {b}). In this case we say that σ' is obtained from σ by a two-step addition of resource b, and b is called an added resource. We notice that, in both cases, the congestion of each resource in σ' is the same as in σ, except for the added resource, for which its congestion in σ' increased by 1. Thus, since the added resource is σ-light and σ is nearly-even, σ' is nearly-even. Then, the following lemma implies the S-stability of σ'. Lemma 7. In a nondecreasing CGLF, every nearly-even strategy profile is S-stable. Coupled with Lemma 7, the following lemma shows that if σ is a nearly-even and DS-stable strategy profile, and σ' is obtained from σ by a one- or two-step addition of resource a, then the only potential cause for a non-DS-stability of σ' is the existence of an agent k ∈ N with σ'k ≠ σk, who wishes to drop the added resource a. Lemma 8. Let σ be a nearly-even DS-stable strategy profile of a given nondecreasing CGLF, and let σ' be obtained from σ by a one- or two-step addition of resource a. Then, there are no profitable D-moves for any agent i ∈ N with σ'i = σi. For an agent i ∈ N with σ'i ≠ σi, the only possible profitable D-move (if it exists) is to drop the added resource a. We are now ready to prove our main result - Theorem 2. Let us briefly describe the idea behind the proof. By Lemma 4, it suffices to prove the existence of a strategy profile which is A-, D- and S-stable. We start with the set of even and DS-stable strategy profiles, which is obviously not empty. In this set, we consider the subset of strategy profiles with maximum congestion and maximum sum of the agents' utilities. Assuming on the contrary that every DS-stable profile admits a profitable A-move, we show the existence of a strategy profile x in the above subset, such that a (one-step) addition of some resource a to x results in a DS-stable strategy. Then by a finite series of one- or two-step addition operations we obtain an even, DS-stable strategy profile with strictly higher congestion on the resources, contradicting the choice of x.
The full proof is presented below. Proof of Theorem 2: Let Σ1 ⊆ Σ0 be the subset of all even, DS-stable strategy profiles. Observe that since (∅, ... , ∅) is an even, DS-stable strategy profile, then Σ1 is not empty, and minσ∈Σ0 ˛ ˛{e ∈ M : e is σ−heavy} ˛ ˛ = 0. Then, Σ1 could also be defined as Σ1 = arg min σ∈Σ0 ˛ ˛{e ∈ M : e is σ−heavy} ˛ ˛ , with hσ being the common congestion. Now, let Σ2 ⊆ Σ1 be the subset of Σ1 consisting of all those profiles with maximum congestion on the resources. That is, Σ2 = arg max σ∈Σ1 hσ . Let UN (σ) = P i∈N Ui(σ) denotes the group utility of the agents, and let Σ3 ⊆ Σ2 be the subset of all profiles in Σ2 with maximum group utility. That is, Σ3 = arg max σ∈Σ2 X i∈N Ui(σ) = arg max σ∈Σ2 UN (σ) . Consider first the simple case in which maxσ∈Σ1 hσ = 0. Obviously, in this case, Σ1 = Σ2 = Σ3 = {x = (∅, ... , ∅)}. We show below that by performing a finite series of (onestep) addition operations on x, we obtain an even, DSstable strategy profile y with higher congestion, that is with hy > hx = 0, in contradiction to x ∈ Σ2 . Let z ∈ Σ0 be a nearly-even (not necessarily even) DS-stable profile such that mine∈M hz e = 0, and note that the profile x satisfies the above conditions. Let N(z) be the subset of agents for which a profitable A-move exists, and let i ∈ N(z). Obviously, there exists a z-light resource a such that Ui(z−i, zi ∪ {a}) > Ui(z) (otherwise, arg mine∈M hz e ⊆ zi, in contradiction to mine∈M hz e = 0). Consider the strategy profile z = (z−i, zi ∪ {a}) which is obtained from z by a (one-step) addition of resource a by agent i. Since z is nearly-even and a is z-light, we can easily see that z is nearly-even. Then, Lemma 7 implies that z is S-stable. Since i is the only agent using resource a in z , by Lemma 8, no profitable D-moves are available. Thus, z is a DS-stable strategy profile. Therefore, since the number of resources is finite, there is a finite series of one-step addition operations on x = (∅, ... , ∅) that leads to strategy profile y ∈ Σ1 with hy = 1 > 0 = hx , in contradiction to x ∈ Σ2 . We turn now to consider the other case where maxσ∈Σ1 hσ ≥ 1. In this case we select from Σ3 a strategy profile x, as described below, and use it to contradict our contrary assumption. Specifically, we show that there exists x ∈ Σ3 such that for all j ∈ N, vjf(hx )|xj |−1 ≥ c(hx + 1) 1 − f(hx + 1) . (1) Let x be a strategy profile which is obtained from x by a (one-step) addition of some resource a ∈ M by some agent i ∈ N(x) (note that x is nearly-even). Then, (1) is derived from and essentially equivalent to the inequality Uj(x ) ≥ Uj(x−j, xj {a}), for all a ∈ xj. That is, after performing an A-move with a by i, there is no profitable D-move with a. Then, by Lemmas 7 and 8, x is DS-stable. Following the same lines as above, we construct a procedure that initializes at x and achieves a strategy profile y ∈ Σ1 with hy > hx , in contradiction to x ∈ Σ2 . Now, let us confirm the existence of x ∈ Σ3 that satisfies (1). Let x ∈ Σ3 and let M(x) be the subset of all resources for which there exists a profitable (one-step) addition. First, we show that (1) holds for all j ∈ N such that xj ∩M(x) = ∅, that is, for all those agents with one of their resources being desired by another agent. Let a ∈ M(x), and let x be the strategy profile that is obtained from x by the (one-step) addition of a by agent i. Assume on the contrary that there is an agent j with a ∈ xj such that vjf(hx )|xj |−1 < c(hx + 1) 1 − f(hx + 1) . Let x = (x−j, xj {a}). 
Below we demonstrate that x is a DS-stable strategy profile and, since x and x correspond to the same congestion vector, we conclude that x lies in Σ2 . In addition, we show that UN (x ) > UN (x), contradicting the fact that x ∈ Σ3 . To show that x ∈ Σ0 we note that x is an even strategy profile, and thus no S-moves may be performed for x . In addition, since hx = hx and x ∈ Σ0 , there are no profitable D-moves for any agent k = i, j. It remains to show that there are no profitable D-moves for agents i and j as well. 215 Since Ui(x ) > Ui(x), we get vif(hx )|xi| > c(hx + 1) 1 − f(hx + 1) ⇒ vif(hx )|xi |−1 = vif(hx )|xi| > c(hx + 1) 1 − f(hx + 1) > c(hx ) 1 − f(hx) = c(hx ) 1 − f(hx ) , which implies Ui(x ) > Ui(x−i, xi {b}), for all b ∈ xi . Thus, there are no profitable D-moves for agent i. By the DS-stability of x, for agent j and for all b ∈ xj, we have Uj(x) ≥ Uj(x−j, xj {b}) ⇒ vjf(hx )|xj |−1 ≥ c(hx ) 1 − f(hx) . Then, vjf(hx )|xj |−1 > vjf(hx )|xj | = vjf(hx )|xj |−1 ≥ c(hx ) 1 − f(hx) = c(hx ) 1 − f(hx ) ⇒ Uj(x ) > Uj(x−j, xj {b}), for all b ∈ xi. Therefore, x is DS-stable and lies in Σ2 . To show that UN (x ), the group utility of x , satisfies UN (x ) > UN (x), we note that hx = hx , and thus Uk(x ) = Uk(x), for all k ∈ N {i, j}. Therefore, we have to show that Ui(x ) + Uj(x ) > Ui(x) + Uj(x), or Ui(x ) − Ui(x) > Uj(x) − Uj(x ). Observe that Ui(x ) > Ui(x) ⇒ vif(hx )|xi| > c(hx + 1) 1 − f(hx + 1) and Uj(x ) < Uj(x ) ⇒ vjf(hx )|xj |−1 < c(hx + 1) 1 − f(hx + 1) , which yields vif(hx )|xi| > vjf(hx )|xj |−1 . Thus, Ui(x ) − Ui(x) = 1 − f(hx )|xi|+1 vi − (|xi| + 1) c(hx ) − h 1 − f(hx )|xi| vi − |xi|c(hx ) i = vif(hx )|xi| (1 − f(hx )) − c(hx ) > vjf(hx )|xj |−1 (1 − f(hx )) − c(hx ) = 1 − f(hx )|xj | vj − |xj|c(hx ) − h 1 − f(hx )|xj |−1 vj − (|xi| − 1) c(hx ) i = Uj(x) − Uj(x ) . Therefore, x lies in Σ2 and satisfies UN (x ) > UN (x), in contradiction to x ∈ Σ3 . Hence, if x ∈ Σ3 then (1) holds for all j ∈ N such that xj ∩M(x) = ∅. Now let us see that there exists x ∈ Σ3 such that (1) holds for all the agents. For that, choose an agent i ∈ arg mink∈N vif(hx )|xk| . If there exists a ∈ xi ∩ M(x) then i satisfies (1), implying by the choice of agent i, that the above obviously yields the correctness of (1) for any agent k ∈ N. Otherwise, if no resource in xi lies in M(x), then let a ∈ xi and a ∈ M(x). Since a ∈ xi, a /∈ xi, and hx a = hx a , then there exists agent j such that a ∈ xj and a /∈ xj. One can easily check that the strategy profile x = ` x−{i,j}, (xi {a}) ∪ {a }, (xj {a }) ∪ {a} ´ lies in Σ3 . Thus, x satisfies (1) for agent i, and therefore, for any agent k ∈ N. Now, let x ∈ Σ3 satisfy (1). We show below that by performing a finite series of one- and two-step addition operations on x, we can achieve a strategy profile y that lies in Σ1 , such that hy > hx , in contradiction to x ∈ Σ2 . Let z ∈ Σ0 be a nearly-even (not necessarily even), DS-stable strategy profile, such that vi Y e∈zi {b} f(hz e) ≥ c(hz b + 1) 1 − f(hz b + 1) , (2) for all i ∈ N and for all z-light resource b ∈ zi. We note that for profile x ∈ Σ3 ⊆ Σ1 , with all resources being x-light, conditions (2) and (1) are equivalent. Let z be obtained from z by a one- or two-step addition of a z-light resource a. Obviously, z is nearly-even. In addition, hz e ≥ hz e for all e ∈ M, and mine∈M hz e ≥ mine∈M hz e. To complete the proof we need to show that z is DS-stable, and, in addition, that if mine∈M hz e = mine∈M hz e then z has property (2). 
The DS-stability of z follows directly from Lemmas 7 and 8, and from (2) with respect to z. It remains to prove property (2) for z with mine∈M hz e = mine∈M hz e. Using (2) with respect to z, for any agent k with zk = zk and for any zlight resource b ∈ zk, we get vk Y e∈zk {b} f(hz e ) ≥ vk Y e∈zk {b} f(hz e) ≥ c(hz b + 1) 1 − f(hz b + 1) = c(hz b + 1) 1 − f(hz b + 1) , as required. Now let us consider the rest of the agents. Assume z is obtained by the one-step addition of a by agent i. In this case, i is the only agent with zi = zi. The required property for agent i follows directly from Ui(z ) > Ui(z). In the case of a two-step addition, let z = ` z−{i,j}, zi ∪ {b}, (zj {b}) ∪ {a}), where b is a z-heavy resource. For agent i, from Ui(z−i, zi ∪ {b}) > Ui(z) we get 1 − Y e∈zi f(hz e)f(hz b + 1) ! vi − X e∈zi c(hz e) − c(hz b + 1) > 1 − Y e∈zi f(hz e) ! vi − X e∈zi c(hz e) ⇒ vi Y e∈zi f(hz e) > c(hz b + 1) 1 − f(hz b + 1) , (3) and note that since hz b ≥ hz e for all e ∈ M and, in particular, for all z -light resources, then c(hz b + 1) 1 − f(hz b + 1) ≥ c(hz e + 1) 1 − f(hz e + 1) , (4) for any z -light resource e . 216 Now, since hz e ≥ hz e for all e ∈ M and b is z-heavy, then vi Y e∈zi {e } f(hz e ) ≥ vi Y e∈zi {e } f(hz e) = vi Y e∈(zi∪{b}) {e } f(hz e) ≥ vi Y e∈zi f(hz e) , for any z -light resource e . The above, coupled with (3) and (4), yields the required. For agent j we just use (2) with respect to z and the equality hz b = hz a . For any z -light resource e , vj Y e∈zj {e } f(hz e ) ≥ vi Y e∈zi {e } f(hz e) ≥ c(hz e + 1) 1 − f(hz e + 1) = c(hz e + 1) 1 − f(hz e + 1) . Thus, since the number of resources is finite, there is a finite series of one- and two-step addition operations on x that leads to strategy profile y ∈ Σ1 with hy > hx , in contradiction to x ∈ Σ2 . This completes the proof. 4. DISCUSSION In this paper, we introduce and investigate congestion settings with unreliable resources, in which the probability of a resource``s failure depends on the congestion experienced by this resource. We defined a class of congestion games with load-dependent failures (CGLFs), which generalizes the wellknown class of congestion games. We study the existence of pure strategy Nash equilibria and potential functions in the presented class of games. We show that these games do not, in general, possess pure strategy equilibria. Nevertheless, if the resource cost functions are nondecreasing then such equilibria are guaranteed to exist, despite the non-existence of a potential function. The CGLF-model can be modified to the case where the agents pay only for non-faulty resources they selected. Both the model discussed in this paper and the modified one are reasonable. In the full version we will show that the modified model leads to similar results. In particular, we can show the existence of a pure strategy equilibrium for nondecreasing CGLFs also in the modified model. In future research we plan to consider various extensions of CGLFs. In particular, we plan to consider CGLFs where the resources may have different costs and failure probabilities, as well as CGLFs in which the resource failure probabilities are mutually dependent. In addition, it is of interest to develop an efficient algorithm for the computation of pure strategy Nash equilibrium, as well as discuss the social (in)efficiency of the equilibria. 5. REFERENCES [1] H. Ackermann, H. R¨oglin, and B. V¨ocking. Pure nash equilibria in player-specific and weighted congestion games. In WINE-06, 2006. [2] G. Christodoulou and E. 
Koutsoupias. The price of anarchy of finite congestion games. In Proceedings of the 37th Annual ACM Symposium on Theory of Computing (STOC-05), 2005. [3] A. Fabrikant, C. Papadimitriou, and K. Talwar. The complexity of pure Nash equilibria. In STOC-04, pages 604-612, 2004. [4] E. Koutsoupias and C. Papadimitriou. Worst-case equilibria. In Proceedings of the 16th Annual Symposium on Theoretical Aspects of Computer Science, pages 404-413, 1999. [5] K. Leyton-Brown and M. Tennenholtz. Local-effect games. In IJCAI-03, 2003. [6] I. Milchtaich. Congestion games with player-specific payoff functions. Games and Economic Behavior, 13:111-124, 1996. [7] D. Monderer. Solution-based congestion games. Advances in Mathematical Economics, 8:397-407, 2006. [8] D. Monderer. Multipotential games. In IJCAI-07, 2007. [9] D. Monderer and L. Shapley. Potential games. Games and Economic Behavior, 14:124-143, 1996. [10] M. Penn, M. Polukarov, and M. Tennenholtz. Congestion games with failures. In Proceedings of the 6th ACM Conference on Electronic Commerce (EC-05), pages 259-268, 2005. [11] R. Rosenthal. A class of games possessing pure-strategy Nash equilibria. International Journal of Game Theory, 2:65-67, 1973. [12] T. Roughgarden and E. Tardos. How bad is selfish routing? Journal of the ACM, 49(2):236-259, 2002.
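As an empirical companion to Theorem 2 above, the following Python sketch draws small random CGLFs with identical resources and nondecreasing failure and cost functions, and brute-forces the whole profile space for a pure strategy Nash equilibrium; consistently with the theorem, every trial should report one. The instance generator, sizes and tolerance are invented for illustration only.

# Sketch: random small nondecreasing CGLFs with identical resources always
# turn out to have a pure Nash equilibrium, as Theorem 2 guarantees.
import itertools, random
from math import prod

def run_trial(n_agents=3, m_resources=3, rng=random):
    n, m = n_agents, m_resources
    f_vals = sorted(round(rng.uniform(0.05, 0.9), 2) for _ in range(n))   # nondecreasing f
    c_vals = sorted(round(rng.uniform(0.1, 1.5), 2) for _ in range(n))    # nondecreasing c
    v = [round(rng.uniform(0.5, 4.0), 2) for _ in range(n)]
    strategies = [frozenset(s) for r in range(m + 1)
                  for s in itertools.combinations(range(m), r)]

    def U(i, prof):
        h = {e: sum(1 for s in prof if e in s) for e in range(m)}
        if not prof[i]:
            return 0.0
        return (1 - prod(f_vals[h[e] - 1] for e in prof[i])) * v[i] \
               - sum(c_vals[h[e] - 1] for e in prof[i])

    for prof in itertools.product(strategies, repeat=n):
        if all(U(i, prof) >= U(i, prof[:i] + (dev,) + prof[i + 1:]) - 1e-9
               for i in range(n) for dev in strategies):
            return True               # found a pure Nash equilibrium
    return False

random.seed(0)
print(all(run_trial() for _ in range(20)))    # expect True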
Congestion Games with Load-Dependent Failures: Identical Resources ABSTRACT We define a new class of games, congestion games with loaddependent failures (CGLFs), which generalizes the well-known class of congestion games, by incorporating the issue of resource failures into congestion games. In a CGLF, agents share a common set of resources, where each resource has a cost and a probability of failure. Each agent chooses a subset of the resources for the execution of his task, in order to maximize his own utility. The utility of an agent is the difference between his benefit from successful task completion and the sum of the costs over the resources he uses. CGLFs possess two novel features. It is the first model to incorporate failures into congestion settings, which results in a strict generalization of congestion games. In addition, it is the first model to consider load-dependent failures in such framework, where the failure probability of each resource depends on the number of agents selecting this resource. Although, as we show, CGLFs do not admit a potential function, and in general do not have a pure strategy Nash equilibrium, our main theorem proves the existence of a pure strategy Nash equilibrium in every CGLF with identical resources and nondecreasing cost functions. 1. INTRODUCTION We study the effects of resource failures in congestion settings. This study is motivated by a variety of situations in multi-agent systems with unreliable components, such as machines, computers etc. . We define a model for congestion games with load-dependent failures (CGLFs) which provides simple and natural description of such situations. In this model, we are given a finite set of identical resources (service providers) where each element possesses a failure probability describing the probability of unsuccessful completion of its assigned tasks as a (nondecreasing) function of its congestion. There is a fixed number of agents, each having a task which can be carried out by any of the resources. For reliability reasons, each agent may decide to assign his task, simultaneously, to a number of resources. Thus, the congestion on the resources is not known in advance, but is strategy-dependent. Each resource is associated with a cost, which is a (nonnegative) function of the congestion experienced by this resource. The objective of each agent is to maximize his own utility, which is the difference between his benefit from successful task completion and the sum of the costs over the set of resources he uses. The benefits of the agents from successful completion of their tasks are allowed to vary across the agents. The resource cost function describes the cost suffered by an agent for selecting that resource, as a function of the number of agents who have selected it. Thus, it is natural to assume that these functions are nonnegative. In addition, in many real-life applications of our model the resource cost functions have a special structure. In particular, they can monotonically increase or decrease with the number of the users, depending on the context. The former case is motivated by situations where high congestion on a resource causes longer delay in its assigned tasks execution and as a result, the cost of utilizing this resource might be higher. A typical example of such situation is as follows. Assume we need to deliver an important package. Since there is no guarantee that a courier will reach the destination in time, we might send several couriers to deliver the same package. 
The time required by each courier to deliver the package increases with the congestion on his way. In addition, the payment to a courier is proportional to the time he spends in delivering the package. Thus, the payment to the courier increases when the congestion increases. The latter case (decreasing cost functions) describes situations where a group of agents using a particular resource have an opportunity to share its cost among the group's members, or, the cost of using a resource decreases with the number of users, according to some marketing policy. Our results. We show that CGLFs and, in particular, CGLFs with nondecreasing cost functions, do not admit a potential function. Therefore, the CGLF model cannot be reduced to congestion games. Nevertheless, if the failure probabilities are constant (do not depend on the congestion) then a potential function is guaranteed to exist. . We show that CGLFs and, in particular, CGLFs with decreasing cost functions, do not possess pure strategy Nash equilibria. However, as we show in our main result, there exists a pure strategy Nash equilibrium in any CGLF with nondecreasing cost functions. Related work Our model extends the well-known class of congestion games [11]. In a congestion game, every agent has to choose from a finite set of resources, where the utility (or cost) of an agent from using a particular resource depends on the number of agents using it, and his total utility (cost) is the sum of the utilities (costs) obtained from the resources he uses. An important property of these games is the existence of pure strategy Nash equilibria. Monderer and Shapley [9] introduced the notions of potential function and potential game and proved that the existence of a potential function implies the existence of a pure strategy Nash equilibrium. They observed that Rosenthal [11] proved his theorem on congestion games by constructing a potential function (hence, every congestion game is a potential game). Moreover, they showed that every finite potential game is isomorphic to a congestion game; hence, the classes of finite potential games and congestion games coincide. Congestion games have been extensively studied and generalized. In particular, Leyton-Brown and Tennenholtz [5] extended the class of congestion games to the class of localeffect games. In a local-effect game, each agent's payoff is effected not only by the number of agents who have chosen the same resources as he has chosen, but also by the number of agents who have chosen neighboring resources (in a given graph structure). Monderer [8] dealt with another type of generalization of congestion games, in which the resource cost functions are player-specific (PS-congestion games). He defined PS-congestion games of type q (q-congestion games), where q is a positive number, and showed that every game in strategic form is a q-congestion game for some q. Playerspecific resource cost functions were discussed for the first time by Milchtaich [6]. He showed that simple and strategysymmetric PS-congestion games are not potential games, but always possess a pure strategy Nash equilibrium. PScongestion games were generalized to weighted congestion games [6] (or, ID-congestion games [7]), in which the resource cost functions are not only player-specific, but also depend on the identity of the users of the resource. Ackermann et al. 
[1] showed that weighted congestion games admit pure strategy Nash equilibria if the strategy space of each player consists of the bases of a matroid on the set of resources. Much of the work on congestion games has been inspired by the fact that every such game has a pure strategy Nash equilibrium. In particular, Fabrikant et al. [3] studied the computational complexity of finding pure strategy Nash equilibria in congestion games. Intensive study has also been devoted to quantify the inefficiency of equilibria in congestion games. Koutsoupias and Papadimitriou [4] proposed the worst-case ratio of the social welfare achieved by a Nash equilibrium and by a socially optimal strategy profile (dubbed the price of anarchy) as a measure of the performance degradation caused by lack of coordination. Christodoulou and Koutsoupias [2] considered the price of anarchy of pure equilibria in congestion games with linear cost functions. Roughgarden and Tardos [12] used this approach to study the cost of selfish routing in networks with a continuum of users. However, the above settings do not take into consideration the possibility that resources may fail to execute their assigned tasks. In the computer science context of congestion games, where the alternatives of concern are machines, computers, communication lines etc., which are obviously prone to failures, this issue should not be ignored. Penn, Polukarov and Tennenholtz were the first to incorporate the issue of failures into congestion settings [10]. They introduced a class of congestion games with failures (CGFs) and proved that these games, while not being isomorphic to congestion games, always possess Nash equilibria in pure strategies. The CGF-model significantly differs from ours. In a CGF, the authors considered the delay associated with successful task completion, where the delay for an agent is the minimum of the delays of his successful attempts and the aim of each agent is to minimize his expected delay. In contrast with the CGF-model, in our model we consider the total cost of the utilized resources, where each agent wishes to maximize the difference between his benefit from a successful task completion and the sum of his costs over the resources he uses. The above differences imply that CGFs and CGLFs possess different properties. In particular, if in our model the resource failure probabilities were constant and known in advance, then a potential function would exist. This, however, does not hold for CGFs; in CGFs, the failure probabilities are constant but there is no potential function. Furthermore, the procedures proposed by the authors in [10] for the construction of a pure strategy Nash equilibrium are not valid in our model, even in the simple, agent-symmetric case, where all agents have the same benefit from successful completion of their tasks. Our work provides the first model of congestion settings with resource failures, which considers the sum of congestiondependent costs over utilized resources, and therefore, does not extend the CGF-model, but rather generalizes the classic model of congestion games. Moreover, it is the first model to consider load-dependent failures in the above context. Organization The rest of the paper is organized as follows. In Section 2 we define our model. In Section 3 we present our results. In 3.1 we show that CGLFs, in general, do not have pure strategy Nash equilibria. In 3.2 we focus on CGLFs with nondecreasing cost functions (nondecreasing CGLFs). 
We show that these games do not admit a potential function. However, in our main result we show the existence of pure strategy Nash equilibria in nondecreasing CGLFs. Section 4 is devoted to a short discussion. Many of the proofs are omitted from this conference version of the paper, and will appear in the full version. 2. THE MODEL The scenarios considered in this work consist of a finite set of agents where each agent has a task that can be carried out by any element of a set of identical resources (service providers). The agents simultaneously choose a subset of the resources in order to perform their tasks, and their aim is to maximize their own expected payoff, as described in the sequel. Let N be a set of n agents (n ∈ ℕ), and let M be a set of m resources (m ∈ ℕ). Agent i ∈ N chooses a strategy σi ∈ Σi which is a (potentially empty) subset of the resources. That is, Σi is the power set of the set of resources: Σi = P(M). Given a subset S ⊆ N of the agents, the set of strategy combinations of the members of S is denoted by ΣS = ×i∈S Σi, and the set of strategy combinations of the complement subset of agents is denoted by Σ−S (Σ−S = ΣN\S = ×i∈N\S Σi). The set of pure strategy profiles of all the agents is denoted by Σ (Σ = ΣN). Each resource is associated with a cost, c(·), and a failure probability, f(·), each of which depends on the number of agents who use this resource. We assume that the failure probabilities of the resources are independent. Let σ = (σ1,..., σn) ∈ Σ be a pure strategy profile. The (m-dimensional) congestion vector that corresponds to σ is hσ = (hσe)e∈M, where hσe = |{i ∈ N: e ∈ σi}|. The failure probability of a resource e is a monotone nondecreasing function f: {1,..., n} → [0, 1) of the congestion experienced by e. The cost of utilizing resource e is a (nonnegative) function c: {1,..., n} → ℝ+ of the congestion experienced by e. The outcome for agent i ∈ N is denoted by xi ∈ {S, F}, where S and F, respectively, indicate whether the task execution succeeded or failed. We say that the execution of agent i's task succeeds if the task of agent i is successfully completed by at least one of the resources chosen by him. The benefit of agent i from his outcome xi is denoted by Vi(xi), where Vi(S) = vi, a given (nonnegative) value, and Vi(F) = 0. The utility of agent i from strategy profile σ and his outcome xi, ui(σ, xi), is the difference between his benefit from the outcome (Vi(xi)) and the sum of the costs of the resources he has used: ui(σ, xi) = Vi(xi) − Σe∈σi c(hσe). The expected utility of agent i from strategy profile σ is therefore Ui(σ) = (1 − ∏e∈σi f(hσe)) vi − Σe∈σi c(hσe), where 1 − ∏e∈σi f(hσe) denotes the probability of successful completion of agent i's task. We use the convention that ∏e∈∅ f(hσe) = 1. Hence, if agent i chooses an empty set σi = ∅ (does not assign his task to any resource), then his expected utility, Ui(σ−i, ∅), equals zero. 3. PURE STRATEGY NASH EQUILIBRIA IN CGLFS In this section we present our results on CGLFs. We investigate the property of the (non-)existence of pure strategy Nash equilibria in these games. We show that this class of games does not, in general, possess pure strategy equilibria. Nevertheless, if the resource cost functions are nondecreasing then such equilibria are guaranteed to exist, despite the non-existence of a potential function. 3.1 Decreasing Cost Functions We start by showing that the class of CGLFs and, in particular, the subclass of CGLFs with decreasing cost functions, does not, in general, possess Nash equilibria in pure strategies. Consider a CGLF with two agents (N = {1, 2}) and two resources (M = {e1, e2}).
The cost function of each resource is given by c(x) = x^(−1), where x ∈ {1, 2}, and the failure probabilities are f(1) = 0.01 and f(2) = 0.26. The benefits of the agents from successful task completion are v1 = 1.1 and v2 = 4. Below we present the payoff matrix of the game. Table 1: Example for non-existence of pure strategy Nash equilibria in CGLFs. It can be easily seen that for every pure strategy profile σ in this game there exist an agent i and a strategy σ'i ∈ Σi such that Ui(σ−i, σ'i) > Ui(σ). That is, every pure strategy profile in this game is not in equilibrium. However, if the cost functions in a given CGLF do not decrease in the number of users, then, as we show in the main result of this paper, a pure strategy Nash equilibrium is guaranteed to exist. 3.2 Nondecreasing Cost Functions This section focuses on the subclass of CGLFs with nondecreasing cost functions (henceforth, nondecreasing CGLFs). We show that nondecreasing CGLFs do not, in general, admit a potential function. Therefore, these games are not congestion games. Nevertheless, we prove that all such games possess pure strategy Nash equilibria. Table 2: Example for non-existence of potentials in CGLFs. 3.2.1 The (Non-)Existence of a Potential Function Recall that Monderer and Shapley [9] introduced the notions of potential function and potential game, where a potential game is defined to be a game that possesses a potential function. A potential function is a real-valued function over the set of pure strategy profiles, with the property that the gain (or loss) of an agent shifting to another strategy while the other agents' strategies are kept unchanged equals the corresponding increment of the potential function. The authors [9] showed that the classes of finite potential games and congestion games coincide. Here we show that the class of CGLFs and, in particular, the subclass of nondecreasing CGLFs, does not admit a potential function, and therefore is not included in the class of congestion games. However, for the special case of constant failure probabilities, a potential function is guaranteed to exist. To prove these statements we use the following characterization of potential games [9]. A path in Σ is a sequence τ = (σ^0 → σ^1 → · · ·) such that for every k ≥ 1 there exists a unique agent, say agent i, such that σ^k = (σ^{k−1}_{−i}, σ'i) for some σ'i ≠ σ^{k−1}_i in Σi. A finite path τ = (σ^0 → σ^1 → · · · → σ^K) is closed if σ^0 = σ^K. It is a simple closed path if, in addition, σ^l ≠ σ^k for every 0 ≤ l ≠ k ≤ K − 1. The length of a simple closed path is defined to be the number of distinct points in it; that is, the length of τ = (σ^0 → σ^1 → · · · → σ^K) is K. THEOREM 1. [9] Let G be a game in strategic form with a vector U = (U1,..., Un) of utility functions. For a finite path τ = (σ^0 → σ^1 → · · · → σ^K), let U(τ) = Σ_{k=1..K} [U_{i_k}(σ^k) − U_{i_k}(σ^{k−1})], where i_k is the unique deviator at step k. Then, G is a potential game if and only if U(τ) = 0 for every simple closed path τ of length 4. Based on Theorem 1, we present a counterexample (Table 2) that demonstrates the non-existence of a potential function in CGLFs. We consider the following agent-symmetric game G in which two agents (N = {1, 2}) wish to assign a task to two resources (M = {e1, e2}). The benefit from a successful task completion of each agent equals v, and the failure probability function strictly increases with the congestion. One can exhibit a simple closed path of length 4 in this game for which U(τ) ≠ 0. Thus, by Theorem 1, nondecreasing CGLFs do not admit potentials. As a result, they are not congestion games. However, as presented in the next section, the special case in which the failure probabilities are constant always possesses a potential function. Constant Failure Probabilities We show below that CGLFs with constant failure probabilities always possess a potential function. This follows from the fact that the expected benefit (revenue) of each agent in this case does not depend on the choices of the other agents.
In addition, for each agent, the sum of the costs over his chosen subset of resources equals the payoff of an agent choosing the same strategy in the corresponding congestion game. Assume we are given a game G with constant failure probabilities. Let τ = (α → β → γ → δ → α) be an arbitrary simple closed path of length 4. Let i and j denote the active agents (deviators) in τ and z ∈ Σ−{i, j} be a fixed strategy profile of the other agents. Let α = (xi, xj, z), β = (yi, xj, z), γ = (yi, yj, z), δ = (xi, yj, z), where xi, yi ∈ Σi and xj, yj ∈ Σj. Then, U(τ) = [Ui(β) − Ui(α)] + [Uj(γ) − Uj(β)] + [Ui(δ) − Ui(γ)] + [Uj(α) − Uj(δ)]. Since the failure probabilities are constant, the expected benefit of each deviator depends only on the number of resources he chooses, so the benefit terms form a telescope series that sums to 0. The remaining sum of cost terms equals 0, by applying Theorem 1 to congestion games, which are known to possess a potential function. Thus, by Theorem 1, G is a potential game. We note that the above result holds also for the more general settings with non-identical resources (having different failure probabilities and cost functions) and general cost functions (not necessarily monotone and/or nonnegative). 3.2.2 The Existence of a Pure Strategy Nash Equilibrium In the previous section, we have shown that CGLFs and, in particular, nondecreasing CGLFs, do not admit a potential function, but this fact, in general, does not contradict the existence of an equilibrium in pure strategies. In this section, we present and prove the main result of this paper (Theorem 2), which shows the existence of pure strategy Nash equilibria in nondecreasing CGLFs. The proof of Theorem 2 is based on Lemmas 4, 7 and 8, which are presented in the sequel. We start with some definitions and observations that are needed for their proofs. In particular, we present the notions of A-, D- and S-stability, where an A-move adds a single resource to an agent's strategy, a D-move drops a single resource, and an S-move switches one chosen resource for another, and show that a strategy profile is in equilibrium if and only if it is A-, D- and S-stable. Furthermore, we prove the existence of such a profile in any given nondecreasing CGLF. Clearly, if agent i deviates from strategy σi to strategy σ'i by applying a single A-, D- or S-move, then max{|σi \ σ'i|, |σ'i \ σi|} = 1, and vice versa, if max{|σi \ σ'i|, |σ'i \ σi|} = 1 then σ'i is obtained from σi by applying exactly one such move. For simplicity of exposition, for any pair of sets A and B, let µ(A, B) = max{|A \ B|, |B \ A|}. The following lemma implies that any strategy profile in which no agent wishes unilaterally to apply a single A-, D- or S-move is a Nash equilibrium. More precisely, we show that if there exists an agent who benefits from a unilateral deviation from a given strategy profile, then there exists a single A-, D- or S-move which is profitable for him as well. LEMMA 4. Given a nondecreasing CGLF, let σ ∈ Σ be a strategy profile which is not in equilibrium, and let i ∈ N such that ∃ xi ∈ Σi for which Ui(σ−i, xi) > Ui(σ). Then, there exists yi ∈ Σi such that Ui(σ−i, yi) > Ui(σ) and µ(yi, σi) = 1.
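To make the definitions above concrete, the following Python sketch computes the expected utility Ui(σ) of the model and exhaustively checks the two-agent, two-resource example of Section 3.1 for a pure strategy Nash equilibrium. It is only an illustration: the decreasing cost function c(x) = 1/x is our reading of the example and should be treated as an assumption, and all names in the code are ours.

```python
from itertools import chain, combinations

# A tiny CGLF instance (identical resources): failure probability f(k) and cost c(k)
# depend only on the congestion k, i.e., the number of agents using the resource.
M = ["e1", "e2"]                       # resources
v = {1: 1.1, 2: 4.0}                   # benefits from successful task completion
f = {1: 0.01, 2: 0.26}                 # failure probability at congestion 1 and 2
c = {1: 1.0, 2: 0.5}                   # assumed decreasing cost c(x) = 1/x

def powerset(items):
    s = list(items)
    return [frozenset(x) for x in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

STRATEGIES = powerset(M)               # Sigma_i: all subsets of M, including the empty strategy

def expected_utility(i, profile):
    """U_i(sigma) = v_i * (1 - prod_{e in sigma_i} f(h_e)) - sum_{e in sigma_i} c(h_e)."""
    sigma_i = profile[i]
    h = {e: sum(1 for s in profile.values() if e in s) for e in M}   # congestion vector
    fail = 1.0
    for e in sigma_i:
        fail *= f[h[e]]
    success = 0.0 if not sigma_i else 1.0 - fail
    return v[i] * success - sum(c[h[e]] for e in sigma_i)

def is_pure_nash(profile):
    """No agent can strictly gain by a unilateral change of strategy."""
    return all(
        expected_utility(i, {**profile, i: alt}) <= expected_utility(i, profile) + 1e-12
        for i in profile for alt in STRATEGIES
    )

profiles = [{1: s1, 2: s2} for s1 in STRATEGIES for s2 in STRATEGIES]
print(any(is_pure_nash(p) for p in profiles))      # prints False: no pure Nash equilibrium
```

With these numbers no profile survives the unilateral-deviation test, matching the claim that every pure strategy profile in the example is not in equilibrium.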
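For the constant-failure-probability case discussed above, the potential function is not displayed in this version of the text, so the sketch below uses a natural candidate consistent with the argument: Φ(σ) = Σi vi(1 − f^|σi|) − Σe Σ_{k=1..hσe} c(k), i.e., each agent's expected benefit (which depends only on his own strategy) plus Rosenthal's potential for the cost part. The exact form is our assumption. The code checks numerically, on random unilateral deviations, that the change in the deviator's utility equals the change in Φ.

```python
import itertools, random

M = list(range(3))                          # resources
N = list(range(4))                          # agents
v = {i: 1.0 + 0.5 * i for i in N}           # benefits
F = 0.3                                     # constant failure probability (independent of congestion)
c = lambda k: 0.4 * k                       # a nondecreasing cost function

def congestion(profile):
    return {e: sum(1 for s in profile.values() if e in s) for e in M}

def utility(i, profile):
    h = congestion(profile)
    s_i = profile[i]
    success = 1.0 - F ** len(s_i) if s_i else 0.0
    return v[i] * success - sum(c(h[e]) for e in s_i)

def potential(profile):
    # candidate: own-benefit terms (depend only on |sigma_i|) plus Rosenthal's potential for the costs
    h = congestion(profile)
    benefit = sum(v[i] * (1.0 - F ** len(profile[i])) for i in N)
    cost = sum(sum(c(k) for k in range(1, h[e] + 1)) for e in M)
    return benefit - cost

subsets = [frozenset(s) for r in range(len(M) + 1) for s in itertools.combinations(M, r)]
rng = random.Random(0)
for _ in range(2000):
    profile = {i: rng.choice(subsets) for i in N}
    i = rng.choice(N)
    deviation = {**profile, i: rng.choice(subsets)}
    d_util = utility(i, deviation) - utility(i, profile)
    d_pot = potential(deviation) - potential(profile)
    assert abs(d_util - d_pot) < 1e-9
print("utility changes match potential changes on all sampled deviations")
```

Replacing the constant F with a congestion-dependent failure probability makes the assertion fail for this particular candidate, in line with the counterexample sketched above (though that alone does not rule out other potentials; Theorem 1 does).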
Therefore, to prove the existence of a pure strategy Nash equilibrium, it suffices to look for a strategy profile for which no agent wishes to unilaterally apply an A-, D- or S-move. Based on the above observation, we define A-, D- and S-stability as follows: a strategy profile is A-stable (respectively, D-stable, S-stable) if no agent can strictly increase his utility by a single A-move (respectively, D-move, S-move), and it is DS-stable if it is both D-stable and S-stable. The set of all DS-stable strategy profiles is denoted by Σ0. Obviously, the profile (∅,..., ∅) is DS-stable, so Σ0 is not empty. Our goal is to find a DS-stable profile for which no profitable A-move exists, implying this profile is in equilibrium. To describe how we achieve this, we define the notions of light (heavy) resources and (nearly-)even strategy profiles, which play a central role in the proof of our main result: given a profile σ, a resource e is called σ-light if its congestion hσe is minimal over M and σ-heavy otherwise; σ is even if all resources have the same congestion, and nearly-even if the congestions of any two resources differ by at most 1. Obviously, every even strategy profile is nearly-even. In addition, in a nearly-even strategy profile, all heavy resources (if exist) have the same congestion. We also observe that the profile (∅,..., ∅) is even (and DS-stable), so the subset of even, DS-stable strategy profiles is not empty. Based on the above observations, we define two types of an A-move that are used in the sequel. Suppose σ ∈ Σ0 is a nearly-even DS-stable strategy profile. For each agent i ∈ N, let ei ∈ arg mine∈M\σi hσe. That is, ei is a lightest resource not chosen previously by i. Then, if there exists any profitable A-move for agent i, then the A-move with ei is profitable for i as well. This is since if agent i wishes to unilaterally add a resource, say a ∈ M \ σi, then Ui(σ−i, σi ∪ {ei}) ≥ Ui(σ−i, σi ∪ {a}), because hσei ≤ hσa and both the cost and the failure probability are nondecreasing in the congestion. If no agent wishes to change his strategy in this manner, i.e. Ui(σ) ≥ Ui(σ−i, σi ∪ {ei}) for all i ∈ N, then by the above Ui(σ) ≥ Ui(σ−i, σi ∪ {a}) for all i ∈ N and a ∈ M \ σi. Hence, σ is A-stable and by Lemma 4, σ is a Nash equilibrium strategy profile. Otherwise, let N(σ) denote the subset of all agents for which there exists ei such that a unilateral addition of ei is profitable. Let a ∈ arg min{hσei: i ∈ N(σ)}. Let also i ∈ N(σ) be the agent for which ei = a. If a is σ-light, then let σ' = (σ−i, σi ∪ {a}). In this case we say that σ' is obtained from σ by a one-step addition of resource a, and a is called an added resource. If a is σ-heavy then there exists a σ-light resource b and an agent j such that a ∈ σj and b ∉ σj. Then let σ' = (σ−{i,j}, σi ∪ {a}, (σj \ {a}) ∪ {b}). In this case we say that σ' is obtained from σ by a two-step addition of resource b, and b is called an added resource. We notice that, in both cases, the congestion of each resource in σ' is the same as in σ, except for the added resource, for which its congestion in σ' increased by 1. Thus, since the added resource is σ-light and σ is nearly-even, σ' is nearly-even. Then, Lemma 7 implies the S-stability of σ'. Coupled with Lemma 7, the following lemma (Lemma 8) shows that if σ is a nearly-even and DS-stable strategy profile, and σ' is obtained from σ by a one- or two-step addition of resource a, then the only potential cause for a non-DS-stability of σ' is the existence of an agent k ∈ N with σ'k = σk, who wishes to drop the added resource a. LEMMA 8. Let σ be a nearly-even DS-stable strategy profile of a given nondecreasing CGLF, and let σ' be obtained from σ by a one- or two-step addition of resource a. Then, there are no profitable D-moves for any agent i ∈ N with σ'i ≠ σi. For an agent i ∈ N with σ'i = σi, the only possible profitable D-move (if exists) is to drop the added resource a. We are now ready to prove our main result - Theorem 2. Let us briefly describe the idea behind the proof.
By Lemma 4, it suffices to prove the existence of a strategy profile which is A-, D- and S-stable. We start with the set of even and DS-stable strategy profiles, which is obviously not empty. In this set, we consider the subset of strategy profiles with maximum congestion and maximum sum of the agents' utilities. Assuming on the contrary that every DS-stable profile admits a profitable A-move, we show the existence of a strategy profile x in the above subset, such that a (one-step) addition of some resource a to x results in a DS-stable strategy. Then by a finite series of one- or two-step addition operations we obtain an even, DS-stable strategy profile with strictly higher congestion on the resources, contradicting the choice of x. The full proof is presented below. Proof of Theorem 2: Let Σ1 ⊆ Σ0 be the subset of all even, DS-stable strategy profiles. Observe that since (∅,..., ∅) is an even, DS-stable strategy profile, Σ1 is not empty and minσ∈Σ0 |{e ∈ M: e is σ-heavy}| = 0. Then, Σ1 could also be defined as Σ1 = {σ ∈ Σ0: hσe = hσ for all e ∈ M}, with hσ being the common congestion. Now, let Σ2 ⊆ Σ1 be the subset of Σ1 consisting of all those profiles with maximum congestion on the resources. That is, Σ2 = {σ ∈ Σ1: hσ ≥ hσ' for all σ' ∈ Σ1}. Let UN(σ) = Σi∈N Ui(σ) denote the group utility of the agents, and let Σ3 ⊆ Σ2 be the subset of all profiles in Σ2 with maximum group utility. That is, Σ3 = {σ ∈ Σ2: UN(σ) ≥ UN(σ') for all σ' ∈ Σ2}. Consider first the simple case in which maxσ∈Σ1 hσ = 0. Obviously, in this case, Σ1 = Σ2 = Σ3 = {x = (∅,..., ∅)}. We show below that by performing a finite series of (one-step) addition operations on x, we obtain an even, DS-stable strategy profile y with higher congestion, that is with hy > hx = 0, in contradiction to x ∈ Σ2. Let z ∈ Σ0 be a nearly-even (not necessarily even) DS-stable profile such that mine∈M hze = 0, and note that the profile x satisfies the above conditions. Let N(z) be the subset of agents for which a profitable A-move exists, and let i ∈ N(z). Obviously, there exists a z-light resource a such that Ui(z−i, zi ∪ {a}) > Ui(z) (otherwise, arg mine∈M hze ⊆ zi, in contradiction to mine∈M hze = 0). Consider the strategy profile z' = (z−i, zi ∪ {a}) which is obtained from z by a (one-step) addition of resource a by agent i. Since z is nearly-even and a is z-light, we can easily see that z' is nearly-even. Then, Lemma 7 implies that z' is S-stable. Since i is the only agent using resource a in z', by Lemma 8, no profitable D-moves are available. Thus, z' is a DS-stable strategy profile. Therefore, since the number of resources is finite, there is a finite series of one-step addition operations on x = (∅,..., ∅) that leads to strategy profile y ∈ Σ1 with hy = 1 > 0 = hx, in contradiction to x ∈ Σ2. We turn now to consider the other case where maxσ∈Σ1 hσ ≥ 1. In this case we select from Σ3 a strategy profile x, as described below, and use it to contradict our contrary assumption. Specifically, we show that there exists x ∈ Σ3 such that for all j ∈ N, vj · f(hx)^(|xj|−1) · (1 − f(hx + 1)) ≥ c(hx + 1). (1) Let x' be a strategy profile which is obtained from x by a (one-step) addition of some resource a ∈ M by some agent i ∈ N(x) (note that x' is nearly-even). Then, (1) is derived from and essentially equivalent to the inequality Uj(x') > Uj(x'−j, x'j \ {a}), for all a ∈ xj. That is, after performing an A-move with a by i, there is no profitable D-move with a. Then, by Lemmas 7 and 8, x' is DS-stable. Following the same lines as above, we construct a procedure that initializes at x and achieves a strategy profile y ∈ Σ1 with hy > hx, in contradiction to x ∈ Σ2.
Now, let us confirm the existence of x ∈ Σ3 that satisfies (1). Let x ∈ Σ3 and let M(x) be the subset of all resources for which there exists a profitable (one-step) addition. First, we show that (1) holds for all j ∈ N such that xj ∩ M(x) ≠ ∅, that is, for all those agents with one of their resources being desired by another agent. Let a ∈ M(x), and let x' be the strategy profile that is obtained from x by the (one-step) addition of a by agent i. Assume on the contrary that there is an agent j with a ∈ xj such that Uj(x'−j, x'j \ {a}) ≥ Uj(x'). Let x'' = (x'−j, x'j \ {a}). Below we demonstrate that x'' is a DS-stable strategy profile and, since x'' and x correspond to the same congestion vector, we conclude that x'' lies in Σ2. In addition, we show that UN(x'') > UN(x), contradicting the fact that x ∈ Σ3. To show that x'' ∈ Σ0 we note that x'' is an even strategy profile, and thus no S-moves may be performed for x''. In addition, since the congestion vector of x'' coincides with that of x and x is DS-stable, there are no profitable D-moves for any agent k ≠ i, j. It remains to show that there are no profitable D-moves for agents i and j as well. Thus, there are no profitable D-moves for agent i. By the DS-stability of x, for agent j and for all b ∈ xj, we have Uj(x) ≥ Uj(x−j, xj \ {b}), which gives vj f(hx)^(|xj|−1) (1 − f(hx)) ≥ c(hx). Therefore, x'' lies in Σ2 and satisfies UN(x'') > UN(x), in contradiction to x ∈ Σ3. Hence, if x ∈ Σ3 then (1) holds for all j ∈ N such that xj ∩ M(x) ≠ ∅. Now let us see that there exists x ∈ Σ3 such that (1) holds for all the agents. For that, choose an agent i ∈ arg mink∈N vk f(hx)^(|xk|). If there exists a ∈ xi ∩ M(x) then i satisfies (1), implying, by the choice of agent i, the correctness of (1) for any agent k ∈ N. Otherwise, if no resource in xi lies in M(x), then let a ∈ xi and a' ∈ M(x). Since a ∈ xi, a' ∉ xi, and hxa = hxa', there exists an agent j such that a' ∈ xj and a ∉ xj. One can easily check that the strategy profile x' = (x−{i,j}, (xi \ {a}) ∪ {a'}, (xj \ {a'}) ∪ {a}) lies in Σ3. Thus, x' satisfies (1) for agent i, and therefore, for any agent k ∈ N. Now, let x ∈ Σ3 satisfy (1). We show below that by performing a finite series of one- and two-step addition operations on x, we can achieve a strategy profile y that lies in Σ1, such that hy > hx, in contradiction to x ∈ Σ2. Let z ∈ Σ0 be a nearly-even (not necessarily even), DS-stable strategy profile, such that vi · ∏e∈zi\{b} f(hze) · (1 − f(hzb + 1)) ≥ c(hzb + 1) (2) for all i ∈ N and for all z-light resources b ∈ zi. We note that for profile x ∈ Σ3 ⊆ Σ1, with all resources being x-light, conditions (2) and (1) are equivalent. Let z' be obtained from z by a one- or two-step addition of a z-light resource a. Obviously, z' is nearly-even. In addition, hz'e ≥ hze for all e ∈ M, and mine∈M hz'e ≥ mine∈M hze. To complete the proof we need to show that z' is DS-stable and, in addition, that if mine∈M hz'e = mine∈M hze then z' has property (2). The DS-stability of z' follows directly from Lemmas 7 and 8, and from (2) with respect to z. It remains to prove property (2) for z' when mine∈M hz'e = mine∈M hze. Now let us consider the rest of the agents. Assume z' is obtained by the one-step addition of a by agent i. In this case, i is the only agent with z'i ≠ zi. The required property for agent i follows directly from Ui(z') > Ui(z). In the case of a two-step addition, let z' = (z−{i,j}, zi ∪ {b}, (zj \ {b}) ∪ {a}), where b is a z-heavy resource.
For agent i, the required property follows from Ui(z−i, zi ∪ {b}) > Ui(z), together with the fact that hzb ≥ hz'e' for all e' ∈ M and, in particular, for all z'-light resources e'. For agent j we just use (2) with respect to z and the equality hzb = hz'a. Thus, since the number of resources is finite, there is a finite series of one- and two-step addition operations on x that leads to strategy profile y ∈ Σ1 with hy > hx, in contradiction to x ∈ Σ2. This completes the proof. ❑
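Theorem 2 guarantees that a pure strategy Nash equilibrium exists in every nondecreasing CGLF, but since these games admit no potential function, simple better-response dynamics need not converge. Still, by Lemma 4 it suffices to find a profile that admits no profitable single A-, D- or S-move. The sketch below is a heuristic local search in that spirit, with random restarts; it is not the constructive procedure used in the proof, and the function names are ours.

```python
import random
from itertools import combinations

def all_subsets(M):
    return [frozenset(s) for r in range(len(M) + 1) for s in combinations(M, r)]

def single_moves(sigma_i, M):
    """Strategies reachable by one A-move (add), D-move (drop) or S-move (switch)."""
    s = set(sigma_i)
    adds = [frozenset(s | {e}) for e in M if e not in s]
    drops = [frozenset(s - {e}) for e in s]
    switches = [frozenset((s - {a}) | {b}) for a in s for b in M if b not in s]
    return adds + drops + switches

def local_search_ne(N, M, utility, restarts=20, steps=5000, seed=0):
    rng = random.Random(seed)
    subsets = all_subsets(M)
    for r in range(restarts):
        # the first attempt starts from the empty (DS-stable) profile, later attempts are random
        profile = {i: frozenset() for i in N} if r == 0 else {i: rng.choice(subsets) for i in N}
        for _ in range(steps):
            move = None
            for i in N:
                base = utility(i, profile)
                for alt in single_moves(profile[i], M):
                    if utility(i, {**profile, i: alt}) > base + 1e-12:
                        move = (i, alt)
                        break
                if move:
                    break
            if move is None:
                return profile            # no profitable single A-/D-/S-move: equilibrium by Lemma 4
            profile = {**profile, move[0]: move[1]}
    return None                           # not found within the budget (the dynamics may cycle)
```

Here utility(i, profile) is any implementation of the expected utility of Section 2, for instance the one in the earlier sketch.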
Congestion Games with Load-Dependent Failures: Identical Resources ABSTRACT We define a new class of games, congestion games with loaddependent failures (CGLFs), which generalizes the well-known class of congestion games, by incorporating the issue of resource failures into congestion games. In a CGLF, agents share a common set of resources, where each resource has a cost and a probability of failure. Each agent chooses a subset of the resources for the execution of his task, in order to maximize his own utility. The utility of an agent is the difference between his benefit from successful task completion and the sum of the costs over the resources he uses. CGLFs possess two novel features. It is the first model to incorporate failures into congestion settings, which results in a strict generalization of congestion games. In addition, it is the first model to consider load-dependent failures in such framework, where the failure probability of each resource depends on the number of agents selecting this resource. Although, as we show, CGLFs do not admit a potential function, and in general do not have a pure strategy Nash equilibrium, our main theorem proves the existence of a pure strategy Nash equilibrium in every CGLF with identical resources and nondecreasing cost functions. 1. INTRODUCTION We study the effects of resource failures in congestion settings. This study is motivated by a variety of situations in multi-agent systems with unreliable components, such as machines, computers etc. . We define a model for congestion games with load-dependent failures (CGLFs) which provides simple and natural description of such situations. In this model, we are given a finite set of identical resources (service providers) where each element possesses a failure probability describing the probability of unsuccessful completion of its assigned tasks as a (nondecreasing) function of its congestion. There is a fixed number of agents, each having a task which can be carried out by any of the resources. For reliability reasons, each agent may decide to assign his task, simultaneously, to a number of resources. Thus, the congestion on the resources is not known in advance, but is strategy-dependent. Each resource is associated with a cost, which is a (nonnegative) function of the congestion experienced by this resource. The objective of each agent is to maximize his own utility, which is the difference between his benefit from successful task completion and the sum of the costs over the set of resources he uses. The benefits of the agents from successful completion of their tasks are allowed to vary across the agents. The resource cost function describes the cost suffered by an agent for selecting that resource, as a function of the number of agents who have selected it. Thus, it is natural to assume that these functions are nonnegative. In addition, in many real-life applications of our model the resource cost functions have a special structure. In particular, they can monotonically increase or decrease with the number of the users, depending on the context. The former case is motivated by situations where high congestion on a resource causes longer delay in its assigned tasks execution and as a result, the cost of utilizing this resource might be higher. A typical example of such situation is as follows. Assume we need to deliver an important package. Since there is no guarantee that a courier will reach the destination in time, we might send several couriers to deliver the same package. 
The time required by each courier to deliver the package increases with the congestion on his way. In addition, the payment to a courier is proportional to the time he spends in delivering the package. Thus, the payment to the courier increases when the congestion increases. The latter case (decreasing cost functions) describes situations where a group of agents using a particular resource have an opportunity to share its cost among the group's members, or, the cost of using a resource decreases with the number of users, according to some marketing policy. Our results. We show that CGLFs and, in particular, CGLFs with nondecreasing cost functions, do not admit a potential function. Therefore, the CGLF model cannot be reduced to congestion games. Nevertheless, if the failure probabilities are constant (do not depend on the congestion) then a potential function is guaranteed to exist. . We show that CGLFs and, in particular, CGLFs with decreasing cost functions, do not possess pure strategy Nash equilibria. However, as we show in our main result, there exists a pure strategy Nash equilibrium in any CGLF with nondecreasing cost functions. Related work Our model extends the well-known class of congestion games [11]. In a congestion game, every agent has to choose from a finite set of resources, where the utility (or cost) of an agent from using a particular resource depends on the number of agents using it, and his total utility (cost) is the sum of the utilities (costs) obtained from the resources he uses. An important property of these games is the existence of pure strategy Nash equilibria. Monderer and Shapley [9] introduced the notions of potential function and potential game and proved that the existence of a potential function implies the existence of a pure strategy Nash equilibrium. They observed that Rosenthal [11] proved his theorem on congestion games by constructing a potential function (hence, every congestion game is a potential game). Moreover, they showed that every finite potential game is isomorphic to a congestion game; hence, the classes of finite potential games and congestion games coincide. Congestion games have been extensively studied and generalized. In particular, Leyton-Brown and Tennenholtz [5] extended the class of congestion games to the class of localeffect games. In a local-effect game, each agent's payoff is effected not only by the number of agents who have chosen the same resources as he has chosen, but also by the number of agents who have chosen neighboring resources (in a given graph structure). Monderer [8] dealt with another type of generalization of congestion games, in which the resource cost functions are player-specific (PS-congestion games). He defined PS-congestion games of type q (q-congestion games), where q is a positive number, and showed that every game in strategic form is a q-congestion game for some q. Playerspecific resource cost functions were discussed for the first time by Milchtaich [6]. He showed that simple and strategysymmetric PS-congestion games are not potential games, but always possess a pure strategy Nash equilibrium. PScongestion games were generalized to weighted congestion games [6] (or, ID-congestion games [7]), in which the resource cost functions are not only player-specific, but also depend on the identity of the users of the resource. Ackermann et al. 
[1] showed that weighted congestion games admit pure strategy Nash equilibria if the strategy space of each player consists of the bases of a matroid on the set of resources. Much of the work on congestion games has been inspired by the fact that every such game has a pure strategy Nash equilibrium. In particular, Fabrikant et al. [3] studied the computational complexity of finding pure strategy Nash equilibria in congestion games. Intensive study has also been devoted to quantify the inefficiency of equilibria in congestion games. Koutsoupias and Papadimitriou [4] proposed the worst-case ratio of the social welfare achieved by a Nash equilibrium and by a socially optimal strategy profile (dubbed the price of anarchy) as a measure of the performance degradation caused by lack of coordination. Christodoulou and Koutsoupias [2] considered the price of anarchy of pure equilibria in congestion games with linear cost functions. Roughgarden and Tardos [12] used this approach to study the cost of selfish routing in networks with a continuum of users. However, the above settings do not take into consideration the possibility that resources may fail to execute their assigned tasks. In the computer science context of congestion games, where the alternatives of concern are machines, computers, communication lines etc., which are obviously prone to failures, this issue should not be ignored. Penn, Polukarov and Tennenholtz were the first to incorporate the issue of failures into congestion settings [10]. They introduced a class of congestion games with failures (CGFs) and proved that these games, while not being isomorphic to congestion games, always possess Nash equilibria in pure strategies. The CGF-model significantly differs from ours. In a CGF, the authors considered the delay associated with successful task completion, where the delay for an agent is the minimum of the delays of his successful attempts and the aim of each agent is to minimize his expected delay. In contrast with the CGF-model, in our model we consider the total cost of the utilized resources, where each agent wishes to maximize the difference between his benefit from a successful task completion and the sum of his costs over the resources he uses. The above differences imply that CGFs and CGLFs possess different properties. In particular, if in our model the resource failure probabilities were constant and known in advance, then a potential function would exist. This, however, does not hold for CGFs; in CGFs, the failure probabilities are constant but there is no potential function. Furthermore, the procedures proposed by the authors in [10] for the construction of a pure strategy Nash equilibrium are not valid in our model, even in the simple, agent-symmetric case, where all agents have the same benefit from successful completion of their tasks. Our work provides the first model of congestion settings with resource failures, which considers the sum of congestiondependent costs over utilized resources, and therefore, does not extend the CGF-model, but rather generalizes the classic model of congestion games. Moreover, it is the first model to consider load-dependent failures in the above context. Organization The rest of the paper is organized as follows. In Section 2 we define our model. In Section 3 we present our results. In 3.1 we show that CGLFs, in general, do not have pure strategy Nash equilibria. In 3.2 we focus on CGLFs with nondecreasing cost functions (nondecreasing CGLFs). 
We show that these games do not admit a potential function. However, in our main result we show the existence of pure strategy Nash equilibria in nondecreasing CGLFs. Section 4 is devoted to a short discussion. Many of the proofs are omitted from this conference version of the paper, and will appear in the full version. 2. THE MODEL 3. PURE STRATEGY NASH EQUILIBRIA IN CGLFS 3.1 Decreasing Cost Functions 3.2 Nondecreasing Cost Functions 3.2.1 The (Non -) Existence of a Potential Function Constant Failure Probabilities 3.2.2 The Existence of a Pure Strategy Nash Equilibrium
Congestion Games with Load-Dependent Failures: Identical Resources ABSTRACT We define a new class of games, congestion games with loaddependent failures (CGLFs), which generalizes the well-known class of congestion games, by incorporating the issue of resource failures into congestion games. In a CGLF, agents share a common set of resources, where each resource has a cost and a probability of failure. Each agent chooses a subset of the resources for the execution of his task, in order to maximize his own utility. The utility of an agent is the difference between his benefit from successful task completion and the sum of the costs over the resources he uses. CGLFs possess two novel features. It is the first model to incorporate failures into congestion settings, which results in a strict generalization of congestion games. In addition, it is the first model to consider load-dependent failures in such framework, where the failure probability of each resource depends on the number of agents selecting this resource. Although, as we show, CGLFs do not admit a potential function, and in general do not have a pure strategy Nash equilibrium, our main theorem proves the existence of a pure strategy Nash equilibrium in every CGLF with identical resources and nondecreasing cost functions. 1. INTRODUCTION We study the effects of resource failures in congestion settings. We define a model for congestion games with load-dependent failures (CGLFs) which provides simple and natural description of such situations. In this model, we are given a finite set of identical resources (service providers) where each element possesses a failure probability describing the probability of unsuccessful completion of its assigned tasks as a (nondecreasing) function of its congestion. There is a fixed number of agents, each having a task which can be carried out by any of the resources. For reliability reasons, each agent may decide to assign his task, simultaneously, to a number of resources. Thus, the congestion on the resources is not known in advance, but is strategy-dependent. Each resource is associated with a cost, which is a (nonnegative) function of the congestion experienced by this resource. The objective of each agent is to maximize his own utility, which is the difference between his benefit from successful task completion and the sum of the costs over the set of resources he uses. The benefits of the agents from successful completion of their tasks are allowed to vary across the agents. The resource cost function describes the cost suffered by an agent for selecting that resource, as a function of the number of agents who have selected it. Thus, it is natural to assume that these functions are nonnegative. In addition, in many real-life applications of our model the resource cost functions have a special structure. In particular, they can monotonically increase or decrease with the number of the users, depending on the context. The former case is motivated by situations where high congestion on a resource causes longer delay in its assigned tasks execution and as a result, the cost of utilizing this resource might be higher. A typical example of such situation is as follows. Assume we need to deliver an important package. The time required by each courier to deliver the package increases with the congestion on his way. In addition, the payment to a courier is proportional to the time he spends in delivering the package. Thus, the payment to the courier increases when the congestion increases. 
The latter case (decreasing cost functions) describes situations where a group of agents using a particular resource have an opportunity to share its cost among the group's members, or, the cost of using a resource decreases with the number of users, according to some marketing policy. Our results. We show that CGLFs and, in particular, CGLFs with nondecreasing cost functions, do not admit a potential function. Therefore, the CGLF model cannot be reduced to congestion games. Nevertheless, if the failure probabilities are constant (do not depend on the congestion) then a potential function is guaranteed to exist. . We show that CGLFs and, in particular, CGLFs with decreasing cost functions, do not possess pure strategy Nash equilibria. However, as we show in our main result, there exists a pure strategy Nash equilibrium in any CGLF with nondecreasing cost functions. Related work Our model extends the well-known class of congestion games [11]. In a congestion game, every agent has to choose from a finite set of resources, where the utility (or cost) of an agent from using a particular resource depends on the number of agents using it, and his total utility (cost) is the sum of the utilities (costs) obtained from the resources he uses. An important property of these games is the existence of pure strategy Nash equilibria. Monderer and Shapley [9] introduced the notions of potential function and potential game and proved that the existence of a potential function implies the existence of a pure strategy Nash equilibrium. They observed that Rosenthal [11] proved his theorem on congestion games by constructing a potential function (hence, every congestion game is a potential game). Moreover, they showed that every finite potential game is isomorphic to a congestion game; hence, the classes of finite potential games and congestion games coincide. Congestion games have been extensively studied and generalized. In particular, Leyton-Brown and Tennenholtz [5] extended the class of congestion games to the class of localeffect games. Monderer [8] dealt with another type of generalization of congestion games, in which the resource cost functions are player-specific (PS-congestion games). He showed that simple and strategysymmetric PS-congestion games are not potential games, but always possess a pure strategy Nash equilibrium. PScongestion games were generalized to weighted congestion games [6] (or, ID-congestion games [7]), in which the resource cost functions are not only player-specific, but also depend on the identity of the users of the resource. Ackermann et al. [1] showed that weighted congestion games admit pure strategy Nash equilibria if the strategy space of each player consists of the bases of a matroid on the set of resources. Much of the work on congestion games has been inspired by the fact that every such game has a pure strategy Nash equilibrium. In particular, Fabrikant et al. [3] studied the computational complexity of finding pure strategy Nash equilibria in congestion games. Intensive study has also been devoted to quantify the inefficiency of equilibria in congestion games. Christodoulou and Koutsoupias [2] considered the price of anarchy of pure equilibria in congestion games with linear cost functions. However, the above settings do not take into consideration the possibility that resources may fail to execute their assigned tasks. 
In the computer science context of congestion games, where the alternatives of concern are machines, computers, communication lines etc., which are obviously prone to failures, this issue should not be ignored. Penn, Polukarov and Tennenholtz were the first to incorporate the issue of failures into congestion settings [10]. They introduced a class of congestion games with failures (CGFs) and proved that these games, while not being isomorphic to congestion games, always possess Nash equilibria in pure strategies. In contrast with the CGF-model, in our model we consider the total cost of the utilized resources, where each agent wishes to maximize the difference between his benefit from a successful task completion and the sum of his costs over the resources he uses. The above differences imply that CGFs and CGLFs possess different properties. In particular, if in our model the resource failure probabilities were constant and known in advance, then a potential function would exist. This, however, does not hold for CGFs; in CGFs, the failure probabilities are constant but there is no potential function. Our work provides the first model of congestion settings with resource failures, which considers the sum of congestiondependent costs over utilized resources, and therefore, does not extend the CGF-model, but rather generalizes the classic model of congestion games. Moreover, it is the first model to consider load-dependent failures in the above context. Organization The rest of the paper is organized as follows. In Section 2 we define our model. In Section 3 we present our results. In 3.1 we show that CGLFs, in general, do not have pure strategy Nash equilibria. In 3.2 we focus on CGLFs with nondecreasing cost functions (nondecreasing CGLFs). We show that these games do not admit a potential function. However, in our main result we show the existence of pure strategy Nash equilibria in nondecreasing CGLFs. Section 4 is devoted to a short discussion.
H-62
Implicit User Modeling for Personalized Search
Information retrieval systems (e.g., web search engines) are critical for overcoming information overload. A major deficiency of existing retrieval systems is that they generally lack user modeling and are not adaptive to individual users, resulting in inherently non-optimal retrieval performance. For example, a tourist and a programmer may use the same word java to search for different information, but the current search systems would return the same results. In this paper, we study how to infer a user's interest from the user's search context and use the inferred implicit user model for personalized search. We present a decision theoretic framework and develop techniques for implicit user modeling in information retrieval. We develop an intelligent client-side web search agent (UCAIR) that can perform eager implicit feedback, e.g., query expansion based on previous queries and immediate result reranking based on clickthrough information. Experiments on web search show that our search agent can improve search accuracy over the popular Google search engine.
[ "implicit user model", "user model", "person search", "inform retriev system", "retriev perform", "implicit feedback", "queri expans", "search accuraci", "user-center adapt inform retriev", "person web search", "interact ir", "queri refin", "person inform retriev", "interact retriev" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "M", "R", "U", "M", "R", "M" ]
Implicit User Modeling for Personalized Search Xuehua Shen, Bin Tan, ChengXiang Zhai Department of Computer Science University of Illinois at Urbana-Champaign ABSTRACT Information retrieval systems (e.g., web search engines) are critical for overcoming information overload. A major deficiency of existing retrieval systems is that they generally lack user modeling and are not adaptive to individual users, resulting in inherently non-optimal retrieval performance. For example, a tourist and a programmer may use the same word java to search for different information, but the current search systems would return the same results. In this paper, we study how to infer a user``s interest from the user``s search context and use the inferred implicit user model for personalized search . We present a decision theoretic framework and develop techniques for implicit user modeling in information retrieval. We develop an intelligent client-side web search agent (UCAIR) that can perform eager implicit feedback, e.g., query expansion based on previous queries and immediate result reranking based on clickthrough information. Experiments on web search show that our search agent can improve search accuracy over the popular Google search engine. Categories and Subject Descriptors H.3.3 [Information Search and Retrieval]: Retrieval models, Relevance feedback, Search Process General Terms Algorithms 1. INTRODUCTION Although many information retrieval systems (e.g., web search engines and digital library systems) have been successfully deployed, the current retrieval systems are far from optimal. A major deficiency of existing retrieval systems is that they generally lack user modeling and are not adaptive to individual users [17]. This inherent non-optimality is seen clearly in the following two cases: (1) Different users may use exactly the same query (e.g., Java) to search for different information (e.g., the Java island in Indonesia or the Java programming language), but existing IR systems return the same results for these users. Without considering the actual user, it is impossible to know which sense Java refers to in a query. (2) A user``s information needs may change over time. The same user may use Java sometimes to mean the Java island in Indonesia and some other times to mean the programming language. Without recognizing the search context, it would be again impossible to recognize the correct sense. In order to optimize retrieval accuracy, we clearly need to model the user appropriately and personalize search according to each individual user. The major goal of user modeling for information retrieval is to accurately model a user``s information need, which is, unfortunately, a very difficult task. Indeed, it is even hard for a user to precisely describe what his/her information need is. What information is available for a system to infer a user``s information need? Obviously, the user``s query provides the most direct evidence. Indeed, most existing retrieval systems rely solely on the query to model a user``s information need. However, since a query is often extremely short, the user model constructed based on a keyword query is inevitably impoverished . An effective way to improve user modeling in information retrieval is to ask the user to explicitly specify which documents are relevant (i.e., useful for satisfying his/her information need), and then to improve user modeling based on such examples of relevant documents. 
This is called relevance feedback, which has been proved to be quite effective for improving retrieval accuracy [19, 20]. Unfortunately, in real world applications, users are usually reluctant to make the extra effort to provide relevant examples for feedback [11]. It is thus very interesting to study how to infer a user``s information need based on any implicit feedback information, which naturally exists through user interactions and thus does not require any extra user effort. Indeed, several previous studies have shown that implicit user modeling can improve retrieval accuracy. In [3], a web browser (Curious Browser) is developed to record a user``s explicit relevance ratings of web pages (relevance feedback) and browsing behavior when viewing a page, such as dwelling time, mouse click, mouse movement and scrolling (implicit feedback). It is shown that the dwelling time on a page, amount of scrolling on a page and the combination of time and scrolling have a strong correlation with explicit relevance ratings, which suggests that implicit feedback may be helpful for inferring user information need. In [10], user clickthrough data is collected as training data to learn a retrieval function, which is used to produce a customized ranking of search results that suits a group of users'' preferences. In [25], the clickthrough data collected over a long time period is exploited through query expansion to improve retrieval accuracy. 824 While a user may have general long term interests and preferences for information, often he/she is searching for documents to satisfy an ad-hoc information need, which only lasts for a short period of time; once the information need is satisfied, the user would generally no longer be interested in such information. For example, a user may be looking for information about used cars in order to buy one, but once the user has bought a car, he/she is generally no longer interested in such information. In such cases, implicit feedback information collected over a long period of time is unlikely to be very useful, but the immediate search context and feedback information, such as which of the search results for the current information need are viewed, can be expected to be much more useful. Consider the query Java again. Any of the following immediate feedback information about the user could potentially help determine the intended meaning of Java in the query: (1) The previous query submitted by the user is hashtable (as opposed to, e.g., travel Indonesia). (2) In the search results, the user viewed a page where words such as programming, software, and applet occur many times. To the best of our knowledge, how to exploit such immediate and short-term search context to improve search has so far not been well addressed in the previous work. In this paper, we study how to construct and update a user model based on the immediate search context and implicit feedback information and use the model to improve the accuracy of ad-hoc retrieval. In order to maximally benefit the user of a retrieval system through implicit user modeling, we propose to perform eager implicit feedback. That is, as soon as we observe any new piece of evidence from the user, we would update the system``s belief about the user``s information need and respond with improved retrieval results based on the updated user model. 
We present a decision-theoretic framework for optimizing interactive information retrieval based on eager user model updating, in which the system responds to every action of the user by choosing a system action to optimize a utility function. In a traditional retrieval paradigm, the retrieval problem is to match a query with documents and rank documents according to their relevance values. As a result, the retrieval process is a simple independent cycle of query and result display. In the proposed new retrieval paradigm, the user``s search context plays an important role and the inferred implicit user model is exploited immediately to benefit the user. The new retrieval paradigm is thus fundamentally different from the traditional paradigm, and is inherently more general. We further propose specific techniques to capture and exploit two types of implicit feedback information: (1) identifying related immediately preceding query and using the query and the corresponding search results to select appropriate terms to expand the current query, and (2) exploiting the viewed document summaries to immediately rerank any documents that have not yet been seen by the user. Using these techniques, we develop a client-side web search agent UCAIR (User-Centered Adaptive Information Retrieval) on top of a popular search engine (Google). Experiments on web search show that our search agent can improve search accuracy over Google. Since the implicit information we exploit already naturally exists through user interactions, the user does not need to make any extra effort. Thus the developed search agent can improve existing web search performance without additional effort from the user. The remaining sections are organized as follows. In Section 2, we discuss the related work. In Section 3, we present a decisiontheoretic interactive retrieval framework for implicit user modeling. In Section 4, we present the design and implementation of an intelligent client-side web search agent (UCAIR) that performs eager implicit feedback. In Section 5, we report our experiment results using the search agent. Section 6 concludes our work. 2. RELATED WORK Implicit user modeling for personalized search has been studied in previous work, but our work differs from all previous work in several aspects: (1) We emphasize the exploitation of immediate search context such as the related immediately preceding query and the viewed documents in the same session, while most previous work relies on long-term collection of implicit feedback information [25]. (2) We perform eager feedback and bring the benefit of implicit user modeling as soon as any new implicit feedback information is available, while the previous work mostly exploits longterm implicit feedback [10]. (3) We propose a retrieval framework to integrate implicit user modeling with the interactive retrieval process, while the previous work either studies implicit user modeling separately from retrieval [3] or only studies specific retrieval models for exploiting implicit feedback to better match a query with documents [23, 27, 22]. (4) We develop and evaluate a personalized Web search agent with online user studies, while most existing work evaluates algorithms offline without real user interactions. 
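The second technique above, immediately reranking unseen results using the summaries the user has viewed, can be illustrated with a small sketch. The scoring rule below, a mix of the original rank and cosine similarity to a term vector built from the viewed summaries, is only a hypothetical stand-in rather than the method UCAIR actually uses; the function names and the weighting parameter alpha are ours.

```python
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def tf_vector(texts):
    counts = Counter(tok for t in texts for tok in tokenize(t))
    norm = math.sqrt(sum(n * n for n in counts.values())) or 1.0
    return {t: n / norm for t, n in counts.items()}

def cosine(u, w):
    return sum(u[t] * w.get(t, 0.0) for t in u)

def rerank_unseen(unseen_snippets, viewed_summaries, alpha=0.5):
    """Mix the engine's original order with similarity to the summaries the user viewed."""
    interest = tf_vector(viewed_summaries)
    def score(item):
        rank, snippet = item
        prior = 1.0 / (rank + 1)                                   # original ranking as a prior
        return alpha * prior + (1 - alpha) * cosine(interest, tf_vector([snippet]))
    ranked = sorted(enumerate(unseen_snippets), key=score, reverse=True)
    return [snippet for _, snippet in ranked]
```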
Currently some search engines provide rudimentary personalization, such as Google Personalized web search [6], which allows users to explicitly describe their interests by selecting from predefined topics so that results matching those interests are brought to the top, and My Yahoo! search [16], which gives users the option to save web sites they like and block those they dislike. In contrast, UCAIR personalizes web search through implicit user modeling, without any additional user effort. Furthermore, the personalization of UCAIR is provided on the client side, which has two remarkable advantages. First, the user does not need to worry about privacy infringement, which is a big concern for personalized search [26]. Second, both the computation of personalization and the storage of the user profile are done on the client side, so the server load is reduced dramatically [9].

There has been much work studying user query logs [1] or query dynamics [13]. UCAIR makes direct use of a user's query history to benefit the same user immediately in the same search session. UCAIR first judges whether two neighboring queries belong to the same information session and, if so, selects terms from the previous query to perform query expansion. Our query expansion approach is similar to automatic query expansion [28, 15, 5], but instead of using pseudo feedback to expand the query, we use the user's implicit feedback information to expand the current query. These two techniques may be combined.

3. OPTIMIZATION IN INTERACTIVE IR
In interactive IR, a user interacts with the retrieval system through an action dialogue, in which the system responds to each user action with some system action. For example, the user's action may be submitting a query and the system's response may be returning a list of 10 document summaries. In general, the space of user actions and system responses and their granularities depend on the interface of a particular retrieval system. In principle, every action of the user can potentially provide new evidence to help the system better infer the user's information need. Thus, in order to respond optimally, the system should use all the evidence collected so far about the user when choosing a response. When viewed in this way, most existing search engines are clearly non-optimal. For example, if a user has viewed some documents on the first page of search results, when the user clicks on the Next link to fetch more results, an existing retrieval system would still return the next page of results retrieved based on the original query, without considering the new evidence that a particular result has been viewed by the user.

We propose to optimize retrieval performance by adapting system responses based on every action that a user has taken, and we cast the optimization problem as a decision task. Specifically, at any time, the system attempts to do two tasks: (1) user model updating: monitor any useful evidence from the user regarding his/her information need and update the user model as soon as such evidence is available; (2) improving search results: rerank immediately all the documents that the user has not yet seen, as soon as the user model is updated. We emphasize eager updating and reranking, which makes our work quite different from existing work. Below we present a formal decision-theoretic framework for optimizing retrieval performance through implicit user modeling in interactive information retrieval.
3.1 A decision-theoretic framework
Let A be the set of all user actions and R(a) be the set of all possible system responses to a user action a ∈ A. At any time, let A_t = (a_1, ..., a_t) be the observed sequence of user actions so far (up to time point t) and R_{t-1} = (r_1, ..., r_{t-1}) be the responses that the system has made to those actions. The system's goal is to choose an optimal response r_t ∈ R(a_t) for the current user action a_t. Let M be the space of all possible user models. We further define a loss function L(a, r, m) ∈ \mathbb{R}, where a ∈ A is a user action, r ∈ R(a) is a system response, and m ∈ M is a user model. L(a, r, m) encodes our decision preferences and assesses the optimality of responding with r when the current user model is m and the current user action is a. According to Bayesian decision theory, the optimal decision at time t is to choose a response that minimizes the Bayes risk, i.e.,

r_t^* = \arg\min_{r \in R(a_t)} \int_M L(a_t, r, m_t) \, P(m_t \mid U, D, A_t, R_{t-1}) \, dm_t    (1)

where P(m_t | U, D, A_t, R_{t-1}) is the posterior probability of the user model m_t given all the observations about the user U that we have made up to time t.

To simplify the computation of Equation 1, let us assume that the posterior probability mass P(m_t | U, D, A_t, R_{t-1}) is mostly concentrated on the mode m_t^* = \arg\max_{m_t} P(m_t | U, D, A_t, R_{t-1}). We can then approximate the integral with the value of the loss function at m_t^*. That is,

r_t^* \approx \arg\min_{r \in R(a_t)} L(a_t, r, m_t^*)    (2)

where m_t^* = \arg\max_{m_t} P(m_t | U, D, A_t, R_{t-1}). Leaving aside how to define and estimate these probabilistic models and the loss function, we can see that such a decision-theoretic formulation suggests that, in order to choose the optimal response to a_t, the system should perform two tasks: (1) compute the current user model and obtain m_t^* based on all the useful information, and (2) choose a response r_t to minimize the loss function value L(a_t, r_t, m_t^*). When a_t does not affect our belief about m_t^*, the first step can be omitted and we may reuse m_{t-1}^* for m_t^*.

Note that our framework is quite general, since we can potentially model any kind of user actions and system responses. In most cases, as we may expect, the system's response is some ranking of documents, i.e., for most actions a, R(a) consists of all the possible rankings of the unseen documents, and the decision problem boils down to choosing the best ranking of unseen documents based on the most current user model. When a is the action of submitting a keyword query, such a response is exactly what a current retrieval system would do. However, we can easily imagine that a more intelligent web search engine would respond to a user's clicking of the Next link (to fetch more unseen results) with a more optimized ranking of documents based on any viewed documents in the current page of results. In fact, according to our eager updating strategy, we may even allow a system to respond to a user's clicking of the browser's Back button after viewing a document in the same way, so that the user can maximally benefit from implicit feedback. These are precisely what our UCAIR system does.
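To make the two-step decision procedure concrete, the sketch below shows one way a system could implement the approximation in Equation 2: first obtain a point estimate of the user model, then pick the response with the smallest loss. This is only an illustration under our own assumptions; the callables estimate_user_model and loss are hypothetical placeholders and are not taken from the paper's implementation.

def choose_response(candidate_responses, action, history, estimate_user_model, loss):
    """Approximate r_t^* = argmin_{r in R(a_t)} L(a_t, r, m_t^*) (Equation 2).

    candidate_responses: the possible system responses R(a_t)
    action:              the current user action a_t
    history:             observations about the user so far (A_t, R_{t-1}, seen results, ...)
    estimate_user_model: callable returning the mode m_t^* of the user model posterior
    loss:                callable implementing the loss function L(a, r, m)
    """
    m_star = estimate_user_model(action, history)                  # step 1: update the user model
    best_response = min(candidate_responses,
                        key=lambda r: loss(action, r, m_star))     # step 2: minimize the loss
    return best_response, m_star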
3.2 User models
A user model m ∈ M represents what we know about the user U, so in principle it can contain any information about the user that we wish to model. We now discuss two important components of a user model. The first component is a model of the user's information need. Presumably, the most important factor affecting the optimality of the system's response is how well the response addresses the user's information need. Indeed, at any time, we may assume that the system has some belief about what the user is interested in, which we model through a term vector x = (x_1, ..., x_{|V|}), where V = {w_1, ..., w_{|V|}} is the set of all terms (i.e., the vocabulary) and x_i is the weight of term w_i. Such a term vector is commonly used in information retrieval to represent both queries and documents. For example, the vector-space model assumes that both the query and the documents are represented as term vectors, and the score of a document with respect to a query is computed based on the similarity between the query vector and the document vector [21]. In a language modeling approach, we may also regard the query unigram language model [12, 29] or the relevance model [14] as a term vector representation of the user's information need. Intuitively, x would assign high weights to terms that characterize the topics in which the user is interested.

The second component we may include in our user model is the set of documents that the user has already viewed. Obviously, even if a document is relevant, if the user has already seen the document, it would not be useful to present the same document again. We thus introduce another variable S ⊂ D (where D is the whole collection of documents) to denote the subset of documents in the search results that the user has already seen/viewed. In general, at time t, we may represent a user model as m_t = (S, x, A_t, R_{t-1}), where S is the set of seen documents, x is the system's understanding of the user's information need, and (A_t, R_{t-1}) represents the user's interaction history. Note that an even more general user model may also include other factors such as the user's reading level and occupation. If we assume that the uncertainty of a user model m_t is solely due to the uncertainty of x, the computation of our current estimate of the user model m_t^* will mainly involve computing our best estimate of x. That is, the system would choose a response according to

r_t^* = \arg\min_{r \in R(a_t)} L(a_t, r, S, x^*, A_t, R_{t-1})    (3)

where x^* = \arg\max_x P(x | U, D, A_t, R_{t-1}). This is the decision mechanism implemented in the UCAIR system described later. In this system, we avoid specifying the probabilistic model P(x | U, D, A_t, R_{t-1}) by computing x^* directly with an existing feedback method.

3.3 Loss functions
The exact definition of the loss function L depends on the responses, so it is inevitably application-specific. We now briefly discuss some possibilities for the case when the response is to rank all the unseen documents and present the top k of them. Let r = (d_1, ..., d_k) be the top k documents, S be the set of documents already seen by the user, and x^* be the system's best guess of the user's information need. We may simply define the loss associated with r as the negative sum of the probabilities that each of the d_i is relevant, i.e.,

L(a, r, m) = - \sum_{i=1}^{k} P(\mathrm{relevant} \mid d_i, m).

Clearly, in order to minimize this loss function, the optimal response r would contain the k documents with the highest probability of relevance, which is intuitively reasonable. One deficiency of this top-k loss function is that it is not sensitive to the internal order of the selected top k documents, so switching the ranking order of a non-relevant document and a relevant one would not affect the loss, which is unreasonable.
To model ranking, we can introduce a factor of the user model, namely the probability of each of the k documents being viewed by the user, P(view | d_i), and define the following ranking loss function:

L(a, r, m) = - \sum_{i=1}^{k} P(\mathrm{view} \mid d_i) \, P(\mathrm{relevant} \mid d_i, m).

Since, in general, if d_i is ranked above d_j (i.e., i < j), P(view | d_i) > P(view | d_j), this loss function favors a decision to rank relevant documents above non-relevant ones; otherwise, we could always switch d_i with d_j to reduce the loss value. Thus the system should simply perform a regular retrieval and rank documents according to the probability of relevance [18]. Depending on the user's retrieval preferences, there can be many other possibilities. For example, if the user does not want to see redundant documents, the loss function should include some redundancy measure on r based on the already seen documents S.

Of course, when the response is not a ranked list of documents, we need a different loss function. We discuss one such example that is relevant to the search agent we implement. When a user enters a query q_t (the current action), our search agent relies on some existing search engine to actually carry out the search. In such a case, even though the search agent does not have control of the retrieval algorithm, it can still attempt to optimize the search results by refining the query sent to the search engine and/or reranking the results obtained from the search engine. The loss functions for reranking were discussed above; we now take a look at the loss functions for query refinement. Let f be the retrieval function of the search engine that our agent uses, so that f(q) gives the search results for query q. Given that the current action of the user is entering a query q_t (i.e., a_t = q_t), our response would be f(q) for some q. Since we have no choice of f, our decision is to choose a good q. Formally,

r_t^* = \arg\min_{r_t} L(q_t, r_t, m) = \arg\min_{f(q)} L(q_t, f(q), m) = f(\arg\min_q L(q_t, f(q), m)),

which shows that our goal is to find q^* = \arg\min_q L(q_t, f(q), m), i.e., an optimal query that would give us the best f(q). A different choice of loss function L(q_t, f(q), m) leads to a different query refinement strategy. In UCAIR, we heuristically compute q^* by expanding q_t with terms extracted from r_{t-1} whenever q_{t-1} and q_t have high similarity. Note that r_{t-1} and q_{t-1} are contained in m as part of the user's interaction history.
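As an illustration only (not UCAIR's actual implementation), the following sketch contrasts the two ranking losses above; relevance_prob and view_prob are hypothetical callables standing in for P(relevant | d_i, m) and P(view | d_i).

def top_k_loss(ranked_docs, relevance_prob):
    # L(a, r, m) = -sum_i P(relevant | d_i, m): insensitive to the order within the top k.
    return -sum(relevance_prob(d) for d in ranked_docs)

def ranking_loss(ranked_docs, relevance_prob, view_prob):
    # L(a, r, m) = -sum_i P(view | d_i) * P(relevant | d_i, m): because P(view | d_i)
    # decreases with rank position i, placing relevant documents higher lowers the loss.
    return -sum(view_prob(i) * relevance_prob(d) for i, d in enumerate(ranked_docs))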
3.4 Implicit user modeling
Implicit user modeling is captured in our framework through the computation of x^* = \arg\max_x P(x | U, D, A_t, R_{t-1}), i.e., the system's current belief of what the user's information need is. Here again there may be many possibilities, leading to different algorithms for implicit user modeling; we now discuss a few of them. First, when two consecutive queries are related, the previous query can be exploited to enrich the current query and provide more search context to help disambiguation. For this purpose, instead of performing query expansion as we did in the previous section, we could also compute an updated x^* based on the previous query and retrieval results. The computed new user model can then be used to rank the documents with a standard information retrieval model. Second, we can also infer a user's interest based on the summaries of the viewed documents. When a user is presented with a list of summaries of top-ranked documents, if the user chooses to skip the first n documents and to view the (n+1)-th document, we may infer that the user is not interested in the displayed summaries of the first n documents, but is attracted by the displayed summary of the (n+1)-th document. We can thus use these summaries as negative and positive examples to learn a more accurate user model x^*. Here many standard relevance feedback techniques can be exploited [19, 20]. Note that we should use the displayed summaries, as opposed to the actual contents of those documents, since it is possible that the displayed summary of a viewed document is relevant while the document content is actually not. Similarly, a displayed summary may mislead a user into skipping a relevant document. Inferring user models based on such displayed information, rather than the actual content of a document, is an important difference between UCAIR and some other similar systems. In UCAIR, both of these strategies for inferring an implicit user model are implemented.

4. UCAIR: A PERSONALIZED SEARCH AGENT
4.1 Design
In this section, we present a client-side web search agent called UCAIR, in which we implement some of the methods discussed in the previous section for performing personalized search through implicit user modeling. UCAIR is a web browser plug-in (available at http://sifaka.cs.uiuc.edu/ir/ucair/download.html) that acts as a proxy for web search engines. Currently, it is only implemented for Internet Explorer and Google, but it is a matter of engineering to make it run on other web browsers and interact with other search engines.

The issue of privacy is a primary obstacle for deploying any real-world application involving serious user modeling, such as personalized search. For this reason, UCAIR runs strictly as a client-side search agent, as opposed to a server-side application. This way, the captured user information always resides on the computer that the user is using, so the user does not need to release any information to the outside. Client-side personalization also allows the system to easily observe a lot of user information that may not be easily available to a server. Furthermore, performing personalized search on the client side is more scalable than on the server side, since the overhead of computation and storage is distributed among clients.

Figure 1: UCAIR architecture

As shown in Figure 1, the UCAIR toolbar has three major components: (1) the (implicit) user modeling module, which captures a user's search context and history information, including the submitted queries and any clicked search results, and infers search session boundaries; (2) the query modification module, which selectively improves the query formulation according to the current user model; and (3) the result re-ranking module, which immediately re-ranks any unseen search results whenever the user model is updated. In UCAIR, we consider four basic user actions: (1) submitting a keyword query; (2) viewing a document; (3) clicking the Back button; (4) clicking the Next link on a result page.
For each of these four actions, the system responds with, respectively: (1) generating a ranked list of results by sending a possibly expanded query to a search engine; (2) updating the information need model x; (3) reranking the unseen results on the current result page based on the current model x; and (4) reranking the unseen pages and generating the next page of results based on the current model x. Behind these responses, there are three basic tasks: (1) decide whether the previous query is related to the current query and, if so, expand the current query with useful terms from the previous query or its results; (2) update the information need model x based on a newly clicked document summary; (3) rerank a set of unseen documents based on the current model x. Below we describe our algorithms for each of them.

4.2 Session boundary detection and query expansion
To effectively exploit previous queries and their corresponding clickthrough information, UCAIR needs to judge whether two adjacent queries belong to the same search session (i.e., detect session boundaries). Existing work on session boundary detection is mostly in the context of web log analysis (e.g., [8]), and uses statistical information rather than textual features. Since our client-side agent does not have access to server query logs, we make session boundary decisions based on textual similarity between two queries. Because related queries do not necessarily share the same words (e.g., java island and travel Indonesia), it is insufficient to use only the query text. Therefore we use the search results of the two queries to help decide whether they are topically related. For example, for the queries java island and travel Indonesia, words such as java, bali, island, indonesia and travel may occur frequently in both queries' search results, yielding a high similarity score. We only use the titles and summaries of the search results to calculate the similarity, since they are available in the retrieved search result page, and fetching the full text of every result page would significantly slow down the process. To compensate for the terseness of titles and summaries, we retrieve more results than a user would normally view for the purpose of detecting session boundaries (typically 50 results).

The similarity between the previous query q' and the current query q is computed as follows. Let {s'_1, s'_2, ..., s'_n} and {s_1, s_2, ..., s_n} be the result sets for the two queries. We use the pivoted normalization TF-IDF weighting formula [24] to compute a term weight vector for each result, and define the average result vector s_avg (respectively s'_avg) as the centroid of the result vectors, i.e., (s_1 + s_2 + ... + s_n)/n. The cosine similarity between the two average result vectors is then

sim(q', q) = \frac{s'_{avg} \cdot s_{avg}}{\sqrt{\|s'_{avg}\|^2 \, \|s_{avg}\|^2}}.

If the similarity value exceeds a predefined threshold, the two queries are considered to be in the same information session.

If the previous query and the current query are found to belong to the same search session, UCAIR attempts to expand the current query with terms from the previous query and its search results. Specifically, for each term in the previous query or the corresponding search results, if its frequency in the results of the current query is greater than a preset threshold (e.g., 5 results out of 50), the term is added to the current query to form an expanded query. In this case, UCAIR sends this expanded query, rather than the original one, to the search engine and returns the results corresponding to the expanded query.
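The sketch below shows the two steps just described, under simplifying assumptions of our own: raw term frequencies stand in for the pivoted-normalization TF-IDF weighting of [24], the similarity threshold of 0.1 is illustrative rather than UCAIR's actual setting, and the function names are hypothetical.

import math
from collections import Counter

def centroid(snippets):
    """Average term-frequency vector over a list of result snippets (title + summary)."""
    total = Counter()
    for text in snippets:
        total.update(text.lower().split())
    n = max(len(snippets), 1)
    return {term: freq / n for term, freq in total.items()}

def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors (dicts)."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm = math.sqrt(sum(w * w for w in u.values())) * math.sqrt(sum(w * w for w in v.values()))
    return dot / norm if norm else 0.0

def same_session(prev_snippets, cur_snippets, threshold=0.1):
    """Treat two queries as one information session if their result centroids are similar."""
    return cosine(centroid(prev_snippets), centroid(cur_snippets)) >= threshold

def expand_query(cur_query_terms, prev_query_terms, prev_snippets, cur_snippets, min_df=5):
    """Add terms from the previous query or its results that occur in at least min_df current results."""
    expanded = list(cur_query_terms)
    candidates = set(prev_query_terms) | {t for s in prev_snippets for t in s.lower().split()}
    for term in candidates:
        df = sum(1 for s in cur_snippets if term in s.lower().split())
        if df >= min_df and term not in expanded:
            expanded.append(term)
    return expanded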
Currently, UCAIR only uses the immediately preceding query for query expansion; in principle, we could exploit all related past queries.

4.3 Information need model updating
Suppose at time t we have observed that the user has viewed k documents whose summaries are s_1, ..., s_k. We update our user model by computing a new information need vector with a standard feedback method from information retrieval (i.e., Rocchio [19]). According to the vector space retrieval model, each clicked summary s_i can be represented by a term weight vector with each term weighted by a TF-IDF weighting formula [21]. Rocchio computes the centroid vector of all the summaries and interpolates it with the original query vector to obtain an updated term vector. That is,

x = \alpha q + (1 - \alpha) \frac{1}{k} \sum_{i=1}^{k} s_i,

where q is the query vector, k is the number of summaries the user clicks immediately following the current query, and \alpha is a parameter that controls the influence of the clicked summaries on the inferred information need model. In our experiments, \alpha is set to 0.5. Note that we update the information need model whenever the user views a document.

4.4 Result reranking
In general, we want to rerank all the unseen results as soon as the user model is updated. Currently, UCAIR implements reranking in two cases, corresponding to the user clicking the Back button and the Next link in Internet Explorer. In both cases, the current (updated) user model is used to rerank the unseen results so that the user sees improved search results immediately. To rerank any unseen document summaries, UCAIR uses the standard vector space retrieval model and scores each summary based on the similarity of the result and the current user information need vector x [21]. Since implicit feedback is not completely reliable, we bring up only a small number (e.g., 5) of the highest reranked results, followed by the originally high-ranked results.
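A minimal sketch of these two operations, under our own simplifying assumptions (sparse dictionaries as term vectors, raw weights instead of the exact TF-IDF formula of [21], and helper names that are not taken from UCAIR's code; each unseen result is assumed to carry a precomputed "vec" term vector):

import math
from collections import Counter

def rocchio_update(query_vec, clicked_summary_vecs, alpha=0.5):
    """x = alpha * q + (1 - alpha) * (1/k) * sum_i s_i  (Rocchio feedback [19])."""
    if not clicked_summary_vecs:
        return dict(query_vec)
    k = len(clicked_summary_vecs)
    updated = Counter({t: alpha * w for t, w in query_vec.items()})
    for s in clicked_summary_vecs:
        for t, w in s.items():
            updated[t] += (1 - alpha) * w / k
    return dict(updated)

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm = math.sqrt(sum(w * w for w in u.values())) * math.sqrt(sum(w * w for w in v.values()))
    return dot / norm if norm else 0.0

def rerank_unseen(unseen_results, need_vec, promote=5):
    """Score unseen results against the information need vector x and promote only the
    top few, leaving the remaining results in their original order."""
    scored = sorted(unseen_results, key=lambda r: -cosine(r["vec"], need_vec))
    promoted = scored[:promote]
    rest = [r for r in unseen_results if r not in promoted]
    return promoted + rest

Promoting only a handful of reranked results mirrors the conservative behavior described above, which hedges against unreliable implicit feedback.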
Table 1: Sample results of query expansion. Each row shows, for the user query java map, the result at that rank from (a) Google with no context, (b) UCAIR when the previous query was travel Indonesia (expanded query: java map Indonesia), and (c) UCAIR when the previous query was hashtable (expanded query: java map class).
1. (a) Java map projections of the world ... (www.btinternet.com/se16/js/mapproj.htm); (b) Lonely Planet - Indonesia Map (www.lonelyplanet.com/mapshells/...); (c) Map (Java 2 Platform SE v1.4.2) (java.sun.com/j2se/1.4.2/docs/...)
2. (a) Java map projections of the world ... (www.btinternet.com/se16/js/oldmapproj.htm); (b) INDONESIA TOURISM : CENTRAL JAVA - MAP (www.indonesia-tourism.com/...); (c) Java 2 Platform SE v1.3.1: Interface Map (java.sun.com/j2se/1.3/docs/api/java/...)
3. (a) Java Map (java.sun.com/developer/...); (b) INDONESIA TOURISM : WEST JAVA - MAP (www.indonesia-tourism.com/...); (c) An Introduction to Java Map Collection Classes (www.oracle.com/technology/...)
4. (a) Java Technology Concept Map (java.sun.com/developer/onlineTraining/...); (b) IndoStreets - Java Map (www.indostreets.com/maps/java/); (c) An Introduction to Java Map Collection Classes (www.theserverside.com/news/...)
5. (a) Science@NASA Home (science.nasa.gov/Realtime/...); (b) Indonesia Regions and Islands Maps, Bali, Java, ... (www.maps2anywhere.com/Maps/...); (c) Koders - Mappings.java (www.koders.com/java/)
6. (a) An Introduction to Java Map Collection Classes (www.oracle.com/technology/...); (b) Indonesia City Street Map, ... (www.maps2anywhere.com/Maps/...); (c) Hibernate simplifies inheritance mapping (www.ibm.com/developerworks/java/...)
7. (a) Lonely Planet - Java Map (www.lonelyplanet.com/mapshells/); (b) Maps Of Indonesia (www.embassyworld.com/maps/...); (c) tmap 30. map Class Hierarchy (tmap.pmel.noaa.gov/...)
8. (a) ONJava.com: Java API Map (www.onjava.com/pub/a/onjava/api map/); (b) Maps of Indonesia by Peter Loud (users.powernet.co.uk/...); (c) Class Scope (jalbum.net/api/se/datadosen/util/Scope.html)
9. (a) GTA San Andreas : Sam (www.gtasanandreas.net/sam/); (b) Maps of Indonesia by Peter Loud (users.powernet.co.uk/mkmarina/indonesia/); (c) Class PrintSafeHashMap (jalbum.net/api/se/datadosen/...)
10. (a) INDONESIA TOURISM : WEST JAVA - MAP (www.indonesia-tourism.com/...); (b) indonesiaphoto.com (www.indonesiaphoto.com/...); (c) Java Pro - Union and Vertical Mapping of Classes (www.fawcette.com/javapro/...)

5. EVALUATION OF UCAIR
We now present some results on evaluating the two major UCAIR functions: selective query expansion and result reranking based on user clickthrough data.

5.1 Sample results
The query expansion strategy implemented in UCAIR is intentionally conservative, to avoid misinterpretation of implicit user models. In practice, whenever it chooses to expand the query, the expansion usually makes sense. In Table 1, we show how UCAIR can successfully distinguish two different search contexts for the query java map, corresponding to two different previous queries (i.e., travel Indonesia vs. hashtable). Due to implicit user modeling, UCAIR intelligently figures out to add Indonesia and class, respectively, to the user's query java map, which would otherwise be ambiguous, as shown in the original results from Google on March 21, 2005. UCAIR's results are much more accurate than Google's results and reflect personalization in search.

The eager implicit feedback component is designed to respond immediately to a user's activity, such as viewing a document. In Figure 2, we show how UCAIR can successfully disambiguate the ambiguous query jaguar by exploiting a viewed document summary. In this case, the initial retrieval results for jaguar (shown on the left side) contain two results about Jaguar cars followed by two results about the Jaguar software. However, after the user views the web page content of the second result (about the Jaguar car) and returns to the search result page by clicking the Back button, UCAIR automatically promotes two new search results about Jaguar cars (shown on the right side), while the original two results about the Jaguar software are pushed down the list (not visible in the screen shots).

Figure 2: Screen shots for result reranking

5.2 Quantitative evaluation
To further evaluate UCAIR quantitatively, we conduct a user study on the effectiveness of the eager implicit feedback component. It is a challenge to quantitatively evaluate the potential performance improvement of our proposed model and UCAIR over Google in an unbiased way [7]. Here, we design a user study in which participants do normal web search and judge a randomly and anonymously mixed set of results from Google and UCAIR at the end of the search session; participants do not know whether a result comes from Google or UCAIR. We recruited 6 graduate students for this user study, who have different backgrounds (3 computer science, 2 biology, and 1 chemistry).
We use query topics from the TREC (Text REtrieval Conference, http://trec.nist.gov/) 2004 Terabyte track [2] and the TREC 2003 Web track [4] topic distillation task in the way described below. An example topic from the TREC 2004 Terabyte track appears in Figure 3. The title is a short phrase and may be used as a query to the retrieval system. The description field provides a slightly longer statement of the topic requirement, usually expressed as a single complete sentence or question. Finally, the narrative supplies additional information necessary to fully specify the requirement, expressed in the form of a short paragraph.

<top>
<num> Number: 716
<title> Spammer arrest sue
<desc> Description: Have any spammers been arrested or sued for sending unsolicited e-mail?
<narr> Narrative: Instances of arrests, prosecutions, convictions, and punishments of spammers, and lawsuits against them are relevant. Documents which describe laws to limit spam without giving details of lawsuits or criminal trials are not relevant.
</top>

Figure 3: An example of a TREC query topic, expressed in a form which might be given to a human assistant or librarian

Initially, each participant would browse 50 topics from either the Terabyte track or the Web track and pick the 5 or 7 most interesting topics. For each picked topic, the participant would essentially do normal web search using UCAIR to find many relevant web pages, using the title of the query topic as the initial keyword query. During this process, the participant may view the search results and possibly click on some interesting ones to view the web pages, just as in normal web search. There is no requirement or restriction on how many queries the participant must submit or when the participant should stop the search for one topic. When the participant plans to change the search topic, he/she simply presses a button to evaluate the search results before actually switching to the next topic.

At the time of evaluation, the 30 top-ranked results from Google and UCAIR (some overlapping) are randomly mixed together so that the participant does not know whether a result comes from Google or UCAIR. The participant then judges the relevance of these results. We measure precision at the top n (n = 5, 10, 20, 30) documents for Google and UCAIR. We also evaluate precision at different recall levels. Altogether, 368 documents were judged as relevant from the Google search results and 429 documents were judged as relevant from UCAIR by the participants. Scatter plots of precision at top 10 and top 20 documents are shown in Figure 4 and Figure 5, respectively (the scatter plot of precision at top 30 documents is very similar to that at top 20). Each point of the scatter plots represents the precisions of Google and UCAIR on one query topic. Table 2 shows the average precision at top n documents over the 32 topics. From Figure 4, Figure 5 and Table 2, we see that the search results from UCAIR are consistently better than those from Google by all measures. Moreover, the performance improvement is more dramatic for precision at top 20 documents than for precision at top 10 documents. One explanation for this is that the more interaction the user has with the system, the more clickthrough data UCAIR can be expected to collect. Thus the retrieval system can build more precise implicit user models, which lead to better retrieval accuracy.
Ranking Method   prec@5   prec@10   prec@20   prec@30
Google           0.538    0.472     0.377     0.308
UCAIR            0.581    0.556     0.453     0.375
Improvement      8.0%     17.8%     20.2%     21.8%

Table 2: Average precision at top n documents for 32 query topics

Figure 4: Precision at top 10 documents of UCAIR and Google

Figure 5: Precision at top 20 documents of UCAIR and Google

The plot in Figure 6 shows the precision-recall curves for UCAIR and Google, where it is clearly seen that the performance of UCAIR is consistently and considerably better than that of Google at all levels of recall.

Figure 6: Precision-recall curves of UCAIR and Google

6. CONCLUSIONS
In this paper, we studied how to exploit implicit user modeling to intelligently personalize information retrieval and improve search accuracy. Unlike most previous work, we emphasize the use of immediate search context and implicit feedback information as well as eager updating of search results to maximally benefit a user. We presented a decision-theoretic framework for optimizing interactive information retrieval based on eager user model updating, in which the system responds to every action of the user by choosing a system action to optimize a utility function. We further proposed specific techniques to capture and exploit two types of implicit feedback information: (1) identifying a related immediately preceding query and using that query and its search results to select appropriate terms to expand the current query, and (2) exploiting the viewed document summaries to immediately rerank any documents that have not yet been seen by the user. Using these techniques, we developed a client-side web search agent (UCAIR) on top of a popular search engine (Google). Experiments on web search show that our search agent can improve search accuracy over Google. Since the implicit information we exploit already naturally exists through user interactions, the user does not need to make any extra effort. The developed search agent thus can improve existing web search performance without any additional effort from the user.

7. ACKNOWLEDGEMENT
We thank the six participants of our evaluation experiments. This work was supported in part by the National Science Foundation grants IIS-0347933 and IIS-0428472.

8. REFERENCES
[1] S. M. Beitzel, E. C. Jensen, A. Chowdhury, D. Grossman, and O. Frieder. Hourly analysis of a very large topically categorized web query log. In Proceedings of SIGIR 2004, pages 321-328, 2004.
[2] C. Clarke, N. Craswell, and I. Soboroff. Overview of the TREC 2004 Terabyte track. In Proceedings of TREC 2004, 2004.
[3] M. Claypool, P. Le, M. Waseda, and D. Brown. Implicit interest indicators. In Proceedings of Intelligent User Interfaces 2001, pages 33-40, 2001.
[4] N. Craswell, D. Hawking, R. Wilkinson, and M. Wu. Overview of the TREC 2003 Web track. In Proceedings of TREC 2003, 2003.
[5] W. B. Croft, S. Cronen-Townsend, and V. Lavrenko. Relevance feedback and personalization: A language modeling perspective. In Proceedings of the Second DELOS Workshop: Personalisation and Recommender Systems in Digital Libraries, 2001.
[6] Google Personalized. http://labs.google.com/personalized.
[7] D. Hawking, N. Craswell, P. B. Thistlewaite, and D. Harman. Results and challenges in web search evaluation. Computer Networks, 31(11-16):1321-1330, 1999.
[8] X. Huang, F. Peng, A. An, and D. Schuurmans. Dynamic web log session identification with statistical language models. Journal of the American Society for Information Science and Technology, 55(14):1290-1303, 2004.
[9] G. Jeh and J. Widom. Scaling personalized web search. In Proceedings of WWW 2003, pages 271-279, 2003.
[10] T. Joachims. Optimizing search engines using clickthrough data. In Proceedings of SIGKDD 2002, pages 133-142, 2002.
[11] D. Kelly and J. Teevan. Implicit feedback for inferring user preference: A bibliography. SIGIR Forum, 37(2):18-28, 2003.
[12] J. Lafferty and C. Zhai. Document language models, query models, and risk minimization for information retrieval. In Proceedings of SIGIR 2001, pages 111-119, 2001.
[13] T. Lau and E. Horvitz. Patterns of search: Analyzing and modeling web query refinement. In Proceedings of the Seventh International Conference on User Modeling (UM), pages 145-152, 1999.
[14] V. Lavrenko and W. B. Croft. Relevance-based language models. In Proceedings of SIGIR 2001, pages 120-127, 2001.
[15] M. Mitra, A. Singhal, and C. Buckley. Improving automatic query expansion. In Proceedings of SIGIR 1998, pages 206-214, 1998.
[16] My Yahoo! http://mysearch.yahoo.com.
[17] G. Nunberg. As Google goes, so goes the nation. New York Times, May 2003.
[18] S. E. Robertson. The probability ranking principle in IR. Journal of Documentation, 33(4):294-304, 1977.
[19] J. J. Rocchio. Relevance feedback in information retrieval. In The SMART Retrieval System: Experiments in Automatic Document Processing, pages 313-323. Prentice-Hall Inc., 1971.
[20] G. Salton and C. Buckley. Improving retrieval performance by relevance feedback. Journal of the American Society for Information Science, 41(4):288-297, 1990.
[21] G. Salton and M. J. McGill. Introduction to Modern Information Retrieval. McGraw-Hill, 1983.
[22] X. Shen, B. Tan, and C. Zhai. Context-sensitive information retrieval using implicit feedback. In Proceedings of SIGIR 2005, pages 43-50, 2005.
[23] X. Shen and C. Zhai. Exploiting query history for document ranking in interactive information retrieval (poster). In Proceedings of SIGIR 2003, pages 377-378, 2003.
[24] A. Singhal. Modern information retrieval: A brief overview. Bulletin of the IEEE Computer Society Technical Committee on Data Engineering, 24(4):35-43, 2001.
[25] K. Sugiyama, K. Hatano, and M. Yoshikawa. Adaptive web search based on user profile constructed without any effort from users. In Proceedings of WWW 2004, pages 675-684, 2004.
[26] E. Volokh. Personalization and privacy. Communications of the ACM, 43(8):84-88, 2000.
[27] R. W. White, J. M. Jose, C. J. van Rijsbergen, and I. Ruthven. A simulated study of implicit feedback models. In Proceedings of ECIR 2004, pages 311-326, 2004.
[28] J. Xu and W. B. Croft. Query expansion using local and global document analysis. In Proceedings of SIGIR 1996, pages 4-11, 1996.
[29] C. Zhai and J. Lafferty. Model-based feedback in the KL-divergence retrieval model. In Proceedings of CIKM 2001, pages 403-410, 2001.
Implicit User Modeling for Personalized Search ABSTRACT Information retrieval systems (e.g., web search engines) are critical for overcoming information overload. A major deficiency of existing retrieval systems is that they generally lack user modeling and are not adaptive to individual users, resulting in inherently non-optimal retrieval performance. For example, a tourist and a programmer may use the same word "java" to search for different information, but the current search systems would return the same results. In this paper, we study how to infer a user's interest from the user's search context and use the inferred implicit user model for personalized search. We present a decision theoretic framework and develop techniques for implicit user modeling in information retrieval. We develop an intelligent client-side web search agent (UCAIR) that can perform eager implicit feedback, e.g., query expansion based on previous queries and immediate result reranking based on clickthrough information. Experiments on web search show that our search agent can improve search accuracy over the popular Google search engine. 1. INTRODUCTION Although many information retrieval systems (e.g., web search engines and digital library systems) have been successfully deployed, the current retrieval systems are far from optimal. A major deficiency of existing retrieval systems is that they generally lack user modeling and are not adaptive to individual users [17]. This inherent non-optimality is seen clearly in the following two cases: (1) Different users may use exactly the same query (e.g., "Java") to search for different information (e.g., the Java island in Indonesia or the Java programming language), but existing IR systems return the same results for these users. Without considering the actual user, it is impossible to know which sense "Java" refers to in a query. (2) A user's information needs may change over time. The same user may use "Java" sometimes to mean the Java island in Indonesia and some other times to mean the programming language. Without recognizing the search context, it would be again impossible to recognize the correct sense. In order to optimize retrieval accuracy, we clearly need to model the user appropriately and personalize search according to each individual user. The major goal of user modeling for information retrieval is to accurately model a user's information need, which is, unfortunately, a very difficult task. Indeed, it is even hard for a user to precisely describe what his/her information need is. What information is available for a system to infer a user's information need? Obviously, the user's query provides the most direct evidence. Indeed, most existing retrieval systems rely solely on the query to model a user's information need. However, since a query is often extremely short, the user model constructed based on a keyword query is inevitably impoverished. An effective way to improve user modeling in information retrieval is to ask the user to explicitly specify which documents are relevant (i.e., useful for satisfying his/her information need), and then to improve user modeling based on such examples of relevant documents. This is called relevancefeedback, which has been proved to be quite effective for improving retrieval accuracy [19, 20]. Unfortunately, in real world applications, users are usually reluctant to make the extra effort to provide relevant examples for feedback [11]. 
It is thus very interesting to study how to infer a user's information need based on any implicit feedback information, which naturally exists through user interactions and thus does not require any extra user effort. Indeed, several previous studies have shown that implicit user modeling can improve retrieval accuracy. In [3], a web browser (Curious Browser) is developed to record a user's explicit relevance ratings of web pages (relevance feedback) and browsing behavior when viewing a page, such as dwelling time, mouse click, mouse movement and scrolling (implicit feedback). It is shown that the dwelling time on a page, amount of scrolling on a page and the combination of time and scrolling have a strong correlation with explicit relevance ratings, which suggests that implicit feedback may be helpful for inferring user information need. In [10], user clickthrough data is collected as training data to learn a retrieval function, which is used to produce a customized ranking of search results that suits a group of users' preferences. In [25], the clickthrough data collected over a long time period is exploited through query expansion to improve retrieval accuracy. While a user may have general long term interests and preferences for information, often he/she is searching for documents to satisfy an "ad hoc" information need, which only lasts for a short period of time; once the information need is satisfied, the user would generally no longer be interested in such information. For example, a user may be looking for information about used cars in order to buy one, but once the user has bought a car, he/she is generally no longer interested in such information. In such cases, implicit feedback information collected over a long period of time is unlikely to be very useful, but the immediate search context and feedback information, such as which of the search results for the current information need are viewed, can be expected to be much more useful. Consider the query "Java" again. Any of the following immediate feedback information about the user could potentially help determine the intended meaning of "Java" in the query: (1) The previous query submitted by the user is "hashtable" (as opposed to, e.g., "travel Indonesia"). (2) In the search results, the user viewed a page where words such as "programming", "software", and "applet" occur many times. To the best of our knowledge, how to exploit such immediate and short-term search context to improve search has so far not been well addressed in the previous work. In this paper, we study how to construct and update a user model based on the immediate search context and implicit feedback information and use the model to improve the accuracy of ad hoc retrieval. In order to maximally benefit the user of a retrieval system through implicit user modeling, we propose to perform "eager implicit feedback". That is, as soon as we observe any new piece of evidence from the user, we would update the system's belief about the user's information need and respond with improved retrieval results based on the updated user model. We present a decision-theoretic framework for optimizing interactive information retrieval based on eager user model updating, in which the system responds to every action of the user by choosing a system action to optimize a utility function. In a traditional retrieval paradigm, the retrieval problem is to match a query with documents and rank documents according to their relevance values. 
As a result, the retrieval process is a simple independent cycle of "query" and "result display". In the proposed new retrieval paradigm, the user's search context plays an important role and the inferred implicit user model is exploited immediately to benefit the user. The new retrieval paradigm is thus fundamentally different from the traditional paradigm, and is inherently more general. We further propose specific techniques to capture and exploit two types of implicit feedback information: (1) identifying related immediately preceding query and using the query and the corresponding search results to select appropriate terms to expand the current query, and (2) exploiting the viewed document summaries to immediately rerank any documents that have not yet been seen by the user. Using these techniques, we develop a client-side web search agent UCAIR (User-Centered Adaptive Information Retrieval) on top of a popular search engine (Google). Experiments on web search show that our search agent can improve search accuracy over Google. Since the implicit information we exploit already naturally exists through user interactions, the user does not need to make any extra effort. Thus the developed search agent can improve existing web search performance without additional effort from the user. The remaining sections are organized as follows. In Section 2, we discuss the related work. In Section 3, we present a decisiontheoretic interactive retrieval framework for implicit user modeling. In Section 4, we present the design and implementation of an intelligent client-side web search agent (UCAIR) that performs eager implicit feedback. In Section 5, we report our experiment results using the search agent. Section 6 concludes our work. 2. RELATED WORK Implicit user modeling for personalized search has been studied in previous work, but our work differs from all previous work in several aspects: (1) We emphasize the exploitation of immediate search context such as the related immediately preceding query and the viewed documents in the same session, while most previous work relies on long-term collection of implicit feedback information [25]. (2) We perform eager feedback and bring the benefit of implicit user modeling as soon as any new implicit feedback information is available, while the previous work mostly exploits longterm implicit feedback [10]. (3) We propose a retrieval framework to integrate implicit user modeling with the interactive retrieval process, while the previous work either studies implicit user modeling separately from retrieval [3] or only studies specific retrieval models for exploiting implicit feedback to better match a query with documents [23, 27, 22]. (4) We develop and evaluate a personalized Web search agent with online user studies, while most existing work evaluates algorithms offline without real user interactions. Currently some search engines provide rudimentary personalization, such as Google Personalized web search [6], which allows users to explicitly describe their interests by selecting from predefined topics, so that those results that match their interests are brought to the top, and My Yahoo! search [16], which gives users the option to save web sites they like and block those they dislike. In contrast, UCAIR personalizes web search through implicit user modeling without any additional user efforts. Furthermore, the personalization of UCAIR is provided on the client side. There are two remarkable advantages on this. 
First, the user does not need to worry about the privacy infringement, which is a big concern for personalized search [26]. Second, both the computation of personalization and the storage of the user profile are done at the client side so that the server load is reduced dramatically [9]. There have been many works studying user query logs [1] or query dynamics [13]. UCAIR makes direct use of a user's query history to benefit the same user immediately in the same search session. UCAIR first judges whether two neighboring queries belong to the same information session and if so, it selects terms from the previous query to perform query expansion. Our query expansion approach is similar to automatic query expansion [28, 15, 5], but instead of using pseudo feedback to expand the query, we use user's implicit feedback information to expand the current query. These two techniques may be combined. 3. OPTIMIZATION IN INTERACTIVE IR In interactive IR, a user interacts with the retrieval system through an "action dialogue", in which the system responds to each user action with some system action. For example, the user's action may be submitting a query and the system's response may be returning a list of 10 document summaries. In general, the space of user actions and system responses and their granularities would depend on the interface of a particular retrieval system. In principle, every action of the user can potentially provide new evidence to help the system better infer the user's information need. Thus in order to respond optimally, the system should use all the evidence collected so far about the user when choosing a response. When viewed in this way, most existing search engines are clearly non-optimal. For example, if a user has viewed some documents on the first page of search results, when the user clicks on the "Next" link to fetch more results, an existing retrieval system would still return the next page of results retrieved based on the original query without considering the new evidence that a particular result has been viewed by the user. We propose to optimize retrieval performance by adapting system responses based on every action that a user has taken, and cast the optimization problem as a decision task. Specifically, at any time, the system would attempt to do two tasks: (1) User model updating: Monitor any useful evidence from the user regarding his/her information need and update the user model as soon as such evidence is available; (2) Improving search results: Rerank immediately all the documents that the user has not yet seen, as soon as the user model is updated. We emphasize eager updating and reranking, which makes our work quite different from any existing work. Below we present a formal decision theoretic framework for optimizing retrieval performance through implicit user modeling in interactive information retrieval. 3.1 A decision-theoretic framework Let A be the set of all user actions and R (a) be the set of all possible system responses to a user action a 2 A. At any time, let At = (a1,..., at) be the observed sequence of user actions so far (up to time point t) and Rt − 1 = (r1,..., rt − 1) be the responses that the system has made responding to the user actions. The system's goal is to choose an optimal response rt 2 R (at) for the current user action at. Let M be the space of all possible user models. We further define a loss function L (a, r, m) 2 OR, where a 2 A is a user action, r 2 R (a) is a system response, and m 2 M is a user model. 
L (a, r, m) encodes our decision preferences and assesses the optimality of responding with r when the current user model is m and the current user action is a. According to Bayesian decision theory, the optimal decision at time t is to choose a response that minimizes the Bayes risk, i.e., where P (mtjU, D, At, Rt − 1) is the posterior probability of the user model mt given all the observations about the user U we have made up to time t. To simplify the computation of Equation 1, let us assume that the posterior probability mass P (mtjU, D, At, Rt − 1) is mostly concentrated on the mode m ∗ t = argmaxmt P (mtjU, D, At, Rt − 1). We can then approximate the integral with the value of the loss function at m ∗ t. That is, where m ∗ t = argmaxmt P (mtjU, D, At, Rt − 1). Leaving aside how to define and estimate these probabilistic models and the loss function, we can see that such a decision-theoretic formulation suggests that, in order to choose the optimal response to at, the system should perform two tasks: (1) compute the current user model and obtain m ∗ t based on all the useful information. (2) choose a response rt to minimize the loss function value L (at, rt, m ∗ t). When at does not affect our belief about m ∗ t, the first step can be omitted and we may reuse m ∗ t − 1 for m ∗ t. Note that our framework is quite general since we can potentially model any kind of user actions and system responses. In most cases, as we may expect, the system's response is some ranking of documents, i.e., for most actions a, R (a) consists of all the possible rankings of the unseen documents, and the decision problem boils down to choosing the best ranking of unseen documents based on the most current user model. When a is the action of submitting a keyword query, such a response is exactly what a current retrieval system would do. However, we can easily imagine that a more intelligent web search engine would respond to a user's clicking of the "Next" link (to fetch more unseen results) with a more optimized ranking of documents based on any viewed documents in the current page of results. In fact, according to our eager updating strategy, we may even allow a system to respond to a user's clicking of browser's "Back" button after viewing a document in the same way, so that the user can maximally benefit from implicit feedback. These are precisely what our UCAIR system does. 3.2 User models A user model m 2 M represents what we know about the user U, so in principle, it can contain any information about the user that we wish to model. We now discuss two important components in a user model. The first component is a component model of the user's information need. Presumably, the most important factor affecting the optimality of the system's response is how well the response addresses the user's information need. Indeed, at any time, we may assume that the system has some "belief" about what the user is interested in, which we model through a term vector x ~ = (x1,..., x | V |), where V = {w1,..., w | V |} is the set of all terms (i.e., vocabulary) and xi is the weight of term wi. Such a term vector is commonly used in information retrieval to represent both queries and documents. For example, the vector-space model, assumes that both the query and the documents are represented as term vectors and the score of a document with respect to a query is computed based on the similarity between the query vector and the document vector [21]. 
In a language modeling approach, we may also regard the query unigram language model [12, 29] or the relevance model [14] as a term vector representation of the user's information need. Intuitively, x ~ would assign high weights to terms that characterize the topics which the user is interested in. The second component we may include in our user model is the documents that the user has already viewed. Obviously, even if a document is relevant, if the user has already seen the document, it would not be useful to present the same document again. We thus introduce another variable S 1/2 D (D is the whole set of documents in the collection) to denote the subset of documents in the search results that the user has already seen/viewed. In general, at time t, we may represent a user model as mt = (S, ~ x, At, Rt − 1), where S is the seen documents, x ~ is the system's "understanding" of the user's information need, and (At, Rt − 1) represents the user's interaction history. Note that an even more general user model may also include other factors such as the user's reading level and occupation. If we assume that the uncertainty of a user model mt is solely due to the uncertainty of ~ x, the computation of our current estimate of user model m ∗ t will mainly involve computing our best estimate of ~ x. That is, the system would choose a response according to where ~ x ∗ = argmax ~ x P (~ xjU, D, At, Rt − 1). This is the decision mechanism implemented in the UCAIR system to be described later. In this system, we avoided specifying the probabilistic model P (~ xjU, D, At, Rt − 1) by computing ~ x ∗ directly with some existing feedback method. 3.3 Loss functions The exact definition of loss function L depends on the responses, thus it is inevitably application-specific. We now briefly discuss some possibilities when the response is to rank all the unseen documents and present the top k of them. Let r = (d1,..., dk) be the top k documents, S be the set of seen documents by the user, and ~ x ∗ be the system's best guess of the user's information need. We may simply define the loss associated with r as the negative sum of the probability that each of the di is relevant, i.e., L (a, r, m) = − Eki = 1 P (relevant | di, m). Clearly, in order to minimize this loss function, the optimal response r would contain the k documents with the highest probability of relevance, which is intuitively reasonable. One deficiency of this "top-k loss function" is that it is not sensitive to the internal order of the selected top k documents, so switching the ranking order of a non-relevant document and a relevant one would not affect the loss, which is unreasonable. To model ranking, we can introduce a factor of the user model--the probability of each of the k documents being viewed by the user, P (view | di), and define the following "ranking loss function": Since in general, if di is ranked above dj (i.e., i <j), P (view | di)> P (view | dj), this loss function would favor a decision to rank relevant documents above non-relevant ones, as otherwise, we could always switch di with dj to reduce the loss value. Thus the system should simply perform a regular retrieval and rank documents according to the probability of relevance [18]. Depending on the user's retrieval preferences, there can be many other possibilities. For example, if the user does not want to see redundant documents, the loss function should include some redundancy measure on r based on the already seen documents S. 
Of course, when the response is not to choose a ranked list of documents, we would need a different loss function. We discuss one such example that is relevant to the search agent that we implement. When a user enters a query q_t (the current action), our search agent relies on some existing search engine to actually carry out the search. In such a case, even though the search agent does not have control of the retrieval algorithm, it can still attempt to optimize the search results through refining the query sent to the search engine and/or reranking the results obtained from the search engine. The loss functions for reranking are already discussed above; we now take a look at the loss functions for query refinement. Let f be the retrieval function of the search engine that our agent uses, so that f(q) would give us the search results using query q. Given that the current action of the user is entering a query q_t (i.e., a_t = q_t), our response would be f(q) for some q. Since we have no choice of f, our decision is to choose a good q. Formally,

r_t* = argmin_{r ∈ {f(q)}} L(q_t, r, m) = f(argmin_q L(q_t, f(q), m)),

which shows that our goal is to find q* = argmin_q L(q_t, f(q), m), i.e., an optimal query that would give us the best f(q). A different choice of loss function L(q_t, f(q), m) would lead to a different query refinement strategy. In UCAIR, we heuristically compute q* by expanding q_t with terms extracted from r_{t-1} whenever q_{t-1} and q_t have high similarity. Note that r_{t-1} and q_{t-1} are contained in m as part of the user's interaction history.
3.4 Implicit user modeling
Implicit user modeling is captured in our framework through the computation of ~x* = argmax_{~x} P(~x | U, D, A_t, R_{t-1}), i.e., the system's current belief of what the user's information need is. Here again there may be many possibilities, leading to different algorithms for implicit user modeling. We now discuss a few of them. First, when two consecutive queries are related, the previous query can be exploited to enrich the current query and provide more search context to help disambiguation. For this purpose, instead of performing query expansion as we did in the previous section, we could also compute an updated ~x* based on the previous query and retrieval results. The computed new user model can then be used to rank the documents with a standard information retrieval model. Second, we can also infer a user's interest based on the summaries of the viewed documents. When a user is presented with a list of summaries of top-ranked documents, if the user chooses to skip the first n documents and to view the (n+1)-th document, we may infer that the user is not interested in the displayed summaries of the first n documents, but is attracted by the displayed summary of the (n+1)-th document. We can thus use these summaries as negative and positive examples to learn a more accurate user model ~x*. Here many standard relevance feedback techniques can be exploited [19, 20]. Note that we should use the displayed summaries, as opposed to the actual contents of those documents, since it is possible that the displayed summary of a viewed document is relevant while the document content is actually not. Similarly, a displayed summary may mislead a user to skip a relevant document. Inferring user models based on such displayed information, rather than the actual content of a document, is an important difference between UCAIR and some other similar systems. In UCAIR, both of these strategies for inferring an implicit user model are implemented.
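One possible instantiation of the second strategy (a sketch only; UCAIR's actual update, which uses clicked summaries and a single interpolation parameter, is given in Section 4.3) is a Rocchio-style adjustment that adds weight for terms in the clicked summary and subtracts weight for terms in the skipped summaries. The weights beta and gamma below are illustrative assumptions.

# Sketch: infer an updated ~x* from displayed summaries, treating the clicked
# summary as a positive example and the skipped summaries as negative examples.
from collections import defaultdict
from typing import Dict, List

TermVector = Dict[str, float]

def update_from_summaries(x: TermVector,
                          clicked: List[TermVector],
                          skipped: List[TermVector],
                          beta: float = 0.5,
                          gamma: float = 0.2) -> TermVector:
    new_x = defaultdict(float, x)
    for s in clicked:
        for term, w in s.items():
            new_x[term] += beta * w / max(len(clicked), 1)
    for s in skipped:
        for term, w in s.items():
            new_x[term] -= gamma * w / max(len(skipped), 1)
    # Keep only positive weights, as is conventional in Rocchio-style feedback.
    return {t: w for t, w in new_x.items() if w > 0}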
4. UCAIR: A PERSONALIZED SEARCH AGENT
4.1 Design
In this section, we present a client-side web search agent called UCAIR, in which we implement some of the methods discussed in the previous section for performing personalized search through implicit user modeling. UCAIR is a web browser plug-in that acts as a proxy for web search engines. Currently, it is only implemented for Internet Explorer and Google, but it is a matter of engineering to make it run on other web browsers and interact with other search engines. The issue of privacy is a primary obstacle for deploying any real-world application that involves serious user modeling, such as personalized search. For this reason, UCAIR runs strictly as a client-side search agent, as opposed to a server-side application. This way, the captured user information always resides on the computer that the user is using, so the user does not need to release any information to the outside. Client-side personalization also allows the system to easily observe a lot of user information that may not be easily available to a server. Furthermore, performing personalized search on the client side is more scalable than on the server side, since the overhead of computation and storage is distributed among clients. As shown in Figure 1, the UCAIR toolbar has three major components: (1) the (implicit) user modeling module, which captures a user's search context and history information, including the submitted queries and any clicked search results, and infers search session boundaries; (2) the query modification module, which selectively improves the query formulation according to the current user model; and (3) the result re-ranking module, which immediately re-ranks any unseen search results whenever the user model is updated.
Figure 1: UCAIR architecture
In UCAIR, we consider four basic user actions: (1) submitting a keyword query; (2) viewing a document; (3) clicking the "Back" button; (4) clicking the "Next" link on a result page. For each of these four actions, the system responds with, respectively, (1) generating a ranked list of results by sending a possibly expanded query to a search engine; (2) updating the information need model ~x; (3) reranking the unseen results on the current result page based on the current model ~x; and (4) reranking the unseen pages and generating the next page of results based on the current model ~x. Behind these responses, there are three basic tasks: (1) decide whether the previous query is related to the current query and, if so, expand the current query with useful terms from the previous query or the results of the previous query; (2) update the information need model ~x based on a newly clicked document summary; (3) rerank a set of unseen documents based on the current model ~x. Below we describe our algorithms for each of them.
4.2 Session boundary detection and query expansion
To effectively exploit previous queries and their corresponding clickthrough information, UCAIR needs to judge whether two adjacent queries belong to the same search session (i.e., detect session boundaries). Existing work on session boundary detection is mostly in the context of web log analysis (e.g., [8]), and uses statistical information rather than textual features. Since our client-side agent does not have access to server query logs, we make session boundary decisions based on textual similarity between two queries.
Because related queries do not necessarily share the same words (e.g., "java island" and "travel Indonesia"), it is insufficient to use only the query text. Therefore we use the search results of the two queries to help decide whether they are topically related. For example, for the above queries "java island" and "travel Indonesia", the words "java", "bali", "island", "indonesia" and "travel" may occur frequently in both queries' search results, yielding a high similarity score. We only use the titles and summaries of the search results to calculate the similarity, since they are available in the retrieved search result page and fetching the full text of every result page would significantly slow down the process. To compensate for the terseness of titles and summaries, we retrieve more results than a user would normally view for the purpose of detecting session boundaries (typically 50 results). The similarity between the previous query q' and the current query q is computed as follows. Let {s'_1, s'_2, ..., s'_n} and {s_1, s_2, ..., s_n} be the result sets for the two queries. We use the pivoted normalization TF-IDF weighting formula [24] to compute a term weight vector ~s_i for each result s_i. We define the average result ~s_avg to be the centroid of all the result vectors, i.e., (~s_1 + ~s_2 + ... + ~s_n) / n, and define ~s'_avg analogously for the previous query. The cosine similarity between the two average results is calculated as

sim(q', q) = (~s'_avg · ~s_avg) / (sqrt(~s'_avg · ~s'_avg) · sqrt(~s_avg · ~s_avg)).

If the similarity value exceeds a predefined threshold, the two queries will be considered to be in the same information session. If the previous query and the current query are found to belong to the same search session, UCAIR attempts to expand the current query with terms from the previous query and its search results. Specifically, for each term in the previous query or the corresponding search results, if its frequency in the results of the current query is greater than a preset threshold (e.g., 5 results out of 50), the term is added to the current query to form an expanded query. In this case, UCAIR sends this expanded query rather than the original one to the search engine and returns the results corresponding to the expanded query. Currently, UCAIR only uses the immediately preceding query for query expansion; in principle, we could exploit all related past queries.
4.3 Information need model updating
Suppose at time t, we have observed that the user has viewed k documents whose summaries are s_1, ..., s_k. We update our user model by computing a new information need vector with a standard feedback method in information retrieval (i.e., Rocchio [19]). According to the vector space retrieval model, each clicked summary s_i can be represented by a term weight vector ~s_i with each term weighted by a TF-IDF weighting formula [21]. Rocchio computes the centroid vector of all the summaries and interpolates it with the original query vector to obtain an updated term vector. That is,

~x = (1 - α) · ~q + α · (1/k) · (~s_1 + ... + ~s_k),

where ~q is the query vector, k is the number of summaries the user clicks immediately following the current query, and α is a parameter that controls the influence of the clicked summaries on the inferred information need model. In our experiments, α is set to 0.5. Note that we update the information need model whenever the user views a document.
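The session-boundary test and the information-need update described in Sections 4.2 and 4.3 can be sketched as follows. This is an illustrative reconstruction rather than UCAIR's code: the TF-IDF weighting is simplified (raw term frequency with smoothed IDF instead of pivoted normalization), and the 0.3 session threshold is an assumed value, since the text only calls the threshold "predefined".

# Sketch of session-boundary detection (cosine of result centroids) and the
# Rocchio-style information-need update; simplified weighting, assumed threshold.
import math
import re
from collections import Counter
from typing import Dict, List

TermVector = Dict[str, float]

def tfidf_vector(text: str, doc_freq: Dict[str, int], n_docs: int) -> TermVector:
    counts = Counter(re.findall(r"[a-z0-9]+", text.lower()))
    return {t: tf * math.log((n_docs + 1) / (doc_freq.get(t, 0) + 1))
            for t, tf in counts.items()}

def centroid(vectors: List[TermVector]) -> TermVector:
    if not vectors:
        return {}
    total: Dict[str, float] = {}
    for v in vectors:
        for t, w in v.items():
            total[t] = total.get(t, 0.0) + w
    return {t: w / len(vectors) for t, w in total.items()}

def cosine(x: TermVector, y: TermVector) -> float:
    dot = sum(w * y.get(t, 0.0) for t, w in x.items())
    nx = math.sqrt(sum(w * w for w in x.values()))
    ny = math.sqrt(sum(w * w for w in y.values()))
    return dot / (nx * ny) if nx and ny else 0.0

def same_session(prev_results: List[TermVector],
                 curr_results: List[TermVector],
                 threshold: float = 0.3) -> bool:
    # Compare the centroids of the (title + summary) vectors of the two result sets.
    return cosine(centroid(prev_results), centroid(curr_results)) >= threshold

def rocchio_update(query_vec: TermVector,
                   clicked_summaries: List[TermVector],
                   alpha: float = 0.5) -> TermVector:
    # Interpolate the query vector with the centroid of the clicked summaries;
    # alpha controls the influence of the clicked summaries (0.5 in the text).
    c = centroid(clicked_summaries)
    return {t: (1 - alpha) * query_vec.get(t, 0.0) + alpha * c.get(t, 0.0)
            for t in set(query_vec) | set(c)}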
4.4 Result reranking
In general, we want to rerank all the unseen results as soon as the user model is updated. Currently, UCAIR implements reranking in two cases, corresponding to the user clicking the "Back" button and the "Next" link in Internet Explorer. In both cases, the current (updated) user model is used to rerank the unseen results so that the user sees improved search results immediately. To rerank any unseen document summaries, UCAIR uses the standard vector space retrieval model and scores each summary based on the similarity of the result and the current user information need vector ~x [21]. Since implicit feedback is not completely reliable, we bring up only a small number (e.g., 5) of the highest reranked results, to be followed by any originally high-ranked results.
Table 1: Sample results of query expansion
5. EVALUATION OF UCAIR
We now present some results on evaluating the two major UCAIR functions: selective query expansion and result reranking based on user clickthrough data.
5.1 Sample results
The query expansion strategy implemented in UCAIR is intentionally conservative to avoid misinterpretation of implicit user models. In practice, whenever it chooses to expand the query, the expansion usually makes sense. In Table 1, we show how UCAIR can successfully distinguish two different search contexts for the query "java map", corresponding to two different previous queries (i.e., "travel Indonesia" vs. "hashtable"). Due to implicit user modeling, UCAIR intelligently figures out that it should add "Indonesia" and "class", respectively, to the user's query "java map", which would otherwise be ambiguous, as shown in the original results from Google on March 21, 2005. UCAIR's results are much more accurate than Google's results and reflect personalization in search. The eager implicit feedback component is designed to immediately respond to a user's activity such as viewing a document. In Figure 2, we show how UCAIR can successfully disambiguate the ambiguous query "jaguar" by exploiting a viewed document summary. In this case, the initial retrieval results using "jaguar" (shown on the left side) contain two results about Jaguar cars followed by two results about the Jaguar software. However, after the user views the web page content of the second result (about the Jaguar car) and returns to the search result page by clicking the "Back" button, UCAIR automatically nominates two new search results about Jaguar cars (shown on the right side), while the original two results about Jaguar software are pushed down the list (not visible in the picture).
Figure 2: Screen shots for result reranking
5.2 Quantitative evaluation
To further evaluate UCAIR quantitatively, we conduct a user study on the effectiveness of the eager implicit feedback component. It is a challenge to quantitatively evaluate the potential performance improvement of our proposed model and UCAIR over Google in an unbiased way [7]. Here, we design a user study in which participants do normal web search and judge a randomly and anonymously mixed set of results from Google and UCAIR at the end of the search session; participants do not know whether a result comes from Google or UCAIR. We recruited 6 graduate students for this user study, who have different backgrounds (3 computer science, 2 biology, and 1 chemistry).
We use query topics from the TREC 2004 Terabyte track [2] and the TREC 2003 Web track [4] topic distillation task in the way described below. An example topic from the TREC 2004 Terabyte track appears in Figure 3. The title is a short phrase and may be used as a query to the retrieval system. The description field provides a slightly longer statement of the topic requirement, usually expressed as a single complete sentence or question. Finally, the narrative supplies additional information necessary to fully specify the requirement, expressed in the form of a short paragraph.
Figure 3: An example of a TREC query topic, expressed in a form which might be given to a human assistant or librarian (the narrative field of the example reads: "Instances of arrests, prosecutions, convictions, and punishments of spammers, and lawsuits against them are relevant. Documents which describe laws to limit spam without giving details of lawsuits or criminal trials are not relevant.")
Initially, each participant browses 50 topics, either from the Terabyte track or the Web track, and picks the 5 or 7 topics he/she finds most interesting. For each picked topic, the participant essentially does normal web search using UCAIR to find many relevant web pages, using the title of the query topic as the initial keyword query. During this process, the participant may view the search results and possibly click on some interesting ones to view the web pages, just as in a normal web search. There is no requirement or restriction on how many queries the participant must submit or when the participant should stop the search for one topic. When the participant plans to change the search topic, he/she simply presses a button to evaluate the search results before actually switching to the next topic. At the time of evaluation, the 30 top-ranked results from Google and UCAIR (some are overlapping) are randomly mixed together so that the participant does not know whether a result comes from Google or UCAIR. The participant then judges the relevance of these results. We measure precision at the top n (n = 5, 10, 20, 30) documents of Google and UCAIR. We also evaluate precisions at different recall levels. Altogether, participants judged 368 documents from the Google search results and 429 documents from the UCAIR results as relevant. Scatter plots of precision at top 10 and top 20 documents are shown in Figure 4 and Figure 5, respectively (the scatter plot of precision at top 30 documents is very similar to that at top 20 documents). Each point of the scatter plots represents the precisions of Google and UCAIR on one query topic.
Figure 4: Precision at top 10 documents of UCAIR and Google
Figure 5: Precision at top 20 documents of UCAIR and Google
Table 2 shows the average precision at top n documents over the 32 topics. From Figure 4, Figure 5, and Table 2, we see that the search results from UCAIR are consistently better than those from Google by all the measures. Moreover, the performance improvement is more dramatic for precision at top 20 documents than for precision at top 10 documents. One explanation for this is that the more interaction the user has with the system, the more clickthrough data UCAIR can be expected to collect. Thus the retrieval system can build more precise implicit user models, which lead to better retrieval accuracy.
Table 2: Average precision at top n documents for 32 query topics
The plot in Figure 6 shows the precision-recall curves for UCAIR and Google, where it is clearly seen that the performance of UCAIR is consistently and considerably better than that of Google at all levels of recall.
Figure 6: Precision at top 20 result of UCAIR and Google
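The measures reported in this user study are standard and can be computed directly from the pooled judgments; the fragment below is a sketch in which the judgments for a topic are assumed to be given as a set of results marked relevant by the participant.

# Sketch: precision at top n and precision-recall points for one query topic.
from typing import List, Set, Tuple

def precision_at_n(ranked: List[str], relevant: Set[str], n: int) -> float:
    return sum(1 for r in ranked[:n] if r in relevant) / float(n)

def precision_recall_points(ranked: List[str],
                            relevant: Set[str]) -> List[Tuple[float, float]]:
    points, hits = [], 0
    for i, r in enumerate(ranked, start=1):
        if r in relevant:
            hits += 1
            points.append((hits / float(len(relevant)), hits / float(i)))
    return points

# Example (hypothetical identifiers): averaging precision_at_n over the 32 topics
# at n = 5, 10, 20, 30 yields entries like those of Table 2; aggregating
# precision_recall_points over topics yields curves like those in Figure 6.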
6. CONCLUSIONS
In this paper, we studied how to exploit implicit user modeling to intelligently personalize information retrieval and improve search accuracy. Unlike most previous work, we emphasize the use of immediate search context and implicit feedback information, as well as eager updating of search results, to maximally benefit a user. We presented a decision-theoretic framework for optimizing interactive information retrieval based on eager user model updating, in which the system responds to every action of the user by choosing a system action to optimize a utility function. We further propose specific techniques to capture and exploit two types of implicit feedback information: (1) identifying a related immediately preceding query and using that query and the corresponding search results to select appropriate terms to expand the current query, and (2) exploiting the viewed document summaries to immediately rerank any documents that have not yet been seen by the user. Using these techniques, we develop a client-side web search agent (UCAIR) on top of a popular search engine (Google). Experiments on web search show that our search agent can improve search accuracy over Google. Since the implicit information we exploit already naturally exists through user interactions, the user does not need to make any extra effort. The developed search agent thus can improve existing web search performance without any additional effort from the user.
Implicit User Modeling for Personalized Search ABSTRACT Information retrieval systems (e.g., web search engines) are critical for overcoming information overload. A major deficiency of existing retrieval systems is that they generally lack user modeling and are not adaptive to individual users, resulting in inherently non-optimal retrieval performance. For example, a tourist and a programmer may use the same word "java" to search for different information, but the current search systems would return the same results. In this paper, we study how to infer a user's interest from the user's search context and use the inferred implicit user model for personalized search. We present a decision theoretic framework and develop techniques for implicit user modeling in information retrieval. We develop an intelligent client-side web search agent (UCAIR) that can perform eager implicit feedback, e.g., query expansion based on previous queries and immediate result reranking based on clickthrough information. Experiments on web search show that our search agent can improve search accuracy over the popular Google search engine. 1. INTRODUCTION Although many information retrieval systems (e.g., web search engines and digital library systems) have been successfully deployed, the current retrieval systems are far from optimal. A major deficiency of existing retrieval systems is that they generally lack user modeling and are not adaptive to individual users [17]. This inherent non-optimality is seen clearly in the following two cases: (1) Different users may use exactly the same query (e.g., "Java") to search for different information (e.g., the Java island in Indonesia or the Java programming language), but existing IR systems return the same results for these users. Without considering the actual user, it is impossible to know which sense "Java" refers to in a query. (2) A user's information needs may change over time. The same user may use "Java" sometimes to mean the Java island in Indonesia and some other times to mean the programming language. Without recognizing the search context, it would be again impossible to recognize the correct sense. In order to optimize retrieval accuracy, we clearly need to model the user appropriately and personalize search according to each individual user. The major goal of user modeling for information retrieval is to accurately model a user's information need, which is, unfortunately, a very difficult task. Indeed, it is even hard for a user to precisely describe what his/her information need is. What information is available for a system to infer a user's information need? Obviously, the user's query provides the most direct evidence. Indeed, most existing retrieval systems rely solely on the query to model a user's information need. However, since a query is often extremely short, the user model constructed based on a keyword query is inevitably impoverished. An effective way to improve user modeling in information retrieval is to ask the user to explicitly specify which documents are relevant (i.e., useful for satisfying his/her information need), and then to improve user modeling based on such examples of relevant documents. This is called relevancefeedback, which has been proved to be quite effective for improving retrieval accuracy [19, 20]. Unfortunately, in real world applications, users are usually reluctant to make the extra effort to provide relevant examples for feedback [11]. 
It is thus very interesting to study how to infer a user's information need based on any implicit feedback information, which naturally exists through user interactions and thus does not require any extra user effort. Indeed, several previous studies have shown that implicit user modeling can improve retrieval accuracy. In [3], a web browser (Curious Browser) is developed to record a user's explicit relevance ratings of web pages (relevance feedback) and browsing behavior when viewing a page, such as dwelling time, mouse click, mouse movement and scrolling (implicit feedback). It is shown that the dwelling time on a page, amount of scrolling on a page and the combination of time and scrolling have a strong correlation with explicit relevance ratings, which suggests that implicit feedback may be helpful for inferring user information need. In [10], user clickthrough data is collected as training data to learn a retrieval function, which is used to produce a customized ranking of search results that suits a group of users' preferences. In [25], the clickthrough data collected over a long time period is exploited through query expansion to improve retrieval accuracy. While a user may have general long term interests and preferences for information, often he/she is searching for documents to satisfy an "ad hoc" information need, which only lasts for a short period of time; once the information need is satisfied, the user would generally no longer be interested in such information. For example, a user may be looking for information about used cars in order to buy one, but once the user has bought a car, he/she is generally no longer interested in such information. In such cases, implicit feedback information collected over a long period of time is unlikely to be very useful, but the immediate search context and feedback information, such as which of the search results for the current information need are viewed, can be expected to be much more useful. Consider the query "Java" again. Any of the following immediate feedback information about the user could potentially help determine the intended meaning of "Java" in the query: (1) The previous query submitted by the user is "hashtable" (as opposed to, e.g., "travel Indonesia"). (2) In the search results, the user viewed a page where words such as "programming", "software", and "applet" occur many times. To the best of our knowledge, how to exploit such immediate and short-term search context to improve search has so far not been well addressed in the previous work. In this paper, we study how to construct and update a user model based on the immediate search context and implicit feedback information and use the model to improve the accuracy of ad hoc retrieval. In order to maximally benefit the user of a retrieval system through implicit user modeling, we propose to perform "eager implicit feedback". That is, as soon as we observe any new piece of evidence from the user, we would update the system's belief about the user's information need and respond with improved retrieval results based on the updated user model. We present a decision-theoretic framework for optimizing interactive information retrieval based on eager user model updating, in which the system responds to every action of the user by choosing a system action to optimize a utility function. In a traditional retrieval paradigm, the retrieval problem is to match a query with documents and rank documents according to their relevance values. 
As a result, the retrieval process is a simple independent cycle of "query" and "result display". In the proposed new retrieval paradigm, the user's search context plays an important role and the inferred implicit user model is exploited immediately to benefit the user. The new retrieval paradigm is thus fundamentally different from the traditional paradigm, and is inherently more general. We further propose specific techniques to capture and exploit two types of implicit feedback information: (1) identifying related immediately preceding query and using the query and the corresponding search results to select appropriate terms to expand the current query, and (2) exploiting the viewed document summaries to immediately rerank any documents that have not yet been seen by the user. Using these techniques, we develop a client-side web search agent UCAIR (User-Centered Adaptive Information Retrieval) on top of a popular search engine (Google). Experiments on web search show that our search agent can improve search accuracy over Google. Since the implicit information we exploit already naturally exists through user interactions, the user does not need to make any extra effort. Thus the developed search agent can improve existing web search performance without additional effort from the user. The remaining sections are organized as follows. In Section 2, we discuss the related work. In Section 3, we present a decisiontheoretic interactive retrieval framework for implicit user modeling. In Section 4, we present the design and implementation of an intelligent client-side web search agent (UCAIR) that performs eager implicit feedback. In Section 5, we report our experiment results using the search agent. Section 6 concludes our work. 2. RELATED WORK Implicit user modeling for personalized search has been studied in previous work, but our work differs from all previous work in several aspects: (1) We emphasize the exploitation of immediate search context such as the related immediately preceding query and the viewed documents in the same session, while most previous work relies on long-term collection of implicit feedback information [25]. (2) We perform eager feedback and bring the benefit of implicit user modeling as soon as any new implicit feedback information is available, while the previous work mostly exploits longterm implicit feedback [10]. (3) We propose a retrieval framework to integrate implicit user modeling with the interactive retrieval process, while the previous work either studies implicit user modeling separately from retrieval [3] or only studies specific retrieval models for exploiting implicit feedback to better match a query with documents [23, 27, 22]. (4) We develop and evaluate a personalized Web search agent with online user studies, while most existing work evaluates algorithms offline without real user interactions. Currently some search engines provide rudimentary personalization, such as Google Personalized web search [6], which allows users to explicitly describe their interests by selecting from predefined topics, so that those results that match their interests are brought to the top, and My Yahoo! search [16], which gives users the option to save web sites they like and block those they dislike. In contrast, UCAIR personalizes web search through implicit user modeling without any additional user efforts. Furthermore, the personalization of UCAIR is provided on the client side. There are two remarkable advantages on this. 
First, the user does not need to worry about the privacy infringement, which is a big concern for personalized search [26]. Second, both the computation of personalization and the storage of the user profile are done at the client side so that the server load is reduced dramatically [9]. There have been many works studying user query logs [1] or query dynamics [13]. UCAIR makes direct use of a user's query history to benefit the same user immediately in the same search session. UCAIR first judges whether two neighboring queries belong to the same information session and if so, it selects terms from the previous query to perform query expansion. Our query expansion approach is similar to automatic query expansion [28, 15, 5], but instead of using pseudo feedback to expand the query, we use user's implicit feedback information to expand the current query. These two techniques may be combined. 3. OPTIMIZATION IN INTERACTIVE IR 3.1 A decision-theoretic framework 3.2 User models 3.3 Loss functions 3.4 Implicit user modeling 4. UCAIR: A PERSONALIZED SEARCH AGENT 4.1 Design 4.2 Session boundary detection and query expansion 4.3 Information need model updating 4.4 Result reranking 5. EVALUATION OF UCAIR 5.1 Sample results 5.2 Quantitative evaluation 6. CONCLUSIONS In this paper, we studied how to exploit implicit user modeling to intelligently personalize information retrieval and improve search accuracy. Unlike most previous work, we emphasize the use of immediate search context and implicit feedback information as well as eager updating of search results to maximally benefit a user. We presented a decision-theoretic framework for optimizing interactive information retrieval based on eager user model updating, in which the system responds to every action of the user by choosing a system action to optimize a utility function. We further propose specific techniques to capture and exploit two types of implicit feedback information: (1) identifying related immediately preceding query and using the query and the corresponding search results to select appropriate terms to expand the current query, and (2) exploiting the viewed document summaries to immediately rerank any documents that have not yet been seen by the user. Using these techniques, we develop a client-side web search agent (UCAIR) on top of a popular search engine (Google). Experiments on web search show that our search agent can improve search accuracy over Figure 5: Precision at top 20 documents of UCAIR and Google Figure 6: Precision at top 20 result of UCAIR and Google Google. Since the implicit information we exploit already naturally exists through user interactions, the user does not need to make any extra effort. The developed search agent thus can improve existing web search performance without any additional effort from the user.
Implicit User Modeling for Personalized Search ABSTRACT Information retrieval systems (e.g., web search engines) are critical for overcoming information overload. A major deficiency of existing retrieval systems is that they generally lack user modeling and are not adaptive to individual users, resulting in inherently non-optimal retrieval performance. For example, a tourist and a programmer may use the same word "java" to search for different information, but the current search systems would return the same results. In this paper, we study how to infer a user's interest from the user's search context and use the inferred implicit user model for personalized search. We present a decision theoretic framework and develop techniques for implicit user modeling in information retrieval. We develop an intelligent client-side web search agent (UCAIR) that can perform eager implicit feedback, e.g., query expansion based on previous queries and immediate result reranking based on clickthrough information. Experiments on web search show that our search agent can improve search accuracy over the popular Google search engine. 1. INTRODUCTION Although many information retrieval systems (e.g., web search engines and digital library systems) have been successfully deployed, the current retrieval systems are far from optimal. A major deficiency of existing retrieval systems is that they generally lack user modeling and are not adaptive to individual users [17]. (1) Different users may use exactly the same query (e.g., "Java") to search for different information (e.g., the Java island in Indonesia or the Java programming language), but existing IR systems return the same results for these users. Without considering the actual user, it is impossible to know which sense "Java" refers to in a query. (2) A user's information needs may change over time. The same user may use "Java" sometimes to mean the Java island in Indonesia and some other times to mean the programming language. Without recognizing the search context, it would be again impossible to recognize the correct sense. In order to optimize retrieval accuracy, we clearly need to model the user appropriately and personalize search according to each individual user. The major goal of user modeling for information retrieval is to accurately model a user's information need, which is, unfortunately, a very difficult task. Indeed, it is even hard for a user to precisely describe what his/her information need is. What information is available for a system to infer a user's information need? Obviously, the user's query provides the most direct evidence. Indeed, most existing retrieval systems rely solely on the query to model a user's information need. However, since a query is often extremely short, the user model constructed based on a keyword query is inevitably impoverished. An effective way to improve user modeling in information retrieval is to ask the user to explicitly specify which documents are relevant (i.e., useful for satisfying his/her information need), and then to improve user modeling based on such examples of relevant documents. This is called relevancefeedback, which has been proved to be quite effective for improving retrieval accuracy [19, 20]. Unfortunately, in real world applications, users are usually reluctant to make the extra effort to provide relevant examples for feedback [11]. 
It is thus very interesting to study how to infer a user's information need based on any implicit feedback information, which naturally exists through user interactions and thus does not require any extra user effort. Indeed, several previous studies have shown that implicit user modeling can improve retrieval accuracy. In [10], user clickthrough data is collected as training data to learn a retrieval function, which is used to produce a customized ranking of search results that suits a group of users' preferences. In [25], the clickthrough data collected over a long time period is exploited through query expansion to improve retrieval accuracy. For example, a user may be looking for information about used cars in order to buy one, but once the user has bought a car, he/she is generally no longer interested in such information. Consider the query "Java" again. (2) In the search results, the user viewed a page where words such as "programming", "software", and "applet" occur many times. To the best of our knowledge, how to exploit such immediate and short-term search context to improve search has so far not been well addressed in the previous work. In this paper, we study how to construct and update a user model based on the immediate search context and implicit feedback information and use the model to improve the accuracy of ad hoc retrieval. In order to maximally benefit the user of a retrieval system through implicit user modeling, we propose to perform "eager implicit feedback". That is, as soon as we observe any new piece of evidence from the user, we would update the system's belief about the user's information need and respond with improved retrieval results based on the updated user model. We present a decision-theoretic framework for optimizing interactive information retrieval based on eager user model updating, in which the system responds to every action of the user by choosing a system action to optimize a utility function. In a traditional retrieval paradigm, the retrieval problem is to match a query with documents and rank documents according to their relevance values. As a result, the retrieval process is a simple independent cycle of "query" and "result display". In the proposed new retrieval paradigm, the user's search context plays an important role and the inferred implicit user model is exploited immediately to benefit the user. The new retrieval paradigm is thus fundamentally different from the traditional paradigm, and is inherently more general. Using these techniques, we develop a client-side web search agent UCAIR (User-Centered Adaptive Information Retrieval) on top of a popular search engine (Google). Experiments on web search show that our search agent can improve search accuracy over Google. Since the implicit information we exploit already naturally exists through user interactions, the user does not need to make any extra effort. Thus the developed search agent can improve existing web search performance without additional effort from the user. The remaining sections are organized as follows. In Section 2, we discuss the related work. In Section 3, we present a decisiontheoretic interactive retrieval framework for implicit user modeling. In Section 4, we present the design and implementation of an intelligent client-side web search agent (UCAIR) that performs eager implicit feedback. In Section 5, we report our experiment results using the search agent. Section 6 concludes our work. 2. 
RELATED WORK (2) We perform eager feedback and bring the benefit of implicit user modeling as soon as any new implicit feedback information is available, while the previous work mostly exploits longterm implicit feedback [10]. (3) We propose a retrieval framework to integrate implicit user modeling with the interactive retrieval process, while the previous work either studies implicit user modeling separately from retrieval [3] or only studies specific retrieval models for exploiting implicit feedback to better match a query with documents [23, 27, 22]. (4) We develop and evaluate a personalized Web search agent with online user studies, while most existing work evaluates algorithms offline without real user interactions. In contrast, UCAIR personalizes web search through implicit user modeling without any additional user efforts. Furthermore, the personalization of UCAIR is provided on the client side. First, the user does not need to worry about the privacy infringement, which is a big concern for personalized search [26]. There have been many works studying user query logs [1] or query dynamics [13]. UCAIR makes direct use of a user's query history to benefit the same user immediately in the same search session. UCAIR first judges whether two neighboring queries belong to the same information session and if so, it selects terms from the previous query to perform query expansion. Our query expansion approach is similar to automatic query expansion [28, 15, 5], but instead of using pseudo feedback to expand the query, we use user's implicit feedback information to expand the current query. These two techniques may be combined. 6. CONCLUSIONS In this paper, we studied how to exploit implicit user modeling to intelligently personalize information retrieval and improve search accuracy. Unlike most previous work, we emphasize the use of immediate search context and implicit feedback information as well as eager updating of search results to maximally benefit a user. We presented a decision-theoretic framework for optimizing interactive information retrieval based on eager user model updating, in which the system responds to every action of the user by choosing a system action to optimize a utility function. Using these techniques, we develop a client-side web search agent (UCAIR) on top of a popular search engine (Google). Experiments on web search show that our search agent can improve search accuracy over Figure 5: Precision at top 20 documents of UCAIR and Google Figure 6: Precision at top 20 result of UCAIR and Google Google. Since the implicit information we exploit already naturally exists through user interactions, the user does not need to make any extra effort. The developed search agent thus can improve existing web search performance without any additional effort from the user.
H-48
A New Approach for Evaluating Query Expansion: Query-Document Term Mismatch
The effectiveness of information retrieval (IR) systems is influenced by the degree of term overlap between user queries and relevant documents. Query-document term mismatch, whether partial or total, is a fact that must be dealt with by IR systems. Query Expansion (QE) is one method for dealing with term mismatch. IR systems implementing query expansion are typically evaluated by executing each query twice, with and without query expansion, and then comparing the two result sets. While this measures an overall change in performance, it does not directly measure the effectiveness of IR systems in overcoming the inherent issue of term mismatch between the query and relevant documents, nor does it provide any insight into how such systems would behave in the presence of query-document term mismatch. In this paper, we propose a new approach for evaluating query expansion techniques. The proposed approach is attractive because it provides an estimate of system performance under varying degrees of query-document term mismatch, it makes use of readily available test collections, and it does not require any additional relevance judgments or any form of manual processing.
[ "evalu", "queri expans", "inform retriev", "relev document", "queri-document term mismatch", "inform search", "document expans", "document process" ]
[ "P", "P", "P", "P", "M", "M", "R", "R" ]
A New Approach for Evaluating Query Expansion: Query-Document Term Mismatch Tonya Custis Thomson Corporation 610 Opperman Drive St. Paul, MN tonya.custis@thomson.com Khalid Al-Kofahi Thomson Corporation 610 Opperman Drive St. Paul, MN khalid.al-kofahi@thomson.com ABSTRACT The effectiveness of information retrieval (IR) systems is influenced by the degree of term overlap between user queries and relevant documents. Query-document term mismatch, whether partial or total, is a fact that must be dealt with by IR systems. Query Expansion (QE) is one method for dealing with term mismatch. IR systems implementing query expansion are typically evaluated by executing each query twice, with and without query expansion, and then comparing the two result sets. While this measures an overall change in performance, it does not directly measure the effectiveness of IR systems in overcoming the inherent issue of term mismatch between the query and relevant documents, nor does it provide any insight into how such systems would behave in the presence of query-document term mismatch. In this paper, we propose a new approach for evaluating query expansion techniques. The proposed approach is attractive because it provides an estimate of system performance under varying degrees of query-document term mismatch, it makes use of readily available test collections, and it does not require any additional relevance judgments or any form of manual processing. Categories and Subject Descriptors H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval General Terms Measurement, Experimentation 1. INTRODUCTION In our domain,1 and unlike web search, it is very important for attorneys to find all documents (e.g., cases) that are relevant to an issue. Missing relevant documents may have non-trivial consequences on the outcome of a court proceeding. Attorneys are especially concerned about missing relevant documents when researching a legal topic that is new to them, as they may not be aware of all language variations in such topics. Therefore, it is important to develop information retrieval systems that are robust with respect to language variations or term mismatch between queries and relevant documents. During our work on developing such systems, we concluded that current evaluation methods are not sufficient for this purpose. {Whooping cough, pertussis}, {heart attack, myocardial infarction}, {car wash, automobile cleaning}, {attorney, legal counsel, lawyer} are all examples of things that share the same meaning. Often, the terms chosen by users in their queries are different than those appearing in the documents relevant to their information needs. This query-document term mismatch arises from two sources: (1) the synonymy found in natural language, both at the term and the phrasal level, and (2) the degree to which the user is an expert at searching and/or has expert knowledge in the domain of the collection being searched. IR evaluations are comparative in nature (cf. TREC). Generally, IR evaluations show how System A did in relation to System B on the same test collection based on various precision- and recall-based metrics. Similarly, IR systems with QE capabilities are typically evaluated by executing each search twice, once with and once without query expansion, and then comparing the two result sets. 
While this approach shows which system may have performed better overall with respect to a particular test collection, it does not directly or systematically measure the effectiveness of IR systems in overcoming query-document term mismatch. If the goal of QE is to increase search performance by mitigating the effects of query-document term mismatch, then the degree to which a system does so should be measurable in evaluation. An effective evaluation method should measure the performance of IR systems under varying degrees of query-document term mismatch, not just in terms of overall performance on a collection relative to another system. 1 Thomson Corporation builds information based solutions to the professional markets including legal, financial, health care, scientific, and tax and accounting. In order to measure that a particular IR system is able to overcome query-document term mismatch by retrieving documents that are relevant to a user``s query, but that do not necessarily contain the query terms themselves, we systematically introduce term mismatch into the test collection by removing query terms from known relevant documents. Because we are purposely inducing term mismatch between the queries and known relevant documents in our test collections, the proposed evaluation framework is able to measure the effectiveness of QE in a way that testing on the whole collection is not. If a QE search method finds a document that is known to be relevant but that is nonetheless missing query terms, it shows that QE technique is indeed robust with respect to query-document term mismatch. 2. RELATED WORK Accounting for term mismatch between the terms in user queries and the documents relevant to users'' information needs has been a fundamental issue in IR research for almost 40 years [38, 37, 47]. Query expansion (QE) is one technique used in IR to improve search performance by increasing the likelihood of term overlap (either explicitly or implicitly) between queries and documents that are relevant to users'' information needs. Explicit query expansion occurs at run-time, based on the initial search results, as is the case with relevance feedback and pseudo relevance feedback [34, 37]. Implicit query expansion can be based on statistical properties of the document collection, or it may rely on external knowledge sources such as a thesaurus or an ontology [32, 17, 26, 50, 51, 2]. Regardless of method, QE algorithms that are capable of retrieving relevant documents despite partial or total term mismatch between queries and relevant documents should increase the recall of IR systems (by retrieving documents that would have previously been missed) as well as their precision (by retrieving more relevant documents). In practice, QE tends to improve the average overall retrieval performance, doing so by improving performance on some queries while making it worse on others. QE techniques are judged as effective in the case that they help more than they hurt overall on a particular collection [47, 45, 41, 27]. Often, the expansion terms added to a query in the query expansion phase end up hurting the overall retrieval performance because they introduce semantic noise, causing the meaning of the query to drift. As such, much work has been done with respect to different strategies for choosing semantically relevant QE terms to include in order to avoid query drift [34, 50, 51, 18, 24, 29, 30, 32, 3, 4, 5]. 
The evaluation of IR systems has received much attention in the research community, both in terms of developing test collections for the evaluation of different systems [11, 12, 13, 43] and in terms of the utility of evaluation metrics such as recall, precision, mean average precision, precision at rank, Bpref, etc. [7, 8, 44, 14]. In addition, there have been comparative evaluations of different QE techniques on various test collections [47, 45, 41]. In addition, the IR research community has given attention to differences between the performance of individual queries. Research efforts have been made to predict which queries will be improved by QE and then selectively applying it only to those queries [1, 5, 27, 29, 15, 48], to achieve optimal overall performance. In addition, related work on predicting query difficulty, or which queries are likely to perform poorly, has been done [1, 4, 5, 9]. There is general interest in the research community to improve the robustness of IR systems by improving retrieval performance on difficult queries, as is evidenced by the Robust track in the TREC competitions and new evaluation measures such as GMAP. GMAP (geometric mean average precision) gives more weight to the lower end of the average precision (as opposed to MAP), thereby emphasizing the degree to which difficult or poorly performing queries contribute to the score [33]. However, no attention is given to evaluating the robustness of IR systems implementing QE with respect to querydocument term mismatch in quantifiable terms. By purposely inducing mismatch between the terms in queries and relevant documents, our evaluation framework allows us a controlled manner in which to degrade the quality of the queries with respect to their relevant documents, and then to measure the both the degree of (induced) difficulty of the query and the degree to which QE improves the retrieval performance of the degraded query. The work most similar to our own in the literature consists of work in which document collections or queries are altered in a systematic way to measure differences query performance. [42] introduces into the document collection pseudowords that are ambiguous with respect to word sense, in order to measure the degree to which word sense disambiguation is useful in IR. [6] experiments with altering the document collection by adding semantically related expansion terms to documents at indexing time. In cross-language IR, [28] explores different query expansion techniques while purposely degrading their translation resources, in what amounts to expanding a query with only a controlled percentage of its translation terms. Although similar in introducing a controlled amount of variance into their test collections, these works differ from the work being presented in this paper in that the work being presented here explicitly and systematically measures query effectiveness in the presence of query-document term mismatch. 3. METHODOLOGY In order to accurately measure IR system performance in the presence of query-term mismatch, we need to be able to adjust the degree of term mismatch in a test corpus in a principled manner. Our approach is to introduce querydocument term mismatch into a corpus in a controlled manner and then measure the performance of IR systems as the degree of term mismatch changes. 
We systematically remove query terms from known relevant documents, creating alternate versions of a test collection that differ only in how many or which query terms have been removed from the documents relevant to a particular query. Introducing query-document term mismatch into the test collection in this manner allows us to manipulate the degree of term mismatch between relevant documents and queries in a controlled manner. This removal process affects only the relevant documents in the search collection. The queries themselves remain unaltered. Query terms are removed from documents one by one, so the differences in IR system performance can be measured with respect to missing terms. In the most extreme case (i.e., when the length of the query is less than or equal to the number of query terms removed from the relevant documents), there will be no term overlap between a query and its relevant documents. Notice that, for a given query, only relevant documents are modified. Non-relevant documents are left unchanged, even in the case that they contain query terms. Although, on the surface, we are changing the distribution of terms between the relevant and non-relevant documents sets by removing query terms from the relevant documents, doing so does not change the conceptual relevancy of these documents. Systematically removing query terms from known relevant documents introduces a controlled amount of query-document term mismatch by which we can evaluate the degree to which particular QE techniques are able to retrieve conceptually relevant documents, despite a lack of actual term overlap. Removing a query term from relevant documents simply masks the presence of that query term in those documents. It does not in any way change the conceptual relevancy of the documents. The evaluation framework presented in this paper consists of three elements: a test collection, C; a strategy for selecting which query terms to remove from the relevant documents in that collection, S; and a metric by which to compare performance of the IR systems, m. The test collection, C, consists of a document collection, queries, and relevance judgments. The strategy, S, determines the order and manner in which query terms are removed from the relevant documents in C. This evaluation framework is not metric-specific; any metric (MAP, P@10, recall, etc.) can be used to measure IR system performance. Although test collections are difficult to come by, it should be noted that this evaluation framework can be used on any available test collection. In fact, using this framework stretches the value of existing test collections in that one collection becomes several when query terms are removed from relevant documents, thereby increasing the amount of information that can be gained from evaluating on a particular collection. In other evaluations of QE effectiveness, the controlled variable is simply whether or not queries have been expanded or not, compared in terms of some metric. In contrast, the controlled variable in this framework is the query term that has been removed from the documents relevant to that query, as determined by the removal strategy, S. Query terms are removed one by one, in a manner and order determined by S, so that collections differ only with respect to the one term that has been removed (or masked) in the documents relevant to that query. It is in this way that we can explicitly measure the degree to which an IR system overcomes query-document term mismatch. 
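As a concrete sketch of the removal step (illustrative only; the data structures and names are assumptions rather than part of any existing toolkit), the fragment below masks query terms in the documents known to be relevant to a query, leaving all non-relevant documents untouched, and builds the sequence of degraded collection variants for an additive, IDF-ordered removal strategy, anticipating the particular strategy adopted later in Section 4.3.

# Sketch: build alternate versions of a test collection in which query terms
# are masked (removed) from the relevant documents only.
import math
import re
from typing import Dict, List, Set

def mask_term_in_relevant_docs(docs: Dict[str, str],
                               relevant_ids: Set[str],
                               term: str) -> Dict[str, str]:
    pattern = re.compile(r"\b%s\b" % re.escape(term), flags=re.IGNORECASE)
    return {doc_id: (pattern.sub(" ", text) if doc_id in relevant_ids else text)
            for doc_id, text in docs.items()}

def idf_ordered_terms(query_terms: List[str],
                      doc_freq: Dict[str, int],
                      n_docs: int) -> List[str]:
    # Order query terms from highest to lowest IDF over the whole collection.
    return sorted(query_terms,
                  key=lambda t: math.log((n_docs + 1) / (doc_freq.get(t, 0) + 1)),
                  reverse=True)

def additive_removal_variants(docs: Dict[str, str],
                              relevant_ids: Set[str],
                              ordered_terms: List[str],
                              cutoffs=(1, 2, 3, 5, 7)) -> Dict[int, Dict[str, str]]:
    # For each cutoff k, mask the k highest-IDF query terms in the relevant docs;
    # indexing each variant separately yields one degraded collection per cutoff.
    variants = {}
    for k in cutoffs:
        if k > len(ordered_terms):
            break
        masked = docs
        for term in ordered_terms[:k]:
            masked = mask_term_in_relevant_docs(masked, relevant_ids, term)
        variants[k] = masked
    return variants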
The choice of a query term removal strategy is relatively flexible; the only restriction in choosing a strategy S is that query terms must be removed one at a time. Two decisions must be made when choosing a removal strategy S. The first is the order in which S removes terms from the relevant documents. Possible orders for removal could be based on metrics such as IDF or the global probability of a term in the document collection; which order makes the most sense depends on the purpose of the evaluation and on the retrieval algorithm being used. Once an order for removal has been decided, the manner of term removal/masking must be decided: whether S will remove terms individually (i.e., remove just one different term each time) or additively (i.e., remove one term first, then that term in addition to another, and so on). The incremental, additive removal of query terms from relevant documents allows the evaluation to show the degree to which IR system performance degrades as more and more query terms are missing, thereby increasing the degree of query-document term mismatch. Removing terms individually allows for a clear comparison of the contribution of QE in the absence of each individual query term.
4. EXPERIMENTAL SET-UP
4.1 IR Systems
We used the proposed evaluation framework to evaluate four IR systems on two test collections. Of the four systems used in the evaluation, two implement query expansion techniques: Okapi (with pseudo-feedback for QE) and a proprietary concept search engine (we'll call it TCS, for Thomson Concept Search). TCS is a language-modeling-based retrieval engine that utilizes a subject-appropriate external corpus (i.e., legal or news) as a knowledge source. This external knowledge source is a corpus separate from, but thematically related to, the document collection to be searched. Translation probabilities for QE [2] are calculated from these large external corpora. Okapi (without feedback) and a language model query likelihood (QL) model (implemented using Indri) are included as keyword-only baselines. Okapi without feedback is intended as an analogous baseline for Okapi with feedback, and the QL model is intended as an appropriate baseline for TCS, as both implement language-modeling-based retrieval algorithms. We chose these as baselines because they depend only on the words appearing in the queries and have no QE capabilities. As a result, we expect that when query terms are removed from relevant documents, the performance of these systems should degrade more dramatically than that of their counterparts that implement QE.
The Okapi and QL model results were obtained using the Lemur Toolkit (www.lemurproject.org). Okapi was run with the parameters k1=1.2, b=0.75, and k3=7. When run with feedback, the feedback parameters used in Okapi were set at 10 documents and 25 terms. The QL model used Jelinek-Mercer smoothing, with λ = 0.6.
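For reference, these two baseline scoring functions take the following standard textbook forms under the stated parameters; the exact variants implemented in the Lemur Toolkit, and the convention that λ weights the collection model, are our assumptions rather than details stated above.

\[
\mathrm{BM25}(D,Q) = \sum_{t \in Q} \log\frac{N - df_t + 0.5}{df_t + 0.5}
  \cdot \frac{(k_1 + 1)\, tf_{t,D}}{k_1\big((1-b) + b\,\frac{|D|}{avgdl}\big) + tf_{t,D}}
  \cdot \frac{(k_3 + 1)\, qtf_t}{k_3 + qtf_t},
  \qquad k_1 = 1.2,\ b = 0.75,\ k_3 = 7
\]

\[
P_{\mathrm{QL}}(Q \mid D) = \prod_{t \in Q} \Big[(1-\lambda)\,\frac{tf_{t,D}}{|D|} + \lambda\, P(t \mid \mathcal{C})\Big],
  \qquad \lambda = 0.6
\]

Here N is the number of documents, df_t the document frequency of t, tf_{t,D} the frequency of t in document D, qtf_t its frequency in the query, |D| the document length, avgdl the average document length, and P(t | C) the collection language model. Masking a query term t in the relevant documents drives tf_{t,D} to zero in exactly those documents, which is why the scores of these keyword-only models are expected to degrade as terms are removed.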
4.2 Test Collections
We evaluated the performance of the four IR systems outlined above on two different test collections: the TREC AP89 collection (TIPSTER disk 1) and the FSupp Collection. The FSupp Collection is a proprietary collection of 11,953 case law documents for which we have 44 queries (ranging from four to twenty-two words after stop word removal) with full relevance judgments (each of the 11,953 documents was evaluated by domain experts with respect to each of the 44 queries). The average length of documents in the FSupp Collection is 3444 words. The TREC AP89 test collection contains 84,678 documents, averaging 252 words in length. In our evaluation, we used both the title and the description fields of topics 151-200 as queries, so we have two sets of results for the AP89 Collection. After stop word removal, the title queries range from two to eleven words and the description queries range from four to twenty-six terms.
4.3 Query Term Removal Strategy
In our experiments, we chose to sequentially and additively remove query terms from highest to lowest inverse document frequency (IDF) with respect to the entire document collection. Terms with high IDF values tend to influence document ranking more than those with lower IDF values. Additionally, high-IDF terms tend to be domain-specific terms that are less likely to be known to non-expert users, hence we start by removing these first. For the FSupp Collection, queries were evaluated incrementally with one, two, three, five, and seven terms removed from their corresponding relevant documents. The longer description queries from TREC topics 151-200 were likewise evaluated on the AP89 Collection with one, two, three, five, and seven query terms removed from their relevant documents. For the shorter TREC title queries, we removed one, two, three, and five terms from the relevant documents.
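A minimal sketch of this removal schedule, continuing the earlier framework sketch, is shown below. The whitespace tokenization and the dictionary-based collection interface are simplifying assumptions, not the implementation actually used in the experiments.

import math
from collections import defaultdict

def idf_table(docs):
    # Inverse document frequency for every term in the collection ({doc_id: text}).
    df = defaultdict(int)
    for text in docs.values():
        for term in set(text.lower().split()):
            df[term] += 1
    n_docs = len(docs)
    return {term: math.log(n_docs / count) for term, count in df.items()}

def removal_schedule(query_terms, idf, steps=(1, 2, 3, 5, 7)):
    # Yield cumulative sets of query terms to mask, highest IDF first (additive removal).
    ordered = sorted(set(query_terms), key=lambda t: idf.get(t, 0.0), reverse=True)
    for k in steps:
        if k > len(ordered):
            break
        yield set(ordered[:k])   # step k contains every term from step k-1 plus one more

Bound to a precomputed IDF table (e.g., with functools.partial), this generator can serve as the strategy S in the framework sketch above; switching to individual rather than additive removal would amount to yielding one singleton set per term instead.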
4.4 Metrics
In this implementation of the evaluation framework, we chose three metrics by which to compare IR system performance: mean average precision (MAP), precision at 10 documents (P10), and recall at 1000 documents. Although these are the metrics we chose to demonstrate the framework, any appropriate IR metrics could be used within it.
5. RESULTS
5.1 FSupp Collection
Figures 1, 2, and 3 show the performance (in terms of MAP, P10, and Recall, respectively) for the four search engines on the FSupp Collection.
Figure 1: The performance of the four retrieval systems on the FSupp collection in terms of Mean Average Precision (MAP) and as a function of the number of query terms removed (the horizontal axis).
Figure 2: The performance of the four retrieval systems on the FSupp collection in terms of Precision at 10 and as a function of the number of query terms removed (the horizontal axis).
Figure 3: The Recall (at 1000) of the four retrieval systems on the FSupp collection as a function of the number of query terms removed (the horizontal axis).
As expected, the performance of the keyword-only IR systems, QL and Okapi, drops quickly as query terms are removed from the relevant documents in the collection. The performance of Okapi with feedback (Okapi FB) is somewhat surprising in that on the original collection (i.e., prior to query term removal), its performance is worse than that of Okapi without feedback on all three measures. TCS outperforms the QL keyword baseline on every measure except for MAP on the original collection (i.e., prior to removing any query terms). Because TCS employs implicit query expansion using an external domain-specific knowledge base, it is less sensitive to term removal (i.e., mismatch) than Okapi FB, which relies on terms from the top-ranked documents retrieved by an initial keyword-only search. Because overall search engine performance is frequently measured in terms of MAP, and because other evaluations of QE often only consider performance on the entire collection (i.e., they do not consider term mismatch), the QE implemented in TCS would be considered (in another evaluation) to hurt performance on the FSupp Collection. However, when we look at the comparison of TCS to QL when query terms are removed from the relevant documents, we can see that the QE in TCS is indeed contributing positively to the search.
5.2 The AP89 Collection: using the description queries
Figures 4, 5, and 6 show the performance of the four IR systems on the AP89 Collection, using the TREC topic descriptions as queries.
Figure 4: MAP of the four IR systems on the AP89 Collection, using TREC description queries. MAP is measured as a function of the number of query terms removed.
Figure 5: Precision at 10 of the four IR systems on the AP89 Collection, using TREC description queries. P at 10 is measured as a function of the number of query terms removed.
Figure 6: Recall (at 1000) of the four IR systems on the AP89 Collection, using TREC description queries, and as a function of the number of query terms removed.
The most interesting difference between the performance on the FSupp Collection and the AP89 Collection is the reversal of Okapi FB and TCS. On FSupp, TCS outperformed the other engines consistently (see Figures 1, 2, and 3); on the AP89 Collection, Okapi FB is clearly the best performer (see Figures 4, 5, and 6). This is all the more interesting given that QE in Okapi FB takes place after the first search iteration, which we would expect to be handicapped when query terms are removed. Looking at P10 in Figure 5, we can see that TCS and Okapi FB score similarly on P10, starting at the point where one query term is removed from relevant documents. At two query terms removed, TCS starts outperforming Okapi FB. Modeling this in terms of expert versus non-expert users, we could conclude that TCS might be a better search engine for non-experts to use on the AP89 Collection, while Okapi FB would be best for an expert searcher.
It is interesting to note that on each metric for the AP89 description queries, TCS performs more poorly than all the other systems on the original collection, but quickly surpasses the baseline systems and approaches Okapi FB's performance as terms are removed. This is again a case where the performance of a system on the entire collection is not necessarily indicative of how it handles query-document term mismatch.
5.3 The AP89 Collection: using the title queries
Figures 7, 8, and 9 show the performance of the four IR systems on the AP89 Collection, using the TREC topic titles as queries.
Figure 7: MAP of the four IR systems on the AP89 Collection, using TREC title queries and as a function of the number of query terms removed.
Figure 8: Precision at 10 of the four IR systems on the AP89 Collection, using TREC title queries, and as a function of the number of query terms removed.
Figure 9: Recall (at 1000) of the four IR systems on the AP89 Collection, using TREC title queries and as a function of the number of query terms removed.
As with the AP89 description queries, Okapi FB is again the best performer of the four systems in the evaluation. As before, the performance of the Okapi and QL systems, the non-QE baseline systems, sharply degrades as query terms are removed. On the shorter queries, TCS seems to have a harder time catching up to the performance of Okapi FB as terms are removed. Perhaps the most interesting result from our evaluation is that although the keyword-only baselines performed consistently and as expected on both collections with respect to query term removal from relevant documents, the performances of the engines implementing QE techniques differed dramatically between collections.
6. DISCUSSION
The intuition behind this evaluation framework is to measure the degree to which various QE techniques overcome term mismatch between queries and relevant documents. In general, it is easy to evaluate the overall performance of different techniques for QE in comparison to each other or against a non-QE variant on any complete test collection. Such an approach does tell us which systems perform better on a complete test collection, but it does not measure the ability of a particular QE technique to retrieve relevant documents despite partial or complete term mismatch between queries and relevant documents. A systematic evaluation of IR systems as outlined in this paper is useful not only with respect to measuring the general success or failure of particular QE techniques in the presence of query-document term mismatch, but it also provides insight into how a particular IR system will perform when used by expert versus non-expert users on a particular collection. The less a user knows about the domain of the document collection on which they are searching, the more prevalent query-document term mismatch is likely to be.
This distinction is especially relevant in the case that the test collection is domain-specific (e.g., medical or legal, as opposed to a more general domain such as news), where the distinction between experts and non-experts may be more marked. For example, a non-expert in the medical domain might search for whooping cough, but relevant documents might instead contain the medical term pertussis.
Since query terms are masked only in the relevant documents, this evaluation framework is actually biased against retrieving relevant documents. This is because non-relevant documents may also contain query terms, which can cause a retrieval system to rank such documents higher than it would have before terms were masked in the relevant documents. Still, we think this is a more realistic scenario than removing terms from all documents regardless of relevance. The degree to which a QE technique is well suited to a particular collection can be evaluated in terms of its ability to still find the relevant documents, even when they are missing query terms, despite the bias of this approach against relevant documents.
However, given that Okapi FB and TCS each outperformed the other on one of the two test collections, further investigation into the degree of compatibility between a QE approach and its target collection is probably warranted. Furthermore, the investigation of other term removal strategies could provide insight into the behavior of different QE techniques and their overall impact on the user experience. As mentioned earlier, our choice of term removal strategy was motivated by (1) our desire to see the highest impact on system performance as terms are removed, and (2) the fact that high-IDF terms, in our domain context, are more likely to be domain-specific, which allows us to better understand the performance of an IR system as experienced by expert and non-expert users. Although not attempted in our experiments, another application of this evaluation framework would be to remove query terms individually, rather than incrementally, to analyze which terms (or possibly which types of terms) are being helped most by a QE technique on a particular test collection. This could lead to insight as to when QE should and should not be applied.
This evaluation framework allows us to see how IR systems perform in the presence of query-document term mismatch. In other evaluations, the performance of a system is measured only on the entire collection, in which the degree of query-document term mismatch is not known. By systematically introducing this mismatch, we can see that even if an IR system is not the best performer on the entire collection, its performance may nonetheless be more robust to query-document term mismatch than that of other systems. Such robustness makes a system more user-friendly, especially to non-expert users.
This paper presents a novel framework for IR system evaluation, the applications of which are numerous. The results presented in this paper are not by any means meant to be exhaustive or entirely representative of the ways in which this evaluation could be applied. To be sure, there is much future work that could be done using this framework. In addition to looking at the average performance of IR systems, the results of individual queries could be examined and compared more closely, perhaps giving more insight into the classification and prediction of difficult queries, or perhaps showing which QE techniques improve (or degrade) individual query performance under differing degrees of query-document term mismatch. Indeed, this framework would also benefit from further testing on a larger collection.
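As a rough illustration of that per-term analysis, the sketch below masks one query term at a time and records how much a QE system gains over its keyword-only baseline in the term's absence; the run_masked helper is a hypothetical stand-in, consistent with the earlier framework sketch, rather than part of our experimental setup.

def per_term_qe_gain(query_terms, run_masked):
    # Individual (rather than additive) removal: mask just one term per run.
    # run_masked(terms, system) is assumed to rebuild the masked collection,
    # run the named system, and return the chosen metric (e.g., MAP).
    gains = {}
    for term in query_terms:
        qe_score = run_masked({term}, system="QE")
        baseline_score = run_masked({term}, system="keyword-only")
        gains[term] = qe_score - baseline_score  # positive: QE compensates for the masked term
    return gains

Terms (or types of terms) with the largest gains are the ones the QE technique recovers best, which is the kind of evidence that could inform when QE should and should not be applied.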
7. CONCLUSION
The proposed evaluation framework allows us to measure the degree to which different IR systems overcome (or don't overcome) term mismatch between queries and relevant documents. Evaluations of IR systems employing QE that are performed only on the entire collection do not take into account that the purpose of QE is to mitigate the effects of term mismatch in retrieval. By systematically removing query terms from relevant documents, we can measure the degree to which QE contributes to a search by showing the difference between the performances of a QE system and its keyword-only baseline when query terms have been removed from known relevant documents. Further, we can model the behavior of expert versus non-expert users by manipulating the amount of query-document term mismatch introduced into the collection.
The evaluation framework proposed in this paper is attractive for several reasons. Most importantly, it provides a controlled manner in which to measure the performance of QE with respect to query-document term mismatch. In addition, this framework takes advantage of, and stretches, the amount of information we can get from existing test collections. Further, this evaluation framework is not metric-specific: information in terms of any metric (MAP, P@10, etc.) can be gained from evaluating an IR system this way. It should also be noted that this framework is generalizable to any IR system, in that it evaluates how well IR systems satisfy users' information needs as represented by their queries. An IR system that is easy to use should be good at retrieving documents that are relevant to users' information needs, even if the queries provided by the users do not contain the same keywords as the relevant documents.
8. REFERENCES
[1] Amati, G., C. Carpineto, and G. Romano. 2004. Query difficulty, robustness and selective application of query expansion. In Proceedings of the 25th European Conference on Information Retrieval (ECIR 2004), pp. 127-137.
[2] Berger, A. and J.D. Lafferty. 1999. Information retrieval as statistical translation. In Proceedings of SIGIR 1999, pp. 222-229.
[3] Billerbeck, B., F. Scholer, H.E. Williams, and J. Zobel. 2003. Query expansion using associated queries. In Proceedings of CIKM 2003, pp. 2-9.
[4] Billerbeck, B. and J. Zobel. 2003. When Query Expansion Fails. In Proceedings of SIGIR 2003, pp. 387-388.
[5] Billerbeck, B. and J. Zobel. 2004. Questioning Query Expansion: An Examination of Behaviour and Parameters. In Proceedings of the 15th Australasian Database Conference (ADC2004), pp. 69-76.
[6] Billerbeck, B. and J. Zobel. 2005. Document Expansion versus Query Expansion for ad-hoc Retrieval. In Proceedings of the 10th Australasian Document Computing Symposium.
[7] Buckley, C. and E.M. Voorhees. 2000. Evaluating Evaluation Measure Stability. In Proceedings of SIGIR 2000, pp. 33-40.
[8] Buckley, C. and E.M. Voorhees. 2004. Retrieval evaluation with incomplete information. In Proceedings of SIGIR 2004, pp. 25-32.
[9] Carmel, D., E. Yom-Tov, A. Darlow, and D. Pelleg. 2006. What Makes A Query Difficult? In Proceedings of SIGIR 2006, pp. 390-397.
[10] Carpineto, C., R. Mori, and G. Romano. 1998. Informative Term Selection for Automatic Query Expansion. In The 7th Text REtrieval Conference, pp. 363-369.
[11] Carterette, B. and J. Allan. 2005. Incremental Test Collections. In Proceedings of CIKM 2005, pp. 680-687.
[12] Carterette, B., J. Allan, and R. Sitaraman. 2006. Minimal Test Collections for Retrieval Evaluation. In Proceedings of SIGIR 2006, pp. 268-275.
[13] Cormack, G.V., C.R. Palmer, and C.L. Clarke. 1998. Efficient Construction of Large Test Collections. In Proceedings of SIGIR 1998, pp. 282-289.
[14] Cormack, G. and T.R. Lynam. 2006. Statistical Precision of Information Retrieval Evaluation. In Proceedings of SIGIR 2006, pp. 533-540.
[15] Cronen-Townsend, S., Y. Zhou, and W.B. Croft. 2004. A Language Modeling Framework for Selective Query Expansion. CIIR Technical Report.
[16] Efthimiadis, E.N. 1996. Query Expansion. In Martha E. Williams (ed.), Annual Review of Information Systems and Technology (ARIST), v31, pp. 121-187.
[17] Evans, D.A. and R.G. Lefferts. 1995. CLARIT-TREC Experiments. Information Processing & Management, 31(3): 385-395.
[18] Fang, H. and C.X. Zhai. 2006. Semantic Term Matching in Axiomatic Approaches to Information Retrieval. In Proceedings of SIGIR 2006, pp. 115-122.
[19] Gao, J., J. Nie, G. Wu, and G. Cao. 2004. Dependence language model for information retrieval. In Proceedings of SIGIR 2004, pp. 170-177.
[20] Harman, D.K. 1992. Relevance feedback revisited. In Proceedings of ACM SIGIR 1992, pp. 1-10.
[21] Harman, D.K., ed. 1993. The First Text REtrieval Conference (TREC-1): 1992.
[22] Harman, D.K., ed. 1994. The Second Text REtrieval Conference (TREC-2): 1993.
[23] Harman, D.K., ed. 1995. The Third Text REtrieval Conference (TREC-3): 1994.
[24] Harman, D.K. 1988. Towards Interactive Query Expansion. In Proceedings of SIGIR 1988, pp. 321-331.
[25] Hofmann, T. 1999. Probabilistic latent semantic indexing. In Proceedings of SIGIR 1999, pp. 50-57.
[26] Jing, Y. and W.B. Croft. 1994. The Association Thesaurus for Information Retrieval. In Proceedings of RIAO 1994, pp. 146-160.
[27] Lu, X.A. and R.B. Keefer. 1995. Query expansion/reduction and its impact on retrieval effectiveness. In D.K. Harman, ed., The Third Text REtrieval Conference (TREC-3). Gaithersburg, MD: National Institute of Standards and Technology, pp. 231-239.
[28] McNamee, P. and J. Mayfield. 2002. Comparing Cross-Language Query Expansion Techniques by Degrading Translation Resources. In Proceedings of SIGIR 2002, pp. 159-166.
[29] Mitra, M., A. Singhal, and C. Buckley. 1998. Improving Automatic Query Expansion. In Proceedings of SIGIR 1998, pp. 206-214.
[30] Peat, H.J. and P. Willett. 1991. The limitations of term co-occurrence data for query expansion in document retrieval systems. Journal of the American Society for Information Science, 42(5): 378-383.
[31] Ponte, J.M. and W.B. Croft. 1998. A language modeling approach to information retrieval. In Proceedings of SIGIR 1998, pp. 275-281.
[32] Qiu, Y. and H. Frei. 1993. Concept based query expansion. In Proceedings of SIGIR 1993, pp. 160-169.
[33] Robertson, S. 2006. On GMAP - and other transformations. In Proceedings of CIKM 2006, pp. 78-83.
[34] Robertson, S.E. and K. Sparck Jones. 1976. Relevance Weighting of Search Terms. Journal of the American Society for Information Science, 27(3): 129-146.
[35] Robertson, S.E., S. Walker, S. Jones, M.M. Hancock-Beaulieu, and M. Gatford. 1994. Okapi at TREC-2. In D.K. Harman (ed.), The Second Text REtrieval Conference (TREC-2): 1993, pp. 21-34.
[36] Robertson, S.E., S. Walker, S. Jones, M.M. Hancock-Beaulieu, and M. Gatford. 1995. Okapi at TREC-3. In D.K. Harman (ed.), The Third Text REtrieval Conference (TREC-3): 1994, pp. 109-126.
[37] Rocchio, J.J. 1971. Relevance feedback in information retrieval. In G. Salton (ed.), The SMART Retrieval System. Prentice-Hall, Inc., Englewood Cliffs, NJ, pp. 313-323.
[38] Salton, G. 1968. Automatic Information Organization and Retrieval. McGraw-Hill.
[39] Salton, G. 1971. The SMART Retrieval System: Experiments in Automatic Document Processing. Englewood Cliffs, NJ: Prentice-Hall.
[40] Salton, G. 1980. Automatic term class construction using relevance - a summary of work in automatic pseudoclassification. Information Processing & Management, 16(1): 1-15.
[41] Salton, G. and C. Buckley. 1988. On the Use of Spreading Activation Methods in Automatic Information Retrieval. In Proceedings of SIGIR 1988, pp. 147-160.
[42] Sanderson, M. 1994. Word sense disambiguation and information retrieval. In Proceedings of SIGIR 1994, pp. 161-175.
[43] Sanderson, M. and H. Joho. 2004. Forming test collections with no system pooling. In Proceedings of SIGIR 2004, pp. 186-193.
[44] Sanderson, M. and J. Zobel. 2005. Information Retrieval System Evaluation: Effort, Sensitivity, and Reliability. In Proceedings of SIGIR 2005, pp. 162-169.
[45] Smeaton, A.F. and C.J. Van Rijsbergen. 1983. The Retrieval Effects of Query Expansion on a Feedback Document Retrieval System. Computer Journal, 26(3): 239-246.
[46] Song, F. and W.B. Croft. 1999. A general language model for information retrieval. In Proceedings of the Eighth International Conference on Information and Knowledge Management, pp. 316-321.
[47] Sparck Jones, K. 1971. Automatic Keyword Classification for Information Retrieval. London: Butterworths.
[48] Terra, E. and C.L. Clarke. 2004. Scoring missing terms in information retrieval tasks. In Proceedings of CIKM 2004, pp. 50-58.
[49] Turtle, H. 1994. Natural Language vs. Boolean Query Evaluation: A Comparison of Retrieval Performance. In Proceedings of SIGIR 1994, pp. 212-220.
[50] Voorhees, E.M. 1994a. On Expanding Query Vectors with Lexically Related Words. In D.K. Harman, ed., Text REtrieval Conference (TREC-1): 1992.
[51] Voorhees, E.M. 1994b. Query Expansion Using Lexical-Semantic Relations. In Proceedings of SIGIR 1994, pp. 61-69.
A New Approach for Evaluating Query Expansion: Query-Document Term Mismatch ABSTRACT The effectiveness of information retrieval (IR) systems is influenced by the degree of term overlap between user queries and relevant documents. Query-document term mismatch, whether partial or total, is a fact that must be dealt with by IR systems. Query Expansion (QE) is one method for dealing with term mismatch. IR systems implementing query expansion are typically evaluated by executing each query twice, with and without query expansion, and then comparing the two result sets. While this measures an overall change in performance, it does not directly measure the effectiveness of IR systems in overcoming the inherent issue of term mismatch between the query and relevant documents, nor does it provide any insight into how such systems would behave in the presence of query-document term mismatch. In this paper, we propose a new approach for evaluating query expansion techniques. The proposed approach is attractive because it provides an estimate of system performance under varying degrees of query-document term mismatch, it makes use of readily available test collections, and it does not require any additional relevance judgments or any form of manual processing. 1. INTRODUCTION In our domain,' and unlike web search, it is very important for attorneys to find all documents (e.g., cases) that are relevant to an issue. Missing relevant documents may have non-trivial consequences on the outcome of a court proceeding. Attorneys are especially concerned about missing relevant documents when researching a legal topic that is new to them, as they may not be aware of all language variations in such topics. Therefore, it is important to develop information retrieval systems that are robust with respect to language variations or term mismatch between queries and relevant documents. During our work on developing such systems, we concluded that current evaluation methods are not sufficient for this purpose. {Whooping cough, pertussis}, {heart attack, myocardial infarction}, {car wash, automobile cleaning}, {attorney, legal counsel, lawyer} are all examples of things that share the same meaning. Often, the terms chosen by users in their queries are different than those appearing in the documents relevant to their information needs. This query-document term mismatch arises from two sources: (1) the synonymy found in natural language, both at the term and the phrasal level, and (2) the degree to which the user is an expert at searching and/or has expert knowledge in the domain of the collection being searched. IR evaluations are comparative in nature (cf. TREC). Generally, IR evaluations show how System A did in relation to System B on the same test collection based on various precision - and recall-based metrics. Similarly, IR systems with QE capabilities are typically evaluated by executing each search twice, once with and once without query expansion, and then comparing the two result sets. While this approach shows which system may have performed better overall with respect to a particular test collection, it does not directly or systematically measure the effectiveness of IR systems in overcoming query-document term mismatch. If the goal of QE is to increase search performance by mitigating the effects of query-document term mismatch, then the degree to which a system does so should be measurable in evaluation. 
An effective evaluation method should measure the performance of IR systems under varying degrees of query-document term mismatch, not just in terms of overall performance on a collection relative to another system . ' Thomson Corporation builds information based solutions to the professional markets including legal, financial, health care, scientific, and tax and accounting. In order to measure that a particular IR system is able to overcome query-document term mismatch by retrieving documents that are relevant to a user's query, but that do not necessarily contain the query terms themselves, we systematically introduce term mismatch into the test collection by removing query terms from known relevant documents. Because we are purposely inducing term mismatch between the queries and known relevant documents in our test collections, the proposed evaluation framework is able to measure the effectiveness of QE in a way that testing on the whole collection is not. If a QE search method finds a document that is known to be relevant but that is nonetheless missing query terms, it shows that QE technique is indeed robust with respect to query-document term mismatch. 2. RELATED WORK Accounting for term mismatch between the terms in user queries and the documents relevant to users' information needs has been a fundamental issue in IR research for almost 40 years [38, 37, 47]. Query expansion (QE) is one technique used in IR to improve search performance by increasing the likelihood of term overlap (either explicitly or implicitly) between queries and documents that are relevant to users' information needs. Explicit query expansion occurs at run-time, based on the initial search results, as is the case with relevance feedback and pseudo relevance feedback [34, 37]. Implicit query expansion can be based on statistical properties of the document collection, or it may rely on external knowledge sources such as a thesaurus or an ontology [32, 17, 26, 50, 51, 2]. Regardless of method, QE algorithms that are capable of retrieving relevant documents despite partial or total term mismatch between queries and relevant documents should increase the recall of IR systems (by retrieving documents that would have previously been missed) as well as their precision (by retrieving more relevant documents). In practice, QE tends to improve the average overall retrieval performance, doing so by improving performance on some queries while making it worse on others. QE techniques are judged as effective in the case that they help more than they hurt overall on a particular collection [47, 45, 41, 27]. Often, the expansion terms added to a query in the query expansion phase end up hurting the overall retrieval performance because they introduce semantic noise, causing the meaning of the query to drift. As such, much work has been done with respect to different strategies for choosing semantically relevant QE terms to include in order to avoid query drift [34, 50, 51, 18, 24, 29, 30, 32, 3, 4, 5]. The evaluation of IR systems has received much attention in the research community, both in terms of developing test collections for the evaluation of different systems [11, 12, 13, 43] and in terms of the utility of evaluation metrics such as recall, precision, mean average precision, precision at rank, Bpref, etc. [7, 8, 44, 14]. In addition, there have been comparative evaluations of different QE techniques on various test collections [47, 45, 41]. 
In addition, the IR research community has given attention to differences between the performance of individual queries. Research efforts have been made to predict which queries will be improved by QE and then selectively applying it only to those queries [1, 5, 27, 29, 15, 48], to achieve optimal overall performance. In addition, related work on predicting query difficulty, or which queries are likely to perform poorly, has been done [1, 4, 5, 9]. There is general interest in the research community to improve the robustness of IR systems by improving retrieval performance on difficult queries, as is evidenced by the Robust track in the TREC competitions and new evaluation measures such as GMAP. GMAP (geometric mean average precision) gives more weight to the lower end of the average precision (as opposed to MAP), thereby emphasizing the degree to which difficult or poorly performing queries contribute to the score [33]. However, no attention is given to evaluating the robustness of IR systems implementing QE with respect to querydocument term mismatch in quantifiable terms. By purposely inducing mismatch between the terms in queries and relevant documents, our evaluation framework allows us a controlled manner in which to degrade the quality of the queries with respect to their relevant documents, and then to measure the both the degree of (induced) difficulty of the query and the degree to which QE improves the retrieval performance of the degraded query. The work most similar to our own in the literature consists of work in which document collections or queries are altered in a systematic way to measure differences query performance. [42] introduces into the document collection pseudowords that are ambiguous with respect to word sense, in order to measure the degree to which word sense disambiguation is useful in IR. [6] experiments with altering the document collection by adding semantically related expansion terms to documents at indexing time. In cross-language IR, [28] explores different query expansion techniques while purposely degrading their translation resources, in what amounts to expanding a query with only a controlled percentage of its translation terms. Although similar in introducing a controlled amount of variance into their test collections, these works differ from the work being presented in this paper in that the work being presented here explicitly and systematically measures query effectiveness in the presence of query-document term mismatch. 3. METHODOLOGY 4. EXPERIMENTAL SET-UP 4.1 IR Systems 4.2 Test Collections 4.3 Query Term Removal Strategy 4.4 Metrics 5. RESULTS 5.1 FSupp Collection 5.2 The AP89 Collection: using the description queries 5.3 The AP89 Collection: using the title queries 7. CONCLUSION The proposed evaluation framework allows us to measure the degree to which different IR systems overcome (or don't overcome) term mismatch between queries and relevant documents. Evaluations of IR systems employing QE performed only on the entire collection do not take into account that the purpose of QE is to mitigate the effects of term mismatch in retrieval. By systematically removing query terms from relevant documents, we can measure the degree to which QE contributes to a search by showing the difference between the performances of a QE system and its keywordonly baseline when query terms have been removed from known relevant documents. 
Further, we can model the behavior of expert versus non-expert users by manipulating the amount of query-document term mismatch introduced into the collection. The evaluation framework proposed in this paper is attractive for several reasons. Most importantly, it provides a controlled manner in which to measure the performance of QE with respect to query-document term mismatch. In addition, this framework takes advantage and stretches the amount of information we can get from existing test collections. Further, this evaluation framework is not metricspecific: information in terms of any metric (MAP, P@10, etc.) can be gained from evaluating an IR system this way. It should also be noted that this framework is generalizable to any IR system, in that it evaluates how well IR systems evaluate users' information needs as represented by their queries. An IR system that is easy to use should be good at retrieving documents that are relevant to users' information needs, even if the queries provided by the users do not contain the same keywords as the relevant documents.
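The degradation procedure and the QE-versus-baseline comparison described above can be pictured with a short sketch. This is an editorial illustration only: the data structures (queries as term sets, qrels as query-to-relevant-document mappings, documents as token lists) and the injected run_system/evaluate callables are assumptions, not the framework's actual implementation, and a document relevant to several queries is degraded for each of them as a simplification.

```python
import math
from typing import Callable, Dict, List, Set

def degrade_collection(queries: Dict[str, Set[str]],
                       qrels: Dict[str, Set[str]],
                       docs: Dict[str, List[str]],
                       removal_fraction: float = 1.0) -> Dict[str, List[str]]:
    """Induce query-document term mismatch by deleting a fraction of each
    query's terms from the documents judged relevant to that query."""
    degraded = {doc_id: list(tokens) for doc_id, tokens in docs.items()}
    for qid, qterms in queries.items():
        n_remove = int(round(len(qterms) * removal_fraction))
        if n_remove == 0:
            continue
        to_remove = set(sorted(qterms)[:n_remove])   # deterministic choice of terms
        for doc_id in qrels.get(qid, set()):
            degraded[doc_id] = [t for t in degraded[doc_id] if t not in to_remove]
    return degraded

def gmap(average_precisions: List[float], eps: float = 1e-5) -> float:
    """Geometric mean average precision; the epsilon floor is a common
    convention so one zero-AP query does not drive the whole score to zero."""
    return math.exp(sum(math.log(max(ap, eps)) for ap in average_precisions)
                    / len(average_precisions))

def mismatch_profile(run_system: Callable, evaluate: Callable,
                     queries, qrels, docs,
                     fractions=(0.0, 0.25, 0.5, 0.75, 1.0)) -> Dict[float, float]:
    """Score one retrieval system (e.g. a QE system or its keyword-only
    baseline) across increasing degrees of induced term mismatch."""
    return {frac: evaluate(run_system(queries,
                                      degrade_collection(queries, qrels, docs, frac)),
                           qrels)
            for frac in fractions}
```

Comparing the profiles of a QE system and its keyword-only baseline then shows how much of the QE system's advantage survives as the induced mismatch grows, which is exactly the quantity the framework is after.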
A New Approach for Evaluating Query Expansion: Query-Document Term Mismatch ABSTRACT The effectiveness of information retrieval (IR) systems is influenced by the degree of term overlap between user queries and relevant documents. Query-document term mismatch, whether partial or total, is a fact that must be dealt with by IR systems. Query Expansion (QE) is one method for dealing with term mismatch. IR systems implementing query expansion are typically evaluated by executing each query twice, with and without query expansion, and then comparing the two result sets. While this measures an overall change in performance, it does not directly measure the effectiveness of IR systems in overcoming the inherent issue of term mismatch between the query and relevant documents, nor does it provide any insight into how such systems would behave in the presence of query-document term mismatch. In this paper, we propose a new approach for evaluating query expansion techniques. The proposed approach is attractive because it provides an estimate of system performance under varying degrees of query-document term mismatch, it makes use of readily available test collections, and it does not require any additional relevance judgments or any form of manual processing. 1. INTRODUCTION In our domain,' and unlike web search, it is very important for attorneys to find all documents (e.g., cases) that are relevant to an issue. Missing relevant documents may have non-trivial consequences on the outcome of a court proceeding. Therefore, it is important to develop information retrieval systems that are robust with respect to language variations or term mismatch between queries and relevant documents. During our work on developing such systems, we concluded that current evaluation methods are not sufficient for this purpose. Often, the terms chosen by users in their queries are different than those appearing in the documents relevant to their information needs. IR evaluations are comparative in nature (cf. TREC). Generally, IR evaluations show how System A did in relation to System B on the same test collection based on various precision - and recall-based metrics. Similarly, IR systems with QE capabilities are typically evaluated by executing each search twice, once with and once without query expansion, and then comparing the two result sets. While this approach shows which system may have performed better overall with respect to a particular test collection, it does not directly or systematically measure the effectiveness of IR systems in overcoming query-document term mismatch. If the goal of QE is to increase search performance by mitigating the effects of query-document term mismatch, then the degree to which a system does so should be measurable in evaluation. An effective evaluation method should measure the performance of IR systems under varying degrees of query-document term mismatch, not just in terms of overall performance on a collection relative to another system . ' In order to measure that a particular IR system is able to overcome query-document term mismatch by retrieving documents that are relevant to a user's query, but that do not necessarily contain the query terms themselves, we systematically introduce term mismatch into the test collection by removing query terms from known relevant documents. 
Because we are purposely inducing term mismatch between the queries and known relevant documents in our test collections, the proposed evaluation framework is able to measure the effectiveness of QE in a way that testing on the whole collection is not. If a QE search method finds a document that is known to be relevant but that is nonetheless missing query terms, it shows that QE technique is indeed robust with respect to query-document term mismatch. 2. RELATED WORK Accounting for term mismatch between the terms in user queries and the documents relevant to users' information needs has been a fundamental issue in IR research for almost 40 years [38, 37, 47]. Query expansion (QE) is one technique used in IR to improve search performance by increasing the likelihood of term overlap (either explicitly or implicitly) between queries and documents that are relevant to users' information needs. Explicit query expansion occurs at run-time, based on the initial search results, as is the case with relevance feedback and pseudo relevance feedback [34, 37]. Implicit query expansion can be based on statistical properties of the document collection, or it may rely on external knowledge sources such as a thesaurus or an ontology [32, 17, 26, 50, 51, 2]. Regardless of method, QE algorithms that are capable of retrieving relevant documents despite partial or total term mismatch between queries and relevant documents should increase the recall of IR systems (by retrieving documents that would have previously been missed) as well as their precision (by retrieving more relevant documents). In practice, QE tends to improve the average overall retrieval performance, doing so by improving performance on some queries while making it worse on others. Often, the expansion terms added to a query in the query expansion phase end up hurting the overall retrieval performance because they introduce semantic noise, causing the meaning of the query to drift. As such, much work has been done with respect to different strategies for choosing semantically relevant QE terms to include in order to avoid query drift [34, 50, 51, 18, 24, 29, 30, 32, 3, 4, 5]. In addition, there have been comparative evaluations of different QE techniques on various test collections [47, 45, 41]. In addition, the IR research community has given attention to differences between the performance of individual queries. In addition, related work on predicting query difficulty, or which queries are likely to perform poorly, has been done [1, 4, 5, 9]. There is general interest in the research community to improve the robustness of IR systems by improving retrieval performance on difficult queries, as is evidenced by the Robust track in the TREC competitions and new evaluation measures such as GMAP. However, no attention is given to evaluating the robustness of IR systems implementing QE with respect to querydocument term mismatch in quantifiable terms. By purposely inducing mismatch between the terms in queries and relevant documents, our evaluation framework allows us a controlled manner in which to degrade the quality of the queries with respect to their relevant documents, and then to measure the both the degree of (induced) difficulty of the query and the degree to which QE improves the retrieval performance of the degraded query. The work most similar to our own in the literature consists of work in which document collections or queries are altered in a systematic way to measure differences query performance. 
[42] introduces into the document collection pseudowords that are ambiguous with respect to word sense, in order to measure the degree to which word sense disambiguation is useful in IR. [6] experiments with altering the document collection by adding semantically related expansion terms to documents at indexing time. In cross-language IR, [28] explores different query expansion techniques while purposely degrading their translation resources, in what amounts to expanding a query with only a controlled percentage of its translation terms. Although similar in introducing a controlled amount of variance into their test collections, these works differ from the work being presented in this paper in that the work being presented here explicitly and systematically measures query effectiveness in the presence of query-document term mismatch. 7. CONCLUSION The proposed evaluation framework allows us to measure the degree to which different IR systems overcome (or don't overcome) term mismatch between queries and relevant documents. Evaluations of IR systems employing QE performed only on the entire collection do not take into account that the purpose of QE is to mitigate the effects of term mismatch in retrieval. By systematically removing query terms from relevant documents, we can measure the degree to which QE contributes to a search by showing the difference between the performances of a QE system and its keywordonly baseline when query terms have been removed from known relevant documents. Further, we can model the behavior of expert versus non-expert users by manipulating the amount of query-document term mismatch introduced into the collection. The evaluation framework proposed in this paper is attractive for several reasons. Most importantly, it provides a controlled manner in which to measure the performance of QE with respect to query-document term mismatch. In addition, this framework takes advantage and stretches the amount of information we can get from existing test collections. Further, this evaluation framework is not metricspecific: information in terms of any metric (MAP, P@10, etc.) can be gained from evaluating an IR system this way. It should also be noted that this framework is generalizable to any IR system, in that it evaluates how well IR systems evaluate users' information needs as represented by their queries. An IR system that is easy to use should be good at retrieving documents that are relevant to users' information needs, even if the queries provided by the users do not contain the same keywords as the relevant documents.
H-60
A Frequency-based and a Poisson-based Definition of the Probability of Being Informative
This paper reports on theoretical investigations about the assumptions underlying the inverse document frequency (idf). We show that an intuitive idf-based probability function for the probability of a term being informative assumes disjoint document events. By assuming documents to be independent rather than disjoint, we arrive at a Poisson-based probability of being informative. The framework is useful for understanding and deciding the parameter estimation and combination in probabilistic retrieval models.
[ "inform", "invers document frequenc", "invers document frequenc", "idf", "probabl function", "frequenc-base probabl", "poisson-base probabl", "document disjoint", "nois probabl", "inform retriev", "probabl theori", "collect space", "probabilist inform retriev", "poisson distribut", "inform theori", "independ assumpt" ]
[ "P", "P", "P", "P", "P", "M", "M", "R", "M", "R", "M", "U", "R", "U", "M", "R" ]
A Frequency-based and a Poisson-based Definition of the Probability of Being Informative Thomas Roelleke Department of Computer Science Queen Mary University of London thor@dcs.qmul.ac.uk ABSTRACT This paper reports on theoretical investigations about the assumptions underlying the inverse document frequency (idf ). We show that an intuitive idf -based probability function for the probability of a term being informative assumes disjoint document events. By assuming documents to be independent rather than disjoint, we arrive at a Poisson-based probability of being informative. The framework is useful for understanding and deciding the parameter estimation and combination in probabilistic retrieval models. Categories and Subject Descriptors H.3.3 [Information Search and Retrieval]: Retrieval models General Terms Theory 1. INTRODUCTION AND BACKGROUND The inverse document frequency (idf ) is one of the most successful parameters for a relevance-based ranking of retrieved objects. With N being the total number of documents, and n(t) being the number of documents in which term t occurs, the idf is defined as follows: idf(t) := − log n(t) N , 0 <= idf(t) < ∞ Ranking based on the sum of the idf -values of the query terms that occur in the retrieved documents works well, this has been shown in numerous applications. Also, it is well known that the combination of a document-specific term weight and idf works better than idf alone. This approach is known as tf-idf , where tf(t, d) (0 <= tf(t, d) <= 1) is the so-called term frequency of term t in document d. The idf reflects the discriminating power (informativeness) of a term, whereas the tf reflects the occurrence of a term. The idf alone works better than the tf alone does. An explanation might be the problem of tf with terms that occur in many documents; let us refer to those terms as noisy terms. We use the notion of noisy terms rather than frequent terms since frequent terms leaves open whether we refer to the document frequency of a term in a collection or to the so-called term frequency (also referred to as withindocument frequency) of a term in a document. We associate noise with the document frequency of a term in a collection, and we associate occurrence with the withindocument frequency of a term. The tf of a noisy term might be high in a document, but noisy terms are not good candidates for representing a document. Therefore, the removal of noisy terms (known as stopword removal) is essential when applying tf . In a tf-idf approach, the removal of stopwords is conceptually obsolete, if stopwords are just words with a low idf . From a probabilistic point of view, tf is a value with a frequency-based probabilistic interpretation whereas idf has an informative rather than a probabilistic interpretation. The missing probabilistic interpretation of idf is a problem in probabilistic retrieval models where we combine uncertain knowledge of different dimensions (e.g.: informativeness of terms, structure of documents, quality of documents, age of documents, etc.) such that a good estimate of the probability of relevance is achieved. An intuitive solution is a normalisation of idf such that we obtain values in the interval [0; 1]. For example, consider a normalisation based on the maximal idf -value. Let T be the set of terms occurring in a collection. 
$P_{freq}(t \text{ is informative}) := \frac{idf(t)}{maxidf}$
$maxidf := \max(\{idf(t) \mid t \in T\}), \quad maxidf \le -\log(1/N)$
$minidf := \min(\{idf(t) \mid t \in T\}), \quad minidf \ge 0$
$\frac{minidf}{maxidf} \le P_{freq}(t \text{ is informative}) \le 1.0$
This frequency-based probability function covers the interval [0; 1] if the minimal idf is equal to zero, which is the case if we have at least one term that occurs in all documents. Can we interpret $P_{freq}$, the normalised idf, as the probability that the term is informative? When investigating the probabilistic interpretation of the normalised idf, we made several observations related to disjointness and independence of document events. These observations are reported in section 3. We show in section 3.1 that the frequency-based noise probability $n(t)/N$ used in the classic idf-definition can be explained by three assumptions: binary term occurrence, constant document containment and disjointness of document containment events. In section 3.2 we show that by assuming independence of documents, we obtain $1 - e^{-1} \approx 1 - 0.37$ as the upper bound of the noise probability of a term. The value $e^{-1}$ is related to the logarithm and we investigate in section 3.3 the link to information theory. In section 4, we link the results of the previous sections to probability theory. We show the steps from possible worlds to binomial distribution and Poisson distribution. In section 5, we emphasise that the theoretical framework of this paper is applicable for both idf and tf. Finally, in section 6, we base the definition of the probability of being informative on the results of the previous sections and compare frequency-based and Poisson-based definitions.
2. BACKGROUND
The relationship between frequencies, probabilities and information theory (entropy) has been the focus of many researchers. In this background section, we focus on work that investigates the application of the Poisson distribution in IR since a main part of the work presented in this paper addresses the underlying assumptions of Poisson. [4] proposes a 2-Poisson model that takes into account the different nature of relevant and non-relevant documents, rare terms (content words) and frequent terms (noisy terms, function words, stopwords). [9] shows experimentally that most of the terms (words) in a collection are distributed according to a low dimension n-Poisson model. [10] uses a 2-Poisson model for including term frequency-based probabilities in the probabilistic retrieval model. The non-linear scaling of the Poisson function showed significant improvement compared to a linear frequency-based probability. The Poisson model was here applied to the term frequency of a term in a document. We will generalise the discussion by pointing out that document frequency and term frequency are dual parameters in the collection space and the document space, respectively. Our discussion of the Poisson distribution focuses on the document frequency in a collection rather than on the term frequency in a document. [7] and [6] address the deviation of idf and Poisson, and apply Poisson mixtures to achieve better Poisson-based estimates. The results proved again experimentally that a one-dimensional Poisson does not work for rare terms, therefore Poisson mixtures and additional parameters are proposed. [3], section 3.3, illustrates and summarises comprehensively the relationships between frequencies, probabilities and Poisson. Different definitions of idf are put into context and a notion of noise is defined, where noise is viewed as the complement of idf.
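As a small, purely illustrative sketch (toy document frequencies assumed, not data from the paper), the normalised idf just discussed can be computed as follows.

```python
import math

def idf(n_t: int, N: int) -> float:
    # idf(t) = -log(n(t)/N)
    return -math.log(n_t / N)

def p_freq_informative(df: dict, N: int) -> dict:
    """Normalised idf: idf(t) / maxidf, i.e. the frequency-based probability
    of being informative introduced above."""
    idfs = {t: idf(n, N) for t, n in df.items()}
    max_idf = max(idfs.values())
    return {t: value / max_idf for t, value in idfs.items()}

df = {"the": 1000, "retrieval": 120, "poisson": 3}   # toy document frequencies
print(p_freq_informative(df, N=1000))
# "the" occurs in every document and gets 0.0; "poisson", the rarest term, gets 1.0
```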
We use in our paper a different notion of noise: we consider a frequency-based noise that corresponds to the document frequency, and we consider a term noise that is based on the independence of document events. [11], [12], [8] and [1] link frequencies and probability estimation to information theory. [12] establishes a framework in which information retrieval models are formalised based on probabilistic inference. A key component is the use of a space of disjoint events, where the framework mainly uses terms as disjoint events. The probability of being informative defined in our paper can be viewed as the probability of the disjoint terms in the term space of [12]. [8] address entropy and bibliometric distributions. Entropy is maximal if all events are equiprobable and the frequency-based Lotka law (N/iλ is the number of scientists that have written i publications, where N and λ are distribution parameters), Zipf and the Pareto distribution are related. The Pareto distribution is the continuous case of the Lotka and Lotka and Zipf show equivalences. The Pareto distribution is used by [2] for term frequency normalisation. The Pareto distribution compares to the Poisson distribution in the sense that Pareto is fat-tailed, i. e. Pareto assigns larger probabilities to large numbers of events than Poisson distributions do. This makes Pareto interesting since Poisson is felt to be too radical on frequent events. We restrict in this paper to the discussion of Poisson, however, our results show that indeed a smoother distribution than Poisson promises to be a good candidate for improving the estimation of probabilities in information retrieval. [1] establishes a theoretical link between tf-idf and information theory and the theoretical research on the meaning of tf-idf clarifies the statistical model on which the different measures are commonly based. This motivation matches the motivation of our paper: We investigate theoretically the assumptions of classical idf and Poisson for a better understanding of parameter estimation and combination. 3. FROM DISJOINT TO INDEPENDENT We define and discuss in this section three probabilities: The frequency-based noise probability (definition 1), the total noise probability for disjoint documents (definition 2). and the noise probability for independent documents (definition 3). 3.1 Binary occurrence, constant containment and disjointness of documents We show in this section, that the frequency-based noise probability n(t) N in the idf definition can be explained as a total probability with binary term occurrence, constant document containment and disjointness of document containments. We refer to a probability function as binary if for all events the probability is either 1.0 or 0.0. The occurrence probability P(t|d) is binary, if P(t|d) is equal to 1.0 if t ∈ d, and P(t|d) is equal to 0.0, otherwise. P(t|d) is binary : ⇐⇒ P(t|d) = 1.0 ∨ P(t|d) = 0.0 We refer to a probability function as constant if for all events the probability is equal. The document containment probability reflect the chance that a document occurs in a collection. This containment probability is constant if we have no information about the document containment or we ignore that documents differ in containment. Containment could be derived, for example, from the size, quality, age, links, etc. of a document. For a constant containment in a collection with N documents, 1 N is often assumed as the containment probability. 
We generalise this definition and introduce the constant λ where 0 ≤ λ ≤ N. The containment of a document d depends on the collection c; this is reflected by the notation $P(d|c)$ used for the containment of a document.
$P(d|c) \text{ is constant} :\Longleftrightarrow \forall d: P(d|c) = \frac{\lambda}{N}$
For disjoint documents that cover the whole event space, we set λ = 1 and obtain $\sum_d P(d|c) = 1.0$. Next, we define the frequency-based noise probability and the total noise probability for disjoint documents. We introduce the event notation "t is noisy" and "t occurs" for making the difference between the noise probability P(t is noisy|c) in a collection and the occurrence probability P(t occurs|d) in a document more explicit, thereby keeping in mind that the noise probability corresponds to the occurrence probability of a term in a collection.
Definition 1. The frequency-based term noise probability:
$P_{freq}(t \text{ is noisy}|c) := \frac{n(t)}{N}$
Definition 2. The total term noise probability for disjoint documents:
$P_{dis}(t \text{ is noisy}|c) := \sum_d P(t \text{ occurs}|d) \cdot P(d|c)$
Now, we can formulate a theorem that makes assumptions explicit that explain the classical idf.
Theorem 1. IDF assumptions: If the occurrence probability P(t|d) of term t over documents d is binary, and the containment probability P(d|c) of documents d is constant, and document containments are disjoint events, then the noise probability for disjoint documents is equal to the frequency-based noise probability.
$P_{dis}(t \text{ is noisy}|c) = P_{freq}(t \text{ is noisy}|c)$
Proof. The assumptions are:
$\forall d: (P(t \text{ occurs}|d) = 1 \lor P(t \text{ occurs}|d) = 0) \;\land\; P(d|c) = \frac{\lambda}{N} \;\land\; \sum_d P(d|c) = 1.0$
We obtain:
$P_{dis}(t \text{ is noisy}|c) = \sum_{d \mid t \in d} \frac{1}{N} = \frac{n(t)}{N} = P_{freq}(t \text{ is noisy}|c)$
The above result is not a surprise but it is a mathematical formulation of assumptions that can be used to explain the classical idf. The assumptions make explicit that the different types of term occurrence in documents (frequency of a term, importance of a term, position of a term, document part where the term occurs, etc.) and the different types of document containment (size, quality, age, etc.) are ignored, and document containments are considered as disjoint events. From the assumptions, we can conclude that idf (frequency-based noise, respectively) is a relatively simple but strict estimate. Still, idf works well. This could be explained by a leverage effect that justifies the binary occurrence and constant containment: The term occurrence for small documents tends to be larger than for large documents, whereas the containment for small documents tends to be smaller than for large documents. From that point of view, idf means that $P(t \land d|c)$ is constant for all d in which t occurs, and $P(t \land d|c)$ is zero otherwise. The occurrence and containment can be term specific. For example, set $P(t \land d|c) = 1/N_D(c)$ if t occurs in d, where $N_D(c)$ is the number of documents in collection c (we used before just N). We choose a document-dependent occurrence $P(t|d) := 1/N_T(d)$, i.e. the occurrence probability is equal to the inverse of $N_T(d)$, which is the total number of terms in document d. Next, we choose the containment $P(d|c) := N_T(d)/N_T(c) \cdot N_T(c)/N_D(c)$ where $N_T(d)/N_T(c)$ is a document length normalisation (number of terms in document d divided by the number of terms in collection c), and $N_T(c)/N_D(c)$ is a constant factor of the collection (number of terms in collection c divided by the number of documents in collection c). We obtain $P(t \land d|c) = 1/N_D(c)$. In a tf-idf retrieval function, the tf-component reflects the occurrence probability of a term in a document.
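Theorem 1 above is easy to verify on a toy collection; the sketch below is an editorial check, with documents simply modelled as sets of terms, binary occurrence, constant containment 1/N and disjoint document events.

```python
def total_noise_disjoint(term: str, docs: list) -> float:
    """P_dis(t is noisy|c) = sum_d P(t occurs|d) * P(d|c) with binary
    occurrence and constant containment 1/N (disjoint document events)."""
    N = len(docs)
    return sum((1.0 if term in d else 0.0) * (1.0 / N) for d in docs)

docs = [{"a", "b"}, {"a", "c"}, {"b", "c"}, {"a"}]
n_t = sum(1 for d in docs if "a" in d)               # n(t) = 3
# the total noise for disjoint documents equals the document frequency n(t)/N
assert abs(total_noise_disjoint("a", docs) - n_t / len(docs)) < 1e-12
```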
This is a further explanation why we can estimate the idf with a simple P(t|d), since the combined tf-idf contains the occurrence probability. The containment probability corresponds to a document normalisation (document length normalisation, pivoted document length) and is normally attached to the tf-component or the tf-idf product. The disjointness assumption is typical for frequency-based probabilities. From a probability theory point of view, we can consider documents as disjoint events, in order to achieve a sound theoretical model for explaining the classical idf. But does disjointness reflect the real world where the containment of a document appears to be independent of the containment of another document? In the next section, we replace the disjointness assumption by the independence assumption.
3.2 The upper bound of the noise probability for independent documents
For independent documents, we compute the probability of a disjunction as usual, namely as the complement of the probability of the conjunction of the negated events:
$P(d_1 \lor \ldots \lor d_N) = 1 - P(\neg d_1 \land \ldots \land \neg d_N) = 1 - \prod_d (1 - P(d))$
The noise probability can be considered as the conjunction of the term occurrence and the document containment.
$P(t \text{ is noisy}|c) := P(t \text{ occurs} \land (d_1 \lor \ldots \lor d_N)|c)$
For disjoint documents, this view of the noise probability led to definition 2. For independent documents, we use now the conjunction of negated events.
Definition 3. The term noise probability for independent documents:
$P_{in}(t \text{ is noisy}|c) := 1 - \prod_d (1 - P(t \text{ occurs}|d) \cdot P(d|c))$
With binary occurrence and a constant containment $P(d|c) := \lambda/N$, we obtain the term noise of a term t that occurs in n(t) documents:
$P_{in}(t \text{ is noisy}|c) = 1 - \left(1 - \frac{\lambda}{N}\right)^{n(t)}$
For binary occurrence and disjoint documents, the containment probability was 1/N. Now, with independent documents, we can use λ as a collection parameter that controls the average containment probability. We show through the next theorem that the upper bound of the noise probability depends on λ.
Theorem 2. The upper bound of being noisy: If the occurrence P(t|d) is binary, and the containment P(d|c) is constant, and document containments are independent events, then $1 - e^{-\lambda}$ is the upper bound of the noise probability.
$\forall t: P_{in}(t \text{ is noisy}|c) < 1 - e^{-\lambda}$
Proof. The upper bound of the independent noise probability follows from the limit $\lim_{N \to \infty} (1 + \frac{x}{N})^N = e^x$ (see any comprehensive math book, for example, [5], for the convergence equation of the Euler function). With $x = -\lambda$, we obtain:
$\lim_{N \to \infty} \left(1 - \frac{\lambda}{N}\right)^N = e^{-\lambda}$
For the term noise, we have:
$P_{in}(t \text{ is noisy}|c) = 1 - \left(1 - \frac{\lambda}{N}\right)^{n(t)}$
$P_{in}(t \text{ is noisy}|c)$ is strictly monotonous: The noise of a term $t_n$ is less than the noise of a term $t_{n+1}$, where $t_n$ occurs in n documents and $t_{n+1}$ occurs in n + 1 documents. Therefore, a term with n = N has the largest noise probability. For a collection with infinitely many documents, the upper bound of the noise probability for terms $t_N$ that occur in all documents becomes:
$\lim_{N \to \infty} P_{in}(t_N \text{ is noisy}) = \lim_{N \to \infty} 1 - \left(1 - \frac{\lambda}{N}\right)^N = 1 - e^{-\lambda}$
By applying an independence rather than a disjointness assumption, we obtain the probability $e^{-1}$ that a term is not noisy even if the term does occur in all documents. In the disjoint case, the noise probability is one for a term that occurs in all documents. If we view $P(d|c) := \lambda/N$ as the average containment, then λ is large for a term that occurs mostly in large documents, and λ is small for a term that occurs mostly in small documents.
Thus, the noise of a term t is large if t occurs in n(t) large documents and the noise is smaller if t occurs in small documents. Alternatively, we can assume a constant containment and a term-dependent occurrence. If we assume $P(d|c) := 1$, then $P(t|d) := \lambda/N$ can be interpreted as the average probability that t represents a document. The common assumption is that the average containment or occurrence probability is proportional to n(t). However, here is additional potential: The statistical laws (see [3] on Luhn and Zipf) indicate that the average probability could follow a normal distribution, i.e. small probabilities for small n(t) and large n(t), and larger probabilities for medium n(t). For the monotonous case we investigate here, the noise of a term with n(t) = 1 is equal to $1 - (1 - \lambda/N) = \lambda/N$ and the noise of a term with n(t) = N is close to $1 - e^{-\lambda}$. In the next section, we relate the value $e^{-\lambda}$ to information theory.
3.3 The probability of a maximal informative signal
The probability $e^{-1}$ is special in the sense that a signal with that probability is a signal with maximal information as derived from the entropy definition. Consider the definition of the entropy contribution H(t) of a signal t.
$H(t) := P(t) \cdot -\ln P(t)$
We form the first derivation for computing the optimum.
$\frac{\partial H(t)}{\partial P(t)} = -\ln P(t) + \frac{-1}{P(t)} \cdot P(t) = -(1 + \ln P(t))$
For obtaining optima, we use:
$0 = -(1 + \ln P(t))$
The entropy contribution H(t) is maximal for $P(t) = e^{-1}$. This result does not depend on the base of the logarithm as we see next:
$\frac{\partial H(t)}{\partial P(t)} = -\log_b P(t) + \frac{-1}{P(t) \cdot \ln b} \cdot P(t) = -\left(\frac{1}{\ln b} + \log_b P(t)\right) = -\frac{1 + \ln P(t)}{\ln b}$
We summarise this result in the following theorem:
Theorem 3. The probability of a maximal informative signal: The probability $P_{max} = e^{-1} \approx 0.37$ is the probability of a maximal informative signal. The entropy of a maximal informative signal is $H_{max} = e^{-1}$.
Proof. The probability and entropy follow from the derivation above.
The complement of the maximal noise probability is $e^{-\lambda}$ and we are looking now for a generalisation of the entropy definition such that $e^{-\lambda}$ is the probability of a maximal informative signal. We can generalise the entropy definition by computing the integral of $\lambda + \ln P(t)$, i.e. this derivation is zero for $e^{-\lambda}$. We obtain a generalised entropy:
$\int -(\lambda + \ln P(t)) \, d(P(t)) = P(t) \cdot (1 - \lambda - \ln P(t))$
The generalised entropy corresponds for λ = 1 to the classical entropy. By moving from disjoint to independent documents, we have established a link between the complement of the noise probability of a term that occurs in all documents and information theory. Next, we link independent documents to probability theory.
4. THE LINK TO PROBABILITY THEORY
We review for independent documents three concepts of probability theory: possible worlds, binomial distribution and Poisson distribution.
4.1 Possible Worlds
Each conjunction of document events (for each document, we consider two document events: the document can be true or false) is associated with a so-called possible world. For example, consider the eight possible worlds for three documents (N = 3).
world w: conjunction
$w_7$: $d_1 \land d_2 \land d_3$
$w_6$: $d_1 \land d_2 \land \neg d_3$
$w_5$: $d_1 \land \neg d_2 \land d_3$
$w_4$: $d_1 \land \neg d_2 \land \neg d_3$
$w_3$: $\neg d_1 \land d_2 \land d_3$
$w_2$: $\neg d_1 \land d_2 \land \neg d_3$
$w_1$: $\neg d_1 \land \neg d_2 \land d_3$
$w_0$: $\neg d_1 \land \neg d_2 \land \neg d_3$
With each world w, we associate a probability µ(w), which is equal to the product of the single probabilities of the document events.
world w: probability µ(w)
$w_7$: $(\lambda/N)^3 \cdot (1 - \lambda/N)^0$
$w_6$: $(\lambda/N)^2 \cdot (1 - \lambda/N)^1$
$w_5$: $(\lambda/N)^2 \cdot (1 - \lambda/N)^1$
$w_4$: $(\lambda/N)^1 \cdot (1 - \lambda/N)^2$
$w_3$: $(\lambda/N)^2 \cdot (1 - \lambda/N)^1$
$w_2$: $(\lambda/N)^1 \cdot (1 - \lambda/N)^2$
$w_1$: $(\lambda/N)^1 \cdot (1 - \lambda/N)^2$
$w_0$: $(\lambda/N)^0 \cdot (1 - \lambda/N)^3$
The sum over the possible worlds in which k documents are true and N − k documents are false is equal to the probability function of the binomial distribution, since the binomial coefficient yields the number of possible worlds in which k documents are true.
4.2 Binomial distribution
The binomial probability function yields the probability that k of N events are true where each event is true with the single event probability p.
$P(k) := binom(N, k, p) := \binom{N}{k} p^k (1-p)^{N-k}$
The single event probability is usually defined as p := λ/N, i.e. p is inversely proportional to N, the total number of events. With this definition of p, we obtain for an infinite number of documents the following limit for the product of the binomial coefficient and $p^k$:
$\lim_{N \to \infty} \binom{N}{k} p^k = \lim_{N \to \infty} \frac{N \cdot (N-1) \cdot \ldots \cdot (N-k+1)}{k!} \left(\frac{\lambda}{N}\right)^k = \frac{\lambda^k}{k!}$
The limit is close to the actual value for $k \ll N$. For large k, the actual value is smaller than the limit. The limit of $(1-p)^{N-k}$ follows from the limit $\lim_{N \to \infty} (1 + \frac{x}{N})^N = e^x$.
$\lim_{N \to \infty} (1-p)^{N-k} = \lim_{N \to \infty} \left(1 - \frac{\lambda}{N}\right)^{N-k} = \lim_{N \to \infty} e^{-\lambda} \cdot \left(1 - \frac{\lambda}{N}\right)^{-k} = e^{-\lambda}$
Again, the limit is close to the actual value for $k \ll N$. For large k, the actual value is larger than the limit.
4.3 Poisson distribution
For an infinite number of events, the Poisson probability function is the limit of the binomial probability function.
$\lim_{N \to \infty} binom(N, k, p) = \frac{\lambda^k}{k!} \cdot e^{-\lambda}$
$P(k) = poisson(k, \lambda) := \frac{\lambda^k}{k!} \cdot e^{-\lambda}$
The probability $poisson(0, 1)$ is equal to $e^{-1}$, which is the probability of a maximal informative signal. This shows the relationship of the Poisson distribution and information theory. After seeing the convergence of the binomial distribution, we can choose the Poisson distribution as an approximation of the independent term noise probability. First, we define the Poisson noise probability:
Definition 4. The Poisson term noise probability:
$P_{poi}(t \text{ is noisy}|c) := e^{-\lambda} \cdot \sum_{k=1}^{n(t)} \frac{\lambda^k}{k!}$
For independent documents, the Poisson distribution approximates the probability of the disjunction for large n(t), since the independent term noise probability is equal to the sum over the binomial probabilities where at least one of n(t) document containment events is true.
$P_{in}(t \text{ is noisy}|c) = \sum_{k=1}^{n(t)} \binom{n(t)}{k} p^k (1-p)^{n(t)-k}$
$P_{in}(t \text{ is noisy}|c) \approx P_{poi}(t \text{ is noisy}|c)$
We have defined a frequency-based and a Poisson-based probability of being noisy, where the latter is the limit of the independence-based probability of being noisy. Before we present in the final section the usage of the noise probability for defining the probability of being informative, we emphasise in the next section that the results apply to the collection space as well as to the document space.
5. THE COLLECTION SPACE AND THE DOCUMENT SPACE
Consider the dual definitions of retrieval parameters in table 1. We associate a collection space D × T with a collection c where D is the set of documents and T is the set of terms in the collection. Let $N_D := |D|$ and $N_T := |T|$ be the number of documents and terms, respectively. We consider a document as a subset of T and a term as a subset of D. Let $n_T(d) := |\{t \mid d \in t\}|$ be the number of terms that occur in the document d, and let $n_D(t) := |\{d \mid t \in d\}|$ be the number of documents that contain the term t.
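Before the document space is developed in parallel, definition 4 above can be compared numerically with the independence-based noise it approximates. The sketch below is an editorial illustration; λ = ln N is borrowed from the example setting discussed later in section 6, and the Poisson sum is accumulated with a running term so the factorials stay finite.

```python
from math import exp, log

def noise_independent(n_t: int, N: int, lam: float) -> float:
    """P_in(t is noisy|c) = 1 - (1 - lambda/N)^{n(t)} for binary occurrence,
    constant containment lambda/N and independent document events."""
    return 1.0 - (1.0 - lam / N) ** n_t

def noise_poisson(n_t: int, lam: float) -> float:
    """Definition 4: e^{-lambda} * sum_{k=1}^{n(t)} lambda^k / k!."""
    term, total = 1.0, 0.0
    for k in range(1, n_t + 1):
        term *= lam / k          # term now equals lambda^k / k!
        total += term
    return exp(-lam) * total

N = 10_000
lam = log(N)                     # about 9.2, the lambda = ln N setting of section 6
for n_t in (10, 1_000, N):
    print(n_t, round(noise_independent(n_t, N, lam), 4), round(noise_poisson(n_t, lam), 4))
# the two definitions differ for small n(t) but meet near 1 - e^{-lambda}
# as n(t) approaches N, in line with theorem 2 and the discussion in section 6
```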
In a dual way, we associate a document space L × T with a document d where L is the set of locations (also referred to as positions, however, we use the letters L and l and not P and p for avoiding confusion with probabilities) and T is the set of terms in the document. The document dimension in a collection space corresponds to the location (position) dimension in a document space. The definition makes explicit that the classical notion of term frequency of a term in a document (also referred to as the within-document term frequency) actually corresponds to the location frequency of a term in a document.
Table 1: Retrieval parameters (collection space | document space)
space: collection | document
dimensions: documents and terms | locations and terms
document/location frequency: $n_D(t, c)$: Number of documents in which term t occurs in collection c | $n_L(t, d)$: Number of locations (positions) at which term t occurs in document d; $N_D(c)$: Number of documents in collection c | $N_L(d)$: Number of locations (positions) in document d
term frequency: $n_T(d, c)$: Number of terms that document d contains in collection c | $n_T(l, d)$: Number of terms that location l contains in document d; $N_T(c)$: Number of terms in collection c | $N_T(d)$: Number of terms in document d
noise/occurrence: $P(t|c)$ (term noise) | $P(t|d)$ (term occurrence)
containment: $P(d|c)$ (document) | $P(l|d)$ (location)
informativeness: $-\ln P(t|c)$ | $-\ln P(t|d)$
conciseness: $-\ln P(d|c)$ | $-\ln P(l|d)$
P(informative): $\ln(P(t|c))/\ln(P(t_{min}|c))$ | $\ln(P(t|d))/\ln(P(t_{min}|d))$
P(concise): $\ln(P(d|c))/\ln(P(d_{min}|c))$ | $\ln(P(l|d))/\ln(P(l_{min}|d))$
For the actual term frequency value, it is common to use the maximal occurrence (number of locations; let lf be the location frequency).
$tf(t, d) := lf(t, d) := \frac{P_{freq}(t \text{ occurs}|d)}{P_{freq}(t_{max} \text{ occurs}|d)} = \frac{n_L(t, d)}{n_L(t_{max}, d)}$
A further duality is between informativeness and conciseness (shortness of documents or locations): informativeness is based on occurrence (noise), conciseness is based on containment. We have highlighted in this section the duality between the collection space and the document space. We concentrate in this paper on the probability of a term to be noisy and informative. Those probabilities are defined in the collection space. However, the results regarding the term noise and informativeness apply to their dual counterparts: term occurrence and informativeness in a document. Also, the results can be applied to containment of documents and locations.
6. THE PROBABILITY OF BEING INFORMATIVE
We showed in the previous sections that the disjointness assumption leads to frequency-based probabilities and that the independence assumption leads to Poisson probabilities. In this section, we formulate a frequency-based definition and a Poisson-based definition of the probability of being informative and then we compare the two definitions.
Definition 5. The frequency-based probability of being informative:
$P_{freq}(t \text{ is informative}|c) := \frac{-\ln \frac{n(t)}{N}}{-\ln \frac{1}{N}} = -\log_N \frac{n(t)}{N} = 1 - \log_N n(t) = 1 - \frac{\ln n(t)}{\ln N}$
We define the Poisson-based probability of being informative analogously to the frequency-based probability of being informative (see definition 5).
Definition 6. The Poisson-based probability of being informative:
$P_{poi}(t \text{ is informative}|c) := \frac{-\ln\left(e^{-\lambda} \cdot \sum_{k=1}^{n(t)} \frac{\lambda^k}{k!}\right)}{-\ln(e^{-\lambda} \cdot \lambda)} = \frac{\lambda - \ln \sum_{k=1}^{n(t)} \frac{\lambda^k}{k!}}{\lambda - \ln \lambda}$
For the sum expression, the following limit holds:
$\lim_{n(t) \to \infty} \sum_{k=1}^{n(t)} \frac{\lambda^k}{k!} = e^{\lambda} - 1$
For $\lambda \gg 1$, we can alter the noise and informativeness Poisson by starting the sum from 0, since $e^{\lambda} \gg 1$.
Then, the minimal Poisson informativeness is $poisson(0, \lambda) = e^{-\lambda}$. We obtain a simplified Poisson probability of being informative:
$P_{poi}(t \text{ is informative}|c) \approx \frac{\lambda - \ln \sum_{k=0}^{n(t)} \frac{\lambda^k}{k!}}{\lambda} = 1 - \frac{\ln \sum_{k=0}^{n(t)} \frac{\lambda^k}{k!}}{\lambda}$
The computation of the Poisson sum requires an optimisation for large n(t). The implementation for this paper exploits the nature of the Poisson density: The Poisson density yields only values significantly greater than zero in an interval around λ. Consider the illustration of the noise and informativeness definitions in figure 1. The probability functions displayed are summarised in figure 2 where the simplified Poisson is used in the noise and informativeness graphs.
[Figure 1: Noise and Informativeness. Two plots of the probability of being noisy and the probability of being informative against n(t), the number of documents with term t, for the frequency-based definition, the independence-based definition with p = 1/N and p = ln(N)/N, and the Poisson-based definition with λ = 1000, λ = 2000 and the two-dimensional combination 1000/2000.]
Figure 2: Probability functions
Frequency $P_{freq}$. Noise: Def $n(t)/N$, Interval $1/N \le P_{freq} \le 1.0$. Informativeness: Def $\ln(n(t)/N)/\ln(1/N)$, Interval $0.0 \le P_{freq} \le 1.0$.
Independence $P_{in}$. Noise: Def $1 - (1-p)^{n(t)}$, Interval $p \le P_{in} < 1 - e^{-\lambda}$. Informativeness: Def $\ln(1 - (1-p)^{n(t)})/\ln(p)$, Interval $\ln(p) \le P_{in} \le 1.0$.
Poisson $P_{poi}$. Noise: Def $e^{-\lambda} \sum_{k=1}^{n(t)} \frac{\lambda^k}{k!}$, Interval $e^{-\lambda} \cdot \lambda \le P_{poi} < 1 - e^{-\lambda}$. Informativeness: Def $(\lambda - \ln \sum_{k=1}^{n(t)} \frac{\lambda^k}{k!})/(\lambda - \ln \lambda)$, Interval $(\lambda - \ln(e^{\lambda} - 1))/(\lambda - \ln \lambda) \le P_{poi} \le 1.0$.
Poisson $P_{poi}$ simplified. Noise: Def $e^{-\lambda} \sum_{k=0}^{n(t)} \frac{\lambda^k}{k!}$, Interval $e^{-\lambda} \le P_{poi} < 1.0$. Informativeness: Def $(\lambda - \ln \sum_{k=0}^{n(t)} \frac{\lambda^k}{k!})/\lambda$, Interval $0.0 < P_{poi} \le 1.0$.
The frequency-based noise corresponds to the linear solid curve in the noise figure. With an independence assumption, we obtain the curve in the lower triangle of the noise figure. By changing the parameter p := λ/N of the independence probability, we can lift or lower the independence curve. The noise figure shows the lifting for the value λ := ln N ≈ 9.2. The setting λ = ln N is special in the sense that the frequency-based and the Poisson-based informativeness have the same denominator, namely ln N, and the Poisson sum converges to λ. Whether we can draw more conclusions from this setting is an open question. We can conclude that the lifting is desirable if we know for a collection that terms that occur in relatively few documents are no guarantee for finding relevant documents, i.e. we assume that rare terms are still relatively noisy. On the opposite, we could lower the curve when assuming that frequent terms are not too noisy, i.e. they are considered as being still significantly discriminative. The Poisson probabilities approximate the independence probabilities for large n(t); the approximation is better for larger λ. For n(t) < λ, the noise is zero whereas for n(t) > λ the noise is one. This radical behaviour can be smoothened by using a multi-dimensional Poisson distribution. Figure 1 shows a Poisson noise based on a two-dimensional Poisson:
$poisson(k, \lambda_1, \lambda_2) := \pi \cdot e^{-\lambda_1} \cdot \frac{\lambda_1^k}{k!} + (1 - \pi) \cdot e^{-\lambda_2} \cdot \frac{\lambda_2^k}{k!}$
The two-dimensional Poisson shows a plateau between λ1 = 1000 and λ2 = 2000; we used here π = 0.5. The idea behind this setting is that terms that occur in less than 1000 documents are considered to be not noisy (i.e. they are informative), that terms between 1000 and 2000 are half noisy, and that terms with more than 2000 are definitely noisy.
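The definitions of this section translate directly into code. The sketch below is an editorial illustration, not the paper's implementation; it uses log-space accumulation so that large λ values such as 1000 or 2000 stay finite, and it computes the frequency-based informativeness, the simplified Poisson informativeness and the two-dimensional Poisson noise just described.

```python
import math

def p_freq_informative(n_t: int, N: int) -> float:
    """Definition 5: 1 - ln n(t) / ln N."""
    return 1.0 - math.log(n_t) / math.log(N)

def p_poi_informative_simplified(n_t: int, lam: float) -> float:
    """Simplified Poisson informativeness: 1 - ln(sum_{k=0}^{n(t)} lam^k/k!) / lam.
    The sum is built from e^{-lam}-scaled terms (Poisson probabilities),
    so ln(sum lam^k/k!) = lam + ln(mass) without overflowing floats."""
    mass = math.fsum(math.exp(k * math.log(lam) - math.lgamma(k + 1) - lam)
                     for k in range(n_t + 1))
    return 1.0 - (lam + math.log(mass)) / lam

def poisson_2dim_noise(n_t: int, lam1: float, lam2: float, pi: float) -> float:
    """Two-dimensional Poisson noise: sum_{k=1}^{n(t)} of the mixture density."""
    def pmf(k: int, lam: float) -> float:
        return math.exp(k * math.log(lam) - math.lgamma(k + 1) - lam)
    return sum(pi * pmf(k, lam1) + (1 - pi) * pmf(k, lam2)
               for k in range(1, n_t + 1))

N = 10_000
lam = math.log(N)                        # the lambda = ln N setting from the text
for n_t in (1, 9, 100, N):
    print(n_t, round(p_freq_informative(n_t, N), 3),
          round(p_poi_informative_simplified(n_t, lam), 3))
# the frequency-based value decays smoothly from 1 to 0, while the
# Poisson-based value drops towards zero already around n(t) close to lambda

for n_t in (500, 1_500, 2_500):
    print(n_t, round(poisson_2dim_noise(n_t, 1000, 2000, 0.5), 3))
# roughly 0.0, 0.5 and 1.0: the plateau between lambda_1 and lambda_2
```

The printed values reproduce the qualitative behaviour described in the text: a smooth decay for the frequency-based definition, a sharp drop for the Poisson-based one, and the half-noisy plateau between λ1 and λ2 for the two-dimensional mixture.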
For the informativeness, we observe that the radical behaviour of Poisson is preserved. The plateau here is approximately at 1/6, and it is important to realise that this plateau is not obtained with the multi-dimensional Poisson noise using π = 0.5. The logarithm of the noise is normalised by the logarithm of a very small number, namely 0.5 · e−1000 + 0.5 · e−2000 . That is why the informativeness will be only close to one for very little noise, whereas for a bit of noise, informativeness will drop to zero. This effect can be controlled by using small values for π such that the noise in the interval [λ1; λ2] is still very little. The setting π = e−2000/6 leads to noise values of approximately e−2000/6 in the interval [λ1; λ2], the logarithms lead then to 1/6 for the informativeness. The indepence-based and frequency-based informativeness functions do not differ as much as the noise functions do. However, for the indepence-based probability of being informative, we can control the average informativeness by the definition p := λ/N whereas the control on the frequencybased is limited as we address next. For the frequency-based idf , the gradient is monotonously decreasing and we obtain for different collections the same distances of idf -values, i. e. the parameter N does not affect the distance. For an illustration, consider the distance between the value idf(tn+1) of a term tn+1 that occurs in n+1 documents, and the value idf(tn) of a term tn that occurs in n documents. idf(tn+1) − idf(tn) = ln n n + 1 The first three values of the distance function are: idf(t2) − idf(t1) = ln(1/(1 + 1)) = 0.69 idf(t3) − idf(t2) = ln(1/(2 + 1)) = 0.41 idf(t4) − idf(t3) = ln(1/(3 + 1)) = 0.29 For the Poisson-based informativeness, the gradient decreases first slowly for small n(t), then rapidly near n(t) ≈ λ and then it grows again slowly for large n(t). In conclusion, we have seen that the Poisson-based definition provides more control and parameter possibilities than 233 the frequency-based definition does. Whereas more control and parameter promises to be positive for the personalisation of retrieval systems, it bears at the same time the danger of just too many parameters. The framework presented in this paper raises the awareness about the probabilistic and information-theoretic meanings of the parameters. The parallel definitions of the frequency-based probability and the Poisson-based probability of being informative made the underlying assumptions explicit. The frequency-based probability can be explained by binary occurrence, constant containment and disjointness of documents. Independence of documents leads to Poisson, where we have to be aware that Poisson approximates the probability of a disjunction for a large number of events, but not for a small number. This theoretical result explains why experimental investigations on Poisson (see [7]) show that a Poisson estimation does work better for frequent (bad, noisy) terms than for rare (good, informative) terms. In addition to the collection-wide parameter setting, the framework presented here allows for document-dependent settings, as explained for the independence probability. This is in particular interesting for heterogeneous and structured collections, since documents are different in nature (size, quality, root document, sub document), and therefore, binary occurrence and constant containment are less appropriate than in relatively homogeneous collections. 7. 
SUMMARY The definition of the probability of being informative transforms the informative interpretation of the idf into a probabilistic interpretation, and we can use the idf -based probability in probabilistic retrieval approaches. We showed that the classical definition of the noise (document frequency) in the inverse document frequency can be explained by three assumptions: the term within-document occurrence probability is binary, the document containment probability is constant, and the document containment events are disjoint. By explicitly and mathematically formulating the assumptions, we showed that the classical definition of idf does not take into account parameters such as the different nature (size, quality, structure, etc.) of documents in a collection, or the different nature of terms (coverage, importance, position, etc.) in a document. We discussed that the absence of those parameters is compensated by a leverage effect of the within-document term occurrence probability and the document containment probability. By applying an independence rather a disjointness assumption for the document containment, we could establish a link between the noise probability (term occurrence in a collection), information theory and Poisson. From the frequency-based and the Poisson-based probabilities of being noisy, we derived the frequency-based and Poisson-based probabilities of being informative. The frequency-based probability is relatively smooth whereas the Poisson probability is radical in distinguishing between noisy or not noisy, and informative or not informative, respectively. We showed how to smoothen the radical behaviour of Poisson with a multidimensional Poisson. The explicit and mathematical formulation of idf - and Poisson-assumptions is the main result of this paper. Also, the paper emphasises the duality of idf and tf , collection space and document space, respectively. Thus, the result applies to term occurrence and document containment in a collection, and it applies to term occurrence and position containment in a document. This theoretical framework is useful for understanding and deciding the parameter estimation and combination in probabilistic retrieval models. The links between indepence-based noise as document frequency, probabilistic interpretation of idf , information theory and Poisson described in this paper may lead to variable probabilistic idf and tf definitions and combinations as required in advanced and personalised information retrieval systems. Acknowledgment: I would like to thank Mounia Lalmas, Gabriella Kazai and Theodora Tsikrika for their comments on the as they said heavy pieces. My thanks also go to the meta-reviewer who advised me to improve the presentation to make it less formidable and more accessible for those without a theoretic bent. This work was funded by a research fellowship from Queen Mary University of London. 8. REFERENCES [1] A. Aizawa. An information-theoretic perspective of tf-idf measures. Information Processing and Management, 39:45-65, January 2003. [2] G. Amati and C. J. Rijsbergen. Term frequency normalization via Pareto distributions. In 24th BCS-IRSG European Colloquium on IR Research, Glasgow, Scotland, 2002. [3] R. K. Belew. Finding out about. Cambridge University Press, 2000. [4] A. Bookstein and D. Swanson. Probabilistic models for automatic indexing. Journal of the American Society for Information Science, 25:312-318, 1974. [5] I. N. Bronstein. Taschenbuch der Mathematik. Harri Deutsch, Thun, Frankfurt am Main, 1987. 
[6] K. Church and W. Gale. Poisson mixtures. Natural Language Engineering, 1(2):163-190, 1995. [7] K. W. Church and W. A. Gale. Inverse document frequency: A measure of deviations from poisson. In Third Workshop on Very Large Corpora, ACL Anthology, 1995. [8] T. Lafouge and C. Michel. Links between information construction and information gain: Entropy and bibliometric distribution. Journal of Information Science, 27(1):39-49, 2001. [9] E. Margulis. N-poisson document modelling. In Proceedings of the 15th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 177-189, 1992. [10] S. E. Robertson and S. Walker. Some simple effective approximations to the 2-poisson model for probabilistic weighted retrieval. In Proceedings of the 17th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 232-241, London, et al., 1994. Springer-Verlag. [11] S. Wong and Y. Yao. An information-theoric measure of term specificity. Journal of the American Society for Information Science, 43(1):54-61, 1992. [12] S. Wong and Y. Yao. On modeling information retrieval with probabilistic inference. ACM Transactions on Information Systems, 13(1):38-68, 1995. 234
A Frequency-based and a Poisson-based Definition of the Probability of Being Informative ABSTRACT This paper reports on theoretical investigations about the assumptions underlying the inverse document frequency (idf). We show that an intuitive idf - based probability function for the probability of a term being informative assumes disjoint document events. By assuming documents to be independent rather than disjoint, we arrive at a Poisson-based probability of being informative. The framework is useful for understanding and deciding the parameter estimation and combination in probabilistic retrieval models. 1. INTRODUCTION AND BACKGROUND The inverse document frequency (idf) is one of the most successful parameters for a relevance-based ranking of retrieved objects. With N being the total number of documents, and n (t) being the number of documents in which term t occurs, the idf is defined as follows: Ranking based on the sum of the idf - values of the query terms that occur in the retrieved documents works well, this has been shown in numerous applications. Also, it is well known that the combination of a document-specific term weight and idf works better than idf alone. This approach is known as tf-idf, where tf (t, d) (0 <= tf (t, d) <= 1) is the so-called term frequency of term t in document d. The idf reflects the discriminating power (informativeness) of a term, whereas the tf reflects the occurrence of a term. The idf alone works better than the tf alone does. An explanation might be the problem of tf with terms that occur in many documents; let us refer to those terms as "noisy" terms. We use the notion of "noisy" terms rather than "frequent" terms since frequent terms leaves open whether we refer to the document frequency of a term in a collection or to the so-called term frequency (also referred to as withindocument frequency) of a term in a document. We associate "noise" with the document frequency of a term in a collection, and we associate "occurrence" with the withindocument frequency of a term. The tf of a noisy term might be high in a document, but noisy terms are not good candidates for representing a document. Therefore, the removal of noisy terms (known as "stopword removal") is essential when applying tf. In a tf-idf approach, the removal of stopwords is conceptually obsolete, if stopwords are just words with a low idf. From a probabilistic point of view, tf is a value with a frequency-based probabilistic interpretation whereas idf has an "informative" rather than a probabilistic interpretation. The missing probabilistic interpretation of idf is a problem in probabilistic retrieval models where we combine uncertain knowledge of different dimensions (e.g.: informativeness of terms, structure of documents, quality of documents, age of documents, etc.) such that a good estimate of the probability of relevance is achieved. An intuitive solution is a normalisation of idf such that we obtain values in the interval [0; 1]. For example, consider a normalisation based on the maximal idf-value. Let T be the set of terms occurring in a collection. This frequency-based probability function covers the interval [0; 1] if the minimal idf is equal to zero, which is the case if we have at least one term that occurs in all documents. Can we interpret Pfreq, the normalised idf, as the probability that the term is informative? When investigating the probabilistic interpretation of the normalised idf, we made several observations related to disjointness and independence of document events. 
These observations are reported in section 3. We show in section 3.1 that the frequency-based noise probability n(t)/N used in the classic idf-definition can be explained by three assumptions: binary term occurrence, constant document containment and disjointness of document containment events. In section 3.2 we show that by assuming independence of documents, we obtain 1 - e^{-1} ≈ 1 - 0.37 as the upper bound of the noise probability of a term. The value e^{-1} is related to the logarithm and we investigate in section 3.3 the link to information theory. In section 4, we link the results of the previous sections to probability theory. We show the steps from possible worlds to binomial distribution and Poisson distribution. In section 5, we emphasise that the theoretical framework of this paper is applicable for both idf and tf. Finally, in section 6, we base the definition of the probability of being informative on the results of the previous sections and compare frequency-based and Poisson-based definitions. 2. BACKGROUND The relationship between frequencies, probabilities and information theory (entropy) has been the focus of many researchers. In this background section, we focus on work that investigates the application of the Poisson distribution in IR since a main part of the work presented in this paper addresses the underlying assumptions of Poisson. [4] proposes a 2-Poisson model that takes into account the different nature of relevant and non-relevant documents, rare terms (content words) and frequent terms (noisy terms, function words, stopwords). [9] shows experimentally that most of the terms (words) in a collection are distributed according to a low dimension n-Poisson model. [10] uses a 2-Poisson model for including term frequency-based probabilities in the probabilistic retrieval model. The non-linear scaling of the Poisson function showed significant improvement compared to a linear frequency-based probability. The Poisson model was here applied to the term frequency of a term in a document. We will generalise the discussion by pointing out that document frequency and term frequency are dual parameters in the collection space and the document space, respectively. Our discussion of the Poisson distribution focuses on the document frequency in a collection rather than on the term frequency in a document. [7] and [6] address the deviation of idf and Poisson, and apply Poisson mixtures to achieve better Poisson-based estimates. The results proved again experimentally that a one-dimensional Poisson does not work for rare terms, therefore Poisson mixtures and additional parameters are proposed. [3], section 3.3, illustrates and summarises comprehensively the relationships between frequencies, probabilities and Poisson. Different definitions of idf are put into context and a notion of "noise" is defined, where noise is viewed as the complement of idf. We use in our paper a different notion of noise: we consider a frequency-based noise that corresponds to the document frequency, and we consider a term noise that is based on the independence of document events. [11], [12], [8] and [1] link frequencies and probability estimation to information theory. [12] establishes a framework in which information retrieval models are formalised based on probabilistic inference. A key component is the use of a space of disjoint events, where the framework mainly uses terms as disjoint events. 
The probability of being informative defined in our paper can be viewed as the probability of the disjoint terms in the term space of [12]. [8] address entropy and bibliometric distributions. Entropy is maximal if all events are equiprobable and the frequency-based Lotka law (N/i λ is the number of scientists that have written i publications, where N and λ are distribution parameters), Zipf and the Pareto distribution are related. The Pareto distribution is the continuous case of the Lotka and Lotka and Zipf show equivalences. The Pareto distribution is used by [2] for term frequency normalisation. The Pareto distribution compares to the Poisson distribution in the sense that Pareto is "fat-tailed", i. e. Pareto assigns larger probabilities to large numbers of events than Poisson distributions do. This makes Pareto interesting since Poisson is felt to be too radical on frequent events. We restrict in this paper to the discussion of Poisson, however, our results show that indeed a smoother distribution than Poisson promises to be a good candidate for improving the estimation of probabilities in information retrieval. [1] establishes a theoretical link between tf-idf and information theory and the theoretical research on the meaning of tf-idf "clarifies the statistical model on which the different measures are commonly based". This motivation matches the motivation of our paper: We investigate theoretically the assumptions of classical idf and Poisson for a better understanding of parameter estimation and combination. 3. FROM DISJOINT TO INDEPENDENT We define and discuss in this section three probabilities: The frequency-based noise probability (definition 1), the total noise probability for disjoint documents (definition 2). and the noise probability for independent documents (definition 3). 3.1 Binary occurrence, constant containment and disjointness of documents We show in this section, that the frequency-based noise probability n (t) N in the idf definition can be explained as a total probability with binary term occurrence, constant document containment and disjointness of document containments. We refer to a probability function as binary if for all events the probability is either 1.0 or 0.0. The occurrence probability P (t1d) is binary, if P (t1d) is equal to 1.0 if t E d, and P (t1d) is equal to 0.0, otherwise. We refer to a probability function as constant if for all events the probability is equal. The document containment probability reflect the chance that a document occurs in a collection. This containment probability is constant if we have no information about the document containment or we ignore that documents differ in containment. Containment could be derived, for example, from the size, quality, age, links, etc. of a document. For a constant containment in a collection with N documents, 1N is often assumed as the containment probability. We generalise this definition and introduce the constant λ where 0 <λ <N. The containment of a document d depends on the collection c, this is reflected by the notation P (d1c) used for the containment of a document. we set λ = 1 and obtain d P (d | c) = 1.0. Next, we define For disjoint documents that cover the whole event space, the frequency-based noise probability and the total noise probability for disjoint documents. 
We introduce the event notation t is noisy and t occurs for making the difference between the noise probability P (t is noisy | c) in a collection and the occurrence probability P (t occurs | d) in a document more explicit, thereby keeping in mind that the noise probability corresponds to the occurrence probability of a term in a collection. Now, we can formulate a theorem that makes assumptions explicit that explain the classical idf. PROOF. The assumptions are: d the containment for small documents tends to be smaller than for large documents. From that point of view, idf means that P (t ∧ d | c) is constant for all d in which t occurs, and P (t ∧ d | c) is zero otherwise. The occurrence and containment can be term specific. For example, set P (t ∧ d | c) = 1/ND (c) if t occurs in d, where ND (c) is the number of documents in collection c (we used before just N). We choose a document-dependent occurrence P (t | d): = 1/NT (d), i. e. the occurrence probability is equal to the inverse of NT (d), which is the total number of terms in document d. Next, we choose the containment P (d | c): = NT (d) / NT (c) · NT (c) / ND (c) where NT (d) / NT (c) is a document length normalisation (number of terms in document d divided by the number of terms in collection c), and NT (c) / ND (c) is a constant factor of the collection (number of terms in collection c divided by the number of documents in collection c). We obtain P (t ∧ d | c) = 1/ND (c). In a tf-idf - retrieval function, the tf-component reflects the occurrence probability of a term in a document. This is a further explanation why we can estimate the idf with a simple P (t | d), since the combined tf-idf contains the occurrence probability. The containment probability corresponds to a document normalisation (document length normalisation, pivoted document length) and is normally attached to the tf - component or the tf-idf-product. The disjointness assumption is typical for frequency-based probabilities. From a probability theory point of view, we can consider documents as disjoint events, in order to achieve a sound theoretical model for explaining the classical idf. But does disjointness reflect the real world where the containment of a document appears to be independent of the containment of another document? In the next section, we replace the disjointness assumption by the independence assumption. 3.2 The upper bound of the noise probability for independent documents For independent documents, we compute the probability of a disjunction as usual, namely as the complement of the probability of the conjunction of the negated events: The noise probability can be considered as the conjunction of the term occurrence and the document containment. The above result is not a surprise but it is a mathematical formulation of assumptions that can be used to explain the classical idf. The assumptions make explicit that the different types of term occurrence in documents (frequency of a term, importance of a term, position of a term, document part where the term occurs, etc.) and the different types of document containment (size, quality, age, etc.) are ignored, and document containments are considered as disjoint events. From the assumptions, we can conclude that idf (frequencybased noise, respectively) is a relatively simple but strict estimate. Still, idf works well. 
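Because the displayed equations of the definitions and of the theorem above are not reproduced in this extract, the following LaTeX reconstruction of the total-probability argument is offered as a sketch only; it spells out how the three stated assumptions (binary occurrence, constant containment lambda/N with lambda = 1, disjoint document containment events) yield the frequency-based noise n(t)/N, and should be checked against the original displayed formulas.

\begin{align*}
P(t \text{ is noisy} \mid c)
  &= \sum_{d} P(t \wedge d \mid c)
   = \sum_{d} P(t \mid d)\, P(d \mid c)           && \text{(disjoint document containment events)}\\
  &= \sum_{d:\, t \in d} 1 \cdot \frac{\lambda}{N} && \text{(binary occurrence, constant containment)}\\
  &= \frac{n(t)}{N}                                && \text{(with } \lambda = 1\text{)}.
\end{align*}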
This could be explained by a leverage effect that justifies the binary occurrence and constant containment: The term occurrence for small documents tends to be larger than for large documents, whereas For disjoint documents, this view of the noise probability led to definition 2. For independent documents, we use now the conjunction of negated events. With binary occurrence and a constant containment P (d | c): = λ / N, we obtain the term noise of a term t that occurs in n (t) documents: For binary occurrence and disjoint documents, the containment probability was 1/N. Now, with independent documents, we can use λ as a collection parameter that controls the average containment probability. We show through the next theorem that the upper bound of the noise probability depends on λ. PROOF. The upper bound of the independent noise probability follows from the limit limN → ∞ (1 + xN) N = ex (see any comprehensive math book, for example, [5], for the convergence equation of the Euler function). With x = − λ, we obtain: For the term noise, we have: Pin (t is noisy | c) is strictly monotonous: The noise of a term tn is less than the noise of a term tn +1, where tn occurs in n documents and tn +1 occurs in n + 1 documents. Therefore, a term with n = N has the largest noise probability. For a collection with infinite many documents, the upper bound of the noise probability for terms tN that occur in all documents becomes: By applying an independence rather a disjointness assumption, we obtain the probability e − 1 that a term is not noisy even if the term does occur in all documents. In the disjoint case, the noise probability is one for a term that occurs in all documents. If we view P (d | c): = λ / N as the average containment, then λ is large for a term that occurs mostly in large documents, and λ is small for a term that occurs mostly in small documents. Thus, the noise of a term t is large if t occurs in n (t) large documents and the noise is smaller if t occurs in small documents. Alternatively, we can assume a constant containment and a term-dependent occurrence. If we assume P (d | c): = 1, then P (t | d): = λ / N can be interpreted as the average probability that t represents a document. The common assumption is that the average containment or occurrence probability is proportional to n (t). However, here is additional potential: The statistical laws (see [3] on Luhn and Zipf) indicate that the average probability could follow a normal distribution, i. e. small probabilities for small n (t) and large n (t), and larger probabilities for medium n (t). For the monotonous case we investigate here, the noise of a term with n (t) = 1 is equal to 1 − (1 − λ / N) = λ / N and the noise of a term with n (t) = N is close to 1 − e − λ. In the next section, we relate the value e − λ to information theory. 3.3 The probability of a maximal informative signal The probability e − 1 is special in the sense that a signal with that probability is a signal with maximal information as derived from the entropy definition. Consider the definition of the entropy contribution H (t) of a signal t. We form the first derivation for computing the optimum. The entropy contribution H (t) is maximal for P (t) = e − 1. This result does not depend on the base of the logarithm as we see next: We summarise this result in the following theorem: THEOREM 3. The probability of a maximal informative signal: The probability Pmax = e − 1 ≈ 0.37 is the probability of a maximal informative signal. 
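The displayed formula for the independence-based term noise is missing from this extract; from the complement-of-the-negated-conjunction argument above it should read P_in(t is noisy | c) = 1 - (1 - lambda/N)^{n(t)}, and that reconstruction is assumed in the Python sketch below. The sketch checks numerically that, for a term occurring in all documents, this noise approaches the stated upper bound 1 - e^{-lambda} as the collection grows, instead of reaching 1 as in the disjoint case; the collection sizes are invented.

import math

def independent_noise(n_t, N, lam=1.0):
    # Independence assumption: the term is noisy unless none of its n(t)
    # containing documents is "drawn", each with containment lam/N.
    #   P_in(t is noisy | c) = 1 - (1 - lam/N)**n(t)
    return 1.0 - (1.0 - lam / N) ** n_t

lam = 1.0
for N in (10, 100, 10_000, 1_000_000):
    print(N, round(independent_noise(N, N, lam), 6))   # a term occurring in all N documents
print("upper bound 1 - e^{-lam} =", round(1.0 - math.exp(-lam), 6))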
The entropy of a maximal informative signal is Hmax = e − 1. PROOF. The probability and entropy follow from the derivation above. The complement of the maximal noise probability is e − λ and we are looking now for a generalisation of the entropy definition such that e − λ is the probability of a maximal informative signal. We can generalise the entropy definition by computing the integral of λ + ln P (t), i. e. this derivation is zero for e − λ. We obtain a generalised entropy: The generalised entropy corresponds for λ = 1 to the classical entropy. By moving from disjoint to independent documents, we have established a link between the complement of the noise probability of a term that occurs in all documents and information theory. Next, we link independent documents to probability theory. 4. THE LINK TO PROBABILITY THEORY We review for independent documents three concepts of probability theory: possible worlds, binomial distribution and Poisson distribution. 4.1 Possible Worlds Each conjunction of document events (for each document, we consider two document events: the document can be true or false) is associated with a so-called possible world. For example, consider the eight possible worlds for three documents (N = 3). With each world w, we associate a probability µ (w), which is equal to the product of the single probabilities of the document events. The sum over the possible worlds in which k documents are true and N − k documents are false is equal to the probability function of the binomial distribution, since the binomial coefficient yields the number of possible worlds in which k documents are true. 4.2 Binomial distribution The binomial probability function yields the probability that k of N events are true where each event is true with the single event probability p. The single event probability is usually defined as p: = λ / N, i. e. p is inversely proportional to N, the total number of events. With this definition of p, we obtain for an infinite number of documents the following limit for the product of the binomial coefficient and pk: The limit is close to the actual value for k <<N. For large k, the actual value is smaller than the limit. The limit of (1 − p) N − k follows from the limit limN → ∞ (1 + Again, the limit is close to the actual value for k <<N. For large k, the actual value is larger than the limit. 4.3 Poisson distribution For an infinite number of events, the Poisson probability function is the limit of the binomial probability function. The probability poisson (0, 1) is equal to e − 1, which is the probability of a maximal informative signal. This shows the relationship of the Poisson distribution and information theory. After seeing the convergence of the binomial distribution, we can choose the Poisson distribution as an approximation of the independent term noise probability. First, we define the Poisson noise probability: For independent documents, the Poisson distribution approximates the probability of the disjunction for large n (t), since the independent term noise probability is equal to the sum over the binomial probabilities where at least one of n (t) document containment events is true. We have defined a frequency-based and a Poisson-based probability of being noisy, where the latter is the limit of the independence-based probability of being noisy. 
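The Euler limit used above, lim_{N -> infinity} (1 + x/N)^N = e^x, appears garbled in this extract. The following Python check is purely illustrative: it shows the stated convergence of the binomial probability function with single event probability p = lambda/N towards the Poisson probability function; the values are not taken from the paper.

import math

def binomial(k, N, p):
    # Probability that exactly k of N independent events are true.
    return math.comb(N, k) * p**k * (1 - p)**(N - k)

def poisson(k, lam):
    # Poisson probability function, the N -> infinity limit of the binomial with p = lam/N.
    return math.exp(-lam) * lam**k / math.factorial(k)

lam = 1.0
for N in (10, 100, 10_000):
    p = lam / N
    print(N, [round(binomial(k, N, p), 5) for k in range(4)])
print("poisson", [round(poisson(k, lam), 5) for k in range(4)])
# poisson(0, 1) = e^{-1} ~ 0.36788, the probability of a maximal informative signal.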
Before we present in the final section the usage of the noise probability for defining the probability of being informative, we emphasise in the next section that the results apply to the collection space as well as to the the document space. 5. THE COLLECTION SPACE AND THE DOCUMENT SPACE Consider the dual definitions of retrieval parameters in table 1. We associate a collection space D x T with a collection c where D is the set of documents and T is the set of terms in the collection. Let ND: = | D | and NT: = | T | be the number of documents and terms, respectively. We consider a document as a subset of T and a term as a subset of D. Let nT (d): = | {t | d E t} | be the number of terms that occur in the document d, and let nD (t): = | {d | t E d} | be the number of documents that contain the term t. In a dual way, we associate a document space L x T with a document d where L is the set of locations (also referred to as positions, however, we use the letters L and l and not P and p for avoiding confusion with probabilities) and T is the set of terms in the document. The document dimension in a collection space corresponds to the location (position) dimension in a document space. The definition makes explicit that the classical notion of term frequency of a term in a document (also referred to as the within-document term frequency) actually corresponds to the location frequency of a term in a document. For the Table 1: Retrieval parameters actual term frequency value, it is common to use the maximal occurrence (number of locations; let lf be the location frequency). A further duality is between informativeness and conciseness (shortness of documents or locations): informativeness is based on occurrence (noise), conciseness is based on containment. We have highlighted in this section the duality between the collection space and the document space. We concentrate in this paper on the probability of a term to be noisy and informative. Those probabilities are defined in the collection space. However, the results regarding the term noise and informativeness apply to their dual counterparts: term occurrence and informativeness in a document. Also, the results can be applied to containment of documents and locations. 6. THE PROBABILITY OF BEING INFORMATIVE We showed in the previous sections that the disjointness assumption leads to frequency-based probabilities and that the independence assumption leads to Poisson probabilities. In this section, we formulate a frequency-based definition and a Poisson-based definition of the probability of being informative and then we compare the two definitions. We define the Poisson-based probability of being informative analogously to the frequency-based probability of being informative (see definition 5). For the sum expression, the following limit holds: lim n (t) → ∞ For λ>> 1, we can alter the noise and informativeness Poisson by starting the sum from 0, since eλ>> 1. Then, the minimal Poisson informativeness is poisson (0, λ) = e − λ. We obtain a simplified Poisson probability of being informative: The computation of the Poisson sum requires an optimisation for large n (t). The implementation for this paper exploits the nature of the Poisson density: The Poisson density yields only values significantly greater than zero in an interval around λ. Consider the illustration of the noise and informativeness definitions in figure 1. 
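Since the displayed definitions of the two informativeness probabilities are not reproduced in this extract, the comparison can also be sketched numerically. The Python sketch below assumes the frequency-based form Pfreq(t is informative) = ln(N/n(t)) / ln(N) (normalised idf) and a simplified Poisson form in which the logarithm of the Poisson noise, sum_{k=1}^{n(t)} poisson(k, lambda), is normalised by the logarithm of the minimal noise e^{-lambda}; both closed forms are reconstructions from the surrounding prose rather than quotations of the paper's definitions, and the collection figures are invented.

import math

def freq_informative(n_t, N):
    # Assumed frequency-based form: normalised idf, ln(N/n(t)) / ln(N).
    return math.log(N / n_t) / math.log(N)

def poisson_noise(n_t, lam):
    # Assumed Poisson noise: sum_{k=1}^{n(t)} poisson(k, lam), computed iteratively
    # so that lam**k and k! never overflow for large k.
    pmf, total = math.exp(-lam), 0.0   # pmf starts at poisson(0, lam)
    for k in range(1, n_t + 1):
        pmf *= lam / k
        total += pmf
    return total

def poisson_informative(n_t, lam):
    # Assumed simplified form: log of the noise normalised by log(e^{-lam}).
    return math.log(poisson_noise(n_t, lam)) / (-lam)

N = 10_000
lam = math.log(N)   # the special setting lambda = ln N discussed in the text that follows
for n_t in (1, 10, 100, 1_000, 10_000):
    print(n_t, round(freq_informative(n_t, N), 3), round(poisson_informative(n_t, lam), 3))
# The frequency-based values decrease smoothly from 1 to 0, whereas the Poisson-based
# values are high only for n(t) well below lambda and drop sharply once n(t) approaches lambda.

Over the full range of n(t), the figures referenced above and below display the same qualitative contrast.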
The probability functions displayed are summarised in figure 2 where the simplified Poisson is used in the noise and informativeness graphs. The frequency-based noise corresponds to the linear solid curve in the noise figure. With an independence assumption, we obtain the curve in the lower triangle of the noise figure. By changing the parameter p: = λ / N of the independence probability, we can lift or lower the independence curve. The noise figure shows the lifting for the value λ: = ln N ≈ 9.2. The setting λ = ln N is special in the sense that the frequency-based and the Poisson-based informativeness have the same denominator, namely ln N, and the Poisson sum converges to λ. Whether we can draw more conclusions from this setting is an open question. We can conclude, that the lifting is desirable if we know for a collection that terms that occur in relatively few doc Figure 1: Noise and Informativeness Figure 2: Probability functions uments are no guarantee for finding relevant documents, i. e. we assume that rare terms are still relatively noisy. On the opposite, we could lower the curve when assuming that frequent terms are not too noisy, i. e. they are considered as being still significantly discriminative. The Poisson probabilities approximate the independence probabilities for large n (t); the approximation is better for larger λ. For n (t) <λ, the noise is zero whereas for n (t)> λ the noise is one. This radical behaviour can be smoothened by using a multi-dimensional Poisson distribution. Figure 1 shows a Poisson noise based on a two-dimensional Poisson: The two dimensional Poisson shows a plateau between λ1 = 1000 and λ2 = 2000, we used here π = 0.5. The idea behind this setting is that terms that occur in less than 1000 documents are considered to be not noisy (i.e. they are informative), that terms between 1000 and 2000 are half noisy, and that terms with more than 2000 are definitely noisy. For the informativeness, we observe that the radical behaviour of Poisson is preserved. The plateau here is approximately at 1/6, and it is important to realise that this plateau is not obtained with the multi-dimensional Poisson noise using π = 0.5. The logarithm of the noise is normalised by the logarithm of a very small number, namely 0.5 · e--1000 + 0.5 · e--2000. That is why the informativeness will be only close to one for very little noise, whereas for a bit of noise, informativeness will drop to zero. This effect can be controlled by using small values for π such that the noise in the interval [λ1; λ2] is still very little. The setting π = e--2000/6 leads to noise values of approximately e--2000/6 in the interval [λ1; λ2], the logarithms lead then to 1/6 for the informativeness. The indepence-based and frequency-based informativeness functions do not differ as much as the noise functions do. However, for the indepence-based probability of being informative, we can control the average informativeness by the definition p: = λ / N whereas the control on the frequencybased is limited as we address next. For the frequency-based idf, the gradient is monotonously decreasing and we obtain for different collections the same distances of idf - values, i. e. the parameter N does not affect the distance. For an illustration, consider the distance between the value idf (tn +1) of a term tn +1 that occurs in n +1 documents, and the value idf (tn) of a term tn that occurs in n documents. 
The first three values of the distance function are: For the Poisson-based informativeness, the gradient decreases first slowly for small n (t), then rapidly near n (t) ≈ λ and then it grows again slowly for large n (t). In conclusion, we have seen that the Poisson-based definition provides more control and parameter possibilities than the frequency-based definition does. Whereas more control and parameter promises to be positive for the personalisation of retrieval systems, it bears at the same time the danger of just too many parameters. The framework presented in this paper raises the awareness about the probabilistic and information-theoretic meanings of the parameters. The parallel definitions of the frequency-based probability and the Poisson-based probability of being informative made the underlying assumptions explicit. The frequency-based probability can be explained by binary occurrence, constant containment and disjointness of documents. Independence of documents leads to Poisson, where we have to be aware that Poisson approximates the probability of a disjunction for a large number of events, but not for a small number. This theoretical result explains why experimental investigations on Poisson (see [7]) show that a Poisson estimation does work better for frequent (bad, noisy) terms than for rare (good, informative) terms. In addition to the collection-wide parameter setting, the framework presented here allows for document-dependent settings, as explained for the independence probability. This is in particular interesting for heterogeneous and structured collections, since documents are different in nature (size, quality, root document, sub document), and therefore, binary occurrence and constant containment are less appropriate than in relatively homogeneous collections. 7. SUMMARY The definition of the probability of being informative transforms the informative interpretation of the idf into a probabilistic interpretation, and we can use the idf - based probability in probabilistic retrieval approaches. We showed that the classical definition of the noise (document frequency) in the inverse document frequency can be explained by three assumptions: the term within-document occurrence probability is binary, the document containment probability is constant, and the document containment events are disjoint. By explicitly and mathematically formulating the assumptions, we showed that the classical definition of idf does not take into account parameters such as the different nature (size, quality, structure, etc.) of documents in a collection, or the different nature of terms (coverage, importance, position, etc.) in a document. We discussed that the absence of those parameters is compensated by a leverage effect of the within-document term occurrence probability and the document containment probability. By applying an independence rather a disjointness assumption for the document containment, we could establish a link between the noise probability (term occurrence in a collection), information theory and Poisson. From the frequency-based and the Poisson-based probabilities of being noisy, we derived the frequency-based and Poisson-based probabilities of being informative. The frequency-based probability is relatively smooth whereas the Poisson probability is radical in distinguishing between noisy or not noisy, and informative or not informative, respectively. We showed how to smoothen the radical behaviour of Poisson with a multidimensional Poisson. 
The explicit and mathematical formulation of idf - and Poisson-assumptions is the main result of this paper. Also, the paper emphasises the duality of idf and tf, collection space and document space, respectively. Thus, the result applies to term occurrence and document containment in a collection, and it applies to term occurrence and position containment in a document. This theoretical framework is useful for understanding and deciding the parameter estimation and combination in probabilistic retrieval models. The links between indepence-based noise as document frequency, probabilistic interpretation of idf, information theory and Poisson described in this paper may lead to variable probabilistic idf and tf definitions and combinations as required in advanced and personalised information retrieval systems. Acknowledgment: I would like to thank Mounia Lalmas, Gabriella Kazai and Theodora Tsikrika for their comments on the as they said "heavy" pieces. My thanks also go to the meta-reviewer who advised me to improve the presentation to make it less "formidable" and more accessible for those "without a theoretic bent". This work was funded by a research fellowship from Queen Mary University of London.
A Frequency-based and a Poisson-based Definition of the Probability of Being Informative ABSTRACT This paper reports on theoretical investigations about the assumptions underlying the inverse document frequency (idf). We show that an intuitive idf - based probability function for the probability of a term being informative assumes disjoint document events. By assuming documents to be independent rather than disjoint, we arrive at a Poisson-based probability of being informative. The framework is useful for understanding and deciding the parameter estimation and combination in probabilistic retrieval models. 1. INTRODUCTION AND BACKGROUND The inverse document frequency (idf) is one of the most successful parameters for a relevance-based ranking of retrieved objects. With N being the total number of documents, and n (t) being the number of documents in which term t occurs, the idf is defined as follows: Ranking based on the sum of the idf - values of the query terms that occur in the retrieved documents works well, this has been shown in numerous applications. Also, it is well known that the combination of a document-specific term weight and idf works better than idf alone. This approach is known as tf-idf, where tf (t, d) (0 <= tf (t, d) <= 1) is the so-called term frequency of term t in document d. The idf reflects the discriminating power (informativeness) of a term, whereas the tf reflects the occurrence of a term. The idf alone works better than the tf alone does. An explanation might be the problem of tf with terms that occur in many documents; let us refer to those terms as "noisy" terms. We use the notion of "noisy" terms rather than "frequent" terms since frequent terms leaves open whether we refer to the document frequency of a term in a collection or to the so-called term frequency (also referred to as withindocument frequency) of a term in a document. We associate "noise" with the document frequency of a term in a collection, and we associate "occurrence" with the withindocument frequency of a term. The tf of a noisy term might be high in a document, but noisy terms are not good candidates for representing a document. Therefore, the removal of noisy terms (known as "stopword removal") is essential when applying tf. In a tf-idf approach, the removal of stopwords is conceptually obsolete, if stopwords are just words with a low idf. From a probabilistic point of view, tf is a value with a frequency-based probabilistic interpretation whereas idf has an "informative" rather than a probabilistic interpretation. The missing probabilistic interpretation of idf is a problem in probabilistic retrieval models where we combine uncertain knowledge of different dimensions (e.g.: informativeness of terms, structure of documents, quality of documents, age of documents, etc.) such that a good estimate of the probability of relevance is achieved. An intuitive solution is a normalisation of idf such that we obtain values in the interval [0; 1]. For example, consider a normalisation based on the maximal idf-value. Let T be the set of terms occurring in a collection. This frequency-based probability function covers the interval [0; 1] if the minimal idf is equal to zero, which is the case if we have at least one term that occurs in all documents. Can we interpret Pfreq, the normalised idf, as the probability that the term is informative? When investigating the probabilistic interpretation of the normalised idf, we made several observations related to disjointness and independence of document events. 
These observations are reported in section 3. We show in section 3.1 that the frequency-based noise probability n (t) N used in the classic idf-definition can be explained by three assumptions: binary term occurrence, constant document containment and disjointness of document containment events. In section 3.2 we show that by assuming independence of documents, we obtain 1--e − 1 Pz 1--0.37 as the upper bound of the noise probability of a term. The value e − 1 is related to the logarithm and we investigate in section 3.3 the link to information theory. In section 4, we link the results of the previous sections to probability theory. We show the steps from possible worlds to binomial distribution and Poisson distribution. In section 5, we emphasise that the theoretical framework of this paper is applicable for both idf and tf. Finally, in section 6, we base the definition of the probability of being informative on the results of the previous sections and compare frequency-based and Poisson-based definitions. 2. BACKGROUND The relationship between frequencies, probabilities and information theory (entropy) has been the focus of many researchers. In this background section, we focus on work that investigates the application of the Poisson distribution in IR since a main part of the work presented in this paper addresses the underlying assumptions of Poisson. [4] proposes a 2-Poisson model that takes into account the different nature of relevant and non-relevant documents, rare terms (content words) and frequent terms (noisy terms, function words, stopwords). [9] shows experimentally that most of the terms (words) in a collection are distributed according to a low dimension n-Poisson model. [10] uses a 2-Poisson model for including term frequency-based probabilities in the probabilistic retrieval model. The non-linear scaling of the Poisson function showed significant improvement compared to a linear frequency-based probability. The Poisson model was here applied to the term frequency of a term in a document. We will generalise the discussion by pointing out that document frequency and term frequency are dual parameters in the collection space and the document space, respectively. Our discussion of the Poisson distribution focuses on the document frequency in a collection rather than on the term frequency in a document. [7] and [6] address the deviation of idf and Poisson, and apply Poisson mixtures to achieve better Poisson-based estimates. The results proved again experimentally that a onedimensional Poisson does not work for rare terms, therefore Poisson mixtures and additional parameters are proposed. [3], section 3.3, illustrates and summarises comprehensively the relationships between frequencies, probabilities and Poisson. Different definitions of idf are put into context and a notion of "noise" is defined, where noise is viewed as the complement of idf. We use in our paper a different notion of noise: we consider a frequency-based noise that corresponds to the document frequency, and we consider a term noise that is based on the independence of document events. [11], [12], [8] and [1] link frequencies and probability estimation to information theory. [12] establishes a framework in which information retrieval models are formalised based on probabilistic inference. A key component is the use of a space of disjoint events, where the framework mainly uses terms as disjoint events. 
The probability of being informative defined in our paper can be viewed as the probability of the disjoint terms in the term space of [12]. [8] address entropy and bibliometric distributions. Entropy is maximal if all events are equiprobable and the frequency-based Lotka law (N/i λ is the number of scientists that have written i publications, where N and λ are distribution parameters), Zipf and the Pareto distribution are related. The Pareto distribution is the continuous case of the Lotka and Lotka and Zipf show equivalences. The Pareto distribution is used by [2] for term frequency normalisation. The Pareto distribution compares to the Poisson distribution in the sense that Pareto is "fat-tailed", i. e. Pareto assigns larger probabilities to large numbers of events than Poisson distributions do. This makes Pareto interesting since Poisson is felt to be too radical on frequent events. We restrict in this paper to the discussion of Poisson, however, our results show that indeed a smoother distribution than Poisson promises to be a good candidate for improving the estimation of probabilities in information retrieval. [1] establishes a theoretical link between tf-idf and information theory and the theoretical research on the meaning of tf-idf "clarifies the statistical model on which the different measures are commonly based". This motivation matches the motivation of our paper: We investigate theoretically the assumptions of classical idf and Poisson for a better understanding of parameter estimation and combination. 3. FROM DISJOINT TO INDEPENDENT 3.1 Binary occurrence, constant containment and disjointness of documents 3.2 The upper bound of the noise probability 3.3 The probability of a maximal informative signal 4. THE LINK TO PROBABILITY THEORY 4.1 Possible Worlds 4.2 Binomial distribution 4.3 Poisson distribution 5. THE COLLECTION SPACE AND THE DOCUMENT SPACE 6. THE PROBABILITY OF BEING INFORMATIVE 7. SUMMARY The definition of the probability of being informative transforms the informative interpretation of the idf into a probabilistic interpretation, and we can use the idf - based probability in probabilistic retrieval approaches. We showed that the classical definition of the noise (document frequency) in the inverse document frequency can be explained by three assumptions: the term within-document occurrence probability is binary, the document containment probability is constant, and the document containment events are disjoint. By explicitly and mathematically formulating the assumptions, we showed that the classical definition of idf does not take into account parameters such as the different nature (size, quality, structure, etc.) of documents in a collection, or the different nature of terms (coverage, importance, position, etc.) in a document. We discussed that the absence of those parameters is compensated by a leverage effect of the within-document term occurrence probability and the document containment probability. By applying an independence rather a disjointness assumption for the document containment, we could establish a link between the noise probability (term occurrence in a collection), information theory and Poisson. From the frequency-based and the Poisson-based probabilities of being noisy, we derived the frequency-based and Poisson-based probabilities of being informative. The frequency-based probability is relatively smooth whereas the Poisson probability is radical in distinguishing between noisy or not noisy, and informative or not informative, respectively. 
We showed how to smoothen the radical behaviour of Poisson with a multidimensional Poisson. The explicit and mathematical formulation of idf - and Poisson-assumptions is the main result of this paper. Also, the paper emphasises the duality of idf and tf, collection space and document space, respectively. Thus, the result applies to term occurrence and document containment in a collection, and it applies to term occurrence and position containment in a document. This theoretical framework is useful for understanding and deciding the parameter estimation and combination in probabilistic retrieval models. The links between indepence-based noise as document frequency, probabilistic interpretation of idf, information theory and Poisson described in this paper may lead to variable probabilistic idf and tf definitions and combinations as required in advanced and personalised information retrieval systems. Acknowledgment: I would like to thank Mounia Lalmas, Gabriella Kazai and Theodora Tsikrika for their comments on the as they said "heavy" pieces. My thanks also go to the meta-reviewer who advised me to improve the presentation to make it less "formidable" and more accessible for those "without a theoretic bent". This work was funded by a research fellowship from Queen Mary University of London.
A Frequency-based and a Poisson-based Definition of the Probability of Being Informative ABSTRACT This paper reports on theoretical investigations about the assumptions underlying the inverse document frequency (idf). We show that an intuitive idf - based probability function for the probability of a term being informative assumes disjoint document events. By assuming documents to be independent rather than disjoint, we arrive at a Poisson-based probability of being informative. The framework is useful for understanding and deciding the parameter estimation and combination in probabilistic retrieval models. 1. INTRODUCTION AND BACKGROUND The inverse document frequency (idf) is one of the most successful parameters for a relevance-based ranking of retrieved objects. With N being the total number of documents, and n (t) being the number of documents in which term t occurs, the idf is defined as follows: Ranking based on the sum of the idf - values of the query terms that occur in the retrieved documents works well, this has been shown in numerous applications. Also, it is well known that the combination of a document-specific term weight and idf works better than idf alone. This approach is known as tf-idf, where tf (t, d) (0 <= tf (t, d) <= 1) is the so-called term frequency of term t in document d. The idf reflects the discriminating power (informativeness) of a term, whereas the tf reflects the occurrence of a term. The idf alone works better than the tf alone does. An explanation might be the problem of tf with terms that occur in many documents; let us refer to those terms as "noisy" terms. We use the notion of "noisy" terms rather than "frequent" terms since frequent terms leaves open whether we refer to the document frequency of a term in a collection or to the so-called term frequency (also referred to as withindocument frequency) of a term in a document. We associate "noise" with the document frequency of a term in a collection, and we associate "occurrence" with the withindocument frequency of a term. The tf of a noisy term might be high in a document, but noisy terms are not good candidates for representing a document. Therefore, the removal of noisy terms (known as "stopword removal") is essential when applying tf. From a probabilistic point of view, tf is a value with a frequency-based probabilistic interpretation whereas idf has an "informative" rather than a probabilistic interpretation. For example, consider a normalisation based on the maximal idf-value. Let T be the set of terms occurring in a collection. This frequency-based probability function covers the interval [0; 1] if the minimal idf is equal to zero, which is the case if we have at least one term that occurs in all documents. Can we interpret Pfreq, the normalised idf, as the probability that the term is informative? When investigating the probabilistic interpretation of the normalised idf, we made several observations related to disjointness and independence of document events. These observations are reported in section 3. We show in section 3.1 that the frequency-based noise probability n (t) N used in the classic idf-definition can be explained by three assumptions: binary term occurrence, constant document containment and disjointness of document containment events. In section 3.2 we show that by assuming independence of documents, we obtain 1--e − 1 Pz 1--0.37 as the upper bound of the noise probability of a term. 
The value e − 1 is related to the logarithm and we investigate in section 3.3 the link to information theory. In section 4, we link the results of the previous sections to probability theory. We show the steps from possible worlds to binomial distribution and Poisson distribution. In section 5, we emphasise that the theoretical framework of this paper is applicable for both idf and tf. Finally, in section 6, we base the definition of the probability of being informative on the results of the previous sections and compare frequency-based and Poisson-based definitions. 2. BACKGROUND The relationship between frequencies, probabilities and information theory (entropy) has been the focus of many researchers. In this background section, we focus on work that investigates the application of the Poisson distribution in IR since a main part of the work presented in this paper addresses the underlying assumptions of Poisson. [4] proposes a 2-Poisson model that takes into account the different nature of relevant and non-relevant documents, rare terms (content words) and frequent terms (noisy terms, function words, stopwords). [9] shows experimentally that most of the terms (words) in a collection are distributed according to a low dimension n-Poisson model. [10] uses a 2-Poisson model for including term frequency-based probabilities in the probabilistic retrieval model. The non-linear scaling of the Poisson function showed significant improvement compared to a linear frequency-based probability. The Poisson model was here applied to the term frequency of a term in a document. We will generalise the discussion by pointing out that document frequency and term frequency are dual parameters in the collection space and the document space, respectively. Our discussion of the Poisson distribution focuses on the document frequency in a collection rather than on the term frequency in a document. [7] and [6] address the deviation of idf and Poisson, and apply Poisson mixtures to achieve better Poisson-based estimates. The results proved again experimentally that a onedimensional Poisson does not work for rare terms, therefore Poisson mixtures and additional parameters are proposed. [3], section 3.3, illustrates and summarises comprehensively the relationships between frequencies, probabilities and Poisson. We use in our paper a different notion of noise: we consider a frequency-based noise that corresponds to the document frequency, and we consider a term noise that is based on the independence of document events. [11], [12], [8] and [1] link frequencies and probability estimation to information theory. [12] establishes a framework in which information retrieval models are formalised based on probabilistic inference. A key component is the use of a space of disjoint events, where the framework mainly uses terms as disjoint events. The probability of being informative defined in our paper can be viewed as the probability of the disjoint terms in the term space of [12]. [8] address entropy and bibliometric distributions. The Pareto distribution is the continuous case of the Lotka and Lotka and Zipf show equivalences. The Pareto distribution is used by [2] for term frequency normalisation. The Pareto distribution compares to the Poisson distribution in the sense that Pareto is "fat-tailed", i. e. Pareto assigns larger probabilities to large numbers of events than Poisson distributions do. This makes Pareto interesting since Poisson is felt to be too radical on frequent events. 
We restrict in this paper to the discussion of Poisson, however, our results show that indeed a smoother distribution than Poisson promises to be a good candidate for improving the estimation of probabilities in information retrieval. This motivation matches the motivation of our paper: We investigate theoretically the assumptions of classical idf and Poisson for a better understanding of parameter estimation and combination. 7. SUMMARY The definition of the probability of being informative transforms the informative interpretation of the idf into a probabilistic interpretation, and we can use the idf - based probability in probabilistic retrieval approaches. We showed that the classical definition of the noise (document frequency) in the inverse document frequency can be explained by three assumptions: the term within-document occurrence probability is binary, the document containment probability is constant, and the document containment events are disjoint. We discussed that the absence of those parameters is compensated by a leverage effect of the within-document term occurrence probability and the document containment probability. By applying an independence rather a disjointness assumption for the document containment, we could establish a link between the noise probability (term occurrence in a collection), information theory and Poisson. From the frequency-based and the Poisson-based probabilities of being noisy, we derived the frequency-based and Poisson-based probabilities of being informative. The frequency-based probability is relatively smooth whereas the Poisson probability is radical in distinguishing between noisy or not noisy, and informative or not informative, respectively. We showed how to smoothen the radical behaviour of Poisson with a multidimensional Poisson. Also, the paper emphasises the duality of idf and tf, collection space and document space, respectively. Thus, the result applies to term occurrence and document containment in a collection, and it applies to term occurrence and position containment in a document. This theoretical framework is useful for understanding and deciding the parameter estimation and combination in probabilistic retrieval models. The links between indepence-based noise as document frequency, probabilistic interpretation of idf, information theory and Poisson described in this paper may lead to variable probabilistic idf and tf definitions and combinations as required in advanced and personalised information retrieval systems.
J-59
Cost Sharing in a Job Scheduling Problem Using the Shapley Value
A set of jobs need to be served by a single server which can serve only one job at a time. Jobs have processing times and incur waiting costs (linear in their waiting time). The jobs share their costs through compensation using monetary transfers. We characterize the Shapley value rule for this model using fairness axioms. Our axioms include a bound on the cost share of jobs in a group, efficiency, and some independence properties on the cost share of a job.
[ "cost share", "cost share", "job schedul", "job schedul", "process time", "monetari transfer", "fair axiom", "shaplei valu", "queue problem", "agent", "cooper game theori approach", "unit wait cost", "alloc rule", "expect cost bound" ]
[ "P", "P", "P", "P", "P", "P", "P", "M", "M", "U", "U", "M", "M", "M" ]
Cost Sharing in a Job Scheduling Problem Using the Shapley Value Debasis Mishra Center for Operations Research and Econometrics (CORE) Universit´e Catholique de Louvain Louvain la Neuve, Belgium mishra@core.ucl.ac.be Bharath Rangarajan Center for Operations Research and Econometrics (CORE) Universit´e Catholique de Louvain Louvain la Neuve, Belgium rangarajan@core.ucl.ac.be ABSTRACT A set of jobs need to be served by a single server which can serve only one job at a time. Jobs have processing times and incur waiting costs (linear in their waiting time). The jobs share their costs through compensation using monetary transfers. We characterize the Shapley value rule for this model using fairness axioms. Our axioms include a bound on the cost share of jobs in a group, efficiency, and some independence properties on the the cost share of a job. Categories and Subject Descriptors J.4 [Social and Behaviorial Sciences]: Economics General Terms Economics, Theory 1. INTRODUCTION A set of jobs need to be served by a server. The server can process only one job at a time. Each job has a finite processing time and a per unit time waiting cost. Efficient ordering of this queue directs us to serve the jobs in increasing order of the ratio of per unit time waiting cost and processing time. To compensate for waiting by jobs, monetary transfers to jobs are allowed. How should the jobs share the cost equitably amongst themselves (through transfers)? The problem of fair division of costs among agents in a queue has many practical applications. For example, computer programs are regularly scheduled on servers, data are scheduled to be transmitted over networks, jobs are scheduled in shop-floor on machines, and queues appear in many public services (post offices, banks). Study of queueing problems has attracted economists for a long time [7, 17]. Cost sharing is a fundamental problem in many settings on the Internet. Internet can be seen as a common resource shared by many users and the cost incured by using the resource needs to be shared in an equitable manner. The current surge in cost sharing literature from computer scientists validate this claim [8, 11, 12, 6, 24]. Internet has many settings in which our model of job scheduling appears and the agents waiting in a queue incur costs (jobs scheduled on servers, queries answered from a database, data scheduled to be transmitted over a fixed bandwidth network etc.). We hope that our analysis will give new insights on cost sharing problems of this nature. Recently, there has been increased interest in cost sharing methods with submodular cost functions [11, 12, 6, 24]. While many settings do have submodular cost functions (for example, multi-cast transmission games [8]), while the cost function of our game is supermodular. Also, such literature typically does not assume budget-balance (transfers adding up to zero), while it is an inherent feature of our model. A recent paper by Maniquet [15] is the closest to our model and is the motivation behind our work 1 . Maniquet [15] studies a model where he assumes all processing times are unity. For such a model, he characterizes the Shapley value rule using classical fairness axioms. Chun [1] interprets the worth of a coalition of jobs in a different manner for the same model and derives a reverse rule. Chun characterizes this rule using similar fairness axioms. Chun [2] also studies the envy properties of these rules. 
Moulin [22, 21] studies the queueing problem from a strategic point view when per unit waiting costs are unity. Moulin introduces new concepts in the queueing settings such as splitting and merging of jobs, and ways to prevent them. Another stream of literature is on sequencing games, first introduced by Curiel et al. [4]. For a detailed survey, refer to Curiel et al. [3]. Curiel et al. [4] defined sequencing games similar to our model, but in which an initial ordering of jobs is given. Besides, their notion of worth of a coalition is very different from the notions studied in Maniquet [15] and Chun [1] (these are the notions used in our work too). The particular notion of the worth of a coalition makes the sequencing game of Curiel et al. [4] convex, whereas our game is not convex and does not assume the presence of any initial order. In summary, the focus of this stream of 1 The authors thank Fran¸cois Maniquet for several fruitful discussions. 232 research is how to share the savings in costs from the initial ordering to the optimal ordering amongst jobs (also see Hamers et al. [9], Curiel et al. [5]). Recently, Klijn and S´anchez [13, 14] considered sequencing games without any initial ordering of jobs. They take two approaches to define the worth of coalitions. One of their approaches, called the tail game, is related to the reverse rule of Chun [1]. In the tail game, jobs in a coalition are served after the jobs not in the coalition are served. Klijn and S´anchez [14] showed that the tail game is balanced. Further, they provide expressions for the Shapley value in tail game in terms of marginal vectors and reversed marginal vectors. We provide a simpler expression of the Shapley value in the tail game, generalizing the result in Chun [1]. Klijn and S´anchez [13] study the core of this game in detail. Strategic aspects of queueing problems have also been researched. Mitra [19] studies the first best implementation in queueing models with generic cost functions. First best implementation means that there exists an efficient mechanism in which jobs in the queue have a dominant strategy to reveal their true types and their transfers add up to zero. Suijs [27] shows that if waiting costs of jobs are linear then first best implementation is possible. Mitra [19] shows that among a more general class of queueing problems first best implementation is possible if and only if the cost is linear. For another queueing model, Mitra [18] shows that first best implementation is possible if and only if the cost function satisfies a combinatorial property and an independence property. Moulin [22, 21] studies strategic concepts such as splitting and merging in queueing problems with unit per unit waiting costs. The general cost sharing literature is vast and has a long history. For a good survey, we refer to [20]. From the seminal work of Shapley [25] to recent works on cost sharing in multi-cast transmission and optimization problems [8, 6, 23] this area has attracted economists, computer scientists, and operations researchers. 1.1 Our Contribution Ours is the first model which considers cost sharing when both processing time and per unit waiting cost of jobs are present. We take a cooperative game theory approach and apply the classical Shapley value rule to the problem. We show that the Shapley value rule satisfies many intuitive fairness axioms. 
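As a small illustration of the expected-cost bound just described (the numbers are invented), consider two jobs i and j with the same ratio of unit waiting cost to processing time, say (p_i, theta_i) = (2, 4) and (p_j, theta_j) = (3, 6), so gamma_i = gamma_j = 2. Both orderings are efficient, and which job inflicts cost on the other depends on the ordering selected; the axiom asks each job to bear at least its expected cost over the two orderings.

\[
\begin{aligned}
i \text{ first: } & c_i = p_i\theta_i = 8, \quad c_j = p_j\theta_j + \theta_j p_i = 30;\\
j \text{ first: } & c_i = p_i\theta_i + \theta_i p_j = 20, \quad c_j = p_j\theta_j = 18;\\
\text{bounds: } & \pi_i \ge p_i\theta_i + \tfrac12\,\theta_i p_j = 14, \quad \pi_j \ge p_j\theta_j + \tfrac12\,\theta_j p_i = 24.
\end{aligned}
\]

The two bounds sum to 38, the total cost of either efficient ordering, so together with efficiency (transfers adding up to zero) they already pin down the cost shares in this two-job case, and these are the shares the Shapley value assigns here.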
Due to the two-dimensional nature of our model and the one-dimensional nature of Maniquet's model [15], his axioms are insufficient to characterize the Shapley value in our setting. We introduce axioms such as independence of preceding jobs' unit waiting cost and independence of following jobs' processing time. A key axiom that we introduce gives us a bound on the cost share of a job in a group of jobs which have the same ratio of unit time waiting cost to processing time (these jobs can be ordered in any manner amongst themselves in an efficient ordering). If such a group consists of just one job, then the axiom says that such a job should at least pay his own processing cost (i.e., the cost it would have incurred if it were the only job in the queue). If there are multiple jobs in such a group, the probability of any two jobs from such a group inflicting costs on each other is the same (1/2) in an efficient ordering. Depending on the ordering selected, one job inflicts cost on the other. Our fairness axiom says that each job should at least bear such expected costs. We characterize the Shapley value rule using these fairness axioms. We also extend the envy results in [2] to our setting and discuss a class of reasonable cost sharing mechanisms.
2. THE MODEL
There are n jobs that need to be served by one server which can process only one job at a time. The set of jobs is denoted as N = {1, ..., n}. σ : N → N is an ordering of jobs in N and σi denotes the position of job i in the ordering σ. Given an ordering σ, define Fi(σ) = {j ∈ N : σi < σj} and Pi(σ) = {j ∈ N : σi > σj}. Every job i is identified by two parameters: (pi, θi). pi is the processing time and θi is the cost per unit waiting time of job i. Thus, a queueing problem is defined by a list q = (N, p, θ) ∈ Q, where Q is the set of all possible lists. We will denote γi = θi/pi. Given an ordering of jobs σ, the cost incurred by job i is given by ci(σ) = piθi + θi Σ_{j∈Pi(σ)} pj. The total cost incurred by all jobs due to an ordering σ can be written in two ways: (i) by summing the cost incurred by every job and (ii) by summing the costs inflicted by a job on other jobs together with its own processing cost:
C(N, σ) = Σ_{i∈N} ci(σ) = Σ_{i∈N} piθi + Σ_{i∈N} (θi Σ_{j∈Pi(σ)} pj) = Σ_{i∈N} piθi + Σ_{i∈N} (pi Σ_{j∈Fi(σ)} θj).
An efficient ordering σ∗ is one which minimizes the total cost incurred by all jobs. So, C(N, σ∗) ≤ C(N, σ) ∀ σ ∈ Σ. To achieve notational simplicity, we will write the total cost in an efficient ordering of jobs from N as C(N) whenever it is not confusing. Sometimes, we will deal with only a subset of jobs S ⊆ N. The ordering σ will then be defined on jobs in S only and we will write the total cost from an efficient ordering of jobs in S as C(S). The following lemma shows that jobs are ordered in decreasing γ in an efficient ordering. This is also known as the weighted shortest processing time rule, first introduced by Smith [26].
Lemma 1. For any S ⊆ N, let σ∗ be an efficient ordering of jobs in S. For every i ≠ j, i, j ∈ S, if σ∗i > σ∗j, then γi ≤ γj.
Proof. Assume for contradiction that the statement of the lemma is not true. This means we can find two consecutive jobs i, j ∈ S (σ∗i = σ∗j + 1) such that γi > γj. Define a new ordering σ by interchanging i and j in σ∗. The costs to jobs in S \ {i, j} are not changed from σ∗ to σ. The difference between total costs in σ∗ and σ is given by C(S, σ) − C(S, σ∗) = θjpi − θipj. From efficiency we get θjpi − θipj ≥ 0. This gives us γj ≥ γi, which is a contradiction.
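As a concrete illustration of Lemma 1 and of how C(N) is computed, the following is a minimal Python sketch (ours, not part of the paper; the job data and function names are made up for illustration). It sorts jobs by decreasing γi = θi/pi and adds up each job's waiting cost.

```python
def efficient_ordering(jobs):
    """Lemma 1 / weighted shortest processing time: sort by decreasing gamma_i = theta_i / p_i.
    `jobs` maps a job id to the pair (p_i, theta_i); ties may be broken arbitrarily."""
    return sorted(jobs, key=lambda i: jobs[i][1] / jobs[i][0], reverse=True)

def total_cost(jobs, order):
    """C(N, sigma): job i pays theta_i times the total processing time of itself and its predecessors."""
    cost, elapsed = 0.0, 0.0
    for i in order:
        p, theta = jobs[i]
        elapsed += p                # completion time of job i
        cost += theta * elapsed     # c_i(sigma) = p_i*theta_i + theta_i * sum of preceding p_j
    return cost

if __name__ == "__main__":
    jobs = {1: (2.0, 6.0), 2: (1.0, 1.0), 3: (4.0, 8.0)}   # hypothetical instance: i -> (p_i, theta_i)
    sigma = efficient_ordering(jobs)                        # [1, 3, 2], since the gammas are 3, 2, 1
    print(sigma, total_cost(jobs, sigma))                   # total cost 67.0 for this instance
```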
An allocation for q = (N, p, θ) ∈ Q has two components: an ordering σ and a transfer ti for every job i ∈ N. ti denotes the payment received by job i. Given a transfer ti and an ordering σ, the cost share of job i is defined as πi = ci(σ) − ti = θi Σ_{j∈N:σj≤σi} pj − ti.
An allocation (σ, t) is efficient for q = (N, p, θ) whenever σ is an efficient ordering and Σ_{i∈N} ti = 0. The set of efficient orderings of q is denoted as Σ∗(q) and σ∗(q) will be used to refer to a typical element of the set. The following straightforward lemma says that for two different efficient orderings, the cost share in one efficient allocation is possible to achieve in the other by appropriately modifying the transfers.
Lemma 2. Let (σ, t) be an efficient allocation and π be the vector of cost shares of jobs from this allocation. If σ∗ ≠ σ is an efficient ordering and t∗i = ci(σ∗) − πi ∀ i ∈ N, then (σ∗, t∗) is also an efficient allocation.
Proof. Since (σ, t) is efficient, Σ_{i∈N} ti = 0. This gives Σ_{i∈N} πi = C(N). Since σ∗ is an efficient ordering, Σ_{i∈N} ci(σ∗) = C(N). This means Σ_{i∈N} t∗i = Σ_{i∈N} [ci(σ∗) − πi] = 0. So, (σ∗, t∗) is an efficient allocation.
Depending on the transfers, the cost shares in different efficient allocations may differ. An allocation rule ψ associates with every q ∈ Q a non-empty subset ψ(q) of allocations.
3. COST SHARING USING THE SHAPLEY VALUE
In this section, we define the coalitional cost of this game and analyze the solution proposed by the Shapley value. Given a queue q ∈ Q, the cost of a coalition S ⊆ N of jobs in the queue is defined as the cost incurred by the jobs in S if these are the only jobs served in the queue, using an efficient ordering. Formally, the cost of a coalition S ⊆ N is
C(S) = Σ_{i∈S} Σ_{j∈S:σ∗j≤σ∗i} θi pj,
where σ∗ = σ∗(S) is an efficient ordering considering jobs from S only. The worth of a coalition of S jobs is just −C(S). Maniquet [15] observes that another equivalent way to define the worth of a coalition is via the dual function of the cost function C(·). Other interesting ways to define the worth of a coalition in such games are discussed by Chun [1], who assumes that a coalition of jobs is served after the jobs not in the coalition are served. The Shapley value (or cost share) of a job i is defined as
SVi = Σ_{S⊆N\{i}} [|S|! (|N| − |S| − 1)! / |N|!] (C(S∪{i}) − C(S)).   (1)
The Shapley value allocation rule says that jobs are ordered using an efficient ordering and transfers are assigned to jobs such that the cost share of job i is given by Equation 1.
Lemma 3. Let σ∗ be an efficient ordering of jobs in set N. For all i ∈ N, the Shapley value is given by SVi = piθi + (1/2)(Li + Ri), where Li = θi Σ_{j∈Pi(σ∗)} pj and Ri = pi Σ_{j∈Fi(σ∗)} θj.
Proof. Another way to write the Shapley value formula is the following [10]: SVi = Σ_{S⊆N:i∈S} ∆(S)/|S|, where ∆(S) = C(S) if |S| = 1 and ∆(S) = C(S) − Σ_{T⊊S} ∆(T). This gives ∆({i}) = C({i}) = piθi ∀ i ∈ N. For any i, j ∈ N with i ≠ j, we have ∆({i, j}) = C({i, j}) − C({i}) − C({j}) = min(piθi + pjθj + pjθi, piθi + pjθj + piθj) − piθi − pjθj = min(pjθi, piθj). We will show by induction that ∆(S) = 0 if |S| > 2. For |S| = 3, let S = {i, j, k}. Without loss of generality, assume θi/pi ≥ θj/pj ≥ θk/pk. So, ∆(S) = C(S) − ∆({i, j}) − ∆({j, k}) − ∆({i, k}) − ∆({i}) − ∆({j}) − ∆({k}) = C(S) − piθj − pjθk − piθk − piθi − pjθj − pkθk = C(S) − C(S) = 0. Now, assume for T ⊊ S that ∆(T) = 0 if |T| > 2. Without loss of generality, assume σ to be the identity mapping.
Now, ∆(S) = C(S) − Σ_{T⊊S} ∆(T) = C(S) − Σ_{i∈S} Σ_{j∈S:j<i} ∆({i, j}) − Σ_{i∈S} ∆({i}) = C(S) − Σ_{i∈S} Σ_{j∈S:j<i} pjθi − Σ_{i∈S} piθi = C(S) − C(S) = 0. This proves that ∆(S) = 0 if |S| > 2. Using the Shapley value formula now,
SVi = Σ_{S⊆N:i∈S} ∆(S)/|S| = ∆({i}) + (1/2) Σ_{j∈N:j≠i} ∆({i, j}) = piθi + (1/2)(Σ_{j<i} ∆({i, j}) + Σ_{j>i} ∆({i, j})) = piθi + (1/2)(Σ_{j<i} pjθi + Σ_{j>i} piθj) = piθi + (1/2)(Li + Ri).
4. AXIOMATIC CHARACTERIZATION OF THE SHAPLEY VALUE
In this section, we will define several axioms on fairness and characterize the Shapley value using them. For a given q ∈ Q, we will denote ψ(q) as the set of allocations from allocation rule ψ. Also, we will denote the cost share vector associated with an allocation (σ, t) as π and that with an allocation (σ′, t′) as π′, etc.
4.1 The Fairness Axioms
We will define three types of fairness axioms: (i) related to efficiency, (ii) related to equity, and (iii) related to independence.
Efficiency Axioms
We define two efficiency axioms. The first states that an efficient ordering should be selected and the transfers of jobs should add up to zero (budget balance).
Definition 1. An allocation rule ψ satisfies efficiency if for every q ∈ Q and (σ, t) ∈ ψ(q), (σ, t) is an efficient allocation.
The second axiom related to efficiency says that the allocation rule should not discriminate between two allocations which are equivalent to each other in terms of cost shares of jobs.
Definition 2. An allocation rule ψ satisfies Pareto indifference if for every q ∈ Q, (σ, t) ∈ ψ(q), and (σ′, t′) ∈ Σ(q), we have (πi = π′i ∀ i ∈ N) ⇒ ((σ′, t′) ∈ ψ(q)).
An implication of the Pareto indifference axiom and Lemma 2 is that for every efficient ordering there is some set of transfers of jobs such that it is part of an efficient rule, and the cost share of a job in all these allocations is the same.
Equity Axioms
How should the cost be shared between two jobs if the jobs have some kind of similarity between them? Equity axioms provide us with fairness properties which help us answer this question. We provide five such axioms. Some of these axioms (for example anonymity, equal treatment of equals) are standard in the literature, while some are new. We start with a well known equity axiom called anonymity. Denote ρ : N → N as a permutation of elements in N. Let ρ(σ, t) denote the allocation obtained by permuting elements in σ and t according to ρ. Similarly, let ρ(p, θ) denote the new list of (p, θ) obtained by permuting elements of p and θ according to ρ. Our first equity axiom states that allocation rules should be immune to such permutation of data.
Definition 3. An allocation rule ψ satisfies anonymity if for all q ∈ Q, (σ, t) ∈ ψ(q) and every permutation ρ, then ρ(σ, t) ∈ ψ(N, ρ(p, θ)).
The next equity axiom is classical in the literature and says that two similar jobs should be compensated such that their cost shares are equal. This implies that if all the jobs are of the same type, then the jobs should equally share the total system cost.
Definition 4. An allocation rule ψ satisfies equal treatment of equals (ETE) if for all q ∈ Q, (σ, t) ∈ ψ(q), i, j ∈ N, then (pi = pj; θi = θj) ⇒ (πi = πj).
ETE directs us to share costs equally between jobs if they have the same per unit waiting cost and processing time. But it is silent about the cost shares of two jobs i and j which satisfy θi/pi = θj/pj. We introduce a new axiom for this.
If an efficient rule chooses σ such that σi < σj for some i, j ∈ N, then job i is inflicting a cost of piθj on job j and job j is inflicting zero cost on job i. Define, for some γ ≥ 0, S(γ) = {i ∈ N : γi = γ}. In an efficient rule, the elements in S(γ) can be ordered in any manner (in |S(γ)|! ways). If i, j ∈ S(γ) then we have pjθi = piθj. The probability of σi < σj is 1/2 and so is the probability of σi > σj. The expected cost i inflicts on j is (1/2)piθj and the expected cost j inflicts on i is (1/2)pjθi. Our next fairness axiom says that i and j should each be responsible for their own processing cost and this expected cost they inflict on each other. Arguing for every pair of jobs i, j ∈ S(γ), we establish a bound on the cost share of jobs in S(γ). We impose this as an equity axiom below.
Definition 5. An allocation rule satisfies expected cost bound (ECB) if for all q ∈ Q, (σ, t) ∈ ψ(q) with π being the resulting cost share, for any γ ≥ 0, and for every i ∈ S(γ), we have
πi ≥ piθi + (1/2)(Σ_{j∈S(γ):σj<σi} pjθi + Σ_{j∈S(γ):σj>σi} piθj).
The central idea behind this axiom is that of "expected cost inflicted". If an allocation rule chooses multiple allocations, we can assign equal probabilities of selecting one of the allocations. In that case, the expected cost inflicted by a job i on another job j in the allocation rule can be calculated. Our axiom says that the cost share of a job should be at least its own processing cost plus the total expected cost it inflicts on others. Note that the above bound poses no constraints on how the costs are shared among different groups. Also observe that if S(γ) contains just one job, ECB says that that job should at least bear its own processing cost. A direct consequence of ECB is the following lemma.
Lemma 4. Let ψ be an efficient rule which satisfies ECB. For a q ∈ Q, if S(γ) = N, then for any (σ, t) ∈ ψ(q) which gives a cost share of π, πi = piθi + (1/2)(Li + Ri) ∀ i ∈ N.
Proof. From ECB, we get πi ≥ piθi + (1/2)(Li + Ri) ∀ i ∈ N. Assume for contradiction that there exists j ∈ N such that πj > pjθj + (1/2)(Lj + Rj). Using efficiency and the fact that Σ_{i∈N} Li = Σ_{i∈N} Ri, we get Σ_{i∈N} πi = C(N) > Σ_{i∈N} piθi + (1/2) Σ_{i∈N} (Li + Ri) = C(N). This gives us a contradiction.
Next, we introduce an axiom about sharing the transfer of a job between a set of jobs. In particular, if the last job quits the system, then the ordering need not change. But the transfer to the last job needs to be shared between the other jobs. This should be done in proportion to their processing times because every job influenced the last job based on its processing time.
Definition 6. An allocation rule ψ satisfies proportionate responsibility of p (PRp) if for all q ∈ Q, for all (σ, t) ∈ ψ(q), k ∈ N such that σk = |N|, and q′ = (N \ {k}, p′, θ′) ∈ Q such that for all i ∈ N \ {k}: θ′i = θi, p′i = pi, there exists (σ′, t′) ∈ ψ(q′) such that for all i ∈ N \ {k}: σ′i = σi and t′i = ti + tk · pi / Σ_{j≠k} pj.
An analogous fairness axiom results if we remove the job from the beginning of the queue. Since the presence of the first job influenced each job depending on their θ values, its transfer needs to be shared in proportion to θ values.
Definition 7. An allocation rule ψ satisfies proportionate responsibility of θ (PRθ) if for all q ∈ Q, for all (σ, t) ∈ ψ(q), k ∈ N such that σk = 1, and q′ = (N \ {k}, p′, θ′) ∈ Q such that for all i ∈ N \ {k}: θ′i = θi, p′i = pi, there exists (σ′, t′) ∈ ψ(q′) such that for all i ∈ N \ {k}: σ′i = σi and t′i = ti + tk · θi / Σ_{j≠k} θj.
The proportionate responsibility axioms are generalizations of the equal responsibility axioms introduced by Maniquet [15].
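To make the closed form of Lemma 3 and the coalition costs behind Equation 1 concrete, here is a small self-contained Python sketch (ours, for illustration only; the instance and function names are not from the paper). It computes the Shapley cost shares both by brute force over coalitions and via SVi = piθi + (Li + Ri)/2; the two computations should coincide, and each share is at least piθi, in line with ECB when every job has a distinct γi.

```python
from itertools import combinations
from math import factorial

def coalition_cost(S, p, theta):
    """C(S): cost of serving only the jobs in S under an efficient (decreasing theta/p) ordering."""
    order = sorted(S, key=lambda i: theta[i] / p[i], reverse=True)
    cost, elapsed = 0.0, 0.0
    for i in order:
        elapsed += p[i]
        cost += theta[i] * elapsed
    return cost

def shapley_bruteforce(p, theta):
    """Shapley cost shares via Equation 1 (exponential in the number of jobs)."""
    N, n = list(p), len(p)
    sv = {i: 0.0 for i in N}
    for i in N:
        others = [j for j in N if j != i]
        for r in range(n):
            for S in combinations(others, r):
                w = factorial(r) * factorial(n - r - 1) / factorial(n)
                sv[i] += w * (coalition_cost(set(S) | {i}, p, theta) - coalition_cost(S, p, theta))
    return sv

def shapley_closed_form(p, theta):
    """Lemma 3: SV_i = p_i*theta_i + (L_i + R_i)/2 at an efficient ordering."""
    order = sorted(p, key=lambda i: theta[i] / p[i], reverse=True)
    pos = {i: k for k, i in enumerate(order)}
    sv = {}
    for i in p:
        L = theta[i] * sum(p[j] for j in p if pos[j] < pos[i])
        R = p[i] * sum(theta[j] for j in p if pos[j] > pos[i])
        sv[i] = p[i] * theta[i] + 0.5 * (L + R)
    return sv

if __name__ == "__main__":
    p = {1: 2.0, 2: 1.0, 3: 4.0}
    theta = {1: 6.0, 2: 1.0, 3: 8.0}
    print(shapley_bruteforce(p, theta))     # job 1's share: 2*6 + (0 + 2*(8+1))/2 = 21.0
    print(shapley_closed_form(p, theta))    # the two dictionaries should coincide
```

On this instance the shares sum to 67.0, the efficient total cost C(N), as budget balance requires.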
Independence Axioms
The waiting cost of a job does not depend on the per unit waiting cost of its preceding jobs. Similarly, the waiting cost inflicted by a job on its following jobs is independent of the processing times of the following jobs. These independence properties should be carried over to the cost sharing rules. This gives us two independence axioms.
Definition 8. An allocation rule ψ satisfies independence of preceding jobs' θ (IPJθ) if for all q = (N, p, θ), q′ = (N, p′, θ′) ∈ Q, (σ, t) ∈ ψ(q), (σ′, t′) ∈ ψ(q′), if for all i ∈ N \ {k}: θi = θ′i, pi = p′i, and γk < γ′k, pk = p′k, then for all j ∈ N such that σj > σk: πj = π′j, where π is the cost share in (σ, t) and π′ is the cost share in (σ′, t′).
Definition 9. An allocation rule ψ satisfies independence of following jobs' p (IFJp) if for all q = (N, p, θ), q′ = (N, p′, θ′) ∈ Q, (σ, t) ∈ ψ(q), (σ′, t′) ∈ ψ(q′), if for all i ∈ N \ {k}: θi = θ′i, pi = p′i, and γk > γ′k, θk = θ′k, then for all j ∈ N such that σj < σk: πj = π′j, where π is the cost share in (σ, t) and π′ is the cost share in (σ′, t′).
4.2 The Characterization Results
Having stated the fairness axioms, we propose three different ways to characterize the Shapley value rule using these axioms. All our characterizations involve efficiency and ECB. But if we have IPJθ, we either need IFJp or PRp. Similarly, if we have IFJp, we either need IPJθ or PRθ.
Proposition 1. Any efficient rule ψ that satisfies ECB, IPJθ, and IFJp is a rule implied by the Shapley value rule.
Proof. Define, for any i, j ∈ N, θ^i_j = γi pj and p^i_j = θj / γi. Assume without loss of generality that σ is an efficient ordering with σi = i ∀ i ∈ N. Consider the following q′ = (N, p′, θ′) corresponding to job i, with p′j = pj if j ≤ i and p′j = p^i_j if j > i, and θ′j = θ^i_j if j < i and θ′j = θj if j ≥ i. Observe that all jobs have the same γ, namely γi. By Lemma 2 and efficiency, (σ, t′) ∈ ψ(q′) for some set of transfers t′. Using Lemma 4, we get the cost share of i from (σ, t′) as πi = piθi + (1/2)(Li + Ri). Now, for any j < i, if we change θ′j to θj without changing the processing time, the new γ of j is γj ≥ γi. Applying IPJθ, the cost share of job i should not change. Similarly, for any job j > i, if we change p′j to pj without changing θj, the new γ of j is γj ≤ γi. Applying IFJp, the cost share of job i should not change. Applying this procedure for every j < i with IPJθ and for every j > i with IFJp, we reach q = (N, p, θ) and the payoff of i does not change from πi. Using this argument for every i ∈ N and using the expression for the Shapley value in Lemma 3, we get the Shapley value rule.
It is possible to replace one of the independence axioms with an equity axiom on sharing the transfer of a job. This is shown in Propositions 2 and 3.
Proposition 2. Any efficient rule ψ that satisfies ECB, IPJθ, and PRp is a rule implied by the Shapley value rule.
Proof. As in the proof of Proposition 1, define θ^i_j = γi pj ∀ i, j ∈ N. Assume without loss of generality that σ is an efficient ordering with σi = i ∀ i ∈ N. Consider a queue with jobs in the set K = {1, ..., i, i + 1}, where i < n. Define q′ = (K, p, θ′), where θ′j = θ^{i+1}_j ∀ j ∈ K. Define σ′j = σj ∀ j ∈ K. σ′ is an efficient ordering for q′. By ECB and Lemma 4, the cost share of job i + 1 in any allocation in ψ(q′) must be π_{i+1} = p_{i+1}θ_{i+1} + (1/2) Σ_{j<i+1} pjθ_{i+1}. Now, consider q″ = (K, p, θ″) such that θ″j = θ^i_j ∀ j ≤ i and θ″_{i+1} = θ_{i+1}.
σ′ remains an efficient ordering in q″ and by IPJθ the cost share of i + 1 remains π_{i+1}. In q‴ = (K \ {i + 1}, p, θ″), we can calculate the cost share of job i using ECB and Lemma 4 as π′i = piθi + (1/2) Σ_{j<i} pjθi. So, using PRp, we get the cost share of job i in q″ as πi = π′i + t_{i+1} · pi / Σ_{j<i+1} pj = piθi + (1/2)(Σ_{j<i} pjθi + piθ_{i+1}). Now, we can set K = K ∪ {i + 2}. As before, we can find the cost share of i + 2 in this queue as π_{i+2} = p_{i+2}θ_{i+2} + (1/2) Σ_{j<i+2} pjθ_{i+2}. Using PRp, we get the new cost share of job i in the new queue as πi = piθi + (1/2)(Σ_{j<i} pjθi + piθ_{i+1} + piθ_{i+2}). This process can be repeated till we add job n, at which point the cost share of i is piθi + (1/2)(Σ_{j<i} pjθi + Σ_{j>i} piθj). Then, we can adjust the θ of the jobs preceding i to their original values and, applying IPJθ, the payoffs of jobs i through n will not change. This gives us the Shapley values of jobs i through n. Setting i = 1, we get the cost shares of all the jobs from ψ as the Shapley value.
Proposition 3. Any efficient rule ψ that satisfies ECB, IFJp, and PRθ is a rule implied by the Shapley value rule.
Proof. The proof mirrors the proof of Proposition 2. We provide a short sketch. Analogous to the proof of Proposition 2, the θs are kept equal to the original data and the processing times are initialized to p^{i+1}_j. This allows us to use IFJp. Also, in contrast to Proposition 2, we consider K = {i, i + 1, ..., n} and repeatedly add jobs to the beginning of the queue, maintaining the same efficient ordering. So, we add the cost components of preceding jobs to the cost share of jobs in each iteration and converge to the Shapley value rule.
The next proposition shows that the Shapley value rule satisfies all the fairness axioms discussed.
Proposition 4. The Shapley value rule satisfies efficiency, Pareto indifference, anonymity, ETE, ECB, IPJθ, IFJp, PRp, and PRθ.
Proof. The Shapley value rule chooses an efficient ordering and by definition the payments add up to zero. So, it satisfies efficiency. The Shapley value assigns the same cost share to a job irrespective of the efficient ordering chosen. So, it is Pareto indifferent. The Shapley value is anonymous because the particular index of a job does not affect his ordering or cost share. For ETE, consider two jobs i, j ∈ N such that pi = pj and θi = θj. Without loss of generality, assume the efficient ordering to be 1, ..., i, ..., j, ..., n. Now, the Shapley value of job i is
SVi = piθi + (1/2)(Li + Ri)   (from Lemma 3)
= pjθj + (1/2)(Lj + Rj) − (1/2)(Li − Lj + Ri − Rj)
= SVj − (1/2)(Σ_{i<k≤j} piθk − Σ_{i≤k<j} pkθi)
= SVj − (1/2) Σ_{i<k≤j} (piθk − pkθi)   (using pi = pj and θi = θj)
= SVj   (using θk/pk = θi/pi for all i ≤ k ≤ j).
The Shapley value satisfies ECB by its expression in Lemma 3. Consider any job i in an efficient ordering σ; if we increase the value of γj for some j ≠ i such that σj < σi, then the set Pi (the preceding jobs of i) does not change in the new efficient ordering. If γj is changed such that pj remains the same, then the expression Li = θi Σ_{k∈Pi} pk is unchanged. If the (p, θ) values of no other jobs are changed, then the Shapley value of i is unchanged by increasing γj for some j ∈ Pi while keeping pj unchanged. Thus, the Shapley value rule satisfies IPJθ. An analogous argument shows that the Shapley value rule satisfies IFJp. For PRp, assume without loss of generality that jobs are ordered 1, ..., n in an efficient ordering. Denote the transfer of job i ≠ n due to the Shapley value with the set of jobs N and the set of jobs N \ {n} as ti and t′i respectively.
The transfer of the last job is tn = (1/2) θn Σ_{j<n} pj. Now,
ti = (1/2)(θi Σ_{j<i} pj − pi Σ_{j>i} θj)
= (1/2)(θi Σ_{j<i} pj − pi Σ_{j>i:j≠n} θj) − (1/2) piθn
= t′i − (1/2) θn Σ_{j<n} pj · pi / Σ_{j<n} pj
= t′i − tn · pi / Σ_{j<n} pj.
A similar argument shows that the Shapley value rule satisfies PRθ.
This series of propositions leads us to our main result.
Theorem 1. Let ψ be an allocation rule. The following statements are equivalent:
1) For each q ∈ Q, ψ(q) selects all the allocations assigning jobs the cost shares implied by the Shapley value.
2) ψ satisfies efficiency, ECB, IFJp, and IPJθ.
3) ψ satisfies efficiency, ECB, IFJp, and PRθ.
4) ψ satisfies efficiency, ECB, PRp, and IPJθ.
Proof. The proof follows from Propositions 1, 2, 3, and 4.
5. DISCUSSIONS
5.1 A Reasonable Class of Cost Sharing Mechanisms
In this section, we will define a reasonable class of cost sharing mechanisms. We will show how these reasonable mechanisms lead to the Shapley value mechanism.
Definition 10. An allocation rule ψ is reasonable if for all q ∈ Q and (σ, t) ∈ ψ(q) we have, for all i ∈ N, ti = α (θi Σ_{j∈Pi(σ)} pj − pi Σ_{j∈Fi(σ)} θj), where 0 ≤ α ≤ 1.
A reasonable cost sharing mechanism says that every job should be paid a constant fraction of the difference between the waiting cost it incurs and the waiting cost it inflicts on other jobs. If α = 0, then every job bears its own cost. If α = 1, then every job gets compensated for its waiting cost but compensates others for the cost it inflicts on them. The Shapley value rule comes out as a result of ETE, as shown in the following proposition.
Proposition 5. Any efficient and reasonable allocation rule ψ that satisfies ETE is a rule implied by the Shapley value rule.
Proof. Consider a q ∈ Q in which pi = pj and θi = θj. Let (σ, t) ∈ ψ(q) and π be the resulting cost shares. From ETE, we get
πi = πj ⇒ ci(σ) − ti = cj(σ) − tj
⇒ piθi + (1 − α)Li + αRi = pjθj + (1 − α)Lj + αRj   (since ψ is efficient and reasonable)
⇒ (1 − α)(Li − Lj) = α(Rj − Ri)   (using pi = pj, θi = θj)
⇒ 1 − α = α   (using Li − Lj = Rj − Ri ≠ 0)
⇒ α = 1/2.
This gives us the Shapley value rule by Lemma 3.
5.2 Results on Envy
Chun [2] discusses a fairness condition called no-envy for the case when the processing times of all jobs are unity.
Definition 11. An allocation rule satisfies no-envy if for all q ∈ Q, (σ, t) ∈ ψ(q), and i, j ∈ N, we have πi ≤ ci(σij) − tj, where π is the cost share from the allocation (σ, t) and σij is the ordering obtained by swapping i and j.
From the result in [2], the Shapley value rule does not satisfy no-envy in our model either. To overcome this, Chun [2] introduces the notion of adjusted no-envy, which he shows is satisfied by the Shapley value rule when the processing times of all jobs are unity. Here, we show that adjusted no-envy continues to hold for the Shapley value rule in our model (when processing times need not be unity). As before, denote by σij the ordering obtained from an ordering σ by swapping the positions of i and j. For adjusted no-envy, if (σ, t) is an allocation for some q ∈ Q, let t^{ij}_i be the transfer of job i when the transfer of i is calculated with respect to the ordering σij. Observe that an allocation rule may not allow for the calculation of t^{ij}_i. For example, if ψ is efficient, then t^{ij}_i cannot be calculated if σij is not efficient. For simplicity, we state the definition of adjusted no-envy so as to apply to all such rules.
Definition 12. An allocation rule satisfies adjusted no-envy if for all q ∈ Q, (σ, t) ∈ ψ(q), and i, j ∈ N, we have πi ≤ ci(σij) − t^{ij}_i.
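The following small Python sketch (ours, purely illustrative; the instance and function names are made up) checks Definition 12 numerically for the Shapley value rule. It assumes that the transfer of a job with respect to an ordering is t_i = (1/2)(θ_i times the sum of preceding p_j minus p_i times the sum of following θ_j), which at an efficient ordering equals c_i(σ) − SV_i and is consistent with the computation in the proof of Proposition 6 below.

```python
def shapley_transfer(i, order, p, theta):
    """Transfer of job i with respect to `order` (assumed form):
    t_i = (theta_i * sum of preceding p_j - p_i * sum of following theta_j) / 2."""
    pos = {j: k for k, j in enumerate(order)}
    before = sum(p[j] for j in order if pos[j] < pos[i])
    after = sum(theta[j] for j in order if pos[j] > pos[i])
    return 0.5 * (theta[i] * before - p[i] * after)

def waiting_cost(i, order, p, theta):
    """c_i(sigma): theta_i times the total processing time of i and its predecessors."""
    pos = {j: k for k, j in enumerate(order)}
    return theta[i] * sum(p[j] for j in order if pos[j] <= pos[i])

def satisfies_adjusted_no_envy(p, theta, tol=1e-9):
    """Check pi_i <= c_i(sigma^{ij}) - t_i^{ij} for every ordered pair i, j (Definition 12)."""
    order = sorted(p, key=lambda i: theta[i] / p[i], reverse=True)
    share = {i: waiting_cost(i, order, p, theta) - shapley_transfer(i, order, p, theta) for i in p}
    for a in range(len(order)):
        for b in range(len(order)):
            if a == b:
                continue
            swapped = list(order)
            swapped[a], swapped[b] = swapped[b], swapped[a]
            i = order[a]
            adjusted = waiting_cost(i, swapped, p, theta) - shapley_transfer(i, swapped, p, theta)
            if share[i] > adjusted + tol:
                return False
    return True

if __name__ == "__main__":
    p = {1: 2.0, 2: 1.0, 3: 4.0}
    theta = {1: 6.0, 2: 1.0, 3: 8.0}
    print(satisfies_adjusted_no_envy(p, theta))   # expected: True, as Proposition 6 asserts
```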
Proposition 6. The Shapley value rule satisfies adjusted no-envy.
Proof. Without loss of generality, assume the efficient ordering of jobs is 1, ..., n. Consider two jobs i and i + k. From Lemma 3, SVi = piθi + (1/2)(Σ_{j<i} θipj + Σ_{j>i} θjpi). Let ˆπi be the cost share of i due to the adjusted transfer t^{i,i+k}_i in the ordering σ^{i,i+k}. Then
ˆπi = ci(σ^{i,i+k}) − t^{i,i+k}_i
= piθi + (1/2)(Σ_{j<i} θipj + θip_{i+k} + Σ_{i<j<i+k} θipj + Σ_{j>i} θjpi − θ_{i+k}pi − Σ_{i<j<i+k} θjpi)
= SVi + (1/2) Σ_{i<j≤i+k} (θipj − θjpi)
≥ SVi   (using the fact that θi/pi ≥ θj/pj for i < j).
6. CONCLUSION
We studied the problem of sharing costs for a job scheduling problem on a single server, when jobs have processing times and unit time waiting costs. We took a cooperative game theory approach and showed that the famous Shapley value rule satisfies many nice fairness properties. We characterized the Shapley value rule using different intuitive fairness axioms. In the future, we plan to further simplify some of the fairness axioms. Some initial simplifications already appear in [16], where we provide an alternative axiom to ECB and also discuss the implication of transfers between jobs (instead of transfers from jobs to a central server). We also plan to look at cost sharing mechanisms other than the Shapley value. Investigating the strategic power of jobs in such mechanisms is another line of future research.
7. REFERENCES
[1] Youngsub Chun. A Note on Maniquet's Characterization of the Shapley Value in Queueing Problems. Working Paper, Rochester University, 2004.
[2] Youngsub Chun. No-envy in Queuing Problems. Working Paper, Rochester University, 2004.
[3] Imma Curiel, Herbert Hamers, and Flip Klijn. Sequencing Games: A Survey. In Peter Borm and Hans Peters, editors, Chapters in Game Theory. Theory and Decision Library, Kluwer Academic Publishers, 2002.
[4] Imma Curiel, Giorgio Pederzoli, and Stef Tijs. Sequencing Games. European Journal of Operational Research, 40:344-351, 1989.
[5] Imma Curiel, Jos Potters, Rajendra Prasad, Stef Tijs, and Bart Veltman. Sequencing and Cooperation. Operations Research, 42(3):566-568, May-June 1994.
[6] Nikhil R. Devanur, Milena Mihail, and Vijay V. Vazirani. Strategyproof Cost-sharing Mechanisms for Set Cover and Facility Location Games. In Proceedings of the Fourth Annual ACM Conference on Electronic Commerce, 2003.
[7] Robert J. Dolan. Incentive Mechanisms for Priority Queueing Problems. Bell Journal of Economics, 9:421-436, 1978.
[8] Joan Feigenbaum, Christos Papadimitriou, and Scott Shenker. Sharing the Cost of Multicast Transmissions. In Proceedings of the Thirty-Second Annual ACM Symposium on Theory of Computing, 2000.
[9] Herbert Hamers, Jeroen Suijs, Stef Tijs, and Peter Borm. The Split Core for Sequencing Games. Games and Economic Behavior, 15:165-176, 1996.
[10] John C. Harsanyi. Contributions to the Theory of Games IV, chapter A Bargaining Model for Cooperative n-person Games. Princeton University Press, 1959. Editors: A. W. Tucker, R. D. Luce.
[11] Kamal Jain and Vijay Vazirani. Applications of Approximate Algorithms to Cooperative Games. In Proceedings of the 33rd Symposium on Theory of Computing (STOC '01), 2001.
[12] Kamal Jain and Vijay Vazirani. Equitable Cost Allocations via Primal-Dual Type Algorithms. In Proceedings of the 34th Symposium on Theory of Computing (STOC '02), 2002.
[13] Flip Klijn and Estela Sánchez. Sequencing Games without a Completely Specified Initial Order. Report in Statistics and Operations Research, Report 02-04, pages 1-17, 2002.
[14] Flip Klijn and Estela Sánchez. Sequencing Games without Initial Order.
Working Paper, Universitat Autònoma de Barcelona, July 2004.
[15] François Maniquet. A Characterization of the Shapley Value in Queueing Problems. Journal of Economic Theory, 109:90-103, 2003.
[16] Debasis Mishra and Bharath Rangarajan. Cost Sharing in a Job Scheduling Problem. Working Paper, CORE, 2005.
[17] Manipushpak Mitra. Essays on First Best Implementable Incentive Problems. Ph.D. Thesis, Indian Statistical Institute, New Delhi, 2000.
[18] Manipushpak Mitra. Mechanism Design in Queueing Problems. Economic Theory, 17(2):277-305, 2001.
[19] Manipushpak Mitra. Achieving the First Best in Sequencing Problems. Review of Economic Design, 7:75-91, 2002.
[20] Hervé Moulin. Handbook of Social Choice and Welfare, chapter Axiomatic Cost and Surplus Sharing. North-Holland, 2002. Editors: Arrow, Sen, Suzumura.
[21] Hervé Moulin. On Scheduling Fees to Prevent Merging, Splitting and Transferring of Jobs. Working Paper, Rice University, 2004.
[22] Hervé Moulin. Split-proof Probabilistic Scheduling. Working Paper, Rice University, 2004.
[23] Hervé Moulin and Rakesh Vohra. Characterization of Additive Cost Sharing Methods. Economics Letters, 80:399-407, 2003.
[24] Martin Pál and Éva Tardos. Group Strategyproof Mechanisms via Primal-Dual Algorithms. In Proceedings of the 44th Annual IEEE Symposium on Foundations of Computer Science (FOCS '03), 2003.
[25] Lloyd S. Shapley. Contributions to the Theory of Games II, chapter A Value for n-person Games, pages 307-317. Annals of Mathematics Studies, 1953. Editors: H. W. Kuhn, A. W. Tucker.
[26] Wayne E. Smith. Various Optimizers for Single-Stage Production. Naval Research Logistics Quarterly, 3:59-66, 1956.
[27] Jeroen Suijs. On Incentive Compatibility and Budget Balancedness in Public Decision Making. Economic Design, 2, 2002.
Cost Sharing in a Job Scheduling Problem Using the Shapley Value ABSTRACT A set of jobs need to be served by a single server which can serve only one job at a time. Jobs have processing times and incur waiting costs (linear in their waiting time). The jobs share their costs through compensation using monetary transfers. We characterize the Shapley value rule for this model using fairness axioms. Our axioms include a bound on the cost share of jobs in a group, efficiency, and some independence properties on the the cost share of a job. 1. INTRODUCTION A set of jobs need to be served by a server. The server can process only one job at a time. Each job has a finite processing time and a per unit time waiting cost. Efficient ordering of this queue directs us to serve the jobs in increasing order of the ratio of per unit time waiting cost and processing time. To compensate for waiting by jobs, monetary transfers to jobs are allowed. How should the jobs share the cost equitably amongst themselves (through transfers)? The problem of fair division of costs among agents in a queue has many practical applications. For example, computer programs are regularly scheduled on servers, data are scheduled to be transmitted over networks, jobs are scheduled in shop-floor on machines, and queues appear in many public services (post offices, banks). Study of queueing problems has attracted economists for a long time [7, 17]. Cost sharing is a fundamental problem in many settings on the Internet. Internet can be seen as a common resource shared by many users and the cost incured by using the resource needs to be shared in an equitable manner. The current surge in cost sharing literature from computer scientists validate this claim [8, 11, 12, 6, 24]. Internet has many settings in which our model of job scheduling appears and the agents waiting in a queue incur costs (jobs scheduled on servers, queries answered from a database, data scheduled to be transmitted over a fixed bandwidth network etc.). We hope that our analysis will give new insights on cost sharing problems of this nature. Recently, there has been increased interest in cost sharing methods with submodular cost functions [11, 12, 6, 24]. While many settings do have submodular cost functions (for example, multi-cast transmission games [8]), while the cost function of our game is supermodular. Also, such literature typically does not assume budget-balance (transfers adding up to zero), while it is an inherent feature of our model. A recent paper by Maniquet [15] is the closest to our model and is the motivation behind our work 1. Maniquet [15] studies a model where he assumes all processing times are unity. For such a model, he characterizes the Shapley value rule using classical fairness axioms. Chun [1] interprets the worth of a coalition of jobs in a different manner for the same model and derives a "reverse" rule. Chun characterizes this rule using similar fairness axioms. Chun [2] also studies the envy properties of these rules. Moulin [22, 21] studies the queueing problem from a strategic point view when per unit waiting costs are unity. Moulin introduces new concepts in the queueing settings such as splitting and merging of jobs, and ways to prevent them. Another stream of literature is on "sequencing games", first introduced by Curiel et al. [4]. For a detailed survey, refer to Curiel et al. [3]. Curiel et al. [4] defined sequencing games similar to our model, but in which an initial ordering of jobs is given. 
Besides, their notion of worth of a coalition is very different from the notions studied in Maniquet [15] and Chun [1] (these are the notions used in our work too). The particular notion of the worth of a coalition makes the sequencing game of Curiel et al. [4] convex, whereas our game is not convex and does not assume the presence of any initial order. In summary, the focus of this stream of 1The authors thank Fran ¸ cois Maniquet for several fruitful discussions. research is how to share the savings in costs from the initial ordering to the optimal ordering amongst jobs (also see Hamers et al. [9], Curiel et al. [5]). Recently, Klijn and S ´ anchez [13, 14] considered sequencing games without any initial ordering of jobs. They take two approaches to define the worth of coalitions. One of their approaches, called the tail game, is related to the reverse rule of Chun [1]. In the tail game, jobs in a coalition are served after the jobs not in the coalition are served. Klijn and S ´ anchez [14] showed that the tail game is balanced. Further, they provide expressions for the Shapley value in tail game in terms of marginal vectors and reversed marginal vectors. We provide a simpler expression of the Shapley value in the tail game, generalizing the result in Chun [1]. Klijn and S ´ anchez [13] study the core of this game in detail. Strategic aspects of queueing problems have also been researched. Mitra [19] studies the first best implementation in queueing models with generic cost functions. First best implementation means that there exists an efficient mechanism in which jobs in the queue have a dominant strategy to reveal their true types and their transfers add up to zero. Suijs [27] shows that if waiting costs of jobs are linear then first best implementation is possible. Mitra [19] shows that among a more general class of queueing problems first best implementation is possible if and only if the cost is linear. For another queueing model, Mitra [18] shows that first best implementation is possible if and only if the cost function satisfies a combinatorial property and an independence property. Moulin [22, 21] studies strategic concepts such as splitting and merging in queueing problems with unit per unit waiting costs. The general cost sharing literature is vast and has a long history. For a good survey, we refer to [20]. From the seminal work of Shapley [25] to recent works on cost sharing in multi-cast transmission and optimization problems [8, 6, 23] this area has attracted economists, computer scientists, and operations researchers. 1.1 Our Contribution Ours is the first model which considers cost sharing when both processing time and per unit waiting cost of jobs are present. We take a cooperative game theory approach and apply the classical Shapley value rule to the problem. We show that the Shapley value rule satisfies many intuitive fairness axioms. Due to two dimensional nature of our model and one dimensional nature of Maniquet's model [15], his axioms are insufficient to characterize the Shapley value in our setting. We introduce axioms such as independece of preceding jobs' unit waiting cost and independence of following jobs' processing time. A key axiom that we introduce gives us a bound on cost share of a job in a group of jobs which have the same ratio of unit time waiting cost and processing time (these jobs can be ordered in any manner between themseleves in an efficient ordering). 
If such a group consists of just one job, then the axiom says that such a job should at least pay his own processing cost (i.e., the cost it would have incurred if it was the only job in the queue). If there are multiple jobs in such a group, the probability of any two jobs from such a group inflicting costs on each other is same (21) in an efficient ordering. Depending on the ordering selected, one job inflicts cost on the other. Our fairness axiom says that each job should at least bear such expected costs. We characterize the Shapley value rule using these fairness axioms. We also extend the envy results in [2] to our setting and discuss a class of reasonable cost sharing mechanisms. 2. THE MODEL There are n jobs that need to be served by one server which can process only one job at a time. The set of jobs are denoted as N = {1,..., n}. a: N--+ N is an ordering of jobs in N and ai denotes the position of job i in the ordering a. Given an ordering a, define Fi (a) = {j E N: ai <aj} and Pi (a) = {j E N: ai> aj}. Every job i is identified by two parameters: (pi, θi). pi is the processing time and θi is the cost per unit waiting time of job i. Thus, a queueing problem is defined by a list q = (N, p, θ) E Q, where Q is the set of all possible lists. We will denote ` yi = θi pi. Given an ordering of jobs a, the cost incurred by job i is given by The total cost incurred by all jobs due to an ordering a can be written in two ways: (i) by summing the cost incurred by every job and (ii) by summing the costs inflicted by a job on other jobs with their own processing cost. An efficient ordering a * is the one which minimizes the total cost incurred by all jobs. So, C (N, a *) <C (N, a) ` d a E Σ. To achieve notational simplicity, we will write the total cost in an efficient ordering of jobs from N as C (N) whenever it is not confusing. Sometimes, we will deal with only a subset of jobs S C N. The ordering a will then be defined on jobs in S only and we will write the total cost from an efficient ordering of jobs in S as C (S). The following lemma shows that jobs are ordered in decreasing' y in an efficient ordering. This is also known as the weighted shortest processing time rule, first introduced by Smith [26]. PROOF. Assume for contradiction that the statment of the lemma is not true. This means, we can find two consecutive jobs i, j E S (a ∗ i = aj * + 1) such that ` yi> ` yj. Define a new ordering a by interchanging i and j in a *. The costs to jobs in S \ {i, j} is not changed from a * to a. The difference between total costs in a * and a is given by, C (S, a) − C (S, a *) = θjpi − θipj. From efficiency we get θjpi − θipj> 0. This gives us ` yj> ` yi, which is a contradiction. An allocation for q = (N, p, θ) E Q has two components: an ordering a and a transfer ti for every job i E N. ti denotes the payment received by job i. Given a transfer ti and an ordering a, the cost share of job i is defined as, LEMMA 2. Let (σ, t) be an efficient allocation and π be the vector of cost shares of jobs from this allocation. If σ * = 6 σ be an efficient ordering and t * i = ci (σ *) − πi ` d i E N, then (σ *, t *) is also an efficient allocation. PROOF. Since (σ, t) is efficient, iEN ti = 0. This gives iEN πi = C (N). Since σ * is an efficient ordering, iEN ci (σ *) = C (N). This means, iEN t * i = iEN [ci (σ *) − πi] = 0. So, (σ *, t *) is an efficient allocation. Depending on the transfers, the cost shares in different efficient allocations may differ. 
An allocation rule ψ associates with every q E Q a non-empty subset ψ (q) of allocations. An allocation (σ, t) is efficient for q = (N, p, θ) whenever σ is an efficient ordering and iEN ti = 0. The set of efficient orderings of q is denoted as Σ * (q) and σ * (q) will be used to refer to a typical element of the set. The following straightforward lemma says that for two different efficient orderings, the cost share in one efficient allocation is possible to achieve in the other by appropriately modifying the transfers. 3. COST SHARING USING THE SHAPLEY VALUE In this section, we define the coalitional cost of this game and analyze the solution proposed by the Shapley value. Given a queue q E Q, the cost of a coalition of S C N jobs in the queue is defined as the cost incurred by jobs in S if these are the only jobs served in the queue using an efficient ordering. Formally, the cost of a coalition S C N is, where σ * = σ * (S) is an efficient ordering considering jobs from S only. The worth of a coalition of S jobs is just − C (S). Maniquet [15] observes another equivalent way to define the worth of a coalition is using the dual function of the cost function C (·). Other interesting ways to define the worth of a coalition in such games is discussed by Chun [1], who assume that a coalition of jobs are served after the jobs not in the coalition are served. The Shapley value (or cost share) of a job i is defined as, The Shapley value allocation rule says that jobs are ordered using an efficient ordering and transfers are assigned to jobs such that the cost share of job i is given by Equation 1. where Li = θi jEPi (σ *) pj and Ri = pi jEFi (σ *) θj. PROOF. Another way to write the Shapley value formula is the following [10], where Δ (S) = C (S) if | S | = 1 and Δ (S) = C (S) − T (; S Δ (T). This gives Δ ({i}) = C ({i}) = piθi ` di E N. For any i, j E N with i = 6 j, we have Δ ({i, j}) = C ({i, j}) − C ({i}) − C ({j}) We will show by induction that Δ (S) = 0 if | S |> 2. For | S | = 3, let S = {i, j, k}. Without loss of generality, assume Now, assume for T C S, Δ (T) = 0 if | T |> 2. Without loss of generality assume that σ to be the identity mapping. Now, This proves that Δ (S) = 0 if | S |> 2. Using the Shapley value formula now, 4. AXIOMATIC CHARACTERIZATION OF THE SHAPLEY VALUE In this section, we will define serveral axioms on fairness and characterize the Shapley value using them. For a given q E Q, we will denote ψ (q) as the set of allocations from allocation rule ψ. Also, we will denote the cost share vector associated with an allocation rule (σ, t) as π and that with allocation rule (σ', t') as π' etc. . 4.1 The Fairness Axioms We will define three types of fairness axioms: (i) related to efficiency, (ii) related to equity, and (iii) related to independence. Efficiency Axioms We define two types of efficiency axioms. One related to efficiency which states that an efficient ordering should be selected and the transfers of jobs should add up to zero (budget balance). Definition 1. An allocation rule ψ satisfies efficiency if for every q E Q and (σ, t) E ψ (q), (σ, t) is an efficient allocation. The second axiom related to efficiency says that the allocation rule should not discriminate between two allocations which are equivalent to each other in terms of cost shares of jobs. Definition 2. An allocation rule ψ satisfies Pareto indifference if for every q E Q, (σ, t) E ψ (q), and (σ', t') E Σ (q), we have πi = π' i ` d i E N ⇒ (σ', t') E ψ (q). 
An implication of Pareto indifference axiom and Lemma 2 is that for every efficient ordering there is some set of transfers of jobs such that it is part of an efficient rule and the cost share of a job in all these allocations are same. Equity Axioms How should the cost be shared between two jobs if the jobs have some kind of similarity between them? Equity axioms provide us with fairness properties which help us answer this question. We provide five such axioms. Some of these axioms (for example anonymity, equal treatment of equals) are standard in the literature, while some are new. We start with a well known equity axiom called anonymity. Denote ρ: N--+ N as a permutation of elements in N. Let ρ (σ, t) denote the allocation obtained by permuting elements in σ and t according to ρ. Similarly, let ρ (p, θ) denote the new list of (p, θ) obtained by permuting elements of p and θ according to ρ. Our first equity axiom states that allocation rules should be immune to such permutation of data. Definition 3. An allocation rule ψ satisfies anonymity if for all q E Q, (σ, t) E ψ (q) and every permutation ρ, we then ρ (σ, t) E ψ (N, ρ (q)). The next equity axiom is classical in literature and says that two similar jobs should be compensated such that their cost shares are equal. This implies that if all the jobs are of same type, then jobs should equally share the total system cost. Definition 4. An allocation rule ψ satisfies equal treatment of equals (ETE) if for all q E Q, (σ, t) E ψ (q), i, j E N, then ETE directs us to share costs equally between jobs if they are of the same per unit waiting cost and processing time. But it is silent about the cost shares of two jobs i and j which satisfy pi θi = θj pj. We introduce a new axiom for this. If an efficient rule chooses σ such that σi <σj for some i, j E N, then job i is inflicting a cost of piθj on job j and job j is inflicting zero cost on job i. Define for some γ> 0, S (γ) = {i E N: γi = γ}. In an efficient rule, the elements in S (γ) can be ordered in any manner (in | S (γ) |! ways). If i, j E S (γ) then we have pjθi = piθj. Probability of σi <σj is 21 and so is the probability of σi> σj. The expected cost i inflicts on j is 21 piθj and j inflicts on i is 2 pjθi. Our next fairness axiom says that i and j should each be responsible for their own processing cost and this expected cost they inflict on each other. Arguing for every pair of jobs i, j E S (γ), we establish a bound on the cost share of jobs in S (γ). We impose this as an equity axiom below. Definition 5. An allocation rule satisfies expected cost bound (ECB) if for all q E Q, (σ, t) E ψ (q) with π being the resulting cost share, for any γ> 0, and for every i E S (γ), we have The central idea behind this axiom is that of "expected cost inflicted". If an allocation rule chooses multiple allocations, we can assign equal probabilities of selecting one of the allocations. In that case, the expected cost inflicted by a job i on another job j in the allocation rule can be calculated. Our axiom says that the cost share of a job should be at least its own processing cost and the total expected cost it inflicts on others. Note that the above bound poses no constraints on how the costs are shared among different groups. Also observe that if S (γ) contains just one job, ECB says that job should at least bear its own processing cost. A direct consequence of ECB is the following lemma. LEMMA 4. Let ψ be an efficient rule which satisfies ECB. 
For a q E Q if S (γ) = N, then for any (σ, t) E ψ (q) which gives a cost share of π, πi = piθi + 21 Li + Ri ` d i E N. PROOF. From ECB, we get πi> piθi +21 Li + Ri ` d i E N. Assume for contradiction that there exists j E N such that πj> pj θj + 21 Li + Ri. Using efficiency and the fact that iEN Li = iEN Ri, we get iEN πi = C (N)> iEN piθi + 21 iEN Li + Ri = C (N). This gives us a contradiction. Next, we introduce an axiom about sharing the transfer of a job between a set of jobs. In particular, if the last job quits the system, then the ordering need not change. But the transfer to the last job needs to be shared between the other jobs. This should be done in proportion to their processing times because every job influenced the last job based on its processing time. Definition 6. An allocation rule ψ satisfies proportionate responsibility of p (PRp) if for all q E Q, for all (σ, t) E ψ (q), k E N such that σk = | N |, q' = (N \ {k}, p', θ') E Q, such that for all i E N \ {k}: θ' i = θi, p' i = pi, there exists (σ', t') E ψ (q') such that for all i E N \ {k}: σ' i = σi and An analogous fairness axiom results if we remove the job from the beginning of the queue. Since the presence of the first job influenced each job depending on their θ values, its transfer needs to be shared in proportion to θ values. Definition 7. An allocation rule ψ satisfies proportionate responsibility of θ (PRθ) if for all q E Q, for all (σ, t) E ψ (q), k E N such that σk = 1, q' = (N \ {k}, p', θ') E Q, such that for all i E N \ {k}: θ' i = θi, p' i = pi, there exists (σ', t') E ψ (q') such that for all i E N \ {k}: σ' i = σi and The proportionate responsibility axioms are generalizations of equal responsibility axioms introduced by Maniquet [15]. Independence Axioms The waiting cost of a job does not depend on the per unit waiting cost of its preceding jobs. Similarly, the waiting cost inflicted by a job to its following jobs is independent of the processing times of the following jobs. These independence properties should be carried over to the cost sharing rules. This gives us two independence axioms. Definition 8. An allocation rule ψ satisfies independence of preceding jobs' θ (IPJθ) if for all q = (N, p, θ), q' = (N, p', θ') E Q, (σ, t) E ψ (q), (σ', t') E ψ (q'), if for all i E N \ {k}: θi = θ' i, pi = p' i and γk <γ ` k, pk = p ` k, then for all j E N such that σj> σk: πj = π ` j, where π is the cost share in (σ, t) and π' is the cost share in (σ', t'). Definition 9. An allocation rule ψ satisfies independence of following jobs' p (IFJp) if for all q = (N, p, θ), q' = (N, p', θ') E Q, (σ, t) E ψ (q), (σ', t') E ψ (q'), if for all i E N \ {k}: θi = θ' i, pi = p' i and γk> γ ` k, θk = θ ` k, then for all j E N such that σj <σk: πj = π ` j, where π is the cost share in (σ, t) and π' is the cost share in (σ', t'). 4.2 The Characterization Results Having stated the fairness axioms, we propose three different ways to characterize the Shapley value rule using these axioms. All our characterizations involve efficiency and ECB. But if we have IPJθ, we either need IFJp or PRp. Similarly if we have IFJp, we either need IPJθ or PRθ. PROOF. Define for any i, j E N, θij = γipj and pij = θj. Assume without loss of generality that σ is an efficient γi ordering with σi = i ` d i E N. Consider the following q' = (N, p', θ') corresponding to job i with pj' = pj if j <i and p' j = pij if j> i, θ' j = θij if j <i and θ' j = θj if j> i. Observe that all jobs have the same γ: γi. 
By Lemma 2 and efficiency, (σ, t') E ψ (q') for some set of transfers t'. Using Lemma 4, we get cost share of i from (σ, t') as πi = piθi + 21 Li + Ri. Now, for any j <i, if we change θ' j to θj without changing processing time, the new γ of j is γj> γi. Applying IPJθ, the cost share of job i should not change. Similarly, for any job j> i, if we change p' j to pj without changing θj, the new γ of j is γj <γi. Applying IFJp, the cost share of job i should not change. Applying this procedure for every j <i with IPJθ and for every j> i with IFJp, we reach q = (N, p, θ) and the payoff of i does not change from πi. Using this argument for every i E N and using the expression for the Shapley value in Lemma 3, we get the Shapley value rule. It is possible to replace one of the independence axioms with an equity axiom on sharing the transfer of a job. This is shown in Propositions 2 and 3. PROOF. As in the proof of Proposition 1, define θij = γipj ` d i, j E N. Assume without loss of generality that σ is an efficient ordering with σi = i ` d i E N. Consider a queue with jobs in set K = {1,..., i, i + 1}, where i <n. Define q' = (K, p, θ'), where θ' j = θi +1 ordering in q" and by IPJθ the cost share of i + 1 remains πi +1. In q"' = (K \ {i + 1}, p, θ"), we can calculate the cost share of job i using ECB and Lemma 4 as πi = piθi + 2 j <i pjθi. So, using PRp we get the new cost share of job Now, we can set K = K U {i + 2}. As before, we can find cost share of i + 2 in this queue as πi +2 = pi +2 θi +2 + of job i in the new queue as πi = piθi + 21 j <ipjθi + piθi +1 + piθi +2. This process can be repeated till we add job n at which point cost share of i is piθi + 21 j <i pjθi + j> i piθj. Then, we can adjust the θ of preceding jobs of i to their original value and applying IPJθ, the payoffs of jobs i through n will not change. This gives us the Shapley values of jobs i through n. Setting i = 1, we get cost shares of all the jobs from ψ as the Shapley value. PROOF. The proof mirrors the proof of Proposition 2. We provide a short sketch. Analogous to the proof of Proposition 2, θs are kept equal to original data and processing times are initialized to pi +1 j. This allows us to use IFJp. Also, contrast to Proposition 2, we consider K = {i, i + 1,..., n} and repeatedly add jobs to the beginning of the queue maintaining the same efficient ordering. So, we add the cost components of preceding jobs to the cost share of jobs in each iteration and converge to the Shapley value rule. The next proposition shows that the Shapley value rule satisfies all the fairness axioms discussed. PROOF. The Shapley value rule chooses an efficient ordering and by definition the payments add upto zero. So, it satisfies efficiency. The Shapley value assigns same cost share to a job irrespective of the efficient ordering chosen. So, it is pareto indifferent. The Shapley value is anonymous because the particular index of a job does not effect his ordering or cost share. For ETE, consider two jobs i, j E N such that pi = pj and θi = θj. Without loss of generality assume the efficient ordering to be 1,..., i,..., j,..., n. Now, the Shapley value of job i is = piθi + 21 j <ipjθi + 5. DISCUSSIONS 5.1 A Reasonable Class of Cost Sharing Mechanisms In this section, we will define a reasonable class of cost sharing mechanisms. We will show how these reasonable mechanisms lead to the Shapley value mechanism. The Shapley value satisfies ECB by its expression in Lemma 3. 
Consider any job i, in an efficient ordering σ, if we increase the value of γj for some j = 6 i such that σj> σi, then the set Pi (preceding jobs) does not change in the new efficient ordering. If γj is changed such that pj remains the same, then the expression jEPi θipj is unchanged. If (p, θ) values of no other jobs are changed, then the Shapley value is unchanged by increasing γj for some j E Pi while keeping pj unchanged. Thus, the Shapley value rule satisfies IPJθ. An analogous argument shows that the Shapley value rule satisfies IFJp. For PRp, assume without loss of generality that jobs are ordered 1,..., n in an efficient ordering. Denote the transfer of job i = 6 n due to the Shapley value with set of jobs N and set of jobs N \ {n} as ti and t0i respectively. Transfer of last job is tn = 21θn j <npj. Now, A similar argument shows that the Shapley value rule satisfies PRθ. These series of propositions lead us to our main result. 1) For each q E Q, ψ (q) selects all the allocation assigning jobs cost shares implied by the Shapley value. 2) ψ satisfies efficiency, ECB, IFJp, and IPJθ. 3) ψ satisfies efficiency, ECB, IFJp, and PRθ. 4) ψ satisfies efficiency, ECB, PRp, and IPJθ. PROOF. The proof follows from Propositions 1, 2, 3, and 4. where 0 <α <1. The reasonable cost sharing mechanism says that every job should be paid a constant fraction of the difference between the waiting cost he incurs and the waiting cost he inflicts on other jobs. If α = 0, then every job bears its own cost. If α = 1, then every job gets compensated for its waiting cost but compensates others for the cost he inflicts on others. The Shapley value rule comes as a result of ETE as shown in the following proposition. PROOF. Consider a q E Q in which pi = pj and θi = θj. Let (σ, t) E ψ (q) and π be the resulting cost shares. From ETE, we get, (Using pi = pj, θi = θj) te 1--α = α (Using Li--Lj = Rj--Ri = 6 0) te α = 21. This gives us the Shapley value rule by Lemma 3. 5.2 Results on Envy Chun [2] discusses a fariness condition called no-envy for the case when processing times of all jobs are unity. Definition 11. An allocation rule satisfies no-envy if for all q E Q, (σ, t) E ψ (q), and i, j E N, we have πi <ci (σij)--tj, where π is the cost share from allocation rule (σ, t) and σij is the ordering obtaining by swapping i and j. From the result in [2], the Shapley value rule does not satisfy no-envy in our model also. To overcome this, Chun [2] introduces the notion of adjusted no-envy, which he shows is satisfied in the Shapley value rule when processing times of all jobs are unity. Here, we show that adjusted envy continues to hold in the Shapley value rule in our model (when processing times need not be unity). As before denote σij be an ordering where the position of i and j is swapped from an ordering σ. For adjusted noenvy, if (σ, t) is an allocation for some q E Q, let tij be the transfer of job i when the transfer of i is calculated with respect to ordering σij. Observe that an allocation may not allow for calculation of tij. For example, if ψ is efficient, then tij cannot be calculated if σij is also not efficient. For simplicity, we state the definition of adjusted no-envy to apply to all such rules. Definition 12. An allocation rule satisfies adjusted noenvy if for all q E Q, (σ, t) E ψ (q), and i, j E N, we have PROPOSITION 6. The Shapley value rule satisfies adjusted no-envy. PROOF. Without loss of generality, assume efficient ordering of jobs is: 1,..., n. Consider two jobs i and i + k. 
From Lemma 3, ... 6. CONCLUSION We studied the problem of sharing costs for a job scheduling problem on a single server, when jobs have processing times and unit time waiting costs. We took a cooperative game theory approach and showed that the famous Shapley value rule satisfies many nice fairness properties. We characterized the Shapley value rule using different intuitive fairness axioms. In the future, we plan to further simplify some of the fairness axioms. Some initial simplifications already appear in [16], where we provide an alternative axiom to ECB and also discuss the implication of transfers between jobs (instead of transfers from jobs to a central server). We also plan to look at cost sharing mechanisms other than the Shapley value. Investigating the strategic power of jobs in such mechanisms is another line of future research.
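To make the closed form used throughout the proofs above concrete, the following is a minimal sketch in Python of the Shapley value cost shares for this model. It assumes an efficient ordering serves jobs in non-increasing order of θ_i/p_i and uses the reconstructed expression π_i = p_iθ_i + (1/2)(L_i + R_i); the function and variable names are ours, not the paper's.

```python
from fractions import Fraction

def shapley_cost_shares(p, theta):
    """Shapley value cost shares for the queueing model:
    pi_i = p_i*theta_i + (1/2)*(L_i + R_i), where, in an efficient ordering,
    L_i = theta_i * sum of p_j over jobs j served before i and
    R_i = p_i * sum of theta_j over jobs j served after i."""
    n = len(p)
    # Efficient ordering: non-increasing ratio theta_i / p_i (ties broken by index).
    order = sorted(range(n), key=lambda i: Fraction(theta[i], p[i]), reverse=True)
    shares = [Fraction(0)] * n
    for pos, i in enumerate(order):
        L = theta[i] * sum(p[j] for j in order[:pos])       # cost preceding jobs inflict on i
        R = p[i] * sum(theta[j] for j in order[pos + 1:])   # cost i inflicts on following jobs
        shares[i] = p[i] * theta[i] + Fraction(1, 2) * (L + R)
    return order, shares

if __name__ == "__main__":
    # Three jobs with processing times p and per unit time waiting costs theta.
    p, theta = [2, 1, 3], [4, 3, 3]
    order, shares = shapley_cost_shares(p, theta)
    print("efficient ordering:", order)
    print("cost shares:", shares)
    # Budget balance: the shares add up to the total cost of the efficient ordering.
    total = sum(theta[i] * sum(p[j] for j in order[:order.index(i) + 1]) for i in order)
    assert sum(shares) == total
```

On the example above the efficient ordering is job 1, job 0, job 2, and the shares sum to the minimum total cost, so the transfers implied by the rule balance.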
Cost Sharing in a Job Scheduling Problem Using the Shapley Value ABSTRACT A set of jobs needs to be served by a single server which can serve only one job at a time. Jobs have processing times and incur waiting costs (linear in their waiting time). The jobs share their costs through compensation using monetary transfers. We characterize the Shapley value rule for this model using fairness axioms. Our axioms include a bound on the cost share of jobs in a group, efficiency, and some independence properties on the cost share of a job. 1. INTRODUCTION A set of jobs needs to be served by a server. The server can process only one job at a time. Each job has a finite processing time and a per unit time waiting cost. Efficient ordering of this queue directs us to serve the jobs in decreasing order of the ratio of per unit time waiting cost to processing time. To compensate for waiting by jobs, monetary transfers to jobs are allowed. How should the jobs share the cost equitably amongst themselves (through transfers)? The problem of fair division of costs among agents in a queue has many practical applications. For example, computer programs are regularly scheduled on servers, data are scheduled to be transmitted over networks, jobs are scheduled on shop-floor machines, and queues appear in many public services (post offices, banks). Study of queueing problems has attracted economists for a long time [7, 17]. Cost sharing is a fundamental problem in many settings on the Internet. The Internet can be seen as a common resource shared by many users, and the cost incurred by using the resource needs to be shared in an equitable manner. The current surge in cost sharing literature from computer scientists validates this claim [8, 11, 12, 6, 24]. The Internet has many settings in which our model of job scheduling appears and the agents waiting in a queue incur costs (jobs scheduled on servers, queries answered from a database, data scheduled to be transmitted over a fixed bandwidth network, etc.). We hope that our analysis will give new insights on cost sharing problems of this nature. Recently, there has been increased interest in cost sharing methods with submodular cost functions [11, 12, 6, 24]. While many settings do have submodular cost functions (for example, multi-cast transmission games [8]), the cost function of our game is supermodular. Also, such literature typically does not assume budget-balance (transfers adding up to zero), while it is an inherent feature of our model. A recent paper by Maniquet [15] is the closest to our model and is the motivation behind our work 1. Maniquet [15] studies a model where he assumes all processing times are unity. For such a model, he characterizes the Shapley value rule using classical fairness axioms. Chun [1] interprets the worth of a coalition of jobs in a different manner for the same model and derives a "reverse" rule. Chun characterizes this rule using similar fairness axioms. Chun [2] also studies the envy properties of these rules. Moulin [22, 21] studies the queueing problem from a strategic point of view when per unit waiting costs are unity. Moulin introduces new concepts in the queueing settings such as splitting and merging of jobs, and ways to prevent them. Another stream of literature is on "sequencing games", first introduced by Curiel et al. [4]. For a detailed survey, refer to Curiel et al. [3]. Curiel et al. [4] defined sequencing games similar to our model, but in which an initial ordering of jobs is given.
Besides, their notion of the worth of a coalition is very different from the notions studied in Maniquet [15] and Chun [1] (these are the notions used in our work too). The particular notion of the worth of a coalition makes the sequencing game of Curiel et al. [4] convex, whereas our game is not convex and does not assume the presence of any initial order. In summary, the focus of this stream of research is how to share the savings in costs from the initial ordering to the optimal ordering amongst jobs (also see Hamers et al. [9], Curiel et al. [5]). (Footnote 1: The authors thank François Maniquet for several fruitful discussions.) Recently, Klijn and Sánchez [13, 14] considered sequencing games without any initial ordering of jobs. They take two approaches to define the worth of coalitions. One of their approaches, called the tail game, is related to the reverse rule of Chun [1]. In the tail game, jobs in a coalition are served after the jobs not in the coalition are served. Klijn and Sánchez [14] showed that the tail game is balanced. Further, they provide expressions for the Shapley value in the tail game in terms of marginal vectors and reversed marginal vectors. We provide a simpler expression of the Shapley value in the tail game, generalizing the result in Chun [1]. Klijn and Sánchez [13] study the core of this game in detail. Strategic aspects of queueing problems have also been researched. Mitra [19] studies first best implementation in queueing models with generic cost functions. First best implementation means that there exists an efficient mechanism in which jobs in the queue have a dominant strategy to reveal their true types and their transfers add up to zero. Suijs [27] shows that if waiting costs of jobs are linear then first best implementation is possible. Mitra [19] shows that among a more general class of queueing problems first best implementation is possible if and only if the cost is linear. For another queueing model, Mitra [18] shows that first best implementation is possible if and only if the cost function satisfies a combinatorial property and an independence property. Moulin [22, 21] studies strategic concepts such as splitting and merging in queueing problems with unit per unit waiting costs. The general cost sharing literature is vast and has a long history. For a good survey, we refer to [20]. From the seminal work of Shapley [25] to recent works on cost sharing in multi-cast transmission and optimization problems [8, 6, 23], this area has attracted economists, computer scientists, and operations researchers. 1.1 Our Contribution Ours is the first model which considers cost sharing when both the processing time and the per unit waiting cost of jobs are present. We take a cooperative game theory approach and apply the classical Shapley value rule to the problem. We show that the Shapley value rule satisfies many intuitive fairness axioms. Due to the two-dimensional nature of our model and the one-dimensional nature of Maniquet's model [15], his axioms are insufficient to characterize the Shapley value in our setting. We introduce axioms such as independence of preceding jobs' unit waiting cost and independence of following jobs' processing time. A key axiom that we introduce gives us a bound on the cost share of a job in a group of jobs which have the same ratio of unit time waiting cost and processing time (these jobs can be ordered in any manner between themselves in an efficient ordering).
If such a group consists of just one job, then the axiom says that such a job should at least pay his own processing cost (i.e., the cost it would have incurred if it were the only job in the queue). If there are multiple jobs in such a group, the probability of any two jobs from such a group inflicting costs on each other is the same (1/2) in an efficient ordering. Depending on the ordering selected, one job inflicts cost on the other. Our fairness axiom says that each job should at least bear such expected costs. We characterize the Shapley value rule using these fairness axioms. We also extend the envy results in [2] to our setting and discuss a class of reasonable cost sharing mechanisms. 2. THE MODEL 3. COST SHARING USING THE SHAPLEY VALUE 4. AXIOMATIC CHARACTERIZATION OF THE SHAPLEY VALUE 4.1 The Fairness Axioms Efficiency Axioms Equity Axioms Independence Axioms 4.2 The Characterization Results 5. DISCUSSIONS 5.1 A Reasonable Class of Cost Sharing Mechanisms The Shapley value satisfies ECB by its expression in Lemma 3. This series of propositions leads us to our main result. (Using L_i − L_j = R_j − R_i ≠ 0) 5.2 Results on Envy 6. CONCLUSION We studied the problem of sharing costs for a job scheduling problem on a single server, when jobs have processing times and unit time waiting costs. We took a cooperative game theory approach and showed that the famous Shapley value rule satisfies many nice fairness properties. We characterized the Shapley value rule using different intuitive fairness axioms. In the future, we plan to further simplify some of the fairness axioms. Some initial simplifications already appear in [16], where we provide an alternative axiom to ECB and also discuss the implication of transfers between jobs (instead of transfers from jobs to a central server). We also plan to look at cost sharing mechanisms other than the Shapley value. Investigating the strategic power of jobs in such mechanisms is another line of future research.
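The defining formula of the "reasonable" α-family referred to above survives only in garbled form in the extracted text, so the following LaTeX sketch reconstructs, under the assumption that job i receives the transfer t_i = α(L_i − R_i), how the equal treatment of equals (ETE) axiom forces α = 1/2 and hence the Shapley value shares; this is an interpretation of the verbal description, not necessarily the authors' exact formulation.

```latex
% Reconstructed sketch; assumed transfer form t_i = \alpha (L_i - R_i), 0 \le \alpha \le 1.
\begin{align*}
  \pi_i &= \underbrace{p_i\theta_i + L_i}_{\text{cost incurred}}
           \;-\; \underbrace{\alpha\,(L_i - R_i)}_{\text{transfer received}}
        \;=\; p_i\theta_i + (1-\alpha)L_i + \alpha R_i .\\
  \intertext{For two jobs $i,j$ with $p_i=p_j$ and $\theta_i=\theta_j$, ETE requires $\pi_i=\pi_j$:}
  (1-\alpha)L_i + \alpha R_i &= (1-\alpha)L_j + \alpha R_j
  \;\Longrightarrow\; (1-\alpha)(L_i - L_j) = \alpha\,(R_j - R_i).\\
  \intertext{Since $L_i - L_j = R_j - R_i \neq 0$ for such a pair, this forces}
  1-\alpha &= \alpha \;\Longrightarrow\; \alpha = \tfrac{1}{2},
  \qquad\text{so } \pi_i = p_i\theta_i + \tfrac{1}{2}\,(L_i + R_i)\ \text{(the Shapley value share).}
\end{align*}
```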
Cost Sharing in a Job Scheduling Problem Using the Shapley Value ABSTRACT A set of jobs needs to be served by a single server which can serve only one job at a time. Jobs have processing times and incur waiting costs (linear in their waiting time). The jobs share their costs through compensation using monetary transfers. We characterize the Shapley value rule for this model using fairness axioms. Our axioms include a bound on the cost share of jobs in a group, efficiency, and some independence properties on the cost share of a job. 1. INTRODUCTION A set of jobs needs to be served by a server. The server can process only one job at a time. Each job has a finite processing time and a per unit time waiting cost. Efficient ordering of this queue directs us to serve the jobs in decreasing order of the ratio of per unit time waiting cost to processing time. To compensate for waiting by jobs, monetary transfers to jobs are allowed. How should the jobs share the cost equitably amongst themselves (through transfers)? The problem of fair division of costs among agents in a queue has many practical applications. Study of queueing problems has attracted economists for a long time [7, 17]. Cost sharing is a fundamental problem in many settings on the Internet. The Internet can be seen as a common resource shared by many users, and the cost incurred by using the resource needs to be shared in an equitable manner. The current surge in cost sharing literature from computer scientists validates this claim [8, 11, 12, 6, 24]. We hope that our analysis will give new insights on cost sharing problems of this nature. Recently, there has been increased interest in cost sharing methods with submodular cost functions [11, 12, 6, 24]. While many settings do have submodular cost functions (for example, multi-cast transmission games [8]), the cost function of our game is supermodular. Maniquet [15] studies a model where he assumes all processing times are unity. For such a model, he characterizes the Shapley value rule using classical fairness axioms. Chun [1] interprets the worth of a coalition of jobs in a different manner for the same model and derives a "reverse" rule. Chun characterizes this rule using similar fairness axioms. Chun [2] also studies the envy properties of these rules. Moulin [22, 21] studies the queueing problem from a strategic point of view when per unit waiting costs are unity. Moulin introduces new concepts in the queueing settings such as splitting and merging of jobs, and ways to prevent them. Another stream of literature is on "sequencing games", first introduced by Curiel et al. [4]. For a detailed survey, refer to Curiel et al. [3]. Curiel et al. [4] defined sequencing games similar to our model, but in which an initial ordering of jobs is given. The focus of this stream of research is how to share the savings in costs from the initial ordering to the optimal ordering amongst jobs (also see Hamers et al. [9], Curiel et al. [5]). Recently, Klijn and Sánchez [13, 14] considered sequencing games without any initial ordering of jobs. They take two approaches to define the worth of coalitions. One of their approaches, called the tail game, is related to the reverse rule of Chun [1]. In the tail game, jobs in a coalition are served after the jobs not in the coalition are served. Klijn and Sánchez [14] showed that the tail game is balanced. Further, they provide expressions for the Shapley value in the tail game in terms of marginal vectors and reversed marginal vectors.
We provide a simpler expression of the Shapley value in the tail game, generalizing the result in Chun [1]. Klijn and Sánchez [13] study the core of this game in detail. Strategic aspects of queueing problems have also been researched. Mitra [19] studies first best implementation in queueing models with generic cost functions. Suijs [27] shows that if waiting costs of jobs are linear then first best implementation is possible. Mitra [19] shows that among a more general class of queueing problems first best implementation is possible if and only if the cost is linear. For another queueing model, Mitra [18] shows that first best implementation is possible if and only if the cost function satisfies a combinatorial property and an independence property. Moulin [22, 21] studies strategic concepts such as splitting and merging in queueing problems with unit per unit waiting costs. The general cost sharing literature is vast and has a long history. For a good survey, we refer to [20]. 1.1 Our Contribution Ours is the first model which considers cost sharing when both the processing time and the per unit waiting cost of jobs are present. We take a cooperative game theory approach and apply the classical Shapley value rule to the problem. We show that the Shapley value rule satisfies many intuitive fairness axioms. Due to the two-dimensional nature of our model and the one-dimensional nature of Maniquet's model [15], his axioms are insufficient to characterize the Shapley value in our setting. We introduce axioms such as independence of preceding jobs' unit waiting cost and independence of following jobs' processing time. A key axiom that we introduce gives us a bound on the cost share of a job in a group of jobs which have the same ratio of unit time waiting cost and processing time (these jobs can be ordered in any manner between themselves in an efficient ordering). If such a group consists of just one job, then the axiom says that such a job should at least pay his own processing cost (i.e., the cost it would have incurred if it were the only job in the queue). If there are multiple jobs in such a group, the probability of any two jobs from such a group inflicting costs on each other is the same (1/2) in an efficient ordering. Depending on the ordering selected, one job inflicts cost on the other. Our fairness axiom says that each job should at least bear such expected costs. We characterize the Shapley value rule using these fairness axioms. We also extend the envy results in [2] to our setting and discuss a class of reasonable cost sharing mechanisms. 6. CONCLUSION We studied the problem of sharing costs for a job scheduling problem on a single server, when jobs have processing times and unit time waiting costs. We took a cooperative game theory approach and showed that the famous Shapley value rule satisfies many nice fairness properties. We characterized the Shapley value rule using different intuitive fairness axioms. In the future, we plan to further simplify some of the fairness axioms. We also plan to look at cost sharing mechanisms other than the Shapley value. Investigating the strategic power of jobs in such mechanisms is another line of future research.
C-69
pTHINC: A Thin-Client Architecture for Mobile Wireless Web
Although web applications are gaining popularity on mobile wireless PDAs, web browsers on these systems can be quite slow and often lack adequate functionality to access many web sites. We have developed pTHINC, a PDA thinclient solution that leverages more powerful servers to run full-function web browsers and other application logic, then sends simple screen updates to the PDA for display. pTHINC uses server-side screen scaling to provide high-fidelity display and seamless mobility across a broad range of different clients and screen sizes, including both portrait and landscape viewing modes. pTHINC also leverages existing PDA control buttons to improve system usability and maximize available screen resolution for application display. We have implemented pTHINC on Windows Mobile and evaluated its performance on mobile wireless devices. Our results compared to local PDA web browsers and other thin-client approaches demonstrate that pTHINC provides superior web browsing performance and is the only PDA thin client that effectively supports crucial browser helper applications such as video playback.
[ "pthinc", "thin-client", "mobil", "web applic", "mobil wireless pda", "web browser", "function", "pda thinclient solut", "seamless mobil", "system usabl", "screen resolut", "local pda web browser", "web brows perform", "crucial browser helper applic", "video playback", "full-function web browser", "high-fidel displai", "thin-client comput", "remot displai", "pervas web" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "M", "M", "M", "U", "M" ]
pTHINC: A Thin-Client Architecture for Mobile Wireless Web Joeng Kim, Ricardo A. Baratto, and Jason Nieh Department of Computer Science Columbia University, New York, NY, USA {jk2438, ricardo, nieh}@cs. columbia.edu ABSTRACT Although web applications are gaining popularity on mobile wireless PDAs, web browsers on these systems can be quite slow and often lack adequate functionality to access many web sites. We have developed pTHINC, a PDA thinclient solution that leverages more powerful servers to run full-function web browsers and other application logic, then sends simple screen updates to the PDA for display. pTHINC uses server-side screen scaling to provide high-fidelity display and seamless mobility across a broad range of different clients and screen sizes, including both portrait and landscape viewing modes. pTHINC also leverages existing PDA control buttons to improve system usability and maximize available screen resolution for application display. We have implemented pTHINC on Windows Mobile and evaluated its performance on mobile wireless devices. Our results compared to local PDA web browsers and other thin-client approaches demonstrate that pTHINC provides superior web browsing performance and is the only PDA thin client that effectively supports crucial browser helper applications such as video playback. Categories and Subject Descriptors: C.2.4 ComputerCommunication-Networks: Distributed Systems - client/ server General Terms: Design, Experimentation, Performance 1. INTRODUCTION The increasing ubiquity of wireless networks and decreasing cost of hardware is fueling a proliferation of mobile wireless handheld devices, both as standalone wireless Personal Digital Assistants (PDA) and popular integrated PDA/cell phone devices. These devices are enabling new forms of mobile computing and communication. Service providers are leveraging these devices to deliver pervasive web access, and mobile web users already often use these devices to access web-enabled information such as news, email, and localized travel guides and maps. It is likely that within a few years, most of the devices accessing the web will be mobile. Users typically access web content by running a web browser and associated helper applications locally on the PDA. Although native web browsers exist for PDAs, they deliver subpar performance and have a much smaller feature set and more limited functionality than their desktop computing counterparts [10]. As a result, PDA web browsers are often not able to display web content from web sites that leverage more advanced web technologies to deliver a richer web experience. This fundamental problem arises for two reasons. First, because PDAs have a completely different hardware/software environment from traditional desktop computers, web applications need to be rewritten and customized for PDAs if at all possible, duplicating development costs. Because the desktop application market is larger and more mature, most development effort generally ends up being spent on desktop applications, resulting in greater functionality and performance than their PDA counterparts. Second, PDAs have a more resource constrained environment than traditional desktop computers to provide a smaller form factor and longer battery life. Desktop web browsers are large, complex applications that are unable to run on a PDA. Instead, developers are forced to significantly strip down these web browsers to provide a usable PDA web browser, thereby crippling PDA browser functionality. 
Thin-client computing provides an alternative approach for enabling pervasive web access from handheld devices. A thin-client computing system consists of a server and a client that communicate over a network using a remote display protocol. The protocol enables graphical displays to be virtualized and served across a network to a client device, while application logic is executed on the server. Using the remote display protocol, the client transmits user input to the server, and the server returns screen updates of the applications from the server to the client. Using a thin-client model for mobile handheld devices, PDAs can become simple stateless clients that leverage the remote server capabilities to execute web browsers and other helper applications. The thin-client model provides several important benefits for mobile wireless web. First, standard desktop web applications can be used to deliver web content to PDAs without rewriting or adapting applications to execute on a PDA, reducing development costs and leveraging existing software investments. Second, complex web applications can be executed on powerful servers instead of running stripped down versions on more resource constrained PDAs, providing greater functionality and better performance [10]. Third, web applications can take advantage of servers with faster networks and better connectivity, further boosting application performance. Fourth, PDAs can be even simpler devices since they do not need to perform complex application logic, potentially reducing energy consumption and extend143 ing battery life. Finally, PDA thin clients can be essentially stateless appliances that do not need to be backed up or restored, require almost no maintenance or upgrades, and do not store any sensitive data that can be lost or stolen. This model provides a viable avenue for medical organizations to comply with HIPAA regulations [6] while embracing mobile handhelds in their day to day operations. Despite these potential advantages, thin clients have been unable to provide the full range of these benefits in delivering web applications to mobile handheld devices. Existing thin clients were not designed for PDAs and do not account for important usability issues in the context of small form factor devices, resulting in difficulty in navigating displayed web content. Furthermore, existing thin clients are ineffective at providing seamless mobility across the heterogeneous mix of device display sizes and resolutions. While existing thin clients can already provide faster performance than native PDA web browsers in delivering HTML web content, they do not effectively support more display-intensive web helper applications such as multimedia video, which is increasingly an integral part of available web content. To harness the full potential of thin-client computing in providing mobile wireless web on PDAs, we have developed pTHINC (PDA THin-client InterNet Computing). pTHINC builds on our previous work on THINC [1] to provide a thinclient architecture for mobile handheld devices. pTHINC virtualizes and resizes the display on the server to efficiently deliver high-fidelity screen updates to a broad range of different clients, screen sizes, and screen orientations, including both portrait and landscape viewing modes. This enables pTHINC to provide the same persistent web session across different client devices. 
For example, pTHINC can provide the same web browsing session appropriately scaled for display on a desktop computer and a PDA so that the same cookies, bookmarks, and other meta-data are continuously available on both machines simultaneously. pTHINC``s virtual display approach leverages semantic information available in display commands, and client-side video hardware to provide more efficient remote display mechanisms that are crucial for supporting more display-intensive web applications. Given limited display resolution on PDAs, pTHINC maximizes the use of screen real estate for remote display by moving control functionality from the screen to readily available PDA control buttons, improving system usability. We have implemented pTHINC on Windows Mobile and demonstrated that it works transparently with existing applications, window systems, and operating systems, and does not require modifying, recompiling, or relinking existing software. We have quantitatively evaluated pTHINC against local PDA web browsers and other thin-client approaches on Pocket PC devices. Our experimental results demonstrate that pTHINC provides superior web browsing performance and is the only PDA thin client that effectively supports crucial browser helper applications such as video playback. This paper presents the design and implementation of pTHINC. Section 2 describes the overall usage model and usability characteristics of pTHINC. Section 3 presents the design and system architecture of pTHINC. Section 4 presents experimental results measuring the performance of pTHINC on web applications and comparing it against native PDA browsers and other popular PDA thin-client systems. Section 5 discusses related work. Finally, we present some concluding remarks. 2. PTHINC USAGE MODEL pTHINC is a thin-client system that consists of a simple client viewer application that runs on the PDA and a server that runs on a commodity PC. The server leverages more powerful PCs to to run web browsers and other application logic. The client takes user input from the PDA stylus and virtual keyboard and sends them to the server to pass to the applications. Screen updates are then sent back from the server to the client for display to the user. When the pTHINC PDA client is started, the user is presented with a simple graphical interface where information such as server address and port, user authentication information, and session settings can be provided. pTHINC first attempts to connect to the server and perform the necessary handshaking. Once this process has been completed, pTHINC presents the user with the most recent display of his session. If the session does not exist, a new session is created. Existing sessions can be seamlessly continued without changes in the session setting or server configuration. Unlike other thin-client systems, pTHINC provides a user with a persistent web session model in which a user can launch a session running a web browser and associated applications at the server, then disconnect from that session and reconnect to it again anytime. When a user reconnects to the session, all of the applications continue running where the user left off, so that the user can continue working as though he or she never disconnected. The ability to disconnect and reconnect to a session at anytime is an important benefit for mobile wireless PDA users which may have intermittent network connectivity. 
pTHINC``s persistent web session model enables a user to reconnect to a web session from devices other than the one on which the web session was originally initiated. This provides users with seamless mobility across different devices. If a user loses his PDA, he can easily use another PDA to access his web session. Furthermore, pTHINC allows users to use non-PDA devices to access web sessions as well. A user can access the same persistent web session on a desktop PC as on a PDA, enabling a user to use the same web session from any computer. pTHINC``s persistent web session model addresses a key problem encountered by mobile web users, the lack of a common web environment across computers. Web browsers often store important information such as bookmarks, cookies, and history, which enable them to function in a much more useful manner. The problem that occurs when a user moves between computers is that this data, which is specific to a web browser installation, cannot move with the user. Furthermore, web browsers often need helper applications to process different media content, and those applications may not be consistently available across all computers. pTHINC addresses this problem by enabling a user to remotely use the exact same web browser environment and helper applications from any computer. As a result, pTHINC can provide a common, consistent web browsing environment for mobile users across different devices without requiring them to attempt to repeatedly synchronize different web browsing environments across multiple machines. To enable a user to access the same web session on different devices, pTHINC must provide mechanisms to support different display sizes and resolutions. Toward this end, pTHINC provides a zoom feature that enables a user to zoom in and out of a display and allows the display of a web 144 Figure 1: pTHINC shortcut keys session to be resized to fit the screen of the device being used. For example, if the server is running a web session at 1024×768 but the client is a PDA with a display resolution of 640×480, pTHINC will resize the desktop display to fit the full display in the smaller screen of the PDA. pTHINC provides the PDA user with the option to increase the size of the display by zooming in to different parts of the display. Users are often familiar with the general layout of commonly visited websites, and are able to leverage this resizing feature to better navigate through web pages. For example, a user can zoom out of the display to view the entire page content and navigate hyperlinks, then zoom in to a region of interest for a better view. To enable a user to access the same web session on different devices, pTHINC must also provide mechanisms to support different display orientations. In a desktop environment, users are typically accustomed to having displays presented in landscape mode where the screen width is larger than its height. However, in a PDA environment, the choice is not always obvious. Some users may prefer having the display in portrait mode, as it is easier to hold the device in their hands, while others may prefer landscape mode in order to minimize the amount of side-scrolling necessary to view a web page. To accommodate PDA user preferences, pTHINC provides an orientation feature that enables it to seamless rotate the display between landscape and portrait mode. 
The landscape mode is particularly useful for pTHINC users who frequently access their web sessions on both desktop and PDA devices, providing those users with the same familiar landscape setting across different devices. Because screen space is a relatively scarce resource on PDAs, pTHINC runs in fullscreen mode to maximize the screen area available to display the web session. To be able to use all of the screen on the PDA and still allow the user to control and interact with it, pTHINC reuses the typical shortcut buttons found on PDAs to perform all the control functions available to the user. The buttons used by pTHINC do not require any OS environment changes; they are simply intercepted by the pTHINC client application when they are pressed. Figure 1 shows how pTHINC utilizes the shortcut buttons to provide easy navigation and improve the overall user experience. These buttons are not device specific, and the layout shown is common to widelyused PocketPC devices. pTHINC provides six shortcuts to support its usage model: • Rotate Screen: The record button on the left edge is used to rotate the screen between portrait and landscape mode each time the button is pressed. • Zoom Out: The leftmost button on the bottom front is used to zoom out the display of the web session providing a bird``s eye view of the web session. • Zoom In: The second leftmost button on the bottom front is used to zoom in the display of the web session to more clearly view content of interest. • Directional Scroll: The middle button on the bottom front is used to scroll around the display using a single control button in a way that is already familiar to PDA users. This feature is particularly useful when the user has zoomed in to a region of the display such that only part of the display is visible on the screen. • Show/Hide Keyboard: The second rightmost button on the bottom front is used to bring up a virtual keyboard drawn on the screen for devices which have no physical keyboard. The virtual keyboard uses standard PDA OS mechanisms, providing portability across different PDA environments. • Close Session: The rightmost button on the bottom front is used to disconnect from the pTHINC session. pTHINC uses the PDA touch screen, stylus, and standard user interface mechanisms to provide a user interface pointand-click metaphor similar to that provided by the mouse in a traditional desktop computing environment. pTHINC does not use a cursor since PDA environments do not provide one. Instead, a user can use the stylus to tap on different sections of the touch screen to indicate input focus. A single tap on the touch screen generates a corresponding single click mouse event. A double tap on the touch screen generates a corresponding double click mouse event. pTHINC provides two-button mouse emulation by using the stylus to press down on the screen for one second to generate a right mouse click. All of these actions are identical to the way users already interact with PDA applications in the common PocketPC environment. In web browsing, users can click on hyperlinks and focus on input boxes by simply tapping on the desired screen area of interest. Unlike local PDA web browsers and other PDA applications, pTHINC leverages more powerful desktop user interface metaphors to enable users to manipulate multiple open application windows instead of being limited to a single application window at any given moment. This provides increased browsing flexibility beyond what is currently available on PDA devices. 
Similar to a desktop environment, browser windows and other application windows can be moved around by pressing down and dragging the stylus similar to a mouse. 3. PTHINC SYSTEM ARCHITECTURE pTHINC builds on the THINC [1] remote display architecture to provide a thin-client system for PDAs. pTHINC virtualizes the display at the server by leveraging the video device abstraction layer, which sits below the window server and above the framebuffer. This is a well-defined, low-level, device-dependent layer that exposes the video hardware to the display system. pTHINC accomplishes this through a simple virtual display driver that intercepts drawing commands, packetizes, and sends them over the network. 145 While other thin-client approaches intercept display commands at other layers of the display subsystem, pTHINC``s display virtualization approach provides some key benefits in efficiently supporting PDA clients. For example, intercepting display commands at a higher layer between applications and the window system as is done by X [17] requires replicating and running a great deal of functionality on the PDA that is traditionally provided by the desktop window system. Given both the size and complexity of traditional window systems, attempting to replicate this functionality in the restricted PDA environment would have proven to be a daunting, and perhaps unfeasible task. Furthermore, applications and the window system often require tight synchronization in their operation and imposing a wireless network between them by running the applications on the server and the window system on the client would significantly degrade performance. On the other hand, intercepting at a lower layer by extracting pixels out of the framebuffer as they are rendered provides a simple solution that requires very little functionality on the PDA client, but can also result in degraded performance. The reason is that by the time the remote display server attempts to send screen updates, it has lost all semantic information that may have helped it encode efficiently, and it must resort to using a generic and expensive encoding mechanism on the server, as well as a potentially expensive decoding mechanism on the limited PDA client. In contrast to both the high and low level interception approaches, pTHINC``s approach of intercepting at the device driver provides an effective balance between client and server simplicity, and the ability to efficiently encode and decode screen updates. By using a low-level virtual display approach, pTHINC can efficiently encode application display commands using only a small set of low-level commands. In a PDA environment, this set of commands provides a crucial component in maintaining the simplicity of the client in the resourceconstrained PDA environment. The display commands are shown in Table 1, and work as follows. COPY instructs the client to copy a region of the screen from its local framebuffer to another location. This command improves the user experience by accelerating scrolling and opaque window movement without having to resend screen data from the server. SFILL, PFILL, and BITMAP are commands that paint a fixed-size region on the screen. They are useful for accelerating the display of solid window backgrounds, desktop patterns, backgrounds of web pages, text drawing, and certain operations in graphics manipulation programs. SFILL fills a sizable region on the screen with a single color. PFILL replicates a tile over a screen region. 
BITMAP performs a fill using a bitmap of ones and zeros as a stipple to apply a foreground and background color. Finally, RAW is used to transmit unencoded pixel data to be displayed verbatim on a region of the screen. This command is invoked as a last resort if the server is unable to employ any other command, and it is the only command that may be compressed to mitigate its impact on network bandwidth. (Table 1: pTHINC Protocol Display Commands — COPY: copy a frame buffer area to specified coordinates; SFILL: fill an area with a given pixel color value; PFILL: tile an area with a given pixel pattern; BITMAP: fill a region using a bit pattern; RAW: display raw pixel data at a given location.) pTHINC delivers its commands using a non-blocking, server-push update mechanism, where as soon as display updates are generated on the server, they are sent to the client. Clients are not required to explicitly request display updates, thus minimizing the impact that the typical varying network latency of wireless links may have on the responsiveness of the system. Keeping in mind that resource constrained PDAs and wireless networks may not be able to keep up with a fast server generating a large number of updates, pTHINC is able to coalesce, clip, and discard updates automatically if network loss or congestion occurs, or the client cannot keep up with the rate of updates. This type of behavior proves crucial in a web browsing environment, where, for example, a page may be redrawn multiple times as it is rendered on the fly by the browser. In this case, the PDA will only receive and render the final result, which clearly is all the user is interested in seeing. pTHINC prioritizes the delivery of updates to the PDA using a Shortest-Remaining-Size-First (SRSF) preemptive update scheduler. SRSF is analogous to Shortest-Remaining-Processing-Time scheduling, which is known to be optimal for minimizing mean response time in an interactive system. In a web browsing environment, short jobs are associated with text and basic page layout components such as the page``s background, which are critical web content for the user. On the other hand, large jobs are often lower priority beautifying elements, or, even worse, web page banners and advertisements, which are of questionable value to the user as he or she is browsing the page. Using SRSF, pTHINC is able to maximize the utilization of the relatively scarce bandwidth available on the wireless connection between the PDA and the server. 3.1 Display Management To enable users to just as easily access their web browser and helper applications from a desktop computer at home as from a PDA while on the road, pTHINC provides a resize mechanism to zoom in and out of the display of a web session. pTHINC resizing is completely supported by the server, not the client. The server resamples updates to fit within the PDA``s viewport before they are transmitted over the network. pTHINC uses Fant``s resampling algorithm to resize pixel updates. This provides smooth, visually pleasing updates with proper antialiasing and has only modest computational requirements. pTHINC``s resizing approach has a number of advantages. First, it allows the PDA to leverage the vastly superior computational power of the server to use high quality resampling algorithms and produce higher quality updates for the PDA to display. Second, resizing the screen does not translate into additional resource requirements for the PDA, since it does not need to perform any additional work.
Finally, better utilization of the wireless network is attained since rescaling the updates reduces their bandwidth requirements. To enable users to orient their displays on a PDA to provide a viewing experience that best accommodates user preferences and the layout of web pages or applications, pTHINC provides a display rotation mechanism to switch between landscape and portrait viewing modes. pTHINC display rotation is completely supported by the client, not the server. pTHINC does not explicitly recalculate the ge146 ometry of display updates to perform rotation, which would be computationally expensive. Instead, pTHINC simply changes the way data is copied into the framebuffer to switch between display modes. When in portrait mode, data is copied along the rows of the framebuffer from left to right. When in landscape mode, data is copied along the columns of the framebuffer from top to bottom. These very fast and simple techniques replace one set of copy operations with another and impose no performance overhead. pTHINC provides its own rotation mechanism to support a wide range of devices without imposing additional feature requirements on the PDA. Although some newer PDA devices provide native support for different orientations, this mechanism is not dynamic and requires the user to rotate the PDA``s entire user interface before starting the pTHINC client. Windows Mobile provides native API mechanisms for PDA applications to rotate their UI on the fly, but these mechanisms deliver poor performance and display quality as the rotation is performed naively and is not completely accurate. 3.2 Video Playback Video has gradually become an integral part of the World Wide Web, and its presence will only continue to increase. Web sites today not only use animated graphics and flash to deliver web content in an attractive manner, but also utilize streaming video to enrich the web interface. Users are able to view pre-recorded and live newscasts on CNN, watch sports highlights on ESPN, and even search through large collection of videos on Google Video. To allow applications to provide efficient video playback, interfaces have been created in display systems that allow video device drivers to expose their hardware capabilities back to the applications. pTHINC takes advantage of these interfaces and its virtual device driver approach to provide a virtual bridge between the remote client and its hardware and the applications, and transparently support video playback. On top of this architecture, pTHINC uses the YUV colorspace to encode the video content, which provides a number of benefits. First, it has become increasingly common for PDA video hardware to natively support YUV and be able to perform the colorspace conversion and scaling automatically. As a result, pTHINC is able to provide fullscreen video playback without any performance hits. Second, the use of YUV allows for a more efficient representation of RGB data without loss of quality, by taking advantage of the human eye``s ability to better distinguish differences in brightness than in color. In particular, pTHINC uses the YV12 format, which allows full color RGB data to be encoded using just 12 bits per pixel. Third, YUV data is produced as one of the last steps of the decoding process of most video codecs, allowing pTHINC to provide video playback in a manner that is format independent. 
Finally, even if the PDA``s video hardware is unable to accelerate playback, the colorspace conversion process is simple enough that it does not impose unreasonable requirements on the PDA. A more concrete example of how pTHINC leverages the PDA video hardware to support video playback can be seen in our prototype implementation on the popular Dell Axim X51v PDA, which is equipped with the Intel 2700G multimedia accelerator. In this case, pTHINC creates an offscreen buffer in video memory and reads and writes data in the YV12 format in this memory region. When a new video frame arrives, video data is copied from the buffer to an overlay surface in video memory, which is independent of the normal surface used for traditional drawing. As the YV12 data is put onto the overlay, the Intel accelerator automatically performs both colorspace conversion and scaling. By using the overlay surface, pTHINC has no need to redraw the screen once video playback is over since the overlapped surface is unaffected. In addition, specific overlay regions can be manipulated by leveraging the video hardware, for example to perform hardware linear interpolation to smooth out the frame and display it fullscreen, and to do automatic rotation when the client runs in landscape mode. 4. EXPERIMENTAL RESULTS We have implemented a pTHINC prototype that runs the client on widely-used Windows Mobile-based Pocket PC devices and the server on both Windows and Linux operating systems. To demonstrate its effectiveness in supporting mobile wireless web applications, we have measured its performance on web applications. We present experimental results on different PDA devices for two popular web applications, browsing web pages and playing video content from the web. We compared pTHINC against native web applications running locally on the PDA to demonstrate the improvement that pTHINC can provide over the traditional fat-client approach. We also compared pTHINC against three of the most widely used thin clients that can run on PDAs, Citrix MetaFrame XP [2], Microsoft Remote Desktop [3], and VNC (Virtual Network Computing) [16]. We follow common practice and refer to Citrix MetaFrame XP and Microsoft Remote Desktop by their respective remote display protocols, ICA (Independent Computing Architecture) and RDP (Remote Desktop Protocol). 4.1 Experimental Testbed We conducted our web experiments using two different wireless Pocket PC PDAs in an isolated Wi-Fi network testbed, as shown in Figure 2. (Figure 2: Experimental Testbed.) The testbed consisted of two PDA client devices, a packet monitor, a thin-client server, and a web server. Except for the PDAs, all of the other machines were IBM Netfinity 4500R servers with dual 933 MHz Intel PIII CPUs and 512 MB RAM and were connected on a switched 100 Mbps FastEthernet network. The web server used was Apache 1.3.27, the network emulator was NISTNet 2.0.12, and the packet monitor was Ethereal 0.10.9. The PDA clients connected to the testbed through an 802.11b Lucent Orinoco AP-2000 wireless access point. All experiments using the wireless network were conducted within ten feet of the access point, so we considered the amount of packet loss to be negligible in our experiments. Two Pocket PC PDAs were used to provide results across both older, less powerful models and newer higher performance models.
The older model was a Dell Axim X5 with a 400 MHz Intel XScale PXA255 CPU and 64 MB RAM running Windows Mobile 2003 and a Dell TrueMobile 1180 2.4 GHz CompactFlash card for wireless networking. The newer model was a Dell Axim X51v with a 624 MHz Intel XScale PXA270 CPU and 64 MB RAM running Windows Mobile 5.0 and integrated 802.11b wireless networking. The X51v has an Intel 2700G multimedia accelerator with 16 MB video memory. Both PDAs are capable of 16-bit color but have different screen sizes and display resolutions. The X5 has a 3.5 inch diagonal screen with 240×320 resolution. The X51v has a 3.7 inch diagonal screen with 480×640. The four thin clients that we used support different levels of display quality as summarized in Table 2. (Table 2: Thin-client Testbed Configuration Setting — columns: Client, 1024×768, 640×480, Depth, Resize, Clip. RDP: no, yes, 8-bit, no, yes. VNC: yes, yes, 16-bit, no, no. ICA: yes, yes, 16-bit, yes, no. pTHINC: yes, yes, 24-bit, yes, no.) The RDP client only supports a fixed 640×480 display resolution on the server with 8-bit color depth, while other platforms provide higher levels of display quality. To provide a fair comparison across all platforms, we conducted our experiments with thin-client sessions configured for two possible resolutions, 1024×768 and 640×480. Both ICA and VNC were configured to use the native PDA 16-bit color depth. The current pTHINC prototype uses 24-bit color directly and the client downsamples updates to the 16-bit color depth available on the PDA. RDP was configured using only 8-bit color depth since it does not support any better color depth. Since both pTHINC and ICA provide the ability to view the display resized to fit the screen, we measured both clients with and without the display resized to fit the PDA screen. Each thin client was tested using landscape rather than portrait mode when available. All systems running on the X51v could run in landscape mode because the hardware provides a landscape mode feature. However, the X5 does not provide this functionality. Only pTHINC directly supports landscape mode, so it was the only system that could run in landscape mode on both the X5 and X51v. To provide a fair comparison, we also standardized on common hardware and operating systems whenever possible. All of the systems used the Netfinity server as the thin-client server. For the two systems designed for Windows servers, ICA and RDP, we ran Windows 2003 Server on the server. For the other systems which support X-based servers, VNC and pTHINC, we ran the Debian Linux Unstable distribution with the Linux 2.6.10 kernel on the server. We used the latest thin-client server versions available on each platform at the time of our experiments, namely Citrix MetaFrame XP Server for Windows Feature Release 3, Microsoft Remote Desktop built into Windows XP and Windows 2003 using RDP 5.2, and VNC 4.0.
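Referring back to the display primitives summarized in Table 1 (Section 3), the following is a small illustrative sketch, in Python, of how a client-side dispatcher could apply COPY, SFILL, PFILL, BITMAP, and RAW updates to a local framebuffer. The data structures and method names are hypothetical and are not pTHINC's actual wire format or implementation.

```python
from dataclasses import dataclass
from typing import List, Tuple

Pixel = Tuple[int, int, int]  # (R, G, B); illustrative only

@dataclass
class Framebuffer:
    width: int
    height: int

    def __post_init__(self):
        self.pixels: List[List[Pixel]] = [[(0, 0, 0)] * self.width for _ in range(self.height)]

    def copy(self, sx, sy, w, h, dx, dy):
        """COPY: move a framebuffer area to new coordinates (e.g., for scrolling)."""
        block = [row[sx:sx + w] for row in self.pixels[sy:sy + h]]
        for r, row in enumerate(block):
            self.pixels[dy + r][dx:dx + w] = row

    def sfill(self, x, y, w, h, color: Pixel):
        """SFILL: fill an area with a single color."""
        for r in range(y, y + h):
            self.pixels[r][x:x + w] = [color] * w

    def pfill(self, x, y, w, h, tile: List[List[Pixel]]):
        """PFILL: tile an area with a pixel pattern."""
        th, tw = len(tile), len(tile[0])
        for r in range(h):
            for c in range(w):
                self.pixels[y + r][x + c] = tile[r % th][c % tw]

    def bitmap(self, x, y, w, h, bits: List[List[int]], fg: Pixel, bg: Pixel):
        """BITMAP: fill a region using a bit pattern as a stipple (1 -> fg, 0 -> bg)."""
        for r in range(h):
            for c in range(w):
                self.pixels[y + r][x + c] = fg if bits[r][c] else bg

    def raw(self, x, y, w, h, data: List[List[Pixel]]):
        """RAW: display unencoded pixel data verbatim."""
        for r in range(h):
            self.pixels[y + r][x:x + w] = data[r][:w]
```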
This measurement methodology accounts for both the display decoupling that can occur between client and server in thin-client systems as well as client processing time, which may be significant in the case of PDAs. To measure web browsing performance, we used a web browsing benchmark based on the Web Text Page Load Test from the Ziff-Davis i-Bench benchmark suite [7]. The benchmark consists of JavaScript controlled load of 55 pages from the web server. The pages contain both text and graphics with pages varying in size. The graphics are embedded images in GIF and JPEG formats. The original i-Bench benchmark was modified for slow-motion benchmarking by introducing delays of several seconds between the pages using JavaScript. Then two tests were run, one where delays where added between each page, and one where pages where loaded continuously without waiting for them to be displayed on the client. In the first test, delays were sufficiently adjusted in each case to ensure that each page could be received and displayed on the client completely without temporal overlap in transferring the data belonging to two consecutive pages. We used the packet monitor to record the packet traffic for each run of the benchmark, then used the timestamps of the first and last packet in the trace to obtain our latency measures [10]. The packet monitor also recorded the amount of data transmitted between the client and the server. The ratio between the data traffic in the two tests yields a scale factor. This scale factor shows the loss of data between the server and the client due to inability of the client to process the data quickly enough. The product of the scale factor with the latency measurement produces the true latency accounting for client processing time. To run the web browsing benchmark, we used Mozilla Firefox 1.0.4 running on the thin-client server for the thin clients, and Windows Internet Explorer (IE) Mobile for 2003 and Mobile for 5.0 for the native browsers on the X5 and X51v PDAs, respectively. In all cases, the web browser used was sized to fill the entire display region available. To measure video playback performance, we used a video benchmark that consisted of playing a 34.75s MPEG-1 video clip containing a mix of news and entertainment programming at full-screen resolution. The video clip is 5.11 MB and consists of 834 352x240 pixel frames with an ideal frame rate of 24 frames/sec. We measured video performance using slow-motion benchmarking by monitoring resulting packet traffic at two playback rates, 1 frames/second (fps) and 24 fps, and comparing the results to determine playback delays and frame drops that occur at 24 fps to measure overall video quality [13]. For example, 100% quality means that all video frames were played at real-time speed. On the other hand, 50% quality could mean that half the video data was dropped, or that the clip took twice as long to play even though all of the video data was displayed. To run the video benchmark, we used Windows Media Player 9 for Windows-based thin-client servers, MPlayer 1.0 pre 6 for X-based thin-client servers, and Windows Media Player 9 Mobile and 10 Mobile for the native video players running locally on the X5 and X51v PDAs, respectively. In all cases, the video player used was sized to fill the entire display region available. 
4.3 Measurements Figures 3 and 4 show the results of running the web browsing benchmark. (Figure 3: Browsing Benchmark: Average Page Latency.) For each platform, we show results for up to four different configurations, two on the X5 and two on the X51v, depending on whether each configuration was supported. However, not all platforms could support all configurations. The local browser only runs at the display resolution of the PDA, 480×640 or less for the X51v and the X5. RDP only runs at 640×480. Neither platform could support 1024×768 display resolution. ICA only ran on the X5 and could not run on the X51v because it did not work on Windows Mobile 5. Figure 3 shows the average latency per web page for each platform. pTHINC provides the lowest average web browsing latency on both PDAs. On the X5, pTHINC performs up to 70 times better than other thin-client systems and 8 times better than the local browser. On the X51v, pTHINC performs up to 80 times better than other thin-client systems and 7 times better than the native browser. In fact, all of the thin clients except VNC outperform the local PDA browser, demonstrating the performance benefits of the thin-client approach. Usability studies have shown that web pages should take less than one second to download for the user to experience an uninterrupted web browsing experience [14]. The measurements show that only the thin clients deliver subsecond web page latencies. In contrast, the local browser requires more than 3 seconds on average per web page. The local browser performs worse since it needs to run a more limited web browser to process the HTML, JavaScript, and do all the rendering using the limited capabilities of the PDA. The thin clients can take advantage of faster server hardware and a highly tuned web browser to process the web content much faster. Figure 3 shows that RDP is the next fastest platform after pTHINC. However, RDP is only able to run at a fixed resolution of 640×480 and 8-bit color depth. Furthermore, RDP also clips the display to the size of the PDA screen so that it does not need to send updates that are not visible on the PDA screen. This provides a performance benefit assuming the remaining web content is not viewed, but degrades performance when a user scrolls around the display to view other web content. RDP achieves its performance with significantly lower display quality compared to the other thin clients and with additional display clipping not used by other systems. As a result, RDP performance alone does not provide a complete comparison with the other platforms. In contrast, pTHINC provides the fastest performance while at the same time providing equal or better display quality than the other systems. (Figure 4: Browsing Benchmark: Average Page Data Transferred.) Since VNC and ICA provide similar display quality to pTHINC, these systems provide a fairer comparison of different thin-client approaches. ICA performs worse in part because it uses higher-level display primitives that require additional client processing costs.
VNC performs worse in part because it loses display data due to its client-pull delivery mechanism and because of the client processing costs of decompressing raw pixel primitives. In both cases, performance was limited in part because the PDA clients were unable to keep up with the rate at which web pages were being displayed.

Figure 3 also shows measurements for those thin clients that support resizing the display to fit the PDA screen, namely ICA and pTHINC. Resizing requires additional processing, which results in slower average web page latencies. The measurements show that the additional delay incurred by ICA when resizing versus not resizing is much more substantial than for pTHINC. ICA performs resizing on the slower PDA client. In contrast, pTHINC leverages the more powerful server to do the resizing, reducing the performance difference between resizing and not resizing. Unlike ICA, pTHINC is able to provide subsecond web page download latencies in both cases.

Figure 4 shows the data transferred in KB per page when running the slow-motion version of the tests. All of the platforms have modest data transfer requirements of roughly 100 KB per page or less. This is well within the bandwidth capacity of Wi-Fi networks. The measurements show that the local browser does not transfer the least amount of data. This is surprising, as HTML is often considered to be a very compact representation of content. Instead, RDP is the most bandwidth-efficient platform, largely as a result of using only 8-bit color depth and screen clipping so that it does not transfer the entire web page to the client. pTHINC overall has the largest data requirements, slightly more than VNC. This is largely a result of the current pTHINC prototype's lack of native support for 16-bit color data in the wire protocol. However, this result also highlights pTHINC's performance, as it is faster than all other systems even while transferring more data. Furthermore, as newer PDA models support full 24-bit color, these results indicate that pTHINC will continue to provide good web browsing performance.

Since display usability and quality are as important as performance, Figures 5 to 8 compare screenshots of the different thin clients when displaying a web page, in this case from the popular BBC news website. Except for ICA, all of the screenshots were taken on the X51v in landscape mode using the maximum display resolution settings for each platform given in Table 2. The ICA screenshot was taken on the X5 since ICA does not run on the X51v.

Figure 5: Browser Screenshot: RDP 640x480
Figure 6: Browser Screenshot: VNC 1024x768
Figure 7: Browser Screenshot: ICA Resized 1024x768
Figure 8: Browser Screenshot: pTHINC Resized 1024x768

While the screenshots lack the visual fidelity of the actual device display, several observations can be made. Figure 5 shows that RDP does not support fullscreen mode and wastes a large amount of screen space on controls and UI elements, requiring the user to scroll around in order to access the full contents of the web browsing session. Figure 6 shows that VNC makes better use of the screen space and provides better display quality, but still forces the user to scroll around to view the web page due to its lack of resizing support. Figure 7 shows ICA's ability to display the full web page given its resizing support, but also that its lack of landscape capability and poorer resize algorithm significantly compromise display quality.
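As described earlier in the paper, pTHINC resamples updates on the server before they ever reach the PDA, which is why its resized latencies stay close to its unresized ones. The sketch below illustrates that server-side placement of the work; Pillow's Lanczos filter stands in for the server's actual high-quality resampler, and the buffer contents and dimensions are illustrative only.

```python
# Hedged sketch: resize a server-rendered update before transmitting it.
# Pillow's Lanczos filter stands in for the server's high-quality resampler.
from PIL import Image

def resize_update(raw_rgb: bytes, src_w: int, src_h: int,
                  client_w: int, client_h: int) -> Image.Image:
    """Downscale an update to fit the client viewport, preserving aspect ratio."""
    frame = Image.frombytes("RGB", (src_w, src_h), raw_rgb)
    scale = min(client_w / src_w, client_h / src_h)
    target = (max(1, int(src_w * scale)), max(1, int(src_h * scale)))
    return frame.resize(target, Image.LANCZOS)

# A 1024x768 session update is scaled on the server to fit a 640x480 PDA screen,
# so the client only decodes and blits pixels that are already the right size.
scaled = resize_update(b"\x00" * (1024 * 768 * 3), 1024, 768, 640, 480)
print(scaled.size)   # -> (640, 480)
```

Keeping this work on the server is the design choice the measurements above reflect: the client's cost stays the same whether or not the session is resized.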
In contrast, Figure 8 shows pTHINC using resizing to provide a high quality fullscreen display of the full width of the web page. pTHINC maximizes the entire viewing region by moving all controls to the PDA buttons. In addition, pTHINC leverages the server's computational power to use a high quality resizing algorithm to resize the display to fit the PDA screen without significant overhead.

Figures 9 and 10 show the results of running the video playback benchmark. For each platform except ICA, we show results for an X5 and an X51v configuration. ICA could not run on the X51v as noted earlier. The measurements were done using settings that reflected the environment of a user who has to access a web session from both a desktop computer and a PDA. As such, a 1024×768 server display resolution was used whenever possible and the video was shown at fullscreen. RDP was limited to a 640×480 display resolution as noted earlier. Since viewing the entire video display is the only really usable option, we resized the display to fit the PDA screen for those platforms that supported this feature, namely ICA and pTHINC.

Figure 9: Video Benchmark: Fullscreen Video Quality

Figure 9 shows the video quality for each platform. pTHINC is the only thin client able to provide perfect video playback quality, similar to the native PDA video player. All of the other thin clients deliver very poor video quality. With the exception of RDP on the X51v, which provided an unacceptable 35% video quality, none of the other systems was even able to achieve 10% video quality. VNC and ICA have the worst quality at 8% on the X5 device. pTHINC's native video support enables superior video performance, while the other thin clients suffer from their inability to distinguish video from normal display updates. They attempt to apply ineffective and expensive compression algorithms to the video data and are unable to keep up with the stream of updates generated, resulting in dropped frames or long playback times. VNC suffers further from its client-pull update model because video frames are generated faster than the rate at which the client can process and send requests to the server to obtain the next display update.

Figure 10: Video Benchmark: Fullscreen Video Data

Figure 10 shows the total data transferred during video playback for each system. The native player is the most bandwidth-efficient platform, sending less than 6 MB of data, which corresponds to about 1.2 Mbps of bandwidth. pTHINC's 100% video quality requires about 25 MB of data, which corresponds to a bandwidth usage of less than 6 Mbps. While the other thin clients send less data than pTHINC, they do so because they are dropping video data, resulting in degraded video quality.

Figures 11 to 14 compare screenshots of the different thin clients when displaying the video clip. Except for ICA, all of the screenshots were taken on the X51v in landscape mode using the maximum display resolution settings for each platform given in Table 2. The ICA screenshot was taken on the X5 since ICA does not run on the X51v. Figures 11 and 12 show that RDP and VNC are unable to display the entire video frame on the PDA screen. RDP wastes screen space on UI elements and VNC only shows the top corner of the video frame on the screen.
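The bandwidth figures quoted above follow directly from the clip length and the measured transfer sizes. The snippet below reproduces that arithmetic; the megabyte values are the rounded numbers reported in the text, so the outputs are approximate.

```python
# Average bandwidth implied by the video benchmark transfer sizes quoted above.
CLIP_SECONDS = 34.75   # playback time of the test clip at full quality

def avg_mbps(megabytes: float) -> float:
    """Average bandwidth in megabits per second over the full clip."""
    return megabytes * 8 / CLIP_SECONDS

print(round(avg_mbps(5.11), 2))   # native player streams roughly the 5.11 MB clip: ~1.2 Mbps
print(round(avg_mbps(25.0), 2))   # pTHINC at 100% quality, ~25 MB: just under 6 Mbps
```

Both rates sit comfortably below even 802.11b's nominal 11 Mbps, consistent with the claim that full-quality playback fits within Wi-Fi capacity.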
Figure 13 shows that ICA provides resizing to display the entire video frame, but does not resize the video data proportionally, resulting in noticeable display artifacts. In contrast, Figure 14 shows pTHINC using resizing to provide a high quality fullscreen display of the entire video frame. pTHINC provides a visually more appealing video display than RDP, VNC, or ICA.

5. RELATED WORK

Several studies have examined the web browsing performance of thin-client computing [13, 19, 10]. The ability of thin clients to improve web browsing performance on wireless PDAs was first quantitatively demonstrated in a previous study by one of the authors [10]. This study demonstrated that thin clients can provide both faster web browsing performance and greater web browsing functionality. The study considered a wide range of web content, including content from medical information systems. Our work builds on this previous study and considers important issues such as how usable existing thin clients are in PDA environments, the trade-offs between thin-client usability and performance, performance across different PDA devices, and the performance of thin clients on common web-related applications such as video.

Many thin clients have been developed and some have PDA clients, including Microsoft's Remote Desktop [3], Citrix MetaFrame XP [2], Virtual Network Computing [16, 12], GoToMyPC [5], and Tarantella [18]. These systems were first designed for desktop computing and retrofitted for PDAs. Unlike pTHINC, they do not address key system architecture and usability issues important for PDAs. This limits their display quality, system performance, available screen space, and overall usability on PDAs. pTHINC builds on previous work by two of the authors on THINC [1], extending the server architecture and introducing a client interface and usage model to efficiently support PDA devices for mobile web applications.

Other approaches to improving the performance of mobile wireless web browsing have focused on using transcoding and caching proxies in conjunction with the fat-client model [11, 9, 4, 8]. They work by pushing functionality to external proxies and using specialized browsing applications on the PDA device that communicate with the proxy. Our thin-client approach differs fundamentally from these fat-client approaches by pushing all web browser logic to the server, leveraging existing investments in desktop web browsers and helper applications to work seamlessly with production systems without any additional proxy configuration or web browser modifications.

With the emergence of web browsing on small display devices, web sites have been redesigned using mechanisms like WAP, and specialized native web browsers have been developed to cater to the needs of these devices. Recently, Opera has developed the Opera Mini [15] web browser, which uses an approach similar to the thin-client model to provide access across a number of mobile devices that would normally be incapable of running a web browser. Instead of requiring the device to process web pages, it uses a remote server to pre-process each page before sending it to the phone.

6. CONCLUSIONS

We have introduced pTHINC, a thin-client architecture for wireless PDAs. pTHINC provides key architectural and usability mechanisms such as server-side screen resizing, client-side screen rotation using simple copy techniques, YUV video support, and maximizing the screen space available for display updates by leveraging existing PDA control buttons for UI elements.
pTHINC transparently supports traditional desktop browsers and their helper applications on PDA devices and desktop machines, providing mobile users with ubiquitous access to a consistent, personalized, and full-featured web environment across heterogeneous devices. We have implemented pTHINC and measured its performance on web applications compared to existing thin-client systems and native web applications. Our results on multiple mobile wireless devices demonstrate that pTHINC delivers web browsing performance up to 80 times better than existing thin-client systems, and 8 times better than a native PDA browser. In addition, pTHINC is the only PDA thin client that transparently provides full-screen, full frame rate video playback, making web sites with multimedia content accessible to mobile web users.

Figure 11: Video Screenshot: RDP 640x480
Figure 12: Video Screenshot: VNC 1024x768
Figure 13: Video Screenshot: ICA Resized 1024x768
Figure 14: Video Screenshot: pTHINC Resized 1024x768

7. ACKNOWLEDGEMENTS

This work was supported in part by NSF ITR grants CCR-0219943 and CNS-0426623, and an IBM SUR Award.

8. REFERENCES

[1] R. Baratto, L. Kim, and J. Nieh. THINC: A Virtual Display Architecture for Thin-Client Computing. In Proceedings of the 20th ACM Symposium on Operating Systems Principles (SOSP), Oct. 2005.
[2] Citrix MetaFrame. http://www.citrix.com.
[3] B. C. Cumberland, G. Carius, and A. Muir. Microsoft Windows NT Server 4.0, Terminal Server Edition: Technical Reference. Microsoft Press, Redmond, WA, 1999.
[4] A. Fox, I. Goldberg, S. D. Gribble, and D. C. Lee. Experience With Top Gun Wingman: A Proxy-Based Graphical Web Browser for the 3Com PalmPilot. In Proceedings of Middleware '98, Lake District, England, Sept. 1998.
[5] GoToMyPC. http://www.gotomypc.com/.
[6] Health Insurance Portability and Accountability Act. http://www.hhs.gov/ocr/hipaa/.
[7] i-Bench version 1.5. http://etestinglabs.com/benchmarks/i-bench/i-bench.asp.
[8] A. Joshi. On Proxy Agents, Mobility, and Web Access. Mobile Networks and Applications, 5(4):233-241, 2000.
[9] J. Kangasharju, Y. G. Kwon, and A. Ortega. Design and Implementation of a Soft Caching Proxy. Computer Networks and ISDN Systems, 30(22-23):2113-2121, 1998.
[10] A. Lai, J. Nieh, B. Bohra, V. Nandikonda, A. P. Surana, and S. Varshneya. Improving Web Browsing on Wireless PDAs Using Thin-Client Computing. In Proceedings of the 13th International World Wide Web Conference (WWW), May 2004.
[11] A. Maheshwari, A. Sharma, K. Ramamritham, and P. Shenoy. TranSquid: Transcoding and Caching Proxy for Heterogeneous E-commerce Environments. In Proceedings of the 12th IEEE Workshop on Research Issues in Data Engineering (RIDE '02), Feb. 2002.
[12] .NET VNC Viewer for PocketPC. http://dotnetvnc.sourceforge.net/.
[13] J. Nieh, S. J. Yang, and N. Novik. Measuring Thin-Client Performance Using Slow-Motion Benchmarking. ACM Trans. Computer Systems, 21(1):87-115, Feb. 2003.
[14] J. Nielsen. Designing Web Usability. New Riders Publishing, Indianapolis, IN, 2000.
[15] Opera Mini Browser. http://www.opera.com/products/mobile/operamini/.
[16] T. Richardson, Q. Stafford-Fraser, K. R. Wood, and A. Hopper. Virtual Network Computing. IEEE Internet Computing, 2(1), Jan./Feb. 1998.
[17] R. W. Scheifler and J. Gettys. The X Window System. ACM Trans. Gr., 5(2):79-106, Apr. 1986.
[18] Sun Secure Global Desktop. http://www.sun.com/software/products/sgd/.
[19] S. J. Yang, J. Nieh, S. Krishnappa, A. Mohla, and M. Sajjadpour. Web Browsing Performance of Wireless Thin-Client Computing. In Proceedings of the 12th International World Wide Web Conference (WWW), May 2003.
The four thin clients that we used support different levels of display quality as summarized in Table 2. The RDP client only supports a fixed 640 × 480 display resolution on the server with 8-bit color depth, while other platforms provide higher levels of display quality. To provide a fair comparison across all platforms, we conducted our experiments with thin-client sessions configured for two possible resolutions, 1024 × 768 and 640 × 480. Both ICA and VNC were configured to use the native PDA resolution of 16-bit color depth. The current pTHINC prototype uses 24-bit color directly and the client downsamples updates to the 16-bit color depth available on the PDA. RDP was configured using only 8-bit color depth since it does not support any better color depth. Since both pTHINC and ICA provide the ability to view the display resized to fit the screen, we measured both clients with and without the display resized to fit the PDA screen. Each thin client was tested using landscape rather than portrait mode when available. All systems run on the X51v could run in landscape mode because the hardware provides a landscape mode feature. However, the X5 does not provide this functionality. Only pTHINC directly supports landscape mode, so it was the only system that could run in landscape mode on both the X5 and X51v. To provide a fair comparison, we also standardized on common hardware and operating systems whenever possible. All of the systems used the Netfinity server as the thin-client server. For the two systems designed for Windows servers, ICA and RDP, we ran Windows 2003 Server on the server. For the other systems which support X-based servers, VNC and pTHINC, we ran the Debian Linux Unstable distribution with the Linux 2.6.10 kernel on the server. We used the latest thin-client server versions available on each platform at the time of our experiments, namely Citrix MetaFrame XP Server for Windows Feature Release 3, Microsoft Remote Desktop built into Windows XP and Windows 2003 using RDP 5.2, and VNC 4.0. 4.2 Application Benchmarks We used two web application benchmarks for our experiments based on two common application scenarios, browsing web pages and playing video content from the web. Since many thin-client systems including two of the ones tested are closed and proprietary, we measured their performance in a noninvasive manner by capturing network traffic with a packet monitor and using a variant of slow-motion benchmarking [13] previously developed to measure thin-client performance in PDA environments [10]. This measurement methodology accounts for both the display decoupling that can occur between client and server in thin-client systems as well as client processing time, which may be significant in the case of PDAs. To measure web browsing performance, we used a web browsing benchmark based on the Web Text Page Load Test from the Ziff-Davis i-Bench benchmark suite [7]. The benchmark consists of JavaScript controlled load of 55 pages from the web server. The pages contain both text and graphics with pages varying in size. The graphics are embedded images in GIF and JPEG formats. The original i-Bench benchmark was modified for slow-motion benchmarking by introducing delays of several seconds between the pages using JavaScript. Then two tests were run, one where delays where added between each page, and one where pages where loaded continuously without waiting for them to be displayed on the client. 
In the first test, delays were sufficiently adjusted in each case to ensure that each page could be received and displayed on the client completely without temporal overlap in transferring the data belonging to two consecutive pages. We used the packet monitor to record the packet traffic for each run of the benchmark, then used the timestamps of the first and last packet in the trace to obtain our latency measures [10]. The packet monitor also recorded the amount of data transmitted between the client and the server. The ratio between the data traffic in the two tests yields a scale factor. This scale factor shows the loss of data between the server and the client due to inability of the client to process the data quickly enough. The product of the scale factor with the latency measurement produces the true latency accounting for client processing time. To run the web browsing benchmark, we used Mozilla Firefox 1.0.4 running on the thin-client server for the thin clients, and Windows Internet Explorer (IE) Mobile for 2003 and Mobile for 5.0 for the native browsers on the X5 and X51v PDAs, respectively. In all cases, the web browser used was sized to fill the entire display region available. To measure video playback performance, we used a video benchmark that consisted of playing a 34.75 s MPEG-1 video clip containing a mix of news and entertainment programming at full-screen resolution. The video clip is 5.11 MB and consists of 834 352x240 pixel frames with an ideal frame rate of 24 frames/sec. We measured video performance using slow-motion benchmarking by monitoring resulting packet traffic at two playback rates, 1 frames/second (fps) and 24 fps, and comparing the results to determine playback delays and frame drops that occur at 24 fps to measure overall video quality [13]. For example, 100% quality means that all video frames were played at real-time speed. On the other hand, 50% quality could mean that half the video data was dropped, or that the clip took twice as long to play even though all of the video data was displayed. To run the video benchmark, we used Windows Media Player 9 for Windows-based thin-client servers, MPlayer 1.0 pre 6 for X-based thin-client servers, and Windows Media Player 9 Mobile and 10 Mobile for the native video players running locally on the X5 and X51v PDAs, respectively. In all cases, the video player used was sized to fill the entire display region available. 4.3 Measurements Figures 3 and 4 show the results of running the web brows ing benchmark. For each platform, we show results for up to four different configurations, two on the X5 and two on the X51v, depending on whether each configuration was supported. However, not all platforms could support all configurations. The local browser only runs at the display resolution of the PDA, 480 × 680 or less for the X51v and the X5. RDP only runs at 640 × 480. Neither platform could support 1024 × 768 display resolution. ICA only ran on the X5 and could not run on the X51v because it did not work on Windows Mobile 5. Figure 3 shows the average latency per web page for each platform. pTHINC provides the lowest average web browsing latency on both PDAs. On the X5, pTHINC performs up to 70 times better than other thin-client systems and 8 times better than the local browser. On the X51v, pTHINC performs up to 80 times better than other thin-client systems and 7 times better than the native browser. 
In fact, all of the thin clients except VNC outperform the local PDA browser, demonstrating the performance benefits of the thin-client approach. Usability studies have shown that web pages should take less than one second to download for the user to experience an uninterrupted web browsing experience [14]. The measurements show that only the thin clients deliver subsecond web page latencies. In contrast, the local browser requires more than 3 seconds on average per web page. The local browser performs worse since it needs to run a more limited web browser to process the HTML, JavaScript, and do all the rendering using the limited capabilities of the PDA. The thin clients can take advantage of faster server hardware and a highly tuned web browser to process the web content much faster. Figure 3 shows that RDP is the next fastest platform after pTHINC. However, RDP is only able to run at a fixed resolution of 640 × 480 and 8-bit color depth. Furthermore, RDP also clips the display to the size of the PDA screen so that it does not need to send updates that are not visible on the PDA screen. This provides a performance benefit assuming the remaining web content is not viewed, but degrades performance when a user scrolls around the display to view other web content. RDP achieves its performance with significantly lower display quality compared to the other thin clients and with additional display clipping not used by other systems. As a result, RDP performance alone does not provide a complete comparison with the other platforms. In contrast, pTHINC provides the fastest performance while at the same time providing equal or better display quality than the other systems. Since VNC and ICA provide similar display quality to pTHINC, these systems provide a more fair comparison of different thin-client approaches. ICA performs worse in part because it uses higher-level display primitives that require additional client processing costs. VNC performs worse in part because it loses display data due to its client-pull delivery mechanism and because of the client processing costs in decompressing raw pixel primitives. In both cases, their performance was limited in part because their PDA clients were unable to keep up with the rate at which web pages were being displayed. Figure 3 also shows measurements for those thin clients that support resizing the display to fit the PDA screen, namely ICA and pTHINC. Resizing requires additional processing, which results in slower average web page latencies. The measurements show that the additional delay incurred by ICA when resizing versus not resizing is much more substantial than for pTHINC. ICA performs resizing on the slower PDA client. In contrast, pTHINC leverage the more powerful server to do resizing, reducing the performance difference between resizing and not resizing. Unlike ICA, pTHINC is able to provide subsecond web page download latencies in both cases. Figure 4 shows the data transferred in KB per page when running the slow-motion version of the tests. All of the platforms have modest data transfer requirements of roughly 100 KB per page or less. This is well within the bandwidth capacity of Wi-Fi networks. The measurements show that the local browser does not transfer the least amount of data. This is surprising as HTML is often considered to be a very compact representation of content. 
Instead, RDP is the most bandwidth efficient platform, largely as a result of using only 8-bit color depth and screen clipping so that it does not transfer the entire web page to the client. pTHINC overall has the largest data requirements, slightly more than VNC. This is largely a result of the current pTHINC prototype's lack of native support for 16-bit color data in the wire protocol. However, this result also highlights pTHINC's performance as it is faster than all other systems even while transferring more data. Furthermore, as newer PDA models support full 24-bit color, these results indicate that pTHINC will continue to provide good web browsing performance. Since display usability and quality are as important as performance, Figures 5 to 8 compare screenshots of the different thin clients when displaying a web page, in this case from the popular BBC news website. Except for ICA, all of the screenshots were taken on the X51v in landscape mode Figure 3: Browsing Benchmark: Average Page Latency Figure 4: Browsing Benchmark: Average Page Data Figure 5: Browser Screenshot: RDP 640x480 Figure 6: Browser Screenshot: VNC 1024x768 Figure 7: Browser Screenshot: ICA Resized 1024x768 Figure 8: Browser Screenshot: pTHINC Resized 1024x768 using the maximum display resolution settings for each platform given in Table 2. The ICA screenshot was taken on the X5 since ICA does not run on the X51v. While the screenshots lack the visual fidelity of the actual device display, several observations can be made. Figure 5 shows that RDP does not support fullscreen mode and wastes lots of screen space for controls and UI elements, requiring the user to scroll around in order to access the full contents of the web browsing session. Figure 6 shows that VNC makes better use of the screen space and provides better display quality, but still forces the user to scroll around to view the web page due to its lack of resizing support. Figure 7 shows ICA's ability to display the full web page given its resizing support, but that its lack of landscape capability and poorer resize algorithm significantly compromise display quality. In contrast, Figure 8 shows pTHINC using resizing to provide a high quality fullscreen display of the full width of the web page. pTHINC maximizes the entire viewing region by moving all controls to the PDA buttons. In addition, pTHINC leverages the server computational power to use a high quality resizing algorithm to resize the display to fit the PDA screen without significant overhead. Figures 9 and 10 show the results of running the video playback benchmark. For each platform except ICA, we show results for an X5 and X51v configuration. ICA could not run on the X51v as noted earlier. The measurements were done using settings that reflected the environment a user would have to access a web session from both a desktop computer and a PDA. As such, a 1024 × 768 server display resolution was used whenever possible and the video was shown at fullscreen. RDP was limited to 640 × 480 display resolution as noted earlier. Since viewing the entire video display is the only really usable option, we resized the display to fit the PDA screen for those platforms that supported this feature, namely ICA and pTHINC. Figure 9 shows the video quality for each platform. pTHINC is the only thin client able to provide perfect video playback quality, similar to the native PDA video player. All of the other thin clients deliver very poor video quality. 
With the exception of RDP on the X51v which provided unacceptable 35% video quality, none of the other systems were even able to achieve 10% video quality. VNC and ICA have the worst quality at 8% on the X5 device. pTHINC's native video support enables superior video performance, while other thin clients suffer from their inability to distinguish video from normal display updates. They attempt to apply ineffective and expensive compression algorithms on the video data and are unable to keep up with the stream of updates generated, resulting in dropped frames or long playback times. VNC suffers further from its client-pull update model because video frames are generated faster than the rate at which the client can process and send requests to the server to obtain the next display update. Figure 10 shows the total data transferred during video playback for each system. The native player is the most bandwidth efficient platform, sending less than 6 MB of data, which corresponds to about 1.2 Mbps of bandwidth. pTHINC's 100% video quality requires about 25 MB of data which corresponds to a bandwidth usage of less than 6 Mbps. While the other thin clients send less data than THINC, they do so because they are dropping video data, resulting in degraded video quality. Figures 11 to 14 compare screenshots of the different thin clients when displaying the video clip. Except for ICA, all of the screenshots were taken on the X51v in landscape mode using the maximum display resolution settings for each platform given in Table 2. The ICA screenshot was taken on the X5 since ICA does not run on the X51v. Figures 11 and 12 show that RDP and VNC are unable to display the entire video frame on the PDA screen. RDP wastes screen space for UI elements and VNC only shows the top corner of the video frame on the screen. Figure 13 shows that ICA provides resizing to display the entire video frame, but did not proportionally resize the video data, resulting in strange display artifacts. In contrast, Figure 14 shows pTHINC using resizing to provide a high quality fullscreen display of the entire video frame. pTHINC provides visually more appealing video display than RDP, VNC, or ICA. 5. RELATED WORK Several studies have examined the web browsing performance of thin-client computing [13, 19, 10]. The ability for thin clients to improve web browsing performance on wireless PDAs was first quantitatively demonstrated in a previous study by one of the authors [10]. This study demonstrated that thin clients can provide both faster web browsing performance and greater web browsing functionality. The study considered a wide range of web content including content from medical information systems. Our work builds on this previous study and consider important issues such as how usable existing thin clients are in PDA environments, the trade-offs between thin-client usability and performance, performance across different PDA devices, and the performance of thin clients on common web-related applications such as video. Many thin clients have been developed and some have PDA clients, including Microsoft's Remote Desktop [3], Citrix MetraFrame XP [2], Virtual Network Computing [16, 12], GoToMyPC [5], and Tarantella [18]. These systems were first designed for desktop computing and retrofitted for PDAs. Unlike pTHINC, they do not address key system architecture and usability issues important for PDAs. 
This limits their display quality, system performance, available screen space, and overall usability on PDAs. pTHINC builds on previous work by two of the authors on THINC [1], extending the server architecture and introducing a client interface and usage model to efficiently support PDA devices for mobile web applications. Other approaches to improve the performance of mobile wireless web browsing have focused on using transcoding and caching proxies in conjunction with the fat client model [11, 9, 4, 8]. They work by pushing functionality to external proxies, and using specialized browsing applications on the PDA device that communicate with the proxy. Our thin-client approach differs fundamentally from these fat-client approaches by pushing all web browser logic to the server, leveraging existing investments in desktop web browsers and helper applications to work seamlessly with production systems without any additional proxy configuration or web browser modifications. With the emergence of web browsing on small display devices, web sites have been redesigned using mechanisms like WAP, and specialized native web browsers have been developed to tailor to the needs of these devices. Recently, Opera has developed the Opera Mini [15] web browser, which uses an approach similar to the thin-client model to provide access across a number of mobile devices that would normally be incapable of running a web browser. Instead of requiring the device to process web pages, it uses a remote server to pre-process the page before sending it to the phone. 6. CONCLUSIONS We have introduced pTHINC, a thin-client architecture for wireless PDAs. pTHINC provides key architectural and usability mechanisms such as server-side screen resizing, client-side screen rotation using simple copy techniques, YUV video support, and maximizing screen space for display updates and leveraging existing PDA control buttons for UI elements. pTHINC transparently supports traditional desktop browsers and their helper applications on PDA devices and desktop machines, providing mobile users with ubiquitous access to a consistent, personalized, and full-featured web environment across heterogeneous devices. We have implemented pTHINC and measured its performance on web applications compared to existing thin-client systems and native web applications. Our results on multiple mobile wireless devices demonstrate that pTHINC delivers web browsing performance up to 80 times better than existing thin-client systems, and 8 times better than a native PDA browser. In addition, pTHINC is the only PDA thin client that transparently provides full-screen, full frame rate video playback, making web sites with multimedia content accessible to mobile web users.
Figure 9: Video Benchmark: Fullscreen Video Quality
Figure 10: Video Benchmark: Fullscreen Video Data
Figure 11: Video Screenshot: RDP 640x480
Figure 12: Video Screenshot: VNC 1024x768
Figure 13: Video Screenshot: ICA Resized 1024x768
Figure 14: Video Screenshot: pTHINC Resized 1024x768
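As a rough illustration of the "client-side screen rotation using simple copy techniques" mentioned in the conclusions above, the following sketch rotates a row-major framebuffer by 90 degrees using nothing but index arithmetic and element copies. It is not pTHINC's actual implementation; the packed one-value-per-pixel layout and the function name are assumptions made for this example.

```python
# Illustrative sketch only: rotate a row-major framebuffer 90 degrees clockwise
# using simple copies, in the spirit of the client-side rotation described above.
# Not pTHINC's code; the one-value-per-pixel layout is assumed for this example.

def rotate90_clockwise(fb, width, height):
    """Return a new framebuffer of size height x width (rotated 90 degrees)."""
    rotated = [0] * (width * height)
    for y in range(height):
        for x in range(width):
            # Pixel (x, y) in the source lands at column (height - 1 - y) of
            # row x in the target, whose rows have length `height` after rotation.
            rotated[x * height + (height - 1 - y)] = fb[y * width + x]
    return rotated

if __name__ == "__main__":
    # Tiny 3x2 "screen" with distinct pixel values to make the copies visible.
    fb = [1, 2, 3,
          4, 5, 6]
    print(rotate90_clockwise(fb, width=3, height=2))  # [4, 1, 5, 2, 6, 3]
```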
pTHINC: A Thin-Client Architecture for Mobile Wireless Web ABSTRACT Although web applications are gaining popularity on mobile wireless PDAs, web browsers on these systems can be quite slow and often lack adequate functionality to access many web sites. We have developed pTHINC, a PDA thinclient solution that leverages more powerful servers to run full-function web browsers and other application logic, then sends simple screen updates to the PDA for display. pTHINC uses server-side screen scaling to provide high-fidelity display and seamless mobility across a broad range of different clients and screen sizes, including both portrait and landscape viewing modes. pTHINC also leverages existing PDA control buttons to improve system usability and maximize available screen resolution for application display. We have implemented pTHINC on Windows Mobile and evaluated its performance on mobile wireless devices. Our results compared to local PDA web browsers and other thin-client approaches demonstrate that pTHINC provides superior web browsing performance and is the only PDA thin client that effectively supports crucial browser helper applications such as video playback. Categories and Subject Descriptors: C. 2.4 ComputerCommunication-Networks: Distributed Systems--client / server 1. INTRODUCTION The increasing ubiquity of wireless networks and decreasing cost of hardware is fueling a proliferation of mobile wireless handheld devices, both as standalone wireless Personal Digital Assistants (PDA) and popular integrated PDA/cell phone devices. These devices are enabling new forms of mobile computing and communication. Service providers are leveraging these devices to deliver pervasive web access, and mobile web users already often use these devices to access web-enabled information such as news, email, and localized travel guides and maps. It is likely that within a few years, most of the devices accessing the web will be mobile. Users typically access web content by running a web browser and associated helper applications locally on the PDA. Although native web browsers exist for PDAs, they deliver subpar performance and have a much smaller feature set and more limited functionality than their desktop computing counterparts [10]. As a result, PDA web browsers are often not able to display web content from web sites that leverage more advanced web technologies to deliver a richer web experience. This fundamental problem arises for two reasons. First, because PDAs have a completely different hardware/software environment from traditional desktop computers, web applications need to be rewritten and customized for PDAs if at all possible, duplicating development costs. Because the desktop application market is larger and more mature, most development effort generally ends up being spent on desktop applications, resulting in greater functionality and performance than their PDA counterparts. Second, PDAs have a more resource constrained environment than traditional desktop computers to provide a smaller form factor and longer battery life. Desktop web browsers are large, complex applications that are unable to run on a PDA. Instead, developers are forced to significantly strip down these web browsers to provide a usable PDA web browser, thereby crippling PDA browser functionality. Thin-client computing provides an alternative approach for enabling pervasive web access from handheld devices. 
A thin-client computing system consists of a server and a client that communicate over a network using a remote display protocol. The protocol enables graphical displays to be virtualized and served across a network to a client device, while application logic is executed on the server. Using the remote display protocol, the client transmits user input to the server, and the server returns screen updates of the applications from the server to the client. Using a thin-client model for mobile handheld devices, PDAs can become simple stateless clients that leverage the remote server capabilities to execute web browsers and other helper applications. The thin-client model provides several important benefits for mobile wireless web. First, standard desktop web applications can be used to deliver web content to PDAs without rewriting or adapting applications to execute on a PDA, reducing development costs and leveraging existing software investments. Second, complex web applications can be executed on powerful servers instead of running stripped down versions on more resource constrained PDAs, providing greater functionality and better performance [10]. Third, web applications can take advantage of servers with faster networks and better connectivity, further boosting application performance. Fourth, PDAs can be even simpler devices since they do not need to perform complex application logic, potentially reducing energy consumption and extend ing battery life. Finally, PDA thin clients can be essentially stateless appliances that do not need to be backed up or restored, require almost no maintenance or upgrades, and do not store any sensitive data that can be lost or stolen. This model provides a viable avenue for medical organizations to comply with HIPAA regulations [6] while embracing mobile handhelds in their day to day operations. Despite these potential advantages, thin clients have been unable to provide the full range of these benefits in delivering web applications to mobile handheld devices. Existing thin clients were not designed for PDAs and do not account for important usability issues in the context of small form factor devices, resulting in difficulty in navigating displayed web content. Furthermore, existing thin clients are ineffective at providing seamless mobility across the heterogeneous mix of device display sizes and resolutions. While existing thin clients can already provide faster performance than native PDA web browsers in delivering HTML web content, they do not effectively support more display-intensive web helper applications such as multimedia video, which is increasingly an integral part of available web content. To harness the full potential of thin-client computing in providing mobile wireless web on PDAs, we have developed pTHINC (PDA THin-client InterNet Computing). pTHINC builds on our previous work on THINC [1] to provide a thinclient architecture for mobile handheld devices. pTHINC virtualizes and resizes the display on the server to efficiently deliver high-fidelity screen updates to a broad range of different clients, screen sizes, and screen orientations, including both portrait and landscape viewing modes. This enables pTHINC to provide the same persistent web session across different client devices. For example, pTHINC can provide the same web browsing session appropriately scaled for display on a desktop computer and a PDA so that the same cookies, bookmarks, and other meta-data are continuously available on both machines simultaneously. 
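To make the remote display model just described more concrete, here is a minimal client-side sketch in which user input travels up to the server and a screen update comes back down. The length-prefixed framing, the message contents, and the host name are hypothetical and purely illustrative; they are not the THINC or pTHINC wire protocol.

```python
# Minimal sketch of a thin-client event/update loop: input events go up to the
# server, display updates come back down. The framing and message formats here
# are invented for illustration and are not the THINC/pTHINC wire protocol.
import socket
import struct

def send_msg(sock, payload: bytes) -> None:
    # Hypothetical framing: 4-byte big-endian length followed by the payload.
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exact(sock, n: int) -> bytes:
    data = b""
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError("server closed the connection")
        data += chunk
    return data

def recv_msg(sock) -> bytes:
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length)

def draw_update(update: bytes) -> None:
    # Stand-in for the client's local drawing code.
    print("received %d bytes of display update" % len(update))

def run_client(host="thin-server.example", port=9999):
    with socket.create_connection((host, port)) as sock:
        # Forward a (hypothetical) pen-down event at screen coordinates (10, 20).
        send_msg(sock, b"INPUT pen-down 10 20")
        # Receive one display update and hand it to the local drawing code.
        draw_update(recv_msg(sock))

if __name__ == "__main__":
    run_client()
```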
pTHINC's virtual display approach leverages semantic information available in display commands, and client-side video hardware to provide more efficient remote display mechanisms that are crucial for supporting more display-intensive web applications. Given limited display resolution on PDAs, pTHINC maximizes the use of screen real estate for remote display by moving control functionality from the screen to readily available PDA control buttons, improving system usability. We have implemented pTHINC on Windows Mobile and demonstrated that it works transparently with existing applications, window systems, and operating systems, and does not require modifying, recompiling, or relinking existing software. We have quantitatively evaluated pTHINC against local PDA web browsers and other thin-client approaches on Pocket PC devices. Our experimental results demonstrate that pTHINC provides superior web browsing performance and is the only PDA thin client that effectively supports crucial browser helper applications such as video playback. This paper presents the design and implementation of pTHINC. Section 2 describes the overall usage model and usability characteristics of pTHINC. Section 3 presents the design and system architecture of pTHINC. Section 4 presents experimental results measuring the performance of pTHINC on web applications and comparing it against native PDA browsers and other popular PDA thin-client systems. Section 5 discusses related work. Finally, we present some concluding remarks. 2. PTHINC USAGE MODEL 3. PTHINC SYSTEM ARCHITECTURE 3.1 Display Management 3.2 Video Playback 4. EXPERIMENTAL RESULTS 4.1 Experimental Testbed 4.2 Application Benchmarks 4.3 Measurements 5. RELATED WORK Several studies have examined the web browsing performance of thin-client computing [13, 19, 10]. The ability for thin clients to improve web browsing performance on wireless PDAs was first quantitatively demonstrated in a previous study by one of the authors [10]. This study demonstrated that thin clients can provide both faster web browsing performance and greater web browsing functionality. The study considered a wide range of web content including content from medical information systems. Our work builds on this previous study and consider important issues such as how usable existing thin clients are in PDA environments, the trade-offs between thin-client usability and performance, performance across different PDA devices, and the performance of thin clients on common web-related applications such as video. Many thin clients have been developed and some have PDA clients, including Microsoft's Remote Desktop [3], Citrix MetraFrame XP [2], Virtual Network Computing [16, 12], GoToMyPC [5], and Tarantella [18]. These systems were first designed for desktop computing and retrofitted for PDAs. Unlike pTHINC, they do not address key system architecture and usability issues important for PDAs. Figure 10: Video Benchmark: Fullscreen Video Data This limits their display quality, system performance, available screen space, and overall usability on PDAs. pTHINC builds on previous work by two of the authors on THINC [1], extending the server architecture and introducing a client interface and usage model to efficiently support PDA devices for mobile web applications. Other approaches to improve the performance of mobile wireless web browsing have focused on using transcoding and caching proxies in conjunction with the fat client model [11, 9, 4, 8]. 
They work by pushing functionality to external proxies, and using specialized browsing applications on the PDA device that communicate with the proxy. Our thinclient approach differs fundamentally from these fat-client approaches by pushing all web browser logic to the server, leveraging existing investments in desktop web browsers and helper applications to work seamlessly with production systems without any additional proxy configuration or web browser modifications. With the emergence of web browsing on small display devices, web sites have been redesigned using mechanisms like WAP and specialized native web browsers have been developed to tailor the needs of these devices. Recently, Opera has developed the Opera Mini [15] web browser, which uses an approach similar to the thin-client model to provide access across a number of mobile devices that would normally be incapable of running a web browser. Instead of requiring the device to process web pages, it uses a remote server to pre-process the page before sending it to the phone. 6. CONCLUSIONS We have introduced pTHINC, a thin-client architecture for wireless PDAs. pTHINC provides key architectural and usability mechanisms such as server-side screen resizing, clientside screen rotation using simple copy techniques, YUV video support, and maximizing screen space for display updates and leveraging existing PDA control buttons for UI elements. pTHINC transparently supports traditional desktop browsers and their helper applications on PDA devices and desktop machines, providing mobile users with ubiquitous access to a consistent, personalized, and full-featured web environment across heterogeneous devices. We have implemented pTHINC and measured its performance on web applications compared to existing thin-client systems and native web applications. Our results on multiple mobile wireless devices demonstrate that pTHINC delivers web browsing performance up to 80 times better than existing thin-client systems, and 8 times better than a native PDA browser. In addition, pTHINC is the only PDA thin client Figure 9: Video Benchmark: Fullscreen Video Quality Figure 11: Video Screenshot: RDP 640x480 Figure 12: Video Screenshot: VNC 1024x768 Figure 14: Video Screenshot: pTHINC Resized 1024x768 Figure 13: Video Screenshot: ICA Resized 1024x768 that transparently provides full-screen, full frame rate video playback, making web sites with multimedia content accessible to mobile web users.
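The video figures reported for this benchmark (under 6 MB at about 1.2 Mbps for the native player, and about 25 MB at under 6 Mbps for pTHINC) are consistent with a clip length of roughly 40 seconds. The short sketch below simply redoes that unit conversion; the clip duration is inferred from the native player numbers rather than stated in the text.

```python
# Back-of-the-envelope check of the video bandwidth figures quoted above.
# The ~40 s clip duration is inferred from "less than 6 MB ... about 1.2 Mbps"
# for the native player; it is an assumption, not a number given in the text.

def mbps(megabytes: float, seconds: float) -> float:
    """Convert a transfer size in MB over a duration in seconds to Mbps."""
    return megabytes * 8.0 / seconds

clip_seconds = 6.0 * 8.0 / 1.2  # ~40 s implied by the native player numbers
print("native player: %.1f Mbps" % mbps(6.0, clip_seconds))   # ~1.2 Mbps
print("pTHINC:        %.1f Mbps" % mbps(25.0, clip_seconds))  # ~5 Mbps, under 6 Mbps
```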
pTHINC: A Thin-Client Architecture for Mobile Wireless Web ABSTRACT Although web applications are gaining popularity on mobile wireless PDAs, web browsers on these systems can be quite slow and often lack adequate functionality to access many web sites. We have developed pTHINC, a PDA thinclient solution that leverages more powerful servers to run full-function web browsers and other application logic, then sends simple screen updates to the PDA for display. pTHINC uses server-side screen scaling to provide high-fidelity display and seamless mobility across a broad range of different clients and screen sizes, including both portrait and landscape viewing modes. pTHINC also leverages existing PDA control buttons to improve system usability and maximize available screen resolution for application display. We have implemented pTHINC on Windows Mobile and evaluated its performance on mobile wireless devices. Our results compared to local PDA web browsers and other thin-client approaches demonstrate that pTHINC provides superior web browsing performance and is the only PDA thin client that effectively supports crucial browser helper applications such as video playback. Categories and Subject Descriptors: C. 2.4 ComputerCommunication-Networks: Distributed Systems--client / server 1. INTRODUCTION These devices are enabling new forms of mobile computing and communication. It is likely that within a few years, most of the devices accessing the web will be mobile. Users typically access web content by running a web browser and associated helper applications locally on the PDA. Although native web browsers exist for PDAs, they deliver subpar performance and have a much smaller feature set and more limited functionality than their desktop computing counterparts [10]. As a result, PDA web browsers are often not able to display web content from web sites that leverage more advanced web technologies to deliver a richer web experience. Second, PDAs have a more resource constrained environment than traditional desktop computers to provide a smaller form factor and longer battery life. Desktop web browsers are large, complex applications that are unable to run on a PDA. Instead, developers are forced to significantly strip down these web browsers to provide a usable PDA web browser, thereby crippling PDA browser functionality. Thin-client computing provides an alternative approach for enabling pervasive web access from handheld devices. A thin-client computing system consists of a server and a client that communicate over a network using a remote display protocol. The protocol enables graphical displays to be virtualized and served across a network to a client device, while application logic is executed on the server. Using the remote display protocol, the client transmits user input to the server, and the server returns screen updates of the applications from the server to the client. Using a thin-client model for mobile handheld devices, PDAs can become simple stateless clients that leverage the remote server capabilities to execute web browsers and other helper applications. The thin-client model provides several important benefits for mobile wireless web. First, standard desktop web applications can be used to deliver web content to PDAs without rewriting or adapting applications to execute on a PDA, reducing development costs and leveraging existing software investments. 
Second, complex web applications can be executed on powerful servers instead of running stripped down versions on more resource constrained PDAs, providing greater functionality and better performance [10]. Third, web applications can take advantage of servers with faster networks and better connectivity, further boosting application performance. Fourth, PDAs can be even simpler devices since they do not need to perform complex application logic, potentially reducing energy consumption and extend ing battery life. Despite these potential advantages, thin clients have been unable to provide the full range of these benefits in delivering web applications to mobile handheld devices. Existing thin clients were not designed for PDAs and do not account for important usability issues in the context of small form factor devices, resulting in difficulty in navigating displayed web content. Furthermore, existing thin clients are ineffective at providing seamless mobility across the heterogeneous mix of device display sizes and resolutions. While existing thin clients can already provide faster performance than native PDA web browsers in delivering HTML web content, they do not effectively support more display-intensive web helper applications such as multimedia video, which is increasingly an integral part of available web content. To harness the full potential of thin-client computing in providing mobile wireless web on PDAs, we have developed pTHINC (PDA THin-client InterNet Computing). pTHINC builds on our previous work on THINC [1] to provide a thinclient architecture for mobile handheld devices. pTHINC virtualizes and resizes the display on the server to efficiently deliver high-fidelity screen updates to a broad range of different clients, screen sizes, and screen orientations, including both portrait and landscape viewing modes. This enables pTHINC to provide the same persistent web session across different client devices. pTHINC's virtual display approach leverages semantic information available in display commands, and client-side video hardware to provide more efficient remote display mechanisms that are crucial for supporting more display-intensive web applications. Given limited display resolution on PDAs, pTHINC maximizes the use of screen real estate for remote display by moving control functionality from the screen to readily available PDA control buttons, improving system usability. We have implemented pTHINC on Windows Mobile and demonstrated that it works transparently with existing applications, window systems, and operating systems, and does not require modifying, recompiling, or relinking existing software. We have quantitatively evaluated pTHINC against local PDA web browsers and other thin-client approaches on Pocket PC devices. Our experimental results demonstrate that pTHINC provides superior web browsing performance and is the only PDA thin client that effectively supports crucial browser helper applications such as video playback. This paper presents the design and implementation of pTHINC. Section 2 describes the overall usage model and usability characteristics of pTHINC. Section 3 presents the design and system architecture of pTHINC. Section 4 presents experimental results measuring the performance of pTHINC on web applications and comparing it against native PDA browsers and other popular PDA thin-client systems. Section 5 discusses related work. Finally, we present some concluding remarks. 5. 
RELATED WORK Several studies have examined the web browsing performance of thin-client computing [13, 19, 10]. The ability for thin clients to improve web browsing performance on wireless PDAs was first quantitatively demonstrated in a previous study by one of the authors [10]. This study demonstrated that thin clients can provide both faster web browsing performance and greater web browsing functionality. The study considered a wide range of web content including content from medical information systems. Our work builds on this previous study and consider important issues such as how usable existing thin clients are in PDA environments, the trade-offs between thin-client usability and performance, performance across different PDA devices, and the performance of thin clients on common web-related applications such as video. These systems were first designed for desktop computing and retrofitted for PDAs. Unlike pTHINC, they do not address key system architecture and usability issues important for PDAs. Figure 10: Video Benchmark: Fullscreen Video Data This limits their display quality, system performance, available screen space, and overall usability on PDAs. pTHINC builds on previous work by two of the authors on THINC [1], extending the server architecture and introducing a client interface and usage model to efficiently support PDA devices for mobile web applications. Other approaches to improve the performance of mobile wireless web browsing have focused on using transcoding and caching proxies in conjunction with the fat client model [11, 9, 4, 8]. They work by pushing functionality to external proxies, and using specialized browsing applications on the PDA device that communicate with the proxy. With the emergence of web browsing on small display devices, web sites have been redesigned using mechanisms like WAP and specialized native web browsers have been developed to tailor the needs of these devices. Recently, Opera has developed the Opera Mini [15] web browser, which uses an approach similar to the thin-client model to provide access across a number of mobile devices that would normally be incapable of running a web browser. Instead of requiring the device to process web pages, it uses a remote server to pre-process the page before sending it to the phone. 6. CONCLUSIONS We have introduced pTHINC, a thin-client architecture for wireless PDAs. pTHINC provides key architectural and usability mechanisms such as server-side screen resizing, clientside screen rotation using simple copy techniques, YUV video support, and maximizing screen space for display updates and leveraging existing PDA control buttons for UI elements. pTHINC transparently supports traditional desktop browsers and their helper applications on PDA devices and desktop machines, providing mobile users with ubiquitous access to a consistent, personalized, and full-featured web environment across heterogeneous devices. We have implemented pTHINC and measured its performance on web applications compared to existing thin-client systems and native web applications. Our results on multiple mobile wireless devices demonstrate that pTHINC delivers web browsing performance up to 80 times better than existing thin-client systems, and 8 times better than a native PDA browser. In addition, pTHINC is the only PDA thin client Figure 9: Video Benchmark: Fullscreen Video Quality that transparently provides full-screen, full frame rate video playback, making web sites with multimedia content accessible to mobile web users.
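Server-side screen resizing is one of the key mechanisms listed above. As a rough sketch of the idea - scale each update on the powerful server so the PDA client only has to copy pixels - the code below downscales a framebuffer with nearest-neighbor sampling. pTHINC itself is described as using a higher quality resize algorithm, so this is only an illustrative stand-in, not its implementation.

```python
# Illustration of server-side resizing: the server scales each display update to
# the client's resolution so the PDA only has to copy pixels to the screen.
# Nearest-neighbor sampling is used for brevity; the paper notes pTHINC uses a
# higher quality resize algorithm, so treat this as a stand-in, not its code.

def resize_nearest(fb, src_w, src_h, dst_w, dst_h):
    """Scale a row-major framebuffer from src_w x src_h to dst_w x dst_h."""
    out = [0] * (dst_w * dst_h)
    for y in range(dst_h):
        src_y = y * src_h // dst_h
        for x in range(dst_w):
            src_x = x * src_w // dst_w
            out[y * dst_w + x] = fb[src_y * src_w + src_x]
    return out

if __name__ == "__main__":
    # Scale a 1024x768 desktop-sized update down to a 640x480 PDA screen.
    desktop = [0] * (1024 * 768)
    pda = resize_nearest(desktop, 1024, 768, 640, 480)
    print(len(pda))  # 307200 pixels, i.e. 640*480
```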
J-65
Privacy in Electronic Commerce and the Economics of Immediate Gratification
Dichotomies between privacy attitudes and behavior have been noted in the literature but not yet fully explained. We apply lessons from the research on behavioral economics to understand the individual decision making process with respect to privacy in electronic commerce. We show that it is unrealistic to expect individual rationality in this context. Models of self-control problems and immediate gratification offer more realistic descriptions of the decision process and are more consistent with currently available data. In particular, we show why individuals who may genuinely want to protect their privacy might not do so because of psychological distortions well documented in the behavioral literature; we show that these distortions may affect not only 'naïve' individuals but also 'sophisticated' ones; and we prove that this may occur also when individuals perceive the risks from not protecting their privacy as significant.
[ "privaci", "electron commerc", "immedi gratif", "individu decis make process", "ration", "self-control problem", "psycholog distort", "anonym", "person inform protect", "psycholog inconsist", "time-inconsist prefer", "financi privaci", "privaci sensit decis", "privaci enhanc technolog", "hyperbol discount" ]
[ "P", "P", "P", "P", "P", "P", "P", "U", "M", "M", "U", "M", "M", "M", "U" ]
Privacy in Electronic Commerce and the Economics of Immediate Gratification Alessandro Acquisti H. John Heinz III School of Public Policy and Management Carnegie Mellon University acquisti@andrew.cmu.edu ABSTRACT Dichotomies between privacy attitudes and behavior have been noted in the literature but not yet fully explained. We apply lessons from the research on behavioral economics to understand the individual decision making process with respect to privacy in electronic commerce. We show that it is unrealistic to expect individual rationality in this context. Models of self-control problems and immediate gratification offer more realistic descriptions of the decision process and are more consistent with currently available data. In particular, we show why individuals who may genuinely want to protect their privacy might not do so because of psychological distortions well documented in the behavioral literature; we show that these distortions may affect not only `na¨ıve'' individuals but also `sophisticated'' ones; and we prove that this may occur also when individuals perceive the risks from not protecting their privacy as significant. Categories and Subject Descriptors J.4 [Social and Behavioral Sciences]: Economics; K.4.1 [Public Policy Issues]: Privacy General Terms Economics, Security, Human Factors 1. PRIVACY AND ELECTRONIC COMMERCE Privacy remains an important issue for electronic commerce. A PriceWaterhouseCoopers study in 2000 showed that nearly two thirds of the consumers surveyed would shop more online if they knew retail sites would not do anything with their personal information [15]. A Federal Trade Commission study reported in 2000 that sixty-seven percent of consumers were very concerned about the privacy of the personal information provided on-line [11]. More recently, a February 2002 Harris Interactive survey found that the three biggest consumer concerns in the area of on-line personal information security were: companies trading personal data without permission, the consequences of insecure transactions, and theft of personal data [19]. According to a Jupiter Research study in 2002, $24.5 billion in on-line sales will be lost by 2006 - up from $5.5 billion in 2001. Online retail sales would be approximately twenty-four percent higher in 2006 if consumers'' fears about privacy and security were addressed effectively [21]. Although the media hype has somewhat diminished, risks and costs have notas evidenced by the increasing volumes of electronic spam and identity theft [16]. Surveys in this field, however, as well as experiments and anecdotal evidence, have also painted a different picture. [36, 10, 18, 21] have found evidence that even privacy concerned individuals are willing to trade-off privacy for convenience, or bargain the release of very personal information in exchange for relatively small rewards. The failure of several on-line services aimed at providing anonymity for Internet users [6] offers additional indirect evidence of the reluctance by most individuals to spend any effort in protecting their personal information. The dichotomy between privacy attitudes and behavior has been highlighted in the literature. Preliminary interpretations of this phenomenon have been provided [2, 38, 33, 40]. Still missing are: an explanation grounded in economic or psychological theories; an empirical validation of the proposed explanation; and, of course, the answer to the most recurring question: should people bother at all about privacy? 
In this paper we focus on the first question: we formally analyze the individual decision making process with respect to privacy and its possible shortcomings. We focus on individual (mis)conceptions about their handling of risks they face when revealing private information. We do not address the issue of whether people should actually protect themselves. We will comment on that in Section 5, where we will also discuss strategies to empirically validate our theory. We apply lessons from behavioral economics. Traditional economics postulates that people are forward-looking and bayesian updaters: they take into account how current behavior will influence their future well-being and preferences. For example, [5] study rational models of addiction. This approach can be compared to those who see in the decision 21 not to protect one``s privacy a rational choice given the (supposedly) low risks at stake. However, developments in the area of behavioral economics have highlighted various forms of psychological inconsistencies (self-control problems, hyperbolic discounting, present-biases, etc.) that clash with the fully rational view of the economic agent. In this paper we draw from these developments to reach the following conclusions: • We show that it is unlikely that individuals can act rationally in the economic sense when facing privacy sensitive decisions. • We show that alternative models of personal behavior and time-inconsistent preferences are compatible with the dichotomy between attitudes and behavior and can better match current data. For example, they can explain the results presented by [36] at the ACM EC ``01 conference. In their experiment, self-proclaimed privacy advocates were found to be willing to reveal varying amounts of personal information in exchange for small rewards. • In particular, we show that individuals may have a tendency to under-protect themselves against the privacy risks they perceive, and over-provide personal information even when wary of (perceived) risks involved. • We show that the magnitude of the perceived costs of privacy under certain conditions will not act as deterrent against behavior the individual admits is risky. • We show, following similar studies in the economics of immediate gratification [31], that even `sophisticated'' individuals may under certain conditions become `privacy myopic.'' Our conclusion is that simply providing more information and awareness in a self-regulative environment is not sufficient to protect individual privacy. Improved technologies, by lowering costs of adoption and protection, certainly can help. However, more fundamental human behavioral responses must also be addressed if privacy ought to be protected. In the next section we propose a model of rational agents facing privacy sensitive decisions. In Section 3 we show the difficulties that hinder any model of privacy decision making based on full rationality. In Section 4 we show how behavioral models based on immediate gratification bias can better explain the attitudes-behavior dichotomy and match available data. In Section 5 we summarize and discuss our conclusions. 2. A MODEL OF RATIONALITY IN PRIVACY DECISION MAKING Some have used the dichotomy between privacy attitudes and behavior to claim that individuals are acting rationally when it comes to privacy. Under this view, individuals may accept small rewards for giving away information because they expect future damages to be even smaller (when discounted over time and with their probability of occurrence). 
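The rational-choice reading above reduces to a simple comparison: accept a small reward today if the probability-weighted, discounted future damage is expected to be even smaller. The sketch below spells out that comparison with purely hypothetical numbers; none of the values are estimates taken from the paper.

```python
# Hypothetical numbers illustrating the rational-choice reading above: accept a
# small reward now if the expected, discounted future damage is even smaller.
# None of these values come from the paper; they are placeholders.

reward_today = 2.00          # e.g. a small discount for handing over data
damage_if_misused = 100.00   # possible future loss from misuse of the data
prob_misuse = 0.01           # perceived probability that the loss materializes
delta = 0.9                  # per-period discount factor
years_until_damage = 3

expected_discounted_damage = damage_if_misused * prob_misuse * delta ** years_until_damage
print("expected discounted damage: %.2f" % expected_discounted_damage)  # 0.73
print("reveal data?", reward_today > expected_discounted_damage)        # True
```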
Here we want to investigate what underlying assumptions about personal behavior would support the hypothesis of full rationality in privacy decision making. Since [28, 37, 29], economists have been interested in privacy, but only recently formal models have started appearing [3, 7, 39, 40]. While these studies focus on market interactions between one agent and other parties, here we are interested in formalizing the decision process of the single individual. We want to see if individuals can be economically rational (forward-lookers, Bayesian updaters, utility maximizers, and so on) when it comes to protecting their own personal information. The concept of privacy, once intended as the right to be left alone [41], has transformed as our society has become more information oriented. In an information society the self is expressed, defined, and affected through and by information and information technology. The boundaries between private and public become blurred. Privacy has therefore become more a class of multifaceted interests than a single, unambiguous concept. Hence its value may be discussed (if not ascertained) only once its context has also been specified. This most often requires the study of a network of relations between a subject, certain information (related to the subject), other parties (that may have various linkages of interest or association with that information or that subject), and the context in which such linkages take place. To understand how a rational agent could navigate through those complex relations, in Equation 1 we abstract the decision process of an idealized rational economic agent who is facing privacy trade-offs when completing a certain transaction.

max_d U_t = δ( v_E(a), p_d(a) ) + γ( v_E(t), p_d(t) ) − c_t^d    (1)

In Equation 1, δ and γ are unspecified functional forms that describe weighted relations between expected payoffs from a set of events v and the associated probabilities of occurrence of those events p. More precisely, the utility U of completing a transaction t (the transaction being any action - not necessarily a monetary operation - possibly involving exposure of personal information) is equal to some function of the expected payoff v_E(a) from maintaining (or not) certain information private during that transaction, and the probability of maintaining [or not maintaining] that information private when using technology d, p_d(a) [1 − p_d(a)]; plus some function of the expected payoff v_E(t) from completing (or not completing) the transaction (possibly revealing personal information), and the probability of completing [or not completing] that transaction with a certain technology d, p_d(t) [1 − p_d(t)]; minus the cost of using the technology d: c_t^d (footnote 1: see also [1]). The technology d may or may not be privacy enhancing. Since the payoffs in Equation 1 can be either positive or negative, Equation 1 embodies the duality implicit in privacy issues: there are both costs and benefits gained from revealing or from protecting personal information, and the costs and benefits from completing a transaction, v_E(t), might be distinct from the costs and benefits from keeping the associated information private, v_E(a). For instance, revealing one's identity to an on-line bookstore may earn a discount. Vice versa, it may also cost a larger bill, because of price discrimination. Protecting one's financial privacy by not divulging credit card information on-line may protect against future losses and hassles related to identity theft. But it may
make one's on-line shopping experience more cumbersome, and therefore more expensive. The functional parameters δ and γ embody the variable weights and attitudes an individual may have towards keeping her information private (for example, her privacy sensitivity, or her belief that privacy is a right whose respect should be enforced by the government) and completing certain transactions. Note that v_E and p could refer to sets of payoffs and the associated probabilities of occurrence. The payoffs are themselves only expected because, regardless of the probability that the transaction is completed or the information remains private, they may depend on other sets of events and their associated probabilities. v_E() and p_d(), in other words, can be read as multi-variate parameters inside which are hidden several other variables, expectations, and functions because of the complexity of the privacy network described above. Over time, the probability of keeping certain information private, for instance, will not only depend on the chosen technology d but also on the efforts by other parties to appropriate that information. These efforts may be a function, among other things, of the expected value of that information to those parties. The probability of keeping information private will also depend on the environment in which the transaction is taking place. Similarly, the expected benefit from keeping information private will also be a collection over time of probability distributions dependent on several parameters. Imagine that the probability of keeping your financial transactions private is very high when you use a bank in Bermuda: still, the expected value from keeping your financial information confidential will depend on a number of other factors. A rational agent would, in theory, choose the technology d that maximizes her expected payoff in Equation 1. Maybe she would choose to complete the transaction under the protection of a privacy enhancing technology. Maybe she would complete the transaction without protection. Maybe she would not complete the transaction at all (d = 0). For example, the agent may consider the costs and benefits of sending an email through an anonymous MIX-net system [8] and compare those to the costs and benefits of sending that email through a conventional, non-anonymous channel. The magnitudes of the parameters in Equation 1 will change with the chosen technology. MIX-net systems may decrease the expected losses from privacy intrusions. Non-anonymous email systems may promise comparably higher reliability and (possibly) reduced costs of operations. 3. RATIONALITY AND PSYCHOLOGICAL DISTORTIONS IN PRIVACY Equation 1 is a comprehensive (while intentionally generic) road-map for navigation across privacy trade-offs that no human agent would actually be able to use. We hinted at some difficulties as we noted that several layers of complexities are hidden inside concepts such as the expected value of maintaining certain information private, and the probability of succeeding in doing so. More precisely, an agent will face three problems when comparing the trade-offs implicit in Equation 1: incomplete information about all parameters; bounded power to process all available information; no deviation from the rational path towards utility maximization. Those three problems are precisely the same issues real people have to deal with on an everyday basis as they face privacy-sensitive decisions. We discuss each problem in detail. 1. Incomplete information.
What information has the individual access to as she prepares to take privacy sensitive decisions? For instance, is she aware of privacy invasions and the associated risks? What is her knowledge of the existence and characteristics of protective technologies? Economic transactions are often characterized by incomplete or asymmetric information. Different parties involved may not have the same amount of information about the transaction and may be uncertain about some important aspects of it [4]. Incomplete information will affect almost all parameters in Equation 1, and in particular the estimation of costs and benefits. Costs and benefits associated with privacy protection and privacy intrusions are both monetary and immaterial. Monetary costs may for instance include adoption costs (which are probably fixed) and usage costs (which are variable) of protective technologies - if the individual decides to protect herself. Or they may include the financial costs associated to identity theft, if the individual``s information turns out not to have been adequately protected. Immaterial costs may include learning costs of a protective technology, switching costs between different applications, or social stigma when using anonymizing technologies, and many others. Likewise, the benefits from protecting (or not protecting) personal information may also be easy to quantify in monetary terms (the discount you receive for revealing personal data) or be intangible (the feeling of protection when you send encrypted emails). It is difficult for an individual to estimate all these values. Through information technology, privacy invasions can be ubiquitous and invisible. Many of the payoffs associated with privacy protection or intrusion may be discovered or ascertained only ex post through actual experience. Consider, for instance, the difficulties in using privacy and encrypting technologies described in [43]. In addition, the calculations implicit in Equation 1 depend on incomplete information about the probability distribution of future events. Some of those distributions may be predicted after comparable data - for example, the probability that a certain credit card transaction will result in fraud today could be calculated using existing statistics. The probability distributions of other events may be very difficult to estimate because the environment is too dynamicfor example, the probability of being subject to identity theft 5 years in the future because of certain data you are releasing now. And the distributions of some other events may be almost completely subjective - for example, the probability that a new and practical form of attack on a currently secure cryptosystem will expose all of your encrypted personal communications a few years from now. This leads to a related problem: bounded rationality. 2. Bounded rationality. Is the individual able to calculate all the parameters relevant to her choice? Or is she limited by bounded rationality? In our context, bounded rationality refers to the inability to calculate and compare the magnitudes of payoffs associated with various strategies the individual may choose in privacy-sensitive situations. It also refers to the inability to process all the stochastic information related to risks and probabilities of events leading to privacy costs and benefits. 23 In traditional economic theory, the agent is assumed to have both rationality and unbounded `computational'' power to process information. 
But human agents are unable to process all information in their hands and draw accurate conclusions from it [34]. In the scenario we consider, once an individual provides personal information to other parties, she literally loses control of that information. That loss of control propagates through other parties and persists for unpredictable spans of time. Being in a position of information asymmetry with respect to the party with whom she is transacting, decisions must be based on stochastic assessments, and the magnitudes of the factors that may affect the individual become very difficult to aggregate, calculate, and compare (see Footnote 2). Bounded rationality will affect the calculation of the parameters in Equation 1, and in particular δ, γ, v_E(), and p_d(). The cognitive costs involved in trying to calculate the best strategy could therefore be so high that the individual may just resort to simple heuristics. 3. Psychological distortions. Eventually, even if an individual had access to complete information and could appropriately compute it, she still may find it difficult to follow the rational strategy presented in Equation 1. A vast body of economic and psychological literature has by now confirmed the impact of several forms of psychological distortions on individual decision making. Privacy seems to be a case study encompassing many of those distortions: hyperbolic discounting, under-insurance, self-control problems, immediate gratification, and others. The traditional dichotomy between attitude and behavior, observed in several aspects of human psychology and studied in the social psychology literature since [24] and [13], may also appear in the privacy space because of these distortions. For example, individuals have a tendency to discount 'hyperbolically' future costs or benefits [31, 27]. In economics, hyperbolic discounting implies inconsistency of personal preferences over time - future events may be discounted at different discount rates than near-term events. Hyperbolic discounting may affect privacy decisions, for instance when we heavily discount the (low) probability of (high) future risks such as identity theft (see Footnote 3). Related to hyperbolic discounting is the tendency to under-insure oneself against certain risks [22]. In general, individuals may put constraints on future behavior that limit their own achievement of maximum utility: people may genuinely want to protect themselves, but because of self-control bias, they will not actually take those steps, and opt for immediate gratification instead. People tend to underappreciate the effects of changes in their states, and hence falsely project their current preferences over consumption onto their future preferences. Far more than suggesting merely that people mispredict future tastes, this projection bias posits a systematic pattern in these mispredictions which can lead to systematic errors in dynamic-choice environments [25, p. 2].
Footnote 2: The negative utility coming from future potential misuses of somebody's personal information could be a random shock whose probability and scope are extremely variable. For example, a small and apparently innocuous piece of information might become a crucial asset or a dangerous liability in the right context.
Footnote 3: A more rigorous description and application of hyperbolic discounting is provided in Section 4.
In addition, individuals suffer from optimism bias [42], the misperception that one's risks are lower than those of other individuals under similar conditions.
Optimism bias may lead us to believe that we will not be subject to privacy intrusions. Individuals encounter difficulties when dealing with cumulative risks. [35], for instance, shows that while young smokers appreciate the long term risks of smoking, they do not fully realize the cumulative relation between the low risks of each additional cigarette and the slow building up of a serious danger. Difficulties with dealing with cumulative risks apply to privacy, because our personal information, once released, can remain available over long periods of time. And since it can be correlated to other data, the 'anonymity sets' [32, 14] in which we wish to remain hidden get smaller. As a result, the whole risk associated with revealing different pieces of personal information is more than the sum of the individual risks associated with each piece of data. Also, it is easier to deal with actions and effects that are closer to us in time. Actions and effects that are in the distant future are difficult to focus on given our limited foresight perspective. As the foresight changes, so does behavior, even when preferences remain the same [20]. This phenomenon may also affect privacy decisions, since the costs of privacy protection may be immediate, but the rewards may be invisible (absence of intrusions) and spread over future periods of time. To summarize: whenever we face privacy sensitive decisions, we hardly have all the data necessary for an informed choice. But even if we had, we would likely be unable to process it. And even if we could process it, we may still end up behaving against our own better judgment. In what follows, we present a model of privacy attitudes and behavior based on some of these findings, and in particular on the plight of immediate gratification. 4. PRIVACY AND THE ECONOMICS OF IMMEDIATE GRATIFICATION The problem of immediate gratification (which is related to the concepts of time inconsistency, hyperbolic discounting, and self-control bias) is described by O'Donoghue and Rabin [27, p. 4] as follows: A person's relative preference for wellbeing at an earlier date over a later date gets stronger as the earlier date gets closer. [...] [P]eople have self-control problems caused by a tendency to pursue immediate gratification in a way that their 'long-run selves' do not appreciate. For example, if you were given only two alternatives, on Monday you may claim you will prefer working 5 hours on Saturday to 5 and a half hours on Sunday. But as Saturday comes, you will be more likely to prefer postponing work until Sunday. This simple observation has rather important consequences in economic theory, where time-consistency of preferences is the dominant model. Consider first the traditional model of utility that agents derive from consumption: the model states that utility discounts exponentially over time:

U_t = Σ_{τ=t}^{T} δ^τ u_τ    (2)

In Equation 2, the cumulative utility U at time t is the discounted sum of all utilities from time t (the present) until time T (the future). δ is the discount factor, with a value between 0 and 1.
A value of 1 would imply that the individual is so patient she does not discount future utilities. The discount factor is used in economics to capture the fact that having (say) one dollar one year from now is valuable, but not as much as having that dollar now. In Equation 2, if all uτ were constant - for instance, 10 - and δ was 0.9, then at time t = 0 (that is, now) u0 would be worth 10, but u1 would be worth 9. Modifying the traditional model of utility discounting, [23] and then [31] have proposed a model which takes into account possible time-inconsistency of preferences. Consider Equation 3: Ut(ut, ut+1, ..., uT ) = δt ut + β T τ=t+1 δτ uτ (3) Assume that δ, β ∈ [0, 1]. δ is the discount factor for intertemporal utility as in Equation 2. β is the parameter that captures an individual``s tendency to gratify herself immediately (a form of time-inconsistent preferences). When β is 1, the model maps the traditional time-consistent utility model, and Equation 3 is identical to Equation 2. But when β is zero, the individual does not care for anything but today. In fact, any β smaller than 1 represents self-control bias. The experimental literature has convincingly proved that human beings tend to have self-control problems even when they claim otherwise: we tend to avoid and postpone undesirable activities even when this will imply more effort tomorrow; and we tend to over-engage in pleasant activities even though this may cause suffering or reduced utility in the future. This analytical framework can be applied to the study of privacy attitudes and behavior. Protecting your privacy sometimes means protecting yourself from a clear and present hassle (telemarketers, or people peeping through your window and seeing how you live - see [33]); but sometimes it represents something akin to getting an insurance against future and only uncertain risks. In surveys completed at time t = 0, subjects asked about their attitude towards privacy risks may mentally consider some costs of protecting themselves at a later time t = s and compare those to the avoided costs of privacy intrusions in an even more distant future t = s + n. Their alternatives at survey time 0 are represented in Equation 4. min wrt x DU0 = β[(E(cs,p)δs x) + (E(cs+n,i)δs+n (1 − x))] (4) x is a dummy variable that can take values 0 or 1. It represents the individual``s choice - which costs the individual opts to face: the expected cost of protecting herself at time s, E(cs,p) (in which case x = 1), or the expected costs of being subject to privacy intrusions at a later time s + n, E(cs+n,i). The individual is trying to minimize the disutility DU of these costs with respect to x. Because she discounts the two future events with the same discount factor (although at different times), for certain values of the parameters the individual may conclude that paying to protect herself is worthy. In particular, this will happen when: E(cs,p)δs < E(cs+n,i)δs+n (5) Now, consider what happens as the moment t = s comes. Now a real price should be paid in order to enjoy some form of protection (say, starting to encrypt all of your emails to protect yourself from future intrusions). Now the individual will perceive a different picture: min wrt x DUs = δE(cs,p)x + βE(cn,i)δn (1 − x)] (6) Note that nothing has changed in the equation (certainly not the individual``s perceived risks) except time. 
If β (the parameter indicating the degree of self-control problems) is less than one, chances are that the individual will now actually choose not to protect herself. This will in fact happen when:

\delta E(c_{s,p}) > \beta E(c_{n,i}) \delta^{n}    (7)

Note that Inequalities 5 and 7 may be simultaneously met for certain β < 1. At survey time the individual honestly claimed she wanted to protect herself in principle - that is, some time in the future. But when she is asked to make an effort to protect herself right now, she chooses to run the risk of privacy intrusion.

Similar mathematical arguments can be made for the comparison of immediate costs with immediate benefits (subscribing to a 'no-call' list to stop telemarketers from harassing you at dinner), and of immediate costs with only future expected rewards (insuring yourself against identity theft, or protecting yourself from fraud by never using your credit card on-line), particularly when the expected future rewards (or avoided risks) are also intangible: the immaterial consequences of living (or not) in a dossier society, or the chilling effects (or lack thereof) of being under surveillance.

The reader will have noticed that we have focused on perceived (expected) costs E(c), rather than real costs. We do not know the real costs and we do not claim that the individual does. But we are able to show that under certain conditions even costs perceived as very high (as during periods of intense privacy debate) will be ignored.

We can provide some fictional numerical examples to make the analysis more concrete. We present some scenarios inspired by the calculations in [31]. Imagine an economy with just 4 periods (Table 1). Each individual can enroll in a supermarket's loyalty program by revealing personal information. If she does so, the individual gets a discount of 2 during the period of enrollment, only to pay one unit in each period thereafter because of price discrimination based on the information she revealed (we make no attempt at calibrating the realism of this obviously abstract example; the point we are focusing on is how time inconsistencies may affect individual behavior given the expected costs and benefits of certain actions; see also footnote 4 below). Depending on which period the individual chooses for 'selling' her data, we have the undiscounted payoffs represented in Table 1. Imagine that the individual is contemplating these options and discounting them according to Equation 3. Suppose that δ = 1 for all types of individuals (this means that for simplicity we do not consider intertemporal discounting), but β = 1/2 for time-inconsistent individuals and β = 1 for everybody else. The time-consistent individual will choose to join the program in period 3, the last period in which joining still yields a benefit, and reap a net benefit of 2 - 1 = 1. The individual with immediate gratification problems, for whom β = 1/2, will instead perceive the benefits of joining now or in period 3 as equivalent (0.5), and will join the program now, thus actually making herself worse off (see the sketch below).

[31] also suggest that, in addition to the distinction between time-consistent individuals and individuals with time-inconsistent preferences, we should distinguish time-inconsistent individuals who are naïve from those who are sophisticated. Naïve time-inconsistent individuals are not aware of their self-control problems - for example, they are those who always plan to start a diet next week. Sophisticated time-inconsistent individuals suffer from immediate gratification bias, but are at least aware of their inconsistencies.
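The loyalty-program comparison above can be reproduced with a short computational sketch (ours, purely illustrative; it uses the Table 1 payoffs, sets δ = 1, and evaluates Equation 3 from the standpoint of period 1):

```python
# A sketch (ours) of the Table 1 loyalty-program example: four periods,
# delta = 1, Equation 3 evaluated from the standpoint of period 1.

PERIODS = 4

def undiscounted_flows(join_period):
    """Payoff in each period if the data is 'sold' in join_period: +2 then, -1 in every later period."""
    return [2 if t == join_period else (-1 if t > join_period else 0)
            for t in range(1, PERIODS + 1)]

def perceived_value_now(join_period, beta):
    """Equation 3 at period 1 with delta = 1: current payoff plus beta times the sum of future payoffs."""
    flows = undiscounted_flows(join_period)
    return flows[0] + beta * sum(flows[1:])

for beta in (1.0, 0.5):
    values = {k: round(perceived_value_now(k, beta), 2) for k in (1, 2, 3)}
    print(f"beta = {beta}: perceived value of joining in period k = {values}")

# beta = 1.0: {1: -1.0, 2: 0.0, 3: 1.0}  -> the time-consistent type waits until period 3
# beta = 0.5: {1: 0.5, 2: 0.0, 3: 0.5}   -> joining now looks as good as waiting, so she joins now
```

The output matches the discussion above: the time-consistent type waits until period 3 for a net gain of 1, while the β = 1/2 type sees joining now as just as good as waiting and joins immediately.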
Sophisticated individuals choose their behavior today by correctly estimating their future time-inconsistent behavior. Now consider how this difference affects decisions in another scenario, represented in Table 2. An individual is considering the adoption of a certain privacy enhancing technology. It will cost her some money both to protect herself and not to protect herself. If she decides to protect herself, the cost will be the amount she pays - for example - for some technology that shields her personal information. If she decides not to protect herself, the cost will be the expected consequences of privacy intrusions. We assume that both of these aggregate costs increase over time, although they follow separate dynamics. As time goes by, more and more information about the individual has been revealed, and it becomes more costly to be protected against privacy intrusions. At the same time, however, intrusions become more frequent and dangerous.

Footnote 4: One may claim that loyalty cards keep providing benefits over time. Here we make the simplifying assumption that such benefits are not larger than the future costs incurred after having revealed one's tastes. We also assume that the economy ends in period 4 for all individuals, regardless of when they chose to join the loyalty program.

In period 1, the individual may protect herself by spending 5, or she may choose to face a risk of privacy intrusion in the following period, expected to cost 7. In the second period, assuming that no intrusion has yet taken place, she may once again protect herself by spending a little more, 6, or she may choose to face a risk of privacy intrusion in the next (third) period, expected to cost 9. In the third period she can protect herself for 8 or face an expected cost of 15 in the final period. Here too we make no attempt at calibrating the values in Table 2. Again, we focus on the different behavior driven by heterogeneity in time-consistency and in sophistication versus naïveté. We assume that β = 1 for individuals with no self-control problems and β = 1/2 for everybody else. We assume for simplicity that δ = 1 for all.

The time-consistent individuals will obviously choose to protect themselves as soon as possible. In the first period, naïve time-inconsistent individuals will compare the cost of protecting themselves immediately with the cost of facing a privacy intrusion in the second period. Because 5 > 7 × (1/2), they will prefer to wait until the following period to protect themselves. But in the second period they will be comparing 6 > 9 × (1/2) - and so they will postpone their protection again. They will keep doing so, facing higher and higher risks. Eventually, they risk incurring the highest perceived costs of privacy intrusions (note again that we are simply assuming that individuals believe there are privacy risks and that those risks increase over time; we will come back to this point later on). Time-inconsistent but sophisticated individuals, on the other hand, will adopt a protective technology in period 2 and pay 6. By period 2, in fact, they will (correctly) realize that if they wait until period 3 (which they are tempted to do, because 6 > 9 × (1/2)), their self-control bias will lead them to postpone adopting the technology once more (because 8 > 15 × (1/2)). Therefore they predict that they would incur the expected cost 15 × (1/2), which is larger than 6, the cost of protecting oneself in period 2. In period 1, however, they correctly predict that they will not wait to protect themselves beyond period 2 (the sketch below reproduces this reasoning).
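This backward-induction reasoning can be reproduced in a few lines of code (a sketch of ours, purely illustrative, using the Table 2 values with δ = 1; the comparison rules are exactly those described above):

```python
# A sketch (ours) of the Table 2 scenario with delta = 1.  protect[t] is the
# cost of protecting in period t; intrude[t] is the expected cost of an
# intrusion suffered in period t.

protect = {1: 5, 2: 6, 3: 8}
intrude = {2: 7, 3: 9, 4: 15}
BETA = 0.5  # self-control bias; a time-consistent agent has beta = 1

def naive_choice(beta):
    """Each period, compare protecting now with the beta-discounted intrusion cost of the next period."""
    for t in (1, 2, 3):
        if protect[t] <= beta * intrude[t + 1]:
            return f"protects in period {t}"
    return f"never protects and eventually faces an expected intrusion cost of {intrude[4]}"

def sophisticated_choice(beta):
    """Backward induction: correctly anticipate one's own future self-control problem."""
    # cost_if_waiting[t]: the cost the agent will actually end up paying later
    # if she does not protect in period t (given her future selves' behavior).
    cost_if_waiting = {3: intrude[4]}  # waiting in period 3 means facing the period-4 intrusion
    for t in (2, 1):
        future_self_protects = protect[t + 1] <= beta * cost_if_waiting[t + 1]
        cost_if_waiting[t] = protect[t + 1] if future_self_protects else cost_if_waiting[t + 1]
    for t in (1, 2, 3):
        if protect[t] <= beta * cost_if_waiting[t]:
            return f"protects in period {t}"
    return "never protects"

print("time-consistent (beta = 1):", sophisticated_choice(beta=1.0))  # protects in period 1
print("naive, beta = 1/2         :", naive_choice(BETA))              # never protects
print("sophisticated, beta = 1/2 :", sophisticated_choice(BETA))      # protects in period 2
```

The naïve type postpones forever, the sophisticated type anticipates her period-3 self and pays 6 in period 2, and the time-consistent type protects immediately.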
                           Period 1   Period 2   Period 3   Period 4
Protection costs               5          6          8          -
Expected intrusion costs       -          7          9         15

Table 2: (Fictional) costs of protecting privacy and expected costs of privacy intrusions over time.

They therefore wait until period 2 (because 5 > 6 × (1/2)), at which point they adopt a protective technology (see also [31]).

To summarize, time-inconsistent people tend not to fully appreciate future risks and, if naïve, also their inability to deal with them. This happens even if they are aware of those risks and aware that those risks are increasing. As we learned from the second scenario, time inconsistency can lead individuals to accept higher and higher risks. Individuals may tend to downplay the fact that single actions present low risks but that their repetition forms a huge liability: it is a deceiving aspect of privacy that its value is truly appreciated only after privacy itself is lost. This dynamic captures the essence of privacy and of the so-called anonymity sets [32, 14], where each bit of information we reveal can be linked to others, so that the whole is more than the sum of the parts.

In addition, [31] show that when costs are immediate, time-inconsistent individuals tend to procrastinate; when benefits are immediate, they tend to preproperate (act too soon). In our context things are even more interesting, because all privacy decisions involve costs and benefits at the same time. So we opt against using eCash [9] in order to save ourselves the costs of switching from credit cards, but we accept the risk that our credit card number could be used maliciously on the Internet. And we give away our personal information to supermarkets in order to gain immediate discounts - which will likely turn into price discrimination in due time [3, 26].

We have shown in the second scenario above how sophisticated but time-inconsistent individuals may choose to protect their information only in period 2. Sophisticated people with self-control problems may be at a loss, sometimes even when compared to naïve people with time-inconsistency problems (how many privacy advocates use privacy enhancing technologies all the time?). The reasoning is that sophisticated people are aware of their self-control problems and, rather than ignoring them, incorporate them into their decision process. This may decrease their own incentive to behave in the optimal way now. Sophisticated privacy advocates might realize that protecting themselves from any possible privacy intrusion is unrealistic, and so they may start misbehaving now (and may get used to it, a form of coherent arbitrariness). This is consistent with the results by [36] presented at the ACM EC '01 conference: [36] found that privacy advocates were also willing to reveal personal information in exchange for monetary rewards. It is also interesting to note that these inconsistencies are not caused by ignorance of existing risks or confusion about available technologies. Individuals in the abstract scenarios we described are aware of their perceived risks and costs. However, under certain conditions, the magnitude of those liabilities is almost irrelevant. The individual takes slowly increasing risks, which become steps towards huge liabilities.

5. DISCUSSION

Applying models of self-control bias and immediate gratification to the study of privacy decision making may offer a new perspective on the ongoing privacy debate.
We have shown that a model of fully rational privacy behavior is unrealistic, while models based on psychological distortions offer a more accurate depiction of the decision process. We have shown why individuals who genuinely would like to protect their privacy may not do so because of psychological distortions well documented in the behavioral economics literature. We have highlighted that these distortions may affect not only naïve individuals but also sophisticated ones. Surprisingly, we have also found that these inconsistencies may occur even when individuals perceive the risks of not protecting their privacy as significant.

Additional uncertainties, risk aversion, and varying attitudes towards losses and gains may be confounding elements in our analysis. Empirical validation is necessary to calibrate the effects of the different factors. An empirical analysis may start with a comparison of available data on the adoption rates of privacy technologies that offer immediate refuge from minor but pressing privacy concerns (for example, 'do not call' marketing lists) with data on the adoption of privacy technologies that offer less obviously perceivable protection from more dangerous but also less visible privacy risks (for example, identity theft insurance). However, only an experimental approach over different periods of time in a controlled environment would allow us to disentangle the influence of the several factors involved. Surveys alone cannot suffice, since we have shown why survey-time attitudes will rarely match decision-time actions. An experimental verification is part of our ongoing research agenda.

The psychological distortions we have discussed may be considered in the ongoing debate on how to deal with the privacy problem: industry self-regulation, users' self-protection (through technology or other strategies), or government intervention. The conclusions we have reached suggest that individuals may not be trusted to make decisions in their own best interests when it comes to privacy. This does not mean that privacy technologies are ineffective. On the contrary, our results, by aiming to offer a more realistic model of user behavior, could help technologists in their design of privacy enhancing tools. However, our results also imply that technology alone or awareness alone may not address the heart of the privacy problem. Improved technologies (with lower costs of adoption and protection) and more information about risks and opportunities certainly can help. However, more fundamental human behavioral mechanisms must also be addressed. Self-regulation, even in the presence of complete information and awareness, may not be trusted to work for the same reasons. A combination of technology, awareness, and regulatory policies - calibrated to generate and enforce liabilities and incentives for the appropriate parties - may be needed to increase privacy-related welfare (as in other areas of the economy; see [25] for a related analysis).

Observing that people do not want to pay for privacy or do not care about privacy is, therefore, only half the truth. People may not be able to act as economically rational agents when it comes to personal privacy. And the question "do consumers care?" is a different question from "does privacy matter?" Whether, from an economic standpoint, privacy ought to be protected or not is still an open question. It is a question that involves defining the specific contexts in which the concept of privacy is being invoked.
But the value of privacy eventually goes beyond the realm of economic reasoning and cost-benefit analysis, and ends up relating to one's views on society and freedom. Still, even from a purely economic perspective, anecdotal evidence suggests that the costs of privacy (from spam to identity theft, lost sales, intrusions, and the like [30, 12, 17, 33, 26]) are high and increasing.

6. ACKNOWLEDGMENTS

The author gratefully acknowledges Carnegie Mellon University's Berkman Development Fund, which partially supported this research. The author also wishes to thank Jens Grossklags, Charis Kaskiris, and three anonymous referees for their helpful comments.

7. REFERENCES

[1] A. Acquisti, R. Dingledine, and P. Syverson. On the economics of anonymity. In Financial Cryptography - FC '03, pages 84-102. Springer Verlag, LNCS 2742, 2003.
[2] A. Acquisti and J. Grossklags. Losses, gains, and hyperbolic discounting: An experimental approach to information security attitudes and behavior. In 2nd Annual Workshop on Economics and Information Security - WEIS '03, 2003.
[3] A. Acquisti and H. R. Varian. Conditioning prices on purchase history. Technical report, University of California, Berkeley, 2001. Presented at the European Economic Association Conference, Venice, IT, August 2002. http://www.heinz.cmu.edu/~acquisti/papers/privacy.pdf.
[4] G. A. Akerlof. The market for 'lemons': Quality uncertainty and the market mechanism. Quarterly Journal of Economics, 84:488-500, 1970.
[5] G. S. Becker and K. M. Murphy. A theory of rational addiction. Journal of Political Economy, 96:675-700, 1988.
[6] B. D. Brunk. Understanding the privacy space. First Monday, 7, 2002. http://firstmonday.org/issues/issue7_10/brunk/index.html.
[7] G. Calzolari and A. Pavan. Optimal design of privacy policies. Technical report, Gremaq, University of Toulouse, 2001.
[8] D. Chaum. Untraceable electronic mail, return addresses, and digital pseudonyms. Communications of the ACM, 24(2):84-88, 1981.
[9] D. Chaum. Blind signatures for untraceable payments. In Advances in Cryptology - Crypto '82, pages 199-203. Plenum Press, 1983.
[10] R. K. Chellappa and R. Sin. Personalization versus privacy: An empirical examination of the online consumer's dilemma. In 2002 INFORMS Meeting, 2002.
[11] Federal Trade Commission. Privacy online: Fair information practices in the electronic marketplace, 2000. http://www.ftc.gov/reports/privacy2000/privacy2000.pdf.
[12] Community Banker Association of Indiana. Identity fraud expected to triple by 2005, 2001. http://www.cbai.org/Newsletter/December2001/identity_fraud_de2001.htm.
[13] S. Corey. Professional attitudes and actual behavior. Journal of Educational Psychology, 28(1):271-280, 1937.
[14] C. Diaz, S. Seys, J. Claessens, and B. Preneel. Towards measuring anonymity. In P. Syverson and R. Dingledine, editors, Privacy Enhancing Technologies - PET '02. Springer Verlag, LNCS 2482, 2002.
[15] ebusinessforum.com. eMarketer: The great online privacy debate, 2000. http://www.ebusinessforum.com/index.asp?doc_id=1785&layout=rich_story.
[16] Federal Trade Commission. Identity theft heads the FTC's top 10 consumer fraud complaints of 2001, 2002. http://www.ftc.gov/opa/2002/01/idtheft.htm.
[17] R. Gellman. Privacy, consumers, and costs - How the lack of privacy costs consumers and why business studies of privacy costs are biased and incomplete, 2002. http://www.epic.org/reports/dmfprivacy.html.
[18] I.-H. Hann, K.-L. Hui, T. S. Lee, and I. P. L. Png. Online information privacy: Measuring the cost-benefit trade-off. In 23rd International Conference on Information Systems, 2002.
[19] Harris Interactive. First major post-9.11 privacy survey finds consumers demanding companies do more to protect privacy; public wants company privacy policies to be independently verified, 2002. http://www.harrisinteractive.com/news/allnewsbydate.asp?NewsID=429.
[20] P. Jehiel and A. Lilico. Smoking today and stopping tomorrow: A limited foresight perspective. Technical report, Department of Economics, UCLA, 2002.
[21] Jupiter Research. Seventy percent of US consumers worry about online privacy, but few take protective action, 2002. http://www.jmm.com/xp/jmm/press/2002/pr_060302.xml.
[22] H. Kunreuther. Causes of underinsurance against natural disasters. Geneva Papers on Risk and Insurance, 1984.
[23] D. Laibson. Essays on hyperbolic discounting. Ph.D. dissertation, Department of Economics, MIT, 1994.
[24] R. LaPiere. Attitudes versus actions. Social Forces, 13:230-237, 1934.
[25] G. Loewenstein, T. O'Donoghue, and M. Rabin. Projection bias in predicting future utility. Technical report, Carnegie Mellon University, Cornell University, and University of California, Berkeley, 2003.
[26] A. Odlyzko. Privacy, economics, and price discrimination on the Internet. In Fifth International Conference on Electronic Commerce, pages 355-366. ACM, 2003.
[27] T. O'Donoghue and M. Rabin. Choice and procrastination. Quarterly Journal of Economics, 116:121-160, 2001. The page referenced in the text refers to the 2000 working paper version.
[28] R. A. Posner. An economic theory of privacy. Regulation, pages 19-26, 1978.
[29] R. A. Posner. The economics of privacy. American Economic Review, 71(2):405-409, 1981.
[30] Privacy Rights Clearinghouse. Nowhere to turn: Victims speak out on identity theft, 2000. http://www.privacyrights.org/ar/idtheft2000.htm.
[31] M. Rabin and T. O'Donoghue. The economics of immediate gratification. Journal of Behavioral Decision Making, 13:233-250, 2000.
[32] A. Serjantov and G. Danezis. Towards an information theoretic metric for anonymity. In P. Syverson and R. Dingledine, editors, Privacy Enhancing Technologies - PET '02. Springer Verlag, LNCS 2482, 2002.
[33] A. Shostack. Paying for privacy: Consumers and infrastructures. In 2nd Annual Workshop on Economics and Information Security - WEIS '03, 2003.
[34] H. A. Simon. Models of bounded rationality. The MIT Press, Cambridge, MA, 1982.
[35] P. Slovic. What does it mean to know a cumulative risk? Adolescents' perceptions of short-term and long-term consequences of smoking. Journal of Behavioral Decision Making, 13:259-266, 2000.
[36] S. Spiekermann, J. Grossklags, and B. Berendt. E-privacy in 2nd generation e-commerce: Privacy preferences versus actual behavior. In 3rd ACM Conference on Electronic Commerce - EC '01, pages 38-47, 2002.
[37] G. J. Stigler. An introduction to privacy in economics and politics. Journal of Legal Studies, 9:623-644, 1980.
[38] P. Syverson. The paradoxical value of privacy. In 2nd Annual Workshop on Economics and Information Security - WEIS '03, 2003.
[39] C. R. Taylor. Private demands and demands for privacy: Dynamic pricing and the market for customer information. Duke Economics Working Paper 02-02, Department of Economics, Duke University, 2002.
[40] T. Vila, R. Greenstadt, and D. Molnar. Why we can't be bothered to read privacy policies: Models of privacy economics as a lemons market. In 2nd Annual Workshop on Economics and Information Security - WEIS '03, 2003.
[41] S. Warren and L. Brandeis. The right to privacy. Harvard Law Review, 4:193-220, 1890.
[42] N. D. Weinstein. Optimistic biases about personal risks. Science, 24:1232-1233, 1989.
[43] A. Whitten and J. D. Tygar. Why Johnny can't encrypt: A usability evaluation of PGP 5.0. In 8th USENIX Security Symposium, 1999.
Privacy in Electronic Commerce and the Economics of Immediate Gratification ABSTRACT Dichotomies between privacy attitudes and behavior have been noted in the literature but not yet fully explained. We apply lessons from the research on behavioral economics to understand the individual decision making process with respect to privacy in electronic commerce. We show that it is unrealistic to expect individual rationality in this context. Models of self-control problems and immediate gratification offer more realistic descriptions of the decision process and are more consistent with currently available data. In particular, we show why individuals who may genuinely want to protect their privacy might not do so because of psychological distortions well documented in the behavioral literature; we show that these distortions may affect not only 'naïve' individuals but also 'sophisticated' ones; and we prove that this may occur also when individuals perceive the risks from not protecting their privacy as significant. 1. PRIVACY AND ELECTRONIC COMMERCE Privacy remains an important issue for electronic commerce. A PriceWaterhouseCoopers study in 2000 showed that nearly two thirds of the consumers surveyed "would shop more online if they knew retail sites would not do anything with their personal information" [15]. A Federal Trade Commission study reported in 2000 that sixty-seven percent of consumers were "very concerned" about the privacy of the personal information provided on-line [11]. More recently, a February 2002 Harris Interactive survey found that the three biggest consumer concerns in the area of on-line personal information security were: companies trading personal data without permission, the consequences of insecure transactions, and theft of personal data [19]. According to a Jupiter Research study in 2002, "$24.5 billion in on-line sales will be lost by 2006 - up from $5.5 billion in 2001. Online retail sales would be approximately twenty-four percent higher in 2006 if consumers' fears about privacy and security were addressed effectively" [21]. Although the media hype has somewhat diminished, risks and costs have not, as evidenced by the increasing volumes of electronic spam and identity theft [16]. Surveys in this field, however, as well as experiments and anecdotal evidence, have also painted a different picture. [36, 10, 18, 21] have found evidence that even privacy concerned individuals are willing to trade off privacy for convenience, or bargain the release of very personal information in exchange for relatively small rewards. The failure of several on-line services aimed at providing anonymity for Internet users [6] offers additional indirect evidence of the reluctance by most individuals to spend any effort in protecting their personal information. The dichotomy between privacy attitudes and behavior has been highlighted in the literature. Preliminary interpretations of this phenomenon have been provided [2, 38, 33, 40]. Still missing are: an explanation grounded in economic or psychological theories; an empirical validation of the proposed explanation; and, of course, the answer to the most recurring question: should people bother at all about privacy? In this paper we focus on the first question: we formally analyze the individual decision making process with respect to privacy and its possible shortcomings. We focus on individual (mis)conceptions about their handling of risks they face when revealing private information. 
We do not address the issue of whether people should actually protect themselves. We will comment on that in Section 5, where we will also discuss strategies to empirically validate our theory. We apply lessons from behavioral economics. Traditional economics postulates that people are forward-looking and Bayesian updaters: they take into account how current behavior will influence their future well-being and preferences. For example, [5] study rational models of addiction. This approach can be compared to those who see in the decision not to protect one's privacy a rational choice given the (supposedly) low risks at stake. However, developments in the area of behavioral economics have highlighted various forms of psychological inconsistencies (self-control problems, hyperbolic discounting, present-biases, etc.) that clash with the fully rational view of the economic agent. In this paper we draw from these developments to reach the following conclusions: • We show that it is unlikely that individuals can act rationally in the economic sense when facing privacy sensitive decisions. • We show that alternative models of personal behavior and time-inconsistent preferences are compatible with the dichotomy between attitudes and behavior and can better match current data. For example, they can explain the results presented by [36] at the ACM EC ’01 conference. In their experiment, self-proclaimed privacy advocates were found to be willing to reveal varying amounts of personal information in exchange for small rewards. • In particular, we show that individuals may have a tendency to under-protect themselves against the privacy risks they perceive, and over-provide personal information even when wary of (perceived) risks involved. • We show that the magnitude of the perceived costs of privacy under certain conditions will not act as deterrent against behavior the individual admits is risky. • We show, following similar studies in the economics of immediate gratification [31], that even 'sophisticated' individuals may under certain conditions become 'privacy myopic'. Our conclusion is that simply providing more information and awareness in a self-regulative environment is not sufficient to protect individual privacy. Improved technologies, by lowering costs of adoption and protection, certainly can help. However, more fundamental human behavioral responses must also be addressed if privacy ought to be protected. In the next section we propose a model of rational agents facing privacy sensitive decisions. In Section 3 we show the difficulties that hinder any model of privacy decision making based on full rationality. In Section 4 we show how behavioral models based on immediate gratification bias can better explain the attitudes-behavior dichotomy and match available data. In Section 5 we summarize and discuss our conclusions. 2. A MODEL OF RATIONALITY IN PRIVACY DECISION MAKING 3. RATIONALITY AND PSYCHOLOGICAL DISTORTIONS IN PRIVACY 4. PRIVACY AND THE ECONOMICS OF IMMEDIATE GRATIFICATION
Privacy in Electronic Commerce and the Economics of Immediate Gratification ABSTRACT Dichotomies between privacy attitudes and behavior have been noted in the literature but not yet fully explained. We apply lessons from the research on behavioral economics to understand the individual decision making process with respect to privacy in electronic commerce. We show that it is unrealistic to expect individual rationality in this context. Models of self-control problems and immediate gratification offer more realistic descriptions of the decision process and are more consistent with currently available data. In particular, we show why individuals who may genuinely want to protect their privacy might not do so because of psychological distortions well documented in the behavioral literature; we show that these distortions may affect not only 'naïve' individuals but also 'sophisticated' ones; and we prove that this may occur also when individuals perceive the risks from not protecting their privacy as significant. 1. PRIVACY AND ELECTRONIC COMMERCE Privacy remains an important issue for electronic commerce. A PriceWaterhouseCoopers study in 2000 showed that nearly two thirds of the consumers surveyed "would shop more online if they knew retail sites would not do anything with their personal information" [15]. A Federal Trade Commission study reported in 2000 that sixty-seven percent of consumers were "very concerned" about the privacy of the personal information provided on-line [11]. Online retail sales would be approximately twenty-four percent higher in 2006 if consumers' fears about privacy and security were addressed effectively" [21]. Although the media hype has somewhat diminished, risks and costs have not, as evidenced by the increasing volumes of electronic spam and identity theft [16]. Surveys in this field, however, as well as experiments and anecdotal evidence, have also painted a different picture. [36, 10, 18, 21] have found evidence that even privacy concerned individuals are willing to trade off privacy for convenience, or bargain the release of very personal information in exchange for relatively small rewards. The failure of several on-line services aimed at providing anonymity for Internet users [6] offers additional indirect evidence of the reluctance by most individuals to spend any effort in protecting their personal information. The dichotomy between privacy attitudes and behavior has been highlighted in the literature. Still missing are: an explanation grounded in economic or psychological theories; an empirical validation of the proposed explanation; and, of course, the answer to the most recurring question: should people bother at all about privacy? In this paper we focus on the first question: we formally analyze the individual decision making process with respect to privacy and its possible shortcomings. We focus on individual (mis)conceptions about their handling of risks they face when revealing private information. We do not address the issue of whether people should actually protect themselves. We will comment on that in Section 5, where we will also discuss strategies to empirically validate our theory. We apply lessons from behavioral economics. Traditional economics postulates that people are forward-looking and Bayesian updaters: they take into account how current behavior will influence their future well-being and preferences. For example, [5] study rational models of addiction. 
This approach can be compared to those who see in the decision not to protect one's privacy a rational choice given the (supposedly) low risks at stake. In this paper we draw from developments in behavioral economics to reach the following conclusions: • We show that it is unlikely that individuals can act rationally in the economic sense when facing privacy sensitive decisions. • We show that alternative models of personal behavior and time-inconsistent preferences are compatible with the dichotomy between attitudes and behavior and can better match current data. In the experiment presented by [36] at the ACM EC ’01 conference, self-proclaimed privacy advocates were found to be willing to reveal varying amounts of personal information in exchange for small rewards. • In particular, we show that individuals may have a tendency to under-protect themselves against the privacy risks they perceive, and over-provide personal information even when wary of (perceived) risks involved. • We show that the magnitude of the perceived costs of privacy under certain conditions will not act as deterrent against behavior the individual admits is risky. • We show, following similar studies in the economics of immediate gratification [31], that even 'sophisticated' individuals may under certain conditions become 'privacy myopic'. Our conclusion is that simply providing more information and awareness in a self-regulative environment is not sufficient to protect individual privacy. However, more fundamental human behavioral responses must also be addressed if privacy ought to be protected. In the next section we propose a model of rational agents facing privacy sensitive decisions. In Section 3 we show the difficulties that hinder any model of privacy decision making based on full rationality. In Section 4 we show how behavioral models based on immediate gratification bias can better explain the attitudes-behavior dichotomy and match available data. In Section 5 we summarize and discuss our conclusions.
C-41
Evaluating Adaptive Resource Management for Distributed Real-Time Embedded Systems
A challenging problem faced by researchers and developers of distributed real-time and embedded (DRE) systems is devising and implementing effective adaptive resource management strategies that can meet end-to-end quality of service (QoS) requirements in varying operational conditions. This paper presents two contributions to research in adaptive resource management for DRE systems. First, we describe the structure and functionality of the Hybrid Adaptive Resource-management Middleware (HyARM), which provides adaptive resource management using hybrid control techniques for adapting to workload fluctuations and resource availability. Second, we evaluate the adaptive behavior of HyARM via experiments on a DRE multimedia system that distributes video in real-time. Our results indicate that HyARM yields predictable, stable, and high system performance, even in the face of fluctuating workload and resource availability.
[ "adapt resourc manag", "hybrid adapt resourcemanag middlewar", "hybrid control techniqu", "distribut real-time embed system", "servic end-to-end qualiti", "real-time video distribut system", "real-time corba specif", "video encod/decod", "resourc reserv mechan", "dynam environ", "stream servic", "distribut real-time emb system", "hybrid system", "servic qualiti" ]
[ "P", "P", "P", "M", "R", "M", "U", "M", "M", "U", "M", "M", "R", "R" ]
Evaluating Adaptive Resource Management for Distributed Real-Time Embedded Systems Nishanth Shankaran,∗ Xenofon Koutsoukos, Douglas C. Schmidt, and Aniruddha Gokhale Dept. of EECS, Vanderbilt University, Nashville ABSTRACT A challenging problem faced by researchers and developers of distributed real-time and embedded (DRE) systems is devising and implementing effective adaptive resource management strategies that can meet end-to-end quality of service (QoS) requirements in varying operational conditions. This paper presents two contributions to research in adaptive resource management for DRE systems. First, we describe the structure and functionality of the Hybrid Adaptive Resource-management Middleware (HyARM), which provides adaptive resource management using hybrid control techniques for adapting to workload fluctuations and resource availability. Second, we evaluate the adaptive behavior of HyARM via experiments on a DRE multimedia system that distributes video in real-time. Our results indicate that HyARM yields predictable, stable, and high system performance, even in the face of fluctuating workload and resource availability. Categories and Subject Descriptors C.2.4 [Distributed Systems]: Distributed Applications; D.4.7 [Organization and Design]: Real-time Systems and Embedded Systems 1. INTRODUCTION Achieving end-to-end real-time quality of service (QoS) is particularly important for open distributed real-time and embedded (DRE) systems that face resource constraints, such as limited computing power and network bandwidth. Overutilization of these system resources can yield unpredictable and unstable behavior, whereas under-utilization can yield excessive system cost. A promising approach to meeting these end-to-end QoS requirements effectively, therefore, is to develop and apply adaptive middleware [10, 15], which is software whose functional and QoS-related properties can be modified either statically or dynamically. Static modifications are carried out to reduce footprint, leverage capabilities that exist in specific platforms, enable functional subsetting, and/or minimize hardware/software infrastructure dependencies. Objectives of dynamic modifications include optimizing system responses to changing environments or requirements, such as changing component interconnections, power-levels, CPU and network bandwidth availability, latency/jitter, and workload. In open DRE systems, adaptive middleware must make such modifications dependably, i.e., while meeting stringent end-to-end QoS requirements, which requires the specification and enforcement of upper and lower bounds on system resource utilization to ensure effective use of system resources. To meet these requirements, we have developed the Hybrid Adaptive Resource-management Middleware (HyARM), which is an open-source distributed resource management middleware. HyARM is based on hybrid control theoretic techniques [8], which provide a theoretical framework for designing control of complex systems with both continuous and discrete dynamics. In our case study, which involves a distributed real-time video distribution system, the task of adaptive resource management is to control the utilization of the different resources, whose utilizations are described by continuous variables. We achieve this by adapting the resolution of the transmitted video, which is modeled as a continuous variable, and by changing the frame-rate and the compression, which are modeled by discrete actions. 
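To give a concrete sense of what treating resolution as a continuous variable and frame rate and compression as discrete actions can look like in code, here is a deliberately simplified sketch of one adaptation step. It is not HyARM's actual control law: the gain, tolerance, floor resolution, frame-rate set, and codec ordering are all invented for illustration.

```python
# A minimal, illustrative sketch (not HyARM's control law) of the hybrid idea described
# above: resolution is a continuous knob, frame rate and codec are discrete switches.

FRAME_RATES = [15, 20, 25]                       # discrete choices (fps)
CODECS = ["MPEG-4", "Real Video", "MPEG-2"]      # discrete choices (ordering is illustrative)

def adapt(utilization, set_point, state, tolerance=0.05, gain=0.5):
    """One adaptation step: nudge the continuous variable first, switch discrete ones if needed."""
    error = utilization - set_point
    if abs(error) <= tolerance:
        return state                             # inside the desired utilization band

    # Continuous dynamics: scale picture resolution proportionally to the error.
    scale = 1.0 - gain * error
    state["resolution"] = (max(160, int(state["resolution"][0] * scale)),
                           max(120, int(state["resolution"][1] * scale)))

    # Discrete dynamics: if the continuous knob saturates, take a discrete action.
    if error > 0 and state["resolution"] == (160, 120):
        i = FRAME_RATES.index(state["frame_rate"])
        if i > 0:
            state["frame_rate"] = FRAME_RATES[i - 1]     # drop to the next lower frame rate
        else:
            state["codec"] = CODECS[0]                   # fall back to the cheapest codec
    return state

state = {"resolution": (1024, 768), "frame_rate": 25, "codec": "MPEG-2"}
print(adapt(utilization=0.9, set_point=0.7, state=state))
```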
We have implemented HyARM atop The ACE ORB (TAO) [13], which is an implementation of the Real-time CORBA specification [12]. Our results show that (1) HyARM ensures effective system resource utilization and (2) end-to-end QoS requirements of higher priority applications are met, even in the face of fluctuations in workload. The remainder of the paper is organized as follows: Section 2 describes the architecture, functionality, and resource utilization model of our DRE multimedia system case study; Section 3 explains the structure and functionality of HyARM; Section 4 evaluates the adaptive behavior of HyARM via experiments on our multimedia system case study; Section 5 compares our research on HyARM with related work; and Section 6 presents concluding remarks. (The code and examples for HyARM are available at www.dre.vanderbilt.edu/∼nshankar/HyARM/.) 2. CASE STUDY: DRE MULTIMEDIA SYSTEM This section describes the architecture and QoS requirements of our DRE multimedia system. 2.1 Multimedia System Architecture Figure 1: DRE Multimedia System Architecture The architecture for our DRE multimedia system is shown in Figure 1 and consists of the following entities: (1) Data source (video capture by UAV), where video is captured (related to subject of interest) by camera(s) on each UAV, followed by encoding of raw video using a specific encoding scheme and transmitting the video to the next stage in the pipeline. (2) Data distributor (base station), where the video is processed to remove noise, followed by retransmission of the processed video to the next stage in the pipeline. (3) Sinks (command and control center), where the received video is again processed to remove noise, then decoded and finally rendered to end user via graphical displays. Significant improvements in video encoding/decoding and (de)compression techniques have been made as a result of recent advances in video encoding and compression techniques [14]. Common video compression schemes are MPEG-1, MPEG-2, Real Video, and MPEG-4. Each compression scheme is characterized by its resource requirement, e.g., the computational power to (de)compress the video signal and the network bandwidth required to transmit the compressed video signal. Properties of the compressed video, such as resolution and frame-rate, determine both the quality and the resource requirements of the video. Our multimedia system case study has the following end-to-end real-time QoS requirements: (1) latency, (2) interframe delay (also known as jitter), (3) frame rate, and (4) picture resolution. These QoS requirements can be classified as being either hard or soft. Hard QoS requirements should be met by the underlying system at all times, whereas soft QoS requirements can be missed occasionally. For our case study, we treat QoS requirements such as latency and jitter as harder QoS requirements and strive to meet these requirements at all times. 
In contrast, we treat QoS requirements such as video frame rate and picture resolution as softer QoS requirements and modify these video properties adaptively to handle dynamic changes in resource availability effectively. (Although hard and soft are often portrayed as two discrete requirement sets, in practice they are usually two ends of a continuum ranging from softer to harder rather than two disjoint points.) 2.2 DRE Multimedia System Resources There are two primary types of resources in our DRE multimedia system: (1) processors that provide computational power available at the UAVs, base stations, and end receivers and (2) network links that provide communication bandwidth between UAVs, base stations, and end receivers. The computing power required by the video capture and encoding tasks depends on dynamic factors, such as speed of the UAV, speed of the subject (if the subject is mobile), and distance between UAV and the subject. The wireless network bandwidth available to transmit video captured by UAVs to base stations also depends on the wireless connectivity between the UAVs and the base station, which in turn depends on dynamic factors such as the speed of the UAVs and the relative distance between UAVs and base stations. The bandwidth of the link between the base station and the end receiver is limited, but more stable than the bandwidth of the wireless network. Resource requirements and availability of resources are subject to dynamic changes. Two classes of applications - QoS-enabled and best-effort - use the multimedia system infrastructure described above to transmit video to their respective receivers. The QoS-enabled class of applications has higher priority than the best-effort class. In our study, emergency response applications belong to the QoS-enabled class and surveillance applications to the best-effort class. For example, since a stream from an emergency response application is of higher importance than a video stream from a surveillance application, it receives more resources end-to-end. Since resource availability significantly affects QoS, we use current resource utilization as the primary indicator of system performance. We refer to the current level of system resource utilization as the system condition. Based on this definition, we can classify system conditions as being either under, over, or effectively utilized. Under-utilization of system resources occurs when the current resource utilization is lower than the desired lower bound on resource utilization. In this system condition, residual system resources (i.e., network bandwidth and computational power) are available in large amounts after meeting end-to-end QoS requirements of applications. These residual resources can be used to increase the QoS of the applications. For example, residual CPU and network bandwidth can be used to deliver better quality video (e.g., with greater resolution and higher frame rate) to end receivers. Over-utilization of system resources occurs when the current resource utilization is higher than the desired upper bound on resource utilization. This condition can arise from loss of resources - network bandwidth and/or computing power at the base station, end receiver, or UAV - or may be due to an increase in resource demands by applications. 
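The classification just described reduces to a simple predicate over the current utilization and the two bounds; the sketch below only pins the definitions down, and the bound values are placeholders rather than numbers taken from the paper.

```python
# A small sketch of the classification above: the system condition is determined by
# comparing current utilization against the desired lower and upper bounds.

def system_condition(utilization, lower=0.5, upper=0.7):
    if utilization < lower:
        return "under-utilized"      # residual resources could raise application QoS
    if utilization > upper:
        return "over-utilized"       # QoS and timeliness properties start to degrade
    return "effectively utilized"    # desired condition: QoS met within resource bounds

for u in (0.3, 0.65, 0.9):
    print(u, system_condition(u))
```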
Over-utilization is generally undesirable since the quality of the received video (such as resolution and frame rate) and timeliness properties (such as latency and jitter) are degraded and may result in an unstable (and thus ineffective) system. Effective resource utilization is the desired system condition since it ensures that end-to-end QoS requirements of the UAV-based multimedia system are met and utilization of both system resources, i.e., network bandwidth and computational power, are within their desired utilization bounds. Section 3 describes techniques we applied to achieve effective utilization, even in the face of fluctuating resource availability and/or demand. 3. OVERVIEW OF HYARM This section describes the architecture of the Hybrid Adaptive Resource-management Middleware (HyARM). HyARM ensures efficient and predictable system performance by providing adaptive resource management, including monitoring of system resources and enforcing bounds on application resource utilization. 3.1 HyARM Structure and Functionality Figure 2: HyARM Architecture HyARM is composed of three types of entities shown in Figure 2 and described below: Resource monitors observe the overall resource utilization for each type of resource and resource utilization per application. In our multimedia system, there are resource monitors for CPU utilization and network bandwidth. CPU monitors observe the CPU resource utilization of UAVs, base station, and end receivers. Network bandwidth monitors observe the network resource utilization of (1) the wireless network link between UAVs and the base station and (2) the wired network link between the base station and end receivers. The central controller maintains the system resource utilization below a desired bound by (1) processing periodic updates it receives from resource monitors and (2) modifying the execution of applications accordingly, e.g., by using different execution algorithms or operating the application with increased/decreased QoS. This adaptation process ensures that system resources are utilized efficiently and end-to-end application QoS requirements are met. In our multimedia system, the HyARM controller determines the value of application parameters such as (1) video compression schemes, such as Real Video and MPEG-4, and/or (2) frame rate, and (3) picture resolution. From the perspective of hybrid control theoretic techniques [8], the different video compression schemes and frame rate form the discrete variables of application execution and picture resolution forms the continuous variable. Application adapters modify application execution according to parameters recommended by the controller and ensure that the operation of the application is in accordance with the recommended parameters. In the current implementation of HyARM, the application adapter modifies the input parameters to the application that affect application QoS and resource utilization - compression scheme, frame rate, and picture resolution. In our future implementations, we plan to use resource reservation mechanisms such as Differentiated Services [7, 3] and Class-based Kernel Resource Management [4] to provision/reserve network and CPU resources. In our multimedia system, the application adapter ensures that the video is encoded at the recommended frame rate and resolution using the specified compression scheme. 
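To make the division of labor among the three entity types concrete, here is a schematic rendering of one control cycle. None of the class or method names below come from HyARM, and the parameter values are merely illustrative settings; in the real system the entities are distributed CORBA objects rather than local Python classes.

```python
# A schematic, purely illustrative rendering of a HyARM-style control cycle using the
# three entity types described above. All names are invented for this sketch.

class ResourceMonitor:
    """Observes one resource (e.g. CPU of a UAV, wireless link bandwidth)."""
    def __init__(self, name, probe):
        self.name, self.probe = name, probe      # probe: callable returning utilization in [0, 1]
    def sample(self):
        return self.name, self.probe()

class CentralController:
    """Keeps overall utilization near the set point by picking application parameters."""
    def __init__(self, set_point=0.7):
        self.set_point = set_point
    def decide(self, utilizations, app_class):
        overloaded = max(utilizations.values()) > self.set_point
        if overloaded and app_class == "best-effort":
            return {"codec": "MPEG-4", "frame_rate": 15, "resolution": (320, 240)}
        return {"codec": "MPEG-2", "frame_rate": 25, "resolution": (1024, 768)}

class ApplicationAdapter:
    """Pushes the recommended parameters into the running application (the video encoder)."""
    def __init__(self, encoder):
        self.encoder = encoder
    def apply(self, params):
        self.encoder.reconfigure(**params)       # hypothetical encoder interface

# One sampling period, assuming `monitors`, `controller`, and one adapter per application exist:
#   utilizations = dict(m.sample() for m in monitors)
#   for app_class, adapter in adapters.items():
#       adapter.apply(controller.decide(utilizations, app_class))
```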
3.2 Applying HyARM to the Multimedia System Case Study HyARM is built atop TAO [13], a widely used open-source implementation of Real-time CORBA [12]. HyARM can be applied to ensure efficient, predictable and adaptive resource management of any DRE system where resource availability and requirements are subject to dynamic change. Figure 3 shows the interaction of various parts of the DRE multimedia system developed with HyARM, TAO, and TAO's A/V Streaming Service. TAO's A/V Streaming Service is an implementation of the CORBA A/V Streaming Service specification. TAO's A/V Streaming Service is a QoS-enabled video distribution service that can transfer video in real-time to one or more receivers. We use the A/V Streaming Service to transmit the video from the UAVs to the end receivers via the base station. Figure 3: Developing the DRE Multimedia System with HyARM Three entities of HyARM, namely the resource monitors, central controller, and application adapters, are built as CORBA servants, so they can be distributed throughout a DRE system. Resource monitors are remote CORBA objects that update the central controller periodically with the current resource utilization. Application adapters are collocated with applications since the two interact closely. As shown in Figure 3, UAVs compress the data using various compression schemes, such as MPEG-1, MPEG-4, and Real Video, and use TAO's A/V Streaming Service to transmit the video to end receivers. HyARM's resource monitors continuously observe the system resource utilization and notify the central controller with the current utilization. The interaction between the controller and the resource monitors uses the Observer pattern [5]. When the controller receives resource utilization updates from monitors, it computes the necessary modifications to application parameters and notifies application adapters via a remote operation call. Application adapters, which are collocated with the application, modify the input parameters to the application - in our case the video encoder - to modify the application resource utilization and QoS. (The base station is not included in the figure since it only retransmits the video received from UAVs to end receivers.) 4. PERFORMANCE RESULTS AND ANALYSIS This section first describes the testbed that provides the infrastructure for our DRE multimedia system, which was used to evaluate the performance of HyARM. We then describe our experiments and analyze the results obtained to empirically evaluate how HyARM behaves during under- and over-utilization of system resources. 4.1 Overview of the Hardware and Software Testbed Our experiments were performed on the Emulab testbed at the University of Utah. The hardware configuration consists of two nodes acting as UAVs, one acting as the base station, and one as the end receiver. Video from the two UAVs was transmitted to the base station via a LAN configured with the following properties: average packet loss ratio of 0.3 and bandwidth 1 Mbps. The network bandwidth was chosen to be 1 Mbps since each UAV in the DRE multimedia system is allocated 250 Kbps. 
These parameters were chosen to emulate an unreliable wireless network with limited bandwidth between the UAVs and the base station. From the base station, the video was retransmitted to the end receiver via a reliable wireline link of 10 Mbps bandwidth with no packet loss. The hardware configuration of all the nodes was chosen as follows: 600 MHz Intel Pentium III processor, 256 MB physical memory, 4 Intel EtherExpress Pro 10/100 Mbps Ethernet ports, and 13 GB hard drive. A real-time version of Linux - TimeSys Linux/NET 3.1.214 based on RedHat Linux 9 - was used as the operating system for all nodes. The following software packages were also used for our experiments: (1) Ffmpeg 0.4.9-pre1, which is an open-source library (http://www.ffmpeg.sourceforge.net/download.php) that compresses video into MPEG-2, MPEG-4, Real Video, and many other video formats. (2) Iftop 0.16, which is an open-source library (http://www.ex-parrot.com/∼pdw/iftop/) we used for monitoring network activity and bandwidth utilization. (3) ACE 5.4.3 + TAO 1.4.3, which is an open-source (http://www.dre.vanderbilt.edu/TAO) implementation of the Real-time CORBA [12] specification upon which HyARM is built. TAO provides the CORBA Audio/Video (A/V) Streaming Service that we use to transmit the video from the UAVs to end receivers via the base station. 4.2 Experiment Configuration Our experiment consisted of two (emulated) UAVs that simultaneously send video to the base station using the experimentation setup described in Section 4.1. At the base station, video was retransmitted to the end receivers (without any modifications), where it was stored to a file. Each UAV hosted two applications, one QoS-enabled application (emergency response), and one best-effort application (surveillance). Within each UAV, computational power is shared between the applications, while the network bandwidth is shared among all applications. To evaluate the QoS provided by HyARM, we monitored CPU utilization at the two UAVs, and network bandwidth utilization between the UAV and the base station. CPU resource utilization was not monitored at the base station and the end receiver since they performed no computationally intensive operations. The resource utilization of the 10 Mbps physical link between the base station and the end receiver does not affect QoS of applications and is not monitored by HyARM since it is nearly 10 times the 1 Mbps bandwidth of the LAN between the UAVs and the base station. The experiment also monitors properties of the video that affect the QoS of the applications, such as latency, jitter, frame rate, and resolution. The set point on resource utilization for each resource was specified at 0.69, which is the upper bound typically recommended by scheduling techniques, such as the rate monotonic algorithm [9]. Since studies [6] have shown that human eyes can perceive delays of more than 200 ms, we use this as the upper bound on jitter of the received video. QoS requirements for each class of application are specified during system initialization and are shown in Table 1. 4.3 Empirical Results and Analysis This section presents the results obtained from running the experiment described in Section 4.2 on our DRE multimedia system testbed. We used system resource utilization as a metric to evaluate the adaptive resource management capabilities of HyARM under varying input workloads. 
We also used application QoS as a metric to evaluate HyARM's capabilities to support end-to-end QoS requirements of the various classes of applications in the DRE multimedia system. We analyze these results to explain the significant differences in system performance and application QoS. Comparison of system performance is decomposed into comparison of resource utilization and application QoS. For system resource utilization, we compare (1) network bandwidth utilization of the local area network and (2) CPU utilization at the two UAV nodes. For application QoS, we compare mean values of video parameters, including (1) picture resolution, (2) frame rate, (3) latency, and (4) jitter. Comparison of resource utilization. Over-utilization of system resources in DRE systems can yield an unstable system. In contrast, under-utilization of system resources increases system cost. Figure 4 and Figure 5 compare the system resource utilization with and without HyARM. Figure 4 shows that HyARM maintains system utilization close to the desired utilization set point during fluctuation in input workload by transmitting video of higher (or lower) QoS for the QoS-enabled (or best-effort) class of applications during over (or under) utilization of system resources. Figure 5 shows that without HyARM, network utilization was as high as 0.9 during increase in workload conditions, which is greater than the utilization set point of 0.7 by 0.2. As a result of over-utilization of resources, QoS of the received video, such as average latency and jitter, was affected significantly. Without HyARM, system resources were either under-utilized or over-utilized, both of which are undesirable. In contrast, with HyARM, system resource utilization is always close to the desired set point, even during fluctuations in application workload. During sudden fluctuation in application workload, system conditions may be temporarily undesirable, but are restored to the desired condition within several sampling periods. Temporary over-utilization of resources is permissible in our multimedia system since the quality of the video may be degraded for a short period of time, though application QoS will be degraded significantly if poor quality video is transmitted for a longer period of time. Comparison of application QoS. Figure 6, Figure 7, and Table 2 compare latency, jitter, resolution, and frame rate of the received video, respectively.
Table 1: Application QoS Requirements
Class         Resolution   Frame Rate   Latency (msec)   Jitter (msec)
QoS-enabled   1024 x 768   25           200              200
Best-effort   320 x 240    15           300              250
Figure 4: Resource utilization with HyARM
Figure 5: Resource utilization without HyARM
Table 2 shows that HyARM increases the resolution and frame rate of QoS-enabled applications, but decreases the resolution and frame rate of best-effort applications. During over-utilization of system resources, resolution and frame rate of lower priority applications are reduced to adapt to fluctuations in application workload and to maintain the utilization of resources at the specified set point. It can be seen from Figure 6 and Figure 7 that HyARM reduces the latency and jitter of the received video significantly. These figures show that the QoS of QoS-enabled applications is greatly improved by HyARM. Although application parameters, such as frame rate and resolution, which affect the soft QoS requirements of best-effort applications, may be compromised, the hard QoS requirements, such as latency and jitter, of all applications are met. 
HyARM responds to fluctuation in resource availability and/or demand by constant monitoring of resource utilization. As shown in Figure 4, when resource utilization increases above the desired set point, HyARM lowers the utilization by reducing the QoS of best-effort applications. This adaptation ensures that enough resources are available for QoS-enabled applications to meet their QoS needs. Figures 6 and 7 show that the values of latency and jitter of the received video of the system with HyARM are nearly half of the corresponding values of the system without HyARM. With HyARM, values of these parameters are well below the specified bounds, whereas without HyARM, these values are significantly above the specified bounds due to over-utilization of the network bandwidth, which leads to network congestion and results in packet loss. HyARM avoids this by reducing video parameters such as resolution, frame-rate, and/or modifying the compression scheme used to compress the video. Our conclusions from analyzing the results described above are that applying adaptive middleware via hybrid control to DRE systems helps to (1) improve application QoS, (2) increase system resource utilization, and (3) provide better predictability (lower latency and inter-frame delay) to QoS-enabled applications. These improvements are achieved largely due to monitoring of system resource utilization, efficient system workload management, and adaptive resource provisioning by means of HyARM's network/CPU resource monitors, application adapter, and central controller, respectively. 5. RELATED WORK A number of control theoretic approaches have been applied to DRE systems recently. These techniques aid in overcoming limitations with traditional scheduling approaches that handle dynamic changes in resource availability poorly and result in a rigidly scheduled system that adapts poorly to change. A survey of these techniques is presented in [1]. One such approach is feedback control scheduling (FCS) [2, 11]. FCS algorithms dynamically adjust resource allocation by means of software feedback control loops. FCS algorithms are modeled and designed using rigorous control-theoretic methodologies. These algorithms provide robust and analytical performance assurances despite uncertainties in resource availability and/or demand. Although existing FCS algorithms have shown promise, these algorithms often assume that the system has continuous control variable(s) that can continuously be adjusted. While this assumption holds for certain classes of systems, there are many classes of DRE systems, such as avionics and total-ship computing environments, that only support a finite a priori set of discrete configurations. The control variables in such systems are therefore intrinsically discrete. HyARM handles both continuous control variables, such as picture resolution, and discrete control variables, such as a discrete set of frame rates. HyARM can therefore be applied to systems that support continuous and/or discrete sets of control variables. The DRE multimedia system as described in Section 2 is an example DRE system that offers both continuous (picture resolution) and discrete (frame rate) control variables. These variables are modified by HyARM to achieve efficient resource utilization and improved application QoS. 6. 
CONCLUDING REMARKS
Figure 6: Comparison of Video Latency
Figure 7: Comparison of Video Jitter
Table 2: Comparison of Video Quality (Picture Size / Frame Rate)
Source                          With HyARM         Without HyARM
UAV1 QoS-enabled Application    1122 X 1496 / 25   960 X 720 / 20
UAV1 Best-effort Application    288 X 384 / 15     640 X 480 / 20
UAV2 QoS-enabled Application    1126 X 1496 / 25   960 X 720 / 20
UAV2 Best-effort Application    288 X 384 / 15     640 X 480 / 20
Many distributed real-time and embedded (DRE) systems demand end-to-end quality of service (QoS) enforcement from their underlying platforms to operate correctly. These systems increasingly run in open environments, where resource availability is subject to dynamic change. To meet end-to-end QoS in dynamic environments, DRE systems can benefit from an adaptive middleware that monitors system resources, performs efficient application workload management, and enables efficient resource provisioning for executing applications. This paper described HyARM, an adaptive middleware that provides effective resource management to DRE systems. HyARM employs hybrid control techniques to provide the adaptive middleware capabilities, such as resource monitoring and application adaptation, that are key to providing the dynamic resource management capabilities for open DRE systems. We applied HyARM to a representative DRE multimedia system that is implemented using Real-time CORBA and the CORBA A/V Streaming Service. We evaluated the performance of HyARM in a system composed of three distributed resources and two classes of applications with two applications each. Our empirical results indicate that HyARM ensures (1) efficient resource utilization by maintaining the resource utilization of system resources within the specified utilization bounds, and (2) that QoS requirements of QoS-enabled applications are met at all times. Overall, HyARM ensures efficient, predictable, and adaptive resource management for DRE systems. 7. REFERENCES [1] T. F. Abdelzaher, J. Stankovic, C. Lu, R. Zhang, and Y. Lu. Feedback Performance Control in Software Services. IEEE: Control Systems, 23(3), June 2003. [2] L. Abeni, L. Palopoli, G. Lipari, and J. Walpole. Analysis of a reservation-based feedback scheduler. In IEEE Real-Time Systems Symposium, Dec. 2002. [3] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss. An architecture for differentiated services. Network Information Center RFC 2475, Dec. 1998. [4] H. Franke, S. Nagar, C. Seetharaman, and V. Kashyap. Enabling Autonomic Workload Management in Linux. In Proceedings of the International Conference on Autonomic Computing (ICAC), New York, New York, May 2004. IEEE. [5] E. Gamma, R. Helm, R. Johnson, and J. Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, Reading, MA, 1995. [6] G. Ghinea and J. P. Thomas. QoS impact on user perception and understanding of multimedia video clips. In MULTIMEDIA '98: Proceedings of the sixth ACM international conference on Multimedia, pages 49-54, Bristol, United Kingdom, 1998. ACM Press. [7] Internet Engineering Task Force. Differentiated Services Working Group (diffserv) Charter. www.ietf.org/html.charters/diffserv-charter.html, 2000. [8] X. Koutsoukos, R. Tekumalla, B. Natarajan, and C. Lu. Hybrid Supervisory Control of Real-Time Systems. In 11th IEEE Real-Time and Embedded Technology and Applications Symposium, San Francisco, California, Mar. 2005. [9] J. Lehoczky, L. Sha, and Y. Ding. 
The Rate Monotonic Scheduling Algorithm: Exact Characterization and Average Case Behavior. In Proceedings of the 10th IEEE Real-Time Systems Symposium (RTSS 1989), pages 166-171. IEEE Computer Society Press, 1989. [10] J. Loyall, J. Gossett, C. Gill, R. Schantz, J. Zinky, P. Pal, R. Shapiro, C. Rodrigues, M. Atighetchi, and D. Karr. Comparing and Contrasting Adaptive Middleware Support in Wide-Area and Embedded Distributed Object Applications. In Proceedings of the 21st International Conference on Distributed Computing Systems (ICDCS-21), pages 625-634. IEEE, Apr. 2001. [11] C. Lu, J. A. Stankovic, G. Tao, and S. H. Son. Feedback Control Real-Time Scheduling: Framework, Modeling, and Algorithms. Real-Time Systems Journal, 23(1/2):85-126, July 2002. [12] Object Management Group. Real-time CORBA Specification, OMG Document formal/02-08-02 edition, Aug. 2002. [13] D. C. Schmidt, D. L. Levine, and S. Mungee. The Design and Performance of Real-Time Object Request Brokers. Computer Communications, 21(4):294-324, Apr. 1998. [14] T. Sikora. Trends and Perspectives in Image and Video Coding. In Proceedings of the IEEE, Jan. 2005. [15] X. Wang, H.-M. Huang, V. Subramonian, C. Lu, and C. Gill. CAMRIT: Control-based Adaptive Middleware for Real-time Image Transmission. In Proc. of the 10th IEEE Real-Time and Embedded Tech. and Applications Symp. (RTAS), Toronto, Canada, May 2004.
Evaluating Adaptive Resource Management for Distributed Real-Time Embedded Systems (Contact author: nshankar@dre.vanderbilt.edu) ABSTRACT A challenging problem faced by researchers and developers of distributed real-time and embedded (DRE) systems is devising and implementing effective adaptive resource management strategies that can meet end-to-end quality of service (QoS) requirements in varying operational conditions. This paper presents two contributions to research in adaptive resource management for DRE systems. First, we describe the structure and functionality of the Hybrid Adaptive Resource-management Middleware (HyARM), which provides adaptive resource management using hybrid control techniques for adapting to workload fluctuations and resource availability. Second, we evaluate the adaptive behavior of HyARM via experiments on a DRE multimedia system that distributes video in real-time. Our results indicate that HyARM yields predictable, stable, and high system performance, even in the face of fluctuating workload and resource availability. 1. INTRODUCTION Achieving end-to-end real-time quality of service (QoS) is particularly important for open distributed real-time and embedded (DRE) systems that face resource constraints, such as limited computing power and network bandwidth. Overutilization of these system resources can yield unpredictable and unstable behavior, whereas under-utilization can yield excessive system cost. A promising approach to meeting these end-to-end QoS requirements effectively, therefore, is to develop and apply adaptive middleware [10, 15], which is software whose functional and QoS-related properties can be modified either statically or dynamically. Static modifications are carried out to reduce footprint, leverage capabilities that exist in specific platforms, enable functional subsetting, and/or minimize hardware/software infrastructure dependencies. Objectives of dynamic modifications include optimizing system responses to changing environments or requirements, such as changing component interconnections, power-levels, CPU and network bandwidth availability, latency/jitter, and workload. In open DRE systems, adaptive middleware must make such modifications dependably, i.e., while meeting stringent end-to-end QoS requirements, which requires the specification and enforcement of upper and lower bounds on system resource utilization to ensure effective use of system resources. To meet these requirements, we have developed the Hybrid Adaptive Resource-management Middleware (HyARM), which is an open-source distributed resource management middleware. HyARM is based on hybrid control theoretic techniques [8], which provide a theoretical framework for designing control of complex systems with both continuous and discrete dynamics. In our case study, which involves a distributed real-time video distribution system, the task of adaptive resource management is to control the utilization of the different resources, whose utilizations are described by continuous variables. We achieve this by adapting the resolution of the transmitted video, which is modeled as a continuous variable, and by changing the frame-rate and the compression, which are modeled by discrete actions. We have implemented HyARM atop The ACE ORB (TAO) [13], which is an implementation of the Real-time CORBA specification [12]. 
Our results show that (1) HyARM ensures effective system resource utilization and (2) end-to-end QoS requirements of higher priority applications are met, even in the face of fluctuations in workload. The remainder of the paper is organized as follows: Section 2 describes the architecture, functionality, and resource utilization model of our DRE multimedia system case study; Section 3 explains the structure and functionality of HyARM; Section 4 evaluates the adaptive behavior of HyARM via experiments on our multimedia system case study; Section 5 compares our research on HyARM with related work; and Section 6 presents concluding remarks. (The code and examples for HyARM are available at www.dre.vanderbilt.edu/∼nshankar/HyARM/.) 2. CASE STUDY: DRE MULTIMEDIA SYSTEM This section describes the architecture and QoS requirements of our DRE multimedia system. 2.1 Multimedia System Architecture Figure 1: DRE Multimedia System Architecture The architecture for our DRE multimedia system is shown in Figure 1 and consists of the following entities: (1) Data source (video capture by UAV), where video is captured (related to subject of interest) by camera(s) on each UAV, followed by encoding of raw video using a specific encoding scheme and transmitting the video to the next stage in the pipeline. (2) Data distributor (base station), where the video is processed to remove noise, followed by retransmission of the processed video to the next stage in the pipeline. (3) Sinks (command and control center), where the received video is again processed to remove noise, then decoded and finally rendered to end user via graphical displays. Significant improvements in video encoding/decoding and (de)compression techniques have been made as a result of recent advances in video encoding and compression techniques [14]. Common video compression schemes are MPEG-1, MPEG-2, Real Video, and MPEG-4. Each compression scheme is characterized by its resource requirement, e.g., the computational power to (de)compress the video signal and the network bandwidth required to transmit the compressed video signal. Properties of the compressed video, such as resolution and frame-rate, determine both the quality and the resource requirements of the video. Our multimedia system case study has the following end-to-end real-time QoS requirements: (1) latency, (2) interframe delay (also known as jitter), (3) frame rate, and (4) picture resolution. These QoS requirements can be classified as being either hard or soft. Hard QoS requirements should be met by the underlying system at all times, whereas soft QoS requirements can be missed occasionally. For our case study, we treat QoS requirements such as latency and jitter as harder QoS requirements and strive to meet these requirements at all times. In contrast, we treat QoS requirements such as video frame rate and picture resolution as softer QoS requirements and modify these video properties adaptively to handle dynamic changes in resource availability effectively. (Although hard and soft are often portrayed as two discrete requirement sets, in practice they are usually two ends of a continuum ranging from "softer" to "harder" rather than two disjoint points.) 2.2 DRE Multimedia System Resources There are two primary types of resources in our DRE multimedia system: (1) processors that provide computational power available at the UAVs, base stations, and end receivers and (2) network links that provide communication bandwidth between UAVs, base stations, and end receivers.
The computing power required by the video capture and encoding tasks depends on dynamic factors, such as speed of the UAV, speed of the subject (if the subject is mobile), and distance between UAV and the subject. The wireless network bandwidth available to transmit video captured by UAVs to base stations also depends on the wireless connectivity between the UAVs and the base station, which in turn depends on dynamic factors such as the speed of the UAVs and the relative distance between UAVs and base stations. The bandwidth of the link between the base station and the end receiver is limited, but more stable than the bandwidth of the wireless network. Resource requirements and availability of resources are subject to dynamic changes. Two classes of applications--QoS-enabled and best-effort--use the multimedia system infrastructure described above to transmit video to their respective receivers. The QoS-enabled class of applications has higher priority than the best-effort class. In our study, emergency response applications belong to the QoS-enabled class and surveillance applications to the best-effort class. For example, since a stream from an emergency response application is of higher importance than a video stream from a surveillance application, it receives more resources end-to-end. Since resource availability significantly affects QoS, we use current resource utilization as the primary indicator of system performance. We refer to the current level of system resource utilization as the system condition. Based on this definition, we can classify system conditions as being either under, over, or effectively utilized. Under-utilization of system resources occurs when the current resource utilization is lower than the desired lower bound on resource utilization. In this system condition, residual system resources (i.e., network bandwidth and computational power) are available in large amounts after meeting end-to-end QoS requirements of applications. These residual resources can be used to increase the QoS of the applications. For example, residual CPU and network bandwidth can be used to deliver better quality video (e.g., with greater resolution and higher frame rate) to end receivers. Over-utilization of system resources occurs when the current resource utilization is higher than the desired upper bound on resource utilization. This condition can arise from loss of resources - network bandwidth and/or computing power at the base station, end receiver, or UAV - or may be due to an increase in resource demands by applications. Over-utilization is generally undesirable since the quality of the received video (such as resolution and frame rate) and timeliness properties (such as latency and jitter) are degraded and may result in an unstable (and thus ineffective) system. Effective resource utilization is the desired system condition since it ensures that end-to-end QoS requirements of the UAV-based multimedia system are met and utilization of both system resources, i.e., network bandwidth and computational power, are within their desired utilization bounds. Section 3 describes techniques we applied to achieve effective utilization, even in the face of fluctuating resource availability and/or demand. 3. OVERVIEW OF HYARM This section describes the architecture of the Hybrid Adaptive Resource-management Middleware (HyARM). 
HyARM ensures efficient and predictable system performance by providing adaptive resource management, including monitoring of system resources and enforcing bounds on application resource utilization. 3.1 HyARM Structure and Functionality Figure 2: HyARM Architecture HyARM is composed of three types of entities shown in Figure 2 and described below: Resource monitors observe the overall resource utilization for each type of resource and resource utilization per application. In our multimedia system, there are resource monitors for CPU utilization and network bandwidth. CPU monitors observe the CPU resource utilization of UAVs, base station, and end receivers. Network bandwidth monitors observe the network resource utilization of (1) the wireless network link between UAVs and the base station and (2) the wired network link between the base station and end receivers. The central controller maintains the system resource utilization below a desired bound by (1) processing periodic updates it receives from resource monitors and (2) modifying the execution of applications accordingly, e.g., by using different execution algorithms or operating the application with increased/decreased QoS. This adaptation process ensures that system resources are utilized efficiently and end-to-end application QoS requirements are met. In our multimedia system, the HyARM controller determines the value of application parameters such as (1) video compression schemes, such as Real Video and MPEG-4, and/or (2) frame rate, and (3) picture resolution. From the perspective of hybrid control theoretic techniques [8], the different video compression schemes and frame rate form the discrete variables of application execution and picture resolution forms the continuous variable. Application adapters modify application execution according to parameters recommended by the controller and ensure that the operation of the application is in accordance with the recommended parameters. In the current implementation of HyARM, the application adapter modifies the input parameters to the application that affect application QoS and resource utilization - compression scheme, frame rate, and picture resolution. In our future implementations, we plan to use resource reservation mechanisms such as Differentiated Services [7, 3] and Class-based Kernel Resource Management [4] to provision/reserve network and CPU resources. In our multimedia system, the application adapter ensures that the video is encoded at the recommended frame rate and resolution using the specified compression scheme. 3.2 Applying HyARM to the Multimedia System Case Study HyARM is built atop TAO [13], a widely used open-source implementation of Real-time CORBA [12]. HyARM can be applied to ensure efficient, predictable and adaptive resource management of any DRE system where resource availability and requirements are subject to dynamic change. Figure 3 shows the interaction of various parts of the DRE multimedia system developed with HyARM, TAO, and TAO's A/V Streaming Service. TAO's A/V Streaming Service is an implementation of the CORBA A/V Streaming Service specification. TAO's A/V Streaming Service is a QoS-enabled video distribution service that can transfer video in real-time to one or more receivers. We use the A/V Streaming Service to transmit the video from the UAVs to the end receivers via the base station.
Figure 3: Developing the DRE Multimedia System with HyARM. Three entities of HyARM, namely the resource monitors, the central controller, and the application adapters, are built as CORBA servants, so they can be distributed throughout a DRE system. Resource monitors are remote CORBA objects that periodically update the central controller with the current resource utilization. Application adapters are collocated with applications since the two interact closely. As shown in Figure 3, UAVs compress the data using various compression schemes, such as MPEG-1, MPEG-4, and Real Video, and use TAO's A/V Streaming Service to transmit the video to end receivers. HyARM's resource monitors continuously observe the system resource utilization and notify the central controller with the current utilization (the base station is not included in the figure since it only retransmits the video received from UAVs to end receivers). The interaction between the controller and the resource monitors uses the Observer pattern [5]. When the controller receives resource utilization updates from the monitors, it computes the necessary modifications to application parameters and notifies the application adapter(s) via a remote operation call. The application adapter(s), which are collocated with the application, modify the input parameters of the application - in our case the video encoder - to adjust the application's resource utilization and QoS. 4. PERFORMANCE RESULTS AND ANALYSIS This section first describes the testbed that provides the infrastructure for our DRE multimedia system, which was used to evaluate the performance of HyARM. We then describe our experiments and analyze the results obtained to empirically evaluate how HyARM behaves during under- and over-utilization of system resources. 4.1 Overview of the Hardware and Software Testbed Our experiments were performed on the Emulab testbed at the University of Utah. The hardware configuration consists of two nodes acting as UAVs, one acting as the base station, and one as the end receiver. Video from the two UAVs was transmitted to the base station via a LAN configured with the following properties: an average packet loss ratio of 0.3 and a bandwidth of 1 Mbps. The network bandwidth was chosen to be 1 Mbps since each UAV in the DRE multimedia system is allocated 250 Kbps. These parameters were chosen to emulate an unreliable wireless network with limited bandwidth between the UAVs and the base station. From the base station, the video was retransmitted to the end receiver via a reliable wireline link of 10 Mbps bandwidth with no packet loss. The hardware configuration of all the nodes was chosen as follows: 600 MHz Intel Pentium III processor, 256 MB physical memory, 4 Intel EtherExpress Pro 10/100 Mbps Ethernet ports, and a 13 GB hard drive. A real-time version of Linux - TimeSys Linux/NET 3.1.214 based on RedHat Linux 9 - was used as the operating system for all nodes. The following software packages were also used for our experiments: (1) Ffmpeg 0.4.9-pre1, an open-source library (http://www.ffmpeg.sourceforge.net/download.php) that compresses video into MPEG-2, MPEG-4, Real Video, and many other video formats. (2) Iftop 0.16, an open-source library (http://www.ex-parrot.com/∼pdw/iftop/) we used for monitoring network activity and bandwidth utilization. (3) ACE 5.4.3 + TAO 1.4.3, an open-source (http://www.dre.vanderbilt.edu/TAO) implementation of the Real-time CORBA [12] specification upon which HyARM is built. 
TAO provides the CORBA Audio/Video (A/V) Streaming Service that we use to transmit the video from the UAVs to end receivers via the base station. 4.2 Experiment Configuration Our experiment consisted of two (emulated) UAVs that simultaneously sent video to the base station using the experimentation setup described in Section 4.1. At the base station, the video was retransmitted to the end receivers (without any modifications), where it was stored to a file. Each UAV hosted two applications, one QoS-enabled application (emergency response) and one best-effort application (surveillance). Within each UAV, computational power is shared between the applications, while the network bandwidth is shared among all applications. To evaluate the QoS provided by HyARM, we monitored CPU utilization at the two UAVs and network bandwidth utilization between the UAVs and the base station. CPU resource utilization was not monitored at the base station and the end receiver since they performed no computationally intensive operations. The resource utilization of the 10 Mbps physical link between the base station and the end receiver does not affect the QoS of applications and is not monitored by HyARM since it is nearly 10 times the 1 Mbps bandwidth of the LAN between the UAVs and the base station. The experiment also monitors properties of the video that affect the QoS of the applications, such as latency, jitter, frame rate, and resolution. The set point on resource utilization for each resource was specified at 0.69, which is the upper bound typically recommended by scheduling techniques such as the rate monotonic algorithm [9]. Since studies [6] have shown that human eyes can perceive delays of more than 200 ms, we use this as the upper bound on the jitter of the received video. The QoS requirements for each class of application are specified during system initialization and shown in Table 1. 4.3 Empirical Results and Analysis This section presents the results obtained from running the experiment described in Section 4.2 on our DRE multimedia system testbed. We used system resource utilization as a metric to evaluate the adaptive resource management capabilities of HyARM under varying input workloads. We also used application QoS as a metric to evaluate HyARM's capabilities to support the end-to-end QoS requirements of the various classes of applications in the DRE multimedia system. We analyze these results to explain the significant differences in system performance and application QoS. Comparison of system performance is decomposed into a comparison of resource utilization and of application QoS. For system resource utilization, we compare (1) network bandwidth utilization of the local area network and (2) CPU utilization at the two UAV nodes. For application QoS, we compare mean values of video parameters, including (1) picture resolution, (2) frame rate, (3) latency, and (4) jitter. Comparison of resource utilization. Over-utilization of system resources in DRE systems can yield an unstable system. In contrast, under-utilization of system resources increases system cost. Figure 4 and Figure 5 compare the system resource utilization with and without HyARM. Figure 4 shows that HyARM maintains system utilization close to the desired utilization set point during fluctuations in the input workload, by transmitting lower-QoS video for the best-effort class during over-utilization and higher-QoS video for the QoS-enabled class during under-utilization of system resources. 
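The latency and jitter values compared in the following paragraphs are properties of the received video stream. As a purely illustrative sketch (the paper does not give its measurement code, and this uses one common definition of jitter as the variation of consecutive end-to-end latencies), such metrics can be derived from per-frame send/receive timestamps and checked against the 200 ms bound quoted above:

```python
from statistics import mean

def latency_and_jitter(send_ts, recv_ts):
    """Mean end-to-end latency and jitter (mean absolute difference of
    consecutive per-frame latencies), with timestamps given in seconds."""
    latencies = [r - s for s, r in zip(send_ts, recv_ts)]
    deltas = [abs(latencies[i] - latencies[i - 1]) for i in range(1, len(latencies))]
    return mean(latencies), (mean(deltas) if deltas else 0.0)

if __name__ == "__main__":
    # Hypothetical timestamps for five frames sent at a 10 fps pace.
    send = [0.0, 0.1, 0.2, 0.3, 0.4]
    recv = [0.25, 0.33, 0.47, 0.52, 0.71]
    lat, jit = latency_and_jitter(send, recv)
    print(f"mean latency = {lat * 1000:.0f} ms, mean jitter = {jit * 1000:.0f} ms")
    assert jit <= 0.2, "jitter exceeds the 200 ms bound used in the experiments"
```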
Figure 5 shows that without HyARM, network utilization was as high as 0.9 during increases in workload, which is 0.2 greater than the utilization set point of 0.7. As a result of this over-utilization of resources, the QoS of the received video, such as average latency and jitter, was affected significantly. Without HyARM, system resources were either under-utilized or over-utilized, both of which are undesirable. In contrast, with HyARM, system resource utilization is always close to the desired set point, even during fluctuations in application workload. During sudden fluctuations in application workload, system conditions may be temporarily undesirable, but they are restored to the desired condition within several sampling periods. Temporary over-utilization of resources is permissible in our multimedia system since the quality of the video may be degraded for a short period of time, though application QoS will be degraded significantly if poor quality video is transmitted for a longer period of time. Comparison of application QoS. Figure 6, Figure 7, and Table 2 compare the latency, jitter, resolution, and frame rate of the received video, respectively. (Table 1: Application QoS Requirements. Figure 4: Resource utilization with HyARM. Figure 5: Resource utilization without HyARM.) Table 2 shows that HyARM increases the resolution and frame rate of QoS-enabled applications, but decreases the resolution and frame rate of best-effort applications. During over-utilization of system resources, the resolution and frame rate of lower priority applications are reduced to adapt to fluctuations in application workload and to maintain the utilization of resources at the specified set point. It can be seen from Figure 6 and Figure 7 that HyARM reduces the latency and jitter of the received video significantly. These figures show that the QoS of QoS-enabled applications is greatly improved by HyARM. Although application parameters such as frame rate and resolution, which affect the soft QoS requirements of best-effort applications, may be compromised, the hard QoS requirements, such as latency and jitter, of all applications are met. HyARM responds to fluctuations in resource availability and/or demand by constant monitoring of resource utilization. As shown in Figure 4, when resource utilization increases above the desired set point, HyARM lowers the utilization by reducing the QoS of best-effort applications. This adaptation ensures that enough resources are available for QoS-enabled applications to meet their QoS needs. Figures 6 and 7 show that the latency and jitter of the received video with HyARM are nearly half the corresponding values for the system without HyARM. With HyARM, these values are well below the specified bounds, whereas without HyARM they are significantly above the specified bounds due to over-utilization of the network bandwidth, which leads to network congestion and results in packet loss. HyARM avoids this by reducing video parameters such as resolution and frame rate, and/or by modifying the compression scheme used to compress the video. Our conclusions from analyzing the results described above are that applying adaptive middleware via hybrid control to DRE systems helps to (1) improve application QoS, (2) increase system resource utilization, and (3) provide better predictability (lower latency and inter-frame delay) to QoS-enabled applications. 
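This adaptation rests on the periodic monitoring noted just below. As a minimal, illustrative sketch (the real monitors are distributed CORBA objects built on TAO; the names and sampling period here are hypothetical), a resource monitor can be modeled as a loop that samples a utilization source and pushes each sample to a controller callback in observer style:

```python
import threading
import time

class ResourceMonitor:
    """Periodically samples a utilization source and notifies an observer,
    mirroring the monitor -> controller update path described earlier."""

    def __init__(self, name, sample_fn, observer, period_s=1.0):
        self.name = name              # e.g. "uav1-cpu" or "wlan-bandwidth"
        self.sample_fn = sample_fn    # callable returning utilization in [0, 1]
        self.observer = observer      # callable(name, utilization)
        self.period_s = period_s
        self._stop = threading.Event()

    def _run(self):
        while not self._stop.is_set():
            self.observer(self.name, self.sample_fn())
            time.sleep(self.period_s)

    def start(self):
        threading.Thread(target=self._run, daemon=True).start()

    def stop(self):
        self._stop.set()

if __name__ == "__main__":
    def controller_update(name, utilization):
        # A real controller would trigger adaptation when the set point is crossed.
        print(f"{name}: utilization={utilization:.2f}")

    monitor = ResourceMonitor("wlan-bandwidth", lambda: 0.72, controller_update, period_s=0.2)
    monitor.start()
    time.sleep(0.7)
    monitor.stop()
```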
These improvements are achieved largely due to the monitoring of system resource utilization, efficient system workload management, and adaptive resource provisioning by means of HyARM's network/CPU resource monitors, application adapters, and central controller, respectively. 5. RELATED WORK A number of control-theoretic approaches have been applied to DRE systems recently. These techniques aid in overcoming limitations of traditional scheduling approaches that handle dynamic changes in resource availability poorly and result in a rigidly scheduled system that adapts poorly to change. A survey of these techniques is presented in [1]. One such approach is feedback control scheduling (FCS) [2, 11]. FCS algorithms dynamically adjust resource allocation by means of software feedback control loops. FCS algorithms are modeled and designed using rigorous control-theoretic methodologies. These algorithms provide robust and analytical performance assurances despite uncertainties in resource availability and/or demand. Although existing FCS algorithms have shown promise, they often assume that the system has continuous control variable(s) that can be adjusted continuously. While this assumption holds for certain classes of systems, there are many classes of DRE systems, such as avionics and total-ship computing environments, that only support a finite, a priori set of discrete configurations. The control variables in such systems are therefore intrinsically discrete. HyARM handles both continuous control variables, such as picture resolution, and discrete control variables, such as a discrete set of frame rates. HyARM can therefore be applied to systems that support continuous and/or discrete sets of control variables. The DRE multimedia system described in Section 2 is an example DRE system that offers both continuous (picture resolution) and discrete (frame rate) control variables. These variables are modified by HyARM to achieve efficient resource utilization and improved application QoS. 6. CONCLUDING REMARKS (Figure 6: Comparison of Video Latency. Figure 7: Comparison of Video Jitter. Table 2: Comparison of Video Quality.) Many distributed real-time and embedded (DRE) systems demand end-to-end quality of service (QoS) enforcement from their underlying platforms to operate correctly. These systems increasingly run in open environments, where resource availability is subject to dynamic change. To meet end-to-end QoS in dynamic environments, DRE systems can benefit from an adaptive middleware that monitors system resources, performs efficient application workload management, and enables efficient resource provisioning for executing applications. This paper described HyARM, an adaptive middleware that provides effective resource management to DRE systems. HyARM employs hybrid control techniques to provide the adaptive middleware capabilities, such as resource monitoring and application adaptation, that are key to providing dynamic resource management capabilities for open DRE systems. We applied HyARM to a representative DRE multimedia system that is implemented using Real-time CORBA and the CORBA A/V Streaming Service. We evaluated the performance of HyARM in a system composed of three distributed resources and two classes of applications with two applications each. 
Our empirical results indicate that HyARM ensures (1) efficient resource utilization by maintaining the resource utilization of system resources within the specified utilization bounds, (2) QoS requirements of QoS-enabled applications are met at all times. Overall, HyARM ensures efficient, predictable, and adaptive resource management for DRE systems.
Evaluating Adaptive Resource Management for Distributed Real-Time Embedded Systems ABSTRACT A challenging problem faced by researchers and developers of distributed real-time and embedded (DRE) systems is devising and implementing effective adaptive resource management strategies that can meet end-to-end quality of service (QoS) requirements in varying operational conditions. This paper presents two contributions to research in adaptive resource management for DRE systems. First, we describe the structure and functionality of the Hybrid Adaptive Resourcemanagement Middleware (HyARM), which provides adaptive resource management using hybrid control techniques for adapting to workload fluctuations and resource availability. Second, we evaluate the adaptive behavior of HyARM via experiments on a DRE multimedia system that distributes video in real-time. Our results indicate that HyARM yields predictable, stable, and high system performance, even in the face of fluctuating workload and resource availability. 1. INTRODUCTION Achieving end-to-end real-time quality of service (QoS) is particularly important for open distributed real-time and embedded (DRE) systems that face resource constraints, such as limited computing power and network bandwidth. Overutilization of these system resources can yield unpredictable and unstable behavior, whereas under-utilization can yield excessive system cost. A promising approach to meeting ∗ Contact author:nshankar@dre.vanderbilt.edu these end-to-end QoS requirements effectively, therefore, is to develop and apply adaptive middleware [10, 15], which is software whose functional and QoS-related properties can be modified either statically or dynamically. Static modifications are carried out to reduce footprint, leverage capabilities that exist in specific platforms, enable functional subsetting, and/or minimize hardware/software infrastructure dependencies. Objectives of dynamic modifications include optimizing system responses to changing environments or requirements, such as changing component interconnections, power-levels, CPU and network bandwidth availability, latency/jitter, and workload. In open DRE systems, adaptive middleware must make such modifications dependably, i.e., while meeting stringent end-to-end QoS requirements, which requires the specification and enforcement of upper and lower bounds on system resource utilization to ensure effective use of system resources. To meet these requirements, we have developed the Hybrid Adaptive Resource-management Middleware (HyARM), which is an open-source' distributed resource management middleware. HyARM is based on hybrid control theoretic techniques [8], which provide a theoretical framework for designing control of complex system with both continuous and discrete dynamics. In our case study, which involves a distributed real-time video distribution system, the task of adaptive resource management is to control the utilization of the different resources, whose utilizations are described by continuous variables. We achieve this by adapting the resolution of the transmitted video, which is modeled as a continuous variable, and by changing the frame-rate and the compression, which are modeled by discrete actions. We have implemented HyARM atop The ACE ORB (TAO) [13], which is an implementation of the Real-time CORBA specification [12]. 
Our results show that (1) HyARM ensures effective system resource utilization and (2) end-to-end QoS requirements of higher priority applications are met, even in the face of fluctuations in workload. The remainder of the paper is organized as follows: Section 2 describes the architecture, functionality, and resource utilization model of our DRE multimedia system case study; Section 3 explains the structure and functionality of HyARM; Section 4 evaluates the adaptive behavior of HyARM via experiments on our multimedia system case study; Section 5 compares our research on HyARM with related work; and Section 6 presents concluding remarks . ' The code and examples for HyARM are available at www. dre.vanderbilt.edu/∼nshankar/HyARM/. 2. CASE STUDY: DRE MULTIMEDIA SYSTEM 2.1 Multimedia System Architecture 2.2 DRE Multimedia System Rresources 3. OVERVIEW OF HYARM 3.1 HyARM Structure and Functionality 3.2 Applying HyARM to the Multimedia System Case Study 4. PERFORMANCE RESULTS AND ANALYSIS 4.1 Overview of the Hardware and Software Testbed 4.2 Experiment Configuration 4.3 Empirical Results and Analysis 5. RELATED WORK A number of control theoretic approaches have been applied to DRE systems recently. These techniques aid in overcoming limitations with traditional scheduling approaches that handle dynamic changes in resource availability poorly and result in a rigidly scheduled system that adapts poorly to change. A survey of these techniques is presented in [1]. One such approach is feedback control scheduling (FCS) [2, 11]. FCS algorithms dynamically adjust resource allocation by means of software feedback control loops. FCS algorithms are modeled and designed using rigorous controltheoretic methodologies. These algorithms provide robust and analytical performance assurances despite uncertainties in resource availability and/or demand. Although existing FCS algorithms have shown promise, these algorithms often assume that the system has continuous control variable (s) that can continuously be adjusted. While this assumption holds for certain classes of systems, there are many classes of DRE systems, such as avionics and total-ship computing environments that only support a finite a priori set of discrete configurations. The control variables in such systems are therefore intrinsically discrete. HyARM handles both continuous control variables, such as picture resolution, and discrete control variable, such as discrete set of frame rates. HyARM can therefore be applied to system that support continuous and/or discrete set of control variables. The DRE multimedia system as described in Section 2 is an example DRE system that offers both continuous (picture resolution) and discrete set (frame-rate) of control variables. These variables are modified by HyARM to achieve efficient resource utilization and improved application QoS. 6. CONCLUDING REMARKS Article 7 Figure 6: Comparison of Video Latency Figure 7: Comparison of Video Jitter Table 2: Comparison of Video Quality Many distributed real-time and embedded (DRE) systems demand end-to-end quality of service (QoS) enforcement from their underlying platforms to operate correctly. These systems increasingly run in open environments, where resource availability is subject to dynamic change. To meet end-to-end QoS in dynamic environments, DRE systems can benefit from an adaptive middleware that monitors system resources, performs efficient application workload management, and enables efficient resource provisioning for executing applications. 
This paper described HyARM, an adaptive middleware, that provides effective resource management to DRE systems. HyARM employs hybrid control techniques to provide the adaptive middleware capabilities, such as resource monitoring and application adaptation that are key to providing the dynamic resource management capabilities for open DRE systems. We employed HyARM to a representative DRE multimedia system that is implemented using Real-time CORBA and CORBA A/V Streaming Service. We evaluated the performance of HyARM in a system composed of three distributed resources and two classes of applications with two applications each. Our empirical results indicate that HyARM ensures (1) efficient resource utilization by maintaining the resource utilization of system resources within the specified utilization bounds, (2) QoS requirements of QoS-enabled applications are met at all times. Overall, HyARM ensures efficient, predictable, and adaptive resource management for DRE systems.
C-55
Context Awareness for Group Interaction Support
In this paper, we present an implemented system for supporting group interaction in mobile distributed computing environments. First, an introduction to context computing and a motivation for using contextual information to facilitate group interaction is given. We then present the architecture of our system, which consists of two parts: a subsystem for location sensing that acquires information about the location of users as well as spatial proximities between them, and one for the actual context-aware application, which provides services for group interaction.
[ "context awar", "group interact", "locat sens", "softwar framework", "contextawar", "mobil system", "fifth contextdimens group-context", "xml configur file", "event-condit-action", "sensor fusion" ]
[ "P", "P", "P", "U", "U", "R", "U", "U", "U", "U" ]
Context Awareness for Group Interaction Support Alois Ferscha, Clemens Holzmann, Stefan Oppl Institut für Pervasive Computing, Johannes Kepler Universität Linz Altenbergerstraße 69, A-4040 Linz {ferscha,holzmann,oppl}@soft. uni-linz. ac.at ABSTRACT In this paper, we present an implemented system for supporting group interaction in mobile distributed computing environments. First, an introduction to context computing and a motivation for using contextual information to facilitate group interaction is given. We then present the architecture of our system, which consists of two parts: a subsystem for location sensing that acquires information about the location of users as well as spatial proximities between them, and one for the actual context-aware application, which provides services for group interaction. Categories and Subject Descriptors C.2.4 [Computer-Communication Networks]: Distributed Systems - distributed applications. H.1.2 [Models and Principles]: User/Machine Systems - human factors. H.5.3 [Information Interfaces and Presentation]: Group and Organization Interfaces - asynchronous interaction, collaborative computing, theory and models, synchronous interaction. General Terms Design, Experimentation 1. INTRODUCTION Today``s computing environments are characterized by an increasing number of powerful, wirelessly connected mobile devices. Users can move throughout an environment while carrying their computers with them and having remote access to information and services, anytime and anywhere. New situations appear, where the user``s context - for example his current location or nearby people - is more dynamic; computation does not occur at a single location and in a single context any longer, but comprises a multitude of situations and locations. This development leads to a new class of applications, which are aware of the context in which they run in and thus bringing virtual and real worlds together. Motivated by this and the fact, that only a few studies have been done for supporting group communication in such computing environments [12], we have developed a system, which we refer to as Group Interaction Support System (GISS). It supports group interaction in mobile distributed computing environments in a way that group members need not to at the same place any longer in order to interact with each other or just to be aware of the others situation. In the following subchapters, we will give a short overview on context aware computing and motivate its benefits for supporting group interaction. A software framework for developing contextsensitive applications is presented, which serves as middleware for GISS. Chapter 2 presents the architecture of GISS, and chapter 3 and 4 discuss the location sensing and group interaction concepts of GISS in more detail. Chapter 5 gives a final summary of our work. 1.1 What is Context Computing? According to Merriam-Webster``s Online Dictionary1 , context is defined as the interrelated conditions in which something exists or occurs. Because this definition is very general, many approaches have been made to define the notion of context with respect to computing environments. Most definitions of context are done by enumerating examples or by choosing synonyms for context. The term context-aware has been introduced first in [10] where context is referred to as location, identities of nearby people and objects, and changes to those objects. 
In [2], context is also defined by an enumeration of examples, namely location, identities of the people around the user, the time of the day, season, temperature etc. [9] defines context as the user``s location, environment, identity and time. Here we conform to a widely accepted and more formal definition, which defines context as any information than can be used to characterize the situation of an entity. An entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and applications themselves. [4] [4] identifies four primary types of context information (sometimes referred to as context dimensions), that are - with respect to characterizing the situation of an entity - more important than others. These are location, identity, time and activity, which can also be used to derive other sources of contextual information (secondary context types). For example, if we know a person``s identity, we can easily derive related information about this person from several data sources (e.g. day of birth or e-mail address). According to this definition, [4] defines a system to be contextaware if it uses context to provide relevant information and/or services to the user, where relevancy depends on the user``s task. [4] also gives a classification of features for context-aware applications, which comprises presentation of information and services to a user, automatic execution of a service and tagging of context to information for later retrieval. Figure 1. Layers of a context-aware system Context computing is based on two major issues, namely identifying relevant context (identity, location, time, activity) and using obtained context (automatic execution, presentation, tagging). In order to do this, there are a few layers between (see Figure 1). First, the obtained low-level context information has to be transformed, aggregated and interpreted (context transformation) and represented in an abstract context world model (context representation), either centralized or decentralized. Finally, the stored context information is used to trigger certain context events (context triggering). [7] 1.2 Group Interaction in Context After these abstract and formal definitions about what context and context computing is, we will now focus on the main goal of this work, namely how the interaction of mobile group members can be supported by using context information. In [6] we have identified organizational systems to be crucial for supporting mobile groups (see Figure 2). First, there has to be an Information and Knowledge Management System, which is capable of supporting a team with its information processing- and knowledge gathering needs. The next part is the Awareness System, which is dedicated to the perceptualisation of the effects of team activity. It does this by communicating work context, agenda and workspace information to the users. The Interaction Systems provide support for the communication among team members, either synchronous or asynchronous, and for the shared access to artefacts, such as documents. Mobility Systems deploy mechanisms to enable any-place access to team memory as well as the capturing and delivery of awareness information from and to any places. Finally yet importantly, the organisational innovation system integrates aspects of the team itself, like roles, leadership and shared facilities. 
With respect to these five aspects of team support, we focus on interaction and partly cover mobility- and awareness-support. Group interaction includes all means that enable group members to communicate freely with all the other members. At this point, the question of how context information can be used for supporting group interaction comes up. We believe that information about the current situation of a person provides a surplus value to existing group interaction systems. Context information facilitates group interaction by allowing each member to be aware of the availability status or the current location of each other group member, which again makes it possible to form groups dynamically, to place virtual post-its in the real world or to determine which people are around. Figure 2. Support for Mobile Groups [6] Most of today's context-aware applications use location and time only, and location is referred to as a crucial type of context information [3]. We also see the importance of location information in mobile and ubiquitous environments, wherefore a main focus of our work is on the utilization of location information and information about users in spatial proximity. Nevertheless, we believe that location, as the only used type of context information, is not sufficient to support group interaction, wherefore we also take advantage of the other three primary types, namely identity, time and activity. This provides a comprehensive description of a user's current situation and thus enables numerous means for supporting group interaction, which are described in detail in chapter 4.4. When we look at the types of context information stated above, we can see that all of them are single user-centred, taking into account only the context of the user itself. We believe that, for the support of group interaction, the status of the group itself also has to be taken into account. Therefore, we have added a fifth context dimension, group context, which comprises more than the sum of the individual members' contexts. Group context includes any information about the situation of a whole group, for example how many members a group currently has or if a certain group meets right now. 1.3 Context Middleware The Group Interaction Support System (GISS) uses the software framework introduced in [1], which serves as a middleware for developing context-sensitive applications. This so-called Context Framework is based on a distributed communication architecture and supports different kinds of transport protocols and message coding mechanisms. A main feature of the framework is the abstraction of context information retrieval via various sensors and its delivery to a level where, for the application designer, no difference appears between these different kinds of context retrieval mechanisms; the information retrieval is hidden from the application developer. This is achieved by so-called entities, which describe objects - e.g. a human user - that are important for a certain context scenario. Entities express their functionality by the use of so-called attributes, which can be loaded into the entity. These attributes are complex pieces of software, which are implemented as Java classes. Typical attributes are encapsulations of sensors, but they can also be used to implement context services, for example to notify other entities about location changes of users. Each entity can contain a collection of such attributes, where an entity itself is an attribute. 
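The Context Framework implements these entities and attributes as Java classes; the following is only a schematic sketch (in Python, with hypothetical names) of the entity-attribute-event model just described, in which entities hold dynamically loadable attributes and attributes communicate by triggering named events with parameters:

```python
class Event:
    """A context event: a name plus a dictionary of parameters."""
    def __init__(self, name, **params):
        self.name = name
        self.params = params

class Attribute:
    """A loadable piece of functionality, e.g. a sensor encapsulation or a
    context service; it can trigger events via its owning entity."""
    def __init__(self, name):
        self.name = name
        self.entity = None          # set when the attribute is loaded into an entity

    def trigger(self, event):
        if self.entity is not None:
            self.entity.dispatch(event)

class Entity(Attribute):
    """An entity (e.g. a user or the central server); an entity is itself an
    attribute and may contain further attributes, loaded or unloaded at runtime."""
    def __init__(self, name):
        super().__init__(name)
        self.attributes = {}
        self.handlers = []          # callables(event), e.g. registered by rules/services

    def load(self, attribute):
        attribute.entity = self
        self.attributes[attribute.name] = attribute

    def unload(self, name):
        self.attributes.pop(name, None)

    def dispatch(self, event):
        for handler in self.handlers:
            handler(event)

if __name__ == "__main__":
    user = Entity("user:alice")
    rfid = Attribute("rfid-sensor")
    user.load(rfid)
    user.handlers.append(lambda e: print(e.name, e.params))
    rfid.trigger(Event("locationChanged", location="Room P111"))
```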
The initial set of attributes an entity contains can change dynamically at runtime, if an entity loads or unloads attributes from the local storage or over the network. In order to load and deploy new attributes, an entity has to reference a class loader and a transport and lookup layer, which manages the lookup mechanism for discovering other entities as well as the transport. XML configuration files specify which initial set of entities should be loaded and which attributes these entities own. The communication between entities and attributes is based on context events. Each attribute is able to trigger events, which are addressed to other attributes and entities respectively, independently of which physical computer they are running on. Among other things, an event contains the name of the event and a list of parameters delivering information about the event itself. Related to this event-based architecture is the use of ECA (Event-Condition-Action) rules for defining the behaviour of the context system. Therefore, every entity has a rule-interpreter, which catches triggered events, checks conditions associated with them and causes certain actions. These rules are referenced by the entity's XML configuration. A rule itself is even able to trigger the insertion of new rules or the unloading of existing rules at runtime in order to change the behaviour of the context system dynamically. To sum up, the Context Framework provides a flexible, distributed architecture for hiding low-level sensor data from high-level applications, and it hides external communication details from the application developer. Furthermore, it is able to adapt its behaviour dynamically by loading attributes, entities or ECA rules at runtime. 2. ARCHITECTURE OVERVIEW As GISS uses the Context Framework described in chapter 1.3 as middleware, every user is represented by an entity, as well as the central server, which is responsible for context transformation, context representation and context triggering (cf. Figure 1). A main part of our work is the automated acquisition of position information and its sensor-independent provision at application level. We do not only sense the current location of users, but also determine spatial proximities between them. In developing the architecture, we focused on keeping the client as simple as possible and reducing the communication between client and server to a minimum. Each client may have various location and/or proximity sensors attached, which are encapsulated by respective Context Framework-attributes (Sensor Encapsulation). These attributes are responsible for integrating native sensor implementations into the Context Framework and sending sensor-dependent position information to the server. We consider it very important to support different types of sensors, even at the same time, in order to improve location accuracy on the one hand, while providing a pervasive location-sensing environment with seamless transitions between different location sensing techniques on the other hand. All supported location- and proximity-sensors are represented by server-side context-attributes, which correspond to the client-side sensor encapsulation-attributes and abstract the sensor-dependent position information received from all users via the wireless network (sensor abstraction). This requires a context repository, where the mapping of diverse physical positions to standardized locations is stored. 
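A minimal sketch of this client-to-server path - sensor-dependent readings being abstracted into standardized symbolic locations via a repository lookup - is given below. It is illustrative only: the repository contents and sensor names are hypothetical, and the real system exchanges these updates as context events over the network rather than as direct function calls.

```python
# Hypothetical repository: per sensor type, map raw readings to symbolic locations.
# One physical position may map to several locations at different accuracy levels.
REPOSITORY = {
    "rfid":      {"tag:0x4F11": ["desk", "room2"]},
    "bluetooth": {"00:0A:95:9D:68:16": ["room1", "room2", "floor1"]},
    "wlan":      {"ap-cell-17": ["room2"]},
}

def encapsulate(sensor_type, raw_reading):
    """Client side (Sensor Encapsulation): forward the raw, sensor-dependent
    reading to the server together with the sensor type."""
    return {"sensor": sensor_type, "reading": raw_reading}

def abstract(update):
    """Server side (Sensor Abstraction): map the sensor-dependent reading to
    sensor-independent symbolic locations via the context repository."""
    return REPOSITORY.get(update["sensor"], {}).get(update["reading"], [])

if __name__ == "__main__":
    update = encapsulate("bluetooth", "00:0A:95:9D:68:16")
    print(abstract(update))   # ['room1', 'room2', 'floor1'] -> handed on to sensor fusion
```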
The standardized location- and proximity-information of each user is then passed to the so-called Sensor Fusion-attributes, one for symbolic locations and a second one for spatial proximities. Their job is to merge the location- and proximity-information of clients, respectively, which is described in detail in Chapter 3.3. Every time the symbolic location of a user or the spatial proximity between two users changes, the Sensor Fusion-attributes notify the GISS Core-attribute, which controls the application. Because of the abstraction of sensor-dependent position information, the system can easily be extended by additional sensors, just by implementing the (typically two) attributes for encapsulating sensors (some sensors may not need a client-side part), abstracting physical positions and observing the interface to GISS Core. Figure 3. Architecture of the Group Interaction Support System (GISS) The GISS Core-attribute is the central coordinator of the application as it presents itself to the user. It not only serves as an interface to the location-sensing subsystem, but also collects further context information in other dimensions (time, identity or activity). Every time a change in the context of one or more users is detected, GISS Core evaluates the effect of these changes on the user, on the groups he belongs to and on the other members of these groups. Whenever necessary, events are thrown to the affected clients to trigger context-aware activities, like changing the presentation of awareness information or the execution of services. The client-side part of the application is kept as simple as possible. Furthermore, modular design was not only an issue on the sensor side but also when designing the user interface architecture. Thus, the complete user interface can easily be exchanged, if all of the defined events are taken into account and understood by the new interface-attribute. The currently implemented user interface is split up into two parts, which are also represented by two attributes. The central attribute on the client side is the so-called Instant Messenger Encapsulation, which on the one hand interacts with the server through events and on the other hand serves as a proxy for the external application the user interface is built on. As external application, we use an existing open-source instant messenger - the ICQ-compliant (http://www.icq.com/) Simple Instant Messenger (SIM, http://sim-icq.sourceforge.net). We have chosen an instant messenger as front-end because it provides a well-known interface for most users and facilitates a seamless integration of group interaction support, thus increasing acceptance and ease of use. As the basic functionality of the instant messenger - to serve as a client in an instant messenger network - remains fully functional, our application is able to use the features already provided by the messenger. For example, the contexts activity and identity are derived from the messenger network, as described later. The Instant Messenger Encapsulation is also responsible for supporting group communication. Through the interface of the messenger, it provides means of synchronous and asynchronous communication as well as a context-aware reminder system and tools for managing groups and one's own availability status. The second part of the user interface is a visualisation of the users' locations, which is implemented in the attribute Viewer. The current implementation provides a two-dimensional map of the campus, but it can easily be replaced by other visualisations, a three-dimensional VRML model for example. 
Furthermore, this visualisation is used to show the artefacts for asynchronous communication. Based on a floor-plan view of the geographical area the user currently resides in, it gives a quick overview of which people are nearby and their state, and provides means to interact with them. In the following chapters 3 and 4, we describe the location-sensing backend and the application front-end for supporting group interaction in more detail. 3. LOCATION SENSING In the following chapter, we will introduce a location model, which is used for representing locations; afterwards, we will describe the integration of location- and proximity-sensors in more detail. Finally, we will have a closer look at the fusion of location- and proximity-information acquired by various sensors. 3.1 Location Model A location model (i.e. a context representation for the context information location) is needed to represent the locations of users, in order to be able to facilitate location-related queries like given a location, return a list of all the objects there or given an object, return its current location. In general, there are two approaches [3,5]: symbolic models, which represent location as abstract symbols, and geometric models, which represent location as coordinates. We have chosen a symbolic location model, which refers to locations as abstract symbols like Room P111 or Physics Building, because we do not require geometric location data. Instead, abstract symbols are more convenient for human interaction at application level. Furthermore, we use a symbolic location containment hierarchy similar to the one introduced in [11], which consists of top-level regions, which contain buildings, which contain floors, and the floors again contain rooms. We also distinguish four types, namely region (e.g. a whole campus), section (e.g. a building or an outdoor section), level (e.g. a certain floor in a building) and area (e.g. a certain room). We introduce a fifth type of location, which we refer to as semantic. These so-called semantic locations can appear at any level in the hierarchy and they can be nested, but they do not necessarily have a geographic representation. Examples of such semantic locations are tagged objects within a room (e.g. a desk and a printer on this desk) or the name of a department, which contains certain rooms. Figure 4. Symbolic Location Containment Hierarchy The hierarchy of symbolic locations as well as the type of each position is stored in the context repository. 3.2 Sensors Our architecture supports two different kinds of sensors: location sensors, which acquire location information, and proximity sensors, which detect spatial proximities between users. As described above, each sensor has a server-side and in most cases a corresponding client-side implementation, too. While the client attributes (Sensor Encapsulation) are responsible for acquiring low-level sensor data and transmitting it to the server, the corresponding Sensor Abstraction-attributes transform it into a uniform and sensor-independent format, namely symbolic locations and IDs of users in spatial proximity, respectively. Afterwards, the respective attribute Sensor Fusion is triggered with this sensor-independent information of a certain user, detected by a particular sensor. Such notifications are performed every time the sensor acquires new information. 
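A small illustrative sketch of the symbolic location containment hierarchy of Section 3.1 and the two location-related queries mentioned there is given below (hypothetical names; the actual system stores the hierarchy and the type of each location in its context repository):

```python
# Parent relation of a tiny, hypothetical containment hierarchy
# (region > section > level > area > semantic).
PARENT = {
    "physics-building": "campus",      # section -> region
    "floor1": "physics-building",      # level -> section
    "room P111": "floor1",             # area -> level
    "desk": "room P111",               # semantic -> area
    "printer": "desk",                 # semantic locations can be nested
}

# Current symbolic location of tracked objects/users.
OBJECT_LOCATION = {"alice": "room P111", "bob": "desk", "carol": "floor1"}

def ancestors(location):
    """The location itself plus all enclosing locations up to the root."""
    chain = [location]
    while location in PARENT:
        location = PARENT[location]
        chain.append(location)
    return chain

def locate(obj):
    """Given an object, return its current symbolic location."""
    return OBJECT_LOCATION.get(obj)

def objects_at(location):
    """Given a location, return all objects there or in any contained location."""
    return [o for o, loc in OBJECT_LOCATION.items() if location in ancestors(loc)]

if __name__ == "__main__":
    print(locate("bob"))                   # desk
    print(objects_at("room P111"))         # ['alice', 'bob']
    print(objects_at("physics-building"))  # ['alice', 'bob', 'carol']
```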
Accordingly, Sensor Abstraction-attributes are responsible to detect when a certain sensor is no longer available on the client side (e.g. if it has been unplugged by the user) or when position respectively proximity could not be determined any longer (e.g. RFID reader cannot detect tags) and notify the corresponding sensor fusion about this. 3.2.1 Location Sensors In order to sense physical positions, the Sensor Encapsulationattributes asynchronously transmit sensor-dependent position information to the server. The corresponding location Sensor Abstraction-attributes collect these physical positions delivered by the sensors of all users, and perform a repository-lookup in order to get the associated symbolic location. This requires certain tables for each sensor, which map physical positions to symbolic locations. One physical position may have multiple symbolic locations at different accuracy-levels in the location hierarchy assigned to, for example if a sensor covers several rooms. If such a mapping could be found, an event is thrown in order to notify the attribute Location Sensor Fusion about the symbolic locations a certain sensor of a particular user determined. We have prototypically implemented three kinds of location sensors, which are based on WLAN (IEEE 802.11), Bluetooth and RFID (Radio Frequency Identification). We have chosen these three completely different sensors because of their differences concerning accuracy, coverage and administrative effort, in order to evaluate the flexibility of our system (see Table 1). The most accurate one is an RFID sensor, which is based on an active RFID-reader. As soon as the reader is plugged into the client, it scans for active RFID tags in range and transmits their serial numbers to the server, where they are mapped to symbolic locations. We also take into account RSSI (Radio Signal Strength Information), which provides position accuracy of few centimetres and thus enables us to determine which RFID-tag is nearest. Due to this high accuracy, RFID is used for locating users within rooms. The administration is quite simple; once a new RFID tag is placed, its serial number simply has to be assigned to a single symbolic location. A drawback is the poor availability, which can be traced back to the fact that RFID readers are still very expensive. The second one is an 802.11 WLAN sensor. Therefore, we integrated a purely software-based, commercial WLAN positioning system for tracking clients on the university campuswide WLAN infrastructure. The reached position accuracy is in the range of few meters and thus is suitable for location sensing at the granularity of rooms. A big disadvantage is that a map of the whole area has to be calibrated with measuring points at a distance of 5 meters each. Because most mobile computers are equipped with WLAN technology and the positioning-system is a software-only solution, nearly everyone is able to use this kind of sensor. Finally, we have implemented a Bluetooth sensor, which detects Bluetooth tags (i.e. Bluetooth-modules with known position) in range and transmits them to the server that maps to symbolic locations. Because of the fact that we do not use signal strengthinformation in the current implementation, the accuracy is above 10 meters and therefore a single Bluetooth MAC address is associated with several symbolic locations, according to the physical locations such a Bluetooth module covers. 
This leads to the disadvantage that the range of each Bluetooth tag has to be determined and mapped to symbolic locations within this range.
Table 1. Comparison of implemented sensors
Sensor    | Accuracy | Coverage  | Administration
RFID      | < 10 cm  | poor      | easy
WLAN      | 1-4 m    | very well | very time-consuming
Bluetooth | ~ 10 m   | well      | time-consuming
3.2.2 Proximity Sensors Any sensor that is able to detect whether two users are in spatial proximity is referred to as a proximity sensor. Similar to the location sensors, the Proximity Sensor Abstraction-attributes collect the physical proximity information of all users and transform it to mappings of user-IDs. We have implemented two types of proximity sensors, which are based on Bluetooth on the one hand and on fused symbolic locations (see chapter 3.3.1) on the other hand. The Bluetooth implementation goes along with the implementation of the Bluetooth-based location sensor. The already determined Bluetooth MAC addresses in range of a certain client are compared with those of all other clients, and each time the attribute Bluetooth Sensor Abstraction detects congruence, it notifies the proximity sensor fusion about this. The second sensor is based on symbolic locations processed by Location Sensor Fusion, wherefore it does not need a client-side implementation. Each time the fused symbolic location of a certain user changes, it checks whether he is at the same symbolic location as another user and again notifies the proximity sensor fusion about the proximity between these two users. The range can be restricted to any level of the location containment hierarchy, for example to room granularity. A currently unresolved issue is the incomparable granularity of different proximity sensors. For example, the symbolic locations at the same level in the location hierarchy mostly do not cover the same geographic area. 3.3 Sensor Fusion The core of the location sensing subsystem is the sensor fusion. It merges data of various sensors, while coping with differences concerning accuracy, coverage and sample rate. According to the two kinds of sensors described in chapter 3.2, we distinguish between the fusion of location sensors on the one hand, and the fusion of proximity sensors on the other hand. The fusion of symbolic locations as well as the fusion of spatial proximities operates on standardized information (cf. Figure 3). This has the advantage that additional position- and proximity-sensors can be added easily, or the fusion algorithms can be replaced by ones that are more sophisticated. Fusion is performed for each user separately and takes into account the measurements at a single point in time only (i.e. no history information is used for determining the current location of a certain user). The algorithm collects all events thrown by the Sensor Abstraction-attributes, performs fusion and triggers the GISS Core-attribute if the symbolic location of a certain user or the spatial proximity between users changed. An important feature is the persistent storage of the location- and proximity-history in a database in order to allow future retrieval. This enables applications to visualize the movement of users, for example. 3.3.1 Location Sensor Fusion The goal of the fusion of location information is to improve precision and accuracy by merging the set of symbolic locations supplied by various location sensors, in order to reduce the number of these locations to a minimum, ideally to a single symbolic location per user (a schematic sketch of this step is given below). 
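The following is a rough, illustrative sketch of such a fusion step - counting how often each symbolic location is reported and propagating the counts of enclosing locations down to the more specific locations they contain, then keeping the best-supported candidates - as described in detail in the next paragraph. The data and names are hypothetical, and the real attribute can additionally weight sensors by priority and finally compute the least upper bound of the remaining locations.

```python
from collections import Counter

# Hypothetical containment hierarchy: child -> parent.
PARENT = {"desk": "room2", "room2": "floor1", "room1": "floor1", "floor1": "building"}

def is_descendant(loc, ancestor):
    while loc in PARENT:
        loc = PARENT[loc]
        if loc == ancestor:
            return True
    return False

def fuse(sensor_reports):
    """sensor_reports: one list of symbolic locations per sensor reading.
    Returns the locations with the highest accumulated support."""
    counts = Counter()
    for locations in sensor_reports:
        counts.update(locations)
    # Each location's count is added to every reported location it contains,
    # so more specific locations inherit the support of enclosing ones.
    support = Counter(counts)
    for loc, n in counts.items():
        for other in counts:
            if other != loc and is_descendant(other, loc):
                support[other] += n
    best = max(support.values())
    return [loc for loc, n in support.items() if n == best]

if __name__ == "__main__":
    # RFID reports the desk; WLAN and Bluetooth report room2 (among others).
    reports = [["desk"], ["room2"], ["room1", "room2", "floor1"]]
    print(fuse(reports))   # ['desk'] - the most specific location inherits its ancestors' support
```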
This is quite difficult, because different sensors may differ in accuracy and sample rate as well. The Location Sensor Fusion-attribute is triggered by events, which are thrown by the Location Sensor Abstractionattributes. These events contain information about the identity of the user concerned, his current location and the sensor by which the location has been determined. If the attribute Location Sensor Fusion receives such an event, it checks if the amount of symbolic locations of the user concerned has changed (compared with the last event). If this is the case, it notifies the GISS Core-attribute about all symbolic locations this user is currently associated with. However, this information is not very useful on its own if a certain user is associated with several locations. As described in chapter 3.2.1, a single location sensor may deliver multiple symbolic locations. Moreover, a certain user may have several location sensors, which supply symbolic locations differing in accuracy (i.e. different levels in the location containment hierarchy). To cope with this challenge, we implemented a fusion algorithm in order to reduce the number of symbolic locations to a minimum (ideally to a single location). In a first step, each symbolic location is associated with its number of occurrences. A symbolic location may occur several times if it is referred to by more than one sensor or if a single sensor detects multiple tags, which again refer to several locations. Furthermore, this number is added to the previously calculated number of occurrences of each symbolic location, which is a child-location of the considered one in the location containment hierarchy. For example, if - in Figure 4 - room2 occurs two times and desk occurs a single time, the value 2 of room2 is added to the value 1 of desk, whereby desk finally gets the value 3. In a final step, only those symbolic locations are left which are assigned with the highest number of occurrences. A further reduction can be achieved by assigning priorities to sensors (based on accuracy and confidence) and cumulating these priorities for each symbolic location instead of just counting the number of occurrences. If the remaining fused locations have changed (i.e. if they differ from the fused locations the considered user is currently associated with), they are provided with the current timestamp, written to the database and the GISS-attribute is notified about where the user is probably located. Finally, the most accurate, common location in the location hierarchy is calculated (i.e. the least upper bound of these symbolic locations) in order to get a single symbolic location. If it changes, the GISS Core-attribute is triggered again. 3.3.2 Proximity Sensor Fusion Proximity sensor fusion is much simpler than the fusion of symbolic locations. The corresponding proximity sensor fusionattribute is triggered by events, which are thrown by the Proximity Sensor Abstraction-attributes. These special events contain information about the identity of the two users concerned, if they are currently in spatial proximity or if proximity no longer persists, and by which proximity-sensor this has been detected. If the sensor fusion-attribute is notified by a certain Proximity Sensor Abstraction-attribute about an existing spatial proximity, it first checks if these two users are already known to be in proximity (detected either by another user or by another proximity-sensor of the user, which caused the event). 
3.3.2 Proximity Sensor Fusion
Proximity sensor fusion is much simpler than the fusion of symbolic locations. The corresponding Proximity Sensor Fusion attribute is triggered by events thrown by the Proximity Sensor Abstraction attributes. These events contain information about the identity of the two users concerned, about whether they are currently in spatial proximity or whether the proximity no longer persists, and about the proximity sensor by which this has been detected. If the fusion attribute is notified by a certain Proximity Sensor Abstraction attribute about an existing spatial proximity, it first checks whether these two users are already known to be in proximity (detected either by another user or by another proximity sensor of the user which caused the event). If not, this change in proximity is written to the context repository with the current timestamp. Similarly, if the Proximity Sensor Fusion attribute is notified about an ended proximity, it checks whether the users are still known to be in proximity, and writes this change to the repository if they are not. Finally, if the spatial proximity between the two users has actually changed, an event is thrown to notify the GISS Core attribute about this.
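The bookkeeping described above essentially maintains, per pair of users, the set of sensors that currently report them as being in proximity. The following Java fragment is an illustrative sketch under that assumption; the names and the return convention are invented for this example and the repository access is omitted.

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative sketch of the proximity fusion bookkeeping: a pair of users is
// considered "in proximity" as long as at least one proximity sensor reports it.
public class ProximityFusionSketch {

    private final Map<String, Set<String>> sensorsPerPair = new HashMap<>();

    // Returns true if the overall proximity state of the pair changed; in that
    // case the change would be written to the context repository (with a
    // timestamp) and the GISS Core attribute would be notified.
    public boolean report(String userA, String userB, String sensor, boolean inProximity) {
        String pair = userA.compareTo(userB) < 0 ? userA + "|" + userB : userB + "|" + userA;
        Set<String> sensors = sensorsPerPair.computeIfAbsent(pair, k -> new HashSet<>());
        boolean before = !sensors.isEmpty();
        if (inProximity) {
            sensors.add(sensor);
        } else {
            sensors.remove(sensor);
        }
        boolean after = !sensors.isEmpty();
        return before != after;
    }

    public static void main(String[] args) {
        ProximityFusionSketch fusion = new ProximityFusionSketch();
        System.out.println(fusion.report("alice", "bob", "bluetooth", true));  // true: new proximity
        System.out.println(fusion.report("alice", "bob", "location", true));   // false: already known
        System.out.println(fusion.report("alice", "bob", "bluetooth", false)); // false: still known via the location-based sensor
        System.out.println(fusion.report("alice", "bob", "location", false));  // true: proximity ended
    }
}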
4. CONTEXT-SENSITIVE INTERACTION
4.1 Overview
In most of today's systems supporting interaction in groups, the provided means lack any awareness of the user's current context and are thus unable to adapt to his needs. In our approach, we use context information to enhance interaction and to provide further services, which offer new possibilities to the user. Furthermore, we believe that interaction in groups also has to take into account the current context of the group itself and not only the contexts of the individual group members. For this reason, we also retrieve information about the group's current context, derived from the contexts of the group members together with some sort of meta-information (see chapter 4.3). The sources of context used for our application correspond with the four primary context types given in chapter 1.1 - identity (I), location (L), time (T) and activity (A). As stated before, we also take into account the context of the group the user is interacting with, so that we could add a fifth type of context information - group awareness (G) - to the classification. Using this context information, we can trigger context-aware activities in all of the three categories described in chapter 1.1 - presentation of information (P), automatic execution of services (A) and tagging of context to information for later retrieval (T). Table 2 gives an overview of the activities we have already implemented; they are described comprehensively in chapter 4.4. The table also shows which types of context information are used for each activity and the category each activity can be classified in.

Table 2. Classification of implemented context-aware activities (context types L, T, I, A, G; categories P, A, T)
Location Visualisation: X X X
Group Building Support: X X X X
Support for Synchronous Communication: X X X X
Support for Asynchronous Communication: X X X X X X X
Availability Management: X X X
Task Management Support: X X X X
Meeting Support: X X X X X X

The reasons for implementing these features are to take advantage of all four types of context information in order to support group interaction by utilizing comprehensive knowledge about the situation a single user or a whole group is in. A critical issue for the user acceptance of such a system is the usability of its interface. We have evaluated several ways of presenting context-aware means of interaction to the user until we arrived at the solution we use right now. Although we think that the user interface that has been implemented now offers the best trade-off between seamless integration of features and ease of use, it would be no problem to extend the architecture with other user interfaces, even on different platforms. The chosen solution is based on an existing instant messenger, which offers several possibilities to integrate our system (see chapter 4.2). The biggest advantage of this approach is that the user is confronted with a graphical user interface he is already used to in most cases. Furthermore, our system uses an instant messenger account as an identifier, so that the user does not have to register a further account anywhere else (for example, the user can use his already existing ICQ account).

4.2 Instant Messenger Integration
Our system is based upon an existing instant messenger, the so-called Simple Instant Messenger (SIM). The implementation of this messenger is carried out as a project at Sourceforge (http://sourceforge.net/). SIM supports multiple messenger protocols such as AIM (http://www.aim.com/), ICQ and MSN (http://messenger.msn.com/). It also supports connections to multiple accounts at the same time. Furthermore, full support for SMS notification (where provided by the used protocol) is given. SIM is based on a plug-in concept. All protocols as well as parts of the user interface are implemented as plug-ins. Its architecture is also used to extend the application's abilities to communicate with external applications. For this purpose, a remote control plug-in is provided, by which SIM can be controlled from external applications via a socket connection. This remote control interface is extensively used by GISS for retrieving the contact list, setting the user's availability state or sending messages. The functionality of the plug-in was extended in several ways, for example to accept messages for an account (as if they had been sent via the messenger network). The messenger, more exactly the contact list (i.e. a list of profiles of all people registered with the instant messenger, which is visualized by listing their names as can be seen in Figure 5), is also used to display the locations of other members of the groups a user belongs to. This provides location awareness without taking too much space or requesting the user's full attention. A more comprehensive description of these features is given in chapter 4.4.
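For illustration, the fragment below shows how an external application could talk to a messenger's remote-control plug-in over a plain TCP socket. The actual command syntax and port of SIM's remote-control plug-in are not described in this paper, so the host, port and command string used here are purely hypothetical placeholders.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

// Illustrative sketch only: sends one text command to a remote-control socket
// and prints the reply. "SET STATUS away" is a made-up placeholder command,
// not the real protocol of SIM's remote-control plug-in.
public class MessengerRemoteControlSketch {
    public static void main(String[] args) throws Exception {
        String host = "localhost"; // hypothetical: plug-in assumed to listen locally
        int port = 3000;           // hypothetical port
        try (Socket socket = new Socket(host, port);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()))) {
            out.println("SET STATUS away"); // placeholder command
            System.out.println("reply: " + in.readLine());
        }
    }
}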
4.3 Sources of Context Information
While the location context of a user is obtained from our location sensing subsystem described in chapter 3, we consider further types of context than location relevant for the support of group interaction, too. Local time, as a very important context dimension, can easily be retrieved from the real-time clock of the user's system. Besides location and time, we also use context information about the user's activity and identity, where we exploit the functionality provided by the underlying instant messenger system. Identity (or more exactly, the mapping of IDs to names as well as additional information from the user's profile) can be distilled out of the contents of the user's contact list. Information about the activity of a certain user is only available in a very restricted area, namely the activity at the computer itself. Other activities, like making a phone call or something similar, cannot be recognized with the current implementation of the activity sensor. The only context information used is the instant messenger's availability state, thus providing only a very coarse classification of the user's activity (online, offline, away, busy etc.). Although this may not seem to be very much information, it is surely relevant and can be used to improve or even enable several services. Having collected the context information from all available users, it is now possible to distil some information about the context of a certain group. Information about the context of a group includes how many members the group currently has, whether the group meets right now, which members are participating in a meeting, how many members have read which of the available posts from other team members, and so on. Therefore, some additional information, like a list of members for each group, is needed. These lists can be assembled manually (by users joining and leaving groups) or retrieved automatically. The context of a group is secondary context and is aggregated from the available contexts of the group members. Every time the context of a single group member changes, the context of the whole group changes and has to be recalculated. With knowledge about a user's context and the context of the groups he belongs to, we can provide several context-aware services to the user, which enhance his interaction abilities. A brief description of these services is given in chapter 4.4.

4.4 Group Interaction Support
4.4.1 Visualisation of Location Information
An important feature is the visualisation of location information, which allows users to be aware of the location of other users and of members of the groups they have joined, respectively. As already described in chapter 2, we use two different forms of visualisation. The perhaps more important one is to display location information in the contact list of the instant messenger, right beside the name, so that it is always visible while not drawing the user's attention to it (compared with a two-dimensional view, for example, which requires its own window for displaying a map of the environment). Due to the restricted space in the contact list, it has been necessary to implement some sort of level-of-detail concept. As we use a hierarchical location model, we are able to determine the most accurate common location of two users. In the contact list, the current symbolic location one level below the previously calculated common location is then displayed. If, for example, user A currently resides in room P121 on the first floor of a building and user B, who has to be displayed in the contact list of user A, is in room P304 on the third floor, the most accurate common location of these two users is the building they are in. For that reason, the floor (i.e. one level below the common location, namely the building) of user B is displayed in the contact list of user A. If both people reside on the same floor or even in the same room, the room would be taken. Figure 5 shows a screenshot of the Simple Instant Messenger, where the current location of those people whose location is known by GISS is displayed in brackets right beside their name. On top of the image, the heightened, integrated GISS toolbar is shown, which currently contains the following implemented functionality (from left to right): asynchronous communication for groups (see chapter 4.4.4), context-aware reminders (see chapter 4.4.6), two-dimensional visualisation of location information, forming and managing groups (see chapter 4.4.2), context-aware availability management (see chapter 4.4.5) and finally a button for terminating GISS. Figure 5. GISS integration in Simple Instant Messenger. As displaying just this short form of location may not be enough for the user, because he may want to see the most accurate position available, a fully qualified position is shown if a name in the contact list is clicked (e.g. in the form of desk@room2@department1@1stfloor@building1@campus).
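The level-of-detail rule above boils down to finding the lowest common node of the two users' locations in the containment hierarchy and displaying the other user's location one level below it. The following Java sketch is illustrative only; the parent-linked Location class and the method names are invented for this example.

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the level-of-detail rule used for the contact list.
public class ContactListLocationSketch {

    static class Location {
        final String name;
        final Location parent;
        Location(String name, Location parent) { this.name = name; this.parent = parent; }
    }

    // Path from the root of the hierarchy down to the given location.
    static List<Location> pathFromRoot(Location l) {
        List<Location> path = new ArrayList<>();
        for (Location cur = l; cur != null; cur = cur.parent) path.add(0, cur);
        return path;
    }

    // Returns the symbolic location of userB to display in userA's contact list:
    // the node one level below the most accurate common location of both users.
    static String displayedLocation(Location ofUserA, Location ofUserB) {
        List<Location> a = pathFromRoot(ofUserA);
        List<Location> b = pathFromRoot(ofUserB);
        int i = 0;
        while (i < a.size() && i < b.size() && a.get(i) == b.get(i)) i++;
        // a.get(i - 1) is the most accurate common location; b.get(i) is one level below it
        return (i < b.size()) ? b.get(i).name : ofUserB.name;
    }

    public static void main(String[] args) {
        Location building = new Location("building1", null);
        Location floor1 = new Location("1st floor", building);
        Location floor3 = new Location("3rd floor", building);
        Location p121 = new Location("P121", floor1);
        Location p304 = new Location("P304", floor3);
        System.out.println(displayedLocation(p121, p304)); // prints "3rd floor"
    }
}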
The second possible form of visualisation is a graphical one. We have evaluated a three-dimensional view, which was based on a VRML model of the respective area (cf. Figure 6). Due to shortcomings in navigation and usability, we decided to use a two-dimensional view of the floor (it is referred to as level in the location hierarchy, cf. Figure 4). Other levels of granularity, like section (e.g. building) and region (e.g. campus), are also provided. In this floor-plan-based view, the current locations are shown in the manner of ICQ contacts, which are placed at the currently sensed location of the respective person. The availability status of a user, for example away if he is not at the computer right now, or busy if he does not want to be disturbed, is visualized by colour-coding the ICQ flower left beside the name. Furthermore, the floor-plan view shows the so-called virtual post-its, which are virtual counterparts of real-life post-its and serve as our means of asynchronous communication (more about virtual post-its can be found in chapter 4.4.4). Figure 6. 3D view of the floor (VRML). Figure 7 shows the two-dimensional map of a certain floor, where several users are currently located (visualized by their name and the flower left beside it). The location of the client on which the map is displayed is visualized by a green circle. Down to the right, two virtual post-its can be seen. Figure 7. 2D view of the floor. Another feature of the 2D view is the visualisation of the location history of users. As we store the complete history of a user's locations together with a timestamp, we are able to provide information about the locations he has been at back in time. When the mouse is moved over the name of a certain user in the 2D view, footprints of the user, placed at the locations he has been at, are faded out the more strongly, the older the location information is.

4.4.2 Forming and Managing Groups
To support interaction in groups, it is first necessary to form groups. As groups can have different purposes, we distinguish two types of groups. So-called static groups are groups which are built up manually by people joining and leaving them. Static groups can be further divided into two subtypes. Open static groups, which everybody can join and leave at any time, are useful, for example, to form a group of lecture attendees or some sort of interest group. Closed static groups have an owner, who decides which persons are allowed to join, although everybody can leave again at any time. Closed groups enable users, for example, to create a group of their friends and thus to communicate with them easily. In contrast to that, we also support the creation of dynamic groups. They are formed among persons who are at the same location at the same time. The creation of dynamic groups is only performed at locations where it makes sense to form groups, for example in lecture halls or meeting rooms, but not on corridors or outdoors. It would also not be very meaningful to form a group only of the people residing in the left front sector of a hall; instead, the complete hall should be considered. For these reasons, all the locations defined in the hierarchy are tagged as to whether they allow the formation of groups or not. Dynamic groups are not only formed at the granularity of rooms, but also at higher levels in the hierarchy, for example with the people currently residing in the area of a department. As the members of dynamic groups constantly change, it is possible to create an open static group out of them.
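As an illustration of the dynamic group mechanism, the following Java sketch derives one dynamic group per group-enabled location from the users' fused symbolic locations. The class and method names are invented for this example, and the tagging of locations is simplified to a boolean flag.

import java.util.*;

// Illustrative sketch: form one dynamic group per group-enabled location,
// consisting of all users whose fused location lies at or below that location.
public class DynamicGroupSketch {

    static class Location {
        final String name;
        final Location parent;
        final boolean groupsAllowed; // e.g. true for lecture halls, false for corridors
        Location(String name, Location parent, boolean groupsAllowed) {
            this.name = name; this.parent = parent; this.groupsAllowed = groupsAllowed;
        }
    }

    static Map<String, Set<String>> dynamicGroups(Map<String, Location> fusedLocationOfUser) {
        Map<String, Set<String>> groups = new HashMap<>();
        for (Map.Entry<String, Location> e : fusedLocationOfUser.entrySet()) {
            // A user belongs to the dynamic group of every group-enabled ancestor
            // of his current location (e.g. the lecture hall and the department area).
            for (Location l = e.getValue(); l != null; l = l.parent) {
                if (l.groupsAllowed) {
                    groups.computeIfAbsent(l.name, k -> new HashSet<>()).add(e.getKey());
                }
            }
        }
        return groups;
    }

    public static void main(String[] args) {
        Location department = new Location("department1", null, true);
        Location corridor = new Location("corridor", department, false);
        Location lectureHall = new Location("lecture hall 1", department, true);
        Map<String, Location> where = Map.of("alice", lectureHall, "bob", lectureHall, "carol", corridor);
        System.out.println(dynamicGroups(where));
        // e.g. {lecture hall 1=[alice, bob], department1=[alice, bob, carol]}
    }
}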
4.4.3 Synchronous Communication for Groups
The most important form of synchronous communication on computers today is instant messaging; some people even consider instant messaging to be the real killer application of the Internet. This has also motivated the decision to build GISS upon an instant messaging system. In today's messenger systems, peer-to-peer communication is extensively supported. However, when it comes to communication in groups, the support is rather poor most of the time. Often, only sending a message to multiple recipients is supported, lacking means to take into account the current state of the recipients. Furthermore, groups can only be formed of members in one's contact list, which makes it impossible to send messages to a group where not all of its members are known (which may be the case in settings where the participants of a lecture form a group). Our approach does not have the mentioned restrictions. We introduce group entries in the user's contact list, enabling him to send messages to such a group easily, without knowing who exactly is currently a member of this group. Furthermore, group messages are only delivered to persons who are currently not busy, thus preventing a disturbance by a message which is possibly unimportant for the user. These features cannot be carried out in the messenger network itself, so whenever a message to a group account is sent, we intercept it and route it through our system to all the recipients who are available at that time. Communication via a group account is also stored centrally, enabling people to query missed messages or simply to view the message history.

4.4.4 Asynchronous Communication for Groups
Asynchronous communication in groups is not a new idea. The goal of this approach is not to reinvent the wheel, as email is maybe the most widely used form of asynchronous communication on computers and is broadly accepted and standardized. In our work, we aim at the combination of asynchronous communication with location awareness. For this reason, we introduce the concept of so-called virtual post-its (cp. [13]), which are messages that are bound to physical locations. These virtual post-its can either be visible for all users that are passing by, or their visibility can be restricted to certain groups of people only. Moreover, a virtual post-it can also have an expiry date after which it is dropped and not displayed anymore. Virtual post-its can also be commented on by others, thus providing some form of forum-like interaction, where each post-it forms a thread. Virtual post-its are displayed automatically whenever an (available) user passes by for the first time. Afterwards, post-its can be accessed via the 2D viewer, where all visible post-its are shown. All readers of a post-it are logged and displayed when viewing it, providing some sort of awareness about the group members' activities in the past.
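A virtual post-it can be modelled as a small data object carrying its location, visibility and read log. The Java sketch below is illustrative only; the fields and the visibility rule are assumptions made for this example, not the actual GISS data model.

import java.time.LocalDateTime;
import java.util.LinkedHashSet;
import java.util.Set;

// Illustrative sketch of a location-bound message ("virtual post-it").
public class VirtualPostItSketch {

    private final String location;        // symbolic location the note is bound to
    private final String text;
    private final Set<String> visibleTo;  // empty set = visible to everybody passing by
    private final LocalDateTime expiry;   // null = never expires
    private final Set<String> readers = new LinkedHashSet<>(); // read log for awareness

    public VirtualPostItSketch(String location, String text, Set<String> visibleTo, LocalDateTime expiry) {
        this.location = location;
        this.text = text;
        this.visibleTo = visibleTo;
        this.expiry = expiry;
    }

    // Should the note pop up for this user passing by the given location?
    public boolean isVisibleFor(String userId, String userLocation) {
        boolean notExpired = expiry == null || LocalDateTime.now().isBefore(expiry);
        boolean allowed = visibleTo.isEmpty() || visibleTo.contains(userId);
        return notExpired && allowed && location.equals(userLocation);
    }

    public void markRead(String userId) {
        readers.add(userId); // logged readers are displayed when viewing the note
    }

    public static void main(String[] args) {
        VirtualPostItSketch note = new VirtualPostItSketch(
                "room2", "Team meeting moved to 3 pm", Set.of("alice", "bob"), null);
        System.out.println(note.isVisibleFor("alice", "room2")); // true
        System.out.println(note.isVisibleFor("carol", "room2")); // false (not in the visibility group)
        note.markRead("alice");
    }
}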
4.4.5 Context-Aware Availability Management
Instant messengers in general provide some kind of availability information about a user. Although this information can only be defined at a very coarse granularity, we have decided to use these means of gathering activity context, because the introduction of an additional one would strongly decrease the usability of the system. To support the user in managing his availability, we provide an interface that lets the user define rules to adapt his availability to the current context. These rules follow the form "on event (E) if condition (C) then action (A)", which is directly supported by the ECA rules of the Context Framework described in chapter 1.3. The testing of conditions is periodically triggered by throwing events (whenever the context of a user changes). The condition itself is defined by the user, who can demand the change of his availability status as the action of the rule. As a condition, the user can define his location, a certain time (also triggering daily, every week or every month) or any logical combination of these criteria.

4.4.6 Context-Aware Reminders
Reminders [14] are used to give the user the opportunity to define tasks and to be reminded of them when certain criteria are fulfilled. Thus, a reminder can be seen as a post-it to oneself, which is only visible in certain cases. Reminders can be bound to a certain place or time, but also to the spatial proximity of users or groups. These criteria can be combined with Boolean operators, thus providing a powerful means to remind the user of tasks that he wants to carry out when a certain context occurs. A reminder will only pop up the first time the actual context meets the defined criterion. When the reminder shows up, the user has the chance to resubmit it in order to be reminded again, for example five minutes later or the next time a certain user is in spatial proximity.

4.4.7 Context-Aware Recognition and Notification of Group Meetings
With the available context information, we try to recognize meetings of a group. The determination of the criteria by which the system recognizes a group having a meeting is part of ongoing work. In a first approach, we use the location and activity context of the group members to determine a meeting. Whenever more than 50 % of the members of a group are available at a location where a meeting is considered to make sense (e.g. not on a corridor), a meeting-minutes post-it is created at this location and all absent group members are notified of the meeting and the location it takes place at. During the meeting, the comment feature of virtual post-its provides a means to take notes for all of the participants. When members join or leave the meeting, this is automatically added as a note to the list of comments. Like the recognition of the beginning of a meeting, the recognition of its end is still part of ongoing work. If the end of the meeting is recognized, all group members receive the complete list of comments as a meeting protocol at the end of the meeting.
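The 50 % rule sketched above can be expressed compactly. The following Java fragment is an illustrative reconstruction under the assumption that "available at a location" means the member's fused location equals the candidate location and his messenger status is neither busy nor offline; all names are invented for this example.

import java.util.List;
import java.util.Map;

// Illustrative sketch of the meeting recognition rule: a meeting is assumed when
// more than 50 % of the group members are available at a meeting-capable location.
public class MeetingRecognitionSketch {

    static boolean meetingAt(String location, boolean meetingCapable,
                             List<String> groupMembers,
                             Map<String, String> fusedLocationOfUser,
                             Map<String, String> availabilityOfUser) {
        if (!meetingCapable || groupMembers.isEmpty()) return false;
        long present = groupMembers.stream()
                .filter(m -> location.equals(fusedLocationOfUser.get(m)))
                .filter(m -> {
                    String s = availabilityOfUser.getOrDefault(m, "offline");
                    return !s.equals("busy") && !s.equals("offline");
                })
                .count();
        return present * 2 > groupMembers.size(); // strictly more than 50 %
    }

    public static void main(String[] args) {
        List<String> group = List.of("alice", "bob", "carol");
        Map<String, String> where = Map.of("alice", "meeting room 1", "bob", "meeting room 1", "carol", "office");
        Map<String, String> status = Map.of("alice", "online", "bob", "online", "carol", "away");
        System.out.println(meetingAt("meeting room 1", true, group, where, status)); // true (2 of 3 members)
    }
}

On a positive result, the system would create the meeting-minutes post-it at that location and notify the absent group members.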
5. CONCLUSIONS
This paper discussed the potential of supporting group interaction by using context information. First, we introduced the notions of context and context computing and motivated their value for supporting group interaction. An architecture was presented to support context-aware group interaction in mobile, distributed environments. It is built upon a flexible and extensible framework, thus enabling an easy adaptation to the available context sources (e.g. by adding additional sensors) as well as to the required form of representation. We have prototypically developed a set of services which enhance group interaction by taking into account the current context of the users as well as the context of the groups themselves. Important features are the dynamic formation of groups, the visualization of location both on a two-dimensional map and unobtrusively integrated into an instant messenger, asynchronous communication by virtual post-its, which are bound to certain locations, and a context-aware availability management, which adapts the availability status of a user to his current situation. To provide location information, we have implemented a subsystem for the automated acquisition of location and proximity information provided by various sensors, which offers a technology-independent representation of locations and spatial proximities between users and merges this information using sensor-independent fusion algorithms. A history of locations as well as of spatial proximities is stored in a database, thus enabling services based on the context history.

6. REFERENCES
[1] Beer, W., Christian, V., Ferscha, A., Mehrmann, L. Modeling Context-aware Behavior by Interpreted ECA Rules. In Proceedings of the International Conference on Parallel and Distributed Computing (EUROPAR'03). (Klagenfurt, Austria, August 26-29, 2003). Springer Verlag, LNCS 2790, 1064-1073.
[2] Brown, P.J., Bovey, J.D., Chen, X. Context-Aware Applications: From the Laboratory to the Marketplace. IEEE Personal Communications, 4(5) (1997), 58-64.
[3] Chen, H., Kotz, D. A Survey of Context-Aware Mobile Computing Research. Technical Report TR2000-381, Computer Science Department, Dartmouth College, Hanover, New Hampshire, November 2000.
[4] Dey, A. Providing Architectural Support for Building Context-Aware Applications. Ph.D. Thesis, Department of Computer Science, Georgia Institute of Technology, Atlanta, November 2000.
[5] Domnitcheva, S. Location Modeling: State of the Art and Challenges. In Proceedings of the Workshop on Location Modeling for Ubiquitous Computing. (Atlanta, Georgia, United States, September 30, 2001). 13-19.
[6] Ferscha, A. Workspace Awareness in Mobile Virtual Teams. In Proceedings of the IEEE 9th International Workshop on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE'00). (Gaithersburg, Maryland, March 14-16, 2000). IEEE Computer Society Press, 272-277.
[7] Ferscha, A. Coordination in Pervasive Computing Environments. In Proceedings of the Twelfth International IEEE Workshop on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE-2003). (June 9-11, 2003). IEEE Computer Society Press, 3-9.
[8] Leonhard, U. Supporting Location Awareness in Open Distributed Systems. Ph.D. Thesis, Department of Computing, Imperial College, London, May 1998.
[9] Ryan, N., Pascoe, J., Morse, D. Enhanced Reality Fieldwork: the Context-Aware Archaeological Assistant. In Gaffney, V., Van Leusen, M., Exxon, S. (eds.) Computer Applications in Archaeology (1997).
[10] Schilit, B.N., Theimer, M. Disseminating Active Map Information to Mobile Hosts. IEEE Network, 8(5) (1994), 22-32.
[11] Schilit, B.N. A System Architecture for Context-Aware Mobile Computing. Ph.D. Thesis, Columbia University, Department of Computer Science, May 1995.
[12] Wang, B., Bodily, J., Gupta, S.K.S. Supporting Persistent Social Groups in Ubiquitous Computing Environments Using Context-Aware Ephemeral Group Service. In Proceedings of the Second IEEE International Conference on Pervasive Computing and Communications (PerCom'04). (March 14-17, 2004). IEEE Computer Society Press, 287-296.
[13] Pascoe, J. The Stick-e Note Architecture: Extending the Interface Beyond the User. In Proceedings of the 2nd International Conference on Intelligent User Interfaces (IUI'97). (Orlando, USA, 1997), 261-264.
[14] Dey, A., Abowd, G. CybreMinder: A Context-Aware System for Supporting Reminders. In Proceedings of the 2nd International Symposium on Handheld and Ubiquitous Computing (HUC'00). (Bristol, UK, 2000), 172-186.
Context Awareness for Group Interaction Support ABSTRACT In this paper, we present an implemented system for supporting group interaction in mobile distributed computing environments. First, an introduction to context computing and a motivation for using contextual information to facilitate group interaction is given. We then present the architecture of our system, which consists of two parts: a subsystem for location sensing that acquires information about the location of users as well as spatial proximities between them, and one for the actual context-aware application, which provides services for group interaction. 1. INTRODUCTION Today's computing environments are characterized by an increasing number of powerful, wirelessly connected mobile devices. Users can move throughout an environment while carrying their computers with them and having remote access to information and services, anytime and anywhere. New situations appear, where the user's context--for example his current location or nearby people--is more dynamic; computation does not occur at a single location and in a single context any longer, but comprises a multitude of situations and locations. This development leads to a new class of applications, which are aware of the context in which they run in and thus bringing virtual and real worlds together. Motivated by this and the fact, that only a few studies have been done for supporting group communication in such computing environments [12], we have developed a system, which we refer to as Group Interaction Support System (GISS). It supports group interaction in mobile distributed computing environments in a way that group members need not to at the same place any longer in order to interact with each other or just to be aware of the others situation. In the following subchapters, we will give a short overview on context aware computing and motivate its benefits for supporting group interaction. A software framework for developing contextsensitive applications is presented, which serves as middleware for GISS. Chapter 2 presents the architecture of GISS, and chapter 3 and 4 discuss the location sensing and group interaction concepts of GISS in more detail. Chapter 5 gives a final summary of our work. 1.1 What is Context Computing? According to Merriam-Webster's Online Dictionary1, context is defined as the "interrelated conditions in which something exists or occurs". Because this definition is very general, many approaches have been made to define the notion of context with respect to computing environments. Most definitions of context are done by enumerating examples or by choosing synonyms for context. The term context-aware has been introduced first in [10] where context is referred to as location, identities of nearby people and objects, and changes to those objects. In [2], context is also defined by an enumeration of examples, namely location, identities of the people around the user, the time of the day, season, temperature etc. [9] defines context as the user's location, environment, identity and time. Here we conform to a widely accepted and more formal definition, which defines context as "any information than can be used to characterize the situation of an entity. An entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and applications themselves". 
[4] [4] identifies four primary types of context information (sometimes referred to as context dimensions), that are--with respect to characterizing the situation of an entity--more important than others. These are location, identity, time and activity, which can also be used to derive other sources of contextual information (secondary context types). For example, if we know a person's identity, we can easily derive related information about this person from several data sources (e.g. day of birth or e-mail address). According to this definition, [4] defines a system to be contextaware "if it uses context to provide relevant information and/or services to the user, where relevancy depends on the user's task". [4] also gives a classification of features for context-aware applications, which comprises presentation of information and services to a user, automatic execution of a service and tagging of context to information for later retrieval. Figure 1. Layers of a context-aware system Context computing is based on two major issues, namely identifying relevant context (identity, location, time, activity) and using obtained context (automatic execution, presentation, tagging). In order to do this, there are a few layers between (see Figure 1). First, the obtained low-level context information has to be transformed, aggregated and interpreted (context transformation) and represented in an abstract context world model (context representation), either centralized or decentralized. Finally, the stored context information is used to trigger certain context events (context triggering). [7] 1.2 Group Interaction in Context After these abstract and formal definitions about what context and context computing is, we will now focus on the main goal of this work, namely how the interaction of mobile group members can be supported by using context information. In [6] we have identified organizational systems to be crucial for supporting mobile groups (see Figure 2). First, there has to be an Information and Knowledge Management System, which is capable of supporting a team with its information processing - and knowledge gathering needs. The next part is the Awareness System, which is dedicated to the perceptualisation of the effects of team activity. It does this by communicating work context, agenda and workspace information to the users. The Interaction Systems provide support for the communication among team members, either synchronous or asynchronous, and for the shared access to artefacts, such as documents. Mobility Systems deploy mechanisms to enable any-place access to team memory as well as the capturing and delivery of awareness information from and to any places. Finally yet importantly, the organisational innovation system integrates aspects of the team itself, like roles, leadership and shared facilities. With respect to these five aspects of team support, we focus on interaction and partly cover mobility - and awareness-support. Group interaction includes all means that enable group members to communicate freely with all the other members. At this point, the question how context information can be used for supporting group interaction comes up. We believe that information about the current situation of a person provides a surplus value to existing group interaction systems. 
Context information facilitates group interaction by allowing each member to be aware of the availability status or the current location of each other group member, which again makes it possible to form groups dynamically, to place virtual post-its in the real world or to determine which people are around. Figure 2. Support for Mobile Groups [6] Most of today's context-aware applications use location and time only, and location is referred to as a crucial type of context information [3]. We also see the importance of location information in mobile and ubiquitous environments, wherefore a main focus of our work is on the utilization of location information and information about users in spatial proximity. Nevertheless, we believe that location, as the only used type of context information, is not sufficient to support group interaction, wherefore we also take advantage of the other three primary types, namely identity, time and activity. This provides a comprehensive description of a user's current situation and thus enabling numerous means for supporting group interaction, which are described in detail in chapter 4.4. When we look at the types of context information stated above, we can see that all of them are single user-centred, taking into account only the context of the user itself. We believe, that for the support of group interaction, the status of the group itself has also be taken into account. Therefore, we have added a fifth contextdimension group-context, which comprises more than the sum of the individual member's contexts. Group context includes any information about the situation of a whole group, for example how many members a group currently has or if a certain group meets right now. 1.3 Context Middleware The Group Interaction Support System (GISS) uses the softwareframework introduced in [1], which serves as a middleware for developing context-sensitive applications. This so-called Context Framework is based on a distributed communication architecture and it supports different kinds of transport protocols and message coding mechanisms. A main feature of the framework is the abstraction of context information retrieval via various sensors and its delivery to a level where no difference appears, for the application designer, between these different kinds of context retrieval mechanisms; the information retrieval is hidden from the application developer. This is achieved by so-called entities, which describe objects--e.g. a human user--that are important for a certain context scenario. Entities express their functionality by the use of so-called attributes, which can be loaded into the entity. These attributes are complex pieces of software, which are implemented as Java classes. Typical attributes are encapsulations of sensors, but they can also be used to implement context services, for example to notify other entities about location changes of users. Each entity can contain a collection of such attributes, where an entity itself is an attribute. The initial set of attributes an entity contains can change dynamically at runtime, if an entity loads or unloads attributes from the local storage or over the network. In order to load and deploy new attributes, an entity has to reference a class loader and a transport and lookup layer, which manages the lookup mechanism for discovering other entities and the transport. XML configuration files specify which initial set of entities should be loaded and which attributes these entities own. 
The communication between entities and attributes is based on context events. Each attribute is able to trigger events, which are addressed to other attributes and entities respectively, independently on which physical computer they are running. Among other things, and event contains the name of the event and a list of parameters delivering information about the event itself. Related with this event-based architecture is the use of ECA (Event-Condition-Action) - rules for defining the behaviour of the context system. Therefore, every entity has a rule-interpreter, which catches triggered events, checks conditions associated with them and causes certain actions. These rules are referenced by the entity's XML configuration. A rule itself is even able to trigger the insertion of new rules or the unloading of existing rules at runtime in order to change the behaviour of the context system dynamically. To sum up, the context framework provides a flexible, distributed architecture for hiding low-level sensor data from high-level applications and it hides external communication details from the application developer. Furthermore, it is able to adapt its behaviour dynamically by loading attributes, entities or ECArules at runtime. 2. ARCHITECTURE OVERVIEW As GISS uses the Context Framework described in chapter 1.3 as middleware, every user is represented by an entity, as well as the central server, which is responsible for context transformation, context representation and context triggering (cf. Figure 1). A main part of our work is about the automated acquisition of position information and its sensor-independent provision at application level. We do not only sense the current location of users, but also determine spatial proximities between them. Developing the architecture, we focused on keeping the client as simple as possible and reducing the communication between client and server to a minimum. Each client may have various location and/or proximity sensors attached, which are encapsulated by respective Context Framework-attributes ("Sensor Encapsulation"). These attributes are responsible for integrating native sensor-implementations into the Context Framework and sending sensor-dependent position information to the server. We consider it very important to support different types of sensors even at the same time, in order to improve location accuracy on the one hand, while providing a pervasive location-sensing environment with seamless transition between different location sensing techniques on the other hand. All location - and proximity-sensors supported are represented by server-side context-attributes, which correspond to the client-side sensor encapsulation-attributes and abstract the sensor-dependent position information received from all users via the wireless network (sensor abstraction). This requires a context repository, where the mapping of diverse physical positions to standardized locations is stored. The standardized location - and proximity-information of each user is then passed to the so-called "Sensor Fusion" - attributes, one for symbolic locations and a second one for spatial proximities. Their job is to merge location - and proximityinformation of clients, respectively, which is described in detail in Chapter 3.3. Every time the symbolic location of a user or the spatial proximity between two users changes, the "Sensor Fusion" - attributes notify the "GISS Core" - attribute, which controls the application. 
Because of the abstraction of sensor-dependent position information, the system can easily be extended by additional sensors, just by implementing the (typically two) attributes for encapsulating sensors (some sensors may not need a client-side part), abstracting physical positions and observing the interface to "GISS Core". Figure 3. Architecture of the Group Interaction Support System (GISS) The "GISS Core" - attribute is the central coordinator of the application as it shows to the user. It not only serves as an interface to the location-sensing subsystem, but also collects further context information in other dimensions (time, identity or activity). Every time a change in the context of one or more users is detected, "GISS Core" evaluates the effect of these changes on the user, on the groups he belongs to and on the other members of these groups. Whenever necessary, events are thrown to the affected clients to trigger context-aware activities, like changing the presentation of awareness information or the execution of services. The client-side part of the application is kept as simple as possible. Furthermore, modular design was not only an issue on the sensor side but also when designing the user interface architecture. Thus, the complete user interface can be easily exchanged, if all of the defined events are taken into account and understood by the new interface-attribute. The currently implemented user interface is split up in two parts, which are also represented by two attributes. The central attribute on client-side is the so-called "Instant Messenger Encapsulation", which on the one hand interacts with the server through events and on the other hand serves as a proxy for the external application the user interface is built on. As external application, we use an existing open source instant messenger--the ICQ2-compliant Simple Instant Messenger (SIM) 3. We have chosen and instant messenger as front-end because it provides a well-known interface for most users and facilitates a seamless integration of group interaction support, thus increasing acceptance and ease of use. As the basic functionality of the instant messenger--to serve as a client in an instant messenger network--remains fully functional, our application is able to use the features already provided by the messenger. For example, the contexts activity and identity are derived from the messenger network as it is described later. The Instant Messenger Encapsulation is also responsible for supporting group communication. Through the interface of the messenger, it provides means of synchronous and asynchronous communication as well as a context-aware reminder system and tools for managing groups and the own availability status. The second part of the user interface is a visualisation of the user's locations, which is implemented in the attribute "Viewer". The current implementation provides a two-dimensional map of the campus, but it can easily be replaced by other visualisations, a three-dimensional VRML-model for example. Furthermore, this visualisation is used to show the artefacts for asynchronous communication. Based on a floor plan-view of the geographical area the user currently resides in, it gives a quick overview of which people are nearby, their state and provides means to interact with them. In the following chapters 3 and 4, we describe the location sensing-backend and the application front-end for supporting group interaction in more detail. 3. 
LOCATION SENSING In the following chapter, we will introduce a location model, which is used for representing locations; afterwards, we will describe the integration of location - and proximity-sensors in more detail. Finally, we will have a closer look on the fusion of location - and proximity-information, acquired by various sensors. 3.1 Location Model A location model (i.e. a context representation for the contextinformation location) is needed to represent the locations of users, in order to be able to facilitate location-related queries like "given a location, return a list of all the objects there" or "given an object, return its current location". In general, there are two approaches [3,5]: symbolic models, which represent location as abstract symbols, and a geometric model, which represent location as coordinates. We have chosen a symbolic location model, which refers to locations as abstract symbols like "Room P111" or "Physics Building", because we do not require geometric location data. Instead, abstract symbols are more convenient for human interaction at application level. Furthermore, we use a symbolic location containment hierarchy similar to the one introduced in [11], which consists of top-level regions, which contain buildings, which contain floors, and the floors again contain rooms. We also distinguish four types, namely region (e.g. a whole campus), section (e.g. a building or an outdoor section), level (e.g. a certain floor in a building) and area (e.g. a certain room). We introduce a fifth type of location, which we refer to as semantic. These socalled semantic locations can appear at any level in the hierarchy and they can be nested, but they do not necessarily have a geographic representation. Examples for such semantic locations are tagged objects within a room (e.g. a desk and a printer on this desk) or the name of a department, which contains certain rooms. Figure 4. Symbolic Location Containment Hierarchy The hierarchy of symbolic locations as well as the type of each position is stored in the context repository. 3.2 Sensors Our architecture supports two different kinds of sensors: location sensors, which acquire location information, and proximity sensors, which detect spatial proximities between users. As described above, each sensor has a server - and in most cases a corresponding client-side-implementation, too. While the clientattributes ("Sensor Abstraction") are responsible for acquiring low-level sensor-data and transmitting it to the server, the corresponding "Sensor Encapsulation" - attributes transform them into a uniform and sensor-independent format, namely symbolic locations and IDs of users in spatial proximity, respectively. Afterwards, the respective attribute "Sensor Fusion" is being triggered with this sensor-independent information of a certain user, detected by a particular sensor. Such notifications are performed every time the sensor acquired new information. Accordingly, "Sensor Abstraction" - attributes are responsible to detect when a certain sensor is no longer available on the client side (e.g. if it has been unplugged by the user) or when position respectively proximity could not be determined any longer (e.g. RFID reader cannot detect tags) and notify the corresponding sensor fusion about this. 3.2.1 Location Sensors In order to sense physical positions, the "Sensor Encapsulation" attributes asynchronously transmit sensor-dependent position information to the server. 
The corresponding location "Sensor Abstraction" - attributes collect these physical positions delivered by the sensors of all users, and perform a repository-lookup in order to get the associated symbolic location. This requires certain tables for each sensor, which map physical positions to symbolic locations. One physical position may have multiple symbolic locations at different accuracy-levels in the location hierarchy assigned to, for example if a sensor covers several rooms. If such a mapping could be found, an event is thrown in order to notify the attribute "Location Sensor Fusion" about the symbolic locations a certain sensor of a particular user determined. We have prototypically implemented three kinds of location sensors, which are based on WLAN (IEEE 802.11), Bluetooth and RFID (Radio Frequency Identification). We have chosen these three completely different sensors because of their differences concerning accuracy, coverage and administrative effort, in order to evaluate the flexibility of our system (see Table 1). The most accurate one is an RFID sensor, which is based on an active RFID-reader. As soon as the reader is plugged into the client, it scans for active RFID tags in range and transmits their serial numbers to the server, where they are mapped to symbolic locations. We also take into account RSSI (Radio Signal Strength Information), which provides position accuracy of few centimetres and thus enables us to determine which RFID-tag is nearest. Due to this high accuracy, RFID is used for locating users within rooms. The administration is quite simple; once a new RFID tag is placed, its serial number simply has to be assigned to a single symbolic location. A drawback is the poor availability, which can be traced back to the fact that RFID readers are still very expensive. The second one is an 802.11 WLAN sensor. Therefore, we integrated a purely software-based, commercial WLAN positioning system for tracking clients on the university campuswide WLAN infrastructure. The reached position accuracy is in the range of few meters and thus is suitable for location sensing at the granularity of rooms. A big disadvantage is that a map of the whole area has to be calibrated with measuring points at a distance of 5 meters each. Because most mobile computers are equipped with WLAN technology and the positioning-system is a software-only solution, nearly everyone is able to use this kind of sensor. Finally, we have implemented a Bluetooth sensor, which detects Bluetooth tags (i.e. Bluetooth-modules with known position) in range and transmits them to the server that maps to symbolic locations. Because of the fact that we do not use signal strengthinformation in the current implementation, the accuracy is above 10 meters and therefore a single Bluetooth MAC address is associated with several symbolic locations, according to the physical locations such a Bluetooth module covers. This leads to the disadvantage that the range of each Bluetooth-tag has to be determined and mapped to symbolic locations within this range. Table 1. Comparison of implemented sensors 3.2.2 Proximity Sensors Any sensor that is able to detect whether two users are in spatial proximity is referred to as proximity sensor. Similar to the location sensors, the "Proximity Sensor Abstraction" - attributes collect physical proximity information of all users and transform them to mappings of user-IDs. 
We have implemented two types of proximity-sensors, which are based on Bluetooth on the one hand and on fused symbolic locations (see chapter 3.3.1) on the other hand. The Bluetooth-implementation goes along with the implementation of the Bluetooth-based location sensor. The already determined Bluetooth MAC addresses in range of a certain client are being compared with those of all other clients, and each time the attribute "Bluetooth Sensor Abstraction" detects congruence, it notifies the proximity sensor fusion about this. The second sensor is based on symbolic locations processed by "Location Sensor Fusion", wherefore it does not need a client-side implementation. Each time the fused symbolic location of a certain user changes, it checks whether he is at the same symbolic location like another user and again notifies the proximity sensor fusion about the proximity between these two users. The range can be restricted to any level of the location containment hierarchy, for example to room granularity. A currently unresolved issue is the incomparable granularity of different proximity sensors. For example, the symbolic locations at same level in the location hierarchy mostly do not cover the same geographic area. 3.3 Sensor Fusion Core of the location sensing subsystem is the sensor fusion. It merges data of various sensors, while coping with differences concerning accuracy, coverage and sample-rate. According to the two kinds of sensors described in chapter 3.2, we distinguish between fusion of location sensors on the one hand, and fusion of proximity sensors on the other hand. The fusion of symbolic locations as well as the fusion of spatial proximities operates on standardized information (cf. Figure 3). This has the advantage, that additional position - and proximitysensors can be added easily or the fusion algorithms can be replaced by ones that are more sophisticated. Fusion is performed for each user separately and takes into account the measurements at a single point in time only (i.e. no history information is used for determining the current location of a certain user). The algorithm collects all events thrown by the "Sensor Abstraction" - attributes, performs fusion and triggers the "GISS Core" - attribute if the symbolic location of a certain user or the spatial proximity between users changed. An important feature is the persistent storage of location - and proximity-history in a database in order to allow future retrieval. This enables applications to visualize the movement of users for example. 3.3.1 Location Sensor Fusion Goal of the fusion of location information is to improve precision and accuracy by merging the set of symbolic locations supplied by various location sensors, in order to reduce the number of these locations to a minimum, ideally to a single symbolic location per user. This is quite difficult, because different sensors may differ in accuracy and sample rate as well. The "Location Sensor Fusion" - attribute is triggered by events, which are thrown by the "Location Sensor Abstraction" attributes. These events contain information about the identity of the user concerned, his current location and the sensor by which the location has been determined. If the attribute "Location Sensor Fusion" receives such an event, it checks if the amount of symbolic locations of the user concerned has changed (compared with the last event). If this is the case, it notifies the "GISS Core" - attribute about all symbolic locations this user is currently associated with. 
However, this information is not very useful on its own if a certain user is associated with several locations. As described in chapter 3.2.1, a single location sensor may deliver multiple symbolic locations. Moreover, a certain user may have several location sensors, which supply symbolic locations differing in accuracy (i.e. different levels in the location containment hierarchy). To cope with this challenge, we implemented a fusion algorithm in order to reduce the number of symbolic locations to a minimum (ideally to a single location). In a first step, each symbolic location is associated with its number of occurrences. A symbolic location may occur several times if it is referred to by more than one sensor or if a single sensor detects multiple tags, which again refer to several locations. Furthermore, this number is added to the previously calculated number of occurrences of each symbolic location, which is a child-location of the considered one in the location containment hierarchy. For example, if--in Figure 4--"room2" occurs two times and "desk" occurs a single time, the value 2 of "room2" is added to the value 1 of "desk", whereby "desk" finally gets the value 3. In a final step, only those symbolic locations are left which are assigned with the highest number of occurrences. A further reduction can be achieved by assigning priorities to sensors (based on accuracy and confidence) and cumulating these priorities for each symbolic location instead of just counting the number of occurrences. If the remaining fused locations have changed (i.e. if they differ from the fused locations the considered user is currently associated with), they are provided with the current timestamp, written to the database and the "GISS" - attribute is notified about where the user is probably located. Finally, the most accurate, common location in the location hierarchy is calculated (i.e. the least upper bound of these symbolic locations) in order to get a single symbolic location. If it changes, the "GISS Core" - attribute is triggered again. 3.3.2 Proximity Sensor Fusion Proximity sensor fusion is much simpler than the fusion of symbolic locations. The corresponding proximity sensor fusionattribute is triggered by events, which are thrown by the "Proximity Sensor Abstraction" - attributes. These special events contain information about the identity of the two users concerned, if they are currently in spatial proximity or if proximity no longer persists, and by which proximity-sensor this has been detected. If the sensor fusion-attribute is notified by a certain "Proximity Sensor Abstraction" - attribute about an existing spatial proximity, it first checks if these two users are already known to be in proximity (detected either by another user or by another proximity-sensor of the user, which caused the event). If not, this change in proximity is written to the context repository with current timestamp. Similarly, if the attribute "Proximity Fusion" is notified about an ended proximity, it checks if the users are still known to be in proximity, and writes this change to the repository if not. Finally, if spatial proximity between the two users actually changed, an event is thrown to notify the "GISS Core" - attribute about this. 4. CONTEXTSENSITIVE INTERACTION 4.1 Overview In most of today's systems supporting interaction in groups, the provided means lack any awareness of the user's current context, thus being unable to adapt to his needs. 
In our approach, we use context information to enhance interaction and provide further services, which offer new possibilities to the user. Furthermore, we believe that interaction in groups also has to take into account the current context of the group itself and not only the context of individual group members. For this reason, we also retrieve information about the group's current context, derived from the contexts of the group members together with some sort of meta-information (see chapter 4.3). The sources of context used for our application correspond with the four primary context types given in chapter 1.1--identity (I), location (L), time (T) and activity (A). As stated before, we also take into account the context of the group the user is interaction with, so that we could add a fifth type of context information--group awareness (G)--to the classification. Using this context information, we can trigger context-aware activities in all of the three categories described in chapter 1.1--presentation of information (P), automatic execution of services (A) and tagging of context to information for later retrieval (T). Table 2 gives an overview of activities we have already implemented; they are described comprehensively in chapter 4.4. The table also shows which types of context information are used for each activity and the category the activity could be classified in. Table 2. Classification of implemented context-aware activities Reasons for implementing these very features are to take advantage of all four types of context information in order to support group interaction by utilizing a comprehensive knowledge about the situation a single user or a whole group is in. A critical issue for the user acceptance of such a system is the usability of its interface. We have evaluated several ways of presenting context-aware means of interaction to the user, until we came to the solution we use right now. Although we think that the user interface that has been implemented now offers the best trade-off between seamless integration of features and ease of use, it would be no problem to extend the architecture with other user interfaces, even on different platforms. The chosen solution is based on an existing instant messenger, which offers several possibilities to integrate our system (see chapter 4.2). The biggest advantage of this approach is that the user is confronted with a graphical user interface he is already used to in most cases. Furthermore, our system uses an instant messenger account as an identifier, so that the user does not have to register a further account anywhere else (for example, the user can use his already existing ICQ2-account). 4.2 Instant Messenger Integration Our system is based upon an existing instant messenger, the socalled Simple Instant Messenger (SIM) 3. The implementation of this messenger is carried out as a project at Sourceforge4. SIM supports multiple messenger protocols such as AIM5, ICQ2 and MSN6. It also supports connections to multiple accounts at the same time. Furthermore, full support for SMS-notification (where provided from the used protocol) is given. SIM is based on a plug-in concept. All protocols as well as parts of the user-interface are implemented as plug-ins. Its architecture is also used to extend the application's abilities to communicate with external applications. For this purpose, a remote control plug-in is provided, by which SIM can be controlled from external applications via socket connection. 
This remote control interface is extensively used by GISS for retrieving the contact list, setting the user's availability-state or sending messages. The functionality of the plug-in was extended in several ways, for example to accept messages for an account (as if they had been sent via the messenger network). The messenger, more exactly the contact list (i.e. a list of profiles of all people registered with the instant messenger, which is visualized by listing their names as can be seen in Figure 5), is also used to display the locations of other members of the groups a user belongs to. This provides location awareness without taking too much space or requesting the user's full attention. A more comprehensive description of these features is given in chapter 4.4. 4.3 Sources of Context Information While the location-context of a user is obtained from our location sensing subsystem described in chapter 3, we consider types of context other than location to be relevant for the support of group interaction, too. Local time, as a very important context dimension, can easily be retrieved from the real-time clock of the user's system. Besides location and time, we also use context information about the user's activity and identity, where we exploit the functionality provided by the underlying instant messenger system. Identity (or, more exactly, the mapping of IDs to names as well as additional information from the user's profile) can be distilled out of the contents of the user's contact list. Information about the activity of a certain user is only available to a very restricted extent, namely the activity at the computer itself. Other activities, like making a phone call or something similar, cannot be recognized with the current implementation of the activity sensor. The only context information used is the instant messenger's availability state, thus providing only a very coarse classification of the user's activity (online, offline, away, busy etc.). Although this may not seem to be very much information, it is surely relevant and can be used to improve or even enable several services. Having collected the context information from all available users, it is now possible to distil some information about the context of a certain group. Information about the context of a group includes how many members the group currently has, whether the group meets right now, which members are participating in a meeting, how many members have read which of the available posts from other team members and so on. Therefore, some additional information like a list of members for each group is needed. These lists can be assembled manually (by users joining and leaving groups) or retrieved automatically. The context of a group is secondary context and is aggregated from the available contexts of the group members. Every time the context of a single group member changes, the context of the whole group changes and has to be recalculated. With knowledge about a user's context and the context of the groups he belongs to, we can provide several context-aware services to the user, which enhance his interaction abilities. A brief description of these services is given in chapter 4.4. 4.4 Group Interaction Support 4.4.1 Visualisation of Location Information An important feature is the visualisation of location information, which allows a user to be aware of the location of other users and of the members of the groups he has joined. As already described in chapter 2, we use two different forms of visualisation. The arguably more important one is to display location information in the contact list of the instant messenger, right beside the name, so that it is always visible without drawing the user's attention to it (compared with a two-dimensional view, for example, which requires its own window for displaying a map of the environment). Due to the restricted space in the contact list, it has been necessary to implement some sort of level-of-detail concept. As we use a hierarchical location model, we are able to determine the most accurate common location of two users. In the contact list, the current symbolic location one level below the previously calculated common location is then displayed. If, for example, user A currently resides in room "P121" on the first floor of a building and user B, who has to be displayed in the contact list of user A, is in room "P304" on the third floor, the most accurate common location of these two users is the building they are in. For that reason, the floor (i.e. one level below the common location, namely the building) of user B is displayed in the contact list of user A. If both people reside on the same floor or even in the same room, the room would be displayed.
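The following is a small sketch of this level-of-detail rule, again in Java. The representation of a user's position as a path from the most specific to the most general location, and all names in the example, are assumptions made for illustration; location names are taken to be unique within the hierarchy. A "fully qualified" position string such as the one shown further below can be split at its "@" characters to obtain such a path.

    import java.util.*;

    // Hypothetical sketch: determine the most accurate common location of two
    // users and return the other user's location one level below it, which is
    // what would be displayed in the viewer's contact list.
    public class ContactListLabel {

        public static String labelFor(List<String> viewerPath, List<String> otherPath) {
            Set<String> viewerLocations = new HashSet<>(viewerPath);
            // walk the other user's path from the most general to the most specific
            // location; as long as an element also lies on the viewer's path it is a
            // common location, and the element one step more specific is the label
            String label = otherPath.get(otherPath.size() - 1); // fallback: top level
            for (int i = otherPath.size() - 1; i >= 0; i--) {
                if (!viewerLocations.contains(otherPath.get(i))) {
                    break;
                }
                label = (i > 0) ? otherPath.get(i - 1) : otherPath.get(i);
            }
            return label;
        }

        public static void main(String[] args) {
            List<String> userA = Arrays.asList("P121", "1st floor", "building 1", "campus");
            List<String> userB = Arrays.asList("P304", "3rd floor", "building 1", "campus");
            // the most accurate common location is "building 1",
            // so user B is shown as "3rd floor" in user A's contact list
            System.out.println(labelFor(userA, userB));
        }
    }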
Figure 5 shows a screenshot of the Simple Instant Messenger, where the current location of those people whose location is known to GISS is displayed in brackets right beside their name. At the top of the image, the highlighted, integrated GISS-toolbar is shown, which currently contains the following implemented functionality (from left to right): asynchronous communication for groups (see chapter 4.4.4), context-aware reminders (see chapter 4.4.6), two-dimensional visualisation of location information, forming and managing groups (see chapter 4.4.2), context-aware availability-management (see chapter 4.4.5) and finally a button for terminating GISS. Figure 5. GISS integration in the Simple Instant Messenger As displaying just this short form of location may not be enough for the user, because he may want to see the most accurate position available, a "fully qualified" position is shown if a name in the contact list is clicked (e.g. in the form of "desk@room2@department1@1stfloor@building 1@campus"). The second possible form of visualisation is a graphical one. We have evaluated a three-dimensional view, which was based on a VRML model of the respective area (cf. Figure 6). Due to navigational and usability shortcomings, we decided to use a two-dimensional view of the floor (referred to as level in the location hierarchy, cf. Figure 4). Other levels of granularity like section (e.g. building) and region (e.g. campus) are also provided. In this floor-plan-based view, the current locations are shown in the manner of ICQ contacts, which are placed at the currently sensed location of the respective person. The availability-status of a user, for example "away" if he is not at the computer right now, or "busy" if he does not want to be disturbed, is visualized by colour-coding the ICQ-flower left beside the name. Furthermore, the floor-plan-view shows so-called virtual post-its, which are virtual counterparts of real-life post-its and serve as our means of asynchronous communication (more about virtual post-its can be found in chapter 4.4.4). Figure 6. 3D-view of the floor (VRML) Figure 7 shows the two-dimensional map of a certain floor, where several users are currently located (visualized by their name and the flower beside it).
The location of the client on which the map is displayed is visualized by a green circle. At the bottom right, two virtual post-its can be seen. Figure 7. 2D view of the floor Another feature of the 2D-view is the visualisation of the location history of users. As we store the complete history of a user's locations together with a timestamp, we are able to provide information about the locations he has been at in the past. When the mouse is moved over the name of a certain user in the 2D-view, "footprints" of that user are shown at the locations he has been at; the older the location information is, the more strongly they are faded out. 4.4.2 Forming and Managing Groups To support interaction in groups, it is first necessary to form groups. As groups can have different purposes, we distinguish two types of groups. So-called static groups are groups which are built up manually by people joining and leaving them. Static groups can be further divided into two subtypes. In open static groups, everybody can join and leave at any time, which is useful, for example, to form a group of lecture attendees or some sort of interest group. Closed static groups have an owner, who decides which persons are allowed to join, although everybody can leave again at any time. Closed groups enable users, for example, to create a group of their friends, thus being able to communicate with them easily. In contrast to that, we also support the creation of dynamic groups. They are formed among persons who are at the same location at the same time. The creation of dynamic groups is only performed at locations where it makes sense to form groups, for example in lecture halls or meeting rooms, but not in corridors or outdoors. It would also not be very meaningful to form a group only of the people residing in the left front sector of a hall; instead, the complete hall should be considered. For these reasons, all the defined locations in the hierarchy are tagged with whether they allow the formation of groups or not. Dynamic groups are not only formed at the granularity of rooms, but also on higher levels in the hierarchy, for example with the people currently residing in the area of a department. As the members of dynamic groups constantly change, it is possible to create an open static group out of them. 4.4.3 Synchronous Communication for Groups The most important form of synchronous communication on computers today is instant messaging; some people even consider instant messaging to be the real killer application on the Internet. This has also motivated the decision to build GISS upon an instant messaging system. In today's messenger systems, peer-to-peer communication is extensively supported. However, when it comes to communication in groups, the support is rather poor most of the time. Often, only sending a message to multiple recipients is supported, lacking means to take into account the current state of the recipients. Furthermore, groups can only be formed of members in one's contact list, which makes it impossible to send messages to a group where not all of its members are known (which may be the case in settings where the participants of a lecture form a group). Our approach does not have the mentioned restrictions. We introduce group entries in the user's contact list, enabling him to send messages to a group easily, without knowing exactly who is currently a member of this group.
Furthermore, group messages are only delivered to persons who are currently not busy, thus preventing disturbance by a message which is possibly unimportant for the user. These features cannot be carried out in the messenger network itself, so whenever a message to a group account is sent, we intercept it and route it through our system to all the recipients who are available at that time. Communication via a group account is also stored centrally, enabling people to query missed messages or simply to view the message history. 4.4.4 Asynchronous Communication for Groups Asynchronous communication in groups is not a new idea. The goal of this approach is not to reinvent the wheel, as email is maybe the most widely used form of asynchronous communication on computers and is broadly accepted and standardized. In our work, we aim at the combination of asynchronous communication with location awareness. For this reason, we introduce the concept of so-called virtual post-its (cp. [13]), which are messages that are bound to physical locations. These virtual post-its can either be visible to all users that are passing by, or they can be restricted to be visible for certain groups of people only. Moreover, a virtual post-it can also have an expiry date after which it is dropped and not displayed anymore. Virtual post-its can also be commented on by others, thus providing some form of forum-like interaction, where each post-it forms a thread. Virtual post-its are displayed automatically whenever an available user passes by for the first time. Afterwards, post-its can be accessed via the 2D-viewer, where all visible post-its are shown. All readers of a post-it are logged and displayed when viewing it, providing some sort of awareness of the group members' activities in the past. 4.4.5 Context-aware Availability Management Instant messengers in general provide some kind of availability information about a user. Although this information can only be defined at a very coarse granularity, we have decided to use this means of gathering activity context, because the introduction of an additional one would strongly decrease the usability of the system. To support the user in managing his availability, we provide an interface that lets the user define rules to adapt his availability to the current context. These rules follow the form "on event (E) if condition (C) then action (A)", which is directly supported by the ECA-rules of the Context Framework described in chapter 1.3. The testing of conditions is triggered by events, which are thrown whenever the context of a user changes. The condition itself is defined by the user, who can demand the change of his availability status as the action in the rule. As a condition, the user can define his location, a certain time (also triggering daily, every week or every month) or any logical combination of these criteria. 4.4.6 Context-Aware Reminders Reminders [14] are used to give the user the opportunity to define tasks and to be reminded of them when certain criteria are fulfilled. Thus, a reminder can be seen as a post-it to oneself, which is only visible in certain cases. Reminders can be bound to a certain place or time, but also to spatial proximity of users or groups. These criteria can be combined with Boolean operators, thus providing a powerful means to remind the user of tasks that he wants to carry out when a certain context occurs. A reminder will only pop up the first time the actual context meets the defined criterion.
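The sketch below illustrates how such a reminder criterion, combining place, time and proximity with Boolean operators, can be expressed in the "on event / if condition / then action" style described above. It is plain Java; the context snapshot, the concrete criterion and all names are purely hypothetical and not taken from the Context Framework's actual rule syntax.

    import java.time.LocalTime;
    import java.util.Set;

    // Hypothetical ECA-style reminder rule: on every context-change event the
    // condition is tested, and the action (showing the reminder) is executed
    // only the first time the condition holds.
    public class ReminderRule {

        // minimal context snapshot delivered with each context-change event
        record Context(String location, LocalTime time, Set<String> usersInProximity) {}

        private boolean alreadyFired = false;

        // condition: in "room2" AND (after 14:00 OR user "bob" in spatial proximity)
        private boolean condition(Context c) {
            return c.location().equals("room2")
                    && (c.time().isAfter(LocalTime.of(14, 0))
                        || c.usersInProximity().contains("bob"));
        }

        // event handler: called whenever the user's context changes
        public void onContextChanged(Context c) {
            if (!alreadyFired && condition(c)) {
                alreadyFired = true;                      // pop up only once
                System.out.println("Reminder: pick up the printouts");
            }
        }

        public static void main(String[] args) {
            ReminderRule rule = new ReminderRule();
            rule.onContextChanged(new Context("corridor", LocalTime.of(13, 30), Set.of()));
            rule.onContextChanged(new Context("room2", LocalTime.of(13, 45), Set.of("bob")));
        }
    }

An availability rule as described in chapter 4.4.5 has the same structure; only the action would set the messenger's availability status instead of showing a reminder.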
When the reminder shows up, the user has the chance to resubmit it in order to be reminded again, for example five minutes later or the next time a certain user is in spatial proximity. 4.4.7 Context-Aware Recognition and Notification of Group Meetings With the available context information, we try to recognize meetings of a group. The determination of the criteria by which the system recognizes that a group is having a meeting is part of ongoing work. In a first approach, we use the location- and activity-context of the group members to determine a meeting. Whenever more than 50% of the members of a group are available at a location where a meeting is considered to make sense (e.g. not in a corridor), a meeting-minutes post-it is created at this location and all absent group members are notified of the meeting and the location where it takes place. During the meeting, the comment feature of virtual post-its provides a means for all of the participants to take notes. When members join or leave the meeting, this is automatically added as a note to the list of comments. Like the recognition of the beginning of a meeting, the recognition of its end is still part of ongoing work. When the end of the meeting is recognized, all group members get the complete list of comments as a meeting protocol. 5. CONCLUSIONS This paper discussed the potentials of supporting group interaction by using context information. First, we introduced the notions of context and context computing and motivated their value for supporting group interaction. An architecture is presented to support context-aware group interaction in mobile, distributed environments. It is built upon a flexible and extensible framework, thus enabling an easy adaptation to available context sources (e.g. by adding additional sensors) as well as to the required form of representation. We have prototypically developed a set of services which enhance group interaction by taking into account the current context of the users as well as the context of the groups themselves. Important features are the dynamic formation of groups, the visualization of location, both on a two-dimensional map and unobtrusively integrated in an instant messenger, asynchronous communication by virtual post-its, which are bound to certain locations, and a context-aware availability-management, which adapts the availability-status of a user to his current situation. To provide location information, we have implemented a subsystem for the automated acquisition of location- and proximity-information from various sensors, which provides a technology-independent representation of locations and spatial proximities between users and merges this information using sensor-independent fusion algorithms. A history of locations as well as of spatial proximities is stored in a database, thus enabling context history-based services.
Context Awareness for Group Interaction Support ABSTRACT In this paper, we present an implemented system for supporting group interaction in mobile distributed computing environments. First, an introduction to context computing and a motivation for using contextual information to facilitate group interaction is given. We then present the architecture of our system, which consists of two parts: a subsystem for location sensing that acquires information about the location of users as well as spatial proximities between them, and one for the actual context-aware application, which provides services for group interaction. 1. INTRODUCTION Today's computing environments are characterized by an increasing number of powerful, wirelessly connected mobile devices. Users can move throughout an environment while carrying their computers with them and having remote access to information and services, anytime and anywhere. New situations appear, where the user's context--for example his current location or nearby people--is more dynamic; computation does not occur at a single location and in a single context any longer, but comprises a multitude of situations and locations. This development leads to a new class of applications, which are aware of the context in which they run in and thus bringing virtual and real worlds together. Motivated by this and the fact, that only a few studies have been done for supporting group communication in such computing environments [12], we have developed a system, which we refer to as Group Interaction Support System (GISS). It supports group interaction in mobile distributed computing environments in a way that group members need not to at the same place any longer in order to interact with each other or just to be aware of the others situation. In the following subchapters, we will give a short overview on context aware computing and motivate its benefits for supporting group interaction. A software framework for developing contextsensitive applications is presented, which serves as middleware for GISS. Chapter 2 presents the architecture of GISS, and chapter 3 and 4 discuss the location sensing and group interaction concepts of GISS in more detail. Chapter 5 gives a final summary of our work. 1.1 What is Context Computing? According to Merriam-Webster's Online Dictionary1, context is defined as the "interrelated conditions in which something exists or occurs". Because this definition is very general, many approaches have been made to define the notion of context with respect to computing environments. Most definitions of context are done by enumerating examples or by choosing synonyms for context. The term context-aware has been introduced first in [10] where context is referred to as location, identities of nearby people and objects, and changes to those objects. In [2], context is also defined by an enumeration of examples, namely location, identities of the people around the user, the time of the day, season, temperature etc. [9] defines context as the user's location, environment, identity and time. Here we conform to a widely accepted and more formal definition, which defines context as "any information than can be used to characterize the situation of an entity. An entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and applications themselves". 
[4] [4] identifies four primary types of context information (sometimes referred to as context dimensions), that are--with respect to characterizing the situation of an entity--more important than others. These are location, identity, time and activity, which can also be used to derive other sources of contextual information (secondary context types). For example, if we know a person's identity, we can easily derive related information about this person from several data sources (e.g. day of birth or e-mail address). According to this definition, [4] defines a system to be contextaware "if it uses context to provide relevant information and/or services to the user, where relevancy depends on the user's task". [4] also gives a classification of features for context-aware applications, which comprises presentation of information and services to a user, automatic execution of a service and tagging of context to information for later retrieval. Figure 1. Layers of a context-aware system Context computing is based on two major issues, namely identifying relevant context (identity, location, time, activity) and using obtained context (automatic execution, presentation, tagging). In order to do this, there are a few layers between (see Figure 1). First, the obtained low-level context information has to be transformed, aggregated and interpreted (context transformation) and represented in an abstract context world model (context representation), either centralized or decentralized. Finally, the stored context information is used to trigger certain context events (context triggering). [7] 1.2 Group Interaction in Context After these abstract and formal definitions about what context and context computing is, we will now focus on the main goal of this work, namely how the interaction of mobile group members can be supported by using context information. In [6] we have identified organizational systems to be crucial for supporting mobile groups (see Figure 2). First, there has to be an Information and Knowledge Management System, which is capable of supporting a team with its information processing - and knowledge gathering needs. The next part is the Awareness System, which is dedicated to the perceptualisation of the effects of team activity. It does this by communicating work context, agenda and workspace information to the users. The Interaction Systems provide support for the communication among team members, either synchronous or asynchronous, and for the shared access to artefacts, such as documents. Mobility Systems deploy mechanisms to enable any-place access to team memory as well as the capturing and delivery of awareness information from and to any places. Finally yet importantly, the organisational innovation system integrates aspects of the team itself, like roles, leadership and shared facilities. With respect to these five aspects of team support, we focus on interaction and partly cover mobility - and awareness-support. Group interaction includes all means that enable group members to communicate freely with all the other members. At this point, the question how context information can be used for supporting group interaction comes up. We believe that information about the current situation of a person provides a surplus value to existing group interaction systems. 
Context information facilitates group interaction by allowing each member to be aware of the availability status or the current location of each other group member, which again makes it possible to form groups dynamically, to place virtual post-its in the real world or to determine which people are around. Figure 2. Support for Mobile Groups [6] Most of today's context-aware applications use location and time only, and location is referred to as a crucial type of context information [3]. We also see the importance of location information in mobile and ubiquitous environments, wherefore a main focus of our work is on the utilization of location information and information about users in spatial proximity. Nevertheless, we believe that location, as the only used type of context information, is not sufficient to support group interaction, wherefore we also take advantage of the other three primary types, namely identity, time and activity. This provides a comprehensive description of a user's current situation and thus enabling numerous means for supporting group interaction, which are described in detail in chapter 4.4. When we look at the types of context information stated above, we can see that all of them are single user-centred, taking into account only the context of the user itself. We believe, that for the support of group interaction, the status of the group itself has also be taken into account. Therefore, we have added a fifth contextdimension group-context, which comprises more than the sum of the individual member's contexts. Group context includes any information about the situation of a whole group, for example how many members a group currently has or if a certain group meets right now. 1.3 Context Middleware The Group Interaction Support System (GISS) uses the softwareframework introduced in [1], which serves as a middleware for developing context-sensitive applications. This so-called Context Framework is based on a distributed communication architecture and it supports different kinds of transport protocols and message coding mechanisms. A main feature of the framework is the abstraction of context information retrieval via various sensors and its delivery to a level where no difference appears, for the application designer, between these different kinds of context retrieval mechanisms; the information retrieval is hidden from the application developer. This is achieved by so-called entities, which describe objects--e.g. a human user--that are important for a certain context scenario. Entities express their functionality by the use of so-called attributes, which can be loaded into the entity. These attributes are complex pieces of software, which are implemented as Java classes. Typical attributes are encapsulations of sensors, but they can also be used to implement context services, for example to notify other entities about location changes of users. Each entity can contain a collection of such attributes, where an entity itself is an attribute. The initial set of attributes an entity contains can change dynamically at runtime, if an entity loads or unloads attributes from the local storage or over the network. In order to load and deploy new attributes, an entity has to reference a class loader and a transport and lookup layer, which manages the lookup mechanism for discovering other entities and the transport. XML configuration files specify which initial set of entities should be loaded and which attributes these entities own. 
The communication between entities and attributes is based on context events. Each attribute is able to trigger events, which are addressed to other attributes and entities respectively, independently on which physical computer they are running. Among other things, and event contains the name of the event and a list of parameters delivering information about the event itself. Related with this event-based architecture is the use of ECA (Event-Condition-Action) - rules for defining the behaviour of the context system. Therefore, every entity has a rule-interpreter, which catches triggered events, checks conditions associated with them and causes certain actions. These rules are referenced by the entity's XML configuration. A rule itself is even able to trigger the insertion of new rules or the unloading of existing rules at runtime in order to change the behaviour of the context system dynamically. To sum up, the context framework provides a flexible, distributed architecture for hiding low-level sensor data from high-level applications and it hides external communication details from the application developer. Furthermore, it is able to adapt its behaviour dynamically by loading attributes, entities or ECArules at runtime. 2. ARCHITECTURE OVERVIEW 3. LOCATION SENSING 3.1 Location Model 3.2 Sensors 3.2.1 Location Sensors 3.2.2 Proximity Sensors 3.3 Sensor Fusion 3.3.1 Location Sensor Fusion 3.3.2 Proximity Sensor Fusion 4. CONTEXTSENSITIVE INTERACTION 4.1 Overview 4.2 Instant Messenger Integration 4.3 Sources of Context Information 4.4 Group Interaction Support 4.4.1 Visualisation of Location Information 4.4.2 Forming and Managing Groups 4.4.3 Synchronous Communication for Groups 4.4.4 Asynchronous Communication for Groups 4.4.5 Context-aware Availability Management 4.4.6 Context-Aware Reminders 4.4.7 Context-Aware Recognition and Notification of Group Meetings 5. CONCLUSIONS This paper discussed the potentials of support for group interaction by using context information. First, we introduced the notions of context and context computing and motivated their value for supporting group interaction. An architecture is presented to support context-aware group interaction in mobile, distributed environments. It is built upon a flexible and extensible framework, thus enabling an easy adoption to available context sources (e.g. by adding additional sensors) as well as the required form of representation. We have prototypically developed a set of services, which enhance group interaction by taking into account the current context of the users as well as the context of groups itself. Important features are dynamic formation of groups, visualization of location on a two-dimensional map as well as unobtrusively integrated in an instant-messenger, asynchronous communication by virtual post-its, which are bound to certain locations, and a context-aware availability-management, which adapts the availability-status of a user to his current situation. To provide location information, we have implemented a subsystem for automated acquisition of location - and proximityinformation provided by various sensors, which provides a technology-independent presentation of locations and spatial proximities between users and merges this information using sensor-independent fusion algorithms. A history of locations as well as of spatial proximities is stored in a database, thus enabling context history-based services.
Context Awareness for Group Interaction Support ABSTRACT In this paper, we present an implemented system for supporting group interaction in mobile distributed computing environments. First, an introduction to context computing and a motivation for using contextual information to facilitate group interaction is given. We then present the architecture of our system, which consists of two parts: a subsystem for location sensing that acquires information about the location of users as well as spatial proximities between them, and one for the actual context-aware application, which provides services for group interaction. 1. INTRODUCTION Today's computing environments are characterized by an increasing number of powerful, wirelessly connected mobile devices. Users can move throughout an environment while carrying their computers with them and having remote access to information and services, anytime and anywhere. This development leads to a new class of applications, which are aware of the context in which they run in and thus bringing virtual and real worlds together. Motivated by this and the fact, that only a few studies have been done for supporting group communication in such computing environments [12], we have developed a system, which we refer to as Group Interaction Support System (GISS). It supports group interaction in mobile distributed computing environments in a way that group members need not to at the same place any longer in order to interact with each other or just to be aware of the others situation. In the following subchapters, we will give a short overview on context aware computing and motivate its benefits for supporting group interaction. A software framework for developing contextsensitive applications is presented, which serves as middleware for GISS. Chapter 2 presents the architecture of GISS, and chapter 3 and 4 discuss the location sensing and group interaction concepts of GISS in more detail. 1.1 What is Context Computing? According to Merriam-Webster's Online Dictionary1, context is defined as the "interrelated conditions in which something exists or occurs". Because this definition is very general, many approaches have been made to define the notion of context with respect to computing environments. Most definitions of context are done by enumerating examples or by choosing synonyms for context. The term context-aware has been introduced first in [10] where context is referred to as location, identities of nearby people and objects, and changes to those objects. Here we conform to a widely accepted and more formal definition, which defines context as "any information than can be used to characterize the situation of an entity. An entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and applications themselves". [4] [4] identifies four primary types of context information (sometimes referred to as context dimensions), that are--with respect to characterizing the situation of an entity--more important than others. These are location, identity, time and activity, which can also be used to derive other sources of contextual information (secondary context types). According to this definition, [4] defines a system to be contextaware "if it uses context to provide relevant information and/or services to the user, where relevancy depends on the user's task". 
[4] also gives a classification of features for context-aware applications, which comprises presentation of information and services to a user, automatic execution of a service and tagging of context to information for later retrieval. Figure 1. Layers of a context-aware system Context computing is based on two major issues, namely identifying relevant context (identity, location, time, activity) and using obtained context (automatic execution, presentation, tagging). In order to do this, there are a few layers between (see Figure 1). First, the obtained low-level context information has to be transformed, aggregated and interpreted (context transformation) and represented in an abstract context world model (context representation), either centralized or decentralized. Finally, the stored context information is used to trigger certain context events (context triggering). [7] 1.2 Group Interaction in Context After these abstract and formal definitions about what context and context computing is, we will now focus on the main goal of this work, namely how the interaction of mobile group members can be supported by using context information. In [6] we have identified organizational systems to be crucial for supporting mobile groups (see Figure 2). First, there has to be an Information and Knowledge Management System, which is capable of supporting a team with its information processing - and knowledge gathering needs. It does this by communicating work context, agenda and workspace information to the users. The Interaction Systems provide support for the communication among team members, either synchronous or asynchronous, and for the shared access to artefacts, such as documents. Mobility Systems deploy mechanisms to enable any-place access to team memory as well as the capturing and delivery of awareness information from and to any places. With respect to these five aspects of team support, we focus on interaction and partly cover mobility - and awareness-support. Group interaction includes all means that enable group members to communicate freely with all the other members. At this point, the question how context information can be used for supporting group interaction comes up. We believe that information about the current situation of a person provides a surplus value to existing group interaction systems. Context information facilitates group interaction by allowing each member to be aware of the availability status or the current location of each other group member, which again makes it possible to form groups dynamically, to place virtual post-its in the real world or to determine which people are around. Figure 2. Support for Mobile Groups [6] Most of today's context-aware applications use location and time only, and location is referred to as a crucial type of context information [3]. We also see the importance of location information in mobile and ubiquitous environments, wherefore a main focus of our work is on the utilization of location information and information about users in spatial proximity. Nevertheless, we believe that location, as the only used type of context information, is not sufficient to support group interaction, wherefore we also take advantage of the other three primary types, namely identity, time and activity. This provides a comprehensive description of a user's current situation and thus enabling numerous means for supporting group interaction, which are described in detail in chapter 4.4. 
When we look at the types of context information stated above, we can see that all of them are single user-centred, taking into account only the context of the user itself. We believe, that for the support of group interaction, the status of the group itself has also be taken into account. Therefore, we have added a fifth contextdimension group-context, which comprises more than the sum of the individual member's contexts. Group context includes any information about the situation of a whole group, for example how many members a group currently has or if a certain group meets right now. 1.3 Context Middleware The Group Interaction Support System (GISS) uses the softwareframework introduced in [1], which serves as a middleware for developing context-sensitive applications. This so-called Context Framework is based on a distributed communication architecture and it supports different kinds of transport protocols and message coding mechanisms. This is achieved by so-called entities, which describe objects--e.g. a human user--that are important for a certain context scenario. Entities express their functionality by the use of so-called attributes, which can be loaded into the entity. These attributes are complex pieces of software, which are implemented as Java classes. Typical attributes are encapsulations of sensors, but they can also be used to implement context services, for example to notify other entities about location changes of users. Each entity can contain a collection of such attributes, where an entity itself is an attribute. The initial set of attributes an entity contains can change dynamically at runtime, if an entity loads or unloads attributes from the local storage or over the network. XML configuration files specify which initial set of entities should be loaded and which attributes these entities own. The communication between entities and attributes is based on context events. Each attribute is able to trigger events, which are addressed to other attributes and entities respectively, independently on which physical computer they are running. Among other things, and event contains the name of the event and a list of parameters delivering information about the event itself. Related with this event-based architecture is the use of ECA (Event-Condition-Action) - rules for defining the behaviour of the context system. These rules are referenced by the entity's XML configuration. A rule itself is even able to trigger the insertion of new rules or the unloading of existing rules at runtime in order to change the behaviour of the context system dynamically. To sum up, the context framework provides a flexible, distributed architecture for hiding low-level sensor data from high-level applications and it hides external communication details from the application developer. Furthermore, it is able to adapt its behaviour dynamically by loading attributes, entities or ECArules at runtime. 5. CONCLUSIONS This paper discussed the potentials of support for group interaction by using context information. First, we introduced the notions of context and context computing and motivated their value for supporting group interaction. An architecture is presented to support context-aware group interaction in mobile, distributed environments. We have prototypically developed a set of services, which enhance group interaction by taking into account the current context of the users as well as the context of groups itself. 
A history of locations as well as of spatial proximities is stored in a database, thus enabling context history-based services.
J-71
A Dynamic Pari-Mutuel Market for Hedging, Wagering, and Information Aggregation
I develop a new mechanism for risk allocation and information speculation called a dynamic pari-mutuel market (DPM). A DPM acts as a hybrid between a pari-mutuel market and a continuous double auction (CDA), inheriting some of the advantages of both. Like a pari-mutuel market, a DPM offers infinite buy-in liquidity and zero risk for the market institution; like a CDA, a DPM can continuously react to new information, dynamically incorporate information into prices, and allow traders to lock in gains or limit losses by selling prior to event resolution. The trader interface can be designed to mimic the familiar double auction format with bid-ask queues, though with an additional variable called the payoff per share. The DPM price function can be viewed as an automated market maker always offering to sell at some price, and moving the price appropriately according to demand. Since the mechanism is pari-mutuel (i.e., redistributive), it is guaranteed to pay out exactly the amount of money taken in. I explore a number of variations on the basic DPM, analyzing the properties of each, and solving in closed form for their respective price functions.
[ "dynam pari-mutuel market", "pari-mutuel market", "hedg", "wager", "inform aggreg", "risk alloc", "inform specul", "specul", "dpm", "hybrid", "continu doubl auction", "cda", "zero risk", "market institut", "price", "gain", "loss", "sell", "event resolut", "trader interfac", "doubl auction format", "bid-ask queue", "payoff per share", "price function", "autom market maker", "autom market maker", "demand", "infinit bui-in liquid", "compound secur market", "combinatori bet", "trade", "bet", "gambl" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "M", "M", "U", "U", "U", "U" ]
A Dynamic Pari-Mutuel Market for Hedging, Wagering, and Information Aggregation David M. Pennock Yahoo! Research Labs 74 N. Pasadena Ave, 3rd Floor Pasadena, CA 91103 USA pennockd@yahoo-inc.com ABSTRACT I develop a new mechanism for risk allocation and information speculation called a dynamic pari-mutuel market (DPM). A DPM acts as a hybrid between a pari-mutuel market and a continuous double auction (CDA), inheriting some of the advantages of both. Like a pari-mutuel market, a DPM offers infinite buy-in liquidity and zero risk for the market institution; like a CDA, a DPM can continuously react to new information, dynamically incorporate information into prices, and allow traders to lock in gains or limit losses by selling prior to event resolution. The trader interface can be designed to mimic the familiar double auction format with bid-ask queues, though with an additional variable called the payoff per share. The DPM price function can be viewed as an automated market maker always offering to sell at some price, and moving the price appropriately according to demand. Since the mechanism is pari-mutuel (i.e., redistributive), it is guaranteed to pay out exactly the amount of money taken in. I explore a number of variations on the basic DPM, analyzing the properties of each, and solving in closed form for their respective price functions. Categories and Subject Descriptors J.4 [Computer Applications]: Social and Behavioral Sciences-Economics General Terms Algorithms, Design, Economics, Theory. 1. INTRODUCTION A wide variety of financial and wagering mechanisms have been developed to support hedging (i.e., insuring) against exposure to uncertain events and/or speculative trading on uncertain events. The dominant mechanism used in financial circles is the continuous double auction (CDA), or in some cases the CDA with market maker (CDAwMM). The primary mechanism used for sports wagering is a bookie or bookmaker, who essentially acts exactly as a market maker. Horse racing and jai alai wagering traditionally employ the pari-mutuel mechanism. Though there is no formal or logical separation between financial trading and wagering, the two endeavors are socially considered distinct. Recently, there has been a move to employ CDAs or CDAwMMs for all types of wagering, including on sports, horse racing, political events, world news, and many other uncertain events, and a simultaneous and opposite trend to use bookie systems for betting on financial markets. These trends highlight the interchangeable nature of the mechanisms and further blur the line between investing and betting. Some companies at the forefront of these movements are growing exponentially, with some industry observers declaring the onset of a revolution in the wagering business (see http://www.wired.com/news/ebiz/0,1272,61051,00.html). Each mechanism has pros and cons for the market institution and the participating traders. A CDA only matches willing traders, and so poses no risk whatsoever for the market institution. But a CDA can suffer from illiquidity in the form of huge bid-ask spreads or even empty bid-ask queues if trading is light and thus markets are thin. A successful CDA must overcome a chicken-and-egg problem: traders are attracted to liquid markets, but liquid markets require a large number of traders. A CDAwMM and the similar bookie mechanism have built-in liquidity, but at a cost: the market maker itself, usually affiliated with the market institution, is exposed to significant risk of large monetary losses.
Both the CDA and CDAwMM offer incentives for traders to leverage information continuously as soon as that information becomes available. As a result, prices are known to capture the current state of information exceptionally well. Pari-mutuel markets effectively have infinite liquidity: anyone can place a bet on any outcome at any time, without the need for a matching offer from another bettor or a market maker. Pari-mutuel markets also involve no risk for the market institution, since they only redistribute money from losing wagers to winning wagers. However, pari-mutuel markets are not suitable for situations where information arrives over time, since there is a strong disincentive for placing bets until either (1) all information is revealed, or (2) the market is about to close. For this reason, pari-mutuel prices prior to the market's close cannot be considered a reflection of current information. Pari-mutuel market participants cannot buy low and sell high: they cannot cash out gains (or limit losses) before the event outcome is revealed. Because the process whereby information arrives continuously over time is the rule rather than the exception, the applicability of the standard pari-mutuel mechanism is questionable in a large number of settings. In this paper, I develop a new mechanism suitable for hedging, speculating, and wagering, called a dynamic pari-mutuel market (DPM). A DPM can be thought of as a hybrid between a pari-mutuel market and a CDA. A DPM is indeed pari-mutuel in nature, meaning that it acts only to redistribute money from some traders to others, and so exposes the market institution to no volatility (no risk). A constant, pre-determined subsidy is required to start the market. The subsidy can in principle be arbitrarily small and might conceivably come from traders (via antes or transaction fees) rather than the market institution, though a nontrivial outside subsidy may actually encourage trading and information aggregation. A DPM has the infinite liquidity of a pari-mutuel market: traders can always purchase shares in any outcome at any time, at some price automatically set by the market institution. A DPM is also able to react to and incorporate information arriving over time, like a CDA. The market institution changes the price for particular outcomes based on the current state of wagering. If a particular outcome receives a relatively large number of wagers, its price increases; if an outcome receives relatively few wagers, its price decreases. Prices are computed automatically using a price function, which can differ depending on what properties are desired. The price function determines the instantaneous price per share for an infinitesimal quantity of shares; the total cost for purchasing n shares is computed as the integral of the price function from 0 to n. The complexity of the price function can be hidden from traders by communicating only the ask prices for various lots of shares (e.g., lots of 100 shares), as is common practice in CDAs and CDAwMMs. DPM prices do reflect current information, and traders can cash out in an aftermarket to lock in gains or limit losses before the event outcome is revealed. While there is always a market maker willing to accept buy orders, there is not a market maker accepting sell orders, and thus no guaranteed liquidity for selling: instead, selling is accomplished via a standard CDA mechanism.
Traders can always hedge-sell by purchasing the opposite outcome to the one they already own. 2. BACKGROUND AND RELATED WORK 2.1 Pari-mutuel markets Pari-mutuel markets are common at horse races [1, 22, 24, 25, 26], dog races, and jai alai games. In a pari-mutuel market people place wagers on which of two or more mutually exclusive and exhaustive outcomes will occur at some time in the future. After the true outcome becomes known, all of the money that is lost by those who bet on the incorrect outcome is redistributed to those who bet on the correct outcome, in direct proportion to the amount they wagered. More formally, if there are k mutually exclusive and exhaustive outcomes (e.g., k horses, exactly one of which will win), and M1, M2, ... , Mk dollars are bet on each outcome, and outcome i occurs, then everyone who bet on an outcome $j \neq i$ loses their wager, while everyone who bet on outcome i receives $\sum_{j=1}^{k} M_j / M_i$ dollars for every dollar they wagered. That is, every dollar wagered on i receives an equal share of all money wagered. An equivalent way to think about the redistribution rule is that every dollar wagered on i is refunded, then receives an equal share of all remaining money bet on the losing outcomes, or $\sum_{j \neq i} M_j / M_i$ dollars. In practice, the market institution (e.g., the racetrack) first takes a certain percent of the total amount wagered, usually about 20% in the United States, then redistributes whatever money remains to the winners in proportion to their amount bet. Consider a simple example with two outcomes, A and B. The outcomes are mutually exclusive and exhaustive, meaning that Pr(A ∧ B) = 0 and Pr(A) + Pr(B) = 1. Suppose $800 is bet on A and $200 on B. Now suppose that A occurs (e.g., horse A wins the race). People who wagered on B lose their money, or $200 in total. People who wagered on A win and each receives a proportional share of the total $1000 wagered (ignoring fees). Specifically, each $1 wager on A entitles its owner to a 1/800 share of the $1000, or $1.25. Every dollar bet in a pari-mutuel market has an equal payoff, regardless of when the wager was placed or how much money was invested in the various outcomes at the time the wager was placed. The only state that matters is the final state: the final amounts wagered on all the outcomes when the market closes, and the identity of the correct outcome. As a result, there is a disincentive to place a wager early if there is any chance that new information might become available. Moreover, there are no guarantees about the payoff rate of a particular bet, except that it will be nonnegative if the correct outcome is chosen. Payoff rates can fluctuate arbitrarily until the market closes. So a second reason not to bet early is to wait to get a better sense of the final payout rates. This is in contrast to CDAs and CDAwMMs, like the stock market, where incentives exist to invest as soon as new information is revealed. Pari-mutuel bettors may be allowed to switch their chosen outcome, or even cancel their bet, prior to the market's close. However, they cannot cash out of the market early, to either lock in gains or limit losses, if new information favors one outcome over another, as is possible in a CDA or a CDAwMM. If bettors can cancel or change their bets, then an aftermarket to sell existing wagers is not sensible: every dollar wagered is worth exactly $1 up until the market's close, since no one would buy at greater than $1 and no one would sell at less than $1.
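Written out explicitly, this is just a restatement of the redistribution rule and the numeric example above (no new assumptions), with $M_j$ denoting the amount wagered on outcome j:

  \[
    \text{payoff per dollar wagered on the winning outcome } i
    \;=\; \frac{\sum_{j=1}^{k} M_j}{M_i}
    \;=\; 1 + \frac{\sum_{j \neq i} M_j}{M_i},
  \]
  \[
    \text{e.g. } M_A = 800,\ M_B = 200,\ A \text{ occurs:}\qquad \frac{800 + 200}{800} \;=\; 1.25 .
  \]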
Pari-mutuel bettors must wait until the outcome is revealed to realize any profit or loss. Unlike a CDA, in a pari-mutuel market anyone can place a wager of any amount at any time: there is, in a sense, infinite liquidity for buying. A CDAwMM also has built-in liquidity, but at the cost of significant risk for the market maker. In a pari-mutuel market, since money is only redistributed among bettors, the market institution itself has no risk. The main drawback of a pari-mutuel market is that it is useful only for capturing the value of an uncertain asset at some instant in time. It is ill-suited for situations where information arrives over time, continuously updating the estimated value of the asset, situations common in almost all trading and wagering scenarios. There is no notion of buying low and selling high, as occurs in a CDA, where buying when few others are buying (and the price is low) is rewarded more than buying when many others are buying (and the price is high). Perhaps for this reason, in most dynamic environments, financial mechanisms like the CDA that can react in real-time to changing information are more typically employed to facilitate speculating and hedging. Since a pari-mutuel market can estimate the value of an asset at a single instant in time, a repeated pari-mutuel market, where distinct pari-mutuel markets are run at consecutive intervals, could in principle capture changing information dynamics. But running multiple consecutive markets would likely thin out trading in each individual market. Also, in each individual pari-mutuel market, the incentives would still be to wait to bet until just before the ending time of that particular market. This last problem might be mitigated by instituting a random stopping rule for each individual pari-mutuel market. In laboratory experiments, pari-mutuel markets have shown a remarkable ability to aggregate and disseminate information dispersed among traders, at least for a single snapshot in time [17]. A similar ability has been recognized at real racetracks [1, 22, 24, 25, 26]. 2.2 Financial markets In the financial world, wagering on the outcomes of uncertain future propositions is also common. The typical market mechanism used is the continuous double auction (CDA). The term securities market in economics and finance generically encompasses a number of markets where speculating on uncertain events is possible. Examples include stock markets like NASDAQ, options markets like the CBOE [13], futures markets like the CME [21], other derivatives markets, insurance markets, political stock markets [6, 7], idea futures markets [12], decision markets [10] and even market games [3, 15, 16]. Securities markets generally have an economic and social value beyond facilitating speculation or wagering: they allow traders to hedge risk, or to insure against undesirable outcomes. So if a particular outcome has disutility for a trader, he or she can mitigate the risk by wagering for the outcome, to arrange for compensation in case the outcome occurs. In this sense, buying automobile insurance is effectively a bet that an accident or other covered event will occur. Similarly, buying a put option, which is useful as a hedge for a stockholder, is a bet that the underlying stock will go down. In practice, agents engage in a mixture of hedging and speculating, and there is no clear dividing line between the two [14].
Like pari-mutuel markets, prices in financial markets are often excellent information aggregators, yielding very accurate forecasts of future events [5, 18, 19]. A CDA constantly matches orders to buy an asset with orders to sell. If at any time one party is willing to buy one unit of the asset at a bid price of $p_{bid}$, while another party is willing to sell one unit of the asset at an ask price of $p_{ask}$, and $p_{bid}$ is greater than or equal to $p_{ask}$, then the two parties transact (at some price between $p_{bid}$ and $p_{ask}$). If the highest bid price is less than the lowest ask price, then no transactions occur. In a CDA, the bid and ask prices rapidly change as new information arrives and traders reassess the value of the asset. Since the auctioneer only matches willing bidders, the auctioneer takes on no risk. However, buyers can only buy as many shares as sellers are willing to sell; for any transaction to occur, there must be a counterparty on the other side willing to accept the trade. As a result, when few traders participate in a CDA, it may become illiquid, meaning that not much trading activity occurs. The spread between the highest bid price and the lowest ask price may be very large, or one or both queues may be completely empty, discouraging trading. (Thin markets do occur often in practice, and can be seen in a variety of the less popular markets available on http://TradeSports.com, or in some financial options markets, for example.) One way to induce liquidity is to provide a market maker who is willing to accept a large number of buy and sell orders at particular prices. We call this mechanism a CDA with market maker (CDAwMM); a very clear example of a CDAwMM is the interactive betting market on http://WSEX.com. Conceptually, the market maker is just like any other trader, but typically is willing to accept a much larger volume of trades. The market maker may be a person, or may be an automated algorithm. Adding a market maker to the system increases liquidity, but exposes the market maker to risk. Now, instead of only matching trades, the system actually takes on risk of its own, and depending on what happens in the future, may lose considerable amounts of money. 2.3 Wagering markets The typical Las Vegas bookmaker or oddsmaker functions much like a market maker in a CDA. In this case, the market institution (the book or house) sets the odds (or, alternatively, sets the game line in order to provide even-money odds), initially according to expert opinion, and later in response to the relative level of betting on the various outcomes. Unlike in a pari-mutuel environment, whenever a wager is placed with a bookmaker, the odds or terms for that bet are fixed at the time of the bet. The bookmaker profits by offering different odds for the two sides of the bet, essentially defining a bid-ask spread. While odds may change in response to changing information, any bets made at previously set odds remain in effect according to the odds at the time of the bet; this is precisely analogous to a CDAwMM. One difference between a bookmaker and a market maker is that the former usually operates in a take it or leave it mode: bettors cannot place their own limit orders on a common queue, they can in effect only place market orders at prices defined by the bookmaker. Still, the bookmaker certainly reacts to bettor demand. Like a market maker, the bookmaker exposes itself to significant risk. Sports betting markets have also been shown to provide high quality aggregate forecasts [4, 9, 23]. 2.4 Market scoring rule Hanson's [11] market scoring rule (MSR) is a new mechanism for hedging and speculating that shares some properties in common with a DPM. Like a DPM, an MSR can be conceptualized as an automated market maker always willing to accept a trade on any event at some price. An MSR requires a patron to subsidize the market.
2.3 Wagering markets
The typical Las Vegas bookmaker or oddsmaker functions much like a market maker in a CDA. In this case, the market institution (the book or house) sets the odds (or, alternatively, the game line, in order to provide even-money odds), initially according to expert opinion, and later in response to the relative level of betting on the various outcomes. Unlike in a pari-mutuel environment, whenever a wager is placed with a bookmaker, the odds or terms for that bet are fixed at the time of the bet. The bookmaker profits by offering different odds for the two sides of the bet, essentially defining a bid-ask spread. While odds may change in response to changing information, any bets made at previously set odds remain in effect according to the odds at the time of the bet; this is precisely in analogy to a CDAwMM. One difference between a bookmaker and a market maker is that the former usually operates in a take it or leave it mode: bettors cannot place their own limit orders on a common queue, they can in effect only place market orders at prices defined by the bookmaker. Still, the bookmaker certainly reacts to bettor demand. Like a market maker, the bookmaker exposes itself to significant risk. Sports betting markets have also been shown to provide high quality aggregate forecasts [4, 9, 23].

2.4 Market scoring rule
Hanson's [11] market scoring rule (MSR) is a new mechanism for hedging and speculating that shares some properties in common with a DPM. Like a DPM, an MSR can be conceptualized as an automated market maker always willing to accept a trade on any event at some price. An MSR requires a patron to subsidize the market. The patron's final loss is variable, and thus technically implies a degree of risk, though the maximum loss is bounded. An MSR maintains a probability distribution over all events. At any time any trader who believes the probabilities are wrong can change any part of the distribution by accepting a lottery ticket that pays off according to a scoring rule (e.g., the logarithmic scoring rule) [27], as long as that trader also agrees to pay off the most recent person to change the distribution. In the limit of a single trader, the mechanism behaves like a scoring rule, suitable for polling a single agent for its probability distribution. In the limit of many traders, it produces a combined estimate. Since the market essentially always has a complete set of posted prices for all possible outcomes, the mechanism avoids the problem of thin markets or illiquidity. An MSR is not pari-mutuel in nature, as the patron in general injects a variable amount of money into the system. An MSR provides a two-sided automated market maker, while a DPM provides a one-sided automated market maker. In an MSR, the vector of payoffs across outcomes is fixed at the time of the trade, while in a DPM, the vector of payoffs across outcomes depends both on the state of wagering at the time of the trade and the state of wagering at the market's close. While the mechanisms are quite different (and so trader acceptance and incentives may strongly differ), the properties and motivations of DPMs and MSRs are quite similar. Hanson shows how MSRs are especially well suited for allowing bets on a combinatorial number of outcomes. The patron's payment for subsidizing trading on all 2^n possible combinations of n events is no larger than the sum of subsidizing the n event marginals independently. The mechanism was planned for use in the Policy Analysis Market (PAM), a futures market in Middle East related outcomes and funded by DARPA [20], until a media firestorm killed the project (see http://hanson.gmu.edu/policyanalysismarket.html for more information, or http://dpennock.com/pam.html for commentary). As of this writing, the founders of PAM were considering reopening under private control (http://www.policyanalysismarket.com/).

3. A DYNAMIC PARI-MUTUEL MARKET

3.1 High-level description
In contrast to a standard pari-mutuel market, where each dollar always buys an equal share of the payoff, in a DPM each dollar buys a variable share in the payoff depending on the state of wagering at the time of purchase. So a wager on A at a time when most others are wagering on B offers a greater possible profit than a wager on A when most others are also wagering on A. A natural way to communicate the changing payoff of a bet is to say that, at any given time, a certain amount of money will buy a certain number of shares in one outcome or the other. Purchasing a share entitles its owner to an equal stake in the winning pot should the chosen outcome occur. The payoff is variable, because when few people are betting on an outcome, shares will generally be cheaper than at a time when many people are betting on that outcome. There is no pre-determined limit on the number of shares: new shares can be continually generated as trading proceeds.
For simplicity, all analyses in this paper consider the binary outcome case; generalizing to multiple discrete outcomes should be straightforward. Denote the two outcomes A and B. The outcomes are mutually exclusive and exhaustive. Denote the instantaneous price per share of A as p1 and the price per share of B as p2. Denote the payoffs per share as P1 and P2, respectively. These four numbers, p1, p2, P1, P2, are the key numbers that traders must track and understand. Note that the price is set at the time of the wager; the payoff per share is finalized only after the event outcome is revealed. At any time, a trader can purchase an infinitesimal quantity of shares of A at price p1 (and similarly for B). However, since the price changes continuously as shares are purchased, the cost of buying n shares is computed as the integral of a price function from 0 to n. The use of continuous functions and integrals can be hidden from traders by aggregating the automated market maker's sell orders into discrete lots of, say, 100 shares each. These ask orders can be automatically entered into the system by the market institution, so that traders interact with what looks like a more familiar CDA; we examine this interface issue in more detail below in Section 4.2.

For our analysis, we introduce the following additional notation. Denote M1 as the total amount of money wagered on A, M2 as the total amount of money wagered on B, T = M1 + M2 as the total amount of money wagered on both sides, N1 as the total number of shares purchased of A, and N2 as the total number of shares purchased of B. There are many ways to formulate the price function. Several natural price functions are outlined below; each is motivated as the unique solution to a particular constraint on price dynamics.
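A small sketch of the relationship just described: for any instantaneous price function p(q), the cost of buying n shares is the integral of p from 0 to n. The midpoint-rule approximation and the example price function here are illustrative assumptions only; Sections 4 and 5 derive closed forms for specific DPM price functions.

```python
# Sketch: cost of n shares = integral of the instantaneous price from 0 to n,
# approximated numerically with the midpoint rule.
def cost_of_shares(price_fn, n, steps=100000):
    dq = n / steps
    return sum(price_fn((i + 0.5) * dq) for i in range(steps)) * dq

# hypothetical price that rises linearly in the number of shares purchased
print(round(cost_of_shares(lambda q: 0.40 + 0.001 * q, 100), 2))  # ~45.00
```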
3.2 Advantages and disadvantages
To my knowledge, a DPM is the only known mechanism for hedging and speculating that exhibits all three of the following properties: (1) guaranteed liquidity, (2) no risk for the market institution, and (3) continuous incorporation of information. A standard pari-mutuel fails (3). A CDA fails (1). A CDAwMM, the bookmaker mechanism, and an MSR all fail (2). Even though technically an MSR exposes its patron to risk (i.e., a variable future payoff), the patron's maximum loss is bounded, so the distinction between a DPM and an MSR in terms of these three properties is more technical than practical. DPM traders can cash out of the market early, just like stock market traders, to lock in a profit or limit a loss, an action that is simply not possible in a standard pari-mutuel.

A DPM also has some drawbacks. The payoff for a wager depends both on the price at the time of the trade, and on the final payoff per share at the market's close. This contrasts with the CDA variants, where the payoff vector across possible future outcomes is fixed at the time of the trade. So a trader's strategic optimization problem is complicated by the need to predict the final values of P1 and P2. If P changes according to a random walk, then traders can take the current P as an unbiased estimate of the final P, greatly decreasing the complexity of their optimization. If P does not change according to a random walk, the mechanism still has utility as a mechanism for hedging and speculating, though optimization may be difficult, and determining a measure of the market's aggregate opinion of the probabilities of A and B may be difficult. We discuss the implications of random walk behavior further below in Section 4.1 in the discussion surrounding Assumption (3).

A second drawback of a DPM is its one-sided nature. While an automated market maker always stands ready to accept buy orders, there is no corresponding market maker to accept sell orders. Traders must sell to each other using a standard CDA mechanism, for example by posting an ask order at a price at or below the market maker's current ask price. Traders can also always hedge-sell by purchasing shares in the opposite outcome from the market maker, thereby hedging their bet if not fully liquidating it.

3.3 Redistribution rule
In a standard pari-mutuel market, payoffs can be computed in either of two equivalent ways: (1) each winning $1 wager receives a refund of the initial $1 paid, plus an equal share of all losing wagers, or (2) each winning $1 wager receives an equal share of all wagers, winning or losing. Because each dollar always earns an equal share of the payoff, the two formulations are precisely the same:

$1 + M_lose/M_win = (M_win + M_lose)/M_win.

In a dynamic pari-mutuel market, because each dollar is not equally weighted, the two formulations are distinct, and lead to significantly different price functions and mechanisms, each with different potentially desirable properties. We consider each case in turn. The next section analyzes case (1), where only losing money is redistributed. Section 5 examines case (2), where all money is redistributed.

4. DPM I: LOSING MONEY REDISTRIBUTED
For the case where the initial payments on winning bets are refunded, and only losing money is redistributed, the respective payoffs per share are simply:

P1 = M2/N1    P2 = M1/N2.

So, if A occurs, shareholders of A receive all of their initial payment back, plus P1 dollars per share owned, while shareholders of B lose all money wagered. Similarly, if B occurs, shareholders of B receive all of their initial payment back, plus P2 dollars per share owned, while shareholders of A lose all money wagered. Without loss of generality, I will analyze the market from the perspective of A, deriving prices and payoffs for A only. The equations for B are symmetric. The trader's per-share expected value for purchasing an infinitesimal quantity ε of shares of A is

E[ε shares]/ε = Pr(A) · E[P1 | A] − (1 − Pr(A)) · p1
             = Pr(A) · E[M2/N1 | A] − (1 − Pr(A)) · p1,

where ε is an infinitesimal quantity of shares of A, Pr(A) is the trader's belief in the probability of A, and p1 is the instantaneous price per share of A for an infinitesimal quantity of shares. E[P1 | A] is the trader's expectation of the payoff per share of A after the market closes and given that A occurs. This is a subtle point. The value of P1 does not matter if B occurs, since in this case shares of A are worthless, and the current value of P1 does not necessarily matter as this may change as trading continues. So, in order to determine the expected value of shares of A, the trader must estimate what he or she expects the payoff per share to be in the end (after the market closes) if A occurs.
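A sketch of the DPM I quantities just defined (losing money redistributed). The market state and the trader's belief below are hypothetical, and the current payoff per share is plugged in for E[P1|A], leaning on assumption (3) of Section 4.1 rather than on a genuine forecast of the final payoff.

```python
# DPM I (losing money redistributed): payoffs per share and the per-share
# expected value of an infinitesimal purchase of A.
def dpm1_payoffs(M1, M2, N1, N2):
    return M2 / N1, M1 / N2                      # P1, P2

def per_share_ev_A(pr_A, p1, exp_P1_given_A):
    # E[eps shares]/eps = Pr(A) * E[P1|A] - (1 - Pr(A)) * p1
    return pr_A * exp_P1_given_A - (1 - pr_A) * p1

M1, M2, N1, N2 = 600.0, 400.0, 1000.0, 800.0     # hypothetical market state
P1, P2 = dpm1_payoffs(M1, M2, N1, N2)            # 0.40, 0.75
print(per_share_ev_A(0.6, 0.50, P1))             # 0.6*0.40 - 0.4*0.50 = 0.04
```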
If E[ε shares]/ε > 0, a risk-neutral trader should purchase shares of A. How many shares? This depends on the price function determining p1. In general, p1 increases as more shares are purchased. The risk-neutral trader should continue purchasing shares until E[ε shares]/ε = 0. (A risk-averse trader will generally stop purchasing shares before driving E[ε shares]/ε all the way to zero.) Assuming risk-neutrality, the trader's optimization problem is to choose a number of shares n ≥ 0 of A to purchase, in order to maximize

E[n shares] = Pr(A) · n · E[P1 | A] − (1 − Pr(A)) · ∫_0^n p1(x) dx.   (1)

It's easy to see that the same value of n can be solved for by finding the number of shares required to drive E[ε shares]/ε to zero. That is, find n ≥ 0 satisfying 0 = Pr(A) · E[P1 | A] − (1 − Pr(A)) · p1(n), if such an n exists; otherwise n = 0.

4.1 Market probability
As traders who believe that E[ε shares of A]/ε > 0 purchase shares of A and traders who believe that E[ε shares of B]/ε > 0 purchase shares of B, the prices p1 and p2 change according to a price function, as prescribed below. The current prices in a sense reflect the market's opinion as a whole of the relative probabilities of A and B. Assuming an efficient marketplace, the market as a whole considers E[ε shares]/ε = 0, since the mechanism is a zero-sum game. For example, if market participants in aggregate felt that E[ε shares]/ε > 0, then there would be net demand for A, driving up the price of A until E[ε shares]/ε = 0. Define MPr(A) to be the market probability of A, or the probability of A inferred by assuming that E[ε shares]/ε = 0. We can consider MPr(A) to be the aggregate probability of A as judged by the market as a whole. MPr(A) is the solution to

0 = MPr(A) · E[P1 | A] − (1 − MPr(A)) · p1.

Solving we get

MPr(A) = p1 / (p1 + E[P1 | A]).   (2)

At this point we make a critical assumption in order to greatly simplify the analysis; we assume that

E[P1 | A] = P1.   (3)

That is, we assume that the current value for the payoff per share of A is the same as the expected final value of the payoff per share of A given that A occurs. This is certainly true for the last (infinitesimal) wager before the market closes. It's not obvious, however, that the assumption is true well before the market's close. Basically, we are assuming that the value of P1 moves according to an unbiased random walk: the current value of P1 is the best expectation of its future value. I conjecture that there are reasonable market efficiency conditions under which assumption (3) is true, though I have not been able to prove that it arises naturally from rational trading. We examine scenarios below in which assumption (3) seems especially plausible. Nonetheless, the assumption affects our analysis only. Regardless of whether (3) is true, each price function derived below implies a well-defined zero-sum game in which traders can play. If traders can assume that (3) is true, then their optimization problem (1) is greatly simplified; however, optimizing (1) does not depend on the assumption, and traders can still optimize by strategically projecting the final expected payoff in whatever complicated way they desire. So, the utility of DPM for hedging and speculating does not necessarily hinge on the truth of assumption (3). On the other hand, the ability to easily infer an aggregate market consensus probability from market prices does depend on (3).
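A one-line sketch of equation (2): the probability implied by current prices, using assumption (3) to substitute the current payoff per share P1 for E[P1|A]. The numbers are hypothetical.

```python
# Market probability under DPM I, equation (2) with assumption (3).
def market_prob_A(p1, P1):
    return p1 / (p1 + P1)

print(round(market_prob_A(0.50, 0.40), 3))   # 0.5 / 0.9 = 0.556
```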
4.2 Price functions
A variety of price functions seem reasonable, each exhibiting various properties, and implying differing market probabilities.

4.2.1 Price function I: Price of A equals payoff of B
One natural price function to consider is to set the price per share of A equal to the payoff per share of B, and set the price per share of B equal to the payoff per share of A. That is,

p1 = P2    p2 = P1.   (4)

Enforcing this relationship reduces the dimensionality of the system from four to two, simplifying the interface: traders need only track two numbers instead of four. The relationship makes sense, since new information supporting A should encourage purchasing of shares of A, driving up both the price of A and the payoff of B, and driving down the price of B and the payoff of A. In this setting, assumption (3) seems especially reasonable, since if an efficient market hypothesis leads prices to follow a random walk, then payoffs must also follow a random walk. The constraints (4) lead to the following derivation of the market probability:

MPr(A) · P1 = MPr(B) · p1
MPr(A) · P1 = MPr(B) · P2
MPr(A)/MPr(B) = P2/P1
MPr(A)/MPr(B) = (M1/N2) / (M2/N1)
MPr(A)/MPr(B) = M1N1 / (M2N2)
MPr(A) = M1N1 / (M1N1 + M2N2)   (5)

The constraints (4) specify the instantaneous relationship between payoff and price. From this, we can derive how prices change when (non-infinitesimal) shares are purchased. Let n be the number of shares purchased and let m be the amount of money spent purchasing n shares. Note that p1 = dm/dn, the instantaneous price per share, and m = ∫_0^n p1(x) dx. Substituting into equation (4), we get:

p1 = P2
dm/dn = (M1 + m)/N2
dm/(M1 + m) = dn/N2
∫ dm/(M1 + m) = ∫ dn/N2
ln(M1 + m) = n/N2 + C
m = M1 [e^(n/N2) − 1]   (6)

Equation (6) gives the cost of purchasing n shares. The instantaneous price per share as a function of n is

p1(n) = dm/dn = (M1/N2) e^(n/N2).   (7)

Note that p1(0) = M1/N2 = P2 as required. The derivation of the price function p2(n) for B is analogous and the results are symmetric.

The notion of buying infinitesimal shares, or integrating costs over a continuous function, is probably foreign to most traders. A more standard interface can be implemented by discretizing the costs into round lots of shares, for example lots of 100 shares. Then ask orders of 100 shares each at the appropriate price can be automatically placed by the market institution. For example, the market institution can place an ask order for 100 shares at price m(100)/100, another ask order for 100 shares at price (m(200) − m(100))/100, a third ask for 100 shares at (m(300) − m(200))/100, etc. In this way, the market looks more familiar to traders, like a typical CDA with a number of ask orders at various prices automatically available. A trader buying less than 100 shares would pay a bit more than if the true cost were computed using (6), but the discretized interface would probably be more intuitive and transparent to the majority of traders.
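A sketch of price function I (equations (6) and (7)) together with the 100-share ask ladder just described. The market state numbers are hypothetical.

```python
import math

# Price function I: cost m(n) and instantaneous price p1(n), plus the
# discretized ask ladder of 100-share lots described in the text.
def cost_I(n, M1, N2):
    return M1 * (math.exp(n / N2) - 1.0)              # m(n), equation (6)

def price_I(n, M1, N2):
    return (M1 / N2) * math.exp(n / N2)               # p1(n), equation (7)

def ask_ladder(M1, N2, lot=100, lots=3):
    # per-share price of each successive lot: (m(k*lot) - m((k-1)*lot)) / lot
    return [(cost_I(k * lot, M1, N2) - cost_I((k - 1) * lot, M1, N2)) / lot
            for k in range(1, lots + 1)]

M1, N2 = 500.0, 800.0
print(round(price_I(0, M1, N2), 4))                   # p1(0) = M1/N2 = P2 = 0.625
print([round(p, 4) for p in ask_ladder(M1, N2)])      # rising per-lot prices
```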
The above equations assume that all money that comes in is eventually returned or redistributed. In other words, the mechanism is a zero-sum game, and the market institution takes no portion of the money. This could be generalized so that the market institution always takes a certain amount, or a certain percent, or a certain amount per transaction, or a certain percent per transaction, before money is returned or redistributed.

Finally, note that the above price function is undefined when the amount bet or the number of shares is zero. So the system must begin with some positive amount on both sides, and some positive number of shares outstanding on both sides. These initial amounts can be arbitrarily small in principle, but the size of the initial subsidy may affect the incentives of traders to participate. Also, the smaller the initial amounts, the more each new dollar affects the prices. The initialization amounts could be funded as a subsidy from the market institution or a patron, which I'll call a seed wager, or from a portion of the fees charged, which I'll call an ante wager.

4.2.2 Price function II: Price of A proportional to money on A
A second price function can be derived by requiring the ratio of prices to be equal to the ratio of money wagered. That is,

p1/p2 = M1/M2.   (8)

In other words, the price of A is proportional to the amount of money wagered on A, and similarly for B. This seems like a particularly natural way to set the price, since the more money that is wagered on one side, the cheaper becomes a share on the other side, in exactly the same proportion. Using Equation (8), along with (2) and (3), we can derive the implied market probability:

M1/M2 = p1/p2 = [MPr(A)/MPr(B) · M2/N1] / [MPr(B)/MPr(A) · M1/N2]
             = (MPr(A))^2/(MPr(B))^2 · M2N2/(M1N1)
(MPr(A))^2/(MPr(B))^2 = (M1)^2 N1 / ((M2)^2 N2)
MPr(A)/MPr(B) = M1 √N1 / (M2 √N2)
MPr(A) = M1 √N1 / (M1 √N1 + M2 √N2)   (9)

We can solve for the instantaneous price as follows:

p1 = MPr(A)/MPr(B) · P1 = [M1 √N1 / (M2 √N2)] · M2/N1 = M1 / √(N1 N2)   (10)

Working from the above instantaneous price, we can derive the implied cost function m as a function of the number n of shares purchased as follows:

dm/dn = (M1 + m) / √((N1 + n) N2)
∫ dm/(M1 + m) = ∫ dn/√((N1 + n) N2)
ln(M1 + m) = 2 √((N1 + n)/N2) + C
m = M1 [e^(2√((N1+n)/N2) − 2√(N1/N2)) − 1].   (11)

From this we get the price function:

p1(n) = dm/dn = [M1 / √((N1 + n) N2)] · e^(2√((N1+n)/N2) − 2√(N1/N2)).   (12)

Note that, as required, p1(0) = M1/√(N1N2), and p1(0)/p2(0) = M1/M2. If one uses the above price function, then the market dynamics will be such that the ratio of the (instantaneous) prices of A and B always equals the ratio of the amounts wagered on A and B, which seems fairly natural. Note that, as before, the mechanism can be modified to collect transaction fees of some kind. Also note that seed or ante wagers are required to initialize the system.
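A sketch of price function II (equations (11) and (12)); the market state is hypothetical and chosen only to illustrate the closed forms.

```python
import math

# Price function II: cost m(n) and instantaneous price p1(n).
def cost_II(n, M1, N1, N2):
    expo = 2.0 * math.sqrt((N1 + n) / N2) - 2.0 * math.sqrt(N1 / N2)
    return M1 * (math.exp(expo) - 1.0)                       # m(n), eq. (11)

def price_II(n, M1, N1, N2):
    expo = 2.0 * math.sqrt((N1 + n) / N2) - 2.0 * math.sqrt(N1 / N2)
    return M1 / math.sqrt((N1 + n) * N2) * math.exp(expo)    # p1(n), eq. (12)

M1, M2, N1, N2 = 600.0, 400.0, 1000.0, 800.0
print(round(price_II(0, M1, N1, N2), 4))      # equals M1 / sqrt(N1*N2), eq. (10)
print(round(M1 / math.sqrt(N1 * N2), 4))      # same value, as required
print(round(cost_II(50, M1, N1, N2), 2))      # cost of the next 50 shares of A
```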
5. DPM II: ALL MONEY REDISTRIBUTED
Above we examined the policy of refunding winning wagers and redistributing only losing wagers. In this section we consider the second policy mentioned in Section 3.3: all money from all wagers is collected and redistributed to winning wagers. For the case where all money is redistributed, the respective payoffs per share are:

P1 = (M1 + M2)/N1 = T/N1    P2 = (M1 + M2)/N2 = T/N2,

where T = M1 + M2 is the total amount of money wagered on both sides. So, if A occurs, shareholders of A lose their initial price paid, but receive P1 dollars per share owned; shareholders of B simply lose all money wagered. Similarly, if B occurs, shareholders of B lose their initial price paid, but receive P2 dollars per share owned; shareholders of A lose all money wagered. In this case, the trader's per-share expected value for purchasing an infinitesimal quantity ε of shares of A is

E[ε shares]/ε = Pr(A) · E[P1 | A] − p1.   (13)

A risk-neutral trader optimizes by choosing a number of shares n ≥ 0 of A to purchase, in order to maximize

E[n shares] = Pr(A) · n · E[P1 | A] − ∫_0^n p1(x) dx = Pr(A) · n · E[P1 | A] − m.   (14)

The same value of n can be solved for by finding the number of shares required to drive E[ε shares]/ε to zero. That is, find n ≥ 0 satisfying 0 = Pr(A) · E[P1 | A] − p1(n), if such an n exists; otherwise n = 0.

5.1 Market probability
In this case MPr(A), the aggregate probability of A as judged by the market as a whole, is the solution to 0 = MPr(A) · E[P1 | A] − p1. Solving we get

MPr(A) = p1 / E[P1 | A].   (15)

As before, we make the simplifying assumption (3) that the expected final payoff per share equals the current payoff per share. The assumption is critical for our analysis, but may not be required for a practical implementation.

5.2 Price functions
For the case where all money is distributed, the constraints (4) that keep the price of A equal to the payoff of B, and vice versa, do not lead to the derivation of a coherent price function. A reasonable price function can be derived from the constraint (8) employed in Section 4.2.2, where we require the ratio of prices to be equal to the ratio of money wagered. That is, p1/p2 = M1/M2. In other words, the price of A is proportional to the amount of money wagered on A, and similarly for B. Using Equations (3), (8), and (15) we can derive the implied market probability:

M1/M2 = p1/p2 = MPr(A)/MPr(B) · (T/N1) · (N2/T) = MPr(A)/MPr(B) · N2/N1
MPr(A)/MPr(B) = M1N1 / (M2N2)
MPr(A) = M1N1 / (M1N1 + M2N2)   (16)

Interestingly, this is the same market probability derived in Section 4.2.1 for the case of losing-money redistribution with the constraints that the price of A equal the payoff of B and vice versa. The instantaneous price per share for an infinitesimal quantity of shares is:

p1 = ((M1)^2 + M1M2) / (M1N1 + M2N2) = (M1 + M2) / (N1 + (M2/M1) N2).

Working from the above instantaneous price, we can derive the number of shares n that can be purchased for m dollars, as follows:

dm/dn = (M1 + M2 + m) / (N1 + n + M2N2/(M1 + m))
dn/dm = (N1 + n + M2N2/(M1 + m)) / (M1 + M2 + m)   (17)
...
n = m(N1 − N2)/T + [N2(T + m)/M2] · ln[T(M1 + m) / (M1(T + m))].

Note that we solved for n(m) rather than m(n). I could not find a closed-form solution for m(n), as was derived for the two other cases above. Still, n(m) can be used to determine how many shares can be purchased for m dollars, and the inverse function can be approximated to any degree numerically. From n(m) we can also compute the price function:

p1(m) = dm/dn = (M1 + m) M2 T / denom,   (18)

where

denom = (M1 + m)M2N1 + (M2 − m)M2N2 + T(M1 + m)N2 ln[T(M1 + m) / (M1(T + m))].

Note that, as required, p1(0)/p2(0) = M1/M2. If one uses the above price function, then the market dynamics will be such that the ratio of the (instantaneous) prices of A and B always equals the ratio of the amounts wagered on A and B. This price function has another desirable property: it acts such that the expected value of wagering $1 on A and simultaneously wagering $1 on B equals zero, assuming (3). That is, E[$1 of A + $1 of B] = 0. The derivation is omitted.
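A sketch of the DPM II formulas above: the number of shares n(m) purchasable for m dollars and the price function p1(m) of equation (18). The market state is hypothetical.

```python
import math

# DPM II: shares n(m) obtainable for m dollars and price function p1(m).
def shares_for_money(m, M1, M2, N1, N2):
    T = M1 + M2
    return (m * (N1 - N2) / T
            + (N2 * (T + m) / M2) * math.log(T * (M1 + m) / (M1 * (T + m))))

def price_dpm2(m, M1, M2, N1, N2):
    T = M1 + M2
    log_term = math.log(T * (M1 + m) / (M1 * (T + m)))
    denom = ((M1 + m) * M2 * N1 + (M2 - m) * M2 * N2
             + T * (M1 + m) * N2 * log_term)
    return (M1 + m) * M2 * T / denom                         # equation (18)

M1, M2, N1, N2 = 600.0, 400.0, 1000.0, 800.0
p1_0 = price_dpm2(0.0, M1, M2, N1, N2)
p2_0 = price_dpm2(0.0, M2, M1, N2, N1)        # symmetric formula for B
print(round(p1_0 / p2_0, 4), M1 / M2)         # both 1.5, as required
print(round(shares_for_money(25.0, M1, M2, N1, N2), 2))  # shares $25 buys now
```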
5.3 Comparing DPM I and II
The main advantage of refunding winning wagers (DPM I) is that every bet on the winning outcome is guaranteed to at least break even. The main disadvantage of refunding winning wagers is that shares are not homogeneous: each share of A, for example, is actually composed of two distinct parts: (1) the refund, or a lottery ticket that pays $p if A occurs, where p is the price paid per share, and (2) one share of the final payoff ($P1) if A occurs. This complicates the implementation of an aftermarket to cash out of the market early, which we will examine below in Section 7. When all money is redistributed (DPM II), shares are homogeneous: each share entitles its owner to an equal slice of the final payoff. Because shares are homogeneous, the implementation of an aftermarket is straightforward, as we shall see in Section 7. On the other hand, because initial prices paid are not refunded for winning bets, there is a chance that, if prices swing wildly enough, a wager on the correct outcome might actually lose money. Traders must be aware that if they buy in at an excessively high price that later tumbles, allowing many others to get in at a much lower price, they may lose money in the end regardless of the outcome. From informal experiments, I don't believe this eventuality would be common, but nonetheless it requires care in communicating to traders the possible risks. One potential fix would be for the market maker to keep track of when the price is going too low, endangering an investor on the correct outcome. At this point, the market maker could artificially stop lowering the price. Sell orders in the aftermarket might still come in below the market maker's price, but in this way the system could ensure that every wager on the correct outcome at least breaks even.

6. OTHER VARIATIONS
A simple ascending price function would set p1 = αM1 and p2 = αM2, where α > 0. In this case, prices would only go up. For the case of all money being redistributed, this would eliminate the possibility of losing money on a wager on the correct outcome. Even though the market maker's price only rises, the going price may fall well below the market maker's price, as ask orders are placed in the aftermarket. I have derived price functions for several other cases, using the same methodology above. Each price function may have its own desirable properties, but it's not clear which is best, or even that a single best method exists. Further analyses and, more importantly, empirical investigations are required to answer these questions.

7. AFTERMARKETS
A key advantage of DPM over a standard pari-mutuel market is the ability to cash out of the market before it closes, in order to take a profit or limit a loss. This is accomplished by allowing traders to place ask orders on the same queue as the market maker. So traders can sell the shares that they purchased at or below the price set by the market maker. Or traders can place a limit sell order at any price. Buyers will purchase any existing shares for sale at the lower prices first, before purchasing new shares from the market maker.

7.1 Aftermarket for DPM II
For the second main case explored above, where all money is redistributed, allowing an aftermarket is simple. In fact, aftermarket may be a poor descriptor: buying and selling are both fully integrated into the same mechanism. Every share is worth precisely the same amount, so traders can simply place ask orders on the same queue as the market maker in order to sell their shares. New buyers will accept the lowest ask price, whether it comes from the market maker or another trader. In this way, traders can cash out early and walk away with their current profit or loss, assuming they can find a willing buyer.
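A toy illustration of the DPM II aftermarket rule just described: a buyer takes the cheapest shares available, whether from other traders' ask orders or from the automated market maker. All prices are hypothetical.

```python
# DPM II aftermarket: buyers fill the lowest ask, trader or market maker.
def best_available_ask(trader_asks, market_maker_price):
    return min(trader_asks + [market_maker_price])

print(best_available_ask([0.70, 0.66], 0.68))   # a trader's 0.66 fills first
```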
7.2 Aftermarket for DPM I
When winning wagers are refunded and only losing wagers are redistributed, each share is potentially worth a different amount, depending on how much was paid for it, so it is not as simple a matter to set up an aftermarket. However, an aftermarket is still possible. In fact, much of the complexity can be hidden from traders, so it looks nearly as simple as placing a sell order on the queue. In this case shares are not homogeneous: each share of A is actually composed of two distinct parts: (1) the refund of p · 1_A dollars, and (2) the payoff of P1 · 1_A dollars, where p is the per-share price paid and 1_A is the indicator function equaling 1 if A occurs, and 0 otherwise. One can imagine running two separate aftermarkets where people can sell these two respective components. However, it is possible to automate the two aftermarkets, by automatically bundling them together in the correct ratio and selling them in the central DPM. In this way, traders can cash out by placing sell orders on the same queue as the DPM market maker, effectively hiding the complexity of explicitly having two separate aftermarkets. The bundling mechanism works as follows. Suppose the current price for 1 share of A is p1. A buyer agrees to purchase the share at p1. The buyer pays p1 dollars and receives p1 · 1_A + P1 · 1_A dollars. If there is enough inventory in the aftermarkets, the buyer's share is constructed by bundling together p1 · 1_A from the first aftermarket, and P1 · 1_A from the second aftermarket. The seller in the first aftermarket receives p1 · MPr(A) dollars, and the seller in the second aftermarket receives p1 · MPr(B) dollars.

7.3 Pseudo aftermarket for DPM I
There is an alternative pseudo aftermarket that's possible for the case of DPM I that does not require bundling. Consider a share of A purchased for $5. The share is composed of $5 · 1_A and $P1 · 1_A. Now suppose the current price has moved from $5 to $10 per share and the trader wants to cash out at a profit. The trader can sell 1/2 share at market price (1/2 share for $5), receiving all of the initial $5 investment back, and retaining 1/2 share of A. The 1/2 share is worth either some positive amount, or nothing, depending on the outcome and the final payoff. So the trader is left with shares worth a positive expected value and all of his or her initial investment. The trader has essentially cashed out and locked in his or her gains. Now suppose instead that the price moves downward, from $5 to $2 per share. The trader decides to limit his or her loss by selling the share for $2. The buyer gets the 1 share plus $2 · 1_A (the buyer's price refunded). The trader (seller) gets the $2 plus what remains of the original price refunded, or $3 · 1_A. The trader's loss is now limited to $3 at most instead of $5. If A occurs, the trader breaks even; if B occurs, the trader loses $3. Also note that, in either DPM formulation, traders can always hedge-sell by buying the opposite outcome without the need for any type of aftermarket.
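A sketch of the pseudo-aftermarket arithmetic above for DPM I: a trader holds one share of A bought at `paid`, the current price is `now`, and selling min(paid/now, 1) of the share recovers as much of the initial stake as possible. The function name and numbers are illustrative only.

```python
# Pseudo aftermarket for DPM I: cash out by selling a fraction of the share.
def pseudo_cash_out(paid, now):
    frac_sold = min(paid / now, 1.0)
    proceeds = frac_sold * now        # equals `paid` whenever now >= paid
    return frac_sold, proceeds

print(pseudo_cash_out(5.0, 10.0))  # sell 1/2 share, recover the full $5 stake
print(pseudo_cash_out(5.0, 2.0))   # price fell: sell the whole share for $2,
                                   # capping the worst-case loss at $3
```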
8. CONCLUSIONS
I have presented a new market mechanism for wagering on, or hedging against, a future uncertain event, called a dynamic pari-mutuel market (DPM). The mechanism combines the infinite liquidity and risk-free nature of a pari-mutuel market with the dynamic nature of a CDA, making it suitable for continuous information aggregation. To my knowledge, all existing mechanisms (including the standard pari-mutuel market, the CDA, the CDAwMM, the bookie mechanism, and the MSR) exhibit at most two of the three properties. An MSR is the closest to a DPM in terms of these properties, if not in terms of mechanics. Given some natural constraints on price dynamics, I have derived in closed form the implied price functions, which encode how prices change continuously as shares are purchased. The interface for traders looks much like the familiar CDA, with the system acting as an automated market maker willing to accept an infinite number of buy orders at some price. I have explored two main variations of a DPM: one where only losing money is redistributed, and one where all money is redistributed. Each has its own pros and cons, and each supports several reasonable price functions. I have described the workings of an aftermarket, so that traders can cash out of the market early, like in a CDA, to lock in their gains or limit their losses, an operation that is not possible in a standard pari-mutuel setting.

9. FUTURE WORK
This paper reports the results of an initial investigation of the concept of a dynamic pari-mutuel market. Many avenues for future work present themselves, including the following:
• Random walk conjecture. The most important question mark in my mind is whether the random walk assumption (3) can be proven under reasonable market efficiency conditions and, if not, how severely it affects the practicality of the system.
• Incentive analysis. Formally, what are the incentives for traders to act on new information and when? How does the level of initial subsidy affect trader incentives?
• Laboratory experiments and field tests. This paper concentrated on the mathematics and algorithmics of the mechanism. However, the true test of the mechanism's ability to serve as an instrument for hedging, wagering, or information aggregation is to test it with real traders in a realistic environment. In reality, how do people behave when faced with a DPM mechanism?
• DPM call market. I have derived the price functions to react to wagers on one outcome at a time. The mechanism could be generalized to accept orders on both sides, then update the prices holistically, rather than by assuming a particular sequence on the wagers.
• Real-valued variables. I believe the mechanisms in this paper can easily be generalized to multiple discrete outcomes, and multiple real-valued outcomes that always sum to some constant value (e.g., multiple percentage values that must sum to 100). However, the generalization to real-valued variables with arbitrary range is less clear, and open for future development.
• Compound/combinatorial betting. I believe that DPM may be well suited for compound [8, 11] or combinatorial [2] betting, for many of the same reasons that market scoring rules [11] are well suited for the task. DPM may also have some computational advantages over MSR, though this remains to be seen.

Acknowledgments
I thank Dan Fain, Gary Flake, Lance Fortnow, and Robin Hanson.

10. REFERENCES
[1] Mukhtar M. Ali. Probability and utility estimates for racetrack bettors. Journal of Political Economy, 85(4):803-816, 1977.
[2] Peter Bossaerts, Leslie Fine, and John Ledyard. Inducing liquidity in thin financial markets through combined-value trading mechanisms. European Economic Review, 46:1671-1695, 2002.
[3] Kay-Yut Chen, Leslie R. Fine, and Bernardo A. Huberman. Forecasting uncertain events with small groups. In Third ACM Conference on Electronic Commerce (EC'01), pages 58-64, 2001.
[4] Sandip Debnath, David M. Pennock, C. Lee Giles, and Steve Lawrence. Information incorporation in online in-game sports betting markets. In Fourth ACM Conference on Electronic Commerce (EC'03), 2003.
[5] Robert Forsythe and Russell Lundholm. Information aggregation in an experimental market. Econometrica, 58(2):309-347, 1990.
[6] Robert Forsythe, Forrest Nelson, George R. Neumann, and Jack Wright. Anatomy of an experimental political stock market. American Economic Review, 82(5):1142-1161, 1992.
[7] Robert Forsythe, Thomas A. Rietz, and Thomas W. Ross. Wishes, expectations, and actions: A survey on price formation in election stock markets. Journal of Economic Behavior and Organization, 39:83-110, 1999.
[8] Lance Fortnow, Joe Kilian, David M. Pennock, and Michael P. Wellman. Betting boolean-style: A framework for trading in securities based on logical formulas. In Proceedings of the Fourth Annual ACM Conference on Electronic Commerce, pages 144-155, 2003.
[9] John M. Gandar, William H. Dare, Craig R. Brown, and Richard A. Zuber. Informed traders and price variations in the betting market for professional basketball games. Journal of Finance, LIII(1):385-401, 1998.
[10] Robin Hanson. Decision markets. IEEE Intelligent Systems, 14(3):16-19, 1999.
[11] Robin Hanson. Combinatorial information market design. Information Systems Frontiers, 5(1), 2002.
[12] Robin D. Hanson. Could gambling save science? Encouraging an honest consensus. Social Epistemology, 9(1):3-33, 1995.
[13] Jens Carsten Jackwerth and Mark Rubinstein. Recovering probability distributions from options prices. Journal of Finance, 51(5):1611-1631, 1996.
[14] Joseph B. Kadane and Robert L. Winkler. Separating probability elicitation from utilities. Journal of the American Statistical Association, 83(402):357-363, 1988.
[15] David M. Pennock, Steve Lawrence, C. Lee Giles, and Finn Årup Nielsen. The real power of artificial markets. Science, 291:987-988, February 9 2001.
[16] David M. Pennock, Steve Lawrence, Finn Årup Nielsen, and C. Lee Giles. Extracting collective probabilistic forecasts from web games. In Seventh International Conference on Knowledge Discovery and Data Mining, pages 174-183, 2001.
[17] C. R. Plott, J. Wit, and W. C. Yang. Parimutuel betting markets as information aggregation devices: Experimental results. Technical Report Social Science Working Paper 986, California Institute of Technology, April 1997.
[18] Charles R. Plott. Markets as information gathering tools. Southern Economic Journal, 67(1):1-15, 2000.
[19] Charles R. Plott and Shyam Sunder. Rational expectations and the aggregation of diverse information in laboratory security markets. Econometrica, 56(5):1085-1118, 1988.
[20] Charles Polk, Robin Hanson, John Ledyard, and Takashi Ishikida. Policy analysis market: An electronic commerce application of a combinatorial information market. In Proceedings of the Fourth Annual ACM Conference on Electronic Commerce, pages 272-273, 2003.
[21] R. Roll. Orange juice and weather. American Economic Review, 74(5):861-880, 1984.
[22] Richard N. Rosett. Gambling and rationality. Journal of Political Economy, 73(6):595-607, 1965.
[23] Carsten Schmidt and Axel Werwatz. How accurate do markets predict the outcome of an event? The Euro 2000 soccer championships experiment. Technical Report 09-2002, Max Planck Institute for Research into Economic Systems, 2002.
[24] Wayne W. Snyder. Horse racing: Testing the efficient markets model. Journal of Finance, 33(4):1109-1118, 1978.
[25] Richard H. Thaler and William T. Ziemba. Anomalies: Parimutuel betting markets: Racetracks and lotteries. Journal of Economic Perspectives, 2(2):161-174, 1988.
[26] Martin Weitzman. Utility analysis and group behavior: An empirical study. Journal of Political Economy, 73(1):18-26, 1965.
[27] Robert L. Winkler and Allan H. Murphy. Good probability assessors. J. Applied Meteorology, 7:751-758, 1968.
A Dynamic Pari-Mutuel Market for Hedging, Wagering, and Information Aggregation ABSTRACT I develop a new mechanism for risk allocation and information speculation called a dynamic pari-mutuel market (DPM). A DPM acts as hybrid between a pari-mutuel market and a continuous double auction (CDA), inheriting some of the advantages of both. Like a pari-mutuel market, a DPM offers infinite buy-in liquidity and zero risk for the market institution; like a CDA, a DPM can continuously react to new information, dynamically incorporate information into prices, and allow traders to lock in gains or limit losses by selling prior to event resolution. The trader interface can be designed to mimic the familiar double auction format with bid-ask queues, though with an addition variable called the payoff per share. The DPM price function can be viewed as an automated market maker always offering to sell at some price, and moving the price appropriately according to demand. Since the mechanism is pari-mutuel (i.e., redistributive), it is guaranteed to pay out exactly the amount of money taken in. I explore a number of variations on the basic DPM, analyzing the properties of each, and solving in closed form for their respective price functions. 1. INTRODUCTION A wide variety of financial and wagering mechanisms have been developed to support hedging (i.e., insuring) against exposure to uncertain events and/or speculative trading on uncertain events. The dominant mechanism used in financial circles is the continuous double auction (CDA), or in some cases the CDA with market maker (CDAwMM). The primary mechanism used for sports wagering is a bookie or bookmaker, who essentially acts exactly as a market maker. Horse racing and jai alai wagering traditionally employ the pari-mutuel mechanism. Though there is no formal or logical separation between financial trading and wagering, the two endeavors are socially considered distinct. Recently, there has been a move to employ CDAs or CDAwMMs for all types of wagering, including on sports, horse racing, political events, world news, and many other uncertain events, and a simultaneous and opposite trend to use bookie systems for betting on financial markets. These trends highlight the interchangeable nature of the mechanisms and further blur the line between investing and betting. Some companies at the forefront of these movements are growing exponentially, with some industry observers declaring the onset of a revolution in the wagering business .1 Each mechanism has pros and cons for the market institution and the participating traders. A CDA only matches willing traders, and so poses no risk whatsoever for the market institution. But a CDA can suffer from illiquidity in the form huge bid-ask spreads or even empty bid-ask queues if trading is light and thus markets are thin. A successful CDA must overcome a chicken-and-egg problem: traders are attracted to liquid markets, but liquid markets require a large number of traders. A CDAwMM and the similar bookie mechanism have built-in liquidity, but at a cost: the market maker itself, usually affiliated with the market institution, is exposed to significant risk of large monetary losses. Both the CDA and CDAwMM offer incentives for traders to leverage information continuously as soon as that information becomes available. As a result, prices are known to capture the current state of information exceptionally well. 
Pari-mutuel markets effectively have infinite liquidity: anyone can place a bet on any outcome at any time, without the need for a matching offer from another bettor or a market maker. Pari-mutuel markets also involve no risk for the market institution, since they only redistribute money from losing wagers to winning wagers. However, pari-mutuel mar kets are not suitable for situations where information arrives over time, since there is a strong disincentive for placing bets until either (1) all information is revealed, or (2) the market is about to close. For this reason, pari-mutuel "prices" prior to the market's close cannot be considered a reflection of current information. Pari-mutuel market participants cannot "buy low and sell high": they cannot cash out gains (or limit losses) before the event outcome is revealed. Because the process whereby information arrives continuously over time is the rule rather than the exception, the applicability of the standard pari-mutuel mechanism is questionable in a large number of settings. In this paper, I develop a new mechanism suitable for hedging, speculating, and wagering, called a dynamic parimutuel market (DPM). A DPM can be thought of as a hybrid between a pari-mutuel market and a CDA. A DPM is indeed pari-mutuel in nature, meaning that it acts only to redistribute money from some traders to others, and so exposes the market institution to no volatility (no risk). A constant, pre-determined subsidy is required to start the market. The subsidy can in principle be arbitrarily small and might conceivably come from traders (via antes or transaction fees) rather than the market institution, though a nontrivial outside subsidy may actually encourage trading and information aggregation. A DPM has the infinite liquidity of a pari-mutuel market: traders can always purchase shares in any outcome at any time, at some price automatically set by the market institution. A DPM is also able to react to and incorporate information arriving over time, like a CDA. The market institution changes the price for particular outcomes based on the current state of wagering. If a particular outcome receives a relatively large number of wagers, its price increases; if an outcome receives relatively few wagers, its price decreases. Prices are computed automatically using a price function, which can differ depending on what properties are desired. The price function determines the instantaneous price per share for an infinitesimal quantity of shares; the total cost for purchasing n shares is computed as the integral of the price function from 0 to n. The complexity of the price function can be hidden from traders by communicating only the ask prices for various lots of shares (e.g., lots of 100 shares), as is common practice in CDAs and CDAwMMs. DPM prices do reflect current information, and traders can cash out in an aftermarket to lock in gains or limit losses before the event outcome is revealed. While there is always a market maker willing to accept buy orders, there is not a market maker accepting sell orders, and thus no guaranteed liquidity for selling: instead, selling is accomplished via a standard CDA mechanism. Traders can always "hedge-sell" by purchasing the opposite outcome than they already own. 2. BACKGROUND AND RELATED WORK 2.1 Pari-mutuel markets Pari-mutuel markets are common at horse races [1, 22, 24, 25, 26], dog races, and jai alai games. 
In a pari-mutuel market people place wagers on which of two or more mutually exclusive and exhaustive outcomes will occur at some time in the future. After the true outcome becomes known, all of the money that is lost by those who bet on the incorrect outcome is redistributed to those who bet on the correct outcome, in direct proportion to the amount they wagered. More formally, if there are k mutually exclusive and exhaustive outcomes (e.g., k horses, exactly one of which will win), and M1, M2,..., Mk dollars are bet on each outcome, and outcome i occurs, then everyone who bet on an outcome j = ~ i loses their wager, while everyone who bet on outcome i receives Pkj = 1 Mj / Mi dollars for every dollar they wagered. That is, every dollar wagered on i receives an equal share of all money wagered. An equivalent way to think about the redistribution rule is that every dollar wagered on i is refunded, then receives an equal share of all remaining money bet on the losing outcomes, or Pj ~ = i Mj/Mi dollars. In practice, the market institution (e.g., the racetrack) first takes a certain percent of the total amount wagered, usually about 20% in the United States, then redistributes whatever money remains to the winners in proportion to their amount bet. Consider a simple example with two outcomes, A and B. The outcomes are mutually exclusive and exhaustive, meaning that Pr (A ∧ B) = 0 and Pr (A) + Pr (B) = 1. Suppose $800 is bet on A and $200 on B. Now suppose that A occurs (e.g., horse A wins the race). People who wagered on B lose their money, or $200 in total. People who wagered on A win and each receives a proportional share of the total $1000 wagered (ignoring fees). Specifically, each $1 wager on A entitles its owner a 1/800 share of the $1000, or $1.25. Every dollar bet in a pari-mutuel market has an equal payoff, regardless of when the wager was placed or how much money was invested in the various outcomes at the time the wager was placed. The only state that matters is the final state: the final amounts wagered on all the outcomes when the market closes, and the identity of the correct outcome. As a result, there is a disincentive to place a wager early if there is any chance that new information might become available. Moreover, there are no guarantees about the payoff rate of a particular bet, except that it will be nonnegative if the correct outcome is chosen. Payoff rates can fluctuate arbitrarily until the market closes. So a second reason not to bet early is to wait to get a better sense of the final payout rates. This is in contrast to CDAs and CDAwMMs, like the stock market, where incentives exist to invest as soon as new information is revealed. Pari-mutuel bettors may be allowed to switch their chosen outcome, or even cancel their bet, prior to the market's close. However, they cannot cash out of the market early, to either lock in gains or limit losses, if new information favors one outcome over another, as is possible in a CDA or a CDAwMM. If bettors can cancel or change their bets, then an aftermarket to sell existing wagers is not sensible: every dollar wagered is worth exactly $1 up until the market's close--no one would buy at greater than $1 and no one would sell at less than $1. Pari-mutuel bettors must wait until the outcome is revealed to realize any profit or loss. Unlike a CDA, in a pari-mutuel market, anyone can place a wager of any amount at any time--there is in a sense infinite liquidity for buying. 
A CDAwMM also has built-in liquidity, but at the cost of significant risk for the market maker. In a pari-mutuel market, since money is only redistributed among bettors, the market institution itself has no risk. The main drawback of a pari-mutuel market is that it is useful only for capturing the value of an uncertain asset at some instant in time. It is ill-suited for situations where information arrives over time, continuously updating the estimated value of the asset--situations common in al most all trading and wagering scenarios. There is no notion of "buying low and selling high", as occurs in a CDA, where buying when few others are buying (and the price is low) is rewarded more than buying when many others are buying (and the price is high). Perhaps for this reason, in most dynamic environments, financial mechanisms like the CDA that can react in real-time to changing information are more typically employed to facilitate speculating and hedging. Since a pari-mutuel market can estimate the value of an asset at a single instant in time, a repeated pari-mutuel market, where distinct pari-mutuel markets are run at consecutive intervals, could in principle capture changing information dynamics. But running multiple consecutive markets would likely thin out trading in each individual market. Also, in each individual pari-mutuel market, the incentives would still be to wait to bet until just before the ending time of that particular market. This last problem might be mitigated by instituting a random stopping rule for each individual pari-mutuel market. In laboratory experiments, pari-mutuel markets have shown a remarkable ability to aggregate and disseminate information dispersed among traders, at least for a single snapshot in time [17]. A similar ability has been recognized at real racetracks [1, 22, 24, 25, 26]. 2.2 Financial markets In the financial world, wagering on the outcomes of uncertain future propositions is also common. The typical market mechanism used is the continuous double auction (CDA). The term securities market in economics and finance generically encompasses a number of markets where speculating on uncertain events is possible. Examples include stock markets like NASDAQ, options markets like the CBOE [13], futures markets like the CME [21], other derivatives markets, insurance markets, political stock markets [6, 7], idea futures markets [12], decision markets [10] and even market games [3, 15, 16]. Securities markets generally have an economic and social value beyond facilitating speculation or wagering: they allow traders to hedge risk, or to insure against undesirable outcomes. So if a particular outcome has disutility for a trader, he or she can mitigate the risk by wagering for the outcome, to arrange for compensation in case the outcome occurs. In this sense, buying automobile insurance is effectively a bet that an accident or other covered event will occur. Similarly, buying a put option, which is useful as a hedge for a stockholder, is a bet that the underlying stock will go down. In practice, agents engage in a mixture of hedging and speculating, and there is no clear dividing line between the two [14]. Like pari-mutuel markets, often prices in financial markets are excellent information aggregators, yielding very accurate forecasts of future events [5, 18, 19]. A CDA constantly matches orders to buy an asset with orders to sell. 
If at any time one party is willing to buy one unit of the asset at a bid price of pbid, while another party is willing to sell one unit of the asset at an ask price of pask, and pbid is greater than or equal to pask, then the two parties transact (at some price between pbid and pask). If the highest bid price is less than the lowest ask price, then no transactions occur. In a CDA, the bid and ask prices rapidly change as new information arrives and traders reassess the value of the asset. Since the auctioneer only matches willing bidders, the auctioneer takes on no risk. However, buyers can only buy as many shares as sellers are willing to sell; for any transaction to occur, there must be a counterparty on the other side willing to accept the trade. As a result, when few traders participate in a CDA, it may become illiquid, meaning that not much trading activity occurs. The spread between the highest bid price and the lowest ask price may be very large, or one or both queues may be completely empty, discouraging trading .2 One way to induce liquidity is to provide a market maker who is willing to accept a large number of buy and sell orders at particular prices. We call this mechanism a CDA with market maker (CDAwMM).3 Conceptually, the market maker is just like any other trader, but typically is willing to accept a much larger volume of trades. The market maker may be a person, or may be an automated algorithm. Adding a market maker to the system increases liquidity, but exposes the market maker to risk. Now, instead of only matching trades, the system actually takes on risk of its own, and depending on what happens in the future, may lose considerable amounts of money. 2.3 Wagering markets The typical Las Vegas bookmaker or oddsmaker functions much like a market maker in a CDA. In this case, the market institution (the book or house) sets the odds,4 initially according to expert opinion, and later in response to the relative level of betting on the various outcomes. Unlike in a pari-mutuel environment, whenever a wager is placed with a bookmaker, the odds or terms for that bet are fixed at the time of the bet. The bookmaker profits by offering different odds for the two sides of the bet, essentially defining a bidask spread. While odds may change in response to changing information, any bets made at previously set odds remain in effect according to the odds at the time of the bet; this is precisely in analogy to a CDAwMM. One difference between a bookmaker and a market maker is that the former usually operates in a "take it or leave it mode": bettors cannot place their own limit orders on a common queue, they can in effect only place market orders at prices defined by the bookmaker. Still, the bookmaker certainly reacts to bettor demand. Like a market maker, the bookmaker exposes itself to significant risk. Sports betting markets have also been shown to provide high quality aggregate forecasts [4, 9, 23]. 2.4 Market scoring rule Hanson's [11] market scoring rule (MSR) is a new mechanism for hedging and speculating that shares some properties in common with a DPM. Like a DPM, an MSR can be conceptualized as an automated market maker always willing to accept a trade on any event at some price. An MSR requires a patron to subsidize the market. The patron's final loss is variable, and thus technically implies a degree of risk, though the maximum loss is bounded. An MSR maintains a probability distribution over all events. 
At any time any 2Thin markets do occur often in practice, and can be seen in a variety of the less popular markets available on http://TradeSports.com, or in some financial options markets, for example. 3A very clear example of a CDAwMM is the "interactive" betting market on http://WSEX.com. 4Or, alternatively, the bookmaker sets the game line in order to provide even-money odds. trader who believes the probabilities are wrong can change any part of the distribution by accepting a lottery ticket that pays off according to a scoring rule (e.g., the logarithmic scoring rule) [27], as long as that trader also agrees to pay off the most recent person to change the distribution. In the limit of a single trader, the mechanism behaves like a scoring rule, suitable for polling a single agent for its probability distribution. In the limit of many traders, it produces a combined estimate. Since the market essentially always has a complete set of posted prices for all possible outcomes, the mechanism avoids the problem of thin markets or illiquidity. An MSR is not pari-mutuel in nature, as the patron in general injects a variable amount of money into the system. An MSR provides a two-sided automated market maker, while a DPM provides a one-sided automated market maker. In an MSR, the vector of payoffs across outcomes is fixed at the time of the trade, while in a DPM, the vector of payoffs across outcomes depends both on the state of wagering at the time of the trade and the state of wagering at the market's close. While the mechanisms are quite different--and so trader acceptance and incentives may strongly differ--the properties and motivations of DPMs and MSRs are quite similar. Hanson shows how MSRs are especially well suited for allowing bets on a combinatorial number of outcomes. The patron's payment for subsidizing trading on all 2n possible combinations of n events is no larger than the sum of subsidizing the n event marginals independently. The mechanism was planned for use in the Policy Analysis Market (PAM), a futures market in Middle East related outcomes and funded by DARPA [20], until a media firestorm killed the project .5 As of this writing, the founders of PAM were considering reopening under private control .6 3. A DYNAMIC PARI-MUTUEL MARKET 3.1 High-level description In contrast to a standard pari-mutuel market, where each dollar always buys an equal share of the payoff, in a DPM each dollar buys a variable share in the payoff depending on the state of wagering at the time of purchase. So a wager on A at a time when most others are wagering on B offers a greater possible profit than a wager on A when most others are also wagering on A. A natural way to communicate the changing payoff of a bet is to say that, at any given time, a certain amount of money will buy a certain number of shares in one outcome the other. Purchasing a share entitles its owner to an equal stake in the winning pot should the chosen outcome occur. The payoff is variable, because when few people are betting on an outcome, shares will generally be cheaper than at a time when many people are betting that outcome. There is no pre-determined limit on the number of shares: new shares can be continually generated as trading proceeds. For simplicity, all analyses in this paper consider the binary outcome case; generalizing to multiple discrete outcomes should be straightforward. Denote the two outcomes A and B. The outcomes are mutually exclusive and ex haustive. 
Denote the instantaneous price per share of A as p1 and the price per share of B as p2. Denote the payoffs per share as P1 and P2, respectively. These four numbers, p1, p2, P1, P2, are the key numbers that traders must track and understand. Note that the price is set at the time of the wager; the payoff per share is finalized only after the event outcome is revealed. At any time, a trader can purchase an infinitesimal quantity of shares of A at price p1 (and similarly for B). However, since the price changes continuously as shares are purchased, the cost of buying n shares is computed as the integral of a price function from 0 to n. The use of continuous functions and integrals can be hidden from traders by aggregating the automated market maker's sell orders into discrete lots of, say, 100 shares each. These ask orders can be automatically entered into the system by the market institution, so that traders interact with what looks like a more familiar CDA; we examine this interface issue in more detail below in Section 4.2.

For our analysis, we introduce the following additional notation. Denote M1 as the total amount of money wagered on A, M2 as the total amount of money wagered on B, T = M1 + M2 as the total amount of money wagered on both sides, N1 as the total number of shares purchased of A, and N2 as the total number of shares purchased of B. There are many ways to formulate the price function. Several natural price functions are outlined below; each is motivated as the unique solution to a particular constraint on price dynamics.

3.2 Advantages and disadvantages

To my knowledge, a DPM is the only known mechanism for hedging and speculating that exhibits all three of the following properties: (1) guaranteed liquidity, (2) no risk for the market institution, and (3) continuous incorporation of information. A standard pari-mutuel fails (3). A CDA fails (1). A CDAwMM, the bookmaker mechanism, and an MSR all fail (2). Even though technically an MSR exposes its patron to risk (i.e., a variable future payoff), the patron's maximum loss is bounded, so the distinction between a DPM and an MSR in terms of these three properties is more technical than practical. DPM traders can cash out of the market early, just like stock market traders, to lock in a profit or limit a loss, an action that is simply not possible in a standard pari-mutuel.

A DPM also has some drawbacks. The payoff for a wager depends both on the price at the time of the trade, and on the final payoff per share at the market's close. This contrasts with the CDA variants, where the payoff vector across possible future outcomes is fixed at the time of the trade. So a trader's strategic optimization problem is complicated by the need to predict the final values of P1 and P2. If P changes according to a random walk, then traders can take the current P as an unbiased estimate of the final P, greatly decreasing the complexity of their optimization. If P does not change according to a random walk, the mechanism still has utility as a mechanism for hedging and speculating, though optimization may be difficult, and determining a measure of the market's aggregate opinion of the probabilities of A and B may be difficult. We discuss the implications of random walk behavior further below in Section 4.1 in the discussion surrounding Assumption 3. A second drawback of a DPM is its one-sided nature. While an automated market maker always stands ready to accept buy orders, there is no corresponding market maker to accept sell orders.
Traders must sell to each other using a standard CDA mechanism, for example by posting an ask order at a price at or below the market maker's current ask price. Traders can also always "hedge-sell" by purchasing shares in the opposite outcome from the market maker, thereby hedging their bet if not fully liquidating it.

3.3 Redistribution rule

In a standard pari-mutuel market, payoffs can be computed in either of two equivalent ways: (1) each winning $1 wager receives a refund of the initial $1 paid, plus an equal share of all losing wagers, or (2) each winning $1 wager receives an equal share of all wagers, winning or losing. Because each dollar always earns an equal share of the payoff, the two formulations are precisely the same. In a dynamic pari-mutuel market, because each dollar is not equally weighted, the two formulations are distinct, and lead to significantly different price functions and mechanisms, each with different potentially desirable properties. We consider each case in turn. The next section analyzes case (1), where only losing money is redistributed. Section 5 examines case (2), where all money is redistributed.

4. DPM I: LOSING MONEY REDISTRIBUTED

For the case where the initial payments on winning bets are refunded, and only losing money is redistributed, the respective payoffs per share are simply:

P1 = M2/N1 and P2 = M1/N2.

So, if A occurs, shareholders of A receive all of their initial payment back, plus P1 dollars per share owned, while shareholders of B lose all money wagered. Similarly, if B occurs, shareholders of B receive all of their initial payment back, plus P2 dollars per share owned, while shareholders of A lose all money wagered. Without loss of generality, I will analyze the market from the perspective of A, deriving prices and payoffs for A only. The equations for B are symmetric.

The trader's per-share expected value for purchasing an infinitesimal quantity ε of shares of A is

E[ε shares]/ε = Pr(A) · E[P1|A] − (1 − Pr(A)) · p1,

where ε is an infinitesimal quantity of shares of A, Pr(A) is the trader's belief in the probability of A, and p1 is the instantaneous price per share of A for an infinitesimal quantity of shares. E[P1|A] is the trader's expectation of the payoff per share of A after the market closes and given that A occurs. This is a subtle point. The value of P1 does not matter if B occurs, since in this case shares of A are worthless, and the current value of P1 does not necessarily matter as this may change as trading continues. So, in order to determine the expected value of shares of A, the trader must estimate what he or she expects the payoff per share to be in the end (after the market closes) if A occurs.

If E[ε shares]/ε > 0, a risk-neutral trader should purchase shares of A. How many shares? This depends on the price function determining p1. In general, p1 increases as more shares are purchased. The risk-neutral trader should continue purchasing shares until E[ε shares]/ε = 0. (A risk-averse trader will generally stop purchasing shares before driving E[ε shares]/ε all the way to zero.) Assuming risk-neutrality, the trader's optimization problem is to choose a number of shares n > 0 of A to purchase, in order to maximize

Pr(A) · E[P1|A] · n − (1 − Pr(A)) · m(n),     (1)

where m(n) denotes the total cost of purchasing the n shares. It's easy to see that the same value of n can be solved for by finding the number of shares required to drive E[ε shares]/ε to zero. That is, find n > 0 satisfying 0 = Pr(A) · E[P1|A] − (1 − Pr(A)) · p1(n), if such an n exists, otherwise n = 0.
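As a concrete illustration of this stopping condition, the following minimal sketch computes the DPM I payoff per share and the marginal expected value of an infinitesimal purchase of A. The wagering state, the trader's belief, and the helper name are invented for illustration; they are not taken from the paper.

```python
# Minimal sketch of the DPM I quantities above; all numbers are illustrative.
M1, M2 = 800.0, 200.0        # money wagered on A and on B
N1, N2 = 1000.0, 400.0       # shares outstanding of A and of B

P1 = M2 / N1                 # payoff per share of A if A occurs (losing money / winning shares)
P2 = M1 / N2                 # payoff per share of B if B occurs

def marginal_ev_A(pr_A, p1, expected_P1=P1):
    """Per-share expected value of an infinitesimal DPM I purchase of A:
    if A occurs the price is refunded and expected_P1 is received per share,
    if B occurs the price p1 is lost."""
    return pr_A * expected_P1 - (1.0 - pr_A) * p1

pr_A = 0.7                                 # the trader's belief Pr(A)
print(marginal_ev_A(pr_A, p1=0.30))        # > 0: a risk-neutral trader keeps buying
print(marginal_ev_A(pr_A, p1=0.60))        # < 0: the trader stops before this price
# Buying stops where the marginal value hits zero, i.e. at the price
# p1 = Pr(A)/(1 - Pr(A)) * E[P1|A].
print(pr_A / (1.0 - pr_A) * P1)
```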
4.1 Market probability

As traders who believe that E[ε shares of A]/ε > 0 purchase shares of A and traders who believe that E[ε shares of B]/ε > 0 purchase shares of B, the prices p1 and p2 change according to a price function, as prescribed below. The current prices in a sense reflect the market's opinion as a whole of the relative probabilities of A and B. Assuming an efficient marketplace, the market as a whole considers E[ε shares]/ε = 0, since the mechanism is a zero-sum game. For example, if market participants in aggregate felt that E[ε shares]/ε > 0, then there would be net demand for A, driving up the price of A until E[ε shares]/ε = 0. Define MPr(A) to be the market probability of A, or the probability of A inferred by assuming that E[ε shares]/ε = 0. We can consider MPr(A) to be the aggregate probability of A as judged by the market as a whole. MPr(A) is the solution to

0 = MPr(A) · E[P1|A] − (1 − MPr(A)) · p1.     (2)

At this point we make a critical assumption in order to greatly simplify the analysis; we assume that

E[P1|A] = P1.     (3)

That is, we assume that the current value for the payoff per share of A is the same as the expected final value of the payoff per share of A given that A occurs. This is certainly true for the last (infinitesimal) wager before the market closes. It's not obvious, however, that the assumption is true well before the market's close. Basically, we are assuming that the value of P1 moves according to an unbiased random walk: the current value of P1 is the best expectation of its future value. I conjecture that there are reasonable market efficiency conditions under which assumption (3) is true, though I have not been able to prove that it arises naturally from rational trading. We examine scenarios below in which assumption (3) seems especially plausible. Nonetheless, the assumption affects our analysis only. Regardless of whether (3) is true, each price function derived below implies a well-defined zero-sum game in which traders can play. If traders can assume that (3) is true, then their optimization problem (1) is greatly simplified; however, optimizing (1) does not depend on the assumption, and traders can still optimize by strategically projecting the final expected payoff in whatever complicated way they desire. So, the utility of DPM for hedging and speculating does not necessarily hinge on the truth of assumption (3). On the other hand, the ability to easily infer an aggregate market consensus probability from market prices does depend on (3).

4.2 Price functions

A variety of price functions seem reasonable, each exhibiting various properties, and implying differing market probabilities.

4.2.1 Price function I: Price of A equals payoff of B

One natural price function to consider is to set the price per share of A equal to the payoff per share of B, and set the price per share of B equal to the payoff per share of A. That is,

p1 = P2 and p2 = P1.     (4)

Enforcing this relationship reduces the dimensionality of the system from four to two, simplifying the interface: traders need only track two numbers instead of four. The relationship makes sense, since new information supporting A should encourage purchasing of shares of A, driving up both the price of A and the payoff of B, and driving down the price of B and the payoff of A. In this setting, assumption (3) seems especially reasonable, since if an efficient market hypothesis leads prices to follow a random walk, then payoffs must also follow a random walk.
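For reference, solving condition (2) under assumption (3) gives the closed form for the market probability that the derivations below rely on. This is a reconstruction from the definitions above rather than a quoted equation.

```latex
% Market probability for DPM I under assumption (3), i.e. E[P1|A] = P1:
% a marginal purchase has zero expected value at the market's own belief.
\begin{align*}
0 = \mathrm{MPr}(A)\, P_1 - \bigl(1 - \mathrm{MPr}(A)\bigr)\, p_1
\quad\Longrightarrow\quad
\mathrm{MPr}(A) = \frac{p_1}{p_1 + P_1},
\qquad
\mathrm{MPr}(B) = \frac{p_2}{p_2 + P_2}.
\end{align*}
```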
The constraints (4) lead to the following derivation of the market probability:

MPr(A) = p1/(p1 + P1) = M1·N1/(M1·N1 + M2·N2).

The constraints (4) specify the instantaneous relationship between payoff and price. From this, we can derive how prices change when (non-infinitesimal) shares are purchased. Let n be the number of shares purchased and let m be the amount of money spent purchasing n shares. Note that p1 = dm/dn, the instantaneous price per share, and

dm/dn = (M1 + m)/N2,

which has the solution

m(n) = M1 · (e^(n/N2) − 1).     (6)

Equation 6 gives the cost of purchasing n shares. The instantaneous price per share as a function of n is

p1(n) = dm/dn = (M1/N2) · e^(n/N2).

Note that p1(0) = M1/N2 = P2 as required. The derivation of the price function p2(n) for B is analogous and the results are symmetric.

The notion of buying infinitesimal shares, or integrating costs over a continuous function, is probably foreign to most traders. A more standard interface can be implemented by discretizing the costs into round lots of shares, for example lots of 100 shares. Then ask orders of 100 shares each at the appropriate price can be automatically placed by the market institution. For example, the market institution can place an ask order for 100 shares at price m(100)/100, another ask order for 100 shares at price (m(200) − m(100))/100, a third ask for 100 shares at (m(300) − m(200))/100, etc. In this way, the market looks more familiar to traders, like a typical CDA with a number of ask orders at various prices automatically available. A trader buying less than 100 shares would pay a bit more than if the true cost were computed using (6), but the discretized interface would probably be more intuitive and transparent to the majority of traders.

The above equations assume that all money that comes in is eventually returned or redistributed. In other words, the mechanism is a zero sum game, and the market institution takes no portion of the money. This could be generalized so that the market institution always takes a certain amount, or a certain percent, or a certain amount per transaction, or a certain percent per transaction, before money is returned or redistributed.

Finally, note that the above price function is undefined when the amount bet or the number of shares are zero. So the system must begin with some positive amount on both sides, and some positive number of shares outstanding on both sides. These initial amounts can be arbitrarily small in principle, but the size of the initial subsidy may affect the incentives of traders to participate. Also, the smaller the initial amounts, the more each new dollar affects the prices. The initialization amounts could be funded as a subsidy from the market institution or a patron, which I'll call a seed wager, or from a portion of the fees charged, which I'll call an ante wager.

4.2.2 Price function II: Price of A proportional to money on A

A second price function can be derived by requiring the ratio of prices to be equal to the ratio of money wagered. That is,

p1/p2 = M1/M2.     (8)

In other words, the price of A is proportional to the amount of money wagered on A, and similarly for B. This seems like a particularly natural way to set the price, since the more money that is wagered on one side, the cheaper becomes a share on the other side, in exactly the same proportion. Using Equation 8, along with (2) and (3), we can derive the implied market probability:

MPr(A) = M1·√N1/(M1·√N1 + M2·√N2),

and the corresponding instantaneous price per share for an infinitesimal purchase, p1 = M1/√(N1·N2). Working from the above instantaneous price, we can derive the implied cost function m as a function of the number n of shares purchased as follows:

m(n) = M1 · (e^(2·(√(N1+n) − √N1)/√N2) − 1),

so that p1(n) = dm/dn = (M1 + m(n))/√((N1+n)·N2). Note that, as required, p1(0) = M1/√(N1·N2), and p1(0)/p2(0) = M1/M2.
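The reconstructed price function II expressions can be exercised numerically; the sketch below (with an invented wagering state) checks that p1(0) = M1/√(N1·N2), that dm/dn matches the instantaneous price, and quotes ask prices for successive round lots of 100 shares in the manner described in Section 4.2.1.

```python
# Numerical check of the price function II expressions above; the state
# (M1, M2, N1, N2) is illustrative only.
from math import exp, sqrt

M1, M2 = 800.0, 200.0
N1, N2 = 1000.0, 400.0

def m(n):
    """Cost of purchasing n shares of A under price function II."""
    return M1 * (exp(2.0 * (sqrt(N1 + n) - sqrt(N1)) / sqrt(N2)) - 1.0)

def p1(n):
    """Instantaneous price of A after n shares have been purchased."""
    return (M1 + m(n)) / sqrt((N1 + n) * N2)

assert abs(p1(0.0) - M1 / sqrt(N1 * N2)) < 1e-12   # p1(0) = M1 / sqrt(N1*N2)

# dm/dn should equal the instantaneous price (finite-difference check).
h = 1e-6
assert abs((m(100.0 + h) - m(100.0)) / h - p1(100.0)) < 1e-4

# Discretized interface: post ask orders for successive 100-share lots.
for k in range(1, 4):
    lot_price = (m(100.0 * k) - m(100.0 * (k - 1))) / 100.0
    print(f"ask lot {k}: {lot_price:.4f} per share")
```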
If one uses the above price function, then the market dynamics will be such that the ratio of the (instantaneous) prices of A and B always equals the ratio of the amounts wagered on A and B, which seems fairly natural. Note that, as before, the mechanism can be modified to collect transaction fees of some kind. Also note that seed or ante wagers are required to initialize the system.

5. DPM II: ALL MONEY REDISTRIBUTED

Above we examined the policy of refunding winning wagers and redistributing only losing wagers. In this section we consider the second policy mentioned in Section 3.3: all money from all wagers is collected and redistributed to winning wagers. For the case where all money is redistributed, the respective payoffs per share are:

P1 = T/N1 and P2 = T/N2,

where T = M1 + M2 is the total amount of money wagered on both sides. So, if A occurs, shareholders of A lose their initial price paid, but receive P1 dollars per share owned; shareholders of B simply lose all money wagered. Similarly, if B occurs, shareholders of B lose their initial price paid, but receive P2 dollars per share owned; shareholders of A lose all money wagered. In this case, the trader's per-share expected value for purchasing an infinitesimal quantity ε of shares of A is

E[ε shares]/ε = Pr(A) · E[P1|A] − p1.

The same value of n can be solved for by finding the number of shares required to drive E[ε shares]/ε to zero. That is, find n ≥ 0 satisfying 0 = Pr(A) · E[P1|A] − p1(n), if such an n exists, otherwise n = 0.

5.1 Market probability

In this case MPr(A), the aggregate probability of A as judged by the market as a whole, is the solution to

0 = MPr(A) · E[P1|A] − p1.

As before, we make the simplifying assumption (3) that the expected final payoff per share equals the current payoff per share. The assumption is critical for our analysis, but may not be required for a practical implementation.

5.2 Price functions

For the case where all money is distributed, the constraints (4) that keep the price of A equal to the payoff of B, and vice versa, do not lead to the derivation of a coherent price function. A reasonable price function can be derived from the constraint (8) employed in Section 4.2.2, where we require the ratio of prices to be equal to the ratio of money wagered. That is, p1/p2 = M1/M2. In other words, the price of A is proportional to the amount of money wagered on A, and similarly for B. Using Equations 3, 8, and 15 we can derive the implied market probability:

MPr(A) = M1·N1/(M1·N1 + M2·N2).

Interestingly, this is the same market probability derived in Section 4.2.1 for the case of losing-money redistribution with the constraints that the price of A equal the payoff of B and vice versa. The instantaneous price per share for an infinitesimal quantity of shares is:

p1 = M1·T/(M1·N1 + M2·N2).

Working from the above instantaneous price, we can derive the number of shares n that can be purchased for m dollars. Note that we solved for n(m) rather than m(n). I could not find a closed-form solution for m(n), as was derived for the two other cases above. Still, n(m) can be used to determine how many shares can be purchased for m dollars, and the inverse function can be approximated to any degree numerically. From n(m) we can also compute the price function. Note that, as required, p1(0)/p2(0) = M1/M2. If one uses the above price function, then the market dynamics will be such that the ratio of the (instantaneous) prices of A and B always equals the ratio of the amounts wagered on A and B.
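Since no closed form for m(n) is available here, the quantity of shares that m dollars buys can be approximated numerically. The sketch below integrates dn/dm = 1/p1 using the instantaneous-price form p1 = M1·T/(M1·N1 + M2·N2) given above, updating the wagering state as money arrives; the state values, step count, and integration scheme are illustrative assumptions, not the paper's derivation.

```python
# Numerical sketch for DPM II: integrate share purchases of A as money flows in.
def dpm2_shares_for_money(m_total, M1, M2, N1, N2, steps=100_000):
    """Approximate n(m): shares of A bought for m_total dollars, assuming the
    instantaneous price p1 = M1*(M1+M2) / (M1*N1 + M2*N2) quoted above."""
    dm = m_total / steps
    n = 0.0
    for _ in range(steps):
        p1 = M1 * (M1 + M2) / (M1 * N1 + M2 * N2)
        dn = dm / p1          # shares bought with the next sliver of money
        n += dn
        M1 += dm              # money wagered on A grows ...
        N1 += dn              # ... and so does the number of shares of A
    return n

print(dpm2_shares_for_money(100.0, M1=800.0, M2=200.0, N1=1000.0, N2=400.0))
```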
This price function has another desirable property: it acts such that the expected value of wagering $1 on A and simultaneously wagering $1 on B equals zero, assuming (3). That is, E[$1 of A + $1 of B] = 0. The derivation is omitted.

5.3 Comparing DPM I and II

The main advantage of refunding winning wagers (DPM I) is that every bet on the winning outcome is guaranteed to at least break even. The main disadvantage of refunding winning wagers is that shares are not homogeneous: each share of A, for example, is actually composed of two distinct parts: (1) the refund, or a lottery ticket that pays $p if A occurs, where p is the price paid per share, and (2) one share of the final payoff ($P1) if A occurs. This complicates the implementation of an aftermarket to cash out of the market early, which we will examine below in Section 7. When all money is redistributed (DPM II), shares are homogeneous: each share entitles its owner to an equal slice of the final payoff. Because shares are homogeneous, the implementation of an aftermarket is straightforward, as we shall see in Section 7. On the other hand, because initial prices paid are not refunded for winning bets, there is a chance that, if prices swing wildly enough, a wager on the correct outcome might actually lose money. Traders must be aware that if they buy in at an excessively high price that later tumbles, allowing many others to get in at a much lower price, they may lose money in the end regardless of the outcome. From informal experiments, I don't believe this eventuality would be common, but nonetheless it requires care in communicating to traders the possible risks. One potential fix would be for the market maker to keep track of when the price is going too low, endangering an investor on the correct outcome. At this point, the market maker could artificially stop lowering the price. Sell orders in the aftermarket might still come in below the market maker's price, but in this way the system could ensure that every wager on the correct outcome at least breaks even.

6. OTHER VARIATIONS

A simple ascending price function would set p1 = αM1 and p2 = αM2, where α > 0. In this case, prices would only go up. For the case of all money being redistributed, this would eliminate the possibility of losing money on a wager on the correct outcome. Even though the market maker's price only rises, the going price may fall well below the market maker's price, as ask orders are placed in the aftermarket. I have derived price functions for several other cases, using the same methodology above. Each price function may have its own desirable properties, but it's not clear which is best, or even that a single best method exists. Further analyses and, more importantly, empirical investigations are required to answer these questions.

7. AFTERMARKETS

A key advantage of DPM over a standard pari-mutuel market is the ability to cash out of the market before it closes, in order to take a profit or limit a loss. This is accomplished by allowing traders to place ask orders on the same queue as the market maker. So traders can sell the shares that they purchased at or below the price set by the market maker. Or traders can place a limit sell order at any price. Buyers will purchase any existing shares for sale at the lower prices first, before purchasing new shares from the market maker.

7.1 Aftermarket for DPM II

For the second main case explored above, where all money is redistributed, allowing an aftermarket is simple.
In fact, "aftermarket" may be a poor descriptor: buying and selling are both fully integrated into the same mechanism. Every share is worth precisely the same amount, so traders can simply place ask orders on the same queue as the market maker in order to sell their shares. New buyers will accept the lowest ask price, whether it comes from the market maker or another trader. In this way, traders can cash out early and walk away with their current profit or loss, assuming they can find a willing buyer.

7.2 Aftermarket for DPM I

When winning wagers are refunded and only losing wagers are redistributed, each share is potentially worth a different amount, depending on how much was paid for it, so it is not as simple a matter to set up an aftermarket. However, an aftermarket is still possible. In fact, much of the complexity can be hidden from traders, so it looks nearly as simple as placing a sell order on the queue. In this case shares are not homogeneous: each share of A is actually composed of two distinct parts: (1) the refund of P · 1A dollars, and (2) the payoff of P1 · 1A dollars, where P is the per-share price paid and 1A is the indicator function equaling 1 if A occurs, and 0 otherwise. One can imagine running two separate aftermarkets where people can sell these two respective components. However, it is possible to automate the two aftermarkets, by automatically bundling them together in the correct ratio and selling them in the central DPM. In this way, traders can cash out by placing sell orders on the same queue as the DPM market maker, effectively hiding the complexity of explicitly having two separate aftermarkets.

The bundling mechanism works as follows. Suppose the current price for one share of A is p1. A buyer agrees to purchase the share at p1. The buyer pays p1 dollars and receives p1 · 1A + P1 · 1A dollars. If there is enough inventory in the aftermarkets, the buyer's share is constructed by bundling together p1 · 1A from the first aftermarket, and P1 · 1A from the second aftermarket. The seller in the first aftermarket receives p1 · MPr(A) dollars, and the seller in the second aftermarket receives p1 · MPr(B) dollars.

7.3 Pseudo aftermarket for DPM I

There is an alternative "pseudo aftermarket" that's possible for the case of DPM I that does not require bundling. Consider a share of A purchased for $5. The share is composed of $5 · 1A and $P1 · 1A. Now suppose the current price has moved from $5 to $10 per share and the trader wants to cash out at a profit. The trader can sell 1/2 share at market price (1/2 share for $5), receiving all of the initial $5 investment back, and retaining 1/2 share of A. The 1/2 share is worth either some positive amount, or nothing, depending on the outcome and the final payoff. So the trader is left with shares worth a positive expected value and all of his or her initial investment. The trader has essentially cashed out and locked in his or her gains.

Now suppose instead that the price moves downward, from $5 to $2 per share. The trader decides to limit his or her loss by selling the share for $2. The buyer gets the 1 share plus $2 · 1A (the buyer's price refunded). The trader (seller) gets the $2 plus what remains of the original price refunded, or $3 · 1A. The trader's loss is now limited to $3 at most instead of $5. If A occurs, the trader breaks even; if B occurs, the trader loses $3. Also note that--in either DPM formulation--traders can always "hedge sell" by buying the opposite outcome without the need for any type of aftermarket.
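The arithmetic of the pseudo-aftermarket example above can be checked directly; the short sketch below simply replays the $5 purchase under the two price moves described in the text (the variable names are invented for illustration).

```python
# Replaying the worked pseudo-aftermarket example (DPM I) from the text.
price_paid = 5.0                       # bought 1 share of A at $5

# Price rises to $10: sell half a share to recover the full initial outlay.
price_up = 10.0
fraction_sold = price_paid / price_up  # 1/2 share
cash_recovered = fraction_sold * price_up
share_retained = 1.0 - fraction_sold
print(cash_recovered, share_retained)  # 5.0 recovered, 0.5 share still held

# Price falls to $2: sell the whole share; $2 of the refund moves to the buyer,
# $3 of it stays with the seller, so the loss is capped at $3.
price_down = 2.0
seller_refund_if_A = price_paid - price_down              # $3 · 1A kept by the seller
loss_if_B = price_paid - price_down                       # $3: the worst case
loss_if_A = price_paid - price_down - seller_refund_if_A  # 0: breaks even
print(loss_if_A, loss_if_B)
```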
8. CONCLUSIONS

I have presented a new market mechanism for wagering on, or hedging against, a future uncertain event, called a dynamic pari-mutuel market (DPM). The mechanism combines the infinite liquidity and risk-free nature of a pari-mutuel market with the dynamic nature of a CDA, making it suitable for continuous information aggregation. To my knowledge, all existing mechanisms--including the standard pari-mutuel market, the CDA, the CDAwMM, the bookie mechanism, and the MSR--exhibit at most two of the three properties. An MSR is the closest to a DPM in terms of these properties, if not in terms of mechanics. Given some natural constraints on price dynamics, I have derived in closed form the implied price functions, which encode how prices change continuously as shares are purchased. The interface for traders looks much like the familiar CDA, with the system acting as an automated market maker willing to accept an infinite number of buy orders at some price. I have explored two main variations of a DPM: one where only losing money is redistributed, and one where all money is redistributed. Each has its own pros and cons, and each supports several reasonable price functions. I have described the workings of an aftermarket, so that traders can cash out of the market early, like in a CDA, to lock in their gains or limit their losses, an operation that is not possible in a standard pari-mutuel setting.

9. FUTURE WORK

This paper reports the results of an initial investigation of the concept of a dynamic pari-mutuel market. Many avenues for future work present themselves, including the following:

• Random walk conjecture. The most important question mark in my mind is whether the random walk assumption (3) can be proven under reasonable market efficiency conditions and, if not, how severely it affects the practicality of the system.

• Incentive analysis. Formally, what are the incentives for traders to act on new information and when? How does the level of initial subsidy affect trader incentives?

• Laboratory experiments and field tests. This paper concentrated on the mathematics and algorithmics of the mechanism. However, the true test of the mechanism's ability to serve as an instrument for hedging, wagering, or information aggregation is to test it with real traders in a realistic environment. In reality, how do people behave when faced with a DPM mechanism?

• DPM call market. I have derived the price functions to react to wagers on one outcome at a time. The mechanism could be generalized to accept orders on both sides, then update the prices holistically, rather than by assuming a particular sequence on the wagers.

• Real-valued variables. I believe the mechanisms in this paper can easily be generalized to multiple discrete outcomes, and multiple real-valued outcomes that always sum to some constant value (e.g., multiple percentage values that must sum to 100). However, the generalization to real-valued variables with arbitrary range is less clear, and open for future development.

• Compound/combinatorial betting. I believe that DPM may be well suited for compound [8, 11] or combinatorial [2] betting, for many of the same reasons that market scoring rules [11] are well suited for the task. DPM may also have some computational advantages over MSR, though this remains to be seen.
A Dynamic Pari-Mutuel Market for Hedging, Wagering, and Information Aggregation ABSTRACT I develop a new mechanism for risk allocation and information speculation called a dynamic pari-mutuel market (DPM). A DPM acts as hybrid between a pari-mutuel market and a continuous double auction (CDA), inheriting some of the advantages of both. Like a pari-mutuel market, a DPM offers infinite buy-in liquidity and zero risk for the market institution; like a CDA, a DPM can continuously react to new information, dynamically incorporate information into prices, and allow traders to lock in gains or limit losses by selling prior to event resolution. The trader interface can be designed to mimic the familiar double auction format with bid-ask queues, though with an addition variable called the payoff per share. The DPM price function can be viewed as an automated market maker always offering to sell at some price, and moving the price appropriately according to demand. Since the mechanism is pari-mutuel (i.e., redistributive), it is guaranteed to pay out exactly the amount of money taken in. I explore a number of variations on the basic DPM, analyzing the properties of each, and solving in closed form for their respective price functions. 1. INTRODUCTION A wide variety of financial and wagering mechanisms have been developed to support hedging (i.e., insuring) against exposure to uncertain events and/or speculative trading on uncertain events. The dominant mechanism used in financial circles is the continuous double auction (CDA), or in some cases the CDA with market maker (CDAwMM). The primary mechanism used for sports wagering is a bookie or bookmaker, who essentially acts exactly as a market maker. Horse racing and jai alai wagering traditionally employ the pari-mutuel mechanism. Though there is no formal or logical separation between financial trading and wagering, the two endeavors are socially considered distinct. These trends highlight the interchangeable nature of the mechanisms and further blur the line between investing and betting. Some companies at the forefront of these movements are growing exponentially, with some industry observers declaring the onset of a revolution in the wagering business .1 Each mechanism has pros and cons for the market institution and the participating traders. A CDA only matches willing traders, and so poses no risk whatsoever for the market institution. But a CDA can suffer from illiquidity in the form huge bid-ask spreads or even empty bid-ask queues if trading is light and thus markets are thin. A successful CDA must overcome a chicken-and-egg problem: traders are attracted to liquid markets, but liquid markets require a large number of traders. A CDAwMM and the similar bookie mechanism have built-in liquidity, but at a cost: the market maker itself, usually affiliated with the market institution, is exposed to significant risk of large monetary losses. Both the CDA and CDAwMM offer incentives for traders to leverage information continuously as soon as that information becomes available. As a result, prices are known to capture the current state of information exceptionally well. Pari-mutuel markets effectively have infinite liquidity: anyone can place a bet on any outcome at any time, without the need for a matching offer from another bettor or a market maker. Pari-mutuel markets also involve no risk for the market institution, since they only redistribute money from losing wagers to winning wagers. 
However, pari-mutuel mar kets are not suitable for situations where information arrives over time, since there is a strong disincentive for placing bets until either (1) all information is revealed, or (2) the market is about to close. For this reason, pari-mutuel "prices" prior to the market's close cannot be considered a reflection of current information. Pari-mutuel market participants cannot "buy low and sell high": they cannot cash out gains (or limit losses) before the event outcome is revealed. Because the process whereby information arrives continuously over time is the rule rather than the exception, the applicability of the standard pari-mutuel mechanism is questionable in a large number of settings. In this paper, I develop a new mechanism suitable for hedging, speculating, and wagering, called a dynamic parimutuel market (DPM). A DPM can be thought of as a hybrid between a pari-mutuel market and a CDA. A DPM is indeed pari-mutuel in nature, meaning that it acts only to redistribute money from some traders to others, and so exposes the market institution to no volatility (no risk). A constant, pre-determined subsidy is required to start the market. The subsidy can in principle be arbitrarily small and might conceivably come from traders (via antes or transaction fees) rather than the market institution, though a nontrivial outside subsidy may actually encourage trading and information aggregation. A DPM has the infinite liquidity of a pari-mutuel market: traders can always purchase shares in any outcome at any time, at some price automatically set by the market institution. A DPM is also able to react to and incorporate information arriving over time, like a CDA. The market institution changes the price for particular outcomes based on the current state of wagering. If a particular outcome receives a relatively large number of wagers, its price increases; if an outcome receives relatively few wagers, its price decreases. Prices are computed automatically using a price function, which can differ depending on what properties are desired. The complexity of the price function can be hidden from traders by communicating only the ask prices for various lots of shares (e.g., lots of 100 shares), as is common practice in CDAs and CDAwMMs. DPM prices do reflect current information, and traders can cash out in an aftermarket to lock in gains or limit losses before the event outcome is revealed. While there is always a market maker willing to accept buy orders, there is not a market maker accepting sell orders, and thus no guaranteed liquidity for selling: instead, selling is accomplished via a standard CDA mechanism. Traders can always "hedge-sell" by purchasing the opposite outcome than they already own. 2. BACKGROUND AND RELATED WORK 2.1 Pari-mutuel markets Pari-mutuel markets are common at horse races [1, 22, 24, 25, 26], dog races, and jai alai games. In a pari-mutuel market people place wagers on which of two or more mutually exclusive and exhaustive outcomes will occur at some time in the future. After the true outcome becomes known, all of the money that is lost by those who bet on the incorrect outcome is redistributed to those who bet on the correct outcome, in direct proportion to the amount they wagered. That is, every dollar wagered on i receives an equal share of all money wagered. 
An equivalent way to think about the redistribution rule is that every dollar wagered on i is refunded, then receives an equal share of all remaining money bet on the losing outcomes, or Pj ~ = i Mj/Mi dollars. In practice, the market institution (e.g., the racetrack) first takes a certain percent of the total amount wagered, usually about 20% in the United States, then redistributes whatever money remains to the winners in proportion to their amount bet. Consider a simple example with two outcomes, A and B. Now suppose that A occurs (e.g., horse A wins the race). People who wagered on B lose their money, or $200 in total. People who wagered on A win and each receives a proportional share of the total $1000 wagered (ignoring fees). Every dollar bet in a pari-mutuel market has an equal payoff, regardless of when the wager was placed or how much money was invested in the various outcomes at the time the wager was placed. The only state that matters is the final state: the final amounts wagered on all the outcomes when the market closes, and the identity of the correct outcome. As a result, there is a disincentive to place a wager early if there is any chance that new information might become available. Moreover, there are no guarantees about the payoff rate of a particular bet, except that it will be nonnegative if the correct outcome is chosen. Payoff rates can fluctuate arbitrarily until the market closes. This is in contrast to CDAs and CDAwMMs, like the stock market, where incentives exist to invest as soon as new information is revealed. Pari-mutuel bettors may be allowed to switch their chosen outcome, or even cancel their bet, prior to the market's close. However, they cannot cash out of the market early, to either lock in gains or limit losses, if new information favors one outcome over another, as is possible in a CDA or a CDAwMM. If bettors can cancel or change their bets, then an aftermarket to sell existing wagers is not sensible: every dollar wagered is worth exactly $1 up until the market's close--no one would buy at greater than $1 and no one would sell at less than $1. Pari-mutuel bettors must wait until the outcome is revealed to realize any profit or loss. Unlike a CDA, in a pari-mutuel market, anyone can place a wager of any amount at any time--there is in a sense infinite liquidity for buying. A CDAwMM also has built-in liquidity, but at the cost of significant risk for the market maker. In a pari-mutuel market, since money is only redistributed among bettors, the market institution itself has no risk. The main drawback of a pari-mutuel market is that it is useful only for capturing the value of an uncertain asset at some instant in time. It is ill-suited for situations where information arrives over time, continuously updating the estimated value of the asset--situations common in al most all trading and wagering scenarios. Perhaps for this reason, in most dynamic environments, financial mechanisms like the CDA that can react in real-time to changing information are more typically employed to facilitate speculating and hedging. Since a pari-mutuel market can estimate the value of an asset at a single instant in time, a repeated pari-mutuel market, where distinct pari-mutuel markets are run at consecutive intervals, could in principle capture changing information dynamics. But running multiple consecutive markets would likely thin out trading in each individual market. 
Also, in each individual pari-mutuel market, the incentives would still be to wait to bet until just before the ending time of that particular market. This last problem might be mitigated by instituting a random stopping rule for each individual pari-mutuel market. In laboratory experiments, pari-mutuel markets have shown a remarkable ability to aggregate and disseminate information dispersed among traders, at least for a single snapshot in time [17]. 2.2 Financial markets In the financial world, wagering on the outcomes of uncertain future propositions is also common. The typical market mechanism used is the continuous double auction (CDA). The term securities market in economics and finance generically encompasses a number of markets where speculating on uncertain events is possible. Securities markets generally have an economic and social value beyond facilitating speculation or wagering: they allow traders to hedge risk, or to insure against undesirable outcomes. So if a particular outcome has disutility for a trader, he or she can mitigate the risk by wagering for the outcome, to arrange for compensation in case the outcome occurs. In this sense, buying automobile insurance is effectively a bet that an accident or other covered event will occur. Like pari-mutuel markets, often prices in financial markets are excellent information aggregators, yielding very accurate forecasts of future events [5, 18, 19]. A CDA constantly matches orders to buy an asset with orders to sell. If the highest bid price is less than the lowest ask price, then no transactions occur. In a CDA, the bid and ask prices rapidly change as new information arrives and traders reassess the value of the asset. Since the auctioneer only matches willing bidders, the auctioneer takes on no risk. As a result, when few traders participate in a CDA, it may become illiquid, meaning that not much trading activity occurs. The spread between the highest bid price and the lowest ask price may be very large, or one or both queues may be completely empty, discouraging trading .2 One way to induce liquidity is to provide a market maker who is willing to accept a large number of buy and sell orders at particular prices. We call this mechanism a CDA with market maker (CDAwMM).3 Conceptually, the market maker is just like any other trader, but typically is willing to accept a much larger volume of trades. The market maker may be a person, or may be an automated algorithm. Adding a market maker to the system increases liquidity, but exposes the market maker to risk. Now, instead of only matching trades, the system actually takes on risk of its own, and depending on what happens in the future, may lose considerable amounts of money. 2.3 Wagering markets The typical Las Vegas bookmaker or oddsmaker functions much like a market maker in a CDA. In this case, the market institution (the book or house) sets the odds,4 initially according to expert opinion, and later in response to the relative level of betting on the various outcomes. Unlike in a pari-mutuel environment, whenever a wager is placed with a bookmaker, the odds or terms for that bet are fixed at the time of the bet. One difference between a bookmaker and a market maker is that the former usually operates in a "take it or leave it mode": bettors cannot place their own limit orders on a common queue, they can in effect only place market orders at prices defined by the bookmaker. Still, the bookmaker certainly reacts to bettor demand. 
Like a market maker, the bookmaker exposes itself to significant risk. Sports betting markets have also been shown to provide high quality aggregate forecasts [4, 9, 23].
Footnote 2: Thin markets do occur often in practice, and can be seen in a variety of the less popular markets available on http://TradeSports.com, or in some financial options markets, for example.
Footnote 3: A very clear example of a CDAwMM is the "interactive" betting market on http://WSEX.com.
Footnote 4: Or, alternatively, the bookmaker sets the game line in order to provide even-money odds.
2.4 Market scoring rule Hanson's [11] market scoring rule (MSR) is a new mechanism for hedging and speculating that shares some properties in common with a DPM. Like a DPM, an MSR can be conceptualized as an automated market maker always willing to accept a trade on any event at some price. An MSR requires a patron to subsidize the market. The patron's final loss is variable, and thus technically implies a degree of risk, though the maximum loss is bounded. An MSR maintains a probability distribution over all events. At any time any trader who believes the probabilities are wrong can update the distribution by trading with the automated market maker, agreeing to pay off the previous trade according to the scoring rule. In the limit of a single trader, the mechanism behaves like a scoring rule, suitable for polling a single agent for its probability distribution. In the limit of many traders, it produces a combined estimate. Since the market essentially always has a complete set of posted prices for all possible outcomes, the mechanism avoids the problem of thin markets or illiquidity. An MSR is not pari-mutuel in nature, as the patron in general injects a variable amount of money into the system. An MSR provides a two-sided automated market maker, while a DPM provides a one-sided automated market maker. In an MSR, the vector of payoffs across outcomes is fixed at the time of the trade, while in a DPM, the vector of payoffs across outcomes depends both on the state of wagering at the time of the trade and the state of wagering at the market's close. While the mechanisms are quite different--and so trader acceptance and incentives may strongly differ--the properties and motivations of DPMs and MSRs are quite similar. Hanson shows how MSRs are especially well suited for allowing bets on a combinatorial number of outcomes. 9. FUTURE WORK This paper reports the results of an initial investigation of the concept of a dynamic pari-mutuel market. Many avenues for future work present themselves, including the following: • Random walk conjecture. • Incentive analysis. Formally, what are the incentives for traders to act on new information and when? How does the level of initial subsidy affect trader incentives? • Laboratory experiments and field tests. This paper concentrated on the mathematics and algorithmics of the mechanism. However, the true test of the mechanism's ability to serve as an instrument for hedging, wagering, or information aggregation is to test it with real traders in a realistic environment. In reality, how do people behave when faced with a DPM mechanism? • DPM call market. I have derived the price functions to react to wagers on one outcome at a time. The mechanism could be generalized to accept orders on both sides, then update the prices holistically, rather than by assuming a particular sequence on the wagers. • Real-valued variables. I believe the mechanisms in this paper can easily be generalized to multiple discrete outcomes, and multiple real-valued outcomes that always sum to some constant value (e.g., multiple percentage values that must sum to 100).
However, the generalization to real-valued variables with arbitrary range is less clear, and open for future development. • Compound/combinatorial betting. I believe that DPM may be well suited for compound [8, 11] or combinatorial [2] betting, for many of the same reasons that market scoring rules [11] are well suited for the task.
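For readers comparing the DPM with the market scoring rule of Section 2.4, the logarithmic market scoring rule (LMSR) is the most commonly used concrete instantiation of Hanson's idea. The sketch below implements the standard LMSR cost function and prices; it is background material drawn from the MSR literature, not a mechanism defined in this paper.

```java
// Logarithmic market scoring rule (LMSR) sketch -- a standard MSR instantiation,
// shown only for comparison with the DPM; it is not part of the DPM mechanism.
public class LmsrSketch {
    private final double b;    // liquidity parameter; larger b means a deeper market
    private final double[] q;  // outstanding shares per outcome

    LmsrSketch(int outcomes, double b) {
        this.b = b;
        this.q = new double[outcomes];
    }

    // Cost function C(q) = b * ln( sum_i exp(q_i / b) ).
    private double cost(double[] shares) {
        double sum = 0.0;
        for (double qi : shares) sum += Math.exp(qi / b);
        return b * Math.log(sum);
    }

    // Instantaneous price of outcome i, which doubles as the current probability estimate.
    double price(int i) {
        double sum = 0.0;
        for (double qi : q) sum += Math.exp(qi / b);
        return Math.exp(q[i] / b) / sum;
    }

    // A trader buying `amount` shares of outcome i pays the difference in cost.
    double buy(int i, double amount) {
        double before = cost(q);
        q[i] += amount;
        return cost(q) - before;
    }

    public static void main(String[] args) {
        LmsrSketch mm = new LmsrSketch(2, 100.0);
        System.out.printf("initial price of outcome 0: %.3f%n", mm.price(0));
        System.out.printf("cost of 50 shares of 0:     %.2f%n", mm.buy(0, 50));
        System.out.printf("price of outcome 0 now:     %.3f%n", mm.price(0));
    }
}
```

The patron's worst-case loss in this instantiation is bounded by b·ln(n) for n outcomes, which is the bounded subsidy referred to in Section 2.4.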
C-83
Concept and Architecture of a Pervasive Document Editing and Managing System
Collaborative document processing has been addressed by many approaches so far, most of which focus on document versioning and collaborative editing. We address this issue from a different angle and describe the concept and architecture of a pervasive document editing and managing system. It exploits database techniques and real-time updating for sophisticated collaboration scenarios on multiple devices. Each user is always served with up-to-date documents and can organize his work based on document meta data. For this, we present our conceptual architecture for such a system and discuss it with an example.
[ "pervas document edit and manag system", "pervas document edit and manag system", "collabor document process", "collabor document", "text edit", "real-time transact", "comput support collabor work", "busi logic layer", "real-time server compon", "collabor layout", "hierarch file system", "restrict", "granular", "charact insert", "cscw" ]
[ "P", "P", "P", "P", "M", "U", "M", "U", "U", "M", "M", "U", "U", "U", "U" ]
Concept and Architecture of a Pervasive Document Editing and Managing System Stefania Leone Thomas B. Hodel Harald Gall University of Zurich, Switzerland University of Zurich, Switzerland University of Zurich, Switzerland Department of Informatics Department of Informatics Department of Informatics leone@ifi.unizh.ch hodel@ifi.unizh.ch gall@ifi.unizh.ch ABSTRACT Collaborative document processing has been addressed by many approaches so far, most of which focus on document versioning and collaborative editing. We address this issue from a different angle and describe the concept and architecture of a pervasive document editing and managing system. It exploits database techniques and real-time updating for sophisticated collaboration scenarios on multiple devices. Each user is always served with upto-date documents and can organize his work based on document meta data. For this, we present our conceptual architecture for such a system and discuss it with an example. Categories and Subject Descriptors C.2.4 Distributed Systems [Computer-Communication Networks]: Computer System Organization, Distributed Systems, Distributed Applications General Terms Management, Measurement, Documentation, Economics, Human Factors 1. INTRODUCTION Text documents are a valuable resource for virtually any enterprise and organization. Documents like papers, reports and general business documentations contain a large part of today``s (business) knowledge. Documents are mostly stored in a hierarchical folder structure on file servers and it is difficult to organize them in regard to classification, versioning etc., although it is of utmost importance that users can find, retrieve and edit up-to-date versions of documents whenever they want and, in a user-friendly way. 1.1 Problem Description With most of the commonly used word-processing applications documents can be manipulated by only one user at a time: tools for pervasive collaborative document editing and management, are rarely deployed in today``s world. Despite the fact, that people strive for location- and time- independence, the importance of pervasive collaborative work, i.e. collaborative document editing and management is totally neglected. Documents could therefore be seen as a vulnerable source in today``s world, which demands for an appropriate solution: The need to store, retrieve and edit these documents collaboratively anytime, everywhere and with almost every suitable device and with guaranteed mechanisms for security, consistency, availability and access control, is obvious. In addition, word processing systems ignore the fact that the history of a text document contains crucial information for its management. Such meta data includes creation date, creator, authors, version, location-based information such as time and place when/where a user reads/edits a document and so on. Such meta data can be gathered during the documents creation process and can be used versatilely. Especially in the field of pervasive document management, meta data is of crucial importance since it offers totally new ways of organizing and classifying documents: On the one hand, the user``s actual situation influences the user``s objectives. Meta data could be used to give the user the best possible view on the documents, dependent of his actual information. On the other hand, as soon as the user starts to work, i.e. 
reads or edits a document, new meta data can be gathered in order to make the system more adaptable and in a sense to the users situation and, to offer future users a better view on the documents. As far as we know, no system exists, that satisfies the aforementioned requirements. A very good overview about realtime communication and collaboration system is described in [7]. We therefore strive for a pervasive document editing and management system, which enables pervasive (and collaborative) document editing and management: users should be able to read and edit documents whenever, wherever, with whomever and with whatever device. In this paper, we present collaborative database-based real-time word processing, which provides pervasive document editing and management functionality. It enables the user to work on documents collaboratively and offers sophisticated document management facility: the user is always served with up-to-date documents and can organize and manage documents on the base of meta data. Additionally document data is treated as `first class citizen'' of the database as demanded in [1]. 1.2 Underlying Concepts The concept of our pervasive document editing and management system requires an appropriate architectural foundation. Our concept and implementation are based on the TeNDaX [3] collaborative database-based document editing and management system, which enables pervasive document editing and managing. TeNDaX is a Text Native Database eXtension. It enables the storage of text in databases in a native form so that editing text is finally represented as real-time transactions. Under the term `text editing'' we understand the following: writing and deleting text (characters), copying & pasting text, defining text layout & structure, inserting notes, setting access rights, defining business processes, inserting tables, pictures, and so on i.e. all the actions regularly carried out by word processing users. With `real-time transaction'' we mean that editing text (e.g. writing a character/word) invokes one or several database transactions so that everything, which is typed appears within the editor as soon as these objects are stored persistently. Instead of creating files and storing them in a file system, the content and all of the meta data belonging to the documents is stored in a special way in the database, which enables very fast real-time transactions for all editing tasks [2]. The database schema and the above-mentioned transactions are created in such a way that everything can be done within a multiuser environment, as is usual done by database technology. As a consequence, many of the achievements (with respect to data organization and querying, recovery, integrity and security enforcement, multi-user operation, distribution management, uniform tool access, etc.) are now, by means of this approach, also available for word processing. 2. APPROACH Our pervasive editing and management system is based on the above-mentioned database-based TeNDaX approach, where document data is stored natively in the database and supports pervasive collaborative text editing and document management. We define the pervasive document editing and management system, as a system, where documents can easily be accessed and manipulated everywhere (within the network), anytime (independently of the number of users working on the same document) and with any device (desktop, notebook, PDA, mobile phone etc.). 
Figure 1. TeNDaX Application Architecture
In contrast to documents stored locally on the hard drive or on a file server, our system automatically serves the user with the up-to-date version of a document, and changes done on the document are stored persistently in the database and immediately propagated to all clients who are working on the same document. Additionally, meta data gathered during the whole document creation process enables sophisticated document management. With the TeXt SQL API as abstract interface, this approach can be used by any tool and for any device. The system is built on the following components (see Figure 1): An editor in Java implements the presentation layer (A-G in Figure 1). The aim of this layer is the integration in a well-known word-processing application such as OpenOffice. The business logic layer represents the interface between the database and the word-processing application. It consists of the following three components: The application server (marked as AS 1-4 in Figure 1) enables text editing within the database environment and takes care of awareness, security, document management etc., all within a collaborative, real-time and multi-user environment. The real-time server component (marked as RTSC 1-4 in Figure 1) is responsible for the propagation of information, i.e. updates between all of the connected editors. The storage engine (data layer) primarily stores the content of documents as well as all related meta data within the database. Databases can be distributed in a peer-to-peer network (DB 1-4 in Figure 1). In the following, we will briefly present the database schema, the editor and the real-time server component, as well as the concept of dynamic folders, which enables sophisticated document management on the basis of meta data. 2.1 Application Architecture A database-based real-time collaborative editor allows the same document to be opened and edited simultaneously on the same computer or over a network of several computers and mobile devices. All concurrency issues, as well as message propagation, are solved within this approach, while multiple instances of the same document are being opened [3]. Each insert or delete action is a database transaction and as such is immediately stored persistently in the database and propagated to all clients working on the same document. 2.1.1 Database Schema As mentioned earlier, text is stored in a native way: each character of a text document is stored as a single object in the database [3]. When storing text in such a native form, the performance of the employed database system is of crucial importance. The concept and performance issues of such a text database are described in [3], collaborative layouting in [2], dynamic collaborative business processes within documents in [5], the text editing creation time meta data model in [6], and the relation to XML databases in [7]. Figure 2 depicts the core database schema. By connecting a client to the database, a Session instance is created. One important attribute of the Session is the DocumentSession. This attribute refers to DocumentSession instances, which administrate all opened documents. For each opened document, a DocumentSession instance is created.
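The following sketch illustrates the Session/DocumentSession bookkeeping just described as a plain in-memory analogue; in TeNDaX these are database objects created through the TeXt SQL API, and the class and method names here are simplifications, not the actual schema.

```java
import java.util.*;

// In-memory sketch of the Session / DocumentSession bookkeeping described above.
// Illustrative only: in TeNDaX these are database objects, not plain Java objects.
public class SessionSketch {

    static class DocumentSession {
        final UUID id = UUID.randomUUID();
        final String documentId;            // corresponds to a FileNode of type "document"
        DocumentSession(String documentId) { this.documentId = documentId; }
    }

    static class Session {
        final UUID id = UUID.randomUUID();
        final String user;
        // one DocumentSession per document this client has opened
        final Map<String, DocumentSession> documentSessions = new HashMap<>();
        Session(String user) { this.user = user; }

        DocumentSession openDocument(String documentId) {
            return documentSessions.computeIfAbsent(documentId, DocumentSession::new);
        }
    }

    public static void main(String[] args) {
        Session session = new Session("leone");               // created when a client connects
        DocumentSession ds = session.openDocument("doc-42");  // hypothetical document id
        System.out.println("session " + session.id + " opened document " + ds.documentId);
    }
}
```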
Figure 2. TeNDaX Database Schema (Object Role Modeling Diagram)
The DocumentSession is important for the real-time server component, which, in case of a change on a document done by a client, is responsible for sending update information to all the clients working on the same document. The DocumentId in the class DocumentSession points to a FileNode instance, and corresponds to the ID of the opened document. Instances of the class FileNode either represent a folder node or a document node. The folder node corresponds to a folder of a file system and the document node to that of a file. Instances of the class Char represent the characters of a document. The value of a character is stored in the attribute CharacterValue. The sequence is defined by the attributes After and Before of the class Char. Particular instances of Char mark the beginning and the end of a document. The methods InsertChars and RemoveChars are used to add and delete characters. 2.1.2 Editor As seen above, each document is natively stored in the database. Our editor does not have a replica of one part of the native text database in the sense of database replicas. Instead, it has a so-called image as its replica. Even if several authors edit the same text at the same time, they work on one unique document at all times. The system guarantees this unique view. Editing a document involves a number of steps: first, getting the required information out of the image; secondly, invoking the corresponding methods within the database; thirdly, changing the image; and fourthly, informing all other clients about the changes. 2.1.3 Real-Time Server Component The real-time server component is responsible for the real-time propagation of any changes on a document done within an editor to all the editors that are working on or have opened the same document. When an editor connects to the application server, which in turn connects to the database, the database also establishes a connection to the real-time server component (if there isn't already a connection). The database system informs the real-time server component about each new editor session, which the real-time server component administrates in its SessionManager. Then the editor itself also connects to the real-time server component. The real-time server component adds the editor socket to the client's data structure in the SessionManager and is then ready to communicate. Each time a change on a document from an editor is persistently stored in the database, the database sends a message to the real-time server component, which in turn sends the changes to all the editors working on the same document. For this purpose, a special communication protocol is used: the update protocol. Update Protocol The real-time server component uses the update protocol to communicate with the database and the editors. Messages are sent from the database to the real-time server component, which sends the messages to the affected editors.
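To illustrate the character-level data model described above (Char objects chained via the After and Before attributes, with special instances marking the beginning and end of a document), here is a minimal in-memory analogue. The attribute and method names follow the schema, but this is an illustration only: in TeNDaX every Char is a persistent database object and each change runs as a real-time transaction.

```java
// In-memory analogue of the Char data model (illustrative only; in TeNDaX these
// are objects in the database and InsertChars/RemoveChars are transactional).
public class CharModelSketch {

    static class Char {
        char characterValue;   // CharacterValue attribute
        Char before, after;    // Before/After attributes define the sequence
        Char(char value) { this.characterValue = value; }
    }

    final Char start = new Char('\u0000');  // special instance marking the beginning
    final Char end   = new Char('\u0000');  // special instance marking the end

    CharModelSketch() { start.after = end; end.before = start; }

    // InsertChars: link a string of new Char objects after a given predecessor.
    void insertChars(String text, Char previous) {
        Char next = previous.after;
        for (int i = 0; i < text.length(); i++) {
            Char c = new Char(text.charAt(i));
            c.before = previous;  previous.after = c;
            c.after  = next;      next.before    = c;
            previous = c;
        }
    }

    // RemoveChars: unlink a character (a single one here, for brevity).
    void removeChar(Char c) {
        c.before.after = c.after;
        c.after.before = c.before;
    }

    String text() {
        StringBuilder sb = new StringBuilder();
        for (Char c = start.after; c != end; c = c.after) sb.append(c.characterValue);
        return sb.toString();
    }

    public static void main(String[] args) {
        CharModelSketch doc = new CharModelSketch();
        doc.insertChars("Helo", doc.start);
        doc.insertChars("l", doc.start.after.after);  // insert after the second character
        System.out.println(doc.text());               // -> "Hello"
    }
}
```

One benefit of storing the sequence as links rather than absolute positions is that an insertion only touches its immediate neighbours, which fits the transactional, multi-user setting described above.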
The update protocol consists of different message types. Messages consist of two packages: package one contains information for the real-time server component, whereas package two is passed to the editors and contains the update information, as depicted in Figure 3.
Figure 3. Update Protocol (message layout: || RTSC part: Parameter | ... | Parameter || || Editor Data ||; the protocol is spoken between the database system and the real-time server component, and between the real-time server component and the editors)
In the following, two message types are presented: ||u|sessionId,...,sessionId|| ||editor data|| (u: update message, sessionId: Id of the client session). With this message type the real-time server component sends the editor data package to all editors specified in the sessionId list. ||ud|fileId|| ||editor data|| (ud: update document message, fileId: Id of the file). With this message type, the real-time server component sends the editor data to all editors who have opened the document with the indicated file-Id. Class Model Figure 4 depicts the class model as well as the environment of the real-time server component. The environment consists mainly of the editor and the database, but any other client application that could make use of the real-time server component can connect. ConnectionListener: This class is responsible for the connection to the clients, i.e. to the database and the editors. Depending on the connection type (database or editor), the connection is passed to an EditorWorker instance or a DatabaseMessageWorker instance, respectively. EditorWorker: This class manages the connections of type 'editor'. The connection (a socket and its input and output stream) is stored in the SessionManager. SessionManager: This class is similar to an 'in-memory database': all editor session information, e.g. the editor sockets and which editor has opened which document, is stored within this data structure. DatabaseMessageWorker: This class is responsible for the connections of type 'database'. At run-time, only one connection exists for each database. Update messages from the database are sent to the DatabaseMessageWorker and, with the help of additional information from the SessionManager, sent to the corresponding clients. ServiceClass: This class offers a set of methods for reading, writing and logging messages.
Figure 4. Real-Time Server Component Class Diagram
2.1.4 Dynamic Folders As mentioned above, every editing action invoked by a user is immediately transferred to the database. At the same time, more information about the current transaction is gathered. As all information is stored in the database, one character can hold a multitude of information, which can later be used for the retrieval of documents. Meta data is collected at character level, from document structure (layout, workflow, template, semantics, security and notes), on the level of a document section, and on the level of the whole document [6]. All of the above-mentioned meta data is crucial information for creating content and knowledge out of word processing documents. This meta data can be used to create an alternative storage system for documents. In any case, it is not an easy task to change users' familiarity with the well-known hierarchical file system.
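The sketch below shows, in strongly simplified form, how a real-time server component could use its SessionManager to dispatch the two message types introduced above: 'u' messages go to an explicit list of sessions, 'ud' messages go to every editor that has the given document open. The names and the string-based handling are hypothetical simplifications; the actual component works on sockets and streams as described in the class model.

```java
import java.util.*;
import java.util.function.BiConsumer;

// Strongly simplified dispatch logic of a real-time server component (RTSC).
// Hypothetical sketch: the real TeNDaX RTSC works on sockets/streams, not strings.
public class RtscDispatchSketch {

    // SessionManager analogue: which session belongs to which editor callback,
    // and which sessions have which document open.
    static class SessionManager {
        final Map<String, BiConsumer<String, String>> editors = new HashMap<>(); // sessionId -> editor sink
        final Map<String, Set<String>> openDocuments = new HashMap<>();          // fileId -> sessionIds

        void register(String sessionId, BiConsumer<String, String> editorSink) {
            editors.put(sessionId, editorSink);
        }
        void documentOpened(String sessionId, String fileId) {
            openDocuments.computeIfAbsent(fileId, k -> new HashSet<>()).add(sessionId);
        }
    }

    final SessionManager sessions = new SessionManager();

    // Dispatch one update coming from the database: either to a list of sessions ("u")
    // or to all editors that have opened a given document ("ud").
    void dispatch(String type, String target, String editorData) {
        Collection<String> recipients =
                type.equals("u") ? Arrays.asList(target.split(","))
                                 : sessions.openDocuments.getOrDefault(target, Set.of());
        for (String sessionId : recipients) {
            BiConsumer<String, String> editor = sessions.editors.get(sessionId);
            if (editor != null) editor.accept(sessionId, editorData);  // forward the editor-data package
        }
    }

    public static void main(String[] args) {
        RtscDispatchSketch rtsc = new RtscDispatchSketch();
        rtsc.sessions.register("s1", (sid, data) -> System.out.println("editor " + sid + " <- " + data));
        rtsc.sessions.register("s2", (sid, data) -> System.out.println("editor " + sid + " <- " + data));
        rtsc.sessions.documentOpened("s1", "doc-1");
        rtsc.sessions.documentOpened("s2", "doc-1");

        rtsc.dispatch("ud", "doc-1", "insert 'a' at position 5");  // update-document message
        rtsc.dispatch("u", "s2", "refresh meta data");             // update message to listed sessions
    }
}
```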
Users' attachment to the familiar hierarchy is also the main reason why we do not completely disregard the classical file system, but rather enhance it. Folders which correspond to the classical hierarchical file system will be called static folders. Folders where the documents are organized according to meta data will be called dynamic folders. As all information is stored in the database, the file system, too, is based on the database. The dynamic folders build up sub-trees, which are guided by the meta data selected by the user. Thus, the first step in using a dynamic folder is the definition of how it should be built. For each level of a dynamic folder, exactly one meta data item is used. The following example illustrates the steps which have to be taken in order to define a dynamic folder, and the meta data which should be used. As a first step, the meta data which will be used for the dynamic folder must be chosen (see Table 1); the sequence of the meta data influences the structure of the folder. Furthermore, for each meta data item used, restrictions and granularity must be defined by the user; if no restrictions are defined, all accessible documents are listed. The granularity therefore influences the number of sub-folders which will be created for the partitioning of the documents. As the user enters the tree structure of the dynamic folder, he can navigate through the branches to arrive at the document(s) he is looking for. The directory names indicate which meta data determines the content of the sub-folder in question. At each level, the documents which have so far been found to match the meta data can be inspected.
Table 1. Defining dynamic folders (example)
Level | Meta data | Restrictions | Granularity
1 | Creator | Only show documents which have been created by the users Leone, Hodel or Gall | One folder per creator
2 | Current location | Only show documents which were read at my current location | One folder per task status
3 | Authors | Only show documents where at least 40% was written by user 'Leone' | One folder per 20% share
Ad-hoc changes of granularity and restrictions are possible in order to maximize search comfort for the user. It is possible to predefine dynamic folders for frequent use, e.g. a location-based folder, as well as to create and modify dynamic folders on an ad-hoc basis. Furthermore, the content of such dynamic folders can change from one second to another, depending on the changes made by other users at that moment. 3. VALIDATION The proposed architecture is validated on the example of a character insertion. Insert operations are the most frequently used operations in a (collaborative) editing system. The character insertion is based on the TeNDaX Insert Algorithm, which is formally described in the following. The algorithm is simplified for this purpose. 3.1 Insert Characters Algorithm The symbol c stands for the object 'character', p stands for the previous character, n stands for the next character of a character object c, and the symbol l stands for a list of character objects. c = character p = previous character n = next character l = list of characters The symbol c1 stands for the first character in the list l, ci stands for a character in the list l at the position i, whereas i is a value between 1 and the length of the list l, and cn stands for the last character in the list l.
c1 = first character in list l ci = character at position i in list l cn = last character in list l The symbol β stands for the special character that marks the beginning of a document and ε stands for the special character that marks the end of a document. β=beginning of document ε=end of document The function startTA starts a transaction. startTA = start transaction The function commitTA commits a transaction that was started. commitTA = commit transaction The function checkWriteAccess checks if the write access for a document session s is granted. checkWriteAccess(s) = check if write access for document session s is granted The function lock acquires an exclusive lock for a character c and returns 1 for a success and 0 for no success. lock(c) = acquire the lock for character c success : return 1, no success : return 0 The function releaseLocks releases all locks that a transaction has acquired so far. releaseLocks = release all locks The function getPrevious returns the previous character and getNext returns the next character of a character c. getPrevious(c) = return previous character of character c getNext(c) = return next character of character c The function linkBefore links a preceding character p with a succeeding character x and the function linkAfter links a succeeding character n with a preceding character y. linkBefore(p,x) = link character p to character x linkAfter(n,y) = link character n to character y The function updateString links a character p with the first character c1 of a character list l and a character n with the last character cn of a character list l updateString(l, p, n) = linkBefore(p cl)∧ linkAfter(n, cn ) The function insertChar inserts a character c in the table Char with the fields After set to a character p and Before set to a character n. insertChar(c, p, n) = linkAfter(c,p) ∧ linkBefore(c,n) ∧ linkBefore(p,c) ∧ linkAfter(n,c) The function checkPreceding determines the previous character's CharacterValue of a character c and if the previous character's status is active. checkPreceding(c) = return status and CharacterValue of the previous character The function checkSucceeding determines the next character's CharacterValue of a character c and if the next character's status is active. 45 checkSucceeding(c) = return status and CharacterValue of the next character The function checkCharValue determines the CharacterValue of a character c. checkCharValue(c) = return CharacterValue of character c The function sendUpdate sends an update message (UpdateMessage) from the database to the real-time server component. sendUpdate(UpdateMessage) The function Read is used in the real-time server component to read the UpdateMessage. Read(UpdateInformationMessage) The function AllocatEditors checks on the base of the UpdateMessage and the SessionManager, which editors have to be informed. AllocateEditors(UpdateInformationMessage, SessionManager) = returns the affected editors The function SendMessage(EditorData) sends the editor part of the UpdateMessage to the editors SendMessage(EditorData) In TeNDaX, the Insert Algorithm is implemented in the class method InsertChars of the class Char which is depicted in Figure 2. 
The relevant parameters for the definitions beneath, are introduced in the following list: - nextCharacterOID: OID of the character situated next to the string to be inserted - previousCharacterOID: OID of the character situated previously to the string to be inserted - characterOIDs (List): List of character which have to be inserted Thus, the insertion of characters can be defined stepwise as follows: Start a transaction. startTA Select the character that is situated before the character that follows the string to be inserted. getPrevious(nextCharacterOID) = PrevChar(prevCharOID) ⇐ Π After ϑOID = nextCharacterOID(Char)) Acquire the lock for the character that is situated in the document before the character that follows the string which shall be inserted. lock(prevCharId) At this time the list characterOIDs contains the characters c1 to cn that shall be inserted. characterOIDs={ c1, ..., cn } Each character of the string is inserted at the appropriate position by linking the preceding and the succeeding character to it. For each character ci of characterOIDs: insertChar(ci, p, n) Whereas ci ∈ { c1,..., cn } Check if the preceding and succeeding characters are active or if it is the beginning or the end of the document. checkPreceding(prevCharOID) = IsOK(IsActive, CharacterValue) ⇐ Π IsActive, CharacterValue (ϑ OID = nextCharacterOID(Char)) checkSucceeding(nextCharacterOID) = IsOK(IsActive, CharacterValue)⇐ Π IsActive, CharacterValue (ϑ OID = nextCharacterOID(Char)) Update characters before and after the string to be inserted. updateString(characterOIDs, prevCharOID, nextCharacterOID) Release all locks and commit Transaction. releaseLocks commitTA Send update information to the real-time server component sendUpdate(UpdatenMessage) Read update message and inform affected editors of the change Read(UpdateMessage) Allocate Editors(UpdateMessage, SessionManager) SendMessage(EditorData) 3.2 Insert Characters Example Figure 1 gives a snapshot the system, i.e. of its architecture: four databases are distributed over a peer-to-peer network. Each database is connected to an application server (AS) and each application server is connected to a real-time server component (RTSC). Editors are connected to one or more real-time server components and to the corresponding databases. Considering that editor A (connected to database 1 and 4) and editor B (connected to database 1 and 2) are working on the same document stored in database 1. Editor B now inserts a character into this document. The insert operation is passed to application server 1, which in turns, passes it to the database 1, where an insert operation is invoked; the characters are inserted according to the algorithm discussed in the previous section. After the insertion, database 1 sends an update message (according to the update protocol discussed before) to real-time server component 1 (via AS 1). RTCS 1 combines the received update information with the information in his SessionManager and sends the editor data to the affected editors, in this case to editor A and B, where the changes are immediately shown. Occurring collaboration conflicts are solved and described in [3]. 4. SUMMARY With the approach presented in this paper and the implemented prototype, we offer real-time collaborative editing and management of documents stored in a special way in a database. With this approach we provide security, consistency and availability of documents and consequently offer pervasive document editing and management. 
Pervasive document editing and management is enabled due to the proposed architecture with the embedded real46 time server component, which propagates changes to a document immediately and consequently offers up-to-date documents. Document editing and managing is consequently enabled anywhere, anytime and with any device. The above-descried system is implemented in a running prototype. The system will be tested soon in line with a student workshop next autumn. REFERENCES [1] Abiteboul, S., Agrawal, R., et al.: The Lowell Database Research Self Assessment. Massachusetts, USA, 2003. [2] Hodel, T. B., Businger, D., and Dittrich, K. R.: Supporting Collaborative Layouting in Word Processing. IEEE International Conference on Cooperative Information Systems (CoopIS), Larnaca, Cyprus, IEEE, 2004. [3] Hodel, T. B. and Dittrich, K. R.: ``Concept and prototype of a collaborative business process environment for document processing.'' Data & Knowledge Engineering 52, Special Issue: Collaborative Business Process Technologies(1): 61120, 2005. [4] Hodel, T. B., Dubacher, M., and Dittrich, K. R.: Using Database Management Systems for Collaborative Text Editing. ACM European Conference of Computersupported Cooperative Work (ECSCW CEW 2003), Helsinki, Finland, 2003. [5] Hodel, T. B., Gall, H., and Dittrich, K. R.: Dynamic Collaborative Business Processes within Documents. ACM Special Interest Group on Design of Communication (SIGDOC) , Memphis, USA, 2004. [6] Hodel, T. B., R. Hacmac, and Dittrich, K. R.: Using Text Editing Creation Time Meta Data for Document Management. Conference on Advanced Information Systems Engineering (CAiSE'05), Porto, Portugal, Springer Lecture Notes, 2005. [7] Hodel, T. B., Specker, F. and Dittrich, K. R.: Embedded SOAP Server on the Operating System Level for ad-hoc Automatic Real-Time Bidirectional Communication. Information Resources Management Association (IRMA), San Diego, USA, 2005. [8] O``Kelly, P.: Revolution in Real-Time Communication and Collaboration: For Real This Time. Application Strategies: In-Depth Research Report. Burton Group, 2005. 47
Concept and Architecture of a Pervasive Document Editing and Managing System ABSTRACT Collaborative document processing has been addressed by many approaches so far, most of which focus on document versioning and collaborative editing. We address this issue from a different angle and describe the concept and architecture of a pervasive document editing and managing system. It exploits database techniques and real-time updating for sophisticated collaboration scenarios on multiple devices. Each user is always served with upto-date documents and can organize his work based on document meta data. For this, we present our conceptual architecture for such a system and discuss it with an example. 1. INTRODUCTION Text documents are a valuable resource for virtually any enterprise and organization. Documents like papers, reports and general business documentations contain a large part of today's (business) knowledge. Documents are mostly stored in a hierarchical folder structure on file servers and it is difficult to organize them in regard to classification, versioning etc., although it is of utmost importance that users can find, retrieve and edit up-to-date versions of documents whenever they want and, in a user-friendly way. 1.1 Problem Description With most of the commonly used word-processing applications documents can be manipulated by only one user at a time: tools for pervasive collaborative document editing and management, are rarely deployed in today's world. Despite the fact, that people strive for location - and time - independence, the importance of pervasive collaborative work, i.e. collaborative document editing and management is totally neglected. Documents could therefore be seen as a vulnerable source in today's world, which demands for an appropriate solution: The need to store, retrieve and edit these documents collaboratively anytime, everywhere and with almost every suitable device and with guaranteed mechanisms for security, consistency, availability and access control, is obvious. In addition, word processing systems ignore the fact that the history of a text document contains crucial information for its management. Such meta data includes creation date, creator, authors, version, location-based information such as time and place when/where a user reads/edits a document and so on. Such meta data can be gathered during the documents creation process and can be used versatilely. Especially in the field of pervasive document management, meta data is of crucial importance since it offers totally new ways of organizing and classifying documents: On the one hand, the user's actual situation influences the user's objectives. Meta data could be used to give the user the best possible view on the documents, dependent of his actual information. On the other hand, as soon as the user starts to work, i.e. reads or edits a document, new meta data can be gathered in order to make the system more adaptable and in a sense to the users situation and, to offer future users a better view on the documents. As far as we know, no system exists, that satisfies the aforementioned requirements. A very good overview about realtime communication and collaboration system is described in [7]. We therefore strive for a pervasive document editing and management system, which enables pervasive (and collaborative) document editing and management: users should be able to read and edit documents whenever, wherever, with whomever and with whatever device. 
In this paper, we present collaborative database-based real-time word processing, which provides pervasive document editing and management functionality. It enables the user to work on documents collaboratively and offers sophisticated document management facility: the user is always served with up-to-date documents and can organize and manage documents on the base of meta data. Additionally document data is treated as ` first class citizen' of the database as demanded in [1]. 1.2 Underlying Concepts The concept of our pervasive document editing and management system requires an appropriate architectural foundation. Our concept and implementation are based on the TeNDaX [3] collaborative database-based document editing and management system, which enables pervasive document editing and managing. TeNDaX is a Text Native Database eXtension. It enables the storage of text in databases in a native form so that editing text is finally represented as real-time transactions. Under the term ` text editing' we understand the following: writing and deleting text (characters), copying & pasting text, defining text layout & structure, inserting notes, setting access rights, defining business processes, inserting tables, pictures, and so on i.e. all the actions regularly carried out by word processing users. With ` real-time transaction' we mean that editing text (e.g. writing a character/word) invokes one or several database transactions so that everything, which is typed appears within the editor as soon as these objects are stored persistently. Instead of creating files and storing them in a file system, the content and all of the meta data belonging to the documents is stored in a special way in the database, which enables very fast real-time transactions for all editing tasks [2]. The database schema and the above-mentioned transactions are created in such a way that everything can be done within a multiuser environment, as is usual done by database technology. As a consequence, many of the achievements (with respect to data organization and querying, recovery, integrity and security enforcement, multi-user operation, distribution management, uniform tool access, etc.) are now, by means of this approach, also available for word processing. 2. APPROACH Our pervasive editing and management system is based on the above-mentioned database-based TeNDaX approach, where document data is stored natively in the database and supports pervasive collaborative text editing and document management. We define the pervasive document editing and management system, as a system, where documents can easily be accessed and manipulated everywhere (within the network), anytime (independently of the number of users working on the same document) and with any device (desktop, notebook, PDA, mobile phone etc.). Figure 1. TeNDaX Application Architecture In contrast to documents stored locally on the hard drive or on a file server, our system automatically serves the user with the up-to-date version of a document and changes done on the document are stored persistently in the database and immediately propagated to all clients who are working on the same document. Additionally, meta data gathered during the whole document creation process enables sophisticated document management. With the TeXt SQL API as abstract interface, this approach can be used by any tool and for any device. The system is built on the following components (see Figure 1): An editor in Java implements the presentation layer (A-G in Figure 1). 
The aim of this layer is the integration in a well-known wordprocessing application such as OpenOffice. The business logic layer represents the interface between the database and the word-processing application. It consists of the following three components: The application server (marked as AS 1-4 in Figure 1) enables text editing within the database environment and takes care of awareness, security, document management etc., all within a collaborative, real-time and multi-user environment. The real-time server component (marked as RTSC 14 in Figure 1) is responsible for the propagation of information, i.e. updates between all of the connected editors. The storage engine (data layer) primarily stores the content of documents as well as all related meta data within the database Databases can be distributed in a peer-to-peer network (DB 1-4 in Figure 1). . In the following, we will briefly present the database schema, the editor and the real-time server component as well as the concept of dynamic folders, which enables sophisticated document management on the basis of meta data. 2.1 Application Architecture A database-based real-time collaborative editor allows the same document to be opened and edited simultaneously on the same computer or over a network of several computers and mobile devices. All concurrency issues, as well as message propagation, are solved within this approach, while multiple instances of the same document are being opened [3]. Each insert or delete action is a database transaction and as such, is immediately stored persistently in the database and propagated to all clients working on the same document. 2.1.1 Database Schema As it was mentioned earlier that text is stored in a native way. Each character of a text document is stored as a single object in the database [3]. When storing text in such a native form, the performance of the employed database system is of crucial importance. The concept and performance issues of such a text database are described in [3], collaborative layouting in [2], dynamic collaborative business processes within documents in [5], the text editing creation time meta data model in [6] and the relation to XML databases in [7]. Figure 2 depicts the core database schema. By connecting a client to the database, a Session instance is created. One important attribute of the Session is the DocumentSession. This attribute refers to DocumentSession instances, which administrates all opened documents. For each opened document, a DocumentSession instance is created. The DocumentSession is important for the realtime server component, which, in case of a Figure 2. TeNDaX Database Schema (Object Role Modeling Diagram) change on a document done by a client, is responsible for sending update information to all the clients working on the same document. The DocumentId in the class DocumentSession points to a FileNode instance, and corresponds to the ID of the opened document. Instances of the class FileNode either represent a folder node or a document node. The folder node corresponds to a folder of a file system and the document node to that of a file. Instances of the class Char represent the characters of a document. The value of a character is stored in the attribute CharacterValue. The sequence is defined by the attributes After and Before of the class Char. Particular instances of Char mark the beginning and the end of a document. The methods InsertChars and RemoveChars are used to add and delete characters. 
Editing a document involves a number of steps: first, getting the required information out of the image, secondly, invoking the corresponding methods within the database, thirdly, changing the image, and fourthly, informing all other clients about the changes. 2.1.3 Real-Time Server Component The real-time server component is responsible for the real-time propagation of any changes on a document done within an editor to all the editors who are working or have opened the same document. When an editor connects to the application server, which in turn connects to the database, the database also establishes a connection to the real-time server component (if there isn't already a connection). The database system informs the real-time server component about each new editor session (session), which the realtime server component administrates in his SessionManager. Then, the editor as well connects to the real-time server component. The real-time server component adds the editor socket to the client's data structure in the SessionManager and is then ready to communicate. 2.1.2 Editor As seen above, each document is natively stored in the database. Our editor does not have a replica of one part of the native text database in the sense of database replicas. Instead, it has a so-called image as its replica. Even if several authors edit the same text at the same time, they work on one unique document at all times. The system guarantees this unique view. Each time a change on a document from an editor is persistently stored in the database, the database sends a message to the real-time server component, which in turns, sends the changes to all the editors working on the same document. Therefore, a special communication protocol is used: the update protocol. Update Protocol The real-time server component uses the update protocol to communicate with the database and the editors. Messages are sent from the database to the real-time server component, which sends the messages to the affected editors. The update protocol consists of different message types. Messages consist of two packages: package one contains information for the real-time server component whereas package two is passed to the editors and contains the update information, as depicted in Figure 3. Figure 3. Update Protocol In the following, two message types are presented: | | u | sessionId,..., sessionId | editor data | | u: update message, sessionId: Id of the client session With this message type the real-time server component sends the editor data package to all editors specified in the sessionId list. | | ud | fileId | editor data | | ud: update document message, fileId: Id of the file With this message type, the real-time server component sends the editor data to all editors who have opened the document with the indicated file-Id. Class Model Figure 4 depicts the class model as well as the environment of the real-time server component. The environment consists mainly of the editor and the database, but any other client application that could make use of the real-time server component can connect. ConnectionListener: This class is responsible for the connection to the clients, i.e. to the database and the editors. Depending on the connection type (database or editor) the connection is passed to an EditorWorker instance or DatabaseMessageWorker instance respectively. EditorWorker: This class manages the connections of type ` editor'. The connection (a socket and its input and output stream) is stored in the SessionManager. 
SessionManager: This class is similar to an ` in-memory database': all editor session information, e.g. the editor sockets, which editor has opened which document etc. are stored within this data structure. DatabaseMessageWorker: This class is responsible for the connections of type ` database'. At run-time, only one connection exists for each database. Update messages from the database are sent to the DatabaseMessageWorker and, with the help of additional information from the SessionManager, sent to the corresponding clients. ServiceClass: This class offers a set of methods for reading, writing and logging messages. Figure 4. Real-Time Server Component Class Diagram 2.1.4 Dynamic Folders As mentioned above, every editing action invoked by a user is immediately transferred to the database. At the same time, more information about the current transaction is gathered. As all information is stored in the database, one character can hold a multitude of information, which can later be used for the retrieval of documents. Meta data is collected at character level, from document structure (layout, workflow, template, semantics, security, workflow and notes), on the level of a document section and on the level of the whole document [6]. All of the above-mentioned meta data is crucial information for creating content and knowledge out of word processing documents. This meta data can be used to create an alternative storage system for documents. In any case, it is not an easy task to change users' familiarity to the well known hierarchical file system. This is also the main reason why we do not completely disregard the classical file system, but rather enhance it. Folders which correspond to the classical hierarchical file system will be called "static folders". Folders where the documents are organized according to meta data, will be called "dynamic folders". As all information is stored in the database, the file system, too, is based on the database. The dynamic folders build up sub-trees, which are guided by the meta data selected by the user. Thus, the first step in using a dynamic folder is the definition of how it should be built. For each level of a dynamic folder, exactly one meta data item is used to. The following example illustrates the steps which have to be taken in order to define a dynamic folder, and the meta data which should be used. As a first step, the meta data which will be used for the dynamic folder must be chosen (see Table 1): The sequence of the meta data influences the structure of the folder. Furthermore, for each meta data used, restrictions and granularity must be defined by the user; if no restrictions are defined, all accessible documents are listed. The granularity therefore influences the number of sub-folders which will be created for the partitioning of the documents. As the user enters the tree structure of the dynamic folder, he can navigate through the branches to arrive at the document (s) he is looking for. The directory names indicate which meta data determines the content of the sub-folder in question. At each level, the documents, which have so far been found to match the meta data, can be inspected. Table 1. Defining dynamic folders (example) Ad hoc changes of granularity and restrictions are possible in order to maximize search comfort for the user. It is possible to predefine dynamic folders for frequent use, e.g. a location-based folder, as well as to create and modify dynamic folders on an ad hoc basis. 
Furthermore, the content of such dynamic folders can change from one second to another, depending on the changes made by other users at that moment. 3. VALIDATION The proposed architecture is validated on the example of a character insertion. Insert operations are the most frequently used operations in a (collaborative) editing system. The character insertion is based on the TeNDaX Insert Algorithm, which is formally described in the following. The algorithm is simplified for this purpose. 3.1 Insert Characters Algorithm The symbol c stands for the object "character", p stands for the previous character, n stands for the next character of a character object c, and the symbol l stands for a list of character objects. c = character p = previous character n = next character l = list of characters The symbol c1 stands for the first character in the list l, ci stands for a character in the list l at the position i, whereas i is a value between 1 and the length of the list l, and cn stands for the last character in the list l. c1 = first character in list l ci = character at position i in list l cn = last character in list l The symbol β stands for the special character that marks the beginning of a document and ε stands for the special character that marks the end of a document. β = beginning of document ε = end of document The function startTA starts a transaction. The function checkWriteAccess checks if the write access for a document session s is granted. The function lock acquires an exclusive lock for a character c and returns 1 for success and 0 for no success. The function releaseLocks releases all locks that a transaction has acquired so far. releaseLocks = release all locks The function getPrevious returns the previous character and getNext returns the next character of a character c. The function linkBefore links a preceding character p with a succeeding character x, and the function linkAfter links a succeeding character n with a preceding character y. The function updateString links a character p with the first character c1 of a character list l and a character n with the last character cn of a character list l: updateString(l, p, n) = linkBefore(p, c1) ∧ linkAfter(n, cn) The function insertChar inserts a character c in the table Char with the field After set to a character p and the field Before set to a character n. The function checkPreceding determines the CharacterValue of the character preceding a character c and whether the preceding character's status is active. checkPreceding(c) = return status and CharacterValue of the previous character The function checkSucceeding determines the CharacterValue of the character succeeding a character c and whether the succeeding character's status is active. checkSucceeding(c) = return status and CharacterValue of the next character The function checkCharValue determines the CharacterValue of a character c. checkCharValue(c) = return CharacterValue of character c The function sendUpdate sends an update message (UpdateMessage) from the database to the real-time server component. sendUpdate(UpdateMessage) The function Read is used in the real-time server component to read the UpdateMessage. The function AllocateEditors determines, on the basis of the UpdateMessage and the SessionManager, which editors have to be informed.
AllocateEditors(UpdateMessage, SessionManager) = returns the affected editors The function SendMessage(EditorData) sends the editor part of the UpdateMessage to the editors. SendMessage(EditorData) In TeNDaX, the Insert Algorithm is implemented in the class method InsertChars of the class Char, which is depicted in Figure 2. The relevant parameters for the definitions below are introduced in the following list: - nextCharacterOID: OID of the character situated after the string to be inserted - previousCharacterOID: OID of the character situated before the string to be inserted - characterOIDs (List): list of the characters which have to be inserted Thus, the insertion of characters can be defined stepwise as follows: Start a transaction. startTA Select the character that is situated before the character that follows the string to be inserted, and acquire the lock for it. lock(previousCharacterOID) At this time the list characterOIDs contains the characters c1 to cn that shall be inserted. Each character of the string is inserted at the appropriate position by linking the preceding and the succeeding character to it. For each character ci of characterOIDs: insertChar(ci, p, n), whereas ci ∈ {c1,..., cn} Check if the preceding and succeeding characters are active or if it is the beginning or the end of the document. 3.2 Insert Characters Example Figure 1 gives a snapshot of the system, i.e. of its architecture: four databases are distributed over a peer-to-peer network. Each database is connected to an application server (AS) and each application server is connected to a real-time server component (RTSC). Editors are connected to one or more real-time server components and to the corresponding databases. Consider that editor A (connected to databases 1 and 4) and editor B (connected to databases 1 and 2) are working on the same document stored in database 1. Editor B now inserts a character into this document. The insert operation is passed to application server 1, which in turn passes it to database 1, where an insert operation is invoked; the characters are inserted according to the algorithm discussed in the previous section. After the insertion, database 1 sends an update message (according to the update protocol discussed before) to real-time server component 1 (via AS 1). RTSC 1 combines the received update information with the information in its SessionManager and sends the editor data to the affected editors, in this case to editors A and B, where the changes are immediately shown. Collaboration conflicts that occur are resolved as described in [3]. 4. SUMMARY With the approach presented in this paper and the implemented prototype, we offer real-time collaborative editing and management of documents stored in a special way in a database. With this approach we provide security, consistency and availability of documents, and consequently offer pervasive document editing and management. Pervasive document editing and management is enabled by the proposed architecture with the embedded real-time server component, which propagates changes to a document immediately and consequently offers up-to-date documents. Document editing and managing is consequently enabled anywhere, anytime and with any device. The above-described system is implemented in a running prototype. The system will be tested soon in line with a student workshop next autumn.
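To make the insert algorithm of Section 3.1 and the update propagation of Section 3.2 more concrete, the following stand-alone Java sketch mimics their control flow: a doubly linked list of character objects stands in for the Char table, and after the insertion an update message of type "u" is assembled in the format of the update protocol. Transactions, locking, write-access checks and the database connection are omitted, and all types and field names beyond those mentioned in the text are assumptions.

import java.util.*;

// Stand-alone sketch of the control flow only; not the TeNDaX implementation.
public class InsertCharsSketch {

    static final class Char {
        final String value;   // character value; "β" and "ε" mark the beginning and end of the document
        Char before, after;
        Char(String value) { this.value = value; }
    }

    // Link the characters c1..cn between the previous character p and the next character n.
    static void insertChars(Char p, Char n, List<String> values) {
        Char cursor = p;
        for (String v : values) {              // insertChar: create and link each character in order
            Char c = new Char(v);
            c.before = cursor;
            c.after = n;
            cursor.after = c;
            cursor = c;
        }
        n.before = cursor;                     // updateString: close the chain towards n
    }

    // Build an update message of type "u" as described in Section 2.1.3:
    // | u | sessionId,...,sessionId | editor data |
    static String updateMessage(List<String> sessionIds, String editorData) {
        return "| u | " + String.join(",", sessionIds) + " | " + editorData + " |";
    }

    public static void main(String[] args) {
        Char begin = new Char("β"), end = new Char("ε");
        begin.after = end;
        end.before = begin;
        insertChars(begin, end, List.of("H", "i"));
        StringBuilder doc = new StringBuilder();
        for (Char c = begin.after; c != end; c = c.after) doc.append(c.value);
        System.out.println(doc);                                        // prints: Hi
        System.out.println(updateMessage(List.of("s1", "s2"), doc.toString()));
    }
}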
Concept and Architecture of a Pervasive Document Editing and Managing System ABSTRACT Collaborative document processing has been addressed by many approaches so far, most of which focus on document versioning and collaborative editing. We address this issue from a different angle and describe the concept and architecture of a pervasive document editing and managing system. It exploits database techniques and real-time updating for sophisticated collaboration scenarios on multiple devices. Each user is always served with upto-date documents and can organize his work based on document meta data. For this, we present our conceptual architecture for such a system and discuss it with an example. 1. INTRODUCTION Text documents are a valuable resource for virtually any enterprise and organization. Documents like papers, reports and general business documentations contain a large part of today's (business) knowledge. Documents are mostly stored in a hierarchical folder structure on file servers and it is difficult to organize them in regard to classification, versioning etc., although it is of utmost importance that users can find, retrieve and edit up-to-date versions of documents whenever they want and, in a user-friendly way. 1.1 Problem Description With most of the commonly used word-processing applications documents can be manipulated by only one user at a time: tools for pervasive collaborative document editing and management, are rarely deployed in today's world. Despite the fact, that people strive for location - and time - independence, the importance of pervasive collaborative work, i.e. collaborative document editing and management is totally neglected. Documents could therefore be seen as a vulnerable source in today's world, which demands for an appropriate solution: The need to store, retrieve and edit these documents collaboratively anytime, everywhere and with almost every suitable device and with guaranteed mechanisms for security, consistency, availability and access control, is obvious. In addition, word processing systems ignore the fact that the history of a text document contains crucial information for its management. Such meta data includes creation date, creator, authors, version, location-based information such as time and place when/where a user reads/edits a document and so on. Such meta data can be gathered during the documents creation process and can be used versatilely. Especially in the field of pervasive document management, meta data is of crucial importance since it offers totally new ways of organizing and classifying documents: On the one hand, the user's actual situation influences the user's objectives. Meta data could be used to give the user the best possible view on the documents, dependent of his actual information. On the other hand, as soon as the user starts to work, i.e. reads or edits a document, new meta data can be gathered in order to make the system more adaptable and in a sense to the users situation and, to offer future users a better view on the documents. As far as we know, no system exists, that satisfies the aforementioned requirements. A very good overview about realtime communication and collaboration system is described in [7]. We therefore strive for a pervasive document editing and management system, which enables pervasive (and collaborative) document editing and management: users should be able to read and edit documents whenever, wherever, with whomever and with whatever device. 
In this paper, we present collaborative database-based real-time word processing, which provides pervasive document editing and management functionality. It enables the user to work on documents collaboratively and offers sophisticated document management facility: the user is always served with up-to-date documents and can organize and manage documents on the base of meta data. Additionally document data is treated as ` first class citizen' of the database as demanded in [1]. 1.2 Underlying Concepts The concept of our pervasive document editing and management system requires an appropriate architectural foundation. Our concept and implementation are based on the TeNDaX [3] collaborative database-based document editing and management system, which enables pervasive document editing and managing. TeNDaX is a Text Native Database eXtension. It enables the storage of text in databases in a native form so that editing text is finally represented as real-time transactions. Under the term ` text editing' we understand the following: writing and deleting text (characters), copying & pasting text, defining text layout & structure, inserting notes, setting access rights, defining business processes, inserting tables, pictures, and so on i.e. all the actions regularly carried out by word processing users. With ` real-time transaction' we mean that editing text (e.g. writing a character/word) invokes one or several database transactions so that everything, which is typed appears within the editor as soon as these objects are stored persistently. Instead of creating files and storing them in a file system, the content and all of the meta data belonging to the documents is stored in a special way in the database, which enables very fast real-time transactions for all editing tasks [2]. The database schema and the above-mentioned transactions are created in such a way that everything can be done within a multiuser environment, as is usual done by database technology. As a consequence, many of the achievements (with respect to data organization and querying, recovery, integrity and security enforcement, multi-user operation, distribution management, uniform tool access, etc.) are now, by means of this approach, also available for word processing. 2. APPROACH 2.1 Application Architecture 2.1.1 Database Schema 2.1.3 Real-Time Server Component 2.1.2 Editor Update Protocol Class Model 2.1.4 Dynamic Folders 3. VALIDATION 3.1 Insert Characters Algorithm 3.2 Insert Characters Example 4. SUMMARY With the approach presented in this paper and the implemented prototype, we offer real-time collaborative editing and management of documents stored in a special way in a database. With this approach we provide security, consistency and availability of documents and consequently offer pervasive document editing and management. Pervasive document editing and management is enabled due to the proposed architecture with the embedded real time server component, which propagates changes to a document immediately and consequently offers up-to-date documents. Document editing and managing is consequently enabled anywhere, anytime and with any device. The above-descried system is implemented in a running prototype. The system will be tested soon in line with a student workshop next autumn.
Concept and Architecture of a Pervasive Document Editing and Managing System ABSTRACT Collaborative document processing has been addressed by many approaches so far, most of which focus on document versioning and collaborative editing. We address this issue from a different angle and describe the concept and architecture of a pervasive document editing and managing system. It exploits database techniques and real-time updating for sophisticated collaboration scenarios on multiple devices. Each user is always served with upto-date documents and can organize his work based on document meta data. For this, we present our conceptual architecture for such a system and discuss it with an example. 1. INTRODUCTION Text documents are a valuable resource for virtually any enterprise and organization. Documents like papers, reports and general business documentations contain a large part of today's (business) knowledge. 1.1 Problem Description With most of the commonly used word-processing applications documents can be manipulated by only one user at a time: tools for pervasive collaborative document editing and management, are rarely deployed in today's world. Despite the fact, that people strive for location - and time - independence, the importance of pervasive collaborative work, i.e. collaborative document editing and management is totally neglected. Documents could therefore be In addition, word processing systems ignore the fact that the history of a text document contains crucial information for its management. Such meta data includes creation date, creator, authors, version, location-based information such as time and place when/where a user reads/edits a document and so on. Such meta data can be gathered during the documents creation process and can be used versatilely. Especially in the field of pervasive document management, meta data is of crucial importance since it offers totally new ways of organizing and classifying documents: On the one hand, the user's actual situation influences the user's objectives. Meta data could be used to give the user the best possible view on the documents, dependent of his actual information. On the other hand, as soon as the user starts to work, i.e. reads or edits a document, new meta data can be gathered in order to make the system more adaptable and in a sense to the users situation and, to offer future users a better view on the documents. As far as we know, no system exists, that satisfies the aforementioned requirements. A very good overview about realtime communication and collaboration system is described in [7]. We therefore strive for a pervasive document editing and management system, which enables pervasive (and collaborative) document editing and management: users should be able to read and edit documents whenever, wherever, with whomever and with whatever device. In this paper, we present collaborative database-based real-time word processing, which provides pervasive document editing and management functionality. It enables the user to work on documents collaboratively and offers sophisticated document management facility: the user is always served with up-to-date documents and can organize and manage documents on the base of meta data. Additionally document data is treated as ` first class citizen' of the database as demanded in [1]. 1.2 Underlying Concepts The concept of our pervasive document editing and management system requires an appropriate architectural foundation. 
Our concept and implementation are based on the TeNDaX [3] collaborative database-based document editing and management system, which enables pervasive document editing and managing. TeNDaX is a Text Native Database eXtension. It enables the storage of text in databases in a native form so that editing text is finally represented as real-time transactions. Under the term ` text editing' we understand the following: writing and deleting text Instead of creating files and storing them in a file system, the content and all of the meta data belonging to the documents is stored in a special way in the database, which enables very fast real-time transactions for all editing tasks [2]. 4. SUMMARY With the approach presented in this paper and the implemented prototype, we offer real-time collaborative editing and management of documents stored in a special way in a database. With this approach we provide security, consistency and availability of documents and consequently offer pervasive document editing and management. Pervasive document editing and management is enabled due to the proposed architecture with the embedded real time server component, which propagates changes to a document immediately and consequently offers up-to-date documents. Document editing and managing is consequently enabled anywhere, anytime and with any device. The above-descried system is implemented in a running prototype. The system will be tested soon in line with a student workshop next autumn.
C-54
Remote Access to Large Spatial Databases
Enterprises in the public and private sectors have been making their large spatial data archives available over the Internet. However, interactive work with such large volumes of online spatial data is a challenging task. We propose two efficient approaches to remote access to large spatial data. First, we introduce a client-server architecture where the work is distributed between the server and the individual clients for spatial query evaluation, data visualization, and data management. We enable the minimization of the requirements for system resources on the client side while maximizing system responsiveness as well as the number of connections one server can handle concurrently. Second, for prolonged periods of access to large online data, we introduce APPOINT (an Approach for Peer-to-Peer Offloading the INTernet). This is a centralized peer-to-peer approach that helps Internet users transfer large volumes of online data efficiently. In APPOINT, active clients of the client-server architecture act on the server's behalf and communicate with each other to decrease network latency, improve service bandwidth, and resolve server congestions.
[ "remot access", "larg spatial data", "internet", "spatial queri evalu", "data visual", "data manag", "network latenc", "client-server architectur", "central peer-to-peer approach", "sand", "dynam network infrastructur", "web browser", "internet-enabl databas manag system", "gi", "client-server", "peer-to-peer" ]
[ "P", "P", "P", "P", "P", "P", "P", "M", "M", "U", "M", "U", "M", "U", "U", "U" ]
Remote Access to Large Spatial Databases ∗ Egemen Tanin Frantiˇsek Brabec Hanan Samet Computer Science Department Center for Automation Research Institute for Advanced Computer Studies University of Maryland, College Park, MD 20742 {egemen,brabec,hjs}@umiacs. umd.edu www.cs.umd.edu/{~egemen,~brabec,~hjs} ABSTRACT Enterprises in the public and private sectors have been making their large spatial data archives available over the Internet. However, interactive work with such large volumes of online spatial data is a challenging task. We propose two efficient approaches to remote access to large spatial data. First, we introduce a client-server architecture where the work is distributed between the server and the individual clients for spatial query evaluation, data visualization, and data management. We enable the minimization of the requirements for system resources on the client side while maximizing system responsiveness as well as the number of connections one server can handle concurrently. Second, for prolonged periods of access to large online data, we introduce APPOINT (an Approach for Peer-to-Peer Offloading the INTernet). This is a centralized peer-to-peer approach that helps Internet users transfer large volumes of online data efficiently. In APPOINT, active clients of the clientserver architecture act on the server``s behalf and communicate with each other to decrease network latency, improve service bandwidth, and resolve server congestions. Categories and Subject Descriptors C.2.4 [Computer-Communication Networks]: Distributed Systems-Client/server, Distributed applications, Distributed databases; H.2.8 [Database Management]: Database Applications-Spatial databases and GIS General Terms Performance, Management 1. INTRODUCTION In recent years, enterprises in the public and private sectors have provided access to large volumes of spatial data over the Internet. Interactive work with such large volumes of online spatial data is a challenging task. We have been developing an interactive browser for accessing spatial online databases: the SAND (Spatial and Non-spatial Data) Internet Browser. Users of this browser can interactively and visually manipulate spatial data remotely. Unfortunately, interactive remote access to spatial data slows to a crawl without proper data access mechanisms. We developed two separate methods for improving the system performance, together, form a dynamic network infrastructure that is highly scalable and provides a satisfactory user experience for interactions with large volumes of online spatial data. The core functionality responsible for the actual database operations is performed by the server-based SAND system. SAND is a spatial database system developed at the University of Maryland [12]. The client-side SAND Internet Browser provides a graphical user interface to the facilities of SAND over the Internet. Users specify queries by choosing the desired selection conditions from a variety of menus and dialog boxes. SAND Internet Browser is Java-based, which makes it deployable across many platforms. In addition, since Java has often been installed on target computers beforehand, our clients can be deployed on these systems with little or no need for any additional software installation or customization. The system can start being utilized immediately without any prior setup which can be extremely beneficial in time-sensitive usage scenarios such as emergencies. There are two ways to deploy SAND. 
First, any standard Web browser can be used to retrieve and run the client piece (SAND Internet Browser) as a Java application or an applet. This way, users across various platforms can continuously access large spatial data at a remote location with little or no need for any preceding software installation. The second option is to use a stand-alone SAND Internet Browser along with a locally-installed Internet-enabled database management system (server piece). In this case, the SAND Internet Browser can still be utilized to view data from remote locations. However, frequently accessed data can be downloaded to the local database on demand, and subsequently accessed locally. Power users can also upload large volumes of spatial data back to the remote server using this enhanced client. We focused our efforts in two directions. We first aimed at developing a client-server architecture with efficient caching methods to balance local resources on one side and the significant latency of the network connection on the other. The low bandwidth of this connection is the primary concern in both cases. The outcome of this research primarily addresses the issues of our first type of usage (i.e., as a remote browser application or an applet) for our browser and other similar applications. The second direction aims at helping users that wish to manipulate large volumes of online data for prolonged periods. We have developed a centralized peer-to-peer approach to provide the users with the ability to transfer large volumes of data (i.e., whole data sets to the local database) more efficiently by better utilizing the distributed network resources among active clients of a client-server architecture. We call this architecture APPOINT (Approach for Peer-to-Peer Offloading the INTernet). The results of this research address primarily the issues of the second type of usage for our SAND Internet Browser (i.e., as a stand-alone application). The rest of this paper is organized as follows. Section 2 describes our client-server approach in more detail. Section 3 focuses on APPOINT, our peer-to-peer approach. Section 4 discusses our work in relation to existing work. Section 5 outlines a sample SAND Internet Browser scenario for both of our remote access approaches. Section 6 contains concluding remarks as well as future research directions. 2. THE CLIENT-SERVER APPROACH Traditionally, Geographic Information Systems (GIS) such as ArcInfo from ESRI [2] and many spatial databases are designed to be stand-alone products. The spatial database is kept on the same computer or local area network from where it is visualized and queried. This architecture allows for instantaneous transfer of large amounts of data between the spatial database and the visualization module, so that it is perfectly reasonable to use large-bandwidth protocols for communication between them. There are, however, many applications where a more distributed approach is desirable. In these cases, the database is maintained in one location while users need to work with it from possibly distant sites over the network (e.g., the Internet). These connections can be far slower and less reliable than local area networks, and thus it is desirable to limit the data flow between the database (server) and the visualization unit (client) in order to get a timely response from the system.
Our client-server approach (Figure 1) allows the actual database engine to be run in a central location maintained by spatial database experts, while end users acquire a Java-based client component that provides them with a gateway into the SAND spatial database engine. Our client is more than a simple image viewer. Instead, it operates on vector data, allowing the client to execute many operations such as zooming or locational queries locally. Figure 1: SAND Internet Browser - Client-Server architecture. In essence, a simple spatial database engine is run on the client. This database keeps a copy of a subset of the whole database whose full version is maintained on the server. This is a concept similar to 'caching'. In our case, the client acts as a lightweight server in that, given data, it evaluates queries and provides the visualization module with objects to be displayed. It initiates communication with the server only in cases where it does not have enough data stored locally. Since the locally run database is only updated when additional or newer data is needed, our architecture allows the system to minimize the network traffic between the client and the server when executing the most common user-side operations such as zooming and panning. In fact, as long as the user explores one region at a time (i.e., he or she is not panning all over the database), no additional data needs to be retrieved after the initial population of the client-side database. This makes the system much more responsive than the Web mapping services. Due to the complexity of evaluating arbitrary queries (i.e., more complex queries than the window queries that are needed for database visualization), we do not perform user-specified queries on the client. All user queries are still evaluated on the server side and the results are downloaded onto the client for display. However, assuming that the queries are selective enough (i.e., there are far fewer elements returned from the query than the number of elements in the database), the response delay is usually within reasonable limits. 2.1 Client-Server Communication As mentioned above, the SAND Internet Browser is the client piece of the remotely accessible spatial database server built around the SAND kernel. In order to communicate with the server, whose application programming interface (API) is a Tcl-based scripting language, a servlet specifically designed to interface the SAND Internet Browser with the SAND kernel is required on the server side. This servlet listens on a given port of the server for incoming requests from the client. It translates these requests into the SAND-Tcl language. Next, it transmits these SAND-Tcl commands or scripts to the SAND kernel. After results are provided by the kernel, the servlet fetches and processes them, and then sends those results back to the originating client. Once the Java servlet is launched, it waits for a client to initiate a connection. It handles both requests for the actual client Java code (needed when the client is run as an applet) and the SAND traffic. When the client piece is launched, it connects back to the SAND servlet. The communication is driven by the client piece; the server only responds to the client's queries. The client initiates a transaction by sending a query. The Java servlet parses the query and creates a corresponding SAND-Tcl expression or script in the SAND kernel's native format. It is then sent to the kernel for evaluation or execution.
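The following is a minimal Java sketch of the request path described so far (the handling of the kernel's response is discussed next): a server program listens on a port, reads one request per line from the client, translates it and forwards it to the kernel. The wire format, the port number and the translate/askKernel placeholders are assumptions made for exposition; the real servlet speaks the SAND-Tcl language of the SAND kernel, which is not reproduced here.

import java.io.*;
import java.net.*;
import java.nio.charset.StandardCharsets;

// Minimal sketch of the request path only: one thread, one request per line.
// The wire format and the translation rule are placeholders, not the real protocol.
public class SandServletSketch {

    // Placeholder translation of a client request into a kernel command string.
    static String translate(String clientRequest) {
        return "# SAND-Tcl equivalent of: " + clientRequest;   // assumption, not actual SAND-Tcl
    }

    // Placeholder for sending a command to the kernel and reading its result.
    static String askKernel(String command) {
        return "result-of(" + command + ")";
    }

    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(4711)) {       // port number is arbitrary
            while (true) {
                try (Socket client = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(client.getInputStream(), StandardCharsets.UTF_8));
                     PrintWriter out = new PrintWriter(
                             new OutputStreamWriter(client.getOutputStream(), StandardCharsets.UTF_8), true)) {
                    String request;
                    while ((request = in.readLine()) != null) {    // the client drives the communication
                        out.println(askKernel(translate(request)));
                    }
                }
            }
        }
    }
}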
The kernel``s response naturally depends on the query and can be a boolean value, a number or a string representing a value (e.g., a default color) or, a whole tuple (e.g., in response to a nearest tuple query). If a script was sent to the kernel (e.g., requesting all the tuples matching some criteria), then an arbitrary amount of data can be returned by the SAND server. In this case, the data is first compressed before it is sent over the network to the client. The data stream gets decompressed at the client before the results are parsed. Notice, that if another spatial database was to be used instead of the SAND kernel, then only a simple modification to the servlet would need to be made in order for the SAND Internet Browser to function properly. In particular, the queries sent by the client would need to be recoded into another query language which is native to this different spatial database. The format of the protocol used for communication between the servlet and the client is unaffected. 3. THE PEER-TO-PEER APPROACH Many users may want to work on a complete spatial data set for a prolonged period of time. In this case, making an initial investment of downloading the whole data set may be needed to guarantee a satisfactory session. Unfortunately, spatial data tends to be large. A few download requests to a large data set from a set of idle clients waiting to be served can slow the server to a crawl. This is due to the fact that the common client-server approach to transferring data between the two ends of a connection assumes a designated role for each one of the ends (i.e, some clients and a server). We built APPOINT as a centralized peer-to-peer system to demonstrate our approach for improving the common client-server systems. A server still exists. There is a central source for the data and a decision mechanism for the service. The environment still functions as a client-server environment under many circumstances. Yet, unlike many common client-server environments, APPOINT maintains more information about the clients. This includes, inventories of what each client downloads, their availabilities, etc.. When the client-server service starts to perform poorly or a request for a data item comes from a client with a poor connection to the server, APPOINT can start appointing appropriate active clients of the system to serve on behalf of the server, i.e., clients who have already volunteered their services and can take on the role of peers (hence, moving from a client-server scheme to a peer-to-peer scheme). The directory service for the active clients is still performed by the server but the server no longer serves all of the requests. In this scheme, clients are used mainly for the purpose of sharing their networking resources rather than introducing new content and hence they help offload the server and scale up the service. The existence of a server is simpler in terms of management of dynamic peers in comparison to pure peerto-peer approaches where a flood of messages to discover who is still active in the system should be used by each peer that needs to make a decision. The server is also the main source of data and under regular circumstances it may not forward the service. Data is assumed to be formed of files. A single file forms the atomic means of communication. APPOINT optimizes requests with respect to these atomic requests. Frequently accessed data sets are replicated as a byproduct of having been requested by a large number of users. 
This opens up the potential for bypassing the server in future downloads of the data by other users, as there are now many new points of access to it. Bypassing the server is useful when the server's bandwidth is limited. The existence of a server assures that unpopular data is also available at all times. The service depends on the availability of the server. The server is now more resilient to congestion as the service is more scalable. Backups and other maintenance activities are already being performed on the server and hence no extra administrative effort is needed for the dynamic peers. If a peer goes down, no extra precautions are taken. In fact, APPOINT does not require any additional resources from an already existing client-server environment but, instead, expands its capability. The peers simply get on to or get off from a table on the server. Uploading data is achieved in a similar manner as downloading data. For uploads, the active clients can again be utilized. Users can upload their data to a set of peers other than the server if the server is busy or resides in a distant location. Eventually, the data is propagated to the server. All of the operations are performed in a fashion that is transparent to the clients. Upon initial connection to the server, they can be queried as to whether or not they want to share their idle networking time and disk space. The rest of the operations follow transparently after the initial contact. APPOINT works on the application layer but not on lower layers. This achieves platform independence and easy deployment of the system. APPOINT is not a replacement but an addition to the current client-server architectures. We developed a library of function calls that, when placed in a client-server architecture, starts the service. We are developing advanced peer selection schemes that incorporate the location of active clients, bandwidth among active clients, data-size to be transferred, load on active clients, and availability of active clients to form a complete means of selecting the best clients that can become efficient alternatives to the server. With APPOINT we are defining a very simple API that could be used within an existing client-server system easily. Instead of denial of service or a slow connection, this API can be utilized to forward the service appropriately. The API for the server side is: start(serverPortNo) makeFileAvailable(file,location,boolean) callback receivedFile(file,location) callback errorReceivingFile(file,location,error) stop() Similarly, the API for the client side is: start(clientPortNo,serverPortNo,serverAddress) makeFileAvailable(file,location,boolean) receiveFile(file,location) sendFile(file,location) stop() The server, after starting the APPOINT service, can make all of the data files available to the clients by using the makeFileAvailable method. This will enable APPOINT to treat the server as one of the peers. The two callback methods of the server are invoked when a file is received from a client, or when an error is encountered while receiving a file from a client. APPOINT guarantees that at least one of the callbacks will be called so that the user (who may not be online anymore) can always be notified (i.e., via email). Figure 2: The localization operation in APPOINT. Clients localizing large data files can make these files available to the public by using the makeFileAvailable method on the client side.
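Since the API above is listed without a host language, the following Java sketch only illustrates how a server-side program might wire these calls together. The Appoint class, the callback interface, the parameter types and the file name are assumptions made for illustration; only the method names are taken from the listing above.

import java.nio.file.Path;

// Hypothetical Java rendering of the server-side APPOINT API; the types and the
// callback registration mechanism are assumptions, not the actual library.
interface AppointCallbacks {
    void receivedFile(Path file, String location);
    void errorReceivingFile(Path file, String location, String error);
}

class Appoint {
    private final AppointCallbacks callbacks;
    Appoint(AppointCallbacks callbacks) { this.callbacks = callbacks; }
    void start(int serverPortNo) { /* join the APPOINT service */ }
    void makeFileAvailable(Path file, String location, boolean available) { /* publish a data file */ }
    void stop() { /* leave the service */ }
}

public class AppointServerSketch {
    public static void main(String[] args) {
        AppointCallbacks callbacks = new AppointCallbacks() {
            public void receivedFile(Path file, String location) {
                System.out.println("upload completed: " + file + " from " + location);  // e.g. notify the user by email
            }
            public void errorReceivingFile(Path file, String location, String error) {
                System.err.println("upload failed: " + file + " (" + error + ")");
            }
        };
        Appoint appoint = new Appoint(callbacks);
        appoint.start(9000);                                            // port number is arbitrary
        // Make the server's data files available so that APPOINT can treat the server as one of the peers.
        appoint.makeFileAvailable(Path.of("data/arsenic.sand"), "server", true);   // hypothetical file name
        // ... serve requests; APPOINT appoints active clients to handle downloads when the server is loaded ...
        appoint.stop();
    }
}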
For example, in our SAND Internet Browser, we have the localization of spatial data as a function that can be chosen from our menus. This functionality enables users to download data sets completely to their local disks before starting their queries or analysis. In our implementation, we have calls to the APPOINT service both on the client and the server sides as mentioned above. Hence, when a localization request comes to the SAND Internet Browser, the browser leaves the decisions to optimally find and localize a data set to the APPOINT service. Our server also makes its data files available over APPOINT. The mechanism for the localization operation is shown with more details from the APPOINT protocols in Figure 2. The upload operation is performed in a similar fashion. 4. RELATED WORK There has been a substantial amount of research on remote access to spatial data. One specific approach has been adopted by numerous Web-based mapping services (MapQuest [5], MapsOnUs [6], etc.). The goal in this approach is to enable remote users, typically only equipped with standard Web browsers, to access the company``s spatial database server and retrieve information in the form of pictorial maps from them. The solution presented by most of these vendors is based on performing all the calculations on the server side and transferring only bitmaps that represent results of user queries and commands. Although the advantage of this solution is the minimization of both hardware and software resources on the client site, the resulting product has severe limitations in terms of available functionality and response time (each user action results in a new bitmap being transferred to the client). Work described in [9] examines a client-server architecture for viewing large images that operates over a lowbandwidth network connection. It presents a technique based on wavelet transformations that allows the minimization of the amount of data needed to be transferred over the network between the server and the client. In this case, while the server holds the full representation of the large image, only a limited amount of data needs to be transferred to the client to enable it to display a currently requested view into the image. On the client side, the image is reconstructed into a pyramid representation to speed up zooming and panning operations. Both the client and the server keep a common mask that indicates what parts of the image are available on the client and what needs to be requested. This also allows dropping unnecessary parts of the image from the main memory on the server. Other related work has been reported in [16] where a client-server architecture is described that is designed to provide end users with access to a server. It is assumed that this data server manages vast databases that are impractical to be stored on individual clients. This work blends raster data management (stored in pyramids [22]) with vector data stored in quadtrees [19, 20]. For our peer-to-peer transfer approach (APPOINT), Napster is the forefather where a directory service is centralized on a server and users exchange music files that they have stored on their local disks. Our application domain, where the data is already freely available to the public, forms a prime candidate for such a peer-to-peer approach. Gnutella is a pure (decentralized) peer-to-peer file exchange system. Unfortunately, it suffers from scalability issues, i.e., floods of messages between peers in order to map connectivity in the system are required. 
Other systems followed these popular systems, each addressing a different flavor of sharing over the Internet. Many peer-to-peer storage systems have also recently emerged. PAST [18], Eternity Service [7], CFS [10], and OceanStore [15] are some peer-to-peer storage systems. Some of these systems have focused on anonymity while others have focused on persistence of storage. Also, other approaches, like SETI@Home [21], made other resources, such as idle CPUs, work together over the Internet to solve large scale computational problems. Our goal is different from that of these approaches. With APPOINT, we want to improve existing client-server systems in terms of performance by using idle networking resources among active clients. Hence, other issues like anonymity, decentralization, and persistence of storage were less important in our decisions. Confirming the authenticity of the indirectly delivered data sets is not yet addressed with APPOINT. We want to expand our research, in the future, to address this issue. From our perspective, although APPOINT employs some of the techniques used in peer-to-peer systems, it is also closely related to current Web caching architectures. Squirrel [13] forms the middle ground. It creates a pure peer-to-peer collaborative Web cache among the Web browser caches of the machines in a local-area network. Except for this recent peer-to-peer approach, Web caching is mostly a well-studied topic in the realm of server/proxy level caching [8, 11, 14, 17]. Collaborative Web caching systems, the most relevant of these for our research, focus on creating hierarchical, hash-based, central directory-based, or multicast-based caching schemes. We do not compete with these approaches. In fact, APPOINT can work in tandem with collaborative Web caching if they are deployed together. We try to address the situation where a request arrives at a server, meaning all the caches report a miss. Hence, the point where the server is reached can be used to take a central decision, but then the actual service request can be forwarded to a set of active clients, i.e., the download and upload operations. Cache misses are especially common in the type of large data-based services on which we are working. Most of the Web caching schemes that are in use today employ a replacement policy that gives priority to replacing the largest-sized items over smaller-sized ones. Hence, these policies would lead to the immediate replacement of our relatively large data files even though they may be used frequently. In addition, in our case, the user community that accesses a certain data file may also be very dispersed from a network point of view and thus cannot take advantage of any of the caching schemes. Finally, none of the Web caching methods address the symmetric issue of large data uploads. 5. A SAMPLE APPLICATION FedStats [1] is an online source that enables ordinary citizens to access official statistics of numerous federal agencies without knowing in advance which agency produced them. We are using a FedStats data set as a testbed for our work. Our goal is to provide more power to the users of FedStats by utilizing the SAND Internet Browser. As an example, we looked at two data files corresponding to Environmental Protection Agency (EPA)-regulated facilities that have chlorine and arsenic, respectively.
For each file, we had the following information available: EPA-ID, name, street, city, state, zip code, latitude, longitude, followed by flags to indicate if that facility is in the following EPA programs: Hazardous Waste, Wastewater Discharge, Air Emissions, Abandoned Toxic Waste Dump, and Active Toxic Release. We put this data into a SAND relation where the spatial attribute `location'' corresponds to the latitude and longitude. Some queries that can be handled with our system on this data include: 1. Find all EPA-regulated facilities that have arsenic and participate in the Air Emissions program, and: (a) Lie in Georgia to Illinois, alphabetically. (b) Lie within Arkansas or 30 miles within its border. (c) Lie within 30 miles of the border of Arkansas (i.e., both sides of the border). 2. For each EPA-regulated facility that has arsenic, find all EPA-regulated facilities that have chlorine and: (a) That are closer to it than to any other EPAregulated facility that has arsenic. (b) That participate in the Air Emissions program and are closer to it than to any other EPAregulated facility which has arsenic. In order to avoid reporting a particular facility more than once, we use our `group by EPA-ID'' mechanism. Figure 3 illustrates the output of an example query that finds all arsenic sites within a given distance of the border of Arkansas. The sites are obtained in an incremental manner with respect to a given point. This ordering is shown by using different color shades. With this example data, it is possible to work with the SAND Internet Browser online as an applet (connecting to a remote server) or after localizing the data and then opening it locally. In the first case, for each action taken, the client-server architecture will decide what to ask for from the server. In the latter case, the browser will use the peerto-peer APPOINT architecture for first localizing the data. 6. CONCLUDING REMARKS An overview of our efforts in providing remote access to large spatial data has been given. We have outlined our approaches and introduced their individual elements. Our client-server approach improves the system performance by using efficient caching methods when a remote server is accessed from thin-clients. APPOINT forms an alternative approach that improves performance under an existing clientserver system by using idle client resources when individual users want work on a data set for longer periods of time using their client computers. For the future, we envision development of new efficient algorithms that will support large online data transfers within our peer-to-peer approach using multiple peers simultaneously. We assume that a peer (client) can become unavailable at any anytime and hence provisions need to be in place to handle such a situation. To address this, we will augment our methods to include efficient dynamic updates. Upon completion of this step of our work, we also plan to run comprehensive performance studies on our methods. Another issue is how to access data from different sources in different formats. In order to access multiple data sources in real time, it is desirable to look for a mechanism that would support data exchange by design. The XML protocol [3] has emerged to become virtually a standard for describing and communicating arbitrary data. GML [4] is an XML variant that is becoming increasingly popular for exchange of geographical data. 
We are currently working on making SAND XML-compatible so that the user can instantly retrieve spatial data provided by various agencies in the GML format via their Web services and then explore, query, or process this data further within the SAND framework. This will turn the SAND system into a universal tool for accessing any spatial data set as it will be deployable on most platforms, work efficiently given large amounts of data, be able to tap any GML-enabled data source, and provide an easy-to-use graphical user interface. This will also convert the SAND system from a research-oriented prototype into a product that could be used by end users for accessing, viewing, and analyzing their data efficiently and with minimum effort. Figure 3: Sample output from the SAND Internet Browser - Large dark dots indicate the result of a query that looks for all arsenic sites within a given distance from Arkansas. Different color shades are used to indicate ranking order by the distance from a given point. 7. REFERENCES [1] FedStats: The gateway to statistics from over 100 U.S. federal agencies. http://www.fedstats.gov/, 2001. [2] ArcInfo: Scalable system of software for geographic data creation, management, integration, analysis, and dissemination. http://www.esri.com/software/arcgis/arcinfo/index.html, 2002. [3] Extensible Markup Language (XML). http://www.w3.org/XML/, 2002. [4] Geography Markup Language (GML) 2.0. http://opengis.net/gml/01-029/GML2.html, 2002. [5] MapQuest: Consumer-focused interactive mapping site on the web. http://www.mapquest.com, 2002. [6] MapsOnUs: Suite of online geographic services. http://www.mapsonus.com, 2002. [7] R. Anderson. The Eternity Service. In Proceedings of the PRAGOCRYPT'96, pages 242-252, Prague, Czech Republic, September 1996. [8] L. Breslau, P. Cao, L. Fan, G. Phillips, and S. Shenker. Web caching and Zipf-like distributions: Evidence and implications. In Proceedings of the IEEE Infocom'99, pages 126-134, New York, NY, March 1999. [9] E. Chang, C. Yap, and T. Yen. Realtime visualization of large images over a thinwire. In R. Yagel and H. Hagen, editors, Proceedings IEEE Visualization'97 (Late Breaking Hot Topics), pages 45-48, Phoenix, AZ, October 1997. [10] F. Dabek, M. F. Kaashoek, D. Karger, R. Morris, and I. Stoica. Wide-area cooperative storage with CFS. In Proceedings of the ACM SOSP'01, pages 202-215, Banff, AL, October 2001. [11] A. Dingle and T. Partl. Web cache coherence. Computer Networks and ISDN Systems, 28(7-11):907-920, May 1996. [12] C. Esperança and H. Samet. Experience with SAND/Tcl: a scripting tool for spatial databases. Journal of Visual Languages and Computing, 13(2):229-255, April 2002. [13] S. Iyer, A. Rowstron, and P. Druschel. Squirrel: A decentralized peer-to-peer Web cache. Rice University/Microsoft Research, submitted for publication, 2002. [14] D. Karger, A. Sherman, A. Berkheimer, B. Bogstad, R. Dhanidina, K. Iwamoto, B. Kim, L. Matkins, and Y. Yerushalmi. Web caching with consistent hashing. Computer Networks, 31(11-16):1203-1213, May 1999. [15] J. Kubiatowicz, D. Bindel, Y. Chen, S. Czerwinski, P. Eaton, D. Geels, R. Gummadi, S. Rhea, H. Weatherspoon, W. Weimer, C. Wells, and B. Zhao. OceanStore: An architecture for global-scale persistent store. In Proceedings of the ACM ASPLOS'00, pages 190-201, Cambridge, MA, November 2000. [16] M. Potmesil. Maps alive: viewing geospatial information on the WWW. Computer Networks and ISDN Systems, 29(8-13):1327-1342, September 1997.
Also Hyper Proceedings of the 6th International World Wide Web Conference, Santa Clara, CA, April 1997. [17] M. Rabinovich, J. Chase, and S. Gadde. Not all hits are created equal: Cooperative proxy caching over a wide-area network. Computer Networks and ISDN Systems, 30(22-23):2253-2259, November 1998. [18] A. Rowstron and P. Druschel. Storage management and caching in PAST, a large-scale, persistent peer-to-peer storage utility. In Proceedings of the ACM SOSP'01, pages 160-173, Banff, AL, October 2001. [19] H. Samet. Applications of Spatial Data Structures: Computer Graphics, Image Processing, and GIS. Addison-Wesley, Reading, MA, 1990. [20] H. Samet. The Design and Analysis of Spatial Data Structures. Addison-Wesley, Reading, MA, 1990. [21] SETI@Home. http://setiathome.ssl.berkeley.edu/, 2001. [22] L. J. Williams. Pyramidal parametrics. Computer Graphics, 17(3):1-11, July 1983. Also Proceedings of the SIGGRAPH'83 Conference, Detroit, July 1983.
Remote Access to Large Spatial Databases * ABSTRACT Enterprises in the public and private sectors have been making their large spatial data archives available over the Internet. However, interactive work with such large volumes of online spatial data is a challenging task. We propose two efficient approaches to remote access to large spatial data. First, we introduce a client-server architecture where the work is distributed between the server and the individual clients for spatial query evaluation, data visualization, and data management. We enable the minimization of the requirements for system resources on the client side while maximizing system responsiveness as well as the number of connections one server can handle concurrently. Second, for prolonged periods of access to large online data, we introduce APPOINT (an Approach for Peer-to-Peer Offloading the INTernet). This is a centralized peer-to-peer approach that helps Internet users transfer large volumes of online data efficiently. In APPOINT, active clients of the clientserver architecture act on the server's behalf and communicate with each other to decrease network latency, improve service bandwidth, and resolve server congestions. 1. INTRODUCTION In recent years, enterprises in the public and private sectors have provided access to large volumes of spatial data over the Internet. Interactive work with such large volumes of online spatial data is a challenging task. We have been developing an interactive browser for accessing spatial online databases: the SAND (Spatial and Non-spatial Data) Internet Browser. Users of this browser can interactively and visually manipulate spatial data remotely. Unfortunately, interactive remote access to spatial data slows to a crawl without proper data access mechanisms. We developed two separate methods for improving the system performance, together, form a dynamic network infrastructure that is highly scalable and provides a satisfactory user experience for interactions with large volumes of online spatial data. The core functionality responsible for the actual database operations is performed by the server-based SAND system. SAND is a spatial database system developed at the University of Maryland [12]. The client-side SAND Internet Browser provides a graphical user interface to the facilities of SAND over the Internet. Users specify queries by choosing the desired selection conditions from a variety of menus and dialog boxes. SAND Internet Browser is Java-based, which makes it deployable across many platforms. In addition, since Java has often been installed on target computers beforehand, our clients can be deployed on these systems with little or no need for any additional software installation or customization. The system can start being utilized immediately without any prior setup which can be extremely beneficial in time-sensitive usage scenarios such as emergencies. There are two ways to deploy SAND. First, any standard Web browser can be used to retrieve and run the client piece (SAND Internet Browser) as a Java application or an applet. This way, users across various platforms can continuously access large spatial data on a remote location with little or no need for any preceding software installation. The second option is to use a stand-alone SAND Internet Browser along with a locally-installed Internet-enabled database management system (server piece). In this case, the SAND Internet Browser can still be utilized to view data from remote locations. 
However, frequently accessed data can be downloaded to the local database on demand, and subsequently accessed locally. Power users can also upload large volumes of spatial data back to the remote server using this enhanced client. We focused our efforts in two directions. We first aimed at developing a client-server architecture with efficient caching methods to balance local resources on one side and the significant latency of the network connection on the other. The low bandwidth of this connection is the primary concern in both cases. The outcome of this research primarily addresses the issues of our first type of usage (i.e., as a remote browser application or an applet) for our browser and other similar applications. The second direction aims at helping users that wish to manipulate large volumes of online data for prolonged periods. We have developed a centralized peerto-peer approach to provide the users with the ability to transfer large volumes of data (i.e., whole data sets to the local database) more efficiently by better utilizing the distributed network resources among active clients of a clientserver architecture. We call this architecture APPOINT--Approach for Peer-to-Peer Offloading the INTernet. The results of this research addresses primarily the issues of the second type of usage for our SAND Internet Browser (i.e., as a stand-alone application). The rest of this paper is organized as follows. Section 2 describes our client-server approach in more detail. Section 3 focuses on APPOINT, our peer-to-peer approach. Section 4 discusses our work in relation to existing work. Section 5 outlines a sample SAND Internet Browser scenario for both of our remote access approaches. Section 6 contains concluding remarks as well as future research directions. 2. THE CLIENT-SERVER APPROACH Traditionally, Geographic Information Systems (GIS) such as ArcInfo from ESRI [2] and many spatial databases are designed to be stand-alone products. The spatial database is kept on the same computer or local area network from where it is visualized and queried. This architecture allows for instantaneous transfer of large amounts of data between the spatial database and the visualization module so that it is perfectly reasonable to use large-bandwidth protocols for communication between them. There are however many applications where a more distributed approach is desirable. In these cases, the database is maintained in one location while users need to work with it from possibly distant sites over the network (e.g., the Internet). These connections can be far slower and less reliable than local area networks and thus it is desirable to limit the data flow between the database (server) and the visualization unit (client) in order to get a timely response from the system. Our client-server approach (Figure 1) allows the actual database engine to be run in a central location maintained by spatial database experts, while end users acquire a Javabased client component that provides them with a gateway into the SAND spatial database engine. Our client is more than a simple image viewer. Instead, it operates on vector data allowing the client to execute many operations such as zooming or locational queries locally. In Figure 1: SAND Internet Browser--Client-Server architecture. essence, a simple spatial database engine is run on the client. This database keeps a copy of a subset of the whole database whose full version is maintained on the server. This is a concept similar to ` caching'. 
In our case, the client acts as a lightweight server in that, given data, it evaluates queries and provides the visualization module with objects to be displayed. It initiates communication with the server only in cases where it does not have enough data stored locally. Since the locally run database is only updated when additional or newer data is needed, our architecture allows the system to minimize the network traffic between the client and the server when executing the most common user-side operations such as zooming and panning. In fact, as long as the user explores one region at a time (i.e., he or she is not panning all over the database), no additional data needs to be retrieved after the initial population of the client-side database. This makes the system much more responsive than the Web mapping services. Due to the complexity of evaluating arbitrary queries (i.e., queries more complex than the window queries that are needed for database visualization), we do not perform user-specified queries on the client. All user queries are still evaluated on the server side and the results are downloaded onto the client for display. However, assuming that the queries are selective enough (i.e., there are far fewer elements returned from the query than the number of elements in the database), the response delay is usually within reasonable limits. 2.1 Client-Server Communication As mentioned above, the SAND Internet Browser is the client piece of the remotely accessible spatial database server built around the SAND kernel. In order to communicate with the server, whose application programming interface (API) is a Tcl-based scripting language, a servlet specifically designed to interface the SAND Internet Browser with the SAND kernel is required on the server side. This servlet listens on a given port of the server for incoming requests from the client. It translates these requests into the SAND-Tcl language. Next, it transmits these SAND-Tcl commands or scripts to the SAND kernel. After results are provided by the kernel, the servlet fetches and processes them, and then sends those results back to the originating client. Once the Java servlet is launched, it waits for a client to initiate a connection. It handles both requests for the actual client Java code (needed when the client is run as an applet) and the SAND traffic. When the client piece is launched, it connects back to the SAND servlet. From then on, the communication is driven by the client piece; the server only responds to the client's queries. The client initiates a transaction by sending a query. The Java servlet parses the query and creates a corresponding SAND-Tcl expression or script in the SAND kernel's native format. It is then sent to the kernel for evaluation or execution. The kernel's response naturally depends on the query and can be a boolean value, a number, a string representing a value (e.g., a default color), or a whole tuple (e.g., in response to a nearest tuple query). If a script was sent to the kernel (e.g., requesting all the tuples matching some criteria), then an arbitrary amount of data can be returned by the SAND server. In this case, the data is first compressed before it is sent over the network to the client. The data stream gets decompressed at the client before the results are parsed. Notice that if another spatial database were to be used instead of the SAND kernel, then only a simple modification to the servlet would need to be made in order for the SAND Internet Browser to function properly. In particular, the queries sent by the client would need to be recoded into another query language which is native to this different spatial database; the format of the protocol used for communication between the servlet and the client is unaffected. A minimal sketch of this request-handling loop is given below. 
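All identifiers in the following sketch (SandServletSketch, SandKernel, translateToSandTcl) are hypothetical stand-ins; only the overall pattern follows the description above: read a client query, recode it into a SAND-Tcl script, hand it to the kernel, and compress the reply before returning it.

import java.io.*;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.zip.GZIPOutputStream;

/** Illustrative request loop for a SAND-style servlet; names and protocol details are assumptions. */
public class SandServletSketch {

    /** Stand-in for the SAND kernel binding; the real kernel evaluates SAND-Tcl scripts. */
    interface SandKernel { String eval(String sandTclScript); }

    private final SandKernel kernel;

    public SandServletSketch(SandKernel kernel) { this.kernel = kernel; }

    /** Listen on a port; each client query is translated, evaluated, compressed, and returned. */
    public void serve(int port) throws IOException {
        try (ServerSocket listener = new ServerSocket(port)) {
            while (true) {
                try (Socket client = listener.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(client.getInputStream()));
                     OutputStream rawOut = client.getOutputStream()) {
                    String query = in.readLine();              // communication is client-driven
                    if (query == null) continue;
                    String script = translateToSandTcl(query); // recode into the kernel's native format
                    String result = kernel.eval(script);       // evaluation happens in the kernel
                    try (GZIPOutputStream zipped = new GZIPOutputStream(rawOut)) {
                        zipped.write(result.getBytes("UTF-8")); // compress before sending back
                    }
                }
            }
        }
    }

    /** Hypothetical translation step; a different backend would only need a different translator. */
    private String translateToSandTcl(String clientQuery) {
        return "sand eval {" + clientQuery + "}";              // placeholder mapping, not real SAND-Tcl
    }
}

As the sketch suggests, swapping in a different spatial database would amount to replacing the translation step, leaving the client-facing protocol untouched.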
3. THE PEER-TO-PEER APPROACH Many users may want to work on a complete spatial data set for a prolonged period of time. In this case, making an initial investment of downloading the whole data set may be needed to guarantee a satisfactory session. Unfortunately, spatial data tends to be large. A few download requests for a large data set can slow the server to a crawl, leaving a set of idle clients waiting to be served. This is due to the fact that the common client-server approach to transferring data between the two ends of a connection assumes a designated role for each of the ends (i.e., some clients and a server). We built APPOINT as a centralized peer-to-peer system to demonstrate our approach for improving common client-server systems. A server still exists. There is a central source for the data and a decision mechanism for the service. The environment still functions as a client-server environment under many circumstances. Yet, unlike many common client-server environments, APPOINT maintains more information about the clients. This includes inventories of what each client downloads, their availabilities, etc. When the client-server service starts to perform poorly, or a request for a data item comes from a client with a poor connection to the server, APPOINT can start appointing appropriate active clients of the system to serve on behalf of the server, i.e., clients who have already volunteered their services and can take on the role of peers (hence moving from a client-server scheme to a peer-to-peer scheme). The directory service for the active clients is still performed by the server, but the server no longer serves all of the requests. In this scheme, clients are used mainly for the purpose of sharing their networking resources rather than introducing new content, and hence they help offload the server and scale up the service. Having a server also makes the management of dynamic peers simpler than in pure peer-to-peer approaches, where each peer that needs to make a decision must flood the system with messages to discover who is still active. The server is also the main source of data and, under regular circumstances, it may not forward the service. Data is assumed to be formed of files. A single file forms the atomic unit of communication, and APPOINT optimizes requests with respect to these atomic units. Frequently accessed data sets are replicated as a byproduct of having been requested by a large number of users. This opens up the potential for bypassing the server in future downloads of the data by other users, as there are now many new points of access to it. Bypassing the server is useful when the server's bandwidth is limited. The existence of a server assures that unpopular data is also available at all times. The service depends on the availability of the server. The server is now more resilient to congestion as the service is more scalable. Backups and other maintenance activities are already being performed on the server, and hence no extra administrative effort is needed for the dynamic peers. If a peer goes down, no extra precautions are taken. A small sketch of such a server-side directory and appointment step is given below. 
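The class and method names in the following sketch are our own assumptions; the real APPOINT peer selection also weighs factors such as client location, bandwidth, and load, which are reduced here to a single availability flag.

import java.util.*;

/** Illustrative sketch of the server-side directory used for appointing peers; not the actual APPOINT code. */
public class PeerDirectorySketch {

    /** What the server tracks per volunteered client: which files it holds and whether it is available. */
    static class PeerInfo {
        final String address;
        final Set<String> downloadedFiles = new HashSet<>();
        boolean available = true;
        PeerInfo(String address) { this.address = address; }
    }

    private final Map<String, PeerInfo> activePeers = new HashMap<>();

    /** Peers simply add themselves to (or later drop out of) this table. */
    public void register(String address) { activePeers.put(address, new PeerInfo(address)); }

    public void deregister(String address) { activePeers.remove(address); }

    /** The server records which files each active client has downloaded. */
    public void recordDownload(String address, String file) {
        PeerInfo p = activePeers.get(address);
        if (p != null) p.downloadedFiles.add(file);
    }

    /**
     * Decide who should serve a request. If the server is overloaded (or the requester has a
     * poor connection to it) and some available peer already holds the file, appoint that
     * peer; otherwise the server, as the central source, serves the request itself.
     */
    public String appoint(String file, boolean serverOverloaded) {
        if (serverOverloaded) {
            for (PeerInfo p : activePeers.values()) {
                if (p.available && p.downloadedFiles.contains(file)) {
                    return p.address;   // forward the transfer to this peer
                }
            }
        }
        return "server";                // fall back to the central source
    }
}

The key point the sketch tries to capture is that the server remains the decision maker and the guaranteed data source; peers only contribute their networking resources.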
In fact, APPOINT does not require any additional resources from an already existing client-server environment but, instead, expands its capability. The peers simply add themselves to, or remove themselves from, a table on the server. Uploading data is achieved in a manner similar to downloading data. For uploads, the active clients can again be utilized. Users can upload their data to a set of peers other than the server if the server is busy or resides in a distant location. Eventually the data is propagated to the server. All of the operations are performed in a fashion transparent to the clients. Upon initial connection to the server, they can be queried as to whether or not they want to share their idle networking time and disk space. The rest of the operations follow transparently after the initial contact. APPOINT works on the application layer and not on lower layers. This achieves platform independence and easy deployment of the system. APPOINT is not a replacement for, but an addition to, current client-server architectures. We developed a library of function calls that, when placed in a client-server architecture, starts the service. We are developing advanced peer selection schemes that incorporate the location of active clients, the bandwidth among active clients, the data size to be transferred, the load on active clients, and the availability of active clients to form a complete means of selecting the best clients that can become efficient alternatives to the server. With APPOINT we are defining a very simple API that could be used within an existing client-server system easily. Instead of denial of service or a slow connection, this API can be utilized to forward the service appropriately. On the server side, the server, after starting the APPOINT service, can make all of its data files available to the clients by using the makeFileAvailable method. This will enable APPOINT to treat the server as one of the peers. The two callback methods of the server are invoked when a file is received from a client, or when an error is encountered while receiving a file from a client. APPOINT guarantees that at least one of the callbacks will be called so that the user (who may not be online anymore) can always be notified (e.g., via email). Clients localizing large data files can make these files available to the public by using the makeFileAvailable method on the client side. For example, in our SAND Internet Browser, we offer the localization of spatial data as a function that can be chosen from our menus. This functionality enables users to download data sets completely to their local disks before starting their queries or analysis. In our implementation, we have calls to the APPOINT service on both the client and the server sides, as mentioned above. Hence, when a localization request comes to the SAND Internet Browser, the browser leaves the decision of how to optimally find and localize a data set to the APPOINT service. Our server also makes its data files available over APPOINT. The mechanism for the localization operation is shown in more detail, together with the APPOINT protocols, in Figure 2 (The localization operation in APPOINT). The upload operation is performed in a similar fashion. 4. RELATED WORK There has been a substantial amount of research on remote access to spatial data. One specific approach has been adopted by numerous Web-based mapping services (MapQuest [5], MapsOnUs [6], etc.). 
The goal in this approach is to enable remote users, typically only equipped with standard Web browsers, to access the company's spatial database server and retrieve information in the form of pictorial maps from them. The solution presented by most of these vendors is based on performing all the calculations on the server side and transferring only bitmaps that represent results of user queries and commands. Although the advantage of this solution is the minimization of both hardware and software resources on the client site, the resulting product has severe limitations in terms of available functionality and response time (each user action results in a new bitmap being transferred to the client). Work described in [9] examines a client-server architecture for viewing large images that operates over a lowbandwidth network connection. It presents a technique based on wavelet transformations that allows the minimization of the amount of data needed to be transferred over the network between the server and the client. In this case, while the server holds the full representation of the large image, only a limited amount of data needs to be transferred to the client to enable it to display a currently requested view into the image. On the client side, the image is reconstructed into a pyramid representation to speed up zooming and panning operations. Both the client and the server keep a common mask that indicates what parts of the image are available on the client and what needs to be requested. This also allows dropping unnecessary parts of the image from the main memory on the server. Other related work has been reported in [16] where a client-server architecture is described that is designed to provide end users with access to a server. It is assumed that this data server manages vast databases that are impractical to be stored on individual clients. This work blends raster data management (stored in pyramids [22]) with vector data stored in quadtrees [19, 20]. For our peer-to-peer transfer approach (APPOINT), Napster is the forefather where a directory service is centralized on a server and users exchange music files that they have stored on their local disks. Our application domain, where the data is already freely available to the public, forms a prime candidate for such a peer-to-peer approach. Gnutella is a pure (decentralized) peer-to-peer file exchange system. Unfortunately, it suffers from scalability issues, i.e., floods of messages between peers in order to map connectivity in the system are required. Other systems followed these popular systems, each addressing a different flavor of sharing over the Internet. Many peer-to-peer storage systems have also recently emerged. PAST [18], Eternity Service [7], CFS [10], and OceanStore [15] are some peer-to-peer storage systems. Some of these systems have focused on anonymity while others have focused on persistence of storage. Also, other approaches, like SETI@Home [21], made other resources, such as idle CPUs, work together over the Internet to solve large scale computational problems. Our goal is different than these approaches. With APPOINT, we want to improve existing client-server systems in terms of performance by using idle networking resources among active clients. Hence, other issues like anonymity, decentralization, and persistence of storage were less important in our decisions. Confirming the authenticity of the indirectly delivered data sets is not yet addressed with APPOINT. 
We want to expand our research, in the future, to address this issue. From our perspective, although APPOINT employs some of the techniques used in peer-to-peer systems, it is also closely related to current Web caching architectures. Squirrel [13] forms the middle ground. It creates a pure peer-topeer collaborative Web cache among the Web browser caches of the machines in a local-area network. Except for this recent peer-to-peer approach, Web caching is mostly a wellstudied topic in the realm of server/proxy level caching [8, 11, 14, 17]. Collaborative Web caching systems, the most relevant of these for our research, focus on creating either a hierarchical, hash-based, central directory-based, or multicast-based caching schemes. We do not compete with these approaches. In fact, APPOINT can work in tandem with collaborative Web caching if they are deployed together. We try to address the situation where a request arrives at a server, meaning all the caches report a miss. Hence, the point where the server is reached can be used to take a central decision but then the actual service request can be forwarded to a set of active clients, i.e., the down load and upload operations. Cache misses are especially common in the type of large data-based services on which we are working. Most of the Web caching schemes that are in use today employ a replacement policy that gives a priority to replacing the largest sized items over smaller-sized ones. Hence, these policies would lead to the immediate replacement of our relatively large data files even though they may be used frequently. In addition, in our case, the user community that accesses a certain data file may also be very dispersed from a network point of view and thus cannot take advantage of any of the caching schemes. Finally, none of the Web caching methods address the symmetric issue of large data uploads. 5. A SAMPLE APPLICATION FedStats [1] is an online source that enables ordinary citizens access to official statistics of numerous federal agencies without knowing in advance which agency produced them. We are using a FedStats data set as a testbed for our work. Our goal is to provide more power to the users of FedStats by utilizing the SAND Internet Browser. As an example, we looked at two data files corresponding to Environmental Protection Agency (EPA) - regulated facilities that have chlorine and arsenic, respectively. For each file, we had the following information available: EPA-ID, name, street, city, state, zip code, latitude, longitude, followed by flags to indicate if that facility is in the following EPA programs: Hazardous Waste, Wastewater Discharge, Air Emissions, Abandoned Toxic Waste Dump, and Active Toxic Release. We put this data into a SAND relation where the spatial attribute ` location' corresponds to the latitude and longitude. Some queries that can be handled with our system on this data include: 1. Find all EPA-regulated facilities that have arsenic and participate in the Air Emissions program, and: (a) Lie in Georgia to Illinois, alphabetically. (b) Lie within Arkansas or 30 miles within its border. (c) Lie within 30 miles of the border of Arkansas (i.e., both sides of the border). 2. For each EPA-regulated facility that has arsenic, find all EPA-regulated facilities that have chlorine and: (a) That are closer to it than to any other EPAregulated facility that has arsenic. (b) That participate in the Air Emissions program and are closer to it than to any other EPAregulated facility which has arsenic. 
In order to avoid reporting a particular facility more than once, we use our 'group by EPA-ID' mechanism. Figure 3 illustrates the output of an example query that finds all arsenic sites within a given distance of the border of Arkansas. The sites are obtained in an incremental manner with respect to a given point. This ordering is shown by using different color shades. With this example data, it is possible to work with the SAND Internet Browser online as an applet (connecting to a remote server) or after localizing the data and then opening it locally. In the former case, for each action taken, the client-server architecture will decide what to ask for from the server. In the latter case, the browser will use the peer-to-peer APPOINT architecture to first localize the data. 6. CONCLUDING REMARKS An overview of our efforts in providing remote access to large spatial data has been given. We have outlined our approaches and introduced their individual elements. Our client-server approach improves system performance by using efficient caching methods when a remote server is accessed from thin clients. APPOINT forms an alternative approach that improves performance under an existing client-server system by using idle client resources when individual users want to work on a data set for longer periods of time using their client computers. For the future, we envision the development of new, efficient algorithms that will support large online data transfers within our peer-to-peer approach using multiple peers simultaneously. We assume that a peer (client) can become unavailable at any time, and hence provisions need to be in place to handle such a situation. To address this, we will augment our methods to include efficient dynamic updates. Upon completion of this step of our work, we also plan to run comprehensive performance studies on our methods. Another issue is how to access data from different sources in different formats. In order to access multiple data sources in real time, it is desirable to look for a mechanism that supports data exchange by design. The XML protocol [3] has emerged as a virtual standard for describing and communicating arbitrary data. GML [4] is an XML variant that is becoming increasingly popular for the exchange of geographical data. We are currently working on making SAND XML-compatible so that the user can instantly retrieve spatial data provided by various agencies in the GML format via their Web services and then explore, query, or process this data further within the SAND framework. This will turn the SAND system into a universal tool for accessing any spatial data set, as it will be deployable on most platforms, work efficiently given large amounts of data, be able to tap any GML-enabled data source, and provide an easy-to-use graphical user interface. This will also convert the SAND system from a research-oriented prototype into a product that could be used by end users for accessing, viewing, and analyzing their data efficiently and with minimum effort.
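As a pointer toward the planned GML integration, the short sketch below extracts point coordinates from a GML fragment using only the standard Java XML APIs. It is illustrative only: it assumes a GML 2-style gml:coordinates encoding, and the eventual SAND GML support may parse richer geometries and use a different toolchain.

import java.io.StringReader;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

/** Illustrative only: extracts point coordinates from a GML 2-style fragment. */
public class GmlPointSketch {
    public static void main(String[] args) throws Exception {
        String gml =
            "<gml:Point xmlns:gml=\"http://www.opengis.net/gml\">" +
            "<gml:coordinates>-92.28,34.74</gml:coordinates>" +
            "</gml:Point>";

        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setNamespaceAware(true);   // GML relies on XML namespaces
        DocumentBuilder builder = factory.newDocumentBuilder();
        Document doc = builder.parse(new InputSource(new StringReader(gml)));

        NodeList coords = doc.getElementsByTagNameNS("http://www.opengis.net/gml", "coordinates");
        for (int i = 0; i < coords.getLength(); i++) {
            String[] xy = coords.item(i).getTextContent().trim().split(",");
            double x = Double.parseDouble(xy[0]);   // longitude in this example
            double y = Double.parseDouble(xy[1]);   // latitude in this example
            System.out.println("point: " + x + ", " + y);   // hand off to the spatial engine here
        }
    }
}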
Remote Access to Large Spatial Databases * ABSTRACT Enterprises in the public and private sectors have been making their large spatial data archives available over the Internet. However, interactive work with such large volumes of online spatial data is a challenging task. We propose two efficient approaches to remote access to large spatial data. First, we introduce a client-server architecture where the work is distributed between the server and the individual clients for spatial query evaluation, data visualization, and data management. We enable the minimization of the requirements for system resources on the client side while maximizing system responsiveness as well as the number of connections one server can handle concurrently. Second, for prolonged periods of access to large online data, we introduce APPOINT (an Approach for Peer-to-Peer Offloading the INTernet). This is a centralized peer-to-peer approach that helps Internet users transfer large volumes of online data efficiently. In APPOINT, active clients of the clientserver architecture act on the server's behalf and communicate with each other to decrease network latency, improve service bandwidth, and resolve server congestions. 1. INTRODUCTION In recent years, enterprises in the public and private sectors have provided access to large volumes of spatial data over the Internet. Interactive work with such large volumes of online spatial data is a challenging task. We have been developing an interactive browser for accessing spatial online databases: the SAND (Spatial and Non-spatial Data) Internet Browser. Users of this browser can interactively and visually manipulate spatial data remotely. Unfortunately, interactive remote access to spatial data slows to a crawl without proper data access mechanisms. We developed two separate methods for improving the system performance, together, form a dynamic network infrastructure that is highly scalable and provides a satisfactory user experience for interactions with large volumes of online spatial data. The core functionality responsible for the actual database operations is performed by the server-based SAND system. SAND is a spatial database system developed at the University of Maryland [12]. The client-side SAND Internet Browser provides a graphical user interface to the facilities of SAND over the Internet. Users specify queries by choosing the desired selection conditions from a variety of menus and dialog boxes. SAND Internet Browser is Java-based, which makes it deployable across many platforms. In addition, since Java has often been installed on target computers beforehand, our clients can be deployed on these systems with little or no need for any additional software installation or customization. The system can start being utilized immediately without any prior setup which can be extremely beneficial in time-sensitive usage scenarios such as emergencies. There are two ways to deploy SAND. First, any standard Web browser can be used to retrieve and run the client piece (SAND Internet Browser) as a Java application or an applet. This way, users across various platforms can continuously access large spatial data on a remote location with little or no need for any preceding software installation. The second option is to use a stand-alone SAND Internet Browser along with a locally-installed Internet-enabled database management system (server piece). In this case, the SAND Internet Browser can still be utilized to view data from remote locations. 
However, frequently accessed data can be downloaded to the local database on demand, and subsequently accessed locally. Power users can also upload large volumes of spatial data back to the remote server using this enhanced client. We focused our efforts in two directions. We first aimed at developing a client-server architecture with efficient caching methods to balance local resources on one side and the significant latency of the network connection on the other. The low bandwidth of this connection is the primary concern in both cases. The outcome of this research primarily addresses the issues of our first type of usage (i.e., as a remote browser application or an applet) for our browser and other similar applications. The second direction aims at helping users that wish to manipulate large volumes of online data for prolonged periods. We have developed a centralized peerto-peer approach to provide the users with the ability to transfer large volumes of data (i.e., whole data sets to the local database) more efficiently by better utilizing the distributed network resources among active clients of a clientserver architecture. We call this architecture APPOINT--Approach for Peer-to-Peer Offloading the INTernet. The results of this research addresses primarily the issues of the second type of usage for our SAND Internet Browser (i.e., as a stand-alone application). The rest of this paper is organized as follows. Section 2 describes our client-server approach in more detail. Section 3 focuses on APPOINT, our peer-to-peer approach. Section 4 discusses our work in relation to existing work. Section 5 outlines a sample SAND Internet Browser scenario for both of our remote access approaches. Section 6 contains concluding remarks as well as future research directions. 2. THE CLIENT-SERVER APPROACH 2.1 Client-Server Communication 3. THE PEER-TO-PEER APPROACH 4. RELATED WORK There has been a substantial amount of research on remote access to spatial data. One specific approach has been adopted by numerous Web-based mapping services (MapQuest [5], MapsOnUs [6], etc.). The goal in this approach is to enable remote users, typically only equipped with standard Web browsers, to access the company's spatial database server and retrieve information in the form of pictorial maps from them. The solution presented by most of these vendors is based on performing all the calculations on the server side and transferring only bitmaps that represent results of user queries and commands. Although the advantage of this solution is the minimization of both hardware and software resources on the client site, the resulting product has severe limitations in terms of available functionality and response time (each user action results in a new bitmap being transferred to the client). Work described in [9] examines a client-server architecture for viewing large images that operates over a lowbandwidth network connection. It presents a technique based on wavelet transformations that allows the minimization of the amount of data needed to be transferred over the network between the server and the client. In this case, while the server holds the full representation of the large image, only a limited amount of data needs to be transferred to the client to enable it to display a currently requested view into the image. On the client side, the image is reconstructed into a pyramid representation to speed up zooming and panning operations. 
Both the client and the server keep a common mask that indicates what parts of the image are available on the client and what needs to be requested. This also allows dropping unnecessary parts of the image from the main memory on the server. Other related work has been reported in [16] where a client-server architecture is described that is designed to provide end users with access to a server. It is assumed that this data server manages vast databases that are impractical to be stored on individual clients. This work blends raster data management (stored in pyramids [22]) with vector data stored in quadtrees [19, 20]. For our peer-to-peer transfer approach (APPOINT), Napster is the forefather where a directory service is centralized on a server and users exchange music files that they have stored on their local disks. Our application domain, where the data is already freely available to the public, forms a prime candidate for such a peer-to-peer approach. Gnutella is a pure (decentralized) peer-to-peer file exchange system. Unfortunately, it suffers from scalability issues, i.e., floods of messages between peers in order to map connectivity in the system are required. Other systems followed these popular systems, each addressing a different flavor of sharing over the Internet. Many peer-to-peer storage systems have also recently emerged. PAST [18], Eternity Service [7], CFS [10], and OceanStore [15] are some peer-to-peer storage systems. Some of these systems have focused on anonymity while others have focused on persistence of storage. Also, other approaches, like SETI@Home [21], made other resources, such as idle CPUs, work together over the Internet to solve large scale computational problems. Our goal is different than these approaches. With APPOINT, we want to improve existing client-server systems in terms of performance by using idle networking resources among active clients. Hence, other issues like anonymity, decentralization, and persistence of storage were less important in our decisions. Confirming the authenticity of the indirectly delivered data sets is not yet addressed with APPOINT. We want to expand our research, in the future, to address this issue. From our perspective, although APPOINT employs some of the techniques used in peer-to-peer systems, it is also closely related to current Web caching architectures. Squirrel [13] forms the middle ground. It creates a pure peer-topeer collaborative Web cache among the Web browser caches of the machines in a local-area network. Except for this recent peer-to-peer approach, Web caching is mostly a wellstudied topic in the realm of server/proxy level caching [8, 11, 14, 17]. Collaborative Web caching systems, the most relevant of these for our research, focus on creating either a hierarchical, hash-based, central directory-based, or multicast-based caching schemes. We do not compete with these approaches. In fact, APPOINT can work in tandem with collaborative Web caching if they are deployed together. We try to address the situation where a request arrives at a server, meaning all the caches report a miss. Hence, the point where the server is reached can be used to take a central decision but then the actual service request can be forwarded to a set of active clients, i.e., the down load and upload operations. Cache misses are especially common in the type of large data-based services on which we are working. 
Most of the Web caching schemes that are in use today employ a replacement policy that gives a priority to replacing the largest sized items over smaller-sized ones. Hence, these policies would lead to the immediate replacement of our relatively large data files even though they may be used frequently. In addition, in our case, the user community that accesses a certain data file may also be very dispersed from a network point of view and thus cannot take advantage of any of the caching schemes. Finally, none of the Web caching methods address the symmetric issue of large data uploads. 5. A SAMPLE APPLICATION 6. CONCLUDING REMARKS An overview of our efforts in providing remote access to large spatial data has been given. We have outlined our approaches and introduced their individual elements. Our client-server approach improves the system performance by using efficient caching methods when a remote server is accessed from thin-clients. APPOINT forms an alternative approach that improves performance under an existing clientserver system by using idle client resources when individual users want work on a data set for longer periods of time using their client computers. For the future, we envision development of new efficient algorithms that will support large online data transfers within our peer-to-peer approach using multiple peers simultaneously. We assume that a peer (client) can become unavailable at any anytime and hence provisions need to be in place to handle such a situation. To address this, we will augment our methods to include efficient dynamic updates. Upon completion of this step of our work, we also plan to run comprehensive performance studies on our methods. Another issue is how to access data from different sources in different formats. In order to access multiple data sources in real time, it is desirable to look for a mechanism that would support data exchange by design. The XML protocol [3] has emerged to become virtually a standard for describing and communicating arbitrary data. GML [4] is an XML variant that is becoming increasingly popular for exchange of geographical data. We are currently working on making SAND XML-compatible so that the user can instantly retrieve spatial data provided by various agencies in the GML format via their Web services and then explore, query, or process this data further within the SAND framework. This will turn the SAND system into a universal tool for accessing any spatial data set as it will be deployable on most platforms, work efficiently given large amounts of data, be able to tap any GML-enabled data source, and provide an easy to use graphical user interface. This will also convert the SAND system from a research-oriented prototype into a product that could be used by end users for accessing, viewing, and analyzing their data efficiently and with minimum effort.
Remote Access to Large Spatial Databases * ABSTRACT Enterprises in the public and private sectors have been making their large spatial data archives available over the Internet. However, interactive work with such large volumes of online spatial data is a challenging task. We propose two efficient approaches to remote access to large spatial data. First, we introduce a client-server architecture where the work is distributed between the server and the individual clients for spatial query evaluation, data visualization, and data management. We enable the minimization of the requirements for system resources on the client side while maximizing system responsiveness as well as the number of connections one server can handle concurrently. Second, for prolonged periods of access to large online data, we introduce APPOINT (an Approach for Peer-to-Peer Offloading the INTernet). This is a centralized peer-to-peer approach that helps Internet users transfer large volumes of online data efficiently. In APPOINT, active clients of the clientserver architecture act on the server's behalf and communicate with each other to decrease network latency, improve service bandwidth, and resolve server congestions. 1. INTRODUCTION In recent years, enterprises in the public and private sectors have provided access to large volumes of spatial data over the Internet. Interactive work with such large volumes of online spatial data is a challenging task. We have been developing an interactive browser for accessing spatial online databases: the SAND (Spatial and Non-spatial Data) Internet Browser. Users of this browser can interactively and visually manipulate spatial data remotely. Unfortunately, interactive remote access to spatial data slows to a crawl without proper data access mechanisms. We developed two separate methods for improving the system performance, together, form a dynamic network infrastructure that is highly scalable and provides a satisfactory user experience for interactions with large volumes of online spatial data. The core functionality responsible for the actual database operations is performed by the server-based SAND system. SAND is a spatial database system developed at the University of Maryland [12]. The client-side SAND Internet Browser provides a graphical user interface to the facilities of SAND over the Internet. SAND Internet Browser is Java-based, which makes it deployable across many platforms. There are two ways to deploy SAND. First, any standard Web browser can be used to retrieve and run the client piece (SAND Internet Browser) as a Java application or an applet. This way, users across various platforms can continuously access large spatial data on a remote location with little or no need for any preceding software installation. The second option is to use a stand-alone SAND Internet Browser along with a locally-installed Internet-enabled database management system (server piece). In this case, the SAND Internet Browser can still be utilized to view data from remote locations. However, frequently accessed data can be downloaded to the local database on demand, and subsequently accessed locally. Power users can also upload large volumes of spatial data back to the remote server using this enhanced client. We focused our efforts in two directions. We first aimed at developing a client-server architecture with efficient caching methods to balance local resources on one side and the significant latency of the network connection on the other. 
The outcome of this research primarily addresses the issues of our first type of usage (i.e., as a remote browser application or an applet) for our browser and other similar applications. The second direction aims at helping users that wish to manipulate large volumes of online data for prolonged periods. We have developed a centralized peerto-peer approach to provide the users with the ability to transfer large volumes of data (i.e., whole data sets to the local database) more efficiently by better utilizing the distributed network resources among active clients of a clientserver architecture. We call this architecture APPOINT--Approach for Peer-to-Peer Offloading the INTernet. The results of this research addresses primarily the issues of the second type of usage for our SAND Internet Browser (i.e., as a stand-alone application). Section 2 describes our client-server approach in more detail. Section 3 focuses on APPOINT, our peer-to-peer approach. Section 4 discusses our work in relation to existing work. Section 5 outlines a sample SAND Internet Browser scenario for both of our remote access approaches. Section 6 contains concluding remarks as well as future research directions. 4. RELATED WORK There has been a substantial amount of research on remote access to spatial data. The goal in this approach is to enable remote users, typically only equipped with standard Web browsers, to access the company's spatial database server and retrieve information in the form of pictorial maps from them. Work described in [9] examines a client-server architecture for viewing large images that operates over a lowbandwidth network connection. It presents a technique based on wavelet transformations that allows the minimization of the amount of data needed to be transferred over the network between the server and the client. In this case, while the server holds the full representation of the large image, only a limited amount of data needs to be transferred to the client to enable it to display a currently requested view into the image. On the client side, the image is reconstructed into a pyramid representation to speed up zooming and panning operations. Both the client and the server keep a common mask that indicates what parts of the image are available on the client and what needs to be requested. This also allows dropping unnecessary parts of the image from the main memory on the server. Other related work has been reported in [16] where a client-server architecture is described that is designed to provide end users with access to a server. It is assumed that this data server manages vast databases that are impractical to be stored on individual clients. This work blends raster data management (stored in pyramids [22]) with vector data stored in quadtrees [19, 20]. For our peer-to-peer transfer approach (APPOINT), Napster is the forefather where a directory service is centralized on a server and users exchange music files that they have stored on their local disks. Our application domain, where the data is already freely available to the public, forms a prime candidate for such a peer-to-peer approach. Gnutella is a pure (decentralized) peer-to-peer file exchange system. Unfortunately, it suffers from scalability issues, i.e., floods of messages between peers in order to map connectivity in the system are required. Other systems followed these popular systems, each addressing a different flavor of sharing over the Internet. Many peer-to-peer storage systems have also recently emerged. 
PAST [18], Eternity Service [7], CFS [10], and OceanStore [15] are some peer-to-peer storage systems. Some of these systems have focused on anonymity while others have focused on persistence of storage. Also, other approaches, like SETI@Home [21], made other resources, such as idle CPUs, work together over the Internet to solve large scale computational problems. Our goal is different than these approaches. With APPOINT, we want to improve existing client-server systems in terms of performance by using idle networking resources among active clients. Confirming the authenticity of the indirectly delivered data sets is not yet addressed with APPOINT. We want to expand our research, in the future, to address this issue. From our perspective, although APPOINT employs some of the techniques used in peer-to-peer systems, it is also closely related to current Web caching architectures. It creates a pure peer-topeer collaborative Web cache among the Web browser caches of the machines in a local-area network. We do not compete with these approaches. In fact, APPOINT can work in tandem with collaborative Web caching if they are deployed together. We try to address the situation where a request arrives at a server, meaning all the caches report a miss. Hence, the point where the server is reached can be used to take a central decision but then the actual service request can be forwarded to a set of active clients, i.e., the down load and upload operations. Cache misses are especially common in the type of large data-based services on which we are working. Hence, these policies would lead to the immediate replacement of our relatively large data files even though they may be used frequently. In addition, in our case, the user community that accesses a certain data file may also be very dispersed from a network point of view and thus cannot take advantage of any of the caching schemes. Finally, none of the Web caching methods address the symmetric issue of large data uploads. 6. CONCLUDING REMARKS An overview of our efforts in providing remote access to large spatial data has been given. We have outlined our approaches and introduced their individual elements. Our client-server approach improves the system performance by using efficient caching methods when a remote server is accessed from thin-clients. APPOINT forms an alternative approach that improves performance under an existing clientserver system by using idle client resources when individual users want work on a data set for longer periods of time using their client computers. For the future, we envision development of new efficient algorithms that will support large online data transfers within our peer-to-peer approach using multiple peers simultaneously. To address this, we will augment our methods to include efficient dynamic updates. Upon completion of this step of our work, we also plan to run comprehensive performance studies on our methods. Another issue is how to access data from different sources in different formats. In order to access multiple data sources in real time, it is desirable to look for a mechanism that would support data exchange by design. The XML protocol [3] has emerged to become virtually a standard for describing and communicating arbitrary data. GML [4] is an XML variant that is becoming increasingly popular for exchange of geographical data. 
We are currently working on making SAND XML-compatible so that the user can instantly retrieve spatial data provided by various agencies in the GML format via their Web services and then explore, query, or process this data further within the SAND framework. This will turn the SAND system into a universal tool for accessing any spatial data set as it will be deployable on most platforms, work efficiently given large amounts of data, be able to tap any GML-enabled data source, and provide an easy to use graphical user interface. This will also convert the SAND system from a research-oriented prototype into a product that could be used by end users for accessing, viewing, and analyzing their data efficiently and with minimum effort.
J-70
Self-interested Automated Mechanism Design and Implications for Optimal Combinatorial Auctions
Often, an outcome must be chosen on the basis of the preferences reported by a group of agents. The key difficulty is that the agents may report their preferences insincerely to make the chosen outcome more favorable to themselves. Mechanism design is the art of designing the rules of the game so that the agents are motivated to report their preferences truthfully, and a desirable outcome is chosen. In a recently proposed approach -- called automated mechanism design -- a mechanism is computed for the preference aggregation setting at hand. This has several advantages, but the downside is that the mechanism design optimization problem needs to be solved anew each time. Unlike the earlier work on automated mechanism design that studied a benevolent designer, in this paper we study automated mechanism design problems where the designer is self-interested. In this case, the center cares only about which outcome is chosen and what payments are made to it. The reason that the agents' preferences are relevant is that the center is constrained to making each agent at least as well off as the agent would have been had it not participated in the mechanism. In this setting, we show that designing optimal deterministic mechanisms is NP-complete in two important special cases: when the center is interested only in the payments made to it, and when payments are not possible and the center is interested only in the outcome chosen. We then show how allowing for randomization in the mechanism makes problems in this setting computationally easy. Finally, we show that the payment-maximizing AMD problem is closely related to an interesting variant of the optimal (revenue-maximizing) combinatorial auction design problem, where the bidders have best-only preferences. We show that here, too, designing an optimal deterministic auction is NP-complete, but designing an optimal randomized auction is easy.
[ "autom mechan design", "autom mechan design", "mechan design", "combinatori auction", "desir outcom", "prefer aggreg", "manipul", "individu ration", "nonmanipul mechan", "statist knowledg", "classic mechan", "payment maxim", "fallback outcom", "minsat", "self-interest amd", "complementar", "revenu maxim" ]
[ "P", "P", "P", "P", "P", "P", "U", "U", "M", "U", "M", "M", "M", "U", "R", "U", "U" ]
Self-interested Automated Mechanism Design and Implications for Optimal Combinatorial Auctions∗ Vincent Conitzer Carnegie Mellon University 5000 Forbes Avenue Pittsburgh, PA 15213, USA conitzer@cs.cmu.edu Tuomas Sandholm Carnegie Mellon University 5000 Forbes Avenue Pittsburgh, PA 15213, USA sandholm@cs.cmu.edu ABSTRACT Often, an outcome must be chosen on the basis of the preferences reported by a group of agents. The key difficulty is that the agents may report their preferences insincerely to make the chosen outcome more favorable to themselves. Mechanism design is the art of designing the rules of the game so that the agents are motivated to report their preferences truthfully, and a desirable outcome is chosen. In a recently proposed approach, called automated mechanism design, a mechanism is computed for the preference aggregation setting at hand. This has several advantages, but the downside is that the mechanism design optimization problem needs to be solved anew each time. Unlike the earlier work on automated mechanism design that studied a benevolent designer, in this paper we study automated mechanism design problems where the designer is self-interested. In this case, the center cares only about which outcome is chosen and what payments are made to it. The reason that the agents' preferences are relevant is that the center is constrained to making each agent at least as well off as the agent would have been had it not participated in the mechanism. In this setting, we show that designing optimal deterministic mechanisms is NP-complete in two important special cases: when the center is interested only in the payments made to it, and when payments are not possible and the center is interested only in the outcome chosen. We then show how allowing for randomization in the mechanism makes problems in this setting computationally easy. Finally, we show that the payment-maximizing AMD problem is closely related to an interesting variant of the optimal (revenue-maximizing) combinatorial auction design problem, where the bidders have best-only preferences. We show that here, too, designing an optimal deterministic auction is NP-complete, but designing an optimal randomized auction is easy. Categories and Subject Descriptors F.2 [Theory of Computation]: Analysis of Algorithms and Problem Complexity; J.4 [Computer Applications]: Social and Behavioral Sciences-Economics General Terms Algorithms, Economics, Theory 1. INTRODUCTION In multiagent settings, often an outcome must be chosen on the basis of the preferences reported by a group of agents. Such outcomes could be potential presidents, joint plans, allocations of goods or resources, etc. The preference aggregator generally does not know the agents' preferences a priori. Rather, the agents report their preferences to the coordinator. Unfortunately, an agent may have an incentive to misreport its preferences in order to mislead the mechanism into selecting an outcome that is more desirable to the agent than the outcome that would be selected if the agent revealed its preferences truthfully. Such manipulation is undesirable because preference aggregation mechanisms are tailored to aggregate preferences in a socially desirable way, and if the agents reveal their preferences insincerely, a socially undesirable outcome may be chosen. Manipulability is a pervasive problem across preference aggregation mechanisms. 
A seminal negative result, the Gibbard-Satterthwaite theorem, shows that under any nondictatorial preference aggregation scheme, if there are at least 3 possible outcomes, there are preferences under which an agent is better off reporting untruthfully [10, 23]. (A preference aggregation scheme is called dictatorial if one of the agents dictates the outcome no matter what preferences the other agents report.) What the aggregator would like to do is design a preference aggregation mechanism so that 1) the self-interested agents are motivated to report their preferences truthfully, and 2) the mechanism chooses an outcome that is desirable from the perspective of some objective. This is the classic setting of mechanism design in game theory. In this paper, we study the case where the designer is self-interested, that is, the designer does not directly care about how the outcome relates to the agents' preferences, but is rather concerned with its own agenda for which outcome should be chosen, and with maximizing payments to itself. This is the mechanism design setting most relevant to electronic commerce. In the case where the mechanism designer is interested in maximizing some notion of social welfare, the importance of collecting the agents' preferences is clear. It is perhaps less obvious why they should be collected when the designer is self-interested and hence its objective is not directly related to the agents' preferences. The reason for this is that often the agents' preferences impose limits on how the designer chooses the outcome and payments. The most common such constraint is that of individual rationality (IR), which means that the mechanism cannot make any agent worse off than the agent would have been had it not participated in the mechanism. For instance, in the setting of optimal auction design, the designer (auctioneer) is only concerned with how much revenue is collected, and not per se with how well the allocation of the good (or goods) corresponds to the agents' preferences. Nevertheless, the designer cannot force an agent to pay more than its valuation for the bundle of goods allocated to it. Therefore, even a self-interested designer will choose an outcome that makes the agents reasonably well off. On the other hand, the designer will not necessarily choose a social welfare maximizing outcome. For example, if the designer always chooses an outcome that maximizes social welfare with respect to the reported preferences, and forces each agent to pay the difference between the utility it has now and the utility it would have had if it had not participated in the mechanism, it is easy to see that agents may have an incentive to misreport their preferences, and this may actually lead to less revenue being collected. Indeed, one of the counterintuitive results of optimal auction design theory is that sometimes the good is allocated to nobody even when the auctioneer has a reservation price of 0. Classical mechanism design provides some general mechanisms, which, under certain assumptions, satisfy some notion of nonmanipulability and maximize some objective. The upside of these mechanisms is that they do not rely on (even probabilistic) information about the agents' preferences (e.g., the Vickrey-Clarke-Groves (VCG) mechanism [24, 4, 11]), or they can be easily applied to any probability distribution over the preferences (e.g., the dAGVA mechanism [8, 2], the Myerson auction [18], and the Maskin-Riley multi-unit auction [17]). 
However, the general mechanisms also have significant downsides: • The most famous and most broadly applicable general mechanisms, VCG and dAGVA, only maximize social welfare. If the designer is self-interested, as is the case in many electronic commerce settings, these mechanisms do not maximize the designer``s objective. • The general mechanisms that do focus on a selfinterested designer are only applicable in very restricted settings-such as Myerson``s expected revenue maximizing auction for selling a single item, and Maskin and Riley``s expected revenue maximizing auction for selling multiple identical units of an item. • Even in the restricted settings in which these mechanisms apply, the mechanisms only allow for payment maximization. In practice, the designer may also be interested in the outcome per se. For example, an auctioneer may care which bidder receives the item. • It is often assumed that side payments can be used to tailor the agents'' incentives, but this is not always practical. For example, in barter-based electronic marketplaces-such as Recipco, firstbarter.com, BarterOne, and Intagio-side payments are not allowed. Furthermore, among software agents, it might be more desirable to construct mechanisms that do not rely on the ability to make payments, because many software agents do not have the infrastructure to make payments. In contrast, we follow a recent approach where the mechanism is designed automatically for the specific problem at hand. This approach addresses all of the downsides listed above. We formulate the mechanism design problem as an optimization problem. The input is characterized by the number of agents, the agents'' possible types (preferences), and the aggregator``s prior distributions over the agents'' types. The output is a nonmanipulable mechanism that is optimal with respect to some objective. This approach is called automated mechanism design. The automated mechanism design approach has four advantages over the classical approach of designing general mechanisms. First, it can be used even in settings that do not satisfy the assumptions of the classical mechanisms (such as availability of side payments or that the objective is social welfare). Second, it may allow one to circumvent impossibility results (such as the Gibbard-Satterthwaite theorem) which state that there is no mechanism that is desirable across all preferences. When the mechanism is designed for the setting at hand, it does not matter that it would not work more generally. Third, it may yield better mechanisms (in terms of stronger nonmanipulability guarantees and/or better outcomes) than classical mechanisms because the mechanism capitalizes on the particulars of the setting (the probabilistic information that the designer has about the agents'' types). Given the vast amount of information that parties have about each other today, this approach is likely to lead to tremendous savings over classical mechanisms, which largely ignore that information. For example, imagine a company automatically creating its procurement mechanism based on statistical knowledge about its suppliers, rather than using a classical descending procurement auction. Fourth, the burden of design is shifted from humans to a machine. However, automated mechanism design requires the mechanism design optimization problem to be solved anew for each setting. Hence its computational complexity becomes a key issue. 
Previous research has studied this question for benevolent designers, that is, designers who wish to maximize, for example, social welfare [5, 6]. In this paper we study the computational complexity of automated mechanism design in the case of a self-interested designer. This is an important setting for automated mechanism design due to the shortage of general mechanisms in this area, and the fact that in most e-commerce settings the designer is self-interested. We also show that this problem is closely related to a particular optimal (revenue-maximizing) combinatorial auction design problem.

The rest of this paper is organized as follows. In Section 2, we justify the focus on nonmanipulable mechanisms. In Section 3, we define the problem we study. In Section 4, we show that designing an optimal deterministic mechanism is NP-complete even when the designer only cares about the payments made to it. In Section 5, we show that designing an optimal deterministic mechanism is also NP-complete when payments are not possible and the designer is only interested in the outcome chosen. In Section 6, we show that an optimal randomized mechanism can be designed in polynomial time even in the general case. Finally, in Section 7, we show that for designing optimal combinatorial auctions under best-only preferences, our results on AMD imply that this problem is NP-complete for deterministic auctions, but easy for randomized auctions.

2. JUSTIFYING THE FOCUS ON NONMANIPULABLE MECHANISMS

Before we define the computational problem of automated mechanism design, we should justify our focus on nonmanipulable mechanisms. After all, it is not immediately obvious that there are no manipulable mechanisms that, even when agents report their types strategically and hence sometimes untruthfully, still reach better outcomes (according to whatever objective we use) than any nonmanipulable mechanism. This does, however, turn out to be the case: given any mechanism, we can construct a nonmanipulable mechanism whose performance is identical, as follows. We build an interface layer between the agents and the original mechanism. The agents report their preferences (or types) to the interface layer; subsequently, the interface layer inputs into the original mechanism the types that the agents would have strategically reported to the original mechanism, if their types were as declared to the interface layer. The resulting outcome is the outcome of the new mechanism. Since the interface layer acts strategically on each agent's behalf, there is never an incentive to report falsely to the interface layer; and hence, the types reported by the interface layer are the strategic types that would have been reported without the interface layer, so the results are exactly as they would have been with the original mechanism. This argument is known in the mechanism design literature as the revelation principle [16]. (There are computational difficulties with applying the revelation principle in large combinatorial outcome and type spaces [7, 22]. However, because here we focus on flatly represented outcome and type spaces, this is not a concern here.) Given this, we can focus on truthful mechanisms in the rest of the paper.
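The interface-layer construction described above can be phrased as a small wrapper. The sketch below is our own illustration; best_response is a hypothetical stand-in for the strategic report an agent of the declared type would have chosen against the original mechanism.

```python
# Illustrative sketch of the revelation-principle argument: wrapping an arbitrary
# (possibly manipulable) mechanism in an interface layer that strategizes on each
# agent's behalf yields a direct mechanism in which truthful reporting is optimal.
def make_direct(original_mechanism, best_response):
    """original_mechanism: maps a list of reports to an outcome (and payments).
    best_response(i, type_i): hypothetical report agent i of type type_i would
    send to the original mechanism; both arguments are assumptions of this sketch."""
    def direct_mechanism(declared_types):
        strategic_reports = [best_response(i, t) for i, t in enumerate(declared_types)]
        return original_mechanism(strategic_reports)
    return direct_mechanism
```

Because the layer already plays each agent's best response, declaring one's true type to the wrapped mechanism yields exactly the outcome that strategic play would have produced, which is the content of the revelation principle.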
3. DEFINITIONS

We now formalize the automated mechanism design setting.

Definition 1. In an automated mechanism design setting, we are given:
• a finite set of outcomes O;
• a finite set of N agents;
• for each agent i,
  1. a finite set of types Θi,
  2. a probability distribution γi over Θi (in the case of correlated types, there is a single joint distribution γ over Θ1 × ... × ΘN), and
  3. a utility function ui : Θi × O → R;
• An objective function whose expectation the designer wishes to maximize.

(Footnote 1: Though this follows standard game theory notation [16], the fact that the agent has both a utility function and a type is perhaps confusing. The types encode the various possible preferences that the agent may turn out to have, and the agent's type is not known to the aggregator. The utility function is common knowledge, but because the agent's type is a parameter in the agent's utility function, the aggregator cannot know what the agent's utility is without knowing the agent's type.)

There are many possible objective functions the designer might have, for example, social welfare (where the designer seeks to maximize the sum of the agents' utilities), or the minimum utility of any agent (where the designer seeks to maximize the worst utility had by any agent). In both of these cases, the designer is benevolent, because the designer, in some sense, is pursuing the agents' collective happiness. However, in this paper, we focus on the case of a self-interested designer. A self-interested designer cares only about the outcome chosen (that is, the designer does not care how the outcome relates to the agents' preferences, but rather has a fixed preference over the outcomes), and about the net payments made by the agents, which flow to the designer.

Definition 2. A self-interested designer has an objective function given by g(o) + Σ_{i=1}^{N} πi, where g : O → R indicates the designer's own preference over the outcomes, and πi is the payment made by agent i. In the case where g = 0 everywhere, the designer is said to be payment maximizing. In the case where payments are not possible, g constitutes the objective function by itself.

We now define the kinds of mechanisms under study. By the revelation principle, we can restrict attention to truthful, direct revelation mechanisms, where agents report their types directly and never have an incentive to misreport them.

Definition 3. We consider the following kinds of mechanism:
• A deterministic mechanism without payments consists of an outcome selection function o : Θ1 × Θ2 × ... × ΘN → O.
• A randomized mechanism without payments consists of a distribution selection function p : Θ1 × Θ2 × ... × ΘN → P(O), where P(O) is the set of probability distributions over O.
• A deterministic mechanism with payments consists of an outcome selection function o : Θ1 × Θ2 × ... × ΘN → O and, for each agent i, a payment selection function πi : Θ1 × Θ2 × ... × ΘN → R, where πi(θ1, ..., θN) gives the payment made by agent i when the reported types are θ1, ..., θN.
• A randomized mechanism with payments consists of a distribution selection function p : Θ1 × Θ2 × ... × ΘN → P(O) and, for each agent i, a payment selection function πi : Θ1 × Θ2 × ... × ΘN → R.

(Footnote 2: We do not randomize over payments because, as long as the agents and the designer are risk neutral with respect to payments, that is, their utility is linear in payments, there is no reason to randomize over payments.)
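For the finite, flatly represented spaces studied here, the objects of Definition 3 are simply lookup tables indexed by reported type vectors. The sketch below is our own illustration of such an encoding (the type and outcome names are assumptions carried over from the earlier input sketch).

```python
# Illustrative encodings of Definition 3 for finite spaces (identifiers are
# assumptions of this sketch). Keys are reported type vectors (theta_1, theta_2).
# Deterministic mechanism with payments: an outcome table and a payment table.
deterministic_mechanism = {
    "outcome":  {("low", "low"): "o0", ("low", "high"): "o2",
                 ("high", "low"): "o1", ("high", "high"): "o1"},
    "payments": {("low", "low"): (0.0, 0.0), ("low", "high"): (0.0, 1.0),
                 ("high", "low"): (1.0, 0.0), ("high", "high"): (1.0, 1.0)},
}
# Randomized mechanism with payments: each type vector maps to a probability
# distribution over outcomes; payments stay deterministic (see Footnote 2).
randomized_mechanism = {
    "distribution": {("low", "low"):  {"o0": 1.0},
                     ("low", "high"): {"o1": 0.4, "o2": 0.6},
                     ("high", "low"): {"o1": 0.6, "o2": 0.4},
                     ("high", "high"): {"o1": 0.5, "o2": 0.5}},
    "payments":     {("low", "low"): (0.0, 0.0), ("low", "high"): (0.0, 1.2),
                     ("high", "low"): (1.2, 0.0), ("high", "high"): (1.2, 1.2)},
}
```

Whether tables like these are acceptable to the designer is exactly what the IR and IC constraints defined next pin down.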
There are two types of constraint on the designer in building the mechanism.

3.1 Individual rationality (IR) constraints

The first type of constraint is the following. The utility of each agent has to be at least as great as the agent's fallback utility, that is, the utility that the agent would receive if it did not participate in the mechanism. Otherwise that agent would not participate in the mechanism, and no agent's participation can ever hurt the mechanism designer's objective because at worst, the mechanism can ignore an agent by pretending the agent is not there. (Furthermore, if no such constraint applied, the designer could simply make the agents pay an infinite amount.) This type of constraint is called an IR (individual rationality) constraint. There are three different possible IR constraints: ex ante, ex interim, and ex post, depending on what the agent knows about its own type and the others' types when deciding whether to participate in the mechanism. Ex ante IR means that the agent would participate if it knew nothing at all (not even its own type). We will not study this concept in this paper. Ex interim IR means that the agent would always participate if it knew only its own type, but not those of the others. Ex post IR means that the agent would always participate even if it knew everybody's type. We will define the latter two notions of IR formally. First, we need to formalize the concept of the fallback outcome. We assume that each agent's fallback utility is zero for each one of its types. This is without loss of generality because we can add a constant term to an agent's utility function (for a given type), without affecting the decision-making behavior of that expected utility maximizing agent [16].

Definition 4. In any automated mechanism design setting with an IR constraint, there is a fallback outcome o0 ∈ O where, for any agent i and any type θi ∈ Θi, we have ui(θi, o0) = 0. (Additionally, in the case of a self-interested designer, g(o0) = 0.)

We can now define the notions of individual rationality.

Definition 5. Individual rationality (IR) is defined by:
• A deterministic mechanism is ex interim IR if for any agent i and any type θi ∈ Θi, we have E_{(θ1,...,θi−1,θi+1,...,θN)|θi} [ui(θi, o(θ1, ..., θN)) − πi(θ1, ..., θN)] ≥ 0. A randomized mechanism is ex interim IR if for any agent i and any type θi ∈ Θi, we have E_{(θ1,...,θi−1,θi+1,...,θN)|θi} E_{o|θ1,...,θN} [ui(θi, o) − πi(θ1, ..., θN)] ≥ 0.
• A deterministic mechanism is ex post IR if for any agent i and any type vector (θ1, ..., θN) ∈ Θ1 × ... × ΘN, we have ui(θi, o(θ1, ..., θN)) − πi(θ1, ..., θN) ≥ 0. A randomized mechanism is ex post IR if for any agent i and any type vector (θ1, ..., θN) ∈ Θ1 × ... × ΘN, we have E_{o|θ1,...,θN} [ui(θi, o) − πi(θ1, ..., θN)] ≥ 0.
The terms involving payments can be left out in the case where payments are not possible.
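For finite spaces, both IR notions of Definition 5 can be verified by direct enumeration. The sketch below is our own code (not from the paper); it assumes the table encoding sketched after Definition 3, independent priors, and a deterministic mechanism with payments.

```python
from itertools import product

# Illustrative IR checks for a deterministic mechanism with payments.
# types[i]: list of agent i's types; prior[i][t]: probability of type t
# (independence assumed); utility[i][(t, o)]: u_i(t, o);
# outcome[tv]: chosen outcome for type vector tv; payments[tv][i]: pi_i(tv).
def is_ex_post_ir(types, utility, outcome, payments):
    for tv in product(*types):
        for i, ti in enumerate(tv):
            if utility[i][(ti, outcome[tv])] - payments[tv][i] < -1e-9:
                return False
    return True

def is_ex_interim_ir(types, prior, utility, outcome, payments):
    for i in range(len(types)):
        for ti in types[i]:
            expected = 0.0
            for tv in product(*types):
                if tv[i] != ti:
                    continue
                # Probability of the other agents' types, conditional on theta_i.
                p = 1.0
                for j, tj in enumerate(tv):
                    if j != i:
                        p *= prior[j][tj]
                expected += p * (utility[i][(ti, outcome[tv])] - payments[tv][i])
            if expected < -1e-9:
                return False
    return True
```

As the definitions suggest, ex post IR implies ex interim IR: if every realized type vector leaves an agent with non-negative utility, then so does the conditional expectation over the others' types.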
3.2 Incentive compatibility (IC) constraints

The second type of constraint says that the agents should never have an incentive to misreport their type (as justified above by the revelation principle). For this type of constraint, the two most common variants (or solution concepts) are implementation in dominant strategies, and implementation in Bayes-Nash equilibrium.

Definition 6. Given an automated mechanism design setting, a mechanism is said to implement its outcome and payment functions in dominant strategies if truthtelling is always optimal even when the types reported by the other agents are already known. Formally, for any agent i, any type vector (θ1, ..., θi, ..., θN) ∈ Θ1 × ... × Θi × ... × ΘN, and any alternative type report θ̂i ∈ Θi, in the case of deterministic mechanisms we have
ui(θi, o(θ1, ..., θi, ..., θN)) − πi(θ1, ..., θi, ..., θN) ≥ ui(θi, o(θ1, ..., θ̂i, ..., θN)) − πi(θ1, ..., θ̂i, ..., θN).
In the case of randomized mechanisms we have
E_{o|θ1,...,θi,...,θN} [ui(θi, o) − πi(θ1, ..., θi, ..., θN)] ≥ E_{o|θ1,...,θ̂i,...,θN} [ui(θi, o) − πi(θ1, ..., θ̂i, ..., θN)].
The terms involving payments can be left out in the case where payments are not possible.

Thus, in dominant strategies implementation, truthtelling is optimal regardless of what the other agents report. If it is optimal only given that the other agents are truthful, and given that one does not know the other agents' types, we have implementation in Bayes-Nash equilibrium.

Definition 7. Given an automated mechanism design setting, a mechanism is said to implement its outcome and payment functions in Bayes-Nash equilibrium if truthtelling is always optimal to an agent when that agent does not yet know anything about the other agents' types, and the other agents are telling the truth. Formally, for any agent i, any type θi ∈ Θi, and any alternative type report θ̂i ∈ Θi, in the case of deterministic mechanisms we have
E_{(θ1,...,θi−1,θi+1,...,θN)|θi} [ui(θi, o(θ1, ..., θi, ..., θN)) − πi(θ1, ..., θi, ..., θN)] ≥ E_{(θ1,...,θi−1,θi+1,...,θN)|θi} [ui(θi, o(θ1, ..., θ̂i, ..., θN)) − πi(θ1, ..., θ̂i, ..., θN)].
In the case of randomized mechanisms we have
E_{(θ1,...,θi−1,θi+1,...,θN)|θi} E_{o|θ1,...,θi,...,θN} [ui(θi, o) − πi(θ1, ..., θi, ..., θN)] ≥ E_{(θ1,...,θi−1,θi+1,...,θN)|θi} E_{o|θ1,...,θ̂i,...,θN} [ui(θi, o) − πi(θ1, ..., θ̂i, ..., θN)].
The terms involving payments can be left out in the case where payments are not possible.

3.3 Automated mechanism design

We can now define the computational problem we study.

Definition 8. (AUTOMATED-MECHANISM-DESIGN (AMD)) We are given:
• an automated mechanism design setting,
• an IR notion (ex interim, ex post, or none),
• a solution concept (dominant strategies or Bayes-Nash),
• whether payments are possible,
• whether randomization is possible,
• (in the decision variant of the problem) a target value G.
We are asked whether there exists a mechanism of the specified kind (in terms of payments and randomization) that satisfies both the IR notion and the solution concept, and gives an expected value of at least G for the objective.

An interesting special case is the setting where there is only one agent. In this case, the reporting agent always knows everything there is to know about the other agents' types, because there are no other agents. Since ex post and ex interim IR only differ on what an agent is assumed to know about other agents' types, the two IR concepts coincide here. Also, because implementation in dominant strategies and implementation in Bayes-Nash equilibrium only differ on what an agent is assumed to know about other agents' types, the two solution concepts coincide here. This observation will prove to be a useful tool in proving hardness results: if we prove computational hardness in the single-agent setting, this immediately implies hardness for both IR concepts, for both solution concepts, for any number of agents.

4. PAYMENT-MAXIMIZING DETERMINISTIC AMD IS HARD

In this section we demonstrate that it is NP-complete to design a deterministic mechanism that maximizes the expected sum of the payments collected from the agents.
We show that this problem is hard even in the single-agent setting, thereby immediately showing it hard for both IR concepts, for both solution concepts. To demonstrate NPhardness, we reduce from the MINSAT problem. Definition 9 (MINSAT). We are given a formula φ in conjunctive normal form, represented by a set of Boolean variables V and a set of clauses C, and an integer K (K < |C|). We are asked whether there exists an assignment to the variables in V such that at most K clauses in φ are satisfied. MINSAT was recently shown to be NP-complete [14]. We can now present our result. Theorem 1. Payment-maximizing deterministic AMD is NP-complete, even for a single agent, even with a uniform distribution over types. Proof. It is easy to show that the problem is in NP. To show NP-hardness, we reduce an arbitrary MINSAT instance to the following single-agent payment-maximizing deterministic AMD instance. Let the agent``s type set be Θ = {θc : c ∈ C} ∪ {θv : v ∈ V }, where C is the set of clauses in the MINSAT instance, and V is the set of variables. Let the probability distribution over these types be uniform. Let the outcome set be O = {o0} ∪ {oc : c ∈ C} ∪ {ol : l ∈ L}, where L is the set of literals, that is, L = {+v : v ∈ V } ∪ {−v : v ∈ V }. Let the notation v(l) = v denote that v is the variable corresponding to the literal l, that is, l ∈ {+v, −v}. Let l ∈ c denote that the literal l occurs in clause c. Then, let the agent``s utility function be given by u(θc, ol) = |Θ| + 1 for all l ∈ L with l ∈ c; u(θc, ol) = 0 for all l ∈ L with l /∈ c; u(θc, oc) = |Θ| + 1; u(θc, oc ) = 0 for all c ∈ C with c = c ; u(θv, ol) = |Θ| for all l ∈ L with v(l) = v; u(θv, ol) = 0 for all l ∈ L with v(l) = v; u(θv, oc) = 0 for all c ∈ C. The goal of the AMD instance is G = |Θ| + |C|−K |Θ| , where K is the goal of the MINSAT instance. We show the instances are equivalent. First, suppose there is a solution to the MINSAT instance. Let the assignment of truth values to the variables in this solution be given by the function f : V → L (where v(f(v)) = v for all v ∈ V ). Then, for every v ∈ V , let o(θv) = of(v) and π(θv) = |Θ|. For every c ∈ C, let o(θc) = oc; let π(θc) = |Θ| + 1 if c is not satisfied in the MINSAT solution, and π(θc) = |Θ| if c is satisfied. It is straightforward to check that the IR constraint is satisfied. We now check that the agent has no incentive to misreport. If the agent``s type is some θv, then any other report will give it an outcome that is no better, for a payment that is no less, so it has no incentive to misreport. If the agent``s type is some θc where c is a satisfied clause, again, any other report will give it an outcome that is no better, for a payment that is no less, so it has no incentive to misreport. The final case to check is where the agent``s type is some θc where c is an unsatisfied clause. In this case, we observe that for none of the types, reporting it leads to an outcome ol for a literal l ∈ c, precisely because the clause is not satisfied in the MINSAT instance. Because also, no type besides θc leads to the outcome oc, reporting any other type will give an outcome with utility 0, while still forcing a payment of at least |Θ| from the agent. Clearly the agent is better off reporting truthfully, for a total utility of 0. This establishes that the agent never has an incentive to misreport. Finally, we show that the goal is reached. 
If s is the number of satisfied clauses in the MINSAT solution (so that s ≤ K), the expected payment from this mechanism is |V ||Θ|+s|Θ|+(|C|−s)(|Θ|+1) |Θ| ≥ |V ||Θ|+K|Θ|+(|C|−K)(|Θ|+1) |Θ| = |Θ| + |C|−K |Θ| = G. So there is a solution to the AMD instance. Now suppose there is a solution to the AMD instance, given by an outcome function o and a payment function π. First, suppose there is some v ∈ V such that o(θv) /∈ {o+v, o−v}. Then the utility that the agent derives from the given outcome for this type is 0, and hence, by IR, no payment can be extracted from the agent for this type. Because, again by IR, the maximum payment that can be extracted for any other type is |Θ| + 1, it follows that the maximum expected payment that could be obtained is at most (|Θ|−1)(|Θ|+1) |Θ| < |Θ| < G, contradicting that this is a solution to the AMD instance. It follows that in the solution to the AMD instance, for every v ∈ V , o(θv) ∈ {o+v, o−v}. 136 We can interpret this as an assignment of truth values to the variables: v is set to true if o(θv) = o+v, and to false if o(θv) = o−v. We claim this assignment is a solution to the MINSAT instance. By the IR constraint, the maximum payment we can extract from any type θv is |Θ|. Because there can be no incentives for the agent to report falsely, for any clause c satisfied by the given assignment, the maximum payment we can extract for the corresponding type θc is |Θ|. (For if we extracted more from this type, the agent``s utility in this case would be less than 1; and if v is the variable satisfying c in the assignment, so that o(θv) = ol where l occurs in c, then the agent would be better off reporting θv instead of the truthful report θc, to get an outcome worth |Θ|+1 to it while having to pay at most |Θ|.) Finally, for any unsatisfied clause c, by the IR constraint, the maximum payment we can extract for the corresponding type θc is |Θ| + 1. It follows that the expected payment from our mechanism is at most V |Θ|+s|Θ|+(|C|−s)(|Θ|+1) Θ , where s is the number of satisfied clauses. Because our mechanism achieves the goal, it follows that V |Θ|+s|Θ|+(|C|−s)(|Θ|+1) Θ ≥ G, which by simple algebraic manipulations is equivalent to s ≤ K. So there is a solution to the MINSAT instance. Because payment-maximizing AMD is just the special case of AMD for a self-interested designer where the designer has no preferences over the outcome chosen, this immediately implies hardness for the general case of AMD for a selfinterested designer where payments are possible. However, it does not yet imply hardness for the special case where payments are not possible. We will prove hardness in this case in the next section. 5. SELF-INTERESTED DETERMINISTIC AMD WITHOUT PAYMENTS IS HARD In this section we demonstrate that it is NP-complete to design a deterministic mechanism that maximizes the expectation of the designer``s objective when payments are not possible. We show that this problem is hard even in the single-agent setting, thereby immediately showing it hard for both IR concepts, for both solution concepts. Theorem 2. Without payments, deterministic AMD for a self-interested designer is NP-complete, even for a single agent, even with a uniform distribution over types. Proof. It is easy to show that the problem is in NP. To show NP-hardness, we reduce an arbitrary MINSAT instance to the following single-agent self-interested deterministic AMD without payments instance. 
Let the agent``s type set be Θ = {θc : c ∈ C} ∪ {θv : v ∈ V }, where C is the set of clauses in the MINSAT instance, and V is the set of variables. Let the probability distribution over these types be uniform. Let the outcome set be O = {o0} ∪ {oc : c ∈ C}∪{ol : l ∈ L}∪{o∗ }, where L is the set of literals, that is, L = {+v : v ∈ V } ∪ {−v : v ∈ V }. Let the notation v(l) = v denote that v is the variable corresponding to the literal l, that is, l ∈ {+v, −v}. Let l ∈ c denote that the literal l occurs in clause c. Then, let the agent``s utility function be given by u(θc, ol) = 2 for all l ∈ L with l ∈ c; u(θc, ol) = −1 for all l ∈ L with l /∈ c; u(θc, oc) = 2; u(θc, oc ) = −1 for all c ∈ C with c = c ; u(θc, o∗ ) = 1; u(θv, ol) = 1 for all l ∈ L with v(l) = v; u(θv, ol) = −1 for all l ∈ L with v(l) = v; u(θv, oc) = −1 for all c ∈ C; u(θv, o∗ ) = −1. Let the designer``s objective function be given by g(o∗ ) = |Θ|+1; g(ol) = |Θ| for all l ∈ L; g(oc) = |Θ| for all c ∈ C. The goal of the AMD instance is G = |Θ| + |C|−K |Θ| , where K is the goal of the MINSAT instance. We show the instances are equivalent. First, suppose there is a solution to the MINSAT instance. Let the assignment of truth values to the variables in this solution be given by the function f : V → L (where v(f(v)) = v for all v ∈ V ). Then, for every v ∈ V , let o(θv) = of(v). For every c ∈ C that is satisfied in the MINSAT solution, let o(θc) = oc; for every unsatisfied c ∈ C, let o(θc) = o∗ . It is straightforward to check that the IR constraint is satisfied. We now check that the agent has no incentive to misreport. If the agent``s type is some θv, it is getting the maximum utility for that type, so it has no incentive to misreport. If the agent``s type is some θc where c is a satisfied clause, again, it is getting the maximum utility for that type, so it has no incentive to misreport. The final case to check is where the agent``s type is some θc where c is an unsatisfied clause. In this case, we observe that for none of the types, reporting it leads to an outcome ol for a literal l ∈ c, precisely because the clause is not satisfied in the MINSAT instance. Because also, no type leads to the outcome oc, there is no outcome that the mechanism ever selects that would give the agent utility greater than 1 for type θc, and hence the agent has no incentive to report falsely. This establishes that the agent never has an incentive to misreport. Finally, we show that the goal is reached. If s is the number of satisfied clauses in the MINSAT solution (so that s ≤ K), then the expected value of the designer``s objective function is |V ||Θ|+s|Θ|+(|C|−s)(|Θ|+1) |Θ| ≥ |V ||Θ|+K|Θ|+(|C|−K)(|Θ|+1) |Θ| = |Θ| + |C|−K |Θ| = G. So there is a solution to the AMD instance. Now suppose there is a solution to the AMD instance, given by an outcome function o. First, suppose there is some v ∈ V such that o(θv) /∈ {o+v, o−v}. The only other outcome that the mechanism is allowed to choose under the IR constraint is o0. This has an objective value of 0, and because the highest value the objective function ever takes is |Θ| + 1, it follows that the maximum expected value of the objective function that could be obtained is at most (|Θ|−1)(|Θ|+1) |Θ| < |Θ| < G, contradicting that this is a solution to the AMD instance. It follows that in the solution to the AMD instance, for every v ∈ V , o(θv) ∈ {o+v, o−v}. We can interpret this as an assignment of truth values to the variables: v is set to true if o(θv) = o+v, and to false if o(θv) = o−v. 
We claim this assignment is a solution to the MINSAT instance. By the above, for any type θv, the value of the objective function in this mechanism will be |Θ|. For any clause c satisfied by the given assignment, the value of the objective function in the case where the agent reports type θc will be at most |Θ|. (This is because we cannot choose the outcome o∗ for such a type, as in this case the agent would have an incentive to report θv instead, where v is the variable satisfying c in the assignment (so that o(θv) = ol where l occurs in c).) Finally, for any unsatisfied clause c, the maximum value the objective function can take in the case where the agent reports type θc is |Θ| + 1, simply because this is the largest value the function ever takes. It follows that the expected value of the objective function for our mechanism is at most (|V||Θ| + s|Θ| + (|C| − s)(|Θ| + 1)) / |Θ|, where s is the number of satisfied clauses. Because our mechanism achieves the goal, it follows that (|V||Θ| + s|Θ| + (|C| − s)(|Θ| + 1)) / |Θ| ≥ G, which by simple algebraic manipulations is equivalent to s ≤ K. So there is a solution to the MINSAT instance.

Both of our hardness results relied on the constraint that the mechanism should be deterministic. In the next section, we show that the hardness of design disappears when we allow for randomization in the mechanism.

6. RANDOMIZED AMD FOR A SELF-INTERESTED DESIGNER IS EASY

We now show how allowing for randomization over the outcomes makes the problem of self-interested AMD tractable through linear programming, for any constant number of agents.

Theorem 3. Self-interested randomized AMD with a constant number of agents is solvable in polynomial time by linear programming, both with and without payments, both for ex post and ex interim IR, and both for implementation in dominant strategies and for implementation in Bayes-Nash equilibrium, even if the types are correlated.

Proof. Because linear programs can be solved in polynomial time [13], all we need to show is that the number of variables and equations in our program is polynomial for any constant number of agents, that is, exponential only in N. Throughout, for purposes of determining the size of the linear program, let T = max_i {|Θi|}. The variables of our linear program will be the probabilities (p(θ1, θ2, ..., θN))(o) (at most T^N |O| variables) and the payments πi(θ1, θ2, ..., θN) (at most N T^N variables). (We show the linear program for the case where payments are possible; the case without payments is easily obtained from this by simply omitting all the payment variables in the program, or by adding additional constraints forcing the payments to be 0.)

First, we show the IR constraints. For ex post IR, we add the following (at most N T^N) constraints to the LP:
• For every i ∈ {1, 2, ..., N}, and for every (θ1, θ2, ..., θN) ∈ Θ1 × Θ2 × ... × ΘN, we add
(Σ_{o∈O} (p(θ1, θ2, ..., θN))(o) ui(θi, o)) − πi(θ1, θ2, ..., θN) ≥ 0.
For ex interim IR, we add the following (at most N T) constraints to the LP:
• For every i ∈ {1, 2, ..., N}, and for every θi ∈ Θi, we add
Σ_{θ1,...,θN} γ(θ1, ..., θN | θi) ((Σ_{o∈O} (p(θ1, θ2, ..., θN))(o) ui(θi, o)) − πi(θ1, θ2, ..., θN)) ≥ 0.

Now, we show the solution concept constraints. For implementation in dominant strategies, we add the following (at most N T^(N+1)) constraints to the LP:
• For every i ∈ {1, 2, ..., N}, for every (θ1, θ2, ..., θi, ..., θN) ∈ Θ1 × Θ2 × ... × ΘN, and for every alternative type report θ̂i ∈ Θi, we add the constraint
(Σ_{o∈O} (p(θ1, θ2, ..., θi, ..., θN))(o) ui(θi, o)) − πi(θ1, θ2, ..., θi, ..., θN) ≥ (Σ_{o∈O} (p(θ1, θ2, ..., θ̂i, ..., θN))(o) ui(θi, o)) − πi(θ1, θ2, ..., θ̂i, ..., θN).
Finally, for implementation in Bayes-Nash equilibrium, we add the following (at most N T^2) constraints to the LP:
• For every i ∈ {1, 2, ..., N}, for every θi ∈ Θi, and for every alternative type report θ̂i ∈ Θi, we add the constraint
Σ_{θ1,...,θN} γ(θ1, ..., θN | θi) ((Σ_{o∈O} (p(θ1, θ2, ..., θi, ..., θN))(o) ui(θi, o)) − πi(θ1, θ2, ..., θi, ..., θN)) ≥ Σ_{θ1,...,θN} γ(θ1, ..., θN | θi) ((Σ_{o∈O} (p(θ1, θ2, ..., θ̂i, ..., θN))(o) ui(θi, o)) − πi(θ1, θ2, ..., θ̂i, ..., θN)).

All that is left to do is to give the expression the designer is seeking to maximize, which is
Σ_{θ1,...,θN} γ(θ1, ..., θN) ((Σ_{o∈O} (p(θ1, θ2, ..., θN))(o) g(o)) + Σ_{i=1}^{N} πi(θ1, θ2, ..., θN)).
As we indicated, the number of variables and constraints is exponential only in N, and hence the linear program is of polynomial size for constant numbers of agents. Thus the problem is solvable in polynomial time.
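The linear program in the proof above can be assembled mechanically. The sketch below is our own illustration for the single-agent case with payments (so ex post and ex interim IR, and the two solution concepts, coincide); the toy instance and the use of scipy's linprog are assumptions of the sketch, not part of the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative single-agent randomized AMD solved as an LP (toy numbers assumed).
types = ["low", "high"]
outcomes = ["o0", "oA", "oB"]
gamma = {"low": 0.5, "high": 0.5}                      # prior over the agent's types
u = {("low", "o0"): 0, ("low", "oA"): 1, ("low", "oB"): 0,
     ("high", "o0"): 0, ("high", "oA"): 2, ("high", "oB"): 3}
g = {o: 0.0 for o in outcomes}                         # g = 0: payment maximization

n_types, n_outcomes = len(types), len(outcomes)
n_vars = n_types * n_outcomes + n_types                # p(theta)(o) variables, then pi(theta)
def p_idx(t, o): return types.index(t) * n_outcomes + outcomes.index(o)
def pi_idx(t): return n_types * n_outcomes + types.index(t)

# Objective: maximize sum_theta gamma(theta) * (sum_o p(theta)(o) g(o) + pi(theta)).
c = np.zeros(n_vars)                                   # linprog minimizes, so negate
for t in types:
    for o in outcomes:
        c[p_idx(t, o)] -= gamma[t] * g[o]
    c[pi_idx(t)] -= gamma[t]

A_ub, b_ub = [], []
for t in types:
    # IR: sum_o p(t)(o) u(t,o) - pi(t) >= 0, rewritten as "<= 0" after negation.
    row = np.zeros(n_vars)
    for o in outcomes:
        row[p_idx(t, o)] = -u[(t, o)]
    row[pi_idx(t)] = 1.0
    A_ub.append(row); b_ub.append(0.0)
    # IC: truthfully reporting t is at least as good as any false report t_hat.
    for t_hat in types:
        if t_hat == t:
            continue
        row = np.zeros(n_vars)
        for o in outcomes:
            row[p_idx(t, o)] -= u[(t, o)]
            row[p_idx(t_hat, o)] += u[(t, o)]
        row[pi_idx(t)] += 1.0
        row[pi_idx(t_hat)] -= 1.0
        A_ub.append(row); b_ub.append(0.0)

# Each p(theta)(.) must be a probability distribution over the outcomes.
A_eq, b_eq = [], []
for t in types:
    row = np.zeros(n_vars)
    for o in outcomes:
        row[p_idx(t, o)] = 1.0
    A_eq.append(row); b_eq.append(1.0)

bounds = [(0, 1)] * (n_types * n_outcomes) + [(None, None)] * n_types
result = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("maximum expected payment:", -result.fun)
```

For N agents one simply indexes the probability and payment variables by full type vectors, exactly as in the proof; the program size then grows as T^N, which is polynomial only because N is a constant.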
7. IMPLICATIONS FOR AN OPTIMAL COMBINATORIAL AUCTION DESIGN PROBLEM

In this section, we will demonstrate some interesting consequences of the problem of automated mechanism design for a self-interested designer on designing optimal combinatorial auctions. Consider a combinatorial auction with a set S of items for sale. For any bundle B ⊆ S, let ui(θi, B) be bidder i's utility for receiving bundle B when the bidder's type is θi. The optimal auction design problem is to specify the rules of the auction so as to maximize expected revenue to the auctioneer. (By the revelation principle, without loss of generality, we can assume the auction is truthful.) The optimal auction design problem is solved for the case of a single item by the famous Myerson auction [18]. However, designing optimal auctions in combinatorial auctions is a recognized open research problem [3, 25]. The problem is open even if there are only two items for sale. (The two-item case with a very special form of complementarity and no substitutability has been solved recently [1].)

Suppose we have free disposal, that is, items can be thrown away at no cost. Also, suppose that the bidders' preferences have the following structure: whenever a bidder receives a bundle of items, the bidder's utility for that bundle is determined by the best item in the bundle only. (We emphasize that which item is the best is allowed to depend on the bidder's type.)

Definition 10. Bidder i is said to have best-only preferences over bundles of items if there exists a function vi : Θi × S → R such that for any θi ∈ Θi, for any B ⊆ S, ui(θi, B) = max_{s∈B} vi(θi, s).

We make the following useful observation in this setting: there is no sense in awarding a bidder more than one item. The reason is that if the bidder is reporting truthfully, taking all but the highest valued item away from the bidder will not hurt the bidder; and, by free disposal, doing so can only reduce the incentive for this bidder to falsely report this type, when the bidder actually has another type. We now show that the problem of designing a deterministic optimal auction here is NP-complete, by a reduction from the payment-maximizing AMD problem!

Theorem 4.
Given an optimal combinatorial auction design problem under best-only preferences (given by a set of items S and, for each bidder i, a finite type space Θi and a function vi : Θi × S → R such that for any θi ∈ Θi, for any B ⊆ S, ui(θi, B) = max_{s∈B} vi(θi, s)), designing the optimal deterministic auction is NP-complete, even for a single bidder with a uniform distribution over types.

Proof. The problem is in NP because we can nondeterministically generate an allocation rule, and then set the payments using linear programming. To show NP-hardness, we reduce an arbitrary payment-maximizing deterministic AMD instance, with a single agent and a uniform distribution over types, to the following optimal combinatorial auction design problem instance with a single bidder with best-only preferences. For every outcome o ∈ O in the AMD instance (besides the outcome o0), let there be one item so ∈ S. Let the type space be the same, and let v(θi, so) = ui(θi, o) (where u is as specified in the AMD instance). Let the expected revenue target value be the same in both instances. We show the instances are equivalent.

First suppose there exists a solution to the AMD instance, given by an outcome function and a payment function. Then, if the AMD solution chooses outcome o for a type, in the optimal auction solution, allocate {so} to the bidder for this type. (Unless o = o0, in which case we allocate {} to the bidder.) Let the payment functions be the same in both instances. Then, the utility that an agent receives for reporting a type (given the true type) in either solution is the same, so we have incentive compatibility in the optimal auction solution. Moreover, because the type distribution and the payment function are the same, the expected revenue to the auctioneer/designer is the same. It follows that there exists a solution to the optimal auction design instance.

Now suppose there exists a solution to the optimal auction design instance. By the at-most-one-item observation, we can assume without loss of generality that the solution never allocates more than one item. Then, if the optimal auction solution allocates item so to the bidder for a type, in the AMD solution, let the mechanism choose outcome o for that type. If the optimal auction solution allocates nothing to the bidder for a type, in the AMD solution, let the mechanism choose outcome o0 for that type. Let the payment functions be the same. Then, the utility that an agent receives for reporting a type (given the true type) in either solution is the same, so we have incentive compatibility in the AMD solution. Moreover, because the type distribution and the payment function are the same, the expected revenue to the designer/auctioneer is the same. It follows that there exists a solution to the AMD instance.

Fortunately, we can also carry through the easiness result for randomized mechanisms to this combinatorial auction setting, giving us one of the few known polynomial-time algorithms for an optimal combinatorial auction design problem.

Theorem 5. Given an optimal combinatorial auction design problem under best-only preferences (given by a set of items S and, for each bidder i, a finite type space Θi and a function vi : Θi × S → R such that for any θi ∈ Θi, for any B ⊆ S, ui(θi, B) = max_{s∈B} vi(θi, s)), if the number of bidders is a constant k, then the optimal randomized auction can be designed in polynomial time. (For any IC and IR constraints.)

Proof. By the at-most-one-item observation, we can without loss of generality restrict ourselves to allocations where each bidder receives at most one item. There are fewer than (|S| + 1)^k such allocations, that is, a polynomial number of allocations. Because we can list the outcomes explicitly, we can simply solve this as a payment-maximizing AMD instance, with linear programming.
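The enumeration step in this proof is easy to make explicit. The sketch below is our own illustration (function and variable names are assumptions): it lists, for a constant number of bidders k, the allocations in which each bidder receives at most one item; these allocations form the explicit outcome set of the payment-maximizing AMD instance, which can then be handed to a linear program such as the one sketched after Theorem 3.

```python
from itertools import product

# Illustrative enumeration of the outcome set used in the proof of Theorem 5:
# with best-only preferences each bidder is given at most one item (or nothing),
# so there are fewer than (|S|+1)^k outcomes for k bidders.
def single_item_allocations(items, k):
    options = [None] + sorted(items)            # None means the bidder gets nothing
    for assignment in product(options, repeat=k):
        awarded = [x for x in assignment if x is not None]
        if len(awarded) == len(set(awarded)):   # no item is awarded to two bidders
            yield assignment

outcomes = list(single_item_allocations({"a", "b"}, k=2))
print(len(outcomes))   # 7, which is below the (|S|+1)^k = 9 bound
```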
8. RELATED RESEARCH ON COMPLEXITY IN MECHANISM DESIGN

There has been considerable recent interest in mechanism design in computer science. Some of it has focused on issues of computational complexity, but most of that work has strived toward designing mechanisms that are easy to execute (e.g. [20, 15, 19, 9, 12]), rather than studying the complexity of designing the mechanism. The closest piece of earlier work studied the complexity of automated mechanism design by a benevolent designer [5, 6]. Roughgarden has studied the complexity of designing a good network topology for agents that selfishly choose the links they use [21]. This is related to mechanism design, but differs significantly in that the designer only has restricted control over the rules of the game because there is no party that can impose the outcome (or side payments). Also, there is no explicit reporting of preferences.

9. CONCLUSIONS AND FUTURE RESEARCH

Often, an outcome must be chosen on the basis of the preferences reported by a group of agents. The key difficulty is that the agents may report their preferences insincerely to make the chosen outcome more favorable to themselves. Mechanism design is the art of designing the rules of the game so that the agents are motivated to report their preferences truthfully, and a desirable outcome is chosen. In a recently emerging approach, called automated mechanism design, a mechanism is computed for the specific preference aggregation setting at hand. This has several advantages, but the downside is that the mechanism design optimization problem needs to be solved anew each time. Unlike earlier work on automated mechanism design that studied a benevolent designer, in this paper we studied automated mechanism design problems where the designer is self-interested, a setting much more relevant for electronic commerce. In this setting, the center cares only about which outcome is chosen and what payments are made to it. The reason that the agents' preferences are relevant is that the center is constrained to making each agent at least as well off as the agent would have been had it not participated in the mechanism. In this setting, we showed that designing an optimal deterministic mechanism is NP-complete in two important special cases: when the center is interested only in the payments made to it, and when payments are not possible and the center is interested only in the outcome chosen. These hardness results imply hardness in all more general automated mechanism design settings with a self-interested designer. The hardness results apply whether the individual rationality (participation) constraints are applied ex interim or ex post, and whether the solution concept is dominant strategies implementation or Bayes-Nash equilibrium implementation. We then showed that allowing randomization in the mechanism makes the design problem in all these settings computationally easy. Finally, we showed that the payment-maximizing AMD problem is closely related to an interesting variant of the optimal (revenue-maximizing) combinatorial auction design problem, where the bidders have best-only preferences.
We showed that here, too, designing an optimal deterministic mechanism is NP-complete even with one agent, but designing an optimal randomized mechanism is easy.

Future research includes studying automated mechanism design with a self-interested designer in more restricted settings such as auctions (where the designer's objective may include preferences about which bidder should receive the good, as well as payments). We also want to study the complexity of automated mechanism design in settings where the outcome and type spaces have special structure so they can be represented more concisely. Finally, we plan to assemble a data set of real-world mechanism design problems, both historical and current, and apply automated mechanism design to those problems.

10. REFERENCES
[1] M. Armstrong. Optimal multi-object auctions. Review of Economic Studies, 67:455-481, 2000.
[2] K. Arrow. The property rights doctrine and demand revelation under incomplete information. In M. Boskin, editor, Economics and human welfare. New York Academic Press, 1979.
[3] C. Avery and T. Hendershott. Bundling and optimal auctions of multiple products. Review of Economic Studies, 67:483-497, 2000.
[4] E. H. Clarke. Multipart pricing of public goods. Public Choice, 11:17-33, 1971.
[5] V. Conitzer and T. Sandholm. Complexity of mechanism design. In Proceedings of the 18th Annual Conference on Uncertainty in Artificial Intelligence (UAI-02), pages 103-110, Edmonton, Canada, 2002.
[6] V. Conitzer and T. Sandholm. Automated mechanism design: Complexity results stemming from the single-agent setting. In Proceedings of the 5th International Conference on Electronic Commerce (ICEC-03), pages 17-24, Pittsburgh, PA, USA, 2003.
[7] V. Conitzer and T. Sandholm. Computational criticisms of the revelation principle. In Proceedings of the ACM Conference on Electronic Commerce (ACM-EC), New York, NY, 2004. Short paper. Full-length version appeared in the AAMAS-03 workshop on Agent-Mediated Electronic Commerce (AMEC).
[8] C. d'Aspremont and L. A. Gérard-Varet. Incentives and incomplete information. Journal of Public Economics, 11:25-45, 1979.
[9] J. Feigenbaum, C. Papadimitriou, and S. Shenker. Sharing the cost of multicast transmissions. Journal of Computer and System Sciences, 63:21-41, 2001. Early version in Proceedings of the Annual ACM Symposium on Theory of Computing (STOC), 2000.
[10] A. Gibbard. Manipulation of voting schemes. Econometrica, 41:587-602, 1973.
[11] T. Groves. Incentives in teams. Econometrica, 41:617-631, 1973.
[12] J. Hershberger and S. Suri. Vickrey prices and shortest paths: What is an edge worth? In Proceedings of the Annual Symposium on Foundations of Computer Science (FOCS), 2001.
[13] L. Khachiyan. A polynomial algorithm in linear programming. Soviet Math. Doklady, 20:191-194, 1979.
[14] R. Kohli, R. Krishnamurthi, and P. Mirchandani. The minimum satisfiability problem. SIAM Journal of Discrete Mathematics, 7(2):275-283, 1994.
[15] D. Lehmann, L. I. O'Callaghan, and Y. Shoham. Truth revelation in rapid, approximately efficient combinatorial auctions. Journal of the ACM, 49(5):577-602, 2002. Early version appeared in Proceedings of the ACM Conference on Electronic Commerce (ACM-EC), 1999.
[16] A. Mas-Colell, M. Whinston, and J. R. Green. Microeconomic Theory. Oxford University Press, 1995.
[17] E. S. Maskin and J. Riley. Optimal multi-unit auctions. In F. Hahn, editor, The Economics of Missing Markets, Information, and Games, chapter 14, pages 312-335. Clarendon Press, Oxford, 1989.
[18] R. Myerson. Optimal auction design. Mathematics of Operations Research, 6:58-73, 1981.
[19] N. Nisan and A. Ronen. Computationally feasible VCG mechanisms. In Proceedings of the ACM Conference on Electronic Commerce (ACM-EC), pages 242-252, Minneapolis, MN, 2000.
[20] N. Nisan and A. Ronen. Algorithmic mechanism design. Games and Economic Behavior, 35:166-196, 2001. Early version in Proceedings of the Annual ACM Symposium on Theory of Computing (STOC), 1999.
[21] T. Roughgarden. Designing networks for selfish users is hard. In Proceedings of the Annual Symposium on Foundations of Computer Science (FOCS), 2001.
[22] T. Sandholm. Issues in computational Vickrey auctions. International Journal of Electronic Commerce, 4(3):107-129, 2000. Special Issue on Applying Intelligent Agents for Electronic Commerce. A short, early version appeared at the Second International Conference on Multi-Agent Systems (ICMAS), pages 299-306, 1996.
[23] M. A. Satterthwaite. Strategy-proofness and Arrow's conditions: existence and correspondence theorems for voting procedures and social welfare functions. Journal of Economic Theory, 10:187-217, 1975.
[24] W. Vickrey. Counterspeculation, auctions, and competitive sealed tenders. Journal of Finance, 16:8-37, 1961.
[25] R. V. Vohra. Research problems in combinatorial auctions. Mimeo, version Oct. 29, 2001.
Self-interested Automated Mechanism Design and Implications for Optimal Combinatorial Auctions ∗ ABSTRACT Often, an outcome must be chosen on the basis of the preferences reported by a group of agents. The key difficulty is that the agents may report their preferences insincerely to make the chosen outcome more favorable to themselves. Mechanism design is the art of designing the rules of the game so that the agents are motivated to report their preferences truthfully, and a desirable outcome is chosen. In a recently proposed approach--called automated mechanism design--a mechanism is computed for the preference aggregation setting at hand. This has several advantages, but the downside is that the mechanism design optimization problem needs to be solved anew each time. Unlike the earlier work on automated mechanism design that studied a benevolent designer, in this paper we study automated mechanism design problems where the designer is self-interested. In this case, the center cares only about which outcome is chosen and what payments are made to it. The reason that the agents' preferences are relevant is that the center is constrained to making each agent at least as well off as the agent would have been had it not participated in the mechanism. In this setting, we show that designing optimal deterministic mechanisms is NP-complete in two important special cases: when the center is interested only in the payments made to it, and when payments are not possible and the center is interested only in the outcome chosen. We then show how allowing for randomization in the mechanism makes problems in this setting computationally easy. Finally, we show that the payment-maximizing AMD problem is closely related to an interesting variant of the optimal (revenuemaximizing) combinatorial auction design problem, where the bidders have "best-only" preferences. We show that here, too, designing an optimal deterministic auction is NPcomplete, but designing an optimal randomized auction is easy. ∗ Supported by NSF under CAREER Award IRI-9703122, Grant IIS-9800994, ITR IIS-0081246, and ITR IIS-0121678. 1. INTRODUCTION In multiagent settings, often an outcome must be chosen on the basis of the preferences reported by a group of agents. Such outcomes could be potential presidents, joint plans, allocations of goods or resources, etc. . The preference aggregator generally does not know the agents' preferences a priori. Rather, the agents report their preferences to the coordinator. Unfortunately, an agent may have an incentive to misreport its preferences in order to mislead the mechanism into selecting an outcome that is more desirable to the agent than the outcome that would be selected if the agent revealed its preferences truthfully. Such manipulation is undesirable because preference aggregation mechanisms are tailored to aggregate preferences in a socially desirable way, and if the agents reveal their preferences insincerely, a socially undesirable outcome may be chosen. Manipulability is a pervasive problem across preference aggregation mechanisms. A seminal negative result, the Gibbard-Satterthwaite theorem, shows that under any nondictatorial preference aggregation scheme, if there are at least 3 possible outcomes, there are preferences under which an agent is better off reporting untruthfully [10, 23]. (A preference aggregation scheme is called dictatorial if one of the agents dictates the outcome no matter what preferences the other agents report.) 
What the aggregator would like to do is design a preference aggregation mechanism so that 1) the self-interested agents are motivated to report their preferences truthfully, and 2) the mechanism chooses an outcome that is desirable from the perspective of some objective. This is the classic setting of mechanism design in game theory. In this paper, we study the case where the designer is self-interested, that is, the designer does not directly care about how the out come relates to the agents' preferences, but is rather concerned with its own agenda for which outcome should be chosen, and with maximizing payments to itself. This is the mechanism design setting most relevant to electronic commerce. In the case where the mechanism designer is interested in maximizing some notion of social welfare, the importance of collecting the agents' preferences is clear. It is perhaps less obvious why they should be collected when the designer is self-interested and hence its objective is not directly related to the agents' preferences. The reason for this is that often the agents' preferences impose limits on how the designer chooses the outcome and payments. The most common such constraint is that of individual rationality (IR), which means that the mechanism cannot make any agent worse off than the agent would have been had it not participated in the mechanism. For instance, in the setting of optimal auction design, the designer (auctioneer) is only concerned with how much revenue is collected, and not per se with how well the allocation of the good (or goods) corresponds to the agents' preferences. Nevertheless, the designer cannot force an agent to pay more than its valuation for the bundle of goods allocated to it. Therefore, even a self-interested designer will choose an outcome that makes the agents reasonably well off. On the other hand, the designer will not necessarily choose a social welfare maximizing outcome. For example, if the designer always chooses an outcome that maximizes social welfare with respect to the reported preferences, and forces each agent to pay the difference between the utility it has now and the utility it would have had if it had not participated in the mechanism, it is easy to see that agents may have an incentive to misreport their preferences--and this may actually lead to less revenue being collected. Indeed, one of the counterintuitive results of optimal auction design theory is that sometimes the good is allocated to nobody even when the auctioneer has a reservation price of 0. Classical mechanism design provides some general mechanisms, which, under certain assumptions, satisfy some notion of nonmanipulability and maximize some objective. The upside of these mechanisms is that they do not rely on (even probabilistic) information about the agents' preferences (e.g., the Vickrey-Clarke-Groves (VCG) mechanism [24, 4, 11]), or they can be easily applied to any probability distribution over the preferences (e.g., the dAGVA mechanism [8, 2], the Myerson auction [18], and the Maskin-Riley multi-unit auction [17]). However, the general mechanisms also have significant downsides: 9 The most famous and most broadly applicable general mechanisms, VCG and dAGVA, only maximize social welfare. If the designer is self-interested, as is the case in many electronic commerce settings, these mechanisms do not maximize the designer's objective. 
9 The general mechanisms that do focus on a selfinterested designer are only applicable in very restricted settings--such as Myerson's expected revenue maximizing auction for selling a single item, and Maskin and Riley's expected revenue maximizing auction for selling multiple identical units of an item. 9 Even in the restricted settings in which these mechanisms apply, the mechanisms only allow for payment maximization. In practice, the designer may also be interested in the outcome per se. For example, an auctioneer may care which bidder receives the item. 9 It is often assumed that side payments can be used to tailor the agents' incentives, but this is not always practical. For example, in barter-based electronic marketplaces--such as Recipco, firstbarter.com, BarterOne, and Intagio--side payments are not allowed. Furthermore, among software agents, it might be more desirable to construct mechanisms that do not rely on the ability to make payments, because many software agents do not have the infrastructure to make payments. In contrast, we follow a recent approach where the mechanism is designed automatically for the specific problem at hand. This approach addresses all of the downsides listed above. We formulate the mechanism design problem as an optimization problem. The input is characterized by the number of agents, the agents' possible types (preferences), and the aggregator's prior distributions over the agents' types. The output is a nonmanipulable mechanism that is optimal with respect to some objective. This approach is called automated mechanism design. The automated mechanism design approach has four advantages over the classical approach of designing general mechanisms. First, it can be used even in settings that do not satisfy the assumptions of the classical mechanisms (such as availability of side payments or that the objective is social welfare). Second, it may allow one to circumvent impossibility results (such as the Gibbard-Satterthwaite theorem) which state that there is no mechanism that is desirable across all preferences. When the mechanism is designed for the setting at hand, it does not matter that it would not work more generally. Third, it may yield better mechanisms (in terms of stronger nonmanipulability guarantees and/or better outcomes) than classical mechanisms because the mechanism capitalizes on the particulars of the setting (the probabilistic information that the designer has about the agents' types). Given the vast amount of information that parties have about each other today, this approach is likely to lead to tremendous savings over classical mechanisms, which largely ignore that information. For example, imagine a company automatically creating its procurement mechanism based on statistical knowledge about its suppliers, rather than using a classical descending procurement auction. Fourth, the burden of design is shifted from humans to a machine. However, automated mechanism design requires the mechanism design optimization problem to be solved anew for each setting. Hence its computational complexity becomes a key issue. Previous research has studied this question for benevolent designers--that wish to maximize, for example, social welfare [5, 6]. In this paper we study the computational complexity of automated mechanism design in the case of a self-interested designer. 
This is an important setting for automated mechanism design due to the shortage of general mechanisms in this area, and the fact that in most e-commerce settings the designer is self-interested. We also show that this problem is closely related to a particular optimal (revenue-maximizing) combinatorial auction design problem. The rest of this paper is organized as follows. In Section 2, we justify the focus on nonmanipulable mechanisms. In Section 3, we define the problem we study. In Section 4, we show that designing an optimal deterministic mechanism is NP-complete even when the designer only cares about the payments made to it. In Section 5, we show that designing an optimal deterministic mechanism is also NP-complete when payments are not possible and the designer is only interested in the outcome chosen. In Section 6, we show that an optimal randomized mechanism can be designed in polynomial time even in the general case. Finally, in Section 7, we show that for designing optimal combinatorial auctions under best-only preferences, our results on AMD imply that this problem is NP-complete for deterministic auctions, but easy for randomized auctions. 2. JUSTIFYING THE FOCUS ON NONMANIPULABLE MECHANISMS Before we define the computational problem of automated mechanism design, we should justify our focus on nonmanipulable mechanisms. After all, it is not immediately obvious that there are no manipulable mechanisms that, even when agents report their types strategically and hence sometimes untruthfully, still reach better outcomes (according to whatever objective we use) than any nonmanipulable mechanism. This does, however, turn out to be the case: given any mechanism, we can construct a nonmanipulable mechanism whose performance is identical, as follows. We build an interface layer between the agents and the original mechanism. The agents report their preferences (or types) to the interface layer; subsequently, the interface layer inputs into the original mechanism the types that the agents would have strategically reported to the original mechanism, if their types were as declared to the interface layer. The resulting outcome is the outcome of the new mechanism. Since the interface layer acts "strategically on each agent's behalf", there is never an incentive to report falsely to the interface layer; and hence, the types reported by the interface layer are the strategic types that would have been reported without the interface layer, so the results are exactly as they would have been with the original mechanism. This argument is known in the mechanism design literature as the revelation principle [16]. (There are computational difficulties with applying the revelation principle in large combinatorial outcome and type spaces [7, 22]. However, because here we focus on flatly represented outcome and type spaces, this is not a concern here.) Given this, we can focus on truthful mechanisms in the rest of the paper. 3. DEFINITIONS We now formalize the automated mechanism design setting. DEFINITION 1. In an automated mechanism design setting, we are given: • a finite set of outcomes O; • a finite set of N agents; • for each agent i, 1. a finite set of types 8i, 2. a probability distribution γi over 8i (in the case of correlated types, there is a single joint distribution γ over 81 ×...× 8N), and 3. a utility function ui: 8i × O → R; 1 • An objective function whose expectation the designer wishes to maximize. 
There are many possible objective functions the designer might have, for example, social welfare (where the designer seeks to maximize the sum of the agents' utilities), or the minimum utility of any agent (where the designer seeks to maximize the worst utility had by any agent). In both of these cases, the designer is benevolent, because the designer, in some sense, is pursuing the agents' collective happiness. However, in this paper, we focus on the case of a self-interested designer. A self-interested designer cares only about the outcome chosen (that is, the designer does not care how the outcome relates to the agents' preferences, but rather has a fixed preference over the outcomes), and about the net payments made by the agents, which flow to the designer. the designer's own preference over the outcomes, and πi is the payment made by agent i. In the case where g = 0 everywhere, the designer is said to be payment maximizing. In the case where payments are not possible, g constitutes the objective function by itself. We now define the kinds of mechanisms under study. By the revelation principle, we can restrict attention to truthful, direct revelation mechanisms, where agents report their types directly and never have an incentive to misreport them. • A deterministic mechanism without payments consists of an outcome selection function o: 81 × 82 ×...× 8N → O. • A randomized mechanism without payments consists of a distribution selection function p: 81 × 82 ×...× 8N → P (O), where P (O) is the set of probability distributions over O. • A deterministic mechanism with payments consists of an outcome selection function o: 81 × 82 ×...× 8N → O and for each agent i, a payment selection function πi: 81 × 82 ×...× 8N → R, where πi (θ1,..., θN) gives the payment made by agent i when the reported types are θ1,..., θN. 1Though this follows standard game theory notation [16], the fact that the agent has both a utility function and a type is perhaps confusing. The types encode the various possible preferences that the agent may turn out to have, and the agent's type is not known to the aggregator. The utility function is common knowledge, but because the agent's type is a parameter in the agent's utility function, the aggregator cannot know what the agent's utility is without knowing the agent's type. 9 A randomized mechanism with payments consists of a distribution selection function p: Θ1 x Θ2 x...x ΘN--+ P (O), and for each agent i, a payment selection function πi: Θ1 x Θ2 x...x ΘN--+ R. 2 There are two types of constraint on the designer in building the mechanism. 3.1 Individual rationality (IR) constraints The first type of constraint is the following. The utility of each agent has to be at least as great as the agent's fallback utility, that is, the utility that the agent would receive if it did not participate in the mechanism. Otherwise that agent would not participate in the mechanism--and no agent's participation can ever hurt the mechanism designer's objective because at worst, the mechanism can ignore an agent by pretending the agent is not there. (Furthermore, if no such constraint applied, the designer could simply make the agents pay an infinite amount.) This type of constraint is called an IR (individual rationality) constraint. There are three different possible IR constraints: ex ante, ex interim, and ex post, depending on what the agent knows about its own type and the others' types when deciding whether to participate in the mechanism. 
Ex ante IR means that the agent would participate if it knew nothing at all (not even its own type). We will not study this concept in this paper. Ex interim IR means that the agent would always participate if it knew only its own type, but not those of the others. Ex post IR means that the agent would always participate even if it knew everybody's type. We will define the latter two notions of IR formally. First, we need to formalize the concept of the fallback outcome. We assume that each agent's fallback utility is zero for each one of its types. This is without loss of generality because we can add a constant term to an agent's utility function (for a given type), without affecting the decision-making behavior of that expected utility maximizing agent [16]. (We do not randomize over payments because, as long as the agents and the designer are risk neutral with respect to payments, that is, their utility is linear in payments, there is no reason to randomize over payments.) A randomized mechanism is ex post IR if for any agent i, and any type vector (θ1, ..., θN) ∈ Θ1 × ... × ΘN, we have E_{o|θ1,...,θN} [ui(θi, o)] − πi(θ1, ..., θN) ≥ 0. A randomized mechanism is ex interim IR if for any agent i, and any type θi ∈ Θi, we have E_{(θ1,...,θN)|θi} [ E_{o|θ1,...,θN} [ui(θi, o)] − πi(θ1, ..., θN) ] ≥ 0. The terms involving payments can be left out in the case where payments are not possible. 3.2 Incentive compatibility (IC) constraints The second type of constraint says that the agents should never have an incentive to misreport their type (as justified above by the revelation principle). For this type of constraint, the two most common variants (or solution concepts) are implementation in dominant strategies, and implementation in Bayes-Nash equilibrium. DEFINITION 6. Given an automated mechanism design setting, a mechanism is said to implement its outcome and payment functions in dominant strategies if truthtelling is always optimal even when the types reported by the other agents are already known. Formally, for any agent i, any type vector (θ1, ..., θi, ..., θN) ∈ Θ1 × ... × Θi × ... × ΘN, and any alternative type report ˆθi ∈ Θi, in the case of deterministic mechanisms we have ui(θi, o(θ1, ..., θi, ..., θN)) − πi(θ1, ..., θi, ..., θN) ≥ ui(θi, o(θ1, ..., ˆθi, ..., θN)) − πi(θ1, ..., ˆθi, ..., θN); for randomized mechanisms, the utilities are replaced by the corresponding expectations over the outcome distribution. The terms involving payments can be left out in the case where payments are not possible. Thus, in dominant strategies implementation, truthtelling is optimal regardless of what the other agents report. If it is optimal only given that the other agents are truthful, and given that one does not know the other agents' types, we have implementation in Bayes-Nash equilibrium. DEFINITION 7. Given an automated mechanism design setting, a mechanism is said to implement its outcome and payment functions in Bayes-Nash equilibrium if truthtelling is always optimal to an agent when that agent does not yet know anything about the other agents' types, and the other agents are reporting truthfully. Formally, for any agent i, any type θi ∈ Θi, and any alternative type report ˆθi ∈ Θi, in the case of deterministic mechanisms we have E_{(θ1,...,θN)|θi} [ui(θi, o(θ1, ..., θi, ..., θN)) − πi(θ1, ..., θi, ..., θN)] ≥ E_{(θ1,...,θN)|θi} [ui(θi, o(θ1, ..., ˆθi, ..., θN)) − πi(θ1, ..., ˆθi, ..., θN)]; for randomized mechanisms, the expectation is additionally taken over the outcome distribution. The terms involving payments can be left out in the case where payments are not possible. 3.3 Automated mechanism design We can now define the computational problem we study. DEFINITION 8. (AUTOMATED-MECHANISM-DESIGN (AMD)) We are given: • an automated mechanism design setting, • an IR notion (ex interim, ex post, or none), • a solution concept (dominant strategies or Bayes-Nash), • whether payments are possible, • whether randomization is possible, • (in the decision variant of the problem) a target value G. We are asked whether there exists a mechanism of the specified kind (in terms of payments and randomization) that satisfies both the IR notion and the solution concept, and gives an expected value of at least G for the objective. An interesting special case is the setting where there is only one agent (a brute-force check for this case is sketched below).
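The following Python sketch (illustrative only; the function and variable names are ours, not the paper's) makes the single-agent, no-payments case of Definition 8 concrete: it enumerates all |O|^|Θ| deterministic outcome functions, keeps those satisfying the IR and IC constraints above, and returns the feasible one with the highest expected objective, whose value can then be compared against the target G. The hardness results of Sections 4 and 5 say precisely that this exponential search cannot in general be shortcut for deterministic mechanisms unless P = NP.

```python
from itertools import product

def best_deterministic_no_payments(types, outcomes, gamma, u, g):
    """Brute-force single-agent deterministic AMD without payments.
    types: list of type labels; outcomes: list of outcome labels;
    gamma[t]: prior probability of type t; u[(t, o)]: the agent's utility;
    g[o]: designer's value for outcome o. Returns (best expected objective,
    best outcome function) over all IR and IC outcome functions."""
    best_val, best_o = None, None
    for assignment in product(outcomes, repeat=len(types)):
        o = dict(zip(types, assignment))
        ir = all(u[(t, o[t])] >= 0 for t in types)                    # IR: fallback utility is 0
        ic = all(u[(t, o[t])] >= u[(t, o[s])]                         # IC: no type prefers
                 for t in types for s in types)                       #     another type's outcome
        if ir and ic:
            val = sum(gamma[t] * g[o[t]] for t in types)              # expected objective
            if best_val is None or val > best_val:
                best_val, best_o = val, o
    return best_val, best_o
```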
In this case, the reporting agent always knows everything there is to know about the other agents' types--because there are no other agents. Since ex post and ex interim IR only differ on what an agent is assumed to know about other agents' types, the two IR concepts coincide here. Also, because implementation in dominant strategies and implementation in Bayes-Nash equilibrium only differ on what an agent is assumed to know about other agents' types, the two solution concepts coincide here. This observation will prove to be a useful tool in proving hardness results: if we prove computational hardness in the single-agent setting, this immediately implies hardness for both IR concepts, for both solution concepts, for any number of agents. 4. PAYMENT-MAXIMIZING DETERMINISTIC AMD IS HARD In this section we demonstrate that it is NP-complete to design a deterministic mechanism that maximizes the expected sum of the payments collected from the agents. We show that this problem is hard even in the single-agent setting, thereby immediately showing it hard for both IR concepts, for both solution concepts. To demonstrate NP-hardness, we reduce from the MINSAT problem. DEFINITION 9 (MINSAT). We are given a formula φ in conjunctive normal form, represented by a set of Boolean variables V and a set of clauses C, and an integer K (K < |C|). We are asked whether there exists an assignment to the variables in V such that at most K clauses in φ are satisfied. MINSAT was recently shown to be NP-complete [14]. We can now present our result. THEOREM 1. Payment-maximizing deterministic AMD is NP-complete, even for a single agent, even with a uniform distribution over types. PROOF. It is easy to show that the problem is in NP. To show NP-hardness, we reduce an arbitrary MINSAT instance to the following single-agent payment-maximizing deterministic AMD instance. Let the agent's type set be Θ = {θc : c ∈ C} ∪ {θv : v ∈ V}, where C is the set of clauses in the MINSAT instance, and V is the set of variables. Let the probability distribution over these types be uniform. Let the outcome set be O = {o0} ∪ {oc : c ∈ C} ∪ {ol : l ∈ L}, where L is the set of literals, that is, L = {+v : v ∈ V} ∪ {−v : v ∈ V}. Let the notation v(l) = v denote that v is the variable corresponding to the literal l, that is, l ∈ {+v, −v}. Let l ∈ c denote that the literal l occurs in clause c. Then, let the agent's utility function be given by u(θc, ol) = |Θ| + 1 for all l ∈ L with l ∈ c; u(θc, ol) = 0 for all l ∈ L with l ∉ c; u(θc, oc) = |Θ| + 1; u(θc, oc′) = 0 for all c′ ∈ C with c′ ≠ c; u(θc, o0) = 0; u(θv, ol) = |Θ| for all l ∈ L with v(l) = v; u(θv, ol) = 0 for all l ∈ L with v(l) ≠ v; u(θv, oc) = 0 for all c ∈ C; u(θv, o0) = 0. The goal of the AMD instance is G = |Θ| + (|C| − K)/|Θ|, where K is the goal of the MINSAT instance. We show the instances are equivalent. First, suppose there is a solution to the MINSAT instance. Let the assignment of truth values to the variables in this solution be given by the function f : V → L (where v(f(v)) = v for all v ∈ V). Then, for every v ∈ V, let o(θv) = o_{f(v)} and π(θv) = |Θ|. For every c ∈ C, let o(θc) = oc; let π(θc) = |Θ| + 1 if c is not satisfied in the MINSAT solution, and π(θc) = |Θ| if c is satisfied. It is straightforward to check that the IR constraint is satisfied. We now check that the agent has no incentive to misreport. If the agent's type is some θv, then any other report will give it an outcome that is no better, for a payment that is no less, so it has no incentive to misreport. If the agent's type is some θc where c is a satisfied clause, again, any other report will give it an outcome that is no better, for a payment that is no less, so it has no incentive to misreport. The final case to check is where the agent's type is some θc where c is an unsatisfied clause.
In this case, we observe that no type's report leads to an outcome ol for a literal l ∈ c, precisely because the clause is not satisfied in the MINSAT instance. Because also, no type besides θc leads to the outcome oc, reporting any other type will give an outcome with utility 0, while still forcing a payment of at least |Θ| from the agent. Clearly the agent is better off reporting truthfully, for a total utility of 0. This establishes that the agent never has an incentive to misreport. Finally, we show that the goal is reached. If s is the number of satisfied clauses in the MINSAT solution (so that s ≤ K), the expected payment from this mechanism is (|V| · |Θ| + s · |Θ| + (|C| − s)(|Θ| + 1))/|Θ| = |Θ| + (|C| − s)/|Θ| ≥ |Θ| + (|C| − K)/|Θ| = G. So there is a solution to the AMD instance. Now suppose there is a solution to the AMD instance, given by an outcome function o and a payment function π. First, suppose there is some v ∈ V such that o(θv) ∉ {o+v, o−v}. Then the utility that the agent derives from the given outcome for this type is 0, and hence, by IR, no payment can be extracted from the agent for this type. Because, again by IR, the maximum payment that can be extracted for any other type is |Θ| + 1, it follows that the maximum expected payment that could be obtained is at most (|Θ| − 1)(|Θ| + 1)/|Θ| < |Θ| < G, contradicting that this is a solution to the AMD instance. It follows that in the solution to the AMD instance, for every v ∈ V, o(θv) ∈ {o+v, o−v}. We can interpret this as an assignment of truth values to the variables: v is set to true if o(θv) = o+v, and to false if o(θv) = o−v. We claim this assignment is a solution to the MINSAT instance. By the IR constraint, the maximum payment we can extract from any type θv is |Θ|. Because there can be no incentives for the agent to report falsely, for any clause c satisfied by the given assignment, the maximum payment we can extract for the corresponding type θc is |Θ|. (For if we extracted more from this type, the agent's utility in this case would be less than 1; and if v is the variable satisfying c in the assignment, so that o(θv) = ol where l occurs in c, then the agent would be better off reporting θv instead of the truthful report θc, to get an outcome worth |Θ| + 1 to it while having to pay at most |Θ|.) Finally, for any unsatisfied clause c, by the IR constraint, the maximum payment we can extract for the corresponding type θc is |Θ| + 1. It follows that the expected payment from our mechanism is at most (|V| · |Θ| + s · |Θ| + (|C| − s)(|Θ| + 1))/|Θ| = |Θ| + (|C| − s)/|Θ|, where s is the number of clauses satisfied by the assignment. Because the mechanism reaches the goal G = |Θ| + (|C| − K)/|Θ|, we must have |Θ| + (|C| − s)/|Θ| ≥ |Θ| + (|C| − K)/|Θ|, which by simple algebraic manipulations is equivalent to s ≤ K. So there is a solution to the MINSAT instance. Because payment-maximizing AMD is just the special case of AMD for a self-interested designer where the designer has no preferences over the outcome chosen, this immediately implies hardness for the general case of AMD for a self-interested designer where payments are possible. However, it does not yet imply hardness for the special case where payments are not possible. We will prove hardness in this case in the next section. 5. SELF-INTERESTED DETERMINISTIC AMD WITHOUT PAYMENTS IS HARD In this section we demonstrate that it is NP-complete to design a deterministic mechanism that maximizes the expectation of the designer's objective when payments are not possible. We show that this problem is hard even in the single-agent setting, thereby immediately showing it hard for both IR concepts, for both solution concepts. THEOREM 2. Without payments, deterministic AMD for a self-interested designer is NP-complete, even for a single agent, even with a uniform distribution over types. PROOF.
It is easy to show that the problem is in NP. To show NP-hardness, we reduce an arbitrary MINSAT instance to the following single-agent self-interested deterministic AMD without payments instance. Let the agent's type set be Θ = {θc : c ∈ C} ∪ {θv : v ∈ V}, where C is the set of clauses in the MINSAT instance, and V is the set of variables. Let the probability distribution over these types be uniform. Let the outcome set be O = {o0} ∪ {oc : c ∈ C} ∪ {ol : l ∈ L} ∪ {o∗}, where L is the set of literals, that is, L = {+v : v ∈ V} ∪ {−v : v ∈ V}. Let the notation v(l) = v denote that v is the variable corresponding to the literal l, that is, l ∈ {+v, −v}. Let l ∈ c denote that the literal l occurs in clause c. Then, let the agent's utility function be given by u(θc, ol) = 2 for all l ∈ L with l ∈ c; u(θc, ol) = −1 for all l ∈ L with l ∉ c; u(θc, oc) = 2; u(θc, oc′) = −1 for all c′ ∈ C with c′ ≠ c; u(θc, o∗) = 1; u(θv, ol) = 1 for all l ∈ L with v(l) = v; u(θv, ol) = −1 for all l ∈ L with v(l) ≠ v; u(θv, oc) = −1 for all c ∈ C; u(θv, o∗) = −1. Let the designer's objective function be given by g(o∗) = |Θ| + 1; g(ol) = |Θ| for all l ∈ L; g(oc) = |Θ| for all c ∈ C; g(o0) = 0. The goal of the AMD instance is G = |Θ| + (|C| − K)/|Θ|, where K is the goal of the MINSAT instance. We show the instances are equivalent. First, suppose there is a solution to the MINSAT instance. Let the assignment of truth values to the variables in this solution be given by the function f : V → L (where v(f(v)) = v for all v ∈ V). Then, for every v ∈ V, let o(θv) = o_{f(v)}. For every c ∈ C that is satisfied in the MINSAT solution, let o(θc) = oc; for every unsatisfied c ∈ C, let o(θc) = o∗. It is straightforward to check that the IR constraint is satisfied. We now check that the agent has no incentive to misreport. If the agent's type is some θv, it is getting the maximum utility for that type, so it has no incentive to misreport. If the agent's type is some θc where c is a satisfied clause, again, it is getting the maximum utility for that type, so it has no incentive to misreport. The final case to check is where the agent's type is some θc where c is an unsatisfied clause. In this case, we observe that no type's report leads to an outcome ol for a literal l ∈ c, precisely because the clause is not satisfied in the MINSAT instance. Because also, no type leads to the outcome oc, there is no outcome that the mechanism ever selects that would give the agent utility greater than 1 for type θc, and hence the agent has no incentive to report falsely. This establishes that the agent never has an incentive to misreport. Finally, we show that the goal is reached. If s is the number of satisfied clauses in the MINSAT solution (so that s ≤ K), then the expected value of the designer's objective function is (|V| · |Θ| + s · |Θ| + (|C| − s)(|Θ| + 1))/|Θ| = |Θ| + (|C| − s)/|Θ| ≥ |Θ| + (|C| − K)/|Θ| = G. So there is a solution to the AMD instance. Now suppose there is a solution to the AMD instance, given by an outcome function o. First, suppose there is some v ∈ V such that o(θv) ∉ {o+v, o−v}. The only other outcome that the mechanism is allowed to choose under the IR constraint is o0. This has an objective value of 0, and because the highest value the objective function ever takes is |Θ| + 1, it follows that the maximum expected value of the objective function that could be obtained is at most (|Θ| − 1)(|Θ| + 1)/|Θ| < |Θ| < G, contradicting that this is a solution to the AMD instance. It follows that in the solution to the AMD instance, for every v ∈ V, o(θv) ∈ {o+v, o−v}.
We can interpret this as an assignment of truth values to the variables: v is set to true if o(θv) = o+v, and to false if o(θv) = o−v. We claim this assignment is a solution to the MINSAT instance. By the above, for any type θv, the value of the objective function in this mechanism will be |Θ|. For any clause c satisfied by the given assignment, the value of the objective function in the case where the agent reports type θc will be at most |Θ|. (This is because we cannot choose the outcome o∗ for such a type, as in this case the agent would have an incentive to report θv instead, where v is the variable satisfying c in the assignment (so that o(θv) = ol where l occurs in c).) Finally, for any unsatisfied clause c, the maximum value the objective function can take in the case where the agent reports type θc is |Θ| + 1, simply because this is the largest value the function ever takes. It follows that the expected value of the objective function for our mechanism is at most (|V| · |Θ| + s · |Θ| + (|C| − s)(|Θ| + 1))/|Θ| = |Θ| + (|C| − s)/|Θ|, where s is the number of satisfied clauses. Because our mechanism achieves the goal G = |Θ| + (|C| − K)/|Θ|, it follows that |Θ| + (|C| − s)/|Θ| ≥ |Θ| + (|C| − K)/|Θ|, which by simple algebraic manipulations is equivalent to s ≤ K. So there is a solution to the MINSAT instance. Both of our hardness results relied on the constraint that the mechanism should be deterministic. In the next section, we show that the hardness of design disappears when we allow for randomization in the mechanism. 6. RANDOMIZED AMD FOR A SELF-INTERESTED DESIGNER IS EASY We now show how allowing for randomization over the outcomes makes the problem of self-interested AMD tractable through linear programming, for any constant number of agents. THEOREM 3. Self-interested randomized AMD with a constant number of agents is solvable in polynomial time by linear programming, both with and without payments, both for ex post and ex interim IR, and both for implementation in dominant strategies and for implementation in Bayes-Nash equilibrium--even if the types are correlated. PROOF. Because linear programs can be solved in polynomial time [13], all we need to show is that the number of variables and equations in our program is polynomial for any constant number of agents--that is, exponential only in N. Throughout, for purposes of determining the size of the linear program, let T = max_i |Θi|. The variables of our linear program will be the probabilities (p(θ1, θ2, ..., θN))(o) (at most T^N |O| variables) and the payments πi(θ1, θ2, ..., θN) (at most N T^N variables). (We show the linear program for the case where payments are possible; the case without payments is easily obtained from this by simply omitting all the payment variables in the program, or by adding additional constraints forcing the payments to be 0.) First, we show the IR constraints. For ex post IR, we add the following (at most N T^N) constraints to the LP: • For every i ∈ {1, 2, ..., N}, and for every (θ1, θ2, ..., θN) ∈ Θ1 × Θ2 × ... × ΘN, we add Σ_{o∈O} (p(θ1, ..., θN))(o) ui(θi, o) − πi(θ1, ..., θN) ≥ 0. For ex interim IR, we add the following (at most N T) constraints to the LP: • For every i ∈ {1, 2, ..., N}, for every θi ∈ Θi, we add Σ over all (θ1, ..., θN) whose i-th component is θi of γ(θ1, ..., θN | θi) ( Σ_{o∈O} (p(θ1, ..., θN))(o) ui(θi, o) − πi(θ1, ..., θN) ) ≥ 0. Now, we show the solution concept constraints.
For implementation in dominant strategies, we add the following (at most N T^(N+1)) constraints to the LP: • For every i ∈ {1, 2, ..., N}, for every (θ1, ..., θi, ..., θN) ∈ Θ1 × ... × Θi × ... × ΘN, and for every alternative type report ˆθi ∈ Θi, we add the constraint Σ_{o∈O} (p(θ1, ..., θi, ..., θN))(o) ui(θi, o) − πi(θ1, ..., θi, ..., θN) ≥ Σ_{o∈O} (p(θ1, ..., ˆθi, ..., θN))(o) ui(θi, o) − πi(θ1, ..., ˆθi, ..., θN). Finally, for implementation in Bayes-Nash equilibrium, we add the following (at most N T^2) constraints to the LP: • For every i ∈ {1, 2, ..., N}, for every θi ∈ Θi, and for every alternative type report ˆθi ∈ Θi, we add the constraint Σ over all (θ1, ..., θN) whose i-th component is θi of γ(θ1, ..., θN | θi) ( Σ_{o∈O} (p(θ1, ..., θi, ..., θN))(o) ui(θi, o) − πi(θ1, ..., θi, ..., θN) ) ≥ Σ over the same type vectors of γ(θ1, ..., θN | θi) ( Σ_{o∈O} (p(θ1, ..., ˆθi, ..., θN))(o) ui(θi, o) − πi(θ1, ..., ˆθi, ..., θN) ). All that is left to do is to give the expression the designer is seeking to maximize, which is Σ_{(θ1,...,θN) ∈ Θ1×...×ΘN} γ(θ1, ..., θN) ( Σ_{o∈O} (p(θ1, ..., θN))(o) g(o) + Σ_{i=1}^{N} πi(θ1, ..., θN) ). As we indicated, the number of variables and constraints is exponential only in N, and hence the linear program is of polynomial size for constant numbers of agents. Thus the problem is solvable in polynomial time. 7. IMPLICATIONS FOR AN OPTIMAL COMBINATORIAL AUCTION DESIGN PROBLEM In this section, we will demonstrate some interesting consequences of the problem of automated mechanism design for a self-interested designer on designing optimal combinatorial auctions. Consider a combinatorial auction with a set S of items for sale. For any bundle B ⊆ S, let ui(θi, B) be bidder i's utility for receiving bundle B when the bidder's type is θi. The optimal auction design problem is to specify the rules of the auction so as to maximize expected revenue to the auctioneer. (By the revelation principle, without loss of generality, we can assume the auction is truthful.) The optimal auction design problem is solved for the case of a single item by the famous Myerson auction [18]. However, designing optimal combinatorial auctions is a recognized open research problem [3, 25]. The problem is open even if there are only two items for sale. (The two-item case with a very special form of complementarity and no substitutability has been solved recently [1].) Suppose we have free disposal--items can be thrown away at no cost. Also, suppose that the bidders' preferences have the following structure: whenever a bidder receives a bundle of items, the bidder's utility for that bundle is determined by the "best" item in the bundle only. (We emphasize that which item is the best is allowed to depend on the bidder's type.) We make the following useful observation in this setting: there is no sense in awarding a bidder more than one item. The reason is that if the bidder is reporting truthfully, taking all but the highest valued item away from the bidder will not hurt the bidder; and, by free disposal, doing so can only reduce the incentive for this bidder to falsely report this type, when the bidder actually has another type. We now show that the problem of designing a deterministic optimal auction here is NP-complete, by a reduction from the payment-maximizing AMD problem! THEOREM 4. Under best-only preferences and free disposal, designing an optimal deterministic combinatorial auction is NP-complete, even with a single bidder and a uniform distribution over types. PROOF. The problem is in NP because we can nondeterministically generate an allocation rule, and then set the payments using linear programming. To show NP-hardness, we reduce an arbitrary payment-maximizing deterministic AMD instance, with a single agent and a uniform distribution over types, to the following optimal combinatorial auction design problem instance with a single bidder with best-only preferences. For every outcome o ∈ O in the AMD instance (besides the outcome o0), let there be one item so ∈ S. Let the type space be the same, and let v(θi, so) = ui(θi, o) (where u is as specified in the AMD instance). Let the expected revenue target value be the same in both instances. We show the instances are equivalent. First suppose there exists a solution to the AMD instance, given by an outcome function and a payment function. Then, if the AMD solution chooses outcome o for a type, in the optimal auction solution, allocate {so} to the bidder for this type. (Unless o = o0, in which case we allocate {} to the bidder.) Let the payment functions be the same in both instances.
Then, the utility that an agent receives for reporting a type (given the true type) in either solution is the same, so we have incentive compatibility in the optimal auction solution. Moreover, because the type distribution and the payment function are the same, the expected revenue to the auctioneer/designer is the same. It follows that there exists a solution to the optimal auction design instance. Now suppose there exists a solution to the optimal auction design instance. By the at-most-one-item observation, we can assume without loss of generality that the solution never allocates more than one item. Then, if the optimal auction solution allocates item so to the bidder for a type, in the AMD solution, let the mechanism choose outcome o for that type. If the optimal auction solution allocates nothing to the bidder for a type, in the AMD solution, let the mechanism choose outcome o0 for that type. Let the payment functions be the same. Then, the utility that an agent receives for reporting a type (given the true type) in either solution is the same, so we have incentive compatibility in the AMD solution. Moreover, because the type distribution and the payment function are the same, the expected revenue to the designer/auctioneer is the same. It follows that there exists a solution to the AMD instance. Fortunately, we can also carry through the easiness result for randomized mechanisms to this combinatorial auction setting--giving us one of the few known polynomial-time algorithms for an optimal combinatorial auction design problem. THEOREM 5. Under best-only preferences and free disposal, with a constant number of bidders, an optimal randomized combinatorial auction can be designed in polynomial time. PROOF. By the at-most-one-item observation, we can without loss of generality restrict ourselves to allocations where each bidder receives at most one item. There are fewer than (|S| + 1)^k such allocations, where k is the number of bidders--that is, a polynomial number of allocations. Because we can list the outcomes explicitly, we can simply solve this as a payment-maximizing AMD instance, with linear programming. 8. RELATED RESEARCH ON COMPLEXITY IN MECHANISM DESIGN There has been considerable recent interest in mechanism design in computer science. Some of it has focused on issues of computational complexity, but most of that work has strived toward designing mechanisms that are easy to execute (e.g. [20, 15, 19, 9, 12]), rather than studying the complexity of designing the mechanism. The closest piece of earlier work studied the complexity of automated mechanism design by a benevolent designer [5, 6]. Roughgarden has studied the complexity of designing a good network topology for agents that selfishly choose the links they use [21]. This is related to mechanism design, but differs significantly in that the designer only has restricted control over the rules of the game because there is no party that can impose the outcome (or side payments). Also, there is no explicit reporting of preferences. 9. CONCLUSIONS AND FUTURE RESEARCH Often, an outcome must be chosen on the basis of the preferences reported by a group of agents. The key difficulty is that the agents may report their preferences insincerely to make the chosen outcome more favorable to themselves. Mechanism design is the art of designing the rules of the game so that the agents are motivated to report their preferences truthfully, and a desirable outcome is chosen. In a recently emerging approach--called automated mechanism design--a mechanism is computed for the specific preference aggregation setting at hand.
This has several advantages, but the downside is that the mechanism design optimization problem needs to be solved anew each time. Unlike earlier work on automated mechanism design that studied a benevolent designer, in this paper we studied automated mechanism design problems where the designer is self-interested--a setting much more relevant for electronic commerce. In this setting, the center cares only about which outcome is chosen and what payments are made to it. The reason that the agents' preferences are relevant is that the center is constrained to making each agent at least as well off as the agent would have been had it not participated in the mechanism. In this setting, we showed that designing an optimal deterministic mechanism is NP-complete in two important special cases: when the center is interested only in the payments made to it, and when payments are not possible and the center is interested only in the outcome chosen. These hardness results imply hardness in all more general automated mechanism design settings with a self-interested designer. The hardness results apply whether the individual rationality (participation) constraints are applied ex interim or ex post, and whether the solution concept is dominant strategies implementation or Bayes-Nash equilibrium implementation. We then showed that allowing randomization in the mechanism makes the design problem in all these settings computationally easy. Finally, we showed that the paymentmaximizing AMD problem is closely related to an interesting variant of the optimal (revenue-maximizing) combinatorial auction design problem, where the bidders have "best-only" preferences. We showed that here, too, designing an optimal deterministic mechanism is NP-complete even with one agent, but designing an optimal randomized mechanism is easy. Future research includes studying automated mechanism design with a self-interested designer in more restricted settings such as auctions (where the designer's objective may include preferences about which bidder should receive the good--as well as payments). We also want to study the complexity of automated mechanism design in settings where the outcome and type spaces have special structure so they can be represented more concisely. Finally, we plan to assemble a data set of real-world mechanism design problems--both historical and current--and apply automated mechanism design to those problems.
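As a concrete companion to the randomization result summarized above (Theorem 3), the following Python sketch sets up the linear program for the single-agent, payment-maximizing case. It is an illustrative sketch using scipy.optimize.linprog; the variable layout and names are our own and are not taken from the paper, and u here maps (type, outcome) pairs to the single agent's utilities.

```python
import numpy as np
from scipy.optimize import linprog

def solve_single_agent_payment_maximizing_amd(types, outcomes, gamma, u):
    """LP for single-agent randomized payment-maximizing AMD.
    Variables: p[t][o] (outcome distribution per reported type) and pi[t]
    (payment per reported type). Maximizes expected payment subject to
    probability, IR, and IC constraints. Illustrative sketch only."""
    nT, nO = len(types), len(outcomes)
    nvar = nT * nO + nT
    P = lambda ti, oi: ti * nO + oi          # index of p[type ti][outcome oi]
    PI = lambda ti: nT * nO + ti             # index of pi[type ti]

    # Objective: maximize sum_t gamma[t] * pi[t]  ->  minimize the negative.
    c = np.zeros(nvar)
    for ti, t in enumerate(types):
        c[PI(ti)] = -gamma[t]

    # Equality constraints: each p[t][.] must be a probability distribution.
    A_eq = np.zeros((nT, nvar)); b_eq = np.ones(nT)
    for ti in range(nT):
        for oi in range(nO):
            A_eq[ti, P(ti, oi)] = 1.0

    A_ub, b_ub = [], []
    for ti, t in enumerate(types):
        # IR: sum_o p[t][o]*u(t,o) - pi[t] >= 0, rewritten as <= 0 form.
        row = np.zeros(nvar)
        for oi, o in enumerate(outcomes):
            row[P(ti, oi)] = -u[(t, o)]
        row[PI(ti)] = 1.0
        A_ub.append(row); b_ub.append(0.0)
        # IC: truthful report must be at least as good as any misreport s.
        for si in range(nT):
            if si == ti:
                continue
            row = np.zeros(nvar)
            for oi, o in enumerate(outcomes):
                row[P(ti, oi)] -= u[(t, o)]
                row[P(si, oi)] += u[(t, o)]
            row[PI(ti)] += 1.0
            row[PI(si)] -= 1.0
            A_ub.append(row); b_ub.append(0.0)

    bounds = [(0, 1)] * (nT * nO) + [(None, None)] * nT
    return linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                   A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
```

For any such instance, the returned result's x field holds the outcome distributions and payments, and the negation of its fun field is the maximum expected payment; the same construction extends to several agents and to a nonzero g at the cost of more variables and constraints, as in the proof of Theorem 3.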
Self-interested Automated Mechanism Design and Implications for Optimal Combinatorial Auctions ∗ ABSTRACT Often, an outcome must be chosen on the basis of the preferences reported by a group of agents. The key difficulty is that the agents may report their preferences insincerely to make the chosen outcome more favorable to themselves. Mechanism design is the art of designing the rules of the game so that the agents are motivated to report their preferences truthfully, and a desirable outcome is chosen. In a recently proposed approach--called automated mechanism design--a mechanism is computed for the preference aggregation setting at hand. This has several advantages, but the downside is that the mechanism design optimization problem needs to be solved anew each time. Unlike the earlier work on automated mechanism design that studied a benevolent designer, in this paper we study automated mechanism design problems where the designer is self-interested. In this case, the center cares only about which outcome is chosen and what payments are made to it. The reason that the agents' preferences are relevant is that the center is constrained to making each agent at least as well off as the agent would have been had it not participated in the mechanism. In this setting, we show that designing optimal deterministic mechanisms is NP-complete in two important special cases: when the center is interested only in the payments made to it, and when payments are not possible and the center is interested only in the outcome chosen. We then show how allowing for randomization in the mechanism makes problems in this setting computationally easy. Finally, we show that the payment-maximizing AMD problem is closely related to an interesting variant of the optimal (revenuemaximizing) combinatorial auction design problem, where the bidders have "best-only" preferences. We show that here, too, designing an optimal deterministic auction is NPcomplete, but designing an optimal randomized auction is easy. ∗ Supported by NSF under CAREER Award IRI-9703122, Grant IIS-9800994, ITR IIS-0081246, and ITR IIS-0121678. 1. INTRODUCTION In multiagent settings, often an outcome must be chosen on the basis of the preferences reported by a group of agents. Such outcomes could be potential presidents, joint plans, allocations of goods or resources, etc. . The preference aggregator generally does not know the agents' preferences a priori. Rather, the agents report their preferences to the coordinator. Unfortunately, an agent may have an incentive to misreport its preferences in order to mislead the mechanism into selecting an outcome that is more desirable to the agent than the outcome that would be selected if the agent revealed its preferences truthfully. Such manipulation is undesirable because preference aggregation mechanisms are tailored to aggregate preferences in a socially desirable way, and if the agents reveal their preferences insincerely, a socially undesirable outcome may be chosen. Manipulability is a pervasive problem across preference aggregation mechanisms. A seminal negative result, the Gibbard-Satterthwaite theorem, shows that under any nondictatorial preference aggregation scheme, if there are at least 3 possible outcomes, there are preferences under which an agent is better off reporting untruthfully [10, 23]. (A preference aggregation scheme is called dictatorial if one of the agents dictates the outcome no matter what preferences the other agents report.) 
What the aggregator would like to do is design a preference aggregation mechanism so that 1) the self-interested agents are motivated to report their preferences truthfully, and 2) the mechanism chooses an outcome that is desirable from the perspective of some objective. This is the classic setting of mechanism design in game theory. In this paper, we study the case where the designer is self-interested, that is, the designer does not directly care about how the out come relates to the agents' preferences, but is rather concerned with its own agenda for which outcome should be chosen, and with maximizing payments to itself. This is the mechanism design setting most relevant to electronic commerce. In the case where the mechanism designer is interested in maximizing some notion of social welfare, the importance of collecting the agents' preferences is clear. It is perhaps less obvious why they should be collected when the designer is self-interested and hence its objective is not directly related to the agents' preferences. The reason for this is that often the agents' preferences impose limits on how the designer chooses the outcome and payments. The most common such constraint is that of individual rationality (IR), which means that the mechanism cannot make any agent worse off than the agent would have been had it not participated in the mechanism. For instance, in the setting of optimal auction design, the designer (auctioneer) is only concerned with how much revenue is collected, and not per se with how well the allocation of the good (or goods) corresponds to the agents' preferences. Nevertheless, the designer cannot force an agent to pay more than its valuation for the bundle of goods allocated to it. Therefore, even a self-interested designer will choose an outcome that makes the agents reasonably well off. On the other hand, the designer will not necessarily choose a social welfare maximizing outcome. For example, if the designer always chooses an outcome that maximizes social welfare with respect to the reported preferences, and forces each agent to pay the difference between the utility it has now and the utility it would have had if it had not participated in the mechanism, it is easy to see that agents may have an incentive to misreport their preferences--and this may actually lead to less revenue being collected. Indeed, one of the counterintuitive results of optimal auction design theory is that sometimes the good is allocated to nobody even when the auctioneer has a reservation price of 0. Classical mechanism design provides some general mechanisms, which, under certain assumptions, satisfy some notion of nonmanipulability and maximize some objective. The upside of these mechanisms is that they do not rely on (even probabilistic) information about the agents' preferences (e.g., the Vickrey-Clarke-Groves (VCG) mechanism [24, 4, 11]), or they can be easily applied to any probability distribution over the preferences (e.g., the dAGVA mechanism [8, 2], the Myerson auction [18], and the Maskin-Riley multi-unit auction [17]). However, the general mechanisms also have significant downsides: 9 The most famous and most broadly applicable general mechanisms, VCG and dAGVA, only maximize social welfare. If the designer is self-interested, as is the case in many electronic commerce settings, these mechanisms do not maximize the designer's objective. 
9 The general mechanisms that do focus on a selfinterested designer are only applicable in very restricted settings--such as Myerson's expected revenue maximizing auction for selling a single item, and Maskin and Riley's expected revenue maximizing auction for selling multiple identical units of an item. 9 Even in the restricted settings in which these mechanisms apply, the mechanisms only allow for payment maximization. In practice, the designer may also be interested in the outcome per se. For example, an auctioneer may care which bidder receives the item. 9 It is often assumed that side payments can be used to tailor the agents' incentives, but this is not always practical. For example, in barter-based electronic marketplaces--such as Recipco, firstbarter.com, BarterOne, and Intagio--side payments are not allowed. Furthermore, among software agents, it might be more desirable to construct mechanisms that do not rely on the ability to make payments, because many software agents do not have the infrastructure to make payments. In contrast, we follow a recent approach where the mechanism is designed automatically for the specific problem at hand. This approach addresses all of the downsides listed above. We formulate the mechanism design problem as an optimization problem. The input is characterized by the number of agents, the agents' possible types (preferences), and the aggregator's prior distributions over the agents' types. The output is a nonmanipulable mechanism that is optimal with respect to some objective. This approach is called automated mechanism design. The automated mechanism design approach has four advantages over the classical approach of designing general mechanisms. First, it can be used even in settings that do not satisfy the assumptions of the classical mechanisms (such as availability of side payments or that the objective is social welfare). Second, it may allow one to circumvent impossibility results (such as the Gibbard-Satterthwaite theorem) which state that there is no mechanism that is desirable across all preferences. When the mechanism is designed for the setting at hand, it does not matter that it would not work more generally. Third, it may yield better mechanisms (in terms of stronger nonmanipulability guarantees and/or better outcomes) than classical mechanisms because the mechanism capitalizes on the particulars of the setting (the probabilistic information that the designer has about the agents' types). Given the vast amount of information that parties have about each other today, this approach is likely to lead to tremendous savings over classical mechanisms, which largely ignore that information. For example, imagine a company automatically creating its procurement mechanism based on statistical knowledge about its suppliers, rather than using a classical descending procurement auction. Fourth, the burden of design is shifted from humans to a machine. However, automated mechanism design requires the mechanism design optimization problem to be solved anew for each setting. Hence its computational complexity becomes a key issue. Previous research has studied this question for benevolent designers--that wish to maximize, for example, social welfare [5, 6]. In this paper we study the computational complexity of automated mechanism design in the case of a self-interested designer. 
This is an important setting for automated mechanism design due to the shortage of general mechanisms in this area, and the fact that in most e-commerce settings the designer is self-interested. We also show that this problem is closely related to a particular optimal (revenue-maximizing) combinatorial auction design problem. The rest of this paper is organized as follows. In Section 2, we justify the focus on nonmanipulable mechanisms. In Section 3, we define the problem we study. In Section 4, we show that designing an optimal deterministic mechanism is NP-complete even when the designer only cares about the payments made to it. In Section 5, we show that designing an optimal deterministic mechanism is also NP-complete when payments are not possible and the designer is only interested in the outcome chosen. In Section 6, we show that an optimal randomized mechanism can be designed in polynomial time even in the general case. Finally, in Section 7, we show that for designing optimal combinatorial auctions under best-only preferences, our results on AMD imply that this problem is NP-complete for deterministic auctions, but easy for randomized auctions. 2. JUSTIFYING THE FOCUS ON NONMANIPULABLE MECHANISMS 3. DEFINITIONS 3.1 Individual rationality (IR) constraints 3.2 Incentive compatibility (IC) constraints 3.3 Automated mechanism design 4. PAYMENT-MAXIMIZING DETERMINISTIC AMD IS HARD 5. SELF-INTERESTED DETERMINISTIC AMD WITHOUT PAYMENTS IS HARD 6. RANDOMIZED AMD FOR A SELFINTERESTED DESIGNER IS EASY 7. IMPLICATIONS FOR AN OPTIMAL COMBINATORIAL AUCTION DESIGN PROBLEM 8. RELATED RESEARCH ON COMPLEXITY IN MECHANISM DESIGN 9. CONCLUSIONS AND FUTURE RESEARCH Often, an outcome must be chosen on the basis of the preferences reported by a group of agents. The key difficulty is that the agents may report their preferences insincerely to make the chosen outcome more favorable to themselves. Mechanism design is the art of designing the rules of the game so that the agents are motivated to report their preferences truthfully, and a desirable outcome is chosen. In a recently emerging approach--called automated mechanism design--a mechanism is computed for the specific preference aggregation setting at hand. This has several advantages, but the downside is that the mechanism design optimization problem needs to be solved anew each time. Unlike earlier work on automated mechanism design that studied a benevolent designer, in this paper we studied automated mechanism design problems where the designer is self-interested--a setting much more relevant for electronic commerce. In this setting, the center cares only about which outcome is chosen and what payments are made to it. The reason that the agents' preferences are relevant is that the center is constrained to making each agent at least as well off as the agent would have been had it not participated in the mechanism. In this setting, we showed that designing an optimal deterministic mechanism is NP-complete in two important special cases: when the center is interested only in the payments made to it, and when payments are not possible and the center is interested only in the outcome chosen. These hardness results imply hardness in all more general automated mechanism design settings with a self-interested designer. The hardness results apply whether the individual rationality (participation) constraints are applied ex interim or ex post, and whether the solution concept is dominant strategies implementation or Bayes-Nash equilibrium implementation. 
We then showed that allowing randomization in the mechanism makes the design problem in all these settings computationally easy. Finally, we showed that the paymentmaximizing AMD problem is closely related to an interesting variant of the optimal (revenue-maximizing) combinatorial auction design problem, where the bidders have "best-only" preferences. We showed that here, too, designing an optimal deterministic mechanism is NP-complete even with one agent, but designing an optimal randomized mechanism is easy. Future research includes studying automated mechanism design with a self-interested designer in more restricted settings such as auctions (where the designer's objective may include preferences about which bidder should receive the good--as well as payments). We also want to study the complexity of automated mechanism design in settings where the outcome and type spaces have special structure so they can be represented more concisely. Finally, we plan to assemble a data set of real-world mechanism design problems--both historical and current--and apply automated mechanism design to those problems.
Self-interested Automated Mechanism Design and Implications for Optimal Combinatorial Auctions ∗ ABSTRACT Often, an outcome must be chosen on the basis of the preferences reported by a group of agents. The key difficulty is that the agents may report their preferences insincerely to make the chosen outcome more favorable to themselves. Mechanism design is the art of designing the rules of the game so that the agents are motivated to report their preferences truthfully, and a desirable outcome is chosen. In a recently proposed approach--called automated mechanism design--a mechanism is computed for the preference aggregation setting at hand. This has several advantages, but the downside is that the mechanism design optimization problem needs to be solved anew each time. Unlike the earlier work on automated mechanism design that studied a benevolent designer, in this paper we study automated mechanism design problems where the designer is self-interested. In this case, the center cares only about which outcome is chosen and what payments are made to it. The reason that the agents' preferences are relevant is that the center is constrained to making each agent at least as well off as the agent would have been had it not participated in the mechanism. In this setting, we show that designing optimal deterministic mechanisms is NP-complete in two important special cases: when the center is interested only in the payments made to it, and when payments are not possible and the center is interested only in the outcome chosen. We then show how allowing for randomization in the mechanism makes problems in this setting computationally easy. Finally, we show that the payment-maximizing AMD problem is closely related to an interesting variant of the optimal (revenuemaximizing) combinatorial auction design problem, where the bidders have "best-only" preferences. We show that here, too, designing an optimal deterministic auction is NPcomplete, but designing an optimal randomized auction is easy. ∗ Supported by NSF under CAREER Award IRI-9703122, Grant IIS-9800994, ITR IIS-0081246, and ITR IIS-0121678. 1. INTRODUCTION In multiagent settings, often an outcome must be chosen on the basis of the preferences reported by a group of agents. The preference aggregator generally does not know the agents' preferences a priori. Rather, the agents report their preferences to the coordinator. Unfortunately, an agent may have an incentive to misreport its preferences in order to mislead the mechanism into selecting an outcome that is more desirable to the agent than the outcome that would be selected if the agent revealed its preferences truthfully. Such manipulation is undesirable because preference aggregation mechanisms are tailored to aggregate preferences in a socially desirable way, and if the agents reveal their preferences insincerely, a socially undesirable outcome may be chosen. Manipulability is a pervasive problem across preference aggregation mechanisms. (A preference aggregation scheme is called dictatorial if one of the agents dictates the outcome no matter what preferences the other agents report.) What the aggregator would like to do is design a preference aggregation mechanism so that 1) the self-interested agents are motivated to report their preferences truthfully, and 2) the mechanism chooses an outcome that is desirable from the perspective of some objective. This is the classic setting of mechanism design in game theory. 
In this paper, we study the case where the designer is self-interested, that is, the designer does not directly care about how the out come relates to the agents' preferences, but is rather concerned with its own agenda for which outcome should be chosen, and with maximizing payments to itself. This is the mechanism design setting most relevant to electronic commerce. In the case where the mechanism designer is interested in maximizing some notion of social welfare, the importance of collecting the agents' preferences is clear. It is perhaps less obvious why they should be collected when the designer is self-interested and hence its objective is not directly related to the agents' preferences. The reason for this is that often the agents' preferences impose limits on how the designer chooses the outcome and payments. The most common such constraint is that of individual rationality (IR), which means that the mechanism cannot make any agent worse off than the agent would have been had it not participated in the mechanism. For instance, in the setting of optimal auction design, the designer (auctioneer) is only concerned with how much revenue is collected, and not per se with how well the allocation of the good (or goods) corresponds to the agents' preferences. Nevertheless, the designer cannot force an agent to pay more than its valuation for the bundle of goods allocated to it. Therefore, even a self-interested designer will choose an outcome that makes the agents reasonably well off. On the other hand, the designer will not necessarily choose a social welfare maximizing outcome. Classical mechanism design provides some general mechanisms, which, under certain assumptions, satisfy some notion of nonmanipulability and maximize some objective. However, the general mechanisms also have significant downsides: 9 The most famous and most broadly applicable general mechanisms, VCG and dAGVA, only maximize social welfare. If the designer is self-interested, as is the case in many electronic commerce settings, these mechanisms do not maximize the designer's objective. 9 Even in the restricted settings in which these mechanisms apply, the mechanisms only allow for payment maximization. In practice, the designer may also be interested in the outcome per se. 9 It is often assumed that side payments can be used to tailor the agents' incentives, but this is not always practical. Furthermore, among software agents, it might be more desirable to construct mechanisms that do not rely on the ability to make payments, because many software agents do not have the infrastructure to make payments. In contrast, we follow a recent approach where the mechanism is designed automatically for the specific problem at hand. This approach addresses all of the downsides listed above. We formulate the mechanism design problem as an optimization problem. The input is characterized by the number of agents, the agents' possible types (preferences), and the aggregator's prior distributions over the agents' types. The output is a nonmanipulable mechanism that is optimal with respect to some objective. This approach is called automated mechanism design. The automated mechanism design approach has four advantages over the classical approach of designing general mechanisms. First, it can be used even in settings that do not satisfy the assumptions of the classical mechanisms (such as availability of side payments or that the objective is social welfare). 
Second, it may allow one to circumvent impossibility results (such as the Gibbard-Satterthwaite theorem) which state that there is no mechanism that is desirable across all preferences. When the mechanism is designed for the setting at hand, it does not matter that it would not work more generally. Third, it may yield better mechanisms (in terms of stronger nonmanipulability guarantees and/or better outcomes) than classical mechanisms because the mechanism capitalizes on the particulars of the setting (the probabilistic information that the designer has about the agents' types). For example, imagine a company automatically creating its procurement mechanism based on statistical knowledge about its suppliers, rather than using a classical descending procurement auction. Fourth, the burden of design is shifted from humans to a machine. However, automated mechanism design requires the mechanism design optimization problem to be solved anew for each setting. Previous research has studied this question for benevolent designers--that wish to maximize, for example, social welfare [5, 6]. In this paper we study the computational complexity of automated mechanism design in the case of a self-interested designer. This is an important setting for automated mechanism design due to the shortage of general mechanisms in this area, and the fact that in most e-commerce settings the designer is self-interested. We also show that this problem is closely related to a particular optimal (revenue-maximizing) combinatorial auction design problem. In Section 2, we justify the focus on nonmanipulable mechanisms. In Section 3, we define the problem we study. In Section 4, we show that designing an optimal deterministic mechanism is NP-complete even when the designer only cares about the payments made to it. In Section 5, we show that designing an optimal deterministic mechanism is also NP-complete when payments are not possible and the designer is only interested in the outcome chosen. In Section 6, we show that an optimal randomized mechanism can be designed in polynomial time even in the general case. Finally, in Section 7, we show that for designing optimal combinatorial auctions under best-only preferences, our results on AMD imply that this problem is NP-complete for deterministic auctions, but easy for randomized auctions. 9. CONCLUSIONS AND FUTURE RESEARCH Often, an outcome must be chosen on the basis of the preferences reported by a group of agents. The key difficulty is that the agents may report their preferences insincerely to make the chosen outcome more favorable to themselves. Mechanism design is the art of designing the rules of the game so that the agents are motivated to report their preferences truthfully, and a desirable outcome is chosen. In a recently emerging approach--called automated mechanism design--a mechanism is computed for the specific preference aggregation setting at hand. This has several advantages, but the downside is that the mechanism design optimization problem needs to be solved anew each time. Unlike earlier work on automated mechanism design that studied a benevolent designer, in this paper we studied automated mechanism design problems where the designer is self-interested--a setting much more relevant for electronic commerce. In this setting, the center cares only about which outcome is chosen and what payments are made to it. 
The reason that the agents' preferences are relevant is that the center is constrained to making each agent at least as well off as the agent would have been had it not participated in the mechanism. In this setting, we showed that designing an optimal deterministic mechanism is NP-complete in two important special cases: when the center is interested only in the payments made to it, and when payments are not possible and the center is interested only in the outcome chosen. These hardness results imply hardness in all more general automated mechanism design settings with a self-interested designer. We then showed that allowing randomization in the mechanism makes the design problem in all these settings computationally easy. Finally, we showed that the paymentmaximizing AMD problem is closely related to an interesting variant of the optimal (revenue-maximizing) combinatorial auction design problem, where the bidders have "best-only" preferences. We showed that here, too, designing an optimal deterministic mechanism is NP-complete even with one agent, but designing an optimal randomized mechanism is easy. Future research includes studying automated mechanism design with a self-interested designer in more restricted settings such as auctions (where the designer's objective may include preferences about which bidder should receive the good--as well as payments). We also want to study the complexity of automated mechanism design in settings where the outcome and type spaces have special structure so they can be represented more concisely. Finally, we plan to assemble a data set of real-world mechanism design problems--both historical and current--and apply automated mechanism design to those problems.
C-68
An Evaluation of Availability Latency in Carrier-based Vehicular ad-hoc Networks
On-demand delivery of audio and video clips in peer-to-peer vehicular ad-hoc networks is an emerging area of research. Our target environment uses data carriers, termed zebroids, where a mobile device carries a data item on behalf of a server to a client thereby minimizing its availability latency. In this study, we quantify the variation in availability latency with zebroids as a function of a rich set of parameters such as car density, storage per device, repository size, and replacement policies employed by zebroids. Using analysis and extensive simulations, we gain novel insights into the design of carrier-based systems. Significant improvements in latency can be obtained with zebroids at the cost of a minimal overhead. These improvements occur even in scenarios with lower accuracy in the predictions of the car routes. Two particularly surprising findings are: (1) a naive random replacement policy employed by the zebroids shows competitive performance, and (2) latency improvements obtained with a simplified instantiation of zebroids are found to be robust to changes in the popularity distribution of the data items.
[ "avail latenc", "latenc", "audio and video clip", "data carrier", "term zebroid", "zebroid", "mobil devic", "mobil", "car densiti", "storag per devic", "repositori size", "replac polici", "naiv random replac polici", "peer-to-peer vehicular ad-hoc network", "zebroid simplifi instanti", "vehicular network", "automaton", "markov model" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "M", "R", "R", "U", "U" ]
An Evaluation of Availability Latency in Carrier-based Vehicular ad-hoc Networks Shahram Ghandeharizadeh Dept of Computer Science Univ of Southern California Los Angeles, CA 90089, USA shahram@usc.edu Shyam Kapadia Dept of Computer Science Univ of Southern California Los Angeles, CA 90089, USA kapadia@usc.edu Bhaskar Krishnamachari Dept of Computer Science Dept of Electrical Engineering Univ of Southern California Los Angeles, CA 90089, USA bkrishna@usc.edu ABSTRACT On-demand delivery of audio and video clips in peer-to-peer vehicular ad-hoc networks is an emerging area of research. Our target environment uses data carriers, termed zebroids, where a mobile device carries a data item on behalf of a server to a client thereby minimizing its availability latency. In this study, we quantify the variation in availability latency with zebroids as a function of a rich set of parameters such as car density, storage per device, repository size, and replacement policies employed by zebroids. Using analysis and extensive simulations, we gain novel insights into the design of carrier-based systems. Significant improvements in latency can be obtained with zebroids at the cost of a minimal overhead. These improvements occur even in scenarios with lower accuracy in the predictions of the car routes. Two particularly surprising findings are: (1) a naive random replacement policy employed by the zebroids shows competitive performance, and (2) latency improvements obtained with a simplified instantiation of zebroids are found to be robust to changes in the popularity distribution of the data items. Categories and Subject Descriptors: C.2.4 [Distributed Systems]: Client/Server General Terms: Algorithms, Performance, Design, Experimentation. 1. INTRODUCTION Technological advances in areas of storage and wireless communications have now made it feasible to envision on-demand delivery of data items, for e.g., video and audio clips, in vehicular peer-topeer networks. In prior work, Ghandeharizadeh et al. [10] introduce the concept of vehicles equipped with a Car-to-Car-Peer-toPeer device, termed AutoMata, for in-vehicle entertainment. The notable features of an AutoMata include a mass storage device offering hundreds of gigabytes (GB) of storage, a fast processor, and several types of networking cards. Even with today``s 500 GB disk drives, a repository of diverse entertainment content may exceed the storage capacity of a single AutoMata. Such repositories constitute the focus of this study. To exchange data, we assume each AutoMata is configured with two types of networking cards: 1) a low-bandwidth networking card with a long radio-range in the order of miles that enables an AutoMata device to communicate with a nearby cellular or WiMax station, 2) a high-bandwidth networking card with a limited radio-range in the order of hundreds of feet. The high bandwidth connection supports data rates in the order of tens to hundreds of Megabits per second and represents the ad-hoc peer to peer network between the vehicles. This is labelled as the data plane and is employed to exchange data items between devices. The low-bandwidth connection serves as the control plane, enabling AutoMata devices to exchange meta-data with one or more centralized servers. This connection offers bandwidths in the order of tens to hundreds of Kilobits per second. The centralized servers, termed dispatchers, compute schedules of data delivery along the data plane using the provided meta-data. 
These schedules are transmitted to the participating vehicles using the control plane. The technical feasibility of such a two-tier architecture is presented in [7], with preliminary results to demonstrate that the bandwidth of the control plane is sufficient for exchange of the control information needed for realizing such an application. In a typical scenario, an AutoMata device presents a passenger with a list of data items (without loss of generality, the term data item might refer either to traditional media such as text or to continuous media such as an audio or video clip), showing both the name of each data item and its availability latency. The latter, denoted as δ, is defined as the earliest time at which the client encounters a copy of its requested data item. A data item is available immediately when it resides in the local storage of the AutoMata device serving the request. Due to storage constraints, an AutoMata may not store the entire repository. In this case, availability latency is the time from when the user issues a request until when the AutoMata encounters another car containing the referenced data item. (The terms car and AutoMata are used interchangeably in this study.) The availability latency for an item is a function of the current location of the client, its destination and travel path, the mobility model of the AutoMata-equipped vehicles, the number of replicas constructed for the different data items, and the placement of data item replicas across the vehicles. A method to improve the availability latency is to employ data carriers which transport a replica of the requested data item from a server car containing it to a client that requested it. These data carriers are termed 'zebroids'. Selection of zebroids is facilitated by the two-tiered architecture. The control plane enables centralized information gathering at a dispatcher present at a base-station. (There may be dispatchers deployed at a subset of the base-stations for fault-tolerance and robustness; dispatchers between base-stations may communicate via the wired infrastructure.) Some examples of control information are currently active requests, travel paths of the clients and their destinations, and paths of the other cars. For each client request, the dispatcher may choose a set of z carriers that collaborate to transfer a data item from a server to a client (z-relay zebroids). Here, z is the number of zebroids such that 0 ≤ z < N, where N is the total number of cars. When z = 0 there are no carriers, requiring a server to deliver the data item directly to the client. Otherwise, the chosen relay team of z zebroids hands the data item over transitively from one to another to arrive at the client, thereby reducing availability latency (see Section 3.1 for details). To increase robustness, the dispatcher may employ multiple relay teams of z carriers for every request. This may be useful in scenarios where the dispatcher has lower prediction accuracy in the information about the routes of the cars. Finally, storage constraints may require a zebroid to evict existing data items from its local storage to accommodate the client-requested item. In this study, we quantify the following main factors that affect availability latency in the presence of zebroids: (i) data item repository size, (ii) car density, (iii) storage capacity per car, (iv) client trip duration, (v) replacement scheme employed by the zebroids, and (vi) accuracy of the car route predictions.
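Since the dispatcher's decisions hinge on the availability latency δ defined above, a small illustration may help. The following minimal Python sketch computes δ for a single request directly from predicted routes, with no carriers involved; the car names, cell numbering, and route lists are hypothetical and only serve to make the definition concrete.

```python
from typing import Dict, List

def availability_latency(client_route: List[int],
                         server_routes: Dict[str, List[int]],
                         trip_duration: int) -> int:
    """Earliest time slot at which the client shares a cell with any server
    holding the requested item; returns trip_duration if no encounter occurs."""
    for t in range(min(trip_duration, len(client_route))):
        client_cell = client_route[t]
        for route in server_routes.values():
            if t < len(route) and route[t] == client_cell:
                return t  # first co-location with a replica holder
    return trip_duration  # delta is capped at gamma when the item is never met

# Hypothetical routes expressed as one cell id per time slot.
client = [0, 1, 2, 3, 4, 5]
servers = {"car_17": [9, 8, 2, 6, 5, 4], "car_42": [7, 7, 7, 7, 7, 7]}
print(availability_latency(client, servers, trip_duration=6))  # -> 2
```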
For a significant subset of these factors, we address some key questions pertaining to the use of zebroids both via analysis and extensive simulations. Our main findings are as follows. A naive random replacement policy employed by the zebroids shows competitive performance in terms of availability latency. With such a policy, substantial improvements in latency can be obtained with zebroids at a minimal replacement overhead. In more practical scenarios, where the dispatcher has inaccurate information about the car routes, zebroids continue to provide latency improvements. A surprising result is that changes in popularity of the data items do not impact the latency gains obtained with a simple instantiation of z-relay zebroids called one-instantaneous zebroids (see Section 3.1). This study suggests a number of interesting directions to be pursued to gain a better understanding of the design of carrier-based systems that improve availability latency.

Related Work: Replication in mobile ad-hoc networks has been a widely studied topic [11, 12, 15]. However, none of these studies employ zebroids as data carriers to reduce the latency of the client's requests. Several novel and important studies such as ZebraNet [13], DakNet [14], Data Mules [16], Message Ferries [20], and Seek and Focus [17] have analyzed factors impacting intermittently connected networks consisting of data carriers similar in spirit to zebroids. The factors considered by each study are dictated by its assumed environment and target application. A novel characteristic of our study is the impact on availability latency for a given database repository of items. A detailed description of related work can be obtained in [9].

The rest of this paper is organized as follows. Section 2 provides an overview of the terminology along with the factors that impact availability latency in the presence of zebroids. Section 3 describes how the zebroids may be employed. Section 4 provides details of the analysis methodology employed to capture the performance with zebroids. Section 5 describes the details of the simulation environment used for evaluation. Section 6 lists the key questions examined in this study and answers them via analysis and simulations. Finally, Section 7 presents brief conclusions and future research directions.

2. OVERVIEW AND TERMINOLOGY

Table 1 summarizes the notation of the parameters used in the paper. Below we introduce some terminology used in the paper.

Table 1: Terms and their definitions
Database parameters:
  T: Number of data items.
  Si: Size of data item i.
  fi: Frequency of access to data item i.
Replication parameters:
  Ri: Normalized frequency of access to data item i.
  ri: Number of replicas for data item i.
  n: Characterizes a particular replication scheme.
  δi: Average availability latency of data item i.
  δagg: Aggregate availability latency, δagg = Σ_{j=1}^{T} δj · fj.
AutoMata system parameters:
  G: Number of cells in the map (2D-torus).
  N: Number of AutoMata devices in the system.
  α: Storage capacity per AutoMata.
  γ: Trip duration of the client AutoMata.
  ST: Total storage capacity of the AutoMata system, ST = N · α.

Assume a network of N AutoMata-equipped cars, each with storage capacity of α bytes. The total storage capacity of the system is ST = N · α. There are T data items in the database, each with size Si. The frequency of access to data item i is denoted as fi, with Σ_{j=1}^{T} fj = 1. Let the trip duration of the client AutoMata under consideration be γ.
We now define the normalized frequency of access to the data item i, denoted by Ri, as:

Ri = (fi)^n / Σ_{j=1}^{T} (fj)^n, with 0 ≤ n ≤ ∞    (1)

The exponent n characterizes a particular replication technique. A square-root replication scheme is realized when n = 0.5 [5]. This serves as the base-line for comparison with the case when zebroids are deployed. Ri is normalized to a value between 0 and 1. The number of replicas for data item i, denoted as ri, is ri = min(N, max(1, ⌊Ri · N · α / Si⌋)). This captures the case when at least one copy of every data item must be present in the ad-hoc network at all times. In cases where a data item may be lost from the ad-hoc network, this equation becomes ri = min(N, max(0, ⌊Ri · N · α / Si⌋)). In this case, a request for the lost data item may need to be satisfied by fetching the item from a remote server. The availability latency for a data item i, denoted as δi, is defined as the earliest time at which a client AutoMata will find the first replica of the item accessible to it. If this condition is not satisfied, then we set δi to γ. This indicates that data item i was not available to the client during its journey. Note that since there is at least one replica in the system for every data item i, by setting γ to a large value we ensure that the client's request for any data item i will be satisfied. However, in most practical circumstances γ may not be so large as to find every data item. We are interested in the availability latency observed across all data items. Hence, we weight the average availability latency of every data item i by its fi to obtain the following weighted availability latency (δagg) metric:

δagg = Σ_{i=1}^{T} δi · fi

Next, we present our solution approach describing how zebroids are selected.

3. SOLUTION APPROACH

3.1 Zebroids

When a client references a data item missing from its local storage, the dispatcher identifies all cars with a copy of the data item as servers. Next, the dispatcher obtains the future routes of all cars for a finite time duration equivalent to the maximum time the client is willing to wait for its request to be serviced. Using this information, the dispatcher schedules the quickest delivery path from any of the servers to the client using any other cars as intermediate carriers. Hence, it determines the optimal set of forwarding decisions that will enable the data item to be delivered to the client in the minimum amount of time. Note that the latency along the quickest delivery path that employs a relay team of z zebroids is similar to that obtained with epidemic routing [19] under the assumptions of infinite storage and no interference. A simple instantiation of z-relay zebroids occurs when z = 1 and the client's request triggers a transfer of a copy of the requested data item from a server to a zebroid in its vicinity. Such a zebroid is termed a one-instantaneous zebroid. In some cases, the dispatcher might have inaccurate information about the routes of the cars. Hence, a zebroid scheduled on the basis of this inaccurate information may not rendezvous with its target client. To minimize the likelihood of such scenarios, the dispatcher may schedule multiple zebroids. This may incur additional overhead due to redundant resource utilization to obtain the same latency improvements. The time required to transfer a data item from a server to a zebroid depends on its size and the available link bandwidth.
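To make Equation 1 and the δagg metric concrete, here is a minimal Python sketch of the replica-allocation rule and the weighted latency computation. The access frequencies, item sizes, and parameter values below are hypothetical; the code simply follows the definitions above.

```python
def replica_counts(frequencies, n, N, alpha, sizes):
    """Replica count per item from Equation 1 (n = 0.5 gives square-root
    replication); at least one copy of every item is always kept."""
    weights = [f ** n for f in frequencies]
    total = sum(weights)
    R = [w / total for w in weights]                     # normalized frequency R_i
    return [min(N, max(1, int(R_i * N * alpha / S_i)))   # int() acts as the floor for positive values
            for R_i, S_i in zip(R, sizes)]

def aggregate_latency(latencies, frequencies):
    """Weighted availability latency: delta_agg = sum_i delta_i * f_i."""
    return sum(d * f for d, f in zip(latencies, frequencies))

# Hypothetical repository of T = 4 items with skewed access frequencies.
f = [0.5, 0.25, 0.15, 0.10]
print(replica_counts(f, n=0.5, N=20, alpha=2, sizes=[1, 1, 1, 1]))  # -> [14, 10, 8, 6]
print(aggregate_latency([1.2, 2.0, 3.1, 4.5], f))                   # approx. 2.015
```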
With small data items, it is reasonable to assume that this transfer time is small, especially in the presence of the high bandwidth data plane. Large data items may be divided into smaller chunks, enabling the dispatcher to schedule one or more zebroids to deliver each chunk to a client in a timely manner. This remains a future research direction. Initially, the number of replicas for each data item might be computed using Equation 1. This scheme computes the number of data item replicas as a function of their popularity. It is static because the number of replicas in the system does not change and no replacements are performed. Hence, this is referred to as the 'no-zebroids' environment. We quantify the performance of the various replacement policies with reference to this base-line that does not employ zebroids. One may assume a cold start phase, where initially only one or a few copies of every data item exist in the system. Many storage slots of the cars may be unoccupied. When the cars encounter one another they construct new replicas of some selected data items to occupy the empty slots. The selection procedure may be to choose the data items uniformly at random. New replicas are created as long as a car has a certain threshold of its storage unoccupied. Eventually, the majority of the storage capacity of a car will be exhausted.

3.2 Carrier-based Replacement Policies

The replacement policies considered in this paper are reactive since a replacement occurs only in response to a request issued for a certain data item. When the local storage of a zebroid is completely occupied, it needs to replace one of its existing items to carry the client-requested data item. For this purpose, the zebroid must select an appropriate candidate for eviction. This decision process is analogous to that encountered in operating system paging, where the goal is to maximize the cache hit ratio to prevent disk access delay [18]. The carrier-based replacement policies employed in our study are Least Recently Used (LRU), Least Frequently Used (LFU) and Random (where an eviction candidate is chosen uniformly at random). We have considered local and global variants of the LRU/LFU policies, which determine whether local or global knowledge of the contents of the cars known at the dispatcher is used for the eviction decision at a zebroid (see [9] for more details). The replacement policies incur the following overheads. First, the complexity associated with the implementation of a policy. Second, the bandwidth used to transfer a copy of a data item from a server to the zebroid. Third, the average number of replacements incurred by the zebroids. Note that in the no-zebroids case none of these overheads is incurred. The metrics considered in this study are the aggregate availability latency, δagg, the percentage improvement in δagg with zebroids as compared to the no-zebroids case, and the average number of replacements incurred per client request, which is an indicator of the overhead incurred by zebroids. Note that the dispatchers, with the help of the control plane, may ensure that no data item is lost from the system. In other words, at least one replica of every data item is maintained in the ad-hoc network at all times. In such cases, even though a car may meet a requesting client earlier than other servers, if its local storage contains data items with only a single copy in the system, then such a car is not chosen as a zebroid.
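As an illustration of the eviction step described in Section 3.2, the following Python sketch selects a replacement victim under the Random, LRU, or LFU policy. The data structures (an ordered map of cached items kept in recency order and a dispatcher-side request count) are assumptions made for this example, not the paper's implementation.

```python
import random
from collections import OrderedDict

def pick_victim(policy, cache, access_counts):
    """Choose a data item to evict from a zebroid's local storage.

    cache: OrderedDict of item -> last-access slot, maintained oldest first.
    access_counts: item -> request count known at the dispatcher (for LFU)."""
    if policy == "random":
        return random.choice(list(cache))              # uniform over cached items
    if policy == "lru":
        return next(iter(cache))                       # least recently used item
    if policy == "lfu":
        return min(cache, key=lambda i: access_counts.get(i, 0))
    raise ValueError(policy)

storage = OrderedDict([("item_3", 4), ("item_9", 7), ("item_1", 9)])
counts = {"item_3": 12, "item_9": 2, "item_1": 5}
print(pick_victim("lfu", storage, counts))  # -> item_9 (fewest recorded requests)
```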
4. ANALYSIS METHODOLOGY

Here, we present the analytical evaluation methodology and some approximations as closed-form equations that capture the improvements in availability latency that can be obtained with both one-instantaneous and z-relay zebroids. First, we present some preliminaries of our analysis methodology.
• Let N be the number of cars in the network performing a 2D random walk on a √G × √G torus. An additional car serves as a client, yielding a total of N + 1 cars. Such a mobility model has been used widely in the literature [17, 16], chiefly because it is amenable to analysis and provides a baseline against which the performance of other mobility models can be compared. Moreover, this class of Markovian mobility models has been used to model the movements of vehicles [3, 21].
• We assume that all cars start from the stationary distribution and perform independent random walks. Although the independence assumption does hold for sparse density scenarios, it is no longer valid when N approaches G.
• Let the size of the data item repository of interest be T. Also, data item i has ri replicas. This implies that ri cars, identified as servers, have a copy of this data item when the client requests item i.
All analysis results presented in this section are obtained assuming that the client is willing to wait as long as it takes for its request to be satisfied (unbounded trip duration, γ = ∞). With the random walk mobility model on a 2D-torus, there is a guarantee that as long as there is at least one replica of the requested data item in the network, the client will eventually encounter this replica [2]. Extensions to the analysis that also consider finite trip durations can be obtained in [9]. Consider a scenario where no zebroids are employed. In this case, the expected availability latency for the data item is the expected meeting time of the random walk undertaken by the client with any of the random walks performed by the servers. Aldous et al. [2] show that the meeting time of two random walks in such a setting can be modelled as an exponential distribution with mean C = c · G · log G, where the constant c ≈ 0.17 for G ≥ 25. The meeting time, or equivalently the availability latency δi, for the client requesting data item i is the time till it encounters any of these ri replicas for the first time. This is also an exponential distribution with the following expected value (note that this formulation is valid only for sparse cases when G >> ri):

δi = (c · G · log G) / ri

The aggregate availability latency without employing zebroids is then this expression averaged over all data items, weighted by their frequency of access:

δagg(no-zeb) = Σ_{i=1}^{T} fi · (c · G · log G) / ri = Σ_{i=1}^{T} fi · C / ri    (2)

4.1 One-instantaneous zebroids

Recall that with one-instantaneous zebroids, for a given request, a new replica is created on a car in the vicinity of the server, provided this car meets the client earlier than any of the ri servers. Moreover, this replica is spawned at the time step when the client issues the request. Let N^c_i be the expected total number of nodes that are in the same cell as any of the ri servers. Then, we have

N^c_i = (N − ri) · (1 − (1 − 1/G)^{ri})    (3)

In the analytical model, we assume that N^c_i new replicas are created, so that the total number of replicas is increased to ri + N^c_i. The availability latency is reduced since the client is more likely to meet a replica earlier.
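Before deriving the corresponding expression with zebroids, a brief numeric sketch of Equations 2 and 3 may be useful. The torus size, car count, frequencies, and replica counts below are hypothetical values chosen only to exercise the formulas; the natural logarithm is assumed for log G.

```python
import math

def expected_no_zebroid_latency(freqs, replicas, G, c=0.17):
    """Equation 2: delta_agg(no-zeb) = sum_i f_i * C / r_i with C = c * G * log G.
    Valid only in the sparse regime where G >> r_i."""
    C = c * G * math.log(G)
    return sum(f * C / r for f, r in zip(freqs, replicas))

def expected_colocated_cars(N, r_i, G):
    """Equation 3: expected number of other cars sharing a cell with any of the r_i servers."""
    return (N - r_i) * (1 - (1 - 1.0 / G) ** r_i)

# Hypothetical setting: 10 x 10 torus (G = 100), N = 50 cars, two items with 4 and 2 replicas.
freqs, replicas, G, N = [0.7, 0.3], [4, 2], 100, 50
print(round(expected_no_zebroid_latency(freqs, replicas, G), 2))
print(round(expected_colocated_cars(N, replicas[0], G), 2))
```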
The aggregated expected availability latency in the case of one-instantaneous zebroids is then given by,

δagg(zeb) = Σ_{i=1}^{T} fi · (c · G · log G) / (ri + N^c_i) = Σ_{i=1}^{T} fi · C / (ri + N^c_i)    (4)

Note that in obtaining this expression, for ease of analysis, we have assumed that the new replicas start from random locations in the torus (not necessarily from the same cell as the original ri servers). It thus treats all the N^c_i carriers independently, just like the ri original servers. As we shall show below by comparison with simulations, this approximation provides an upper-bound on the improvements that can be obtained because it results in a lower expected latency at the client. It should be noted that the procedure listed above will yield a similar latency to that employed by a dispatcher employing one-instantaneous zebroids (see Section 3.1). Since the dispatcher is aware of all future car movements, it would only transfer the requested data item on a single zebroid if it determines that the zebroid will meet the client earlier than any other server. This selected zebroid is included in the N^c_i new replicas.

4.2 z-relay zebroids

To calculate the expected availability latency with z-relay zebroids, we use a coloring problem analog similar to an approach used by Spyropoulos et al. [17]. Details of the procedure to obtain a closed-form expression are given in [9]. The aggregate availability latency (δagg) with z-relay zebroids is given by,

δagg(zeb) = Σ_{i=1}^{T} fi · (C / (N + 1)) · (1 / (N + 1 − ri)) · (N · log(N / ri) − log(N + 1 − ri))    (5)

5. SIMULATION METHODOLOGY

The simulation environment considered in this study comprises vehicles such as cars that carry a fraction of the data item repository. A prediction accuracy parameter inherently provides a certain probabilistic guarantee on the confidence of the car route predictions known at the dispatcher. A value of 100% implies that the exact routes of all cars are known at all times. A 70% value for this parameter indicates that the routes predicted for the cars will match the actual ones with probability 0.7. Note that this probability is spread across the car routes for the entire trip duration. We now provide the preliminaries of the simulation study and then describe the parameter settings used in our experiments.
• Similar to the analysis methodology, the map used is a 2D torus. A Markov mobility model representing an unbiased 2D random walk on the surface of the torus describes the movement of the cars across this torus.
• Each grid/cell is a unique state of this Markov chain. In each time slot, every car makes a transition from a cell to any of its neighboring 8 cells (see the sketch after this list). The transition is a function of the current location of the car and a probability transition matrix Q = [qij], where qij is the probability of transition from state i to state j. Only AutoMata-equipped cars within the same cell may communicate with each other.
• The parameters γ, δ have been discretized and expressed in terms of the number of time slots.
• An AutoMata device does not maintain more than one replica of a data item. This is because additional replicas occupy storage without providing benefits.
• Either one-instantaneous or z-relay zebroids may be employed per client request for latency improvement.
• Unless otherwise mentioned, the prediction accuracy parameter is assumed to be 100%. This is because this study aims to quantify the effect of a large number of parameters individually on availability latency. Here, we set the size of every data item, Si, to be 1.
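The following minimal Python sketch illustrates the mobility model used in the simulations: a discrete-time random walk on a 2D torus in which a car moves to one of its 8 neighboring cells every slot. Choosing among the neighbors uniformly is an assumption made here for illustration; the study only requires some transition matrix Q = [qij].

```python
import random

def step(cell, G_side):
    """One time-slot move on a sqrt(G) x sqrt(G) torus: the car jumps to one of
    its 8 neighboring cells, chosen uniformly at random in this sketch."""
    x, y = cell
    dx, dy = random.choice([(i, j) for i in (-1, 0, 1) for j in (-1, 0, 1)
                            if (i, j) != (0, 0)])
    return ((x + dx) % G_side, (y + dy) % G_side)

def simulate_route(start, G_side, trip_duration, seed=0):
    """Route of one car for trip_duration slots, starting from a given cell."""
    random.seed(seed)
    route, cell = [start], start
    for _ in range(trip_duration):
        cell = step(cell, G_side)
        route.append(cell)
    return route

print(simulate_route((0, 0), G_side=5, trip_duration=10))  # 5 x 5 torus, gamma = 10
```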
α represents the number of storage slots per AutoMata. Each storage slot stores one data item. γ represents the duration of the client's journey in terms of the number of time slots. Hence the possible values of availability latency are between 0 and γ. δ is defined as the number of time slots after which a client AutoMata device will encounter a replica of the data item for the first time. If a replica for the data item requested was encountered by the client in the first cell then we set δ = 0. If δ > γ then we set δ = γ, indicating that no copy of the requested data item was encountered by the client during its entire journey. In all our simulations, for illustration we consider a 5 × 5 2D-torus with γ set to 10. Our experiments indicate that the trends in the results scale to maps of larger size. We simulated a skewed distribution of access to the T data items that obeys Zipf's law with a mean of 0.27. This distribution is shown to correspond to the sale of movie theater tickets in the United States [6]. We employ a replication scheme that allocates replicas for a data item as a function of the square-root of the frequency of access of that item. The square-root replication scheme is shown to have competitive latency performance over a large parameter space [8]. The data item replicas are distributed uniformly across the AutoMata devices. This serves as the base-line no-zebroids case. The square-root scheme also provides the initial replica distribution when zebroids are employed. Note that the replacements performed by the zebroids will cause changes to the data item replica distribution. Requests generated as per the Zipf distribution are issued one at a time. The client car that issues the request is chosen in a round-robin manner. After a maximum period of γ, the latency encountered by this request is recorded. In all the simulation results, each point is an average of 200,000 requests. Moreover, the 95% confidence intervals determined for the results are quite tight for the metrics of latency and replacement overhead. Hence, we only present them for the metric that captures the percentage improvement in latency with respect to the no-zebroids case.

6. RESULTS

In this section, we describe our evaluation results where the following key questions are addressed. With a wide choice of replacement schemes available for a zebroid, what is their effect on availability latency? A more central question is: Do zebroids provide significant improvements in availability latency? What is the associated overhead incurred in employing these zebroids? What happens to these improvements in scenarios where a dispatcher may have imperfect information about the car routes? What inherent trade-offs exist between car density and storage per car with regards to their combined as well as individual effect on availability latency in the presence of zebroids? We present both simple analysis and detailed simulations to provide answers to these questions as well as gain insights into the design of carrier-based systems.

[Figure 1: Availability latency when employing one-instantaneous zebroids as a function of (N, α) values, when the total storage in the system is kept fixed, ST = 200.]

6.1 How does a replacement scheme employed by a zebroid impact availability latency?
For illustration, we present `scale-up'' experiments where oneinstantaneous zebroids are employed (see Figure 1). By scale-up, we mean that α and N are changed proportionally to keep the total system storage, ST , constant. Here, T = 50 and ST = 200. We choose the following values of (N,α) = {(20,10), (25,8), (50,4), (100,2)}. The figure indicates that a random replacement scheme shows competitive performance. This is because of several reasons. Recall that the initial replica distribution is set as per the squareroot replication scheme. The random replacement scheme does not alter this distribution since it makes replacements blind to the popularity of a data item. However, the replacements cause dynamic data re-organization so as to better serve the currently active request. Moreover, the mobility pattern of the cars is random, hence, the locations from which the requests are issued by clients are also random and not known a priori at the dispatcher. These findings are significant because a random replacement policy can be implemented in a simple decentralized manner. The lru-global and lfu-global schemes provide a latency performance that is worse than random. This is because these policies rapidly develop a preference for the more popular data items thereby creating a larger number of replicas for them. During eviction, the more popular data items are almost never selected as a replacement candidate. Consequently, there are fewer replicas for the less popular items. Hence, the initial distribution of the data item replicas changes from square-root to that resembling linear replication. The higher number of replicas for the popular data items provide marginal additional benefits, while the lower number of replicas for the other data items hurts the latency performance of these global policies. The lfu-local and lru-local schemes have similar performance to random since they do not have enough history of local data item requests. We speculate that the performance of these local policies will approach that of their global variants for a large enough history of data item requests. On account of the competitive performance shown by a random policy, for the remainder of the paper, we present the performance of zebroids that employ a random replacement policy. 6.2 Do zebroids provide significant improvements in availability latency? We find that in many scenarios employing zebroids provides substantial improvements in availability latency. 6.2.1 Analysis We first consider the case of one-instantaneous zebroids. Figure 2. a shows the variation in δagg as a function of N for T = 10 and α = 1 with a 10 × 10 torus using Equation 4. Both the x and y axes are drawn to a log-scale. Figure 2. b show the % improvement in δagg obtained with one-instantaneous zebroids. In this case, only the x-axis is drawn to a log-scale. For illustration, we assume that the T data items are requested uniformly. Initially, when the network is sparse the analytical approximation for improvements in latency with zebroids, obtained from Equations 2 and 4, closely matches the simulation results. However, as N increases, the sparseness assumption for which the analysis is valid, namely N << G, is no longer true. Hence, the two curves rapidly diverge. The point at which the two curves move away from each other corresponds to a value of δagg ≤ 1. Moreover, as mentioned earlier, the analysis provides an upper bound on the latency improvements, as it treats the newly created replicas given by Nc i independently. 
However, these N^c_i replicas start from the same cell as one of the server replicas ri. Finally, the analysis captures a one-shot scenario where, given an initial data item replica distribution, the availability latency is computed. The new replicas created do not affect future requests from the client. On account of space limitations, here we summarize the observations in the case when z-relay zebroids are employed. The interested reader can obtain further details in [9]. Similar observations to the one-instantaneous zebroid case apply, since the simulation and analysis curves again start diverging when the analysis assumptions are no longer valid. However, the key observation here is that the latency improvement with z-relay zebroids is significantly better than in the one-instantaneous zebroids case, especially for lower storage scenarios. This is because in sparse scenarios, the transitive hand-offs between the zebroids create a higher number of replicas for the requested data item, yielding lower availability latency. Moreover, it is also seen that the simulation validation curve for the improvements in δagg with z-relay zebroids approaches that of the one-instantaneous zebroid case for higher storage (higher N values). This is because one-instantaneous zebroids are a special case of z-relay zebroids.

6.2.2 Simulation

We conduct simulations to examine the entire storage spectrum obtained by changing car density N or storage per car α, to also capture scenarios where the sparseness assumptions for which the analysis is valid do not hold. We separate the effect of N and α by capturing the variation of N while keeping α constant (case 1) and vice-versa (case 2), both with z-relay and one-instantaneous zebroids. Here, we set the repository size as T = 25. Figure 3 captures case 1 mentioned above. Similar trends are observed with case 2; a complete description of those results is available in [9]. In Figure 3.b, keeping α constant, initially increasing car density has higher latency benefits because increasing N introduces more zebroids in the system. As N is further increased, ω reduces because the total storage in the system goes up. Consequently, the number of replicas per data item goes up, thereby increasing the number of servers. Hence, the replacement policy cannot find a zebroid as often to transport the requested data item to the client earlier than any of the servers. On the other hand, the increased number of servers benefits the no-zebroids case in bringing δagg down. The net effect results in a reduction in ω for larger values of N.

[Figure 2: Latency performance with one-instantaneous zebroids via simulations along with the analytical approximation for a 10 × 10 torus with T = 10; panel 2.a shows δagg and panel 2.b shows ω, the % improvement in δagg with respect to no-zebroids.]

The trends mentioned above are similar to those obtained from the analysis. However, somewhat counter-intuitively, with relatively higher system storage, z-relay zebroids provide slightly lower improvements in latency as compared to one-instantaneous zebroids. We speculate that this is due to the different data item replica distributions enforced by them.
Note that replacements performed by the zebroids cause fluctuations in these replica distributions which may affect future client requests. We are currently exploring suitable choices of parameters that can capture these changing replica distributions.

6.3 What is the overhead incurred with improvements in latency with zebroids?

We find that the improvements in latency with zebroids are obtained at a minimal replacement overhead (< 1 per client request).

6.3.1 Analysis

With one-instantaneous zebroids, for each client request a maximum of one zebroid is employed for latency improvement. Hence, the replacement overhead per client request can amount to a maximum of one.

[Figure 3: Latency performance with both one-instantaneous and z-relay zebroids as a function of the car density when α = 2 and T = 25; panel 3.a shows δagg and panel 3.b shows ω, the % improvement in δagg with respect to no-zebroids.]

Recall that to calculate the latency with one-instantaneous zebroids, N^c_i new replicas are created in the same cell as the servers. Now a replacement is only incurred if one of these N^c_i newly created replicas meets the client earlier than any of the ri servers. Let X_{ri} and X_{N^c_i} respectively be random variables that capture the minimum time till any of the ri and N^c_i replicas meet the client. Since X_{ri} and X_{N^c_i} are assumed to be independent, by the property of exponentially distributed random variables we have,

Overhead/request = 1 · P(X_{N^c_i} < X_{ri}) + 0 · P(X_{ri} ≤ X_{N^c_i})    (6)

Overhead/request = (N^c_i / C) / (ri / C + N^c_i / C) = N^c_i / (ri + N^c_i)    (7)

Recall that the number of replicas for data item i, ri, is a function of the total storage in the system, i.e., ri = k · N · α, where k satisfies the constraint 1 ≤ ri ≤ N. Using this along with Equation 3, we get

Overhead/request = 1 − G / (G + N · (1 − k · α))    (8)

Now if we keep the total system storage N · α constant, since G and T are also constant, increasing N increases the replacement overhead. However, if N · α is constant then increasing N causes α to go down. This implies that a higher replacement overhead is incurred for higher N and lower α values. Moreover, when ri = N, this means that every car has a replica of data item i. Hence, no zebroids are employed when this item is requested, yielding an overhead/request for this item of zero. Next, we present simulation results that validate our analysis hypothesis for the overhead associated with deployment of one-instantaneous zebroids.

[Figure 4: Replacement overhead when employing one-instantaneous zebroids as a function of (N, α) values, when the total storage in the system is kept fixed, ST = 200.]

6.3.2 Simulation

Figure 4 shows the replacement overhead with one-instantaneous zebroids when (N, α) are varied while keeping the total system storage constant. The trends shown by the simulation are in agreement with those predicted by the analysis above. However, the total system storage can be changed either by varying car density (N) or storage per car (α). On account of similar trends, here we present the case when α is kept constant and N is varied (Figure 5).
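As a quick numeric check of the overhead approximation in Equation 8, the small Python sketch below evaluates it for a few (N, α) pairs with fixed total storage. The per-item constant k and the cell count G used here are hypothetical values chosen only for illustration, not the paper's experimental settings.

```python
def overhead_per_request(N, G, k, alpha):
    """Equation 8: expected replacements per client request with
    one-instantaneous zebroids, using r_i = k * N * alpha."""
    return 1.0 - G / (G + N * (1.0 - k * alpha))

# Hypothetical scale-up settings with fixed total storage N * alpha = 200 and G = 25 cells;
# k = 0.02 is an arbitrary per-item constant chosen so that 1 <= r_i <= N holds.
for N, alpha in [(20, 10), (25, 8), (50, 4), (100, 2)]:
    print(N, alpha, round(overhead_per_request(N, G=25, k=0.02, alpha=alpha), 3))
# The overhead grows as N increases and alpha shrinks, matching the analysis above.
```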
We refer the reader to [9] for the case when α is varied and N is held constant. We present an intuitive argument for the behavior of the per-request replacement overhead curves. When the storage is extremely scarce, so that only one replica per data item exists in the AutoMata network, the number of replacements performed by the zebroids is zero, since any replacement would cause a data item to be lost from the system. The dispatcher ensures that no data item is lost from the system. At the other end of the spectrum, if storage becomes so abundant that α = T, then the entire data item repository can be replicated on every car. The number of replacements is again zero since each request can be satisfied locally. A similar scenario occurs if N is increased to such a large value that another car with the requested data item is always available in the vicinity of the client. However, there is a storage spectrum in the middle where replacements by the scheduled zebroids result in improvements in δagg (see Figure 3). Moreover, we observe that for sparse storage scenarios, the higher improvements with z-relay zebroids are obtained at the cost of a higher replacement overhead when compared to the one-instantaneous zebroids case. This is because in the former case, each of the z zebroids selected along the lowest latency path to the client needs to perform a replacement. However, the replacement overhead is still less than 1, indicating that on average less than one replacement per client request is needed even when z-relay zebroids are employed.

[Figure 5: Replacement overhead with zebroids for the cases when N is varied keeping α = 2.]

[Figure 6: δagg for different car densities as a function of the prediction accuracy metric with α = 2 and T = 25.]

6.4 What happens to the availability latency with zebroids in scenarios with inaccuracies in the car route predictions?

We find that zebroids continue to provide improvements in availability latency even with lower accuracy in the car route predictions. We use a single parameter p to quantify the accuracy of the car route predictions.

6.4.1 Analysis

Since p represents the probability that a car route predicted by the dispatcher matches the actual one, the latency with zebroids can be approximated by,

δ^err_agg = p · δagg(zeb) + (1 − p) · δagg(no-zeb)    (9)

δ^err_agg = p · δagg(zeb) + (1 − p) · Σ_{i=1}^{T} fi · C / ri    (10)

Expressions for δagg(zeb) can be obtained from Equations 4 (one-instantaneous) or 5 (z-relay zebroids).

6.4.2 Simulation

Figure 6 shows the variation in δagg as a function of this route prediction accuracy metric. We observe a smooth reduction in the improvement in δagg as the prediction accuracy metric reduces. For zebroids that are scheduled but fail to rendezvous with the client due to the prediction error, we tag any such replacements made by the zebroids as failed. It is seen that failed replacements gradually increase as the prediction accuracy reduces.

6.5 Under what conditions are the improvements in availability latency with zebroids maximized?
Surprisingly, we find that the improvements in latency obtained with one-instantaneous zebroids are independent of the input distribution of the popularity of the data items.

6.5.1 Analysis

The fractional difference (labelled ω) in the latency between the no-zebroids and one-instantaneous zebroids cases is obtained from Equations 2, 3, and 4 as

ω = [Σ_{i=1}^{T} fi · C / ri − Σ_{i=1}^{T} fi · C / (ri + (N − ri) · (1 − (1 − 1/G)^{ri}))] / [Σ_{i=1}^{T} fi · C / ri]    (11)

Here C = c · G · log G. This captures the fractional improvement in the availability latency obtained by employing one-instantaneous zebroids. Let α = 1, making the total storage in the system ST = N. Assuming the initial replica distribution is as per the square-root replication scheme, we have ri = (√fi · N) / (Σ_{j=1}^{T} √fj). Hence, we get fi = K² · ri² / N², where K = Σ_{j=1}^{T} √fj. Using this, along with the approximation (1 − x)^n ≈ 1 − n · x for small x, we simplify the above equation to get,

ω = 1 − [Σ_{i=1}^{T} ri / (1 + (N − ri)/G)] / [Σ_{i=1}^{T} ri]

In order to determine when the gains with one-instantaneous zebroids are maximized, we can frame an optimization problem as follows: Maximize ω, subject to Σ_{i=1}^{T} ri = ST.

THEOREM 1. With a square-root replication scheme, improvements obtained with one-instantaneous zebroids are independent of the input popularity distribution of the data items. (See [9] for proof.)

6.5.2 Simulation

We perform simulations with two different frequency distributions of data items: Uniform and Zipfian (with mean = 0.27). Similar latency improvements with one-instantaneous zebroids are obtained in both cases. This result has important implications. In cases with biased popularity toward certain data items, the aggregate improvements in latency across all data item requests still remain the same. Even in scenarios where the frequency of access to the data items changes dynamically, zebroids will continue to provide similar latency improvements.

7. CONCLUSIONS AND FUTURE RESEARCH DIRECTIONS

In this study, we examined the improvements in latency that can be obtained in the presence of carriers that deliver a data item from a server to a client. We quantified the variation in availability latency as a function of a rich set of parameters such as car density, storage per car, title database size, and replacement policies employed by zebroids. Below we summarize some key future research directions we intend to pursue. To better reflect reality, we would like to validate the observations obtained from this study with some real-world simulation traces of vehicular movements (for example, using CORSIM [1]). This will also serve as a validation of the utility of the Markov mobility model used in this study. We are currently analyzing the performance of zebroids on a real-world data set comprising an ad-hoc network of buses moving around a small neighborhood in Amherst [4]. Zebroids may also be used for delivery of data items that carry delay-sensitive information with a certain expiry. Extensions to zebroids that satisfy such application requirements present an interesting future research direction.

8. ACKNOWLEDGMENTS

This research was supported in part by an Annenberg fellowship and NSF grants numbered CNS-0435505 (NeTS NOSS), CNS-0347621 (CAREER), and IIS-0307908.

9. REFERENCES

[1] Federal Highway Administration. Corridor simulation. Version 5.1, http://www.ops.fhwa.dot.gov/trafficanalysistools/corsim.htm.
[2] D. Aldous and J. Fill. Reversible Markov chains and random walks on graphs. Under preparation.
[3] A. Bar-Noy, I. Kessler, and M. Sidi. Mobile Users: To Update or Not to Update.
In IEEE Infocom, 1994. [4] J. Burgess, B. Gallagher, D. Jensen, and B. Levine. MaxProp: Routing for Vehicle-Based Disruption-Tolerant Networking. In IEEE Infocom, April 2006. [5] E. Cohen and S. Shenker. Replication Strategies in Unstructured Peer-to-Peer Networks. In SIGCOMM, 2002. [6] A. Dan, D. Dias, R. Mukherjee, D. Sitaram, and R. Tewari. Buffering and Caching in Large-Scale Video Servers. In COMPCON, 1995. [7] S. Ghandeharizadeh, S. Kapadia, and B. Krishnamachari. PAVAN: A Policy Framework for Content Availabilty in Vehicular ad-hoc Networks. In VANET, New York, NY, USA, 2004. ACM Press. [8] S. Ghandeharizadeh, S. Kapadia, and B. Krishnamachari. Comparison of Replication Strategies for Content Availability in C2P2 networks. In MDM, May 2005. [9] S. Ghandeharizadeh, S. Kapadia, and B. Krishnamachari. An Evaluation of Availability Latency in Carrier-based Vehicular ad-hoc Networks. Technical report, Department of Computer Science, University of Southern California,CENG-2006-1, 2006. [10] S. Ghandeharizadeh and B. Krishnamachari. C2p2: A peer-to-peer network for on-demand automobile information services. In Globe. IEEE, 2004. [11] T. Hara. Effective Replica Allocation in ad-hoc Networks for Improving Data Accessibility. In IEEE Infocom, 2001. [12] H. Hayashi, T. Hara, and S. Nishio. A Replica Allocation Method Adapting to Topology Changes in ad-hoc Networks. In DEXA, 2005. [13] P. Juang, H. Oki, Y. Wang, M. Martonosi, L. Peh, and D. Rubenstein. Energy-efficient computing for wildlife tracking: design tradeoffs and early experiences with ZebraNet. SIGARCH Comput. Archit. News, 2002. [14] A. Pentland, R. Fletcher, and A. Hasson. DakNet: Rethinking Connectivity in Developing Nations. Computer, 37(1):78-83, 2004. [15] F. Sailhan and V. Issarny. Cooperative Caching in ad-hoc Networks. In MDM, 2003. [16] R. Shah, S. Roy, S. Jain, and W. Brunette. Data mules: Modeling and analysis of a three-tier architecture for sparse sensor networks. Elsevier ad-hoc Networks Journal, 1, September 2003. [17] T. Spyropoulos, K. Psounis, and C. Raghavendra. Single-Copy Routing in Intermittently Connected Mobile Networks. In SECON, April 2004. [18] A. Tanenbaum. Modern Operating Systems, 2nd Edition, Chapter 4, Section 4.4 . Prentice Hall, 2001. [19] A. Vahdat and D. Becker. Epidemic routing for partially-connected ad-hoc networks. Technical report, Department of Computer Science, Duke University, 2000. [20] W. Zhao, M. Ammar, and E. Zegura. A message ferrying approach for data delivery in sparse mobile ad-hoc networks. In MobiHoc, pages 187-198, New York, NY, USA, 2004. ACM Press. [21] M. Zonoozi and P. Dassanayake. User Mobility Modeling and Characterization of Mobility Pattern. IEEE Journal on Selected Areas in Communications, 15:1239-1252, September 1997. 82
An Evaluation of Availability Latency in Carrier-based Vehicular Ad-Hoc Networks Shahram ABSTRACT On-demand delivery of audio and video clips in peer-to-peer vehicular ad-hoc networks is an emerging area of research. Our target environment uses data carriers, termed zebroids, where a mobile device carries a data item on behalf of a server to a client thereby minimizing its availability latency. In this study, we quantify the variation in availability latency with zebroids as a function of a rich set of parameters such as car density, storage per device, repository size, and replacement policies employed by zebroids. Using analysis and extensive simulations, we gain novel insights into the design of carrier-based systems. Significant improvements in latency can be obtained with zebroids at the cost of a minimal overhead. These improvements occur even in scenarios with lower accuracy in the predictions of the car routes. Two particularly surprising findings are: (1) a naive random replacement policy employed by the zebroids shows competitive performance, and (2) latency improvements obtained with a simplified instantiation of zebroids are found to be robust to changes in the popularity distribution of the data items. Categories and Subject Descriptors: C. 2.4 [Distributed Systems]: Client/Server 1. INTRODUCTION Technological advances in areas of storage and wireless communications have now made it feasible to envision on-demand delivery of data items, for e.g., video and audio clips, in vehicular peer-topeer networks. In prior work, Ghandeharizadeh et al. [10] introduce the concept of vehicles equipped with a Car-to-Car-Peer-toPeer device, termed AutoMata, for in-vehicle entertainment. The notable features of an AutoMata include a mass storage device offering hundreds of gigabytes (GB) of storage, a fast processor, and several types of networking cards. Even with today's 500 GB disk drives, a repository of diverse entertainment content may exceed the storage capacity of a single AutoMata. Such repositories con stitute the focus of this study. To exchange data, we assume each AutoMata is configured with two types of networking cards: 1) a low-bandwidth networking card with a long radio-range in the order of miles that enables an AutoMata device to communicate with a nearby cellular or WiMax station, 2) a high-bandwidth networking card with a limited radio-range in the order of hundreds of feet. The high bandwidth connection supports data rates in the order of tens to hundreds of Megabits per second and represents the ad hoc peer to peer network between the vehicles. This is labelled as the data plane and is employed to exchange data items between devices. The low-bandwidth connection serves as the control plane, enabling AutoMata devices to exchange meta-data with one or more centralized servers. This connection offers bandwidths in the order of tens to hundreds of Kilobits per second. The centralized servers, termed dispatchers, compute schedules of data delivery along the data plane using the provided meta-data. These schedules are transmitted to the participating vehicles using the control plane. The technical feasibility of such a two-tier architecture is presented in [7], with preliminary results to demonstrate the bandwidth of the control plane is sufficient for exchange of control information needed for realizing such an application. 
In a typical scenario, an AutoMata device presents a passenger with a list of data items1, showing both the name of each data item and its availability latency. The latter, denoted as δ, is defined as the earliest time at which the client encounters a copy of its requested data item. A data item is available immediately when it resides in the local storage of the AutoMata device serving the request. Due to storage constraints, an AutoMata may not store the entire repository. In this case, availability latency is the time from when the user issues a request until when the AutoMata encounters another car containing the referenced data item. (The terms car and AutoMata are used interchangeably in this study.) The availability latency for an item is a function of the current location of the client, its destination and travel path, the mobility model of the AutoMata equipped vehicles, the number of replicas constructed for the different data items, and the placement of data item replicas across the vehicles. A method to improve the availability latency is to employ data carriers which transport a replica of the requested data item from a server car containing it to a client that requested it. These data carriers are termed ` zebroids'. Selection of zebroids is facilitated by the two-tiered architecture. The control plane enables centralized information gathering at a dispatcher present at a base-station .2 Some examples of control in formation are currently active requests, travel path of the clients and their destinations, and paths of the other cars. For each client request, the dispatcher may choose a set of z carriers that collaborate to transfer a data item from a server to a client (z-relay zebroids). Here, z is the number of zebroids such that 0 ≤ z <N, where N is the total number of cars. When z = 0 there are no carriers, requiring a server to deliver the data item directly to the client. Otherwise, the chosen relay team of z zebroids hand over the data item transitively to one another to arrive at the client, thereby reducing availability latency (see Section 3.1 for details). To increase robustness, the dispatcher may employ multiple relay teams of z-carriers for every request. This may be useful in scenarios where the dispatcher has lower prediction accuracy in the information about the routes of the cars. Finally, storage constraints may require a zebroid to evict existing data items from its local storage to accommodate the client requested item. In this study, we quantify the following main factors that affect availability latency in the presence of zebroids: (i) data item repository size, (ii) car density, (iii) storage capacity per car, (iv) client trip duration, (v) replacement scheme employed by the zebroids, and (vi) accuracy of the car route predictions. For a significant subset of these factors, we address some key questions pertaining to use of zebroids both via analysis and extensive simulations. Our main findings are as follows. A naive random replacement policy employed by the zebroids shows competitive performance in terms of availability latency. With such a policy, substantial improvements in latency can be obtained with zebroids at a minimal replacement overhead. In more practical scenarios, where the dispatcher has inaccurate information about the car routes, zebroids continue to provide latency improvements. 
A surprising result is that changes in popularity of the data items do not impact the latency gains obtained with a simple instantiation of z-relay zebroids called one-instantaneous zebroids (see Section 3.1). This study suggests a number of interesting directions to be pursued to gain better understanding of design of carrier-based systems that improve availability latency. Related Work: Replication in mobile ad-hoc networks has been a widely studied topic [11, 12, 15]. However, none of these studies employ zebroids as data carriers to reduce the latency of the client's requests. Several novel and important studies such as ZebraNet [13], DakNet [14], Data Mules [16], Message Ferries [20], and Seek and Focus [17] have analyzed factors impacting intermittently connected networks consisting of data carriers similar in spirit to zebroids. Factors considered by each study are dictated by their assumed environment and target application. A novel characteristic of our study is the impact on availability latency for a given database repository of items. A detailed description of related works can be obtained in [9]. The rest of this paper is organized as follows. Section 2 provides an overview of the terminology along with the factors that impact availability latency in the presence of zebroids. Section 3 describes how the zebroids may be employed. Section 4 provides details of the analysis methodology employed to capture the performance with zebroids. Section 5 describes the details of the simulation environment used for evaluation. Section 6 enlists the key questions examined in this study and answers them via analysis and simulations. Finally, Section 7 presents brief conclusions and future research directions. 2. OVERVIEW AND TERMINOLOGY Table 1 summarizes the notation of the parameters used in the paper. Below we introduce some terminology used in the paper. Assume a network of N AutoMata-equipped cars, each with storage capacity of α bytes. The total storage capacity of the system is ST = N · α. There are T data items in the database, each with Table 1: Terms and their definitions with ET size Si. The frequency of access to data item i is denoted as fi, The exponent n characterizes a particular replication technique. A square-root replication scheme is realized when n = 0.5 [5]. This serves as the base-line for comparison with the case when zebroids are deployed. Ri is normalized to a value between 0 and 1. The number of replicas for data item i, denoted as ri, is: ri = min (N, max (1, bRi · N · α Si c)). This captures the case when at least one copy of every data item must be present in the ad-hoc network at all times. In cases where a data item may be lost from the ad-hoc network, this equation becomes ri = min (N, max (0, b Ri · N · α Si c)). In this case, a request for the lost data item may need to be satisfied by fetching the item from a remote server. The availability latency for a data item i, denoted as δi, is defined as the earliest time at which a client AutoMata will find the first replica of the item accessible to it. If this condition is not satisfied, then we set δi to - y. This indicates that data item i was not available to the client during its journey. Note that since there is at least one replica in the system for every data item i, by setting - y to a large value we ensure that the client's request for any data item i will be satisfied. However, in most practical circumstances - y may not be so large as to find every data item. 
We are interested in the availability latency observed across all data items. Hence, we augment the average availability latency for every data item i with its fi to obtain the following weighted availability latency (δagg) metric: δagg = Ti = 1 δi · fi Next, we present our solution approach describing how zebroids are selected. 3. SOLUTION APPROACH 3.1 Zebroids When a client references a data item missing from its local storage, the dispatcher identifies all cars with a copy of the data item as servers. Next, the dispatcher obtains the future routes of all cars for a finite time duration equivalent to the maximum time the client is willing to wait for its request to be serviced. Using this information, the dispatcher schedules the quickest delivery path from any of the servers to the client using any other cars as intermediate carriers. Hence, it determines the optimal set of forwarding decisions that will enable the data item to be delivered to the client in the minimum amount of time. Note that the latency along the quickest delivery path that employs a relay team of z zebroids is similar to that obtained with epidemic routing [19] under the assumptions of infinite storage and no interference. A simple instantiation of z-relay zebroids occurs when z = 1 and the client's request triggers a transfer of a copy of the requested data item from a server to a zebroid in its vicinity. Such a zebroid is termed one-instantaneous zebroid. In some cases, the dispatcher might have inaccurate information about the routes of the cars. Hence, a zebroid scheduled on the basis of this inaccurate information may not rendezvous with its target client. To minimize the likelihood of such scenarios, the dispatcher may schedule multiple zebroids. This may incur additional overhead due to redundant resource utilization to obtain the same latency improvements. The time required to transfer a data item from a server to a zebroid depends on its size and the available link bandwidth. With small data items, it is reasonable to assume that this transfer time is small, especially in the presence of the high bandwidth data plane. Large data items may be divided into smaller chunks enabling the dispatcher to schedule one or more zebroids to deliver each chunk to a client in a timely manner. This remains a future research direction. Initially, number of replicas for each data item replicas might be computed using Equation 1. This scheme computes the number of data item replicas as a function of their popularity. It is static because number of replicas in the system do not change and no replacements are performed. Hence, this is referred to as the ` nozebroids' environment. We quantify the performance of the various replacement policies with reference to this base-line that does not employ zebroids. One may assume a cold start phase, where initially only one or few copies of every data item exist in the system. Many storage slots of the cars may be unoccupied. When the cars encounter one another they construct new replicas of some selected data items to occupy the empty slots. The selection procedure may be to choose the data items uniformly at random. New replicas are created as long as a car has a certain threshold of its storage unoccupied. Eventually, majority of the storage capacity of a car will be exhausted. 3.2 Carrier-based Replacement policies The replacement policies considered in this paper are reactive since a replacement occurs only in response to a request issued for a certain data item. 
When the local storage of a zebroid is completely occupied, it needs to replace one of its existing items to carry the client requested data item. For this purpose, the zebroid must select an appropriate candidate for eviction. This decision process is analogous to that encountered in operating system paging where the goal is to maximize the cache hit ratio to prevent disk access delay [18]. The carrier-based replacement policies employed in our study are Least Recently Used (LRU), Least Frequently Used (LFU) and Random (where a eviction candidate is chosen uniformly at random). We have considered local and global variants of the LRU/LFU policies which determine whether local or global knowledge of contents of the cars known at the dispatcher is used for the eviction decision at a zebroid (see [9] for more details). The replacement policies incur the following overheads. First, the complexity associated with the implementation of a policy. Second, the bandwidth used to transfer a copy of a data item from a server to the zebroid. Third, the average number of replacements incurred by the zebroids. Note that in the no-zebroids case neither overhead is incurred. The metrics considered in this study are aggregate availability latency, 6,, gg, percentage improvement in 6,, gg with zebroids as compared to the no-zebroids case and average number of replacements incurred per client request which is an indicator of the overhead incurred by zebroids. Note that the dispatchers with the help of the control plane may ensure that no data item is lost from the system. In other words, at least one replica of every data item is maintained in the ad-hoc network at all times. In such cases, even though a car may meet a requesting client earlier than other servers, if its local storage contains data items with only a single copy in the system, then such a car is not chosen as a zebroid. 4. ANALYSIS METHODOLOGY Here, we present the analytical evaluation methodology and some approximations as closed-form equations that capture the improvements in availability latency that can be obtained with both oneinstantaneous and z-relay zebroids. First, we present some preliminaries of our analysis methodology. • Let N be the number of cars in the network performing a 2D V V random walk on a G x G torus. An additional car serves as a client yielding a total of N + 1 cars. Such a mobility model has been used widely in the literature [17, 16] chiefly because it is amenable to analysis and provides a baseline against which performance of other mobility models can be compared. Moreover, this class of Markovian mobility models has been used to model the movements of vehicles [3, 21]. • We assume that all cars start from the stationary distribution and perform independent random walks. Although for sparse density scenarios, the independence assumption does hold, it is no longer valid when N approaches G. • Let the size of data item repository of interest be T. Also, data item i has ri replicas. This implies ri cars, identified as servers, have a copy of this data item when the client requests item i. All analysis results presented in this section are obtained assuming that the client is willing to wait as long as it takes for its request to be satisfied (unbounded trip duration γ = oo). With the random walk mobility model on a 2D-torus, there is a guarantee that as long as there is at least one replica of the requested data item in the network, the client will eventually encounter this replica [2]. 
Extensions to the analysis that also consider finite trip durations can be obtained in [9]. Consider a scenario where no zebroids are employed. In this case, the expected availability latency for the data item is the expected meeting time of the random walk undertaken by the client with any of the random walks performed by the servers. Aldous et al. [2] show that the meeting time of two random walks in such a setting can be modelled as an exponential distribution with mean C = c · G · log G, where the constant c ≈ 0.17 for G > 25. The meeting time, or equivalently the availability latency δi, for the client requesting data item i is the time till it encounters any of these ri replicas for the first time. This is also an exponential distribution with the following expected value (note that this formulation is valid only for sparse cases when G >> ri): δi = c · G · log G / ri (Equation 2). The aggregate availability latency without employing zebroids is then this expression averaged over all data items, weighted by their frequency of access: δagg(no-zebroids) = Σ_{i=1}^{T} fi · c · G · log G / ri (Equation 3). 4.1 One-instantaneous zebroids Recall that with one-instantaneous zebroids, for a given request, a new replica is created on a car in the vicinity of the server, provided this car meets the client earlier than any of the ri servers. Moreover, this replica is spawned at the time step when the client issues the request. Let N_c^i be the expected total number of nodes that are in the same cell as any of the ri servers. In the analytical model, we assume that N_c^i new replicas are created, so that the total number of replicas is increased to ri + N_c^i. The availability latency is reduced since the client is more likely to meet a replica earlier. The aggregated expected availability latency in the case of one-instantaneous zebroids is then given by δagg(one-instantaneous) = Σ_{i=1}^{T} fi · c · G · log G / (ri + N_c^i) (Equation 4). Note that in obtaining this expression, for ease of analysis, we have assumed that the new replicas start from random locations in the torus (not necessarily from the same cell as the original ri servers). It thus treats all the N_c^i carriers independently, just like the ri original servers. As we shall show below by comparison with simulations, this approximation provides an upper bound on the improvements that can be obtained, because it results in a lower expected latency at the client. It should be noted that the procedure listed above will yield a latency similar to that of a dispatcher employing one-instantaneous zebroids (see Section 3.1). Since the dispatcher is aware of all future car movements, it would only transfer the requested data item on a single zebroid if it determines that the zebroid will meet the client earlier than any other server. This selected zebroid is included in the N_c^i new replicas. 4.2 z-relay zebroids To calculate the expected availability latency with z-relay zebroids, we use a coloring problem analog similar to an approach used by Spyropoulos et al. [17]. Details of the procedure to obtain a closed-form expression are given in [9]. The aggregate availability latency (δagg) with z-relay zebroids is given by the resulting closed-form expression (Equation 5). 5. SIMULATION METHODOLOGY The simulation environment considered in this study comprises vehicles such as cars that carry a fraction of the data item repository. A prediction accuracy parameter inherently provides a certain probabilistic guarantee on the confidence of the car route predictions known at the dispatcher. A value of 100% implies that the exact routes of all cars are known at all times.
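Before continuing with the prediction-accuracy parameter, note that the closed-form expressions of Section 4 translate directly into code and are useful for sanity-checking the simulator. The only piece the excerpt does not give explicitly is N_c^i; the estimate (N − ri) · ri / G² below treats every non-server car as co-located with some server with probability ri/G² and is our assumption, not the paper's exact expression (which is derived in [9]).

```python
import math

def analytic_latencies(freqs, replicas, N, G, c=0.17):
    """Sparse-regime approximations of Section 4 (unbounded trip duration).

    Returns (delta_agg without zebroids, delta_agg with one-instantaneous zebroids),
    using C = c * G * log(G) as the pairwise meeting time on a G x G torus.
    """
    C = c * G * math.log(G)
    no_zeb = sum(f * C / r for f, r in zip(freqs, replicas))            # Equation 3
    one_inst = 0.0
    for f, r in zip(freqs, replicas):
        Nc = (N - r) * r / G ** 2          # assumed estimate of N_c^i (see lead-in)
        one_inst += f * C / (r + Nc)                                    # Equation 4
    return no_zeb, one_inst

# 10 x 10 torus, T = 10 items requested uniformly, one replica each, 20 cars.
print(analytic_latencies([0.1] * 10, [1] * 10, N=20, G=10))
```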
A 70% value for this parameter indicates that the routes predicted for the cars will match the actual ones with probability 0.7. Note that this probability is spread across the car routes for the entire trip duration. We now provide the preliminaries of the simulation study and then describe the parameter settings used in our experiments. • Similar to the analysis methodology, the map used is a 2D torus. A Markov mobility model representing a unbiased 2D random walk on the surface of the torus describes the movement of the cars across this torus. • Each grid/cell is a unique state of this Markov chain. In each time slot, every car makes a transition from a cell to any of its neighboring 8 cells. The transition is a function of the current location of the car and a probability transition matrix Q = [qij] where qij is the probability of transition from state i to state j. Only AutoMata equipped cars within the same cell may communicate with each other. • The parameters - y, δ have been discretized and expressed in terms of the number of time slots. • An AutoMata device does not maintain more than one replica of a data item. This is because additional replicas occupy storage without providing benefits. • Either one-instantaneous or z-relay zebroids may be employed per client request for latency improvement. • Unless otherwise mentioned, the prediction accuracy parameter is assumed to be 100%. This is because this study aims to quantify the effect of a large number of parameters individually on availability latency. Here, we set the size of every data item, Si, to be 1. α represents the number of storage slots per AutoMata. Each storage slot stores one data item. - y represents the duration of the client's journey in terms of the number of time slots. Hence the possible values of availability latency are between 0 and - y. δ is defined as the number of time slots after which a client AutoMata device will encounter a replica of the data item for the first time. If a replica for the data item requested was encountered by the client in the first cell then we set δ = 0. If δ> - y then we set δ = - y indicating that no copy of the requested data item was encountered by the client during its entire journey. In all our simulations, for illustration we consider a 5 × 5 2D-torus with - y set to 10. Our experiments indicate that the trends in the results scale to maps of larger size. We simulated a skewed distribution of access to the T data items that obeys Zipf's law with a mean of 0.27. This distribution is shown to correspond to sale of movie theater tickets in the United States [6]. We employ a replication scheme that allocates replicas for a data item as a function of the square-root of the frequency of access of that item. The square-root replication scheme is shown to have competitive latency performance over a large parameter space [8]. The data item replicas are distributed uniformly across the AutoMata devices. This serves as the base-line no-zebroids case. The square-root scheme also provides the initial replica distribution when zebroids are employed. Note that the replacements performed by the zebroids will cause changes to the data item replica distribution. Requests generated as per the Zipf distribution are issued one at a time. The client car that issues the request is chosen in a round-robin manner. After a maximum period of - y, the latency encountered by this request is recorded. In all the simulation results, each point is an average of 200,000 requests. 
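Two of the simulator's building blocks, the unbiased 8-neighbor random walk on the torus and the skewed request stream, can be sketched as follows. Using 0.27 as the Zipf skew parameter is our reading of "Zipf's law with a mean of 0.27"; treat that mapping as an assumption.

```python
import random

def step(pos, G):
    """One slot of the Markov mobility model: move to one of the 8 neighboring
    cells of the G x G torus, chosen uniformly at random."""
    x, y = pos
    dx, dy = random.choice([(i, j) for i in (-1, 0, 1) for j in (-1, 0, 1)
                            if (i, j) != (0, 0)])
    return ((x + dx) % G, (y + dy) % G)

def zipf_request(T, skew=0.27):
    """Draw a data-item index with probability proportional to 1 / rank**skew."""
    weights = [1.0 / (rank ** skew) for rank in range(1, T + 1)]
    u = random.random() * sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if u <= acc:
            return i
    return T - 1

# A 10-slot client journey on the 5 x 5 torus used in the experiments.
pos = (0, 0)
for _ in range(10):
    pos = step(pos, G=5)
    print(pos, zipf_request(T=25))
```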
Moreover, the 95% confidence intervals determined for the results are quite tight for the metrics of latency and replacement overhead. Hence, we only present them for the metric that captures the percentage improvement in latency with respect to the no-zebroids case. 6. RESULTS In this section, we describe our evaluation results where the following key questions are addressed. With a wide choice of replacement schemes available for a zebroid, what is their effect on availability latency? A more central question is: Do zebroids provide Figure 1: Figure 1 shows the availability latency when employing one-instantaneous zebroids as a function of (N, α) values, when the total storage in the system is kept fixed, ST = 200. significant improvements in availability latency? What is the associated overhead incurred in employing these zebroids? What happens to these improvements in scenarios where a dispatcher may have imperfect information about the car routes? What inherent trade-offs exist between car density and storage per car with regards to their combined as well as individual effect on availability latency in the presence of zebroids? We present both simple analysis and detailed simulations to provide answers to these questions as well as gain insights into design of carrier-based systems. 6.1 How does a replacement scheme employed by a zebroid impact availability latency? For illustration, we present ` scale-up' experiments where oneinstantaneous zebroids are employed (see Figure 1). By scale-up, we mean that α and N are changed proportionally to keep the total system storage, ST, constant. Here, T = 50 and ST = 200. We choose the following values of (N, α) = {(20,10), (25,8), (50,4), (100,2)}. The figure indicates that a random replacement scheme shows competitive performance. This is because of several reasons. Recall that the initial replica distribution is set as per the squareroot replication scheme. The random replacement scheme does not alter this distribution since it makes replacements blind to the popularity of a data item. However, the replacements cause dynamic data re-organization so as to better serve the currently active request. Moreover, the mobility pattern of the cars is random, hence, the locations from which the requests are issued by clients are also random and not known a priori at the dispatcher. These findings are significant because a random replacement policy can be implemented in a simple decentralized manner. The lru-global and lfu-global schemes provide a latency performance that is worse than random. This is because these policies rapidly develop a preference for the more popular data items thereby creating a larger number of replicas for them. During eviction, the more popular data items are almost never selected as a replacement candidate. Consequently, there are fewer replicas for the less popular items. Hence, the initial distribution of the data item replicas changes from square-root to that resembling linear replication. The higher number of replicas for the popular data items provide marginal additional benefits, while the lower number of replicas for the other data items hurts the latency performance of these global policies. The lfu-local and lru-local schemes have similar performance to random since they do not have enough history of local data item requests. We speculate that the performance of these local policies will approach that of their global variants for a large enough history of data item requests. 
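For reference, the eviction bookkeeping behind the policies compared above can be sketched as follows. The class, the local-counter bookkeeping, and the `protected` set (items whose only system-wide copy sits on this car and which, as noted earlier, are never evicted) are illustrative choices rather than the paper's implementation; the global LRU/LFU variants would instead consult dispatcher-side knowledge of all cars' contents.

```python
import random
from collections import OrderedDict

class ZebroidCache:
    """Local storage of one car with the Random, LRU, and LFU eviction policies."""

    def __init__(self, capacity, policy="random"):
        self.capacity = capacity
        self.policy = policy
        self.items = OrderedDict()               # item -> access count; order = recency

    def touch(self, item):
        self.items[item] = self.items.get(item, 0) + 1
        self.items.move_to_end(item)             # most recently used goes to the end

    def admit(self, item, protected=frozenset()):
        """Store `item`, evicting one victim if full.  Items in `protected`
        (sole remaining copies in the system) are never chosen for eviction."""
        if item in self.items:
            return None
        victim = None
        if len(self.items) >= self.capacity:
            candidates = [i for i in self.items if i not in protected]
            if not candidates:
                return None                      # cannot serve as a zebroid for this request
            if self.policy == "random":
                victim = random.choice(candidates)
            elif self.policy == "lru":
                victim = candidates[0]           # least recently used comes first
            else:                                # "lfu"
                victim = min(candidates, key=self.items.get)
            del self.items[victim]
        self.touch(item)
        return victim
```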
On account of the competitive performance shown by a random policy, for the remainder of the paper, we present the performance of zebroids that employ a random replacement policy. 6.2 Do zebroids provide significant improvements in availability latency? We find that in many scenarios employing zebroids provides substantial improvements in availability latency. 6.2.1 Analysis We first consider the case of one-instantaneous zebroids. Figure 2. a shows the variation in 6,99 as a function of N for T = 10 and α = 1 with a 10 × 10 torus using Equation 4. Both the x and y axes are drawn to a log-scale. Figure 2. b show the% improvement in 6,99 obtained with one-instantaneous zebroids. In this case, only the x-axis is drawn to a log-scale. For illustration, we assume that the T data items are requested uniformly. Initially, when the network is sparse the analytical approximation for improvements in latency with zebroids, obtained from Equations 2 and 4, closely matches the simulation results. However, as N increases, the sparseness assumption for which the analysis is valid, namely N <<G, is no longer true. Hence, the two curves rapidly diverge. The point at which the two curves move away from each other corresponds to a value of 6,99 ≤ 1. Moreover, as mentioned earlier, the analysis provides an upper bound on the latency improvements, as it treats the newly created replicas given by Nic independently. However, these Nic replicas start from the same cell as one of the server replicas ri. Finally, the analysis captures a oneshot scenario where given an initial data item replica distribution, the availability latency is computed. The new replicas created do not affect future requests from the client. On account of space limitations, here, we summarize the observations in the case when z-relay zebroids are employed. The interested reader can obtain further details in [9]. Similar observations, like the one-instantaneous zebroid case, apply since the simulation and analysis curves again start diverging when the analysis assumptions are no longer valid. However, the key observation here is that the latency improvement with z-relay zebroids is significantly better than the one-instantaneous zebroids case, especially for lower storage scenarios. This is because in sparse scenarios, the transitive hand-offs between the zebroids creates higher number of replicas for the requested data item, yielding lower availability latency. Moreover, it is also seen that the simulation validation curve for the improvements in 6,99 with z-relay zebroids approaches that of the one-instantaneous zebroid case for higher storage (higher N values). This is because one-instantaneous zebroids are a special case of z-relay zebroids. 6.2.2 Simulation We conduct simulations to examine the entire storage spectrum obtained by changing car density N or storage per car α to also capture scenarios where the sparseness assumptions for which the analysis is valid do not hold. We separate the effect of N and α by capturing the variation of N while keeping α constant (case 1) and vice-versa (case 2) both with z-relay and one-instantaneous zebroids. Here, we set the repository size as T = 25. Figure 3 captures case 1 mentioned above. Similar trends are observed with case 2, a complete description of those results are available in [9]. With Figure 3. b, keeping α constant, initially increasing car density has higher latency benefits because increasing N introduces more zebroids in the system. 
As N is further increased, ω reduces because the total storage in the system goes up. Consequently, the number of replicas per data item goes up thereby increasing the number of servers. Hence, the replacement policy cannot find a zebroid as often to transport the requested data item to the client earlier than any of the servers. On the other hand, the increased number of servers benefits the no-zebroids case in bringing 6agg down. The net effect results in reduction in ω for larger values of N. Number of cars 2. b) ω Figure 2: Figure 2 shows the latency performance with oneinstantaneous zebroids via simulations along with the analytical approximation for a 10 x 10 torus with T = 10. The trends mentioned above are similar to that obtained from the analysis. However, somewhat counter-intuitively with relatively higher system storage, z-relay zebroids provide slightly lower improvements in latency as compared to one-instantaneous zebroids. We speculate that this is due to the different data item replica distributions enforced by them. Note that replacements performed by the zebroids cause fluctuations in these replica distributions which may effect future client requests. We are currently exploring suitable choices of parameters that can capture these changing replica distributions. 6.3 What is the overhead incurred with improvements in latency with zebroids? We find that the improvements in latency with zebroids are obtained at a minimal replacement overhead (<1 per client request). 6.3.1 Analysis With one-instantaneous zebroids, for each client request a maximum of one zebroid is employed for latency improvement. Hence, the replacement overhead per client request can amount to a maximum of one. Recall that to calculate the latency with one-instantaneous Figure 3: Figure 3 depicts the latency performance with both one-instantaneous and z-relay zebroids as a function of the car density when α = 2 and T = 25. zebroids, Nic new replicas are created in the same cell as the servers. Now a replacement is only incurred if one of these Nic newly created replicas meets the client earlier than any of the ri servers. Let Xri and XNc respectively be random variables that capture i the minimum time till any of the ri and Nic replicas meet the client. Since Xri and XNc are assumed to be independent, by the property i of exponentially distributed random variables we have, Recall that the number of replicas for data item i, ri, is a function of the total storage in the system i.e., ri = k • N • α where k satisfies the constraint 1 <ri <N. Using this along with Equation 2, we get Now if we keep the total system storage N • α constant since G and T are also constant, increasing N increases the replacement overhead. However, if N • α is constant then increasing N causes α Figure 4: Figure 4 captures replacement overhead when em ploying one-instantaneous zebroids as a function of (N, α) values, when the total storage in the system is kept fixed, ST = 200. to go down. This implies that a higher replacement overhead is incurred for higher N and lower α values. Moreover, when ri = N, this means that every car has a replica of data item i. Hence, no zebroids are employed when this item is requested, yielding an overhead/request for this item as zero. Next, we present simulation results that validate our analysis hypothesis for the overhead associated with deployment of one-instantaneous zebroids. 
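Before turning to those simulation results, the overhead argument above can be made concrete. With exponential meeting times, the probability that one of the N_c^i newly created replicas reaches the client before any of the ri servers is N_c^i / (ri + N_c^i), so the expected number of replacements per request is the access-weighted sum of these probabilities. As in the earlier analysis sketch, N_c^i = (N − ri) · ri / G² is an assumed estimate, and the i-th term is skipped when ri = N (no zebroid is needed).

```python
def expected_replacements_per_request(freqs, replicas, N, G):
    """Expected replacement overhead of one-instantaneous zebroids (Section 6.3.1)."""
    overhead = 0.0
    for f, r in zip(freqs, replicas):
        if r >= N:                      # every car already holds the item
            continue
        Nc = (N - r) * r / G ** 2       # assumed estimate of N_c^i
        overhead += f * Nc / (r + Nc)   # P(a new nearby carrier beats every server)
    return overhead

# Fixed total storage N * alpha = 200: higher N (and lower alpha) raises the overhead.
for N, alpha in [(20, 10), (25, 8), (50, 4), (100, 2)]:
    r = max(1, (N * alpha) // 50)       # even split of S_T over T = 50 items
    print(N, alpha,
          round(expected_replacements_per_request([1 / 50] * 50, [r] * 50, N, G=10), 3))
```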
6.3.2 Simulation Figure 4 shows the replacement overhead with one-instantaneous zebroids when (N, α) are varied while keeping the total system storage constant. The trends shown by the simulation are in agreement with those predicted by the analysis above. However, the total system storage can be changed either by varying car density (N) or storage per car (α). On account of similar trends, here we present the case when α is kept constant and N is varied (Figure 5). We refer the reader to [9] for the case when α is varied and N is held constant. We present an intuitive argument for the behavior of the perrequest replacement overhead curves. When the storage is extremely scarce so that only one replica per data item exists in the AutoMata network, the number of replacements performed by the zebroids is zero since any replacement will cause a data item to be lost from the system. The dispatcher ensures that no data item is lost from the system. At the other end of the spectrum, if storage becomes so abundant that α = T then the entire data item repository can be replicated on every car. The number of replacements is again zero since each request can be satisfied locally. A similar scenario occurs if N is increased to such a large value that another car with the requested data item is always available in the vicinity of the client. However, there is a storage spectrum in the middle where replacements by the scheduled zebroids result in improvements in 6agg (see Figure 3). Moreover, we observe that for sparse storage scenarios, the higher improvements with z-relay zebroids are obtained at the cost of a higher replacement overhead when compared to the one-instantaneous zebroids case. This is because in the former case, each of these z zebroids selected along the lowest latency path to the client needs to perform a replacement. However, the replacement overhead is still less than 1 indicating that on an average less than one replacement per client request is needed even when z-relay zebroids are employed. Figure 5: Figure 5 shows the replacement overhead with zebroids for the cases when N is varied keeping α = 2. Figure 6: Figure 6 shows 6agg for different car densities as a function of the prediction accuracy metric with α = 2 and T = 25. 6.4 What happens to the availability latency with zebroids in scenarios with inaccuracies in the car route predictions? We find that zebroids continue to provide improvements in availability latency even with lower accuracy in the car route predictions. We use a single parameter p to quantify the accuracy of the car route predictions. 6.4.1 Analysis Since p represents the probability that a car route predicted by the dispatcher matches the actual one, hence, the latency with zebroids can be approximated by, Expressions for 6agg (zeb) can be obtained from Equations 4 (one-instantaneous) or 5 (z-relay zebroids). 6.4.2 Simulation Figure 6 shows the variation in 6agg as a function of this route prediction accuracy metric. We observe a smooth reduction in the improvement in δagg as the prediction accuracy metric reduces. For zebroids that are scheduled but fail to rendezvous with the client due to the prediction error, we tag any such replacements made by the zebroids as failed. It is seen that failed replacements gradually increase as the prediction accuracy reduces. 6.5 Under what conditions are the improve ments in availability latency with zebroids maximized? 
Surprisingly, we find that the improvements in latency obtained with one-instantaneous zebroids are independent of the input distribution of the popularity of the data items. 6.5.1 Analysis The fractional difference (labelled ω) in the latency between the no-zebroids and one-instantaneous zebroids is obtained from equations 2, 3, and 4 as Here C = c · G · log G. This captures the fractional improvement in the availability latency obtained by employing one-instantaneous zebroids. Let α = 1, making the total storage in the system ST = N. Assuming the initial replica distribution is as per the squareroot replication scheme, we have, ri = T √ fj. Hence, we get above equation to get, ω = 1 Ti = 1 ri In order to determine when the gains with one-instantaneous zebroids are maximized, we can frame an optimization problem as follows: Maximize ω, subject to / teTi = 1 ri = ST THEOREM 1. With a square-root replication scheme, improvements obtained with one-instantaneous zebroids are independent of the input popularity distribution of the data items. (See [9] for proof) 6.5.2 Simulation We perform simulations with two different frequency distribution of data items: Uniform and Zipfian (with mean = 0.27). Similar latency improvements with one-instantaneous zebroids are obtained in both cases. This result has important implications. In cases with biased popularity toward certain data items, the aggregate improvements in latency across all data item requests still remain the same. Even in scenarios where the frequency of access to the data items changes dynamically, zebroids will continue to provide similar latency improvements. 7. CONCLUSIONS AND FUTURE RESEARCH DIRECTIONS In this study, we examined the improvements in latency that can be obtained in the presence of carriers that deliver a data item from a server to a client. We quantified the variation in availability latency as a function of a rich set of parameters such as car density, storage per car, title database size, and replacement policies employed by zebroids. Below we summarize some key future research directions we intend to pursue. To better reflect reality we would like to validate the observations obtained from this study with some real world simulation traces of vehicular movements (for example using CORSIM [1]). This will also serve as a validation for the utility of the Markov mobility model used in this study. We are currently analyzing the performance of zebroids on a real world data set comprising of an ad-hoc network of buses moving around a small neighborhood in Amherst [4]. Zebroids may also be used for delivery of data items that carry delay sensitive information with a certain expiry. Extensions to zebroids that satisfy such application requirements presents an interesting future research direction.
An Evaluation of Availability Latency in Carrier-based Vehicular Ad-Hoc Networks Shahram ABSTRACT On-demand delivery of audio and video clips in peer-to-peer vehicular ad-hoc networks is an emerging area of research. Our target environment uses data carriers, termed zebroids, where a mobile device carries a data item on behalf of a server to a client thereby minimizing its availability latency. In this study, we quantify the variation in availability latency with zebroids as a function of a rich set of parameters such as car density, storage per device, repository size, and replacement policies employed by zebroids. Using analysis and extensive simulations, we gain novel insights into the design of carrier-based systems. Significant improvements in latency can be obtained with zebroids at the cost of a minimal overhead. These improvements occur even in scenarios with lower accuracy in the predictions of the car routes. Two particularly surprising findings are: (1) a naive random replacement policy employed by the zebroids shows competitive performance, and (2) latency improvements obtained with a simplified instantiation of zebroids are found to be robust to changes in the popularity distribution of the data items. Categories and Subject Descriptors: C. 2.4 [Distributed Systems]: Client/Server 1. INTRODUCTION Technological advances in areas of storage and wireless communications have now made it feasible to envision on-demand delivery of data items, for e.g., video and audio clips, in vehicular peer-topeer networks. In prior work, Ghandeharizadeh et al. [10] introduce the concept of vehicles equipped with a Car-to-Car-Peer-toPeer device, termed AutoMata, for in-vehicle entertainment. The notable features of an AutoMata include a mass storage device offering hundreds of gigabytes (GB) of storage, a fast processor, and several types of networking cards. Even with today's 500 GB disk drives, a repository of diverse entertainment content may exceed the storage capacity of a single AutoMata. Such repositories con stitute the focus of this study. To exchange data, we assume each AutoMata is configured with two types of networking cards: 1) a low-bandwidth networking card with a long radio-range in the order of miles that enables an AutoMata device to communicate with a nearby cellular or WiMax station, 2) a high-bandwidth networking card with a limited radio-range in the order of hundreds of feet. The high bandwidth connection supports data rates in the order of tens to hundreds of Megabits per second and represents the ad hoc peer to peer network between the vehicles. This is labelled as the data plane and is employed to exchange data items between devices. The low-bandwidth connection serves as the control plane, enabling AutoMata devices to exchange meta-data with one or more centralized servers. This connection offers bandwidths in the order of tens to hundreds of Kilobits per second. The centralized servers, termed dispatchers, compute schedules of data delivery along the data plane using the provided meta-data. These schedules are transmitted to the participating vehicles using the control plane. The technical feasibility of such a two-tier architecture is presented in [7], with preliminary results to demonstrate the bandwidth of the control plane is sufficient for exchange of control information needed for realizing such an application. 
In a typical scenario, an AutoMata device presents a passenger with a list of data items1, showing both the name of each data item and its availability latency. The latter, denoted as δ, is defined as the earliest time at which the client encounters a copy of its requested data item. A data item is available immediately when it resides in the local storage of the AutoMata device serving the request. Due to storage constraints, an AutoMata may not store the entire repository. In this case, availability latency is the time from when the user issues a request until when the AutoMata encounters another car containing the referenced data item. (The terms car and AutoMata are used interchangeably in this study.) The availability latency for an item is a function of the current location of the client, its destination and travel path, the mobility model of the AutoMata equipped vehicles, the number of replicas constructed for the different data items, and the placement of data item replicas across the vehicles. A method to improve the availability latency is to employ data carriers which transport a replica of the requested data item from a server car containing it to a client that requested it. These data carriers are termed ` zebroids'. Selection of zebroids is facilitated by the two-tiered architecture. The control plane enables centralized information gathering at a dispatcher present at a base-station .2 Some examples of control in formation are currently active requests, travel path of the clients and their destinations, and paths of the other cars. For each client request, the dispatcher may choose a set of z carriers that collaborate to transfer a data item from a server to a client (z-relay zebroids). Here, z is the number of zebroids such that 0 ≤ z <N, where N is the total number of cars. When z = 0 there are no carriers, requiring a server to deliver the data item directly to the client. Otherwise, the chosen relay team of z zebroids hand over the data item transitively to one another to arrive at the client, thereby reducing availability latency (see Section 3.1 for details). To increase robustness, the dispatcher may employ multiple relay teams of z-carriers for every request. This may be useful in scenarios where the dispatcher has lower prediction accuracy in the information about the routes of the cars. Finally, storage constraints may require a zebroid to evict existing data items from its local storage to accommodate the client requested item. In this study, we quantify the following main factors that affect availability latency in the presence of zebroids: (i) data item repository size, (ii) car density, (iii) storage capacity per car, (iv) client trip duration, (v) replacement scheme employed by the zebroids, and (vi) accuracy of the car route predictions. For a significant subset of these factors, we address some key questions pertaining to use of zebroids both via analysis and extensive simulations. Our main findings are as follows. A naive random replacement policy employed by the zebroids shows competitive performance in terms of availability latency. With such a policy, substantial improvements in latency can be obtained with zebroids at a minimal replacement overhead. In more practical scenarios, where the dispatcher has inaccurate information about the car routes, zebroids continue to provide latency improvements. 
A surprising result is that changes in popularity of the data items do not impact the latency gains obtained with a simple instantiation of z-relay zebroids called one-instantaneous zebroids (see Section 3.1). This study suggests a number of interesting directions to be pursued to gain better understanding of design of carrier-based systems that improve availability latency. Related Work: Replication in mobile ad-hoc networks has been a widely studied topic [11, 12, 15]. However, none of these studies employ zebroids as data carriers to reduce the latency of the client's requests. Several novel and important studies such as ZebraNet [13], DakNet [14], Data Mules [16], Message Ferries [20], and Seek and Focus [17] have analyzed factors impacting intermittently connected networks consisting of data carriers similar in spirit to zebroids. Factors considered by each study are dictated by their assumed environment and target application. A novel characteristic of our study is the impact on availability latency for a given database repository of items. A detailed description of related works can be obtained in [9]. The rest of this paper is organized as follows. Section 2 provides an overview of the terminology along with the factors that impact availability latency in the presence of zebroids. Section 3 describes how the zebroids may be employed. Section 4 provides details of the analysis methodology employed to capture the performance with zebroids. Section 5 describes the details of the simulation environment used for evaluation. Section 6 enlists the key questions examined in this study and answers them via analysis and simulations. Finally, Section 7 presents brief conclusions and future research directions. 2. OVERVIEW AND TERMINOLOGY Table 1 summarizes the notation of the parameters used in the paper. Below we introduce some terminology used in the paper. Assume a network of N AutoMata-equipped cars, each with storage capacity of α bytes. The total storage capacity of the system is ST = N · α. There are T data items in the database, each with Table 1: Terms and their definitions with ET size Si. The frequency of access to data item i is denoted as fi, The exponent n characterizes a particular replication technique. A square-root replication scheme is realized when n = 0.5 [5]. This serves as the base-line for comparison with the case when zebroids are deployed. Ri is normalized to a value between 0 and 1. The number of replicas for data item i, denoted as ri, is: ri = min (N, max (1, bRi · N · α Si c)). This captures the case when at least one copy of every data item must be present in the ad-hoc network at all times. In cases where a data item may be lost from the ad-hoc network, this equation becomes ri = min (N, max (0, b Ri · N · α Si c)). In this case, a request for the lost data item may need to be satisfied by fetching the item from a remote server. The availability latency for a data item i, denoted as δi, is defined as the earliest time at which a client AutoMata will find the first replica of the item accessible to it. If this condition is not satisfied, then we set δi to - y. This indicates that data item i was not available to the client during its journey. Note that since there is at least one replica in the system for every data item i, by setting - y to a large value we ensure that the client's request for any data item i will be satisfied. However, in most practical circumstances - y may not be so large as to find every data item. 
We are interested in the availability latency observed across all data items. Hence, we augment the average availability latency for every data item i with its fi to obtain the following weighted availability latency (δagg) metric: δagg = Σ_{i=1}^{T} δi · fi. Next, we present our solution approach describing how zebroids are selected. 3. SOLUTION APPROACH 3.1 Zebroids 3.2 Carrier-based Replacement policies 4. ANALYSIS METHODOLOGY 4.1 One-instantaneous zebroids 4.2 z-relay zebroids 5. SIMULATION METHODOLOGY 6. RESULTS 6.1 How does a replacement scheme employed by a zebroid impact availability latency? 6.2 Do zebroids provide significant improvements in availability latency? 6.2.1 Analysis 6.2.2 Simulation 6.3 What is the overhead incurred with improvements in latency with zebroids? 6.3.1 Analysis 6.3.2 Simulation 6.4 What happens to the availability latency with zebroids in scenarios with inaccuracies in the car route predictions? 6.4.1 Analysis 6.4.2 Simulation 6.5 Under what conditions are the improvements in availability latency with zebroids maximized? 6.5.1 Analysis 6.5.2 Simulation 7. CONCLUSIONS AND FUTURE RESEARCH DIRECTIONS
An Evaluation of Availability Latency in Carrier-based Vehicular Ad-Hoc Networks Shahram ABSTRACT On-demand delivery of audio and video clips in peer-to-peer vehicular ad-hoc networks is an emerging area of research. Our target environment uses data carriers, termed zebroids, where a mobile device carries a data item on behalf of a server to a client thereby minimizing its availability latency. In this study, we quantify the variation in availability latency with zebroids as a function of a rich set of parameters such as car density, storage per device, repository size, and replacement policies employed by zebroids. Using analysis and extensive simulations, we gain novel insights into the design of carrier-based systems. Significant improvements in latency can be obtained with zebroids at the cost of a minimal overhead. These improvements occur even in scenarios with lower accuracy in the predictions of the car routes. Two particularly surprising findings are: (1) a naive random replacement policy employed by the zebroids shows competitive performance, and (2) latency improvements obtained with a simplified instantiation of zebroids are found to be robust to changes in the popularity distribution of the data items. Categories and Subject Descriptors: C. 2.4 [Distributed Systems]: Client/Server 1. INTRODUCTION Technological advances in areas of storage and wireless communications have now made it feasible to envision on-demand delivery of data items, for e.g., video and audio clips, in vehicular peer-topeer networks. Such repositories con stitute the focus of this study. The high bandwidth connection supports data rates in the order of tens to hundreds of Megabits per second and represents the ad hoc peer to peer network between the vehicles. This is labelled as the data plane and is employed to exchange data items between devices. The low-bandwidth connection serves as the control plane, enabling AutoMata devices to exchange meta-data with one or more centralized servers. The centralized servers, termed dispatchers, compute schedules of data delivery along the data plane using the provided meta-data. These schedules are transmitted to the participating vehicles using the control plane. In a typical scenario, an AutoMata device presents a passenger with a list of data items1, showing both the name of each data item and its availability latency. The latter, denoted as δ, is defined as the earliest time at which the client encounters a copy of its requested data item. A data item is available immediately when it resides in the local storage of the AutoMata device serving the request. Due to storage constraints, an AutoMata may not store the entire repository. In this case, availability latency is the time from when the user issues a request until when the AutoMata encounters another car containing the referenced data item. (The terms car and AutoMata are used interchangeably in this study.) A method to improve the availability latency is to employ data carriers which transport a replica of the requested data item from a server car containing it to a client that requested it. These data carriers are termed ` zebroids'. Selection of zebroids is facilitated by the two-tiered architecture. The control plane enables centralized information gathering at a dispatcher present at a base-station .2 Some examples of control in formation are currently active requests, travel path of the clients and their destinations, and paths of the other cars. 
For each client request, the dispatcher may choose a set of z carriers that collaborate to transfer a data item from a server to a client (z-relay zebroids). Here, z is the number of zebroids such that 0 ≤ z <N, where N is the total number of cars. When z = 0 there are no carriers, requiring a server to deliver the data item directly to the client. Otherwise, the chosen relay team of z zebroids hand over the data item transitively to one another to arrive at the client, thereby reducing availability latency (see Section 3.1 for details). To increase robustness, the dispatcher may employ multiple relay teams of z-carriers for every request. This may be useful in scenarios where the dispatcher has lower prediction accuracy in the information about the routes of the cars. Finally, storage constraints may require a zebroid to evict existing data items from its local storage to accommodate the client requested item. In this study, we quantify the following main factors that affect availability latency in the presence of zebroids: (i) data item repository size, (ii) car density, (iii) storage capacity per car, (iv) client trip duration, (v) replacement scheme employed by the zebroids, and (vi) accuracy of the car route predictions. For a significant subset of these factors, we address some key questions pertaining to use of zebroids both via analysis and extensive simulations. Our main findings are as follows. A naive random replacement policy employed by the zebroids shows competitive performance in terms of availability latency. With such a policy, substantial improvements in latency can be obtained with zebroids at a minimal replacement overhead. In more practical scenarios, where the dispatcher has inaccurate information about the car routes, zebroids continue to provide latency improvements. A surprising result is that changes in popularity of the data items do not impact the latency gains obtained with a simple instantiation of z-relay zebroids called one-instantaneous zebroids (see Section 3.1). This study suggests a number of interesting directions to be pursued to gain better understanding of design of carrier-based systems that improve availability latency. Related Work: Replication in mobile ad-hoc networks has been a widely studied topic [11, 12, 15]. However, none of these studies employ zebroids as data carriers to reduce the latency of the client's requests. Factors considered by each study are dictated by their assumed environment and target application. A novel characteristic of our study is the impact on availability latency for a given database repository of items. The rest of this paper is organized as follows. Section 2 provides an overview of the terminology along with the factors that impact availability latency in the presence of zebroids. Section 3 describes how the zebroids may be employed. Section 4 provides details of the analysis methodology employed to capture the performance with zebroids. Section 5 describes the details of the simulation environment used for evaluation. Section 6 enlists the key questions examined in this study and answers them via analysis and simulations. Finally, Section 7 presents brief conclusions and future research directions. 2. OVERVIEW AND TERMINOLOGY Table 1 summarizes the notation of the parameters used in the paper. Below we introduce some terminology used in the paper. Assume a network of N AutoMata-equipped cars, each with storage capacity of α bytes. The total storage capacity of the system is ST = N · α. 
There are T data items in the database, each with Table 1: Terms and their definitions with ET size Si. The frequency of access to data item i is denoted as fi, The exponent n characterizes a particular replication technique. A square-root replication scheme is realized when n = 0.5 [5]. This serves as the base-line for comparison with the case when zebroids are deployed. The number of replicas for data item i, denoted as ri, is: ri = min (N, max (1, bRi · N · α Si c)). This captures the case when at least one copy of every data item must be present in the ad-hoc network at all times. In cases where a data item may be lost from the ad-hoc network, this equation becomes ri = min (N, max (0, b Ri · N · α Si c)). In this case, a request for the lost data item may need to be satisfied by fetching the item from a remote server. The availability latency for a data item i, denoted as δi, is defined as the earliest time at which a client AutoMata will find the first replica of the item accessible to it. This indicates that data item i was not available to the client during its journey. Note that since there is at least one replica in the system for every data item i, by setting - y to a large value we ensure that the client's request for any data item i will be satisfied. However, in most practical circumstances - y may not be so large as to find every data item. We are interested in the availability latency observed across all data items. Hence, we augment the average availability latency for every data item i with its fi to obtain the following weighted availability latency (δagg) metric: δagg = Ti = 1 δi · fi Next, we present our solution approach describing how zebroids are selected.
J-58
Towards Truthful Mechanisms for Binary Demand Games: A General Framework
The family of Vickrey-Clarke-Groves (VCG) mechanisms is arguably the most celebrated achievement in truthful mechanism design. However, VCG mechanisms have their limitations. They only apply to optimization problems with a utilitarian (or affine) objective function, and their output should optimize the objective function. For many optimization problems, finding the optimal output is computationally intractable. If we apply VCG mechanisms to polynomial-time algorithms that approximate the optimal solution, the resulting mechanisms may no longer be truthful. In light of these limitations, it is useful to study whether we can design a truthful non-VCG payment scheme that is computationally tractable for a given allocation rule O. In this paper, we focus our attention on binary demand games in which the agents' only available actions are to take part in the game or not to. For these problems, we prove that a truthful mechanism M = (O, P) exists with a proper payment method P iff the allocation rule O satisfies a certain monotonicity property. We provide a general framework to design such P. We further propose several general composition-based techniques to compute P efficiently for various types of output. In particular, we show how P can be computed through or/and combinations, round-based combinations, and some more complex combinations of the outputs from subgames.
[ "truth mechan", "binari demand game", "demand game", "mechan design", "object function", "monoton properti", "combin", "vickrei-clark-grove", "composit-base techniqu", "selfish wireless network", "price", "cut valu function", "selfish agent" ]
[ "P", "P", "P", "P", "P", "P", "P", "U", "M", "U", "U", "M", "M" ]
Towards Truthful Mechanisms for Binary Demand Games: A General Framework Ming-Yang Kao ∗ Dept. of Computer Science Northwestern University Evanston, IL, USA kao@cs.northwestern.edu Xiang-Yang Li † Dept. of Computer Science Illinois Institute of Technology Chicago, IL, USA xli@cs.iit.edu WeiZhao Wang Dept. of Computer Science Illinois Institute of Technology Chicago, IL, USA wangwei4@iit.edu ABSTRACT The family of Vickrey-Clarke-Groves (VCG) mechanisms is arguably the most celebrated achievement in truthful mechanism design. However, VCG mechanisms have their limitations. They only apply to optimization problems with a utilitarian (or affine) objective function, and their output should optimize the objective function. For many optimization problems, finding the optimal output is computationally intractable. If we apply VCG mechanisms to polynomial-time algorithms that approximate the optimal solution, the resulting mechanisms may no longer be truthful. In light of these limitations, it is useful to study whether we can design a truthful non-VCG payment scheme that is computationally tractable for a given allocation rule O. In this paper, we focus our attention on binary demand games in which the agents'' only available actions are to take part in the a game or not to. For these problems, we prove that a truthful mechanism M = (O, P) exists with a proper payment method P iff the allocation rule O satisfies a certain monotonicity property. We provide a general framework to design such P. We further propose several general composition-based techniques to compute P efficiently for various types of output. In particular, we show how P can be computed through or/and combinations, round-based combinations, and some more complex combinations of the outputs from subgames. Categories and Subject Descriptors F.2 [Analysis of Algorithms and Problem Complexity]: General; J.4 [Social and Behavioral Sciences]: Economics; K.4.4 [Computer and Society]: Electronic Commerce General Terms Algorithms, Economics, Theory 1. INTRODUCTION In recent years, with the rapid development of the Internet, many protocols and algorithms have been proposed to make the Internet more efficient and reliable. The Internet is a complex distributed system where a multitude of heterogeneous agents cooperate to achieve some common goals, and the existing protocols and algorithms often assume that all agents will follow the prescribed rules without deviation. However, in some settings where the agents are selfish instead of altruistic, it is more reasonable to assume these agents are rational - maximize their own profits - according to the neoclassic economics, and new models are needed to cope with the selfish behavior of such agents. Towards this end, Nisan and Ronen [14] proposed the framework of algorithmic mechanism design and applied VCG mechanisms to some fundamental problems in computer science, including shortest paths, minimum spanning trees, and scheduling on unrelated machines. The VCG mechanisms [5, 11, 21] are applicable to mechanism design problems whose outputs optimize the utilitarian objective function, which is simply the sum of all agents'' valuations. Unfortunately, some objective functions are not utilitarian; even for those problems with a utilitarian objective function, sometimes it is impossible to find the optimal output in polynomial time unless P=NP. Some mechanisms other than VCG mechanism are needed to address these issues. 
Archer and Tardos [2] studied a scheduling problem where it is NP-Hard to find the optimal output. They pointed out that a certain monotonicity property of the output work load is a necessary and sufficient condition for the existence of a truthful mechanism for their scheduling problem. Auletta et al. [3] studied a similar scheduling problem. They provided a family of deterministic truthful (2 + )-approximation mechanisms for any fixed number of machines and several (1 + )-truthful mechanisms for some NP-hard restrictions of their scheduling problem. Lehmann et al. [12] studied the single-minded combinatorial auction and gave a√ m-approximation truthful mechanism, where m is the number of goods. They also pointed out that a certain monotonicity in the allocation rule can lead to a truthful mechanism. The work of Mu``alem and Nisan [13] is the closest in spirit to our work. They characterized all truthful mechanisms based on a certain monotonicity property in a single-minded auction setting. They also showed how to used MAX and IF-THEN-ELSE to combine outputs from subproblems. As shown in this paper, the MAX and IF-THEN-ELSE combinations are special cases of the composition-based techniques that we present in this paper for computing the payments in polynomial time under mild assumptions. More generally, we study how to design truthful mechanisms for binary demand games where the allocation of an agent is either selected or not selected. We also assume that the valuations 213 of agents are uncorrelated, i.e., the valuation of an agent only depends on its own allocation and type. Recall that a mechanism M = (O, P) consists of two parts, an allocation rule O and a payment scheme P. Previously, it is often assumed that there is an objective function g and an allocation rule O, that either optimizes g exactly or approximately. In contrast to the VCG mechanisms, we do not require that the allocation should optimize the objective function. In fact, we do not even require the existence of an objective function. Given any allocation rule O for a binary demand game, we showed that a truthful mechanism M = (O, P) exists for the game if and only if O satisfies a certain monotonicity property. The monotonicity property only guarantees the existence of a payment scheme P such that (O, P) is truthful. We complement this existence theorem with a general framework to design such a payment scheme P. Furthermore, we present general techniques to compute the payment when the output is a composition of the outputs of subgames through the operators or and and; through round-based combinations; or through intermediate results, which may be themselves computed from other subproblems. The remainder of the paper is organized as follows. In Section 2, we discuss preliminaries and previous works, define binary demand games and discuss the basic assumptions about binary demand games. In Section 3, we show that O satisfying a certain monotonicity property is a necessary and sufficient condition for the existence of a truthful mechanism M = (O, P). A framework is then proposed in Section 4 to compute the payment P in polynomial time for several types of allocation rules O. In Section 5, we provide several examples to demonstrate the effectiveness of our general framework. We conclude our paper in Section 6 with some possible future directions. 2. 
PRELIMINARIES 2.1 Mechanism Design As usually done in the literatures about the designing of algorithms or protocols with inputs from individual agents, we adopt the assumption in neoclassic economics that all agents are rational, i.e., they respond to well-defined incentives and will deviate from the protocol only if the deviation improves their gain. A standard model for mechanism design is as follows. There are n agents 1, ... , n and each agent i has some private information ti, called its type, only known to itself. For example, the type ti can be the cost that agent i incurs for forwarding a packet in a network or can be a payment that the agent is willing to pay for a good in an auction. The agents'' types define the type vector t = (t1, t2, ... , tn). Each agent i has a set of strategies Ai from which it can choose. For each input vector a = (a1, ... , an) where agent i plays strategy ai ∈ Ai, the mechanism M = (O, P) computes an output o = O(a) and a payment vector p(a) = (p1(a), ... , pn(a)). Here the payment pi(·) is the money given to agent i and depends on the strategies used by the agents. A game is defined as G = (S, M), where S is the setting for the game G. Here, S consists the parameters of the game that are set before the game starts and do not depend on the players'' strategies. For example, in a unicast routing game [14], the setting consists of the topology of the network, the source node and the destination node. Throughout this paper, unless explicitly mentioned otherwise, the setting S of the game is fixed and we are only interested in how to design P for a given allocation rule O. A valuation function v(ti, o) assigns a monetary amount to agent i for each possible output o. Everything about a game S, M , including the setting S, the allocation rule O and the payment scheme P, is public knowledge except the agent i``s actual type ti, which is private information to agent i. Let ui(ti, o) denote the utility of agent i at the outcome of the game o, given its preferences ti. Here, following a common assumption in the literature, we assume the utility for agent i is quasi-linear, i.e., ui(ti, o) = v(ti, o) + Pi(a). Let a|i ai = (a1, · · · , ai−1, ai, ai+1, · · · , an), i.e., each agent j = i plays an action aj except that the agent i plays ai. Let a−i = (a1, · · · , ai−1, ai+1, · · · , an) denote the actions of all agents except i. Sometimes, we write (a−i, bi) as a|i bi. An action ai is called dominant for i if it (weakly) maximizes the utility of i for all possible strategies b−i of other agents, i.e., ui(ti, O(b−i, ai)) ≥ ui(ti, O(b−i, ai)) for all ai = ai and b−i. A direct-revelation mechanism is a mechanism in which the only actions available to each agent are to report its private type either truthfully or falsely to the mechanism. An incentive compatible (IC) mechanism is a direct-revelation mechanism in which if an agent reports its type ti truthfully, then it will maximize its utility. Then, in a direct-revelation mechanism satisfying IC, the payment scheme should satisfy the property that, for each agent i, v(ti, O(t)) + pi(t) ≥ v(ti, O(t|i ti)) + pi(t|i ti). Another common requirement in the literature for mechanism design is so called individual rationality or voluntary participation: the agent``s utility of participating in the output of the mechanism is not less than the utility of the agent of not participating. A direct-revelation mechanism is strategproof if it satisfies both IC and IR properties. 
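As a concrete illustration of the incentive-compatibility condition just stated, the check below verifies, for a given true type profile, that no single agent can gain by misreporting within a discretized type space. The function names and the representation of a mechanism as a pair of callables are our own; they are not an interface defined in the paper.

```python
def is_ic_at_profile(true_costs, type_space, allocate, pay, eps=1e-9):
    """Check v(t_i, O(t)) + p_i(t) >= v(t_i, O(t | t_i')) + p_i(t | t_i') for all i, t_i'.

    true_costs : actual costs c_i (so v(t_i, 1) = -c_i and v(t_i, 0) = 0)
    type_space : finite set of reports an agent may submit
    allocate   : bids -> tuple of 0/1 selections O(b)
    pay        : bids -> tuple of payments P(b) given to the agents
    """
    n = len(true_costs)
    o, p = allocate(true_costs), pay(true_costs)
    for i in range(n):
        u_truth = -true_costs[i] * o[i] + p[i]
        for report in type_space:
            bids = list(true_costs)
            bids[i] = report
            o2, p2 = allocate(bids), pay(bids)
            if -true_costs[i] * o2[i] + p2[i] > u_truth + eps:
                return False
    return True

# Example: select every agent whose reported cost is at most a posted threshold of 3
# and pay each selected agent exactly 3 (a simple monotone rule with threshold pay).
alloc = lambda b: tuple(1 if c <= 3 else 0 for c in b)
pay = lambda b: tuple(3 if c <= 3 else 0 for c in b)
print(is_ic_at_profile([1.0, 2.5, 4.0], [x / 2 for x in range(0, 13)], alloc, pay))
```

Looping this check over all true profiles in the discretized space gives a brute-force, and expensive, certificate of strategyproofness for small instances.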
Arguably the most important positive result in mechanism design is the generalized Vickrey-Clarke-Groves (VCG) mechanism by Vickrey [21], Clarke [5], and Groves [11]. The VCG mechanism applies to (affine) maximization problems where the objective function is utilitarian, g(o, t) = Σi v(ti, o) (i.e., the sum of all agents' valuations), and the set of possible outputs is assumed to be finite. A direct-revelation mechanism M = (O(t), P(t)) belongs to the VCG family if (1) the allocation O(t) maximizes Σi v(ti, o), and (2) the payment to agent i is pi(t) = Σj≠i vj(tj, O(t)) + hi(t−i), where hi(·) is an arbitrary function of t−i. Under mild assumptions, VCG mechanisms are the only truthful implementations for utilitarian problems [10]. The allocation rule of a VCG mechanism is required to maximize the objective function over the range of the allocation function. This makes the mechanism computationally intractable in many cases. Furthermore, replacing an optimal algorithm for computing the output with an approximation algorithm usually leads to untruthful mechanisms if a VCG payment scheme is used. In this paper, we study how to design a truthful mechanism that does not optimize a utilitarian objective function. 2.2 Binary Demand Games A binary demand game is a game G = (S, M), where M = (O, P) and the range of O is {0, 1}n. In other words, the output is an n-tuple vector O(t) = (O1(t), O2(t), ... , On(t)), where Oi(t) = 1 (respectively, 0) means that agent i is (respectively, is not) selected. Examples of binary demand games include unicast [14, 22, 9] and multicast [23, 24, 8] (generally, subgraph construction by selecting some links/nodes to satisfy some property), facility location [7], and certain auctions [12, 2, 13]. Hereafter, we make the following further assumptions. 1. The valuations of the agents are not correlated, i.e., v(ti, o) depends only on ti and the agent's own allocation oi, and is therefore denoted as v(ti, oi). 2. The valuation v(ti, 0) of an agent that is not selected is publicly known and is normalized to 0. This assumption is needed to guarantee the IR property. Thus, throughout this paper, we only consider direct-revelation mechanisms in which every agent only needs to reveal its valuation vi = v(ti, 1). Notice that in applications where agents provide a service and receive payments, e.g., unicast and job scheduling, the valuation vi of an agent i is usually negative. For convenience of presentation, we define the cost of agent i as ci = −v(ti, 1), i.e., it costs agent i an amount ci to provide the service. Throughout this paper, we will use ci instead of vi in our analysis. All our results also apply to the case where the agents receive the service rather than provide it, as in an auction, by setting ci to a negative value. In a binary demand game, if we want to optimize an objective function g(o, t), then we call it a binary optimization demand game. The main differences between binary demand games and the problems that can be solved by VCG mechanisms are: 1. The objective function is utilitarian (or an affine maximization) for a problem solvable by VCG, while there is no restriction on the objective function for a binary demand game. 2. The allocation rule O studied here does not necessarily optimize an objective function, while a VCG mechanism only uses the output that optimizes the objective function. We do not even require the existence of an objective function. 3. We assume that the agents' valuations are not correlated in a binary demand game, while the agents' valuations may be correlated in a VCG mechanism.
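For contrast with the binary demand setting, the VCG payment formula recalled above is easy to write down when the output set is small and finite. The sketch below is our own illustration (hypothetical names throughout); it uses the Clarke pivot for hi and makes explicit the exhaustive welfare maximization that becomes intractable for the problems considered in this paper.

from typing import List, Tuple

def vcg(valuations: List[List[float]]) -> Tuple[int, List[float]]:
    """valuations[i][o] = v(t_i, o) over a finite output set.  Returns the
    welfare-maximizing output and VCG payments with the Clarke pivot
    h_i(t_-i) = -max_o sum_{j != i} v_j(t_j, o)."""
    n, num_outputs = len(valuations), len(valuations[0])

    def welfare(o: int, exclude: int = -1) -> float:
        return sum(valuations[j][o] for j in range(n) if j != exclude)

    best = max(range(num_outputs), key=welfare)                  # maximizes sum_i v(t_i, o)
    payments = []
    for i in range(n):
        pivot = max(welfare(o, i) for o in range(num_outputs))   # best welfare of the others alone
        payments.append(welfare(best, i) - pivot)                # <= 0, i.e. agent i is charged
    return best, payments

Truthfulness of this scheme hinges on the allocation being exactly welfare-maximizing; the rest of the paper asks what can be done when the allocation rule is an arbitrary, possibly approximate, rule O.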
In this paper, we assume for technical convenience that the objective function g(o, c), if exists, is continuous with respect to the cost ci, but most of our results are directly applicable to the discrete case without any modification. 2.3 Previous Work Lehmann et al. [12] studied how to design an efficient truthful mechanism for single-minded combinatorial auction. In a singleminded combinatorial auction, each agent i (1 ≤ i ≤ n) only wants to buy a subset Si ⊆ S with private price ci. A single-minded bidder i declares a bid bi = Si, ai with Si ⊆ S and ai ∈ R+ . In [12], it is assumed that the set of goods allocated to an agent i is either Si or ∅, which is known as exactness. Lehmann et al. gave a greedy round-based allocation algorithm, based on the rank ai |Si|1/2 , that has an approximation ratio √ m, where m is the number of goods in S. Based on the approximation algorithm, they gave a truthful payment scheme. For an allocation rule satisfying (1) exactness: the set of goods allocated to an agent i is either Si or ∅; (2) monotonicity: proposing more money for fewer goods cannot cause a bidder to lose its bid, they proposed a truthful payment scheme as follows: (1) charge a winning bidder a certain amount that does not depend on its own bidding; (2) charge a losing bidder 0. Notice the assumption of exactness reveals that the single minded auction is indeed a binary demand game. Their payment scheme inspired our payment scheme for binary demand game. In [1], Archer et al. studied the combinatorial auctions where multiple copies of many different items are on sale, and each bidder i desires only one subset Si. They devised a randomized rounding method that is incentive compatible and gave a truthful mechanism for combinatorial auctions with single parameter agents that approximately maximizes the social value of the auction. As they pointed out, their method is strongly truthful in sense that it is truthful with high probability 1 − , where is an error probability. On the contrary, in this paper, we study how to design a deterministic mechanism that is truthful based on some given allocation rules. In [2], Archer and Tardos showed how to design truthful mechanisms for several combinatorial problems where each agent``s private information is naturally expressed by a single positive real number, which will always be the cost incurred per unit load. The mechanism``s output could be arbitrary real number but their valuation is a quasi-linear function t · w, where t is the private per unit cost and w is the work load. Archer and Tardos characterized that all truthful mechanism should have decreasing work curves w and that the truthful payment should be Pi(bi) = Pi(0) + biwi(bi) − R bi 0 wi(u)du Using this model, Archer and Tardos designed truthful mechanisms for several scheduling related problems, including minimizing the span, maximizing flow and minimizing the weighted sum of completion time problems. Notice when the load of the problems is w = {0, 1}, it is indeed a binary demand game. If we apply their characterization of the truthful mechanism, their decreasing work curves w implies exactly the monotonicity property of the output. But notice that their proof is heavily based on the assumption that the output is a continuous function of the cost, thus their conclusion can``t directly apply to binary demand games. The paper of Ahuva Mu``alem and Noam Nisan [13] is closest in spirit to our work. 
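The one-parameter payment of Archer and Tardos quoted above, Pi(bi) = Pi(0) + bi·wi(bi) − ∫0^bi wi(u) du, can be evaluated numerically for any given monotone non-increasing work curve. The snippet below is only our illustration (the work curve and constants are invented for the example); choosing Pi(0) = ∫0^∞ wi(u) du normalizes the payment of a non-selected agent to zero, and for a binary work curve the formula collapses to the cut-value payments developed later in this paper.

def archer_tardos_payment(work_curve, bid, p_at_zero=0.0, steps=10_000):
    """Approximate P_i(b_i) = P_i(0) + b_i * w_i(b_i) - integral_0^{b_i} w_i(u) du
    by the trapezoidal rule on [0, bid]."""
    h = bid / steps
    integral = 0.0
    for k in range(steps):
        u0, u1 = k * h, (k + 1) * h
        integral += 0.5 * (work_curve(u0) + work_curve(u1)) * h
    return p_at_zero + bid * work_curve(bid) - integral

# Binary work curve dropping at 5, with P_i(0) = integral_0^inf w_i(u) du = 5:
w = lambda u: 1.0 if u < 5 else 0.0
print(archer_tardos_payment(w, bid=3.0, p_at_zero=5.0))   # ~5.0: a selected agent is paid the cut value
print(archer_tardos_payment(w, bid=7.0, p_at_zero=5.0))   # ~0.0: a non-selected agent is paid nothing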
They clearly stated that we only discussed a limited class of bidders, single minded bidders, that was introduced by [12]. They proved that all truthful mechanisms should have a monotonicity output and their payment scheme is based on the cut value. With a simple generalization, we get our conclusion for general binary demand game. They proposed several combination methods including MAX, IF-THEN-ELSE construction to perform partial search. All of their methods required the welfare function associated with the output satisfying bitonic property. Distinction between our contributions and previous results: It has been shown in [2, 6, 12, 13] that for the single minded combinatorial auction, there exists a payment scheme which results in a truthful mechanism if the allocation rule satisfies a certain monotonicity property. Theorem 4 also depends on the monotonicity property, but it is applicable to a broader setting than the single minded combinatorial auction. In addition, the binary demand game studied here is different from the traditional packing IP``s: we only require that the allocation to each agent is binary and the allocation rule satisfies a certain monotonicity property; we do not put any restrictions on the objective function. Furthermore, the main focus of this paper is to design some general techniques to find the truthful payment scheme for a given allocation rule O satisfying a certain monotonicity property. 3. GENERAL APPROACHES 3.1 Properties of Strategyproof Mechanisms We discuss several properties that mechanisms need to satisfy in order to be truthful. THEOREM 1. If a mechanism M = (O, P) satisfies IC, then ∀i, if Oi(t|i ti1 ) = Oi(t|i ti2 ), then pi(t|i ti1 ) = pi(t|i ti2 ). COROLLARY 2. For any strategy-proof mechanism for a binary demand game G with setting S, if we fix the cost c−i of all agents other than i, the payment to agent i is a constant p1 i if Oi(c) = 1, and it is another constant p0 i if Oi(c) = 0. THEOREM 3. Fixed the setting S for a binary demand game, if mechanism M = (O, P) satisfies IC, then mechanism M = (O, P ) with the same output method O and pi(c) = pi(c) − δi(c−i) for any function δi(c−i) also satisfies IC. The proofs of above theorems are straightforward and thus omitted due to space limit. This theorem implies that for the binary demand games we can always normalize the payment to an agent i such that the payment to the agent is 0 when it is not selected. Hereafter, we will only consider normalized payment schemes. 215 3.2 Existence of Strategyproof Mechanisms Notice, given the setting S, a mechanism design problem is composed of two parts: the allocation rule O and a payment scheme P. In this paper, given an allocation rule O we focus our attention on how to design a truthful payment scheme based on O. Given an allocation rule O for a binary demand game, we first present a sufficient and necessary condition for the existence of a truthful payment scheme P. DEFINITION 1 (MONOTONE NON-INCREASING PROPERTY (MP)). An output method O is said to satisfy the monotone non-increasing property if for every agent i and two of its possible costs ci1 < ci2 , Oi(c|i ci2 ) ≤ Oi(c|i ci1 ). This definition is not restricted only to binary demand games. For binary demand games, this definition implies that if Oi(c|i ci2 ) = 1 then Oi(c|i ci1 ) = 1. THEOREM 4. Fix the setting S, c−i in a binary demand game G with the allocation rule O, the following three conditions are equivalent: 1. 
There exists a value κi(O, c−i) (which we will call a cut value) such that Oi(c) = 1 if ci < κi(O, c−i) and Oi(c) = 0 if ci > κi(O, c−i). When ci = κi(O, c−i), Oi(c) can be either 0 or 1 depending on the tie-breaker of the allocation rule O. Hereafter, we will not consider the tie-breaker scenario in our proofs. 2. The allocation rule O satisfies MP. 3. There exists a truthful payment scheme P for this binary demand game. PROOF. The proof that Condition 2 implies Condition 1 is straightforward and is omitted here. We then show that Condition 3 implies Condition 2. The proof of this is similar to a proof in [13]. To prove this direction, we assume there exists an agent i and two valuation vectors c|i ci1 and c|i ci2 , where ci1 < ci2 , Oi(c|i ci2 ) = 1 and Oi(c|i ci1 ) = 0. From Corollary 2, we know that pi(c|i ci1 ) = p0 i and pi(c|i ci2 ) = p1 i . Now fix c−i; the utility for i when ci = ci1 is ui(ci1 ) = p0 i . When agent i lies and reports ci2 , its utility is p1 i − ci1 . Since M = (O, P) is truthful, we have p0 i ≥ p1 i − ci1 . Now consider the scenario when the actual valuation of agent i is ci = ci2 . Its utility is p1 i − ci2 when it reports its true valuation. Similarly, if it lies and reports ci1 , its utility is p0 i . Since M = (O, P) is truthful, we have p0 i ≤ p1 i − ci2 . Consequently, we have p1 i − ci2 ≥ p0 i ≥ p1 i − ci1 . This inequality implies that ci1 ≥ ci2 , which contradicts ci1 < ci2 . We then show that Condition 1 implies Condition 3. We prove this by constructing a payment scheme and proving that this payment scheme is truthful. The payment scheme is: if Oi(c) = 1, then agent i gets payment pi(c) = κi(O, c−i); else it gets payment pi(c) = 0. From Condition 1, if Oi(c) = 1 then ci < κi(O, c−i). Thus, its utility is κi(O, c−i) − ci > 0, which implies that the payment scheme satisfies IR. In the following we prove that this payment scheme also satisfies the IC property. There are two cases here. Case 1: ci < κi(O, c−i). In this case, when i declares its true cost ci, its utility is κi(O, c−i) − ci > 0. Now consider the situation when i declares a cost di ≠ ci. If di < κi(O, c−i), then i gets the same payment and utility since it is still selected. If di > κi(O, c−i), then its utility becomes 0 since it is not selected anymore. Thus, it has no incentive to lie in this case. Case 2: ci ≥ κi(O, c−i). In this case, when i reveals its true valuation, its payment is 0 and its utility is 0. Now consider the situation when i declares a cost di ≠ ci. If di > κi(O, c−i), then i gets the same payment and utility since it is still not selected. If di ≤ κi(O, c−i), then its utility becomes κi(O, c−i) − ci ≤ 0 since it is selected now. Thus, it has no incentive to lie. The equivalence of the monotonicity property of the allocation rule O and the existence of a truthful mechanism using O can be extended to games beyond binary demand games. The details are omitted here due to the space limit. We now summarize the process to design a truthful payment scheme for a binary demand game based on an output method O. General Framework 1 Truthful mechanism design for a binary demand game Stage 1: Check whether the allocation rule O satisfies MP. If it does not, then there is no payment scheme P such that the mechanism M = (O, P) is truthful. Otherwise, define the payment scheme P as follows. Stage 2: Based on the allocation rule O, find the cut value κi(O, c−i) for agent i such that Oi(c|i di) = 1 when di < κi(O, c−i), and Oi(c|i di) = 0 when di > κi(O, c−i).
Stage 3: The payment for agent i is 0 if Oi(c) = 0; the payment is κi(O, c−i) if Oi(c) = 1. THEOREM 5. The payment defined by our general framework is minimum among all truthful payment schemes using O as output. 4. COMPUTING CUT VALUE FUNCTIONS To find the truthful payment scheme by using General Framework 1, the most difficult stage seems to be the stage 2. Notice that binary search does not work generally since the valuations of agents may be continuous. We give some general techniques that can help with finding the cut value function under certain circumstances. Our basic approach is as follows. First, we decompose the allocation rule into several allocation rules. Next find the cut value function for each of these new allocation rules. Then, we compute the original cut value function by combining these cut value functions of the new allocation rules. 4.1 Simple Combinations In this subsection, we introduce techniques to compute the cut value function by combining multiple allocation rules with conjunctions or disconjunctions. For simplicity, given an allocation rule O, we will use κ(O, c) to denote a n-tuple vector (κ1(O, c−1), κ2(O, c−2), ... , κn(O, c−n)). Here, κi(O, c−i) is the cut value for agent i when the allocation rule is O and the costs c−i of all other agents are fixed. THEOREM 6. With a fixed setting S of a binary demand game, assume that there are m allocation rules O1 , O2 , · · · , Om satisfying the monotonicity property, and κ(Oi , c) is the cut value vector for Oi . Then the allocation rule O(c) = Wm i=1 Oi (c) satisfies the monotonicity property. Moreover, the cut value for O is κ(O, c) = maxm i=1{κ(Oi , c)} Here κ(O, c) = maxm i=1{κ(Oi , c)} means, ∀j ∈ [1, n], κj(O, c−j) = maxm i=1{κj(Oi , c−j)} and O(c) =Wm i=1 Oi (c) means, ∀j ∈ [1, n], Oj(c) = O1 j (c) ∨ O2 j (c) ∨ · · · ∨ Om j (c). PROOF. Assume that ci > ci and Oi(c) = 1. Without loss of generality, we assume that Ok i (c) = 1 for some k, 1 ≤ k ≤ m. From the assumption that Ok i (c) satisfies MP, we obtain that 216 Ok i (c|i ci) = 1. Thus, Oi(c|i ci) = Wm j=1 Oj (c) = 1. This proves that O(c) satisfies MP. The correctness of the cut value function follows directly from Theorem 4. Many algorithms indeed fall into this category. To demonstrate the usefulness of Theorem 6, we discuss a concrete example here. In a network, sometimes we want to deliver a packet to a set of nodes instead of one. This problem is known as multicast. The most commonly used structure in multicast routing is so called shortest path tree (SPT). Consider a network G = (V, E, c), where V is the set of nodes, and vector c is the actual cost of the nodes forwarding the data. Assume that the source node is s and the receivers are Q ⊂ V . For each receiver qi ∈ Q, we compute the shortest path (least cost path), denoted by LCP(s, qi, d), from the source s to qi under the reported cost profile d. The union of all such shortest paths forms the shortest path tree. We then use General Framework 1 to design the truthful payment scheme P when the SPT structure is used as the output for multicast, i.e., we design a mechanism M = (SPT, P). Notice that VCG mechanisms cannot be applied here since SPT is not an affine maximization. We define LCP(s,qi) as the allocation corresponds to the path LCP(s, qi, d), i.e., LCP (s,qi) k (d) = 1 if and only if node vk is in LCP(s, qi, d). Then the output SPT is defined as W qi∈Q LCP(s,qi) . In other words, SPTk(d) = 1 if and only if qk is selected in some LCP(s, qi, d). 
The shortest path allocation rule is a utilitarian and satisfies MP. Thus, from Theorem 6, SPT also satisfies MP, and the cut value function vector for SPT can be calculated as κ(SPT, c) = maxqi∈Q κ(LCP(s,qi) , c), where κ(LCP(s,qi) , c) is the cut value function vector for the shortest path LCP(s, qi, c). Consequently, the payment scheme above is truthful and the minimum among all truthful payment schemes when the allocation rule is SPT. THEOREM 7. Fixed the setting S of a binary demand game, assume that there are m output methods O1 , O2 , · · · , Om satisfying MP, and κ(Oi , c) are the cut value functions respectively for Oi where i = 1, 2, · · · , m. Then the allocation rule O(c) =Vm i=1 Oi (c) satisfies MP. Moreover, the cut value function for O is κ(O, c) = minm i=1{κ(Oi , c)}. We show that our simple combination generalizes the IF-THENELSE function defined in [13]. For an agent i, assume that there are two allocation rules O1 and O2 satisfying MP. Let κi(O1 , c−i), κi(O2 , c−i) be the cut value functions for O1 , O2 respectively. Then the IF-THEN-ELSE function Oi(c) is actually Oi(c) = [(ci ≤ κi(O1 , c−i) + δ1(c−i)) ∧ O2 (c−i, ci)] ∨ (ci < κi(O1 , c−i) − δ2(c−i)) where δ1(c−i) and δ2(c−i) are two positive functions. By applying Theorems 6 and 7, we know that the allocation rule O satisfies MP and consequently κi(O, c−i) = max{min(κi(O1 , c−i)+ δ1(c−i), κi(O2 , c−i)), κi(O1 , c−i) − δ2(c−i))}. 4.2 Round-Based Allocations Some approximation algorithms are round-based, where each round of an algorithm selects some agents and updates the setting and the cost profile if necessary. For example, several approximation algorithms for minimum weight vertex cover [19], maximum weight independent set, minimum weight set cover [4], and minimum weight Steiner [18] tree fall into this category. As an example, we discuss the minimum weighted vertex cover problem (MWVC) [16, 15] to show how to compute the cut value for a round-based output. Given a graph G = (V, E), where the nodes v1, v2, ... , vn are the agents and each agent vi has a weight ci, we want to find a node set V ⊆ V such that for every edge (u, v) ∈ E at least one of u and v is in V . Such V is called a vertex cover of G. The valuation of a node i is −ci if it is selected; otherwise its valuation is 0. For a subset of nodes V ∈ V , we define its weight as c(V ) = P i∈V ci. We want to find a vertex cover with the minimum weight. Hence, the objective function to be implemented is utilitarian. To use the VCG mechanism, we need to find the vertex cover with the minimum weight, which is NP-hard [16]. Since we are interested in mechanisms that can be computed in polynomial time, we must use polynomial-time computable allocation rules. Many algorithms have been proposed in the literature to approximate the optimal solution. In this paper, we use a 2-approximation algorithm given in [16]. For the sake of completeness, we briefly review this algorithm here. The algorithm is round-based. Each round selects some vertices and discards some vertices. For each node i, w(i) is initialized to its weight ci, and when w(i) drops to 0, i is included in the vertex cover. To make the presentation clear, we say an edge (i1, j1) is lexicographically smaller than edge (i2, j2) if (1) min(i1, j1) < min(i2, j2), or (2) min(i1, j1) = min(i2, j2) and max(i1, j1) < max(i2, j2). Algorithm 2 Approximate Minimum Weighted Vertex Cover Input: A node weighted graph G = (V, E, c). Output: A vertex cover V . 1: Set V = ∅. For each i ∈ V , set w(i) = ci. 
2: while V is not a vertex cover do 3: Pick an uncovered edge (i, j) with the least lexicographic order among all uncovered edges. 4: Let m = min(w(i), w(j)). 5: Update w(i) to w(i) − m and w(j) to w(j) − m. 6: If w(i) = 0, add i to V . If w(j) = 0, add j to V . Notice, selecting an edge using the lexicographic order is crutial to guarantee the monotonicity property. Algorithm 2 outputs a vertex cover V whose weight is within 2 times of the optimum. For convenience, we use VC(c) to denote the vertex cover computed by Algorithm 2 when the cost vector of vertices is c. Below we generalize Algorithm 2 to a more general scenario. Typically, a round-based output can be characterized as follows (Algorithm 3). DEFINITION 2. An updating rule Ur is said to be crossingindependent if, for any agent i not selected in round r, (1) Sr+1 and cr+1 −i do not depend on cr j (2) for fixed cr −i, cr i1 ≤ cr i2 implies that cr+1 i1 ≤ cr+1 i2 . We have the following theorem about the existence of a truthful payment using a round based allocation rule A. THEOREM 8. A round-based output A, with the framework defined in Algorithm 3, satisfies MP if the output methods Or satisfy MP and all updating rules Ur are crossing-independent. PROOF. Consider an agent i and fixed c−i. We prove that when an agent i is selected with cost ci, then it is also selected with cost di < ci. Assume that i is selected in round r with cost ci. Then under cost di, if agent i is selected in a round before r, our claim holds. Otherwise, consider in round r. Clearly, the setting Sr and the costs of all other agents are the same as what if agent i had cost ci since i is not selected in the previous rounds due to the crossindependent property. Since i is selected in round r with cost ci, i is also selected in round r with di < ci due to the reason that Or satisfies MP. This finishes the proof. 217 Algorithm 3 A General Round-Based Allocation Rule A 1: Set r = 0, c0 = c, and G0 = G initially. 2: repeat 3: Compute an output or using a deterministic algorithm Or : Sr × cr → {0, 1}n . Here Or , cr and Sr are allocation rule, cost vector and game setting in game Gr , respectively. Remark: Or is often a simple greedy algorithm such as selecting the agents that minimize some utilitarian function. For the example of vertex cover, Or will always select the light-weighted node on the lexicographically least uncovered edge (i, j). 4: Let r = r + 1. Update the game Gr−1 to obtain a new game Gr with setting Sr and cost vector cr according to some rule Ur : Or−1 × (Sr−1 , cr−1 ) → (Sr , cr ). Here we updates the cost and setting of the game. Remark: For the example of vertex cover, the updating rule will decrease the weight of vertices i and j by min(w(i), w(j)). 5: until a valid output is found 6: Return the union of the set of selected players of each round as the final output. For the example of vertex cover, it is the union of nodes selected in all rounds. Algorithm 4 Compute Cut Value for Round-Based Algorithms Input: A round-based output A, a game G1 = G, and a updating function vector U. Output: The cut value x for agent k. 1: Set r = 0 and ck = ζ. Recall that ζ is a value that can guarantee Ak = 0 when an agent reports the cost ζ. 2: repeat 3: Compute an output or using a deterministic algorithm based on setting Sr using allocation rule Or : Sr ×cr → {0, 1}n . 4: Find the cut value for agent k based on the allocation rule Or for costs cr −k. Let r = κk(Or , cr −k) be the cut value. 
5: Set r = r + 1 and obtain a new game Gr from Gr−1 and or according to the updating rule Ur . 6: Let cr be the new cost vector for game Gr . 7: until a valid output is found. 8: Let gi(x) be the cost of ci k when the original cost vector is c|k x. 9: Find the minimum value x such that 8 >>>>>< >>>>>: g1(x) ≥ 1; g2(x) ≥ 2; ... gt−1(x) ≥ t−1; gt(x) ≥ t. Here, t is the total number of rounds. 10: Output the value x as the cut value. If the round-based output satisfies monotonicity property, the cut-value always exists. We then show how to find the cut value for a selected agent k in Algorithm 4. The correctness of Algorithm 4 is straightforward. To compute the cut value, we assume that (1) the cut value r for each round r can be computed in polynomial time; (2) we can solve the equation gr(x) = r to find x in polynomial time when the cost vector c−i and b are given. Now we consider the vertex cover problem. For each round r, we select a vertex with the least weight and that is incident on the lexicographically least uncovered edge. The output satisfies MP. For agent i, we update its cost to cr i − cr j iff edge (i, j) is selected. It is easy to verify this updating rule is crossing-independent, thus we can apply Algorithm 4 to compute the cut value for the set cover game as shown in Algorithm 5. Algorithm 5 Compute Cut Value for MVC. Input: A node weighted graph G = (V, E, c) and a node k selected by Algorithm 2. Output: The cut value κk(V C, c−k). 1: For each i ∈ V , set w(i) = ci. 2: Set w(k) = ∞, pk = 0 and V = ∅. 3: while V is not a vertex cover do 4: Pick an uncovered edge (i, j) with the least lexicographic order among all uncovered edges. 5: Set m = min(w(i), w(j)). 6: Update w(i) = w(i) − m and w(j) = w(j) − m. 7: If w(i) = 0, add i to V ; else add j to V . 8: If i == k or j == k then set pk = pk + m. 9: Output pk as the cut value κk(V C, c−k). 4.3 Complex Combinations In subsection 4.1, we discussed how to find the cut value function when the output of the binary demand game is a simple combination of some outputs, whose cut values can be computed through other means (typically VCG). However, some algorithms cannot be decomposed in the way described in subsection 4.1. Next we present a more complex way to combine allocation rules, and as we may expected, the way to find the cut value is also more complicated. Assume that there are n agents 1 ≤ i ≤ n with cost vector c, and there are m binary demand games Gi with objective functions fi(o, c), setting Si and allocation rule ψi where i = 1, 2, · · · , m. There is another binary demand game with setting S and allocation rule O, whose input is a cost vector d = (d1, d2, · · · , dm). Let f be the function vector (f1, f2, · · · , fm), ψ be the allocation rule vector (ψ1 , ψ2 , · · · , ψm ) and ∫ be the setting vector (S1, S2, · · · , Sm). For notation simplicity, we define Fi(c) = fi(ψi (c), c), for each 1 ≤ i ≤ m, and F(c) = (F1(c), F2(c), · · · , Fm(c)). Let us see a concrete example of these combinations. Consider a link weighted graph G = (V, E, c), and a subset of q nodes Q ⊆ V . The Steiner tree problem is to find a set of links with minimum total cost to connect Q. 
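Before developing this Steiner-tree example, it may help to see the round-based machinery of the previous subsection in executable form. The Python sketch below is our own illustration of Algorithm 2 (the allocation rule) and Algorithm 5 (the cut-value, and hence payment, computation for one node); the graph encoding and function names are ours.

from math import inf

def lex_sorted_edges(edges):
    # Lexicographic order on (min endpoint, max endpoint), as in Algorithm 2.
    return sorted((min(i, j), max(i, j)) for i, j in edges)

def approx_vertex_cover(costs, edges):
    """Algorithm 2: round-based 2-approximate weighted vertex cover."""
    w, cover = list(costs), set()
    es = lex_sorted_edges(edges)
    while True:
        uncovered = [(i, j) for i, j in es if i not in cover and j not in cover]
        if not uncovered:
            return cover
        i, j = uncovered[0]                 # lexicographically least uncovered edge
        m = min(w[i], w[j])
        w[i] -= m
        w[j] -= m
        if w[i] == 0:
            cover.add(i)
        if w[j] == 0:
            cover.add(j)

def cut_value_vc(costs, edges, k):
    """Algorithm 5: cut value kappa_k(VC, c_{-k}), i.e. the payment to a selected node k."""
    w = list(costs)
    w[k] = inf                              # node k can never be forced into the cover
    cover, p_k = set(), 0.0
    es = lex_sorted_edges(edges)
    while True:
        uncovered = [(i, j) for i, j in es if i not in cover and j not in cover]
        if not uncovered:
            return p_k
        i, j = uncovered[0]
        m = min(w[i], w[j])
        w[i] -= m
        w[j] -= m
        cover.add(i if w[i] == 0 else j)    # per Algorithm 5: add i if w(i) = 0, else add j
        if k in (i, j):
            p_k += m

# Example: the path 0-1-2 with costs (3, 1, 4) and edges (0,1), (1,2).
print(approx_vertex_cover([3, 1, 4], [(0, 1), (1, 2)]))   # {1}
print(cut_value_vc([3, 1, 4], [(0, 1), (1, 2)], 1))       # 7.0

In this small instance, node 1 is selected and is paid 3 + 4 = 7; any reported cost below 7 keeps it in the cover and any cost above 7 drops it, which is exactly the cut-value characterization of Theorem 4.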
One way to find an approximation of the Steiner tree is as follows: (1) we build a virtual complete graph H using Q as its vertices, and the cost of each edge (i, j) is the cost of LCP(i, j, c) in graph G; (2) build the minimum spanning tree of H, denoted as MST(H); (3) an edge of G is selected iff it is selected in some LCP(i, j, c) and edge (i, j) of H is selected to MST(H). In this game, we define q(q − 1)/2 games Gi,j, where i, j ∈ Q, with objective functions fi,j(o, c) being the minimum cost of 218 connecting i and j in graph G, setting Si being the original graph G and allocation rule is LCP(i, j, c). The game G corresponds to the MST game on graph H. The cost of the pair-wise q(q − 1)/2 shortest paths defines the input vector d = (d1, d2, · · · , dm) for game MST. More details will be given in Section 5.2. DEFINITION 3. Given an allocation rule O and setting S, an objective function vector f, an allocation rule vector ψ and setting vector ∫, we define a compound binary demand game with setting S and output O ◦ F as (O ◦ F)i(c) = Wm j=1(Oj(F(c)) ∧ ψj i (c)). The allocation rule of the above definition can be interpreted as follows. An agent i is selected if and only if there is a j such that (1) i is selected in ψj (c), and (2) the allocation rule O will select index j under cost profile F(c). For simplicity, we will use O ◦ F to denote the output of this compound binary demand game. Notice that a truthful payment scheme using O ◦ F as output exists if and only if it satisfies the monotonicity property. To study when O ◦F satisfies MP, several necessary definitions are in order. DEFINITION 4. Function Monotonicity Property (FMP) Given an objective function g and an allocation rule O, a function H(c) = g(O(c), c) is said to satisfy the function monotonicity property, if, given fixed c−i, it satisfies: 1. When Oi(c) = 0, H(c) does not increase over ci. 2. When Oi(c) = 1, H(c) does not decrease over ci. DEFINITION 5. Strong Monotonicity Property (SMP) An allocation rule O is said to satisfy the strong monotonicity property if O satisfies MP, and for any agent i with Oi(c) = 1 and agent j = i, Oi(c|j cj) = 1 if cj ≥ cj or Oj(c|j cj) = 0. LEMMA 1. For a given allocation rule O satisfying SMP and cost vectors c, c with ci = ci, if Oi(c) = 1 and Oi(c ) = 0, then there must exist j = i such that cj < cj and Oj(c ) = 1. From the definition of the strong monotonicity property, we have Lemma 1 directly. We now can give a sufficient condition when O ◦ F satisfies the monotonicity property. THEOREM 9. If ∀i ∈ [1, m], Fi satisfies FMP, ψi satisfies MP, and the output O satisfies SMP, then O ◦ F satisfies MP. PROOF. Assuming for cost vector c we have (O ◦ F)i(c) = 1, we should prove for any cost vector c = c|i ci with ci < ci, (O ◦ F)i(c ) = 1. Noticing that (O ◦ F)i(c) = 1, without loss of generality, we assume that Ok(F(c)) = 1 and ψk i (c) = 1 for some index 1 ≤ k ≤ m. Now consider the output O with the cost vector F(c )|k Fk(c). There are two scenarios, which will be studied one by one as follows. One scenario is that index k is not chosen by the output function O. From Lemma 1, there must exist j = k such that Fj(c ) < Fj(c) (1) Oj(F(c )|k Fk(c)) = 1 (2) We then prove that agent i will be selected in the output ψj (c ), i.e., ψj i (c ) = 1. If it is not, since ψj (c) satisfies MP, we have ψj i (c) = ψj i (c ) = 0 from ci < ci. Since Fj satisfies FMP, we know Fj(c ) ≥ Fj(c), which is a contradiction to the inequality (1). Consequently, we have ψj i (c ) = 1. 
From Equation (2), the fact that index k is not selected by allocation rule O and the definition of SMP, we have Oj(F(c )) = 1, Thus, agent i is selected by O ◦ F because of Oj(F(c )) = 1 and ψj i (c ) = 1. The other scenario is that index k is chosen by the output function O. First, agent i is chosen in ψk (c ) since the output ψk (c) satisfies the monotonicity property and ci < ci and ψk i (c) = 1. Secondly, since the function Fk satisfies FMP, we know that Fk(c ) ≤ Fk(c). Remember that output O satisfies the SMP, thus we can obtain Ok(F(c )) = 1 from the fact that Ok(F(c )|k Fk(c)) = 1 and Fk(c ) ≤ Fk(c). Consequently, agent i will also be selected in the final output O ◦ F. This finishes our proof. This theorem implies that there is a cut value for the compound output O ◦ F. We then discuss how to find the cut value for this output. Below we will give an algorithm to calculate κi(O ◦ F) when (1) O satisfies SMP, (2) ψj satisfies MP, and (3) for fixed c−i, Fj(c) is a constant, say hj, when ψj i (c) = 0, and Fj(c) increases when ψj i (c) = 1. Notice that here hj can be easily computed by setting ci = ∞ since ψj satisfies the monotonicity property. When given i and fixed c−i, we define (Fi j )−1 (y) as the smallest x such that Fj(c|i x) = y. For simplicity, we denote (Fi j )−1 as F−1 j if no confusion is caused when i is a fixed agent. In this paper, we assume that given any y, we can find such x in polynomial time. Algorithm 6 Find Cut Value for Compound Method O ◦ F Input: allocation rule O, objective function vector F and inverse function vector F−1 = {F−1 1 , · · · , F−1 m }, allocation rule vector ψ and fixed c−i. Output: Cut value for agent i based on O ◦ F. 1: for 1 ≤ j ≤ m do 2: Compute the outputs ψj (ci). 3: Compute hj = Fj(c|i ∞). 4: Use h = (h1, h2, · · · , hm) as the input for the output function O. Denote τj = κj(O, h−j) as the cut value function of output O based on input h. 5: for 1 ≤ j ≤ m do 6: Set κi,j = F−1 j (min{τj, hj}). 7: The cut value for i is κi(O ◦ F, c−i) = maxm j=1 κi,j. THEOREM 10. Algorithm 6 computes the correct cut value for agent i based on the allocation rule O ◦ F. PROOF. In order to prove the correctness of the cut value function calculated by Algorithm 6, we prove the following two cases. For our convenience, we will use κi to represent κi(O ◦ F, c−i) if no confusion caused. First, if di < κi then (O ◦ F)i(c|i di) = 1. Without loss of generality, we assume that κi = κi,j for some j. Since function Fj satisfies FMP and ψj i (c|i di) = 1, we have Fj(c|i di) < Fj(κi). Notice di < κi,j, from the definition of κi,j = F−1 j (min{τj, hj}) we have (1) ψj i (c|i di) = 1, (2) Fj(c|i di) < τj due to the fact that Fj(x) is a non-decreasing function when j is selected. Thus, from the monotonicity property of O and τj is the cut value for output O, we have Oj(h|j Fj(c|i di)) = 1. (3) If Oj(F(c|i di)) = 1 then (O◦F)i(c|i di) = 1. Otherwise, since O satisfies SMP, Lemma 1 and equation 3 imply that there exists at least one index k such that Ok(F(c|i di)) = 1 and Fk(c|i di) < hk. Note Fk(c|i di) < hk implies that i is selected in ψk (c|i di) since hk = Fk(ci|i ∞). In other words, agent i is selected in O◦F. 219 Second, if di ≥ κi(O ◦ F, c−i) then (O ◦ F)i(c|i di) = 0. Assume for the sake of contradiction that (O ◦ F)i(c|i di) = 1. Then there exists an index 1 ≤ j ≤ m such that Oj(F(c|i di)) = 1 and ψj i (c|i di) = 1. Remember that hk ≥ Fk(c|i di) for any k. 
Thus, from the fact that O satisfies SMP, when changing the cost vector from F(c|i di) to h|j Fj(c|i di), we still have Oj(h|j Fj(c|i di)) = 1. This implies that Fj(c|i di) < τj. Combining the above inequality and the fact that Fj(c|i c|i di) < hj, we have Fj(c|i di) < min{hj, τj}. This implies di < F−1 j (min{hj, τj}) = κi,j < κi(O ◦ F, c−i). which is a contradiction. This finishes our proof. In most applications, the allocation rule ψj implements the objective function fj and fj is utilitarian. Thus, we can compute the inverse of F−1 j efficiently. Another issue is that it seems the conditions when we can apply Algorithm 6 are restrictive. However, lots of games in practice satisfy these properties and here we show how to deduct the MAX combination in [13]. Assume A1 and A2 are two allocation rules for single minded combinatorial auction, then the combination MAX(A1, A2) returns the allocation with the larger welfare. If algorithm A1 and A2 satisfy MP and FMP, the operation max(x, y) which returns the larger element of x and y satisfies SMP. From Theorem 9 we obtain that combination MAX(A1, A2) also satisfies MP. Further, the cut value of the MAX combination can be found by Algorithm 6. As we will show in Section 5, the complex combination can apply to some more complicated problems. 5. CONCRETE EXAMPLES 5.1 Set Cover In the set cover problem, there is a set U of m elements needed to be covered, and each agent 1 ≤ i ≤ n can cover a subset of elements Si with a cost ci. Let S = {S1, S2, · · · , Sn} and c = (c1, c2, · · · , cn). We want to find a subset of agents D such that U ⊆ S i∈D Si. The selected subsets is called the set cover for U. The social efficiency of the output D is defined as P i∈D ci, which is the objective function to be minimized. Clearly, this is a utilitarian and thus VCG mechanism can be applied if we can find the subset of S that covers U with the minimum cost. It is well-known that finding the optimal solution is NP-hard. In [4], an algorithm of approximation ratio of Hm has been proposed and it has been proved that this is the best ratio possible for the set cover problem. For the completeness of presentation, we review their method here. Algorithm 7 Greedy Set Cover (GSC) Input: Agent i``s subset Si covered and cost ci. (1 ≤ i ≤ n). Output: A set of agents that can cover all elements. 1: Initialize r = 1, T0 = ∅, and R = ∅. 2: while R = U do 3: Find the set Sj with the minimum density cj |Sj −Tr| . 4: Set Tr+1 = Tr S Sj and R = R S j. 5: r = r + 1 6: Output R. Let GSC(S) be the sets selected by the Algorithm 7. Notice that the output set is a function of S and c. Some works assume that the type of an agent could be ci, i.e., Si is assumed to be a public knowledge. Here, we consider a more general case in which the type of an agent is (Si, ci). In other words, we assume that every agent i can not only lie about its cost ci but also can lie about the set Si. This problem now looks similar to the combinatorial auction with single minded bidder studied in [12], but with the following differences: in the set cover problem we want to cover all the elements and the sets chosen can have some overlap while in combinatorial auction the chosen sets are disjoint. We can show that the mechanism M = (GSC, PV CG ), using Algorithm 7 to find a set cover and apply VCG mechanism to compute the payment to the selected agents, is not truthful. Obviously, the set cover problem is a binary demand game. For the moment, we assume that agent i won``t be able to lie about Si. 
We will drop this assumption later. We show how to design a truthful mechanism by applying our general framework. 1. Check the monotonicity property: The output of Algorithm 7 is a round-based output. Thus, for an agent i, we first focus on the output of one round r. In round r, if i is selected by Algorithm 7, then it has the minimum ratio ci |Si−Tr| among all remaining agents. Now consider the case when i lies its cost to ci < ci, obviously ci |Si−Tr| is still minimum among all remaining agents. Consequently, agent i is still selected in round r, which means the output of round r satisfies MP. Now we look into the updating rules. For every round, we only update the Tr+1 = Tr S Sj and R = R S j, which is obviously cross-independent. Thus, by applying Theorem 8, we know the output by Algorithm 7 satisfies MP. 2. Find the cut value: To calculate the cut value for agent i with fixed cost vector c−i, we follow the steps in Algorithm 4. First, we set ci = ∞ and apply Algorithm 7. Let ir be the agent selected in round r and T−i r+1 be the corresponding set. Then the cut value of round r is r = cir |Sir − T−i r | · |Si − T−i r |. Remember the updating rule only updates the game setting but not the cost of the agent, thus we have gr(x) = x ≥ r for 1 ≤ r ≤ t. Therefore, the final cut value for agent i is κi(GSC, c−i) = max r { cir |Sir − T−i r | · |Si − T−i r |} The payment to an agent i is κi if i is selected; otherwise its payment is 0. We now consider the scenario when agent i can lie about Si. Assume that agent i cannot lie upward, i.e., it can only report a set Si ⊆ Si. We argue that agent i will not lie about its elements Si. Notice that the cut value computed for round r is r = cir |Sir −T −i r | · |Si − T−i r |. Obviously |Si − T−i r | ≤ |Si − T−i r | for any Si ⊆ Si. Thus, lying its set as Si will not increase the cut value for each round. Thus lying about Si will not improve agent i``s utility. 5.2 Link Weighted Steiner Trees Consider any link weighted network G = (V, E, c), where E = {e1, e2, · · · , em} are the set of links and ci is the weight of the link ei. The link weighted Steiner tree problem is to find a tree rooted at source node s spanning a given set of nodes Q = {q1, q2, · · · , qk} ⊂ V . For simplicity, we assume that qi = vi, for 1 ≤ i ≤ k. Here the links are agents. The total cost of links in a graph H ⊆ G is called the weight of H, denoted as ω(H). It is NP-hard to find the minimum cost multicast tree when given an arbitrary link weighted 220 graph G [17, 20]. The currently best polynomial time method has approximation ratio 1 + ln 3 2 [17]. Here, we review and discuss the first approximation method by Takahashi and Matsuyama [20]. Algorithm 8 Find LinkWeighted SteinerTree (LST) Input: Network G = (V, E, c) where c is the cost vector for link set E. Source node s and receiver set Q. Output: A tree LST rooted at s and spanned all receivers. 1: Set r = 1, G1 = G, Q1 = Q and s1 = s. 2: repeat 3: In graph Gr, find the receiver, say qi, that is closest to the source s, i.e., LCP(s, qi, c) has the least cost among the shortest paths from s to all receivers in Qr . 4: Select all links on LCP(s, qi, c) as relay links and set their cost to 0. The new graph is denoted as Gr+1. 5: Set tr as qi and Pr = LCP(s, qi, c). 6: Set Qr+1 = Qr \qi and r = r + 1. 7: until all receivers are spanned. Hereafter, let LST(G) be the final tree constructed using the above method. 
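The round-based structure of Algorithm 8 is compact enough to sketch directly. The Python code below is our own illustration (the graph encoding, edge identifiers and function names are assumptions of ours, and the graph is assumed connected): each round attaches the receiver currently closest to the source and then zeroes the cost of its relay links, which is exactly the updating rule seen by later rounds.

import heapq

def dijkstra(adj, cost, src):
    # adj: node -> {neighbor: edge_id}; cost: edge_id -> weight (both directions share an id).
    dist, prev_edge, heap = {src: 0.0}, {}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, e in adj[u].items():
            nd = d + cost[e]
            if nd < dist.get(v, float("inf")):
                dist[v], prev_edge[v] = nd, (u, e)
                heapq.heappush(heap, (nd, v))
    return dist, prev_edge

def link_weighted_steiner_tree(adj, cost, source, receivers):
    """Round-based LST construction: repeatedly attach the closest remaining receiver."""
    cost = dict(cost)                      # work on a copy; selected links get cost 0
    remaining, selected = set(receivers), set()
    while remaining:
        dist, prev = dijkstra(adj, cost, source)
        q = min(remaining, key=lambda r: dist[r])   # receiver closest to s in this round
        node = q
        while node != source:              # walk the least cost path back to the source
            u, e = prev[node]
            selected.add(e)
            cost[e] = 0.0                  # relay links become free in later rounds
            node = u
        remaining.remove(q)
    return selected

Only the allocation is sketched here; the per-round cut values used for the payment are discussed next.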
It is shown in [24] that mechanism M = (LST, pV CG ) is not truthful, where pV CG is the payment calculated based on VCG mechanism. We then show how to design a truthful payment scheme using our general framework. Observe that the output Pr, for any round r, satisfies MP, and the update rule for every round satisfies crossing-independence. Thus, from Theorem 8, the roundbased output LST satisfies MP. In round r, the cut value for a link ei can be obtained by using the VCG mechanism. Now we set ci = ∞ and execute Algorithm 8. Let w−i r (ci) be the cost of the path Pr(ci) selected in the rth round and Πi r(ci) be the shortest path selected in round r if the cost of ci is temporarily set to −∞. Then the cut value for round r is r = wi r(c−i) − |Πi r(c−i)| where |Πi r(c−i)| is the cost of the path Πi r(c−i) excluding node vi. Using Algorithm 4, we obtain the final cut value for agent i: κi(LST, c−i) = maxr{ r}. Thus, the payment to a link ei is κi(LST, c−i) if its reported cost is di < κi(LST, d−i); otherwise, its payment is 0. 5.3 Virtual Minimal Spanning Trees To connect the given set of receivers to the source node, besides the Steiner tree constructed by the algorithms described before, a virtual minimum spanning tree is also often used. Assume that Q is the set of receivers, including the sender. Assume that the nodes in a node-weighted graph are all agents. The virtual minimum spanning tree is constructed as follows. Algorithm 9 Construct VMST 1: for all pairs of receivers qi, qj ∈ Q do 2: Calculate the least cost path LCP(qi, qj, d). 3: Construct a virtual complete link weighted graph K(d) using Q as its node set, where the link qiqj corresponds to the least cost path LCP(qi, qj, d), and its weight is w(qiqj) = |LCP(qi, qj, d)|. 4: Build the minimum spanning tree on K(d), denoted as V MST(d). 5: for every virtual link qiqj in V MST(d) do 6: Find the corresponding least cost path LCP(qi, qj, d) in the original network. 7: Mark the agents on LCP(qi, qj, d) selected. The mechanism M = (V MST, pV CG ) is not truthful [24], where the payment pV CG to a node is based on the VCG mechanism. We then show how to design a truthful mechanism based on the framework we described. 1. Check the monotonicity property: Remember that in the complete graph K(d), the weight of a link qiqj is |LCP(qi, qj, d)|. In other words, we implicitly defined |Q|(|Q| − 1)/2 functions fi,j, for all i < j and qi ∈ Q and qj ∈ Q, with fi,j(d) = |LCP(qi, qj, d)|. We can show that the function fi,j(d) = |LCP(qi, qj, d)| satisfies FMP, LCP satisfies MP, and the output MST satisfies SMP. From Theorem 9, the allocation rule VMST satisfies the monotonicity property. 2. Find the cut value: Notice VMST is the combination of MST and function fi,j, so cut value for VMST can be computed based on Algorithm 6 as follows. (a) Given a link weighted complete graph K(d) on Q, we should find the cut value function for edge ek = (qi, qj) based on MST. Given a spanning tree T and a pair of terminals p and q, clearly there is a unique path connecting them on T. We denote this path as ΠT (p, q), and the edge with the maximum length on this path as LE(p, q, T). Thus, the cut value can be represented as κk(MST, d) = LE(qi, qj, MST(d|k ∞)) (b) We find the value-cost function for LCP. Assume vk ∈ LCP(qi, qj, d), then the value-cost function is xk = yk − |LCPvk (qi, qj, d|k 0)|. Here, LCPvk (qi, qj, d) is the least cost path between qi and qj with node vk on this path. (c) Remove vk and calculate the value K(d|k ∞). 
Set h(i,j) = |LCP(qi, qj, d|∞ ))| for every pair of node i = j and let h = {h(i,j)} be the vector. Then it is easy to show that τ(i,j) = |LE(qi, qj, MST(h|(i,j) ∞))| is the cut value for output VMST. It easy to verify that min{h(i,j), τ(i,j)} = |LE(qi, qj, MST(h)|. Thus, we know κ (i,j) k (V MST, d) is |LE(qi, qj, MST(h)|− |LCPvk (qi, qj, d|k 0)|. The cut value for agent k is κk(V MST, d−k) = max0≤i,j≤r κij k (V MST, d−k). 3. We pay agent k κk(V MST, d−k) if and only if k is selected in V MST(d); else we pay it 0. 5.4 Combinatorial Auctions Lehmann et al. [12] studied how to design an efficient truthful mechanism for single-minded combinatorial auction. In a singleminded combinatorial auction, there is a set of items S to be sold and there is a set of agents 1 ≤ i ≤ n who wants to buy some of the items: agent i wants to buy a subset Si ⊆ S with maximum price mi. A single-minded bidder i declares a bid bi = Si, ai with Si ⊆ S and ai ∈ R+ . Two bids Si, ai and Sj, aj conflict if Si ∩ Sj = ∅. Given the bids b1, b2, · · · , bn, they gave a greedy round-based algorithm as follows. First the bids are sorted by some criterion ( ai |Si|1/2 is used in[12]) in an increasing order and let L be the list of sorted bids. The first bid is granted. Then the algorithm exams each bid of L in order and grants the bid if it does not conflict with any of the bids previously granted. If it does, it is denied. They proved that this greedy allocation scheme using criterion ai |Si|1/2 approximates the optimal allocation within a factor of √ m, where m is the number of goods in S. In the auction settings, we have ci = −ai. It is easy to verify the output of the greedy algorithm is a round-based output. Remember after bidder j is selected for round r, every bidder has conflict 221 with j will not be selected in the rounds after. This equals to update the cost of every bidder having conflict with j to 0, which satisfies crossing-independence. In addition, in any round, if bidder i is selected with ai then it will still be selected when it declares ai > ai. Thus, for every round, it satisfies MP and the cut value is |Si|1/2 · ajr |Sjr |1/2 where jr is the bidder selected in round r if we did not consider the agent i at all. Notice ajr |Sjr |1/2 does not increase when round r increases, so the final cut value is |Si|1/2 · aj |Sj |1/2 where bj is the first bid that has been denied but would have been selected were it not only for the presence of bidder i. Thus, the payment by agent i is |Si|1/2 · aj |Sj |1/2 if ai ≥ |Si|1/2 · aj |Sj |1/2 , and 0 otherwise. This payment scheme is exactly the same as the payment scheme in [12]. 6. CONCLUSIONS In this paper, we have studied how to design a truthful mechanism M = (O, P) for a given allocation rule O for a binary demand game. We first showed that the allocation rule O satisfying the MP is a necessary and sufficient condition for a truthful mechanism M to exist. We then formulate a general framework for designing payment P such that the mechanism M = (O, P) is truthful and computable in polynomial time. We further presented several general composition-based techniques to compute P efficiently for various allocation rules O. Several concrete examples were discussed to demonstrate our general framework for designing P and for composition-based techniques of computing P in polynomial time. In this paper, we have concentrated on how to compute P in polynomial time. Our algorithms do not necessarily have the optimal running time for computing P given O. 
It would be of interest to design algorithms to compute P in optimal time. We have made some progress in this research direction in [22] by providing an algorithm to compute the payments for unicast in a node weighted graph in optimal O(n log n + m) time. Another research direction is to design an approximation allocation rule O satisfying MP with a good approximation ratio for a given binary demand game. Many works [12, 13] in the mechanism design literature are in this direction. We point out here that the goal of this paper is not to design a better allocation rule for a problem, but to design an algorithm to compute the payments efficiently when O is given. It would be of significance to design allocation rules with good approximation ratios such that a given binary demand game has a computationally efficient payment scheme. In this paper, we have studied mechanism design for binary demand games. However, some problems cannot be directly formulated as binary demand games. The job scheduling problem in [2] is such an example. For this problem, a truthful payment scheme P exists for an allocation rule O if and only if the workload assigned by O is monotonic in a certain manner. It would be of interest to generalize our framework for designing a truthful payment scheme for a binary demand game to non-binary demand games. Towards this research direction, Theorem 4 can be extended to a general allocation rule O, whose range is R+. The remaining difficulty is then how to compute the payment P under mild assumptions about the valuations if a truthful mechanism M = (O, P) does exist. Acknowledgements We would like to thank Rakesh Vohra, Tuomas Sandholm, and the anonymous reviewers for helpful comments and discussions. 7. REFERENCES [1] ARCHER, A., PAPADIMITRIOU, C., TALWAR, K., AND TARDOS, E. An approximate truthful mechanism for combinatorial auctions with single parameter agents. In ACM-SIAM SODA (2003), pp. 205-214. [2] ARCHER, A., AND TARDOS, E. Truthful mechanisms for one-parameter agents. In Proceedings of the 42nd IEEE FOCS (2001), IEEE Computer Society, p. 482. [3] AULETTA, V., PRISCO, R. D., PENNA, P., AND PERSIANO, P. Deterministic truthful approximation schemes for scheduling related machines. [4] CHVATAL, V. A greedy heuristic for the set covering problem. Mathematics of Operations Research 4, 3 (1979), 233-235. [5] CLARKE, E. H. Multipart pricing of public goods. Public Choice (1971), 17-33. [6] MULLER, R., AND VOHRA, R. V. On dominant strategy mechanisms. Working paper, 2003. [7] DEVANUR, N. R., MIHAIL, M., AND VAZIRANI, V. V. Strategyproof cost-sharing mechanisms for set cover and facility location games. In ACM Electronic Commerce (EC03) (2003). [8] FEIGENBAUM, J., KRISHNAMURTHY, A., SAMI, R., AND SHENKER, S. Approximation and collusion in multicast cost sharing (abstract). In ACM Economic Conference (2001). [9] FEIGENBAUM, J., PAPADIMITRIOU, C., SAMI, R., AND SHENKER, S. A BGP-based mechanism for lowest-cost routing. In Proceedings of the 2002 ACM Symposium on Principles of Distributed Computing (2002), pp. 173-182. [10] GREEN, J., AND LAFFONT, J. J. Characterization of satisfactory mechanisms for the revelation of preferences for public goods. Econometrica (1977), 427-438. [11] GROVES, T. Incentives in teams. Econometrica (1973), 617-631. [12] LEHMANN, D., O'CALLAGHAN, L. I., AND SHOHAM, Y. Truth revelation in approximately efficient combinatorial auctions. Journal of the ACM 49, 5 (2002), 577-602. [13] MU'ALEM, A., AND NISAN, N.
Truthful approximation mechanisms for restricted combinatorial auctions: extended abstract. In 18th National Conference on Artificial Intelligence (2002), American Association for Artificial Intelligence, pp. 379-384. [14] NISAN, N., AND RONEN, A. Algorithmic mechanism design. In Proc. 31st Annual ACM STOC (1999), pp. 129-140. [15] HALPERIN, E. Improved approximation algorithms for the vertex cover problem in graphs and hypergraphs. In Proceedings of the 11th Annual ACM-SIAM SODA (2000), pp. 329-337. [16] BAR-YEHUDA, R., AND EVEN, S. A local ratio theorem for approximating the weighted vertex cover problem. Annals of Discrete Mathematics 25 (1985), 27-46. [17] ROBINS, G., AND ZELIKOVSKY, A. Improved Steiner tree approximation in graphs. In Proceedings of the 11th Annual ACM-SIAM SODA (2000), pp. 770-779. [18] ZELIKOVSKY, A. An 11/6-approximation algorithm for the network Steiner problem. Algorithmica 9, 5 (1993), 463-470. [19] HOCHBAUM, D. S. Efficient bounds for the stable set, vertex cover, and set packing problems. Discrete Applied Mathematics 6 (1983), 243-254. [20] TAKAHASHI, H., AND MATSUYAMA, A. An approximate solution for the Steiner problem in graphs. Math. Japonica 24 (1980), 573-577. [21] VICKREY, W. Counterspeculation, auctions and competitive sealed tenders. Journal of Finance (1961), 8-37. [22] WANG, W., AND LI, X.-Y. Truthful low-cost unicast in selfish wireless networks. IEEE Transactions on Mobile Computing 4 (2005), to appear. [23] WANG, W., LI, X.-Y., AND SUN, Z. Design multicast protocols for non-cooperative networks. In IEEE INFOCOM (2005), to appear. [24] WANG, W., LI, X.-Y., AND WANG, Y. Truthful multicast in selfish wireless networks. In ACM MobiCom (2005).
[12] studied the single-minded combinatorial auction and gave a √m-approximation truthful mechanism, where m is the number of goods. They also pointed out that a certain monotonicity in the allocation rule can lead to a truthful mechanism. The work of Mu'alem and Nisan [13] is the closest in spirit to our work. They characterized all truthful mechanisms based on a certain monotonicity property in a single-minded auction setting. They also showed how to use MAX and IF-THEN-ELSE to combine outputs from subproblems. As shown in this paper, the MAX and IF-THEN-ELSE combinations are special cases of the composition-based techniques that we present for computing the payments in polynomial time under mild assumptions.

More generally, we study how to design truthful mechanisms for binary demand games, where the allocation of an agent is either "selected" or "not selected". We also assume that the valuations of agents are uncorrelated, i.e., the valuation of an agent only depends on its own allocation and type. Recall that a mechanism M = (O, P) consists of two parts, an allocation rule O and a payment scheme P. Previously, it was often assumed that there is an objective function g and an allocation rule O that either optimizes g exactly or approximately. In contrast to the VCG mechanisms, we do not require that the allocation optimize the objective function. In fact, we do not even require the existence of an objective function. Given any allocation rule O for a binary demand game, we show that a truthful mechanism M = (O, P) exists for the game if and only if O satisfies a certain monotonicity property. The monotonicity property only guarantees the existence of a payment scheme P such that (O, P) is truthful. We complement this existence theorem with a general framework to design such a payment scheme P. Furthermore, we present general techniques to compute the payment when the output is a composition of the outputs of subgames through the operators "or" and "and"; through round-based combinations; or through intermediate results, which may themselves be computed from other subproblems.

The remainder of the paper is organized as follows. In Section 2, we discuss preliminaries and previous work, define binary demand games and discuss the basic assumptions about binary demand games. In Section 3, we show that O satisfying a certain monotonicity property is a necessary and sufficient condition for the existence of a truthful mechanism M = (O, P). A framework is then proposed in Section 4 to compute the payment P in polynomial time for several types of allocation rules O. In Section 5, we provide several examples to demonstrate the effectiveness of our general framework. We conclude our paper in Section 6 with some possible future directions.

2. PRELIMINARIES

2.1 Mechanism Design

As is usually done in the literature on designing algorithms or protocols with inputs from individual agents, we adopt the assumption in neoclassical economics that all agents are rational, i.e., they respond to well-defined incentives and will deviate from the protocol only if the deviation improves their gain. A standard model for mechanism design is as follows. There are n agents 1, ..., n and each agent i has some private information ti, called its type, known only to itself. For example, the type ti can be the cost that agent i incurs for forwarding a packet in a network, or a payment that the agent is willing to pay for a good in an auction.
The agents' types define the type vector t = (t1, t2, ..., tn). Each agent i has a set of strategies Ai from which it can choose. For each input vector a = (a1, ..., an), where agent i plays strategy ai ∈ Ai, the mechanism M = (O, P) computes an output o = O(a) and a payment vector p(a) = (p1(a), ..., pn(a)). Here the payment pi(·) is the money given to agent i and depends on the strategies used by the agents. A game is defined as G = (S, M), where S is the setting for the game G. Here, S consists of the parameters of the game that are set before the game starts and do not depend on the players' strategies. For example, in a unicast routing game [14], the setting consists of the topology of the network, the source node and the destination node. Throughout this paper, unless explicitly mentioned otherwise, the setting S of the game is fixed and we are only interested in how to design P for a given allocation rule O.

A valuation function v(ti, o) assigns a monetary amount to agent i for each possible output o. Everything about a game G = (S, M), including the setting S, the allocation rule O and the payment scheme P, is public knowledge except agent i's actual type ti, which is private information to agent i. Let ui(ti, o) denote the utility of agent i at the outcome o of the game, given its preferences ti. Here, following a common assumption in the literature, we assume the utility for agent i is quasi-linear, i.e., ui(ti, o) = v(ti, o) + pi(a).

Let a|i a′i = (a1, ..., ai−1, a′i, ai+1, ..., an), i.e., each agent j ≠ i plays action aj while agent i plays a′i. Let a−i = (a1, ..., ai−1, ai+1, ..., an) denote the actions of all agents except i. Sometimes, we write (a−i, bi) as a|i bi. An action ai is called dominant for i if it (weakly) maximizes the utility of i for all possible strategies b−i of the other agents, i.e., ui(ti, O(b−i, ai)) ≥ ui(ti, O(b−i, a′i)) for all a′i ≠ ai and all b−i. A direct-revelation mechanism is a mechanism in which the only actions available to each agent are to report its private type either truthfully or falsely to the mechanism. An incentive compatible (IC) mechanism is a direct-revelation mechanism in which an agent maximizes its utility by reporting its type ti truthfully. Thus, in a direct-revelation mechanism satisfying IC, the payment scheme should satisfy the property that, for each agent i, v(ti, O(t)) + pi(t) ≥ v(ti, O(t|i t′i)) + pi(t|i t′i). Another common requirement in the literature for mechanism design is so-called individual rationality (IR), or voluntary participation: the agent's utility from participating in the output of the mechanism is not less than its utility from not participating. A direct-revelation mechanism is strategyproof if it satisfies both the IC and IR properties.

Arguably the most important positive result in mechanism design is the generalized Vickrey-Clarke-Groves (VCG) mechanism by Vickrey [21], Clarke [5], and Groves [11]. The VCG mechanism applies to (affine) maximization problems where the objective function is utilitarian, g(o, t) = Σi v(ti, o) (i.e., the sum of all agents' valuations), and the set of possible outputs is assumed to be finite. A direct revelation mechanism M = (O(t), P(t)) belongs to the VCG family if (1) the allocation O(t) maximizes Σi v(ti, o), and (2) the payment to agent i is pi(t) = Σj≠i vj(tj, O(t)) + hi(t−i), where hi(·) is an arbitrary function of t−i.
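As a concrete illustration of the VCG family just defined, the following minimal Python sketch (ours, not code from the paper) implements a VCG mechanism with the Clarke pivot term hi(t−i) = −max_o Σj≠i v(tj, o), which is one standard choice of the arbitrary function hi, over a finite outcome set. The function name vcg and the dictionary encoding of valuations are illustrative assumptions.

```python
# A minimal sketch (not from the paper) of a VCG mechanism with the Clarke
# pivot rule, for a finite outcome set and reported valuations.
# valuations[i][o] is agent i's reported value v(t_i, o) for outcome o.

def vcg(outcomes, valuations):
    """Return (chosen outcome, payments), where payments[i] is the money
    given to agent i, following the sign convention used in the text."""
    n = len(valuations)

    def welfare(o, exclude=None):
        # Sum of reported valuations for outcome o, optionally excluding one agent.
        return sum(valuations[j][o] for j in range(n) if j != exclude)

    # (1) choose the outcome maximizing the utilitarian objective sum_i v(t_i, o)
    best = max(outcomes, key=welfare)

    # (2) pay agent i the others' welfare at `best` plus the Clarke pivot term
    #     h_i(t_-i) = -max_o sum_{j != i} v(t_j, o).
    payments = []
    for i in range(n):
        others_at_best = welfare(best, exclude=i)
        others_optimum = max(welfare(o, exclude=i) for o in outcomes)
        payments.append(others_at_best - others_optimum)
    return best, payments


# Tiny example: two outcomes, three agents.
vals = [{"A": 3, "B": 0}, {"A": 0, "B": 2}, {"A": 0, "B": 2}]
print(vcg(["A", "B"], vals))
# ('B', [0, -1, -1]): outcome "B" is chosen; agents 1 and 2 each pay 1
# (a payment of -1 under the money-given convention), agent 0 pays nothing.
```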
Under mild assumptions, VCG mechanisms are the only truthful implementations for utilitarian problems [10]. The allocation rule of a VCG mechanism is required to maximize the objective function over the range of the allocation function. This makes the mechanism computationally intractable in many cases. Furthermore, replacing an optimal algorithm for computing the output with an approximation algorithm usually leads to untruthful mechanisms if a VCG payment scheme is used. In this paper, we study how to design a truthful mechanism that does not optimize a utilitarian objective function.

2.2 Binary Demand Games

A binary demand game is a game G = (S, M), where M = (O, P) and the range of O is {0, 1}^n. In other words, the output is an n-tuple vector O(t) = (O1(t), O2(t), ..., On(t)), where Oi(t) = 1 (respectively, 0) means that agent i is (respectively, is not) selected. Examples of binary demand games include unicast [14, 22, 9] and multicast [23, 24, 8] (generally, subgraph construction by selecting some links/nodes to satisfy some property), facility location [7], and certain auctions [12, 2, 13]. Hereafter, we make the following further assumptions.
1. The valuations of the agents are not correlated, i.e., v(ti, o) is a function of ti and oi only, and is denoted as v(ti, oi).
2. The valuation v(ti, 0) of an agent that is not selected is publicly known and is normalized to 0. This assumption is needed to guarantee the IR property.

Thus, throughout this paper, we only consider direct-revelation mechanisms in which every agent only needs to reveal its valuation vi = v(ti, 1). Notice that in applications where agents provide a service and receive payment, e.g., unicast and job scheduling, the valuation vi of an agent i is usually negative. For convenience of presentation, we define the cost of agent i as ci = −v(ti, 1), i.e., it costs agent i the amount ci to provide the service. Throughout this paper, we will use ci instead of vi in our analysis. All our results also apply to the case where the agents receive the service rather than provide it, as in an auction, by allowing ci to be negative.

In a binary demand game, if we want to optimize an objective function g(o, t), then we call it a binary optimization demand game. The main differences between binary demand games and the problems that can be solved by VCG mechanisms are:
1. The objective function is utilitarian (or an affine maximization) for a problem solvable by VCG, while there is no restriction on the objective function for a binary demand game.
2. The allocation rule O studied here does not necessarily optimize an objective function, while a VCG mechanism only uses an output that optimizes the objective function. We do not even require the existence of an objective function.
3. We assume that the agents' valuations are not correlated in a binary demand game, while the agents' valuations may be correlated in a VCG mechanism.
In this paper, we assume for technical convenience that the objective function g(o, c), if it exists, is continuous with respect to the cost ci, but most of our results are directly applicable to the discrete case without any modification.

2.3 Previous Work

Lehmann et al. [12] studied how to design an efficient truthful mechanism for the single-minded combinatorial auction. In a single-minded combinatorial auction, each agent i (1 ≤ i ≤ n) only wants to buy a subset Si ⊆ S with private price ci. A single-minded bidder i declares a bid bi = (S′i, ai) with S′i ⊆ S and ai ∈ R+.
In [12], it is assumed that the set of goods allocated to an agent i is either S′i or ∅, which is known as exactness. Lehmann et al. gave a greedy round-based allocation algorithm, based on the rank ai/|S′i|^(1/2), that has an approximation ratio of √m, where m is the number of goods in S. Based on this approximation algorithm, they gave a truthful payment scheme. For an allocation rule satisfying (1) exactness: the set of goods allocated to an agent i is either S′i or ∅; and (2) monotonicity: proposing more money for fewer goods cannot cause a bidder to lose its bid, they proposed a truthful payment scheme as follows: (1) charge a winning bidder a certain amount that does not depend on its own bid; (2) charge a losing bidder 0. Notice that the exactness assumption reveals that the single-minded auction is indeed a binary demand game. Their payment scheme inspired our payment scheme for binary demand games.

In [1], Archer et al. studied combinatorial auctions where multiple copies of many different items are on sale, and each bidder i desires only one subset Si. They devised a randomized rounding method that is incentive compatible and gave a truthful mechanism for combinatorial auctions with single-parameter agents that approximately maximizes the social value of the auction. As they pointed out, their method is strongly truthful in the sense that it is truthful with high probability 1 − ε, where ε is an error probability. In contrast, in this paper we study how to design a deterministic mechanism that is truthful based on some given allocation rule.

In [2], Archer and Tardos showed how to design truthful mechanisms for several combinatorial problems where each agent's private information is naturally expressed by a single positive real number, which will always be the cost incurred per unit load. The mechanism's output can be an arbitrary real number, but the valuation is a quasi-linear function t · w, where t is the private per-unit cost and w is the work load. Archer and Tardos showed that all truthful mechanisms must have decreasing "work curves" w, and that the truthful payment should be Pi(bi) = Pi(0) + bi · wi(bi) − ∫_0^{bi} wi(u) du. Using this model, Archer and Tardos designed truthful mechanisms for several scheduling-related problems, including minimizing the span, maximizing flow, and minimizing the weighted sum of completion times. Notice that when the work load takes values in {0, 1}, the problem is indeed a binary demand game. If we apply their characterization of truthful mechanisms, their decreasing "work curve" condition corresponds exactly to the monotonicity property of the output. But notice that their proof is heavily based on the assumption that the output is a continuous function of the cost; thus their conclusion cannot directly apply to binary demand games.

The paper of Ahuva Mu'alem and Noam Nisan [13] is closest in spirit to our work. They clearly stated that "we only discussed a limited class of bidders, single minded bidders, that was introduced by" [12]. They proved that all truthful mechanisms must have a monotone output, and their payment scheme is based on the cut value. With a simple generalization, we obtain our conclusion for general binary demand games. They proposed several combination methods, including the MAX and IF-THEN-ELSE constructions, to perform partial search. All of their methods require the welfare function associated with the output to satisfy a bitonic property.
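When the work load is binary, wi ∈ {0, 1}, the Archer-Tardos payment formula above reduces to the threshold-style payments used throughout this paper: relative to a losing report, a selected agent is paid exactly the bid at which it would stop being selected. The following small numerical check is ours (not code from [2] or from this paper); the step-function work curve and the threshold value T are hypothetical choices made only for illustration.

```python
# A small numerical check (ours) of the Archer-Tardos payment formula
#     P_i(b_i) = P_i(0) + b_i * w_i(b_i) - integral_0^{b_i} w_i(u) du
# for a binary work curve w_i in {0, 1}.  T is an assumed threshold.

T = 5.0                                   # hypothetical threshold of the work curve

def w(u):
    """Decreasing binary work curve: the agent is selected iff its bid is below T."""
    return 1.0 if u < T else 0.0

def payment(b, p0=0.0, steps=100_000):
    """Archer-Tardos payment for bid b, integrating the work curve numerically."""
    du = b / steps
    integral = sum(w((k + 0.5) * du) for k in range(steps)) * du
    return p0 + b * w(b) - integral

selected   = payment(3.0)     # bid below T: agent is selected
unselected = payment(8.0)     # bid above T: agent is not selected
print(selected - unselected)  # ~5.0 = T, i.e., exactly the threshold
```

The printed difference equals T, matching the cut-value payments derived for binary demand games later in the paper.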
Distinction between our contributions and previous results: It has been shown in [2, 6, 12, 13] that for the single-minded combinatorial auction, there exists a payment scheme which results in a truthful mechanism if the allocation rule satisfies a certain monotonicity property. Theorem 4 also depends on the monotonicity property, but it is applicable to a broader setting than the single-minded combinatorial auction. In addition, the binary demand game studied here is different from the traditional packing IPs: we only require that the allocation to each agent is binary and that the allocation rule satisfies a certain monotonicity property; we do not put any restrictions on the objective function. Furthermore, the main focus of this paper is to design general techniques to find the truthful payment scheme for a given allocation rule O satisfying a certain monotonicity property.

3. GENERAL APPROACHES

3.1 Properties of Strategyproof Mechanisms

We discuss several properties that mechanisms need to satisfy in order to be truthful. The proofs of the corresponding theorems are straightforward and thus omitted due to the space limit. One consequence is that for binary demand games we can always normalize the payment to an agent i such that the payment to the agent is 0 when it is not selected. Hereafter, we will only consider normalized payment schemes.

3.2 Existence of Strategyproof Mechanisms

Notice that, given the setting S, a mechanism design problem is composed of two parts: the allocation rule O and a payment scheme P. In this paper, given an allocation rule O, we focus our attention on how to design a truthful payment scheme based on O. Given an allocation rule O for a binary demand game, we first present a sufficient and necessary condition for the existence of a truthful payment scheme P.

THEOREM 4. Fix the setting S and c−i in a binary demand game G with allocation rule O. The following three conditions are equivalent:
1. There exists a value κi(O, c−i), which we will call a cut value, such that Oi(c) = 1 if ci < κi(O, c−i) and Oi(c) = 0 if ci > κi(O, c−i). When ci = κi(O, c−i), Oi(c) can be either 0 or 1, depending on the tie-breaking of the allocation rule O. Hereafter, we will not consider the tie-breaking scenario in our proofs.
2. The allocation rule O satisfies MP.
3. There exists a truthful payment scheme P for this binary demand game.

PROOF. The proof that Condition 2 implies Condition 1 is straightforward and is omitted here.

We then show that Condition 3 implies Condition 2. The proof is similar to a proof in [13]. To prove this direction, assume for the sake of contradiction that MP is violated, i.e., there exist an agent i and two cost vectors c|i ci1 and c|i ci2, where ci1 < ci2, Oi(c|i ci2) = 1 and Oi(c|i ci1) = 0. From Corollary 2, we know that pi(c|i ci1) = p0_i and pi(c|i ci2) = p1_i. Now fix c−i. The utility for i when ci = ci1 is ui(ci1) = p0_i. When agent i lies and reports ci2, its utility is p1_i − ci1. Since M = (O, P) is truthful, we have p0_i ≥ p1_i − ci1. Now consider the scenario when the actual cost of agent i is ci = ci2. Its utility is p1_i − ci2 when it reports its true cost. Similarly, if it lies and reports ci1, its utility is p0_i. Since M = (O, P) is truthful, we have p1_i − ci2 ≥ p0_i. Consequently, we have p1_i − ci2 ≥ p0_i ≥ p1_i − ci1. This inequality implies that ci1 ≥ ci2, which contradicts ci1 < ci2.

We then show that Condition 1 implies Condition 3. We prove this by constructing a payment scheme and proving that this payment scheme is truthful.
The payment scheme is as follows: if Oi(c) = 1, then agent i gets payment pi(c) = κi(O, c−i); otherwise it gets payment pi(c) = 0. From Condition 1, if Oi(c) = 1 then ci ≤ κi(O, c−i). Thus, its utility is κi(O, c−i) − ci ≥ 0, which implies that the payment scheme satisfies IR. In the following we prove that this payment scheme also satisfies the IC property. There are two cases here.

Case 1: ci < κi(O, c−i). In this case, when i declares its true cost ci, its utility is κi(O, c−i) − ci > 0. Now consider the situation when i declares a cost di ≠ ci. If di < κi(O, c−i), then i gets the same payment and utility since it is still selected. If di > κi(O, c−i), then its utility becomes 0 since it is no longer selected. Thus, it has no incentive to lie in this case.

Case 2: ci ≥ κi(O, c−i). In this case, when i reveals its true cost, its payment is 0 and its utility is 0. Now consider the situation when i declares a cost di ≠ ci. If di > κi(O, c−i), then i gets the same payment and utility since it is still not selected. If di ≤ κi(O, c−i), then its utility becomes κi(O, c−i) − ci ≤ 0 since it is selected now. Thus, it has no incentive to lie.

The equivalence of the monotonicity property of the allocation rule O and the existence of a truthful mechanism using O can be extended to games beyond binary demand games. The details are omitted here due to the space limit. We now summarize the process of designing a truthful payment scheme for a binary demand game based on an output method O.

General Framework 1 Truthful mechanism design for a binary demand game
Stage 1: Check whether the allocation rule O satisfies MP. If it does not, then there is no payment scheme P such that the mechanism M = (O, P) is truthful. Otherwise, define the payment scheme P as follows.
Stage 2: Based on the allocation rule O, find the cut value κi(O, c−i) for agent i such that Oi(c|i di) = 1 when di < κi(O, c−i), and Oi(c|i di) = 0 when di > κi(O, c−i).
Stage 3: The payment to agent i is 0 if Oi(c) = 0; the payment is κi(O, c−i) if Oi(c) = 1.

THEOREM 5. The payment defined by our general framework is minimum among all truthful payment schemes using O as output.

4. COMPUTING CUT VALUE FUNCTIONS

To find the truthful payment scheme by using General Framework 1, the most difficult stage seems to be Stage 2. Notice that binary search does not work in general since the valuations of agents may be continuous. We give some general techniques that can help with finding the cut value function under certain circumstances. Our basic approach is as follows. First, we decompose the allocation rule into several allocation rules. Next, we find the cut value function for each of these new allocation rules. Then, we compute the original cut value function by combining the cut value functions of the new allocation rules.

4.1 Simple Combinations

In this subsection, we introduce techniques to compute the cut value function by combining multiple allocation rules with conjunctions or disjunctions. For simplicity, given an allocation rule O, we will use κ(O, c) to denote the n-tuple vector (κ1(O, c−1), κ2(O, c−2), ..., κn(O, c−n)). Here, κi(O, c−i) is the cut value for agent i when the allocation rule is O and the costs c−i of all other agents are fixed.

THEOREM 6. With a fixed setting S of a binary demand game, assume that there are m allocation rules O1, O2, ..., Om satisfying the monotonicity property, and that κ(Oi, c) is the cut value vector for Oi. Then the allocation rule O(c) = O1(c) ∨ O2(c) ∨ ··· ∨ Om(c) satisfies the monotonicity property.
Moreover, the cut value for O is κ(O, c) = max_{i=1..m} {κ(Oi, c)}. Here κ(O, c) = max_{i=1..m} {κ(Oi, c)} means that, for every j ∈ [1, n], κj(O, c−j) = max_{i=1..m} {κj(Oi, c−j)}, and O(c) = O1(c) ∨ ··· ∨ Om(c) means that, for every j ∈ [1, n], Oj(c) = O1,j(c) ∨ O2,j(c) ∨ ··· ∨ Om,j(c).

PROOF. Assume that ci > c′i and Oi(c) = 1. Without loss of generality, we assume that Ok,i(c) = 1 for some k, 1 ≤ k ≤ m. From the assumption that Ok satisfies MP, we obtain Ok,i(c|i c′i) = 1, and hence Oi(c|i c′i) = 1. Thus O(c) satisfies MP. The correctness of the cut value function follows directly from Theorem 4.

Many algorithms indeed fall into this category. To demonstrate the usefulness of Theorem 6, we discuss a concrete example here. In a network, sometimes we want to deliver a packet to a set of nodes instead of one. This problem is known as multicast. The most commonly used structure in multicast routing is the so-called shortest path tree (SPT). Consider a network G = (V, E, c), where V is the set of nodes and the vector c gives the actual costs of the nodes for forwarding the data. Assume that the source node is s and the receivers are Q ⊂ V. For each receiver qi ∈ Q, we compute the shortest path (least cost path), denoted by LCP(s, qi, d), from the source s to qi under the reported cost profile d. The union of all such shortest paths forms the shortest path tree. We then use General Framework 1 to design the truthful payment scheme P when the SPT structure is used as the output for multicast, i.e., we design a mechanism M = (SPT, P). Notice that VCG mechanisms cannot be applied here since SPT is not an affine maximization.

We define LCP(s, qi) as the allocation that corresponds to the path LCP(s, qi, d), i.e., LCP(s, qi)k(d) = 1 if and only if node vk is on LCP(s, qi, d). The output SPT is then defined as the disjunction of LCP(s, qi) over all qi ∈ Q. In other words, SPTk(d) = 1 if and only if vk is selected in some LCP(s, qi, d). The shortest path allocation rule is utilitarian and satisfies MP. Thus, from Theorem 6, SPT also satisfies MP, and the cut value function vector for SPT can be calculated as κ(SPT, c) = max_{qi∈Q} κ(LCP(s, qi), c), where κ(LCP(s, qi), c) is the cut value function vector for the shortest path LCP(s, qi, c). Consequently, the resulting payment scheme is truthful and is the minimum among all truthful payment schemes when the allocation rule is SPT.

THEOREM 7. Fix the setting S of a binary demand game, and assume that there are m output methods O1, O2, ..., Om satisfying MP, with cut value functions κ(Oi, c) for Oi, where i = 1, 2, ..., m. Then the allocation rule O(c) = O1(c) ∧ O2(c) ∧ ··· ∧ Om(c) satisfies MP. Moreover, the cut value function for O is κ(O, c) = min_{i=1..m} {κ(Oi, c)}.

We now show that our simple combinations generalize the IF-THEN-ELSE function defined in [13]. For an agent i, assume that there are two allocation rules O1 and O2 satisfying MP. Let κi(O1, c−i) and κi(O2, c−i) be the cut value functions for O1 and O2, respectively. Then the IF-THEN-ELSE function Oi(c) is actually
Oi(c) = [(ci ≤ κi(O1, c−i) + δ1(c−i)) ∧ O2,i(c)] ∨ (ci < κi(O1, c−i) − δ2(c−i)),
where δ1(c−i) and δ2(c−i) are two positive functions. By applying Theorems 6 and 7, we know that the allocation rule O satisfies MP, and consequently
κi(O, c−i) = max{ min(κi(O1, c−i) + δ1(c−i), κi(O2, c−i)), κi(O1, c−i) − δ2(c−i) }.

4.2 Round-Based Allocations

Some approximation algorithms are round-based, where each round of the algorithm selects some agents and updates the setting and the cost profile if necessary.
For example, several approximation algorithms for minimum weight vertex cover [19], maximum weight independent set, minimum weight set cover [4], and minimum weight Steiner [18] tree fall into this category. As an example, we discuss the minimum weighted vertex cover problem (MWVC) [16, 15] to show how to compute the cut value for a round-based output. Given a graph G = (V, E), where the nodes v1, v2,..., vn are the agents and each agent vi has a weight ci, we want to find a node set V 0 ⊆ V such that for every edge (u, v) ∈ E at least one of u and v is in V0. Such V 0 is called a vertex cover of G. The valuation of a node i is − ci if it is selected; otherwise its valuation is 0. For a subset of nodes V 0 ∈ V, we define its weight as c (V0) = Ei ∈ V, ci. We want to find a vertex cover with the minimum weight. Hence, the objective function to be implemented is utilitarian. To use the VCG mechanism, we need to find the vertex cover with the minimum weight, which is NP-hard [16]. Since we are interested in mechanisms that can be computed in polynomial time, we must use polynomial-time computable allocation rules. Many algorithms have been proposed in the literature to approximate the optimal solution. In this paper, we use a 2-approximation algorithm given in [16]. For the sake of completeness, we briefly review this algorithm here. The algorithm is round-based. Each round selects some vertices and discards some vertices. For each node i, w (i) is initialized to its weight ci, and when w (i) drops to 0, i is included in the vertex cover. To make the presentation clear, we say an edge (i1, j1) is lexicographically smaller than edge (i2, j2) if (1) min (i1, j1) <min (i2, j2), or (2) min (i1, j1) = min (i2, j2) and max (i1, j1) <max (i2, j2). Algorithm 2 Approximate Minimum Weighted Vertex Cover Input: A node weighted graph G = (V, E, c). Output: A vertex cover V0. 1: Set V 0 = ∅. For each i ∈ V, set w (i) = ci. 2: while V 0 is not a vertex cover do 3: Pick an uncovered edge (i, j) with the least lexicographic order among all uncovered edges. 4: Let m = min (w (i), w (j)). 5: Update w (i) to w (i) − m and w (j) to w (j) − m. 6: If w (i) = 0, add i to V0. If w (j) = 0, add j to V0. Notice, selecting an edge using the lexicographic order is crutial to guarantee the monotonicity property. Algorithm 2 outputs a vertex cover V 0 whose weight is within 2 times of the optimum. For convenience, we use VC (c) to denote the vertex cover computed by Algorithm 2 when the cost vector of vertices is c. Below we generalize Algorithm 2 to a more general scenario. Typically, a round-based output can be characterized as follows (Algorithm 3). We have the following theorem about the existence of a truthful payment using a round based allocation rule A. THEOREM 8. A round-based output A, with the framework defined in Algorithm 3, satisfies MP if the output methods Or satisfy MP and all updating rules Ur are crossing-independent. PROOF. Consider an agent i and fixed c − i. We prove that when an agent i is selected with cost ci, then it is also selected with cost di <ci. Assume that i is selected in round r with cost ci. Then under cost di, if agent i is selected in a round before r, our claim holds. Otherwise, consider in round r. Clearly, the setting Sr and the costs of all other agents are the same as what if agent i had cost ci since i is not selected in the previous rounds due to the crossindependent property. 
Since i is selected in round r with cost ci, it is also selected in round r with cost di < ci, because O^r satisfies MP. This finishes the proof.

Algorithm 3 A Round-Based Output Method
1: Set r = 0, c^0 = c, and G^0 = G initially.
2: repeat
3: Compute an output o^r using a deterministic allocation rule O^r: S^r × c^r → {0, 1}^n. Here O^r, c^r and S^r are the allocation rule, cost vector and game setting of game G^r, respectively. Remark: O^r is often a simple greedy algorithm, such as selecting the agents that minimize some utilitarian function. For the example of vertex cover, O^r always selects the lighter-weight endpoint of the lexicographically least uncovered edge (i, j).
4: Let r = r + 1. Update the game G^{r−1} to obtain a new game G^r with setting S^r and cost vector c^r according to some updating rule U^r. Here we update the cost and setting of the game. Remark: For the example of vertex cover, the updating rule decreases the weights of vertices i and j by min(w(i), w(j)).
5: until a valid output is found
6: Return the union of the sets of selected players of all rounds as the final output. For the example of vertex cover, it is the union of the nodes selected in all rounds.

Algorithm 4 Compute Cut Value for Round-Based Algorithms
Input: A round-based output A, a game G^1 = G, and an updating function vector U.
Output: The cut value x for agent k.
1: Set r = 1 and ck = ζ. Recall that ζ is a value that guarantees Ak = 0 when an agent reports the cost ζ.
2: repeat
3: Compute an output o^r using a deterministic algorithm based on setting S^r, using allocation rule O^r: S^r × c^r → {0, 1}^n.
4: Find the cut value for agent k based on the allocation rule O^r for costs c^r_{−k}. Let ℓ^r = κk(O^r, c^r_{−k}) be this cut value.
5: Set r = r + 1 and obtain a new game G^r from G^{r−1} and o^{r−1} according to the updating rule U^r.
6: Let c^r be the new cost vector for game G^r.
7: until a valid output is found.
8: For each round r, let g_r(x) be the cost of agent k in round r when the original cost vector is c|k x. Here, t is the total number of rounds.
9: Find the minimum value x such that g_r(x) ≥ ℓ^r for every round 1 ≤ r ≤ t.
10: Output the value x as the cut value.

If the round-based output satisfies the monotonicity property, the cut value always exists. We then show how to find the cut value for a selected agent k in Algorithm 4. The correctness of Algorithm 4 is straightforward. To compute the cut value, we assume that (1) the cut value ℓ^r for each round r can be computed in polynomial time; and (2) we can solve the equation g_r(x) = ℓ^r for x in polynomial time when the cost vector c−k and ℓ^r are given.

Now we consider the vertex cover problem. In each round r, we select the endpoint with the smaller residual weight of the lexicographically least uncovered edge. The output satisfies MP. For agent i, we update its cost to c^r_i − c^r_j iff edge (i, j) is selected. It is easy to verify that this updating rule is crossing-independent; thus we can apply Algorithm 4 to compute the cut value for the vertex cover game, as shown in Algorithm 5.

Algorithm 5 Compute Cut Value for MWVC
Input: A node weighted graph G = (V, E, c) and a node k selected by Algorithm 2.
Output: The cut value κk(VC, c−k).
1: For each i ∈ V, set w(i) = ci.
2: Set w(k) = ∞, pk = 0 and V′ = ∅.
3: while V′ is not a vertex cover do
4: Pick an uncovered edge (i, j) with the least lexicographic order among all uncovered edges.
5: Set m = min(w(i), w(j)).
6: Update w(i) = w(i) − m and w(j) = w(j) − m.
7: If w(i) = 0, add i to V′; else add j to V′.
8: If i == k or j == k then set pk = pk + m.
9: Output pk as the cut value κk(VC, c−k).
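The following short Python transcription is ours, not the authors' code; it restates Algorithm 2 (the round-based 2-approximation for minimum weighted vertex cover) and Algorithm 5 (the cut value, and hence the payment, for a selected node k). The dictionary/list encoding of the weighted graph is an assumption made only for this sketch.

```python
# Sketch (ours) of Algorithm 2 (approximate minimum weighted vertex cover)
# and Algorithm 5 (its cut value for one node k).  Vertices are integers,
# `edges` is a list of pairs, `weights` maps vertex -> cost.

def approx_vertex_cover(weights, edges):
    """Round-based 2-approximation: repeatedly take the lexicographically
    least uncovered edge and pay down its endpoints' residual weights."""
    w = dict(weights)                      # residual weights w(i)
    cover = set()
    def uncovered():
        return [(min(i, j), max(i, j)) for (i, j) in edges
                if i not in cover and j not in cover]
    while uncovered():
        i, j = min(uncovered())            # lexicographically least uncovered edge
        m = min(w[i], w[j])
        w[i] -= m
        w[j] -= m
        if w[i] == 0:
            cover.add(i)
        if w[j] == 0:
            cover.add(j)
    return cover

def cut_value(weights, edges, k):
    """Cut value kappa_k(VC, c_-k): rerun the rounds with w(k) = infinity and
    accumulate the amounts charged to edges incident on k (Algorithm 5)."""
    w = dict(weights)
    w[k] = float("inf")
    p_k = 0.0
    cover = set()
    def uncovered():
        return [(min(i, j), max(i, j)) for (i, j) in edges
                if i not in cover and j not in cover]
    while uncovered():
        i, j = min(uncovered())
        m = min(w[i], w[j])
        w[i] -= m
        w[j] -= m
        if w[i] == 0:
            cover.add(i)
        else:
            cover.add(j)
        if k in (i, j):
            p_k += m
    return p_k

# Example: a path 1-2-3.
weights = {1: 3.0, 2: 2.0, 3: 4.0}
edges = [(1, 2), (2, 3)]
print(approx_vertex_cover(weights, edges))     # {2}
print(cut_value(weights, edges, 2))            # 7.0
```

On this small path graph, node 2 is selected and its cut value is 7: with any reported cost below 7 it remains in the cover produced by Algorithm 2, and with any cost above 7 it does not, so 7 is exactly the payment prescribed by General Framework 1.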
4.3 Complex Combinations

In subsection 4.1, we discussed how to find the cut value function when the output of the binary demand game is a simple combination of some outputs whose cut values can be computed through other means (typically VCG). However, some algorithms cannot be decomposed in the way described in subsection 4.1. Next we present a more complex way to combine allocation rules and, as may be expected, the way to find the cut value is also more complicated.

Assume that there are n agents 1 ≤ i ≤ n with cost vector c, and that there are m binary demand games Gi with objective functions fi(o, c), settings Si and allocation rules ψi, where i = 1, 2, ..., m. There is another binary demand game with setting S and allocation rule O whose input is a cost vector d = (d1, d2, ..., dm). Let f be the function vector (f1, f2, ..., fm), ψ the allocation rule vector (ψ1, ψ2, ..., ψm), and let the setting vector be (S1, S2, ..., Sm). For notational simplicity, we define Fi(c) = fi(ψi(c), c) for each 1 ≤ i ≤ m, and F(c) = (F1(c), F2(c), ..., Fm(c)).

Let us see a concrete example of these combinations. Consider a link weighted graph G = (V, E, c) and a subset of q nodes Q ⊆ V. The Steiner tree problem is to find a set of links with minimum total cost to connect Q. One way to find an approximation of the Steiner tree is as follows: (1) build a virtual complete graph H using Q as its vertices, where the cost of each edge (i, j) is the cost of LCP(i, j, c) in graph G; (2) build the minimum spanning tree of H, denoted as MST(H); (3) an edge of G is selected iff it is selected in some LCP(i, j, c) and edge (i, j) of H is selected into MST(H). In this game, we define q(q − 1)/2 games Gi,j, where i, j ∈ Q, with objective function fi,j(o, c) being the minimum cost of connecting i and j in graph G, setting being the original graph G, and allocation rule being LCP(i, j, c). The game G corresponds to the MST game on graph H. The costs of the q(q − 1)/2 pairwise shortest paths define the input vector d = (d1, d2, ..., dm) for the game MST. More details will be given in Section 5.2.

DEFINITION 3. Given an allocation rule O and setting S, an objective function vector f, an allocation rule vector ψ and setting vector (S1, ..., Sm), we define a compound binary demand game with setting S and output O ◦ F as (O ◦ F)i(c) = (O1(F(c)) ∧ ψ1,i(c)) ∨ ··· ∨ (Om(F(c)) ∧ ψm,i(c)).

The allocation rule of the above definition can be interpreted as follows. An agent i is selected if and only if there is a j such that (1) i is selected in ψj(c), and (2) the allocation rule O selects index j under the cost profile F(c). For simplicity, we will use O ◦ F to denote the output of this compound binary demand game. Notice that a truthful payment scheme using O ◦ F as output exists if and only if it satisfies the monotonicity property. To study when O ◦ F satisfies MP, several necessary definitions are in order.

DEFINITION 4. Function Monotonicity Property (FMP). Given an objective function g and an allocation rule O, a function H(c) = g(O(c), c) is said to satisfy the function monotonicity property if, for fixed c−i, it satisfies:
1. When Oi(c) = 0, H(c) does not increase with ci.
2. When Oi(c) = 1, H(c) does not decrease with ci.

From the definition of the strong monotonicity property, we have Lemma 1 directly. We can now give a sufficient condition under which O ◦ F satisfies the monotonicity property.

THEOREM 9.
If ∀ i ∈ [1, m], Fi satisfies FMP, ψi satisfies MP, and the output O satisfies SMP, then O ◦ F satisfies MP. PROOF. Assuming for cost vector c we have (O ◦ F) i (c) = 1, we should prove for any cost vector c' = c | ic' i with c' i <ci, (O ◦ F) i (c') = 1. Noticing that (O ◦ F) i (c) = 1, without loss of generality, we assume that Ok (F (c)) = 1 and ψki (c) = 1 for some index 1 <k <m. Now consider the output O with the cost vector F (c') | kFk (c). There are two scenarios, which will be studied one by one as follows. One scenario is that index k is not chosen by the output function O. From Lemma 1, there must exist j = 6 k such that We then prove that agent i will be selected in the output ψj (c'), i.e., ψji (c') = 1. If it is not, since ψj (c) satisfies MP, we have ψji (c) = ψji (c') = 0 from c' i <ci. Since Fj satisfies FMP, we know Fj (c')> Fj (c), which is a contradiction to the inequality (1). Consequently, we have ψji (c') = 1. From Equation (2), the fact that index k is not selected by allocation rule O and the definition of SMP, we have Oj (F (c')) = 1, Thus, agent i is selected by O ◦ F because of Oj (F (c')) = 1 and ψji (c') = 1. The other scenario is that index k is chosen by the output function O. First, agent i is chosen in ψk (c') since the output ψk (c) satisfies the monotonicity property and c' i <ci and ψki (c) = 1. Secondly, since the function Fk satisfies FMP, we know that Fk (c') <Fk (c). Remember that output O satisfies the SMP, thus we can obtain Ok (F (c')) = 1 from the fact that Ok (F (c') | kFk (c)) = 1 and Fk (c') <Fk (c). Consequently, agent i will also be selected in the final output O ◦ F. This finishes our proof. This theorem implies that there is a cut value for the compound output O ◦ F. We then discuss how to find the cut value for this output. Below we will give an algorithm to calculate κi (O ◦ F) when (1) O satisfies SMP, (2) ψj satisfies MP, and (3) for fixed c--i, Fj (c) is a constant, say hj, when ψji (c) = 0, and Fj (c) increases when ψji (c) = 1. Notice that here hj can be easily computed by setting ci = ∞ since ψj satisfies the monotonicity property. When given i and fixed c--i, we define (Fji)--1 (y) as the smallest x such thatFj (c | ix) = y. For simplicity, we denote (Fji)--1 asF--1 j if no confusion is caused when i is a fixed agent. In this paper, we assume that given any y, we can find such x in polynomial time. Algorithm 6 Find Cut Value for Compound Method O ◦ F Input: allocation rule O, objective function vector F and inverse function vector F--1 = {F1--1, · · ·, F--1 m}, allocation rule vector ψ and fixed c--i. Output: Cut value for agent i based on O ◦ F. 1: for 1 <j <m do 2: Compute the outputs ψj (ci). 3: Compute hj = Fj (c | i ∞). 4: Use h = (h1, h2, · · ·, hm) as the input for the output function O. Denote τj = κj (O, h--j) as the cut value function of output O based on input h. 5: for 1 <j <m do 6: Set κi, j = F--1 j (min {τj, hj}). 7: The cut value for i is κi (O ◦ F, c--i) = maxmj = 1 κi, j. THEOREM 10. Algorithm 6 computes the correct cut value for agent i based on the allocation rule O ◦ F. PROOF. In order to prove the correctness of the cut value function calculated by Algorithm 6, we prove the following two cases. For our convenience, we will use κi to represent κi (O ◦ F, c--i) if no confusion caused. First, if di <κi then (O ◦ F) i (c | idi) = 1. Without loss of generality, we assume that κi = κi, j for some j. Since function Fj satisfies FMP and ψji (c | idi) = 1, we have Fj (c | idi) <Fj (κi). 
Notice that di < κi,j. From the definition of κi,j = F−1_j(min{τj, hj}), we have (1) ψj,i(c|i di) = 1, and (2) Fj(c|i di) < τj, due to the fact that Fj(x) is a non-decreasing function when j is selected. Thus, from the monotonicity property of O and the fact that τj is the cut value for output O on input h, we have Oj(h|j Fj(c|i di)) = 1. If Oj(F(c|i di)) = 1, then (O ◦ F)i(c|i di) = 1. Otherwise, since O satisfies SMP, Lemma 1 and the above equality imply that there exists at least one index k such that Ok(F(c|i di)) = 1 and Fk(c|i di) < hk. Note that Fk(c|i di) < hk implies that i is selected in ψk(c|i di), since hk = Fk(c|i ∞). In other words, agent i is selected in O ◦ F.

Second, if di ≥ κi(O ◦ F, c−i) then (O ◦ F)i(c|i di) = 0. Assume for the sake of contradiction that (O ◦ F)i(c|i di) = 1. Then there exists an index 1 ≤ j ≤ m such that Oj(F(c|i di)) = 1 and ψj,i(c|i di) = 1. Remember that hk ≥ Fk(c|i di) for any k. Thus, from the fact that O satisfies SMP, when changing the cost vector from F(c|i di) to h|j Fj(c|i di), we still have Oj(h|j Fj(c|i di)) = 1. This implies that Fj(c|i di) < τj. Combining the above inequality with the fact that Fj(c|i di) < hj, we have Fj(c|i di) < min{hj, τj}. This implies di < F−1_j(min{hj, τj}) = κi,j ≤ κi(O ◦ F, c−i), which is a contradiction. This finishes our proof.

In most applications, the allocation rule ψj implements the objective function fj and fj is utilitarian. Thus, we can compute the inverse function F−1_j efficiently. Another issue is that the conditions under which we can apply Algorithm 6 may seem restrictive. However, many games in practice satisfy these properties, and here we show how to derive the MAX combination of [13]. Assume A1 and A2 are two allocation rules for the single-minded combinatorial auction; the combination MAX(A1, A2) returns the allocation with the larger welfare. If the algorithms A1 and A2 satisfy MP and FMP, then, since the operation max(x, y), which returns the larger of x and y, satisfies SMP, Theorem 9 implies that the combination MAX(A1, A2) also satisfies MP. Further, the cut value of the MAX combination can be found by Algorithm 6. As we will show in Section 5, the complex combination applies to some more complicated problems as well.

5. CONCRETE EXAMPLES

5.1 Set Cover

In the set cover problem, there is a set U of m elements that needs to be covered, and each agent 1 ≤ i ≤ n can cover a subset of elements Si with a cost ci. Let S = {S1, S2, ..., Sn} and c = (c1, c2, ..., cn). We want to find a subset of agents D such that U ⊆ ∪_{i∈D} Si. The selected subsets are called a set cover for U. The social efficiency of the output D is defined as Σ_{i∈D} ci, which is the objective function to be minimized. Clearly, this objective is utilitarian, and thus a VCG mechanism can be applied if we can find the subset of S that covers U with the minimum cost. It is well known that finding the optimal solution is NP-hard. In [4], an algorithm with approximation ratio Hm has been proposed, and it has been proved that this is the best ratio achievable for the set cover problem. For completeness of presentation, we review their method here.

Algorithm 7 Greedy Set Cover (GSC)
Input: Agent i's covered subset Si and cost ci (1 ≤ i ≤ n).
Output: A set of agents that can cover all elements.
1: Initialize r = 1, T1 = ∅, and R = ∅.
2: while Tr ≠ U do
3: Find the set Sj with the minimum density cj / |Sj − Tr|.
4: Set Tr+1 = Tr ∪ Sj and R = R ∪ {j}.
5: Set r = r + 1.
6: Output R.

Let GSC(S) be the sets selected by Algorithm 7. Notice that the output set is a function of S and c.
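A compact Python sketch of Algorithm 7 is given below (ours, not the authors' code); the dictionary encoding of the agents' subsets and costs is an illustrative assumption, and the sketch assumes the subsets can jointly cover U. The cut-value computation described next re-runs this greedy selection with the cost of the agent in question set to ∞.

```python
# A short sketch (ours) of Algorithm 7, the greedy H_m-approximation for
# set cover.  `subsets` maps agent -> set of covered elements and `costs`
# maps agent -> cost; these names are illustrative assumptions.

def greedy_set_cover(universe, subsets, costs):
    covered = set()
    chosen = []
    while covered != universe:
        # pick the agent with minimum density: cost per newly covered element
        best = min(
            (a for a in subsets if subsets[a] - covered),
            key=lambda a: costs[a] / len(subsets[a] - covered),
        )
        chosen.append(best)
        covered |= subsets[best]
    return chosen

universe = {1, 2, 3, 4, 5}
subsets = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5}, "D": {1, 5}}
costs = {"A": 3.0, "B": 1.0, "C": 2.0, "D": 2.0}
print(greedy_set_cover(universe, subsets, costs))   # ['B', 'D', 'A']
```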
Some works assume that the type of an agent could be ci, i.e., Si is assumed to be a public knowledge. Here, we consider a more general case in which the type of an agent is (Si, ci). In other words, we assume that every agent i cannot only lie about its cost ci but also can lie about the set Si. This problem now looks similar to the combinatorial auction with single minded bidder studied in [12], but with the following differences: in the set cover problem we want to cover all the elements and the sets chosen can have some overlap while in combinatorial auction the chosen sets are disjoint. We can show that the mechanism M = (GSC, PV CG), using Algorithm 7 to find a set cover and apply VCG mechanism to compute the payment to the selected agents, is not truthful. Obviously, the set cover problem is a binary demand game. For the moment, we assume that agent i won't be able to lie about Si. We will drop this assumption later. We show how to design a truthful mechanism by applying our general framework. 1. Check the monotonicity property: The output of Algorithm 7 is a round-based output. Thus, for an agent i, we first focus on the output of one round r. In round r, if i is se among all remaining agents. Consequently, agent i is still selected in round r, which means the output of round r satisfies MP. Now we look into the updating rules. For every round, we only update the Tr +1 = TrUSj and R = RUj, which is obviously cross-independent. Thus, by applying Theorem 8, we know the output by Algorithm 7 satisfies MP. 2. Find the cut value: To calculate the cut value for agent i with fixed cost vector c − i, we follow the steps in Algorithm 4. First, we set ci = ∞ and apply Algorithm 7. Let ir be the agent selected in round r and T − i r +1 be the corresponding set. Then the cut value of round r is Remember the updating rule only updates the game setting but not the cost of the agent, thus we have gr (x) = x ≥ fr for 1 ≤ r ≤ t. Therefore, the final cut value for agent i is The payment to an agent i is κi if i is selected; otherwise its payment is 0. We now consider the scenario when agent i can lie about Si. Assume that agent i cannot lie upward, i.e., it can only report a set S0i ⊆ Si. We argue that agent i will not lie about its elements Si. Notice that the cut value computed for round r is fr = cir Thus, lying its set as S0i will not increase the cut value for each round. Thus lying about Si will not improve agent i's utility. 5.2 Link Weighted Steiner Trees Consider any link weighted network G = (V, E, c), where E = {e1, e2, · · ·, em} are the set of links and ci is the weight of the link ei. The link weighted Steiner tree problem is to find a tree rooted at source node s spanning a given set of nodes Q = {q1, q2, · · ·, qk} ⊂ V. For simplicity, we assume that qi = vi, for 1 ≤ i ≤ k. Here the links are agents. The total cost of links in a graph H ⊆ G is called the weight of H, denoted as ω (H). It is NP-hard to find the minimum cost multicast tree when given an arbitrary link weighted graph G [17, 20]. The currently best polynomial time method has approximation ratio 1 + ln 3 2 [17]. Here, we review and discuss the first approximation method by Takahashi and Matsuyama [20]. Algorithm 8 Find LinkWeighted SteinerTree (LST) Input: Network G = (V, E, c) where c is the cost vector for link set E. Source node s and receiver set Q. Output: A tree LST rooted at s and spanned all receivers. 1: Set r = 1, G1 = G, Q1 = Q and s1 = s. 
2: repeat 3: In graph Gr, find the receiver, say qi, that is closest to the source s, i.e., LCP (s, qi, c) has the least cost among the shortest paths from s to all receivers in Qr. 4: Select all links on LCP (s, qi, c) as relay links and set their cost to 0. The new graph is denoted as Gr +1. 5: Set tr as qi and Pr = LCP (s, qi, c). 6: Set Qr +1 = Qr \ qi and r = r + 1. 7: until all receivers are spanned. Hereafter, let LST (G) be the final tree constructed using the above method. It is shown in [24] that mechanism M = (LST, pV CG) is not truthful, where pV CG is the payment calculated based on VCG mechanism. We then show how to design a truthful payment scheme using our general framework. Observe that the output Pr, for any round r, satisfies MP, and the update rule for every round satisfies crossing-independence. Thus, from Theorem 8, the roundbased output LST satisfies MP. In round r, the cut value for a link ei can be obtained by using the VCG mechanism. Now we set ci = ∞ and execute Algorithm 8. Let w − ir (ci) be the cost of the path Pr (ci) selected in the rth round and Πir (ci) be the shortest path selected in round r if the cost of ci is temporarily set to − ∞. Then the cut value for round r is fr = wir (c − i) − | Πir (c − i) | where | Πir (c − i) | is the cost of the path Πir (c − i) excluding node vi. Using Algorithm 4, we obtain the final cut value for agent i: r, i (LST, c − i) = maxr {$r}. Thus, the payment to a link ei is r, i (LST, c − i) if its reported cost is di <r, i (LST, d − i); otherwise, its payment is 0. 5.3 Virtual Minimal Spanning Trees To connect the given set of receivers to the source node, besides the Steiner tree constructed by the algorithms described before, a virtual minimum spanning tree is also often used. Assume that Q is the set of receivers, including the sender. Assume that the nodes in a node-weighted graph are all agents. The virtual minimum spanning tree is constructed as follows. Algorithm 9 Construct VMST 1: for all pairs of receivers qi, qj ∈ Q do 2: Calculate the least cost path LCP (qi, qj, d). 3: Construct a virtual complete link weighted graph K (d) using Q as its node set, where the link qiqj corresponds to the least cost path LCP (qi, qj, d), and its weight is w (qiqj) = | LCP (qi, qj, d) |. 4: Build the minimum spanning tree on K (d), denoted as V MST (d). 5: for every virtual link qiqj in V MST (d) do 6: Find the corresponding least cost path LCP (qi, qj, d) in the original network. 7: Mark the agents on LCP (qi, qj, d) selected. The mechanism M = (V MST, pV CG) is not truthful [24], where the payment pV CG to a node is based on the VCG mechanism. We then show how to design a truthful mechanism based on the framework we described. 1. Check the monotonicity property: Remember that in the complete graph K (d), the weight of a link qiqj is | LCP (qi, qj, d) |. In other words, we implicitly defined | Q | (| Q | − 1) / 2 functions fi, j, for all i <j and qi ∈ Q and qj ∈ Q, with fi, j (d) = | LCP (qi, qj, d) |. We can show that the function fi, j (d) = | LCP (qi, qj, d) | satisfies FMP, LCP satisfies MP, and the output MST satisfies SMP. From Theorem 9, the allocation rule VMST satisfies the monotonicity property. 2. Find the cut value: Notice VMST is the combination of MST and function fi, j, so cut value for VMST can be computed based on Algorithm 6 as follows. (a) Given a link weighted complete graph K (d) on Q, we should find the cut value function for edge ek = (qi, qj) based on MST. 
Given a spanning tree T and a pair of terminals p and q, clearly there is a unique path connecting them on T. We denote this path as ΠT (p, q), and the edge with the maximum length on this path as LE (p, q, T). Thus, the cut value can be represented as r, k (MST, d) = LE (qi, qj, MST (d | k ∞)) (b) We find the value-cost function for LCP. Assume vk ∈ LCP (qi, qj, d), then the value-cost function is xk = yk − | LCPv. (qi, qj, d | k0) |. Here, LCPv. (qi, qj, d) is the least cost path between qi and qj with node vk on this path. (c) Remove vk and calculate the value K (d | k ∞). Set h (i, j) = | LCP (qi, qj, d | ∞)) | for every pair of node i = 6 j and let h = {h (i, j)} be the vector. Then it is easy to show that τ (i, j) = | LE (qi, qj, MST (h | (i, j) ∞)) | is the cut value for output VMST. It easy to verify that min {h (i, j), τ (i, j)} = | LE (qi, qj, MST (h) |. Thus, we know r, (i, j) k (V MST, d) is | LE (qi, qj, MST (h) | − | LCPv. (qi, qj, d | k0) |. The cut value for agent k is r, k (V MST, d − k) = max0 ≤ i, j ≤ r r, ijk (V MST, d − k). 3. We pay agent k r, k (V MST, d − k) if and only if k is selected in V MST (d); else we pay it 0. 5.4 Combinatorial Auctions Lehmann et al. [12] studied how to design an efficient truthful mechanism for single-minded combinatorial auction. In a singleminded combinatorial auction, there is a set of items S to be sold and there is a set of agents 1 ≤ i ≤ n who wants to buy some of the items: agent i wants to buy a subset Si ⊆ S with maximum price mi. A single-minded bidder i declares a bid bi = hS0i, aii with S0i ⊆ S and ai ∈ R +. Two bids hS0i, aii and hS0j, aji conflict if S0i ∩ S0j = 6 ∅. Given the bids b1, b2, · · ·, bn, they gave a greedy round-based algorithm as follows. First the bids are sorted by some criterion (ai i | 1/2 is used in [12]) in an increasing order and let L be | S0 the list of sorted bids. The first bid is granted. Then the algorithm exams each bid of L in order and grants the bid if it does not conflict with any of the bids previously granted. If it does, it is denied. They proved that this greedy allocation scheme using criterion ai approximates the optimal allocation within a factor of √ m, where m is the number of goods in S. In the auction settings, we have ci = − ai. It is easy to verify the output of the greedy algorithm is a round-based output. Remember after bidder j is selected for round r, every bidder has conflict with j will not be selected in the rounds after. This equals to update the cost of every bidder having conflict with j to 0, which satisfies crossing-independence. In addition, in any round, if bidder i is selected with ai then it will still be selected when it declares a' i> ai. Thus, for every round, it satisfies MP and the ISj 0I1/2 where bj is the first bid that has been denied but would have been selected were it not only for the presence of bidder i. Thus, the payment by agent i is | S' i | 1/2 · aj ISj 0I1/2, and 0 otherwise. This payment scheme is exactly the same as the payment scheme in [12]. 6. CONCLUSIONS In this paper, we have studied how to design a truthful mechanism M = (O, P) for a given allocation rule O for a binary demand game. We first showed that the allocation rule O satisfying the MP is a necessary and sufficient condition for a truthful mechanism M to exist. We then formulate a general framework for designing payment P such that the mechanism M = (O, P) is truthful and computable in polynomial time. 
We further presented several general composition-based techniques to compute P efficiently for various allocation rules O. Several concrete examples were discussed to demonstrate our general framework for designing P and the composition-based techniques for computing P in polynomial time.

In this paper, we have concentrated on how to compute P in polynomial time. Our algorithms do not necessarily have the optimal running time for computing P given O. It would be of interest to design algorithms that compute P in optimal time. We have made some progress in this research direction in [22] by providing an algorithm to compute the payments for unicast in a node weighted graph in optimal O(n log n + m) time. Another research direction is to design an approximate allocation rule O satisfying MP with a good approximation ratio for a given binary demand game. Many works [12, 13] in the mechanism design literature are in this direction. We point out here that the goal of this paper is not to design a better allocation rule for a problem, but to design an algorithm to compute the payments efficiently when O is given. It would be of significance to design allocation rules with good approximation ratios such that a given binary demand game has a computationally efficient payment scheme.

In this paper, we have studied mechanism design for binary demand games. However, some problems cannot be directly formulated as binary demand games. The job scheduling problem in [2] is such an example. For that problem, a truthful payment scheme P exists for an allocation rule O if and only if the work load assigned by O is monotone in a certain manner. It would be of interest to generalize our framework for designing a truthful payment scheme for a binary demand game to non-binary demand games. Towards this research direction, Theorem 4 can be extended to a general allocation rule O whose range is R+. The remaining difficulty is then how to compute the payment P under mild assumptions about the valuations if a truthful mechanism M = (O, P) does exist.
Towards Truthful Mechanisms for Binary Demand Games: A General Framework ABSTRACT The family of Vickrey-Clarke-Groves (VCG) mechanisms is arguably the most celebrated achievement in truthful mechanism design. However, VCG mechanisms have their limitations. They only apply to optimization problems with a utilitarian (or affine) objective function, and their output should optimize the objective function. For many optimization problems, finding the optimal output is computationally intractable. If we apply VCG mechanisms to polynomial-time algorithms that approximate the optimal solution, the resulting mechanisms may no longer be truthful. In light of these limitations, it is useful to study whether we can design a truthful non-VCG payment scheme that is computationally tractable for a given allocation rule O. In this paper, we focus our attention on binary demand games in which the agents' only available actions are to take part in the a game or not to. For these problems, we prove that a truthful mechanism M = (O, P) exists with a proper payment method P iff the allocation rule O satisfies a certain monotonicity property. We provide a general framework to design such P. We further propose several general composition-based techniques to compute P efficiently for various types of output. In particular, we show how P can be computed through "or/and" combinations, round-based combinations, and some more complex combinations of the outputs from subgames. 1. INTRODUCTION In recent years, with the rapid development of the Internet, many protocols and algorithms have been proposed to make the Internet more efficient and reliable. The Internet is a complex distributed system where a multitude of heterogeneous agents cooperate to achieve some common goals, and the existing protocols and algorithms often assume that all agents will follow the prescribed rules without deviation. However, in some settings where the agents are selfish instead of altruistic, it is more reasonable to assume these agents are rational--maximize their own profits--according to the neoclassic economics, and new models are needed to cope with the selfish behavior of such agents. Towards this end, Nisan and Ronen [14] proposed the framework of algorithmic mechanism design and applied VCG mechanisms to some fundamental problems in computer science, including shortest paths, minimum spanning trees, and scheduling on unrelated machines. The VCG mechanisms [5, 11, 21] are applicable to mechanism design problems whose outputs optimize the utilitarian objective function, which is simply the sum of all agents' valuations. Unfortunately, some objective functions are not utilitarian; even for those problems with a utilitarian objective function, sometimes it is impossible to find the optimal output in polynomial time unless P = NP. Some mechanisms other than VCG mechanism are needed to address these issues. Archer and Tardos [2] studied a scheduling problem where it is NP-Hard to find the optimal output. They pointed out that a certain monotonicity property of the output work load is a necessary and sufficient condition for the existence of a truthful mechanism for their scheduling problem. Auletta et al. [3] studied a similar scheduling problem. They provided a family of deterministic truthful (2 + e) - approximation mechanisms for any fixed number of machines and several (1 + e) - truthful mechanisms for some NP-hard restrictions of their scheduling problem. Lehmann et al. 
[12] studied the single-minded combinatorial auction and gave a √m-approximation truthful mechanism, where m is the number of goods. They also pointed out that a certain monotonicity in the allocation rule can lead to a truthful mechanism. The work of Mu'alem and Nisan [13] is the closest in spirit to our work. They characterized all truthful mechanisms based on a certain monotonicity property in a single-minded auction setting. They also showed how to use MAX and IF-THEN-ELSE to combine outputs from subproblems. As shown in this paper, the MAX and IF-THEN-ELSE combinations are special cases of the composition-based techniques that we present for computing the payments in polynomial time under mild assumptions. More generally, we study how to design truthful mechanisms for binary demand games where the allocation of an agent is either "selected" or "not selected". We also assume that the valuations of agents are uncorrelated, i.e., the valuation of an agent only depends on its own allocation and type. Recall that a mechanism M = (O, P) consists of two parts, an allocation rule O and a payment scheme P. Previously, it was often assumed that there is an objective function g and an allocation rule O that either optimizes g exactly or approximately. In contrast to the VCG mechanisms, we do not require that the allocation optimize the objective function. In fact, we do not even require the existence of an objective function. Given any allocation rule O for a binary demand game, we show that a truthful mechanism M = (O, P) exists for the game if and only if O satisfies a certain monotonicity property. The monotonicity property only guarantees the existence of a payment scheme P such that (O, P) is truthful. We complement this existence theorem with a general framework to design such a payment scheme P. Furthermore, we present general techniques to compute the payment when the output is a composition of the outputs of subgames through the operators "or" and "and"; through round-based combinations; or through intermediate results, which may themselves be computed from other subproblems. The remainder of the paper is organized as follows. In Section 2, we discuss preliminaries and previous works, define binary demand games, and discuss the basic assumptions about binary demand games. In Section 3, we show that O satisfying a certain monotonicity property is a necessary and sufficient condition for the existence of a truthful mechanism M = (O, P). A framework is then proposed in Section 4 to compute the payment P in polynomial time for several types of allocation rules O. In Section 5, we provide several examples to demonstrate the effectiveness of our general framework. We conclude our paper in Section 6 with some possible future directions. 2. PRELIMINARIES 2.1 Mechanism Design As is usually done in the literature on the design of algorithms or protocols with inputs from individual agents, we adopt the assumption in neoclassical economics that all agents are rational, i.e., they respond to well-defined incentives and will deviate from the protocol only if the deviation improves their gain. A standard model for mechanism design is as follows. There are n agents 1, ..., n and each agent i has some private information ti, called its type, only known to itself. For example, the type ti can be the cost that agent i incurs for forwarding a packet in a network or a payment that the agent is willing to pay for a good in an auction.
The agents' types define the type vector t = (t1, t2, ..., tn). Each agent i has a set of strategies Ai from which it can choose. For each input vector a = (a1, ..., an) where agent i plays strategy ai ∈ Ai, the mechanism M = (O, P) computes an output o = O(a) and a payment vector p(a) = (p1(a), ..., pn(a)). Here the payment pi(·) is the money given to agent i and depends on the strategies used by the agents. A game is defined as G = (S, M), where S is the setting for the game G. Here, S consists of the parameters of the game that are set before the game starts and do not depend on the players' strategies. For example, in a unicast routing game [14], the setting consists of the topology of the network, the source node and the destination node. Throughout this paper, unless explicitly mentioned otherwise, the setting S of the game is fixed and we are only interested in how to design P for a given allocation rule O. A valuation function v(ti, o) assigns a monetary amount to agent i for each possible output o. Everything about a game G = (S, M), including the setting S, the allocation rule O and the payment scheme P, is public knowledge except agent i's actual type ti, which is private information to agent i. Let ui(ti, o) denote the utility of agent i at the outcome of the game o, given its preferences ti. Here, following a common assumption in the literature, we assume the utility for agent i is quasi-linear, i.e., ui(ti, o) = v(ti, o) + pi(a). Let a|i a'i = (a1, ..., ai−1, a'i, ai+1, ..., an), i.e., each agent j ≠ i plays action aj while agent i plays a'i. Let a−i = (a1, ..., ai−1, ai+1, ..., an) denote the actions of all agents except i. Sometimes, we write (a−i, bi) as a|i bi. An action ai is called dominant for i if it (weakly) maximizes the utility of i for all possible strategies b−i of the other agents, i.e., ui(ti, O(b−i, ai)) ≥ ui(ti, O(b−i, a'i)) for all a'i ≠ ai and all b−i. A direct-revelation mechanism is a mechanism in which the only actions available to each agent are to report its private type either truthfully or falsely to the mechanism. An incentive compatible (IC) mechanism is a direct-revelation mechanism in which an agent maximizes its utility by reporting its type ti truthfully. Thus, in a direct-revelation mechanism satisfying IC, the payment scheme should satisfy the property that, for each agent i, v(ti, O(t)) + pi(t) ≥ v(ti, O(t|i t'i)) + pi(t|i t'i). Another common requirement in the literature for mechanism design is the so-called individual rationality (IR), or voluntary participation, property: the agent's utility of participating in the output of the mechanism is not less than its utility of not participating. A direct-revelation mechanism is strategyproof if it satisfies both the IC and IR properties. Arguably the most important positive result in mechanism design is the generalized Vickrey-Clarke-Groves (VCG) mechanism by Vickrey [21], Clarke [5], and Groves [11]. The VCG mechanism applies to (affine) maximization problems where the objective function is utilitarian, g(o, t) = Σi v(ti, o) (i.e., the sum of all agents' valuations), and the set of possible outputs is assumed to be finite. A direct-revelation mechanism M = (O(t), P(t)) belongs to the VCG family if (1) the allocation O(t) maximizes Σi v(ti, o), and (2) the payment to agent i is pi(t) = Σj≠i vj(tj, O(t)) + hi(t−i), where hi(·) is an arbitrary function of t−i.
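To make the VCG family just defined concrete, the following is a minimal sketch (ours, not code from the paper) of a VCG mechanism over a finite set of outputs. It instantiates hi(t−i) with the Clarke pivot rule hi(t−i) = −max_o Σj≠i vj(tj, o); the function names and the toy valuation table are assumptions made purely for illustration.

# A minimal sketch of a VCG mechanism over a finite output set, using the
# Clarke pivot rule h_i(t_-i) = -max_o sum_{j != i} v_j(t_j, o). Illustrative
# only; the function names and toy valuations are not from the paper.

def vcg(valuations):
    """valuations[i][o] = v(t_i, o) for agent i and output o (reported types)."""
    agents = range(len(valuations))
    outputs = range(len(valuations[0]))

    def welfare(o, excluded=None):
        return sum(valuations[j][o] for j in agents if j != excluded)

    # (1) The allocation maximizes the utilitarian objective sum_i v(t_i, o).
    o_star = max(outputs, key=welfare)

    # (2) Payment to agent i: sum_{j != i} v_j(t_j, o_star) + h_i(t_-i),
    #     with the Clarke pivot h_i(t_-i) = -max_o sum_{j != i} v_j(t_j, o).
    payments = []
    for i in agents:
        h_i = -max(welfare(o, excluded=i) for o in outputs)
        payments.append(welfare(o_star, excluded=i) + h_i)
    return o_star, payments

# Toy example: three agents, two possible outputs.
print(vcg([[3, 0], [1, 2], [0, 4]]))  # chooses output 1; payments here are <= 0

With the Clarke pivot, an agent's (non-positive) payment equals the externality it imposes on the others, which is one common choice of hi within the VCG family.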
Under mild assumptions, VCG mechanisms are the only truthful implementations for utilitarian problems [10]. The allocation rule of a VCG mechanism is required to maximize the objective function in the range of the allocation function. This makes the mechanism computationally intractable in many cases. Furthermore, replacing an optimal algorithm for computing the output with an approximation algorithm usually leads to untruthful mechanisms if a VCG payment scheme is used. In this paper, we study how to design a truthful mechanism that does not optimize a utilitarian objective function. 2.2 Binary Demand Games A binary demand game is a game G = (S, M), where M = (O, P) and the range of O is {0, 1}^n. In other words, the output is an n-tuple vector O(t) = (O1(t), O2(t), ..., On(t)), where Oi(t) = 1 (respectively, 0) means that agent i is (respectively, is not) selected. Examples of binary demand games include: unicast [14, 22, 9] and multicast [23, 24, 8] (generally, subgraph construction by selecting some links/nodes to satisfy some property), facility location [7], and certain auctions [12, 2, 13]. Hereafter, we make the following further assumptions. 1. The valuations of the agents are not correlated, i.e., v(ti, o) is a function of oi only and is denoted as v(ti, oi). 2. The valuation v(ti, 0) of not being selected is a publicly known value and is normalized to 0. This assumption is needed to guarantee the IR property. Thus, throughout this paper, we only consider direct-revelation mechanisms in which every agent only needs to reveal its valuation vi = v(ti, 1). Notice that in applications where agents provide a service and receive payment, e.g., unicast and job scheduling, the valuation vi of an agent i is usually negative. For the convenience of presentation, we define the cost of agent i as ci = −v(ti, 1), i.e., it costs agent i an amount ci to provide the service. Throughout this paper, we will use ci instead of vi in our analysis. All our results also apply to the case where the agents receive the service rather than provide it, by setting ci to be negative, as in an auction. In a binary demand game, if we want to optimize an objective function g(o, t), then we call it a binary optimization demand game. The main differences between binary demand games and the problems that can be solved by VCG mechanisms are: 1. The objective function is utilitarian (or an affine maximization) for a problem solvable by VCG, while there is no restriction on the objective function for a binary demand game. 2. The allocation rule O studied here does not necessarily optimize an objective function, while a VCG mechanism only uses the output that optimizes the objective function. We do not even require the existence of an objective function. 3. We assume that the agents' valuations are not correlated in a binary demand game, while the agents' valuations may be correlated in a VCG mechanism. In this paper, we assume for technical convenience that the objective function g(o, c), if it exists, is continuous with respect to the cost ci, but most of our results are directly applicable to the discrete case without any modification. 2.3 Previous Work Lehmann et al. [12] studied how to design an efficient truthful mechanism for the single-minded combinatorial auction. In a single-minded combinatorial auction, each agent i (1 ≤ i ≤ n) only wants to buy a subset Si ⊆ S with private price ci. A single-minded bidder i declares a bid bi = (S'i, ai) with S'i ⊆ S and ai ∈ R+.
In [12], it is assumed that the set of goods allocated to an agent i is either S'i or ∅, which is known as exactness. Lehmann et al. gave a greedy round-based allocation algorithm, based on the rank ai/√|S'i|, that has an approximation ratio √m, where m is the number of goods in S. Based on the approximation algorithm, they gave a truthful payment scheme. For an allocation rule satisfying (1) exactness: the set of goods allocated to an agent i is either S'i or ∅; and (2) monotonicity: proposing more money for fewer goods cannot cause a bidder to lose its bid, they proposed a truthful payment scheme as follows: (1) charge a winning bidder a certain amount that does not depend on its own bidding; (2) charge a losing bidder 0. Notice that the assumption of exactness reveals that the single-minded auction is indeed a binary demand game. Their payment scheme inspired our payment scheme for binary demand games. In [1], Archer et al. studied combinatorial auctions where multiple copies of many different items are on sale, and each bidder i desires only one subset Si. They devised a randomized rounding method that is incentive compatible and gave a truthful mechanism for combinatorial auctions with single parameter agents that approximately maximizes the social value of the auction. As they pointed out, their method is strongly truthful in the sense that it is truthful with high probability 1 − ε, where ε is an error probability. On the contrary, in this paper, we study how to design a deterministic mechanism that is truthful based on some given allocation rules. In [2], Archer and Tardos showed how to design truthful mechanisms for several combinatorial problems where each agent's private information is naturally expressed by a single positive real number, which will always be the cost incurred per unit load. The mechanism's output could be an arbitrary real number, but the valuation is a quasi-linear function t · w, where t is the private per-unit cost and w is the work load. Archer and Tardos showed that all truthful mechanisms must have decreasing "work curves" wi and that the truthful payment should be Pi(bi) = Pi(0) + bi wi(bi) − ∫_0^bi wi(u) du. Using this model, Archer and Tardos designed truthful mechanisms for several scheduling-related problems, including minimizing the makespan, maximizing flow, and minimizing the weighted sum of completion times. Notice that when the work load w is restricted to {0, 1}, the problem is indeed a binary demand game. If we apply their characterization of truthful mechanisms, their decreasing "work curves" w imply exactly the monotonicity property of the output. But notice that their proof is heavily based on the assumption that the output is a continuous function of the cost, thus their conclusion cannot be directly applied to binary demand games. The paper of Ahuva Mu'alem and Noam Nisan [13] is closest in spirit to our work. They clearly stated that "we only discussed a limited class of bidders, single minded bidders, that was introduced by" [12]. They proved that all truthful mechanisms must have a monotone output and their payment scheme is based on the cut value. With a simple generalization, we obtain our conclusion for general binary demand games. They proposed several combination methods, including the MAX and IF-THEN-ELSE constructions, to perform partial search. All of their methods require that the welfare function associated with the output satisfy a bitonic property.
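To illustrate the cut-value idea that recurs above, here is a small numerical sketch (ours, not the paper's algorithm, which derives cut values directly for each allocation rule): it treats a monotone allocation rule O as a black box and recovers a selected agent's threshold cost by bisection. It assumes costs lie in a known range [0, C_MAX] and that lowering a selected agent's reported cost can never make it lose.

# Sketch of the cut-value (threshold) payment for a monotone allocation rule.
# Assumptions (ours): O maps a cost vector to a 0/1 vector, is monotone in each
# agent's cost, and costs lie in [0, C_MAX]. A selected agent is paid the
# largest cost it could have reported and still been selected; losers get 0.

C_MAX = 100.0

def select_two_cheapest(costs):
    """Example of a monotone allocation rule: select the two lowest-cost agents."""
    order = sorted(range(len(costs)), key=lambda i: costs[i])
    winners = set(order[:2])
    return [1 if i in winners else 0 for i in range(len(costs))]

def cut_value_payment(O, costs, i, tol=1e-6):
    """Binary search for agent i's threshold cost (0 if i is not selected)."""
    if O(costs)[i] == 0:
        return 0.0
    lo, hi = costs[i], C_MAX          # selected at lo; possibly not at hi
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        trial = list(costs)
        trial[i] = mid
        if O(trial)[i] == 1:
            lo = mid                  # still selected: the threshold is higher
        else:
            hi = mid
    return lo

costs = [10.0, 30.0, 20.0, 50.0]
print([round(cut_value_payment(select_two_cheapest, costs, i), 2)
       for i in range(len(costs))])   # roughly [30.0, 0.0, 30.0, 0.0]

The bisection is only a generic fallback; the point of the paper's framework is to compute such cut values exactly and in polynomial time from the structure of O.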
Distinction between our contributions and previous results: It has been shown in [2, 6, 12, 13] that for the single minded combinatorial auction, there exists a payment scheme which results in a truthful mechanism if the allocation rule satisfies a certain monotonicity property. Theorem 4 also depends on the monotonicity property, but it is applicable to a broader setting than the single minded combinatorial auction. In addition, the binary demand game studied here is different from the traditional packing IP's: we only require that the allocation to each agent is binary and the allocation rule satisfies a certain monotonicity property; we do not put any restrictions on the objective function. Furthermore, the main focus of this paper is to design some general techniques to find the truthful payment scheme for a given allocation rule O satisfying a certain monotonicity property. 3. GENERAL APPROACHES 3.1 Properties of Strategyproof Mechanisms 3.2 Existence of Strategyproof Mechanisms 4. COMPUTING CUT VALUE FUNCTIONS 4.1 Simple Combinations 4.2 Round-Based Allocations 4.3 Complex Combinations 5. CONCRETE EXAMPLES 5.1 Set Cover 5.2 Link Weighted Steiner Trees 5.3 Virtual Minimal Spanning Trees 5.4 Combinatorial Auctions 6. CONCLUSIONS In this paper, we have studied how to design a truthful mechanism M = (O, P) for a given allocation rule O for a binary demand game. We first showed that the allocation rule O satisfying the MP is a necessary and sufficient condition for a truthful mechanism M to exist. We then formulate a general framework for designing payment P such that the mechanism M = (O, P) is truthful and computable in polynomial time. We further presented several general composition-based techniques to compute P efficiently for various allocation rules O. Several concrete examples were discussed to demonstrate our general framework for designing P and for composition-based techniques of computing P in polynomial time. In this paper, we have concentrated on how to compute P in polynomial time. Our algorithms do not necessarily have the optimal running time for computing P given O. It would be of interest to design algorithms to compute P in optimal time. We have made some progress in this research direction in [22] by providing an algorithm to compute the payments for unicast in a node weighted graph in optimal O (n log n + m) time. Another research direction is to design an approximation allocation rule O satisfying MP with a good approximation ratio for a given binary demand game. Many works [12, 13] in the mechanism design literature are in this direction. We point out here that the goal of this paper is not to design a better allocation rule for a problem, but to design an algorithm to compute the payments efficiently when O is given. It would be of significance to design allocation rules with good approximation ratios such that a given binary demand game has a computationally efficient payment scheme. In this paper, we have studied mechanism design for binary demand games. However, some problems cannot be directly formulated as binary demand games. The job scheduling problem in [2] is such an example. For this problem, a truthful payment scheme P exists for an allocation rule O if and only if the workload assigned by O is monotonic in a certain manner. It wound be of interest to generalize our framework for designing a truthful payment scheme for a binary demand game to non-binary demand games. 
Towards this research direction, Theorem 4 can be extended to a general allocation rule O whose range is R+. The remaining difficulty is then how to compute the payment P under mild assumptions about the valuations if a truthful mechanism M = (O, P) does exist.
Towards Truthful Mechanisms for Binary Demand Games: A General Framework ABSTRACT The family of Vickrey-Clarke-Groves (VCG) mechanisms is arguably the most celebrated achievement in truthful mechanism design. However, VCG mechanisms have their limitations. They only apply to optimization problems with a utilitarian (or affine) objective function, and their output should optimize the objective function. For many optimization problems, finding the optimal output is computationally intractable. If we apply VCG mechanisms to polynomial-time algorithms that approximate the optimal solution, the resulting mechanisms may no longer be truthful. In light of these limitations, it is useful to study whether we can design a truthful non-VCG payment scheme that is computationally tractable for a given allocation rule O. In this paper, we focus our attention on binary demand games in which the agents' only available actions are to take part in the game or not. For these problems, we prove that a truthful mechanism M = (O, P) exists with a proper payment method P iff the allocation rule O satisfies a certain monotonicity property. We provide a general framework to design such P. We further propose several general composition-based techniques to compute P efficiently for various types of output. In particular, we show how P can be computed through "or/and" combinations, round-based combinations, and some more complex combinations of the outputs from subgames. 1. INTRODUCTION Towards this end, Nisan and Ronen [14] proposed the framework of algorithmic mechanism design and applied VCG mechanisms to some fundamental problems in computer science, including shortest paths, minimum spanning trees, and scheduling on unrelated machines. The VCG mechanisms [5, 11, 21] are applicable to mechanism design problems whose outputs optimize the utilitarian objective function, which is simply the sum of all agents' valuations. Unfortunately, some objective functions are not utilitarian; even for those problems with a utilitarian objective function, sometimes it is impossible to find the optimal output in polynomial time unless P = NP. Some mechanisms other than VCG mechanisms are needed to address these issues. Archer and Tardos [2] studied a scheduling problem where it is NP-hard to find the optimal output. They pointed out that a certain monotonicity property of the output work load is a necessary and sufficient condition for the existence of a truthful mechanism for their scheduling problem. Auletta et al. [3] studied a similar scheduling problem. They provided a family of deterministic truthful (2 + ε)-approximation mechanisms for any fixed number of machines and several (1 + ε)-truthful mechanisms for some NP-hard restrictions of their scheduling problem. Lehmann et al. [12] studied the single-minded combinatorial auction and gave a √m-approximation truthful mechanism, where m is the number of goods. They also pointed out that a certain monotonicity in the allocation rule can lead to a truthful mechanism. The work of Mu'alem and Nisan [13] is the closest in spirit to our work. They characterized all truthful mechanisms based on a certain monotonicity property in a single-minded auction setting. They also showed how to use MAX and IF-THEN-ELSE to combine outputs from subproblems. As shown in this paper, the MAX and IF-THEN-ELSE combinations are special cases of the composition-based techniques that we present for computing the payments in polynomial time under mild assumptions.
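As a minimal illustration of the "or/and" combinations mentioned above, the sketch below gives our reading of the idea, not the paper's theorem statements: two monotone sub-allocation rules over the same agents are combined elementwise, and, for fixed costs of the other agents, an agent's cut value in the "or" game is the maximum of its subgame cut values while in the "and" game it is the minimum, so the combined payment can be read off the subgame cut values without a fresh search.

# Sketch of the "or"/"and" composition of two monotone allocation rules and of
# their cut values. Assumptions (ours): O1 and O2 act on the same cost vector,
# each is monotone, and kappa1/kappa2 are agent i's cut values in the two
# subgames for the fixed costs of the other agents.

def or_allocation(O1, O2, costs):
    """Agent selected iff selected in at least one subgame (still monotone)."""
    return [int(bool(a) or bool(b)) for a, b in zip(O1(costs), O2(costs))]

def and_allocation(O1, O2, costs):
    """Agent selected iff selected in both subgames (still monotone)."""
    return [int(bool(a) and bool(b)) for a, b in zip(O1(costs), O2(costs))]

def or_cut_value(kappa1, kappa2):
    # selected iff c_i < kappa1 or c_i < kappa2, i.e. iff c_i < max(kappa1, kappa2)
    return max(kappa1, kappa2)

def and_cut_value(kappa1, kappa2):
    # selected iff c_i < kappa1 and c_i < kappa2, i.e. iff c_i < min(kappa1, kappa2)
    return min(kappa1, kappa2)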
More generally, we study how to design truthful mechanisms for binary demand games where the allocation of an agent is either "selected" or "not selected". We also assume that the valuations of agents are uncorrelated, i.e., the valuation of an agent only depends on its own allocation and type. Recall that a mechanism M = (O, P) consists of two parts, an allocation rule O and a payment scheme P. Previously, it is often assumed that there is an objective function g and an allocation rule O, that either optimizes g exactly or approximately. In contrast to the VCG mechanisms, we do not require that the allocation should optimize the objective function. In fact, we do not even require the existence of an objective function. Given any allocation rule O for a binary demand game, we showed that a truthful mechanism M = (O, P) exists for the game if and only if O satisfies a certain monotonicity property. The monotonicity property only guarantees the existence of a payment scheme P such that (O, P) is truthful. The remainder of the paper is organized as follows. In Section 2, we discuss preliminaries and previous works, define binary demand games and discuss the basic assumptions about binary demand games. In Section 3, we show that O satisfying a certain monotonicity property is a necessary and sufficient condition for the existence of a truthful mechanism M = (O, P). A framework is then proposed in Section 4 to compute the payment P in polynomial time for several types of allocation rules O. In Section 5, we provide several examples to demonstrate the effectiveness of our general framework. We conclude our paper in Section 6 with some possible future directions. 2. PRELIMINARIES 2.1 Mechanism Design A standard model for mechanism design is as follows. There are n agents 1,..., n and each agent i has some private information ti, called its type, only known to itself. For example, the type ti can be the cost that agent i incurs for forwarding a packet in a network or can be a payment that the agent is willing to pay for a good in an auction. The agents' types define the type vector t = (t1, t2,..., tn). Each agent i has a set of strategies Ai from which it can choose. For each input vector a = (a1,..., an) where agent i plays strategy ai ∈ Ai, the mechanism M = (O, P) computes an output o = O (a) and a payment vector p (a) = (p1 (a),..., pn (a)). Here the payment pi (·) is the money given to agent i and depends on the strategies used by the agents. Throughout this paper, unless explicitly mentioned otherwise, the setting S of the game is fixed and we are only interested in how to design P for a given allocation rule O. Here, following a common assumption in the literature, we assume the utility for agent i is quasi-linear, i.e., ui (ti, o) = v (ti, o) + Pi (a). A direct-revelation mechanism is a mechanism in which the only actions available to each agent are to report its private type either truthfully or falsely to the mechanism. An incentive compatible (IC) mechanism is a direct-revelation mechanism in which if an agent reports its type ti truthfully, then it will maximize its utility. Then, in a direct-revelation mechanism satisfying IC, the payment scheme should satisfy the property that, for each agent i, v (ti, O (t)) + pi (t) ≥ v (ti, O (t | it' i)) + pi (t | it' i). 
Another common requirement in the literature for mechanism design is so called individual rationality or voluntary participation: the agent's utility of participating in the output of the mechanism is not less than the utility of the agent of not participating. A direct-revelation mechanism is strategproof if it satisfies both IC and IR properties. Arguably the most important positive result in mechanism design is the generalized Vickrey-Clarke-Groves (VCG) mechanism by Vickrey [21], Clarke [5], and Groves [11]. The VCG mechanism applies to (affine) maximization problems where the objective function is utilitarian g (o, t) = Ei v (ti, o) (i.e., the sum of all agents' valuations) and the set of possible outputs is assumed to be finite. A direct revelation mechanism M = (O (t), P (t)) belongs to the VCG family if (1) the allocation O (t) maximizes Ei v (ti, o), and (2) the payment to agent i is pi (t) vj (tj, O (t)) + hi (t_i), where hi () is an arbitrary function of t_i. Under mild assumptions, VCG mechanisms are the only truthful implementations for utilitarian problems [10]. The allocation rule of a VCG mechanism is required to maximize the objective function in the range of the allocation function. This makes the mechanism computationally intractable in many cases. Furthermore, replacing an optimal algorithm for computing the output with an approximation algorithm usually leads to untruthful mechanisms if a VCG payment scheme is used. In this paper, we study how to design a truthful mechanism that does not optimize a utilitarian objective function. 2.2 Binary Demand Games A binary demand game is a game G = (S, M), where M = (O, P) and the range of O is {0, 1} n. Hereafter, we make the following further assumptions. 1. The valuation of the agents are not correlated, i.e., v (ti, o) is a function of v (ti, oi) only is denoted as v (ti, oi). 2. This assumption is needed to guarantee the IR property. Thus, throughout his paper, we only consider these direct-revelation mechanisms in which every agent only needs to reveal its valuation vi = v (ti, 1). Notice that in applications where agents providing service and receiving payment, e.g., unicast and job scheduling, the valuation vi of an agent i is usually negative. For the convenience of presentation, we define the cost of agent as ci = − v (ti, 1), i.e., it costs agent i ci to provide the service. Throughout this paper, we will use ci instead of vi in our analysis. All our results can apply to the case where the agents receive the service rather than provide by setting ci to negative, as in auction. In a binary demand game, if we want to optimize an objective function g (o, t), then we call it a binary optimization demand game. The main differences between the binary demand games and those problems that can be solved by VCG mechanisms are: 1. The objective function is utilitarian (or affine maximization problem) for a problem solvable by VCG while there is no restriction on the objective function for a binary demand game. 2. The allocation rule O studied here does not necessarily optimize an objective function, while a VCG mechanism only uses the output that optimizes the objective function. We even do not require the existence of an objective function. 3. We assume that the agents' valuations are not correlated in a binary demand game, while the agents' valuations may be correlated in a VCG mechanism. 2.3 Previous Work Lehmann et al. [12] studied how to design an efficient truthful mechanism for single-minded combinatorial auction. 
In a singleminded combinatorial auction, each agent i (1 <i <n) only wants to buy a subset Si C _ S with private price ci. In [12], it is assumed that the set of goods allocated to an agent i is either S0i or 0, which is known as exactness. Lehmann et al. gave a greedy round-based allocation algorithm, based on the rank i | l/z, that has an approximation ratio √ m, where m is the num ber of goods in S. Based on the approximation algorithm, they gave a truthful payment scheme. Notice the assumption of exactness reveals that the single minded auction is indeed a binary demand game. Their payment scheme inspired our payment scheme for binary demand game. They devised a randomized rounding method that is incentive compatible and gave a truthful mechanism for combinatorial auctions with single parameter agents that approximately maximizes the social value of the auction. On the contrary, in this paper, we study how to design a deterministic mechanism that is truthful based on some given allocation rules. In [2], Archer and Tardos showed how to design truthful mechanisms for several combinatorial problems where each agent's private information is naturally expressed by a single positive real number, which will always be the cost incurred per unit load. The mechanism's output could be arbitrary real number but their valuation is a quasi-linear function t · w, where t is the private per unit cost and w is the work load. Notice when the load of the problems is w = {0, 1}, it is indeed a binary demand game. If we apply their characterization of the truthful mechanism, their decreasing "work curves" w implies exactly the monotonicity property of the output. But notice that their proof is heavily based on the assumption that the output is a continuous function of the cost, thus their conclusion can't directly apply to binary demand games. The paper of Ahuva Mu'alem and Noam Nisan [13] is closest in spirit to our work. They proved that all truthful mechanisms should have a monotonicity output and their payment scheme is based on the cut value. With a simple generalization, we get our conclusion for general binary demand game. They proposed several combination methods including MAX, IF-THEN-ELSE construction to perform partial search. All of their methods required the welfare function associated with the output satisfying bitonic property. Distinction between our contributions and previous results: It has been shown in [2, 6, 12, 13] that for the single minded combinatorial auction, there exists a payment scheme which results in a truthful mechanism if the allocation rule satisfies a certain monotonicity property. Theorem 4 also depends on the monotonicity property, but it is applicable to a broader setting than the single minded combinatorial auction. In addition, the binary demand game studied here is different from the traditional packing IP's: we only require that the allocation to each agent is binary and the allocation rule satisfies a certain monotonicity property; we do not put any restrictions on the objective function. Furthermore, the main focus of this paper is to design some general techniques to find the truthful payment scheme for a given allocation rule O satisfying a certain monotonicity property. 6. CONCLUSIONS In this paper, we have studied how to design a truthful mechanism M = (O, P) for a given allocation rule O for a binary demand game. We first showed that the allocation rule O satisfying the MP is a necessary and sufficient condition for a truthful mechanism M to exist. 
We then formulate a general framework for designing payment P such that the mechanism M = (O, P) is truthful and computable in polynomial time. We further presented several general composition-based techniques to compute P efficiently for various allocation rules O. Several concrete examples were discussed to demonstrate our general framework for designing P and for composition-based techniques of computing P in polynomial time. In this paper, we have concentrated on how to compute P in polynomial time. Our algorithms do not necessarily have the optimal running time for computing P given O. It would be of interest to design algorithms to compute P in optimal time. We have made some progress in this research direction in [22] by providing an algorithm to compute the payments for unicast in a node weighted graph in optimal O (n log n + m) time. Another research direction is to design an approximation allocation rule O satisfying MP with a good approximation ratio for a given binary demand game. Many works [12, 13] in the mechanism design literature are in this direction. We point out here that the goal of this paper is not to design a better allocation rule for a problem, but to design an algorithm to compute the payments efficiently when O is given. It would be of significance to design allocation rules with good approximation ratios such that a given binary demand game has a computationally efficient payment scheme. In this paper, we have studied mechanism design for binary demand games. However, some problems cannot be directly formulated as binary demand games. The job scheduling problem in [2] is such an example. For this problem, a truthful payment scheme P exists for an allocation rule O if and only if the workload assigned by O is monotonic in a certain manner. It wound be of interest to generalize our framework for designing a truthful payment scheme for a binary demand game to non-binary demand games. Towards this research direction, Theorem 4 can be extended to a general allocation rule O, whose range is R +. The remaining difficulty is then how to compute the payment P under mild assumptions about the valuations if a truthful mechanism M = (O, P) does exist.
H-61
Impedance Coupling in Content-targeted Advertising
The current boom of the Web is associated with the revenues originated from on-line advertising. While search-based advertising is dominant, the association of ads with a Web page (during user navigation) is becoming increasingly important. In this work, we study the problem of associating ads with a Web page, referred to as content-targeted advertising, from a computer science perspective. We assume that we have access to the text of the Web page, the keywords declared by an advertiser, and a text associated with the advertiser's business. Using no other information and operating in fully automatic fashion, we propose ten strategies for solving the problem and evaluate their effectiveness. Our methods indicate that a matching strategy that takes into account the semantics of the problem (referred to as AAK for ads and keywords) can yield gains in average precision figures of 60% compared to a trivial vector-based strategy. Further, a more sophisticated impedance coupling strategy, which expands the text of the Web page to reduce vocabulary impedance with regard to an advertisement, can yield extra gains in average precision of 50%. These are first results. They suggest that great accuracy in content-targeted advertising can be attained with appropriate algorithms.
[ "content-target advertis", "advertis", "web", "match strategi", "ad and keyword", "imped coupl strategi", "on-line advertis", "paid placement strategi", "keyword target advertis", "bayesian network model", "expans term", "ad placement strategi", "bayesian network", "knn" ]
[ "P", "P", "P", "P", "P", "P", "M", "M", "M", "U", "U", "M", "U", "U" ]
Impedance Coupling in Content-targeted Advertising Berthier Ribeiro-Neto Computer Science Department Federal University of Minas Gerais Belo Horizonte, Brazil berthier@dcc.ufmg.br Marco Cristo Computer Science Department Federal University of Minas Gerais Belo Horizonte, Brazil marco@dcc.ufmg.br Paulo B. Golgher Akwan Information Technologies Av. Abra˜ao Caram 430 - Pampulha Belo Horizonte, Brazil golgher@akwan.com.br Edleno Silva de Moura Computer Science Department Federal University of Amazonas Manaus, Brazil edleno@dcc.ufam.edu.br ABSTRACT The current boom of the Web is associated with the revenues originated from on-line advertising. While search-based advertising is dominant, the association of ads with a Web page (during user navigation) is becoming increasingly important. In this work, we study the problem of associating ads with a Web page, referred to as content-targeted advertising, from a computer science perspective. We assume that we have access to the text of the Web page, the keywords declared by an advertiser, and a text associated with the advertiser``s business. Using no other information and operating in fully automatic fashion, we propose ten strategies for solving the problem and evaluate their effectiveness. Our methods indicate that a matching strategy that takes into account the semantics of the problem (referred to as AAK for ads and keywords) can yield gains in average precision figures of 60% compared to a trivial vector-based strategy. Further, a more sophisticated impedance coupling strategy, which expands the text of the Web page to reduce vocabulary impedance with regard to an advertisement, can yield extra gains in average precision of 50%. These are first results. They suggest that great accuracy in content-targeted advertising can be attained with appropriate algorithms. Categories and Subject Descriptors H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval; I.5.3 [Pattern Recognition]: Applications-Text processing General Terms Algorithms, Experimentation 1. INTRODUCTION The emergence of the Internet has opened up new marketing opportunities. In fact, a company has now the possibility of showing its advertisements (ads) to millions of people at a low cost. During the 90``s, many companies invested heavily on advertising in the Internet with apparently no concerns about their investment return [16]. This situation radically changed in the following decade when the failure of many Web companies led to a dropping in supply of cheap venture capital and a considerable reduction in on-line advertising investments [15,16]. It was clear then that more effective strategies for on-line advertising were required. For that, it was necessary to take into account short-term and long-term interests of the users related to their information needs [9,14]. As a consequence, many companies intensified the adoption of intrusive techniques for gathering information of users mostly without their consent [8]. This raised privacy issues which stimulated the research for less invasive measures [16]. More recently, Internet information gatekeepers as, for example, search engines, recommender systems, and comparison shopping services, have employed what is called paid placement strategies [3]. In such methods, an advertiser company is given prominent positioning in advertisement lists in return for a placement fee. Amongst these methods, the most popular one is a non-intrusive technique called keyword targeted marketing [16]. 
In this technique, keywords extracted from the user``s search query are matched against keywords associated with ads provided by advertisers. A ranking of the ads, which also takes into consideration the amount that each advertiser is willing to pay, is computed. The top ranked ads are displayed in the search result page together with the answers for the user query. The success of keyword targeted marketing has motivated information gatekeepers to offer their advertisement services in different contexts. For example, as shown in Figure 1, relevant ads could be shown to users directly in the pages of information portals. The motivation is to take advantage of 496 the users immediate information interests at browsing time. The problem of matching ads to a Web page that is browsed, which we also refer to as content-targeted advertising [1], is different from that of keyword marketing. In this case, instead of dealing with users'' keywords, we have to use the contents of a Web page to decide which ads to display. Figure 1: Example of content-based advertising in the page of a newspaper. The middle slice of the page shows the beginning of an article about the launch of a DVD movie. At the bottom slice, we can see advertisements picked for this page by Google``s content-based advertising system, AdSense. It is important to notice that paid placement advertising strategies imply some risks to information gatekeepers. For instance, there is the possibility of a negative impact on their credibility which, at long term, can demise their market share [3]. This makes investments in the quality of ad recommendation systems even more important to minimize the possibility of exhibiting ads unrelated to the user``s interests. By investing in their ad systems, information gatekeepers are investing in the maintenance of their credibility and in the reinforcement of a positive user attitude towards the advertisers and their ads [14]. Further, that can translate into higher clickthrough rates that lead to an increase in revenues for information gatekeepers and advertisers, with gains to all parts [3]. In this work, we focus on the problem of content-targeted advertising. We propose new strategies for associating ads with a Web page. Five of these strategies are referred to as matching strategies. They are based on the idea of matching the text of the Web page directly to the text of the ads and its associated keywords. Five other strategies, which we here introduce, are referred to as impedance coupling strategies. They are based on the idea of expanding the Web page with new terms to facilitate the task of matching ads and Web pages. This is motivated by the observation that there is frequently a mismatch between the vocabulary of a Web page and the vocabulary of an advertisement. We say that there is a vocabulary impedance problem and that our technique provides a positive effect of impedance coupling by reducing the vocabulary impedance. Further, all our strategies rely on information that is already available to information gatekeepers that operate keyword targeted advertising systems. Thus, no other data from the advertiser is required. Using a sample of a real case database with over 93,000 ads and 100 Web pages selected for testing, we evaluate our ad recommendation strategies. First, we evaluate the five matching strategies. They match ads to a Web page using a standard vector model and provide what we may call trivial solutions. 
Our results indicate that a strategy that matches the ad plus its keywords to a Web page, requiring the keywords to appear in the Web page, provides improvements in average precision figures of roughly 60% relative to a strategy that simply matches the ads to the Web page. Such a strategy, which we call AAK (for ads and keywords), is then taken as our baseline. Following that, we evaluate the five impedance coupling strategies. They are based on the idea of expanding the ad and the Web page with new terms to reduce the vocabulary impedance between their texts. Our results indicate that it is possible to generate extra improvements in average precision figures of roughly 50% relative to the AAK strategy. The paper is organized as follows. In section 2, we introduce five matching strategies to solve content-targeted advertising. In section 3, we present our impedance coupling strategies. In section 4, we describe our experimental methodology and datasets and discuss our results. In section 5 we discuss related work. In section 6 we present our conclusions. 2. MATCHING STRATEGIES Keyword advertising relies on matching search queries to ads and their associated keywords. Context-based advertising, which we address here, relies on matching ads and their associated keywords to the text of a Web page. Given a certain Web page p, which we call the triggering page, our task is to select advertisements related to the contents of p. Without loss of generality, we consider that an advertisement ai is composed of a title, a textual description, and a hyperlink. To illustrate, for the first ad by Google shown in Figure 1, the title is Star Wars Trilogy Full, the description is Get this popular DVD free. Free w/ free shopping. Sign up now, and the hyperlink points to the site www.freegiftworld.com. Advertisements can be grouped by advertisers in groups called campaigns, such that a campaign can have one or more advertisements. Given our triggering page p and a set A of ads, a simple way of ranking ai ∈ A with regard to p is by matching the contents of p to the contents of ai. For this, we use the vector space model [2], as discussed in the following. In the vector space model, queries and documents are represented as weighted vectors in an n-dimensional space. Let wiq be the weight associated with term ti in the query q and wij be the weight associated with term ti in the document dj. Then, q = (w1q, w2q, ..., wiq, ..., wnq) and dj = (w1j, w2j, ..., wij, ..., wnj) are the weighted vectors used to represent the query q and the document dj. These weights can be computed using classic tf-idf schemes. In such schemes, weights are taken as the product between factors that quantify the importance of a term in a document (given by the term frequency, or tf, factor) and its rarity in the whole collection (given by the inverse document frequency, or idf, factor), see [2] for details. The ranking of the query q with regard to the document dj is computed by the cosine similarity formula, that is, the cosine of the angle between the two corresponding vectors: sim(q, dj) = (q · dj) / (|q| × |dj|) = Σ_{i=1..n} wiq · wij / (√(Σ_{i=1..n} wiq^2) · √(Σ_{i=1..n} wij^2))  (1). By considering p as the query and ai as the document, we can rank the ads with regard to the Web page p. This is our first matching strategy. It is represented by the function AD given by: AD(p, ai) = sim(p, ai), where AD stands for direct match of the ad, composed of title and description, and sim(p, ai) is computed according to Eq. (1).
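As a small sketch of Eq. (1) and the AD strategy, the code below builds tf-idf vectors for a page and an ad and scores them by cosine similarity. The tokenization, the exact tf-idf variant, and the toy idf table are simplifying assumptions of ours, not the paper's implementation.

# Minimal sketch of Eq. (1) and the AD strategy: represent the triggering page
# and an ad as tf-idf vectors and rank ads by cosine similarity.
import math
from collections import Counter

def tf_idf_vector(text, idf):
    tf = Counter(text.lower().split())
    return {t: f * idf.get(t, 0.0) for t, f in tf.items()}

def cosine(q, d):
    dot = sum(w * d.get(t, 0.0) for t, w in q.items())
    nq = math.sqrt(sum(w * w for w in q.values()))
    nd = math.sqrt(sum(w * w for w in d.values()))
    return dot / (nq * nd) if nq and nd else 0.0

def ad_score(page, ad, idf):
    """AD(p, a_i) = sim(p, a_i), matching the ad's title plus description."""
    return cosine(tf_idf_vector(page, idf), tf_idf_vector(ad, idf))

# Toy idf table (in the paper, idf can be computed per ad, advertiser, or campaign).
idf = {"wine": 2.0, "red": 1.5, "argentina": 2.5, "dvd": 2.0}
print(ad_score("argentina red wine guide", "red wine shop", idf))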
In our second method, we use another source of evidence provided by the advertisers: the keywords. With each advertisement ai an advertiser associates a keyword ki, which may be composed of one or more terms. We denote the association between an advertisement ai and a keyword ki as the pair (ai, ki) ∈ K, where K is the set of associations made by the advertisers. In the case of keyword targeted advertising, such keywords are used to match the ads to the user queries. Here, we use them to match ads to the Web page p. This provides our second method for ad matching, given by: KW(p, ai) = sim(p, ki), where (ai, ki) ∈ K and KW stands for match the ad keywords. We notice that most of the keywords selected by advertisers are also present in the ads associated with those keywords. For instance, in our advertisement test collection, this is true for 90% of the ads. Thus, instead of using the keywords as matching devices, we can use them to emphasize the main concepts in an ad, in an attempt to improve our AD strategy. This leads to our third method of ad matching, given by: AD KW(p, ai) = sim(p, ai ∪ ki), where (ai, ki) ∈ K and AD KW stands for match the ad and its keywords. Finally, it is important to notice that the keyword ki associated with ai might not appear at all in the triggering page p, even when ai is highly ranked. However, if we assume that ki summarizes the main topic of ai according to the advertiser's viewpoint, it can be interesting to assure its presence in p. This reasoning suggests that requiring the occurrence of the keyword ki in the triggering page p as a condition to associate ai with p might lead to improved results. This leads to two extra matching strategies, as follows: ANDKW(p, ai) = sim(p, ai) if ki ∈ p, and 0 otherwise; AD ANDKW(p, ai) = AAK(p, ai) = sim(p, ai ∪ ki) if ki ∈ p, and 0 otherwise, where (ai, ki) ∈ K, ANDKW stands for match the ad keywords and force their appearance, and AD ANDKW (or AAK for ads and keywords) stands for match the ad, its keywords, and force their appearance. As we will see in our results, the best among these simple methods is AAK. Thus, it will be used as the baseline for our impedance coupling strategies, which we now discuss. 3. IMPEDANCE COUPLING STRATEGIES Two key issues become clear as one plays with the content-targeted advertising problem. First, the triggering page normally belongs to a broader contextual scope than that of the advertisements. Second, the association between a good advertisement and the triggering page might depend on a topic that is not mentioned explicitly in the triggering page. The first issue is due to the fact that Web pages can be about any subject and that advertisements are concise in nature. That is, ads tend to be more topic-restricted than Web pages. The second issue is related to the fact that, as we later discuss, most advertisers place a small number of advertisements. As a result, we have few terms describing their interest areas. Consequently, these terms tend to be of a more general nature. For instance, a car shop probably would prefer to use car instead of super sport to describe its core business topic. As a consequence, many specific terms that appear in the triggering page find no match in the advertisements. To make matters worse, a page might refer to an entity or subject of the world through a label that is distinct from the label selected by an advertiser to refer to the same entity.
A consequence of these two issues is that vocabularies of pages and ads have low intersection, even when an ad is related to a page. We cite this problem from now on as the vocabulary impedance problem. In our experiments, we realized that this problem limits the final quality of direct matching strategies. Therefore, we studied alternatives to reduce the referred vocabulary impedance. For this, we propose to expand the triggering pages with new terms. Figure 2 illustrates our intuition. We already know that the addition of keywords (selected by the advertiser) to the ads leads to improved results. We say that a keyword reduces the vocabulary impedance by providing an alternative matching path. Our idea is to add new terms (words) to the Web page p to also reduce the vocabulary impedance by providing a second alternative matching path. We refer to our expansion technique as impedance coupling. For this, we proceed as follows.
[Figure 2 (diagram labels: triggering page p, expansion terms, keyword, vocabulary impedance, ad): Addition of new terms to a Web page to reduce the vocabulary impedance.]
An advertiser trying to describe a certain topic in a concise way probably will choose general terms to characterize that topic. To facilitate the matching between this ad and our triggering page p, we need to associate new general terms with p. For this, we assume that Web documents similar to the triggering page p share common topics. Therefore, by inspecting the vocabulary of these similar documents we might find good terms for better characterizing the main topics in the page p. We now describe this idea using a Bayesian network model [10,11,13] depicted in Figure 3.
[Figure 3 (nodes R; D0, D1, ..., Dj, ..., Dk; T1, T2, T3, ..., Ti, ..., Tm): Bayesian network model for our impedance coupling technique.]
In our model, which is based on the belief network in [11], the nodes represent pieces of information in the domain. With each node is associated a binary random variable, which takes the value 1 to mean that the corresponding entity (a page or terms) is observed and, thus, relevant in our computations. In this case, we say that the information was observed. Node R represents the page r, a new representation for the triggering page p. Let N be the set of the k most similar documents to the triggering page, including the triggering page p itself, in a large enough Web collection C. Root nodes D0 through Dk represent the documents in N, that is, the triggering page D0 and its k nearest neighbors, D1 through Dk, among all pages in C. There is an edge from node Dj to node R if document dj is in N. Nodes T1 through Tm represent the terms in the vocabulary of C. There is an edge from node Dj to a node Ti if term ti occurs in document dj. In our model, the observation of the pages in N leads to the observation of a new representation of the triggering page p and to a set of terms describing the main topics associated with p and its neighbors. Given these definitions, we can now use the network to determine the probability that a term ti is a good term for representing a topic of the triggering page p. In other words, we are interested in the probability of observing the final evidence regarding a term ti, given that the new representation of the page p has been observed, P(Ti = 1|R = 1). This translates into the following equation: P(Ti|R) = (1/P(R)) Σ_d P(Ti|d) P(R|d) P(d)  (2), where d represents the set of states of the document nodes.
Since we are interested just in the states in which only a single document dj is observed and P(d) can be regarded as a constant, we can rewrite Eq. (2) as: P(Ti|R) = (ν/P(R)) Σ_{j=0..k} P(Ti|dj) P(R|dj)  (3), where dj represents the state of the document nodes in which only document dj is observed and ν is a constant associated with P(dj). (To simplify our notation, we represent the probabilities P(X = 1) as P(X) and P(X = 0) as P(X̄).) Eq. (3) is the general equation to compute the probability that a term ti is related to the triggering page. We now define the probabilities P(Ti|dj) and P(R|dj) as follows: P(Ti|dj) = η wij  (4); P(R|dj) = 1 − α if j = 0, and P(R|dj) = α sim(r, dj) if 1 ≤ j ≤ k  (5), where η is a normalizing constant, wij is the weight associated with term ti in the document dj, and sim(p, dj) is given by Eq. (1), i.e., is the cosine similarity between p and dj. The weight wij is computed using a classic tf-idf scheme and is zero if term ti does not occur in document dj. Notice that P(T̄i|dj) = 1 − P(Ti|dj) and P(R̄|dj) = 1 − P(R|dj). By defining the constant α, it is possible to determine how important the influence of the triggering page p should be to its new representation r. By substituting Eq. (4) and Eq. (5) into Eq. (3), we obtain: P(Ti|R) = ρ ((1 − α) wi0 + α Σ_{j=1..k} wij sim(r, dj))  (6), where ρ = ην is a normalizing constant. We use Eq. (6) to determine the set of terms that will compose r, as illustrated in Figure 2. Let ttop be the top ranked term according to Eq. (6). The set r is composed of the terms ti such that P(Ti|R) / P(Ttop|R) ≥ β, where β is a given threshold. In our experiments, we have used β = 0.05. Notice that the set r might contain terms that already occur in p. That is, while we will refer to the set r as expansion terms, it should be clear that p and r may overlap, i.e., p ∩ r ≠ ∅. By using α = 0, we simply consider the terms originally in page p. By increasing α, we relax the context of the page p, adding terms from neighbor pages, turning page p into its new representation r. This is important because, sometimes, a topic apparently not important in the triggering page offers a good opportunity for advertising. For example, consider a triggering page that describes a congress in London about digital photography. Although London is probably not an important topic in this page, advertisements about hotels in London would be appropriate. Thus, adding hotels to page p is important. This suggests using α > 0, that is, preserving the contents of p and using the terms in r to expand p. In this paper, we examine both approaches. Thus, in our sixth method we match r, the set of new expansion terms, directly to the ads, as follows: AAK T(p, ai) = AAK(r, ai), where AAK T stands for match the ad and keywords to the set r of expansion terms. In our seventh method, we match an expanded page p to the ads as follows: AAK EXP(p, ai) = AAK(p ∪ r, ai), where AAK EXP stands for match the ad and keywords to the expanded triggering page. To improve our ad placement methods, another external source that we can use is the content of the page h pointed to by the advertisement's hyperlink, that is, its landing page. After all, this page comprises the real target of the ad and perhaps could present a more detailed description of the product or service being advertised. Given that the advertisement ai points to the landing page hi, we denote this association as the pair (ai, hi) ∈ H, where H is the set of associations between the ads and the pages they point to.
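Before turning to the landing-page methods below, here is a small sketch of the expansion step of Eq. (6). It is our illustration with assumed data structures: each candidate term is scored by (1 − α) wi0 + α Σj wij sim(r, dj), and terms scoring at least β times the top score are kept; the normalizing constant ρ cancels in the ratio test and is therefore omitted.

# Sketch of the expansion-term selection in Eq. (6). Inputs we assume are
# already computed (e.g. with the cosine routine sketched earlier):
#   page_weights      term -> tf-idf weight in the triggering page (w_i0)
#   neighbor_weights  list of term -> weight dicts, one per neighbor (w_ij)
#   neighbor_sims     list of similarities sim(r, d_j), one per neighbor

def expansion_terms(page_weights, neighbor_weights, neighbor_sims,
                    alpha=0.5, beta=0.05):
    terms = set(page_weights)
    for w in neighbor_weights:
        terms.update(w)
    scores = {}
    for t in terms:
        s = (1.0 - alpha) * page_weights.get(t, 0.0)
        s += alpha * sum(w.get(t, 0.0) * sim
                         for w, sim in zip(neighbor_weights, neighbor_sims))
        scores[t] = s   # rho is dropped: it cancels in the ratio test below
    top = max(scores.values()) if scores else 0.0
    return {t for t, s in scores.items() if top > 0 and s >= beta * top}

page = {"argentina": 0.09, "whites": 0.035, "wines": 0.01}
neighbors = [{"wines": 0.3, "red": 0.2}, {"aroma": 0.25, "wines": 0.1}]
sims = [0.8, 0.6]
print(expansion_terms(page, neighbors, sims))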
Our eighth method consists of matching the triggering page p to the landing pages pointed to by the advertisements, as follows: H(p, ai) = sim(p, hi), where (ai, hi) ∈ H and H stands for match the hyperlink pointed to by the ad. We can also combine this information with the more promising methods previously described, AAK and AAK EXP, as follows. Given that (ai, hi) ∈ H and (ai, ki) ∈ K, we have our last two methods: AAK H(p, ai) = sim(p, ai ∪ hi ∪ ki) if ki ∈ p, and 0 otherwise; AAK EXP H(p, ai) = sim(p ∪ r, ai ∪ hi ∪ ki) if ki ∈ (p ∪ r), and 0 otherwise, where AAK H stands for match ads and keywords also considering the page pointed by the ad, and AAK EXP H stands for match ads and keywords with expanded triggering page, also considering the page pointed by the ad. Notice that other combinations were not considered in this study due to space restrictions. These other combinations led to poor results in our experimentation and for this reason were discarded. 4. EXPERIMENTS 4.1 Methodology To evaluate our ad placement strategies, we performed a series of experiments using a sample of a real case ad collection with 93,972 advertisements, 1,744 advertisers, and 68,238 keywords (data in Portuguese provided by an on-line advertisement company that operates in Brazil). The advertisements are grouped into 2,029 campaigns with an average of 1.16 campaigns per advertiser. For the strategies AAK T and AAK EXP, we had to generate a set of expansion terms. For that, we used a database of Web pages crawled by the TodoBR search engine [12] (http://www.todobr.com.br/). This database is composed of 5,939,061 pages of the Brazilian Web, under the domain .br. For the strategies H, AAK H, and AAK EXP H, we also crawled the pages pointed to by the advertisers. No other filtering method was applied to these pages besides the removal of HTML tags. Since we are initially interested in the placement of advertisements in the pages of information portals, our test collection was composed of 100 pages extracted from a Brazilian newspaper. These are our triggering pages. They were crawled in such a way that only the contents of their articles were preserved. As we have no preferences for particular topics, the crawled pages cover topics as diverse as politics, economy, sports, and culture. For each of our 100 triggering pages, we selected the top three ranked ads provided by each of our 10 ad placement strategies. Thus, for each triggering page we select no more than 30 ads. These top ads were then inserted in a pool for that triggering page. Each pool contained an average of 15.81 advertisements. All advertisements in each pool were submitted to a manual evaluation by a group of 15 users. The average number of relevant advertisements per page pool was 5.15. Notice that we adopted the same pooling method used to evaluate the TREC Web-based collection [6]. To quantify the precision of our results, we used 11-point average figures [2]. Since we are not able to evaluate the entire ad collection, recall values are relative to the set of evaluated advertisements. 4.2 Tuning Idf factors We start by analyzing the impact of different idf factors in our advertisement collection. Idf factors are important because they quantify how discriminative a term is in the collection. In our ad collection, idf factors can be computed by taking ads, advertisers or campaigns as documents. To exemplify, consider the computation of ad idf for a term ti that occurs 9 times in a collection of 100 ads.
As we observe in Figure 4, for the AD strategy, the best ranking is obtained by the use of campaign idf, that is, by calculating our idf factor so that it discriminates campaigns. Similar results were obtained for all the other methods.

Figure 4: Precision-recall curves obtained for the AD strategy using ad, advertiser, and campaign idf factors.

This reflects the fact that terms might be better discriminators for a business topic than for a specific ad. This effect can be accomplished by calculating the idf factor relative to advertisers or campaigns instead of ads. In fact, campaign idf factors yielded the best results. Thus, they will be used in all the experiments reported from now on.

4.3 Results

Matching Strategies

Figure 5 displays the results for the matching strategies presented in Section 2. As shown, directly matching the contents of the ad to the triggering page (AD strategy) is not so effective. The reason is that the ad contents are very noisy. They may contain messages that do not properly describe the ad topics, such as requests for user actions (e.g., "visit our site") and general sentences that could be applied to any product or service (e.g., "we deliver for the whole country"). On the other hand, an advertiser-provided keyword summarizes well the topic of the ad. As a consequence, the KW strategy is superior to the AD and AD_KW strategies. This situation changes when we require the keywords to appear in the target Web page. By filtering out ads whose keywords do not occur in the triggering page, much noise is discarded. This makes ANDKW a better alternative than KW. Further, in this new situation, the contents of the ad become useful to rank the most relevant ads, making AD_ANDKW (or AAK, for "ads and keywords") the best among all described methods. For this reason, we adopt AAK as our baseline in the next set of experiments.

Figure 5: Comparison among our five matching strategies. AAK ("ads and keywords") is superior.

Table 1 illustrates average precision figures for Figure 5. We also present actual hits per advertisement slot. We call "hit" an assignment of an ad (to the triggering page) that was considered relevant by the evaluators. We notice that our AAK strategy provides a gain in average precision of 60% relative to the trivial AD strategy. This shows that careful consideration of the evidence related to the problem does pay off.

                           Hits                  11-pt average
Methods            #1   #2   #3   total       score    gain(%)
AD                 41   32   13    86         0.104        -
AD_KW              51   28   17    96         0.106      +1.9
KW                 46   34   28   108         0.125     +20.2
ANDKW              49   37   35   121         0.153     +47.1
AD_ANDKW (AAK)     51   48   39   138         0.168     +61.5

Table 1: Average precision figures, corresponding to Figure 5, for our five matching strategies. Columns labelled #1, #2, and #3 indicate the total of hits in the first, second, and third advertisement slots, respectively. The AAK strategy provides improvements of 60% relative to the AD strategy.
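For clarity, the gain(%) column is simply the relative improvement of each method's 11-pt average score over the AD baseline. A quick check of the arithmetic (ours, not from the paper):

```python
# gain(%) = (score / score_AD - 1) * 100, with scores taken from Table 1
scores = {"AD": 0.104, "AD_KW": 0.106, "KW": 0.125,
          "ANDKW": 0.153, "AD_ANDKW (AAK)": 0.168}
for method, score in scores.items():
    print(f"{method}: {(score / scores['AD'] - 1) * 100:+.1f}%")  # AAK -> +61.5%
```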
Impedance Coupling Strategies

Table 2 shows top ranked terms that occur in a page covering Argentinean wines produced using grapes derived from the Bordeaux region of France. The p column includes the top terms for this page ranked according to our tf-idf weighting scheme. The r column includes the top ranked expansion terms generated according to Eq. (6). Notice that the expansion terms not only emphasize important terms of the target page (by increasing their weights), such as "wines" and "whites", but also reveal new terms related to the main topic of the page, such as "aroma" and "red". Further, they avoid some uninteresting terms, such as "obtained" and "country".

                  p                          r
Rank       term        score         term         score
1          argentina   0.090         wines        0.251
2          obtained*   0.047         wine*        0.140
3          class*      0.036         whites       0.091
4          whites      0.035         red*         0.057
5          french*     0.031         grape        0.051
6          origin*     0.029         bordeaux     0.045
7          france*     0.029         acideness*   0.038
8          grape       0.017         argentina    0.037
9          sweet*      0.016         aroma*       0.037
10         country*    0.013         blanc*       0.036
...
35         wines       0.010         -

Table 2: Top ranked terms for the triggering page p according to our tf-idf weighting scheme and top ranked terms for r, the expansion terms for p, generated according to Eq. (6). Ranking scores were normalized in order to sum up to 1. Terms marked with '*' are not shared by the sets p and r.

Figure 6 illustrates our results when the set r of expansion terms is used. They show that matching the ads to the terms in the set r instead of to the triggering page p (AAK_T strategy) leads to a considerable improvement over our baseline, AAK. The gain is even larger when we use the terms in r to expand the triggering page (AAK_EXP method). This confirms our hypothesis that the triggering page could have some interesting terms that should not be completely discarded.

Figure 6: Impact of using a new representation for the triggering page, one that includes expansion terms.

Finally, we analyze the impact on the ranking of using the contents of the pages pointed to by the ads. Figure 7 displays our results. It is clear that using only the contents of the pages pointed to by the ads (H strategy) yields very poor results. However, combining evidence from the pages pointed to by the ads with our baseline yields improved results. Most important, combining our best strategy so far (AAK_EXP) with pages pointed to by ads (AAK_EXP_H strategy) leads to superior results. This happens because the two additional sources of evidence, expansion terms and pages pointed to by the ads, are distinct and complementary, providing extra and valuable information for matching ads to a Web page.

Figure 7: Impact of using the contents of the page pointed to by the ad (the hyperlink).
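To make this combination of evidence concrete, the sketch below assembles the AAK_EXP_H score for a single ad from the pieces defined earlier: the expanded page p ∪ r, the keyword-containment filter, and the cosine similarity against the ad text, its keyword, and its landing page. It is a minimal sketch under our own assumptions: sparse dictionaries stand in for tf-idf vectors, and taking the maximum weight when merging vectors is our choice, not something specified in the paper.

```python
def merge(*vecs):
    """Union of sparse tf-idf vectors; a repeated term keeps its largest
    weight (an assumption; the paper does not specify how union weights
    are combined)."""
    out = {}
    for vec in vecs:
        for term, w in vec.items():
            out[term] = max(w, out.get(term, 0.0))
    return out

def cosine(u, v):
    """Cosine similarity between two sparse vectors, as in Eq. (1)."""
    dot = sum(w * v.get(term, 0.0) for term, w in u.items())
    norm_u = sum(w * w for w in u.values()) ** 0.5
    norm_v = sum(w * w for w in v.values()) ** 0.5
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def aak_exp_h(page_vec, expansion_vec, ad_vec, keyword_vec, landing_vec):
    """Score of one ad under the AAK_EXP_H strategy."""
    expanded_page = merge(page_vec, expansion_vec)            # p ∪ r
    if not all(term in expanded_page for term in keyword_vec):  # require k_i ⊂ (p ∪ r)
        return 0.0
    ad_side = merge(ad_vec, keyword_vec, landing_vec)         # a_i ∪ k_i ∪ h_i
    return cosine(expanded_page, ad_side)
```

Ranking the ads of a pool then amounts to sorting them by this score, exactly as for the simpler strategies.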
That is, we might design the system such that its performance depends fundamentally on the rate that new pages 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0 0.2 0.4 0.6 0.8 1 precision recall AAK_EXP_H AAK_EXP AAK_T AAK_H AAK H Figure 8: Comparison among our ad placement strategies. Methods Hits 11-pt average #1 #2 #3 total score gain(%) H 28 5 6 39 0.026 -84.3 AAK 51 48 39 138 0.168 AAK H 52 50 46 148 0.191 +13.5 AAK T 65 49 43 157 0.226 +34.6 AAK EXP 70 52 53 175 0.242 +43.8 AAK EXP H 64 61 51 176 0.253 +50.3 Table 3: Results for our impedance coupling strategies. are published and the rate that ads are added or modified. Further, the data needed by our strategies (page crawling, page expansion, and ad link crawling) can be gathered and processed offline, not affecting the user experience. Thus, from this point of view, the performance is not critical and will not be addressed in this work. 5. RELATED WORK Several works have stressed the importance of relevance in advertising. For example, in [14] it was shown that advertisements that are presented to users when they are not interested on them are viewed just as annoyance. Thus, in order to be effective, the authors conclude that advertisements should be relevant to consumer concerns at the time of exposure. The results in [9] enforce this conclusion by pointing out that the more targeted the advertising, the more effective it is. Therefore it is not surprising that other works have addressed the relevance issue. For instance, in [8] it is proposed a system called ADWIZ that is able to adapt online advertisement to a user``s short-term interests in a non-intrusive way. Contrary to our work, ADWIZ does not directly use the content of the page viewed by the user. It relies on search keywords supplied by the user to search engines and on the URL of the page requested by the user. On the other hand, in [7] the authors presented an intrusive approach in which an agent sits between advertisers and the user``s browser allowing a banner to be placed into the currently viewed page. In spite of having the opportunity to use the page``s content, 502 the agent infers relevance based on category information and user``s private information collected along the time. In [5] the authors provide a comparison between the ranking strategies used by Google and Overture for their keyword advertising systems. Both systems select advertisements by matching them to the keywords provided by the user in a search query and rank the resulting advertisement list according to the advertisers'' willingness to pay. In particular, Google approach also considers the clickthrough rate of each advertisement as an additional evidence for its relevance. The authors conclude that Google``s strategy is better than that used by Overture. As mentioned before, the ranking problem in keyword advertising is different from that of content-targeted advertising. Instead of dealing with keywords provided by users in search queries, we have to deal with the contents of a page which can be very diffuse. Finally, the work in [4] focuses on improving search engine results in a TREC collection by means of an automatic query expansion method based on kNN [17]. Such method resembles our expansion approach presented in section 3. Our method is different from that presented by [4]. They expand user queries applied to a document collection with terms extracted from the top k documents returned as answer to the query in the same collection. 
In our case, we use two collections: an advertisement collection and a Web collection. We expand triggering pages with terms extracted from the Web collection and then we match these expanded pages to the ads from the advertisement collection. By doing this, we emphasize the main topics of the triggering pages, increasing the possibility of associating relevant ads with them.

6. CONCLUSIONS

In this work we investigated ten distinct strategies for associating ads with a Web page that is browsed (content-targeted advertising). Five of our strategies attempt to match the ads directly to the Web page. Because of that, they are called matching strategies. The other five strategies recognize that there is a vocabulary impedance problem between ads and Web pages and attempt to solve the problem by expanding the Web pages and the ads with new terms. Because of that, they are called impedance coupling strategies.

Using a sample of a real case database with over 93 thousand ads, we evaluated our strategies. For the five matching strategies, our results indicated that planned consideration of additional evidence (such as the keywords provided by the advertisers) yielded gains in average precision figures (for our test collection) of 60%. This was obtained by a strategy called AAK (for "ads and keywords"), which is taken as the baseline for evaluating our more advanced impedance coupling strategies. For our five impedance coupling strategies, the results indicate that additional gains in average precision of 50% (now relative to the AAK strategy) are possible. These were generated by expanding the Web page with new terms (obtained using a sample Web collection containing over five million pages) and the ads with the contents of the page they point to (a hyperlink provided by the advertisers). These are first-time results indicating that high-quality content-targeted advertising is feasible and practical.

7. ACKNOWLEDGEMENTS

This work was supported in part by the GERINDO project, grant MCT/CNPq/CT-INFO 552.087/02-5, by CNPq grant 300.188/95-1 (Berthier Ribeiro-Neto), and by CNPq grant 303.576/04-9 (Edleno Silva de Moura). Marco Cristo is supported by Fucapi, Manaus, AM, Brazil.

8. REFERENCES

[1] The Google AdWords. Google content-targeted advertising. http://adwords.google.com/select/ct_faq.html, November 2004.
[2] R. Baeza-Yates and B. Ribeiro-Neto. Modern Information Retrieval. Addison-Wesley-Longman, 1st edition, 1999.
[3] H. K. Bhargava and J. Feng. Paid placement strategies for internet search engines. In Proceedings of the Eleventh International Conference on World Wide Web, pages 117-123. ACM Press, 2002.
[4] E. P. Chan, S. Garcia, and S. Roukos. TREC-5 ad-hoc retrieval using k nearest-neighbors re-scoring. In The Fifth Text REtrieval Conference (TREC-5). National Institute of Standards and Technology (NIST), November 1996.
[5] J. Feng, H. K. Bhargava, and D. Pennock. Comparison of allocation rules for paid placement advertising in search engines. In Proceedings of the 5th International Conference on Electronic Commerce, pages 294-299. ACM Press, 2003.
[6] D. Hawking, N. Craswell, and P. B. Thistlewaite. Overview of TREC-7 very large collection track. In The Seventh Text REtrieval Conference (TREC-7), pages 91-104, Gaithersburg, Maryland, USA, November 1998.
[7] Y. Kohda and S. Endo. Ubiquitous advertising on the WWW: merging advertisement on the browser. Comput. Netw. ISDN Syst., 28(7-11):1493-1499, 1996.
[8] M. Langheinrich, A. Nakamura, N. Abe, T. Kamba, and Y. Koseki. Unintrusive customization techniques for web advertising. Comput. Networks, 31(11-16):1259-1272, 1999.
[9] T. P. Novak and D. L. Hoffman. New metrics for new media: toward the development of web measurement standards. World Wide Web J., 2(1):213-246, 1997.
[10] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann Publishers, 2nd edition, 1988.
[11] B. Ribeiro-Neto and R. Muntz. A belief network model for IR. In Proceedings of the 19th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 253-260, Zurich, Switzerland, August 1996.
[12] A. Silva, E. Veloso, P. Golgher, B. Ribeiro-Neto, A. Laender, and N. Ziviani. CobWeb - a crawler for the Brazilian Web. In Proceedings of the String Processing and Information Retrieval Symposium (SPIRE'99), pages 184-191, Cancun, Mexico, September 1999.
[13] H. Turtle and W. B. Croft. Evaluation of an inference network-based retrieval model. ACM Transactions on Information Systems, 9(3):187-222, July 1991.
[14] C. Wang, P. Zhang, R. Choi, and M. Daeredita. Understanding consumers attitude toward advertising. In Eighth Americas Conference on Information Systems, pages 1143-1148, August 2002.
[15] M. Weideman. Ethical issues on content distribution to digital consumers via paid placement as opposed to website visibility in search engine results. In The Seventh ETHICOMP International Conference on the Social and Ethical Impacts of Information and Communication Technologies, pages 904-915. Troubador Publishing Ltd, April 2004.
[16] M. Weideman and T. Haig-Smith. An investigation into search engines as a form of targeted advert delivery. In Proceedings of the 2002 Annual Research Conference of the South African Institute of Computer Scientists and Information Technologists on Enablement through Technology, pages 258-258. South African Institute for Computer Scientists and Information Technologists, 2002.
[17] Y. Yang. Expert network: Effective and efficient learning from human decisions in text categorization and retrieval. In W. B. Croft and C. J. van Rijsbergen, editors, Proceedings of the 17th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 13-22. Springer-Verlag, 1994.
Impedance Coupling in Content-targeted Advertising ABSTRACT The current boom of the Web is associated with the revenues originated from on-line advertising. While search-based advertising is dominant, the association of ads with a Web page (during user navigation) is becoming increasingly important. In this work, we study the problem of associating ads with a Web page, referred to as content-targeted advertising, from a computer science perspective. We assume that we have access to the text of the Web page, the keywords declared by an advertiser, and a text associated with the advertiser's business. Using no other information and operating in fully automatic fashion, we propose ten strategies for solving the problem and evaluate their effectiveness. Our methods indicate that a matching strategy that takes into account the semantics of the problem (referred to as AAK for "ads and keywords") can yield gains in average precision figures of 60% compared to a trivial vector-based strategy. Further, a more sophisticated impedance coupling strategy, which expands the text of the Web page to reduce vocabulary impedance with regard to an advertisement, can yield extra gains in average precision of 50%. These are first results. They suggest that great accuracy in content-targeted advertising can be attained with appropriate algorithms. 1. INTRODUCTION The emergence of the Internet has opened up new marketing opportunities. In fact, a company has now the possibility of showing its advertisements (ads) to millions of people at a low cost. During the 90's, many companies invested heavily on advertising in the Internet with apparently no concerns about their investment return [16]. This situation radically changed in the following decade when the failure of many Web companies led to a dropping in supply of cheap venture capital and a considerable reduction in on-line advertising investments [15,16]. It was clear then that more effective strategies for on-line advertising were required. For that, it was necessary to take into account short-term and long-term interests of the users related to their information needs [9,14]. As a consequence, many companies intensified the adoption of intrusive techniques for gathering information of users mostly without their consent [8]. This raised privacy issues which stimulated the research for less invasive measures [16]. More recently, Internet information gatekeepers as, for example, search engines, recommender systems, and comparison shopping services, have employed what is called paid placement strategies [3]. In such methods, an advertiser company is given prominent positioning in advertisement lists in return for a placement fee. Amongst these methods, the most popular one is a non-intrusive technique called keyword targeted marketing [16]. In this technique, keywords extracted from the user's search query are matched against keywords associated with ads provided by advertisers. A ranking of the ads, which also takes into consideration the amount that each advertiser is willing to pay, is computed. The top ranked ads are displayed in the search result page together with the answers for the user query. The success of keyword targeted marketing has motivated information gatekeepers to offer their advertisement services in different contexts. For example, as shown in Figure 1, relevant ads could be shown to users directly in the pages of information portals. The motivation is to take advantage of the users immediate information interests at browsing time. 
The problem of matching ads to a Web page that is browsed, which we also refer to as content-targeted advertising [1], is different from that of keyword marketing. In this case, instead of dealing with users' keywords, we have to use the contents of a Web page to decide which ads to display. Figure 1: Example of content-based advertising in the page of a newspaper. The middle slice of the page shows the beginning of an article about the launch of a DVD movie. At the bottom slice, we can see advertisements picked for this page by Google's content-based advertising system, AdSense. It is important to notice that paid placement advertising strategies imply some risks to information gatekeepers. For instance, there is the possibility of a negative impact on their credibility which, at long term, can demise their market share [3]. This makes investments in the quality of ad recommendation systems even more important to minimize the possibility of exhibiting ads unrelated to the user's interests. By investing in their ad systems, information gatekeepers are investing in the maintenance of their credibility and in the reinforcement of a positive user attitude towards the advertisers and their ads [14]. Further, that can translate into higher clickthrough rates that lead to an increase in revenues for information gatekeepers and advertisers, with gains to all parts [3]. In this work, we focus on the problem of content-targeted advertising. We propose new strategies for associating ads with a Web page. Five of these strategies are referred to as matching strategies. They are based on the idea of matching the text of the Web page directly to the text of the ads and its associated keywords. Five other strategies, which we here introduce, are referred to as impedance coupling strategies. They are based on the idea of expanding the Web page with new terms to facilitate the task of matching ads and Web pages. This is motivated by the observation that there is frequently a mismatch between the vocabulary of a Web page and the vocabulary of an advertisement. We say that there is a vocabulary impedance problem and that our technique provides a positive effect of impedance coupling by reducing the vocabulary impedance. Further, all our strategies rely on information that is already available to information gatekeepers that operate keyword targeted advertising systems. Thus, no other data from the advertiser is required. Using a sample of a real case database with over 93,000 ads and 100 Web pages selected for testing, we evaluate our ad recommendation strategies. First, we evaluate the five matching strategies. They match ads to a Web page using a standard vector model and provide what we may call trivial solutions. Our results indicate that a strategy that matches the ad plus its keywords to a Web page, requiring the keywords to appear in the Web page, provides improvements in average precision figures of roughly 60% relative to a strategy that simply matches the ads to the Web page. Such strategy, which we call AAK (for "ads and keywords"), is then taken as our baseline. Following we evaluate the five impedance coupling strategies. They are based on the idea of expanding the ad and the Web page with new terms to reduce the vocabulary impedance between their texts. Our results indicate that it is possible to generate extra improvements in average precision figures of roughly 50% relative to the AAK strategy. The paper is organized as follows. 
In section 2, we introduce five matching strategies to solve content-targeted advertising. In section 3, we present our impedance coupling strategies. In section 4, we describe our experimental methodology and datasets and discuss our results. In section 5 we discuss related work. In section 6 we present our conclusions.
2. MATCHING STRATEGIES
3. IMPEDANCE COUPLING STRATEGIES
4. EXPERIMENTS
4.1 Methodology
4.2 Tuning Idf factors
4.3 Results
Matching Strategies
Impedance Coupling Strategies
4.4 Performance Issues
5. RELATED WORK
Several works have stressed the importance of relevance in advertising. For example, in [14] it was shown that advertisements that are presented to users when they are not interested in them are viewed simply as an annoyance. Thus, in order to be effective, the authors conclude that advertisements should be relevant to consumer concerns at the time of exposure. The results in [9] reinforce this conclusion by pointing out that the more targeted the advertising, the more effective it is. Therefore it is not surprising that other works have addressed the relevance issue. For instance, in [8] a system called ADWIZ is proposed that is able to adapt online advertisements to a user's short-term interests in a non-intrusive way. Contrary to our work, ADWIZ does not directly use the content of the page viewed by the user. It relies on search keywords supplied by the user to search engines and on the URL of the page requested by the user. On the other hand, in [7] the authors presented an intrusive approach in which an agent sits between advertisers and the user's browser, allowing a banner to be placed into the currently viewed page. In spite of having the opportunity to use the page's content, the agent infers relevance based on category information and the user's private information collected over time. In [5] the authors provide a comparison between the ranking strategies used by Google and Overture for their keyword advertising systems. Both systems select advertisements by matching them to the keywords provided by the user in a search query and rank the resulting advertisement list according to the advertisers' willingness to pay. In particular, Google's approach also considers the clickthrough rate of each advertisement as additional evidence for its relevance. The authors conclude that Google's strategy is better than that used by Overture. As mentioned before, the ranking problem in keyword advertising is different from that of content-targeted advertising. Instead of dealing with keywords provided by users in search queries, we have to deal with the contents of a page, which can be very diffuse. Finally, the work in [4] focuses on improving search engine results in a TREC collection by means of an automatic query expansion method based on kNN [17]. This method resembles our expansion approach presented in section 3. Our method is different from that presented by [4]. They expand user queries applied to a document collection with terms extracted from the top k documents returned as answer to the query in the same collection. In our case, we use two collections: an advertisement and a Web collection. We expand triggering pages with terms extracted from the Web collection and then we match these expanded pages to the ads from the advertisement collection. By doing this, we emphasize the main topics of the triggering pages, increasing the possibility of associating relevant ads with them. 6.
CONCLUSIONS In this work we investigated ten distinct strategies for associating ads with a Web page that is browsed (content-targeted advertising). Five of our strategies attempt to match the ads directly to the Web page. Because of that, they are called matching strategies. The other five strategies recognize that there is a vocabulary impedance problem between ads and Web pages and attempt to solve the problem by expanding the Web pages and the ads with new terms. Because of that, they are called impedance coupling strategies. Using a sample of a real case database with over 93 thousand ads, we evaluated our strategies. For the five matching strategies, our results indicated that planned consideration of additional evidence (such as the keywords provided by the advertisers) yielded gains in average precision figures (for our test collection) of 60%. This was obtained by a strategy called AAK (for "ads and keywords"), which is taken as the baseline for evaluating our more advanced impedance coupling strategies. For our five impedance coupling strategies, the results indicate that additional gains in average precision of 50% (now relative to the AAK strategy) are possible. These were generated by expanding the Web page with new terms (obtained using a sample Web collection containing over five million pages) and the ads with the contents of the page they point to (a hyperlink provided by the advertisers). These are first-time results that indicate that high-quality content-targeted advertising is feasible and practical.
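As a rough illustration of the kind of vector-model matching described above, the sketch below scores ads against a triggering page with cosine similarity over tf-idf vectors and keeps only ads whose declared keywords all appear in the page, which is one reading of the constraint that distinguishes the AAK strategy from plain ad matching. This is our own minimal reconstruction, not the authors' implementation; the tokenizer, the weighting, the all-keywords requirement, and the data layout are simplifying assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_ads_aak(page_text, ads, top_n=5):
    """Rank ads for one triggering page with an AAK-style matching strategy.

    ads: list of dicts with 'text' (the ad creative) and 'keywords' (list of str).
    Only ads whose declared keywords all occur in the page are kept; survivors are
    ranked by cosine similarity between the page and the ad text plus its keywords.
    """
    page_lower = page_text.lower()
    candidates = [ad for ad in ads
                  if all(kw.lower() in page_lower for kw in ad["keywords"])]
    if not candidates:
        return []
    docs = [page_text] + [ad["text"] + " " + " ".join(ad["keywords"]) for ad in candidates]
    tfidf = TfidfVectorizer().fit_transform(docs)          # standard vector-model weights
    sims = cosine_similarity(tfidf[0], tfidf[1:]).ravel()  # page vs. each surviving ad
    return sorted(zip(candidates, sims), key=lambda pair: pair[1], reverse=True)[:top_n]
```

A strategy that simply ranks all ads by the cosine score, without the keyword filter, would correspond to the trivial matching baseline the AAK strategy is compared against.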
Impedance Coupling in Content-targeted Advertising ABSTRACT The current boom of the Web is associated with the revenues originated from on-line advertising. While search-based advertising is dominant, the association of ads with a Web page (during user navigation) is becoming increasingly important. In this work, we study the problem of associating ads with a Web page, referred to as content-targeted advertising, from a computer science perspective. We assume that we have access to the text of the Web page, the keywords declared by an advertiser, and a text associated with the advertiser's business. Using no other information and operating in fully automatic fashion, we propose ten strategies for solving the problem and evaluate their effectiveness. Our methods indicate that a matching strategy that takes into account the semantics of the problem (referred to as AAK for "ads and keywords") can yield gains in average precision figures of 60% compared to a trivial vector-based strategy. Further, a more sophisticated impedance coupling strategy, which expands the text of the Web page to reduce vocabulary impedance with regard to an advertisement, can yield extra gains in average precision of 50%. These are first results. They suggest that great accuracy in content-targeted advertising can be attained with appropriate algorithms. 1. INTRODUCTION During the 90's, many companies invested heavily on advertising in the Internet with apparently no concerns about their investment return [16]. It was clear then that more effective strategies for on-line advertising were required. For that, it was necessary to take into account short-term and long-term interests of the users related to their information needs [9,14]. As a consequence, many companies intensified the adoption of intrusive techniques for gathering information of users mostly without their consent [8]. More recently, Internet information gatekeepers as, for example, search engines, recommender systems, and comparison shopping services, have employed what is called paid placement strategies [3]. In such methods, an advertiser company is given prominent positioning in advertisement lists in return for a placement fee. Amongst these methods, the most popular one is a non-intrusive technique called keyword targeted marketing [16]. In this technique, keywords extracted from the user's search query are matched against keywords associated with ads provided by advertisers. The top ranked ads are displayed in the search result page together with the answers for the user query. The success of keyword targeted marketing has motivated information gatekeepers to offer their advertisement services in different contexts. For example, as shown in Figure 1, relevant ads could be shown to users directly in the pages of information portals. The motivation is to take advantage of the users immediate information interests at browsing time. The problem of matching ads to a Web page that is browsed, which we also refer to as content-targeted advertising [1], is different from that of keyword marketing. In this case, instead of dealing with users' keywords, we have to use the contents of a Web page to decide which ads to display. Figure 1: Example of content-based advertising in the page of a newspaper. The middle slice of the page shows the beginning of an article about the launch of a DVD movie. At the bottom slice, we can see advertisements picked for this page by Google's content-based advertising system, AdSense. 
It is important to notice that paid placement advertising strategies imply some risks to information gatekeepers. By investing in their ad systems, information gatekeepers are investing in the maintenance of their credibility and in the reinforcement of a positive user attitude towards the advertisers and their ads [14]. In this work, we focus on the problem of content-targeted advertising. We propose new strategies for associating ads with a Web page. Five of these strategies are referred to as matching strategies. They are based on the idea of matching the text of the Web page directly to the text of the ads and its associated keywords. Five other strategies, which we here introduce, are referred to as impedance coupling strategies. They are based on the idea of expanding the Web page with new terms to facilitate the task of matching ads and Web pages. This is motivated by the observation that there is frequently a mismatch between the vocabulary of a Web page and the vocabulary of an advertisement. We say that there is a vocabulary impedance problem and that our technique provides a positive effect of impedance coupling by reducing the vocabulary impedance. Further, all our strategies rely on information that is already available to information gatekeepers that operate keyword targeted advertising systems. Thus, no other data from the advertiser is required. Using a sample of a real case database with over 93,000 ads and 100 Web pages selected for testing, we evaluate our ad recommendation strategies. First, we evaluate the five matching strategies. They match ads to a Web page using a standard vector model and provide what we may call trivial solutions. Our results indicate that a strategy that matches the ad plus its keywords to a Web page, requiring the keywords to appear in the Web page, provides improvements in average precision figures of roughly 60% relative to a strategy that simply matches the ads to the Web page. Such strategy, which we call AAK (for "ads and keywords"), is then taken as our baseline. Following we evaluate the five impedance coupling strategies. They are based on the idea of expanding the ad and the Web page with new terms to reduce the vocabulary impedance between their texts. Our results indicate that it is possible to generate extra improvements in average precision figures of roughly 50% relative to the AAK strategy. In section 2, we introduce five matching strategies to solve content-targeted advertising. In section 3, we present our impedance coupling strategies. In section 4, we describe our experimental methodology and datasets and discuss our results. In section 5 we discuss related work. In section 6 we present our conclusions. 5. RELATED WORK Several works have stressed the importance of relevance in advertising. For example, in [14] it was shown that advertisements that are presented to users when they are not interested on them are viewed just as annoyance. The results in [9] enforce this conclusion by pointing out that the more targeted the advertising, the more effective it is. Therefore it is not surprising that other works have addressed the relevance issue. For instance, in [8] it is proposed a system called ADWIZ that is able to adapt online advertisement to a user's short-term interests in a non-intrusive way. Contrary to our work, ADWIZ does not directly use the content of the page viewed by the user. It relies on search keywords supplied by the user to search engines and on the URL of the page requested by the user. 
In spite of having the opportunity to use the page's content, the agent infers relevance based on category information and user's private information collected along the time. In [5] the authors provide a comparison between the ranking strategies used by Google and Overture for their keyword advertising systems. Both systems select advertisements by matching them to the keywords provided by the user in a search query and rank the resulting advertisement list according to the advertisers' willingness to pay. In particular, Google approach also considers the clickthrough rate of each advertisement as an additional evidence for its relevance. The authors conclude that Google's strategy is better than that used by Overture. As mentioned before, the ranking problem in keyword advertising is different from that of content-targeted advertising. Instead of dealing with keywords provided by users in search queries, we have to deal with the contents of a page which can be very diffuse. Such method resembles our expansion approach presented in section 3. Our method is different from that presented by [4]. They expand user queries applied to a document collection with terms extracted from the top k documents returned as answer to the query in the same collection. In our case, we use two collections: an advertisement and a Web collection. We expand triggering pages with terms extracted from the Web collection and then we match these expanded pages to the ads from the advertisement collection. By doing this, we emphasize the main topics of the triggering pages, increasing the possibility of associating relevant ads with them. 6. CONCLUSIONS In this work we investigated ten distinct strategies for associating ads with a Web page that is browsed (contenttargeted advertising). Five of our strategies attempt to match the ads directly to the Web page. Because of that, they are called matching strategies. The other five strategies recognize that there is a vocabulary impedance problem among ads and Web pages and attempt to solve the problem by expanding the Web pages and the ads with new terms. Because of that they are called impedance coupling strategies. Using a sample of a real case database with over 93 thousand ads, we evaluated our strategies. For the five matching strategies, our results indicated that planned consideration of additional evidence (such as the keywords provided by the advertisers) yielded gains in average precision figures (for our test collection) of 60%. This was obtained by a strategy called AAK (for "ads and keywords"), which is taken as the baseline for evaluating our more advanced impedance coupling strategies. For our five impedance coupling strategies, the results indicate that additional gains in average precision of 50% (now relative to the AAK strategy) are possible. These are first time results that indicate that high quality content-targeted advertising is feasible and practical.
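The expansion step behind the impedance coupling strategies is only summarized above: the triggering page is enriched with terms drawn from an auxiliary Web collection before matching. The sketch below shows one plausible way to realize that idea, in the spirit of the kNN expansion the authors compare themselves to: retrieve the pages most similar to the triggering page from a side collection and append their most frequent terms to the page. The neighbor count, term budget, and selection by raw frequency are our assumptions, not parameters reported by the authors.

```python
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def expand_page(page_text, web_collection, k=10, extra_terms=50):
    """Return the page text augmented with terms from its k nearest neighbors.

    web_collection: list of raw texts from an auxiliary Web crawl.
    """
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform([page_text] + web_collection)
    sims = cosine_similarity(matrix[0], matrix[1:]).ravel()
    neighbor_ids = sims.argsort()[-k:]                   # k most similar Web pages
    counts = Counter()
    for i in neighbor_ids:
        counts.update(web_collection[i].lower().split())
    new_terms = [term for term, _ in counts.most_common(extra_terms)]
    return page_text + " " + " ".join(new_terms)
```

The expanded text can then be handed to a matcher such as the AAK-style sketch given earlier, which is the sense in which expansion "couples" the vocabularies of pages and ads.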
H-49
Performance Prediction Using Spatial Autocorrelation
Evaluation of information retrieval systems is one of the core tasks in information retrieval. Problems include the inability to exhaustively label all documents for a topic, nongeneralizability from a small number of topics, and incorporating the variability of retrieval systems. Previous work addresses the evaluation of systems, the ranking of queries by difficulty, and the ranking of individual retrievals by performance. Approaches exist for the case of few and even no relevance judgments. Our focus is on zero-judgment performance prediction of individual retrievals. One common shortcoming of previous techniques is the assumption of uncorrelated document scores and judgments. If documents are embedded in a high-dimensional space (as they often are), we can apply techniques from spatial data analysis to detect correlations between document scores. We find that the low correlation between scores of topically close documents often implies a poor retrieval performance. When compared to a state of the art baseline, we demonstrate that the spatial analysis of retrieval scores provides significantly better prediction performance. These new predictors can also be incorporated with classic predictors to improve performance further. We also describe the first large-scale experiment to evaluate zero-judgment performance prediction for a massive number of retrieval systems over a variety of collections in several languages.
[ "perform predict", "spatial autocorrel", "autocorrel", "inform retriev", "cluster hypothesi", "zero relev judgment", "predictor relationship", "predictor predict power", "languag model score", "queri rank", "regular" ]
[ "P", "P", "P", "P", "U", "M", "M", "M", "M", "R", "U" ]
Performance Prediction Using Spatial Autocorrelation Fernando Diaz Center for Intelligent Information Retrieval Department of Computer Science University of Massachusetts Amherst, MA 01003 fdiaz@cs.umass.edu ABSTRACT Evaluation of information retrieval systems is one of the core tasks in information retrieval. Problems include the inability to exhaustively label all documents for a topic, nongeneralizability from a small number of topics, and incorporating the variability of retrieval systems. Previous work addresses the evaluation of systems, the ranking of queries by difficulty, and the ranking of individual retrievals by performance. Approaches exist for the case of few and even no relevance judgments. Our focus is on zero-judgment performance prediction of individual retrievals. One common shortcoming of previous techniques is the assumption of uncorrelated document scores and judgments. If documents are embedded in a high-dimensional space (as they often are), we can apply techniques from spatial data analysis to detect correlations between document scores. We find that the low correlation between scores of topically close documents often implies a poor retrieval performance. When compared to a state of the art baseline, we demonstrate that the spatial analysis of retrieval scores provides significantly better prediction performance. These new predictors can also be incorporated with classic predictors to improve performance further. We also describe the first large-scale experiment to evaluate zero-judgment performance prediction for a massive number of retrieval systems over a variety of collections in several languages. Categories and Subject Descriptors H.3.3 [Information Search and Retrieval]: Retrieval models; H.3.4 [Systems and Software]: Performance evaluation (efficiency and effectiveness) General Terms Performance, Design, Reliability, Experimentation 1. INTRODUCTION In information retrieval, a user poses a query to a system. The system retrieves n documents each receiving a realvalued score indicating the predicted degree of relevance. If we randomly select pairs of documents from this set, we expect some pairs to share the same topic and other pairs to not share the same topic. Take two topically-related documents from the set and call them a and b. If the scores of a and b are very different, we may be concerned about the performance of our system. That is, if a and b are both on the topic of the query, we would like them both to receive a high score; if a and b are not on the topic of the query, we would like them both to receive a low score. We might become more worried as we find more differences between scores of related documents. We would be more comfortable with a retrieval where scores are consistent between related documents. Our paper studies the quantification of this inconsistency in a retrieval from a spatial perspective. Spatial analysis is appropriate since many retrieval models embed documents in some vector space. If documents are embedded in a space, proximity correlates with topical relationships. Score consistency can be measured by the spatial version of autocorrelation known as the Moran coefficient or IM [5, 10]. In this paper, we demonstrate a strong correlation between IM and retrieval performance. The discussion up to this point is reminiscent of the cluster hypothesis. The cluster hypothesis states: closely-related documents tend to be relevant to the same request [12]. 
As we shall see, a retrieval function``s spatial autocorrelation measures the degree to which closely-related documents receive similar scores. Because of this, we interpret autocorrelation as measuring the degree to which a retrieval function satisfies the clustering hypothesis. If this connection is reasonable, in Section 6, we present evidence that failure to satisfy the cluster hypothesis correlates strongly with poor performance. In this work, we provide the following contributions, 1. A general, robust method for predicting the performance of retrievals with zero relevance judgments (Section 3). 2. A theoretical treatment of the similarities and motivations behind several state-of-the-art performance prediction techniques (Section 4). 3. The first large-scale experiments of zero-judgment, single run performance prediction (Sections 5 and 6). 2. PROBLEM DEFINITION Given a query, an information retrieval system produces a ranking of documents in the collection encoded as a set of scores associated with documents. We refer to the set of scores for a particular query-system combination as a retrieval. We would like to predict the performance of this retrieval with respect to some evaluation measure (eg, mean average precision). In this paper, we present results for ranking retrievals from arbitrary systems. We would like this ranking to approximate the ranking of retrievals by the evaluation measure. This is different from ranking queries by the average performance on each query. It is also different from ranking systems by the average performance on a set of queries. Scores are often only computed for the top n documents from the collection. We place these scores in the length n vector, y, where yi refers to the score of the ith-ranked document. We adjust scores to have zero mean and unit variance. We use this method because of its simplicity and its success in previous work [15]. 3. SPATIAL CORRELATION In information retrieval, we often assume that the representations of documents exist in some high-dimensional vector space. For example, given a vocabulary, V, this vector space may be an arbitrary |V|-dimensional space with cosine inner-product or a multinomial simplex with a distributionbased distance measure. An embedding space is often selected to respect topical proximity; if two documents are near, they are more likely to share a topic. Because of the prevalence and success of spatial models of information retrieval, we believe that the application of spatial data analysis techniques are appropriate. Whereas in information retrieval, we are concerned with the score at a point in a space, in spatial data analysis, we are concerned with the value of a function at a point or location in a space. We use the term function here to mean a mapping from a location to a real value. For example, we might be interested in the prevalence of a disease in the neighborhood of some city. The function would map the location of a neighborhood to an infection rate. If we want to quantify the spatial dependencies of a function, we would employ a measure referred to as the spatial autocorrelation [5, 10]. High spatial autocorrelation suggests that knowing the value of a function at location a will tell us a great deal about the value at a neighboring location b. There is a high spatial autocorrelation for a function representing the temperature of a location since knowing the temperature at a location a will tell us a lot about the temperature at a neighboring location b. 
Low spatial autocorrelation suggests that knowing the value of a function at location a tells us little about the value at a neighboring location b. There is low spatial autocorrelation in a function measuring the outcome of a coin toss at a and b. In this section, we will begin by describing what we mean by spatial proximity for documents and then define a measure of spatial autocorrelation. We conclude by extending this model to include information from multiple retrievals from multiple systems for a single query. 3.1 Spatial Representation of Documents Our work does not focus on improving a specific similarity measure or defining a novel vector space. Instead, we choose an inner product known to be effective at detecting interdocument topical relationships. Specifically, we adopt tf.idf document vectors,

\[ \tilde{d}_i = d_i \log\left(\frac{(n + 0.5) - c_i}{0.5 + c_i}\right) \tag{1} \]

where d is a vector of term frequencies and c is the length-|V| document frequency vector. We use this weighting scheme due to its success for topical link detection in the context of Topic Detection and Tracking (TDT) evaluations [6]. Assuming vectors are scaled by their L2 norm, we use the inner product, \(\langle \tilde{d}_i, \tilde{d}_j \rangle\), to define similarity. Given documents and some similarity measure, we can construct a matrix which encodes the similarity between pairs of documents. Recall that we are given the top n documents retrieved in y. We can compute an n × n similarity matrix, W. An element of this matrix, W_ij, represents the similarity between documents ranked i and j. In practice, we only include the affinities for a document's k-nearest neighbors. In all of our experiments, we have fixed k to 5. We leave exploration of parameter sensitivity to future work. We also row normalize the matrix so that \(\sum_{j=1}^{n} W_{ij} = 1\) for all i. 3.2 Spatial Autocorrelation of a Retrieval Recall that we are interested in measuring the similarity between the scores of spatially-close documents. One such suitable measure is the Moran coefficient of spatial autocorrelation. Assuming the function y over n locations, this is defined as

\[ \tilde{I}_M = \frac{n}{e^T W e} \frac{\sum_{i,j} W_{ij} y_i y_j}{\sum_i y_i^2} = \frac{n}{e^T W e} \frac{y^T W y}{y^T y} \tag{2} \]

where \(e^T W e = \sum_{i,j} W_{ij}\). We would like to compare autocorrelation values for different retrievals. Unfortunately, the bound for Equation 2 is not consistent for different W and y. Therefore, we use the Cauchy-Schwartz inequality to establish a bound,

\[ \tilde{I}_M \leq \frac{n}{e^T W e} \sqrt{\frac{y^T W^T W y}{y^T y}} \]

And we define the normalized spatial autocorrelation as

\[ I_M = \frac{y^T W y}{\sqrt{y^T y \times y^T W^T W y}} \]

Notice that if we let \(\tilde{y} = W y\), then we can write this formula as,

\[ I_M = \frac{y^T \tilde{y}}{\|y\|_2 \|\tilde{y}\|_2} \tag{3} \]

which can be interpreted as the correlation between the original retrieval scores and a set of retrieval scores diffused in the space. We present some examples of autocorrelations of functions on a grid in Figure 1.

Figure 1: The Moran coefficient, I_M, for several binary functions on a grid ((a) I_M = 0.006, (b) I_M = 0.241, (c) I_M = 0.487). The Moran coefficient is a local measure of function consistency. From the perspective of information retrieval, each of these grid spaces would represent a document and documents would be organized so that they lie next to topically-related documents. Binary retrieval scores would define a pattern on this grid. Notice that, as the Moran coefficient increases, neighboring cells tend to have similar values.

3.3 Correlation with Other Retrievals Sometimes we are interested in the performance of a single retrieval but have access to scores from multiple systems for the same query.
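As a concrete illustration of Equations 1-3, the sketch below builds the row-normalized k-nearest-neighbor affinity matrix W from tf.idf vectors and computes the normalized autocorrelation I_M of a score vector. It is our own minimal reconstruction of the procedure described above, not the author's code; k = 5 and the cosine inner product follow the text, while the function names, the use of n_docs in the idf factor, and the handling of ties are our choices.

```python
import numpy as np

def tfidf_vectors(term_freqs, doc_freqs, n_docs):
    """Weight term-frequency rows by the idf factor of Equation 1 and L2-normalize."""
    idf = np.log((n_docs + 0.5 - doc_freqs) / (0.5 + doc_freqs))  # n_docs plays the role of n
    vecs = term_freqs * idf
    norms = np.linalg.norm(vecs, axis=1, keepdims=True)
    return vecs / np.maximum(norms, 1e-12)

def knn_affinity_matrix(vecs, k=5):
    """Row-normalized affinity matrix W keeping each document's k nearest neighbors."""
    sim = vecs @ vecs.T                     # cosine similarities (rows are unit length)
    np.fill_diagonal(sim, 0.0)
    W = np.zeros_like(sim)
    for i in range(sim.shape[0]):
        nbrs = np.argsort(sim[i])[-k:]      # indices of the k largest affinities
        W[i, nbrs] = sim[i, nbrs]
    return W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)   # sum_j W_ij = 1

def normalized_autocorrelation(y, W):
    """I_M of Equation 3: correlation between the scores and their diffused version."""
    y = (y - y.mean()) / y.std()            # zero mean, unit variance, as in Section 2
    y_tilde = W @ y
    return float(y @ y_tilde / (np.linalg.norm(y) * np.linalg.norm(y_tilde)))
```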
In this situation, we can use combined information from these scores to construct a surrogate for a high-quality ranking [17]. We can treat the correlation between the retrieval we are interested in and the combined scores as a predictor of performance. Assume that we are given m score functions, yi, for the same n documents. We will represent the mean of these vectors as \(y_\mu = \frac{1}{m}\sum_{i=1}^{m} y_i\). We use the mean vector as an approximation to relevance. Since we use zero mean and unit variance normalization, work in metasearch suggests that this assumption is justified [15]. Because yµ represents a very good retrieval, we hypothesize that a strong similarity between yµ and y will correlate positively with system performance. We use Pearson's product-moment correlation to measure the similarity between these vectors,

\[ \rho(y, y_\mu) = \frac{y^T y_\mu}{\|y\|_2 \|y_\mu\|_2} \tag{4} \]

We will comment on the similarity between Equations 3 and 4 in Section 7. Of course, we can combine ρ(y, ỹ) and ρ(y, yµ) if we assume that they capture different factors in the prediction. One way to accomplish this is to combine these predictors as independent variables in a linear regression. An alternative means of combination is suggested by the mathematical form of our predictors. Since ỹ encodes the spatial dependencies in y and yµ encodes the spatial properties of the multiple runs, we can compute a third correlation between these two vectors,

\[ \rho(\tilde{y}, y_\mu) = \frac{\tilde{y}^T y_\mu}{\|\tilde{y}\|_2 \|y_\mu\|_2} \tag{5} \]

We can interpret Equation 5 as measuring the correlation between a high quality ranking (yµ) and a spatially smoothed version of the retrieval (ỹ). 4. RELATIONSHIP WITH OTHER PREDICTORS One way to predict the effectiveness of a retrieval is to look at the shared vocabulary of the top n retrieved documents. If we computed the most frequent content words in this set, we would hope that they would be consistent with our topic. In fact, we might believe that a bad retrieval would include documents on many disparate topics, resulting in an overlap of terminological noise. The Clarity of a query attempts to quantify exactly this [7]. Specifically, Clarity measures the similarity of the words most frequently used in retrieved documents to those most frequently used in the whole corpus. The conjecture is that a good retrieval will use language distinct from general text; the overlapping language in a bad retrieval will tend to be more similar to general text. Mathematically, we can compute a representation of the language used in the initial retrieval as a weighted combination of document language models,

\[ P(w|\theta_Q) = \sum_{i=1}^{n} P(w|\theta_i) \frac{P(Q|\theta_i)}{Z} \tag{6} \]

where θi is the language model of the ith-ranked document, P(Q|θi) is the query likelihood score of the ith-ranked document, and \(Z = \sum_{i=1}^{n} P(Q|\theta_i)\) is a normalization constant. The similarity between the multinomial P(w|θQ) and a model of general text can be computed using the Kullback-Leibler divergence, \(D^V_{KL}(\theta_Q \| \theta_C)\). Here, the distribution P(w|θC) is our model of general text, which can be computed using term frequencies in the corpus. In Figure 2a, we present Clarity as measuring the distance between the weighted center of mass of the retrieval (labeled y) and the unweighted center of mass of the collection (labeled O). Clarity reaches a minimum when a retrieval assigns every document the same score. Let's again assume we have a set of n documents retrieved for our query. Another way to quantify the dispersion of a set of documents is to look at how clustered they are.
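Before turning to the remaining alternative predictors, the fragment below makes the quantities just defined concrete: the multi-run correlations of Equations 4 and 5 and the Clarity-style divergence built from Equation 6. It is a schematic reconstruction under the definitions above, not the authors' code; the array layouts, the absence of smoothing details, and the variable names are our assumptions, and scores are assumed to be already normalized to zero mean and unit variance.

```python
import numpy as np

def corr(u, v):
    """Normalized inner product used in Equations 3-5."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def multirun_predictors(y, y_runs, W):
    """Return (rho(y, y_mu), rho(y_tilde, y_mu)) for one retrieval y.

    y_runs: m x n array of normalized score vectors from other systems
    W:      row-normalized k-NN affinity matrix over the same n documents
    """
    y_mu = y_runs.mean(axis=0)   # mean retrieval, used as a surrogate for relevance
    y_tilde = W @ y              # spatially diffused version of y
    return corr(y, y_mu), corr(y_tilde, y_mu)

def clarity(doc_term_probs, query_likelihoods, collection_probs):
    """Equation 6 followed by the divergence D_KL(theta_Q || theta_C).

    doc_term_probs:    n x |V| array of P(w | theta_i)
    query_likelihoods: length-n array of P(Q | theta_i)
    collection_probs:  length-|V| array of P(w | theta_C), assumed smoothed (no zeros)
    """
    weights = query_likelihoods / query_likelihoods.sum()  # P(Q | theta_i) / Z
    p_w_q = weights @ doc_term_probs                        # mixture model P(w | theta_Q)
    mask = p_w_q > 0                                        # skip zero-probability terms
    return float(np.sum(p_w_q[mask] * np.log(p_w_q[mask] / collection_probs[mask])))
```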
We may hypothesize that a good retrieval will return a single, tight cluster. A poorly performing retrieval will return a loosely related set of documents covering many topics. One proposed method of quantifying this dispersion is to measure the distance from a random document a to its nearest neighbor, b. A retrieval which is tightly clustered will, on average, have a low distance between a and b; a retrieval which is less tightly clustered will, on average, have high distances between a and b. This average corresponds to using the Cox-Lewis statistic to measure the randomness of the top n documents retrieved from a system [18]. In Figure 2a, this is roughly equivalent to measuring the area of the set n. Notice that we are throwing away information about the retrieval function y. Therefore the Cox-Lewis statistic is highly dependent on selecting the top n documents. (The authors have suggested coupling the query with the distance measure [18]. The information introduced by the query, though, is retrieval-independent, so that, if two retrievals return the same set of documents, the approximate Cox-Lewis statistic will be the same regardless of the retrieval scores.) Remember that we have n documents and a set of scores. Let's assume that we have access to the system which provided the original scores and that we can also request scores for new documents. This suggests a third method for predicting performance. Take some document, a, from the retrieved set and arbitrarily add or remove words at random to create a new document ã. Now, we can ask our system to score ã with respect to our query. If, on average over the n documents, the scores of a and ã tend to be very different, we might suspect that the system is failing on this query. So, an alternative approach is to measure the similarity between the retrieval and a perturbed version of that retrieval [18, 19]. This can be accomplished by either perturbing the documents or queries. The similarity between the two retrievals can be measured using some correlation measure. This is depicted in Figure 2b. The upper grid represents the original retrieval, y, while the lower grid represents the function after having been perturbed, ỹ. The nature of the perturbation process requires additional scorings or retrievals. Our predictor does not require access to the original scoring function or additional retrievals. So, although our method is similar to other perturbation methods in spirit, it can be applied in situations when the retrieval system is inaccessible or costly to access.

Figure 2: Representation of several performance predictors on a grid ((a) Global Divergence, (b) Score Perturbation, (c) Multirun Averaging). In Figure 2a, we depict predictors which measure the divergence between the center of mass of a retrieval and the center of the embedding space. In Figure 2b, we depict predictors which compare the original retrieval, y, to a perturbed version of the retrieval, ỹ. Our approach uses a particular type of perturbation based on score diffusion. Finally, in Figure 2c, we depict prediction when given retrievals from several other systems on the same query. Here, we can consider the fusion of these retrievals as a surrogate for relevance.

Finally, assume that we have, in addition to the retrieval we want to evaluate, m retrievals from a variety of different systems. In this case, we might take a document a, compare its rank in the retrieval to its average rank in the m retrievals.
If we believe that the m retrievals provide a satisfactory approximation to relevance, then a very large difference in rank would suggest that our retrieval is misranking a. If this difference is large on average over all n documents, then we might predict that the retrieval is bad. If, on the other hand, the retrieval is very consistent with the m retrievals, then we might predict that the retrieval is good. The similarity between the retrieval and the combined retrieval may be computed using some correlation measure. This is depicted in Figure 2c. In previous work, the Kullback-Leibler divergence between the normalized scores of the retrieval and the normalized scores of the combined retrieval provides the similarity [1]. 5. EXPERIMENTS Our experiments focus on testing the predictive power of each of our predictors: ρ(y, ˜y), ρ(y, yµ), and ρ(˜y, yµ). As stated in Section 2, we are interested in predicting the performance of the retrieval generated by an arbitrary system. Our methodology is consistent with previous research in that we predict the relative performance of a retrieval by comparing a ranking based on our predictor to a ranking based on average precision. We present results for two sets of experiments. The first set of experiments presents detailed comparisons of our predictors to previously-proposed predictors using identical data sets. Our second set of experiments demonstrates the generalizability of our approach to arbitrary retrieval methods, corpus types, and corpus languages. 5.1 Detailed Experiments In these experiments, we will predict the performance of language modeling scores using our autocorrelation predictor, ρ(y, ˜y); we do not consider ρ(y, yµ) or ρ(˜y, yµ) because, in these detailed experiments, we focus on ranking the retrievals from a single system. We use retrievals, values for baseline predictors, and evaluation measures reported in previous work [19]. 5.1.1 Topics and Collections These performance prediction experiments use language model retrievals performed for queries associated with collections in the TREC corpora. Using TREC collections allows us to confidently associate an average precision with a retrieval. In these experiments, we use the following topic collections: TREC 4 ad-hoc, TREC 5 ad-hoc, Robust 2004, Terabyte 2004, and Terabyte 2005. 5.1.2 Baselines We provide two baselines. Our first baseline is the classic Clarity predictor presented in Equation 6. Clarity is designed to be used with language modeling systems. Our second baseline is Zhou and Croft``s ranking robustness predictor. This predictor corrupts the top k documents from retrieval and re-computes the language model scores for these corrupted documents. The value of the predictor is the Spearman rank correlation between the original ranking and the corrupted ranking. In our tables, we will label results for Clarity using DV KL and the ranking robustness predictor using P. 5.2 Generalizability Experiments Our predictors do not require a particular baseline retrieval system; the predictors can be computed for an arbitrary retrieval, regardless of how scores were generated. We believe that that is one of the most attractive aspects of our algorithm. Therefore, in a second set of experiments, we demonstrate the ability of our techniques to generalize to a variety of collections, topics, and retrieval systems. 5.2.1 Topics and Collections We gathered a diverse set of collections from all possible TREC corpora. 
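The ranking robustness baseline described above re-scores corrupted copies of the top-ranked documents and correlates the corrupted ranking with the original one. The sketch below is a simplified reconstruction of that idea, not Zhou and Croft's implementation: the corruption is a naive random deletion of a fraction of each document's terms, score_fn stands in for whatever retrieval function produced the original ranking, and the corruption rate is an illustrative choice.

```python
import random
from scipy.stats import spearmanr

def ranking_robustness(query, top_docs, score_fn, corrupt_rate=0.3, seed=0):
    """Spearman correlation between the original and a corrupted ranking.

    top_docs: list of token lists for the top-k retrieved documents
    score_fn: callable (query, tokens) -> retrieval score
    """
    rng = random.Random(seed)

    def corrupt(tokens):
        # keep each term with probability (1 - corrupt_rate)
        kept = [t for t in tokens if rng.random() > corrupt_rate]
        return kept if kept else tokens

    original = [score_fn(query, d) for d in top_docs]
    corrupted = [score_fn(query, corrupt(d)) for d in top_docs]
    rho, _ = spearmanr(original, corrupted)
    return rho
```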
We cast a wide net in order to locate collections where our predictors might fail. Our hypothesis is that documents with high topical similarity should have correlated scores. Therefore, we avoided collections where scores were unlikely to be correlated (eg, question-answering) or were likely to be negatively correlated (eg, novelty). Nevertheless, our collections include corpora where correlations are weakly justified (eg, non-English corpora) or not justified at all (eg, expert search). We use the ad-hoc tracks from TREC3-8, TREC Robust 2003-2005, TREC Terabyte 2004-2005, TREC4-5 Spanish, TREC5-6 Chinese, and TREC Enterprise Expert Search 2005. In all cases, we use only the automatic runs for ad-hoc tracks submitted to NIST. For all English and Spanish corpora, we construct the matrix W according to the process described in Section 3.1. For Chinese corpora, we use naïve character-based tf.idf vectors. For entities, entries in W are proportional to the number of documents in which two entities cooccur. 5.2.2 Baselines In our detailed experiments, we used the Clarity measure as a baseline. Since we are predicting the performance of retrievals which are not based on language modeling, we use a version of Clarity referred to as ranked-list Clarity [7]. Ranked-list clarity converts document ranks to P(Q|θi) values. This conversion begins by replacing all of the scores in y with the respective ranks. Our estimation of P(Q|θi) from the ranks, then, is

\[ P(Q|\theta_i) = \begin{cases} \frac{2(c + 1 - y_i)}{c(c+1)} & \text{if } y_i \leq c \\ 0 & \text{otherwise} \end{cases} \tag{7} \]

where c is a cutoff parameter. As suggested by the authors, we fix the algorithm parameters c and λ2 so that c = 60 and λ2 = 0.10. We use Equation 6 to estimate P(w|θQ) and \(D^V_{KL}(\theta_Q \| \theta_C)\) to compute the value of the predictor. We will refer to this predictor as \(D^V_{KL}\), superscripted by V to indicate that the Kullback-Leibler divergence is with respect to the term embedding space. When information from multiple runs on the same query is available, we use Aslam and Pavlu's document-space multinomial divergence as a baseline [1]. This rank-based method first normalizes the scores in a retrieval as an n-dimensional multinomial. As with ranked-list Clarity, we begin by replacing all of the scores in y with their respective ranks. Then, we adjust the elements of y in the following way,

\[ \hat{y}_i = \frac{1}{2n}\left(1 + \sum_{k=y_i}^{n} \frac{1}{k}\right) \tag{8} \]

In our multirun experiments, we only use the top 75 documents from each retrieval (n = 75); this is within the range of parameter values suggested by the authors. However, we admit not tuning this parameter for either our system or the baseline. The predictor is the divergence between the candidate distribution, y, and the mean distribution, yµ. With the uniform linear combination of these m retrievals represented as yµ, we can compute the divergence as \(D^n_{KL}(\hat{y} \| \hat{y}_\mu)\), where we use the superscript n to indicate that the summation is over the set of n documents. This baseline was developed in the context of predicting query difficulty but we adopt it as a reasonable baseline for predicting retrieval performance. 5.2.3 Parameter Settings When given multiple retrievals, we use documents in the union of the top k = 75 documents from each of the m retrievals for that query. If the size of this union is ñ, then yµ and each yi is of length ñ. In some cases, a system did not score a document in the union. Since we are making a Gaussian assumption about our scores, we can sample scores for these unseen documents from the negative tail of the distribution.
Specifically, we sample from the part of the distribution lower than the minimum value in the normalized retrieval. This introduces randomness into our algorithm but we believe it is more appropriate than assigning an arbitrary fixed value. We optimized the linear regression using the square root of each predictor. We found that this substantially improved fits for all predictors, including the baselines. We considered linear combinations of pairs of predictors (labeled by the components) and all predictors (labeled as β). 5.3 Evaluation Given a set of retrievals, potentially from a combination of queries and systems, we measure the correlation of the rank ordering of this set by the predictor and by the performance metric. In order to ensure comparability with previous results, we present Kendall's τ correlation between the predictor's ranking and a ranking based on the average precision of the retrieval. Unless explicitly noted, all correlations are significant with p < 0.05. Predictors can sometimes perform better when linearly combined [9, 11]. Although previous work has presented the coefficient of determination (R2) to measure the quality of the regression, this measure cannot be reliably used when comparing slight improvements from combining predictors. Therefore, we adopt the adjusted coefficient of determination, which penalizes models with more variables. The adjusted R2 allows us to evaluate the improvement in prediction achieved by adding a parameter but loses the statistical interpretation of R2. We will use Kendall's τ to evaluate the magnitude of the correlation and the adjusted R2 to evaluate the combination of variables. 6. RESULTS We present results for our detailed experiments comparing the prediction of language model scores in Table 1. Although the Clarity measure is theoretically designed for language model scores, it consistently underperforms our system-agnostic predictor. Ranking robustness was presented as an improvement to Clarity for web collections (represented in our experiments by the terabyte04 and terabyte05 collections), shifting the τ correlation from 0.139 to 0.150 for terabyte04 and 0.171 to 0.208 for terabyte05. However, these improvements are slight compared to the performance of autocorrelation on these collections. Our predictor achieves a τ correlation of 0.454 for terabyte04 and 0.383 for terabyte05. Though not always the strongest, autocorrelation achieves correlations competitive with baseline predictors. When examining the performance of linear combinations of predictors, we note that in every case, autocorrelation factors as a necessary component of a strong predictor. We also note that the adjusted R2 for individual baselines is always significantly improved by incorporating autocorrelation. We present our generalizability results in Table 2. We begin by examining the situation in column (a) where we are presented with a single retrieval and no information from additional retrievals. For every collection except one, we achieve significantly better correlations than ranked-list Clarity. Surprisingly, we achieve relatively strong correlations for Spanish and Chinese collections despite our naïve processing. We do not have a ranked-list clarity correlation for ent05 because entity modeling is itself an open research question. However, our autocorrelation measure does not achieve high correlations, perhaps because relevance for entity retrieval does not propagate according to the cooccurrence links we use.
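As a concrete illustration of the evaluation protocol in Section 5.3, the sketch below correlates a predictor-induced ordering of retrievals with the ordering by average precision using Kendall's τ, and computes an adjusted R² for a linear regression on square-root-transformed predictors. It is a schematic reconstruction under the stated setup, not the author's evaluation code; numpy and scipy are assumed, and the handling of negative predictor values under the square-root transform is our choice.

```python
import numpy as np
from scipy.stats import kendalltau

def predictor_tau(predictor_values, average_precisions):
    """Kendall's tau between the predictor ordering and the AP ordering."""
    tau, p_value = kendalltau(predictor_values, average_precisions)
    return tau, p_value

def adjusted_r2(predictors, average_precisions):
    """Regress AP on square-root-transformed predictors; return the adjusted R^2.

    predictors: r x p array (r retrievals, p predictor values per retrieval)
    """
    X = np.sqrt(np.clip(predictors, 0.0, None))    # sqrt transform as described above;
                                                   # clipping negatives is our assumption
    X = np.column_stack([np.ones(len(X)), X])      # add an intercept term
    coef, *_ = np.linalg.lstsq(X, average_precisions, rcond=None)
    resid = average_precisions - X @ coef
    ss_res = float(resid @ resid)
    ss_tot = float(((average_precisions - average_precisions.mean()) ** 2).sum())
    r2 = 1.0 - ss_res / ss_tot
    r, p = predictors.shape                        # samples, number of predictors
    return 1.0 - (1.0 - r2) * (r - 1) / (r - p - 1)
```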
As noted above, the poor Clarity performance on web data is consistent with our findings in the detailed experiments. Clarity also notably underperforms for several news corpora (trec5, trec7, and robust04). On the other hand, autocorrelation seems robust to the changes between different corpora. Next, we turn to the introduction of information from multiple retrievals. We compare the correlations between those predictors which do not use this information in column (a) and those which do in column (b). For every collection, the predictors in column (b) outperform the predictors in column (a), indicating that the information from additional runs can be critical to making good predictions. Inspecting the predictors in column (b), we only draw weak conclusions. Our new predictors tend to perform better on news corpora. And between our new predictors, the hybrid ρ(˜y, yµ) predictor tends to perform better. Recall that our ρ(˜y, yµ) measure incorporates both spatial and multiple retrieval information. Therefore, we believe that the improvement in correlation is the result of incorporating information from spatial behavior. In column (c), we can investigate the utility of incorporating spatial information with information from multiple retrievals. Notice that in the cases where autocorrelation, ρ(y, ˜y), alone performs well (trec3, trec5-spanish, and trec6-chinese), it is substantially improved by incorporating multiple-retrieval information from ρ(y, yµ) in the linear regression, β. In the cases where ρ(y, yµ) performs well, incorporating autocorrelation rarely results in a significant improvement in performance. In fact, in every case where our predictor outperforms the baseline, it includes information from multiple runs. 7. DISCUSSION The most important result from our experiments involves prediction when no information is available from multiple runs (Tables 1 and 2a). This situation arises often in system design. For example, a system may need to, at retrieval time, assess its performance before deciding to conduct more intensive processing such as pseudo-relevance feedback or interaction. Assuming the presence of multiple retrievals is unrealistic in this case. We believe that autocorrelation is, like multiple-retrieval algorithms, approximating a good ranking; in this case by diffusing scores. Why is ˜y a reasonable surrogate? We know that diffusion of scores on the web graph and language model graphs improves performance [14, 16]. Therefore, if score diffusion tends to, in general, improve performance, then diffused scores will, in general, provide a good surrogate for relevance. Our results demonstrate that this approximation is not as powerful as information from multiple retrievals. Nevertheless, in situations where this information is lacking, autocorrelation provides substantial information. The success of autocorrelation as a predictor may also have roots in the clustering hypothesis. Recall that we regard autocorrelation as the degree to which a retrieval satisfies the clustering hypothesis. Our experiments, then, demonstrate that a failure to respect the clustering hypothesis correlates with poor performance. Why might systems fail to conform to the cluster hypothesis? Query-based information retrieval systems often score documents independently. The score of document a may be computed by examining query term or phrase matches, the document length, and perhaps global collection statistics. 
Once computed, a system rarely compares the score of a to the score of a topically-related document b. With some exceptions, the correlation of document scores has largely been ignored. We should make it clear that we have selected tasks where topical autocorrelation is appropriate. There are certainly cases where there is no reason to believe that retrieval scores will have topical autocorrelation. For example, ranked lists which incorporate document novelty should not exhibit spatial autocorrelation; if anything, autocorrelation should be negative for this task. Similarly, answer candidates in a question-answering task may or may not exhibit autocorrelation; in this case, the semantics of links is questionable too. It is important before applying this measure to confirm that, given the semantics for some link between two retrieved items, we should expect a correlation between scores. 8. RELATED WORK In this section we draw more general comparisons to other work in performance prediction and spatial data analysis. There is a growing body of work which attempts to predict the performance of individual retrievals [7, 3, 11, 9, 19]. We have attempted to place our work in the context of much of this work in Section 4. However, a complete comparison is beyond the scope of this paper. We note, though, that our experiments cover a larger and more diverse set of retrievals, collections, and topics than previously examined. Much previous work, particularly in the context of TREC, focuses on predicting the performance of systems. Here, each system generates k retrievals. The task is, given these retrievals, to predict the ranking of systems according to some performance measure. Several papers attempt to address this task under the constraint of few judgments [2, 4]. Some work even attempts to use zero judgments by leveraging multiple retrievals for the same query [17]. Our task differs because we focus on ranking retrievals independent of the generating system. The task here is not to test the hypothesis "system A is superior to system B" but to test the hypothesis "retrieval A is superior to retrieval B". Autocorrelation manifests itself in many classification tasks. Neville and Jensen define relational autocorrelation for relational learning problems and demonstrate that many classification tasks manifest autocorrelation [13]. Temporal autocorrelation of initial retrievals has also been used to predict performance [9]. However, temporal autocorrelation is performed by projecting the retrieval function into the temporal embedding space. In our work, we focus on the behavior of the function over the relationships between documents.

              Kendall's τ              |  adjusted R²
              D^V_KL     P    ρ(y,ỹ)  |  D^V_KL     P    ρ(y,ỹ)   D^V_KL,P   D^V_KL,ρ(y,ỹ)   P,ρ(y,ỹ)       β
trec4          0.353  0.548    0.513  |   0.168  0.363    0.422      0.466           0.420      0.557    0.553
trec5          0.311  0.329    0.357  |   0.116  0.190    0.236      0.238           0.244      0.266    0.269
robust04       0.418  0.398    0.373  |   0.256  0.304    0.278      0.403           0.373      0.402    0.442
terabyte04     0.139  0.150    0.454  |   0.059  0.045    0.292      0.076           0.293      0.289    0.284
terabyte05     0.171  0.208    0.383  |   0.022  0.072    0.193      0.120           0.225      0.218    0.257

Table 1: Comparison to Robustness and Clarity measures for language model scores. Evaluation replicates experiments from [19]. We present correlations between the classic Clarity measure (D^V_KL), the ranking robustness measure (P), and autocorrelation (ρ(y, ỹ)), each with mean average precision in terms of Kendall's τ. The adjusted coefficient of determination is presented to measure the effectiveness of combining predictors.
Measures in bold represent the strongest correlation for that test/collection pair.

                (a) single run, τ  |     (b) multiple runs, τ      |     (c) multiple runs, adjusted R²
                D^V_KL    ρ(y,ỹ)  |  D^n_KL   ρ(y,yµ)   ρ(ỹ,yµ)  |  D^n_KL   ρ(y,ỹ)   ρ(y,yµ)   ρ(ỹ,yµ)       β
trec3            0.201     0.461  |   0.461     0.439     0.456   |   0.444    0.395     0.394     0.386   0.498
trec4            0.252     0.396  |   0.455     0.482     0.489   |   0.379    0.263     0.429     0.482   0.483
trec5            0.016     0.277  |   0.433     0.459     0.393   |   0.280    0.157     0.375     0.323   0.386
trec6            0.230     0.227  |   0.352     0.428     0.418   |   0.203    0.089     0.323     0.325   0.325
trec7            0.083     0.326  |   0.341     0.430     0.483   |   0.264    0.182     0.363     0.442   0.400
trec8            0.235     0.396  |   0.454     0.508     0.567   |   0.402    0.272     0.490     0.580   0.523
robust03         0.302     0.354  |   0.377     0.385     0.447   |   0.269    0.206     0.274     0.392   0.303
robust04         0.183     0.308  |   0.301     0.384     0.453   |   0.200    0.182     0.301     0.393   0.335
robust05         0.224     0.249  |   0.371     0.377     0.404   |   0.341    0.108     0.313     0.328   0.336
terabyte04       0.043     0.245  |   0.544     0.420     0.392   |   0.516    0.105     0.357     0.343   0.365
terabyte05       0.068     0.306  |   0.480     0.434     0.390   |   0.491    0.168     0.384     0.309   0.403
trec4-spanish    0.307     0.388  |   0.488     0.398     0.395   |   0.423    0.299     0.282     0.299   0.388
trec5-spanish    0.220     0.458  |   0.446     0.484     0.475   |   0.411    0.398     0.428     0.437   0.529
trec5-chinese    0.092     0.199  |   0.367     0.379     0.384   |   0.379    0.199     0.273     0.276   0.310
trec6-chinese    0.144     0.276  |   0.265     0.353     0.376   |   0.115    0.128     0.188     0.223   0.199
ent05                -     0.181  |   0.324     0.305     0.282   |   0.211    0.043     0.158     0.155   0.179

Table 2: Large scale prediction experiments. We predict the ranking of large sets of retrievals for various collections and retrieval systems. Kendall's τ correlations are computed between the predicted ranking and a ranking based on the retrieval's average precision. In column (a), we have predictors which do not use information from other retrievals for the same query. In columns (b) and (c) we present performance for predictors which incorporate information from multiple retrievals. The adjusted coefficient of determination is computed to determine the effectiveness of combining predictors. Measures in bold represent the strongest correlation for that test/collection pair.

Finally, regularization-based re-ranking processes are also closely related to our work [8]. These techniques seek to maximize the agreement between scores of related documents by solving a constrained optimization problem. The maximization of consistency is equivalent to maximizing the Moran autocorrelation. Therefore, we believe that our work provides an explanation for why regularization-based re-ranking works. 9. CONCLUSION We have presented a new method for predicting the performance of a retrieval ranking without any relevance judgments. We consider two cases. First, when making predictions in the absence of retrievals from other systems, our predictors demonstrate robust, strong correlations with average precision. This performance, combined with a simple implementation, makes our predictors, in particular, very attractive. We have demonstrated this improvement for many, diverse settings. To our knowledge, this is the first large scale examination of zero-judgment, single-retrieval performance prediction. Second, when provided retrievals from other systems, our extended methods demonstrate competitive performance with state of the art baselines. Our experiments also demonstrate the limits of the usefulness of our predictors when information from multiple runs is provided. Our results suggest two conclusions. First, our results could affect retrieval algorithm design. Retrieval algorithms designed to consider spatial autocorrelation will conform to the cluster hypothesis and improve performance.
Second, our results could affect the design of minimal test collection algorithms. Much of the recent work in ranking systems sometimes ignores correlations between document labels and scores. We believe that these two directions could be rewarding given the theoretical and experimental evidence in this paper.

10. ACKNOWLEDGMENTS

This work was supported in part by the Center for Intelligent Information Retrieval and in part by the Defense Advanced Research Projects Agency (DARPA) under contract number HR0011-06-C-0023. Any opinions, findings and conclusions or recommendations expressed in this material are the author's and do not necessarily reflect those of the sponsor. We thank Yun Zhou and Desislava Petkova for providing data and Andre Gauthier for technical assistance.

11. REFERENCES

[1] J. Aslam and V. Pavlu. Query hardness estimation using Jensen-Shannon divergence among multiple scoring functions. In ECIR 2007: Proceedings of the 29th European Conference on Information Retrieval, 2007.
[2] J. A. Aslam, V. Pavlu, and E. Yilmaz. A statistical method for system evaluation using incomplete judgments. In S. Dumais, E. N. Efthimiadis, D. Hawking, and K. Jarvelin, editors, Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 541-548. ACM Press, August 2006.
[3] D. Carmel, E. Yom-Tov, A. Darlow, and D. Pelleg. What makes a query difficult? In SIGIR '06: Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, pages 390-397, New York, NY, USA, 2006. ACM Press.
[4] B. Carterette, J. Allan, and R. Sitaraman. Minimal test collections for retrieval evaluation. In SIGIR '06: Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, pages 268-275, New York, NY, USA, 2006. ACM Press.
[5] A. D. Cliff and J. K. Ord. Spatial Autocorrelation. Pion Ltd., 1973.
[6] M. Connell, A. Feng, G. Kumaran, H. Raghavan, C. Shah, and J. Allan. UMass at TDT 2004. CIIR Technical Report IR-357, Department of Computer Science, University of Massachusetts, 2004.
[7] S. Cronen-Townsend, Y. Zhou, and W. B. Croft. Precision prediction based on ranked list coherence. Inf. Retr., 9(6):723-755, 2006.
[8] F. Diaz. Regularizing ad-hoc retrieval scores. In CIKM '05: Proceedings of the 14th ACM international conference on Information and knowledge management, pages 672-679, New York, NY, USA, 2005. ACM Press.
[9] F. Diaz and R. Jones. Using temporal profiles of queries for precision prediction. In SIGIR '04: Proceedings of the 27th annual international ACM SIGIR conference on Research and development in information retrieval, pages 18-24, New York, NY, USA, 2004. ACM Press.
[10] D. A. Griffith. Spatial Autocorrelation and Spatial Filtering. Springer Verlag, 2003.
[11] B. He and I. Ounis. Inferring query performance using pre-retrieval predictors. In The Eleventh Symposium on String Processing and Information Retrieval (SPIRE), 2004.
[12] N. Jardine and C. J. van Rijsbergen. The use of hierarchic clustering in information retrieval. Information Storage and Retrieval, 7:217-240, 1971.
[13] D. Jensen and J. Neville. Linkage and autocorrelation cause feature selection bias in relational learning. In ICML '02: Proceedings of the Nineteenth International Conference on Machine Learning, pages 259-266, San Francisco, CA, USA, 2002. Morgan Kaufmann Publishers Inc.
[14] O. Kurland and L. Lee. Corpus structure, language models, and ad-hoc information retrieval. In SIGIR '04: Proceedings of the 27th annual international ACM SIGIR conference on Research and development in information retrieval, pages 194-201, New York, NY, USA, 2004. ACM Press.
[15] M. Montague and J. A. Aslam. Relevance score normalization for metasearch. In CIKM '01: Proceedings of the tenth international conference on Information and knowledge management, pages 427-433, New York, NY, USA, 2001. ACM Press.
[16] T. Qin, T.-Y. Liu, X.-D. Zhang, Z. Chen, and W.-Y. Ma. A study of relevance propagation for web search. In SIGIR '05: Proceedings of the 28th annual international ACM SIGIR conference on Research and development in information retrieval, pages 408-415, New York, NY, USA, 2005. ACM Press.
[17] I. Soboroff, C. Nicholas, and P. Cahan. Ranking retrieval systems without relevance judgments. In SIGIR '01: Proceedings of the 24th annual international ACM SIGIR conference on Research and development in information retrieval, pages 66-73, New York, NY, USA, 2001. ACM Press.
[18] V. Vinay, I. J. Cox, N. Milic-Frayling, and K. Wood. On ranking the effectiveness of searches. In SIGIR '06: Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, pages 398-404, New York, NY, USA, 2006. ACM Press.
[19] Y. Zhou and W. B. Croft. Ranking robustness: a novel framework to predict query performance. In CIKM '06: Proceedings of the 15th ACM international conference on Information and knowledge management, pages 567-574, New York, NY, USA, 2006. ACM Press.
Performance Prediction Using Spatial Autocorrelation ABSTRACT Evaluation of information retrieval systems is one of the core tasks in information retrieval. Problems include the inability to exhaustively label all documents for a topic, nongeneralizability from a small number of topics, and incorporating the variability of retrieval systems. Previous work addresses the evaluation of systems, the ranking of queries by difficulty, and the ranking of individual retrievals by performance. Approaches exist for the case of few and even no relevance judgments. Our focus is on zero-judgment performance prediction of individual retrievals. One common shortcoming of previous techniques is the assumption of uncorrelated document scores and judgments. If documents are embedded in a high-dimensional space (as they often are), we can apply techniques from spatial data analysis to detect correlations between document scores. We find that the low correlation between scores of topically close documents often implies a poor retrieval performance. When compared to a state of the art baseline, we demonstrate that the spatial analysis of retrieval scores provides significantly better prediction performance. These new predictors can also be incorporated with classic predictors to improve performance further. We also describe the first large-scale experiment to evaluate zero-judgment performance prediction for a massive number of retrieval systems over a variety of collections in several languages. 1. INTRODUCTION In information retrieval, a user poses a query to a system. The system retrieves n documents each receiving a realvalued score indicating the predicted degree of relevance. If we randomly select pairs of documents from this set, we expect some pairs to share the same topic and other pairs to not share the same topic. Take two topically-related documents from the set and call them a and b. If the scores of a and b are very different, we may be concerned about the performance of our system. That is, if a and b are both on the topic of the query, we would like them both to receive a high score; if a and b are not on the topic of the query, we would like them both to receive a low score. We might become more worried as we find more differences between scores of related documents. We would be more comfortable with a retrieval where scores are consistent between related documents. Our paper studies the quantification of this inconsistency in a retrieval from a spatial perspective. Spatial analysis is appropriate since many retrieval models embed documents in some vector space. If documents are embedded in a space, proximity correlates with topical relationships. Score consistency can be measured by the spatial version of autocorrelation known as the Moran coefficient or IM [5, 10]. In this paper, we demonstrate a strong correlation between IM and retrieval performance. The discussion up to this point is reminiscent of the cluster hypothesis. The cluster hypothesis states: closely-related documents tend to be relevant to the same request [12]. As we shall see, a retrieval function's spatial autocorrelation measures the degree to which closely-related documents receive similar scores. Because of this, we interpret autocorrelation as measuring the degree to which a retrieval function satisfies the clustering hypothesis. If this connection is reasonable, in Section 6, we present evidence that failure to satisfy the cluster hypothesis correlates strongly with poor performance. 
In this work, we provide the following contributions, 1. A general, robust method for predicting the performance of retrievals with zero relevance judgments (Section 3). 2. A theoretical treatment of the similarities and motivations behind several state-of-the-art performance prediction techniques (Section 4). 3. The first large-scale experiments of zero-judgment, single run performance prediction (Sections 5 and 6). 2. PROBLEM DEFINITION Given a query, an information retrieval system produces a ranking of documents in the collection encoded as a set of scores associated with documents. We refer to the set of scores for a particular query-system combination as a retrieval. We would like to predict the performance of this retrieval with respect to some evaluation measure (eg, mean average precision). In this paper, we present results for ranking retrievals from arbitrary systems. We would like this ranking to approximate the ranking of retrievals by the evaluation measure. This is different from ranking queries by the average performance on each query. It is also different from ranking systems by the average performance on a set of queries. Scores are often only computed for the top n documents from the collection. We place these scores in the length n vector, y, where yi refers to the score of the ith-ranked document. We adjust scores to have zero mean and unit variance. We use this method because of its simplicity and its success in previous work [15]. 3. SPATIAL CORRELATION In information retrieval, we often assume that the representations of documents exist in some high-dimensional vector space. For example, given a vocabulary, V, this vector space may be an arbitrary | V | - dimensional space with cosine inner-product or a multinomial simplex with a distributionbased distance measure. An embedding space is often selected to respect topical proximity; if two documents are near, they are more likely to share a topic. Because of the prevalence and success of spatial models of information retrieval, we believe that the application of spatial data analysis techniques are appropriate. Whereas in information retrieval, we are concerned with the score at a point in a space, in spatial data analysis, we are concerned with the value of a function at a point or location in a space. We use the term function here to mean a mapping from a location to a real value. For example, we might be interested in the prevalence of a disease in the neighborhood of some city. The function would map the location of a neighborhood to an infection rate. If we want to quantify the spatial dependencies of a function, we would employ a measure referred to as the spatial autocorrelation [5, 10]. High spatial autocorrelation suggests that knowing the value of a function at location a will tell us a great deal about the value at a neighboring location b. There is a high spatial autocorrelation for a function representing the temperature of a location since knowing the temperature at a location a will tell us a lot about the temperature at a neighboring location b. Low spatial autocorrelation suggests that knowing the value of a function at location a tells us little about the value at a neighboring location b. There is low spatial autocorrelation in a function measuring the outcome of a coin toss at a and b. In this section, we will begin by describing what we mean by spatial proximity for documents and then define a measure of spatial autocorrelation. 
We conclude by extending this model to include information from multiple retrievals from multiple systems for a single query.

3.1 Spatial Representation of Documents

Our work does not focus on improving a specific similarity measure or defining a novel vector space. Instead, we choose an inner product known to be effective at detecting interdocument topical relationships. Specifically, we adopt tf.idf document vectors, d̃, where d is the vector of term frequencies and c is the length-|V| document frequency vector used for the idf weighting. We use this weighting scheme due to its success for topical link detection in the context of Topic Detection and Tracking (TDT) evaluations [6]. Assuming vectors are scaled by their L2 norm, we use the inner product, ⟨d̃i, d̃j⟩, to define similarity. Given documents and some similarity measure, we can construct a matrix which encodes the similarity between pairs of documents. Recall that we are given the top n documents retrieved in y. We can compute an n × n similarity matrix, W. An element of this matrix, Wij, represents the similarity between documents ranked i and j. In practice, we only include the affinities for a document's k nearest neighbors. In all of our experiments, we have fixed k to 5. We leave exploration of parameter sensitivity to future work. We also row-normalize the matrix so that Σ_{j=1}^{n} Wij = 1 for all i.

3.2 Spatial Autocorrelation of a Retrieval

Recall that we are interested in measuring the similarity between the scores of spatially close documents. One such suitable measure is the Moran coefficient of spatial autocorrelation. Assuming the function y over n locations, this is defined as

    I_M = (n / e^T W e) · (y^T W y / y^T y),    (2)

where e^T W e = Σ_{ij} Wij. We would like to compare autocorrelation values for different retrievals. Unfortunately, the bound for Equation 2 is not consistent for different W and y. Therefore, we use the Cauchy-Schwartz inequality to establish a bound, y^T W y ≤ ||y||_2 ||Wy||_2, and we define the normalized spatial autocorrelation as

    ρ(y, ỹ) = y^T ỹ / (||y||_2 ||ỹ||_2),  where ỹ = Wy,    (3)

which can be interpreted as the correlation between the original retrieval scores and a set of retrieval scores "diffused" in the space. We present some examples of autocorrelations of functions on a grid in Figure 1.

Figure 1: The Moran coefficient, I_M, for several binary functions on a grid. The Moran coefficient is a local measure of function consistency. From the perspective of information retrieval, each of these grid spaces would represent a document, and documents would be organized so that they lie next to topically-related documents. Binary retrieval scores would define a pattern on this grid. Notice that, as the Moran coefficient increases, neighboring cells tend to have similar values.
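A minimal sketch of Sections 3.1 and 3.2 in Python is given below. It is an illustration rather than the authors' implementation: it assumes the top-n documents are already available as L2-normalized tf.idf vectors, uses k = 5 as in the paper, and ignores efficiency concerns; all function and variable names are mine.

```python
import numpy as np

def knn_similarity_matrix(doc_vectors, k=5):
    """Row-normalized k-nearest-neighbor similarity matrix W (Section 3.1).
    doc_vectors: (n, |V|) array of L2-normalized tf.idf vectors for the top-n documents."""
    n = doc_vectors.shape[0]
    sim = doc_vectors @ doc_vectors.T          # inner products <d_i, d_j>
    np.fill_diagonal(sim, -np.inf)             # exclude self-similarity
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(sim[i])[::-1][:k]    # the k nearest neighbors of document i
        W[i, nbrs] = sim[i, nbrs]
    W = np.maximum(W, 0.0)
    row_sums = W.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0              # guard against isolated documents
    return W / row_sums                        # each row sums to 1

def autocorrelation(y, W):
    """Normalized spatial autocorrelation rho(y, y~) of Equation 3:
    the correlation between the scores y and their diffusion y~ = W y."""
    y_tilde = W @ y
    return float(y @ y_tilde / (np.linalg.norm(y) * np.linalg.norm(y_tilde)))
```

Because W is row-normalized, ỹ = Wy replaces each document's score with a weighted average of its neighbors' scores, which is exactly the "diffusion" reading of Equation 3.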
3.3 Correlation with Other Retrievals

Sometimes we are interested in the performance of a single retrieval but have access to scores from multiple systems for the same query. In this situation, we can use combined information from these scores to construct a surrogate for a high-quality ranking [17]. We can treat the correlation between the retrieval we are interested in and the combined scores as a predictor of performance. Assume that we are given m score functions, yi, for the same n documents. We will represent the mean of these vectors as yµ = (1/m) Σ_{i=1}^{m} yi. We use the mean vector as an approximation to relevance. Since we use zero mean and unit variance normalization, work in metasearch suggests that this assumption is justified [15]. Because yµ represents a very good retrieval, we hypothesize that a strong similarity between yµ and y will correlate positively with system performance. We use Pearson's product-moment correlation to measure the similarity between these vectors,

    ρ(y, yµ) = y^T yµ / (||y||_2 ||yµ||_2).    (4)

We will comment on the similarity between Equations 3 and 4 in Section 7. Of course, we can combine ρ(y, ỹ) and ρ(y, yµ) if we assume that they capture different factors in the prediction. One way to accomplish this is to combine these predictors as independent variables in a linear regression. An alternative means of combination is suggested by the mathematical form of our predictors. Since ỹ encodes the spatial dependencies in y and yµ encodes the spatial properties of the multiple runs, we can compute a third correlation between these two vectors,

    ρ(ỹ, yµ) = ỹ^T yµ / (||ỹ||_2 ||yµ||_2).    (5)

We can interpret Equation 5 as measuring the correlation between a high-quality ranking (yµ) and a spatially smoothed version of the retrieval (ỹ).
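A sketch of these two multiple-run predictors follows. It assumes the m score vectors have already been restricted to the same n documents and standardized as in Section 2; the function names are mine, and this is an illustration rather than the authors' code.

```python
import numpy as np

def normalized_inner_product(a, b):
    """The form used in Equations 4 and 5: a^T b / (||a||_2 ||b||_2). For the
    mean-centered score vectors of Section 2 this coincides with Pearson's
    product-moment correlation."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def multirun_predictors(y, runs, W):
    """Predictors that exploit m additional retrievals for the same query (Section 3.3).
    y:    standardized scores of the retrieval being evaluated (length n)
    runs: list of m standardized score vectors over the same n documents
    W:    row-normalized kNN similarity matrix from Section 3.1"""
    y_mu = np.mean(runs, axis=0)        # surrogate for a high-quality ranking
    y_tilde = W @ y                     # spatially diffused scores
    return {
        "rho(y, y_mu)":  normalized_inner_product(y, y_mu),        # Equation 4
        "rho(y~, y_mu)": normalized_inner_product(y_tilde, y_mu),  # Equation 5
    }
```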
4. RELATIONSHIP WITH OTHER PREDICTORS

One way to predict the effectiveness of a retrieval is to look at the shared vocabulary of the top n retrieved documents. If we computed the most frequent content words in this set, we would hope that they would be consistent with our topic. In fact, we might believe that a bad retrieval would include documents on many disparate topics, resulting in an overlap of terminological noise. The Clarity of a query attempts to quantify exactly this [7]. Specifically, Clarity measures the similarity of the words most frequently used in retrieved documents to those most frequently used in the whole corpus. The conjecture is that a good retrieval will use language distinct from general text; the overlapping language in a bad retrieval will tend to be more similar to general text. Mathematically, we can compute a representation of the language used in the initial retrieval as a weighted combination of document language models,

    P(w | θQ) = (1/Z) Σ_{i=1}^{n} P(w | θi) P(Q | θi),    (6)

where θi is the language model of the ith-ranked document, P(Q | θi) is the query likelihood score of the ith-ranked document, and Z = Σ_{i=1}^{n} P(Q | θi) is a normalization constant. The similarity between the multinomial P(w | θQ) and a model of "general text" can be computed using the Kullback-Leibler divergence, D^V_KL(θQ || θC). Here, the distribution P(w | θC) is our model of general text, which can be computed using term frequencies in the corpus. In Figure 2a, we present Clarity as measuring the distance between the "weighted center of mass" of the retrieval (labeled y) and the "unweighted center of mass" of the collection (labeled O). Clarity reaches a minimum when a retrieval assigns every document the same score.

Let's again assume we have a set of n documents retrieved for our query. Another way to quantify the dispersion of a set of documents is to look at how clustered they are. We may hypothesize that a good retrieval will return a single, tight cluster. A poorly performing retrieval will return a loosely related set of documents covering many topics. One proposed method of quantifying this dispersion is to measure the distance from a random document a to its nearest neighbor, b. A retrieval which is tightly clustered will, on average, have a low distance between a and b; a retrieval which is less tightly clustered will, on average, have high distances between a and b. This average corresponds to using the Cox-Lewis statistic to measure the randomness of the top n documents retrieved from a system [18]. In Figure 2a, this is roughly equivalent to measuring the area of the set of n documents. Notice that we are throwing away information about the retrieval function y. Therefore the Cox-Lewis statistic is highly dependent on selecting the top n documents.¹

Remember that we have n documents and a set of scores. Let's assume that we have access to the system which provided the original scores and that we can also request scores for new documents. This suggests a third method for predicting performance. Take some document, a, from the retrieved set and arbitrarily add or remove words at random to create a new document ã. Now, we can ask our system to score ã with respect to our query. If, on average over the n documents, the scores of a and ã tend to be very different, we might suspect that the system is failing on this query. So, an alternative approach is to measure the similarity between the retrieval and a perturbed version of that retrieval [18, 19]. This can be accomplished by either perturbing the documents or the queries. The similarity between the two retrievals can be measured using some correlation measure. This is depicted in Figure 2b. The upper grid represents the original retrieval, y, while the lower grid represents the function after having been perturbed, ỹ. The nature of the perturbation process requires additional scorings or retrievals. Our predictor does not require access to the original scoring function or additional retrievals. So, although our method is similar to other perturbation methods in spirit, it can be applied in situations when the retrieval system is inaccessible or costly to access.

Finally, assume that we have, in addition to the retrieval we want to evaluate, m retrievals from a variety of different systems. In this case, we might take a document a and compare its rank in the retrieval to its average rank in the m retrievals. If we believe that the m retrievals provide a satisfactory approximation to relevance, then a very large difference in rank would suggest that our retrieval is misranking a. If this difference is large on average over all n documents, then we might predict that the retrieval is bad. If, on the other hand, the retrieval is very consistent with the m retrievals, then we might predict that the retrieval is good. The similarity between the retrieval and the combined retrieval may be computed using some correlation measure. This is depicted in Figure 2c. In previous work, the Kullback-Leibler divergence between the normalized scores of the retrieval and the normalized scores of the combined retrieval provides the similarity [1].

¹ The authors have suggested coupling the query with the distance measure [18]. The information introduced by the query, though, is retrieval-independent, so that, if two retrievals return the same set of documents, the approximate Cox-Lewis statistic will be the same regardless of the retrieval scores.

Figure 2: Representation of several performance predictors on a grid. In Figure 2a, we depict predictors which measure the divergence between the "center of mass" of a retrieval and the center of the embedding space. In Figure 2b, we depict predictors which compare the original retrieval, y, to a perturbed version of the retrieval, ỹ. Our approach uses a particular type of perturbation based on score diffusion. Finally, in Figure 2c, we depict prediction when given retrievals from several other systems on the same query. Here, we can consider the fusion of these retrievals as a surrogate for relevance.

5. EXPERIMENTS

Our experiments focus on testing the predictive power of each of our predictors: ρ(y, ỹ), ρ(y, yµ), and ρ(ỹ, yµ).
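Since the experiments that follow use Clarity as one of the baselines, a rough sketch of that predictor may be helpful as a reference point. This is a deliberately simplified illustration of Equation 6 followed by a KL divergence to the collection model: it omits the smoothing and vocabulary handling of [7], all names are mine, and it should not be read as the implementation actually used in the experiments.

```python
import numpy as np

def clarity(doc_term_counts, query_likelihoods, collection_term_counts, eps=1e-12):
    """Sketch of the Clarity predictor (Section 4): form the query model of
    Equation 6 as a P(Q|theta_i)-weighted mixture of document language models,
    then take its KL divergence from the collection model over the vocabulary."""
    doc_term_counts = np.asarray(doc_term_counts, dtype=float)        # (n, |V|)
    query_likelihoods = np.asarray(query_likelihoods, dtype=float)    # (n,)
    collection_term_counts = np.asarray(collection_term_counts, dtype=float)  # (|V|,)

    # maximum-likelihood document language models P(w | theta_i); no smoothing here
    doc_lms = doc_term_counts / doc_term_counts.sum(axis=1, keepdims=True)
    weights = query_likelihoods / query_likelihoods.sum()             # P(Q|theta_i) / Z
    query_model = weights @ doc_lms                                   # P(w | theta_Q)
    collection_model = collection_term_counts / collection_term_counts.sum()

    mask = query_model > 0
    return float(np.sum(query_model[mask] *
                        np.log(query_model[mask] / (collection_model[mask] + eps))))
```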
As stated in Section 2, we are interested in predicting the performance of the retrieval generated by an arbitrary system. Our methodology is consistent with previous research in that we predict the relative performance of a retrieval by comparing a ranking based on our predictor to a ranking based on average precision. We present results for two sets of experiments. The first set of experiments presents detailed comparisons of our predictors to previously-proposed predictors using identical data sets. Our second set of experiments demonstrates the generalizability of our approach to arbitrary retrieval methods, corpus types, and corpus languages. 5.1 Detailed Experiments In these experiments, we will predict the performance of language modeling scores using our autocorrelation predictor, ρ (y, ˜y); we do not consider ρ (y, y,,) or ρ (˜y, y,,) because, in these detailed experiments, we focus on ranking the retrievals from a single system. We use retrievals, values for baseline predictors, and evaluation measures reported in previous work [19]. 5.1.1 Topics and Collections These performance prediction experiments use language model retrievals performed for queries associated with collections in the TREC corpora. Using TREC collections allows us to confidently associate an average precision with a retrieval. In these experiments, we use the following topic collections: TREC 4 ad hoc, TREC 5 ad hoc, Robust 2004, Terabyte 2004, and Terabyte 2005. 5.1.2 Baselines We provide two baselines. Our first baseline is the classic Clarity predictor presented in Equation 6. Clarity is designed to be used with language modeling systems. Our second baseline is Zhou and Croft's "ranking robustness" predictor. This predictor corrupts the top k documents from retrieval and re-computes the language model scores for these corrupted documents. The value of the predictor is the Spearman rank correlation between the original ranking and the corrupted ranking. In our tables, we will label results for Clarity using DVKL and the ranking robustness predictor using P. 5.2 Generalizability Experiments Our predictors do not require a particular baseline retrieval system; the predictors can be computed for an arbitrary retrieval, regardless of how scores were generated. We believe that that is one of the most attractive aspects of our algorithm. Therefore, in a second set of experiments, we demonstrate the ability of our techniques to generalize to a variety of collections, topics, and retrieval systems. 5.2.1 Topics and Collections We gathered a diverse set of collections from all possible TREC corpora. We cast a wide net in order to locate collections where our predictors might fail. Our hypothesis is that documents with high topical similarity should have correlated scores. Therefore, we avoided collections where scores were unlikely to be correlated (eg, question-answering) or were likely to be negatively correlated (eg, novelty). Nevertheless, our collections include corpora where correlations are weakly justified (eg, non-English corpora) or not justified at all (eg, expert search). We use the ad hoc tracks from TREC3-8, TREC Robust 2003-2005, TREC Terabyte 20042005, TREC4-5 Spanish, TREC5-6 Chinese, and TREC Enterprise Expert Search 2005. In all cases, we use only the automatic runs for ad hoc tracks submitted to NIST. For all English and Spanish corpora, we construct the matrix W according to the process described in Section 3.1. For Chinese corpora, we use na ¨ ıve character-based tf.idf vectors. 
For entities, entries in W are proportional to the number of documents in which two entities cooccur. 5.2.2 Baselines In our detailed experiments, we used the Clarity measure as a baseline. Since we are predicting the performance of retrievals which are not based on language modeling, we use a version of Clarity referred to as ranked-list Clarity [7]. Ranked-list clarity converts document ranks to P (Q | θi) values. This conversion begins by replacing all of the scores in Y with the respective ranks. Our estimation of P (Q | θi) from the ranks, then is, where c is a cutoff parameter. As suggested by the authors, we fix the algorithm parameters c and λ2 so that c = 60 and λ2 = 0.10. We use Equation 6 to estimate P (w | θQ) and DVKL (θQIIθC) to compute the value of the predictor. We will refer to this predictor as DVKL, superscripted by V to indicate that the Kullback-Leibler divergence is with respect to the term embedding space. When information from multiple runs on the same query is available, we use Aslam and Pavlu's document-space multinomial divergence as a baseline [1]. This rank-based method first normalizes the scores in a retrieval as an n-dimensional multinomial. As with ranked-list Clarity, we begin by replacing all of the scores in Y with their respective ranks. Then, we adjust the elements of Y in the following way, In our multirun experiments, we only use the top 75 documents from each retrieval (n = 75); this is within the range of parameter values suggested by the authors. However, we admit not tuning this parameter for either our system or the baseline. The predictor is the divergence between the candidate distribution, Y, and the mean distribution, Yµ. With the uniform linear combination of these m retrievals represented as Yµ, we can compute the divergence as DnKL (ˆYIIˆYµ) where we use the superscript n to indicate that the summation is over the set of n documents. This baseline was developed in the context of predicting query difficulty but we adopt it as a reasonable baseline for predicting retrieval performance. 5.2.3 Parameter Settings When given multiple retrievals, we use documents in the union of the top k = 75 documents from each of the m retrievals for that query. If the size of this union is ˜n, then Yµ and each Yi is of length ˜n. In some cases, a system did not score a document in the union. Since we are making a Gaussian assumption about our scores, we can sample scores for these unseen documents from the negative tail of the distribution. Specifically, we sample from the part of the distribution lower than the minimum value of in the normalized retrieval. This introduces randomness into our algorithm but we believe it is more appropriate than assigning an arbitrary fixed value. We optimized the linear regression using the square root of each predictor. We found that this substantially improved fits for all predictors, including the baselines. We considered linear combinations of pairs of predictors (labeled by the components) and all predictors (labeled as β). 5.3 Evaluation Given a set of retrievals, potentially from a combination of queries and systems, we measure the correlation of the rank ordering of this set by the predictor and by the performance metric. In order to ensure comparability with previous results, we present Kendall's τ correlation between the predictor's ranking and ranking based on average precision of the retrieval. Unless explicitly noted, all correlations are significant with p <0.05. 
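The evaluation protocol just described reduces to a single rank-correlation computation per predictor over a pool of retrievals. A minimal sketch using SciPy (the function name is mine):

```python
from scipy.stats import kendalltau

def evaluate_predictor(predictor_values, average_precisions):
    """Kendall's tau between the ranking induced by a predictor and the ranking
    induced by average precision over the same set of retrievals (Section 5.3).
    Ties are handled by SciPy's default tau-b variant."""
    tau, p_value = kendalltau(predictor_values, average_precisions)
    return tau, p_value
```

When predictors are combined by linear regression, as discussed next, the adjusted coefficient of determination can be computed from the regression residuals in the same spirit.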
Predictors can sometimes perform better when linearly combined [9, 11]. Although previous work has presented the coefficient of determination (R2) to measure the quality of the regression, this measure cannot be reliably used when comparing slight improvements from combining predictors. Therefore, we adopt the adjusted coefficient of determination which penalizes models with more variables. The adjusted R2 allows us to evaluate the improvement in prediction achieved by adding a parameter but loses the statistical interpretation of R2. We will use Kendall's τ to evaluate the magnitude of the correlation and the adjusted R2 to evaluate the combination of variables.

6. RESULTS

We present results for our detailed experiments comparing the prediction of language model scores in Table 1. Although the Clarity measure is theoretically designed for language model scores, it consistently underperforms our system-agnostic predictor. Ranking robustness was presented as an improvement to Clarity for web collections (represented in our experiments by the terabyte04 and terabyte05 collections), shifting the τ correlation from 0.139 to 0.150 for terabyte04 and 0.171 to 0.208 for terabyte05. However, these improvements are slight compared to the performance of autocorrelation on these collections. Our predictor achieves a τ correlation of 0.454 for terabyte04 and 0.383 for terabyte05. Though not always the strongest, autocorrelation achieves correlations competitive with baseline predictors. When examining the performance of linear combinations of predictors, we note that in every case, autocorrelation factors as a necessary component of a strong predictor. We also note that the adjusted R2 for individual baselines is always significantly improved by incorporating autocorrelation. We present our generalizability results in Table 2. We begin by examining the situation in column (a) where we are presented with a single retrieval and no information from additional retrievals. For every collection except one, we achieve significantly better correlations than ranked-list Clarity. Surprisingly, we achieve relatively strong correlations for Spanish and Chinese collections despite our naïve processing. We do not have a ranked-list Clarity correlation for ent05 because entity modeling is itself an open research question. However, our autocorrelation measure does not achieve high correlations, perhaps because relevance for entity retrieval does not propagate according to the cooccurrence links we use. As noted above, the poor Clarity performance on web data is consistent with our findings in the detailed experiments. Clarity also notably underperforms for several news corpora (trec5, trec7, and robust04). On the other hand, autocorrelation seems robust to the changes between different corpora. Next, we turn to the introduction of information from multiple retrievals. We compare the correlations between those predictors which do not use this information in column (a) and those which do in column (b). For every collection, the predictors in column (b) outperform the predictors in column (a), indicating that the information from additional runs can be critical to making good predictions. Inspecting the predictors in column (b), we only draw weak conclusions. Our new predictors tend to perform better on news corpora. And between our new predictors, the hybrid ρ(ỹ, yµ) predictor tends to perform better. Recall that our ρ(ỹ, yµ) measure incorporates both spatial and multiple-retrieval information. 
Therefore, we believe that the improvement in correlation is the result of incorporating information from spatial behavior. In column (c), we can investigate the utility of incorporating spatial information with information from multiple retrievals. Notice that in the cases where autocorrelation, ρ (y, ˜y), alone performs well (trec3, trec5-spanish, and trec6-chinese), it is substantially improved by incorporating multiple-retrieval information from ρ (y, y,,) in the linear regression, β. In the cases where ρ (y, y,,) performs well, incorporating autocorrelation rarely results in a significant improvement in performance. In fact, in every case where our predictor outperforms the baseline, it includes information from multiple runs. 8. RELATED WORK In this section we draw more general comparisons to other work in performance prediction and spatial data analysis. There is a growing body of work which attempts to predict the performance of individual retrievals [7, 3, 11, 9, 19]. We have attempted to place our work in the context of much of this work in Section 4. However, a complete comparison is beyond the scope of this paper. We note, though, that our experiments cover a larger and more diverse set of retrievals, collections, and topics than previously examined. Much previous work--particularly in the context of TREC--focuses on predicting the performance of systems. Here, each system generates k retrievals. The task is, given these retrievals, to predict the ranking of systems according to some performance measure. Several papers attempt to address this task under the constraint of few judgments [2, 4]. Some work even attempts to use zero judgments by leveraging multiple retrievals for the same query [17]. Our task differs because we focus on ranking retrievals independent of the generating system. The task here is not to test the hypothesis "system A is superior to system B" but to test the hypothesis "retrieval A is superior to retrieval B". Autocorrelation manifests itself in many classification tasks. Neville and Jensen define relational autocorrelation for relational learning problems and demonstrate that many classification tasks manifest autocorrelation [13]. Temporal autocorrelation of initial retrievals has also been used to predict performance [9]. However, temporal autocorrelation is performed by projecting the retrieval function into the temporal embedding space. In our work, we focus on the behavior of the function over the relationships between documents. Table 1: Comparison to Robustness and Clarity measures for language model scores. Evaluation replicates experiments from [19]. We present correlations between the classic Clarity measure (DVKL), the ranking robustness measure (P), and autocorrelation (ρ (y, ˜y)) each with mean average precision in terms of Kendall's τ. The adjusted coefficient of determination is presented to measure the effectiveness of combining predictors. Measures in bold represent the strongest correlation for that test/collection pair. Table 2: Large scale prediction experiments. We predict the ranking of large sets of retrievals for various collections and retrieval systems. Kendall's τ correlations are computed between the predicted ranking and a ranking based on the retrieval's average precision. In column (a), we have predictors which do not use information from other retrievals for the same query. In columns (b) and (c) we present performance for predictors which incorporate information from multiple retrievals. 
The adjusted coefficient of determination is computed to determine effectiveness of combining predictors. Measures in bold represent the strongest correlation for that test/collection pair. Finally, regularization-based re-ranking processes are also closely-related to our work [8]. These techniques seek to maximize the agreement between scores of related documents by solving a constrained optimization problem. The maximization of consistency is equivalent to maximizing the Moran autocorrelation. Therefore, we believe that our work provides explanation for why regularization-based re-ranking works. 9. CONCLUSION We have presented a new method for predicting the performance of a retrieval ranking without any relevance judgments. We consider two cases. First, when making predictions in the absence of retrievals from other systems, our predictors demonstrate robust, strong correlations with average precision. This performance, combined with a simple implementation, makes our predictors, in particular, very attractive. We have demonstrated this improvement for many, diverse settings. To our knowledge, this is the first large scale examination of zero-judgment, single-retrieval performance prediction. Second, when provided retrievals from other systems, our extended methods demonstrate competitive performance with state of the art baselines. Our experiments also demonstrate the limits of the usefulness of our predictors when information from multiple runs is provided. Our results suggest two conclusions. First, our results could affect retrieval algorithm design. Retrieval algorithms designed to consider spatial autocorrelation will conform to the cluster hypothesis and improve performance. Second, our results could affect the design of minimal test collection algorithms. Much of the recent work in ranking systems sometimes ignores correlations between document labels and scores. We believe that these two directions could be rewarding given the theoretical and experimental evidence in this paper.
C-44
MSP: Multi-Sequence Positioning of Wireless Sensor Nodes
Wireless Sensor Networks have been proposed for use in many location-dependent applications. Most of these need to identify the locations of wireless sensor nodes, a challenging task because of the severe constraints on cost, energy and effective range of sensor devices. To overcome limitations in existing solutions, we present a Multi-Sequence Positioning (MSP) method for large-scale stationary sensor node localization in outdoor environments. The novel idea behind MSP is to reconstruct and estimate two-dimensional location information for each sensor node by processing multiple one-dimensional node sequences, easily obtained through loosely guided event distribution. Starting from a basic MSP design, we propose four optimizations, which work together to increase the localization accuracy. We address several interesting issues, such as incomplete (partial) node sequences and sequence flip, found in the Mirage test-bed we built. We have evaluated the MSP system through theoretical analysis, extensive simulation as well as two physical systems (an indoor version with 46 MICAz motes and an outdoor version with 20 MICAz motes). This evaluation demonstrates that MSP can achieve an accuracy within one foot, requiring neither additional costly hardware on sensor nodes nor precise event distribution. It also provides a nice tradeoff between physical cost (anchors) and soft cost (events), while maintaining localization accuracy.
[ "wireless sensor network", "node local", "local", "event distribut", "multi-sequenc posit", "massiv uva-base deploment", "spatiotempor correl", "rang-base approach", "distribut-base locat estim", "listen-detect-assembl-report protocol", "margin distribut", "node sequenc process" ]
[ "P", "P", "P", "P", "M", "U", "U", "U", "M", "U", "M", "R" ]
MSP: Multi-Sequence Positioning of Wireless Sensor Nodes∗ Ziguo Zhong Computer Science and Engineering University of Minnesota zhong@cs.umn.edu Tian He Computer Science and Engineering University of Minnesota tianhe@cs.umn.edu Abstract Wireless Sensor Networks have been proposed for use in many location-dependent applications. Most of these need to identify the locations of wireless sensor nodes, a challenging task because of the severe constraints on cost, energy and effective range of sensor devices. To overcome limitations in existing solutions, we present a Multi-Sequence Positioning (MSP) method for large-scale stationary sensor node localization in outdoor environments. The novel idea behind MSP is to reconstruct and estimate two-dimensional location information for each sensor node by processing multiple one-dimensional node sequences, easily obtained through loosely guided event distribution. Starting from a basic MSP design, we propose four optimizations, which work together to increase the localization accuracy. We address several interesting issues, such as incomplete (partial) node sequences and sequence flip, found in the Mirage test-bed we built. We have evaluated the MSP system through theoretical analysis, extensive simulation as well as two physical systems (an indoor version with 46 MICAz motes and an outdoor version with 20 MICAz motes). This evaluation demonstrates that MSP can achieve an accuracy within one foot, requiring neither additional costly hardware on sensor nodes nor precise event distribution. It also provides a nice tradeoff between physical cost (anchors) and soft cost (events), while maintaining localization accuracy. Categories and Subject Descriptors C.2.4 [Computer Communications Networks]: Distributed Systems General Terms Algorithms, Measurement, Design, Performance, Experimentation 1 Introduction Although Wireless Sensor Networks (WSN) have shown promising prospects in various applications [5], researchers still face several challenges for massive deployment of such networks. One of these is to identify the location of individual sensor nodes in outdoor environments. Because of unpredictable flow dynamics in airborne scenarios, it is not currently feasible to localize sensor nodes during massive UVA-based deployment. On the other hand, geometric information is indispensable in these networks, since users need to know where events of interest occur (e.g., the location of intruders or of a bomb explosion). Previous research on node localization falls into two categories: range-based approaches and range-free approaches. Range-based approaches [13, 17, 19, 24] compute per-node location information iteratively or recursively based on measured distances among target nodes and a few anchors which precisely know their locations. These approaches generally require costly hardware (e.g., GPS) and have limited effective range due to energy constraints (e.g., ultrasound-based TDOA [3, 17]). Although range-based solutions can be suitably used in small-scale indoor environments, they are considered less cost-effective for large-scale deployments. On the other hand, range-free approaches [4, 8, 10, 13, 14, 15] do not require accurate distance measurements, but localize the node based on network connectivity (proximity) information. 
Unfortunately, since wireless connectivity is highly influenced by the environment and hardware calibration, existing solutions fail to deliver encouraging empirical results, or require substantial survey [2] and calibration [24] on a case-by-case basis. Realizing the impracticality of existing solutions for the large-scale outdoor environment, researchers have recently proposed solutions (e.g., Spotlight [20] and Lighthouse [18]) for sensor node localization using the spatiotemporal correlation of controlled events (i.e., inferring nodes'' locations based on the detection time of controlled events). These solutions demonstrate that long range and high accuracy localization can be achieved simultaneously with little additional cost at sensor nodes. These benefits, however, come along with an implicit assumption that the controlled events can be precisely distributed to a specified location at a specified time. We argue that precise event distribution is difficult to achieve, especially at large scale when terrain is uneven, the event distribution device is not well calibrated and its position is difficult to maintain (e.g., the helicopter-mounted scenario in [20]). To address these limitations in current approaches, in this paper we present a multi-sequence positioning (MSP) method 15 for large-scale stationary sensor node localization, in deployments where an event source has line-of-sight to all sensors. The novel idea behind MSP is to estimate each sensor node``s two-dimensional location by processing multiple easy-to-get one-dimensional node sequences (e.g., event detection order) obtained through loosely-guided event distribution. This design offers several benefits. First, compared to a range-based approach, MSP does not require additional costly hardware. It works using sensors typically used by sensor network applications, such as light and acoustic sensors, both of which we specifically consider in this work. Second, compared to a range-free approach, MSP needs only a small number of anchors (theoretically, as few as two), so high accuracy can be achieved economically by introducing more events instead of more anchors. And third, compared to Spotlight, MSP does not require precise and sophisticated event distribution, an advantage that significantly simplifies the system design and reduces calibration cost. This paper offers the following additional intellectual contributions: • We are the first to localize sensor nodes using the concept of node sequence, an ordered list of sensor nodes, sorted by the detection time of a disseminated event. We demonstrate that making full use of the information embedded in one-dimensional node sequences can significantly improve localization accuracy. Interestingly, we discover that repeated reprocessing of one-dimensional node sequences can further increase localization accuracy. • We propose a distribution-based location estimation strategy that obtains the final location of sensor nodes using the marginal probability of joint distribution among adjacent nodes within the sequence. This new algorithm outperforms the widely adopted Centroid estimation [4, 8]. • To the best of our knowledge, this is the first work to improve the localization accuracy of nodes by adaptive events. The generation of later events is guided by localization results from previous events. • We evaluate line-based MSP on our new Mirage test-bed, and wave-based MSP in outdoor environments. 
Through system implementation, we discover and address several interesting issues such as partial sequence and sequence flips. To reveal MSP performance at scale, we provide analytic results as well as a complete simulation study. All the simulation and implementation code is available online at http://www.cs.umn.edu/∼zhong/MSP. The rest of the paper is organized as follows. Section 2 briefly surveys the related work. Section 3 presents an overview of the MSP localization system. In sections 4 and 5, basic MSP and four advanced processing methods are introduced. Section 6 describes how MSP can be applied in a wave propagation scenario. Section 7 discusses several implementation issues. Section 8 presents simulation results, and Section 9 reports an evaluation of MSP on the Mirage test-bed and an outdoor test-bed. Section 10 concludes the paper. 2 Related Work Many methods have been proposed to localize wireless sensor devices in the open air. Most of these can be classified into two categories: range-based and range-free localization. Range-based localization systems, such as GPS [23], Cricket [17], AHLoS [19], AOA [16], Robust Quadrilaterals [13] and Sweeps [7], are based on fine-grained point-topoint distance estimation or angle estimation to identify pernode location. Constraints on the cost, energy and hardware footprint of each sensor node make these range-based methods undesirable for massive outdoor deployment. In addition, ranging signals generated by sensor nodes have a very limited effective range because of energy and form factor concerns. For example, ultrasound signals usually effectively propagate 20-30 feet using an on-board transmitter [17]. Consequently, these range-based solutions require an undesirably high deployment density. Although the received signal strength indicator (RSSI) related [2, 24] methods were once considered an ideal low-cost solution, the irregularity of radio propagation [26] seriously limits the accuracy of such systems. The recently proposed RIPS localization system [11] superimposes two RF waves together, creating a low-frequency envelope that can be accurately measured. This ranging technique performs very well as long as antennas are well oriented and environmental factors such as multi-path effects and background noise are sufficiently addressed. Range-free methods don``t need to estimate or measure accurate distances or angles. Instead, anchors or controlled-event distributions are used for node localization. Range-free methods can be generally classified into two types: anchor-based and anchor-free solutions. • For anchor-based solutions such as Centroid [4], APIT [8], SeRLoc [10], Gradient [13] , and APS [15], the main idea is that the location of each node is estimated based on the known locations of the anchor nodes. Different anchor combinations narrow the areas in which the target nodes can possibly be located. Anchor-based solutions normally require a high density of anchor nodes so as to achieve good accuracy. In practice, it is desirable to have as few anchor nodes as possible so as to lower the system cost. • Anchor-free solutions require no anchor nodes. Instead, external event generators and data processing platforms are used. The main idea is to correlate the event detection time at a sensor node with the known space-time relationship of controlled events at the generator so that detection time-stamps can be mapped into the locations of sensors. Spotlight [20] and Lighthouse [18] work in this fashion. 
In Spotlight [20], the event distribution needs to be precise in both time and space. Precise event distribution is difficult to achieve without careful calibration, especially when the event-generating devices require certain mechanical maneuvers (e.g., the telescope mount used in Spotlight). All these increase system cost and reduce localization speed. StarDust [21], which works much faster, uses label relaxation algorithms to match light spots reflected by corner-cube retro-reflectors (CCR) with sensor nodes using various constraints. Label relaxation algorithms converge only when a sufficient number of robust constraints are obtained. Due to the environmental impact on RF connectivity constraints, however, StarDust is less accurate than Spotlight. In this paper, we propose a balanced solution that avoids the limitations of both anchor-based and anchor-free solutions. Unlike anchor-based solutions [4, 8], MSP allows a flexible tradeoff between the physical cost (anchor nodes) with the soft 16 1 A B 2 3 4 5 Target nodeAnchor node 1A 5 3 B2 4 1 B2 5A 43 1A25B4 3 1 52 AB 4 3 1 2 3 5 4 (b) (c)(d) (a) Event 1 Node Sequence generated by event 1 Event 3 Node Sequence generated by event 2 Node Sequence generated by event 3 Node Sequence generated by event 4 Event 2 Event 4 Figure 1. The MSP System Overview cost (localization events). MSP uses only a small number of anchors (theoretically, as few as two). Unlike anchor-free solutions, MSP doesn``t need to maintain rigid time-space relationships while distributing events, which makes system design simpler, more flexible and more robust to calibration errors. 3 System Overview MSP works by extracting relative location information from multiple simple one-dimensional orderings of nodes. Figure 1(a) shows a layout of a sensor network with anchor nodes and target nodes. Target nodes are defined as the nodes to be localized. Briefly, the MSP system works as follows. First, events are generated one at a time in the network area (e.g., ultrasound propagations from different locations, laser scans with diverse angles). As each event propagates, as shown in Figure 1(a), each node detects it at some particular time instance. For a single event, we call the ordering of nodes, which is based on the sequential detection of the event, a node sequence. Each node sequence includes both the targets and the anchors as shown in Figure 1(b). Second, a multi-sequence processing algorithm helps to narrow the possible location of each node to a small area (Figure 1(c)). Finally, a distributionbased estimation method estimates the exact location of each sensor node, as shown in Figure 1(d). Figure 1 shows that the node sequences can be obtained much more economically than accurate pair-wise distance measurements between target nodes and anchor nodes via ranging methods. In addition, this system does not require a rigid time-space relationship for the localization events, which is critical but hard to achieve in controlled event distribution scenarios (e.g., Spotlight [20]). For the sake of clarity in presentation, we present our system in two cases: • Ideal Case, in which all the node sequences obtained from the network are complete and correct, and nodes are time-synchronized [12, 9]. • Realistic Deployment, in which (i) node sequences can be partial (incomplete), (ii) elements in sequences could flip (i.e., the order obtained is reversed from reality), and (iii) nodes are not time-synchronized. 
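As a concrete illustration of the node-sequence primitive described in the overview above, the short Python sketch below (ours, not part of the MSP implementation; node ids and timestamps are hypothetical) turns per-node detection timestamps for a single event into a node sequence by sorting:

# Build a node sequence for one localization event from per-node detection
# timestamps (node id -> detection time in seconds). Illustrative data only.
detections = {"A": 0.012, "1": 0.003, "5": 0.007, "C": 0.020, "4": 0.025}

def node_sequence(detections):
    # Earliest detection first; this ordering is all MSP needs from the nodes.
    return [n for n, _ in sorted(detections.items(), key=lambda kv: kv[1])]

print(node_sequence(detections))   # ['1', '5', 'A', 'C', '4']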
To introduce the MSP algorithm, we first consider a simple straight-line scan scenario. Then, we describe how to implement straight-line scans as well as other event types, such as sound wave propagation. 1 A 2 3 4 5 B C 6 7 8 9 Straight-line Scan 1 Straight-lineScan2 8 1 5 A 6 C 4 3 7 2 B 9 3 1 C 5 9 2 A 4 6 B 7 8 Target node Anchor node Figure 2. Obtaining Multiple Node Sequences 4 Basic MSP Let us consider a sensor network with N target nodes and M anchor nodes randomly deployed in an area of size S. The top-level idea for basic MSP is to split the whole sensor network area into small pieces by processing node sequences. Because the exact locations of all the anchors in a node sequence are known, all the nodes in this sequence can be divided into O(M +1) parts in the area. In Figure 2, we use numbered circles to denote target nodes and numbered hexagons to denote anchor nodes. Basic MSP uses two straight lines to scan the area from different directions, treating each scan as an event. All the nodes react to the event sequentially generating two node sequences. For vertical scan 1, the node sequence is (8,1,5,A,6,C,4,3,7,2,B,9), as shown outside the right boundary of the area in Figure 2; for horizontal scan 2, the node sequence is (3,1,C,5,9,2,A,4,6,B,7,8), as shown under the bottom boundary of the area in Figure 2. Since the locations of the anchor nodes are available, the anchor nodes in the two node sequences actually split the area vertically and horizontally into 16 parts, as shown in Figure 2. To extend this process, suppose we have M anchor nodes and perform d scans from different angles, obtaining d node sequences and dividing the area into many small parts. Obviously, the number of parts is a function of the number of anchors M, the number of scans d, the anchors'' location as well as the slop k for each scan line. According to the pie-cutting theorem [22], the area can be divided into O(M2d2) parts. When M and d are appropriately large, the polygon for each target node may become sufficiently small so that accurate estimation can be achieved. We emphasize that accuracy is affected not only by the number of anchors M, but also by the number of events d. In other words, MSP provides a tradeoff between the physical cost of anchors and the soft cost of events. Algorithm 1 depicts the computing architecture of basic MSP. Each node sequence is processed within line 1 to 8. For each node, GetBoundaries() in line 5 searches for the predecessor and successor anchors in the sequence so as to determine the boundaries of this node. Then in line 6 UpdateMap() shrinks the location area of this node according to the newly obtained boundaries. After processing all sequences, Centroid Estimation (line 11) set the center of gravity of the final polygon as the estimated location of the target node. Basic MSP only makes use of the order information between a target node and the anchor nodes in each sequence. Actually, we can extract much more location information from 17 Algorithm 1 Basic MSP Process Output: The estimated location of each node. 1: repeat 2: GetOneUnprocessedSeqence(); 3: repeat 4: GetOneNodeFromSequenceInOrder(); 5: GetBoundaries(); 6: UpdateMap(); 7: until All the target nodes are updated; 8: until All the node sequences are processed; 9: repeat 10: GetOneUnestimatedNode(); 11: CentroidEstimation(); 12: until All the target nodes are estimated; each sequence. 
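To make Algorithm 1 concrete, the following Python sketch of ours implements basic MSP for straight-line scans under simplifying assumptions: each event is reduced to its scan direction (a unit vector), the map is a uniform grid, and the anchor positions, node sequences and field size are invented inputs rather than data from the paper.

import numpy as np

# Hypothetical inputs: anchor positions and, for each event, the scan
# direction plus the observed node sequence (targets "1","2","3").
anchors = {"A": (2.0, 8.0), "B": (9.0, 3.0), "C": (6.0, 6.0)}
events = [
    {"direction": (1.0, 0.0),                 # vertical line sweeping left -> right
     "sequence": ["1", "A", "2", "C", "B", "3"]},
    {"direction": (0.0, 1.0),                 # horizontal line sweeping bottom -> top
     "sequence": ["3", "B", "2", "C", "1", "A"]},
]
AREA = (0.0, 10.0, 0.0, 10.0)                 # xmin, xmax, ymin, ymax

def proj(p, d):
    return p[0] * d[0] + p[1] * d[1]

def basic_msp(anchors, events, area, grid=200):
    xs = np.linspace(area[0], area[1], grid)
    ys = np.linspace(area[2], area[3], grid)
    X, Y = np.meshgrid(xs, ys)
    targets = {n for e in events for n in e["sequence"] if n not in anchors}
    feasible = {n: np.ones_like(X, dtype=bool) for n in targets}
    for e in events:
        d, seq = e["direction"], e["sequence"]
        for i, n in enumerate(seq):
            if n in anchors:
                continue
            # Boundaries from the nearest known projections of preceding /
            # following anchors in this sequence (GetBoundaries in Algorithm 1).
            lo = max((proj(anchors[a], d) for a in seq[:i] if a in anchors),
                     default=-np.inf)
            hi = min((proj(anchors[a], d) for a in seq[i + 1:] if a in anchors),
                     default=np.inf)
            P = X * d[0] + Y * d[1]
            feasible[n] &= (P >= lo) & (P <= hi)        # UpdateMap
    # Centroid estimation over the surviving grid cells (line 11 of Algorithm 1).
    return {n: (X[m].mean(), Y[m].mean()) for n, m in feasible.items() if m.any()}

print(basic_msp(anchors, events, AREA))

Each scan contributes a pair of half-plane constraints per target node; their intersection is the node's polygon, and the centroid of the remaining grid cells plays the role of Centroid Estimation in Algorithm 1.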
Section 5 will introduce advanced MSP, in which four novel optimizations are proposed to improve the performance of MSP significantly. 5 Advanced MSP Four improvements to basic MSP are proposed in this section. The first three improvements do not need additional sensing and communication in the networks but require only slightly more off-line computation. The objective of all these improvements is to make full use of the information embedded in the node sequences. The results we have obtained empirically indicate that the implementation of the first two methods can dramatically reduce the localization error, and that the third and fourth methods are helpful for some system deployments. 5.1 Sequence-Based MSP As shown in Figure 2, each scan line and M anchors, splits the whole area into M + 1 parts. Each target node falls into one polygon shaped by scan lines. We noted that in basic MSP, only the anchors are used to narrow down the polygon of each target node, but actually there is more information in the node sequence that we can made use of. Let``s first look at a simple example shown in Figure 3. The previous scans narrow the locations of target node 1 and node 2 into two dashed rectangles shown in the left part of Figure 3. Then a new scan generates a new sequence (1, 2). With knowledge of the scan``s direction, it is easy to tell that node 1 is located to the left of node 2. Thus, we can further narrow the location area of node 2 by eliminating the shaded part of node 2``s rectangle. This is because node 2 is located on the right of node 1 while the shaded area is outside the lower boundary of node 1. Similarly, the location area of node 1 can be narrowed by eliminating the shaded part out of node 2``s right boundary. We call this procedure sequence-based MSP which means that the whole node sequence needs to be processed node by node in order. Specifically, sequence-based MSP follows this exact processing rule: 1 2 1 2 1 2 Lower boundary of 1 Upper boundary of 1 Lower boundary of 2 Upper boundary of 2 New sequence New upper boundary of 1 New Lower boundary of 2 EventPropagation Figure 3. Rule Illustration in Sequence Based MSP Algorithm 2 Sequence-Based MSP Process Output: The estimated location of each node. 1: repeat 2: GetOneUnprocessedSeqence(); 3: repeat 4: GetOneNodeByIncreasingOrder(); 5: ComputeLowbound(); 6: UpdateMap(); 7: until The last target node in the sequence; 8: repeat 9: GetOneNodeByDecreasingOrder(); 10: ComputeUpbound(); 11: UpdateMap(); 12: until The last target node in the sequence; 13: until All the node sequences are processed; 14: repeat 15: GetOneUnestimatedNode(); 16: CentroidEstimation(); 17: until All the target nodes are estimated; Elimination Rule: Along a scanning direction, the lower boundary of the successor``s area must be equal to or larger than the lower boundary of the predecessor``s area, and the upper boundary of the predecessor``s area must be equal to or smaller than the upper boundary of the successor``s area. In the case of Figure 3, node 2 is the successor of node 1, and node 1 is the predecessor of node 2. According to the elimination rule, node 2``s lower boundary cannot be smaller than that of node 1 and node 1``s upper boundary cannot exceed node 2``s upper boundary. Algorithm 2 illustrates the pseudo code of sequence-based MSP. Each node sequence is processed within line 3 to 13. 
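To illustrate the elimination rule (the two processing steps of Algorithm 2 are spelled out next), here is a minimal Python sketch of ours that reduces each node's location area to an interval [low, high] along the scan axis; anchors carry their exact projections, and the numbers are invented for the example:

# Sketch of the sequence-based elimination rule along one scan direction.
# bounds[n] = [low, high] on the scan axis; anchors have low == high.
bounds = {"8": [0.0, 10.0], "1": [0.0, 10.0], "A": [3.0, 3.0],
          "4": [0.0, 10.0], "B": [7.0, 7.0], "9": [0.0, 10.0]}
sequence = ["8", "1", "A", "4", "B", "9"]   # detection order for this scan

def apply_elimination(sequence, bounds):
    # Step 1: forward pass -- a node's lower boundary cannot be below
    # its predecessor's lower boundary.
    for prev, cur in zip(sequence, sequence[1:]):
        bounds[cur][0] = max(bounds[cur][0], bounds[prev][0])
    # Step 2: backward pass -- a node's upper boundary cannot exceed
    # its successor's upper boundary.
    rev = list(reversed(sequence))
    for nxt, cur in zip(rev, rev[1:]):
        bounds[cur][1] = min(bounds[cur][1], bounds[nxt][1])
    return bounds

print(apply_elimination(sequence, bounds))
# e.g. node "1" ends with [0.0, 3.0], node "4" with [3.0, 7.0], node "9" with [7.0, 10.0]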
The sequence processing contains two steps: Step 1 (line 3 to 7): Compute and modify the lower boundary for each target node by increasing order in the node sequence. Each node``s lower boundary is determined by the lower boundary of its predecessor node in the sequence, thus the processing must start from the first node in the sequence and by increasing order. Then update the map according to the new lower boundary. Step 2 (line 8 to 12): Compute and modify the upper boundary for each node by decreasing order in the node sequence. Each node``s upper boundary is determined by the upper boundary of its successor node in the sequence, thus the processing must start from the last node in the sequence and by decreasing order. Then update the map according to the new upper boundary. After processing all the sequences, for each node, a polygon bounding its possible location has been found. Then, center-ofgravity-based estimation is applied to compute the exact location of each node (line 14 to 17). An example of this process is shown in Figure 4. The third scan generates the node sequence (B,9,2,7,4,6,3,8,C,A,5,1). In addition to the anchor split lines, because nodes 4 and 7 come after node 2 in the sequence, node 4 and 7``s polygons could be narrowed according to node 2``s lower boundary (the lower right-shaded area); similarly, the shaded area in node 2``s rectangle could be eliminated since this part is beyond node 7``s upper boundary indicated by the dotted line. Similar eliminating can be performed for node 3 as shown in the figure. 18 1 A 2 3 4 5 B C 6 7 8 9 Straight-line Scan 1 Straight-lineScan2 Straight-line Scan 3 Target node Anchor node Figure 4. Sequence-Based MSP Example 1 A 2 3 4 5 B C 6 7 8 9 Straight-line Scan 1 Straight-lineScan2 Straight-line Scan 3 Reprocessing Scan 1 Target node Anchor node Figure 5. Iterative MSP: Reprocessing Scan 1 From above, we can see that the sequence-based MSP makes use of the information embedded in every sequential node pair in the node sequence. The polygon boundaries of the target nodes obtained in prior could be used to further split other target nodes'' areas. Our evaluation in Sections 8 and 9 shows that sequence-based MSP considerably enhances system accuracy. 5.2 Iterative MSP Sequence-based MSP is preferable to basic MSP because it extracts more information from the node sequence. In fact, further useful information still remains! In sequence-based MSP, a sequence processed later benefits from information produced by previously processed sequences (e.g., the third scan in Figure 5). However, the first several sequences can hardly benefit from other scans in this way. Inspired by this phenomenon, we propose iterative MSP. The basic idea of iterative MSP is to process all the sequences iteratively several times so that the processing of each single sequence can benefit from the results of other sequences. To illustrate the idea more clearly, Figure 4 shows the results of three scans that have provided three sequences. Now if we process the sequence (8,1,5,A,6,C,4,3,7,2,B,9) obtained from scan 1 again, we can make progress, as shown in Figure 5. The reprocessing of the node sequence 1 provides information in the way an additional vertical scan would. From sequencebased MSP, we know that the upper boundaries of nodes 3 and 4 along the scan direction must not extend beyond the upper boundary of node 7, therefore the grid parts can be eliminated (a) Central of Gravity (b) Joint Distribution 1 2 2 1 1 2 1 2 2 1 1 2 2 1 1 2 Figure 6. 
Example of Joint Distribution Estimation ...... vm ap[0] vm ap[1] vm ap[2] vm ap[3] Combine m ap Figure 7. Idea of DBE MSP for Each Node for the nodes 3 and node 4, respectively, as shown in Figure 5. From this example, we can see that iterative processing of the sequence could help further shrink the polygon of each target node, and thus enhance the accuracy of the system. The implementation of iterative MSP is straightforward: process all the sequences multiple times using sequence-based MSP. Like sequence-based MSP, iterative MSP introduces no additional event cost. In other words, reprocessing does not actually repeat the scan physically. Evaluation results in Section 8 will show that iterative MSP contributes noticeably to a lower localization error. Empirical results show that after 5 iterations, improvements become less significant. In summary, iterative processing can achieve better performance with only a small computation overhead. 5.3 Distribution-Based Estimation After determining the location area polygon for each node, estimation is needed for a final decision. Previous research mostly applied the Center of Gravity (COG) method [4] [8] [10] which minimizes average error. If every node is independent of all others, COG is the statistically best solution. In MSP, however, each node may not be independent. For example, two neighboring nodes in a certain sequence could have overlapping polygon areas. In this case, if the marginal probability of joint distribution is used for estimation, better statistical results are achieved. Figure 6 shows an example in which node 1 and node 2 are located in the same polygon. If COG is used, both nodes are localized at the same position (Figure 6(a)). However, the node sequences obtained from two scans indicate that node 1 should be to the left of and above node 2, as shown in Figure 6(b). The high-level idea of distribution-based estimation proposed for MSP, which we call DBE MSP, is illustrated in Figure 7. The distributions of each node under the ith scan (for the ith node sequence) are estimated in node.vmap[i], which is a data structure for remembering the marginal distribution over scan i. Then all the vmaps are combined to get a single map and weighted estimation is used to obtain the final location. For each scan, all the nodes are sorted according to the gap, which is the diameter of the polygon along the direction of the scan, to produce a second, gap-based node sequence. Then, the estimation starts from the node with the smallest gap. This is because it is statistically more accurate to assume a uniform distribution of the node with smaller gap. For each node processed in order from the gap-based node sequence, either if 19 Pred. node``s area Predecessor node exists: conditional distribution based on pred. node``s area Alone: Uniformly Distributed Succ. node``s area Successor node exists: conditional distribution based on succ. node``s area Succ. node``s area Both predecessor and successor nodes exist: conditional distribution based on both of them Pred. node``s area Figure 8. Four Cases in DBE Process no neighbor node in the original event-based node sequence shares an overlapping area, or if the neighbors have not been processed due to bigger gaps, a uniform distribution Uniform() is applied to this isolated node (the Alone case in Figure 8). If the distribution of its neighbors sharing overlapped areas has been processed, we calculate the joint distribution for the node. 
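As a minimal sketch of the joint-distribution step just mentioned (the three conditional cases are detailed in the next paragraph), the Python fragment below handles only the "predecessor exists" case, in one dimension along the scan direction, which is also the dimension the real computation reduces to; cells and probabilities are illustrative assumptions of ours:

import numpy as np

# 1-D DBE sketch: node A must lie after its predecessor B along the scan
# direction. Given B's already-estimated distribution over grid cells, A's
# marginal is uniform over the cells of its polygon consistent with each
# possible location of B. Illustrative data only.
cells = np.arange(10)                  # discretized positions along the scan axis
p_B = np.zeros(10); p_B[2:5] = 1 / 3   # B known to be in cells 2..4, uniform
area_A = (cells >= 3) & (cells <= 7)   # A's polygon spans cells 3..7

def marginal_after_predecessor(p_B, area_A, cells):
    p_A = np.zeros_like(p_B)
    for b, pb in enumerate(p_B):
        if pb == 0:
            continue
        valid = area_A & (cells > b)          # A must come after B in the sequence
        if valid.any():
            p_A[valid] += pb / valid.sum()    # uniform over the valid cells
    return p_A / p_A.sum()

print(marginal_after_predecessor(p_B, area_A, cells).round(3))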
As shown in Figure 8, there are three possible cases depending on whether the distribution of the overlapping predecessor and/or successor nodes have/has already been estimated. The estimation``s strategy of starting from the most accurate node (smallest gap node) reduces the problem of estimation error propagation. The results in the evaluation section indicate that applying distribution-based estimation could give statistically better results. 5.4 Adaptive MSP So far, all the enhancements to basic MSP focus on improving the multi-sequence processing algorithm given a fixed set of scan directions. All these enhancements require only more computing time without any overhead to the sensor nodes. Obviously, it is possible to have some choice and optimization on how events are generated. For example, in military situations, artillery or rocket-launched mini-ultrasound bombs can be used for event generation at some selected locations. In adaptive MSP, we carefully generate each new localization event so as to maximize the contribution of the new event to the refinement of localization, based on feedback from previous events. Figure 9 depicts the basic architecture of adaptive MSP. Through previous localization events, the whole map has been partitioned into many small location areas. The idea of adaptive MSP is to generate the next localization event to achieve best-effort elimination, which ideally could shrink the location area of individual node as much as possible. We use a weighted voting mechanism to evaluate candidate localization events. Every node wants the next event to split its area evenly, which would shrink the area fast. Therefore, every node votes for the parameters of the next event (e.g., the scan angle k of the straight-line scan). Since the area map is maintained centrally, the vote is virtually done and there is no need for the real sensor nodes to participate in it. After gathering all the voting results, the event parameters with the most votes win the election. There are two factors that determine the weight of each vote: • The vote for each candidate event is weighted according to the diameter D of the node``s location area. Nodes with bigger location areas speak louder in the voting, because Map Partitioned by the Localization Events Diameter of Each Area Candidate Localization Events Evaluation Trigger Next Localization Evet Figure 9. Basic Architecture of Adaptive MSP 2 3 Diameter D3 1 1 3k 2 3k 3 3k 4 3k 5 3k 6 3k 1 3k 2 3k 3 3k 6 3k4 3k 5 3k Weight el small i opt i j ii j i S S DkkDfkWeight arg ),(,()( ⋅=∆= 1 3 opt k Target node Anchor node Center of Gravity Node 3's area Figure 10. Candidate Slops for Node 3 at Anchor 1 overall system error is reduced mostly by splitting the larger areas. • The vote for each candidate event is also weighted according to its elimination efficiency for a location area, which is defined as how equally in size (or in diameter) an event can cut an area. In other words, an optimal scan event cuts an area in the middle, since this cut shrinks the area quickly and thus reduces localization uncertainty quickly. Combining the above two aspects, the weight for each vote is computed according to the following equation (1): Weight(k j i ) = f(Di,△(k j i ,k opt i )) (1) k j i is node i``s jth supporting parameter for next event generation; Di is diameter of node i``s location area; △(k j i ,k opt i ) is the distance between k j i and the optimal parameter k opt i for node i, which should be defined to fit the specific application. 
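A minimal sketch of the weighted voting of equation (1), assuming (as in the example that follows) that the elimination-efficiency term is the area-split ratio Ssmall/Slarge; node areas are simplified to axis-aligned rectangles and candidate events to vertical scan lines, so the geometry and numbers are ours rather than the paper's:

import math

# Node location areas as axis-aligned rectangles (xmin, xmax, ymin, ymax);
# candidate events are vertical scan lines x = c. Illustrative values.
areas = {"n1": (0.0, 4.0, 0.0, 2.0), "n2": (3.0, 9.0, 1.0, 5.0), "n3": (6.0, 8.0, 6.0, 9.0)}
candidates = [1.0, 3.5, 5.0, 7.0]

def vote_weight(rect, c):
    xmin, xmax, ymin, ymax = rect
    if not (xmin < c < xmax):
        return 0.0                                    # line misses the area: vote 0
    diameter = math.hypot(xmax - xmin, ymax - ymin)   # D_i: bigger areas speak louder
    left = (c - xmin) * (ymax - ymin)
    right = (xmax - c) * (ymax - ymin)
    s_small, s_large = min(left, right), max(left, right)
    return diameter * s_small / s_large               # equation (1) with f as in equation (2)

def next_event(areas, candidates):
    totals = {c: sum(vote_weight(r, c) for r in areas.values()) for c in candidates}
    return max(totals, key=totals.get), totals

print(next_event(areas, candidates))

The candidate with the largest total weight wins the election and is used as the next localization event.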
Figure 10 presents an example for node 1``s voting for the slopes of the next straight-line scan. In the system, there are a fixed number of candidate slopes for each scan (e.g., k1,k2,k3,k4...). The location area of target node 3 is shown in the figure. The candidate events k1 3,k2 3,k3 3,k4 3,k5 3,k6 3 are evaluated according to their effectiveness compared to the optimal ideal event which is shown as a dotted line with appropriate weights computed according to equation (1). For this specific example, as is illustrated in the right part of Figure 10, f(Di,△(k j i ,kopt i )) is defined as the following equation (2): Weight(kj i ) = f(Di,△(kj i ,kopt i )) = Di · Ssmall Slarge (2) Ssmall and Slarge are the sizes of the smaller part and larger part of the area cut by the candidate line respectively. In this case, node 3 votes 0 for the candidate lines that do not cross its area since Ssmall = 0. We show later that adaptive MSP improves localization accuracy in WSNs with irregularly shaped deployment areas. 20 5.5 Overhead and MSP Complexity Analysis This section provides a complexity analysis of the MSP design. We emphasize that MSP adopts an asymmetric design in which sensor nodes need only to detect and report the events. They are blissfully oblivious to the processing methods proposed in previous sections. In this section, we analyze the computational cost on the node sequence processing side, where resources are plentiful. According to Algorithm 1, the computational complexity of Basic MSP is O(d · N · S), and the storage space required is O(N · S), where d is the number of events, N is the number of target nodes, and S is the area size. According to Algorithm 2, the computational complexity of both sequence-based MSP and iterative MSP is O(c·d ·N ·S), where c is the number of iterations and c = 1 for sequencebased MSP, and the storage space required is O(N ·S). Both the computational complexity and storage space are equal within a constant factor to those of basic MSP. The computational complexity of the distribution-based estimation (DBE MSP) is greater. The major overhead comes from the computation of joint distributions when both predecessor and successor nodes exit. In order to compute the marginal probability, MSP needs to enumerate the locations of the predecessor node and the successor node. For example, if node A has predecessor node B and successor node C, then the marginal probability PA(x,y) of node A``s being at location (x,y) is: PA(x,y) = ∑ i ∑ j ∑ m ∑ n 1 NB,A,C ·PB(i, j)·PC(m,n) (3) NB,A,C is the number of valid locations for A satisfying the sequence (B, A, C) when B is at (i, j) and C is at (m,n); PB(i, j) is the available probability of node B``s being located at (i, j); PC(m,n) is the available probability of node C``s being located at (m,n). A naive algorithm to compute equation (3) has complexity O(d · N · S3). However, since the marginal probability indeed comes from only one dimension along the scanning direction (e.g., a line), the complexity can be reduced to O(d · N · S1.5) after algorithm optimization. In addition, the final location areas for every node are much smaller than the original field S; therefore, in practice, DBE MSP can be computed much faster than O(d ·N ·S1.5). 6 Wave Propagation Example So far, the description of MSP has been solely in the context of straight-line scan. However, we note that MSP is conceptually independent of how the event is propagated as long as node sequences can be obtained. 
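As a small supporting illustration of this point, the sketch below (ours) generates the node sequence for a circular wave event simply by ranking nodes by their distance to the event source, assuming uniform propagation speed and hypothetical positions:

import math

# For a uniformly propagating wave, detection order equals the order of
# distance to the source, so a node sequence follows directly. Illustrative data.
positions = {"A": (1.0, 1.0), "B": (8.0, 2.0), "n1": (3.0, 4.0),
             "n2": (6.0, 7.0), "n3": (2.0, 8.5)}
source = (0.0, 0.0)

def wave_sequence(positions, source):
    dist = lambda p: math.hypot(p[0] - source[0], p[1] - source[1])
    return sorted(positions, key=lambda n: dist(positions[n]))

print(wave_sequence(positions, source))   # ['A', 'n1', 'B', 'n3', 'n2']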
Clearly, we can also support wave-propagation-based events (e.g., ultrasound propagation, air blast propagation), which are polar coordinate equivalences of the line scans in the Cartesian coordinate system. This section illustrates the effects of MSP``s implementation in the wave propagation-based situation. For easy modelling, we have made the following assumptions: • The wave propagates uniformly in all directions, therefore the propagation has a circle frontier surface. Since MSP does not rely on an accurate space-time relationship, a certain distortion in wave propagation is tolerable. If any directional wave is used, the propagation frontier surface can be modified accordingly. 1 3 5 9 Target node Anchor node Previous Event location A 2 Center of Gravity 4 8 7 B 6 C A line of preferred locations for next event Figure 11. Example of Wave Propagation Situation • Under the situation of line-of-sight, we allow obstacles to reflect or deflect the wave. Reflection and deflection are not problems because each node reacts only to the first detected event. Those reflected or deflected waves come later than the line-of-sight waves. The only thing the system needs to maintain is an appropriate time interval between two successive localization events. • We assume that background noise exists, and therefore we run a band-pass filter to listen to a particular wave frequency. This reduces the chances of false detection. The parameter that affects the localization event generation here is the source location of the event. The different distances between each node and the event source determine the rank of each node in the node sequence. Using the node sequences, the MSP algorithm divides the whole area into many non-rectangular areas as shown in Figure 11. In this figure, the stars represent two previous event sources. The previous two propagations split the whole map into many areas by those dashed circles that pass one of the anchors. Each node is located in one of the small areas. Since sequence-based MSP, iterative MSP and DBE MSP make no assumptions about the type of localization events and the shape of the area, all three optimization algorithms can be applied for the wave propagation scenario. However, adaptive MSP needs more explanation. Figure 11 illustrates an example of nodes'' voting for next event source locations. Unlike the straight-line scan, the critical parameter now is the location of the event source, because the distance between each node and the event source determines the rank of the node in the sequence. In Figure 11, if the next event breaks out along/near the solid thick gray line, which perpendicularly bisects the solid dark line between anchor C and the center of gravity of node 9``s area (the gray area), the wave would reach anchor C and the center of gravity of node 9``s area at roughly the same time, which would relatively equally divide node 9``s area. Therefore, node 9 prefers to vote for the positions around the thick gray line. 7 Practical Deployment Issues For the sake of presentation, until now we have described MSP in an ideal case where a complete node sequence can be obtained with accurate time synchronization. In this section we describe how to make MSP work well under more realistic conditions. 21 7.1 Incomplete Node Sequence For diverse reasons, such as sensor malfunction or natural obstacles, the nodes in the network could fail to detect localization events. In such cases, the node sequence will not be complete. 
This problem has two versions: • Anchor nodes are missing in the node sequence If some anchor nodes fail to respond to the localization events, then the system has fewer anchors. In this case, the solution is to generate more events to compensate for the loss of anchors so as to achieve the desired accuracy requirements. • Target nodes are missing in the node sequence There are two consequences when target nodes are missing. First, if these nodes are still be useful to sensing applications, they need to use other backup localization approaches (e.g., Centroid) to localize themselves with help from their neighbors who have already learned their own locations from MSP. Secondly, since in advanced MSP each node in the sequence may contribute to the overall system accuracy, dropping of target nodes from sequences could also reduce the accuracy of the localization. Thus, proper compensation procedures such as adding more localization events need to be launched. 7.2 Localization without Time Synchronization In a sensor network without time synchronization support, nodes cannot be ordered into a sequence using timestamps. For such cases, we propose a listen-detect-assemble-report protocol, which is able to function independently without time synchronization. listen-detect-assemble-report requires that every node listens to the channel for the node sequence transmitted from its neighbors. Then, when the node detects the localization event, it assembles itself into the newest node sequence it has heard and reports the updated sequence to other nodes. Figure 12 (a) illustrates an example for the listen-detect-assemble-report protocol. For simplicity, in this figure we did not differentiate the target nodes from anchor nodes. A solid line between two nodes stands for a communication link. Suppose a straight line scans from left to right. Node 1 detects the event, and then it broadcasts the sequence (1) into the network. Node 2 and node 3 receive this sequence. When node 2 detects the event, node 2 adds itself into the sequence and broadcasts (1, 2). The sequence propagates in the same direction with the scan as shown in Figure 12 (a). Finally, node 6 obtains a complete sequence (1,2,3,5,7,4,6). In the case of ultrasound propagation, because the event propagation speed is much slower than that of radio, the listendetect-assemble-report protocol can work well in a situation where the node density is not very high. For instance, if the distance between two nodes along one direction is 10 meters, the 340m/s sound needs 29.4ms to propagate from one node to the other. While normally the communication data rate is 250Kbps in the WSN (e.g., CC2420 [1]), it takes only about 2 ∼ 3 ms to transmit an assembled packet for one hop. One problem that may occur using the listen-detectassemble-report protocol is multiple partial sequences as shown in Figure 12 (b). Two separate paths in the network may result in two sequences that could not be further combined. In this case, since the two sequences can only be processed as separate sequences, some order information is lost. Therefore the 1,2,5,4 1,3,7,4 1,2,3,5 1,2,3,5,7,4 1,2,3,5,7 1,2,3,5 1,3 1,2 1 2 3 5 7 4 6 1 1 1,3 1,2,3,5,7,4,6 1,2,5 1,3,7 1,3 1,2 1 2 3 5 7 4 6 1 1 1,3,7,4,6 1,2,5,4,6 (a) (b) (c) 1,3,2,5 1,3,2,5,7,4 1,3,2,5,7 1,3,2,5 1,3 1,2 1 2 3 5 7 4 6 1 1 1,3 1,3,2,5,7,4,6 Event Propagation Event Propagation Event Propagation Figure 12. Node Sequence without Time Synchronization accuracy of the system would decrease. 
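A minimal Python sketch of one node's role in the listen-detect-assemble-report protocol described above; radio I/O is stubbed out and the class is ours, not the MICAz firmware:

# One node's behaviour: keep the newest (longest) sequence overheard so far;
# on detecting the localization event, append its own id and rebroadcast.
class Node:
    def __init__(self, node_id):
        self.node_id = node_id
        self.best_sequence = []             # newest sequence heard so far

    def on_radio_receive(self, sequence):   # listen
        if len(sequence) > len(self.best_sequence):
            self.best_sequence = list(sequence)

    def on_event_detected(self):            # detect + assemble + report
        updated = self.best_sequence + [self.node_id]
        self.broadcast(updated)
        return updated

    def broadcast(self, sequence):
        print(f"node {self.node_id} broadcasts {sequence}")

# Example: node 2 has overheard (1,) and then detects the scan.
n2 = Node(2)
n2.on_radio_receive([1])
n2.on_event_detected()                      # -> node 2 broadcasts [1, 2]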
The other problem is the sequence flip problem. As shown in Figure 12 (c), because node 2 and node 3 are too close to each other along the scan direction, they detect the scan almost simultaneously. Due to the uncertainty such as media access delay, two messages could be transmitted out of order. For example, if node 3 sends out its report first, then the order of node 2 and node 3 gets flipped in the final node sequence. The sequence flip problem would appear even in an accurately synchronized system due to random jitter in node detection if an event arrives at multiple nodes almost simultaneously. A method addressing the sequence flip is presented in the next section. 7.3 Sequence Flip and Protection Band Sequence flip problems can be solved with and without time synchronization. We firstly start with a scenario applying time synchronization. Existing solutions for time synchronization [12, 6] can easily achieve sub-millisecond-level accuracy. For example, FTSP [12] achieves 16.9µs (microsecond) average error for a two-node single-hop case. Therefore, we can comfortably assume that the network is synchronized with maximum error of 1000µs. However, when multiple nodes are located very near to each other along the event propagation direction, even when time synchronization with less than 1ms error is achieved in the network, sequence flip may still occur. For example, in the sound wave propagation case, if two nodes are less than 0.34 meters apart, the difference between their detection timestamp would be smaller than 1 millisecond. We find that sequence flip could not only damage system accuracy, but also might cause a fatal error in the MSP algorithm. Figure 13 illustrates both detrimental results. In the left side of Figure 13(a), suppose node 1 and node 2 are so close to each other that it takes less than 0.5ms for the localization event to propagate from node 1 to node 2. Now unfortunately, the node sequence is mistaken to be (2,1). So node 1 is expected to be located to the right of node 2, such as at the position of the dashed node 1. According to the elimination rule in sequencebased MSP, the left part of node 1``s area is cut off as shown in the right part of Figure 13(a). This is a potentially fatal error, because node 1 is actually located in the dashed area which has been eliminated by mistake. During the subsequent eliminations introduced by other events, node 1``s area might be cut off completely, thus node 1 could consequently be erased from the map! Even in cases where node 1 still survives, its area actually does not cover its real location. 22 1 2 12 2 Lower boundary of 1 Upper boundary of 1 Flipped Sequence Fatal Elimination Error EventPropagation 1 1 Fatal Error 1 1 2 12 2 Lower boundary of 1 Upper boundary of 1 Flipped Sequence Safe Elimination EventPropagation 1 1 New lower boundary of 1 1 B (a) (b) B: Protection band Figure 13. Sequence Flip and Protection Band Another problem is not fatal but lowers the localization accuracy. If we get the right node sequence (1,2), node 1 has a new upper boundary which can narrow the area of node 1 as in Figure 3. Due to the sequence flip, node 1 loses this new upper boundary. In order to address the sequence flip problem, especially to prevent nodes from being erased from the map, we propose a protection band compensation approach. The basic idea of protection band is to extend the boundary of the location area a little bit so as to make sure that the node will never be erased from the map. 
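A minimal sketch of the protection-band compensation just introduced, applied to the forward (lower-boundary) pass of the elimination rule: the predecessor's lower boundary is shifted back by the band width B before the cut is made, so a flipped pair can never erase a node from the map. The interval values are illustrative; 0.34 m is the ultrasound band width justified below.

# Protection band in the forward pass of the elimination rule.
B = 0.34                                    # band width in meters (sound, 1 ms sync error)
bounds = {"1": [2.0, 6.0], "2": [2.5, 7.0]}
sequence = ["2", "1"]                       # flipped order: node 1 actually precedes node 2

def forward_pass_with_band(sequence, bounds, band):
    for prev, cur in zip(sequence, sequence[1:]):
        bounds[cur][0] = max(bounds[cur][0], bounds[prev][0] - band)
    return bounds

print(forward_pass_with_band(sequence, bounds, B))
# The cut on node "1" stops at 2.16 (= 2.5 - 0.34) instead of 2.5, so a node
# actually located just left of node 2 is not eliminated by the flipped sequence.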
This solution is based on the fact that nodes have a high probability of flipping in the sequence if they are near to each other along the event propagation direction. If two nodes are apart from each other more than some distance, say, B, they rarely flip unless the nodes are faulty. The width of a protection band B, is largely determined by the maximum error in system time synchronization and the localization event propagation speed. Figure 13(b) presents the application of the protection band. Instead of eliminating the dashed part in Figure 13(a) for node 1, the new lower boundary of node 1 is set by shifting the original lower boundary of node 2 to the left by distance B. In this case, the location area still covers node 1 and protects it from being erased. In a practical implementation, supposing that the ultrasound event is used, if the maximum error of system time synchronization is 1ms, two nodes might flip with high probability if the timestamp difference between the two nodes is smaller than or equal to 1ms. Accordingly, we set the protection band B as 0.34m (the distance sound can propagate within 1 millisecond). By adding the protection band, we reduce the chances of fatal errors, although at the cost of localization accuracy. Empirical results obtained from our physical test-bed verified this conclusion. In the case of using the listen-detect-assemble-report protocol, the only change we need to make is to select the protection band according to the maximum delay uncertainty introduced by the MAC operation and the event propagation speed. To bound MAC delay at the node side, a node can drop its report message if it experiences excessive MAC delay. This converts the sequence flip problem to the incomplete sequence problem, which can be more easily addressed by the method proposed in Section 7.1. 8 Simulation Evaluation Our evaluation of MSP was conducted on three platforms: (i) an indoor system with 46 MICAz motes using straight-line scan, (ii) an outdoor system with 20 MICAz motes using sound wave propagation, and (iii) an extensive simulation under various kinds of physical settings. In order to understand the behavior of MSP under numerous settings, we start our evaluation with simulations. Then, we implemented basic MSP and all the advanced MSP methods for the case where time synchronization is available in the network. The simulation and implementation details are omitted in this paper due to space constraints, but related documents [25] are provided online at http://www.cs.umn.edu/∼zhong/MSP. Full implementation and evaluation of system without time synchronization are yet to be completed in the near future. In simulation, we assume all the node sequences are perfect so as to reveal the performance of MSP achievable in the absence of incomplete node sequences or sequence flips. In our simulations, all the anchor nodes and target nodes are assumed to be deployed uniformly. The mean and maximum errors are averaged over 50 runs to obtain high confidence. For legibility reasons, we do not plot the confidence intervals in this paper. All the simulations are based on the straight-line scan example. We implement three scan strategies: • Random Scan: The slope of the scan line is randomly chosen at each time. • Regular Scan: The slope is predetermined to rotate uniformly from 0 degree to 180 degrees. For example, if the system scans 6 times, then the scan angles would be: 0, 30, 60, 90, 120, and 150. 
• Adaptive Scan: The slope of each scan is determined based on the localization results from previous scans. We start with basic MSP and then demonstrate the performance improvements one step at a time by adding (i) sequencebased MSP, (ii) iterative MSP, (iii) DBE MSP and (iv) adaptive MSP. 8.1 Performance of Basic MSP The evaluation starts with basic MSP, where we compare the performance of random scan and regular scan under different configurations. We intend to illustrate the impact of the number of anchors M, the number of scans d, and target node density (number of target nodes N in a fixed-size region) on the localization error. Table 1 shows the default simulation parameters. The error of each node is defined as the distance between the estimated location and the real position. We note that by default we only use three anchors, which is considerably fewer than existing range-free solutions [8, 4]. Impact of the Number of Scans: In this experiment, we compare regular scan with random scan under a different number of scans from 3 to 30 in steps of 3. The number of anchors Table 1. Default Configuration Parameters Parameter Description Field Area 200×200 (Grid Unit) Scan Type Regular (Default)/Random Scan Anchor Number 3 (Default) Scan Times 6 (Default) Target Node Number 100 (Default) Statistics Error Mean/Max Random Seeds 50 runs 23 0 5 10 15 20 25 30 0 10 20 30 40 50 60 70 80 90 Mean Error and Max Error VS Scan Time Scan Time Error Max Error of Random Scan Max Error of Regular Scan Mean Error of Random Scan Mean Error of Regular Scan (a) Error vs. Number of Scans 0 5 10 15 20 25 30 0 10 20 30 40 50 60 Mean Error and Max Error VS Anchor Number Anchor Number Error Max Error of Random Scan Max Error of Regular Scan Mean Error of Random Scan Mean Error of Regular Scan (b) Error vs. Anchor Number 0 50 100 150 200 10 20 30 40 50 60 70 Mean Error and Max Error VS Target Node Number Target Node Number Error Max Error of Random Scan Max Error of Regular Scan Mean Error of Random Scan Mean Error of Regular Scan (c) Error vs. Number of Target Nodes Figure 14. Evaluation of Basic MSP under Random and Regular Scans 0 5 10 15 20 25 30 0 10 20 30 40 50 60 70 Basic MSP VS Sequence Based MSP II Scan Time Error Max Error of Basic MSP Max Error of Seq MSP Mean Error of Basic MSP Mean Error of Seq MSP (a) Error vs. Number of Scans 0 5 10 15 20 25 30 0 5 10 15 20 25 30 35 40 45 50 Basic MSP VS Sequence Based MSP I Anchor Number Error Max Error of Basic MSP Max Error of Seq MSP Mean Error of Basic MSP Mean Error of Seq MSP (b) Error vs. Anchor Number 0 50 100 150 200 5 10 15 20 25 30 35 40 45 50 55 Basic MSP VS Sequence Based MSP III Target Node Number Error Max Error of Basic MSP Max Error of Seq MSP Mean Error of Basic MSP Mean Error of Seq MSP (c) Error vs. Number of Target Nodes Figure 15. Improvements of Sequence-Based MSP over Basic MSP is 3 by default. Figure 14(a) indicates the following: (i) as the number of scans increases, the localization error decreases significantly; for example, localization errors drop more than 60% from 3 scans to 30 scans; (ii) statistically, regular scan achieves better performance than random scan under identical number of scans. However, the performance gap reduces as the number of scans increases. This is expected since a large number of random numbers converges to a uniform distribution. Figure 14(a) also demonstrates that MSP requires only a small number of anchors to perform very well, compared to existing range-free solutions [8, 4]. 
Impact of the Number of Anchors: In this experiment, we compare regular scan with random scan under different number of anchors from 3 to 30 in steps of 3. The results shown in Figure 14(b) indicate that (i) as the number of anchor nodes increases, the localization error decreases, and (ii) statistically, regular scan obtains better results than random scan with identical number of anchors. By combining Figures 14(a) and 14(b), we can conclude that MSP allows a flexible tradeoff between physical cost (anchor nodes) and soft cost (localization events). Impact of the Target Node Density: In this experiment, we confirm that the density of target nodes has no impact on the accuracy, which motivated the design of sequence-based MSP. In this experiment, we compare regular scan with random scan under different number of target nodes from 10 to 190 in steps of 20. Results in Figure 14(c) show that mean localization errors remain constant across different node densities. However, when the number of target nodes increases, the average maximum error increases. Summary: From the above experiments, we can conclude that in basic MSP, regular scan are better than random scan under different numbers of anchors and scan events. This is because regular scans uniformly eliminate the map from different directions, while random scans would obtain sequences with redundant overlapping information, if two scans choose two similar scanning slopes. 8.2 Improvements of Sequence-Based MSP This section evaluates the benefits of exploiting the order information among target nodes by comparing sequence-based MSP with basic MSP. In this and the following sections, regular scan is used for straight-line scan event generation. The purpose of using regular scan is to keep the scan events and the node sequences identical for both sequence-based MSP and basic MSP, so that the only difference between them is the sequence processing procedure. Impact of the Number of Scans: In this experiment, we compare sequence-based MSP with basic MSP under different number of scans from 3 to 30 in steps of 3. Figure 15(a) indicates significant performance improvement in sequence-based MSP over basic MSP across all scan settings, especially when the number of scans is large. For example, when the number of scans is 30, errors in sequence-based MSP are about 20% of that of basic MSP. We conclude that sequence-based MSP performs extremely well when there are many scan events. Impact of the Number of Anchors: In this experiment, we use different number of anchors from 3 to 30 in steps of 3. As seen in Figure 15(b), the mean error and maximum error of sequence-based MSP is much smaller than that of basic MSP. Especially when there is limited number of anchors in the system, e.g., 3 anchors, the error rate was almost halved by using sequence-based MSP. This phenomenon has an interesting explanation: the cutting lines created by anchor nodes are exploited by both basic MSP and sequence-based MSP, so as the 24 0 2 4 6 8 10 0 5 10 15 20 25 30 35 40 45 50 Basic MSP VS Iterative MSP Iterative Times Error Max Error of Iterative Seq MSP Mean Error of Iterative Seq MSP Max Error of Basic MSP Mean Error of Basic MSP Figure 16. Improvements of Iterative MSP 0 2 4 6 8 10 12 14 16 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 DBE VS Non−DBE Error CumulativeDistrubutioinFunctions(CDF) Mean Error CDF of DBE MSP Mean Error CDF of Non−DBE MSP Max Error CDF of DBE MSP Max Error CDF of Non−DBE MSP Figure 17. 
Improvements of DBE MSP 0 20 40 60 80 100 0 10 20 30 40 50 60 70 Adaptive MSP for 500by80 Target Node Number Error Max Error of Regualr Scan Max Error of Adaptive Scan Mean Error of Regualr Scan Mean Error of Adaptive Scan (a) Adaptive MSP for 500 by 80 field 0 10 20 30 40 50 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 Mean Error CDF at Different Angle Steps in Adaptive Scan Mean Error CumulativeDistrubutioinFunctions(CDF) 5 Degree Angle Step Adaptive 10 Degree Angle Step Adaptive 20 Degree Angle Step Adaptive 30 Degree Step Regular Scan (b) Impact of the Number of Candidate Events Figure 18. The Improvements of Adaptive MSP number of anchor nodes increases, anchors tend to dominate the contribution. Therefore the performance gaps lessens. Impact of the Target Node Density: Figure 15(c) demonstrates the benefits of exploiting order information among target nodes. Since sequence-based MSP makes use of the information among the target nodes, having more target nodes contributes to the overall system accuracy. As the number of target nodes increases, the mean error and maximum error of sequence-based MSP decreases. Clearly the mean error in basic MSP is not affected by the number of target nodes, as shown in Figure 15(c). Summary: From the above experiments, we can conclude that exploiting order information among target nodes can improve accuracy significantly, especially when the number of events is large but with few anchors. 8.3 Iterative MSP over Sequence-Based MSP In this experiment, the same node sequences were processed iteratively multiple times. In Figure 16, the two single marks are results from basic MSP, since basic MSP doesn``t perform iterations. The two curves present the performance of iterative MSP under different numbers of iterations c. We note that when only a single iteration is used, this method degrades to sequence-based MSP. Therefore, Figure 16 compares the three methods to one another. Figure 16 shows that the second iteration can reduce the mean error and maximum error dramatically. After that, the performance gain gradually reduces, especially when c > 5. This is because the second iteration allows earlier scans to exploit the new boundaries created by later scans in the first iteration. Such exploitation decays quickly over iterations. 8.4 DBE MSP over Iterative MSP Figure 17, in which we augment iterative MSP with distribution-based estimation (DBE MSP), shows that DBE MSP could bring about statistically better performance. Figure 17 presents cumulative distribution localization errors. In general, the two curves of the DBE MSP lay slightly to the left of that of non-DBE MSP, which indicates that DBE MSP has a smaller statistical mean error and averaged maximum error than non-DBE MSP. We note that because DBE is augmented on top of the best solution so far, the performance improvement is not significant. When we apply DBE on basic MSP methods, the improvement is much more significant. We omit these results because of space constraints. 8.5 Improvements of Adaptive MSP This section illustrates the performance of adaptive MSP over non-adaptive MSP. We note that feedback-based adaptation can be applied to all MSP methods, since it affects only the scanning angles but not the sequence processing. In this experiment, we evaluated how adaptive MSP can improve the best solution so far. The default angle granularity (step) for adaptive searching is 5 degrees. Impact of Area Shape: First, if system settings are regular, the adaptive method hardly contributes to the results. 
For a square area (regular), the performance of adaptive MSP and regular scans are very close. However, if the shape of the area is not regular, adaptive MSP helps to choose the appropriate localization events to compensate. Therefore, adaptive MSP can achieve a better mean error and maximum error as shown in Figure 18(a). For example, adaptive MSP improves localization accuracy by 30% when the number of target nodes is 10. Impact of the Target Node Density: Figure 18(a) shows that when the node density is low, adaptive MSP brings more benefit than when node density is high. This phenomenon makes statistical sense, because the law of large numbers tells us that node placement approaches a truly uniform distribution when the number of nodes is increased. Adaptive MSP has an edge 25 Figure 19. The Mirage Test-bed (Line Scan) Figure 20. The 20-node Outdoor Experiments (Wave) when layout is not uniform. Impact of Candidate Angle Density: Figure 18(b) shows that the smaller the candidate scan angle step, the better the statistical performance in terms of mean error. The rationale is clear, as wider candidate scan angles provide adaptive MSP more opportunity to choose the one approaching the optimal angle. 8.6 Simulation Summary Starting from basic MSP, we have demonstrated step-bystep how four optimizations can be applied on top of each other to improve localization performance. In other words, these optimizations are compatible with each other and can jointly improve the overall performance. We note that our simulations were done under assumption that the complete node sequence can be obtained without sequence flips. In the next section, we present two real-system implementations that reveal and address these practical issues. 9 System Evaluation In this section, we present a system implementation of MSP on two physical test-beds. The first one is called Mirage, a large indoor test-bed composed of six 4-foot by 8-foot boards, illustrated in Figure 19. Each board in the system can be used as an individual sub-system, which is powered, controlled and metered separately. Three Hitachi CP-X1250 projectors, connected through a Matorx Triplehead2go graphics expansion box, are used to create an ultra-wide integrated display on six boards. Figure 19 shows that a long tilted line is generated by the projectors. We have implemented all five versions of MSP on the Mirage test-bed, running 46 MICAz motes. Unless mentioned otherwise, the default setting is 3 anchors and 6 scans at the scanning line speed of 8.6 feet/s. In all of our graphs, each data point represents the average value of 50 trials. In the outdoor system, a Dell A525 speaker is used to generate 4.7KHz sound as shown in Figure 20. We place 20 MICAz motes in the backyard of a house. Since the location is not completely open, sound waves are reflected, scattered and absorbed by various objects in the vicinity, causing a multi-path effect. In the system evaluation, simple time synchronization mechanisms are applied on each node. 9.1 Indoor System Evaluation During indoor experiments, we encountered several realworld problems that are not revealed in the simulation. First, sequences obtained were partial due to misdetection and message losses. Second, elements in the sequences could flip due to detection delay, uncertainty in media access, or error in time synchronization. We show that these issues can be addressed by using the protection band method described in Section 7.3. 
9.1.1 On Scanning Speed and Protection Band In this experiment, we studied the impact of the scanning speed and the length of the protection band on the performance of the system. In general, with increasing scanning speed, nodes have less time to respond to the event and the time gap between two adjacent nodes shrinks, leading to an increasing number of partial sequences and sequence flips. Figure 21 shows the node flip situations for six scans with distinct angles under different scan speeds. The x-axis shows the distance between the flipped nodes in the correct node sequence; the y-axis shows the total number of flips in the six scans. This figure tells us that a faster scan brings not only an increasing number of flips, but also longer-distance flips that require a wider protection band to prevent fatal errors. Figure 22(a) shows the effectiveness of the protection band in terms of reducing the number of unlocalized nodes. When we use a moderate scan speed (4.3 feet/s), the chance of flipping is low, so we can achieve 0.45 feet mean accuracy (Figure 22(b)) with 1.6 feet maximum error (Figure 22(c)). With increasing speeds, the protection band needs to be set to a larger value to deal with flipping. Interesting phenomena can be observed in Figure 22: on one hand, the protection band can sharply reduce the number of unlocalized nodes; on the other hand, protection bands enlarge the area in which a target could potentially reside, introducing more uncertainty. Thus there is a concave curve for both mean and maximum error when the scan speed is at 8.6 feet/s. 9.1.2 On MSP Methods and Protection Band In this experiment, we show the improvements resulting from three different methods. Figure 23(a) shows that a protection band of 0.35 feet is sufficient for the scan speed of 8.57 feet/s. Figures 23(b) and 23(c) show clearly that iterative MSP (with adaptation) achieves the best performance. For example, Figure 23(b) shows that when we set the protection band at 0.05 feet, iterative MSP achieves 0.7 feet accuracy, which is 42% more accurate than the basic design. Similarly, Figures 23(b) and 23(c) show the double-edged effect of the protection band on the localization accuracy. Figure 21. Number of Flips for Different Scan Speeds (flip distributions of six scans, plotted against node distance in the ideal node sequence, at line speeds of 4.3, 8.6 and 14.6 feet/s).
Figure 22. Impact of Protection Band and Scanning Speed: (a) number of unlocalized nodes, (b) mean localization error, and (c) max localization error versus protection band (in feet), for line scans at 4.3, 8.6 and 14.6 feet/s. Figure 23. Impact of Protection Band under Different MSP Methods (scan line speed 8.57 feet/s): (a) number of unlocalized nodes out of 46, (b) mean localization error, and (c) max localization error versus protection band (in feet), for basic, sequence-based and iterative MSP. Figure 24. Impact of the Number of Anchors and Scans (protection band: 0.35 feet): (a) number of unlocalized nodes, (b) mean localization error, and (c) max localization error versus the number of anchors, for 4, 6 and 8 scan events at speed 8.75 feet/s. 9.1.3 On Number of Anchors and Scans In this experiment, we show a tradeoff between hardware cost (anchors) and soft cost (events). Figure 24(a) shows that with more cutting lines created by anchors, the chance of unlocalized nodes increases slightly. We note that with a 0.35-foot protection band, the percentage of unlocalized nodes is very small; e.g., in the worst case with 11 anchors, only 2 out of 46 nodes are not localized due to flipping. Figures 24(b) and 24(c) show the tradeoff between the number of anchors and the number of scans. Obviously, as the number of anchors increases, the error drops significantly. With 11 anchors we can achieve a localization accuracy as low as 0.25 ∼ 0.35 feet, which is nearly a 60% improvement. Similarly, with an increasing number of scans, the error drops significantly as well. We observe about a 30% improvement across all anchor settings when we increase the number of scans from 4 to 8. For example, with only 3 anchors, we can achieve 0.6-foot accuracy with 8 scans. 9.2 Outdoor System Evaluation The outdoor system evaluation contains two parts: (i) effective detection distance evaluation, which shows that the node sequence can be readily obtained, and (ii) sound propagation based localization, which shows the results of wave-propagation-based localization. 9.2.1 Effective Detection Distance Evaluation We first evaluate the sequence flip phenomenon in wave propagation. As shown in Figure 25, 20 motes were placed in five groups in front of the speaker, four nodes in each group at roughly the same distance to the speaker. The gap between groups is set to 2, 3, 4 and 5 feet respectively in four experiments. Figure 26 shows the results. The x-axis in each subgraph indicates the group index.
There are four nodes in each group (4 bars). The y-axis shows the detection rank (order) of each node in the node sequence. As the distance between groups increases, the number of flips in the resulting node sequence decreases. For example, in the 2-foot distance subgraph, there are quite a few flips between nodes in adjacent and even nonadjacent groups, while in the 5-foot subgraph, flips between different groups disappeared in the test. Figure 25. Wave Detection. Figure 26. Ranks vs. Distances (detection rank versus group index for group distances of 2, 3, 4 and 5 feet). Figure 27. Localization Error (Sound): node and anchor positions in the field (x- and y-dimensions in feet). 9.2.2 Sound Propagation Based Localization As shown in Figure 20, 20 motes are placed as a grid including 5 rows with 5 feet between each row and 4 columns with 4 feet between each column. Six 4KHz acoustic wave propagation events are generated around the mote grid by a speaker. Figure 27 shows the localization results using iterative MSP (3 iterations) with a protection band of 3 feet. The average error of the localization results is 3 feet and the maximum error is 5 feet, with one unlocalized node. We found that sequence flip in wave propagation is more severe than in the indoor, line-based test. This is expected due to the high propagation speed of sound. Currently we use MICAz motes, which are equipped with a low-quality microphone. We believe that with a better speaker and more events, the system can yield better accuracy. Despite the hardware constraints, the MSP algorithm still successfully localized most of the nodes with good accuracy. 10 Conclusions In this paper, we present the first work that exploits the concept of node sequence processing to localize sensor nodes. We demonstrated that we could significantly improve localization accuracy by making full use of the information embedded in multiple easy-to-get one-dimensional node sequences. We proposed four novel optimization methods, exploiting order and marginal distribution among non-anchor nodes as well as the feedback information from early localization results. Importantly, these optimization methods can be used together, and improve accuracy additively. The practical issues of partial node sequences and sequence flips were identified and addressed in two physical system test-beds. We also evaluated performance at scale through analysis as well as extensive simulations. Results demonstrate that, requiring neither costly hardware on sensor nodes nor precise event distribution, MSP can achieve sub-foot accuracy with very few anchor nodes provided sufficient events. 11 References [1] CC2420 Data Sheet. Available at http://www.chipcon.com/. [2] P. Bahl and V. N. Padmanabhan. RADAR: An In-Building RF-Based User Location and Tracking System. In IEEE InfoCom '00. [3] M. Broxton, J. Lifton, and J. Paradiso. Localizing A Sensor Network via Collaborative Processing of Global Stimuli. In EWSN '05. [4] N. Bulusu, J. Heidemann, and D. Estrin. GPS-Less Low Cost Outdoor Localization for Very Small Devices. IEEE Personal Communications Magazine, 7(4), 2000. [5] D. Culler, D. Estrin, and M. Srivastava. Overview of Sensor Networks. IEEE Computer Magazine, 2004.
[6] J. Elson, L. Girod, and D. Estrin. Fine-Grained Network Time Synchronization Using Reference Broadcasts. In OSDI '02. [7] D. K. Goldenberg, P. Bihler, M. Gao, J. Fang, B. D. Anderson, A. Morse, and Y. Yang. Localization in Sparse Networks Using Sweeps. In MobiCom '06. [8] T. He, C. Huang, B. M. Blum, J. A. Stankovic, and T. Abdelzaher. Range-Free Localization Schemes in Large-Scale Sensor Networks. In MobiCom '03. [9] B. Kusy, P. Dutta, P. Levis, M. Mar, A. Ledeczi, and D. Culler. Elapsed Time on Arrival: A Simple and Versatile Primitive for Canonical Time Synchronization Services. International Journal of Ad-Hoc and Ubiquitous Computing, 2(1), 2006. [10] L. Lazos and R. Poovendran. SeRLoc: Secure Range-Independent Localization for Wireless Sensor Networks. In WiSe '04. [11] M. Maroti, B. Kusy, G. Balogh, P. Volgyesi, A. Nadas, K. Molnar, S. Dora, and A. Ledeczi. Radio Interferometric Geolocation. In SenSys '05. [12] M. Maroti, B. Kusy, G. Simon, and A. Ledeczi. The Flooding Time Synchronization Protocol. In SenSys '04. [13] D. Moore, J. Leonard, D. Rus, and S. Teller. Robust Distributed Network Localization with Noisy Range Measurements. In SenSys '04. [14] R. Nagpal and D. Coore. An Algorithm for Group Formation in an Amorphous Computer. In PDCS '98. [15] D. Niculescu and B. Nath. Ad-Hoc Positioning System. In GlobeCom '01. [16] D. Niculescu and B. Nath. Ad-Hoc Positioning System (APS) Using AOA. In InfoCom '03. [17] N. B. Priyantha, A. Chakraborty, and H. Balakrishnan. The Cricket Location-Support System. In MobiCom '00. [18] K. Römer. The Lighthouse Location System for Smart Dust. In MobiSys '03. [19] A. Savvides, C. C. Han, and M. B. Srivastava. Dynamic Fine-Grained Localization in Ad-Hoc Networks of Sensors. In MobiCom '01. [20] R. Stoleru, T. He, J. A. Stankovic, and D. Luebke. A High-Accuracy, Low-Cost Localization System for Wireless Sensor Networks. In SenSys '05. [21] R. Stoleru, P. Vicaire, T. He, and J. A. Stankovic. StarDust: A Flexible Architecture for Passive Localization in Wireless Sensor Networks. In SenSys '06. [22] E. W. Weisstein. Plane Division by Lines. mathworld.wolfram.com. [23] B. H. Wellenhoff, H. Lichtenegger, and J. Collins. Global Positioning System: Theory and Practice, Fourth Edition. Springer Verlag, 1997. [24] K. Whitehouse. The Design of Calamari: An Ad-Hoc Localization System for Sensor Networks. Master's thesis, University of California at Berkeley, 2002. [25] Z. Zhong. MSP Evaluation and Implementation Report. Available at http://www.cs.umn.edu/∼zhong/MSP. [26] G. Zhou, T. He, and J. A. Stankovic. Impact of Radio Irregularity on Wireless Sensor Networks. In MobiSys '04.
MSP: Multi-Sequence Positioning of Wireless Sensor Nodes * Abstract Wireless Sensor Networks have been proposed for use in many location-dependent applications. Most of these need to identify the locations of wireless sensor nodes, a challenging task because of the severe constraints on cost, energy and effective range of sensor devices. To overcome limitations in existing solutions, we present a Multi-Sequence Positioning (MSP) method for large-scale stationary sensor node localization in outdoor environments. The novel idea behind MSP is to reconstruct and estimate two-dimensional location information for each sensor node by processing multiple one-dimensional node sequences, easily obtained through loosely guided event distribution. Starting from a basic MSP design, we propose four optimizations, which work together to increase the localization accuracy. We address several interesting issues, such as incomplete (partial) node sequences and sequence flip, found in the Mirage test-bed we built. We have evaluated the MSP system through theoretical analysis, extensive simulation as well as two physical systems (an indoor version with 46 MICAz motes and an outdoor version with 20 MICAz motes). This evaluation demonstrates that MSP can achieve an accuracy within one foot, requiring neither additional costly hardware on sensor nodes nor precise event distribution. It also provides a nice tradeoff between physical cost (anchors) and soft cost (events), while maintaining localization accuracy. 1 Introduction Although Wireless Sensor Networks (WSN) have shown promising prospects in various applications [5], researchers still face several challenges for massive deployment of such networks. One of these is to identify the location of individual sensor nodes in outdoor environments. Because of unpredictable flow dynamics in airborne scenarios, it is not currently feasible to localize sensor nodes during massive UVA-based deployment. On the other hand, geometric information is indispensable in these networks, since users need to know where events of interest occur (e.g., the location of intruders or of a bomb explosion). Previous research on node localization falls into two categories: range-based approaches and range-free approaches. Range-based approaches [13, 17, 19, 24] compute per-node location information iteratively or recursively based on measured distances among target nodes and a few anchors which precisely know their locations. These approaches generally require costly hardware (e.g., GPS) and have limited effective range due to energy constraints (e.g., ultrasound-based TDOA [3, 17]). Although range-based solutions can be suitably used in small-scale indoor environments, they are considered less cost-effective for large-scale deployments. On the other hand, range-free approaches [4, 8, 10, 13, 14, 15] do not require accurate distance measurements, but localize the node based on network connectivity (proximity) information. Unfortunately, since wireless connectivity is highly influenced by the environment and hardware calibration, existing solutions fail to deliver encouraging empirical results, or require substantial survey [2] and calibration [24] on a case-by-case basis. 
Realizing the impracticality of existing solutions for the large-scale outdoor environment, researchers have recently proposed solutions (e.g., Spotlight [20] and Lighthouse [18]) for sensor node localization using the spatiotemporal correlation of controlled events (i.e., inferring nodes' locations based on the detection time of controlled events). These solutions demonstrate that long range and high accuracy localization can be achieved simultaneously with little additional cost at sensor nodes. These benefits, however, come along with an implicit assumption that the controlled events can be precisely distributed to a specified location at a specified time. We argue that precise event distribution is difficult to achieve, especially at large scale when terrain is uneven, the event distribution device is not well calibrated and its position is difficult to maintain (e.g., the helicopter-mounted scenario in [20]). To address these limitations in current approaches, in this paper we present a multi-sequence positioning (MSP) method for large-scale stationary sensor node localization, in deployments where an event source has line-of-sight to all sensors. The novel idea behind MSP is to estimate each sensor node's two-dimensional location by processing multiple easy-to-get one-dimensional node sequences (e.g., event detection order) obtained through loosely-guided event distribution. This design offers several benefits. First, compared to a range-based approach, MSP does not require additional costly hardware. It works using sensors typically used by sensor network applications, such as light and acoustic sensors, both of which we specifically consider in this work. Second, compared to a range-free approach, MSP needs only a small number of anchors (theoretically, as few as two), so high accuracy can be achieved economically by introducing more events instead of more anchors. And third, compared to Spotlight, MSP does not require precise and sophisticated event distribution, an advantage that significantly simplifies the system design and reduces calibration cost. This paper offers the following additional intellectual contributions: • We are the first to localize sensor nodes using the concept of node sequence, an ordered list of sensor nodes, sorted by the detection time of a disseminated event. We demonstrate that making full use of the information embedded in one-dimensional node sequences can significantly improve localization accuracy. Interestingly, we discover that repeated reprocessing of one-dimensional node sequences can further increase localization accuracy. • We propose a distribution-based location estimation strategy that obtains the final location of sensor nodes using the marginal probability of joint distribution among adjacent nodes within the sequence. This new algorithm outperforms the widely adopted Centroid estimation [4, 8]. • To the best of our knowledge, this is the first work to improve the localization accuracy of nodes by adaptive events. The generation of later events is guided by localization results from previous events. • We evaluate line-based MSP on our new Mirage test-bed, and wave-based MSP in outdoor environments. Through system implementation, we discover and address several interesting issues such as partial sequence and sequence flips. To reveal MSP performance at scale, we provide analytic results as well as a complete simulation study. All the simulation and implementation code is available online at http://www.cs.umn.edu/∼zhong/MSP. 
The rest of the paper is organized as follows. Section 2 briefly surveys the related work. Section 3 presents an overview of the MSP localization system. In sections 4 and 5, basic MSP and four advanced processing methods are introduced. Section 6 describes how MSP can be applied in a wave propagation scenario. Section 7 discusses several implementation issues. Section 8 presents simulation results, and Section 9 reports an evaluation of MSP on the Mirage test-bed and an outdoor test-bed. Section 10 concludes the paper. 2 Related Work Many methods have been proposed to localize wireless sensor devices in the open air. Most of these can be classified into two categories: range-based and range-free localization. Range-based localization systems, such as GPS [23], Cricket [17], AHLoS [19], AOA [16], Robust Quadrilaterals [13] and Sweeps [7], are based on fine-grained point-topoint distance estimation or angle estimation to identify pernode location. Constraints on the cost, energy and hardware footprint of each sensor node make these range-based methods undesirable for massive outdoor deployment. In addition, ranging signals generated by sensor nodes have a very limited effective range because of energy and form factor concerns. For example, ultrasound signals usually effectively propagate 20-30 feet using an on-board transmitter [17]. Consequently, these range-based solutions require an undesirably high deployment density. Although the received signal strength indicator (RSSI) related [2, 24] methods were once considered an ideal low-cost solution, the irregularity of radio propagation [26] seriously limits the accuracy of such systems. The recently proposed RIPS localization system [11] superimposes two RF waves together, creating a low-frequency envelope that can be accurately measured. This ranging technique performs very well as long as antennas are well oriented and environmental factors such as multi-path effects and background noise are sufficiently addressed. Range-free methods don't need to estimate or measure accurate distances or angles. Instead, anchors or controlled-event distributions are used for node localization. Range-free methods can be generally classified into two types: anchor-based and anchor-free solutions. • For anchor-based solutions such as Centroid [4], APIT [8], SeRLoc [10], Gradient [13], and APS [15], the main idea is that the location of each node is estimated based on the known locations of the anchor nodes. Different anchor combinations narrow the areas in which the target nodes can possibly be located. Anchor-based solutions normally require a high density of anchor nodes so as to achieve good accuracy. In practice, it is desirable to have as few anchor nodes as possible so as to lower the system cost. • Anchor-free solutions require no anchor nodes. Instead, external event generators and data processing platforms are used. The main idea is to correlate the event detection time at a sensor node with the known space-time relationship of controlled events at the generator so that detection time-stamps can be mapped into the locations of sensors. Spotlight [20] and Lighthouse [18] work in this fashion. In Spotlight [20], the event distribution needs to be precise in both time and space. Precise event distribution is difficult to achieve without careful calibration, especially when the event-generating devices require certain mechanical maneuvers (e.g., the telescope mount used in Spotlight). All these increase system cost and reduce localization speed. 
StarDust [21], which works much faster, uses label relaxation algorithms to match light spots reflected by corner-cube retro-reflectors (CCR) with sensor nodes using various constraints. Label relaxation algorithms converge only when a sufficient number of robust constraints are obtained. Due to the environmental impact on RF connectivity constraints, however, StarDust is less accurate than Spotlight. In this paper, we propose a balanced solution that avoids the limitations of both anchor-based and anchor-free solutions. Unlike anchor-based solutions [4, 8], MSP allows a flexible tradeoff between the physical cost (anchor nodes) with the soft Figure 1. The MSP System Overview cost (localization events). MSP uses only a small number of anchors (theoretically, as few as two). Unlike anchor-free solutions, MSP doesn't need to maintain rigid time-space relationships while distributing events, which makes system design simpler, more flexible and more robust to calibration errors. 3 System Overview MSP works by extracting relative location information from multiple simple one-dimensional orderings of nodes. Figure 1 (a) shows a layout of a sensor network with anchor nodes and target nodes. Target nodes are defined as the nodes to be localized. Briefly, the MSP system works as follows. First, events are generated one at a time in the network area (e.g., ultrasound propagations from different locations, laser scans with diverse angles). As each event propagates, as shown in Figure 1 (a), each node detects it at some particular time instance. For a single event, we call the ordering of nodes, which is based on the sequential detection of the event, a node sequence. Each node sequence includes both the targets and the anchors as shown in Figure 1 (b). Second, a multi-sequence processing algorithm helps to narrow the possible location of each node to a small area (Figure 1 (c)). Finally, a distributionbased estimation method estimates the exact location of each sensor node, as shown in Figure 1 (d). Figure 1 shows that the node sequences can be obtained much more economically than accurate pair-wise distance measurements between target nodes and anchor nodes via ranging methods. In addition, this system does not require a rigid time-space relationship for the localization events, which is critical but hard to achieve in controlled event distribution scenarios (e.g., Spotlight [20]). For the sake of clarity in presentation, we present our system in two cases: • Ideal Case, in which all the node sequences obtained from the network are complete and correct, and nodes are time-synchronized [12, 9]. • Realistic Deployment, in which (i) node sequences can be partial (incomplete), (ii) elements in sequences could flip (i.e., the order obtained is reversed from reality), and (iii) nodes are not time-synchronized. To introduce the MSP algorithm, we first consider a simple straight-line scan scenario. Then, we describe how to implement straight-line scans as well as other event types, such as sound wave propagation. Figure 2. Obtaining Multiple Node Sequences 4 Basic MSP Let us consider a sensor network with N target nodes and M anchor nodes randomly deployed in an area of size S. The top-level idea for basic MSP is to split the whole sensor network area into small pieces by processing node sequences. Because the exact locations of all the anchors in a node sequence are known, all the nodes in this sequence can be divided into O (M + 1) parts in the area. 
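To make the splitting concrete, the following is a minimal Python sketch (our own illustrative names, not code from the paper) of how a single node sequence, combined with the known anchor locations, bounds every target node along that scan's sweep direction; intersecting such strips over all d scans produces the small polygons that basic MSP works with.

import math

def anchor_bounds_along_scan(sequence, anchor_pos, scan_dir):
    # For one scan, bound each target's projection onto the sweep direction by
    # the nearest preceding and following anchors in the node sequence.
    # sequence   : node ids in detection order (anchors and targets mixed)
    # anchor_pos : dict of anchor id -> (x, y); target ids are absent
    # scan_dir   : (dx, dy) unit vector of the sweep direction
    dx, dy = scan_dir
    proj = lambda p: p[0] * dx + p[1] * dy        # scalar position along the sweep
    bounds = {}                                   # target id -> [low, high]
    last_anchor = -math.inf                       # projection of the latest anchor seen
    waiting = []                                  # targets still missing an upper bound
    for node in sequence:
        if node in anchor_pos:                    # anchor: closes all waiting intervals
            a = proj(anchor_pos[node])
            for t in waiting:
                bounds[t][1] = a
            waiting.clear()
            last_anchor = a
        else:                                     # target: lower-bounded by last anchor
            bounds[node] = [last_anchor, math.inf]
            waiting.append(node)
    return bounds

Each sequence containing M anchors yields at most M + 1 such strips, which matches the O(M + 1) split described above.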
In Figure 2, we use numbered circles to denote target nodes and numbered hexagons to denote anchor nodes. Basic MSP uses two straight lines to scan the area from different directions, treating each scan as an event. All the nodes react to the event sequentially generating two node sequences. For vertical scan 1, the node sequence is (8,1,5, A,6, C,4,3,7,2, B,9), as shown outside the right boundary of the area in Figure 2; for horizontal scan 2, the node sequence is (3,1, C,5,9,2, A,4,6, B,7,8), as shown under the bottom boundary of the area in Figure 2. Since the locations of the anchor nodes are available, the anchor nodes in the two node sequences actually split the area vertically and horizontally into 16 parts, as shown in Figure 2. To extend this process, suppose we have M anchor nodes and perform d scans from different angles, obtaining d node sequences and dividing the area into many small parts. Obviously, the number of parts is a function of the number of anchors M, the number of scans d, the anchors' location as well as the slop k for each scan line. According to the pie-cutting theorem [22], the area can be divided into O (M2d2) parts. When M and d are appropriately large, the polygon for each target node may become sufficiently small so that accurate estimation can be achieved. We emphasize that accuracy is affected not only by the number of anchors M, but also by the number of events d. In other words, MSP provides a tradeoff between the physical cost of anchors and the soft cost of events. Algorithm 1 depicts the computing architecture of basic MSP. Each node sequence is processed within line 1 to 8. For each node, GetBoundaries () in line 5 searches for the predecessor and successor anchors in the sequence so as to determine the boundaries of this node. Then in line 6 UpdateMap () shrinks the location area of this node according to the newly obtained boundaries. After processing all sequences, Centroid Estimation (line 11) set the center of gravity of the final polygon as the estimated location of the target node. Basic MSP only makes use of the order information between a target node and the anchor nodes in each sequence. Actually, we can extract much more location information from Algorithm 1 Basic MSP Process Output: The estimated location of each node. 1: repeat 2: GetOneUnprocessedSeqence (); 3: repeat 4: GetOneNodeFromSequenceInOrder (); 5: GetBoundaries (); 6: UpdateMap (); 7: until All the target nodes are updated; 8: until All the node sequences are processed; 9: repeat 10: GetOneUnestimatedNode (); 11: CentroidEstimation (); 12: until All the target nodes are estimated; each sequence. Section 5 will introduce advanced MSP, in which four novel optimizations are proposed to improve the performance of MSP significantly. 5 Advanced MSP Four improvements to basic MSP are proposed in this section. The first three improvements do not need additional sensing and communication in the networks but require only slightly more off-line computation. The objective of all these improvements is to make full use of the information embedded in the node sequences. The results we have obtained empirically indicate that the implementation of the first two methods can dramatically reduce the localization error, and that the third and fourth methods are helpful for some system deployments. 5.1 Sequence-Based MSP As shown in Figure 2, each scan line and M anchors, splits the whole area into M + 1 parts. Each target node falls into one polygon shaped by scan lines. 
We noted that in basic MSP, only the anchors are used to narrow down the polygon of each target node, but actually there is more information in the node sequence that we can made use of. Let's first look at a simple example shown in Figure 3. The previous scans narrow the locations of target node 1 and node 2 into two dashed rectangles shown in the left part of Figure 3. Then a new scan generates a new sequence (1, 2). With knowledge of the scan's direction, it is easy to tell that node 1 is located to the left of node 2. Thus, we can further narrow the location area of node 2 by eliminating the shaded part of node 2's rectangle. This is because node 2 is located on the right of node 1 while the shaded area is outside the lower boundary of node 1. Similarly, the location area of node 1 can be narrowed by eliminating the shaded part out of node 2's right boundary. We call this procedure sequence-based MSP which means that the whole node sequence needs to be processed node by node in order. Specifically, sequence-based MSP follows this exact processing rule: Figure 3. Rule Illustration in Sequence Based MSP Algorithm 2 Sequence-Based MSP Process Output: The estimated location of each node. 1: repeat 2: GetOneUnprocessedSeqence (); 3: repeat 4: GetOneNodeByIncreasingOrder (); 5: ComputeLowbound (); 6: UpdateMap (); 7: until The last target node in the sequence; 8: repeat 9: GetOneNodeByDecreasingOrder (); 10: ComputeUpbound (); 11: UpdateMap (); 12: until The last target node in the sequence; 13: until All the node sequences are processed; 14: repeat 15: GetOneUnestimatedNode (); 16: CentroidEstimation (); 17: until All the target nodes are estimated; Elimination Rule: Along a scanning direction, the lower boundary of the successor's area must be equal to or larger than the lower boundary of the predecessor's area, and the upper boundary of the predecessor's area must be equal to or smaller than the upper boundary of the successor's area. In the case of Figure 3, node 2 is the successor of node 1, and node 1 is the predecessor of node 2. According to the elimination rule, node 2's lower boundary cannot be smaller than that of node 1 and node 1's upper boundary cannot exceed node 2's upper boundary. Algorithm 2 illustrates the pseudo code of sequence-based MSP. Each node sequence is processed within line 3 to 13. The sequence processing contains two steps: Step 1 (line 3 to 7): Compute and modify the lower boundary for each target node by increasing order in the node sequence. Each node's lower boundary is determined by the lower boundary of its predecessor node in the sequence, thus the processing must start from the first node in the sequence and by increasing order. Then update the map according to the new lower boundary. Step 2 (line 8 to 12): Compute and modify the upper boundary for each node by decreasing order in the node sequence. Each node's upper boundary is determined by the upper boundary of its successor node in the sequence, thus the processing must start from the last node in the sequence and by decreasing order. Then update the map according to the new upper boundary. After processing all the sequences, for each node, a polygon bounding its possible location has been found. Then, center-ofgravity-based estimation is applied to compute the exact location of each node (line 14 to 17). An example of this process is shown in Figure 4. The third scan generates the node sequence (B,9,2,7,4,6,3,8, C, A,5,1). 
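Before continuing with the Figure 4 walkthrough, the elimination rule can be summarized by a hedged Python sketch of Algorithm 2's two passes (illustrative names; only the scalar bounds along one scan direction are shown, and the 2-D UpdateMap() step is omitted).

def sequence_based_bounds(sequence, low, high):
    # Pass 1 (increasing order): a successor's lower boundary can never be
    # smaller than its predecessor's lower boundary.
    for prev, cur in zip(sequence, sequence[1:]):
        low[cur] = max(low[cur], low[prev])
    # Pass 2 (decreasing order): a predecessor's upper boundary can never be
    # larger than its successor's upper boundary.
    rev = list(reversed(sequence))
    for nxt, cur in zip(rev, rev[1:]):
        high[cur] = min(high[cur], high[nxt])
    return low, high

Here low and high hold each node's current projection bounds for this scan direction; anchors carry their exact projections, so their bounds simply propagate into neighboring target nodes.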
In addition to the anchor split lines, because nodes 4 and 7 come after node 2 in the sequence, node 4 and 7's polygons could be narrowed according to node 2's lower boundary (the lower right-shaded area); similarly, the shaded area in node 2's rectangle could be eliminated since this part is beyond node 7's upper boundary indicated by the dotted line. Similar eliminating can be performed for node 3 as shown in the figure. Figure 4. Sequence-Based MSP Example Straight-line Scan 1 From above, we can see that the sequence-based MSP makes use of the information embedded in every sequential node pair in the node sequence. The polygon boundaries of the target nodes obtained in prior could be used to further split other target nodes' areas. Our evaluation in Sections 8 and 9 shows that sequence-based MSP considerably enhances system accuracy. 5.2 Iterative MSP Sequence-based MSP is preferable to basic MSP because it extracts more information from the node sequence. In fact, further useful information still remains! In sequence-based MSP, a sequence processed later benefits from information produced by previously processed sequences (e.g., the third scan in Figure 5). However, the first several sequences can hardly benefit from other scans in this way. Inspired by this phenomenon, we propose iterative MSP. The basic idea of iterative MSP is to process all the sequences iteratively several times so that the processing of each single sequence can benefit from the results of other sequences. To illustrate the idea more clearly, Figure 4 shows the results of three scans that have provided three sequences. Now if we process the sequence (8,1,5, A,6, C,4,3,7,2, B,9) obtained from scan 1 again, we can make progress, as shown in Figure 5. The reprocessing of the node sequence 1 provides information in the way an additional vertical scan would. From sequencebased MSP, we know that the upper boundaries of nodes 3 and 4 along the scan direction must not extend beyond the upper boundary of node 7, therefore the grid parts can be eliminated Figure 6. Example of Joint Distribution Estimation Figure 7. Idea of DBE MSP for Each Node for the nodes 3 and node 4, respectively, as shown in Figure 5. From this example, we can see that iterative processing of the sequence could help further shrink the polygon of each target node, and thus enhance the accuracy of the system. The implementation of iterative MSP is straightforward: process all the sequences multiple times using sequence-based MSP. Like sequence-based MSP, iterative MSP introduces no additional event cost. In other words, reprocessing does not actually repeat the scan physically. Evaluation results in Section 8 will show that iterative MSP contributes noticeably to a lower localization error. Empirical results show that after 5 iterations, improvements become less significant. In summary, iterative processing can achieve better performance with only a small computation overhead. 5.3 Distribution-Based Estimation After determining the location area polygon for each node, estimation is needed for a final decision. Previous research mostly applied the Center of Gravity (COG) method [4] [8] [10] which minimizes average error. If every node is independent of all others, COG is the statistically best solution. In MSP, however, each node may not be independent. For example, two neighboring nodes in a certain sequence could have overlapping polygon areas. 
In this case, if the marginal probability of the joint distribution is used for estimation, better statistical results are achieved. Figure 6 shows an example in which node 1 and node 2 are located in the same polygon. If COG is used, both nodes are localized at the same position (Figure 6(a)). However, the node sequences obtained from two scans indicate that node 1 should be to the left of and above node 2, as shown in Figure 6(b). The high-level idea of distribution-based estimation proposed for MSP, which we call DBE MSP, is illustrated in Figure 7. The distribution of each node under the ith scan (for the ith node sequence) is estimated in node.vmap[i], which is a data structure for remembering the marginal distribution over scan i. Then all the vmaps are combined to get a single map, and weighted estimation is used to obtain the final location. For each scan, all the nodes are sorted according to the gap, which is the diameter of the polygon along the direction of the scan, to produce a second, gap-based node sequence. Then, the estimation starts from the node with the smallest gap. This is because it is statistically more accurate to assume a uniform distribution for the node with a smaller gap. Figure 5. Iterative MSP: Reprocessing Scan 1. Figure 8. Four Cases in DBE Process (e.g., successor node exists: conditional distribution based on the successor node's area; both predecessor and successor nodes exist: conditional distribution based on both of them). Figure 9. Basic Architecture of Adaptive MSP. For each node processed in order from the gap-based node sequence, if either no neighbor node in the original event-based node sequence shares an overlapping area, or the neighbors have not been processed due to bigger gaps, a uniform distribution Uniform() is applied to this isolated node (the Alone case in Figure 8). If the distribution of its neighbors sharing overlapped areas has been processed, we calculate the joint distribution for the node. As shown in Figure 8, there are three possible cases depending on whether the distributions of the overlapping predecessor and/or successor nodes have already been estimated. The estimation strategy of starting from the most accurate node (the smallest-gap node) reduces the problem of estimation error propagation. The results in the evaluation section indicate that applying distribution-based estimation could give statistically better results.
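As a small, hedged numerical illustration of why these order-constrained marginals beat the plain centroid (cf. Figure 6), the Python fragment below works through a 1-D case; it is our own example, not the paper's implementation.

import numpy as np

# Both nodes share the 1-D interval [0, 1] after all eliminations, but one scan
# says node 1 was detected before node 2, i.e., x1 < x2 along that direction.
grid = np.linspace(0.0, 1.0, 201)
x1, x2 = np.meshgrid(grid, grid, indexing="ij")
joint = (x1 < x2).astype(float)              # uniform over the ordered region
joint /= joint.sum()
est1 = (x1 * joint).sum()                    # marginal-mean estimate for node 1
est2 = (x2 * joint).sum()                    # marginal-mean estimate for node 2
print(round(est1, 3), round(est2, 3))        # about 0.332 and 0.668, not 0.5 and 0.5

Plain COG would place both nodes at 0.5; the marginals of the joint, order-constrained distribution separate them, which is the intuition DBE MSP exploits in two dimensions.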
5.4 Adaptive MSP So far, all the enhancements to basic MSP focus on improving the multi-sequence processing algorithm given a fixed set of scan directions. All these enhancements require only more computing time without any overhead to the sensor nodes. Obviously, it is possible to have some choice and optimization on how events are generated. For example, in military situations, artillery or rocket-launched mini-ultrasound bombs can be used for event generation at some selected locations. In adaptive MSP, we carefully generate each new localization event so as to maximize the contribution of the new event to the refinement of localization, based on feedback from previous events. Figure 9 depicts the basic architecture of adaptive MSP. Through previous localization events, the whole map has been partitioned into many small location areas. The idea of adaptive MSP is to generate the next localization event to achieve best-effort elimination, which ideally could shrink the location area of each individual node as much as possible. We use a weighted voting mechanism to evaluate candidate localization events. Every node wants the next event to split its area evenly, which would shrink the area fast. Therefore, every node votes for the parameters of the next event (e.g., the scan angle k of the straight-line scan). Since the area map is maintained centrally, the vote is performed virtually and there is no need for the real sensor nodes to participate in it. After gathering all the voting results, the event parameters with the most votes win the election. There are two factors that determine the weight of each vote: • The vote for each candidate event is weighted according to the diameter D of the node's location area. Nodes with bigger location areas speak louder in the voting, because overall system error is reduced mostly by splitting the larger areas. • The vote for each candidate event is also weighted according to its elimination efficiency for a location area, which is defined as how equally in size (or in diameter) an event can cut an area. In other words, an optimal scan event cuts an area in the middle, since this cut shrinks the area quickly and thus reduces localization uncertainty quickly. Combining the above two aspects, the weight for each vote is computed according to the following equation (1): Weight(k_i^j) = f(D_i, Δ(k_i^j, k_i^opt)), (1) where k_i^j is node i's jth supporting parameter for the next event generation, D_i is the diameter of node i's location area, and Δ(k_i^j, k_i^opt) is the distance between k_i^j and the optimal parameter k_i^opt for node i, which should be defined to fit the specific application. Figure 10. Candidate Slopes for Node 3 at Anchor 1. Figure 10 presents an example of node 3's voting for the slopes of the next straight-line scan. In the system, there is a fixed number of candidate slopes for each scan (e.g., k1, k2, k3, k4, ...). The location area of target node 3 is shown in the figure. The candidate events k_3^1, k_3^2, k_3^3, k_3^4, k_3^5, k_3^6 are evaluated according to their effectiveness compared to the optimal ideal event, which is shown as a dotted line, with appropriate weights computed according to equation (1). For this specific example, as illustrated in the right part of Figure 10, f(D_i, Δ(k_i^j, k_i^opt)) = D_i · S_small/S_large, where S_small and S_large are the sizes of the smaller part and larger part of the area cut by the candidate line, respectively. In this case, node 3 votes 0 for the candidate lines that do not cross its area, since S_small = 0. We show later that adaptive MSP improves localization accuracy in WSNs with irregularly shaped deployment areas. 5.5 Overhead and MSP Complexity Analysis This section provides a complexity analysis of the MSP design. We emphasize that MSP adopts an asymmetric design in which sensor nodes need only to detect and report the events. They are blissfully oblivious to the processing methods proposed in previous sections. In this section, we analyze the computational cost on the node sequence processing side, where resources are plentiful. According to Algorithm 1, the computational complexity of basic MSP is O(d · N · S), and the storage space required is O(N · S), where d is the number of events, N is the number of target nodes, and S is the area size. According to Algorithm 2, the computational complexity of both sequence-based MSP and iterative MSP is O(c · d · N · S), where c is the number of iterations and c = 1 for sequence-based MSP, and the storage space required is O(N · S). Both the computational complexity and storage space are equal within a constant factor to those of basic MSP.
The computational complexity of the distribution-based estimation (DBE MSP) is greater. The major overhead comes from the computation of joint distributions when both predecessor and successor nodes exit. In order to compute the marginal probability, MSP needs to enumerate the locations of the predecessor node and the successor node. For example, if node A has predecessor node B and successor node C, then the marginal probability PA (x, y) of node A's being at location NB, A, C is the number of valid locations for A satisfying the sequence (B, A, C) when B is at (i, j) and C is at (m, n); PB (i, j) is the available probability of node B's being located at (i, j); PC (m, n) is the available probability of node C's being located at (m, n). A naive algorithm to compute equation (3) has complexity O (d • N • S3). However, since the marginal probability indeed comes from only one dimension along the scanning direction (e.g., a line), the complexity can be reduced to O (d • N • S1 .5) after algorithm optimization. In addition, the final location areas for every node are much smaller than the original field S; therefore, in practice, DBE MSP can be computed much faster than O (d • N • S1 .5). 6 Wave Propagation Example So far, the description of MSP has been solely in the context of straight-line scan. However, we note that MSP is conceptually independent of how the event is propagated as long as node sequences can be obtained. Clearly, we can also support wave-propagation-based events (e.g., ultrasound propagation, air blast propagation), which are polar coordinate equivalences of the line scans in the Cartesian coordinate system. This section illustrates the effects of MSP's implementation in the wave propagation-based situation. For easy modelling, we have made the following assumptions: • The wave propagates uniformly in all directions, therefore the propagation has a circle frontier surface. Since MSP does not rely on an accurate space-time relationship, a certain distortion in wave propagation is tolerable. If any directional wave is used, the propagation frontier surface can be modified accordingly. Figure 11. Example of Wave Propagation Situation • Under the situation of line-of-sight, we allow obstacles to reflect or deflect the wave. Reflection and deflection are not problems because each node reacts only to the first detected event. Those reflected or deflected waves come later than the line-of-sight waves. The only thing the system needs to maintain is an appropriate time interval between two successive localization events. • We assume that background noise exists, and therefore we run a band-pass filter to listen to a particular wave frequency. This reduces the chances of false detection. The parameter that affects the localization event generation here is the source location of the event. The different distances between each node and the event source determine the rank of each node in the node sequence. Using the node sequences, the MSP algorithm divides the whole area into many non-rectangular areas as shown in Figure 11. In this figure, the stars represent two previous event sources. The previous two propagations split the whole map into many areas by those dashed circles that pass one of the anchors. Each node is located in one of the small areas. Since sequence-based MSP, iterative MSP and DBE MSP make no assumptions about the type of localization events and the shape of the area, all three optimization algorithms can be applied for the wave propagation scenario. 
However, adaptive MSP needs more explanation. Figure 11 illustrates an example of nodes' voting for next event source locations. Unlike the straight-line scan, the critical parameter now is the location of the event source, because the distance between each node and the event source determines the rank of the node in the sequence. In Figure 11, if the next event breaks out along/near the solid thick gray line, which perpendicularly bisects the solid dark line between anchor C and the center of gravity of node 9's area (the gray area), the wave would reach anchor C and the center of gravity of node 9's area at roughly the same time, which would relatively equally divide node 9's area. Therefore, node 9 prefers to vote for the positions around the thick gray line. 7 Practical Deployment Issues For the sake of presentation, until now we have described MSP in an ideal case where a complete node sequence can be obtained with accurate time synchronization. In this section we describe how to make MSP work well under more realistic conditions. 7.1 Incomplete Node Sequence For diverse reasons, such as sensor malfunction or natural obstacles, the nodes in the network could fail to detect localization events. In such cases, the node sequence will not be complete. This problem has two versions: • Anchor nodes are missing in the node sequence If some anchor nodes fail to respond to the localization events, then the system has fewer anchors. In this case, the solution is to generate more events to compensate for the loss of anchors so as to achieve the desired accuracy requirements. • Target nodes are missing in the node sequence There are two consequences when target nodes are missing. First, if these nodes are still be useful to sensing applications, they need to use other backup localization approaches (e.g., Centroid) to localize themselves with help from their neighbors who have already learned their own locations from MSP. Secondly, since in advanced MSP each node in the sequence may contribute to the overall system accuracy, dropping of target nodes from sequences could also reduce the accuracy of the localization. Thus, proper compensation procedures such as adding more localization events need to be launched. 7.2 Localization without Time Synchronization In a sensor network without time synchronization support, nodes cannot be ordered into a sequence using timestamps. For such cases, we propose a listen-detect-assemble-report protocol, which is able to function independently without time synchronization. listen-detect-assemble-report requires that every node listens to the channel for the node sequence transmitted from its neighbors. Then, when the node detects the localization event, it assembles itself into the newest node sequence it has heard and reports the updated sequence to other nodes. Figure 12 (a) illustrates an example for the listen-detect-assemble-report protocol. For simplicity, in this figure we did not differentiate the target nodes from anchor nodes. A solid line between two nodes stands for a communication link. Suppose a straight line scans from left to right. Node 1 detects the event, and then it broadcasts the sequence (1) into the network. Node 2 and node 3 receive this sequence. When node 2 detects the event, node 2 adds itself into the sequence and broadcasts (1, 2). The sequence propagates in the same direction with the scan as shown in Figure 12 (a). Finally, node 6 obtains a complete sequence (1,2,3,5,7,4,6). 
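A minimal, hedged Python simulation of this listen-detect-assemble-report idea is sketched below; the topology and names are our own illustration (the actual Figure 12(a) connectivity is only shown in the figure), and real deployments must additionally cope with MAC delays and message losses.

def assemble_sequences(detection_order, neighbours):
    # Nodes are processed in event-detection order; each one extends the newest
    # sequence heard from its one-hop neighbours and "broadcasts" the result.
    heard = {n: [] for n in detection_order}      # newest sequence each node knows
    for node in detection_order:
        newest = max([heard[node]] + [heard[m] for m in neighbours[node]], key=len)
        report = newest + [node]                  # assemble: append self to newest heard
        for m in neighbours[node]:                # report: one-hop broadcast
            if len(report) > len(heard[m]):
                heard[m] = report
        heard[node] = report
    return heard

# Illustrative topology and detection order loosely inspired by Figure 12(a).
neighbours = {1: [2, 3], 2: [1, 3, 5], 3: [1, 2, 7], 5: [2, 7, 4],
              7: [3, 5, 6], 4: [5, 6], 6: [7, 4]}
print(assemble_sequences([1, 2, 3, 5, 7, 4, 6], neighbours)[6])  # [1, 2, 3, 5, 7, 4, 6]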
In the case of ultrasound propagation, because the event propagation speed is much slower than that of radio, the listendetect-assemble-report protocol can work well in a situation where the node density is not very high. For instance, if the distance between two nodes along one direction is 10 meters, the 340m/s sound needs 29.4 ms to propagate from one node to the other. While normally the communication data rate is 250Kbps in the WSN (e.g., CC2420 [1]), it takes only about 2 ∼ 3 ms to transmit an assembled packet for one hop. One problem that may occur using the listen-detectassemble-report protocol is multiple partial sequences as shown in Figure 12 (b). Two separate paths in the network may result in two sequences that could not be further combined. In this case, since the two sequences can only be processed as separate sequences, some order information is lost. Therefore the Figure 12. Node Sequence without Time Synchronization accuracy of the system would decrease. The other problem is the sequence flip problem. As shown in Figure 12 (c), because node 2 and node 3 are too close to each other along the scan direction, they detect the scan almost simultaneously. Due to the uncertainty such as media access delay, two messages could be transmitted out of order. For example, if node 3 sends out its report first, then the order of node 2 and node 3 gets flipped in the final node sequence. The sequence flip problem would appear even in an accurately synchronized system due to random jitter in node detection if an event arrives at multiple nodes almost simultaneously. A method addressing the sequence flip is presented in the next section. 7.3 Sequence Flip and Protection Band Sequence flip problems can be solved with and without time synchronization. We firstly start with a scenario applying time synchronization. Existing solutions for time synchronization [12, 6] can easily achieve sub-millisecond-level accuracy. For example, FTSP [12] achieves 16.9 µs (microsecond) average error for a two-node single-hop case. Therefore, we can comfortably assume that the network is synchronized with maximum error of 1000µs. However, when multiple nodes are located very near to each other along the event propagation direction, even when time synchronization with less than 1ms error is achieved in the network, sequence flip may still occur. For example, in the sound wave propagation case, if two nodes are less than 0.34 meters apart, the difference between their detection timestamp would be smaller than 1 millisecond. We find that sequence flip could not only damage system accuracy, but also might cause a fatal error in the MSP algorithm. Figure 13 illustrates both detrimental results. In the left side of Figure 13 (a), suppose node 1 and node 2 are so close to each other that it takes less than 0.5 ms for the localization event to propagate from node 1 to node 2. Now unfortunately, the node sequence is mistaken to be (2,1). So node 1 is expected to be located to the right of node 2, such as at the position of the dashed node 1. According to the elimination rule in sequencebased MSP, the left part of node 1's area is cut off as shown in the right part of Figure 13 (a). This is a potentially fatal error, because node 1 is actually located in the dashed area which has been eliminated by mistake. During the subsequent eliminations introduced by other events, node 1's area might be cut off completely, thus node 1 could consequently be erased from the map! 
Even in cases where node 1 still survives, its area actually does not cover its real location. Figure 13. Sequence Flip and Protection Band Another problem is not fatal but lowers the localization accuracy. If we get the right node sequence (1,2), node 1 has a new upper boundary which can narrow the area of node 1 as in Figure 3. Due to the sequence flip, node 1 loses this new upper boundary. In order to address the sequence flip problem, especially to prevent nodes from being erased from the map, we propose a protection band compensation approach. The basic idea of protection band is to extend the boundary of the location area a little bit so as to make sure that the node will never be erased from the map. This solution is based on the fact that nodes have a high probability of flipping in the sequence if they are near to each other along the event propagation direction. If two nodes are apart from each other more than some distance, say, B, they rarely flip unless the nodes are faulty. The width of a protection band B, is largely determined by the maximum error in system time synchronization and the localization event propagation speed. Figure 13 (b) presents the application of the protection band. Instead of eliminating the dashed part in Figure 13 (a) for node 1, the new lower boundary of node 1 is set by shifting the original lower boundary of node 2 to the left by distance B. In this case, the location area still covers node 1 and protects it from being erased. In a practical implementation, supposing that the ultrasound event is used, if the maximum error of system time synchronization is 1ms, two nodes might flip with high probability if the timestamp difference between the two nodes is smaller than or equal to 1ms. Accordingly, we set the protection band B as 0.34 m (the distance sound can propagate within 1 millisecond). By adding the protection band, we reduce the chances of fatal errors, although at the cost of localization accuracy. Empirical results obtained from our physical test-bed verified this conclusion. In the case of using the listen-detect-assemble-report protocol, the only change we need to make is to select the protection band according to the maximum delay uncertainty introduced by the MAC operation and the event propagation speed. To bound MAC delay at the node side, a node can drop its report message if it experiences excessive MAC delay. This converts the sequence flip problem to the incomplete sequence problem, which can be more easily addressed by the method proposed in Section 7.1. 8 Simulation Evaluation Our evaluation of MSP was conducted on three platforms: (i) an indoor system with 46 MICAz motes using straight-line scan, (ii) an outdoor system with 20 MICAz motes using sound wave propagation, and (iii) an extensive simulation under various kinds of physical settings. In order to understand the behavior of MSP under numerous settings, we start our evaluation with simulations. Then, we implemented basic MSP and all the advanced MSP methods for the case where time synchronization is available in the network. The simulation and implementation details are omitted in this paper due to space constraints, but related documents [25] are provided online at http://www.cs.umn.edu/∼zhong/MSP. Full implementation and evaluation of system without time synchronization are yet to be completed in the near future. 
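Before moving on to the simulation results, the protection-band compensation of Section 7.3 can be summarized by a short, hedged Python sketch (illustrative names; only the forward, lower-boundary pass along one scan direction is shown, and the symmetric upper-boundary pass would add the band instead of subtracting it).

def protected_lower_bounds(sequence, low, band_b):
    # Forward pass of the elimination rule with a protection band: the successor's
    # lower boundary is raised only to the predecessor's lower boundary minus
    # band_b, so a node that flipped with a nearby neighbour keeps a margin and is
    # never erased from the map.
    for prev, cur in zip(sequence, sequence[1:]):
        low[cur] = max(low[cur], low[prev] - band_b)
    return low

# Observed (possibly flipped) sequence is (2, 1); with B = 0.34 m node 1's lower
# boundary becomes 4.66 instead of 5.0, so its true position is still covered.
print(protected_lower_bounds([2, 1], {2: 5.0, 1: 4.0}, band_b=0.34))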
In simulation, we assume all the node sequences are perfect, so as to reveal the performance of MSP achievable in the absence of incomplete node sequences or sequence flips. In our simulations, all the anchor nodes and target nodes are assumed to be deployed uniformly. The mean and maximum errors are averaged over 50 runs to obtain high confidence. For legibility reasons, we do not plot the confidence intervals in this paper. All the simulations are based on the straight-line scan example. We implement three scan strategies: • Random Scan: The slope of the scan line is randomly chosen each time. • Regular Scan: The slope is predetermined to rotate uniformly from 0 degrees to 180 degrees. For example, if the system scans 6 times, then the scan angles would be: 0, 30, 60, 90, 120, and 150. • Adaptive Scan: The slope of each scan is determined based on the localization results from previous scans. We start with basic MSP and then demonstrate the performance improvements one step at a time by adding (i) sequence-based MSP, (ii) iterative MSP, (iii) DBE MSP and (iv) adaptive MSP. 8.1 Performance of Basic MSP The evaluation starts with basic MSP, where we compare the performance of random scan and regular scan under different configurations. We intend to illustrate the impact of the number of anchors M, the number of scans d, and the target node density (number of target nodes N in a fixed-size region) on the localization error. Table 1 shows the default simulation parameters (Table 1. Default Configuration Parameters). The error of each node is defined as the distance between the estimated location and the real position. We note that by default we only use three anchors, which is considerably fewer than existing range-free solutions [8, 4]. Impact of the Number of Scans: In this experiment, we compare regular scan with random scan under different numbers of scans from 3 to 30 in steps of 3. The number of anchors is 3 by default. (Figure 15. Improvements of Sequence-Based MSP over Basic MSP.) Figure 14 (a) indicates the following: (i) as the number of scans increases, the localization error decreases significantly; for example, localization errors drop more than 60% from 3 scans to 30 scans; (ii) statistically, regular scan achieves better performance than random scan under an identical number of scans. However, the performance gap shrinks as the number of scans increases. This is expected, since a large number of random scan angles converges to a uniform angular distribution. Figure 14 (a) also demonstrates that MSP requires only a small number of anchors to perform very well, compared to existing range-free solutions [8, 4]. Impact of the Number of Anchors: In this experiment, we compare regular scan with random scan under different numbers of anchors from 3 to 30 in steps of 3. The results shown in Figure 14 (b) indicate that (i) as the number of anchor nodes increases, the localization error decreases, and (ii) statistically, regular scan obtains better results than random scan with an identical number of anchors. By combining Figures 14 (a) and 14 (b), we can conclude that MSP allows a flexible tradeoff between physical cost (anchor nodes) and soft cost (localization events). Impact of the Target Node Density: In this experiment, we confirm that the density of target nodes has no impact on the accuracy of basic MSP, which motivated the design of sequence-based MSP. In this experiment, we compare regular scan with random scan under different numbers of target nodes from 10 to 190 in steps of 20.
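As a brief aside, the first two scan strategies amount to nothing more than the following angle choices (an illustrative sketch, not the simulator used in this evaluation; the adaptive strategy is feedback-driven and omitted here):

    import random

    def random_scan_angles(d):
        # Each scan independently picks a uniformly random slope in [0, 180).
        return [random.uniform(0.0, 180.0) for _ in range(d)]

    def regular_scan_angles(d):
        # Slopes rotate uniformly over [0, 180), e.g. d = 6 -> 0, 30, ..., 150.
        return [i * 180.0 / d for i in range(d)]

    print(regular_scan_angles(6))   # [0.0, 30.0, 60.0, 90.0, 120.0, 150.0]
    print(random_scan_angles(6))    # six independent random angles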
Results in Figure 14 (c) show that mean localization errors remain constant across different node densities. However, when the number of target nodes increases, the average maximum error increases. Summary: From the above experiments, we can conclude that in basic MSP, regular scans are better than random scans under different numbers of anchors and scan events. This is because regular scans uniformly eliminate the map from different directions, while random scans may obtain sequences with redundant overlapping information if two scans happen to choose similar scanning slopes. 8.2 Improvements of Sequence-Based MSP This section evaluates the benefits of exploiting the order information among target nodes by comparing sequence-based MSP with basic MSP. In this and the following sections, regular scan is used for straight-line scan event generation. The purpose of using regular scan is to keep the scan events and the node sequences identical for both sequence-based MSP and basic MSP, so that the only difference between them is the sequence processing procedure. Impact of the Number of Scans: In this experiment, we compare sequence-based MSP with basic MSP under different numbers of scans from 3 to 30 in steps of 3. Figure 15 (a) indicates a significant performance improvement of sequence-based MSP over basic MSP across all scan settings, especially when the number of scans is large. For example, when the number of scans is 30, errors in sequence-based MSP are about 20% of those of basic MSP. We conclude that sequence-based MSP performs extremely well when there are many scan events. Impact of the Number of Anchors: In this experiment, we use different numbers of anchors from 3 to 30 in steps of 3. As seen in Figure 15 (b), the mean error and maximum error of sequence-based MSP are much smaller than those of basic MSP. Especially when there is a limited number of anchors in the system, e.g., 3 anchors, the error is almost halved by using sequence-based MSP. This phenomenon has an interesting explanation: the cutting lines created by anchor nodes are exploited by both basic MSP and sequence-based MSP, so as the number of anchor nodes increases, anchors tend to dominate the contribution, and therefore the performance gap lessens. (Figure 16. Improvements of Iterative MSP. Figure 17. Improvements of DBE MSP. Figure 18. The Improvements of Adaptive MSP.) Impact of the Target Node Density: Figure 15 (c) demonstrates the benefits of exploiting order information among target nodes. Since sequence-based MSP makes use of the information among the target nodes, having more target nodes contributes to the overall system accuracy. As the number of target nodes increases, the mean error and maximum error of sequence-based MSP decrease. Clearly, the mean error in basic MSP is not affected by the number of target nodes, as shown in Figure 15 (c). Summary: From the above experiments, we can conclude that exploiting order information among target nodes can improve accuracy significantly, especially when the number of events is large and there are only a few anchors. 8.3 Iterative MSP over Sequence-Based MSP In this experiment, the same node sequences were processed iteratively multiple times. In Figure 16, the two single marks are results from basic MSP, since basic MSP doesn't perform iterations. The two curves present the performance of iterative MSP under different numbers of iterations c.
We note that when only a single iteration is used, this method reduces to sequence-based MSP. Therefore, Figure 16 compares the three methods to one another. Figure 16 shows that the second iteration can reduce the mean error and maximum error dramatically. After that, the performance gain gradually diminishes, especially when c > 5. This is because the second iteration allows earlier scans to exploit the new boundaries created by later scans in the first iteration. Such exploitation decays quickly over iterations. 8.4 DBE MSP over Iterative MSP Figure 17, in which we augment iterative MSP with distribution-based estimation (DBE MSP), shows that DBE MSP brings statistically better performance. Figure 17 presents the cumulative distribution of localization errors. In general, the two curves of DBE MSP lie slightly to the left of those of non-DBE MSP, which indicates that DBE MSP has a smaller statistical mean error and average maximum error than non-DBE MSP. We note that because DBE is augmented on top of the best solution so far, the performance improvement is not significant. When we apply DBE to the basic MSP methods, the improvement is much more significant. We omit these results because of space constraints. 8.5 Improvements of Adaptive MSP This section illustrates the performance of adaptive MSP over non-adaptive MSP. We note that feedback-based adaptation can be applied to all MSP methods, since it affects only the scanning angles but not the sequence processing. In this experiment, we evaluated how adaptive MSP can improve the best solution so far. The default angle granularity (step) for adaptive searching is 5 degrees. Impact of Area Shape: First, if the system settings are regular, the adaptive method hardly contributes to the results. For a square area (regular), the performance of adaptive MSP and regular scan is very close. However, if the shape of the area is not regular, adaptive MSP helps to choose the appropriate localization events to compensate. Therefore, adaptive MSP can achieve a better mean error and maximum error, as shown in Figure 18 (a). For example, adaptive MSP improves localization accuracy by 30% when the number of target nodes is 10. Impact of the Target Node Density: Figure 18 (a) shows that when the node density is low, adaptive MSP brings more benefit than when node density is high. This phenomenon makes statistical sense, because the law of large numbers tells us that node placement approaches a truly uniform distribution when the number of nodes is increased. Adaptive MSP has an edge when the layout is not uniform. (Figure 19. The Mirage Test-bed (Line Scan). Figure 20. The 20-node Outdoor Experiments (Wave).) Impact of Candidate Angle Density: Figure 18 (b) shows that the smaller the candidate scan angle step, the better the statistical performance in terms of mean error. The rationale is clear, as a denser set of candidate scan angles gives adaptive MSP more opportunity to choose an angle approaching the optimal one. 8.6 Simulation Summary Starting from basic MSP, we have demonstrated step-by-step how four optimizations can be applied on top of each other to improve localization performance. In other words, these optimizations are compatible with each other and can jointly improve the overall performance. We note that our simulations were done under the assumption that the complete node sequence can be obtained without sequence flips. In the next section, we present two real-system implementations that reveal and address these practical issues.
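The iterative refinement of Section 8.3 is essentially a re-run of the same elimination over the same sequences; a schematic sketch is given below (illustrative only; process_sequence is a hypothetical placeholder standing in for the sequence-based elimination routine):

    def iterative_msp(node_areas, node_sequences, process_sequence, c=3):
        """Re-process the same node sequences c times.

        In the second pass, sequences handled early can exploit the tighter
        boundaries produced by later sequences in the first pass; the extra
        gain decays quickly, which is why c > 5 brings little improvement.
        """
        for _ in range(c):
            for seq in node_sequences:
                node_areas = process_sequence(node_areas, seq)
        return node_areas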
9 System Evaluation In this section, we present a system implementation of MSP on two physical test-beds. The first one is called Mirage, a large indoor test-bed composed of six 4-foot by 8-foot boards, illustrated in Figure 19. Each board in the system can be used as an individual sub-system, which is powered, controlled and metered separately. Three Hitachi CP-X1250 projectors, connected through a Matrox TripleHead2Go graphics expansion box, are used to create an ultra-wide integrated display on the six boards. Figure 19 shows that a long tilted line is generated by the projectors. We have implemented all five versions of MSP on the Mirage test-bed, running 46 MICAz motes. Unless mentioned otherwise, the default setting is 3 anchors and 6 scans at a scanning line speed of 8.6 feet/s. In all of our graphs, each data point represents the average value of 50 trials. In the outdoor system, a Dell A525 speaker is used to generate 4.7 kHz sound, as shown in Figure 20. We place 20 MICAz motes in the backyard of a house. Since the location is not completely open, sound waves are reflected, scattered and absorbed by various objects in the vicinity, causing a multi-path effect. In the system evaluation, simple time synchronization mechanisms are applied to each node. 9.1 Indoor System Evaluation During the indoor experiments, we encountered several real-world problems that were not revealed in the simulation. First, the sequences obtained were partial due to misdetection and message losses. Second, elements in the sequences could flip due to detection delay, uncertainty in media access, or error in time synchronization. We show that these issues can be addressed by using the protection band method described in Section 7.3. 9.1.1 On Scanning Speed and Protection Band In this experiment, we studied the impact of the scanning speed and the width of the protection band on the performance of the system. In general, with increasing scanning speed, nodes have less time to respond to the event and the time gap between two adjacent nodes shrinks, leading to an increasing number of partial sequences and sequence flips. Figure 21 shows the node flip situations for six scans with distinct angles under different scan speeds. The x-axis shows the distance between the flipped nodes in the correct node sequence, and the y-axis shows the total number of flips in the six scans. This figure tells us that a faster scan brings not only an increasing number of flips, but also longer-distance flips that require a wider protection band to prevent fatal errors. Figure 22 (a) shows the effectiveness of the protection band in terms of reducing the number of unlocalized nodes. When we use a moderate scan speed (4.3 feet/s), flipping is rare, and therefore we can achieve 0.45-foot mean accuracy (Figure 22 (b)) with a 1.6-foot maximum error (Figure 22 (c)). With increasing speeds, the protection band needs to be set to a larger value to deal with flipping. An interesting phenomenon can be observed in Figure 22: on one hand, the protection band can sharply reduce the number of unlocalized nodes; on the other hand, protection bands enlarge the area in which a target could potentially reside, introducing more uncertainty. Thus there is a concave curve for both mean and maximum error when the scan speed is 8.6 feet/s. 9.1.2 On MSP Methods and Protection Band In this experiment, we show the improvements resulting from three different methods. Figure 23 (a) shows that a protection band of 0.35 feet is sufficient for the scan speed of 8.57 feet/s.
Figures 23 (b) and 23 (c) show clearly that iterative MSP (with adaptation) achieves the best performance. For example, Figure 23 (b) shows that when we set the protection band at 0.05 feet, iterative MSP achieves 0.7-foot accuracy, which is 42% more accurate than the basic design. Similarly, Figures 23 (b) and 23 (c) show the double-edged effect of the protection band on localization accuracy. (Figure 21. Number of Flips for Different Scan Speeds. Figure 22. Impact of Protection Band and Scanning Speed. Figure 23. Impact of Protection Band under Different MSP Methods. Figure 24. Impact of the Number of Anchors and Scans.) 9.1.3 On Number of Anchors and Scans In this experiment, we show the tradeoff between hardware cost (anchors) and soft cost (events). Figure 24 (a) shows that with more cutting lines created by anchors, the chance of unlocalized nodes increases slightly. We note that with a 0.35-foot protection band, the percentage of unlocalized nodes is very small; e.g., in the worst case with 11 anchors, only 2 out of 46 nodes are not localized due to flipping. Figures 24 (b) and 24 (c) show the tradeoff between the number of anchors and the number of scans. Obviously, as the number of anchors increases, the error drops significantly. With 11 anchors we can achieve a localization accuracy as low as 0.25 ∼ 0.35 feet, which is nearly a 60% improvement. Similarly, with an increasing number of scans, the error drops significantly as well. We observe about a 30% improvement across all anchor settings when we increase the number of scans from 4 to 8. For example, with only 3 anchors, we can achieve 0.6-foot accuracy with 8 scans. 9.2 Outdoor System Evaluation The outdoor system evaluation contains two parts: (i) an effective detection distance evaluation, which shows that the node sequence can be readily obtained, and (ii) sound propagation based localization, which shows the results of wave-propagation-based localization. 9.2.1 Effective Detection Distance Evaluation We first evaluate the sequence flip phenomenon in wave propagation. As shown in Figure 25, 20 motes were placed in five groups in front of the speaker, with four nodes in each group at roughly the same distance to the speaker. The gap between the groups is set to 2, 3, 4 and 5 feet, respectively, in four experiments. Figure 26 shows the results. The x-axis in each subgraph indicates the group index. There are four nodes in each group (4 bars). The y-axis shows the detection rank (order) of each node in the node sequence. As the distance between the groups increases, the number of flips in the resulting node sequence decreases. (Figure 25. Wave Detection. Figure 26. Ranks vs. Distances. Figure 27. Localization Error (Sound).) For example, in the 2-foot distance subgraph, there are quite a few flips between nodes in adjacent and even non-adjacent groups, while in the 5-foot subgraph, flips between different groups disappeared in the test. 9.2.2 Sound Propagation Based Localization As shown in Figure 20, 20 motes are placed in a grid consisting of 5 rows with 5 feet between rows and 4 columns with 4 feet between columns. Six 4 kHz acoustic wave propagation events are generated around the mote grid by a speaker.
Figure 27 shows the localization results using iterative MSP (three iterations) with a protection band of 3 feet. The average error of the localization results is 3 feet, and the maximum error is 5 feet, with one unlocalized node. We found that sequence flip in wave propagation is more severe than in the indoor, line-based test. This is expected, due to the high propagation speed of sound. Currently we use MICAz motes, which are equipped with a low-quality microphone. We believe that with a better speaker and more events, the system can yield better accuracy. Despite the hardware constraints, the MSP algorithm still successfully localized most of the nodes with good accuracy. 10 Conclusions In this paper, we present the first work that exploits the concept of node sequence processing to localize sensor nodes. We demonstrated that we could significantly improve localization accuracy by making full use of the information embedded in multiple easy-to-get one-dimensional node sequences. We proposed four novel optimization methods, exploiting order and marginal distribution among non-anchor nodes as well as the feedback information from early localization results. Importantly, these optimization methods can be used together, and improve accuracy additively. The practical issues of partial node sequence and sequence flip were identified and addressed in two physical system test-beds. We also evaluated performance at scale through analysis as well as extensive simulations. Results demonstrate that requiring neither costly hardware on sensor nodes nor precise event distribution, MSP can achieve a sub-foot accuracy with very few anchor nodes provided sufficient events.
MSP: Multi-Sequence Positioning of Wireless Sensor Nodes * Abstract Wireless Sensor Networks have been proposed for use in many location-dependent applications. Most of these need to identify the locations of wireless sensor nodes, a challenging task because of the severe constraints on cost, energy and effective range of sensor devices. To overcome limitations in existing solutions, we present a Multi-Sequence Positioning (MSP) method for large-scale stationary sensor node localization in outdoor environments. The novel idea behind MSP is to reconstruct and estimate two-dimensional location information for each sensor node by processing multiple one-dimensional node sequences, easily obtained through loosely guided event distribution. Starting from a basic MSP design, we propose four optimizations, which work together to increase the localization accuracy. We address several interesting issues, such as incomplete (partial) node sequences and sequence flip, found in the Mirage test-bed we built. We have evaluated the MSP system through theoretical analysis, extensive simulation as well as two physical systems (an indoor version with 46 MICAz motes and an outdoor version with 20 MICAz motes). This evaluation demonstrates that MSP can achieve an accuracy within one foot, requiring neither additional costly hardware on sensor nodes nor precise event distribution. It also provides a nice tradeoff between physical cost (anchors) and soft cost (events), while maintaining localization accuracy. 1 Introduction Although Wireless Sensor Networks (WSN) have shown promising prospects in various applications [5], researchers still face several challenges for massive deployment of such networks. One of these is to identify the location of individual sensor nodes in outdoor environments. Because of unpredictable flow dynamics in airborne scenarios, it is not currently feasible to localize sensor nodes during massive UVA-based deployment. On the other hand, geometric information is indispensable in these networks, since users need to know where events of interest occur (e.g., the location of intruders or of a bomb explosion). Previous research on node localization falls into two categories: range-based approaches and range-free approaches. Range-based approaches [13, 17, 19, 24] compute per-node location information iteratively or recursively based on measured distances among target nodes and a few anchors which precisely know their locations. These approaches generally require costly hardware (e.g., GPS) and have limited effective range due to energy constraints (e.g., ultrasound-based TDOA [3, 17]). Although range-based solutions can be suitably used in small-scale indoor environments, they are considered less cost-effective for large-scale deployments. On the other hand, range-free approaches [4, 8, 10, 13, 14, 15] do not require accurate distance measurements, but localize the node based on network connectivity (proximity) information. Unfortunately, since wireless connectivity is highly influenced by the environment and hardware calibration, existing solutions fail to deliver encouraging empirical results, or require substantial survey [2] and calibration [24] on a case-by-case basis. 
Realizing the impracticality of existing solutions for the large-scale outdoor environment, researchers have recently proposed solutions (e.g., Spotlight [20] and Lighthouse [18]) for sensor node localization using the spatiotemporal correlation of controlled events (i.e., inferring nodes' locations based on the detection time of controlled events). These solutions demonstrate that long range and high accuracy localization can be achieved simultaneously with little additional cost at sensor nodes. These benefits, however, come along with an implicit assumption that the controlled events can be precisely distributed to a specified location at a specified time. We argue that precise event distribution is difficult to achieve, especially at large scale when terrain is uneven, the event distribution device is not well calibrated and its position is difficult to maintain (e.g., the helicopter-mounted scenario in [20]). To address these limitations in current approaches, in this paper we present a multi-sequence positioning (MSP) method for large-scale stationary sensor node localization, in deployments where an event source has line-of-sight to all sensors. The novel idea behind MSP is to estimate each sensor node's two-dimensional location by processing multiple easy-to-get one-dimensional node sequences (e.g., event detection order) obtained through loosely-guided event distribution. This design offers several benefits. First, compared to a range-based approach, MSP does not require additional costly hardware. It works using sensors typically used by sensor network applications, such as light and acoustic sensors, both of which we specifically consider in this work. Second, compared to a range-free approach, MSP needs only a small number of anchors (theoretically, as few as two), so high accuracy can be achieved economically by introducing more events instead of more anchors. And third, compared to Spotlight, MSP does not require precise and sophisticated event distribution, an advantage that significantly simplifies the system design and reduces calibration cost. This paper offers the following additional intellectual contributions: • We are the first to localize sensor nodes using the concept of node sequence, an ordered list of sensor nodes, sorted by the detection time of a disseminated event. We demonstrate that making full use of the information embedded in one-dimensional node sequences can significantly improve localization accuracy. Interestingly, we discover that repeated reprocessing of one-dimensional node sequences can further increase localization accuracy. • We propose a distribution-based location estimation strategy that obtains the final location of sensor nodes using the marginal probability of joint distribution among adjacent nodes within the sequence. This new algorithm outperforms the widely adopted Centroid estimation [4, 8]. • To the best of our knowledge, this is the first work to improve the localization accuracy of nodes by adaptive events. The generation of later events is guided by localization results from previous events. • We evaluate line-based MSP on our new Mirage test-bed, and wave-based MSP in outdoor environments. Through system implementation, we discover and address several interesting issues such as partial sequence and sequence flips. To reveal MSP performance at scale, we provide analytic results as well as a complete simulation study. All the simulation and implementation code is available online at http://www.cs.umn.edu/∼zhong/MSP. 
The rest of the paper is organized as follows. Section 2 briefly surveys the related work. Section 3 presents an overview of the MSP localization system. In sections 4 and 5, basic MSP and four advanced processing methods are introduced. Section 6 describes how MSP can be applied in a wave propagation scenario. Section 7 discusses several implementation issues. Section 8 presents simulation results, and Section 9 reports an evaluation of MSP on the Mirage test-bed and an outdoor test-bed. Section 10 concludes the paper. 2 Related Work Many methods have been proposed to localize wireless sensor devices in the open air. Most of these can be classified into two categories: range-based and range-free localization. Range-based localization systems, such as GPS [23], Cricket [17], AHLoS [19], AOA [16], Robust Quadrilaterals [13] and Sweeps [7], are based on fine-grained point-topoint distance estimation or angle estimation to identify pernode location. Constraints on the cost, energy and hardware footprint of each sensor node make these range-based methods undesirable for massive outdoor deployment. In addition, ranging signals generated by sensor nodes have a very limited effective range because of energy and form factor concerns. For example, ultrasound signals usually effectively propagate 20-30 feet using an on-board transmitter [17]. Consequently, these range-based solutions require an undesirably high deployment density. Although the received signal strength indicator (RSSI) related [2, 24] methods were once considered an ideal low-cost solution, the irregularity of radio propagation [26] seriously limits the accuracy of such systems. The recently proposed RIPS localization system [11] superimposes two RF waves together, creating a low-frequency envelope that can be accurately measured. This ranging technique performs very well as long as antennas are well oriented and environmental factors such as multi-path effects and background noise are sufficiently addressed. Range-free methods don't need to estimate or measure accurate distances or angles. Instead, anchors or controlled-event distributions are used for node localization. Range-free methods can be generally classified into two types: anchor-based and anchor-free solutions. • For anchor-based solutions such as Centroid [4], APIT [8], SeRLoc [10], Gradient [13], and APS [15], the main idea is that the location of each node is estimated based on the known locations of the anchor nodes. Different anchor combinations narrow the areas in which the target nodes can possibly be located. Anchor-based solutions normally require a high density of anchor nodes so as to achieve good accuracy. In practice, it is desirable to have as few anchor nodes as possible so as to lower the system cost. • Anchor-free solutions require no anchor nodes. Instead, external event generators and data processing platforms are used. The main idea is to correlate the event detection time at a sensor node with the known space-time relationship of controlled events at the generator so that detection time-stamps can be mapped into the locations of sensors. Spotlight [20] and Lighthouse [18] work in this fashion. In Spotlight [20], the event distribution needs to be precise in both time and space. Precise event distribution is difficult to achieve without careful calibration, especially when the event-generating devices require certain mechanical maneuvers (e.g., the telescope mount used in Spotlight). All these increase system cost and reduce localization speed. 
StarDust [21], which works much faster, uses label relaxation algorithms to match light spots reflected by corner-cube retro-reflectors (CCR) with sensor nodes using various constraints. Label relaxation algorithms converge only when a sufficient number of robust constraints are obtained. Due to the environmental impact on RF connectivity constraints, however, StarDust is less accurate than Spotlight. In this paper, we propose a balanced solution that avoids the limitations of both anchor-based and anchor-free solutions. Unlike anchor-based solutions [4, 8], MSP allows a flexible tradeoff between the physical cost (anchor nodes) with the soft Figure 1. The MSP System Overview cost (localization events). MSP uses only a small number of anchors (theoretically, as few as two). Unlike anchor-free solutions, MSP doesn't need to maintain rigid time-space relationships while distributing events, which makes system design simpler, more flexible and more robust to calibration errors. 3 System Overview 4 Basic MSP Algorithm 1 Basic MSP Process 5 Advanced MSP 5.1 Sequence-Based MSP 5.2 Iterative MSP 5.3 Distribution-Based Estimation 5.4 Adaptive MSP 5.5 Overhead and MSP Complexity Analysis 6 Wave Propagation Example 7 Practical Deployment Issues 7.1 Incomplete Node Sequence 7.2 Localization without Time Synchronization 7.3 Sequence Flip and Protection Band 8 Simulation Evaluation 8.1 Performance of Basic MSP Impact of the Number of Anchors: In this experiment, we Impact of the Target Node Density: In this experiment, we 8.2 Improvements of Sequence-Based MSP Impact of the Number of Scans: In this experiment, we Impact of the Number of Anchors: In this experiment, we Adaptive MSP for 500by80 Max Error of Regualr Scan Max Error of Adaptive Scan Mean Error of Regualr Scan Mean Error of Adaptive Scan 8.3 Iterative MSP over Sequence-Based MSP 8.4 DBE MSP over Iterative MSP 8.5 Improvements of Adaptive MSP 8.6 Simulation Summary 9 System Evaluation 9.1 Indoor System Evaluation 9.1.1 On Scanning Speed and Protection Band 9.1.2 On MSP Methods and Protection Band Unlocalized Node Number (Scan Line Speed 8.57 feet/s) Unlocalized Node Number (Scan Line Speed 8.57 feet/s) Mean Error (Scan Line Speed 8.57 feet/s) Max Error (Scan Line Speed 8.57 feet/s) Mean Error (Scan Line Speed 8.57 feet/s) Max Error (Scan Line Speed 8.57 feet/s) 9.1.3 On Number of Anchors and Scans 9.2 Outdoor System Evaluation 9.2.1 Effective Detection Distance Evaluation 9.2.2 Sound Propagation Based Localization 10 Conclusions In this paper, we present the first work that exploits the concept of node sequence processing to localize sensor nodes. We demonstrated that we could significantly improve localization accuracy by making full use of the information embedded in multiple easy-to-get one-dimensional node sequences. We proposed four novel optimization methods, exploiting order and marginal distribution among non-anchor nodes as well as the feedback information from early localization results. Importantly, these optimization methods can be used together, and improve accuracy additively. The practical issues of partial node sequence and sequence flip were identified and addressed in two physical system test-beds. We also evaluated performance at scale through analysis as well as extensive simulations. Results demonstrate that requiring neither costly hardware on sensor nodes nor precise event distribution, MSP can achieve a sub-foot accuracy with very few anchor nodes provided sufficient events.
MSP: Multi-Sequence Positioning of Wireless Sensor Nodes * Abstract Wireless Sensor Networks have been proposed for use in many location-dependent applications. Most of these need to identify the locations of wireless sensor nodes, a challenging task because of the severe constraints on cost, energy and effective range of sensor devices. To overcome limitations in existing solutions, we present a Multi-Sequence Positioning (MSP) method for large-scale stationary sensor node localization in outdoor environments. The novel idea behind MSP is to reconstruct and estimate two-dimensional location information for each sensor node by processing multiple one-dimensional node sequences, easily obtained through loosely guided event distribution. Starting from a basic MSP design, we propose four optimizations, which work together to increase the localization accuracy. We address several interesting issues, such as incomplete (partial) node sequences and sequence flip, found in the Mirage test-bed we built. We have evaluated the MSP system through theoretical analysis, extensive simulation as well as two physical systems (an indoor version with 46 MICAz motes and an outdoor version with 20 MICAz motes). This evaluation demonstrates that MSP can achieve an accuracy within one foot, requiring neither additional costly hardware on sensor nodes nor precise event distribution. It also provides a nice tradeoff between physical cost (anchors) and soft cost (events), while maintaining localization accuracy. 1 Introduction One of these is to identify the location of individual sensor nodes in outdoor environments. Because of unpredictable flow dynamics in airborne scenarios, it is not currently feasible to localize sensor nodes during massive UVA-based deployment. Previous research on node localization falls into two categories: range-based approaches and range-free approaches. Range-based approaches [13, 17, 19, 24] compute per-node location information iteratively or recursively based on measured distances among target nodes and a few anchors which precisely know their locations. These approaches generally require costly hardware (e.g., GPS) and have limited effective range due to energy constraints (e.g., ultrasound-based TDOA [3, 17]). Although range-based solutions can be suitably used in small-scale indoor environments, they are considered less cost-effective for large-scale deployments. On the other hand, range-free approaches [4, 8, 10, 13, 14, 15] do not require accurate distance measurements, but localize the node based on network connectivity (proximity) information. Realizing the impracticality of existing solutions for the large-scale outdoor environment, researchers have recently proposed solutions (e.g., Spotlight [20] and Lighthouse [18]) for sensor node localization using the spatiotemporal correlation of controlled events (i.e., inferring nodes' locations based on the detection time of controlled events). These solutions demonstrate that long range and high accuracy localization can be achieved simultaneously with little additional cost at sensor nodes. To address these limitations in current approaches, in this paper we present a multi-sequence positioning (MSP) method for large-scale stationary sensor node localization, in deployments where an event source has line-of-sight to all sensors. 
The novel idea behind MSP is to estimate each sensor node's two-dimensional location by processing multiple easy-to-get one-dimensional node sequences (e.g., event detection order) obtained through loosely-guided event distribution. This design offers several benefits. First, compared to a range-based approach, MSP does not require additional costly hardware. It works using sensors typically used by sensor network applications, such as light and acoustic sensors, both of which we specifically consider in this work. Second, compared to a range-free approach, MSP needs only a small number of anchors (theoretically, as few as two), so high accuracy can be achieved economically by introducing more events instead of more anchors. And third, compared to Spotlight, MSP does not require precise and sophisticated event distribution, an advantage that significantly simplifies the system design and reduces calibration cost. This paper offers the following additional intellectual contributions: • We are the first to localize sensor nodes using the concept of node sequence, an ordered list of sensor nodes, sorted by the detection time of a disseminated event. We demonstrate that making full use of the information embedded in one-dimensional node sequences can significantly improve localization accuracy. Interestingly, we discover that repeated reprocessing of one-dimensional node sequences can further increase localization accuracy. • We propose a distribution-based location estimation strategy that obtains the final location of sensor nodes using the marginal probability of joint distribution among adjacent nodes within the sequence. • To the best of our knowledge, this is the first work to improve the localization accuracy of nodes by adaptive events. The generation of later events is guided by localization results from previous events. • We evaluate line-based MSP on our new Mirage test-bed, and wave-based MSP in outdoor environments. Through system implementation, we discover and address several interesting issues such as partial sequence and sequence flips. To reveal MSP performance at scale, we provide analytic results as well as a complete simulation study. All the simulation and implementation code is available online at http://www.cs.umn.edu/∼zhong/MSP. The rest of the paper is organized as follows. Section 2 briefly surveys the related work. Section 3 presents an overview of the MSP localization system. In sections 4 and 5, basic MSP and four advanced processing methods are introduced. Section 6 describes how MSP can be applied in a wave propagation scenario. Section 7 discusses several implementation issues. Section 8 presents simulation results, and Section 9 reports an evaluation of MSP on the Mirage test-bed and an outdoor test-bed. Section 10 concludes the paper. 2 Related Work Many methods have been proposed to localize wireless sensor devices in the open air. Most of these can be classified into two categories: range-based and range-free localization. Constraints on the cost, energy and hardware footprint of each sensor node make these range-based methods undesirable for massive outdoor deployment. In addition, ranging signals generated by sensor nodes have a very limited effective range because of energy and form factor concerns. Consequently, these range-based solutions require an undesirably high deployment density. The recently proposed RIPS localization system [11] superimposes two RF waves together, creating a low-frequency envelope that can be accurately measured. 
Range-free methods don't need to estimate or measure accurate distances or angles. Instead, anchors or controlled-event distributions are used for node localization. Range-free methods can be generally classified into two types: anchor-based and anchor-free solutions. idea is that the location of each node is estimated based on the known locations of the anchor nodes. Different anchor combinations narrow the areas in which the target nodes can possibly be located. Anchor-based solutions normally require a high density of anchor nodes so as to achieve good accuracy. In practice, it is desirable to have as few anchor nodes as possible so as to lower the system cost. • Anchor-free solutions require no anchor nodes. Instead, external event generators and data processing platforms are used. The main idea is to correlate the event detection time at a sensor node with the known space-time relationship of controlled events at the generator so that detection time-stamps can be mapped into the locations of sensors. Spotlight [20] and Lighthouse [18] work in this fashion. In Spotlight [20], the event distribution needs to be precise in both time and space. Precise event distribution is difficult to achieve without careful calibration, especially when the event-generating devices require certain mechanical maneuvers (e.g., the telescope mount used in Spotlight). All these increase system cost and reduce localization speed. StarDust [21], which works much faster, uses label relaxation algorithms to match light spots reflected by corner-cube retro-reflectors (CCR) with sensor nodes using various constraints. Label relaxation algorithms converge only when a sufficient number of robust constraints are obtained. In this paper, we propose a balanced solution that avoids the limitations of both anchor-based and anchor-free solutions. Unlike anchor-based solutions [4, 8], MSP allows a flexible tradeoff between the physical cost (anchor nodes) with the soft Figure 1. The MSP System Overview cost (localization events). MSP uses only a small number of anchors (theoretically, as few as two). Unlike anchor-free solutions, MSP doesn't need to maintain rigid time-space relationships while distributing events, which makes system design simpler, more flexible and more robust to calibration errors. 10 Conclusions In this paper, we present the first work that exploits the concept of node sequence processing to localize sensor nodes. We demonstrated that we could significantly improve localization accuracy by making full use of the information embedded in multiple easy-to-get one-dimensional node sequences. We proposed four novel optimization methods, exploiting order and marginal distribution among non-anchor nodes as well as the feedback information from early localization results. Importantly, these optimization methods can be used together, and improve accuracy additively. The practical issues of partial node sequence and sequence flip were identified and addressed in two physical system test-beds. We also evaluated performance at scale through analysis as well as extensive simulations. Results demonstrate that requiring neither costly hardware on sensor nodes nor precise event distribution, MSP can achieve a sub-foot accuracy with very few anchor nodes provided sufficient events.
J-60
On Decentralized Incentive Compatible Mechanisms for Partially Informed Environments
Algorithmic Mechanism Design focuses on Dominant Strategy Implementations. The main positive results are the celebrated Vickrey-Clarke-Groves (VCG) mechanisms and computationally efficient mechanisms for severely restricted players (single-parameter domains). As it turns out, many natural social goals cannot be implemented using the dominant strategy concept [35, 32, 22, 20]. This suggests that the standard requirements must be relaxed in order to construct general-purpose mechanisms. We observe that in many common distributed environments computational entities can take advantage of the network structure to collect and distribute information. We thus suggest a notion of partially informed environments. Even if the information is recorded with some probability, this enables us to implement a wider range of social goals, using the concept of iterative elimination of weakly dominated strategies. As a result, cooperation is achieved independent of agents' belief. As a case study, we apply our methods to derive Peer-to-Peer network mechanism for file sharing.
[ "decentr incent compat mechan", "partial inform environ", "domin strategi implement", "distribut environ", "comput entiti", "cooper", "agent", "distribut algorithm mechan design", "vickrei-clark-grove", "weakli domin strategi iter elimin", "peer-to-peer", "p-inform environ" ]
[ "P", "P", "P", "P", "P", "P", "P", "R", "U", "R", "U", "M" ]
On Decentralized Incentive Compatible Mechanisms for Partially Informed Environments ∗ Ahuva Mu``alem School of Engineering and Computer Science The Hebrew University of Jerusalem ahumu@cs.huji.ac.il ABSTRACT Algorithmic Mechanism Design focuses on Dominant Strategy Implementations. The main positive results are the celebrated Vickrey-Clarke-Groves (VCG) mechanisms and computationally efficient mechanisms for severely restricted players (single-parameter domains). As it turns out, many natural social goals cannot be implemented using the dominant strategy concept [35, 32, 22, 20]. This suggests that the standard requirements must be relaxed in order to construct general-purpose mechanisms. We observe that in many common distributed environments computational entities can take advantage of the network structure to collect and distribute information. We thus suggest a notion of partially informed environments. Even if the information is recorded with some probability, this enables us to implement a wider range of social goals, using the concept of iterative elimination of weakly dominated strategies. As a result, cooperation is achieved independent of agents'' belief. As a case study, we apply our methods to derive Peer-to-Peer network mechanism for file sharing. Categories and Subject Descriptors J.4 [Social and Behavioral Sciences]: Economics General Terms Design, Algorithms 1. INTRODUCTION Recently, global networks have attracted widespread study. The emergence of popular scalable shared networks with self-interested entities - such as peer-to-peer systems over the Internet and mobile wireless communication ad-hoc networks - poses fundamental challenges. Naturally, the study of such giant decentralized systems involves aspects of game theory [32, 34]. In particular, the subfield of Mechanism Design deals with the construction of mechanisms: for a given social goal the challenge is to design rules for interaction such that selfish behavior of the agents will result in the desired social goal [23, 33]. Algorithmic Mechanism Design (AMD) focuses on efficiently computable constructions [32]. Distributed Algorithmic Mechanism Design (DAMD) studies mechanism design in inherently decentralized settings [30, 12]. The standard model assumes rational agents with quasi-linear utilities and private information, playing dominant strategies. The solution concept of dominant strategies - in which each player has a best response strategy regardless of the strategy played by any other player - is well suited to the assumption of private information, in which each player is not assumed to have knowledge or beliefs regarding the other players. The appropriateness of this set-up stems from the strength of the solution concept, which complements the weak information assumption. Many mechanisms have been constructed using this set-up, e.g., [1, 4, 6, 11, 14, 22]. Most of these apply to severely-restricted cases (e.g., single-item auctions with no externalities) in which a player``s preference is described by only one parameter (single-parameter domains). To date, Vickrey-Clarke-Groves (VCG) mechanisms are the only known general method for designing dominant strategy mechanisms for general domains of preferences. However, in distributed settings without available subsidies from outside sources, VCG mechanisms cannot be accepted as valid solutions due to a serious lack of budget balance. 
Additionally, for some domains of preferences, VCG mechanisms and weighted VCG mechanisms are faced with computational hardness [22, 20]. Further limitations of the set-up are discussed in subsection 1.3. In most distributed environments, players can take advantage of the network structure to collect and distribute information about other players. This paper thus studies the effects of relaxing the private information assumption. One model that has been extensively studied recently is the Peer-to-Peer (P2P) network. A P2P network is a distributed network with no centralized authority, in which the participants share their individual resources (e.g., processing power, storage capacity, bandwidth and content). The aggregation of such resources provides inexpensive computational platforms. The most popular P2P networks are those for sharing media files, such as Napster, Gnutella, and Kazaa. Recent work on P2P incentives includes micropayment methods [15] and reputation-based methods [9, 13]. The following description of a P2P network scenario illustrates the relevance of our relaxed informational assumption. Example 1. Consider a Peer-to-Peer network for file sharing. Whenever agent B downloads a file from agent A, all peers along the routing path know that B has obtained the file. They can record this information about agent B. In addition, they can distribute this information. However, it is impossible to record all the information everywhere. First, such duplication induces huge costs. Second, as agents dynamically enter and exit the network, the information might not always be available. And so it seems natural to consider environments in which the information is locally recorded, that is, the information is recorded in the closest neighborhood with some probability p. In this paper we shall see that if the information is available with some probability, then this enables us to implement a wider range of social goals. As a result, cooperation is achieved independent of agents' beliefs. This demonstrates that in some computational contexts our approach is far less demanding than the Bayesian approach (which assumes that players' types are drawn according to some identified probability density function). 1.1 Implementations in Complete Information Set-ups In complete information environments, each agent is informed about everyone else. That is, each agent observes his own preference and the preferences of all other agents. However, no outsider can observe this information; specifically, neither the mechanism designer nor the court can. Many positive results were shown for such arguably realistic settings. For recent surveys see [25, 27, 18]. Moore and Repullo implement a large class of social goals using sequential mechanisms with a small number of rounds [28]. The concept they used is subgame-perfect implementation (SPE). The SPE-implementability concept seems natural for the following reasons. First, the designed mechanisms usually have non-artificial constructs and a small strategy space; as a result, it is straightforward for a player to compute his strategy. (Interestingly, in real life players do not always use their subgame perfect strategies. One widely studied case is the Ultimatum Bargaining 2-person game. In this simple game, the proposer first makes an offer of how to divide a certain known sum of money, and the responder either agrees or refuses; in the latter case both players earn zero. Somewhat surprisingly, experiments show that the responder often rejects the suggested offer, even if it is bounded away from zero and the game is played only once; see, e.g., [38].) Second, sequential mechanisms avoid simultaneous moves, and thus can be considered for distributed networks. Third, the constructed mechanisms are often decentralized (i.e., lacking a centralized authority or designer) and budget-balanced (i.e., transfers always sum up to zero). This happens essentially if there are at least three players and a direct network link between any two agents. Finally, Moore and Repullo observed that they actually use a relaxed complete information assumption: it is only required that for every player there exists only one other player who is informed about him. 1.2 Implementations in Partially Informed Set-ups and Our Results The complete information assumption is realistic for small groups of players, but not in general. In this paper we consider players that are informed about each other with some probability. More formally, we say that agent B is p-informed about agent A if B knows the type of A with probability p. For such partially-informed environments, we show how to use the solution concept of iterative elimination of weakly dominated strategies. We demonstrate this concept through some motivating examples that (i) seem natural in distributed settings and (ii) cannot be implemented in dominant strategies even if there is an authorized center with a direct connection to every agent, or even if players have single-parameter domains. 1. We first show how the subgame perfect techniques of Moore and Repullo [28] can be applied to p-informed environments and further adjusted to the concept of iterative elimination of weakly dominated strategies (for large enough p). 2. We then suggest a certificate-based challenging method that is more natural in computerized p-informed environments and different from the one introduced by Moore and Repullo [28] (for p ∈ (0, 1]). 3. We consider implementations in various network structures. As a case study we apply our methods to derive: (1) a simplified Peer-to-Peer network mechanism for file sharing with no payments in equilibrium (our approach is (agent, file)-specific); (2) a budget-balanced and economically efficient web-cache mechanism. Our mechanisms use reasonable punishments that inversely depend on p. And so, if the fines are large, then a small p is enough to induce cooperation. Essentially, a large p implies a large amount of recorded information. 1.2.1 Malicious Agents Decentralized mechanisms often utilize punishing outcomes. As a result, malicious players might cause severe harm to others. We suggest a quantified notion of a malicious player, who benefits from his own gained surplus and from harm caused to others. [12] suggests several categories to classify non-cooperating players. Our approach is similar to [7] (and the references therein), who independently considered such players in a different context. We show a simple decentralized mechanism in which q-malicious players cooperate and, in particular, do not use their punishing actions in equilibrium. 1.3 Dominant Strategy Implementations In this subsection we refer to some recent results demonstrating that the set-up of private information with the concept of dominant strategies is restrictive in general.
First, Roberts' classical impossibility result shows that if players' preferences are not restricted and there are at least 3 different outcomes, then every dominant-strategy mechanism must be weighted VCG (with the social goal that maximizes the weighted welfare) [35]. For slightly-restricted preference domains, it is not known how to turn efficiently computable algorithms into dominant strategy mechanisms. This was observed and analyzed in [32, 22, 31]. Recently, [20] extended Roberts' result to some leading examples. They showed that under mild assumptions any dominant strategy mechanism for a variety of Combinatorial Auctions over multi-dimensional domains must be almost weighted VCG. Additionally, it turns out that the dominant strategy requirement implies that the social goal must be monotone [35, 36, 22, 20, 5, 37]. This condition is very restrictive, as many desired natural goals are non-monotone (e.g., minimizing the makespan within a factor of 2 [32] and Rawls' Rule over some multi-dimensional domains [20]). Several recent papers consider relaxations of the dominant strategy concept: [32, 1, 2, 19, 16, 17, 26, 21]. However, most of these positive results either apply to severely restricted cases (e.g., single-parameter, 2 players) or amount to VCG or almost-VCG mechanisms (e.g., [19]). Recently, [8, 3] considered implementations for generalized single-parameter players. Organization of this paper: In section 2 we illustrate the concepts of subgame perfect and iterative elimination of weakly dominated strategies in completely-informed and partially-informed environments. In section 3 we show a mechanism for Peer-to-Peer file sharing networks. In section 4 we apply our methods to derive a web cache mechanism. Future work is briefly discussed in section 5. 2. MOTIVATING EXAMPLES In this section we examine the concepts of subgame perfect and iterative elimination of weakly dominated strategies for completely informed and p-informed environments. We also present the notion of q-maliciousness and some other related considerations through two illustrative examples. 2.1 The Fair Assignment Problem Our first example is an adaptation to a computerized context of an ancient procedure used to ensure that the wealthiest man in Athens would sponsor a theatrical production, known as the Choregia [27]. In the fair assignment problem, Alice and Bob are two workers, and there is a new task to be performed. Their goal is to assign the task to the least loaded worker without any monetary transfers. The informational assumption is that Alice and Bob know both loads and the duration of the new task. (At first glance one might ask why the completely informed agents could not simply sign a contract specifying the desired goal. Such a contract is sometimes infeasible due to the fact that the true state cannot be observed by outsiders, especially not by the court.) Claim 1. The fair assignment goal cannot be implemented in dominant strategies. (Proof: Assume that there exists a mechanism that implements this goal in dominant strategies. Then by the Revelation Principle [23] there exists a mechanism that implements this goal for which the dominant strategy of each player is to report his true load. Clearly, truthful reporting cannot be a dominant strategy for this goal if monetary transfers are not available, as players would prefer to report higher loads.) 2.1.1 Basic Mechanism The following simple mechanism implements this goal in subgame perfect equilibrium. • Stage 1: Alice either agrees to perform the new task or refuses. • Stage 2: If she refuses, Bob has to choose between: - (a) Performing the task himself. - (b) Exchanging his load with Alice and performing the new task as well. Let LT_A and LT_B be the true loads of Alice and Bob, and let t > 0 be the load of the new task. Assume that load exchanging takes zero time and cost. We shall see that the basic mechanism achieves the goal in a subgame perfect equilibrium.
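The subgame-perfect play of this basic mechanism can be computed directly by backward induction; the following sketch (an illustrative example, not from the original paper; loads are given as plain numbers and ties are broken in favor of Alice, as in the text) mirrors the case analysis of Claim 2 below:

    def basic_mechanism_outcome(load_alice, load_bob, t):
        """Backward-induction play of the two-stage fair-assignment mechanism.

        Returns (performer_of_new_task, exchange_happened)."""
        # Stage 2: if Alice refuses, Bob exchanges loads iff Alice's load is
        # smaller, since he then performs the new task on top of the lighter load.
        bob_would_exchange = load_alice < load_bob

        # Stage 1: Alice agrees only if performing the task herself is no worse
        # than the load she would end up with after Stage 2.
        cost_if_agree = load_alice + t
        cost_if_refuse = load_bob if bob_would_exchange else load_alice
        if cost_if_agree <= cost_if_refuse:
            return "Alice", False
        return "Bob", bob_would_exchange

    print(basic_mechanism_outcome(load_alice=8, load_bob=5, t=2))  # ('Bob', False)
    print(basic_mechanism_outcome(load_alice=5, load_bob=6, t=3))  # ('Bob', True): exchange occurs
    print(basic_mechanism_outcome(load_alice=5, load_bob=9, t=3))  # ('Alice', False)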
Intuitively, subgame perfect equilibrium means that each player chooses his best action at each point he might reach, assuming similar behavior of the others; thus every SPE is a Nash equilibrium.

Claim 2. ([27]) The task is assigned to the least loaded worker in subgame perfect equilibrium.

Proof. By a backward induction argument (look forward and reason backward), consider the following cases:
1. LT_B ≤ LT_A. If stage 2 is reached then Bob will not exchange.
2. LT_A < LT_B < LT_A + t. If stage 2 is reached Bob will exchange, and this is what Alice prefers.
3. LT_A + t ≤ LT_B. If stage 2 is reached then Bob would exchange; as a result it is strictly preferable for Alice to perform the task.

Note that the basic mechanism does not use monetary transfers at all and is decentralized in the sense that no third party is needed to run the procedure. The goal is achieved in equilibrium (ties are broken in favor of Alice). However, in the second case an exchange does occur at an equilibrium point. Recall the unrealistic assumption that load exchange takes zero time and cost. By introducing fines, the next mechanism overcomes this drawback.

2.1.2 Elicitation Mechanism
In this subsection we describe a centralized mechanism for the fair assignment goal without load exchange in equilibrium. The additional assumptions are as follows. The cost of performing a load of duration d is exactly d. We assume that the duration t of the new task is < T. The payoffs of the utility-maximizing agents are quasilinear. The following mechanism is an adaptation of Moore and Repullo's elicitation mechanism [28]5.
4 Proof of Claim 1: Assume that there exists a mechanism that implements this goal in dominant strategies. Then by the Revelation Principle [23] there exists a mechanism that implements this goal for which the dominant strategy of each player is to report his true load. Clearly, truthfully reporting cannot be a dominant strategy for this goal (if monetary transfers are not available), as players would prefer to report higher loads.

• Stage 1: (Elicitation of Alice's load) Alice announces LA. Bob announces L'A ≤ LA. If L'A = LA (Bob agrees) goto the next Stage. Otherwise (Bob challenges), Alice is assigned the task. She then has to choose between:
- (a) Transferring her original load to Bob and paying him LA − 0.5 · min{ε, LA − L'A}. Alice pays ε to the mechanism. Bob pays the fine of T + ε to the mechanism.
- (b) No load transfer. Alice pays ε to Bob. STOP.
• Stage 2: The elicitation of Bob's load is similar to Stage 1 (switching the roles of Alice and Bob).
• Stage 3: If LA < LB Alice is assigned the task, otherwise Bob. STOP.

Observe that Alice is assigned the task and fined with ε whenever Bob challenges. We shall see that the bonus of ε is paid to a challenging player only in out-of-equilibrium cases.

Claim 3. If the mechanism stops at Stage 3, then the payoff of each agent is at least −t and at most 0.

Proposition 1. It is a subgame perfect equilibrium of the elicitation mechanism to report the true load, and to challenge with the true load only if the other agent overreports.

Proof. Assume w.l.o.g. that the elicitation of Alice's load is done after Bob's, and that Stage 2 is reached. If Alice truly reports LA = LT_A, Bob strictly prefers to agree. Otherwise, if Bob challenges, Alice would always strictly prefer to transfer (as in this case Bob would perform her load for a smaller cost); as a result Bob would pay T + ε to the mechanism. This punishing outcome is less preferable than the normal outcome of Stage 3 achieved had he agreed.
If Alice misreports LA > LT_A, then Bob can ensure himself the bonus (which is always strictly preferable to reaching Stage 3) by challenging with L'A = LT_A, and so whenever Bob gets the bonus Alice gains the worst of all payoffs. Reporting a lower load LA < LT_A is not beneficial for Alice: in this case Bob would strictly prefer to agree (and not to announce L'A < LA, as he is limited to challenging with a smaller load than what she announces). Thus such misreporting can only increase the possibility that she is assigned the task, and so there is no incentive for Alice to do so. All together, Alice would prefer to report the truth in this stage. And so Stage 2 would not abnormally end by STOP, and similarly Stage 1.

Observe that the elicitation mechanism is almost balanced: in all outcomes no money comes in or out, except for the non-equilibrium outcome (a), in which both players pay to the mechanism.
5 In [28], if an agent misreports his type then it is always beneficial to the other agent to challenge. In particular, even if the agent reports a lower load.

2.1.3 Elicitation Mechanism for Partially Informed Agents
In this subsection we consider partially informed agents. Formally:

Definition 1. An agent A is p-informed about agent B, if A knows the type of B with probability p (independently of what B knows).

It turns out that a version of the elicitation mechanism works for this relaxed information assumption, if we use the concept of iterative elimination of weakly dominated strategies6. We replace the fixed fine of ε in the elicitation mechanism with the fine
βp = max{L, ((1 − p)/(2p − 1)) · T} + ε,
and assume the bounds LT_A, LT_B ≤ L.

Proposition 2. If all agents are p-informed, p > 0.5, the elicitation mechanism with fine βp implements the fair assignment goal under iterative elimination of weakly dominated strategies. The strategy of each player is to report the true load, and to challenge with the true load if the other agent overreports.

Proof. Assume w.l.o.g. that the elicitation of Alice's load is done after Bob's, and that Stage 2 is reached. First observe that underreporting the true value is a dominated strategy, whether Bob is not informed and mistakenly challenges with a lower load (as βp ≥ L) or not, or even if t is very small. Now we shall see that overreporting her value is a dominated strategy as well:

Alice's expected payoff gained by misreporting
≤ p · (payoff if she lies and Bob is informed) + (1 − p) · (max payoff if Bob is not informed)
≤ p · (−t − βp)
< p · (−t) + (1 − p) · (−t − βp)
≤ p · (min payoff of a true report if Bob is informed) + (1 − p) · (min payoff if Bob is not informed)
≤ Alice's expected payoff if she truly reports.

The term (−t − βp) on the left hand side is due to the fact that if Bob is informed he will always prefer to challenge. On the right hand side, if he is informed then challenging is a dominated strategy, and if he is not informed the worst harm he can cause is to challenge. Thus in Stage 2 Alice will report her true load. This implies that challenging without being informed is a dominated strategy for Bob. This argument can be repeated for the first stage, when Bob reports his value: Bob knows the maximum payoff he can gain is at most zero, since he cannot expect to get the bonus in the next stage.
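Before turning to extensions, the following minimal sketch computes the fine βp and checks the expected-payoff comparison used in the proof of Proposition 2. The numeric values of p, t, L, T and the helper names are arbitrary illustrative assumptions, not part of the mechanism itself.

```python
# Sketch: the fine beta_p of the elicitation mechanism for p-informed agents,
# and the expected-payoff comparison from the proof of Proposition 2.

EPS = 0.01  # the small constant "epsilon" of the mechanism

def beta_p(p: float, L: float, T: float) -> float:
    """Fine for a lost challenge: max{L, (1-p)/(2p-1) * T} + eps, valid for p > 1/2."""
    assert p > 0.5
    return max(L, (1.0 - p) / (2.0 * p - 1.0) * T) + EPS

def alice_overreport_bounds(p: float, t: float, L: float, T: float) -> tuple:
    """Upper bound on Alice's expected payoff from overreporting versus a lower
    bound on her expected payoff from reporting truthfully (Stage 2 reached)."""
    b = beta_p(p, L, T)
    lie_at_most = p * (-t - b)                      # an informed Bob always challenges
    truth_at_least = p * (-t) + (1 - p) * (-t - b)  # worst case: uninformed Bob challenges anyway
    return lie_at_most, truth_at_least

if __name__ == "__main__":
    p, t, L, T = 0.7, 3.0, 10.0, 12.0               # illustrative values with t < T and loads <= L
    lie, truth = alice_overreport_bounds(p, t, L, T)
    print(f"beta_p = {beta_p(p, L, T):.2f}")
    print(f"overreporting yields at most {lie:.2f}, truth-telling at least {truth:.2f}")
    assert lie < truth                              # overreporting is weakly dominated
```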
2.1.4 Extensions
The elicitation mechanism for partially informed agents is rather general. As in [28], we need the capability to judge between two distinct declarations in the elicitation rounds,6 and upper and lower bounds based on the possible payoffs derived from the last stage. In addition, for p-informed environments, some structure is needed to ensure that underbidding is a dominated strategy. The Choregia-type mechanisms can be applied to more than 2 players with the same number of stages: the player in the first stage can simply point out the name of the wealthiest agent. Similarly, the elicitation mechanisms can be extended in a straightforward manner. These mechanisms can be budget-balanced, as some player might take over the role of the designer and collect the fines, as observed in [28].
6 A strategy si of player i is weakly dominated if there exists s'i such that (i) the payoff gained by s'i is at least as high as the payoff gained by si, for all strategies of the other players and all preferences, and (ii) there exist a preference and a combination of strategies for the other players such that the payoff gained by s'i is strictly higher than the payoff gained by si.

Open Problem 1. Design a decentralized budget-balanced mechanism with reasonable fines for independently p-informed n players, where p ≤ 1 − (1/2)^(1/(n−1)).

2.2 Seller and Buyer Scenario
A player might cause severe harm to others by choosing a non-equilibrium outcome. In the mechanism for the fair assignment goal, an agent might maliciously challenge even if the other agent truly reports his load. In this subsection we consider such malicious scenarios. For ease of exposition we present a second example, and demonstrate that equilibria remain unchanged even if players are malicious.

In the seller-buyer example there is one item to be traded and two possible future states. The goal is to sell the item for the average low price pl = (ls + lb)/2 in state L, and for the higher price ph = (hs + hb)/2 in the other state H, where ls is the seller's cost and lb is the buyer's value in state L, and similarly hs, hb in H. The players fix the prices without knowing what the future state will be. Assume that ls < hs < lb < hb, and that trade can occur at both prices (that is, pl, ph ∈ (hs, lb)). Only the players can observe the realization of the true state. The payoffs are of the form ub = x · vb − tb and us = ts − x · vs, where the binary variable x indicates whether trade occurred, and tb, ts are the transfers. Consider the following decentralized trade mechanism.

• Stage 1: If the seller reports H goto Stage 2. Otherwise, trade at the low price pl. STOP.
• Stage 2: The buyer has to choose between:
- (a) Trade at the high price ph.
- (b) No trade, and the seller pays ∆ to the buyer.

Claim 4. Let ∆ = lb − ph + ε. The unique subgame perfect equilibrium of the trade mechanism is to report the true state in Stage 1 and to trade if Stage 2 is reached. Note that the outcome (b) is never chosen in equilibrium.
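The backward-induction logic behind Claim 4 can be checked mechanically. The sketch below uses illustrative parameter values satisfying l_s < h_s < l_b < h_b; the function names are ours and not part of the mechanism.

```python
# Sketch: backward induction in the two-stage seller-buyer trade mechanism.
# Parameter values are illustrative; they satisfy l_s < h_s < l_b < h_b.

EPS = 0.01
l_s, h_s = 2.0, 4.0       # seller's cost in states L and H
l_b, h_b = 7.0, 9.0       # buyer's value in states L and H

p_l = (l_s + l_b) / 2     # agreed low price for state L
p_h = (h_s + h_b) / 2     # agreed high price for state H
delta = l_b - p_h + EPS   # fine paid by the seller if the buyer refuses in Stage 2

def buyer_stage2(state: str) -> str:
    """Buyer chooses between trading at p_h and taking the no-trade fine delta."""
    value = h_b if state == "H" else l_b
    return "trade" if value - p_h > delta else "refuse"

def seller_stage1(state: str) -> str:
    """Seller reports the state, anticipating the buyer's Stage 2 choice."""
    cost = h_s if state == "H" else l_s
    payoff_low = p_l - cost                  # report L: trade at p_l immediately
    if buyer_stage2(state) == "trade":
        payoff_high = p_h - cost             # report H and trade at p_h
    else:
        payoff_high = -delta                 # report H and get fined
    return "H" if payoff_high > payoff_low else "L"

for state in ("L", "H"):
    report = seller_stage1(state)
    price = p_h if report == "H" and buyer_stage2(state) == "trade" else p_l
    print(f"true state {state}: seller reports {report}, trade at price {price}")
```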
2.2.1 Trade Mechanism for Malicious Agents
The buyer might maliciously punish the seller by choosing the outcome (b) when the true state is H. The following notion quantifies the consideration that a player is not indifferent to the private surpluses of others.

Definition 2. A player is q-malicious if his payoff equals (1 − q) · (his own surplus) − q · (the sum of the other players' surpluses), q ∈ [0, 1].

This definition appeared independently in [7] in a different context. We shall see that the traders would avoid such bad behavior if they are q-malicious, where q < 0.5, that is, if their non-indifference impact is bounded by 0.5. Equilibria outcomes remain unchanged, and so cooperation is achieved as in the original case of non-malicious players. Consider the trade mechanism with pl = (1 − q) · hs + q · lb, ph = q · hs + (1 − q) · lb, and ∆ = (1 − q) · (hb − lb − ε). Note that pl < ph for q < 0.5.

Claim 5. If q < 0.5, then the unique subgame perfect equilibrium for q-malicious players remains unchanged.

Proof. By backward induction we consider two cases. In state H, the q-malicious buyer would prefer to trade if (1 − q)(hb − ph) + q(hs − ph) > (1 − q)∆ + q∆; indeed, (1 − q)hb + q·hs > ∆ + ph. Trivially, the seller prefers to trade at the higher price: (1 − q)(pl − hs) + q(pl − hb) < (1 − q)(ph − hs) + q(ph − hb). In state L the buyer prefers the no-trade outcome, as (1 − q)(lb − ph) + q(ls − ph) < ∆. The seller prefers to trade at the low price, as (1 − q)(pl − ls) + q(pl − lb) > 0 > −∆.

2.2.2 Discussion
No mechanism can Nash-implement this trading goal if the only possible outcomes are trade at pl and trade at ph. To see this, it is enough to consider normal forms (as any extensive form mechanism can be presented as a normal one). Consider a matrix representation, where the seller is the row player and the buyer is the column player, in which every entry includes an outcome. Suppose there is an equilibrium entry for the state L. The associated column must be all pl, otherwise the seller would have an incentive to deviate. Similarly, the associated row of the H equilibrium entry must be all ph (otherwise the buyer would deviate), a contradiction.7,8 The buyer prefers pl and the seller ph, and so the preferences are identical in both states. Hence reporting preferences over outcomes is not enough - players must supply additional information. This is captured by outcome (b) in the trade mechanism. Intuitively, if a goal is not Nash-implementable we need to add more outcomes. The drawback is that some new additional equilibria must be ruled out. E.g., an additional Nash equilibrium for the trade mechanism is (trade at pl, (b)). That is, the seller chooses to trade at the low price in either state, and the buyer always chooses the no-trade option that fines the seller, if the second stage is reached. Such a buyer's threat is not credible, because if the mechanism is played only once, and Stage 2 is reached in state H, the buyer would strictly decrease his payoff if he chooses (b). Clearly, this is not a subgame perfect equilibrium. Although each extensive game-form is strategically equivalent to a normal form one, the extensive form representation places more structure, and so it seems plausible that the subgame perfect equilibrium will be played.9
7 Formally, this goal is not Maskin monotonic, a necessary condition for Nash-implementability [24].
8 A similar argument applies for the Fair Assignment Problem.
9 Interestingly, it is straightforward to construct a sequential mechanism with a unique SPE, and an additional NE with a strictly larger payoff for every player.

3. PEER-TO-PEER NETWORKS
In this section we describe a simplified Peer-to-Peer network for file sharing, without payments in equilibrium, using a certificate-based challenging method. In this challenging method - as opposed to [28] - an agent that challenges cannot harm other agents, unless he provides a valid certificate. In general, if agent B copied a file f from agent A, then agent A knows that agent B holds a copy of the file. We denote such information as a certificate(B, f) (we shall omit cryptographic details).
Such a certificate can be recorded and distributed along the network, and so we can treat each agent holding the certificate as an informed agent.

Assumptions: We assume a homogeneous system with files of equal size. The benefit each agent gains by holding a copy of any file is V. The only cost each agent has is the uploading cost C (induced while transferring a file to an immediate neighbor). All other costs are negligible (e.g., storing the certificates, forwarding messages, providing acknowledgements, digital signatures, etc.). Let upA, downA be the numbers of agent A's uploads and downloads if he always cooperates. We assume that each agent A enters the system if upA · C < downA · V. Each agent has a quasilinear utility and only cares about his current bandwidth usage. In particular, he ignores future scenarios (e.g., whether forwarding or dropping of a packet might affect future demand).

3.1 Basic Mechanism
We start with a mechanism for a network with three p-informed agents: B, A1, A2. We assume that B is directly connected to A1 and A2. If B has the certificate(A1, f), then he can apply directly to A1 and request the file (if A1 refuses, then B can go to court). The following basic sequential mechanism is applicable whenever agent B is not informed and still would like to download the file if it exists in the network. Note that this goal cannot be implemented in dominant strategies without payments (similar to Claim 1, when the type of each agent here is the set of files he holds). Define tA,B to be the monetary amount that agent A should transfer to B.

• Stage 1: Agent B requests the file f from A1.
- If A1 replies "yes" then B downloads the file from A1. STOP.
- Otherwise, agent B sends A1's "no" reply to agent A2.
  ∗ If A2 declares "agree" then goto the next stage.
  ∗ Else, A2 sends a certificate(A1, f) to agent B.
    · If the certificate is correct then tA1,A2 = βp. STOP.
    · Else tA2,A1 = |C| + ε. STOP.
• Stage 2: Agent B requests the file f from A2. Switch the roles of the agents A1, A2.

Claim 6. The basic mechanism is budget-balanced (transfers always sum to zero) and decentralized.

Theorem 1. Let βp = |C|/p + ε, p ∈ (0, 1]. A strategy that survives iterative elimination of weakly dominated strategies is to reply "yes" if Ai holds the file, and to challenge only with a valid certificate. As a result, B downloads the file if some agent holds it, in equilibrium. There are no payments or transfers in equilibrium.

Proof. Clearly if the mechanism ends without challenging: −C ≤ u(Ai) ≤ 0. And so, challenging with an invalid certificate is always a dominated strategy. Now, when Stage 2 is reached, A2 is the last to report whether he has the file. If A2 has the file, misreporting is a weakly dominated strategy, whether A1 is informed or not:

A2's expected payoff gained by misreporting "no" ≤ p · (−βp) + (1 − p) · 0 < −C ≤ A2's payoff if she reports "yes".

This argument can be repeated for Stage 1, when A1 reports whether he has the file. A1 knows that A2 will report "yes" if and only if she has the file in the next stage, and so the maximum payoff he can gain is at most zero, since he cannot expect to get a bonus.
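To see why the fine βp = |C|/p + ε makes denying the file unattractive, here is a minimal sketch of the expected-payoff comparison from the proof of Theorem 1. The uploading cost C and the probabilities below are arbitrary illustrative assumptions.

```python
# Sketch: the misreporting check behind Theorem 1 of the basic P2P mechanism.
# C is the uploading cost, p the probability that the other side can produce a
# valid certificate; all numeric values are illustrative.

EPS = 0.01

def beta_p(C: float, p: float) -> float:
    """Fine paid by an agent caught (via a valid certificate) denying a file."""
    return abs(C) / p + EPS

def expected_payoff_misreport(C: float, p: float) -> float:
    """Agent holds the file but answers 'no': with probability p a certificate
    exists and the fine is collected, with probability 1-p nothing happens."""
    return p * (-beta_p(C, p)) + (1 - p) * 0.0

def payoff_truthful(C: float) -> float:
    """Agent answers 'yes' and uploads the file at cost C."""
    return -C

if __name__ == "__main__":
    C = 1.0
    for p in (0.2, 0.5, 0.9):
        lie, truth = expected_payoff_misreport(C, p), payoff_truthful(C)
        print(f"p={p}: misreport -> {lie:.3f}, truthful upload -> {truth:.3f}")
        assert lie < truth   # denying the file is weakly dominated for any p in (0, 1]
```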
3.2 Chain Networks
In a chain network, agent B is directly connected to A1, and Ai is directly connected to agent Ai+1. Assume that we have an acknowledgment protocol to confirm the receipt of a particular message. To avoid message dropping, we add the fine (βp + 2ε) to be paid by an agent who hasn't properly forwarded a message. The chain mechanism follows:

• Stage i: Agent B forwards a request for the file f to Ai (through {Ak}k≤i).
• If Ai reports "yes", then B downloads f from Ai. STOP.
• Otherwise Ai reports "no". If Aj sends a certificate(Ak, f) to B (j, k ≤ i), then
- If certificate(Ak, f) is correct, then t(Ak, Aj) = βp. STOP.
- Else, t(Aj, Ak) = C + ε. STOP.

If Ai reports that he has no copy of the file, then any agent in between might challenge. Using digital signatures and acknowledgements, observe that every agent must forward each message, even if it contains a certificate showing that he himself has misreported. We use the same fine, βp, as in the basic mechanism, because the protocol might end at stage 1 (clearly, the former analysis still applies, since the actual p increases with the number of players).

3.3 Network Mechanism
In this subsection we consider general network structures. We need the assumption that there is a ping protocol that checks whether a neighbor agent is on-line or not (that is, an on-line agent cannot hide himself). To limit the amount of information to be recorded, we assume that an agent is committed to keeping any downloaded file for at least one hour, and so certificates are valid for a limited amount of time. We assume that each agent has a digitally signed listing of his current immediate neighbors. As in real P2P file sharing applications, we restrict each request for a file to be forwarded at most r times (that is, downloads are possible only inside a neighborhood of radius r). The network mechanism utilizes the chain mechanism in the following way: when agent B requests a file from agent A (at most r − 1 hops away), then A sends to B the list of his neighbors and the output of the ping protocol to all of these neighbors. As a result, B can explore the network.

Remark: In this mechanism we assumed that the environment is p-informed. An important design issue that is not addressed here is the incentives for the information propagation phase.

4. WEB CACHE
Web caches are a widely used tool to improve overall system efficiency by allowing fast local access. They were listed in [12] as a challenging application of Distributed Algorithmic Mechanism Design. Nisan [30] considered a single cache shared by strategic agents. In this problem, agent i gains the value vT_i if a particular item is loaded to the local shared cache. The efficient goal is to load the item if and only if Σ vT_i ≥ C, where C is the loading cost. This goal reduces to the public project problem analyzed by Clarke [10]. However, it is well known that this mechanism is not budget-balanced (e.g., if the valuation of each player is C, then everyone pays zero).

In this section we suggest informational and environmental assumptions for which we describe a decentralized budget-balanced efficient mechanism. We consider environments for which the future demand of each agent depends on past demand. The underlying informational and environmental requirements are as follows.
1. An agent can read the content of a message only if he is the target node (even if he has to forward the message as an intermediate node of some routing path). An agent cannot initiate a message on behalf of other agents.
2. An acknowledgement protocol is available, so that every agent can provide a certificate indicating that he handled a certain message properly.
3. Negligible costs: we assume p-informed agents, where p is such that the agent's induced cost for keeping records of information is negligible.
We also assume that the cost incurred by sending and forwarding messages is negligible.
4. Let qi(t) denote the number of loading requests agent i initiated for the item during the time slot t. We assume that vT_i(t), the value for caching the item at the beginning of slot t, depends only on the most recent slot; formally, vT_i(t) = max{Vi(qi(t − 1)), C}, where Vi(·) is a non-decreasing real function. In addition, Vi(·) is common knowledge among the players.
5. The network is homogeneous in the sense that if agent j happens to handle k requests initiated by agent i during the time slot t, then qi(t) = kα, where α depends on the routing protocol and the environment (α might be smaller than 1, if each request is flooded several times). We assume that the only way agent i can affect the true qi(t) is by superficially increasing his demand for the cached item, but not the other way (that is, an agent's loss, incurred by giving up a necessary request for the item, is not negligible).

The first requirement is to avoid free riding, and also to avoid the case that an agent superficially increases the demand of others and as a result decreases his own demand. The second requirement is to avoid the case that an agent who gets a routing request for the item records it and then drops it. The third is to ensure that the environment stays well informed. In addition, if the forwarding cost is negligible each agent cooperates and forwards messages, as he would not like to decrease the future demand (which monotonically depends on the current time slot, as assumed in the fourth requirement) of some other agent. Given that the payments are increasing with the declared values, the fourth and fifth requirements ensure that an agent would not increase his demand superficially, and so qi(t) is the true demand.

The following Web-Cache Mechanism implements the efficient goal that shares the cost proportionally. For simplicity it is described for two players, and w.l.o.g. vT_i(t) equals the number of requests initiated by i and observed by any informed j (that is, α = 1 and Vi(qi(t − 1)) = qi(t − 1)).

• Stage 1: (Elicitation of vT_A(t)) Alice announces vA. Bob announces v'A ≥ vA. If v'A = vA goto the next Stage. Otherwise (Bob challenges):
- If Bob provides v'A valid records then Alice pays C to finance the loading of the item into the cache. She also pays βp to Bob. STOP.
- Otherwise, Bob finances the loading of the item into the cache. STOP.
• Stage 2: The elicitation of vT_B(t) is done analogously.
• Stage 3: If vA + vB < C, then STOP. Otherwise, load the item to the cache; Alice pays pA = (vA/(vA + vB)) · C, and Bob pays pB = (vB/(vA + vB)) · C.
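As a small illustration of the mechanism just described, the sketch below computes the proportional Stage 3 payments and the challenge fine βp that appears in Theorem 2 below. The numeric values and function names are arbitrary illustrative assumptions.

```python
# Sketch of the web-cache mechanism: proportional cost sharing and challenge fine.
# v_a, v_b are declared per-slot demands, C the loading cost; values are illustrative.

EPS = 0.01

def beta_p(C: float, p: float) -> float:
    """Fine paid to a successful challenger: max{0, (1-2p)/p * C} + eps."""
    return max(0.0, (1.0 - 2.0 * p) / p * C) + EPS

def stage3_payments(v_a: float, v_b: float, C: float):
    """Proportional cost sharing if the item is loaded (v_a + v_b >= C)."""
    if v_a + v_b < C:
        return None                            # item is not cached
    share_a = v_a / (v_a + v_b) * C
    share_b = v_b / (v_a + v_b) * C
    return share_a, share_b                    # shares always sum to exactly C

if __name__ == "__main__":
    C, p = 10.0, 0.4
    print(f"challenge fine beta_p = {beta_p(C, p):.2f}")
    print("payments for v_a=4, v_b=8:", stage3_payments(4.0, 8.0, C))
    print("payments for v_a=3, v_b=5:", stage3_payments(3.0, 5.0, C))  # below C: not loaded
```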
Claim 7. It is a dominated strategy to overreport the true value.

Proof. Let vT_A < vA. There are two cases to consider:
• If vT_A + vB < C and vA + vB ≥ C: we need to show that if the mechanism stops normally Alice would pay more than vT_A, that is, (vA/(vA + vB)) · C > vT_A. Indeed, vA · C > vA · (vT_A + vB) > vT_A · (vA + vB).
• If vT_A + vB ≥ C, then clearly vA/(vA + vB) > vT_A/(vT_A + vB).

Theorem 2. Let βp = max{0, ((1 − 2p)/p) · C} + ε, p ∈ (0, 1]. A strategy that survives iterative elimination of weakly dominated strategies is to report the truth and to challenge only when the agent is informed. The mechanism is efficient, budget-balanced, exhibits consumer sovereignty, no positive transfer and individual rationality10.

Proof. Challenging without being informed (that is, without providing enough valid records) is always a dominated strategy in this mechanism. Now, assume w.l.o.g. that Alice is the last to report her value.

Alice's expected payoff gained by underreporting ≤ p · (−C − βp) + (1 − p) · C < p · 0 + (1 − p) · 0 ≤ Alice's expected payoff if she honestly reports.

The right hand side equals zero as the participation costs are negligible. Reasoning back, Bob cannot expect to get the bonus, and so misreporting is a dominated strategy for him.
10 See [29] or [12] for exact definitions.

5. CONCLUDING REMARKS
In this paper we have seen a new partial informational assumption, and we have demonstrated its suitability to networks in which computational agents can easily collect and distribute information. We then described some mechanisms using the concept of iterative elimination of weakly dominated strategies. Some issues for future work include:
• As we have seen, the implementation issue in p-informed environments is straightforward: it is easy to construct incentive compatible mechanisms even for non-single-parameter cases. The challenge is to find more realistic scenarios in which the partial informational assumption is applicable.
• Mechanisms for information propagation and maintenance. In our examples we chose p such that the maintenance cost over time is negligible. However, the dynamics of the general case are delicate: an agent can use the recorded information to eliminate data that is not likely to be needed, in order to decrease his maintenance costs. As a result, the probability that the environment is informed decreases, and selfish agents would not cooperate. Incentives for information propagation should be considered as well (e.g., for P2P networks for file sharing).
• It seems that some social choice goals cannot be implemented if each player is at least 1/n-malicious (where n is the number of players). It would be interesting to identify these cases.

Acknowledgements
We thank Meitav Ackerman, Moshe Babaioff, Liad Blumrozen, Michal Feldman, Daniel Lehmann, Noam Nisan, Motty Perry and Eyal Winter for helpful discussions.

6. REFERENCES
[1] A. Archer and E. Tardos. Truthful mechanisms for one-parameter agents. In IEEE Symposium on Foundations of Computer Science, pages 482-491, 2001.
[2] Aaron Archer, Christos Papadimitriou, Kunal Talwar, and Eva Tardos. An approximate truthful mechanism for combinatorial auctions with single parameter agent. In SODA, 2003.
[3] Moshe Babaioff, Ron Lavi, and Elan Pavlov. Single-parameter domains and implementation in undominated strategies, 2004. Working paper.
[4] Yair Bartal, Rica Gonen, and Noam Nisan. Incentive compatible multi-unit combinatorial auctions, 2003. TARK-03.
[5] Sushil Bikhchandani, Shurojit Chatterji, and Arunava Sen. Incentive compatibility in multi-unit auctions, 2003. Working paper.
[6] Liad Blumrosen, Noam Nisan, and Ilya Segal. Auctions with severely bounded communication, 2004. Working paper.
[7] F. Brandt, T. Sandholm, and Y. Shoham. Spiteful bidding in sealed-bid auctions, 2005.
[8] Patrick Briest, Piotr Krysta, and Berthold Voecking. Approximation techniques for utilitarian mechanism design. In STOC, 2005.
[9] Chiranjeeb Buragohain, Divy Agrawal, and Subhash Suri. A game-theoretic framework for incentives in p2p systems. In IEEE P2P, 2003.
[10] E. H. Clarke. Multipart pricing of public goods. Public Choice, 11:17-33, 1971.
[11] Joan Feigenbaum, Christos Papadimitriou, and Scott Shenker. Sharing the cost of multicast transmissions. Journal of Computer and System Sciences, 63(1), 2001.
[12] Joan Feigenbaum and Scott Shenker. Distributed algorithmic mechanism design: Recent results and future directions.
In Proceedings of the 6th International Workshop on Discrete Algorithms and Methods for Mobile Computing and Communications, pages 1-13. ACM Press, New York, 2002. [13] M. Feldman, K. Lai, I. Stoica, and J. Chuang. Robust incentive techniques for peer-to-peer networks. In EC, 2004. [14] A. Goldberg, J. Hartline, A. Karlin, and A. Wright. Competitive auctions, 2004. Working paper. [15] Philippe Golle, Kevin Leyton-Brown, Ilya Mironov, and Mark Lillibridge. Incentives for sharing in peer-to-peer networks. In EC, 2001. [16] Ron Holzman, Noa Kfir-Dahav, Dov Monderer, and Moshe Tennenholtz. Bundling equilibrium in combinatorial auctions. Games and Economic Behavior, 47:104-123, 2004. [17] Ron Holzman and Dov Monderer. Characterization of ex post equilibrium in the vcg combinatorial auctions. Games and Economic Behavior, 47:87-103, 2004. [18] Matthew O. Jackson. A crash course in implementation theory, 1997. mimeo: California Institute of Technology. 25. [19] A. Kothari, D. Parkes, and S. Suri. Approximately-strategyproof and tractable multi-unit auctions. In EC, 2003. [20] Ron Lavi, Ahuva Mu``alem, and Noam Nisan. Towards a characterization of truthful combinatorial auctions. In FOCS, 2003. [21] Ron Lavi and Noam Nisan. Online ascending auctions for gradually expiring goods. In SODA, 2005. [22] Daniel Lehmann, Liadan O``Callaghan, and Yoav Shoham. Truth revelation in approximately efficient combinatorial auctions. Journal of the ACM, 49(5):577-602, 2002. [23] A. Mas-Collel, W. Whinston, and J. Green. Microeconomic Theory. Oxford university press, 1995. [24] Eric Maskin. Nash equilibrium and welfare optimality. Review of Economic Studies, 66:23-38, 1999. [25] Eric Maskin and Tomas Sj¨ostr¨om. Implementation theory, 2002. 247 [26] Aranyak Mehta and Vijay Vazirani. Randomized truthful auctions of digital goods are randomizations over truthful auctions. In EC, 2004. [27] John Moore. Implementation, contract and renegotiation in environments with complete information, 1992. [28] John Moore and Rafael Repullo. Subgame perfect implementation. Econometrica, 56(5):1191-1220, 1988. [29] H. Moulin and S. Shenker. Strategyproof sharing of submodular costs: Budget balance versus efficiency. Economic Theory, 18(3):511-533, 2001. [30] Noam Nisan. Algorithms for selfish agents. In STACS, 1999. [31] Noam Nisan and Amir Ronen. Computationally feasable vcg mechanisms. In EC, 2000. [32] Noam Nisan and Amir Ronen. Algorithmic mechanism design. Games and Economic Behavior, 35:166-196, 2001. [33] M. J. Osborne and A. Rubinstein. A Course in Game Theory. MIT press, 1994. [34] Christos H. Papadimitriou. Algorithms, games, and the internet. In STOC, 2001. [35] Kevin Roberts. The characterization of implementable choice rules. In Jean-Jacques Laffont, editor, Aggregation and Revelation of Preferences. Papers presented at the 1st European Summer Workshop of the Econometric Society, pages 321-349. North-Holland, 1979. [36] Irit Rozenshtrom. Dominant strategy implementation with quasi-linear preferences, 1999. Master``s thesis, Dept. of Economics, The Hebrew University, Jerusalem, Israel. [37] Rakesh Vohra and Rudolf Muller. On dominant strategy mechanisms, 2003. Working paper. [38] Shmuel Zamir. Rationality and emotions in ultimatum bargaining. Annales D'' Economie et De Statistique, 61, 2001. 248
and budget-balanced (i.e., transfers always sum up to zero). This happens essentially if there are at least three players, and a direct network link between any two agents. Finally, Moore and Repullo observed that they actually use a relaxed complete information assumption: it is only required that for every player there exists only one other player who is informed about him. 1.2 Implementations in Partially Informed Set-ups and Our Results The complete information assumption is realistic for small groups of players, but not in general. In this paper we consider players that are informed about each other with some probability. More formally, we say that agent B is p-informed about agent A, if B knows the type of A with probability p. For such partially-informed environments, we show how to use the solution concept of iterative elimination of weakly dominated strategies. We demonstrate this concept through some motivating examples that (i) seem natural in distributed settings and (ii) cannot be implemented in dominant strategies even if there is an authorized center with a direct connection to every agent or even if players have single-parameter domains. 1. We first show how the subgame perfect techniques of Moore and Repullo [28] can be applied to p-informed environments and further adjusted to the concept of iterative elimination of weakly dominated strategies (for large enough p). 2. We then suggest a certificate based challenging method that is more natural in computerized p-informed environments and different from the one introduced by Moore and Repullo [28] (for p G (0, 1]). 3. We consider implementations in various network structures. As a case study we apply our methods to derive: (1) Simplified Peer-to-Peer network for file sharing with no payments in equilibrium. Our approach is (agent, file) - specific. (2) Web-cache budget-balanced and economically efficient mechanism. Our mechanisms use reasonable punishments that inversely depend on p. And so, if the fines are large then small p is enough to induce cooperation. Essentially, large p implies a large amount of recorded information. 1.2.1 Malicious Agents Decentralized mechanisms often utilize punishing outcomes. As a result, malicious players might cause severe harm to others. We suggest a quantified notion of "malicious" player, who benefits from his own gained surplus and from harm caused to others. [12] suggests several categories to classify non-cooperating players. Our approach is similar to [7] (and the references therein), who considered independently such players in different context. We show a simple decentralized mechanism in which q-malicious players cooperate and in particular, do not use their punishing actions in equilibrium. 1.3 Dominant Strategy Implementations In this subsection we shall refer to some recent results demonstrating that the set-up of private information with the concept of dominant strategies is restrictive in general. First, Roberts' classical impossibility result shows that if players' preferences are not restricted and there are at least 3 different outcomes, then every dominant-strategy mechanism must be weighted VCG (with the social goal that maximizes the weighted welfare) [35]. For slightly-restricted preference domains, it is not known how to turn efficiently computable algorithms into dominant strategy mechanisms. This was observed and analyzed in [32, 22, 31]. Recently [20] extends Roberts' result to some leading examples. 
They showed that under mild assumptions any dominant strategy mechanism for variety of Combinatorial Auctions over multi-dimensional domains must be almost weighted VCG. Additionally, it turns out that the dominant strategy requirement implies that the social goal must be "monotone" [35, 36, 22, 20, 5, 37]. This condition is very restrictive, as many desired natural goals are non-monotone2. Several recent papers consider relaxations of the dominant strategy concept: [32, 1, 2, 19, 16, 17, 26, 21]. However, most of these positive results either apply to severely restricted cases (e.g., single-parameter, 2 players) or amount to VCG or "almost" VCG mechanisms (e.g., [19]). Recently, [8, 3] considered implementations for generalized single-parameter players. Organization of this paper: In section 2 we illustrate the concepts of subgame perfect and iterative elimination of weakly dominated strategies in completely-informed and partially-informed environments. In section 3 we show a mechanism for Peer-to-Peer file sharing networks. In section 4 we apply our methods to derive a web cache mechanism. Future work is briefly discussed in section 5. 2. MOTIVATING EXAMPLES In this section we examine the concepts of subgame perfect and iterative elimination of weakly dominated strategies for completely informed and p-informed environments. We also present the notion of q-maliciousness and some other related considerations through two illustrative examples. 2.1 The Fair Assignment Problem Our first example is an adjustment to computerized context of an ancient procedure to ensure that the wealthiest man in Athens would sponsor a theatrical production known as the Choregia [27]. In the fair assignment problem, Alice and Bob are two workers, and there is a new task to be performed. Their goal is to assign the task to the least loaded worker without any monetary transfers. The informational assumption is that Alice and Bob know both loads and the duration of the new task .3 2E. g., minimizing the makespan within a factor of 2 [32] and Rawls' Rule over some multi-dimensional domains [20]. 3In first glance one might ask why the completely informed agents could not simply sign a contract, specifying the desired goal. Such a contract is sometimes infeasible due to fact that the true state cannot be observed by outsiders, especially not the court. 2.1.1 Basic Mechanism The following simple mechanism implements this goal in subgame perfect equilibrium. 9 Stage 1: Alice either agrees to perform the new task or refuses. 9 Stage 2: If she refuses, Bob has to choose between:--(a) Performing the task himself. -- (b) Exchanging his load with Alice and performing the new task as well. Let LTA, LTB be the true loads of Alice and Bob, and let t> 0 be the load of the new task. Assume that load exchanging takes zero time and cost. We shall see that the basic mechanism achieves the goal in a subgame perfect equilibrium. Intuitively this means that in equilibrium each player will choose his best action at each point he might reach, assuming similar behavior of others, and thus every SPE is a Nash equilibrium. CLAIM 2. ([27]) The task is assigned to the least loaded worker in subgame perfect equilibrium. PROOF. By backward induction argument ("look forward and reason backward"), consider the following cases: 1. LTB <LTA. If stage 2 is reached then Bob will not exchange. 2. LTA <LTB <LTA + t. If stage 2 is reached Bob will exchange, and this is what Alice prefers. 3. LTA + t <LTB. 
If stage 2 is reached then Bob would exchange, as a result it is strictly preferable by Alice to perform the task. Note that the basic mechanism does not use monetary transfers at all and is decentralized in the sense that no third party is needed to run the procedure. The goal is achieved in equilibrium (ties are broken in favor of Alice). However, in the second case exchange do occur in an equilibrium point. Recall the unrealistic assumption that load exchange takes zero time and cost. Introducing fines, the next mechanism overcomes this drawback. 2.1.2 Elicitation Mechanism In this subsection we shall see a centralized mechanism for the fair assignment goal without load exchange in equilibrium. The additional assumptions are as follows. The cost performing a load of duration d is exactly d. We assume that the duration t of the new task is <T. The payoffs cents proof: Assume that there exists a mechanism that implements this goal in dominant strategies. Then by the Revelation Principle [23] there exists a mechanism that implements this goal for which the dominant strategy of each player is to report his true load. Clearly, truthfully reporting cannot be a dominant strategy for this goal (if monetary transfers are not available), as players would prefer to report higher loads. of the utility maximizers agents are quasilinear. The following mechanism is an adaptation of Moore and Repullo's elicitation mechanism [28] 5. • Stage 1: ("Elicitation of Alice's load") Alice announces LA. . Bob announces L A <LA. . If L A = LA (" Bob agrees") goto the next Stage. Otherwise (" Bob challenges"), Alice is assigned the task. She then has to choose between: -- (a) Transferring her original load to Bob and paying him LA − 0.5 · min {e, LA − L A}. Alice pays a to the mechanism. Bob pays the fine of T + E to the mechanism. -- (b) No load transfer. Alice pays e to Bob. STOP. • Stage 2: The elicitation of Bob's load is similar to Stage 1 (switching the roles of Alice and Bob). • Stage 3: If LA <LB Alice is assigned the task, otherwise Bob. STOP. Observe that Alice is assigned the task and fined with e whenever Bob challenges. We shall see that the bonus of e is paid to a challenging player only in out of equilibria cases. PROOF. Assume w.l.o.g that the elicitation of Alice's load is done after Bob's, and that Stage 2 is reached. If Alice truly reports LA = LTA, Bob strictly prefers to agree. Otherwise, if Bob challenges, Alice would always strictly prefer to transfer (as in this case Bob would perform her load for smaller cost), as a result Bob would pay T + e to the mechanism. This punishing outcome is less preferable than the" normal" outcome of Stage 3 achieved had he agreed. If Alice misreports LA> LTA, then Bob can ensure himself the bonus (which is always strictly preferable than reaching Stage 3) by challenging with L A = LTA, and so whenever Bob gets the bonus Alice gains the worst of all payoffs. Reporting a lower load LA <LTA is not beneficial for Alice. In this case, Bob would strictly prefer to agree (and not to announce L A <LA, as he limited to challenge with a smaller load than what she announces). Thus such misreporting can only increase the possibility that she is assigned the task. And so there is no incentive for Alice to do so. All together, Alice would prefer to report the truth in this stage. And so Stage 2 would not abnormally end by STOP, and similarly Stage 1. 
Observe that the elicitation mechanism is almost balanced: in all outcomes no money comes in or out, except for the non-equilibrium outcome (a), in which both players pay to the mechanism. 5In [28], if an agent misreport his type then it is always beneficial to the other agent to challenge. In particular, even if the agent reports a lower load. 2.1.3 Elicitation Mechanism for Partially Informed Agents In this subsection we consider partially informed agents. Formally: It turns out that a version of the elicitation mechanism works for this relaxed information assumption, if we use the concept of iterative elimination of weakly dominated strategies6. We replace the fixed fine of a in the elicitation mechanism with the fine: ,3 p = max {L, 2p − 1 T} + e, and assume the bounds LTA, LTB <L. PROPOSITION 2. If all agents are p-informed, p> 0.5, the elicitation mechanism (,3 p) implements the fair assignment goal with the concept of iterative elimination of weakly dominated strategies. The strategy of each player is to report the true load and to challenge with the true load if the other agent overreport. PROOF. Assume w.l.o.g that the elicitation of Alice's load is done after Bob's, and that Stage 2 is reached. First observe that underreporting the true value is a dominated strategy, whether Bob is not informed and "mistakenly" challenges with a lower load (as,3 p> L) or not, or even if t is very small. Now we shall see that overreporting her value is a dominated strategy, as well. Alice's expected payoff gained by misreporting <p (payoff if she lies and Bob is informed) + (1 − p) (max payoff if Bob is not informed) <p (− t −,3 p) <p (− t) + (1 − p) (− t −,3 p) <p (min payoff of true report if Bob is informed) + (1 − p) (min payoff if Bob is not informed) <Alice's expected payoff if she truly reports. The term (− t −,3 p) in the left hand side is due to the fact that if Bob is informed he will always prefer to challenge. In the right hand side, if he is informed, then challenging is a dominated strategy, and if he is not informed the worst harm he can make is to challenge. Thus in stage 2 Alice will report her true load. This implies that challenging without being informed is a dominated strategy for Bob. This argument can be reasoned also for the first stage, when Bob reports his value. Bob knows the maximum payoff he can gain is at most zero since he cannot expect to get the bonus in the next stage. 2.1.4 Extensions The elicitation mechanism for partially informed agents is rather general. As in [28], we need the capability to "judge" between two distinct declarations in the elicitation rounds, 6A strategy si of player i is weakly dominated if there exists s i such that (i) the payoff gained by s i is at least as high as the payoff gained by si, for all strategies of the other players and all preferences, and (ii) there exist a preference and a combination of strategies for the other players such that the payoff gained by s i is strictly higher than the payoff gained by si. and upper and lower bounds based on the possible payoffs derived from the last stage. In addition, for p-informed environments, some structure is needed to ensure that underbidding is a dominated strategy. The Choregia-type mechanisms can be applied to more than 2 players with the same number of stages: the player in the first stage can simply points out the name of the "wealthiest" agent. Similarly, the elicitation mechanisms can be extended in a straightforward manner. 
These mechanisms can be budget-balanced, as some player might replace the role of the designer, and collect the fines, as observed in [28]. OPEN PROBLEM 1. Design a decentralized budget balanced mechanism with reasonable fines for independently p-informed n players, where p <1--1/2 n − 1. 2.2 Seller and Buyer Scenario A player might cause severe harm to others by choosing a non-equilibrium outcome. In the mechanism for the fair assignment goal, an agent might "maliciously" challenge even if the other agent truly reports his load. In this subsection we consider such malicious scenarios. For the ease of exposition we present a second example. We demonstrate that equilibria remain unchanged even if players are malicious. In the seller-buyer example there is one item to be traded and two possible future states. The goal is to sell the item for the average low price pl = ls + lb 2 in state L, and the higher price ph = hs + hb 2 in the other state H, where ls is seller's cost and lb is buyer's value in state L, and similarly hs, hb in H. The players fix the prices without knowing what will be the future state. Assume that ls <hs <lb <hb, and that trade can occur in both prices (that is, pl, ph E (hs, lb)). Only the players can observe the realization of the true state. The payoffs are of the form ub = xv--tb, us = ts--xvs, where the binary variable x indicates if trade occurred, and tb, ts are the transfers. Consider the following decentralized trade mechanism. 9 Stage 1: If seller reports H goto Stage 2. Otherwise, trade at the low price pl. STOP. 9 Stage 2: The buyer has to choose between:--(a) Trade at the high price ph.--(b) No trade and seller pays A to the buyer. CLAIM 4. Let A = lb--ph +. The unique subgame perfect equilibrium of the trade mechanism is to report the true state in Stage 1 and trading if Stage 2 is reached. Note that the outcome (b) is never chosen in equilibrium. 2.2.1 Trade Mechanism for Malicious Agents The buyer might maliciously punish the seller by choosing the outcome (b) when the true state is H. The following notion quantifies the consideration that a player is not indifferent to the private surpluses of others. This definition appeared independently in [7] in different context. We shall see that the traders would avoid such bad behavior if they are q-malicious, where q <0.5, that is if their" non-indifference" impact is bounded by 0.5. Equilibria outcomes remain unchanged, and so cooperation is achieved as in the original case of non-malicious players. Consider the trade mechanism with pl = (1--q) hs + q lb, ph = q hs + (1--q) lb, A = (1--q) (hb--lb--). Note that pl <ph for q <0.5. CLAIM 5. If q <0.5, then the unique subgame perfect equilibrium for q-malicious players remains unchanged. PROOF. By backward induction we consider two cases. In state H, the q-malicious buyer would prefer to trade if In state L the buyer prefers the no trade outcome, as (1--q) (lb--ph) + q (ls--ph) <A. The seller prefers to trade at a low price, as (1--q) (pl--ls) + q (pl--lb)> 0>--A. 2.2.2 Discussion No mechanism can Nash-implement this trading goal if the only possible outcomes are trade at pl and trade at ph. To see this, it is enough to consider normal forms (as any extensive form mechanism can be presented as a normal one). Consider a matrix representation, where the seller is the row player and the buyer is the column player, in which every entry includes an outcome. Suppose there is equilibrium entry for the state L. 
The associate column must be all pl, otherwise the seller would have an incentive to deviate. Similarly, the associate row of the H equilibrium entry must be all ph (otherwise the buyer would deviate), a contradiction. 7 8 The buyer prefers pl and seller ph, and so the preferences are identical in both states. Hence reporting preferences over outcomes is not "enough" - players must supply additional "information". This is captured by outcome (b) in the trade mechanism. Intuitively, if a goal is not Nash-implementable we need to add more outcomes. The drawback is that some" new" additional equilibria must be ruled out. E.g., additional Nash equilibrium for the trade mechanism is (trade at pl, (b)). That is, the seller chooses to trade at low price at either states, and the buyer always chooses the no trade option that fines the seller, if the second stage is reached. Such buyer's threat is not credible, because if the mechanism is played only once, and Stage 2 is reached in state H, the buyer would strictly decrease his payoff if he chooses (b). Clearly, this is not a subgame perfect equilibrium. Although each extensive game-form is strategically equivalent to a normal form one, the extensive form representation places more structure and so it seems plausible that the subgame perfect equilibrium will be played .9 7Formally, this goal is not Maskin monotonic, a necessary condition for Nash-implementability [24]. 8A similar argument applies for the Fair Assignment Problem. 9Interestingly, it is a straight forward to construct a sequential mechanism with unique SPE, and additional NE with a strictly larger payoff for every player. 3. PEER-TO-PEER NETWORKS In this section we describe a simplified Peer-to-Peer network for file sharing, without payments in equilibrium, using a certificate-based challenging method. In this challenging method - as opposed to [28] - an agent that challenges cannot harm other agents, unless he provides a valid "certificate". In general, if agent B copied a file f from agent A, then agent A knows that agent B holds a copy of the file. We denote such information as a certificate (B, f) (we shall omit cryptographic details). Such a certificate can be recorded and distributed along the network, and so we can treat each agent holding the certificate as an informed agent. Assumptions: We assume an homogeneous system with files of equal size. The benefit each agent gains by holding a copy of any file is V. The only cost each agent has is the uploading cost C (induced while transferring a file to an immediate neighbor). All other costs are negligible (e.g., storing the certificates, forwarding messages, providing acknowledgements, digital signatures, etc). Let upA, downA be the numbers of agent A uploads and downloads if he always cooperates. We assume that each agent A enters the system if upA · C <downA · V. Each agent has a quasilinear utility and only cares about his current bandwidth usage. In particular, he ignores future scenarios (e.g., whether forwarding or dropping of a packet might affect future demand). 3.1 Basic Mechanism We start with a mechanism for a network with 3p-informed agents: B, A1, A2. We assume that B is directly connected to A1 and A2. If B has the certificate (A1, f), then he can apply directly to A1 and request the file (if he refuses, then B can go to court). The following basic sequential mechanism is applicable whenever agent B is not informed and still would like to download the file if it exists in the network. 
Note that this goal cannot be implemented in dominant strategies without payments (similar to Claim 1, when the type of each agent here is the set of files he holds). Define tA,B to be the monetary amount that agent A should transfer to B. • Stage 1: Agent B requests the file f from A1. - If A1 replies "yes" then B downloads the file from A1. STOP. - Otherwise, agent B sends A1's "no" reply to agent A2. If A2 declares "agree" then goto the next stage. Else, A2 sends a certificate (A1, f) to agent B. · If the certificate is correct then tA1,A2 = βp. STOP. · Else tA2,A1 = |C| + ε. STOP. • Stage 2: Agent B requests the file f from A2. Switch the roles of the agents A1, A2. PROOF. Clearly, if the mechanism ends without challenging then −C ≤ u(Ai) ≤ 0. And so, challenging with an invalid certificate is always a dominated strategy. Now, when Stage 2 is reached, A2 is the last to report if he has the file. If A2 has the file it is a weakly dominated strategy to misreport, whether A1 is informed or not: A2's expected payoff gained by misreporting "no" is p · (−βp) + (1 − p) · 0 < −C, which is A2's payoff if she reports "yes". This argument applies also to Stage 1, when A1 reports whether he has the file. A1 knows that A2 will report "yes" if and only if she has the file in the next stage, and so the maximum payoff he can gain is at most zero, since he cannot expect to get a bonus. 3.2 Chain Networks In a chain network, agent B is directly connected to A1, and Ai is directly connected to agent Ai+1. Assume that we have an acknowledgment protocol to confirm the receipt of a particular message. To avoid message dropping, we add the fine (βp + 2ε) to be paid by an agent who hasn't properly forwarded a message. The chain mechanism follows: • Stage i: Agent B forwards a request for the file f to Ai (through {Ak}, k < i). - If Ai reports "yes", then B downloads f from Ai. STOP. - Otherwise Ai reports "no". If Aj sends a certificate (Ak, f) to B (j, k ≤ i), then: · If the certificate (Ak, f) is correct, then t(Ak, Aj) = βp. STOP. · Else, t(Aj, Ak) = C + ε. STOP. If Ai reports that he has no copy of the file, then any agent in between might challenge. Using digital signatures and acknowledgements, observe that every agent must forward each message, even if it contains a certificate showing that he himself has misreported. We use the same fine, βp, as in the basic mechanism, because the protocol might end at stage 1 (clearly, the former analysis still applies, since the actual p increases with the number of players). 3.3 Network Mechanism In this subsection we consider general network structures. We need the assumption that there is a ping protocol that checks whether a neighbor agent is on-line or not (that is, an on-line agent cannot hide himself). To limit the amount of information to be recorded, we assume that an agent is committed to keeping any downloaded file for at least one hour, and so certificates are valid for a limited amount of time. We assume that each agent has a digitally signed listing of his current immediate neighbors. As in real P2P file sharing applications, we restrict each request for a file to be forwarded at most r times (that is, downloads are possible only inside a neighborhood of radius r). The network mechanism utilizes the chain mechanism in the following way: When agent B requests a file from agent A (at most r − 1 hops away), A sends to B the list of his neighbors and the output of the ping protocol to all of these neighbors. As a result, B can explore the network.
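To make the transfer and fine logic of the basic mechanism concrete, here is a minimal sketch (not the authors' code). The numeric values of V, C, βp and ε are hypothetical; βp is chosen so that the expected fine p·βp exceeds the upload cost C, which is what the inequality in the proof above requires.

```python
# Minimal sketch of the basic mechanism of Section 3.1 (not the authors' code).
# The numbers V, C, beta_p and eps are hypothetical; beta_p is chosen so that the
# expected fine p * beta_p exceeds the upload cost C, as the analysis requires.

def basic_mechanism(holds, reports, challenged_by_other, V=1.0, C=0.4, beta_p=2.0, eps=0.01):
    """holds[a]              : True if agent a in ("A1", "A2") really has file f.
       reports[a]            : the "yes"/"no" reply a gives when B asks him.
       challenged_by_other[a]: True if the other agent challenges a's "no" reply
                               by sending a certificate (a, f) to B.
       Returns the resulting utilities of B, A1 and A2."""
    u = {"B": 0.0, "A1": 0.0, "A2": 0.0}
    for asked, other in (("A1", "A2"), ("A2", "A1")):   # Stage 1, then Stage 2 with roles switched
        if reports[asked] == "yes":
            u["B"] += V                                  # B downloads the file
            u[asked] -= C                                # uploader bears the cost C
            return u
        if challenged_by_other[asked]:
            if holds[asked]:                             # the certificate is correct
                u[asked] -= beta_p                       # t_{asked,other} = beta_p
                u[other] += beta_p
            else:                                        # invalid challenge is fined
                u[other] -= C + eps                      # t_{other,asked} = |C| + eps
                u[asked] += C + eps
            return u
    return u                                             # no one admits to having the file

# A1 hides the file, A2 is informed and challenges with a valid certificate:
print(basic_mechanism(holds={"A1": True, "A2": False},
                      reports={"A1": "no", "A2": "no"},
                      challenged_by_other={"A1": True, "A2": False}))
```

Running the example prints a payoff of −βp for A1 and +βp for A2, which is why a holder of the file prefers to report "yes" and accept the upload cost.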
Remark: In this mechanism we assumed that the environment is p-informed. An important design issue that is not addressed here is the incentives for the information propagation phase. 4. WEB CACHE Web caches are a widely used tool to improve overall system efficiency by allowing fast local access. They were listed in [12] as a challenging application of Distributed Algorithmic Mechanism Design. Nisan [30] considered a single cache shared by strategic agents. In this problem, agent i gains the value vTi if a particular item is loaded into the local shared cache. The efficient goal is to load the item if and only if Σi vTi ≥ C, where C is the loading cost. This goal reduces to the "public project" problem analyzed by Clarke [10]. However, it is well known that this mechanism is not budget-balanced (e.g., if the valuation of each player is C, then everyone pays zero). In this section we suggest informational and environmental assumptions for which we describe a decentralized budget-balanced efficient mechanism. We consider environments for which the future demand of each agent depends on his past demand. The underlying informational and environmental requirements are as follows. 1. An agent can read the content of a message only if he is the target node (even if he has to forward the message as an intermediate node of some routing path). An agent cannot initiate a message on behalf of other agents. 2. An acknowledgement protocol is available, so that every agent can provide a certificate indicating that he handled a certain message properly. 3. Negligible costs: we assume p-informed agents, where p is such that the agent's induced cost for keeping records of information is negligible. We also assume that the cost incurred by sending and forwarding messages is negligible. 4. Let qi(t) denote the number of loading requests agent i initiated for the item during the time slot t. We assume that vTi(t), the value for caching the item in the beginning of slot t, depends only on the most recent slot; formally vTi(t) = max{Vi(qi(t − 1)), C}, where Vi(·) is a non-decreasing real function. In addition, Vi(·) is common knowledge among the players. 5. The network is "homogeneous" in the sense that if agent j happens to handle k requests initiated by agent i during the time slot t, then qi(t) = α·k, where α depends on the routing protocol and the environment (α might be smaller than 1, if each request is "flooded" several times). We assume that the only way agent i can affect the true qi(t) is by superficially increasing his demand for the cached item, but not the other way (that is, an agent's loss, incurred by giving up a necessary request for the item, is not negligible). The first requirement is to avoid free riding, and also to avoid the case that an agent superficially increases the demand of others and as a result decreases his own demand. The second requirement is to avoid the case that an agent who gets a routing request for the item records it and then drops it. The third is to ensure that the environment stays well informed. In addition, if the forwarding cost is negligible each agent cooperates and forwards messages, as he would not like to decrease the future demand (which monotonically depends on the current time slot, as assumed in the fourth requirement) of some other agent. Given that the payments are increasing with the declared values, the fourth and fifth requirements ensure that the agent would not increase his demand superficially and so qi(t) is the true demand.
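Before describing the mechanism itself, the budget-balance failure of the Clarke solution mentioned above is easy to see in a few lines of code. The sketch below is illustrative only (the value profile is made up); it contrasts Clarke payments for the shared-cache decision with proportional cost sharing.

```python
# Illustrative only: three agents who each value caching the item at exactly C.
# The Clarke mechanism loads the item but collects nothing, while proportional
# sharing covers the loading cost exactly.

def clarke(values, C):
    load = sum(values) >= C
    pay = []
    for v in values:
        others = sum(values) - v
        pivotal = load and others < C        # only pivotal agents pay under Clarke
        pay.append(C - others if pivotal else 0.0)
    return load, pay

def proportional(values, C):
    load = sum(values) >= C
    total = sum(values)
    return load, [C * v / total if load else 0.0 for v in values]

C, values = 3.0, [3.0, 3.0, 3.0]
print(clarke(values, C))        # (True, [0.0, 0.0, 0.0]) -> the cost C is not covered
print(proportional(values, C))  # (True, [1.0, 1.0, 1.0]) -> budget balanced
```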
The following Web-Cache Mechanism implements the efficient goal that shares the cost proportionally. For simplicity it is described for two players, and w.l.o.g. vTi(t) equals the number of requests initiated by i and observed by any informed j (that is, α = 1 and Vi(qi(t − 1)) = qi(t − 1)). • Stage 1: ("Elicitation of vTA(t)") Alice announces vA. Bob announces v'A ≥ vA. If v'A = vA goto the next stage. Otherwise ("Bob challenges"): - If Bob provides v'A valid records then Alice pays C to finance the loading of the item into the cache. She also pays βp to Bob. STOP. - Otherwise, Bob finances the loading of the item into the cache. STOP. • Stage 2: The elicitation of vTB(t) is done analogously. • Stage 3: If vA + vB < C, then STOP. Otherwise, load the item into the cache; Alice pays pA = (vA/(vA + vB)) · C, and Bob pays pB = (vB/(vA + vB)) · C. PROOF. Challenging without being informed (that is, without providing enough valid records) is always a dominated strategy in this mechanism. Now, assume w.l.o.g. that Alice is the last to report her value.10 Alice's expected payoff gained by underreporting is p · (−C − βp) + (1 − p) · C < p · 0 + (1 − p) · 0 ≤ Alice's expected payoff if she honestly reports. The right hand side equals zero, as the participation costs are negligible. Reasoning back, Bob cannot expect to get the bonus and so misreporting is a dominated strategy for him. 10 See [29] or [12] for exact definitions. 5. CONCLUDING REMARKS In this paper we have seen a new partial informational assumption, and we have demonstrated its suitability to networks in which computational agents can easily collect and distribute information. We then described some mechanisms using the concept of iterative elimination of weakly dominated strategies. Some issues for future work include: • As we have seen, the implementation issue in p-informed environments is straightforward - it is easy to construct "incentive compatible" mechanisms even for non-single-parameter cases. The challenge is to find more realistic scenarios in which the partial informational assumption is applicable. • Mechanisms for information propagation and maintenance. In our examples we choose p such that the maintenance cost over time is negligible. However, the dynamics of the general case is delicate: an agent can use the recorded information to eliminate data that is not "likely" to be needed, in order to decrease his maintenance costs. As a result, the probability that the environment is informed decreases, and selfish agents would not cooperate. Incentives for information propagation should be considered as well (e.g., for P2P networks for file sharing). • It seems that some social choice goals cannot be implemented if each player is at least 1/n-malicious (where n is the number of players). It would be interesting to identify these cases.
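As a concrete companion to the Web-Cache Mechanism above, the following sketch simulates Stage 1 for Alice under simplified payoff accounting (hypothetical numbers; βp is the fine used throughout this section, Bob is assumed to report his own value truthfully, and an informed Bob always challenges an underreport).

```python
import random

# Sketch of Stage 1 of the Web-Cache Mechanism for Alice (hypothetical numbers).
# Simplifications: Bob reports his own value truthfully, Bob is informed about
# Alice's true demand with probability p, and an informed Bob always challenges
# an underreport by exhibiting the recorded requests.

def stage1_alice(true_vA, announced_vA, vB, C, beta_p, p, rng):
    if announced_vA < true_vA and rng.random() < p:
        # Bob provides the records: Alice finances the loading (C) and pays the fine.
        return true_vA - C - beta_p
    if announced_vA + vB >= C:                     # Stage 3: the item is loaded
        return true_vA - C * announced_vA / (announced_vA + vB)
    return 0.0                                     # the item is not loaded

rng = random.Random(0)
setup = dict(true_vA=4.0, vB=3.0, C=5.0, beta_p=8.0, p=0.5, rng=rng)
trials = 100000
honest = sum(stage1_alice(announced_vA=4.0, **setup) for _ in range(trials)) / trials
under = sum(stage1_alice(announced_vA=1.0, **setup) for _ in range(trials)) / trials
print(round(honest, 3), round(under, 3))           # honest reporting does better on average
```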
On Decentralized Incentive Compatible Mechanisms for Partially Informed Environments * ABSTRACT Algorithmic Mechanism Design focuses on Dominant Strategy Implementations. The main positive results are the celebrated Vickrey-Clarke-Groves (VCG) mechanisms and computationally efficient mechanisms for severely restricted players ("single-parameter domains"). As it turns out, many natural social goals cannot be implemented using the dominant strategy concept [35, 32, 22, 20]. This suggests that the standard requirements must be relaxed in order to construct general-purpose mechanisms. We observe that in many common distributed environments computational entities can take advantage of the network structure to collect and distribute information. We thus suggest a notion of partially informed environments. Even if the information is recorded with some probability, this enables us to implement a wider range of social goals, using the concept of iterative elimination of weakly dominated strategies. As a result, cooperation is achieved independent of agents' belief. As a case study, we apply our methods to derive Peer-to-Peer network mechanism for file sharing. 1. INTRODUCTION Recently, global networks have attracted widespread study. The emergence of popular scalable shared networks with self-interested entities - such as peer-to-peer systems over the Internet and mobile wireless communication ad-hoc networks - poses fundamental challenges. Naturally, the study of such giant decentralized systems involves aspects of game theory [32, 34]. In particular, the subfield of Mechanism Design deals with the construction of mechanisms: for a given social goal the challenge is to design rules for interaction such that selfish behavior of the agents will result in the desired social goal [23, 33]. Algorithmic Mechanism Design (AMD) focuses on efficiently computable constructions [32]. Distributed Algorithmic Mechanism Design (DAMD) studies mechanism design in inherently decentralized settings [30, 12]. The standard model assumes rational agents with quasi-linear utilities and private information, playing dominant strategies. The solution concept of dominant strategies - in which each player has a best response strategy regardless of the strategy played by any other player - is well suited to the assumption of private information, in which each player is not assumed to have knowledge or beliefs regarding the other players. The appropriateness of this set-up stems from the strength of the solution concept, which complements the weak information assumption. Many mechanisms have been constructed using this set-up, e.g., [1, 4, 6, 11, 14, 22]. Most of these apply to severely-restricted cases (e.g., single-item auctions with no externalities) in which a player's preference is described by only one parameter ("single-parameter domains"). To date, Vickrey-Clarke-Groves (VCG) mechanisms are the only known general method for designing dominant strategy mechanisms for general domains of preferences. However, in distributed settings without available subsidies from outside sources, VCG mechanisms cannot be accepted as valid solutions due to a serious lack of budget balance. Additionally, for some domains of preferences, VCG mechanisms and weighted VCG mechanisms are faced with computational hardness [22, 20]. Further limitations of the set-up are discussed in subsection 1.3. In most distributed environments, players can take advantage of the network structure to collect and distribute information about other players. 
This paper thus studies the effects of relaxing the private information assumption. One model that has been extensively studied recently is the Peer-to-Peer (P2P) network. A P2P network is a distributed network with no centralized authority, in which the participants share their individual resources (e.g., processing power, storage capacity, bandwidth and content). The aggregation of such resources provides inexpensive computational platforms. The most popular P2P networks are those for sharing media files, such as Napster, Gnutella, and Kazaa. Recent work on P2P Incentives include micropayment methods [15] and reputation-based methods [9, 13]. The following description of a P2P network scenario illustrates the relevance of our relaxed informational assumption. EXAMPLE 1. Consider a Peer-to-Peer network for file sharing. Whenever agent B uploads a file from agent A, all peers along the routing path know that B has loaded the file. They can record this information about agent B. In addition, they can distribute this information. However, it is impossible to record all the information everywhere. First, such duplication induces huge costs. Second, as agents dynamically enter and exit from the network, the information might not be always available. And so it is seems natural to consider environments in which the information is locally recorded, that is, the information is recorded in the closest neighborhood with some probability p. In this paper we shall see that if the information is available with some probability, then this enables us to implement a wider range of social goals. As a result, cooperation is achieved independent of agents' belief. This demonstrates that in some computational contexts our approach is far less demanding than the Bayesian approach (that assumes that players' types are drawn according to some identified probability density function). 1.1 Implementations in Complete Information Set-ups In complete information environments, each agent is informed about everyone else. That is, each agent observes his own preference and the preferences of all other agents. However, no outsider can observe this information. Specifically, neither the mechanism designer nor the court. Many positive results were shown for such arguably realistic settings. For recent surveys see [25, 27, 18]. Moore and Repullo implement a large class of social goals using sequential mechanisms with a small number of rounds [28]. The concept they used is subgame-perfect implementations (SPE). The SPE-implementability concept seems natural for the following reasons: the designed mechanisms usually have non-artificial constructs and a "small" strategy space. As a result, it is straightforward for a player to compute his strategy . ' Second, sequential mechanisms avoid simultaneous moves, and thus can be considered for distributed networks. Third, the constructed mechanisms are often decentralized (i.e., lacking a centralized authority or designer)' Interestingly, in real life players do not always use their subgame perfect strategies. One such widely studied case is the Ultimatum Bargaining 2-person game. In this simple game, the proposer first makes an offer of how to divide a certain known sum of money, and the responder either agrees or refuses, in the latter case both players earn zero. Somewhat surprisingly, experiments show that the responder often rejects the suggested offer, even if it is bounded away from zero and the game is played only once (see e.g. [38]). 
and budget-balanced (i.e., transfers always sum up to zero). This happens essentially if there are at least three players, and a direct network link between any two agents. Finally, Moore and Repullo observed that they actually use a relaxed complete information assumption: it is only required that for every player there exists only one other player who is informed about him. 1.2 Implementations in Partially Informed Set-ups and Our Results The complete information assumption is realistic for small groups of players, but not in general. In this paper we consider players that are informed about each other with some probability. More formally, we say that agent B is p-informed about agent A, if B knows the type of A with probability p. For such partially-informed environments, we show how to use the solution concept of iterative elimination of weakly dominated strategies. We demonstrate this concept through some motivating examples that (i) seem natural in distributed settings and (ii) cannot be implemented in dominant strategies even if there is an authorized center with a direct connection to every agent or even if players have single-parameter domains. 1. We first show how the subgame perfect techniques of Moore and Repullo [28] can be applied to p-informed environments and further adjusted to the concept of iterative elimination of weakly dominated strategies (for large enough p). 2. We then suggest a certificate based challenging method that is more natural in computerized p-informed environments and different from the one introduced by Moore and Repullo [28] (for p G (0, 1]). 3. We consider implementations in various network structures. As a case study we apply our methods to derive: (1) Simplified Peer-to-Peer network for file sharing with no payments in equilibrium. Our approach is (agent, file) - specific. (2) Web-cache budget-balanced and economically efficient mechanism. Our mechanisms use reasonable punishments that inversely depend on p. And so, if the fines are large then small p is enough to induce cooperation. Essentially, large p implies a large amount of recorded information. 1.2.1 Malicious Agents Decentralized mechanisms often utilize punishing outcomes. As a result, malicious players might cause severe harm to others. We suggest a quantified notion of "malicious" player, who benefits from his own gained surplus and from harm caused to others. [12] suggests several categories to classify non-cooperating players. Our approach is similar to [7] (and the references therein), who considered independently such players in different context. We show a simple decentralized mechanism in which q-malicious players cooperate and in particular, do not use their punishing actions in equilibrium. 1.3 Dominant Strategy Implementations In this subsection we shall refer to some recent results demonstrating that the set-up of private information with the concept of dominant strategies is restrictive in general. First, Roberts' classical impossibility result shows that if players' preferences are not restricted and there are at least 3 different outcomes, then every dominant-strategy mechanism must be weighted VCG (with the social goal that maximizes the weighted welfare) [35]. For slightly-restricted preference domains, it is not known how to turn efficiently computable algorithms into dominant strategy mechanisms. This was observed and analyzed in [32, 22, 31]. Recently [20] extends Roberts' result to some leading examples. 
They showed that under mild assumptions any dominant strategy mechanism for variety of Combinatorial Auctions over multi-dimensional domains must be almost weighted VCG. Additionally, it turns out that the dominant strategy requirement implies that the social goal must be "monotone" [35, 36, 22, 20, 5, 37]. This condition is very restrictive, as many desired natural goals are non-monotone2. Several recent papers consider relaxations of the dominant strategy concept: [32, 1, 2, 19, 16, 17, 26, 21]. However, most of these positive results either apply to severely restricted cases (e.g., single-parameter, 2 players) or amount to VCG or "almost" VCG mechanisms (e.g., [19]). Recently, [8, 3] considered implementations for generalized single-parameter players. Organization of this paper: In section 2 we illustrate the concepts of subgame perfect and iterative elimination of weakly dominated strategies in completely-informed and partially-informed environments. In section 3 we show a mechanism for Peer-to-Peer file sharing networks. In section 4 we apply our methods to derive a web cache mechanism. Future work is briefly discussed in section 5. 2. MOTIVATING EXAMPLES 2.1 The Fair Assignment Problem 2.1.1 Basic Mechanism 2.1.2 Elicitation Mechanism 2.1.3 Elicitation Mechanism for Partially Informed Agents 2.1.4 Extensions 2.2 Seller and Buyer Scenario 2.2.1 Trade Mechanism for Malicious Agents 2.2.2 Discussion 3. PEER-TO-PEER NETWORKS 3.1 Basic Mechanism 3.2 Chain Networks 3.3 Network Mechanism 4. WEB CACHE 5. CONCLUDING REMARKS In this paper we have seen a new partial informational assumption, and we have demonstrated its suitability to networks in which computational agents can easily collect and distribute information. We then described some mechanisms using the concept of iterative elimination of weakly dominated strategies. Some issues for future work include: • As we have seen, the implementation issue in p-informed environments is straightforward - it is easy to construct "incentive compatible" mechanisms even for non-singleparameter cases. The challenge is to find more realistic scenarios in which the partial informational assumption is applicable. • Mechanisms for information propagation and maintenance. In our examples we choose p such that the maintenance cost over time is negligible. However, the dynamics of the general case is delicate: an agent can use the recorded information to eliminate data that is not "likely" to be needed, in order to decrease his maintenance costs. As a result, the probability that the environment is informed decreases, and selfish agents would not cooperate. Incentives for information propagation should be considered as well (e.g., for P2P networks for file sharing). • It seems that some social choice goals cannot be implemented if each player is at least 1/n-malicious (where n is the number of players). It would be interesting to identify these cases.
On Decentralized Incentive Compatible Mechanisms for Partially Informed Environments * ABSTRACT Algorithmic Mechanism Design focuses on Dominant Strategy Implementations. The main positive results are the celebrated Vickrey-Clarke-Groves (VCG) mechanisms and computationally efficient mechanisms for severely restricted players ("single-parameter domains"). As it turns out, many natural social goals cannot be implemented using the dominant strategy concept [35, 32, 22, 20]. This suggests that the standard requirements must be relaxed in order to construct general-purpose mechanisms. We observe that in many common distributed environments computational entities can take advantage of the network structure to collect and distribute information. We thus suggest a notion of partially informed environments. Even if the information is recorded with some probability, this enables us to implement a wider range of social goals, using the concept of iterative elimination of weakly dominated strategies. As a result, cooperation is achieved independent of agents' belief. As a case study, we apply our methods to derive Peer-to-Peer network mechanism for file sharing. 1. INTRODUCTION Recently, global networks have attracted widespread study. Algorithmic Mechanism Design (AMD) focuses on efficiently computable constructions [32]. Distributed Algorithmic Mechanism Design (DAMD) studies mechanism design in inherently decentralized settings [30, 12]. The standard model assumes rational agents with quasi-linear utilities and private information, playing dominant strategies. The solution concept of dominant strategies - in which each player has a best response strategy regardless of the strategy played by any other player - is well suited to the assumption of private information, in which each player is not assumed to have knowledge or beliefs regarding the other players. The appropriateness of this set-up stems from the strength of the solution concept, which complements the weak information assumption. Many mechanisms have been constructed using this set-up, e.g., [1, 4, 6, 11, 14, 22]. Most of these apply to severely-restricted cases (e.g., single-item auctions with no externalities) in which a player's preference is described by only one parameter ("single-parameter domains"). To date, Vickrey-Clarke-Groves (VCG) mechanisms are the only known general method for designing dominant strategy mechanisms for general domains of preferences. Additionally, for some domains of preferences, VCG mechanisms and weighted VCG mechanisms are faced with computational hardness [22, 20]. Further limitations of the set-up are discussed in subsection 1.3. In most distributed environments, players can take advantage of the network structure to collect and distribute information about other players. This paper thus studies the effects of relaxing the private information assumption. One model that has been extensively studied recently is the Peer-to-Peer (P2P) network. The most popular P2P networks are those for sharing media files, such as Napster, Gnutella, and Kazaa. The following description of a P2P network scenario illustrates the relevance of our relaxed informational assumption. EXAMPLE 1. Consider a Peer-to-Peer network for file sharing. They can record this information about agent B. In addition, they can distribute this information. However, it is impossible to record all the information everywhere. First, such duplication induces huge costs. 
Second, as agents dynamically enter and exit from the network, the information might not be always available. And so it is seems natural to consider environments in which the information is locally recorded, that is, the information is recorded in the closest neighborhood with some probability p. In this paper we shall see that if the information is available with some probability, then this enables us to implement a wider range of social goals. As a result, cooperation is achieved independent of agents' belief. 1.1 Implementations in Complete Information Set-ups In complete information environments, each agent is informed about everyone else. That is, each agent observes his own preference and the preferences of all other agents. However, no outsider can observe this information. Specifically, neither the mechanism designer nor the court. Many positive results were shown for such arguably realistic settings. Moore and Repullo implement a large class of social goals using sequential mechanisms with a small number of rounds [28]. The concept they used is subgame-perfect implementations (SPE). The SPE-implementability concept seems natural for the following reasons: the designed mechanisms usually have non-artificial constructs and a "small" strategy space. As a result, it is straightforward for a player to compute his strategy . ' Second, sequential mechanisms avoid simultaneous moves, and thus can be considered for distributed networks. Third, the constructed mechanisms are often decentralized (i.e., lacking a centralized authority or designer)' Interestingly, in real life players do not always use their subgame perfect strategies. One such widely studied case is the Ultimatum Bargaining 2-person game. This happens essentially if there are at least three players, and a direct network link between any two agents. Finally, Moore and Repullo observed that they actually use a relaxed complete information assumption: it is only required that for every player there exists only one other player who is informed about him. 1.2 Implementations in Partially Informed Set-ups and Our Results The complete information assumption is realistic for small groups of players, but not in general. In this paper we consider players that are informed about each other with some probability. More formally, we say that agent B is p-informed about agent A, if B knows the type of A with probability p. For such partially-informed environments, we show how to use the solution concept of iterative elimination of weakly dominated strategies. We demonstrate this concept through some motivating examples that (i) seem natural in distributed settings and (ii) cannot be implemented in dominant strategies even if there is an authorized center with a direct connection to every agent or even if players have single-parameter domains. 1. We first show how the subgame perfect techniques of Moore and Repullo [28] can be applied to p-informed environments and further adjusted to the concept of iterative elimination of weakly dominated strategies (for large enough p). 2. 3. We consider implementations in various network structures. As a case study we apply our methods to derive: (1) Simplified Peer-to-Peer network for file sharing with no payments in equilibrium. Our approach is (agent, file) - specific. (2) Web-cache budget-balanced and economically efficient mechanism. Our mechanisms use reasonable punishments that inversely depend on p. And so, if the fines are large then small p is enough to induce cooperation. 
Essentially, large p implies a large amount of recorded information. 1.2.1 Malicious Agents Decentralized mechanisms often utilize punishing outcomes. As a result, malicious players might cause severe harm to others. We suggest a quantified notion of "malicious" player, who benefits from his own gained surplus and from harm caused to others. [12] suggests several categories to classify non-cooperating players. Our approach is similar to [7] (and the references therein), who considered independently such players in different context. We show a simple decentralized mechanism in which q-malicious players cooperate and in particular, do not use their punishing actions in equilibrium. 1.3 Dominant Strategy Implementations In this subsection we shall refer to some recent results demonstrating that the set-up of private information with the concept of dominant strategies is restrictive in general. First, Roberts' classical impossibility result shows that if players' preferences are not restricted and there are at least 3 different outcomes, then every dominant-strategy mechanism must be weighted VCG (with the social goal that maximizes the weighted welfare) [35]. For slightly-restricted preference domains, it is not known how to turn efficiently computable algorithms into dominant strategy mechanisms. Recently [20] extends Roberts' result to some leading examples. They showed that under mild assumptions any dominant strategy mechanism for variety of Combinatorial Auctions over multi-dimensional domains must be almost weighted VCG. Additionally, it turns out that the dominant strategy requirement implies that the social goal must be "monotone" [35, 36, 22, 20, 5, 37]. This condition is very restrictive, as many desired natural goals are non-monotone2. Several recent papers consider relaxations of the dominant strategy concept: [32, 1, 2, 19, 16, 17, 26, 21]. However, most of these positive results either apply to severely restricted cases (e.g., single-parameter, 2 players) or amount to VCG or "almost" VCG mechanisms (e.g., [19]). Recently, [8, 3] considered implementations for generalized single-parameter players. Organization of this paper: In section 2 we illustrate the concepts of subgame perfect and iterative elimination of weakly dominated strategies in completely-informed and partially-informed environments. In section 3 we show a mechanism for Peer-to-Peer file sharing networks. In section 4 we apply our methods to derive a web cache mechanism. Future work is briefly discussed in section 5. 5. CONCLUDING REMARKS In this paper we have seen a new partial informational assumption, and we have demonstrated its suitability to networks in which computational agents can easily collect and distribute information. We then described some mechanisms using the concept of iterative elimination of weakly dominated strategies. Some issues for future work include: • As we have seen, the implementation issue in p-informed environments is straightforward - it is easy to construct "incentive compatible" mechanisms even for non-singleparameter cases. The challenge is to find more realistic scenarios in which the partial informational assumption is applicable. • Mechanisms for information propagation and maintenance. In our examples we choose p such that the maintenance cost over time is negligible. However, the dynamics of the general case is delicate: an agent can use the recorded information to eliminate data that is not "likely" to be needed, in order to decrease his maintenance costs. 
As a result, the probability that the environment is informed decreases, and selfish agents would not cooperate. Incentives for information propagation should be considered as well (e.g., for P2P networks for file sharing). • It seems that some social choice goals cannot be implemented if each player is at least 1/n-malicious (where n is the number of players). It would be interesting to identify these cases.
J-74
On Cheating in Sealed-Bid Auctions
Motivated by the rise of online auctions and their relative lack of security, this paper analyzes two forms of cheating in sealed-bid auctions. The first type of cheating we consider occurs when the seller spies on the bids of a second-price auction and then inserts a fake bid in order to increase the payment of the winning bidder. In the second type, a bidder cheats in a first-price auction by examining the competing bids before deciding on his own bid. In both cases, we derive equilibrium strategies when bidders are aware of the possibility of cheating. These results provide insights into sealed-bid auctions even in the absence of cheating, including some counterintuitive results on the effects of overbidding in a first-price auction.
[ "cheat", "cheat", "auction", "seller", "payment", "case", "possibl", "seal-bid", "bidsecond-price auction", "agent", "first-price auction", "game theori", "seal-bid auction" ]
[ "P", "P", "P", "P", "P", "P", "P", "U", "M", "U", "M", "U", "M" ]
On Cheating in Sealed-Bid Auctions Ryan Porter rwporter@stanford.edu Yoav Shoham shoham@stanford.edu Computer Science Department Stanford University Stanford, CA 94305 ABSTRACT Motivated by the rise of online auctions and their relative lack of security, this paper analyzes two forms of cheating in sealed-bid auctions. The first type of cheating we consider occurs when the seller spies on the bids of a second-price auction and then inserts a fake bid in order to increase the payment of the winning bidder. In the second type, a bidder cheats in a first-price auction by examining the competing bids before deciding on his own bid. In both cases, we derive equilibrium strategies when bidders are aware of the possibility of cheating. These results provide insights into sealedbid auctions even in the absence of cheating, including some counterintuitive results on the effects of overbidding in a first-price auction. Categories and Subject Descriptors J.4 [Computer Applications]: Social and Behavioral Sciences-Economics General Terms Economics, Security 1. INTRODUCTION Among the types of auctions commonly used in practice, sealed-bid auctions are a good practical choice because they require little communication and can be completed almost instantly. Each bidder simply submits a bid, and the winner is immediately determined. However, sealed-bid auctions do require that the bids be kept private until the auction clears. The increasing popularity of online auctions only makes this disadvantage more troublesome. At an auction house, with all participants present, it is difficult to examine a bid that another bidder gave directly to the auctioneer. However, in an online auction the auctioneer is often little more than a server with questionable security; and, since all participants are in different locations, one can anonymously attempt to break into the server. In this paper, we present a game theoretic analysis of how bidders should behave when they are aware of the possibility of cheating that is based on knowledge of the bids. We investigate this type of cheating along two dimensions: whether it is the auctioneer or a bidder who cheats, and which variant (either first or second-price) of the sealed-bid auction is used. Note that two of these cases are trivial. In our setting, there is no incentive for the seller to submit a shill bid in a first price auction, because doing so would either cancel the auction or not affect the payment of the winning bidder. In a second-price auction, knowing the competing bids does not help a bidder because it is dominant strategy to bid truthfully. This leaves us with two cases that we examine in detail. A seller can profitably cheat in a second-price auction by looking at the bids before the auction clears and submitting an extra bid. This possibility was pointed out as early as the seminal paper [12] that introduced this type of auction. For example, if the bidders in an eBay auction each use a proxy bidder (essentially creating a second-price auction), then the seller may be able to break into eBay``s server, observe the maximum price that a bidder is willing to pay, and then extract this price by submitting a shill bid just below it using a false identity. We assume that there is no chance that the seller will be caught when it cheats. However, not all sellers are willing to use this power (or, not all sellers can successfully cheat). We assume that each bidder knows the probability with which the seller will cheat. 
Possible motivation for this knowledge could be a recently published exposé on seller cheating in eBay auctions. In this setting, we derive an equilibrium bidding strategy for the case in which each bidder``s value for the good is independently drawn from a common distribution (with no further assumptions except for continuity and differentiability). This result shows how first and second-price auctions can be viewed as the endpoints of a spectrum of auctions. But why should the seller have all the fun? In a first-price auction, a bidder must bid below his value for the good (also called shaving his bid) in order to have positive utility if he wins. To decide how much to shave his bid, he must trade off the probability of winning the auction against how much he will pay if he does win. Of course, if he could simply examine the other bids before submitting his own, then his problem is solved: bid the minimum necessary to win the auction. In this setting, our goal is to derive an equilibrium bidding strategy for a non-cheating bidder who is aware of the possibility that he is competing against cheating bidders. When bidder values are drawn from the commonly-analyzed uniform distribution, we show the counterintuitive result that the possibility of other bidders cheating has no effect on the equilibrium strategy of an honest bidder. This result is then extended to show the robustness of the equilibrium of a first-price auction without the possibility of cheating. We conclude this section by exploring other distributions, including some in which the presence of cheating bidders actually induces an honest bidder to lower its bid. The rest of the paper is structured as follows. In Section 2 we formalize the setting and present our results for the case of a seller cheating in a second-price auction. Section 3 covers the case of bidders cheating in a first-price auction. In Section 4, we quantify the effects that the possibility of cheating has on an honest seller in the two settings. We discuss related work, including other forms of cheating in auctions, in Section 5, before concluding with Section 6. All proofs and derivations are found in the appendix. 2. SECOND-PRICE AUCTION, CHEATING SELLER In this section, we consider a second-price auction in which the seller may cheat by inserting a shill bid after observing all of the bids. The formulation for this section will be largely reused in the following section on bidders cheating in a first-price auction. While no prior knowledge of game theory or auction theory is assumed, good introductions can be found in [2] and [6], respectively. 2.1 Formulation The setting consists of N bidders, or agents (indexed by i = 1, · · · , n), and a seller. Each agent has a type θi ∈ [0, 1], drawn from a continuous range, which represents the agent``s value for the good being auctioned.2 Each agent``s type is independently drawn from a cumulative distribution function (cdf) F over [0, 1], where F(0) = 0 and F(1) = 1. We assume that F(·) is strictly increasing and differentiable over the interval [0, 1]. Call the probability density function (pdf) f(θi) = F′(θi), which is the derivative of the cdf. Each agent knows its own type θi, but only the distribution over the possible types of the other agents. A bidding strategy for an agent bi : [0, 1] → [0, 1] maps its type to its bid.3 Let θ = (θ1, · · · , θn) be the vector of types for all agents, and θ−i = (θ1, · · · , θi−1, θi+1, · · · , θn) be the vector of all types except for that of agent i.
We can then combine the vectors so that θ = (θi, θ−i). We also define the vector of bids as b(θ) = (b1(θ1), ... , bn(θn)), and this vector without the bid of agent i as b−i(θ−i). 2 We can restrict the types to the range [0, 1] without loss of generality because any distribution over a different range can be normalized to this range. 3 We thus limit agents to deterministic bidding strategies, but, because of our continuity assumption, there always exists a pure strategy equilibrium. Let b[1](θ) be the value of the highest bid of the vector b(θ), with a corresponding definition for b[1](θ−i). An agent obviously wins the auction if its bid is greater than all other bids, but ties complicate the formulation. Fortunately, we can ignore the case of ties in this paper because our continuity assumption will make them a zero probability event in equilibrium. We assume that the seller does not set a reserve price.4 If the seller does not cheat, then the winning agent pays the highest bid by another agent. On the other hand, if the seller does cheat, then the winning agent will pay its bid, since we assume that a cheating seller would take full advantage of its power. Let the indicator variable µc be 1 if the seller cheats, and 0 otherwise. The probability that the seller cheats, Pc, is known by all agents.5 We can then write the payment of the winning agent as follows. pi(b(θ), µc) = µc · bi(θi) + (1 − µc) · b[1](θ−i)   (1) Let µ(·) be an indicator function that takes an inequality as an argument and returns 1 if it holds, and 0 otherwise. The utility for agent i is zero if it does not win the auction, and the difference between its valuation and its price if it does. ui(b(θ), µc, θi) = µ(bi(θi) > b[1](θ−i)) · (θi − pi(b(θ), µc))   (2) We will be concerned with the expected utility of an agent, with the expectation taken over the types of the other agents and over whether or not the seller cheats. By pushing the expectation inward so that it is only over the price (conditioned on the agent winning the auction), we can write the expected utility as: Eθ−i,µc [ui(b(θ), µc, θi)] = Prob(bi(θi) > b[1](θ−i)) · (θi − Eθ−i,µc [pi(b(θ), µc) | bi(θi) > b[1](θ−i)])   (3) We assume that all agents are rational, expected utility maximizers. Because of the uncertainty over the types of the other agents, we will be looking for a Bayes-Nash equilibrium. A vector of bidding strategies b∗ is a Bayes-Nash equilibrium if for each agent i and each possible type θi, agent i cannot increase its expected utility by using an alternate bidding strategy bi, holding the bidding strategies for all other agents fixed. Formally, b∗ is a Bayes-Nash equilibrium if: ∀i, θi, bi: Eθ−i,µc [ui((b∗i(θi), b∗−i(θ−i)), µc, θi)] ≥ Eθ−i,µc [ui((bi(θi), b∗−i(θ−i)), µc, θi)]   (4) 2.2 Equilibrium We first present the Bayes-Nash equilibrium for an arbitrary distribution F(·). 4 This simplifies the analysis, but all of our results can be applied to the case in which the seller announces a reserve price before the auction begins. 5 Note that common knowledge is not necessary for the existence of an equilibrium. Theorem 1. In a second-price auction in which the seller cheats with probability Pc, it is a Bayes-Nash equilibrium for each agent to bid according to the following strategy: bi(θi) = θi − (∫0^θi F(x)^((N−1)/Pc) dx) / F(θi)^((N−1)/Pc)   (5) It is useful to consider the extreme points of Pc. Setting Pc = 1 yields the correct result for a first-price auction (see, e.g., [10]). In the case of Pc = 0, this solution is not defined.
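A quick numerical check of Equation 5 is easy to script. The sketch below is illustrative only (not from the paper): it evaluates the strategy for the uniform cdf, reproduces the closed form derived in Corollary 2 below, and shows the bid climbing toward θi as Pc shrinks.

```python
# Numerically evaluate Equation 5, b(t) = t - [ \int_0^t F(x)^((N-1)/Pc) dx ] / F(t)^((N-1)/Pc).
# Sketch only; for the uniform cdf F(x) = x the closed form is ((N-1)/(N-1+Pc)) * t.

def bid_eq5(theta, N, Pc, F=lambda x: x, steps=50000):
    if theta == 0.0:
        return 0.0
    k = (N - 1) / Pc
    h = theta / steps
    integral = sum(F((j + 0.5) * h) ** k for j in range(steps)) * h   # midpoint rule
    return theta - integral / F(theta) ** k

N, theta = 2, 0.8
for Pc in (1.0, 0.5, 0.1, 0.01):
    print(f"Pc={Pc}: numeric {bid_eq5(theta, N, Pc):.4f}"
          f"  closed form {(N - 1) / (N - 1 + Pc) * theta:.4f}")
# The two columns agree, and the bid approaches theta itself as Pc shrinks.
```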
However, in the limit, bi(θi) approaches θi as Pc approaches 0, which is what we expect as the auction approaches a standard second-price auction. The position of Pc is perhaps surprising. For example, the linear combination bi(θi) = θi − Pc · (∫0^θi F(x)^(N−1) dx) / F(θi)^(N−1) of the equilibrium bidding strategies of first and second-price auctions would have also given us the correct bidding strategies for the cases of Pc = 0 and Pc = 1. 2.3 Continuum of Auctions An alternative perspective on the setting is as a continuum between first and second-price auctions. Consider a probabilistic sealed-bid auction in which the seller is honest, but the price paid by the winning agent is determined by a weighted coin flip: with probability Pc it is his bid, and with probability 1 − Pc it is the second-highest bid. By adjusting Pc, we can smoothly move between a first and second-price auction. Furthermore, the fact that this probabilistic auction satisfies the properties required for the Revenue Equivalence Theorem (see, e.g., [2]) provides a way to verify that the bidding strategy in Equation 5 is the symmetric equilibrium of this auction (see the alternative proof of Theorem 1 in the appendix). 2.4 Special Case: Uniform Distribution Another way to try to gain insight into Equation 5 is by instantiating the distribution of types. We now consider the often-studied uniform distribution: F(θi) = θi. Corollary 2. In a second-price auction in which the seller cheats with probability Pc, and F(θi) = θi, it is a Bayes-Nash equilibrium for each agent to bid according to the following strategy: bi(θi) = ((N − 1)/(N − 1 + Pc)) · θi   (6) This equilibrium bidding strategy, parameterized by Pc, can be viewed as an interpolation between two well-known results. When Pc = 0 the bidding strategy is now well-defined (each agent bids its true type), while when Pc = 1 we get the correct result for a first-price auction: each agent bids according to the strategy bi(θi) = ((N − 1)/N) · θi. 3. FIRST-PRICE AUCTION, CHEATING AGENTS We now consider the case in which the seller is honest, but there is a chance that agents will cheat and examine the other bids before submitting their own (or, alternatively, they will revise their bid before the auction clears). Since this type of cheating is pointless in a second-price auction, we only analyze the case of a first-price auction. After revising the formulation from the previous section, we present a fixed point equation for the equilibrium strategy for an arbitrary distribution F(·). This equation will be useful for the analysis of the uniform distribution, in which we show that the possibility of cheating agents does not change the equilibrium strategy of honest agents. This result has implications for the robustness of the symmetric equilibrium to overbidding in a standard first-price auction. Furthermore, we find that for other distributions overbidding actually induces a competing agent to shave more off of its bid. 3.1 Formulation It is clear that if a single agent is cheating, he will bid (up to his valuation) the minimum amount necessary to win the auction. It is less obvious, though, what will happen if multiple agents cheat. One could imagine a scenario similar to an English auction, in which all cheating agents keep revising their bids until all but one cheater wants the good at the current winning bid. However, we are only concerned with how an honest agent should bid given that it is aware of the possibility of cheating.
Thus, it suffices for an honest agent to know that it will win the auction if and only if its bid exceeds every other honest agent``s bid and every cheating agent``s type. This intuition can be formalized as the following discriminatory auction. In the first stage, each agent``s payment rule is determined. With probability Pa, the agent will pay the second highest bid if it wins the auction (essentially, he is a cheater), and otherwise it will have to pay its bid. These selections are recorded by a vector of indicator variables µa = (µa1, ... , µan), where µai = 1 denotes that agent i pays the second highest bid. Each agent knows the probability Pa, but does not know the payment rule for all other agents. Otherwise, this auction is a standard, sealed-bid auction. It is thus a dominant strategy for a cheater to bid its true type, making this formulation strategically equivalent to the setting outlined in the previous paragraph. The expression for the utility of an honest agent in this discriminatory auction is as follows. ui(b(θ), µa, θi) = (θi − bi(θi)) · Πj≠i [µaj · µ(bi(θi) > θj) + (1 − µaj) · µ(bi(θi) > bj(θj))]   (7) 3.2 Equilibrium Our goal is to find the equilibrium in which all cheating agents use their dominant strategy of bidding truthfully and honest agents bid according to a symmetric bidding strategy. Since we have left F(·) unspecified, we cannot present a closed form solution for the honest agent``s bidding strategy, and instead give a fixed point equation for it. Theorem 3. In a first-price auction in which each agent cheats with probability Pa, it is a Bayes-Nash equilibrium for each non-cheating agent i to bid according to the strategy that is a fixed point of the following equation: bi(θi) = θi − (∫0^θi [Pa · F(bi(x)) + (1 − Pa) · F(x)]^(N−1) dx) / [Pa · F(bi(θi)) + (1 − Pa) · F(θi)]^(N−1)   (8) 3.3 Special Case: Uniform Distribution Since we could not solve Equation 8 in the general case, we can only see how the possibility of cheating affects the equilibrium bidding strategy for particular instances of F(·). A natural place to start is the uniform distribution: F(θi) = θi. Recall the logic behind the symmetric equilibrium strategy in a first-price auction without cheating: bi(θi) = ((N − 1)/N) · θi is the optimal tradeoff between increasing the probability of winning and decreasing the price paid upon winning, given that the other agents are bidding according to the same strategy. Since in the current setting the cheating agents do not shave their bid at all and thus decrease an honest agent``s probability of winning (while obviously not affecting the price that an honest agent pays if he wins), it is natural to expect that an honest agent should compensate by increasing his bid. The idea is that sacrificing some potential profit in order to regain some of the lost probability of winning would bring the two sides of the tradeoff back into balance. However, it turns out that the equilibrium bidding strategy is unchanged. Corollary 4. In a first-price auction in which each agent cheats with probability Pa, and F(θi) = θi, it is a Bayes-Nash equilibrium for each non-cheating agent to bid according to the strategy bi(θi) = ((N − 1)/N) · θi. This result suggests that the equilibrium of a first-price auction is particularly robust when types are drawn from the uniform distribution, since the best response is unaffected by deviations of the other agents to the strategy of always bidding their type.
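This robustness is easy to probe numerically. The following sketch (illustrative only) computes an honest bidder's expected utility in the discriminatory-auction formulation for uniform types and scans over candidate bids; the maximizer stays at ((N − 1)/N)·θi for every Pa, in line with Corollary 4.

```python
# Expected utility of an honest bidder with value theta who bids b against N-1
# opponents, each of whom independently cheats with probability Pa (and then, in
# the discriminatory-auction view, submits its type) or bids ((N-1)/N) * type.
# Types are uniform on [0, 1]. Sketch only.

def expected_utility(b, theta, N, Pa):
    p_beat_cheater = min(1.0, b)                       # P(opponent's type < b)
    p_beat_honest = min(1.0, b * N / (N - 1))          # P(((N-1)/N) * type < b)
    p_win = (Pa * p_beat_cheater + (1 - Pa) * p_beat_honest) ** (N - 1)
    return p_win * (theta - b)                         # an honest winner pays its own bid

def best_bid(theta, N, Pa, grid=100001):
    candidates = [theta * j / (grid - 1) for j in range(grid)]
    return max(candidates, key=lambda b: expected_utility(b, theta, N, Pa))

N, theta = 3, 0.9
for Pa in (0.0, 0.25, 0.5, 0.75):
    print(Pa, round(best_bid(theta, N, Pa), 3), "vs", round((N - 1) / N * theta, 3))
# The maximizer stays at ((N-1)/N) * theta = 0.6 for every Pa, as Corollary 4 states.
```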
In fact, as long as all other agents shave their bid by a fraction (which can differ across the agents) no greater than 1/N, it is still a best response for the remaining agent to bid according to the equilibrium strategy. Note that this result holds even if other agents are shaving their bid by a negative fraction, and are thus irrationally bidding above their type. Theorem 5. In a first-price auction where F(θi) = θi, if each agent j ≠ i bids according to a strategy bj(θj) = ((N − 1 + αj)/N) · θj, where αj ≥ 0, then it is a best response for the remaining agent i to bid according to the strategy bi(θi) = ((N − 1)/N) · θi. Obviously, these strategy profiles are not equilibria (unless each αj = 0), because each agent j has an incentive to set αj = 0. The point of this theorem is that a wide range of possible beliefs that an agent can hold about the strategies of the other agents will all lead him to play the equilibrium strategy. This is important because a common (and valid) criticism of equilibrium concepts such as Nash and Bayes-Nash is that they are silent on how the agents converge on a strategy profile from which no one wants to deviate. However, if the equilibrium strategy is a best response to a large set of strategy profiles that are out of equilibrium, then it seems much more plausible that the agents will indeed converge on this equilibrium. It is important to note, though, that while this equilibrium is robust against arbitrary deviations to strategies that shave less, it is not robust to even a single agent shaving more off of its bid. In fact, if we take any strategy profile consistent with the conditions of Theorem 5 and change a single agent j``s strategy so that its corresponding αj is negative, then agent i``s best response is to shave more than 1/N off of its bid. 3.4 Effects of Overbidding for Other Distributions A natural question is whether the best response bidding strategy is similarly robust to overbidding by competing agents for other distributions. It turns out that Theorem 5 holds for all distributions of the form F(θi) = (θi)^k, where k is some positive integer. However, taking a simple linear combination of two such distributions to produce F(θi) = (θi^2 + θi)/2 yields a distribution in which an agent should actually shave its bid more when other agents shave their bids less. In the example we present for this distribution (with the details in the appendix), there are only two players and the deviation by one agent is to bid his type. However, it can be generalized to a higher number of agents and to other deviations. Example 1. In a first-price auction where F(θi) = (θi^2 + θi)/2 and N = 2, if agent 2 always bids its type (b2(θ2) = θ2), then, for all θ1 > 0, agent 1``s best response bidding strategy is strictly less than the bidding strategy of the symmetric equilibrium. We also note that the same result holds for the normalized exponential distribution (F(θi) = (e^θi − 1)/(e − 1)). It is certainly the case that distributions can be found that support the intuition given above that agents should shave their bid less when other agents are doing likewise. Examples include F(θi) = −(1/2)·θi^2 + (3/2)·θi (the solution to the system of equations: F′′(θi) = −1, F(0) = 0, and F(1) = 1), and F(θi) = (e − e^(1−θi))/(e − 1). It would be useful to relate the direction of the change in the best response bidding strategy to a general condition on F(·).
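The comparison behind Example 1 can be reproduced with a few lines of code. The sketch below (illustrative only) computes, for F(θ) = (θ² + θ)/2 and two bidders, the symmetric first-price equilibrium bid and agent 1's best response when agent 2 bids its type; the best response comes out strictly lower, matching the example.

```python
# Reproduces the comparison behind Example 1 for F(t) = (t^2 + t) / 2 and N = 2.
# Sketch only: sym_bid is the symmetric first-price equilibrium bid (no cheating),
# best_response is agent 1's optimal bid when agent 2 simply bids its type.

F = lambda x: (x * x + x) / 2.0

def sym_bid(theta, steps=20000):
    # b(t) = t - [ \int_0^t F(x) dx ] / F(t)   for N = 2
    if theta == 0.0:
        return 0.0
    h = theta / steps
    integral = sum(F((j + 0.5) * h) for j in range(steps)) * h
    return theta - integral / F(theta)

def best_response(theta, grid=20001):
    # agent 1 wins exactly when its bid exceeds agent 2's type, so maximize F(b)*(theta-b)
    candidates = [theta * j / (grid - 1) for j in range(grid)]
    return max(candidates, key=lambda b: F(b) * (theta - b))

for theta in (0.25, 0.5, 0.75, 1.0):
    print(theta, round(best_response(theta), 4), "<", round(sym_bid(theta), 4))
# Against an opponent who bids its type, agent 1 shaves *more* than in the
# symmetric equilibrium, exactly as Example 1 states.
```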
Unfortunately, we were not able to find such a condition, in part because the integral in the symmetric bidding strategy of a first-price auction cannot be solved without knowing F(·) (or at least some restrictions on it). We do note, however, that the sign of the second derivative of F(θi)/f(θi) is an accurate predictor for all of the distributions that we considered. 4. REVENUE LOSS FOR AN HONEST SELLER In both of the settings we covered, an honest seller suffers a loss in expected revenue due to the possibility of cheating. The equilibrium bidding strategies that we derived allow us to quantify this loss. Although this is as far as we will take the analysis, it could be applied to more general settings, in which the seller could, for example, choose the market in which he sells his good or pay a trusted third party to oversee the auction. In a second-price auction in which the seller may cheat, an honest seller suffers due the fact that the agents will shave their bids. For the case in which agent types are drawn from the uniform distribution, every agent will shave its bid by P c N−1+P c , which is thus also the fraction by which an honest seller``s revenue decreases due to the possibility of cheating. Analysis of the case of a first-price auction in which agents may cheat is not so straightforward. If Pa = 1 (each agent cheats with certainty), then we simply have a second-price auction, and the seller``s expected revenue will be unchanged. Again considering the uniform distribution for agent types, it is not surprising that Pa = 1 2 causes the seller to lose 79 the most revenue. However, even in this worst case, the percentage of expected revenue lost is significantly less than it is for the second-price auction in which Pc = 1 2 , as shown in Table 1.6 It turns out that setting Pc = 0.2 would make the expected loss of these two settings comparable. While this comparison between the settings is unlikely to be useful for a seller, it is interesting to note that agent suspicions of possible cheating by the seller are in some sense worse than agents actually cheating themselves. Percentage of Revenue lost for an Honest Seller Agents Second-Price Auction First-Price Auction (Pc = 0.5) (Pa = 0.5) 2 33 12 5 11 4.0 10 5.3 1.8 15 4.0 1.5 25 2.2 0.83 50 1.1 0.38 100 0.50 0.17 Table 1: The percentage of expected revenue lost by an honest seller due to the possibility of cheating in the two settings considered in this paper. Agent valuations are drawn from the uniform distribution. 5. RELATED WORK Existing work covers another dimension along which we could analyze cheating: altering the perceived value of N. In this paper, we have assumed that N is known by all of the bidders. However, in an online setting this assumption is rather tenuous. For example, a bidder``s only source of information about N could be a counter that the seller places on the auction webpage, or a statement by the seller about the number of potential bidders who have indicated that they will participate. In these cases, the seller could arbitrarily manipulate the perceived N. In a first-price auction, the seller obviously has an incentive to increase the perceived value of N in order to induce agents to bid closer to their true valuation. However, if agents are aware that the seller has this power, then any communication about N to the agents is cheap talk, and furthermore is not credible. 
5. RELATED WORK

Existing work covers another dimension along which we could analyze cheating: altering the perceived value of N. In this paper, we have assumed that N is known by all of the bidders. However, in an online setting this assumption is rather tenuous. For example, a bidder's only source of information about N could be a counter that the seller places on the auction webpage, or a statement by the seller about the number of potential bidders who have indicated that they will participate. In these cases, the seller could arbitrarily manipulate the perceived N. In a first-price auction, the seller obviously has an incentive to increase the perceived value of N in order to induce agents to bid closer to their true valuation. However, if agents are aware that the seller has this power, then any communication about N to the agents is cheap talk, and furthermore is not credible. Thus, in equilibrium the agents would ignore the declared value of N, and bid according to their own prior beliefs about the number of agents. If we make the natural assumption of a common prior, then the setting reduces to the one tackled by [5], which derived the equilibrium bidding strategies of a first-price auction when the number of bidders is drawn from a known distribution but not revealed to any of the bidders. Of course, instead of assuming that the seller can always exploit this power, we could assume that it can only do so with some probability that is known by the agents. The analysis would then proceed in a similar manner as that of our cheating seller model.

The other interesting case of this form of cheating is by bidders in a first-price auction. Bidders would obviously want to decrease the perceived number of agents in order to induce their competition to lower their bids. While it is unreasonable for bidders to be able to alter the perceived N arbitrarily, collusion provides an opportunity to decrease the perceived N by having only one of a group of colluding agents participate in the auction. While the non-colluding agents would account for this possibility, as long as they are not certain of the collusion they will still be induced to shave more off of their bids than they would if the collusion did not take place. This issue is tackled in [7]. Other types of collusion are of course related to the general topic of cheating in auctions. Results on collusion in first and second-price auctions can be found in [8] and [3], respectively.

The work most closely related to our first setting is [11], which also presents a model in which the seller may cheat in a second-price auction. In their setting, the seller is a participant in the Bayesian game who decides between running a first-price auction (where profitable cheating is never possible) or a second-price auction. The seller makes this choice after observing his type, which is his probability of having the opportunity and willingness to cheat in a second-price auction. The bidders, who know the distribution from which the seller's type is drawn, then place their bid. It is shown that, in equilibrium, only a seller with the maximum probability of cheating would ever choose to run a second-price auction. Our work differs in that we focus on the agents' strategies in a second-price auction for a given probability of cheating by the seller. An explicit derivation of the equilibrium strategies then allows us to relate first and second-price auctions.

An area of related work that can be seen as complementary to ours is that of secure auctions, which takes the point of view of an auction designer. The goals often extend well beyond simply preventing cheating, including properties such as anonymity of the bidders and nonrepudiation of bids. Cryptographic methods are the standard weapon of choice here (see [1, 4, 9]).

6. CONCLUSION

In this paper we presented the equilibria of sealed-bid auctions in which cheating is possible. In addition to providing strategy profiles that are stable against deviations, these results give us insights into both first and second-price auctions. The results for the case of a cheating seller in a second-price auction allow us to relate the two auctions as endpoints along a continuum.
The case of agents cheating in a first-price auction showed the robustness of the first-price auction equilibrium when agent types are drawn from the uniform distribution. We also explored the effect of overbidding on the best response bidding strategy for other distributions, and showed that even for relatively simple distributions it can be positive, negative, or neutral. Finally, results from both of our settings allowed us to quantify the expected loss in revenue for a seller due to the possibility of cheating.

7. REFERENCES

[1] M. Franklin and M. Reiter. The design and implementation of a secure auction service. In Proc. IEEE Symp. on Security and Privacy, 1995.
[2] D. Fudenberg and J. Tirole. Game Theory. MIT Press, 1991.
[3] D. Graham and R. Marshall. Collusive bidder behavior at single-object second-price and English auctions. Journal of Political Economy, 95:579-599, 1987.
[4] M. Harkavy, J. D. Tygar, and H. Kikuchi. Electronic auctions with private bids. In Proceedings of the 3rd USENIX Workshop on Electronic Commerce, 1998.
[5] R. Harstad, J. Kagel, and D. Levin. Equilibrium bid functions for auctions with an uncertain number of bidders. Economic Letters, 33:35-40, 1990.
[6] P. Klemperer. Auction theory: A guide to the literature. Journal of Economic Surveys, 13(3):227-286, 1999.
[7] K. Leyton-Brown, Y. Shoham, and M. Tennenholtz. Bidding clubs in first-price auctions. In AAAI-02.
[8] R. McAfee and J. McMillan. Bidding rings. The American Economic Review, 71:579-599, 1992.
[9] M. Naor, B. Pinkas, and R. Sumner. Privacy preserving auctions and mechanism design. In EC-99.
[10] J. Riley and W. Samuelson. Optimal auctions. American Economic Review, 71(3):381-392, 1981.
[11] M. Rothkopf and R. Harstad. Two models of bid-taker cheating in Vickrey auctions. The Journal of Business, 68(2):257-267, 1995.
[12] W. Vickrey. Counterspeculation, auctions, and competitive sealed tenders. Journal of Finance, 16:15-27, 1961.

APPENDIX

Theorem 1. In a second-price auction in which the seller cheats with probability Pc, it is a Bayes-Nash equilibrium for each agent to bid according to the following strategy:

b_i(\theta_i) = \theta_i - \frac{\int_0^{\theta_i} F^{\frac{N-1}{P_c}}(x)\,dx}{F^{\frac{N-1}{P_c}}(\theta_i)}    (5)

Proof. To find an equilibrium, we start by guessing that there exists an equilibrium in which all agents bid according to the same function bi(θi), because the game is symmetric. Further, we guess that bi(θi) is strictly increasing and differentiable over the range [0, 1]. We can also assume that bi(0) = 0, because negative bids are not allowed and a positive bid is not rational when the agent's valuation is 0. Note that these are not assumptions on the setting; they are merely limitations that we impose on our search. Let Φi : [0, bi(1)] → [0, 1] be the inverse function of bi(θi). That is, it takes a bid for agent i as input and returns the type θi that induced this bid. Recall Equation 3:

E_{\theta_{-i},\mu^c}\big[u_i(b(\theta),\mu^c,\theta_i)\big] = \mathrm{Prob}\big(b_i(\theta_i) > b^{[1]}(\theta_{-i})\big)\cdot\Big(\theta_i - E_{\theta_{-i},\mu^c}\big[p_i(b(\theta),\mu^c)\mid b_i(\theta_i) > b^{[1]}(\theta_{-i})\big]\Big)

The probability that a single other bid is below that of agent i is equal to the cdf at the type that would induce a bid equal to that of agent i, which is formally written as F(Φi(bi(θi))). Since all agents are independent, the probability that all other bids are below agent i's is simply this term raised to the (N−1)-th power. Thus, we can re-write the expected utility as:

E_{\theta_{-i},\mu^c}\big[u_i(b(\theta),\mu^c,\theta_i)\big] = F^{N-1}(\Phi_i(b_i(\theta_i)))\cdot\Big(\theta_i - E_{\theta_{-i},\mu^c}\big[p_i(b(\theta),\mu^c)\mid b_i(\theta_i) > b^{[1]}(\theta_{-i})\big]\Big)    (9)

We now solve for the expected payment.
Plugging Equation 1 (which gives the price for the winning agent) into the term for the expected price in Equation 9, and then simplifying the expectation yields:

E_{\theta_{-i},\mu^c}\big[p_i(b(\theta),\mu^c)\mid b_i(\theta_i) > b^{[1]}(\theta_{-i})\big]
= E_{\theta_{-i},\mu^c}\big[\mu^c\cdot b_i(\theta_i) + (1-\mu^c)\cdot b^{[1]}(\theta_{-i})\mid b_i(\theta_i) > b^{[1]}(\theta_{-i})\big]
= P_c\cdot b_i(\theta_i) + (1-P_c)\cdot E_{\theta_{-i}}\big[b^{[1]}(\theta_{-i})\mid b_i(\theta_i) > b^{[1]}(\theta_{-i})\big]
= P_c\cdot b_i(\theta_i) + (1-P_c)\cdot\int_0^{b_i(\theta_i)} b^{[1]}(\theta_{-i})\cdot \mathrm{pdf}\big(b^{[1]}(\theta_{-i})\mid b_i(\theta_i) > b^{[1]}(\theta_{-i})\big)\,db^{[1]}(\theta_{-i})    (10)

Note that the integral on the last line is taken up to bi(θi) because we are conditioning on the fact that bi(θi) > b[1](θ−i). To derive the pdf of b[1](θ−i) given this condition, we start with the cdf. For a given value b[1](θ−i), the probability that any one agent's bid is less than this value is equal to F(Φi(b[1](θ−i))). We then condition on the agent's bid being below bi(θi) by dividing by F(Φi(bi(θi))). The cdf for the N−1 agents is then this value raised to the (N−1)-th power:

\mathrm{cdf}\big(b^{[1]}(\theta_{-i})\mid b_i(\theta_i) > b^{[1]}(\theta_{-i})\big) = \frac{F^{N-1}(\Phi_i(b^{[1]}(\theta_{-i})))}{F^{N-1}(\Phi_i(b_i(\theta_i)))}

The pdf is then the derivative of the cdf with respect to b[1](θ−i):

\mathrm{pdf}\big(b^{[1]}(\theta_{-i})\mid b_i(\theta_i) > b^{[1]}(\theta_{-i})\big) = \frac{N-1}{F^{N-1}(\Phi_i(b_i(\theta_i)))}\cdot F^{N-2}(\Phi_i(b^{[1]}(\theta_{-i})))\cdot f(\Phi_i(b^{[1]}(\theta_{-i})))\cdot \Phi_i'(b^{[1]}(\theta_{-i}))

Substituting the pdf into Equation 10 and pulling terms out of the integral that do not depend on b[1](θ−i) yields:

E_{\theta_{-i},\mu^c}\big[p_i(b(\theta),\mu^c)\mid b_i(\theta_i) > b^{[1]}(\theta_{-i})\big] = P_c\cdot b_i(\theta_i) + \frac{(1-P_c)\cdot(N-1)}{F^{N-1}(\Phi_i(b_i(\theta_i)))}\cdot\int_0^{b_i(\theta_i)} b^{[1]}(\theta_{-i})\cdot F^{N-2}(\Phi_i(b^{[1]}(\theta_{-i})))\cdot f(\Phi_i(b^{[1]}(\theta_{-i})))\cdot \Phi_i'(b^{[1]}(\theta_{-i}))\,db^{[1]}(\theta_{-i})

Plugging the expected price back into the expected utility equation (9), and distributing F^{N−1}(Φi(bi(θi))), yields:

E_{\theta_{-i},\mu^c}\big[u_i(b(\theta),\mu^c,\theta_i)\big] = F^{N-1}(\Phi_i(b_i(\theta_i)))\cdot\theta_i - F^{N-1}(\Phi_i(b_i(\theta_i)))\cdot P_c\cdot b_i(\theta_i) - (1-P_c)\cdot(N-1)\cdot\int_0^{b_i(\theta_i)} b^{[1]}(\theta_{-i})\cdot F^{N-2}(\Phi_i(b^{[1]}(\theta_{-i})))\cdot f(\Phi_i(b^{[1]}(\theta_{-i})))\cdot \Phi_i'(b^{[1]}(\theta_{-i}))\,db^{[1]}(\theta_{-i})

We are now ready to optimize the expected utility by taking the derivative with respect to bi(θi) and setting it to 0. Note that we do not need to solve the integral, because it will disappear when the derivative is taken (by application of the Fundamental Theorem of Calculus).

0 = (N-1)\cdot F^{N-2}(\Phi_i(b_i(\theta_i)))\cdot f(\Phi_i(b_i(\theta_i)))\cdot \Phi_i'(b_i(\theta_i))\cdot\theta_i - F^{N-1}(\Phi_i(b_i(\theta_i)))\cdot P_c - P_c\cdot(N-1)\cdot F^{N-2}(\Phi_i(b_i(\theta_i)))\cdot f(\Phi_i(b_i(\theta_i)))\cdot \Phi_i'(b_i(\theta_i))\cdot b_i(\theta_i) - (1-P_c)\cdot(N-1)\cdot b_i(\theta_i)\cdot F^{N-2}(\Phi_i(b_i(\theta_i)))\cdot f(\Phi_i(b_i(\theta_i)))\cdot \Phi_i'(b_i(\theta_i))

Dividing through by F^{N−2}(Φi(bi(θi))) and combining like terms yields:

0 = \big(\theta_i - P_c\cdot b_i(\theta_i) - (1-P_c)\cdot b_i(\theta_i)\big)\cdot(N-1)\cdot f(\Phi_i(b_i(\theta_i)))\cdot \Phi_i'(b_i(\theta_i)) - P_c\cdot F(\Phi_i(b_i(\theta_i)))

Simplifying the expression and rearranging terms produces:

b_i(\theta_i) = \theta_i - \frac{P_c\cdot F(\Phi_i(b_i(\theta_i)))}{(N-1)\cdot f(\Phi_i(b_i(\theta_i)))\cdot \Phi_i'(b_i(\theta_i))}

To further simplify, we use the formula f'(x) = 1/g'(f(x)), where g(x) is the inverse function of f(x). Plugging in the functions from our setting gives us Φi'(bi(θi)) = 1/bi'(θi). Applying both this equation and Φi(bi(θi)) = θi yields:

b_i(\theta_i) = \theta_i - \frac{P_c\cdot F(\theta_i)\cdot b_i'(\theta_i)}{(N-1)\cdot f(\theta_i)}    (11)

Attempts at a derivation of the solution from this point proved fruitless, but we are at a point now where a guessed solution can be quickly verified.
We used the solution for the first-price auction (see, e.g., [10]) as our starting point to find the answer:

b_i(\theta_i) = \theta_i - \frac{\int_0^{\theta_i} F^{\frac{N-1}{P_c}}(x)\,dx}{F^{\frac{N-1}{P_c}}(\theta_i)}    (12)

To verify the solution, we first take its derivative:

b_i'(\theta_i) = 1 - \frac{F^{2\cdot\frac{N-1}{P_c}}(\theta_i) - \frac{N-1}{P_c}\cdot F^{\frac{N-1}{P_c}-1}(\theta_i)\cdot f(\theta_i)\cdot\int_0^{\theta_i} F^{\frac{N-1}{P_c}}(x)\,dx}{F^{2\cdot\frac{N-1}{P_c}}(\theta_i)}

This simplifies to:

b_i'(\theta_i) = \frac{\frac{N-1}{P_c}\cdot f(\theta_i)\cdot\int_0^{\theta_i} F^{\frac{N-1}{P_c}}(x)\,dx}{F^{\frac{N-1}{P_c}+1}(\theta_i)}

We then plug this derivative into the equation we derived (11):

b_i(\theta_i) = \theta_i - \frac{P_c\cdot F(\theta_i)\cdot\frac{N-1}{P_c}\cdot f(\theta_i)\cdot\int_0^{\theta_i} F^{\frac{N-1}{P_c}}(x)\,dx}{(N-1)\cdot f(\theta_i)\cdot F^{\frac{N-1}{P_c}+1}(\theta_i)}

Cancelling terms yields Equation 12, verifying that our guessed solution is correct.

Alternative Proof of Theorem 1: The following proof uses the Revenue Equivalence Theorem (RET) and the probabilistic auction given as an interpretation of our cheating seller setting. In a first-price auction without the possibility of cheating, the expected payment for an agent with type θi is simply the product of its bid and the probability that this bid is the highest. For the symmetric equilibrium, this is equal to:

F^{N-1}(\theta_i)\cdot\Big(\theta_i - \frac{\int_0^{\theta_i} F^{N-1}(x)\,dx}{F^{N-1}(\theta_i)}\Big)

For our probabilistic auction, the expected payment of the winning agent is a weighted average of its bid and the second highest bid. For the bi(·) we found in the original interpretation of the setting, it can be written as follows:

F^{N-1}(\theta_i)\cdot\Bigg(P_c\cdot\Big(\theta_i - \frac{\int_0^{\theta_i} F^{\frac{N-1}{P_c}}(x)\,dx}{F^{\frac{N-1}{P_c}}(\theta_i)}\Big) + (1-P_c)\cdot\frac{1}{F^{N-1}(\theta_i)}\cdot\int_0^{\theta_i}\Big(x - \frac{\int_0^{x} F^{\frac{N-1}{P_c}}(y)\,dy}{F^{\frac{N-1}{P_c}}(x)}\Big)\cdot(N-1)\cdot F^{N-2}(x)\cdot f(x)\,dx\Bigg)

By the RET, the expected payments will be the same in the two auctions. Thus, we can verify our equilibrium bidding strategy by showing that the expected payment in the two auctions is equal. Since the expected payment is zero at θi = 0 for both functions, it suffices to verify that the derivatives of the expected payment functions with respect to θi are equal, for an arbitrary value θi. Thus, we need to verify the following equation:

F^{N-1}(\theta_i) + (N-1)\cdot F^{N-2}(\theta_i)\cdot f(\theta_i)\cdot\theta_i - F^{N-1}(\theta_i) =
P_c\cdot\Bigg(F^{N-1}(\theta_i)\cdot\Big(1 - \frac{F^{2\cdot\frac{N-1}{P_c}}(\theta_i) - \frac{N-1}{P_c}\cdot F^{\frac{N-1}{P_c}-1}(\theta_i)\cdot f(\theta_i)\cdot\int_0^{\theta_i} F^{\frac{N-1}{P_c}}(x)\,dx}{F^{2\cdot\frac{N-1}{P_c}}(\theta_i)}\Big) + (N-1)\cdot F^{N-2}(\theta_i)\cdot f(\theta_i)\cdot\Big(\theta_i - \frac{\int_0^{\theta_i} F^{\frac{N-1}{P_c}}(x)\,dx}{F^{\frac{N-1}{P_c}}(\theta_i)}\Big)\Bigg) + (1-P_c)\cdot\Big(\theta_i - \frac{\int_0^{\theta_i} F^{\frac{N-1}{P_c}}(y)\,dy}{F^{\frac{N-1}{P_c}}(\theta_i)}\Big)\cdot(N-1)\cdot F^{N-2}(\theta_i)\cdot f(\theta_i)

This simplifies to:

0 = P_c\cdot\Bigg(\frac{\frac{N-1}{P_c}\cdot F^{N-2}(\theta_i)\cdot f(\theta_i)\cdot\int_0^{\theta_i} F^{\frac{N-1}{P_c}}(x)\,dx}{F^{\frac{N-1}{P_c}}(\theta_i)} - \frac{(N-1)\cdot F^{N-2}(\theta_i)\cdot f(\theta_i)\cdot\int_0^{\theta_i} F^{\frac{N-1}{P_c}}(x)\,dx}{F^{\frac{N-1}{P_c}}(\theta_i)}\Bigg) - (1-P_c)\cdot\frac{\int_0^{\theta_i} F^{\frac{N-1}{P_c}}(y)\,dy}{F^{\frac{N-1}{P_c}}(\theta_i)}\cdot(N-1)\cdot F^{N-2}(\theta_i)\cdot f(\theta_i)

After distributing Pc, the right-hand side of this equation cancels out, and we have verified our equilibrium bidding strategy.

Corollary 2. In a second-price auction in which the seller cheats with probability Pc, and F(θi) = θi, it is a Bayes-Nash equilibrium for each agent to bid according to the following strategy:

b_i(\theta_i) = \frac{N-1}{N-1+P_c}\,\theta_i    (6)

Proof. Plugging F(θi) = θi into Equation 5 (repeated as 12), we get:

b_i(\theta_i) = \theta_i - \frac{\int_0^{\theta_i} x^{\frac{N-1}{P_c}}\,dx}{\theta_i^{\frac{N-1}{P_c}}} = \theta_i - \frac{\frac{P_c}{N-1+P_c}\,\theta_i^{\frac{N-1+P_c}{P_c}}}{\theta_i^{\frac{N-1}{P_c}}} = \theta_i - \frac{P_c}{N-1+P_c}\cdot\theta_i = \frac{N-1}{N-1+P_c}\cdot\theta_i
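As a small numerical cross-check (ours, not part of the original paper), the sketch below evaluates the general bidding strategy of Equation 5 by direct numerical integration for F(θ) = θ and compares it with the closed form of Corollary 2; the two should agree up to the integration error. The function names and test points are ours.

def bid_equation5_uniform(theta, N, Pc, steps=100_000):
    # Midpoint-rule approximation of Equation 5 with F(x) = x: the integral of x^k
    # over [0, theta] divided by theta^k, where k = (N - 1) / Pc.
    k = (N - 1) / Pc
    h = theta / steps
    integral = sum(((j + 0.5) * h) ** k for j in range(steps)) * h
    return theta - integral / theta ** k

def bid_corollary2(theta, N, Pc):
    return (N - 1) / (N - 1 + Pc) * theta

for N, Pc, theta in [(2, 0.5, 0.8), (5, 0.3, 0.6), (10, 1.0, 1.0)]:
    print(N, Pc, theta,
          round(bid_equation5_uniform(theta, N, Pc), 6),
          round(bid_corollary2(theta, N, Pc), 6))   # the two columns should match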
Theorem 3. In a first-price auction in which each agent cheats with probability Pa, it is a Bayes-Nash equilibrium for each non-cheating agent i to bid according to the strategy that is a fixed point of the following equation:

b_i(\theta_i) = \theta_i - \frac{\int_0^{\theta_i}\big(P_a\cdot F(b_i(x)) + (1-P_a)\cdot F(x)\big)^{N-1}\,dx}{\big(P_a\cdot F(b_i(\theta_i)) + (1-P_a)\cdot F(\theta_i)\big)^{N-1}}    (8)

Proof. We make the same guesses about the equilibrium strategy to aid our search as we did in the proof of Theorem 1. When simplifying the expectation of this setting's utility equation (7), we use the fact that the probability that agent i will have a higher bid than another honest agent is still F(Φi(bi(θi))), while the probability is F(bi(θi)) if the other agent cheats. The probability that agent i beats a single other agent is then a weighted average of these two probabilities. Thus, we can write agent i's expected utility as:

E_{\theta_{-i},\mu^a}\big[u_i(b(\theta),\mu^a,\theta_i)\big] = \big(\theta_i - b_i(\theta_i)\big)\cdot\big(P_a\cdot F(b_i(\theta_i)) + (1-P_a)\cdot F(\Phi_i(b_i(\theta_i)))\big)^{N-1}

As before, to find the equilibrium bi(θi), we take the derivative and set it to zero:

0 = \big(\theta_i - b_i(\theta_i)\big)\cdot(N-1)\cdot\big(P_a\cdot F(b_i(\theta_i)) + (1-P_a)\cdot F(\Phi_i(b_i(\theta_i)))\big)^{N-2}\cdot\big(P_a\cdot f(b_i(\theta_i)) + (1-P_a)\cdot f(\Phi_i(b_i(\theta_i)))\cdot\Phi_i'(b_i(\theta_i))\big) - \big(P_a\cdot F(b_i(\theta_i)) + (1-P_a)\cdot F(\Phi_i(b_i(\theta_i)))\big)^{N-1}

Applying the equations Φi'(bi(θi)) = 1/bi'(θi) and Φi(bi(θi)) = θi, and dividing through, produces:

0 = \big(\theta_i - b_i(\theta_i)\big)\cdot(N-1)\cdot\Big(P_a\cdot f(b_i(\theta_i)) + (1-P_a)\cdot f(\theta_i)\cdot\frac{1}{b_i'(\theta_i)}\Big) - \big(P_a\cdot F(b_i(\theta_i)) + (1-P_a)\cdot F(\theta_i)\big)

Rearranging terms yields:

b_i(\theta_i) = \theta_i - \frac{\big(P_a\cdot F(b_i(\theta_i)) + (1-P_a)\cdot F(\theta_i)\big)\cdot b_i'(\theta_i)}{(N-1)\cdot\big(P_a\cdot f(b_i(\theta_i))\cdot b_i'(\theta_i) + (1-P_a)\cdot f(\theta_i)\big)}    (13)

In this setting, because we leave F(·) unspecified, we cannot present a closed form solution. However, we can simplify the expression by removing its dependence on bi'(θi):

b_i(\theta_i) = \theta_i - \frac{\int_0^{\theta_i}\big(P_a\cdot F(b_i(x)) + (1-P_a)\cdot F(x)\big)^{N-1}\,dx}{\big(P_a\cdot F(b_i(\theta_i)) + (1-P_a)\cdot F(\theta_i)\big)^{N-1}}    (14)

To verify Equation 14, first take its derivative:

b_i'(\theta_i) = 1 - \frac{\big(P_a F(b_i(\theta_i)) + (1-P_a)F(\theta_i)\big)^{2(N-1)} - (N-1)\big(P_a F(b_i(\theta_i)) + (1-P_a)F(\theta_i)\big)^{N-2}\cdot\big(P_a f(b_i(\theta_i))\,b_i'(\theta_i) + (1-P_a)f(\theta_i)\big)\cdot\int_0^{\theta_i}\big(P_a F(b_i(x)) + (1-P_a)F(x)\big)^{N-1}\,dx}{\big(P_a F(b_i(\theta_i)) + (1-P_a)F(\theta_i)\big)^{2(N-1)}}

This equation simplifies to:

b_i'(\theta_i) = \frac{(N-1)\cdot\big(P_a f(b_i(\theta_i))\,b_i'(\theta_i) + (1-P_a)f(\theta_i)\big)\cdot\int_0^{\theta_i}\big(P_a F(b_i(x)) + (1-P_a)F(x)\big)^{N-1}\,dx}{\big(P_a F(b_i(\theta_i)) + (1-P_a)F(\theta_i)\big)^{N}}

Plugging this equation into the bi'(θi) in the numerator of Equation 13 yields Equation 14, verifying the solution.

Corollary 4. In a first-price auction in which each agent cheats with probability Pa, and F(θi) = θi, it is a Bayes-Nash equilibrium for each non-cheating agent to bid according to the strategy bi(θi) = ((N−1)/N)·θi.

Proof. Instantiating the fixed point equation (8, repeated as 14) with F(θi) = θi yields:

b_i(\theta_i) = \theta_i - \frac{\int_0^{\theta_i}\big(P_a\cdot b_i(x) + (1-P_a)\cdot x\big)^{N-1}\,dx}{\big(P_a\cdot b_i(\theta_i) + (1-P_a)\cdot\theta_i\big)^{N-1}}

We can plug the strategy bi(θi) = ((N−1)/N)·θi into this equation in order to verify that it is a fixed point:

b_i(\theta_i) = \theta_i - \frac{\int_0^{\theta_i}\big(P_a\cdot\frac{N-1}{N}x + (1-P_a)\cdot x\big)^{N-1}\,dx}{\big(P_a\cdot\frac{N-1}{N}\theta_i + (1-P_a)\cdot\theta_i\big)^{N-1}} = \theta_i - \frac{\int_0^{\theta_i} x^{N-1}\,dx}{\theta_i^{N-1}} = \theta_i - \frac{\frac{1}{N}\theta_i^{N}}{\theta_i^{N-1}} = \frac{N-1}{N}\,\theta_i
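Similarly, the fixed point of Equation 14 can be checked numerically for the uniform distribution. The sketch below is our addition (the helper names are ours): it plugs the candidate strategy bi(θ) = ((N−1)/N)·θ into the right-hand side of Equation 14 with F(x) = x and confirms that it is returned unchanged, as Corollary 4 claims.

def rhs_equation14_uniform(theta, N, Pa, b, steps=100_000):
    # Right-hand side of Equation 14 with F(x) = x, evaluated by a midpoint rule,
    # for an arbitrary candidate strategy b(.).
    h = theta / steps
    integral = sum((Pa * b((j + 0.5) * h) + (1 - Pa) * (j + 0.5) * h) ** (N - 1)
                   for j in range(steps)) * h
    return theta - integral / (Pa * b(theta) + (1 - Pa) * theta) ** (N - 1)

for N, Pa, theta in [(2, 0.5, 0.7), (5, 0.25, 1.0), (10, 0.9, 0.4)]:
    candidate = lambda x, N=N: (N - 1) / N * x     # Corollary 4's strategy
    print(N, Pa, theta,
          round(candidate(theta), 6),
          round(rhs_equation14_uniform(theta, N, Pa, candidate), 6))   # should coincide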
Theorem 5. In a first-price auction where F(θi) = θi, if each agent j ≠ i bids according to a strategy bj(θj) = ((N−1+αj)/N)·θj, where αj ≥ 0, then it is a best response for the remaining agent i to bid according to the strategy bi(θi) = ((N−1)/N)·θi.

Proof. We again use Φj : [0, bj(1)] → [0, 1] as the inverse of bj(θj). For all j ≠ i in this setting, Φj(x) = (N/(N−1+αj))·x. The probability that agent i has a higher bid than a single agent j is F(Φj(bi(θi))) = (N/(N−1+αj))·bi(θi). Note, however, that since Φj(·) is only defined over the range [0, bj(1)], it must be the case that bi(1) ≤ bj(1), which is why αj ≥ 0 is necessary, in addition to being sufficient. Assuming that bi(θi) = ((N−1)/N)·θi, then indeed Φj(bi(θi)) is always well-defined. We will now show that this assumption is correct. The expected utility for agent i can then be written as:

E_{\theta_{-i}}\big[u_i(b(\theta),\theta_i)\big] = \prod_{j\neq i}\Big(\frac{N}{N-1+\alpha_j}\,b_i(\theta_i)\Big)\cdot\big(\theta_i - b_i(\theta_i)\big) = \prod_{j\neq i}\frac{N}{N-1+\alpha_j}\cdot\Big(\theta_i\cdot b_i(\theta_i)^{N-1} - b_i(\theta_i)^{N}\Big)

Taking the derivative with respect to bi(θi), setting it to zero, and dividing out the factor \prod_{j\neq i}\frac{N}{N-1+\alpha_j} yields:

0 = \theta_i\cdot(N-1)\cdot b_i(\theta_i)^{N-2} - N\cdot b_i(\theta_i)^{N-1}

This simplifies to the solution: bi(θi) = ((N−1)/N)·θi.

Full Version of Example 1: In a first-price auction where F(θi) = (θi² + θi)/2 and N = 2, if agent 2 always bids its type (b2(θ2) = θ2), then, for all θ1 > 0, agent 1's best response bidding strategy is strictly less than the bidding strategy of the symmetric equilibrium.

After calculating the symmetric equilibrium in which both agents shave their bid by the same amount, we find the best response to an agent who instead does not shave its bid. We then show that this best response is strictly less than the equilibrium strategy. To find the symmetric equilibrium bidding strategy, we instantiate N = 2 in the general formula found in [10], plug in F(θi) = (θi² + θi)/2, and simplify:

b_i(\theta_i) = \theta_i - \frac{\int_0^{\theta_i} F(x)\,dx}{F(\theta_i)} = \theta_i - \frac{\frac{1}{2}\int_0^{\theta_i}(x^2+x)\,dx}{\frac{1}{2}(\theta_i^2+\theta_i)} = \theta_i - \frac{\frac{1}{3}\theta_i^3 + \frac{1}{2}\theta_i^2}{\theta_i^2+\theta_i} = \frac{\frac{2}{3}\theta_i^2 + \frac{1}{2}\theta_i}{\theta_i+1}

We now derive the best response for agent 1 to the strategy b2(θ2) = θ2, denoting the best response strategy b*1(θ1) to distinguish it from the symmetric case. The probability of agent 1 winning is F(b*1(θ1)), which is the probability that agent 2's type is less than agent 1's bid. Thus, agent 1's expected utility is:

E_{\theta_2}\big[u_1((b_1^*(\theta_1), b_2(\theta_2)), \theta_1)\big] = F(b_1^*(\theta_1))\cdot\big(\theta_1 - b_1^*(\theta_1)\big) = \frac{(b_1^*(\theta_1))^2 + b_1^*(\theta_1)}{2}\cdot\big(\theta_1 - b_1^*(\theta_1)\big)

Taking the derivative with respect to b*1(θ1), setting it to zero, and then rearranging terms gives us:

0 = \frac{1}{2}\cdot\Big(2\cdot b_1^*(\theta_1)\cdot\theta_1 - 3\cdot(b_1^*(\theta_1))^2 + \theta_1 - 2\cdot b_1^*(\theta_1)\Big)
0 = 3\cdot(b_1^*(\theta_1))^2 + (2 - 2\theta_1)\cdot b_1^*(\theta_1) - \theta_1

Of the two solutions of this equation, one always produces a negative bid. The other is:

b_1^*(\theta_1) = \frac{\theta_1 - 1 + \sqrt{\theta_1^2 + \theta_1 + 1}}{3}

We now need to show that b1(θ1) > b*1(θ1) holds for all θ1 > 0. Substituting in for both terms, and then simplifying the inequality gives us:

\frac{\frac{2}{3}\theta_1^2 + \frac{1}{2}\theta_1}{\theta_1+1} > \frac{\theta_1 - 1 + \sqrt{\theta_1^2 + \theta_1 + 1}}{3}
\theta_1^2 + \frac{3}{2}\theta_1 + 1 > (\theta_1+1)\sqrt{\theta_1^2 + \theta_1 + 1}

Since θ1 ≥ 0, we can square both sides of the inequality, which then allows us to verify the inequality for all θ1 > 0:

\theta_1^4 + 3\theta_1^3 + \frac{17}{4}\theta_1^2 + 3\theta_1 + 1 > \theta_1^4 + 3\theta_1^3 + 4\theta_1^2 + 3\theta_1 + 1
\frac{1}{4}\theta_1^2 > 0
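Finally, Theorem 5 and the remark following it can be illustrated numerically. The sketch below is ours, not from the paper: it grid-searches agent i's best response when every opponent j bids ((N−1+αj)/N)·θj under the uniform distribution. For αj ≥ 0 the maximizer stays at ((N−1)/N)·θi, while a single negative αj pushes it strictly lower.

def best_response_uniform(theta_i, alphas, N, grid=200_000):
    # Grid search of agent i's expected utility when each opponent j bids
    # ((N - 1 + alpha_j) / N) * theta_j and types are uniform on [0, 1]:
    # P(win with bid b) = prod_j min(1, N * b / (N - 1 + alpha_j)).
    best_b, best_u = 0.0, -1.0
    for k in range(grid + 1):
        b = theta_i * k / grid
        p_win = 1.0
        for a in alphas:
            p_win *= min(1.0, N * b / (N - 1 + a))
        u = p_win * (theta_i - b)
        if u > best_u:
            best_b, best_u = b, u
    return best_b

N, theta_i = 3, 0.9
print((N - 1) / N * theta_i)                            # equilibrium bid: 0.6
print(best_response_uniform(theta_i, [0.0, 0.0], N))    # ~0.6 (symmetric equilibrium)
print(best_response_uniform(theta_i, [0.5, 1.2], N))    # ~0.6 (opponents shave less or overbid)
print(best_response_uniform(theta_i, [-0.5, 0.0], N))   # ~0.5 < 0.6 (one opponent shaves more)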
On Cheating in Sealed-Bid Auctions Motivated by the rise of online auctions and their relative lack of security, this paper analyzes two forms of cheating in sealed-bid auctions. The first type of cheating we consider occurs when the seller spies on the bids of a second-price auction and then inserts a fake bid in order to increase the payment of the winning bidder. In the second type, a bidder cheats in a first-price auction by examining the competing bids before deciding on his own bid. In both cases, we derive equilibrium strategies when bidders are aware of the possibility of cheating. These results provide insights into sealedbid auctions even in the absence of cheating, including some counterintuitive results on the effects of overbidding in a first-price auction. 1. INTRODUCTION Among the types of auctions commonly used in practice, sealed-bid auctions are a good practical choice because they require little communication and can be completed almost instantly. Each bidder simply submits a bid, and the winner is immediately determined. However, sealed-bid auctions do require that the bids be kept private until the auction clears. The increasing popularity of online auctions only makes this disadvantage more troublesome. At an auction house, with all participants present, it is difficult to examine a bid that another bidder gave directly to the auctioneer. However, in an online auction the auctioneer is often little more than a server with questionable security; and, since all participants are in different locations, one can anonymously attempt to break into the server. In this paper, we present a game theoretic analysis of how bidders should behave when they are aware of the possibility of cheating that is based on knowledge of the bids. We investigate this type of cheating along two dimensions: whether it is the auctioneer or a bidder who cheats, and which variant (either first or second-price) of the sealed-bid auction is used. Note that two of these cases are trivial. In our setting, there is no incentive for the seller to submit a shill bid in a first price auction, because doing so would either cancel the auction or not affect the payment of the winning bidder. In a second-price auction, knowing the competing bids does not help a bidder because it is dominant strategy to bid truthfully. This leaves us with two cases that we examine in detail. A seller can profitably cheat in a second-price auction by looking at the bids before the auction clears and submitting an extra bid. This possibility was pointed out as early as the seminal paper [12] that introduced this type of auction. For example, if the bidders in an eBay auction each use a proxy bidder (essentially creating a second-price auction), then the seller may be able to break into eBay's server, observe the maximum price that a bidder is willing to pay, and then extract this price by submitting a shill bid just below it using a false identity. We assume that there is no chance that the seller will be caught when it cheats. However, not all sellers are willing to use this power (or, not all sellers can successfully cheat). We assume that each bidder knows the probability with which the seller will cheat. Possible motivation for this knowledge could be a recently published expos ´ e on seller cheating in eBay auctions. 
In this setting, we derive an equilibrium bidding strategy for the case in which each bidder's value for the good is independently drawn from a common distribution (with no further assumptions except for continuity and differentiability). This result shows how first and second-price auctions can be viewed as the endpoints of a spectrum of auctions. But why should the seller have all the fun? In a first-price auction, a bidder must bid below his value for the good (also called "shaving" his bid) in order to have positive utility if he wins. To decide how much to shave his bid, he must trade off the probability of winning the auction against how much he will pay if he does win. Of course, if he could simply examine the other bids before submitting his own, then his problem is solved: bid the minimum necessary to win the auction. In this setting, our goal is to derive an equilibrium bidding strategy for a non-cheating bidder who is aware of the possibility that he is competing against cheating bidders. When bidder values are drawn from the commonly-analyzed uniform distribution, we show the counterintuitive result that the possibility of other bidders cheating has no effect on the equilibrium strategy of an honest bidder. This result is then extended to show the robustness of the equilibrium of a first-price auction without the possibility of cheating. We conclude this section by exploring other distributions, including some in which the presence of cheating bidders actually induces an honest bidder to lower its bid. The rest of the paper is structured as follows. In Section 2 we formalize the setting and present our results for the case of a seller cheating in a second price auction. Section 3 covers the case of bidders cheating in a first-price auction. In Section 4, we quantify the effects that the possibility of cheating has on an honest seller in the two settings. We discuss related work, including other forms of cheating in auctions, in Section 5, before concluding with Section 6. All proofs and derivations are found in the appendix. 2. SECOND-PRICE AUCTION, CHEATING SELLER 2.1 Formulation 2.2 Equilibrium 2.3 Continuum of Auctions 2.4 Special Case: Uniform Distribution 3. FIRST-PRICE AUCTION, CHEATING AGENTS 3.1 Formulation 3.2 Equilibrium 3.3 Special Case: Uniform Distribution 3.4 Effects of Overbidding for Other Distributions 4. REVENUE LOSS FOR AN HONEST SELLER 5. RELATED WORK Existing work covers another dimension along which we could analyze cheating: altering the perceived value of N. In this paper, we have assumed that N is known by all of the bidders. However, in an online setting this assumption is rather tenuous. For example, a bidder's only source of information about N could be a counter that the seller places on the auction webpage, or a statement by the seller about the number of potential bidders who have indicated that they will participate. In these cases, the seller could arbitrarily manipulate the perceived N. In a first-price auction, the seller obviously has an incentive to increase the perceived value of N in order to induce agents to bid closer to their true valuation. However, if agents are aware that the seller has this power, then any communication about N to the agents is "cheap talk", and furthermore is not credible. Thus, in equilibrium the agents would ignore the declared value of N, and bid according to their own prior beliefs about the number of agents. 
If we make the natural assumption of a common prior, then the setting reduces to the one tackled by [5], which derived the equilibrium bidding strategies of a first-price auction when the number of bidders is drawn from a known distribution but not revealed to any of the bidders. Of course, instead of assuming that the seller can always exploit this power, we could assume that it can only do so with some probability that is known by the agents. The analysis would then proceed in a similar manner as that of our cheating seller model. The other interesting case of this form of cheating is by bidders in a first-price auction. Bidders would obviously want to decrease the perceived number of agents in order to induce their competition to lower their bids. While it is 6Note that we have not considered the costs of the seller. Thus, the expected loss in profit could be much greater than the numbers that appear here. unreasonable for bidders to be able to alter the perceived N arbitrarily, collusion provides an opportunity to decrease the perceived N by having only one of a group of colluding agents participate in the auction. While the non-colluding agents would account for this possibility, as long as they are not certain of the collusion they will still be induced to shave more off of their bids than they would if the collusion did not take place. This issue is tackled in [7]. Other types of collusion are of course related to the general topic of cheating in auctions. Results on collusion in first and second-price auctions can be found in [8] and [3], respectively. The work most closely related to our first setting is [11], which also presents a model in which the seller may cheat in a second-price auction. In their setting, the seller is a participant in the Bayesian game who decides between running a first-price auction (where profitable cheating is never possible) or second-price auction. The seller makes this choice after observing his type, which is his probability of having the opportunity and willingness to cheat in a second-price auction. The bidders, who know the distribution from which the seller's type is drawn, then place their bid. It is shown that, in equilibrium, only a seller with the maximum probability of cheating would ever choose to run a second-price auction. Our work differs in that we focus on the agents' strategies in a second-price auction for a given probability of cheating by the seller. An explicit derivation of the equilibrium strategies then allows us relate first and second-price auctions. An area of related work that can be seen as complementary to ours is that of secure auctions, which takes the point of view of an auction designer. The goals often extend well beyond simply preventing cheating, including properties such as anonymity of the bidders and nonrepudiation of bids. Cryptographic methods are the standard weapon of choice here (see [1, 4, 9]). 6. CONCLUSION In this paper we presented the equilibria of sealed-bid auctions in which cheating is possible. In addition to providing strategy profiles that are stable against deviations, these results give us with insights into both first and second-price auctions. The results for the case of a cheating seller in a second-price auction allow us to relate the two auctions as endpoints along a continuum. The case of agents cheating in a first-price auction showed the robustness of the first-price auction equilibrium when agent types are drawn from the uniform distribution. 
We also explored the effect of overbidding on the best response bidding strategy for other distributions, and showed that even for relatively simple distributions it can be positive, negative, or neutral. Finally, results from both of our settings allowed us to quantify the expected loss in revenue for a seller due to the possibility of cheating (note that we have not considered the costs of the seller, so the expected loss in profit could be much greater than the numbers that appear here). 7. REFERENCES [1] M. Franklin and M. Reiter. The Design and Implementation of a Secure Auction Service. In Proc. IEEE Symp. on Security and Privacy, 1995. [2] D. Fudenberg and J. Tirole. Game Theory. MIT Press, 1991. [3] D. Graham and R. Marshall. Collusive bidder behavior at single-object second-price and English auctions. Journal of Political Economy, 95:579--599, 1987. [4] M. Harkavy, J. D. Tygar, and H. Kikuchi. Electronic auctions with private bids. In Proceedings of the 3rd USENIX Workshop on Electronic Commerce, 1998. [5] R. Harstad, J. Kagel, and D. Levin. Equilibrium bid functions for auctions with an uncertain number of bidders. Economic Letters, 33:35--40, 1990. [6] P. Klemperer. Auction theory: A guide to the literature. Journal of Economic Surveys, 13 (3):227--286, 1999. [7] K. Leyton-Brown, Y. Shoham, and M. Tennenholtz. Bidding clubs in first-price auctions. In AAAI-02. [8] R. McAfee and J. McMillan. Bidding rings. The American Economic Review, 71:579--599, 1992. [9] M. Naor, B. Pinkas, and R. Sumner. Privacy preserving auctions and mechanism design. In EC-99. [10] J. Riley and W. Samuelson. Optimal auctions. American Economic Review, 71 (3):381--392, 1981. [11] M. Rothkopf and R. Harstad. Two models of bid-taker cheating in Vickrey auctions. The Journal of Business, 68 (2):257--267, 1995. [12] W. Vickrey. Counterspeculations, auctions, and competitive sealed tenders. Journal of Finance, 16:15--27, 1961.
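The uniform-distribution claim above, that the possibility of cheating bidders leaves an honest bidder's (N-1)/N shaving rule unchanged, can be explored numerically. The Monte Carlo sketch below is not taken from the paper; it assumes values drawn from U[0,1], honest opponents who follow the standard no-cheating equilibrium, and cheaters who win exactly when their own value exceeds the highest competing bid (since they bid the minimum necessary to win), and it simply reports which shaving factor performs best for a few cheating probabilities.

```python
import random

def expected_utility(v, shave, n, p_cheat, trials=100_000):
    """Monte Carlo estimate of an honest bidder's expected utility in a
    first-price auction with values drawn i.i.d. from U[0, 1].

    The honest bidder bids shave * v.  Each of the n - 1 opponents cheats
    with probability p_cheat.  A cheating opponent inspects the bids and
    outbids us exactly when its own value exceeds our bid; an honest
    opponent bids (n - 1) / n times its value, the no-cheating equilibrium.
    """
    bid = shave * v
    total = 0.0
    for _ in range(trials):
        win = True
        for _ in range(n - 1):
            value = random.random()
            if random.random() < p_cheat:
                win = value <= bid                    # cheater never bids above its value
            else:
                win = (n - 1) / n * value <= bid      # honest equilibrium bid
            if not win:
                break
        if win:
            total += v - bid
    return total / trials

if __name__ == "__main__":
    v, n = 0.8, 4
    shaves = (0.60, 0.65, 0.70, 0.75, 0.80, 0.85)
    for p_cheat in (0.0, 0.5, 1.0):
        utilities = {s: expected_utility(v, s, n, p_cheat) for s in shaves}
        best = max(utilities, key=utilities.get)
        # The best shaving factor stays near (n - 1) / n = 0.75 for every
        # p_cheat, which is the uniform-distribution claim in the text.
        print(f"p_cheat={p_cheat:.1f}  best shave={best:.2f}  "
              f"utility={utilities[best]:.4f}")
```

Estimates are noisy, so the script compares a coarse grid of shaving factors rather than claiming an exact optimum.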
On Cheating in Sealed-Bid Auctions Motivated by the rise of online auctions and their relative lack of security, this paper analyzes two forms of cheating in sealed-bid auctions. The first type of cheating we consider occurs when the seller spies on the bids of a second-price auction and then inserts a fake bid in order to increase the payment of the winning bidder. In the second type, a bidder cheats in a first-price auction by examining the competing bids before deciding on his own bid. In both cases, we derive equilibrium strategies when bidders are aware of the possibility of cheating. These results provide insights into sealedbid auctions even in the absence of cheating, including some counterintuitive results on the effects of overbidding in a first-price auction. 1. INTRODUCTION Among the types of auctions commonly used in practice, sealed-bid auctions are a good practical choice because they require little communication and can be completed almost instantly. Each bidder simply submits a bid, and the winner is immediately determined. However, sealed-bid auctions do require that the bids be kept private until the auction clears. The increasing popularity of online auctions only makes this disadvantage more troublesome. At an auction house, with all participants present, it is difficult to examine a bid that another bidder gave directly to the auctioneer. However, in an online auction the auctioneer is often little more than a server with questionable security; and, since all participants are in different locations, one can anonymously attempt to break into the server. In this paper, we present a game theoretic analysis of how bidders should behave when they are aware of the possibility of cheating that is based on knowledge of the bids. We investigate this type of cheating along two dimensions: whether it is the auctioneer or a bidder who cheats, and which variant (either first or second-price) of the sealed-bid auction is used. Note that two of these cases are trivial. In our setting, there is no incentive for the seller to submit a shill bid in a first price auction, because doing so would either cancel the auction or not affect the payment of the winning bidder. In a second-price auction, knowing the competing bids does not help a bidder because it is dominant strategy to bid truthfully. This leaves us with two cases that we examine in detail. A seller can profitably cheat in a second-price auction by looking at the bids before the auction clears and submitting an extra bid. This possibility was pointed out as early as the seminal paper [12] that introduced this type of auction. We assume that there is no chance that the seller will be caught when it cheats. However, not all sellers are willing to use this power (or, not all sellers can successfully cheat). We assume that each bidder knows the probability with which the seller will cheat. Possible motivation for this knowledge could be a recently published expos ´ e on seller cheating in eBay auctions. In this setting, we derive an equilibrium bidding strategy for the case in which each bidder's value for the good is independently drawn from a common distribution (with no further assumptions except for continuity and differentiability). This result shows how first and second-price auctions can be viewed as the endpoints of a spectrum of auctions. But why should the seller have all the fun? 
In a first-price auction, a bidder must bid below his value for the good (also called "shaving" his bid) in order to have positive utility if he wins. To decide how much to shave his bid, he must trade off the probability of winning the auction against how much he will pay if he does win. Of course, if he could simply examine the other bids before submitting his own, then his problem is solved: bid the minimum necessary to win the auction. In this setting, our goal is to derive an equilibrium bidding strategy for a non-cheating bidder who is aware of the possibility that he is competing against cheating bidders. When bidder values are drawn from the commonly-analyzed uniform distribution, we show the counterintuitive result that the possibility of other bidders cheating has no effect on the equilibrium strategy of an honest bidder. This result is then extended to show the robustness of the equilibrium of a first-price auction without the possibility of cheating. We conclude this section by exploring other distributions, including some in which the presence of cheating bidders actually induces an honest bidder to lower its bid. The rest of the paper is structured as follows. In Section 2 we formalize the setting and present our results for the case of a seller cheating in a second-price auction. Section 3 covers the case of bidders cheating in a first-price auction. In Section 4, we quantify the effects that the possibility of cheating has on an honest seller in the two settings. We discuss related work, including other forms of cheating in auctions, in Section 5, before concluding with Section 6. 5. RELATED WORK Existing work covers another dimension along which we could analyze cheating: altering the perceived value of N. In this paper, we have assumed that N is known by all of the bidders. However, in an online setting this assumption is rather tenuous. For example, a bidder's only source of information about N could be a counter that the seller places on the auction webpage, or a statement by the seller about the number of potential bidders who have indicated that they will participate. In these cases, the seller could arbitrarily manipulate the perceived N. In a first-price auction, the seller obviously has an incentive to increase the perceived value of N in order to induce agents to bid closer to their true valuation. However, if agents are aware that the seller has this power, then any communication about N to the agents is "cheap talk", and furthermore is not credible. Thus, in equilibrium the agents would ignore the declared value of N, and bid according to their own prior beliefs about the number of agents. If we make the natural assumption of a common prior, then the setting reduces to the one tackled by [5], which derived the equilibrium bidding strategies of a first-price auction when the number of bidders is drawn from a known distribution but not revealed to any of the bidders. Of course, instead of assuming that the seller can always exploit this power, we could assume that it can only do so with some probability that is known by the agents. The analysis would then proceed in a manner similar to that of our cheating seller model. The other interesting case of this form of cheating is by bidders in a first-price auction. Bidders would obviously want to decrease the perceived number of agents in order to induce their competition to lower their bids. While it is
unreasonable for bidders to be able to alter the perceived N arbitrarily, collusion provides an opportunity to decrease the perceived N by having only one of a group of colluding agents participate in the auction. Other types of collusion are of course related to the general topic of cheating in auctions. Results on collusion in first and second-price auctions can be found in [8] and [3], respectively. The work most closely related to our first setting is [11], which also presents a model in which the seller may cheat in a second-price auction. In their setting, the seller is a participant in the Bayesian game who decides between running a first-price auction (where profitable cheating is never possible) or a second-price auction. The seller makes this choice after observing his type, which is his probability of having the opportunity and willingness to cheat in a second-price auction. The bidders, who know the distribution from which the seller's type is drawn, then place their bid. It is shown that, in equilibrium, only a seller with the maximum probability of cheating would ever choose to run a second-price auction. Our work differs in that we focus on the agents' strategies in a second-price auction for a given probability of cheating by the seller. An explicit derivation of the equilibrium strategies then allows us to relate first and second-price auctions. An area of related work that can be seen as complementary to ours is that of secure auctions, which takes the point of view of an auction designer. The goals often extend well beyond simply preventing cheating, including properties such as anonymity of the bidders and nonrepudiation of bids. 6. CONCLUSION In this paper we presented the equilibria of sealed-bid auctions in which cheating is possible. In addition to providing strategy profiles that are stable against deviations, these results give us insights into both first and second-price auctions. The results for the case of a cheating seller in a second-price auction allow us to relate the two auctions as endpoints along a continuum. The case of agents cheating in a first-price auction showed the robustness of the first-price auction equilibrium when agent types are drawn from the uniform distribution. Finally, results from both of our settings allowed us to quantify the expected loss in revenue for a seller due to the possibility of cheating (note that we have not considered the costs of the seller). 7. REFERENCES [1] M. Franklin and M. Reiter. The Design and Implementation of a Secure Auction Service. In Proc. IEEE Symp. on Security and Privacy, 1995. [2] D. Fudenberg and J. Tirole. Game Theory. MIT Press, 1991. [3] D. Graham and R. Marshall. Collusive bidder behavior at single-object second-price and English auctions. Journal of Political Economy, 95:579--599, 1987. [4] M. Harkavy, J. D. Tygar, and H. Kikuchi. Electronic auctions with private bids. In Proceedings of the 3rd USENIX Workshop on Electronic Commerce, 1998. [5] R. Harstad, J. Kagel, and D. Levin. Equilibrium bid functions for auctions with an uncertain number of bidders. Economic Letters, 33:35--40, 1990. [6] P. Klemperer. Auction theory: A guide to the literature. Journal of Economic Surveys, 13 (3):227--286, 1999. [7] K. Leyton-Brown, Y. Shoham, and M. Tennenholtz. Bidding clubs in first-price auctions. In AAAI-02. [8] R. McAfee and J. McMillan. Bidding rings. The American Economic Review, 71:579--599, 1992. [9] M. Naor, B. Pinkas, and R. Sumner. Privacy preserving auctions and mechanism design. In EC-99. [10] J. Riley and W. Samuelson. Optimal auctions. American Economic Review, 71 (3):381--392, 1981. [11] M. Rothkopf and R. Harstad. Two models of bid-taker cheating in Vickrey auctions. The Journal of Business, 68 (2):257--267, 1995. [12] W. Vickrey. Counterspeculations, auctions, and competitive sealed tenders. Journal of Finance, 16:15--27, 1961.
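As a companion to the cheating-seller discussion above, the sketch below isolates the mechanical effect of a shill bid in a second-price auction: with some probability the seller inspects the bids and inserts a bid just under the winner's, pushing the payment up toward a first-price outcome. It deliberately ignores the bidders' strategic response derived in Section 2; the epsilon margin and the bidder count are illustrative choices, not values from the paper.

```python
import random

def second_price_revenue(n_bidders, p_cheat, trials=100_000, eps=1e-6):
    """Average seller revenue in a second-price auction with truthful
    bidders (values ~ U[0, 1]) when the seller may insert a shill bid.

    With probability p_cheat the seller looks at the submitted bids and
    adds a fake bid just below the highest one, so the winner pays
    (almost) its own bid.  Otherwise the winner pays the second-highest
    bid as usual.
    """
    honest = cheating = 0.0
    for _ in range(trials):
        bids = sorted(random.random() for _ in range(n_bidders))
        first, second = bids[-1], bids[-2]
        honest += second
        cheating += (first - eps) if random.random() < p_cheat else second
    return honest / trials, cheating / trials

if __name__ == "__main__":
    for p in (0.0, 0.5, 1.0):
        base, rigged = second_price_revenue(n_bidders=5, p_cheat=p)
        print(f"p_cheat={p:.1f}: honest revenue ~ {base:.3f}, "
              f"with shill bids ~ {rigged:.3f}")
```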
C-50
CenWits: A Sensor-Based Loosely Coupled Search and Rescue System Using Witnesses
This paper describes the design, implementation and evaluation of a search and rescue system called CenWits. CenWits uses several small, commonly-available RF-based sensors, and a small number of storage and processing devices. It is designed for search and rescue of people in emergency situations in wilderness areas. A key feature of CenWits is that it does not require a continuously connected sensor network for its operation. It is designed for an intermittently connected network that provides only occasional connectivity. It makes a judicious use of the combined storage capability of sensors to filter, organize and store important information, combined battery power of sensors to ensure that the system remains operational for longer time periods, and intermittent network connectivity to propagate information to a processing center. A prototype of CenWits has been implemented using Berkeley Mica2 motes. The paper describes this implementation and reports on the performance measured from it.
[ "wit", "search and rescu", "emerg situat", "sensor network", "connect network", "intermitt network connect", "pervas comput", "satellit transmitt", "group and partit", "locat track system", "hiker", "gp receiv", "rf transmitt", "beacon" ]
[ "P", "P", "P", "P", "P", "P", "U", "U", "M", "M", "U", "U", "U", "U" ]
CenWits: A Sensor-Based Loosely Coupled Search and Rescue System Using Witnesses Jyh-How Huang Department of Computer Science University of Colorado, Campus Box 0430 Boulder, CO 80309-0430 huangjh@cs.colorado.edu Saqib Amjad Department of Computer Science University of Colorado, Campus Box 0430 Boulder, CO 80309-0430 Saqib.Amjad@colorado.edu Shivakant Mishra Department of Computer Science University of Colorado, Campus Box 0430 Boulder, CO 80309-0430 mishras@cs.colorado.edu ABSTRACT This paper describes the design, implementation and evaluation of a search and rescue system called CenWits. CenWits uses several small, commonly-available RF-based sensors, and a small number of storage and processing devices. It is designed for search and rescue of people in emergency situations in wilderness areas. A key feature of CenWits is that it does not require a continuously connected sensor network for its operation. It is designed for an intermittently connected network that provides only occasional connectivity. It makes a judicious use of the combined storage capability of sensors to filter, organize and store important information, combined battery power of sensors to ensure that the system remains operational for longer time periods, and intermittent network connectivity to propagate information to a processing center. A prototype of CenWits has been implemented using Berkeley Mica2 motes. The paper describes this implementation and reports on the performance measured from it. Categories and Subject Descriptors C.2.4 [Computer-Communication Networks]: Distributed Systems General Terms Algorithms, Design, Experimentation 1. INTRODUCTION Search and rescue of people in emergency situation in a timely manner is an extremely important service. It has been difficult to provide such a service due to lack of timely information needed to determine the current location of a person who may be in an emergency situation. With the emergence of pervasive computing, several systems [12, 19, 1, 5, 6, 4, 11] have been developed over the last few years that make use of small devices such as cell phones, sensors, etc.. All these systems require a connected network via satellites, GSM base stations, or mobile devices. This requirement severely limits their applicability, particularly in remote wilderness areas where maintaining a connected network is very difficult. For example, a GSM transmitter has to be in the range of a base station to transmit. As a result, it cannot operate in most wilderness areas. While a satellite transmitter is the only viable solution in wilderness areas, it is typically expensive and cumbersome. Furthermore, a line of sight is required to transmit to satellite, and that makes it infeasible to stay connected in narrow canyons, large cities with skyscrapers, rain forests, or even when there is a roof or some other obstruction above the transmitter, e.g. in a car. An RF transmitter has a relatively smaller range of transmission. So, while an in-situ sensor is cheap as a single unit, it is expensive to build a large network that can provide connectivity over a large wilderness area. In a mobile environment where sensors are carried by moving people, power-efficient routing is difficult to implement and maintain over a large wilderness area. In fact, building an adhoc sensor network using only the sensors worn by hikers is nearly impossible due to a relatively small number of sensors spread over a large wilderness area. 
In this paper, we describe the design, implementation and evaluation of a search and rescue system called CenWits (Connection-less Sensor-Based Tracking System Using Witnesses). CenWits is comprised of mobile, in-situ sensors that are worn by subjects (people, wild animals, or in-animate objects), access points (AP) that collect information from these sensors, and GPS receivers and location points (LP) that provide location information to the sensors. A subject uses GPS receivers (when it can connect to a satellite) and LPs to determine its current location. The key idea of CenWits is that it uses a concept of witnesses to convey a subject``s movement and location information to the outside world. This averts a need for maintaining a connected network to transmit location information to the outside world. In particular, there is no need for expensive GSM or satellite transmitters, or maintaining an adhoc network of in-situ sensors in CenWits. 180 CenWits employs several important mechanisms to address the key problem of resource constraints (low signal strength, low power and limited memory) in sensors. In particular, it makes a judicious use of the combined storage capability of sensors to filter, organize and store important information, combined battery power of sensors to ensure that the system remains operational for longer time periods, and intermittent network connectivity to propagate information to a processing center. The problem of low signal strengths (short range RF communication) is addressed by avoiding a need for maintaining a connected network. Instead, CenWits propagates the location information of sensors using the concept of witnesses through an intermittently connected network. As a result, this system can be deployed in remote wilderness areas, as well as in large urban areas with skyscrapers and other tall structures. Also, this makes CenWits cost-effective. A subject only needs to wear light-weight and low-cost sensors that have GPS receivers but no expensive GSM or satellite transmitters. Furthermore, since there is no need for a connected sensor network, there is no need to deploy sensors in very large numbers. The problem of limited battery life and limited memory of a sensor is addressed by incorporating the concepts of groups and partitions. Groups and partitions allow sensors to stay in sleep or receive modes most of the time. Using groups and partitions, the location information collected by a sensor can be distributed among several sensors, thereby reducing the amount of memory needed in one sensor to store that information. In fact, CenWits provides an adaptive tradeoff between memory and power consumption of sensors. Each sensor can dynamically adjust its power and memory consumption based on its remaining power or available memory. It has amply been noted that the strength of sensor networks comes from the fact that several sensor nodes can be distributed over a relatively large area to construct a multihop network. This paper demonstrates that important large-scale applications can be built using sensors by judiciously integrating the storage, communication and computation capabilities of sensors. The paper describes important techniques to combine memory, transmission and battery power of many sensors to address resource constraints in the context of a search and rescue application. However, these techniques are quite general. We discuss several other sensor-based applications that can employ these techniques. 
While CenWits addresses the general location tracking and reporting problem in a wide-area network, there are two important differences from the earlier work done in this area. First, unlike earlier location tracking solutions, CenWits does not require a connected network. Second, unlike earlier location tracking solutions, CenWits does not aim for a very high accuracy of localization. Instead, the main goal is to provide an approximate, small area where search and rescue efforts can be concentrated. The rest of this paper is organized as follows. In Section 2, we overview some of the recent projects and technologies related to movement and location tracking, and search and rescue systems. In Section 3, we describe the overall architecture of CenWits, and provide a high-level description of its functionality. In the next section, Section 4, we discuss power and memory management in CenWits. To simplify our presentation, we will focus on a specific application of tracking lost/injured hikers in all these sections. In Section 6, we describe a prototype implementation of CenWits and present performance measured from this implementation. We discuss how the ideas of CenWits can be used to build several other applications in Section 7. Finally, in Section 8, we discuss some related issues and conclude the paper. 2. RELATED WORK A survey of location systems for ubiquitous computing is provided in [11]. A location tracking system for adhoc sensor networks using anchor sensors as reference to gain location information and spread it out to outer node is proposed in [17]. Most location tracking systems in adhoc sensor networks are for benefiting geographic-aware routing. They don``t fit well for our purposes. The well-known active badge system [19] lets a user carry a badge around. An infrared sensor in the room can detect the presence of a badge and determine the location and identification of the person. This is a useful system for indoor environment, where GPS doesn``t work. Locationing using 802.11 devices is probably the cheapest solution for indoor position tracking [8]. Because of the popularity and low cost of 802.11 devices, several business solutions based on this technology have been developed[1]. A system that combines two mature technologies and is viable in suburban area where a user can see clear sky and has GSM cellular reception at the same time is currently available[5]. This system receives GPS signal from a satellite and locates itself, draws location on a map, and sends location information through GSM network to the others who are interested in the user``s location. A very simple system to monitor children consists an RF transmitter and a receiver. The system alarms the holder of the receiver when the transmitter is about to run out of range [6]. Personal Locater Beacons (PLB) has been used for avalanche rescuing for years. A skier carries an RF transmitter that emits beacons periodically, so that a rescue team can find his/her location based on the strength of the RF signal. Luxury version of PLB combines a GPS receiver and a COSPASSARSAT satellite transmitter that can transmit user``s location in latitude and longitude to the rescue team whenever an accident happens [4]. However, the device either is turned on all the time resulting in fast battery drain, or must be turned on after the accident to function. Another related technology in widespread use today is the ONSTAR system [3], typically used in several luxury cars. 
In this system, a GPS unit provides position information, and a powerful transmitter relays that information via satellite to a customer service center. Designed for emergencies, the system can be triggered either by the user with the push of a button, or by a catastrophic accident. Once the system has been triggered, a human representative attempts to gain communication with the user via a cell phone built as an incar device. If contact cannot be made, emergency services are dispatched to the location provided by GPS. Like PLBs, this system has several limitations. First, it is heavy and expensive. It requires a satellite transmitter and a connected network. If connectivity with either the GPS network or a communication satellite cannot be maintained, the system fails. Unfortunately, these are common obstacles encountered in deep canyons, narrow streets in large cities, parking garages, and a number of other places. 181 The Lifetch system uses GPS receiver board combined with a GSM/GPRS transmitter and an RF transmitter in one wireless sensor node called Intelligent Communication Unit (ICU). An ICU first attempts to transmit its location to a control center through GSM/GPRS network. If that fails, it connects with other ICUs (adhoc network) to forward its location information until the information reaches an ICU that has GSM/GPRS reception. This ICU then transmits the location information of the original ICU via the GSM/GPRS network. ZebraNet is a system designed to study the moving patterns of zebras [13]. It utilizes two protocols: History-based protocol and flooding protocol. History-based protocol is used when the zebras are grazing and not moving around too much. While this might be useful for tracking zebras, it``s not suitable for tracking hikers because two hikers are most likely to meet each other only once on a trail. In the flooding protocol, a node dumps its data to a neighbor whenever it finds one and doesn``t delete its own copy until it finds a base station. Without considering routing loops, packet filtering and grouping, the size of data on a node will grow exponentially and drain the power and memory of a sensor node with in a short time. Instead, Cenwits uses a four-phase hand-shake protocol to ensure that a node transmits only as much information as the other node is willing to receive. While ZebraNet is designed for a big group of sensors moving together in the same direction with same speed, Cenwits is designed to be used in the scenario where sensors move in different directions at different speeds. Delay tolerant network architecture addresses some important problems in challenged (resource-constrained) networks [9]. While this work is mainly concerned with interoperability of challenged networks, some problems related to occasionally-connected networks are similar to the ones we have addressed in CenWits. Among all these systems, luxury PLB and Lifetch are designed for location tracking in wilderness areas. However, both of these systems require a connected network. Luxury PLB requires the user to transmit a signal to a satellite, while Lifetch requires connection to GSM/GPRS network. Luxury PLB transmits location information, only when an accident happens. However, if the user is buried in the snow or falls into a deep canyon, there is almost no chance for the signal to go through and be relayed to the rescue team. This is because satellite transmission needs line of sight. 
Furthermore, since there is no known history of user``s location, it is not possible for the rescue team to infer the current location of the user. Another disadvantage of luxury PLB is that a satellite transmitter is very expensive, costing in the range of $750. Lifetch attempts to transmit the location information by GSM/GPRS and adhoc sensor network that uses AODV as the routing protocol. However, having a cellular reception in remote areas in wilderness areas, e.g. American national parks is unlikely. Furthermore, it is extremely unlikely that ICUs worn by hikers will be able to form an adhoc network in a large wilderness area. This is because the hikers are mobile and it is very unlikely to have several ICUs placed dense enough to forward packets even on a very popular hike route. CenWits is designed to address the limitations of systems such as luxury PLB and Lifetch. It is designed to provide hikers, skiers, and climbers who have their activities mainly in wilderness areas a much higher chance to convey their location information to a control center. It is not reliant upon constant connectivity with any communication medium. Rather, it communicates information along from user to user, finally arriving at a control center. Unlike several of the systems discussed so far, it does not require that a user``s unit is constantly turned on. In fact, it can discover a victim``s location, even if the victim``s sensor was off at the time of accident and has remained off since then. CenWits solves one of the greatest problems plaguing modern search and rescue systems: it has an inherent on-site storage capability. This means someone within the network will have access to the last-known-location information of a victim, and perhaps his bearing and speed information as well. Figure 1: Hiker A and Hiker B are are not in the range of each other 3. CENWITS We describe CenWits in the context of locating lost/injured hikers in wilderness areas. Each hiker wears a sensor (MICA2 motes in our prototype) equipped with a GPS receiver and an RF transmitter. Each sensor is assigned a unique ID and maintains its current location based on the signal received by its GPS receiver. It also emits beacons periodically. When any two sensors are in the range of one another, they record the presence of each other (witness information), and also exchange the witness information they recorded earlier. The key idea here is that if two sensors come with in range of each other at any time, they become each other``s witnesses. Later on, if the hiker wearing one of these sensors is lost, the other sensor can convey the last known (witnessed) location of the lost hiker. Furthermore, by exchanging the witness information that each sensor recorded earlier, the witness information is propagated beyond a direct contact between two sensors. To convey witness information to a processing center or to a rescue team, access points are established at well-known locations that the hikers are expected to pass through, e.g. at the trail heads, trail ends, intersection of different trails, scenic view points, resting areas, and so on. Whenever a sensor node is in the vicinity of an access point, all witness information stored in that sensor is automatically dumped to the access point. Access points are connected to a processing center via satellite or some other network1 . The witness information is downloaded to the processing center from various access points at regular intervals. 
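A minimal sketch of the witness mechanism just described: on contact, two nodes record each other and swap the records they already hold, and a node that reaches an access point dumps everything and clears its memory. The class and field names are ours, not part of the CenWits implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Witness:
    node_id: int                      # who was seen
    seen_at: float                    # time of the encounter
    location: Tuple[float, float]     # last known (X, Y) of the recorder
    hop_count: int = 0

@dataclass
class SensorNode:
    node_id: int
    location: Tuple[float, float]
    records: List[Witness] = field(default_factory=list)

    def encounter(self, other: "SensorNode", now: float) -> None:
        """Both nodes become each other's witnesses and swap what they know."""
        mine = [Witness(w.node_id, w.seen_at, w.location, w.hop_count + 1)
                for w in self.records]
        theirs = [Witness(w.node_id, w.seen_at, w.location, w.hop_count + 1)
                  for w in other.records]
        self.records.append(Witness(other.node_id, now, self.location))
        other.records.append(Witness(self.node_id, now, other.location))
        self.records.extend(theirs)
        other.records.extend(mine)

    def dump_to_access_point(self, ap: Dict[int, List[Witness]]) -> None:
        """Upload everything to an AP (keyed by reporting node) and clear memory."""
        ap.setdefault(self.node_id, []).extend(self.records)
        self.records.clear()

# Example: A meets B, then A reaches an access point.
if __name__ == "__main__":
    a = SensorNode(1, (12.0, 7.0))
    b = SensorNode(2, (12.1, 7.0))
    a.encounter(b, now=16.0)
    access_point: Dict[int, List[Witness]] = {}
    a.dump_to_access_point(access_point)
    print(access_point)
```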
In case the connection to an access point is lost, the information from that access point can be downloaded manually, e.g., by UAVs. (A connection is needed only between access points and a processing center. There is no need for any connection between different access points.) To estimate the speed, location and direction of a hiker at any point in time, all witness information of that hiker that has been collected from various access points is processed. Figure 2: Hiker A and Hiker B are in the range of each other. A records the presence of B and B records the presence of A. A and B become each other's witnesses. Figure 3: Hiker A is in the range of an access point. It uploads its recorded witness information and clears its memory. An example of how CenWits operates is illustrated in Figures 1, 2 and 3. First, hikers A and B are on two close trails, but out of range of each other (Figure 1). This is a very common scenario during a hike. For example, on a popular four-hour hike, a hiker might run into as many as 20 other hikers. This accounts for one encounter every 12 minutes on average. A slow hiker can go 1 mile (5,280 feet) per hour. Thus in 12 minutes a slow hiker can go as far as 1056 feet. This implies that if we were to put 20 hikers on a 4-hour, one-way hike evenly, the range of each sensor node should be at least 1056 feet for them to communicate with one another continuously. The signal strength starts dropping rapidly for two Mica2 nodes to communicate with each other when they are 180 feet away, and is completely lost when they are 230 feet away from each other [7]. So, for the sensors to form a sensor network on a 4-hour hiking trail, there should be at least 120 hikers scattered evenly. Clearly, this is extremely unlikely. In fact, on a 4-hour, less-popular hiking trail, one might only run into, say, five other hikers. CenWits takes advantage of the fact that sensors can communicate with one another and record their presence. Given a walking speed of one mile per hour (88 feet per minute) and a Mica2 range of about 150 feet for non-line-of-sight radio transmission, two hikers have about 150/88 = 1.7 minutes to discover the presence of each other and exchange their witness information. We therefore design our system to have each sensor emit a beacon every one-and-a-half minutes. In Figure 2, hiker B's sensor emits a beacon when A is in range; this triggers A to exchange data with B. A communicates the following information to B: My ID is A; I saw C at 1:23 PM at (39° 49.3277655', 105° 39.1126776'), I saw E at 3:09 PM at (40° 49.2234879', 105° 20.3290168'). B then replies with My ID is B; I saw K at 11:20 AM at (39° 51.4531655', 105° 41.6776223'). In addition, A records I saw B at 4:17 PM at (41° 29.3177354', 105° 04.9106211') and B records I saw A at 4:17 PM at (41° 29.3177354', 105° 04.9106211'). B goes on his way to overnight camping while A heads back to the trail head where there is an AP, which emits a beacon every 5 seconds to avoid missing any hiker. A dumps all witness information it has collected to the access point. This is shown in Figure 3. 3.1 Witness Information: Storage A critical concern is that there is a limited amount of memory available on motes (4 KB SDRAM memory, 128 KB flash memory, and 4-512 KB EEPROM). So, it is important to organize witness information efficiently. CenWits stores witness information at each node as a set of witness records (the format is shown in Figure 4).
Figure 4: Format of a witness record: Node ID (1 B), Record Time (3 B), X,Y Location (8 B), Location Time (3 B), Hop Count (1 B). When two nodes i and j encounter each other, each node generates a new witness record. In the witness record generated by i, Node ID is j, Record Time is the current time in i's clock, (X,Y) are the coordinates of the location of i that i recorded most recently (either from satellite or an LP), Location Time is the time when this location was recorded, and Hop Count is 0. Each node is assigned a unique Node Id when it enters a trail. In our current prototype, we have allocated one byte for Node Id, although this can be increased to two or more bytes if a large number of hikers are expected to be present at the same time. We can represent time in 17 bits to a second precision. So, we have allocated 3 bytes each for Record Time and Location Time. The circumference of the Earth is approximately 40,075 KM. If we use a 32-bit number to represent both longitude and latitude, the precision we get is 40,075,000/2^32 = 0.0093 meter = 0.37 inches, which is quite precise for our needs. So, we have allocated 4 bytes each for the X and Y coordinates of the location of a node. In fact, a foot precision can be achieved by using only 27 bits. 3.2 Location Point and Location Inference Although a GPS receiver provides accurate location information, it has its limitations. In canyons and rainy forests, a GPS receiver does not work. When there is a heavy cloud cover, GPS users have experienced inaccuracy in the reported location as well. Unfortunately, a lot of hiking trails are in dense forests and canyons, and it is not uncommon for rain to start after hikers begin hiking. To address this, CenWits incorporates the idea of location points (LP). A location point can update a sensor node with its current location whenever the node is near that LP. LPs are placed at different locations in a wilderness area where GPS receivers don't work. An LP is a very simple device that emits prerecorded location information at some regular time interval. It can be placed in difficult-to-reach places such as deep canyons and dense rain forests by simply dropping it from an airplane. LPs allow a sensor node to determine its current location more accurately. However, they are not an essential requirement of CenWits. If an LP runs out of power, CenWits will continue to work correctly. Figure 5: GPS receiver not working correctly. Sensors then have to rely on an LP to provide coordinates. In Figure 5, B cannot get GPS reception due to bad weather. It then runs into A on the trail, who doesn't have GPS reception either. Their sensors record the presence of each other. After 10 minutes, A is in range of an LP that provides accurate location information to A. When A returns to the trail head and uploads its data (Figure 6), the system can draw a circle centered at the LP from which A fetched location information for the range of the encounter location of A and B. By overlapping this circle with the trail map, two or three possible locations of the encounter can be inferred. Thus, when a rescue is required, the possible location of B can be better inferred (See Figures 7 and 8). Figure 6: A is back at the trail head. It reports the time of its encounter with B to the AP, but no location information. Figure 7: B is still missing after sunset. CenWits infers the last contact point and draws the circle of possible current locations based on the average hiking speed.
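Returning to the record layout of Figure 4, the sketch below shows one way the 16-byte witness record could be packed. The field widths follow the text (1 B node id, 3 B record time, 4 B each for X and Y, 3 B location time, 1 B hop count); the mapping of degrees to a 32-bit fraction of a full circle mirrors the precision argument above but is otherwise our own illustrative encoding, not the prototype's.

```python
import struct

EARTH_CIRCUMFERENCE_M = 40_075_000   # basis of the ~9.3 mm precision argument above

def degrees_to_u32(deg: float) -> int:
    """Map an angle in [0, 360) to a 32-bit integer (illustrative encoding)."""
    return int((deg % 360.0) / 360.0 * 2**32) & 0xFFFFFFFF

def pack_witness(node_id: int, record_time: int, x_deg: float, y_deg: float,
                 location_time: int, hop_count: int) -> bytes:
    """Pack a 16-byte witness record: 1 B id, 3 B record time, 4 B + 4 B
    location, 3 B location time, 1 B hop count (big-endian)."""
    return (struct.pack(">B", node_id)
            + record_time.to_bytes(3, "big")
            + struct.pack(">II", degrees_to_u32(x_deg), degrees_to_u32(y_deg))
            + location_time.to_bytes(3, "big")
            + struct.pack(">B", hop_count))

def unpack_witness(record: bytes):
    node_id = record[0]
    record_time = int.from_bytes(record[1:4], "big")
    x_raw, y_raw = struct.unpack(">II", record[4:12])
    location_time = int.from_bytes(record[12:15], "big")
    hop_count = record[15]
    to_deg = lambda raw: raw / 2**32 * 360.0
    return node_id, record_time, to_deg(x_raw), to_deg(y_raw), location_time, hop_count

if __name__ == "__main__":
    rec = pack_witness(7, 120000, 39.822, 105.652, 119940, 0)
    assert len(rec) == 16
    print(unpack_witness(rec))
```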
CenWits requires that the clocks of different sensor nodes be loosely synchronized with one another. Such a synchronization is trivial when GPS coverage is available. In addition, sensor nodes in CenWits synchronize their clocks whenever they are in the range of an AP or an LP. Figure 8: Based on the overlapping landscape, B might have hiked to the wrong branch and fallen off a cliff. Hot rescue areas can thus be determined. The synchronization accuracy CenWits needs is of the order of a second or so. As long as the clocks are synchronized to within a one-second range, whether A met B at 12:37'45 or 12:37'46 does not matter in the ordering of witness events and inferring the path. 4. MEMORY AND POWER MANAGEMENT CenWits employs several important mechanisms to conserve power and memory. It is important to note that while current sensor nodes have a limited amount of memory, future sensor nodes are expected to have much more memory. With this in mind, the main focus in our design is to provide a tradeoff between the amount of memory available and the amount of power consumption. 4.1 Memory Management The size of the witness information stored at a node can get very large. This is because the node may come across several other nodes during a hike, and may end up accumulating a large amount of witness information over time. To address this problem, CenWits allows a node to pro-actively free up some parts of its memory periodically. This raises an interesting question of when and which witness records should be deleted from the memory of a node. CenWits uses three criteria to determine this: record count, hop count, and record gap. Record count refers to the number of witness records with the same node id that a node has stored in its memory. A node maintains an integer parameter MAX RECORD COUNT. It stores at most MAX RECORD COUNT witness records of any node. Every witness record has a hop count field that stores the number of times (hops) this record has been transferred since being created. Initially this field is set to 0. Whenever a node receives a witness record from another node, it increments the hop count of that record by 1. A node maintains an integer parameter called MAX HOP COUNT. It keeps only those witness records in its memory whose hop count is less than MAX HOP COUNT. The MAX HOP COUNT parameter provides a balance between two conflicting goals: (1) To ensure that a witness record has been propagated to, and thus stored at, as many nodes as possible, so that it has a high probability of being dumped at some AP as quickly as possible; and (2) To ensure that a witness record is stored only at a few nodes, so that it does not clog up too much of the combined memory of all sensor nodes. We chose to use hop count instead of time-to-live to decide when to drop a packet. The main reason for this is that the probability of a packet reaching an AP goes up as the hop count adds up. For example, when the hop count is 5 for a specific record, the record is in at least 5 sensor nodes. On the other hand, if we discard old records without considering hop count, there is no guarantee that the record is present in any other sensor node. Record gap refers to the time difference between the record times of two witness records with the same node id. To save memory, a node n ensures that the record gap between any two witness records with the same node id is at least MIN RECORD GAP.
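One way to apply the three pruning criteria just introduced is sketched below; the retention rule for record gaps (keep the most recent record, then earlier ones spaced at least MIN RECORD GAP apart) follows the description in the next paragraph. The data layout is simplified to the fields the criteria actually use, and the code is ours rather than the CenWits implementation.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

# A stored record, reduced to (node_id, record_time, hop_count) for this sketch.
Record = Tuple[int, int, int]

def prune(records: List[Record], max_record_count: int,
          max_hop_count: int, min_record_gap: int) -> List[Record]:
    """Keep only records with hop count below MAX_HOP_COUNT, then for each
    node id keep the most recent record plus earlier ones spaced at least
    MIN_RECORD_GAP apart, up to MAX_RECORD_COUNT records per node id."""
    by_node: Dict[int, List[Record]] = defaultdict(list)
    for rec in records:
        node_id, _, hop = rec
        if hop < max_hop_count:
            by_node[node_id].append(rec)

    kept: List[Record] = []
    for node_id, recs in by_node.items():
        recs.sort(key=lambda r: r[1], reverse=True)   # most recent first
        selected: List[Record] = []
        last_time = None
        for rec in recs:
            if len(selected) == max_record_count:
                break
            if last_time is None or last_time - rec[1] >= min_record_gap:
                selected.append(rec)
                last_time = rec[1]
        kept.extend(selected)
    return kept

if __name__ == "__main__":
    sample = [(5, 100, 0), (5, 95, 1), (5, 40, 0), (9, 70, 4), (9, 60, 2)]
    print(prune(sample, max_record_count=2, max_hop_count=3, min_record_gap=30))
```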
For each node id i, n stores the witness record with the most recent record time rti, the witness with most recent record time that is at least MIN RECORD GAP time units before rti, and so on until the record count limit (MAX RECORD COUNT) is reached. When a node is tight in memory, it adjusts the three parameters, MAX RECORD COUNT, MAX HOP COUNT and MIN RECORD GAP to free up some memory. It decrements MAX RECORD COUNT and MAX HOP COUNT, and increments MIN RECORD GAP. It then first erases all witness records whose hop count exceeds the reduced MAX HOP COUNT value, and then erases witness records to satisfy the record gap criteria. Also, when a node has extra memory space available, e.g. after dumping its witness information at an access point, it resets MAX RECORD COUNT, MAX HOP COUNT and MIN RECORD GAP to some predefined values. 4.2 Power Management An important advantage of using sensors for tracking purposes is that we can regulate the behavior of a sensor node based on current conditions. For example, we mentioned earlier that a sensor should emit a beacon every 1.7 minute, given a hiking speed of 1 mile/hour. However, if a user is moving 10 feet/sec, a beacon should be emitted every 10 seconds. If a user is not moving at all, a beacon can be emitted every 10 minutes. In the night, a sensor can be put into sleep mode to save energy, when a user is not likely to move at all for a relatively longer period of time. If a user is active for only eight hours in a day, we can put the sensor into sleep mode for the other 16 hours and thus save 2/3rd of the energy. In addition, a sensor node can choose to not send any beacons during some time intervals. For example, suppose hiker A has communicated its witness information to three other hikers in the last five minutes. If it is running low on power, it can go to receive mode or sleep mode for the next ten minutes. It goes to receive mode if it is still willing to receive additional witness information from hikers that it encounters in the next ten minutes. It goes to sleep mode if it is extremely low on power. The bandwidth and energy limitations of sensor nodes require that the amount of data transferred among the nodes be reduced to minimum. It has been observed that in some scenarios 3000 instructions could be executed for the same energy cost of sending a bit 100m by radio [15]. To reduce the amount of data transfer, CenWits employs a handshake protocol that two nodes execute when they encounter one another. The goal of this protocol is to ensure that a node transmits only as much witness information as the other node is willing to receive. This protocol is initiated when a node i receives a beacon containing the node ID of the sender node j and i has not exchanged witness information with j in the last δ time units. Assume that i < j. The protocol consists of four phases (See Figure 9): 1. Phase I: Node i sends its receive constraints and the number of witness records it has in its memory. 2. Phase II: On receiving this message from i, j sends its receive constraints and the number of witness records it has in its memory. 3. Phase III: On receiving the above message from j, i sends its witness information (filtered based on receive constraints received in phase II). 4. Phase IV: After receiving the witness records from i, j sends its witness information (filtered based on receive constraints received in phase I). 
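A compact sketch of this four-phase exchange is given below. The receive constraints are reduced to a record budget and a hop-count cap, matching the prototype's choice of omitting the record gap; the message framing and the hop-count increment on receipt are modeled after the text, but the code is ours rather than the CenWits implementation.

```python
from dataclasses import dataclass
from typing import List, Tuple

Record = Tuple[int, int, int]            # (node_id, record_time, hop_count)

@dataclass
class Constraints:
    max_records: int                     # how many records the node will accept
    max_hop_count: int                   # accept only records below this hop count

def filter_for(records: List[Record], c: Constraints) -> List[Record]:
    """Apply the receiver's constraints before transmitting."""
    eligible = [r for r in records if r[2] < c.max_hop_count]
    return eligible[:c.max_records]

def handshake(store_i: List[Record], cons_i: Constraints,
              store_j: List[Record], cons_j: Constraints) -> None:
    """Four-phase exchange between nodes i and j (i < j)."""
    phase1 = (cons_i, len(store_i))                  # Phase I:  i -> j
    phase2 = (cons_j, len(store_j))                  # Phase II: j -> i
    phase3 = filter_for(store_i, phase2[0])          # Phase III: i -> j
    phase4 = filter_for(store_j, phase1[0])          # Phase IV:  j -> i
    # The receiver increments the hop count of every record it accepts.
    store_j.extend((n, t, h + 1) for n, t, h in phase3)
    store_i.extend((n, t, h + 1) for n, t, h in phase4)

if __name__ == "__main__":
    i_store: List[Record] = [(3, 50, 0), (4, 60, 5)]
    j_store: List[Record] = [(8, 70, 1)]
    handshake(i_store, Constraints(max_records=10, max_hop_count=3),
              j_store, Constraints(max_records=1, max_hop_count=3))
    print(i_store)   # gains (8, 70, 2)
    print(j_store)   # gains (3, 50, 1); (4, 60, 5) was filtered out by hop count
```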
Figure 9: Four-Phase Hand Shake Protocol between nodes i and j (i < j). Node i sends <Constraints, Witness info size>, j replies with <Constraints, Witness info size>, i then sends its <Filtered Witness info>, and j finishes with its <Filtered Witness info>. Receive constraints are a function of memory and power. In the most general case, they are comprised of the three parameters (record count, hop count and record gap) used for memory management. If i is low on memory, it specifies the maximum number of records it is willing to accept from j. Similarly, i can ask j to send only those records that have a hop count value less than MAX HOP COUNT − 1. Finally, i can include its MIN RECORD GAP value in its receive constraints. Note that the handshake protocol is beneficial to both i and j. They save memory by receiving only as much information as they are willing to accept, and conserve energy by sending only as many witness records as needed. It turns out that filtering witness records based on MIN RECORD GAP is complex. It requires that the witness records of any given node be arranged in an order sorted by their record time values. Maintaining this sorted order is complex in memory, because new witness records with the same node id can arrive later and may have to be inserted in between to preserve the sorted order. For this reason, the receive constraints in the current CenWits prototype do not include record gap. Suppose i specifies a hop count value of 3. In this case, j checks the hop count field of every witness record before sending them. If the hop count value is greater than 3, the record is not transmitted. 4.3 Groups and Partitions To further reduce communication and increase the lifetime of our system, we introduce the notion of groups. The idea is based on the concept of abstract regions presented in [20]. A group is a set of n nodes that can be defined in terms of radio connectivity, geographic location, or other properties of nodes. All nodes within a group can communicate directly with one another and they share information to maintain their view of the external world. At any point in time, a group has exactly one leader that communicates
The group leadership is time-multiplexed among the group members. This is done to make sure that a single node does not run out of battery due to continuous exchange of information. Thus after every t seconds, the leadership is passed on to another node, called the successor, and the leader (now an ordinary member) is put to sleep. Since energy is dear, we do not implement an extensive election algorithm for choosing the successor. Instead, we choose the successor on the basis of node id. The node with the next highest id in the group is chosen as the successor. The last node, of course, chooses the node with the lowest id as its successor. We now discuss the data storage schemes for groups. Memory is a scarce resource in sensor nodes and it is therefore important that witness information be stored efficiently among group members. Efficient data storage is not a trivial task when it comes to groups. The tradeoff is between simplicity of the scheme and memory savings. A simpler scheme incurs lesser energy cost as compared to a more sophisticated scheme, but offers lesser memory savings as well. This is because in a more complicated scheme, the group members have to coordinate to update and store information. After considering a number of different schemes, we have come to a conclusion that there is no optimal storage scheme for groups. The system should be able to adapt according to its requirements. If group members are low on battery, then the group can adapt a scheme that is more energy efficient. Similarly, if the group members are running out of memory, they can adapt a scheme that is more memory efficient. We first present a simple scheme that is very energy efficient but does not offer significant memory savings. We then present an alternate scheme that is much more memory efficient. As already mentioned a group can receive information only through the group leader. Whenever the leader comes across an external node e, it receives information from that node and saves it. In our first scheme, when the timeslot for the leader expires, the leader passes this new information it received from e to its successor. This is important because during the next time slot, if the new leader comes across another external node, it should be able to pass information about all the external nodes this group has witnessed so far. Thus the information is fully replicated on all nodes to maintain the correct view of the world. Our first scheme does not offer any memory savings but is highly energy efficient and may be a good choice when the group members are running low on battery. Except for the time when the leadership is switched, all n − 1 members are asleep at any given time. This means that a single member is up for t seconds once every n∗t seconds and therefore has to spend approximately only 1/nth of its energy. Thus, if there are 5 members in a group, we save 80% energy, which is huge. More energy can be saved by increasing the group size. We now present an alternate data storage scheme that aims at saving memory at the cost of energy. In this scheme we divide the group into what we call partitions. Partitions can be thought of as subgroups within a group. Each partition must have at least two nodes in it. The nodes within a partition are called peers. Each partition has one peer designated as partition leader. The partition leader stays in receive mode at all times, while all others peers a partition stay in the sleep mode. 
Partition leadership is time-multiplexed among the peers to make sure that a single node does not run out of battery. Like before, a group has exactly one leader and the leadership is time-multiplexed among partitions. The group leader also serves as the partition leader for the partition it belongs to (See Figure 11). In this scheme, all partition leaders participate in information exchange. Whenever a group comes across an external node e, every partition leader receives all witness information, but it only stores a subset of that information after filtering. Information is filtered in such a way that each partition leader has to store only B/K bytes of data, where K is the number of partitions and B is the total number of bytes received from e. Similarly when a group wants to send witness information to e, each partition leader sends only B/K bytes that are stored in the partition it belongs to. However, before a partition leader can send information, it must switch from receive mode to send mode. Also, partition leaders must coordinate with one another to ensure that they do not send their witness information at the same time, i.e. their message do not collide. All this is achieved by having the group leader send a signal to every partition leader in turn. 186 Figure 11: The figure shows a group of eight nodes divided into four partitions of 2 nodes each. Node 1 is the group leader whereas nodes 2, 9, and 7 are partition leaders. All other nodes are in the sleep mode. Since the partition leadership is time-multiplexed, it is important that any information received by the partition leader, p1, be passed on to the next leader, p2. This has to be done to make sure that p2 has all the information that it might need to send when it comes across another external node during its timeslot. One way of achieving this is to wake p2 up just before p1``s timeslot expires and then have p1 transfer information only to p2. An alternate is to wake all the peers up at the time of leadership change, and then have p1 broadcast the information to all peers. Each peer saves the information sent by p1 and then goes back to sleep. In both cases, the peers send acknowledgement to the partition leader after receiving the information. In the former method, only one node needs to wake up at the time of leadership change, but the amount of information that has to be transmitted between the nodes increases as time passes. In the latter case, all nodes have to be woken up at the time of leadership change, but small piece of information has to be transmitted each time among the peers. Since communication is much more expensive than bringing the nodes up, we prefer the second method over the first one. A group can be divided into partitions in more than one way. For example, suppose we have a group of six members. We can divide this group into three partitions of two peers each, or two partitions with three peers each. The choice once again depends on the requirements of the system. A few big partitions will make the system more energy efficient. This is because in this configuration, a greater number of nodes will stay in sleep mode at any given point in time. On the other hand, several small partitions will make the system memory efficient, since each node will have to store lesser information (See Figure 12). A group that is divided into partitions must be able to readjust itself when a node leaves or runs out of battery. 
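The partitioned storage scheme described above can be sketched as two small routines: one that splits an incoming batch of records across the K partition leaders so that each stores roughly B/K of it, and one that rotates leadership to the node with the next-highest id. The modulo assignment rule is our illustrative choice; any filtering rule the leaders agree on would work. The readjustment protocol for failed peers follows in the next paragraph.

```python
from typing import Dict, List, Sequence, Tuple

Record = Tuple[int, int, int]      # (node_id, record_time, hop_count)

def split_across_partitions(records: Sequence[Record],
                            partition_ids: Sequence[int]) -> Dict[int, List[Record]]:
    """Distribute an incoming batch so each partition leader stores roughly
    B/K of it; here records are assigned by node id modulo K."""
    k = len(partition_ids)
    shares: Dict[int, List[Record]] = {pid: [] for pid in partition_ids}
    for rec in records:
        shares[partition_ids[rec[0] % k]].append(rec)
    return shares

def next_leader(member_ids: Sequence[int], current: int) -> int:
    """Rotate leadership to the next-highest node id, wrapping to the lowest."""
    ordered = sorted(member_ids)
    idx = ordered.index(current)
    return ordered[(idx + 1) % len(ordered)]

if __name__ == "__main__":
    incoming = [(11, 300, 0), (12, 310, 1), (13, 320, 0), (14, 330, 2)]
    print(split_across_partitions(incoming, partition_ids=[1, 9, 7]))
    print(next_leader([1, 2, 3, 5, 6, 9], current=9))   # wraps around to node 1
```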
This readjustment is crucial because a partition must have at least two nodes at any point in time to tolerate the failure of one node. For example, in Figure 12(a), if node 2 or node 5 dies, the partition is left with only one node. Later on, if that single node in the partition dies, all witness information stored in that partition will be lost. We have devised a very simple protocol to solve this problem. Figure 12: The figure shows two different ways of partitioning a group of six nodes. In (a), a group is divided into three partitions of two nodes. Node 1 is the group leader, nodes 9 and 5 are partition leaders, and nodes 2, 3, and 6 are in sleep mode. In (b), the group is divided into two partitions of three nodes. Node 1 is the group leader, node 9 is the partition leader and nodes 2, 3, 5, and 6 are in sleep mode. We first explain how partitions are adjusted when a peer dies, and then explain what happens if a partition leader dies. Suppose node 2 in Figure 12(a) dies. When node 5, the partition leader, sends information to node 2, it does not receive an acknowledgement from it and concludes that node 2 has died. (The algorithm to conclude that a node has died can be made more rigorous by having the partition leader query the suspected node a few times.) At this point, node 5 contacts the other partition leaders (nodes 1 and 9) using a broadcast message and informs them that one of its peers has died. Upon hearing this, each partition leader informs node 5 of (i) the number of nodes in its partition, (ii) a candidate node that node 5 can take if the number of nodes in its partition is greater than 2, and (iii) the amount of witness information stored in its partition. Upon hearing from every leader, node 5 chooses the candidate node from the partition with the maximum number of peers (which must be greater than 2), and sends a message back to all leaders. Node 5 then sends data to its new peer to make sure that the information is replicated within the partition. However, if all partitions have exactly two nodes, then node 5 must join another partition. It chooses to join the partition that has the least amount of witness information. It sends its witness information to the new partition leader. Witness information and membership updates are propagated to all peers during the next partition leadership change. We now consider the case where the partition leader dies. If this happens, then we wait for the partition leadership to change and for the new partition leader to eventually find out that a peer has died. Once the new partition leader finds out that it needs more peers, it proceeds with the protocol explained above. However, in this case, we do lose information that the previous partition leader might have received just before it died. This problem can be solved by implementing a more rigorous protocol, but we have decided to give up some accuracy to save energy. Our current design uses time-division multiplexing to schedule wakeup and sleep modes in the sensor nodes. However, recent work on radio wakeup sensors [10] can be used to do this scheduling more efficiently. We plan to incorporate radio wakeup sensors in CenWits when the hardware is mature. 5. SYSTEM EVALUATION A sensor is constrained in the amount of memory and power. In general, the amount of memory needed and power consumption depend on a variety of factors such as node density, the number of hiker encounters, and the number of access points.
5. SYSTEM EVALUATION

A sensor is constrained in the amount of memory and power. In general, the amount of memory needed and the power consumption depend on a variety of factors such as node density, the number of hiker encounters, and the number of access points. In this section, we provide an estimate of how long the power of a MICA2 mote will last under certain assumptions.

First, we assume that each sensor node carries about 100 witness records. On encountering another hiker, a sensor node transmits 50 witness records and receives 50 new witness records. Since each record is 16 bytes long, it takes 0.34 seconds to transmit 50 records and another 0.34 seconds to receive 50 records over a 19200 bps link. The current draw of a MICA2 due to CPU processing, transmission and reception is approximately 8.0 mA, 8.5 mA and 7.0 mA respectively [18], and the capacity of an alkaline battery is 2500 mAh. Since the radio module of the Mica2 is half-duplex and assuming that the CPU is always active when a node is awake, the current draw during transmission is 8 + 8.5 = 16.5 mA and during reception is 8 + 7 = 15 mA. So, the average current draw due to transmission and reception is (16.5 + 15)/2 = 15.75 mA. Given that the capacity of an alkaline battery is 2500 mAh, a battery should last for 2500/15.75 = 159 hours of transmission and reception. An encounter between two hikers results in an exchange of about 50 witness records in each direction, which takes about 0.68 seconds as calculated above. Thus, a single alkaline battery can last for (159 ∗ 60 ∗ 60)/0.68 = 841764 hiker encounters. Assuming that a node emits a beacon every 90 seconds and a hiker encounter occurs every time a beacon is emitted (worst-case scenario), a single alkaline battery will last for (841764 ∗ 90)/(30 ∗ 24 ∗ 60 ∗ 60) = 29 days. Since a Mica2 is equipped with two batteries, a Mica2 sensor can remain operational for about two months. Notice that this calculation is preliminary, because it assumes that hikers are active 24 hours a day and a hiker encounter occurs every 90 seconds. In a more realistic scenario, power is expected to last for a much longer time period. Also, this time period will increase significantly when groups of hikers are moving together. Finally, the lifetime of a sensor running on two batteries can be increased significantly by using energy scavenging and energy harvesting techniques [16, 14].
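The estimate above can be re-derived with a few lines of Python. The constants come from the text; the script only reproduces the intermediate arithmetic, so its encounter count differs slightly from the quoted 841,764 because the text rounds to 159 hours and 0.68 seconds before dividing.

```python
# Reproduces the battery-life estimate from Section 5 using the paper's constants.

RECORD_BYTES = 16           # size of one witness record
RECORDS_PER_EXCHANGE = 50   # records sent (and received) per encounter
LINK_BPS = 19200            # radio link speed
CPU_MA, TX_MA, RX_MA = 8.0, 8.5, 7.0   # current draw in mA
BATTERY_MAH = 2500          # one alkaline battery

tx_time = RECORD_BYTES * 8 * RECORDS_PER_EXCHANGE / LINK_BPS   # ~0.33 s one way
exchange_time = 2 * tx_time                                    # ~0.67 s per encounter

avg_radio_ma = ((CPU_MA + TX_MA) + (CPU_MA + RX_MA)) / 2       # 15.75 mA
radio_hours = BATTERY_MAH / avg_radio_ma                       # ~159 h of radio activity
encounters = radio_hours * 3600 / exchange_time                # ~857,000 encounters

print(f"exchange time: {exchange_time:.2f} s")
print(f"radio-active lifetime: {radio_hours:.0f} h")
print(f"worst-case encounters per battery: {encounters:,.0f}")
```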
6. PROTOTYPE IMPLEMENTATION

We have implemented a prototype of CenWits on 900 MHz MICA2 sensors running Mantis OS 0.9.1b. One of the sensors is equipped with an MTS420CA GPS module, which is capable of barometric pressure and two-axis acceleration sensing in addition to GPS location tracking. We use SiRF, the serial communication protocol, to control the GPS module. SiRF has a rich command set, but we record only X and Y coordinates. A witness record is 16 bytes long. When a node starts up, it stores its current location and emits a beacon periodically; in the prototype, a node emits a beacon every minute. We have conducted a number of experiments with this prototype. A detailed report on these experiments, with the raw data collected and photographs of hikers, access points, etc., is available at http://csel.cs.colorado.edu/~huangjh/Cenwits.index.htm. Here we report results from three of them. In all these experiments, there are three access points (A, B and C) where nodes dump their witness information. These access points also provide location information to the nodes that come within their range. We first show how CenWits can be used to determine the hiking trail a hiker is most likely on and the speed at which he is hiking, and to identify hot search areas in case he is reported missing. Next, we show the results of the power and memory management techniques of CenWits in conserving power and memory of a sensor node in one of our experiments.

6.1 Locating Lost Hikers

The first experiment is called Direct Contact. It is a very simple experiment in which a single hiker starts from A, goes to B and then C, and finally returns to A (see Figure 13). The goal of this experiment is to illustrate that CenWits can deduce the trail a hiker takes by processing witness information.

Figure 13: Direct Contact Experiment

Table 1: Witness information collected in the direct contact experiment.

  Node Id   Record Time   (X,Y) Location   Location Time   Hop Count
  1         15            (12,7)           15              0
  1         33            (31,17)          33              0
  1         46            (12,23)          46              0
  1         10            (12,7)           10              0
  1         48            (12,23)          48              0
  1         16            (12,7)           16              0
  1         34            (31,17)          34              0

The witness information dumped at the three access points was then collected and processed at a control center. Part of the witness information collected at the control center is shown in Table 1. The (X,Y) locations in this table correspond to the location information provided by access points A, B, and C. A is located at (12,7), B is located at (31,17) and C is located at (12,23). Three encounter points (between hiker 1 and the three access points) extracted from this witness information are shown in Figure 13 (shown in rectangular boxes). For example, "A,1 at 16" means that hiker 1 came in contact with A at time 16. Using this information, we can infer the direction in which hiker 1 was moving and the speed at which he was moving. Furthermore, given a map of hiking trails in this area, it is clearly possible to identify the hiking trail that hiker 1 took.

The second experiment is called Indirect Inference. This experiment is designed to illustrate that the location, direction and speed of a hiker can be inferred by CenWits, even if the hiker never comes in the range of any access point. It illustrates the importance of witness information in search and rescue applications. In this experiment, there are three hikers, 1, 2 and 3. Hiker 1 takes a trail that goes along access points A and B, while hiker 3 takes a trail that goes along access points C and B. Hiker 2 takes a trail that does not come in the range of any access point. However, this hiker meets hikers 1 and 3 during his hike. This is illustrated in Figure 14.

Figure 14: Indirect Inference Experiment

Table 2: Witness information collected from hiker 1 in the indirect inference experiment.

  Node Id   Record Time   (X,Y) Location   Location Time   Hop Count
  2         16            (12,7)           6               0
  2         15            (12,7)           6               0
  1         4             (12,7)           4               0
  1         6             (12,7)           6               0
  1         29            (31,17)          29              0
  1         31            (31,17)          31              0

Table 3: Witness information collected from hiker 3 in the indirect inference experiment.

  Node Id   Record Time   (X,Y) Location   Location Time   Hop Count
  3         78            (12,23)          78              0
  3         107           (31,17)          107             0
  3         106           (31,17)          106             0
  3         76            (12,23)          76              0
  3         79            (12,23)          79              0
  2         94            (12,23)          79              0
  1         16            (?,?)            ?               1
  1         15            (?,?)            ?               1

Part of the witness information collected at the control center from access points A, B and C is shown in Tables 2 and 3. There are some interesting data in these tables. For example, the location time in some witness records is not the same as the record time. This means that the node that generated that record did not have its most up-to-date location at the encounter time. For example, when hikers 1 and 2 meet at time 16, the last recorded location of hiker 1 is (12,7), recorded at time 6. So, node 1 generates a witness record with record time 16, location (12,7) and location time 6. In fact, the last two records in Table 3 have (?,?) as their location. This has happened because these witness records were generated by hiker 2 during his encounters with hiker 1 at times 15 and 16; until this time, hiker 2 had not come in contact with any location points. Interestingly, a more accurate location of the encounter between 1 and 2, or between 2 and 3, can be computed by processing the witness information at the control center. It took 25 units of time for hiker 1 to go from A (12,7) to B (31,17). Assuming a constant hiking speed and a relatively straight-line hike, it can be computed that at time 16, hiker 1 must have been at location (18,10). Thus (18,10) is a more accurate estimate of the location of the encounter between 1 and 2.
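The refinement just described is a straight-line, constant-speed interpolation between two known fixes of the same hiker. A minimal sketch of that computation is shown below; the endpoint times are taken from Table 2, and since the result depends on which contact times are used as endpoints, it only approximates the (18,10) figure quoted above.

```python
# Straight-line, constant-speed interpolation of an encounter location
# between two known (time, position) fixes of the same hiker.

def interpolate(fix_a, fix_b, t):
    """fix_a, fix_b are (time, (x, y)); returns the estimated (x, y) at time t."""
    (ta, (xa, ya)), (tb, (xb, yb)) = fix_a, fix_b
    f = (t - ta) / (tb - ta)
    return (xa + f * (xb - xa), ya + f * (yb - ya))

# Hiker 1 was at A (12,7) around time 6 and at B (31,17) around time 31.
# His encounter with hiker 2 was recorded at time 16.
print(interpolate((6, (12, 7)), (31, (31, 17)), 16))   # roughly (19.6, 11.0)
```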
Finally, our third experiment, called Identifying Hot Search Areas, is designed to determine the trail a hiker has taken and identify hot search areas for rescue after he is reported missing. There are six hikers (1, 2, 3, 4, 5 and 6) in this experiment. Figure 15 shows the trails that hikers 1, 2, 3, 4 and 5 took, along with the encounter points obtained from witness records collected at the control center. For brevity, we have not shown the entire witness information collected at the control center. This information is available at http://csel.cs.colorado.edu/~huangjh/Cenwits/index.htm.

Figure 15: Identifying Hot Search Area Experiment (without hiker 6)

Now suppose hiker 6 is reported missing at time 260. To determine the hot search areas, the witness records of hiker 6 are processed to determine the trail he is most likely on, the speed at which he had been moving, the direction in which he had been moving, and his last known location. Based on this information and the hiking trail map, hot search areas are identified. The hiking trail taken by hiker 6 as inferred by CenWits is shown by a dotted line, and the hot search areas identified by CenWits are shown by dark lines inside the dotted circle in Figure 16.

Figure 16: Identifying Hot Search Area Experiment (with hiker 6)

6.2 Results of Power and Memory Management

The witness information shown in Tables 1, 2 and 3 has not been filtered using the three criteria described in Section 4.1. For example, the witness records generated by hiker 3 at record times 76, 78 and 79 (see Table 3) have all been generated due to a single contact between access point C and node 3. By applying the record gap criterion, two of these three records will be erased. Similarly, the witness records generated by hiker 1 at record times 10, 15 and 16 (see Table 1) have all been generated due to a single contact between access point A and node 1. Again, by applying the record gap criterion, two of these three records will be erased. Our experiments did not generate enough data to test the impact of the record count or hop count criteria. To evaluate the impact of these criteria, we simulated CenWits to generate a large number of witness records for a given number of hikers and access points. We generated witness records by having the hikers walk randomly, and applied the three criteria to measure the amount of memory saved in a sensor node. The results are shown in Table 4. The number of hikers in this simulation was 10 and the number of access points was 5. The number of witness records reported in this table is the average number of witness records a sensor node stored at the time of a dump to an access point.

Table 4: Impact of memory management techniques.

  MAX RECORD COUNT   MIN RECORD GAP   MAX HOP COUNT   # of Witness Records
  5                  5                5               628
  4                  5                5               421
  3                  5                5               316
  5                  10               5               311
  5                  20               5               207
  5                  5                4               462
  5                  5                3               341
  3                  20               3               161

These results show that the three memory management criteria significantly reduce the memory consumption of sensor nodes in CenWits. For example, they can reduce the memory consumption by up to 75%. However, these results are preliminary for two reasons: (1) they are generated via simulation of hikers walking at random; and (2) it is not clear what impact the erasing of witness records has on the accuracy of the inferred location/hot search areas of lost hikers. In our future work, we plan to undertake a major study to address these two concerns.
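For reference, a minimal sketch of how the three criteria from Section 4.1 could be applied to a node's stored records is shown below. Field names and the exact pruning order are illustrative assumptions; only the three thresholds themselves come from the paper.

```python
# Sketch of the three memory-management criteria applied to stored witness
# records: drop records whose hop count is too high, then, per witnessed node,
# keep the most recent records that are at least MIN_RECORD_GAP apart, up to
# MAX_RECORD_COUNT of them. Field names are illustrative.

MAX_RECORD_COUNT = 3
MIN_RECORD_GAP = 20
MAX_HOP_COUNT = 3

def prune(records):
    kept = []
    by_node = {}
    for r in records:
        if r["hops"] < MAX_HOP_COUNT:            # hop count criterion
            by_node.setdefault(r["node_id"], []).append(r)
    for recs in by_node.values():
        recs.sort(key=lambda r: r["record_time"], reverse=True)
        chosen, last_time = [], None
        for r in recs:
            if last_time is None or last_time - r["record_time"] >= MIN_RECORD_GAP:
                chosen.append(r)                 # record gap criterion
                last_time = r["record_time"]
            if len(chosen) == MAX_RECORD_COUNT:  # record count criterion
                break
        kept.extend(chosen)
    return kept

# Records from Table 3 for node 3: times 76, 78, 79, 106, 107 (hop count 0).
recs = [{"node_id": 3, "record_time": t, "hops": 0} for t in (76, 78, 79, 106, 107)]
print(sorted(r["record_time"] for r in prune(recs)))   # [79, 107]
```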
7. OTHER APPLICATIONS

In addition to hiking in wilderness areas, CenWits can be used in several other applications, e.g. skiing, climbing, wildlife monitoring, and person tracking. Since CenWits relies only on intermittent connectivity, it can take advantage of existing cheap and mature technologies, and thereby make tracking cheaper and fairly accurate. Since CenWits does not rely on keeping track of a sensor holder at all times, but instead relies on maintaining witnesses, the system is relatively cheap and widely applicable. For example, there are some dangerous cliffs in most ski resorts, but it is too expensive for a ski resort to deploy a connected wireless sensor network throughout the mountain. Using CenWits, we can deploy some sensors at the cliff boundaries. These boundary sensors emit beacons quite frequently, e.g. every second, and so can record the presence of skiers who cross the boundary and fall off the cliff. Ski patrols can cruise the mountains every hour and automatically query the boundary sensors when in range using PDAs. If a PDA shows that a skier has been close to a boundary sensor, the ski patrol can use a long-range walkie-talkie to query the control center at the resort base and check the witness records of that skier. If there is no witness record after the time recorded in the boundary sensor, there is a high chance that a rescue is needed.

In wildlife monitoring, a very popular method is to attach a GPS receiver to the animals. To collect data, either a satellite transmitter is used, or the data collector has to wait until the GPS receiver brace falls off (after a year or so) and then search for the GPS receiver. GPS transmitters are very expensive, e.g. the one used in geese tracking is $3,000 each [2]. Also, it is not yet known if a continuous radio signal is harmful to the birds. Furthermore, a GPS transmitter is quite bulky and uncomfortable, and as a result, birds always try to get rid of it. Using CenWits, not only can we record the presence of wildlife, we can also record the behavior of wild animals, e.g. lions might follow the migration of deer. CenWits does not require any bulky and expensive satellite transmitters, nor is there a need to wait for a year and search for the braces. CenWits provides a very simple and cost-effective solution in this case. Also, access points can be strategically located, e.g. near a water source, to increase the chances of collecting up-to-date data. In fact, the access points need not be statically located. They can be placed in a low-altitude plane (e.g. a UAV) and flown over a wilderness area to collect data from wildlife.

In large cities, CenWits can be used to complement GPS, since GPS does not work indoors and near skyscrapers. If a person A is reported missing, and from the witness records we find that his last contacts were C and D, we can trace an approximate location quickly and quite efficiently.
8. DISCUSSION AND FUTURE WORK

This paper presents a new search and rescue system called CenWits that has several advantages over current search and rescue systems. These advantages include a loosely-coupled design that relies only on intermittent network connectivity, power and storage efficiency, and low cost. It solves one of the greatest problems plaguing modern search and rescue systems: it has an inherent on-site storage capability. This means someone within the network will have access to the last-known-location information of a victim, and perhaps his bearing and speed information as well. It utilizes the concept of witnesses to propagate information, infer the current possible location and speed of a subject, and identify hot search and rescue areas in case of emergencies.

A large part of the CenWits design focuses on addressing the power and memory limitations of current sensor nodes. In fact, power and memory constraints depend on how much weight (of a sensor node) a hiker is willing to carry and the cost of these sensors. An important goal of CenWits is to build small chips that can be implanted in hiking boots or ski jackets. This goal is similar to the avalanche beacons that are currently implanted in ski jackets. We anticipate that power and memory will continue to be constrained in such an environment. While the paper focuses on the development of a search and rescue system, it also provides some innovative, system-level ideas for information processing in a sensor network system.

We have developed and experimented with a basic prototype of CenWits at present. Future work includes developing a more mature prototype addressing important issues such as security, privacy, and high availability. There are several pressing concerns regarding security, privacy, and high availability in CenWits. For example, an adversary can sniff the witness information to locate endangered animals, women, children, etc. He may inject false information into the system. An individual may not be comfortable with providing his/her location and movement information, even though he/she is definitely interested in being located in a timely manner at the time of an emergency. In general, people in the hiking community are friendly and usually trustworthy, so bullet-proof security is not really required. However, when CenWits is used in the context of other applications, security requirements may change. Also, since the sensor nodes used in CenWits are fragile, they can fail. In fact, the nature and level of security, privacy and high availability support needed in CenWits strongly depends on the application for which it is being used and the individual subjects involved. Accordingly, we plan to design multi-level support for security, privacy and high availability in CenWits.

So far, we have experimented with CenWits in a very restricted environment with a small number of sensors. Our next goal is to deploy this system in a much larger and more realistic environment. In particular, discussions are currently underway to deploy CenWits in the Rocky Mountain and Yosemite National Parks.

9. REFERENCES

[1] 802.11-based tracking system. http://www.pangonetworks.com/locator.htm.
[2] Brent geese 2002. http://www.wwt.org.uk/brent/.
[3] The OnStar system. http://www.onstar.com.
[4] Personal locator beacons with GPS receiver and satellite transmitter. http://www.aeromedix.com/.
[5] Personal tracking using GPS and GSM system. http://www.ulocate.com/trimtrac.html.
[6] RF based kid tracking system. http://www.ion-kids.com/.
[7] F. Alessio. Performance measurements with motes technology. In MSWiM'04, 2004.
[8] P. Bahl and V. N. Padmanabhan. RADAR: An in-building RF-based user location and tracking system. In IEEE Infocom, 2000.
[9] K. Fall. A delay-tolerant network architecture for challenged internets. In SIGCOMM, 2003.
[10] L. Gu and J. Stankovic. Radio-triggered wake-up capability for sensor networks. In Real-Time Applications Symposium, 2004.
[11] J. Hightower and G. Borriello. Location systems for ubiquitous computing. IEEE Computer, 2001.
[12] W. Jaskowski, K. Jedrzejek, B. Nyczkowski, and S. Skowronek. Lifetch life saving system. CSIDC, 2004.
[13] P. Juang, H. Oki, Y. Wang, M. Martonosi, L. Peh, and D. Rubenstein. Energy-efficient computing for wildlife tracking: design tradeoffs and early experiences with ZebraNet. In ASPLOS, 2002.
[14] K. Kansal and M. Srivastava. Energy harvesting aware power management. In Wireless Sensor Networks: A Systems Perspective, 2005.
[15] G. J. Pottie and W. J. Kaiser. Embedding the internet: wireless integrated network sensors. Communications of the ACM, 43(5), May 2000.
[16] S. Roundy, P. K. Wright, and J. Rabaey. A study of low-level vibrations as a power source for wireless sensor networks. Computer Communications, 26(11), 2003.
[17] C. Savarese, J. M. Rabaey, and J. Beutel. Locationing in distributed ad-hoc wireless sensor networks. In ICASSP, 2001.
[18] V. Shnayder, M. Hempstead, B. Chen, G. Allen, and M. Welsh. Simulating the power consumption of large-scale sensor network applications. In Sensys, 2004.
[19] R. Want and A. Hopper. Active badges and personal interactive computing objects. IEEE Transactions on Consumer Electronics, 1992.
[20] M. Welsh and G. Mainland. Programming sensor networks using abstract regions. In First USENIX/ACM Symposium on Networked Systems Design and Implementation (NSDI '04), 2004.
Efficient data storage is not a trivial task when it comes to groups. The tradeoff is between simplicity of the scheme and memory savings. A simpler scheme incurs lesser energy cost as compared to a more sophisticated scheme, but offers lesser memory savings as well. This is because in a more complicated scheme, the group members have to coordinate to update and store information. After considering a number of different schemes, we have come to a conclusion that there is no optimal storage scheme for groups. The system should be able to adapt according to its requirements. If group members are low on battery, then the group can adapt a scheme that is more energy efficient. Similarly, if the group members are running out of memory, they can adapt a scheme that is more memory efficient. We first present a simple scheme that is very energy efficient but does not offer significant memory savings. We then present an alternate scheme that is much more memory efficient. As already mentioned a group can receive information only through the group leader. Whenever the leader comes across an external node e, it receives information from that node and saves it. In our first scheme, when the timeslot for the leader expires, the leader passes this new information it received from e to its successor. This is important because during the next time slot, if the new leader comes across another external node, it should be able to pass information about all the external nodes this group has witnessed so far. Thus the information is fully replicated on all nodes to maintain the correct view of the world. Our first scheme does not offer any memory savings but is highly energy efficient and may be a good choice when the group members are running low on battery. Except for the time when the leadership is switched, all n − 1 members are asleep at any given time. This means that a single member is up for t seconds once every n * t seconds and therefore has to spend approximately only 1/nth of its energy. Thus, if there are 5 members in a group, we save 80% energy, which is huge. More energy can be saved by increasing the group size. We now present an alternate data storage scheme that aims at saving memory at the cost of energy. In this scheme we divide the group into what we call partitions. Partitions can be thought of as subgroups within a group. Each partition must have at least two nodes in it. The nodes within a partition are called peers. Each partition has one peer designated as partition leader. The partition leader stays in receive mode at all times, while all others peers a partition stay in the sleep mode. Partition leadership is time-multiplexed among the peers to make sure that a single node does not run out of battery. Like before, a group has exactly one leader and the leadership is time-multiplexed among partitions. The group leader also serves as the partition leader for the partition it belongs to (See Figure 11). In this scheme, all partition leaders participate in information exchange. Whenever a group comes across an external node e, every partition leader receives all witness information, but it only stores a subset of that information after filtering. Information is filtered in such a way that each partition leader has to store only B/K bytes of data, where K is the number of partitions and B is the total number of bytes received from e. Similarly when a group wants to send witness information to e, each partition leader sends only B/K bytes that are stored in the partition it belongs to. 
However, before a partition leader can send information, it must switch from receive mode to send mode. Also, partition leaders must coordinate with one another to ensure that they do not send their witness information at the same time, i.e., their messages do not collide. All this is achieved by having the group leader send a signal to every partition leader in turn. Figure 11: The figure shows a group of eight nodes divided into four partitions of two nodes each. Node 1 is the group leader, whereas nodes 2, 9, and 7 are partition leaders. All other nodes are in sleep mode. Since the partition leadership is time-multiplexed, it is important that any information received by the partition leader, P1, be passed on to the next leader, P2. This has to be done to make sure that P2 has all the information that it might need to send when it comes across another external node during its timeslot. One way of achieving this is to wake P2 up just before P1's timeslot expires and then have P1 transfer information only to P2. An alternative is to wake all the peers up at the time of leadership change, and then have P1 broadcast the information to all peers. Each peer saves the information sent by P1 and then goes back to sleep. In both cases, the peers send an acknowledgement to the partition leader after receiving the information. In the former method, only one node needs to wake up at the time of leadership change, but the amount of information that has to be transmitted between the nodes grows as time passes. In the latter case, all nodes have to be woken up at the time of leadership change, but only a small piece of information has to be transmitted among the peers each time. Since communication is much more expensive than waking the nodes up, we prefer the second method over the first one. A group can be divided into partitions in more than one way. For example, suppose we have a group of six members. We can divide this group into three partitions of two peers each, or two partitions with three peers each. The choice once again depends on the requirements of the system. A few big partitions make the system more energy efficient, because in this configuration a greater number of nodes stay in sleep mode at any given point in time. On the other hand, several small partitions make the system more memory efficient, since each node has to store less information (See Figure 12). Figure 12: The figure shows two different ways of partitioning a group of six nodes. In (a), the group is divided into three partitions of two nodes each: node 1 is the group leader, nodes 9 and 5 are partition leaders, and nodes 2, 3, and 6 are in sleep mode. In (b), the group is divided into two partitions of three nodes each: node 1 is the group leader, node 9 is the partition leader, and nodes 2, 3, 5, and 6 are in sleep mode. A group that is divided into partitions must be able to readjust itself when a node leaves or runs out of battery. This is crucial because a partition must have at least two nodes at any point in time to tolerate the failure of one node. For example, in Figure 12 (a), if node 2 or node 5 dies, the partition is left with only one node. Later on, if that single node in the partition dies, all witness information stored in that partition will be lost. We have devised a very simple protocol to solve this problem. We first explain how partitions are adjusted when a peer dies, and then explain what happens if a partition leader dies. Suppose node 2 in Figure 12 (a) dies.
When node 5, the partition leader, sends information to node 2, it does not receive an acknowledgement and concludes that node 2 has died. (The algorithm for concluding that a node has died can be made more rigorous by having the partition leader query the suspected node a few times.) At this point, node 5 contacts the other partition leaders (nodes 1 and 9) using a broadcast message and informs them that one of its peers has died. Upon hearing this, each partition leader informs node 5 of (i) the number of nodes in its partition, (ii) a candidate node that node 5 can take if the number of nodes in its partition is greater than 2, and (iii) the amount of witness information stored in its partition. Upon hearing from every leader, node 5 chooses the candidate node from the partition with the maximum number of peers (which must be greater than 2), and sends a message back to all leaders. Node 5 then sends data to its new peer to make sure that the information is replicated within the partition. However, if all partitions have exactly two nodes, then node 5 must join another partition. It joins the partition that stores the least amount of witness information and sends its witness information to the new partition leader. Witness information and membership updates are propagated to all peers during the next partition leadership change. We now consider the case where the partition leader dies. If this happens, we wait for the partition leadership to change and for the new partition leader to eventually find out that a peer has died. Once the new partition leader finds out that it needs more peers, it proceeds with the protocol explained above. However, in this case, we do lose any information that the previous partition leader might have received just before it died. This problem can be solved by implementing a more rigorous protocol, but we have decided to trade some accuracy for energy savings. Our current design uses time-division multiplexing to schedule wakeup and sleep modes in the sensor nodes. However, recent work on radio wakeup sensors [10] can be used to do this scheduling more efficiently. We plan to incorporate radio wakeup sensors in CenWits when the hardware is mature. 5. SYSTEM EVALUATION A sensor node is constrained in the amount of memory and power it has. In general, the amount of memory needed and the power consumption depend on a variety of factors such as node density, the number of hiker encounters, and the number of access points. In this section, we provide an estimate of how long the power of a MICA2 mote will last under certain assumptions. First, we assume that each sensor node carries about 100 witness records. On encountering another hiker, a sensor node transmits 50 witness records and receives 50 new witness records. Since each record is 16 bytes long, it takes about 0.34 seconds to transmit 50 records and another 0.34 seconds to receive 50 records over a 19200 bps link. The power consumption of the MICA2 due to CPU processing, transmission and reception is approximately 8.0 mA, 7.0 mA and 8.5 mA, respectively [18], and the capacity of an alkaline battery is 2500 mAh. Since the radio module of the Mica2 is half-duplex, and assuming that the CPU is always active when a node is awake, the current draw due to transmission is 8 + 8.5 = 16.5 mA and due to reception is 8 + 7 = 15 mA. So, the average current draw due to transmission and reception is (16.5 + 15) / 2 = 15.75 mA.
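As a quick cross-check, the arithmetic so far can be reproduced in a few lines of C; the battery-lifetime conversion continues in the text below. The constants are the figures quoted above; the one-way transmit time comes out to roughly 0.33 s, which the text rounds to 0.34 s.

```c
#include <stdio.h>

int main(void)
{
    /* figures quoted in the text above */
    const double records      = 50.0;    /* witness records sent per encounter */
    const double record_bytes = 16.0;    /* bytes per witness record           */
    const double link_bps     = 19200.0; /* radio link speed                   */

    /* time to push 50 records over the link (one direction) */
    double send_s = records * record_bytes * 8.0 / link_bps;

    /* radio-on current draw, mirroring the sums in the text:
     * (8 + 8.5) mA while sending, (8 + 7) mA while receiving */
    double avg_draw_mA = ((8.0 + 8.5) + (8.0 + 7.0)) / 2.0;

    printf("time to send 50 records : %.2f s\n", send_s);       /* ~0.33 s  */
    printf("average radio-on draw   : %.2f mA\n", avg_draw_mA); /* 15.75 mA */
    return 0;
}
```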
Given that the capacity of an alkaline battery is 2500 mAh, a battery should last for 2500/15.75 = 159 hours of transmission and reception. An encounter between two hikers results in an exchange of about 50 witness records in each direction, which takes about 0.68 seconds as calculated above. Thus, a single alkaline battery can last for (159 * 60 * 60) / 0.68 = 841764 hiker encounters. Assuming that a node emits a beacon every 90 seconds and a hiker encounter occurs every time a beacon is emitted (a worst-case scenario), a single alkaline battery will last for (841764 * 90) / (30 * 24 * 60 * 60) = 29 days. Since a Mica2 is equipped with two batteries, a Mica2 sensor can remain operational for about two months. Notice that this calculation is preliminary, because it assumes that hikers are active 24 hours a day and that a hiker encounter occurs every 90 seconds. In a more realistic scenario, the power is expected to last for a much longer time period. This time period will also increase significantly when groups of hikers are moving together. Finally, the lifetime of a sensor running on two batteries can be increased significantly by using energy scavenging and energy harvesting techniques [16, 14]. 6. PROTOTYPE IMPLEMENTATION We have implemented a prototype of CenWits on 900 MHz MICA2 sensors running Mantis OS 0.9.1b. One of the sensors is equipped with an MTS420CA GPS module, which is capable of barometric pressure and two-axis acceleration sensing in addition to GPS location tracking. We use SiRF, the serial communication protocol, to control the GPS module. SiRF has a rich command set, but we record only X and Y coordinates. A witness record is 16 bytes long. When a node starts up, it stores its current location and emits a beacon periodically--in the prototype, a node emits a beacon every minute. We have conducted a number of experiments with this prototype. A detailed report on these experiments, with the raw data collected and photographs of hikers, access points, etc., is available at http://csel.cs.colorado.edu/huangjh/Cenwits.index.htm. Here we report results from three of them. In all these experiments, there are three access points (A, B and C) where nodes dump their witness information. These access points also provide location information to the nodes that come within their range. We first show how CenWits can be used to determine the hiking trail a hiker is most likely on and the speed at which he is hiking, and to identify hot search areas in case he is reported missing. Next, we show the results of the power and memory management techniques of CenWits in conserving power and memory of a sensor node in one of our experiments. 6.1 Locating Lost Hikers The first experiment is called Direct Contact. It is a very simple experiment in which a single hiker starts from A, goes to B and then C, and finally returns to A (See Figure 13). The goal of this experiment is to illustrate that CenWits can deduce the trail a hiker takes by processing witness information. Figure 13: Direct Contact Experiment. Table 1: Witness information collected in the direct contact experiment. The witness information dumped at the three access points was collected and processed at a control center. Part of the witness information collected at the control center is shown in Table 1. The X, Y locations in this table correspond to the location information provided by access points A, B, and C. A is located at (12,7), B is located at (31,17) and C is located at (12,23).
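The entries in Table 1 are the 16-byte witness records mentioned above. A plausible layout is sketched below in C; the paper gives only the total size and the fields that appear in the tables (witnessed node id, record time, location, location time, hop count), so the field widths, ordering and the reserved byte are assumptions.

```c
#include <stdint.h>
#include <assert.h>

/* A plausible layout of the 16-byte witness record; field widths and
 * ordering are assumptions, only the total size is given in the paper. */
#pragma pack(push, 1)
struct witness_record {
    uint16_t witnessed_id;   /* id of the node that was encountered   */
    uint32_t record_time;    /* when the encounter happened           */
    int16_t  loc_x;          /* last known location of the recorder   */
    int16_t  loc_y;
    uint32_t location_time;  /* when that location fix was obtained   */
    uint8_t  hop_count;      /* hops this record has travelled        */
    uint8_t  reserved;       /* padding to reach 16 bytes             */
};
#pragma pack(pop)

int main(void)
{
    /* the example from the text: node 1 records meeting hiker 2 at time 16,
     * with its last location fix (12,7) taken at time 6 */
    struct witness_record r = { 2, 16, 12, 7, 6, 0, 0 };
    assert(sizeof r == 16);
    (void)r;
    return 0;
}
```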
Three encounter points (between hiker 1 and the three access points) extracted from this witness information are shown in rectangular boxes in Figure 13. For example, A,1 at 16 means that hiker 1 came in contact with A at time 16. Using this information, we can infer the direction in which hiker 1 was moving and the speed at which he was moving. Furthermore, given a map of hiking trails in this area, it is clearly possible to identify the hiking trail that hiker 1 took. The second experiment is called Indirect Inference. This experiment is designed to illustrate that the location, direction and speed of a hiker can be inferred by CenWits even if the hiker never comes in the range of any access point. It illustrates the importance of witness information in search and rescue applications. In this experiment, there are three hikers, 1, 2 and 3. Hiker 1 takes a trail that goes along access points A and B, while hiker 3 takes a trail that goes along access points C and B. Hiker 2 takes a trail that does not come in the range of any access point. However, this hiker meets hikers 1 and 3 during his hike. This is illustrated in Figure 14. Figure 14: Indirect Inference Experiment. Table 2: Witness information collected from hiker 1 in the indirect inference experiment. Table 3: Witness information collected from hiker 3 in the indirect inference experiment. Part of the witness information collected at the control center from access points A, B and C is shown in Tables 2 and 3. There are some interesting data in these tables. For example, the location time in some witness records is not the same as the record time. This means that the node that generated that record did not have its most up-to-date location at the encounter time. For example, when hikers 1 and 2 meet at time 16, the last recorded location of hiker 1 is (12,7), recorded at time 6. So, node 1 generates a witness record with record time 16, location (12,7) and location time 6. In fact, the last two records in Table 3 have (?,?) as their location. This happened because these witness records were generated by hiker 2 during his encounters with hiker 1 at times 15 and 16; until then, hiker 2 had not come in contact with any location points. Interestingly, a more accurate location of the encounter between hikers 1 and 2, or between hikers 2 and 3, can be computed by processing the witness information at the control center. It took 25 units of time for hiker 1 to go from A (12,7) to B (31,17). Assuming a constant hiking speed and a relatively straight-line hike, it can be computed that at time 16, hiker 1 must have been at location (18,10). Thus (18,10) is a more accurate location of the encounter between hikers 1 and 2. Finally, our third experiment, called Identifying Hot Search Areas, is designed to determine the trail a hiker has taken and identify hot search areas for rescue after he is reported missing. There are six hikers (1, 2, 3, 4, 5 and 6) in this experiment. Figure 15 shows the trails that hikers 1, 2, 3, 4 and 5 took, along with the encounter points obtained from witness records collected at the control center. For brevity, we have not shown the entire witness information collected at the control center. This information is available at http://csel.cs.colorado.edu/huangjh/Cenwits/index.htm. Figure 15: Identifying Hot Search Area Experiment (without hiker 6). Now suppose hiker 6 is reported missing at time 260.
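The encounter refinement above, and the speed and direction estimates used for hiker 6 below, both come down to linear interpolation between two known fixes under the constant-speed, straight-line assumption. A minimal sketch in C follows; the function name and the assumed departure time are ours, chosen to be consistent with the (18,10) estimate quoted above.

```c
#include <stdio.h>

/* Linearly interpolate a hiker's position at time t, given that he was at
 * (x0,y0) at time t0 and at (x1,y1) at time t1 (constant speed, straight line). */
static void interpolate(double t0, double x0, double y0,
                        double t1, double x1, double y1,
                        double t, double *x, double *y)
{
    double f = (t - t0) / (t1 - t0);
    *x = x0 + f * (x1 - x0);
    *y = y0 + f * (y1 - y0);
}

int main(void)
{
    /* Hiker 1 went from A (12,7) to B (31,17) in 25 time units.  Assuming he
     * left A around time 8 (an assumption consistent with the (18,10) estimate
     * in the text), his position at time 16 is roughly: */
    double x, y;
    interpolate(8.0, 12.0, 7.0, 33.0, 31.0, 17.0, 16.0, &x, &y);
    printf("estimated position at t=16: (%.1f, %.1f)\n", x, y);  /* ~ (18, 10) */
    return 0;
}
```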
To identify the hot search areas, the witness records of hiker 6 are processed to determine the trail he is most likely on, the speed at which he had been moving, the direction in which he had been moving, and his last known location. Based on this information and the hiking trail map, hot search areas are identified. The hiking trail taken by hiker 6, as inferred by CenWits, is shown by a dotted line, and the hot search areas identified by CenWits are shown by dark lines inside the dotted circle in Figure 16. Figure 16: Identifying Hot Search Area Experiment (with hiker 6). 6.2 Results of Power and Memory Management The witness information shown in Tables 1, 2 and 3 has not been filtered using the three criteria described in Section 4.1. For example, the witness records generated by hiker 3 at record times 76, 78 and 79 (see Table 3) were all generated due to a single contact between access point C and node 3. By applying the record gap criterion, two of these three records will be erased. Similarly, the witness records generated by hiker 1 at record times 10, 15 and 16 (see Table 1) were all generated due to a single contact between access point A and node 1. Again, by applying the record gap criterion, two of these three records will be erased. Our experiments did not generate enough data to test the impact of the record count or hop count criteria. To evaluate the impact of these criteria, we simulated CenWits to generate a large number of records for a given number of hikers and access points. We generated witness records by having the hikers walk randomly. We applied the three criteria to measure the amount of memory saved in a sensor node. The results are shown in Table 4. Table 4: Impact of memory management techniques. The number of hikers in this simulation was 10 and the number of access points was 5. The number of witness records reported in this table is the average number of witness records a sensor node stored at the time of its dump to an access point. These results show that the three memory management criteria significantly reduce the memory consumption of sensor nodes in CenWits. For example, they can reduce the memory consumption by up to 75%. However, these results are preliminary at present for two reasons: (1) they are generated via a simulation of hikers walking at random; and (2) it is not clear what impact the erasing of witness records has on the accuracy of the inferred location/hot search areas of lost hikers. In our future work, we plan to undertake a major study to address these two concerns.
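For concreteness, the record gap criterion used in these measurements can be sketched as a simple filter over a node's records sorted by record time: within the records that refer to the same witnessed node, a new record is kept only if it falls at least MIN_RECORD_GAP after the last one kept. The gap value, ids and helper names below are illustrative assumptions.

```c
#include <stdint.h>
#include <stdio.h>

#define MIN_RECORD_GAP 10  /* assumed minimum spacing between kept records */

struct witness_record {
    uint16_t witnessed_id;
    uint32_t record_time;
};

/* Filter records (already sorted by record_time) in place; returns new count.
 * For each witnessed node, a record is kept only if it is at least
 * MIN_RECORD_GAP after the previously kept record for that node. */
static int apply_record_gap(struct witness_record r[], int n)
{
    int kept = 0;
    for (int i = 0; i < n; i++) {
        int too_close = 0;
        for (int j = kept - 1; j >= 0; j--) {
            if (r[j].witnessed_id == r[i].witnessed_id) {
                too_close = (r[i].record_time - r[j].record_time) < MIN_RECORD_GAP;
                break;    /* only the most recent kept record matters */
            }
        }
        if (!too_close)
            r[kept++] = r[i];
    }
    return kept;
}

int main(void)
{
    /* e.g. hiker 1's three records for access point A at times 10, 15 and 16
     * (the access point id 0xA is a placeholder) */
    struct witness_record recs[] = { { 0xA, 10 }, { 0xA, 15 }, { 0xA, 16 } };
    int n = apply_record_gap(recs, 3);
    printf("%d record(s) kept\n", n);   /* 1 kept; the other two erased */
    return 0;
}
```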
CenWits: A Sensor-Based Loosely Coupled Search and Rescue System Using Witnesses University of Colorado, Campus Box 0430 Boulder, CO 80309-0430 ABSTRACT This paper describes the design, implementation and evaluation of a search and rescue system called CenWits. CenWits uses several small, commonly-available RF-based sensors, and a small number of storage and processing devices. It is designed for search and rescue of people in emergency situations in wilderness areas. A key feature of CenWits is that it does not require a continuously connected sensor network for its operation. It is designed for an intermittently connected network that provides only occasional connectivity. It makes a judicious use of the combined storage capability of sensors to filter, organize and store important information, combined battery power of sensors to ensure that the system remains operational for longer time periods, and intermittent network connectivity to propagate information to a processing center. A prototype of CenWits has been implemented using Berkeley Mica2 motes. The paper describes this implementation and reports on the performance measured from it. 1. INTRODUCTION Search and rescue of people in emergency situation in a timely manner is an extremely important service. information needed to determine the current location of a person who may be in an emergency situation. All these systems require a connected network via satellites, GSM base stations, or mobile devices. This requirement severely limits their applicability, particularly in remote wilderness areas where maintaining a connected network is very difficult. For example, a GSM transmitter has to be in the range of a base station to transmit. As a result, it cannot operate in most wilderness areas. While a satellite transmitter is the only viable solution in wilderness areas, it is typically expensive and cumbersome. An RF transmitter has a relatively smaller range of transmission. So, while an in-situ sensor is cheap as a single unit, it is expensive to build a large network that can provide connectivity over a large wilderness area. In a mobile environment where sensors are carried by moving people, power-efficient routing is difficult to implement and maintain over a large wilderness area. In fact, building an adhoc sensor network using only the sensors worn by hikers is nearly impossible due to a relatively small number of sensors spread over a large wilderness area. In this paper, we describe the design, implementation and evaluation of a search and rescue system called CenWits (Connection-less Sensor-Based Tracking System Using Witnesses). A subject uses GPS receivers (when it can connect to a satellite) and LPs to determine its current location. The key idea of CenWits is that it uses a concept of witnesses to convey a subject's movement and location information to the outside world. This averts a need for maintaining a connected network to transmit location information to the outside world. In particular, there is no need for expensive GSM or satellite transmitters, or maintaining an adhoc network of in-situ sensors in CenWits. CenWits employs several important mechanisms to address the key problem of resource constraints (low signal strength, low power and limited memory) in sensors. The problem of low signal strengths (short range RF communication) is addressed by avoiding a need for maintaining a connected network. 
Instead, CenWits propagates the location information of sensors using the concept of witnesses through an intermittently connected network. As a result, this system can be deployed in remote wilderness areas, as well as in large urban areas with skyscrapers and other tall structures. Also, this makes CenWits cost-effective. A subject only needs to wear light-weight and low-cost sensors that have GPS receivers but no expensive GSM or satellite transmitters. Furthermore, since there is no need for a connected sensor network, there is no need to deploy sensors in very large numbers. The problem of limited battery life and limited memory of a sensor is addressed by incorporating the concepts of groups and partitions. Groups and partitions allow sensors to stay in sleep or receive modes most of the time. Using groups and partitions, the location information collected by a sensor can be distributed among several sensors, thereby reducing the amount of memory needed in one sensor to store that information. In fact, CenWits provides an adaptive tradeoff between memory and power consumption of sensors. Each sensor can dynamically adjust its power and memory consumption based on its remaining power or available memory. It has amply been noted that the strength of sensor networks comes from the fact that several sensor nodes can be distributed over a relatively large area to construct a multihop network. This paper demonstrates that important large-scale applications can be built using sensors by judiciously integrating the storage, communication and computation capabilities of sensors. The paper describes important techniques to combine memory, transmission and battery power of many sensors to address resource constraints in the context of a search and rescue application. However, these techniques are quite general. We discuss several other sensor-based applications that can employ these techniques. While CenWits addresses the general location tracking and reporting problem in a wide-area network, there are two important differences from the earlier work done in this area. First, unlike earlier location tracking solutions, CenWits does not require a connected network. Second, unlike earlier location tracking solutions, CenWits does not aim for a very high accuracy of localization. Instead, the main goal is to provide an approximate, small area where search and rescue efforts can be concentrated. In Section 2, we overview some of the recent projects and technologies related to movement and location tracking, and search and rescue systems. In Section 3, we describe the overall architecture of CenWits, and provide a high-level description of its functionality. In the next section, Section 4, we discuss power and memory management in CenWits. To simplify our presentation, we will focus on a specific application of tracking lost/injured hikers in all these sections. In Section 6, we describe a prototype implementation of CenWits and present performance measured from this implementation. We discuss how the ideas of CenWits can be used to build several other applications in Section 7. Finally, in Section 8, we discuss some related issues and conclude the paper. 2. RELATED WORK A survey of location systems for ubiquitous computing is provided in [11]. A location tracking system for adhoc sensor networks using anchor sensors as reference to gain location information and spread it out to outer node is proposed in [17]. 
Most location tracking systems in adhoc sensor networks are for benefiting geographic-aware routing. The well-known active badge system [19] lets a user carry a badge around. An infrared sensor in the room can detect the presence of a badge and determine the location and identification of the person. This is a useful system for indoor environment, where GPS doesn't work. Locationing using 802.11 devices is probably the cheapest solution for indoor position tracking [8]. This system receives GPS signal from a satellite and locates itself, draws location on a map, and sends location information through GSM network to the others who are interested in the user's location. A very simple system to monitor children consists an RF transmitter and a receiver. The system alarms the holder of the receiver when the transmitter is about to run out of range [6]. Personal Locater Beacons (PLB) has been used for avalanche rescuing for years. A skier carries an RF transmitter that emits beacons periodically, so that a rescue team can find his/her location based on the strength of the RF signal. Luxury version of PLB combines a GPS receiver and a COSPASSARSAT satellite transmitter that can transmit user's location in latitude and longitude to the rescue team whenever an accident happens [4]. Another related technology in widespread use today is the ONSTAR system [3], typically used in several luxury cars. In this system, a GPS unit provides position information, and a powerful transmitter relays that information via satellite to a customer service center. Designed for emergencies, the system can be triggered either by the user with the push of a button, or by a catastrophic accident. If contact cannot be made, emergency services are dispatched to the location provided by GPS. Like PLBs, this system has several limitations. First, it is heavy and expensive. It requires a satellite transmitter and a connected network. If connectivity with either the GPS network or a communication satellite cannot be maintained, the system fails. The Lifetch system uses CPS receiver board combined with a CSM/CPRS transmitter and an RF transmitter in one wireless sensor node called Intelligent Communication Unit (ICU). An ICU first attempts to transmit its location to a control center through CSM/CPRS network. If that fails, it connects with other ICUs (adhoc network) to forward its location information until the information reaches an ICU that has CSM/CPRS reception. This ICU then transmits the location information of the original ICU via the CSM/CPRS network. ZebraNet is a system designed to study the moving patterns of zebras [13]. History-based protocol is used when the zebras are grazing and not moving around too much. Instead, Cenwits uses a four-phase hand-shake protocol to ensure that a node transmits only as much information as the other node is willing to receive. Delay tolerant network architecture addresses some important problems in challenged (resource-constrained) networks [9]. Among all these systems, luxury PLB and Lifetch are designed for location tracking in wilderness areas. However, both of these systems require a connected network. Luxury PLB requires the user to transmit a signal to a satellite, while Lifetch requires connection to CSM/CPRS network. Luxury PLB transmits location information, only when an accident happens. This is because satellite transmission needs line of sight. 
Furthermore, since there is no known history of user's location, it is not possible for the rescue team to infer the current location of the user. Another disadvantage of luxury PLB is that a satellite transmitter is very expensive, costing in the range of $750. Lifetch attempts to transmit the location information by CSM/CPRS and adhoc sensor network that uses AODV as the routing protocol. However, having a cellular reception in remote areas in wilderness areas, e.g. American national parks is unlikely. Furthermore, it is extremely unlikely that ICUs worn by hikers will be able to form an adhoc network in a large wilderness area. CenWits is designed to address the limitations of systems such as luxury PLB and Lifetch. It is designed to provide hikers, skiers, and climbers who have their activities mainly in wilderness areas a much higher chance to convey their location information to a control center. Rather, it communicates information along from user to user, finally arriving at a control center. Unlike several of the systems discussed so far, it does not require that a user's unit is constantly turned on. In fact, it can discover a victim's location, even if the victim's sensor was off at the time of accident and has remained off since then. CenWits solves one of the greatest problems plaguing modern search and rescue systems: it has an inherent on-site storage capability. This means someone within the network will have access to the last-known-location information of a victim, and perhaps his bearing and speed information as well. Figure 1: Hiker A and Hiker B are are not in the range of each other 6. PROTOTYPE IMPLEMENTATION We have implemented a prototype of CenWits on MICA2 sensor 900MHz running Mantis OS 0.9.1 b. One of the sensor is equipped with MTS420CA GPS module, which is capable of barometric pressure and two-axis acceleration sensing in addition to GPS location tracking. We use SiRF, the serial communication protocol, to control GPS module. A witness record is 16 bytes long. We have conducted a number of experiments with this prototype. A detailed report on these experiments with the raw data collected and photographs of hikers, access points etc. is available at http://csel.cs.colorado.edu/huangjh/ Cenwits.index.htm. Here we report results from three of them. In all these experiments, there are three access points (A, B and C) where nodes dump their witness information. These access points also provide location information to the nodes that come with in their range. We first show how CenWits can be used to determine the hiking trail a hiker is most likely on and the speed at which he is hiking, and identify hot search areas in case he is reported missing. Next, we show the results of power and memory management techniques of CenWits in conserving power and memory of a sensor node in one of our experiments. 6.1 Locating Lost Hikers The first experiment is called Direct Contact. The goal of this experiment is to illustrate that CenWits can deduce the trail a hiker takes by processing witness information. Figure 13: Direct Contact Experiment Table 1: Witness information collected in the direct contact experiment. The witness information dumped at the three access points was then collected and processed at a control center. Part of the witness information collected at the control center is shown in Table 1. The X, Y locations in this table correspond to the location information provided by access points A, B, and C. 
Three encounter points (between hiker 1 and the three access points) extracted from this witness information are shown in Figure 13 (shown in rectangular boxes). For example, A,1 at 16 means 1 came in contact with A at time 16. Using this information, we can infer the direction in which hiker 1 was moving and speed at which he was moving. Furthermore, given a map of hiking trails in this area, it is clearly possible to identify the hiking trail that hiker 1 took. The second experiment is called Indirect Inference. This experiment is designed to illustrate that the location, direction and speed of a hiker can be inferred by CenWits, even if the hiker never comes in the range of any access point. It illustrates the importance of witness information in search and rescue applications. In this experiment, there are three hikers, 1, 2 and 3. However, this hiker meets hiker 1 and 3 during his hike. This is illustrated in Figure 14. Figure 14: Indirect Inference Experiment Table 2: Witness information collected from hiker 1 in indirect inference experiment. Part of the witness information collected at the control center from access points A, B and C is shown in Tables 2 and 3. There are some interesting data in these tables. For example, the location time in some witness records is not the same as the record time. This means that the node that generated that record did not have its most up-to-date location at the encounter time. For example, when hikers 1 and 2 meet at time 16, the last recorded location time of Table 3: Witness information collected from hiker 3 in indirect inference experiment. hiker 1 is (12,7) recorded at time 6. So, node 1 generates a witness record with record time 16, location (12,7) and location time 6. In fact, the last two records in Table 3 have (? ,?) as their location. This has happened because these witness records were generate by hiker 2 during his encounter with 1 at time 15 and 16. Until this time, hiker 2 hadn't come in contact with any location points. Interestingly, a more accurate location information of 1 and 2 encounter or 2 and 3 encounter can be computed by process the witness information at the control center. It took 25 units of time for hiker 1 to go from A (12,7) to B (31,17). Assuming a constant hiking speed and a relatively straight-line hike, it can be computed that at time 16, hiker 1 must have been at location (18,10). Thus (18,10) is a more accurate location of encounter between 1 and 2. Finally, our third experiment called Identifying Hot Search Areas is designed to determine the trail a hiker has taken and identify hot search areas for rescue after he is reported missing. There are six hikers (1, 2, 3, 4, 5 and 6) in this experiment. Figure 15 shows the trails that hikers 1, 2, 3, 4 and 5 took, along with the encounter points obtained from witness records collected at the control center. For brevity, we have not shown the entire witness information collected at the control center. This information is available at http://csel.cs.colorado.edu/huangjh/Cenwits/index.htm. Figure 15: Identifying Hot Search Area Experiment (without hiker 6) Now suppose hiker 6 is reported missing at time 260. To determine the hot search areas, the witness records of hiker 6 are processed to determine the trail he is most likely on, the speed at which he had been moving, direction in which he had been moving, and his last known location. Based on this information and the hiking trail map, hot search areas are identified. 
The hiking trail taken by hiker 6 as inferred by CenWits is shown by a dotted line and the hot search areas identified by CenWits are shown by dark lines inside the dotted circle in Figure 16. Figure 16: Identifying Hot Search Area Experiment (with hiker 6). 6.2 Results of Power and Memory Management The witness information shown in Tables 1, 2 and 3 has not been filtered using the three criteria described in Section 4.1. For example, the witness records generated by 3 at record times 76, 78 and 79 (see Table 3) have all been generated due to a single contact between access point C and node 3. By applying the record gap criterion, two of these three records will be erased. Similarly, the witness records generated by 1 at record times 10, 15 and 16 (see Table 1) have all been generated due to a single contact between access point A and node 1. Again, by applying the record gap criterion, two of these three records will be erased. Our experiments did not generate enough data to test the impact of the record count or hop count criteria. To evaluate the impact of these criteria, we simulated CenWits to generate a significantly larger number of records for a given number of hikers and access points. We generated witness records by having the hikers walk randomly. We applied the three criteria to measure the amount of memory savings in a sensor node. The results are shown in Table 4. Table 4: Impact of memory management techniques. The number of hikers in this simulation was 10 and the number of access points was 5. The number of witness records reported in this table is the average number of witness records a sensor node stored at the time of a dump to an access point. These results show that the three memory management criteria significantly reduce the memory consumption of sensor nodes in CenWits. For example, they can reduce the memory consumption by up to 75%.
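A minimal sketch of the record gap criterion in C, assuming witness records are stored in order of record time. The gap threshold and field names are our assumptions, not values taken from the paper.

```c
#include <stdint.h>
#include <stdio.h>

#define RECORD_GAP 5   /* assumed minimum spacing (time units) between kept records */

typedef struct {
    uint16_t node_id;      /* node that was witnessed         */
    uint16_t record_time;  /* when the encounter was recorded */
} witness_t;

/* Keep at most one record per RECORD_GAP window for each witnessed node.
 * Records are assumed to be stored in order of record_time. Returns the
 * new number of records after in-place compaction. */
static int apply_record_gap(witness_t *recs, int n) {
    int kept = 0;
    for (int i = 0; i < n; i++) {
        int duplicate = 0;
        for (int j = kept - 1; j >= 0; j--) {
            if (recs[j].node_id == recs[i].node_id) {
                duplicate = (recs[i].record_time - recs[j].record_time) < RECORD_GAP;
                break;   /* only the most recent kept record of this node matters */
            }
        }
        if (!duplicate)
            recs[kept++] = recs[i];
    }
    return kept;
}

int main(void) {
    /* E.g. node 3's three records at times 76, 78 and 79 came from one
     * contact with access point C (id 12 is arbitrary here); two of the
     * three should be erased. */
    witness_t recs[] = { {12, 76}, {12, 78}, {12, 79} };
    int n = apply_record_gap(recs, 3);
    printf("%d record(s) kept\n", n);   /* prints: 1 record(s) kept */
    return 0;
}
```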
C-78
An Architectural Framework and a Middleware for Cooperating Smart Components
In a future networked physical world, a myriad of smart sensors and actuators assess and control aspects of their environments and autonomously act in response to it. Examples range in telematics, traffic management, team robotics or home automation to name a few. To a large extent, such systems operate proactively and independently of direct human control driven by the perception of the environment and the ability to organize respective computations dynamically. The challenging characteristics of these applications include sentience and autonomy of components, issues of responsiveness and safety criticality, geographical dispersion, mobility and evolution. A crucial design decision is the choice of the appropriate abstractions and interaction mechanisms. Looking to the basic building blocks of such systems we may find components which comprise mechanical components, hardware and software and a network interface, thus these components have different characteristics compared to pure software components. They are able to spontaneously disseminate information in response to events observed in the physical environment or to events received from other component via the network interface. Larger autonomous components may be composed recursively from these building blocks. The paper describes an architectural framework and a middleware supporting a component-based system and an integrated view on events-based communication comprising the real world events and the events generated in the system. It starts by an outline of the component-based system construction. The generic event architecture GEAR is introduced which describes the event-based interaction between the components via a generic event layer. The generic event layer hides the different communication channels including the interactions through the environment. An appropriate middleware is presented which reflects these needs and allows to specify events which have quality attributes to express temporal constraints. This is complemented by the notion of event channels which are abstractions of the underlying network and allow to enforce quality attributes. They are established prior to interaction to reserve the needed computational and network resources for highly predictable event dissemination.
[ "smart sensor", "sensor and actuat", "gener event architectur", "tempor constraint", "event channel", "event channel", "event-base system", "corba", "real-time entiti", "sentient object", "dissemin qualiti", "cortex", "gear architectur", "tempor valid", "soft real-time channel", "cosmic middlewar", "event-base commun", "sentient comput", "componentbas system", "middlewar architectur" ]
[ "P", "P", "P", "P", "P", "P", "M", "U", "U", "U", "R", "U", "R", "M", "M", "M", "M", "M", "M", "R" ]
An Architectural Framework and a Middleware for Cooperating Smart Components ∗ António Casimiro U.Lisboa casim@di.fc.ul.pt Jörg Kaiser U.Ulm kaiser@informatik.uni-ulm.de Paulo Veríssimo U.Lisboa pjv@di.fc.ul.pt ∗This work was partially supported by the EC, through project IST-2000-26031 (CORTEX), and by the FCT, through the Large-Scale Informatic Systems Laboratory (LaSIGE) and project POSI/1999/CHS/33996 (DEFEATS). ABSTRACT In a future networked physical world, a myriad of smart sensors and actuators assess and control aspects of their environments and autonomously act in response to it. Examples range from telematics, traffic management and team robotics to home automation, to name a few. To a large extent, such systems operate proactively and independently of direct human control, driven by the perception of the environment and the ability to organize respective computations dynamically. The challenging characteristics of these applications include sentience and autonomy of components, issues of responsiveness and safety criticality, geographical dispersion, mobility and evolution. A crucial design decision is the choice of the appropriate abstractions and interaction mechanisms. Looking at the basic building blocks of such systems we may find components which comprise mechanical components, hardware and software and a network interface; thus these components have different characteristics compared to pure software components. They are able to spontaneously disseminate information in response to events observed in the physical environment or to events received from other components via the network interface. Larger autonomous components may be composed recursively from these building blocks. The paper describes an architectural framework and a middleware supporting a component-based system and an integrated view on event-based communication comprising the real world events and the events generated in the system. It starts by an outline of the component-based system construction. The generic event architecture GEAR is introduced which describes the event-based interaction between the components via a generic event layer. The generic event layer hides the different communication channels including the interactions through the environment. An appropriate middleware is presented which reflects these needs and allows to specify events which have quality attributes to express temporal constraints. This is complemented by the notion of event channels which are abstractions of the underlying network and allow to enforce quality attributes. They are established prior to interaction to reserve the needed computational and network resources for highly predictable event dissemination. Categories and Subject Descriptors C.2.4 [Computer-Communication Networks]: Distributed Systems-Distributed applications; C.3 [Special-Purpose and Application-Based Systems]: Real-Time and embedded systems General Terms Design 1. INTRODUCTION In recent years we have seen the continuous improvement of technologies that are relevant for the construction of distributed embedded systems, including trustworthy visual, auditory, and location sensing [11], communication and processing. We believe that in a future networked physical world a new class of applications will emerge, composed of a myriad of smart sensors and actuators to assess and control aspects of their environments and autonomously act in response to it.
The anticipated challenging characteristics of these applications include autonomy, responsiveness and safety criticality, large scale, geographical dispersion, mobility and evolution. In order to deal with these challenges, it is of fundamental importance to use adequate high-level models, abstractions and interaction paradigms. Unfortunately, when facing the specific characteristics of the target systems, the shortcomings of current architectures and middleware interaction paradigms become apparent. Looking at the basic building blocks of such systems we may find components which comprise mechanical parts, hardware, software and a network interface. However, classical event/object models are usually software oriented and, as such, when transported to a real-time, embedded systems setting, their harmony is cluttered by the conflict between, on the one side, send/receive of software events (message-based), and on the other side, input/output of hardware or real-world events (register-based). In terms of interaction paradigms, and although the use of event-based models appears to be a convenient solution [10, 22], these often lack the appropriate support for non-functional requirements like reliability, timeliness or security. This paper describes an architectural framework and a middleware, supporting a component-based system and an integrated view on event-based communication comprising the real world events and the events generated in the system. When choosing the appropriate interaction paradigm it is of fundamental importance to address the challenging issues of the envisaged sentient applications. Unlike classical approaches that confine the possible interactions to the application boundaries, i.e. to its components, we consider that the environment surrounding the application also plays a relevant role in this respect. Therefore, the paper starts by clarifying several issues concerning our view of the system, about the interactions that may take place and about the information flows. This view is complemented by providing an outline of the component-based system construction and, in particular, by showing that it is possible to compose larger applications from basic components, following a hierarchical composition approach. This provides the necessary background to introduce the Generic-Events Architecture (GEAR), which describes the event-based interaction between the components via a generic event layer while allowing the seamless integration of physical and computer information flows. In fact, the generic event layer hides the different communication channels, including the interactions through the environment. Additionally, the event layer abstraction is also adequate for the proper handling of the non-functional requirements, namely reliability and timeliness, which are particularly stringent in real-time settings. The paper devotes particular attention to this issue by discussing the temporal aspects of interactions and the needs for predictability. An appropriate middleware is presented which reflects these needs and allows to specify events which have quality attributes to express temporal constraints. This is complemented by the notion of Event Channels (EC), which are abstractions of the underlying network while being abstracted by the event layer. In fact, event channels play a fundamental role in securing the functional and non-functional (e.g.
reliability and timeliness) properties of the envisaged applications, that is, in allowing the enforcement of quality attributes. They are established prior to interaction to reserve the needed computational and network resources for highly predictable event dissemination. The paper is organized as follows. In Section 3 we introduce the fundamental notions and abstractions that we adopt in this work to describe the interactions taking place in the system. Then, in Section 4, we describe the componentbased approach that allows composition of objects. GEAR is then described in Section 5 and Section 6 focuses on temporal aspects of the interactions. Section 7 describes the COSMIC middleware, which may be used to specify the interaction between sentient objects. A simple example to highlight the ideas presented in the paper appears in Section 8 and Section 9 concludes the paper. 2. RELATED WORK Our work considers a wired physical world in which a very large number of autonomous components cooperate. It is inspired by many research efforts in very different areas. Event-based systems in general have been introduced to meet the requirements of applications in which entities spontaneously generate information and disseminate it [1, 25, 22]. Intended for large systems and requiring quite complex infrastructures, these event systems do not consider stringent quality aspects like timeliness and dependability issues. Secondly, they are not created to support inter-operability between tiny smart devices with substantial resource constraints. In [10] a real-time event system for CORBA has been introduced. The events are routed via a central event server which provides scheduling functions to support the real-time requirements. Such a central component is not available in an infrastructure envisaged in our system architecture and the developed middleware TAO (The Ace Orb) is quite complex and unsuitable to be directly integrated in smart devices. There are efforts to implement CORBA for control networks, tailored to connect sensor and actuator components [15, 19]. They are targeted for the CAN-Bus [9], a popular network developed for the automotive industry. However, in these approaches the support for timeliness or dependability issues does not exist or is only very limited. A new scheme to integrate smart devices in a CORBA environment is proposed in [17] and has lead to the proposal of a standard by the Object Management Group (OMG) [26]. Smart transducers are organized in clusters that are connected to a CORBA system by a gateway. The clusters form isolated subnetworks. A special master node enforces the temporal properties in the cluster subnet. A CORBA gateway allows to access sensor data and write actuator data by means of an interface file system (IFS). The basic structure is similar to the WAN-of-CANs structure which has been introduced in the CORTEX project [4]. Islands of tight control may be realized by a control network and cooperate via wired or wireless networks covering a large number of these subnetworks. However, in contrast to the event channel model introduced in this paper, all communication inside a cluster relies on a single technical solution of a synchronous communication channel. Secondly, although the temporal behaviour of a single cluster is rigorously defined, no model to specify temporal properties for clusterto-CORBA or cluster-to-cluster interactions is provided. 3. 
INFORMATION FLOW AND INTERACTION MODEL In this paper we consider a component-based system model that incorporates previous work developed in the context of the IST CORTEX project [5]. As mentioned above, a fundamental idea underlying the approach is that applications can be composed of a large number of smart components that are able to sense their surrounding environment and interact with it. These components are referred to as sentient objects, a metaphor elaborated in CORTEX and inspired on the generic concept of sentient computing introduced in [12]. Sentient objects accept input events from a variety of different sources (including sensors, but not constrained to that), process them, and produce output events, whereby 29 they actuate on the environment and/or interact with other objects. Therefore, the following kinds of interactions can take place in the system: Environment-to-object interactions: correspond to a flow of information from the environment to application objects, reporting about the state of the former, and/or notifying about events taking place therein. Object-to-object interactions: correspond to a flow of information among sentient objects, serving two purposes. The first is related with complementing the assessment of each individual object about the state of the surrounding space. The second is related to collaboration, in which the object tries to influence other objects into contributing to a common goal, or into reacting to an unexpected situation. Object-to-environment interactions: correspond to a flow of information from an object to the environment, with the purpose of forcing a change in the state of the latter. Before continuing, we need to clarify a few issues with respect to these possible forms of interaction. We consider that the environment can be a producer or consumer of information while interacting with sentient objects. The environment is the real (physical) world surrounding an object, not necessarily close to the object or limited to certain boundaries. Quite clearly, the information produced by the environment corresponds to the physical representation of real-time entities, of which typical examples include temperature, distance or the state of a door. On the other hand, actuation on the environment implies the manipulation of these real-time entities, like increasing the temperature (applying more heat), changing the distance (applying some movement) or changing the state of the door (closing or opening it). The required transformations between system representations of these real-time entities and their physical representations is accomplished, generically, by sensors and actuators. We further consider that there may exist dumb sensors and actuators, which interact with the objects by disseminating or capturing raw transducer information, and smart sensors and actuators, with enhanced processing capabilities, capable of speaking some more elaborate event dialect (see Sections 5 and 6.1). Interaction with the environment is therefore done through sensors and actuators, which may, or may not be part of sentient objects, as discussed in Section 4.2. State or state changes in the environment are considered as events, captured by sensors (in the environment or within sentient objects) and further disseminated to other potentially interested sentient objects in the system. In consequence, it is quite natural to base the communication and interaction among sentient objects and with the environment on an event-based communication model. 
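Before turning to further properties of the event-based model, here is a minimal C sketch of the consume-process-produce cycle of a sentient object and of the three interaction kinds just described; all type and function names are ours and are not part of any CORTEX interface.

```c
#include <stdio.h>

/* The three information flows described above. */
typedef enum {
    ENV_TO_OBJECT,     /* sensed state of the environment      */
    OBJECT_TO_OBJECT,  /* cooperation between sentient objects */
    OBJECT_TO_ENV      /* actuation on the environment         */
} flow_kind_t;

typedef struct {
    flow_kind_t kind;
    const char *subject;   /* event type, e.g. "distance" */
    double      value;     /* simplified payload          */
} event_t;

/* A sentient object consumes input events, updates internal state and
 * produces output events (here: a trivial obstacle-avoidance rule). */
typedef struct { double last_distance; } sentient_object_t;

static int process(sentient_object_t *self, const event_t *in, event_t *out) {
    if (in->kind == ENV_TO_OBJECT) {
        self->last_distance = in->value;
        if (in->value < 1.0) {   /* too close: actuate on the environment */
            *out = (event_t){ OBJECT_TO_ENV, "set_speed", 0.0 };
            return 1;            /* one output event produced */
        }
    }
    return 0;
}

int main(void) {
    sentient_object_t obj = { 0.0 };
    event_t in = { ENV_TO_OBJECT, "distance", 0.5 }, out;
    if (process(&obj, &in, &out))
        printf("produced %s = %.1f\n", out.subject, out.value);
    return 0;
}
```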
Moreover, typical properties of event-based models, such as anonymous and non-blocking communication, are highly desirable in systems where sentient objects can be mobile and where interactions are naturally very dynamic. A distinguishing aspect of our work from many of the existing approaches, is that we consider that sentient objects may indirectly communicate with each other through the environment, when they act on it. Thus the environment constitutes an interaction and communication channel and is in the control and awareness loop of the objects. In other words, when a sentient object actuates on the environment it will be able to observe the state changes in the environment by means of events captured by the sensors. Clearly, other objects might as well capture the same events, thus establishing the above-mentioned indirect communication path. In systems that involve interactions with the environment it is very important to consider the possibility of communication through the environment. It has been shown that the hidden channels developing through the latter (e.g., feedback loops) may hinder software-based algorithms ignoring them [30]. Therefore, any solution to the problem requires the definition of convenient abstractions and appropriate architectural constructs. On the other hand, in order to deal with the information flow through the whole computer system and environment in a seamless way, handling software and hardware events uniformly, it is also necessary to find adequate abstractions. As discussed in Section 5, the Generic-Events Architecture introduces the concept of Generic Event and an Event Layer abstraction which aim at dealing, among others, with these issues. 4. SENTIENT OBJECT COMPOSITION In this section we analyze the most relevant issues related with the sentient object paradigm and the construction of systems composed of sentient objects. 4.1 Component-based System Construction Sentient objects can take several different forms: they can simply be software-based components, but they can also comprise mechanical and/or hardware parts, amongst which the very sensorial apparatus that substantiates sentience, mixed with software components to accomplish their task. We refine this notion by considering a sentient object as an encapsulating entity, a component with internal logic and active processing elements, able to receive, transform and produce new events. This interface hides the internal hardware/software structure of the object, which may be complex, and shields the system from the low-level functional and temporal details of controlling a specific sensor or actuator. Furthermore, given the inherent complexity of the envisaged applications, the number of simultaneous input events and the internal size of sentient objects may become too large and difficult to handle. Therefore, it should be possible to consider the hierarchical composition of sentient objects so that the application logic can be separated across as few or as many of these objects as necessary. On the other hand, composition of sentient objects should normally be constrained by the actual hardware component``s structure, preventing the possibility of arbitrarily composing sentient objects. This is illustrated in Figure 1, where a sentient object is internally composed of a few other sentient objects, each of them consuming and producing events, some of which only internally propagated. 
Figure 1: Component-aware sentient object composition (GPS reception, wireless transmission, Doppler radar, physical feedback, object's body, internal network). Observing the figure, and recalling our previous discussion about the possible interactions, we identify all of them here: an object-to-environment interaction occurs between the object controlling a WLAN transmitter and some WLAN receiver in the environment; an environment-to-object interaction takes place when the object responsible for the GPS signal reception uses the information transmitted by the satellites; finally, explicit object-to-object interactions occur internally to the container object, through an internal communication network. Additionally, it is interesting to observe that implicit communication can also occur, whether the physical feedback develops through the environment internal to the container object (as depicted) or through the environment external to this object. However, there is a subtle difference between both cases. While in the former the feedback can only be perceived by objects internal to the container, bounding the extent to which consistency must be ensured, such bounds do not exist in the latter. In fact, the notion of sentient object as an encapsulating entity may serve other purposes (e.g., the confinement of feedback and of the propagation of events), beyond the mere hierarchical composition of objects. To give a more concrete example of such component-aware object composition we consider a scenario of cooperating robots. Each robot is made of several components, corresponding, for instance, to axis and manipulator controllers. Together with the control software, each of these controllers may be a sentient object. On the other hand, a robot itself is a sentient object, composed of the objects materialized by the controllers, and the environment internal to its own structure, or body. This means that it should be possible to define cooperation activities using the events produced by robot sentient objects, without the need to know the internal structure of robots, or the events produced by body objects or by smart sensors within the body. From an engineering point of view, however, this also means that a robot sentient object may have to generate new events that reflect its internal state, which requires the definition of a gateway to make the bridge between the internal and external environments. 4.2 Encapsulation and Scoping Now an important question is about how to represent and disseminate events in a large scale networked world. As we have seen above, any event generated by a sentient object could, in principle, be visible anywhere in the system and thus received by any other sentient object. However, there are substantial obstacles to such universal interactions, originating from the components' heterogeneity in such a large-scale setting. Firstly, the components may have severe performance constraints, particularly because we want to integrate smart sensors and actuators in such an architecture. Secondly, the bandwidth of the participating networks may vary largely. Such networks may be low power, low bandwidth fieldbuses, or more powerful wireless networks as well as high speed backbones. Thirdly, the networks may have widely different reliability and timeliness characteristics. Consider a platoon of cooperating vehicles.
Inside a vehicle there may be a field-bus like CAN [8, 9], TTP/A [17] or LIN [20], with a comparatively low bandwidth. On the other hand, the vehicles are communicating with others in the platoon via a direct wireless link. Finally, there may be multiple platoons of vehicles which are coordinated by an additional wireless network layer. At the abstraction level of sentient objects, such heterogeneity is reflected by the notion of body-vs-environment. At the network level, we assume the WAN-of-CANs structure [27] to model the different networks. The notion of body and environment is derived from the recursively defined component-based object model. A body is similar to a cell membrane and represents a quality of service container for the sentient objects inside. On the network level, it may be associated with the components coupled by a certain CAN. A CAN defines the dissemination quality which can be expected by the cooperating objects. In the above example, a vehicle may be a sentient object, whose body is composed of the respective lower level objects (sensors and actuators) which are connected by the internal network (see Figure 1). Correspondingly, the platoon can be seen itself as an object composed of a collection of cooperating vehicles, its body being the environment encapsulated by the platoon zone. At the network level, the wireless network represents the respective CAN. However, several platoons united by their CANs may interact with each other and objects further away, through some wider-range, possible fixed networking substrate, hence the concept of WAN-of-CANs. The notions of body-environment and WAN-of-CANs are very useful when defining interaction properties across such boundaries. Their introduction obeyed to our belief that a single mechanism to provide quality measures for interactions is not appropriate. Instead, a high level construct for interaction across boundaries is needed which allows to specify the quality of dissemination and exploits the knowledge about body and environment to assess the feasibility of quality constraints. As we will see in the following section, the notion of an event channel represents this construct in our architecture. It disseminates events and allows the network independent specification of quality attributes. These attributes must be mapped to the respective properties of the underlying network structure. 5. A GENERIC EVENTS ARCHITECTURE In order to successfully apply event-based object-oriented models, addressing the challenges enumerated in the introduction of this paper, it is necessary to use adequate architectural constructs, which allow the enforcement of fundamental properties such as timeliness or reliability. We propose the Generic-Events Architecture (GEAR), depicted in Figure 2, which we briefly describe in what follows (for a more detailed description please refer to [29]). The L-shaped structure is crucial to ensure some of the properties described. Environment: The physical surroundings, remote and close, solid and etherial, of sentient objects. 
Figure 2: Generic-Events architecture (sentient objects consume and produce events through the event layer; translation layers mediate between the event layer and body/environment, including operational networks; the communication layer spans the regular network). Body: The physical embodiment of a sentient object (e.g., the hardware where a mechatronic controller resides, the physical structure of a car). Note that due to the compositional approach taken in our model, part of what is environment to a smaller object seen individually, becomes body for a larger, containing object. In fact, the body is the internal environment of the object. This architecture layering allows composition to take place seamlessly, in what concerns information flow. Inside a body there may also be implicit knowledge, which can be exploited to make interaction more efficient, like the knowledge about the number of cooperating entities, the existence of a specific communication network or the simple fact that all components are co-located and thus the respective events do not need to specify location in their context attributes. Such intrinsic information is not available outside a body and, therefore, more explicit information has to be carried by an event. Translation Layer: The layer responsible for transforming physical events between their native form and the event channel dialect, between environment/body and an event channel. Essentially, it does observation and actuation operations on the lower side, and transactions of event descriptions on the other. On the lower side this layer may also interact with dumb sensors or actuators, therefore talking the language of the specific device. These interactions are done through operational networks (hence the antenna symbol in the figure). Event Layer: The layer responsible for event propagation in the whole system, through several Event Channels (ECs). In concrete terms, this layer is a kind of middleware that provides important event-processing services which are crucial for any realistic event-based system. For example, some of the services that imply the processing of events may include publishing, subscribing, discrimination (zoning, filtering, fusion, tracing), and queuing. Communication Layer: The layer responsible for wrapping events (as a matter of fact, event descriptions in EC dialect) into carrier event-messages, to be transported to remote places. For example, a sensing event generated by a smart sensor is wrapped in an event-message and disseminated, to be caught by whoever is concerned. The same holds for an actuation event produced by a sentient object, to be delivered to a remote smart actuator. Likewise, this may apply to an event-message from one sentient object to another. Dumb sensors and actuators do not send event-messages, since they are unable to understand the EC dialect (they have neither an event layer nor a communication layer; they communicate, if needed, through operational networks).
Regular Network: This is represented in the horizontal axis of the block diagram by the communication layer, which encompasses the usual LAN, TCP/IP, and realtime protocols, desirably augmented with reliable and/or ordered broadcast and other protocols. The GEAR introduces some innovative ideas in distributed systems architecture. While serving an object model based on production and consumption of generic events, it treats events produced by several sources (environment, body, objects) in a homogeneous way. This is possible due to the use of a common basic dialect for talking about events and due to the existence of the translation layer, which performs the necessary translation between the physical representation of a real-time entity and the EC compliant format. Crucial to the architecture is the event layer, which uses event channels to propagate events through regular network infrastructures. The event layer is realized by the COSMIC middleware, as described in Section 7. 5.1 Information Flow in GEAR The flow of information (external environment and computational part) is seamlessly supported by the L-shaped architecture. It occurs in a number of different ways, which demonstrates the expressiveness of the model with regard to the necessary forms of information encountered in real-time cooperative and embedded systems. Smart sensors produce events which report on the environment. Body sensors produce events which report on the body. They are disseminated by the local event layer module, on an event channel (EC) propagated through the regular network, to any relevant remote event layer modules where entities showed an interest on them, normally, sentient objects attached to the respective local event layer modules. Sentient objects consume events they are interested in, process them, and produce other events. Some of these events are destined to other sentient objects. They are published on an EC using the same EC dialect that serves, e.g., sensor originated events. However, these events are semantically of a kind such that they are to be subscribed by the relevant sentient objects, for example, the sentient objects composing a robot controller system, or, at a higher level, the sentient objects composing the actual robots in 32 a cooperative application. Smart actuators, on the other hand, merely consume events produced by sentient objects, whereby they accept and execute actuation commands. Alternatively to talking to other sentient objects, sentient objects can produce events of a lower level, for example, actuation commands on the body or environment. They publish these exactly the same way: on an event channel through the local event layer representative. Now, if these commands are of concern to local actuator units (e.g., body, including internal operational networks), they are passed on to the local translation layer. If they are of concern to a remote smart actuator, they are disseminated through the distributed event layer, to reach the former. In any case, if they are also of interest to other entities, such as other sentient objects that wish to be informed of the actuation command, then they are also disseminated through the EC to these sentient objects. A key advantage of this architecture is that event-messages and physical events can be globally ordered, if necessary, since they all pass through the event layer. 
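A schematic C sketch of the dissemination decision just described: everything a sentient object publishes passes through the event layer, which notifies local subscribers, hands body/environment actuation to the translation layer, and forwards the event through an event channel to remote parties. The function names and the toy subject test are our assumptions, not GEAR code.

```c
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    const char *subject;    /* e.g. "actuation.set_speed" */
    const char *payload;
} event_t;

/* Stand-ins for the layers described in the text. */
static bool local_actuator_handles(const char *subject) {
    return subject[0] == 'a';              /* toy predicate */
}
static void translation_layer_actuate(const event_t *e) {
    printf("translation layer: actuating '%s'\n", e->subject);
}
static void event_channel_disseminate(const event_t *e) {
    printf("event channel: wrapping '%s' into an event-message\n", e->subject);
}
static void notify_local_subscribers(const event_t *e) {
    printf("event layer: notifying local subscribers of '%s'\n", e->subject);
}

/* Everything passes through the event layer, so physical events and
 * event-messages can be ordered consistently if required. */
static void event_layer_publish(const event_t *e) {
    notify_local_subscribers(e);
    if (local_actuator_handles(e->subject))
        translation_layer_actuate(e);       /* body/environment actuation */
    event_channel_disseminate(e);           /* remote subscribers, if any */
}

int main(void) {
    event_t cmd = { "actuation.set_speed", "0.4" };
    event_layer_publish(&cmd);
    return 0;
}
```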
The model also offers opportunities to solve a long lasting problem in realtime, computer control, and embedded systems: the inconsistency between message passing and the feedback loop information flow subsystems. 6. TEMPORAL ASPECTS OF THE INTERACTIONS Any interaction needs some form of predictability. If safety critical scenarios are considered as it is done in CORTEX, temporal aspects become crucial and have to be made explicit. The problem is how to define temporal constraints and how to enforce them by appropriate resource usage in a dynamic ad-hoc environment. In an system where interactions are spontaneous, it may be also necessary to determine temporal properties dynamically. To do this, the respective temporal information must be stated explicitly and available during run-time. Secondly, it is not always ensured that temporal properties can be fulfilled. In these cases, adaptations and timing failure notification must be provided [2, 28]. In most real-time systems, the notion of a deadline is the prevailing scheme to express and enforce timeliness. However, a deadline only weakly reflect the temporal characteristics of the information which is handled. Secondly, a deadline often includes implicit knowledge about the system and the relations between activities. In a rather well defined, closed environment, it is possible to make such implicit assumptions and map these to execution times and deadlines. E.g. the engineer knows how long a vehicle position can be used before the vehicle movement outdates this information. Thus he maps this dependency between speed and position on a deadline which then assures that the position error can be assumed to be bounded. In a open environment, this implicit mapping is not possible any more because, as an obvious reason, the relation between speed and position, and thus the error bound, cannot easily be reverse engineered from a deadline. Therefore, our event model includes explicit quality attributes which allow to specify the temporal attributes for every individual event. This is of course an overhead compared to the use of implicit knowledge, but in a dynamic environment such information is needed. To illustrate the problem, consider the example of the position of a vehicle. A position is a typical example for time, value entity [30]. Thus, the position is useful if we can determine an error bound which is related to time, e.g. if we want a position error below 10 meters to establish a safety property between cooperating cars moving with 5 m/sec, the position has a validity time of 2 seconds. In a time, value entity entity we can trade time against the precision of the value. This is known as value over time and time over value [18]. Once having established the time-value relation and captured in event attributes, subscribers of this event can locally decide about the usefulness of an information. In the GEAR architecture temporal validity is used to reason about safety properties in a event-based system [29]. We will briefly review the respective notions and see how they are exploited in our COSMIC event middleware. Consider the timeline of generating an event representing some real-time entity [18] from its occurrence to the notification of a certain sentient object (Figure 3). The real-time entity is captured at the sensor interface of the system and has to be transformed in a form which can be treated by a computer. During the time interval t0 the sensor reads the real-time entity and a time stamp is associated with the respective value. 
The derived <time, value> entity represents an observation. It may be necessary to perform substantial local computations to derive application relevant information from the raw sensor data. However, it should be noted that the time stamp of the observation is associated with the capture time and thus independent of further signal processing and event generation. This close relationship between capture time and the associated value is supported by the smart sensors described above. The processed sensor information is assembled in an event data structure after t_s to be published to an event channel. As is described later, the event includes the time stamp of generation and the temporal validity as attributes. The temporal validity is an application-defined measure for the expiration of a <time, value> entity. As we explained in the example of a position above, it may vary depending on application parameters. Temporal validity is a more general concept than that of a deadline. It is independent of a certain technical implementation of a system. While deadlines may be used to schedule the respective steps in an event generation and dissemination, a temporal validity is an intrinsic property of a <time, value> entity carried in an event. A temporal validity allows to reason about the usefulness of information and is beneficial even in systems in which timely dissemination of events cannot be enforced, because it enables timing failure detection at the event consumer. It is obvious that deadlines or periods can be derived from the temporal validity of an event. To set a deadline, knowledge of an implementation, worst case execution times or message dissemination latencies is necessary. Thus, in the timeline of Figure 3 every interval may have a deadline. Event dissemination through soft real-time channels in COSMIC exploits the temporal validity to define dissemination deadlines. Quality attributes can be defined, for instance, in terms of <validity interval, omission degree> pairs. These allow to characterize the usefulness of the event for a certain application, in a certain context. Because of that, quality attributes of an event clearly depend on higher level issues, such as the nature of the sentient object or of the smart sensor that produced the event. For instance, an event containing an indication of some vehicle speed must have different quality attributes depending on the kind of vehicle from which it originated, or depending on its current speed. The same happens with the position event of the car example above, whose validity depends on the current speed and on a predefined required precision. Figure 3: Event processing and dissemination (real-world event observation <time stamp, value>; event generated; ready to be transmitted; event received; notification; t_o: time to obtain an observation, t_s: time to process the sensor reading, t_m: time to assemble an event message, t_t: time to transfer the event on the regular network, t_n: time for notification on the consumer site).
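Because the temporal validity travels with the event, a consumer can locally decide whether a received <time, value> entity is still usable, exactly as in the position example above (5 m/s and a 10 m error bound giving a 2 second validity). A minimal sketch, with names and time base assumed by us:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t capture_time;    /* time stamp taken at the sensor interface */
    uint32_t validity;        /* application-defined temporal validity    */
    double   value;           /* e.g. a position coordinate               */
} timed_value_t;

/* An event is temporally consistent as long as
 * now < capture_time + validity. */
static bool temporally_valid(const timed_value_t *tv, uint32_t now) {
    return (now - tv->capture_time) < tv->validity;
}

int main(void) {
    /* Position of a vehicle moving at 5 m/s with a required error below
     * 10 m: the value is valid for 2 seconds (2000 ms), as in the text. */
    timed_value_t pos = { .capture_time = 1000, .validity = 2000, .value = 42.0 };

    printf("at t=2500: %s\n", temporally_valid(&pos, 2500) ? "use value"
                                                           : "timing failure");
    printf("at t=3500: %s\n", temporally_valid(&pos, 3500) ? "use value"
                                                           : "timing failure");
    return 0;
}
```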
However, since quality attributes are strictly related with the semantics of the application or, at least, with some high level knowledge of the purpose of the system (from which the validity of the information can be derived), the definition of these quality attributes may be done by exploiting the information provided at the programming interface. Therefore, it is important to understand how the system programmer can specify non-functional requirements at the API, and how these requirements translate into quality attributes assigned to events. While temporal validity is identified as an intrinsic event property, which is exploited to decide on the usefulness of data at a certain point in time, it is still necessary to provide a communication facility which can disseminate the event before the validity is expired. In a WAN-of-CANs network structure we have to cope with very different network characteristics and quality of service properties. Therefore, when crossing the network boundaries the quality of service guarantees available in a certain network will be lost and it will be very hard, costly and perhaps impossible to achieve these properties in the next larger area of the WAN-of CANs structure. CORTEX has a couple of abstractions to cope with this situation (network zones, body/environment) which have been discussed above. From the temporal point of view we need a high level abstraction like the temporal validity for the individual event now to express our quality requirements of the dissemination over the network. The bound, coverage pair, introduced in relation with the TCB [28] seems to be an appropriate approach. It considers the inherent uncertainty of networks and allows to trade the quality of dissemination against the resources which are needed. In relation with the event channel model discussed later, the bound, coverage pair allows to specify the quality properties of an event channel independently of specific technical issues. Given the typical environments in which sentient applications will operate, where it is difficult or even impossible to provide timeliness or reliability guarantees, we proposed an alternative way to handle non-functional application requirements, in relation with the TCB approach [28]. The proposed approach exploits intrinsic characteristics of applications, such as fail-safety, or time-elasticity, in order to secure QoS specifications of the form bound, coverage . Instead of constructing systems that rely on guaranteed bounds, the idea is to use (possibly changing) bounds that are secured with a constant probability all over the execution. This obviously requires an application to be able to adapt to changing conditions (and/or changing bounds) or, if this is not possible, to be able to perform some safety procedures when the operational conditions degrade to an unbearable level. The bounds we mentioned above refer essentially to timeliness bounds associated to the execution of local or distributed activities, or combinations thereof. From these bounds it is then possible to derive the quality attributes, in particular validity intervals, that characterize the events published in the event channel. 6.1 The Role of Smart Sensors and Actuators Smart devices encapsulate hardware, software and mechanical components and provide information and a set of well specified functions and which are closely related to the interaction with the environment. 
The built-in computational components and the network interface enable the implementation of a well-defined high level interface that does not just provide raw transducer data, but a processed, application-related set of events. Moreover, they exhibit an autonomous spontaneous behaviour. They differ from general purpose nodes because they are dedicated to a certain functionality which complies to their sensing and actuating capabilities while general purpose node may execute any program. Concerning the sentient object model, smart sensors and actuators may be basic sentient objects themselves, consuming events from the real-world environment and producing the respective generic events for the system``s event layer or, 34 vice versa consuming a generic event and converting it to a real-world event by an actuation. Smart components therefore constitute the periphery, i.e. the real-world interface of a more complex sentient object. The model of sentient objects also constitutes the framework to built more complex virtual sensors by relating multiple (primary, i.e. sensors which directly sense a physical entity) sensors. Smart components translate events of the environment to an appropriate form available at the event layer or, vice versa, transform a system event into an actuation. For smart components we can assume that: • Smart components have dedicated resources to perform a specific function. • These resources are not used for other purposes during normal real-time operation. • No local temporal conflicts occur that will change the observable temporal behaviour. • The functions of a component can usually only be changed during a configuration procedure which is not performed when the component is involved in critical operations. • An observation of the environment as a time,value pair can be obtained with a bounded jitter in time. Many predictability and scheduling problems arise from the fact, that very low level timing behaviours have to be handled on a single processor. Here, temporal encapsulation of activities is difficult because of the possible side effects when sharing a single processor resource. Consider the control of a simple IR-range detector which is used for obstacle avoidance. Dependent on its range and the speed of a vehicle, it has to be polled to prevent the vehicle from crashing into an obstacle. On a single central processor, this critical activity has to be coordinated with many similar, possibly less critical functions. It means that a very fine grained schedule has to be derived based purely on the artifacts of the low level device control. In a smart sensor component, all this low level timing behaviour can be optimized and encapsulated. Thus we can assume temporal encapsulation similar to information hiding in the functional domain. Of course, there is still the problem to guarantee that an event will be disseminated and recognized in due time by the respective system components, but this relates to application related events rather than the low artifacts of a device timing. The main responsibility to provide timeliness guarantees is shifted to the event layer where these events are disseminated. Smart sensors thus lead to network centric system model. The network constitute the shared resource which has to be scheduled in a predictable way. The COSMIC middleware introduced in the next section is an approach to provide predictable event dissemination for a network of smart sensors and actuators. 7. 
AN EVENT MODEL AND MIDDLEWARE FOR COOPERATING SMART DEVICES An event model and a middleware suitable for smart components must support timely and reliable communication and also must be resource efficient. COSMIC (COoperating Smart devices) is aimed at supporting the interaction between those components according to the concepts introduced so far. Based on the model of a WAN-of-CANs, we assume that the components are connected to some form of CAN as a fieldbus or a special wireless sensor network which provides specific network properties. E.g. a fieldbus developed for control applications usually includes mechanisms for predictable communication while other networks only support a best effort dissemination. A gateway connects these CANs to the next level in the network hierarchy. The event system should allow the dynamic interaction over a hierarchy of such networks and comply with the overall CORTEX generic event model. Events are typed information carriers and are disseminated in a publisher/subscriber style [24, 7], which is particularly suitable because it supports generative, anonymous communication [3] and does not create any artificial control dependencies between producers of information and the consumers. This decoupling in space (no references or names of senders or receivers are needed for communication) and the flow decoupling (no control transfer occurs with a data transfer) are well known [24, 7, 14] and crucial properties to maintain autonomy of components and dynamic interactions. It is obvious that not all networks can provide the same QoS guarantees and secondly, applications may have widely differing requirements for event dissemination. Additionally, when striving for predictability, resources have to be reserved and data structures must be set up before communication takes place. Thus, these things cannot predictably be done on the fly while disseminating an event. Therefore, we introduced the notion of an event channel to cope with differing properties and requirements and to have an object to which we can assign resources and reservations. The concept of an event channel is not new [10, 25], however, it has not yet been used to reflect the properties of the underlying heterogeneous communication networks and mechanisms as described by the GEAR architecture. Rather, existing event middleware allows to specify the priorities or deadlines of events handled in an event server. Event channels allow to specify the communication properties on the level of the event system in a fine grained way. An event channel is defined by: event channel := <subject, quality attributeList, handlers> The subject determines the types of events which may be issued to the channel. The quality attributes model the properties of the underlying communication network and dissemination scheme. These attributes include latency specifications, dissemination constraints and reliability parameters. The notion of zones which represent a guaranteed quality of service in a subnetwork supports this approach. Our goal is to handle the temporal specifications as <bound, coverage> pairs [28] orthogonal to the more technical questions of how to achieve a certain synchrony property of the dissemination infrastructure. Currently, we support quality attributes of event channels in a CAN-Bus environment represented by explicit synchrony classes. The COSMIC middleware maps the channel properties to lower level protocols of the regular network. Based on our previous work on predictable protocols for the CAN-Bus, COSMIC defines an abstract network which provides hard, soft and non real-time message classes [21]. Correspondingly, we distinguish three event channel classes according to their synchrony properties: hard real-time channels, soft real-time channels and non-real-time channels. Hard real-time channels (HRTC) guarantee event propagation within the defined time constraints in the presence
Based on our previous work on predictable protocols for the CAN-Bus, COSMIC defines an abstract network which provides hard, soft and non real-time message classes [21]. Correspondingly, we distinguish three event channel classes according to their synchrony properties: hard real-time channels, soft real-time channels and non-real-time channels. Hard real-time channels (HRTC) guarantee event propagation within the defined time constraints in the presence 35 of a specified number of omission faults. HRTECs are supported by a reservation scheme which is similar to the scheme used in time-triggered protocols like TTP [16][31], TTP/A [17], and TTCAN [8]. However, a substantial advantage over a TDMA scheme is that due to CAN-Bus properties, bandwidth which was reserved but is not needed by a HRTEC can be used by less critical traffic [21]. Soft real-time channels (SRTC) exploit the temporal validity interval of events to derive deadlines for scheduling. The validity interval defines the point in time after which an event becomes temporally inconsistent. Therefore, in a real-time system an event is useless after this point and may me discarded. The transmission deadline (DL) is defined as the latest point in time when a message has to be transmitted and is specified in a time interval which is derived from the expiration time: tevent ready < DL < texpiration − ∆notification texpiration defines the point in time when the temporal validity expires. ∆notification is the expected end-to-end latency which includes the transfer time over the network and the time the event may be delayed by the local event handling in the nodes. As said before, event deadlines are used to schedule the dissemination by SRTECs. However, deadlines may be missed in transient overload situations or due to arbitrary arrival times of events. On the publisher side the application``s exception handler is called whenever the event deadline expires before event transmission. At this point in time the event is also not expected to arrive at the subscriber side before the validity expires. Therefore, the event is removed from the sending queue. On the subscriber side the expiration time is used to schedule the delivery of the event. If the event cannot be delivered until its expiration time it is removed from the respective queues allocated by the COSMIC middleware. This prevents the communication system to be loaded by outdated messages. Non-real-time channels do not assume any temporal specification and disseminate events in a best effort manner. An instance of an event channel is created locally, whenever a publisher makes an announcement for publication or a subscriber subscribes for an event notification. When a publisher announces publication, the respective data structures of an event channel are created by the middleware. When a subscriber subscribes to an event channel, it may specify context attributes of an event which are used to filter events locally. E.g. a subscriber may only be interested in events generated at a certain location. Additionally the subscriber specifies quality properties of the event channel. A more detailed description of the event channels can be found in [13]. Currently, COSMIC handles all event channels which disseminate events beyond the CAN network boundary as non real-time event channels. This is mainly because we use the TCP/IP protocol to disseminate events over wireless links or to the standard Ethernet. 
However, there are a number of possible improvements which can easily be integrated in the event channel model. The Timely Computing Base (TCB) [28] can be exploited for timing failure detection and thus would provide awareness for event dissemination in environments where timely delivery of events cannot be enforced. Additionally, there are wireless protocols which can provide timely and reliable message delivery [6, 23] which may be exploited for the respective event channel classes. Events are the information carriers which are exchanged between sentient objects through event channels. To cope with the requirements of an ad-hoc environment, an event includes the description of the context in which it has been generated and quality attributes defining requirements for dissemination. This is particularly important in an open, dynamic environment where an event may travel over multiple networks. An event instance is specified as: event := subject, context attributeList, quality attributeList, contents A subject defines the type of the event and is related to the event contents. It supports anonymous communication and is used to route an event. The subject has to match to the subject of the event channel through which the event is disseminated. Attributes are complementary to the event contents. They describe individual functional and non-functional properties of the event. The context attributes describe the environment in which the event has been generated, e.g. a location, an operational mode or a time of occurrence. The quality attributes specify timeliness and dependability aspects in terms of validity interval, omission degree pairs. The validity interval defines the point in time after which an event becomes temporally inconsistent [18]. As described above, the temporal validity can be mapped to a deadline. However, usually a deadline is an engineering artefact which is used for scheduling while the temporal validity is a general property of a time, value entity. In a environment where a deadline cannot be enforced, a consumer of an event eventually must decide whether the event still is temporally consistent, i.e. represents a valid time, value entity. 7.1 The Architecture of the COSMIC Middleware On the architectural level, COSMIC distinguish three layers roughly depicted in Figure 4. Two of them, the event layer and the abstract network layer are implemented by the COSMIC middleware. The event layer provides the API for the application and realizes the abstraction of event and event channels. The abstract network implements real-time message classes and adapts the quality requirements to the underlying real network. An event channel handler resides in every node. It supports the programming interface and provides the necessary data structures for event-based communication. Whenever an object subscribes to a channel or a publisher announces a channel, the event channel handler is involved. It initiates the binding of the channel``s subject, which is represented by a network independent unique identifier to an address of the underlying abstract network to enable communication [14]. The event channel handler then tightly cooperates with the respective handlers of the abstract network layer to disseminate events or receive event notifications. It should be noted that the QoS properties of the event layer in general depend on what the abstract network layer can provide. Thus, it may not always be possible to e.g. 
support hard real-time event channels because the abstract network layer cannot provide the respective guarantees. In [13], we describe the protocols and services of the abstract network layer, particularly for the CAN-Bus. As can be seen in Figure 4, the hard real-time (HRT) message class is supported by a dedicated handler which is able to provide the time-triggered message dissemination.
Figure 4: Architecture layers of COSMIC.
The HRT handler maintains the HRT message list, which contains an entry for each local HRT message to be sent. The entry holds the parameters for the message, the activation status and the binding information. Messages are scheduled on the bus according to the HRT message calendar, which comprises the precise start time for each time slot allocated for a message. Soft real-time message queues order outgoing messages according to their transmission deadlines, which are derived from the temporal validity interval. If the transmission deadline is exceeded, the event message is purged from the queue. The respective application is notified via the exception notification interface and can take actions like trying to publish the event again or publishing it to a channel of another class. Incoming event messages are ordered according to their temporal validity. If an event message arrives, the respective applications are notified. At the moment, an outdated message is deleted from the queue and, if the queue runs out of space, the oldest message is discarded. However, other policies are possible depending on event attributes and available memory space. Non-real-time messages are FIFO ordered in a fixed-size circular buffer.
7.2 Status of COSMIC
The goal in developing COSMIC was to provide a platform to seamlessly integrate tiny smart components in a large system. Therefore, COSMIC should also run on the small, resource-constrained devices which are built around 16-bit or even 8-bit micro-controllers. The distributed COSMIC middleware has been implemented and tested on various platforms. Under RT-Linux, we support the real-time channels over the CAN-Bus as described above. The RT-Linux version runs on Pentium processors and is currently being evaluated before we port it to a smart sensor or actuator. For interoperability in a WAN-of-CANs environment, we only provide non-real-time channels at the moment. This version includes a gateway between the CAN-Bus and a TCP/IP network. It allows us to use a standard wireless 802.11 network. The non-real-time version of COSMIC is available on Linux, RT-Linux and on the micro-controller families C167 (Infineon) and 68HC908 (Motorola).
Both micro-controllers have an on-board CAN controller and thus do not require additional hardware components for the network. The memory footprint of COSMIC is about 13 Kbyte on a C167 and slightly more on the 68HC908, where it fits into the on-board flash memory without problems. Because only a few channels are required on such a smart sensor or actuator component, the amount of RAM (which is a scarce resource on many single-chip systems) needed to hold the dynamic data structures of a channel is low. The COSMIC middleware makes it very easy to include new smart sensors in an existing system. In particular, the application running on a smart sensor to condition and process the raw physical data does not need to be aware of any low-level, network-specific details. It seamlessly interacts with other components of the system exclusively via event channels. The demo example, briefly described in the next section, uses a distributed infrastructure of tiny smart sensors and actuators directly cooperating via event channels over heterogeneous networks.
8. AN ILLUSTRATIVE EXAMPLE
A simple example which shows many important properties of the proposed system, the coordination through the environment and the dissemination of events over the network, is the demo of two cooperating robots depicted in Figure 5. Each robot is equipped with smart distance sensors, speed sensors and acceleration sensors, and one of the robots (the guide (KURT2) in front, Figure 5) has a tracking camera allowing it to follow a white line. The robots form a WAN-of-CANs system in which their local CANs are interconnected via a wireless 802.11 network. COSMIC provides the event layer for seamless interaction. The blind robot (N.N.) searches for the guide randomly. Whenever the blind robot detects (by its front distance sensors) an obstacle, it checks whether this may be the guide. For this purpose, it dynamically subscribes to the event channel disseminating distance events from the rear distance sensors of the guide(s) and compares these with the distance events from its local front sensors. If the distance is approximately the same, it infers that it is really behind a guide. Now N.N. also subscribes to the event channels of the tracking camera and the speed sensors to follow the guide.
Figure 5: Cooperating robots.
The demo application highlights the following properties of the system:
1. Dynamic interaction of robots which is not known in advance. In principle, any two a priori unknown robots can cooperate. All that publishers and subscribers have to know to dynamically interact in this environment is the subject of the respective event class. A problem will be to receive only the events of the robot which is closest. A robot identity does not help much to solve this problem. Rather, the position of the event-generating entity, which is captured in the respective attributes, can be evaluated to filter the relevant event out of the event stream. A suitable wireless protocol which uses proximity to filter events has been proposed by Meier and Cahill [22] in the CORTEX project.
2. Interaction through the environment. The cooperation between the robots is controlled by sensing the distance between the robots. If the guide detects that the distance grows, it slows down. Conversely, if the blind robot comes too close, it reduces its speed. The local distance sensors produce events which are disseminated through a low-latency, highly predictable event channel.
The respective reaction time can be calculated as a function of the speed and the distance of the robots and defines a dynamic dissemination deadline for events. Thus, the interaction through the environment will secure the safety properties of the application, i.e. the follower may not crash into the guide and the guide may not lose the follower. Additionally, the robots have remote subscriptions to the respective distance events, which are cross-checked with the local sensor readings to validate that they really follow the guide which they detect with their local sensors. Because there may be longer latencies and omissions, this check will occasionally not be possible. The unavailability of the remote events will decrease the quality of interaction and probably slow down the robots, but will not affect safety properties.
3. Cooperative sensing. The blind robot subscribes to the events of the line-tracking camera. Thus it can see through the eye of the guide. Because it knows the distance to the guide and the speed as well, it can foresee the necessary movements. The proposed system provides the architectural framework for such a cooperation. The respective sentient object controlling the actuation of the robot receives as input the position and orientation of the white line to be tracked. In the case of the guide robot, this information is directly delivered as a body event with a low latency and a high reliability over the internal network. For the follower robot, the information also comes via an event channel, but with different quality attributes. These quality attributes are reflected in the event channel description. The sentient object controlling the actuation of the follower is aware of the increased latency and higher probability of omission.
9. CONCLUSION AND FUTURE WORK
The paper addresses the problems of building large distributed systems which interact with the physical environment and are composed of a huge number of smart components. We cannot assume that the network architecture in such a system is homogeneous. Rather, multiple edge networks are fused into a hierarchical, heterogeneous wide-area network. They connect the tiny sensors and actuators perceiving the environment and providing sentience to the application. Additionally, mobility and dynamic deployment of components require dynamic interaction without fixed, a priori known addressing and routing schemes. The work presented in the paper is a contribution towards seamless interaction in such an environment, which should not be restricted by technical obstacles. Rather, it should be possible to control the flow of information by explicitly specifying functional and temporal dissemination constraints. The paper presented the general model of a sentient object to describe composition, encapsulation and interaction in such an environment and developed the Generic-Events Architecture (GEAR), which integrates the interaction through the environment and the network. While appropriate abstractions and interaction models can hide the functional heterogeneity of the networks, it is impossible to hide the quality differences. Therefore, one of the main concerns is to define temporal properties in such an open infrastructure. The notion of an event channel has been introduced, which allows quality aspects to be specified explicitly. They can be verified at subscription and define a boundary for event dissemination. The COSMIC middleware is a first attempt to put these concepts into operation.
COSMIC allows the interoperability of tiny components over multiple network boundaries and supports the definition of different real-time event channel classes. There are many open questions that emerged from our work. One direction of future research will be the inclusion of the real-world communication channels established between sensors and actuators in the temporal analysis, and the ordering of such events in a cause-effect chain. Additionally, the provision of timing failure detection for the adaptation of interactions will be a focus of our research. To reduce network traffic and only disseminate to the subscribers those events which they are really interested in and which have a chance to arrive in time, the encapsulation and scoping schemes have to be transformed into respective multi-level filtering rules. The event attributes which describe aspects of the context and temporal constraints for the dissemination will be exploited for this purpose. Finally, it is intended to integrate the results into the COSMIC middleware to enable experimental assessment.
10. REFERENCES
[1] J. Bacon, K. Moody, J. Bates, R. Hayton, C. Ma, A. McNeil, O. Seidel, and M. Spiteri. Generic support for distributed applications. IEEE Computer, 33(3):68-76, 2000.
[2] L. B. Becker, M. Gergeleit, S. Schemmer, and E. Nett. Using a flexible real-time scheduling strategy in a distributed embedded application. In Proc. of the 9th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), Lisbon, Portugal, Sept. 2003.
[3] N. Carriero and D. Gelernter. Linda in context. Communications of the ACM, 32(4):444-458, Apr. 1989.
[4] A. Casimiro (Ed.). Preliminary definition of CORTEX system architecture. CORTEX project, IST-2000-26031, Deliverable D4, Apr. 2002.
[5] CORTEX project Annex 1, Description of Work. Technical report, CORTEX project, IST-2000-26031, Oct. 2000. http://cortex.di.fc.ul.pt.
[6] R. Cunningham and V. Cahill. Time bounded medium access control for ad-hoc networks. In Proceedings of the Second ACM International Workshop on Principles of Mobile Computing (POMC'02), pages 1-8, Toulouse, France, Oct. 2002. ACM Press.
[7] P. T. Eugster, P. Felber, R. Guerraoui, and A.-M. Kermarrec. The many faces of publish/subscribe. Technical Report DSC ID:200104, EPFL, Lausanne, Switzerland, 2001.
[8] T. Führer, B. Müller, W. Dieterle, F. Hartwich, R. Hugel, and M. Walther. Time triggered communication on CAN, 2000. http://www.can-cia.org/can/ttcan/fuehrer.pdf.
[9] Robert Bosch GmbH. CAN Specification Version 2.0. Technical report, Sept. 1991.
[10] T. Harrison, D. Levine, and D. Schmidt. The design and performance of a real-time CORBA event service. In Proceedings of the 1997 Conference on Object Oriented Programming Systems, Languages and Applications (OOPSLA), pages 184-200, Atlanta, Georgia, USA, 1997. ACM Press.
[11] J. Hightower and G. Borriello. Location systems for ubiquitous computing. IEEE Computer, 34(8):57-66, Aug. 2001.
[12] A. Hopper. The Clifford Paterson Lecture, 1999: Sentient Computing. Philosophical Transactions of the Royal Society London, 358(1773):2349-2358, Aug. 2000.
[13] J. Kaiser, C. Mitidieri, C. Brudna, and C. Pereira. COSMIC: A Middleware for Event-Based Interaction on CAN. In Proc. 2003 IEEE Conference on Emerging Technologies and Factory Automation, Lisbon, Portugal, Sept. 2003.
[14] J. Kaiser and M. Mock. Implementing the real-time publisher/subscriber model on the controller area network (CAN). In Proceedings of the 2nd International Symposium on Object-oriented Real-time Distributed Computing (ISORC'99), Saint-Malo, France, May 1999.
[15] K. Kim, G. Jeon, S. Hong, T. Kim, and S. Kim. Integrating subscription-based and connection-oriented communications into the embedded CORBA for the CAN Bus. In Proceedings of the IEEE Real-time Technology and Application Symposium, May 2000.
[16] H. Kopetz and G. Grünsteidl. TTP - A Time-Triggered Protocol for Fault-Tolerant Real-Time Systems. Technical Report rr-12-92, Institut für Technische Informatik, Technische Universität Wien, Treilstr. 3/182/1, A-1040 Vienna, Austria, 1992.
[17] H. Kopetz, M. Holzmann, and W. Elmenreich. A Universal Smart Transducer Interface: TTP/A. International Journal of Computer Systems Science and Engineering, 16(2), Mar. 2001.
[18] H. Kopetz and P. Veríssimo. Real-time and Dependability Concepts. In S. J. Mullender, editor, Distributed Systems, 2nd Edition, chapter 16, pages 411-446. ACM Press/Addison-Wesley, 1993.
[19] S. Lankes, A. Jabs, and T. Bemmerl. Integration of a CAN-based connection-oriented communication model into Real-Time CORBA. In Workshop on Parallel and Distributed Real-Time Systems, Nice, France, Apr. 2003.
[20] Local Interconnect Network: LIN Specification Package Revision 1.2. Technical report, Nov. 2000.
[21] M. Livani, J. Kaiser, and W. Jia. Scheduling hard and soft real-time communication in the controller area network. Control Engineering Practice, 7(12):1515-1523, 1999.
[22] R. Meier and V. Cahill. STEAM: Event-based middleware for wireless ad-hoc networks. In Proceedings of the International Workshop on Distributed Event-Based Systems (ICDCS/DEBS'02), pages 639-644, Vienna, Austria, 2002.
[23] E. Nett and S. Schemmer. Reliable real-time communication in cooperative mobile applications. IEEE Transactions on Computers, 52(2):166-180, Feb. 2003.
[24] B. Oki, M. Pfluegl, A. Seigel, and D. Skeen. The Information Bus - an architecture for extensible distributed systems. Operating Systems Review, 27(5):58-68, 1993.
[25] Object Management Group (OMG). CORBAservices: Common Object Services Specification - Notification Service Specification, Version 1.0, 2000.
[26] Object Management Group (OMG). Smart transducer interface, initial submission, June 2001.
[27] P. Veríssimo, V. Cahill, A. Casimiro, K. Cheverst, A. Friday, and J. Kaiser. CORTEX: Towards supporting autonomous and cooperating sentient entities. In Proceedings of European Wireless 2002, Florence, Italy, Feb. 2002.
[28] P. Veríssimo and A. Casimiro. The Timely Computing Base model and architecture. IEEE Transactions on Computers - Special Issue on Asynchronous Real-Time Systems, 51(8):916-930, Aug. 2002.
[29] P. Veríssimo and A. Casimiro. Event-driven support of real-time sentient objects. In Proceedings of the 8th IEEE International Workshop on Object-oriented Real-time Dependable Systems, Guadalajara, Mexico, Jan. 2003.
[30] P. Veríssimo and L. Rodrigues. Distributed Systems for System Architects. Kluwer Academic Publishers, 2001.
An Architectural Framework and a Middleware for Cooperating Smart Components *
U. Lisboa (casim@di.fc.ul.pt), U. Ulm (kaiser@informatik.uni-ulm.de), U. Lisboa (pjv@di.fc.ul.pt)
* This work was partially supported by the EC, through project IST-2000-26031 (CORTEX), and by the FCT, through the Large-Scale Informatic Systems Laboratory (LaSIGE) and project POSI/1999/CHS/33996 (DEFEATS).
ABSTRACT
In a future networked physical world, a myriad of smart sensors and actuators assess and control aspects of their environments and autonomously act in response to them. Examples range from telematics, traffic management and team robotics to home automation, to name a few. To a large extent, such systems operate proactively and independently of direct human control, driven by the perception of the environment and the ability to organize the respective computations dynamically. The challenging characteristics of these applications include sentience and autonomy of components, issues of responsiveness and safety criticality, geographical dispersion, mobility and evolution. A crucial design decision is the choice of the appropriate abstractions and interaction mechanisms. Looking at the basic building blocks of such systems, we may find components which comprise mechanical parts, hardware, software and a network interface; these components thus have different characteristics compared to pure software components. They are able to spontaneously disseminate information in response to events observed in the physical environment or to events received from other components via the network interface. Larger autonomous components may be composed recursively from these building blocks. The paper describes an architectural framework and a middleware supporting a component-based system and an integrated view on event-based communication comprising the real-world events and the events generated in the system. It starts with an outline of the component-based system construction. The generic event architecture GEAR is introduced, which describes the event-based interaction between the components via a generic event layer. The generic event layer hides the different communication channels, including the interactions through the environment. An appropriate middleware is presented which reflects these needs and allows the specification of events which have quality attributes to express temporal constraints. This is complemented by the notion of event channels, which are abstractions of the underlying network and allow quality attributes to be enforced. They are established prior to interaction to reserve the needed computational and network resources for highly predictable event dissemination.
1. INTRODUCTION
In recent years we have seen the continuous improvement of technologies that are relevant for the construction of distributed embedded systems, including trustworthy visual, auditory, and location sensing [11], communication and processing. We believe that in a future networked physical world a new class of applications will emerge, composed of a myriad of smart sensors and actuators that assess and control aspects of their environments and autonomously act in response to them. The anticipated challenging characteristics of these applications include autonomy, responsiveness and safety criticality, large scale, geographical dispersion, mobility and evolution. In order to deal with these challenges, it is of fundamental importance to use adequate high-level models, abstractions and interaction paradigms.
Unfortunately, when facing the specific characteristics of the target systems, the shortcomings of current architectures and middleware interaction paradigms become apparent. Looking to the basic building blocks of such systems we may find components which comprise mechanical parts, hardware, software and a network interface. However, classical event/object models are usually software oriented and, as such, when trans ported to a real-time, embedded systems setting, their harmony is cluttered by the conflict between, on the one side, send/receive of "software" events (message-based), and on the other side, input/output of "hardware" or "real-world" events, register-based. In terms of interaction paradigms, and although the use of event-based models appears to be a convenient solution [10, 22], these often lack the appropriate support for non-functional requirements like reliability, timeliness or security. This paper describes an architectural framework and a middleware, supporting a component-based system and an integrated view on event-based communication comprising the real world events and the events generated in the system. When choosing the appropriate interaction paradigm it is of fundamental importance to address the challenging issues of the envisaged sentient applications. Unlike classical approaches that confine the possible interactions to the application boundaries, i.e. to its components, we consider that the environment surrounding the application also plays a relevant role in this respect. Therefore, the paper starts by clarifying several issues concerning our view of the system, about the interactions that may take place and about the information flows. This view is complemented by providing an outline of the component-based system construction and, in particular, by showing that it is possible to compose larger applications from basic components, following an hierarchical composition approach. This provides the necessary background to introduce the Generic-Events Architecture (GEAR), which describes the event-based interaction between the components via a generic event layer while allowing the seamless integration of physical and computer information flows. In fact, the generic event layer hides the different communication channels, including the interactions through the environment. Additionally, the event layer abstraction is also adequate for the proper handling of the non-functional requirements, namely reliability and timeliness, which are particularly stringent in real-time settings. The paper devotes particular attention to this issue by discussing the temporal aspects of interactions and the needs for predictability. An appropriate middleware is presented which reflects these needs and allows to specify events which have quality attributes to express temporal constraints. This is complemented by the notion of Event Channels (EC), which are abstractions of the underlying network while being abstracted by the event layer. In fact, event channels play a fundamental role in securing the functional and non-functional (e.g. reliability and timeliness) properties of the envisaged applications, that is, in allowing the enforcement of quality attributes. They are established prior to interaction to reserve the needed computational and network resources for highly predictable event dissemination. The paper is organized as follows. 
In Section 3 we introduce the fundamental notions and abstractions that we adopt in this work to describe the interactions taking place in the system. Then, in Section 4, we describe the componentbased approach that allows composition of objects. GEAR is then described in Section 5 and Section 6 focuses on temporal aspects of the interactions. Section 7 describes the COSMIC middleware, which may be used to specify the interaction between sentient objects. A simple example to highlight the ideas presented in the paper appears in Section 8 and Section 9 concludes the paper. 2. RELATED WORK Our work considers a wired physical world in which a very large number of autonomous components cooperate. It is inspired by many research efforts in very different areas. Event-based systems in general have been introduced to meet the requirements of applications in which entities spontaneously generate information and disseminate it [1, 25, 22]. Intended for large systems and requiring quite complex infrastructures, these event systems do not consider stringent quality aspects like timeliness and dependability issues. Secondly, they are not created to support inter-operability between tiny smart devices with substantial resource constraints. In [10] a real-time event system for CORBA has been introduced. The events are routed via a central event server which provides scheduling functions to support the real-time requirements. Such a central component is not available in an infrastructure envisaged in our system architecture and the developed middleware TAO (The Ace Orb) is quite complex and unsuitable to be directly integrated in smart devices. There are efforts to implement CORBA for control networks, tailored to connect sensor and actuator components [15, 19]. They are targeted for the CAN-Bus [9], a popular network developed for the automotive industry. However, in these approaches the support for timeliness or dependability issues does not exist or is only very limited. A new scheme to integrate smart devices in a CORBA environment is proposed in [17] and has lead to the proposal of a standard by the Object Management Group (OMG) [26]. Smart transducers are organized in clusters that are connected to a CORBA system by a gateway. The clusters form isolated subnetworks. A special master node enforces the temporal properties in the cluster subnet. A CORBA gateway allows to access sensor data and write actuator data by means of an interface file system (IFS). The basic structure is similar to the WAN-of-CANs structure which has been introduced in the CORTEX project [4]. Islands of tight control may be realized by a control network and cooperate via wired or wireless networks covering a large number of these subnetworks. However, in contrast to the event channel model introduced in this paper, all communication inside a cluster relies on a single technical solution of a synchronous communication channel. Secondly, although the temporal behaviour of a single cluster is rigorously defined, no model to specify temporal properties for clusterto-CORBA or cluster-to-cluster interactions is provided. 3. INFORMATION FLOW AND INTERACTION MODEL In this paper we consider a component-based system model that incorporates previous work developed in the context of the IST CORTEX project [5]. As mentioned above, a fundamental idea underlying the approach is that applications can be composed of a large number of smart components that are able to sense their surrounding environment and interact with it. 
These components are referred to as sentient objects, a metaphor elaborated in CORTEX and inspired on the generic concept of sentient computing introduced in [12]. Sentient objects accept input events from a variety of different sources (including sensors, but not constrained to that), process them, and produce output events, whereby they actuate on the environment and/or interact with other objects. Therefore, the following kinds of interactions can take place in the system: Environment-to-object interactions: correspond to a flow of information from the environment to application objects, reporting about the state of the former, and/or notifying about events taking place therein. Object-to-object interactions: correspond to a flow of information among sentient objects, serving two purposes. The first is related with complementing the assessment of each individual object about the state of the surrounding space. The second is related to collaboration, in which the object tries to influence other objects into contributing to a common goal, or into reacting to an unexpected situation. Object-to-environment interactions: correspond to a flow of information from an object to the environment, with the purpose of forcing a change in the state of the latter. Before continuing, we need to clarify a few issues with respect to these possible forms of interaction. We consider that the environment can be a producer or consumer of information while interacting with sentient objects. The environment is the real (physical) world surrounding an object, not necessarily close to the object or limited to certain boundaries. Quite clearly, the information produced by the environment corresponds to the physical representation of real-time entities, of which typical examples include temperature, distance or the state of a door. On the other hand, actuation on the environment implies the manipulation of these real-time entities, like increasing the temperature (applying more heat), changing the distance (applying some movement) or changing the state of the door (closing or opening it). The required transformations between system representations of these real-time entities and their physical representations is accomplished, generically, by sensors and actuators. We further consider that there may exist dumb sensors and actuators, which interact with the objects by disseminating or capturing raw transducer information, and smart sensors and actuators, with enhanced processing capabilities, capable of "speaking" some more elaborate "event dialect" (see Sections 5 and 6.1). Interaction with the environment is therefore done through sensors and actuators, which may, or may not be part of sentient objects, as discussed in Section 4.2. State or state changes in the environment are considered as events, captured by sensors (in the environment or within sentient objects) and further disseminated to other potentially interested sentient objects in the system. In consequence, it is quite natural to base the communication and interaction among sentient objects and with the environment on an event-based communication model. Moreover, typical properties of event-based models, such as anonymous and non-blocking communication, are highly desirable in systems where sentient objects can be mobile and where interactions are naturally very dynamic. 
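At the programming level, these interaction kinds all reduce to the same shape: a sentient object consumes input events, processes them against its internal state, and produces output events. The following C fragment is a deliberately simplified sketch of that shape, using hypothetical names rather than any interface defined by CORTEX or COSMIC.

```c
#include <stddef.h>

/* Illustrative event type: a typed information carrier (the full attribute
 * set used by COSMIC is described in Section 7). */
typedef struct {
    const char *subject;   /* event type, used for anonymous routing */
    const void *payload;   /* application-specific contents          */
    size_t      length;
} event_t;

/* A sentient object consumes input events (from sensors or from other
 * objects), processes them against its internal state, and produces output
 * events (actuation commands or notifications to other objects). */
typedef struct sentient_object {
    void *state;                                        /* internal logic    */
    void (*on_event)(struct sentient_object *self,
                     const event_t *in,                 /* consumed event    */
                     void (*produce)(const event_t *)); /* emit output event */
} sentient_object_t;

/* Dispatch fragment: every incoming event, whether it originated in the
 * environment (via a sensor) or in another object, is handled the same way,
 * which is what makes hierarchical composition possible. */
static void dispatch(sentient_object_t *obj, const event_t *in,
                     void (*produce)(const event_t *))
{
    obj->on_event(obj, in, produce);
}
```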
A distinguishing aspect of our work from many of the existing approaches, is that we consider that sentient objects may indirectly communicate with each other through the environment, when they act on it. Thus the environment constitutes an interaction and communication channel and is in the control and awareness loop of the objects. In other words, when a sentient object actuates on the environment it will be able to observe the state changes in the environment by means of events captured by the sensors. Clearly, other objects might as well capture the same events, thus establishing the above-mentioned indirect communication path. In systems that involve interactions with the environment it is very important to consider the possibility of communication through the environment. It has been shown that the hidden channels developing through the latter (e.g., feedback loops) may hinder software-based algorithms ignoring them [30]. Therefore, any solution to the problem requires the definition of convenient abstractions and appropriate architectural constructs. On the other hand, in order to deal with the information flow through the whole computer system and environment in a seamless way, handling "software" and "hardware" events uniformly, it is also necessary to find adequate abstractions. As discussed in Section 5, the Generic-Events Architecture introduces the concept of Generic Event and an Event Layer abstraction which aim at dealing, among others, with these issues. 4. SENTIENT OBJECT COMPOSITION In this section we analyze the most relevant issues related with the sentient object paradigm and the construction of systems composed of sentient objects. 4.1 Component-based System Construction Sentient objects can take several different forms: they can simply be software-based components, but they can also comprise mechanical and/or hardware parts, amongst which the very sensorial apparatus that substantiates "sentience", mixed with software components to accomplish their task. We refine this notion by considering a sentient object as an encapsulating entity, a component with internal logic and active processing elements, able to receive, transform and produce new events. This interface hides the internal hardware/software structure of the object, which may be complex, and shields the system from the low-level functional and temporal details of controlling a specific sensor or actuator. Furthermore, given the inherent complexity of the envisaged applications, the number of simultaneous input events and the internal size of sentient objects may become too large and difficult to handle. Therefore, it should be possible to consider the hierarchical composition of sentient objects so that the application logic can be separated across as few or as many of these objects as necessary. On the other hand, composition of sentient objects should normally be constrained by the actual hardware component's structure, preventing the possibility of arbitrarily composing sentient objects. This is illustrated in Figure 1, where a sentient object is internally composed of a few other sentient objects, each of them consuming and producing events, some of which only internally propagated. 
Figure 1: Component-aware sentient object composition.
Observing the figure, and recalling our previous discussion about the possible interactions, we identify all of them here: an object-to-environment interaction occurs between the object controlling a WLAN transmitter and some WLAN receiver in the environment; an environment-to-object interaction takes place when the object responsible for the GPS signal reception uses the information transmitted by the satellites; finally, explicit object-to-object interactions occur internally to the container object, through an internal communication network. Additionally, it is interesting to observe that implicit communication can also occur, whether the physical feedback develops through the environment internal to the container object (as depicted) or through the environment external to this object. However, there is a subtle difference between both cases. While in the former the feedback can only be perceived by objects internal to the container, bounding the extent to which consistency must be ensured, such bounds do not exist in the latter. In fact, the notion of a sentient object as an encapsulating entity may serve other purposes (e.g., the confinement of feedback and of the propagation of events), beyond the mere hierarchical composition of objects. To give a more concrete example of such component-aware object composition, we consider a scenario of cooperating robots. Each robot is made of several components, corresponding, for instance, to axis and manipulator controllers. Together with the control software, each of these controllers may be a sentient object. On the other hand, a robot itself is a sentient object, composed of the objects materialized by the controllers, and the environment internal to its own structure, or body. This means that it should be possible to define cooperation activities using the events produced by robot sentient objects, without the need to know the internal structure of robots, or the events produced by body objects or by smart sensors within the body. From an engineering point of view, however, this also means that the robot sentient object may have to generate new events that reflect its internal state, which requires the definition of a gateway to make the bridge between the internal and external environments.
4.2 Encapsulation and Scoping
Now an important question is how to represent and disseminate events in a large-scale networked world. As we have seen above, any event generated by a sentient object could, in principle, be visible anywhere in the system and thus received by any other sentient object. However, there are substantial obstacles to such universal interactions, originating from the components' heterogeneity in such a large-scale setting. Firstly, the components may have severe performance constraints, particularly because we want to integrate smart sensors and actuators in such an architecture. Secondly, the bandwidth of the participating networks may vary widely. Such networks may be low-power, low-bandwidth fieldbuses, or more powerful wireless networks as well as high-speed backbones. Thirdly, the networks may have widely different reliability and timeliness characteristics. Consider a platoon of cooperating vehicles. Inside a vehicle there may be a field-bus like CAN [8, 9], TTP/A [17] or LIN [20], with a comparatively low bandwidth. On the other hand, the vehicles are communicating with others in the platoon via a direct wireless link.
Finally, there may be multiple platoons of vehicles which are coordinated by an additional wireless network layer. At the abstraction level of sentient objects, such heterogeneity is reflected by the notion of body-vs-environment. At the network level, we assume the WAN-of-CANs structure [27] to model the different networks. The notion of body and environment is derived from the recursively defined component-based object model. A body is similar to a cell membrane and represents a quality of service container for the sentient objects inside. On the network level, it may be associated with the components coupled by a certain CAN. A CAN defines the dissemination quality which can be expected by the cooperating objects. In the above example, a vehicle may be a sentient object, whose body is composed of the respective lower level objects (sensors and actuators) which are connected by the internal network (see Figure 1). Correspondingly, the platoon can be seen itself as an object composed of a collection of cooperating vehicles, its body being the environment encapsulated by the platoon zone. At the network level, the wireless network represents the respective CAN. However, several platoons united by their CANs may interact with each other and objects further away, through some wider-range, possible fixed networking substrate, hence the concept of WAN-of-CANs. The notions of body-environment and WAN-of-CANs are very useful when defining interaction properties across such boundaries. Their introduction obeyed to our belief that a single mechanism to provide quality measures for interactions is not appropriate. Instead, a high level construct for interaction across boundaries is needed which allows to specify the quality of dissemination and exploits the knowledge about body and environment to assess the feasibility of quality constraints. As we will see in the following section, the notion of an event channel represents this construct in our architecture. It disseminates events and allows the network independent specification of quality attributes. These attributes must be mapped to the respective properties of the underlying network structure. 5. A GENERIC EVENTS ARCHITECTURE In order to successfully apply event-based object-oriented models, addressing the challenges enumerated in the introduction of this paper, it is necessary to use adequate architectural constructs, which allow the enforcement of fundamental properties such as timeliness or reliability. We propose the Generic-Events Architecture (GEAR), depicted in Figure 2, which we briefly describe in what follows (for a more detailed description please refer to [29]). The L-shaped structure is crucial to ensure some of the properties described. Environment: The physical surroundings, remote and close, solid and etherial, of sentient objects. Figure 2: Generic-Events architecture. Body: The physical embodiment of a sentient object (e.g., the hardware where a mechatronic controller resides, the physical structure of a car). Note that due to the compositional approach taken in our model, part of what is "environment" to a smaller object seen individually, becomes "body" for a larger, containing object. In fact, the body is the "internal environment" of the object. This architecture layering allows composition to take place seamlessly, in what concerns information flow. 
Inside a body there may also be implicit knowledge, which can be exploited to make interaction more efficient, like the knowledge about the number of cooperating entities, the existence of a specific communication network or the simple fact that all components are co-located and thus the respective events do not need to specify location in their context attributes. Such intrinsic information is not available outside a body and, therefore, more explicit information has to be carried by an event. Translation Layer: The layer responsible for physical event transformation from/to their native form to event channel dialect, between environment/body and an event channel. Essentially one doing observation and actuation operations on the lower side, and doing transactions of event descriptions on the other. On the lower side this layer may also interact with dumb sensors or actuators, therefore "talking" the language of the specific device. These interactions are done through operational networks (hence the antenna symbol in the figure). Event Layer: The layer responsible for event propagation in the whole system, through several Event Channels (EC):. In concrete terms, this layer is a kind of middleware that provides important event-processing services which are crucial for any realistic event-based system. For example, some of the services that imply the processing of events may include publishing, subscribing, discrimination (zoning, filtering, fusion, tracing), and queuing. Communication Layer: The layer responsible for "wrapping" events (as a matter of fact, event descriptions in EC dialect) into "carrier" event-messages, to be transported to remote places. For example, a sensing event generated by a smart sensor is wrapped in an event-message and disseminated, to be caught by whoever is concerned. The same holds for an actuation event produced by a sentient object, to be delivered to a remote smart actuator. Likewise, this may apply to an event-message from one sentient object to another. Dumb sensors and actuators do not send event-messages, since they are unable to understand the EC dialect (they do not have an event layer neither a communication layer--they communicate, if needed, through operational networks). Regular Network: This is represented in the horizontal axis of the block diagram by the communication layer, which encompasses the usual LAN, TCP/IP, and realtime protocols, desirably augmented with reliable and/or ordered broadcast and other protocols. The GEAR introduces some innovative ideas in distributed systems architecture. While serving an object model based on production and consumption of generic events, it treats events produced by several sources (environment, body, objects) in a homogeneous way. This is possible due to the use of a common basic dialect for talking about events and due to the existence of the translation layer, which performs the necessary translation between the physical representation of a real-time entity and the EC compliant format. Crucial to the architecture is the event layer, which uses event channels to propagate events through regular network infrastructures. The event layer is realized by the COSMIC middleware, as described in Section 7. 5.1 Information Flow in GEAR The flow of information (external environment and computational part) is seamlessly supported by the L-shaped architecture. 
It occurs in a number of different ways, which demonstrates the expressiveness of the model with regard to the necessary forms of information encountered in real-time cooperative and embedded systems. Smart sensors produce events which report on the environment. Body sensors produce events which report on the body. They are disseminated by the local event layer module, on an event channel (EC) propagated through the regular network, to any relevant remote event layer modules where entities showed an interest on them, normally, sentient objects attached to the respective local event layer modules. Sentient objects consume events they are interested in, process them, and produce other events. Some of these events are destined to other sentient objects. They are published on an EC using the same EC dialect that serves, e.g., sensor originated events. However, these events are semantically of a kind such that they are to be subscribed by the relevant sentient objects, for example, the sentient objects composing a robot controller system, or, at a higher level, the sentient objects composing the actual robots in a cooperative application. Smart actuators, on the other hand, merely consume events produced by sentient objects, whereby they accept and execute actuation commands. Alternatively to "talking" to other sentient objects, sentient objects can produce events of a lower level, for example, actuation commands on the body or environment. They publish these exactly the same way: on an event channel through the local event layer representative. Now, if these commands are of concern to local actuator units (e.g., body, including internal operational networks), they are passed on to the local translation layer. If they are of concern to a remote smart actuator, they are disseminated through the distributed event layer, to reach the former. In any case, if they are also of interest to other entities, such as other sentient objects that wish to be informed of the actuation command, then they are also disseminated through the EC to these sentient objects. A key advantage of this architecture is that event-messages and physical events can be globally ordered, if necessary, since they all pass through the event layer. The model also offers opportunities to solve a long lasting problem in realtime, computer control, and embedded systems: the inconsistency between message passing and the feedback loop information flow subsystems. 6. TEMPORAL ASPECTS OF THE INTERACTIONS Any interaction needs some form of predictability. If safety critical scenarios are considered as it is done in CORTEX, temporal aspects become crucial and have to be made explicit. The problem is how to define temporal constraints and how to enforce them by appropriate resource usage in a dynamic ad-hoc environment. In an system where interactions are spontaneous, it may be also necessary to determine temporal properties dynamically. To do this, the respective temporal information must be stated explicitly and available during run-time. Secondly, it is not always ensured that temporal properties can be fulfilled. In these cases, adaptations and timing failure notification must be provided [2, 28]. In most real-time systems, the notion of a deadline is the prevailing scheme to express and enforce timeliness. However, a deadline only weakly reflect the temporal characteristics of the information which is handled. Secondly, a deadline often includes implicit knowledge about the system and the relations between activities. 
In a rather well-defined, closed environment, it is possible to make such implicit assumptions and map these to execution times and deadlines. E.g. the engineer knows how long a vehicle position can be used before the vehicle movement outdates this information. Thus he maps this dependency between speed and position onto a deadline which then assures that the position error can be assumed to be bounded. In an open environment, this implicit mapping is no longer possible, for the obvious reason that the relation between speed and position, and thus the error bound, cannot easily be reverse-engineered from a deadline. Therefore, our event model includes explicit quality attributes which allow the temporal attributes of every individual event to be specified. This is of course an overhead compared to the use of implicit knowledge, but in a dynamic environment such information is needed. To illustrate the problem, consider the example of the position of a vehicle. A position is a typical example of a (time, value) entity [30]. Thus, the position is useful if we can determine an error bound which is related to time; e.g. if we want a position error below 10 meters to establish a safety property between cooperating cars moving at 5 m/sec, the position has a validity time of 2 seconds. In a (time, value) entity we can trade time against the precision of the value. This is known as value over time and time over value [18]. Once the time-value relation has been established and captured in event attributes, subscribers of such an event can locally decide about the usefulness of the information. In the GEAR architecture, temporal validity is used to reason about safety properties in an event-based system [29]. We will briefly review the respective notions and see how they are exploited in our COSMIC event middleware. Consider the timeline of generating an event representing some real-time entity [18], from its occurrence to the notification of a certain sentient object (Figure 3). The real-time entity is captured at the sensor interface of the system and has to be transformed into a form which can be treated by a computer. During the time interval t_0 the sensor reads the real-time entity and a time stamp is associated with the respective value. The derived (time, value) entity represents an observation. It may be necessary to perform substantial local computations to derive application-relevant information from the raw sensor data. However, it should be noted that the time stamp of the observation is associated with the capture time and is thus independent of further signal processing and event generation. This close relationship between capture time and the associated value is supported by the smart sensors described above. The processed sensor information is assembled into an event data structure after t_s to be published to an event channel. As described later, the event includes the time stamp of generation and the temporal validity as attributes. The temporal validity is an application-defined measure for the expiration of a (time, value) entity. As we explained in the position example above, it may vary depending on application parameters. Temporal validity is a more general concept than that of a deadline. It is independent of a certain technical implementation of a system. While deadlines may be used to schedule the respective steps in event generation and dissemination, a temporal validity is an intrinsic property of a (time, value) entity carried in an event.
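The position example can be stated as a small calculation: the required precision and the current speed determine the validity interval, and a consumer can then check temporal consistency locally. The following C sketch illustrates this under assumed names and a microsecond time base; it is not part of the COSMIC implementation.

```c
#include <stdbool.h>
#include <stdint.h>

/* A (time, value) entity: the value is stamped with its capture time and
 * carries an application-defined temporal validity. */
typedef struct {
    uint64_t t_capture_us;   /* observation time stamp (common time base) */
    uint64_t validity_us;    /* temporal validity interval                */
    double   value;          /* e.g. a position coordinate in metres      */
} observation_t;

/* Derive the validity interval from application-level knowledge instead of
 * hard-coding a deadline: with a required position error below max_error_m
 * and a vehicle speed of speed_mps, the observation stays useful for
 * max_error_m / speed_mps seconds (10 m at 5 m/sec gives 2 s, as above). */
static uint64_t position_validity_us(double max_error_m, double speed_mps)
{
    if (speed_mps <= 0.0)
        return UINT64_MAX;               /* standing still: no decay */
    return (uint64_t)((max_error_m / speed_mps) * 1e6);
}

/* Consumer side: decide locally whether the (time, value) entity is still
 * temporally consistent; this also acts as a timing failure check when
 * timely dissemination could not be enforced. */
static bool temporally_consistent(const observation_t *o, uint64_t now_us)
{
    return (now_us - o->t_capture_us) < o->validity_us;
}
```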
A temporal validity allows to reason about the usefulness of information and is beneficial even in systems in which timely dissemination of events cannot be enforced because it enables timing failure detection at the event consumer. It is obvious that deadlines or periods can be derived from the temporal validity of an event. To set a deadline, knowledge of an implementation, worst case execution times or message dissemination latencies is necessary. Thus, in the timeline of Figure 3 every interval may have a deadline. Event dissemination through soft real-time channels in COSMIC exploits the temporal validity to define dissemination deadlines. Quality attributes can be defined, for instance, in terms of (validity interval, omission degree) pairs. These allow to characterize the usefulness of the event for a certain application, in a certain context. Because of that, quality attributes of an event clearly depend on higher level issues, such as the nature of the sentient object or of the smart sensor that produced the event. For instance, an event containing an indication of some vehicle speed must have different quality attributes depending on the kind of vehicle Figure 3: Event processing and dissemination. from which it originated, or depending on its current speed. The same happens with the position event of the car example above, whose validity depends on the current speed and on a predefined required precision. However, since quality attributes are strictly related with the semantics of the application or, at least, with some high level knowledge of the purpose of the system (from which the validity of the information can be derived), the definition of these quality attributes may be done by exploiting the information provided at the programming interface. Therefore, it is important to understand how the system programmer can specify non-functional requirements at the API, and how these requirements translate into quality attributes assigned to events. While temporal validity is identified as an intrinsic event property, which is exploited to decide on the usefulness of data at a certain point in time, it is still necessary to provide a communication facility which can disseminate the event before the validity is expired. In a WAN-of-CANs network structure we have to cope with very different network characteristics and quality of service properties. Therefore, when crossing the network boundaries the quality of service guarantees available in a certain network will be lost and it will be very hard, costly and perhaps impossible to achieve these properties in the next larger area of the WAN-of CANs structure. CORTEX has a couple of abstractions to cope with this situation (network zones, body/environment) which have been discussed above. From the temporal point of view we need a high level abstraction like the temporal validity for the individual event now to express our quality requirements of the dissemination over the network. The (bound, coverage) pair, introduced in relation with the TCB [28] seems to be an appropriate approach. It considers the inherent uncertainty of networks and allows to trade the quality of dissemination against the resources which are needed. In relation with the event channel model discussed later, the (bound, coverage) pair allows to specify the quality properties of an event channel independently of specific technical issues. 
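To make the idea more tangible, the sketch below shows one possible C representation of a (bound, coverage) specification together with a trivial run-time monitor of the observed coverage. Both the structure and the monitor are illustrative assumptions rather than part of the TCB or COSMIC interfaces; they merely indicate that coverage is an observable quantity to which an application can adapt.

```c
#include <stdbool.h>
#include <stdint.h>

/* A (bound, coverage) pair: dissemination latency should stay below `bound`
 * with probability `coverage`, instead of being guaranteed in all cases. */
typedef struct {
    uint64_t bound_us;    /* timeliness bound for event dissemination */
    double   coverage;    /* required probability, e.g. 0.99          */
} qos_spec_t;

/* Simple run-time monitor: count timely and total deliveries and report
 * whether the observed coverage still meets the specification. An
 * application could react by adapting its behaviour, or by switching to a
 * fail-safe procedure when operational conditions degrade too far. */
typedef struct {
    uint64_t timely;
    uint64_t total;
} coverage_monitor_t;

static void monitor_record(coverage_monitor_t *m, uint64_t latency_us,
                           const qos_spec_t *q)
{
    m->total++;
    if (latency_us <= q->bound_us)
        m->timely++;
}

static bool coverage_met(const coverage_monitor_t *m, const qos_spec_t *q)
{
    if (m->total == 0)
        return true;
    return ((double)m->timely / (double)m->total) >= q->coverage;
}
```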
Given the typical environments in which sentient applications will operate, where it is difficult or even impossible to provide timeliness or reliability guarantees, we proposed an alternative way to handle non-functional application requirements, in relation with the TCB approach [28]. The proposed approach exploits intrinsic characteristics of applications, such as fail-safety, or time-elasticity, in order to secure QoS specifications of the form (bound, coverage). Instead of constructing systems that rely on guaranteed bounds, the idea is to use (possibly changing) bounds that are secured with a constant probability all over the execution. This obviously requires an application to be able to adapt to changing conditions (and/or changing bounds) or, if this is not possible, to be able to perform some safety procedures when the operational conditions degrade to an unbearable level. The bounds we mentioned above refer essentially to timeliness bounds associated to the execution of local or distributed activities, or combinations thereof. From these bounds it is then possible to derive the quality attributes, in particular validity intervals, that characterize the events published in the event channel. 6.1 The Role of Smart Sensors and Actuators Smart devices encapsulate hardware, software and mechanical components and provide information and a set of well specified functions and which are closely related to the interaction with the environment. The built-in computational components and the network interface enable the implementation of a well-defined high level interface that does not just provide raw transducer data, but a processed, application-related set of events. Moreover, they exhibit an autonomous spontaneous behaviour. They differ from general purpose nodes because they are dedicated to a certain functionality which complies to their sensing and actuating capabilities while general purpose node may execute any program. Concerning the sentient object model, smart sensors and actuators may be basic sentient objects themselves, consuming events from the real-world environment and producing the respective generic events for the system's event layer or, vice versa consuming a generic event and converting it to a real-world event by an actuation. Smart components therefore constitute the periphery, i.e. the real-world interface of a more complex sentient object. The model of sentient objects also constitutes the framework to built more complex "virtual" sensors by relating multiple (primary, i.e. sensors which directly sense a physical entity) sensors. Smart components translate events of the environment to an appropriate form available at the event layer or, vice versa, transform a system event into an actuation. For smart components we can assume that: • Smart components have dedicated resources to perform a specific function. • These resources are not used for other purposes during normal real-time operation. • No local temporal conflicts occur that will change the observable temporal behaviour. • The functions of a component can usually only be changed during a configuration procedure which is not performed when the component is involved in critical operations. • An observation of the environment as a (time, value) pair can be obtained with a bounded jitter in time. Many predictability and scheduling problems arise from the fact, that very low level timing behaviours have to be handled on a single processor. 
Here, temporal encapsulation of activities is difficult because of the possible side effects when sharing a single processor resource. Consider the control of a simple IR-range detector which is used for obstacle avoidance. Dependent on its range and the speed of a vehicle, it has to be polled to prevent the vehicle from crashing into an obstacle. On a single central processor, this critical activity has to be coordinated with many similar, possibly less critical functions. It means that a very fine grained schedule has to be derived based purely on the artifacts of the low level device control. In a smart sensor component, all this low level timing behaviour can be optimized and encapsulated. Thus we can assume temporal encapsulation similar to information hiding in the functional domain. Of course, there is still the problem to guarantee that an event will be disseminated and recognized in due time by the respective system components, but this relates to application related events rather than the low artifacts of a device timing. The main responsibility to provide timeliness guarantees is shifted to the event layer where these events are disseminated. Smart sensors thus lead to network centric system model. The network constitute the shared resource which has to be scheduled in a predictable way. The COSMIC middleware introduced in the next section is an approach to provide predictable event dissemination for a network of smart sensors and actuators. 7. AN EVENT MODEL AND MIDDLEWARE FOR COOPERATING SMART DEVICES An event model and a middleware suitable for smart components must support timely and reliable communication and also must be resource efficient. COSMIC (COoperating Smart devices) is aimed at supporting the interaction between those components according to the concepts introduced so far. Based on the model of a WAN-of-CANs, we assume that the components are connected to some form of CAN as a fieldbus or a special wireless sensor network which provides specific network properties. E.g. a fieldbus developed for control applications usually includes mechanisms for predictable communication while other networks only support a best effort dissemination. A gateway connects these CANs to the next level in the network hierarchy. The event system should allow the dynamic interaction over a hierarchy of such networks and comply with the overall CORTEX generic event model. Events are typed information carriers and are disseminated in a publisher / subscriber style [24, 7], which is particularly suitable because it supports generative, anonymous communication [3] and does not create any artificial control dependencies between producers of information and the consumers. This decoupling in space (no references or names of senders or receivers are needed for communication) and the flow decoupling (no control transfer occurs with a data transfer) are well known [24, 7, 14] and crucial properties to maintain autonomy of components and dynamic interactions. It is obvious that not all networks can provide the same QoS guarantees and secondly, applications may have widely differing requirements for event dissemination. Additionally, when striving for predictability, resources have to be reserved and data structures must be set up before communication takes place. Thus, these things cannot predictably be made on the fly while disseminating an event. 
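To make the timing argument concrete, the following small sketch derives a polling period for the range detector from its range, the stopping distance and the current speed; the function name and the numbers in the comment are made up for illustration, and exactly this kind of low level timing is what a smart sensor component can encapsulate.

#include <stdint.h>

/* Illustrative calculation (not taken from COSMIC): how often an IR range
   detector has to be polled so that an obstacle entering its range is
   detected early enough to stop the vehicle. */
static uint32_t polling_period_ms(uint32_t sensor_range_mm,
                                  uint32_t stopping_distance_mm,
                                  uint32_t speed_mm_per_s)
{
    if (sensor_range_mm <= stopping_distance_mm)
        return 0;                /* cannot stop in time: poll continuously */
    if (speed_mm_per_s == 0)
        return UINT32_MAX;       /* standing still: no polling constraint  */

    /* Distance that may be consumed between two polls. */
    uint32_t slack_mm = sensor_range_mm - stopping_distance_mm;

    /* Time to cover that slack at the current speed, in milliseconds. */
    return (slack_mm * 1000u) / speed_mm_per_s;
}

/* Example: 800 mm range, 300 mm stopping distance, 0.5 m/s
   -> the detector must be polled at least every 1000 ms. */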
Therefore, we introduced the notion of an event channel to cope with differing properties and requirements and to have an object to which we can assign resources and reservations. The concept of an event channel is not new [10, 25]; however, it has not yet been used to reflect the properties of the underlying heterogeneous communication networks and mechanisms as described by the GEAR architecture. Rather, existing event middleware only allows the priorities or deadlines of events handled in an event server to be specified. Event channels make it possible to specify the communication properties at the level of the event system in a fine grained way. An event channel is defined by: event channel := (subject, quality attribute list, handlers). The subject determines the types of events which may be issued to the channel. The quality attributes model the properties of the underlying communication network and dissemination scheme. These attributes include latency specifications, dissemination constraints and reliability parameters. The notion of zones, which represent a guaranteed quality of service in a subnetwork, supports this approach. Our goal is to handle the temporal specifications as (bound, coverage) pairs [28], orthogonal to the more technical questions of how to achieve a certain synchrony property of the dissemination infrastructure. Currently, we support quality attributes of event channels in a CAN-Bus environment represented by explicit synchrony classes. The COSMIC middleware maps the channel properties to lower level protocols of the regular network. Based on our previous work on predictable protocols for the CAN-Bus, COSMIC defines an abstract network which provides hard, soft and non real-time message classes [21]. Correspondingly, we distinguish three event channel classes according to their synchrony properties: hard real-time channels, soft real-time channels and non-real-time channels. Hard real-time event channels (HRTEC) guarantee event propagation within the defined time constraints in the presence of a specified number of omission faults. HRTECs are supported by a reservation scheme which is similar to the scheme used in time-triggered protocols like TTP [16] [31], TTP/A [17], and TTCAN [8]. However, a substantial advantage over a TDMA scheme is that, due to CAN-Bus properties, bandwidth which was reserved but is not needed by an HRTEC can be used by less critical traffic [21]. Soft real-time event channels (SRTEC) exploit the temporal validity interval of events to derive deadlines for scheduling. The validity interval defines the point in time after which an event becomes temporally inconsistent. Therefore, in a real-time system an event is useless after this point and may be discarded. The transmission deadline (DL) is defined as the latest point in time when a message has to be transmitted and is specified in a time interval which is derived from the expiration time: t_event_ready < DL < t_expiration − Δ_notification. Here, t_expiration defines the point in time when the temporal validity expires, and Δ_notification is the expected end-to-end latency, which includes the transfer time over the network and the time the event may be delayed by the local event handling in the nodes. As said before, event deadlines are used to schedule the dissemination by SRTECs. However, deadlines may be missed in transient overload situations or due to arbitrary arrival times of events. On the publisher side, the application's exception handler is called whenever the event deadline expires before event transmission.
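A minimal C sketch of the channel definition and the soft real-time deadline rule above is given here; the type names and the helper function are assumptions for illustration, not the actual COSMIC interface.

#include <stdint.h>
#include <stdbool.h>

/* Sketch of an event channel descriptor following the definition
   channel := (subject, quality attribute list, handlers). */
typedef enum { CHAN_HRT, CHAN_SRT, CHAN_NRT } synchrony_class_t;

typedef struct {
    uint16_t          subject;          /* type of events on this channel */
    synchrony_class_t sync_class;       /* hard / soft / non real-time    */
    uint32_t          latency_bound_us; /* bound of the (bound, coverage) pair */
    double            coverage;         /* coverage of the pair           */
    uint8_t           omission_degree;  /* tolerated omissions            */
} event_channel_t;

/* Transmission deadline of a soft real-time event:
   t_event_ready < DL < t_expiration - delta_notification.
   Returns false when no feasible deadline exists and the event
   should be dropped. */
static bool srt_deadline(uint64_t t_event_ready_us,
                         uint64_t t_expiration_us,
                         uint32_t delta_notification_us,
                         uint64_t *deadline_us)
{
    if (t_expiration_us < delta_notification_us)
        return false;
    uint64_t dl = t_expiration_us - delta_notification_us;
    if (dl <= t_event_ready_us)
        return false;   /* event could not reach the subscriber in time */
    *deadline_us = dl;
    return true;
}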
At this point in time the event is also not expected to arrive at the subscriber side before the validity expires. Therefore, the event is removed from the sending queue. On the subscriber side the expiration time is used to schedule the delivery of the event. If the event cannot be delivered until its expiration time it is removed from the respective queues allocated by the COSMIC middleware. This prevents the communication system to be loaded by outdated messages. Non-real-time channels do not assume any temporal specification and disseminate events in a best effort manner. An instance of an event channel is created locally, whenever a publisher makes an announcement for publication or a subscriber subscribes for an event notification. When a publisher announces publication, the respective data structures of an event channel are created by the middleware. When a subscriber subscribes to an event channel, it may specify context attributes of an event which are used to filter events locally. E.g. a subscriber may only be interested in events generated at a certain location. Additionally the subscriber specifies quality properties of the event channel. A more detailed description of the event channels can be found in [13]. Currently, COSMIC handles all event channels which disseminate events beyond the CAN network boundary as non real-time event channels. This is mainly because we use the TCP/IP protocol to disseminate events over wireless links or to the standard Ethernet. However, there are a number of possible improvements which can easily be integrated in the event channel model. The Timely Computing Base (TCB) [28] can be exploited for timing failure detection and thus would provide awareness for event dissemination in environments where timely delivery of events cannot be enforced. Additionally, there are wireless protocols which can provide timely and reliable message delivery [6, 23] which may be exploited for the respective event channel classes. Events are the information carriers which are exchanged between sentient objects through event channels. To cope with the requirements of an ad-hoc environment, an event includes the description of the context in which it has been generated and quality attributes defining requirements for dissemination. This is particularly important in an open, dynamic environment where an event may travel over multiple networks. An event instance is specified as: A subject defines the type of the event and is related to the event contents. It supports anonymous communication and is used to route an event. The subject has to match to the subject of the event channel through which the event is disseminated. Attributes are complementary to the event contents. They describe individual functional and non-functional properties of the event. The context attributes describe the environment in which the event has been generated, e.g. a location, an operational mode or a time of occurrence. The quality attributes specify timeliness and dependability aspects in terms of (validity interval, omission degree) pairs. The validity interval defines the point in time after which an event becomes temporally inconsistent [18]. As described above, the temporal validity can be mapped to a deadline. However, usually a deadline is an engineering artefact which is used for scheduling while the temporal validity is a general property of a (time, value) entity. 
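Since the formal event tuple is not reproduced here, the following C sketch is only an assumption based on the fields described in the text (subject, context attributes, quality attributes and contents); it also shows the kind of local context filtering and temporal consistency check a subscriber could apply.

#include <stdint.h>
#include <stdbool.h>

/* Assumed layout of an event instance: subject plus context attributes
   (e.g. location, time of occurrence) and quality attributes
   (validity interval, omission degree). Illustrative only. */
typedef struct {
    int32_t  x_mm, y_mm;          /* location where the event was generated */
    uint64_t occurrence_time_us;  /* time of occurrence                     */
} context_attr_t;

typedef struct {
    uint32_t validity_interval_us;
    uint8_t  omission_degree;
} quality_attr_t;

typedef struct {
    uint16_t       subject;       /* matches the channel subject            */
    context_attr_t context;
    quality_attr_t quality;
    uint8_t        payload[8];    /* event contents, e.g. a CAN-sized value */
} event_t;

/* Local filtering at the subscriber: deliver only events generated within
   'radius_mm' of a point of interest and still temporally consistent. */
static bool deliver(const event_t *e, int32_t px, int32_t py,
                    int32_t radius_mm, uint64_t now_us)
{
    int64_t dx = (int64_t)e->context.x_mm - px;
    int64_t dy = (int64_t)e->context.y_mm - py;
    bool near  = dx * dx + dy * dy <= (int64_t)radius_mm * radius_mm;
    bool valid = (now_us - e->context.occurrence_time_us)
                 <= e->quality.validity_interval_us;
    return near && valid;
}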
In an environment where a deadline cannot be enforced, a consumer of an event eventually must decide whether the event is still temporally consistent, i.e. represents a valid (time, value) entity. 7.1 The Architecture of the COSMIC Middleware On the architectural level, COSMIC distinguishes three layers, roughly depicted in Figure 4. Two of them, the event layer and the abstract network layer, are implemented by the COSMIC middleware. The event layer provides the API for the application and realizes the abstractions of events and event channels. The abstract network implements real-time message classes and adapts the quality requirements to the underlying real network. An event channel handler resides in every node. It supports the programming interface and provides the necessary data structures for event-based communication. Whenever an object subscribes to a channel or a publisher announces a channel, the event channel handler is involved. It initiates the binding of the channel's subject, which is represented by a network-independent unique identifier, to an address of the underlying abstract network to enable communication [14]. The event channel handler then tightly cooperates with the respective handlers of the abstract network layer to disseminate events or receive event notifications. It should be noted that the QoS properties of the event layer in general depend on what the abstract network layer can provide. Thus, it may not always be possible to e.g. support hard real-time event channels, because the abstract network layer cannot provide the respective guarantees. In [13], we describe the protocols and services of the abstract network layer, particularly for the CAN-Bus. As can be seen in Figure 4, the hard real-time (HRT) message class is supported by a dedicated handler which is able to provide the time-triggered message dissemination.
Figure 4: Architecture layers of COSMIC.
The HRT handler maintains the HRT message list, which contains an entry for each local HRT message to be sent. The entry holds the parameters for the message, the activation status and the binding information. Messages are scheduled on the bus according to the HRT message calendar, which comprises the precise start time for each time slot allocated for a message. Soft real-time message queues order outgoing messages according to their transmission deadlines derived from the temporal validity interval. If the transmission deadline is exceeded, the event message is purged from the queue. The respective application is notified via the exception notification interface and can take actions like trying to publish the event again or publishing it to a channel of another class. Incoming event messages are ordered according to their temporal validity. If an event message arrives, the respective applications are notified. At the moment, an outdated message is deleted from the queue and, if the queue runs out of space, the oldest message is discarded. However, other policies are possible depending on event attributes and available memory space. Non real-time messages are FIFO ordered in a fixed size circular buffer. 7.2 Status of COSMIC The goal in developing COSMIC was to provide a platform to seamlessly integrate tiny smart components in a large system. Therefore, COSMIC should also run on the small, resource-constrained devices which are built around 16-bit or even 8-bit micro-controllers. The distributed COSMIC middleware has been implemented and tested on various platforms.
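A simplified sketch of the purge step for the soft real-time send queue described above is given here; the queue layout, its fixed size and the notification callback are illustrative assumptions rather than the actual COSMIC data structures.

#include <stdint.h>

#define SRT_QUEUE_LEN 16

typedef struct {
    uint64_t deadline_us;   /* transmission deadline of the message */
    uint16_t subject;
} srt_entry_t;

typedef struct {
    srt_entry_t entry[SRT_QUEUE_LEN];
    int         count;      /* entries are kept sorted by ascending deadline */
} srt_queue_t;

typedef void (*deadline_miss_cb)(uint16_t subject);

/* Drop all entries whose transmission deadline has already passed at 'now'
   and notify the publisher via the exception notification callback. */
static void srt_purge_expired(srt_queue_t *q, uint64_t now_us,
                              deadline_miss_cb notify)
{
    int kept = 0;
    for (int i = 0; i < q->count; i++) {
        if (q->entry[i].deadline_us <= now_us)
            notify(q->entry[i].subject);     /* exception notification */
        else
            q->entry[kept++] = q->entry[i];  /* keep, order preserved  */
    }
    q->count = kept;
}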
Under RT-Linux, we support the real-time channels over the CAN-Bus as described above. The RT-Linux version runs on Pentium processors and is currently being evaluated before we port it to a smart sensor or actuator. For interoperability in a WAN-of-CANs environment, we only provide non real-time channels at the moment. This version includes a gateway between the CAN-Bus and a TCP/IP network. It allows us to use a standard wireless 802.11 network. The non real-time version of COSMIC is available on Linux, RT-Linux and on the microcontroller families C167 (Infineon) and 68HC908 (Motorola). Both micro-controllers have an on-board CAN controller and thus do not require additional hardware components for the network. The memory footprint of COSMIC is about 13 Kbyte on a C167 and slightly more on the 68HC908, where it fits into the on-board flash memory without problems. Because only a few channels are required on such a smart sensor or actuator component, the amount of RAM (which is a scarce resource on many single chip systems) needed to hold the dynamic data structures of a channel is low. The COSMIC middleware makes it very easy to include new smart sensors in an existing system. In particular, the application running on a smart sensor to condition and process the raw physical data does not need to be aware of any low level network specific details. It seamlessly interacts with other components of the system exclusively via event channels. The demo example, briefly described in the next chapter, uses a distributed infrastructure of tiny smart sensors and actuators directly cooperating via event channels over heterogeneous networks. 8. AN ILLUSTRATIVE EXAMPLE A simple example for many important properties of the proposed system, showing the coordination through the environment and events disseminated over the network, is the demo of two cooperating robots depicted in Figure 5. Each robot is equipped with smart distance sensors, speed sensors and acceleration sensors, and one of the robots (the "guide" (KURT2) in front (Figure 5)) has a tracking camera that allows it to follow a white line. The robots form a WAN-of-CANs system in which their local CANs are interconnected via a wireless 802.11 network. COSMIC provides the event layer for seamless interaction. The "blind" robot (N.N.) searches for the guide randomly. Whenever the blind robot detects (by its front distance sensors) an obstacle, it checks whether this may be the guide. For this purpose, it dynamically subscribes to the event channel disseminating distance events from the rear distance sensors of the guide(s) and compares these with the distance events from its local front sensors. If the distance is approximately the same, it infers that it is really behind a guide. Now N.N. also subscribes to the event channels of the tracking camera and the speed sensors to follow the guide.
Figure 5: Cooperating robots.
The demo application highlights the following properties of the system: 1. Dynamic interaction of robots which is not known in advance. In principle, any two a priori unknown robots can cooperate. All that publishers and subscribers have to know to dynamically interact in this environment is the subject of the respective event class. A problem will be to receive only the events of the robot which is closest. A robot identity does not help much to solve this problem. Rather, the position of the event generation entity, which is captured in the respective attributes, can be evaluated to filter the relevant event out of the event stream.
A suitable wireless protocol which uses proximity to filter events has been proposed by Meier and Cahill [22] in the CORTEX project. 2. Interaction through the environment. The cooperation between the robots is controlled by sensing the distance between the robots. If the guide detects that the distance grows, it slows down. Respectively, if the blind robot comes too close it reduces its speed. The local distance sensors produce events which are disseminated through a low latency, highly predictable event channel. The respective reaction time can be calculated as function of the speed and the distance of the robots and define a dynamic dissemination deadline for events. Thus, the interaction through the environment will secure the safety properties of the application, i.e. the follower may not crash into the guide and the guide may not loose the follower. Additionally, the robots have remote subscriptions to the respective distance events which are used to check it with the local sensor readings to validate that they really follow the guide which they detect with their local sensors. Because there may be longer latencies and omissions, this check occasionally will not be possible. The unavailability of the remote events will decrease the quality of interaction and probably and slow down the robots, but will not affect safety properties. 3. Cooperative sensing. The blind robot subscribes to the events of the line tracking camera. Thus it can" see" through the eye of the guide. Because it knows the distance to the guide and the speed as well, it can foresee the necessary movements. The proposed system provides the architectural framework for such a cooperation. The respective sentient object controlling the actuation of the robot receives as input the position and orientation of the white line to be tracked. In the case of the guide robot, this information is directly delivered as a body event with a low latency and a high reliability over the internal network. For the follower robot, the information comes also via an event channel but with different quality attributes. These quality attributes are reflected in the event channel description. The sentient object controlling the actuation of the follower is aware of the increased latency and higher probability of omission. 9. CONCLUSION AND FUTURE WORK The paper addresses problems of building large distributed systems interacting with the physical environment and being composed from a huge number of smart components. We cannot assume that the network architecture in such a system is homogeneous. Rather multiple "edge -" networks are fused to a hierarchical, heterogeneous wide area network. They connect the tiny sensors and actuators perceiving the environment and providing sentience to the application. Additionally, mobility and dynamic deployment of components require the dynamic interaction without fixed, a priori known addressing and routing schemes. The work presented in the paper is a contribution towards the seamless interaction in such an environment which should not be restricted by technical obstacles. Rather it should be possible to control the flow of information by explicitly specifying functional and temporal dissemination constraints. The paper presented the general model of a sentient object to describe composition, encapsulation and interaction in such an environment and developed the Generic Event Architecture GEAR which integrates the interaction through the environment and the network. 
While appropriate abstractions and interaction models can hide the functional heterogeneity of the networks, it is impossible to hide the quality differences. Therefore, one of the main concerns is to define temporal properties in such an open infrastructure. The notion of an event channel has been introduced which allows to specify quality aspects explicitly. They can be verified at subscription and define a boundary for event dissemination. The COSMIC middleware is a first attempt to put these concepts into operation. COSMIC allows the interoperability of tiny components over multiple network boundaries and supports the definition of different real-time event channel classes. There are many open questions that emerged from our work. One direction of future research will be the inclusion of real-world communication channels established between sensors and actuators in the temporal analysis and the ordering of such events in a cause-effect chain. Additionally, the provision of timing failure detection for the adaptation of interactions will be in the focus of our research. To reduce network traffic and only disseminate those events to the subscribers which they are really interested in and which have a chance to arrive timely, the encapsulation and scoping schemes have to be transformed into respective multi-level filtering rules. The event attributes which describe aspects of the context and temporal constraints for the dissemination will be exploited for this purpose. Finally, it is intended to integrate the results in the COSMIC middleware to enable experimental assessment.
An Architectural Framework and a Middleware for Cooperating Smart Components * U.Lisboa U.Ulm U.Lisboa casim@di.fc.ul.pt kaiser@informatik.uni- pjv@di.fc.ul.pt ulm.de ABSTRACT In a future networked physical world, a myriad of smart sensors and actuators assess and control aspects of their environments and autonomously act in response to it. Examples range in telematics, traffic management, team robotics or home automation to name a few. To a large extent, such systems operate proactively and independently of direct human control driven by the perception of the environment and the ability to organize respective computations dynamically. The challenging characteristics of these applications include sentience and autonomy of components, issues of responsiveness and safety criticality, geographical dispersion, mobility and evolution. A crucial design decision is the choice of the appropriate abstractions and interaction mechanisms. Looking to the basic building blocks of such systems we may find components which comprise mechanical components, hardware and software and a network interface, thus these components have different characteristics compared to pure software components. They are able to spontaneously disseminate information in response to events observed in the physical environment or to events received from other component via the network interface. Larger autonomous components may be composed recursively from these building blocks. The paper describes an architectural framework and a middleware supporting a component-based system and an integrated view on events-based communication comprising the real world events and the events generated in the system. It starts by an outline of the component-based system construction. The generic event architecture GEAR is introduced which describes the event-based interaction between the components via a generic event layer. The generic event layer hides the different communication channels including * This work was partially supported by the EC, through project IST-2000-26031 (CORTEX), and by the FCT, through the Large-Scale Informatic Systems Laboratory (LaSIGE) and project POSI/1999/CHS / 33996 (DEFEATS). the interactions through the environment. An appropriate middleware is presented which reflects these needs and allows to specify events which have quality attributes to express temporal constraints. This is complemented by the notion of event channels which are abstractions of the underlying network and allow to enforce quality attributes. They are established prior to interaction to reserve the needed computational and network resources for highly predictable event dissemination. 1. INTRODUCTION In recent years we have seen the continuous improvement of technologies that are relevant for the construction of distributed embedded systems, including trustworthy visual, auditory, and location sensing [11], communication and processing. We believe that in a future networked physical world a new class of applications will emerge, composed of a myriad of smart sensors and actuators to assess and control aspects of their environments and autonomously act in response to it. The anticipated challenging characteristics of these applications include autonomy, responsiveness and safety criticality, large scale, geographical dispersion, mobility and evolution. In order to deal with these challenges, it is of fundamental importance to use adequate high-level models, abstractions and interaction paradigms. 
Unfortunately, when facing the specific characteristics of the target systems, the shortcomings of current architectures and middleware interaction paradigms become apparent. Looking to the basic building blocks of such systems we may find components which comprise mechanical parts, hardware, software and a network interface. However, classical event/object models are usually software oriented and, as such, when trans ported to a real-time, embedded systems setting, their harmony is cluttered by the conflict between, on the one side, send/receive of "software" events (message-based), and on the other side, input/output of "hardware" or "real-world" events, register-based. In terms of interaction paradigms, and although the use of event-based models appears to be a convenient solution [10, 22], these often lack the appropriate support for non-functional requirements like reliability, timeliness or security. This paper describes an architectural framework and a middleware, supporting a component-based system and an integrated view on event-based communication comprising the real world events and the events generated in the system. When choosing the appropriate interaction paradigm it is of fundamental importance to address the challenging issues of the envisaged sentient applications. Unlike classical approaches that confine the possible interactions to the application boundaries, i.e. to its components, we consider that the environment surrounding the application also plays a relevant role in this respect. Therefore, the paper starts by clarifying several issues concerning our view of the system, about the interactions that may take place and about the information flows. This view is complemented by providing an outline of the component-based system construction and, in particular, by showing that it is possible to compose larger applications from basic components, following an hierarchical composition approach. This provides the necessary background to introduce the Generic-Events Architecture (GEAR), which describes the event-based interaction between the components via a generic event layer while allowing the seamless integration of physical and computer information flows. In fact, the generic event layer hides the different communication channels, including the interactions through the environment. Additionally, the event layer abstraction is also adequate for the proper handling of the non-functional requirements, namely reliability and timeliness, which are particularly stringent in real-time settings. The paper devotes particular attention to this issue by discussing the temporal aspects of interactions and the needs for predictability. An appropriate middleware is presented which reflects these needs and allows to specify events which have quality attributes to express temporal constraints. This is complemented by the notion of Event Channels (EC), which are abstractions of the underlying network while being abstracted by the event layer. In fact, event channels play a fundamental role in securing the functional and non-functional (e.g. reliability and timeliness) properties of the envisaged applications, that is, in allowing the enforcement of quality attributes. They are established prior to interaction to reserve the needed computational and network resources for highly predictable event dissemination. The paper is organized as follows. 
In Section 3 we introduce the fundamental notions and abstractions that we adopt in this work to describe the interactions taking place in the system. Then, in Section 4, we describe the componentbased approach that allows composition of objects. GEAR is then described in Section 5 and Section 6 focuses on temporal aspects of the interactions. Section 7 describes the COSMIC middleware, which may be used to specify the interaction between sentient objects. A simple example to highlight the ideas presented in the paper appears in Section 8 and Section 9 concludes the paper. 2. RELATED WORK Our work considers a wired physical world in which a very large number of autonomous components cooperate. It is inspired by many research efforts in very different areas. Event-based systems in general have been introduced to meet the requirements of applications in which entities spontaneously generate information and disseminate it [1, 25, 22]. Intended for large systems and requiring quite complex infrastructures, these event systems do not consider stringent quality aspects like timeliness and dependability issues. Secondly, they are not created to support inter-operability between tiny smart devices with substantial resource constraints. In [10] a real-time event system for CORBA has been introduced. The events are routed via a central event server which provides scheduling functions to support the real-time requirements. Such a central component is not available in an infrastructure envisaged in our system architecture and the developed middleware TAO (The Ace Orb) is quite complex and unsuitable to be directly integrated in smart devices. There are efforts to implement CORBA for control networks, tailored to connect sensor and actuator components [15, 19]. They are targeted for the CAN-Bus [9], a popular network developed for the automotive industry. However, in these approaches the support for timeliness or dependability issues does not exist or is only very limited. A new scheme to integrate smart devices in a CORBA environment is proposed in [17] and has lead to the proposal of a standard by the Object Management Group (OMG) [26]. Smart transducers are organized in clusters that are connected to a CORBA system by a gateway. The clusters form isolated subnetworks. A special master node enforces the temporal properties in the cluster subnet. A CORBA gateway allows to access sensor data and write actuator data by means of an interface file system (IFS). The basic structure is similar to the WAN-of-CANs structure which has been introduced in the CORTEX project [4]. Islands of tight control may be realized by a control network and cooperate via wired or wireless networks covering a large number of these subnetworks. However, in contrast to the event channel model introduced in this paper, all communication inside a cluster relies on a single technical solution of a synchronous communication channel. Secondly, although the temporal behaviour of a single cluster is rigorously defined, no model to specify temporal properties for clusterto-CORBA or cluster-to-cluster interactions is provided. 3. INFORMATION FLOW AND INTERACTION MODEL 4. SENTIENT OBJECT COMPOSITION 4.1 Component-based System Construction 4.2 Encapsulation and Scoping 5. A GENERIC EVENTS ARCHITECTURE 5.1 Information Flow in GEAR 6. TEMPORAL ASPECTS OF THE INTERACTIONS 6.1 The Role of Smart Sensors and Actuators 7. 
AN EVENT MODEL AND MIDDLEWARE FOR COOPERATING SMART DEVICES 7.1 The Architecture of the COSMIC Middleware 7.2 Status of COSMIC 8. AN ILLUSTRATIVE EXAMPLE 9. CONCLUSION AND FUTURE WORK The paper addresses problems of building large distributed systems interacting with the physical environment and being composed from a huge number of smart components. We cannot assume that the network architecture in such a system is homogeneous. Rather multiple "edge -" networks are fused to a hierarchical, heterogeneous wide area network. They connect the tiny sensors and actuators perceiving the environment and providing sentience to the application. Additionally, mobility and dynamic deployment of components require the dynamic interaction without fixed, a priori known addressing and routing schemes. The work presented in the paper is a contribution towards the seamless interaction in such an environment which should not be restricted by technical obstacles. Rather it should be possible to control the flow of information by explicitly specifying functional and temporal dissemination constraints. The paper presented the general model of a sentient object to describe composition, encapsulation and interaction in such an environment and developed the Generic Event Architecture GEAR which integrates the interaction through the environment and the network. While appropriate abstractions and interaction models can hide the functional heterogeneity of the networks, it is impossible to hide the quality differences. Therefore, one of the main concerns is to define temporal properties in such an open infrastructure. The notion of an event channel has been introduced which allows to specify quality aspects explicitly. They can be verified at subscription and define a boundary for event dissemination. The COSMIC middleware is a first attempt to put these concepts into operation. COSMIC allows the interoperability of tiny components over multiple network boundaries and supports the definition of different real-time event channel classes. There are many open questions that emerged from our work. One direction of future research will be the inclusion of real-world communication channels established between sensors and actuators in the temporal analysis and the ordering of such events in a cause-effect chain. Additionally, the provision of timing failure detection for the adaptation of interactions will be in the focus of our research. To reduce network traffic and only disseminate those events to the subscribers which they are really interested in and which have a chance to arrive timely, the encapsulation and scoping schemes have to be transformed into respective multi-level filtering rules. The event attributes which describe aspects of the context and temporal constraints for the dissemination will be exploited for this purpose. Finally, it is intended to integrate the results in the COSMIC middleware to enable experimental assessment.
An Architectural Framework and a Middleware for Cooperating Smart Components * U.Lisboa U.Ulm U.Lisboa casim@di.fc.ul.pt kaiser@informatik.uni- pjv@di.fc.ul.pt ulm.de ABSTRACT In a future networked physical world, a myriad of smart sensors and actuators assess and control aspects of their environments and autonomously act in response to it. Examples range in telematics, traffic management, team robotics or home automation to name a few. To a large extent, such systems operate proactively and independently of direct human control driven by the perception of the environment and the ability to organize respective computations dynamically. The challenging characteristics of these applications include sentience and autonomy of components, issues of responsiveness and safety criticality, geographical dispersion, mobility and evolution. A crucial design decision is the choice of the appropriate abstractions and interaction mechanisms. Looking to the basic building blocks of such systems we may find components which comprise mechanical components, hardware and software and a network interface, thus these components have different characteristics compared to pure software components. They are able to spontaneously disseminate information in response to events observed in the physical environment or to events received from other component via the network interface. Larger autonomous components may be composed recursively from these building blocks. The paper describes an architectural framework and a middleware supporting a component-based system and an integrated view on events-based communication comprising the real world events and the events generated in the system. It starts by an outline of the component-based system construction. The generic event architecture GEAR is introduced which describes the event-based interaction between the components via a generic event layer. The generic event layer hides the different communication channels including * This work was partially supported by the EC, through project IST-2000-26031 (CORTEX), and by the FCT, through the Large-Scale Informatic Systems Laboratory (LaSIGE) and project POSI/1999/CHS / 33996 (DEFEATS). the interactions through the environment. An appropriate middleware is presented which reflects these needs and allows to specify events which have quality attributes to express temporal constraints. This is complemented by the notion of event channels which are abstractions of the underlying network and allow to enforce quality attributes. They are established prior to interaction to reserve the needed computational and network resources for highly predictable event dissemination. 1. INTRODUCTION In order to deal with these challenges, it is of fundamental importance to use adequate high-level models, abstractions and interaction paradigms. Unfortunately, when facing the specific characteristics of the target systems, the shortcomings of current architectures and middleware interaction paradigms become apparent. Looking to the basic building blocks of such systems we may find components which comprise mechanical parts, hardware, software and a network interface. This paper describes an architectural framework and a middleware, supporting a component-based system and an integrated view on event-based communication comprising the real world events and the events generated in the system. When choosing the appropriate interaction paradigm it is of fundamental importance to address the challenging issues of the envisaged sentient applications. 
Unlike classical approaches that confine the possible interactions to the application boundaries, i.e. to its components, we consider that the environment surrounding the application also plays a relevant role in this respect. Therefore, the paper starts by clarifying several issues concerning our view of the system, about the interactions that may take place and about the information flows. This view is complemented by providing an outline of the component-based system construction and, in particular, by showing that it is possible to compose larger applications from basic components, following an hierarchical composition approach. This provides the necessary background to introduce the Generic-Events Architecture (GEAR), which describes the event-based interaction between the components via a generic event layer while allowing the seamless integration of physical and computer information flows. In fact, the generic event layer hides the different communication channels, including the interactions through the environment. Additionally, the event layer abstraction is also adequate for the proper handling of the non-functional requirements, namely reliability and timeliness, which are particularly stringent in real-time settings. The paper devotes particular attention to this issue by discussing the temporal aspects of interactions and the needs for predictability. An appropriate middleware is presented which reflects these needs and allows to specify events which have quality attributes to express temporal constraints. This is complemented by the notion of Event Channels (EC), which are abstractions of the underlying network while being abstracted by the event layer. In fact, event channels play a fundamental role in securing the functional and non-functional (e.g. reliability and timeliness) properties of the envisaged applications, that is, in allowing the enforcement of quality attributes. They are established prior to interaction to reserve the needed computational and network resources for highly predictable event dissemination. The paper is organized as follows. In Section 3 we introduce the fundamental notions and abstractions that we adopt in this work to describe the interactions taking place in the system. Then, in Section 4, we describe the componentbased approach that allows composition of objects. GEAR is then described in Section 5 and Section 6 focuses on temporal aspects of the interactions. Section 7 describes the COSMIC middleware, which may be used to specify the interaction between sentient objects. A simple example to highlight the ideas presented in the paper appears in Section 8 and Section 9 concludes the paper. 2. RELATED WORK Our work considers a wired physical world in which a very large number of autonomous components cooperate. It is inspired by many research efforts in very different areas. Event-based systems in general have been introduced to meet the requirements of applications in which entities spontaneously generate information and disseminate it [1, 25, 22]. Intended for large systems and requiring quite complex infrastructures, these event systems do not consider stringent quality aspects like timeliness and dependability issues. Secondly, they are not created to support inter-operability between tiny smart devices with substantial resource constraints. In [10] a real-time event system for CORBA has been introduced. The events are routed via a central event server which provides scheduling functions to support the real-time requirements. 
Such a central component is not available in an infrastructure envisaged in our system architecture and the developed middleware TAO (The Ace Orb) is quite complex and unsuitable to be directly integrated in smart devices. There are efforts to implement CORBA for control networks, tailored to connect sensor and actuator components [15, 19]. They are targeted for the CAN-Bus [9], a popular network developed for the automotive industry. Smart transducers are organized in clusters that are connected to a CORBA system by a gateway. The clusters form isolated subnetworks. A special master node enforces the temporal properties in the cluster subnet. A CORBA gateway allows to access sensor data and write actuator data by means of an interface file system (IFS). Islands of tight control may be realized by a control network and cooperate via wired or wireless networks covering a large number of these subnetworks. However, in contrast to the event channel model introduced in this paper, all communication inside a cluster relies on a single technical solution of a synchronous communication channel. Secondly, although the temporal behaviour of a single cluster is rigorously defined, no model to specify temporal properties for clusterto-CORBA or cluster-to-cluster interactions is provided. 9. CONCLUSION AND FUTURE WORK The paper addresses problems of building large distributed systems interacting with the physical environment and being composed from a huge number of smart components. We cannot assume that the network architecture in such a system is homogeneous. Rather multiple "edge -" networks are fused to a hierarchical, heterogeneous wide area network. They connect the tiny sensors and actuators perceiving the environment and providing sentience to the application. Additionally, mobility and dynamic deployment of components require the dynamic interaction without fixed, a priori known addressing and routing schemes. The work presented in the paper is a contribution towards the seamless interaction in such an environment which should not be restricted by technical obstacles. Rather it should be possible to control the flow of information by explicitly specifying functional and temporal dissemination constraints. The paper presented the general model of a sentient object to describe composition, encapsulation and interaction in such an environment and developed the Generic Event Architecture GEAR which integrates the interaction through the environment and the network. While appropriate abstractions and interaction models can hide the functional heterogeneity of the networks, it is impossible to hide the quality differences. Therefore, one of the main concerns is to define temporal properties in such an open infrastructure. The notion of an event channel has been introduced which allows to specify quality aspects explicitly. They can be verified at subscription and define a boundary for event dissemination. The COSMIC middleware is a first attempt to put these concepts into operation. COSMIC allows the interoperability of tiny components over multiple network boundaries and supports the definition of different real-time event channel classes. There are many open questions that emerged from our work. One direction of future research will be the inclusion of real-world communication channels established between sensors and actuators in the temporal analysis and the ordering of such events in a cause-effect chain. 
Additionally, the provision of timing failure detection for the adaptation of interactions will be in the focus of our research. The event attributes which describe aspects of the context and temporal constraints for the dissemination will be exploited for this purpose. Finally, it is intended to integrate the results in the COSMIC middleware to enable experimental assessment.
J-49
Information Markets vs. Opinion Pools: An Empirical Comparison
In this paper, we examine the relative forecast accuracy of information markets versus expert aggregation. We leverage a unique data source of almost 2000 people's subjective probability judgments on 2003 US National Football League games and compare with the market probabilities given by two different information markets on exactly the same events. We combine assessments of multiple experts via linear and logarithmic aggregation functions to form pooled predictions. Prices in information markets are used to derive market predictions. Our results show that, at the same time point ahead of the game, information markets provide as accurate predictions as pooled expert assessments. In screening pooled expert predictions, we find that arithmetic average is a robust and efficient pooling function; weighting expert assessments according to their past performance does not improve accuracy of pooled predictions; and logarithmic aggregation functions offer bolder predictions than linear aggregation functions. The results provide insights into the predictive performance of information markets, and the relative merits of selecting among various opinion pooling methods.
[ "inform market", "opinion pool", "forecast", "expert aggreg", "market probabl", "pool predict", "price", "futur event", "expertis", "contract", "expert opinion", "predict accuraci" ]
[ "P", "P", "P", "P", "P", "P", "P", "M", "U", "U", "R", "R" ]
Information Markets vs. Opinion Pools: An Empirical Comparison Yiling Chen Chao-Hsien Chu Tracy Mullen School of Information Sciences & Technology The Pennsylvania State University University Park, PA 16802 {ychen|chu|tmullen}@ist. psu.edu David M. Pennock Yahoo! Research Labs 74 N. Pasadena Ave, 3rd Floor Pasadena, CA 91103 pennockd@yahoo-inc.com ABSTRACT In this paper, we examine the relative forecast accuracy of information markets versus expert aggregation. We leverage a unique data source of almost 2000 people``s subjective probability judgments on 2003 US National Football League games and compare with the market probabilities given by two different information markets on exactly the same events. We combine assessments of multiple experts via linear and logarithmic aggregation functions to form pooled predictions. Prices in information markets are used to derive market predictions. Our results show that, at the same time point ahead of the game, information markets provide as accurate predictions as pooled expert assessments. In screening pooled expert predictions, we find that arithmetic average is a robust and efficient pooling function; weighting expert assessments according to their past performance does not improve accuracy of pooled predictions; and logarithmic aggregation functions offer bolder predictions than linear aggregation functions. The results provide insights into the predictive performance of information markets, and the relative merits of selecting among various opinion pooling methods. Categories and Subject Descriptors J.4 [Computer Applications]: Social and Behavioral Sciences-economics General Terms Economics, Performance 1. INTRODUCTION Forecasting is a ubiquitous endeavor in human societies. For decades, scientists have been developing and exploring various forecasting methods, which can be roughly divided into statistical and non-statistical approaches. Statistical approaches require not only the existence of enough historical data but also that past data contains valuable information about the future event. When these conditions can not be met, non-statistical approaches that rely on judgmental information about the future event could be better choices. One widely used non-statistical method is to elicit opinions from experts. Since experts are not generally in agreement, many belief aggregation methods have been proposed to combine expert opinions together and form a single prediction. These belief aggregation methods are called opinion pools, which have been extensively studied in statistics [20, 24, 38], and management sciences [8, 9, 30, 31], and applied in many domains such as group decision making [29] and risk analysis [12]. With the fast growth of the Internet, information markets have recently emerged as a promising non-statistical forecasting tool. Information markets (sometimes called prediction markets, idea markets, or event markets) are markets designed for aggregating information and making predictions about future events. To form the predictions, information markets tie payoffs of securities to outcomes of events. For example, in an information market to predict the result of a US professional National Football League (NFL) game, say New England vs Carolina, the security pays a certain amount of money per share to its holders if and only if New England wins the game. Otherwise, it pays off nothing. The security price before the game reflects the consensus expectation of market traders about the probability of New England winning the game. 
Such markets are becoming very popular. The Iowa Electronic Markets (IEM) [2] are real-money futures markets to predict economic and political events such as elections. The Hollywood Stock Exchange (HSX) [3] is a virtual (play-money) exchange for trading securities to forecast future box office proceeds of new movies, and the outcomes of entertainment awards, etc.. TradeSports.com [7], a real-money betting exchange registered in Ireland, hosts markets for sports, political, entertainment, and financial events. The Foresight Exchange (FX) [4] allows traders to wager play money on unresolved scientific questions or other claims of public interest, and NewsFutures.com``s World News Exchange [1] has 58 popular sports and financial betting markets, also grounded in a play-money currency. Despite the popularity of information markets, one of the most important questions to ask is: how accurately can information markets predict? Previous research in general shows that information markets are remarkably accurate. The political election markets at IEM predict the election outcomes better than polls [16, 17, 18, 19]. Prices in HSX and FX have been found to give as accurate or more accurate predictions than judgment of individual experts [33, 34, 37]. However, information markets have not been calibrated against opinion pools, except for Servan-Schreiber et. al [36], in which the authors compare two information markets against arithmetic average of expert opinions. Since information markets, in nature, offer an adaptive and selforganized mechanism to aggregate opinions of market participants, it is interesting to compare them with existing opinion pooling methods, to evaluate the performance of information markets from another perspective. The comparison will provide beneficial guidance for practitioners to choose the most appropriate method for their needs. This paper contributes to the literature in two ways: (1) As an initial attempt to compare information markets with opinion pools of multiple experts, it leads to a better understanding of information markets and their promise as an alternative institution for obtaining accurate forecasts; (2) In screening opinion pools to be used in the comparison, we cast insights into relative performances of different opinion pools. In terms of prediction accuracy, we compare two information markets with several linear and logarithmic opinion pools (LinOP and LogOP) at predicting the results of NFL games. Our results show that at the same time point ahead of the game, information markets provide as accurate predictions as our carefully selected opinion pools. In selecting the opinion pools to be used in our comparison, we find that arithmetic average is a robust and efficient pooling function; weighting expert assessments according to their past performances does not improve the prediction accuracy of opinion pools; and LogOP offers bolder predictions than LinOP. The remainder of the paper is organized as follows. Section 2 reviews popular opinion pooling methods. Section 3 introduces the basics of information markets. Data sets and our analysis methods are described in Section 4. We present results and analysis in Section 5, followed by conclusions in Section 6. 2. REVIEW OF OPINION POOLS Clemen and Winkler [12] classify opinion pooling methods into two broad categories: mathematical approaches and behavioral approaches. 
In mathematical approaches, the opinions of individual experts are expressed as subjective probability distributions over outcomes of an uncertain event. They are combined through various mathematical methods to form an aggregated probability distribution. Genest and Zidek [24] and French [20] provide comprehensive reviews of mathematical approaches. Mathematical approaches can be further distinguished into axiomatic approaches and Bayesian approaches. Axiomatic approaches apply prespecified functions that map expert opinions, expressed as a set of individual probability distributions, to a single aggregated probability distribution. These pooling functions are justified using axioms or certain desirable properties. Two of the most common pooling functions are the linear opinion pool (LinOP) and the logarithmic opinion pool (LogOP). Using LinOP, the aggregate probability distribution is a weighted arithmetic mean of individual probability distributions: p(θ) = Σ_{i=1}^{n} w_i p_i(θ), (1) where p_i(θ) is expert i's probability distribution of uncertain event θ, p(θ) represents the aggregate probability distribution, the w_i's are weights for experts, which are usually nonnegative and sum to 1, and n is the number of experts. Using LogOP, the aggregate probability distribution is a weighted geometric mean of individual probability distributions: p(θ) = k Π_{i=1}^{n} p_i(θ)^{w_i}, (2) where k is a normalization constant to ensure that the pooled opinion is a probability distribution. Other axiomatic pooling methods are often extensions of LinOP [22], LogOP [23], or both [13]. Winkler [39] and Morris [29, 30] establish the early framework of Bayesian aggregation methods. Bayesian approaches assume that there is a decision maker who has a prior probability distribution over event θ and a likelihood function over expert opinions given the event. This decision maker takes expert opinions as evidence and updates its priors over the event and opinions according to Bayes' rule. The resulting posterior probability distribution of θ is the pooled opinion. Behavioral approaches have been widely studied in the field of group decision making and organizational behavior. The important assumption of behavioral approaches is that, through exchanging opinions or information, experts can eventually reach an equilibrium where further interaction won't change their opinions. One of the best known behavioral approaches is the Delphi technique [28]. Typically, this method and its variants do not allow open discussion, but each expert has a chance to judge the opinions of other experts, and is given feedback. Experts then can reassess their opinions and repeat the process until a consensus or a smaller spread of opinions is achieved. Some other behavioral methods, such as the Nominal Group technique [14], promote open discussions in controlled environments. Each approach has its pros and cons. Axiomatic approaches are easy to use. But they don't have a normative basis to choose weights. In addition, several impossibility results (e.g., Genest [21]) show that no aggregation function can satisfy all desired properties of an opinion pool, unless the pooled opinion degenerates to a single individual opinion, which effectively implies a dictator. Bayesian approaches are nicely based on the normative Bayesian framework.
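As a small, self-contained illustration of equations (1) and (2) for a binary event, the following C sketch pools three made-up expert probabilities with equal weights; the numbers are not from the data set, and the normalization constant k is computed for the two-outcome case. With these illustrative inputs the logarithmic pool (about 0.64) lies slightly further from 0.5 than the linear pool (about 0.63), a small instance of the bolder behaviour of LogOP noted later in the paper.

#include <math.h>
#include <stdio.h>

/* Linear opinion pool: weighted arithmetic mean of expert probabilities. */
static double linop(const double *p, const double *w, int n)
{
    double pool = 0.0;
    for (int i = 0; i < n; i++)
        pool += w[i] * p[i];
    return pool;
}

/* Logarithmic opinion pool for a binary event: weighted geometric mean,
   normalized over the two outcomes (this normalization is the constant k). */
static double logop(const double *p, const double *w, int n)
{
    double win = 1.0, lose = 1.0;
    for (int i = 0; i < n; i++) {
        win  *= pow(p[i],       w[i]);
        lose *= pow(1.0 - p[i], w[i]);
    }
    return win / (win + lose);
}

int main(void)
{
    double p[] = { 0.60, 0.75, 0.55 };     /* three made-up expert judgments */
    double w[] = { 1.0/3, 1.0/3, 1.0/3 };  /* equal weights                  */
    printf("LinOP: %.3f  LogOP: %.3f\n", linop(p, w, 3), logop(p, w, 3));
    return 0;
}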
However, it is sometimes frustratingly difficult to apply because it requires either (1) constructing an obscenely complex joint prior over the event and opinions (often impractical even in terms of storage/space complexity, not to mention from an elicitation standpoint) or (2) making strong assumptions about the prior, like conditional independence of experts. Behavioral approaches allow experts to dynamically improve their information and revise their opinions during interactions, but many of them are not fixed or completely specified, and can't guarantee convergence or repeatability.

3. HOW INFORMATION MARKETS WORK

Much of the enthusiasm for information markets stems from the Hayek hypothesis [26] and the efficient market hypothesis [15]. Hayek, in his classic critique of central planning in the 1940s, claims that the price system in a competitive market is a very efficient mechanism to aggregate dispersed information among market participants. The efficient market hypothesis further states that, in an efficient market, the price of a security almost instantly incorporates all available information. The market price summarizes all relevant information across traders, and hence is the market participants' consensus expectation about the future value of the security. Empirical evidence supports both hypotheses to a large extent [25, 27, 35].

Thus, when associating the value of a security with the outcome of an uncertain future event, the market price, by revealing the consensus expectation of the security value, can indirectly predict the outcome of the event. This idea gives rise to information markets. For example, if we want to predict which team will win the NFL game between New England and Carolina, an information market can trade a security "$100 if New England defeats Carolina", whose payoff per share at the end of the game is specified as follows: $100 if New England wins the game; $0 otherwise. The security price should roughly equal the expected payoff of the security in an efficient market. The time value of money usually can be ignored because the durations of most information markets are short. Assuming exposure to risk is roughly equal for both outcomes, or that there are sufficient effectively risk-neutral speculators in the market, the price should not be biased by the risk attitudes of various players in the market. Thus,

p = \Pr(\text{Patriots win}) \times 100 + [1 - \Pr(\text{Patriots win})] \times 0,

where p is the price of the security "$100 if New England defeats Carolina" and Pr(Patriots win) is the probability that New England will win the game. Observing the security price p before the game, we can derive Pr(Patriots win), which is the market participants' collective prediction about how likely it is that New England will win the game.

The above security is a winner-takes-all contract. It is used when the event to be predicted is a discrete random variable with disjoint outcomes (in this case binary). Its price predicts the probability that a specific outcome will be realized. When the outcome of a prediction problem can be any value in a continuous interval, we can design a security that pays its holder proportional to the realized value. This kind of security is what Wolfers and Zitzewitz [40] called an index contract. It predicts the expected value of a future outcome. Many other aspects of a future event, such as the median value of the outcome, can also be predicted in information markets by designing and trading different securities.
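To make the price-to-probability mapping concrete, the following is a minimal sketch in Python; the function name, the payoff of 100, and the example price are illustrative assumptions rather than part of any market's interface.

```python
# Minimal sketch: mapping a winner-takes-all security price to an implied
# win probability, following p = Pr(win) * 100 + (1 - Pr(win)) * 0.

def implied_win_probability(last_trade_price: float, payoff: float = 100.0) -> float:
    """Derive Pr(win) from the price of a security paying `payoff` if the team wins."""
    if not 0.0 <= last_trade_price <= payoff:
        raise ValueError("price must lie between 0 and the payoff")
    return last_trade_price / payoff

# Example: a last trade at 62 implies a 0.62 probability that the team wins.
print(implied_win_probability(62.0))  # 0.62
```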
Wolfers and Zitzewitz [40] provide a summary of the main types of securities traded in information markets and the statistical properties they can predict. In practice, conceiving a security for a prediction problem is only one of the many decisions in designing an effective information market. Spann and Skiera [37] propose an initial framework for designing information markets.

4. DESIGN OF ANALYSIS

4.1 Data Sets

Our data sets cover 210 NFL games held between September 28th, 2003 and December 28th, 2003. NFL games are very suitable for our purposes because: (1) two online exchanges and one online prediction contest already exist that provide data on both information markets and the opinions of self-identified experts for the same set of games; (2) the popularity of NFL games in the United States provides natural incentives for people to participate in information markets and/or the contest, which increases the liquidity of the information markets and improves the quality and number of opinions in the contest; and (3) intense media coverage and analysis of the profiles and strengths of teams and individual players provide the public with much information, so that participants of information markets and the contest can be viewed as knowledgeable regarding the forecasting goal.

Information market data was acquired, using a specially designed crawler program, from TradeSports.com's Football-NFL markets [7] and NewsFutures.com's Sports Exchange [1]. For each NFL game, both TradeSports and NewsFutures have a winner-takes-all information market to predict the game outcome. We introduce the design of the two markets according to Spann and Skiera's three steps for designing an information market [37], as below.

• Choice of forecasting goal: Markets at both TradeSports and NewsFutures aim at predicting which one of the two teams will win an NFL football game. They trade similar winner-takes-all securities that pay off 100 if a team wins the game and 0 if it loses the game. Small differences exist in how they deal with ties. In the case of a tie, TradeSports will unwind all trades that occurred and refund all exchange fees, but the security is worth 50 in NewsFutures. Since the probability of a tie is usually very low (much less than 1%), prices at both markets effectively represent the market participants' consensus assessment of the probability that the team will win.

• Incentive for participation and information revelation: TradeSports and NewsFutures use different incentives for participation and information revelation. TradeSports is a real-money exchange. A trader needs to open and fund an account with a minimum of $100 to participate in the market. Both profits and losses can occur as a result of trading activity. In contrast, a trader can register at NewsFutures for free and get 2000 units of Sports Exchange virtual money at the time of registration. Traders at NewsFutures will not incur any real financial loss. They can accumulate virtual money by trading securities. The virtual money can then be used to bid for a few real prizes at NewsFutures' online shop.

• Financial market design: Both markets at TradeSports and NewsFutures use the continuous double auction as their trading mechanism. TradeSports charges a small fee on each security transaction and expiry, while NewsFutures does not.

We can see that the main difference between the two information markets is real money vs. virtual money.
Servan-Schreiber et al. [36] have compared the effect of money on the performance of the two information markets and concluded that the prediction accuracy of the two markets is at about the same level. Not intending to compare these two markets, we still use both markets in our analysis to ensure that our findings are not accidental.

We obtain the opinions of 1966 self-identified experts for NFL games from the ProbabilityFootball online contest [5], one of several ProbabilitySports contests [6]. The contest is free to enter. Participants of the contest are asked to enter their subjective probability that a team will win a game by noon on the day of the game. Importantly, the contest evaluates the participants' performance via the quadratic scoring rule:

s = 100 - 400 \times \text{Prob\_Lose}^2,   (3)

where s represents the score that a participant earns for the game, and Prob_Lose is the probability that the participant assigns to the actual losing team. The quadratic score is one of a family of so-called proper scoring rules that have the property that an expert's expected score is maximized when the expert reports probabilities truthfully. For example, for a game team A vs. team B, if a player assigns 0.5 to both team A and team B, his/her score for the game is 0 no matter which team wins. If he/she assigns 0.8 to team A and 0.2 to team B, showing confidence in team A's winning, he/she will score 84 points for the game if team A wins, and lose 156 points if team B wins. This quadratic scoring rule rewards bold predictions that are right, but penalizes bold predictions that turn out to be wrong. The top players, measured by accumulated scores over all games, win the prizes of the contest. The suggested strategy at the contest website is "to make picks for each game that match, as closely as possible, the probabilities that each team will win". This strategy is correct if the participant seeks to maximize expected score. However, as prizes are awarded only to the top few winners, participants' goals are to maximize the probability of winning, not to maximize expected score, resulting in a slightly different and more risk-seeking optimization (ideally, prizes would be awarded by lottery in proportion to accumulated score). Still, as far as we are aware, these data offer the closest thing available to true subjective probability judgments from so many people over so many public events that have corresponding information markets.

4.2 Methods of Analysis

In order to compare the prediction accuracy of information markets and that of opinion pools, we proceed to derive predictions from the market data of TradeSports and NewsFutures, form pooled opinions using expert data from the ProbabilityFootball contest, and specify the performance measures to be used.

4.2.1 Deriving Predictions

For information markets, deriving predictions is straightforward. We can take the security price and divide it by 100 to get the market's prediction of the probability that a team will win. To match the time when participants at the ProbabilityFootball contest are required to report their probability assessments, we derive predictions using the last trade price before noon on the day of the game. For more than half of the games, this time is only about an hour earlier than the game starting time, while it is several hours earlier for other games. Two sets of market predictions are derived:

• NF: Prediction equals NewsFutures' last trade price before noon of the game day divided by 100.
• TS: Prediction equals TradeSports' last trade price before noon of the game day divided by 100.

We apply LinOP and LogOP to the ProbabilityFootball data to obtain aggregate expert predictions. The reasons that we do not consider other aggregation methods include: (1) data from ProbabilityFootball is only suitable for mathematical pooling methods, so we can rule out behavioral approaches; (2) Bayesian aggregation requires us to make assumptions about the prior probability distribution of game outcomes and the likelihood function of expert opinions, and given the large number of games and participants, making reasonable assumptions is difficult; and (3) for axiomatic approaches, previous research has shown that simpler aggregation methods often perform better than more complex methods [12]. Because the output of LogOP is indeterminate if there are probability assessments of both 0 and 1 (and because assessments of 0 and 1 are dictatorial using LogOP), we add a small number 0.01 to an expert opinion if it is 0, and subtract 0.01 from it if it is 1.

In pooling opinions, we consider two influencing factors: the weights of experts and the number of expert opinions to be pooled. For the weights of experts, we experiment with equal weights and performance-based weights. The performance-based weights are determined according to previous accumulated score in the contest. The score for each game is calculated according to equation (3), the scoring rule used in the ProbabilityFootball contest. For the first week, since no previous scores are available, we choose equal weights. For later weeks, we calculate accumulated past scores for each player. Because the cumulative scores can be negative, we shift everyone's score if needed to ensure the weights are non-negative. Thus,

w_i = \frac{\text{cumulative score}_i + \text{shift}}{\sum_{j=1}^{n} (\text{cumulative score}_j + \text{shift})},   (4)

where shift equals 0 if the smallest cumulative score_j is non-negative, and equals the absolute value of the smallest cumulative score_j otherwise. For simplicity, we call the performance-weighted opinion pool "weighted" and the equally weighted opinion pool "unweighted", and use the terms interchangeably in the remainder of the paper.

As for the number of opinions used in an opinion pool, we form different opinion pools with different numbers of experts. Only the best performing experts are selected. For example, to form an opinion pool with 20 expert opinions, we choose the top 20 participants. Since there is no performance record for the first week, we use the opinions of all participants in the first week. For week 2, we select the opinions of the 20 individuals whose scores in the first week are among the top 20. For week 3, the 20 individuals whose cumulative scores over weeks 1 and 2 are among the top 20 are selected. Experts are chosen in a similar way for later weeks. Thus, the top 20 participants can change from week to week. The possible opinion pools, varied in pooling functions, weighting methods, and number of expert opinions, are shown in Table 1.

Table 1: Pooled Expert Predictions
#  Symbol      Description
1  Lin-All-u   Unweighted (equally weighted) LinOP of all experts.
2  Lin-All-w   Weighted (performance-weighted) LinOP of all experts.
3  Lin-n-u     Unweighted (equally weighted) LinOP with n experts.
4  Lin-n-w     Weighted (performance-weighted) LinOP with n experts.
5  Log-All-u   Unweighted (equally weighted) LogOP of all experts.
6  Log-All-w   Weighted (performance-weighted) LogOP of all experts.
7  Log-n-u     Unweighted (equally weighted) LogOP with n experts.
8  Log-n-w     Weighted (performance-weighted) LogOP with n experts.
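The pools in Table 1 can be computed directly from equations (1), (2), and (4). The sketch below covers the binary win/lose case used in this paper; the expert probabilities and cumulative scores are illustrative assumptions, not values from the data set, and the 0.01 adjustment mirrors the treatment of 0 and 1 assessments described above.

```python
# Minimal sketch of LinOP, LogOP, and performance-based weights for the
# binary case, where each expert reports one probability that a team wins.
import math

def quadratic_score(prob_assigned_to_winner):
    """Equation (3): s = 100 - 400 * Prob_Lose^2."""
    prob_lose = 1.0 - prob_assigned_to_winner
    return 100.0 - 400.0 * prob_lose ** 2

def performance_weights(cumulative_scores):
    """Equation (4): shift scores so they are non-negative, then normalize."""
    shift = max(0.0, -min(cumulative_scores))
    shifted = [s + shift for s in cumulative_scores]
    total = sum(shifted)
    n = len(shifted)
    return [s / total for s in shifted] if total > 0 else [1.0 / n] * n

def lin_op(probs, weights):
    """Equation (1): weighted arithmetic mean of the experts' win probabilities."""
    return sum(w * p for w, p in zip(weights, probs))

def log_op(probs, weights):
    """Equation (2): weighted geometric mean, with k chosen so the two outcomes sum to 1."""
    adjusted = [0.01 if p == 0.0 else 0.99 if p == 1.0 else p for p in probs]
    win = math.prod(p ** w for p, w in zip(adjusted, weights))
    lose = math.prod((1.0 - p) ** w for p, w in zip(adjusted, weights))
    return win / (win + lose)

# Illustrative use: three experts weighted by assumed cumulative quadratic scores.
expert_probs = [0.70, 0.55, 0.90]
weights = performance_weights([120.0, -40.0, 260.0])
print(lin_op(expert_probs, weights), log_op(expert_probs, weights))
```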
In Table 1, "Lin" represents linear and "Log" represents logarithmic; n is the number of expert opinions that are pooled, and "All" indicates that all opinions are combined. We use "u" to denote unweighted (equally weighted) opinion pools and "w" for weighted (performance-weighted) opinion pools. Lin-All-u, the equally weighted LinOP with all participants, is basically the arithmetic mean of all participants' opinions. Log-All-u is simply the geometric mean of all opinions. When a participant did not enter a prediction for a particular game, that participant was removed from the opinion pool for that game. This contrasts with the "ProbabilityFootball average" reported on the contest website and used by Servan-Schreiber et al. [36], where unreported predictions were converted to 0.5 probability predictions.

4.2.2 Performance Measures

We use three common metrics to assess the prediction accuracy of information markets and opinion pools. These measures have been used by Servan-Schreiber et al. [36] in evaluating the prediction accuracy of information markets.

1. Absolute Error = Prob_Lose, where Prob_Lose is the probability assigned to the eventual losing team. Absolute error simply measures the difference between a perfect prediction (1 for the winning team) and the actual prediction. A prediction with lower absolute error is more accurate.

2. Quadratic Score = 100 − 400 × Prob_Lose^2. The quadratic score is the scoring function used in the ProbabilityFootball contest. It is a linear transformation of the squared error, Prob_Lose^2, which is one of the most commonly used metrics in evaluating forecasting accuracy. The quadratic score can be negative. A prediction with higher quadratic score is more accurate.

3. Logarithmic Score = log(Prob_Win), where Prob_Win is the probability assigned to the eventual winning team. The logarithmic score, like the quadratic score, is a proper scoring rule. A prediction with higher (less negative) logarithmic score is more accurate.

5. EMPIRICAL RESULTS

5.1 Performance of Opinion Pools

Depending on how many opinions are used, there can be numerous different opinion pools. We first examine the effect of the number of opinions on prediction accuracy by forming opinion pools with the number of expert opinions varying from 1 to 960. In the ProbabilityFootball competition, not all 1966 registered participants provide their probability assessments for every game; 960 is the smallest number of participants across all games. For each game, we sort experts according to their accumulated quadratic score in previous weeks. Predictions of the best performing n participants are picked to form an opinion pool with n experts.

Figure 1 shows the prediction accuracy of LinOP and LogOP in terms of mean values of the three performance measures across all 210 games. We can see the following trends in the figure.

1. Unweighted opinion pools and performance-weighted opinion pools have similar levels of prediction accuracy, especially for LinOP.

2. For LinOP, increasing the number of experts in general increases or maintains the level of prediction accuracy. When there are more than 200 experts, the prediction accuracy of LinOP is stable with respect to the number of experts.

3. LogOP seems more accurate than LinOP in terms of mean absolute error. But, using all other performance measures, LinOP outperforms LogOP.

4. For LogOP, increasing the number of experts increases the prediction accuracy at the beginning.
But the curves (including the points with all experts) for mean quadratic score and mean logarithmic score have slight bell shapes, which represent a decrease in prediction accuracy when the number of experts is very large. The curves for mean absolute error, on the other hand, show a consistent increase of accuracy.

[Figure 1: Prediction Accuracy of Opinion Pools — (a) mean absolute error, (b) mean quadratic score, (c) mean logarithmic score, each as a function of the number of expert opinions, for unweighted and weighted LinOP and LogOP.]

The first and second trends above imply that when using LinOP, the simplest approach, which has good prediction accuracy, is to average the opinions of all experts. Weighting does not seem to improve performance. Selecting experts according to past performance also does not help. It is a very interesting observation that even if many participants of the ProbabilityFootball contest do not provide accurate individual predictions (they have negative quadratic scores in the contest), including their opinions in the opinion pool still increases the prediction accuracy. One explanation of this phenomenon could be that the biases of individual judgments can offset each other when opinions are diverse, which makes the pooled prediction more accurate.

The third trend presents a controversy: the relative prediction accuracy of LogOP and LinOP flips when using different accuracy measures. To investigate this disagreement, we plot the absolute error of Log-All-u and Lin-All-u for each game in Figure 2. When the absolute error of an opinion pool for a game is less than 0.5, it means that the team favored by the opinion pool wins the game. If it is greater than 0.5, the underdog wins. Compared with Lin-All-u, Log-All-u has lower absolute error when it is less than 0.5, and greater absolute error when it is greater than 0.5, which indicates that the predictions of Log-All-u are bolder, closer to 0 or 1, than those of Lin-All-u. This is due to the nature of the linear and logarithmic aggregation functions. Because the quadratic score and the logarithmic score penalize bold predictions that are wrong, LogOP is less accurate when measured in these terms.

Similar reasoning accounts for the fourth trend. When there are more than 500 experts, increasing the number of experts used in LogOP improves the prediction accuracy measured by absolute error, but worsens the accuracy measured by the other two metrics. Examining expert opinions, we find that participants who rank lower offer extreme predictions (0 or 1) more frequently than those ranking high on the list. When we increase the number of experts in an opinion pool, we incorporate more extreme predictions into it. The resulting LogOP is bolder, and hence has lower mean quadratic score and mean logarithmic score.
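For reference, the three accuracy measures of Section 4.2.2 can be computed per game and averaged as in the minimal sketch below; the predictions and outcomes shown are illustrative assumptions, not values from the data set, and predictions are assumed to lie strictly between 0 and 1.

```python
# Minimal sketch: per-game Absolute Error, Quadratic Score, and Logarithmic
# Score, averaged over games, from (prediction, outcome) pairs.
import math

def evaluate(win_probs_for_home, home_won_flags):
    abs_errors, quad_scores, log_scores = [], [], []
    for p_home, home_won in zip(win_probs_for_home, home_won_flags):
        p_win = p_home if home_won else 1.0 - p_home    # probability given to the winner
        p_lose = 1.0 - p_win                            # probability given to the loser
        abs_errors.append(p_lose)                       # Absolute Error
        quad_scores.append(100.0 - 400.0 * p_lose ** 2) # Quadratic Score
        log_scores.append(math.log(p_win))              # Logarithmic Score
    n = len(abs_errors)
    return sum(abs_errors) / n, sum(quad_scores) / n, sum(log_scores) / n

# Example: three games; the favoured home team wins the first two and loses the third.
print(evaluate([0.70, 0.60, 0.80], [True, True, False]))
```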
5.2 Comparison of Information Markets and Opinion Pools

Through the first screening of various opinion pools, we select Lin-All-u, Log-All-u, Log-All-w, and Log-200-u to compare with predictions from information markets. Lin-All-u, as shown in Figure 1, can represent what LinOP can achieve. However, the performance of LogOP is not consistent when evaluated using different metrics: Log-All-u and Log-All-w offer either the best or the worst predictions, while Log-200-u, the LogOP with the 200 top performing experts, provides more stable predictions. We use all three to stand for the performance of LogOP in our later comparison.

If a prediction of the probability that a team will win a game, either from an opinion pool or an information market, is higher than 0.5, we say that the team is the predicted favorite for the game. Table 2 presents the number and percentage of games in which predicted favorites actually win, out of a total of 210 games. All four opinion pools correctly predict a similar number and percentage of games as NF and TS. Since NF, TS, and the four opinion pools form their predictions using information available at noon of the game day, information markets and opinion pools have comparable potential at the same time point.

Table 2: Number and Percentage of Games that Predicted Favorites Win
             NF       TS       Lin-All-u   Log-All-u   Log-All-w   Log-200-u
Number       142      137      144         144         143         141
Percentage   67.62%   65.24%   68.57%      68.57%      68.10%      67.14%

We then take a closer look at the prediction accuracy of information markets and opinion pools using the three performance measures. Table 3 displays mean values of these measures over 210 games. Numbers in parentheses are standard errors, which estimate the standard deviation of the mean. To take into consideration the skewness of the distributions, we also report median values of the accuracy measures in Table 4.

Table 3: Mean of Prediction Accuracy Measures
             Absolute Error     Quadratic Score     Logarithmic Score
NF           0.4253 (0.0121)    15.4352 (4.6072)    -0.6136 (0.0258)
TS           0.4275 (0.0118)    15.2739 (4.3982)    -0.6121 (0.0241)
Lin-All-u    0.4292 (0.0126)    13.0525 (4.8088)    -0.6260 (0.0268)
Log-All-u    0.4024 (0.0173)    10.0099 (6.6594)    -0.6546 (0.0418)
Log-All-w    0.4059 (0.0168)    10.4491 (6.4440)    -0.6497 (0.0398)
Log-200-u    0.4266 (0.0133)    12.3868 (5.0764)    -0.6319 (0.0295)
*Numbers in parentheses are standard errors.
*Best value for each metric is shown in bold.

[Figure 2: Absolute Error: Lin-All-u vs. Log-All-u — per-game absolute error of Log-All-u plotted against that of Lin-All-u, with the 45 degree line for reference.]

Judged by the mean values of the accuracy measures in Table 3, all methods have similar accuracy levels, with NF and TS slightly better than the opinion pools. However, the median values of the accuracy measures indicate that the Log-All-u and Log-All-w opinion pools are more accurate than all other predictions.

We employ the randomization test [32] to study whether the differences in prediction accuracy presented in Table 3 and Table 4 are statistically significant. The basic idea of the randomization test is that, by randomly swapping the predictions of two methods numerous times, an empirical distribution for the difference in prediction accuracy can be constructed. Using this empirical distribution, we are then able to evaluate at what confidence level the observed difference reflects a real difference. For example, the mean absolute error of NF is higher than that of Log-All-u by 0.0229, as shown in Table 3.
To test whether this difference is statistically significant, we shuffle the predictions from the two methods, randomly label half of the predictions as NF and the other half as Log-All-u, and compute the difference in mean absolute error of the newly formed NF and Log-All-u data. The above procedure is repeated 10,000 times. The 10,000 differences in mean absolute error result in an empirical distribution of the difference. Comparing our observed difference, 0.0229, with this distribution, we find that the observed difference is greater than 75.37% of the empirical differences. This leads us to conclude that the difference in mean absolute error between NF and Log-All-u is not statistically significant, if we choose the level of significance to be 0.05.

Table 5 and Table 6 report the results of the randomization test for mean and median differences, respectively. Each cell of the tables is for two different prediction methods, represented by the name of the row and the name of the column. The first line of each cell gives the result for absolute error; the second and third lines are dedicated to quadratic score and logarithmic score, respectively.

Table 4: Median of Prediction Accuracy Measures
             Absolute Error   Quadratic Score   Logarithmic Score
NF           0.3800           42.2400           -0.4780
TS           0.4000           36.0000           -0.5108
Lin-All-u    0.3639           36.9755           -0.5057
Log-All-u    0.3417           53.2894           -0.4181
Log-All-w    0.3498           51.0486           -0.4305
Log-200-u    0.3996           36.1300           -0.5101
*Best value for each metric is shown in bold.

Table 5: Statistical Confidence of Mean Differences in Prediction Accuracy
             TS        Lin-All-u   Log-All-u   Log-All-w   Log-200-u
NF           8.92%     22.07%      75.37%      66.47%      7.76%
             2.38%     26.60%      50.74%      44.26%      32.24%
             2.99%     22.81%      59.35%      56.21%      33.26%
TS                     10.13%      77.79%      68.15%      4.35%
                       27.25%      53.65%      44.90%      28.30%
                       32.35%      57.89%      60.69%      38.84%
Lin-All-u                          82.19%      68.86%      9.75%
                                   28.91%      23.92%      6.81%
                                   44.17%      43.01%      17.36%
Log-All-u                                      11.14%      72.49%
                                               3.32%       18.89%
                                               5.25%       39.06%
Log-All-w                                                  69.89%
                                                           18.30%
                                                           30.23%
*In each table cell, row 1 accounts for absolute error, row 2 for quadratic score, and row 3 for logarithmic score.

We can see that, in terms of mean values of the accuracy measures, the differences between all methods are not statistically significant to any reasonable degree. When it comes to median values of prediction accuracy, Log-All-u outperforms Lin-All-u at a high confidence level. These results indicate that differences in prediction accuracy between information markets and opinion pools are not statistically significant. This may seem to contradict the result of Servan-Schreiber et al. [36], in which NewsFutures's information markets have been shown to provide statistically significantly more accurate predictions than the (unweighted) average of all ProbabilityFootball opinions. The discrepancy arises in the treatment of missing data. Not all 1966 registered ProbabilityFootball participants offer probability assessments for each game. When a participant does not provide a probability assessment for a game, the contest considers their prediction to be 0.5. This makes sense in the context of the contest, since 0.5 always yields a quadratic score of 0. The ProbabilityFootball average reported on the contest website and used by Servan-Schreiber et al. includes these 0.5 estimates. Instead, we remove participants from games for which they do not provide assessments, pooling only the available opinions together. Our treatment increases the prediction accuracy of Lin-All-u significantly.
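A minimal sketch of this randomization test is given below. The per-game errors, method names, and the use of absolute differences between mean errors are illustrative assumptions based on our reading of the procedure described above, not details fixed by the text.

```python
# Minimal sketch of the randomization test: repeatedly relabel the pooled
# per-game errors of two methods and record how often the observed difference
# in means exceeds the difference produced by a random split.
import random
from statistics import mean

def randomization_confidence(errors_a, errors_b, iterations=10000, seed=0):
    """Fraction of permuted mean differences that the observed difference exceeds."""
    rng = random.Random(seed)
    observed = abs(mean(errors_a) - mean(errors_b))
    pooled = list(errors_a) + list(errors_b)
    n_a = len(errors_a)
    exceeded = 0
    for _ in range(iterations):
        rng.shuffle(pooled)
        permuted = abs(mean(pooled[:n_a]) - mean(pooled[n_a:]))
        if observed > permuted:
            exceeded += 1
    return exceeded / iterations

# Example with made-up per-game absolute errors for two hypothetical methods.
method_a = [0.42, 0.38, 0.51, 0.45, 0.40, 0.47]
method_b = [0.36, 0.41, 0.35, 0.44, 0.39, 0.37]
print(randomization_confidence(method_a, method_b))
```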
6. CONCLUSIONS

With the fast growth of the Internet, information markets have recently emerged as an alternative tool for predicting future events. Previous research has shown that information markets give as accurate or more accurate predictions than individual experts and polls. However, information markets, as an adaptive mechanism for aggregating the different opinions of market participants, have not been calibrated against many belief aggregation methods. In this paper, we compare the prediction accuracy of information markets with linear and logarithmic opinion pools (LinOP and LogOP) using predictions from two markets and 1966 individuals regarding the outcomes of 210 American football games during the 2003 NFL season. In screening for representative opinion pools to compare with information markets, we investigate the effect of weights and the number of experts on prediction accuracy. Our results on both the comparison of information markets with opinion pools and the relative performance of different opinion pools are summarized below.

1. At the same time point ahead of the events, information markets offer as accurate predictions as our selected opinion pools. We have selected four opinion pools to represent the prediction accuracy level that LinOP and LogOP can achieve. With all four performance metrics, our two information markets obtain prediction accuracy similar to that of the four opinion pools.

Table 6: Statistical Confidence of Median Differences in Prediction Accuracy
             TS        Lin-All-u   Log-All-u   Log-All-w   Log-200-u
NF           48.85%    47.3%       84.8%       77.9%       65.36%
             45.26%    44.55%      85.27%      75.65%      66.75%
             44.89%    46.04%      84.43%      77.16%      64.78%
TS                     5.18%       94.83%      94.31%      0%
                       5.37%       92.08%      92.53%      0%
                       7.41%       95.62%      91.09%      0%
Lin-All-u                          95.11%      91.37%      7.31%
                                   96.10%      92.69%      9.84%
                                   95.45%      95.12%      7.79%
Log-All-u                                      23.47%      95.89%
                                               26.68%      93.85%
                                               22.47%      96.42%
Log-All-w                                                  91.3%
                                                           91.4%
                                                           90.37%
*In each table cell, row 1 accounts for absolute error, row 2 for quadratic score, and row 3 for logarithmic score.
*Confidence above 95% is shown in bold.

2. The arithmetic average of all opinions (Lin-All-u) is a simple, robust, and efficient opinion pool. Simply averaging across all experts seems to result in better predictions than individual opinions and opinion pools with a few experts. It is quite robust in the sense that even if the included individual predictions are less accurate, averaging over all opinions still gives better (or equally good) predictions.

3. Weighting expert opinions according to past performance does not seem to significantly improve the prediction accuracy of either LinOP or LogOP. Comparing performance-weighted opinion pools with equally weighted opinion pools, we do not observe much difference in terms of prediction accuracy. Since we only use one performance-weighting method, calculating the weights according to the past accumulated quadratic score that participants earned, this might be due to the weighting method we chose.

4. LogOP yields bolder predictions than LinOP. LogOP yields predictions that are closer to the extremes, 0 or 1.

An information market is a self-organizing mechanism for aggregating information and making predictions. Compared with opinion pools, it is less constrained by space and time, and it can eliminate the effort needed to identify experts and decide on belief aggregation methods. These advantages do not compromise its prediction accuracy to any extent. On the contrary, information markets can provide real-time predictions, which are hardly achievable by resorting to experts.
In the future, we are interested in further exploring:

• Performance comparison of information markets with other opinion pools and mathematical aggregation procedures. In this paper, we only compare information markets with two simple opinion pools, linear and logarithmic. It will be meaningful to investigate their relative prediction accuracy with other belief aggregation methods such as Bayesian approaches. There are also a number of theoretical expert algorithms with proven worst-case performance bounds [10] whose average-case or practical performance would be instructive to investigate.

• Whether defining expertise more narrowly can improve predictions of opinion pools. In our analysis, we broadly treat participants of the ProbabilityFootball contest as experts in all games. If we define expertise more narrowly, selecting experts in certain football teams to predict games involving these teams, will the predictions of opinion pools be more accurate?

• The possibility of combining information markets with other forecasting methods to achieve better prediction accuracy. Chen, Fine, and Huberman [11] use an information market to determine the risk attitude of participants, and then perform a nonlinear aggregation of their predictions based on their risk attitudes. The nonlinear aggregation mechanism is shown to outperform both the market and the best individual participants. It is worthy of more attention whether information markets, as an alternative forecasting method, can be used together with other methods to improve our predictions.

7. ACKNOWLEDGMENTS

We thank Brian Galebach, the owner and operator of the ProbabilitySports and ProbabilityFootball websites, for providing us with such unique and valuable data. We thank Varsha Dani, Lance Fortnow, Omid Madani, Sumit Sanghai, and the anonymous reviewers for useful insights and pointers. The authors acknowledge the support of The Penn State eBusiness Research Center.

8. REFERENCES
[1] http://us.newsfutures.com
[2] http://www.biz.uiowa.edu/iem/
[3] http://www.hsx.com/
[4] http://www.ideosphere.com/fx/
[5] http://www.probabilityfootball.com/
[6] http://www.probabilitysports.com/
[7] http://www.tradesports.com/
[8] A. H. Ashton and R. H. Ashton. Aggregating subjective forecasts: Some empirical results. Management Science, 31:1499-1508, 1985.
[9] R. P. Batchelor and P. Dua. Forecaster diversity and the benefits of combining forecasts. Management Science, 41:68-75, 1995.
[10] N. Cesa-Bianchi, Y. Freund, D. Haussler, D. P. Helmbold, R. E. Schapire, and M. K. Warmuth. How to use expert advice. Journal of the ACM, 44(3):427-485, 1997.
[11] K. Chen, L. Fine, and B. Huberman. Predicting the future. Information System Frontier, 5(1):47-61, 2003.
[12] R. T. Clemen and R. L. Winkler. Combining probability distributions from experts in risk analysis. Risk Analysis, 19(2):187-203, 1999.
[13] R. M. Cook. Experts in Uncertainty: Opinion and Subjective Probability in Science. Oxford University Press, New York, 1991.
[14] A. L. Delbecq, A. H. Van de Ven, and D. H. Gustafson. Group Techniques for Program Planners: A Guide to Nominal Group and Delphi Processes. Scott Foresman and Company, Glenview, IL, 1975.
[15] E. F. Fama. Efficient capital market: A review of theory and empirical work. Journal of Finance, 25:383-417, 1970.
[16] R. Forsythe and F. Lundholm. Information aggregation in an experimental market. Econometrica, 58:309-47, 1990.
[17] R. Forsythe, F. Nelson, G. R. Neumann, and J. Wright. Forecasting elections: A market alternative to polls.
In T. R. Palfrey, editor, Contemporary Laboratory Experiments in Political Economy, pages 69-111. University of Michigan Press, Ann Arbor, MI, 1991.
[18] R. Forsythe, F. Nelson, G. R. Neumann, and J. Wright. Anatomy of an experimental political stock market. American Economic Review, 82(5):1142-1161, 1992.
[19] R. Forsythe, T. A. Rietz, and T. W. Ross. Wishes, expectations, and actions: A survey on price formation in election stock markets. Journal of Economic Behavior and Organization, 39:83-110, 1999.
[20] S. French. Group consensus probability distributions: A critical survey. Bayesian Statistics, 2:183-202, 1985.
[21] C. Genest. A conflict between two axioms for combining subjective distributions. Journal of the Royal Statistical Society, 46(3):403-405, 1984.
[22] C. Genest. Pooling operators with the marginalization property. Canadian Journal of Statistics, 12(2):153-163, 1984.
[23] C. Genest, K. J. McConway, and M. J. Schervish. Characterization of externally Bayesian pooling operators. Annals of Statistics, 14(2):487-501, 1986.
[24] C. Genest and J. V. Zidek. Combining probability distributions: A critique and an annotated bibliography. Statistical Science, 1(1):114-148, 1986.
[25] S. J. Grossman. An introduction to the theory of rational expectations under asymmetric information. Review of Economic Studies, 48(4):541-559, 1981.
[26] F. A. Hayek. The use of knowledge in society. American Economic Review, 35(4):519-530, 1945.
[27] J. C. Jackwerth and M. Rubinstein. Recovering probability distribution from options prices. Journal of Finance, 51(5):1611-1631, 1996.
[28] H. A. Linstone and M. Turoff. The Delphi Method: Techniques and Applications. Addison-Wesley, Reading, MA, 1975.
[29] P. A. Morris. Decision analysis expert use. Management Science, 20(9):1233-1241, 1974.
[30] P. A. Morris. Combining expert judgments: A Bayesian approach. Management Science, 23(7):679-693, 1977.
[31] P. A. Morris. An axiomatic approach to expert resolution. Management Science, 29(1):24-32, 1983.
[32] E. W. Noreen. Computer-Intensive Methods for Testing Hypotheses: An Introduction. Wiley and Sons, Inc., New York, 1989.
[33] D. M. Pennock, S. Lawrence, C. L. Giles, and F. A. Nielsen. The real power of artificial markets. Science, 291:987-988, February 2002.
[34] D. M. Pennock, S. Lawrence, F. A. Nielsen, and C. L. Giles. Extracting collective probabilistic forecasts from web games. In Proceedings of the 7th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 174-183, San Francisco, CA, 2001.
[35] C. Plott and S. Sunder. Rational expectations and the aggregation of diverse information in laboratory security markets. Econometrica, 56:1085-118, 1988.
[36] E. Servan-Schreiber, J. Wolfers, D. M. Pennock, and B. Galebach. Prediction markets: Does money matter? Electronic Markets, 14(3):243-251, 2004.
[37] M. Spann and B. Skiera. Internet-based virtual stock markets for business forecasting. Management Science, 49(10):1310-1326, 2003.
[38] M. West. Bayesian aggregation. Journal of the Royal Statistical Society, Series A (General), 147(4):600-607, 1984.
[39] R. L. Winkler. The consensus of subjective probability distributions. Management Science, 15(2):B61-B75, 1968.
[40] J. Wolfers and E. Zitzewitz. Prediction markets. Journal of Economic Perspectives, 18(2):107-126, 2004.
TradeSports.com [7], a real-money betting exchange registered in Ireland, hosts markets for sports, political, entertainment, and financial events. The Foresight Exchange (FX) [4] allows traders to wager play money on unresolved scientific questions or other claims of public interest, and NewsFutures.com's World News Exchange [1] has popular sports and financial betting markets, also grounded in a play-money currency. Despite the popularity of information markets, one of the most important questions to ask is: how accurately can information markets predict? Previous research in general shows that information markets are remarkably accurate. The political election markets at IEM predict the election outcomes better than polls [16, 17, 18, 19]. Prices in HSX and FX have been found to give as accurate or more accurate predictions than judgment of individual experts [33, 34, 37]. However, information markets have not been calibrated against opinion pools, except for Servan-Schreiber et. al [36], in which the authors compare two information markets against arithmetic average of expert opinions. Since information markets, in nature, offer an adaptive and selforganized mechanism to aggregate opinions of market participants, it is interesting to compare them with existing opinion pooling methods, to evaluate the performance of information markets from another perspective. The comparison will provide beneficial guidance for practitioners to choose the most appropriate method for their needs. This paper contributes to the literature in two ways: (1) As an initial attempt to compare information markets with opinion pools of multiple experts, it leads to a better understanding of information markets and their promise as an alternative institution for obtaining accurate forecasts; (2) In screening opinion pools to be used in the comparison, we cast insights into relative performances of different opinion pools. In terms of prediction accuracy, we compare two information markets with several linear and logarithmic opinion pools (LinOP and LogOP) at predicting the results of NFL games. Our results show that at the same time point ahead of the game, information markets provide as accurate predictions as our carefully selected opinion pools. In selecting the opinion pools to be used in our comparison, we find that arithmetic average is a robust and efficient pooling function; weighting expert assessments according to their past performances does not improve the prediction accuracy of opinion pools; and LogOP offers bolder predictions than LinOP. The remainder of the paper is organized as follows. Section 2 reviews popular opinion pooling methods. Section 3 introduces the basics of information markets. Data sets and our analysis methods are described in Section 4. We present results and analysis in Section 5, followed by conclusions in Section 6. 2. REVIEW OF OPINION POOLS Clemen and Winkler [12] classify opinion pooling methods into two broad categories: mathematical approaches and behavioral approaches. In mathematical approaches, the opinions of individual experts are expressed as subjective probability distributions over outcomes of an uncertain event. They are combined through various mathematical methods to form an aggregated probability distribution. Genest and Zidek [24] and French [20] provide comprehensive reviews of mathematical approaches. Mathematical approaches can be further distinguished into axiomatic approaches and Bayesian approaches. 
Axiomatic approaches apply prespecified functions that map expert opinions, expressed as a set of individual probability distributions, to a single aggregated probability distribution. These pooling functions are justified using axioms or certain desirable properties. Two of the most common pooling functions are the linear opinion pool (LinOP) and the logarithmic opinion pool (LogOP). Using LinOP, the aggregate probability distribution is a weighted arithmetic mean of individual probability distributions: where pi (0) is expert i's probability distribution of uncertain event 0, p (0) represents the aggregate probability distribution, wi's are weights for experts, which are usually nonnegative and sum to 1, and n is the number of experts. Using LogOP, the aggregate probability distribution is a weighted geometric mean of individual probability distributions: where k is a normalization constant to ensure that the pooled opinion is a probability distribution. Other axiomatic pooling methods often are extensions of LinOP [22], LogOP [23], or both [13]. Winkler [39] and Morris [29, 30] establish the early framework of Bayesian aggregation methods. Bayesian approaches assume as if there is a decision maker who has a prior probability distribution over event 0 and a likelihood function over expert opinions given the event. This decision maker takes expert opinions as evidence and updates its priors over the event and opinions according to Bayes rule. The resulted posterior probability distribution of 0 is the pooled opinion. Behavioral approaches have been widely studied in the field of group decision making and organizational behavior. The important assumption of behavioral approaches is that, through exchanging opinions or information, experts can eventually reach an equilibrium where further interaction won't change their opinions. One of the best known behavioral approaches is the Delphi technique [28]. Typically, this method and its variants do not allow open discussion, but each expert has chance to judge opinions of other experts, and is given feedback. Experts then can reassess their opinions and repeat the process until a consensus or a smaller spread of opinions is achieved. Some other behavioral methods, such as the Nominal Group technique [14], promote open discussions in controlled environments. Each approach has its pros and cons. Axiomatic approaches are easy to use. But they don't have a normative basis to choose weights. In addition, several impossibility results (e.g., Genest [21]) show that no aggregation function can satisfy all desired properties of an opinion pool, unless the pooled opinion degenerates to a single individual opinion, which effectively implies a dictator. Bayesian approaches are nicely based on the normative Bayesian framework. However, it is sometimes frustratingly difficult to apply because it requires either (1) constructing an obscenely complex joint prior over the event and opinions (often impractical even in terms of storage / space complexity, not to mention from an elicitation standpoint) or (2) making strong assumptions about the prior, like conditional independence of experts. Behavior approaches allow experts to dynamically improve their information and revise their opinions during interactions, but many of them are not fixed or completely specified, and can't guarantee convergence or repeatability. 3. HOW INFORMATION MARKETS WORK Much of the enthusiasm for information markets stems from Hayek hypothesis [26] and efficient market hypothesis [15]. 
Hayek, in his classic critique of central planning in 1940's, claims that the price system in a competitive market is a very efficient mechanism to aggregate dispersed information among market participants. The efficient market hypothesis further states that, in an efficient market, the price of a security almost instantly incorporates all available information. The market price summarizes all relevant information across traders, hence is the market participants' consensus expectation about the future value of the security. Empirical evidence supports both hypotheses to a large extent [25, 27, 35]. Thus, when associating the value of a security with the outcome of an uncertain future event, market price, by revealing the consensus expectation of the security value, can indirectly predict the outcome of the event. This idea gives rise to information markets. For example, if we want to predict which team will win the NFL game between New England and Carolina, an information market can trade a security "$100 if New England defeats Carolina", whose payoff per share at the end of the game is specified as follow: $100 if New England wins the game; $0 otherwise. The security price should roughly equal the expected payoff of the security in an efficient market. The time value of money usually can be ignored because durations of most information markets are short. Assuming exposure to risk is roughly equal for both outcomes, or that there are sufficient effectively risk-neutral speculators in the market, the price should not be biased by the risk attitudes of various players in the market. Thus, where P is the price of the security "$100 if New England defeats Carolina" and Pr (Patriots win) is the probability that New England will win the game. Observing the security price P before the game, we can derive Pr (Patriots win), which is the market participants' collective prediction about how likely it is that New England will win the game. The above security is a winner-takes-all contract. It is used when the event to be predicted is a discrete random variable with disjoint outcomes (in this case binary). Its price predicts the probability that a specific outcome will be realized. When the outcome of a prediction problem can be any value in a continuous interval, we can design a security that pays its holder proportional to the realized value. This kind of security is what Wolfers and Zitzewitz [40] called an index contract. It predicts the expected value of a future outcome. Many other aspects of a future event such as median value of outcome can also be predicted in information markets by designing and trading different securities. Wolfers and Zitzewitz [40] provide a summary of the main types of securities traded in information markets and what statistical properties they can predict. In practice, conceiving a security for a prediction problem is only one of the many decisions in designing an effective information market. Spann and Skiera [37] propose an initial framework for designing information markets. 4. DESIGN OF ANALYSIS 4.1 Data Sets Our data sets cover 210 NFL games held between September 28th, 2003 and December 28th, 2003. 
NFL games are very suitable for our purposes because: (1) two online exchanges and one online prediction contest already exist that provide data on both information markets and the opinions of self-identified experts for the same set of games; (2) the popularity of NFL games in the United States provides natural incentives for people to participate in information markets and/or the contest, which increases liquidity of information markets and improves the quality and number of opinions in the contest; (3) intense media coverage and analysis of the profiles and strengths of teams and individual players provide the public with much information so that participants of information markets and the contest can be viewed as knowledgeable regarding to the forecasting goal. Information market data was acquired, by using a specially designed crawler program, from TradeSports.com's Football-NFL markets [7] and NewsFutures.com's Sports Exchange [1]. For each NFL game, both TradeSports and NewsFutures have a winner-takes-all information market to predict the game outcome. We introduce the design of the two markets according to Spann and Skiera's three steps for designing an information market [37] as below. 9 Choice of forecasting goal: Markets at both TradeSports and NewsFutures aim at predicting which one of the two teams will win a NFL football game. They trade similar winner-takes-all securities that pay off 100 if a team wins the game and 0 if it loses the game. Small differences exist in how they deal with ties. In the case of a tie, TradeSports will unwind all trades that occurred and refund all exchange fees, but the security is worth 50 in NewsFutures. Since the probability of a tie is usually very low (much less the 1%), prices at both markets effectively represent the market participants' consensus assessment of the probability that the team will win. 9 Incentive for participation and information rev elation: TradeSports and NewsFutures use different incentives for participation and information revelation. TradeSports is a real-money exchange. A trader needs to open and fund an account with a minimum of $100 to participate in the market. Both profits and losses can occur as a result of trading activity. On the contrary, a trader can register at NewsFutures for free and get 2000 units of Sport Exchange virtual money at the time of registration. Traders at NewsFutures will not incur any real financial loss. They can accumulate virtual money by trading securities. The virtual money can then be used to bid for a few real prizes at NewsFutures' online shop. 9 Financial market design: Both markets at TradeSports and NewsFutures use the continuous double auction as their trading mechanism. TradeSports charges a small fee on each security transaction and expiry, while NewsFutures does not. We can see that the main difference between two information markets is real money vs. virtual money. Servan-Schreiber et. al [36] have compared the effect of money on the performance of the two information markets and concluded that the prediction accuracy of the two markets are at about the same level. Not intending to compare these two markets, we still use both markets in our analysis to ensure that our findings are not accidental. We obtain the opinions of 1966 self-identified experts for NFL games from the ProbabilityFootball online contest [5], one of several ProbabilitySports contests [6]. The contest is free to enter. 
Participants of the contest are asked to enter their subjective probability that a team will win a game by noon on the day of the game. Importantly, the contest evaluates the participants' performance via the quadratic scoring rule: where s represents the score that a participant earns for the game, and Prob Lose is the probability that the participant assigns to the actual losing team. The quadratic score is one of a family of so-called proper scoring rules that have the property that an expert's expected score is maximized when the expert reports probabilities truthfully. For example, for a game team A vs. team B, if a player assigns 0.5 to both team A and B, his/her score for the game is 0 no matter which team wins. If he/she assigns 0.8 to team A and 0.2 to team B, showing that he is confident in team A's winning, he/she will score 84 points for the game if team A wins, and lose 156 points if team B wins. This quadratic scoring rule rewards bold predictions that are right, but penalizes bold predictions that turn out to be wrong. The top players, measured by accumulated scores over all games, win the prizes of the contest. The suggested strategy at the contest website is "to make picks for each game that match, as closely as possible, the probabilities that each team will win". This strategy is correct if the participant seeks to maximize expected score. However, as prizes are awarded only to the top few winners, participants' goals are to maximize the probability of winning, not maximize expected score, resulting in a slightly different and more risk-seeking optimization., Still, as far as we are aware, this data offer the closest thing available to true subjective probability judgments from so many people over so many public events that have corresponding information markets. 4.2 Methods of Analysis In order to compare the prediction accuracy of information markets and that of opinion pools, we proceed to derive predictions from market data of TradeSports and NewsFutures, form pooled opinions using expert data from ProbabilityFootball contest, and specify the performance measures to be used. 4.2.1 Deriving Predictions For information markets, deriving predictions is straightforward. We can take the security price and divide it by 100 to get the market's prediction of the probability that a team will win. To match the time when participants at the ProbabilityFootball contest are required to report their probability assessments, we derive predictions using the last trade price before noon on the day of the game. For more, Ideally, prizes would be awarded by lottery in proportion to accumulated score. than half of the games, this time is only about an hour earlier than the game starting time, while it is several hours earlier for other games. Two sets of market predictions are derived: 9 NF: Prediction equals NewsFutures' last trade price before noon of the game day divided by 100. 9 TS: Prediction equals TradeSports' last trade price before noon of the game day divided by 100. We apply LinOP and LogOP to ProbabilityFootball data to obtain aggregate expert predictions. 
We apply LinOP and LogOP to the ProbabilityFootball data to obtain aggregate expert predictions. We do not consider other aggregation methods for the following reasons: (1) the ProbabilityFootball data are only suitable for mathematical pooling methods, so we can rule out behavioral approaches; (2) Bayesian aggregation requires assumptions about the prior probability distribution of game outcomes and the likelihood function of expert opinions, and given the large number of games and participants, making reasonable assumptions is difficult; and (3) for axiomatic approaches, previous research has shown that simpler aggregation methods often perform better than more complex methods [12]. Because the output of LogOP is indeterminate if there are probability assessments of both 0 and 1 (and because assessments of 0 and 1 are dictatorial under LogOP), we add a small number, 0.01, to an expert opinion if it is 0 and subtract 0.01 from it if it is 1. In pooling opinions, we consider two influencing factors: the weights of experts and the number of expert opinions to be pooled. For the weights of experts, we experiment with equal weights and performance-based weights. The performance-based weights are determined according to previously accumulated score in the contest, where the score for each game is calculated according to equation 3, the scoring rule used in the ProbabilityFootball contest. For the first week, since no previous scores are available, we use equal weights. For later weeks, we calculate each player's accumulated past score. Because cumulative scores can be negative, we shift everyone's score if needed to ensure that the weights are non-negative. Thus

w_i ∝ (cumulative score of expert i) + shift,

where shift equals 0 if the smallest cumulative score is non-negative, and equals the absolute value of the smallest cumulative score otherwise. For simplicity, we call the performance-weighted opinion pool "weighted" and the equally weighted opinion pool "unweighted", and use the terms interchangeably in the remainder of the paper. As for the number of opinions used in an opinion pool, we form opinion pools with different numbers of experts, selecting only the best performing experts. For example, to form an opinion pool with 20 expert opinions, we choose the top 20 participants. Since there is no performance record for the first week, we use the opinions of all participants in the first week. For week 2, we select the opinions of the 20 individuals whose scores in the first week are among the top 20. For week 3, the 20 individuals whose cumulative scores over weeks 1 and 2 are among the top 20 are selected. Experts are chosen in a similar way for later weeks; thus the top 20 participants can change from week to week. The possible opinion pools, varied in pooling function, weighting method, and number of expert opinions, are shown in Table 1.

Table 1: Pooled Expert Predictions

In the table, "Lin" represents linear and "Log" represents logarithmic. "n" is the number of expert opinions that are pooled, and "All" indicates that all opinions are combined. We use "u" to denote unweighted (equally weighted) opinion pools and "w" for weighted (performance-weighted) opinion pools. Lin-All-u, the equally weighted LinOP with all participants, is simply the arithmetic mean of all participants' opinions; Log-All-u is the geometric mean of all opinions. When a participant did not enter a prediction for a particular game, that participant was removed from the opinion pool for that game. This contrasts with the "ProbabilityFootball average" reported on the contest website and used by Servan-Schreiber et al. [36], where unreported predictions were converted to 0.5 probability predictions.
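As a concrete illustration of the pooling step, the following Python sketch implements LinOP and LogOP with the 0.01 adjustment for extreme assessments and the shift-based performance weights described above. The function names are ours, and the normalized form used for LogOP on a binary outcome is one standard choice consistent with the geometric-mean description, not necessarily the exact formula used in the study.

```python
import math

EPS = 0.01  # adjustment applied to extreme assessments of 0 or 1

def clamp(p):
    """Move assessments of exactly 0 or 1 slightly inward, as in the paper."""
    return EPS if p == 0.0 else (1.0 - EPS if p == 1.0 else p)

def performance_weights(cum_scores):
    """Non-negative weights proportional to cumulative score plus a shift."""
    shift = -min(cum_scores) if min(cum_scores) < 0 else 0.0
    raw = [s + shift for s in cum_scores]
    total = sum(raw)
    # fall back to equal weights if every shifted score is zero
    return [r / total if total > 0 else 1.0 / len(raw) for r in raw]

def lin_op(probs, weights=None):
    """Linear opinion pool: weighted arithmetic mean of the assessments."""
    probs = [clamp(p) for p in probs]
    weights = weights or [1.0 / len(probs)] * len(probs)
    return sum(w * p for w, p in zip(weights, probs))

def log_op(probs, weights=None):
    """Logarithmic opinion pool for a binary event: weighted geometric mean
    of p, normalized against the weighted geometric mean of (1 - p)."""
    probs = [clamp(p) for p in probs]
    weights = weights or [1.0 / len(probs)] * len(probs)
    num = math.exp(sum(w * math.log(p) for w, p in zip(weights, probs)))
    den = math.exp(sum(w * math.log(1 - p) for w, p in zip(weights, probs)))
    return num / (num + den)

opinions = [0.7, 0.55, 1.0, 0.8]                 # four experts' win probabilities
weights = performance_weights([120, -40, 300, 15])
print(lin_op(opinions), log_op(opinions, weights))
```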
4.2.2 Performance Measures
We use three common metrics to assess the prediction accuracy of information markets and opinion pools. These measures were also used by Servan-Schreiber et al. [36] in evaluating the prediction accuracy of information markets.
1. Absolute Error = Prob Lose, where Prob Lose is the probability assigned to the eventual losing team. Absolute error simply measures the difference between a perfect prediction (1 for the winning team) and the actual prediction. A prediction with lower absolute error is more accurate.
2. Quadratic Score = 100 − 400 × (Prob Lose)². The quadratic score is the scoring function used in the ProbabilityFootball contest. It is a linear transformation of the squared error, (Prob Lose)², which is one of the most commonly used metrics in evaluating forecasting accuracy. The quadratic score can be negative. A prediction with a higher quadratic score is more accurate.
3. Logarithmic Score = log(Prob Win), where Prob Win is the probability assigned to the eventual winning team. The logarithmic score, like the quadratic score, is a proper scoring rule. A prediction with a higher (less negative) logarithmic score is more accurate.
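A minimal Python rendering of the three accuracy measures, included only to make the definitions concrete; the 0.8 example reproduces the 84 and −156 point outcomes of the contest scoring rule discussed earlier. Function names are ours.

```python
import math

def absolute_error(prob_win, team_won):
    """Probability assigned to the eventual loser (lower is better)."""
    return 1.0 - prob_win if team_won else prob_win

def quadratic_score(prob_win, team_won):
    """Contest scoring rule: 100 - 400 * (Prob Lose)^2 (higher is better)."""
    prob_lose = absolute_error(prob_win, team_won)
    return 100.0 - 400.0 * prob_lose ** 2

def logarithmic_score(prob_win, team_won):
    """Natural log of the probability assigned to the winner (higher is better)."""
    p = prob_win if team_won else 1.0 - prob_win
    return math.log(p)

# A player assigns 0.8 to team A: roughly 84 points if A wins, -156 if A loses.
print(quadratic_score(0.8, True), quadratic_score(0.8, False))
```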
5. EMPIRICAL RESULTS

5.1 Performance of Opinion Pools
Depending on how many opinions are used, there can be numerous different opinion pools. We first examine the effect of the number of opinions on prediction accuracy by forming opinion pools with the number of expert opinions varying from 1 to 960. In the ProbabilityFootball competition, not all 1966 registered participants provide probability assessments for every game; 960 is the smallest number of participants across all games. For each game, we sort experts according to their accumulated quadratic score in previous weeks, and the predictions of the best performing n participants are picked to form an opinion pool with n experts. Figure 1 shows the prediction accuracy of LinOP and LogOP in terms of the mean values of the three performance measures across all 210 games.

Figure 1: Prediction Accuracy of Opinion Pools

We can see the following trends in the figure.
1. Unweighted opinion pools and performance-weighted opinion pools have similar levels of prediction accuracy, especially for LinOP.
2. For LinOP, increasing the number of experts in general increases or maintains the level of prediction accuracy. When there are more than 200 experts, the prediction accuracy of LinOP is stable with respect to the number of experts.
3. LogOP seems more accurate than LinOP in terms of mean absolute error, but by the other performance measures LinOP outperforms LogOP.
4. For LogOP, increasing the number of experts increases prediction accuracy at the beginning, but the curves (including the points with all experts) for mean quadratic score and mean logarithmic score have slight bell shapes, representing a decrease in prediction accuracy when the number of experts is very large. The curves for mean absolute error, on the other hand, show a consistent increase in accuracy.
The first and second trends imply that, when using LinOP, the simplest approach with good prediction accuracy is to average the opinions of all experts. Weighting does not seem to improve performance, and selecting experts according to past performance also does not help. It is an interesting observation that even though many participants in the ProbabilityFootball contest do not provide accurate individual predictions (they have negative quadratic scores in the contest), including their opinions in the opinion pool still increases prediction accuracy. One explanation of this phenomenon is that biases of individual judgment can offset each other when opinions are diverse, making the pooled prediction more accurate. The third trend presents a controversy: the relative prediction accuracy of LogOP and LinOP flips when different accuracy measures are used. To investigate this disagreement, we plot the absolute error of Log-All-u and Lin-All-u for each game in Figure 2.

Figure 2: Absolute Error: Lin-All-u vs. Log-All-u

When the absolute error of an opinion pool for a game is less than 0.5, the team favored by the opinion pool wins the game; if it is greater than 0.5, the underdog wins. Compared with Lin-All-u, Log-All-u has lower absolute error when the error is less than 0.5, and greater absolute error when it is greater than 0.5, which indicates that the predictions of Log-All-u are bolder, closer to 0 or 1, than those of Lin-All-u. This is due to the nature of the linear and logarithmic aggregation functions. Because the quadratic score and the logarithmic score penalize bold predictions that are wrong, LogOP is less accurate when measured in these terms. Similar reasoning accounts for the fourth trend. When there are more than 500 experts, increasing the number of experts used in LogOP improves the prediction accuracy measured by absolute error, but worsens the accuracy measured by the other two metrics. Examining expert opinions, we find that participants who rank lower offer extreme predictions (0 or 1) more frequently than those ranking high on the list. When we increase the number of experts in an opinion pool, we incorporate more extreme predictions into it; the resulting LogOP is bolder, and hence has lower mean quadratic score and mean logarithmic score.

5.2 Comparison of Information Markets and Opinion Pools
Through this first screening of various opinion pools, we select Lin-All-u, Log-All-u, Log-All-w, and Log-200-u to compare with predictions from information markets. Lin-All-u, as shown in Figure 1, can represent what LinOP can achieve. However, the performance of LogOP is not consistent when evaluated using different metrics: Log-All-u and Log-All-w offer either the best or the worst predictions, while Log-200-u, the LogOP with the 200 top performing experts, provides more stable predictions. We use all three to stand for the performance of LogOP in our later comparison. If a prediction of the probability that a team will win a game, whether from an opinion pool or an information market, is higher than 0.5, we say that the team is the predicted favorite for the game. Table 2 presents the number and percentage of games in which predicted favorites actually win, out of a total of 210 games.

Table 2: Number and Percentage of Games that Predicted Favorites Win

All four opinion pools correctly predict a similar number and percentage of games as NF and TS. Since NF, TS, and the four opinion pools form their predictions using information available at noon of the game day, information markets and opinion pools have comparable potential at the same time point.
We then take a closer look at the prediction accuracy of information markets and opinion pools using the three performance measures. Table 3 displays the mean values of these measures over the 210 games; numbers in parentheses are standard errors, which estimate the standard deviation of the mean.

Table 3: Mean of Prediction Accuracy Measures

To take the skewness of the distributions into consideration, we also report median values of the accuracy measures in Table 4.

Table 4: Median of Prediction Accuracy Measures

Judged by the mean values of the accuracy measures in Table 3, all methods have similar accuracy levels, with NF and TS slightly better than the opinion pools. However, the median values of the accuracy measures indicate that the Log-All-u and Log-All-w opinion pools are more accurate than all other predictions. We employ the randomization test [32] to study whether the differences in prediction accuracy presented in Table 3 and Table 4 are statistically significant. The basic idea of the randomization test is that, by randomly swapping the predictions of two methods numerous times, an empirical distribution for the difference in prediction accuracy can be constructed. Using this empirical distribution, we can then evaluate at what confidence level the observed difference reflects a real difference. For example, the mean absolute error of NF is higher than that of Log-All-u by 0.0229, as shown in Table 3. To test whether this difference is statistically significant, we shuffle the predictions from the two methods, randomly label half of the predictions as NF and the other half as Log-All-u, and compute the difference in mean absolute error of the newly formed NF and Log-All-u data. This procedure is repeated 10,000 times, and the resulting 10,000 differences in mean absolute error form an empirical distribution of the difference. Comparing our observed difference, 0.0229, with this distribution, we find that the observed difference is greater than 75.37% of the empirical differences. This leads us to conclude that the difference in mean absolute error between NF and Log-All-u is not statistically significant, if we choose the level of significance to be 0.05. Table 5 and Table 6 present the results of the randomization test for mean and median differences, respectively. Each cell of these tables corresponds to two different prediction methods, indicated by the names of the row and the column; the first line of each cell gives the result for absolute error, and the second and third lines are dedicated to quadratic score and logarithmic score, respectively.

Table 5: Statistical Confidence of Mean Differences in Prediction Accuracy
(In each table cell, row 1 accounts for absolute error, row 2 for quadratic score, and row 3 for logarithmic score.)

Table 6: Statistical Confidence of Median Differences in Prediction Accuracy
(In each table cell, row 1 accounts for absolute error, row 2 for quadratic score, and row 3 for logarithmic score. Confidence above 95% is shown in bold.)

We can see that, in terms of mean values of the accuracy measures, the differences between all methods are not statistically significant to any reasonable degree. When it comes to median values of prediction accuracy, Log-All-u outperforms Lin-All-u at a high confidence level. These results indicate that differences in prediction accuracy between information markets and opinion pools are not statistically significant. This may seem to contradict the result of Servan-Schreiber et al. [36], in which NewsFutures' information markets were shown to provide statistically significantly more accurate predictions than the (unweighted) average of all ProbabilityFootball opinions. The discrepancy arises from the treatment of missing data: not all 1966 registered ProbabilityFootball participants offer probability assessments for each game.
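The randomization test is straightforward to implement. The sketch below is a minimal Python version under the assumption that each method's predictions are summarized as per-game values of one accuracy measure (absolute error here); it is illustrative only, not the authors' code.

```python
import random

def randomization_test(errors_a, errors_b, num_shuffles=10000, seed=0):
    """Fraction of randomly relabeled mean differences that the observed
    (signed) mean difference exceeds; apply with the method that has the
    larger mean error as `errors_a`, as in the NF vs. Log-All-u example."""
    rng = random.Random(seed)
    mean = lambda xs: sum(xs) / len(xs)
    observed = mean(errors_a) - mean(errors_b)
    pooled = list(errors_a) + list(errors_b)
    count_smaller = 0
    for _ in range(num_shuffles):
        rng.shuffle(pooled)
        half_a, half_b = pooled[:len(errors_a)], pooled[len(errors_a):]
        if mean(half_a) - mean(half_b) < observed:
            count_smaller += 1
    return count_smaller / num_shuffles

# A returned value of about 0.75 means the observed difference is greater than
# 75% of the empirical differences -- not significant at the 0.05 level.
```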
When a participant does not provide a probability assessment for a game, the contest considers the prediction to be 0.5. This makes sense in the context of the contest, since 0.5 always yields a quadratic score of 0. The ProbabilityFootball average reported on the contest website and used by Servan-Schreiber et al. includes these 0.5 estimates. Instead, we remove participants from the games for which they do not provide assessments, pooling only the available opinions together. Our treatment increases the prediction accuracy of Lin-All-u significantly.

6. CONCLUSIONS
With the fast growth of the Internet, information markets have recently emerged as an alternative tool for predicting future events. Previous research has shown that information markets give as accurate or more accurate predictions than individual experts and polls. However, information markets, as an adaptive mechanism for aggregating the different opinions of market participants, have not been calibrated against many belief aggregation methods. In this paper, we compare the prediction accuracy of information markets with linear and logarithmic opinion pools (LinOP and LogOP) using predictions from two markets and 1966 individuals regarding the outcomes of 210 American football games during the 2003 NFL season. In screening for representative opinion pools to compare with information markets, we investigate the effect of weights and the number of experts on prediction accuracy. Our results, on both the comparison of information markets with opinion pools and the relative performance of different opinion pools, are summarized below.
1. At the same time point ahead of the events, information markets offer as accurate predictions as our selected opinion pools. We have selected four opinion pools to represent the prediction accuracy level that LinOP and LogOP can achieve. With all four performance metrics, our two information markets obtain similar prediction accuracy as the four opinion pools.
2. The arithmetic average of all opinions (Lin-All-u) is a simple, robust, and efficient opinion pool. Simply averaging across all experts seems to result in better predictions than individual opinions and opinion pools with a few experts. It is quite robust in the sense that even if the included individual predictions are less accurate, averaging over all opinions still gives better (or equally good) predictions.
3. Weighting expert opinions according to past performance does not seem to significantly improve the prediction accuracy of either LinOP or LogOP. Comparing performance-weighted opinion pools with equally weighted opinion pools, we do not observe much difference in prediction accuracy. Since we use only one performance-weighting method, calculating weights according to the accumulated quadratic score that participants earned in the past, this might be due to the weighting method we chose.
4. LogOP yields bolder predictions than LinOP, producing predictions that are closer to the extremes, 0 or 1.
An information market is a self-organizing mechanism for aggregating information and making predictions. Compared with opinion pools, it is less constrained by space and time, and it eliminates the effort of identifying experts and deciding on belief aggregation methods.
These advantages do not compromise prediction accuracy to any extent; on the contrary, information markets can provide real-time predictions, which are hardly achievable by resorting to experts. In the future, we are interested in further exploring:
• Performance comparison of information markets with other opinion pools and mathematical aggregation procedures. In this paper, we only compare information markets with two simple opinion pools, linear and logarithmic. It will be meaningful to investigate their relative prediction accuracy with other belief aggregation methods such as Bayesian approaches. There are also a number of theoretical expert algorithms with proven worst-case performance bounds [10] whose average-case or practical performance would be instructive to investigate.
• Whether defining expertise more narrowly can improve the predictions of opinion pools. In our analysis, we broadly treat participants of the ProbabilityFootball contest as experts in all games. If we define expertise more narrowly, selecting experts on certain football teams to predict games involving those teams, will the predictions of opinion pools be more accurate?
• The possibility of combining information markets with other forecasting methods to achieve better prediction accuracy. Chen, Fine, and Huberman [11] use an information market to determine the risk attitude of participants, and then perform a nonlinear aggregation of their predictions based on their risk attitudes. The nonlinear aggregation mechanism is shown to outperform both the market and the best individual participants. Whether information markets, as an alternative forecasting method, can be used together with other methods to improve predictions deserves more attention.

7. ACKNOWLEDGMENTS
We thank Brian Galebach, the owner and operator of the ProbabilitySports and ProbabilityFootball websites, for providing us with such unique and valuable data. We thank Varsha Dani, Lance Fortnow, Omid Madani, Sumit Sanghai, and the anonymous reviewers for useful insights and pointers. The authors acknowledge the support of The Penn State eBusiness Research Center.
Information Markets vs. Opinion Pools: An Empirical Comparison ABSTRACT In this paper, we examine the relative forecast accuracy of information markets versus expert aggregation. We leverage a unique data source of almost 2000 people's subjective probability judgments on 2003 US National Football League games and compare with the "market probabilities" given by two different information markets on exactly the same events. We combine assessments of multiple experts via linear and logarithmic aggregation functions to form pooled predictions. Prices in information markets are used to derive market predictions. Our results show that, at the same time point ahead of the game, information markets provide as accurate predictions as pooled expert assessments. In screening pooled expert predictions, we find that arithmetic average is a robust and efficient pooling function; weighting expert assessments according to their past performance does not improve accuracy of pooled predictions; and logarithmic aggregation functions offer bolder predictions than linear aggregation functions. The results provide insights into the predictive performance of information markets, and the relative merits of selecting among various opinion pooling methods. 1. INTRODUCTION Forecasting is a ubiquitous endeavor in human societies. For decades, scientists have been developing and exploring various forecasting methods, which can be roughly divided into statistical and non-statistical approaches. Statistical approaches require not only the existence of enough historical data but also that past data contains valuable information about the future event. When these conditions cannot be met, non-statistical approaches that rely on judgmental information about the future event could be better choices. One widely used non-statistical method is to elicit opinions from experts. Since experts are not generally in agreement, many belief aggregation methods have been proposed to combine expert opinions together and form a single prediction. These belief aggregation methods are called opinion pools, which have been extensively studied in statistics [20, 24, 38], and management sciences [8, 9, 30, 31], and applied in many domains such as group decision making [29] and risk analysis [12]. With the fast growth of the Internet, information markets have recently emerged as a promising non-statistical forecasting tool. Information markets (sometimes called prediction markets, idea markets, or event markets) are markets designed for aggregating information and making predictions about future events. To form the predictions, information markets tie payoffs of securities to outcomes of events. For example, in an information market to predict the result of a US professional National Football League (NFL) game, say New England vs Carolina, the security pays a certain amount of money per share to its holders if and only if New England wins the game. Otherwise, it pays off nothing. The security price before the game reflects the consensus expectation of market traders about the probability of New England winning the game. Such markets are becoming very popular. The Iowa Electronic Markets (IEM) [2] are real-money futures markets to predict economic and political events such as elections. The Hollywood Stock Exchange (HSX) [3] is a virtual (play-money) exchange for trading securities to forecast future box office proceeds of new movies, and the outcomes of entertainment awards, etc. . 
TradeSports.com [7], a real-money betting exchange registered in Ireland, hosts markets for sports, political, entertainment, and financial events. The Foresight Exchange (FX) [4] allows traders to wager play money on unresolved scientific questions or other claims of public interest, and NewsFutures.com's World News Exchange [1] has popular sports and financial betting markets, also grounded in a play-money currency. Despite the popularity of information markets, one of the most important questions to ask is: how accurately can information markets predict? Previous research in general shows that information markets are remarkably accurate. The political election markets at IEM predict the election outcomes better than polls [16, 17, 18, 19]. Prices in HSX and FX have been found to give as accurate or more accurate predictions than the judgment of individual experts [33, 34, 37]. However, information markets have not been calibrated against opinion pools, except by Servan-Schreiber et al. [36], in which the authors compare two information markets against the arithmetic average of expert opinions. Since information markets by nature offer an adaptive and self-organized mechanism for aggregating the opinions of market participants, it is interesting to compare them with existing opinion pooling methods, to evaluate the performance of information markets from another perspective. The comparison will provide beneficial guidance for practitioners to choose the most appropriate method for their needs. This paper contributes to the literature in two ways: (1) as an initial attempt to compare information markets with opinion pools of multiple experts, it leads to a better understanding of information markets and their promise as an alternative institution for obtaining accurate forecasts; (2) in screening opinion pools to be used in the comparison, we cast insights into the relative performance of different opinion pools. In terms of prediction accuracy, we compare two information markets with several linear and logarithmic opinion pools (LinOP and LogOP) at predicting the results of NFL games. Our results show that at the same time point ahead of the game, information markets provide as accurate predictions as our carefully selected opinion pools. In selecting the opinion pools to be used in our comparison, we find that the arithmetic average is a robust and efficient pooling function; weighting expert assessments according to their past performance does not improve the prediction accuracy of opinion pools; and LogOP offers bolder predictions than LinOP. The remainder of the paper is organized as follows. Section 2 reviews popular opinion pooling methods. Section 3 introduces the basics of information markets. Data sets and our analysis methods are described in Section 4. We present results and analysis in Section 5, followed by conclusions in Section 6.
2. REVIEW OF OPINION POOLS
3. HOW INFORMATION MARKETS WORK
4. DESIGN OF ANALYSIS
4.1 Data Sets
• Incentive for participation and information revelation
4.2 Methods of Analysis
4.2.1 Deriving Predictions
4.2.2 Performance Measures
2. Quadratic Score = 100 − 400 × (Prob Lose)².
5. EMPIRICAL RESULTS
5.1 Performance of Opinion Pools
5.2 Comparison of Information Markets and Opinion Pools
6. CONCLUSIONS
4. LogOP yields bolder predictions than LinOP.
7. ACKNOWLEDGMENTS
We thank Brian Galebach, the owner and operator of the ProbabilitySports and ProbabilityFootball websites, for providing us with such unique and valuable data.
We thank Varsha Dani, Lance Fortnow, Omid Madani, Sumit Sanghai, and the anonymous reviewers for useful insights and pointers. The authors acknowledge the support of The Penn State eBusiness Research Center.
Information Markets vs. Opinion Pools: An Empirical Comparison ABSTRACT In this paper, we examine the relative forecast accuracy of information markets versus expert aggregation. We leverage a unique data source of almost 2000 people's subjective probability judgments on 2003 US National Football League games and compare with the "market probabilities" given by two different information markets on exactly the same events. We combine assessments of multiple experts via linear and logarithmic aggregation functions to form pooled predictions. Prices in information markets are used to derive market predictions. Our results show that, at the same time point ahead of the game, information markets provide as accurate predictions as pooled expert assessments. In screening pooled expert predictions, we find that arithmetic average is a robust and efficient pooling function; weighting expert assessments according to their past performance does not improve accuracy of pooled predictions; and logarithmic aggregation functions offer bolder predictions than linear aggregation functions. The results provide insights into the predictive performance of information markets, and the relative merits of selecting among various opinion pooling methods. 1. INTRODUCTION Forecasting is a ubiquitous endeavor in human societies. For decades, scientists have been developing and exploring various forecasting methods, which can be roughly divided into statistical and non-statistical approaches. Statistical approaches require not only the existence of enough historical data but also that past data contains valuable information about the future event. When these conditions cannot be met, non-statistical approaches that rely on judgmental information about the future event could be better choices. One widely used non-statistical method is to elicit opinions from experts. Since experts are not generally in agreement, many belief aggregation methods have been proposed to combine expert opinions together and form a single prediction. With the fast growth of the Internet, information markets have recently emerged as a promising non-statistical forecasting tool. Information markets (sometimes called prediction markets, idea markets, or event markets) are markets designed for aggregating information and making predictions about future events. To form the predictions, information markets tie payoffs of securities to outcomes of events. For example, in an information market to predict the result of a US professional National Football League (NFL) game, say New England vs Carolina, the security pays a certain amount of money per share to its holders if and only if New England wins the game. Otherwise, it pays off nothing. The security price before the game reflects the consensus expectation of market traders about the probability of New England winning the game. Such markets are becoming very popular. The Iowa Electronic Markets (IEM) [2] are real-money futures markets to predict economic and political events such as elections. TradeSports.com [7], a real-money betting exchange registered in Ireland, hosts markets for sports, political, entertainment, and financial events. popular sports and financial betting markets, also grounded in a play-money currency. Despite the popularity of information markets, one of the most important questions to ask is: how accurately can information markets predict? Previous research in general shows that information markets are remarkably accurate. 
The political election markets at IEM predict the election outcomes better than polls [16, 17, 18, 19]. Prices in HSX and FX have been found to give as accurate or more accurate predictions than judgment of individual experts [33, 34, 37]. However, information markets have not been calibrated against opinion pools, except for Servan-Schreiber et. al [36], in which the authors compare two information markets against arithmetic average of expert opinions. Since information markets, in nature, offer an adaptive and selforganized mechanism to aggregate opinions of market participants, it is interesting to compare them with existing opinion pooling methods, to evaluate the performance of information markets from another perspective. The comparison will provide beneficial guidance for practitioners to choose the most appropriate method for their needs. This paper contributes to the literature in two ways: (1) As an initial attempt to compare information markets with opinion pools of multiple experts, it leads to a better understanding of information markets and their promise as an alternative institution for obtaining accurate forecasts; (2) In screening opinion pools to be used in the comparison, we cast insights into relative performances of different opinion pools. In terms of prediction accuracy, we compare two information markets with several linear and logarithmic opinion pools (LinOP and LogOP) at predicting the results of NFL games. Our results show that at the same time point ahead of the game, information markets provide as accurate predictions as our carefully selected opinion pools. In selecting the opinion pools to be used in our comparison, we find that arithmetic average is a robust and efficient pooling function; weighting expert assessments according to their past performances does not improve the prediction accuracy of opinion pools; and LogOP offers bolder predictions than LinOP. The remainder of the paper is organized as follows. Section 2 reviews popular opinion pooling methods. Section 3 introduces the basics of information markets. Data sets and our analysis methods are described in Section 4. We present results and analysis in Section 5, followed by conclusions in Section 6. 7. ACKNOWLEDGMENTS We thank Brian Galebach, the owner and operator of the ProbabilitySports and ProbabilityFootball websites, for providing us with such unique and valuable data.
C-79
A Cross-Layer Approach to Resource Discovery and Distribution in Mobile ad-hoc Networks
This paper describes a cross-layer approach to designing robust P2P system over mobile ad-hoc networks. The design is based on simple functional primitives that allow routing at both P2P and network layers to be integrated to reduce overhead. With these primitives, the paper addresses various load balancing techniques. Preliminary simulation results are also presented.
[ "resourc discoveri", "mobil ad-hoc network", "manet", "manet p2p system", "hybrid discoveri scheme", "neighbor discoveri protocol", "neg feedback", "queri packet", "replica invalid", "valid mesh", "invalid packet", "manet rout protocol", "rout discoveri messag", "concurr updat" ]
[ "P", "P", "U", "M", "M", "M", "U", "U", "U", "U", "U", "M", "M", "U" ]
A Cross-Layer Approach to Resource Discovery and Distribution in Mobile ad-hoc Networks Chaiporn Jaikaeo Computer Engineering Kasetsart University, Thailand (+662) 942-8555 Ext 1424 cpj@cpe.ku.ac.th Xiang Cao Computer and Information Sciences University of Delaware, USA (+1) 302-831-1131 cao@cis.udel.edu Chien-Chung Shen Computer and Information Sciences University of Delaware, USA (+1) 302-831-1951 cshen@cis.udel.edu ABSTRACT This paper describes a cross-layer approach to designing robust P2P system over mobile ad-hoc networks. The design is based on simple functional primitives that allow routing at both P2P and network layers to be integrated to reduce overhead. With these primitives, the paper addresses various load balancing techniques. Preliminary simulation results are also presented. Categories and Subject Descriptors C.2.4 [Distributed Systems]: Distributed Applications General Terms Algorithms and design 1. INTRODUCTION Mobile ad-hoc networks (MANETs) consist of mobile nodes that autonomously establish connectivity via multi-hop wireless communications. Without relying on any existing, pre-configured network infrastructure or centralized control, MANETs are useful in situations where impromptu communication facilities are required, such as battlefield communications and disaster relief missions. As MANET applications demand collaborative processing and information sharing among mobile nodes, resource (service) discovery and distribution have become indispensable capabilities. One approach to designing resource discovery and distribution schemes over MANETs is to construct a peer-to-peer (P2P) system (or an overlay) which organizes peers of the system into a logical structure, on top of the actual network topology. However, deploying such P2P systems over MANETs may result in either a large number of flooding operations triggered by the reactive routing process, or inefficiency in terms of bandwidth utilization in proactive routing schemes. Either way, constructing an overlay will potentially create a scalability problem for large-scale MANETs. Due to the dynamic nature of MANETs, P2P systems should be robust by being scalable and adaptive to topology changes. These systems should also provide efficient and effective ways for peers to interact, as well as other desirable application specific features. This paper describes a design paradigm that uses the following two functional primitives to design robust resource discovery and distribution schemes over MANETs. 1. Positive/negative feedback. Query packets are used to explore a route to other peers holding resources of interest. Optionally, advertisement packets are sent out to advertise routes from other peers about available resources. When traversing a route, these control packets measure goodness of the route and leave feedback information on each node along the way to guide subsequent control packets to appropriate directions. 2. Sporadic random walk. As the network topology and/or the availability of resources change, existing routes may become stale while better routes become available. Sporadic random walk allows a control packet to explore different paths and opportunistically discover new and/or better routes. Adopting this paradigm, the whole MANET P2P system operates as a collection of autonomous entities which consist of different types of control packets such as query and advertisement packets. 
These packets work collaboratively, but indirectly, to achieve common tasks such as resource discovery, routing, and load balancing. With collaboration among these entities, a MANET P2P system is able to 'learn' the network dynamics by itself and adjust its behavior accordingly, without the overhead of organizing peers into an overlay. The remainder of this paper is organized as follows. Related work is described in the next section. Section 3 describes the resource discovery scheme, and Section 4 the resource distribution scheme. The replica invalidation scheme is described in Section 5, followed by its performance evaluation in Section 6. Section 7 concludes the paper.

2. RELATED WORK
For MANETs, P2P systems can be classified, based on their design principle, into layered and cross-layer approaches. A layered approach adopts a P2P-like [1] solution, where resource discovery is facilitated as an application layer protocol and query/reply messages are delivered by the underlying MANET routing protocols. For instance, Konark [2] makes use of an underlying multicast protocol such that service providers and queriers advertise and search services via a predefined multicast group, respectively. Proem [3] is a high-level mobile computing platform for P2P systems over MANETs. It defines a transport protocol that sits on top of the existing TCP/IP stack, hence relying on an existing routing protocol to operate. With limited control over how control and data packets are routed in the network, it is difficult to avoid the inefficiency of general-purpose routing protocols, which are often reactive and flooding-based. In contrast, a cross-layer approach either relies on its own routing mechanism or augments existing MANET routing algorithms to support resource discovery. 7DS [4], the pioneering work deploying a P2P system on mobile devices, exploits data locality and node mobility to disseminate data in a single-hop fashion. Hence, long search latency may result, as a 7DS node can get data of interest only if the node that holds the data is within its radio coverage. Mohan et al. [5] propose an adaptive service discovery algorithm that combines both push and pull models. Specifically, a service provider/querier broadcasts an advertisement/query only when the number of nodes advertising or querying, estimated from received control packets, is below a threshold during a period of time. In this way, the number of control packets on the network is constrained, thus providing good scalability. Despite this mechanism to reduce control packets, high overhead may still be unavoidable, especially when many clients try to locate different services, because the algorithm relies on flooding. For resource replication, Yin and Cao [6] design and evaluate cooperative caching techniques for MANETs. Caching, however, is performed reactively by intermediate nodes when a querier requests data from a server; data items or resources are never pushed to other nodes proactively. Thanedar et al. [7] propose a lightweight content replication scheme using an expanding ring technique. If a server detects that the number of requests exceeds a threshold within a time period, it begins to replicate its data onto nodes capable of storing replicas, whose hop counts from the server are of certain values.
Since data replication is triggered by the request frequency alone, replicas may be created unnecessarily over a large scope even though only nodes within a small range request the data. Our proposed resource replication mechanism, in contrast, attempts to replicate a data item in the appropriate areas where the item is requested frequently, instead of in a large area around the server.

3. RESOURCE DISCOVERY
We propose a cross-layer, hybrid resource discovery scheme that relies on the multiple interactions of query, reply, and advertisement packets. We assume that each resource is associated with a unique ID (an assumption made for brevity in exposition; resources could be specified via attribute-value assertions). Initially, when a node wants to discover a resource, it deploys query packets, which carry the corresponding resource ID and randomly explore the network to search for the requested resource. Upon receiving such a query packet, a reply packet is generated by the node providing the requested resource. Advertisement packets can also be used to proactively inform other nodes about what resources are available at each node. In addition to discovering the 'identity' of the node providing the requested resource, it may also be necessary to discover a 'route' leading to this node for further interaction. To allow intermediate nodes to decide where to forward query packets, each node maintains two tables: a neighbor table and a pheromone table. The neighbor table maintains a list of all current neighbors obtained via a neighbor discovery protocol. The pheromone table maintains the mapping of a resource ID and a neighbor ID to a pheromone value; this table is initially empty and is updated by reply packets generated by successful queries. Figure 1 illustrates an example of a neighbor table and a pheromone table maintained by node A, which has four neighbors. When node A receives a query packet searching for a resource, it decides to which neighbor it should forward the query packet by computing the desirability of each neighbor that has not been visited before by the same query packet. For a resource ID r, the desirability of choosing a neighbor n, δ(r,n), is obtained from the pheromone value of the entry whose neighbor and resource ID fields are n and r, respectively. If no such entry exists in the pheromone table, δ(r,n) is set to zero. Once the desirabilities of all valid next hops have been calculated, they are normalized to obtain the probability of choosing each neighbor. In addition, a small probability is assigned to those neighbors with zero desirability to exercise the sporadic random walk primitive. Based on these probabilities, a next hop is selected to forward the query packet to. When a query packet encounters a node with a satisfying resource, a reply packet is returned to the querying node. The returning reply packet also updates the pheromone table at each node on its return trip by increasing the pheromone value in the entry whose resource ID and neighbor ID fields match the ID of the discovered resource and the previous hop, respectively. If such an entry does not exist, a new entry is added to the table. Therefore, subsequent query packets looking for the same resource, when encountering this pheromone information, are guided toward the same destination with a small probability of taking an alternate path.
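The following Python sketch illustrates the probabilistic next-hop selection and the pheromone reinforcement performed by returning reply packets, as described above. The table layout (a dict keyed by (resource ID, neighbor ID)), the exploration weight for zero-desirability neighbors, and the concrete reinforcement amount are illustrative assumptions, not the paper's exact parameters.

```python
import random

EXPLORE_WEIGHT = 0.05   # assumed small weight for neighbors with zero desirability

def choose_next_hop(pheromone, resource_id, neighbors, visited, rng=random):
    """Select a neighbor to forward a query packet to.

    `pheromone` maps (resource_id, neighbor_id) -> pheromone value; the
    desirability of an unknown pair is zero, but such neighbors still receive
    a small weight so the query can take a sporadic random walk."""
    candidates = [n for n in neighbors if n not in visited]
    if not candidates:
        return None  # nowhere left to forward this query
    weights = []
    for n in candidates:
        desirability = pheromone.get((resource_id, n), 0.0)
        weights.append(desirability if desirability > 0 else EXPLORE_WEIGHT)
    return rng.choices(candidates, weights=weights, k=1)[0]

def reinforce(pheromone, resource_id, previous_hop, hops_traveled):
    """Pheromone update applied by a returning reply packet at each node;
    the deposited amount here simply shrinks with the distance traveled back
    (an assumed concrete form)."""
    key = (resource_id, previous_hop)
    pheromone[key] = pheromone.get(key, 0.0) + 1.0 / max(hops_traveled, 1)

# Example: node A holds pheromone for resource "R42" toward neighbors B and C.
table = {("R42", "B"): 0.9, ("R42", "C"): 0.3}
print(choose_next_hop(table, "R42", ["B", "C", "D", "E"], visited={"D"}))
reinforce(table, "R42", "B", hops_traveled=3)
```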
Since the hybrid discovery scheme neither relies on a MANET routing protocol nor arranges nodes into a logical overlay, query packets traverse the actual network topology. In dense networks, relatively large nodal degrees can affect this random exploring mechanism. To address this issue, the hybrid scheme also incorporates proactive advertisement in addition to the reactive query. To perform proactive advertisement, each node periodically deploys an advertising packet containing a list of the IDs of its available resources. These packets traverse away from the advertising node in a random walk manner, up to a limited number of hops, and advertise resource information to surrounding nodes in the same way as reply packets. In the hybrid scheme, an increase of pheromone serves as positive feedback which indirectly guides query packets looking for similar resources. Intuitively, the amount of pheromone added is inversely proportional to the distance the reply packet has traveled back, and other metrics, such as the quality of the resource, could contribute to this amount as well. Each node also applies an implicit negative feedback to resources that have not received positive feedback for some time by regularly decreasing the pheromone in all of its pheromone table entries over time. In addition, pheromone can be reduced by an explicit negative response, for instance a reply packet returning from a node that is not willing to provide a resource due to excessive workload. As a result, load balancing can be achieved via positive and negative feedback: a node serving too many nodes can either return fewer responses to query packets or generate negative responses.

Figure 1: Example illustrating neighbor and pheromone tables maintained by node A: (a) wireless connectivity around A showing that it currently has four neighbors, (b) A's neighbor table, and (c) a possible pheromone table of A

Figure 2: Sample scenarios illustrating the three mechanisms supporting load balancing: (a) resource replication, (b) resource relocation, and (c) resource division

4. RESOURCE DISTRIBUTION
In addition to resource discovery, a querying node usually attempts to access and retrieve the contents of a resource after a successful discovery. In certain situations, it is also beneficial to make a resource readily available at multiple nodes when the resource can be relocated and/or replicated, such as data files. Furthermore, in MANETs, we should consider not only the amount of load handled by a resource provider, but also the load on the intermediate nodes located on the communication paths between the provider and other nodes. Hence, we describe a cross-layer, hybrid resource distribution scheme that achieves load balancing by incorporating the functionalities of resource relocation, resource replication, and resource division.

4.1 Resource Replication
Multiple replicas of a resource in the network help prevent a single node, as well as the nodes surrounding it, from being overloaded by a large number of requests and data transfers. For example, when a node has obtained a data file from another node, the requesting node and the intermediate nodes can cache the file and start sharing it with surrounding nodes right away. In addition, replicable resources can be proactively replicated at other nodes located in certain strategic areas.
For instance, to help nodes find a resource quickly, we could replicate the resource so that it becomes reachable by a random walk of a specific number of hops from any node with some probability, as depicted in Figure 2(a). To realize this feature, the hybrid resource distribution scheme employs a different type of control packet, called a resource replication packet, which is responsible for finding an appropriate place to create a replica of a resource. A resource replication packet of type R is deployed by a node that is providing the resource R itself. Unlike a query packet, which follows higher pheromone upstream toward a resource it is looking for, a resource replication packet tends to be propelled away from similar resources by moving itself downstream toward weaker pheromone. When a resource replication packet finds itself in an area with sufficiently low pheromone, it decides whether it should continue exploring or turn back. The decision depends on conditions such as the current workload and/or remaining energy of the node being visited, as well as the popularity of the resource itself.

4.2 Resource Relocation
In certain situations, a resource may need to be transferred from one node to another. For example, a node may no longer want to hold a file due to a shortage of storage space, but it cannot simply delete the file since other nodes may still need it in the future. In this case, the node can create replicas of the file via the aforementioned resource replication mechanism and then delete its own copy. Consider also a situation where a majority of the nodes requesting a resource are located far away from the resource provider, as shown at the top of Figure 2(b). If the resource R is relocatable, it is preferable to relocate it to an area closer to those nodes, similar to the bottom of the same figure, so that network bandwidth is utilized more efficiently. The hybrid resource distribution scheme incorporates resource relocation algorithms that are adaptive to user requests and aim to reduce communication overhead. Specifically, following the same pheromone maintenance concept, the hybrid resource distribution scheme introduces another type of pheromone which corresponds to user requests instead of resources. This type of pheromone, called request pheromone, is set up by query packets that are in their exploring phase (not returning ones) to guide a resource to a new location.

4.3 Resource Division
Certain types of resources can be divided into smaller sub-resources (e.g., a large file being broken into smaller files) and distributed to multiple locations to avoid overloading a single node, as depicted in Figure 2(c). The hybrid resource distribution scheme incorporates a resource division mechanism that operates at a thin layer right above all the other mechanisms described earlier. The resource division mechanism is responsible for decomposing divisible resources into sub-resources and adding an extra keyword to distinguish each sub-resource from the others. Therefore, each of these sub-resources is seen by the other mechanisms as a single resource which can be independently discovered, replicated, and relocated. The resource division mechanism is also responsible for combining data from these sub-resources (e.g., merging pieces of a file) and delivering the final result to the application.
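A small sketch of the replica placement walk described in Section 4.1: the replication packet repeatedly moves toward the neighbor with the weakest pheromone for its resource and stops (creating a replica) when local pheromone and load fall below thresholds. The threshold values, the load and energy callbacks, and the hop limit are illustrative assumptions.

```python
def replicate_walk(resource_id, start_node, get_neighbors, get_pheromone,
                   get_load, max_hops=10, pheromone_threshold=0.1, load_threshold=0.5):
    """Walk a resource replication packet downstream toward weaker pheromone.

    Returns the node chosen to host the new replica, or None if no suitable
    node is found within `max_hops`.  The callbacks abstract each node's
    state (neighbor table, pheromone table, current workload)."""
    current, visited = start_node, {start_node}
    for _ in range(max_hops):
        local = get_pheromone(current, resource_id)
        if local <= pheromone_threshold and get_load(current) <= load_threshold:
            return current  # low pheromone and light load: replicate here
        candidates = [n for n in get_neighbors(current) if n not in visited]
        if not candidates:
            return None  # dead end; the packet turns back or gives up
        # move toward the neighbor with the weakest pheromone for this resource
        current = min(candidates, key=lambda n: get_pheromone(n, resource_id))
        visited.add(current)
    return None
```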
5. REPLICA INVALIDATION
Although replicas improve accessibility and balance load, replica invalidation becomes a critical issue when nodes caching updatable resources may concurrently update their own replicas, which renders the replicas held by other nodes obsolete. Most existing solutions to the replica invalidation problem either impose the constraint that only the data source can perform updates and invalidate other replicas, or resort to network-wide flooding, which results in heavy network traffic and leads to a scalability problem, or both. The lack of infrastructure support and frequent topology changes in MANETs further complicate the issue. We apply the same cross-layer paradigm to invalidating replicas in MANETs while allowing concurrent updates to be performed on multiple replicas. To coordinate concurrent updates and disseminate replica invalidations, a special infrastructure, called the validation mesh, or mesh for short, is adaptively maintained among the nodes possessing 'valid' replicas of a resource. Once a node has updated its replica, an invalidation packet is disseminated only over the validation mesh to inform the other replica-possessing nodes that their replicas have become invalid and should be deleted. The structure (topology) of the validation mesh keeps evolving (1) when nodes request and cache a resource, (2) when nodes update their respective replicas and invalidate other replicas, and (3) when nodes move. To accommodate these dynamics, our scheme integrates the components of swarm intelligence to adaptively maintain the validation mesh without relying on any underlying MANET routing protocol. In particular, the scheme takes into account concurrent updates initiated by multiple nodes to ensure consistency among replicas. In addition, a version number is used to distinguish new from old replicas when invalidating any stale replica. Simulation results show that the proposed scheme effectively facilitates concurrent replica updates and efficiently performs replica invalidation without incurring network-wide flooding. Figure 3 depicts the idea of the validation mesh, which maintains connectivity among nodes holding valid replicas of a resource to avoid network-wide flooding when invalidating replicas.

Figure 3: Examples showing maintenance of the validation mesh

There are eight nodes in the sample network, and we start with only node A holding the valid file, as shown in Figure 3(a). Later on, node G issues a query packet for the file and eventually obtains the file from A via nodes B and D. Since intermediate nodes are allowed to cache forwarded data, nodes B, D, and G now hold valid replicas of the file. As a result, a validation mesh is established among nodes A, B, D, and G, as depicted in Figure 3(b). In Figure 3(c), another node, H, has issued a query packet for the same file and obtained it from node B's cache via node E. At this point, six nodes hold valid replicas and are connected through the validation mesh. Now assume node G updates its replica of the file and informs the other nodes by sending an invalidation packet over the validation mesh. Consequently, all other nodes except G remove their replicas of the file from their storage and the validation mesh is torn down. However, query forwarding pheromone, denoted by the dotted arrows in Figure 3(d), is set up at these nodes along the 'reverse paths' that the invalidation packets have traversed, so that future requests for this file will be forwarded to node G. In Figure 3(e), node H makes a new request for the file.
This time, its query packet follows the pheromone toward node G, where the updated file can be obtained. Eventually, a new validation mesh is established over nodes G, B, D, E, and H. To maintain a validation mesh among the nodes holding valid replicas, one of them is designated as the focal node. Initially, the node that originally holds the data is the focal node. As nodes update replicas, the node that last (or most recently) updated the corresponding replica assumes the role of focal node. We also refer to nodes, such as G and H, that originate requests to replicate data as clients, and to nodes such as B, D, and E that locally cache passing data as data nodes. For instance, in Figures 3(a), 3(b), and 3(c), node A is the focal node; in Figures 3(d), 3(e), and 3(f), node G becomes the focal node. In addition, to accommodate newly participating nodes and node mobility, the focal node periodically floods the validation mesh with a keep-alive packet, so that nodes that can hear this packet consider themselves to be part of the validation mesh. If a node holding a valid/updated replica does not hear a keep-alive packet for a certain time interval, it deploys a search packet, using the resource discovery mechanism described in Section 3, to find another node (termed an attachment point) currently on the validation mesh to which it can attach itself. Once an attachment point is found, a search_reply packet is returned to the disconnected node that originated the search. Intermediate nodes that forward the search_reply packet become part of the validation mesh as well. To illustrate the effect of node mobility, in Figure 3(f) node H has moved to a location where it is not directly connected to the mesh. Via the resource discovery mechanism, node H relies on an intermediate node F to connect itself to the mesh. Here node F, although part of the validation mesh, does not hold a data replica, and hence is termed a non-data node. Clients and data nodes that keep hearing the keep-alive packets from the focal node act as if they hold a valid replica, so that they can reply to query packets, as node B does in Figure 3(c) when replying to a request from node H. While a disconnected node is attempting to discover an attachment point to reattach itself to the mesh, it cannot reply to a query packet; for instance, in Figure 3(f), node H does not reply to any query packet before it reattaches itself to the mesh. Although the validation mesh provides a conceptual topology that (1) connects all replicas together, (2) coordinates concurrent updates, and (3) disseminates invalidation packets, the technical issue is how such a mesh topology can be effectively and efficiently maintained and evolved (a) when nodes request and cache a resource, (b) when nodes update their respective replicas and invalidate other replicas, and (c) when nodes move. Without relying on any MANET routing protocol, the two primitives work together to facilitate efficient search and adaptive maintenance.
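The per-node keep-alive bookkeeping described above can be summarized with a small timer-driven routine. The sketch below is our own illustration under assumed names and an assumed timeout value; `send_search` stands in for deploying a search packet via the discovery mechanism of Section 3.

```python
import time

KEEPALIVE_TIMEOUT = 10.0  # assumed timeout value, for illustration only

class MeshMember:
    """State kept by a node holding a valid replica of one resource."""

    def __init__(self, resource_id, send_search):
        self.resource_id = resource_id
        self.send_search = send_search      # deploys a search packet (Section 3)
        self.last_keepalive = time.time()
        self.on_mesh = True                 # may answer query packets only if True

    def hear_keepalive(self):
        """Called whenever a keep-alive flooded by the focal node is received."""
        self.last_keepalive = time.time()
        self.on_mesh = True

    def check_timeout(self):
        """Called periodically; triggers reattachment when keep-alives stop."""
        if self.on_mesh and time.time() - self.last_keepalive > KEEPALIVE_TIMEOUT:
            self.on_mesh = False            # stop replying to queries
            self.send_search(self.resource_id)  # look for an attachment point

    def hear_search_reply(self):
        """An attachment point answered; the node rejoins the validation mesh."""
        self.on_mesh = True
        self.last_keepalive = time.time()
```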
6. PERFORMANCE EVALUATION We have conducted simulation experiments using the QualNet simulator to evaluate the performance of the described resource discovery, resource distribution, and replica invalidation schemes. However, due to space limitations only the performance of the replica invalidation is reported. In our experiments, eighty nodes are uniformly distributed over a terrain of size 1000×1000 m2. Each node has a communication range of approximately 250 m over a 2 Mbps wireless channel, using IEEE 802.11 as the MAC layer. We use the random-waypoint mobility model with a pause time of 1 second. Nodes may move at the minimum and maximum speeds of 1 m/s and 5 m/s, respectively. Table 1 lists the other parameter settings used in the simulation. Initially, there is one resource server node in the network. Two nodes are randomly picked every 10 seconds as clients. Every β seconds, we check the number of nodes, N, that have obtained the data. Then we randomly pick min(γ, N) of these nodes to initiate a data update. Each experiment is run for 10 minutes. Table 1 (Simulation Settings): HOP_LIMIT: 10; ADVERTISE_HOP_LIMIT: 1; KEEPALIVE_INTERVAL: 3 seconds; NUM_SEARCH: 1; ADVERTISE_INTERVAL: 5 seconds; EXPIRATION_INTERVAL: 10 seconds; Average Query Generation Rate: 2 queries / 10 sec; Max # of Concurrent Updates (γ): 2; Frequency of Update (β): 3 s. We evaluate the performance under different mobility speeds, node densities, maximum numbers of concurrent updating nodes, and update frequencies, using two metrics: • Average overhead per update measures the average number of packets transmitted per update in the network. • Average delay per update measures how long our approach takes to finish an update on average. All figures shown present the results with a 70% confidence interval.
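As a concrete reading of this setup, the update-initiation procedure (every β seconds, pick at most γ of the N data-holding nodes) and the two metrics can be expressed in a few lines. This is a toy sketch with an invented per-update cost model; it stands in for whatever the real QualNet simulation measures, and is not the authors' experiment code.

```python
import random

# Toy sketch of the experiment driver and the two metrics defined above.
BETA, GAMMA, RUN_TIME = 3, 2, 600   # update period (s), max concurrent updates, run length (s)

def simulate_run(data_holders, cost_model):
    """data_holders: ids of nodes currently holding the data.
    cost_model(node) -> (packets_sent, delay_seconds) for one update."""
    per_update = []
    for t in range(0, RUN_TIME, BETA):
        updaters = random.sample(data_holders, min(GAMMA, len(data_holders)))
        per_update.extend(cost_model(node) for node in updaters)
    avg_overhead = sum(p for p, _ in per_update) / len(per_update)   # packets per update
    avg_delay = sum(d for _, d in per_update) / len(per_update)      # seconds per update
    return avg_overhead, avg_delay

# Example with five replica holders and a made-up cost of ~70 packets, ~0.5 s per update
print(simulate_run(list(range(5)),
                   lambda n: (random.randint(60, 80), random.uniform(0.3, 0.8))))
```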
Figures 4 through 11 report these results (Figure 4: Overhead vs. speed for 80 nodes; Figure 5: Overhead vs. density; Figure 6: Overhead vs. max # concurrent updates; Figure 7: Overhead vs. freq.; Figure 8: Delay vs. speed; Figure 9: Delay vs. density; Figure 10: Delay vs. max # concurrent updates; Figure 11: Delay vs. freq.). Figures 4, 5, 6, and 7 show the overhead versus the various parameter values. In Figure 4, the overhead increases as the speed increases, because at higher speeds nodes move out of the mesh more frequently and send out more search packets. However, the overhead is not high; even at a speed of 10 m/s, the overhead is below 100 packets. In contrast, more than 200 packets would be expected at the various speeds if flooding were used. Figure 5 shows that the overhead remains almost the same under various densities. This is attributed to flooding only over the mesh instead of the whole network: the size of the mesh does not vary much with density, so neither does the overhead. Figure 6 shows that the overhead also remains almost the same under various maximum numbers of concurrent updates, because one more updating node just means one more flood over the mesh during the update process, so the impact is limited. Figure 7 shows that if updates happen more frequently, the overhead is higher. This is because when updates happen more quickly, (1) there are more keep_alive messages over the mesh between two updates, and (2) nodes move out of the mesh more frequently and send out more search packets. Figures 8, 9, 10, and 11 show the delay versus the various parameter values. From Figure 8, we see that the delay increases as the speed increases, because with increasing speed clients move out of the mesh with higher probability. When these clients want to update the data, they first have to spend time searching for the mesh; the higher the speed, the more time clients need to spend on this search. Figure 9 shows that the delay is only slightly affected by the density. Delay decreases slightly as the number of nodes increases, because with more nodes in the network, more nodes receive the advertisement packets, which helps a search packet find its target and thus decreases the update delay. Figure 10 shows that the delay decreases slightly as the maximum number of concurrent updates increases. The larger this maximum is, the more nodes are picked to perform updates, and with higher probability one of them is still in the mesh and finishes the update immediately (without needing to search for the mesh first), which decreases the delay. Figure 11 shows how the delay varies with the update frequency: the less frequently updates happen, the more time nodes in the mesh have to move out of it, and they then need to spend time searching for the mesh when they do update, which increases the delay. The simulation results show that the replica invalidation scheme can significantly reduce the overhead with an acceptable delay. 7. CONCLUSION To facilitate resource discovery and distribution over MANETs, one approach is to design peer-to-peer (P2P) systems over MANETs that construct an overlay by organizing the peers of the system into a logical structure on top of the MANETs' physical topology. However, deploying an overlay over MANETs may result in either a large number of flooding operations triggered by the routing process, or inefficiency in terms of bandwidth usage. Specifically, overlay routing relies on the network-layer routing protocols. In the case of a reactive routing protocol, routing on the overlay may cause a large number of flooded route discovery messages, since the routing path in each routing step must be discovered on demand. On the other hand, if a proactive routing protocol is adopted, each peer has to periodically broadcast control messages, which leads to poor efficiency in terms of bandwidth usage. Either way, constructing an overlay will potentially suffer from scalability problems. The paper describes a design paradigm that uses the functional primitives of positive/negative feedback and sporadic random walk to design robust resource discovery and distribution schemes over MANETs. In particular, the scheme offers the features of (1) cross-layer design of P2P systems, which allows the routing process at both the P2P and the network layers to be integrated to reduce overhead, (2) scalability and mobility support, which minimizes the use of global flooding operations and adaptively combines proactive resource advertisement and reactive resource discovery, and (3) load balancing, which is achieved via resource replication, relocation, and division. 8. REFERENCES [1] A. Oram, Peer-to-Peer: Harnessing the Power of Disruptive Technologies. O'Reilly, March 2000. [2] S. Helal, N. Desai, V. Verma, and C. Lee, Konark - A Service Discovery and Delivery Protocol for ad-hoc Networks, in the Third IEEE Conference on Wireless Communication Networks (WCNC), New Orleans, Louisiana, 2003. [3] G. Kortuem, Proem: A Peer-to-Peer Computing Platform for Mobile ad-hoc Networks, in Advanced Topic Workshop Middleware for Mobile Computing, Germany, 2001. [4] M. Papadopouli and H. Schulzrinne, A Performance Analysis of 7DS, a Peer-to-Peer Data Dissemination and Prefetching Tool for Mobile Users, in Advances in Wired and Wireless Communications, IEEE Sarnoff Symposium Digest, Ewing, NJ, 2001 (Best student paper & poster award). [5] U. Mohan, K. Almeroth, and E.
Belding-Royer, Scalable Service Discovery in Mobile ad-hoc Networks, in IFIP Networking Conference, Athens, Greece, May 2004. [6] L. Yin and G. Cao, Supporting Cooperative Caching in Ad Hoc Networks, in IEEE INFOCOM, 2004. [7] V. Thanedar, K. Almeroth, and E. Belding-Royer, A Lightweight Content Replication Scheme for Mobile ad-hoc Environments, in IFIP Networking Conference, Athens, Greece, May 2004.
A Cross-Layer Approach to Resource Discovery and Distribution in Mobile Ad hoc Networks ABSTRACT This paper describes a cross-layer approach to designing robust P2P system over mobile ad hoc networks. The design is based on simple functional primitives that allow routing at both P2P and network layers to be integrated to reduce overhead. With these primitives, the paper addresses various load balancing techniques. Preliminary simulation results are also presented. 1. INTRODUCTION Mobile ad hoc networks (MANETs) consist of mobile nodes that autonomously establish connectivity via multi-hop wireless communications. Without relying on any existing, pre-configured network infrastructure or centralized control, MANETs are useful in situations where impromptu communication facilities are required, such as battlefield communications and disaster relief missions. As MANET applications demand collaborative processing and information sharing among mobile nodes, resource (service) discovery and distribution have become indispensable capabilities. One approach to designing resource discovery and distribution schemes over MANETs is to construct a peer-to-peer (P2P) system (or an overlay) which organizes peers of the system into a logical structure, on top of the actual network topology. However, deploying such P2P systems over MANETs may result in either a large number of flooding operations triggered by the reactive routing process, or inefficiency in terms of bandwidth utilization in proactive routing schemes. Either way, constructing an overlay will potentially create a scalability problem for large-scale MANETs. Due to the dynamic nature of MANETs, P2P systems should be robust by being scalable and adaptive to topology changes. These systems should also provide efficient and effective ways for peers to interact, as well as other desirable application specific features. This paper describes a design paradigm that uses the following two functional primitives to design robust resource discovery and distribution schemes over MANETs. 1. Positive/negative feedback. Query packets are used to explore a route to other peers holding resources of interest. Optionally, advertisement packets are sent out to advertise routes from other peers about available resources. When traversing a route, these control packets measure goodness of the route and leave feedback information on each node along the way to guide subsequent control packets to appropriate directions. 2. Sporadic random walk. As the network topology and/or the availability of resources change, existing routes may become stale while better routes become available. Sporadic random walk allows a control packet to explore different paths and opportunistically discover new and/or better routes. Adopting this paradigm, the whole MANET P2P system operates as a collection of autonomous entities which consist of different types of control packets such as query and advertisement packets. These packets work collaboratively, but indirectly, to achieve common tasks, such as resource discovery, routing, and load balancing. With collaboration among these entities, a MANET P2P system is able to ` learn' the network dynamics by itself and adjust its behavior accordingly, without the overhead of organizing peers into an overlay. The remainder of this paper is organized as follows. Related work is described in the next section. Section 3 describes the resource discovery scheme. Section 4 describes the resource distribution scheme. 
The replica invalidation scheme is described in Section 5, followed by its performance evaluation in Section 6. Section 7 concludes the paper. 2. RELATED WORK For MANETs, P2P systems can be classified, based on the design principle, into layered and cross-layer approaches. A layered approach adopts a P2P-like [1] solution, where resource discovery is facilitated as an application layer protocol and query/reply messages are delivered by the underlying MANET routing protocols. For instance, Konark [2] makes use of an underlying multicast protocol such that service providers and queriers advertise and search services via a predefined multicast group, respectively. Proem [3] is a high-level mobile computing platform for P2P systems over MANETs. It defines a transport protocol that sits on top of the existing TCP/IP stack, hence relying on an existing routing protocol to operate. With limited control over how control and data packets are routed in the network, it is difficult to avoid the inefficiency of the general-purpose routing protocols, which are often reactive and flooding-based. In contrast, cross-layer approaches either rely on their own routing mechanisms or augment existing MANET routing algorithms to support resource discovery. 7DS [4], the pioneering work deploying a P2P system on mobile devices, exploits data locality and node mobility to disseminate data in a single-hop fashion. Hence, long search latencies may result, as a 7DS node can get data of interest only if the node that holds the data is within its radio coverage. Mohan et al. [5] propose an adaptive service discovery algorithm that combines both push and pull models. Specifically, a service provider/querier broadcasts advertisements/queries only when the number of nodes advertising or querying, which is estimated from received control packets, is below a threshold during a period of time. In this way, the number of control packets on the network is constrained, thus providing good scalability. Despite the mechanism to reduce control packets, high overhead may still be unavoidable, especially when there are many clients trying to locate different services, due to the fact that the algorithm relies on flooding. For resource replication, Yin and Cao [6] design and evaluate cooperative caching techniques for MANETs. Caching, however, is performed reactively by intermediate nodes when a querier requests data from a server. Data items or resources are never pushed into other nodes proactively. Thanedar et al. [7] propose a lightweight content replication scheme using an expanding ring technique. If a server detects that the number of requests exceeds a threshold within a time period, it begins to replicate its data onto nodes capable of storing replicas whose hop counts from the server are of certain values. Since data replication is triggered by the request frequency alone, it is possible that replicas are unnecessarily created over a large scope even though only nodes within a small range request this data. Our proposed resource replication mechanism, in contrast, attempts to replicate a data item in the appropriate areas, where the item is requested frequently, instead of in a large area around the server. 3. RESOURCE DISCOVERY We propose a cross-layer, hybrid resource discovery scheme that relies on the multiple interactions of query, reply and advertisement packets. We assume that each resource is associated with a unique ID (this assumption is made for brevity in exposition; resources could also be specified via attribute-value assertions).
Initially, when a node wants to discover a resource, it deploys query packets, which carry the corresponding resource ID and randomly explore the network to search for the requested resource. Upon receiving such a query packet, a reply packet is generated by the node providing the requested resource. Advertisement packets can also be used to proactively inform other nodes about what resources are available at each node. In addition to discovering the 'identity' of the node providing the requested resource, it may also be necessary to discover a 'route' leading to this node for further interaction. To allow intermediate nodes to make a decision on where to forward query packets, each node maintains two tables: a neighbor table and a pheromone table. The neighbor table maintains a list of all current neighbors obtained via a neighbor discovery protocol. The pheromone table maintains the mapping of a resource ID and a neighbor ID to a pheromone value. This table is initially empty, and is updated by a reply packet generated by a successful query. Figure 1 illustrates an example of a neighbor table and a pheromone table maintained by node A, which has four neighbors. When node A receives a query packet searching for a resource, it decides to which neighbor it should forward the query packet by computing the desirability of each of the neighbors that have not been visited before by the same query packet. For a resource ID r, the desirability of choosing a neighbor n, δ(r, n), is obtained from the pheromone value of the entry whose neighbor and resource ID fields are n and r, respectively. If no such entry exists in the pheromone table, δ(r, n) is set to zero. Once the desirabilities of all valid next hops have been calculated, they are normalized to obtain the probability of choosing each neighbor. In addition, a small probability is also assigned to those neighbors with zero desirability, to exercise the sporadic random walk primitive. Based on these probabilities, a next hop is selected to forward the query packet to (a short sketch of this selection appears below). When a query packet encounters a node with a satisfying resource, a reply packet is returned to the querying node. The returning reply packet also updates the pheromone table at each node on its return trip by increasing the pheromone value in the entry whose resource ID and neighbor ID fields match the ID of the discovered resource and the previous hop, respectively. If such an entry does not exist, a new entry is added to the table. Therefore, subsequent query packets looking for the same resource, when encountering this pheromone information, are guided toward the same destination, with a small probability of taking an alternate path. Since the hybrid discovery scheme neither relies on a MANET routing protocol nor arranges nodes into a logical overlay, query packets traverse the actual network topology. In dense networks, relatively large nodal degrees can affect this random exploration mechanism. To address this issue, the hybrid scheme also incorporates proactive advertisement in addition to the reactive query. To perform proactive advertisement, each node periodically deploys an advertising packet containing a list of its available resources' IDs.
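Before continuing with how these advertisement packets propagate, the pheromone bookkeeping and the probabilistic next-hop selection described above can be sketched as follows. This is an illustrative reading only: the exploration probability, the evaporation factor, and the exact reinforcement formula are assumptions, not values taken from the paper.

```python
import random

# Sketch of a per-node pheromone table supporting probabilistic forwarding,
# positive feedback from returning reply/advertisement packets, and decay.

EXPLORE_PROB = 0.05   # assumed small weight for zero-desirability neighbors
EVAPORATION = 0.9     # assumed periodic decay factor (implicit negative feedback)


class PheromoneTable:
    def __init__(self):
        self.table = {}   # (resource_id, neighbor_id) -> pheromone value

    def desirability(self, resource_id, neighbor_id):
        return self.table.get((resource_id, neighbor_id), 0.0)

    def select_next_hop(self, resource_id, neighbors, visited):
        """Pick a next hop among neighbors not yet visited by this query."""
        candidates = [n for n in neighbors if n not in visited]
        if not candidates:
            return None
        # zero-desirability neighbors still get a small chance (random walk)
        weights = [self.desirability(resource_id, n) or EXPLORE_PROB for n in candidates]
        total = sum(weights)
        probs = [w / total for w in weights]          # normalization
        return random.choices(candidates, probs)[0]

    def reinforce(self, resource_id, next_hop, hops_back=1, quality=1.0):
        """Positive feedback left by a returning reply or advertisement packet;
        here the increment shrinks with the distance the packet traveled back."""
        key = (resource_id, next_hop)
        self.table[key] = self.table.get(key, 0.0) + quality / hops_back

    def evaporate(self):
        """Decay all entries over time; very weak entries are dropped."""
        for key in list(self.table):
            self.table[key] *= EVAPORATION
            if self.table[key] < 1e-3:
                del self.table[key]
```

In this sketch, reinforcement is applied by returning reply (or advertisement) packets, and the periodic evaporation plays the role of the implicit negative feedback described below.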
These packets will traverse away from the advertising node in a random walk manner up to a limited number of hops and advertise resource information to surrounding nodes in the same way as reply packets. In the hybrid scheme, an increase of pheromone serves as a positive feedback which indirectly guides query packets looking for similar resources. Intuitively, the amount of pheromone increased is inversely proportional to the distance the reply packet has traveled back, and other metrics, such as quality of the resource, could contribute to this amount as well. Each node also performs an implicit negative feedback for resources that have not been given a positive feedback for some time by regularly decreasing the pheromone in all of its pheromone table entries over time. In addition, pheromone can be reduced by an explicit negative response, for instance, a reply packet returning from a node that is not willing to provide a resource due to excessive workload. As a result, load balancing can be achieved via positive and negative feedback. A node serving too many nodes can either return fewer responses to query packets or generate negative responses. Figure 1: Example illustrating neighbor and pheromone tables maintained by node A: (a) wireless connectivity around A showing that it currently has four neighbors, (b) A's neighbor table, and (c) a possible pheromone table of A Figure 2: Sample scenarios illustrating the three mechanisms supporting load-balancing: (a) resource replication, (b) resource relocation, and (c) resource division 4. RESOURCE DISTRIBUTION In addition to resource discovery, a querying node usually attempts to access and retrieve the contents of a resource after a successful discovery. In certain situations, it is also beneficial to make a resource readily available at multiple nodes when the resource can be relocated and/or replicated, such as data files. Furthermore, in MANETs, we should consider not only the amount of load handled by a resource provider, but also the load on those intermediate nodes that are located on the communication paths between the provider and other nodes as well. Hence, we describe a cross-layer, hybrid resource distribution scheme to achieve load balancing by incorporating the functionalities of resource relocation, resource replication, and resource division. 4.1 Resource Replication Multiple replicas of a resource in the network help prevent a single node, as well as nodes surrounding it, from being overloaded by a large number of requests and data transfers. An example is when a node has obtained a data file from another node, the requesting node and the intermediate nodes can cache the file and start sharing that file with other surrounding nodes right away. In addition, replicable resources can also be proactively replicated at other nodes which are located in certain strategic areas. For instance, to help nodes find a resource quickly, we could replicate the resource so that it becomes reachable by random walk for a specific number of hops from any node with some probability, as depicted in Figure 2 (a). To realize this feature, the hybrid resource distribution scheme employs a different type of control packet, called resource replication packet, which is responsible for finding an appropriate place to create a replica of a resource. A resource replication packet of type R is deployed by a node that is providing the resource R itself. 
Unlike a query packet, which follows higher pheromone upstream toward a resource it is looking for, a resource replication packet tends to be propelled away from similar resources by moving itself downstream toward weaker pheromone. When a resource replication packet finds itself in an area with sufficiently low pheromone, it decides whether it should continue exploring or turn back. The decision depends on conditions such as the current workload and/or remaining energy of the node being visited, as well as the popularity of the resource itself. 4.2 Resource Relocation In certain situations, a resource may need to be transferred from one node to another. For example, a node may no longer want to possess a file due to a shortage of storage space, but it cannot simply delete the file since other nodes may still need it in the future. In this case, the node can choose to create replicas of the file via the aforementioned resource replication mechanism and then delete its own copy. Let us consider a situation where a majority of the nodes requesting a resource are located far away from the resource provider, as shown at the top of Figure 2(b). If the resource R is relocatable, it is preferable to relocate it to another area that is closer to those nodes, as at the bottom of the same figure. Network bandwidth is then more efficiently utilized. The hybrid resource distribution scheme incorporates resource relocation algorithms that are adaptive to user requests and aim to reduce communication overhead. Specifically, by following the same pheromone maintenance concept, the hybrid resource distribution scheme introduces another type of pheromone, which corresponds to user requests instead of resources. This type of pheromone, called request pheromone, is set up by query packets that are in their exploring phase (not returning ones) to guide a resource to a new location. 4.3 Resource Division Certain types of resources can be divided into smaller sub-resources (e.g., a large file being broken into smaller files) and distributed to multiple locations to avoid overloading a single node, as depicted in Figure 2(c). The hybrid resource distribution scheme incorporates a resource division mechanism that operates at a thin layer right above all the other mechanisms described earlier. The resource division mechanism is responsible for decomposing divisible resources into sub-resources and adding an extra keyword to distinguish each sub-resource from the others. Therefore, each of these sub-resources is seen by the other mechanisms as a single resource which can be independently discovered, replicated, and relocated. The resource division mechanism is also responsible for combining the data from these sub-resources (e.g., merging pieces of a file) and delivering the final result to the application.
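The replication and division mechanisms of Sections 4.1 and 4.3 can be illustrated with a brief sketch. The low-pheromone threshold, the load/capacity test, and the sub-resource naming convention below are assumptions made for illustration; the paper leaves these policy details open (popularity, remaining energy, and other conditions could equally enter the replication decision).

```python
# Illustrative sketch of the replication-packet walk (Section 4.1) and of
# resource division/merging (Section 4.3). Threshold, load test, and the
# "#partK" naming are assumed, not taken from the paper.

LOW_PHEROMONE = 0.1   # assumed "sufficiently low pheromone" threshold


def replication_walk(start, resource_id, adjacency, pheromone, load, capacity, max_hops=10):
    """adjacency: node -> list of neighbors; pheromone: (node, resource_id) -> value;
    load/capacity: node -> number. Returns the node chosen to host a new replica."""
    node = start
    for _ in range(max_hops):
        neighbors = adjacency.get(node, [])
        if not neighbors:
            break
        # drift downstream, i.e., toward the neighbor with the weakest pheromone
        node = min(neighbors, key=lambda n: pheromone.get((n, resource_id), 0.0))
        far_from_replicas = pheromone.get((node, resource_id), 0.0) <= LOW_PHEROMONE
        if far_from_replicas and load[node] < capacity[node]:
            return node       # lightly loaded node in a low-pheromone area
    return None               # turned back without replicating


def divide(resource_id, data, chunk_size):
    """Split a divisible resource (bytes) into sub-resources; the extra keyword
    lets each piece be discovered, replicated, and relocated independently."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    return {f"{resource_id}#part{k}": chunk for k, chunk in enumerate(chunks)}


def merge(sub_resources):
    """Recombine the sub-resources before delivering the result to the application."""
    ordered = sorted(sub_resources.items(), key=lambda kv: int(kv[0].rsplit("part", 1)[1]))
    return b"".join(chunk for _, chunk in ordered)
```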
5. REPLICA INVALIDATION Although replicas improve accessibility and balance load, replica invalidation becomes a critical issue when nodes caching updatable resources may concurrently update their own replicas, which renders the replicas held by other nodes obsolete. Most existing solutions to the replica invalidation problem either impose the constraint that only the data source may perform updates and invalidate other replicas, or resort to network-wide flooding, which results in heavy network traffic and leads to scalability problems, or both. The lack of infrastructure support and frequent topology changes in MANETs further complicate the issue. We apply the same cross-layer paradigm to invalidating replicas in MANETs, which allows concurrent updates performed by multiple replicas. To coordinate concurrent updates and disseminate replica invalidations, a special infrastructure, called a validation mesh (or mesh for short), is adaptively maintained among the nodes possessing 'valid' replicas of a resource. Once a node has updated its replica, an invalidation packet is disseminated only over the validation mesh to inform the other replica-possessing nodes that their replicas have become invalid and should be deleted. The structure (topology) of the validation mesh keeps evolving (1) when nodes request and cache a resource, (2) when nodes update their respective replicas and invalidate other replicas, and (3) when nodes move. To accommodate these dynamics, our scheme integrates the components of swarm intelligence to adaptively maintain the validation mesh without relying on any underlying MANET routing protocol. In particular, the scheme takes into account concurrent updates initiated by multiple nodes to ensure consistency among replicas. In addition, a version number is used to distinguish new from old replicas when invalidating any stale replica. Simulation results show that the proposed scheme effectively facilitates concurrent replica updates and efficiently performs replica invalidation without incurring network-wide flooding. Figure 3 depicts the idea of the 'validation mesh', which maintains connectivity among the nodes holding valid replicas of a resource to avoid network-wide flooding when invalidating replicas. Figure 3: Examples showing maintenance of validation mesh There are eight nodes in the sample network, and we start with only node A holding the valid file, as shown in Figure 3(a). Later on, node G issues a query packet for the file and eventually obtains the file from A via nodes B and D. Since intermediate nodes are allowed to cache forwarded data, nodes B, D, and G will now hold valid replicas of the file. As a result, a validation mesh is established among nodes A, B, D, and G, as depicted in Figure 3(b). In Figure 3(c), another node, H, has issued a query packet for the same file and obtained it from node B's cache via node E. At this point, six nodes hold valid replicas and are connected through the validation mesh. Now we assume node G updates its replica of the file and informs the other nodes by sending an invalidation packet over the validation mesh. Consequently, all nodes except G remove their replicas of the file from their storage and the validation mesh is torn down. However, query forwarding pheromone, as denoted by the dotted arrows in Figure 3(d), is set up at these nodes along the 'reverse paths' that the invalidation packets have traversed, so that future requests for this file will be forwarded to node G. In Figure 3(e), node H makes a new request for the file again. This time, its query packet follows the pheromone toward node G, where the updated file can be obtained. Eventually, a new validation mesh is established over nodes G, B, D, E, and H. To maintain a validation mesh among the nodes holding valid replicas, one of them is designated to be the focal node. Initially, the node that originally holds the data is the focal node.
As nodes update replicas, the node that last (most recently) updated a corresponding replica assumes the role of focal node. We also refer to nodes, such as G and H, that originate requests for the data as clients, and to nodes, such as B, D, and E, that locally cache passing data as data nodes. For instance, in Figures 3(a), 3(b), and 3(c), node A is the focal node; in Figures 3(d), 3(e), and 3(f), node G becomes the focal node. In addition, to accommodate newly participating nodes and the mobility of nodes, the focal node periodically floods the validation mesh with a keep-alive packet, so that nodes that hear this packet consider themselves part of the validation mesh. If a node holding a valid/updated replica does not hear a keep-alive packet for a certain time interval, it deploys a search packet, using the resource discovery mechanism described in Section 3, to find another node (termed the attachment point) currently on the validation mesh to which it can attach itself. Once an attachment point is found, a search_reply packet is returned to the disconnected node that originated the search. Intermediate nodes that forward the search_reply packet become part of the validation mesh as well. To illustrate the effect of node mobility, in Figure 3(f), node H has moved to a location where it is not directly connected to the mesh. Via the resource discovery mechanism, node H relies on an intermediate node F to connect itself to the mesh. Here, node F, although part of the validation mesh, does not hold a data replica, and hence is termed a non-data node. Client and data nodes that keep hearing the keep-alive packets from the focal node act as if they hold a valid replica, so that they can reply to query packets, as node B does in Figure 3(c) when replying to a request from node H. While a disconnected node is attempting to discover an attachment point in order to reattach itself to the mesh, it cannot reply to query packets. For instance, in Figure 3(f), node H does not reply to any query packet before it reattaches itself to the mesh. Although the validation mesh provides a conceptual topology that (1) connects all replicas together, (2) coordinates concurrent updates, and (3) disseminates invalidation packets, the technical issue is how such a mesh topology can be effectively and efficiently maintained and evolved (a) when nodes request and cache a resource, (b) when nodes update their respective replicas and invalidate other replicas, and (c) when nodes move. Without relying on any MANET routing protocols, the two primitives work together to facilitate efficient search and adaptive maintenance. 6. PERFORMANCE EVALUATION We have conducted simulation experiments using the QualNet simulator to evaluate the performance of the described resource discovery, resource distribution, and replica invalidation schemes. However, due to space limitations only the performance of the replica invalidation is reported. In our experiments, eighty nodes are uniformly distributed over a terrain of size 1000×1000 m2. Each node has a communication range of approximately 250 m over a 2 Mbps wireless channel, using IEEE 802.11 as the MAC layer. We use the random-waypoint mobility model with a pause time of 1 second. Nodes may move at the minimum and maximum speeds of 1 m/s and 5 m/s, respectively. Table 1 lists the other parameter settings used in the simulation.
Initially, there is one resource server node in the network. Two nodes are randomly picked every 10 seconds as clients. Every β seconds, we check the number of nodes, N, that have obtained the data. Then we randomly pick min(γ, N) of these nodes to initiate a data update. Each experiment is run for 10 minutes. Table 1: Simulation Settings We evaluate the performance under different mobility speeds, node densities, maximum numbers of concurrent updating nodes, and update frequencies, using two metrics: • Average overhead per update measures the average number of packets transmitted per update in the network. • Average delay per update measures how long our approach takes to finish an update on average. All figures shown present the results with a 70% confidence interval. Figures 4 through 11 report the results (Figure 4: Overhead vs. speed for 80 nodes; Figure 5: Overhead vs. density; Figure 6: Overhead vs. max # concurrent updates; Figure 7: Overhead vs. freq.; Figure 8: Delay vs. speed; Figure 9: Delay vs. density; Figure 10: Delay vs. max # concurrent updates; Figure 11: Delay vs. freq.). Figures 4, 5, 6, and 7 show the overhead versus the various parameter values. In Figure 4, the overhead increases as the speed increases, because at higher speeds nodes move out of the mesh more frequently and send out more search packets. However, the overhead is not high; even at a speed of 10 m/s, the overhead is below 100 packets. In contrast, more than 200 packets would be expected at the various speeds if flooding were used. Figure 5 shows that the overhead remains almost the same under various densities. This is attributed to flooding only over the mesh instead of the whole network: the size of the mesh does not vary much with density, so neither does the overhead. Figure 6 shows that the overhead also remains almost the same under various maximum numbers of concurrent updates, because one more updating node just means one more flood over the mesh during the update process, so the impact is limited. Figure 7 shows that if updates happen more frequently, the overhead is higher. This is because when updates happen more quickly, (1) there are more keep_alive messages over the mesh between two updates, and (2) nodes move out of the mesh more frequently and send out more search packets. Figures 8, 9, 10, and 11 show the delay versus the various parameter values. From Figure 8, we see that the delay increases as the speed increases, because with increasing speed clients move out of the mesh with higher probability. When these clients want to update the data, they first have to spend time searching for the mesh; the higher the speed, the more time clients need to spend on this search. Figure 9 shows that the delay is only slightly affected by the density. Delay decreases slightly as the number of nodes increases, because with more nodes in the network, more nodes receive the advertisement packets, which helps a search packet find its target and thus decreases the update delay. Figure 10 shows that the delay decreases slightly as the maximum number of concurrent updates increases. The larger this maximum is, the more nodes are picked to perform updates, and with higher probability one of them is still in the mesh and finishes the update immediately (without needing to search for the mesh first), which decreases the delay. Figure 11 shows how the delay varies with the update frequency.
The less frequently updates happen, the more time nodes in the mesh have to move out of it, and they then need to spend time searching for the mesh when they do update, which increases the delay. The simulation results show that the replica invalidation scheme can significantly reduce the overhead with an acceptable delay. 7. CONCLUSION To facilitate resource discovery and distribution over MANETs, one approach is to design peer-to-peer (P2P) systems over MANETs that construct an overlay by organizing the peers of the system into a logical structure on top of the MANETs' physical topology. However, deploying an overlay over MANETs may result in either a large number of flooding operations triggered by the routing process, or inefficiency in terms of bandwidth usage. Specifically, overlay routing relies on the network-layer routing protocols. In the case of a reactive routing protocol, routing on the overlay may cause a large number of flooded route discovery messages, since the routing path in each routing step must be discovered on demand. On the other hand, if a proactive routing protocol is adopted, each peer has to periodically broadcast control messages, which leads to poor efficiency in terms of bandwidth usage. Either way, constructing an overlay will potentially suffer from scalability problems. The paper describes a design paradigm that uses the functional primitives of positive/negative feedback and sporadic random walk to design robust resource discovery and distribution schemes over MANETs. In particular, the scheme offers the features of (1) cross-layer design of P2P systems, which allows the routing process at both the P2P and the network layers to be integrated to reduce overhead, (2) scalability and mobility support, which minimizes the use of global flooding operations and adaptively combines proactive resource advertisement and reactive resource discovery, and (3) load balancing, which is achieved via resource replication, relocation, and division.
A Cross-Layer Approach to Resource Discovery and Distribution in Mobile Ad hoc Networks ABSTRACT This paper describes a cross-layer approach to designing robust P2P system over mobile ad hoc networks. The design is based on simple functional primitives that allow routing at both P2P and network layers to be integrated to reduce overhead. With these primitives, the paper addresses various load balancing techniques. Preliminary simulation results are also presented. 1. INTRODUCTION Mobile ad hoc networks (MANETs) consist of mobile nodes that autonomously establish connectivity via multi-hop wireless communications. Without relying on any existing, pre-configured network infrastructure or centralized control, MANETs are useful in situations where impromptu communication facilities are required, such as battlefield communications and disaster relief missions. As MANET applications demand collaborative processing and information sharing among mobile nodes, resource (service) discovery and distribution have become indispensable capabilities. One approach to designing resource discovery and distribution schemes over MANETs is to construct a peer-to-peer (P2P) system (or an overlay) which organizes peers of the system into a logical structure, on top of the actual network topology. However, deploying such P2P systems over MANETs may result in either a large number of flooding operations triggered by the reactive routing process, or inefficiency in terms of bandwidth utilization in proactive routing schemes. Either way, constructing an overlay will potentially create a scalability problem for large-scale MANETs. Due to the dynamic nature of MANETs, P2P systems should be robust by being scalable and adaptive to topology changes. These systems should also provide efficient and effective ways for peers to interact, as well as other desirable application specific features. This paper describes a design paradigm that uses the following two functional primitives to design robust resource discovery and distribution schemes over MANETs. 1. Positive/negative feedback. Query packets are used to explore a route to other peers holding resources of interest. Optionally, advertisement packets are sent out to advertise routes from other peers about available resources. When traversing a route, these control packets measure goodness of the route and leave feedback information on each node along the way to guide subsequent control packets to appropriate directions. 2. Sporadic random walk. As the network topology and/or the availability of resources change, existing routes may become stale while better routes become available. Sporadic random walk allows a control packet to explore different paths and opportunistically discover new and/or better routes. Adopting this paradigm, the whole MANET P2P system operates as a collection of autonomous entities which consist of different types of control packets such as query and advertisement packets. These packets work collaboratively, but indirectly, to achieve common tasks, such as resource discovery, routing, and load balancing. With collaboration among these entities, a MANET P2P system is able to ` learn' the network dynamics by itself and adjust its behavior accordingly, without the overhead of organizing peers into an overlay. The remainder of this paper is organized as follows. Related work is described in the next section. Section 3 describes the resource discovery scheme. Section 4 describes the resource distribution scheme. 
The replica invalidation scheme is described in Section 5, followed by its performance evaluation in Section 6. Section 7 concludes the paper. 2. RELATED WORK For MANETs, P2P systems can be classified, based on the design principle, into layered and cross-layer approaches. A layered approach adopts a P2P-like [1] solution, where resource discovery is facilitated as an application layer protocol and query/reply messages are delivered by the underlying MANET routing protocols. For instance, Konark [2] makes use of an underlying multicast protocol such that service providers and queriers advertise and search services via a predefined multicast group, respectively. Proem [3] is a high-level mobile computing platform for P2P systems over MANETs. It defines a transport protocol that sits on top of the existing TCP/IP stack, hence relying on an existing routing protocol to operate. With limited control over how control and data packets are routed in the network, it is difficult to avoid the inefficiency of the general-purpose routing protocols, which are often reactive and flooding-based. In contrast, cross-layer approaches either rely on their own routing mechanisms or augment existing MANET routing algorithms to support resource discovery. 7DS [4], the pioneering work deploying a P2P system on mobile devices, exploits data locality and node mobility to disseminate data in a single-hop fashion. Hence, long search latencies may result, as a 7DS node can get data of interest only if the node that holds the data is within its radio coverage. Mohan et al. [5] propose an adaptive service discovery algorithm that combines both push and pull models. Specifically, a service provider/querier broadcasts advertisements/queries only when the number of nodes advertising or querying, which is estimated from received control packets, is below a threshold during a period of time. In this way, the number of control packets on the network is constrained, thus providing good scalability. Despite the mechanism to reduce control packets, high overhead may still be unavoidable, especially when there are many clients trying to locate different services, due to the fact that the algorithm relies on flooding. For resource replication, Yin and Cao [6] design and evaluate cooperative caching techniques for MANETs. Caching, however, is performed reactively by intermediate nodes when a querier requests data from a server. Data items or resources are never pushed into other nodes proactively. Thanedar et al. [7] propose a lightweight content replication scheme using an expanding ring technique. If a server detects that the number of requests exceeds a threshold within a time period, it begins to replicate its data onto nodes capable of storing replicas whose hop counts from the server are of certain values. Since data replication is triggered by the request frequency alone, it is possible that replicas are unnecessarily created over a large scope even though only nodes within a small range request this data. Our proposed resource replication mechanism, in contrast, attempts to replicate a data item in the appropriate areas, where the item is requested frequently, instead of in a large area around the server. 3. RESOURCE DISCOVERY 4. RESOURCE DISTRIBUTION 4.1 Resource Replication 4.2 Resource Relocation 4.3 Resource Division 5. REPLICA INVALIDATION 6. PERFORMANCE EVALUATION
7. CONCLUSION To facilitate resource discovery and distribution over MANETs, one approach is to design peer-to-peer (P2P) systems over MANETs that construct an overlay by organizing the peers of the system into a logical structure on top of the MANETs' physical topology. However, deploying an overlay over MANETs may result in either a large number of flooding operations triggered by the routing process, or inefficiency in terms of bandwidth usage. Specifically, overlay routing relies on the network-layer routing protocols. In the case of a reactive routing protocol, routing on the overlay may cause a large number of flooded route discovery messages, since the routing path in each routing step must be discovered on demand. On the other hand, if a proactive routing protocol is adopted, each peer has to periodically broadcast control messages, which leads to poor efficiency in terms of bandwidth usage. Either way, constructing an overlay will potentially suffer from scalability problems. The paper describes a design paradigm that uses the functional primitives of positive/negative feedback and sporadic random walk to design robust resource discovery and distribution schemes over MANETs.
A Cross-Layer Approach to Resource Discovery and Distribution in Mobile Ad hoc Networks ABSTRACT This paper describes a cross-layer approach to designing robust P2P system over mobile ad hoc networks. The design is based on simple functional primitives that allow routing at both P2P and network layers to be integrated to reduce overhead. With these primitives, the paper addresses various load balancing techniques. Preliminary simulation results are also presented. 1. INTRODUCTION Mobile ad hoc networks (MANETs) consist of mobile nodes that autonomously establish connectivity via multi-hop wireless communications. As MANET applications demand collaborative processing and information sharing among mobile nodes, resource (service) discovery and distribution have become indispensable capabilities. One approach to designing resource discovery and distribution schemes over MANETs is to construct a peer-to-peer (P2P) system (or an overlay) which organizes peers of the system into a logical structure, on top of the actual network topology. However, deploying such P2P systems over MANETs may result in either a large number of flooding operations triggered by the reactive routing process, or inefficiency in terms of bandwidth utilization in proactive routing schemes. Either way, constructing an overlay will potentially create a scalability problem for large-scale MANETs. Due to the dynamic nature of MANETs, P2P systems should be robust by being scalable and adaptive to topology changes. This paper describes a design paradigm that uses the following two functional primitives to design robust resource discovery and distribution schemes over MANETs. 1. Positive/negative feedback. Query packets are used to explore a route to other peers holding resources of interest. Optionally, advertisement packets are sent out to advertise routes from other peers about available resources. When traversing a route, these control packets measure goodness of the route and leave feedback information on each node along the way to guide subsequent control packets to appropriate directions. 2. Sporadic random walk. As the network topology and/or the availability of resources change, existing routes may become stale while better routes become available. Sporadic random walk allows a control packet to explore different paths and opportunistically discover new and/or better routes. Adopting this paradigm, the whole MANET P2P system operates as a collection of autonomous entities which consist of different types of control packets such as query and advertisement packets. These packets work collaboratively, but indirectly, to achieve common tasks, such as resource discovery, routing, and load balancing. With collaboration among these entities, a MANET P2P system is able to ` learn' the network dynamics by itself and adjust its behavior accordingly, without the overhead of organizing peers into an overlay. The remainder of this paper is organized as follows. Related work is described in the next section. Section 3 describes the resource discovery scheme. Section 4 describes the resource distribution scheme. The replica invalidation scheme is described in Section 5, followed by it performance evaluation in Section 6. Section 7 concludes the paper. 2. RELATED WORK For MANETs, P2P systems can be classified based on the design principle, into layered and cross-layer approaches. 
A layered approach adopts a P2P-like [1] solution, where resource discovery is facilitated as an application layer protocol and query/reply messages are delivered by the underlying MANET routing protocols. Proem [3] is a high-level mobile computing platform for P2P systems over MANETs; it relies on an existing routing protocol to operate. With limited control over how control and data packets are routed in the network, it is difficult to avoid the inefficiency of the general-purpose routing protocols, which are often reactive and flooding-based. In contrast, cross-layer approaches either rely on their own routing mechanisms or augment existing MANET routing algorithms to support resource discovery. 7DS [4], the pioneering work deploying a P2P system on mobile devices, exploits data locality and node mobility to disseminate data in a single-hop fashion. Mohan et al. [5] propose an adaptive service discovery algorithm that combines both push and pull models. Specifically, a service provider/querier broadcasts advertisements/queries only when the number of nodes advertising or querying, which is estimated from received control packets, is below a threshold during a period of time. In this way, the number of control packets on the network is constrained, thus providing good scalability. For resource replication, Yin and Cao [6] design and evaluate cooperative caching techniques for MANETs. Caching, however, is performed reactively by intermediate nodes when a querier requests data from a server. Data items or resources are never pushed into other nodes proactively. Thanedar et al. [7] propose a lightweight content replication scheme using an expanding ring technique. Our proposed resource replication mechanism, in contrast, attempts to replicate a data item in the appropriate areas, where the item is requested frequently, instead of in a large area around the server. 7. CONCLUSION To facilitate resource discovery and distribution over MANETs, one approach is to design peer-to-peer (P2P) systems over MANETs that construct an overlay by organizing the peers of the system into a logical structure on top of the MANETs' physical topology. However, deploying an overlay over MANETs may result in either a large number of flooding operations triggered by the routing process, or inefficiency in terms of bandwidth usage. Specifically, overlay routing relies on the network-layer routing protocols. In the case of a reactive routing protocol, routing on the overlay may cause a large number of flooded route discovery messages, since the routing path in each routing step must be discovered on demand. On the other hand, if a proactive routing protocol is adopted, each peer has to periodically broadcast control messages, which leads to poor efficiency in terms of bandwidth usage. Either way, constructing an overlay will potentially suffer from scalability problems. The paper describes a design paradigm that uses the functional primitives of positive/negative feedback and sporadic random walk to design robust resource discovery and distribution schemes over MANETs.
C-45
StarDust: A Flexible Architecture for Passive Localization in Wireless Sensor Networks
The problem of localization in wireless sensor networks where nodes do not use ranging hardware, remains a challenging problem, when considering the required location accuracy, energy expenditure and the duration of the localization phase. In this paper we propose a framework, called StarDust, for wireless sensor network localization based on passive optical components. In the StarDust framework, sensor nodes are equipped with optical retro-reflectors. An aerial device projects light towards the deployed sensor network, and records an image of the reflected light. An image processing algorithm is developed for obtaining the locations of sensor nodes. For matching a node ID to a location we propose a constraint-based label relaxation algorithm. We propose and develop localization techniques based on four types of constraints: node color, neighbor information, deployment time for a node and deployment location for a node. We evaluate the performance of a localization system based on our framework by localizing a network of 26 sensor nodes deployed in a 120 × 60 ft2 area. The localization accuracy ranges from 2 ft to 5 ft while the localization time ranges from 10 milliseconds to 2 minutes.
[ "local", "wireless sensor network", "rang", "sensor node", "imag process", "perform", "corner-cube retro-reflector", "aerial vehicl", "scene label", "consist", "probabl", "uniqu map", "connect" ]
[ "P", "P", "P", "P", "P", "P", "M", "M", "M", "U", "U", "U", "U" ]
StarDust: A Flexible Architecture for Passive Localization in Wireless Sensor Networks ∗ Radu Stoleru, Pascal Vicaire, Tian He†, John A. Stankovic Department of Computer Science, University of Virginia †Department of Computer Science and Engineering, University of Minnesota {stoleru, pv9f}@cs. virginia.edu, tianhe@cs.umn.edu, stankovic@cs.virginia.edu Abstract The problem of localization in wireless sensor networks where nodes do not use ranging hardware, remains a challenging problem, when considering the required location accuracy, energy expenditure and the duration of the localization phase. In this paper we propose a framework, called StarDust, for wireless sensor network localization based on passive optical components. In the StarDust framework, sensor nodes are equipped with optical retro-reflectors. An aerial device projects light towards the deployed sensor network, and records an image of the reflected light. An image processing algorithm is developed for obtaining the locations of sensor nodes. For matching a node ID to a location we propose a constraint-based label relaxation algorithm. We propose and develop localization techniques based on four types of constraints: node color, neighbor information, deployment time for a node and deployment location for a node. We evaluate the performance of a localization system based on our framework by localizing a network of 26 sensor nodes deployed in a 120 × 60 ft2 area. The localization accuracy ranges from 2 ft to 5 ft while the localization time ranges from 10 milliseconds to 2 minutes. Categories and Subject Descriptors: C.2.4 [Computer Communications Networks]: Distributed Systems; C.3 [Special Purpose and Application Based Systems]: Real-time and embedded systems General Terms: Algorithms, Measurement, Performance, Design, Experimentation 1 Introduction Wireless Sensor Networks (WSN) have been envisioned to revolutionize the way humans perceive and interact with the surrounding environment. One vision is to embed tiny sensor devices in outdoor environments, by aerial deployments from unmanned air vehicles. The sensor nodes form a network and collaborate (to compensate for the extremely scarce resources available to each of them: computational power, memory size, communication capabilities) to accomplish the mission. Through collaboration, redundancy and fault tolerance, the WSN is then able to achieve unprecedented sensing capabilities. A major step forward has been accomplished by developing systems for several domains: military surveillance [1] [2] [3], habitat monitoring [4] and structural monitoring [5]. Even after these successes, several research problems remain open. Among these open problems is sensor node localization, i.e., how to find the physical position of each sensor node. Despite the attention the localization problem in WSN has received, no universally acceptable solution has been developed. There are several reasons for this. On one hand, localization schemes that use ranging are typically high end solutions. GPS ranging hardware consumes energy, it is relatively expensive (if high accuracy is required) and poses form factor challenges that move us away from the vision of dust size sensor nodes. Ultrasound has a short range and is highly directional. Solutions that use the radio transceiver for ranging either have not produced encouraging results (if the received signal strength indicator is used) or are sensitive to environment (e.g., multipath). 
On the other hand, localization schemes that only use the connectivity information for inferring location information are characterized by low accuracies: ≈ 10 ft in controlled environments, 40−50 ft in realistic ones. To address these challenges, we propose a framework for WSN localization, called StarDust, in which the complexity associated with the node localization is completely removed from the sensor node. The basic principle of the framework is localization through passivity: each sensor node is equipped with a corner-cube retro-reflector and possibly an optical filter (a coloring device). An aerial vehicle projects light onto the deployment area and records images containing retro-reflected light beams (they appear as luminous spots). Through image processing techniques, the locations of the retro-reflectors (i.e., sensor nodes) is deter57 mined. For inferring the identity of the sensor node present at a particular location, the StarDust framework develops a constraint-based node ID relaxation algorithm. The main contributions of our work are the following. We propose a novel framework for node localization in WSNs that is very promising and allows for many future extensions and more accurate results. We propose a constraint-based label relaxation algorithm for mapping node IDs to the locations, and four constraints (node, connectivity, time and space), which are building blocks for very accurate and very fast localization systems. We develop a sensor node hardware prototype, called a SensorBall. We evaluate the performance of a localization system for which we obtain location accuracies of 2 − 5 ft with a localization duration ranging from 10 milliseconds to 2 minutes. We investigate the range of a system built on our framework by considering realities of physical phenomena that occurs during light propagation through the atmosphere. The rest of the paper is structured as follows. Section 2 is an overview of the state of art. The design of the StarDust framework is presented in Section 3. One implementation and its performance evaluation are in Sections 4 and 5, followed by a suite of system optimization techniques, in Section 6. In Section 7 we present our conclusions. 2 Related Work We present the prior work in localization in two major categories: the range-based, and the range-free schemes. The range-based localization techniques have been designed to use either more expensive hardware (and hence higher accuracy) or just the radio transceiver. Ranging techniques dependent on hardware are the time-of-flight (ToF) and the time-difference-of-arrival(TDoA). Solutions that use the radio are based on the received signal strength indicator (RSSI) and more recently on radio interferometry. The ToF localization technique that is most widely used is the GPS. GPS is a costly solution for a high accuracy localization of a large scale sensor network. AHLoS [6] employs a TDoA ranging technique that requires extensive hardware and solves relatively large nonlinear systems of equations. The Cricket location-support system (TDoA) [7] can achieve a location granularity of tens of inches with highly directional and short range ultrasound transceivers. In [2] the location of a sniper is determined in an urban terrain, by using the TDoA between an acoustic wave and a radio beacon. The PushPin project [8] uses the TDoA between ultrasound pulses and light flashes for node localization. The RADAR system [9] uses the RSSI to build a map of signal strengths as emitted by a set of beacon nodes. 
A mobile node is located by the best match, in the signal strength space, with a previously acquired signature. In MAL [10], a mobile node assists in measuring the distances (acting as constraints) between nodes until a rigid graph is generated. The localization problem is formulated as an on-line state estimation in a nonlinear dynamic system [11]. A cooperative ranging that attempts to achieve a global positioning from distributed local optimizations is proposed in [12]. A very recent, remarkable, localization technique is based on radio interferometry, RIPS [13], which utilizes two transmitters to create an interfering signal. The frequencies of the emitters are very close to each other, thus the interfering signal will have a low frequency envelope that can be easily measured. The ranging technique performs very well. The long time required for localization and multi-path environments pose significant challenges. Real environments create additional challenges for the range based localization schemes. These have been emphasized by several studies [14] [15] [16]. To address these challenges, and others (hardware cost, the energy expenditure, the form factor, the small range, localization time), several range-free localization schemes have been proposed. Sensor nodes use primarily connectivity information for inferring proximity to a set of anchors. In the Centroid localization scheme [17], a sensor node localizes to the centroid of its proximate beacon nodes. In APIT [18] each node decides its position based on the possibility of being inside or outside of a triangle formed by any three beacons within node``s communication range. The Gradient algorithm [19], leverages the knowledge about the network density to infer the average one hop length. This, in turn, can be transformed into distances to nodes with known locations. DV-Hop [20] uses the hop by hop propagation capability of the network to forward distances to landmarks. More recently, several localization schemes that exploit the sensing capabilities of sensor nodes, have been proposed. Spotlight [21] creates well controlled (in time and space) events in the network while the sensor nodes detect and timestamp this events. From the spatiotemporal knowledge for the created events and the temporal information provided by sensor nodes, nodes'' spatial information can be obtained. In a similar manner, the Lighthouse system [22] uses a parallel light beam, that is emitted by an anchor which rotates with a certain period. A sensor node detects the light beam for a period of time, which is dependent on the distance between it and the light emitting device. Many of the above localization solutions target specific sets of requirements and are useful for specific applications. StarDust differs in that it addresses a particular demanding set of requirements that are not yet solved well. StarDust is meant for localizing air dropped nodes where node passiveness, high accuracy, low cost, small form factor and rapid localization are all required. Many military applications have such requirements. 3 StarDust System Design The design of the StarDust system (and its name) was inspired by the similarity between a deployed sensor network, in which sensor nodes indicate their presence by emitting light, and the Universe consisting of luminous and illuminated objects: stars, galaxies, planets, etc.. 
The main difficulty when applying the above ideas to the real world is the complexity of the hardware that needs to be put on a sensor node so that the emitted light can be detected from thousands of feet. The energy expenditure for producing an intense enough light beam is also prohibitive. Instead, what we propose to use for sensor node localization is a passive optical element called a retro-reflector. The most common retro-reflective optical component is a Corner-Cube Retroreflector (CCR), shown in Figure 1(a). It consists of three mutually perpendicular mirrors. The inter58 (a) (b) Figure 1. Corner-Cube Retroreflector (a) and an array of CCRs molded in plastic (b) esting property of this optical component is that an incoming beam of light is reflected back, towards the source of the light, irrespective of the angle of incidence. This is in contrast with a mirror, which needs to be precisely positioned to be perpendicular to the incident light. A very common and inexpensive implementation of an array of CCRs is the retroreflective plastic material used on cars and bicycles for night time detection, shown in Figure 1(b). In the StarDust system, each node is equipped with a small (e.g. 0.5in2) array of CCRs and the enclosure has self-righting capabilities that orient the array of CCRs predominantly upwards. It is critical to understand that the upward orientation does not need to be exact. Even when large angular variations from a perfectly upward orientation are present, a CCR will return the light in the exact same direction from which it came. In the remaining part of the section, we present the architecture of the StarDust system and the design of its main components. 3.1 System Architecture The envisioned sensor network localization scenario is as follows: • The sensor nodes are released, possibly in a controlled manner, from an aerial vehicle during the night. • The aerial vehicle hovers over the deployment area and uses a strobe light to illuminate it. The sensor nodes, equipped with CCRs and optical filters (acting as coloring devices) have self-righting capabilities and retroreflect the incoming strobe light. The retro-reflected light is either white, as the originating source light, or colored, due to optical filters. • The aerial vehicle records a sequence of two images very close in time (msec level). One image is taken when the strobe light is on, the other when the strobe light is off. The acquired images are used for obtaining the locations of sensor nodes (which appear as luminous spots in the image). • The aerial vehicle executes the mapping of node IDs to the identified locations in one of the following ways: a) by using the color of a retro-reflected light, if a sensor node has a unique color; b) by requiring sensor nodes to establish neighborhood information and report it to a base station; c) by controlling the time sequence of sensor nodes deployment and recording additional imLight Emitter Sensor Node i Transfer Function Φi(λ) Ψ(λ) Φ(Ψ(λ)) Image Processing Node ID Matching Radio Model R G(Λ,E) Central Device V'' V'' Figure 2. The StarDust system architecture ages; d) by controlling the location where a sensor node is deployed. • The computed locations are disseminated to the sensor network. The architecture of the StarDust system is shown in Figure 2. The architecture consists of two main components: the first is centralized and it is located on a more powerful device. The second is distributed and it resides on all sensor nodes. 
The Central Device consists of the following: the Light Emitter, the Image Processing module, the Node ID Mapping module and the Radio Model. The distributed component of the architecture is the Transfer Function, which acts as a filter for the incoming light. The aforementioned modules are briefly described below: • Light Emitter - It is a strobe light, capable of producing very intense, collimated light pulses. The emitted light is non-monochromatic (unlike a laser) and it is characterized by a spectral density Ψ(λ), a function of the wavelength. The emitted light is incident on the CCRs present on sensor nodes. • Transfer Function Φ(Ψ(λ)) - This is a bandpass filter for the incident light on the CCR. The filter allows a portion of the original spectrum, to be retro-reflected. From here on, we will refer to the transfer function as the color of a sensor node. • Image Processing - The Image Processing module acquires high resolution images. From these images the locations and the colors of sensor nodes are obtained. If only one set of pictures can be taken (i.e., one location of the light emitter/image analysis device), then the map of the field is assumed to be known as well as the distance between the imaging device and the field. The aforementioned assumptions (field map and distance to it) are not necessary if the images can be simultaneously taken from different locations. It is important to remark here that the identity of a node can not be directly obtained through Image Processing alone, unless a specific characteristic of a sensor node can be identified in the image. • Node ID Matching - This module uses the detected locations and through additional techniques (e.g., sensor node coloring and connectivity information (G(Λ,E)) from the deployed network) to uniquely identify the sensor nodes observed in the image. The connectivity information is represented by neighbor tables sent from 59 Algorithm 1 Image Processing 1: Background filtering 2: Retro-reflected light recognition through intensity filtering 3: Edge detection to obtain the location of sensor nodes 4: Color identification for each detected sensor node each sensor node to the Central Device. • Radio Model - This component provides an estimate of the radio range to the Node ID Matching module. It is only used by node ID matching techniques that are based on the radio connectivity in the network. The estimate of the radio range R is based on the sensor node density (obtained through the Image Processing module) and the connectivity information (i.e., G(Λ,E)). The two main components of the StarDust architecture are the Image Processing and the Node ID Mapping. Their design and analysis is presented in the sections that follow. 3.2 Image Processing The goal of the Image Processing Algorithm (IPA) is to identify the location of the nodes and their color. Note that IPA does not identify which node fell where, but only what is the set of locations where the nodes fell. IPA is executed after an aerial vehicle records two pictures: one in which the field of deployment is illuminated and one when no illuminations is present. Let Pdark be the picture of the deployment area, taken when no light was emitted and Plight be the picture of the same deployment area when a strong light beam was directed towards the sensor nodes. The proposed IPA has several steps, as shown in Algorithm 1. The first step is to obtain a third picture Pfilter where only the differences between Pdark and Plight remain. 
Let us assume that Pdark has a resolution of n × m, where n is the number of pixels in a row of the picture, while m is the number of pixels in a column of the picture. Then Pdark is composed of n × m pixels, denoted Pdark(i, j), 1 ≤ i ≤ n, 1 ≤ j ≤ m. Similarly, Plight is composed of n × m pixels, denoted Plight(i, j), 1 ≤ i ≤ n, 1 ≤ j ≤ m. Each pixel P is described by an RGB value, where the R value is denoted by P^R, the G value by P^G, and the B value by P^B. IPA then generates the third picture, Pfilter, through the following transformations:

P^R_filter(i, j) = P^R_light(i, j) − P^R_dark(i, j)
P^G_filter(i, j) = P^G_light(i, j) − P^G_dark(i, j)
P^B_filter(i, j) = P^B_light(i, j) − P^B_dark(i, j)    (1)

After this transformation, all the features that appeared in both Pdark and Plight are removed from Pfilter. This simplifies the recognition of light retro-reflected by sensor nodes. The second step consists of identifying the elements contained in Pfilter that retro-reflect light. For this, an intensity filter is applied to Pfilter. First, IPA converts Pfilter into a grayscale picture. Then the brightest pixels are identified and used to create Preflect. This step is eased by the fact that the reflecting nodes appear much brighter than any other illuminated object in the picture.

Figure 3. Probabilistic label relaxation

The third step runs an edge detection algorithm on Preflect to identify the boundary of the nodes present. A tool such as Matlab provides a number of edge detection techniques; we used the bwboundaries function. For the obtained edges, the location (x, y) (in the image) of each node is determined by computing the centroid of the points constituting its edges. Standard computer graphics techniques [23] are then used to transform the 2D locations of sensor nodes detected in multiple images into 3D sensor node locations. The color of the node is obtained as the color of the pixel located at (x, y) in Plight.

3.3 Node ID Matching

The goal of the Node ID Matching module is to obtain the identity (node ID) of a luminous spot in the image, detected to be a sensor node. For this, we define V = {(x1,y1), (x2,y2), ..., (xm,ym)} to be the set of locations of the sensor nodes, as detected by the Image Processing module, and Λ = {λ1, λ2, ..., λm} to be the set of unique node IDs assigned to the m sensor nodes before deployment. From here on, we refer to node IDs as labels. We model the problem of finding the label λj of a node ni as a probabilistic label relaxation problem, frequently used in image processing/understanding. In the image processing domain, scene labeling (i.e., identifying objects in an image) plays a major role. The goal of scene labeling is to assign a label to each object detected in an image, such that an appropriate image interpretation is achieved. It is prohibitively expensive to consider the interactions among all the objects in an image. Instead, constraints placed among nearby objects generate local consistencies and, through iteration, global consistencies can be obtained. The main idea of sensor node localization through probabilistic label relaxation is to iteratively compute the probability of each label being the correct label for a sensor node, by taking into account, at each iteration, the support for a label. The support for a label can be understood as a hint, or proof, that a particular label is more likely to be the correct one when compared with the other potential labels for a sensor node.
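The image processing steps described above (the difference image of Equation 1, intensity filtering, edge detection and centroid computation) can be summarized in code. The sketch below is not the Matlab implementation described above (which relies on bwboundaries); it is a minimal NumPy/SciPy illustration, and the intensity threshold, the function name and the use of connected components in place of explicit edge following are assumptions made for brevity.

```python
# Minimal sketch of the Image Processing Algorithm (IPA) described above.
# Threshold value and function names are illustrative assumptions, not part
# of the original (Matlab) implementation.
import numpy as np
from scipy import ndimage

def detect_nodes(p_dark, p_light, intensity_threshold=200):
    """Return a list of (x, y, rgb_color) tuples for detected retro-reflectors.

    p_dark, p_light: HxWx3 uint8 arrays (images without / with the strobe).
    """
    # Step 1: difference image (Equation 1), channel by channel.
    p_filter = p_light.astype(np.int16) - p_dark.astype(np.int16)
    p_filter = np.clip(p_filter, 0, 255).astype(np.uint8)

    # Step 2: intensity filtering on the grayscale version; retro-reflected
    # spots are much brighter than anything else left in the difference image.
    gray = p_filter.mean(axis=2)
    p_reflect = gray > intensity_threshold

    # Step 3: connected components stand in for the edge-detection step;
    # the centroid of each component is the node's 2D location in the image.
    labels, num = ndimage.label(p_reflect)
    centroids = ndimage.center_of_mass(p_reflect, labels, list(range(1, num + 1)))

    nodes = []
    for (row, col) in centroids:
        r, c = int(round(row)), int(round(col))
        color = tuple(int(v) for v in p_light[r, c])  # node color read from P_light
        nodes.append((c, r, color))  # (x, y, RGB)
    return nodes
```

The 2D image locations returned by such a routine would still have to be converted to 3D field coordinates with the standard computer graphics techniques cited above.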
We pictorially depict this main idea in Figure 3. As shown, node ni has a set of candidate labels {λ1,...,λk}. Each of the labels has a different value for the Support function Q(λk). We defer the explanation of how the Support function is implemented to the subsections that follow, where we provide four concrete techniques. Formally, the algorithm is outlined in Algorithm 2, and the equations necessary for computing the new probability P_ni(λk) for a label λk of a node ni are the following:

P^{s+1}_{ni}(λk) = (1 / K_{ni}) P^s_{ni}(λk) Q^s_{ni}(λk)    (2)

where K_{ni} is a normalizing constant, given by:

K_{ni} = Σ_{k=1}^{N} P^s_{ni}(λk) Q^s_{ni}(λk)    (3)

and Q^s_{ni}(λk) is:

Q^s_{ni}(λk) = support for label λk of node ni    (4)

Algorithm 2 Label Relaxation
1: for each sensor node ni do
2:   assign equal prob. to all possible labels
3: end for
4: repeat
5:   converged ← true
6:   for each sensor node ni do
7:     for each label λj of ni do
8:       compute the support for label λj: Equation 4
9:     end for
10:    compute K for the node ni: Equation 3
11:    for each label λj do
12:      update the probability of label λj: Equation 2
13:      if |new prob. − old prob.| ≥ ε then
14:        converged ← false
15:      end if
16:    end for
17:  end for
18: until converged = true

The label relaxation algorithm is iterative and polynomial in the size of the network (number of nodes). The pseudo-code is shown in Algorithm 2. It initializes the probabilities associated with each possible label of a node ni through a uniform distribution. At each iteration s, the algorithm updates the probability associated with each label by considering the support Q^s_{ni}(λk) for each candidate label of a sensor node. In the sections that follow, we describe four different techniques for implementing the Support function, based on: node coloring, radio connectivity, the time of deployment (time) and the location of deployment (space). While some of these techniques are simplistic, they are primitives which, when combined, can create powerful localization systems. These design techniques have different trade-offs, which we present in Section 3.3.6.

3.3.1 Relaxation with Color Constraints

The unique mapping between a sensor node's position (identified by the image processing) and a label can be obtained by assigning a unique color to each sensor node. For this we define C = {c1, c2, ..., cn} to be the set of unique colors available and M : Λ → C to be a one-to-one mapping of labels to colors. This mapping is known prior to the sensor node deployment (from node manufacturing). In the case of color constrained label relaxation, the support for label λk is expressed as follows:

Q^s_{ni}(λk) = 1    (5)

As a result, the label relaxation algorithm (Algorithm 2) consists of the following steps: one label is assigned to each sensor node (lines 1-3 of the algorithm), implicitly having a probability P_ni(λk) = 1; the algorithm executes a single iteration, in which the support function simply reiterates the confidence in the unique labeling. However, it is often the case that unique colors for each node will not be available. It is interesting to discuss here the influence that the size of the coloring space (i.e., |C|) has on the accuracy of the localization algorithm. Several cases are discussed below:

• If |C| = 0, no colors are used and the sensor nodes are equipped with simple CCRs that reflect back all the incoming light (i.e., no filtering, and no coloring of the incoming light). From the image processing system, the position of sensor nodes can still be obtained.
Since all nodes appear white, no single sensor node can be uniquely identified. • If |C| = m − 1 then there are enough unique colors for all nodes (one node remains white, i.e. no coloring), the problem is trivially solved. Each node can be identified, based on its unique color. This is the scenario for the relaxation with color constraints. • If |C| ≥ 1, there are several options for how to partition the coloring space. If C = {c1} one possibility is to assign the color c1 to a single node, and leave the remaining m−1 sensor nodes white, or to assign the color c1 to more than one sensor node. One can observe that once a color is assigned uniquely to a sensor node, in effect, that sensor node is given the status of anchor, or node with known location. It is interesting to observe that there is an entire spectrum of possibilities for how to partition the set of sensor nodes in equivalence classes (where an equivalence class is represented by one color), in order to maximize the success of the localization algorithm. One of the goals of this paper is to understand how the size of the coloring space and its partitioning affect localization accuracy. Despite the simplicity of this method of constraining the set of labels that can be assigned to a node, we will show that this technique is very powerful, when combined with other relaxation techniques. 3.3.2 Relaxation with Connectivity Constraints Connectivity information, obtained from the sensor network through beaconing, can provide additional information for locating sensor nodes. In order to gather connectivity information, the following need to occur: 1) after deployment, through beaconing of HELLO messages, sensor nodes build their neighborhood tables; 2) each node sends its neighbor table information to the Central device via a base station. First, let us define G = (Λ,E) to be the weighted connectivity graph built by the Central device from the received neighbor table information. In G the edge (λi,λj) has a 61 λ1 λ2 ... λN ni nj gi2,j2 λ1 λ2 ... λN Pj,λ1 Pj,λ2 ... Pj,λN Pi,λ1 Pi,λ1 ... Pi,λN gi2,jm Figure 4. Label relaxation with connectivity constraints weight gij represented by the number of beacons sent by λj and received by λi. In addition, let R be the radio range of the sensor nodes. The main idea of the connectivity constrained label relaxation is depicted in Figure 4 in which two nodes ni and nj have been assigned all possible labels. The confidence in each of the candidate labels for a sensor node, is represented by a probability, shown in a dotted rectangle. It is important to remark that through beaconing and the reporting of neighbor tables to the Central Device, a global view of all constraints in the network can be obtained. It is critical to observe that these constraints are among labels. As shown in Figure 4 two constraints exist between nodes ni and nj. The constraints are depicted by gi2,j2 and gi2,jM, the number of beacons sent the labels λj2 and λjM and received by the label λi2. 
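All of the support functions in this section plug into the same iterative update of Algorithm 2 (Equations 2 to 4). A minimal sketch of that loop is given below, with the support function left as a pluggable callback; the data layout and names are illustrative and are not taken from the actual implementation.

```python
# Sketch of the label relaxation loop (Algorithm 2, Equations 2-4). The
# support function is pluggable, so the color / connectivity / time / space
# variants described in this section can all be expressed with the same loop.
def label_relaxation(candidates, support, eps=1e-3, max_iters=100):
    """candidates: dict node -> list of candidate labels.
    support(node, label, probs) -> Q_ni(label) under the current probabilities.
    Returns dict node -> dict label -> probability.
    """
    # Lines 1-3: uniform initial probabilities over each node's candidates.
    probs = {n: {lab: 1.0 / len(labs) for lab in labs}
             for n, labs in candidates.items()}

    for _ in range(max_iters):
        converged = True
        for n, labs in candidates.items():
            # Equation 4: support for every candidate label of node n.
            q = {lab: support(n, lab, probs) for lab in labs}
            # Equation 3: normalizing constant K_n (guarded against zero).
            k = sum(probs[n][lab] * q[lab] for lab in labs) or 1.0
            for lab in labs:
                # Equation 2: multiplicative update followed by normalization.
                new_p = probs[n][lab] * q[lab] / k
                if abs(new_p - probs[n][lab]) >= eps:
                    converged = False
                probs[n][lab] = new_p
        if converged:
            break
    return probs
```

The color constrained case (Equation 5) corresponds to a single candidate label per node with support identically equal to 1; the connectivity constrained support of Equation 6 is sketched below, after it is introduced.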
The support for the label λk of sensor node ni, resulting from the interaction (i.e., within radio range) with sensor node nj is given by: Qs ni (λk) = M ∑ m=1 gλkλm Ps nj (λm) (6) As a result, the localization algorithm (Algorithm 3 consists of the following steps: all labels are assigned to each sensor node (lines 1-3 of the algorithm), and implicitly each label has a probability initialized to Pni(λk) = 1/|Λ|; in each iteration, the probabilities for the labels of a sensor node are updated, when considering the interaction with the labels of sensor nodes within R. It is important to remark that the identity of the nodes within R is not known, only the candidate labels and their probabilities. The relaxation algorithm converges when, during an iteration, the probability of no label is updated by more than ε. The label relaxation algorithm based on connectivity constraints, enforces such constraints between pairs of sensor nodes. For a large scale sensor network deployment, it is not feasible to consider all pairs of sensor nodes in the network. Hence, the algorithm should only consider pairs of sensor nodes that are within a reasonable communication range (R). We assume a circular radio range and a symmetric connectivity. In the remaining part of the section we propose a simple analytical model that estimates the radio range R for medium-connected networks (less than 20 neighbors per R). We consider the following to be known: the size of the deployment field (L), the number of sensor nodes deployed (N) Algorithm 3 Localization 1: Estimate the radio range R 2: Execute the Label Relaxation Algorithm with Support Function given by Equation 6 for neighbors less than R apart 3: for each sensor node ni do 4: node identity is λk with max. prob. 5: end for and the total number of unidirectional (i.e., not symmetric) one-hop radio connections in the network (k). For our analysis, we uniformly distribute the sensor nodes in a square area of length L, by using a grid of unit length L/ √ N. We use the substitution u = L/ √ N to simplify the notation, in order to distinguish the following cases: if u ≤ R ≤ √ 2u each node has four neighbors (the expected k = 4N); if √ 2u ≤ R ≤ 2u each node has eight neighbors (the expected k = 8N); if 2u ≤ R ≤ √ 5u each node has twelve neighbors ( the expected k = 12N); if √ 5u ≤ R ≤ 3u each node has twenty neighbors (the expected k = 20N) For a given t = k/4N we take R to be the middle of the interval. As an example, if t = 5 then R = (3 + √ 5)u/2. A quadratic fitting for R over the possible values of t, produces the following closed-form solution for the communication range R, as a function of network connectivity k, assuming L and N constant: R(k) = L √ N −0.051 k 4N 2 +0.66 k 4N +0.6 (7) We investigate the accuracy of our model in Section 5.2.1. 3.3.3 Relaxation with Time Constraints Time constraints can be treated similarly with color constraints. The unique identification of a sensor node can be obtained by deploying sensor nodes individually, one by one, and recording a sequence of images. The sensor node that is identified as new in the last picture (it was not identified in the picture before last) must be the last sensor node dropped. In a similar manner with color constrained label relaxation, the time constrained approach is very simple, but may take too long, especially for large scale systems. While it can be used in practice, it is unlikely that only a time constrained label relaxation is used. 
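Returning to the connectivity constrained case, a sketch of the support function of Equation 6, together with the closed-form range estimate of Equation 7, is given below. The beacon-count map g (obtained from the reported neighbor tables) and the neighbor lists derived from the imaged positions are assumed to be available; all names are illustrative.

```python
# Sketch of the connectivity-constrained support (Equation 6) and of the
# radio-range estimate (Equation 7). Names and data layout are illustrative.
import math

def make_connectivity_support(g, neighbors, candidates):
    """g: dict (label_a, label_b) -> beacons sent by label_b, received by label_a.
    neighbors: dict node -> nodes within the estimated radio range R (from the image).
    candidates: dict node -> list of candidate labels."""
    def support(ni, lam_k, probs):
        total = 0.0
        for nj in neighbors[ni]:
            for lam_m in candidates[nj]:
                # Equation 6: beacon count weighted by nj's belief in label lam_m.
                total += g.get((lam_k, lam_m), 0) * probs[nj][lam_m]
        return total
    return support

def radio_range(k, n_nodes, field_side):
    """Equation 7: R as a function of the total one-hop link count k."""
    t = k / (4.0 * n_nodes)
    u = field_side / math.sqrt(n_nodes)      # grid unit length L / sqrt(N)
    return u * (-0.051 * t**2 + 0.66 * t + 0.6)
```

The support closure above is written so that it can be handed directly to the relaxation loop sketched earlier.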
As we will see, by combining constraint-based primitives, realistic localization systems can be implemented. The support function for the label relaxation with time constraints is defined identically to the color constrained relaxation:

Q^s_{ni}(λk) = 1    (8)

The localization algorithm (Algorithm 2) consists of the following steps: one label is assigned to each sensor node (lines 1-3 of the algorithm), implicitly having a probability P_ni(λk) = 1; the algorithm executes a single iteration, in which the support function simply reiterates the confidence in the unique labeling.

Figure 5. Relaxation with space constraints
Figure 6. Probability distribution of distances
Figure 7. Distribution of nodes

3.3.4 Relaxation with Space Constraints

Spatial information related to sensor deployment can also be employed as another input to the label relaxation algorithm. To do that, we use two types of locations: the node location pn and the label location pl. The former, pn, is defined as the position of a node (xn, yn, zn) after deployment, which can be obtained through Image Processing as mentioned in Section 3.3. The latter, pl, is defined as the location (xl, yl, zl) where a node is dropped. We use D^{ni}_{λm} to denote the horizontal distance between the location of the label λm and the location of the node ni. Clearly, D^{ni}_{λm} = sqrt((xn − xl)² + (yn − yl)²). At the time of a sensor node release, the one-to-one mapping between the node and its label is known. In other words, the label location is the same as the node location at the release time. After release, the label location information is partially lost due to random factors such as wind and surface impact. However, statistically, the node locations are correlated with the label locations. Such correlation depends on the airdrop method employed and on the environment. For the sake of simplicity, let us assume nodes are dropped from a helicopter hovering in the air. Wind can be decomposed into three components X, Y and Z. Only X and Y affect the horizontal distance a node can travel. According to [24], we can assume that X and Y follow independent normal distributions. Therefore, the absolute value of the wind speed follows a Rayleigh distribution. Obviously, the higher the wind speed is, the further a node will land, horizontally, from the label location. If we assume that the distance D is a function of the wind speed V [25] [26], we can obtain the probability distribution of D under a given wind speed distribution. Without loss of generality, we assume that D is proportional to the wind speed. Therefore, D follows a Rayleigh distribution as well. As shown in Figure 5, the spatial-based relaxation is a recursive process that assigns the probability that a node has a certain label by using the distances between the location of the node and multiple label locations. We note that the distribution of the distance D affects the probability with which a label is assigned. It is not necessarily true that the nearest label is always chosen. For example, if D follows the Rayleigh(σ²) distribution, we can obtain the Probability Density Function (PDF) of distances as shown in Figure 6.
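A sketch of the space constrained support is given below: the support for a label is the Rayleigh density evaluated at the horizontal distance between the imaged node position and the release point recorded for that label, as formalized next in Equation 9. The parameter sigma, which stands in for wind strength, and all names are illustrative assumptions.

```python
# Sketch of the space-constrained support: Rayleigh density of the horizontal
# distance between a node's imaged position and a label's release point.
# `sigma` models wind strength; names and layout are illustrative.
import math

def make_spatial_support(node_pos, label_drop_pos, sigma=1.0):
    """node_pos: dict node -> (x, y) from image processing.
    label_drop_pos: dict label -> (x, y) recorded at release time."""
    def rayleigh_pdf(d):
        return (d / sigma**2) * math.exp(-d**2 / (2.0 * sigma**2))

    def support(ni, lam_k, probs):
        xn, yn = node_pos[ni]
        xl, yl = label_drop_pos[lam_k]
        d = math.hypot(xn - xl, yn - yl)
        return rayleigh_pdf(d)
    return support
```

As with the other variants, the returned closure plugs directly into the relaxation loop sketched earlier.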
This figure indicates that the possibility of a node to fall vertically is very small under windy conditions (σ > 0), and that the distance D is affected by the σ. The spatial distribution of nodes for σ = 1 is shown in Figure 7. Strong wind with a high σ value leads to a larger node dispersion. More formally, given a probability density function PDF(D), the support for label λk of sensor node ni can be formulated as: Qs ni (λk) = PDF(Dni λk ) (9) It is interesting to point out two special cases. First, if all nodes are released at once (i.e., only one label location for all released nodes), the distance D from a node to all labels is the same. In this case, Ps+1 ni (λk) = Ps ni (λk), which indicates that we can not use the spatial-based relaxation to recursively narrow down the potential labels for a node. Second, if nodes are released at different locations that are far away from each other, we have: (i) If node ni has label λk, Ps ni (λk) → 1 when s → ∞, (ii) If node ni does not have label λk, Ps ni (λk) → 0 when s → ∞. In this second scenario, there are multiple labels (one label per release), hence it is possible to correlate release times (labels) with positions on the ground. These results indicate that spatial-based relaxation can label the node with a very high probability if the physical separation among nodes is large. 3.3.5 Relaxation with Color and Connectivity Constraints One of the most interesting features of the StarDust architecture is that it allows for hybrid localization solutions to be built, depending on the system requirements. One example is a localization system that uses the color and connectivity constraints. In this scheme, the color constraints are used for reducing the number of candidate labels for sensor nodes, to a more manageable value. As a reminder, in the connectivity constrained relaxation, all labels are candidate labels for each sensor node. The color constraints are used in the initialization phase of Algorithm 3 (lines 1-3). After the initialization, the standard connectivity constrained relaxation algorithm is used. For a better understanding of how the label relaxation algorithm works, we give a concrete example, exemplified in Figure 8. In part (a) of the figure we depict the data structures 63 11 8 4 1 12 9 7 5 3 ni nj 12 8 10 11 10 0.2 0.2 0.2 0.2 0.2 0.25 0.25 0.25 0.25 (a) 11 8 4 1 12 9 7 5 3 ni nj 12 8 10 11 10 0.2 0.2 0.2 0.2 0.2 0.32 0 0.68 0 (b) Figure 8. A step through the algorithm. After initialization (a) and after the 1st iteration for node ni (b) associated with nodes ni and nj after the initialization steps of the algorithm (lines 1-6), as well as the number of beacons between different labels (as reported by the network, through G(Λ,E)). As seen, the potential labels (shown inside the vertical rectangles) are assigned to each node. Node ni can be any of the following: 11,8,4,1. Also depicted in the figure are the probabilities associated with each of the labels. After initialization, all probabilities are equal. Part (b) of Figure 8 shows the result of the first iteration of the localization algorithm for node ni, assuming that node nj is the first wi chosen in line 7 of Algorithm 3. By using Equation 6, the algorithm computes the support Q(λi) for each of the possible labels for node ni. Once the Q(λi)``s are computed, the normalizing constant, given by Equation 3 can be obtained. The last step of the iteration is to update the probabilities associated with all potential labels of node ni, as given by Equation 2. 
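The hybrid color and connectivity scheme only changes the initialization: colors prune the candidate label sets, after which the connectivity constrained relaxation runs unchanged. The following sketch mirrors the walk-through above; the labels, colors and values are illustrative and are not the ones shown in Figure 8.

```python
# Sketch of the hybrid color + connectivity scheme: colors only restrict the
# candidate label set of each node before the connectivity-constrained
# relaxation runs. The example values below are illustrative.
def prune_candidates_by_color(node_colors, label_colors, all_labels):
    """node_colors: dict node -> observed color (from the image).
    label_colors: dict label -> assigned color (known from manufacturing)."""
    return {n: [lab for lab in all_labels if label_colors[lab] == c]
            for n, c in node_colors.items()}

# Example: two imaged nodes sharing the color 'red', two labels per color.
all_labels = [1, 4, 8, 11]
label_colors = {1: 'red', 4: 'red', 8: 'blue', 11: 'blue'}
node_colors = {'nA': 'red', 'nB': 'blue'}
candidates = prune_candidates_by_color(node_colors, label_colors, all_labels)
# candidates == {'nA': [1, 4], 'nB': [8, 11]}; the relaxation loop sketched
# earlier is then run with the connectivity support restricted to these sets.
```

Assigning a color to exactly one node reduces its candidate set to a single label, which is what gives that node the anchor status discussed above.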
One interesting problem, which we explore in the performance evaluation section, is to assess the impact the partitioning of the color set C has on the accuracy of localization. When the size of the coloring set is smaller than the number of sensor nodes (as it is the case for our hybrid connectivity/color constrained relaxation), the system designer has the option of allowing one node to uniquely have a color (acting as an anchor), or multiple nodes. Intuitively, by assigning one color to more than one node, more constraints (distributed) can be enforced. 3.3.6 Relaxation Techniques Analysis The proposed label relaxation techniques have different trade-offs. For our analysis of the trade-offs, we consider the following metrics of interest: the localization time (duration), the energy consumed (overhead), the network size (scale) that can be handled by the technique and the localization accuracy. The parameters of interest are the following: the number of sensor nodes (N), the energy spent for one aerial drop (εd), the energy spent in the network for collecting and reporting neighbor information εb and the time Td taken by a sensor node to reach the ground after being aerially deployed. The cost comparison of the different label relaxation techniques is shown in Table 1. As shown, the relaxation techniques based on color and space constraints have the lowest localization duration, zero, for all practical purposes. The scalability of the color based relaxation technique is, however, limited to the number of (a) (b) Figure 9. SensorBall with self-righting capabilities (a) and colored CCRs (b) unique color filters that can be built. The narrower the Transfer Function Ψ(λ), the larger the number of unique colors that can be created. The manufacturing costs, however, are increasing as well. The scalability issue is addressed by all other label relaxation techniques. Most notably, the time constrained relaxation, which is very similar to the colorconstrained relaxation, addresses the scale issue, at a higher deployment cost. Criteria Color Connectivity Time Space Duration 0 NTb NTd 0 Overhead εd εd +Nεb Nεd εd Scale |C| |N| |N| |N| Accuracy High Low High Medium Table 1. Comparison of label relaxation techniques 4 System Implementation The StarDust localization framework, depicted in Figure 2, is flexible in that it enables the development of new localization systems, based on the four proposed label relaxation schemes, or the inclusion of other, yet to be invented, schemes. For our performance evaluation we implemented a version of the StarDust framework, namely the one proposed in Section 3.3.5, where the constraints are based on color and connectivity. The Central device of the StarDust system consists of the following: the Light Emitter - we used a common-off-theshelf flash light (QBeam, 3 million candlepower); the image acquisition was done with a 3 megapixel digital camera (Sony DSC-S50) which provided the input to the Image Processing algorithm, implemented in Matlab. For sensor nodes we built a custom sensor node, called SensorBall, with self-righting capabilities, shown in Figure 9(a). The self-righting capabilities are necessary in order to orient the CCR predominantly upwards. The CCRs that we used were inexpensive, plastic molded, night time warning signs commonly available on bicycles, as shown in Figure 9(b). We remark here the low quality of the CCRs we used. 
The reflectivity of each CCR (there are tens molded in the plastic container) is extremely low, and each CCR is not built with mirrors. A reflective effect is achieved by employing finely polished plastic surfaces. We had 5 colors available, in addition to the standard CCR, which reflects all the incoming light (white CCR). For a slightly higher price (ours were 20cents/piece), better quality CCRs can be employed. 64 Figure 10. The field in the dark Figure 11. The illuminated field Figure 12. The difference: Figure 10Figure 11 Higher quality (better mirrors) would translate in more accurate image processing (better sensor node detection) and smaller form factor for the optical component (an array of CCRs with a smaller area can be used). The sensor node platform we used was the micaZ mote. The code that runs on each node is a simple application which broadcasts 100 beacons, and maintains a neighbor table containing the percentage of successfully received beacons, for each neighbor. On demand, the neighbor table is reported to a base station, where the node ID mapping is performed. 5 System Evaluation In this section we present the performance evaluation of a system implementation of the StarDust localization framework. The three major research questions that our evaluation tries to answer are: the feasibility of the proposed framework (can sensor nodes be optically detected at large distances), the localization accuracy of one actual implementation of the StarDust framework, and whether or not atmospheric conditions can affect the recognition of sensor nodes in an image. The first two questions are investigated by evaluating the two main components of the StarDust framework: the Image Processing and the Node ID Matching. These components have been evaluated separately mainly because of lack of adequate facilities. We wanted to evaluate the performance of the Image Processing Algorithm in a long range, realistic, experimental set-up, while the Node ID Matching required a relatively large area, available for long periods of time (for connectivity data gathering). The third research question is investigated through a computer modeling of atmospheric phenomena. For the evaluation of the Image Processing module, we performed experiments in a football stadium where we deploy 6 sensor nodes in a 3×2 grid. The distance between the Central device and the sensor nodes is approximately 500 ft. The metrics of interest are the number of false positives and false negatives in the Image Processing Algorithm. For the evaluation of the Node ID Mapping component, we deploy 26 sensor nodes in an 120 × 60 ft2 flat area of a stadium. In order to investigate the influence the radio connectivity has on localization accuracy, we vary the height above ground of the deployed sensor nodes. Two set-ups are used: one in which the sensor nodes are on the ground, and the second one, in which the sensor nodes are raised 3 inches above ground. From here on, we will refer to these two experimental set-ups as the low connectivity and the high connectivity networks, respectively because when nodes are on the ground the communication range is low resulting in less neighbors than when the nodes are elevated and have a greater communication range. 
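For completeness, the following sketch shows how the reported neighbor tables (the percentage of the 100 broadcast beacons received from each neighbor) can be turned into the weighted connectivity graph G(Λ,E) used by the connectivity constrained relaxation. The data layout is an assumption made here for illustration; it is not the actual base station code.

```python
# Sketch: convert reported neighbor tables into the beacon-count weights
# g(receiver_label, sender_label) used by the connectivity support function.
# The reporting format below is an assumed, illustrative layout.
BEACONS_SENT = 100

def build_beacon_graph(neighbor_tables):
    """neighbor_tables: dict reporting_label -> {neighbor_label: pct_received}.
    Returns g: dict (receiver_label, sender_label) -> received beacon count."""
    g = {}
    for receiver, table in neighbor_tables.items():
        for sender, pct in table.items():
            g[(receiver, sender)] = int(round(pct / 100.0 * BEACONS_SENT))
    return g
```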
The metrics of interest are: the localization error (defined as the distance between the computed location and the true location - known from the manual placement), the percentage of nodes correctly localized, the convergence of the label relaxation algorithm, the time to localize and the robustness of the node ID mapping to errors in the Image Processing module. The parameters that we vary experimentally are: the angle under which images are taken, the focus of the camera, and the degree of connectivity. The parameters that we vary in simulations (subsequent to image acquisition and connectivity collection) the number of colors, the number of anchors, the number of false positives or negatives as input to the Node ID Matching component, the distance between the imaging device and sensor network (i.e., range), atmospheric conditions (light attenuation coefficient) and CCR reflectance (indicative of its quality). 5.1 Image Processing For the IPA evaluation, we deploy 6 sensor nodes in a 3 × 2 grid. We take 13 sets of pictures using different orientations of the camera and different zooming factors. All pictures were taken from the same location. Each set is composed of a picture taken in the dark and of a picture taken with a light beam pointed at the nodes. We process the pictures offline using a Matlab implementation of IPA. Since we are interested in the feasibility of identifying colored sensor nodes at large distance, the end result of our IPA is the 2D location of sensor nodes (position in the image). The transformation to 3D coordinates can be done through standard computer graphics techniques [23]. One set of pictures obtained as part of our experiment is shown in Figures 10 and 11. The execution of our IPA algorithm results in Figure 12 which filters out the background, and Figure 13 which shows the output of the edge detection step of IPA. The experimental results are depicted in Figure 14. For each set of pictures the graph shows the number of false positives (the IPA determines that there is a node 65 Figure 13. Retroreflectors detected in Figure 12 0 1 2 3 1 2 3 4 5 6 7 8 9 10 11 Experiment Number Count False Positives False Negatives Figure 14. False Positives and Negatives for the 6 nodes while there is none), and the number of false negatives (the IPA determines that there is no node while there is one). In about 45% of the cases, we obtained perfect results, i.e., no false positives and no false negatives. In the remaining cases, we obtained a number of false positives of at most one, and a number of false negatives of at most two. We exclude two pairs of pictures from Figure 14. In the first excluded pair, we obtain 42 false positives and in the second pair 10 false positives and 7 false negatives. By carefully examining the pictures, we realized that the first pair was taken out of focus and that a car temporarily appeared in one of the pictures of the second pair. The anomaly in the second set was due to the fact that we waited too long to take the second picture. If the pictures had been taken a few milliseconds apart, the car would have been represented on either both or none of the pictures and the IPA would have filtered it out. 5.2 Node ID Matching We evaluate the Node ID Matching component of our system by collecting empirical data (connectivity information) from the outdoor deployment of 26 nodes in the 120×60 ft2 area. We collect 20 sets of data for the high connectivity and low connectivity network deployments. 
Off-line we investigate the influence of coloring on the metrics of interest, by randomly assigning colors to the sensor nodes. For one experimental data set we generate 50 random assignments of colors to sensor nodes. It is important to observe that, for the evaluation of the Node ID Matching algorithm (color and connectivity constrained), we simulate the color assignment to sensor nodes. As mentioned in Section 4, the size of the coloring space available to us was 5 (5 colors). Through simulations of color assignment (not connectivity) we are able to investigate the influence that the size of the coloring space has on the accuracy of localization. The value of the parameter ε used in Algorithm 2 was 0.001. The results presented here represent averages over the randomly generated colorings and over all experimental data sets.

Figure 15. The number of existing and missing radio connections in the sparse connectivity experiment
Figure 16. The number of existing and missing radio connections in the high connectivity experiment

We first investigate the accuracy of our proposed Radio Model, and subsequently use the derived values for the radio range in the evaluation of the Node ID Matching component.

5.2.1 Radio Model

From experiments, we obtain an average number of observed beacons (k, defined in Section 3.3.2) of 180 beacons for the low connectivity network and 420 beacons for the high connectivity network. From our Radio Model (Equation 7), we obtain a radio range R = 25 ft for the low connectivity network and R = 40 ft for the high connectivity network. To estimate the accuracy of our simple model, we plot the number of radio links that exist in the networks, and the number of links that are missing, as functions of the distance between nodes. The results are shown in Figures 15 and 16. We define the average radio range R to be the distance over which less than 20% of potential radio links are missing. As shown in Figure 15, the radio range is between 20 ft and 25 ft. For the higher connectivity network, the radio range was between 30 ft and 40 ft. We choose two conservative estimates of the radio range: 20 ft for the low connectivity case and 35 ft for the high connectivity case, which are in good agreement with the values predicted by our Radio Model.

5.2.2 Localization Error vs. Coloring Space Size

In this experiment we investigate the effect of the number of colors on the localization accuracy. For this, we randomly assign colors, from a pool of a given size, to the sensor nodes.

Figure 17. Localization error
Figure 18. Percentage of nodes correctly localized

We then execute the localization algorithm, which uses the empirical data. The algorithm is run for three different radio ranges: 15, 20 and 25 ft, to investigate their influence on the localization error. The results are depicted in Figure 17 (localization error) and Figure 18 (percentage of nodes correctly localized). As shown, for an estimate of 20 ft for the radio range (as predicted by our Radio Model), we obtain the smallest localization errors, as small as 2 ft, when enough colors are used.
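As a rough, back-of-the-envelope check of the Radio Model values quoted above, Equation 7 can be evaluated with the measured beacon counts. Since Equation 7 assumes a square field, the side of a square with the same area as the 120 × 60 ft deployment is used here; this substitution is an assumption made only for illustration.

```python
# Rough numerical check of Equation 7 against the reported ranges. Using the
# side of a square with the same area as the 120 x 60 ft field (~84.9 ft) is
# an assumption made here for illustration only.
import math

def radio_range(k, n_nodes, field_side):
    t = k / (4.0 * n_nodes)
    u = field_side / math.sqrt(n_nodes)
    return u * (-0.051 * t**2 + 0.66 * t + 0.6)

side = math.sqrt(120 * 60)                 # ~84.9 ft
print(radio_range(180, 26, side))          # ~26 ft (Section 5.2.1 reports R = 25 ft)
print(radio_range(420, 26, side))          # ~40 ft (Section 5.2.1 reports R = 40 ft)
```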
Both Figures 17 and 18 confirm our intuition that a larger number of colors available significantly decrease the error in localization. The well known fact that relaxation algorithms do not always converge, was observed during our experiments. The percentage of successful runs (when the algorithm converged) is depicted in Figure 19. As shown, in several situations, the algorithm failed to converge (the algorithm execution was stopped after 100 iterations per node). If the algorithm does not converge in a predetermined number of steps, it will terminate and the label with the highest probability will provide the identity of the node. It is very probable that the chosen label is incorrect, since the probabilities of some of labels are constantly changing (with each iteration). The convergence of relaxation based algorithms is a well known issue. 5.2.3 Localization Error vs. Color Uniqueness As mentioned in the Section 3.3.1, a unique color gives a sensor node the statute of an anchor. A sensor node that is an anchor can unequivocally be identified through the Image Processing module. In this section we investigate the effect unique colors have on the localization accuracy. Specifically, we want to experimentally verify our intuition that assigning more nodes to a color can benefit the localization accuracy, by enforcing more constraints, as opposed to uniquely assigning a color to a single node. 90 95 100 105 0 5 10 15 20 Number of Colors ConvergenceRate[x100] R=15feet R=20feet R=25feet Figure 19. Convergence error 0 2 4 6 8 10 12 14 16 4 6 8 Number of Colors LocalizationError[feet] 0 anchors 2 anchors 4 anchors 6 anchors 8 anchors Figure 20. Localization error vs. number of colors For this, we fix the number of available colors to either 4, 6 or 8 and vary the number of nodes that are given unique colors, from 0, up to the maximum number of colors (4, 6 or 8). Naturally, if we have a maximum number of colors of 4, we can assign at most 4 anchors. The experimental results are depicted in Figure 20 (localization error) and Figure 21 (percentage of sensor node correctly localized). As expected, the localization accuracy increases with the increase in the number of colors available (larger coloring space). Also, for a given size of the coloring space (e.g., 6 colors available), if more colors are uniquely assigned to sensor nodes then the localization accuracy decreases. It is interesting to observe that by assigning colors uniquely to nodes, the benefit of having additional colors is diminished. Specifically, if 8 colors are available and all are assigned uniquely, the system would be less accurately localized (error ≈ 7 ft), when compared to the case of 6 colors and no unique assignments of colors (≈ 5 ft localization error). The same trend, of a less accurate localization can be observed in Figure 21, which shows the percentage of nodes correctly localized (i.e., 0 ft localization error). As shown, if we increase the number of colors that are uniquely assigned, the percentage of nodes correctly localized decreases. 5.2.4 Localization Error vs. Connectivity We collected empirical data for two network deployments with different degrees of connectivity (high and low) in order to assess the influence of connectivity on location accuracy. The results obtained from running our localization algorithm are depicted in Figure 22 and Figure 23. We varied the number of colors available and assigned no anchors (i.e., no unique assignments of colors). 
In both scenarios, as expected, the localization error decreases with an increase in the number of colors. It is interesting to observe, however, that the low connectivity scenario improves the localization accuracy more quickly as additional colors become available. When the number of colors becomes relatively large (twelve for our 26 sensor node network), both scenarios (low and high connectivity) have comparable localization errors, of less than 2 ft. The same trend of more accurate location information is evidenced by Figure 23, which shows that the percentage of nodes that are localized correctly grows more quickly for the low connectivity deployment.

Figure 21. Percentage of nodes correctly localized vs. number of colors
Figure 22. Localization error vs. number of colors

5.3 Localization Error vs. Image Processing Errors

So far we investigated the sources of error in localization that are intrinsic to the Node ID Matching component. As previously presented, luminous objects can be mistakenly detected to be sensor nodes during the location detection phase of the Image Processing module. These false positives can be eliminated by the color recognition procedure of the Image Processing module. More problematic are false negatives (when a sensor node does not reflect back enough light to be detected). They need to be handled by the localization algorithm. In this case, the localization algorithm is presented with two sets of nodes of different sizes that need to be matched: one coming from the Image Processing (which misses some nodes) and one coming from the network, with the connectivity information (here we assume a fully connected network, so that all sensor nodes report their connectivity information). In this experiment we investigate how Image Processing errors (false negatives) influence the localization accuracy. For this evaluation, we ran our localization algorithm with empirical data, but dropped a percentage of nodes from the list of nodes detected by the Image Processing algorithm (we artificially introduced false negatives in the Image Processing).

Figure 23. Percentage of nodes correctly localized
Figure 24. Impact of false negatives on the localization error

The effect of false negatives on localization accuracy is depicted in Figure 24. As seen in the figure, if the number of false negatives is 15%, the error in position estimation doubles when 4 colors are available. It is interesting to observe that the scenario where more colors are available (e.g., 12 colors) is affected more drastically than the scenario with fewer colors (e.g., 4 colors). The benefit of having more colors available is still maintained, at least for the range of colors we investigated (4 through 12 colors).

5.4 Localization Time

In this section we look more closely at the duration of each of the four proposed relaxation techniques and of two combinations of them: color-connectivity and color-time.
We assume that 50 unique color filters can be manufactured, that the sensor network is deployed from 2,400 ft (necessary for the time-constrained relaxation) and that the time required for reporting connectivity grows linearly, with an initial reporting period of 160 sec, as used in a real world tracking application [1]. The localization duration results, as presented in Table 1, are depicted in Figure 25. As shown, for all practical purposes the time required by the space constrained relaxation technique is 0 sec. The same applies to the color constrained relaxation, for which the localization time is 0 sec (if the number of colors is sufficient). Considering our assumptions, the color constrained relaxation works only for a network of size 50. The localization duration for all other network sizes (100, 150 and 200) is infinite (i.e., unique color assignments to sensor nodes cannot be made, since only 50 colors are unique), when only color constrained relaxation is used. Both the connectivity constrained and time constrained techniques increase linearly with the network size (for the time constrained technique, the Central device deploys sensor nodes one by one, recording an image after the time a sensor node is expected to reach the ground).

Figure 25. Localization time for different label relaxation schemes
Figure 26. Apparent contrast in a clear atmosphere
Figure 27. Apparent contrast in a hazy atmosphere

It is interesting to notice in Figure 25 the improvement in the localization time obtained by simply combining the color and the connectivity constrained techniques. The localization duration in this case is identical to that of the connectivity constrained technique. The combination of color and time constrained relaxations is even more interesting. For a reasonable localization duration of 52 seconds, a perfect (i.e., 0 ft localization error) localization system can be built. In this scenario, the set of sensor nodes is split into batches, with each batch having a set of unique colors. It would be very interesting to consider other scenarios, where the strength of the space constrained relaxation (0 sec for any sensor network size) is used for improving the other proposed relaxation techniques. We leave the investigation and rigorous classification of such technique combinations for future work.

5.5 System Range

In this section we evaluate the feasibility of the StarDust localization framework when considering the realities of light propagation through the atmosphere. The main factor that determines the range of our system is light scattering, which redirects the luminance of the source into the medium (in essence equally affecting the luminosity of the target and of the background). Scattering limits the visibility range by reducing the apparent contrast between the target and its background, which approaches zero as the distance increases. The apparent contrast Cr is quantitatively expressed by the formula:

Cr = (N^t_r − N^b_r) / N^b_r    (10)

where N^t_r and N^b_r are the apparent target radiance and apparent background radiance at distance r from the light source, respectively.
The apparent radiance N^t_r of a target at a distance r from the light source is given by:

N^t_r = N_a + I ρ_t e^(-2σr) / (π r^2)    (11)

where I is the intensity of the light source, ρ_t is the target reflectance, σ is the spectral attenuation coefficient (≈ 0.12 km^-1 and ≈ 0.60 km^-1 for a clear and a hazy atmosphere, respectively) and N_a is the radiance of the atmospheric backscatter, which can be expressed as follows:

N_a = (G σ^2 I / (2π)) ∫_{0.02σr}^{2σr} (e^(-x) / x^2) dx    (12)

where G = 0.24 is a backscatter gain. The apparent background radiance N^b_r is given by formulas similar to Equations 11 and 12, in which the target reflectance ρ_t is replaced by the background reflectance ρ_b. It is important to remark that when C_r reaches its lower limit, no increase in the source luminance or receiver sensitivity will increase the range of the system. From Equations 11 and 12 it can be observed that the parameter which can be controlled and can influence the range of the system is ρ_t, the target reflectance. Figures 26 and 27 depict the apparent contrast C_r as a function of the distance r for a clear and for a hazy atmosphere, respectively. The apparent contrast is investigated for reflectance coefficients ρ_t ranging from 0.3 to 1.0 (a perfect reflector). For a contrast C of at least 0.5, as can be seen in Figure 26, a range of approximately 4,500 ft can be achieved if the atmosphere is clear. The performance deteriorates dramatically when the atmospheric conditions are problematic. As shown in Figure 27, a range of up to 1,500 ft is achievable when using highly reflective CCR components. While our light source (3 million candlepower) was sufficient for a range of a few hundred feet, we remark that there exist commercially available light sources (20 million candlepower) and military ones (150 million candlepower [27]) that are powerful enough for ranges of a few thousand feet.

6 StarDust System Optimizations
In this section we describe extensions of the proposed architecture that can constitute future research directions.

6.1 Chained Constraint Primitives
In this paper we proposed four primitives for constraint-based relaxation algorithms: color, connectivity, time and space. To demonstrate the power that can be obtained by combining them, we proposed and evaluated one combination of such primitives: color and connectivity. An interesting research direction to pursue could be to chain more than two of these primitives. An example of such a chain is: color, temporal, spatial and connectivity. Other research directions could be to use a voting scheme for deciding which primitive to use, or to assign different weights to different relaxation algorithms.

6.2 Location Learning
If, after several iterations of the algorithm, none of the label probabilities for a node ni converges to a higher value, the confidence in our labeling of that node is relatively low. It would be interesting to associate more than one label (implicitly, more than one location) with a node and to defer the label assignment decision until events are detected in the network (if the network was deployed for target tracking).

6.3 Localization in Rugged Environments
The initial driving force for the StarDust localization framework was to address sensor node localization in extremely rugged environments. Canopies, dense vegetation and extremely obstructing environments pose significant challenges for sensor node localization. The hope, and our original idea, was to consider the time period between the aerial deployment and the time when the sensor node disappears under the canopy.
By recording the last visible position of a sensor node (as seen from the aircraft), a reasonable estimate of the sensor node location can be obtained. This would require that sensor nodes possess self-righting capabilities while in mid-air. Nevertheless, we remark on the suitability of our localization framework for rugged, non-line-of-sight environments.

7 Conclusions
StarDust solves the localization problem for aerial deployments where passiveness, low cost, small form factor and rapid localization are required. Results show that accuracy can be within 2 ft and localization time within milliseconds. StarDust also shows robustness with respect to errors. We predict the influence that atmospheric conditions can have on the range of a system based on the StarDust framework, and show that hazy environments or daylight can pose significant challenges. Most importantly, the properties of StarDust support the potential for even more accurate localization solutions, as well as solutions for rugged, non-line-of-sight environments.

8 References
[1] T. He, S. Krishnamurthy, J. A. Stankovic, T. Abdelzaher, L. Luo, R. Stoleru, T. Yan, L. Gu, J. Hui, and B. Krogh, An energy-efficient surveillance system using wireless sensor networks, in MobiSys, 2004.
[2] G. Simon, M. Maroti, A. Ledeczi, G. Balogh, B. Kusy, A. Nadas, G. Pap, J. Sallai, and K. Frampton, Sensor network-based countersniper system, in SenSys, 2004.
[3] A. Arora, P. Dutta, and B. Bapat, A line in the sand: A wireless sensor network for target detection, classification and tracking, in Computer Networks, 2004.
[4] R. Szewczyk, A. Mainwaring, J. Polastre, J. Anderson, and D. Culler, An analysis of a large scale habitat monitoring application, in ACM SenSys, 2004.
[5] N. Xu, S. Rangwala, K. K. Chintalapudi, D. Ganesan, A. Broad, R. Govindan, and D. Estrin, A wireless sensor network for structural monitoring, in ACM SenSys, 2004.
[6] A. Savvides, C. Han, and M. Srivastava, Dynamic fine-grained localization in ad-hoc networks of sensors, in Mobicom, 2001.
[7] N. Priyantha, A. Chakraborty, and H. Balakrishnan, The Cricket location-support system, in Mobicom, 2000.
[8] M. Broxton, J. Lifton, and J. Paradiso, Localizing a sensor network via collaborative processing of global stimuli, in EWSN, 2005.
[9] P. Bahl and V. N. Padmanabhan, RADAR: An in-building RF-based user location and tracking system, in IEEE Infocom, 2000.
[10] N. Priyantha, H. Balakrishnan, E. Demaine, and S. Teller, Mobile-assisted topology generation for auto-localization in sensor networks, in IEEE Infocom, 2005.
[11] P. N. Pathirana, A. Savkin, S. Jha, and N. Bulusu, Node localization using mobile robots in delay-tolerant sensor networks, IEEE Transactions on Mobile Computing, 2004.
[12] C. Savarese, J. M. Rabaey, and J. Beutel, Locationing in distributed ad-hoc wireless sensor networks, in ICASSP, 2001.
[13] M. Maroti, B. Kusy, G. Balogh, P. Volgyesi, A. Nadas, K. Molnar, S. Dora, and A. Ledeczi, Radio interferometric geolocation, in ACM SenSys, 2005.
[14] K. Whitehouse, A. Woo, C. Karlof, F. Jiang, and D. Culler, The effects of ranging noise on multi-hop localization: An empirical study, in IPSN, 2005.
[15] Y. Kwon, K. Mechitov, S. Sundresh, W. Kim, and G. Agha, Resilient localization for sensor networks in outdoor environment, UIUC, Tech. Rep., 2004.
[16] R. Stoleru and J. A. Stankovic, Probability grid: A location estimation scheme for wireless sensor networks, in SECON, 2004.
[17] N. Bulusu, J. Heidemann, and D. Estrin, GPS-less low cost outdoor localization for very small devices, IEEE Personal Communications Magazine, 2000.
[18] T. He, C. Huang, B. Blum, J. A. Stankovic, and T. Abdelzaher, Range-free localization schemes in large scale sensor networks, in ACM Mobicom, 2003.
[19] R. Nagpal, H. Shrobe, and J. Bachrach, Organizing a global coordinate system from local information on an ad-hoc sensor network, in IPSN, 2003.
[20] D. Niculescu and B. Nath, Ad-hoc positioning system, in IEEE GLOBECOM, 2001.
[21] R. Stoleru, T. He, J. A. Stankovic, and D. Luebke, A high-accuracy low-cost localization system for wireless sensor networks, in ACM SenSys, 2005.
[22] K. Römer, The Lighthouse location system for smart dust, in ACM/USENIX MobiSys, 2003.
[23] R. Y. Tsai, A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses, IEEE JRA, 1987.
[24] C. L. Archer and M. Z. Jacobson, Spatial and temporal distributions of U.S. winds and wind power at 80m derived from measurements, Journal of Geophysical Research, 2003.
[25] Team for advanced flow simulation and modeling. [Online]. Available: http://www.mems.rice.edu/TAFSM/RES/
[26] K. Stein, R. Benney, T. Tezduyar, V. Kalro, and J. Leonard, 3-D computation of parachute fluid-structure interactions: performance and control, in Aerodynamic Decelerator Systems Conference, 1999.
[27] Headquarters, Department of the Army, Technical manual for searchlight infrared AN/GSS-14(V)1, 1982.
StarDust: A Flexible Architecture for Passive Localization in Wireless Sensor Networks * Abstract The problem of localization in wireless sensor networks where nodes do not use ranging hardware, remains a challenging problem, when considering the required location accuracy, energy expenditure and the duration of the localization phase. In this paper we propose a framework, called StarDust, for wireless sensor network localization based on passive optical components. In the StarDust framework, sensor nodes are equipped with optical retro-reflectors. An aerial device projects light towards the deployed sensor network, and records an image of the reflected light. An image processing algorithm is developed for obtaining the locations of sensor nodes. For matching a node ID to a location we propose a constraint-based label relaxation algorithm. We propose and develop localization techniques based on four types of constraints: node color, neighbor information, deployment time for a node and deployment location for a node. We evaluate the performance of a localization system based on our framework by localizing a network of 26 sensor nodes deployed in a 120 x 60ft2 area. The localization accuracy ranges from 2ft to 5 ft while the localization time ranges from 10 milliseconds to 2 minutes. 1 Introduction Wireless Sensor Networks (WSN) have been envisioned to revolutionize the way humans perceive and interact with the surrounding environment. One vision is to embed tiny sensor devices in outdoor environments, by aerial deployments from unmanned air vehicles. The sensor nodes form a network and collaborate (to compensate for the extremely scarce resources available to each of them: computational power, memory size, communication capabilities) to accomplish the mission. Through collaboration, redundancy and fault tolerance, the WSN is then able to achieve unprecedented sensing capabilities. A major step forward has been accomplished by developing systems for several domains: military surveillance [1] [2] [3], habitat monitoring [4] and structural monitoring [5]. Even after these successes, several research problems remain open. Among these open problems is sensor node localization, i.e., how to find the physical position of each sensor node. Despite the attention the localization problem in WSN has received, no universally acceptable solution has been developed. There are several reasons for this. On one hand, localization schemes that use ranging are typically high end solutions. GPS ranging hardware consumes energy, it is relatively expensive (if high accuracy is required) and poses form factor challenges that move us away from the vision of dust size sensor nodes. Ultrasound has a short range and is highly directional. Solutions that use the radio transceiver for ranging either have not produced encouraging results (if the received signal strength indicator is used) or are sensitive to environment (e.g., multipath). On the other hand, localization schemes that only use the connectivity information for inferring location information are characterized by low accuracies:,: 10ft in controlled environments, 40 − 50ft in realistic ones. To address these challenges, we propose a framework for WSN localization, called StarDust, in which the complexity associated with the node localization is completely removed from the sensor node. 
The basic principle of the framework is localization through passivity: each sensor node is equipped with a corner-cube retro-reflector and possibly an optical filter (a coloring device). An aerial vehicle projects light onto the deployment area and records images containing retro-reflected light beams (they appear as luminous spots). Through image processing techniques, the locations of the retro-reflectors (i.e., sensor nodes) is deter mined. For inferring the identity of the sensor node present at a particular location, the StarDust framework develops a constraint-based node ID relaxation algorithm. The main contributions of our work are the following. We propose a novel framework for node localization in WSNs that is very promising and allows for many future extensions and more accurate results. We propose a constraint-based label relaxation algorithm for mapping node IDs to the locations, and four constraints (node, connectivity, time and space), which are building blocks for very accurate and very fast localization systems. We develop a sensor node hardware prototype, called a SensorBall. We evaluate the performance of a localization system for which we obtain location accuracies of 2 − 5 ft with a localization duration ranging from 10 milliseconds to 2 minutes. We investigate the range of a system built on our framework by considering realities of physical phenomena that occurs during light propagation through the atmosphere. The rest of the paper is structured as follows. Section 2 is an overview of the state of art. The design of the StarDust framework is presented in Section 3. One implementation and its performance evaluation are in Sections 4 and 5, followed by a suite of system optimization techniques, in Section 6. In Section 7 we present our conclusions. 2 Related Work We present the prior work in localization in two major categories: the range-based, and the range-free schemes. The range-based localization techniques have been designed to use either more expensive hardware (and hence higher accuracy) or just the radio transceiver. Ranging techniques dependent on hardware are the time-of-flight (ToF) and the time-difference-of-arrival (TDoA). Solutions that use the radio are based on the received signal strength indicator (RSSI) and more recently on radio interferometry. The ToF localization technique that is most widely used is the GPS. GPS is a costly solution for a high accuracy localization of a large scale sensor network. AHLoS [6] employs a TDoA ranging technique that requires extensive hardware and solves relatively large nonlinear systems of equations. The Cricket location-support system (TDoA) [7] can achieve a location granularity of tens of inches with highly directional and short range ultrasound transceivers. In [2] the location of a sniper is determined in an urban terrain, by using the TDoA between an acoustic wave and a radio beacon. The PushPin project [8] uses the TDoA between ultrasound pulses and light flashes for node localization. The RADAR system [9] uses the RSSI to build a map of signal strengths as emitted by a set of beacon nodes. A mobile node is located by the best match, in the signal strength space, with a previously acquired signature. In MAL [10], a mobile node assists in measuring the distances (acting as constraints) between nodes until a rigid graph is generated. The localization problem is formulated as an on-line state estimation in a nonlinear dynamic system [11]. 
A cooperative ranging that attempts to achieve a global positioning from distributed local optimizations is proposed in [12]. A very recent, remarkable, localization technique is based on radio interferometry, RIPS [13], which utilizes two transmitters to create an interfering signal. The frequencies of the emitters are very close to each other, thus the interfering signal will have a low frequency envelope that can be easily measured. The ranging technique performs very well. The long time required for localization and multi-path environments pose significant challenges. Real environments create additional challenges for the range based localization schemes. These have been emphasized by several studies [14] [15] [16]. To address these challenges, and others (hardware cost, the energy expenditure, the form factor, the small range, localization time), several range-free localization schemes have been proposed. Sensor nodes use primarily connectivity information for inferring proximity to a set of anchors. In the Centroid localization scheme [17], a sensor node localizes to the centroid of its proximate beacon nodes. In APIT [18] each node decides its position based on the possibility of being inside or outside of a triangle formed by any three beacons within node's communication range. The Gradient algorithm [19], leverages the knowledge about the network density to infer the average one hop length. This, in turn, can be transformed into distances to nodes with known locations. DV-Hop [20] uses the hop by hop propagation capability of the network to forward distances to landmarks. More recently, several localization schemes that exploit the sensing capabilities of sensor nodes, have been proposed. Spotlight [21] creates well controlled (in time and space) events in the network while the sensor nodes detect and timestamp this events. From the spatiotemporal knowledge for the created events and the temporal information provided by sensor nodes, nodes' spatial information can be obtained. In a similar manner, the Lighthouse system [22] uses a parallel light beam, that is emitted by an anchor which rotates with a certain period. A sensor node detects the light beam for a period of time, which is dependent on the distance between it and the light emitting device. Many of the above localization solutions target specific sets of requirements and are useful for specific applications. StarDust differs in that it addresses a particular demanding set of requirements that are not yet solved well. StarDust is meant for localizing air dropped nodes where node passiveness, high accuracy, low cost, small form factor and rapid localization are all required. Many military applications have such requirements. 3 StarDust System Design The design of the StarDust system (and its name) was inspired by the similarity between a deployed sensor network, in which sensor nodes indicate their presence by emitting light, and the Universe consisting of luminous and illuminated objects: stars, galaxies, planets, etc. . The main difficulty when applying the above ideas to the real world is the complexity of the hardware that needs to be put on a sensor node so that the emitted light can be detected from thousands of feet. The energy expenditure for producing an intense enough light beam is also prohibitive. Instead, what we propose to use for sensor node localization is a passive optical element called a retro-reflector. 
The most common retro-reflective optical component is a Corner-Cube Retroreflector (CCR), shown in Figure 1 (a). It consists of three mutually perpendicular mirrors. The inter Figure 1. Corner-Cube Retroreflector (a) and an array of CCRs molded in plastic (b) esting property of this optical component is that an incoming beam of light is reflected back, towards the source of the light, irrespective of the angle of incidence. This is in contrast with a mirror, which needs to be precisely positioned to be perpendicular to the incident light. A very common and inexpensive implementation of an array of CCRs is the retroreflective plastic material used on cars and bicycles for night time detection, shown in Figure 1 (b). In the StarDust system, each node is equipped with a small (e.g. 0.5 in2) array of CCRs and the enclosure has self-righting capabilities that orient the array of CCRs predominantly upwards. It is critical to understand that the upward orientation does not need to be exact. Even when large angular variations from a perfectly upward orientation are present, a CCR will return the light in the exact same direction from which it came. In the remaining part of the section, we present the architecture of the StarDust system and the design of its main components. 3.1 System Architecture The envisioned sensor network localization scenario is as follows: • The sensor nodes are released, possibly in a controlled manner, from an aerial vehicle during the night. • The aerial vehicle hovers over the deployment area and uses a strobe light to illuminate it. The sensor nodes, equipped with CCRs and optical filters (acting as coloring devices) have self-righting capabilities and retroreflect the incoming strobe light. The retro-reflected light is either" white", as the originating source light, or colored, due to optical filters. • The aerial vehicle records a sequence of two images very close in time (msec level). One image is taken when the strobe light is on, the other when the strobe light is off. The acquired images are used for obtaining the locations of sensor nodes (which appear as luminous spots in the image). • The aerial vehicle executes the mapping of node IDs to the identified locations in one of the following ways: a) by using the color of a retro-reflected light, if a sensor node has a unique color; b) by requiring sensor nodes to establish neighborhood information and report it to a base station; c) by controlling the time sequence of sensor nodes deployment and recording additional im Figure 2. The StarDust system architecture ages; d) by controlling the location where a sensor node is deployed. • The computed locations are disseminated to the sensor network. The architecture of the StarDust system is shown in Figure 2. The architecture consists of two main components: the first is centralized and it is located on a more powerful device. The second is distributed and it resides on all sensor nodes. The Central Device consists of the following: the Light Emitter, the Image Processing module, the Node ID Mapping module and the Radio Model. The distributed component of the architecture is the Transfer Function, which acts as a filter for the incoming light. The aforementioned modules are briefly described below: • Light Emitter - It is a strobe light, capable of producing very intense, collimated light pulses. The emitted light is non-monochromatic (unlike a laser) and it is characterized by a spectral density Ψ (λ), a function of the wavelength. 
The emitted light is incident on the CCRs present on sensor nodes. • Transfer Function Φ (Ψ (λ)) - This is a bandpass filter for the incident light on the CCR. The filter allows a portion of the original spectrum, to be retro-reflected. From here on, we will refer to the transfer function as the color of a sensor node. • Image Processing - The Image Processing module ac quires high resolution images. From these images the locations and the colors of sensor nodes are obtained. If only one set of pictures can be taken (i.e., one location of the light emitter/image analysis device), then the map of the field is assumed to be known as well as the distance between the imaging device and the field. The aforementioned assumptions (field map and distance to it) are not necessary if the images can be simultaneously taken from different locations. It is important to remark here that the identity of a node cannot be directly obtained through Image Processing alone, unless a specific characteristic of a sensor node can be identified in the image. • Node ID Matching - This module uses the detected locations and through additional techniques (e.g., sensor node coloring and connectivity information (G (Λ, E)) from the deployed network) to uniquely identify the sensor nodes observed in the image. The connectivity information is represented by neighbor tables sent from • Radio Model - This component provides an estimate of the radio range to the Node ID Matching module. It is only used by node ID matching techniques that are based on the radio connectivity in the network. The estimate of the radio range R is based on the sensor node density (obtained through the Image Processing module) and the connectivity information (i.e., G (A, E)). The two main components of the StarDust architecture are the Image Processing and the Node ID Mapping. Their design and analysis is presented in the sections that follow. 3.2 Image Processing The goal of the Image Processing Algorithm (IPA) is to identify the location of the nodes and their color. Note that IPA does not identify which node fell where, but only what is the set of locations where the nodes fell. IPA is executed after an aerial vehicle records two pictures: one in which the field of deployment is illuminated and one when no illuminations is present. Let Pdark be the picture of the deployment area, taken when no light was emitted and Plight be the picture of the same deployment area when a strong light beam was directed towards the sensor nodes. The proposed IPA has several steps, as shown in Algorithm 1. The first step is to obtain a third picture Pfilter where only the differences between Pdark and Plight remain. Let us assume that Pdark has a resolution of n × m, where n is the number of pixels in a row of the picture, while m is the number of pixels in a column of the picture. Then Pdark is composed of n × m pixels noted Pdark (i, j), i ∈ 1 ≤ i ≤ n, 1 ≤ j ≤ m. Similarly Plight is composed of n × m pixels noted Plight (i, j), 1 ≤ i ≤ n, 1 ≤ j ≤ m. Each pixel P is described by an RGB value where the R value is denoted by PR, the G value is denoted by PG, and the B value is denoted by PB. IPA then generates the third picture, Pfilter, through the following transformations: After this transformation, all the features that appeared in both Pdark and Plight are removed from Pfilter. This simplifies the recognition of light retro-reflected by sensor nodes. The second step consists of identifying the elements contained in Pfilter that retro-reflect light. 
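The per-pixel operations behind these first two steps are referred to above but not spelled out. A minimal Python/NumPy sketch of one plausible implementation is given below; the clipped subtraction, the grayscale conversion and the fixed brightness threshold are our assumptions for illustration, not necessarily the exact operations used by IPA:

import numpy as np

def ipa_first_steps(p_dark, p_light, threshold=200):
    """A plausible sketch of the first two IPA steps (assumptions: 8-bit
    RGB inputs of shape n x m x 3, clipped subtraction, fixed threshold).
    Returns (p_filter, p_reflect)."""
    dark = p_dark.astype(np.int16)
    light = p_light.astype(np.int16)
    # Step 1: keep only what changed between the two frames; features
    # present in both P_dark and P_light cancel out.
    p_filter = np.clip(light - dark, 0, 255).astype(np.uint8)
    # Step 2: convert to grayscale and keep the brightest pixels, which
    # should correspond to light retro-reflected by sensor nodes.
    gray = p_filter.mean(axis=2)
    p_reflect = gray > threshold
    return p_filter, p_reflect

The subsequent steps (edge detection and centroid computation) operate on the thresholded image, as described next.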
To identify these reflecting elements, an intensity filter is applied to Pfilter. First, IPA converts Pfilter into a grayscale picture. Then the brightest pixels are identified and used to create Preflect. This step is eased by the fact that the reflecting nodes should appear much brighter than any other illuminated object in the picture.

Figure 3. Probabilistic label relaxation

The third step runs an edge detection algorithm on Preflect to identify the boundary of the nodes present. A tool such as Matlab provides a number of edge detection techniques; we used the bwboundaries function. For the obtained edges, the location (x, y) (in the image) of each node is determined by computing the centroid of the points constituting its edges. Standard computer graphics techniques [23] are then used to transform the 2D locations of sensor nodes detected in multiple images into 3D sensor node locations. The color of the node is obtained as the color of the pixel located at (x, y) in Plight.

3.3 Node ID Matching
The goal of the Node ID Matching module is to obtain the identity (node ID) of a luminous spot in the image, detected to be a sensor node. For this, we define V = {(x1, y1), (x2, y2), ..., (xm, ym)} to be the set of locations of the sensor nodes, as detected by the Image Processing module, and Λ = {λ1, λ2, ..., λm} to be the set of unique node IDs assigned to the m sensor nodes before deployment. From here on, we refer to node IDs as labels. We model the problem of finding the label λj of a node ni as a probabilistic label relaxation problem, frequently used in image processing/understanding. In the image processing domain, scene labeling (i.e., identifying objects in an image) plays a major role. The goal of scene labeling is to assign a label to each object detected in an image, such that an appropriate image interpretation is achieved. It is prohibitively expensive to consider the interactions among all the objects in an image. Instead, constraints placed among nearby objects generate local consistencies and, through iteration, global consistencies can be obtained. The main idea of sensor node localization through probabilistic label relaxation is to iteratively compute the probability of each label being the correct label for a sensor node, by taking into account, at each iteration, the "support" for a label. The support for a label can be understood as a hint or proof that a particular label is more likely to be the correct one, when compared with the other potential labels for a sensor node. We pictorially depict this main idea in Figure 3. As shown, node ni has a set of candidate labels {λ1, ..., λk}. Each of the labels has a different value of the support function Q(λk). We defer the explanation of how the support function is implemented until the subsections that follow, where we provide four concrete techniques. Formally, the algorithm is outlined in Algorithm 2: the new probability Pni(λk) of a label λk of a node ni is obtained by weighting its current probability by the support Q^s_ni(λk) and renormalizing over all candidate labels of ni, with K denoting the normalizing constant (Equations 2-4).

Algorithm 2 Label relaxation
1: for each sensor node ni do
2:   assign equal probability to all possible labels
3: end for
4: repeat
5:   converged ← true
6:   for each sensor node ni do
7:     for each label λj of ni do
8:       compute the support for label λj: Equation 4
9:     end for
10:    compute K for the node ni: Equation 3
11:    for each label λj do
12:      update the probability of label λj: Equation 2
13:      if |new prob. − old prob.| > ε then
14:        converged ← false
15:      end if
16:    end for
17:  end for
18: until converged = true

The label relaxation algorithm is iterative and it is polynomial in the size of the network (number of nodes). The pseudo-code is shown in Algorithm 2. It initializes the probabilities associated with each possible label for a node ni through a uniform distribution. At each iteration s, the algorithm updates the probability associated with each label by considering the support Q^s_ni(λk) for each candidate label of a sensor node. In the sections that follow, we describe four different techniques for implementing the support function, based on: node coloring, radio connectivity, the time of deployment (time) and the location of deployment (space). While some of these techniques are simplistic, they are primitives which, when combined, can create powerful localization systems. These design techniques have different trade-offs, which we present in Section 3.3.6.

3.3.1 Relaxation with Color Constraints
The unique mapping between a sensor node's position (identified by the image processing) and a label can be obtained by assigning a unique color to each sensor node. For this we define C = {c1, c2, ..., cn} to be the set of unique colors available and M: Λ → C to be a one-to-one mapping of labels to colors. This mapping is known prior to the sensor node deployment (from node manufacturing). In the case of color constrained label relaxation, the support for a label λk (Equation 5) simply encodes whether the color assigned to λk through M matches the color observed for the node. As a result, the label relaxation algorithm (Algorithm 2) consists of the following steps: one label is assigned to each sensor node (lines 1-3 of the algorithm), implicitly having a probability Pni(λk) = 1; the algorithm then executes a single iteration, in which the support function simply reiterates the confidence in the unique labeling. However, it is often the case that unique colors for each node will not be available. It is interesting to discuss here the influence that the size of the coloring space (i.e., |C|) has on the accuracy of the localization algorithm. Several cases are discussed below:
• If |C| = 0, no colors are used and the sensor nodes are equipped with simple CCRs that reflect back all the incoming light (i.e., no filtering and no coloring of the incoming light). From the image processing system, the position of sensor nodes can still be obtained. Since all nodes appear white, however, no single sensor node can be uniquely identified.
• If |C| = m − 1, there are enough unique colors for all nodes (one node remains white, i.e., no coloring) and the problem is trivially solved. Each node can be identified based on its unique color. This is the scenario for the relaxation with color constraints.
• If 1 ≤ |C| < m − 1, there are several options for how to partition the coloring space. If C = {c1}, one possibility is to assign the color c1 to a single node and leave the remaining m − 1 sensor nodes white, or to assign the color c1 to more than one sensor node. One can observe that once a color is assigned uniquely to a sensor node, that sensor node is, in effect, given the status of "anchor", or node with known location.
It is interesting to observe that there is an entire spectrum of possibilities for how to partition the set of sensor nodes into equivalence classes (where an equivalence class is represented by one color), in order to maximize the success of the localization algorithm.
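To make the update procedure concrete, the following Python sketch implements an iteration scheme in the spirit of Algorithm 2, with the color constrained case as the example support function. The data structures, the 0/1 form of the color support and the function names are illustrative assumptions, not the exact StarDust implementation:

def relax_labels(nodes, labels, support, eps=0.001, max_iter=100):
    # Lines 1-3 of Algorithm 2: start from a uniform distribution
    # over all candidate labels, for every node.
    probs = {n: {l: 1.0 / len(labels) for l in labels} for n in nodes}
    for _ in range(max_iter):
        converged = True                          # line 5
        for n in nodes:
            # Support Q for every candidate label of node n (line 8).
            q = {l: support(n, l, probs) for l in labels}
            # Normalizing constant K for node n (line 10).
            k = sum(probs[n][l] * q[l] for l in labels) or 1.0
            for l in labels:
                new_p = probs[n][l] * q[l] / k    # line 12
                if abs(new_p - probs[n][l]) > eps:
                    converged = False             # lines 13-14
                probs[n][l] = new_p
        if converged:                             # line 18
            break
    return probs

def color_support(observed_color, label_color):
    # Color constrained support (Section 3.3.1): a label is supported
    # only if its pre-assigned color matches the color observed for the
    # node in the image; otherwise its probability is driven to zero.
    def q(node, label, probs):
        return 1.0 if label_color[label] == observed_color[node] else 0.0
    return q

The same loop can be reused unchanged for the other primitives; only the support function passed to it differs (for example, a connectivity based support that weights a neighbor's candidate labels by the number of beacons exchanged, as described in Section 3.3.2).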
One of the goals of this paper is to understand how the size of the coloring space and its partitioning affect localization accuracy. Despite the simplicity of this method of constraining the set of labels that can be assigned to a node, we will show that this technique is very powerful, when combined with other relaxation techniques. 3.3.2 Relaxation with Connectivity Constraints Connectivity information, obtained from the sensor network through beaconing, can provide additional information for locating sensor nodes. In order to gather connectivity information, the following need to occur: 1) after deployment, through beaconing of HELLO messages, sensor nodes build their neighborhood tables; 2) each node sends its neighbor table information to the Central device via a base station. First, let us define G = (A, E) to be the weighted connectivity graph built by the Central device from the received neighbor table information. In G the edge (Xi, Xj) has a Figure 4. Label relaxation with connectivity constraints weight gij represented by the number of beacons sent by Xj and received by Xi. In addition, let R be the radio range of the sensor nodes. The main idea of the connectivity constrained label relaxation is depicted in Figure 4 in which two nodes ni and nj have been assigned all possible labels. The confidence in each of the candidate labels for a sensor node, is represented by a probability, shown in a dotted rectangle. It is important to remark that through beaconing and the reporting of neighbor tables to the Central Device, a global view of all constraints in the network can be obtained. It is critical to observe that these constraints are among labels. As shown in Figure 4 two constraints exist between nodes ni and nj. The constraints are depicted by gi2, j2 and gi2, jM, the number of beacons sent the labels Xj2 and XjM and received by the label Xi2. The support for the label Xk of sensor node ni, resulting from the "interaction" (i.e., within radio range) with sensor node nj is given by: As a result, the localization algorithm (Algorithm 3 consists of the following steps: all labels are assigned to each sensor node (lines 1-3 of the algorithm), and implicitly each label has a probability initialized to Pni (Xk) = 1 / | A |; in each iteration, the probabilities for the labels of a sensor node are updated, when considering the interaction with the labels of sensor nodes within R. It is important to remark that the identity of the nodes within R is not known, only the candidate labels and their probabilities. The relaxation algorithm converges when, during an iteration, the probability of no label is updated by more than s. The label relaxation algorithm based on connectivity constraints, enforces such constraints between pairs of sensor nodes. For a large scale sensor network deployment, it is not feasible to consider all pairs of sensor nodes in the network. Hence, the algorithm should only consider pairs of sensor nodes that are within a reasonable communication range (R). We assume a circular radio range and a symmetric connectivity. In the remaining part of the section we propose a simple analytical model that estimates the radio range R for medium-connected networks (less than 20 neighbors per R). 
We consider the following to be known: the size of the deployment field (L), the number of sensor nodes deployed (N) and the total number of unidirectional (i.e., not symmetric) one-hop radio connections in the network (k).

Algorithm 3 Localization
1: Estimate the radio range R
2: Execute the label relaxation algorithm with the support function given by Equation 6 for neighbors less than R apart
3: for each sensor node ni do
4:   node identity is the label λk with maximum probability
5: end for

For our analysis, we uniformly distribute the sensor nodes in a square area of length L, by using a grid of unit length L/√N. We use the substitution u = L/√N to simplify the notation, in order to distinguish the following cases: if u ≤ R ≤ √2·u, each node has four neighbors (the expected k = 4N); if √2·u ≤ R ≤ 2u, each node has eight neighbors (the expected k = 8N); and so on for larger radio ranges (e.g., the expected k = 20N). For a given t = k/4N we take R to be the middle of the corresponding interval. As an example, if t = 5 then R = (3 + √5)u/2. A quadratic fit of R over the possible values of t produces a closed-form solution for the communication range R as a function of the network connectivity k, assuming L and N constant (Equation 7). We investigate the accuracy of our model in Section 5.2.1.

3.3.3 Relaxation with Time Constraints
Time constraints can be treated similarly to color constraints. The unique identification of a sensor node can be obtained by deploying sensor nodes individually, one by one, and recording a sequence of images. The sensor node that is identified as new in the last picture (it was not identified in the picture before last) must be the last sensor node dropped. In a similar manner to the color constrained label relaxation, the time constrained approach is very simple, but may take too long, especially for large scale systems. While it can be used in practice, it is unlikely that only a time constrained label relaxation would be used. As we will see, by combining constraint-based primitives, realistic localization systems can be implemented. The support function for the label relaxation with time constraints is defined identically to that of the color constrained relaxation. The localization algorithm (Algorithm 2) consists of the following steps: one label is assigned to each sensor node (lines 1-3 of the algorithm), implicitly having a probability Pni(λk) = 1; the algorithm then executes a single iteration, in which the support function simply reiterates the confidence in the unique labeling.

Figure 5. Relaxation with space constraints
Figure 6. Probability distribution of distances
Figure 7. Distribution of nodes

3.3.4 Relaxation with Space Constraints
Spatial information related to sensor deployment can also be employed as another input to the label relaxation algorithm. To do that, we use two types of locations: the node location pn and the label location pl. The former, pn, is defined as the position (xn, yn, zn) of a node after deployment, which can be obtained through Image Processing as described in Section 3.2. The latter, pl, is defined as the location (xl, yl, zl) where a node is dropped. We use D_{ni,λm} to denote the horizontal distance between the location of the label λm and the location of the node ni. Clearly, D_{ni,λm} = √((xn − xl)^2 + (yn − yl)^2). At the time of a sensor node release, the one-to-one mapping between the node and its label is known. In other words, the label location is the same as the node location at the release time.
After release, the label location information is partially lost due to random factors such as wind and surface impact. However, statistically, the node locations are correlated with the label locations. Such correlation depends on the airdrop method employed and on the environment. For the sake of simplicity, let us assume that nodes are dropped from a helicopter hovering in the air. Wind can be decomposed into three components X, Y and Z. Only X and Y affect the horizontal distance a node can travel. According to [24], we can assume that X and Y follow independent normal distributions. Therefore, the absolute value of the wind speed follows a Rayleigh distribution. Obviously, the higher the wind speed, the farther a node lands, horizontally, from the label location. If we assume that the distance D is a function of the wind speed V [25] [26], we can obtain the probability distribution of D under a given wind speed distribution. Without loss of generality, we assume that D is proportional to the wind speed. Therefore, D follows the Rayleigh distribution as well. As shown in Figure 5, the spatial-based relaxation is a recursive process that assigns the probability that a node has a certain label by using the distances between the location of a node and multiple label locations. We note that the distribution of the distance D affects the probability with which a label is assigned; it is not necessarily true that the nearest label is always chosen. For example, if D follows the Rayleigh(σ^2) distribution, we can obtain the probability density function (PDF) of distances as shown in Figure 6. This figure indicates that the probability of a node falling vertically is very small under windy conditions (σ > 0), and that the distance D is affected by σ. The spatial distribution of nodes for σ = 1 is shown in Figure 7. Strong wind with a high σ value leads to a larger node dispersion. More formally, given a probability density function PDF(D), the support for label λk of sensor node ni is formulated as a function of the distance D_{ni,λk} under this distribution. It is interesting to point out two special cases. First, if all nodes are released at once (i.e., there is only one label location for all released nodes), the distance D from a node to all labels is the same. In this case, the support is the same for all labels, the probabilities remain unchanged (P^{s+1}_ni(λk) = P^s_ni(λk)), and we cannot use the spatial-based relaxation to recursively narrow down the potential labels for a node. Second, if nodes are released at different locations that are far away from each other, we have: (i) if node ni has label λk, then P^s_ni(λk) → 1 as s → ∞; (ii) if node ni does not have label λk, then P^s_ni(λk) → 0 as s → ∞. In this second scenario, there are multiple labels (one label per release), hence it is possible to correlate release times (labels) with positions on the ground. These results indicate that spatial-based relaxation can label the nodes with very high probability if the physical separation among nodes is large.

3.3.5 Relaxation with Color and Connectivity Constraints
One of the most interesting features of the StarDust architecture is that it allows for hybrid localization solutions to be built, depending on the system requirements. One example is a localization system that uses the color and connectivity constraints. In this scheme, the color constraints are used for reducing the number of candidate labels for sensor nodes to a more manageable value. As a reminder, in the connectivity constrained relaxation, all labels are candidate labels for each sensor node.
The color constraints are used in the initialization phase of Algorithm 3 (lines 1-3). After the initialization, the standard connectivity constrained relaxation algorithm is used. For a better understanding of how the label relaxation algorithm works, we give a concrete example, exemplified in Figure 8. In part (a) of the figure we depict the data structures Figure 8. A step through the algorithm. After initialization (a) and after the 1st iteration for node ni (b) associated with nodes ni and n / after the initialization steps of the algorithm (lines 1-6), as well as the number of beacons between different labels (as reported by the network, through G (Λ, E)). As seen, the potential labels (shown inside the vertical rectangles) are assigned to each node. Node ni can be any of the following: 11, 8,4, 1. Also depicted in the figure are the probabilities associated with each of the labels. After initialization, all probabilities are equal. Part (b) of Figure 8 shows the result of the first iteration of the localization algorithm for node ni, assuming that node n / is the first wi chosen in line 7 of Algorithm 3. By using Equation 6, the algorithm computes the" support" Q (λi) for each of the possible labels for node ni. Once the Q (λi)'s are computed, the normalizing constant, given by Equation 3 can be obtained. The last step of the iteration is to update the probabilities associated with all potential labels of node ni, as given by Equation 2. One interesting problem, which we explore in the performance evaluation section, is to assess the impact the partitioning of the color set C has on the accuracy of localization. When the size of the coloring set is smaller than the number of sensor nodes (as it is the case for our hybrid connectivity/color constrained relaxation), the system designer has the option of allowing one node to uniquely have a color (acting as an anchor), or multiple nodes. Intuitively, by assigning one color to more than one node, more constraints (distributed) can be enforced. 3.3.6 Relaxation Techniques Analysis The proposed label relaxation techniques have different trade-offs. For our analysis of the trade-offs, we consider the following metrics of interest: the localization time (duration), the energy consumed (overhead), the network size (scale) that can be handled by the technique and the localization accuracy. The parameters of interest are the following: the number of sensor nodes (N), the energy spent for one aerial drop (εd), the energy spent in the network for collecting and reporting neighbor information εb and the time Td taken by a sensor node to reach the ground after being aerially deployed. The cost comparison of the different label relaxation techniques is shown in Table 1. As shown, the relaxation techniques based on color and space constraints have the lowest localization duration, zero, for all practical purposes. The scalability of the color based relaxation technique is, however, limited to the number of Figure 9. SensorBall with self-righting capabilities (a) and colored CCRs (b) unique color filters that can be built. The narrower the Transfer Function Ψ (λ), the larger the number of unique colors that can be created. The manufacturing costs, however, are increasing as well. The scalability issue is addressed by all other label relaxation techniques. Most notably, the time constrained relaxation, which is very similar to the colorconstrained relaxation, addresses the scale issue, at a higher deployment cost. Table 1. 
Comparison of label relaxation techniques 4 System Implementation The StarDust localization framework, depicted in Figure 2, is flexible in that it enables the development of new localization systems, based on the four proposed label relaxation schemes, or the inclusion of other, yet to be invented, schemes. For our performance evaluation we implemented a version of the StarDust framework, namely the one proposed in Section 3.3.5, where the constraints are based on color and connectivity. The Central device of the StarDust system consists of the following: the Light Emitter - we used a common-off-theshelf flash light (QBeam, 3 million candlepower); the image acquisition was done with a 3 megapixel digital camera (Sony DSC-S50) which provided the input to the Image Processing algorithm, implemented in Matlab. For sensor nodes we built a custom sensor node, called SensorBall, with self-righting capabilities, shown in Figure 9 (a). The self-righting capabilities are necessary in order to orient the CCR predominantly upwards. The CCRs that we used were inexpensive, plastic molded, night time warning signs commonly available on bicycles, as shown in Figure 9 (b). We remark here the low quality of the CCRs we used. The reflectivity of each CCR (there are tens molded in the plastic container) is extremely low, and each CCR is not built with mirrors. A reflective effect is achieved by employing finely polished plastic surfaces. We had 5 colors available, in addition to the standard CCR, which reflects all the incoming light (white CCR). For a slightly higher price (ours were 20cents/piece), better quality CCRs can be employed. Figure 10. The field in the dark Figure 11. The illuminated field Figure 12. The difference: Figure 10 Figure 11 Higher quality (better mirrors) would translate in more accurate image processing (better sensor node detection) and smaller form factor for the optical component (an array of CCRs with a smaller area can be used). The sensor node platform we used was the micaZ mote. The code that runs on each node is a simple application which broadcasts 100 beacons, and maintains a neighbor table containing the percentage of successfully received beacons, for each neighbor. On demand, the neighbor table is reported to a base station, where the node ID mapping is performed. 5 System Evaluation In this section we present the performance evaluation of a system implementation of the StarDust localization framework. The three major research questions that our evaluation tries to answer are: the feasibility of the proposed framework (can sensor nodes be optically detected at large distances), the localization accuracy of one actual implementation of the StarDust framework, and whether or not atmospheric conditions can affect the recognition of sensor nodes in an image. The first two questions are investigated by evaluating the two main components of the StarDust framework: the Image Processing and the Node ID Matching. These components have been evaluated separately mainly because of lack of adequate facilities. We wanted to evaluate the performance of the Image Processing Algorithm in a long range, realistic, experimental set-up, while the Node ID Matching required a relatively large area, available for long periods of time (for connectivity data gathering). The third research question is investigated through a computer modeling of atmospheric phenomena. 
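The atmospheric model used for that third question is the apparent-contrast formulation detailed in Section 5.5 (Equations 10-12). A small numerical sketch of it is given below; the constants (G = 0.24, σ ≈ 0.12 km^-1 clear and 0.60 km^-1 hazy) follow the text, while the midpoint-rule integration, the unit source intensity and the background reflectance value are our own illustrative choices:

import math

FT_PER_KM = 3280.84  # sigma is quoted per km; ranges are given in feet

def backscatter(i_src, sigma, r):
    # N_a, the atmospheric backscatter radiance (Equation 12), with the
    # integral of exp(-x)/x^2 over [0.02*sigma*r, 2*sigma*r] evaluated
    # by a simple midpoint rule.
    g = 0.24
    lo, hi, steps = 0.02 * sigma * r, 2.0 * sigma * r, 2000
    dx = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        x = lo + (i + 0.5) * dx
        total += math.exp(-x) / (x * x) * dx
    return g * sigma ** 2 * i_src / (2.0 * math.pi) * total

def radiance(i_src, rho, sigma, r):
    # Apparent radiance of a surface with reflectance rho (Equation 11).
    return backscatter(i_src, sigma, r) + \
        i_src * rho * math.exp(-2.0 * sigma * r) / (math.pi * r * r)

def apparent_contrast(i_src, rho_t, rho_b, sigma_per_km, r_feet):
    # C_r = (N_r^t - N_r^b) / N_r^b (Equation 10), working in feet.
    sigma = sigma_per_km / FT_PER_KM
    n_t = radiance(i_src, rho_t, sigma, r_feet)
    n_b = radiance(i_src, rho_b, sigma, r_feet)
    return (n_t - n_b) / n_b

# Example: a highly reflective CCR (rho_t = 0.9) against a duller
# background (rho_b = 0.3, an assumption) in a clear atmosphere, 4,500 ft.
print(apparent_contrast(1.0, 0.9, 0.3, 0.12, 4500.0))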
For the evaluation of the Image Processing module, we performed experiments in a football stadium where we deploy 6 sensor nodes in a 3 × 2 grid. The distance between the Central device and the sensor nodes is approximately 500ft. The metrics of interest are the number of false positives and false negatives in the Image Processing Algorithm. For the evaluation of the Node ID Mapping component, we deploy 26 sensor nodes in an 120 × 60ft2 flat area of a stadium. In order to investigate the influence the radio connectivity has on localization accuracy, we vary the height above ground of the deployed sensor nodes. Two set-ups are used: one in which the sensor nodes are on the ground, and the second one, in which the sensor nodes are raised 3 inches above ground. From here on, we will refer to these two experimental set-ups as the low connectivity and the high connectivity networks, respectively because when nodes are on the ground the communication range is low resulting in less neighbors than when the nodes are elevated and have a greater communication range. The metrics of interest are: the localization error (defined as the distance between the computed location and the true location - known from the manual placement), the percentage of nodes correctly localized, the convergence of the label relaxation algorithm, the time to localize and the robustness of the node ID mapping to errors in the Image Processing module. The parameters that we vary experimentally are: the angle under which images are taken, the focus of the camera, and the degree of connectivity. The parameters that we vary in simulations (subsequent to image acquisition and connectivity collection) the number of colors, the number of anchors, the number of false positives or negatives as input to the Node ID Matching component, the distance between the imaging device and sensor network (i.e., range), atmospheric conditions (light attenuation coefficient) and CCR reflectance (indicative of its quality). 5.1 Image Processing For the IPA evaluation, we deploy 6 sensor nodes in a 3 × 2 grid. We take 13 sets of pictures using different orientations of the camera and different zooming factors. All pictures were taken from the same location. Each set is composed of a picture taken in the dark and of a picture taken with a light beam pointed at the nodes. We process the pictures offline using a Matlab implementation of IPA. Since we are interested in the feasibility of identifying colored sensor nodes at large distance, the end result of our IPA is the 2D location of sensor nodes (position in the image). The transformation to 3D coordinates can be done through standard computer graphics techniques [23]. One set of pictures obtained as part of our experiment is shown in Figures 10 and 11. The execution of our IPA algorithm results in Figure 12 which filters out the background, and Figure 13 which shows the output of the edge detection step of IPA. The experimental results are depicted in Figure 14. For each set of pictures the graph shows the number of false positives (the IPA determines that there is a node Figure 13. Retroreflectors detected in Figure 12 Figure 14. False Positives and Negatives for the 6 nodes while there is none), and the number of false negatives (the IPA determines that there is no node while there is one). In about 45% of the cases, we obtained perfect results, i.e., no false positives and no false negatives. 
In the remaining cases, we obtained a number of false positives of at most one, and a number of false negatives of at most two. We exclude two pairs of pictures from Figure 14. In the first excluded pair, we obtain 42 false positives and in the second pair 10 false positives and 7 false negatives. By carefully examining the pictures, we realized that the first pair was taken out of focus and that a car temporarily appeared in one of the pictures of the second pair. The anomaly in the second set was due to the fact that we waited too long to take the second picture. If the pictures had been taken a few milliseconds apart, the car would have been represented on either both or none of the pictures and the IPA would have filtered it out. 5.2 Node ID Matching We evaluate the Node ID Matching component of our system by collecting empirical data (connectivity information) from the outdoor deployment of 26 nodes in the 120 × 60ft2 area. We collect 20 sets of data for the high connectivity and low connectivity network deployments. Off-line we investigate the influence of coloring on the metrics of interest, by randomly assigning colors to the sensor nodes. For one experimental data set we generate 50 random assignments of colors to sensor nodes. It is important to observe that, for the evaluation of the Node ID Matching algorithm (color and connectivity constrained), we simulate the color assignment to sensor nodes. As mentioned in Section 4 the size of the coloring space available to us was 5 (5 colors). Through simulations of color assignment (not connectivity) we are able to investigate the influence that the size of the coloring space has on the accuracy of localization. The value of the param Figure 15. The number of existing and missing radio connections in the sparse connectivity experiment Figure 16. The number of existing and missing radio connections in the high connectivity experiment eter ε used in Algorithm 2 was 0.001. The results presented here represent averages over the randomly generated colorings and over all experimental data sets. We first investigate the accuracy of our proposed Radio Model, and subsequently use the derived values for the radio range in the evaluation of the Node ID matching component. 5.2.1 Radio Model From experiments, we obtain the average number of observed beacons (k, defined in Section 3.3.2) for the low connectivity network of 180 beacons and for the high connectivity network of 420 beacons. From our Radio Model (Equation 7, we obtain a radio range R = 25 ft for the low connectivity network and R = 40ft for the high connectivity network. To estimate the accuracy of our simple model, we plot the number of radio links that exist in the networks, and the number of links that are missing, as functions of the distance between nodes. The results are shown in Figures 15 and 16. We define the average radio range R to be the distance over which less than 20% of potential radio links, are missing. As shown in Figure 15, the radio range is between 20ft and 25ft. For the higher connectivity network, the radio range was between 30ft and 40ft. We choose two conservative estimates of the radio range: 20ft for the low connectivity case and 35 ft for the high connectivity case, which are in good agreement with the values predicted by our Radio Model. 5.2.2 Localization Error vs. Coloring Space Size In this experiment we investigate the effect of the number of colors on the localization accuracy. 
5.2.2 Localization Error vs. Coloring Space Size In this experiment we investigate the effect of the number of colors on the localization accuracy. For this, we randomly assign colors from a pool of a given size to the sensor nodes. We then execute the localization algorithm, which uses the empirical data. The algorithm is run for three different radio ranges: 15, 20 and 25 ft, to investigate their influence on the localization error. The results are depicted in Figure 17 (localization error) and Figure 18 (percentage of nodes correctly localized). As shown, for an estimate of 20 ft for the radio range (as predicted by our Radio Model) we obtain the smallest localization errors, as small as 2 ft, when enough colors are used. Both Figures 17 and 18 confirm our intuition that a larger number of available colors significantly decreases the localization error. The well known fact that relaxation algorithms do not always converge was observed during our experiments. The percentage of successful runs (when the algorithm converged) is depicted in Figure 19. As shown, in several situations the algorithm failed to converge (the algorithm execution was stopped after 100 iterations per node). If the algorithm does not converge in a predetermined number of steps, it terminates and the label with the highest probability provides the identity of the node. It is very probable that the chosen label is incorrect, since the probabilities of some of the labels are still changing with each iteration. The convergence of relaxation based algorithms is a well known issue. 5.2.3 Localization Error vs. Color Uniqueness As mentioned in Section 3.3.1, a unique color gives a sensor node the status of an anchor. A sensor node that is an anchor can unequivocally be identified through the Image Processing module. In this section we investigate the effect unique colors have on the localization accuracy. Specifically, we want to experimentally verify our intuition that assigning more nodes to a color can benefit the localization accuracy, by enforcing more constraints, as opposed to uniquely assigning a color to a single node. For this, we fix the number of available colors to either 4, 6 or 8 and vary the number of nodes that are given unique colors, from 0 up to the maximum number of colors (4, 6 or 8). Naturally, if we have a maximum of 4 colors, we can assign at most 4 anchors. The experimental results are depicted in Figure 20 (localization error vs. number of colors) and Figure 21 (percentage of sensor nodes correctly localized). As expected, the localization accuracy increases with the increase in the number of available colors (larger coloring space). Also, for a given size of the coloring space (e.g., 6 colors available), if more colors are uniquely assigned to sensor nodes then the localization accuracy decreases. It is interesting to observe that by assigning colors uniquely to nodes, the benefit of having additional colors is diminished. Specifically, if 8 colors are available and all are assigned uniquely, the system would be less accurately localized (error ≈ 7 ft) when compared to the case of 6 colors and no unique assignments of colors (≈ 5 ft localization error). The same trend of less accurate localization can be observed in Figure 21, which shows the percentage of nodes correctly localized (i.e., 0 ft localization error). As shown, if we increase the number of colors that are uniquely assigned, the percentage of nodes correctly localized decreases.
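The stopping policy reported in Section 5.2.2 (runs capped at 100 iterations per node, with the highest-probability label used as a fallback when the relaxation does not settle) can be summarized with the skeleton below. The support() function is a placeholder for the color- and connectivity-constrained update of Algorithm 2, which is not reproduced here, and the settling tolerance is an assumed parameter; only the iteration cap and the argmax fallback come from the text.

```python
# Skeleton of the iterative label relaxation with the stopping policy used
# in the experiments: stop early if the label probabilities settle, cap the
# number of iterations at 100, and fall back to the most probable label.
# support() stands in for the color/connectivity update of Algorithm 2.

def relax_labels(nodes, labels, prob, support, max_iters=100, tol=1e-6):
    """prob: {node: {label: probability}}, initialized from the color constraints.
    support(node, label, prob): non-negative score from the connectivity constraints."""
    for _ in range(max_iters):
        max_change, new_prob = 0.0, {}
        for n in nodes:
            # Re-weight each label by its support, then re-normalize.
            weighted = {l: prob[n][l] * support(n, l, prob) for l in labels}
            total = sum(weighted.values()) or 1.0
            new_prob[n] = {l: w / total for l, w in weighted.items()}
            max_change = max(max_change,
                             max(abs(new_prob[n][l] - prob[n][l]) for l in labels))
        prob = new_prob
        if max_change < tol:        # probabilities have settled
            break
    # Converged or not, each node takes its most probable label.
    return {n: max(labels, key=lambda l: prob[n][l]) for n in nodes}
```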
5.2.4 Localization Error vs. Connectivity We collected empirical data for two network deployments with different degrees of connectivity (high and low) in order to assess the influence of connectivity on location accuracy. The results obtained from running our localization algorithm are depicted in Figure 22 (localization error vs. number of colors) and Figure 23 (percentage of nodes correctly localized). We varied the number of colors available and assigned no anchors (i.e., no unique assignments of colors). In both scenarios, as expected, the localization error decreases with an increase in the number of colors. It is interesting to observe, however, that the low connectivity scenario improves in localization accuracy more quickly as additional colors become available. When the number of colors becomes relatively large (twelve for our 26 sensor node network), both scenarios (low and high connectivity) have comparable localization errors, of less than 2 ft. The same trend of more accurate location information is evidenced by Figure 23, which shows that the percentage of nodes that are localized correctly grows more quickly for the low connectivity deployment. 5.3 Localization Error vs. Image Processing Errors So far we have investigated the sources of error in localization that are intrinsic to the Node ID Matching component. As previously presented, luminous objects can be mistakenly detected as sensor nodes during the location detection phase of the Image Processing module. These false positives can be eliminated by the color recognition procedure of the Image Processing module. More problematic are false negatives (when a sensor node does not reflect back enough light to be detected). They need to be handled by the localization algorithm. In this case, the localization algorithm is presented with two sets of nodes of different sizes that need to be matched: one coming from the Image Processing module (which misses some nodes) and one coming from the network, with the connectivity information (here we assume a fully connected network, so that all sensor nodes report their connectivity information). In this experiment we investigate how Image Processing errors (false negatives) influence the localization accuracy. For this evaluation, we ran our localization algorithm with empirical data, but dropped a percentage of nodes from the list of nodes detected by the Image Processing algorithm (we artificially introduced false negatives in the Image Processing). The effect of false negatives on localization accuracy is depicted in Figure 24 (impact of false negatives on the localization error). As seen in the figure, if the number of false negatives is 15%, the error in position estimation doubles when 4 colors are available. It is interesting to observe that the scenario in which more colors are available (e.g., 12 colors) is affected more drastically than the scenario with fewer colors (e.g., 4 colors). The benefit of having more colors available is still maintained, at least for the range of colors we investigated (4 through 12 colors).
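The false-negative experiment above amounts to removing a random fraction of the image-detected nodes before node ID matching and re-scoring the result with the metrics defined at the beginning of Section 5. A small harness of that form is sketched below; the localize() call is a placeholder for the full Node ID Matching pipeline, and the random drop is an assumption about how the false negatives were injected.

```python
# Harness for the false-negative experiment: drop a fraction of the nodes
# detected by image processing, run the localization pipeline, and score the
# result with the mean localization error and the percentage of nodes
# localized correctly (0 ft error). localize() is a placeholder.
import math
import random

def run_false_negative_trial(detected, true_positions, localize, drop_rate=0.15):
    kept = random.sample(detected, k=round(len(detected) * (1 - drop_rate)))
    estimated = localize(kept)          # {node_id: (x, y)} for localized nodes

    errors = [math.dist(estimated[n], true_positions[n]) for n in estimated]
    mean_error = sum(errors) / len(errors) if errors else float("nan")
    pct_correct = 100.0 * sum(e == 0.0 for e in errors) / len(true_positions)
    return mean_error, pct_correct
```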
5.4 Localization Time In this section we look more closely at the duration of each of the four proposed relaxation techniques and of two combinations of them: color-connectivity and color-time. We assume that 50 unique color filters can be manufactured, that the sensor network is deployed from 2,400 ft (necessary for the time-constrained relaxation) and that the time required for reporting connectivity grows linearly, with an initial reporting period of 160 sec, as used in a real world tracking application [1]. The localization duration results, as presented in Table 1, are depicted in Figure 25 (localization time for the different label relaxation schemes). As shown, for all practical purposes the time required by the space constrained relaxation technique is 0 sec. The same applies to the color constrained relaxation, for which the localization time is 0 sec (if the number of colors is sufficient). Considering our assumptions, the color constrained relaxation works only for a network of size 50. The localization duration for all other network sizes (100, 150 and 200) is infinite (i.e., unique color assignments to sensor nodes cannot be made, since only 50 colors are unique) when only color constrained relaxation is used. Both the connectivity constrained and time constrained techniques increase linearly with the network size (for the time constrained technique, the Central device deploys sensor nodes one by one, recording an image after the time a sensor node is expected to reach the ground). It is interesting to notice in Figure 25 the improvement in the localization time obtained by simply combining the color and the connectivity constrained techniques. The localization duration in this case is identical to that of the connectivity constrained technique. The combination of color and time constrained relaxations is even more interesting. For a reasonable localization duration of 52 seconds, a perfect (i.e., 0 ft localization error) localization system can be built. In this scenario, the set of sensor nodes is split into batches, with each batch having a set of unique colors. It would be very interesting to consider other scenarios, where the strength of the space constrained relaxation (0 sec for any sensor network size) is used for improving the other proposed relaxation techniques. We leave the investigation and rigorous classification of such technique combinations for future work. 5.5 System Range In this section we evaluate the feasibility of the StarDust localization framework when considering the realities of light propagation through the atmosphere. The main factor that determines the range of our system is light scattering, which redirects the luminance of the source into the medium (in essence equally affecting the luminosity of the target and of the background). Scattering limits the visibility range by reducing the apparent contrast between the target and its background, which approaches zero as the distance increases. The apparent contrast Cr is quantitatively expressed by the formula: Cr = (Ntr − Nbr) / Nbr (10) where Ntr and Nbr are the apparent target radiance and apparent background radiance at distance r from the light source, respectively. The apparent radiance Ntr of a target at a distance r from the light source is given by: Ntr = Na + I ρt e^(−2σr) / (π r^2) (11) where I is the intensity of the light source, ρt is the target reflectance, σ is the spectral attenuation coefficient (≈ 0.12 km^−1 and ≈ 0.60 km^−1 for a clear and a hazy atmosphere, respectively) and Na is the radiance of the atmospheric backscatter, which involves the backscatter gain G = 0.24 and an integral of the form ∫ e^(−x)/x^2 dx (Equation 12). The apparent background radiance Nbr is given by formulas similar to Equations 11 and 12, with the target reflectance ρt replaced by the background reflectance ρb.
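To see how Equations 10 and 11 translate into concrete range estimates, the sketch below sweeps the distance r and reports the largest distance at which the apparent contrast stays at or above a chosen threshold. The backscatter radiance Na of Equation 12 is passed in as a caller-supplied function, since its closed form is not reproduced above, and every number in the example call is illustrative rather than taken from the paper.

```python
# Sketch of the apparent-contrast computation of Equations 10-11. Na(r),
# the atmospheric backscatter of Equation 12, is supplied by the caller.
# All numbers in the example call are illustrative; r and sigma must be
# expressed in consistent units (here kilometers and km^-1).
import math

def apparent_radiance(I, rho, sigma, r, Na):
    # Equation 11, with rho the target (or background) reflectance.
    return Na(r) + I * rho * math.exp(-2.0 * sigma * r) / (math.pi * r ** 2)

def apparent_contrast(I, rho_t, rho_b, sigma, r, Na):
    # Equation 10: contrast of the target against its background.
    Nt = apparent_radiance(I, rho_t, sigma, r, Na)
    Nb = apparent_radiance(I, rho_b, sigma, r, Na)
    return (Nt - Nb) / Nb

def max_range_km(I, rho_t, rho_b, sigma, Na, c_min=0.5, r_max=3.0, step=0.01):
    """Largest distance (km) at which the contrast is still at least c_min."""
    best, r = 0.0, step
    while r <= r_max:
        if apparent_contrast(I, rho_t, rho_b, sigma, r, Na) >= c_min:
            best = r
        r += step
    return best

# Illustrative call, comparing a clear (sigma = 0.12 km^-1) and a hazy
# (sigma = 0.60 km^-1) atmosphere with a constant, assumed backscatter term:
# max_range_km(I=1.0, rho_t=1.0, rho_b=0.1, sigma=0.12, Na=lambda r: 1e-3)
```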
It is important to remark that when Cr reaches its lower limit, no increase in the source luminance or receiver sensitivity will increase the range of the system. From Equations 11 and 12 it can be observed that the parameter that can be controlled to influence the range of the system is ρt, the target reflectance. Figures 26 and 27 depict the apparent contrast Cr as a function of the distance r for a clear and for a hazy atmosphere, respectively. The apparent contrast is investigated for reflectance coefficients ρt ranging from 0.3 to 1.0 (perfect reflector). For a contrast C of at least 0.5, as can be seen in Figure 26, a range of approximately 4,500 ft can be achieved if the atmosphere is clear. The performance dramatically deteriorates when the atmospheric conditions are problematic. As shown in Figure 27, a range of up to 1,500 ft is achievable when using highly reflective CCR components. While our light source (3 million candlepower) was sufficient for a range of a few hundred feet, we remark that there exist commercially available light sources (20 million candlepower) or military light sources (150 million candlepower [27]) powerful enough for ranges of a few thousand feet. 6 StarDust System Optimizations In this section we describe extensions of the proposed architecture that can constitute future research directions. 6.1 Chained Constraint Primitives In this paper we proposed four primitives for constraint-based relaxation algorithms: color, connectivity, time and space. To demonstrate the power that can be obtained by combining them, we proposed and evaluated one combination of such primitives: color and connectivity. An interesting research direction to pursue could be to chain more than two of these primitives. An example of such a chain is: color, temporal, spatial and connectivity. Other research directions could be to use a voting scheme for deciding which primitive to use, or to assign different weights to different relaxation algorithms. 6.2 Location Learning If, after several iterations of the algorithm, none of the label probabilities for a node ni converges to a high value, the confidence in our labeling of that node is relatively low. It would be interesting to associate more than one label (implicitly, more than one location) with a node and defer the label assignment decision until events are detected in the network (if the network was deployed for target tracking). 6.3 Localization in Rugged Environments The initial driving force for the StarDust localization framework was to address sensor node localization in extremely rugged environments. Canopies, dense vegetation and other highly obstructing environments pose significant challenges for sensor node localization. The hope, and our original idea, was to consider the time period between the aerial deployment and the time when the sensor node disappears under the canopy. By recording the last visible position of a sensor node (as seen from the aircraft), a reasonable estimate of the sensor node location can be obtained. This would require that sensor nodes possess self-righting capabilities while in mid-air. Nevertheless, we remark on the suitability of our localization framework for rugged, non-line-of-sight environments.
7 Conclusions StarDust solves the localization problem for aerial deployments where passiveness, low cost, small form factor and rapid localization are required. Results show that the localization accuracy can be within 2 ft and the localization time within milliseconds. StarDust also shows robustness with respect to errors. We predict the influence that atmospheric conditions can have on the range of a system based on the StarDust framework, and show that hazy environments or daylight can pose significant challenges. Most importantly, the properties of StarDust support the potential for even more accurate localization solutions, as well as solutions for rugged, non-line-of-sight environments.
J-61
ICE: An Iterative Combinatorial Exchange
We present the first design for an iterative combinatorial exchange (ICE). The exchange incorporates a tree-based bidding language that is concise and expressive for CEs. Bidders specify lower and upper bounds on their value for different trades. These bounds allow price discovery and useful preference elicitation in early rounds, and allow termination with an efficient trade despite partial information on bidder valuations. All computation in the exchange is carefully optimized to exploit the structure of the bid-trees and to avoid enumerating trades. A proxied interpretation of a revealedpreference activity rule ensures progress across rounds. A VCG-based payment scheme that has been shown to mitigate opportunities for bargaining and strategic behavior is used to determine final payments. The exchange is fully implemented and in a validation phase.
[ "iter combinatori exchang", "combinatori exchang", "bid", "trade", "price", "prefer elicit", "doubl auction", "combinatori auction", "buyer and seller", "tree-base bid languag", "winner-determin", "threshold payment", "vcg" ]
[ "P", "P", "P", "P", "P", "P", "U", "M", "M", "M", "U", "M", "U" ]
ICE: An Iterative Combinatorial Exchange David C. Parkes∗ † Ruggiero Cavallo† Nick Elprin† Adam Juda† S´ebastien Lahaie† Benjamin Lubin† Loizos Michael† Jeffrey Shneidman† Hassan Sultan† ABSTRACT We present the first design for an iterative combinatorial exchange (ICE). The exchange incorporates a tree-based bidding language that is concise and expressive for CEs. Bidders specify lower and upper bounds on their value for different trades. These bounds allow price discovery and useful preference elicitation in early rounds, and allow termination with an efficient trade despite partial information on bidder valuations. All computation in the exchange is carefully optimized to exploit the structure of the bid-trees and to avoid enumerating trades. A proxied interpretation of a revealedpreference activity rule ensures progress across rounds. A VCG-based payment scheme that has been shown to mitigate opportunities for bargaining and strategic behavior is used to determine final payments. The exchange is fully implemented and in a validation phase. Categories and Subject Descriptors: I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence; J.4 [Computer Applications]: Social and Behavioral Sciences -Economics General Terms: Algorithms, Economics, Theory. 1. INTRODUCTION Combinatorial exchanges combine and generalize two different mechanisms: double auctions and combinatorial auctions. In a double auction (DA), multiple buyers and sellers trade units of an identical good [20]. In a combinatorial auction (CA), a single seller has multiple heterogeneous items up for sale [11]. Buyers may have complementarities or substitutabilities between goods, and are provided with an expressive bidding language. A common goal in both market designs is to determine the efficient allocation, which is the allocation that maximizes total value. A combinatorial exchange (CE) [24] is a combinatorial double auction that brings together multiple buyers and sellers to trade multiple heterogeneous goods. For example, in an exchange for wireless spectrum, a bidder may declare that she is willing to pay $1 million for a trade where she obtains licenses for New York City, Boston, and Philadelphia, and loses her license for Washington DC. Thus, unlike a DA, a CE allows all participants to express complex valuations via expressive bids. Unlike a CA, a CE allows for fragmented ownership, with multiple buyers and sellers and agents that are both buying and selling. CEs have received recent attention both in the context of wireless spectrum allocation [18] and for airport takeoff and landing slot allocation [3]. In both of these domains there are incumbents with property rights, and it is important to facilitate a complex multi-way reallocation of resources. Another potential application domain for CEs is to resource allocation in shared distributed systems, such as PlanetLab [13]. The instantiation of our general purpose design to specific domains is a compelling next step in our research. This paper presents the first design for an iterative combinatorial exchange (ICE). The genesis of this project was a class, CS 286r Topics at the Interface between Economics and Computer Science, taught at Harvard University in Spring 2004.1 The entire class was dedicated to the design and prototyping of an iterative CE. The ICE design problem is multi-faceted and quite hard. 
The main innovation in our design is an expressive yet concise tree-based bidding language (which generalizes known languages such as XOR/OR [23]), and the tight coupling of this language with efficient algorithms for price-feedback to guide bidding, winner-determination to determine trades, and revealed-preference activity rules to ensure progress across rounds. The exchange is iterative: bidders express upper and lower valuations on trades by annotating their bid-tree, and then tighten these bounds in response to price feedback in each round. The Threshold payment rule, introduced by Parkes et al. [24], is used to determine final payments. The exchange has a number of interesting theoretical properties. For instance, when there exist linear prices we establish soundness and completeness: for straightforward bidders that adjust their bounds to meet activity rules while keeping their true value within the bounds, the exchange will terminate with the efficient allocation. In addition, the 1 http://www.eecs.harvard.edu/∼parkes/cs286r/ice.html 249 Truth Agent Act Rule WD ACC FAIR BALClosing RuleVickreyThreshold DONE ! DONE 2,2 +A +10 +B +10 BUYER 2,2 -A -5 -B -5 SELLER 2,2 +A +15 +8 +B +15 +8 BUYER 2,2 -A -2 -6 -B -2 -6 SELLER BUYER, buy AB SELLER, sell AB 12 < PA+PB < 16 PA+PB=14 PA=PB=7 PBUYER = 16 - (4-0) = 12 PSELLER = -12 - (4-0) = -16 PBUYER = 14 PSELLER = -14 Pessim istic O ptim istic = 1 Figure 1: ICE System Flow of Control efficient allocation can often be determined without bidders revealing, or even knowing, their exact value for all trades. This is essential in complex domains where the valuation problem can itself be very challenging for a participant [28]. While we cannot claim that straightforward bidding is an equilibrium of the exchange (and indeed, should not expect to by the Myerson-Satterthwaite impossibility theorem [22]), the Threshold payment rule minimizes the ex post incentive to manipulate across all budget-balanced payment rules. The exchange is implemented in Java and is currently in validation. In describing the exchange we will first provide an overview of the main components and introduce several working examples. Then, we introduce the basic components for a simple one-shot variation in which bidders state their exact values for trades in a single round. We then describe the full iterative exchange, with upper and lower values, price-feedback, activity rules, and termination conditions. We state some theoretical properties of the exchange, and end with a discussion to motivate our main design decisions, and suggest some next steps. 2. AN OVERVIEW OF THE ICE DESIGN The design has four main components, which we will introduce in order through the rest of the paper: • Expressive and concise tree-based bidding language. The language describes values for trades, such as my value for selling AB and buying C is $100, or my value for selling ABC is -$50, with negative values indicating that a bidder must receive a payment for the trade to be acceptable. The language allows bidders to express upper and lower bounds on value, which can be tightened across rounds. • Winner Determination. Winner-determination (WD) is formulated as a mixed-integer program (MIP), with the structure of the bid-trees captured explicitly in the formulation. Comparing the solution at upper and lower values allows for a determination to be made about termination, with progress in intermediate rounds driven by an intermediate valuation and the lower values adopted on termination. • Payments. 
Payments are computed using the Threshold payment rule [24], with the intermediate valuations adopted in early rounds and lower values adopted on termination. • Price feedback. An approximate price is computed for each item in the exchange in each round, in terms of the intermediate valuations and the provisional trade. The prices are optimized to approximate competitive equilibrium prices, and further optimized to best approximate the current Threshold payments with remaining ties broken to favor prices that are balanced across different items. In computing the prices, we adopt the methods of constraint-generation to exploit the structure of the bidding language and avoid enumerating all feasible trades. The subproblem to generate new constraints is a variation of the WD problem. • Activity rule. A revealed-preference activity rule [1] ensures progress across rounds. In order to remain active, a bidder must tighten bounds so that there is enough information to define a trade that maximizes surplus at the current prices. Another variation on the WD problem is formulated, both to verify that the activity rule is met and also to provide feedback to a bidder to explain how to meet the rule. An outline of the ICE system flow of control is provided in Figure 1. We will return to this example later in the paper. For now, just observe in this two-agent example that the agents state lower and upper bounds that are checked in the activity rule, and then passed to winner-determination (WD), and then through three stages of pricing (accuracy, fairness, balance). On passing the closing rule (in which parameters αeff and αthresh are checked for convergence of the trade and payments), the exchange goes to a last-and-final round. At the end of this round, the trade and payments are finally determined, based on the lower valuations. 2.1 Related Work Many ascending-price one-sided CAs are known in the literature [10, 25, 29]. Direct elicitation approaches have also been proposed for one-sided CAs in which agents respond to explicit queries about their valuations [8, 14, 19]. A number of ascending CAs are designed to work with simple prices on items [12, 17]. The price generation methods that we use in ICE generalize the methods in these earlier papers. Parkes et al. [24] studied sealed-bid combinatorial exchanges and introduced the Threshold payment rule. Subsequently, Krych [16] demonstrated experimentally that the Threshold rule promotes efficient allocations. We are not aware of any previous studies of iterative CEs. Dominant strategy DAs are known for unit demand [20] and also for single-minded agents [2]. No dominant strategy mechanisms are known for the general CE problem. ICE is a hybrid auction design, in that it couples simple item prices to drive bidding in early rounds with combinatorial WD and payments, a feature it shares with the clock-proxy design of Ausubel et al. [1] for one-sided CAs. We adopt a variation on the clock-proxy auctions``s revealedpreference activity rule. The bidding language shares some structural elements with the LGB language of Boutilier and Hoos [7], but has very different semantics. Rothkopf et al. [27] also describe a restricted tree-based bidding language. In LGB, the semantics are those of propositional logic, with the same items in an allocation able to satisfy a tree in multiple places. 
Although this can make LGB especially concise in some settings, the semantics that we propose appear to provide useful locality, so that the value of one component in a tree can be understood independently from the rest of the tree. The idea of capturing the structure of our bidding language explicitly within a mixed-integer programming formulation follows the developments in Boutilier [6]. 3. PRELIMINARIES In our model, we consider a set of goods, indexed {1, ... , m} and a set of bidders, indexed {1, ... , n}. The initial allocation of goods is denoted x0 = (x0 1, ... , x0 n), with x0 i = (x0 i1, ... , x0 im) and x0 ij ≥ 0 for good j indicating the number 250 of units of good j held by bidder i. A trade λ = (λ1, ... , λn) denotes the change in allocation, with λi = (λi1, ... , λim) where λij ∈ is the change in the number of units of item j to bidder i. So, the final allocation is x1 = x0 + λ. Each bidder has a value vi(λi) ∈ ¡ for a trade λi. This value can be positive or negative, and represents the change in value between the final allocation x0 i +λi and the initial allocation x0 i . Utility is quasi-linear, with ui(λi, p) = vi(λi)−p for trade λi and payment p ∈ ¡ . Price p can be negative, indicating the bidder receives a payment for the trade. We use the term payoff interchangeably with utility. Our goal in the ICE design is to implement the efficient trade. The efficient trade, λ∗ , maximizes the total increase in value across bidders. Definition 1 (Efficient trade). The efficient trade λ∗ solves max (λ1,...,λn) cents i vi(λi) s.t. λij + x0 ij ≥ 0, ∀i, ∀j (1) cents i λij ≤ 0, ∀j (2) λij ∈ (3) Constraints (1) ensure that no agent sells more items than it has in its initial allocation. Constraints (2) provide free disposal, and allows feasible trades to sell more items than are purchased (but not vice versa). Later, we adopt Feas(x0 ) to denote the set of feasible trades, given these constraints and given an initial allocation x0 = (x0 1, ... , x0 n). 3.1 Working Examples In this section, we provide three simple examples of instances that we will use to illustrate various components of the exchange. All three examples have only one seller, but this is purely illustrative. Example 1. One seller and one buyer, two goods {A, B}, with the seller having an initial allocation of AB. Changes in values for trades: seller buyer AND(−A, −B) AND(+A, +B) -10 +20 The AND indicates that both the buyer and the seller are only interested in trading both goods as a bundle. Here, the efficient (value-maximizing) trade is for the seller to sell AB to the buyer, denoted λ∗ = ([−1, −1], [+1, +1]). Example 2. One seller and four buyers, four goods {A, B, C, D}, with the seller having an initial allocation of ABCD. Changes in values for trades: seller buyer1 buyer 2 buyer 3 buyer 4 OR(−A, −B, AND(+A, XOR(+A, AND(+C, XOR(+C, −C, −D) +B) +B) +D) +D) 0 +6 +4 +3 +2 The OR indicates that the seller is willing to sell any number of goods. The XOR indicates that buyers 2 and 4 are willing to buy at most one of the two goods in which they are interested. The efficient trade is for bundle AB to go to buyer 1 and bundle CD to buyer 3, denoted λ∗ = ([−1, −1, −1, −1], [+1, +1, 0, 0], [0, 0, 0, 0], [0, 0, +1, +1], [0, 0, 0, 0]). 2,2 +A +10 +B +10 BUYER 2,2 -A -5 -B -5 SELLER Example 1: Example 3: 2,2 +C +D BUYER 2 2,2 +A +B BUYER 1 +11 +84,4 -B SELLER -A -C -D Example 2: 1,1 +A +B BUYER 2 2,2 +A +B BUYER 1 +6 +40,4 -B SELLER -C -D-A 1,1 +C +D BUYER 4 2,2 +C +D +3 +2 BUYER 3 -18 Figure 2: Example Bid Trees. Example 3. 
One seller and two buyers, four goods {A, B, C, D}, with the seller having an initial allocation of ABCD. Changes in values for trades: seller buyer1 buyer 2 AND(−A, −B, −C, −D) AND(+A, +B) AND(+C, +D) -18 +11 +8 The efficient trade is for bundle AB to go to buyer 1 and bundle CD to go to buyer 2, denoted λ∗ = ([−1, −1, −1, −1], [+1, +1, 0, 0], [0, 0, +1, +1]). 4. A ONE-SHOT EXCHANGE DESIGN The description of ICE is broken down into two sections: one-shot (sealed-bid) and iterative. In this section we abstract away the iterative aspect and introduce a specialization of the tree-based language that supports only exact values on nodes. 4.1 Tree-Based Bidding Language The bidding language is designed to be expressive and concise, entirely symmetric with respect to buyers and sellers, and to extend to capture bids from mixed buyers and sellers, ranging from simple swaps to highly complex trades. Bids are expressed as annotated bid trees, and define a bidder``s value for all possible trades. The language defines changes in values on trades, with leaves annotated with traded items and nodes annotated with changes in values (either positive or negative). The main feature is that it has a general interval-choose logical operator on internal nodes, and that it defines careful semantics for propagating values within the tree. We illustrate the language on each of Examples 1-3 in Figure 2. The language has a tree structure, with trades on items defined on leaves and values annotated on nodes and leaves. The nodes have zero values where no value is indicated. Internal nodes are also labeled with interval-choose (IC) ranges. Given a trade, the semantics of the language define which nodes in the tree can be satisfied, or switched-on. First, if a child is on then its parent must be on. Second, if a parent node is on, then the number of children that are on must be within the IC range on the parent node. Finally, leaves in which the bidder is buying items can only be on if the items are provided in the trade. For instance, in Example 2 we can consider the efficient trade, and observe that in this trade all nodes in the trees of buyers 1 and 3 (and also the seller), but none of the nodes in the trees of buyers 2 and 4, can be on. On the other hand, in 251 the trade in which A goes to buyer 2 and D to buyer 4, then the root and appropriate leaf nodes can be on for buyers 2 and 4, but no nodes can be on for buyers 1 and 3. Given a trade there is often a number of ways to choose the set of satisfied nodes. The semantics of the language require that the nodes that maximize the summed value across satisfied nodes be activated. Consider bid tree Ti from bidder i. This defines nodes β ∈ Ti, of which some are leaves, Leaf (i) ⊆ Ti. Let Child(β) ⊆ Ti denote the children of a node β (that is not itself a leaf). All nodes except leaves are labeled with the interval-choose operator [IC x i (β), ICy i (β)]. Every node is also labeled with a value, viβ ∈ ¡ . Each leaf β is labeled with a trade, qiβ ∈ m (i.e., leaves can define a bundled trade on more than one type of item.) Given a trade λi to bidder i, the interval-choose operators and trades on leaves define which nodes can be satisfied. There will often be a choice. Ties are broken to maximize value. Let satiβ ∈ {0, 1} denote whether node β is satisfied. 
Solution sati is valid given tree Ti and trade λi, written sati ∈ valid(Ti, λi), if and only if: cents β∈Leaf (i) qiβj · satiβ ≤ λij , ∀i, ∀j (4) ICx i (β)satiβ ≤ cents β ∈Child(β) satiβ ≤ ICy i (β)satiβ, ∀β /∈ Leaf (i) (5) In words, a set of leaves can only be considered satisfied given trade λi if the total increase in quantity summed across all such leaves is covered by the trade, for all goods (Eq. 4). This works for sellers as well as buyers: for sellers a trade is negative and this requires that the total number of items indicated sold in the tree is at least the total number sold as defined in the trade. We also need upwards-propagation: any time a node other than the root is satisfied then its parent must be satisfied (by β ∈Child(β) satiβ ≤ ICy i (β)satiβ in Eq. 5). Finally, we need downwards-propagation: any time an internal node is satisfied then the appropriate number of children must also be satisfied (Eq. 5). The total value of trade λi, given bid-tree Ti, is defined as: vi(Ti, λi) = max sat∈valid(Ti,λi) cents β∈T vβ · satβ (6) The tree-based language generalizes existing languages. For instance: IC(2, 2) on a node with 2 children is equivalent to an AND operator; IC(1, 3) on a node with 3 children is equivalent to an OR operator; and IC(1, 1) on a node with 2 children is equivalent to an XOR operator. Similarly, the XOR/OR bidding languages can be directly expressed as a bid tree in our language.2 4.2 Winner Determination This section defines the winner determination problem, which is formulated as a MIP and solved in our implementation with a commercial solver.3 The solver uses branchand-bound search with dynamic cut generation and branching heuristics to solve large MIPs in economically feasible run times. 2 The OR* language is the OR language with dummy items to provide additional structure. OR* is known to be expressive and concise. However, it is not known whether OR* dominates XOR/OR in terms of conciseness [23]. 3 CPLEX, www.ilog.com In defining the MIP representation we are careful to avoid an XOR-based enumeration of all bundles. A variation on the WD problem is reused many times within the exchange, e.g. for column generation in pricing and for checking revealed preference. Given bid trees T = (T1, ... , Tn) and initial allocation x0 , the mixed-integer formulation for WD is: WD(T, x0 ) : max λ,sat cents i cents β∈Ti viβ · satiβ s.t. (1), (2), satiβ ∈ {0, 1}, λij ∈ sati ∈ valid(Ti, λi), ∀i Some goods may go unassigned because free disposal is allowed within the clearing rules of winner determination. These items can be allocated back to agents that sold the items, i.e. for which λij < 0. 4.3 Computing Threshold Payments The Threshold payment rule is based on the payments in the Vickrey-Clarke-Groves (VCG) mechanism [15], which itself is truthful and efficient but does not satisfy budget balance. Budget-balance requires that the total payments to the exchange are equal to the total payments made by the exchange. In VCG, the payment paid by agent i is pvcg,i = ˆv(λ∗ i ) − (V ∗ − V−i) (7) where λ∗ is the efficient trade, V ∗ is the reported value of this trade, and V−i is the reported value of the efficient trade that would be implemented without bidder i. We call ∆vcg,i = V ∗ − V−i the VCG discount. For instance, in Example 1 pvcg,seller = −10 − (+10 − 0) = −20 and pvcg,buyer = +20 − (+10 − 0) = 10, and the exchange would run at a budget deficit of −20 + 10 = −10. 
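The VCG arithmetic quoted for Example 1 can be checked mechanically. The sketch below enumerates the two feasible trades by brute force instead of solving the WD MIP of Section 4.2, and writes the valuations directly as functions on trades rather than as bid trees; both simplifications, and all identifier names, are for illustration only.

```python
# Brute-force check of the VCG payments for Example 1. Enumeration replaces
# the WD MIP, and direct valuation functions replace the bid trees.

# A trade maps each agent to the bundle change it makes.
NO_TRADE = {"seller": (), "buyer": ()}
SWAP_AB  = {"seller": ("-A", "-B"), "buyer": ("+A", "+B")}
TRADES   = [NO_TRADE, SWAP_AB]

VALUES = {
    "seller": lambda t: -10 if t == ("-A", "-B") else 0,   # AND(-A, -B): -10
    "buyer":  lambda t: +20 if t == ("+A", "+B") else 0,   # AND(+A, +B): +20
}

def best_value(trades, agents):
    return max(sum(VALUES[a](t[a]) for a in agents) for t in trades)

agents    = list(VALUES)
V_star    = best_value(TRADES, agents)                      # +10
efficient = max(TRADES, key=lambda t: sum(VALUES[a](t[a]) for a in agents))

payments = {}
for i in agents:
    others      = [a for a in agents if a != i]
    without_i   = [t for t in TRADES if t[i] == ()]         # i does not trade
    V_minus_i   = best_value(without_i, others)             # 0 in both cases
    discount    = V_star - V_minus_i                        # VCG discount (Eq. 7)
    payments[i] = VALUES[i](efficient[i]) - discount

print(payments)                  # {'seller': -20, 'buyer': 10}
print(sum(payments.values()))    # -10: the budget deficit noted above
```

The deficit of 10 is what rules out running VCG directly and motivates the budget-balanced Threshold payments described next.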
The Threshold payment rule [24] determines budgetbalanced payments to minimize the maximal error across all agents to the VCG outcome. Definition 2. The Threshold payment scheme implements the efficient trade λ∗ given bids, and sets payments pthresh,i = ˆvi(λ∗ i ) − ∆i, where ∆ = (∆1, ... , ∆n) is set to minimize maxi(∆vcg,i − ∆i) subject to ∆i ≤ ∆vcg,i and i ∆i ≤ V ∗ (this gives budget-balance). Example 4. In Example 2, the VCG discounts are (9, 2, 0, 1, 0) to the seller and four buyers respectively, VCG payments are (−9, 4, 0, 2, 0) and the exchange runs at a deficit of -3. In Threshold, the discounts are (8, 1, 0, 0, 0) and the payments are (−8, 5, 0, 3, 0). This minimizes the worst-case error to VCG discounts across all budget-balanced payment schemes. Threshold payments are designed to minimize the maximal ex post incentive to manipulate. Krych [16] confirmed that Threshold promotes allocative efficiency in restricted and approximate Bayes-Nash equilibrium. 5. THE ICE DESIGN We are now ready to introduce the iterative combinatorial exchange (ICE) design. Several new components are introduced, relative to the design for the one-shot exchange. Rather than provide precise valuations, bidders can provide lower and upper valuations and revise this bid information across rounds. The exchange provides price-based feedback 252 to guide bidders in this process, and terminates with an efficient (or approximately-efficient) trade with respect to reported valuations. In each round t ∈ {0, 1, ...} the current lower and upper bounds, vt and vt , are used to define a provisional valuation profile vα (the α-valuation), together with a provisional trade λt and provisional prices pt = (pt 1, ... , pt m) on items. The α-valuation is a linear combination of the current upper and lower valuations, with αEFF ∈ [0, 1] chosen endogenously based on the closeness of the optimistic trade (at v) and the pessimistic trade (at v). Prices pt are used to inform an activity rule, and drive progress towards an efficient trade. 5.1 Upper and Lower Valuations The bidding language is extended to allow a bidder i to report a lower and upper value (viβ, viβ) on each node. These take the place of the exact value viβ defined in Section 4.1. Based on these labels, we can define the valuation functions vi(Ti, λi) and vi(Ti, λi), using the exact same semantics as in Eq. (6). We say that such a bid-tree is well-formed if viβ ≤ viβ for all nodes. The following lemma is useful: Lemma 1. Given a well-formed tree, T, then vi(Ti, λi) ≤ vi(Ti, λi) for all trades. Proof. Suppose there is some λi for which vi(Ti, λi) > vi(Ti, λi). Then, maxsat∈valid(Ti,λi) β∈Ti viβ · satβ > maxsat∈valid(Ti,λi) β∈Ti viβ · satβ. But, this is a contradiction because the trade λ that defines vi(Ti, λi) is still feasible with upper bounds vi, and viβ ≥ viβ for all nodes β in a well-formed tree. 5.2 Price Feedback In each round, approximate competitive-equilibrium (CE) prices, pt = (pt 1, ... , pt m), are determined. Given these provisional prices, the price on trade λi for bidder i is pt (λi) = j≤m pt j · λij. Definition 3 (CE prices). Prices p∗ are competitive equilibrium prices if the efficient trade λ∗ is supported at prices p∗ , so that for each bidder: λ∗ i ∈ arg max λ∈Feas(x0) {vi(λi) − p∗ (λi)} (8) CE prices will not always exist and we will often need to compute approximate prices [5]. We extend ideas due to Rassenti et al. [26], Kwasnica et al. [17] and Dunford et al. [12], and select approximate prices as follows: I: Accuracy. 
First, we compute prices that minimize the maximal error in the best-response constraints across all bidders. II: Fairness. Second, we break ties to prefer prices that minimize the maximal deviation from Threshold payments across all bidders. III: Balance. Third, we break ties to prefer prices that minimize the maximal price across all items. Taken together, these steps are designed to promote the informativeness of the prices in driving progress across rounds. In computing prices, we explain how to compute approximate (or otherwise) prices for structured bidding languages, and without enumerating all possible trades. For this, we adopt constraint generation to efficient handle an exponential number of constraints. Each step is described in detail below. I: Accuracy. We adopt a definition of price accuracy that generalizes the notions adopted in previous papers for unstructured bidding languages. Let λt denote the current provisional trade and suppose the provisional valuation is vα . To compute accurate CE prices, we consider: min p,δ δ (9) s.t. vα i (λ) − p(λ) ≤ vα i (λt i) − p(λt i) + δ, ∀i, ∀λ (10) δ ≥ 0,pj ≥ 0, ∀j. This linear program (LP) is designed to find prices that minimize the worst-case error across all agents. From the definition of CE prices, it follows that CE prices would have δ = 0 as a solution to (9), at which point trade λt i would be in the best-response set of every agent (with λt i = ∅, i.e. no trade, for all agents with no surplus for trade at the prices.) Example 5. We can illustrate the formulation (9) on Example 2, assuming for simplicity that vα = v (i.e. truth). The efficient trade allocates AB to buyer 1 and CD to buyer 3. Accuracy will seek prices p(A), p(B), p(C) and p(D) to minimize the δ ≥ 0 required to satisfy constraints: p(A) + p(B) + p(C) + p(D) ≥ 0 (seller) p(A) + p(B) ≤ 6 + δ (buyer 1) p(A) + δ ≥ 4, p(B) + δ ≥ 4 (buyer 2) p(C) + p(D) ≤ 3 (buyer 3) p(C) + δ ≥ 2, p(D) + δ ≥ 2 (buyer 4) An optimal solution requires p(A) = p(B) = 10/3, with δ = 2/3, with p(C) and p(D) taking values such as p(C) = p(D) = 3/2. But, (9) has an exponential number of constraints (Eq. 10). Rather than solve it explicitly we use constraint generation [4] and dynamically generate a sufficient subset of constraints. Let i denote a manageable subset of all possible feasible trades to bidder i. Then, a relaxed version of (9) (written ACC) is formulated by substituting (10) with vα i (λ) − p(λ) ≤ vα i (λt i) − p(λt i) + δ, ∀i, ∀λ ∈ i , (11) where i is a set of trades that are feasible for bidder i given the other bids. Fixing the prices p∗ , we then solve n subproblems (one for each bidder), max λ vα i (λi) − p∗ (λi) [R-WD(i)] s.t. λ ∈ Feas(x0 ), (12) to check whether solution (p∗ , δ∗ ) to ACC is feasible in problem (9). In R-WD(i) the objective is to determine a most preferred trade for each bidder at these prices. Let ˆλi denote the solution to R-WD(i). Check condition: vα i (ˆλi) − p∗ (ˆλ) ≤ vα i (λt i) − p∗ (λt i) + δ∗ , (13) and if this condition holds for all bidders i, then solution (p∗ , δ∗ ) is optimal for problem (9). Otherwise, trade ˆλi is added to i for all bidders i for which this constraint is 253 violated and we re-solve the LP with the new set of constraints.4 II: Fairness. Second, we break remaining ties to prefer fair prices: choosing prices that minimize the worst-case error with respect to Threshold payoffs (i.e. utility to bidders with Threshold payments), but without choosing prices that are less accurate.5 Example 6. 
For example, accuracy in Example 1 (depicted in Figure 1) requires 12 ≤ pA +pB ≤ 16 (for vα = v). At these valuations the Threshold payoffs would be 2 to both the seller and the buyer. This can be exactly achieved in pricing with pA + pB = 14. The fairness tie-breaking method is formulated as the following LP: min p,π π [FAIR] s.t. vα i (λ) − p(λ) ≤ vα i (λt i) − p(λt i) + δ∗ i , ∀i, ∀λ ∈ i (14) π ≥ πvcg,i − (vα i (λt i) − p(λt i)), ∀i (15) π ≥ 0,pj ≥ 0, ∀j, where δ∗ represents the error in the optimal solution, from ACC. The objective here is the same as in the Threshold payment rule (see Section 4.3): minimize the maximal error between bidder payoff (at vα ) for the provisional trade and the VCG payoff (at vα ). Problem FAIR is also solved through constraint generation, using R-WD(i) to add additional violated constraints as necessary. III: Balance. Third, we break remaining ties to prefer balanced prices: choosing prices that minimize the maximal price across all items. Returning again to Example 1, depicted in Figure 1, we see that accuracy and fairness require p(A) + p(B) = 14. Finally, balance sets p(A) = p(B) = 7. Balance is justified when, all else being equal, items are more likely to have similar than dissimilar values.6 The LP for balance is formulated as follows: min p,Y Y [BAL] s.t. vα i (λ) − p(λ) ≤ vα i (λt i) − p(λt i) + δ∗ i , ∀i, ∀λ ∈ i (16) π∗ i ≥ πvcg,i − (vα i (λt i) − p(λt i)), ∀i, (17) Y ≥ pj, ∀j (18) Y ≥ 0, pj ≥ 0, ∀j, where δ∗ represents the error in the optimal solution from ACC and π∗ represents the error in the optimal solution from FAIR. Constraint generation is also used to solve BAL, generating new trades for i as necessary. 4 Problem R-WD(i) is a specialization of the WD problem, in which the objective is to maximize the payoff of a single bidder, rather than the total value across all bidders. It is solved as a MIP, by rewriting the objective in WD(T, x0 ) as max{viβ · satiβ − j p∗ j · λij } for agent i. Thus, the structure of the bid-tree language is exploited in generating new constraints, because this is solved as a concise MIP. The other bidders are kept around in the MIP (but do not appear in the objective), and are used to define the space of feasible trades. 5 The methods of Dunford et al. [12], that use a nucleolus approach, are also closely related. 6 The use of balance was advocated by Kwasnica et al. [17]. Dunford et al. [12] prefer to smooth prices across rounds. Comment 1: Lexicographical Refinement. For all three sub-problems we also perform lexicographical refinement (with respect to bidders in ACC and FAIR, and with respect to goods in BAL). For instance, in ACC we successively minimize the maximal error across all bidders. Given an initial solution we first pin down the error on all bidders for whom a constraint (11) is binding. For such a bidder i, the constraint is replaced with vα i (λ) − p(λ) ≤ vα i (λt i) − p(λt i) + δ∗ i , ∀λ ∈ i , (19) and the error to bidder i no longer appears explicitly in the objective. ACC is then re-solved, and makes progress by further minimizing the maximal error across all bidders yet to be pinned down. 
This continues, pinning down any new bidders for whom one of constraints (11) is binding, until the error is lexicographically optimized for all bidders.7 The exact same process is repeated for FAIR and BAL, with bidders pinned down and constraints (15) replaced with π∗ i ≥ πvcg,i − (vα i (λt i) − p(λt i)), ∀λ ∈ i , (where π∗ i is the current objective) in FAIR, and items pinned down and constraints (18) replaced with p∗ j ≥ pj (where p∗ j represents the target for the maximal price on that item) in BAL. Comment 2: Computation. All constraints in i are retained, and this set grows across all stages and across all rounds of the exchange. Thus, the computational effort in constraint generation is re-used. In implementation we are careful to address a number of -issues that arise due to floating-point issues. We prefer to err on the side of being conservative in determining whether or not to add another constraint in performing check (13). This avoids later infeasibility issues. In addition, when pinning-down bidders for the purpose of lexicographical refinement we relax the associated bidder-constraints with a small > 0 on the righthand side. 5.3 Revealed-Preference Activity Rules The role of activity rules in the auction is to ensure both consistency and progress across rounds [21]. Consistency in our exchange requires that bidders tighten bounds as the exchange progresses. Activity rules ensure that bidders are active during early rounds, and promote useful elicitation throughout the exchange. We adopt a simple revealed-preference (RP) activity rule. The idea is loosely based around the RP-rule in Ausubel et al. [1], where it is used for one-sided CAs. The motivation is to require more than simply consistency: we need bidders to provide enough information for the system to be able to to prove that an allocation is (approximately) efficient. It is helpful to think about the bidders interacting with proxy agents that will act on their behalf in responding to provisional prices pt−1 determined at the end of round t − 1. The only knowledge that such a proxy has of the valuation of a bidder is through the bid-tree. Suppose a proxy was queried by the exchange and asked which trade the bidder was most interested in at the provisional prices. The RP rule says the following: the proxy must have enough 7 For example, applying this to accuracy on Example 2 we solve once and find bidders 1 and 2 are binding, for error δ∗ = 2/3. We pin these down and then minimize the error to bidders 3 and 4. Finally, this gives p(A) = p(B) = 10/3 and p(C) = p(D) = 5/3, with accuracy 2/3 to bidders 1 and 2 and 1/3 to bidders 3 and 4. 254 information to be able to determine this surplus-maximizing trade at current prices. Consider the following examples: Example 7. A bidder has XOR(+A, +B) and a value of +5 on the leaf +A and a value range of [5,10] on leaf +B. Suppose prices are currently 3 for each of A and B. The RP rule is satisfied because the proxy knows that however the remaining value uncertainty on +B is resolved the bidder will always (weakly) prefer +B to +A. Example 8. A bidder has XOR(+A, +B) and value bounds [5, 10] on the root node and a value of 1 on leaf +A. Suppose prices are currently 3 for each of A and B. The RP rule is satisfied because the bidder will always prefer +A to +B at equal prices, whichever way the uncertain value on the root node is ultimately resolved. Overloading notation, let vi ∈ Ti denote a valuation that is consistent with lower and upper valuations in bid tree Ti. Definition 4. 
Bid tree Ti satisfies RP at prices pt−1 if and only if there exists some feasible trade L∗ for which, vi(L∗ i ) − pt−1 (L∗ i ) ≥ max λ∈Feas(x0) vi(λi) − pt−1 (λi), ∀vi ∈ Ti. (20) To make this determination for bidder i we solve a sequence of problems, each of which is a variation on the WD problem. First, we construct a candidate lower-bound trade, which is a feasible trade that solves: max λ vi(λi) − pt−1 (λi) [RP1(i)] s.t. λ ∈ Feas(x0 ), (21) The solution π∗ l to RP1(i) represents the maximal payoff that bidder i can achieve across all feasible trades, given its pessimistic valuation. Second, we break ties to find a trade with maximal value uncertainty across all possible solutions to RP1(i): max λ vi(λi) − vi(λi) [RP2(i)] s.t. λ ∈ Feas(x0 ) (22) vi(λi) − pt−1 (λi) ≥ π∗ l (23) We adopt solution L∗ i as our candidate for the trade that may satisfy RP. To understand the importance of this tiebreaking rule consider Example 7. The proxy can prove +B but not +A is a best-response for all vi ∈ Ti, and should choose +B as its candidate. Notice that +B is a counterexample to +A, but not the other way round. Now, we construct a modified valuation ˜vi, by setting ˜viβ = viβ , if β ∈ sat(L∗ i ) viβ , otherwise. (24) where sat(L∗ i ) is the set of nodes that are satisfied in the lower-bound tree for trade L∗ i . Given this modified valuation, we find U∗ to solve: max λ ˜vi(λi) − pt−1 (λi) [RP3(i)] s.t. λ ∈ Feas(x0 ) (25) Let π∗ u denote the payoff from this optimal trade at modified values ˜v. We call trade U∗ i the witness trade. We show in Proposition 1 that the RP rule is satisfied if and only if π∗ l ≥ π∗ u. Constructing the modified valuation as ˜vi recognizes that there is shared uncertainty across trades that satisfy the same nodes in a bid tree. Example 8 helps to illustrate this. Just using vi in RP3(i), we would find L∗ i is buy A with payoff π∗ l = 3 but then find U∗ i is buy B with π∗ u = 7 and fail RP. We must recognize that however the uncertainty on the root node is resolved it will affect +A and +B in exactly the same way. For this reason, we set ˜viβ = viβ = 5 on the root node, which is exactly the same value that was adopted in determining π∗ l . Then, RP3(i) applied to U∗ i gives buy A and the RP test is judged to be passed. Proposition 1. Bid tree Ti satisfies RP given prices pt−1 if and only if any lower-bound trade L∗ i that solves RP1(i) and RP2(i) satisfies: vi(Ti, L∗ i ) − pt−1 (L∗ i ) ≥ ˜vi(Ti, U∗ i ) − pt−1 (U∗ i ), (26) where ˜vi is the modified valuation in Eq. (24). Proof. For sufficiency, notice that the difference in payoff between trade L∗ i and another trade λi is unaffected by the way uncertainty is resolved on any node that is satisfied in both L∗ i and λi. Fixing the values in ˜vi on nodes satisfied in L∗ i has the effect of removing this consideration when a trade U∗ i is selected that satisfies one of these nodes. On the other hand, fixing the values on these nodes has no effect on trades considered in RP3(i) that do not share a node with L∗ i . For the necessary direction, we first show that any trade that satisfies RP must solve RP1(i). Suppose otherwise, that some λi with payoff greater than π∗ l satisfies RP. But, valuation vi ∈ Ti together with L∗ i presents a counterexample to RP (Eq. 20). Now, suppose (for contradiction) that some λi with maximal payoff π∗ l but uncertainty less than L∗ i satisfies RP. Proceed by case analysis. Case a): only one solution to RP1(i) has uncertain value and so λi has certain value. 
Proposition 1. Bid tree T_i satisfies RP given prices p^{t-1} if and only if any lower-bound trade L*_i that solves RP1(i) and RP2(i) satisfies:

$\underline{v}_i(T_i, L^*_i) - p^{t-1}(L^*_i) \;\geq\; \tilde{v}_i(T_i, U^*_i) - p^{t-1}(U^*_i), \qquad (26)$

where $\tilde{v}_i$ is the modified valuation in Eq. (24).

Proof. For sufficiency, notice that the difference in payoff between trade L*_i and another trade λ_i is unaffected by the way uncertainty is resolved on any node that is satisfied in both L*_i and λ_i. Fixing the values in $\tilde{v}_i$ on nodes satisfied in L*_i has the effect of removing this consideration when a trade U*_i is selected that satisfies one of these nodes. On the other hand, fixing the values on these nodes has no effect on trades considered in RP3(i) that do not share a node with L*_i. For the necessary direction, we first show that any trade that satisfies RP must solve RP1(i). Suppose otherwise, that some λ_i that does not solve RP1(i), and so has payoff less than π*_l at the pessimistic valuation, satisfies RP. But then valuation $\underline{v}_i \in T_i$ together with L*_i presents a counterexample to RP (Eq. 20). Now, suppose (for contradiction) that some λ_i with maximal payoff π*_l but uncertainty less than L*_i satisfies RP. Proceed by case analysis. Case a): only one solution to RP1(i) has uncertain value, and so λ_i has certain value. But, this cannot satisfy RP because L*_i with uncertain value would be a counterexample to RP (Eq. 20). Case b): two or more solutions to RP1(i) have uncertain value. Here, we first argue that one of these trades must satisfy a (weak) superset of all the nodes with uncertain value that are satisfied by all other trades in this set. This is by RP. Without this, then for any choice of trade that solves RP1(i), there is another trade with a disjoint set of uncertain but satisfied nodes that provides a counterexample to RP (Eq. 20). Now, consider the case that some trade contains a superset of all the uncertain satisfied nodes of the other trades. Clearly RP2(i) will choose this trade, L*_i, and λ_i must satisfy a subset of these nodes (by assumption). But, we now see that λ_i cannot satisfy RP because L*_i would be a counterexample to RP.

Failure to meet the activity rule must have some consequence. In the current rules, the default action we choose is to set the upper bounds in valuations down to the maximum of the provisional price on a node (the provisional price on a node is defined as the minimal total price across all feasible trades for which the subtree rooted at that node is satisfied) and the lower-bound value on that node. This is entirely analogous to when a bidder in an ascending clock auction stops bidding at a price: she is not permitted to bid at a higher price again in future rounds. Such a bidder can remain active within the exchange, but only with valuations that are consistent with these new bounds.

5.4 Bidder Feedback

In each round, our default design provides every bidder with the provisional trade and also with the current provisional prices. See Section 7 for additional discussion. We also provide guidance to help a bidder meet the RP rule. Let sat(L*_i) and sat(U*_i) denote the nodes that are satisfied in trades L*_i and U*_i, as computed in RP1-RP3.

Lemma 2. When RP fails, a bidder must increase a lower bound on at least one node in sat(L*_i) \ sat(U*_i) or decrease an upper bound on at least one node in sat(U*_i) \ sat(L*_i) in order to meet the activity rule.

Proof. Changing the upper or lower values on nodes that are not satisfied by either trade does not change L*_i or U*_i, and does not change the payoff from these trades. Thus, the RP condition will continue to fail. Similarly, changing the bounds on nodes that are satisfied in both trades has no effect on revealed preference. A change to a lower bound on a shared node affects both L*_i and U*_i identically because of the use of the modified valuation to determine U*_i. A change to an upper bound on a shared node has no effect in determining either L*_i or U*_i.

Note that when sat(U*_i) = sat(L*_i), condition (26) is always trivially satisfied, and so the guidance in the lemma is always well-defined when RP fails. This is an elegant feedback mechanism because it is adaptive. Once a bidder makes some changes on some subset of these nodes, the bidder can query the exchange. The exchange can then respond "yes," or can revise the sets sat(L*_i) and sat(U*_i) as necessary.
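The guidance of Lemma 2 reduces to two set differences over the satisfied-node sets returned by the RP check. The snippet below is a small sketch of that feedback step, with illustrative node names; it is not the exchange's actual feedback code.

```java
import java.util.*;

// Sketch of the Lemma 2 guidance: which bounds a bidder could move to make progress on RP.
public class RPGuidanceSketch {
    public static void main(String[] args) {
        // Suppose the RP check returned these satisfied-node sets (illustrative values).
        Set<String> satL = new HashSet<>(Set.of("root", "+A"));   // sat(L*_i)
        Set<String> satU = new HashSet<>(Set.of("root", "+B"));   // sat(U*_i)

        Set<String> raiseLower = new HashSet<>(satL);
        raiseLower.removeAll(satU);                               // sat(L*_i) \ sat(U*_i)
        Set<String> cutUpper = new HashSet<>(satU);
        cutUpper.removeAll(satL);                                 // sat(U*_i) \ sat(L*_i)

        if (raiseLower.isEmpty() && cutUpper.isEmpty()) {
            System.out.println("sat(U*) = sat(L*): condition (26) holds trivially, RP is met.");
        } else {
            System.out.println("To meet RP: raise a lower bound on one of " + raiseLower
                    + " or decrease an upper bound on one of " + cutUpper + ".");
        }
    }
}
```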
5.5 Termination Conditions

Once each bidder has committed its new bids (and either met the RP rule or suffered the penalty) then round t closes. At this point, the task is to determine the new α-valuation, and in turn the provisional allocation λ^t and provisional prices p^t. A termination condition is also checked, to determine whether to move the exchange to a last-and-final round.

To define the α-valuation we compute the following two quantities:

Pessimistic at Pessimistic (PP): Determine an efficient trade, λ*_l, at pessimistic values, i.e. to solve $\max_{\lambda} \sum_i \underline{v}_i(\lambda_i)$, and set $PP = \sum_i \underline{v}_i(\lambda^*_{l,i})$.

Pessimistic at Optimistic (PO): Determine an efficient trade, λ*_u, at optimistic values, i.e. to solve $\max_{\lambda} \sum_i \overline{v}_i(\lambda_i)$, and set $PO = \sum_i \underline{v}_i(\lambda^*_{u,i})$.

First, note that PP ≥ PO and PP ≥ 0 by definition, for all bid-trees, although PO can be negative (because the right trade at $\overline{v}$ is not currently a useful trade at $\underline{v}$). Recognizing this, define

$\gamma_{eff}(PP, PO) = 1 + \frac{PP - PO}{PP}, \qquad (27)$

when PP > 0, and observe that $\gamma_{eff}(PP, PO) \geq 1$ when this is defined, and that $\gamma_{eff}(PP, PO)$ will start large and then trend towards 1 as the optimistic allocation converges towards the pessimistic allocation. In each round, we define $\alpha_{eff} \in [0, 1]$ as:

$\alpha_{eff} = \begin{cases} 0, & \textrm{when } PP = 0 \\ 1/\gamma_{eff}, & \textrm{otherwise,} \end{cases} \qquad (28)$

which is 0 while PP is 0 and then trends towards 1 once PP > 0 in some round. This is used to define the α-valuation

$v^{\alpha}_i = \alpha_{eff}\,\underline{v}_i + (1 - \alpha_{eff})\,\overline{v}_i, \quad \forall i, \qquad (29)$

which is used to define the provisional allocation and provisional prices. The effect is to endogenously define a schedule for moving from optimistic to pessimistic values across rounds, based on how close the trades are to one another.

Termination Condition. In moving to the last-and-final round, and finally closing, we also care about the convergence of payments, in addition to the convergence towards an efficient trade. For this we introduce another parameter, $\alpha_{thresh} \in [0, 1]$, that trends from 0 to 1 as the Threshold payments at lower and upper valuations converge. Consider the following parameter:

$\gamma_{thresh} = 1 + \frac{\| p_{thresh}(\overline{v}) - p_{thresh}(\underline{v}) \|_2}{(PP / N_{active})}, \qquad (30)$

which is defined for PP > 0, where $p_{thresh}(v)$ denotes the Threshold payments at valuation profile v, $N_{active}$ is the number of bidders that are actively engaged in trade in the PP trade, and $\|\cdot\|_2$ is the L2-norm. Note that $\gamma_{thresh}$ is defined for payments and not payoffs. This is appropriate because it is the accuracy of the outcome of the exchange that matters: i.e. the trade and the payments. Given this, we define

$\alpha_{thresh} = \begin{cases} 0, & \textrm{when } PP = 0 \\ 1/\gamma_{thresh}, & \textrm{otherwise,} \end{cases} \qquad (31)$

which is 0 while PP is 0 and then trends towards 1 as progress is made.

Definition 5 (termination). ICE transitions to a last-and-final round when one of the following holds:
1. $\alpha_{eff} \geq CUTOFF_{eff}$ and $\alpha_{thresh} \geq CUTOFF_{thresh}$,
2. there is no trade at the optimistic values,
where $CUTOFF_{eff}, CUTOFF_{thresh} \in (0, 1]$ determine the accuracy required for termination. At the end of the last-and-final round $v^{\alpha} = \underline{v}$ is used to define the final trade and the final Threshold payments.

Example 9. Consider again Example 1, and consider the upper and lower bounds as depicted in Figure 1. First, if the seller's bounds were [−20, −4] then there is an optimistic trade but no pessimistic trade, with PO = −4, PP = 0, and α_eff = 0. At the bounds depicted, both the optimistic and the pessimistic trades occur and PO = PP = 4 and α_eff = 1. However, we can see that the Threshold payments are (17, −17) at $\overline{v}$ but (14, −14) at $\underline{v}$. Evaluating $\gamma_{thresh}$, we have $\gamma_{thresh} = 1 + \frac{\sqrt{\frac{1}{2}(3^2 + 3^2)}}{(4/2)} = 5/2$, and $\alpha_{thresh} = 2/5$. For $CUTOFF_{thresh} < 2/5$ the exchange would remain open. On the other hand, if the buyer's value for +AB was between [18, 24] and the seller's value for −AB was between [−12, −6], the Threshold payments are (15, −15) at both upper and lower bounds, and $\alpha_{thresh} = 1$.
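A minimal sketch of the termination parameters in Eqs. (27)-(31) is given below, assuming the plain L2 distance as written in Eq. (30) and treating PP, PO, N_active and the Threshold payment vectors as inputs that the exchange has already computed; the method names are illustrative. The values in main correspond to the first case of Example 9 (no pessimistic trade), the case PO = PP = 4, and the final variant in which the Threshold payments coincide at both bounds.

```java
// Sketch of the termination parameters of Section 5.5 (Eqs. 27-31).
// Method names are illustrative; the exchange computes PP, PO and the Threshold
// payments by solving the corresponding WD and payment problems.
public class TerminationParams {

    static double alphaEff(double pp, double po) {
        if (pp <= 0) return 0.0;                        // Eq. (28): alpha_eff = 0 while PP = 0
        double gammaEff = 1.0 + (pp - po) / pp;         // Eq. (27)
        return 1.0 / gammaEff;
    }

    static double alphaThresh(double[] payLower, double[] payUpper, double pp, int nActive) {
        if (pp <= 0) return 0.0;                        // Eq. (31): alpha_thresh = 0 while PP = 0
        double sumSq = 0.0;
        for (int i = 0; i < payLower.length; i++) {
            double d = payUpper[i] - payLower[i];
            sumSq += d * d;
        }
        double l2 = Math.sqrt(sumSq);                   // || p_thresh(upper) - p_thresh(lower) ||_2
        double gammaThresh = 1.0 + l2 / (pp / nActive); // Eq. (30)
        return 1.0 / gammaThresh;
    }

    public static void main(String[] args) {
        // Example 9, first case: optimistic trade exists but no pessimistic trade.
        System.out.println(alphaEff(0, -4));            // prints 0.0
        // Example 9, at the depicted bounds: PO = PP = 4.
        System.out.println(alphaEff(4, 4));             // prints 1.0
        // Example 9, last variant: Threshold payments (15, -15) at both bounds.
        System.out.println(alphaThresh(new double[]{15, -15}, new double[]{15, -15}, 4, 2)); // 1.0
    }
}
```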
6. SYSTEMS INFRASTRUCTURE

ICE is approximately 6502 lines of Java code, broken up into the functional packages described in Table 1. (Code size is measured in physical source lines of code (SLOC), as generated using David A. Wheeler's SLOCCount. The total of 6502 includes 184 lines for instrumentation, not shown in the table. The JOpt solver interface is another 1964 lines, and Castor automatically generates around 5200 lines of code for XML file manipulation.)

Component | Purpose | Lines
Agent | Captures strategic behavior and information revelation decisions | 762
Model Support | Provides XML support to load goods and valuations into world | 200
World | Keeps track of all agent, good, and valuation details | 998
Exchange Driver & Communication | Controls exchange, and coordinates remote agent behavior | 585
Bidding Language | Implements the tree-based bidding language | 1119
Activity Rule Engine | Implements the revealed preference rule with range support | 203
Closing Rule Engine | Checks if auction termination condition reached | 137
WD Engine | Provides WD-related logic | 377
Pricing Engine | Provides Pricing-related logic | 460
MIP Builders | Translates logic used by engines into our general optimizer formulation | 346
Pricing Builders | Used by three pricing stages | 256
Winner Determination Builders | Used by WD, activity rule, closing rule, and pricing constraint generation | 365
Framework | Support code; eases modular replacement of above components | 510
Table 1: Exchange Component and Code Breakdown.

The prototype is modular so that researchers may easily replace components for experimentation. In addition to the core exchange discussed in this paper, we have developed an agent component that allows a user to simulate the behavior and knowledge of other players in the system, better allowing a user to formulate their strategy in advance of actual play. A user specifies a valuation model in an XML interpretation of our bidding language, which is revealed to the exchange via the agent's strategy.

Major exchange tasks are handled by engines that dictate the non-optimizer-specific logic. These engines drive the appropriate MIP/LP builders. We realized that all of our optimization formulations boil down to two classes of optimization problem. The first, used by winner determination, the activity rule, the closing rule, and constraint generation in pricing, is a MIP that finds trades that maximize value, holding prices and slacks constant. The second, used by the three pricing stages, is an LP that holds trades constant, seeking to minimize slack, profit, or prices. We take advantage of the commonality of these problems by using common LP/MIP builders that differ only by a few functional hooks to provide the correct variables for optimization. We have generalized our back-end optimization solver interface (JOpt, http://econcs.eecs.harvard.edu/jopt); we currently support CPLEX and the LGPL-licensed LPSolve, and can take advantage of the load-balancing and parallel MIP/LP solving capability that this library provides.
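The division into one MIP family and one LP family can be sketched with a small hook interface, as below. The interface and class names are hypothetical (the paper does not list the actual ICE class names), and no real solver is invoked; the point is only the shape of the shared-builder design.

```java
// Hypothetical sketch of the "common builder + functional hooks" structure described above.
// These are not the actual ICE class names, and no real MIP/LP solver is invoked.
public class BuilderSketch {

    // A hook tells the shared builder which quantities are decision variables
    // and what the objective is for this use of the formulation.
    interface ObjectiveHook {
        String describe();
        boolean tradesAreVariables();   // true for the MIP family, false for the LP (pricing) family
    }

    // MIP family: WD, activity rule, closing rule, pricing constraint generation.
    static class ValueMaximizingHook implements ObjectiveHook {
        public String describe() { return "maximize value; prices and slacks held constant"; }
        public boolean tradesAreVariables() { return true; }
    }

    // LP family: the three pricing stages (accuracy, fairness, balance).
    static class SlackMinimizingHook implements ObjectiveHook {
        public String describe() { return "minimize slack/profit/prices; trades held constant"; }
        public boolean tradesAreVariables() { return false; }
    }

    // The shared builder assembles the bid-tree constraints once and defers to the hook.
    static void build(ObjectiveHook hook) {
        System.out.println("building " + (hook.tradesAreVariables() ? "MIP" : "LP")
                + ": " + hook.describe());
        // ... here the real builder would add bid-tree satisfaction and feasibility constraints.
    }

    public static void main(String[] args) {
        build(new ValueMaximizingHook());
        build(new SlackMinimizingHook());
    }
}
```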
7. DISCUSSION

The bidding language was defined to allow for perfect symmetry between buyers and sellers and to provide expressiveness in an exchange domain, for instance for mixed bidders interested in executing trades such as swaps. This proved especially challenging. The breakthrough came when we focused on changes in value for trades rather than providing absolute values for allocations. For simplicity, we require the same tree structure for both the upper and lower valuations. This allows the language itself to ensure consistency (with the upper value at least the lower value on all trades) and to enforce monotonic tightening of these bounds for all trades across rounds. It also provides for an efficient method to check the RP activity rule, because it makes it simple to reason about shared uncertainty between trades.

The decision to adopt a direct and proxied approach, in which bidders express their upper and lower values to a trusted proxy agent that interacts with the exchange, was made early in the design process. In many ways this is the clearest and most immediate way to generalize the design in Parkes et al. [24] and make it iterative. In addition, this removes much opportunity for strategic manipulation: bidders are restricted to making (incremental) statements about their valuations. Another advantage is that it makes the activity rule easy to explain: bidders can always meet the activity rule by tightening bounds such that their true value remains in the support. (This is in contrast to indirect price-based approaches, such as clock-proxy [1], in which bidders must be able to reason about the RP-constraints implied by bids in each round.) Perhaps most importantly, having explicit information on upper and lower values permits progress in early rounds, even while there is no efficient trade at pessimistic values.

Upper and lower bound information also provides guidance about when to terminate. Note that taken by itself, PP = PO does not imply that the current provisional trade is efficient with respect to all values consistent with current value information. The difference in values between different trades, aggregated across all bidders, could be similar at lower and upper bounds but quite different at intermediate values (including truth). Nevertheless, we conjecture that PP = PO will prove an excellent indicator of efficiency in practical settings where the shape of the upper and lower valuations does convey useful information. This is worthy of experimental investigation. Moreover, the use of prices and the RP activity rule provides additional guarantees.

We adopted linear prices (prices on individual items) rather than non-linear prices (with the price on a trade not equal to the sum of the prices on the component items) early in the design process. The conciseness of this price representation is very important for computational tractability within the exchange and also promotes simplicity and transparency for bidders. The RP activity rule was adopted later, and is a good choice because of its excellent theoretical properties when coupled with CE prices. The following can be easily established: given exact CE prices p^{t-1} for provisional trade λ^{t-1} at valuations v^α, if the upper and lower values at the start of round t already satisfy the RP rule (and without the need for any tie-breaking), then the provisional trade is efficient for all valuations consistent with the current bid trees. When linear CE prices exist, this provides for a soundness and completeness statement: if PP = PO, linear CE prices exist, and the RP rule is satisfied, then the provisional trade is efficient (soundness); if prices are exact CE prices for the provisional trade at v^α, but the trade is inefficient with respect to some valuation profile consistent with the current bid trees, then at least one bidder must fail RP with her current bid tree and progress will be made (completeness).
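As a small illustration of the conciseness of linear prices, the sketch below prices an arbitrary trade vector as a dot product with the item-price vector and bounds the bidder's payoff for it. The prices and value bounds are illustrative numbers only, not taken from the paper's examples.

```java
// Sketch: with linear prices, the price of any trade is a dot product, so the price
// vector has one entry per item no matter how many trades a bid tree can express.
// Prices and value bounds below are illustrative only.
public class LinearPriceSketch {

    static double priceOfTrade(double[] itemPrices, int[] trade) {
        double p = 0.0;
        for (int j = 0; j < itemPrices.length; j++) {
            p += itemPrices[j] * trade[j];   // positive entries buy, negative entries sell
        }
        return p;
    }

    public static void main(String[] args) {
        double[] prices = {7.0, 7.0};        // p(A), p(B)
        int[] buyAB = {+1, +1};              // buy one unit of each item
        double p = priceOfTrade(prices, buyAB);

        double lowerValue = 12.0, upperValue = 16.0;   // bidder's current bounds for this trade
        System.out.printf("price(+A,+B) = %.1f, payoff in [%.1f, %.1f]%n",
                p, lowerValue - p, upperValue - p);
    }
}
```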
Future work must study convergence experimentally, and extend this theory to allow for approximate prices.

Some strategic aspects of our ICE design deserve comment, and further study. First, we do not claim that truthfully responding to the RP rule is an ex post equilibrium. (Given the Myerson-Satterthwaite impossibility theorem [22] and the method by which we determine the trade, we should not expect this.) However, the exchange is designed to mimic the Threshold rule in its payment scheme, which is known to have useful incentive properties [16]. We must be careful, though. For instance, we do not suggest providing α_eff to bidders, because as α_eff approaches 1 it would inform bidders that bid values are becoming irrelevant to determining the trade and are merely used to determine payments (and bidders would become increasingly reluctant to increase their lower valuations). Also, no consideration has been given in this work to collusion by bidders. This is an issue that deserves some attention in future work.

8. CONCLUSIONS

In this work we designed and prototyped a scalable and highly-expressive iterative combinatorial exchange. The design includes many interesting features, including: a new bid-tree language for exchanges, a new method to construct approximate linear prices from expressive languages, and a proxied elicitation method with optimistic and pessimistic valuations, together with a new method to evaluate a revealed-preference activity rule. The exchange is fully implemented in Java and is in a validation phase. The next steps for our work are to allow bidders to refine the structure of the bid tree in addition to the values on the tree. We intend to study the elicitation properties of the exchange, and we have put together a test suite of exchange problem instances. In addition, we are beginning to engage in collaborations to apply the design to airline takeoff and landing slot scheduling and to resource allocation in wide-area network distributed computational systems.

Acknowledgments

We would like to dedicate this paper to all of the participants in CS 286r at Harvard University in Spring 2004. This work is supported in part by NSF grant IIS-0238147.

9. REFERENCES

[1] L. Ausubel, P. Cramton, and P. Milgrom. The clock-proxy auction: A practical combinatorial auction design. In Cramton et al. [9], chapter 5.
[2] M. Babaioff, N. Nisan, and E. Pavlov. Mechanisms for a spatially distributed market. In Proc. 5th ACM Conf. on Electronic Commerce, pages 9-20. ACM Press, 2001.
[3] M. Ball, G. Donohue, and K. Hoffman. Auctions for the safe, efficient, and equitable allocation of airspace system resources. In P. Cramton, Y. Shoham, and R. Steinberg, editors, Combinatorial Auctions. 2004. Forthcoming.
[4] D. Bertsimas and J. Tsitsiklis. Introduction to Linear Optimization. Athena Scientific, 1997.
[5] S. Bikhchandani and J. M. Ostroy. The package assignment model. Journal of Economic Theory, 107(2):377-406, 2002.
[6] C. Boutilier. A POMDP formulation of preference elicitation problems. In Proc. 18th National Conference on Artificial Intelligence (AAAI-02), 2002.
[7] C. Boutilier and H. Hoos. Bidding languages for combinatorial auctions. In Proc. 17th International Joint Conference on Artificial Intelligence (IJCAI-01), 2001.
[8] W. Conen and T. Sandholm. Preference elicitation in combinatorial auctions. In Proc. 3rd ACM Conf. on Electronic Commerce (EC-01), pages 256-259. ACM Press, New York, 2001.
[9] P. Cramton, Y. Shoham, and R. Steinberg, editors. Combinatorial Auctions. MIT Press, 2004.
[10] S. de Vries, J. Schummer, and R. V. Vohra. On ascending Vickrey auctions for heterogeneous objects. Technical report, MEDS, Kellogg School, Northwestern University, 2003.
[11] S. de Vries and R. V. Vohra. Combinatorial auctions: A survey. INFORMS Journal on Computing, 15(3):284-309, 2003.
[12] M. Dunford, K. Hoffman, D. Menon, R. Sultana, and T. Wilson. Testing linear pricing algorithms for use in ascending combinatorial auctions. Technical report, SEOR, George Mason University, 2003.
[13] Y. Fu, J. Chase, B. Chun, S. Schwab, and A. Vahdat. SHARP: An architecture for secure resource peering. In Proceedings of the 19th ACM Symposium on Operating Systems Principles, pages 133-148. ACM Press, 2003.
[14] B. Hudson and T. Sandholm. Effectiveness of query types and policies for preference elicitation in combinatorial auctions. In Proc. 3rd Int. Joint Conf. on Autonomous Agents and Multi Agent Systems, pages 386-393, 2004.
[15] V. Krishna. Auction Theory. Academic Press, 2002.
[16] D. Krych. Calculation and analysis of Nash equilibria of Vickrey-based payment rules for combinatorial exchanges. Harvard College, April 2003.
[17] A. M. Kwasnica, J. O. Ledyard, D. Porter, and C. DeMartini. A new and improved design for multi-object iterative auctions. Management Science, 2004. To appear.
[18] E. Kwerel and J. Williams. A proposal for a rapid transition to market allocation of spectrum. Technical report, FCC Office of Plans and Policy, Nov 2002.
[19] S. M. Lahaie and D. C. Parkes. Applying learning algorithms to preference elicitation. In Proc. ACM Conf. on Electronic Commerce, pages 180-188, 2004.
[20] R. P. McAfee. A dominant strategy double auction. Journal of Economic Theory, 56:434-450, 1992.
[21] P. Milgrom. Putting auction theory to work: The simultaneous ascending auction. Journal of Political Economy, 108:245-272, 2000.
[22] R. B. Myerson and M. A. Satterthwaite. Efficient mechanisms for bilateral trading. Journal of Economic Theory, 28:265-281, 1983.
[23] N. Nisan. Bidding and allocation in combinatorial auctions. In Proc. 2nd ACM Conf. on Electronic Commerce (EC-00), pages 1-12, 2000.
[24] D. C. Parkes, J. R. Kalagnanam, and M. Eso. Achieving budget-balance with Vickrey-based payment schemes in exchanges. In Proc. 17th International Joint Conference on Artificial Intelligence (IJCAI-01), pages 1161-1168, 2001.
[25] D. C. Parkes and L. H. Ungar. Iterative combinatorial auctions: Theory and practice. In Proc. 17th National Conference on Artificial Intelligence (AAAI-00), pages 74-81, July 2000.
[26] S. J. Rassenti, V. L. Smith, and R. L. Bulfin. A combinatorial mechanism for airport time slot allocation. Bell Journal of Economics, 13:402-417, 1982.
[27] M. H. Rothkopf, A. Pekeč, and R. M. Harstad. Computationally manageable combinatorial auctions. Management Science, 44(8):1131-1147, 1998.
[28] T. Sandholm and C. Boutilier. Preference elicitation in combinatorial auctions. In Cramton et al. [9], chapter 10.
[29] P. R. Wurman and M. P. Wellman. AkBA: A progressive, anonymous-price combinatorial auction. In Second ACM Conference on Electronic Commerce, pages 21-29, 2000.
ICE: An Iterative Combinatorial Exchange David C. Parkes * s Ruggiero Cavallos Nick Elprins Adam Judas S ´ ebastien Lahaies ABSTRACT We present the first design for an iterative combinatorial exchange (ICE). The exchange incorporates a tree-based bidding language that is concise and expressive for CEs. Bidders specify lower and upper bounds on their value for different trades. These bounds allow price discovery and useful preference elicitation in early rounds, and allow termination with an efficient trade despite partial information on bidder valuations. All computation in the exchange is carefully optimized to exploit the structure of the bid-trees and to avoid enumerating trades. A proxied interpretation of a revealedpreference activity rule ensures progress across rounds. A VCG-based payment scheme that has been shown to mitigate opportunities for bargaining and strategic behavior is used to determine final payments. The exchange is fully implemented and in a validation phase. Keywords: Combinatorial exchange, Threshold payments, VCG, Preference Elicitation. 1. INTRODUCTION Combinatorial exchanges combine and generalize two different mechanisms: double auctions and combinatorial auctions. In a double auction (DA), multiple buyers and sellers trade units of an identical good [20]. In a combinatorial auction (CA), a single seller has multiple heterogeneous items up for sale [11]. Buyers may have complementarities or substitutabilities between goods, and are provided with an expressive bidding language. A common goal in both market * Corresponding author. Remaining authors in alphabetical order. parkes@eecs.harvard.edu sDivision of Engineering and Applied Sciences, Harvard University, Cambridge MA 02138. designs is to determine the efficient allocation, which is the allocation that maximizes total value. A combinatorial exchange (CE) [24] is a combinatorial double auction that brings together multiple buyers and sellers to trade multiple heterogeneous goods. For example, in an exchange for wireless spectrum, a bidder may declare that she is willing to pay $1 million for a trade where she obtains licenses for New York City, Boston, and Philadelphia, and loses her license for Washington DC. Thus, unlike a DA, a CE allows all participants to express complex valuations via expressive bids. Unlike a CA, a CE allows for fragmented ownership, with multiple buyers and sellers and agents that are both buying and selling. CEs have received recent attention both in the context of wireless spectrum allocation [18] and for airport takeoff and landing slot allocation [3]. In both of these domains there are incumbents with property rights, and it is important to facilitate a complex multi-way reallocation of resources. Another potential application domain for CEs is to resource allocation in shared distributed systems, such as PlanetLab [13]. The instantiation of our general purpose design to specific domains is a compelling next step in our research. This paper presents the first design for an iterative combinatorial exchange (ICE). The genesis of this project was a class, CS 286r "Topics at the Interface between Economics and Computer Science," taught at Harvard University in Spring 2004.1 The entire class was dedicated to the design and prototyping of an iterative CE. The ICE design problem is multi-faceted and quite hard. 
The main innovation in our design is an expressive yet concise tree-based bidding language (which generalizes known languages such as XOR/OR [23]), and the tight coupling of this language with efficient algorithms for price-feedback to guide bidding, winner-determination to determine trades, and revealed-preference activity rules to ensure progress across rounds. The exchange is iterative: bidders express upper and lower valuations on trades by annotating their bid-tree, and then tighten these bounds in response to price feedback in each round. The Threshold payment rule, introduced by Parkes et al. [24], is used to determine final payments. The exchange has a number of interesting theoretical properties. For instance, when there exist linear prices we establish soundness and completeness: for straightforward bidders that adjust their bounds to meet activity rules while keeping their true value within the bounds, the exchange will terminate with the efficient allocation. In addition, the Figure 1: ICE System Flow of Control efficient allocation can often be determined without bidders revealing, or even knowing, their exact value for all trades. This is essential in complex domains where the valuation problem can itself be very challenging for a participant [28]. While we cannot claim that straightforward bidding is an equilibrium of the exchange (and indeed, should not expect to by the Myerson-Satterthwaite impossibility theorem [22]), the Threshold payment rule minimizes the ex post incentive to manipulate across all budget-balanced payment rules. The exchange is implemented in Java and is currently in validation. In describing the exchange we will first provide an overview of the main components and introduce several working examples. Then, we introduce the basic components for a simple one-shot variation in which bidders state their exact values for trades in a single round. We then describe the full iterative exchange, with upper and lower values, price-feedback, activity rules, and termination conditions. We state some theoretical properties of the exchange, and end with a discussion to motivate our main design decisions, and suggest some next steps. 2. AN OVERVIEW OF THE ICE DESIGN The design has four main components, which we will introduce in order through the rest of the paper: 9 Expressive and concise tree-based bidding language. The language describes values for trades, such as "my value for selling AB and buying C is $100," or "my value for selling ABC is - $50," with negative values indicating that a bidder must receive a payment for the trade to be acceptable. The language allows bidders to express upper and lower bounds on value, which can be tightened across rounds. 9 Winner Determination. Winner-determination (WD) is formulated as a mixed-integer program (MIP), with the structure of the bid-trees captured explicitly in the formulation. Comparing the solution at upper and lower values allows for a determination to be made about termination, with progress in intermediate rounds driven by an intermediate valuation and the lower values adopted on termination. 9 Payments. Payments are computed using the Threshold payment rule [24], with the intermediate valuations adopted in early rounds and lower values adopted on termination. 9 Price feedback. An approximate price is computed for each item in the exchange in each round, in terms of the intermediate valuations and the provisional trade. 
The prices are optimized to approximate competitive equilibrium prices, and further optimized to best approximate the current Threshold payments with remaining ties broken to favor prices that are balanced across different items. In computing the prices, we adopt the methods of constraint-generation to exploit the structure of the bidding language and avoid enumerating all feasible trades. The subproblem to generate new constraints is a variation of the WD problem. 9 Activity rule. A revealed-preference activity rule [1] ensures progress across rounds. In order to remain active, a bidder must tighten bounds so that there is enough information to define a trade that maximizes surplus at the current prices. Another variation on the WD problem is formulated, both to verify that the activity rule is met and also to provide feedback to a bidder to explain how to meet the rule. An outline of the ICE system flow of control is provided in Figure 1. We will return to this example later in the paper. For now, just observe in this two-agent example that the agents state lower and upper bounds that are checked in the activity rule, and then passed to winner-determination (WD), and then through three stages of pricing (accuracy, fairness, balance). On passing the closing rule (in which parameters aeff and athresh are checked for convergence of the trade and payments), the exchange goes to a last-and-final round. At the end of this round, the trade and payments are finally determined, based on the lower valuations. 2.1 Related Work Many ascending-price one-sided CAs are known in the literature [10, 25, 29]. Direct elicitation approaches have also been proposed for one-sided CAs in which agents respond to explicit queries about their valuations [8, 14, 19]. A number of ascending CAs are designed to work with simple prices on items [12, 17]. The price generation methods that we use in ICE generalize the methods in these earlier papers. Parkes et al. [24] studied sealed-bid combinatorial exchanges and introduced the Threshold payment rule. Subsequently, Krych [16] demonstrated experimentally that the Threshold rule promotes efficient allocations. We are not aware of any previous studies of iterative CEs. Dominant strategy DAs are known for unit demand [20] and also for single-minded agents [2]. No dominant strategy mechanisms are known for the general CE problem. ICE is a "hybrid" auction design, in that it couples simple item prices to drive bidding in early rounds with combinatorial WD and payments, a feature it shares with the clock-proxy design of Ausubel et al. [1] for one-sided CAs. We adopt a variation on the clock-proxy auctions's revealedpreference activity rule. The bidding language shares some structural elements with the LGB language of Boutilier and Hoos [7], but has very different semantics. Rothkopf et al. [27] also describe a restricted tree-based bidding language. In LGB, the semantics are those of propositional logic, with the same items in an allocation able to satisfy a tree in multiple places. Although this can make LGB especially concise in some settings, the semantics that we propose appear to provide useful "locality," so that the value of one component in a tree can be understood independently from the rest of the tree. The idea of capturing the structure of our bidding language explicitly within a mixed-integer programming formulation follows the developments in Boutilier [6]. 3. 
PRELIMINARIES In our model, we consider a set of goods, indexed {1,..., m} and a set of bidders, indexed {1,..., n}. The initial allocation of goods is denoted x0 = (x01,..., x0n), with x0i = (x0i1,..., x0im) and x0ij> 0 for good j indicating the number of units of good j held by bidder i. A trade A = (A1,..., An) denotes the change in allocation, with Ai = (Ai1,..., Aim) where Aij E is the change in the number of units of item j to bidder i. So, the final allocation is x1 = x0 + A. Each bidder has a value vi (Ai) E for a trade Ai. This value can be positive or negative, and represents the change in value between the final allocation x0i + Ai and the initial allocation x0i. Utility is quasi-linear, with ui (Ai, p) = vi (Ai) − p for trade Ai and payment p E. Price p can be negative, indicating the bidder receives a payment for the trade. We use the term payoff interchangeably with utility. Our goal in the ICE design is to implement the efficient trade. The efficient trade, A *, maximizes the total increase in value across bidders. Constraints (1) ensure that no agent sells more items than it has in its initial allocation. Constraints (2) provide free disposal, and allows feasible trades to sell more items than are purchased (but not vice versa). Later, we adopt Feas (x0) to denote the set of feasible trades, given these constraints and given an initial allocation x0 = (x01,..., x0 n). 3.1 Working Examples In this section, we provide three simple examples of instances that we will use to illustrate various components of the exchange. All three examples have only one seller, but this is purely illustrative. The "OR" indicates that the seller is willing to sell any number of goods. The "XOR" indicates that buyers 2 and 4 are willing to buy at most one of the two goods in which they are interested. The efficient trade is for bundle AB to go to buyer 1 and bundle CD to buyer 3, denoted A * = ([− 1, − 1, − 1, − 1], [+1, +1, 0, 0], [0, 0, 0, 0], [0, 0, +1, +1], [0, 0, 0, 0]). Figure 2: Example Bid Trees. 4. A ONE-SHOT EXCHANGE DESIGN The description of ICE is broken down into two sections: one-shot (sealed-bid) and iterative. In this section we abstract away the iterative aspect and introduce a specialization of the tree-based language that supports only exact values on nodes. 4.1 Tree-Based Bidding Language The bidding language is designed to be expressive and concise, entirely symmetric with respect to buyers and sellers, and to extend to capture bids from mixed buyers and sellers, ranging from simple swaps to highly complex trades. Bids are expressed as annotated bid trees, and define a bidder's value for all possible trades. The language defines changes in values on trades, with leaves annotated with traded items and nodes annotated with changes in values (either positive or negative). The main feature is that it has a general "interval-choose" logical operator on internal nodes, and that it defines careful semantics for propagating values within the tree. We illustrate the language on each of Examples 1--3 in Figure 2. The language has a tree structure, with trades on items defined on leaves and values annotated on nodes and leaves. The nodes have zero values where no value is indicated. Internal nodes are also labeled with interval-choose (IC) ranges. Given a trade, the semantics of the language define which nodes in the tree can be satisfied, or "switched-on." First, if a child is on then its parent must be on. 
Second, if a parent node is on, then the number of children that are on must be within the IC range on the parent node. Finally, leaves in which the bidder is buying items can only be on if the items are provided in the trade. For instance, in Example 2 we can consider the efficient trade, and observe that in this trade all nodes in the trees of buyers 1 and 3 (and also the seller), but none of the nodes in the trees of buyers 2 and 4, can be on. On the other hand, in the trade in which A goes to buyer 2 and D to buyer 4, then the root and appropriate leaf nodes can be on for buyers 2 and 4, but no nodes can be on for buyers 1 and 3. Given a trade there is often a number of ways to choose the set of satisfied nodes. The semantics of the language require that the nodes that maximize the summed value across satisfied nodes be activated. Consider bid tree Ti from bidder i. This defines nodes,3 Ti, of which some are leaves, Leaf (i) Ti. Let Child (,3) Ti denote the children of a node,3 (that is not itself a leaf). All nodes except leaves are labeled with the interval-choose operator [ICxi (,3), ICyi (,3)]. Every node is also labeled with a value, vi,3. Each leaf,3 is labeled with a trade, qi,3 m (i.e., leaves can define a bundled trade on more than one type of item.) Given a trade Ai to bidder i, the interval-choose operators and trades on leaves define which nodes can be satisfied. There will often be a choice. Ties are broken to maximize value. Let sati,3 {0, 1} denote whether node,3 is satisfied. Solution sati is valid given tree Ti and trade Ai, written sati valid (Ti, Ai), if and only if: In words, a set of leaves can only be considered satisfied given trade Ai if the total increase in quantity summed across all such leaves is covered by the trade, for all goods (Eq. 4). This works for sellers as well as buyers: for sellers a trade is negative and this requires that the total number of items indicated sold in the tree is at least the total number sold as defined in the trade. We also need "upwards-propagation": any time a node other than the root is satisfied then its parent must be satisfied (by,3 EChild (,3) sati,3 IC y i (,3) sati,3 in Eq. 5). Finally, we need "downwards-propagation": any time an internal node is satisfied then the appropriate number of children must also be satisfied (Eq. 5). The total value of trade Ai, given bid-tree Ti, is defined as: The tree-based language generalizes existing languages. For instance: IC (2, 2) on a node with 2 children is equivalent to an AND operator; IC (1, 3) on a node with 3 children is equivalent to an OR operator; and IC (1, 1) on a node with 2 children is equivalent to an XOR operator. Similarly, the XOR/OR bidding languages can be directly expressed as a bid tree in our language .2 4.2 Winner Determination This section defines the winner determination problem, which is formulated as a MIP and solved in our implementation with a commercial solver .3 The solver uses branchand-bound search with dynamic cut generation and branching heuristics to solve large MIPs in economically feasible run times. 2The OR * language is the OR language with dummy items to provide additional structure. OR * is known to be expressive and concise. However, it is not known whether OR * dominates XOR/OR in terms of conciseness [23]. 3CPLEX, www.ilog.com In defining the MIP representation we are careful to avoid an XOR-based enumeration of all bundles. A variation on the WD problem is reused many times within the exchange, e.g. 
for column generation in pricing and for checking revealed preference. Given bid trees T = (T1,..., Tte) and initial allocation x0, the mixed-integer formulation for WD is: Some goods may go unassigned because free disposal is allowed within the clearing rules of winner determination. These items can be allocated back to agents that sold the items, i.e. for which Aij <0. 4.3 Computing Threshold Payments The Threshold payment rule is based on the payments in the Vickrey-Clarke-Groves (VCG) mechanism [15], which itself is truthful and efficient but does not satisfy budget balance. Budget-balance requires that the total payments to the exchange are equal to the total payments made by the exchange. In VCG, the payment paid by agent i is where A * is the efficient trade, V * is the reported value of this trade, and V_i is the reported value of the efficient trade that would be implemented without bidder i. We call Avcg, i = V * − V_i the VCG discount. For instance, in Example 1 pvcg, seller = − 10 − (+10 − 0) = − 20 and pvcg, buyer = +20 − (+10 − 0) = 10, and the exchange would run at a budget deficit of − 20 + 10 = − 10. The Threshold payment rule [24] determines budgetbalanced payments to minimize the maximal error across all agents to the VCG outcome. Threshold payments are designed to minimize the maximal ex post incentive to manipulate. Krych [16] confirmed that Threshold promotes allocative efficiency in restricted and approximate Bayes-Nash equilibrium. 5. THE ICE DESIGN We are now ready to introduce the iterative combinatorial exchange (ICE) design. Several new components are introduced, relative to the design for the one-shot exchange. Rather than provide precise valuations, bidders can provide lower and upper valuations and revise this bid information across rounds. The exchange provides price-based feedback to guide bidders in this process, and terminates with an efficient (or approximately-efficient) trade with respect to reported valuations. In each round t {0, 1, ...} the current lower and upper bounds, vt and vt, are used to define a provisional valuation profile v (the a-valuation), together with a provisional trade At and provisional prices pt = (pt 1,..., ptm) on items. The a-valuation is a linear combination of the current upper and lower valuations, with aEFF [0, 1] chosen endogenously based on the "closeness" of the optimistic trade (at v) and the pessimistic trade (at v). Prices pt are used to inform an activity rule, and drive progress towards an efficient trade. 5.1 Upper and Lower Valuations The bidding language is extended to allow a bidder i to report a lower and upper value (vi, vi) on each node. These take the place of the exact value vi defined in Section 4.1. Based on these labels, we can define the valuation functions vi (Ti, Ai) and vi (Ti, Ai), using the exact same semantics as in Eq. (6). We say that such a bid-tree is well-formed if vi vi for all nodes. The following lemma is useful: LEMMA 1. Given a well-formed tree, T, then vi (Ti, Ai) vi (Ti, Ai) for all trades. PROOF. Suppose there is some Ai for which vi (Ti, Ai)> vi (Ti, Ai). Then, maxsatEvalid (Ti, i) ETi vi · sat> maxsatEvalid (Ti, i) ETi vi · sat. But, this is a contradiction because the trade A' that defines vi (Ti, Ai) is still feasible with upper bounds vi, and vi vi for all nodes,3 in a well-formed tree. 5.2 Price Feedback In each round, approximate competitive-equilibrium (CE) prices, pt = (pt 1,..., ptm), are determined. 
Given these provisional prices, the price on trade Ai for bidder i is pt (Ai) = j <m ptj · Aij. CE prices will not always exist and we will often need to compute approximate prices [5]. We extend ideas due to Rassenti et al. [26], Kwasnica et al. [17] and Dunford et al. [12], and select approximate prices as follows: I: Accuracy. First, we compute prices that minimize the maximal error in the best-response constraints across all bidders. II: Fairness. Second, we break ties to prefer prices that minimize the maximal deviation from Threshold payments across all bidders. III: Balance. Third, we break ties to prefer prices that minimize the maximal price across all items. Taken together, these steps are designed to promote the informativeness of the prices in driving progress across rounds. In computing prices, we explain how to compute approximate (or otherwise) prices for structured bidding languages, and without enumerating all possible trades. For this, we adopt constraint generation to efficient handle an exponential number of constraints. Each step is described in detail below. I: Accuracy. We adopt a definition of price accuracy that generalizes the notions adopted in previous papers for unstructured bidding languages. Let At denote the current provisional trade and suppose the provisional valuation is v. To compute accurate CE prices, we consider: This linear program (LP) is designed to find prices that minimize the worst-case error across all agents. From the definition of CE prices, it follows that CE prices would have S = 0 as a solution to (9), at which point trade Ati would be in the best-response set of every agent (with Ati =, i.e. no trade, for all agents with no surplus for trade at the prices.) EXAMPLE 5. We can illustrate the formulation (9) on Example 2, assuming for simplicity that v = v (i.e. truth). The efficient trade allocates AB to buyer 1 and CD to buyer 3. Accuracy will seek prices p (A), p (B), p (C) and p (D) to minimize the S 0 required to satisfy constraints: An optimal solution requires p (A) = p (B) = 10/3, with S = 2/3, with p (C) and p (D) taking values such as p (C) = p (D) = 3/2. But, (9) has an exponential number of constraints (Eq. 10). Rather than solve it explicitly we use constraint generation [4] and dynamically generate a sufficient subset of constraints. Let i denote a manageable subset of all possible feasible trades to bidder i. Then, a relaxed version of (9) (written ACC) is formulated by substituting (10) with where i is a set of trades that are feasible for bidder i given the other bids. Fixing the prices p *, we then solve n subproblems (one for each bidder), to check whether solution (p *, S *) to ACC is feasible in problem (9). In R-WD (i) the objective is to determine a most preferred trade for each bidder at these prices. Let ˆAi denote the solution to R-WD (i). Check condition: and if this condition holds for all bidders i, then solution (p *, S *) is optimal for problem (9). Otherwise, trade ˆAi is added to i for all bidders i for which this constraint is violated and we re-solve the LP with the new set of constraints .4 II: Fairness. Second, we break remaining ties to prefer fair prices: choosing prices that minimize the worst-case error with respect to Threshold payoffs (i.e. utility to bidders with Threshold payments), but without choosing prices that are less accurate .5 EXAMPLE 6. For example, accuracy in Example 1 (depicted in Figure 1) requires 12 pA + pB 16 (for v = v). 
At these valuations the Threshold payoffs would be 2 to both the seller and the buyer. This can be exactly achieved in pricing with pA + pB = 14. The fairness tie-breaking method is formulated as the following LP: where * represents the error in the optimal solution, from ACC. The objective here is the same as in the Threshold payment rule (see Section 4.3): minimize the maximal error between bidder payoff (at v) for the provisional trade and the VCG payoff (at v). Problem FAIR is also solved through constraint generation, using R-WD (i) to add additional violated constraints as necessary. III: Balance. Third, we break remaining ties to prefer balanced prices: choosing prices that minimize the maximal price across all items. Returning again to Example 1, depicted in Figure 1, we see that accuracy and fairness require p (A) + p (B) = 14. Finally, balance sets p (A) = p (B) = 7. Balance is justified when, all else being equal, items are more likely to have similar than dissimilar values .6 The LP for balance is formulated as follows: where * represents the error in the optimal solution from ACC and * represents the error in the optimal solution from FAIR. Constraint generation is also used to solve BAL, generating new trades for i as necessary. 4Problem R-WD (i) is a specialization of the WD problem, in which the objective is to maximize the payoff of a single bidder, rather than the total value across all bidders. It is solved as a MIP, by rewriting the objective in WD (T, x0) as max {vi · sati − j p * j · ij} for agent i. Thus, the structure of the bid-tree language is exploited in generating new constraints, because this is solved as a concise MIP. The other bidders are kept around in the MIP (but do not appear in the objective), and are used to define the space of feasible trades. 5The methods of Dunford et al. [12], that use a nucleolus approach, are also closely related. 6The use of balance was advocated by Kwasnica et al. [17]. Dunford et al. [12] prefer to smooth prices across rounds. Comment 1: Lexicographical Refinement. For all three sub-problems we also perform lexicographical refinement (with respect to bidders in ACC and FAIR, and with respect to goods in BAL). For instance, in ACC we successively minimize the maximal error across all bidders. Given an initial solution we first "pin down" the error on all bidders for whom a constraint (11) is binding. For such a bidder i, the constraint is replaced withvi () − p () v and the error to bidder i no longer appears explicitly in the objective. ACC is then re-solved, and makes progress by further minimizing the maximal error across all bidders yet to be pinned down. This continues, pinning down any new bidders for whom one of constraints (11) is binding, until the error is lexicographically optimized for all bidders .7 The exact same process is repeated for FAIR and BAL, with bidders pinned down and constraints (15) replaced with * i vcg, i − (v i (ti) − p (ti)), i, (where * i is the current objective) in FAIR, and items pinned down and constraints (18) replaced with p * j pj (where p * j represents the target for the maximal price on that item) in BAL. Comment 2: Computation. All constraints in i are retained, and this set grows across all stages and across all rounds of the exchange. Thus, the computational effort in constraint generation is re-used. In implementation we are careful to address a number of "- issues" that arise due to floating-point issues. 
We prefer to err on the side of being conservative in determining whether or not to add another constraint in performing check (13). This avoids later infeasibility issues. In addition, when pinning-down bidders for the purpose of lexicographical refinement we relax the associated bidder-constraints with a small> 0 on the righthand side. 5.3 Revealed-Preference Activity Rules The role of activity rules in the auction is to ensure both consistency and progress across rounds [21]. Consistency in our exchange requires that bidders tighten bounds as the exchange progresses. Activity rules ensure that bidders are active during early rounds, and promote useful elicitation throughout the exchange. We adopt a simple revealed-preference (RP) activity rule. The idea is loosely based around the RP-rule in Ausubel et al. [1], where it is used for one-sided CAs. The motivation is to require more than simply consistency: we need bidders to provide enough information for the system to be able to to prove that an allocation is (approximately) efficient. It is helpful to think about the bidders interacting with "proxy agents" that will act on their behalf in responding to provisional prices pt-1 determined at the end of round t − 1. The only knowledge that such a proxy has of the valuation of a bidder is through the bid-tree. Suppose a proxy was queried by the exchange and asked which trade the bidder was most interested in at the provisional prices. The RP rule says the following: the proxy must have enough 7For example, applying this to accuracy on Example 2 we solve once and find bidders 1 and 2 are binding, for error * = 2/3. We pin these down and then minimize the error to bidders 3 and 4. Finally, this gives p (A) = p (B) = 10/3 and p (C) = p (D) = 5/3, with accuracy 2/3 to bidders 1 and 2 and 1/3 to bidders 3 and 4. information to be able to determine this surplus-maximizing trade at current prices. Consider the following examples: EXAMPLE 7. A bidder has XOR (+ A, + B) and a value of +5 on the leaf + A and a value range of [5,10] on leaf + B. Suppose prices are currently 3 for each of A and B. The RP rule is satisfied because the proxy knows that however the remaining value uncertainty on + B is resolved the bidder will always (weakly) prefer + B to + A. EXAMPLE 8. A bidder has XOR (+ A, + B) and value bounds [5, 10] on the root node and a value of 1 on leaf + A. Suppose prices are currently 3 for each of A and B. The RP rule is satisfied because the bidder will always prefer + A to + B at equal prices, whichever way the uncertain value on the root node is ultimately resolved. Overloading notation, let vi E Ti denote a valuation that is consistent with lower and upper valuations in bid tree Ti. DEFINITION 4. Bid tree Ti satisfies RP at prices pt − 1 if and only if there exists some feasible trade L for which, To make this determination for bidder i we solve a sequence of problems, each of which is a variation on the WD problem. First, we construct a candidate lower-bound trade, which is a feasible trade that solves: The solution l to RP1 (i) represents the maximal payoff that bidder i can achieve across all feasible trades, given its pessimistic valuation. Second, we break ties to find a trade with maximal value uncertainty across all possible solutions to RP1 (i): We adopt solution L i as our candidate for the trade that may satisfy RP. To understand the importance of this tiebreaking rule consider Example 7. 
The proxy can prove + B but not + A is a best-response for all vi E Ti, and should choose + B as its candidate. Notice that + B is a counterexample to + A, but not the other way round. Now, we construct a modified valuation ˜vi, by setting (24) vi, otherwise. where sat (L i) is the set of nodes that are satisfied in the lower-bound tree for trade L i. Given this modified valuation, we find U to solve: Let u denote the payoff from this optimal trade at modified values ˜v. We call trade Ui the witness trade. We show in Proposition 1 that the RP rule is satisfied if and only if l> u. Constructing the modified valuation as ˜vi recognizes that there is "shared uncertainty" across trades that satisfy the same nodes in a bid tree. Example 8 helps to illustrate this. Just using vi in RP3 (i), we would find L i is "buy A" with payoff l = 3 but then find Ui is "buy B" with u = 7 and fail RP. We must recognize that however the uncertainty on the root node is resolved it will affect + A and + B in exactly the same way. For this reason, we set ˜vi = vi = 5 on the root node, which is exactly the same value that was adopted in determining l. Then, RP3 (i) applied to Ui gives "buy A" and the RP test is judged to be passed. where ˜vi is the modified valuation in Eq. (24). PROOF. For sufficiency, notice that the difference in payoff between trade L i and another trade i is unaffected by the way uncertainty is resolved on any node that is satisfied in both L i and i. Fixing the values in ˜vi on nodes satisfied in L i has the effect of removing this consideration when a trade Ui is selected that satisfies one of these nodes. On the other hand, fixing the values on these nodes has no effect on trades considered in RP3 (i) that do not share a node with L i. For the necessary direction, we first show that any trade that satisfies RP must solve RP1 (i). Suppose otherwise, that some i with payoff greater than l satisfies RP. But, valuation vi E Ti together with L i presents a counterexample to RP (Eq. 20). Now, suppose (for contradiction) that some i with maximal payoff l but uncertainty less than L i satisfies RP. Proceed by case analysis. Case a): only one solution to RP1 (i) has uncertain value and so i has certain value. But, this cannot satisfy RP because L i with uncertain value would be a counterexample to RP (Eq. 20). Case b): two or more solutions to RP1 (i) have uncertain value. Here, we first argue that one of these trades must satisfy a (weak) superset of all the nodes with uncertain value that are satisfied by all other trades in this set. This is by RP. Without this, then for any choice of trade that solves RP1 (i), there is another trade with a disjoint set of uncertain but satisfied nodes that provides a counterexample to RP (Eq. 20). Now, consider the case that some trade contains a superset of all the uncertain satisfied nodes of the other trades. Clearly RP2 (i) will choose this trade, L i, and i must satisfy a subset of these nodes (by assumption). But, we now see that i cannot satisfy RP because L i would be a counterexample to RP. Failure to meet the activity rule must have some consequence. In the current rules, the default action we choose is to set the upper bounds in valuations down to the maximal value of the provisional price on a node8 and the lowerbound value on that node .9 Such a bidder can remain active 8The provisional price on a node is defined as the minimal total price across all feasible trades for which the subtree rooted at the tree is satisfied. 
9This is entirely analogous to when a bidder in an ascending clock auction stops bidding at a price: she is not permitted to bid at a higher price again in future rounds. within the exchange, but only with valuations that are consistent with these new bounds. 5.4 Bidder Feedback In each round, our default design provides every bidder with the provisional trade and also with the current provisional prices. See 7 for an additional discussion. We also provide guidance to help a bidder meet the RP rule. Let sat (L * i) and sat (Ui *) denote the nodes that are satisfied in trades L * i and Ui *, as computed in RP1--RP3. LEMMA 2. When RP fails, a bidder must increase a lower bound on at least one node in sat (L * i) \ sat (Ui *) or decrease an upper bound on at least one node in sat (Ui *) \ sat (L * i) in order to meet the activity rule. PROOF. Changing the upper - or lower - values on nodes that are not satisfied by either trade does not change L * i or Ui *, and does not change the payoff from these trades. Thus, the RP condition will continue to fail. Similarly, changing the bounds on nodes that are satisfied in both trades has no effect on revealed preference. A change to a lower bound on a shared node affects both L * i and Ui * identically because of the use of the modified valuation to determine Ui *. A change to an upper bound on a shared node has no effect in determining either L * i or Ui *. Note that when sat (Ui *) = sat (L * i) then condition (26) is always trivially satisfied, and so the guidance in the lemma is always well-defined when RP fails. This is an elegant feedback mechanism because it is adaptive. Once a bidder makes some changes on some subset of these nodes, the bidder can query the exchange. The exchange can then respond "yes," or can revise the set of nodes sat (* l) and sat (* u) as necessary. 5.5 Termination Conditions Once each bidder has committed its new bids (and either met the RP rule or suffered the penalty) then round t closes. At this point, the task is to determine the new - valuation, and in turn the provisional allocation t and provisional prices pt. A termination condition is also checked, to determine whether to move the exchange to a last-and-final round. To define the - valuation we compute the following two quantities: Pessimistic at Pessimistic (PP) Determine an efficient trade, * l, at pessimistic values, i.e. to solve max i vi (i), and set PP = i vi (* li). Pessimistic at Optimistic (PO) Determine an efficient trade, * u, at optimistic values, i.e. to solve max i vi (i), and set PO = i vi (* ui). First, note that PP> PO and PP> 0 by definition, for all bid-trees, although PO can be negative (because the "right" trade at v is not currently a useful trade at v). Recognizing this, define when PP> 0, and observe that eff (PP, PO)> 1 when this is defined, and that eff (PP, PO) will start large and then trend towards 1 as the optimistic allocation converges towards the pessimistic allocation. In each round, we define eff E [0, 1] as: which is 0 while PP is 0 and then trends towards 1 once PP> 0 in some round. This is used to define - valuation which is used to define the provisional allocation and provisional prices. The effect is to endogenously define a schedule for moving from optimistic to pessimistic values across rounds, based on how "close" the trades are to one another. Termination Condition. 
Termination Condition. In moving to the last-and-final round, and finally closing, we also care about the convergence of payments, in addition to the convergence towards an efficient trade. For this we introduce another parameter, thresh \in [0, 1], that trends from 0 to 1 as the Threshold payments at the lower and upper valuations converge. Consider the following parameter, defined for PP > 0:

  thresh(PP, p^thresh(\bar{v}), p^thresh(\underline{v})) = 1 + (\|p^thresh(\bar{v}) - p^thresh(\underline{v})\|_2 / \sqrt{N_active}) / (PP / N_active),

where p^thresh(v) denotes the Threshold payments at valuation profile v, N_active is the number of bidders that are actively engaged in trade in the PP trade, and \|\cdot\|_2 is the L2-norm. Note that thresh is defined for payments and not payoffs. This is appropriate because it is the accuracy of the outcome of the exchange that matters: i.e. the trade and the payments. Given this, we define

  thresh = 0, when PP = 0; 1/thresh(\cdot), otherwise,   (31)

which is 0 while PP is 0 and then trends towards 1 as progress is made.
DEFINITION 5 (TERMINATION). ICE transitions to a last-and-final round when one of the following holds:
1. eff \geq CUTOFF_eff and thresh \geq CUTOFF_thresh,
2. there is no trade at the optimistic values,
where CUTOFF_eff, CUTOFF_thresh \in (0, 1] determine the accuracy required for termination. At the end of the last-and-final round, the lower (pessimistic) valuations \underline{v} are used to define the final trade and the final Threshold payments.
EXAMPLE 9. Consider again Example 1, and consider the upper and lower bounds as depicted in Figure 1. First, if the seller's bounds were [-20, -4] then there is an optimistic trade but no pessimistic trade, and PO = -4, PP = 0, and eff = 0. At the bounds depicted, both the optimistic and the pessimistic trades occur and PO = PP = 4 and eff = 1. However, we can see that the Threshold payments are (17, -17) at \bar{v} but (14, -14) at \underline{v}. Evaluating thresh, we have thresh(\cdot) = 1 + 3/(4/2) = 5/2, and thresh = 2/5. For CUTOFF_thresh > 2/5 the exchange would remain open. On the other hand, if the buyer's value for +AB was between [18, 24] and the seller's value for -AB was between [-12, -6], the Threshold payments are (15, -15) at both upper and lower bounds, and thresh = 1.
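The following snippet reproduces the Example 9 numbers under the thresh formula as reconstructed above; treat the exact normalization (the \sqrt{N_active} and PP/N_active factors) as an assumption recovered from the worked example rather than a definitive statement of the rule.

```python
import math

def thresh_param(PP, p_upper, p_lower, n_active):
    # L2 distance between Threshold payments at upper and lower valuations,
    # normalized per active bidder and compared to the per-bidder surplus PP/n_active.
    diff = math.sqrt(sum((a - b) ** 2 for a, b in zip(p_upper, p_lower)))
    return 1 + (diff / math.sqrt(n_active)) / (PP / n_active)

PP = 4
p_upper, p_lower = (17, -17), (14, -14)   # Threshold payments at upper / lower values
t = thresh_param(PP, p_upper, p_lower, n_active=2)
thresh = 0.0 if PP == 0 else 1.0 / t
print(t, thresh)                          # 2.5 and 0.4, as in Example 9
```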
Table 1: Exchange Component and Code Breakdown.
6. SYSTEMS INFRASTRUCTURE
ICE is approximately 6502 lines of Java code, broken up into the functional packages described in Table 1. The prototype is modular so that researchers may easily replace components for experimentation. In addition to the core exchange discussed in this paper, we have developed an agent component that allows a user to simulate the behavior and knowledge of other players in the system, better allowing a user to formulate their strategy in advance of actual play. A user specifies a valuation model in an XML interpretation of our bidding language, which is revealed to the exchange via the agent's strategy. Major exchange tasks are handled by "engines" that dictate the non-optimizer-specific logic. These engines drive the appropriate MIP/LP "builders." We realized that all of our optimization formulations boil down to two classes of optimization problem. The first, used by winner determination, the activity rule, the closing rule, and constraint generation in pricing, is a MIP that finds trades that maximize value, holding prices and slacks constant. The second, used by the three pricing stages, is an LP that holds trades constant, seeking to minimize slack, profit, or prices. We take advantage of the commonality of these problems by using common LP/MIP builders that differ only by a few functional hooks to provide the correct variables for optimization. We have generalized our back-end optimization solver interface (we currently support CPLEX and the LGPL-licensed LPSolve), and can take advantage of the load-balancing and parallel MIP/LP solving capability that this library provides.
8. CONCLUSIONS
In this work we designed and prototyped a scalable and highly expressive iterative combinatorial exchange. The design includes many interesting features, including: a new bid-tree language for exchanges, a new method to construct approximate linear prices from expressive languages, and a proxied elicitation method with optimistic and pessimistic valuations, with a new method to evaluate a revealed-preference activity rule. The exchange is fully implemented in Java and is in a validation phase. The next steps for our work are to allow bidders to refine the structure of the bid tree in addition to values on the tree. We intend to study the elicitation properties of the exchange and we have put together a test suite of exchange problem instances. In addition, we are beginning to engage in collaborations to apply the design to airline takeoff and landing slot scheduling and to resource allocation in wide-area network distributed computational systems.
ICE: An Iterative Combinatorial Exchange David C. Parkes * s Ruggiero Cavallos Nick Elprins Adam Judas S ´ ebastien Lahaies ABSTRACT We present the first design for an iterative combinatorial exchange (ICE). The exchange incorporates a tree-based bidding language that is concise and expressive for CEs. Bidders specify lower and upper bounds on their value for different trades. These bounds allow price discovery and useful preference elicitation in early rounds, and allow termination with an efficient trade despite partial information on bidder valuations. All computation in the exchange is carefully optimized to exploit the structure of the bid-trees and to avoid enumerating trades. A proxied interpretation of a revealedpreference activity rule ensures progress across rounds. A VCG-based payment scheme that has been shown to mitigate opportunities for bargaining and strategic behavior is used to determine final payments. The exchange is fully implemented and in a validation phase. Keywords: Combinatorial exchange, Threshold payments, VCG, Preference Elicitation. 1. INTRODUCTION Combinatorial exchanges combine and generalize two different mechanisms: double auctions and combinatorial auctions. In a double auction (DA), multiple buyers and sellers trade units of an identical good [20]. In a combinatorial auction (CA), a single seller has multiple heterogeneous items up for sale [11]. Buyers may have complementarities or substitutabilities between goods, and are provided with an expressive bidding language. A common goal in both market * Corresponding author. Remaining authors in alphabetical order. parkes@eecs.harvard.edu sDivision of Engineering and Applied Sciences, Harvard University, Cambridge MA 02138. designs is to determine the efficient allocation, which is the allocation that maximizes total value. A combinatorial exchange (CE) [24] is a combinatorial double auction that brings together multiple buyers and sellers to trade multiple heterogeneous goods. For example, in an exchange for wireless spectrum, a bidder may declare that she is willing to pay $1 million for a trade where she obtains licenses for New York City, Boston, and Philadelphia, and loses her license for Washington DC. Thus, unlike a DA, a CE allows all participants to express complex valuations via expressive bids. Unlike a CA, a CE allows for fragmented ownership, with multiple buyers and sellers and agents that are both buying and selling. CEs have received recent attention both in the context of wireless spectrum allocation [18] and for airport takeoff and landing slot allocation [3]. In both of these domains there are incumbents with property rights, and it is important to facilitate a complex multi-way reallocation of resources. Another potential application domain for CEs is to resource allocation in shared distributed systems, such as PlanetLab [13]. The instantiation of our general purpose design to specific domains is a compelling next step in our research. This paper presents the first design for an iterative combinatorial exchange (ICE). The genesis of this project was a class, CS 286r "Topics at the Interface between Economics and Computer Science," taught at Harvard University in Spring 2004.1 The entire class was dedicated to the design and prototyping of an iterative CE. The ICE design problem is multi-faceted and quite hard. 
The main innovation in our design is an expressive yet concise tree-based bidding language (which generalizes known languages such as XOR/OR [23]), and the tight coupling of this language with efficient algorithms for price-feedback to guide bidding, winner-determination to determine trades, and revealed-preference activity rules to ensure progress across rounds. The exchange is iterative: bidders express upper and lower valuations on trades by annotating their bid-tree, and then tighten these bounds in response to price feedback in each round. The Threshold payment rule, introduced by Parkes et al. [24], is used to determine final payments. The exchange has a number of interesting theoretical properties. For instance, when there exist linear prices we establish soundness and completeness: for straightforward bidders that adjust their bounds to meet activity rules while keeping their true value within the bounds, the exchange will terminate with the efficient allocation. In addition, the Figure 1: ICE System Flow of Control efficient allocation can often be determined without bidders revealing, or even knowing, their exact value for all trades. This is essential in complex domains where the valuation problem can itself be very challenging for a participant [28]. While we cannot claim that straightforward bidding is an equilibrium of the exchange (and indeed, should not expect to by the Myerson-Satterthwaite impossibility theorem [22]), the Threshold payment rule minimizes the ex post incentive to manipulate across all budget-balanced payment rules. The exchange is implemented in Java and is currently in validation. In describing the exchange we will first provide an overview of the main components and introduce several working examples. Then, we introduce the basic components for a simple one-shot variation in which bidders state their exact values for trades in a single round. We then describe the full iterative exchange, with upper and lower values, price-feedback, activity rules, and termination conditions. We state some theoretical properties of the exchange, and end with a discussion to motivate our main design decisions, and suggest some next steps. 2. AN OVERVIEW OF THE ICE DESIGN 2.1 Related Work 3. PRELIMINARIES 3.1 Working Examples 4. A ONE-SHOT EXCHANGE DESIGN 4.1 Tree-Based Bidding Language 4.2 Winner Determination 4.3 Computing Threshold Payments 5. THE ICE DESIGN 5.1 Upper and Lower Valuations 5.2 Price Feedback 5.3 Revealed-Preference Activity Rules 5.4 Bidder Feedback 5.5 Termination Conditions 1 / thresh otherwise 6. SYSTEMS INFRASTRUCTURE 8. CONCLUSIONS In this work we designed and prototyped a scalable and highly-expressive iterative combinatorial exchange. The design includes many interesting features, including: a new bid-tree language for exchanges, a new method to construct approximate linear prices from expressive languages, and a proxied elicitation method with optimistic and pessimistic valuations with a new method to evaluate a revealed - preference activity rule. The exchange is fully implemented in Java and is in a validation phase. The next steps for our work are to allow bidders to refine the structure of the bid tree in addition to values on the tree. We intend to study the elicitation properties of the exchange and we have put together a test suite of exchange problem instances. 
In addition, we are beginning to engage in collaborations to apply the design to airline takeoff and landing slot scheduling and to resource allocation in widearea network distributed computational systems.
ICE: An Iterative Combinatorial Exchange David C. Parkes * s Ruggiero Cavallos Nick Elprins Adam Judas S ´ ebastien Lahaies ABSTRACT We present the first design for an iterative combinatorial exchange (ICE). The exchange incorporates a tree-based bidding language that is concise and expressive for CEs. Bidders specify lower and upper bounds on their value for different trades. These bounds allow price discovery and useful preference elicitation in early rounds, and allow termination with an efficient trade despite partial information on bidder valuations. All computation in the exchange is carefully optimized to exploit the structure of the bid-trees and to avoid enumerating trades. A proxied interpretation of a revealedpreference activity rule ensures progress across rounds. A VCG-based payment scheme that has been shown to mitigate opportunities for bargaining and strategic behavior is used to determine final payments. The exchange is fully implemented and in a validation phase. Keywords: Combinatorial exchange, Threshold payments, VCG, Preference Elicitation. 1. INTRODUCTION Combinatorial exchanges combine and generalize two different mechanisms: double auctions and combinatorial auctions. In a double auction (DA), multiple buyers and sellers trade units of an identical good [20]. In a combinatorial auction (CA), a single seller has multiple heterogeneous items up for sale [11]. Buyers may have complementarities or substitutabilities between goods, and are provided with an expressive bidding language. A common goal in both market * Corresponding author. Remaining authors in alphabetical order. designs is to determine the efficient allocation, which is the allocation that maximizes total value. A combinatorial exchange (CE) [24] is a combinatorial double auction that brings together multiple buyers and sellers to trade multiple heterogeneous goods. Thus, unlike a DA, a CE allows all participants to express complex valuations via expressive bids. Unlike a CA, a CE allows for fragmented ownership, with multiple buyers and sellers and agents that are both buying and selling. In both of these domains there are incumbents with property rights, and it is important to facilitate a complex multi-way reallocation of resources. Another potential application domain for CEs is to resource allocation in shared distributed systems, such as PlanetLab [13]. The instantiation of our general purpose design to specific domains is a compelling next step in our research. This paper presents the first design for an iterative combinatorial exchange (ICE). The ICE design problem is multi-faceted and quite hard. The exchange is iterative: bidders express upper and lower valuations on trades by annotating their bid-tree, and then tighten these bounds in response to price feedback in each round. The Threshold payment rule, introduced by Parkes et al. [24], is used to determine final payments. The exchange has a number of interesting theoretical properties. For instance, when there exist linear prices we establish soundness and completeness: for straightforward bidders that adjust their bounds to meet activity rules while keeping their true value within the bounds, the exchange will terminate with the efficient allocation. In addition, the Figure 1: ICE System Flow of Control efficient allocation can often be determined without bidders revealing, or even knowing, their exact value for all trades. This is essential in complex domains where the valuation problem can itself be very challenging for a participant [28]. 
The exchange is implemented in Java and is currently in validation. In describing the exchange we will first provide an overview of the main components and introduce several working examples. Then, we introduce the basic components for a simple one-shot variation in which bidders state their exact values for trades in a single round. We then describe the full iterative exchange, with upper and lower values, price-feedback, activity rules, and termination conditions. We state some theoretical properties of the exchange, and end with a discussion to motivate our main design decisions, and suggest some next steps. 8. CONCLUSIONS In this work we designed and prototyped a scalable and highly-expressive iterative combinatorial exchange. The exchange is fully implemented in Java and is in a validation phase. The next steps for our work are to allow bidders to refine the structure of the bid tree in addition to values on the tree. We intend to study the elicitation properties of the exchange and we have put together a test suite of exchange problem instances. In addition, we are beginning to engage in collaborations to apply the design to airline takeoff and landing slot scheduling and to resource allocation in widearea network distributed computational systems.
H-64
Machine Learning for Information Architecture in a Large Governmental Website
This paper describes ongoing research into the application of machine learning techniques for improving access to governmental information in complex digital libraries. Under the auspices of the GovStat Project, our goal is to identify a small number of semantically valid concepts that adequately spans the intellectual domain of a collection. The goal of this discovery is twofold. First we desire a practical aid for information architects. Second, automatically derived document-concept relationships are a necessary precondition for real-world deployment of many dynamic interfaces. The current study compares concept learning strategies based on three document representations: keywords, titles, and full-text. In statistical and user-based studies, human-created keywords provide significant improvements in concept learning over both title-only and full-text representations.
[ "machin learn", "inform architectur", "machin learn techniqu", "access", "complex digit librari", "digit librari", "data-driven approach", "supervis and unsupervis learn techniqu", "bureau of labor statist", "eigenvalu", "bl collect", "k-mean cluster", "multiwai classif", "interfac design" ]
[ "P", "P", "P", "P", "P", "P", "U", "M", "M", "U", "M", "U", "U", "M" ]
Machine Learning for Information Architecture in a Large Governmental Website ∗ Miles Efron School of Information & Library Science CB#3360, 100 Manning Hall University of North Carolina Chapel Hill, NC 27599-3360 efrom@ils.unc.edu Jonathan Elsas School of Information & Library Science CB#3360, 100 Manning Hall University of North Carolina Chapel Hill, NC 27599-3360 jelsas@email.unc.edu Gary Marchionini School of Information & Library Science CB#3360, 100 Manning Hall University of North Carolina Chapel Hill, NC 27599-3360 march@ils.unc.edu Junliang Zhang School of Information & Library Science CB#3360, 100 Manning Hall University of North Carolina Chapel Hill, NC 27599-3360 junliang@email.unc.edu ABSTRACT This paper describes ongoing research into the application of machine learning techniques for improving access to governmental information in complex digital libraries. Under the auspices of the GovStat Project, our goal is to identify a small number of semantically valid concepts that adequately spans the intellectual domain of a collection. The goal of this discovery is twofold. First we desire a practical aid for information architects. Second, automatically derived documentconcept relationships are a necessary precondition for realworld deployment of many dynamic interfaces. The current study compares concept learning strategies based on three document representations: keywords, titles, and full-text. In statistical and user-based studies, human-created keywords provide significant improvements in concept learning over both title-only and full-text representations. Categories and Subject Descriptors H.3.7 [Information Storage and Retrieval]: Digital Libraries-Systems Issues, User Issues; H.3.3 [Information Storage and Retrieval]: Information Search and RetrievalClustering General Terms Design, Experimentation 1. INTRODUCTION The GovStat Project is a joint effort of the University of North Carolina Interaction Design Lab and the University of Maryland Human-Computer Interaction Lab1 . Citing end-user difficulty in finding governmental information (especially statistical data) online, the project seeks to create an integrated model of user access to US government statistical information that is rooted in realistic data models and innovative user interfaces. To enable such models and interfaces, we propose a data-driven approach, based on data mining and machine learning techniques. In particular, our work analyzes a particular digital library-the website of the Bureau of Labor Statistics2 (BLS)-in efforts to discover a small number of linguistically meaningful concepts, or bins, that collectively summarize the semantic domain of the site. The project goal is to classify the site``s web content according to these inferred concepts as an initial step towards data filtering via active user interfaces (cf. [13]). Many digital libraries already make use of content classification, both explicitly and implicitly; they divide their resources manually by topical relation; they organize content into hierarchically oriented file systems. The goal of the present 1 http://www.ils.unc.edu/govstat 2 http://www.bls.gov 151 research is to develop another means of browsing the content of these collections. By analyzing the distribution of terms across documents, our goal is to supplement the agency``s pre-existing information structures. 
Statistical learning technologies are appealing in this context insofar as they stand to define a data-driven-as opposed to an agency-drivennavigational structure for a site. Our approach combines supervised and unsupervised learning techniques. A pure document clustering [12] approach to such a large, diverse collection as BLS led to poor results in early tests [6]. But strictly supervised techniques [5] are inappropriate, too. Although BLS designers have defined high-level subject headings for their collections, as we discuss in Section 2, this scheme is less than optimal. Thus we hope to learn an additional set of concepts by letting the data speak for themselves. The remainder of this paper describes the details of our concept discovery efforts and subsequent evaluation. In Section 2 we describe the previously existing, human-created conceptual structure of the BLS website. This section also describes evidence that this structure leaves room for improvement. Next (Sections 3-5), we turn to a description of the concepts derived via content clustering under three document representations: keyword, title only, and full-text. Section 6 describes a two-part evaluation of the derived conceptual structures. Finally, we conclude in Section 7 by outlining upcoming work on the project. 2. STRUCTURING ACCESS TO THE BLS WEBSITE The Bureau of Labor Statistics is a federal government agency charged with compiling and publishing statistics pertaining to labor and production in the US and abroad. Given this broad mandate, the BLS publishes a wide array of information, intended for diverse audiences. The agency``s website acts as a clearinghouse for this process. With over 15,000 text/html documents (and many more documents if spreadsheets and typeset reports are included), providing access to the collection provides a steep challenge to information architects. 2.1 The Relation Browser The starting point of this work is the notion that access to information in the BLS website could be improved by the addition of a dynamic interface such as the relation browser described by Marchionini and Brunk [13]. The relation browser allows users to traverse complex data sets by iteratively slicing the data along several topics. In Figure 1 we see a prototype instantiation of the relation browser, applied to the FedStats website3 . The relation browser supports information seeking by allowing users to form queries in a stepwise fashion, slicing and re-slicing the data as their interests dictate. Its motivation is in keeping with Shneiderman``s suggestion that queries and their results should be tightly coupled [2]. Thus in Fig3 http://www.fedstats.gov Figure 1: Relation Browser Prototype ure 1, users might limit their search set to those documents about energy. Within this subset of the collection, they might further eliminate documents published more than a year ago. Finally, they might request to see only documents published in PDF format. As Marchionini and Brunk discuss, capturing the publication date and format of documents is trivial. But successful implementations of the relation browser also rely on topical classification. This presents two stumbling blocks for system designers: • Information architects must define the appropriate set of topics for their collection • Site maintainers must classify each document into its appropriate categories These tasks parallel common problems in the metadata community: defining appropriate elements and marking up documents to support metadata-aware information access. 
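As a rough illustration of the stepwise slicing described above, the following sketch filters a handful of hypothetical document records facet by facet; the field names and values are invented for the example and are not the actual GovStat or FedStats metadata.

```python
# Minimal sketch of relation-browser-style slicing: each step adds a facet
# constraint and the result set is narrowed accordingly (hypothetical records).
docs = [
    {"id": 1, "topic": "energy", "year": 2004, "format": "pdf"},
    {"id": 2, "topic": "energy", "year": 2002, "format": "html"},
    {"id": 3, "topic": "prices", "year": 2004, "format": "pdf"},
]

def refine(results, **facets):
    return [d for d in results if all(d[k] == v for k, v in facets.items())]

step1 = refine(docs, topic="energy")     # slice by topic
step2 = refine(step1, year=2004)         # then by recency
step3 = refine(step2, format="pdf")      # then by format
print([d["id"] for d in step3])          # -> [1]
```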
Given a collection of over 15,000 documents, these hurdles are especially daunting, and automatic methods of approaching them are highly desirable.
2.2 A Pre-Existing Structure
Prior to our involvement with the project, designers at BLS created a shallow classificatory structure for the most important documents in their website. As seen in Figure 2, the BLS home page organizes 65 top-level documents into 15 categories. These include topics such as Employment and Unemployment, Productivity, and Inflation and Spending.
Figure 2: The BLS Home Page
We hoped initially that these pre-defined categories could be used to train a 15-way document classifier, thus automating the process of populating the relation browser altogether. However, this approach proved unsatisfactory. In personal meetings, BLS officials voiced dissatisfaction with the existing topics. Their form, it was argued, owed as much to the institutional structure of BLS as it did to the inherent topology of the website's information space. In other words, the topics reflected official divisions rather than semantic clusters. The BLS agents suggested that re-designing this classification structure would be desirable. The agents' misgivings were borne out in subsequent analysis. The BLS topics comprise a shallow classificatory structure; each of the 15 top-level categories is linked to a small number of related pages. Thus there are 7 pages associated with Inflation. Altogether, the link structure of this classificatory system contains 65 documents; that is, excluding navigational links, there are 65 documents linked from the BLS home page, where each hyperlink connects a document to a topic (pages can be linked to multiple topics). Based on this hyperlink structure, we defined M, a symmetric 65x65 matrix, where m_ij counts the number of topics in which documents i and j are both classified on the BLS home page. To analyze the redundancy inherent in the pre-existing structure, we derived the principal components of M (cf. [11]). Figure 3 shows the resultant scree plot, i.e. the magnitude of the kth eigenvalue plotted against its rank; during principal component analysis, scree plots visualize the amount of variance captured by each component.
Figure 3: Scree Plot of BLS Categories
Because all 65 documents belong to at least one BLS topic, the rank of M is guaranteed to be less than or equal to 15 (hence, eigenvalues 16 ... 65 = 0). What is surprising about Figure 3, however, is the precipitous decline in magnitude among the first four eigenvalues. The four largest eigenvalues account for 62.2% of the total variance in the data. This fact suggests a high degree of redundancy among the topics. Topical redundancy is not in itself problematic. However, the documents in this very shallow classificatory structure are almost all gateways to more specific information. Thus the listing of the Producer Price Index under three categories could be confusing to the site's users. In light of this potential for confusion and the agency's own request for redesign, we undertook the task of topic discovery described in the following sections.
3.
A HYBRID APPROACH TO TOPIC DISCOVERY To aid in the discovery of a new set of high-level topics for the BLS website, we turned to unsupervised machine learning methods. In efforts to let the data speak for themselves, we desired a means of concept discovery that would be based not on the structure of the agency, but on the content of the material. To begin this process, we crawled the BLS website, downloading all documents of MIME type text/html. This led to a corpus of 15,165 documents. Based on this corpus, we hoped to derive k ≈ 10 topical categories, such that each document di is assigned to one or more classes. 153 Document clustering (cf. [16]) provided an obvious, but only partial solution to the problem of automating this type of high-level information architecture discovery. The problems with standard clustering are threefold. 1. Mutually exclusive clusters are inappropriate for identifying the topical content of documents, since documents may be about many subjects. 2. Due to the heterogeneity of the data housed in the BLS collection (tables, lists, surveys, etc.), many documents'' terms provide noisy topical information. 3. For application to the relation browser, we require a small number (k ≈ 10) of topics. Without significant data reduction, term-based clustering tends to deliver clusters at too fine a level of granularity. In light of these problems, we take a hybrid approach to topic discovery. First, we limit the clustering process to a sample of the entire collection, described in Section 4. Working on a focused subset of the data helps to overcome problems two and three, listed above. To address the problem of mutual exclusivity, we combine unsupervised with supervised learning methods, as described in Section 5. 4. FOCUSING ON CONTENT-RICH DOCUMENTS To derive empirically evidenced topics we initially turned to cluster analysis. Let A be the n×p data matrix with n observations in p variables. Thus aij shows the measurement for the ith observation on the jth variable. As described in [12], the goal of cluster analysis is to assign each of the n observations to one of a small number k groups, each of which is characterized by high intra-cluster correlation and low inter-cluster correlation. Though the algorithms for accomplishing such an arrangement are legion, our analysis focuses on k-means clustering5 , during which, each observation oi is assigned to the cluster Ck whose centroid is closest to it, in terms of Euclidean distance. Readers interested in the details of the algorithm are referred to [12] for a thorough treatment of the subject. Clustering by k-means is well-studied in the statistical literature, and has shown good results for text analysis (cf. [8, 16]). However, k-means clustering requires that the researcher specify k, the number of clusters to define. When applying k-means to our 15,000 document collection, indicators such as the gap statistic [17] and an analysis of the mean-squared distance across values of k suggested that k ≈ 80 was optimal. This paramterization led to semantically intelligible clusters. However, 80 clusters are far too many for application to an interface such as the relation 5 We have focused on k-means as opposed to other clustering algorithms for several reasons. Chief among these is the computational efficiency enjoyed by the k-means approach. Because we need only a flat clustering there is little to be gained by the more expensive hierarchical algorithms. 
In future work we will turn to model-based clustering [7] as a more principled method of selecting the number of clusters and of representing clusters. browser. Moreover, the granularity of these clusters was unsuitably fine. For instance, the 80-cluster solution derived a cluster whose most highly associated words (in terms of log-odds ratio [1]) were drug, pharmacy, and chemist. These words are certainly related, but they are related at a level of specificity far below what we sought. To remedy the high dimensionality of the data, we resolved to limit the algorithm to a subset of the collection. In consultation with employees of the BLS, we continued our analysis on documents that form a series titled From the Editor``s Desk6 . These are brief articles, written by BLS employees. BLS agents suggested that we focus on the Editor``s Desk because it is intended to span the intellectual domain of the agency. The column is published daily, and each entry describes an important current issue in the BLS domain. The Editor``s Desk column has been written daily (five times per week) since 1998. As such, we operated on a set of N = 1279 documents. Limiting attention to these 1279 documents not only reduced the dimensionality of the problem. It also allowed the clustering process to learn on a relatively clean data set. While the entire BLS collection contains a great deal of nonprose text (i.e. tables, lists, etc.), the Editor``s Desk documents are all written in clear, journalistic prose. Each document is highly topical, further aiding the discovery of termtopic relations. Finally, the Editor``s Desk column provided an ideal learning environment because it is well-supplied with topical metadata. Each of the 1279 documents contains a list of one or more keywords. Additionally, a subset of the documents (1112) contained a subject heading. This metadata informed our learning and evaluation, as described in Section 6.1. 5. COMBINING SUPERVISED AND UNSUPERVISED LEARNING FORTOPIC DISCOVERY To derive suitably general topics for the application of a dynamic interface to the BLS collection, we combined document clustering with text classification techniques. Specifically, using k-means, we clustered each of the 1279 documents into one of k clusters, with the number of clusters chosen by analyzing the within-cluster mean squared distance at different values of k (see Section 6.1). Constructing mutually exclusive clusters violates our assumption that documents may belong to multiple classes. However, these clusters mark only the first step in a two-phase process of topic identification. At the end of the process, documentcluster affinity is measured by a real-valued number. Once the Editor``s Desk documents were assigned to clusters, we constructed a k-way classifier that estimates the strength of evidence that a new document di is a member of class Ck. We tested three statistical classification techniques: probabilistic Rocchio (prind), naive Bayes, and support vector machines (SVMs). All were implemented using McCallum``s BOW text classification library [14]. Prind is a probabilistic version of the Rocchio classification algorithm [9]. Interested readers are referred to Joachims'' article for 6 http://www.bls.gov/opub/ted 154 further details of the classification method. Like prind, naive Bayes attempts to classify documents into the most probable class. It is described in detail in [15]. Finally, support vector machines were thoroughly explicated by Vapnik [18], and applied specifically to text in [10]. 
They define a decision boundary by finding the maximally separating hyperplane in a high-dimensional vector space in which document classes become linearly separable. Having clustered the documents and trained a suitable classifier, the remaining 14,000 documents in the collection are labeled by means of automatic classification. That is, for each document di we derive a k-dimensional vector, quantifying the association between di and each class C1 ... Ck. Deriving topic scores via naive Bayes for the entire 15,000document collection required less than two hours of CPU time. The output of this process is a score for every document in the collection on each of the automatically discovered topics. These scores may then be used to populate a relation browser interface, or they may be added to a traditional information retrieval system. To use these weights in the relation browser we currently assign to each document the two topics on which it scored highest. In future work we will adopt a more rigorous method of deriving documenttopic weight thresholds. Also, evaluation of the utility of the learned topics for users will be undertaken. 6. EVALUATION OF CONCEPT DISCOVERY Prior to implementing a relation browser interface and undertaking the attendant user studies, it is of course important to evaluate the quality of the inferred concepts, and the ability of the automatic classifier to assign documents to the appropriate subjects. To evaluate the success of the two-stage approach described in Section 5, we undertook two experiments. During the first experiment we compared three methods of document representation for the clustering task. The goal here was to compare the quality of document clusters derived by analysis of full-text documents, documents represented only by their titles, and documents represented by human-created keyword metadata. During the second experiment, we analyzed the ability of the statistical classifiers to discern the subject matter of documents from portions of the database in addition to the Editor``s Desk. 6.1 Comparing Document Representations Documents from The Editor``s Desk column came supplied with human-generated keyword metadata. Additionally, The titles of the Editor``s Desk documents tend to be germane to the topic of their respective articles. With such an array of distilled evidence of each document``s subject matter, we undertook a comparison of document representations for topic discovery by clustering. We hypothesized that keyword-based clustering would provide a useful model. But we hoped to see whether comparable performance could be attained by methods that did not require extensive human indexing, such as the title-only or full-text representations. To test this hypothesis, we defined three modes of document representation-full-text, title-only, and keyword only-we generated three sets of topics, Tfull, Ttitle, and Tkw, respectively. Topics based on full-text documents were derived by application of k-means clustering to the 1279 Editor``s Desk documents, where each document was represented by a 1908dimensional vector. These 1908 dimensions captured the TF.IDF weights [3] of each term ti in document dj, for all terms that occurred at least three times in the data. To arrive at the appropriate number of clusters for these data, we inspected the within-cluster mean-squared distance for each value of k = 1 ... 20. 
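A minimal sketch of this inspection follows, using scikit-learn's TF.IDF vectorizer and k-means as stand-ins for the tooling actually used in the study; the corpus below is a tiny placeholder for the 1,279 Editor's Desk articles, so the min_df threshold is relaxed accordingly.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Placeholder corpus standing in for the Editor's Desk articles.
docs = [
    "unemployment rate fell as payroll employment rose",
    "consumer price index and inflation in energy prices",
    "employer costs for employee compensation and benefits",
    "productivity and output in the nonfarm business sector",
    "occupational safety and health in manufacturing",
    "import prices and international petroleum costs",
]

# The paper keeps terms occurring at least three times; min_df=1 here only
# because the placeholder corpus is tiny.
X = TfidfVectorizer(min_df=1).fit_transform(docs)

# Inspect mean within-cluster squared distance across k (k = 1..20 in the paper).
for k in range(1, min(20, X.shape[0]) + 1):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    print(k, km.inertia_ / X.shape[0])
```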
As k approached 10 the reduction in error with the addition of more clusters declined notably, suggesting that k ≈ 10 would yield good divisions. To select a single integer value, we calculated which value of k led to the least variation in cluster size. This metric stemmed from a desire to suppress the common result where one large cluster emerges from the k-means algorithm, accompanied by several accordingly small clusters. Without reason to believe that any single topic should have dramatically high prior odds of document membership, this heuristic led to k_full = 10. Clusters based on document titles were constructed similarly. However, in this case, each document was represented in the vector space spanned by the 397 terms that occur at least twice in document titles. Using the same method of minimizing the variance in cluster membership, k_title (the number of clusters in the title-based representation) was also set to 10. The dimensionality of the keyword-based clustering was very similar to that of the title-based approach. There were 299 keywords in the data, all of which were retained. The median number of keywords per document was 7, where a keyword is understood to be either a single word or a multiword term such as consumer price index. It is worth noting that the keywords were not drawn from any controlled vocabulary; they were assigned to documents by publishers at the BLS. Using the keywords, the documents were clustered into 10 classes. To evaluate the clusters derived by each method of document representation, we used the subject headings that were included with 1112 of the Editor's Desk documents. Each of these 1112 documents was assigned one or more subject headings, which were withheld from all of the cluster applications. Like the keywords, subject headings were assigned to documents by BLS publishers. Unlike the keywords, however, subject headings were drawn from a controlled vocabulary. Our analysis began with the assumption that documents with the same subject headings should cluster together. To facilitate this analysis, we took a conservative approach; we considered multi-subject classifications to be unique. Thus if document d_i was assigned to a single subject, prices, while document d_j was assigned to two subjects, international comparisons, prices, documents d_i and d_j are not considered to come from the same class. Table 1 shows all Editor's Desk subject headings that were assigned to at least 10 documents. As noted in the table, there were 19 such subject headings, which altogether covered 609 (54%) of the documents with subjects assigned. These document-subject pairings formed the basis of our analysis. Limiting analysis to subjects with N > 10 kept the resultant χ2 tests suitably robust.

Table 1: Top Editor's Desk Subject Headings
  Subject                             Count
  prices                                 92
  unemployment                           55
  occupational safety & health           53
  international comparisons, prices      48
  manufacturing, prices                  45
  employment                             44
  productivity                           40
  consumer expenditures                  36
  earnings & wages                       27
  employment & unemployment              27
  compensation costs                     25
  earnings & wages, metro. areas         18
  benefits, compensation costs           18
  earnings & wages, occupations          17
  employment, occupations                14
  benefits                               14
  earnings & wage, regions               13
  work stoppages                         12
  earnings & wages, industries           11
  Total                                 609

Table 2: Contingency Table for Three Document Representations
  Representation   Right   Wrong   Accuracy
  Full-text          392     217       0.64
  Title              441     168       0.72
  Keyword            601       8       0.98
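For reference, the kind of χ2 comparison invoked here can be run directly on the right/wrong counts from Table 2; a minimal check with scipy (assuming the counts above) looks like this.

```python
from scipy.stats import chi2_contingency

table = [[601, 8],     # keyword: right, wrong
         [441, 168]]   # title:   right, wrong
chi2, p, dof, expected = chi2_contingency(table)
print(chi2, p)         # p is far below 0.001
```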
The clustering derived by each document representation was tested by its ability to collocate documents with the same subjects. Thus for each of the 19 subject headings in Table 1, Si, we calculated the proportion of documents assigned to Si that each clustering co-classified. Further, we assumed that whichever cluster captured the majority of documents for a given class constituted the right answer for that class. For instance, there were 92 documents whose subject heading was prices. Taking the BLS editors' classifications as ground truth, all 92 of these documents should have ended up in the same cluster. Under the full-text representation 52 of these documents were clustered into category 5, while 35 were in category 3, and 5 documents were in category 6. Taking the majority cluster as the putative right home for these documents, we consider the accuracy of this clustering on this subject to be 52/92 = 0.56. Repeating this process for each topic across all three representations led to the contingency table shown in Table 2. The obvious superiority of the keyword-based clustering evidenced by Table 2 was borne out by a χ2 test on the accuracy proportions. Comparing the proportion right and wrong achieved by keyword and title-based clustering led to p < 0.001. Due to this result, in the remainder of this paper, we focus our attention on the clusters derived by analysis of the Editor's Desk keywords. The ten keyword-based clusters are shown in Table 3, represented by the three terms most highly associated with each cluster, in terms of the log-odds ratio. Additionally, each cluster has been given a label by the researchers.

Table 3: Keyword-Based Clusters
  benefits:       plans, benefits, employees
  costs:          compensation, costs, benefits
  international:  import, prices, petroleum
  jobs:           employment, jobs, youth
  occupations:    workers, earnings, operators
  prices:         prices, index, inflation
  productivity:   productivity, output, nonfarm
  safety:         safety, health, occupational
  spending:       expenditures, consumer, spending
  unemployment:   unemployment, mass, jobless

Evaluating the results of clustering is notoriously difficult. In order to lend our analysis suitable rigor and utility, we made several simplifying assumptions. Most problematic is the fact that we have assumed that each document belongs in only a single category. This assumption is certainly false. However, by taking an extremely rigid view of what constitutes a subject (that is, by taking a fully qualified and often multipart subject heading as our unit of analysis) we mitigate this problem. Analogically, this is akin to considering the location of books on a library shelf. Although a given book may cover many subjects, a classification system should be able to collocate books that are extremely similar, say books about occupational safety and health. The most serious liability with this evaluation, then, is the fact that we have compressed multiple subject headings, say prices: international, into single subjects. This flattening obscures the multivalence of documents. We turn to a more realistic assessment of document-class relations in Section 6.2.
6.2 Accuracy of the Document Classifiers
Although the keyword-based clusters appear to classify the Editor's Desk documents very well, their discovery only solved half of the problem required for the successful implementation of a dynamic user interface such as the relation browser. The matter of roughly fourteen thousand unclassified documents remained to be addressed.
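The next passage describes how this was done with the classifiers from Section 5; as a minimal sketch of that cluster-then-classify step, scikit-learn's multinomial naive Bayes can stand in for the BOW library used in the study, with placeholder documents and cluster labels.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Clustered Editor's Desk documents (placeholders) with their cluster labels.
train_docs = ["benefit plans for employees", "import prices for petroleum",
              "unemployment and mass layoffs", "consumer spending expenditures"]
train_labels = ["benefits", "international", "unemployment", "spending"]

# Unlabeled documents standing in for the remaining collection.
rest_docs = ["jobless rates by state", "employee benefit costs"]

vec = CountVectorizer()
clf = MultinomialNB().fit(vec.fit_transform(train_docs), train_labels)

scores = clf.predict_proba(vec.transform(rest_docs))   # one score per document per topic
top_two = scores.argsort(axis=1)[:, -2:]                # keep the two strongest topics
print(clf.classes_, top_two)
```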
To solve this problem, we trained the statistical classifiers described above in Section 5. For each document in the collection di, these classifiers give pi, a k-vector of probabilities or distances (depending on the classification method used), where pik quantifies the strength of association between the ith document and the kth class. All classifiers were trained on the full text of each document, regardless of the representation used to discover the initial clusters. The different training sets were thus constructed simply by changing the 156 Table 4: Cross Validation Results for 3 Classifiers Method Av. Percent Accuracy SE Prind 59.07 1.07 Naive Bayes 75.57 0.4 SVM 75.08 0.68 class variable for each instance (document) to reflect its assigned cluster under a given model. To test the ability of each classifier to locate documents correctly, we first performed a 10-fold cross validation on the Editor``s Desk documents. During cross-validation the data are split randomly into n subsets (in this case n = 10). The process proceeds by iteratively holding out each of the n subsets as a test collection for a model trained on the remaining n − 1 subsets. Cross validation is described in [15]. Using this methodology, we compared the performance of the three classification models described above. Table 4 gives the results from cross validation. Although naive Bayes is not significantly more accurate for these data than the SVM classifier, we limit the remainder of our attention to analysis of its performance. Our selection of naive Bayes is due to the fact that it appears to work comparably to the SVM approach for these data, while being much simpler, both in theory and implementation. Because we have only 1279 documents and 10 classes, the number of training documents per class is relatively small. In addition to models fitted to the Editor``s Desk data, then, we constructed a fourth model, supplementing the training sets of each class by querying the Google search engine7 and applying naive Bayes to the augmented test set. For each class, we created a query by submitting the three terms with the highest log-odds ratio with that class. Further, each query was limited to the domain www.bls.gov. For each class we retrieved up to 400 documents from Google (the actual number varied depending on the size of the result set returned by Google). This led to a training set of 4113 documents in the augmented model, as we call it below8 . Cross validation suggested that the augmented model decreased classification accuracy (accuracy= 58.16%, with standard error= 0.32). As we discuss below, however, augmenting the training set appeared to help generalization during our second experiment. The results of our cross validation experiment are encouraging. However, the success of our classifiers on the Editor``s Desk documents that informed the cross validation study may not be good predictors of the models'' performance on the remainder to the BLS website. To test the generality of the naive Bayes classifier, we solicited input from 11 human judges who were familiar with the BLS website. The sample was chosen by convenience, and consisted of faculty and graduate students who work on the GovStat project. However, none of the reviewers had prior knowledge of the outcome of the classification before their participation. For the experiment, a random sample of 100 documents was drawn from the entire BLS collection. 
7 http://www.google.com
8 A more formal treatment of the combination of labeled and unlabeled data is available in [4].
On average, each reviewer classified 83 documents, placing each document into as many of the categories shown in Table 3 as he or she saw fit. Results from this experiment suggest that room for improvement remains with respect to generalizing to the whole collection from the class models fitted to the Editor's Desk documents.

Table 5: Human-Model Agreement on 100 Sample Docs.
  Human judge 1st choice:
    Model             Model 1st Choice   Model 2nd Choice
    N. Bayes (aug.)                 14                 24
    N. Bayes                        24                  1
  Human judge 2nd choice:
    Model             Model 1st Choice   Model 2nd Choice
    N. Bayes (aug.)                 14                 21
    N. Bayes                        21                  4

In Table 5, we see, for each classifier, the number of documents for which its first or second most probable class was voted best or second best by the 11 human judges. In the context of this experiment, we consider a first- or second-place classification by the machine to be accurate because the relation browser interface operates on a multiway classification, where each document is classified into multiple categories. Thus a document with the correct class as its second choice would still be easily available to a user. Likewise, a correct classification on either the most popular or second most popular category among the human judges is considered correct in cases where a given document was classified into multiple classes. There were 72 multiclass documents in our sample, as seen in Figure 4. The remaining 28 documents were assigned to 1 or 0 classes.
Figure 4: Number of Classes Assigned to Documents by Judges
Under this rationale, the augmented naive Bayes classifier correctly grouped 73 documents, while the smaller model (not augmented by a Google search) correctly classified 50. The resultant χ2 test gave p = 0.001, suggesting that increasing the training set improved the ability of the naive Bayes model to generalize from the Editor's Desk documents to the collection as a whole. However, the improvement afforded by the augmented model comes at some cost. In particular, the augmented model is significantly inferior to the model trained solely on Editor's Desk documents if we concern ourselves only with documents selected by the majority of human reviewers, i.e. only first-choice classes. Limiting the right answers to the left column of Table 5 gives p = 0.02 in favor of the non-augmented model. For the purposes of applying the relation browser to complex digital library content (where documents will be classified along multiple categories), the augmented model is preferable. But this is not necessarily the case in general. It must also be said that 73% accuracy under a fairly liberal test condition leaves room for improvement in our assignment of topics to categories. We may begin to understand the shortcomings of the described techniques by consulting Figure 5, which shows the distribution of categories across documents given by humans and by the augmented naive Bayes model. The majority of reviewers put documents into only three categories: jobs, benefits, and occupations. On the other hand, the naive Bayes classifier distributed classes more evenly across the topics.
This behavior suggests areas for future improvement. Most importantly, we observed a strong correlation among the three most frequent classes among the human judges (for instance, there was 68% correlation between benefits and occupations). This suggests that improving the clustering to produce topics that were more nearly orthogonal might improve performance.
7. CONCLUSIONS AND FUTURE WORK
Many developers and maintainers of digital libraries share the basic problem pursued here. Given increasingly large, complex bodies of data, how may we improve access to collections without incurring extraordinary cost, and while also keeping systems receptive to changes in content over time? Data mining and machine learning methods hold a great deal of promise with respect to this problem. Empirical methods of knowledge discovery can aid in the organization and retrieval of information. As we have argued in this paper, these methods may also be brought to bear on the design and implementation of advanced user interfaces. This study explored a hybrid technique for aiding information architects as they implement dynamic interfaces such as the relation browser. Our approach combines unsupervised learning techniques, applied to a focused subset of the BLS website. The goal of this initial stage is to discover the most basic and far-reaching topics in the collection.
Figure 5: Distribution of Classes Across Documents (panels: Human Classifications, Machine Classifications)
Based on a statistical model of these topics, the second phase of our approach uses supervised learning (in particular, a naive Bayes classifier, trained on individual words) to assign topical relations to the remaining documents in the collection. In the study reported here, this approach has demonstrated promise. In its favor, our approach is highly scalable. It also appears to give fairly good results. Comparing three modes of document representation (full-text, title only, and keyword), we found 98% accuracy as measured by collocation of documents with identical subject headings. While it is not surprising that editor-generated keywords should give strong evidence for such learning, their superiority over full-text and titles was dramatic, suggesting that even a small amount of metadata can be very useful for data mining. However, we also found evidence that learning topics from a subset of the collection may lead to overfitted models. After clustering 1279 Editor's Desk documents into 10 categories, we fitted a 10-way naive Bayes classifier to categorize the remaining 14,000 documents in the collection. While we saw fairly good results (classification accuracy of 75% with respect to a small sample of human judges), this experiment forced us to reconsider the quality of the topics learned by clustering. The high correlation among human judgments in our sample suggests that the topics discovered by analysis of the Editor's Desk were not independent.
While we do not desire mutually exclusive categories in our setting, we do desire independence among the topics we model. Overall, then, the techniques described here provide an encouraging start to our work on acquiring subject metadata for dynamic interfaces automatically. It also suggests that a more sophisticated modeling approach might yield 158 better results in the future. In upcoming work we will experiment with streamlining the two-phase technique described here. Instead of clustering documents to find topics and then fitting a model to the learned clusters, our goal is to expand the unsupervised portion of our analysis beyond a narrow subset of the collection, such as The Editor``s Desk. In current work we have defined algorithms to identify documents likely to help the topic discovery task. Supplied with a more comprehensive training set, we hope to experiment with model-based clustering, which combines the clustering and classification processes into a single modeling procedure. Topic discovery and document classification have long been recognized as fundamental problems in information retrieval and other forms of text mining. What is increasingly clear, however, as digital libraries grow in scope and complexity, is the applicability of these techniques to problems at the front-end of systems such as information architecture and interface design. Finally, then, in future work we will build on the user studies undertaken by Marchionini and Brunk in efforts to evaluate the utility of automatically populated dynamic interfaces for the users of digital libraries. 8. REFERENCES [1] A. Agresti. An Introduction to Categorical Data Analysis. Wiley, New York, 1996. [2] C. Ahlberg, C. Williamson, and B. Shneiderman. Dynamic queries for information exploration: an implementation and evaluation. In Proceedings of the SIGCHI conference on Human factors in computing systems, pages 619-626, 1992. [3] R. Baeza-Yates and B. Ribeiro-Neto. Modern Information Retrieval. ACM Press, 1999. [4] A. Blum and T. Mitchell. Combining labeled and unlabeled data with co-training. In Proceedings of the eleventh annual conference on Computational learning theory, pages 92-100. ACM Press, 1998. [5] H. Chen and S. Dumais. Hierarchical classification of web content. In Proceedings of the 23rd annual international ACM SIGIR conference on Research and development in information retrieval, pages 256-263, 2000. [6] M. Efron, G. Marchionini, and J. Zhang. Implications of the recursive representation problem for automatic concept identification in on-line governmental information. In Proceedings of the ASIST Special Interest Group on Classification Research (ASIST SIG-CR), 2003. [7] C. Fraley and A. E. Raftery. How many clusters? which clustering method? answers via model-based cluster analysis. The Computer Journal, 41(8):578-588, 1998. [8] A. K. Jain, M. N. Murty, and P. J. Flynn. Data clustering: a review. ACM Computing Surveys, 31(3):264-323, September 1999. [9] T. Joachims. A probabilistic analysis of the Rocchio algorithm with TFIDF for text categorization. In D. H. Fisher, editor, Proceedings of ICML-97, 14th International Conference on Machine Learning, pages 143-151, Nashville, US, 1997. Morgan Kaufmann Publishers, San Francisco, US. [10] T. Joachims. Text categorization with support vector machines: learning with many relevant features. In C. N´edellec and C. Rouveirol, editors, Proceedings of ECML-98, 10th European Conference on Machine Learning, pages 137-142, Chemnitz, DE, 1998. 
Springer Verlag, Heidelberg, DE. [11] I. T. Jolliffe. Principal Component Analysis. Springer, 2nd edition, 2002. [12] L. Kaufman and P. J. Rousseeuw. Finding Groups in Data: an Introduction to Cluster Analysis. Wiley, 1990. [13] G. Marchionini and B. Brunk. Toward a general relation browser: a GUI for information architects. Journal of Digital Information, 4(1), 2003. http://jodi.ecs.soton.ac.uk/Articles/v04/i01/Marchionini/. [14] A. K. McCallum. Bow: A toolkit for statistical language modeling, text retrieval, classification and clustering. http://www.cs.cmu.edu/~mccallum/bow, 1996. [15] T. Mitchell. Machine Learning. McGraw Hill, 1997. [16] E. Rasmussen. Clustering algorithms. In W. B. Frakes and R. Baeza-Yates, editors, Information Retrieval: Data Structures and Algorithms, pages 419-442. Prentice Hall, 1992. [17] R. Tibshirani, G. Walther, and T. Hastie. Estimating the number of clusters in a dataset via the gap statistic, 2000. http://citeseer.nj.nec.com/tibshirani00estimating.html. [18] V. N. Vapnik. The Nature of Statistical Learning Theory. Springer, 2000.
Machine Learning for Information Architecture in a Large Governmental Website ABSTRACT This paper describes ongoing research into the application of machine learning techniques for improving access to governmental information in complex digital libraries. Under the auspices of the GovStat Project, our goal is to identify a small number of semantically valid concepts that adequately spans the intellectual domain of a collection. The goal of this discovery is twofold. First we desire a practical aid for information architects. Second, automatically derived document-concept relationships are a necessary precondition for real-world deployment of many dynamic interfaces. The current study compares concept learning strategies based on three document representations: keywords, titles, and full-text. In statistical and user-based studies, human-created keywords provide significant improvements in concept learning over both title-only and full-text representations. 1. INTRODUCTION The GovStat Project is a joint effort of the University of North Carolina Interaction Design Lab and the University of Maryland Human-Computer Interaction Lab. Citing end-user difficulty in finding governmental information (especially statistical data) online, the project seeks to create an integrated model of user access to US government statistical information that is rooted in realistic data models and innovative user interfaces. To enable such models and interfaces, we propose a data-driven approach, based on data mining and machine learning techniques. In particular, our work analyzes a particular digital library--the website of the Bureau of Labor Statistics (BLS)--in efforts to discover a small number of linguistically meaningful concepts, or "bins," that collectively summarize the semantic domain of the site. The project goal is to classify the site's web content according to these inferred concepts as an initial step towards data filtering via active user interfaces (cf. [13]). Many digital libraries already make use of content classification, both explicitly and implicitly; they divide their resources manually by topical relation; they organize content into hierarchically oriented file systems. The goal of the present research is to develop another means of browsing the content of these collections. By analyzing the distribution of terms across documents, our goal is to supplement the agency's pre-existing information structures. Statistical learning technologies are appealing in this context insofar as they stand to define a data-driven--as opposed to an agency-driven--navigational structure for a site. Our approach combines supervised and unsupervised learning techniques. A pure document clustering [12] approach to such a large, diverse collection as BLS led to poor results in early tests [6]. But strictly supervised techniques [5] are inappropriate, too. Although BLS designers have defined high-level subject headings for their collections, as we discuss in Section 2, this scheme is less than optimal. Thus we hope to learn an additional set of concepts by letting the data speak for themselves. The remainder of this paper describes the details of our concept discovery efforts and subsequent evaluation. In Section 2 we describe the previously existing, human-created conceptual structure of the BLS website. This section also describes evidence that this structure leaves room for improvement.
Next (Sections 3--5), we turn to a description of the concepts derived via content clustering under three document representations: keyword, title only, and full-text. Section 6 describes a two-part evaluation of the derived conceptual structures. Finally, we conclude in Section 7 by outlining upcoming work on the project. Figure 1: Relation Browser Prototype 2. STRUCTURING ACCESS TO THE BLS WEBSITE The Bureau of Labor Statistics is a federal government agency charged with compiling and publishing statistics pertaining to labor and production in the US and abroad. Given this broad mandate, the BLS publishes a wide array of information, intended for diverse audiences. The agency's website acts as a clearinghouse for this process. With over 15,000 text/html documents (and many more documents if spreadsheets and typeset reports are included), providing access to the collection provides a steep challenge to information architects. 2.1 The Relation Browser The starting point of this work is the notion that access to information in the BLS website could be improved by the addition of a dynamic interface such as the relation browser described by Marchionini and Brunk [13]. The relation browser allows users to traverse complex data sets by iteratively slicing the data along several topics. In Figure 1 we see a prototype instantiation of the relation browser, applied to the FedStats website. The relation browser supports information seeking by allowing users to form queries in a stepwise fashion, slicing and re-slicing the data as their interests dictate. Its motivation is in keeping with Shneiderman's suggestion that queries and their results should be tightly coupled [2]. Thus in Figure 1, users might limit their search set to those documents about "energy." Within this subset of the collection, they might further eliminate documents published more than a year ago. Finally, they might request to see only documents published in PDF format. As Marchionini and Brunk discuss, capturing the publication date and format of documents is trivial. But successful implementations of the relation browser also rely on topical classification. This presents two stumbling blocks for system designers: • Information architects must define the appropriate set of topics for their collection • Site maintainers must classify each document into its appropriate categories These tasks parallel common problems in the metadata community: defining appropriate elements and marking up documents to support metadata-aware information access. Given a collection of over 15,000 documents, these hurdles are especially daunting, and automatic methods of approaching them are highly desirable. 2.2 A Pre-Existing Structure Prior to our involvement with the project, designers at BLS created a shallow classificatory structure for the most important documents in their website. As seen in Figure 2, the BLS home page organizes 65 "top-level" documents into 15 categories. These include topics such as Employment and Unemployment, Productivity, and Inflation and Spending. Figure 2: The BLS Home Page Figure 3: Scree Plot of BLS Categories We hoped initially that these pre-defined categories could be used to train a 15-way document classifier, thus automating the process of populating the relation browser altogether. However, this approach proved unsatisfactory. In personal meetings, BLS officials voiced dissatisfaction with the existing topics.
Their form, it was argued, owed as much to the institutional structure of BLS as it did to the inherent topology of the website's information space. In other words, the topics reflected official divisions rather than semantic clusters. The BLS agents suggested that re-designing this classification structure would be desirable. The agents' misgivings were borne out in subsequent analysis. The BLS topics comprise a shallow classificatory structure; each of the 15 top-level categories is linked to a small number of related pages. Thus there are 7 pages associated with Inflation. Altogether, the link structure of this classificatory system contains 65 documents; that is, excluding navigational links, there are 65 documents linked from the BLS home page, where each hyperlink connects a document to a topic (pages can be linked to multiple topics). Based on this hyperlink structure, we defined M, a symmetric 65 × 65 matrix, where mij counts the number of topics in which documents i and j are both classified on the BLS home page. To analyze the redundancy inherent in the pre-existing structure, we derived the principal components of M (cf. [11]). Figure 3 shows the resultant scree plot (a scree plot shows the magnitude of the kth eigenvalue versus its rank; during principal component analysis, scree plots visualize the amount of variance captured by each component). Because all 65 documents belong to at least one BLS topic, the rank of M is guaranteed to be less than or equal to 15 (hence, eigenvalues 16...65 = 0). What is surprising about Figure 3, however, is the precipitous decline in magnitude among the first four eigenvalues. The four largest eigenvalues account for 62.2% of the total variance in the data. This fact suggests a high degree of redundancy among the topics. Topical redundancy is not in itself problematic. However, the documents in this very shallow classificatory structure are almost all gateways to more specific information. Thus the listing of the Producer Price Index under three categories could be confusing to the site's users. In light of this potential for confusion and the agency's own request for redesign, we undertook the task of topic discovery described in the following sections. 3. A HYBRID APPROACH TO TOPIC DISCOVERY To aid in the discovery of a new set of high-level topics for the BLS website, we turned to unsupervised machine learning methods. In efforts to let the data speak for themselves, we desired a means of concept discovery that would be based not on the structure of the agency, but on the content of the material. To begin this process, we crawled the BLS website, downloading all documents of MIME type text/html. This led to a corpus of 15,165 documents. Based on this corpus, we hoped to derive k ≈ 10 topical categories, such that each document di is assigned to one or more classes. Document clustering (cf. [16]) provided an obvious, but only partial solution to the problem of automating this type of high-level information architecture discovery. The problems with standard clustering are threefold. 1. Mutually exclusive clusters are inappropriate for identifying the topical content of documents, since documents may be about many subjects. 2. Due to the heterogeneity of the data housed in the BLS collection (tables, lists, surveys, etc.), many documents' terms provide noisy topical information. 3. For application to the relation browser, we require a small number (k ≈ 10) of topics.
Without significant data reduction, term-based clustering tends to deliver clusters at too fine a level of granularity. In light of these problems, we take a hybrid approach to topic discovery. First, we limit the clustering process to a sample of the entire collection, described in Section 4. Working on a focused subset of the data helps to overcome problems two and three, listed above. To address the problem of mutual exclusivity, we combine unsupervised with supervised learning methods, as described in Section 5. 4. FOCUSING ON CONTENT-RICH DOCUMENTS To derive empirically evidenced topics we initially turned to cluster analysis. Let A be the n × p data matrix with n observations in p variables. Thus aij shows the measurement for the ith observation on the jth variable. As described in [12], the goal of cluster analysis is to assign each of the n observations to one of a small number k groups, each of which is characterized by high intra-cluster correlation and low inter-cluster correlation. Though the algorithms for accomplishing such an arrangement are legion, our analysis focuses on k-means clustering, during which each observation oi is assigned to the cluster Ck whose centroid is closest to it, in terms of Euclidean distance. (We have focused on k-means as opposed to other clustering algorithms for several reasons. Chief among these is the computational efficiency enjoyed by the k-means approach. Because we need only a flat clustering there is little to be gained by the more expensive hierarchical algorithms. In future work we will turn to model-based clustering [7] as a more principled method of selecting the number of clusters and of representing clusters.) Readers interested in the details of the algorithm are referred to [12] for a thorough treatment of the subject. Clustering by k-means is well-studied in the statistical literature, and has shown good results for text analysis (cf. [8, 16]). However, k-means clustering requires that the researcher specify k, the number of clusters to define. When applying k-means to our 15,000 document collection, indicators such as the gap statistic [17] and an analysis of the mean-squared distance across values of k suggested that k ≈ 80 was optimal. This parameterization led to semantically intelligible clusters. However, 80 clusters are far too many for application to an interface such as the relation browser. Moreover, the granularity of these clusters was unsuitably fine. For instance, the 80-cluster solution derived a cluster whose most highly associated words (in terms of log-odds ratio [1]) were drug, pharmacy, and chemist. These words are certainly related, but they are related at a level of specificity far below what we sought. To remedy the high dimensionality of the data, we resolved to limit the algorithm to a subset of the collection. In consultation with employees of the BLS, we continued our analysis on documents that form a series titled From the Editor's Desk. These are brief articles, written by BLS employees. BLS agents suggested that we focus on the Editor's Desk because it is intended to span the intellectual domain of the agency. The column is published daily, and each entry describes an important current issue in the BLS domain. The Editor's Desk column has been written daily (five times per week) since 1998. As such, we operated on a set of N = 1279 documents. Limiting attention to these 1279 documents not only reduced the dimensionality of the problem.
It also allowed the clustering process to learn on a relatively clean data set. While the entire BLS collection contains a great deal of non-prose text (i.e. tables, lists, etc.), the Editor's Desk documents are all written in clear, journalistic prose. Each document is highly topical, further aiding the discovery of term-topic relations. Finally, the Editor's Desk column provided an ideal learning environment because it is well-supplied with topical metadata. Each of the 1279 documents contains a list of one or more keywords. Additionally, a subset of the documents (1112) contained a subject heading. This metadata informed our learning and evaluation, as described in Section 6.1. 5. COMBINING SUPERVISED AND UNSUPERVISED LEARNING FOR TOPIC DISCOVERY To derive suitably general topics for the application of a dynamic interface to the BLS collection, we combined document clustering with text classification techniques. Specifically, using k-means, we clustered each of the 1279 documents into one of k clusters, with the number of clusters chosen by analyzing the within-cluster mean squared distance at different values of k (see Section 6.1). Constructing mutually exclusive clusters violates our assumption that documents may belong to multiple classes. However, these clusters mark only the first step in a two-phase process of topic identification. At the end of the process, document-cluster affinity is measured by a real-valued number. Once the Editor's Desk documents were assigned to clusters, we constructed a k-way classifier that estimates the strength of evidence that a new document di is a member of class Ck. We tested three statistical classification techniques: probabilistic Rocchio (prind), naive Bayes, and support vector machines (SVMs). All were implemented using McCallum's BOW text classification library [14]. Prind is a probabilistic version of the Rocchio classification algorithm [9]. Interested readers are referred to Joachims' article for further details of the classification method. Like prind, naive Bayes attempts to classify documents into the most probable class. It is described in detail in [15]. Finally, support vector machines were thoroughly explicated by Vapnik [18], and applied specifically to text in [10]. They define a decision boundary by finding the maximally separating hyperplane in a high-dimensional vector space in which document classes become linearly separable. Having clustered the documents and trained a suitable classifier, the remaining 14,000 documents in the collection are labeled by means of automatic classification. That is, for each document di we derive a k-dimensional vector, quantifying the association between di and each class C1...Ck. Deriving topic scores via naive Bayes for the entire 15,000 document collection required less than two hours of CPU time. The output of this process is a score for every document in the collection on each of the automatically discovered topics. These scores may then be used to populate a relation browser interface, or they may be added to a traditional information retrieval system. To use these weights in the relation browser we currently assign to each document the two topics on which it scored highest. In future work we will adopt a more rigorous method of deriving document-topic weight thresholds. Also, evaluation of the utility of the learned topics for users will be undertaken.
6. EVALUATION OF CONCEPT DISCOVERY Prior to implementing a relation browser interface and undertaking the attendant user studies, it is of course important to evaluate the quality of the inferred concepts, and the ability of the automatic classifier to assign documents to the appropriate subjects. To evaluate the success of the two-stage approach described in Section 5, we undertook two experiments. During the first experiment we compared three methods of document representation for the clustering task. The goal here was to compare the quality of document clusters derived by analysis of full-text documents, documents represented only by their titles, and documents represented by human-created keyword metadata. During the second experiment, we analyzed the ability of the statistical classifiers to discern the subject matter of documents from portions of the database in addition to the Editor's Desk. 6.1 Comparing Document Representations Documents from The Editor's Desk column came supplied with human-generated keyword metadata. Additionally, the titles of the Editor's Desk documents tend to be germane to the topic of their respective articles. With such an array of distilled evidence of each document's subject matter, we undertook a comparison of document representations for topic discovery by clustering. We hypothesized that keyword-based clustering would provide a useful model. But we hoped to see whether comparable performance could be attained by methods that did not require extensive human indexing, such as the title-only or full-text representations. To test this hypothesis, we defined three modes of document representation--full-text, title-only, and keyword-only--and generated three sets of topics, Tfull, Ttitle, and Tkw, respectively. Topics based on full-text documents were derived by application of k-means clustering to the 1279 Editor's Desk documents, where each document was represented by a 1908-dimensional vector. These 1908 dimensions captured the TF.IDF weights [3] of each term ti in document dj, for all terms that occurred at least three times in the data. To arrive at the appropriate number of clusters for these data, we inspected the within-cluster mean-squared distance for each value of k = 1...20. As k approached 10 the reduction in error with the addition of more clusters declined notably, suggesting that k ≈ 10 would yield good divisions. To select a single integer value, we calculated which value of k led to the least variation in cluster size. This metric stemmed from a desire to suppress the common result where one large cluster emerges from the k-means algorithm, accompanied by several accordingly small clusters. Without reason to believe that any single topic should have dramatically high prior odds of document membership, this heuristic led to kfull = 10. Clusters based on document titles were constructed similarly. However, in this case, each document was represented in the vector space spanned by the 397 terms that occur at least twice in document titles. Using the same method of minimizing the variance in cluster membership, ktitle--the number of clusters in the title-based representation--was also set to 10. The dimensionality of the keyword-based clustering was very similar to that of the title-based approach. There were 299 keywords in the data, all of which were retained. The median number of keywords per document was 7, where a keyword is understood to be either a single word, or a multiword term such as "consumer price index."
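To make the clustering step concrete, the following minimal sketch traces its outline in Python: TF.IDF vectors over frequent terms, k-means fits across a range of k, and inspection of the within-cluster squared distance to settle on k ≈ 10. The tooling is an assumption (the study does not name a clustering implementation), the variable docs (a list of raw document strings) is hypothetical, and the min_df filter only approximates the "occurred at least three times" term cutoff described above.

# Sketch of the full-text clustering step (assumed tooling: scikit-learn).
# `docs` is a hypothetical list of raw Editor's Desk document strings.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def cluster_fulltext(docs, k_range=range(2, 21), final_k=10):
    # min_df=3 approximates the "term occurs at least three times" filter
    # (it counts document frequency rather than raw term frequency).
    vectorizer = TfidfVectorizer(min_df=3)
    X = vectorizer.fit_transform(docs)   # roughly 1279 x 1908 for the Editor's Desk set
    inertias = {k: KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
                for k in k_range}        # within-cluster squared distance at each k
    # In the study the drop in error flattened near k = 10, so the final model
    # is refit at that value and its cluster labels returned.
    final = KMeans(n_clusters=final_k, n_init=10, random_state=0).fit(X)
    return final.labels_, inertias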
It is worth noting that the keywords were not drawn from any controlled vocabulary; they were assigned to documents by publishers at the BLS. Using the keywords, the documents were clustered into 10 classes. To evaluate the clusters derived by each method of document representation, we used the subject headings that were included with 1112 of the Editor's Desk documents. Each of these 1112 documents was assigned one or more subject headings, which were withheld from all of the cluster applications. Like the keywords, subject headings were assigned to documents by BLS publishers. Unlike the keywords, however, subject headings were drawn from a controlled vocabulary. Our analysis began with the assumption that documents with the same subject headings should cluster together. To facilitate this analysis, we took a conservative approach; we considered multi-subject classifications to be unique. Thus if document di was assigned to a single subject prices, while document dj was assigned to two subjects, international comparisons, prices, documents di and dj are not considered to come from the same class. Table 1 shows all Editor's Desk subject headings that were assigned to at least 10 documents (Table 1: Top Editor's Desk Subject Headings). As noted in the table, there were 19 such subject headings, which altogether covered 609 (54%) of the documents with subjects assigned. These document-subject pairings formed the basis of our analysis. Limiting analysis to subjects with N > 10 kept the resultant χ2 tests suitably robust. The clustering derived by each document representation was tested by its ability to collocate documents with the same subjects. Thus for each of the 19 subject headings in Table 1, Si, we calculated the proportion of documents assigned to Si that each clustering co-classified. Further, we assumed that whichever cluster captured the majority of documents for a given class constituted the "right answer" for that class. For instance, there were 92 documents whose subject heading was prices. Taking the BLS editors' classifications as ground truth, all 92 of these documents should have ended up in the same cluster. Under the full-text representation 52 of these documents were clustered into category 5, while 35 were in category 3, and 5 documents were in category 6. Taking the majority cluster as the putative right home for these documents, we consider the accuracy of this clustering on this subject to be 52/92 = 0.56. Repeating this process for each topic across all three representations led to the contingency table shown in Table 2 (Table 2: Contingency Table for Three Document Representations). The obvious superiority of the keyword-based clustering evidenced by Table 2 was borne out by a χ2 test on the accuracy proportions. Comparing the proportion right and wrong achieved by keyword and title-based clustering led to p < 0.001. Due to this result, in the remainder of this paper, we focus our attention on the clusters derived by analysis of the Editor's Desk keywords. The ten keyword-based clusters are shown in Table 3 (Table 3: Keyword-Based Clusters), represented by the three terms most highly associated with each cluster, in terms of the log-odds ratio. Additionally, each cluster has been given a label by the researchers. Evaluating the results of clustering is notoriously difficult. In order to lend our analysis suitable rigor and utility, we made several simplifying assumptions. Most problematic is the fact that we have assumed that each document belongs in only a single category.
This assumption is certainly false. However, by taking an extremely rigid view of what constitutes a subject--that is, by taking a fully qualified and often multipart subject heading as our unit of analysis--we mitigate this problem. Analogically, this is akin to considering the location of books on a library shelf. Although a given book may cover many subjects, a classification system should be able to collocate books that are extremely similar, say books about occupational safety and health. The most serious liability with this evaluation, then, is the fact that we have compressed multiple subject headings, say prices: international, into single subjects. This flattening obscures the multivalence of documents. We turn to a more realistic assessment of document-class relations in Section 6.2. 6.2 Accuracy of the Document Classifiers Although the keyword-based clusters appear to classify the Editor's Desk documents very well, their discovery only solved half of the problem required for the successful implementation of a dynamic user interface such as the relation browser. The matter of roughly fourteen thousand unclassified documents remained to be addressed. To solve this problem, we trained the statistical classifiers described above in Section 5. For each document in the collection di, these classifiers give pi, a k-vector of probabilities or distances (depending on the classification method used), where pik quantifies the strength of association between the ith document and the kth class. All classifiers were trained on the full text of each document, regardless of the representation used to discover the initial clusters. The different training sets were thus constructed simply by changing the class variable for each instance (document) to reflect its assigned cluster under a given model. To test the ability of each classifier to locate documents correctly, we first performed a 10-fold cross validation on the Editor's Desk documents. During cross-validation the data are split randomly into n subsets (in this case n = 10). The process proceeds by iteratively holding out each of the n subsets as a test collection for a model trained on the remaining n − 1 subsets. Cross validation is described in [15]. Using this methodology, we compared the performance of the three classification models described above. Table 4 gives the results from cross validation (Table 4: Cross Validation Results for 3 Classifiers). Although naive Bayes is not significantly more accurate for these data than the SVM classifier, we limit the remainder of our attention to analysis of its performance. Our selection of naive Bayes is due to the fact that it appears to work comparably to the SVM approach for these data, while being much simpler, both in theory and implementation. Because we have only 1279 documents and 10 classes, the number of training documents per class is relatively small. In addition to models fitted to the Editor's Desk data, then, we constructed a fourth model, supplementing the training sets of each class by querying the Google search engine and applying naive Bayes to the augmented test set. For each class, we created a query by submitting the three terms with the highest log-odds ratio with that class. Further, each query was limited to the domain www.bls.gov. For each class we retrieved up to 400 documents from Google (the actual number varied depending on the size of the result set returned by Google). This led to a training set of 4113 documents in the "augmented model," as we call it below.
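As a rough illustration of the cross-validation protocol just described, the sketch below runs a 10-fold evaluation of a word-count naive Bayes classifier. scikit-learn stands in for the BOW toolkit actually used in the study, and texts (full-text strings) and cluster_labels (the cluster assigned to each Editor's Desk document) are assumed inputs.

# 10-fold cross-validation of a multinomial naive Bayes text classifier.
# scikit-learn is an assumed stand-in for the BOW toolkit used in the study;
# `texts` and `cluster_labels` are hypothetical inputs.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

def cv_accuracy(texts, cluster_labels, folds=10):
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    scores = cross_val_score(model, texts, cluster_labels, cv=folds)
    return scores.mean(), scores.std()   # mean accuracy over the held-out folds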
Cross validation suggested that the augmented model decreased classification accuracy (accuracy = 58.16%, with standard error = 0.32). As we discuss below, however, augmenting the training set appeared to help generalization during our second experiment. The results of our cross validation experiment are encouraging. However, the success of our classifiers on the Editor's Desk documents that informed the cross validation study may not be good predictors of the models' performance on the remainder of the BLS website. To test the generality of the naive Bayes classifier, we solicited input from 11 human judges who were familiar with the BLS website. The sample was chosen by convenience, and consisted of faculty and graduate students who work on the GovStat project. However, none of the reviewers had prior knowledge of the outcome of the classification before their participation. For the experiment, a random sample of 100 documents was drawn from the entire BLS collection. On average each reviewer classified 83 documents, placing each document into as many of the categories shown in Table 3 as he or she saw fit. Results from this experiment suggest that room for improvement remains with respect to generalizing to the whole collection from the class models fitted to the Editor's Desk documents. In Table 5 (Table 5: Human-Model Agreement on 100 Sample Docs.), we see, for each classifier, the number of documents for which its first or second most probable class was voted best or second best by the 11 human judges. In the context of this experiment, we consider a first- or second-place classification by the machine to be accurate because the relation browser interface operates on a multiway classification, where each document is classified into multiple categories. Thus a document with the "correct" class as its second choice would still be easily available to a user. Likewise, a correct classification on either the most popular or second most popular category among the human judges is considered correct in cases where a given document was classified into multiple classes. There were 72 multiclass documents in our sample, as seen in Figure 4. The remaining 28 documents were assigned to 1 or 0 classes. Under this rationale, the augmented naive Bayes classifier correctly grouped 73 documents, while the smaller model (not augmented by a Google search) correctly classified 50. The resultant χ2 test gave p = 0.001, suggesting that increasing the training set improved the ability of the naive Bayes model to generalize from the Editor's Desk documents to the collection as a whole. However, the improvement afforded by the augmented model comes at some cost. In particular, the augmented model is significantly inferior to the model trained solely on Editor's Desk documents if we concern ourselves only with documents selected by the majority of human reviewers--i.e. only first-choice classes. Limiting the right answers to the left column of Table 5 gives p = 0.02 in favor of the non-augmented model. For the purposes of applying the relation browser to complex digital library content (where documents will be classified along multiple categories), the augmented model is preferable. But this is not necessarily the case in general. It must also be said that 73% accuracy under a fairly liberal test condition leaves room for improvement in our assignment of topics to categories.
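The χ2 comparison reported above can be reproduced, at least in outline, from the counts given in the text (73 of 100 sampled documents correct for the augmented model versus 50 of 100 for the smaller model). The exact test procedure is not spelled out here, so the following is only a plausible reconstruction.

# Plausible reconstruction of the chi-square comparison between the augmented
# and non-augmented naive Bayes models, using the counts reported in the text.
from scipy.stats import chi2_contingency

table = [[73, 100 - 73],    # augmented model: correct, incorrect
         [50, 100 - 50]]    # non-augmented model: correct, incorrect
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")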
We may begin to understand the shortcomings of the described techniques by consulting Figure 5, which shows the distribution of categories across documents given by humans and by the augmented naive Bayes model. Figure 4: Number of Classes Assigned to Documents by Judges. The majority of reviewers put documents into only three categories, jobs, benefits, and occupations. On the other hand, the naive Bayes classifier distributed classes more evenly across the topics. This behavior suggests areas for future improvement. Most importantly, we observed a strong correlation among the three most frequent classes among the human judges (for instance, there was 68% correlation between benefits and occupations). This suggests that improving the clustering to produce topics that were more nearly orthogonal might improve performance. 7. CONCLUSIONS AND FUTURE WORK Many developers and maintainers of digital libraries share the basic problem pursued here. Given increasingly large, complex bodies of data, how may we improve access to collections without incurring extraordinary cost, and while also keeping systems receptive to changes in content over time? Data mining and machine learning methods hold a great deal of promise with respect to this problem. Empirical methods of knowledge discovery can aid in the organization and retrieval of information. As we have argued in this paper, these methods may also be brought to bear on the design and implementation of advanced user interfaces. This study explored a hybrid technique for aiding information architects as they implement dynamic interfaces such as the relation browser. Our approach combines unsupervised learning techniques, applied to a focused subset of the BLS website. The goal of this initial stage is to discover the most basic and far-reaching topics in the collection. Figure 5: Distribution of Classes Across Documents. Based on a statistical model of these topics, the second phase of our approach uses supervised learning (in particular, a naive Bayes classifier, trained on individual words), to assign topical relations to the remaining documents in the collection. In the study reported here, this approach has demonstrated promise. In its favor, our approach is highly scalable. It also appears to give fairly good results. Comparing three modes of document representation--full-text, title only, and keyword--we found 98% accuracy as measured by collocation of documents with identical subject headings. While it is not surprising that editor-generated keywords should give strong evidence for such learning, their superiority over full-text and titles was dramatic, suggesting that even a small amount of metadata can be very useful for data mining. However, we also found evidence that learning topics from a subset of the collection may lead to overfitted models. After clustering 1279 Editor's Desk documents into 10 categories, we fitted a 10-way naive Bayes classifier to categorize the remaining 14,000 documents in the collection. While we saw fairly good results (classification accuracy of 75% with respect to a small sample of human judges), this experiment forced us to reconsider the quality of the topics learned by clustering. The high correlation among human judgments in our sample suggests that the topics discovered by analysis of the Editor's Desk were not independent. While we do not desire mutually exclusive categories in our setting, we do desire independence among the topics we model.
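The independence check suggested above can be made operational along the following lines: treat each human-assigned category as a binary indicator vector over the sampled documents and report pairwise Pearson correlations (the 68% figure for benefits and occupations is the kind of value this would surface). The assignments mapping is hypothetical, built from the judges' labels.

# Sketch of a topic-independence check over the human judgments. `assignments`
# is a hypothetical {category: [0/1 per sampled document]} mapping; a high
# correlation between two categories signals non-independent topics.
import numpy as np
from itertools import combinations

def category_correlations(assignments):
    corrs = {}
    for a, b in combinations(sorted(assignments), 2):
        x = np.asarray(assignments[a], dtype=float)
        y = np.asarray(assignments[b], dtype=float)
        corrs[(a, b)] = float(np.corrcoef(x, y)[0, 1])
    return corrs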
Overall, then, the techniques described here provide an encouraging start to our work on acquiring subject metadata for dynamic interfaces automatically. It also suggests that a more sophisticated modeling approach might yield better results in the future. In upcoming work we will experiment with streamlining the two-phase technique described here. Instead of clustering documents to find topics and then fitting a model to the learned clusters, our goal is to expand the unsupervised portion of our analysis beyond a narrow subset of the collection, such as The Editor's Desk. In current work we have defined algorithms to identify documents likely to help the topic discovery task. Supplied with a more comprehensive training set, we hope to experiment with model-based clustering, which combines the clustering and classification processes into a single modeling procedure. Topic discovery and document classification have long been recognized as fundamental problems in information retrieval and other forms of text mining. What is increasingly clear, however, as digital libraries grow in scope and complexity, is the applicability of these techniques to problems at the front-end of systems such as information architecture and interface design. Finally, then, in future work we will build on the user studies undertaken by Marchionini and Brunk in efforts to evaluate the utility of automatically populated dynamic interfaces for the users of digital libraries.
Machine Learning for Information Architecture in a Large Governmental Website ABSTRACT This paper describes ongoing research into the application of machine learning techniques for improving access to governmental information in complex digital libraries. Under the auspices of the GovStat Project, our goal is to identify a small number of semantically valid concepts that adequately spans the intellectual domain of a collection. The goal of this discovery is twofold. First we desire a practical aid for information architects. Second, automatically derived document-concept relationships are a necessary precondition for real-world deployment of many dynamic interfaces. The current study compares concept learning strategies based on three document representations: keywords, titles, and full-text. In statistical and user-based studies, human-created keywords provide significant improvements in concept learning over both title-only and full-text representations. 1. INTRODUCTION The GovStat Project is a joint effort of the University of North Carolina Interaction Design Lab and the University of Maryland Human-Computer Interaction Lab. Citing end-user difficulty in finding governmental information (especially statistical data) online, the project seeks to create an integrated model of user access to US government statistical information that is rooted in realistic data models and innovative user interfaces. To enable such models and interfaces, we propose a data-driven approach, based on data mining and machine learning techniques. In particular, our work analyzes a particular digital library--the website of the Bureau of Labor Statistics (BLS)--in efforts to discover a small number of linguistically meaningful concepts, or "bins," that collectively summarize the semantic domain of the site. The project goal is to classify the site's web content according to these inferred concepts as an initial step towards data filtering via active user interfaces (cf. [13]). Many digital libraries already make use of content classification, both explicitly and implicitly; they divide their resources manually by topical relation; they organize content into hierarchically oriented file systems. The goal of the present research is to develop another means of browsing the content of these collections. By analyzing the distribution of terms across documents, our goal is to supplement the agency's pre-existing information structures. Statistical learning technologies are appealing in this context insofar as they stand to define a data-driven--as opposed to an agency-driven--navigational structure for a site. Our approach combines supervised and unsupervised learning techniques. A pure document clustering [12] approach to such a large, diverse collection as BLS led to poor results in early tests [6]. But strictly supervised techniques [5] are inappropriate, too. Although BLS designers have defined high-level subject headings for their collections, as we discuss in Section 2, this scheme is less than optimal. Thus we hope to learn an additional set of concepts by letting the data speak for themselves. The remainder of this paper describes the details of our concept discovery efforts and subsequent evaluation. In Section 2 we describe the previously existing, human-created conceptual structure of the BLS website. This section also describes evidence that this structure leaves room for improvement.
Next (Sections 3--5), we turn to a description of the concepts derived via content clustering under three document representations: keyword, title only, and full-text. Section 6 describes a two-part evaluation of the derived conceptual structures. Finally, we conclude in Section 7 by outlining upcoming work on the project. Figure 1: Relation Browser Prototype 2. STRUCTURING ACCESS TO THE BLS WEBSITE 2.1 The Relation Browser 2.2 A Pre-Existing Structure 3. A HYBRID APPROACH TO TOPIC DISCOVERY 4. FOCUSING ON CONTENT-RICH DOCUMENTS 5. COMBINING SUPERVISED AND UNSUPERVISED LEARNING FOR TOPIC DISCOVERY 6. EVALUATION OF CONCEPT DISCOVERY 6.1 Comparing Document Representations 6.2 Accuracy of the Document Classifiers 7. CONCLUSIONS AND FUTURE WORK Many developers and maintainers of digital libraries share the basic problem pursued here. Given increasingly large, complex bodies of data, how may we improve access to collections without incurring extraordinary cost, and while also keeping systems receptive to changes in content over time? Data mining and machine learning methods hold a great deal of promise with respect to this problem. Empirical methods of knowledge discovery can aid in the organization and retrieval of information. As we have argued in this paper, these methods may also be brought to bear on the design and implementation of advanced user interfaces. This study explored a hybrid technique for aiding information architects as they implement dynamic interfaces such as the relation browser. Our approach combines unsupervised learning techniques, applied to a focused subset of the BLS website. The goal of this initial stage is to discover the most basic and far-reaching topics in the collection. Figure 5: Distribution of Classes Across Documents. Based on a statistical model of these topics, the second phase of our approach uses supervised learning (in particular, a naive Bayes classifier, trained on individual words), to assign topical relations to the remaining documents in the collection. In the study reported here, this approach has demonstrated promise. In its favor, our approach is highly scalable. It also appears to give fairly good results. Comparing three modes of document representation--full-text, title only, and keyword--we found 98% accuracy as measured by collocation of documents with identical subject headings. While it is not surprising that editor-generated keywords should give strong evidence for such learning, their superiority over full-text and titles was dramatic, suggesting that even a small amount of metadata can be very useful for data mining. However, we also found evidence that learning topics from a subset of the collection may lead to overfitted models. After clustering 1279 Editor's Desk documents into 10 categories, we fitted a 10-way naive Bayes classifier to categorize the remaining 14,000 documents in the collection. While we saw fairly good results (classification accuracy of 75% with respect to a small sample of human judges), this experiment forced us to reconsider the quality of the topics learned by clustering. The high correlation among human judgments in our sample suggests that the topics discovered by analysis of the Editor's Desk were not independent. While we do not desire mutually exclusive categories in our setting, we do desire independence among the topics we model.
Overall, then, the techniques described here provide an encouraging start to our work on acquiring subject metadata for dynamic interfaces automatically. It also suggests that a more sophisticated modeling approach might yield better results in the future. In upcoming work we will experiment with streamlining the two-phase technique described here. Instead of clustering documents to find topics and then fitting a model to the learned clusters, our goal is to expand the unsupervised portion of our analysis beyond a narrow subset of the collection, such as The Editor's Desk. In current work we have defined algorithms to identify documents likely to help the topic discovery task. Supplied with a more comprehensive training set, we hope to experiment with model-based clustering, which combines the clustering and classification processes into a single modeling procedure. Topic discovery and document classification have long been recognized as fundamental problems in information retrieval and other forms of text mining. What is increasingly clear, however, as digital libraries grow in scope and complexity, is the applicability of these techniques to problems at the front-end of systems such as information architecture and interface design. Finally, then, in future work we will build on the user studies undertaken by Marchionini and Brunk in efforts to evaluate the utility of automatically populated dynamic interfaces for the users of digital libraries.
Machine Learning for Information Architecture in a Large Governmental Website ABSTRACT This paper describes ongoing research into the application of machine learning techniques for improving access to governmental information in complex digital libraries. Under the auspices of the GovStat Project, our goal is to identify a small number of semantically valid concepts that adequately spans the intellectual domain of a collection. The goal of this discovery is twofold. First we desire a practical aid for information architects. Second, automatically derived document-concept relationships are a necessary precondition for real-world deployment of many dynamic interfaces. The current study compares concept learning strategies based on three document representations: keywords, titles, and full-text. In statistical and user-based studies, human-created keywords provide significant improvements in concept learning over both title-only and full-text representations. 1. INTRODUCTION To enable such models and interfaces, we propose a data-driven approach, based on data mining and machine learning techniques. The project goal is to classify the site's web content according to these inferred concepts as an initial step towards data filtering via active user interfaces (cf. [13]). The goal of the present research is to develop another means of browsing the content of these collections. By analyzing the distribution of terms across documents, our goal is to supplement the agency's pre-existing information structures. Our approach combines supervised and unsupervised learning techniques. A pure document clustering [12] approach to such a large, diverse collection as BLS led to poor results in early tests [6]. But strictly supervised techniques [5] are inappropriate, too. Although BLS designers have defined high-level subject headings for their collections, as we discuss in Section 2, this scheme is less than optimal. Thus we hope to learn an additional set of concepts by letting the data speak for themselves. The remainder of this paper describes the details of our concept discovery efforts and subsequent evaluation. In Section 2 we describe the previously existing, human-created conceptual structure of the BLS website. This section also describes evidence that this structure leaves room for improvement. Next (Sections 3--5), we turn to a description of the concepts derived via content clustering under three document representations: keyword, title only, and full-text. Section 6 describes a two-part evaluation of the derived conceptual structures. Finally, we conclude in Section 7 by outlining upcoming work on the project. Figure 1: Relation Browser Prototype 7. CONCLUSIONS AND FUTURE WORK Many developers and maintainers of digital libraries share the basic problem pursued here. Given increasingly large, complex bodies of data, how may we improve access to collections without incurring extraordinary cost, and while also keeping systems receptive to changes in content over time? Data mining and machine learning methods hold a great deal of promise with respect to this problem. Empirical methods of knowledge discovery can aid in the organization and retrieval of information. As we have argued in this paper, these methods may also be brought to bear on the design and implementation of advanced user interfaces. This study explored a hybrid technique for aiding information architects as they implement dynamic interfaces such as the relation browser.
Our approach combines unsupervised learning techniques, applied to a focused subset of the BLS website. The goal of this initial stage is to discover the most basic and far-reaching topics in the collection. Figure 5: Distribution of Classes Across Documents. Based on a statistical model of these topics, the second phase of our approach uses supervised learning (in particular, a naive Bayes classifier, trained on individual words), to assign topical relations to the remaining documents in the collection. In the study reported here, this approach has demonstrated promise. In its favor, our approach is highly scalable. It also appears to give fairly good results. Comparing three modes of document representation--full-text, title only, and keyword--we found 98% accuracy as measured by collocation of documents with identical subject headings. However, we also found evidence that learning topics from a subset of the collection may lead to overfitted models. After clustering 1279 Editor's Desk documents into 10 categories, we fitted a 10-way naive Bayes classifier to categorize the remaining 14,000 documents in the collection. The high correlation among human judgments in our sample suggests that the topics discovered by analysis of the Editor's Desk were not independent. While we do not desire mutually exclusive categories in our setting, we do desire independence among the topics we model. Overall, then, the techniques described here provide an encouraging start to our work on acquiring subject metadata for dynamic interfaces automatically. It also suggests that a more sophisticated modeling approach might yield better results in the future. In upcoming work we will experiment with streamlining the two-phase technique described here. Instead of clustering documents to find topics and then fitting a model to the learned clusters, our goal is to expand the unsupervised portion of our analysis beyond a narrow subset of the collection, such as The Editor's Desk. In current work we have defined algorithms to identify documents likely to help the topic discovery task. Topic discovery and document classification have long been recognized as fundamental problems in information retrieval and other forms of text mining. What is increasingly clear, however, as digital libraries grow in scope and complexity, is the applicability of these techniques to problems at the front-end of systems such as information architecture and interface design. Finally, then, in future work we will build on the user studies undertaken by Marchionini and Brunk in efforts to evaluate the utility of automatically populated dynamic interfaces for the users of digital libraries.
C-53
Globally Synchronized Dead-Reckoning with Local Lag for Continuous Distributed Multiplayer Games
Dead-Reckoning (DR) is an effective method to maintain consistency for Continuous Distributed Multiplayer Games (CDMG). Since DR can filter most unnecessary state updates and improve the scalability of a system, it is widely used in commercial CDMG. However, DR cannot maintain high consistency, and this constrains its application in highly interactive games. With the help of global synchronization, DR can achieve higher consistency, but it still cannot eliminate before inconsistency. In this paper, a method named Globally Synchronized DR with Local Lag (GS-DR-LL), which combines local lag and Globally Synchronized DR (GS-DR), is presented. Performance evaluation shows that GS-DR-LL can effectively decrease before inconsistency, and the effects increase with the lag.
[ "dead-reckon", "local lag", "multiplay game", "consist", "gs-dr-ll", "network transmiss delai", "time warp", "accur state", "correct", "physic clock", "usabl and fair", "distribut multi-player game", "continu replic applic" ]
[ "P", "P", "P", "P", "P", "U", "U", "M", "U", "U", "M", "M", "M" ]
Globally Synchronized Dead-Reckoning with Local Lag for Continuous Distributed Multiplayer Games Yi Zhang1, Ling Chen1,2, Gencai Chen1 1College of Computer Science, Zhejiang University, Hangzhou 310027, P.R. China 2School of Computer Science and IT, The University of Nottingham, Nottingham NG8 1BB, UK {m05zhangyi, lingchen, chengc}@cs.zju.edu.cn ABSTRACT Dead-Reckoning (DR) is an effective method to maintain consistency for Continuous Distributed Multiplayer Games (CDMG). Since DR can filter most unnecessary state updates and improve the scalability of a system, it is widely used in commercial CDMG. However, DR cannot maintain high consistency, and this constrains its application in highly interactive games. With the help of global synchronization, DR can achieve higher consistency, but it still cannot eliminate before inconsistency. In this paper, a method named Globally Synchronized DR with Local Lag (GS-DR-LL), which combines local lag and Globally Synchronized DR (GS-DR), is presented. Performance evaluation shows that GS-DR-LL can effectively decrease before inconsistency, and the effects increase with the lag. Categories and Subject Descriptors C.2.4 [Computer-Communication Networks]: Distributed Systems - distributed applications. General Terms Algorithms, Performance, Experimentation. 1. INTRODUCTION Nowadays, many distributed multiplayer games adopt replicated architectures. In such games, the states of entities are changed not only by the operations of players, but also by the passing of time [1, 2]. These games are referred to as Continuous Distributed Multiplayer Games (CDMG). Like other distributed applications, CDMG also suffer from the consistency problem caused by network transmission delay. Although new network techniques (e.g. QoS) can reduce or at least bound the delay, they cannot completely eliminate it, as there exists the physical speed limitation of light; for instance, 100 ms is needed for light to propagate from Europe to Australia [3]. There are many studies about the effects of network transmission delay in different applications [4, 5, 6, 7]. In replication based games, network transmission delay makes the states of local and remote sites inconsistent, which can cause serious problems, such as reducing the fairness of a game and leading to paradoxical situations, etc. In order to maintain consistency for distributed systems, many different approaches have been proposed, among which local lag and Dead-Reckoning (DR) are two representative approaches. Mauve et al [1] proposed local lag to maintain high consistency for replicated continuous applications. It synchronizes the physical clocks of all sites in a system. After an operation is issued at local site, it delays the execution of the operation for a short time. During this short time period the operation is transmitted to remote sites, and all sites try to execute the operation at the same physical time. In order to tackle the inconsistency caused by exceptional network transmission delay, a time warp based mechanism is proposed to repair the state. Local lag can achieve significantly high consistency, but it is based on operation transmission, which forwards every operation on a shared entity to remote sites. Since the operation transmission mechanism requires that all operations be transmitted in a reliable way, message filtering is difficult to deploy and the scalability of a system is limited. DR is based on state transmission mechanism.
In addition to the high fidelity model that maintains the accurate states of its own entities, each site also has a DR model that estimates the states of all entities (including its own entities). After each update of its own entities, a site compares the accurate state with the estimated one. If the difference exceeds a pre-defined threshold, a state update would be transmitted to all sites and all DR models would be corrected. Through state estimation, DR can not only maintain consistency but also decrease the number of transmitted state updates. Compared with aforementioned local lag, DR cannot maintain high consistency. Due to network transmission delay, when a remote site receives a state update of an entity the state of the entity might have changed at the site sending the state update. In order to make DR maintain high consistency, Aggarwal et al [8] proposed Globally Synchronized DR (GS-DR), which synchronizes the physical clocks of all sites in a system and adds time stamps to transmitted state updates. Detailed description of GS-DR can be found in Section 3. When a state update is available, GS-DR immediately updates the state of local site and then transmits the state update to remote sites, which causes the states of local site and remote sites to be inconsistent in the transmission procedure. Thus with the synchronization of physical clocks, GS-DR can eliminate after inconsistency, but it cannot tackle before inconsistency [8]. In this paper, we propose a new method named Globally Synchronized DR with Local Lag (GS-DR-LL), which combines local lag and GS-DR. By delaying the update to local site, GS-DR-LL can achieve higher consistency than GS-DR. The rest of this paper is organized as follows: Section 2 gives the definition of consistency and corresponding metrics; the cause of the inconsistency of DR is analyzed in Section 3; Section 4 describes how GS-DR-LL works; performance evaluation is presented in Section 5; Section 6 concludes the paper. 2. CONSISTENCY DEFINITIONS AND METRICS The consistency of replicated applications has already been well defined in discrete domain [9, 10, 11, 12], but little related work has been done in continuous domain. Mauve et al [1] have given a definition of consistency for replicated applications in continuous domain, but the definition is based on operation transmission and it is difficult for the definition to describe state transmission based methods (e.g. DR). Here, we present an alternative definition of consistency in continuous domain, which suits state transmission based methods well. Given two distinct sites i and j, which have replicated a shared entity e, at a given time t, the states of e at sites i and j are Si(t) and Sj(t). DEFINITION 1: the states of e at sites i and j are consistent at time t, iff: De(i, j, t) = |Si(t) - Sj(t)| = 0 (1) DEFINITION 2: the states of e at sites i and j are consistent between time t1 and t2 (t1 < t2), iff: De(i, j, t1, t2) = \int_{t1}^{t2} |Si(t) - Sj(t)| dt = 0 (2) In this paper, formulas (1) and (2) are used to determine whether the states of shared entities are consistent between local and remote sites. Due to network transmission delay, it is difficult to maintain the states of shared entities absolutely consistent. Corresponding metrics are needed to measure the consistency of shared entities between local and remote sites. De(i, j, t) can be used as a metric to measure the degree of consistency at a certain time point.
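As a purely illustrative aid, the sketch below evaluates these two metrics numerically for state trajectories sampled at a fixed interval. The trajectory lists and the sampling step are assumed inputs, and the interval metric is a rectangle-rule approximation of the integral in formula (2).

# Numerical illustration of Definitions 1 and 2 (assumed inputs). `samples_i`
# and `samples_j` hold the state of a shared entity e (here a scalar, e.g. its
# x position) sampled at the same instants, dt apart, at sites i and j.
def point_inconsistency(s_i_t, s_j_t):
    # Definition 1: De(i, j, t) = |Si(t) - Sj(t)|
    return abs(s_i_t - s_j_t)

def interval_inconsistency(samples_i, samples_j, dt):
    # Definition 2, approximated: sum of |Si(t) - Sj(t)| * dt over the samples
    return sum(abs(a - b) for a, b in zip(samples_i, samples_j)) * dt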
If De(i, j, t1) > De(i, j, t2), it can be stated that between sites i and j, the consistency of the states of entity e at time point t1 is lower than that at time point t2. If De(i, j, t) > De(l, k, t), it can be stated that, at time point t, the consistency of the states of entity e between sites i and j is lower than that between sites l and k. Similarly, De(i, j, t1, t2) can be used as a metric to measure the degree of consistency in a certain time period. If De(i, j, t1, t2) > De(i, j, t3, t4) and |t1 - t2| = |t3 - t4|, it can be stated that between sites i and j, the consistency of the states of entity e between time points t1 and t2 is lower than that between time points t3 and t4. If De(i, j, t1, t2) > De(l, k, t1, t2), it can be stated that between time points t1 and t2, the consistency of the states of entity e between sites i and j is lower than that between sites l and k. In DR, the states of entities are composed of the positions and orientations of entities and some prediction related parameters (e.g. the velocities of entities). Given two distinct sites i and j, which have replicated a shared entity e, at a given time point t, the positions of e at sites i and j are (xit, yit, zit) and (xjt, yjt, zjt), and De(i, j, t) and De(i, j, t1, t2) can be calculated as: De(i, j, t) = √((xit - xjt)² + (yit - yjt)² + (zit - zjt)²) (3) De(i, j, t1, t2) = ∫_{t1}^{t2} √((xit - xjt)² + (yit - yjt)² + (zit - zjt)²) dt (4) In this paper, formulas (3) and (4) are used as metrics to measure the consistency of shared entities between local and remote sites. 3. INCONSISTENCY IN DR The inconsistency in DR can be divided into two sections by the time point when a remote site receives a state update. The inconsistency before a remote site receives a state update is referred to as before inconsistency, and the inconsistency after a remote site receives a state update is referred to as after inconsistency. Before inconsistency and after inconsistency are similar to the terms before export error and after export error [8]. After inconsistency is caused by the lack of synchronization between the physical clocks of all sites in a system. By employing physical clock synchronization, GS-DR can accurately calculate the states of shared entities after receiving state updates, and it can eliminate after inconsistency. Before inconsistency is caused by two reasons. The first reason is the delay of sending state updates, as the local site does not send a state update unless the difference between the accurate state and the estimated one is larger than a predefined threshold. The second reason is network transmission delay, as a shared entity can be synchronized only after remote sites receive the corresponding state update. Figure 1. The paths of a shared entity by using GS-DR. For example, it is assumed that the velocity of a shared entity is the only parameter used to predict the entity's position, and the current position of the entity can be calculated from its last position and current velocity. To simplify the description, it is also assumed that there are only two sites i and j in a game session, site i acts as the local site and site j acts as the remote site, and t1 is the time point at which the local site updates the state of the shared entity. Figure 1 illustrates the paths of the shared entity at the local site and the remote site in the x axis by using GS-DR. At the beginning, the positions of the shared entity are the same at sites i and j and the velocity of the shared entity is 0.
Before time point t0, the paths of the shared entity at sites i and j in the x coordinate are exactly the same. At time point t0, the player at site i issues an operation, which changes the velocity in the x axis to v0. Site i periodically checks whether the difference between the accurate position of the shared entity and the estimated one, 0 in this case, is larger than a predefined threshold. At time point t1, site i finds that the difference is larger than the threshold and it sends a state update to site j. The state update contains the position and velocity of the shared entity at time point t1, and time point t1 is also attached as a timestamp. At time point t2, the state update reaches site j, and the received state and the time deviation between time points t1 and t2 are used to calculate the current position of the shared entity. Then site j updates its replicated entity's position and velocity, and the paths of the shared entity at sites i and j overlap again. From Figure 1, it can be seen that the after inconsistency is 0, and the before inconsistency is composed of two parts, D1 and D2. D1 is De(i, j, t0, t1) and it is caused by the state filtering mechanism of DR. D2 is De(i, j, t1, t2) and it is caused by network transmission delay. 4. GLOBALLY SYNCHRONIZED DR WITH LOCAL LAG From the analysis in Section 3, it can be seen that GS-DR can eliminate after inconsistency, but it cannot effectively tackle before inconsistency. In order to decrease before inconsistency, we propose GS-DR-LL, which combines GS-DR with local lag and can effectively decrease before inconsistency. In GS-DR-LL, the state of a shared entity at a certain time point t is notated as S = (t, pos, par 1, par 2, ..., par n), in which pos means the position of the entity and par 1 to par n are the parameters used to calculate the position of the entity. In order to simplify the description of GS-DR-LL, it is assumed that there are only one shared entity and one remote site. At the beginning of a game session, the states of the shared entity are the same at the local and remote sites, with the same position p0 and parameters pars0 (pars represents all the parameters). The local site keeps three states: the real state of the entity Sreal, the predicted state at the remote site Sp-remote, and the latest state updated to the remote site Slate. The remote site keeps only one state, Sremote, which is the real state of the entity at the remote site. Therefore, at the beginning of a game session Sreal = Sp-remote = Slate = Sremote = (t0, p0, pars0). In GS-DR-LL, it is assumed that the physical clocks of all sites are synchronized with a deviation of less than 50 ms (using NTP or GPS clocks). Furthermore, it is necessary to make corrections to a physical clock in a way that does not result in decreasing the value of the clock, for example by slowing down or halting the clock for a period of time. Additionally, it is assumed that the game scene is updated at a fixed frequency and T stands for the time interval between two consecutive updates; for example, if the scene update frequency is 50 Hz, T would be 20 ms. n stands for the lag value used by local lag, and t stands for the current physical time. After updating the scene, the local site waits for a constant amount of time T. During this time period, the local site receives the operations of the player and stores them in a list L. All operations in L are sorted by their issue time.
At the end of time period T, the local site executes all stored operations whose issue time is between t - T and t on Slate to get the new Slate, and it also executes all stored operations whose issue time is between t - (n + T) and t - n on Sreal to get the new Sreal. Additionally, the local site uses Sp-remote and the corresponding prediction methods to estimate the new Sp-remote. After the new Slate, Sreal, and Sp-remote are calculated, the local site checks whether the difference between the new Slate and Sp-remote exceeds the predefined threshold. If YES, the local site sends the new Slate to the remote site and Sp-remote is updated with the new Slate. Note that the timestamp of the sent state update is t. After that, the local site uses Sreal to update the local scene and deletes the operations whose issue time is less than t - n from L. After updating the scene, the remote site waits for a constant amount of time T. During this time period, the remote site stores received state update(s) in a list R. All state updates in R are sorted by their timestamps. At the end of time period T, the remote site checks whether R contains state updates whose timestamps are less than t - n. Note that t is the current physical time and it increases during the transmission of state updates. If YES, the remote site uses these state updates and the corresponding prediction methods to calculate the new Sremote; otherwise it uses Sremote and the corresponding prediction methods to estimate the new Sremote. After that, the remote site uses Sremote to update its local scene and deletes the state updates whose timestamps are less than t - n from R. From the above description, it can be seen that the main difference between GS-DR and GS-DR-LL is that GS-DR-LL uses the operations whose issue time is less than t - n to calculate Sreal. That means that the scene seen by the local player is the result of the operations issued a period of time (i.e. n) earlier. Meanwhile, if the results of issued operations make the difference between Slate and Sp-remote exceed the predefined threshold, corresponding state updates are sent to remote sites immediately. The aforementioned is the basic mechanism of GS-DR-LL; a code sketch of this per-tick procedure is given below, after the example of Figure 2. In the case with multiple shared entities and remote sites, the local site calculates Slate, Sreal, and Sp-remote for each shared entity respectively; if multiple Slate need to be transmitted, the local site packs them into one state update and then sends it to all remote sites. Figure 2 illustrates the paths of a shared entity at the local site and the remote site while using GS-DR and GS-DR-LL. All conditions are the same as the conditions used in the aforementioned example describing GS-DR. Compared with t1, t2, and n, T (i.e. the time interval between two consecutive updates) is quite small and it is ignored in the following description. At time point t0, the player at site i issues an operation, which changes the velocity of the shared entity from 0 to v0. By using GS-DR-LL, the results of the operation are applied to the local scene at time point t0 + n. However, the operation is immediately used to calculate Slate; thus with either GS-DR or GS-DR-LL, at time point t1 site i finds that the difference between the accurate position and the estimated one is larger than the threshold and it sends a state update to site j. At time point t2, the state update is received by remote site j. Assuming that the timestamp of the state update is less than t - n, site j uses it to update the local scene immediately.
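The per-tick procedure just described can be summarized in the following Python sketch. It is an interpretation of the description above rather than an actual implementation: the state is reduced to a 1D position plus velocity, an operation simply sets a new velocity, networking and rendering are omitted, and the choice to render both sites on a timeline delayed by n is an assumption.

```python
# A self-contained sketch of one GS-DR-LL tick at the local and remote sites.
# It interprets the description above and is not the authors' code: the state
# is a 1D position plus velocity, an operation sets a new velocity, and all
# names, as well as the lagged rendering timeline, are assumptions.
from dataclasses import dataclass, replace

@dataclass
class State:
    t: float      # physical time the state refers to
    pos: float
    vel: float

@dataclass
class Op:
    issue_time: float
    new_vel: float

def predict(s: State, t: float) -> State:
    """Dead-reckoning extrapolation of a state to time t."""
    return State(t, s.pos + s.vel * (t - s.t), s.vel)

def apply_ops(s: State, ops, t: float) -> State:
    """Advance s to time t, applying velocity-change operations in issue order."""
    for op in sorted(ops, key=lambda o: o.issue_time):
        s = replace(predict(s, op.issue_time), vel=op.new_vel)
    return predict(s, t)

def local_tick(t, T, n, L, S_late, S_real, S_p_remote, threshold, outbox):
    """One local-site update at physical time t (clocks globally synchronized)."""
    # Operations issued in (t - T, t] drive S_late, which feeds the threshold
    # test and the outgoing state updates ...
    S_late = apply_ops(S_late, [o for o in L if t - T < o.issue_time <= t], t)
    # ... while the locally rendered S_real only reflects operations issued up
    # to t - n, i.e. the local scene lags the player's input by n.
    S_real = apply_ops(S_real, [o for o in L if t - (n + T) < o.issue_time <= t - n], t - n)
    S_p_remote = predict(S_p_remote, t)
    if abs(S_late.pos - S_p_remote.pos) > threshold:
        outbox.append(S_late)            # timestamped by S_late.t == t
        S_p_remote = S_late
    L[:] = [o for o in L if o.issue_time >= t - n]   # discard consumed operations
    return S_late, S_real, S_p_remote    # S_real is what gets rendered locally

def remote_tick(t, n, R, S_remote):
    """One remote-site update: only updates with timestamps older than t - n take effect."""
    due = sorted((u for u in R if u.t < t - n), key=lambda u: u.t)
    # Use the newest due update (a simplification) or keep extrapolating.
    S_remote = predict(due[-1] if due else S_remote, t - n)
    R[:] = [u for u in R if u.t >= t - n]
    return S_remote                      # S_remote is what gets rendered remotely
```

The essential point the sketch tries to capture is that Slate incorporates operations immediately and drives the threshold test, while the rendered states Sreal and Sremote lag the operations by n, which is what allows GS-DR-LL to absorb part or all of the network transmission delay.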
With GS-DR, the time period of before inconsistency is (t2 - t1) + (t1 - t0), whereas it decreases to (t2 - t1 - n) + (t1 - t0) with the help of GS-DR-LL. Note that t2 - t1 is caused by network transmission delay and t1 - t0 is caused by the state filtering mechanism of DR. If n is larger than t2 - t1, GS-DR-LL can eliminate the before inconsistency caused by network transmission delay, but it cannot eliminate the before inconsistency caused by the state filtering mechanism of DR (unless the threshold is set to 0). In highly interactive games, which require high consistency and in which GS-DR-LL might be employed, the results of operations are quite difficult to estimate and a small threshold must be used. Thus, in practice, most before inconsistency is caused by network transmission delay and GS-DR-LL has the capability to eliminate such before inconsistency. Figure 2. The paths of a shared entity by using GS-DR and GS-DR-LL. For GS-DR-LL, the selection of the lag value n is very important, and both network transmission delay and the effects of local lag on interaction should be considered. According to the results of HCI related research, humans cannot perceive the delay imposed on a system when it is smaller than a specific value, and the specific value depends on both the system and the task. For example, in a graphical user interface a delay of approximately 150 ms cannot be noticed for keyboard interaction and the threshold increases to 195 ms for mouse interaction [13], and a delay of up to 50 ms is uncritical for a car-racing game [5]. Thus, if the network transmission delay is less than the specific value of a game system, n can be set to that specific value. Otherwise, n can be set in terms of the effects of local lag on the interaction of the system [14]. In the case that a large n must be used, some HCI methods (e.g. echo [15]) can be used to relieve the negative effects of the large lag. In the case that n is larger than the network transmission delay, GS-DR-LL can eliminate most before inconsistency. Traditional local lag requires that the lag value be larger than the typical network transmission delay, otherwise state repairs would flood the system. However, GS-DR-LL allows n to be smaller than the typical network transmission delay. In this case, the before inconsistency caused by network transmission delay still exists, but it can be decreased. 5. PERFORMANCE EVALUATION In order to evaluate GS-DR-LL and compare it with GS-DR in a real application, we implemented both methods in a networked game named spaceship [1]. Spaceship is a very simple networked computer game, in which players can control their spaceships to accelerate, decelerate, turn, and shoot spaceships controlled by remote players with laser beams. If a spaceship is hit by a laser beam, its life points decrease by one. If the life points of a spaceship decrease to 0, the spaceship is removed from the game and the player controlling the spaceship loses the game. In our practical implementation, GS-DR-LL and GS-DR coexisted in the game system, and the test bed was composed of two computers connected by 100 Mbps switched Ethernet, with one computer acting as the local site and the other acting as the remote site. In order to simulate network transmission delay, a specific module was developed to delay all packets transmitted between the two computers according to a predefined delay value.
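The delay module itself is not described in detail; the sketch below shows one possible structure for such a fixed-delay packet queue (the class name, the polling-based design, and the use of a monotonic clock are assumptions, not the module actually used).

```python
# Sketch of a fixed-delay packet queue, one possible way to emulate network
# transmission delay between two test machines. Not the authors' module;
# names and the polling-based design are assumptions.
import heapq
import time

class DelayPipe:
    def __init__(self, delay_seconds: float):
        self.delay = delay_seconds
        self._heap = []           # (release_time, sequence, packet)
        self._seq = 0             # tie-breaker keeps FIFO order for equal times

    def send(self, packet) -> None:
        """Accept a packet and schedule its delivery after the fixed delay."""
        heapq.heappush(self._heap, (time.monotonic() + self.delay, self._seq, packet))
        self._seq += 1

    def receive(self):
        """Return all packets whose delay has elapsed, in sending order."""
        now, out = time.monotonic(), []
        while self._heap and self._heap[0][0] <= now:
            out.append(heapq.heappop(self._heap)[2])
        return out

# Usage: pipe = DelayPipe(0.300); pipe.send(update); ...; for u in pipe.receive(): ...
```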
The main purpose of the performance evaluation is to study the effects of GS-DR-LL on decreasing before inconsistency in a particular game system under different thresholds, lags, and network transmission delays. Two different thresholds were used in the evaluation: one is a 10 pixel deviation in position or a 15 degree deviation in orientation, and the other is 4 pixels or 5 degrees. Six different combinations of lag and network transmission delay were used in the evaluation, and they can be divided into two categories. In one category, the lag was fixed at 300 ms and three different network transmission delays (100 ms, 300 ms, and 500 ms) were used. In the other category, the network transmission delay was fixed at 800 ms and three different lags (100 ms, 300 ms, and 500 ms) were used. Therefore the total number of settings used in the evaluation was 12 (2 × 6). The procedure of the performance evaluation was composed of three steps. In the first step, two participants were employed to play the game, and the operation sequences were recorded. Based on the records, a sub operation sequence, which lasted about one minute and included different operations (e.g. accelerate, decelerate, and turn), was selected. In the second step, the physical clocks of the two computers were synchronized first. Under different settings and consistency maintenance approaches, the selected sub operation sequence was played back on one computer, and it drove the two spaceships, one local and the other remote, to move. Meanwhile, the tracks of the spaceships on the two computers were recorded separately, and together they are called a track couple. Since there are 12 settings and 2 consistency maintenance approaches, the total number of recorded track couples was 24. In the last step, for each track couple the inconsistency between the two tracks was calculated, and the unit of inconsistency was the pixel. Since the physical clocks of the two computers were synchronized, the calculation of inconsistency was quite simple. The inconsistency at a particular time point was the distance between the positions of the two spaceships at that time point (i.e. formula (3)). In order to show the results of inconsistency in a clear way, only parts of the results, which last about 7 seconds, are used in the following figures, and the figures show almost the same parts of the results. Figures 3, 4, and 5 show the results of inconsistency when the lag is fixed at 300 ms and the network transmission delays are 100, 300, and 500 ms. It can be seen that inconsistency does exist, but most of the time it is 0. Additionally, inconsistency increases with the network transmission delay, and decreases as the threshold is reduced. Compared with GS-DR, GS-DR-LL can remove more inconsistency, and it eliminates most inconsistency when the network transmission delay is 100 ms and the threshold is 4 pixels or 5 degrees. According to the prediction and state filtering mechanisms of DR, inconsistency cannot be completely eliminated if the threshold is not 0. With the definitions of before inconsistency and after inconsistency, it can be concluded that GS-DR and GS-DR-LL both eliminate after inconsistency, and that GS-DR-LL can effectively decrease before inconsistency. It can be foreseen that with a proper lag and threshold (e.g. the lag is larger than the network transmission delay and the threshold is 0), GS-DR-LL can even eliminate before inconsistency.
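The per-time-point values plotted in Figures 3 to 8 come from this distance calculation; a minimal sketch of it is given below, under the assumption that each recorded track is a list of (timestamp, x, y) samples taken at the same synchronized instants (the names and data layout are illustrative, not the actual recording format).

```python
# Minimal sketch of the inconsistency calculation for one track couple,
# assuming both tracks were sampled at the same synchronized timestamps
# and stored as (t, x, y) tuples. Names and data layout are illustrative.
from math import hypot

def track_inconsistency(local_track, remote_track):
    """Per-timestamp inconsistency in pixels (formula (3) restricted to 2D)."""
    series = []
    for (t1, x1, y1), (t2, x2, y2) in zip(local_track, remote_track):
        assert t1 == t2, "tracks must share the same synchronized timestamps"
        series.append((t1, hypot(x1 - x2, y1 - y2)))
    return series

# Example: identical tracks give zero inconsistency at every time point.
demo = [(0.0, 10.0, 10.0), (0.02, 12.0, 10.0)]
print(track_inconsistency(demo, demo))   # [(0.0, 0.0), (0.02, 0.0)]
```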
Figure 3. Inconsistency when the network transmission delay is 100 ms and the lag is 300 ms. Figure 4. Inconsistency when the network transmission delay is 300 ms and the lag is 300 ms. Figure 5. Inconsistency when the network transmission delay is 500 ms and the lag is 300 ms. (Each of Figures 3 to 8 plots inconsistency in pixels against time in seconds for GS-DR-LL and GS-DR, with one panel for the threshold of 10 pixels or 15 degrees and one for the threshold of 4 pixels or 5 degrees.) Figures 6, 7, and 8 show the results of inconsistency when the network transmission delay is fixed at 800 ms and the lags are 100, 300, and 500 ms. It can be seen that with GS-DR-LL before inconsistency decreases with the lag. In traditional local lag, the lag must be set to a value larger than the typical network transmission delay, otherwise the state repairs would flood the system. From the above results it can be seen that no such constraint exists on the selection of the lag; with GS-DR-LL a system works fine even if the lag is much smaller than the network transmission delay. From all the above results, it can be concluded that GS-DR and GS-DR-LL both eliminate after inconsistency, and that GS-DR-LL can effectively decrease before inconsistency, with the effects increasing with the lag. Figure 6. Inconsistency when the network transmission delay is 800 ms and the lag is 100 ms. Figure 7. Inconsistency when the network transmission delay is 800 ms and the lag is 300 ms. Figure 8. Inconsistency when the network transmission delay is 800 ms and the lag is 500 ms. 6. CONCLUSIONS Compared with traditional DR, GS-DR can eliminate after inconsistency through the synchronization of physical clocks, but it cannot tackle before inconsistency, which would significantly influence the usability and fairness of a game. In this paper, we proposed a method named GS-DR-LL, which combines local lag and GS-DR, to decrease before inconsistency by delaying the application of the execution results of local operations to the local scene. Performance evaluation indicates that GS-DR-LL can effectively decrease before inconsistency, and the effects increase with the lag.
GS-DR-LL has significant implications for consistency maintenance approaches. First, GS-DR-LL shows that improved DR can not only eliminate after inconsistency but also decrease before inconsistency; with a proper lag and threshold, it can even eliminate before inconsistency. As a result, the application of DR can be greatly broadened and it can be used in systems which require high consistency (e.g. highly interactive games). Second, GS-DR-LL shows that by combining local lag and GS-DR, the constraint on selecting the lag value is removed and a lag which is smaller than the typical network transmission delay can be used. As a result, the application of local lag can be greatly broadened and it can be used in systems which have a large typical network transmission delay (e.g. Internet based games). 7. REFERENCES [1] Mauve, M., Vogel, J., Hilt, V., and Effelsberg, W. Local-Lag and Timewarp: Providing Consistency for Replicated Continuous Applications. IEEE Transactions on Multimedia, Vol. 6, No. 1, 2004, 47-57. [2] Li, F.W., Li, L.W., and Lau, R.W. Supporting Continuous Consistency in Multiplayer Online Games. In Proc. of ACM Multimedia, 2004, 388-391. [3] Pantel, L. and Wolf, L. On the Suitability of Dead Reckoning Schemes for Games. In Proc. of NetGames, 2002, 79-84. [4] Alhalabi, M.O., Horiguchi, S., and Kunifuji, S. An Experimental Study on the Effects of Network Delay in Cooperative Shared Haptic Virtual Environment. Computers and Graphics, Vol. 27, No. 2, 2003, 205-213. [5] Pantel, L. and Wolf, L.C. On the Impact of Delay on Real-Time Multiplayer Games. In Proc. of NOSSDAV, 2002, 23-29. [6] Meehan, M., Razzaque, S., Whitton, M.C., and Brooks, F.P. Effect of Latency on Presence in Stressful Virtual Environments. In Proc. of IEEE VR, 2003, 141-148. [7] Bernier, Y.W. Latency Compensation Methods in Client/Server In-Game Protocol Design and Optimization. In Proc. of Game Developers Conference, 2001. [8] Aggarwal, S., Banavar, H., and Khandelwal, A. Accuracy in Dead-Reckoning based Distributed Multi-Player Games. In Proc. of NetGames, 2004, 161-165. [9] Raynal, M. and Schiper, A. From Causal Consistency to Sequential Consistency in Shared Memory Systems. In Proc. of Conference on Foundations of Software Technology and Theoretical Computer Science, 1995, 180-194. [10] Ahamad, M., Burns, J.E., Hutto, P.W., and Neiger, G. Causal Memory. In Proc. of International Workshop on Distributed Algorithms, 1991, 9-30. [11] Herlihy, M. and Wing, J. Linearizability: a Correctness Condition for Concurrent Objects. ACM Transactions on Programming Languages and Systems, Vol. 12, No. 3, 1990, 463-492. [12] Misra, J. Axioms for Memory Access in Asynchronous Hardware Systems. ACM Transactions on Programming Languages and Systems, Vol. 8, No. 1, 1986, 142-153. [13] Dabrowski, J.R. and Munson, E.V. Is 100 Milliseconds Too Fast? In Proc. of SIGCHI Conference on Human Factors in Computing Systems, 2001, 317-318. [14] Chen, H., Chen, L., and Chen, G.C. Effects of Local-Lag Mechanism on Cooperation Performance in a Desktop CVE System. Journal of Computer Science and Technology, Vol. 20, No. 3, 2005, 396-401. [15] Chen, L., Chen, H., and Chen, G.C. Echo: a Method to Improve the Interaction Quality of CVEs. In Proc. of IEEE VR, 2005, 269-270.
Globally Synchronized Dead-Reckoning with Local Lag for Continuous Distributed Multiplayer Games ABSTRACT Dead-Reckoning (DR) is an effective method to maintain consistency for Continuous Distributed Multiplayer Games (CDMG). Since DR can filter most unnecessary state updates and improve the scalability of a system, it is widely used in commercial CDMG. However, DR cannot maintain high consistency, and this constrains its application in highly interactive games. With the help of global synchronization, DR can achieve higher consistency, but it still cannot eliminate before inconsistency. In this paper, a method named Globally Synchronized DR with Local Lag (GS-DR-LL), which combines local lag and Globally Synchronized DR (GS-DR), is presented. Performance evaluation shows that GS-DR-LL can effectively decrease before inconsistency, and the effects increase with the lag. 1. INTRODUCTION Nowadays, many distributed multiplayer games adopt replicated architectures. In such games, the states of entities are changed not only by the operations of players, but also by the passing of time [1, 2]. These games are referred to as Continuous Distributed Multiplayer Games (CDMG). Like other distributed applications, CDMG also suffer from the consistency problem caused by network transmission delay. Although new network techniques (e.g. QoS) can reduce or at least bound the delay, they cannot Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. completely eliminate it, as there exists the physical speed limitation of light, for instance, 100 ms is needed for light to propagate from Europe to Australia [3]. There are many studies about the effects of network transmission delay in different applications [4, 5, 6, 7]. In replication based games, network transmission delay makes the states of local and remote sites to be inconsistent, which can cause serious problems, such as reducing the fairness of a game and leading to paradoxical situations etc. . In order to maintain consistency for distributed systems, many different approaches have been proposed, among which local lag and Dead-Reckoning (DR) are two representative approaches. Mauve et al [1] proposed local lag to maintain high consistency for replicated continuous applications. It synchronizes the physical clocks of all sites in a system. After an operation is issued at local site, it delays the execution of the operation for a short time. During this short time period the operation is transmitted to remote sites, and all sites try to execute the operation at a same physical time. In order to tackle the inconsistency caused by exceptional network transmission delay, a time warp based mechanism is proposed to repair the state. Local lag can achieve significant high consistency, but it is based on operation transmission, which forwards every operation on a shared entity to remote sites. Since operation transmission mechanism requests that all operations should be transmitted in a reliable way, message filtering is difficult to be deployed and the scalability of a system is limited. DR is based on state transmission mechanism. 
In addition to the high fidelity model that maintains the accurate states of its own entities, each site also has a DR model that estimates the states of all entities (including its own entities). After each update of its own entities, a site compares the accurate state with the estimated one. If the difference exceeds a pre-defined threshold, a state update would be transmitted to all sites and all DR models would be corrected. Through state estimation, DR cannot only maintain consistency but also decrease the number of transmitted state updates. Compared with aforementioned local lag, DR cannot maintain high consistency. Due to network transmission delay, when a remote site receives a state update of an entity the state of the entity might have changed at the site sending the state update. In order to make DR maintain high consistency, Aggarwal et al [8] proposed Globally Synchronized DR (GS-DR), which synchronizes the physical clocks of all sites in a system and adds time stamps to transmitted state updates. Detailed description of GS-DR can be found in Section 3. When a state update is available, GS-DR immediately updates the state of local site and then transmits the state update to remote sites, which causes the states of local site and remote sites to be inconsistent in the transmission procedure. Thus with the synchronization of physical clocks, GS-DR can eliminate after inconsistency, but it cannot tackle before inconsistency [8]. In this paper, we propose a new method named globally synchronized DR with Local Lag (GS-DR-LL), which combines local lag and GS-DR. By delaying the update to local site, GS-DR-LL can achieve higher consistency than GS-DR. The rest of this paper is organized as follows: Section 2 gives the definition of consistency and corresponding metrics; the cause of the inconsistency of DR is analyzed in Section 3; Section 4 describes how GS-DR-LL works; performance evaluation is presented in Section 5; Section 6 concludes the paper. 2. CONSISTENCY DEFINITIONS AND METRICS The consistency of replicated applications has already been well defined in discrete domain [9, 10, 11, 12], but few related work has been done in continuous domain. Mauve et al [1] have given a definition of consistency for replicated applications in continuous domain, but the definition is based on operation transmission and it is difficult for the definition to describe state transmission based methods (e.g. DR). Here, we present an alternative definition of consistency in continuous domain, which suits state transmission based methods well. Given two distinct sites i and j, which have replicated a shared entity e, at a given time t, the states of e at sites i and j are Si (t) and Sj (t). In this paper, formulas (1) and (2) are used to determine whether the states of shared entities are consistent between local and remote sites. Due to network transmission delay, it is difficult to maintain the states of shared entities absolutely consistent. Corresponding metrics are needed to measure the consistency of shared entities between local and remote sites. De (i, j, t) can be used as a metric to measure the degree of consistency at a certain time point. If De (i, j, t1)> De (i, j, t2), it can be stated that between sites i and j, the consistency of the states of entity e at time point t1 is lower than that at time point t2. If De (i, j, t)> De (l, k, t), it can be stated that, at time point t, the consistency of the states of entity e between sites i and j is lower than that between sites l and k. 
Similarly, De (i, j, t1, t2) can been used as a metric to measure the degree of consistency in a certain time period. If De (i, j, t1, t2)> De (i, j, t3, t4) and | t1--t2 | = | t3--t4 |, it can be stated that between sites i and j, the consistency of the states of entity e between time points t1 and t2 is lower than that between time points t3 and t4. If De (i, j, t1, t2)> De (l, k, t1, t2), it can be stated that between time points t1 and t2, the consistency of the states of entity e between sites i and j is lower than that between sites l and k. In DR, the states of entities are composed of the positions and orientations of entities and some prediction related parameters (e.g. the velocities of entities). Given two distinct sites i and j, which have replicated a shared entity e, at a given time point t, the positions of e at sites i and j are (xit, yit, zit) and (xjt, yjt, zjt), De (i, j, t) and D (i, j, t1, t2) could be calculated as: In this paper, formulas (3) and (4) are used as metrics to measure the consistency of shared entities between local and remote sites. 3. INCONSISTENCY IN DR The inconsistency in DR can be divided into two sections by the time point when a remote site receives a state update. The inconsistency before a remote site receives a state update is referred to as before inconsistency, and the inconsistency after a remote site receives a state update is referred to as after inconsistency. Before inconsistency and after inconsistency are similar with the terms before export error and after export error [8]. After inconsistency is caused by the lack of synchronization between the physical clocks of all sites in a system. By employing physical clock synchronization, GS-DR can accurately calculate the states of shared entities after receiving state updates, and it can eliminate after inconsistency. Before inconsistency is caused by two reasons. The first reason is the delay of sending state updates, as local site does not send a state update unless the difference between accurate state and the estimated one is larger than a predefined threshold. The second reason is network transmission delay, as a shared entity can be synchronized only after remote sites receiving corresponding state update. Figure 1. The paths of a shared entity by using GS-DR. For example, it is assumed that the velocity of a shared entity is the only parameter to predict the entity's position, and current position of the entity can be calculated by its last position and current velocity. To simplify the description, it is also assumed that there are only two sites i and j in a game session, site i acts as 2 The 5th Workshop on Network & System Supportfor Games 2006--NETGAMES 2006 local site and site j acts as remote site, and t1 is the time point the local site updates the state of the shared entity. Figure 1 illustrates the paths of the shared entity at local site and remote site in x axis by using GS-DR. At the beginning, the positions of the shared entity are the same at sites i and j and the velocity of the shared entity is 0. Before time point t0, the paths of the shared entity at sites i and j in x coordinate are exactly the same. At time point t0, the player at site i issues an operation, which changes the velocity in x axis to v0. Site i first periodically checks whether the difference between the accurate position of the shared entity and the estimated one, 0 in this case, is larger than a predefined threshold. 
At time point t1, site i finds that the difference is larger than the threshold and it sends a state update to site j. The state update contains the position and velocity of the shared entity at time point t1 and time point t1 is also attached as a timestamp. At time point t2, the state update reaches site j, and the received state and the time deviation between time points t1 and t2 are used to calculate the current position of the shared entity. Then site j updates its replicated entity's position and velocity, and the paths of the shared entity at sites i and j overlap again. From Figure 1, it can be seen that the after inconsistency is 0, and the before consistency is composed of two parts, D1 and D2. D1 is De (i, j, t0, t1) and it is caused by the state filtering mechanism of DR. D2 is De (i, j, t1, t2) and it is caused by network transmission delay. 4. GLOBALLY SYNCHRONIZED DR WITH LOCAL LAG From the analysis in Section 3, It can be seen that GS-DR can eliminate after inconsistency, but it cannot effectively tackle before inconsistency. In order to decrease before inconsistency, we propose GS-DR-LL, which combines GS-DR with local lag and can effectively decrease before inconsistency. In GS-DR-LL, the state of a shared entity at a certain time point t is notated as S = (t, pos, par 1, par 2,, par n), in which pos means the position of the entity and par 1 to par n means the parameters to calculate the position of the entity. In order to simplify the description of GS-DR-LL, it is assumed that there are only one shared entity and one remote site. At the beginning of a game session, the states of the shared entity are the same at local and remote sites, with the same position p0 and parameters pars0 (pars represents all the parameters). Local site keeps three states: the real state of the entity Sreal, the predicted state at remote site Sp-remote, and the latest state updated to remote site Slate. Remote site keep only one state Sremote, which is the real state of the entity at remote site. Therefore, at the beginning of a game session Sreal = Sp-remote = Slate = Sremote = (t0, p0, pars0). In GS-DR-LL, it is assumed that the physical clocks of all sites are synchronized with a deviation of less than 50 ms (using NTP or GPS clock). Furthermore, it is necessary to make corrections to a physical clock in a way that does not result in decreasing the value of the clock, for example by slowing down or halting the clock for a period of time. Additionally it is assumed that the game scene is updated at a fixed frequency and T stands for the time interval between two consecutive updates, for example, if the scene update frequency is 50 Hz, T would be 20 ms. n stands for the lag value used by local lag, and t stands for current physical time. After updating the scene, local site waits for a constant amount of time T. During this time period, local site receives the operations of the player and stores them in a list L. All operations in L are sorted by their issue time. At the end of time period T, local site executes all stored operations, whose issue time is between t--T and t, on Slate to get the new Slate, and it also executes all stored operations, whose issue time is between t--(n + T) and t--n, on Sreal to get the new Sreal. Additionally, local site uses Sp-remote and corresponding prediction methods to estimate the new Sp-remote. 
After new Slate, Sreal, and Sp-remote are calculated, local site compares whether the difference between the new Slate and Spremote exceeds the predefined threshold. If YES, local site sends new Slate to remote site and Sp-remote is updated with new Slate. Note that the timestamp of the sent state update is t. After that, local site uses Sreal to update local scene and deletes the operations, whose issue time is less than t--n, from L. After updating the scene, remote site waits for a constant amount of time T. During this time period, remote site stores received state update (s) in a list R. All state updates in R are sorted by their timestamps. At the end of time period T, remote site checks whether R contains state updates whose timestamps are less than t--n. Note that t is current physical time and it increases during the transmission of state updates. If YES, it uses these state updates and corresponding prediction methods to calculate the new Sremote, else they use Sremote and corresponding prediction methods to estimate the new Sremote. After that, local site uses Sremote to update local scene and deletes the sate updates, whose timestamps are less than t--n, from R. From the above description, it can been see that the main difference between GS-DR and GS-DR-LL is that GS-DR-LL uses the operations, whose issue time is less than t--n, to calculate Sreal. That means that the scene seen by local player is the results of the operations issued a period of time (i.e. n) ago. Meanwhile, if the results of issued operations make the difference between Slate and Sp-remote exceed a predefined threshold, corresponding state updates are sent to remote sites immediately. The aforementioned is the basic mechanism of GS-DR-LL. In the case with multiple shared entities and remote sites, local site calculates Slate, Sreal, and Sp-remote for different shared entities respectively, if there are multiple Slate need to be transmitted, local site packets them in one state update and then send it to all remote sites. Figure 2 illustrates the paths of a shared entity at local site and remote site while using GS-DR and GS-DR-LL. All conditions are the same with the conditions used in the aforementioned example describing GS-DR. Compared with t1, t2, and n, T (i.e. the time interval between two consecutive updates) is quite small and it is ignored in the following description. At time point t0, the player at site i issues an operation, which changes the velocity of the shared entity form 0 to v0. By using GS-DR-LL, the results of the operation are updated to local scene at time point t0 + n. However the operation is immediately used to calculate Slate, thus in spite of GS-DR or GS-DR-LL, at time point t1 site i finds that the difference between accurate position and the estimated one is larger than the threshold and it sends a state update to site j. At time point t2, the state update is received by remote site j. Assuming that the timestamp of the state update is less than t--n, site j uses it to update local scene immediately. The 5th Workshop on Network & System Supportfor Games 2006--NETGAMES 2006 3 With GS-DR, the time period of before inconsistency is (t2--t1) + (t1--t0), whereas it decreases to (t2--t1--n) + (t1--t0) with the help of GS-DR-LL. Note that t2--t1 is caused by network transmission delay and t1--t0 is caused by the state filtering mechanism of DR. 
If n is larger than t2--t1, GS-DR-LL can eliminate the before inconsistency caused by network transmission delay, but it cannot eliminate the before inconsistency caused by the state filtering mechanism of DR (unless the threshold is set to 0). In highly interactive games, which request high consistency and GS-DR-LL might be employed, the results of operations are quite difficult to be estimated and a small threshold must be used. Thus, in practice, most before inconsistency is caused by network transmission delay and GS-DR-LL has the capability to eliminate such before inconsistency. Figure 2. The paths of a shared entity by using GS-DR and GS-DR-LL. To GS-DR-LL, the selection of lag value n is very important, and both network transmission delay and the effects of local lag on interaction should be considered. According to the results of HCI related researches, humans cannot perceive the delay imposed on a system when it is smaller than a specific value, and the specific value depends on both the system and the task. For example, in a graphical user interface a delay of approximately 150 ms cannot be noticed for keyboard interaction and the threshold increases to 195 ms for mouse interaction [13], and a delay of up to 50 ms is uncritical for a car-racing game [5]. Thus if network transmission delay is less than the specific value of a game system, n can be set to the specific value. Else n can be set in terms of the effects of local lag on the interaction of a system [14]. In the case that a large n must be used, some HCI methods (e.g. echo [15]) can be used to relieve the negative effects of the large lag. In the case that n is larger than the network transmission delay, GS-DR-LL can eliminate most before inconsistency. Traditional local lag requests that the lag value must be larger than typical network transmission delay, otherwise state repairs would flood the system. However GS-DR-LL allows n to be smaller than typical network transmission delay. In this case, the before inconsistency caused by network transmission delay still exists, but it can be decreased. 5. PERFORMANCE EVALUATION In order to evaluate GS-DR-LL and compare it with GS-DR in a real application, we had implemented both two methods in a networked game named spaceship [1]. Spaceship is a very simple networked computer game, in which players can control their spaceships to accelerate, decelerate, turn, and shoot spaceships controlled by remote players with laser beams. If a spaceship is hit by a laser beam, its life points decrease one. If the life points of a spaceship decrease to 0, the spaceship is removed from the game and the player controlling the spaceship loses the game. In our practical implementation, GS-DR-LL and GS-DR coexisted in the game system, and the test bed was composed of two computers connected by 100 M switched Ethernet, with one computer acted as local site and the other acted as remote site. In order to simulate network transmission delay, a specific module was developed to delay all packets transmitted between the two computers in terms of a predefined delay value. The main purpose of performance evaluation is to study the effects of GS-DR-LL on decreasing before inconsistency in a particular game system under different thresholds, lags, and network transmission delays. Two different thresholds were used in the evaluation, one is 10 pixels deviation in position or 15 degrees deviation in orientation, and the other is 4 pixels or 5 degrees. 
Six different combinations of lag and network transmission delay were used in the evaluation and they could be divided into two categories. In one category, the lag was fixed at 300 ms and three different network transmission delays (100 ms, 300 ms, and 500 ms) were used. In the other category, the network transmission delay was fixed at 800 ms and three different lags (100 ms, 300 ms, and 500 ms) were used. Therefore the total number of settings used in the evaluation was 12 (2 × 6). The procedure of performance evaluation was composed of three steps. In the first step, two participants were employed to play the game, and the operation sequences were recorded. Based on the records, a sub operation sequence, which lasted about one minute and included different operations (e.g. accelerate, decelerate, and turn), was selected. In the second step, the physical clocks of the two computers were synchronized first. Under different settings and consistency maintenance approaches, the selected sub operation sequence was played back on one computer, and it drove the two spaceships, one was local and the other was remote, to move. Meanwhile, the tracks of the spaceships on the two computers were recorded separately and they were called as a track couple. Since there are 12 settings and 2 consistency maintenance approaches, the total number of recorded track couples was 24. In the last step, to each track couple, the inconsistency between them was calculated, and the unit of inconsistency was pixel. Since the physical clocks of the two computers were synchronized, the calculation of inconsistency was quite simple. The inconsistency at a particular time point was the distance between the positions of the two spaceships at that time point (i.e. formula (3)). In order to show the results of inconsistency in a clear way, only parts of the results, which last about 7 seconds, are used in the following figures, and the figures show almost the same parts of the results. Figures 3, 4, and 5 show the results of inconsistency when the lag is fixed at 300 ms and the network transmission delays are 100, 300, and 500 ms. It can been seen that inconsistency does exist, but in most of the time it is 0. Additionally, inconsistency increases with the network transmission delay, but decreases with the threshold. Compared with GS-DR, GS-DR-LL can decrease more inconsistency, and it eliminates most inconsistency when the network transmission delay is 100 ms and the threshold is 4 pixels or 5 degrees. 4 The 5th Workshop on Network & System Supportfor Games 2006--NETGAMES 2006 According to the prediction and state filtering mechanisms of DR, inconsistency cannot be completely eliminated if the threshold is not 0. With the definitions of before inconsistency and after inconsistency, it can be indicated that GS-DR and GS-DR-LL both can eliminate after inconsistency, and GS-DR-LL can The threshold is 10 pixels or 15degrees effectively decrease before inconsistency. It can be foreseen that with proper lag and threshold (e.g. the lag is larger than the network transmission delay and the threshold is 0), GS-DR-LL even can eliminate before inconsistency. Figure 3. Inconsistency when the network transmission delay is 100 ms and the lag is 300 ms. Figure 4. Inconsistency when the network transmission delay is 300 ms and the lag is 300 ms. Figure 5. Inconsistency when the network transmission delay is 500 ms and the lag is 300 ms. 
Figures 6, 7, and 8 show the results of inconsistency when the network transmission delay is fixed at 800 ms and the lag are 100, 300, and 500 ms. It can be seen that with GS-DR-LL before inconsistency decreases with the lag. In traditional local lag, the lag must be set to a value larger than typical network transmission delay, otherwise the state repairs would flood the system. From the above results it can be seen that there does not exist any constraint on the selection of the lag, with GS-DR-LL a system would work fine even if the lag is much smaller than the network transmission delay. The 5th Workshop on Network & System Supportfor Games 2006--NETGAMES 2006 5 From all above results, it can be indicated that GS-DR and GSDR-LL both can eliminate after inconsistency, and GS-DR-LL can effectively decrease before inconsistency, and the effects increase with the lag. Figure 6. Inconsistency when the network transmission delay is 800 ms and the lag is 100 ms. Figure 7. Inconsistency when the network transmission delay is 800 ms and the lag is 300 ms. Figure 8. Inconsistency when the network transmission delay is 800 ms and the lag is 500 ms. 6. CONCLUSIONS Compared with traditional DR, GS-DR can eliminate after inconsistency through the synchronization of physical clocks, but it cannot tackle before inconsistency, which would significantly influence the usability and fairness of a game. In this paper, we proposed a method named GS-DR-LL, which combines local lag and GS-DR, to decrease before inconsistency through delaying updating the execution results of local operations to local scene. Performance evaluation indicates that GS-DR-LL can effectively decrease before inconsistency, and the effects increase with the lag. GS-DR-LL has significant implications to consistency maintenance approaches. First, GS-DR-LL shows that improved DR cannot only eliminate after inconsistency but also decrease before inconsistency, with proper lag and threshold, it would even eliminate before inconsistency. As a result, the application of DR can be greatly broadened and it could be used in the systems which request high consistency (e.g. highly interactive games). Second, GS-DR-LL shows that by combining local lag and GSDR, the constraint on selecting lag value is removed and a lag, which is smaller than typical network transmission delay, could be used. As a result, the application of local lag can be greatly broadened and it could be used in the systems which have large typical network transmission delay (e.g. Internet based games).
Globally Synchronized Dead-Reckoning with Local Lag for Continuous Distributed Multiplayer Games ABSTRACT Dead-Reckoning (DR) is an effective method to maintain consistency for Continuous Distributed Multiplayer Games (CDMG). Since DR can filter most unnecessary state updates and improve the scalability of a system, it is widely used in commercial CDMG. However, DR cannot maintain high consistency, and this constrains its application in highly interactive games. With the help of global synchronization, DR can achieve higher consistency, but it still cannot eliminate before inconsistency. In this paper, a method named Globally Synchronized DR with Local Lag (GS-DR-LL), which combines local lag and Globally Synchronized DR (GS-DR), is presented. Performance evaluation shows that GS-DR-LL can effectively decrease before inconsistency, and the effects increase with the lag. 1. INTRODUCTION Nowadays, many distributed multiplayer games adopt replicated architectures. In such games, the states of entities are changed not only by the operations of players, but also by the passing of time [1, 2]. These games are referred to as Continuous Distributed Multiplayer Games (CDMG). Like other distributed applications, CDMG also suffer from the consistency problem caused by network transmission delay. Although new network techniques (e.g. QoS) can reduce or at least bound the delay, they cannot Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. completely eliminate it, as there exists the physical speed limitation of light, for instance, 100 ms is needed for light to propagate from Europe to Australia [3]. There are many studies about the effects of network transmission delay in different applications [4, 5, 6, 7]. In replication based games, network transmission delay makes the states of local and remote sites to be inconsistent, which can cause serious problems, such as reducing the fairness of a game and leading to paradoxical situations etc. . In order to maintain consistency for distributed systems, many different approaches have been proposed, among which local lag and Dead-Reckoning (DR) are two representative approaches. Mauve et al [1] proposed local lag to maintain high consistency for replicated continuous applications. It synchronizes the physical clocks of all sites in a system. After an operation is issued at local site, it delays the execution of the operation for a short time. During this short time period the operation is transmitted to remote sites, and all sites try to execute the operation at a same physical time. In order to tackle the inconsistency caused by exceptional network transmission delay, a time warp based mechanism is proposed to repair the state. Local lag can achieve significant high consistency, but it is based on operation transmission, which forwards every operation on a shared entity to remote sites. Since operation transmission mechanism requests that all operations should be transmitted in a reliable way, message filtering is difficult to be deployed and the scalability of a system is limited. DR is based on state transmission mechanism. 
In addition to the high fidelity model that maintains the accurate states of its own entities, each site also has a DR model that estimates the states of all entities (including its own entities). After each update of its own entities, a site compares the accurate state with the estimated one. If the difference exceeds a pre-defined threshold, a state update would be transmitted to all sites and all DR models would be corrected. Through state estimation, DR cannot only maintain consistency but also decrease the number of transmitted state updates. Compared with aforementioned local lag, DR cannot maintain high consistency. Due to network transmission delay, when a remote site receives a state update of an entity the state of the entity might have changed at the site sending the state update. In order to make DR maintain high consistency, Aggarwal et al [8] proposed Globally Synchronized DR (GS-DR), which synchronizes the physical clocks of all sites in a system and adds time stamps to transmitted state updates. Detailed description of GS-DR can be found in Section 3. When a state update is available, GS-DR immediately updates the state of local site and then transmits the state update to remote sites, which causes the states of local site and remote sites to be inconsistent in the transmission procedure. Thus with the synchronization of physical clocks, GS-DR can eliminate after inconsistency, but it cannot tackle before inconsistency [8]. In this paper, we propose a new method named globally synchronized DR with Local Lag (GS-DR-LL), which combines local lag and GS-DR. By delaying the update to local site, GS-DR-LL can achieve higher consistency than GS-DR. The rest of this paper is organized as follows: Section 2 gives the definition of consistency and corresponding metrics; the cause of the inconsistency of DR is analyzed in Section 3; Section 4 describes how GS-DR-LL works; performance evaluation is presented in Section 5; Section 6 concludes the paper. 2. CONSISTENCY DEFINITIONS AND METRICS 3. INCONSISTENCY IN DR 2 The 5th Workshop on Network & System Supportfor Games 2006--NETGAMES 2006 4. GLOBALLY SYNCHRONIZED DR WITH LOCAL LAG 5. PERFORMANCE EVALUATION 4 The 5th Workshop on Network & System Supportfor Games 2006--NETGAMES 2006 6. CONCLUSIONS Compared with traditional DR, GS-DR can eliminate after inconsistency through the synchronization of physical clocks, but it cannot tackle before inconsistency, which would significantly influence the usability and fairness of a game. In this paper, we proposed a method named GS-DR-LL, which combines local lag and GS-DR, to decrease before inconsistency through delaying updating the execution results of local operations to local scene. Performance evaluation indicates that GS-DR-LL can effectively decrease before inconsistency, and the effects increase with the lag. GS-DR-LL has significant implications to consistency maintenance approaches. First, GS-DR-LL shows that improved DR cannot only eliminate after inconsistency but also decrease before inconsistency, with proper lag and threshold, it would even eliminate before inconsistency. As a result, the application of DR can be greatly broadened and it could be used in the systems which request high consistency (e.g. highly interactive games). Second, GS-DR-LL shows that by combining local lag and GSDR, the constraint on selecting lag value is removed and a lag, which is smaller than typical network transmission delay, could be used. 
As a result, the applicability of local lag is greatly broadened, and it can be used in systems whose typical network transmission delay is large (e.g., Internet-based games).
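To make the mechanism described above concrete, the following is a minimal Python sketch of the owner-site logic for a single one-dimensional entity. It is an editorial illustration under simplifying assumptions, not code from the paper: linear dead reckoning, a hypothetical Snapshot/GsDrLlSite structure, a send callback standing in for the network broadcast, and illustrative threshold and lag parameters.

import heapq
from dataclasses import dataclass

@dataclass
class Snapshot:
    t: float      # globally synchronized timestamp of this snapshot
    pos: float    # position at time t
    vel: float    # velocity at time t

    def extrapolate(self, now: float) -> float:
        # Linear dead reckoning: where a DR model places the entity at time `now`.
        return self.pos + self.vel * (now - self.t)

class GsDrLlSite:
    """Toy owner site illustrating GS-DR with local lag (GS-DR-LL) for one 1-D entity."""

    def __init__(self, threshold=1.0, lag=0.1, send=lambda snap: None):
        self.true_state = Snapshot(0.0, 0.0, 0.0)  # high-fidelity local state
        self.dr_state = Snapshot(0.0, 0.0, 0.0)    # what remote DR models currently follow
        self.threshold = threshold                 # max tolerated DR drift before a correction
        self.lag = lag                             # local lag applied to local execution
        self.send = send                           # stand-in for broadcasting a state update
        self.pending = []                          # min-heap of (apply_at, new_velocity)

    def issue_operation(self, now: float, new_vel: float):
        # Transmit immediately with a timestamp `lag` in the future, but delay applying
        # the result to the local scene by the same lag (the GS-DR-LL idea).
        apply_at = now + self.lag
        snap = Snapshot(apply_at, self.true_state.extrapolate(apply_at), new_vel)
        self.send(snap)                            # remote sites execute it at time apply_at
        heapq.heappush(self.pending, (apply_at, new_vel))
        self.dr_state = snap

    def tick(self, now: float):
        # 1. Local lag: apply operations whose delay has expired to the local scene.
        while self.pending and self.pending[0][0] <= now:
            apply_at, new_vel = heapq.heappop(self.pending)
            self.true_state = Snapshot(apply_at, self.true_state.extrapolate(apply_at), new_vel)
        # 2. Plain GS-DR: if the local state (driven by the real simulation in an actual
        #    game) drifts from the DR estimate by more than the threshold, broadcast a
        #    timestamped correction and reset the DR model.
        if not self.pending:
            drift = abs(self.true_state.extrapolate(now) - self.dr_state.extrapolate(now))
            if drift > self.threshold:
                self.dr_state = Snapshot(now, self.true_state.extrapolate(now), self.true_state.vel)
                self.send(self.dr_state)

With synchronized physical clocks, any remote site whose transmission delay is below lag receives the snapshot before apply_at and can execute it at the same physical time as the local site, which is how delaying the local update reduces before inconsistency.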
Globally Synchronized Dead-Reckoning with Local Lag for Continuous Distributed Multiplayer Games ABSTRACT Dead-Reckoning (DR) is an effective method to maintain consistency for Continuous Distributed Multiplayer Games (CDMG). Since DR can filter most unnecessary state updates and improve the scalability of a system, it is widely used in commercial CDMG. However, DR cannot maintain high consistency, and this constrains its application in highly interactive games. With the help of global synchronization, DR can achieve higher consistency, but it still cannot eliminate before inconsistency. In this paper, a method named Globally Synchronized DR with Local Lag (GS-DR-LL), which combines local lag and Globally Synchronized DR (GS-DR), is presented. Performance evaluation shows that GS-DR-LL can effectively decrease before inconsistency, and the effects increase with the lag. 1. INTRODUCTION Nowadays, many distributed multiplayer games adopt replicated architectures. In such games, the states of entities are changed not only by the operations of players, but also by the passing of time [1, 2]. These games are referred to as Continuous Distributed Multiplayer Games (CDMG). Like other distributed applications, CDMG also suffer from the consistency problem caused by network transmission delay. There are many studies about the effects of network transmission delay in different applications [4, 5, 6, 7]. In replication based games, network transmission delay makes the states of local and remote sites to be inconsistent, which can cause serious problems, such as reducing the fairness of a game and leading to paradoxical situations etc. . In order to maintain consistency for distributed systems, many different approaches have been proposed, among which local lag and Dead-Reckoning (DR) are two representative approaches. Mauve et al [1] proposed local lag to maintain high consistency for replicated continuous applications. It synchronizes the physical clocks of all sites in a system. After an operation is issued at local site, it delays the execution of the operation for a short time. During this short time period the operation is transmitted to remote sites, and all sites try to execute the operation at a same physical time. In order to tackle the inconsistency caused by exceptional network transmission delay, a time warp based mechanism is proposed to repair the state. Local lag can achieve significant high consistency, but it is based on operation transmission, which forwards every operation on a shared entity to remote sites. DR is based on state transmission mechanism. In addition to the high fidelity model that maintains the accurate states of its own entities, each site also has a DR model that estimates the states of all entities (including its own entities). After each update of its own entities, a site compares the accurate state with the estimated one. If the difference exceeds a pre-defined threshold, a state update would be transmitted to all sites and all DR models would be corrected. Through state estimation, DR cannot only maintain consistency but also decrease the number of transmitted state updates. Compared with aforementioned local lag, DR cannot maintain high consistency. Due to network transmission delay, when a remote site receives a state update of an entity the state of the entity might have changed at the site sending the state update. 
In order to make DR maintain high consistency, Aggarwal et al [8] proposed Globally Synchronized DR (GS-DR), which synchronizes the physical clocks of all sites in a system and adds time stamps to transmitted state updates. Detailed description of GS-DR can be found in Section 3. When a state update is available, GS-DR immediately updates the state of local site and then transmits the state update to remote sites, which causes the states of local site and remote sites to be inconsistent in the transmission procedure. Thus with the synchronization of physical clocks, GS-DR can eliminate after inconsistency, but it cannot tackle before inconsistency [8]. In this paper, we propose a new method named globally synchronized DR with Local Lag (GS-DR-LL), which combines local lag and GS-DR. By delaying the update to local site, GS-DR-LL can achieve higher consistency than GS-DR. 6. CONCLUSIONS In this paper, we proposed a method named GS-DR-LL, which combines local lag and GS-DR, to decrease before inconsistency through delaying updating the execution results of local operations to local scene. Performance evaluation indicates that GS-DR-LL can effectively decrease before inconsistency, and the effects increase with the lag. GS-DR-LL has significant implications to consistency maintenance approaches. First, GS-DR-LL shows that improved DR cannot only eliminate after inconsistency but also decrease before inconsistency, with proper lag and threshold, it would even eliminate before inconsistency. As a result, the application of DR can be greatly broadened and it could be used in the systems which request high consistency (e.g. highly interactive games). As a result, the application of local lag can be greatly broadened and it could be used in the systems which have large typical network transmission delay (e.g. Internet based games).
J-63
Negotiation-Range Mechanisms: Exploring the Limits of Truthful Efficient Markets
This paper introduces a new class of mechanisms based on negotiation between market participants. This model allows us to circumvent Myerson and Satterthwaite's impossibility result and present a bilateral market mechanism that is efficient, individually rational, incentive compatible and budget balanced in the single-unit heterogeneous setting. The underlying scheme makes this combination of desirable qualities possible by reporting a price range for each buyer-seller pair that defines a zone of possible agreements, while the final price is left open for negotiation.
[ "effici market", "imposs result", "individu ration", "incent compat", "effici truth market", "negoti base mechan", "buyer and seller", "good exchang", "negotiationrang market", "negoti-rang mechan", "real-world market environ", "util", "possibl agreement zone", "mechan design" ]
[ "P", "P", "P", "P", "R", "R", "M", "U", "M", "M", "M", "U", "R", "M" ]
Negotiation-Range Mechanisms: Exploring the Limits of Truthful Efficient Markets Yair Bartal ∗ School of Computer Science and Engineering The Hebrew University of Jerusalem, Israel yair@cs.huji.ac.il Rica Gonen School of Computer Science and Engineering The Hebrew University of Jerusalem, Israel rgonen@cs.huji.ac.il Pierfrancesco La Mura Leipzig Graduate School of Management Leipzig, Germany plamura@hhl.de ABSTRACT This paper introduces a new class of mechanisms based on negotiation between market participants. This model allows us to circumvent Myerson and Satterthwaite``s impossibility result and present a bilateral market mechanism that is efficient, individually rational, incentive compatible and budget balanced in the single-unit heterogeneous setting. The underlying scheme makes this combination of desirable qualities possible by reporting a price range for each buyer-seller pair that defines a zone of possible agreements, while the final price is left open for negotiation. Categories and Subject Descriptors J.4 [Social and Behavioral Sciences]: Economics; K.4.4 [Computers and Society]: Electronic Commerce-payment schemes General Terms Algorithms, Economics, Theory 1. INTRODUCTION In this paper we introduce the concept of negotiation based mechanisms in the context of the theory of efficient truthful markets. A market consists of multiple buyers and sellers who wish to exchange goods. The market``s main objective is to produce an allocation of sellers'' goods to buyers as to maximize the total gain from trade. A commonly studied model of participant behavior is taken from the field of economic mechanism design [3, 4, 11]. In this model each player has a private valuation function that assigns real values to each possible allocation. The algorithm motivates players to participate truthfully by handing payments to them. The mechanism in an exchange collects buyer bids and seller bids and clears the exchange by computing:(i) a set of trades, and (ii) the payments made and received by players. In designing a mechanism to compute trades and payments we must consider the bidding strategies of self-interested players, i.e. rational players that follow expected-utility maximizing strategies. We set allocative efficiency as our primary goal. That is the mechanism must compute a set of trades that maximizes gain from trade. In addition we require individual rationality (IR) so that all players have positive expected utility to participate, budget balance (BB) so that the exchange does not run at a loss, and incentive compatibility (IC) so that reporting the truth is a dominant strategy for each player. Unfortunately, Myerson and Satterthwaite``s (1983) well known result demonstrates that in bilateral trade it is impossible to simultaneously achieve perfect efficiency, BB, and IR using an IC mechanism [10]. A unique approach to overcome Myerson and Satterthwaite``s impossibility result was attempted by Parkes, Kalagnanam and Eso [12]. This result designs both a regular and a combinatorial bilateral trade mechanism (which imposes BB and IR) that approximates truth revelation and allocation efficiency. In this paper we circumvent Myerson and Satterthwaite``s impossibility result by introducing a new model of negotiationrange markets. A negotiation-range mechanism does not produce payment prices for the market participants. Rather, is assigns each buyer-seller pair a price range, called Zone Of Possible Agreements (ZOPA). 
The buyer is provided with the high end of the range and the seller with the low end of the range. This allows the trading parties to engage in negotiation over the final price with guarantee that the deal is beneficial for both of them. The negotiation process is not considered part of the mechanism but left up to the interested parties, or to some external mechanism to perform. In effect a negotiation-range mechanism operates as a mediator between the market participants, offering them the grounds to be able to finalize the terms of the trade by themselves. This concept is natural to many real-world market environments such as the real estate market. 1 We focus on the single-unit heterogeneous setting: n sellers offer one unique good each by placing sealed bids specifying their willingness to sell, and m buyers, interested in buying a single good each, placing sealed bids specifying their willingness to pay for each good they may be interested in. Our main result is a single-unit heterogeneous bilateral trade negotiation-range mechanism (ZOPAS) that is efficient, individually rational, incentive compatible and budget balanced. Our result does not contradict Myerson and Satterthwaite``s important theorem. Myerson-Satterthwaite``s proof relies on a theorem assuring that in two different efficient IC markets; if the sellers have the same upper bound utility then they will receive the same prices in each market and if the buyers have the same lower bound utility then they will receive the same prices in each market. Our single-unit heterogeneous mechanism bypasses Myerson and Satterthwaite``s theorem by producing a price range, defined by a seller``s floor and a buyer``s ceiling, for each pair of matched players. In our market mechanism the seller``s upper bound utility may be the same while the seller``s floor is different and the buyer``s lower bound utility may be the same while the buyer``s ceiling is different. Moreover, the final price is not fixed by the mechanism at all. Instead, it is determined by an independent negotiation between the buyer and seller. More specifically, in a negotiation-range mechanism, the range of prices each matched pair is given is resolved by a negotiation stage where a final price is determined. This negotiation stage is crucial for our mechanism to be IC. Intuitively, a negotiation range mechanism is incentive compatible if truth telling promises the best ZOPA from the point of view of the player in question. That is, he would tell the truth if this strategy maximizes the upper and lower bounds on his utility as expressed by the ZOPA boundaries. Yet, when carefully examined it turns out that it is impossible (by [10]) that this goal will always hold. This is simply because such a mechanism could be easily modified to determine final prices for the players (e.g. by taking the average of the range``s boundaries). Here, the negotiation stage come into play. We show that if the above utility maximizing condition does not hold then it is the case that the player cannot influence the negotiation bound that is assigned to his deal partner no matter what value he declares. This means that the only thing that he may achieve by reporting a false valuation is modifying his own negotiation bound, something that he could alternatively achieve by reporting his true valuation and incorporating the effect of the modified negotiation bound into his negotiation strategy. 
This eliminates the benefit of reporting false valuations and allows our mechanism to compute the optimal gain from trade according to the players'' true values. The problem of computing the optimal allocation which maximizes gain from trade can be conceptualized as the problem of finding the maximum weighted matching in a weighted bipartite graph connecting buyers and sellers, where each edge in the graph is assigned a weight equal to the difference between the respective buyer bid and seller bid. It is well known that this problem can be solved efficiently in polynomial time. VCG IC payment schemes [2, 7, 13] support efficient and IR bilateral trade but not simultaneously BB. Our particular approach adapts the VCG payment scheme to achieve budget balance. The philosophy of the VCG payment schemes in bilateral trade is that the buyer pays the seller``s opportunity cost of not selling the good to another buyer and not keeping the good to herself. The seller is paid in addition to the buyer``s price a compensation for the damage the mechanism did to the seller by not extracting the buyer``s full bid. Our philosophy is a bit different: The seller is paid at least her opportunity cost of not selling the good to another buyer and not keeping the good for herself. The buyer pays at most his alternate seller``s opportunity cost of not selling the good to another buyer and not keeping the alternate good for herself. The rest of this paper is organized as follows. In Section 2 we describe our model and definitions. In section 3 we present the single-unit heterogeneous negotiation-range mechanism and show that it is efficient, IR, IC and BB. Finally, we conclude with a discussion in Section 4. 2. NEGOTIATION MARKETS PRELIMINARIES Let Π denote the set of players, N the set of n selling players, and M the set of m buying players, where Π = N ∪ M. Let Ψ = {1, ..., t} denote the set of goods. Let Ti ∈ {−1, 0, 1}t denote an exchange vector for a trade, such that player i buys goods {A ∈ Ψ|Ti (A) = 1} and sells goods {A ∈ Ψ|Ti (A) = −1}. Let T = (T1, ..., T|Π|) denote the complete trade between all players. We view T as describing the allocation of goods by the mechanism to the buyers and sellers. In the single-unit heterogeneous setting every good belongs to specific seller, and every buyer is interested in buying one good. The buyer may bid for several or all goods. At the end of the auction every good is either assigned to one of the buyers who bid for it or kept unsold by the seller. It is convenient to assume the sets of buyers and sellers are disjoint (though it is not required), i.e. N ∩ M = ∅. Each seller i is associated with exactly one good Ai, for which she has true valuation ci which expresses the price at which it is beneficial for her to sell the good. If the seller reports a false valuation at an attempt to improve the auction results for her, this valuation is denoted ˆci. A buyer has a valuation vector describing his valuation for each of the goods according to their owner. Specifically, vj(k) denotes buyer j``s valuation for good Ak. Similarly, if he reports a false valuation it is denoted ˆvj(k). If buyer j is matched by the mechanism with seller i then Ti(Ai) = −1 and Tj(Ai) = 1. Notice, that in our setting for every k = i, Ti(Ak) = 0 and Tj(Ai) = 0 and also for every z = j, Tz(Ai) = 0. For a matched buyer j - seller i pair, the gain from trade on the deal is defined as vj(i) − ci. Given and allocation T, the gain from trade associated with T is V = j∈M,i∈N (vj(i) − ci) · Tj(Ai). 
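As a concrete illustration of the allocation step just described, the sketch below computes a gain-from-trade-maximizing buyer-seller matching from the reported bids. It is an editorial example rather than part of the original paper; the networkx library's max_weight_matching routine is used for the polynomial-time matching, and the bid-dictionary formats and the name optimal_allocation are assumptions made for the example.

import networkx as nx

def optimal_allocation(seller_cost, buyer_value):
    """Return the gain-from-trade-maximizing matching and its total gain.

    seller_cost: {seller_id: c_i}                 -- seller i offers one good A_i at valuation c_i
    buyer_value: {buyer_id: {seller_id: v_j(i)}}  -- buyer j's valuation for seller i's good
    """
    G = nx.Graph()
    for j, values in buyer_value.items():
        for i, v in values.items():
            gain = v - seller_cost[i]
            if gain > 0:                   # a pair with non-positive gain never raises the objective
                G.add_edge(("buyer", j), ("seller", i), weight=gain)
    matching = nx.max_weight_matching(G)   # maximum weighted matching, polynomial time
    pairs = {}
    for a, b in matching:                  # edge endpoints come back in arbitrary order
        j, i = (a[1], b[1]) if a[0] == "buyer" else (b[1], a[1])
        pairs[j] = i
    total_gain = sum(buyer_value[j][i] - seller_cost[i] for j, i in pairs.items())
    return pairs, total_gain

# Small example: two sellers, two buyers.
sellers = {"s1": 5.0, "s2": 8.0}
buyers = {"b1": {"s1": 9.0, "s2": 7.0}, "b2": {"s1": 6.0, "s2": 12.0}}
allocation, gain = optimal_allocation(sellers, buyers)   # {'b1': 's1', 'b2': 's2'}, gain 8.0

Only pairs with positive gain are added as edges, since matching a pair with non-positive gain can never increase the total gain from trade.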
Let T∗ denote the optimal allocation which maximizes the gain from trade, computed according to the players'' true valuations. Let V ∗ denote the optimal gain from trade associated with this allocation. When players'' report false valuations we use ˆT∗ and ˆV ∗ to denote the optimal allocation and gain from trade, respectively, when computed according to the reported valuations. 2 We are interested in the design of negotiation-range mechanisms. In contrast to a standard auction mechanism where the buyer and seller are provided with the prices they should pay, the goal of a negotiation-range mechanism is to provide the player``s with a range of prices within which they can negotiate the final terms of the deal by themselves. The mechanism would provide the buyer with the upper bound on the range and the seller with the lower bound on the range. This gives each of them a promise that it will be beneficial for them to close the deal, but does not provide information about the other player``s terms of negotiation. Definition 1. Negotiation Range: Zone Of Possible Agreements, ZOPA, between a matched buyer and seller. The ZOPA is a range, (L, H), 0 ≤ L ≤ H, where H is an upper bound (ceiling) price for the buyer and L is a lower bound (floor) price for the seller. Definition 2. Negotiation-Range Mechanism: A mechanism that computes a ZOPA, (L, H), for each matched buyer and seller in T∗ , and provides the buyer with the upper bound H and the seller with the lower bound L. The basic assumption is that participants in the auction are self-interested players. That is their main goal is to maximize their expected utility. The utility for a buyer who does not participate in the trade is 0. If he does win some good, his utility is the surplus between his valuation for that good and the price he pays. For a seller, if she keeps the good unsold, her utility is just her valuation of the good, and the surplus is 0. If she gets to sell it, her utility is the price she is paid for it, and the surplus is the difference between this price and her valuation. Since negotiation-range mechanisms assign bounds on the range of prices rather than the final price, it is useful to define the upper and lower bounds on the player``s utilities defined by the range``s limits. Definition 3. Consider a buyer j - seller i pair matched by a negotiation-range mechanism and let (L, H) be their associated negotiation range. • The buyer``s top utility is: vj(i) − L, and the buyer``s bottom utility is vj(i) − H. • The seller``s top utility is H, with surplus H − ci, and the seller``s bottom utility is L, with surplus L − ci. 3. THE SINGLE-UNIT HETEROGENEOUS MECHANISM (ZOPAS) 3.1 Description of the Mechanism ZOPAS is a negotiation-range mechanism, it finds the optimal allocation T∗ and uses it to define a ZOPA for each buyer-seller pair. The first stage in applying the mechanism is for the buyers and sellers to submit their sealed bids. The mechanism then allocates buyers to sellers by computing the allocation T ∗ , which results in the optimal gain from trade V ∗ , and defines a ZOPA for each buyer-seller pair. Finally, buyers and sellers use the ZOPA to negotiate a final price. Computing T∗ involves solving the maximum weighted bipartite matching problem for the complete bipartite graph Kn,m constructed by placing the buyers on one side of the Find the optimal allocation T ∗ Compute the maximum weighted bipartite matching for the bipartite graph of buyers and sellers, and edge weights equal to the gain from trade. 
Calculate Sellers'' Floors For every buyer j, allocated good Ai Find the optimal allocation (T−j)∗ Li = vj(i) + (V−j)∗ − V ∗ Calculate Buyers'' Ceilings For every buyer j, allocated good Ai Find the optimal allocation (T −i )∗ Find the optimal allocation (T −i −j )∗ Hj = vj(i) + (V −i −j )∗ − (V −i )∗ Negotiation Phase For every buyer j and every seller i of good Ai Report to seller i her floor Li and identify her matched buyer j Report to buyer j his ceiling Hj and identify his matched seller i i, j negotiate the good``s final price Figure 1: The ZOPAS mechanism graph, the seller on another and giving the edge between buyer j and seller i weight equal to vj(i) − ci. The maximum weighted matching problem in solvable in polynomial time (e.g., using the Hungarian Method). This results in a matching between buyers and sellers that maximizes gain from trade. The next step is to compute for each buyer-seller pair a seller``s floor, which provides the lower bound of the ZOPA for this pair, and assigns it to the seller. A seller``s floor is computed by calculating the difference between the total gain from trade when the buyer is excluded and the total gain from trade of the other participants when the buyer is included (the VCG principle). Let (T−j)∗ denote the gain from trade of the optimal allocation when buyer j``s bids are discarded. Denote by (V−j)∗ the total gain from trade in the allocation (T−j)∗ . Definition 4. Seller Floor: The lowest price the seller should expect to receive, communicated to the seller by the mechanism. The seller floor for player i who was matched with buyer j on good Ai, i.e., Tj(Ai) = 1, is computed as: Li = vj(i) + (V−j)∗ − V ∗ . The seller is instructed not to accept less than this price from her matched buyer. Next, the mechanism computes for each buyer-seller pair a buyer ceiling, which provides the upper bound of the ZOPA for this pair, and assigns it to the buyer. Each buyer``s ceiling is computed by removing the buyer``s matched seller and calculating the difference between the total gain from trade when the buyer is excluded and the total gain from trade of the other participants when the 3 buyer is included. Let (T−i )∗ denote the gain from trade of the optimal allocation when seller i is removed from the trade. Denote by (V −i )∗ the total gain from trade in the allocation (T−i )∗ . Let (T−i −j )∗ denote the gain from trade of the optimal allocation when seller i is removed from the trade and buyer j``s bids are discarded. Denote by (V −i −j )∗ the total gain from trade in the allocation (T −i −j )∗ . Definition 5. Buyer Ceiling: The highest price the seller should expect to pay, communicated to the buyer by the mechanism. The buyer ceiling for player j who was matched with seller i on good Ai, i.e., Tj(Ai) = 1, is computed as: Hj = vj(i) + (V −i −j )∗ − (V −i )∗ . The buyer is instructed not to pay more than this price to his matched seller. Once the negotiation range lower bound and upper bound are computed for every matched pair, the mechanism reports the lower bound price to the seller and the upper bound price to the buyer. At this point each buyer-seller pair negotiates on the final price and concludes the deal. A schematic description the ZOPAS mechanism is given in Figure 3.1. 3.2 Analysis of the Mechanism In this section we analyze the properties of the ZOPAS mechanism. Theorem 1. The ZOPAS market negotiation-range mechanism is an incentive-compatible bilateral trade mechanism that is efficient, individually rational and budget balanced. 
Clearly ZOPAS is an efficient polynomial time mechanism. Let us show it satisfies the rest of the properties in the theorem. Claim 1. ZOPAS is individually rational, i.e., the mechanism maintains nonnegative utility surplus for all participants. Proof. If a participant does not trade in the optimal allocation then his utility surplus is zero by definition. Consider a pair of buyer j and seller i which are matched in the optimal allocation T ∗ . Then the buyer``s utility is at least vj(i) − Hj. Recall that Hj = vj(i) + (V −i −j )∗ − (V −i )∗ , so that vj(i) − Hj = (V −i )∗ − (V −i −j )∗ . Since the optimal gain from trade which includes j is higher than that which does not, we have that the utility is nonnegative: vj(i) − Hj ≥ 0. Now, consider the seller i. Her utility surplus is at least Li − ci. Recall that Li = vj(i) + (V−j)∗ − V ∗ . If we removed from the optimal allocation T ∗ the contribution of the buyer j - seller i pair, we are left with an allocation which excludes j, and has value V ∗ − (vj(i) − ci). This implies that (V−j)∗ ≥ V ∗ − vj(i) + ci, which implies that Li − ci ≥ 0. The fact that ZOPAS is a budget-balanced mechanism follows from the following lemma which ensures the validity of the negotiation range, i.e., that every seller``s floor is below her matched buyer``s ceiling. This ensures that they can close the deal at a final price which lies in this range. Lemma 1. For every buyer j- seller i pair matched by the mechanism: Li ≤ Hj. Proof. Recall that Li = vj(i) + (V−j)∗ − V ∗ and Hj = vj(i)+(V −i −j )∗ −(V −i )∗ . To prove that Li ≤ Hj it is enough to show that (V −i )∗ + (V−j)∗ ≤ V ∗ + (V −i −j )∗ . (1) The proof of (1) is based on a method which we apply several times in our analysis. We start with the allocations (T−i )∗ and (T−j)∗ which together have value equal to (V −i )∗ + (V−j)∗ . We now use them to create a pair of new valid allocations, by using the same pairs that were matched in the original allocations. This means that the sum of values of the new allocations is the same as the original pair of allocations. We also require that one of the new allocations does not include buyer j or seller i. This means that the sum of values of these new allocations is at most V ∗ + (V −i −j )∗ , which proves (1). Let G be the bipartite graph where the nodes on one side of G represent the buyers and the nodes on the other side represent the sellers, and edge weights represent the gain from trade for the particular pair. The different allocations represent bipartite matchings in G. It will be convenient for the sake of our argument to think of the edges that belong to each of the matchings as being colored with a specific color representing this matching. Assign color 1 to the edges in the matching (T −i )∗ and assign color 2 to the edges in the matching (T−j)∗ . We claim that these edges can be recolored using colors 3 and 4 so that the new coloring represents allocations T (represented by color 3) and (T−i −j ) (represented by color 4). This implies the that inequality (1) holds. Figure 2 illustrates the graph G and the colorings of the different matchings. Define an alternating path P starting at j. Let S1 be the seller matched to j in (T −i )∗ (if none exists then P is empty). Let B1 be the buyer matched to S1 in (T−j)∗ , S2 be the seller matched to B1 in (T−i )∗ , B2 be the buyer matched to S2 in (T−j)∗ , and so on. This defines an alternating path P, starting at j, whose edges'' colors alternate between colors 1 and 2 (starting with 1). 
This path ends either in a seller who is not matched in (T−j)∗ or in a buyer who is not matched in (T−i )∗ . Since all sellers in this path are matched in (T−i )∗ , we have that seller i does not belong to P. This ensures that edges in P may be colored by alternating colors 3 and 4 (starting with 3). Since except for the first edge, all others do not involve i or j and thus may be colored 4 and be part of an allocation (T −i −j ) . We are left to recolor the edges that do not belong to P. Since none of these edges includes j we have that the edges that were colored 1, which are part of (T −i )∗ , may now be colored 4, and be included in the allocation (T −i −j ) . It is also clear that the edges that were colored 2, which are part of (T−j)∗ , may now be colored 3, and be included in the allocation T . This completes the proof of the lemma. 3.3 Incentive Compatibility The basic requirement in mechanism design is for an exchange mechanism to be incentive compatible. This means that its payment structure enforces that truth-telling is the players'' weakly dominant strategy, that is that the strategy by which the player is stating his true valuation results 4 ... jS1 S2 B1 S3 B2 B3S4 S5 B4 S6 B5 S8 B7 S7 B6 ... Figure 2: Alternating path argument for Lemma 1 (Validity of the Negotiation Range) and Claim 2 (part of Buyer``s IC proof) Colors Bidders 1 32 4 UnmatchedMatched Figure 3: Key to Figure 2 in bigger or equal utility as any other strategy. The utility surplus is defined as the absolute difference between the player``s bid and his price. Negotiation-range mechanisms assign bounds on the range of prices rather than the final price and therefore the player``s valuation only influences the minimum and maximum bounds on his utility. For a buyer the minimum (bottom) utility would be based on the top of the negotiation range (ceiling), and the maximum (top) utility would be based on the bottom of the negotiation range (floor). For a seller it``s the other way around. Therefore the basic natural requirement from negotiation-range mechanisms would be that stating the player``s true valuation results in both the higher bottom utility and higher top utility for the player, compared with other strategies. Unfortunately, this requirement is still too strong and it is impossible (by [10]) that this will always hold. Therefore we slightly relax it as follows: we require this holds when the false valuation based strategy changes the player``s allocation. When the allocation stays unchanged we require instead that the player would not be able to change his matched player``s bound (e.g. a buyer cannot change the seller``s floor). This means that the only thing he can influence is his own bound, something that he can alternatively achieve through means of negotiation. The following formally summarizes our incentive compatibility requirements from the negotiation-range mechanism. Buyer``s incentive compatibility: • Let j be a buyer matched with seller i by the mechanism according to valuation vj and the negotiationrange assigned is (Li, Hj). Assume that when the mechanism is applied according to valuation ˆvj, seller k = i is matched with j and the negotiation-range assigned is (ˆLk, ˆHj). Then vj(i) − Hj ≥ vj(k) − ˆHj. (2) vj(i) − Li ≥ vj(k) − ˆLk. (3) • Let j be a buyer not matched by the mechanism according to valuation vj. Assume that when the mechanism is applied according to valuation ˆvj, seller k = i is matched with j and the negotiation-range assigned is (ˆLk, ˆHj). 
Then vj(k) − ˆHj ≤ vj(k) − ˆLk ≤ 0. (4) • Let j be a buyer matched with seller i by the mechanism according to valuation vj and let the assigned bottom of the negotiation range (seller``s floor) be Li. Assume that when the mechanism is applied according to valuation ˆvj, the matching between i and j remains unchanged and let the assigned bottom of the negotiation range (seller``s floor) be ˆLi. Then, ˆLi = Li. (5) Notice that the first inequality of (4) always holds for a valid negotiation range mechanism (Lemma 1). Seller``s incentive compatibility: • Let i be a seller not matched by the mechanism according to valuation ci. Assume that when the mechanism 5 is applied according to valuation ˆci, buyer z = j is matched with i and the negotiation-range assigned is (ˆLi, ˆHz). Then ˆLi − ci ≤ ˆHz − ci ≤ 0. (6) • Let i be a buyer matched with buyer j by the mechanism according to valuation ci and let the assigned top of the negotiation range (buyer``s ceiling) be Hj. Assume that when the mechanism is applied according to valuation ˆci, the matching between i and j remains unchanged and let the assigned top of the negotiation range (buyer``s ceiling) be ˆHj. Then, ˆHj = Hj. (7) Notice that the first inequality of (6) always holds for a valid negotiation range mechanism (Lemma 1). Observe that in the case of sellers in our setting, the case expressed by requirement (6) is the only case in which the seller may change the allocation to her benefit. In particular, it is not possible for seller i who is matched in T ∗ to change her buyer by reporting a false valuation. This fact simply follows from the observation that reducing the seller``s valuation increases the gain from trade for the current allocation by at least as much than any other allocation, whereas increasing the seller``s valuation decreases the gain from trade for the current allocation by exactly the same amount as any other allocation in which it is matched. Therefore, the only case the optimal allocation may change is when in the new allocation i is not matched in which case her utility surplus is 0. Theorem 2. ZOPAS is an incentive compatible negotiationrange mechanism. Proof. We begin with the incentive compatibility for buyers. Consider a buyer j who is matched with seller i according to his true valuation v. Consider that j is reporting instead a false valuation ˆv which results in a different allocation in which j is matched with seller k = i. The following claim shows that a buyer j which changed his allocation due to a false declaration of his valuation cannot improve his top utility. Claim 2. Let j be a buyer matched to seller i in T ∗ , and let k = i be the seller matched to j in ˆT∗ . Then, vj(i) − Hj ≥ vj(k) − ˆHj. (8) Proof. Recall that Hj = vj(i) + (V −i −j )∗ − (V −i )∗ and ˆHj = ˆvj(k) + ( ˆV −k −j )∗ − ( ˆV −k )∗ . Therefore, vj(i) − Hj = (V −i )∗ − (V −i −j )∗ and vj(k) − ˆHj = vj(k) − ˆvj(k) + ( ˆV −k )∗ − ( ˆV −k −j )∗ . It follows that in order to prove (8) we need to show ( ˆV −k )∗ + (V −i −j )∗ ≤ (V −i )∗ + ( ˆV −k −j )∗ + ˆvj(k) − vj(k). (9) Consider first the case were j is matched to i in ( ˆT−k )∗ . If we remove this pair and instead match j with k we obtain a matching which excludes i, if the gain from trade on the new pair is taken according to the true valuation then we get ( ˆV −k )∗ − (ˆvj(i) − ci) + (vj(k) − ck) ≤ (V −i )∗ . 
Now, since the optimal allocation ˆT∗ matches j with k rather than with i we have that (V −i −j )∗ + (ˆvj(i) − ci) ≤ ˆV ∗ = ( ˆV −k −j )∗ + (ˆvj(k) − ck), where we have used that ( ˆV −i −j )∗ = (V −i −j )∗ since these allocations exclude j. Adding up these two inequalities implies (9) in this case. It is left to prove (9) when j is not matched to i in ( ˆT−k )∗ . In fact, in this case we prove the stronger inequality ( ˆV −k )∗ + (V −i −j )∗ ≤ (V −i )∗ + ( ˆV −k −j )∗ . (10) It is easy to see that (10) indeed implies (9) since it follows from the fact that k is assigned to j in ˆT∗ that ˆvj(k) ≥ vj(k). The proof of (10) works as follows. We start with the allocations ( ˆT−k )∗ and (T−i −j )∗ which together have value equal to ( ˆV −k )∗ + (V −i −j )∗ . We now use them to create a pair of new valid allocations, by using the same pairs that were matched in the original allocations. This means that the sum of values of the new allocations is the same as the original pair of allocations. We also require that one of the new allocations does not include seller i and is based on the true valuation v, while the other allocation does not include buyer j or seller k and is based on the false valuation ˆv. This means that the sum of values of these new allocations is at most (V −i )∗ + ( ˆV −k −j )∗ , which proves (10). Let G be the bipartite graph where the nodes on one side of G represent the buyers and the nodes on the other side represent the sellers, and edge weights represent the gain from trade for the particular pair. The different allocations represent bipartite matchings in G. It will be convenient for the sake of our argument to think of the edges that belong to each of the matchings as being colored with a specific color representing this matching. Assign color 1 to the edges in the matching ( ˆT−k )∗ and assign color 2 to the edges in the matching (T −i −j )∗ . We claim that these edges can be recolored using colors 3 and 4 so that the new coloring represents allocations (T −i ) (represented by color 3) and ( ˆT−k −j ) (represented by color 4). This implies the that inequality (10) holds. Figure 2 illustrates the graph G and the colorings of the different matchings. Define an alternating path P starting at j. Let S1 = i be the seller matched to j in ( ˆT−k )∗ (if none exists then P is empty). Let B1 be the buyer matched to S1 in (T−i −j )∗ , S2 be the seller matched to B1 in ( ˆT−k )∗ , B2 be the buyer matched to S2 in (T−i −j )∗ , and so on. This defines an alternating path P, starting at j, whose edges'' colors alternate between colors 1 and 2 (starting with 1). This path ends either in a seller who is not matched in (T −i −j )∗ or in a buyer who is not matched in ( ˆT−k )∗ . Since all sellers in this path are matched in ( ˆT−k )∗ , we have that seller k does not belong to P. Since in this case S1 = i and the rest of the sellers in P are matched in (T−i −j )∗ we have that seller i as well does not belong to P. This ensures that edges in P may be colored by alternating colors 3 and 4 (starting with 3). Since S1 = i, we may use color 3 for the first edge and thus assign it to the allocation (T−i ) . All other edges, do not involve i, j or k and thus may be either colored 4 and be part of an allocation ( ˆT−k −j ) or colored 3 and be part of an allocation (T−i ) , in an alternating fashion. We are left to recolor the edges that do not belong to P. 
Since none of these edges includes j we have that the edges 6 that were colored 1, which are part of ( ˆT−k )∗ , may now be colored 4, and be included in the allocation ( ˆT−k −j ) . It is also clear that the edges that were colored 2, which are part of (T−i −j )∗ , may now be colored 3, and be included in the allocation (T−i ) . This completes the proof of (10) and the claim. The following claim shows that a buyer j which changed his allocation due to a false declaration of his valuation cannot improve his bottom utility. The proof is basically the standard VCG argument. Claim 3. Let j be a buyer matched to seller i in T ∗ , and k = i be the seller matched to j in ˆT∗ . Then, vj(i) − Li ≥ vj(k) − ˆLk. (11) Proof. Recall that Li = vj(i) + (V−j)∗ − V ∗ , and ˆLk = ˆvj(k) + ( ˆV−j)∗ − ˆV ∗ = ˆvj(k) + (V−j)∗ − ˆV ∗ . Therefore, vj(i) − Li = V ∗ − (V−j)∗ and vj(k) − ˆLk = vj(k) − ˆvj(k) + ˆV ∗ − (V−j)∗ . It follows that in order to prove (11) we need to show V ∗ ≥ vj(k) − ˆvj(k) + ˆV ∗ . (12) The scenario of this claim occurs when j understates his value for Ai or overstated his value for Ak. Consider these two cases: • ˆvj(k) > vj(k): Since Ak was allocated to j in the allocation ˆT∗ we have that using the allocation of ˆT∗ according to the true valuation gives an allocation of value U satisfying ˆV ∗ − ˆvj(k) + vj(k) ≤ U ≤ V ∗ . • ˆvj(k) = vj(k) and ˆvj(i) < vj(i): In this case (12) reduces to V ∗ ≥ ˆV ∗ . Since j is not allocated i in ˆT∗ we have that ˆT∗ is an allocation that uses only true valuations. From the optimality of T ∗ we conclude that V ∗ ≥ ˆV ∗ . Another case in which a buyer may try to improve his utility is when he does not win any good by stating his true valuation. He may give a false valuation under which he wins some good. The following claim shows that doing this is not beneficial to him. Claim 4. Let j be a buyer not matched in T ∗ , and assume seller k is matched to j in ˆT∗ . Then, vj(k) − ˆLk ≤ 0. Proof. The scenario of this claim occurs if j did not buy in the truth-telling allocation and overstates his value for Ak, ˆvj(k) > vj(k) in his false valuation. Recall that ˆLk = ˆvj(k) + ( ˆV−j)∗ − ˆV ∗ . Thus we need to show that 0 ≥ vj(k) − ˆvj(k) + ˆV ∗ − (V−j)∗ . Since j is not allocated in T∗ then (V−j)∗ = V ∗ . Since j is allocated Ak in ˆT∗ we have that using the allocation of ˆT∗ according to the true valuation gives an allocation of value U satisfying ˆV ∗ − ˆvj(k) + vj(k) ≤ U ≤ V ∗ . Thus we can conclude that 0 ≥ vj(k) − ˆvj(k) + ˆV ∗ − (V−j)∗ . Finally, the following claim ensures that a buyer cannot influence the floor bound of the ZOPA for the good he wins. Claim 5. Let j be a buyer matched to seller i in T ∗ , and assume that ˆT∗ = T∗ , then ˆLi = Li. Proof. Recall that Li = vj(i) + (V−j)∗ − V ∗ , and ˆLi = ˆvj(i) + ( ˆV−j)∗ − ˆV ∗ = ˆvj(i) + (V−j)∗ − ˆV ∗ . Therefore we need to show that ˆV ∗ = V ∗ + ˆvj(i) − vj(i). Since j is allocated Ai in T∗ , we have that using the allocation of T∗ according to the false valuation gives an allocation of value U satisfying V ∗ − vj(i) + ˆvj(i) ≤ U ≤ ˆV ∗ . Similarly since j is allocated Ai in ˆT∗ , we have that using the allocation of ˆT∗ according to the true valuation gives an allocation of value U satisfying ˆV ∗ − ˆvj(i)+vj(i) ≤ U ≤ V ∗ , which together with the previous inequality completes the proof. This completes the analysis of the buyer``s incentive compatibility. We now turn to prove the seller``s incentive compatibility properties of our mechanism. 
The following claim handles the case where a seller that was not matched in T ∗ falsely understates her valuation such that she gets matched n ˆT∗ . Claim 6. Let i be a seller not matched in T ∗ , and assume buyer z is matched to i in ˆT∗ . Then, ˆHz − ci ≤ 0. Proof. Recall that ˆHz = vz(i) + ( ˆV −i −z )∗ − ( ˆV −i )∗ . Since i is not matched in T ∗ and ( ˆT−i )∗ involves only true valuations we have that ( ˆV −i )∗ = V ∗ . Since i is matched with z in ˆT∗ it can be obtained by adding the buyer z - seller i pair to ( ˆT−i −z)∗ . It follows that ˆV ∗ = ( ˆV −i −z )∗ + vz(i) − ˆci. Thus, we have that ˆHz = ˆV ∗ + ˆci − V ∗ . Now, since i is matched in ˆT∗ , using this allocation according to the true valuation gives an allocation of value U satisfying ˆV ∗ + ˆci − ci ≤ U ≤ V ∗ . Therefore ˆHz −ci = ˆV ∗ +ˆci −V ∗ −ci ≤ 0. Finally, the following simple claim ensures that a seller cannot influence the ceiling bound of the ZOPA for the good she sells. Claim 7. Let i be a seller matched to buyer j in T ∗ , and assume that ˆT∗ = T∗ , then ˆHj = Hj. Proof. Since ( ˆV −i −j )∗ = (V −i −j )∗ and ( ˆV −i )∗ = (V −i )∗ it follows that ˆHj = vj(i)+( ˆV −i −j )∗ −( ˆV −i )∗ = vj(i)+(V −i −j )∗ −(V −i )∗ = Hj. 4. CONCLUSIONS AND EXTENSIONS In this paper we suggest a way to deal with the impossibility of producing mechanisms which are efficient, individually rational, incentive compatible and budget balanced. To this aim we introduce the concept of negotiation-range mechanisms which avoid the problem by leaving the final determination of prices to a negotiation between the buyer and seller. The goal of the mechanism is to provide the initial range (ZOPA) for negotiation in a way that it will be beneficial for the participants to close the proposed deals. We present a negotiation range mechanism that is efficient, individually rational, incentive compatible and budget balanced. The ZOPA produced by our mechanism is based 7 on a natural adaptation of the VCG payment scheme in a way that promises valid negotiation ranges which permit a budget balanced allocation. The basic question that we aimed to tackle seems very exciting: which properties can we expect a market mechanism to achieve ? Are there different market models and requirements from the mechanisms that are more feasible than classic mechanism design goals ? In the context of our negotiation-range model, is natural to further study negotiation based mechanisms in more general settings. A natural extension is that of a combinatorial market. Unfortunately, finding the optimal allocation in a combinatorial setting is NP-hard, and thus the problem of maintaining BB is compounded by the problem of maintaining IC when efficiency is approximated [1, 5, 6, 9, 11]. Applying the approach in this paper to develop negotiationrange mechanisms for combinatorial markets, even in restricted settings, seems a promising direction for research. 5. REFERENCES [1] Y. Bartal, R. Gonen, and N. Nisan. Incentive Compatible Multi-Unit Combinatorial Auctions. Proceeding of 9th TARK 2003, pp. 72-87, June 2003. [2] E. H. Clarke. Multipart Pricing of Public Goods. In journal Public Choice 1971, volume 2, pages 17-33. [3] J.Feigenbaum, C. Papadimitriou, and S. Shenker. Sharing the Cost of Multicast Transmissions. Journal of Computer and System Sciences, 63(1),2001. [4] A. Fiat, A. Goldberg, J. Hartline, and A. Karlin. Competitive Generalized Auctions. Proceeding of 34th ACM Symposium on Theory of Computing,2002. [5] R. Gonen, and D. Lehmann. 
Optimal Solutions for Multi-Unit Combinatorial Auctions: Branch and Bound Heuristics. Proceeding of ACM Conference on Electronic Commerce EC``00, pages 13-20, October 2000. [6] R. Gonen, and D. Lehmann. Linear Programming helps solving Large Multi-unit Combinatorial Auctions. In Proceeding of INFORMS 2001, November, 2001. [7] T. Groves. Incentives in teams. In journal Econometrica 1973, volume 41, pages 617-631. [8] R. Lavi, A. Mu``alem and N. Nisan. Towards a Characterization of Truthful Combinatorial Auctions. Proceeding of 44th Annual IEEE Symposium on Foundations of Computer Science,2003. [9] D. Lehmann, L. I. O``Callaghan, and Y. Shoham. Truth revelation in rapid, approximately efficient combinatorial auctions. In Proceedings of the First ACM Conference on Electronic Commerce, pages 96-102, November 1999. [10] R. Myerson, M. Satterthwaite. Efficient Mechanisms for Bilateral Trading. Journal of Economic Theory, 28, pages 265-81, 1983. [11] N. Nisan and A. Ronen. Algorithmic Mechanism Design. In Proceeding of 31th ACM Symposium on Theory of Computing, 1999. [12] D.C. Parkes, J. Kalagnanam, and M. Eso. Achieving Budget-Balance with Vickrey-Based Payment Schemes in Exchanges. Proceeding of 17th International Joint Conference on Artificial Intelligence, pages 1161-1168, 2001. [13] W. Vickrey. Counterspeculation, Auctions and Competitive Sealed Tenders. In Journal of Finance 1961, volume 16, pages 8-37. 8
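To make the seller-floor and buyer-ceiling computations of Section 3.1 concrete, the sketch below derives the ZOPA bounds L_i = v_j(i) + (V_{-j})* - V* and H_j = v_j(i) + (V_{-i,-j})* - (V_{-i})* by re-solving the matching with the relevant participants removed. It is an editorial illustration, not code from the paper, and it reuses the hypothetical optimal_allocation helper sketched after Section 2 above.

def zopa_bounds(seller_cost, buyer_value):
    """For every matched buyer j - seller i pair, return (L_i, H_j) as in Definitions 4 and 5."""
    pairs, V_star = optimal_allocation(seller_cost, buyer_value)
    bounds = {}
    for j, i in pairs.items():
        v_ji = buyer_value[j][i]
        # V*_{-j}: optimal gain from trade with buyer j's bids discarded.
        _, V_minus_j = optimal_allocation(
            seller_cost, {b: vals for b, vals in buyer_value.items() if b != j})
        # V*_{-i}: optimal gain from trade with seller i removed from the trade.
        sellers_minus_i = {s: c for s, c in seller_cost.items() if s != i}
        buyers_minus_i = {b: {s: v for s, v in vals.items() if s != i}
                          for b, vals in buyer_value.items()}
        _, V_minus_i = optimal_allocation(sellers_minus_i, buyers_minus_i)
        # V*_{-i,-j}: optimal gain from trade with both seller i and buyer j removed.
        _, V_minus_ij = optimal_allocation(
            sellers_minus_i, {b: vals for b, vals in buyers_minus_i.items() if b != j})
        L_i = v_ji + V_minus_j - V_star          # seller i's floor
        H_j = v_ji + V_minus_ij - V_minus_i      # buyer j's ceiling
        bounds[(j, i)] = (L_i, H_j)              # Lemma 1 guarantees L_i <= H_j
    return bounds

# Continuing the earlier two-buyer example, zopa_bounds(sellers, buyers) gives the
# ZOPA (5.0, 9.0) for the pair (b1, s1) and (8.0, 12.0) for the pair (b2, s2).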
Negotiation-Range Mechanisms: Exploring the Limits of Truthful Efficient Markets ABSTRACT This paper introduces a new class of mechanisms based on negotiation between market participants. This model allows us to circumvent Myerson and Satterthwaite's impossibility result and present a bilateral market mechanism that is efficient, individually rational, incentive compatible and budget balanced in the single-unit heterogeneous setting. The underlying scheme makes this combination of desirable qualities possible by reporting a price range for each buyer-seller pair that defines a zone of possible agreements, while the final price is left open for negotiation. 1. INTRODUCTION In this paper we introduce the concept of negotiation based mechanisms in the context of the theory of efficient truthful markets. A market consists of multiple buyers and sellers who wish to exchange goods. The market's main objective is to produce an allocation of sellers' goods to buyers as to maximize the total gain from trade. * Supported in part by a grant from the Israeli National Science Foundation (195/02). A commonly studied model of participant behavior is taken from the field of economic mechanism design [3, 4, 11]. In this model each player has a private valuation function that assigns real values to each possible allocation. The algorithm motivates players to participate truthfully by handing payments to them. The mechanism in an exchange collects buyer bids and seller bids and clears the exchange by computing: (i) a set of trades, and (ii) the payments made and received by players. In designing a mechanism to compute trades and payments we must consider the bidding strategies of self-interested players, i.e. rational players that follow expected-utility maximizing strategies. We set allocative efficiency as our primary goal. That is the mechanism must compute a set of trades that maximizes gain from trade. In addition we require individual rationality (IR) so that all players have positive expected utility to participate, budget balance (BB) so that the exchange does not run at a loss, and incentive compatibility (IC) so that reporting the truth is a dominant strategy for each player. Unfortunately, Myerson and Satterthwaite's (1983) well known result demonstrates that in bilateral trade it is impossible to simultaneously achieve perfect efficiency, BB, and IR using an IC mechanism [10]. A unique approach to overcome Myerson and Satterthwaite's impossibility result was attempted by Parkes, Kalagnanam and Eso [12]. This result designs both a regular and a combinatorial bilateral trade mechanism (which imposes BB and IR) that approximates truth revelation and allocation efficiency. In this paper we circumvent Myerson and Satterthwaite's impossibility result by introducing a new model of negotiationrange markets. A negotiation-range mechanism does not produce payment prices for the market participants. Rather, is assigns each buyer-seller pair a price range, called Zone Of Possible Agreements (ZOPA). The buyer is provided with the high end of the range and the seller with the low end of the range. This allows the trading parties to engage in negotiation over the final price with guarantee that the deal is beneficial for both of them. The negotiation process is not considered part of the mechanism but left up to the interested parties, or to some external mechanism to perform. 
In effect a negotiation-range mechanism operates as a mediator between the market participants, offering them the grounds to be able to finalize the terms of the trade by themselves. This concept is natural to many real-world market environments such as the real estate market. We focus on the single-unit heterogeneous setting: n sellers offer one unique good each by placing sealed bids specifying their willingness to sell, and m buyers, interested in buying a single good each, placing sealed bids specifying their willingness to pay for each good they may be interested in. Our main result is a single-unit heterogeneous bilateral trade negotiation-range mechanism (ZOPAS) that is efficient, individually rational, incentive compatible and budget balanced. Our result does not contradict Myerson and Satterthwaite's important theorem. Myerson-Satterthwaite's proof relies on a theorem assuring that in two different efficient IC markets; if the sellers have the same upper bound utility then they will receive the same prices in each market and if the buyers have the same lower bound utility then they will receive the same prices in each market. Our single-unit heterogeneous mechanism bypasses Myerson and Satterthwaite's theorem by producing a price range, defined by a seller's floor and a buyer's ceiling, for each pair of matched players. In our market mechanism the seller's upper bound utility may be the same while the seller's floor is different and the buyer's lower bound utility may be the same while the buyer's ceiling is different. Moreover, the final price is not fixed by the mechanism at all. Instead, it is determined by an independent negotiation between the buyer and seller. More specifically, in a negotiation-range mechanism, the range of prices each matched pair is given is resolved by a negotiation stage where a final price is determined. This negotiation stage is crucial for our mechanism to be IC. Intuitively, a negotiation range mechanism is incentive compatible if truth telling promises the best ZOPA from the point of view of the player in question. That is, he would tell the truth if this strategy maximizes the upper and lower bounds on his utility as expressed by the ZOPA boundaries. Yet, when carefully examined it turns out that it is impossible (by [10]) that this goal will always hold. This is simply because such a mechanism could be easily modified to determine final prices for the players (e.g. by taking the average of the range's boundaries). Here, the negotiation stage come into play. We show that if the above utility maximizing condition does not hold then it is the case that the player cannot influence the negotiation bound that is assigned to his deal partner no matter what value he declares. This means that the only thing that he may achieve by reporting a false valuation is modifying his own negotiation bound, something that he could alternatively achieve by reporting his true valuation and incorporating the effect of the modified negotiation bound into his negotiation strategy. This eliminates the benefit of reporting false valuations and allows our mechanism to compute the optimal gain from trade according to the players' true values. 
The problem of computing the optimal allocation which maximizes gain from trade can be conceptualized as the problem of finding the maximum weighted matching in a weighted bipartite graph connecting buyers and sellers, where each edge in the graph is assigned a weight equal to the difference between the respective buyer bid and seller bid. It is well known that this problem can be solved efficiently in polynomial time. VCG IC payment schemes [2, 7, 13] support efficient and IR bilateral trade but not simultaneously BB. Our particular approach adapts the VCG payment scheme to achieve budget balance. The philosophy of the VCG payment schemes in bilateral trade is that the buyer pays the seller's opportunity cost of not selling the good to another buyer and not keeping the good to herself. The seller is paid in addition to the buyer's price a compensation for the damage the mechanism did to the seller by not extracting the buyer's full bid. Our philosophy is a bit different: The seller is paid at least her opportunity cost of not selling the good to another buyer and not keeping the good for herself. The buyer pays at most his alternate seller's opportunity cost of not selling the good to another buyer and not keeping the alternate good for herself. The rest of this paper is organized as follows. In Section 2 we describe our model and definitions. In section 3 we present the single-unit heterogeneous negotiation-range mechanism and show that it is efficient, IR, IC and BB. Finally, we conclude with a discussion in Section 4. 2. NEGOTIATION MARKETS PRELIMINARIES Let H denote the set of players, N the set of n selling players, and M the set of m buying players, where H = N U M. Let T = {1,..., t} denote the set of goods. Let Ti G {− 1, 0, 1} t denote an exchange vector for a trade, such that player i buys goods {A G T | Ti (A) = 1} and sells goods {A G T | Ti (A) = − 1}. Let T = (T1,..., Tlnl) denote the complete trade between all players. We view T as describing the allocation of goods by the mechanism to the buyers and sellers. In the single-unit heterogeneous setting every good belongs to specific seller, and every buyer is interested in buying one good. The buyer may bid for several or all goods. At the end of the auction every good is either assigned to one of the buyers who bid for it or kept unsold by the seller. It is convenient to assume the sets of buyers and sellers are disjoint (though it is not required), i.e. N n M = 0. Each seller i is associated with exactly one good Ai, for which she has true valuation ci which expresses the price at which it is beneficial for her to sell the good. If the seller reports a false valuation at an attempt to improve the auction results for her, this valuation is denoted ˆci. A buyer has a valuation vector describing his valuation for each of the goods according to their owner. Specifically, vj (k) denotes buyer j's valuation for good Ak. Similarly, if he reports a false valuation it is denoted ˆvj (k). If buyer j is matched by the mechanism with seller i then Ti (Ai) = − 1 and Tj (Ai) = 1. Notice, that in our setting for every k = i, Ti (Ak) = 0 and Tj (Ai) = 0 and also for every z = j, Tz (Ai) = 0. For a matched buyer j - seller i pair, the gain from trade on the deal is defined as vj (i) − ci. Given and allocation T, the gain from trade associated with T is Let T * denote the optimal allocation which maximizes the gain from trade, computed according to the players' true valuations. 
Let V * denote the optimal gain from trade associated with this allocation. When players' report false valuations we use Tˆ * and Vˆ * to denote the optimal allocation and gain from trade, respectively, when computed according to the reported valuations. We are interested in the design of negotiation-range mechanisms. In contrast to a standard auction mechanism where the buyer and seller are provided with the prices they should pay, the goal of a negotiation-range mechanism is to provide the player's with a range of prices within which they can negotiate the final terms of the deal by themselves. The mechanism would provide the buyer with the upper bound on the range and the seller with the lower bound on the range. This gives each of them a promise that it will be beneficial for them to close the deal, but does not provide information about the other player's terms of negotiation. The basic assumption is that participants in the auction are self-interested players. That is their main goal is to maximize their expected utility. The utility for a buyer who does not participate in the trade is 0. If he does win some good, his utility is the surplus between his valuation for that good and the price he pays. For a seller, if she keeps the good unsold, her utility is just her valuation of the good, and the surplus is 0. If she gets to sell it, her utility is the price she is paid for it, and the surplus is the difference between this price and her valuation. Since negotiation-range mechanisms assign bounds on the range of prices rather than the final price, it is useful to define the upper and lower bounds on the player's utilities defined by the range's limits. • The buyer's top utility is: vj (i) − L, and the buyer's bottom utility is vj (i) − H. • The seller's top utility is H, with surplus H − ci, and the seller's bottom utility is L, with surplus L − ci. 3. THE SINGLE-UNIT HETEROGENEOUS MECHANISM (ZOPAS) 3.1 Description of the Mechanism ZOPAS is a negotiation-range mechanism, it finds the optimal allocation T * and uses it to define a ZOPA for each buyer-seller pair. The first stage in applying the mechanism is for the buyers and sellers to submit their sealed bids. The mechanism then allocates buyers to sellers by computing the allocation T *, which results in the optimal gain from trade V *, and defines a ZOPA for each buyer-seller pair. Finally, buyers and sellers use the ZOPA to negotiate a final price. Computing T * involves solving the maximum weighted bipartite matching problem for the complete bipartite graph Kn, m constructed by placing the buyers on one side of the Figure 1: The ZOPAS mechanism graph, the seller on another and giving the edge between buyer j and seller i weight equal to vj (i) − ci. The maximum weighted matching problem in solvable in polynomial time (e.g., using the Hungarian Method). This results in a matching between buyers and sellers that maximizes gain from trade. The next step is to compute for each buyer-seller pair a seller's floor, which provides the lower bound of the ZOPA for this pair, and assigns it to the seller. A seller's floor is computed by calculating the difference between the total gain from trade when the buyer is excluded and the total gain from trade of the other participants when the buyer is included (the VCG principle). Let (T--j) * denote the gain from trade of the optimal allocation when buyer j's bids are discarded. Denote by (V--j) * the total gain from trade in the allocation (T--j) *. 
The seller is instructed not to accept less than this price from her matched buyer. Next, the mechanism computes for each buyer-seller pair a buyer's ceiling, which provides the upper bound of the ZOPA for this pair, and assigns it to the buyer. Each buyer's ceiling is computed by removing the buyer's matched seller and calculating the difference between the total gain from trade when the buyer is excluded and the total gain from trade of the other participants when the buyer is included. Let (T−i)* denote the optimal allocation when seller i is removed from the trade. Denote by (V−i)* the total gain from trade in the allocation (T−i)*. Let (T−i−j)* denote the optimal allocation when seller i is removed from the trade and buyer j's bids are discarded. Denote by (V−i−j)* the total gain from trade in the allocation (T−i−j)*. DEFINITION 5. Buyer Ceiling: The highest price the buyer should expect to pay, communicated to the buyer by the mechanism. The buyer ceiling for player j who was matched with seller i on good Ai, i.e., Tj(Ai) = 1, is computed as Hj = vj(i) + (V−i−j)* − (V−i)*. The buyer is instructed not to pay more than this price to his matched seller. Once the negotiation range lower bound and upper bound are computed for every matched pair, the mechanism reports the lower bound price to the seller and the upper bound price to the buyer. At this point each buyer-seller pair negotiates on the final price and concludes the deal. A schematic description of the ZOPAS mechanism is given in Figure 1. [Figure 1: The ZOPAS mechanism. Find the optimal allocation T*: compute the maximum weighted bipartite matching for the bipartite graph of buyers and sellers, with edge weights equal to the gain from trade. For every matched buyer j and seller i of good Ai: report to seller i her floor Li and identify her matched buyer j; report to buyer j his ceiling Hj and identify his matched seller i; i and j negotiate the good's final price.] 3.2 Analysis of the Mechanism In this section we analyze the properties of the ZOPAS mechanism. THEOREM 1. The ZOPAS market negotiation-range mechanism is an incentive-compatible bilateral trade mechanism that is efficient, individually rational and budget balanced. Clearly ZOPAS is an efficient polynomial time mechanism. Let us show it satisfies the rest of the properties in the theorem. CLAIM 1. ZOPAS is individually rational, i.e., the mechanism maintains a nonnegative utility surplus for all participants. PROOF. If a participant does not trade in the optimal allocation then his utility surplus is zero by definition. Consider a pair of buyer j and seller i which are matched in the optimal allocation T*. Then the buyer's utility is at least vj(i) − Hj. Recall that Hj = vj(i) + (V−i−j)* − (V−i)*. Since the optimal gain from trade which includes j is at least as high as that which does not, we have that the utility is nonnegative: vj(i) − Hj = (V−i)* − (V−i−j)* ≥ 0. Now consider the seller i. Her utility surplus is at least Li − ci. Recall that Li = vj(i) + (V−j)* − V*. If we remove from the optimal allocation T* the contribution of the buyer j - seller i pair, we are left with an allocation which excludes j and has value V* − (vj(i) − ci). This implies that (V−j)* ≥ V* − vj(i) + ci, which implies that Li − ci ≥ 0. The fact that ZOPAS is a budget-balanced mechanism follows from the following lemma, which ensures the validity of the negotiation range, i.e., that every seller's floor is below her matched buyer's ceiling. This ensures that they can close the deal at a final price which lies in this range. LEMMA 1 (Validity of the Negotiation Range). For every buyer j and seller i matched in T*, Li ≤ Hj. PROOF.
Recall that Li = vj(i) + (V−j)* − V* and Hj = vj(i) + (V−i−j)* − (V−i)*, so the lemma amounts to showing that (V−i)* + (V−j)* ≤ V* + (V−i−j)*. (1) The proof of (1) is based on a method which we apply several times in our analysis. We start with the allocations (T−i)* and (T−j)*, which together have value equal to (V−i)* + (V−j)*. We now use them to create a pair of new valid allocations, by using the same pairs that were matched in the original allocations. This means that the sum of values of the new allocations is the same as for the original pair of allocations. We also require that one of the new allocations does not include buyer j or seller i. This means that the sum of values of these new allocations is at most V* + (V−i−j)*, which proves (1). Let G be the bipartite graph where the nodes on one side of G represent the buyers and the nodes on the other side represent the sellers, and edge weights represent the gain from trade for the particular pair. The different allocations represent bipartite matchings in G. It will be convenient for the sake of our argument to think of the edges that belong to each of the matchings as being colored with a specific color representing this matching. Assign color 1 to the edges in the matching (T−i)* and assign color 2 to the edges in the matching (T−j)*. We claim that these edges can be recolored using colors 3 and 4 so that the new coloring represents allocations T* (represented by color 3) and (T−i−j)* (represented by color 4). This implies that inequality (1) holds. Figure 2 illustrates the graph G and the colorings of the different matchings. [Figure 2: Alternating path argument for Lemma 1 (Validity of the Negotiation Range) and Claim 2 (part of the buyer's IC proof). Figure 3: Key to Figure 2.] Define an alternating path P starting at j. Let S1 be the seller matched to j in (T−i)* (if none exists then P is empty). Let B1 be the buyer matched to S1 in (T−j)*, S2 be the seller matched to B1 in (T−i)*, B2 be the buyer matched to S2 in (T−j)*, and so on. This defines an alternating path P, starting at j, whose edges' colors alternate between colors 1 and 2 (starting with 1). This path ends either in a seller who is not matched in (T−j)* or in a buyer who is not matched in (T−i)*. Since all sellers in this path are matched in (T−i)*, we have that seller i does not belong to P. This ensures that edges in P may be recolored by alternating colors 3 and 4 (starting with 3): except for the first edge, all others do not involve i or j and thus may be colored 4 and be part of an allocation (T−i−j)*. We are left to recolor the edges that do not belong to P. Since none of these edges includes j, we have that the edges that were colored 1, which are part of (T−i)*, may now be colored 4 and be included in the allocation (T−i−j)*. It is also clear that the edges that were colored 2, which are part of (T−j)*, may now be colored 3 and be included in the allocation T*. This completes the proof of the lemma. 3.3 Incentive Compatibility The basic requirement in mechanism design is for an exchange mechanism to be incentive compatible. This means that its payment structure enforces that truth-telling is the players' weakly dominant strategy, that is, that the strategy by which the player states his true valuation results in utility greater than or equal to that of any other strategy. The utility surplus is defined as the absolute difference between the player's bid and his price.
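Before stating the formal incentive requirements, the following sketch (ours, reusing the optimal_allocation helper from the earlier sketch; the names v_star_without and zopas_bounds are ours) shows how the floors and ceilings defined above can be computed by re-solving the matching with a buyer, a seller, or both removed, and checks on each matched pair the guarantee Li ≤ Hj established by Lemma 1.

```python
# Illustrative sketch: compute each matched pair's ZOPA bounds (VCG-style).
import numpy as np

def v_star_without(buyer_values, seller_costs, drop_buyer=None, drop_seller=None):
    """Optimal gain from trade when a buyer's bids are discarded and/or a seller is removed."""
    V = np.asarray(buyer_values, dtype=float).copy()
    c = np.asarray(seller_costs, dtype=float).copy()
    if drop_buyer is not None:
        V[drop_buyer, :] = -np.inf      # discard buyer j's bids
    if drop_seller is not None:
        c[drop_seller] = np.inf         # remove seller i's good from the trade
    _, v = optimal_allocation(V, c)     # helper from the earlier sketch
    return v

def zopas_bounds(buyer_values, seller_costs):
    """Return {(buyer j, seller i): (L_i, H_j)} for every pair matched in T*."""
    pairs, v_star = optimal_allocation(buyer_values, seller_costs)
    V = np.asarray(buyer_values, dtype=float)
    bounds = {}
    for j, i in pairs:
        v_minus_j  = v_star_without(buyer_values, seller_costs, drop_buyer=j)
        v_minus_i  = v_star_without(buyer_values, seller_costs, drop_seller=i)
        v_minus_ij = v_star_without(buyer_values, seller_costs, drop_buyer=j, drop_seller=i)
        floor_i = V[j, i] + v_minus_j - v_star        # L_i = v_j(i) + (V_-j)* - V*
        ceil_j  = V[j, i] + v_minus_ij - v_minus_i    # H_j = v_j(i) + (V_-i-j)* - (V_-i)*
        assert floor_i <= ceil_j + 1e-9               # Lemma 1: the ZOPA is valid
        bounds[(j, i)] = (floor_i, ceil_j)
    return bounds
```

Each bound costs only one additional matching computation per removed player, so the entire ZOPA can be produced in polynomial time, in line with the efficiency claim of Theorem 1.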
Negotiation-range mechanisms assign bounds on the range of prices rather than the final price and therefore the player's valuation only influences the minimum and maximum bounds on his utility. For a buyer the minimum (bottom) utility would be based on the top of the negotiation range (ceiling), and the maximum (top) utility would be based on the bottom of the negotiation range (floor). For a seller it's the other way around. Therefore the basic natural requirement from negotiation-range mechanisms would be that stating the player's true valuation results in both the higher bottom utility and higher top utility for the player, compared with other strategies. Unfortunately, this requirement is still too strong and it is impossible (by [10]) that this will always hold. Therefore we slightly relax it as follows: we require this holds when the false valuation based strategy changes the player's allocation. When the allocation stays unchanged we require instead that the player would not be able to change his matched player's bound (e.g. a buyer cannot change the seller's floor). This means that the only thing he can influence is his own bound, something that he can alternatively achieve through means of negotiation. The following formally summarizes our incentive compatibility requirements from the negotiation-range mechanism. Buyer's incentive compatibility: 9 Let j be a buyer matched with seller i by the mechanism according to valuation vj and the negotiationrange assigned is (Li, Hj). Assume that when the mechanism is applied according to valuation ˆvj, seller k = i is matched with j and the negotiation-range assigned is (ˆLk, ˆHj). Then 9 Let j be a buyer not matched by the mechanism according to valuation vj. Assume that when the mechanism is applied according to valuation ˆvj, seller k = i is matched with j and the negotiation-range assigned is (ˆLk, ˆHj). Then 9 Let j be a buyer matched with seller i by the mechanism according to valuation vj and let the assigned bottom of the negotiation range (seller's floor) be Li. Assume that when the mechanism is applied according to valuation ˆvj, the matching between i and j remains unchanged and let the assigned bottom of the negotiation range (seller's floor) be ˆLi. Then, Notice that the first inequality of (4) always holds for a valid negotiation range mechanism (Lemma 1). Seller's incentive compatibility: 9 Let i be a seller not matched by the mechanism according to valuation ci. Assume that when the mechanism ˆHj = Hj. (7) Notice that the first inequality of (6) always holds for a valid negotiation range mechanism (Lemma 1). Observe that in the case of sellers in our setting, the case expressed by requirement (6) is the only case in which the seller may change the allocation to her benefit. In particular, it is not possible for seller i who is matched in T to change her buyer by reporting a false valuation. This fact simply follows from the observation that reducing the seller's valuation increases the gain from trade for the current allocation by at least as much than any other allocation, whereas increasing the seller's valuation decreases the gain from trade for the current allocation by exactly the same amount as any other allocation in which it is matched. Therefore, the only case the optimal allocation may change is when in the new allocation i is not matched in which case her utility surplus is 0. PROOF. We begin with the incentive compatibility for buyers. 
Consider a buyer j who is matched with seller i according to his true valuation v. Consider that j is reporting instead a false valuation vˆ which results in a different allocation in which j is matched with seller k = i. The following claim shows that a buyer j which changed his allocation due to a false declaration of his valuation cannot improve his top utility. CLAIM 2. Let j be a buyer matched to seller i in T, and let k = i be the seller matched to j in Tˆ. Then, Consider first the case were j is matched to i in (Tˆ − k). If we remove this pair and instead match j with k we obtain a matching which excludes i, if the gain from trade on the new pair is taken according to the true valuation then we get (Vˆ − k)--(ˆvj (i)--ci) + (vj (k)--ck) <(V − i). Now, since the optimal allocation Tˆ matches j with k rather than with i we have that where we have used that (Vˆ − i − j) = (V − i − j) since these allocations exclude j. Adding up these two inequalities implies (9) in this case. It is left to prove (9) when j is not matched to i in (In fact, in this case we prove the stronger inequality It is easy to see that (10) indeed implies (9) since it follows from the fact that k is assigned to j in Tˆ that ˆvj (k)> vj (k). The proof of (10) works as follows. We start with the allocations (Tˆ − k) and (T − i − j) which together have value equal to (Vˆ − k) + (V − i − j). We now use them to create a pair of new valid allocations, by using the same pairs that were matched in the original allocations. This means that the sum of values of the new allocations is the same as the original pair of allocations. We also require that one of the new allocations does not include seller i and is based on the true valuation v, while the other allocation does not include buyer j or seller k and is based on the false valuation ˆv. This means that the sum of values of these new allocations is at most (V − i) + (Vˆ − k − j), which proves (10). Let G be the bipartite graph where the nodes on one side of G represent the buyers and the nodes on the other side represent the sellers, and edge weights represent the gain from trade for the particular pair. The different allocations represent bipartite matchings in G. It will be convenient for the sake of our argument to think of the edges that belong to each of the matchings as being colored with a specific color representing this matching. Assign color 1 to the edges in the matching (assign color 2 to the edges in the matching (T − j − i). We claim that these edges can be recolored using colors 3 and 4 so that the new coloring represents allocations (T − i) (represented by color 3) and (Tˆ − k − j) (represented by color 4). This implies the that inequality (10) holds. Figure 2 illustrates the graph G and the colorings of the different matchings. Define an alternating path P starting at j. Let S1 = i be the seller matched to j in (Tˆ − k) (if none exists then P is empty). Let B1 be the buyer matched to S1 in (T − j − i), S2 be the seller matched to B1 in (Tˆ − k), B2 be the buyer matched to S2 in (T − i − j), and so on. This defines an alternating path P, starting at j, whose edges' colors alternate between colors 1 and 2 (starting with 1). This path ends either in a seller who is not matched in (T − i − j) or in a buyer who is not matched in (Tˆ − k). Since all sellers in this path are matched in (Tˆ − k), we have that seller k does not belong to P. 
Since in this case S1 = i and the rest of the sellers in P are matched in (T − i − j) we have that seller i as well does not belong to P. This ensures that edges in P may be colored by alternating colors 3 and 4 (starting with 3). Since S1 = i, we may use color 3 for the first edge and thus assign it to the allocation (T − i). All other edges, do not involve i, j or k and thus may be either colored 4 and be part of an allocation (Tˆ − k − j) or colored 3 and be part of an allocation (T − i), in an alternating fashion. We are left to recolor the edges that do not belong to P. Since none of these edges includes j we have that the edges is applied according to valuation ˆci, buyer z = j is matched with i and the negotiation-range assigned is (ˆLi, ˆHz). Then 9 Let i be a buyer matched with buyer j by the mechanism according to valuation ci and let the assigned top of the negotiation range (buyer's ceiling) be Hj. Assume that when the mechanism is applied according to valuation ˆci, the matching between i and j remains unchanged and let the assigned top of the negotiation range (buyer's ceiling) be ˆ Hj. Then, Tˆ − k). Tˆ − k) and that were colored 1, which are part of (Tˆ − k), may now be colored 4, and be included in the allocation (Tˆ − j − k). It is also clear that the edges that were colored 2, which are part of (T − i − j), may now be colored 3, and be included in the allocation (T − i). This completes the proof of (10) and the claim. The following claim shows that a buyer j which changed his allocation due to a false declaration of his valuation cannot improve his bottom utility. The proof is basically the standard VCG argument. CLAIM 3. Let j be a buyer matched to seller i in T, and k = i be the seller matched to j in Tˆ. Then, It follows that in order to prove (11) we need to show The scenario of this claim occurs when j understates his value for Ai or overstated his value for Ak. Consider these two cases: 0 ˆvj (k)> vj (k): Since Ak was allocated to j in the allocation Tˆ we have that using the allocation of Tˆ according to the true valuation gives an allocation of value U satisfying Vˆ--ˆvj (k) + vj (k) <U <V. 0 ˆvj (k) = vj (k) and ˆvj (i) <vj (i): In this case (12) reduces to V> Vˆ. Since j is not allocated i in Tˆ we have that Tˆ is an allocation that uses only true valuations. From the optimality of T we conclude that V> Vˆ. Another case in which a buyer may try to improve his utility is when he does not win any good by stating his true valuation. He may give a false valuation under which he wins some good. The following claim shows that doing this is not beneficial to him. CLAIM 4. Let j be a buyer not matched in T, and assume seller k is matched to j in Tˆ. Then, CLAIM 5. Let j be a buyer matched to seller i in T, and assume that Tˆ = T, then ˆLi = Li. PROOF. Recall that Li = vj (i) + (V − j)--V, and ˆLi = ˆvj (i) + (ˆV − j)--Vˆ = ˆvj (i) + (V − j)--Vˆ. Therefore we need to show that Vˆ = V + ˆvj (i)--vj (i). Since j is allocated Ai in T, we have that using the allocation of T according to the false valuation gives an allocation of value U satisfying V--vj (i) + ˆvj (i) <U <Vˆ. Similarly since j is allocated Ai in Tˆ, we have that using the allocation of Tˆ according to the true valuation gives an allocation of value U satisfying Vˆ--ˆvj (i) + vj (i) <U <V, which together with the previous inequality completes the proof. This completes the analysis of the buyer's incentive compatibility. We now turn to prove the seller's incentive compatibility properties of our mechanism. 
The following claim handles the case where a seller that was not matched in T falsely understates her valuation such that she gets matched n Tˆ. CLAIM 6. Let i be a seller not matched in T, and assume buyer z is matched to i in Tˆ. Then, Since i is not matched in T and (Tˆ − i) involves only true valuations we have that (Vˆ − i) = V. Since i is matched with z in Tˆ it can be obtained by adding the buyer z - seller i pair to (Tˆ − z − i). It follows that ˆV = (ˆ V − i matched in Tˆ, using this allocation according to the true valuation gives an allocation of value U satisfying Vˆ + ˆci--Vˆ + ˆci--V--ci <0. Finally, the following simple claim ensures that a seller cannot influence the ceiling bound of the ZOPA for the good she sells. CLAIM 7. Let i be a seller matched to buyer j in T, and assume that Tˆ = T, then ˆHj = Hj. PROOF. Since (ˆV − j − i) = (V − j − i) and (ˆV − i) = (V − i) it follows that PROOF. The scenario of this claim occurs if j did not buy in the truth-telling allocation and overstates his value for Ak, ˆvj (k)> vj (k) in his false valuation. Recall that V − j)--V. Thus we need to show that 0> vj (k)--ˆvj (k) + Vˆ--(V − j). Since j is not allocated in T then (V − j) = V. Since j is allocated Ak in Tˆ we have that using the allocation of Tˆ according to the true valuation gives an allocation of value U satisfying Vˆ--ˆvj (k) + vj (k) <U <V. Thus we can conclude that 0> vj (k)--ˆvj (k) + Vˆ--(V − j). Finally, the following claim ensures that a buyer cannot influence the floor bound of the ZOPA for the good he wins. 4. CONCLUSIONS AND EXTENSIONS In this paper we suggest a way to deal with the impossibility of producing mechanisms which are efficient, individually rational, incentive compatible and budget balanced. To this aim we introduce the concept of negotiation-range mechanisms which avoid the problem by leaving the final determination of prices to a negotiation between the buyer and seller. The goal of the mechanism is to provide the initial range (ZOPA) for negotiation in a way that it will be beneficial for the participants to close the proposed deals. We present a negotiation range mechanism that is efficient, individually rational, incentive compatible and budget balanced. The ZOPA produced by our mechanism is based on a natural adaptation of the VCG payment scheme in a way that promises valid negotiation ranges which permit a budget balanced allocation. The basic question that we aimed to tackle seems very exciting: which properties can we expect a market mechanism to achieve? Are there different market models and requirements from the mechanisms that are more feasible than classic mechanism design goals? In the context of our negotiation-range model, is natural to further study negotiation based mechanisms in more general settings. A natural extension is that of a combinatorial market. Unfortunately, finding the optimal allocation in a combinatorial setting is NP-hard, and thus the problem of maintaining BB is compounded by the problem of maintaining IC when efficiency is approximated [1, 5, 6, 9, 11]. Applying the approach in this paper to develop negotiationrange mechanisms for combinatorial markets, even in restricted settings, seems a promising direction for research.
C-84
Selfish Caching in Distributed Systems: A Game-Theoretic Analysis
We analyze replication of resources by server nodes that act selfishly, using a game-theoretic approach. We refer to this as the selfish caching problem. In our model, nodes incur either cost for replicating resources or cost for access to a remote replica. We show the existence of pure strategy Nash equilibria and investigate the price of anarchy, which is the relative cost of the lack of coordination. The price of anarchy can be high due to undersupply problems, but with certain network topologies it has better bounds. With a payment scheme the game can always implement the social optimum in the best case by giving servers incentive to replicate.
[ "cach", "distribut system", "game-theoret approach", "cach problem", "remot replica", "network topolog", "nash equilibrium", "peer-to-peer file system", "demand distribut", "anarchi price", "instrument server", "primal-dual techniqu", "social util", "submodular", "group strategyproof", "aggreg effect", "peer-to-peer system", "game-theoret model" ]
[ "P", "P", "P", "P", "P", "P", "M", "M", "M", "R", "M", "U", "M", "U", "U", "U", "M", "R" ]
Selfish Caching in Distributed Systems: A Game-Theoretic Analysis Byung-Gon Chun ∗ bgchun@cs.berkeley.edu Kamalika Chaudhuri † kamalika@cs.berkeley.edu Hoeteck Wee ‡ hoeteck@cs.berkeley.edu Marco Barreno § barreno@cs.berkeley.edu Christos H. Papadimitriou † christos@cs.berkeley.edu John Kubiatowicz ∗ kubitron@cs.berkeley.edu Computer Science Division University of California, Berkeley ABSTRACT We analyze replication of resources by server nodes that act selfishly, using a game-theoretic approach. We refer to this as the selfish caching problem. In our model, nodes incur either cost for replicating resources or cost for access to a remote replica. We show the existence of pure strategy Nash equilibria and investigate the price of anarchy, which is the relative cost of the lack of coordination. The price of anarchy can be high due to undersupply problems, but with certain network topologies it has better bounds. With a payment scheme the game can always implement the social optimum in the best case by giving servers incentive to replicate. Categories and Subject Descriptors C.2.4 [Computer-Communication Networks]: Distributed Systems General Terms Algorithms, Economics, Theory, Performance 1. INTRODUCTION Wide-area peer-to-peer file systems [2,5,22,32,33], peer-to-peer caches [15, 16], and web caches [6, 10] have become popular over the last few years. Caching1 of files in selected servers is widely used to enhance the performance, availability, and reliability of these systems. However, most such systems assume that servers cooperate with one another by following protocols optimized for overall system performance, regardless of the costs incurred by each server. In reality, servers may behave selfishly - seeking to maximize their own benefit. For example, parties in different administrative domains utilize their local resources (servers) to better support clients in their own domains. They have obvious incentives to cache objects2 that maximize the benefit in their domains, possibly at the expense of globally optimum behavior. It has been an open question whether these caching scenarios and protocols maintain their desirable global properties (low total social cost, for example) in the face of selfish behavior. In this paper, we take a game-theoretic approach to analyzing the problem of caching in networks of selfish servers through theoretical analysis and simulations. We model selfish caching as a non-cooperative game. In the basic model, the servers have two possible actions for each object. If a replica of a requested object is located at a nearby node, the server may be better off accessing the remote replica. On the other hand, if all replicas are located too far away, the server is better off caching the object itself. Decisions about caching the replicas locally are arrived at locally, taking into account only local costs. We also define a more elaborate payment model, in which each server bids for having an object replicated at another site. Each site now has the option of replicating an object and collecting the related bids. Once all servers have chosen a strategy, each game specifies a configuration, that is, the set of servers that replicate the object, and the corresponding costs for all servers. Game theory predicts that such a situation will end up in a Nash equilibrium, that is, a set of (possibly randomized) strategies with the property that no player can benefit by changing its strategy while the other players keep their strategies unchanged [28]. 
Foundational considerations notwithstanding, it is not easy to accept randomized strategies as the behavior of rational agents in a distributed system (see [28] for an extensive discussion) - but this is what classical game theory can guarantee. In certain very fortunate situations, however (see [9]), the existence of pure (that is, deterministic) Nash equilibria can be predicted. With or without randomization, however, the lack of coordination inherent in selfish decision-making may incur costs well beyond what would be globally optimum. This loss of efficiency is 1 We will use caching and replication interchangeably. 2 We use the term object as an abstract entity that represents files and other data objects. 21 quantified by the price of anarchy [21]. The price of anarchy is the ratio of the social (total) cost of the worst possible Nash equilibrium to the cost of the social optimum. The price of anarchy bounds the worst possible behavior of a selfish system, when left completely on its own. However, in reality there are ways whereby the system can be guided, through seeding or incentives, to a preselected Nash equilibrium. This optimistic version of the price of anarchy [3] is captured by the smallest ratio between a Nash equilibrium and the social optimum. In this paper we address the following questions : • Do pure strategy Nash equilibria exist in the caching game? • If pure strategy Nash equilibria do exist, how efficient are they (in terms of the price of anarchy, or its optimistic counterpart) under different placement costs, network topologies, and demand distributions? • What is the effect of adopting payments? Will the Nash equilibria be improved? We show that pure strategy Nash equilibria always exist in the caching game. The price of anarchy of the basic game model can be O(n), where n is the number of servers; the intuitive reason is undersupply. Under certain topologies, the price of anarchy does have tighter bounds. For complete graphs and stars, it is O(1). For D-dimensional grids, it is O(n D D+1 ). Even the optimistic price of anarchy can be O(n). In the payment model, however, the game can always implement a Nash equilibrium that is same as the social optimum, so the optimistic price of anarchy is one. Our simulation results show several interesting phases. As the placement cost increases from zero, the price of anarchy increases. When the placement cost first exceeds the maximum distance between servers, the price of anarchy is at its highest due to undersupply problems. As the placement cost further increases, the price of anarchy decreases, and the effect of replica misplacement dominates the price of anarchy. The rest of the paper is organized as follows. In Section 2 we discuss related work. Section 3 discusses details of the basic game and analyzes the bounds of the price of anarchy. In Section 4 we discuss the payment game and analyze its price of anarchy. In Section 5 we describe our simulation methodology and study the properties of Nash equilibria observed. We discuss extensions of the game and directions for future work in Section 6. 2. RELATED WORK There has been considerable research on wide-area peer-to-peer file systems such as OceanStore [22], CFS [5], PAST [32], FARSITE [2], and Pangaea [33], web caches such as NetCache [6] and SummaryCache [10], and peer-to-peer caches such as Squirrel [16]. Most of these systems use caching for performance, availability, and reliability. 
The caching protocols assume obedience to the protocol and ignore participants'' incentives. Our work starts from the assumption that servers are selfish and quantifies the cost of the lack of coordination when servers behave selfishly. The placement of replicas in the caching problem is the most important issue. There is much work on the placement of web replicas, instrumentation servers, and replicated resources. All protocols assume obedience and ignore participants'' incentives. In [14], Gribble et al. discuss the data placement problem in peer-to-peer systems. Ko and Rubenstein propose a self-stabilizing, distributed graph coloring algorithm for the replicated resource placement [20]. Chen, Katz, and Kubiatowicz propose a dynamic replica placement algorithm exploiting underlying distributed hash tables [4]. Douceur and Wattenhofer describe a hill-climbing algorithm to exchange replicas for reliability in FARSITE [8]. RaDar is a system that replicates and migrates objects for an Internet hosting service [31]. Tang and Chanson propose a coordinated en-route web caching that caches objects along the routing path [34]. Centralized algorithms for the placement of objects, web proxies, mirrors, and instrumentation servers in the Internet have been studied extensively [18,19,23,30]. The facility location problem has been widely studied as a centralized optimization problem in theoretical computer science and operations research [27]. Since the problem is NP-hard, approximation algorithms based on primal-dual techniques, greedy algorithms, and local search have been explored [17, 24, 26]. Our caching game is different from all of these in that the optimization process is performed among distributed selfish servers. There is little research in non-cooperative facility location games, as far as we know. Vetta [35] considers a class of problems where the social utility is submodular (submodularity means decreasing marginal utility). In the case of competitive facility location among corporations he proves that any Nash equilibrium gives an expected social utility within a factor of 2 of optimal plus an additive term that depends on the facility opening cost. Their results are not directly applicable to our problem, however, because we consider each server to be tied to a particular location, while in their model an agent is able to open facilities in multiple locations. Note that in that paper the increase of the price of anarchy comes from oversupply problems due to the fact that competing corporations can open facilities at the same location. On the other hand, the significant problems in our game are undersupply and misplacement. In a recent paper, Goemans et al. analyze content distribution on ad-hoc wireless networks using a game-theoretic approach [12]. As in our work, they provide monetary incentives to mobile users for caching data items, and provide tight bounds on the price of anarchy and speed of convergence to (approximate) Nash equilibria. However, their results are incomparable to ours because their payoff functions neglect network latencies between users, they consider multiple data items (markets), and each node has a limited budget to cache items. Cost sharing in the facility location problem has been studied using cooperative game theory [7, 13, 29]. Goemans and Skutella show strong connections between fair cost allocations and linear programming relaxations for facility location problems [13]. 
Pál and Tardos develop a method for cost-sharing that is approximately budget-balanced and group strategyproof and show that the method recovers 1/3 of the total cost for the facility location game [29]. Devanur, Mihail, and Vazirani give a strategyproof cost allocation for the facility location problem, but cannot achieve group strategyproofness [7]. 3. BASIC GAME The caching problem we study is to find a configuration that meets certain objectives (e.g., minimum total cost). Figure 1 shows examples of caching among four servers. In network (a), A stores an object. Suppose B wants to access the object. If it is cheaper to access the remote replica than to cache it, B accesses the remote replica as shown in network (b). In network (c), C wants to access the object. If C is far from A, C caches the object instead of accessing the object from A. It is possible that in an optimal configuration it would be better to place replicas in A and B. Understanding the placement of replicas by selfish servers is the focus of our study. [Figure 1: Caching. There are four servers labeled A, B, C, and D. The rectangles are object replicas. In (a), A stores an object. If B incurs less cost accessing A's replica than it would caching the object itself, it accesses the object from A as in (b). If the distance cost is too high, the server caches the object itself, as C does in (c). This figure is an example of our caching game model.] The caching problem is abstracted as follows. There is a set N of n servers and a set M of m objects. The distance between servers can be represented as a distance matrix D (i.e., dij is the distance from server i to server j). D models an underlying network topology. For our analysis we assume that the distances are symmetric and the triangle inequality holds on the distances (for all servers i, j, k: dij + djk ≥ dik). Each server has demand from clients that is represented by a demand matrix W (i.e., wij is the demand of server i for object j). When a server caches objects, the server incurs some placement cost that is represented by a matrix α (i.e., αij is the placement cost of server i for object j). In this study, we assume that servers have no capacity limit. As we discuss in the next section, this fact means that the caching behavior with respect to each object can be examined separately. Consequently, we can talk about configurations of the system with respect to a given object: DEFINITION 1. A configuration X for some object O is the set of servers replicating this object. The goal of the basic game is to find configurations that are achieved when servers optimize their cost functions locally. 3.1 Game Model We take a game-theoretic approach to analyzing the uncapacitated caching problem among networked selfish servers. We model the selfish caching problem as a non-cooperative game with n players (servers/nodes) whose strategies are sets of objects to cache. In the game, each server chooses a pure strategy that minimizes its cost. Our focus is to investigate the resulting configuration, which is the Nash equilibrium of the game. It should be emphasized that we consider only pure strategy Nash equilibria in this paper. The cost model is an important part of the game. Let Ai be the set of feasible strategies for server i, and let Si ∈ Ai be the strategy chosen by server i.
Given a strategy profile S = (S1, S2, ..., Sn), the cost incurred by server i is defined as: Ci(S) = Σ_{j∈Si} αij + Σ_{j∉Si} wij d_{i,σ(i,j)}, (1) where αij is the placement cost of object j, wij is the demand that server i has for object j, σ(i, j) is the closest server to i that caches object j, and dik is the distance between i and k. When no server caches the object, we define the distance cost d_{i,σ(i,j)} to be dM, a value large enough that at least one server will choose to cache the object. The placement cost can be further divided into first-time installation cost and maintenance cost: αij = k1i + k2i (UpdateSizej / ObjectSizej) (1/T) Pj Σ_k wkj, (2) where k1i is the installation cost, k2i is the relative weight between the maintenance cost and the installation cost, Pj is the ratio of the number of writes over the number of reads and writes, UpdateSizej is the size of an update, ObjectSizej is the size of the object, and T is the update period. We see tradeoffs between different parameters in this equation. For example, placing replicas becomes more expensive as UpdateSizej increases, Pj increases, or T decreases. However, note that by varying αij itself we can capture the full range of behaviors in the game. For our analysis, we use only αij. Since there is no capacity limit on servers, we can look at each single object as a separate game and combine the pure strategy equilibria of these games to obtain a pure strategy equilibrium of the multi-object game. Fabrikant, Papadimitriou, and Talwar discuss this existence argument: if two games are known to have pure equilibria, and their cost functions are cross-monotonic, then their union is also guaranteed to have pure Nash equilibria, by a continuity argument [9]. A Nash equilibrium for the multi-object game is the cross product of Nash equilibria for single-object games. Therefore, we can focus on the single-object game in the rest of this paper. For single-object selfish caching, each server i has two strategies: to cache or not to cache. The object under consideration is j. We define Si to be 1 when server i caches j and 0 otherwise. The cost incurred by server i is Ci(S) = αij Si + wij d_{i,σ(i,j)} (1 − Si). (3) We refer to this game as the basic game. The extent to which Ci(S) represents the actual cost incurred by server i is beyond the scope of this paper; we will assume that an appropriate cost function of the form of Equation 3 can be defined. 3.2 Nash Equilibrium Solutions In principle, we can start with a random configuration and let this configuration evolve as each server alters its strategy and attempts to minimize its cost. Game theory is interested in stable solutions called Nash equilibria. A pure strategy Nash equilibrium is reached when no server can benefit by unilaterally changing its strategy. A Nash equilibrium (S*_i, S*_{-i}) for the basic game (this notation separates node i's strategy S*_i from the strategies of the other nodes S*_{-i}) specifies a configuration X such that ∀i ∈ N, i ∈ X ⇔ S*_i = 1. Thus, we can consider a set E of all pure strategy Nash equilibrium configurations: X ∈ E ⇔ ∀i ∈ N, ∀Si ∈ Ai, Ci(S*_i, S*_{-i}) ≤ Ci(Si, S*_{-i}). (4) For the basic game, we can easily see that: X ∈ E ⇔ ∀i ∈ N, ∃j ∈ X s.t. dji ≤ α, and ∀j ∈ X, ¬∃k ∈ X, k ≠ j, s.t. dkj < α. (5) By this definition, no server has incentive to deviate in these configurations, since it cannot reduce its cost. The first condition guarantees that there is a server that places the replica within distance α of each server i. If the replica is not placed at i, then it is placed at another server within distance α of i, so i has no incentive to cache. If the replica is placed at i, then the second condition ensures there is no incentive to drop the replica, because no two servers separated by distance less than α both place replicas.
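As an illustration of characterization (5), the following sketch (ours, assuming unit demand; the name is_nash is an assumption, not the authors' code) checks whether a given configuration X is a pure-strategy Nash equilibrium of the single-object basic game.

```python
# Minimal sketch of the equilibrium check in (5) for the single-object basic
# game with unit demand. X is the set of caching servers, d the symmetric
# distance matrix, alpha the placement cost.
def is_nash(X, d, alpha):
    n = len(d)
    if not X:
        return False  # with the large no-replica cost d_M, someone always prefers to cache
    for i in range(n):
        if i not in X and min(d[i][j] for j in X) > alpha:
            return False  # first condition of (5) fails: i would rather cache
    for j in X:
        if any(k != j and d[k][j] < alpha for k in X):
            return False  # second condition of (5) fails: j would rather drop its replica
    return True
```

For the two-cluster network of Figure 2 below, for example, such a check confirms that a single replica anywhere is an equilibrium, while placing one replica in each cluster is not.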
Figure 2: Potential inefficiency of Nash equilibria illustrated by two clusters of n/2 servers. The intra-cluster distances are all zero and the distance between clusters is α − 1, where α is the placement cost. The dark nodes replicate the object. Network (a) shows a Nash equilibrium in the basic game, where one server in a cluster caches the object. Network (b) shows the social optimum where two replicas, one for each cluster, are placed. The price of anarchy is O(n) and even the optimistic price of anarchy is O(n). This high price of anarchy comes from the undersupply of replicas due to the selfish nature of servers. Network (c) shows a Nash equilibrium in the payment game, where two replicas, one for each cluster, are placed. Each light node in each cluster pays 2/n to the dark node, and the dark node replicates the object. Here, the optimistic price of anarchy is one.

3.3 Social Optimum
The social cost of a given strategy profile is defined as the total cost incurred by all servers, namely:

C(S) = Σi=0..n−1 Ci(S)    (6)

where Ci(S) is the cost incurred by server i given by Equation 1. The social optimum cost, referred to as C(SO) for the remainder of the paper, is the minimum social cost. The social optimum cost will serve as an important base case against which to measure the cost of selfish caching. We define C(SO) as:

C(SO) = minS C(S)    (7)

where S varies over all possible strategy profiles. Note that in the basic game, this means varying configuration X over all possible configurations. In some sense, C(SO) represents the best possible caching behavior - if only nodes could be convinced to cooperate with one another. The social optimum configuration is a solution of a mini-sum facility location problem, which is NP-hard [11]. To find such configurations, we formulate an integer programming problem:

minimize    Σi Σj [ αij xij + Σk wij dik yijk ]
subject to  ∀i, j:     Σk yijk = I(wij)
            ∀i, j, k:  xij − ykji ≥ 0
            ∀i, j:     xij ∈ {0, 1}
            ∀i, j, k:  yijk ∈ {0, 1}    (8)

Here, xij is 1 if server i replicates object j and 0 otherwise; yijk is 1 if server i accesses object j from server k and 0 otherwise; I(w) returns 1 if w is nonzero and 0 otherwise. The first constraint specifies that if server i has demand for object j, then it must access j from exactly one server. The second constraint ensures that server i replicates object j if any other server accesses j from i.
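The paper solves this program with an off-the-shelf solver (Section 5); for the single-object, unit-demand case the social optimum can also be found by direct enumeration, which is what the small sketch below does (brute force, exponential in n, intended only for tiny illustrative instances; the function name is ours).

import itertools

def social_optimum(D, alpha):
    """C(SO) for a single object with unit demand: the cheapest nonempty
    replica set X, where cost(X) = alpha*|X| + sum_i min_{j in X} d_ij."""
    n = len(D)
    best_cost, best_X = float("inf"), None
    for r in range(1, n + 1):
        for X in itertools.combinations(range(n), r):
            cost = alpha * r + sum(min(D[i][j] for j in X) for i in range(n))
            if cost < best_cost:
                best_cost, best_X = cost, set(X)
    return best_cost, best_X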
3.4 Analysis
To analyze the basic game, we first give a proof of the existence of pure strategy Nash equilibria. We discuss the price of anarchy in general and then on specific underlying topologies. In this analysis we use simply α in place of αij, since we deal with a single object and we assume the placement cost is the same for all servers. In addition, when we compute the price of anarchy, we assume that all nodes have the same demand (i.e., ∀i ∈ N, wij = 1).

THEOREM 1. Pure strategy Nash equilibria exist in the basic game.

PROOF. We show a constructive proof. First, initialize the set V to N. Then, remove all nodes with zero demand from V. Each node x defines βx, where βx = α/wxj. Furthermore, let Z(y) = {z : dzy ≤ βz, z ∈ V}; Z(y) represents all nodes z for which y lies within βz from z. Pick a node y ∈ V such that βy ≤ βx for all x ∈ V. Place a replica at y and then remove y and all z ∈ Z(y) from V. No such z can have incentive to replicate the object because it can access y's replica at lower (or equal) cost. Iterate this process of placing replicas until V is empty. Because at each iteration y is the remaining node with minimum β, no replica will be placed within distance βy of any such y by this process. The resulting configuration is a pure-strategy Nash equilibrium of the basic game.

The Price of Anarchy (PoA): To quantify the cost of lack of coordination, we use the price of anarchy [21] and the optimistic price of anarchy [3]. The price of anarchy is the ratio of the social costs of the worst-case Nash equilibrium and the social optimum, and the optimistic price of anarchy is the ratio of the social costs of the best-case Nash equilibrium and the social optimum. We show general bounds on the price of anarchy. Throughout our discussion, we use C(SW) to represent the cost of the worst case Nash equilibrium, C(SO) to represent the cost of the social optimum, and PoA to represent the price of anarchy, which is C(SW)/C(SO). The worst case Nash equilibrium maximizes the total cost under the constraint that the configuration meets the Nash condition. Formally, we can define C(SW) as follows.

C(SW) = maxX∈E ( α|X| + Σi minj∈X dij )    (9)

where minj∈X dij is the distance to the closest replica (including i itself) from node i and X varies through Nash equilibrium configurations.

Bounds on the Price of Anarchy: We show bounds of the price of anarchy varying α. Let dmin = min(i,j)∈N×N, i≠j dij and dmax = max(i,j)∈N×N dij. We see that if α ≤ dmin, PoA = 1 trivially, since every server caches the object in both the Nash equilibrium and the social optimum. When α > dmax, there is a transition in Nash equilibria: since the placement cost is greater than any distance cost, only one server caches the object and the other servers access it remotely. However, the social optimum may still place multiple replicas. Since α ≤ C(SO) ≤ α + minj∈N Σi dij when α > dmax, we obtain (α + maxj∈N Σi dij)/(α + minj∈N Σi dij) ≤ PoA ≤ (α + maxj∈N Σi dij)/α. Note that depending on the underlying topology, even the lower bound of PoA can be O(n). Finally, there is a transition when α > maxj∈N Σi dij. In this case, PoA = (α + maxj∈N Σi dij)/(α + minj∈N Σi dij), and it is upper bounded by 2.

Figure 2 shows an example of the inefficiency of a Nash equilibrium. In the network there are two clusters of servers whose size is n/2. The distance between the two clusters is α − 1, where α is the placement cost. Figure 2(a) shows a Nash equilibrium where one server in a cluster caches the object. In this case, C(SW) = α + (α − 1)·n/2, since all servers in the other cluster access the remote replica. However, the social optimum places two replicas, one for each cluster, as shown in Figure 2(b). Therefore, C(SO) = 2α. PoA = (α + (α − 1)·n/2)/(2α), which is O(n). This bad price of anarchy comes from an undersupply of replicas due to the selfish nature of the servers. Note that all Nash equilibria have the same cost; thus even the optimistic price of anarchy is O(n).

Table 1: PoA in the basic game for specific topologies
    Complete graph:       1
    Star:                 ≤ 2
    Line:                 O(√n)
    D-dimensional grid:   O(n^(D/(D+1)))
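The constructive argument in the proof of Theorem 1 above is essentially a greedy covering procedure; the following Python sketch restates it for a single object with demands w and a common placement cost α (the function name is ours, and ties are broken arbitrarily).

def construct_equilibrium(D, alpha, w):
    """Greedy construction from the proof of Theorem 1: repeatedly place a
    replica at the remaining node y with smallest beta_y = alpha / w_y, then
    remove y together with every node z that has y within beta_z of it."""
    n = len(D)
    V = {i for i in range(n) if w[i] > 0}
    X = set()
    while V:
        y = min(V, key=lambda x: alpha / w[x])
        X.add(y)
        Z = {z for z in V if D[z][y] <= alpha / w[z]}
        V -= Z | {y}
    return X

With unit demands, the returned set leaves every node within distance α of a replica and keeps replicas more than α apart, so it satisfies condition (5).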
In Appendix A, we analyze the price of anarchy with specific underlying topologies and show that PoA can have tighter bounds than O(n) for the complete graph, star, line, and D-dimensional grid. In these topologies, we set the distance between directly connected nodes to one. We describe the case where α > 1, since PoA = 1 trivially when α ≤ 1. A summary of the results is shown in Table 1.

4. PAYMENT GAME
In this section, we present an extension to the basic game with payments and analyze the price of anarchy and the optimistic price of anarchy of the game.

4.1 Game Model
The new game, which we refer to as the payment game, allows each player to offer a payment to another player to give the latter incentive to replicate the object. The cost of replication is shared among the nodes paying the server that replicates the object. The strategy for each player i is specified by a triplet (vi, bi, ti) ∈ N × ℝ+ × ℝ+. vi specifies the player to whom i makes a bid, bi ≥ 0 is the value of the bid, and ti ≥ 0 denotes a threshold for payments beyond which i will replicate the object. In addition, we use Ri to denote the total amount of bids received by a node i (Ri = Σj:vj=i bj). A node i replicates the object if and only if Ri ≥ ti, that is, the amount of bids it receives is greater than or equal to its threshold. Let Ii denote the corresponding indicator variable, that is, Ii equals 1 if i replicates the object, and 0 otherwise. We make the rule that if a node i makes a bid to another node j and j replicates the object, then i must pay j the amount bi. If j does not replicate the object, i does not pay j. Given a strategy profile, the outcome of the game is the set of tuples {(Ii, vi, bi, Ri)}. Ii tells us whether player i replicates the object or not, bi is the payment player i makes to player vi, and Ri is the total amount of bids received by player i.

To compute the payoffs given the outcome, we must now take into account the payments a node makes, in addition to the placement costs and access costs of the basic game. By our rules, a server node i pays bi to node vi if vi replicates the object, and receives a payment of Ri if it replicates the object itself. Its net payment is bi Ivi − Ri Ii. The total cost incurred by each node is the sum of its placement cost, access cost, and net payment. It is defined as

Ci(S) = αij Ii + wij di,σ(i,j) (1 − Ii) + bi Ivi − Ri Ii    (10)

The cost of the social optimum for the payment game is the same as that for the basic game, since the net payments made cancel out.
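A minimal sketch of the outcome and cost rules of the payment game (Equation 10), again for the single-object case; the function names and the use of a large constant for dM are ours.

def payment_outcome(v, b, t):
    """R_i is the total of the bids directed at i; node i replicates
    (I_i = 1) exactly when R_i >= t_i."""
    n = len(b)
    R = [sum(b[j] for j in range(n) if v[j] == i) for i in range(n)]
    I = [1 if R[i] >= t[i] else 0 for i in range(n)]
    return I, R

def payment_cost(i, I, R, v, b, D, alpha, w, d_M=10**9):
    """Equation 10: placement cost if i replicates, access cost to the nearest
    replica otherwise, plus the bid i pays out minus the bids it collects."""
    replicas = [k for k, rep in enumerate(I) if rep]
    if I[i]:
        access = 0.0
    else:
        access = w[i] * (min(D[i][k] for k in replicas) if replicas else d_M)
    return alpha * I[i] + access + b[i] * I[v[i]] - R[i] * I[i]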
4.2 Analysis
In analyzing the payment model, we first show that a Nash equilibrium in the basic game is also a Nash equilibrium in the payment game. We then present an important positive result - in the payment game the socially optimal configuration can always be implemented by a Nash equilibrium. We know from the counterexample in Figure 2 that this is not guaranteed in the basic game. In this analysis we use α to represent αij.

THEOREM 2. Any configuration that is a pure strategy Nash equilibrium in the basic game is also a pure strategy Nash equilibrium in the payment game. Therefore, the price of anarchy of the payment game is at least that of the basic game.

PROOF. Consider any Nash equilibrium configuration in the basic game. For each node i replicating the object, set its threshold ti to 0; everyone else has threshold α. Also, for all i, bi = 0. A node that replicates the object does not have incentive to change its strategy: changing the threshold does not decrease its cost, and it would have to pay at least α to access a remote replica or incentivize a nearby node to cache. Therefore it is better off keeping its threshold and bid at 0 and replicating the object. A node that is not replicating the object can access the object remotely at a cost less than or equal to α. Lowering its threshold does not decrease its cost, since all bi are zero. The payment necessary for another server to place a replica is at least α. No player has incentive to deviate, so the current configuration is a Nash equilibrium.

In fact, Appendix B shows that the PoA of the payment game can be more than that of the basic game in a given topology. Now let us look at what happens to the example shown in Figure 2 in the best case. Suppose node B's neighbors each decide to pay node B an amount 2/n. B does not have an incentive to deviate, since accessing the remote replica does not decrease its cost. The same argument holds for A because of symmetry in the graph. Since no one has an incentive to deviate, the configuration is a Nash equilibrium. Its total cost is 2α, the same as in the socially optimal configuration shown in Figure 2(b).

Next we prove that indeed the payment game always has a strategy profile that implements the socially optimal configuration as a Nash equilibrium. We first present the following observation, which is used in the proof, about thresholds in the payment game.

OBSERVATION 1. If node i replicates the object, j is the nearest node to i among the other nodes that replicate the object, and dij < α in a Nash equilibrium, then i should have a threshold of at least (α − dij). Otherwise, it cannot collect enough payment to compensate for the cost of replicating the object and is better off accessing the replica at j.

THEOREM 3. In the payment game, there is always a pure strategy Nash equilibrium that implements the social optimum configuration. The optimistic price of anarchy in the payment game is therefore always one.

PROOF. Consider the socially optimal configuration φopt. Let No be the set of nodes that replicate the object and Nc = N − No be the rest of the nodes. Also, for each i in No, let Qi denote the set of nodes that access the object from i, not including i itself. In the socially optimal configuration, dij ≤ α for all j in Qi. We want to find a set of payments and thresholds that makes this configuration implementable. The idea is to look at each node i in No and distribute the minimum payment needed to make i replicate the object among the nodes that access the object from i. For each i in No, and for each j in Qi, we define

δj = min{α, mink∈No−{i} djk} − dji    (11)

Note that δj is the difference between j's cost for accessing the replica at i and j's next best option among replicating the object and accessing some replica other than i. It is clear that δj ≥ 0.

CLAIM 1. For each i ∈ No, let ℓ be the nearest node to i in No. Then, Σj∈Qi δj ≥ α − diℓ.

PROOF. (of claim) Assume the contrary, that is, Σj∈Qi δj < α − diℓ. Consider the new configuration φnew wherein i does not replicate and each node in Qi chooses its next best strategy (either replicating or accessing the replica at some node in No − {i}). In addition, we still place replicas at each node in No − {i}.
It is easy to see that the cost of φopt minus the cost of φnew is at least:

(α + Σj∈Qi dij) − (diℓ + Σj∈Qi min{α, mink∈No−{i} djk}) = α − diℓ − Σj∈Qi δj > 0,

which contradicts the optimality of φopt.

We set bids as follows. For each i in No, bi = 0, and for each j in Qi, j bids to i (i.e., vj = i) the amount:

bj = max{0, δj − εi/(|Qi| + 1)}, j ∈ Qi    (12)

where εi = Σj∈Qi δj − α + diℓ ≥ 0 and |Qi| is the cardinality of Qi. For the thresholds, we have:

ti = α if i ∈ Nc;  Σj∈Qi bj if i ∈ No.    (13)

This fully specifies the strategy profile of the nodes, and it is easy to see that the outcome is indeed the socially optimal configuration. Next, we verify that the strategies stipulated constitute a Nash equilibrium. Having set ti to α for i in Nc means that any node in N is at least as well off lowering its threshold and replicating as bidding α to some node in Nc to make it replicate, so we may disregard the latter as a profitable strategy. By Observation 1, to ensure that each i in No does not deviate, we require that if ℓ is the nearest node to i in No, then Σj∈Qi bj is at least (α − diℓ). Otherwise, i will raise ti above Σj∈Qi bj so that it does not replicate and instead accesses the replica at ℓ. We can easily check that

Σj∈Qi bj ≥ Σj∈Qi δj − |Qi|·εi/(|Qi| + 1) = α − diℓ + εi/(|Qi| + 1) ≥ α − diℓ.

Therefore, each node i ∈ No does not have incentive to change ti, since it would lose the payments it receives or there would be no change, and it does not have incentive to change bi since it replicates the object. Each node j in Nc has no incentive to change tj since changing tj does not reduce its cost. It also does not have incentive to reduce bj, since then the node from which j accesses the object would not replicate, and j would have to replicate the object itself or access the next closest replica, which costs at least as much by the definition of bj. No player has incentive to deviate, so this strategy profile is a Nash equilibrium.

Figure 3: We present PoA, Ratio, and OPoA results for the basic game, varying α on a 100-node line topology, and we show the number of replicas placed by the Nash equilibria and by the optimal solution. We see large peaks in PoA and OPoA at α = 100, where a phase transition causes an abrupt transition in the lines. (Axes: α on the x-axis; C(NE)/C(SO) on the left y-axis; average number of replicas on the right y-axis.)

5. SIMULATION
We run simulations to compare Nash equilibria for the single-object caching game with the social optimum computed by solving the integer linear program described in Equation 8 using Mosek [1]. We examine the price of anarchy (PoA), the optimistic price of anarchy (OPoA), and the average ratio of the costs of Nash equilibria and social optima (Ratio), and when relevant we also show the average numbers of replicas placed by the Nash equilibrium (Replica(NE)) and the social optimum (Replica(SO)). The PoA and OPoA are taken from the worst and best Nash equilibria, respectively, that we observe over the runs. Each data point in our figures is based on 1000 runs, randomly varying the initial strategy profile and player order. The details of the simulations, including protocols and a discussion of convergence, are presented in Appendix C. In our evaluation, we study the effects of variation in four categories: placement cost, underlying topology, demand distribution, and payments. As we vary the placement cost α, we directly influence the tradeoff between caching and not caching.
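The measurement loop behind these figures can be summarized as a small harness; the sketch below assumes caller-supplied routines for the optimum cost and for one dynamics run (both names are hypothetical), rather than the authors' actual Mosek-based pipeline.

import random

def summarize_runs(equilibrium_costs, optimum_cost):
    """PoA comes from the worst observed equilibrium, OPoA from the best,
    and Ratio is the mean equilibrium-to-optimum cost ratio."""
    poa = max(equilibrium_costs) / optimum_cost
    opoa = min(equilibrium_costs) / optimum_cost
    ratio = sum(equilibrium_costs) / (len(equilibrium_costs) * optimum_cost)
    return poa, opoa, ratio

def sweep_alpha(alphas, optimum_cost_for, equilibrium_cost_for, num_runs=1000):
    """For each placement cost alpha, run the Nash dynamics num_runs times from
    random starting points and player orders, then summarize the outcomes.
    optimum_cost_for(alpha) and equilibrium_cost_for(alpha, rng) are
    caller-supplied callbacks standing in for the ILP solve and the protocol."""
    results = {}
    for alpha in alphas:
        opt = optimum_cost_for(alpha)
        costs = [equilibrium_cost_for(alpha, random.Random(seed))
                 for seed in range(num_runs)]
        results[alpha] = summarize_runs(costs, opt)
    return results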
In order to get a clear picture of the dependency of PoA on α in a simple case, we first analyze the basic game with a 100-node line topology whose edge distance is one. We also explore transit-stub topologies generated using the GT-ITM library [36] and power-law topologies (router-level Barabasi-Albert model) generated using the BRITE topology generator [25]. For these topologies, we generate an underlying physical graph of 3050 physical nodes. Both topologies have similar minimum, average, and maximum physical node distances. The average distance is 0.42. We create an overlay of 100 server nodes and use the same overlay for all experiments with the given topology. In the game, each server has a demand whose distribution is Bernoulli(p), where p is the probability of having demand for the object; the default unless otherwise specified is p = 1.0.

Figure 4: Transit-stub topology: (a) basic game, (b) payment game. We show the PoA, Ratio, OPoA, and the number of replicas placed while varying α between 0 and 2 with 100 servers on a 3050-physical-node transit-stub topology.

Figure 5: Power-law topology: (a) basic game, (b) payment game. We show the PoA, Ratio, OPoA, and the number of replicas placed while varying α between 0 and 2 with 100 servers on a 3050-physical-node power-law topology.

5.1 Varying Placement Cost
Figure 3 shows PoA, OPoA, and Ratio, as well as the number of replicas placed, for the line topology as α varies. We observe two phases. As α increases the PoA rises quickly to a peak at 100. After 100, there is a gradual decline. OPoA and Ratio show behavior similar to PoA. These behaviors can be explained by examining the number of replicas placed by Nash equilibria and by optimal solutions. We see that when α is above one, Nash equilibrium solutions place fewer replicas than optimal on average. For example, when α is 100, the social optimum places four replicas, but the Nash equilibrium places only one. The peak in PoA at α = 100 occurs at the point for a 100-node line where the worst-case cost of accessing a remote replica is slightly less than the cost of placing a new replica, so selfish servers will never place a second replica. The optimal solution, however, places multiple replicas to decrease the high global cost of access. As α continues to increase, the undersupply problem lessens as the optimal solution places fewer replicas.

5.2 Different Underlying Topologies
In Figure 4(a) we examine an overlay graph on the more realistic transit-stub topology. The trends for the PoA, OPoA, and Ratio are similar to the results for the line topology, with a peak in PoA at α = 0.8 due to maximal undersupply. In Figure 5(a) we examine an overlay graph on the power-law topology. We observe several interesting differences between the power-law and transit-stub results.
First, the PoA peaks at a lower level in the power-law graph, around 2.3 (at α = 0.9), while the peak PoA in the transit-stub topology is almost 3.0 (at α = 0.8). After the peak, PoA and Ratio decrease more slowly as α increases. OPoA is close to one for the whole range of α values. This can be explained by the observation in Figure 5(a) that there is no significant undersupply problem here like there was in the transit-stub graph. Indeed the high PoA is due mostly to misplacement problems when α is from 0.7 to 2.0, since there is little decrease in PoA when the number of replicas in the social optimum changes from two to one. The OPoA is equal to one in the figure when the same number of replicas is placed.

5.3 Varying Demand Distribution
Now we examine the effects of varying the demand distribution. The set of servers with demand is random for p < 1, so we calculate the expected PoA by averaging over 5 trials (each data point is based on 5000 runs). We run simulations for demand levels of p ∈ {0.2, 0.6, 1.0} as α is varied, with 100 servers on top of the transit-stub graph. We observe that as demand falls, so does expected PoA. As p decreases, the number of replicas placed in the social optimum decreases, but the number in Nash equilibria changes little. Furthermore, when α exceeds the overlay diameter, the number in Nash equilibria stays constant when p varies. Therefore, lower p leads to a lesser undersupply problem, agreeing with intuition. We do not present the graph due to space limitations and redundancy; the PoA for p = 1.0 is identical to the PoA in Figure 4(a), and the lines for p = 0.6 and p = 0.2 are similar but lower and flatter.

5.4 Effects of Payment
Finally, we discuss the effects of payments on the efficiency of Nash equilibria. The results are presented in Figure 4(b) and Figure 5(b). As shown in the analysis, the simulations achieve OPoA close to one (it is not exactly one because of randomness in the simulations). The Ratio for the payment game is much lower than the Ratio for the basic game, since the protocol for the payment game tends to explore good regions in the space of Nash equilibria. We observe in Figure 4 that for α ≥ 0.4, the average number of replicas of Nash equilibria gets closer with payments to that of the social optimum than it does without. We observe in Figure 5 that more replicas are placed with payments than without when α is between 0.7 and 1.3, the only range of significant undersupply in the power-law case. The results confirm that payments give servers incentive to replicate the object, and this leads to better equilibria.

6. DISCUSSION AND FUTURE WORK
We suggest several interesting extensions and directions. One extension is to consider multiple objects in the capacitated caching game, in which servers have capacity limits when placing objects. Since caching one object affects the ability to cache another, there is no separability of a multi-object game into multiple single-object games. As studied in [12], one way to formulate this problem is to find the best response of a server by solving a knapsack problem and to compute Nash equilibria. In our analyses, we assume that all nodes have the same demand. However, nodes could have different demand depending on objects. We intend to examine the effects of heterogeneous demands (or heterogeneous placement costs) analytically. We also want to look at the following aggregation effect. Suppose there are n − 1 clustered nodes at distance α − 1 from a node hosting a replica.
All nodes have demands of one. In that case, the price of anarchy is O(n). However, if we aggregate the n − 1 nodes into one node with demand n − 1, the price of anarchy becomes O(1), since α would have to be greater than (n − 1)(α − 1) for only one replica to be placed. Such aggregation can reduce the inefficiency of Nash equilibria. We intend to compute the bounds of the price of anarchy under different underlying topologies such as random graphs or growth-restricted metrics. We want to investigate whether there are certain distance constraints that guarantee O(1) price of anarchy. In addition, we want to run large-scale simulations to observe the change in the price of anarchy as the network size increases. Another extension is to consider server congestion. Suppose the distance is the network distance plus γ × (number of accesses), where γ is an extra delay when an additional server accesses the replica. Then, when α > γ, it can be shown that PoA is bounded by α/γ. As γ increases, the price of anarchy bound decreases, since the load of accesses is balanced across servers.

While exploring the caching problem, we made several observations that seem counterintuitive. First, the PoA in the payment game can be worse than the PoA in the basic game. Another observation we made was that the number of replicas in a Nash equilibrium can be more than the number of replicas in the social optimum even without payments. For example, a graph with diameter slightly more than α may have a Nash equilibrium configuration with two replicas at the two ends. However, the social optimum may place one replica at the center. We leave the investigation of more examples as an open issue.

7. CONCLUSIONS
In this work we introduce a novel non-cooperative game model to characterize the caching problem among selfish servers without any central coordination. We show that pure strategy Nash equilibria exist in the game and that the price of anarchy can be O(n) in general, where n is the number of servers, due to undersupply problems. With specific topologies, we show that the price of anarchy can have tighter bounds. More importantly, with payments, servers are incentivized to replicate and the optimistic price of anarchy is always one. Non-cooperative caching is a more realistic model than cooperative caching in the competitive Internet, hence this work is an important step toward viable federated caching systems.

8. ACKNOWLEDGMENTS
We thank Kunal Talwar for enlightening discussions regarding this work.

9. REFERENCES
[1] http://www.mosek.com.
[2] A. Adya et al. FARSITE: Federated, Available, and Reliable Storage for an Incompletely Trusted Environment. In Proc. of USENIX OSDI, 2002.
[3] E. Anshelevich, A. Dasgupta, E. Tardos, and T. Wexler. Near-optimal Network Design with Selfish Agents. In Proc. of ACM STOC, 2003.
[4] Y. Chen, R. H. Katz, and J. D. Kubiatowicz. SCAN: A Dynamic, Scalable, and Efficient Content Distribution Network. In Proc. of Intl. Conf. on Pervasive Computing, 2002.
[5] F. Dabek et al. Wide-area Cooperative Storage with CFS. In Proc. of ACM SOSP, Oct. 2001.
[6] P. B. Danzig. NetCache Architecture and Deployment. In Computer Networks and ISDN Systems, 1998.
[7] N. Devanur, M. Mihail, and V. Vazirani. Strategyproof Cost-sharing Mechanisms for Set Cover and Facility Location Games. In Proc. of ACM EC, 2003.
[8] J. R. Douceur and R. P. Wattenhofer. Large-Scale Simulation of Replica Placement Algorithms for a Serverless Distributed File System. In Proc. of MASCOTS, 2001.
[9] A. Fabrikant, C. H. Papadimitriou, and K. Talwar. The Complexity of Pure Nash Equilibria. In Proc. of ACM STOC, 2004.
[10] L. Fan, P. Cao, J. Almeida, and A. Z. Broder. Summary Cache: A Scalable Wide-area Web Cache Sharing Protocol. IEEE/ACM Trans. on Networking, 8(3):281-293, 2000.
[11] M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman and Co., 1979.
[12] M. X. Goemans, L. Li, V. S. Mirrokni, and M. Thottan. Market Sharing Games Applied to Content Distribution in Ad-hoc Networks. In Proc. of ACM MOBIHOC, 2004.
[13] M. X. Goemans and M. Skutella. Cooperative Facility Location Games. In Proc. of ACM-SIAM SODA, 2000.
[14] S. Gribble et al. What Can Databases Do for Peer-to-Peer? In WebDB Workshop on Databases and the Web, June 2001.
[15] K. P. Gummadi et al. Measurement, Modeling, and Analysis of a Peer-to-Peer File-Sharing Workload. In Proc. of ACM SOSP, October 2003.
[16] S. Iyer, A. Rowstron, and P. Druschel. Squirrel: A Decentralized Peer-to-Peer Web Cache. In Proc. of ACM PODC, 2002.
[17] K. Jain and V. V. Vazirani. Primal-Dual Approximation Algorithms for Metric Facility Location and k-Median Problems. In Proc. of IEEE FOCS, 1999.
[18] S. Jamin et al. On the Placement of Internet Instrumentation. In Proc. of IEEE INFOCOM, pages 295-304, 2000.
[19] S. Jamin et al. Constrained Mirror Placement on the Internet. In Proc. of IEEE INFOCOM, pages 31-40, 2001.
[20] B.-J. Ko and D. Rubenstein. A Distributed, Self-stabilizing Protocol for Placement of Replicated Resources in Emerging Networks. In Proc. of IEEE ICNP, 2003.
[21] E. Koutsoupias and C. Papadimitriou. Worst-Case Equilibria. In STACS, 1999.
[22] J. Kubiatowicz et al. OceanStore: An Architecture for Global-scale Persistent Storage. In Proc. of ACM ASPLOS, November 2000.
[23] B. Li, M. J. Golin, G. F. Italiano, and X. Deng. On the Optimal Placement of Web Proxies in the Internet. In Proc. of IEEE INFOCOM, 1999.
[24] M. Mahdian, Y. Ye, and J. Zhang. Improved Approximation Algorithms for Metric Facility Location Problems. In Proc. of Intl. Workshop on Approximation Algorithms for Combinatorial Optimization Problems, 2002.
[25] A. Medina, A. Lakhina, I. Matta, and J. Byers. BRITE: Universal Topology Generation from a User's Perspective. Technical Report 2001-003, 2001.
[26] R. R. Mettu and C. G. Plaxton. The Online Median Problem. In Proc. of IEEE FOCS, 2000.
[27] P. B. Mirchandani and R. L. Francis. Discrete Location Theory. Wiley-Interscience Series in Discrete Mathematics and Optimization, 1990.
[28] M. J. Osborne and A. Rubinstein. A Course in Game Theory. MIT Press, 1994.
[29] M. Pal and E. Tardos. Group Strategyproof Mechanisms via Primal-Dual Algorithms. In Proc. of IEEE FOCS, 2003.
[30] L. Qiu, V. N. Padmanabhan, and G. M. Voelker. On the Placement of Web Server Replicas. In Proc. of IEEE INFOCOM, 2001.
[31] M. Rabinovich, I. Rabinovich, R. Rajaraman, and A. Aggarwal. A Dynamic Object Replication and Migration Protocol for an Internet Hosting Service. In Proc. of IEEE ICDCS, 1999.
[32] A. Rowstron and P. Druschel. Storage Management and Caching in PAST, A Large-scale, Persistent Peer-to-peer Storage Utility. In Proc. of ACM SOSP, October 2001.
[33] Y. Saito, C. Karamanolis, M. Karlsson, and M. Mahalingam. Taming Aggressive Replication in the Pangaea Wide-Area File System. In Proc. of USENIX OSDI, 2002.
[34] X. Tang and S. T. Chanson. Coordinated En-route Web Caching. In IEEE Trans. Computers, 2002.
[35] A. Vetta. Nash Equilibria in Competitive Societies, with Applications to Facility Location, Traffic Routing, and Auctions. In Proc. of IEEE FOCS, 2002.
[36] E. W. Zegura, K. L. Calvert, and S. Bhattacharjee. How to Model an Internetwork. In Proc. of IEEE INFOCOM, 1996.

APPENDIX
A. ANALYZING SPECIFIC TOPOLOGIES
We now analyze the price of anarchy (PoA) for the basic game with specific underlying topologies and show that PoA can have better bounds. We look at the complete graph, star, line, and D-dimensional grid. In all these topologies, we set the distance between two directly connected nodes to one. We describe the case where α > 1, since PoA = 1 trivially when α ≤ 1.

For a complete graph, PoA = 1, and for a star, PoA ≤ 2. For a complete graph, when α > 1, both Nash equilibria and social optima place one replica at one server, so PoA = 1. For a star, when 1 < α < 2, the worst case Nash equilibrium places replicas at all leaf nodes. However, the social optimum places one replica at the center node. Therefore, PoA = ((n−1)α + 1)/(α + (n−1)) ≤ (2(n−1) + 1)/(1 + (n−1)) ≤ 2. When α > 2, the worst case Nash equilibrium places one replica at a leaf node and the other nodes access the remote replica, and the social optimum places one replica at the center. PoA = (α + 1 + 2(n−2))/(α + (n−1)) = 1 + (n−2)/(α + (n−1)) ≤ 2.

For a line, the price of anarchy is O(√n). When 1 < α < n, the worst case Nash equilibrium places replicas every 2α, so that there is no overlap between the areas covered by two adjacent servers that cache the object. The social optimum places replicas at least every √(2α). The placement of replicas for the social optimum is as follows. Suppose there are two replicas separated by distance d. By placing an additional replica in the middle, we want the reduction in distance cost to be at least α. The distance reduction is d/2 + 2{((d/2 − 1) − 1) + ((d/2 − 2) − 2) + ... + ((d/2 − d/4) − d/4)} ≥ d²/8, so d should be at most 2√(2α). Therefore, the distance between replicas in the social optimum is at most √(2α). C(SW) = α·(n−1)/(2α) + (α(α+1)/2)·(n−1)/(2α) = Θ(αn). C(SO) ≥ α·(n−1)/√(2α) + 2·(√(2α)/2)(√(2α)/2 + 1)/2 · (n−1)/√(2α), so C(SO) = Ω(√α·n). Therefore, PoA = O(√α). When α > n − 1, the worst case Nash equilibrium places one replica at a leaf node and C(SW) = α + (n−1)n/2. However, the social optimum still places replicas every √(2α). If we view PoA as a continuous function of α and compute its derivative, the derivative becomes 0 when α is Θ(n²), which means the function decreases as α increases from n. Therefore, PoA is maximized when α is n, and PoA = Θ(n²)/Ω(√n·n) = O(√n). When α > (n−1)n/2, the social optimum also places only one replica, and PoA is trivially bounded by 2. This result holds for the ring, and it can be generalized to the D-dimensional grid. As the dimension of the grid increases, the distance reduction from an additional replica placement becomes Ω(d^(D+1)), where d is the distance between two adjacent replicas. Therefore, PoA = Θ(n²)/Ω(n^(1/(D+1))·n) = O(n^(D/(D+1))).
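The line-topology bound can be sanity-checked numerically on small instances; the sketch below (function name ours) brute-forces the worst Nash equilibrium and the social optimum on an n-node line with unit spacing and unit demand, using the equilibrium condition of Section 3.2.

import itertools

def line_poa(n, alpha):
    """Worst-case Nash cost over optimum cost on an n-node line."""
    D = [[abs(i - j) for j in range(n)] for i in range(n)]

    def cost(X):
        return alpha * len(X) + sum(min(D[i][j] for j in X) for i in range(n))

    def is_nash(X):
        covered = all(any(D[i][j] <= alpha for j in X) for i in range(n))
        spaced = all(D[j][k] >= alpha for j in X for k in X if j != k)
        return covered and spaced

    subsets = [set(X) for r in range(1, n + 1)
               for X in itertools.combinations(range(n), r)]
    nash_costs = [cost(X) for X in subsets if is_nash(X)]
    return max(nash_costs) / min(cost(X) for X in subsets)

# e.g., line_poa(9, 4.0) compares the sparsest equilibrium against the optimum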
B. PAYMENT CAN DO WORSE

Figure 6: Example where the payment game has a Nash equilibrium which is worse than any Nash equilibrium in the basic game. The unlabeled distances between the nodes in the cluster are all 1. The thresholds of white nodes are all α and the thresholds of dark nodes are all α/4. The two dark nodes replicate the object in this payment-game Nash equilibrium.

Consider the network in Figure 6, where α > 1 + α/3. Any Nash equilibrium in the basic game model would have exactly two replicas: one in the left cluster, and one in the right. It is easy to verify that the worst placement (in terms of social cost) of two replicas occurs when they are placed at nodes A and B. This placement can be achieved as a Nash equilibrium in the payment game, but not in the basic game, since A and B are a distance 3α/4 apart.

C. NASH DYNAMICS PROTOCOLS
The simulator initializes the game according to the given parameters and a random initial strategy profile and then iterates through rounds. Initially the order of player actions is chosen randomly. In each round, each server performs the Nash dynamics protocol that adjusts its strategy greedily in the chosen order. When a round passes without any server changing its strategy, the simulation ends and a Nash equilibrium is reached.

In the basic game, we pick a random initial subset of servers to replicate the object as shown in Algorithm 1. After the initialization, each player runs the move selection procedure described in Algorithm 2 (in Algorithms 2 and 4, Costnow represents the current cost for node i). This procedure chooses greedily between replication and non-replication. It is not hard to see that this Nash dynamics protocol converges in two rounds.

Algorithm 1 Initialization for the Basic Game
  L1 = a random subset of servers
  for each node i in N do
    if i ∈ L1 then Si = 1   ; replicate the object
    else Si = 0

Algorithm 2 Move Selection of i for the Basic Game
  Cost1 = α
  Cost2 = minj∈X−{i} dij   ; X is the current configuration
  Costmin = min{Cost1, Cost2}
  if Costnow > Costmin then
    if Costmin == Cost1 then Si = 1
    else Si = 0
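A runnable counterpart to Algorithms 1 and 2; a minimal Python sketch, assuming unit demand and a common placement cost α (the function name and the coin-flip initialization probability are ours).

import random

def basic_game_dynamics(D, alpha, seed=0):
    """Round-robin greedy dynamics for the single-object basic game:
    random initial replica set (Algorithm 1), then each server in a random
    order switches to whichever of caching (cost alpha) or accessing its
    nearest remote replica is strictly cheaper (Algorithm 2), until a full
    round passes with no change."""
    rng = random.Random(seed)
    n = len(D)
    caching = {i for i in range(n) if rng.random() < 0.5}
    order = list(range(n))
    rng.shuffle(order)
    while True:
        changed = False
        for i in order:
            remote = min((D[i][j] for j in caching - {i}), default=float("inf"))
            current = alpha if i in caching else remote
            if current > min(alpha, remote):
                if alpha <= remote:
                    caching.add(i)
                else:
                    caching.discard(i)
                changed = True
        if not changed:
            return caching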
In the payment game, we pick a random initial subset of servers to replicate the object by setting their thresholds to 0. In addition, we initialize a second random subset of servers to replicate the object with payments from other servers. The details are shown in Algorithm 3. After the initialization, each player runs the move selection procedure described in Algorithm 4. This procedure chooses greedily between replication and accessing a remote replica, with the possibilities of receiving and making payments, respectively. In the protocol, each node increases its threshold value by incr if it does not replicate the object. By this ramp-up procedure, the cost of replicating an object is shared fairly among the nodes that access a replica from a server that does cache. If incr is small, cost is shared more fairly, and the game tends to reach equilibria that encourage more servers to store replicas, though convergence takes longer. If incr is large, the protocol converges quickly, but it may miss efficient equilibria. In the simulations we set incr to 0.1.

Algorithm 3 Initialization for the Payment Game
  L1 = a random subset of servers
  for each node i in N do
    bi = 0
    if i ∈ L1 then ti = 0   ; replicate the object
    else ti = α
  L2 = {}
  for each node i in N do
    if coin toss == head then
      Mi = {j : d(j, i) < mink∈L1∪L2 d(j, k)}
      if Mi ≠ ∅ then
        for each node j ∈ Mi do
          bj = max{ (α + Σk∈Mi d(i, k))/|Mi| − d(i, j), 0 }
        L2 = L2 ∪ {i}

Algorithm 4 Move Selection of i for the Payment Game
  Cost1 = α − Ri
  Cost2 = minj∈N−{i} {tj − Rj + dij}
  Costmin = min{Cost1, Cost2}
  if Costnow > Costmin then
    if Costmin == Cost1 then
      ti = Ri
    else
      ti = Ri + incr
      vi = argminj {tj − Rj + dij}
      bi = tvi − Rvi

Figure 7: An example where the Nash dynamics protocol does not converge in the payment game. (Nodes A, B, C and a, b, c; the labeled distances are α/3 + 1, 2α/3 − 1, and 2α/3.)

Most of our simulation runs converged, but there were a few cases where the simulation did not converge due to cycles of dynamics. The protocol does not guarantee convergence within a certain number of rounds like the protocol for the basic game does. We provide an example graph and an initial condition such that the Nash dynamics protocol does not converge in the payment game if started from this initial condition. The graph is represented by a shortest path metric on the network shown in Figure 7. In the starting configuration, only A replicates the object, and a pays it an amount α/3 to do so. The thresholds for A, B and C are α/3 each, and the thresholds for a, b and c are 2α/3. It is not hard to verify that the Nash dynamics protocol will never converge if we start with this condition.

The Nash dynamics protocol for the payment game needs further investigation. The dynamics protocol for the payment game should avoid cycles of actions to achieve stabilization of the protocol. Finding a self-stabilizing dynamics protocol is an interesting problem. In addition, a fixed value of incr cannot adapt to changing environments. A small value of incr can lead to efficient equilibria, but it can take a long time to converge. An important area for future research is looking at adaptively changing incr.
Selfish Caching in Distributed Systems: A Game-Theoretic Analysis ABSTRACT We analyze replication of resources by server nodes that act selfishly, using a game-theoretic approach. We refer to this as the selfish caching problem. In our model, nodes incur either cost for replicating resources or cost for access to a remote replica. We show the existence of pure strategy Nash equilibria and investigate the price of anarchy, which is the relative cost of the lack of coordination. The price of anarchy can be high due to undersupply problems, but with certain network topologies it has better bounds. With a payment scheme the game can always implement the social optimum in the best case by giving servers incentive to replicate. 1. INTRODUCTION Wide-area peer-to-peer file systems [2,5,22,32,33], peer-to-peer caches [15,16], and web caches [6,10] have become popular over * Research supported by NSF career award ANI-9985250, NFS ITR award CCR-0085899, and California MICRO award 00-049. † Research supported by NSF ITR grant 0121555. ‡ Research supported by NSF grant CCR 9984703 and a US-Israel Binational Science Foundation grant. § Research supported by NSF grant EIA-0122599. the last few years. Caching1 of files in selected servers is widely used to enhance the performance, availability, and reliability of these systems. However, most such systems assume that servers cooperate with one another by following protocols optimized for overall system performance, regardless of the costs incurred by each server. In reality, servers may behave selfishly--seeking to maximize their own benefit. For example, parties in different administrative domains utilize their local resources (servers) to better support clients in their own domains. They have obvious incentives to cache objects2 that maximize the benefit in their domains, possibly at the expense of globally optimum behavior. It has been an open question whether these caching scenarios and protocols maintain their desirable global properties (low total social cost, for example) in the face of selfish behavior. In this paper, we take a game-theoretic approach to analyzing the problem of caching in networks of selfish servers through theoretical analysis and simulations. We model selfish caching as a non-cooperative game. In the basic model, the servers have two possible actions for each object. If a replica of a requested object is located at a nearby node, the server may be better off accessing the remote replica. On the other hand, if all replicas are located too far away, the server is better off caching the object itself. Decisions about caching the replicas locally are arrived at locally, taking into account only local costs. We also define a more elaborate payment model, in which each server bids for having an object replicated at another site. Each site now has the option of replicating an object and collecting the related bids. Once all servers have chosen a strategy, each game specifies a configuration, that is, the set of servers that replicate the object, and the corresponding costs for all servers. Game theory predicts that such a situation will end up in a Nash equilibrium, that is, a set of (possibly randomized) strategies with the property that no player can benefit by changing its strategy while the other players keep their strategies unchanged [28]. 
Foundational considerations notwithstanding, it is not easy to accept randomized strategies as the behavior of rational agents in a distributed system (see [28] for an extensive discussion)--but this is what classical game theory can guarantee. In certain very fortunate situations, however (see [9]), the existence of pure (that is, deterministic) Nash equilibria can be predicted. With or without randomization, however, the lack of coordination inherent in selfish decision-making may incur costs well beyond what would be globally optimum. This loss of efficiency is quantified by the price of anarchy [21]. The price of anarchy is the ratio of the social (total) cost of the worst possible Nash equilibrium to the cost of the social optimum. The price of anarchy bounds the worst possible behavior of a selfish system, when left completely on its own. However, in reality there are ways whereby the system can be guided, through "seeding" or incentives, to a preselected Nash equilibrium. This "optimistic" version of the price of anarchy [3] is captured by the smallest ratio between a Nash equilibrium and the social optimum. In this paper we address the following questions: • Do pure strategy Nash equilibria exist in the caching game? • If pure strategy Nash equilibria do exist, how efficient are they (in terms of the price of anarchy, or its optimistic counterpart) under different placement costs, network topologies, and demand distributions? • What is the effect of adopting payments? Will the Nash equilibria be improved? We show that pure strategy Nash equilibria always exist in the caching game. The price of anarchy of the basic game model can be O (n), where n is the number of servers; the intuitive reason is undersupply. Under certain topologies, the price of anarchy does have tighter bounds. For complete graphs and stars, it is O (1). For anarchy can be O (n). In the payment model, however, the game can always implement a Nash equilibrium that is same as the social optimum, so the optimistic price of anarchy is one. Our simulation results show several interesting phases. As the placement cost increases from zero, the price of anarchy increases. When the placement cost first exceeds the maximum distance between servers, the price of anarchy is at its highest due to undersupply problems. As the placement cost further increases, the price of anarchy decreases, and the effect of replica misplacement dominates the price of anarchy. The rest of the paper is organized as follows. In Section 2 we discuss related work. Section 3 discusses details of the basic game and analyzes the bounds of the price of anarchy. In Section 4 we discuss the payment game and analyze its price of anarchy. In Section 5 we describe our simulation methodology and study the properties of Nash equilibria observed. We discuss extensions of the game and directions for future work in Section 6. 2. RELATED WORK There has been considerable research on wide-area peer-to-peer file systems such as OceanStore [22], CFS [5], PAST [32], FARSITE [2], and Pangaea [33], web caches such as NetCache [6] and SummaryCache [10], and peer-to-peer caches such as Squirrel [16]. Most of these systems use caching for performance, availability, and reliability. The caching protocols assume obedience to the protocol and ignore participants' incentives. Our work starts from the assumption that servers are selfish and quantifies the cost of the lack of coordination when servers behave selfishly. 
The placement of replicas in the caching problem is the most important issue. There is much work on the placement of web replicas, instrumentation servers, and replicated resources. All protocols assume obedience and ignore participants' incentives. In [14], Gribble et al. discuss the data placement problem in peer-to-peer systems. Ko and Rubenstein propose a self-stabilizing, distributed graph coloring algorithm for the replicated resource placement [20]. Chen, Katz, and Kubiatowicz propose a dynamic replica placement algorithm exploiting underlying distributed hash tables [4]. Douceur and Wattenhofer describe a hill-climbing algorithm to exchange replicas for reliability in FARSITE [8]. RaDar is a system that replicates and migrates objects for an Internet hosting service [31]. Tang and Chanson propose a coordinated en-route web caching that caches objects along the routing path [34]. Centralized algorithms for the placement of objects, web proxies, mirrors, and instrumentation servers in the Internet have been studied extensively [18,19, 23, 30]. The facility location problem has been widely studied as a centralized optimization problem in theoretical computer science and operations research [27]. Since the problem is NP-hard, approximation algorithms based on primal-dual techniques, greedy algorithms, and local search have been explored [17, 24, 26]. Our caching game is different from all of these in that the optimization process is performed among distributed selfish servers. There is little research in non-cooperative facility location games, as far as we know. Vetta [35] considers a class of problems where the social utility is submodular (submodularity means decreasing marginal utility). In the case of competitive facility location among corporations he proves that any Nash equilibrium gives an expected social utility within a factor of 2 of optimal plus an additive term that depends on the facility opening cost. Their results are not directly applicable to our problem, however, because we consider each server to be tied to a particular location, while in their model an agent is able to open facilities in multiple locations. Note that in that paper the increase of the price of anarchy comes from oversupply problems due to the fact that competing corporations can open facilities at the same location. On the other hand, the significant problems in our game are undersupply and misplacement. In a recent paper, Goemans et al. analyze content distribution on ad-hoc wireless networks using a game-theoretic approach [12]. As in our work, they provide monetary incentives to mobile users for caching data items, and provide tight bounds on the price of anarchy and speed of convergence to (approximate) Nash equilibria. However, their results are incomparable to ours because their payoff functions neglect network latencies between users, they consider multiple data items (markets), and each node has a limited budget to cache items. Cost sharing in the facility location problem has been studied using cooperative game theory [7, 13, 29]. Goemans and Skutella show strong connections between fair cost allocations and linear programming relaxations for facility location problems [13]. P ´ al and Tardos develop a method for cost-sharing that is approximately budget-balanced and group strategyproof and show that the method recovers 1/3 of the total cost for the facility location game [29]. 
Devanur, Mihail, and Vazirani give a strategyproof cost allocation for the facility location problem, but cannot achieve group strategyproofness [7]. 3. BASIC GAME The caching problem we study is to find a configuration that meets certain objectives (e.g., minimum total cost). Figure 1 shows examples of caching among four servers. In network (a), A stores an object. Suppose B wants to access the object. If it is cheaper to access the remote replica than to cache it, B accesses the remote replica as shown in network (b). In network (c), C wants to access the object. If C is far from A, C caches the object instead of accessing the object from A. It is possible that in an optimal configuration it would be better to place replicas in A and B. Understanding the placement of replicas by selfish servers is the focus of our study. The caching problem is abstracted as follows. There is a set N of n servers and a set M of m objects. The distance between servers can be represented as a distance matrix D (i.e., dij is the distance Figure 1: Caching. There are four servers labeled A, B, C, and D. The rectangles are object replicas. In (a), A stores an object. If B incurs less cost accessing A's replica than it would caching the object itself, it accesses the object from A as in (b). If the distance cost is too high, the server caches the object itself, as C does in (c). This figure is an example of our caching game model. from server i to server j). D models an underlying network topology. For our analysis we assume that the distances are symmetric and the triangle inequality holds on the distances (for all servers i, j, k: dij + djk> dik). Each server has demand from clients that is represented by a demand matrix W (i.e., wij is the demand of server i for object j). When a server caches objects, the server incurs some placement cost that is represented by a matrix α (i.e., αij is a placement cost of server i for object j). In this study, we assume that servers have no capacity limit. As we discuss in the next section, this fact means that the caching behavior with respect to each object can be examined separately. Consequently, we can talk about configurations of the system with respect to a given object: The goal of the basic game is to find configurations that are achieved when servers optimize their cost functions locally. 3.1 Game Model We take a game-theoretic approach to analyzing the uncapacitated caching problem among networked selfish servers. We model the selfish caching problem as a non-cooperative game with n players (servers/nodes) whose strategies are sets of objects to cache. In the game, each server chooses a pure strategy that minimizes its cost. Our focus is to investigate the resulting configuration, which is the Nash equilibrium of the game. It should be emphasized that we consider only pure strategy Nash equilibria in this paper. The cost model is an important part of the game. Let Ai be the set of feasible strategies for server i, and let Si E Ai be the strategy chosen by server i. Given a strategy profile S = (S1, S2,..., Sn), the cost incurred by server i is defined as: where αij is the placement cost of object j, wij is the demand that server i has for object j, f (i, j) is the closest server to i that caches object j, and dik is the distance between i and k. When no server caches the object, we define distance cost di ~ (i, j) to be dM--large enough that at least one server will choose to cache the object. 
The placement cost can be further divided into first-time installation cost and maintenance cost: where k1i is the installation cost, k2i is the relative weight between the maintenance cost and the installation cost, Pj is the ratio of the number of writes over the number of reads and writes, UpdateSizej is the size of an update, ObjectSizej is the size of the object, and T is the update period. We see tradeoffs between different parameters in this equation. For example, placing replicas becomes more expensive as UpdateSizej increases, Pj increases, or T decreases. However, note that by varying αij itself we can capture the full range of behaviors in the game. For our analysis, we use only αij. Since there is no capacity limit on servers, we can look at each single object as a separate game and combine the pure strategy equilibria of these games to obtain a pure strategy equilibrium of the multi-object game. Fabrikant, Papadimitriou, and Talwar discuss this existence argument: if two games are known to have pure equilibria, and their cost functions are cross-monotonic, then their union is also guaranteed to have pure Nash equilibria, by a continuity argument [9]. A Nash equilibrium for the multi-object game is the cross product of Nash equilibria for single-object games. Therefore, we can focus on the single object game in the rest of this paper. For single object selfish caching, each server i has two strategies--to cache or not to cache. The object under consideration is j. We define Si to be 1 when server i caches j and 0 otherwise. The cost incurred by server i is We refer to this game as the basic game. The extent to which Ci (S) represents actual cost incurred by server i is beyond the scope of this paper; we will assume that an appropriate cost function of the form of Equation 3 can be defined. 3.2 Nash Equilibrium Solutions In principle, we can start with a random configuration and let this configuration evolve as each server alters its strategy and attempts to minimize its cost. Game theory is interested in stable solutions called Nash equilibria. A pure strategy Nash equilibrium is reached when no server can benefit by unilaterally changing its strategy. A Nash equilibrium3 (S ∗ i, S ∗ − i) for the basic game specifies a configuration X such that ` di E N, i E X 4 * S ∗ i = 1. Thus, we can consider a set E of all pure strategy Nash equilibrium configurations: By this definition, no server has incentive to deviate in the configurations since it cannot reduce its cost. For the basic game, we can easily see that: The first condition guarantees that there is a server that places the replica within distance α of each server i. If the replica is not placed 3The notation for strategy profile (S ∗ i, S ∗ − i) separates node i ~ s strategy (S ∗ i) from the strategies of other nodes (S ∗ − i). Figure 2: Potential inefficiency of Nash equilibria illustrated by two clusters of n2 servers. The intra-cluster distances are all zero and the distance between clusters is α − 1, where α is the placement cost. The dark nodes replicate the object. Network (a) shows a Nash equilibrium in the basic game, where one server in a cluster caches the object. Network (b) shows the social optimum where two replicas, one for each cluster, are placed. The price of anarchy is O (n) and even the optimistic price of anarchy is O (n). This high price of anarchy comes from the undersupply of replicas due to the selfish nature of servers. 
Network (c) shows a Nash equilibrium in the payment game, where two replicas, one for each cluster, are placed. Each light node in each cluster pays 2/n to the dark node, and the dark node replicates the object. Here, the optimistic price of anarchy is one. at i, then it is placed at another server within distance α of i, so i has no incentive to cache. If the replica is placed at i, then the second condition ensures there is no incentive to drop the replica because no two servers separated by distance less than α both place replicas. 3.3 Social Optimum The social cost of a given strategy profile is defined as the total cost incurred by all servers, namely: where Ci (S) is the cost incurred by server i given by Equation 1. The social optimum cost, referred to as C (SO) for the remainder of the paper, is the minimum social cost. The social optimum cost will serve as an important base case against which to measure the cost of selfish caching. We define C (SO) as: where S varies over all possible strategy profiles. Note that in the basic game, this means varying configuration X over all possible configurations. In some sense, C (SO) represents the best possible caching behavior--if only nodes could be convinced to cooperate with one another. The social optimum configuration is a solution of a mini-sum facility location problem, which is NP-hard [11]. To find such configurations, we formulate an integer programming problem: Here, xij is 1 if server i replicates object j and 0 otherwise; yijk is 1 if server i accesses object j from server k and 0 otherwise; I (w) returns 1 if w is nonzero and 0 otherwise. The first constraint specifies that if server i has demand for object j, then it must access j from exactly one server. The second constraint ensures that server i replicates object j if any other server accesses j from i. 3.4 Analysis To analyze the basic game, we first give a proof of the existence of pure strategy Nash equilibria. We discuss the price of anarchy in general and then on specific underlying topologies. In this analysis we use simply α in place of αij, since we deal with a single object and we assume placement cost is the same for all servers. In addition, when we compute the price of anarchy, we assume that all nodes have the same demand (i.e., ∀ i ∈ N wij = 1). PROOF. We show a constructive proof. First, initialize the set V to N. Then, remove all nodes with zero demand from V. Each node x defines 3x, where 3x = wxj α. Furthermore, let Z (y) = {z: dzy ≤ 3z, z ∈ V}; Z (y) represents all nodes z for which y lies within 3z from z. Pick a node y ∈ V such that 3y ≤ 3x for all x ∈ V. Place a replica at y and then remove y and all z ∈ Z (y) from V. No such z can have incentive to replicate the object because it can access y's replica at lower (or equal) cost. Iterate this process of placing replicas until V is empty. Because at each iteration y is the remaining node with minimum 3, no replica will be placed within distance 3y of any such y by this process. The resulting configuration is a pure-strategy Nash equilibrium of the basic game. The Price of Anarchy (POA): To quantify the cost of lack of coordination, we use the price of anarchy [21] and the optimistic price of anarchy [3]. The price of anarchy is the ratio of the social costs of the worst-case Nash equilibrium and the social optimum, and the optimistic price of anarchy is the ratio of the social costs of the best-case Nash equilibrium and the social optimum. We show general bounds on the price of anarchy. 
Throughout our discussion, we use C (SW) to represent the cost of worst case Nash equilibrium, C (SO) to represent the cost of social optimum, and PoA to represent the price of anarchy, which is C (SW) C (SO). The worst case Nash equilibrium maximizes the total cost under the constraint that the configuration meets the Nash condition. Formally, we can define C (SW) as follows. where minj ∈ X dij is the distance to the closest replica (including i itself) from node i and X varies through Nash equilibrium configurations. Bounds on the Price of Anarchy: We show bounds of the price of anarchy varying α. Let dmin = min (i, j) ∈ N × N, i ~ = j dij and dmax = max (i, j) ∈ N × N dij. We see that if α ≤ dmin, PoA = 1 Table 1: PoA in the basic game for specific topologies trivially, since every server caches the object for both Nash equilibrium and social optimum. When α> dmax, there is a transition in Nash equilibria: since the placement cost is greater than any distance cost, only one server caches the object and other servers access it remotely. However, the social optimum may still place multiple replicas. Since α <C (SO) <α + minj ∈ N J: i dij when α> Note that depending on the underlying topology, even the lower bound of PoA can be O (n). Finally, there is a transition when α> maxj ∈ N J: i dij. In this case, PoA = α + maxj ∈ N F-F-i dij and Figure 2 shows an example of the inefficiency of a Nash equilibrium. In the network there are two clusters of servers whose size is n2. The distance between two clusters is α--1 where α is the placement cost. Figure 2 (a) shows a Nash equilibrium where one server in a cluster caches the object. In this case, C (SW) = α + (α--1) n2, since all servers in the other cluster accesses the remote replica. However, the social optimum places two replicas, one for each cluster, as shown in Figure 2 (b). Therefore, C (SO) = 2α. comes from an undersupply of replicas due to the selfish nature of the servers. Note that all Nash equilibria have the same cost; thus even the optimistic price of anarchy is O (n). In Appendix A, we analyze the price of anarchy with specific underlying topologies and show that PoA can have tighter bounds than O (n) for the complete graph, star, line, and D-dimensional grid. In these topologies, we set the distance between directly connected nodes to one. We describe the case where α> 1, since PoA = 1 trivially when α <1. A summary of the results is shown in Table 1. 4. PAYMENT GAME In this section, we present an extension to the basic game with payments and analyze the price of anarchy and the optimistic price of anarchy of the game. 4.1 Game Model The new game, which we refer to as the payment game, allows each player to offer a payment to another player to give the latter incentive to replicate the object. The cost of replication is shared among the nodes paying the server that replicates the object. The strategy for each player i is specified by a triplet (vi, bi, ti) E {N, R +, R +}. vi specifies the player to whom i makes a bid, bi> 0 is the value of the bid, and ti> 0 denotes a threshold for payments beyond which i will replicate the object. In addition, we use Ri to denote the total amount of bids received by a node i (Ri = J: j: vj = i bj). A node i replicates the object if and only if Ri> ti, that is, the amount of bids it receives is greater than or equal to its threshold. Let Ii denote the corresponding indicator variable, that is, Ii equals 1 if i replicates the object, and 0 otherwise. 
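The two-cluster example of Figure 2, which also motivates the payment game just introduced, can be checked numerically. A small sketch, assuming my reading of the garbled figure caption (two clusters of n/2 servers, zero intra-cluster distance, inter-cluster distance α − 1, unit demand); the function name and the n/2 reading are mine.

```python
def two_cluster_example(n: int, alpha: float):
    """Worst Nash equilibrium vs. optimum in the two-cluster example (sketch).
    Assumes n even: two clusters of n/2 servers, zero distance inside a
    cluster, distance alpha - 1 between clusters, unit demand."""
    half = n // 2
    worst_ne = alpha + half * (alpha - 1)   # one replica; the far cluster accesses it remotely
    optimum = 2 * alpha                     # one replica per cluster
    # With payments the optimal placement can itself be an equilibrium; the
    # bids cancel in the social cost, so its cost stays 2 * alpha.
    return worst_ne / optimum

for n in (10, 100, 1000):
    print(n, two_cluster_example(n, alpha=10.0))   # the ratio grows roughly linearly in n
```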
We make the rule that if a node i makes a bid to another node j and j replicates the object, then i must pay j the amount bi. If j does not replicate the object, i does not pay j. Given a strategy profile, the outcome of the game is the set of tuples {(Ii, vi, bi, Ri)}. Ii tells us whether player i replicates the object or not, bi is the payment player i makes to player vi, and Ri is the total amount of bids received by player i. To compute the payoffs given the outcome, we must now take into account the payments a node makes, in addition to the placement costs and access costs of the basic game. By our rules, a server node i pays bi to node vi if vi replicates the object, and receives a payment of Ri if it replicates the object itself. Its net payment is biIvi--RiIi. The total cost incurred by each node is the sum of its placement cost, access cost, and net payment. It is defined as The cost of social optimum for the payment game is same as that for the basic game, since the net payments made cancel out. 4.2 Analysis In analyzing the payment model, we first show that a Nash equilibrium in the basic game is also a Nash equilibrium in the payment game. We then present an important positive result--in the payment game the socially optimal configuration can always be implemented by a Nash equilibrium. We know from the counterexample in Figure 2 that this is not guaranteed in the the basic game. In this analysis we use α to represent αij. THEOREM 2. Any configuration that is a pure strategy Nash equilibrium in the basic game is also a pure strategy Nash equilibrium in the payment game. Therefore, the price of anarchy of the payment game is at least that of the basic game. PROOF. Consider any Nash equilibrium configuration in the basic game. For each node i replicating the object, set its threshold ti to 0; everyone else has threshold α. Also, for all i, bi = 0. A node that replicates the object does not have incentive to change its strategy: changing the threshold does not decrease its cost, and it would have to pay at least α to access a remote replica or incentivize a nearby node to cache. Therefore it is better off keeping its threshold and bid at 0 and replicating the object. A node that is not replicating the object can access the object remotely at a cost less than or equal to α. Lowering its threshold does not decrease its cost, since all bi are zero. The payment necessary for another server to place a replica is at least α. No player has incentive to deviate, so the current configuration is a Nash equilibrium. In fact, Appendix B shows that the PoA of the payment game can be more than that of the basic game in a given topology. Now let us look at what happens to the example shown in Figure 2 in the best case. Suppose node B's neighbors each decide to pay node B an amount 2/n. B does not have an incentive to deviate, since accessing the remote replica does not decrease its cost. The same argument holds for A because of symmetry in the graph. Since no one has an incentive to deviate, the configuration is a Nash equilibrium. Its total cost is 2α, the same as in the socially optimal configuration shown in Figure 2 (b). Next we prove that indeed the payment game always has a strategy profile that implements the socially optimal configuration as a Nash equilibrium. We first present the following observation, which is used in the proof, about thresholds in the payment game. least (α − dij). 
Otherwise, it cannot collect enough payment to compensate for the cost of replicating the object and is better off accessing the replica at j. THEOREM 3. In the payment game, there is always a pure strategy Nash equilibrium that implements the social optimum configuration. The optimistic price of anarchy in the payment game is therefore always one. PROOF. Consider the socially optimal configuration 0opt. Let No be the set of nodes that replicate the object and Nc = N − No be the rest of the nodes. Also, for each i in No, let Qi denote the set of nodes that access the object from i, not including i itself. In the socially optimal configuration, dij <α for all j in Qi. We want to find a set of payments and thresholds that makes this configuration implementable. The idea is to look at each node i in No and distribute the minimum payment needed to make i replicate the object among the nodes that access the object from i. For each i in No, and for each j in Qi, we define Note that 6j is the difference between j's cost for accessing the replica at i and j's next best option among replicating the object and accessing some replica other than i. It is clear that 6j> 0. No. Then, Ej ∈ Qi 6j> α − di ~. CLAIM 1. For each i E No, let f be the nearest node to i in PROOF. (of claim) Assume the contrary, that is, Ej ∈ Qi 6j <α − di ~. Consider the new configuration 0new wherein i does not replicate and each node in Qi chooses its next best strategy (either replicating or accessing the replica at some node in No − {i}). In addition, we still place replicas at each node in No − {i}. It is easy to see that cost of 0opt minus cost of 0new is at least: which contradicts the optimality of 0opt. We set bids as follows. For each i in No, bi = 0 and for each j in Qi, j bids to i (i.e., vj = i) the amount: where Ei = Ej ∈ Qi 6j − α + di ~> 0 and | Qi | is the cardinality of bj = max {0, 6j − Ei / (| Qi | + 1)}, j E Qi (12) Qi. For the thresholds, we have: ti = ~ α Ej ∈ Qi bj if if i iE ENc; No. (13) This fully specifies the strategy profile of the nodes, and it is easy to see that the outcome is indeed the socially optimal configuration. Next, we verify that the strategies stipulated constitute a Nash equilibrium. Having set ti to α for i in Nc means that any node in N is at least as well off lowering its threshold and replicating as bidding α to some node in Nc to make it replicate, so we may disregard the latter as a profitable strategy. By observation 1, to ensure that each i in No does not deviate, we require that if f is the nearest node to i in No, then Ej ∈ Qi bj is at least (α − di ~). Otherwise, i will raise ti above Ej ∈ Qi bj so that it does not replicate and instead accesses the replica at. We can easily check that Figure 3: We present PoA, Ratio, and OPoA results for the basic game, varying α on a 100-node line topology, and we show number of replicas placed by the Nash equilibria and by the optimal solution. We see large peaks in PoA and OPoA at α = 100, where a phase transition causes an abrupt transition in the lines. Therefore, each node i E No does not have incentive to change ti since i loses its payments received or there is no change, and i does not have incentive to bi since it replicates the object. Each node j in Nc has no incentive to change tj since changing tj does not reduce its cost. 
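Written out procedurally, the payment game's outcome rule and the bid construction in the proof of Theorem 3 look as follows. This is a sketch under my reading of the garbled notation (the strategy triple (v_i, b_i, t_i), the net payment b_i·1[v_i caches] − R_i·1[i caches], Δ_j for "6j" and ε_i for "Ei"), it assumes unit demand as in the analysis, and the fallback used when the optimum has a single replica is my own harmless default; none of the identifiers come from the paper.

```python
from typing import Dict, List, Sequence, Tuple

def payment_game_costs(strategies: Sequence[Tuple[int, float, float]], alpha: float,
                       w: Sequence[float], dist: Sequence[Sequence[float]]) -> List[float]:
    """Evaluate one strategy profile: strategies[i] = (v_i, b_i, t_i) means i bids
    b_i to node v_i and replicates once the bids it receives reach t_i.  Cost is
    placement + access + net payment (pay b_i only if v_i replicates; collect
    the received bids only if i replicates)."""
    n = len(strategies)
    received = [0.0] * n
    for v, b, _ in strategies:
        received[v] += b
    caches = [received[i] >= strategies[i][2] for i in range(n)]
    replicas = [j for j in range(n) if caches[j]]
    costs = []
    for i, (v, b, _) in enumerate(strategies):
        if caches[i]:
            cost = alpha - received[i]
        else:
            cost = w[i] * min(dist[i][j] for j in replicas) if replicas else float("inf")
        costs.append(cost + (b if caches[v] else 0.0))
    return costs

def optimum_supporting_strategies(opt_replicas: Sequence[int], assign: Dict[int, int],
                                  alpha: float, dist: Sequence[Sequence[float]]):
    """Bids and thresholds from the proof of Theorem 3 (sketch).  assign[j] is
    the replica node that j uses in the socially optimal configuration."""
    n = len(dist)
    No = set(opt_replicas)
    bids = {i: (i, 0.0) for i in range(n)}        # (target, bid); replica nodes bid 0
    thresholds = {i: alpha for i in range(n)}     # default threshold for non-replicating nodes
    for i in No:
        Q = [j for j in assign if assign[j] == i]
        d_near = min((dist[i][k] for k in No if k != i), default=alpha)
        # Delta_j: j's saving from the replica at i versus its next best option
        # (caching itself at cost alpha, or using some other replica).
        delta = {j: min([alpha] + [dist[j][k] for k in No if k != i]) - dist[j][i] for j in Q}
        eps = sum(delta.values()) - (alpha - d_near)
        for j in Q:
            bids[j] = (i, max(0.0, delta[j] - eps / (len(Q) + 1)))
        thresholds[i] = sum(bids[j][1] for j in Q)
    return [(bids[i][0], bids[i][1], thresholds[i]) for i in range(n)]
```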
It also does not have incentive to reduce bj since the node where j accesses does not replicate and j has to replicate the object or to access the next closest replica, which costs at least the same from the definition of bj. No player has incentive to deviate, so this strategy profile is a Nash equilibrium. 5. SIMULATION We run simulations to compare Nash equilibria for the singleobject caching game with the social optimum computed by solving the integer linear program described in Equation 8 using Mosek [1]. We examine price of anarchy (PoA), optimistic price of anarchy (OPoA), and the average ratio of the costs of Nash equilibria and social optima (Ratio), and when relevant we also show the average numbers of replicas placed by the Nash equilibrium (Replica (NE)) and the social optimum (Replica (SO)). The PoA and OPoA are taken from the worst and best Nash equilibria, respectively, that we observe over the runs. Each data point in our figures is based on 1000 runs, randomly varying the initial strategy profile and player order. The details of the simulations including protocols and a discussion of convergence are presented in Appendix C. In our evaluation, we study the effects of variation in four categories: placement cost, underlying topology, demand distribution, and payments. As we vary the placement cost α, we directly influence the tradeoff between caching and not caching. In order to get a clear picture of the dependency of PoA on α in a simple case, we first analyze the basic game with a 100-node line topology whose edge distance is one. We also explore transit-stub topologies generated using the GTITM library [36] and power-law topologies (Router-level BarabasiAlbert model) generated using the BRITE topology generator [25]. For these topologies, we generate an underlying physical graph of 3050 physical nodes. Both topologies have similar minimum, average, and maximum physical node distances. The average distance is 0.42. We create an overlay of 100 server nodes and use the same overlay for all experiments with the given topology. In the game, each server has a demand whose distribution is Bernoulli (p), where p is the probability of having demand for the object; the default unless otherwise specified is p = 1.0. Figure 4: Transit-stub topology: (a) basic game, (b) payment game. We show the PoA, Ratio, OPoA, and the number of replicas placed while varying α between 0 and 2 with 100 servers on a 3050-physical-node transit-stub topology. Figure 5: Power-law topology: (a) basic game, (b) payment game. We show the PoA, Ratio, OPoA, and the number of replicas placed while varying α between 0 and 2 with 100 servers on a 3050-physical-node power-law topology. 5.1 Varying Placement Cost Figure 3 shows PoA, OPoA, and Ratio, as well as number of replicas placed, for the line topology as α varies. We observe two phases. As α increases the PoA rises quickly to a peak at 100. After 100, there is a gradual decline. OPoA and Ratio show behavior similar to PoA. These behaviors can be explained by examining the number of replicas placed by Nash equilibria and by optimal solutions. We see that when α is above one, Nash equilibrium solutions place fewer replicas than optimal on average. For example, when α is 100, the social optimum places four replicas, but the Nash equilibrium places only one. 
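That observation can be checked directly. A small sketch under the section's unit-spacing, unit-demand assumptions; the evenly spaced placements only approximate the ILP optimum from above, so the printed ratio slightly understates the true price of anarchy.

```python
def line_cost(alpha: float, n: int, replicas) -> float:
    """Social cost on an n-node line with unit spacing and unit demand."""
    return alpha * len(replicas) + sum(min(abs(i - r) for r in replicas)
                                       for i in range(n))

n, alpha = 100, 100.0
# Every Nash equilibrium here has exactly one replica: no node is farther than
# 99 < alpha from it, and two replicas would have to be more than alpha apart.
worst_ne = line_cost(alpha, n, [0])     # single replica stuck at an endpoint
# Approximate the optimum by the best placement of k evenly spaced replicas.
approx_opt = min(line_cost(alpha, n, sorted({round((i + 0.5) * n / k) for i in range(k)}))
                 for k in range(1, n + 1))
print(worst_ne, approx_opt, worst_ne / approx_opt)   # a few replicas suffice for the optimum
```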
The peak in PoA at α = 100 occurs at the point for a 100-node line where the worst-case cost of accessing a remote replica is slightly less than the cost of placing a new replica, so selfish servers will never place a second replica. The optimal solution, however, places multiple replicas to decrease the high global cost of access. As α continues to increase, the undersupply problem lessens as the optimal solution places fewer replicas. 5.2 Different Underlying Topologies In Figure 4 (a) we examine an overlay graph on the more realistic transit-stub topology. The trends for the PoA, OPoA, and Ratio are similar to the results for the line topology, with a peak in PoA at α = 0.8 due to maximal undersupply. In Figure 5 (a) we examine an overlay graph on the power-law topology. We observe several interesting differences between the power-law and transit-stub results. First, the PoA peaks at a lower level in the power-law graph, around 2.3 (at α = 0.9) while the peak PoA in the transit-stub topology is almost 3.0 (at α = 0.8). After the peak, PoA and Ratio decrease more slowly as α increases. OPoA is close to one for the whole range of α values. This can be explained by the observation in Figure 5 (a) that there is no significant undersupply problem here like there was in the transit-stub graph. Indeed the high PoA is due mostly to misplacement problems when α is from 0.7 to 2.0, since there is little decrease in PoA when the number of replicas in social optimum changes from two to one. The OPoA is equal to one in the figure when the same number of replicas are placed. 5.3 Varying Demand Distribution Now we examine the effects of varying the demand distribution. The set of servers with demand is random for p <1, so we calculate the expected PoA by averaging over 5 trials (each data point is based on 5000 runs). We run simulations for demand levels of p ∈ {0.2, 0.6, 1.0} as α is varied on the 100 servers on top of the transit-stub graph. We observe that as demand falls, so does expected PoA. As p decreases, the number of replicas placed in the social optimum decreases, but the number in Nash equilibria changes little. Furthermore, when α exceeds the overlay diameter, the number in Nash equilibria stays constant when p varies. Therefore, lower p leads to a lesser undersupply problem, agreeing with intuition. We do not present the graph due to space limitations and redundancy; the PoA for p = 1.0 is identical to PoA in Figure 4 (a), and the lines for p = 0.6 and p = 0.2 are similar but lower and flatter. 5.4 Effects of Payment Finally, we discuss the effects of payments on the efficiency of Nash equilibria. The results are presented in Figure 4 (b) and Figure 5 (b). As shown in the analysis, the simulations achieve OPoA close to one (it is not exactly one because of randomness in the simulations). The Ratio for the payment game is much lower than the Ratio for the basic game, since the protocol for the payment game tends to explore good regions in the space of Nash equilibria. We observe in Figure 4 that for α> 0.4, the average number of replicas of Nash equilibria gets closer with payments to that of the social optimum than it does without. We observe in Figure 5 that more replicas are placed with payments than without when α is between 0.7 and 1.3, the only range of significant undersupply in the power-law case. The results confirm that payments give servers incentive to replicate the object and this leads to better equilibria. 6. 
DISCUSSION AND FUTURE WORK We suggest several interesting extensions and directions. One extension is to consider multiple objects in the capacitated caching game, in which servers have capacity limits when placing objects. Since caching one object affects the ability to cache another, a multi-object game is no longer separable into multiple single-object games. As studied in [12], one way to formulate this problem is to compute a server's best response by solving a knapsack problem and then to compute Nash equilibria. In our analyses, we assume that all nodes have the same demand. However, nodes could have different demands depending on the object. We intend to examine the effects of heterogeneous demands (or heterogeneous placement costs) analytically. We also want to look at the following "aggregation effect". Suppose there are n − 1 clustered nodes at distance α − 1 from a node hosting a replica, and all nodes have demand one. In that case, the price of anarchy is O(n). However, if we aggregate the n − 1 nodes into one node with demand n − 1, the price of anarchy becomes O(1), since α would have to exceed (n − 1)(α − 1) for only one replica to be placed. Such aggregation can reduce the inefficiency of Nash equilibria. We intend to compute bounds on the price of anarchy under other underlying topologies such as random graphs or growth-restricted metrics, and to investigate whether there are distance constraints that guarantee an O(1) price of anarchy. In addition, we want to run large-scale simulations to observe how the price of anarchy changes as the network size increases. Another extension is to consider server congestion. Suppose the distance is the network distance plus γ × (number of accesses), where γ is the extra delay incurred when an additional server accesses the replica. Then, when α > γ, it can be shown that the PoA is bounded by α/γ. As γ increases, this bound decreases, since the load of accesses is balanced across servers. While exploring the caching problem, we made several observations that seem counterintuitive. First, the PoA in the payment game can be worse than the PoA in the basic game. Second, the number of replicas in a Nash equilibrium can exceed the number of replicas in the social optimum even without payments. For example, a graph with diameter slightly more than α may have a Nash equilibrium configuration with two replicas at the two ends, whereas the social optimum places one replica at the center. We leave the investigation of more such examples as an open issue.
7. CONCLUSIONS In this work we introduce a novel non-cooperative game model to characterize the caching problem among selfish servers without any central coordination. We show that pure strategy Nash equilibria exist in the game and that the price of anarchy can be O(n) in general, where n is the number of servers, due to undersupply problems. With specific topologies, we show that the price of anarchy can have tighter bounds. More importantly, with payments, servers are incentivized to replicate, and the optimistic price of anarchy is always one. Non-cooperative caching is a more realistic model than cooperative caching in the competitive Internet, hence this work is an important step toward viable federated caching systems.
8. ACKNOWLEDGMENTS We thank Kunal Talwar for enlightening discussions regarding this work.
9. REFERENCES
APPENDIX A.
ANALYZING SPECIFIC TOPOLOGIES We now analyze the price of anarchy (PoA) for the basic game with specific underlying topologies and show that PoA can have better bounds. We look at complete graph, star, line, and Ddimensional grid. In all these topologies, we set the distance between two directly connected nodes to one. We describe the case where α> 1, since PoA = 1 trivially when α <1. Figure 6: Example where the payment game has a Nash equilibrium which is worse than any Nash equilibrium in the basic game. The unlabeled distances between the nodes in the cluster are all 1. The thresholds of white nodes are all α and the thresholds of dark nodes are all α / 4. The two dark nodes replicate the object in this payment game Nash equilibrium. For a complete graph, PoA = 1, and for a star, PoA <2. For a complete graph, when α> 1, both Nash equilibria and social optima place one replica at one server, so PoA = 1. For star, when 1 <α <2, the worst case Nash equilibrium places replicas at all leaf nodes. However, the social optimum places one replica at the center node. Therefore, PoA = (n − 1) α +1 α + (n − 1) <2 (n − 1) +1 1 + (n − 1) <2. When α> 2, the worst case Nash equilibrium places one replica at a leaf node and the other nodes access the remote replica, and the social optimum places one replica at the center. PoA = α +1 +2 (n − 2) For a line, the price of anarchy is O (% / n). When 1 <α <n, the worst case Nash equilibrium places replicas every 2α so that there is no overlap between areas covered by two adjacent servers that cache the object. The social optimum places replicas at least every% / 2α. The placement of replicas for the social optimum is as follows. Suppose there are two replicas separated by distance d. By placing an additional replica in the middle, we want to have the reduction of distance to be at least α. The distance reduction is d/2 + 2 {((d/2 - 1) - 1) + ((d/2 - 2) - 2) +...+ ((d/2 d/4) - d/4)}> d2/8. d should be at most 2% / 2α. Therefore, the distance between replicas in the social optimum is at most% / 2α. C (SW) = α (n − 1) one replica at a leaf node and C (SW) = α + (n − 1) n 2. However, the social optimum still places replicas every% / 2α. Ifwe view PoA as a continuous function of α and compute a derivative of PoA, the derivative becomes 0 when α is Θ (n2), which means the function decreases as α increases from n. Therefore, PoA is maximum when α is n, and PoA = Θ (n2) 2, the social optimum also places only one replica, and PoA is trivially bounded by 2. This result holds for the ring and it can be generalized to the D-dimensional grid. As the dimension in the grid increases, the distance reduction of additional replica placement becomes Ω (dD +1) where d is the distance between two adjacent replicas. Therefore, PoA = Θ (n2) B. PAYMENT CAN DO WORSE Consider the network in Figure 6 where α> 1 + α / 3. Any Nash equilibrium in the basic game model would have exactly two replicas - one in the left cluster, and one in the right. It is easy to verify that the worst placement (in terms of social cost) of two replicas occurs when they are placed at nodes A and B. This placement can be achieved as a Nash equilibrium in the payment game, but not in the basic game since A and B are a distance 3α / 4 apart. C. NASH DYNAMICS PROTOCOLS The simulator initializes the game according to the given parameters and a random initial strategy profile and then iterates through rounds. Initially the order of player actions is chosen randomly. 
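Algorithms 1 and 2 themselves are not reproduced in this text, so the following is a minimal sketch of the basic-game protocol as it is described here: a random initial replica set, a random fixed player order, and greedy best responses until a full round passes with no change. The 50% initialization probability and the tie-breaking rule are my choices, not the paper's.

```python
import random
from typing import List, Sequence

def basic_game_dynamics(alpha: float, w: Sequence[float],
                        dist: Sequence[Sequence[float]],
                        rng: random.Random) -> List[bool]:
    """One run of the basic-game Nash dynamics (sketch of Algorithms 1 and 2)."""
    n = len(w)
    caches = [rng.random() < 0.5 for _ in range(n)]   # random initial strategy profile
    order = list(range(n))
    rng.shuffle(order)                                # random, then fixed, player order
    changed = True
    while changed:
        changed = False
        for i in order:
            others = [dist[i][j] for j in range(n) if j != i and caches[j]]
            remote = w[i] * min(others) if others else float("inf")
            want_cache = alpha < remote               # ties broken toward not caching
            if caches[i] != want_cache:
                caches[i] = want_cache
                changed = True
    return caches
```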
In each round, each server performs the Nash dynamics protocol that adjusts its strategies greedily in the chosen order. When a round passes without any server changing its strategy, the simulation ends and a Nash equilibrium is reached. In the basic game, we pick a random initial subset of servers to replicate the object as shown in Algorithm 1. After the initialization, each player runs the move selection procedure described in Algorithm 2 (in algorithms 2 and 4, Costnow represents the current cost for node i). This procedure chooses greedily between replication and non-replication. It is not hard to see that this Nash dynamics protocol converges in two rounds. In the payment game, we pick a random initial subset of servers to replicate the object by setting their thresholds to 0. In addition, we initialize a second random subset of servers to replicate the object with payments from other servers. The details are shown in Algorithm 3. After the initialization, each player runs the move selection procedure described in Algorithm 4. This procedure chooses greedily between replication and accessing a remote replica, with the possibilities of receiving and making payments, respectively. In the protocol, each node increases its threshold value by incr if it does not replicate the object. By this ramp up procedure, the cost of replicating an object is shared fairly among the nodes that access a replica from a server that does cache. If incr is small, cost is shared more fairly, and the game tends to reach equilibria that encourages more servers to store replicas, though the convergence takes longer. If incr is large, the protocol converges quickly, but it may miss efficient equilibria. In the simulations we set incr to 0.1. Most of our Figure 7: An example where the Nash dynamics protocol does not converge in the payment game. simulation runs converged, but there were a very few cases where the simulation did not converge due to the cycles of dynamics. The protocol does not guarantee convergence within a certain number of rounds like the protocol for the basic game. We provide an example graph and an initial condition such that the Nash dynamics protocol does not converge in the payment game if started from this initial condition. The graph is represented by a shortest path metric on the network shown in Figure 7. In the starting configuration, only A replicates the object, and a pays it an amount α / 3 to do so. The thresholds for A, B and C are α / 3 each, and the thresholds for a, b and c are 2α / 3. It is not hard to verify that the Nash dynamics protocol will never converge if we start with this condition. The Nash dynamics protocol for the payment game needs further investigation. The dynamics protocol for the payment game should avoid cycles of actions to achieve stabilization of the protocol. Finding a self-stabilizing dynamics protocol is an interesting problem. In addition, a fixed value of incr cannot adapt to changing environments. A small value of incr can lead to efficient equilibria, but it can take long time to converge. An important area for future research is looking at adaptively changing incr.
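The PoA, OPoA, and Ratio numbers of Section 5 come from repeating a protocol like the one sketched above from random starting points and comparing the resulting equilibrium costs with the optimum. A driver sketch: `run_dynamics` is expected to wrap such a protocol and return the social cost of the equilibrium it reaches, and `opt_cost` stands in for the integer program of Section 3.3 (solved with Mosek in the paper); both names are placeholders.

```python
import random
from statistics import mean
from typing import Callable, Tuple

def anarchy_metrics(run_dynamics: Callable[[random.Random], float],
                    opt_cost: float, runs: int = 1000,
                    seed: int = 0) -> Tuple[float, float, float]:
    """Estimate PoA, OPoA, and Ratio from repeated runs of the Nash dynamics."""
    master = random.Random(seed)
    costs = [run_dynamics(random.Random(master.getrandbits(32))) for _ in range(runs)]
    poa = max(costs) / opt_cost      # worst observed equilibrium
    opoa = min(costs) / opt_cost     # best observed equilibrium
    ratio = mean(costs) / opt_cost   # average-case inefficiency
    return poa, opoa, ratio
```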
Selfish Caching in Distributed Systems: A Game-Theoretic Analysis ABSTRACT We analyze replication of resources by server nodes that act selfishly, using a game-theoretic approach. We refer to this as the selfish caching problem. In our model, nodes incur either cost for replicating resources or cost for access to a remote replica. We show the existence of pure strategy Nash equilibria and investigate the price of anarchy, which is the relative cost of the lack of coordination. The price of anarchy can be high due to undersupply problems, but with certain network topologies it has better bounds. With a payment scheme the game can always implement the social optimum in the best case by giving servers incentive to replicate. 1. INTRODUCTION Wide-area peer-to-peer file systems [2,5,22,32,33], peer-to-peer caches [15,16], and web caches [6,10] have become popular over * Research supported by NSF career award ANI-9985250, NFS ITR award CCR-0085899, and California MICRO award 00-049. † Research supported by NSF ITR grant 0121555. ‡ Research supported by NSF grant CCR 9984703 and a US-Israel Binational Science Foundation grant. § Research supported by NSF grant EIA-0122599. the last few years. Caching1 of files in selected servers is widely used to enhance the performance, availability, and reliability of these systems. However, most such systems assume that servers cooperate with one another by following protocols optimized for overall system performance, regardless of the costs incurred by each server. In reality, servers may behave selfishly--seeking to maximize their own benefit. For example, parties in different administrative domains utilize their local resources (servers) to better support clients in their own domains. They have obvious incentives to cache objects2 that maximize the benefit in their domains, possibly at the expense of globally optimum behavior. It has been an open question whether these caching scenarios and protocols maintain their desirable global properties (low total social cost, for example) in the face of selfish behavior. In this paper, we take a game-theoretic approach to analyzing the problem of caching in networks of selfish servers through theoretical analysis and simulations. We model selfish caching as a non-cooperative game. In the basic model, the servers have two possible actions for each object. If a replica of a requested object is located at a nearby node, the server may be better off accessing the remote replica. On the other hand, if all replicas are located too far away, the server is better off caching the object itself. Decisions about caching the replicas locally are arrived at locally, taking into account only local costs. We also define a more elaborate payment model, in which each server bids for having an object replicated at another site. Each site now has the option of replicating an object and collecting the related bids. Once all servers have chosen a strategy, each game specifies a configuration, that is, the set of servers that replicate the object, and the corresponding costs for all servers. Game theory predicts that such a situation will end up in a Nash equilibrium, that is, a set of (possibly randomized) strategies with the property that no player can benefit by changing its strategy while the other players keep their strategies unchanged [28]. 
Foundational considerations notwithstanding, it is not easy to accept randomized strategies as the behavior of rational agents in a distributed system (see [28] for an extensive discussion)--but this is what classical game theory can guarantee. In certain very fortunate situations, however (see [9]), the existence of pure (that is, deterministic) Nash equilibria can be predicted. With or without randomization, however, the lack of coordination inherent in selfish decision-making may incur costs well beyond what would be globally optimum. This loss of efficiency is quantified by the price of anarchy [21]. The price of anarchy is the ratio of the social (total) cost of the worst possible Nash equilibrium to the cost of the social optimum. The price of anarchy bounds the worst possible behavior of a selfish system, when left completely on its own. However, in reality there are ways whereby the system can be guided, through "seeding" or incentives, to a preselected Nash equilibrium. This "optimistic" version of the price of anarchy [3] is captured by the smallest ratio between a Nash equilibrium and the social optimum. In this paper we address the following questions: • Do pure strategy Nash equilibria exist in the caching game? • If pure strategy Nash equilibria do exist, how efficient are they (in terms of the price of anarchy, or its optimistic counterpart) under different placement costs, network topologies, and demand distributions? • What is the effect of adopting payments? Will the Nash equilibria be improved? We show that pure strategy Nash equilibria always exist in the caching game. The price of anarchy of the basic game model can be O (n), where n is the number of servers; the intuitive reason is undersupply. Under certain topologies, the price of anarchy does have tighter bounds. For complete graphs and stars, it is O (1). For anarchy can be O (n). In the payment model, however, the game can always implement a Nash equilibrium that is same as the social optimum, so the optimistic price of anarchy is one. Our simulation results show several interesting phases. As the placement cost increases from zero, the price of anarchy increases. When the placement cost first exceeds the maximum distance between servers, the price of anarchy is at its highest due to undersupply problems. As the placement cost further increases, the price of anarchy decreases, and the effect of replica misplacement dominates the price of anarchy. The rest of the paper is organized as follows. In Section 2 we discuss related work. Section 3 discusses details of the basic game and analyzes the bounds of the price of anarchy. In Section 4 we discuss the payment game and analyze its price of anarchy. In Section 5 we describe our simulation methodology and study the properties of Nash equilibria observed. We discuss extensions of the game and directions for future work in Section 6. 2. RELATED WORK There has been considerable research on wide-area peer-to-peer file systems such as OceanStore [22], CFS [5], PAST [32], FARSITE [2], and Pangaea [33], web caches such as NetCache [6] and SummaryCache [10], and peer-to-peer caches such as Squirrel [16]. Most of these systems use caching for performance, availability, and reliability. The caching protocols assume obedience to the protocol and ignore participants' incentives. Our work starts from the assumption that servers are selfish and quantifies the cost of the lack of coordination when servers behave selfishly. 
The placement of replicas in the caching problem is the most important issue. There is much work on the placement of web replicas, instrumentation servers, and replicated resources. All protocols assume obedience and ignore participants' incentives. In [14], Gribble et al. discuss the data placement problem in peer-to-peer systems. Ko and Rubenstein propose a self-stabilizing, distributed graph coloring algorithm for the replicated resource placement [20]. Chen, Katz, and Kubiatowicz propose a dynamic replica placement algorithm exploiting underlying distributed hash tables [4]. Douceur and Wattenhofer describe a hill-climbing algorithm to exchange replicas for reliability in FARSITE [8]. RaDar is a system that replicates and migrates objects for an Internet hosting service [31]. Tang and Chanson propose a coordinated en-route web caching that caches objects along the routing path [34]. Centralized algorithms for the placement of objects, web proxies, mirrors, and instrumentation servers in the Internet have been studied extensively [18,19, 23, 30]. The facility location problem has been widely studied as a centralized optimization problem in theoretical computer science and operations research [27]. Since the problem is NP-hard, approximation algorithms based on primal-dual techniques, greedy algorithms, and local search have been explored [17, 24, 26]. Our caching game is different from all of these in that the optimization process is performed among distributed selfish servers. There is little research in non-cooperative facility location games, as far as we know. Vetta [35] considers a class of problems where the social utility is submodular (submodularity means decreasing marginal utility). In the case of competitive facility location among corporations he proves that any Nash equilibrium gives an expected social utility within a factor of 2 of optimal plus an additive term that depends on the facility opening cost. Their results are not directly applicable to our problem, however, because we consider each server to be tied to a particular location, while in their model an agent is able to open facilities in multiple locations. Note that in that paper the increase of the price of anarchy comes from oversupply problems due to the fact that competing corporations can open facilities at the same location. On the other hand, the significant problems in our game are undersupply and misplacement. In a recent paper, Goemans et al. analyze content distribution on ad-hoc wireless networks using a game-theoretic approach [12]. As in our work, they provide monetary incentives to mobile users for caching data items, and provide tight bounds on the price of anarchy and speed of convergence to (approximate) Nash equilibria. However, their results are incomparable to ours because their payoff functions neglect network latencies between users, they consider multiple data items (markets), and each node has a limited budget to cache items. Cost sharing in the facility location problem has been studied using cooperative game theory [7, 13, 29]. Goemans and Skutella show strong connections between fair cost allocations and linear programming relaxations for facility location problems [13]. P ´ al and Tardos develop a method for cost-sharing that is approximately budget-balanced and group strategyproof and show that the method recovers 1/3 of the total cost for the facility location game [29]. 
Devanur, Mihail, and Vazirani give a strategyproof cost allocation for the facility location problem, but cannot achieve group strategyproofness [7]. 3. BASIC GAME 3.1 Game Model 3.2 Nash Equilibrium Solutions 3.3 Social Optimum 3.4 Analysis 4. PAYMENT GAME 4.1 Game Model 4.2 Analysis 5. SIMULATION 5.1 Varying Placement Cost 5.2 Different Underlying Topologies 5.3 Varying Demand Distribution 5.4 Effects of Payment 6. DISCUSSION AND FUTURE WORK 7. CONCLUSIONS 8. ACKNOWLEDGMENTS 9. REFERENCES APPENDIX A. ANALYZING SPECIFIC TOPOLOGIES B. PAYMENT CAN DO WORSE C. NASH DYNAMICS PROTOCOLS
J-62
Weak Monotonicity Suffices for Truthfulness on Convex Domains
Weak monotonicity is a simple necessary condition for a social choice function to be implementable by a truthful mechanism. Roberts [10] showed that it is sufficient for all social choice functions whose domain is unrestricted. Lavi, Mu'alem and Nisan [6] proved the sufficiency of weak monotonicity for functions over order-based domains and Gui, Muller and Vohra [5] proved sufficiency for order-based domains with range constraints and for domains defined by other special types of linear inequality constraints. Here we show the more general result, conjectured by Lavi, Mu'alem and Nisan [6], that weak monotonicity is sufficient for functions defined on any convex domain.
[ "weak monoton", "truth", "truth", "convex domain", "social choic function", "individu prefer", "truth implement", "recognit algorithm", "nonneg cycl properti", "affin maxim", "non-truth function", "domin strategi", "mechan design", "strategyproof" ]
[ "P", "P", "P", "P", "P", "U", "R", "U", "U", "U", "M", "U", "M", "U" ]
Weak Monotonicity Suffices for Truthfulness on Convex Domains Michael Saks ∗ Dept. of Mathematics Rutgers University 110 Frelinghuysen Road Piscataway, NJ, 08854 saks@math.rutgers.edu Lan Yu † Dept. of Computer Science Rutgers University 110 Frelinghuysen Road Piscataway, NJ, 08854 lanyu@paul.rutgers.edu ABSTRACT Weak monotonicity is a simple necessary condition for a social choice function to be implementable by a truthful mechanism. Roberts [10] showed that it is sufficient for all social choice functions whose domain is unrestricted. Lavi, Mu``alem and Nisan [6] proved the sufficiency of weak monotonicity for functions over order-based domains and Gui, Muller and Vohra [5] proved sufficiency for order-based domains with range constraints and for domains defined by other special types of linear inequality constraints. Here we show the more general result, conjectured by Lavi, Mu``alem and Nisan [6], that weak monotonicity is sufficient for functions defined on any convex domain. Categories and Subject Descriptors J.4 [Social and Behavioral Sciences]: Economics; K.4.4 [Computers and Society]: Electronic Commerce-payment schemes General Terms Theory, Economics 1. INTRODUCTION Social choice theory centers around the general problem of selecting a single outcome out of a set A of alternative outcomes based on the individual preferences of a set P of players. A method for aggregating player preferences to select one outcome is called a social choice function. In this paper we assume that the range A is finite and that each player``s preference is expressed by a valuation function which assigns to each possible outcome a real number representing the benefit the player derives from that outcome. The ensemble of player valuation functions is viewed as a valuation matrix with rows indexed by players and columns by outcomes. A major difficulty connected with social choice functions is that players can not be required to tell the truth about their preferences. Since each player seeks to maximize his own benefit, he may find it in his interest to misrepresent his valuation function. An important approach for dealing with this problem is to augment a given social choice function with a payment function, which assigns to each player a (positive or negative) payment as a function of all of the individual preferences. By carefully choosing the payment function, one can hope to entice each player to tell the truth. A social choice function augmented with a payment function is called a mechanism 1 and the mechanism is said to implement the social choice function. A mechanism is truthful (or to be strategyproof or to have a dominant strategy) if each player``s best strategy, knowing the preferences of the others, is always to declare his own true preferences. A social choice function is truthfully implementable, or truthful if it has a truthful implementation. (The property of truthful implementability is sometimes called dominant strategy incentive compatibility). This framework leads naturally to the question: which social choice functions are truthful? This question is of the following general type: given a class of functions (here, social choice functions) and a property that holds for some of them (here, truthfulness), characterize the property. The definition of the property itself provides a characterization, so what more is needed? Here are some useful notions of characterization: • Recognition algorithm. 
Give an algorithm which, given an appropriate representation of a function in the class, determines whether the function has the property. • Parametric representation. Give an explicit parametrized family of functions and show that each function in the 1 The usual definition of mechanism is more general than this (see [8] Chapter 23.C or [9]); the mechanisms we consider here are usually called direct revelation mechanisms. 286 family has the property, and that every function with the property is in the family. A third notion applies in the case of hereditary properties of functions. A function g is a subfunction of function f, or f contains g, if g is obtained by restricting the domain of f. A property P of functions is hereditary if it is preserved under taking subfunctions. Truthfulness is easily seen to be hereditary. • Sets of obstructions. For a hereditary property P, a function g that does not have the property is an obstruction to the property in the sense that any function containing g doesn``t have the property. An obstruction is minimal if every proper subfunction has the property. A set of obstructions is complete if every function that does not have the property contains one of them as a subfunction. The set of all functions that don``t satisfy P is a complete (but trivial and uninteresting) set of obstructions; one seeks a set of small (ideally, minimal) obstructions. We are not aware of any work on recognition algorithms for the property of truthfulness, but there are significant results concerning parametric representations and obstruction characterizations of truthfulness. It turns out that the domain of the function, i.e., the set of allowed valuation matrices, is crucial. For functions with unrestricted domain, i.e., whose domain is the set of all real matrices, there are very good characterizations of truthfulness. For general domains, however, the picture is far from complete. Typically, the domains of social choice functions are specified by a system of constraints. For example, an order constraint requires that one specified entry in some row be larger than another in the same row, a range constraint places an upper or lower bound on an entry, and a zero constraint forces an entry to be 0. These are all examples of linear inequality constraints on the matrix entries. Building on work of Roberts [10], Lavi, Mu``alem and Nisan [6] defined a condition called weak monotonicity (WMON). (Independently, in the context of multi-unit auctions, Bikhchandani, Chatterji and Sen [3] identified the same condition and called it nondecreasing in marginal utilities (NDMU).) The definition of W-MON can be formulated in terms of obstructions: for some specified simple set F of functions each having domains of size 2, a function satisfies W-MON if it contains no function from F. The functions in F are not truthful, and therefore W-MON is a necessary condition for truthfulness. Lavi, Mu``alem and Nisan [6] showed that W-MON is also sufficient for truthfulness for social choice functions whose domain is order-based, i.e., defined by order constraints and zero constraints, and Gui, Muller and Vohra [5] extended this to other domains. The domain constraints considered in both papers are special cases of linear inequality constraints, and it is natural to ask whether W-MON is sufficient for any domain defined by such constraints. Lavi, Mu``alem and Nisan [6] conjectured that W-MON suffices for convex domains. 
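For intuition, a recognition algorithm in the above sense is easy to state for the one-player case when the domain is a finite list of valuation vectors. The brute-force sketch below uses the convention that the payment is made to the player and that utility is value plus payment; that sign convention, and all names, are assumptions for illustration rather than anything fixed by the text.

```python
from typing import Callable, Dict, Hashable, Sequence

Valuation = Dict[Hashable, float]

def is_truthful_single_player(domain: Sequence[Valuation],
                              outcome: Callable[[Valuation], Hashable],
                              payment: Callable[[Valuation], float]) -> bool:
    """Naive truthfulness check for a one-player mechanism on a finite domain:
    for every true valuation, no misreport may give strictly higher utility."""
    for true_v in domain:
        honest = true_v[outcome(true_v)] + payment(true_v)
        for report in domain:
            if true_v[outcome(report)] + payment(report) > honest + 1e-9:
                return False
    return True
```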
The main result of this paper is an affirmative answer to this conjecture: Theorem 1. For any social choice function having convex domain and finite range, weak monotonicity is necessary and sufficient for truthfulness. Using the interpretation of weak monotonicity in terms of obstructions each having domain size 2, this provides a complete set of minimal obstructions for truthfulness within the class of social choice functions with convex domains. The two hypotheses on the social choice function, that the domain is convex and that the range is finite, can not be omitted as is shown by the examples given in section 7. 1.1 Related Work There is a simple and natural parametrized set of truthful social choice functions called affine maximizers. Roberts [10] showed that for functions with unrestricted domain, every truthful function is an affine maximizer, thus providing a parametrized representation for truthful functions with unrestricted domain. There are many known examples of truthful functions over restricted domains that are not affine maximizers (see [1], [2], [4], [6] and [7]). Each of these examples has a special structure and it seems plausible that there might be some mild restrictions on the class of all social choice functions such that all truthful functions obeying these restrictions are affine maximizers. Lavi, Mu``alem and Nisan [6] obtained a result in this direction by showing that for order-based domains, under certain technical assumptions, every truthful social choice function is almost an affine maximizer. There are a number of results about truthfulness that can be viewed as providing obstruction characterizations, although the notion of obstruction is not explicitly discussed. For a player i, a set of valuation matrices is said to be i-local if all of the matrices in the set are identical except for row i. Call a social choice function i-local if its domain is ilocal and call it local if it is i-local for some i. The following easily proved fact is used extensively in the literature: Proposition 2. The social choice function f is truthful if and only if every local subfunction of f is truthful. This implies that the set of all local non-truthful functions comprises a complete set of obstructions for truthfulness. This set is much smaller than the set of all non-truthful functions, but is still far from a minimal set of obstructions. Rochet [11], Rozenshtrom [12] and Gui, Muller and Vohra [5] identified a necessary and sufficient condition for truthfulness (see lemma 3 below) called the nonnegative cycle property. This condition can be viewed as providing a minimal complete set of non-truthful functions. As is required by proposition 2, each function in the set is local. Furthermore it is one-to-one. In particular its domain has size at most the number of possible outcomes |A|. As this complete set of obstructions consists of minimal non-truthful functions, this provides the optimal obstruction characterization of non-truthful functions within the class of all social choice functions. But by restricting attention to interesting subclasses of social choice functions, one may hope to get simpler sets of obstructions for truthfulness within that class. The condition of weak monotonicity mentioned earlier can be defined by a set of obstructions, each of which is a local function of domain size exactly 2. 
Thus the results of Lavi, Mu'alem and Nisan [6], and of Gui, Muller and Vohra [5] give a very simple set of obstructions for truthfulness within certain subclasses of social choice functions. Theorem 1 extends these results to a much larger subclass of functions.
1.2 Weak Monotonicity and the Nonnegative Cycle Property
By proposition 2, a function is truthful if and only if each of its local subfunctions is truthful. Therefore, to get a set of obstructions for truthfulness, it suffices to obtain such a set for local functions. The domain of an i-local function consists of matrices that are fixed on all rows but row i. Fix such a function f and let D ⊆ R^A be the set of allowed choices for row i. Since f depends only on row i and row i is chosen from D, we can view f as a function from D to A. Therefore, f is a social choice function having one player; we refer to such a function as a single player function. Associated to any single player function f with domain D we define an edge-weighted directed graph Hf whose vertex set is the image of f. For convenience, we assume that f is surjective and so this image is A. For each a, b ∈ A and x ∈ f^{-1}(a) there is an edge e_x(a, b) from a to b with weight x(a) − x(b). The weight of a set of edges is just the sum of the weights of the edges. We say that f satisfies:
• the nonnegative cycle property if every directed cycle has nonnegative weight.
• the nonnegative two-cycle property if every directed cycle between two vertices has nonnegative weight.
We say a local function g satisfies the nonnegative cycle property / nonnegative two-cycle property if its associated single player function f does. The graph Hf has a possibly infinite number of edges between any two vertices. We define Gf to be the edge-weighted directed graph with exactly one edge from a to b, whose weight δab is the infimum (possibly −∞) of all of the edge weights e_x(a, b) for x ∈ f^{-1}(a). It is easy to see that Hf has the nonnegative cycle property/nonnegative two-cycle property if and only if Gf does. Gf is called the outcome graph of f. The weak monotonicity property mentioned earlier can be defined for arbitrary social choice functions by the condition that every local subfunction satisfies the nonnegative two-cycle property. The following result was obtained by Rochet [11] in a slightly different form and rediscovered by Rozenshtrom [12] and Gui, Muller and Vohra [5]:
Lemma 3. A local social choice function is truthful if and only if it has the nonnegative cycle property.
Thus a social choice function is truthful if and only if every local subfunction satisfies the nonnegative cycle property. In light of this, theorem 1 follows from:
Theorem 4. For any surjective single player function f : D → A where D is a convex subset of R^A and A is finite, the nonnegative two-cycle property implies the nonnegative cycle property.
This is the result we will prove.
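The outcome graph and its two cycle properties are easy to experiment with. The following Python sketch is our own illustration (not part of the paper): it approximates the weights δab from a finite sample of valuations, so the infimum is replaced by a minimum over the sample, and the names sample, f and outcomes are hypothetical placeholders.

```python
from itertools import product

def outcome_graph(sample, f, outcomes):
    """Approximate delta_ab = inf{x(a) - x(b) : f(x) = a} from a finite sample.
    Each valuation x is a dict mapping outcomes to reals; f maps x to an outcome."""
    delta = {}
    for a, b in product(outcomes, repeat=2):
        if a != b:
            vals = [x[a] - x[b] for x in sample if f(x) == a]
            if vals:
                delta[(a, b)] = min(vals)
    return delta

def nonnegative_two_cycles(delta, outcomes):
    """W-MON for a single player function: delta_ab + delta_ba >= 0 (missing edges skipped)."""
    return all(delta[(a, b)] + delta[(b, a)] >= -1e-12
               for a in outcomes for b in outcomes
               if a != b and (a, b) in delta and (b, a) in delta)

def has_negative_cycle(delta, outcomes):
    """Bellman-Ford style relaxation on G_f; still relaxing after |A| passes means a negative cycle."""
    dist = {a: 0.0 for a in outcomes}
    for _ in range(len(outcomes)):
        changed = False
        for (a, b), w in delta.items():
            if dist[a] + w < dist[b] - 1e-12:
                dist[b] = dist[a] + w
                changed = True
        if not changed:
            return False
    return True
```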
1.3 Overview of the Proof of Theorem 4
Let D ⊆ R^A be convex and let f : D → A be a single player function such that Gf has no negative two-cycles. We want to conclude that Gf has no negative cycles. For two vertices a, b, let δ*ab denote the minimum weight of any path from a to b. Clearly δ*ab ≤ δab. Our proof shows that the δ*-weight of every cycle is exactly 0, from which theorem 4 follows. There seems to be no direct way to compute δ* and so we proceed indirectly. Based on geometric considerations, we identify a subset of paths in Gf called admissible paths and a subset of admissible paths called straight paths. We prove that for any two outcomes a, b, there is a straight path from a to b (lemma 8 and corollary 10), and all straight paths from a to b have the same weight, which we denote ρab (theorem 12). We show that ρab ≤ δab (lemma 14) and that the ρ-weight of every cycle is 0. The key step to this proof is showing that the ρ-weight of every directed triangle is 0 (lemma 17). It turns out that ρ is equal to δ* (corollary 20), although this equality is not needed in the proof of theorem 4. To expand on the above summary, we give the definitions of an admissible path and a straight path. These are somewhat technical and rely on the geometry of f. We first observe that, without loss of generality, we can assume that D is (topologically) closed (section 2). In section 3, for each a ∈ A, we enlarge the set f^{-1}(a) to a closed convex set Da ⊆ D in such a way that for a, b ∈ A with a ≠ b, Da and Db have disjoint interiors. We define an admissible path to be a sequence of outcomes (a_1, ..., a_k) such that each of the sets I_j = D_{a_j} ∩ D_{a_{j+1}} is nonempty (section 4). An admissible path is straight if there is a straight line that meets one point from each of the sets I_1, ..., I_{k-1} in order (section 5). Finally, we mention how the hypotheses of convex domain and finite range are used in the proof. Both hypotheses are needed to show: (1) the existence of a straight path from a to b for all a, b (lemma 8); (2) that the ρ-weight of a directed triangle is 0 (lemma 17). The convex domain hypothesis is also needed for the convexity of the sets Da (section 3). The finite range hypothesis is also needed to reduce theorem 4 to the case that D is closed (section 2) and to prove that every straight path from a to b has the same δ-weight (theorem 12).
2. REDUCTION TO CLOSED DOMAIN
We first reduce the theorem to the case that D is closed. Write D^C for the closure of D. Since A is finite, D^C = ∪_{a∈A} (f^{-1}(a))^C. Thus for each v ∈ D^C − D, there is an a = a(v) ∈ A such that v ∈ (f^{-1}(a))^C. Extend f to the function g on D^C by defining g(v) = a(v) for v ∈ D^C − D and g(v) = f(v) for v ∈ D. It is easy to check that δab(g) = δab(f) for all a, b ∈ A and therefore it suffices to show that the nonnegative two-cycle property for g implies the nonnegative cycle property for g. Henceforth we assume D is convex and closed.
3. A DISSECTION OF THE DOMAIN
In this section, we construct a family of closed convex sets {Da : a ∈ A} with disjoint interiors whose union is D and satisfying f^{-1}(a) ⊆ Da for each a ∈ A. Let Ra = {v : ∀b ∈ A, v(a) − v(b) ≥ δab}. Ra is a closed polyhedron containing f^{-1}(a). The next proposition implies that any two of these polyhedra intersect only on their boundary.
Proposition 5. Let a, b ∈ A. If v ∈ Ra ∩ Rb then v(a) − v(b) = δab = −δba.
[Figure 1: A 2-dimensional domain with 5 outcomes.]
Proof. v ∈ Ra implies v(a) − v(b) ≥ δab and v ∈ Rb implies v(b) − v(a) ≥ δba which, by the nonnegative two-cycle property, implies v(a) − v(b) ≤ δab. Thus v(a) − v(b) = δab and by symmetry v(b) − v(a) = δba.
Finally, we restrict the collection of sets {Ra : a ∈ A} to the domain D by defining Da = Ra ∩ D for each a ∈ A. Clearly, Da is closed and convex, and contains f^{-1}(a). Therefore ∪_{a∈A} Da = D. Also, by proposition 5, any point v in Da ∩ Db satisfies v(a) − v(b) = δab = −δba.
4. PATHS AND D-SEQUENCES
A path of size k is a sequence →a = (a_1, ..., a_k) with each a_i ∈ A (possibly with repetition). We call →a an (a_1, a_k)-path. For a path →a, we write |→a| for the size of →a.
→a is simple if the a_i's are distinct. For b, c ∈ A we write Pbc for the set of (b, c)-paths and SPbc for the set of simple (b, c)-paths. The δ-weight of a path →a of size k is defined by δ(→a) = Σ_{i=1}^{k-1} δ_{a_i a_{i+1}}. A D-sequence of order k is a sequence →u = (u_0, ..., u_k) with each u_i ∈ D (possibly with repetition). We call →u a (u_0, u_k)-sequence. For a D-sequence →u, we write ord(→u) for the order of →u. For v, w ∈ D we write S^{vw} for the set of (v, w)-sequences. A compatible pair is a pair (→a, →u) where →a is a path and →u is a D-sequence satisfying ord(→u) = |→a| and, for each i ∈ [k] (where k = |→a|), both u_{i-1} and u_i belong to D_{a_i}. We write C(→a) for the set of D-sequences →u that are compatible with →a. We say that →a is admissible if C(→a) is nonempty. For →u ∈ C(→a) we define ∆_{→a}(→u) = Σ_{i=1}^{|→a|-1} (u_i(a_i) − u_i(a_{i+1})). For v, w ∈ D and b, c ∈ A, we define C^{vw}_{bc} to be the set of compatible pairs (→a, →u) such that →a ∈ Pbc and →u ∈ S^{vw}. To illustrate these definitions, figure 1 gives the dissection of a domain, a 2-dimensional plane, into five regions Da, Db, Dc, Dd, De. D-sequence (v, w, x, y, z) is compatible with both path (a, b, c, e) and path (a, b, d, e); D-sequence (v, w, u, y, z) is compatible with a unique path (a, b, d, e). D-sequence (x, w, p, y, z) is compatible with a unique path (b, a, d, e). Hence (a, b, c, e), (a, b, d, e) and (b, a, d, e) are admissible paths. However, path (a, c, d) or path (b, e) is not admissible.
Proposition 6. For any compatible pair (→a, →u), ∆_{→a}(→u) = δ(→a).
Proof. Let k = ord(→u) = |→a|. By the definition of a compatible pair, u_i ∈ D_{a_i} ∩ D_{a_{i+1}} for i ∈ [k − 1], so u_i(a_i) − u_i(a_{i+1}) = δ_{a_i a_{i+1}} from proposition 5. Therefore, ∆_{→a}(→u) = Σ_{i=1}^{k-1} (u_i(a_i) − u_i(a_{i+1})) = Σ_{i=1}^{k-1} δ_{a_i a_{i+1}} = δ(→a).
Lemma 7. Let b, c ∈ A and let →a, →a' ∈ Pbc. If C(→a) ∩ C(→a') ≠ ∅ then δ(→a) = δ(→a').
Proof. Let →u be a D-sequence in C(→a) ∩ C(→a'). By proposition 6, δ(→a) = ∆_{→a}(→u) and δ(→a') = ∆_{→a'}(→u), so it suffices to show ∆_{→a}(→u) = ∆_{→a'}(→u). Let k = ord(→u) = |→a| = |→a'|. Since ∆_{→a}(→u) = Σ_{i=1}^{k-1} (u_i(a_i) − u_i(a_{i+1})) = u_1(a_1) + Σ_{i=2}^{k-1} (u_i(a_i) − u_{i-1}(a_i)) − u_{k-1}(a_k) = u_1(b) + Σ_{i=2}^{k-1} (u_i(a_i) − u_{i-1}(a_i)) − u_{k-1}(c), we get ∆_{→a}(→u) − ∆_{→a'}(→u) = Σ_{i=2}^{k-1} ((u_i(a_i) − u_{i-1}(a_i)) − (u_i(a'_i) − u_{i-1}(a'_i))) = Σ_{i=2}^{k-1} ((u_i(a_i) − u_i(a'_i)) − (u_{i-1}(a_i) − u_{i-1}(a'_i))). Noticing that both u_{i-1} and u_i belong to D_{a_i} ∩ D_{a'_i}, we have by proposition 5 u_{i-1}(a_i) − u_{i-1}(a'_i) = δ_{a_i a'_i} = u_i(a_i) − u_i(a'_i). Hence ∆_{→a}(→u) − ∆_{→a'}(→u) = 0.
5. LINEAR D-SEQUENCES AND STRAIGHT PATHS
For v, w ∈ D we write vw for the (closed) line segment joining v and w. A D-sequence →u of order k is linear provided that there is a sequence of real numbers 0 = λ_0 ≤ λ_1 ≤ ... ≤ λ_k = 1 such that u_i = (1 − λ_i)u_0 + λ_i u_k. In particular, each u_i belongs to u_0 u_k. For v, w ∈ D we write L^{vw} for the set of linear (v, w)-sequences. For b, c ∈ A and v, w ∈ D we write LC^{vw}_{bc} for the set of compatible pairs (→a, →u) such that →a ∈ Pbc and →u ∈ L^{vw}. For a path →a, we write L(→a) for the set of linear sequences compatible with →a. We say that →a is straight if L(→a) ≠ ∅. For example, in figure 1, D-sequence (v, w, x, y, z) is linear while (v, w, u, y, z), (x, w, p, y, z), and (x, v, w, y, z) are not. Hence path (a, b, c, e) and (a, b, d, e) are both straight. However, path (b, a, d, e) is not straight.
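As a concrete reading of the definitions of δ-weight and compatibility, here is a small Python sketch of ours; delta is a dictionary of the edge weights δab and regions[a] is a membership test for Da, both hypothetical stand-ins rather than anything defined in the paper.

```python
def path_weight(path, delta):
    """delta-weight of a path (a_1, ..., a_k): the sum of delta_{a_i a_{i+1}}."""
    return sum(delta[(path[i], path[i + 1])] for i in range(len(path) - 1))

def is_compatible(path, seq, regions):
    """A pair (path, seq) is compatible when ord(seq) == |path| and both
    u_{i-1} and u_i lie in D_{a_i} for every i; regions[a] tests membership in D_a."""
    if len(seq) != len(path) + 1:
        return False
    return all(regions[a](seq[i]) and regions[a](seq[i + 1])
               for i, a in enumerate(path))
```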
Lemma 8. Let b, c ∈ A and v ∈ Db, w ∈ Dc. There is a simple path →a and D-sequence →u such that (→a, →u) ∈ LC^{vw}_{bc}. Furthermore, for any such path →a, δ(→a) ≤ v(b) − v(c).
Proof. By the convexity of D, any sequence of points on vw is a D-sequence. If b = c, the singleton path →a = (b) and D-sequence →u = (v, w) are obviously compatible, and δ(→a) = 0 = v(b) − v(c). So assume b ≠ c. If Db ∩ Dc ∩ vw ≠ ∅, we pick an arbitrary x from this set and let →a = (b, c) ∈ SPbc, →u = (v, x, w) ∈ L^{vw}. Again it is easy to check the compatibility of (→a, →u). Since v ∈ Db, v(b) − v(c) ≥ δbc = δ(→a). For the remaining case, b ≠ c and Db ∩ Dc ∩ vw = ∅, notice that v ≠ w, since otherwise v = w ∈ Db ∩ Dc ∩ vw. So we can define λ_x for every point x on vw to be the unique number in [0, 1] such that x = (1 − λ_x)v + λ_x w. For convenience, we write x ≤ y for λ_x ≤ λ_y. Let Ia = Da ∩ vw for each a ∈ A. Since D = ∪_{a∈A} Da, we have vw = ∪_{a∈A} Ia. Moreover, by the convexity of Da and vw, Ia is a (possibly trivial) closed interval. We begin by considering the case that Ib and Ic are each a single point, that is, Ib = {v} and Ic = {w}. Let S be a minimal subset of A satisfying ∪_{s∈S} Is = vw. For each s ∈ S, Is is maximal, i.e., not contained in any other It for t ∈ S. In particular, the intervals {Is : s ∈ S} have all left endpoints distinct and all right endpoints distinct, and the order of the left endpoints is the same as that of the right endpoints. Let k = |S| + 2 and index S as a_2, ..., a_{k-1} in the order defined by the right endpoints. Denote the interval I_{a_i} by [l_i, r_i]. Thus l_2 < l_3 < ... < l_{k-1}, r_2 < r_3 < ... < r_{k-1}, and the fact that these intervals cover vw implies l_2 = v, r_{k-1} = w and, for all 2 ≤ i ≤ k − 2, l_{i+1} ≤ r_i, which further implies l_i < r_i. Now we define the path →a = (a_1, a_2, ..., a_{k-1}, a_k) with a_1 = b, a_k = c and a_2, a_3, ..., a_{k-1} as above. Define the linear D-sequence →u = (u_0, u_1, ..., u_k) by u_0 = u_1 = v, u_k = w and, for 2 ≤ i ≤ k − 1, u_i = r_i. It follows immediately that (→a, →u) ∈ LC^{vw}_{bc}. Neither b nor c is in S since lb = rb and lc = rc. Thus →a is simple. Finally, to show δ(→a) ≤ v(b) − v(c), we note v(b) − v(c) = v(a_1) − v(a_k) = Σ_{i=1}^{k-1} (v(a_i) − v(a_{i+1})) and δ(→a) = ∆_{→a}(→u) = Σ_{i=1}^{k-1} (u_i(a_i) − u_i(a_{i+1})) = v(a_1) − v(a_2) + Σ_{i=2}^{k-1} (r_i(a_i) − r_i(a_{i+1})). For two outcomes d, e ∈ A, let us define f_{de}(z) = z(d) − z(e) for all z ∈ D. It suffices to show f_{a_i a_{i+1}}(r_i) ≤ f_{a_i a_{i+1}}(v) for 2 ≤ i ≤ k − 1.
Fact 9. For d, e ∈ A, f_{de}(z) is a linear function of z. Furthermore, if x ∈ Dd and y ∈ De with x ≠ y, then f_{de}(x) = x(d) − x(e) ≥ δde ≥ −δed ≥ −(y(e) − y(d)) = f_{de}(y). Therefore f_{de}(z) is monotonically nonincreasing along the line through x and y as z moves in the direction from x to y.
Applying this fact with d = a_i, e = a_{i+1}, x = l_i and y = r_i gives the desired conclusion. This completes the proof for the case that Ib = {v} and Ic = {w}. For general Ib, Ic, we have rb < lc, since otherwise Db ∩ Dc ∩ vw = Ib ∩ Ic ≠ ∅. Let v' = rb and w' = lc. Clearly we can apply the above conclusion to v' ∈ Db, w' ∈ Dc and get a compatible pair (→a, →u') ∈ LC^{v'w'}_{bc} with →a simple and δ(→a) ≤ v'(b) − v'(c). Define the linear D-sequence →u by u_0 = v, u_k = w and u_i = u'_i for i ∈ [k − 1]. (→a, →u) ∈ LC^{vw}_{bc} is evident. Moreover, applying the above fact with d = b, e = c, x = v' and y = w', we get v(b) − v(c) ≥ v'(b) − v'(c) ≥ δ(→a).
Corollary 10. For any b, c ∈ A there is a straight (b, c)-path.
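The proof of lemma 8 is, in essence, "walk along the segment vw and record the regions it passes through". The following heuristic Python sketch (ours; classify is a hypothetical oracle returning some outcome a with x ∈ Da, and valuations are represented as numpy vectors) illustrates that idea, although unlike the proof it does not guarantee that the resulting path is simple.

```python
import numpy as np

def path_along_segment(v, w, classify, steps=1000):
    """Walk from v to w, recording the region label of each sample point and
    collapsing consecutive repeats; heuristically this traces an admissible
    path along vw (fine sampling may still miss very thin regions)."""
    path, prev = [], None
    for t in np.linspace(0.0, 1.0, steps):
        a = classify((1 - t) * v + t * w)
        if a != prev:
            path.append(a)
            prev = a
    return path
```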
The main result of this section (theorem 12) says that for any b, c ∈ A, every straight (b, c)-path has the same δ-weight. To prove this, we first fix v ∈ Db and w ∈ Dc and show (lemma 11) that every straight (b, c)-path compatible with some linear (v, w)-sequence has the same δ-weight ρbc(v, w). We then show in theorem 12 that ρbc(v, w) is the same for all choices of v ∈ Db and w ∈ Dc.
Lemma 11. For b, c ∈ A, there is a function ρbc : Db × Dc → R satisfying that for any (→a, →u) ∈ LC^{vw}_{bc}, δ(→a) = ρbc(v, w).
Proof. Let (→a', →u'), (→a'', →u'') ∈ LC^{vw}_{bc}. It suffices to show δ(→a') = δ(→a''). To do this we construct a linear (v, w)-sequence →u and paths →a*, →a** ∈ Pbc, both compatible with →u, satisfying δ(→a*) = δ(→a') and δ(→a**) = δ(→a''). Lemma 7 implies δ(→a*) = δ(→a**), which will complete the proof. Let |→a'| = ord(→u') = k and |→a''| = ord(→u'') = l. We select →u to be any linear (v, w)-sequence (u_0, u_1, ..., u_t) such that →u' and →u'' are both subsequences of →u, i.e., there are indices 0 = i_0 < i_1 < ... < i_k = t and 0 = j_0 < j_1 < ... < j_l = t such that →u' = (u_{i_0}, u_{i_1}, ..., u_{i_k}) and →u'' = (u_{j_0}, u_{j_1}, ..., u_{j_l}). We now construct a (b, c)-path →a* compatible with →u such that δ(→a*) = δ(→a'). (An analogous construction gives →a** compatible with →u such that δ(→a**) = δ(→a'').) This will complete the proof. →a* is defined as follows: for 1 ≤ j ≤ t, a*_j = a'_r where r is the unique index satisfying i_{r-1} < j ≤ i_r. Since both u_{i_{r-1}} = u'_{r-1} and u_{i_r} = u'_r belong to D_{a'_r}, u_j ∈ D_{a'_r} for i_{r-1} ≤ j ≤ i_r by the convexity of D_{a'_r}. The compatibility of (→a*, →u) follows immediately. Clearly, a*_1 = a'_1 = b and a*_t = a'_k = c, so →a* ∈ Pbc. Furthermore, as δ_{a*_j a*_{j+1}} = δ_{a'_r a'_r} = 0 for each r ∈ [k] and i_{r-1} < j < i_r, δ(→a*) = Σ_{r=1}^{k-1} δ_{a*_{i_r} a*_{i_r + 1}} = Σ_{r=1}^{k-1} δ_{a'_r a'_{r+1}} = δ(→a').
We are now ready for the main theorem of the section:
Theorem 12. ρbc is a constant map on Db × Dc. Thus for any b, c ∈ A, every straight (b, c)-path has the same δ-weight.
Proof. For a path →a, we say (v, w) is compatible with →a if there is a linear (v, w)-sequence compatible with →a. We write CP(→a) for the set of pairs (v, w) compatible with →a. ρbc is constant on CP(→a) because for each (v, w) ∈ CP(→a), ρbc(v, w) = δ(→a). By lemma 8, we also have ∪_{→a ∈ SPbc} CP(→a) = Db × Dc. Since A is finite, SPbc, the set of simple paths from b to c, is finite as well. Next we prove that for any path →a, CP(→a) is closed. Let ((v^n, w^n) : n ∈ N) be a convergent sequence in CP(→a) and let (v, w) be the limit. We want to show that (v, w) ∈ CP(→a). For each n ∈ N, since (v^n, w^n) ∈ CP(→a), there is a linear (v^n, w^n)-sequence →u^n compatible with →a, i.e., there are 0 = λ^n_0 ≤ λ^n_1 ≤ ... ≤ λ^n_k = 1 (k = |→a|) such that u^n_j = (1 − λ^n_j)v^n + λ^n_j w^n (j = 0, 1, ..., k). Since for each n, λ^n = (λ^n_0, λ^n_1, ..., λ^n_k) belongs to the closed bounded set [0, 1]^{k+1}, we can choose an infinite subset I ⊆ N such that the sequence (λ^n : n ∈ I) converges. Let λ = (λ_0, λ_1, ..., λ_k) be the limit. Clearly 0 = λ_0 ≤ λ_1 ≤ ... ≤ λ_k = 1. Define the linear (v, w)-sequence →u by u_j = (1 − λ_j)v + λ_j w (j = 0, 1, ..., k). Then for each j ∈ {0, ..., k}, u_j is the limit of the sequence (u^n_j : n ∈ I). For j > 0, each u^n_j belongs to the closed set D_{a_j}, so u_j ∈ D_{a_j}. Similarly, for j < k, each u^n_j belongs to the closed set D_{a_{j+1}}, so u_j ∈ D_{a_{j+1}}. Hence (→a, →u) is compatible, implying (v, w) ∈ CP(→a). Now we have Db × Dc covered by finitely many closed subsets, on each of which ρbc is a constant.
Suppose for contradiction that there are (v, w), (v', w') ∈ Db × Dc such that ρbc(v, w) ≠ ρbc(v', w'). L = {((1 − λ)v + λv', (1 − λ)w + λw') : λ ∈ [0, 1]} is a line segment in Db × Dc by the convexity of Db and Dc. Let L1 = {(x, y) ∈ L : ρbc(x, y) = ρbc(v, w)} and L2 = L − L1. Clearly (v, w) ∈ L1 and (v', w') ∈ L2. Let P = {→a ∈ SPbc : δ(→a) = ρbc(v, w)}. Then L1 = (∪_{→a ∈ P} CP(→a)) ∩ L and L2 = (∪_{→a ∈ SPbc − P} CP(→a)) ∩ L are closed by the finiteness of SPbc. This is a contradiction, since it is well known (and easy to prove) that a line segment cannot be expressed as the disjoint union of two nonempty closed sets.
Summarizing corollary 10, lemma 11 and theorem 12, we have
Corollary 13. For any b, c ∈ A, there is a real number ρbc with the property that (1) there is at least one straight (b, c)-path of δ-weight ρbc, and (2) every straight (b, c)-path has δ-weight ρbc.
6. PROOF OF THEOREM 4
Lemma 14. ρbc ≤ δbc for all b, c ∈ A.
Proof. For contradiction, suppose ρbc − δbc = ε > 0. By the definition of δbc, there exists v ∈ f^{-1}(b) ⊆ Db with v(b) − v(c) < δbc + ε = ρbc. Pick an arbitrary w ∈ Dc. By lemma 8, there is a compatible pair (→a, →u) ∈ LC^{vw}_{bc} with δ(→a) ≤ v(b) − v(c). Since →a is a straight (b, c)-path, ρbc = δ(→a) ≤ v(b) − v(c), leading to a contradiction.
Define another edge-weighted complete directed graph G'_f on vertex set A where the weight of arc (a, b) is ρab. Immediately from lemma 14, the weight of every directed cycle in Gf is bounded below by its weight in G'_f. To prove theorem 4, it suffices to show the zero cycle property of G'_f, i.e., that every directed cycle has weight zero. We begin by considering two-cycles.
Lemma 15. ρbc + ρcb = 0 for all b, c ∈ A.
Proof. Let →a be a straight (b, c)-path compatible with a linear sequence →u. Let →a' be the reverse of →a and →u' the reverse of →u. Obviously, (→a', →u') is compatible as well and thus →a' is a straight (c, b)-path. Therefore, writing k = |→a|, ρbc + ρcb = δ(→a) + δ(→a') = Σ_{i=1}^{k-1} δ_{a_i a_{i+1}} + Σ_{i=1}^{k-1} δ_{a_{i+1} a_i} = Σ_{i=1}^{k-1} (δ_{a_i a_{i+1}} + δ_{a_{i+1} a_i}) = 0, where the final equality uses proposition 5.
Next, for three-cycles, we first consider those compatible with linear triples.
Lemma 16. If there are collinear points u ∈ Da, v ∈ Db, w ∈ Dc (a, b, c ∈ A), then ρab + ρbc + ρca = 0.
Proof. First, we prove the case where v is between u and w. From lemma 8, there are compatible pairs (→a', →u') ∈ LC^{uv}_{ab} and (→a'', →u'') ∈ LC^{vw}_{bc}. Let |→a'| = ord(→u') = k and |→a''| = ord(→u'') = l. We paste →a' and →a'' together as →a = (a = a'_1, a'_2, ..., a'_{k-1}, a'_k, a''_1, ..., a''_l = c), and →u' and →u'' together as →u = (u = u'_0, u'_1, ..., u'_k = v = u''_0, u''_1, ..., u''_l = w). Clearly (→a, →u) ∈ LC^{uw}_{ac} and δ(→a) = Σ_{i=1}^{k-1} δ_{a'_i a'_{i+1}} + δ_{a'_k a''_1} + Σ_{i=1}^{l-1} δ_{a''_i a''_{i+1}} = δ(→a') + δbb + δ(→a'') = δ(→a') + δ(→a''). Therefore, ρac = δ(→a) = δ(→a') + δ(→a'') = ρab + ρbc. Moreover, ρac = −ρca by lemma 15, so we get ρab + ρbc + ρca = 0. Now suppose w is between u and v. By the above argument, we have ρac + ρcb + ρba = 0 and by lemma 15, ρab + ρbc + ρca = −ρba − ρcb − ρac = 0. The case that u is between v and w is similar.
Now we are ready for the zero three-cycle property:
Lemma 17. ρab + ρbc + ρca = 0 for all a, b, c ∈ A.
Proof. Let S = {(a, b, c) : ρab + ρbc + ρca ≠ 0} and, for contradiction, suppose S ≠ ∅. S is finite. For each a ∈ A, choose va ∈ Da arbitrarily and let T be the convex hull of {va : a ∈ A}. For each (a, b, c) ∈ S, let Rabc = (Da × Db × Dc) ∩ T^3. Clearly, each Rabc is nonempty and compact. Moreover, by lemma 16, no (u, v, w) ∈ Rabc is collinear.
Define the perimeter function Φ : D^3 → R by Φ(u, v, w) = |v − u| + |w − v| + |u − w|. For (a, b, c) ∈ S, the restriction of Φ to the compact set Rabc attains a minimum m(a, b, c) at some point (u, v, w) ∈ Rabc by the continuity of Φ, i.e., there exists a triangle ∆uvw of minimum perimeter within T with u ∈ Da, v ∈ Db, w ∈ Dc. Choose (a*, b*, c*) ∈ S so that m(a*, b*, c*) is minimum and let (u*, v*, w*) ∈ R_{a*b*c*} be a triple achieving it. Pick an arbitrary point p in the interior of ∆u*v*w*. By the convexity of the domain D, there is d ∈ A such that p ∈ Dd. Consider the triangles ∆u*pw*, ∆w*pv* and ∆v*pu*. Since each of them has perimeter less than that of ∆u*v*w* and all three triangles are contained in T, by the minimality of ∆u*v*w*, (a*, d, c*), (c*, d, b*), (b*, d, a*) ∉ S. Thus ρ_{a*d} + ρ_{dc*} + ρ_{c*a*} = 0, ρ_{c*d} + ρ_{db*} + ρ_{b*c*} = 0, ρ_{b*d} + ρ_{da*} + ρ_{a*b*} = 0. Summing up the three equalities, (ρ_{a*d} + ρ_{da*} + ρ_{c*d} + ρ_{dc*} + ρ_{b*d} + ρ_{db*}) + (ρ_{c*a*} + ρ_{b*c*} + ρ_{a*b*}) = 0, and the first group vanishes by lemma 15, which yields the contradiction ρ_{a*b*} + ρ_{b*c*} + ρ_{c*a*} = 0.
With the zero two-cycle and three-cycle properties, the zero cycle property of G'_f is immediate. As noted earlier, this completes the proof of theorem 4.
Theorem 18. Every directed cycle of G'_f has weight zero.
Proof. Clearly, the zero two-cycle and three-cycle properties imply the triangle equality ρab + ρbc = ρac for all a, b, c ∈ A. For a directed cycle C = a_1 a_2 ... a_k a_1, by inductively applying the triangle equality, we have Σ_{i=1}^{k-1} ρ_{a_i a_{i+1}} = ρ_{a_1 a_k}. Therefore, the weight of C is Σ_{i=1}^{k-1} ρ_{a_i a_{i+1}} + ρ_{a_k a_1} = ρ_{a_1 a_k} + ρ_{a_k a_1} = 0.
As final remarks, we note that our result implies the following strengthenings of theorem 12:
Corollary 19. For any b, c ∈ A, every admissible (b, c)-path has the same δ-weight ρbc.
Proof. First notice that for any b, c ∈ A, if Db ∩ Dc ≠ ∅, then δbc = ρbc. To see this, pick v ∈ Db ∩ Dc arbitrarily. Obviously, the path →a = (b, c) is compatible with the linear sequence →u = (v, v, v) and is thus a straight (b, c)-path. Hence ρbc = δ(→a) = δbc. Now for any b, c ∈ A and any (b, c)-path →a with C(→a) ≠ ∅, let →u ∈ C(→a). Since u_i ∈ D_{a_i} ∩ D_{a_{i+1}} for i ∈ [|→a| − 1], δ(→a) = Σ_{i=1}^{|→a|-1} δ_{a_i a_{i+1}} = Σ_{i=1}^{|→a|-1} ρ_{a_i a_{i+1}}, which, by theorem 18, equals −ρ_{a_{|→a|} a_1} = ρ_{a_1 a_{|→a|}} = ρbc.
Corollary 20. For any b, c ∈ A, ρbc is equal to δ*bc, the minimum δ-weight over all (b, c)-paths.
Proof. Clearly ρbc ≥ δ*bc by corollary 13. On the other hand, for every (b, c)-path →a = (b = a_1, a_2, ..., a_k = c), by lemma 14, δ(→a) = Σ_{i=1}^{k-1} δ_{a_i a_{i+1}} ≥ Σ_{i=1}^{k-1} ρ_{a_i a_{i+1}}, which, by theorem 18, equals −ρ_{a_k a_1} = ρ_{a_1 a_k} = ρbc. Hence ρbc ≤ δ*bc, which completes the proof.
7. COUNTEREXAMPLES TO STRONGER FORMS OF THEOREM 4
Theorem 4 applies to social choice functions with convex domain and finite range. We now show that neither of these hypotheses can be omitted. Our examples are single player functions. The first example illustrates that convexity cannot be omitted. We present an untruthful single player social choice function with three outcomes a, b, c satisfying W-MON on a path-connected but non-convex domain. The domain is the boundary of a triangle whose vertices are x = (0, 1, −1), y = (−1, 0, 1) and z = (1, −1, 0). The point x and the open line segment zx are assigned outcome a, y and the open line segment xy are assigned outcome b, and z and the open line segment yz are assigned outcome c. Clearly, δab = −δba = δbc = −δcb = δca = −δac = −1, so W-MON (the nonnegative two-cycle property) holds. Since there is a negative cycle δab + δbc + δca = −3, by lemma 3, this is not a truthful choice function.
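A quick numerical check of this example (our own illustration, not part of the paper): sampling the three pieces of the boundary confirms that every two-cycle of Gf has weight 0 while the cycle a → b → c → a has weight −3.

```python
import numpy as np

# Outcomes are indexed a=0, b=1, c=2; a valuation v is the vector (v(a), v(b), v(c)).
x, y, z = np.array([0., 1., -1.]), np.array([-1., 0., 1.]), np.array([1., -1., 0.])

def piece(p, q, n=2001):
    """The vertex p together with the open segment pq (q itself excluded)."""
    return [(1 - t) * p + t * q for t in np.linspace(0.0, 1.0, n, endpoint=False)]

region = {0: piece(x, z), 1: piece(y, x), 2: piece(z, y)}  # f^{-1}(a), f^{-1}(b), f^{-1}(c)

delta = {(i, j): min(v[i] - v[j] for v in region[i])
         for i in range(3) for j in range(3) if i != j}

assert all(delta[(i, j)] + delta[(j, i)] >= -1e-9          # W-MON: two-cycles nonnegative
           for i in range(3) for j in range(3) if i != j)
assert delta[(0, 1)] + delta[(1, 2)] + delta[(2, 0)] < 0    # the three-cycle has weight -3
```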
We now show that the hypothesis of finite range cannot be omitted. We construct a family of single player social choice functions each having a convex domain and an infinite number of outcomes, and satisfying weak monotonicity but not truthfulness. Our examples will be specified by a positive integer n and an n × n matrix M satisfying the following properties: (1) M is non-singular. (2) M is positive semidefinite. (3) There are distinct i_1, i_2, ..., i_k ∈ [n] satisfying Σ_{j=1}^{k-1} (M(i_j, i_j) − M(i_j, i_{j+1})) + (M(i_k, i_k) − M(i_k, i_1)) < 0.
Here is an example matrix with n = 3 and (i_1, i_2, i_3) = (1, 2, 3):
[  0   1  −1 ]
[ −1   0   1 ]
[  1  −1   0 ]
Let e_1, e_2, ..., e_n denote the standard basis of R^n. Let S_n denote the convex hull of {e_1, e_2, ..., e_n}, which is the set of vectors in R^n with nonnegative coordinates that sum to 1. The range of our social choice function will be the set S_n and the domain D will be indexed by S_n, that is, D = {y_λ : λ ∈ S_n}, where y_λ is defined below. The function f maps y_λ to λ. Next we specify y_λ. By definition, D must be a set of functions from S_n to R. For λ ∈ S_n, the domain element y_λ : S_n → R is defined by y_λ(α) = λ^T Mα. The nonsingularity of M guarantees that y_λ ≠ y_µ for λ ≠ µ ∈ S_n. It is easy to see that D is a convex subset of the set of all functions from S_n to R. The outcome graph Gf is an infinite graph whose vertex set is the outcome set A = S_n. For outcomes λ, µ ∈ A, the edge weight δλµ is equal to δλµ = inf{v(λ) − v(µ) : f(v) = λ} = y_λ(λ) − y_λ(µ) = λ^T Mλ − λ^T Mµ = λ^T M(λ − µ). We claim that Gf satisfies the nonnegative two-cycle property (W-MON) but has a negative cycle (and hence is not truthful). For outcomes λ, µ ∈ A, δλµ + δµλ = λ^T M(λ − µ) + µ^T M(µ − λ) = (λ − µ)^T M(λ − µ), which is nonnegative since M is positive semidefinite. Hence the nonnegative two-cycle property holds. Next we show that Gf has a negative cycle. Let i_1, i_2, ..., i_k be a sequence of indices satisfying property (3) of M. We claim e_{i_1} e_{i_2} ... e_{i_k} e_{i_1} is a negative cycle. Since δ_{e_i e_j} = e_i^T M(e_i − e_j) = M(i, i) − M(i, j) for any i, j ∈ [n], the weight of the cycle is Σ_{j=1}^{k-1} δ_{e_{i_j} e_{i_{j+1}}} + δ_{e_{i_k} e_{i_1}} = Σ_{j=1}^{k-1} (M(i_j, i_j) − M(i_j, i_{j+1})) + (M(i_k, i_k) − M(i_k, i_1)) < 0, which completes the proof. Finally, we point out that the third property imposed on the matrix M has the following interpretation. Let R(M) = {r_1, r_2, ..., r_n} be the set of row vectors of M and let h_M be the single player social choice function with domain R(M) and range {1, 2, ..., n} mapping r_i to i. Property (3) is equivalent to the condition that the outcome graph G_{h_M} has a negative cycle. By lemma 3, this is equivalent to the condition that h_M is untruthful.
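As a sanity check of the displayed 3 × 3 matrix (again our own illustration), its symmetric part is the zero matrix, so x^T M x = 0 for every x and M is positive semidefinite, while the index cycle (1, 2, 3) witnesses property (3); consequently the two-cycles on e_1, e_2, e_3 are nonnegative but the three-cycle is negative.

```python
import numpy as np

M = np.array([[0., 1., -1.],
              [-1., 0., 1.],
              [1., -1., 0.]])

# Positive semidefiniteness: only the symmetric part matters for x^T M x.
assert np.all(np.linalg.eigvalsh((M + M.T) / 2) >= -1e-9)

# Property (3) with (i_1, i_2, i_3) = (1, 2, 3): the cycle sum equals -3.
idx = [0, 1, 2]
cycle = sum(M[i, i] - M[i, j] for i, j in zip(idx, idx[1:] + idx[:1]))
assert cycle == -3

# Edge weights of G_f restricted to e_1, e_2, e_3: delta_{e_i e_j} = M[i, i] - M[i, j].
assert all(M[i, i] - M[i, j] + M[j, j] - M[j, i] >= -1e-9   # two-cycles nonnegative
           for i in idx for j in idx if i != j)
```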
Turning to parametric representations, let us say a set D of P × A matrices is an AM-domain if any truthful social choice function with domain D is an affine maximizer. Roberts'' theorem says that the unrestricted domain is an AM-domain. What are the most general conditions under which a set D of real matrices is an AM-domain? Acknowledgments We thank Ron Lavi for helpful discussions and the two anonymous referees for helpful comments. 9. REFERENCES [1] A. Archer and E. Tardos. Truthful mechanisms for one-parameter agents. In IEEE Symposium on Foundations of Computer Science, pages 482-491, 2001. [2] Y. Bartal, R. Gonen, and N. Nisan. Incentive compatible multi unit combinatorial auctions. In TARK ``03: Proceedings of the 9th conference on Theoretical aspects of rationality and knowledge, pages 72-87. ACM Press, 2003. [3] S. Bikhchandani, S. Chatterjee, and A. Sen. Incentive compatibility in multi-unit auctions. Technical report, UCLA Department of Economics, Dec. 2004. [4] A. Goldberg, J. Hartline, A. Karlin, M. Saks and A. Wright. Competitive Auctions, 2004. [5] H. Gui, R. Muller, and R. Vohra. Dominant strategy mechanisms with multidimensional types. Technical Report 047, Maastricht: METEOR, Maastricht Research School of Economics of Technology and Organization, 2004. available at http://ideas.repec.org/p/dgr/umamet/2004047.html. [6] R. Lavi, A. Mu``alem, and N. Nisan. Towards a characterization of truthful combinatorial auctions. In FOCS ``03: Proceedings of the 44th Annual IEEE Symposium on Foundations of Computer Science, page 574. IEEE Computer Society, 2003. [7] D. Lehmann, L. O``Callaghan, and Y. Shoham. Truth revelation in approximately efficient combinatorial auctions. J. ACM, 49(5):577-602, 2002. [8] A. Mas-Colell, M. Whinston, and J. Green. Microeconomic Theory. Oxford University Press, 1995. [9] N. Nisan. Algorithms for selfish agents. Lecture Notes in Computer Science, 1563:1-15, 1999. [10] K. Roberts. The characterization of implementable choice rules. Aggregation and Revelation of Preferences, J-J. Laffont (ed.) , North Holland Publishing Company. [11] J.-C. Rochet. A necessary and sufficient condition for rationalizability in a quasi-linear context. Journal of Mathematical Economics, 16:191-200, 1987. [12] I. Rozenshtrom. Dominant strategy implementation with quasi-linear preferences. Master``s thesis, Dept. of Economics, The Hebrew University, Jerusalem, Israel, 1999. 293
Weak Monotonicity Suffices for Truthfulness on Convex Domains ABSTRACT Weak monotonicity is a simple necessary condition for a social choice function to be implementable by a truthful mechanism. Roberts [10] showed that it is sufficient for all social choice functions whose domain is unrestricted. Lavi, Mu'alem and Nisan [6] proved the sufficiency of weak monotonicity for functions over order-based domains and Gui, Muller and Vohra [5] proved sufficiency for order-based domains with range constraints and for domains defined by other special types of linear inequality constraints. Here we show the more general result, conjectured by Lavi, Mu'alem and Nisan [6], that weak monotonicity is sufficient for functions defined on any convex domain. 1. INTRODUCTION Social choice theory centers around the general problem of selecting a single outcome out of a set A of alternative out * This work was supported in part by NSF grant CCR9988526. † This work was supported in part by NSF grant CCR9988526 and DIMACS. comes based on the individual preferences of a set P of players. A method for aggregating player preferences to select one outcome is called a social choice function. In this paper we assume that the range A is finite and that each player's preference is expressed by a valuation function which assigns to each possible outcome a real number representing the "benefit" the player derives from that outcome. The ensemble of player valuation functions is viewed as a valuation matrix with rows indexed by players and columns by outcomes. A major difficulty connected with social choice functions is that players cannot be required to tell the truth about their preferences. Since each player seeks to maximize his own benefit, he may find it in his interest to misrepresent his valuation function. An important approach for dealing with this problem is to augment a given social choice function with a payment function, which assigns to each player a (positive or negative) payment as a function of all of the individual preferences. By carefully choosing the payment function, one can hope to entice each player to tell the truth. A social choice function augmented with a payment function is called a mechanism 1 and the mechanism is said to implement the social choice function. A mechanism is truthful (or to be strategyproof or to have a dominant strategy) if each player's best strategy, knowing the preferences of the others, is always to declare his own true preferences. A social choice function is truthfully implementable, or truthful if it has a truthful implementation. (The property of truthful implementability is sometimes called dominant strategy incentive compatibility). This framework leads naturally to the question: which social choice functions are truthful? This question is of the following general type: given a class of functions (here, social choice functions) and a property that holds for some of them (here, truthfulness), "characterize" the property. The definition of the property itself provides a characterization, so what more is needed? Here are some useful notions of characterization: • Recognition algorithm. Give an algorithm which, given an appropriate representation of a function in the class, determines whether the function has the property. • Parametric representation. Give an explicit parametrized family of functions and show that each function in the family has the property, and that every function with the property is in the family. 
A third notion applies in the case of hereditary properties of functions. A function g is a subfunction of function f, or f contains g, if g is obtained by restricting the domain of f. A property P of functions is hereditary if it is preserved under taking subfunctions. Truthfulness is easily seen to be hereditary. • Sets of obstructions. For a hereditary property P, a function g that does not have the property is an obstruction to the property in the sense that any function containing g doesn't have the property. An obstruction is minimal if every proper subfunction has the property. A set of obstructions is complete if every function that does not have the property contains one of them as a subfunction. The set of all functions that don't satisfy P is a complete (but trivial and uninteresting) set of obstructions; one seeks a set of small (ideally, minimal) obstructions. We are not aware of any work on recognition algorithms for the property of truthfulness, but there are significant results concerning parametric representations and obstruction characterizations of truthfulness. It turns out that the domain of the function, i.e., the set of allowed valuation matrices, is crucial. For functions with unrestricted domain, i.e., whose domain is the set of all real matrices, there are very good characterizations of truthfulness. For general domains, however, the picture is far from complete. Typically, the domains of social choice functions are specified by a system of constraints. For example, an order constraint requires that one specified entry in some row be larger than another in the same row, a range constraint places an upper or lower bound on an entry, and a zero constraint forces an entry to be 0. These are all examples of linear inequality constraints on the matrix entries. Building on work of Roberts [10], Lavi, Mu'alem and Nisan [6] defined a condition called weak monotonicity (WMON). (Independently, in the context of multi-unit auctions, Bikhchandani, Chatterji and Sen [3] identified the same condition and called it nondecreasing in marginal utilities (NDMU).) The definition of W-MON can be formulated in terms of obstructions: for some specified simple set F of functions each having domains of size 2, a function satisfies W-MON if it contains no function from F. The functions in F are not truthful, and therefore W-MON is a necessary condition for truthfulness. Lavi, Mu'alem and Nisan [6] showed that W-MON is also sufficient for truthfulness for social choice functions whose domain is order-based, i.e., defined by order constraints and zero constraints, and Gui, Muller and Vohra [5] extended this to other domains. The domain constraints considered in both papers are special cases of linear inequality constraints, and it is natural to ask whether W-MON is sufficient for any domain defined by such constraints. Lavi, Mu'alem and Nisan [6] conjectured that W-MON suffices for convex domains. The main result of this paper is an affirmative answer to this conjecture: Using the interpretation of weak monotonicity in terms of obstructions each having domain size 2, this provides a complete set of minimal obstructions for truthfulness within the class of social choice functions with convex domains. The two hypotheses on the social choice function, that the domain is convex and that the range is finite, cannot be omitted as is shown by the examples given in section 7. 1.1 Related Work There is a simple and natural parametrized set of truthful social choice functions called affine maximizers. 
Roberts [10] showed that for functions with unrestricted domain, every truthful function is an affine maximizer, thus providing a parametrized representation for truthful functions with unrestricted domain. There are many known examples of truthful functions over restricted domains that are not affine maximizers (see [1], [2], [4], [6] and [7]). Each of these examples has a special structure and it seems plausible that there might be some mild restrictions on the class of all social choice functions such that all truthful functions obeying these restrictions are affine maximizers. Lavi, Mu'alem and Nisan [6] obtained a result in this direction by showing that for order-based domains, under certain technical assumptions, every truthful social choice function is "almost" an affine maximizer. There are a number of results about truthfulness that can be viewed as providing obstruction characterizations, although the notion of obstruction is not explicitly discussed. For a player i, a set of valuation matrices is said to be i-local if all of the matrices in the set are identical except for row i. Call a social choice function i-local if its domain is ilocal and call it local if it is i-local for some i. The following easily proved fact is used extensively in the literature: This implies that the set of all local non-truthful functions comprises a complete set of obstructions for truthfulness. This set is much smaller than the set of all non-truthful functions, but is still far from a minimal set of obstructions. Rochet [11], Rozenshtrom [12] and Gui, Muller and Vohra [5] identified a necessary and sufficient condition for truthfulness (see lemma 3 below) called the nonnegative cycle property. This condition can be viewed as providing a minimal complete set of non-truthful functions. As is required by proposition 2, each function in the set is local. Furthermore it is one-to-one. In particular its domain has size at most the number of possible outcomes | A |. As this complete set of obstructions consists of minimal non-truthful functions, this provides the optimal obstruction characterization of non-truthful functions within the class of all social choice functions. But by restricting attention to interesting subclasses of social choice functions, one may hope to get simpler sets of obstructions for truthfulness within that class. The condition of weak monotonicity mentioned earlier can be defined by a set of obstructions, each of which is a local function of domain size exactly 2. Thus the results of Lavi, Mu'alem and Nisan [6], and of Gui, Muller and Vohra [5] give a very simple set of obstructions for truthfulness within certain subclasses of social choice functions. Theorem 1 extends these results to a much larger subclass of functions. 1.2 Weak Monotonicity and the Nonnegative Cycle Property By proposition 2, a function is truthful if and only if each of its local subfunctions is truthful. Therefore, to get a set of obstructions for truthfulness, it suffices to obtain such a set for local functions. The domain of an i-local function consists of matrices that are fixed on all rows but row i. Fix such a function f and let D ⊆ RA be the set of allowed choices for row i. Since f depends only on row i and row i is chosen from D, we can view f as a function from D to A. Therefore, f is a social choice function having one player; we refer to such a function as a single player function. 
Associated to any single player function f with domain D we define an edge-weighted directed graph Hf whose vertex set is the image of f. For convenience, we assume that f is surjective and so this image is A. For each a, b ∈ A, x ∈ f -1 (a) there is an edge ex (a, b) from a to b with weight x (a) − x (b). The weight of a set of edges is just the sum of the weights of the edges. We say that f satisfies: • the nonnegative cycle property if every directed cycle has nonnegative weight. • the nonnegative two-cycle property if every directed cycle between two vertices has nonnegative weight. We say a local function g satisfies nonnegative cycle property/nonnegative two-cycle property if its associated single player function f does. The graph Hf has a possibly infinite number of edges between any two vertices. We define Gf to be the edgeweighted directed graph with exactly one edge from a to b, whose weight δab is the infimum (possibly − ∞) of all of the edge weights ex (a, b) for x ∈ f -1 (a). It is easy to see that Hf has the nonnegative cycle property/nonnegative two-cycle property if and only if Gf does. Gf is called the outcome graph of f. The weak monotonicity property mentioned earlier can be defined for arbitrary social choice functions by the condition that every local subfunction satisfies the nonnegative two-cycle property. The following result was obtained by Rochet [11] in a slightly different form and rediscovered by Rozenshtrom [12] and Gui, Muller and Vohra [5]: LEMMA 3. A local social choice function is truthful if and only if it has the nonnegative cycle property. Thus a social choice function is truthful if and only if every local subfunction satisfies the nonnegative cycle property. In light of this, theorem 1 follows from: THEOREM 4. For any surjective single player function f: D − → A where D is a convex subset of RA and A is finite, the nonnegative two-cycle property implies the nonnegative cycle property. This is the result we will prove. 1.3 Overview of the Proof of Theorem 4 Let D ⊆ RA be convex and let f: D − → A be a single player function such that Gf has no negative two-cycles. We want to conclude that Gf has no negative cycles. For two vertices a, b, let δ ∗ ab denote the minimum weight of any path from a to b. Clearly δ ∗ ab ≤ δab. Our proof shows that the δ ∗ - weight of every cycle is exactly 0, from which theorem 4 follows. There seems to be no direct way to compute δ ∗ and so we proceed indirectly. Based on geometric considerations, we identify a subset of paths in Gf called admissible paths and a subset of admissible paths called straight paths. We prove that for any two outcomes a, b, there is a straight path from a to b (lemma 8 and corollary 10), and all straight paths from a to b have the same weight, which we denote ρab (theorem 12). We show that ρab ≤ δab (lemma 14) and that the ρ-weight of every cycle is 0. The key step to this proof is showing that the ρ-weight of every directed triangle is 0 (lemma 17). It turns out that ρ is equal to δ ∗ (corollary 20), although this equality is not needed in the proof of theorem 4. To expand on the above summary, we give the definitions of an admissible path and a straight path. These are somewhat technical and rely on the geometry of f. We first observe that, without loss of generality, we can assume that D is (topologically) closed (section 2). In section 3, for each a ∈ A, we enlarge the set f-1 (a) to a closed convex set Da ⊆ D in such a way that for a, b ∈ A with a = b, Da and Db have disjoint interiors. 
We define an admissible path to be a sequence of outcomes (a1,..., ak) such that each of the sets Ij = Daj ∩ Daj +1 is nonempty (section 4). An admissible path is straight if there is a straight line that meets one point from each of the sets I1,..., Ik-1 in order (section 5). Finally, we mention how the hypotheses of convex domain and finite range are used in the proof. Both hypotheses are needed to show: (1) the existence of a straight path from a to b for all a, b (lemma 8). (2) that the ρ-weight of a directed triangle is 0 (lemma 17). The convex domain hypothesis is also needed for the convexity of the sets Da (section 3). The finite range hypothesis is also needed to reduce theorem 4 to the case that D is closed (section 2) and to prove that every straight path from a to b has the same δ-weight (theorem 12). 2. REDUCTION TO CLOSED DOMAIN We first reduce the theorem to the case that D is closed. Write DC for the closure of D. Since A is finite, DC = ∪ aEA (f-1 (a)) C. Thus for each v ∈ DC − D, there is an a = a (v) ∈ A such that v ∈ (f-1 (a)) C. Extend f to the function g on DC by defining g (v) = a (v) for v ∈ DC − D and g (v) = f (v) for v ∈ D. It is easy to check that δab (g) = δab (f) for all a, b ∈ A and therefore it suffices to show that the nonnegative two-cycle property for g implies the nonnegative cycle property for g. Henceforth we assume D is convex and closed. 3. A DISSECTION OF THE DOMAIN In this section, we construct a family of closed convex sets {Da: a ∈ A} with disjoint interiors whose union is D and satisfying f -1 (a) ⊆ Da for each a ∈ A. Let Ra = {v: ∀ b ∈ A, v (a) − v (b) ≥ δab}. Ra is a closed polyhedron containing f -1 (a). The next proposition implies that any two of these polyhedra intersect only on their boundary. Figure 1: A 2-dimensional domain with 5 outcomes. PROOF. v E Ra implies v (a)--v (b)> δab and v E Rb implies v (b)--v (a)> δba which, by the nonnegative two-cycle property, implies v (a)--v (b) <δab. Thus v (a)--v (b) = δab and by symmetry v (b)--v (a) = δba. missible paths. However, path (a, c, d) or path (b, e) is not admissible. PROOF. Let →--u be a D-sequence in C (--→ a) n C (--→ a ~). By proposition 6, δ (--→ a) = ∆ − → a (--→ u) and δ (--→ a ~) = ∆ − → a, (--→ u), it suffices to show ∆ − → a (--→ u) = ∆ − → a, (--→ u). Let k = ord (--→ u) = I--→ a I = I--→ a ~ I. Since Finally, we restrict the collection of sets {Ra: a E Al to the domain D by defining Da = Ra n D for each a E A. Clearly, Da is closed and convex, and contains f − 1 (a). Therefore Sa ∈ A Da = D. Also, by proposition 5, any point v in Da n Db satisfies v (a)--v (b) = δab =--δba. 4. PATHS AND D-SEQUENCES A path of size k is a sequence →--a = (a1,..., ak) with each ai E A (possibly with repetition). We call →--a an (a1, ak) path. For a path →--a, we write I--→ a I for the size of →--a. →--a is simple if the ai's are distinct. For b, c E A we write Pbc for the set of (b, c) - paths and SPbc for the set of simple (b, c) - paths. The δ-weight of path →--a is defined by A D-sequence of order k is a sequence →--u = (u0,..., uk) with each ui E D (possibly with repetition). We call →--u a (u0, uk) - sequence. For a D-sequence →--u, we write ord (u) for the order of →--u. For v, w E D we write Svw for the set of (v, w) - sequences. A compatible pair is a pair (--→ a, →--u) where →--a is a path and →--u is a D-sequence satisfying ord (--→ u) = I--→ a I and for each i E [k], both ui − 1 and ui belong to Dai. We write C (--→ a) for the set of D-sequences →--u that are compatible with →--a. 
We say that →--a is admissible if C (--→ a) is nonempty. For →--u E C (--→ a) we define For v, w E D and b, c E A, we define Cvw bc to be the set of compatible pairs (--→ a, →--u) such that →--a E Pbc and →--u E Svw. To illustrate these definitions, figure 1 gives the dissection of a domain, a 2-dimensional plane, into five regions Da, Db, Dc, Dd, De. D-sequence (v, w, x, y, z) is compatible with both path (a, b, c, e) and path (a, b, d, e); D-sequence (v, w, u, y, z) is compatible with a unique path (a, b, d, e). D-sequence (x, w, p, y, z) is compatible with a unique path (b, a, d, e). Hence (a, b, c, e), (a, b, d, e) and (b, a, d, e) are ad Noticing both ui − 1 and ui belong to Dai n Da, i, we have by proposition 5 5. LINEAR D-SEQUENCES AND STRAIGHT PATHS For v, w E D we write vw for the (closed) line segment joining v and w. A D-sequence →--u of order k is linear provided that there is a sequence of real numbers 0 = λ0 <λ1 <... <λk = 1 such that ui = (1--λi) u0 + λiuk. In particular, each ui belongs to u0uk. For v, w E D we write Lvw for the set of linear (v, w) - sequences. For b, c E A and v, w E D we write LCvw bc for the set of compatible pairs (--→ a, →--u) such that →--a E Pbc and →--u E Lvw. For a path →--a, we write L (--→ a) for the set of linear sequences compatible with →--a. We say that →--a is straight if L (--→ a) = 0. For example, in figure 1, D-sequence (v, w, x, y, z) is linear while (v, w, u, y, z), (x, w, p, y, z), and (x, v, w, y, z) are not. Hence path (a, b, c, e) and (a, b, d, e) are both straight. However, path (b, a, d, e) is not straight. LEMMA 8. Let b, c ∈ A and v ∈ Db, w ∈ Dc. There is a simple path → − a and D-sequence → − u such that (− → a, → − u) ∈ LCvw bc. Furthermore, for any such path → − a, δ (− → a) ≤ v (b) − v (c). PROOF. By the convexity of D, any sequence of points on vw is a D-sequence. If b = c, singleton path → − a = (b) and D-sequence → − u = (v, w) are obviously compatible. δ (− → a) = 0 = v (b) − v (c). So assume b = c. If Db ∩ Dc ∩ vw = ∅, we pick an arbitrary x from this set and let → − a = (b, c) ∈ SPbc, → − u = (v, x, w) ∈ Lvw. Again it is easy to check the compatibility of (− → a, → − u). Since v ∈ Db, v (b) − v (c) ≥ δbc = δ (− → a). For the remaining case b = c and Db ∩ Dc ∩ vw = ∅, notice v = w otherwise v = w ∈ Db ∩ Dc ∩ vw. So we can define λx for every point x on vw to be the unique number in [0, 1] such that x = (1 − λx) v + λxw. For convenience, we write x ≤ y for λx ≤ λy. Let Ia = Da ∩ vw for each a ∈ A. Since D = ∪ a ∈ ADa, we have vw = ∪ a ∈ AIa. Moreover, by the convexity of Da and vw, Ia is a (possibly trivial) closed interval. We begin by considering the case that Ib and Ic are each a single point, that is, Ib = {v} and Ic = {w}. Let S be a minimal subset of A satisfying ∪ s ∈ SIs = vw. For each s ∈ S, Is is maximal, i.e., not contained in any other It, for t ∈ S. In particular, the intervals {Is: s ∈ S} have all left endpoints distinct and all right endpoints distinct and the order of the left endpoints is the same as that of the right endpoints. Let k = | S | + 2 and index S as a2,..., ak − 1 in the order defined by the right endpoints. Denote the interval Iai by [li, ri]. Thus l2 <l3 <... <lk − 1, r2 <r3 <... <rk − 1 and the fact that these intervals cover vw implies l2 = v, rk − 1 = w and for all 2 ≤ i ≤ k − 2, li +1 ≤ ri which further implies li <ri. Now we define the path → − a = (a1, a2,..., ak − 1, ak) with a1 = b, ak = c and a2, a3,..., ak − 1 as above. 
Define the linear D-sequence → − u = (u0, u1,..., uk) by u0 = u1 = v, uk = w and for 2 ≤ i ≤ k − 1, ui = ri. It follows immediately that (− → a, → − u) ∈ LCvw bc. Neither b nor c is in S since lb = rb and lc = rc. Thus → − a is simple. Finally to show δ (− → a) ≤ v (b) − v (c), we note fde (y). Therefore fde (z) is monotonically nonincreasing along ← → the line xy as z moves in the direction from x to y. Applying this fact with d = ai, e = ai +1, x = li and y = ri gives the desired conclusion. This completes the proof for the case that Ib = {v} and Ic = {w}. For general Ib, Ic, rb <lc otherwise Db ∩ Dc ∩ vw = Ib ∩ Ic = ∅. Let v ~ = rb and w ~ = lc. Clearly we can apply the above conclusion to v ~ ∈ Db, w ~ ∈ Dc and get a compatible pair (− → a, → − u ~) ∈ LCv ~ w ~ bc with → − a simple and δ (− → a) ≤ v ~ (b) − v ~ (c). Define the linear D-sequence → − u by u0 = v, uk = w and ui = u ~ i for i ∈ [k − 1]. (− → a, → − u) ∈ LCvw bc is evident. Moreover, applying the above fact with d = b, e = c, x = v and y = w, we get v (b) − v (c) ≥ v ~ (b) − v ~ (c) ≥ δ (− → a). The main result of this section (theorem 12) says that for any b, c ∈ A, every straight (b, c) - path has the same δ-weight. To prove this, we first fix v ∈ Db and w ∈ Dc and show (lemma 11) that every straight (b, c) - path compatible with some linear (v, w) - sequence has the same δ-weight ρbc (v, w). We then show in theorem 12 that ρbc (v, w) is the same for all choices of v ∈ Db and w ∈ Dc. bc. It suffices to show δ (− → a ~) = δ (− → a ~ ~). To do this we construct a linear (v, w) - sequence → − u and paths → − a ∗, → − a ∗ ∗ ∈ Pbc, both compatible with → − u, satisfying δ (− → a ∗) = δ (− → a ~) and δ (− → a ∗ ∗) = δ (− → a ~ ~). Lemma 7 implies δ (− → a ∗) = δ (− → a ∗ ∗), which will complete the proof. Let | − → a ~ | = ord (− → u ~) = k and | − → a ~ ~ | = ord (− → u ~ ~) = l. We select → − u to be any linear (v, w) - sequence (u0, u1,..., ut) such that → − u ~ and → − u ~ ~ are both subsequences of → − u, i.e., there are indices 0 = i0 <i1 <· · · <ik = t and 0 = j0 <j1 <· · · <jl = t such that → − u ~ = (ui0, ui1,..., uik) and → − u ~ ~ = (uj0, uj1,..., ujl). We now construct a (b, c) - path → − a ∗ compatible with → − u such that δ (− → a ∗) = δ (− → a ~). (An analogous construction gives → − a ∗ ∗ compatible with → − u such that δ (− → a ∗ ∗) = δ (− → a ~ ~).) This will complete the proof. → − a ∗ is defined as follows: for 1 ≤ j ≤ t, a ∗ j = a ~ r where r is the unique index satisfying ir − 1 <j ≤ ir. Since both We are now ready for the main theorem of the section: THEOREM 12. ρbc is a constant map on Db × Dc. Thus for any b, c ∈ A, every straight (b, c) - path has the same δweight. PROOF. For a path → − a, (v, w) is compatible with → − a if there is a linear (v, w) - sequence compatible with → − a. We write CP (− → a) for the set of pairs (v, w) compatible with → − a. ρbc is constant on CP (− → a) because for each (v, w) ∈ CP (− → a), ρbc (v, w) = δ (− → a). By lemma 8, we also have S − → a ∈ SPbc CP (− → a) = Db × Dc. Since A is finite, SPbc, the set of simple paths from b to c, is finite as well. Next we prove that for any path → − a, CP (− → a) is closed. Let ((vn, wn): n ∈ ICY) be a convergent sequence in CP (− → a) and let (v, w) be the limit. We want to show that (v, w) ∈ CP (− → a). For each n ∈ ICY, since (vn, wn) ∈ CP (− → a), there is a linear (vn, wn) - sequence un compatible with → − a, i.e., there are 0 = λn0 ≤ λn1 ≤...≤ λnk = 1 (k = | − → a |) such that unj = (1 − λnj) vn + λnj wn (j = 0, 1,..., k). 
Since for each n, λn = (λn0, λn 1,..., λn k) belongs to the closed bounded set [0, 1] k +1 we can choose an infinite subset I ⊆ ICY such that the sequence (λn: n ∈ I) converges. Let λ = (λ0, λ1,..., λk) be the limit. Clearly 0 = λ0 ≤ λ1 ≤ · · · ≤ λk = 1. Define the linear (v, w) - sequence → − u by uj = (1 − λj) v + λjw (j = 0, 1,..., k). Then for each j ∈ {0,..., k}, uj is the limit of the sequence (unj: n ∈ I). For j> 0, each unj belongs to the closed set Daj, so uj ∈ Daj. Similarly, for j <k each unj belongs to the closed set Daj +1, so uj ∈ Daj +1. Hence (− → a, → − u) is compatible, implying (v, w) ∈ CP (− → a). Now we have Db × Dc covered by finitely many closed subsets on each of them ρbc is a constant. Suppose for contradiction that there are (v, w), (v', w') ∈ Db × Dc such that ρbc (v, w) = ρbc (v', w'). are closed by the finiteness of P. This is a contradiction, since it is well known (and easy to prove) that a line segment cannot be expressed as the disjoint union of two nonempty closed sets. Summarizing corollary 10, lemma 11 and theorem 12, we have 6. PROOF OF THEOREM 4 LEMMA 14. ρbc ≤ Sbc for all b, c ∈ A. PROOF. For contradiction, suppose ρbc − Sbc = e> 0. By the definition of Sbc, there exists v ∈ f--1 (b) ⊆ Db with v (b) − v (c) <Sbc + e = ρbc. Pick an arbitrary w ∈ Dc. By lemma 8, there is a compatible pair (− → a, → − u) ∈ LCvw bc with S (− → a) ≤ v (b) − v (c). Since → − a is a straight (b, c) - path, ρbc = S (− → a) ≤ v (b) − v (c), leading to a contradiction. Define another edge-weighted complete directed graph G' f on vertex set A where the weight of arc (a, b) is ρab. Immediately from lemma 14, the weight of every directed cycle in Gf is bounded below by its weight in G' f. To prove theorem 4, it suffices to show the zero cycle property of G' f, i.e., every directed cycle has weight zero. We begin by considering two-cycles. LEMMA 15. ρbc + ρcb = 0 for all b, c ∈ A. PROOF. Let → − a be a straight (b, c) - path compatible with linear sequence → − u. let → − a' be the reverse of → − a and → − u' the reverse of → − u. Obviously, (− → a', → − u') is compatible as well and thus → − a' is a straight (c, b) - path. Therefore, where the final equality uses proposition 5. Next, for three cycles, we first consider those compatible with linear triples. PROOF. First, we prove for the case where v is between u and w. From lemma 8, there are compatible pairs (− → a', → − u') ∈ Therefore, ρac = S (− → a' ' ') = S (− → a') + S (− → a' ') = ρab + ρbc. Moreover, ρac = − ρca by lemma 15, so we get ρab + ρbc + ρca = 0. Now suppose w is between u and v. By the above argument, we have ρac + ρcb + ρba = 0 and by lemma 15, ρab + ρbc + ρca = − ρba − ρcb − ρac = 0. The case that u is between v and w is similar. Now we are ready for the zero three-cycle property: LEMMA 17. ρab + ρbc + ρca = 0 for all a, b, c ∈ A. PROOF. Let S = {(a, b, c): ρab + ρbc + ρca = 0} and for contradiction, suppose S = ∅. S is finite. For each a ∈ A, choose va ∈ Da arbitrarily and let T be the convex hull of {va: a ∈ A}. For each (a, b, c) ∈ S, let Rabc = Da × Db × Dc ∩ T3. Clearly, each Rabc is nonempty and compact. Moreover, by lemma 16, no (u, v, w) ∈ Rabc is collinear. Define f: D3 → R by f (u, v, w) = | v − u | + | w − v | + | u − w |. For (a, b, c) ∈ S, the restriction of f to the compact set Rabc attains a minimum m (a, b, c) at some point (u, v, w) ∈ Rabc by the continuity of f, i.e., there exists a triangle ∆ uvw of minimum perimeter within T with u ∈ Da, v ∈ Db, w ∈ Dc. 
Choose (a *, b *, c *) ∈ S so that m (a *, b *, c *) is minimum and let (u *, v *, w *) ∈ Ra * b * c * be a triple achieving it. Pick an arbitrary point p in the interior of ∆ u * v * w *. By the convexity of domain D, there is d ∈ A such that p ∈ Dd. Consider triangles ∆ u ∗ pw ∗, ∆ w ∗ pv ∗ and ∆ v ∗ pu ∗. Since each of them has perimeter less than that of ∆ u ∗ v ∗ w ∗ and all three triangles are contained in T, by the minimality of ∆ u ∗ v ∗ w ∗, (a ∗, d, c ∗), (c ∗, d, b ∗), (b ∗, d, a ∗) ∈ S. Thus Summing up the three equalities, With the zero two-cycle and three-cycle properties, the zero cycle property of G ~ f is immediate. As noted earlier, this completes the proof of theorem 4. THEOREM 18. Every directed cycle of G ~ f has weight zero. PROOF. Clearly, zero two-cycle and three-cycle properties imply triangle equality Pab + Pbc = Pac for all a, b, c ∈ A. For triangle equality, we have Pk − 1 a directed cycle C = a1a2...aka1, by inductively applying the weight of C is As final remarks, we note that our result implies the following strengthenings of theorem 12: COROLLARY 19. For any b, c ∈ A, every admissible (b, c) path has the same S-weight Pbc. PROOF. First notice that for any b, c ∈ A, if Db ∩ Dc = ∅, Sbc = Pbc. To see this, pick v ∈ Db ∩ Dc arbitrarily. Obviously, path → − a = (b, c) is compatible with linear sequence → − u = (v, v, v) and is thus a straight (b, c) - path. Hence Pbc = S (− → a) = Sbc. Now for any b, c ∈ A and any (b, c) - path → − a with C (− → a) = ∅, let → − u ∈ C (− → a). Since ui ∈ Dai ∩ Dai +1 for i ∈ [| − → a | − 1], which by theorem 18, = − Paka1 = Pa1ak = Pbc. Hence Pbc ≤ S ∗ bc, which completes the proof. 7. COUNTEREXAMPLES TO STRONGER FORMS OF THEOREM 4 Theorem 4 applies to social choice functions with convex domain and finite range. We now show that neither of these hypotheses can be omitted. Our examples are single player functions. The first example illustrates that convexity cannot be omitted. We present an untruthful single player social choice function with three outcomes a, b, c satisfying W-MON on a path-connected but non-convex domain. The domain is the boundary of a triangle whose vertices are x = (0, 1, − 1), y = (− 1, 0, 1) and z = (1, − 1, 0). x and the open line segment zx is assigned outcome a, y and the open line segment xy is assigned outcome b, and z and the open line segment yz is assigned outcome c. Clearly, Sab = − Sba = Sbc = − Scb = Sca = − Sac = − 1, W-MON (the nonnegative twocycle property) holds. Since there is a negative cycle Sab + Sbc + Sca = − 3, by lemma 3, this is not a truthful choice function. We now show that the hypothesis of finite range cannot be omitted. We construct a family of single player social choice functions each having a convex domain and an infinite number of outcomes, and satisfying weak monotonicity but not truthfulness. Our examples will be specified by a positive integer n and an n × n matrix M satisfying the following properties: (1) M is non-singular. (2) M is positive semidefinite. (3) There are distinct i1, i2,..., ik ∈ [n] satisfying Here is an example matrix with n = 3 and (i1, i2, i3) = (1, 2, 3): Let e1, e2,..., en denote the standard basis of Rn. Let Sn denote the convex hull of {e1, e2..., en}, which is the set of vectors in Rn with nonnegative coordinates that sum to 1. The range of our social choice function will be the set Sn and the domain D will be indexed by Sn, that is D = {yλ: A ∈ Sn}, where yλ is defined below. The function f maps yλ to A. Next we specify yλ. 
By definition, D must be a set of functions from Sn to R. For λ ∈ Sn, the domain element yλ: Sn → R is defined by yλ(α) = λ^T Mα. The nonsingularity of M guarantees that yλ ≠ yµ for λ ≠ µ ∈ Sn. It is easy to see that D is a convex subset of the set of all functions from Sn to R. The outcome graph Gf is an infinite graph whose vertex set is the outcome set A = Sn. For outcomes λ, µ ∈ A, the edge weight δλµ is equal to δλµ = inf {v(λ) − v(µ): f(v) = λ} = yλ(λ) − yλ(µ) = λ^T Mλ − λ^T Mµ = λ^T M(λ − µ). We claim that Gf satisfies the nonnegative two-cycle property (W-MON) but has a negative cycle (and hence is not truthful). For outcomes λ, µ ∈ A, δλµ + δµλ = λ^T M(λ − µ) + µ^T M(µ − λ) = (λ − µ)^T M(λ − µ), which is nonnegative since M is positive semidefinite. Hence the nonnegative two-cycle property holds. Next we show that Gf has a negative cycle. Let i1, i2,..., ik be a sequence of indices satisfying property 3 of M. We claim e_{i_1} e_{i_2} ... e_{i_k} e_{i_1} is a negative cycle. Since δ_{e_{i_j} e_{i_{j+1}}} = e_{i_j}^T M(e_{i_j} − e_{i_{j+1}}) = M_{i_j i_j} − M_{i_j i_{j+1}}, the weight of this cycle is the sum of these terms over j, which is negative by property 3. This completes the proof. Finally, we point out that the third property imposed on the matrix M has the following interpretation. Let R(M) = {r1, r2,..., rn} be the set of row vectors of M and let hM be the single player social choice function with domain R(M) and range {1, 2,..., n} mapping ri to i. Property 3 is equivalent to the condition that the outcome graph GhM has a negative cycle. By lemma 3, this is equivalent to the condition that hM is untruthful. 8. FUTURE WORK As stated in the introduction, the goal underlying the work in this paper is to obtain useful and general characterizations of truthfulness. Let us say that a set D of P × A real valuation matrices is a WM-domain if any social choice function on D satisfying weak monotonicity is truthful. In this paper, we showed that for finite A, any convex D is a WM-domain. Typically, the domains of social choice functions considered in mechanism design are convex, but there are interesting examples with non-convex domains, e.g., combinatorial auctions with unknown single-minded bidders. It is intriguing to find the most general conditions under which a set D of real matrices is a WM-domain. We believe that convexity is the main part of the story, i.e., a WM-domain is, after excluding some exceptional cases, "essentially" a convex set. Turning to parametric representations, let us say a set D of P × A matrices is an AM-domain if any truthful social choice function with domain D is an affine maximizer. Roberts' theorem says that the unrestricted domain is an AM-domain. What are the most general conditions under which a set D of real matrices is an AM-domain?
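The displayed example matrix for n = 3 did not survive extraction above, so the following sketch substitutes one matrix of the form M = I + S with S skew-symmetric, which satisfies all three stated properties; it is an illustration, not necessarily the matrix used in the original text. The snippet checks non-singularity, positive semidefiniteness of the quadratic form, the nonnegative two-cycle property of the outcome graph G_{hM}, and a negative three-cycle (indices are 0-based where the text uses (i1, i2, i3) = (1, 2, 3)).

```python
# Sketch: verify that a candidate matrix M satisfies the three properties used in the
# infinite-range counterexample. M = I + S with S skew-symmetric is one family that works;
# the specific matrix from the original text is not reproduced here.
import numpy as np

M = np.array([[ 1.0,  2.0, -2.0],
              [-2.0,  1.0,  2.0],
              [ 2.0, -2.0,  1.0]])

# (1) non-singular
assert abs(np.linalg.det(M)) > 1e-9

# (2) positive semidefinite as a quadratic form: x^T M x >= 0 for all x,
#     i.e. the symmetric part (M + M^T)/2 has no negative eigenvalues.
sym = (M + M.T) / 2
assert np.all(np.linalg.eigvalsh(sym) >= -1e-9)

# Edge weights of the outcome graph G_{hM}: w(a, b) = M[a, a] - M[a, b].
def w(a, b):
    return M[a, a] - M[a, b]

n = M.shape[0]
# Nonnegative two-cycles (W-MON): w(a,b) + w(b,a) = (e_a - e_b)^T M (e_a - e_b) >= 0.
for a in range(n):
    for b in range(n):
        if a != b:
            assert w(a, b) + w(b, a) >= -1e-9

# (3) a negative directed cycle, here 0 -> 1 -> 2 -> 0, so h_M is untruthful by lemma 3.
cycle = [0, 1, 2]
total = sum(w(cycle[i], cycle[(i + 1) % 3]) for i in range(3))
print("cycle weight:", total)   # -3.0 < 0
assert total < 0
```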
Weak Monotonicity Suffices for Truthfulness on Convex Domains ABSTRACT Weak monotonicity is a simple necessary condition for a social choice function to be implementable by a truthful mechanism. Roberts [10] showed that it is sufficient for all social choice functions whose domain is unrestricted. Lavi, Mu'alem and Nisan [6] proved the sufficiency of weak monotonicity for functions over order-based domains and Gui, Muller and Vohra [5] proved sufficiency for order-based domains with range constraints and for domains defined by other special types of linear inequality constraints. Here we show the more general result, conjectured by Lavi, Mu'alem and Nisan [6], that weak monotonicity is sufficient for functions defined on any convex domain. 1. INTRODUCTION Social choice theory centers around the general problem of selecting a single outcome out of a set A of alternative out * This work was supported in part by NSF grant CCR9988526. † This work was supported in part by NSF grant CCR9988526 and DIMACS. comes based on the individual preferences of a set P of players. A method for aggregating player preferences to select one outcome is called a social choice function. In this paper we assume that the range A is finite and that each player's preference is expressed by a valuation function which assigns to each possible outcome a real number representing the "benefit" the player derives from that outcome. The ensemble of player valuation functions is viewed as a valuation matrix with rows indexed by players and columns by outcomes. A major difficulty connected with social choice functions is that players cannot be required to tell the truth about their preferences. Since each player seeks to maximize his own benefit, he may find it in his interest to misrepresent his valuation function. An important approach for dealing with this problem is to augment a given social choice function with a payment function, which assigns to each player a (positive or negative) payment as a function of all of the individual preferences. By carefully choosing the payment function, one can hope to entice each player to tell the truth. A social choice function augmented with a payment function is called a mechanism 1 and the mechanism is said to implement the social choice function. A mechanism is truthful (or to be strategyproof or to have a dominant strategy) if each player's best strategy, knowing the preferences of the others, is always to declare his own true preferences. A social choice function is truthfully implementable, or truthful if it has a truthful implementation. (The property of truthful implementability is sometimes called dominant strategy incentive compatibility). This framework leads naturally to the question: which social choice functions are truthful? This question is of the following general type: given a class of functions (here, social choice functions) and a property that holds for some of them (here, truthfulness), "characterize" the property. The definition of the property itself provides a characterization, so what more is needed? Here are some useful notions of characterization: • Recognition algorithm. Give an algorithm which, given an appropriate representation of a function in the class, determines whether the function has the property. • Parametric representation. Give an explicit parametrized family of functions and show that each function in the family has the property, and that every function with the property is in the family. 
A third notion applies in the case of hereditary properties of functions. A function g is a subfunction of function f, or f contains g, if g is obtained by restricting the domain of f. A property P of functions is hereditary if it is preserved under taking subfunctions. Truthfulness is easily seen to be hereditary. • Sets of obstructions. For a hereditary property P, a function g that does not have the property is an obstruction to the property in the sense that any function containing g doesn't have the property. An obstruction is minimal if every proper subfunction has the property. A set of obstructions is complete if every function that does not have the property contains one of them as a subfunction. The set of all functions that don't satisfy P is a complete (but trivial and uninteresting) set of obstructions; one seeks a set of small (ideally, minimal) obstructions. We are not aware of any work on recognition algorithms for the property of truthfulness, but there are significant results concerning parametric representations and obstruction characterizations of truthfulness. It turns out that the domain of the function, i.e., the set of allowed valuation matrices, is crucial. For functions with unrestricted domain, i.e., whose domain is the set of all real matrices, there are very good characterizations of truthfulness. For general domains, however, the picture is far from complete. Typically, the domains of social choice functions are specified by a system of constraints. For example, an order constraint requires that one specified entry in some row be larger than another in the same row, a range constraint places an upper or lower bound on an entry, and a zero constraint forces an entry to be 0. These are all examples of linear inequality constraints on the matrix entries. Building on work of Roberts [10], Lavi, Mu'alem and Nisan [6] defined a condition called weak monotonicity (WMON). (Independently, in the context of multi-unit auctions, Bikhchandani, Chatterji and Sen [3] identified the same condition and called it nondecreasing in marginal utilities (NDMU).) The definition of W-MON can be formulated in terms of obstructions: for some specified simple set F of functions each having domains of size 2, a function satisfies W-MON if it contains no function from F. The functions in F are not truthful, and therefore W-MON is a necessary condition for truthfulness. Lavi, Mu'alem and Nisan [6] showed that W-MON is also sufficient for truthfulness for social choice functions whose domain is order-based, i.e., defined by order constraints and zero constraints, and Gui, Muller and Vohra [5] extended this to other domains. The domain constraints considered in both papers are special cases of linear inequality constraints, and it is natural to ask whether W-MON is sufficient for any domain defined by such constraints. Lavi, Mu'alem and Nisan [6] conjectured that W-MON suffices for convex domains. The main result of this paper is an affirmative answer to this conjecture: Using the interpretation of weak monotonicity in terms of obstructions each having domain size 2, this provides a complete set of minimal obstructions for truthfulness within the class of social choice functions with convex domains. The two hypotheses on the social choice function, that the domain is convex and that the range is finite, cannot be omitted as is shown by the examples given in section 7. 1.1 Related Work There is a simple and natural parametrized set of truthful social choice functions called affine maximizers. 
Roberts [10] showed that for functions with unrestricted domain, every truthful function is an affine maximizer, thus providing a parametrized representation for truthful functions with unrestricted domain. There are many known examples of truthful functions over restricted domains that are not affine maximizers (see [1], [2], [4], [6] and [7]). Each of these examples has a special structure and it seems plausible that there might be some mild restrictions on the class of all social choice functions such that all truthful functions obeying these restrictions are affine maximizers. Lavi, Mu'alem and Nisan [6] obtained a result in this direction by showing that for order-based domains, under certain technical assumptions, every truthful social choice function is "almost" an affine maximizer. There are a number of results about truthfulness that can be viewed as providing obstruction characterizations, although the notion of obstruction is not explicitly discussed. For a player i, a set of valuation matrices is said to be i-local if all of the matrices in the set are identical except for row i. Call a social choice function i-local if its domain is ilocal and call it local if it is i-local for some i. The following easily proved fact is used extensively in the literature: This implies that the set of all local non-truthful functions comprises a complete set of obstructions for truthfulness. This set is much smaller than the set of all non-truthful functions, but is still far from a minimal set of obstructions. Rochet [11], Rozenshtrom [12] and Gui, Muller and Vohra [5] identified a necessary and sufficient condition for truthfulness (see lemma 3 below) called the nonnegative cycle property. This condition can be viewed as providing a minimal complete set of non-truthful functions. As is required by proposition 2, each function in the set is local. Furthermore it is one-to-one. In particular its domain has size at most the number of possible outcomes | A |. As this complete set of obstructions consists of minimal non-truthful functions, this provides the optimal obstruction characterization of non-truthful functions within the class of all social choice functions. But by restricting attention to interesting subclasses of social choice functions, one may hope to get simpler sets of obstructions for truthfulness within that class. The condition of weak monotonicity mentioned earlier can be defined by a set of obstructions, each of which is a local function of domain size exactly 2. Thus the results of Lavi, Mu'alem and Nisan [6], and of Gui, Muller and Vohra [5] give a very simple set of obstructions for truthfulness within certain subclasses of social choice functions. Theorem 1 extends these results to a much larger subclass of functions. 1.2 Weak Monotonicity and the Nonnegative Cycle Property By proposition 2, a function is truthful if and only if each of its local subfunctions is truthful. Therefore, to get a set of obstructions for truthfulness, it suffices to obtain such a set for local functions. The domain of an i-local function consists of matrices that are fixed on all rows but row i. Fix such a function f and let D ⊆ RA be the set of allowed choices for row i. Since f depends only on row i and row i is chosen from D, we can view f as a function from D to A. Therefore, f is a social choice function having one player; we refer to such a function as a single player function. 
Associated to any single player function f with domain D we define an edge-weighted directed graph Hf whose vertex set is the image of f. For convenience, we assume that f is surjective and so this image is A. For each a, b ∈ A, x ∈ f -1 (a) there is an edge ex (a, b) from a to b with weight x (a) − x (b). The weight of a set of edges is just the sum of the weights of the edges. We say that f satisfies: • the nonnegative cycle property if every directed cycle has nonnegative weight. • the nonnegative two-cycle property if every directed cycle between two vertices has nonnegative weight. We say a local function g satisfies nonnegative cycle property/nonnegative two-cycle property if its associated single player function f does. The graph Hf has a possibly infinite number of edges between any two vertices. We define Gf to be the edgeweighted directed graph with exactly one edge from a to b, whose weight δab is the infimum (possibly − ∞) of all of the edge weights ex (a, b) for x ∈ f -1 (a). It is easy to see that Hf has the nonnegative cycle property/nonnegative two-cycle property if and only if Gf does. Gf is called the outcome graph of f. The weak monotonicity property mentioned earlier can be defined for arbitrary social choice functions by the condition that every local subfunction satisfies the nonnegative two-cycle property. The following result was obtained by Rochet [11] in a slightly different form and rediscovered by Rozenshtrom [12] and Gui, Muller and Vohra [5]: LEMMA 3. A local social choice function is truthful if and only if it has the nonnegative cycle property. Thus a social choice function is truthful if and only if every local subfunction satisfies the nonnegative cycle property. In light of this, theorem 1 follows from: THEOREM 4. For any surjective single player function f: D − → A where D is a convex subset of RA and A is finite, the nonnegative two-cycle property implies the nonnegative cycle property. This is the result we will prove. 1.3 Overview of the Proof of Theorem 4 Let D ⊆ RA be convex and let f: D − → A be a single player function such that Gf has no negative two-cycles. We want to conclude that Gf has no negative cycles. For two vertices a, b, let δ ∗ ab denote the minimum weight of any path from a to b. Clearly δ ∗ ab ≤ δab. Our proof shows that the δ ∗ - weight of every cycle is exactly 0, from which theorem 4 follows. There seems to be no direct way to compute δ ∗ and so we proceed indirectly. Based on geometric considerations, we identify a subset of paths in Gf called admissible paths and a subset of admissible paths called straight paths. We prove that for any two outcomes a, b, there is a straight path from a to b (lemma 8 and corollary 10), and all straight paths from a to b have the same weight, which we denote ρab (theorem 12). We show that ρab ≤ δab (lemma 14) and that the ρ-weight of every cycle is 0. The key step to this proof is showing that the ρ-weight of every directed triangle is 0 (lemma 17). It turns out that ρ is equal to δ ∗ (corollary 20), although this equality is not needed in the proof of theorem 4. To expand on the above summary, we give the definitions of an admissible path and a straight path. These are somewhat technical and rely on the geometry of f. We first observe that, without loss of generality, we can assume that D is (topologically) closed (section 2). In section 3, for each a ∈ A, we enlarge the set f-1 (a) to a closed convex set Da ⊆ D in such a way that for a, b ∈ A with a = b, Da and Db have disjoint interiors. 
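As a concrete illustration of the outcome graph and the two cycle conditions just defined, the sketch below builds an approximation of Gf from a finite sample of a hypothetical single player function (the valuations are invented for illustration, not drawn from the paper) and tests the nonnegative two-cycle property (W-MON) and, via a Floyd-Warshall pass, the nonnegative cycle property of lemma 3.

```python
# Sketch: outcome graph of a (finite sample of a) single player function f : D -> A,
# with delta_ab approximated by the minimum of v(a) - v(b) over sampled v in f^-1(a),
# plus checks for the nonnegative two-cycle (W-MON) and nonnegative cycle properties.
import itertools
import math

outcomes = ["a", "b", "c"]
# f given as (valuation, chosen outcome) pairs; each valuation maps outcomes to reals.
sample_f = [
    ({"a": 5.0, "b": 1.0, "c": 0.0}, "a"),
    ({"a": 1.0, "b": 4.0, "c": 2.0}, "b"),
    ({"a": 0.0, "b": 1.0, "c": 3.0}, "c"),
]

def outcome_graph(sample):
    delta = {(x, y): math.inf for x in outcomes for y in outcomes}
    for v, chosen in sample:
        for y in outcomes:
            delta[(chosen, y)] = min(delta[(chosen, y)], v[chosen] - v[y])
    return delta

def satisfies_wmon(delta):
    # Every two-cycle has nonnegative weight.
    return all(delta[(x, y)] + delta[(y, x)] >= 0
               for x, y in itertools.combinations(outcomes, 2))

def has_negative_cycle(delta):
    # Floyd-Warshall; a negative diagonal entry witnesses a negative cycle.
    d = dict(delta)
    for k in outcomes:
        for i in outcomes:
            for j in outcomes:
                d[(i, j)] = min(d[(i, j)], d[(i, k)] + d[(k, j)])
    return any(d[(x, x)] < 0 for x in outcomes)

delta = outcome_graph(sample_f)
print("W-MON:", satisfies_wmon(delta))              # nonnegative two-cycles
print("truthful (lemma 3):", not has_negative_cycle(delta))
```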
We define an admissible path to be a sequence of outcomes (a1,..., ak) such that each of the sets Ij = Daj ∩ Daj +1 is nonempty (section 4). An admissible path is straight if there is a straight line that meets one point from each of the sets I1,..., Ik-1 in order (section 5). Finally, we mention how the hypotheses of convex domain and finite range are used in the proof. Both hypotheses are needed to show: (1) the existence of a straight path from a to b for all a, b (lemma 8). (2) that the ρ-weight of a directed triangle is 0 (lemma 17). The convex domain hypothesis is also needed for the convexity of the sets Da (section 3). The finite range hypothesis is also needed to reduce theorem 4 to the case that D is closed (section 2) and to prove that every straight path from a to b has the same δ-weight (theorem 12). 2. REDUCTION TO CLOSED DOMAIN 3. A DISSECTION OF THE DOMAIN 4. PATHS AND D-SEQUENCES 5. LINEAR D-SEQUENCES AND STRAIGHT PATHS 6. PROOF OF THEOREM 4 THEOREM 18. Every directed cycle of G ~ f has weight zero. 7. COUNTEREXAMPLES TO STRONGER FORMS OF THEOREM 4 8. FUTURE WORK As stated in the introduction, the goal underlying the work in this paper is to obtain useful and general characterizations of truthfulness. Let us say that a set D of P x A real valuation matrices is a WM-domain if any social choice function on D satisfying weak monotonicity is truthful. In this paper, we showed that for finite A, any convex D is a WM-domain. Typically, the domains of social choice functions considered in mechanism design are convex, but there are interesting examples with non-convex domains, e.g., combinatorial auctions with unknown single-minded bidders. It is intriguing to find the most general conditions under which a set D of real matrices is a WM-domain. We believe that convexity is the main part of the story, i.e., a WM-domain is, after excluding some exceptional cases," essentially" a convex set. Turning to parametric representations, let us say a set D of P x A matrices is an AM-domain if any truthful social choice function with domain D is an affine maximizer. Roberts' theorem says that the unrestricted domain is an AM-domain. What are the most general conditions under which a set D of real matrices is an AM-domain?
Weak Monotonicity Suffices for Truthfulness on Convex Domains ABSTRACT Weak monotonicity is a simple necessary condition for a social choice function to be implementable by a truthful mechanism. Roberts [10] showed that it is sufficient for all social choice functions whose domain is unrestricted. Lavi, Mu'alem and Nisan [6] proved the sufficiency of weak monotonicity for functions over order-based domains and Gui, Muller and Vohra [5] proved sufficiency for order-based domains with range constraints and for domains defined by other special types of linear inequality constraints. Here we show the more general result, conjectured by Lavi, Mu'alem and Nisan [6], that weak monotonicity is sufficient for functions defined on any convex domain. 1. INTRODUCTION Social choice theory centers around the general problem of selecting a single outcome out of a set A of alternative out * This work was supported in part by NSF grant CCR9988526. † This work was supported in part by NSF grant CCR9988526 and DIMACS. comes based on the individual preferences of a set P of players. A method for aggregating player preferences to select one outcome is called a social choice function. In this paper we assume that the range A is finite and that each player's preference is expressed by a valuation function which assigns to each possible outcome a real number representing the "benefit" the player derives from that outcome. The ensemble of player valuation functions is viewed as a valuation matrix with rows indexed by players and columns by outcomes. A major difficulty connected with social choice functions is that players cannot be required to tell the truth about their preferences. Since each player seeks to maximize his own benefit, he may find it in his interest to misrepresent his valuation function. An important approach for dealing with this problem is to augment a given social choice function with a payment function, which assigns to each player a (positive or negative) payment as a function of all of the individual preferences. By carefully choosing the payment function, one can hope to entice each player to tell the truth. A social choice function augmented with a payment function is called a mechanism 1 and the mechanism is said to implement the social choice function. A social choice function is truthfully implementable, or truthful if it has a truthful implementation. (The property of truthful implementability is sometimes called dominant strategy incentive compatibility). This framework leads naturally to the question: which social choice functions are truthful? This question is of the following general type: given a class of functions (here, social choice functions) and a property that holds for some of them (here, truthfulness), "characterize" the property. The definition of the property itself provides a characterization, so what more is needed? Here are some useful notions of characterization: • Recognition algorithm. Give an algorithm which, given an appropriate representation of a function in the class, determines whether the function has the property. • Parametric representation. Give an explicit parametrized family of functions and show that each function in the family has the property, and that every function with the property is in the family. A third notion applies in the case of hereditary properties of functions. A function g is a subfunction of function f, or f contains g, if g is obtained by restricting the domain of f. 
A property P of functions is hereditary if it is preserved under taking subfunctions. Truthfulness is easily seen to be hereditary. • Sets of obstructions. For a hereditary property P, a function g that does not have the property is an obstruction to the property in the sense that any function containing g doesn't have the property. An obstruction is minimal if every proper subfunction has the property. A set of obstructions is complete if every function that does not have the property contains one of them as a subfunction. The set of all functions that don't satisfy P is a complete (but trivial and uninteresting) set of obstructions; one seeks a set of small (ideally, minimal) obstructions. We are not aware of any work on recognition algorithms for the property of truthfulness, but there are significant results concerning parametric representations and obstruction characterizations of truthfulness. It turns out that the domain of the function, i.e., the set of allowed valuation matrices, is crucial. For functions with unrestricted domain, i.e., whose domain is the set of all real matrices, there are very good characterizations of truthfulness. For general domains, however, the picture is far from complete. Typically, the domains of social choice functions are specified by a system of constraints. These are all examples of linear inequality constraints on the matrix entries. Building on work of Roberts [10], Lavi, Mu'alem and Nisan [6] defined a condition called weak monotonicity (WMON). The definition of W-MON can be formulated in terms of obstructions: for some specified simple set F of functions each having domains of size 2, a function satisfies W-MON if it contains no function from F. The functions in F are not truthful, and therefore W-MON is a necessary condition for truthfulness. Lavi, Mu'alem and Nisan [6] showed that W-MON is also sufficient for truthfulness for social choice functions whose domain is order-based, i.e., defined by order constraints and zero constraints, and Gui, Muller and Vohra [5] extended this to other domains. The domain constraints considered in both papers are special cases of linear inequality constraints, and it is natural to ask whether W-MON is sufficient for any domain defined by such constraints. Lavi, Mu'alem and Nisan [6] conjectured that W-MON suffices for convex domains. The main result of this paper is an affirmative answer to this conjecture: Using the interpretation of weak monotonicity in terms of obstructions each having domain size 2, this provides a complete set of minimal obstructions for truthfulness within the class of social choice functions with convex domains. The two hypotheses on the social choice function, that the domain is convex and that the range is finite, cannot be omitted as is shown by the examples given in section 7. 1.1 Related Work There is a simple and natural parametrized set of truthful social choice functions called affine maximizers. Roberts [10] showed that for functions with unrestricted domain, every truthful function is an affine maximizer, thus providing a parametrized representation for truthful functions with unrestricted domain. There are many known examples of truthful functions over restricted domains that are not affine maximizers (see [1], [2], [4], [6] and [7]). Each of these examples has a special structure and it seems plausible that there might be some mild restrictions on the class of all social choice functions such that all truthful functions obeying these restrictions are affine maximizers. 
Lavi, Mu'alem and Nisan [6] obtained a result in this direction by showing that for order-based domains, under certain technical assumptions, every truthful social choice function is "almost" an affine maximizer. There are a number of results about truthfulness that can be viewed as providing obstruction characterizations, although the notion of obstruction is not explicitly discussed. For a player i, a set of valuation matrices is said to be i-local if all of the matrices in the set are identical except for row i. Call a social choice function i-local if its domain is ilocal and call it local if it is i-local for some i. The following easily proved fact is used extensively in the literature: This implies that the set of all local non-truthful functions comprises a complete set of obstructions for truthfulness. This set is much smaller than the set of all non-truthful functions, but is still far from a minimal set of obstructions. Rochet [11], Rozenshtrom [12] and Gui, Muller and Vohra [5] identified a necessary and sufficient condition for truthfulness (see lemma 3 below) called the nonnegative cycle property. This condition can be viewed as providing a minimal complete set of non-truthful functions. As is required by proposition 2, each function in the set is local. Furthermore it is one-to-one. In particular its domain has size at most the number of possible outcomes | A |. As this complete set of obstructions consists of minimal non-truthful functions, this provides the optimal obstruction characterization of non-truthful functions within the class of all social choice functions. But by restricting attention to interesting subclasses of social choice functions, one may hope to get simpler sets of obstructions for truthfulness within that class. The condition of weak monotonicity mentioned earlier can be defined by a set of obstructions, each of which is a local function of domain size exactly 2. Thus the results of Lavi, Mu'alem and Nisan [6], and of Gui, Muller and Vohra [5] give a very simple set of obstructions for truthfulness within certain subclasses of social choice functions. Theorem 1 extends these results to a much larger subclass of functions. 1.2 Weak Monotonicity and the Nonnegative Cycle Property By proposition 2, a function is truthful if and only if each of its local subfunctions is truthful. Therefore, to get a set of obstructions for truthfulness, it suffices to obtain such a set for local functions. The domain of an i-local function consists of matrices that are fixed on all rows but row i. Fix such a function f and let D ⊆ RA be the set of allowed choices for row i. Since f depends only on row i and row i is chosen from D, we can view f as a function from D to A. Therefore, f is a social choice function having one player; we refer to such a function as a single player function. We say that f satisfies: • the nonnegative cycle property if every directed cycle has nonnegative weight. • the nonnegative two-cycle property if every directed cycle between two vertices has nonnegative weight. We say a local function g satisfies nonnegative cycle property/nonnegative two-cycle property if its associated single player function f does. The graph Hf has a possibly infinite number of edges between any two vertices. It is easy to see that Hf has the nonnegative cycle property/nonnegative two-cycle property if and only if Gf does. Gf is called the outcome graph of f. 
The weak monotonicity property mentioned earlier can be defined for arbitrary social choice functions by the condition that every local subfunction satisfies the nonnegative two-cycle property. A local social choice function is truthful if and only if it has the nonnegative cycle property. Thus a social choice function is truthful if and only if every local subfunction satisfies the nonnegative cycle property. In light of this, theorem 1 follows from: THEOREM 4. For any surjective single player function f: D − → A where D is a convex subset of RA and A is finite, the nonnegative two-cycle property implies the nonnegative cycle property. This is the result we will prove. 1.3 Overview of the Proof of Theorem 4 Let D ⊆ RA be convex and let f: D − → A be a single player function such that Gf has no negative two-cycles. We want to conclude that Gf has no negative cycles. Our proof shows that the δ ∗ - weight of every cycle is exactly 0, from which theorem 4 follows. Based on geometric considerations, we identify a subset of paths in Gf called admissible paths and a subset of admissible paths called straight paths. We show that ρab ≤ δab (lemma 14) and that the ρ-weight of every cycle is 0. The key step to this proof is showing that the ρ-weight of every directed triangle is 0 (lemma 17). To expand on the above summary, we give the definitions of an admissible path and a straight path. Finally, we mention how the hypotheses of convex domain and finite range are used in the proof. Both hypotheses are needed to show: (1) the existence of a straight path from a to b for all a, b (lemma 8). (2) that the ρ-weight of a directed triangle is 0 (lemma 17). The convex domain hypothesis is also needed for the convexity of the sets Da (section 3). 8. FUTURE WORK As stated in the introduction, the goal underlying the work in this paper is to obtain useful and general characterizations of truthfulness. Let us say that a set D of P x A real valuation matrices is a WM-domain if any social choice function on D satisfying weak monotonicity is truthful. In this paper, we showed that for finite A, any convex D is a WM-domain. Typically, the domains of social choice functions considered in mechanism design are convex, but there are interesting examples with non-convex domains, e.g., combinatorial auctions with unknown single-minded bidders. It is intriguing to find the most general conditions under which a set D of real matrices is a WM-domain. Turning to parametric representations, let us say a set D of P x A matrices is an AM-domain if any truthful social choice function with domain D is an affine maximizer. Roberts' theorem says that the unrestricted domain is an AM-domain. What are the most general conditions under which a set D of real matrices is an AM-domain?
C-46
TSAR: A Two Tier Sensor Storage Architecture Using Interval Skip Graphs
Archival storage of sensor data is necessary for applications that query, mine, and analyze such data for interesting features and trends. We argue that existing storage systems are designed primarily for flat hierarchies of homogeneous sensor nodes and do not fully exploit the multi-tier nature of emerging sensor networks, where an application can comprise tens of tethered proxies, each managing tens to hundreds of untethered sensors. We present TSAR, a fundamentally different storage architecture that envisions separation of data from metadata by employing local archiving at the sensors and distributed indexing at the proxies. At the proxy tier, TSAR employs a novel multi-resolution ordered distributed index structure, the Interval Skip Graph, for efficiently supporting spatio-temporal and value queries. At the sensor tier, TSAR supports energy-aware adaptive summarization that can trade off the cost of transmitting metadata to the proxies against the overhead of false hits resulting from querying a coarse-grain index. We implement TSAR in a two-tier sensor testbed comprising Stargate-based proxies and Mote-based sensors. Our experiments demonstrate the benefits and feasibility of using our energy-efficient storage architecture in multi-tier sensor networks.
[ "interv skip graph", "archiv", "archiv storag", "sensor data", "distribut index structur", "data separ", "analysi", "flood", "geograph hash tabl", "homogen architectur", "multi-tier sensor network", "flash storag", "metada", "spatial scope", "interv tree", "wireless sensor network", "index method" ]
[ "P", "P", "P", "P", "P", "R", "U", "U", "U", "R", "M", "M", "U", "U", "M", "M", "M" ]
TSAR: A Two Tier Sensor Storage Architecture Using Interval Skip Graphs ∗ Peter Desnoyers, Deepak Ganesan, and Prashant Shenoy Department of Computer Science University of Massachusetts Amherst, MA 01003 pjd@cs.umass.edu, dganesan@cs.umass.edu, shenoy@cs.umass.edu ABSTRACT Archival storage of sensor data is necessary for applications that query, mine, and analyze such data for interesting features and trends. We argue that existing storage systems are designed primarily for flat hierarchies of homogeneous sensor nodes and do not fully exploit the multi-tier nature of emerging sensor networks, where an application can comprise tens of tethered proxies, each managing tens to hundreds of untethered sensors. We present TSAR, a fundamentally different storage architecture that envisions separation of data from metadata by employing local archiving at the sensors and distributed indexing at the proxies. At the proxy tier, TSAR employs a novel multi-resolution ordered distributed index structure, the Interval Skip Graph, for efficiently supporting spatio-temporal and value queries. At the sensor tier, TSAR supports energy-aware adaptive summarization that can trade off the cost of transmitting metadata to the proxies against the overhead of false hits resulting from querying a coarse-grain index. We implement TSAR in a two-tier sensor testbed comprising Stargatebased proxies and Mote-based sensors. Our experiments demonstrate the benefits and feasibility of using our energy-efficient storage architecture in multi-tier sensor networks. Categories and Subject Descriptors: C.2.4 [Computer - Communication Networks]: Distributed Systems General Terms: Algorithms, performance, experimentation. 1. Introduction 1.1 Motivation Many different kinds of networked data-centric sensor applications have emerged in recent years. Sensors in these applications sense the environment and generate data that must be processed, filtered, interpreted, and archived in order to provide a useful infrastructure to its users. To achieve its goals, a typical sensor application needs access to both live and past sensor data. Whereas access to live data is necessary in monitoring and surveillance applications, access to past data is necessary for applications such as mining of sensor logs to detect unusual patterns, analysis of historical trends, and post-mortem analysis of particular events. Archival storage of past sensor data requires a storage system, the key attributes of which are: where the data is stored, whether it is indexed, and how the application can access this data in an energy-efficient manner with low latency. There have been a spectrum of approaches for constructing sensor storage systems. In the simplest, sensors stream data or events to a server for long-term archival storage [3], where the server often indexes the data to permit efficient access at a later time. Since sensors may be several hops from the nearest base station, network costs are incurred; however, once data is indexed and archived, subsequent data accesses can be handled locally at the server without incurring network overhead. In this approach, the storage is centralized, reads are efficient and cheap, while writes are expensive. Further, all data is propagated to the server, regardless of whether it is ever used by the application. An alternate approach is to have each sensor store data or events locally (e.g., in flash memory), so that all writes are local and incur no communication overheads. 
A read request, such as whether an event was detected by a particular sensor, requires a message to be sent to the sensor for processing. More complex read requests are handled by flooding. For instance, determining if an intruder was detected over a particular time interval requires the request to be flooded to all sensors in the system. Thus, in this approach, the storage is distributed, writes are local and inexpensive, while reads incur significant network overheads. Requests that require flooding, due to the lack of an index, are expensive and may waste precious sensor resources, even if no matching data is stored at those sensors. Research efforts such as Directed Diffusion [17] have attempted to reduce these read costs, however, by intelligent message routing. Between these two extremes lie a number of other sensor storage systems with different trade-offs, summarized in Table 1. The geographic hash table (GHT) approach [24, 26] advocates the use of an in-network index to augment the fully distributed nature of sensor storage. In this approach, each data item has a key associated with it, and a distributed or geographic hash table is used to map keys to nodes that store the corresponding data items. Thus, writes cause data items to be sent to the hashed nodes and also trigger updates to the in-network hash table. A read request requires a lookup in the in-network hash table to locate the node that stores the data 39 item; observe that the presence of an index eliminates the need for flooding in this approach. Most of these approaches assume a flat, homogeneous architecture in which every sensor node is energy-constrained. In this paper, we propose a novel storage architecture called TSAR1 that reflects and exploits the multi-tier nature of emerging sensor networks, where the application is comprised of tens of tethered sensor proxies (or more), each controlling tens or hundreds of untethered sensors. TSAR is a component of our PRESTO [8] predictive storage architecture, which combines archival storage with caching and prediction. We believe that a fundamentally different storage architecture is necessary to address the multi-tier nature of future sensor networks. Specifically, the storage architecture needs to exploit the resource-rich nature of proxies, while respecting resource constraints at the remote sensors. No existing sensor storage architecture explicitly addresses this dichotomy in the resource capabilities of different tiers. Any sensor storage system should also carefully exploit current technology trends, which indicate that the capacities of flash memories continue to rise as per Moore``s Law, while their costs continue to plummet. Thus it will soon be feasible to equip each sensor with 1 GB of flash storage for a few tens of dollars. An even more compelling argument is the energy cost of flash storage, which can be as much as two orders of magnitude lower than that for communication. Newer NAND flash memories offer very low write and erase energy costs - our comparison of a 1GB Samsung NAND flash storage [16] and the Chipcon CC2420 802.15.4 wireless radio [4] in Section 6.2 indicates a 1:100 ratio in per-byte energy cost between the two devices, even before accounting for network protocol overheads. These trends, together with the energy-constrained nature of untethered sensors, indicate that local storage offers a viable, energy-efficient alternative to communication in sensor networks. 
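To make the storage-versus-communication argument concrete, here is a minimal back-of-envelope sketch. The energy figures passed in are illustrative placeholders chosen to reflect the roughly 1:100 flash-to-radio per-byte ratio mentioned above, not the measurements reported later in the paper, and the summary ratio is likewise an assumed value.

```python
# Back-of-envelope sketch of the storage-vs-communication tradeoff motivating TSAR.
# All energy figures are caller-supplied assumptions, not measurements from the paper.
def archive_and_summarize_energy(data_bytes, summary_ratio,
                                 flash_nj_per_byte, radio_nj_per_byte):
    """Energy to write data to local flash and transmit only a metadata summary."""
    summary_bytes = data_bytes * summary_ratio
    return data_bytes * flash_nj_per_byte + summary_bytes * radio_nj_per_byte

def stream_to_server_energy(data_bytes, radio_nj_per_byte):
    """Energy to transmit all data to a central store."""
    return data_bytes * radio_nj_per_byte

# Illustrative numbers: radio 100x the per-byte cost of flash, and summaries
# two orders of magnitude smaller than the raw data.
data_bytes = 1_000_000
local = archive_and_summarize_energy(data_bytes, summary_ratio=0.01,
                                     flash_nj_per_byte=1.0, radio_nj_per_byte=100.0)
central = stream_to_server_energy(data_bytes, radio_nj_per_byte=100.0)
print(f"archive locally + summaries: {local:.0f} nJ; stream everything: {central:.0f} nJ")
```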
TSAR exploits these trends by storing data or events locally on the energy-efficient flash storage at each sensor. Sensors send concise identifying information, which we term metadata, to a nearby proxy; depending on the representation used, this metadata may be an order of magnitude or more smaller than the data itself, imposing much lower communication costs. The resource-rich proxies interact with one another to construct a distributed index of the metadata reported from all sensors, and thus an index of the associated data stored at the sensors. This index provides a unified, logical view of the distributed data, and enables an application to query and read past data efficiently - the index is used to pinpoint all data that match a read request, followed by messages to retrieve that data from the corresponding sensors. In-network index lookups are eliminated, reducing network overheads for read requests. This separation of data, which is stored at the sensors, and the metadata, which is stored at the proxies, enables TSAR to reduce energy overheads at the sensors, by leveraging resources at tethered proxies. 1.2 Contributions This paper presents TSAR (Tiered Storage ARchitecture for sensor networks), a novel two-tier storage architecture for sensor networks. To the best of our knowledge, this is the first sensor storage system that is explicitly tailored for emerging multi-tier sensor networks. Our design and implementation of TSAR has resulted in four contributions. First, at the core of the TSAR architecture is a novel distributed index structure based on interval skip graphs that we introduce in this paper. This index structure can store coarse summaries of sensor data and organize them in an ordered manner to be easily searchable. This data structure has O(log n) expected search and update complexity. Further, the index provides a logically unified view of all data in the system. Second, at the sensor level, each sensor maintains a local archive that stores data on flash memory. Our storage architecture is fully stateless at each sensor from the perspective of the metadata index; all index structures are maintained at the resource-rich proxies, and only direct requests or simple queries on explicitly identified storage locations are sent to the sensors. Storage at the remote sensor is in effect treated as an appendage of the proxy, resulting in low implementation complexity, which makes it ideal for small, resource-constrained sensor platforms. Further, the local store is optimized for time-series access to archived data, as is typical in many applications. Each sensor periodically sends a summary of its data to a proxy. TSAR employs a novel adaptive summarization technique that adapts the granularity of the data reported in each summary to the ratio of false hits for application queries. Finer-grained summaries are sent whenever more false positives are observed, thereby balancing the energy cost of metadata updates and false positives. Third, we have implemented a prototype of TSAR on a multi-tier testbed comprising Stargate-based proxies and Mote-based sensors. Our implementation supports spatio-temporal, value, and range-based queries on sensor data. Fourth, we conduct a detailed experimental evaluation of TSAR using a combination of EmStar/EmTOS [10] and our prototype. While our EmStar/EmTOS experiments focus on the scalability of TSAR in larger settings, our prototype evaluation involves latency and energy measurements in a real setting.
Our results demonstrate the logarithmic scaling property of the sparse skip graph and the low latency of end-to-end queries in a duty-cycled multi-hop network. The remainder of this paper is structured as follows. Section 2 presents key design issues that guide our work. Sections 3 and 4 present the proxy-level index and the local archive and summarization at a sensor, respectively. Section 5 discusses our prototype implementation, and Section 6 presents our experimental results. We present related work in Section 7 and our conclusions in Section 8. 2. Design Considerations In this section, we first describe the various components of a multi-tier sensor network assumed in our work. We then present a description of the expected usage models for this system, followed by several principles addressing these factors which guide the design of our storage system. 2.1 System Model We envision a multi-tier sensor network comprising multiple tiers - a bottom tier of untethered remote sensor nodes, a middle tier of tethered sensor proxies, and an upper tier of applications and user terminals (see Figure 1).
Figure 1: Architecture of a multi-tier sensor network (users pose queries over time, space, and value against a unified logical store; proxies maintain the interval skip graph index, a query response cache, and query forwarding; remote sensors keep local data archives on flash memory and report summaries).
Table 1: Characteristics of sensor storage systems
System             | Data              | Index                        | Reads                       | Writes                  | Order preserving
Centralized store  | Centralized       | Centralized index            | Handled at store            | Send to store           | Yes
Local sensor store | Fully distributed | No index                     | Flooding, diffusion         | Local                   | No
GHT/DCS [24]       | Fully distributed | In-network index             | Hash to node                | Send to hashed node     | No
TSAR/PRESTO        | Fully distributed | Distributed index at proxies | Proxy lookup + sensor query | Local plus index update | Yes
The lowest tier is assumed to form a dense deployment of low-power sensors. A canonical sensor node at this tier is equipped with low-power sensors, a micro-controller, and a radio as well as a significant amount of flash memory (e.g., 1GB). The common constraint for this tier is energy, and the need for a long lifetime in spite of a finite energy constraint. The use of radio, processor, RAM, and the flash memory all consume energy, which needs to be limited. In general, we assume radio communication to be substantially more expensive than accesses to flash memory. The middle tier consists of power-rich sensor proxies that have significant computation, memory and storage resources and can use these resources continuously. In urban environments, the proxy tier would comprise tethered base-station-class nodes (e.g., Crossbow Stargate), each with multiple radios - an 802.11 radio that connects it to a wireless mesh network and a low-power radio (e.g. 802.15.4) that connects it to the sensor nodes. In remote sensing applications [10], this tier could comprise a similar Stargate node with a solar power cell. Each proxy is assumed to manage several tens to hundreds of lower-tier sensors in its vicinity. A typical sensor network deployment will contain multiple geographically distributed proxies. For instance, in a building monitoring application, one sensor proxy might be placed per floor or hallway to monitor temperature, heat and light sensors in their vicinity. At the highest tier of our infrastructure are applications that query the sensor network through a query interface [20].
In this work, we focus on applications that require access to past sensor data. To support such queries, the system needs to archive data on a persistent store. Our goal is to design a storage system that exploits the relative abundance of resources at proxies to mask the scarcity of resources at the sensors. 2.2 Usage Models The design of a storage system such as TSAR is affected by the queries that are likely to be posed to it. A large fraction of queries on sensor data can be expected to be spatio-temporal in nature. Sensors provide information about the physical world; two key attributes of this information are when a particular event or activity occurred and where it occurred. Some instances of such queries include the time and location of target or intruder detections (e.g., security and monitoring applications), notifications of specific types of events such as pressure and humidity values exceeding a threshold (e.g., industrial applications), or simple data collection queries which request data from a particular time or location (e.g., weather or environment monitoring). Expected queries of such data include those requesting ranges of one or more attributes; for instance, a query for all image data from cameras within a specified geographic area for a certain period of time. In addition, it is often desirable to support efficient access to data in a way that maintains spatial and temporal ordering. There are several ways of supporting range queries, such as locality-preserving hashes such as are used in DIMS [18]. However, the most straightforward mechanism, and one which naturally provides efficient ordered access, is via the use of order-preserving data structures. Order-preserving structures such as the well-known B-Tree maintain relationships between indexed values and thus allow natural access to ranges, as well as predecessor and successor operations on their key values. Applications may also pose value-based queries that involve determining if a value v was observed at any sensor; the query returns a list of sensors and the times at which they observed this value. Variants of value queries involve restricting the query to a geographical region, or specifying a range (v1, v2) rather than a single value v. Value queries can be handled by indexing on the values reported in the summaries. Specifically, if a sensor reports a numerical value, then the index is constructed on these values. A search involves finding matching values that are either contained in the search range (v1, v2) or match the search value v exactly. Hybrid value and spatio-temporal queries are also possible. Such queries specify a time interval, a value range and a spatial region and request all records that match these attributes - find all instances where the temperature exceeded 100o F at location R during the month of August. These queries require an index on both time and value. In TSAR our focus is on range queries on value or time, with planned extensions to include spatial scoping. 2.3 Design Principles Our design of a sensor storage system for multi-tier networks is based on the following set of principles, which address the issues arising from the system and usage models above. • Principle 1: Store locally, access globally: Current technology allows local storage to be significantly more energyefficient than network communication, while technology trends show no signs of erasing this gap in the near future. 
For maximum network life a sensor storage system should leverage the flash memory on sensors to archive data locally, substituting cheap memory operations for expensive radio transmission. But without efficient mechanisms for retrieval, the energy gains of local storage may be outweighed by communication costs incurred by the application in searching for data. We believe that if the data storage system provides the abstraction of a single logical store to applications, as 41 does TSAR, then it will have additional flexibility to optimize communication and storage costs. • Principle 2: Distinguish data from metadata: Data must be identified so that it may be retrieved by the application without exhaustive search. To do this, we associate metadata with each data record - data fields of known syntax which serve as identifiers and may be queried by the storage system. Examples of this metadata are data attributes such as location and time, or selected or summarized data values. We leverage the presence of resource-rich proxies to index metadata for resource-constrained sensors. The proxies share this metadata index to provide a unified logical view of all data in the system, thereby enabling efficient, low-latency lookups. Such a tier-specific separation of data storage from metadata indexing enables the system to exploit the idiosyncrasies of multi-tier networks, while improving performance and functionality. • Principle 3: Provide data-centric query support: In a sensor application the specific location (i.e. offset) of a record in a stream is unlikely to be of significance, except if it conveys information concerning the location and/or time at which the information was generated. We thus expect that applications will be best served by a query interface which allows them to locate data by value or attribute (e.g. location and time), rather than a read interface for unstructured data. This in turn implies the need to maintain metadata in the form of an index that provides low cost lookups. 2.4 System Design TSAR embodies these design principles by employing local storage at sensors and a distributed index at the proxies. The key features of the system design are as follows: In TSAR, writes occur at sensor nodes, and are assumed to consist of both opaque data as well as application-specific metadata. This metadata is a tuple of known types, which may be used by the application to locate and identify data records, and which may be searched on and compared by TSAR in the course of locating data for the application. In a camera-based sensing application, for instance, this metadata might include coordinates describing the field of view, average luminance, and motion values, in addition to basic information such as time and sensor location. Depending on the application, this metadata may be two or three orders of magnitude smaller than the data itself, for instance if the metadata consists of features extracted from image or acoustic data. In addition to storing data locally, each sensor periodically sends a summary of reported metadata to a nearby proxy. The summary contains information such as the sensor ID, the interval (t1, t2) over which the summary was generated, a handle identifying the corresponding data record (e.g. its location in flash memory), and a coarse-grain representation of the metadata associated with the record. 
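A minimal sketch of such a summary record and the proxy-side overlap test is shown below; the field names are illustrative rather than TSAR's actual message format, but the test captures the no-false-negative guarantee discussed next.

```python
# Sketch of the per-interval summary a sensor reports to its proxy, and the proxy-side
# test used during index lookup. If the actual values lie inside the summarized
# [v_min, v_max] range, any query value they match also matches the summary, so a
# negative test can never hide a real match (false positives remain possible).
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Summary:
    sensor_id: int
    t_range: Tuple[float, float]   # (t1, t2) covered by this summary
    handle: int                    # e.g. offset of the record in the sensor's flash archive
    v_min: float                   # coarse-grain representation of the metadata:
    v_max: float                   # min/max of the values observed in (t1, t2)

def summary_may_match(s: Summary, v_lo: float, v_hi: float) -> bool:
    # Overlap test between the query's value range and the summarized value range.
    return not (v_hi < s.v_min or v_lo > s.v_max)

# A proxy forwards the query to sensor s.sensor_id (with s.handle) only when
# summary_may_match(...) is True; the sensor then resolves any false positive locally.
```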
The precise data representation used in the summary is application-specific; for instance, a temperature sensor might choose to report the maximum and minimum temperature values observed in an interval as a coarse-grain representation of the actual time series. The proxy uses the summary to construct an index; the index is global in that it stores information from all sensors in the system and it is distributed across the various proxies in the system. Thus, applications see a unified view of distributed data, and can query the index at any proxy to get access to data stored at any sensor. Specifically, each query triggers lookups in this distributed index and the list of matches is then used to retrieve the corresponding data from the sensors. There are several distributed index and lookup methods which might be used in this system; however, the index structure described in Section 3 is highly suited for the task. Since the index is constructed using a coarse-grain summary, instead of the actual data, index lookups will yield approximate matches. The TSAR summarization mechanism guarantees that index lookups will never yield false negatives - i.e. it will never miss summaries which include the value being searched for. However, index lookups may yield false positives, where a summary matches the query but when queried the remote sensor finds no matching value, wasting network resources. The more coarse-grained the summary, the lower the update overhead and the greater the fraction of false positives, while finer summaries incur update overhead while reducing query overhead due to false positives. Remote sensors may easily distinguish false positives from queries which result in search hits, and calculate the ratio between the two; based on this ratio, TSAR employs a novel adaptive technique that dynamically varies the granularity of sensor summaries to balance the metadata overhead and the overhead of false positives. 3. Data Structures At the proxy tier, TSAR employs a novel index structure called the Interval Skip Graph, which is an ordered, distributed data structure for finding all intervals that contain a particular point or range of values. Interval skip graphs combine Interval Trees [5], an interval-based binary search tree, with Skip Graphs [1], a ordered, distributed data structure for peer-to-peer systems [13]. The resulting data structure has two properties that make it ideal for sensor networks. First, it has O(log n) search complexity for accessing the first interval that matches a particular value or range, and constant complexity for accessing each successive interval. Second, indexing of intervals rather than individual values makes the data structure ideal for indexing summaries over time or value. Such summary-based indexing is a more natural fit for energyconstrained sensor nodes, since transmitting summaries incurs less energy overhead than transmitting all sensor data. Definitions: We assume that there are Np proxies and Ns sensors in a two-tier sensor network. Each proxy is responsible for multiple sensor nodes, and no assumption is made about the number of sensors per proxy. Each sensor transmits interval summaries of data or events regularly to one or more proxies that it is associated with, where interval i is represented as [lowi, highi]. These intervals can correspond to time or value ranges that are used for indexing sensor data. No assumption is made about the size of an interval or about the amount of overlap between intervals. 
Range queries on the intervals are posed by users to the network of proxies and sensors; each query q needs to determine all index values that overlap the interval [lowq, highq]. The goal of the interval skip graph is to index all intervals such that the set that overlaps a query interval can be located efficiently. In the rest of this section, we describe the interval skip graph in greater detail. 3.1 Skip Graph Overview In order to inform the description of the Interval Skip Graph, we first provide a brief overview of the Skip Graph data structure; for a more extensive description the reader is referred to [1]. Figure 2 shows a skip graph which indexes 8 keys; the keys may be seen along the bottom, and above each key are the pointers associated with that key. Each data element, consisting of a key and its associated pointers, may reside on a different node in the network, and pointers therefore identify both a remote node as well as a data element on that node.
Figure 2: Skip Graph of 8 Elements (keys along the bottom, one chain of pointers per level above each key; each element may reside on a different node, and the figure traces a find(21) request via node-to-node messages).
Figure 3: Interval Skip Graph (each element stores an interval [low, high] and the cumulative maximum of the upper bounds; the figure traces a contains(13) lookup).
Figure 4: Distributed Interval Skip Graph (the same intervals distributed across Nodes 1, 2 and 3).
In this figure we may see the following properties of a skip graph:
• Ordered index: The keys are members of an ordered data type, for instance integers. Lookups make use of ordered comparisons between the search key and existing index entries. In addition, the pointers at the lowest level point directly to the successor of each item in the index.
• In-place indexing: Data elements remain on the nodes where they were inserted, and messages are sent between nodes to establish links between those elements and others in the index.
• Log n height: There are log2 n pointers associated with each element, where n is the number of data elements indexed. Each pointer belongs to a level l in [0 ... log2 n − 1], and together with some other pointers at that level forms a chain of n/2^l elements.
• Probabilistic balance: Rather than relying on re-balancing operations which may be triggered at insert or delete, skip graphs implement a simple random balancing mechanism which maintains close to perfect balance on average, with an extremely low probability of significant imbalance.
• Redundancy and resiliency: Each data element forms an independent search tree root, so searches may begin at any node in the network, eliminating hot spots at a single search root. In addition the index is resilient against node failure; data on the failed node will not be accessible, but remaining data elements will be accessible through search trees rooted on other nodes.
In Figure 2 we see the process of searching for a particular value in a skip graph. The pointers reachable from a single data element form a binary tree: a pointer traversal at the highest level skips over n/2 elements, n/4 at the next level, and so on. Search consists of descending the tree from the highest level to level 0, at each level comparing the target key with the next element at that level and deciding whether or not to traverse. In the perfectly balanced case shown here there are log2 n levels of pointers, and search will traverse 0 or 1 pointers at each level.
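The level-by-level descent above is the same search used in an ordinary skip list; the Python sketch below shows that search logic on a single node. It deliberately ignores the properties that make a skip graph distributed (per-element membership vectors, multiple roots), so it illustrates the search only, not TSAR's implementation.

import random

class SkipList:
    """Single-node sketch of the level-descent search used by skip graphs."""
    MAX_LEVEL = 8

    def __init__(self):
        # The head has one forward pointer per level; None marks the end of a chain.
        self.head = {'key': None, 'next': [None] * self.MAX_LEVEL}

    def insert(self, key):
        # Choose a height by coin flips (the probabilistic balance of Section 3.1).
        height = 1
        while height < self.MAX_LEVEL and random.random() < 0.5:
            height += 1
        node = {'key': key, 'next': [None] * height}
        cur = self.head
        for level in range(self.MAX_LEVEL - 1, -1, -1):
            while cur['next'][level] is not None and cur['next'][level]['key'] < key:
                cur = cur['next'][level]
            if level < height:                       # splice into this level's chain
                node['next'][level] = cur['next'][level]
                cur['next'][level] = node

    def find(self, key):
        """Descend from the top level, traversing 0 or more pointers per level."""
        cur = self.head
        for level in range(self.MAX_LEVEL - 1, -1, -1):
            while cur['next'][level] is not None and cur['next'][level]['key'] < key:
                cur = cur['next'][level]
        cur = cur['next'][0]                         # successor at the lowest level
        return cur if cur is not None and cur['key'] == key else None

s = SkipList()
for k in [1, 7, 9, 13, 17, 21, 25, 31]:              # the eight keys of Figure 2
    s.insert(k)
assert s.find(21) is not None and s.find(22) is None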
We assume that each data element resides on a different node, and measure search cost by the number of messages sent (i.e. the number of pointers traversed); this will clearly be O(log n). Tree update proceeds from the bottom, as in a B-Tree, with the root(s) being promoted in level as the tree grows. In this way, for instance, the two chains at level 1 always contain n/2 entries each, and there is never a need to split chains as the structure grows. The update process then consists of choosing which of the 2^l chains to insert an element into at each level l, and inserting it in the proper place in each chain. Maintaining a perfectly balanced skip graph as shown in Figure 2 would be quite complex; instead, the probabilistic balancing method introduced in Skip Lists [23] is used, which trades off a small amount of overhead in the expected case in return for simple update and deletion. The basis for this method is the observation that any element which belongs to a particular chain at level l can only belong to one of two chains at level l+1. To insert an element we ascend levels starting at 0, randomly choosing one of the two possible chains at each level, and stopping when we reach an empty chain. One means of implementation (e.g. as described in [1]) is to assign each element an arbitrarily long random bit string. Each chain at level l is then constructed from those elements whose bit strings match in the first l bits, thus creating 2^l possible chains at each level and ensuring that each chain splits into exactly two chains at the next level. Although the resulting structure is not perfectly balanced, following the analysis in [23] we can show that the probability of it being significantly out of balance is extremely small; in addition, since the structure is determined by the random number stream, input data patterns cannot cause the tree to become imbalanced. 3.2 Interval Skip Graph A skip graph is designed to store single-valued entries. In this section, we introduce a novel data structure that extends skip graphs to store intervals [lowi, highi] and allows efficient searches for all intervals covering a value v, i.e. {i : lowi ≤ v ≤ highi}. Our data structure can be extended to range searches in a straightforward manner. The interval skip graph is constructed by applying the method of augmented search trees, as described by Cormen, Leiserson, and Rivest [5] and applied to binary search trees to create an Interval Tree. The method is based on the observation that a search structure based on comparison of ordered keys, such as a binary tree, may also be used to search on a secondary key which is non-decreasing in the first key. Given a set of intervals sorted by lower bound - lowi ≤ lowi+1 - we define the secondary key as the cumulative maximum, maxi = max_{k=0...i}(highk). The set of intervals intersecting a value v may then be found by searching for the first interval (and thus the interval with least lowi) such that maxi ≥ v. We then traverse intervals in increasing order of lower bound, until we find the first interval with lowi > v, selecting those intervals which intersect v. Using this approach we augment the skip graph data structure, as shown in Figure 3, so that each entry stores a range (lower bound and upper bound) and a secondary key (cumulative maximum of upper bound). To efficiently calculate the secondary key maxi for an entry i, we take the greatest of highi and the maximum values reported by each of i's left-hand neighbors.
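This construction, and the lookup procedure described in the next paragraph, can be illustrated on a single node with an ordinary sorted Python list standing in for the skip graph; the intervals below are those of Figure 3, and the resulting cumulative maxima (5, 14, 14, 16, 23, 23, 27, 30) match the max field shown there. This is a sketch of the indexing logic only and does not model distribution or message costs.

import bisect

def build_index(intervals):
    """intervals: (low, high) pairs. Returns entries sorted by low, each carrying
    the cumulative maximum of the upper bounds of all entries up to and including it."""
    index, running_max = [], float('-inf')
    for low, high in sorted(intervals):
        running_max = max(running_max, high)
        index.append((low, high, running_max))      # (lowi, highi, maxi)
    return index

def stabbing_query(index, v):
    """Return all intervals [low, high] with low <= v <= high."""
    # The cumulative maxima are non-decreasing, so the first entry with maxi >= v
    # can be found by binary search (the skip graph performs this step in O(log n)).
    maxes = [m for _, _, m in index]
    i = bisect.bisect_left(maxes, v)
    matches = []
    # Traverse in increasing order of lowi until lowi > v.
    while i < len(index) and index[i][0] <= v:
        low, high, _ = index[i]
        if low <= v <= high:
            matches.append((low, high))
        i += 1
    return matches

idx = build_index([(2, 5), (6, 14), (9, 12), (14, 16),
                   (15, 23), (18, 19), (20, 27), (21, 30)])
print(stabbing_query(idx, 13))                       # [(6, 14)], as in Figure 3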
To search for those intervals containing the value v, we first search for v on the secondary index, maxi, and locate the first entry with maxi ≥ v. (By the definition of maxi, for this data element maxi = highi.) If lowi > v, then this interval does not contain v, and no other intervals will, either, so we are done. Otherwise we traverse the index in increasing order of lowi, returning matching intervals, until we reach an entry with lowi > v and we are done. Searches for all intervals which overlap a query range, or which completely contain a query range, are straightforward extensions of this mechanism. Lookup Complexity: Lookup for the first interval that matches a given value is performed in a manner very similar to an interval tree. The complexity of search is O(log n). The number of intervals that match a range query can vary depending on the amount of overlap in the intervals being indexed, as well as the range specified in the query. Insert Complexity: In an interval tree or interval skip list, the maximum value for an entry need only be calculated over the subtree rooted at that entry, as this value will be examined only when searching within the subtree rooted at that entry. For a simple interval skip graph, however, this maximum value for an entry must be computed over all entries preceding it in the index, as searches may begin anywhere in the data structure, rather than at a distinguished root element. It may easily be seen that in the worst case the insertion of a single interval (one that covers all existing intervals in the index) will trigger the update of all entries in the index, for a worst-case insertion cost of O(n). 3.3 Sparse Interval Skip Graph The final extensions we propose take advantage of the difference between the number of items indexed in a skip graph and the number of systems on which these items are distributed. The cost in network messages of an operation may be reduced by arranging the data structure so that most structure traversals occur locally on a single node, and thus incur zero network cost. In addition, since both congestion and failure occur on a per-node basis, we may eliminate links without adverse consequences if those links only contribute to load distribution and/or resiliency within a single node. These two modifications allow us to achieve reductions in asymptotic complexity of both update and search. As may be seen in Section 3.2, insert and delete cost on an interval skip graph has a worst-case complexity of O(n), compared to O(log n) for an interval tree. The main reason for the difference is that skip graphs have a full search structure rooted at each element, in order to distribute load and provide resilience to system failures in a distributed setting. However, in order to provide load distribution and failure resilience it is only necessary to provide a full search structure for each system. If, as in TSAR, the number of nodes (proxies) is much smaller than the number of data elements (data summaries indexed), then this will result in significant savings. Implementation: To construct a sparse interval skip graph, we ensure that there is a single distinguished element on each system, the root element for that system; all searches will start at one of these root elements.
When adding a new element, rather than splitting lists at increasing levels l until the element is in a list with no others, we stop when we find that the element would be in a list containing no root elements, thus ensuring that the element is reachable from all root elements. An example of applying this optimization may be seen in Figure 5. (In practice, rather than designating existing data elements as roots, as shown, it may be preferable to insert null values at startup.) When using the technique of membership vectors as in [1], this may be done by broadcasting the membership vectors of each root element to all other systems, and stopping insertion of an element at level l when it does not share an l-bit prefix with any of the Np root elements. The expected number of roots sharing a log2Np-bit prefix is 1, giving an expected expected height for each element of log2Np +O(1). An alternate implementation, which distributes information concerning root elements at pointer establishment time, is omitted due to space constraints; this method eliminates the need for additional messages. Performance: In a (non-interval) sparse skip graph, since the expected height of an inserted element is now log2 Np + O(1), expected insertion complexity is O(log Np), rather than O(log n), where Np is the number of root elements and thus the number of separate systems in the network. (In the degenerate case of a single system we have a skip list; with splitting probability 0.5 the expected height of an individual element is 1.) Note that since searches are started at root elements of expected height log2 n, search complexity is not improved. For an interval sparse skip graph, update performance is improved considerably compared to the O(n) worst case for the nonsparse case. In an augmented search structure such as this, an element only stores information for nodes which may be reached from that element-e.g. the subtree rooted at that element, in the case of a tree. Thus, when updating the maximum value in an interval tree, the update is only propagated towards the root. In a sparse interval skip graph, updates to a node only propagate towards the Np root elements, for a worst-case cost of Np log2 n. Shortcut search: When beginning a search for a value v, rather than beginning at the root on that proxy, we can find the element that is closest to v (e.g. using a secondary local index), and then begin the search at that element. The expected distance between this element and the search terminus is log2 Np, and the search will now take on average log2 Np + O(1) steps. To illustrate this optimization, in Figure 4 depending on the choice of search root, a search for [21, 30] beginning at node 2 may take 3 network hops, traversing to node 1, then back to node 2, and finally to node 3 where the destination is located, for a cost of 3 messages. The shortcut search, however, locates the intermediate data element on node 2, and then proceeds directly to node 3 for a cost of 1 message. Performance: This technique may be applied to the primary key search which is the first of two insertion steps in an interval skip graph. By combining the short-cut optimization with sparse interval skip graphs, the expected cost of insertion is now O(log Np), independent of the size of the index or the degree of overlap of the inserted intervals. 3.4 Alternative Data Structures Thus far we have only compared the sparse interval skip graph with similar structures from which it is derived. 
A comparison with several other data structures which meet at least some of the requirements for the TSAR index is shown in Table 2.
Table 2: Comparison of Distributed Index Structures
Structure | Range Query Support | Interval Representation | Re-balancing | Resilience | Small Networks | Large Networks
DHT, GHT | no | no | no | yes | good | good
Local index, flood query | yes | yes | no | yes | good | bad
P-tree, RP* (distributed B-Trees) | yes | possible | yes | no | good | good
DIMS | yes | no | yes | yes | yes | yes
Interval Skipgraph | yes | yes | no | yes | good | good
Figure 5: Sparse Interval Skip Graph (the intervals of Figure 3 distributed across two nodes, with the designated root elements marked).
The hash-based systems, DHT [25] and GHT [26], lack the ability to perform range queries and are thus not well-suited to indexing spatio-temporal data. Indexing locally using an appropriate single-node structure and then flooding queries to all proxies is a competitive alternative for small networks; for large networks the linear dependence on the number of proxies becomes an issue. Two distributed B-Trees were examined - P-Trees [6] and RP* [19]. Each of these supports range queries, and in theory could be modified to support indexing of intervals; however, they both require complex re-balancing, and do not provide the resilience characteristics of the other structures. DIMS [18] provides the ability to perform spatio-temporal range queries, and has the necessary resilience to failures; however, it cannot be used to index intervals, which are used by TSAR's data summarization algorithm. 4. Data Storage and Summarization Having described the proxy-level index structure, we turn to the mechanisms at the sensor tier. TSAR implements two key mechanisms at the sensor tier. The first is a local archival store at each sensor node that is optimized for resource-constrained devices. The second is an adaptive summarization technique that enables each sensor to adapt to changing data and query characteristics. The rest of this section describes these mechanisms in detail. 4.1 Local Storage at Sensors Interval skip graphs provide an efficient mechanism to look up sensor nodes containing data relevant to a query. These queries are then routed to the sensors, which locate the relevant data records in the local archive and respond back to the proxy. To enable such lookups, each sensor node in TSAR maintains an archival store of sensor data. While the implementation of such an archival store is straightforward on resource-rich devices that can run a database, sensors are often power and resource-constrained. Consequently, the sensor archiving subsystem in TSAR is explicitly designed to exploit characteristics of sensor data in a resource-constrained setting.
Figure 6: Single storage record (fields: timestamp, calibration parameters, data/event attributes, size, and an opaque data payload).
Sensor data has very distinct characteristics that inform our design of the TSAR archival store. Sensors produce time-series data streams, and therefore, temporal ordering of data is a natural and simple way of storing archived sensor data. In addition to simplicity, a temporally ordered store is often suitable for many sensor data processing tasks since they involve time-series data processing. Examples include signal processing operations such as FFT, wavelet transforms, clustering, similarity matching, and target detection. Consequently, the local archival store is a collection of records, designed as an append-only circular buffer, where new records are appended to the tail of the buffer. The format of each data record is shown in Figure 6.
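A byte-level rendition of such a record might look like the following sketch; the field widths, ordering, and the use of Python's struct module are assumptions for illustration, since the text only prescribes the fields of Figure 6 and notes that the data payload is opaque and variable-length.

import struct

# Hypothetical on-flash record layout; the real TSAR format is not specified
# beyond the fields of Figure 6, so widths and ordering here are assumptions.
HEADER = struct.Struct('<IHHH')   # timestamp, sensor settings, calibration, data length

def pack_record(timestamp, settings, calibration, data):
    """Serialize one append-only archive record (variable-length data field)."""
    return HEADER.pack(timestamp, settings, calibration, len(data)) + data

def unpack_record(buf, offset=0):
    """Read one record starting at `offset`; returns (fields, next_offset)."""
    ts, settings, calib, length = HEADER.unpack_from(buf, offset)
    start = offset + HEADER.size
    data = buf[start:start + length]
    return (ts, settings, calib, data), start + length

rec = pack_record(1234567, 0x01, 0x2A, b'\x10\x11\x12')
fields, nxt = unpack_record(rec)
assert fields[3] == b'\x10\x11\x12' and nxt == len(rec)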
Each record has a metadata field which includes a timestamp, sensor settings, calibration parameters, etc. Raw sensor data is stored in the data field of the record. The data field is opaque and application-specific: the storage system does not know or care about interpreting this field. A camera-based sensor, for instance, may store binary images in this data field. In order to support a variety of applications, TSAR supports variable-length data fields; as a result, record sizes can vary from one record to another. Our archival store supports three operations on records: create, read, and delete. Due to the append-only nature of the store, creation of records is simple and efficient. The create operation simply creates a new record and appends it to the tail of the store. Since records are always written at the tail, the store need not maintain a free space list. All fields of the record need to be specified at creation time; thus, the size of the record is known a priori and the store simply allocates the corresponding number of bytes at the tail to store the record. Since writes are immutable, the size of a record does not change once it is created.
Figure 7: Sensor Summarization (each sensor keeps a local archive in flash memory; for each chunk of records it sends the proxy a summary carrying a time interval and the start/end offsets of that chunk, and the proxy inserts these summaries into the interval skip graph).
The read operation enables stored records to be retrieved in order to answer queries. In a traditional database system, efficient lookups are enabled by maintaining a structure such as a B-tree that indexes certain keys of the records. However, this can be quite complex for a small sensor node with limited resources. Consequently, TSAR sensors do not maintain any index for the data stored in their archive. Instead, they rely on the proxies to maintain this metadata index: sensors periodically send the proxy information summarizing the data contained in a contiguous sequence of records, as well as a handle indicating the location of these records in flash memory. The mechanism works as follows: In addition to the summary of sensor data, each node sends metadata to the proxy containing the time interval corresponding to the summary, as well as the start and end offsets of the flash memory location where the corresponding raw data is stored (as shown in Figure 7). Thus, random access is enabled at the granularity of a summary: the start offset of each chunk of records represented by a summary is known to the proxy. Within this collection, records are accessed sequentially. When a query matches a summary in the index, the sensor uses these offsets to access the relevant records on its local flash by sequentially reading data from the start address until the end address. Any query-specific operation can then be performed on this data. Thus, no index needs to be maintained at the sensor, in line with our goal of simplifying sensor state management. The state of the archive is captured in the metadata associated with the summaries, and is stored and maintained at the proxy. While we anticipate local storage capacity to be large, eventually there might be a need to overwrite older data, especially in high data rate applications. This may be done via techniques such as multi-resolution storage of data [9], or simply by overwriting older data.
When older data is overwritten, a delete operation is performed, where an index entry is deleted from the interval skip graph at the proxy and the corresponding storage space in flash memory at the sensor is freed. 4.2 Adaptive Summarization The data summaries serve as glue between the storage at the remote sensor and the index at the proxy. Each update from a sensor to the proxy includes three pieces of information: the summary, a time period corresponding to the summary, and the start and end offsets for the flash archive. In general, the proxy can index the time interval representing a summary or the value range reported in the summary (or both). The former index enables quick lookups on all records seen during a certain interval, while the latter index enables quick lookups on all records matching a certain value. As described in Section 2.4, there is a trade-off between the energy used in sending summaries (and thus the frequency and resolution of those summaries) and the cost of false hits during queries. The coarser and less frequent the summary information, the less energy required, while false query hits in turn waste energy on requests for non-existent data. TSAR employs an adaptive summarization technique that balances the cost of sending updates against the cost of false positives. The key intuition is that each sensor can independently identify the fraction of false hits and true hits for queries that access its local archive. If most queries result in true hits, then the sensor determines that the summary can be coarsened further to reduce update costs without adversely impacting the hit ratio. If many queries result in false hits, then the sensor makes the granularity of each summary finer to reduce the number and overhead of false hits. The resolution of the summary depends on two parameters: the interval over which summaries of the data are constructed and transmitted to the proxy, as well as the size of the application-specific summary. Our focus in this paper is on the interval over which the summary is constructed. Changing the size of the data summary can be performed in an application-specific manner (e.g. using wavelet compression techniques as in [9]) and is beyond the scope of this paper. Currently, TSAR employs a simple summarization scheme that computes the ratio of false and true hits and decreases (increases) the interval between summaries whenever this ratio increases (decreases) beyond a threshold.
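A minimal sketch of this policy is shown below, assuming unit costs for one summary update and one false hit and a simple doubling/halving step; none of these constants come from the paper, and the hysteresis band mirrors the ε rule quantified later in Section 6.4. The knob adjusted here is the number of records covered by one summary, which is how Section 6.4 defines granularity.

class AdaptiveSummarizer:
    """Sketch of TSAR's adaptive summarization (Sections 4.2 and 6.4)."""

    def __init__(self, records_per_summary=8, epsilon=0.1,
                 update_cost=1.0, false_hit_cost=1.0):
        self.records_per_summary = records_per_summary
        self.epsilon = epsilon                  # hysteresis band around a ratio of 1
        self.update_cost = update_cost          # assumed relative cost of one summary
        self.false_hit_cost = false_hit_cost    # assumed cost of one false hit
        self.summaries_sent = 0
        self.false_hits = 0

    def on_summary_sent(self):
        self.summaries_sent += 1

    def on_query(self, matched_local_data):
        if not matched_local_data:
            self.false_hits += 1                # sensors detect false positives locally

    def adapt(self):
        """Coarsen summaries if update cost dominates, refine them if false hits dominate."""
        if self.false_hits == 0:
            ratio = float('inf')
        else:
            ratio = (self.summaries_sent * self.update_cost) / \
                    (self.false_hits * self.false_hit_cost)
        if ratio > 1 + self.epsilon:
            self.records_per_summary *= 2                         # coarser, fewer updates
        elif ratio < 1 - self.epsilon:
            self.records_per_summary = max(1, self.records_per_summary // 2)  # finer
        self.summaries_sent = self.false_hits = 0                 # new measurement window
        return self.records_per_summary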
The proxy nodes can be equipped with external storage such as high-capacity compact flash (up to 4GB), 6GB micro-drives, or up to 60GB 1.8-inch mobile disk drives. Since sensor nodes may be several hops away from the nearest proxy, the sensor tier employs multi-hop routing to communicate with the proxy tier. In addition, to reduce the power consumption of the radio while still making the sensor node available for queries, low power listening is enabled, in which the radio receiver is periodically powered up for a short interval to sense the channel for transmissions, and the packet preamble is extended to account for the latency until the next interval when the receiving radio wakes up. Our prototype employs the MultiHopLEPSM routing protocol with the BMAC layer configured in the low-power mode with an 11% duty cycle (one of the default BMAC [22] parameters). Our TSAR implementation on the Mote involves a data gathering task that periodically obtains sensor readings and logs these readings to flash memory. The flash memory is assumed to be a circular append-only store and the format of the logged data is depicted in Figure 6. The Mote sends a report to the proxy every N readings, summarizing the observed data. The report contains: (i) the address of the Mote, (ii) a handle that contains an offset and the length of the region in flash memory containing data referred to by the summary, (iii) an interval (t1, t2) over which this report is generated, (iv) a tuple (low, high) representing the minimum and the maximum values observed at the sensor in the interval, and (v) a sequence number. The sensor updates are used to construct a sparse interval skip graph that is distributed across proxies, via network messages between proxies over the 802.11b wireless network. Our current implementation supports queries that request records matching a time interval (t1, t2) or a value range (v1, v2). Spatial constraints are specified using sensor IDs. Given a list of matching intervals from the skip graph, TSAR supports two types of messages to query the sensor: lookup and fetch. A lookup message triggers a search within the corresponding region in flash memory and returns the number of matching records in that memory region (but does not retrieve data). In contrast, a fetch message not only triggers a search but also returns all matching data records to the proxy. Lookup messages are useful for polling a sensor, for instance, to determine if a query matches too many records. 6. Experimental Evaluation In this section, we evaluate the efficacy of TSAR using our prototype and simulations. The testbed for our experiments consists of four Stargate proxies and twelve Mica2 and Mica2dot sensors; three sensors are assigned to each proxy. Given the limited size of our testbed, we employ simulations to evaluate the behavior of TSAR in larger settings. Our simulation employs the EmTOS emulator [10], which enables us to run the same code in simulation and on the hardware platform.
Rather than using live data from a real sensor, to ensure repeatable experiments, we seed each sensor node with a dataset (i.e., a trace) that dictates the values reported by that node to the proxy. One section of the flash memory on each sensor node is programmed with data points from the trace; these observations are then replayed during an experiment, logged to the local archive (located in flash memory, as well), and reported to the proxy. The first dataset used to evaluate TSAR is a temperature dataset from James Reserve [27] that includes data from eleven temperature sensor nodes over a period of 34 days. The second dataset is synthetically generated; the trace for each sensor is generated using a uniformly distributed random walk through the value space. Our experimental evaluation has four parts. First, we run EmTOS simulations to evaluate the lookup, update and delete overhead for sparse interval skip graphs using the real and synthetic datasets. Second, we provide summary results from micro-benchmarks of the storage component of TSAR, which include empirical characterization of the energy costs and latency of reads and writes for the flash memory chip as well as the whole mote platform, and comparisons to published numbers for other storage and communication technologies. These micro-benchmarks form the basis for our full-scale evaluation of TSAR on a testbed of four Stargate proxies and twelve Motes. We measure the end-to-end query latency in our multi-hop testbed as well as the query processing overhead at the mote tier. Finally, we demonstrate the adaptive summarization capability at each sensor node. The remainder of this section presents our experimental results. 6.1 Sparse Interval Skip Graph Performance This section evaluates the performance of sparse interval skip graphs by quantifying insert, lookup and delete overheads. We assume a proxy tier with 32 proxies and construct sparse interval skip graphs of various sizes using our datasets. For each skip graph, we evaluate the cost of inserting a new value into the index. Each entry was deleted after its insertion, enabling us to quantify the delete overhead as well.
Figure 8: Skip Graph Insert Performance. (a) James Reserve data, (b) synthetic data: number of messages for insert (skip graph and sparse skip graph) and for the initial lookup, versus index size (entries).
Figure 8(a) and (b) quantify the insert overhead for our two datasets: each insert entails an initial traversal that incurs log n messages, followed by neighbor pointer updates at increasing levels, incurring a cost of 4 log n messages. Our results demonstrate this behavior, and show as well that the performance of delete - which also involves an initial traversal followed by pointer updates at each level - incurs a similar cost. Next, we evaluate the lookup performance of the index structure. Again, we construct skip graphs of various sizes using our datasets and evaluate the cost of a lookup on the index structure. Figures 9(a) and (b) depict our results.
Figure 9: Skip Graph Lookup Performance. (a) James Reserve data, (b) synthetic data: number of messages for the initial lookup and the subsequent traversal, versus index size (entries).
Figure 10: Skip Graph Overheads. (a) Impact of the number of proxies: messages per skip graph insert, sparse skip graph insert, and initial lookup, versus the number of proxies. (b) Impact of redundant summaries: messages per insert and lookup, with and without redundancy, versus index size (entries).
There are two components for each lookup: the lookup of the first interval that matches the query and, in the case of overlapping intervals, the subsequent linear traversal to identify all matching intervals. The initial lookup can be seen to take log n messages, as expected. The costs of the subsequent linear traversal, however, are highly data dependent. For instance, temperature values for the James Reserve data exhibit significant spatial correlations, resulting in significant overlap between different intervals and variable, high traversal cost (see Figure 9(a)). The synthetic data, however, has less overlap and incurs lower traversal overhead as shown in Figure 9(b). Since the previous experiments assumed 32 proxies, we evaluate the impact of the number of proxies on skip graph performance. We vary the number of proxies from 10 to 48 and distribute a skip graph with 4096 entries among these proxies. We construct regular interval skip graphs as well as sparse interval skip graphs using these entries and measure the overhead of inserts and lookups. Thus, the experiment also seeks to demonstrate the benefits of sparse skip graphs over regular skip graphs. Figure 10(a) depicts our results. In regular skip graphs, the complexity of insert is O(log2 n) in the expected case (and O(n) in the worst case), where n is the number of elements. This complexity is unaffected by changing the number of proxies, as indicated by the flat line in the figure. Sparse skip graphs require fewer pointer updates; however, their overhead is dependent on the number of proxies, and is O(log2 Np) in the expected case, independent of n. This can be seen to result in a significant reduction in overhead when the number of proxies is small, which decreases as the number of proxies increases. Failure handling is an important issue in a multi-tier sensor architecture since it relies on many components: proxies, sensor nodes and routing nodes can fail, and wireless links can fade. Handling of many of these failure modes is outside the scope of this paper; however, we consider the case of resilience of skip graphs to proxy failures. In this case, skip graph search (and subsequent repair operations) can follow any one of the other links from a root element. Since a sparse skip graph has search trees rooted at each node, searching can then resume once the lookup request has routed around the failure. Together, these two properties ensure that even if a proxy fails, the remaining entries in the skip graph will be reachable with high probability; only the entries on the failed proxy and the corresponding data at the sensors become inaccessible. To ensure that all data on sensors remains accessible, even in the event of failure of a proxy holding index entries for that data, we incorporate redundant index entries. TSAR employs a simple redundancy scheme where additional coarse-grain summaries are used to protect regular summaries. Each sensor sends summary data periodically to its local proxy, but less frequently sends a lower-resolution summary to a backup proxy; the backup summary represents all of the data represented by the finer-grained summaries, but in a lossier fashion, thus resulting in higher read overhead (due to false hits) if the backup summary is used. The cost of implementing this in our system is low: Figure 10(b) shows the overhead of such a redundancy scheme, where a single coarse summary is sent to a backup for every two summaries sent to the primary proxy.
Since a redundant summary is sent for every two summaries, the insert cost is 1.5 times the cost in the normal case. However, these redundant entries result in only a negligible increase in lookup overhead, due to the logarithmic dependence of lookup cost on the index size, while providing full resilience to any single proxy failure. 6.2 Storage Microbenchmarks Since sensors are resource-constrained, the energy consumption and the latency at this tier are important measures for evaluating the performance of a storage architecture. Before performing an end-to-end evaluation of our system, we provide more detailed information on the energy consumption of the storage component used to implement the TSAR local archive, based on empirical measurements. In addition we compare these figures to those for other local storage technologies, as well as to the energy consumption of wireless communication, using information from the literature. For empirical measurements we measure energy usage for the storage component itself (i.e. current drawn by the flash chip), as well as for the entire Mica2 mote. The power measurements in Table 3 were performed for the AT45DB041 [15] flash memory on a Mica2 mote, which is an older NOR flash device. The most promising technology for low-energy storage on sensing devices is NAND flash, such as the Samsung K9K4G08U0M device [16]; published power numbers for this device are provided in the table. Published energy requirements for wireless transmission using the Chipcon [4] CC2420 radio (used in MicaZ and Telos motes) are provided for comparison, assuming zero network and protocol overhead.
Table 3: Storage and Communication Energy Costs (*measured values)
Device and operation | Energy | Energy/byte
Mote flash, read 256 byte page | 58µJ* / 136µJ* total | 0.23µJ*
Mote flash, write 256 byte page | 926µJ* / 1042µJ* total | 3.6µJ*
NAND flash, read 512 byte page | 2.7µJ | 1.8nJ
NAND flash, write 512 byte page | 7.8µJ | 15nJ
NAND flash, erase 16K byte sector | 60µJ | 3.7nJ
CC2420 radio, transmit 8 bits (-25dBm) | 0.8µJ | 0.8µJ
CC2420 radio, receive 8 bits | 1.9µJ | 1.9µJ
Mote AVR processor, in-memory search of 256 bytes | 1.8µJ | 6.9nJ
Figure 11: Query Processing Latency. (a) Multi-hop query performance: latency (ms) versus number of hops. (b) Query performance: latency (ms) versus index size (entries), broken into sensor communication, proxy communication, and sensor lookup/processing.
Comparing the total energy cost for writing flash (erase + write) to the total cost for communication (transmit + receive), we find that the NAND flash is almost 150 times more efficient than radio communication, even assuming perfect network protocols. 6.3 Prototype Evaluation This section reports results from an end-to-end evaluation of the TSAR prototype involving both tiers. In our setup, there are four proxies connected via 802.11 links and three sensors per proxy. The multi-hop topology was preconfigured such that sensor nodes were connected in a line to each proxy, forming a minimal tree of depth 3.
Figure 12: Query Latency Components. (a) Data query and fetch time: retrieval latency (ms) versus archived data retrieved (bytes). (b) Sensor query processing delay: latency (ms) versus the number of 34-byte records searched.
Due to resource constraints we were unable to perform experiments with dozens of sensor nodes; however, this topology ensured that the network diameter was as large as for a typical network of significantly larger size. Our evaluation metric is the end-to-end latency of query processing.
A query posed on TSAR first incurs the latency of a sparse skip graph lookup, followed by routing to the appropriate sensor node(s). The sensor node reads the required page(s) from its local archive, processes the query on the page that is read, and transmits the response to the proxy, which then forwards it to the user. We first measure query latency for different sensors in our multi-hop topology. Depending on which of the sensors is queried, the total latency increases almost linearly from about 400ms to 1 second, as the number of hops increases from 1 to 3 (see Figure 11(a)). Figure 11(b) provides a breakdown of the various components of the end-to-end latency. The dominant component of the total latency is the communication over one or more hops. The typical time to communicate over one hop is approximately 300ms. This large latency is primarily due to the use of a duty-cycled MAC layer; the latency will be larger if the duty cycle is reduced (e.g. the 2% setting as opposed to the 11.5% setting used in this experiment), and will conversely decrease if the duty cycle is increased. The figure also shows the latency for varying index sizes; as expected, the latency of inter-proxy communication and skip graph lookups increases logarithmically with index size. Not surprisingly, the overhead seen at the sensor is independent of the index size. The latency also depends on the number of packets transmitted in response to a query-the larger the amount of data retrieved by a query, the greater the latency. This result is shown in Figure 12(a). The step function is due to packetization in TinyOS; TinyOS sends one packet so long as the payload is smaller than 30 bytes and splits the response into multiple packets for larger payloads. As the data retrieved by a query is increased, the latency increases in steps, where each step denotes the overhead of an additional packet. Finally, Figure 12(b) shows the impact of searching and processing flash memory regions of increasing sizes on a sensor. Each summary represents a collection of records in flash memory, and all of these records need to be retrieved and processed if that summary matches a query. The coarser the summary, the larger the memory region that needs to be accessed. For the search sizes examined, amortization of overhead when searching multiple flash pages and archival records, as well as within the flash chip and its associated driver, results in the appearance of sub-linear increase in latency with search size. In addition, the operation can be seen to have very low latency, in part due to the simplicity of our query processing, requiring only a compare operation with each stored element. More complex operations, however, will of course incur greater latency. 6.4 Adaptive Summarization When data is summarized by the sensor before being reported to the proxy, information is lost. With the interval summarization method we are using, this information loss will never cause the proxy to believe that a sensor node does not hold a value which it in fact does, as all archived values will be contained within the interval reported. However, it does cause the proxy to believe that the sensor may hold values which it does not, and forward query messages to the sensor for these values. These false positives constitute the cost of the summarization mechanism, and need to be balanced against the savings achieved by reducing the number of reports. 
The goal of adaptive summarization is to dynamically vary the summary size so that these two costs are balanced.
Figure 13: Impact of Summarization Granularity. (a) Impact of summary size: fraction of true hits versus summary size (number of records). (b) Adaptation to query rate: summarization size (number of records) over time, for query rates of 0.2, 0.1 and 0.03.
Figure 13(a) demonstrates the impact of summary granularity on false hits. As the number of records included in a summary is increased, the fraction of queries forwarded to the sensor which match data held on that sensor (true positives) decreases. Next, in Figure 13(b) we run an EmTOS simulation with our adaptive summarization algorithm enabled. The adaptive algorithm increases the summary granularity (defined as the number of records per summary) when Cost(updates)/Cost(false hits) > 1 + ε, and reduces it if Cost(updates)/Cost(false hits) < 1 − ε, where ε is a small constant. To demonstrate the adaptive nature of our technique, we plot a time series of the summarization granularity. We begin with a query rate of 1 query per 5 samples, decrease it to 1 every 30 samples, and then increase it again to 1 query every 10 samples. As shown in Figure 13(b), the adaptive technique adjusts accordingly by sending more fine-grain summaries at higher query rates (in response to the higher false hit rate), and fewer, coarse-grain summaries at lower query rates. 7. Related Work In this section, we review prior work on storage and indexing techniques for sensor networks. While our work addresses both problems jointly, much prior work has considered them in isolation. The problem of archival storage of sensor data has received limited attention in the sensor network literature. ELF [7] is a log-structured file system for local storage on flash memory that provides load leveling, and Matchbox is a simple file system that is packaged with the TinyOS distribution [14]. Both these systems focus on local storage, whereas our focus is both on storage at the remote sensors as well as on providing a unified view of distributed data across all such local archives. Multi-resolution storage [9] is intended for in-network storage and search in systems where there is significant data in comparison to storage resources. In contrast, TSAR addresses the problem of archival storage in two-tier systems where sufficient resources can be placed at the edge sensors. The RISE platform [21] being developed as part of the NODE project at UCR addresses the issues of hardware platform support for large amounts of storage in remote sensor nodes, but not the indexing and querying of this data. In order to efficiently access a distributed sensor store, an index needs to be constructed of the data. Early work on sensor networks such as Directed Diffusion [17] assumes a system where all useful sensor data is stored locally at each sensor, and spatially scoped queries are routed using geographic co-ordinates to locations where the data is stored. Sources publish the events that they detect, and sinks with interest in specific events can subscribe to these events. The Directed Diffusion substrate routes queries to specific locations if the query has geographic information embedded in it (e.g., find temperature in the south-west quadrant), and if not, the query is flooded throughout the network.
These schemes had the drawback that for queries that are not geographically scoped, search cost (O(n) for a network of n nodes) may be prohibitive in large networks with frequent queries. Local storage with in-network indexing approaches address this issue by constructing indexes using frameworks such as Geographic Hash Tables [24] and Quad Trees [9]. Recent research has seen a growing body of work on data indexing schemes for sensor networks[26][11][18]. One such scheme is DCS [26], which provides a hash function for mapping from event name to location. DCS constructs a distributed structure that groups events together spatially by their named type. Distributed Index of Features in Sensornets (DIFS [11]) and Multi-dimensional Range Queries in Sensor Networks (DIM [18]) extend the data-centric storage approach to provide spatially distributed hierarchies of indexes to data. While these approaches advocate in-network indexing for sensor networks, we believe that indexing is a task that is far too complicated to be performed at the remote sensor nodes since it involves maintaining significant state and large tables. TSAR provides a better match between resource requirements of storage and indexing and the availability of resources at different tiers. Thus complex operations such as indexing and managing metadata are performed at the proxies, while storage at the sensor remains simple. In addition to storage and indexing techniques specific to sensor networks, many distributed, peer-to-peer and spatio-temporal index structures are relevant to our work. DHTs [25] can be used for indexing events based on their type, quad-tree variants such as Rtrees [12] can be used for optimizing spatial searches, and K-D trees [2] can be used for multi-attribute search. While this paper focuses on building an ordered index structure for range queries, we will explore the use of other index structures for alternate queries over sensor data. 8. Conclusions In this paper, we argued that existing sensor storage systems are designed primarily for flat hierarchies of homogeneous sensor nodes and do not fully exploit the multi-tier nature of emerging sensor networks. We presented the design of TSAR, a fundamentally different storage architecture that envisions separation of data from metadata by employing local storage at the sensors and distributed indexing at the proxies. At the proxy tier, TSAR employs a novel multi-resolution ordered distributed index structure, the Sparse Interval Skip Graph, for efficiently supporting spatio-temporal and range queries. At the sensor tier, TSAR supports energy-aware adaptive summarization that can trade-off the energy cost of transmitting metadata to the proxies against the overhead of false hits resulting from querying a coarser resolution index structure. We implemented TSAR in a two-tier sensor testbed comprising Stargatebased proxies and Mote-based sensors. Our experimental evaluation of TSAR demonstrated the benefits and feasibility of employing our energy-efficient low-latency distributed storage architecture in multi-tier sensor networks. 9. REFERENCES [1] James Aspnes and Gauri Shah. Skip graphs. In Fourteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 384-393, Baltimore, MD, USA, 12-14 January 2003. [2] Jon Louis Bentley. Multidimensional binary search trees used for associative searching. Commun. ACM, 18(9):509-517, 1975. [3] Philippe Bonnet, J. E. Gehrke, and Praveen Seshadri. Towards sensor database systems. 
In Proceedings of the Second International Conference on Mobile Data Management., January 2001. [4] Chipcon. CC2420 2.4 GHz IEEE 802.15.4 / ZigBee-ready RF transceiver, 2004. [5] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms. The MIT Press and McGraw-Hill, second edition edition, 2001. [6] Adina Crainiceanu, Prakash Linga, Johannes Gehrke, and Jayavel Shanmugasundaram. Querying Peer-to-Peer Networks Using P-Trees. Technical Report TR2004-1926, Cornell University, 2004. [7] Hui Dai, Michael Neufeld, and Richard Han. ELF: an efficient log-structured flash file system for micro sensor nodes. In SenSys ``04: Proceedings of the 2nd international conference on Embedded networked sensor systems, pages 176-187, New York, NY, USA, 2004. ACM Press. [8] Peter Desnoyers, Deepak Ganesan, Huan Li, and Prashant Shenoy. PRESTO: A predictive storage architecture for sensor networks. In Tenth Workshop on Hot Topics in Operating Systems (HotOS X). , June 2005. [9] Deepak Ganesan, Ben Greenstein, Denis Perelyubskiy, Deborah Estrin, and John Heidemann. An evaluation of multi-resolution storage in sensor networks. In Proceedings of the First ACM Conference on Embedded Networked Sensor Systems (SenSys). , 2003. [10] L. Girod, T. Stathopoulos, N. Ramanathan, J. Elson, D. Estrin, E. Osterweil, and T. Schoellhammer. A system for simulation, emulation, and deployment of heterogeneous sensor networks. In Proceedings of the Second ACM Conference on Embedded Networked Sensor Systems, Baltimore, MD, 2004. [11] B. Greenstein, D. Estrin, R. Govindan, S. Ratnasamy, and S. Shenker. DIFS: A distributed index for features in sensor networks. Elsevier Journal of ad-hoc Networks, 2003. [12] Antonin Guttman. R-trees: a dynamic index structure for spatial searching. In SIGMOD ``84: Proceedings of the 1984 ACM SIGMOD international conference on Management of data, pages 47-57, New York, NY, USA, 1984. ACM Press. [13] Nicholas Harvey, Michael B. Jones, Stefan Saroiu, Marvin Theimer, and Alec Wolman. Skipnet: A scalable overlay network with practical locality properties. In In proceedings of the 4th USENIX Symposium on Internet Technologies and Systems (USITS ``03), Seattle, WA, March 2003. [14] Jason Hill, Robert Szewczyk, Alec Woo, Seth Hollar, David Culler, and Kristofer Pister. System architecture directions for networked sensors. In Proceedings of the Ninth International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS-IX), pages 93-104, Cambridge, MA, USA, November 2000. ACM. [15] Atmel Inc. 4-megabit 2.5-volt or 2.7-volt DataFlash AT45DB041B, 2005. [16] Samsung Semiconductor Inc.. K9W8G08U1M, K9K4G08U0M: 512M x 8 bit / 1G x 8 bit NAND flash memory, 2003. [17] Chalermek Intanagonwiwat, Ramesh Govindan, and Deborah Estrin. Directed diffusion: A scalable and robust communication paradigm for sensor networks. In Proceedings of the Sixth Annual International Conference on Mobile Computing and Networking, pages 56-67, Boston, MA, August 2000. ACM Press. [18] Xin Li, Young-Jin Kim, Ramesh Govindan, and Wei Hong. Multi-dimensional range queries in sensor networks. In Proceedings of the First ACM Conference on Embedded Networked Sensor Systems (SenSys). , 2003. to appear. [19] Witold Litwin, Marie-Anne Neimat, and Donovan A. Schneider. RP*: A family of order preserving scalable distributed data structures. 
In VLDB ``94: Proceedings of the 20th International Conference on Very Large Data Bases, pages 342-353, San Francisco, CA, USA, 1994. [20] Samuel Madden, Michael Franklin, Joseph Hellerstein, and Wei Hong. TAG: a tiny aggregation service for ad-hoc sensor networks. In OSDI, Boston, MA, 2002. [21] A. Mitra, A. Banerjee, W. Najjar, D. Zeinalipour-Yazti, D.Gunopulos, and V. Kalogeraki. High performance, low power sensor platforms featuring gigabyte scale storage. In SenMetrics 2005: Third International Workshop on Measurement, Modeling, and Performance Analysis of Wireless Sensor Networks, July 2005. [22] J. Polastre, J. Hill, and D. Culler. Versatile low power media access for wireless sensor networks. In Proceedings of the Second ACM Conference on Embedded Networked Sensor Systems (SenSys), November 2004. [23] William Pugh. Skip lists: a probabilistic alternative to balanced trees. Commun. ACM, 33(6):668-676, 1990. [24] S. Ratnasamy, D. Estrin, R. Govindan, B. Karp, L. Yin S. Shenker, and F. Yu. Data-centric storage in sensornets. In ACM First Workshop on Hot Topics in Networks, 2001. [25] S. Ratnasamy, P. Francis, M. Handley, R. Karp, and S. Shenker. A scalable content addressable network. In Proceedings of the 2001 ACM SIGCOMM Conference, 2001. [26] S. Ratnasamy, B. Karp, L. Yin, F. Yu, D. Estrin, R. Govindan, and S. Shenker. GHT - a geographic hash-table for data-centric storage. In First ACM International Workshop on Wireless Sensor Networks and their Applications, 2002. [27] N. Xu, E. Osterweil, M. Hamilton, and D. Estrin. http://www.lecs.cs.ucla.edu/˜nxu/ess/. James Reserve Data. 50
TSAR: A Two Tier Sensor Storage Architecture Using Interval Skip Graphs*
*This work was supported in part by National Science Foundation grants EEC-0313747, CNS-0325868 and EIA-0098060.

ABSTRACT

Archival storage of sensor data is necessary for applications that query, mine, and analyze such data for interesting features and trends. We argue that existing storage systems are designed primarily for flat hierarchies of homogeneous sensor nodes and do not fully exploit the multi-tier nature of emerging sensor networks, where an application can comprise tens of tethered proxies, each managing tens to hundreds of untethered sensors. We present TSAR, a fundamentally different storage architecture that envisions separation of data from metadata by employing local archiving at the sensors and distributed indexing at the proxies. At the proxy tier, TSAR employs a novel multi-resolution ordered distributed index structure, the Interval Skip Graph, for efficiently supporting spatio-temporal and value queries. At the sensor tier, TSAR supports energy-aware adaptive summarization that can trade off the cost of transmitting metadata to the proxies against the overhead of false hits resulting from querying a coarse-grain index. We implement TSAR in a two-tier sensor testbed comprising Stargate-based proxies and Mote-based sensors. Our experiments demonstrate the benefits and feasibility of using our energy-efficient storage architecture in multi-tier sensor networks.

1. Introduction

1.1 Motivation

Many different kinds of networked data-centric sensor applications have emerged in recent years. Sensors in these applications sense the environment and generate data that must be processed, filtered, interpreted, and archived in order to provide a useful infrastructure to its users. To achieve its goals, a typical sensor application needs access to both live and past sensor data. Whereas access to live data is necessary in monitoring and surveillance applications, access to past data is necessary for applications such as mining of sensor logs to detect unusual patterns, analysis of historical trends, and post-mortem analysis of particular events. Archival storage of past sensor data requires a storage system, the key attributes of which are: where the data is stored, whether it is indexed, and how the application can access this data in an energy-efficient manner with low latency.

There is a spectrum of approaches for constructing sensor storage systems. In the simplest, sensors stream data or events to a server for long-term archival storage [3], where the server often indexes the data to permit efficient access at a later time. Since sensors may be several hops from the nearest base station, network costs are incurred; however, once data is indexed and archived, subsequent data accesses can be handled locally at the server without incurring network overhead. In this approach, the storage is centralized, reads are efficient and cheap, while writes are expensive. Further, all data is propagated to the server, regardless of whether it is ever used by the application. An alternate approach is to have each sensor store data or events locally (e.g., in flash memory), so that all writes are local and incur no communication overheads. A read request, such as whether an event was detected by a particular sensor, requires a message to be sent to the sensor for processing. More complex read requests are handled by flooding.
For instance, determining if an intruder was detected over a particular time interval requires the request to be flooded to all sensors in the system. Thus, in this approach, the storage is distributed, writes are local and inexpensive, while reads incur significant network overheads. Requests that require flooding, due to the lack of an index, are expensive and may waste precious sensor resources, even if no matching data is stored at those sensors. Research efforts such as Directed Diffusion [17] have attempted to reduce these read costs, however, by intelligent message routing. Between these two extremes lie a number of other sensor storage systems with different trade-offs, summarized in Table 1. The geographic hash table (GHT) approach [24, 26] advocates the use of an in-network index to augment the fully distributed nature of sensor storage. In this approach, each data item has a key associated with it, and a distributed or geographic hash table is used to map keys to nodes that store the corresponding data items. Thus, writes cause data items to be sent to the hashed nodes and also trigger updates to the in-network hash table. A read request requires a lookup in the in-network hash table to locate the node that stores the data item; observe that the presence of an index eliminates the need for flooding in this approach. Most of these approaches assume a flat, homogeneous architecture in which every sensor node is energy-constrained. In this paper, we propose a novel storage architecture called TSAR1 that reflects and exploits the multi-tier nature of emerging sensor networks, where the application is comprised of tens of tethered sensor proxies (or more), each controlling tens or hundreds of untethered sensors. TSAR is a component of our PRESTO [8] predictive storage architecture, which combines archival storage with caching and prediction. We believe that a fundamentally different storage architecture is necessary to address the multi-tier nature of future sensor networks. Specifically, the storage architecture needs to exploit the resource-rich nature of proxies, while respecting resource constraints at the remote sensors. No existing sensor storage architecture explicitly addresses this dichotomy in the resource capabilities of different tiers. Any sensor storage system should also carefully exploit current technology trends, which indicate that the capacities of flash memories continue to rise as per Moore's Law, while their costs continue to plummet. Thus it will soon be feasible to equip each sensor with 1 GB of flash storage for a few tens of dollars. An even more compelling argument is the energy cost of flash storage, which can be as much as two orders of magnitude lower than that for communication. Newer NAND flash memories offer very low write and erase energy costs--our comparison of a 1GB Samsung NAND flash storage [16] and the Chipcon CC2420 802.15.4 wireless radio [4] in Section 6.2 indicates a 1:100 ratio in per-byte energy cost between the two devices, even before accounting for network protocol overheads. These trends, together with the energy-constrained nature of untethered sensors, indicate that local storage offers a viable, energy-efficient alternative to communication in sensor networks. TSAR exploits these trends by storing data or events locally on the energy-efficient flash storage at each sensor. 
Sensors send concise identifying information, which we term metadata, to a nearby proxy; depending on the representation used, this metadata may be an order of magnitude or more smaller than the data itself, imposing much lower communication costs. The resource-rich proxies interact with one another to construct a distributed index of the metadata reported from all sensors, and thus an index of the associated data stored at the sensors. This index provides a unified, logical view of the distributed data, and enables an application to query and read past data efficiently--the index is used to pinpoint all data that match a read request, followed by messages to retrieve that data from the corresponding sensors. In-network index lookups are eliminated, reducing network overheads for read requests. This separation of the data, which is stored at the sensors, from the metadata, which is stored at the proxies, enables TSAR to reduce energy overheads at the sensors by leveraging resources at tethered proxies.

1.2 Contributions

This paper presents TSAR (Tiered Storage ARchitecture for sensor networks), a novel two-tier storage architecture for sensor networks. To the best of our knowledge, this is the first sensor storage system that is explicitly tailored for emerging multi-tier sensor networks. Our design and implementation of TSAR have resulted in four contributions.

First, at the core of the TSAR architecture is a novel distributed index structure based on interval skip graphs that we introduce in this paper. This index structure can store coarse summaries of sensor data and organize them in an ordered manner to be easily searchable. This data structure has O(log n) expected search and update complexity. Further, the index provides a logically unified view of all data in the system.

Second, at the sensor level, each sensor maintains a local archive that stores data on flash memory. Our storage architecture is fully stateless at each sensor from the perspective of the metadata index; all index structures are maintained at the resource-rich proxies, and only direct requests or simple queries on explicitly identified storage locations are sent to the sensors. Storage at the remote sensor is in effect treated as an appendage of the proxy, resulting in low implementation complexity, which makes it ideal for small, resource-constrained sensor platforms. Further, the local store is optimized for time-series access to archived data, as is typical in many applications. Each sensor periodically sends a summary of its data to a proxy. TSAR employs a novel adaptive summarization technique that adapts the granularity of the data reported in each summary to the ratio of false hits for application queries. Finer-grain summaries are sent whenever more false positives are observed, thereby balancing the energy cost of metadata updates and false positives.

Third, we have implemented a prototype of TSAR on a multi-tier testbed comprising Stargate-based proxies and Mote-based sensors. Our implementation supports spatio-temporal, value, and range-based queries on sensor data.

Fourth, we conduct a detailed experimental evaluation of TSAR using a combination of EmStar/EmTOS [10] and our prototype. While our EmStar/EmTOS experiments focus on the scalability of TSAR in larger settings, our prototype evaluation involves latency and energy measurements in a real setting.
Our results demonstrate the logarithmic scaling property of the sparse skip graph and the low latency of end-to-end queries in a duty-cycled multi-hop network.

The remainder of this paper is structured as follows. Section 2 presents key design issues that guide our work. Sections 3 and 4 present the proxy-level index and the local archive and summarization at a sensor, respectively. Section 5 discusses our prototype implementation, and Section 6 presents our experimental results. We present related work in Section 7 and our conclusions in Section 8.

2. Design Considerations

In this section, we first describe the various components of a multi-tier sensor network assumed in our work. We then present a description of the expected usage models for this system, followed by several principles addressing these factors which guide the design of our storage system.

2.1 System Model

We envision a sensor network comprising multiple tiers--a bottom tier of untethered remote sensor nodes, a middle tier of tethered sensor proxies, and an upper tier of applications and user terminals (see Figure 1).

The lowest tier is assumed to form a dense deployment of low-power sensors. A canonical sensor node at this tier is equipped with low-power sensors, a micro-controller, and a radio, as well as a significant amount of flash memory (e.g., 1GB). The common constraint for this tier is energy, and the need for a long lifetime in spite of a finite energy budget. The radio, processor, RAM, and flash memory all consume energy, which needs to be limited. In general, we assume radio communication to be substantially more expensive than accesses to flash memory.

Table 1: Characteristics of sensor storage systems
Figure 1: Architecture of a multi-tier sensor network.

The middle tier consists of power-rich sensor proxies that have significant computation, memory and storage resources and can use these resources continuously. In urban environments, the proxy tier would comprise tethered base-station-class nodes (e.g., Crossbow Stargate), each with multiple radios--an 802.11 radio that connects it to a wireless mesh network and a low-power radio (e.g., 802.15.4) that connects it to the sensor nodes. In remote sensing applications [10], this tier could comprise a similar Stargate node with a solar power cell. Each proxy is assumed to manage several tens to hundreds of lower-tier sensors in its vicinity. A typical sensor network deployment will contain multiple geographically distributed proxies. For instance, in a building monitoring application, one sensor proxy might be placed per floor or hallway to monitor the temperature, heat and light sensors in its vicinity.

At the highest tier of our infrastructure are applications that query the sensor network through a query interface [20]. In this work, we focus on applications that require access to past sensor data. To support such queries, the system needs to archive data on a persistent store. Our goal is to design a storage system that exploits the relative abundance of resources at proxies to mask the scarcity of resources at the sensors.

2.2 Usage Models

The design of a storage system such as TSAR is affected by the queries that are likely to be posed to it. A large fraction of queries on sensor data can be expected to be spatio-temporal in nature. Sensors provide information about the physical world; two key attributes of this information are when a particular event or activity occurred and where it occurred.
Some instances of such queries include the time and location of target or intruder detections (e.g., security and monitoring applications), notifications of specific types of events such as pressure and humidity values exceeding a threshold (e.g., industrial applications), or simple data collection queries which request data from a particular time or location (e.g., weather or environment monitoring). Expected queries of such data include those requesting ranges of one or more attributes; for instance, a query for all image data from cameras within a specified geographic area for a certain period of time. In addition, it is often desirable to support efficient access to data in a way that maintains spatial and temporal ordering. There are several ways of supporting range queries, such as the locality-preserving hashes used in DIMS [18]. However, the most straightforward mechanism, and one which naturally provides efficient ordered access, is the use of order-preserving data structures. Order-preserving structures such as the well-known B-Tree maintain relationships between indexed values and thus allow natural access to ranges, as well as predecessor and successor operations on their key values.

Applications may also pose value-based queries that involve determining if a value v was observed at any sensor; the query returns a list of sensors and the times at which they observed this value. Variants of value queries involve restricting the query to a geographical region, or specifying a range (v1, v2) rather than a single value v. Value queries can be handled by indexing on the values reported in the summaries. Specifically, if a sensor reports a numerical value, then the index is constructed on these values. A search involves finding matching values that are either contained in the search range (v1, v2) or match the search value v exactly.

Hybrid value and spatio-temporal queries are also possible. Such queries specify a time interval, a value range and a spatial region and request all records that match these attributes--"find all instances where the temperature exceeded 100°F at location R during the month of August". These queries require an index on both time and value. In TSAR our focus is on range queries on value or time, with planned extensions to include spatial scoping.

2.3 Design Principles

Our design of a sensor storage system for multi-tier networks is based on the following set of principles, which address the issues arising from the system and usage models above.

• Principle 1: Store locally, access globally: Current technology allows local storage to be significantly more energy-efficient than network communication, while technology trends show no signs of erasing this gap in the near future. For maximum network life a sensor storage system should leverage the flash memory on sensors to archive data locally, substituting cheap memory operations for expensive radio transmission. But without efficient mechanisms for retrieval, the energy gains of local storage may be outweighed by communication costs incurred by the application in searching for data. We believe that if the data storage system provides the abstraction of a single logical store to applications, as does TSAR, then it will have additional flexibility to optimize communication and storage costs.

• Principle 2: Distinguish data from metadata: Data must be identified so that it may be retrieved by the application without exhaustive search.
To do this, we associate metadata with each data record--data fields of known syntax which serve as identifiers and may be queried by the storage system. Examples of this metadata are data attributes such as location and time, or selected or summarized data values. We leverage the presence of resource-rich proxies to index metadata for resource-constrained sensors. The proxies share this metadata index to provide a unified logical view of all data in the system, thereby enabling efficient, low-latency lookups. Such a tier-specific separation of data storage from metadata indexing enables the system to exploit the idiosyncrasies of multi-tier networks, while improving performance and functionality.

• Principle 3: Provide data-centric query support: In a sensor application the specific location (i.e., offset) of a record in a stream is unlikely to be of significance, except if it conveys information concerning the location and/or time at which the information was generated. We thus expect that applications will be best served by a query interface which allows them to locate data by value or attribute (e.g., location and time), rather than a read interface for unstructured data. This in turn implies the need to maintain metadata in the form of an index that provides low-cost lookups.

2.4 System Design

TSAR embodies these design principles by employing local storage at sensors and a distributed index at the proxies. The key features of the system design are as follows.

In TSAR, writes occur at sensor nodes, and are assumed to consist of both opaque data as well as application-specific metadata. This metadata is a tuple of known types, which may be used by the application to locate and identify data records, and which may be searched on and compared by TSAR in the course of locating data for the application. In a camera-based sensing application, for instance, this metadata might include coordinates describing the field of view, average luminance, and motion values, in addition to basic information such as time and sensor location. Depending on the application, this metadata may be two or three orders of magnitude smaller than the data itself, for instance if the metadata consists of features extracted from image or acoustic data.
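To make the shape of such a record concrete, the following minimal sketch shows one plausible in-memory representation. The field names are hypothetical, drawn from the camera example above rather than from TSAR's actual record layout; the data field remains opaque to the storage system, while the metadata tuple is what gets summarized and indexed at the proxy.

```python
from dataclasses import dataclass
from typing import Tuple

# Illustrative record for a camera-based sensing application. Field names
# are assumptions for illustration, not TSAR's wire or storage format.
@dataclass
class SensorRecord:
    timestamp: float               # when the observation was made
    sensor_id: int                 # which sensor produced it
    location: Tuple[float, float]  # where the sensor is located
    avg_luminance: float           # application-specific metadata
    motion_value: float            # application-specific metadata
    data: bytes = b""              # opaque, variable-length payload

record = SensorRecord(timestamp=1125465600.0, sensor_id=7,
                      location=(33.8, -116.8), avg_luminance=0.42,
                      motion_value=0.91, data=b"<binary image bytes>")
print(record.timestamp, len(record.data))
```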
There are several distributed index and lookup methods which might be used in this system; however, the index structure described in Section 3 is highly suited for the task.

Since the index is constructed using a coarse-grain summary, instead of the actual data, index lookups will yield approximate matches. The TSAR summarization mechanism guarantees that index lookups will never yield false negatives, i.e., it will never miss summaries which include the value being searched for. However, index lookups may yield false positives, where a summary matches the query but, when queried, the remote sensor finds no matching value, wasting network resources. The more coarse-grained the summary, the lower the update overhead and the greater the fraction of false positives, while finer summaries incur higher update overhead but reduce the query overhead due to false positives. Remote sensors can easily distinguish false positives from queries which result in search hits, and calculate the ratio between the two; based on this ratio, TSAR employs a novel adaptive technique that dynamically varies the granularity of sensor summaries to balance the metadata overhead and the overhead of false positives.

3. Data Structures

At the proxy tier, TSAR employs a novel index structure called the Interval Skip Graph, which is an ordered, distributed data structure for finding all intervals that contain a particular point or range of values. Interval skip graphs combine Interval Trees [5], interval-based binary search trees, with Skip Graphs [1], ordered, distributed data structures for peer-to-peer systems [13]. The resulting data structure has two properties that make it ideal for sensor networks. First, it has O(log n) search complexity for accessing the first interval that matches a particular value or range, and constant complexity for accessing each successive interval. Second, indexing of intervals rather than individual values makes the data structure ideal for indexing summaries over time or value. Such summary-based indexing is a more natural fit for energy-constrained sensor nodes, since transmitting summaries incurs less energy overhead than transmitting all sensor data.

Definitions: We assume that there are N_p proxies and N_s sensors in a two-tier sensor network. Each proxy is responsible for multiple sensor nodes, and no assumption is made about the number of sensors per proxy. Each sensor transmits interval summaries of data or events regularly to one or more proxies that it is associated with, where interval i is represented as [low_i, high_i]. These intervals can correspond to time or value ranges that are used for indexing sensor data. No assumption is made about the size of an interval or about the amount of overlap between intervals. Range queries on the intervals are posed by users to the network of proxies and sensors; each query q needs to determine all index values that overlap the interval [low_q, high_q]. The goal of the interval skip graph is to index all intervals such that the set that overlaps a query interval can be located efficiently. In the rest of this section, we describe the interval skip graph in greater detail.

3.1 Skip Graph Overview

In order to inform the description of the Interval Skip Graph, we first provide a brief overview of the Skip Graph data structure; for a more extensive description the reader is referred to [1]. Figure 2 shows a skip graph which indexes 8 keys; the keys may be seen along the bottom, and above each key are the pointers associated with that key.
Figure 2: Skip Graph of 8 Elements
Figure 3: Interval Skip Graph
Figure 4: Distributed Interval Skip Graph

Each data element, consisting of a key and its associated pointers, may reside on a different node in the network, and pointers therefore identify both a remote node as well as a data element on that node. In this figure we may see the following properties of a skip graph:

• Ordered index: The keys are members of an ordered data type, for instance integers. Lookups make use of ordered comparisons between the search key and existing index entries. In addition, the pointers at the lowest level point directly to the successor of each item in the index.

• In-place indexing: Data elements remain on the nodes where they were inserted, and messages are sent between nodes to establish links between those elements and others in the index.

• log2 n height: There are log2 n pointers associated with each element, where n is the number of data elements indexed. Each pointer belongs to a level l in [0 ... log2 n - 1], and together with some other pointers at that level forms a chain of n/2^l elements.

• Probabilistic balance: Rather than relying on re-balancing operations which may be triggered at insert or delete, skip graphs implement a simple random balancing mechanism which maintains close to perfect balance on average, with an extremely low probability of significant imbalance.

• Redundancy and resiliency: Each data element forms an independent search tree root, so searches may begin at any node in the network, eliminating hot spots at a single search root. In addition the index is resilient against node failure; data on the failed node will not be accessible, but remaining data elements will be accessible through search trees rooted on other nodes.

In Figure 2 we see the process of searching for a particular value in a skip graph. The pointers reachable from a single data element form a binary tree: a pointer traversal at the highest level skips over n/2 elements, n/4 at the next level, and so on. Search consists of descending the tree from the highest level to level 0, at each level comparing the target key with the next element at that level and deciding whether or not to traverse. In the perfectly balanced case shown here there are log2 n levels of pointers, and search will traverse 0 or 1 pointers at each level. We assume that each data element resides on a different node, and measure search cost by the number of messages sent (i.e., the number of pointers traversed); this will clearly be O(log n).

Tree update proceeds from the bottom, as in a B-Tree, with the root(s) being promoted in level as the tree grows. In this way, for instance, the two chains at level 1 always contain n/2 entries each, and there is never a need to split chains as the structure grows. The update process then consists of choosing which of the 2^l chains to insert an element into at each level l, and inserting it in the proper place in each chain. Maintaining a perfectly balanced skip graph as shown in Figure 2 would be quite complex; instead, the probabilistic balancing method introduced in Skip Lists [23] is used, which trades off a small amount of overhead in the expected case in return for simple update and deletion. The basis for this method is the observation that any element which belongs to a particular chain at level l can only belong to one of two chains at level l + 1.
To insert an element we ascend levels starting at 0, randomly choosing one of the two possible chains at each level, and stopping when we reach an empty chain. One means of implementation (e.g., as described in [1]) is to assign each element an arbitrarily long random bit string. Each chain at level l is then constructed from those elements whose bit strings match in the first l bits, thus creating 2^l possible chains at each level and ensuring that each chain splits into exactly two chains at the next level. Although the resulting structure is not perfectly balanced, following the analysis in [23] we can show that the probability of it being significantly out of balance is extremely small; in addition, since the structure is determined by the random number stream, input data patterns cannot cause the tree to become imbalanced.

3.2 Interval Skip Graph

A skip graph is designed to store single-valued entries. In this section, we introduce a novel data structure that extends skip graphs to store intervals [low_i, high_i] and allows efficient searches for all intervals covering a value v, i.e., {i : low_i ≤ v ≤ high_i}. Our data structure can be extended to range searches in a straightforward manner.

The interval skip graph is constructed by applying the method of augmented search trees, as described by Cormen, Leiserson, and Rivest [5] and applied to binary search trees to create an Interval Tree. The method is based on the observation that a search structure based on comparison of ordered keys, such as a binary tree, may also be used to search on a secondary key which is non-decreasing in the first key. Given a set of intervals sorted by lower bound--low_i ≤ low_{i+1}--we define the secondary key as the cumulative maximum, max_i = max_{k=0...i}(high_k). The set of intervals intersecting a value v may then be found by searching for the first interval (and thus the interval with least low_i) such that max_i ≥ v. We then traverse intervals in increasing order of lower bound, until we find the first interval with low_i > v, selecting those intervals which intersect v.

Using this approach we augment the skip graph data structure, as shown in Figure 3, so that each entry stores a range (lower bound and upper bound) and a secondary key (cumulative maximum of upper bound). To efficiently calculate the secondary key max_i for an entry i, we take the greatest of high_i and the maximum values reported by each of i's left-hand neighbors. To search for those intervals containing the value v, we first search for v on the secondary index, max_i, and locate the first entry with max_i ≥ v (by the definition of max_i, for this data element max_i = high_i). If low_i > v, then this interval does not contain v, and no other intervals will either, so we are done. Otherwise we traverse the index in increasing order of low_i, returning matching intervals, until we reach an entry with low_i > v, and we are done. Searches for all intervals which overlap a query range, or which completely contain a query range, are straightforward extensions of this mechanism.

Lookup Complexity: Lookup for the first interval that matches a given value is performed in a manner very similar to an interval tree. The complexity of search is O(log n). The number of intervals that match a range query can vary depending on the amount of overlap in the intervals being indexed, as well as the range specified in the query.
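The stabbing-query logic above is easy to check on a single machine. The sketch below is a centralized stand-in: a sorted Python list and bisect take the place of the distributed skip-graph descent, and it is intended only to illustrate the cumulative-maximum secondary key and the two-phase search, not TSAR's implementation.

```python
import bisect

def build_index(intervals):
    """Sort intervals by lower bound and compute the cumulative maximum
    of the upper bounds, i.e. the secondary key max_i described above."""
    ivs = sorted(intervals)                  # sorted by (low, high)
    cummax, running = [], float("-inf")
    for low, high in ivs:
        running = max(running, high)
        cummax.append(running)
    return ivs, cummax

def stabbing_query(ivs, cummax, v):
    """Return all intervals [low, high] with low <= v <= high."""
    # Phase 1: find the first entry whose cumulative max reaches v
    # (in TSAR this step is a skip-graph descent rather than a bisect).
    i = bisect.bisect_left(cummax, v)
    matches = []
    # Phase 2: scan forward in increasing order of lower bound until the
    # lower bound exceeds v; collect the entries that actually cover v.
    while i < len(ivs) and ivs[i][0] <= v:
        low, high = ivs[i]
        if high >= v:
            matches.append((low, high))
        i += 1
    return matches

ivs, cummax = build_index([(1, 5), (2, 3), (4, 9), (7, 8), (10, 12)])
print(stabbing_query(ivs, cummax, 4))        # [(1, 5), (4, 9)]
```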
Insert Complexity: In an interval tree or interval skip list, the maximum value for an entry need only be calculated over the subtree rooted at that entry, as this value will be examined only when searching within the subtree rooted at that entry. For a simple interval skip graph, however, this maximum value for an entry must be computed over all entries preceding it in the index, as searches may begin anywhere in the data structure, rather than at a distinguished root element. It may easily be seen that in the worst case the insertion of a single interval (one that covers all existing intervals in the index) will trigger the update of all entries in the index, for a worst-case insertion cost of O(n).

3.3 Sparse Interval Skip Graph

The final extensions we propose take advantage of the difference between the number of items indexed in a skip graph and the number of systems on which these items are distributed. The cost in network messages of an operation may be reduced by arranging the data structure so that most structure traversals occur locally on a single node, and thus incur zero network cost. In addition, since both congestion and failure occur on a per-node basis, we may eliminate links without adverse consequences if those links only contribute to load distribution and/or resiliency within a single node. These two modifications allow us to achieve reductions in the asymptotic complexity of both update and search.

As may be seen in Section 3.2, insert and delete on an interval skip graph have a worst-case complexity of O(n), compared to O(log n) for an interval tree. The main reason for the difference is that skip graphs have a full search structure rooted at each element, in order to distribute load and provide resilience to system failures in a distributed setting. However, in order to provide load distribution and failure resilience it is only necessary to provide a full search structure for each system. If, as in TSAR, the number of nodes (proxies) is much smaller than the number of data elements (data summaries indexed), then this will result in significant savings.

Implementation: To construct a sparse interval skip graph, we ensure that there is a single distinguished element on each system, the root element for that system; all searches will start at one of these root elements. When adding a new element, rather than splitting lists at increasing levels l until the element is in a list with no others, we stop when we find that the element would be in a list containing no root elements, thus ensuring that the element is reachable from all root elements. An example of applying this optimization may be seen in Figure 5. (In practice, rather than designating existing data elements as roots, as shown, it may be preferable to insert null values at startup.) When using the technique of membership vectors as in [1], this may be done by broadcasting the membership vectors of each root element to all other systems, and stopping insertion of an element at level l when it does not share an l-bit prefix with any of the N_p root elements. The expected number of roots sharing a (log2 N_p)-bit prefix is 1, giving an expected height for each element of log2 N_p + O(1). An alternate implementation, which distributes information concerning root elements at pointer establishment time, is omitted due to space constraints; this method eliminates the need for additional messages.
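The expected-height claim is easy to sanity-check with a toy simulation of the stopping rule just described: assign random membership vectors and stop ascending once no root shares the prefix. This is an illustrative standalone script under those assumptions, not the distributed insertion protocol itself, and the vector length and proxy count are arbitrary choices.

```python
import random

def shared_prefix_len(a, b):
    """Length of the common prefix of two bit sequences."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def insert_height(element_mv, root_mvs):
    """An element keeps ascending while its membership vector shares an
    l-bit prefix with at least one root; its height is one more than the
    longest such shared prefix (level 0 is always populated)."""
    return max(shared_prefix_len(element_mv, r) for r in root_mvs) + 1

def random_mv(bits=32):
    return [random.randint(0, 1) for _ in range(bits)]

random.seed(1)
n_proxies = 32                       # assumed number of proxies / roots
roots = [random_mv() for _ in range(n_proxies)]
heights = [insert_height(random_mv(), roots) for _ in range(10000)]
# The average height comes out close to log2(32) plus a small constant.
print(sum(heights) / len(heights))
```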
Performance: In a (non-interval) sparse skip graph, since the expected height of an inserted element is now log2 N_p + O(1), expected insertion complexity is O(log N_p), rather than O(log n), where N_p is the number of root elements and thus the number of separate systems in the network. (In the degenerate case of a single system we have a skip list; with splitting probability 0.5 the expected height of an individual element is 1.) Note that since searches are started at root elements of expected height log2 n, search complexity is not improved.

For an interval sparse skip graph, update performance is improved considerably compared to the O(n) worst case for the non-sparse case. In an augmented search structure such as this, an element only stores information for nodes which may be reached from that element--e.g., the subtree rooted at that element, in the case of a tree. Thus, when updating the maximum value in an interval tree, the update is only propagated towards the root. In a sparse interval skip graph, updates to a node only propagate towards the N_p root elements, for a worst-case cost of N_p log2 n.

Shortcut search: When beginning a search for a value v, rather than beginning at the root on that proxy, we can find the element that is closest to v (e.g., using a secondary local index), and then begin the search at that element. The expected distance between this element and the search terminus is log2 N_p, and the search will now take on average log2 N_p + O(1) steps. To illustrate this optimization, in Figure 4, depending on the choice of search root, a search for [21, 30] beginning at node 2 may take 3 network hops, traversing to node 1, then back to node 2, and finally to node 3 where the destination is located, for a cost of 3 messages. The shortcut search, however, locates the intermediate data element on node 2, and then proceeds directly to node 3 for a cost of 1 message.

Performance: This technique may be applied to the primary key search which is the first of two insertion steps in an interval skip graph. By combining the shortcut optimization with sparse interval skip graphs, the expected cost of insertion is now O(log N_p), independent of the size of the index or the degree of overlap of the inserted intervals.

3.4 Alternative Data Structures

Thus far we have only compared the sparse interval skip graph with similar structures from which it is derived. A comparison with several other data structures which meet at least some of the requirements for the TSAR index is shown in Table 2.

Table 2: Comparison of Distributed Index Structures
Figure 5: Sparse Interval Skip Graph

The hash-based systems, DHT [25] and GHT [26], lack the ability to perform range queries and are thus not well-suited to indexing spatio-temporal data. Indexing locally using an appropriate single-node structure and then flooding queries to all proxies is a competitive alternative for small networks; for large networks the linear dependence on the number of proxies becomes an issue. Two distributed B-Trees were examined: P-Trees [6] and RP* [19]. Each of these supports range queries, and in theory could be modified to support indexing of intervals; however, they both require complex re-balancing, and do not provide the resilience characteristics of the other structures. DIMS [18] provides the ability to perform spatio-temporal range queries, and has the necessary resilience to failures; however, it cannot be used to index intervals, which are used by TSAR's data summarization algorithm.
4. Data Storage and Summarization

Having described the proxy-level index structure, we turn to the mechanisms at the sensor tier. TSAR implements two key mechanisms at the sensor tier. The first is a local archival store at each sensor node that is optimized for resource-constrained devices. The second is an adaptive summarization technique that enables each sensor to adapt to changing data and query characteristics. The rest of this section describes these mechanisms in detail.

4.1 Local Storage at Sensors

Interval skip graphs provide an efficient mechanism to look up sensor nodes containing data relevant to a query. These queries are then routed to the sensors, which locate the relevant data records in the local archive and respond back to the proxy. To enable such lookups, each sensor node in TSAR maintains an archival store of sensor data. While the implementation of such an archival store is straightforward on resource-rich devices that can run a database, sensors are often power- and resource-constrained. Consequently, the sensor archiving subsystem in TSAR is explicitly designed to exploit characteristics of sensor data in a resource-constrained setting.

Figure 6: Single storage record

Sensor data has very distinct characteristics that inform our design of the TSAR archival store. Sensors produce time-series data streams, and therefore temporal ordering of data is a natural and simple way of storing archived sensor data. In addition to simplicity, a temporally ordered store is often suitable for many sensor data processing tasks, since they involve time-series data processing. Examples include signal processing operations such as FFT, wavelet transforms, clustering, similarity matching, and target detection. Consequently, the local archival store is a collection of records, designed as an append-only circular buffer, where new records are appended to the tail of the buffer. The format of each data record is shown in Figure 6. Each record has a metadata field which includes a timestamp, sensor settings, calibration parameters, etc. Raw sensor data is stored in the data field of the record. The data field is opaque and application-specific--the storage system does not know or care about interpreting this field. A camera-based sensor, for instance, may store binary images in this data field. In order to support a variety of applications, TSAR supports variable-length data fields; as a result, record sizes can vary from one record to another.

Our archival store supports three operations on records: create, read, and delete. Due to the append-only nature of the store, creation of records is simple and efficient. The create operation simply creates a new record and appends it to the tail of the store. Since records are always written at the tail, the store need not maintain a "free space" list. All fields of the record need to be specified at creation time; thus, the size of the record is known a priori and the store simply allocates the corresponding number of bytes at the tail to store the record. Since writes are immutable, the size of a record does not change once it is created.

Figure 7: Sensor Summarization

The read operation enables stored records to be retrieved in order to answer queries. In a traditional database system, efficient lookups are enabled by maintaining a structure such as a B-tree that indexes certain keys of the records. However, this can be quite complex for a small sensor node with limited resources.
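Before turning to how TSAR avoids any on-sensor index, the record layout and the create/read path just described can be sketched as follows. This is a simplified in-memory stand-in, not a flash driver: the byte offsets it returns play the role of the handles the proxy keeps with each summary (the proxy-driven read path is described next), and circular reuse of space is omitted.

```python
import struct

class LocalArchive:
    """Simplified in-memory stand-in for the append-only sensor archive.
    Each record is laid out as [metadata length][data length][metadata]
    [data]; the offset returned by create() acts as the record handle."""

    HEADER = struct.Struct("<HH")   # metadata length, data length

    def __init__(self):
        self.buf = bytearray()

    def create(self, metadata: bytes, data: bytes) -> int:
        """Append one record at the tail and return its start offset."""
        start = len(self.buf)
        self.buf += self.HEADER.pack(len(metadata), len(data))
        self.buf += metadata + data
        return start

    def read(self, start: int, end: int):
        """Sequentially scan the records stored in [start, end), as done
        when a summary handle matches a query."""
        records, off = [], start
        while off < end:
            mlen, dlen = self.HEADER.unpack_from(self.buf, off)
            off += self.HEADER.size
            records.append((bytes(self.buf[off:off + mlen]),
                            bytes(self.buf[off + mlen:off + mlen + dlen])))
            off += mlen + dlen
        return records

archive = LocalArchive()
h1 = archive.create(b"t=100", b"\x17")
h2 = archive.create(b"t=101", b"\x18")
print(archive.read(h1, len(archive.buf)))   # both records, in time order
```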
Consequently, TSAR sensors do not maintain any index for the data stored in their archive. Instead, they rely on the proxies to maintain this metadata index--sensors periodically send the proxy information summarizing the data contained in a contiguous sequence of records, as well as a handle indicating the location of these records in flash memory. The mechanism works as follows: In addition to the summary of sensor data, each node sends metadata to the proxy containing the time interval corresponding to the summary, as well as the start and end offsets of the flash memory location where the raw data corresponding is stored (as shown in Figure 7). Thus, random access is enabled at granularity of a summary--the start offset of each chunk of records represented by a summary is known to the proxy. Within this collection, records are accessed sequentially. When a query matches a summary in the index, the sensor uses these offsets to access the relevant records on its local flash by sequentially reading data from the start address until the end address. Any queryspecific operation can then be performed on this data. Thus, no index needs to be maintained at the sensor, in line with our goal of simplifying sensor state management. The state of the archive is captured in the metadata associated with the summaries, and is stored and maintained at the proxy. While we anticipate local storage capacity to be large, eventually there might be a need to overwrite older data, especially in high data rate applications. This may be done via techniques such as multi-resolution storage of data [9], or just simply by overwriting older data. When older data is overwritten, a delete operation is performed, where an index entry is deleted from the interval skip graph at the proxy and the corresponding storage space in flash memory at the sensor is freed. 4.2 Adaptive Summarization The data summaries serve as glue between the storage at the remote sensor and the index at the proxy. Each update from a sensor to the proxy includes three pieces of information: the summary, a time period corresponding to the summary, and the start and end offsets for the flash archive. In general, the proxy can index the time interval representing a summary or the value range reported in the summary (or both). The former index enables quick lookups on all records seen during a certain interval, while the latter index enables quick lookups on all records matching a certain value. As described in Section 2.4, there is a trade-off between the energy used in sending summaries (and thus the frequency and resolution of those summaries) and the cost of false hits during queries. The coarser and less frequent the summary information, the less energy required, while false query hits in turn waste energy on requests for non-existent data. TSAR employs an adaptive summarization technique that balances the cost of sending updates against the cost of false positives. The key intuition is that each sensor can independently identify the fraction of false hits and true hits for queries that access its local archive. If most queries result in true hits, then the sensor determines that the summary can be coarsened further to reduce update costs without adversely impacting the hit ratio. If many queries result in false hits, then the sensor makes the granularity of each summary finer to reduce the number and overhead of false hits. 
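One way to realize this feedback on the sensor is a simple threshold rule on the observed false-to-true hit ratio, as sketched below. The thresholds, step size, and bounds are illustrative choices made for this sketch, not TSAR's tuned parameters.

```python
def adjust_summary_interval(interval, false_hits, true_hits,
                            high=0.5, low=0.1, step=2,
                            min_interval=4, max_interval=256):
    """Adjust the number of records covered by each summary based on the
    observed ratio of false hits to true hits. Thresholds, step size and
    bounds are illustrative defaults, not TSAR's parameters."""
    ratio = false_hits / max(true_hits, 1)
    if ratio > high:
        # Many wasted queries: send finer-grain (smaller) summaries.
        interval = max(min_interval, interval // step)
    elif ratio < low:
        # Few false hits: coarsen summaries to save update energy.
        interval = min(max_interval, interval * step)
    return interval

interval = 64
for false_hits, true_hits in [(8, 2), (1, 9), (0, 10)]:
    interval = adjust_summary_interval(interval, false_hits, true_hits)
    print(interval)       # 32, then 32 (no change), then 64
```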
The resolution of the summary depends on two parameters--the interval over which summaries of the data are constructed and transmitted to the proxy, as well as the size of the applicationspecific summary. Our focus in this paper is on the interval over which the summary is constructed. Changing the size of the data summary can be performed in an application-specific manner (e.g. using wavelet compression techniques as in [9]) and is beyond the scope of this paper. Currently, TSAR employs a simple summarization scheme that computes the ratio of false and true hits and decreases (increases) the interval between summaries whenever this ratio increases (decreases) beyond a threshold. 5. TSAR Implementation We have implemented a prototype of TSAR on a multi-tier sensor network testbed. Our prototype employs Crossbow Stargate nodes to implement the proxy tier. Each Stargate node employs a 400MHz Intel XScale processor with 64MB RAM and runs the Linux 2.4.19 kernel and EmStar release 2.1. The proxy nodes are equipped with two wireless radios, a Cisco Aironet 340-based 802.11 b radio and a hostmote bridge to the Mica2 sensor nodes using the EmStar transceiver. The 802.11 b wireless network is used for inter-proxy communication within the proxy tier, while the wireless bridge enables sensor-proxy communication. The sensor tier consists of Crossbow Mica2s and Mica2dots, each consisting of a 915MHz CC1000 radio, a BMAC protocol stack, a 4 Mb on-board flash memory and an ATMega 128L processor. The sensor nodes run TinyOS 1.1.8. In addition to the on-board flash, the sensor nodes can be equipped with external MMC/SD flash cards using a custom connector. The proxy nodes can be equipped with external storage such as high-capacity compact flash (up to 4GB), 6GB micro-drives, or up to 60GB 1.8 inch mobile disk drives. Since sensor nodes may be several hops away from the nearest proxy, the sensor tier employs multi-hop routing to communicate with the proxy tier. In addition, to reduce the power consumption of the radio while still making the sensor node available for queries, low power listening is enabled, in which the radio receiver is periodically powered up for a short interval to sense the channel for transmissions, and the packet preamble is extended to account for the latency until the next interval when the receiving radio wakes up. Our prototype employs the MultiHopLEPSM routing protocol with the BMAC layer configured in the low-power mode with a 11% duty cycle (one of the default BMAC [22] parameters) Our TSAR implementation on the Mote involves a data gathering task that periodically obtains sensor readings and logs these reading to flash memory. The flash memory is assumed to be a circular append-only store and the format of the logged data is depicted in Figure 6. The Mote sends a report to the proxy every N readings, summarizing the observed data. The report contains: (i) the address of the Mote, (ii) a handle that contains an offset and the length of the region in flash memory containing data referred to by the summary, (iii) an interval (t1, t2) over which this report is generated, (iv) a tuple (low, high) representing the minimum and the maximum values observed at the sensor in the interval, and (v) a sequence number. The sensor updates are used to construct a sparse interval skip graph that is distributed across proxies, via network messages between proxies over the 802.11 b wireless network. 
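As a concrete but hypothetical illustration of such a report, the sketch below packs the five items listed above (mote address, flash handle, time interval, observed value range, and sequence number) into a fixed-width payload and unpacks it on the proxy side. The field widths and byte order are assumptions made for illustration, not the actual TinyOS message format.

```python
import struct

# addr, flash offset, region length, t1, t2, low, high, sequence number
REPORT = struct.Struct("<H I H I I h h H")

def make_report(addr, offset, length, t1, t2, low, high, seq):
    return REPORT.pack(addr, offset, length, t1, t2, low, high, seq)

def parse_report(payload):
    addr, offset, length, t1, t2, low, high, seq = REPORT.unpack(payload)
    # The proxy would insert [low, high] (or [t1, t2]) into the interval
    # skip graph, keyed to (addr, offset, length) so that matching
    # queries can be forwarded to the right flash region on the mote.
    return {"addr": addr, "handle": (offset, length),
            "interval": (t1, t2), "range": (low, high), "seq": seq}

pkt = make_report(addr=3, offset=2048, length=512,
                  t1=1000, t2=1060, low=21, high=27, seq=42)
print(parse_report(pkt))
```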
Our current implementation supports queries that request records matching a time interval (t1, t2) or a value range (v1, v2). Spatial constraints are specified using sensor IDs. Given a list of matching intervals from the skip graph, TSAR supports two types of messages to query the sensor: lookup and fetch. A lookup message triggers a search within the corresponding region in flash memory and returns the number of matching records in that memory region (but does not retrieve data). In contrast, a fetch message not only triggers a search but also returns all matching data records to the proxy. Lookup messages are useful for polling a sensor, for instance, to determine if a query matches too many records.

6. Experimental Evaluation

In this section, we evaluate the efficacy of TSAR using our prototype and simulations. The testbed for our experiments consists of four Stargate proxies and twelve Mica2 and Mica2dot sensors; three sensors are assigned to each proxy. Given the limited size of our testbed, we employ simulations to evaluate the behavior of TSAR in larger settings. Our simulation employs the EmTOS emulator [10], which enables us to run the same code in simulation and on the hardware platform. Rather than using live data from a real sensor, to ensure repeatable experiments, we seed each sensor node with a dataset (i.e., a trace) that dictates the values reported by that node to the proxy. One section of the flash memory on each sensor node is programmed with data points from the trace; these "observations" are then replayed during an experiment, logged to the local archive (located in flash memory as well), and reported to the proxy. The first dataset used to evaluate TSAR is a temperature dataset from James Reserve [27] that includes data from eleven temperature sensor nodes over a period of 34 days. The second dataset is synthetically generated; the trace for each sensor is generated using a uniformly distributed random walk through the value space.

Our experimental evaluation has four parts. First, we run EmTOS simulations to evaluate the lookup, update and delete overhead for sparse interval skip graphs using the real and synthetic datasets. Second, we provide summary results from micro-benchmarks of the storage component of TSAR, which include empirical characterization of the energy costs and latency of reads and writes for the flash memory chip as well as the whole mote platform, and comparisons to published numbers for other storage and communication technologies. Third, these micro-benchmarks form the basis for our full-scale evaluation of TSAR on a testbed of four Stargate proxies and twelve Motes, in which we measure the end-to-end query latency in our multi-hop testbed as well as the query processing overhead at the mote tier. Finally, we demonstrate the adaptive summarization capability at each sensor node. The remainder of this section presents our experimental results.

6.1 Sparse Interval Skip Graph Performance

This section evaluates the performance of sparse interval skip graphs by quantifying insert, lookup and delete overheads. We assume a proxy tier with 32 proxies and construct sparse interval skip graphs of various sizes using our datasets.
Figure 8 (a) and (b) quantify the insert overhead for our two datasets: each insert entails an initial traversal that incurs log n messages, followed by neighbor pointer update at increasing levels, incurring a cost of 4 log n messages. Our results demonstrate this behavior, and show as well that performance of delete--which also involves an initial traversal followed by pointer updates at each level--incurs a similar cost. Next, we evaluate the lookup performance of the index structure. Again, we construct skip graphs of various sizes using our datasets and evaluate the cost of a lookup on the index structure. Figures 9 (a) and (b) depict our results. There are two components for each lookup--the lookup of the first interval that matches the query and, in the case of overlapping intervals, the subsequent linear traversal to identify all matching intervals. The initial lookup can be seen to takes log n messages, as expected. The costs of the subsequent linear traversal, however, are highly data dependent. For instance, temperature values for the James Reserve data exhibit significant spatial correlations, resulting in significant overlap between different intervals and variable, high traversal cost (see Figure 9 (a)). The synthetic data, however, has less overlap and incurs lower traversal overhead as shown in Figure 9 (b). Since the previous experiments assumed 32 proxies, we evaluate the impact of the number of proxies on skip graph performance. We vary the number of proxies from 10 to 48 and distribute a skip graph with 4096 entries among these proxies. We construct regular interval skip graphs as well as sparse interval skip graphs using these entries and measure the overhead of inserts and lookups. Thus, the experiment also seeks to demonstrate the benefits of sparse skip graphs over regular skip graphs. Figure 10 (a) depicts our results. In regular skip graphs, the complexity of insert is O (log2n) in the Figure 9: Skip Graph Lookup Performance expected case (and O (n) in the worst case) where n is the number of elements. This complexity is unaffected by changing the number of proxies, as indicated by the flat line in the figure. Sparse skip graphs require fewer pointer updates; however, their overhead is dependent on the number of proxies, and is O (log2Np) in the expected case, independent of n. This can be seen to result in significant reduction in overhead when the number of proxies is small, which decreases as the number of proxies increases. Failure handling is an important issue in a multi-tier sensor architecture since it relies on many components--proxies, sensor nodes and routing nodes can fail, and wireless links can fade. Handling of many of these failure modes is outside the scope of this paper; however, we consider the case of resilience of skip graphs to proxy failures. In this case, skip graph search (and subsequent repair operations) can follow any one of the other links from a root element. Since a sparse skip graph has search trees rooted at each node, searching can then resume once the lookup request has routed around the failure. Together, these two properties ensure that even if a proxy fails, the remaining entries in the skip graph will be reachable with high probability--only the entries on the failed proxy and the corresponding data at the sensors becomes inaccessible. To ensure that all data on sensors remains accessible, even in the event of failure of a proxy holding index entries for that data, we incorporate redundant index entries. 
TSAR employs a simple redundancy scheme where additional coarse-grain summaries are used to protect regular summaries. Each sensor sends summary data periodically to its local proxy, but less frequently sends a lower-resolution summary to a backup proxy--the backup summary represents all of the data represented by the finer-grained summaries, but in a lossier fashion, thus resulting in higher read overhead (due to false hits) if the backup summary is used. The cost of implementing this in our system is low--Figure 10(b) shows the overhead of such a redundancy scheme, where a single coarse summary is sent to a backup for every two summaries sent to the primary proxy. Since a redundant summary is sent for every two summaries, the insert cost is 1.5 times the cost in the normal case. However, these redundant entries result in only a negligible increase in lookup overhead, due to the logarithmic dependence of lookup cost on the index size, while providing full resilience to any single proxy failure.

6.2 Storage Microbenchmarks

Since sensors are resource-constrained, the energy consumption and the latency at this tier are important measures for evaluating the performance of a storage architecture. Before performing an end-to-end evaluation of our system, we provide more detailed information on the energy consumption of the storage component used to implement the TSAR local archive, based on empirical measurements. In addition we compare these figures to those for other local storage technologies, as well as to the energy consumption of wireless communication, using information from the literature.

Table 3: Storage and Communication Energy Costs (* measured)

For empirical measurements we measure energy usage for the storage component itself (i.e., current drawn by the flash chip), as well as for the entire Mica2 mote. The power measurements in Table 3 were performed for the AT45DB041 [15] flash memory on a Mica2 mote, which is an older NOR flash device. The most promising technology for low-energy storage on sensing devices is NAND flash, such as the Samsung K9K4G08U0M device [16]; published power numbers for this device are provided in the table. Published energy requirements for wireless transmission using the Chipcon [4] CC2420 radio (used in MicaZ and Telos motes) are provided for comparison, assuming zero network and protocol overhead. Comparing the total energy cost for writing flash (erase + write) to the total cost for communication (transmit + receive), we find that the NAND flash is almost 150 times more efficient than radio communication, even assuming perfect network protocols.

6.3 Prototype Evaluation

Figure 11: Query Processing Latency
Figure 12: Query Latency Components

This section reports results from an end-to-end evaluation of the TSAR prototype involving both tiers. In our setup, there are four proxies connected via 802.11 links and three sensors per proxy. The multi-hop topology was preconfigured such that sensor nodes were connected in a line to each proxy, forming a minimal tree of depth 3. Due to resource constraints we were unable to perform experiments with dozens of sensor nodes; however, this topology ensured that the network diameter was as large as for a typical network of significantly larger size.

Our evaluation metric is the end-to-end latency of query processing. A query posed on TSAR first incurs the latency of a sparse skip graph lookup, followed by routing to the appropriate sensor node(s).
The sensor node reads the required page(s) from its local archive, processes the query on the pages that are read, and transmits the response to the proxy, which then forwards it to the user. We first measure query latency for different sensors in our multi-hop topology. Depending on which of the sensors is queried, the total latency increases almost linearly from about 400 ms to 1 second as the number of hops increases from 1 to 3 (see Figure 11 (a)). Figure 11 (b) provides a breakdown of the various components of the end-to-end latency. The dominant component of the total latency is the communication over one or more hops. The typical time to communicate over one hop is approximately 300 ms. This large latency is primarily due to the use of a duty-cycled MAC layer; the latency will be larger if the duty cycle is reduced (e.g., the 2% setting as opposed to the 11.5% setting used in this experiment), and will conversely decrease if the duty cycle is increased. The figure also shows the latency for varying index sizes; as expected, the latency of inter-proxy communication and skip graph lookups increases logarithmically with index size. Not surprisingly, the overhead seen at the sensor is independent of the index size. The latency also depends on the number of packets transmitted in response to a query--the larger the amount of data retrieved by a query, the greater the latency. This result is shown in Figure 12 (a). The step function is due to packetization in TinyOS; TinyOS sends one packet so long as the payload is smaller than 30 bytes and splits the response into multiple packets for larger payloads. As the data retrieved by a query is increased, the latency increases in steps, where each step denotes the overhead of an additional packet. Finally, Figure 12 (b) shows the impact of searching and processing flash memory regions of increasing sizes on a sensor. Each summary represents a collection of records in flash memory, and all of these records need to be retrieved and processed if that summary matches a query. The coarser the summary, the larger the memory region that needs to be accessed. For the search sizes examined, amortization of overhead when searching multiple flash pages and archival records, as well as within the flash chip and its associated driver, results in an apparently sub-linear increase in latency with search size. In addition, the operation can be seen to have very low latency, in part due to the simplicity of our query processing, which requires only a compare operation with each stored element. More complex operations, however, will of course incur greater latency. 6.4 Adaptive Summarization When data is summarized by the sensor before being reported to the proxy, information is lost. With the interval summarization method we are using, this information loss will never cause the proxy to believe that a sensor node does not hold a value which it in fact does, as all archived values will be contained within the interval reported. However, it does cause the proxy to believe that the sensor may hold values which it does not, and to forward query messages to the sensor for these values. These false positives constitute the cost of the summarization mechanism, and need to be balanced against the savings achieved by reducing the number of reports. The goal of adaptive summarization is to dynamically vary the summary size so that these two costs are balanced.
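A minimal controller implementing this balancing rule is sketched below in Python: widen the summaries when summary updates dominate the observed cost, and narrow them when false hits dominate. The cost weights, the doubling/halving step, and the adaptation interval are assumptions chosen for illustration, not the exact policy used in TSAR.

```python
class AdaptiveSummarizer:
    """Adjust records-per-summary so that update cost and false-hit cost balance."""

    def __init__(self, records_per_summary=16, update_cost=1.0, false_hit_cost=4.0):
        self.granularity = records_per_summary   # records covered by one summary
        self.update_cost = update_cost           # cost of sending one summary (assumed weight)
        self.false_hit_cost = false_hit_cost     # cost of one wasted query to the sensor (assumed weight)

    def adapt(self, summaries_sent, false_hits):
        """Call periodically with counts observed since the last adaptation."""
        cost_updates = summaries_sent * self.update_cost
        cost_false_hits = false_hits * self.false_hit_cost
        if cost_updates > cost_false_hits:
            self.granularity *= 2                             # updates dominate: coarsen
        elif cost_false_hits > cost_updates:
            self.granularity = max(1, self.granularity // 2)  # false hits dominate: refine
        return self.granularity

# Example: a burst of queries (many false hits) drives the granularity down.
s = AdaptiveSummarizer()
print(s.adapt(summaries_sent=10, false_hits=1))   # 32: coarsen
print(s.adapt(summaries_sent=5, false_hits=40))   # 16: refine
```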
[Figure 13: Impact of Summarization Granularity] Figure 13 (a) demonstrates the impact of summary granularity on false hits. As the number of records included in a summary is increased, the fraction of queries forwarded to the sensor which match data held on that sensor ("true positives") decreases. Next, in Figure 13 (b) we run an EmTOS simulation with our adaptive summarization algorithm enabled. The adaptive algorithm increases the summary granularity (defined as the number of records per summary) when Cost(updates) exceeds Cost(false hits), and decreases it when false hits dominate. To demonstrate the adaptive nature of our technique, we plot a time series of the summarization granularity. We begin with a query rate of 1 query per 5 samples, decrease it to 1 every 30 samples, and then increase it again to 1 query every 10 samples. As shown in Figure 13 (b), the adaptive technique adjusts accordingly by sending more fine-grain summaries at higher query rates (in response to the higher false hit rate), and fewer, coarse-grain summaries at lower query rates. 7. Related Work In this section, we review prior work on storage and indexing techniques for sensor networks. While our work addresses both problems jointly, much prior work has considered them in isolation. The problem of archival storage of sensor data has received limited attention in the sensor network literature. ELF [7] is a log-structured file system for local storage on flash memory that provides load leveling, and Matchbox is a simple file system that is packaged with the TinyOS distribution [14]. Both these systems focus on local storage, whereas our focus is both on storage at the remote sensors as well as on providing a unified view of distributed data across all such local archives. Multi-resolution storage [9] is intended for in-network storage and search in systems where there is significant data in comparison to storage resources. In contrast, TSAR addresses the problem of archival storage in two-tier systems where sufficient resources can be placed at the edge sensors. The RISE platform [21] being developed as part of the NODE project at UCR addresses the issues of hardware platform support for large amounts of storage in remote sensor nodes, but not the indexing and querying of this data. In order to efficiently access a distributed sensor store, an index needs to be constructed of the data. Early work on sensor networks such as Directed Diffusion [17] assumes a system where all useful sensor data is stored locally at each sensor, and spatially scoped queries are routed using geographic co-ordinates to the locations where the data is stored. Sources publish the events that they detect, and sinks with interest in specific events can subscribe to these events. The Directed Diffusion substrate routes queries to specific locations if the query has geographic information embedded in it (e.g., find the temperature in the south-west quadrant); if not, the query is flooded throughout the network. These schemes had the drawback that for queries that are not geographically scoped, the search cost (O(n) for a network of n nodes) may be prohibitive in large networks with frequent queries. Local storage with in-network indexing approaches address this issue by constructing indexes using frameworks such as Geographic Hash Tables [24] and Quad Trees [9]. Recent research has seen a growing body of work on data indexing schemes for sensor networks [26] [11] [18]. One such scheme is DCS [26], which provides a hash function for mapping from event name to location.
DCS constructs a distributed structure that groups events together spatially by their named type. Distributed Index of Features in Sensornets (DIFS [11]) and Multi-dimensional Range Queries in Sensor Networks (DIM [18]) extend the data-centric storage approach to provide spatially distributed hierarchies of indexes to data. While these approaches advocate in-network indexing for sensor networks, we believe that indexing is a task that is far too complicated to be performed at the remote sensor nodes, since it involves maintaining significant state and large tables. TSAR provides a better match between the resource requirements of storage and indexing and the availability of resources at different tiers. Thus complex operations such as indexing and managing metadata are performed at the proxies, while storage at the sensor remains simple. In addition to storage and indexing techniques specific to sensor networks, many distributed, peer-to-peer and spatio-temporal index structures are relevant to our work. DHTs [25] can be used for indexing events based on their type, quad-tree variants such as R-trees [12] can be used for optimizing spatial searches, and K-D trees [2] can be used for multi-attribute search. While this paper focuses on building an ordered index structure for range queries, we will explore the use of other index structures for alternate queries over sensor data. 8. Conclusions In this paper, we argued that existing sensor storage systems are designed primarily for flat hierarchies of homogeneous sensor nodes and do not fully exploit the multi-tier nature of emerging sensor networks. We presented the design of TSAR, a fundamentally different storage architecture that envisions separation of data from metadata by employing local storage at the sensors and distributed indexing at the proxies. At the proxy tier, TSAR employs a novel multi-resolution ordered distributed index structure, the Sparse Interval Skip Graph, for efficiently supporting spatio-temporal and range queries. At the sensor tier, TSAR supports energy-aware adaptive summarization that can trade off the energy cost of transmitting metadata to the proxies against the overhead of false hits resulting from querying a coarser resolution index structure. We implemented TSAR in a two-tier sensor testbed comprising Stargate-based proxies and Mote-based sensors. Our experimental evaluation of TSAR demonstrated the benefits and feasibility of employing our energy-efficient low-latency distributed storage architecture in multi-tier sensor networks.
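To connect these conclusions back to the mechanisms, the proxy-side query flow can be condensed into a few lines. The sketch below (Python) uses simplified stand-ins: a flat list of index entries in place of the sparse interval skip graph, and any sensor object exposing read(offset, count) in place of the Mote-side archive. It is an assumption-laden schematic, not the TSAR API.

```python
def answer_value_query(index_entries, sensors, q):
    """index_entries: (value_range, sensor_id, offset, count) tuples held at the proxy.
    sensors: mapping from sensor_id to an object with read(offset, count) that
    returns (timestamp, value) records from its local archive.
    Returns every archived record whose value equals q."""
    results = []
    for (lo, hi), sensor_id, offset, count in index_entries:
        if lo <= q <= hi:                                      # index hit (possibly a false hit)
            records = sensors[sensor_id].read(offset, count)   # one request to the owning sensor
            results.extend(r for r in records if r[1] == q)    # drop false hits at the proxy
    return results
```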
TSAR: A Two Tier Sensor Storage Architecture Using Interval Skip Graphs * ABSTRACT Archival storage of sensor data is necessary for applications that query, mine, and analyze such data for interesting features and trends. We argue that existing storage systems are designed primarily for flat hierarchies of homogeneous sensor nodes and do not fully exploit the multi-tier nature of emerging sensor networks, where an application can comprise tens of tethered proxies, each managing tens to hundreds of untethered sensors. We present TSAR, a fundamentally different storage architecture that envisions separation of data from metadata by employing local archiving at the sensors and distributed indexing at the proxies. At the proxy tier, TSAR employs a novel multi-resolution ordered distributed index structure, the Interval Skip Graph, for efficiently supporting spatio-temporal and value queries. At the sensor tier, TSAR supports energy-aware adaptive summarization that can trade off the cost of transmitting metadata to the proxies against the overhead offalse hits resultingfrom querying a coarse-grain index. We implement TSAR in a two-tier sensor testbed comprising Stargatebased proxies and Mote-based sensors. Our experiments demonstrate the benefits and feasibility of using our energy-efficient storage architecture in multi-tier sensor networks. 1. Introduction 1.1 Motivation Many different kinds of networked data-centric sensor applications have emerged in recent years. Sensors in these applications sense the environment and generate data that must be processed, * This work was supported in part by National Science Foundation grants EEC-0313747, CNS-0325868 and EIA-0098060. filtered, interpreted, and archived in order to provide a useful infrastructure to its users. To achieve its goals, a typical sensor application needs access to both live and past sensor data. Whereas access to live data is necessary in monitoring and surveillance applications, access to past data is necessary for applications such as mining of sensor logs to detect unusual patterns, analysis of historical trends, and post-mortem analysis of particular events. Archival storage of past sensor data requires a storage system, the key attributes of which are: where the data is stored, whether it is indexed, and how the application can access this data in an energy-efficient manner with low latency. There have been a spectrum of approaches for constructing sensor storage systems. In the simplest, sensors stream data or events to a server for long-term archival storage [3], where the server often indexes the data to permit efficient access at a later time. Since sensors may be several hops from the nearest base station, network costs are incurred; however, once data is indexed and archived, subsequent data accesses can be handled locally at the server without incurring network overhead. In this approach, the storage is centralized, reads are efficient and cheap, while writes are expensive. Further, all data is propagated to the server, regardless of whether it is ever used by the application. An alternate approach is to have each sensor store data or events locally (e.g., in flash memory), so that all writes are local and incur no communication overheads. A read request, such as whether an event was detected by a particular sensor, requires a message to be sent to the sensor for processing. More complex read requests are handled by flooding. 
For instance, determining if an intruder was detected over a particular time interval requires the request to be flooded to all sensors in the system. Thus, in this approach, the storage is distributed, writes are local and inexpensive, while reads incur significant network overheads. Requests that require flooding, due to the lack of an index, are expensive and may waste precious sensor resources, even if no matching data is stored at those sensors. Research efforts such as Directed Diffusion [17] have attempted to reduce these read costs, however, by intelligent message routing. Between these two extremes lie a number of other sensor storage systems with different trade-offs, summarized in Table 1. The geographic hash table (GHT) approach [24, 26] advocates the use of an in-network index to augment the fully distributed nature of sensor storage. In this approach, each data item has a key associated with it, and a distributed or geographic hash table is used to map keys to nodes that store the corresponding data items. Thus, writes cause data items to be sent to the hashed nodes and also trigger updates to the in-network hash table. A read request requires a lookup in the in-network hash table to locate the node that stores the data item; observe that the presence of an index eliminates the need for flooding in this approach. Most of these approaches assume a flat, homogeneous architecture in which every sensor node is energy-constrained. In this paper, we propose a novel storage architecture called TSAR1 that reflects and exploits the multi-tier nature of emerging sensor networks, where the application is comprised of tens of tethered sensor proxies (or more), each controlling tens or hundreds of untethered sensors. TSAR is a component of our PRESTO [8] predictive storage architecture, which combines archival storage with caching and prediction. We believe that a fundamentally different storage architecture is necessary to address the multi-tier nature of future sensor networks. Specifically, the storage architecture needs to exploit the resource-rich nature of proxies, while respecting resource constraints at the remote sensors. No existing sensor storage architecture explicitly addresses this dichotomy in the resource capabilities of different tiers. Any sensor storage system should also carefully exploit current technology trends, which indicate that the capacities of flash memories continue to rise as per Moore's Law, while their costs continue to plummet. Thus it will soon be feasible to equip each sensor with 1 GB of flash storage for a few tens of dollars. An even more compelling argument is the energy cost of flash storage, which can be as much as two orders of magnitude lower than that for communication. Newer NAND flash memories offer very low write and erase energy costs--our comparison of a 1GB Samsung NAND flash storage [16] and the Chipcon CC2420 802.15.4 wireless radio [4] in Section 6.2 indicates a 1:100 ratio in per-byte energy cost between the two devices, even before accounting for network protocol overheads. These trends, together with the energy-constrained nature of untethered sensors, indicate that local storage offers a viable, energy-efficient alternative to communication in sensor networks. TSAR exploits these trends by storing data or events locally on the energy-efficient flash storage at each sensor. 
Sensors send concise identifying information, which we term metadata, to a nearby proxy; depending on the representation used, this metadata may be an order of magnitude or more smaller than the data itself, imposing much lower communication costs. The resource-rich proxies interact with one another to construct a distributed index of the metadata reported from all sensors, and thus an index of the associated data stored at the sensors. This index provides a unified, logical view of the distributed data, and enables an application to query and read past data efficiently--the index is used to pinpoint all data that match a read request, followed by messages to retrieve that data from the corresponding sensors. In-network index lookups are eliminated, reducing network overheads for read requests. This separation of data, which is stored at the sensors, and the metadata, which is stored at the proxies, enables TSAR to reduce energy overheads at the sensors, by leveraging resources at tethered proxies. 1.2 Contributions This paper presents TSAR (Tiered Storage ARchitecture for sensor networks), a novel two-tier storage architecture for sensor networks. To the best of our knowledge, this is the first sensor storage system that is explicitly tailored for emerging multi-tier sensor networks. Our design and implementation of TSAR have resulted in four contributions. At the core of the TSAR architecture is a novel distributed index structure based on interval skip graphs that we introduce in this paper. This index structure can store coarse summaries of sensor data and organize them in an ordered manner to be easily searchable. This data structure has O(log n) expected search and update complexity. Further, the index provides a logically unified view of all data in the system. Second, at the sensor level, each sensor maintains a local archive that stores data on flash memory. Our storage architecture is fully stateless at each sensor from the perspective of the metadata index; all index structures are maintained at the resource-rich proxies, and only direct requests or simple queries on explicitly identified storage locations are sent to the sensors. Storage at the remote sensor is in effect treated as an appendage of the proxy, resulting in low implementation complexity, which makes it ideal for small, resource-constrained sensor platforms. Further, the local store is optimized for time-series access to archived data, as is typical in many applications. Each sensor periodically sends a summary of its data to a proxy. TSAR employs a novel adaptive summarization technique that adapts the granularity of the data reported in each summary to the ratio of false hits for application queries. Finer-grain summaries are sent whenever more false positives are observed, thereby balancing the energy cost of metadata updates and false positives. Third, we have implemented a prototype of TSAR on a multi-tier testbed comprising Stargate-based proxies and Mote-based sensors. Our implementation supports spatio-temporal, value, and range-based queries on sensor data. Fourth, we conduct a detailed experimental evaluation of TSAR using a combination of EmStar/EmTOS [10] and our prototype. While our EmStar/EmTOS experiments focus on the scalability of TSAR in larger settings, our prototype evaluation involves latency and energy measurements in a real setting.
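The sensor-side half of this design (an append-only archive on flash plus a periodic interval summary shipped to the proxy) is small enough to sketch. The Python class below is an illustrative model, not the Mote implementation; the record layout, the in-memory byte buffer standing in for flash, and the summary period are all assumptions.

```python
import struct

class LocalArchive:
    """Append-only, time-ordered archive with periodic interval summaries."""
    RECORD = struct.Struct("<If")          # (timestamp, value) -- assumed 8-byte layout

    def __init__(self, records_per_summary=16):
        self.log = bytearray()             # stands in for the flash log
        self.batch = []                    # records not yet summarized
        self.records_per_summary = records_per_summary

    def append(self, timestamp, value):
        offset = len(self.log)             # handle the proxy can later use to read back
        self.log += self.RECORD.pack(timestamp, value)
        self.batch.append((timestamp, value, offset))
        if len(self.batch) == self.records_per_summary:
            return self.make_summary()     # ship this to the proxy's index
        return None

    def make_summary(self):
        ts = [t for t, _, _ in self.batch]
        vs = [v for _, v, _ in self.batch]
        start_offset = self.batch[0][2]
        self.batch = []
        # (time range, value range, where the records live in the log)
        return (min(ts), max(ts)), (min(vs), max(vs)), start_offset

    def read(self, offset, count):
        """Serve a proxy request for `count` records starting at `offset`."""
        end = offset + count * self.RECORD.size
        return [self.RECORD.unpack_from(self.log, o)
                for o in range(offset, min(end, len(self.log)), self.RECORD.size)]

# Usage: append readings; every 16th append returns a summary for the proxy index.
arch = LocalArchive()
for t in range(32):
    s = arch.append(timestamp=t, value=20.0 + 0.1 * t)
    if s:
        print(s)   # ((t_min, t_max), (v_min, v_max), flash_offset)
```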
Our results demonstrate the logarithmic scaling property of the sparse skip graph and the low latency of end-to-end queries in a duty-cycled multi-hop network. The remainder of this paper is structured as follows. Section 2 presents key design issues that guide our work. Section 3 and 4 present the proxy-level index and the local archive and summarization at a sensor, respectively. Section 5 discusses our prototype implementation, and Section 6 presents our experimental results. We present related work in Section 7 and our conclusions in Section 8. 2. Design Considerations 2.1 System Model 2.2 Usage Models 2.3 Design Principles 2.4 System Design 3. Data Structures 3.1 Skip Graph Overview 3.2 Interval Skip Graph 3.3 Sparse Interval Skip Graph 3.4 Alternative Data Structures 4. Data Storage and Summarization 4.1 Local Storage at Sensors 4.2 Adaptive Summarization 5. TSAR Implementation 6. Experimental Evaluation 6.1 Sparse Interval Skip Graph Performance 6.2 Storage Microbenchmarks 6.3 Prototype Evaluation 6.4 Adaptive Summarization 7. Related Work In this section, we review prior work on storage and indexing techniques for sensor networks. While our work addresses both problems jointly, much prior work has considered them in isolation. The problem of archival storage of sensor data has received limited attention in sensor network literature. ELF [7] is a logstructured file system for local storage on flash memory that provides load leveling and Matchbox is a simple file system that is packaged with the TinyOS distribution [14]. Both these systems focus on local storage, whereas our focus is both on storage at the remote sensors as well as providing a unified view of distributed data across all such local archives. Multi-resolution storage [9] is intended for in-network storage and search in systems where there is significant data in comparison to storage resources. In contrast, TSAR addresses the problem of archival storage in two-tier systems where sufficient resources can be placed at the edge sensors. The RISE platform [21] being developed as part of the NODE project at UCR addresses the issues of hardware platform support for large amounts of storage in remote sensor nodes, but not the indexing and querying of this data. In order to efficiently access a distributed sensor store, an index needs to be constructed of the data. Early work on sensor networks such as Directed Diffusion [17] assumes a system where all useful sensor data was stored locally at each sensor, and spatially scoped queries are routed using geographic co-ordinates to locations where the data is stored. Sources publish the events that they detect, and sinks with interest in specific events can subscribe to these events. The Directed Diffusion substrate routes queries to specific locations if the query has geographic information embedded in it (e.g.: find temperature in the south-west quadrant), and if not, the query is flooded throughout the network. These schemes had the drawback that for queries that are not geographically scoped, search cost (O (n) for a network of n nodes) may be prohibitive in large networks with frequent queries. Local storage with in-network indexing approaches address this issue by constructing indexes using frameworks such as Geographic Hash Tables [24] and Quad Trees [9]. Recent research has seen a growing body of work on data indexing schemes for sensor networks [26] [11] [18]. One such scheme is DCS [26], which provides a hash function for mapping from event name to location. 
DCS constructs a distributed structure that groups events together spatially by their named type. Distributed Index of Features in Sensornets (DIFS [11]) and Multi-dimensional Range Queries in Sensor Networks (DIM [18]) extend the data-centric storage approach to provide spatially distributed hierarchies of indexes to data. While these approaches advocate in-network indexing for sensor networks, we believe that indexing is a task that is far too complicated to be performed at the remote sensor nodes since it involves maintaining significant state and large tables. TSAR provides a better match between resource requirements of storage and indexing and the availability of resources at different tiers. Thus complex operations such as indexing and managing metadata are performed at the proxies, while storage at the sensor remains simple. In addition to storage and indexing techniques specific to sensor networks, many distributed, peer-to-peer and spatio-temporal index structures are relevant to our work. DHTs [25] can be used for indexing events based on their type, quad-tree variants such as Rtrees [12] can be used for optimizing spatial searches, and K-D trees [2] can be used for multi-attribute search. While this paper focuses on building an ordered index structure for range queries, we will explore the use of other index structures for alternate queries over sensor data. 8. Conclusions In this paper, we argued that existing sensor storage systems are designed primarily for flat hierarchies of homogeneous sensor nodes and do not fully exploit the multi-tier nature of emerging sensor networks. We presented the design of TSAR, a fundamentally different storage architecture that envisions separation of data from metadata by employing local storage at the sensors and distributed indexing at the proxies. At the proxy tier, TSAR employs a novel multi-resolution ordered distributed index structure, the Sparse Interval Skip Graph, for efficiently supporting spatio-temporal and range queries. At the sensor tier, TSAR supports energy-aware adaptive summarization that can trade-off the energy cost of transmitting metadata to the proxies against the overhead of false hits resulting from querying a coarser resolution index structure. We implemented TSAR in a two-tier sensor testbed comprising Stargatebased proxies and Mote-based sensors. Our experimental evaluation of TSAR demonstrated the benefits and feasibility of employing our energy-efficient low-latency distributed storage architecture in multi-tier sensor networks.
TSAR: A Two Tier Sensor Storage Architecture Using Interval Skip Graphs * ABSTRACT Archival storage of sensor data is necessary for applications that query, mine, and analyze such data for interesting features and trends. We argue that existing storage systems are designed primarily for flat hierarchies of homogeneous sensor nodes and do not fully exploit the multi-tier nature of emerging sensor networks, where an application can comprise tens of tethered proxies, each managing tens to hundreds of untethered sensors. We present TSAR, a fundamentally different storage architecture that envisions separation of data from metadata by employing local archiving at the sensors and distributed indexing at the proxies. At the proxy tier, TSAR employs a novel multi-resolution ordered distributed index structure, the Interval Skip Graph, for efficiently supporting spatio-temporal and value queries. At the sensor tier, TSAR supports energy-aware adaptive summarization that can trade off the cost of transmitting metadata to the proxies against the overhead offalse hits resultingfrom querying a coarse-grain index. We implement TSAR in a two-tier sensor testbed comprising Stargatebased proxies and Mote-based sensors. Our experiments demonstrate the benefits and feasibility of using our energy-efficient storage architecture in multi-tier sensor networks. 1. Introduction 1.1 Motivation Many different kinds of networked data-centric sensor applications have emerged in recent years. Sensors in these applications sense the environment and generate data that must be processed, * This work was supported in part by National Science Foundation grants EEC-0313747, CNS-0325868 and EIA-0098060. To achieve its goals, a typical sensor application needs access to both live and past sensor data. Archival storage of past sensor data requires a storage system, the key attributes of which are: where the data is stored, whether it is indexed, and how the application can access this data in an energy-efficient manner with low latency. There have been a spectrum of approaches for constructing sensor storage systems. In the simplest, sensors stream data or events to a server for long-term archival storage [3], where the server often indexes the data to permit efficient access at a later time. Since sensors may be several hops from the nearest base station, network costs are incurred; however, once data is indexed and archived, subsequent data accesses can be handled locally at the server without incurring network overhead. In this approach, the storage is centralized, reads are efficient and cheap, while writes are expensive. Further, all data is propagated to the server, regardless of whether it is ever used by the application. An alternate approach is to have each sensor store data or events locally (e.g., in flash memory), so that all writes are local and incur no communication overheads. A read request, such as whether an event was detected by a particular sensor, requires a message to be sent to the sensor for processing. More complex read requests are handled by flooding. For instance, determining if an intruder was detected over a particular time interval requires the request to be flooded to all sensors in the system. Thus, in this approach, the storage is distributed, writes are local and inexpensive, while reads incur significant network overheads. Requests that require flooding, due to the lack of an index, are expensive and may waste precious sensor resources, even if no matching data is stored at those sensors. 
Between these two extremes lie a number of other sensor storage systems with different trade-offs, summarized in Table 1. The geographic hash table (GHT) approach [24, 26] advocates the use of an in-network index to augment the fully distributed nature of sensor storage. In this approach, each data item has a key associated with it, and a distributed or geographic hash table is used to map keys to nodes that store the corresponding data items. Thus, writes cause data items to be sent to the hashed nodes and also trigger updates to the in-network hash table. A read request requires a lookup in the in-network hash table to locate the node that stores the data item; observe that the presence of an index eliminates the need for flooding in this approach. Most of these approaches assume a flat, homogeneous architecture in which every sensor node is energy-constrained. In this paper, we propose a novel storage architecture called TSAR1 that reflects and exploits the multi-tier nature of emerging sensor networks, where the application is comprised of tens of tethered sensor proxies (or more), each controlling tens or hundreds of untethered sensors. TSAR is a component of our PRESTO [8] predictive storage architecture, which combines archival storage with caching and prediction. We believe that a fundamentally different storage architecture is necessary to address the multi-tier nature of future sensor networks. Specifically, the storage architecture needs to exploit the resource-rich nature of proxies, while respecting resource constraints at the remote sensors. No existing sensor storage architecture explicitly addresses this dichotomy in the resource capabilities of different tiers. Any sensor storage system should also carefully exploit current technology trends, which indicate that the capacities of flash memories continue to rise as per Moore's Law, while their costs continue to plummet. Thus it will soon be feasible to equip each sensor with 1 GB of flash storage for a few tens of dollars. An even more compelling argument is the energy cost of flash storage, which can be as much as two orders of magnitude lower than that for communication. These trends, together with the energy-constrained nature of untethered sensors, indicate that local storage offers a viable, energy-efficient alternative to communication in sensor networks. TSAR exploits these trends by storing data or events locally on the energy-efficient flash storage at each sensor. Sensors send concise identifying information, which we term metadata, to a nearby proxy; depending on the representation used, this metadata may be an order of magnitude or more smaller than the data itself, imposing much lower communication costs. The resource-rich proxies interact with one another to construct a distributed index of the metadata reported from all sensors, and thus an index of the associated data stored at the sensors. This index provides a unified, logical view of the distributed data, and enables an application to query and read past data efficiently--the index is used to pinpoint all data that match a read request, followed by messages to retrieve that data from the corresponding sensors. In-network index lookups are eliminated, reducing network overheads for read requests. This separation of data, which is stored at the sensors, and the metadata, which is stored at the proxies, enables TSAR to reduce energy overheads at the sensors, by leveraging resources at tethered proxies. 
1.2 Contributions This paper presents TSAR, a novel two-tier storage architecture for sensor networks. To the best of our knowledge, this is the first sensor storage system that is explicitly tailored for emerging multitier sensor networks. Our design and implementation of TSAR has resulted in four contributions. At the core of the TSAR architecture is a novel distributed index structure based on interval skip graphs that we introduce in this paper. This index structure can store coarse summaries of sensor data and organize them in an ordered manner to be easily search1TSAR: Tiered Storage ARchitecture for sensor networks. able. This data structure has O (log n) expected search and update complexity. Further, the index provides a logically unified view of all data in the system. Second, at the sensor level, each sensor maintains a local archive that stores data on flash memory. Our storage architecture is fully stateless at each sensor from the perspective of the metadata index; all index structures are maintained at the resource-rich proxies, and only direct requests or simple queries on explicitly identified storage locations are sent to the sensors. Storage at the remote sensor is in effect treated as appendage of the proxy, resulting in low implementation complexity, which makes it ideal for small, resourceconstrained sensor platforms. Further, the local store is optimized for time-series access to archived data, as is typical in many applications. Each sensor periodically sends a summary of its data to a proxy. TSAR employs a novel adaptive summarization technique that adapts the granularity of the data reported in each summary to the ratio of false hits for application queries. Third, we have implemented a prototype of TSAR on a multi-tier testbed comprising Stargate-based proxies and Mote-based sensors. Our implementation supports spatio-temporal, value, and rangebased queries on sensor data. The remainder of this paper is structured as follows. Section 2 presents key design issues that guide our work. Section 3 and 4 present the proxy-level index and the local archive and summarization at a sensor, respectively. Section 5 discusses our prototype implementation, and Section 6 presents our experimental results. We present related work in Section 7 and our conclusions in Section 8. 7. Related Work In this section, we review prior work on storage and indexing techniques for sensor networks. The problem of archival storage of sensor data has received limited attention in sensor network literature. Both these systems focus on local storage, whereas our focus is both on storage at the remote sensors as well as providing a unified view of distributed data across all such local archives. Multi-resolution storage [9] is intended for in-network storage and search in systems where there is significant data in comparison to storage resources. In contrast, TSAR addresses the problem of archival storage in two-tier systems where sufficient resources can be placed at the edge sensors. The RISE platform [21] being developed as part of the NODE project at UCR addresses the issues of hardware platform support for large amounts of storage in remote sensor nodes, but not the indexing and querying of this data. In order to efficiently access a distributed sensor store, an index needs to be constructed of the data. 
Early work on sensor networks such as Directed Diffusion [17] assumes a system where all useful sensor data was stored locally at each sensor, and spatially scoped queries are routed using geographic co-ordinates to locations where the data is stored. The Directed Diffusion substrate routes queries to specific locations Local storage with in-network indexing approaches address this issue by constructing indexes using frameworks such as Geographic Hash Tables [24] and Quad Trees [9]. Recent research has seen a growing body of work on data indexing schemes for sensor networks [26] [11] [18]. DCS constructs a distributed structure that groups events together spatially by their named type. Distributed Index of Features in Sensornets (DIFS [11]) and Multi-dimensional Range Queries in Sensor Networks (DIM [18]) extend the data-centric storage approach to provide spatially distributed hierarchies of indexes to data. While these approaches advocate in-network indexing for sensor networks, we believe that indexing is a task that is far too complicated to be performed at the remote sensor nodes since it involves maintaining significant state and large tables. TSAR provides a better match between resource requirements of storage and indexing and the availability of resources at different tiers. Thus complex operations such as indexing and managing metadata are performed at the proxies, while storage at the sensor remains simple. In addition to storage and indexing techniques specific to sensor networks, many distributed, peer-to-peer and spatio-temporal index structures are relevant to our work. While this paper focuses on building an ordered index structure for range queries, we will explore the use of other index structures for alternate queries over sensor data. 8. Conclusions In this paper, we argued that existing sensor storage systems are designed primarily for flat hierarchies of homogeneous sensor nodes and do not fully exploit the multi-tier nature of emerging sensor networks. We presented the design of TSAR, a fundamentally different storage architecture that envisions separation of data from metadata by employing local storage at the sensors and distributed indexing at the proxies. At the proxy tier, TSAR employs a novel multi-resolution ordered distributed index structure, the Sparse Interval Skip Graph, for efficiently supporting spatio-temporal and range queries. At the sensor tier, TSAR supports energy-aware adaptive summarization that can trade-off the energy cost of transmitting metadata to the proxies against the overhead of false hits resulting from querying a coarser resolution index structure. We implemented TSAR in a two-tier sensor testbed comprising Stargatebased proxies and Mote-based sensors. Our experimental evaluation of TSAR demonstrated the benefits and feasibility of employing our energy-efficient low-latency distributed storage architecture in multi-tier sensor networks.
C-52
Fairness in Dead-Reckoning based Distributed Multi-Player Games
In a distributed multi-player game that uses dead-reckoning vectors to exchange movement information among players, there is inaccuracy in rendering the objects at the receiver due to network delay between the sender and the receiver. The object is placed at the receiver at the position indicated by the dead-reckoning vector, but by that time, the real position could have changed considerably at the sender. This inaccuracy would be tolerable if it were consistent among all players; that is, if at the same physical time all players saw an inaccurate (with respect to the real position of the object) but identical position and trajectory for an object. But due to varying network delays between the sender and different receivers, the inaccuracy is different at different players as well. This leads to unfairness in game playing. In this paper, we first introduce an error measure for estimating this inaccuracy. Then we develop an algorithm for scheduling the sending of dead-reckoning vectors at a sender that strives to make this error equal at different receivers over time. This algorithm makes the game very fair at the expense of increasing the overall mean error of all players. To mitigate this effect, we propose a budget based algorithm that provides improved fairness without increasing the mean error, thereby maintaining the accuracy of game playing. We have implemented both the scheduling algorithm and the budget based algorithm as part of BZFlag, a popular distributed multi-player game. We show through experiments that these algorithms provide fairness among players in spite of widely varying network delays. An additional property of the proposed algorithms is that they require fewer DRs to be exchanged (compared to the current implementation of BZFlag) to achieve the same level of accuracy in game playing.
[ "fair", "dead-reckon", "dead-reckon vector", "accuraci", "mean error", "budget base algorithm", "schedul algorithm", "distribut multi-player game", "quantiz", "export error", "bucket synchron", "network delai", "clock synchron" ]
[ "P", "P", "P", "P", "P", "P", "P", "M", "U", "M", "U", "M", "U" ]
Fairness in Dead-Reckoning based Distributed Multi-Player Games Sudhir Aggarwal Hemant Banavar Department of Computer Science Florida State University, Tallahassee, FL Email: {sudhir, banavar}@cs. fsu.edu Sarit Mukherjee Sampath Rangarajan Center for Networking Research Bell Laboratories, Holmdel, NJ Email: {sarit, sampath}@bell-labs.com ABSTRACT In a distributed multi-player game that uses dead-reckoning vectors to exchange movement information among players, there is inaccuracy in rendering the objects at the receiver due to network delay between the sender and the receiver. The object is placed at the receiver at the position indicated by the dead-reckoning vector, but by that time, the real position could have changed considerably at the sender. This inaccuracy would be tolerable if it is consistent among all players; that is, at the same physical time, all players see inaccurate (with respect to the real position of the object) but the same position and trajectory for an object. But due to varying network delays between the sender and different receivers, the inaccuracy is different at different players as well. This leads to unfairness in game playing. In this paper, we first introduce an error measure for estimating this inaccuracy. Then we develop an algorithm for scheduling the sending of dead-reckoning vectors at a sender that strives to make this error equal at different receivers over time. This algorithm makes the game very fair at the expense of increasing the overall mean error of all players. To mitigate this effect, we propose a budget based algorithm that provides improved fairness without increasing the mean error thereby maintaining the accuracy of game playing. We have implemented both the scheduling algorithm and the budget based algorithm as part of BZFlag, a popular distributed multi-player game. We show through experiments that these algorithms provide fairness among players in spite of widely varying network delays. An additional property of the proposed algorithms is that they require less number of DRs to be exchanged (compared to the current implementation of BZflag) to achieve the same level of accuracy in game playing. Categories and Subject Descriptors C.2.4 [Computer-Communication Networks]: Distributed Systems-Distributed applications General Terms Algorithms, Design, Experimentation, Performance 1. INTRODUCTION In a distributed multi-player game, players are normally distributed across the Internet and have varying delays to each other or to a central game server. Usually, in such games, the players are part of the game and in addition they may control entities that make up the game. During the course of the game, the players and the entities move within the game space. A player sends information about her movement as well as the movement of the entities she controls to the other players using a Dead-Reckoning (DR) vector. A DR vector contains information about the current position of the player/entity in terms of x, y and z coordinates (at the time the DR vector was sent) as well as the trajectory of the entity in terms of the velocity component in each of the dimensions. Each of the participating players receives such DR vectors from one another and renders the other players/entities on the local consoles until a new DR vector is received for that player/entity. In a peer-to-peer game, players send DR vectors directly to each other; in a client-server game, these DR vectors may be forwarded through a game server. 
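Concretely, a DR vector of the kind just described, and the receiver-side extrapolation it drives, can be sketched in a few lines. The Python below is an illustrative 2D model with field names of our choosing, not BZFlag's actual data structures; it also shows how a receiver with a synchronized clock uses the DR timestamp to place the entity on the sender's current extrapolated path when the vector arrives, a point the discussion below returns to.

```python
from dataclasses import dataclass

@dataclass
class DRVector:
    t: float      # time at which the DR vector was computed (clocks are synchronized)
    x: float
    y: float
    vx: float
    vy: float

    def position_at(self, now):
        """Exported-path position at time `now` (linear extrapolation)."""
        dt = now - self.t
        return (self.x + self.vx * dt, self.y + self.vy * dt)

def place_on_arrival(dr, arrival_time):
    """Receiver-side placement: jump the entity onto the exported path at arrival.
    From this instant on, placed and exported paths coincide; the error accrued
    while the DR was in flight (the network delay) cannot be undone."""
    return dr.position_at(arrival_time)

# Example: a DR computed at t = 10 arrives 0.25 s later.
dr = DRVector(t=10.0, x=5.0, y=2.0, vx=4.0, vy=0.0)
print(place_on_arrival(dr, arrival_time=10.25))   # (6.0, 2.0)
```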
The idea of DR is used because it is almost impossible for players/entities to exchange their current positions at every time unit. DR vectors are a quantization of the real trajectory (which we refer to as the real path) at a player. Normally, a new DR vector is computed and sent whenever the real path deviates from the path extrapolated using the previous DR vector (say, in terms of distance in the x, y, z plane) by some amount specified by a threshold. We refer to the trajectory that can be computed using the sequence of DR vectors as the exported path. Therefore, at the sending player, there is a deviation between the real path and the exported path. The error due to this deviation can be removed if each movement of the player/entity is communicated to the other players at every time unit; that is, a DR vector is generated at every time unit, thereby making the real and exported paths the same. Given that it is not feasible to satisfy this due to bandwidth limitations, this error is not of practical interest. Therefore, the receiving players can, at best, follow the exported path. Because of the network delay between the sending and receiving players, when a DR vector is received and rendered at a player, the original trajectory of the player/entity may have already changed. Thus, in physical time, there is a deviation at the receiving player between the exported path and the rendered trajectory (which we refer to as the placed path). We refer to this error as the export error. Note that the export error, in turn, results in a deviation between the real and the placed paths. The export error manifests itself due to the deviation between the exported path at the sender and the placed path at the receiver (i) before the DR vector is received at the receiver (referred to as the before export error), and (ii) after the DR vector is received at the receiver (referred to as the after export error). In an earlier paper [1], we showed that by synchronizing the clocks at all the players and by using a technique based on time-stamping messages that carry the DR vectors, we can guarantee that the after export error is made zero. That is, the placed and the exported paths match after the DR vector is received. We also showed that the before export error can never be eliminated since there is always a non-zero network delay, but it can be significantly reduced using our technique [1]. Henceforth we assume that the players use such a technique, which results in an unavoidable but small overall export error. In this paper we consider the problem of different and varying network delays between each sender-receiver pair of a DR vector, and consequently, the different and varying export errors at the receivers. Due to the difference in the export errors among the receivers, the same entity is rendered at different physical times at different receivers. This introduces unfairness in game playing. For instance, a player with a large delay would always see an entity late in physical time compared to the other players and, therefore, her action on the entity would be delayed (in physical time) even if she reacted instantaneously after the entity was rendered. Our goal in this paper is to improve the fairness of these games in spite of the varying network delays by equalizing the export error at the players.
We explore whether the time-average of the export errors (which is the cumulative export error over a period of time averaged over the time period) at all the players can be made the same by scheduling the sending of the DR vectors appropriately at the sender. We propose two algorithms to achieve this. Both the algorithms are based on delaying (or dropping) the sending of DR vectors to some players on a continuous basis to try and make the export error the same at all the players. At an abstract level, the algorithm delays sending DR vectors to players whose accumulated error so far in the game is smaller than that of others; this would mean that the export error due to this DR vector at these players will be larger than that at the other players, thereby making them the same. The goal is to make this error at least approximately equal at every DR vector, with the deviation in the error becoming smaller as time progresses. The first algorithm (which we refer to as the scheduling algorithm) is based on estimating the delay between players and refining the sending of DR vectors by scheduling them to be sent to different players at different times at every DR generation point. Through an implementation of this algorithm using the open source game BZFlag, we show that this algorithm makes the game very fair (we measure fairness in terms of the standard deviation of the error). The drawback of this algorithm is that it tends to push the error of all the players towards that of the player with the worst error (which is the error at the farthest player, in terms of delay, from the sender of the DR). To alleviate this effect, we propose a budget based algorithm which budgets how the DRs are sent to different players. At a high level, the algorithm is based on the idea of sending more DRs to players who are farther away from the sender compared to those who are closer. Experimental results from BZFlag illustrate that the budget based algorithm follows a more balanced approach. It improves the fairness of the game, but at the same time does so without pushing up the mean error of the players, thereby maintaining the accuracy of the game. In addition, the budget based algorithm is shown to achieve the same level of accuracy of game playing as the current implementation of BZFlag using far fewer DR vectors. 2. PREVIOUS WORK Earlier work on network games to deal with network latency has mostly focussed on compensation techniques for packet delay and loss [2, 3, 4]. These methods are aimed at making large delays and message loss tolerable for players, but they do not consider the problems that may be introduced by varying delays from the server to different players or from the players to one another. For example, the concept of local lag has been used in [3], where each player delays every local operation for a certain amount of time so that remote players can receive information about the local operation and execute the same operation at about the same time, thus reducing state inconsistencies. The online multi-player game MiMaze [2, 5, 6], for example, takes a static bucket synchronization approach to compensate for variable network delays. In MiMaze, each player delays all events by 100 ms regardless of whether they are generated locally or remotely. Players with a network delay larger than 100 ms simply cannot participate in the game. In general, techniques based on bucket synchronization depend on imposing a worst case delay on all the players.
A few papers have studied the problem of fairness in a distributed game through more sophisticated message delivery mechanisms. But these works [7, 8] assume the existence of a global view of the game, where a game server maintains a view (or state) of the game. Players can introduce objects into the game or delete objects that are already part of the game (for example, in a first-person shooter game, by shooting down the object). These additions and deletions are communicated to the game server using action messages. Based on these action messages, the state of the game is changed at the game server and these changes are communicated to the players using update messages. Fairness is achieved by ordering the delivery of action and update messages at the game server and players, respectively, based on the notion of a fair order that takes into account the delays between the game server and the different players. Objects that are part of the game may move, but how this information is communicated to the players appears to be beyond the scope of these works. In this sense, these works are very limited in scope and may be applicable only to first-person shooter games, and even then only to games where the players themselves are not part of the game. DR vectors can be exchanged directly among the players (peer-to-peer model) or using a central server as a relay (client-server model). It has been shown in [9] that multi-player games that use DR vectors together with bucket synchronization are not cheat-proof unless additional mechanisms are put in place. Both the scheduling algorithm and the budget-based algorithm described in our paper use DR vectors and hence are not cheat-proof. For example, a receiver could skew the delay estimate at the sender to make the sender believe that the delay between the sender and the receiver is high, thereby gaining undue advantage. We emphasize that the focus of this paper is on fairness without addressing the issue of cheating. In the next section, we describe the game model that we use and illustrate how senders and receivers exchange DR vectors and how entities are rendered at the receivers based on the time-stamp augmented DR vector exchange described in [1]. In Section 4, we describe the DR vector scheduling algorithm that aims to make the export error equal across the players with varying delays from the sender of a DR vector, followed by experimental results obtained from instrumentation of the scheduling algorithm on the open source game BZFlag. Section 5 describes the budget based algorithm that achieves improved fairness without reducing the level of accuracy of game playing. Conclusions are presented in Section 6. 3. GAME MODEL The game architecture is based on players distributed across the Internet and exchanging DR vectors with each other. The DR vectors could either be sent directly from one player to another (peer-to-peer model) or could be sent through a game server which receives the DR vector from a player and forwards it to other players (client-server model). As mentioned before, we assume synchronized clocks among the participating players. Each DR vector sent from one player to another specifies the trajectory of exactly one player/entity. We assume a linear DR vector in that the information contained in the DR vector is only enough at the receiving player to compute the trajectory and render the entity in a straight line path.
Such a DR vector contains information about the starting position and velocity of the player/entity, where the velocity is constant. (Other types of DR vectors include quadratic DR vectors, which specify the acceleration of the entity, and cubic spline DR vectors, which consider the starting position and velocity and the ending position and velocity of the entity.) Thus, the DR vectors sent by a player specify the current time at the player when the DR vector is computed (not the time at which this DR vector is sent to the other players, as we will explain later), the current position of the player/entity in terms of the x, y, z coordinates, and the velocity vector in the direction of the x, y and z coordinates. Specifically, the ith DR vector sent by player j about the kth entity is denoted by DR^j_ik and is represented by the tuple (T^j_ik, x^j_ik, y^j_ik, z^j_ik, vx^j_ik, vy^j_ik, vz^j_ik). Without loss of generality, in the rest of the discussion, we consider a sequence of DR vectors sent by only one player and for only one entity. For simplicity, we consider a two-dimensional game space rather than a three-dimensional one. Hence we use DRi to denote the ith such DR vector, represented as the tuple (Ti, xi, yi, vxi, vyi). The receiving player computes the starting position for the entity based on xi, yi and the time difference between when the DR vector is received and the time Ti at which it was computed. Note that the computation of the time difference is feasible since all the clocks are synchronized. The receiving player then uses the velocity components to project and render the trajectory of the entity. This trajectory is followed until a new DR vector is received, which changes the position and/or velocity of the entity. [Figure 1: Trajectories and deviations. DR0 = (T0, x0, y0, vx0, vy0), computed at time T0 and sent to the receiver; DR1 = (T1, x1, y1, vx1, vy1), computed at time T1 and sent to the receiver.] Based on this model, Figure 1 illustrates the sending and receiving of DR vectors and the different errors that are encountered. The figure shows the reception of DR vectors at a player (henceforth called the receiver). The horizontal axis shows time, which is synchronized among all the players. The vertical axis conceptually captures the two-dimensional position of an entity. Assume that at time T0 a DR vector DR0 is computed by the sender and immediately sent to the receiver. Assume that DR0 is received at the receiver after a delay of dt0 time units. The receiver computes the initial position of the entity as (x0 + vx0 × dt0, y0 + vy0 × dt0) (shown as point E). The thick line EBD represents the projected and rendered trajectory at the receiver based on the velocity components vx0 and vy0 (placed path). At time T1 a DR vector DR1 is computed for the same entity and immediately sent to the receiver. (Normally, DR vectors are computed not on a periodic basis but on demand, when the deviation between the real path and the path exported by the previous DR vector exceeds some threshold.) Assume that DR1 is received at the receiver after a delay of dt1 time units. When this DR vector is received, assume that the entity is at point D. A new position for the entity is computed as (x1 + vx1 × dt1, y1 + vy1 × dt1) and the entity is moved to this position (point C). The velocity components vx1 and vy1 are used to project and render this entity further. Let us now consider the error due to network delay. Although DR1 was computed at time T1 and sent to the receiver, it did not reach the receiver until time T1 + dt1.
Let us now consider the error due to network delay. Although DR1 was computed at time T1 and sent to the receiver, it did not reach the receiver until time T1 + dt1. This means that, although the exported path based on DR1 at the sender at time T1 is the trajectory AC, until time T1 + dt1 this entity was being rendered at the receiver along trajectory BD based on DR0. Only at time T1 + dt1 did the entity get moved to point C, from which point onwards the exported and the placed paths are the same. The deviation between the exported and placed paths creates an error component which we refer to as the export error. A way to represent the export error is to compute the integral of the distance between the two trajectories over the time when they are out of sync. We represent the integral of the distances between the placed and exported paths due to some DR vector DRi over a time interval [t1, t2] as Err(DRi, t1, t2). In the figure, the export error due to DR1 is computed as the integral of the distance between the trajectories AC and BD over the time interval [T1, T1 + dt1]. Note that there could be other ways of representing this error as well, but in this paper we use the integral of the distance between the two trajectories as a measure of the export error. Note also that there would have been an export error created due to the reception of DR0, at which time the placed path would have been based on a previous DR vector. This is not shown in the figure, but it serves to remind the reader that the export error is cumulative when a sequence of DR vectors is received. Starting from time T1 onwards, there is a deviation between the real and the exported paths. As we discussed earlier, this export error is unavoidable.

The above figure and example illustrate one receiver only. But in reality, DR vectors DR0 and DR1 are sent by the sender to all the participating players. Each of these players receives DR0 and DR1 after varying delays, thereby creating different export error values at different players. The goal of the DR vector scheduling algorithm to be described in the next section is to make this (cumulative) export error equal at every player independently for each of the entities that make up the game.

4. SCHEDULING ALGORITHM FOR SENDING DR VECTORS

In Section 3 we showed how delay from the sender of a new DR vector to the receiver of the DR vector could lead to export error because of the deviation of the placed path from the exported path at the receiver until this new DR vector is received. We also mentioned that the goal of the DR vector scheduling algorithm is to make the export error equal at all receivers over a period of time. Since the game is played in a distributed environment, it makes sense for the sender of an entity to keep track of all the errors at the receivers and try to make them equal. However, the sender cannot know the actual error at a receiver until it gets some information regarding the error back from the receiver. Our algorithm estimates the error to compute a schedule for sending DR vectors to the receivers and corrects the error when it gets feedback from the receivers. In this section we provide motivations for the algorithm and describe the steps it goes through. Throughout this section, we will use the following example to illustrate the algorithm.
Figure 2: DR vector flow between a sender and two receivers and the evolution of estimated and actual placed paths at the receivers. DR0 = (T0, T0, x0, y0, vx0, vy0) is sent at time T0 to both receivers. DR1 = (T1, T1^1, x1, y1, vx1, vy1) is sent at time T1^1 = T1 + δ1 to receiver 1, and DR1 = (T1, T1^2, x1, y1, vx1, vy1) is sent at time T1^2 = T1 + δ2 to receiver 2.

Consider the example in Figure 2. The figure shows a single sender sending DR vectors for an entity to two different receivers 1 and 2. DR0, computed at T0, is sent and received by the receivers sometime between T0 and T1, at which time they move the location of the entity to match the exported path. Thus, the path of the entity is shown only from the point where the placed path matches the exported path for DR0. Now consider DR1. At time T1, DR1 is computed by the sender, but assume that it is not immediately sent to the receivers and is only sent after time δ1 to receiver 1 (at time T1^1 = T1 + δ1) and after time δ2 to receiver 2 (at time T1^2 = T1 + δ2). Note that the sender includes the sending timestamp with the DR vector as shown in the figure. Assume that the sender estimates (it will be clear shortly why the sender has to estimate the delay) that after a delay of dt1, receiver 1 will receive it, will use the coordinate and velocity parameters to compute the entity's current location and move it there (point C), and that from this time onwards the exported and the placed paths will become the same. However, in reality, receiver 1 receives DR1 after a delay of da1 (which is less than the sender's estimate dt1), and moves the corresponding entity to point H. Similarly, the sender estimates that after a delay of dt2, receiver 2 will receive DR1, will compute the current location of the entity and move it to that point (point E), while in reality it receives DR1 after a delay of da2 > dt2 and moves the entity to point N. The other points shown on the placed and exported paths will be used later in the discussion to describe different error components.

4.1 Computation of Relative Export Error

Referring back to the discussion from Section 3, from the sender's perspective, the export error at receiver 1 due to DR1 is given by Err(DR1, T1, T1 + δ1 + dt1) (the integral of the distance between the trajectories AC and DB over the time interval [T1, T1 + δ1 + dt1]) in Figure 2. This is due to the fact that the sender uses the estimated delay dt1 to compute this error. Similarly, the export error from the sender's perspective at receiver 2 due to DR1 is given by Err(DR1, T1, T1 + δ2 + dt2) (the integral of the distance between the trajectories AE and DF over the time interval [T1, T1 + δ2 + dt2]). Note that the above errors from the sender's perspective are only estimates. In reality, the export error will be either smaller or larger than the estimated value, based on whether the delay estimate was larger or smaller than the actual delay that DR1 experienced.
This difference between the estimated and the actual export error is the relative export error (which could be either positive or negative), which occurs for every DR vector that is sent and is accumulated at the sender. The concept of relative export error is illustrated in Figure 2. Since the actual delay to receiver 1 is da1, the export error induced by DR1 at receiver 1 is Err(DR1, T1, T1 + δ1 + da1). This means there is an error in the estimated export error, and the sender can compute this error only after it gets feedback from the receiver about the actual delay for the delivery of DR1, i.e., the value of da1. We propose that once receiver 1 receives DR1, it sends the value of da1 back to the sender. The receiver can compute this information as it knows the time at which DR1 was sent (T1^1 = T1 + δ1, which is appended to the DR vector as shown in Figure 2) and the local receiving time (which is synchronized with the sender's clock). Therefore, the sender computes the relative export error for receiver 1, represented using R1, as

R1 = Err(DR1, T1, T1 + δ1 + dt1) − Err(DR1, T1, T1 + δ1 + da1) = Err(DR1, T1 + δ1 + dt1, T1 + δ1 + da1)

Similarly, the relative export error for receiver 2 is computed as

R2 = Err(DR1, T1, T1 + δ2 + dt2) − Err(DR1, T1, T1 + δ2 + da2) = Err(DR1, T1 + δ2 + dt2, T1 + δ2 + da2)

Note that R1 > 0 as da1 < dt1, and R2 < 0 as da2 > dt2. Relative export errors are computed by the sender as and when it receives the feedback from the receivers. This example shows the relative export error values after DR1 is sent and the corresponding feedbacks are received.

4.2 Equalization of Error Among Receivers

We now explain what we mean by making the errors equal at all the receivers and how this can be achieved. As stated before, the sender keeps estimates of the delays to the receivers, dt1 and dt2 in the example of Figure 2. This means that at time T1, when DR1 is computed, the sender already knows how long it may take messages carrying this DR vector to reach the receivers. The sender uses this information to compute the export errors, which are Err(DR1, T1, T1 + δ1 + dt1) and Err(DR1, T1, T1 + δ2 + dt2) for receivers 1 and 2, respectively. Note that the areas of these error components are a function of δ1 and δ2 as well as the network delays dt1 and dt2. If we are to make the export errors due to DR1 the same at both receivers, the sender needs to choose δ1 and δ2 such that Err(DR1, T1, T1 + δ1 + dt1) = Err(DR1, T1, T1 + δ2 + dt2). But when DR1 is computed at T1, there could already have been accumulated relative export errors due to previous DR vectors (DR0 and the ones before). Let us represent the accumulated relative error up to DRi for receiver j as R^i_j. To accommodate these accumulated relative errors, the sender should now choose δ1 and δ2 such that

R^0_1 + Err(DR1, T1, T1 + δ1 + dt1) = R^0_2 + Err(DR1, T1, T1 + δ2 + dt2)

The δi determines the scheduling instant of the DR vector at the sender for receiver i. This method of computing the δ's ensures that the accumulated export error (i.e., the total actual error) for each receiver equalizes at the transmission of each DR vector. In order to establish this, assume that the feedback for DR vector Di from a receiver reaches the sender before the schedule for Di+1 is computed. Let S^i_m and A^i_m denote, respectively, the estimated error for receiver m used for computing the schedule for Di and the actual error for receiver m computed after receiving the feedback for Di. Then R^i_m = A^i_m − S^i_m.
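This bookkeeping can be sketched as follows (an illustrative Python fragment under our own naming; the Err(·) integral is assumed to be available, for instance as in the sketch accompanying Section 4.3, and the sign convention R = A − S used for the schedule computation is followed):

```python
def update_relative_error(acc_R, k, err, T1, delta_k, dt_est, da_actual):
    """Fold the delay feedback for one DR vector into receiver k's accumulated
    relative export error.  err(t1, t2) is the Err(DR, t1, t2) integral for this
    DR vector; dt_est is the delay estimate used when the schedule was computed
    and da_actual is the delay reported back by the receiver."""
    S = err(T1, T1 + delta_k + dt_est)     # export error estimated at scheduling time
    A = err(T1, T1 + delta_k + da_actual)  # export error actually induced at the receiver
    acc_R[k] += A - S                      # relative export error, R = A - S
    return acc_R[k]
```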
In order to compute the schedule instances (i.e., the δ's) for Di, for any pair of receivers m and n, we impose R^{i-1}_m + S^i_m = R^{i-1}_n + S^i_n. The following theorem establishes the fact that the accumulated export error is equalized at every scheduling instant.

THEOREM 4.1. When the schedule instances for sending Di are computed for any pair of receivers m and n, the following condition is satisfied:

Σ_{k=1}^{i-1} A^k_m + S^i_m = Σ_{k=1}^{i-1} A^k_n + S^i_n.

Proof: By induction. Assume that the premise holds for some i. We show that it holds for i+1. The base case for i = 1 holds since initially R^0_m = R^0_n = 0, and the condition S^1_m = S^1_n is used to compute the scheduling instances. In order to compute the schedule for Di+1, we first compute the relative errors as R^i_m = A^i_m − S^i_m and R^i_n = A^i_n − S^i_n. Then, to compute the δ's, we impose

R^i_m + S^{i+1}_m = R^i_n + S^{i+1}_n, i.e., A^i_m − S^i_m + S^{i+1}_m = A^i_n − S^i_n + S^{i+1}_n.

Adding the condition of the premise on both sides we get

Σ_{k=1}^{i} A^k_m + S^{i+1}_m = Σ_{k=1}^{i} A^k_n + S^{i+1}_n.

4.3 Computation of the Export Error

Let us now consider how the export errors can be computed. From the previous section, to find δ1 and δ2 we need to find Err(DR1, T1, T1 + δ1 + dt1) and Err(DR1, T1, T1 + δ2 + dt2). Note that the values of R^0_1 and R^0_2 are already known at the sender. Consider the computation of Err(DR1, T1, T1 + δ1 + dt1). This is the integral of the distance between the trajectories AC due to DR1 and BD due to DR0. From DR0 and DR1, point A is (X1, Y1) = (x1, y1) and point B is (X0, Y0) = (x0 + (T1 − T0) × vx0, y0 + (T1 − T0) × vy0). The trajectory AC can be represented as a function of time as (X1(t), Y1(t)) = (X1 + vx1 × t, Y1 + vy1 × t) and the trajectory BD can be represented as (X0(t), Y0(t)) = (X0 + vx0 × t, Y0 + vy0 × t). The distance between the two trajectories as a function of time then becomes

dist(t) = √((X1(t) − X0(t))² + (Y1(t) − Y0(t))²)
        = √(((X1 − X0) + (vx1 − vx0)t)² + ((Y1 − Y0) + (vy1 − vy0)t)²)
        = √(((vx1 − vx0)² + (vy1 − vy0)²)t² + 2((X1 − X0)(vx1 − vx0) + (Y1 − Y0)(vy1 − vy0))t + (X1 − X0)² + (Y1 − Y0)²)

Let

a = (vx1 − vx0)² + (vy1 − vy0)²
b = 2((X1 − X0)(vx1 − vx0) + (Y1 − Y0)(vy1 − vy0))
c = (X1 − X0)² + (Y1 − Y0)²

Then dist(t) can be written as dist(t) = √(a t² + b t + c), and Err(DR1, t1, t2) for some time interval [t1, t2] becomes

∫_{t1}^{t2} dist(t) dt = ∫_{t1}^{t2} √(a t² + b t + c) dt.

A closed-form solution for the indefinite integral is

∫ √(a t² + b t + c) dt = ((2at + b)√(at² + bt + c))/(4a) + (c/(2√a)) ln((b/2 + at)/√a + √(at² + bt + c)) − (b²/(8a^(3/2))) ln((b/2 + at)/√a + √(at² + bt + c)).

Err(DR1, T1, T1 + δ1 + dt1) and Err(DR1, T1, T1 + δ2 + dt2) can then be calculated by applying the appropriate limits to the above solution. In the next section, we consider the computation of the δ's for N receivers.
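As a concrete illustration of this computation, the sketch below (Python, reusing the hypothetical DRVector class from the sketch in Section 3) evaluates Err(·) by numerically integrating dist(t); this is purely for illustration and stands in for the closed form above:

```python
import math

def err_between(dr_new, dr_old, t1, t2, steps=200):
    """Err(DR, t1, t2): integral over [t1, t2] of the distance between the path
    exported by dr_new and the path still placed from dr_old.  Times t1, t2 are
    offsets from T1 (the computation time of dr_new), matching the a, b, c
    derivation of Section 4.3.  A trapezoidal rule is used for illustration."""
    X1, Y1 = dr_new.x, dr_new.y
    # Point reached on the old trajectory at time T1:
    X0 = dr_old.x + (dr_new.t - dr_old.t) * dr_old.vx
    Y0 = dr_old.y + (dr_new.t - dr_old.t) * dr_old.vy
    a = (dr_new.vx - dr_old.vx) ** 2 + (dr_new.vy - dr_old.vy) ** 2
    b = 2 * ((X1 - X0) * (dr_new.vx - dr_old.vx) + (Y1 - Y0) * (dr_new.vy - dr_old.vy))
    c = (X1 - X0) ** 2 + (Y1 - Y0) ** 2
    dist = lambda t: math.sqrt(max(a * t * t + b * t + c, 0.0))
    h = (t2 - t1) / steps
    total = 0.5 * (dist(t1) + dist(t2)) + sum(dist(t1 + i * h) for i in range(1, steps))
    return total * h

# Err(DR1, T1, T1 + delta + dt) corresponds to err_between(dr1, dr0, 0.0, delta + dt).
```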
4.4 Computation of Scheduling Instants

We again look at the computation of the δ's by referring to Figure 2. The sender chooses δ1 and δ2 such that R^0_1 + Err(DR1, T1, T1 + δ1 + dt1) = R^0_2 + Err(DR1, T1, T1 + δ2 + dt2). If R^0_1 and R^0_2 are both zero, then δ1 and δ2 should be chosen such that Err(DR1, T1, T1 + δ1 + dt1) = Err(DR1, T1, T1 + δ2 + dt2). This equality will hold if δ1 + dt1 = δ2 + dt2. Thus, if there is no accumulated relative export error, all that the sender needs to do is to choose the δ's in such a way that they counteract the difference in the delay to the two receivers, so that they receive the DR vector at the same time.

As discussed earlier, because the sender is not able to learn the delay a priori, there will always be an accumulated relative export error from a previous DR vector that has to be taken into account. To delve deeper into this, consider the computation of the export error as illustrated in the previous section. To compute the δ's we require that R^0_1 + Err(DR1, T1, T1 + δ1 + dt1) = R^0_2 + Err(DR1, T1, T1 + δ2 + dt2). That is,

R^0_1 + ∫_{T1}^{T1+δ1+dt1} dist(t) dt = R^0_2 + ∫_{T1}^{T1+δ2+dt2} dist(t) dt,

which can be rewritten as

R^0_1 + ∫_{T1}^{T1+dt1} dist(t) dt + ∫_{T1+dt1}^{T1+dt1+δ1} dist(t) dt = R^0_2 + ∫_{T1}^{T1+dt2} dist(t) dt + ∫_{T1+dt2}^{T1+dt2+δ2} dist(t) dt.

The components R^0_1 and R^0_2 are already known to (or estimated by) the sender. Further, the error components ∫_{T1}^{T1+dt1} dist(t) dt and ∫_{T1}^{T1+dt2} dist(t) dt can be computed a priori by the sender using the estimated values of dt1 and dt2. Let us use E1 to denote R^0_1 + ∫_{T1}^{T1+dt1} dist(t) dt and E2 to denote R^0_2 + ∫_{T1}^{T1+dt2} dist(t) dt. Then, we require that

E1 + ∫_{T1+dt1}^{T1+dt1+δ1} dist(t) dt = E2 + ∫_{T1+dt2}^{T1+dt2+δ2} dist(t) dt.

Assume that E1 > E2. Then, for the above equation to hold, we require that ∫_{T1+dt1}^{T1+dt1+δ1} dist(t) dt < ∫_{T1+dt2}^{T1+dt2+δ2} dist(t) dt. To make the game as fast as possible within this framework, the δ values should be made as small as possible, so that DR vectors are sent to the receivers as soon as possible subject to the fairness requirement. Given this, we would choose δ1 to be zero and compute δ2 from the equation

E1 = E2 + ∫_{T1+dt2}^{T1+dt2+δ2} dist(t) dt.

In general, if there are N receivers 1, ..., N, when a sender generates a DR vector and decides to schedule it to be sent, it first computes the Ei values for all of the receivers from the accumulated relative export errors and the estimates of the delays. Then, it finds the largest of these values. Let Ek be the largest value. The sender makes δk zero and computes the rest of the δ's from the equality

Ei + ∫_{T1+dti}^{T1+dti+δi} dist(t) dt = Ek, ∀i, 1 ≤ i ≤ N, i ≠ k. (1)

The δ's thus obtained give the scheduling instants of the DR vector for the receivers.

4.5 Steps of the Scheduling Algorithm

For the purpose of the discussion below, as before, let us denote the accumulated relative export error at a sender for receiver k up until DRi by R^i_k. Let us denote the scheduled delay at the sender before DRi is sent to receiver k by δ^i_k. Given the above discussion, the algorithm steps are as follows:

1. The sender computes DRi at (say) time Ti and then computes δ^i_k and R^{i-1}_k, ∀k, 1 ≤ k ≤ N, based on the estimated delays dtk, ∀k, 1 ≤ k ≤ N, as per Equation (1). It schedules DRi to be sent to receiver k at time Ti + δ^i_k.

2. The DR vectors are sent to the receivers at the scheduled times and are received after a delay of dak, ∀k, 1 ≤ k ≤ N, where dak may be smaller or larger than dtk. The receivers send the value of dak back to the sender (the receiver can compute this value based on the time stamps on the DR vector, as described earlier).

3. The sender computes R^i_k as described earlier and illustrated in Figure 2. The sender also recomputes (using an exponential averaging method similar to round-trip time estimation by TCP [10]) the estimate of delay dtk from the new value of dak for receiver k.

4. Go back to Step 1 to compute DRi+1 when it is required and follow the steps of the algorithm to schedule and send this DR vector to the receivers.
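The δ computation of Equation (1) used in step 1 can be sketched as follows (Python; dist_int(lo, hi) is assumed to integrate dist(t) over [lo, hi] with offsets from T1, e.g. via the Err sketch above, and the step size and cap are arbitrary illustration values rather than part of the algorithm):

```python
def schedule_deltas(E, dt, dist_int, step=0.001, max_delta=1.0):
    """Choose per-receiver sending delays delta[i] (Section 4.4, Equation (1)).
    E[i]  : accumulated relative error plus the error already committed up to dt[i]
    dt[i] : estimated delay to receiver i
    The receiver with the largest E gets delta = 0; for every other receiver,
    delta grows until E[i] plus the extra error over [dt[i], dt[i] + delta]
    reaches that largest value."""
    k = max(range(len(E)), key=lambda i: E[i])
    target = E[k]
    deltas = [0.0] * len(E)
    for i in range(len(E)):
        if i == k:
            continue
        delta = 0.0
        # Coarse forward search; a root finder on the closed form would also work.
        while E[i] + dist_int(dt[i], dt[i] + delta) < target and delta < max_delta:
            delta += step
        deltas[i] = delta
    return deltas
```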
4.6 Handling Cases in Practice

So far we implicitly assumed that DRi is sent out to all receivers before a decision is made to compute the next DR vector DRi+1, and that the receivers send the value of dak corresponding to DRi and this information reaches the sender before it computes DRi+1, so that the sender can compute R^i_k and then use it in the computation of δ^{i+1}_k. Two issues need consideration with respect to the above algorithm when it is used in practice.

• It may so happen that a new DR vector is computed even before the previous DR vector is sent out to all receivers. How will this situation be handled?
• What happens if the feedback does not arrive before DRi+1 is computed and scheduled to be sent?

Let us consider the first scenario. We assume that DRi has been scheduled to be sent and the scheduling instants are such that δ^i_1 < δ^i_2 < · · · < δ^i_N. Assume that DRi+1 is to be computed (because the real path has deviated from the path exported by DRi by more than the threshold) at time Ti+1, where Ti + δ^i_k < Ti+1 < Ti + δ^i_{k+1}. This means DRi has been sent only to receivers up to k in the scheduled order. In our algorithm, in this case, the scheduled delay ordering queue is flushed, which means DRi is not sent to the receivers still queued to receive it, but a new scheduling order is computed for all the receivers to send DRi+1. For those receivers who have been sent DRi, assume for now that daj, 1 ≤ j ≤ k, has been received from all of them (the scenario where daj has not been received will be considered as part of the second scenario later). For these receivers, E^i_j, 1 ≤ j ≤ k, can be computed. For those receivers j, k + 1 ≤ j ≤ N, to whom DRi was not sent, E^i_j does not apply.

Consider a receiver j, k + 1 ≤ j ≤ N, to whom DRi was not sent. Refer to Figure 3.

Figure 3: Schedule computation when DRi is not sent to receiver j, k + 1 ≤ j ≤ N.

For such a receiver j, when DRi+1 is to be scheduled and δ^{i+1}_j needs to be computed, the total export error is the accumulated relative export error at time Ti when the schedule for DRi was computed, plus the integral of the distance between the two trajectories AC and BD of Figure 3 over the time interval [Ti, Ti+1 + δ^{i+1}_j + dtj]. Note that this integral is given by Err(DRi, Ti, Ti+1) + Err(DRi+1, Ti+1, Ti+1 + δ^{i+1}_j + dtj). Therefore, instead of E^i_j of Equation (1), we use the value R^{i-1}_j + Err(DRi, Ti, Ti+1) + Err(DRi+1, Ti+1, Ti+1 + δ^{i+1}_j + dtj), where R^{i-1}_j is the relative export error used when the schedule for DRi was computed.

Now consider the second scenario. Here the feedback dak corresponding to DRi has not arrived before DRi+1 is computed and scheduled. In this case, R^i_k cannot be computed. Thus, in the computation of δk for DRi+1, this value will be assumed to be zero. We do assume that a reliable mechanism is used to send dak back to the sender. When this information arrives at a later time, R^i_k will be computed and accumulated into future relative export errors (for example R^{i+1}_k, if dak is received before DRi+2 is computed) and used in the computation of δk when a future DR vector is to be scheduled (for example DRi+2).
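Before turning to the experiments, the per-receiver state kept by the sender in steps 1-3 of Section 4.5, including the exponential-averaging delay estimate, can be sketched as follows (Python; the class, function names and the weight value are ours for illustration, and schedule_deltas refers to the earlier sketch):

```python
ALPHA = 0.125  # EWMA weight, illustrative; in the spirit of TCP RTT estimation [10]

class ReceiverState:
    def __init__(self, dt_estimate):
        self.dt = dt_estimate   # current estimate of the one-way delay
        self.R = 0.0            # accumulated relative export error

def sending_times(T_i, receivers, dist_int):
    """Steps 1-2: E values from accumulated relative errors plus the error
    committed up to each delay estimate, then per-receiver sending times."""
    E = [st.R + dist_int(0.0, st.dt) for st in receivers]
    deltas = schedule_deltas(E, [st.dt for st in receivers], dist_int)
    return [T_i + d for d in deltas]

def on_feedback(st, S_estimated, A_actual, da_actual):
    """Step 3: fold in the feedback for one DR vector and refresh the delay estimate."""
    st.R += A_actual - S_estimated
    st.dt = (1 - ALPHA) * st.dt + ALPHA * da_actual
```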
4.7 Experimental Results

In order to evaluate the effectiveness of the scheduling algorithm and quantify the benefits obtained through its use, we implemented the proposed algorithm in the BZFlag (Battle Zone Flag) game [11]. It is a first-person shooter game in which the players, in teams, drive tanks and move within a battle field. The aim of the players is to navigate and capture flags belonging to the other team and bring them back to their own area. The players shoot each other's tanks by firing bullets. The movement of the tanks as well as that of the shots are exchanged among the players using DR vectors. We have modified the implementation of BZFlag to incorporate synchronized clocks among the players and the server and to exchange time-stamps with the DR vectors.

We set up a testbed with four players running the instrumented version of BZFlag, with one as a sender and the rest as receivers. The scheduling approach and the base case (where each DR vector is sent to all the receivers concurrently at every trigger point) were implemented in the same run by tagging the DR vectors according to the approach used to send them. NISTNet [12] was used to introduce delays between the sender and the three receivers. Mean delays of 800 ms, 500 ms and 200 ms were introduced between the sender and the first, second and third receiver, respectively. We introduced a variance of 100 ms (around the mean delay of each receiver) to model variability in delay. The sender logged the errors of each receiver every 100 milliseconds for both the scheduling approach and the base case. The sender also calculated the standard deviation and the mean of the accumulated export error of all the receivers every 100 milliseconds.

Figure 4 plots the mean and standard deviation of the accumulated export error of all the receivers in the scheduling case against the base case. Note that the x-axis of these graphs (and the other graphs that follow) represents the system time when the snapshot of the game was taken. Observe that the standard deviation of the error with scheduling is much lower compared to the base case. This implies that the accumulated errors of the receivers in the scheduling case are closer to one another, which shows that the scheduling approach achieves fairness among the receivers even if they are at different distances (i.e., latencies) from the sender. Observe, however, that the mean of the accumulated error increased multifold with scheduling in comparison to the base case. Further exploration of the reason for the rise in the mean led to the conclusion that every time the DR vectors are scheduled so as to equalize the total error, each receiver's total error is pushed higher. Also, as the accumulated error has an estimated component, the schedule does not equalize the errors exactly, and a DR vector may reach a receiver earlier or later than scheduled. In either case the error is not equalized, and if the DR vector arrives late, it actually increases the error of that receiver beyond the highest accumulated error. This means that at the next trigger this receiver will be the one with the highest error, and every other receiver's error will be pushed up to this error value. This flip-flop effect leads to the increase in the accumulated error for all the receivers. The scheduling for fairness thus leads to a decrease in standard deviation (i.e., increases the fairness among different players), but it comes at the cost of higher mean error, which may not be a desirable feature.
Figure 4: Mean and standard deviation of error with scheduling and without (i.e., base case).

This led us to explore different ways of equalizing the accumulated errors. The approach discussed in the following section is a heuristic approach based on the following idea. Using the same number of DR vectors over time as in the base case, instead of sending the DR vectors to all the receivers at the same frequency as in the base case, if we increase the frequency of sending DR vectors to the receiver with higher accumulated error and decrease the frequency of sending DR vectors to the receiver with lower accumulated error, we can equalize the export error of all receivers over time. At the same time we wish to decrease the error of the receiver with the highest accumulated error in the base case (of course, this receiver would be sent more DR vectors than in the base case). We refer to such an algorithm as a budget based algorithm.

5. BUDGET BASED ALGORITHM

In a game, the sender of an entity sends DR vectors to all the receivers every time a threshold is crossed by the entity. The lower the threshold, the more DR vectors are generated during a given time period. Since the DR vectors are sent to all the receivers and the network delay between the sender-receiver pairs cannot be avoided, the before export error at the most distant player will always be higher than at the rest. (Note that the after export error is eliminated by using synchronized clocks among the players.) In order to mitigate this imbalance in the error, we propose to send DR vectors selectively to different players based on the accumulated errors of these players. The budget based algorithm is based on this idea, and there are two variations of it: a probabilistic budget based scheme and a deterministic budget based scheme.

5.1 Probabilistic budget based scheme

The probabilistic budget based scheme has three main steps: a) lower the dead reckoning threshold but at the same time keep the total number of DRs sent the same as in the base case, b) at every trigger, probabilistically pick a player to send the DR vector to, and c) send the DR vector to the chosen player. These steps are described below.

The lowering of the DR threshold is implemented as follows. Lowering the threshold is equivalent to increasing the number of trigger points where DR vectors are generated. Suppose the threshold is such that the number of triggers caused by it in the base case is t, and at each trigger n DR vectors are sent by the sender, resulting in a total of nt DR vectors. Our goal is to keep the total number of DR vectors sent by the sender fixed at nt, but to lower the number of DR vectors sent at each trigger (i.e., not send the DR vector to all the receivers). Let n′ and t′ be the number of DR vectors sent at each trigger and the number of triggers, respectively, in the modified case. We want to ensure n′t′ = nt. Since we want to increase the number of trigger points, i.e., t′ > t, this would mean that n′ < n. That is, not all receivers will be sent the DR vector at every trigger.
In the probabilistic budget based scheme, at each trigger a probability is calculated for each receiver to be sent a DR vector, and only one receiver is sent the DR vector (n′ = 1). This probability is based on the relative weights of the receivers' accumulated errors. That is, a receiver with a higher accumulated error will have a higher probability of being sent the DR vector. Suppose the accumulated errors of three players are a1, a2 and a3, respectively. Then the probability of player 1 receiving the DR vector is a1/(a1 + a2 + a3), and similarly for the other players. Once a player is picked, the DR vector is sent to that player.

To compare the probabilistic budget based algorithm with the base case, we would need to lower the threshold of the base case as well, for a fair comparison. As the dead reckoning threshold in the base case was already very fine, we decided that, instead of lowering the threshold, the probabilistic budget based approach would be compared against a modified base case that uses the same (normal) threshold as the budget based algorithm but sends a DR vector to all three receivers used in our experiments only at every third trigger. We call this the 1/3 base case, as it results in 1/3 the number of DR vectors being sent compared to the base case. The budget per trigger for the probability based approach was set at one DR vector at each trigger, compared to three DR vectors at every third trigger in the 1/3 base case; thus the two cases lead to the same number of DR vectors being sent out over time.

In order to evaluate the effectiveness of the probabilistic budget based algorithm, we instrumented the BZFlag game to use this approach. We used the same testbed consisting of one sender and three receivers with delays of 800 ms, 500 ms and 200 ms from the sender, and with low delay variance (100 ms) and moderate delay variance (180 ms). The results are shown in Figures 5 and 6. As mentioned earlier, the x-axis of these graphs represents the system time when the snapshot of the game was taken. Observe from the figures that the standard deviation of the accumulated error among the receivers with the probabilistic budget based algorithm is less than in the 1/3 base case, while the mean is a little higher than in the 1/3 base case. This implies that the game is fairer compared to the 1/3 base case, at the cost of a small increase in mean error.

The increase in mean error in the probabilistic case compared to the 1/3 base case can be attributed to the fact that, even though the probabilistic approach on average sends the same number of DR vectors as the 1/3 base case, it sometimes sends DR vectors to a receiver less frequently and sometimes more frequently than the 1/3 base case, due to its probabilistic nature. When a receiver does not receive a DR vector for a long time, its trajectory drifts further and further from the sender's trajectory and hence the rate of buildup of the error at the receiver is higher. At times when a receiver receives DR vectors more frequently, it builds up error at a lower rate, but there is no way of reversing the error that was built up while it did not receive a DR vector for a long time. This leads the receivers to build up more error in the probabilistic case compared to the 1/3 base case, where the receivers receive a DR vector almost periodically.
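The error-weighted pick of Section 5.1 can be sketched as follows (a minimal Python illustration under our own naming; the uniform fallback for the no-history case is our choice, not part of the paper's description):

```python
import random

def pick_receiver(accumulated_error):
    """Probabilistic budget based scheme: at each trigger, choose the single
    receiver to send the budgeted DR vector to, with probability proportional
    to its accumulated error (e.g. a1 / (a1 + a2 + a3) for player 1)."""
    total = sum(accumulated_error)
    if total == 0:
        return random.randrange(len(accumulated_error))  # no history yet: uniform pick
    r = random.uniform(0, total)
    running = 0.0
    for i, err in enumerate(accumulated_error):
        running += err
        if r <= running:
            return i
    return len(accumulated_error) - 1
```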
Figure 5: Mean and standard deviation of error for different algorithms (including budget based algorithms) for low delay variance.

Figure 6: Mean and standard deviation of error for different algorithms (including budget based algorithms) for moderate delay variance.

5.2 Deterministic budget based scheme

To bound the increase in mean error, we decided to modify the budget based algorithm to be deterministic. The first two steps of the algorithm are the same as in the probabilistic algorithm: the trigger points are increased to lower the threshold, and accumulated errors are used to compute the probability of each receiver receiving a DR vector. Once these steps are completed, a deterministic schedule for the receivers is computed as follows (a sketch of steps 2-5 appears after this list):

1. If any receiver(s) are tagged to receive a DR vector at the current trigger, the sender sends out the DR vector to the respective receiver(s). If at least one receiver was sent a DR vector, the sender calculates the probability of each receiver receiving a DR vector as explained before and follows steps 2 to 6; otherwise it does nothing.

2. For each receiver, the probability value is multiplied by the budget available at each trigger (which is set to 1, as explained below) to give the frequency of sending DR vectors to that receiver.

3. If any receiver's frequency after multiplying by the budget goes over 1, that receiver's frequency is set to 1 and the surplus amount is equally distributed over the receivers by adding it to their existing frequencies. This process is repeated until all the receivers have a frequency of at most 1. This is because at a trigger we cannot send more than one DR vector to the same receiver; doing so would waste DR vectors by sending redundant information.

4. (1/frequency) gives the schedule at which the sender should send DR vectors to the respective receiver. Credit obtained previously (explained in step 5), if any, is subtracted from the schedule. Observe that the resulting value of the schedule might not be an integer; hence the value is rounded up by taking its ceiling. For example, if the frequency is 1/3.5, this implies that we would like to have a DR vector sent every 3.5 triggers. However, we are constrained to send it at the 4th trigger, giving us a credit of 0.5. When we next send a DR vector, we would be able to send it on the 3rd trigger because of the 0.5 credit.

5. The difference between the schedule and the ceiling of the schedule is the credit that the receiver has obtained, which is remembered and used the next time, as explained in step 4.

6. Each receiver that was sent a DR vector at the current trigger is tagged to receive the next DR vector at the trigger that is exactly the schedule (the ceiling of the schedule) number of triggers away from the current trigger. Observe that no other receiver's schedule is modified at this point, as they are all running schedules calculated at some previous point in time. Those schedules will be automatically modified at the triggers when they are scheduled to receive their next DR vector.
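The frequency and credit computation of steps 2-5 can be sketched as follows (Python; the names are ours, the surplus handling is slightly simplified by spreading it only over the receivers still below 1, and the treatment of zero-probability receivers is our own choice rather than part of the paper's description):

```python
import math

def next_schedule(prob, credit, budget=1.0):
    """For the receivers just sent a DR vector: error-weighted probabilities ->
    per-receiver frequencies (capped at 1, surplus redistributed) -> number of
    triggers until the next DR vector, carrying fractional credit forward.
    Returns (schedule, new_credit)."""
    freq = [p * budget for p in prob]
    while any(f > 1.0 + 1e-9 for f in freq):      # step 3: cap and redistribute
        surplus = sum(f - 1.0 for f in freq if f > 1.0)
        freq = [min(f, 1.0) for f in freq]
        under = [i for i, f in enumerate(freq) if f < 1.0]
        if not under:
            break
        for i in under:
            freq[i] += surplus / len(under)
    schedule, new_credit = [], []
    for f, c in zip(freq, credit):
        if f <= 0.0:
            schedule.append(math.inf)             # no accumulated error: nothing scheduled
            new_credit.append(c)
            continue
        raw = 1.0 / f - c                         # step 4: desired spacing minus credit
        s = math.ceil(raw)                        # e.g. 1/f = 3.5, credit 0 -> send at trigger 4
        schedule.append(s)
        new_credit.append(s - raw)                # step 5: fractional part carried forward
    return schedule, new_credit
```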
At the first trigger, the sender sends the DR vector to all the receivers, uses a relative probability of 1/n for each receiver, and follows steps 2 to 6 to calculate the next schedule for each receiver in the same way as for the other triggers. This algorithm ensures that every receiver has a guaranteed schedule for receiving DR vectors, and hence there is no irregularity in sending DR vectors to any receiver, as was observed in the budget based probabilistic algorithm.

We used the testbed described earlier (three receivers with varying delays) to evaluate the deterministic algorithm, using a budget of 1 DR vector per trigger so as to use the same number of DR vectors as in the 1/3 base case. Results from our experiments are shown in Figures 5 and 6. It can be observed that the standard deviation of error in the deterministic budget based algorithm is less than in the 1/3 base case, while the mean error is the same as in the 1/3 base case. This indicates that the deterministic algorithm is more fair than the 1/3 base case and at the same time does not increase the mean error, thereby leading to better game quality compared to the probabilistic algorithm. In general, when comparing the deterministic approach to the probabilistic approach, we found that the mean accumulated error was always less in the deterministic approach. With respect to the standard deviation of the accumulated error, we found that in the fixed or low variance cases the deterministic approach was generally lower, but in the higher variance cases it was harder to draw conclusions, as the probabilistic approach was sometimes better than the deterministic approach.

6. CONCLUSIONS AND FUTURE WORK

In distributed multi-player games played across the Internet, object and player trajectories within the game space are exchanged in terms of DR vectors. Due to the variable delay between players, these DR vectors reach different players at different times. An unfair advantage is gained by receivers that are closer to the sender of the DR vector, as they are able to render the sender's position more accurately in real time. In this paper, we first developed a model for estimating the error in rendering player trajectories at the receivers. We then presented an algorithm based on scheduling the DR vectors to be sent to different players at different times, thereby equalizing the error at different players. This algorithm is aimed at making the game fair to all players, but tends to increase the mean error of the players. To counter this effect, we presented budget based algorithms in which the DR vectors are still scheduled to be sent to different players at different times, but which balance the need for fairness with the requirement that the error of the worst case players (who are furthest from the sender) is not increased compared to the base case (where all DR vectors are sent to all players every time a DR vector is generated).
We presented two variations of the budget based algorithm and showed through experimentation that the algorithms reduce the standard deviation of the error, thereby making the game more fair, while at the same time keeping the mean error comparable to the base case.

7. REFERENCES

[1] S. Aggarwal, H. Banavar, A. Khandelwal, S. Mukherjee, and S. Rangarajan, Accuracy in Dead-Reckoning based Distributed Multi-Player Games, in Proc. of ACM SIGCOMM 2004 Workshop on Network and System Support for Games (NetGames 2004), Aug. 2004.
[2] L. Gautier and C. Diot, Design and Evaluation of MiMaze, a Multiplayer Game on the Internet, in Proc. of IEEE Multimedia (ICMCS'98), 1998.
[3] M. Mauve, Consistency in Replicated Continuous Interactive Media, in Proc. of the ACM Conference on Computer Supported Cooperative Work (CSCW'00), 2000, pp. 181-190.
[4] S. K. Singhal and D. R. Cheriton, Exploiting Position History for Efficient Remote Rendering in Networked Virtual Reality, Presence: Teleoperators and Virtual Environments, vol. 4, no. 2, pp. 169-193, 1995.
[5] C. Diot and L. Gautier, A Distributed Architecture for Multiplayer Interactive Applications on the Internet, IEEE Network Magazine, vol. 13, pp. 6-15, 1999.
[6] L. Pantel and L. C. Wolf, On the Impact of Delay on Real-Time Multiplayer Games, in Proc. of ACM NOSSDAV'02, May 2002.
[7] Y. Lin, K. Guo, and S. Paul, Sync-MS: Synchronized Messaging Service for Real-Time Multi-Player Distributed Games, in Proc. of the 10th IEEE International Conference on Network Protocols (ICNP), Nov. 2002.
[8] K. Guo, S. Mukherjee, S. Rangarajan, and S. Paul, A Fair Message Exchange Framework for Distributed Multi-Player Games, in Proc. of NetGames 2003, May 2003.
[9] N. E. Baughman and B. N. Levine, Cheat-Proof Playout for Centralized and Distributed Online Games, in Proc. of IEEE INFOCOM'01, April 2001.
[10] M. Allman and V. Paxson, On Estimating End-to-End Network Path Properties, in Proc. of ACM SIGCOMM'99, Sept. 1999.
[11] BZFlag Forum, BZFlag Game, URL: http://www.bzflag.org.
[12] National Institute of Standards and Technology, NIST Net, URL: http://snad.ncsl.nist.gov/nistnet/.
Fairness in Dead-Reckoning based Distributed Multi-Player Games ABSTRACT In a distributed multi-player game that uses dead-reckoning vectors to exchange movement information among players, there is inaccuracy in rendering the objects at the receiver due to network delay between the sender and the receiver. The object is placed at the receiver at the position indicated by the dead-reckoning vector, but by that time, the real position could have changed considerably at the sender. This inaccuracy would be tolerable if it is consistent among all players; that is, at the same physical time, all players see inaccurate (with respect to the real position of the object) but the same position and trajectory for an object. But due to varying network delays between the sender and different receivers, the inaccuracy is different at different players as well. This leads to unfairness in game playing. In this paper, we first introduce an "error" measure for estimating this inaccuracy. Then we develop an algorithm for scheduling the sending of dead-reckoning vectors at a sender that strives to make this error equal at different receivers over time. This algorithm makes the game very fair at the expense of increasing the overall mean error of all players. To mitigate this effect, we propose a budget based algorithm that provides improved fairness without increasing the mean error thereby maintaining the accuracy of game playing. We have implemented both the scheduling algorithm and the budget based algorithm as part of BZFlag, a popular distributed multi-player game. We show through experiments that these algorithms provide fairness among players in spite of widely varying network delays. An additional property of the proposed algorithms is that they require less number of DRs to be exchanged (compared to the current implementation of BZflag) to achieve the same level of accuracy in game playing. 1. INTRODUCTION In a distributed multi-player game, players are normally distributed across the Internet and have varying delays to each other or to a central game server. Usually, in such games, the players are part of the game and in addition they may control entities that make up the game. During the course of the game, the players and the entities move within the game space. A player sends information about her movement as well as the movement of the entities she controls to the other players using a Dead-Reckoning (DR) vector. A DR vector contains information about the current position of the player/entity in terms of x, y and z coordinates (at the time the DR vector was sent) as well as the trajectory of the entity in terms of the velocity component in each of the dimensions. Each of the participating players receives such DR vectors from one another and renders the other players/entities on the local consoles until a new DR vector is received for that player/entity. In a peer-to-peer game, players send DR vectors directly to each other; in a client-server game, these DR vectors may be forwarded through a game server. The idea of DR is used because it is almost impossible for players/entities to exchange their current positions at every time unit. DR vectors are "quantization" of the real trajectory (which we refer to as real path) at a player. Normally, a new DR vector is computed and sent whenever the real path deviates from the path extrapolated using the previous DR vector (say, in terms of distance in the x, y, z plane) by some amount specified by a threshold. 
We refer to the trajectory that can be computed using the sequence of DR vectors as the exported path. Therefore, at the sending player, there is a deviation between the real path and the exported path. The error due to this deviation can be removed if each movement of player/entity is communicated to the other players at every time unit; that is a DR vector is generated at every time unit thereby making the real and exported paths the same. Given that it is not feasible to satisfy this due to bandwidth limitations, this error is not of practical interest. Therefore, the receiving players can, at best, follow the exported path. Because of the network delay between the sending and receiving players, when a DR vector is received and rendered at a player, the original trajectory of the player/entity may have already changed. Thus, in physical time, there is a deviation at the receiving player between the exported path and the rendered trajectory (which we refer to as placed path). We refer to this error as the export error. Note that the export error, in turn, results in a deviation between the real and the placed paths. The export error manifests itself due to the deviation between the exported path at the sender and the placed path at the receiver (i) before the DR vector is received at the receiver (referred to as the before export error, and (ii) after the DR vector is received at the receiver (referred to as the after export error). In an earlier paper [1], we showed that by synchronizing the clocks at all the players and by using a technique based on time-stamping messages that carry the DR vectors, we can guarantee that the after export error is made zero. That is, the placed and the exported paths match after the DR vector is received. We also showed that the before export error can never be eliminated since there is always a non-zero network delay, but can be significantly reduced using our technique [1]. Henceforth we assume that the players use such a technique which results in unavoidable but small overall export error. In this paper we consider the problem of different and varying network delays between each sender-receiver pair of a DR vector, and consequently, the different and varying export errors at the receivers. Due to the difference in the export errors among the receivers, the same entity is rendered at different physical time at different receivers. This brings in unfairness in game playing. For instance a player with a large delay would always see an entity "late" in physical time compared to the other players and, therefore, her action on the entity would be delayed (in physical time) even if she reacted instantaneously after the entity was rendered. Our goal in this paper is to improve the fairness of these games in spite of the varying network delays by equalizing the export error at the players. We explore whether the time-average of the export errors (which is the cumulative export error over a period of time averaged over the time period) at all the players can be made the same by scheduling the sending of the DR vectors appropriately at the sender. We propose two algorithms to achieve this. Both the algorithms are based on delaying (or dropping) the sending of DR vectors to some players on a continuous basis to try and make the export error the same at all the players. 
At an abstract level, the algorithm delays sending DR vectors to players whose accumulated error so far in the game is smaller than others; this would mean that the export error due to this DR vector at these players will be larger than that of the other players, thereby making them the same. The goal is to make this error at least approximately equal at every DR vector with the deviation in the error becoming smaller as time progresses. The first algorithm (which we refer to as the "scheduling algorithm") is based on "estimating" the delay between players and refining the sending of DR vectors by scheduling them to be sent to different players at different times at every DR generation point. Through an implementation of this algorithm using the open source game BZflag, we show that this algorithm makes the game very fair (we measure fairness in terms of the standard deviation of the error). The drawback of this algorithm is that it tends to push the error of all the players towards that of the player with the worst error (which is the error at the farthest player, in terms of delay, from the sender of the DR). To alleviate this effect, we propose a budget based algorithm which budgets how the DRs are sent to different players. At a high level, the algorithm is based on the idea of sending more DRs to players who are farther away from the sender compared to those who are closer. Experimental results from BZflag illustrates that the budget based algorithm follows a more balanced approach. It improves the fairness of the game but at the same time does so without pushing up the mean error of the players thereby maintaining the accuracy of the game. In addition, the budget based algorithm is shown to achieve the same level of accuracy of game playing as the current implementation of BZflag using much less number of DR vectors. 2. PREVIOUS WORK Earlier work on network games to deal with network latency has mostly focussed on compensation techniques for packet delay and loss [2, 3, 4]. These methods are aimed at making large delays and message loss tolerable for players but does not consider the problems that may be introduced by varying delays from the server to different players or from the players to one another. For example, the concept of local lag has been used in [3] where each player delays every local operation for a certain amount of time so that remote players can receive information about the local operation and execute the same operation at the about same time, thus reducing state inconsistencies. The online multi-player game MiMaze [2, 5, 6], for example, takes a static bucket synchronization approach to compensate for variable network delays. In MiMaze, each player delays all events by 100 ms regardless of whether they are generated locally or remotely. Players with a network delay larger than 100 ms simply cannot participate in the game. In general, techniques based on bucket synchronization depend on imposing a worst case delay on all the players. There have been a few papers which have studied the problem of fairness in a distributed game by more sophisticated message delivery mechanisms. But these works [7, 8] assume the existence of a global view of the game where a game server maintains a view (or state) of the game. Players can introduce objects into the game or delete objects that are already part of the game (for example, in a first-person shooter game, by shooting down the object). These additions and deletions are communicated to the game server using "action" messages. 
Based on these action messages, the state of the game is changed at the game server and these changes are communicated to the players using "update" messages. Fairness is achieved by ordering the delivery of action and update messages at the game server and players respectively based on the notion of a "fair-order" which takes into account the delays between the game server and the different players. Objects that are part of the game may move but how this information is communicated to the players seems to be beyond the scope of these works. In this sense, these works are very limited in scope and may be applicable only to firstperson shooter games and that too to only games where players are not part of the game. DR vectors can be exchanged directly among the players (peerto-peer model) or using a central server as a relay (client-server model). It has been shown in [9] that multi-player games that use DR vectors together with bucket synchronization are not cheatproof unless additional mechanisms are put in place. Both the scheduling algorithm and the budget-based algorithm described in our paper use DR vectors and hence are not cheat-proof. For example, a receiver could skew the delay estimate at the sender to make the sender believe that the delay between the sender and the receiver is high thereby gaining undue advantage. We emphasize that the focus of this paper is on fairness without addressing the issue of cheating. In the next section, we describe the game model that we use and illustrate how senders and receivers exchange DR vectors and how entities are rendered at the receivers based on the time-stamp augmented DR vector exchange as described in [1]. In Section 4, we describe the DR vector scheduling algorithm that aims to make the export error equal across the players with varying delays from the sender of a DR vector, followed by experimental results obtained from instrumentation of the scheduling algorithm on the open source game BZFlag. Section 5, describes the budget based algorithm that achieves improved fairness but without reducing the level accuracy of game playing. Conclusions are presented in Section 6. 3. GAME MODEL The game architecture is based on players distributed across the Internet and exchanging DR vectors to each other. The DR vectors could either be sent directly from one player to another (peerto-peer model) or could be sent through a game server which receives the DR vector from a player and forwards it to other players (client-server model). As mentioned before, we assume synchronized clocks among the participating players. Each DR vector sent from one player to another specifies the trajectory of exactly one player/entity. We assume a linear DR vector in that the information contained in the DR vector is only enough at the receiving player to compute the trajectory and render the entity in a straight line path. Such a DR vector contains information about the starting position and velocity of the player/entity where the velocity is constant1. Thus, the DR vectors sent by a player specifies the current time at the player when the DR vector is computed (not the time at which this DR vector is sent to the other players as we will explain later), the current position of the player/entity in terms of the x, y, z coordinates and the velocity vector in the direction of x, y and z coordinates. 
Specifically, the ith DR vector sent by player j about the kth entity is denoted by DRjik and is represented by the following tuple (Tjik, xjik, yjik, zjik, vxjik, vyjik, vzjik). Without loss of generality, in the rest of the discussion, we consider a sequence of DR vectors sent by only one player and for only one entity. For simplicity, we consider a two dimensional game space rather than a three dimensional one. Hence we use DRi to denote the ith such DR vector represented as the tuple (Ti, xi, yi, vxi, vyi). The receiving player computes the starting position for the entity based on xi, yi and the time difference between when the DR vector is received and the time Ti at which it was computed. Note that the computation of time difference is feasible since all the clocks are synchronized. The receiving player then uses the velocity components to project and render the trajectory of the entity. This trajectory is followed until a new DR vector is received which changes the position and/or velocity of the entity. Figure 1: Trajectories and deviations. Based on this model, Figure 1 illustrates the sending and receiv1Other type of DR vectors include quadratic DR vectors which specify the acceleration of the entity and cubic spline DR vectors that consider the starting position and velocity and the ending position and velocity of the entity. ing of DR vectors and the different errors that are encountered. The figure shows the reception of DR vectors at a player (henceforth called the receiver). The horizontal axis shows the time which is synchronized among all the players. The vertical axis tries to conceptually capture the two-dimensional position of an entity. Assume that at time To a DR vector DRo is computed by the sender and immediately sent to the receiver. Assume that DRo is received at the receiver after a delay of dto time units. The receiver computes the initial position of the entity as (xo + vxo × dto, yo + vyo × dto) (shown as point E). The thick line EBD represents the projected and rendered trajectory at the receiver based on the velocity components vxo and vyo (placed path). At time Tl a DR vector DRl is computed for the same entity and immediately sent to the receiver2. Assume that DRl is received at the receiver after a delay of dtl time units. When this DR vector is received, assume that the entity is at point D. A new position for the entity is computed as (xl + vxl × dtl, yl + vyo × dtl) and the entity is moved to this position (point C). The velocity components vxl and vyl are used to project and render this entity further. Let us now consider the error due to network delay. Although DRl was computed at time Tl and sent to the receiver, it did not reach the receiver until time Tl + dtl. This means, although the exported path based on DRl at the sender at time Tl is the trajectory AC, until time Tl + dtl, at the receiver, this entity was being rendered at trajectory BD based on DRo. Only at time Tl + dtl did the entity get moved to point C from which point onwards the exported and the placed paths are the same. The deviation between the exported and placed paths creates an error component which we refer to as the export error. A way to represent the export error is to compute the integral of the distance between the two trajectories over the time when they are out of sync. We represent the integral of the distances between the placed and exported paths due to some DR DRi over a time interval [tl, t2] as Err (DRi, tl, t2). 
In the figure, the export error due to DR1 is computed as the integral of the distance between the trajectories AC and BD over the time interval [T1, T1 + dt1]. Note that there could be other ways of representing this error as well, but in this paper, we use the integral of the distance between the two trajectories as a measure of the export error. Note that there would also have been an export error created due to the reception of DR0, at which time the placed path would have been based on a previous DR vector. This is not shown in the figure, but it serves to remind the reader that the export error is cumulative when a sequence of DR vectors is received. Starting from time T1 onwards, there is a deviation between the real and the exported paths. As we discussed earlier, this export error is unavoidable. The above figure and example illustrate one receiver only. But in reality, DR vectors DR0 and DR1 are sent by the sender to all the participating players. Each of these players receives DR0 and DR1 after varying delays, thereby creating different export error values at different players. The goal of the DR vector scheduling algorithm to be described in the next section is to make this (cumulative) export error equal at every player independently for each of the entities that make up the game.

Footnote 2: Normally, DR vectors are not computed on a periodic basis but on an on-demand basis, where the decision to compute a new DR vector is based on some threshold being exceeded on the deviation between the real path and the path exported by the previous DR vector.

4. SCHEDULING ALGORITHM FOR SENDING DR VECTORS

In Section 3 we showed how delay from the sender of a new DR vector to the receiver of the DR vector could lead to export error because of the deviation of the placed path from the exported path at the receiver until this new DR vector is received. We also mentioned that the goal of the DR vector scheduling algorithm is to make the export error "equal" at all receivers over a period of time. Since the game is played in a distributed environment, it makes sense for the sender of an entity to keep track of all the errors at the receivers and try to make them equal. However, the sender cannot know the actual error at a receiver till it gets some information regarding the error back from the receiver. Our algorithm estimates the error to compute a schedule for sending DR vectors to the receivers and corrects the error when it gets feedback from the receivers. In this section we provide motivations for the algorithm and describe the steps it goes through. Throughout this section, we will use the following example to illustrate the algorithm.

Figure 2: DR vector flow between a sender and two receivers and the evolution of estimated and actual placed paths at the receivers. DR0 = (T0, T0, x0, y0, vx0, vy0), sent at time T0 to both receivers. DR1 = (T1, T11, x1, y1, vx1, vy1) sent at time T11 = T1 + δ1 to receiver 1 and DR1 = (T1, T12, x1, y1, vx1, vy1) sent at time T12 = T1 + δ2 to receiver 2.

Consider the example in Figure 2. The figure shows a single sender sending DR vectors for an entity to two different receivers 1 and 2. DR0, computed at T0, is sent and received by the receivers sometime between T0 and T1, at which time they move the location of the entity to match the exported path. Thus, the path of the entity is shown only from the point where the placed path matches the exported path for DR0. Now consider DR1.
At time T1, DR1 is computed by the sender, but assume that it is not immediately sent to the receivers and is only sent after time δ1 to receiver 1 (at time T11 = T1 + δ1) and after time δ2 to receiver 2 (at time T12 = T1 + δ2). The time at which the DR vector is sent is appended as a time stamp with the DR vector, as shown in the figure. Assume that the sender estimates (it will be clear shortly why the sender has to estimate the delay) that after a delay of dt1, receiver 1 will receive it, will use the coordinate and velocity parameters to compute the entity's current location and move it there (point C), and from this time onwards the exported and the placed paths will become the same. However, in reality, receiver 1 receives DR1 after a delay of da1 (which is less than the sender's estimate dt1), and moves the corresponding entity to point H. Similarly, the sender estimates that after a delay of dt2, receiver 2 will receive DR1, will compute the current location of the entity and move it to that point (point E), while in reality it receives DR1 after a delay of da2 > dt2 and moves the entity to point N. The other points shown on the placed and exported paths will be used later in the discussion to describe different error components.

4.1 Computation of Relative Export Error

Referring back to the discussion from Section 3, from the sender's perspective, the export error at receiver 1 due to DR1 is given by Err(DR1, T1, T1 + δ1 + dt1) (the integral of the distance between the trajectories AC and DB of Figure 2 over the time interval [T1, T1 + δ1 + dt1]). This is due to the fact that the sender uses the estimated delay dt1 to compute this error. Similarly, the export error from the sender's perspective at receiver 2 due to DR1 is given by Err(DR1, T1, T1 + δ2 + dt2) (the integral of the distance between the trajectories AE and DF over the time interval [T1, T1 + δ2 + dt2]). Note that the above errors from the sender's perspective are only estimates. In reality, the export error will be either smaller or larger than the estimated value, based on whether the delay estimate was larger or smaller than the actual delay that DR1 experienced. This difference between the estimated and the actual export error is the relative export error (which could either be positive or negative), which occurs for every DR vector that is sent and is accumulated at the sender. The concept of relative export error is illustrated in Figure 2. Since the actual delay to receiver 1 is da1, the export error induced by DR1 at receiver 1 is Err(DR1, T1, T1 + δ1 + da1). This means there is an error in the estimated export error, and the sender can compute this error only after it gets feedback from the receiver about the actual delay for the delivery of DR1, i.e., the value of da1. We propose that once receiver 1 receives DR1, it sends the value of da1 back to the sender. The receiver can compute this information as it knows the time at which DR1 was sent (T11 = T1 + δ1, which is appended to the DR vector as shown in Figure 2) and the local receiving time (which is synchronized with the sender's clock). Therefore, the sender computes the relative export error for receiver 1, represented using R1, as

R1 = Err(DR1, T1, T1 + δ1 + dt1) − Err(DR1, T1, T1 + δ1 + da1),

and similarly the relative export error for receiver 2 as

R2 = Err(DR1, T1, T1 + δ2 + dt2) − Err(DR1, T1, T1 + δ2 + da2).

Note that R1 > 0 as da1 < dt1, and R2 < 0 as da2 > dt2. Relative export errors are computed by the sender as and when it receives the feedback from the receivers. This example shows the relative export error values after DR1 is sent and the corresponding feedbacks are received.
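A short sketch of this feedback step (illustrative names; the error function is passed in so the snippet stays self-contained): the relative export error is simply the estimated export error minus the actual one, and the receiver can report the actual delay because the sending time is stamped on the DR vector and clocks are synchronized.

```python
def relative_export_error(err_for_delay, estimated_delay, actual_delay):
    """R for one receiver and one DR vector: estimated export error minus the
    actual export error.  err_for_delay(d) should return Err(DR, T, T + delta + d)
    for delivery delay d, so R > 0 when the actual delay is smaller than the
    estimate and R < 0 otherwise."""
    return err_for_delay(estimated_delay) - err_for_delay(actual_delay)

def reported_actual_delay(send_timestamp, receive_time):
    """What the receiver sends back: the measured delivery delay."""
    return receive_time - send_timestamp
```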
4.2 Equalization of Error Among Receivers

We now explain what we mean by making the errors "equal" at all the receivers and how this can be achieved. As stated before, the sender keeps estimates of the delays to the receivers, dt1 and dt2 in the example of Figure 2. This means that at time T1, when DR1 is computed, the sender already knows how long it may take messages carrying this DR vector to reach the receivers. The sender uses this information to compute the export errors, which are Err(DR1, T1, T1 + δ1 + dt1) and Err(DR1, T1, T1 + δ2 + dt2) for receivers 1 and 2, respectively. Note that the areas of these error components are a function of δ1 and δ2 as well as the network delays dt1 and dt2. If we are to make the export errors due to DR1 the same at both receivers, the sender needs to choose δ1 and δ2 such that

Err(DR1, T1, T1 + δ1 + dt1) = Err(DR1, T1, T1 + δ2 + dt2).

But when DR1 is computed at T1, there could already have been accumulated relative export errors due to previous DR vectors (DR0 and the ones before). Let us represent the accumulated relative error up to DRi for receiver j as R_j^i. To accommodate these accumulated relative errors, the sender should now choose δ1 and δ2 such that

R_1^0 + Err(DR1, T1, T1 + δ1 + dt1) = R_2^0 + Err(DR1, T1, T1 + δ2 + dt2).     (1)

The δi determines the scheduling instant of the DR vector at the sender for receiver i. This method of computing the δ's ensures that the accumulated export error (i.e., total actual error) for each receiver equalizes at the transmission of each DR vector. In order to establish this, assume that the feedback for DR vector Di from a receiver comes to the sender before the schedule for Di+1 is computed. Let S_m^i and A_m^i denote the estimated error for receiver m used for computing the schedule for Di and the accumulated error for receiver m computed after receiving feedback for Di, respectively. Then R_m^i = A_m^i − S_m^i. In order to compute the scheduling instants (i.e., the δ's) for Di, for any pair of receivers m and n, we set R_m^{i−1} + S_m^i = R_n^{i−1} + S_n^i. The following theorem establishes the fact that the accumulated export error is equalized at every scheduling instant.

THEOREM 4.1. When the scheduling instants for sending Di are computed, for any pair of receivers m and n, the following condition is satisfied:

Σ_{k=1}^{i−1} A_m^k + S_m^i = Σ_{k=1}^{i−1} A_n^k + S_n^i.

Proof: By induction. The base case for i = 1 holds since initially R_m^0 = R_n^0 = 0, and the condition S_m^1 = S_n^1 is used to compute the scheduling instants. Assume that the premise holds for some i. We show that it holds for i + 1. In order to compute the schedule for Di+1, we first compute the relative errors as R_m^i = A_m^i − S_m^i and R_n^i = A_n^i − S_n^i. Then, to compute the δ's, we enforce

R_m^i + S_m^{i+1} = R_n^i + S_n^{i+1}, i.e., (A_m^i − S_m^i) + S_m^{i+1} = (A_n^i − S_n^i) + S_n^{i+1}.

Adding the condition of the premise on both sides we get

Σ_{k=1}^{i} A_m^k + S_m^{i+1} = Σ_{k=1}^{i} A_n^k + S_n^{i+1}.

4.3 Computation of the Export Error

Let us now consider how the export errors can be computed. From the previous section, to find δ1 and δ2 we need to find Err(DR1, T1, T1 + δ1 + dt1) and Err(DR1, T1, T1 + δ2 + dt2). Note that the values of R_1^0 and R_2^0 are already known at the sender. Consider the computation of Err(DR1, T1, T1 + δ1 + dt1). This is the integral of the distance between the trajectories AC due to DR1 and BD due to DR0. From DR0 and DR1, point A is (X1, Y1) = (x1, y1) and point B is (X0, Y0) = (x0 + (T1 − T0) × vx0, y0 + (T1 − T0) × vy0). The trajectory AC can be represented as a function of time as (X1(t), Y1(t)) = (X1 + vx1 × t, Y1 + vy1 × t) and the trajectory BD can be represented as (X0(t), Y0(t)) = (X0 + vx0 × t, Y0 + vy0 × t). The distance between the two trajectories as a function of time then becomes

dist(t) = sqrt((X1(t) − X0(t))^2 + (Y1(t) − Y0(t))^2),

which is of the form sqrt(a t^2 + b t + c), with constants a, b and c determined by the starting positions and velocities above. The integral of this expression has a standard closed form, and Err(DR1, T1, T1 + δ1 + dt1) can then be calculated by applying the appropriate limits to the above solution.
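The distance term and its integral can be made concrete with a short sketch. This is not the paper's code: it computes the coefficients a, b, c of dist(t)^2 for two linear trajectories and integrates sqrt(a t^2 + b t + c) numerically (Simpson's rule) instead of using the closed form.

```python
import math

def distance_coefficients(X0, Y0, vx0, vy0, X1, Y1, vx1, vy1):
    """d(t)^2 = a t^2 + b t + c for two linear trajectories that start at
    (X1, Y1) and (X0, Y0) at t = 0 with velocities (vx1, vy1) and (vx0, vy0)."""
    dx, dy = X1 - X0, Y1 - Y0
    dvx, dvy = vx1 - vx0, vy1 - vy0
    a = dvx**2 + dvy**2
    b = 2 * (dx * dvx + dy * dvy)
    c = dx**2 + dy**2
    return a, b, c

def err_integral(a, b, c, t1, t2, steps=1000):
    """Integral of sqrt(a t^2 + b t + c) over [t1, t2] via Simpson's rule."""
    if steps % 2:
        steps += 1
    h = (t2 - t1) / steps
    s = 0.0
    for k in range(steps + 1):
        t = t1 + k * h
        w = 1 if k in (0, steps) else (4 if k % 2 else 2)
        s += w * math.sqrt(max(a * t * t + b * t + c, 0.0))
    return s * h / 3
```

With the coefficients computed from points A and B and the two velocity vectors (time measured from T1), Err(DR1, T1, T1 + δ1 + dt1) corresponds to err_integral(a, b, c, 0, δ1 + dt1).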
In the next section, we consider the computation of the δ's for N receivers.

4.4 Computation of Scheduling Instants

We again look at the computation of the δ's by referring to Figure 2. The sender chooses δ1 and δ2 such that R_1^0 + Err(DR1, T1, T1 + δ1 + dt1) = R_2^0 + Err(DR1, T1, T1 + δ2 + dt2). If R_1^0 and R_2^0 are both zero, then δ1 and δ2 should be chosen such that Err(DR1, T1, T1 + δ1 + dt1) = Err(DR1, T1, T1 + δ2 + dt2). This equality will hold if δ1 + dt1 = δ2 + dt2. Thus, if there is no accumulated relative export error, all that the sender needs to do is to choose the δ's in such a way that they counteract the difference in the delay to the two receivers, so that they receive the DR vector at the same time. As discussed earlier, because the sender is not able to a priori learn the delay, there will always be an accumulated relative export error from a previous DR vector that has to be taken into account. To delve deeper into this, consider the computation of the export error as illustrated in the previous section. To compute the δ's we require that R_1^0 + Err(DR1, T1, T1 + δ1 + dt1) = R_2^0 + Err(DR1, T1, T1 + δ2 + dt2). Let Ei denote the total error R_i^0 + Err(DR1, T1, T1 + dti) that receiver i would accumulate if DR1 were sent to it with no additional delay (δi = 0). Assume that E1 > E2. Then, for the above equation to hold, we require that δ2 be chosen large enough that the additional export error at receiver 2 makes up this difference. To make the game as fast as possible within this framework, the δ values should be made as small as possible so that DR vectors are sent to the receivers as soon as possible, subject to the fairness requirement. Given this, we would choose δ1 to be zero and compute δ2 from the equation R_2^0 + Err(DR1, T1, T1 + δ2 + dt2) = E1. In general, if there are N receivers 1, ..., N, when a sender generates a DR vector and decides to schedule it to be sent, it first computes the Ei values for all of them from the accumulated relative export errors and the estimates of the delays. Then, it finds the largest of these values. Let Ek be the largest value. The sender makes δk zero and computes the rest of the δ's from the equality R_j^0 + Err(DR1, T1, T1 + δj + dtj) = Ek for every other receiver j. The δ's thus obtained give the scheduling instants of the DR vector for the receivers.

4.5 Steps of the Scheduling Algorithm

For the purpose of the discussion below, as before, let us denote the accumulated relative export error at a sender for receiver k up until DRi by R_k^i. Let us denote the scheduled delay at the sender before DRi is sent to receiver k by δ_k^i. Given the above discussion, the algorithm steps are as follows:

1. The sender computes DRi at (say) time Ti and then computes δ_k^i, for all k, 1 ≤ k ≤ N, from R_k^{i−1} and the estimates of the delays dt_k, 1 ≤ k ≤ N, as per Equation (1). It schedules DRi to be sent to receiver k at time Ti + δ_k^i.

2. The DR vectors are sent to the receivers at the scheduled times and are received after delays da_k, 1 ≤ k ≤ N, where da_k may be smaller or larger than dt_k. The receivers send the value of da_k back to the sender (the receiver can compute this value based on the time stamps on the DR vector as described earlier).

3. The sender computes R_k^i as described earlier and illustrated in Figure 2. The sender also recomputes (using an exponential averaging method similar to round-trip time estimation by TCP [10]) the estimate of the delay dt_k from the new value of da_k for receiver k.

4. Go back to Step 1 to compute DRi+1 when it is required and follow the steps of the algorithm to schedule and send this DR vector to the receivers.
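A minimal Python sketch of this schedule computation, under the reading above that the receiver whose zero-delay total error is largest is sent the DR vector immediately and the others are delayed to match it. The function names, the bisection search, and the smoothing constant are illustrative, not taken from the paper.

```python
def schedule_delays(receivers, max_delay=5.0, tol=1e-6):
    """Compute per-receiver scheduling delays (the deltas) that equalize the
    estimated accumulated export error.  `receivers` maps a receiver id to
    (R, err), where R is its accumulated relative export error and err(delta)
    is the estimated export error of the new DR vector if it is delayed by
    delta seconds (err is increasing in delta)."""
    totals0 = {r: R + err(0.0) for r, (R, err) in receivers.items()}
    target = max(totals0.values())
    deltas = {}
    for r, (R, err) in receivers.items():
        if totals0[r] >= target:
            deltas[r] = 0.0                 # the "largest error" receiver is not delayed
            continue
        lo, hi = 0.0, max_delay
        for _ in range(100):                # bisection on the monotone function R + err(delta)
            mid = (lo + hi) / 2
            if R + err(mid) < target:
                lo = mid
            else:
                hi = mid
            if hi - lo < tol:
                break
        deltas[r] = (lo + hi) / 2
    return deltas

def update_delay_estimate(dt_old, da_new, alpha=0.125):
    """Step 3: exponential averaging of the delay estimate, in the spirit of
    TCP round-trip time estimation."""
    return (1 - alpha) * dt_old + alpha * da_new
```

In practice err(delta) could be built from the export-error integral sketched earlier; steps 2 and 3 then fold the reported da_k values back into R_k and the averaged delay estimates.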
4.6 Handling Cases in Practice

So far we implicitly assumed that DRi is sent out to all receivers before a decision is made to compute the next DR vector DRi+1, and that the receivers send the value of da_k corresponding to DRi and this information reaches the sender before it computes DRi+1, so that it can compute R_k^i and then use it in the computation of δ_k^{i+1}. Two issues need consideration with respect to the above algorithm when it is used in practice.

• It may so happen that a new DR vector is computed even before the previous DR vector is sent out to all receivers. How will this situation be handled?

• What happens if the feedback does not arrive before DRi+1 is computed and scheduled to be sent?

Let us consider the first scenario. We assume that DRi has been scheduled to be sent and the scheduling instants are such that δ_1^i < δ_2^i < ... < δ_N^i. Assume that DRi+1 is to be computed (because the real path has deviated from the path exported by DRi by more than the threshold) at time Ti+1, where Ti + δ_k^i < Ti+1 < Ti + δ_{k+1}^i. This means DRi has been sent only to the receivers up to k in the scheduled order. In our algorithm, in this case, the scheduled delay ordering queue is flushed, which means DRi is not sent to the receivers still queued to receive it, but a new scheduling order is computed for all the receivers to send DRi+1. For those receivers who have been sent DRi, assume for now that da_j, 1 ≤ j ≤ k, has been received from all of them (the scenario where da_j has not been received will be considered as part of the second scenario later). For these receivers, the relative export error R_j^i, 1 ≤ j ≤ k, can be computed. For those receivers j, k + 1 ≤ j ≤ N, to whom DRi was not sent, R_j^i does not apply. Consider a receiver j, k + 1 ≤ j ≤ N, to whom DRi was not sent. Refer to Figure 3.

Figure 3: Schedule computation when DRi is not sent to receiver j.

For such a receiver j, when DRi+1 is to be scheduled and δ_j^{i+1} needs to be computed, the total export error is the accumulated relative export error at time Ti when the schedule for DRi was computed, plus the integral of the distance between the two trajectories AC and BD of Figure 3 over the time interval [Ti, Ti+1 + δ_j^{i+1} + dt_j]. Note that this integral is given by Err(DRi, Ti, Ti+1) + Err(DRi+1, Ti+1, Ti+1 + δ_j^{i+1} + dt_j). Therefore, when Equation (1) is instantiated for DRi+1, for such a receiver j we use the value R_j^{i−1} + Err(DRi, Ti, Ti+1) + Err(DRi+1, Ti+1, Ti+1 + δ_j^{i+1} + dt_j), where R_j^{i−1} is the relative export error used when the schedule for DRi was computed.

Now consider the second scenario. Here the feedback da_k corresponding to DRi has not arrived before DRi+1 is computed and scheduled. In this case, R_k^i cannot be computed. Thus, in the computation of δ_k^{i+1} for DRi+1, it will be assumed to be zero. We do assume that a reliable mechanism is used to send da_k back to the sender. When this information arrives at a later time, R_k^i will be computed and accumulated into future relative export errors (for example R_k^{i+1}, if da_k is received before DRi+2 is computed) and used in the computation of δ_k when a future DR vector is to be scheduled (for example DRi+2).
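The bookkeeping described above can be sketched as follows. This is an illustrative skeleton with hypothetical class and method names, not the BZFlag instrumentation; it only shows the flush-on-new-DR behaviour and the handling of late feedback.

```python
class DRSenderState:
    """Sender-side state for the two practical cases in Section 4.6."""

    def __init__(self, receivers):
        self.pending = {}                              # receiver -> (send_time, dr_id)
        self.rel_error = {r: 0.0 for r in receivers}   # accumulated R per receiver
        self.sent_with_estimate = {}                   # (dr_id, receiver) -> estimated delay

    def schedule_new_dr(self, dr_id, deltas, est_delay, now):
        # Case 1: computing a new DR vector flushes anything still queued for
        # the previous one; a fresh schedule covers all receivers.
        self.pending = {r: (now + d, dr_id) for r, d in deltas.items()}
        for r in deltas:
            self.sent_with_estimate[(dr_id, r)] = est_delay[r]

    def feedback(self, dr_id, r, actual_delay, err_for_delay):
        # Case 2: if feedback has not arrived when the next schedule is
        # computed, the relative error for that DR is treated as zero for now;
        # it is folded in here whenever the (reliably delivered) feedback arrives.
        est = self.sent_with_estimate.pop((dr_id, r), None)
        if est is not None:
            self.rel_error[r] += err_for_delay(est) - err_for_delay(actual_delay)
```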
The movement of the tanks as well as that of the shots are exchanged among the players using DR vectors. We have modified the implementation of BZFlag to incorporate synchronized clocks among the players and the server and exchange time-stamps with the DR vector. We set up a testbed with four players running the instrumented version of BZFlag, with one as a sender and the rest as receivers. The scheduling approach and the base case where each DR vector was sent to all the receivers concurrently at every trigger point were implemented in the same run by tagging the DR vectors according to the type of approach used to send the DR vector. NISTNet [12] was used to introduce delays across the sender and the three receivers. Mean delays of 800ms, 500ms and 200ms were introduced between the sender and first, second and the third receiver, respectively. We introduce a variance of 100 msec (to the mean delay of each receiver) to model variability in delay. The sender logged the errors of each receiver every 100 milliseconds for both the scheduling approach and the base case. The sender also calculated the standard deviation and the mean of the accumulated export error of all the receivers every 100 milliseconds. Figure 4 plots the mean and standard deviation of the accumulated export error of all the receivers in the scheduling case against the base case. Note that the x-axis of these graphs (and the other graphs that follow) represents the system time when the snapshot of the game was taken. Observe that the standard deviation of the error with scheduling is much lower as compared to the base case. This implies that the accumulated errors of the receivers in the scheduling case are closer to one another. This shows that the scheduling approach achieves fairness among the receivers even if they are at different distances (i.e, latencies) from the sender. Observe that the mean of the accumulated error increased multifold with scheduling in comparison to the base case. Further exploration for the reason for the rise in the mean led to the conclusion that every time the DR vectors are scheduled in a way to equalize the total error, it pushes each receivers total error higher. Also, as the accumulated error has an estimated component, the schedule is not accurate to equalize the errors for the receivers, leading to the DR vector reaching earlier or later than the actual schedule. In either case, the error is not equalized and if the DR vector reaches late, it actually increases the error for a receiver beyond the highest accumulated error. This means that at the next trigger, this receiver will be the one with highest error and every other receiver's error will be pushed to this error value. This "flip-flop" effect leads to the increase in the accumulated error for all the receivers. The scheduling for fairness leads to the decrease in standard deviation (i.e., increases the fairness among different players), but it comes at the cost of higher mean error, which may not be a desirable feature. This led us to explore different ways of equalizing the accumulated errors. The approach discussed in the following section is a heuristic approach based on the following idea. 
The idea is to use the same number of DR vectors over time as in the base case, but instead of sending the DR vectors to all the receivers at the same frequency as in the base case, to increase the frequency of sending DR vectors to the receiver with higher accumulated error and decrease the frequency of sending DR vectors to the receiver with lower accumulated error; in this way we can equalize the export error of all receivers over time. At the same time we wish to decrease the error of the receiver with the highest accumulated error in the base case (of course, this receiver would be sent more DR vectors than in the base case). We refer to such an algorithm as a budget based algorithm.

5. BUDGET BASED ALGORITHM

In a game, the sender of an entity sends DR vectors to all the receivers every time a threshold is crossed by the entity. The lower the threshold, the more DR vectors are generated during a given time period. Since the DR vectors are sent to all the receivers and the network delay between the sender-receiver pairs cannot be avoided, the before export error (Footnote 3) with the most distant player will always be higher than the rest. In order to mitigate the imbalance in the error, we propose to send DR vectors selectively to different players based on the accumulated errors of these players. The budget based algorithm is based on this idea and there are two variations of it. One is a probabilistic budget based scheme and the other a deterministic budget based scheme.

Footnote 3: Note that the after export error is eliminated by using synchronized clocks among the players.

Figure 4: Mean and standard deviation of error with scheduling and without (i.e., base case).

5.1 Probabilistic budget based scheme

The probabilistic budget based scheme has three main steps: a) lower the dead reckoning threshold but at the same time keep the total number of DRs sent the same as in the base case, b) at every trigger, probabilistically pick a player to send the DR vector to, and c) send the DR vector to the chosen player. These steps are described below. The lowering of the DR threshold is implemented as follows. Lowering the threshold is equivalent to increasing the number of trigger points where DR vectors are generated. Suppose the threshold is such that the number of triggers caused by it in the base case is t and at each trigger n DR vectors are sent by the sender, which results in a total of nt DR vectors. Our goal is to keep the total number of DR vectors sent by the sender fixed at nt, but lower the number of DR vectors sent at each trigger (i.e., not send the DR vector to all the receivers). Let n' and t' be the number of DR vectors sent at each trigger and the number of triggers, respectively, in the modified case. We want to ensure that n't' = nt. Since we want to increase the number of trigger points, i.e., t' > t, this would mean that n' < n. That is, not all receivers will be sent the DR vector at every trigger. In the probabilistic budget based scheme, at each trigger, a probability is calculated for each receiver to be sent a DR vector and only one receiver is sent the DR (n' = 1). This probability is based on the relative weights of the receivers' accumulated errors. That is, a receiver with a higher accumulated error will have a higher probability of being sent the DR vector. Suppose the accumulated errors of three players are a1, a2 and a3, respectively. Then the probability of player 1 receiving the DR vector would be a1/(a1 + a2 + a3), and similarly for the other players. Once the player is picked, the DR vector is sent to that player.
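The weighted pick can be written in a few lines; the sketch below uses illustrative names, and the uniform fallback for the all-zero case is an assumption rather than something specified above.

```python
import random

def pick_receiver(accumulated_error):
    """Probabilistic budget based selection: at a trigger, send the single DR
    vector to one receiver chosen with probability proportional to its
    accumulated error (e.g., a1 / (a1 + a2 + a3) for player 1)."""
    receivers = list(accumulated_error)
    weights = [accumulated_error[r] for r in receivers]
    total = sum(weights)
    if total == 0:
        return random.choice(receivers)    # no error accumulated yet: pick uniformly
    return random.choices(receivers, weights=weights, k=1)[0]
```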
To compare the probabilistic budget based algorithm with the base case, we needed to lower the threshold of the base case as well (for a fair comparison). As the dead reckoning threshold in the base case was already very fine, instead of lowering the threshold we compared the probabilistic budget based approach against a modified base case that uses the same threshold as the budget based algorithm, but in which only every third trigger is actually used to send out a DR vector to all three receivers used in our experiments. This was called the 1/3 base case, as it results in one third the number of DR vectors being sent compared to the base case. The budget per trigger for the probability based approach was set to one DR vector at each trigger, as compared to three DR vectors at every third trigger in the 1/3 base case; thus the two cases lead to the same number of DR vectors being sent out over time. In order to evaluate the effectiveness of the probabilistic budget based algorithm, we instrumented the BZFlag game to use this approach. We used the same testbed consisting of one sender and three receivers with delays of 800ms, 500ms and 200ms from the sender, and with low delay variance (100ms) and moderate delay variance (180ms). The results are shown in Figures 5 and 6. As mentioned earlier, the x-axis of these graphs represents the system time when the snapshot of the game was taken. Observe from the figures that the standard deviation of the accumulated error among the receivers with the probabilistic budget based algorithm is less than in the 1/3 base case and the mean is a little higher than in the 1/3 base case. This implies that the game is fairer compared to the 1/3 base case, at the cost of increasing the mean error by a small amount. The increase in mean error in the probabilistic case compared to the 1/3 base case can be attributed to the fact that even though the probabilistic approach on average sends the same number of DR vectors as the 1/3 base case, it sometimes sends DR vectors to a receiver less frequently and sometimes more frequently than the 1/3 base case due to its probabilistic nature. When a receiver does not receive a DR vector for a long time, the receiver's trajectory drifts further and further from the sender's trajectory and hence the rate of buildup of the error at the receiver is higher. At times when a receiver receives DR vectors more frequently, it builds up error at a lower rate, but there is no way of reversing the error that was built up when it did not receive a DR vector for a long time. This leads the receivers to build up more error in the probabilistic case as compared to the 1/3 base case, where the receivers receive a DR vector almost periodically.

Figure 5: Mean and standard deviation of error for different algorithms (including budget based algorithms) for low delay variance.

Figure 6: Mean and standard deviation of error for different algorithms (including budget based algorithms) for moderate delay variance.

5.2 Deterministic budget based scheme

To bound the increase in mean error we decided to modify the budget based algorithm to be "deterministic". The first two steps of the algorithm are the same as in the probabilistic algorithm; the trigger points are increased to lower the threshold and accumulated errors are used to compute the probability that a receiver will receive a DR vector.
Once these steps are completed, a deterministic schedule for the receivers is computed as follows:

1. If there are any receivers tagged to receive a DR vector at the current trigger, the sender sends out the DR vector to the respective receivers. If at least one receiver was sent a DR vector, the sender calculates the probabilities of each receiver receiving a DR vector as explained before and follows steps 2 to 6; otherwise it does not do anything.

2. For each receiver, the probability value is multiplied by the budget available at each trigger (which is set to 1 as explained below) to give the frequency of sending the DR vector to that receiver.

3. If any receiver's frequency goes over 1 after multiplying by the budget, that receiver's frequency is set to 1 and the surplus amount is distributed equally among the other receivers by adding it to their existing frequencies. This process is repeated until every receiver has a frequency of less than or equal to 1. This is due to the fact that at a trigger we cannot send more than one DR vector to a given receiver; doing so would waste DR vectors by sending redundant information.

4. (1/frequency) gives us the schedule at which the sender should send DR vectors to the respective receiver. Credit obtained previously (explained in step 5), if any, is subtracted from the schedule. Observe that the resulting value of the schedule might not be an integer; hence, the value is rounded off by taking the ceiling of the schedule. For example, if the frequency is 1/3.5, this implies that we would like to have a DR vector sent every 3.5 triggers. However, we are constrained to send it at the 4th trigger, giving us a credit of 0.5. When we next send the DR vector, we would be able to send it on the 3rd trigger because of the 0.5 credit.

5. The difference between the ceiling of the schedule and the schedule itself is the credit that the receiver has obtained, which is remembered and used the next time as explained in step 4.

6. Each receiver that was sent a DR vector at the current trigger is tagged to receive its next DR vector at the trigger that is exactly schedule (the ceiling of the schedule) triggers away from the current trigger. Observe that no other receiver's schedule is modified at this point, as all of them are running schedules calculated at some previous point in time. Those schedules will be automatically modified at the trigger at which they are scheduled to receive their next DR vector.

At the first trigger, the sender sends the DR vector to all the receivers, uses a relative probability of 1/n for each receiver, and follows steps 2 to 6 to calculate the next schedule for each receiver in the same way as for the other triggers. This algorithm ensures that every receiver has a guaranteed schedule for receiving DR vectors and hence there is no irregularity in sending DR vectors to any receiver, as was observed in the probabilistic budget based algorithm (steps 2 to 5 are sketched in code below). We used the testbed described earlier (three receivers with varying delays) to evaluate the deterministic algorithm using a budget of 1 DR vector per trigger, so as to use the same number of DR vectors as in the 1/3 base case. Results from our experiments are shown in Figures 5 and 6. It can be observed that the standard deviation of error in the deterministic budget based algorithm is less than in the 1/3 base case, while the mean error is the same as in the 1/3 base case.
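The frequency, surplus redistribution, and credit handling of steps 2 to 5 can be sketched as follows. This is an illustrative reconstruction (hypothetical function name and data layout, and the assumption that surplus is redistributed only over receivers not yet capped); it is not the paper's implementation.

```python
import math

def deterministic_schedule(probabilities, credit, budget=1.0):
    """One scheduling round (steps 2 to 5): probabilities -> per-receiver
    sending frequencies capped at 1, with surplus redistributed, then
    frequencies -> integer trigger schedules with fractional credit carried
    forward in `credit` (a dict updated in place)."""
    freq = {r: p * budget for r, p in probabilities.items()}
    # Step 3: cap frequencies at 1 and spread any surplus over uncapped receivers.
    while True:
        surplus = sum(f - 1.0 for f in freq.values() if f > 1.0)
        if surplus <= 1e-12:
            break
        capped = {r for r, f in freq.items() if f >= 1.0}
        uncapped = [r for r in freq if r not in capped]
        for r in capped:
            freq[r] = 1.0
        if not uncapped:
            break
        for r in uncapped:
            freq[r] += surplus / len(uncapped)
    # Steps 4 and 5: schedule = ceil(1/frequency - credit); remember the new credit.
    schedule = {}
    for r, f in freq.items():
        if f <= 0.0:
            continue                       # zero-weight receiver: no schedule this round
        raw = 1.0 / f - credit.get(r, 0.0)
        schedule[r] = max(1, math.ceil(raw))
        credit[r] = schedule[r] - raw
    return schedule

# Example: a frequency of 1/3.5 yields a schedule of 4 triggers with credit 0.5,
# then 3 triggers the next time around, matching the example in step 4.
```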
This indicates that the deterministic algorithm is fairer than the 1/3 base case and at the same time does not increase the mean error, thereby leading to better game quality compared to the probabilistic algorithm. In general, when comparing the deterministic approach to the probabilistic approach, we found that the mean accumulated error was always less in the deterministic approach. With respect to the standard deviation of the accumulated error, we found that in the fixed or low variance cases the deterministic approach was generally lower, but in the higher variance cases it was harder to draw conclusions, as the probabilistic approach was sometimes better than the deterministic approach.

6. CONCLUSIONS AND FUTURE WORK

In distributed multi-player games played across the Internet, object and player trajectories within the game space are exchanged in terms of DR vectors. Due to the variable delay between players, these DR vectors reach different players at different times. An unfair advantage is gained by receivers who are closer to the sender of the DR, as they are able to render the sender's position more accurately in real time. In this paper, we first developed a model for estimating the "error" in rendering player trajectories at the receivers. We then presented an algorithm based on scheduling the DR vectors to be sent to different players at different times, thereby "equalizing" the error at different players. This algorithm is aimed at making the game fair to all players, but tends to increase the mean error of the players. To counter this effect, we presented "budget" based algorithms where the DR vectors are still scheduled to be sent to different players at different times, but the algorithm balances the need for "fairness" with the requirement that the error of the "worst" case players (who are furthest from the sender) is not increased compared to the base case (where all DR vectors are sent to all players every time a DR vector is generated). We presented two variations of the budget based algorithms and showed through experimentation that the algorithms reduce the standard deviation of the error, thereby making the game more fair, while at the same time having mean error comparable to the base case.
Fairness in Dead-Reckoning based Distributed Multi-Player Games ABSTRACT In a distributed multi-player game that uses dead-reckoning vectors to exchange movement information among players, there is inaccuracy in rendering the objects at the receiver due to network delay between the sender and the receiver. The object is placed at the receiver at the position indicated by the dead-reckoning vector, but by that time, the real position could have changed considerably at the sender. This inaccuracy would be tolerable if it is consistent among all players; that is, at the same physical time, all players see inaccurate (with respect to the real position of the object) but the same position and trajectory for an object. But due to varying network delays between the sender and different receivers, the inaccuracy is different at different players as well. This leads to unfairness in game playing. In this paper, we first introduce an "error" measure for estimating this inaccuracy. Then we develop an algorithm for scheduling the sending of dead-reckoning vectors at a sender that strives to make this error equal at different receivers over time. This algorithm makes the game very fair at the expense of increasing the overall mean error of all players. To mitigate this effect, we propose a budget based algorithm that provides improved fairness without increasing the mean error thereby maintaining the accuracy of game playing. We have implemented both the scheduling algorithm and the budget based algorithm as part of BZFlag, a popular distributed multi-player game. We show through experiments that these algorithms provide fairness among players in spite of widely varying network delays. An additional property of the proposed algorithms is that they require less number of DRs to be exchanged (compared to the current implementation of BZflag) to achieve the same level of accuracy in game playing. 1. INTRODUCTION In a distributed multi-player game, players are normally distributed across the Internet and have varying delays to each other or to a central game server. Usually, in such games, the players are part of the game and in addition they may control entities that make up the game. During the course of the game, the players and the entities move within the game space. A player sends information about her movement as well as the movement of the entities she controls to the other players using a Dead-Reckoning (DR) vector. A DR vector contains information about the current position of the player/entity in terms of x, y and z coordinates (at the time the DR vector was sent) as well as the trajectory of the entity in terms of the velocity component in each of the dimensions. Each of the participating players receives such DR vectors from one another and renders the other players/entities on the local consoles until a new DR vector is received for that player/entity. In a peer-to-peer game, players send DR vectors directly to each other; in a client-server game, these DR vectors may be forwarded through a game server. The idea of DR is used because it is almost impossible for players/entities to exchange their current positions at every time unit. DR vectors are "quantization" of the real trajectory (which we refer to as real path) at a player. Normally, a new DR vector is computed and sent whenever the real path deviates from the path extrapolated using the previous DR vector (say, in terms of distance in the x, y, z plane) by some amount specified by a threshold. 
We refer to the trajectory that can be computed using the sequence of DR vectors as the exported path. Therefore, at the sending player, there is a deviation between the real path and the exported path. The error due to this deviation can be removed if each movement of player/entity is communicated to the other players at every time unit; that is a DR vector is generated at every time unit thereby making the real and exported paths the same. Given that it is not feasible to satisfy this due to bandwidth limitations, this error is not of practical interest. Therefore, the receiving players can, at best, follow the exported path. Because of the network delay between the sending and receiving players, when a DR vector is received and rendered at a player, the original trajectory of the player/entity may have already changed. Thus, in physical time, there is a deviation at the receiving player between the exported path and the rendered trajectory (which we refer to as placed path). We refer to this error as the export error. Note that the export error, in turn, results in a deviation between the real and the placed paths. The export error manifests itself due to the deviation between the exported path at the sender and the placed path at the receiver (i) before the DR vector is received at the receiver (referred to as the before export error, and (ii) after the DR vector is received at the receiver (referred to as the after export error). In an earlier paper [1], we showed that by synchronizing the clocks at all the players and by using a technique based on time-stamping messages that carry the DR vectors, we can guarantee that the after export error is made zero. That is, the placed and the exported paths match after the DR vector is received. We also showed that the before export error can never be eliminated since there is always a non-zero network delay, but can be significantly reduced using our technique [1]. Henceforth we assume that the players use such a technique which results in unavoidable but small overall export error. In this paper we consider the problem of different and varying network delays between each sender-receiver pair of a DR vector, and consequently, the different and varying export errors at the receivers. Due to the difference in the export errors among the receivers, the same entity is rendered at different physical time at different receivers. This brings in unfairness in game playing. For instance a player with a large delay would always see an entity "late" in physical time compared to the other players and, therefore, her action on the entity would be delayed (in physical time) even if she reacted instantaneously after the entity was rendered. Our goal in this paper is to improve the fairness of these games in spite of the varying network delays by equalizing the export error at the players. We explore whether the time-average of the export errors (which is the cumulative export error over a period of time averaged over the time period) at all the players can be made the same by scheduling the sending of the DR vectors appropriately at the sender. We propose two algorithms to achieve this. Both the algorithms are based on delaying (or dropping) the sending of DR vectors to some players on a continuous basis to try and make the export error the same at all the players. 
At an abstract level, the algorithm delays sending DR vectors to players whose accumulated error so far in the game is smaller than others; this would mean that the export error due to this DR vector at these players will be larger than that of the other players, thereby making them the same. The goal is to make this error at least approximately equal at every DR vector with the deviation in the error becoming smaller as time progresses. The first algorithm (which we refer to as the "scheduling algorithm") is based on "estimating" the delay between players and refining the sending of DR vectors by scheduling them to be sent to different players at different times at every DR generation point. Through an implementation of this algorithm using the open source game BZflag, we show that this algorithm makes the game very fair (we measure fairness in terms of the standard deviation of the error). The drawback of this algorithm is that it tends to push the error of all the players towards that of the player with the worst error (which is the error at the farthest player, in terms of delay, from the sender of the DR). To alleviate this effect, we propose a budget based algorithm which budgets how the DRs are sent to different players. At a high level, the algorithm is based on the idea of sending more DRs to players who are farther away from the sender compared to those who are closer. Experimental results from BZflag illustrates that the budget based algorithm follows a more balanced approach. It improves the fairness of the game but at the same time does so without pushing up the mean error of the players thereby maintaining the accuracy of the game. In addition, the budget based algorithm is shown to achieve the same level of accuracy of game playing as the current implementation of BZflag using much less number of DR vectors. 2. PREVIOUS WORK Earlier work on network games to deal with network latency has mostly focussed on compensation techniques for packet delay and loss [2, 3, 4]. These methods are aimed at making large delays and message loss tolerable for players but does not consider the problems that may be introduced by varying delays from the server to different players or from the players to one another. For example, the concept of local lag has been used in [3] where each player delays every local operation for a certain amount of time so that remote players can receive information about the local operation and execute the same operation at the about same time, thus reducing state inconsistencies. The online multi-player game MiMaze [2, 5, 6], for example, takes a static bucket synchronization approach to compensate for variable network delays. In MiMaze, each player delays all events by 100 ms regardless of whether they are generated locally or remotely. Players with a network delay larger than 100 ms simply cannot participate in the game. In general, techniques based on bucket synchronization depend on imposing a worst case delay on all the players. There have been a few papers which have studied the problem of fairness in a distributed game by more sophisticated message delivery mechanisms. But these works [7, 8] assume the existence of a global view of the game where a game server maintains a view (or state) of the game. Players can introduce objects into the game or delete objects that are already part of the game (for example, in a first-person shooter game, by shooting down the object). These additions and deletions are communicated to the game server using "action" messages. 
Based on these action messages, the state of the game is changed at the game server and these changes are communicated to the players using "update" messages. Fairness is achieved by ordering the delivery of action and update messages at the game server and players respectively based on the notion of a "fair-order" which takes into account the delays between the game server and the different players. Objects that are part of the game may move but how this information is communicated to the players seems to be beyond the scope of these works. In this sense, these works are very limited in scope and may be applicable only to firstperson shooter games and that too to only games where players are not part of the game. DR vectors can be exchanged directly among the players (peerto-peer model) or using a central server as a relay (client-server model). It has been shown in [9] that multi-player games that use DR vectors together with bucket synchronization are not cheatproof unless additional mechanisms are put in place. Both the scheduling algorithm and the budget-based algorithm described in our paper use DR vectors and hence are not cheat-proof. For example, a receiver could skew the delay estimate at the sender to make the sender believe that the delay between the sender and the receiver is high thereby gaining undue advantage. We emphasize that the focus of this paper is on fairness without addressing the issue of cheating. In the next section, we describe the game model that we use and illustrate how senders and receivers exchange DR vectors and how entities are rendered at the receivers based on the time-stamp augmented DR vector exchange as described in [1]. In Section 4, we describe the DR vector scheduling algorithm that aims to make the export error equal across the players with varying delays from the sender of a DR vector, followed by experimental results obtained from instrumentation of the scheduling algorithm on the open source game BZFlag. Section 5, describes the budget based algorithm that achieves improved fairness but without reducing the level accuracy of game playing. Conclusions are presented in Section 6. 3. GAME MODEL 4. SCHEDULING ALGORITHM FOR SENDING DR VECTORS 4.1 Computation of Relative Export Error 4.2 Equalization of Error Among Receivers 4.3 Computation of the Export Error 4.4 Computation of Scheduling Instants 4.5 Steps of the Scheduling Algorithm 4.6 Handling Cases in Practice 4.7 Experimental Results 5. BUDGET BASED ALGORITHM 5.1 Probabilistic budget based scheme 5.2 Deterministic budget based scheme 6. CONCLUSIONS AND FUTURE WORK In distributed multi-player games played across the Internet, object and player trajectory within the game space are exchanged in terms of DR vectors. Due to the variable delay between players, these DR vectors reach different players at different times. There is unfair advantage gained by receivers who are closer to the sender of the DR as they are able to render the sender's position more accurately in real time. In this paper, we first developed a model for estimating the "error" in rendering player trajectories at the receivers. We then presented an algorithm based on scheduling the DR vectors to be sent to different players at different times thereby "equalizing" the error at different players. This algorithm is aimed at making the game fair to all players, but tends to increase the mean error of the players. 
To counter this effect, we presented "budget" based algorithms where the DR vectors are still scheduled to be sent at different players at different times but the algorithm balances the need for "fairness" with the requirement that the error of the "worst" case players (who are furthest from the sender) are not increased compared to the base case (where all DR vectors are sent to all players every time a DR vector is generated). We presented two variations of the budget based algorithms and through experimentation showed that the algorithms reduce the standard deviation of the error thereby making the game more fair and at the same time has comparable mean error to the base case.
Fairness in Dead-Reckoning based Distributed Multi-Player Games ABSTRACT In a distributed multi-player game that uses dead-reckoning vectors to exchange movement information among players, there is inaccuracy in rendering the objects at the receiver due to network delay between the sender and the receiver. The object is placed at the receiver at the position indicated by the dead-reckoning vector, but by that time, the real position could have changed considerably at the sender. This inaccuracy would be tolerable if it is consistent among all players; that is, at the same physical time, all players see inaccurate (with respect to the real position of the object) but the same position and trajectory for an object. But due to varying network delays between the sender and different receivers, the inaccuracy is different at different players as well. This leads to unfairness in game playing. In this paper, we first introduce an "error" measure for estimating this inaccuracy. Then we develop an algorithm for scheduling the sending of dead-reckoning vectors at a sender that strives to make this error equal at different receivers over time. This algorithm makes the game very fair at the expense of increasing the overall mean error of all players. To mitigate this effect, we propose a budget based algorithm that provides improved fairness without increasing the mean error thereby maintaining the accuracy of game playing. We have implemented both the scheduling algorithm and the budget based algorithm as part of BZFlag, a popular distributed multi-player game. We show through experiments that these algorithms provide fairness among players in spite of widely varying network delays. An additional property of the proposed algorithms is that they require less number of DRs to be exchanged (compared to the current implementation of BZflag) to achieve the same level of accuracy in game playing. 1. INTRODUCTION In a distributed multi-player game, players are normally distributed across the Internet and have varying delays to each other or to a central game server. Usually, in such games, the players are part of the game and in addition they may control entities that make up the game. During the course of the game, the players and the entities move within the game space. A player sends information about her movement as well as the movement of the entities she controls to the other players using a Dead-Reckoning (DR) vector. Each of the participating players receives such DR vectors from one another and renders the other players/entities on the local consoles until a new DR vector is received for that player/entity. In a peer-to-peer game, players send DR vectors directly to each other; in a client-server game, these DR vectors may be forwarded through a game server. DR vectors are "quantization" of the real trajectory (which we refer to as real path) at a player. We refer to the trajectory that can be computed using the sequence of DR vectors as the exported path. Therefore, at the sending player, there is a deviation between the real path and the exported path. The error due to this deviation can be removed if each movement of player/entity is communicated to the other players at every time unit; that is a DR vector is generated at every time unit thereby making the real and exported paths the same. Therefore, the receiving players can, at best, follow the exported path. 
Because of the network delay between the sending and receiving players, when a DR vector is received and rendered at a player, the original trajectory of the player/entity may have already changed. Thus, in physical time, there is a deviation at the receiving player between the exported path and the rendered trajectory (which we refer to as placed path). We refer to this error as the export error. Note that the export error, in turn, results in a deviation between the real and the placed paths. The export error manifests itself due to the deviation between the exported path at the sender and the placed path at the receiver (i) before the DR vector is received at the receiver (referred to as the before export error, and (ii) after the DR vector is received at the receiver (referred to as the after export error). In an earlier paper [1], we showed that by synchronizing the clocks at all the players and by using a technique based on time-stamping messages that carry the DR vectors, we can guarantee that the after export error is made zero. That is, the placed and the exported paths match after the DR vector is received. We also showed that the before export error can never be eliminated since there is always a non-zero network delay, but can be significantly reduced using our technique [1]. Henceforth we assume that the players use such a technique which results in unavoidable but small overall export error. In this paper we consider the problem of different and varying network delays between each sender-receiver pair of a DR vector, and consequently, the different and varying export errors at the receivers. Due to the difference in the export errors among the receivers, the same entity is rendered at different physical time at different receivers. This brings in unfairness in game playing. Our goal in this paper is to improve the fairness of these games in spite of the varying network delays by equalizing the export error at the players. We explore whether the time-average of the export errors (which is the cumulative export error over a period of time averaged over the time period) at all the players can be made the same by scheduling the sending of the DR vectors appropriately at the sender. We propose two algorithms to achieve this. Both the algorithms are based on delaying (or dropping) the sending of DR vectors to some players on a continuous basis to try and make the export error the same at all the players. At an abstract level, the algorithm delays sending DR vectors to players whose accumulated error so far in the game is smaller than others; this would mean that the export error due to this DR vector at these players will be larger than that of the other players, thereby making them the same. The goal is to make this error at least approximately equal at every DR vector with the deviation in the error becoming smaller as time progresses. The first algorithm (which we refer to as the "scheduling algorithm") is based on "estimating" the delay between players and refining the sending of DR vectors by scheduling them to be sent to different players at different times at every DR generation point. Through an implementation of this algorithm using the open source game BZflag, we show that this algorithm makes the game very fair (we measure fairness in terms of the standard deviation of the error). 
The drawback of this algorithm is that it tends to push the error of all the players towards that of the player with the worst error (which is the error at the farthest player, in terms of delay, from the sender of the DR). To alleviate this effect, we propose a budget based algorithm which budgets how the DRs are sent to different players. At a high level, the algorithm is based on the idea of sending more DRs to players who are farther away from the sender compared to those who are closer. Experimental results from BZflag illustrates that the budget based algorithm follows a more balanced approach. It improves the fairness of the game but at the same time does so without pushing up the mean error of the players thereby maintaining the accuracy of the game. In addition, the budget based algorithm is shown to achieve the same level of accuracy of game playing as the current implementation of BZflag using much less number of DR vectors. 2. PREVIOUS WORK Earlier work on network games to deal with network latency has mostly focussed on compensation techniques for packet delay and loss [2, 3, 4]. These methods are aimed at making large delays and message loss tolerable for players but does not consider the problems that may be introduced by varying delays from the server to different players or from the players to one another. The online multi-player game MiMaze [2, 5, 6], for example, takes a static bucket synchronization approach to compensate for variable network delays. In MiMaze, each player delays all events by 100 ms regardless of whether they are generated locally or remotely. Players with a network delay larger than 100 ms simply cannot participate in the game. In general, techniques based on bucket synchronization depend on imposing a worst case delay on all the players. There have been a few papers which have studied the problem of fairness in a distributed game by more sophisticated message delivery mechanisms. Players can introduce objects into the game or delete objects that are already part of the game (for example, in a first-person shooter game, by shooting down the object). These additions and deletions are communicated to the game server using "action" messages. Based on these action messages, the state of the game is changed at the game server and these changes are communicated to the players using "update" messages. Fairness is achieved by ordering the delivery of action and update messages at the game server and players respectively based on the notion of a "fair-order" which takes into account the delays between the game server and the different players. Objects that are part of the game may move but how this information is communicated to the players seems to be beyond the scope of these works. In this sense, these works are very limited in scope and may be applicable only to firstperson shooter games and that too to only games where players are not part of the game. DR vectors can be exchanged directly among the players (peerto-peer model) or using a central server as a relay (client-server model). It has been shown in [9] that multi-player games that use DR vectors together with bucket synchronization are not cheatproof unless additional mechanisms are put in place. Both the scheduling algorithm and the budget-based algorithm described in our paper use DR vectors and hence are not cheat-proof. 
In the next section, we describe the game model that we use and illustrate how senders and receivers exchange DR vectors and how entities are rendered at the receivers based on the time-stamp augmented DR vector exchange described in [1]. In Section 4, we describe the DR vector scheduling algorithm that aims to make the export error equal across players with varying delays from the sender of a DR vector, followed by experimental results obtained from an instrumentation of the scheduling algorithm on the open source game BZFlag. Section 5 describes the budget-based algorithm, which achieves improved fairness without reducing the level of accuracy of game play. Conclusions are presented in Section 6. 6. CONCLUSIONS AND FUTURE WORK In distributed multi-player games played across the Internet, object and player trajectories within the game space are exchanged in terms of DR vectors. Due to the variable delay between players, these DR vectors reach different players at different times. In this paper, we first developed a model for estimating the "error" in rendering player trajectories at the receivers. We then presented an algorithm based on scheduling the DR vectors to be sent to different players at different times, thereby "equalizing" the error at different players. This algorithm is aimed at making the game fair to all players, but it tends to increase the mean error of the players. We then presented two variations of a budget-based algorithm and showed through experimentation that these algorithms reduce the standard deviation of the error, thereby making the game more fair, while keeping the mean error comparable to the base case.
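As a toy illustration of the budget-based idea summarized in the conclusions above, the sketch below splits a fixed per-interval budget of DR vector transmissions across players in proportion to their estimated delay from the sender, so that farther players receive relatively more updates; the proportional rule, the delay values, and the budget size are assumptions for the example and not the paper's algorithm.

```python
# Illustrative sketch only: split a fixed per-interval budget of DR vector sends
# across players in proportion to their estimated delay from the sender, so that
# farther players (who suffer larger export error) get relatively more updates.

def allocate_dr_budget(delay_ms, budget):
    """delay_ms: hypothetical estimated one-way delays per player.
    Returns how many of `budget` DR sends each player receives this interval."""
    total = sum(delay_ms.values())
    return {player: round(budget * d / total) for player, d in delay_ms.items()}

print(allocate_dr_budget({"A": 30, "B": 80, "C": 150}, budget=20))
# -> {'A': 2, 'B': 6, 'C': 12}
```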
H-98
Using Asymmetric Distributions to Improve Text Classifier Probability Estimates
Text classifiers that give probability estimates are more readily applicable in a variety of scenarios. For example, rather than choosing one set decision threshold, they can be used in a Bayesian risk model to issue a run-time decision which minimizes a user-specified cost function dynamically chosen at prediction time. However, the quality of the probability estimates is crucial. We review a variety of standard approaches to converting scores (and poor probability estimates) from text classifiers to high quality estimates and introduce new models motivated by the intuition that the empirical score distribution for the extremely irrelevant, hard to discriminate, and obviously relevant items are often significantly different. Finally, we analyze the experimental performance of these models over the outputs of two text classifiers. The analysis demonstrates that one of these models is theoretically attractive (introducing few new parameters while increasing flexibility), computationally efficient, and empirically preferable.
[ "text classifi", "probabl estim", "decis threshold", "bayesian risk model", "empir score distribut", "parametr model", "inform retriev", "logist regress framework", "posterior function", "asymmetr laplac distribut", "search engin retriev", "symmetr distribut", "asymmetr gaussian", "maximum likelihood estim", "class-condit densiti", "text classif", "cost-sensit learn", "activ learn", "classifi combin" ]
[ "P", "P", "P", "P", "P", "M", "U", "U", "M", "M", "U", "M", "M", "M", "U", "M", "U", "U", "M" ]
Using Asymmetric Distributions to Improve Text Classifier Probability Estimates Paul N. Bennett Computer Science Dept. Carnegie Mellon University Pittsburgh, PA 15213 pbennett+@cs.cmu.edu ABSTRACT Text classifiers that give probability estimates are more readily applicable in a variety of scenarios. For example, rather than choosing one set decision threshold, they can be used in a Bayesian risk model to issue a run-time decision which minimizes a userspecified cost function dynamically chosen at prediction time. However, the quality of the probability estimates is crucial. We review a variety of standard approaches to converting scores (and poor probability estimates) from text classifiers to high quality estimates and introduce new models motivated by the intuition that the empirical score distribution for the extremely irrelevant, hard to discriminate, and obviously relevant items are often significantly different. Finally, we analyze the experimental performance of these models over the outputs of two text classifiers. The analysis demonstrates that one of these models is theoretically attractive (introducing few new parameters while increasing flexibility), computationally efficient, and empirically preferable. Categories and Subject Descriptors H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval; I.2.6 [Artificial Intelligence]: Learning; I.5.2 [Pattern Recognition]: Design Methodology General Terms Algorithms, Experimentation, Reliability. 1. INTRODUCTION Text classifiers that give probability estimates are more flexible in practice than those that give only a simple classification or even a ranking. For example, rather than choosing one set decision threshold, they can be used in a Bayesian risk model [8] to issue a runtime decision which minimizes the expected cost of a user-specified cost function dynamically chosen at prediction time. This can be used to minimize a linear utility cost function for filtering tasks where pre-specified costs of relevant/irrelevant are not available during training but are specified at prediction time. Furthermore, the costs can be changed without retraining the model. Additionally, probability estimates are often used as the basis of deciding which document``s label to request next during active learning [17, 23]. Effective active learning can be key in many information retrieval tasks where obtaining labeled data can be costly - severely reducing the amount of labeled data needed to reach the same performance as when new labels are requested randomly [17]. Finally, they are also amenable to making other types of cost-sensitive decisions [26] and for combining decisions [3]. However, in all of these tasks, the quality of the probability estimates is crucial. Parametric models generally use assumptions that the data conform to the model to trade-off flexibility with the ability to estimate the model parameters accurately with little training data. Since many text classification tasks often have very little training data, we focus on parametric methods. However, most of the existing parametric methods that have been applied to this task have an assumption we find undesirable. While some of these methods allow the distributions of the documents relevant and irrelevant to the topic to have different variances, they typically enforce the unnecessary constraint that the documents are symmetrically distributed around their respective modes. 
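To spell out the Bayesian risk use mentioned in the introduction, consider the following sketch (an illustration, not code from the paper): given a calibrated estimate of P(+|d), the expected-cost-minimizing decision can be taken at prediction time for whatever misclassification costs apply then, and the costs can change without retraining. The function name and cost values are placeholders.

```python
# Illustrative sketch: choose the prediction that minimizes expected cost given a
# calibrated estimate P(+|d). The costs are supplied at prediction time; nothing
# here needs to be fixed when the model is trained.

def min_cost_decision(p_pos, cost_fp, cost_fn):
    """cost_fp: cost of predicting '+' when the true class is '-'.
    cost_fn: cost of predicting '-' when the true class is '+'."""
    expected_cost_pos = (1.0 - p_pos) * cost_fp   # predict '+'
    expected_cost_neg = p_pos * cost_fn           # predict '-'
    return '+' if expected_cost_pos <= expected_cost_neg else '-'

print(min_cost_decision(0.3, cost_fp=1.0, cost_fn=1.0))    # '-'
print(min_cost_decision(0.3, cost_fp=1.0, cost_fn=10.0))   # '+'
```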
We introduce several asymmetric parametric models that allow us to relax this assumption without significantly increasing the number of parameters and demonstrate how we can efficiently fit the models. Additionally, these models can be interpreted as assuming the scores produced by the text classifier have three basic types of empirical behavior - one corresponding to each of the extremely irrelevant, hard to discriminate, and obviously relevant items. We first review related work on improving probability estimates and score modeling in information retrieval. Then, we discuss in further detail the need for asymmetric models. After this, we describe two specific asymmetric models and, using two standard text classifiers, na¨ıve Bayes and SVMs, demonstrate how they can be efficiently used to recalibrate poor probability estimates or produce high quality probability estimates from raw scores. We then review experiments using previously proposed methods and the asymmetric methods over several text classification corpora to demonstrate the strengths and weaknesses of the various methods. Finally, we summarize our contributions and discuss future directions. 2. RELATED WORK Parametric models have been employed to obtain probability estimates in several areas of information retrieval. Lewis & Gale [17] use logistic regression to recalibrate na¨ıve Bayes though the quality of the probability estimates are not directly evaluated; it is simply performed as an intermediate step in active learning. Manmatha et. al [20] introduced models appropriate to produce probability estimates from relevance scores returned from search engines and demonstrated how the resulting probability estimates could be subsequently employed to combine the outputs of several search engines. They use a different parametric distribution for the relevant and irrelevant classes, but do not pursue two-sided asymmetric distributions for a single class as described here. They also survey the long history of modeling the relevance scores of search engines. Our work is similar in flavor to these previous attempts to model search engine scores, but we target text classifier outputs which we have found demonstrate a different type of score distribution behavior because of the role of training data. Focus on improving probability estimates has been growing lately. Zadrozny & Elkan [26] provide a corrective measure for decision trees (termed curtailment) and a non-parametric method for recalibrating na¨ıve Bayes. In more recent work [27], they investigate using a semi-parametric method that uses a monotonic piecewiseconstant fit to the data and apply the method to na¨ıve Bayes and a linear SVM. While they compared their methods to other parametric methods based on symmetry, they fail to provide significance test results. Our work provides asymmetric parametric methods which complement the non-parametric and semi-parametric methods they propose when data scarcity is an issue. In addition, their methods reduce the resolution of the scores output by the classifier (the number of distinct values output), but the methods here do not have such a weakness since they are continuous functions. There is a variety of other work that this paper extends. Platt [22] uses a logistic regression framework that models noisy class labels to produce probabilities from the raw output of an SVM. 
His work showed that this post-processing method not only can produce probability estimates of similar quality to SVMs directly trained to produce probabilities (regularized likelihood kernel methods), but it also tends to produce sparser kernels (which generalize better). Finally, Bennett [1] obtained moderate gains by applying Platt``s method to the recalibration of na¨ıve Bayes but found there were more problematic areas than when it was applied to SVMs. Recalibrating poorly calibrated classifiers is not a new problem. Lindley et. al [19] first proposed the idea of recalibrating classifiers, and DeGroot & Fienberg [5, 6] gave the now accepted standard formalization for the problem of assessing calibration initiated by others [4, 24]. 3. PROBLEM DEFINITION & APPROACH Our work differs from earlier approaches primarily in three points: (1) We provide asymmetric parametric models suitable for use when little training data is available; (2) We explicitly analyze the quality of probability estimates these and competing methods produce and provide significance tests for these results; (3) We target text classifier outputs where a majority of the previous literature targeted the output of search engines. 3.1 Problem Definition The general problem we are concerned with is highlighted in Figure 1. A text classifier produces a prediction about a document and gives a score s(d) indicating the strength of its decision that the document belongs to the positive class (relevant to the topic). We assume throughout there are only two classes: the positive and the negative (or irrelevant) class (''+'' and ''-'' respectively). There are two general types of parametric approaches. The first of these tries to fit the posterior function directly, i.e. there is one p(s|+) p(s|−) Bayes'' RuleP(+) P(−) Classifier P(+| s(d)) Predict class, c(d)={+,−} confidence s(d) that c(d)=+ Document, d and give unnormalized Figure 1: We are concerned with how to perform the box highlighted in grey. The internals are for one type of approach. function estimator that performs a direct mapping of the score s to the probability P(+|s(d)). The second type of approach breaks the problem down as shown in the grey box of Figure 1. An estimator for each of the class-conditional densities (i.e. p(s|+) and p(s|−)) is produced, then Bayes'' rule and the class priors are used to obtain the estimate for P(+|s(d)). 3.2 Motivation for Asymmetric Distributions Most of the previous parametric approaches to this problem either directly or indirectly (when fitting only the posterior) correspond to fitting Gaussians to the class-conditional densities; they differ only in the criterion used to estimate the parameters. We can visualize this as depicted in Figure 2. Since increasing s usually indicates increased likelihood of belonging to the positive class, then the rightmost distribution usually corresponds to p(s|+). A B C 0 0.2 0.4 0.6 0.8 1 −10 −5 0 5 10 p(s|Class={+,−}) Unnormalized Confidence Score s p(s | Class = +) p(s | Class = −) Figure 2: Typical View of Discrimination based on Gaussians However, using standard Gaussians fails to capitalize on a basic characteristic commonly seen. Namely, if we have a raw output score that can be used for discrimination, then the empirical behavior between the modes (label B in Figure 2) is often very different than that outside of the modes (labels A and C in Figure 2). 
Intuitively, the area between the modes corresponds to the hard examples, which are difficult for this classifier to distinguish, while the areas outside the modes are the extreme examples that are usually easily distinguished. This suggests that we may want to uncouple the scale of the outside and inside segments of the distribution (as depicted by the curve denoted as A-Gaussian in Figure 3). As a result, an asymmetric distribution may be a more appropriate choice for application to the raw output score of a classifier. Ideally (i.e. perfect classification) there will exist scores θ− and θ+ such that all examples with score greater than θ+ are relevant and all examples with scores less than θ− are irrelevant. Furthermore, no examples fall between θ− and θ+. The distance | θ− − θ+ | corresponds to the margin in some classifiers, and an attempt is often made to maximize this quantity. Because text classifiers have training data to use to separate the classes, the final behavior of the score distributions is primarily a factor of the amount of training data and the consequent separation in the classes achieved. This is in contrast to search engine retrieval where the distribution of scores is more a factor of language distribution across documents, the similarity function, and the length and type of query. Perfect classification corresponds to using two very asymmetric distributions, but in this case, the probabilities are actually one and zero and many methods will work for typical purposes. Practically, some examples will fall between θ− and θ+, and it is often important to estimate the probabilities of these examples well (since they correspond to the hard examples). Justifications can be given for both why you may find more and less examples between θ− and θ+ than outside of them, but there are few empirical reasons to believe that the distributions should be symmetric. A natural first candidate for an asymmetric distribution is to generalize a common symmetric distribution, e.g. the Laplace or the Gaussian. An asymmetric Laplace distribution can be achieved by placing two exponentials around the mode in the following manner: p(x | θ, β, γ) = βγ β+γ exp [−β (θ − x)] x ≤ θ (β, γ > 0) βγ β+γ exp [−γ (x − θ)] x > θ (1) where θ, β, and γ are the model parameters. θ is the mode of the distribution, β is the inverse scale of the exponential to the left of the mode, and γ is the inverse scale of the exponential to the right. We will use the notation Λ(X | θ, β, γ) to refer to this distribution. 0 0.002 0.004 0.006 0.008 0.01 -300 -200 -100 0 100 200 p(s|Class={+,-}) Unnormalized Confidence Score s Gaussian A-Gaussian Figure 3: Gaussians vs. Asymmetric Gaussians. A Shortcoming of Symmetric Distributions - The vertical lines show the modes as estimated nonparametrically. We can create an asymmetric Gaussian in the same manner: p(x | θ, σl, σr) = 2√ 2π(σl+σr) exp −(x−θ)2 2σ2 l x ≤ θ (σl, σr > 0) 2√ 2π(σl+σr) exp −(x−θ)2 2σ2 r x > θ (2) where θ, σl, and σr are the model parameters. To refer to this asymmetric Gaussian, we use the notation Γ(X | θ, σl, σr). While these distributions are composed of halves, the resulting function is a single continuous distribution. These distributions allow us to fit our data with much greater flexibility at the cost of only fitting six parameters. We could instead try mixture models for each component or other extensions, but most other extensions require at least as many parameters (and can often be more computationally expensive). 
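The densities of Eq. (1) and Eq. (2) are cheap to evaluate; the sketch below transcribes them directly and adds the Bayes'-rule inversion from Figure 1 that turns fitted class-conditional densities into P(+|s). The helper names and the example parameter values are illustrative; the formulas themselves follow the equations above.

```python
import math

# Direct transcriptions of Eq. (1), the asymmetric Laplace, and Eq. (2), the
# asymmetric Gaussian, plus the Bayes'-rule inversion of Figure 1 that converts
# fitted class-conditional densities into the posterior P(+|s).

def asym_laplace_pdf(x, theta, beta, gamma):
    c = beta * gamma / (beta + gamma)
    d = -beta * (theta - x) if x <= theta else -gamma * (x - theta)
    return c * math.exp(d)

def asym_gaussian_pdf(x, theta, sigma_l, sigma_r):
    c = 2.0 / (math.sqrt(2.0 * math.pi) * (sigma_l + sigma_r))
    sigma = sigma_l if x <= theta else sigma_r
    return c * math.exp(-((x - theta) ** 2) / (2.0 * sigma ** 2))

def posterior(s, p_s_pos, p_s_neg, prior_pos):
    """P(+|s) = p(s|+)P(+) / (p(s|+)P(+) + p(s|-)P(-))."""
    num = p_s_pos(s) * prior_pos
    return num / (num + p_s_neg(s) * (1.0 - prior_pos))

# Setting beta == gamma (or sigma_l == sigma_r) recovers the symmetric cases.
p = posterior(0.5,
              lambda s: asym_laplace_pdf(s, theta=1.0, beta=0.5, gamma=2.0),
              lambda s: asym_laplace_pdf(s, theta=-1.0, beta=2.0, gamma=0.5),
              prior_pos=0.3)
print(p)
```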
In addition, the motivation above should provide significant cause to believe the underlying distributions actually behave in this way. Furthermore, this family of distributions can still fit a symmetric distribution, and finally, in the empirical evaluation, evidence is presented that demonstrates this asymmetric behavior (see Figure 4). To our knowledge, neither family of distributions has been previously used in machine learning or information retrieval. Both are termed generalizations of an Asymmetric Laplace in [14], but we refer to them as described above to reflect the nature of how we derived them for this task. 3.3 Estimating the Parameters of the Asymmetric Distributions This section develops the method for finding maximum likelihood estimates (MLE) of the parameters for the above asymmetric distributions. In order to find the MLEs, we have two choices: (1) use numerical estimation to estimate all three parameters at once (2) fix the value of θ, and estimate the other two (β and γ or σl and σr) given our choice of θ, then consider alternate values of θ. Because of the simplicity of analysis in the latter alternative, we choose this method. 3.3.1 Asymmetric Laplace MLEs For D = {x1, x2, ... , xN } where the xi are i.i.d. and X ∼ Λ(X | θ, β, γ), the likelihood is N i Λ(X | θ, β, γ). Now, we fix θ and compute the maximum likelihood for that choice of θ. Then, we can simply consider all choices of θ and choose the one with the maximum likelihood over all choices of θ. The complete derivation is omitted because of space but is available in [2]. We define the following values: Nl = | {x ∈ D | x ≤ θ} | Nr = | {x ∈ D | x > θ} | Sl = x∈D|x≤θ x Sr = x∈D|x>θ x Dl = Nlθ − Sl Dr = Sr − Nrθ. Note that Dl and Dr are the sum of the absolute differences between the x belonging to the left and right halves of the distribution (respectively) and θ. Finally the MLEs for β and γ for a fixed θ are: βMLE = N Dl + √ DrDl γMLE = N Dr + √ DrDl . (3) These estimates are not wholly unexpected since we would obtain Nl Dl if we were to estimate β independently of γ. The elegance of the formulae is that the estimates will tend to be symmetric only insofar as the data dictate it (i.e. the closer Dl and Dr are to being equal, the closer the resulting inverse scales). By continuity arguments, when N = 0, we assign β = γ = 0 where 0 is a small constant that acts to disperse the distribution to a uniform. Similarly, when N = 0 and Dl = 0, we assign β = inf where inf is a very large constant that corresponds to an extremely sharp distribution (i.e. almost all mass at θ for that half). Dr = 0 is handled similarly. Assuming that θ falls in some range [φ, ψ] dependent upon only the observed documents, then this alternative is also easily computable. Given Nl, Sl, Nr, Sr, we can compute the posterior and the MLEs in constant time. In addition, if the scores are sorted, then we can perform the whole process quite efficiently. Starting with the minimum θ = φ we would like to try, we loop through the scores once and set Nl, Sl, Nr, Sr appropriately. Then we increase θ and just step past the scores that have shifted from the right side of the distribution to the left. Assuming the number of candidate θs are O(n), this process is O(n), and the overall process is dominated by sorting the scores, O(n log n) (or expected linear time). 3.3.2 Asymmetric Gaussian MLEs For D = {x1, x2, ... , xN } where the xi are i.i.d. and X ∼ Γ(X | θ, σl, σr), the likelihood is N i Γ(X | θ, β, γ). The MLEs can be worked out similar to the above. 
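A compact sketch of the fitting loop just described, under two simplifying assumptions: candidate θs are only the observed scores (the experiments below also test points between adjacent scores), and degenerate halves with Dl = 0 or Dr = 0 are simply skipped rather than handled with the ε constants above.

```python
import math

# Sketch of the asymmetric Laplace fit: for each candidate theta, the closed-form
# MLEs of Eq. (3) are computed from running sums over the sorted scores, and the
# theta with the highest log-likelihood is kept.

def fit_asym_laplace(scores):
    xs = sorted(scores)
    n, total = len(xs), sum(xs)
    best_ll, best_params = -float("inf"), None
    n_l, s_l = 0, 0.0
    for x in xs:
        theta = x
        n_l, s_l = n_l + 1, s_l + x            # scores <= theta
        n_r, s_r = n - n_l, total - s_l        # scores > theta
        d_l, d_r = n_l * theta - s_l, s_r - n_r * theta
        if d_l <= 0 or d_r <= 0:               # degenerate half; skipped in this sketch
            continue
        root = math.sqrt(d_l * d_r)
        beta, gamma = n / (d_l + root), n / (d_r + root)
        ll = n * math.log(beta * gamma / (beta + gamma)) - beta * d_l - gamma * d_r
        if ll > best_ll:
            best_ll, best_params = ll, (theta, beta, gamma)
    return best_params

scores = [-3.1, -2.0, -1.2, -0.4, 0.1, 0.3, 0.5, 2.2]
print(fit_asym_laplace(scores))   # (theta, beta, gamma)
```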
We assume the same definitions as above (the complete derivation omitted for space is available in [2]), and in addition, let: Sl2 = x∈D|x≤θ x2 Sr2 = x∈D|x>θ x2 Dl2 = Sl2 − Slθ + θ2 Nl Dr2 = Sr2 − Srθ + θ2 Nr. The analytical solution for the MLEs for a fixed θ is: σl,MLE = Dl2 + D 2/3 l2 D 1/3 r2 N (4) σr,MLE = Dr2 + D 2/3 r2 D 1/3 l2 N . (5) By continuity arguments, when N = 0, we assign σr = σl = inf , and when N = 0 and Dl2 = 0 (resp. Dr2 = 0), we assign σl = 0 (resp. σr = 0). Again, the same computational complexity analysis applies to estimating these parameters. 4. EXPERIMENTAL ANALYSIS 4.1 Methods For each of the methods that use a class prior, we use a smoothed add-one estimate, i.e. P(c) = |c|+1 N+2 where N is the number of documents. For methods that fit the class-conditional densities, p(s|+) and p(s|−), the resulting densities are inverted using Bayes'' rule as described above. All of the methods below are fit using maximum likelihood estimates. For recalibrating a classifier (i.e. correcting poor probability estimates output by the classifier), it is usual to use the log-odds of the classifier``s estimate as s(d). The log-odds are defined to be log P (+|d) P (−|d) . The normal decision threshold (minimizing error) in terms of log-odds is at zero (i.e. P(+|d) = P(−|d) = 0.5). Since it scales the outputs to a space [−∞, ∞], the log-odds make normal (and similar distributions) applicable [19]. Lewis & Gale [17] give a more motivating viewpoint that fitting the log-odds is a dampening effect for the inaccurate independence assumption and a bias correction for inaccurate estimates of the priors. In general, fitting the log-odds can serve to boost or dampen the signal from the original classifier as the data dictate. Gaussians A Gaussian is fit to each of the class-conditional densities, using the usual maximum likelihood estimates. This method is denoted in the tables below as Gauss. Asymmetric Gaussians An asymmetric Gaussian is fit to each of the class-conditional densities using the maximum likelihood estimation procedure described above. Intervals between adjacent scores are divided by 10 in testing candidate θs, i.e. 8 points between actual scores occurring in the data set are tested. This method is denoted as A. Gauss. Laplace Distributions Even though Laplace distributions are not typically applied to this task, we also tried this method to isolate why benefit is gained from the asymmetric form. The usual MLEs were used for estimating the location and scale of a classical symmetric Laplace distribution as described in [14]. We denote this method as Laplace below. Asymmetric Laplace Distributions An asymmetric Laplace is fit to each of the class-conditional densities using the maximum likelihood estimation procedure described above. As with the asymmetric Gaussian, intervals between adjacent scores are divided by 10 in testing candidate θs. This method is denoted as A. Laplace below. Logistic Regression This method is the first of two methods we evaluated that directly fit the posterior, P(+|s(d)). Both methods restrict the set of families to a two-parameter sigmoid family; they differ primarily in their model of class labels. As opposed to the above methods, one can argue that an additional boon of these methods is they completely preserve the ranking given by the classifier. When this is desired, these methods may be more appropriate. The previous methods will mostly preserve the rankings, but they can deviate if the data dictate it. 
Thus, they may model the data behavior better at the cost of departing from a monotonicity constraint in the output of the classifier. Lewis & Gale [17] use logistic regression to recalibrate na¨ıve Bayes for subsequent use in active learning. The model they use is: P(+|s(d)) = exp(a + b s(d)) 1 + exp(a + b s(d)) . (6) Instead of using the probabilities directly output by the classifier, they use the loglikelihood ratio of the probabilities, log P (d|+) P (d|−) , as the score s(d). Instead of using this below, we will use the logodds ratio. This does not affect the model as it simply shifts all of the scores by a constant determined by the priors. We refer to this method as LogReg below. Logistic Regression with Noisy Class Labels Platt [22] proposes a framework that extends the logistic regression model above to incorporate noisy class labels and uses it to produce probability estimates from the raw output of an SVM. This model differs from the LogReg model only in how the parameters are estimated. The parameters are still fit using maximum likelihood estimation, but a model of noisy class labels is used in addition to allow for the possibility that the class was mislabeled. The noise is modeled by assuming there is a finite probability of mislabeling a positive example and of mislabeling a negative example; these two noise estimates are determined by the number of positive examples and the number of negative examples (using Bayes'' rule to infer the probability of incorrect label). Even though the performance of this model would not be expected to deviate much from LogReg, we evaluate it for completeness. We refer to this method below as LR+Noise. 4.2 Data We examined several corpora, including the MSN Web Directory, Reuters, and TREC-AP. MSN Web Directory The MSN Web Directory is a large collection of heterogeneous web pages (from a May 1999 web snapshot) that have been hierarchically classified. We used the same train/test split of 50078/10024 documents as that reported in [9]. The MSN Web hierarchy is a seven-level hierarchy; we used all 13 of the top-level categories. The class proportions in the training set vary from 1.15% to 22.29%. In the testing set, they range from 1.14% to 21.54%. The classes are general subjects such as Health & Fitness and Travel & Vacation. Human indexers assigned the documents to zero or more categories. For the experiments below, we used only the top 1000 words with highest mutual information for each class; approximately 195K words appear in at least three training documents. Reuters The Reuters 21578 corpus [16] contains Reuters news articles from 1987. For this data set, we used the ModApte standard train/ test split of 9603/3299 documents (8676 unused documents). The classes are economic subjects (e.g., acq for acquisitions, earn for earnings, etc.) that human taggers applied to the document; a document may have multiple subjects. There are actually 135 classes in this domain (only 90 of which occur in the training and testing set); however, we only examined the ten most frequent classes since small numbers of testing examples make interpreting some performance measures difficult due to high variance.1 Limiting to the ten largest classes allows us to compare our results to previously published results [10, 13, 21, 22]. The class proportions in the training set vary from 1.88% to 29.96%. In the testing set, they range from 1.7% to 32.95%. 
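Returning to the two posterior-fitting recalibrators, the sketch below fits the LogReg model of Eq. (6) by maximum likelihood using plain gradient ascent over (score, label) pairs. The optimizer, its settings, and the toy data are illustrative choices rather than the fitting procedure used in the experiments; LR+Noise would differ only in replacing the hard 0/1 labels with noisy targets derived from the class counts.

```python
import math

# Minimal sketch of the LogReg recalibrator of Eq. (6): fit a and b by maximum
# likelihood using plain gradient ascent on (score, label) pairs, with label 1
# for '+' and 0 for '-'. The optimizer and its settings are illustrative only.

def fit_logreg(scores, labels, lr=0.05, iters=5000):
    a, b = 0.0, 0.0
    for _ in range(iters):
        grad_a = grad_b = 0.0
        for s, y in zip(scores, labels):
            p = 1.0 / (1.0 + math.exp(-(a + b * s)))
            grad_a += y - p            # d log-likelihood / d a
            grad_b += (y - p) * s      # d log-likelihood / d b
        a += lr * grad_a / len(scores)
        b += lr * grad_b / len(scores)
    return a, b

def logreg_posterior(s, a, b):
    return 1.0 / (1.0 + math.exp(-(a + b * s)))   # Eq. (6)

a, b = fit_logreg([-2.5, -1.0, -0.2, 0.4, 1.3, 2.8], [0, 0, 0, 1, 1, 1])
print(logreg_posterior(0.0, a, b))
```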
For the experiments below we used only the top 300 words with highest mutual information for each class; approximately 15K words appear in at least three training documents. TREC-AP The TREC-AP corpus is a collection of AP news stories from 1988 to 1990. We used the same train/test split of 142791/66992 documents that was used in [18]. As described in [17] (see also [15]), the categories are defined by keywords in a keyword field. The title and body fields are used in the experiments below. There are twenty categories in total. The class proportions in the training set vary from 0.06% to 2.03%. In the testing set, they range from 0.03% to 4.32%. For the experiments described below, we use only the top 1000 words with the highest mutual information for each class; approximately 123K words appear in at least 3 training documents. 4.3 Classifiers We selected two classifiers for evaluation. A linear SVM classifier which is a discriminative classifier that does not normally output probability values, and a na¨ıve Bayes classifier whose probability outputs are often poor [1, 7] but can be improved [1, 26, 27]. 1 A separate comparison of only LogReg, LR+Noise, and A. Laplace over all 90 categories of Reuters was also conducted. After accounting for the variance, that evaluation also supported the claims made here. SVM For linear SVMs, we use the Smox toolkit which is based on Platt``s Sequential Minimal Optimization algorithm. The features were represented as continuous values. We used the raw output score of the SVM as s(d) since it has been shown to be appropriate before [22]. The normal decision threshold (assuming we are seeking to minimize errors) for this classifier is at zero. Na¨ıve Bayes The na¨ıve Bayes classifier model is a multinomial model [21]. We smoothed word and class probabilities using a Bayesian estimate (with the word prior) and a Laplace m-estimate, respectively. We use the log-odds estimated by the classifier as s(d). The normal decision threshold is at zero. 4.4 Performance Measures We use log-loss [12] and squared error [4, 6] to evaluate the quality of the probability estimates. For a document d with class c(d) ∈ {+, −} (i.e. the data have known labels and not probabilities), logloss is defined as δ(c(d), +) log P(+|d) + δ(c(d), −) log P(−|d) where δ(a, b) . = 1 if a = b and 0 otherwise. The squared error is δ(c(d), +)(1 − P(+|d))2 + δ(c(d), −)(1 − P(−|d))2 . When the class of a document is correctly predicted with a probability of one, log-loss is zero and squared error is zero. When the class of a document is incorrectly predicted with a probability of one, log-loss is −∞ and squared error is one. Thus, both measures assess how close an estimate comes to correctly predicting the item``s class but vary in how harshly incorrect predictions are penalized. We report only the sum of these measures and omit the averages for space. Their averages, average log-loss and mean squared error (MSE), can be computed from these totals by dividing by the number of binary decisions in a corpus. In addition, we also compare the error of the classifiers at their default thresholds and with the probabilities. This evaluates how the probability estimates have improved with respect to the decision threshold P(+|d) = 0.5. Thus, error only indicates how the methods would perform if a false positive was penalized the same as a false negative and not the general quality of the probability estimates. 
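The two probability-quality measures of Section 4.4, written per binary decision; the totals reported in Table 1 are sums of these quantities over all binary decisions in a corpus, so log-loss totals are negative with zero being perfect. The example estimates are arbitrary values chosen for illustration.

```python
import math

# Per-decision log-loss and squared error as defined in Section 4.4; the tables
# report their sums over all binary decisions in a corpus.

def log_loss(p_pos, label):
    p = p_pos if label == '+' else 1.0 - p_pos
    return math.log(p)            # 0 if certain and correct, -inf if certain and wrong

def squared_error(p_pos, label):
    p = p_pos if label == '+' else 1.0 - p_pos
    return (1.0 - p) ** 2         # 0 if certain and correct, 1 if certain and wrong

decisions = [(0.9, '+'), (0.2, '-'), (0.6, '-')]   # (P(+|d), true class), arbitrary
print(sum(log_loss(p, c) for p, c in decisions))
print(sum(squared_error(p, c) for p, c in decisions))
```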
It is presented simply to provide the reader with a more complete understanding of the empirical tendencies of the methods. We use a a standard paired micro sign test [25] to determine statistical significance in the difference of all measures. Only pairs that the methods disagree on are used in the sign test. This test compares pairs of scores from two systems with the null hypothesis that the number of items they disagree on are binomially distributed. We use a significance level of p = 0.01. 4.5 Experimental Methodology As the categories under consideration in the experiments are not mutually exclusive, the classification was done by training n binary classifiers, where n is the number of classes. In order to generate the scores that each method uses to fit its probability estimates, we use five-fold cross-validation on the training data. We note that even though it is computationally efficient to perform leave-one-out cross-validation for the na¨ıve Bayes classifier, this may not be desirable since the distribution of scores can be skewed as a result. Of course, as with any application of n-fold cross-validation, it is also possible to bias the results by holding n too low and underestimating the performance of the final classifier. 4.6 Results & Discussion The results for recalibrating na¨ıve Bayes are given in Table 1a. Table 1b gives results for producing probabilistic outputs for SVMs. Log-loss Error2 Errors MSN Web Gauss -60656.41 10503.30 10754 A.Gauss -57262.26 8727.47 9675 Laplace -45363.84 8617.59 10927 A.Laplace -36765.88 6407.84† 8350 LogReg -36470.99 6525.47 8540 LR+Noise -36468.18 6534.61 8563 na¨ıve Bayes -1098900.83 17117.50 17834 Reuters Gauss -5523.14 1124.17 1654 A.Gauss -4929.12 652.67 888 Laplace -5677.68 1157.33 1416 A.Laplace -3106.95‡ 554.37‡ 726 LogReg -3375.63 603.20 786 LR+Noise -3374.15 604.80 785 na¨ıve Bayes -52184.52 1969.41 2121 TREC-AP Gauss -57872.57 8431.89 9705 A.Gauss -66009.43 7826.99 8865 Laplace -61548.42 9571.29 11442 A.Laplace -48711.55 7251.87‡ 8642 LogReg -48250.81 7540.60 8797 LR+Noise -48251.51 7544.84 8801 na¨ıve Bayes -1903487.10 41770.21 43661 Log-loss Error2 Errors MSN Web Gauss -54463.32 9090.57 10555 A. Gauss -44363.70 6907.79 8375 Laplace -42429.25 7669.75 10201 A. Laplace -31133.83 5003.32 6170 LogReg -30209.36 5158.74 6480 LR+Noise -30294.01 5209.80 6551 Linear SVM N/A N/A 6602 Reuters Gauss -3955.33 589.25 735 A. Gauss -4580.46 428.21 532 Laplace -3569.36 640.19 770 A. Laplace -2599.28 412.75 505 LogReg -2575.85 407.48 509 LR+Noise -2567.68 408.82 516 Linear SVM N/A N/A 516 TREC-AP Gauss -54620.94 6525.71 7321 A. Gauss -77729.49 6062.64 6639 Laplace -54543.19 7508.37 9033 A. Laplace -48414.39 5761.25‡ 6572‡ LogReg -48285.56 5914.04 6791 LR+Noise -48214.96 5919.25 6794 Linear SVM N/A N/A 6718 Table 1: (a) Results for na¨ıve Bayes (left) and (b) SVM (right). The best entry for a corpus is in bold. Entries that are statistically significantly better than all other entries are underlined. A † denotes the method is significantly better than all other methods except for na¨ıve Bayes. A ‡ denotes the entry is significantly better than all other methods except for A. Gauss (and na¨ıve Bayes for the table on the left). The reason for this distinction in significance tests is described in the text. We start with general observations that result from examining the performance of these methods over the various corpora. The first is that A. Laplace, LR+Noise, and LogReg, quite clearly outperform the other methods. 
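The significance markers in Table 1 and the pairwise comparisons discussed below rest on the paired micro sign test of Section 4.4; the sketch computes it under a Binomial(n, 1/2) null over only the decisions the two methods disagree on. A two-sided p-value is used here, which is an assumption since the text does not state the sidedness.

```python
from math import comb

# Paired sign test over only the decisions two methods disagree on: under the
# null, the number of wins for either method is Binomial(n, 0.5). A two-sided
# p-value is computed here (sidedness is an assumption on our part).

def sign_test_p(wins_a, wins_b):
    n, k = wins_a + wins_b, max(wins_a, wins_b)
    p = 2.0 * sum(comb(n, i) for i in range(k, n + 1)) * 0.5 ** n
    return min(1.0, p)

print(sign_test_p(37, 13))   # below the 0.01 significance level used in the paper
```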
There is usually little difference between the performance of LR+Noise and LogReg (both as shown here and on a decision by decision basis), but this is unsurprising since LR+Noise just adds noisy class labels to the LogReg model. With respect to the three different measures, LR+Noise and LogReg tend to perform slightly better (but never significantly) than A. Laplace at some tasks with respect to log-loss and squared error. However, A. Laplace always produces the least number of errors for all of the tasks, though at times the degree of improvement is not significant. In order to give the reader a better sense of the behavior of these methods, Figures 4-5 show the fits produced by the most competitive of these methods versus the actual data behavior (as estimated nonparametrically by binning) for class Earn in Reuters. Figure 4 shows the class-conditional densities, and thus only A. Laplace is shown since LogReg fits the posterior directly. Figure 5 shows the estimations of the log-odds, (i.e. log P (Earn|s(d)) P (¬Earn|s(d)) ). Viewing the log-odds (rather than the posterior) usually enables errors in estimation to be detected by the eye more easily. We can break things down as the sign test does and just look at wins and losses on the items that the methods disagree on. Looked at in this way only two methods (na¨ıve Bayes and A. Gauss) ever have more pairwise wins than A. Laplace; those two sometimes have more pairwise wins on log-loss and squared error even though the total never wins (i.e. they are dragged down by heavy penalties). In addition, this comparison of pairwise wins means that for those cases where LogReg and LR+Noise have better scores than A. Laplace, it would not be deemed significant by the sign test at any level since they do not have more wins. For example, of the 130K binary decisions over the MSN Web dataset, A. Laplace had approximately 101K pairwise wins versus LogReg and LR+Noise. No method ever has more pairwise wins than A. Laplace for the error comparison nor does any method every achieve a better total. The basic observation made about na¨ıve Bayes in previous work is that it tends to produce estimates very close to zero and one [1, 17]. This means if it tends to be right enough of the time, it will produce results that do not appear significant in a sign test that ignores size of difference (as the one here). The totals of the squared error and log-loss bear out the previous observation that when it``s wrong it``s really wrong. There are several interesting points about the performance of the asymmetric distributions as well. First, A. Gauss performs poorly because (similar to na¨ıve Bayes) there are some examples where it is penalized a large amount. This behavior results from a general tendency to perform like the picture shown in Figure 3 (note the crossover at the tails). While the asymmetric Gaussian tends to place the mode much more accurately than a symmetric Gaussian, its asymmetric flexibility combined with its distance function causes it to distribute too much mass to the outside tails while failing to fit around the mode accurately enough to compensate. Figure 3 is actually a result of fitting the two distributions to real data. As a result, at the tails there can be a large discrepancy between the likelihood of belonging to each class. Thus when there are no outliers A. 
Gauss can perform quite competitively, but when there is an 0 0.002 0.004 0.006 0.008 0.01 0.012 -600 -400 -200 0 200 400 p(s(d)|Class={+,-}) s(d) = naive Bayes log-odds Train Test A.Laplace 0 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 0.45 -15 -10 -5 0 5 10 15 p(s(d)|Class={+,-}) s(d) = linear SVM raw score Train Test A.Laplace Figure 4: The empirical distribution of classifier scores for documents in the training and the test set for class Earn in Reuters. Also shown is the fit of the asymmetric Laplace distribution to the training score distribution. The positive class (i.e. Earn) is the distribution on the right in each graph, and the negative class (i.e. ¬Earn) is that on the left in each graph. -6 -4 -2 0 2 4 6 8 -250 -200 -150 -100 -50 0 50 100 150 LogOdds=logP(+|s(d))-logP(-|s(d)) s(d) = naive Bayes log-odds Train Test A.Laplace LogReg -5 0 5 10 15 -4 -2 0 2 4 6 LogOdds=logP(+|s(d))-logP(-|s(d)) s(d) = linear SVM raw score Train Test A.Laplace LogReg Figure 5: The fit produced by various methods compared to the empirical log-odds of the training data for class Earn in Reuters. outlier A. Gauss is penalized quite heavily. There are enough such cases overall that it seems clearly inferior to the top three methods. However, the asymmetric Laplace places much more emphasis around the mode (Figure 4) because of the different distance function (think of the sharp peak of an exponential). As a result most of the mass stays centered around the mode, while the asymmetric parameters still allow more flexibility than the standard Laplace. Since the standard Laplace also corresponds to a piecewise fit in the log-odds space, this highlights that part of the power of the asymmetric methods is their sensitivity in placing the knots at the actual modes - rather than the symmetric assumption that the means correspond to the modes. Additionally, the asymmetric methods have greater flexibility in fitting the slopes of the line segments as well. Even in cases where the test distribution differs from the training distribution (Figure 4), A. Laplace still yields a solution that gives a better fit than LogReg (Figure 5), the next best competitor. Finally, we can make a few observations about the usefulness of the various performance metrics. First, log-loss only awards a finite amount of credit as the degree to which something is correct improves (i.e. there are diminishing returns as it approaches zero), but it can infinitely penalize for a wrong estimate. Thus, it is possible for one outlier to skew the totals, but misclassifying this example may not matter for any but a handful of actual utility functions used in practice. Secondly, squared error has a weakness in the other direction. That is, its penalty and reward are bounded in [0, 1], but if the number of errors is small enough, it is possible for a method to appear better when it is producing what we generally consider unhelpful probability estimates. For example, consider a method that only estimates probabilities as zero or one (which na¨ıve Bayes tends to but doesn``t quite reach if you use smoothing). This method could win according to squared error, but with just one error it would never perform better on log-loss than any method that assigns some non-zero probability to each outcome. For these reasons, we recommend that neither of these are used in isolation as they each give slightly different insights to the quality of the estimates produced. These observations are straightforward from the definitions but are underscored by the evaluation. 5. 
FUTURE WORK A promising extension to the work presented here is a hybrid distribution of a Gaussian (on the outside slopes) and exponentials (on the inner slopes). From the empirical evidence presented in [22], the expectation is that such a distribution might allow more emphasis of the probability mass around the modes (as with the exponential) while still providing more accurate estimates toward the tails. Just as logistic regression allows the log-odds of the posterior distribution to be fit directly with a line, we could directly fit the log-odds of the posterior with a three-piece line (a spline) instead of indirectly doing the same thing by fitting the asymmetric Laplace. This approach may provide more power since it retains the asymmetry assumption but not the assumption that the class-conditional densities are from an asymmetric Laplace. Finally, extending these methods to the outputs of other discriminative classifiers is an open area. We are currently evaluating the appropriateness of these methods for the output of a voted perceptron [11]. By analogy to the log-odds, the operative score that appears promising is log weight perceptrons voting + weight perceptrons voting − . 6. SUMMARY AND CONCLUSIONS We have reviewed a wide variety of parametric methods for producing probability estimates from the raw scores of a discriminative classifier and for recalibrating an uncalibrated probabilistic classifier. In addition, we have introduced two new families that attempt to capitalize on the asymmetric behavior that tends to arise from learning a discrimination function. We have given an efficient way to estimate the parameters of these distributions. While these distributions attempt to strike a balance between the generalization power of parametric distributions and the flexibility that the added asymmetric parameters give, the asymmetric Gaussian appears to have too great of an emphasis away from the modes. In striking contrast, the asymmetric Laplace distribution appears to be preferable over several large text domains and a variety of performance measures to the primary competing parametric methods, though comparable performance is sometimes achieved with one of two varieties of logistic regression. Given the ease of estimating the parameters of this distribution, it is a good first choice for producing quality probability estimates. Acknowledgments We are grateful to Francisco Pereira for the sign test code, Anton Likhodedov for logistic regression code, and John Platt for the code support for the linear SVM classifier toolkit Smox. Also, we sincerely thank Chris Meek and John Platt for the very useful advice provided in the early stages of this work. Thanks also to Jaime Carbonell and John Lafferty for their useful feedback on the final versions of this paper. 7. REFERENCES [1] P. N. Bennett. Assessing the calibration of naive bayes'' posterior estimates. Technical Report CMU-CS-00-155, Carnegie Mellon, School of Computer Science, 2000. [2] P. N. Bennett. Using asymmetric distributions to improve classifier probabilities: A comparison of new and standard parametric methods. Technical Report CMU-CS-02-126, Carnegie Mellon, School of Computer Science, 2002. [3] H. Bourlard and N. Morgan. A continuous speech recognition system embedding mlp into hmm. In NIPS ``89, 1989. [4] G. Brier. Verification of forecasts expressed in terms of probability. Monthly Weather Review, 78:1-3, 1950. [5] M. H. DeGroot and S. E. Fienberg. The comparison and evaluation of forecasters. 
Statistician, 32:12-22, 1983. [6] M. H. DeGroot and S. E. Fienberg. Comparing probability forecasters: Basic binary concepts and multivariate extensions. In P. Goel and A. Zellner, editors, Bayesian Inference and Decision Techniques. Elsevier Science Publishers B.V., 1986. [7] P. Domingos and M. Pazzani. Beyond independence: Conditions for the optimality of the simple bayesian classifier. In ICML ``96, 1996. [8] R. Duda, P. Hart, and D. Stork. Pattern Classification. John Wiley & Sons, Inc., 2001. [9] S. T. Dumais and H. Chen. Hierarchical classification of web content. In SIGIR ``00, 2000. [10] S. T. Dumais, J. Platt, D. Heckerman, and M. Sahami. Inductive learning algorithms and representations for text categorization. In CIKM ``98, 1998. [11] Y. Freund and R. Schapire. Large margin classification using the perceptron algorithm. Machine Learning, 37(3):277-296, 1999. [12] I. Good. Rational decisions. Journal of the Royal Statistical Society, Series B, 1952. [13] T. Joachims. Text categorization with support vector machines: Learning with many relevant features. In ECML ``98, 1998. [14] S. Kotz, T. J. Kozubowski, and K. Podgorski. The Laplace Distribution and Generalizations: A Revisit with Applications to Communications, Economics, Engineering, and Finance. Birkh¨auser, 2001. [15] D. D. Lewis. A sequential algorithm for training text classifiers: Corrigendum and additional data. SIGIR Forum, 29(2):13-19, Fall 1995. [16] D. D. Lewis. Reuters-21578, distribution 1.0. http://www.daviddlewis.com/resources/ testcollections/reuters21578, January 1997. [17] D. D. Lewis and W. A. Gale. A sequential algorithm for training text classifiers. In SIGIR ``94, 1994. [18] D. D. Lewis, R. E. Schapire, J. P. Callan, and R. Papka. Training algorithms for linear text classifiers. In SIGIR ``96, 1996. [19] D. Lindley, A. Tversky, and R. Brown. On the reconciliation of probability assessments. Journal of the Royal Statistical Society, 1979. [20] R. Manmatha, T. Rath, and F. Feng. Modeling score distributions for combining the outputs of search engines. In SIGIR ``01, 2001. [21] A. McCallum and K. Nigam. A comparison of event models for naive bayes text classification. In AAAI ``98, Workshop on Learning for Text Categorization, 1998. [22] J. C. Platt. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. In A. J. Smola, P. Bartlett, B. Scholkopf, and D. Schuurmans, editors, Advances in Large Margin Classifiers. MIT Press, 1999. [23] M. Saar-Tsechansky and F. Provost. Active learning for class probability estimation and ranking. In IJCAI ``01, 2001. [24] R. L. Winkler. Scoring rules and the evaluation of probability assessors. Journal of the American Statistical Association, 1969. [25] Y. Yang and X. Liu. A re-examination of text categorization methods. In SIGIR ``99, 1999. [26] B. Zadrozny and C. Elkan. Obtaining calibrated probability estimates from decision trees and naive bayesian classifiers. In ICML ``01, 2001. [27] B. Zadrozny and C. Elkan. Reducing multiclass to binary by coupling probability estimates. In KDD ``02, 2002.
Using Asymmetric Distributions to Improve Text Classifier Probability Estimates ABSTRACT Text classifiers that give probability estimates are more readily applicable in a variety of scenarios. For example, rather than choosing one set decision threshold, they can be used in a Bayesian risk model to issue a run-time decision which minimizes a userspecified cost function dynamically chosen at prediction time. However, the quality of the probability estimates is crucial. We review a variety of standard approaches to converting scores (and poor probability estimates) from text classifiers to high quality estimates and introduce new models motivated by the intuition that the empirical score distribution for the "extremely irrelevant", "hard to discriminate", and "obviously relevant" items are often significantly different. Finally, we analyze the experimental performance of these models over the outputs of two text classifiers. The analysis demonstrates that one of these models is theoretically attractive (introducing few new parameters while increasing flexibility), computationally efficient, and empirically preferable. 1. INTRODUCTION Text classifiers that give probability estimates are more flexible in practice than those that give only a simple classification or even a ranking. For example, rather than choosing one set decision threshold, they can be used in a Bayesian risk model [8] to issue a runtime decision which minimizes the expected cost of a user-specified cost function dynamically chosen at prediction time. This can be used to minimize a linear utility cost function for filtering tasks where pre-specified costs of relevant/irrelevant are not available during training but are specified at prediction time. Furthermore, the costs can be changed without retraining the model. Additionally, probability estimates are often used as the basis of deciding which document's label to request next during active learning [17, 23]. Effective active learning can be key in many information retrieval tasks where obtaining labeled data can be costly--severely reducing the amount of labeled data needed to reach the same performance as when new labels are requested randomly [17]. Finally, they are also amenable to making other types of cost-sensitive decisions [26] and for combining decisions [3]. However, in all of these tasks, the quality of the probability estimates is crucial. Parametric models generally use assumptions that the data conform to the model to trade-off flexibility with the ability to estimate the model parameters accurately with little training data. Since many text classification tasks often have very little training data, we focus on parametric methods. However, most of the existing parametric methods that have been applied to this task have an assumption we find undesirable. While some of these methods allow the distributions of the documents relevant and irrelevant to the topic to have different variances, they typically enforce the unnecessary constraint that the documents are symmetrically distributed around their respective modes. We introduce several asymmetric parametric models that allow us to relax this assumption without significantly increasing the number of parameters and demonstrate how we can efficiently fit the models. 
Additionally, these models can be interpreted as assuming the scores produced by the text classifier have three basic types of empirical behavior--one corresponding to each of the "extremely irrelevant", "hard to discriminate", and "obviously relevant" items. We first review related work on improving probability estimates and score modeling in information retrieval. Then, we discuss in further detail the need for asymmetric models. After this, we describe two specific asymmetric models and, using two standard text classifiers, na ¨ ıve Bayes and SVMs, demonstrate how they can be efficiently used to recalibrate poor probability estimates or produce high quality probability estimates from raw scores. We then review experiments using previously proposed methods and the asymmetric methods over several text classification corpora to demonstrate the strengths and weaknesses of the various methods. Finally, we summarize our contributions and discuss future directions. 2. RELATED WORK Parametric models have been employed to obtain probability estimates in several areas of information retrieval. Lewis & Gale [17] use logistic regression to recalibrate na ¨ ıve Bayes though the quality of the probability estimates are not directly evaluated; it is simply performed as an intermediate step in active learning. Manmatha et. al [20] introduced models appropriate to produce probability estimates from relevance scores returned from search engines and demonstrated how the resulting probability estimates could be subsequently employed to combine the outputs of several search engines. They use a different parametric distribution for the relevant and irrelevant classes, but do not pursue two-sided asymmetric distributions for a single class as described here. They also survey the long history of modeling the relevance scores of search engines. Our work is similar in flavor to these previous attempts to model search engine scores, but we target text classifier outputs which we have found demonstrate a different type of score distribution behavior because of the role of training data. Focus on improving probability estimates has been growing lately. Zadrozny & Elkan [26] provide a corrective measure for decision trees (termed curtailment) and a non-parametric method for recalibrating na ¨ ıve Bayes. In more recent work [27], they investigate using a semi-parametric method that uses a monotonic piecewiseconstant fit to the data and apply the method to na ¨ ıve Bayes and a linear SVM. While they compared their methods to other parametric methods based on symmetry, they fail to provide significance test results. Our work provides asymmetric parametric methods which complement the non-parametric and semi-parametric methods they propose when data scarcity is an issue. In addition, their methods reduce the resolution of the scores output by the classifier (the number of distinct values output), but the methods here do not have such a weakness since they are continuous functions. There is a variety of other work that this paper extends. Platt [22] uses a logistic regression framework that models noisy class labels to produce probabilities from the raw output of an SVM. His work showed that this post-processing method not only can produce probability estimates of similar quality to SVMs directly trained to produce probabilities (regularized likelihood kernel methods), but it also tends to produce sparser kernels (which generalize better). 
Finally, Bennett [1] obtained moderate gains by applying Platt's method to the recalibration of na ¨ ıve Bayes but found there were more problematic areas than when it was applied to SVMs. Recalibrating poorly calibrated classifiers is not a new problem. Lindley et. al [19] first proposed the idea of recalibrating classifiers, and DeGroot & Fienberg [5, 6] gave the now accepted standard formalization for the problem of assessing calibration initiated by others [4, 24]. 3. PROBLEM DEFINITION & APPROACH Our work differs from earlier approaches primarily in three points: (1) We provide asymmetric parametric models suitable for use when little training data is available; (2) We explicitly analyze the quality of probability estimates these and competing methods produce and provide significance tests for these results; (3) We target text classifier outputs where a majority of the previous literature targeted the output of search engines. 3.1 Problem Definition The general problem we are concerned with is highlighted in Figure 1. A text classifier produces a prediction about a document and gives a score s (d) indicating the strength of its decision that the document belongs to the positive class (relevant to the topic). We assume throughout there are only two classes: the positive and the negative (or irrelevant) class (' +' and' -' respectively). There are two general types of parametric approaches. The first of these tries to fit the posterior function directly, i.e. there is one Figure 1: We are concerned with how to perform the box highlighted in grey. The internals are for one type of approach. function estimator that performs a direct mapping of the score s to the probability P (+ Is (d)). The second type of approach breaks the problem down as shown in the grey box of Figure 1. An estimator for each of the class-conditional densities (i.e. p (sI +) and p (sI--)) is produced, then Bayes' rule and the class priors are used to obtain the estimate for P (+ Is (d)). 3.2 Motivation for Asymmetric Distributions Most of the previous parametric approaches to this problem either directly or indirectly (when fitting only the posterior) correspond to fitting Gaussians to the class-conditional densities; they differ only in the criterion used to estimate the parameters. We can visualize this as depicted in Figure 2. Since increasing s usually indicates increased likelihood of belonging to the positive class, then the rightmost distribution usually corresponds to p (sI +). Figure 2: Typical View of Discrimination based on Gaussians However, using standard Gaussians fails to capitalize on a basic characteristic commonly seen. Namely, if we have a raw output score that can be used for discrimination, then the empirical behavior between the modes (label B in Figure 2) is often very different than that outside of the modes (labels A and C in Figure 2). Intuitively, the area between the modes corresponds to the hard examples, which are difficult for this classifier to distinguish, while the areas outside the modes are the extreme examples that are usually easily distinguished. This suggests that we may want to uncouple the scale of the outside and inside segments of the distribution (as depicted by the curve denoted as A-Gaussian in Figure 3). As a result, an asymmetric distribution may be a more appropriate choice for application to the raw output score of a classifier. Ideally (i.e. 
perfect classification) there will exist scores θ− and θ+ such that all examples with score greater than θ+ are relevant and all examples with scores less than θ− are irrelevant. Furthermore, no examples fall between θ− and θ+. The distance |θ− − θ+| corresponds to the margin in some classifiers, and an attempt is often made to maximize this quantity. Because text classifiers have training data to use to separate the classes, the final behavior of the score distributions is primarily a factor of the amount of training data and the consequent separation in the classes achieved. This is in contrast to search engine retrieval where the distribution of scores is more a factor of language distribution across documents, the similarity function, and the length and type of query. Perfect classification corresponds to using two very asymmetric distributions, but in this case, the probabilities are actually one and zero and many methods will work for typical purposes. Practically, some examples will fall between θ− and θ+, and it is often important to estimate the probabilities of these examples well (since they correspond to the "hard" examples). Justifications can be given for both why you may find more and less examples between θ− and θ+ than outside of them, but there are few empirical reasons to believe that the distributions should be symmetric. A natural first candidate for an asymmetric distribution is to generalize a common symmetric distribution, e.g. the Laplace or the Gaussian. An asymmetric Laplace distribution can be achieved by placing two exponentials around the mode in the following manner: Λ(x | θ, β, γ) = (βγ/(β+γ)) exp(−β(θ − x)) if x ≤ θ, and (βγ/(β+γ)) exp(−γ(x − θ)) if x > θ, where θ, β, and γ are the model parameters. θ is the mode of the distribution, β is the inverse scale of the exponential to the left of the mode, and γ is the inverse scale of the exponential to the right. We will use the notation Λ(X | θ, β, γ) to refer to this distribution. Figure 3: Gaussians vs. Asymmetric Gaussians. A Shortcoming of Symmetric Distributions -- The vertical lines show the modes as estimated nonparametrically. We can create an asymmetric Gaussian in the same manner: Γ(x | θ, σl, σr) = (2/(√(2π)(σl + σr))) exp(−(x − θ)²/(2σl²)) if x ≤ θ, and (2/(√(2π)(σl + σr))) exp(−(x − θ)²/(2σr²)) if x > θ, where θ, σl, and σr are the model parameters. To refer to this asymmetric Gaussian, we use the notation Γ(X | θ, σl, σr). While these distributions are composed of "halves", the resulting function is a single continuous distribution. These distributions allow us to fit our data with much greater flexibility at the cost of only fitting six parameters. We could instead try mixture models for each component or other extensions, but most other extensions require at least as many parameters (and can often be more computationally expensive). In addition, the motivation above should provide significant cause to believe the underlying distributions actually behave in this way. Furthermore, this family of distributions can still fit a symmetric distribution, and finally, in the empirical evaluation, evidence is presented that demonstrates this asymmetric behavior (see Figure 4). To our knowledge, neither family of distributions has been previously used in machine learning or information retrieval. Both are termed generalizations of an Asymmetric Laplace in [14], but we refer to them as described above to reflect the nature of how we derived them for this task. 3.3 Estimating the Parameters of the Asymmetric Distributions This section develops the method for finding maximum likelihood estimates (MLE) of the parameters for the above asymmetric distributions.
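Before turning to the estimation procedure, the two densities just defined can be made concrete. The following is a minimal Python sketch of how Λ and Γ could be evaluated; the function names and the use of the math module are our own illustration rather than anything specified in the paper.

import math

def asym_laplace_pdf(x, theta, beta, gamma):
    # Lambda(x | theta, beta, gamma): two exponentials joined at the mode theta,
    # with inverse scale beta to the left of the mode and gamma to the right.
    norm = beta * gamma / (beta + gamma)
    if x <= theta:
        return norm * math.exp(-beta * (theta - x))
    return norm * math.exp(-gamma * (x - theta))

def asym_gaussian_pdf(x, theta, sigma_l, sigma_r):
    # Gamma(x | theta, sigma_l, sigma_r): two Gaussian halves joined at the mode theta.
    norm = 2.0 / (math.sqrt(2.0 * math.pi) * (sigma_l + sigma_r))
    s = sigma_l if x <= theta else sigma_r
    return norm * math.exp(-(x - theta) ** 2 / (2.0 * s * s))

Because the normalizing constants are closed-form, each class-conditional density still needs only three parameters, which is what keeps the total at six parameters for a binary problem.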
In order to find the MLEs, we have two choices: (1) use numerical estimation to estimate all three parameters at once; (2) fix the value of θ, estimate the other two (β and γ, or σl and σr) given our choice of θ, then consider alternate values of θ. Because of the simplicity of analysis in the latter alternative, we choose this method. 3.3.1 Asymmetric Laplace MLEs For D = {x1, x2, ..., xN} where the xi are i.i.d. and X ∼ Λ(X | θ, β, γ), the likelihood is the product over i = 1..N of Λ(xi | θ, β, γ). Now, we fix θ and compute the maximum likelihood for that choice of θ. Then, we can simply consider all choices of θ and choose the one with the maximum likelihood over all choices of θ. The complete derivation is omitted because of space but is available in [2]. We define the following values: Nl = |{xi : xi ≤ θ}|, Nr = |{xi : xi > θ}|, Sl = Σ{xi ≤ θ} xi, Sr = Σ{xi > θ} xi, Dl = Nl θ − Sl, and Dr = Sr − Nr θ. Note that Dl and Dr are the sum of the absolute differences between the x belonging to the left and right halves of the distribution (respectively) and θ. Finally, the MLEs for β and γ for a fixed θ are: β = N / (Dl + √(Dl Dr)) and γ = N / (Dr + √(Dl Dr)). These estimates are not wholly unexpected since we would obtain Nl/Dl if we were to estimate β independently of γ. The elegance of the formulae is that the estimates will tend to be symmetric only insofar as the data dictate it (i.e. the closer Dl and Dr are to being equal, the closer the resulting inverse scales). By continuity arguments, when N = 0, we assign both β and γ a small constant that acts to disperse the distribution to a uniform. Similarly, when N ≠ 0 and Dl = 0, we assign β a very large constant that corresponds to an extremely sharp distribution (i.e. almost all mass at θ for that half). Dr = 0 is handled similarly. Assuming that θ falls in some range [φ, ψ] dependent upon only the observed documents, then this alternative is also easily computable. Given Nl, Sl, Nr, Sr, we can compute the posterior and the MLEs in constant time. In addition, if the scores are sorted, then we can perform the whole process quite efficiently. Starting with the minimum θ = φ we would like to try, we loop through the scores once and set Nl, Sl, Nr, Sr appropriately. Then we increase θ and just step past the scores that have shifted from the right side of the distribution to the left. Assuming the number of candidate θs is O(n), this process is O(n), and the overall process is dominated by sorting the scores, O(n log n) (or expected linear time). 3.3.2 Asymmetric Gaussian MLEs For D = {x1, x2, ..., xN} where the xi are i.i.d. and X ∼ Γ(X | θ, σl, σr), the likelihood is the product over i = 1..N of Γ(xi | θ, σl, σr). The MLEs can be worked out similar to the above. We assume the same definitions as above (the complete derivation omitted for space is available in [2]), and in addition, let Dl2 = Σ{xi ≤ θ} (xi − θ)² and Dr2 = Σ{xi > θ} (xi − θ)². The MLEs for a fixed θ are then σl² = (Dl2 + Dl2^(2/3) Dr2^(1/3)) / N and σr² = (Dr2 + Dr2^(2/3) Dl2^(1/3)) / N. By continuity arguments, when N = 0, we assign σl and σr a very large constant, and when N ≠ 0 and Dl2 = 0 (resp. Dr2 = 0), we assign σl (resp. σr) a small constant. Again, the same computational complexity analysis applies to estimating these parameters. 4. EXPERIMENTAL ANALYSIS 4.1 Methods For each of the methods that use a class prior, we use a smoothed add-one estimate, i.e. P(c) = (|c| + 1) / (N + 2), where N is the number of documents. For methods that fit the class-conditional densities, p(s | +) and p(s | −), the resulting densities are inverted using Bayes' rule as described above. All of the methods below are fit using maximum likelihood estimates. For recalibrating a classifier (i.e. correcting poor probability estimates output by the classifier), it is usual to use the log-odds of the classifier's estimate as s(d).
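Returning briefly to Section 3.3.1, the θ-sweep can be implemented compactly. The sketch below is our own reconstruction; it assumes that candidate modes are restricted to the observed scores (the experiments above also test points between adjacent scores) and uses the fixed-θ closed-form estimates for β and γ given above. The function name and the small/large fallback constants are hypothetical.

import math

def fit_asym_laplace(scores, big=1e6):
    # Sweep candidate modes theta over the sorted scores; for each theta, use the
    # closed-form MLEs of beta and gamma given theta, and keep the best likelihood.
    xs = sorted(scores)
    if not xs:
        raise ValueError("need at least one score")
    n = len(xs)
    total = sum(xs)
    best = None
    s_l = 0.0                               # running sum of scores <= theta
    for i, theta in enumerate(xs):          # candidate thetas: the observed scores
        n_l = i + 1
        s_l += xs[i]
        n_r, s_r = n - n_l, total - s_l
        d_l = n_l * theta - s_l             # sum of |x - theta| over the left half
        d_r = s_r - n_r * theta             # sum of |x - theta| over the right half
        root = math.sqrt(d_l * d_r)
        beta = n / (d_l + root) if d_l + root > 0 else big    # continuity fallback when D_l = 0
        gamma = n / (d_r + root) if d_r + root > 0 else big   # continuity fallback when D_r = 0
        loglik = n * math.log(beta * gamma / (beta + gamma)) - beta * d_l - gamma * d_r
        if best is None or loglik > best[0]:
            best = (loglik, theta, beta, gamma)
    return best[1], best[2], best[3]        # theta, beta, gamma

The asymmetric Gaussian parameters can be estimated with the same sweep, substituting the sums of squared deviations Dl2 and Dr2 and the σ formulas of Section 3.3.2.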
The log-odds are defined to be log P (+ | d) P (− | d). The normal decision threshold (minimizing error) in terms of log-odds is at zero (i.e. P (+ | d) = P (− | d) = 0.5). Since it scales the outputs to a space [− oo, oo], the log-odds make normal (and similar distributions) applicable [19]. Lewis & Gale [17] give a more motivating viewpoint that fitting the log-odds is a dampening effect for the inaccurate independence assumption and a bias correction for inaccurate estimates of the priors. In general, fitting the log-odds can serve to boost or dampen the signal from the original classifier as the data dictate. Gaussians A Gaussian is fit to each of the class-conditional densities, using the usual maximum likelihood estimates. This method is denoted in the tables below as Gauss. Asymmetric Gaussians An asymmetric Gaussian is fit to each of the class-conditional densities using the maximum likelihood estimation procedure described above. Intervals between adjacent scores are divided by 10 in testing candidate θs, i.e. 8 points between actual scores occurring in the data set are tested. This method is denoted as A. Gauss. Laplace Distributions Even though Laplace distributions are not typically applied to this task, we also tried this method to isolate why benefit is gained from the asymmetric form. The usual MLEs were used for estimating the location and scale of a classical symmetric Laplace distribution as described in [14]. We denote this method as Laplace below. Asymmetric Laplace Distributions An asymmetric Laplace is fit to each of the class-conditional densities using the maximum likelihood estimation procedure described above. As with the asymmetric Gaussian, intervals between adjacent scores are divided by 10 in testing candidate θs. This method is denoted as A. Laplace below. Logistic Regression This method is the first of two methods we evaluated that directly fit the posterior, P (+ | s (d)). Both methods restrict the set of families to a two-parameter sigmoid family; they differ primarily in their model of class labels. As opposed to the above methods, one can argue that an additional boon of these methods is they completely preserve the ranking given by the classifier. When this is desired, these methods may be more appropriate. The previous methods will mostly preserve the rankings, but they can deviate if the data dictate it. Thus, they may model the data behavior better at the cost of departing from a monotonicity constraint in the output of the classifier. Lewis & Gale [17] use logistic regression to recalibrate na ¨ ıve Bayes for subsequent use in active learning. The model they use is: Logistic Regression with Noisy Class Labels Platt [22] proposes a framework that extends the logistic regression model above to incorporate noisy class labels and uses it to produce probability estimates from the raw output of an SVM. This model differs from the LogReg model only in how the parameters are estimated. The parameters are still fit using maximum likelihood estimation, but a model of noisy class labels is used in addition to allow for the possibility that the class was mislabeled. The noise is modeled by assuming there is a finite probability of mislabeling a positive example and of mislabeling a negative example; these two noise estimates are determined by the number of positive examples and the number of negative examples (using Bayes' rule to infer the probability of incorrect label). the score s (d). Instead of using this below, we will use the logodds ratio. 
This does not affect the model as it simply shifts all of the scores by a constant determined by the priors. We refer to this method as LogReg below. Even though the performance of this model would not be expected to deviate much from LogReg, we evaluate it for completeness. We refer to this method below as LR+N oise. 4.2 Data We examined several corpora, including the MSN Web Directory, Reuters, and TREC-AP. MSN Web Directory The MSN Web Directory is a large collection of heterogeneous web pages (from a May 1999 web snapshot) that have been hierarchically classified. We used the same train/test split of 50078/10024 documents as that reported in [9]. The MSN Web hierarchy is a seven-level hierarchy; we used all 13 of the top-level categories. The class proportions in the training set vary from 1.15% to 22.29%. In the testing set, they range from 1.14% to 21.54%. The classes are general subjects such as Health & Fitness and Travel & Vacation. Human indexers assigned the documents to zero or more categories. For the experiments below, we used only the top 1000 words with highest mutual information for each class; approximately 195K words appear in at least three training documents. Reuters The Reuters 21578 corpus [16] contains Reuters news articles from 1987. For this data set, we used the ModApte standard train / test split of 9603/3299 documents (8676 unused documents). The classes are economic subjects (e.g., "acq" for acquisitions, "earn" for earnings, etc.) that human taggers applied to the document; a document may have multiple subjects. There are actually 135 classes in this domain (only 90 of which occur in the training and testing set); however, we only examined the ten most frequent classes since small numbers of testing examples make interpreting some performance measures difficult due to high variance .1 Limiting to the ten largest classes allows us to compare our results to previously published results [10, 13, 21, 22]. The class proportions in the training set vary from 1.88% to 29.96%. In the testing set, they range from 1.7% to 32.95%. For the experiments below we used only the top 300 words with highest mutual information for each class; approximately 15K words appear in at least three training documents. TREC-AP The TREC-AP corpus is a collection of AP news stories from 1988 to 1990. We used the same train/test split of 142791/66992 documents that was used in [18]. As described in [17] (see also [15]), the categories are defined by keywords in a keyword field. The title and body fields are used in the experiments below. There are twenty categories in total. The class proportions in the training set vary from 0.06% to 2.03%. In the testing set, they range from 0.03% to 4.32%. For the experiments described below, we use only the top 1000 words with the highest mutual information for each class; approximately 123K words appear in at least 3 training documents. 4.3 Classifiers We selected two classifiers for evaluation. A linear SVM classifier which is a discriminative classifier that does not normally output probability values, and a na ¨ ıve Bayes classifier whose probability outputs are often poor [1, 7] but can be improved [1, 26, 27]. 1A separate comparison of only LogReg, LR+N oise, and A. Laplace over all 90 categories of Reuters was also conducted. After accounting for the variance, that evaluation also supported the claims made here. SVM For linear SVMs, we use the Smox toolkit which is based on Platt's Sequential Minimal Optimization algorithm. 
The features were represented as continuous values. We used the raw output score of the SVM as s (d) since it has been shown to be appropriate before [22]. The normal decision threshold (assuming we are seeking to minimize errors) for this classifier is at zero. Na ¨ ıve Bayes The na ¨ ıve Bayes classifier model is a multinomial model [21]. We smoothed word and class probabilities using a Bayesian estimate (with the word prior) and a Laplace m-estimate, respectively. We use the log-odds estimated by the classifier as s (d). The normal decision threshold is at zero. 4.4 Performance Measures We use log-loss [12] and squared error [4, 6] to evaluate the quality of the probability estimates. For a document d with class c (d) E {+, −} (i.e. the data have known labels and not probabilities), logloss is defined as δ (c (d), +) log P (+ | d) + δ (c (d), −) log P (− | d) where δ (a, b) =. 1 if a = b and 0 otherwise. The squared error is δ (c (d), +) (1 − P (+ | d)) 2 + δ (c (d), −) (1 − P (− | d)) 2. When the class of a document is correctly predicted with a probability of one, log-loss is zero and squared error is zero. When the class of a document is incorrectly predicted with a probability of one, log-loss is − oo and squared error is one. Thus, both measures assess how close an estimate comes to correctly predicting the item's class but vary in how harshly incorrect predictions are penalized. We report only the sum of these measures and omit the averages for space. Their averages, average log-loss and mean squared error (MSE), can be computed from these totals by dividing by the number of binary decisions in a corpus. In addition, we also compare the error of the classifiers at their default thresholds and with the probabilities. This evaluates how the probability estimates have improved with respect to the decision threshold P (+ | d) = 0.5. Thus, error only indicates how the methods would perform if a false positive was penalized the same as a false negative and not the general quality of the probability estimates. It is presented simply to provide the reader with a more complete understanding of the empirical tendencies of the methods. We use a a standard paired micro sign test [25] to determine statistical significance in the difference of all measures. Only pairs that the methods disagree on are used in the sign test. This test compares pairs of scores from two systems with the null hypothesis that the number of items they disagree on are binomially distributed. We use a significance level of p = 0.01. 4.5 Experimental Methodology As the categories under consideration in the experiments are not mutually exclusive, the classification was done by training n binary classifiers, where n is the number of classes. In order to generate the scores that each method uses to fit its probability estimates, we use five-fold cross-validation on the training data. We note that even though it is computationally efficient to perform leave-one-out cross-validation for the na ¨ ıve Bayes classifier, this may not be desirable since the distribution of scores can be skewed as a result. Of course, as with any application of n-fold cross-validation, it is also possible to bias the results by holding n too low and underestimating the performance of the final classifier. 4.6 Results & Discussion The results for recalibrating na ¨ ıve Bayes are given in Table 1a. Table 1b gives results for producing probabilistic outputs for SVMs. Table 1: (a) Results for na ¨ ıve Bayes (left) and (b) SVM (right). 
The best entry for a corpus is in bold. Entries that are statistically significantly better than all other entries are underlined. A † denotes the method is significantly better than all other methods except for na ¨ ıve Bayes. A ‡ denotes the entry is significantly better than all other methods except for A. Gauss (and na ¨ ıve Bayes for the table on the left). The reason for this distinction in significance tests is described in the text. We start with general observations that result from examining the performance of these methods over the various corpora. The first is that A. Laplace, LR+N oise, and LogReg, quite clearly outperform the other methods. There is usually little difference between the performance of LR+N oise and LogReg (both as shown here and on a decision by decision basis), but this is unsurprising since LR+N oise just adds noisy class labels to the LogReg model. With respect to the three different measures, LR+N oise and LogReg tend to perform slightly better (but never significantly) than A. Laplace at some tasks with respect to log-loss and squared error. However, A. Laplace always produces the least number of errors for all of the tasks, though at times the degree of improvement is not significant. In order to give the reader a better sense of the behavior of these methods, Figures 4-5 show the fits produced by the most competitive of these methods versus the actual data behavior (as estimated nonparametrically by binning) for class Earn in Reuters. Figure 4 shows the class-conditional densities, and thus only A. Laplace is shown since LogReg fits the posterior directly. Figure 5 shows the estimations of the log-odds, (i.e. log P <Earn | s <d)) P <¬ Earn | s <d))). Viewing the log-odds (rather than the posterior) usually enables errors in estimation to be detected by the eye more easily. We can break things down as the sign test does and just look at wins and losses on the items that the methods disagree on. Looked at in this way only two methods (na ¨ ıve Bayes and A. Gauss) ever have more pairwise wins than A. Laplace; those two sometimes have more pairwise wins on log-loss and squared error even though the total never wins (i.e. they are dragged down by heavy penalties). In addition, this comparison of pairwise wins means that for those cases where LogReg and LR+N oise have better scores than A. Laplace, it would not be deemed significant by the sign test at any level since they do not have more wins. For example, of the 130K binary decisions over the MSN Web dataset, A. Laplace had approximately 101K pairwise wins versus LogReg and LR+N oise. No method ever has more pairwise wins than A. Laplace for the error comparison nor does any method every achieve a better total. The basic observation made about na ¨ ıve Bayes in previous work is that it tends to produce estimates very close to zero and one [1, 17]. This means if it tends to be right enough of the time, it will produce results that do not appear significant in a sign test that ignores size of difference (as the one here). The totals of the squared error and log-loss bear out the previous observation that "when it's wrong it's really wrong". There are several interesting points about the performance of the asymmetric distributions as well. First, A. Gauss performs poorly because (similar to na ¨ ıve Bayes) there are some examples where it is penalized a large amount. This behavior results from a general tendency to perform like the picture shown in Figure 3 (note the crossover at the tails). 
While the asymmetric Gaussian tends to place the mode much more accurately than a symmetric Gaussian, its asymmetric flexibility combined with its distance function causes it to distribute too much mass to the outside tails while failing to fit around the mode accurately enough to compensate. Figure 3 is actually a result of fitting the two distributions to real data. As a result, at the tails there can be a large discrepancy between the likelihood of belonging to each class. Thus when there are no outliers A. Gauss can perform quite competitively, but when there is an Figure 4: The empirical distribution of classifier scores for documents in the training and the test set for class Earn in Reuters. Also shown is the fit of the asymmetric Laplace distribution to the training score distribution. The positive class (i.e. Earn) is the distribution on the right in each graph, and the negative class (i.e. ¬ Earn) is that on the left in each graph. Figure 5: The fit produced by various methods compared to the empirical log-odds of the training data for class Earn in Reuters. outlier A. Gauss is penalized quite heavily. There are enough such cases overall that it seems clearly inferior to the top three methods. However, the asymmetric Laplace places much more emphasis around the mode (Figure 4) because of the different distance function (think of the "sharp peak" of an exponential). As a result most of the mass stays centered around the mode, while the asymmetric parameters still allow more flexibility than the standard Laplace. Since the standard Laplace also corresponds to a piecewise fit in the log-odds space, this highlights that part of the power of the asymmetric methods is their sensitivity in placing the knots at the actual modes--rather than the symmetric assumption that the means correspond to the modes. Additionally, the asymmetric methods have greater flexibility in fitting the slopes of the line segments as well. Even in cases where the test distribution differs from the training distribution (Figure 4), A. Laplace still yields a solution that gives a better fit than LogReg (Figure 5), the next best competitor. Finally, we can make a few observations about the usefulness of the various performance metrics. First, log-loss only awards a finite amount of credit as the degree to which something is correct improves (i.e. there are diminishing returns as it approaches zero), but it can infinitely penalize for a wrong estimate. Thus, it is possible for one outlier to skew the totals, but misclassifying this example may not matter for any but a handful of actual utility functions used in practice. Secondly, squared error has a weakness in the other direction. That is, its penalty and reward are bounded in [0, 1], but if the number of errors is small enough, it is possible for a method to appear better when it is producing what we generally consider unhelpful probability estimates. For example, consider a method that only estimates probabilities as zero or one (which na ¨ ıve Bayes tends to but doesn't quite reach if you use smoothing). This method could win according to squared error, but with just one error it would never perform better on log-loss than any method that assigns some non-zero probability to each outcome. For these reasons, we recommend that neither of these are used in isolation as they each give slightly different insights to the quality of the estimates produced. These observations are straightforward from the definitions but are underscored by the evaluation. 5. 
FUTURE WORK A promising extension to the work presented here is a hybrid distribution of a Gaussian (on the outside slopes) and exponentials (on the inner slopes). From the empirical evidence presented in [22], the expectation is that such a distribution might allow more emphasis of the probability mass around the modes (as with the exponential) while still providing more accurate estimates toward the tails. Just as logistic regression allows the log-odds of the posterior distribution to be fit directly with a line, we could directly fit the log-odds of the posterior with a three-piece line (a spline) instead of indirectly doing the same thing by fitting the asymmetric Laplace. This approach may provide more power since it retains the asymmetry assumption but not the assumption that the class-conditional densities are from an asymmetric Laplace. Finally, extending these methods to the outputs of other discriminative classifiers is an open area. We are currently evaluating the appropriateness of these methods for the output of a voted perceptron [11]. By analogy to the log-odds, the operative score that appears promising is log weight perceptrons voting + weight perceptrons voting −. 6. SUMMARY AND CONCLUSIONS We have reviewed a wide variety of parametric methods for producing probability estimates from the raw scores of a discriminative classifier and for recalibrating an uncalibrated probabilistic classifier. In addition, we have introduced two new families that attempt to capitalize on the asymmetric behavior that tends to arise from learning a discrimination function. We have given an efficient way to estimate the parameters of these distributions. While these distributions attempt to strike a balance between the generalization power of parametric distributions and the flexibility that the added asymmetric parameters give, the asymmetric Gaussian appears to have too great of an emphasis away from the modes. In striking contrast, the asymmetric Laplace distribution appears to be preferable over several large text domains and a variety of performance measures to the primary competing parametric methods, though comparable performance is sometimes achieved with one of two varieties of logistic regression. Given the ease of estimating the parameters of this distribution, it is a good first choice for producing quality probability estimates.
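For reference, the two evaluation measures defined in Section 4.4 can be computed per binary decision as in the following sketch; the function names are our own, and the corpus totals reported in Table 1 are obtained by summing these values over all decisions.

import math

def log_loss(p_pos, is_pos):
    # Log-loss for one binary decision: log P(true class | d);
    # zero when the true class receives probability one, -inf when it receives zero.
    p = p_pos if is_pos else 1.0 - p_pos
    return math.log(p) if p > 0 else float("-inf")

def squared_error(p_pos, is_pos):
    # Squared error for one binary decision: (1 - P(true class | d))^2,
    # bounded in [0, 1].
    p = p_pos if is_pos else 1.0 - p_pos
    return (1.0 - p) ** 2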
Using Asymmetric Distributions to Improve Text Classifier Probability Estimates ABSTRACT Text classifiers that give probability estimates are more readily applicable in a variety of scenarios. For example, rather than choosing one set decision threshold, they can be used in a Bayesian risk model to issue a run-time decision which minimizes a userspecified cost function dynamically chosen at prediction time. However, the quality of the probability estimates is crucial. We review a variety of standard approaches to converting scores (and poor probability estimates) from text classifiers to high quality estimates and introduce new models motivated by the intuition that the empirical score distribution for the "extremely irrelevant", "hard to discriminate", and "obviously relevant" items are often significantly different. Finally, we analyze the experimental performance of these models over the outputs of two text classifiers. The analysis demonstrates that one of these models is theoretically attractive (introducing few new parameters while increasing flexibility), computationally efficient, and empirically preferable. 1. INTRODUCTION Text classifiers that give probability estimates are more flexible in practice than those that give only a simple classification or even a ranking. For example, rather than choosing one set decision threshold, they can be used in a Bayesian risk model [8] to issue a runtime decision which minimizes the expected cost of a user-specified cost function dynamically chosen at prediction time. This can be used to minimize a linear utility cost function for filtering tasks where pre-specified costs of relevant/irrelevant are not available during training but are specified at prediction time. Furthermore, the costs can be changed without retraining the model. Additionally, probability estimates are often used as the basis of deciding which document's label to request next during active learning [17, 23]. Effective active learning can be key in many information retrieval tasks where obtaining labeled data can be costly--severely reducing the amount of labeled data needed to reach the same performance as when new labels are requested randomly [17]. Finally, they are also amenable to making other types of cost-sensitive decisions [26] and for combining decisions [3]. However, in all of these tasks, the quality of the probability estimates is crucial. Parametric models generally use assumptions that the data conform to the model to trade-off flexibility with the ability to estimate the model parameters accurately with little training data. Since many text classification tasks often have very little training data, we focus on parametric methods. However, most of the existing parametric methods that have been applied to this task have an assumption we find undesirable. While some of these methods allow the distributions of the documents relevant and irrelevant to the topic to have different variances, they typically enforce the unnecessary constraint that the documents are symmetrically distributed around their respective modes. We introduce several asymmetric parametric models that allow us to relax this assumption without significantly increasing the number of parameters and demonstrate how we can efficiently fit the models. 
Additionally, these models can be interpreted as assuming the scores produced by the text classifier have three basic types of empirical behavior--one corresponding to each of the "extremely irrelevant", "hard to discriminate", and "obviously relevant" items. We first review related work on improving probability estimates and score modeling in information retrieval. Then, we discuss in further detail the need for asymmetric models. After this, we describe two specific asymmetric models and, using two standard text classifiers, na ¨ ıve Bayes and SVMs, demonstrate how they can be efficiently used to recalibrate poor probability estimates or produce high quality probability estimates from raw scores. We then review experiments using previously proposed methods and the asymmetric methods over several text classification corpora to demonstrate the strengths and weaknesses of the various methods. Finally, we summarize our contributions and discuss future directions. 2. RELATED WORK Parametric models have been employed to obtain probability estimates in several areas of information retrieval. Lewis & Gale [17] use logistic regression to recalibrate na ¨ ıve Bayes though the quality of the probability estimates are not directly evaluated; it is simply performed as an intermediate step in active learning. Manmatha et. al [20] introduced models appropriate to produce probability estimates from relevance scores returned from search engines and demonstrated how the resulting probability estimates could be subsequently employed to combine the outputs of several search engines. They use a different parametric distribution for the relevant and irrelevant classes, but do not pursue two-sided asymmetric distributions for a single class as described here. They also survey the long history of modeling the relevance scores of search engines. Our work is similar in flavor to these previous attempts to model search engine scores, but we target text classifier outputs which we have found demonstrate a different type of score distribution behavior because of the role of training data. Focus on improving probability estimates has been growing lately. Zadrozny & Elkan [26] provide a corrective measure for decision trees (termed curtailment) and a non-parametric method for recalibrating na ¨ ıve Bayes. In more recent work [27], they investigate using a semi-parametric method that uses a monotonic piecewiseconstant fit to the data and apply the method to na ¨ ıve Bayes and a linear SVM. While they compared their methods to other parametric methods based on symmetry, they fail to provide significance test results. Our work provides asymmetric parametric methods which complement the non-parametric and semi-parametric methods they propose when data scarcity is an issue. In addition, their methods reduce the resolution of the scores output by the classifier (the number of distinct values output), but the methods here do not have such a weakness since they are continuous functions. There is a variety of other work that this paper extends. Platt [22] uses a logistic regression framework that models noisy class labels to produce probabilities from the raw output of an SVM. His work showed that this post-processing method not only can produce probability estimates of similar quality to SVMs directly trained to produce probabilities (regularized likelihood kernel methods), but it also tends to produce sparser kernels (which generalize better). 
Finally, Bennett [1] obtained moderate gains by applying Platt's method to the recalibration of na ¨ ıve Bayes but found there were more problematic areas than when it was applied to SVMs. Recalibrating poorly calibrated classifiers is not a new problem. Lindley et. al [19] first proposed the idea of recalibrating classifiers, and DeGroot & Fienberg [5, 6] gave the now accepted standard formalization for the problem of assessing calibration initiated by others [4, 24]. 3. PROBLEM DEFINITION & APPROACH 3.1 Problem Definition 3.2 Motivation for Asymmetric Distributions 3.3 Estimating the Parameters of the Asymmetric Distributions 3.3.1 Asymmetric Laplace MLEs 3.3.2 Asymmetric Gaussian MLEs 4. EXPERIMENTAL ANALYSIS 4.1 Methods Gaussians Asymmetric Gaussians Laplace Distributions Asymmetric Laplace Distributions Logistic Regression Logistic Regression with Noisy Class Labels 4.2 Data MSN Web Directory Reuters 4.3 Classifiers Na ¨ ıve Bayes 4.4 Performance Measures 4.5 Experimental Methodology 4.6 Results & Discussion 5. FUTURE WORK 6. SUMMARY AND CONCLUSIONS We have reviewed a wide variety of parametric methods for producing probability estimates from the raw scores of a discriminative classifier and for recalibrating an uncalibrated probabilistic classifier. In addition, we have introduced two new families that attempt to capitalize on the asymmetric behavior that tends to arise from learning a discrimination function. We have given an efficient way to estimate the parameters of these distributions. While these distributions attempt to strike a balance between the generalization power of parametric distributions and the flexibility that the added asymmetric parameters give, the asymmetric Gaussian appears to have too great of an emphasis away from the modes. In striking contrast, the asymmetric Laplace distribution appears to be preferable over several large text domains and a variety of performance measures to the primary competing parametric methods, though comparable performance is sometimes achieved with one of two varieties of logistic regression. Given the ease of estimating the parameters of this distribution, it is a good first choice for producing quality probability estimates.
Using Asymmetric Distributions to Improve Text Classifier Probability Estimates ABSTRACT Text classifiers that give probability estimates are more readily applicable in a variety of scenarios. For example, rather than choosing one set decision threshold, they can be used in a Bayesian risk model to issue a run-time decision which minimizes a userspecified cost function dynamically chosen at prediction time. However, the quality of the probability estimates is crucial. We review a variety of standard approaches to converting scores (and poor probability estimates) from text classifiers to high quality estimates and introduce new models motivated by the intuition that the empirical score distribution for the "extremely irrelevant", "hard to discriminate", and "obviously relevant" items are often significantly different. Finally, we analyze the experimental performance of these models over the outputs of two text classifiers. The analysis demonstrates that one of these models is theoretically attractive (introducing few new parameters while increasing flexibility), computationally efficient, and empirically preferable. 1. INTRODUCTION Text classifiers that give probability estimates are more flexible in practice than those that give only a simple classification or even a ranking. For example, rather than choosing one set decision threshold, they can be used in a Bayesian risk model [8] to issue a runtime decision which minimizes the expected cost of a user-specified cost function dynamically chosen at prediction time. This can be used to minimize a linear utility cost function for filtering tasks where pre-specified costs of relevant/irrelevant are not available during training but are specified at prediction time. Furthermore, the costs can be changed without retraining the model. Additionally, probability estimates are often used as the basis of deciding which document's label to request next during active learning [17, 23]. However, in all of these tasks, the quality of the probability estimates is crucial. Parametric models generally use assumptions that the data conform to the model to trade-off flexibility with the ability to estimate the model parameters accurately with little training data. Since many text classification tasks often have very little training data, we focus on parametric methods. However, most of the existing parametric methods that have been applied to this task have an assumption we find undesirable. We introduce several asymmetric parametric models that allow us to relax this assumption without significantly increasing the number of parameters and demonstrate how we can efficiently fit the models. We first review related work on improving probability estimates and score modeling in information retrieval. Then, we discuss in further detail the need for asymmetric models. After this, we describe two specific asymmetric models and, using two standard text classifiers, na ¨ ıve Bayes and SVMs, demonstrate how they can be efficiently used to recalibrate poor probability estimates or produce high quality probability estimates from raw scores. We then review experiments using previously proposed methods and the asymmetric methods over several text classification corpora to demonstrate the strengths and weaknesses of the various methods. Finally, we summarize our contributions and discuss future directions. 2. RELATED WORK Parametric models have been employed to obtain probability estimates in several areas of information retrieval. 
Lewis & Gale [17] use logistic regression to recalibrate na ¨ ıve Bayes though the quality of the probability estimates are not directly evaluated; it is simply performed as an intermediate step in active learning. Manmatha et. al [20] introduced models appropriate to produce probability estimates from relevance scores returned from search engines and demonstrated how the resulting probability estimates could be subsequently employed to combine the outputs of several search engines. They use a different parametric distribution for the relevant and irrelevant classes, but do not pursue two-sided asymmetric distributions for a single class as described here. They also survey the long history of modeling the relevance scores of search engines. Our work is similar in flavor to these previous attempts to model search engine scores, but we target text classifier outputs which we have found demonstrate a different type of score distribution behavior because of the role of training data. Focus on improving probability estimates has been growing lately. Zadrozny & Elkan [26] provide a corrective measure for decision trees (termed curtailment) and a non-parametric method for recalibrating na ¨ ıve Bayes. In more recent work [27], they investigate using a semi-parametric method that uses a monotonic piecewiseconstant fit to the data and apply the method to na ¨ ıve Bayes and a linear SVM. While they compared their methods to other parametric methods based on symmetry, they fail to provide significance test results. Our work provides asymmetric parametric methods which complement the non-parametric and semi-parametric methods they propose when data scarcity is an issue. In addition, their methods reduce the resolution of the scores output by the classifier (the number of distinct values output), but the methods here do not have such a weakness since they are continuous functions. There is a variety of other work that this paper extends. Platt [22] uses a logistic regression framework that models noisy class labels to produce probabilities from the raw output of an SVM. His work showed that this post-processing method not only can produce probability estimates of similar quality to SVMs directly trained to produce probabilities (regularized likelihood kernel methods), but it also tends to produce sparser kernels (which generalize better). Recalibrating poorly calibrated classifiers is not a new problem. Lindley et. 6. SUMMARY AND CONCLUSIONS We have reviewed a wide variety of parametric methods for producing probability estimates from the raw scores of a discriminative classifier and for recalibrating an uncalibrated probabilistic classifier. In addition, we have introduced two new families that attempt to capitalize on the asymmetric behavior that tends to arise from learning a discrimination function. We have given an efficient way to estimate the parameters of these distributions. Given the ease of estimating the parameters of this distribution, it is a good first choice for producing quality probability estimates.
H-73
Unified Utility Maximization Framework for Resource Selection
This paper presents a unified utility framework for resource selection of distributed text information retrieval. This new framework shows an efficient and effective way to infer the probabilities of relevance of all the documents across the text databases. With the estimated relevance information, resource selection can be made by explicitly optimizing the goals of different applications. Specifically, when used for database recommendation, the selection is optimized for the goal of high-recall (include as many relevant documents as possible in the selected databases); when used for distributed document retrieval, the selection targets the high-precision goal (high precision in the final merged list of documents). This new model provides a more solid framework for distributed information retrieval. Empirical studies show that it is at least as effective as other state-of-the-art algorithms.
[ "resourc select", "databas recommend", "distribut document retriev", "distribut inform retriev", "distribut text inform retriev resourc select", "feder search", "hidden web content", "resourc represent", "retriev and result merg", "logist transform model", "semi-supervis learn", "unifi util maxim model" ]
[ "P", "P", "P", "P", "R", "U", "U", "M", "M", "M", "U", "R" ]
Unified Utility Maximization Framework for Resource Selection Luo Si Language Technology Inst. School of Compute Science Carnegie Mellon University Pittsburgh, PA 15213 lsi@cs.cmu.edu Jamie Callan Language Technology Inst. School of Compute Science Carnegie Mellon University Pittsburgh, PA 15213 callan@cs.cmu.edu ABSTRACT This paper presents a unified utility framework for resource selection of distributed text information retrieval. This new framework shows an efficient and effective way to infer the probabilities of relevance of all the documents across the text databases. With the estimated relevance information, resource selection can be made by explicitly optimizing the goals of different applications. Specifically, when used for database recommendation, the selection is optimized for the goal of highrecall (include as many relevant documents as possible in the selected databases); when used for distributed document retrieval, the selection targets the high-precision goal (high precision in the final merged list of documents). This new model provides a more solid framework for distributed information retrieval. Empirical studies show that it is at least as effective as other state-of-the-art algorithms. Categories and Subject Descriptors H.3.3 [Information Search and Retrieval]: General Terms Algorithms 1. INTRODUCTION Conventional search engines such as Google or AltaVista use ad-hoc information retrieval solution by assuming all the searchable documents can be copied into a single centralized database for the purpose of indexing. Distributed information retrieval, also known as federated search [1,4,7,11,14,22] is different from ad-hoc information retrieval as it addresses the cases when documents cannot be acquired and stored in a single database. For example, Hidden Web contents (also called invisible or deep Web contents) are information on the Web that cannot be accessed by the conventional search engines. Hidden web contents have been estimated to be 2-50 [19] times larger than the contents that can be searched by conventional search engines. Therefore, it is very important to search this type of valuable information. The architecture of distributed search solution is highly influenced by different environmental characteristics. In a small local area network such as small company environments, the information providers may cooperate to provide corpus statistics or use the same type of search engines. Early distributed information retrieval research focused on this type of cooperative environments [1,8]. On the other side, in a wide area network such as very large corporate environments or on the Web there are many types of search engines and it is difficult to assume that all the information providers can cooperate as they are required. Even if they are willing to cooperate in these environments, it may be hard to enforce a single solution for all the information providers or to detect whether information sources provide the correct information as they are required. Many applications fall into the latter type of uncooperative environments such as the Mind project [16] which integrates non-cooperating digital libraries or the QProber system [9] which supports browsing and searching of uncooperative hidden Web databases. In this paper, we focus mainly on uncooperative environments that contain multiple types of independent search engines. There are three important sub-problems in distributed information retrieval. 
First, information about the contents of each individual database must be acquired (resource representation) [1,8,21]. Second, given a query, a set of resources must be selected to do the search (resource selection) [5,7,21]. Third, the results retrieved from all the selected resources have to be merged into a single final list before it can be presented to the end user (retrieval and results merging) [1,5,20,22]. Many types of solutions exist for distributed information retrieval. Invisible-web. net1 provides guided browsing of hidden Web databases by collecting the resource descriptions of these databases and building hierarchies of classes that group them by similar topics. A database recommendation system goes a step further than a browsing system like Invisible-web. net by recommending most relevant information sources to users'' queries. It is composed of the resource description and the resource selection components. This solution is useful when the users want to browse the selected databases by themselves instead of asking the system to retrieve relevant documents automatically. Distributed document retrieval is a more sophisticated task. It selects relevant information sources for users'' queries as the database recommendation system does. Furthermore, users'' queries are forwarded to the corresponding selected databases and the returned individual ranked lists are merged into a single list to present to the users. The goal of a database recommendation system is to select a small set of resources that contain as many relevant documents as possible, which we call a high-recall goal. On the other side, the effectiveness of distributed document retrieval is often measured by the Precision of the final merged document result list, which we call a high-precision goal. Prior research indicated that these two goals are related but not identical [4,21]. However, most previous solutions simply use effective resource selection algorithm of database recommendation system for distributed document retrieval system or solve the inconsistency with heuristic methods [1,4,21]. This paper presents a unified utility maximization framework to integrate the resource selection problem of both database recommendation and distributed document retrieval together by treating them as different optimization goals. First, a centralized sample database is built by randomly sampling a small amount of documents from each database with query-based sampling [1]; database size statistics are also estimated [21]. A logistic transformation model is learned off line with a small amount of training queries to map the centralized document scores in the centralized sample database to the corresponding probabilities of relevance. Second, after a new query is submitted, the query can be used to search the centralized sample database which produces a score for each sampled document. The probability of relevance for each document in the centralized sample database can be estimated by applying the logistic model to each document``s score. Then, the probabilities of relevance of all the (mostly unseen) documents among the available databases can be estimated using the probabilities of relevance of the documents in the centralized sample database and the database size estimates. For the task of resource selection for a database recommendation system, the databases can be ranked by the expected number of relevant documents to meet the high-recall goal. 
For resource selection for a distributed document retrieval system, databases containing a small number of documents with large probabilities of relevance are favored over databases containing many documents with small probabilities of relevance. This selection criterion meets the high-precision goal of distributed document retrieval application. Furthermore, the Semi-supervised learning (SSL) [20,22] algorithm is applied to merge the returned documents into a final ranked list. The unified utility framework makes very few assumptions and works in uncooperative environments. Two key features make it a more solid model for distributed information retrieval: i) It formalizes the resource selection problems of different applications as various utility functions, and optimizes the utility functions to achieve the optimal results accordingly; and ii) It shows an effective and efficient way to estimate the probabilities of relevance of all documents across databases. Specifically, the framework builds logistic models on the centralized sample database to transform centralized retrieval scores to the corresponding probabilities of relevance and uses the centralized sample database as the bridge between individual databases and the logistic model. The human effort (relevance judgment) required to train the single centralized logistic model does not scale with the number of databases. This is a large advantage over previous research, which required the amount of human effort to be linear with the number of databases [7,15]. The unified utility framework is not only more theoretically solid but also very effective. Empirical studies show the new model to be at least as accurate as the state-of-the-art algorithms in a variety of configurations. The next section discusses related work. Section 3 describes the new unified utility maximization model. Section 4 explains our experimental methodology. Sections 5 and 6 present our experimental results for resource selection and document retrieval. Section 7 concludes. 2. PRIOR RESEARCH There has been considerable research on all the sub-problems of distributed information retrieval. We survey the most related work in this section. The first problem of distributed information retrieval is resource representation. The STARTS protocol is one solution for acquiring resource descriptions in cooperative environments [8]. However, in uncooperative environments, even the databases are willing to share their information, it is not easy to judge whether the information they provide is accurate or not. Furthermore, it is not easy to coordinate the databases to provide resource representations that are compatible with each other. Thus, in uncooperative environments, one common choice is query-based sampling, which randomly generates and sends queries to individual search engines and retrieves some documents to build the descriptions. As the sampled documents are selected by random queries, query-based sampling is not easily fooled by any adversarial spammer that is interested to attract more traffic. Experiments have shown that rather accurate resource descriptions can be built by sending about 80 queries and downloading about 300 documents [1]. Many resource selection algorithms such as gGlOSS/vGlOSS [8] and CORI [1] have been proposed in the last decade. The CORI algorithm represents each database by its terms, the document frequencies and a small number of corpus statistics (details in [1]). 
As prior research on different datasets has shown the CORI algorithm to be the most stable and effective of the three algorithms [1,17,18], we use it as a baseline algorithm in this work. The relevant document distribution estimation (ReDDE [21]) resource selection algorithm is a recent algorithm that tries to estimate the distribution of relevant documents across the available databases and ranks the databases accordingly. Although the ReDDE algorithm has been shown to be effective, it relies on heuristic constants that are set empirically [21]. The last step of the document retrieval sub-problem is results merging, which is the process of transforming database-specific 33 document scores into comparable database-independent document scores. The semi supervised learning (SSL) [20,22] result merging algorithm uses the documents acquired by querybased sampling as training data and linear regression to learn the database-specific, query-specific merging models. These linear models are used to convert the database-specific document scores into the approximated centralized document scores. The SSL algorithm has been shown to be effective [22]. It serves as an important component of our unified utility maximization framework (Section 3). In order to achieve accurate document retrieval results, many previous methods simply use resource selection algorithms that are effective of database recommendation system. But as pointed out above, a good resource selection algorithm optimized for high-recall may not work well for document retrieval, which targets the high-precision goal. This type of inconsistency has been observed in previous research [4,21]. The research in [21] tried to solve the problem with a heuristic method. The research most similar to what we propose here is the decision-theoretic framework (DTF) [7,15]. This framework computes a selection that minimizes the overall costs (e.g., retrieval quality, time) of document retrieval system and several methods [15] have been proposed to estimate the retrieval quality. However, two points distinguish our research from the DTF model. First, the DTF is a framework designed specifically for document retrieval, but our new model integrates two distinct applications with different requirements (database recommendation and distributed document retrieval) into the same unified framework. Second, the DTF builds a model for each database to calculate the probabilities of relevance. This requires human relevance judgments for the results retrieved from each database. In contrast, our approach only builds one logistic model for the centralized sample database. The centralized sample database can serve as a bridge to connect the individual databases with the centralized logistic model, thus the probabilities of relevance of documents in different databases can be estimated. This strategy can save large amount of human judgment effort and is a big advantage of the unified utility maximization framework over the DTF especially when there are a large number of databases. 3. UNIFIED UTILITY MAXIMIZATION FRAMEWORK The Unified Utility Maximization (UUM) framework is based on estimating the probabilities of relevance of the (mostly unseen) documents available in the distributed search environment. In this section we describe how the probabilities of relevance are estimated and how they are used by the Unified Utility Maximization model. 
We also describe how the model can be optimized for the high-recall goal of a database recommendation system and the high-precision goal of a distributed document retrieval system.

3.1 Estimating Probabilities of Relevance

As pointed out above, the purpose of resource selection is high-recall and the purpose of document retrieval is high-precision. In order to meet these diverse goals, the key issue is to estimate the probabilities of relevance of the documents in the various databases. This is a difficult problem because we can only observe a sample of the contents of each database using query-based sampling. Our strategy is to make full use of all the available information to calculate the probability estimates.

3.1.1 Learning Probabilities of Relevance

In the resource description step, the centralized sample database is built by query-based sampling and the database sizes are estimated using the sample-resample method [21]. At the same time, an effective retrieval algorithm (Inquery [2]) is applied on the centralized sample database with a small number (e.g., 50) of training queries. For each training query, the CORI resource selection algorithm [1] is applied to select some number (e.g., 10) of databases and retrieve 50 document ids from each database. The SSL results merging algorithm [20,22] is used to merge the results. Then, we can download the top 50 documents in the final merged list and calculate their corresponding centralized scores using Inquery and the corpus statistics of the centralized sample database. The centralized scores are further normalized (divided by the maximum centralized score for each query), as this method has been suggested to improve estimation accuracy in previous research [15]. Human judgment is acquired for those documents and a logistic model is built to transform the normalized centralized document scores into probabilities of relevance as follows:

R(d) = P(rel \mid d) = \frac{\exp(a_c + b_c \bar{S}_c(d))}{1 + \exp(a_c + b_c \bar{S}_c(d))}    (1)

where \bar{S}_c(d) is the normalized centralized document score and a_c and b_c are the two parameters of the logistic model. These two parameters are estimated by maximizing the probabilities of relevance of the training queries. The logistic model provides us with the tool to calculate the probabilities of relevance from centralized document scores.

3.1.2 Estimating Centralized Document Scores

When the user submits a new query, the centralized document scores of the documents in the centralized sample database are calculated. However, in order to calculate the probabilities of relevance, we need to estimate centralized document scores for all documents across the databases instead of only the sampled documents. This goal is accomplished using the centralized scores of the documents in the centralized sample database and the database size statistics. We define the database scale factor for the ith database as the ratio of the estimated database size to the number of documents sampled from this database:

SF_{db_i} = \frac{\hat{N}_{db_i}}{N_{db_i\_samp}}    (2)

where \hat{N}_{db_i} is the estimated database size and N_{db_i\_samp} is the number of documents from the ith database in the centralized sample database. The intuition behind the database scale factor is that, for a database whose scale factor is 50, if one document from this database in the centralized sample database has a centralized document score of 0.5, we may guess that there are about 50 documents in that database with scores of about 0.5.
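To make Equations 1 and 2 concrete, the following is a minimal Python sketch of the two estimators. It is illustrative only: the function and variable names are ours, and the logistic parameters a_c and b_c are assumed to have been fit on the training queries as described above.

```python
import math

def prob_relevance(norm_score, a_c, b_c):
    """Equation 1: logistic transform of a normalized centralized
    retrieval score into an estimated probability of relevance."""
    z = a_c + b_c * norm_score
    return math.exp(z) / (1.0 + math.exp(z))

def db_scale_factor(est_db_size, num_sampled):
    """Equation 2: ratio of the estimated database size to the number
    of documents sampled from that database."""
    return est_db_size / float(num_sampled)

# Example: a database estimated to hold 15,000 documents, of which 300
# were sampled, has a scale factor of 50.
# print(db_scale_factor(15000, 300), prob_relevance(0.8, -2.0, 4.0))
```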
Actually, we can apply a finer non-parametric linear interpolation method to estimate the centralized document score curve for each database. Formally, we rank all the sampled documents from the ith database by their centralized document scores to get the sampled centralized document score list {S_c(ds_{i1}), S_c(ds_{i2}), S_c(ds_{i3}), ...} for the ith database. We assume that if we could calculate the centralized document scores for all the documents in this database and get the complete centralized document score list, the top document in the sampled list would have rank SF_{db_i}/2, the second document in the sampled list would have rank 3 SF_{db_i}/2, and so on. Therefore, the data points of the sampled documents in the complete list are: {(SF_{db_i}/2, S_c(ds_{i1})), (3 SF_{db_i}/2, S_c(ds_{i2})), (5 SF_{db_i}/2, S_c(ds_{i3})), ...}. Piecewise linear interpolation is applied to estimate the centralized document score curve, as illustrated in Figure 1. The complete centralized document score list can be estimated by calculating the values of the different ranks on the centralized document score curve as: \hat{S}_c(d_{ij}), j \in [1, \hat{N}_{db_i}].

It can be seen from Figure 1 that more sample data points produce more accurate estimates of the centralized document score curves. However, for databases with large database scale ratios, this kind of linear interpolation may be rather inaccurate, especially for the top ranked (e.g., [1, SF_{db_i}/2]) documents. Therefore, an alternative solution is proposed to estimate the centralized document scores of the top ranked documents for databases with large scale ratios (e.g., larger than 100). Specifically, a logistic model is built for each of these databases. The logistic model is used to estimate the centralized document score of the top document in the corresponding database by using the two sampled documents from that database with the highest centralized scores:

\hat{S}_c(d_{i1}) = \frac{\exp(\alpha_{i0} + \alpha_{i1} S_c(ds_{i1}) + \alpha_{i2} S_c(ds_{i2}))}{1 + \exp(\alpha_{i0} + \alpha_{i1} S_c(ds_{i1}) + \alpha_{i2} S_c(ds_{i2}))}    (3)

\alpha_{i0}, \alpha_{i1} and \alpha_{i2} are the parameters of the logistic model. For each training query, the top retrieved document of each database is downloaded and the corresponding centralized document score is calculated. Together with the scores of the top two sampled documents, these parameters can be estimated. After the centralized score of the top document is estimated, an exponential function is fitted for the top part ([1, SF_{db_i}/2]) of the centralized document score curve as:

\hat{S}_c(d_{ij}) = \exp(\beta_{i0} + \beta_{i1} j),  j \in [1, SF_{db_i}/2]    (4)

\beta_{i0} = \log(\hat{S}_c(d_{i1})) - \beta_{i1}    (5)

\beta_{i1} = \frac{\log(S_c(ds_{i1})) - \log(\hat{S}_c(d_{i1}))}{SF_{db_i}/2 - 1}    (6)

The two parameters \beta_{i0} and \beta_{i1} are fitted to make sure the exponential function passes through the two points (1, \hat{S}_c(d_{i1})) and (SF_{db_i}/2, S_c(ds_{i1})). The exponential function is only used to adjust the top part of the centralized document score curve; the lower part of the curve is still fitted with the linear interpolation method described above. The adjustment of the top ranked documents by fitting an exponential function has been shown empirically to produce more accurate results. From the centralized document score curves, we can estimate the complete centralized document score lists for all the available databases. After the estimated centralized document scores are normalized, the complete lists of probabilities of relevance can be constructed from the complete centralized document score lists by Equation 1. Formally, for the ith database, the complete list of probabilities of relevance is: \hat{R}(d_{ij}), j \in [1, \hat{N}_{db_i}].
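As a rough illustration of the curve estimation in this subsection, here is a small Python sketch of the piecewise linear interpolation; the names are ours, and for brevity it omits the logistic/exponential adjustment of Equations 3-6 for the top of the curve, simply holding the end values constant beyond the sampled ranks.

```python
import numpy as np

def estimate_score_curve(sampled_scores, scale_factor, est_db_size):
    """Piecewise-linear estimate of the complete centralized score curve
    for one database (Section 3.1.2, Figure 1).

    sampled_scores: centralized scores of the sampled documents, sorted
    in descending order.  The j-th sampled document (j = 1, 2, ...) is
    assumed to sit at rank (2j - 1) * scale_factor / 2 of the complete
    ranking.  Returns one estimated score per rank 1..est_db_size.
    """
    ranks = [(2 * j - 1) * scale_factor / 2.0
             for j in range(1, len(sampled_scores) + 1)]
    all_ranks = np.arange(1, est_db_size + 1)
    # np.interp holds the end values constant outside [ranks[0], ranks[-1]];
    # the paper instead fits an exponential to the top part of the curve
    # for databases with large scale factors (Equations 3-6).
    return np.interp(all_ranks, ranks, sampled_scores)
```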
Figure 1. Linear interpolation construction of the complete centralized document score list (database scale factor is 50).

3.2 The Unified Utility Maximization Model

In this section, we formally define the new unified utility maximization model, which optimizes the resource selection problems for the two goals of high-recall (database recommendation) and high-precision (distributed document retrieval) in the same framework.

In the task of database recommendation, the system needs to decide how to rank databases. In the task of document retrieval, the system not only needs to select the databases but also needs to decide how many documents to retrieve from each selected database. We generalize the database recommendation selection process, which implicitly recommends all documents in every selected database, as a special case of the selection decision for the document retrieval task. Formally, we denote d_i as the number of documents we would like to retrieve from the ith database and d = {d_1, d_2, ...} as a selection action for all the databases.

The database selection decision is made based on the complete lists of probabilities of relevance for all the databases. The complete lists of probabilities of relevance are inferred from all the available information, specifically R_s, which stands for the resource descriptions acquired by query-based sampling and the database size estimates acquired by sample-resample, and S_c, which stands for the centralized document scores of the documents in the centralized sample database. If the method of estimating centralized document scores and probabilities of relevance in Section 3.1 is acceptable, then the most probable complete lists of probabilities of relevance can be derived, and we denote them as:

\theta^* = \{ (\hat{R}(d_{1j}), j \in [1, \hat{N}_{db_1}]), (\hat{R}(d_{2j}), j \in [1, \hat{N}_{db_2}]), ... \}

The random vector \theta denotes an arbitrary set of complete lists of probabilities of relevance, and P(\theta \mid R_s, S_c) denotes the probability of generating this set of lists. Finally, to each selection action d and each set of complete lists of probabilities of relevance \theta, we associate a utility function U(\theta, d), which indicates the benefit from making the selection d when the true complete lists of probabilities of relevance are \theta. Therefore, the selection decision defined by the Bayesian framework is:

d^* = \arg\max_d \int U(\theta, d) P(\theta \mid R_s, S_c) \, d\theta    (7)

One common approach to simplify the computation in the Bayesian framework is to only calculate the utility function at the most probable parameter values instead of calculating the whole expectation. In other words, we only need to calculate U(\theta^*, d), and Equation 7 is simplified as follows:

d^* = \arg\max_d U(\theta^*, d)    (8)

This equation serves as the basic model for both the database recommendation system and the document retrieval system.

3.3 Resource Selection for High-Recall

High-recall is the goal of the resource selection algorithm in federated search tasks such as database recommendation. The goal is to select a small set of resources (e.g., less than N_{sdb} databases) that contain as many relevant documents as possible, which can be formally defined as:

U(\theta^*, d) = \sum_i I(d_i) \sum_{j=1}^{\hat{N}_{db_i}} \hat{R}(d_{ij})    (9)

I(d_i) is the indicator function, which is 1 when the ith database is selected and 0 otherwise. Plugging this equation into the basic model in Equation 8 and adding the constraint on the number of selected databases gives the following:

d^* = \arg\max_d \sum_i I(d_i) \sum_{j=1}^{\hat{N}_{db_i}} \hat{R}(d_{ij}),  subject to \sum_i I(d_i) = N_{sdb}    (10)

The solution of this optimization problem is very simple.
We can calculate the expected number of relevant documents for each database as follows:

\hat{N}^R_{d_i} = \sum_{j=1}^{\hat{N}_{db_i}} \hat{R}(d_{ij})    (11)

The N_{sdb} databases with the largest expected numbers of relevant documents can be selected to meet the high-recall goal. We call this the UUM/HR algorithm (Unified Utility Maximization for High-Recall).

3.4 Resource Selection for High-Precision

High-precision is the goal of the resource selection algorithm in federated search tasks such as distributed document retrieval. It is measured by the Precision at the top part of the final merged document list. This high-precision criterion is realized by the following utility function, which measures the Precision of retrieved documents from the selected databases:

U(\theta^*, d) = \sum_i I(d_i) \sum_{j=1}^{d_i} \hat{R}(d_{ij})    (12)

Note that the key difference between Equation 12 and Equation 9 is that Equation 9 sums up the probabilities of relevance of all the documents in a database, while Equation 12 only considers a much smaller part of the ranking. Specifically, we can calculate the optimal selection decision by:

d^* = \arg\max_d \sum_i I(d_i) \sum_{j=1}^{d_i} \hat{R}(d_{ij})    (13)

Different kinds of constraints caused by different characteristics of the document retrieval tasks can be associated with the above optimization problem. The most common one is to select a fixed number (N_{sdb}) of databases and retrieve a fixed number (N_{rdoc}) of documents from each selected database, formally defined as:

d^* = \arg\max_d \sum_i I(d_i) \sum_{j=1}^{d_i} \hat{R}(d_{ij}),  subject to \sum_i I(d_i) = N_{sdb};  d_i = N_{rdoc} if d_i \neq 0    (14)

This optimization problem can be solved easily by calculating the expected number of relevant documents in the top part of each database's complete list of probabilities of relevance:

\hat{N}^R_{Top\_d_i} = \sum_{j=1}^{N_{rdoc}} \hat{R}(d_{ij})    (15)

Then the databases can be ranked by these values and selected. We call this the UUM/HP-FL algorithm (Unified Utility Maximization for High-Precision with Fixed Length document rankings from each selected database).

A more complex situation is to vary the number of retrieved documents from each selected database. More specifically, we allow different selected databases to return different numbers of documents. For simplification, the result list lengths are required to be multiples of a baseline number, 10. (This value can also be varied, but for simplification it is set to 10 in this paper.) This restriction is set to simulate the behavior of commercial search engines on the Web. (Search engines such as Google and AltaVista return only 10 or 20 document ids per result page.) This restriction also saves computation time when calculating the optimal database selection, by allowing the step size of the dynamic programming to be 10 instead of 1 (more detail is discussed below). For further simplification, we restrict the selection to at most 100 documents from each database (d_i <= 100). Then, the selection optimization problem is formalized as follows:

d^* = \arg\max_d \sum_i I(d_i) \sum_{j=1}^{d_i} \hat{R}(d_{ij}),  subject to \sum_i I(d_i) = N_{sdb};  \sum_i d_i = N_{Total\_rdoc};  d_i = 10k, k \in \{0, 1, 2, ..., 10\}    (16)

N_{Total\_rdoc} is the total number of documents to be retrieved. Unfortunately, there is no simple solution for this optimization problem as there is for Equations 10 and 14. However, a dynamic programming algorithm can be applied to calculate the optimal solution. The basic steps of this dynamic programming method are described in Figure 2. As this algorithm allows retrieving result lists of varying lengths from each selected database, it is called the UUM/HP-VL algorithm.
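Both Equations 11 and 15 reduce database selection to ranking databases by a sum over an estimated probability-of-relevance list. The following minimal Python sketch (our own illustration; the function names are hypothetical) shows the shared selection rule behind UUM/HR and UUM/HP-FL.

```python
def rank_databases(prob_lists, n_sdb, n_rdoc=None):
    """Rank databases by their expected number of relevant documents.

    prob_lists: {db_id: descending list of estimated probabilities of
    relevance for that database}.  With n_rdoc=None the whole list is
    summed (UUM/HR, Equation 11); with an integer n_rdoc only the top
    n_rdoc entries are summed (UUM/HP-FL, Equation 15).  Returns the
    n_sdb database ids with the largest expected counts.
    """
    def expected_relevant(probs):
        return sum(probs if n_rdoc is None else probs[:n_rdoc])

    ranked = sorted(prob_lists,
                    key=lambda db: expected_relevant(prob_lists[db]),
                    reverse=True)
    return ranked[:n_sdb]
```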
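For Equation 16, a rough Python transcription of the dynamic program outlined in Figure 2 (shown below) may help make the recursion concrete. It is a sketch under the assumptions stated in the docstring, not the original implementation; the variable names are ours.

```python
def uum_hp_vl(prob_lists, n_sdb, n_total_rdoc):
    """Sketch of the UUM/HP-VL dynamic program (Figure 2, Equation 16).

    prob_lists: one descending list of estimated probabilities of
    relevance per database.  Documents are allocated in blocks of 10,
    at most 100 (10 blocks) per database; n_total_rdoc is assumed to be
    a multiple of 10.  Returns (best utility, documents per database).
    """
    n_db = len(prob_lists)
    budget = n_total_rdoc // 10          # number of 10-document blocks
    NEG = float("-inf")

    # gain[i][k]: utility of retrieving the top 10*k documents of db i
    gain = [[sum(p[:10 * k]) for k in range(11)] for p in prob_lists]

    # sel[y][z]: best utility using y blocks spread over exactly z databases
    sel = [[NEG] * (n_sdb + 1) for _ in range(budget + 1)]
    sel[0][0] = 0.0
    choice = {}                          # (i, y, z) -> blocks taken from db i
    for i in range(n_db):
        new_sel = [row[:] for row in sel]
        for y in range(1, budget + 1):
            for z in range(1, n_sdb + 1):
                for k in range(1, min(10, y) + 1):
                    if sel[y - k][z - 1] == NEG:
                        continue
                    cand = sel[y - k][z - 1] + gain[i][k]
                    if cand > new_sel[y][z]:
                        new_sel[y][z] = cand
                        choice[(i, y, z)] = k
        sel = new_sel

    # Backtrack to recover how many documents to take from each database.
    alloc = [0] * n_db
    y, z = budget, n_sdb
    for i in range(n_db - 1, -1, -1):
        k = choice.get((i, y, z))
        if k:
            alloc[i] = 10 * k
            y, z = y - k, z - 1
    return sel[budget][n_sdb], alloc
```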
After the selection decisions are made, the selected databases are searched and the corresponding document ids are retrieved from each database. The final step of document retrieval is to merge the returned results into a single ranked list with the semi-supervised learning algorithm. It was pointed out before that the SSL algorithm maps the database-specific scores into centralized document scores and builds the final ranked list accordingly, which is consistent with all our selection procedures, where documents with higher probabilities of relevance (thus higher centralized document scores) are selected.

4. EXPERIMENTAL METHODOLOGY

4.1 Testbeds

It is desirable to evaluate distributed information retrieval algorithms with testbeds that closely simulate real world applications. The TREC Web collections WT2g and WT10g [4,13] provide a way to partition documents by different Web servers. In this way, a large number (O(1000)) of databases with rather diverse contents could be created, which may make this testbed a good candidate to simulate operational environments such as the open domain hidden Web. However, two weaknesses of this testbed are: i) Each database contains only a small number of documents (259 documents on average for WT2g) [4]; and ii) The contents of WT2g and WT10g are arbitrarily crawled from the Web. A hidden Web database is unlikely to provide personal homepages or web pages indicating that the pages are under construction and contain no useful information at all, yet these types of web pages are contained in the WT2g/WT10g datasets. Therefore, the noisy Web data is not similar to high-quality hidden Web database contents, which are usually organized by domain experts.

Another choice is the TREC news/government data [1,15,17,18,21]. TREC news/government data is concentrated on relatively narrow topics. Compared with TREC Web data: i) The news/government documents are much more similar to the contents provided by a topic-oriented database than an arbitrary web page is; and ii) A database in this testbed is larger than one of TREC Web data. On average a database contains thousands of documents, which is more realistic than a database of TREC Web data with about 250 documents. As the contents and sizes of the databases in the TREC news/government testbed are more similar to those of a topic-oriented database, it is a good candidate to simulate the distributed information retrieval environments of large organizations (companies) or domain-specific hidden Web sites, such as West, which provides access to legal, financial and news text databases [3]. As most current distributed information retrieval systems are developed for the environments of large organizations (companies) or the domain-specific hidden Web rather than the open domain hidden Web, the TREC news/government testbed was chosen in this work.

The trec123-100col-bysource testbed is one of the most used TREC news/government testbeds [1,15,17,21]. It was chosen in this work. Three testbeds in [21] with skewed database size distributions and different types of relevant document distributions were also used to give a more thorough simulation of real environments.

Trec123-100col-bysource: 100 databases were created from TREC CDs 1, 2 and 3. They were organized by source and publication date [1]. The sizes of the databases are not skewed. Details are in Table 1. Three testbeds built in [21] were based on the trec123-100col-bysource testbed.
Each testbed contains many small databases and two large databases created by merging about 10-20 small databases together.

Input: Complete lists of probabilities of relevance for all the |DB| databases.
Output: Optimal selection solution for Equation 16.
i) Create the three-dimensional array Sel(1..|DB|, 1..N_{Total\_rdoc}/10, 1..N_{sdb}). Each Sel(x, y, z) is associated with a selection decision d_{xyz}, which represents the best selection decision under the condition that only databases from number 1 to number x are considered for selection, y*10 documents in total will be retrieved, and exactly z databases are selected out of the x database candidates. Sel(x, y, z) is the corresponding utility value obtained by choosing the best selection.
ii) Initialize Sel(1, 1..N_{Total\_rdoc}/10, 1..N_{sdb}) with only the estimated relevance information of the 1st database.
iii) Iterate the current database candidate i from 2 to |DB|. For each entry Sel(i, y, z), find
    k^* = \arg\max_k [ Sel(i-1, y-k, z-1) + \sum_{j \le 10k} \hat{R}(d_{ij}) ],  subject to 1 \le k \le \min(y, 10).
    If Sel(i-1, y-k^*, z-1) + \sum_{j \le 10k^*} \hat{R}(d_{ij}) > Sel(i-1, y, z), this means that we should retrieve 10 k^* documents from the ith database; otherwise we should not select this database and the previous best solution Sel(i-1, y, z) should be kept. Then set the values of d_{iyz} and Sel(i, y, z) accordingly.
iv) The best selection solution is given by d_{|DB|, N_{Total\_rdoc}/10, N_{sdb}} and the corresponding utility value is Sel(|DB|, N_{Total\_rdoc}/10, N_{sdb}).
Figure 2. The dynamic programming optimization procedure for Equation 16.

Table 1: Testbed statistics.
Testbed  | Size (GB) | Number of documents (Min / Avg / Max) | Size per database, MB (Min / Avg / Max)
Trec123  | 3.2       | 752 / 10782 / 39713                   | 28 / 32 / 42

Table 2: Query set statistics.
Name     | TREC Topic Set | TREC Topic Field | Average Length (Words)
Trec123  | 51-150         | Title            | 3.1

Trec123-2ldb-60col (representative): The databases in the trec123-100col-bysource testbed were sorted in alphabetical order. Two large databases were created by merging 20 small databases with the round-robin method. Thus, the two large databases have more relevant documents due to their large sizes, even though their densities of relevant documents are roughly the same as those of the small databases.

Trec123-AP-WSJ-60col (relevant): The 24 Associated Press collections and the 16 Wall Street Journal collections in the trec123-100col-bysource testbed were collapsed into two large databases, APall and WSJall. The other 60 collections were left unchanged. The APall and WSJall databases have higher densities of documents relevant to TREC queries than the small databases. Thus, the two large databases have many more relevant documents than the small databases.

Trec123-FR-DOE-81col (nonrelevant): The 13 Federal Register collections and the 6 Department of Energy collections in the trec123-100col-bysource testbed were collapsed into two large databases, FRall and DOEall. The other 80 collections were left unchanged. The FRall and DOEall databases have lower densities of documents relevant to TREC queries than the small databases, even though they are much larger.

100 queries were created from the title fields of TREC topics 51-150. The queries 101-150 were used as training queries and the queries 51-100 were used as test queries (details in Table 2).

4.2 Search Engines

In the uncooperative distributed information retrieval environments of large organizations (companies) or the domain-specific hidden Web, different databases may use different types of search engines.
To simulate the multiple-type-engine environment, three different types of search engines were used in the experiments: INQUERY [2], a unigram statistical language model with linear smoothing [12,20] and a TFIDF retrieval algorithm with ltc weighting [12,20]. All these algorithms were implemented with the Lemur toolkit [12]. These three kinds of search engines were assigned to the databases in the four testbeds in a round-robin manner.

5. RESULTS: RESOURCE SELECTION OF DATABASE RECOMMENDATION

All four testbeds described in Section 4 were used in the experiments to evaluate the resource selection effectiveness of the database recommendation system. The resource descriptions were created using query-based sampling. About 80 queries were sent to each database to download 300 unique documents. The database size statistics were estimated by the sample-resample method [21]. Fifty queries (101-150) were used as training queries to build the logistic model of relevance and to fit the exponential functions of the centralized document score curves for large-ratio databases (details in Section 3.1). Another 50 queries (51-100) were used as test data.

Resource selection algorithms of database recommendation systems are typically compared using the recall metric R_n [1,17,18,21]. Let B denote a baseline ranking, which is often the RBR (relevance based ranking), and E a ranking provided by a resource selection algorithm. Let B_i and E_i denote the number of relevant documents in the ith ranked database of B or E, respectively. Then R_n is defined as follows:

R_k = \frac{\sum_{i=1}^{k} E_i}{\sum_{i=1}^{k} B_i}    (17)

Usually the goal is to search only a few databases, so our figures only show results for selecting up to 20 databases. The experiments summarized in Figure 3 compared the effectiveness of three resource selection algorithms, namely CORI, ReDDE and UUM/HR. The UUM/HR algorithm is described in Section 3.3. It can be seen from Figure 3 that the ReDDE and UUM/HR algorithms are more effective than (on the representative, relevant and nonrelevant testbeds) or as good as (on the Trec123-100Col testbed) the CORI resource selection algorithm. The UUM/HR algorithm is more effective than the ReDDE algorithm on the representative and relevant testbeds and is about the same as the ReDDE algorithm on the Trec123-100Col and nonrelevant testbeds. This suggests that the UUM/HR algorithm is more robust than the ReDDE algorithm. It can be noted that when selecting only a few databases on the Trec123-100Col or nonrelevant testbeds, the ReDDE algorithm has a small advantage over the UUM/HR algorithm. We attribute this to two causes: i) The ReDDE algorithm was tuned on the Trec123-100Col testbed; and ii) Although the difference is small, this may suggest that our logistic model for estimating probabilities of relevance is not accurate enough. More training data or a more sophisticated model may help to solve this minor puzzle.

Figure 3. Resource selection experiments on the four testbeds (panels: Trec123-100Col, Representative, Relevant, and Nonrelevant testbeds; x-axis: number of collections selected).

6. RESULTS: DOCUMENT RETRIEVAL EFFECTIVENESS

For document retrieval, the selected databases are searched and the returned results are merged into a single final list. In all of the experiments discussed in this section, the results retrieved from individual databases were combined by the semi-supervised learning results merging algorithm.
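The score mapping at the heart of this merging step can be pictured with the following minimal sketch. It is our own simplified illustration, not the original implementation: the real SSL algorithm learns database- and query-specific linear models from sampled documents (and, in this variant, a few documents downloaded on the fly), as described in [20,22].

```python
def fit_linear_mapping(overlap_pairs):
    """Least-squares fit of s_centralized ~ a * s_db + b from documents
    that have both a database-specific score and a centralized score
    (e.g., sampled documents that also appear in the returned list).
    overlap_pairs: list of (db_score, centralized_score); needs at least
    two distinct db_score values."""
    n = len(overlap_pairs)
    sx = sum(x for x, _ in overlap_pairs)
    sy = sum(y for _, y in overlap_pairs)
    sxx = sum(x * x for x, _ in overlap_pairs)
    sxy = sum(x * y for x, y in overlap_pairs)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def merge_results(results_per_db, mappings):
    """Convert each database's scores into estimated centralized scores
    and merge everything into one ranked list.
    results_per_db: {db: [(doc_id, db_score), ...]};
    mappings: {db: (a, b)} from fit_linear_mapping."""
    merged = []
    for db, results in results_per_db.items():
        a, b = mappings[db]
        merged.extend((doc_id, a * s + b) for doc_id, s in results)
    return sorted(merged, key=lambda t: t[1], reverse=True)
```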
This version of the SSL algorithm [22] is allowed to download a small number of returned document texts on the fly to create additional training data in the process of learning the linear models, which map database-specific document scores into estimated centralized document scores. It has been shown to be very effective in environments where only short result lists are retrieved from each selected database [22]. This is a common scenario in operational environments and was the case for our experiments.

Document retrieval effectiveness was measured by Precision at the top part of the final document list. The experiments in this section were conducted to study the document retrieval effectiveness of five selection algorithms, namely the CORI, ReDDE, UUM/HR, UUM/HP-FL and UUM/HP-VL algorithms. The last three algorithms were proposed in Section 3. The first four algorithms selected 3 or 5 databases, and 50 documents were retrieved from each selected database. The UUM/HP-VL algorithm also selected 3 or 5 databases, but it was allowed to adjust the number of documents to retrieve from each selected database; the number retrieved was constrained to be between 10 and 100, and a multiple of 10.

The Trec123-100Col and representative testbeds were selected for document retrieval as they represent two extreme cases of resource selection effectiveness; in one case the CORI algorithm is as good as the other algorithms, and in the other case it is quite a lot worse than the other algorithms. Tables 3 and 4 show the results on the Trec123-100Col testbed, and Tables 5 and 6 show the results on the representative testbed.

Table 3. Precision on the trec123-100col-bysource testbed when 3 databases were selected. (The first baseline is CORI; the second baseline for UUM/HP methods is UUM/HR.)
Precision at Doc Rank | CORI   | ReDDE           | UUM/HR          | UUM/HP-FL                | UUM/HP-VL
5 docs                | 0.3640 | 0.3480 (-4.4%)  | 0.3960 (+8.8%)  | 0.4680 (+28.6%)(+18.1%)  | 0.4640 (+27.5%)(+17.2%)
10 docs               | 0.3360 | 0.3200 (-4.8%)  | 0.3520 (+4.8%)  | 0.4240 (+26.2%)(+20.5%)  | 0.4220 (+25.6%)(+19.9%)
15 docs               | 0.3253 | 0.3187 (-2.0%)  | 0.3347 (+2.9%)  | 0.3973 (+22.2%)(+15.7%)  | 0.3920 (+20.5%)(+17.1%)
20 docs               | 0.3140 | 0.2980 (-5.1%)  | 0.3270 (+4.1%)  | 0.3720 (+18.5%)(+13.8%)  | 0.3700 (+17.8%)(+13.2%)
30 docs               | 0.2780 | 0.2660 (-4.3%)  | 0.2973 (+6.9%)  | 0.3413 (+22.8%)(+14.8%)  | 0.3400 (+22.3%)(+14.4%)

Table 4. Precision on the trec123-100col-bysource testbed when 5 databases were selected. (The first baseline is CORI; the second baseline for UUM/HP methods is UUM/HR.)
Precision at Doc Rank | CORI   | ReDDE           | UUM/HR          | UUM/HP-FL                | UUM/HP-VL
5 docs                | 0.4000 | 0.3920 (-2.0%)  | 0.4280 (+7.0%)  | 0.4680 (+17.0%)(+9.4%)   | 0.4600 (+15.0%)(+7.5%)
10 docs               | 0.3800 | 0.3760 (-1.1%)  | 0.3800 (+0.0%)  | 0.4180 (+10.0%)(+10.0%)  | 0.4320 (+13.7%)(+13.7%)
15 docs               | 0.3560 | 0.3560 (+0.0%)  | 0.3720 (+4.5%)  | 0.3920 (+10.1%)(+5.4%)   | 0.4080 (+14.6%)(+9.7%)
20 docs               | 0.3430 | 0.3390 (-1.2%)  | 0.3550 (+3.5%)  | 0.3710 (+8.2%)(+4.5%)    | 0.3830 (+11.7%)(+7.9%)
30 docs               | 0.3240 | 0.3140 (-3.1%)  | 0.3313 (+2.3%)  | 0.3500 (+8.0%)(+5.6%)    | 0.3487 (+7.6%)(+5.3%)

Table 5. Precision on the representative testbed when 3 databases were selected. (The first baseline is CORI; the second baseline for UUM/HP methods is UUM/HR.)
Precision at Doc Rank | CORI   | ReDDE           | UUM/HR          | UUM/HP-FL                | UUM/HP-VL
5 docs                | 0.3720 | 0.4080 (+9.7%)  | 0.4640 (+24.7%) | 0.4600 (+23.7%)(-0.9%)   | 0.5000 (+34.4%)(+7.8%)
10 docs               | 0.3400 | 0.4060 (+19.4%) | 0.4600 (+35.3%) | 0.4540 (+33.5%)(-1.3%)   | 0.4640 (+36.5%)(+0.9%)
15 docs               | 0.3120 | 0.3880 (+24.4%) | 0.4320 (+38.5%) | 0.4240 (+35.9%)(-1.9%)   | 0.4413 (+41.4%)(+2.2%)
20 docs               | 0.3000 | 0.3750 (+25.0%) | 0.4080 (+36.0%) | 0.4040 (+34.7%)(-1.0%)   | 0.4240 (+41.3%)(+4.0%)
30 docs               | 0.2533 | 0.3440 (+35.8%) | 0.3847 (+51.9%) | 0.3747 (+47.9%)(-2.6%)   | 0.3887 (+53.5%)(+1.0%)

Table 6. Precision on the representative testbed when 5 databases were selected. (The first baseline is CORI; the second baseline for UUM/HP methods is UUM/HR.)
Precision at Doc Rank | CORI   | ReDDE           | UUM/HR          | UUM/HP-FL                | UUM/HP-VL
5 docs                | 0.3960 | 0.4080 (+3.0%)  | 0.4560 (+15.2%) | 0.4280 (+8.1%)(-6.1%)    | 0.4520 (+14.1%)(-0.9%)
10 docs               | 0.3880 | 0.4060 (+4.6%)  | 0.4280 (+10.3%) | 0.4460 (+15.0%)(+4.2%)   | 0.4560 (+17.5%)(+6.5%)
15 docs               | 0.3533 | 0.3987 (+12.9%) | 0.4227 (+19.6%) | 0.4440 (+25.7%)(+5.0%)   | 0.4453 (+26.0%)(+5.4%)
20 docs               | 0.3330 | 0.3960 (+18.9%) | 0.4140 (+24.3%) | 0.4290 (+28.8%)(+3.6%)   | 0.4350 (+30.6%)(+5.1%)
30 docs               | 0.2967 | 0.3740 (+26.1%) | 0.4013 (+35.3%) | 0.3987 (+34.4%)(-0.7%)   | 0.4060 (+36.8%)(+1.2%)

On the Trec123-100Col testbed, the document retrieval effectiveness of the CORI selection algorithm is roughly the same as or a little better than that of the ReDDE algorithm, but both of them are worse than the other three algorithms (Tables 3 and 4). The UUM/HR algorithm has a small advantage over the CORI and ReDDE algorithms. One main difference between the UUM/HR algorithm and the ReDDE algorithm was pointed out before: the UUM/HR algorithm uses training data and linear interpolation to estimate the centralized document score curves, while the ReDDE algorithm [21] uses a heuristic method, assumes the centralized document score curves are step functions, and makes no distinction among the top parts of the curves. This difference makes UUM/HR better than the ReDDE algorithm at distinguishing documents with high probabilities of relevance from those with low probabilities of relevance. Therefore, UUM/HR reflects the high-precision retrieval goal better than the ReDDE algorithm and thus is more effective for document retrieval. The UUM/HR algorithm does not explicitly optimize the selection decision with respect to the high-precision goal as the UUM/HP-FL and UUM/HP-VL algorithms are designed to do. It can be seen that on this testbed, the UUM/HP-FL and UUM/HP-VL algorithms are much more effective than all the other algorithms. This indicates that their power comes from explicitly optimizing the high-precision goal of document retrieval in Equations 14 and 16.

On the representative testbed, CORI is much less effective than the other algorithms for distributed document retrieval (Tables 5 and 6). The document retrieval results of the ReDDE algorithm are better than those of the CORI algorithm but still worse than the results of the UUM/HR algorithm. On this testbed the three UUM algorithms are about equally effective. Detailed analysis shows that the overlap of the selected databases between the UUM/HR, UUM/HP-FL and UUM/HP-VL algorithms is much larger than in the experiments on the Trec123-100Col testbed, since all of them tend to select the two large databases. This explains why they are about equally effective for document retrieval.

In real operational environments, databases may return no document scores and report only ranked lists of results. As the unified utility maximization model only utilizes retrieval scores of sampled documents with a centralized retrieval algorithm to calculate the probabilities of relevance, it makes database selection decisions without referring to the document scores from individual databases and can be easily generalized to this case of ranked lists without document scores.
The only adjustment is that the SSL algorithm merges ranked lists without document scores by assigning the documents pseudo-document scores normalized by their ranks (in a ranked list of 50 documents, the first one has a score of 1, the second a score of 0.98, etc.), which has been studied in [22]. The experiment results on the trec123-100Col-bysource testbed with 3 selected databases are shown in Table 7. The experiment setting was the same as before, except that the document scores were eliminated intentionally and the selected databases only returned ranked lists of document ids. It can be seen from the results that the UUM/HP-FL and UUM/HP-VL algorithms work well with databases returning no document scores and are still more effective than the other alternatives. Other experiments with databases that return no document scores are not reported, but they show similar results confirming the effectiveness of the UUM/HP-FL and UUM/HP-VL algorithms. The above experiments suggest that it is very important to optimize the high-precision goal explicitly in document retrieval. The new algorithms based on this principle achieve better or at least as good results as the prior state-of-the-art algorithms in several environments.

Table 7. Precision on the trec123-100col-bysource testbed when 3 databases were selected and search engines do not return document scores. (The first baseline is CORI; the second baseline for UUM/HP methods is UUM/HR.)
Precision at Doc Rank | CORI   | ReDDE           | UUM/HR          | UUM/HP-FL                | UUM/HP-VL
5 docs                | 0.3520 | 0.3240 (-8.0%)  | 0.3680 (+4.6%)  | 0.4520 (+28.4%)(+22.8%)  | 0.4520 (+28.4%)(+22.8%)
10 docs               | 0.3320 | 0.3140 (-5.4%)  | 0.3340 (+0.6%)  | 0.4120 (+24.1%)(+23.4%)  | 0.4020 (+21.1%)(+20.4%)
15 docs               | 0.3227 | 0.2987 (-7.4%)  | 0.3280 (+1.6%)  | 0.3920 (+21.5%)(+19.5%)  | 0.3733 (+15.7%)(+13.8%)
20 docs               | 0.3030 | 0.2860 (-5.6%)  | 0.3130 (+3.3%)  | 0.3670 (+21.2%)(+17.3%)  | 0.3590 (+18.5%)(+14.7%)
30 docs               | 0.2727 | 0.2640 (-3.2%)  | 0.2900 (+6.3%)  | 0.3273 (+20.0%)(+12.9%)  | 0.3273 (+20.0%)(+12.9%)

7. CONCLUSION

Distributed information retrieval solves the problem of finding information that is scattered among many text databases on local area networks and the Internet. Most previous research simply uses an effective resource selection algorithm designed for database recommendation in the distributed document retrieval application. We argue that the high-recall resource selection goal of database recommendation and the high-precision goal of document retrieval are related but not identical. This kind of inconsistency has also been observed in previous work, but the prior solutions either used heuristic methods or assumed cooperation by individual databases (e.g., all the databases used the same kind of search engine), which is frequently not true in uncooperative environments.

In this work we propose a unified utility maximization model to integrate the resource selection of the database recommendation and document retrieval tasks into a single unified framework. In this framework, the selection decisions are obtained by optimizing different objective functions. As far as we know, this is the first work that tries to view and theoretically model the distributed information retrieval task in an integrated manner.

The new framework continues a recent research trend studying the use of query-based sampling and a centralized sample database. A single logistic model was trained on the centralized sample database to estimate the probabilities of relevance of documents from their centralized retrieval scores, while the centralized sample database serves as a bridge to connect the individual databases with the centralized logistic model. Therefore, the probabilities of relevance for all the documents across the databases can be estimated with a very small amount of human relevance judgment, which is much more efficient than previous methods that build a separate model for each database. This framework is not only more theoretically solid but also very effective. One algorithm for resource selection (UUM/HR) and two algorithms for document retrieval (UUM/HP-FL and UUM/HP-VL) are derived from this framework. Empirical studies have been conducted on testbeds that simulate the distributed search solutions of large organizations (companies) or the domain-specific hidden Web. Furthermore, the UUM/HP-FL and UUM/HP-VL resource selection algorithms are extended with a variant of the SSL results merging algorithm to address the distributed document retrieval task when selected databases do not return document scores. Experiments have shown that these algorithms achieve results that are at least as good as the prior state-of-the-art, and sometimes considerably better. Detailed analysis indicates that the advantage of these algorithms comes from explicitly optimizing the goals of the specific tasks.

The unified utility maximization framework is open to different extensions. When a cost is associated with searching the online databases, the utility framework can be adjusted to automatically estimate the best number of databases to search so that a large number of relevant documents can be retrieved at relatively small cost. Another extension of the framework is to consider the retrieval effectiveness of the online databases, which is an important issue in operational environments. All of these are directions for future research.

ACKNOWLEDGEMENT

This research was supported by NSF grants EIA-9983253 and IIS-0118767. Any opinions, findings, conclusions, or recommendations expressed in this paper are the authors', and do not necessarily reflect those of the sponsor.

REFERENCES
[1] J. Callan. (2000). Distributed information retrieval. In W.B. Croft, editor, Advances in Information Retrieval. Kluwer Academic Publishers. (pp. 127-150).
[2] J. Callan, W.B. Croft, and J. Broglio. (1995). TREC and TIPSTER experiments with INQUERY. Information Processing and Management, 31(3). (pp. 327-343).
[3] J. G. Conrad, X. S. Guo, P. Jackson and M. Meziou. (2002). Database selection using actual physical and acquired logical collection resources in a massive domain-specific operational environment. In Proceedings of the 28th International Conference on Very Large Databases (VLDB).
[4] N. Craswell. (2000). Methods for distributed information retrieval. Ph.D. thesis, The Australian National University.
[5] N. Craswell, D. Hawking, and P. Thistlewaite. (1999). Merging results from isolated search engines. In Proceedings of the 10th Australasian Database Conference.
[6] D. D'Souza, J. Thom, and J. Zobel. (2000). A comparison of techniques for selecting text collections. In Proceedings of the 11th Australasian Database Conference.
[7] N. Fuhr. (1999). A decision-theoretic approach to database selection in networked IR. ACM Transactions on Information Systems, 17(3). (pp. 229-249).
[8] L. Gravano, C. Chang, H. Garcia-Molina, and A. Paepcke. (1997). STARTS: Stanford proposal for internet metasearching. In Proceedings of the 20th ACM-SIGMOD International Conference on Management of Data.
[9] L. Gravano, P. Ipeirotis and M. Sahami. (2003). QProber: A system for automatic classification of hidden-Web databases. ACM Transactions on Information Systems, 21(1).
[10] P. Ipeirotis and L. Gravano. (2002). Distributed search over the hidden web: Hierarchical database sampling and selection. In Proceedings of the 28th International Conference on Very Large Databases (VLDB).
[11] InvisibleWeb.com. http://www.invisibleweb.com
[12] The Lemur toolkit. http://www.cs.cmu.edu/~lemur
[13] J. Lu and J. Callan. (2003). Content-based information retrieval in peer-to-peer networks. In Proceedings of the 12th International Conference on Information and Knowledge Management.
[14] W. Meng, C.T. Yu and K.L. Liu. (2002). Building efficient and effective metasearch engines. ACM Computing Surveys, 34(1).
[15] H. Nottelmann and N. Fuhr. (2003). Evaluating different methods of estimating retrieval quality for resource selection. In Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval.
[16] H. Nottelmann and N. Fuhr. (2003). The MIND architecture for heterogeneous multimedia federated digital libraries. ACM SIGIR 2003 Workshop on Distributed Information Retrieval.
[17] A.L. Powell, J.C. French, J. Callan, M. Connell, and C.L. Viles. (2000). The impact of database selection on distributed searching. In Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval.
[18] A.L. Powell and J.C. French. (2003). Comparing the performance of database selection algorithms. ACM Transactions on Information Systems, 21(4). (pp. 412-456).
[19] C. Sherman. (2001). Search for the invisible web. Guardian Unlimited.
[20] L. Si and J. Callan. (2002). Using sampled data and regression to merge search engine results. In Proceedings of the 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval.
[21] L. Si and J. Callan. (2003). Relevant document distribution estimation method for resource selection. In Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval.
[22] L. Si and J. Callan. (2003). A semi-supervised learning method to merge search engine results. ACM Transactions on Information Systems, 21(4). (pp. 457-491).
Unified Utility Maximization Framework for Resource Selection ABSTRACT This paper presents a unified utility framework for resource selection of distributed text information retrieval. This new framework shows an efficient and effective way to infer the probabilities of relevance of all the documents across the text databases. With the estimated relevance information, resource selection can be made by explicitly optimizing the goals of different applications. Specifically, when used for database recommendation, the selection is optimized for the goal of highrecall (include as many relevant documents as possible in the selected databases); when used for distributed document retrieval, the selection targets the high-precision goal (high precision in the final merged list of documents). This new model provides a more solid framework for distributed information retrieval. Empirical studies show that it is at least as effective as other state-of-the-art algorithms. 1. INTRODUCTION Conventional search engines such as Google or AltaVista use ad-hoc information retrieval solution by assuming all the searchable documents can be copied into a single centralized database for the purpose of indexing. Distributed information retrieval, also known as federated search [1,4,7,11,14,22] is different from ad-hoc information retrieval as it addresses the cases when documents cannot be acquired and stored in a single database. For example, "Hidden Web" contents (also called "invisible" or "deep" Web contents) are information on the Web that cannot be accessed by the conventional search engines. Hidden web contents have been estimated to be 2-50 [19] times larger than the contents that can be searched by conventional search engines. Therefore, it is very important to search this type of valuable information. The architecture of distributed search solution is highly influenced by different environmental characteristics. In a small local area network such as small company environments, the information providers may cooperate to provide corpus statistics or use the same type of search engines. Early distributed information retrieval research focused on this type of cooperative environments [1,8]. On the other side, in a wide area network such as very large corporate environments or on the Web there are many types of search engines and it is difficult to assume that all the information providers can cooperate as they are required. Even if they are willing to cooperate in these environments, it may be hard to enforce a single solution for all the information providers or to detect whether information sources provide the correct information as they are required. Many applications fall into the latter type of uncooperative environments such as the Mind project [16] which integrates non-cooperating digital libraries or the QProber system [9] which supports browsing and searching of uncooperative hidden Web databases. In this paper, we focus mainly on uncooperative environments that contain multiple types of independent search engines. There are three important sub-problems in distributed information retrieval. First, information about the contents of each individual database must be acquired (resource representation) [1,8,21]. Second, given a query, a set of resources must be selected to do the search (resource selection) [5,7,21]. 
Third, the results retrieved from all the selected resources have to be merged into a single final list before it can be presented to the end user (retrieval and results merging) [1,5,20,22]. Many types of solutions exist for distributed information retrieval. Invisible-web. net1 provides guided browsing of hidden Web databases by collecting the resource descriptions of these databases and building hierarchies of classes that group them by similar topics. A database recommendation system goes a step further than a browsing system like Invisible-web. net by recommending most relevant information sources to users' queries. It is composed of the resource description and the resource selection components. This solution is useful when the users want to browse the selected databases by themselves instead of asking the system to retrieve relevant documents automatically. Distributed document retrieval is a more sophisticated task. It selects relevant information sources for users' queries as the database recommendation system does. Furthermore, users' queries are forwarded to the corresponding selected databases and the returned individual ranked lists are merged into a single list to present to the users. The goal of a database recommendation system is to select a small set of resources that contain as many relevant documents as possible, which we call a high-recall goal. On the other side, the effectiveness of distributed document retrieval is often measured by the Precision of the final merged document result list, which we call a high-precision goal. Prior research indicated that these two goals are related but not identical [4,21]. However, most previous solutions simply use effective resource selection algorithm of database recommendation system for distributed document retrieval system or solve the inconsistency with heuristic methods [1,4,21]. This paper presents a unified utility maximization framework to integrate the resource selection problem of both database recommendation and distributed document retrieval together by treating them as different optimization goals. First, a centralized sample database is built by randomly sampling a small amount of documents from each database with query-based sampling [1]; database size statistics are also estimated [21]. A logistic transformation model is learned off line with a small amount of training queries to map the centralized document scores in the centralized sample database to the corresponding probabilities of relevance. Second, after a new query is submitted, the query can be used to search the centralized sample database which produces a score for each sampled document. The probability of relevance for each document in the centralized sample database can be estimated by applying the logistic model to each document's score. Then, the probabilities of relevance of all the (mostly unseen) documents among the available databases can be estimated using the probabilities of relevance of the documents in the centralized sample database and the database size estimates. For the task of resource selection for a database recommendation system, the databases can be ranked by the expected number of relevant documents to meet the high-recall goal. For resource selection for a distributed document retrieval system, databases containing a small number of documents with large probabilities of relevance are favored over databases containing many documents with small probabilities of relevance. 
This selection criterion meets the high-precision goal of distributed document retrieval application. Furthermore, the Semi-supervised learning (SSL) [20,22] algorithm is applied to merge the returned documents into a final ranked list. The unified utility framework makes very few assumptions and works in uncooperative environments. Two key features make it a more solid model for distributed information retrieval: i) It formalizes the resource selection problems of different applications as various utility functions, and optimizes the utility functions to achieve the optimal results accordingly; and ii) It shows an effective and efficient way to estimate the probabilities of relevance of all documents across databases. Specifically, the framework builds logistic models on the centralized sample database to transform centralized retrieval scores to the corresponding probabilities of relevance and uses the centralized sample database as the bridge between individual databases and the logistic model. The human effort (relevance judgment) required to train the single centralized logistic model does not scale with the number of databases. This is a large advantage over previous research, which required the amount of human effort to be linear with the number of databases [7,15]. The unified utility framework is not only more theoretically solid but also very effective. Empirical studies show the new model to be at least as accurate as the state-of-the-art algorithms in a variety of configurations. The next section discusses related work. Section 3 describes the new unified utility maximization model. Section 4 explains our experimental methodology. Sections 5 and 6 present our experimental results for resource selection and document retrieval. Section 7 concludes. 2. PRIOR RESEARCH There has been considerable research on all the sub-problems of distributed information retrieval. We survey the most related work in this section. The first problem of distributed information retrieval is resource representation. The STARTS protocol is one solution for acquiring resource descriptions in cooperative environments [8]. However, in uncooperative environments, even the databases are willing to share their information, it is not easy to judge whether the information they provide is accurate or not. Furthermore, it is not easy to coordinate the databases to provide resource representations that are compatible with each other. Thus, in uncooperative environments, one common choice is query-based sampling, which randomly generates and sends queries to individual search engines and retrieves some documents to build the descriptions. As the sampled documents are selected by random queries, query-based sampling is not easily fooled by any adversarial spammer that is interested to attract more traffic. Experiments have shown that rather accurate resource descriptions can be built by sending about 80 queries and downloading about 300 documents [1]. Many resource selection algorithms such as gGlOSS/vGlOSS [8] and CORI [1] have been proposed in the last decade. The CORI algorithm represents each database by its terms, the document frequencies and a small number of corpus statistics (details in [1]). As prior research on different datasets has shown the CORI algorithm to be the most stable and effective of the three algorithms [1,17,18], we use it as a baseline algorithm in this work. 
The relevant document distribution estimation (ReDDE [21]) resource selection algorithm is a recent algorithm that tries to estimate the distribution of relevant documents across the available databases and ranks the databases accordingly. Although the ReDDE algorithm has been shown to be effective, it relies on heuristic constants that are set empirically [21]. The last step of the document retrieval sub-problem is results merging, which is the process of transforming database-specific document scores into comparable database-independent document scores. The semi supervised learning (SSL) [20,22] result merging algorithm uses the documents acquired by querybased sampling as training data and linear regression to learn the database-specific, query-specific merging models. These linear models are used to convert the database-specific document scores into the approximated centralized document scores. The SSL algorithm has been shown to be effective [22]. It serves as an important component of our unified utility maximization framework (Section 3). In order to achieve accurate document retrieval results, many previous methods simply use resource selection algorithms that are effective of database recommendation system. But as pointed out above, a good resource selection algorithm optimized for high-recall may not work well for document retrieval, which targets the high-precision goal. This type of inconsistency has been observed in previous research [4,21]. The research in [21] tried to solve the problem with a heuristic method. The research most similar to what we propose here is the decision-theoretic framework (DTF) [7,15]. This framework computes a selection that minimizes the overall costs (e.g., retrieval quality, time) of document retrieval system and several methods [15] have been proposed to estimate the retrieval quality. However, two points distinguish our research from the DTF model. First, the DTF is a framework designed specifically for document retrieval, but our new model integrates two distinct applications with different requirements (database recommendation and distributed document retrieval) into the same unified framework. Second, the DTF builds a model for each database to calculate the probabilities of relevance. This requires human relevance judgments for the results retrieved from each database. In contrast, our approach only builds one logistic model for the centralized sample database. The centralized sample database can serve as a bridge to connect the individual databases with the centralized logistic model, thus the probabilities of relevance of documents in different databases can be estimated. This strategy can save large amount of human judgment effort and is a big advantage of the unified utility maximization framework over the DTF especially when there are a large number of databases. 3. UNIFIED UTILITY MAXIMIZATION FRAMEWORK The Unified Utility Maximization (UUM) framework is based on estimating the probabilities of relevance of the (mostly unseen) documents available in the distributed search environment. In this section we describe how the probabilities of relevance are estimated and how they are used by the Unified Utility Maximization model. We also describe how the model can be optimized for the high-recall goal of a database recommendation system and the high-precision goal of a distributed document retrieval system. 
3.1 Estimating Probabilities of Relevance As pointed out above, the purpose of resource selection is highrecall and the purpose of document retrieval is high-precision. In order to meet these diverse goals, the key issue is to estimate the probabilities of relevance of the documents in various databases. This is a difficult problem because we can only observe a sample of the contents of each database using query-based sampling. Our strategy is to make full use of all the available information to calculate the probability estimates. 3.1.1 Learning Probabilities of Relevance In the resource description step, the centralized sample database is built by query-based sampling and the database sizes are estimated using the sample-resample method [21]. At the same time, an effective retrieval algorithm (Inquery [2]) is applied on the centralized sample database with a small number (e.g., 50) of training queries. For each training query, the CORI resource selection algorithm [1] is applied to select some number (e.g., 10) of databases and retrieve 50 document ids from each database. The SSL results merging algorithm [20,22] is used to merge the results. Then, we can download the top 50 documents in the final merged list and calculate their corresponding centralized scores using Inquery and the corpus statistics of the centralized sample database. The centralized scores are further normalized (divided by the maximum centralized score for each query), as this method has been suggested to improve estimation accuracy in previous research [15]. Human judgment is acquired for those documents and a logistic model is built to transform the normalized centralized document scores to probabilities of relevance as follows: is the normalized centralized document score and ac and bc are the two parameters of the logistic model. These two parameters are estimated by maximizing the probabilities of relevance of the training queries. The logistic model provides us the tool to calculate the probabilities of relevance from centralized document scores. 3.1.2 Estimating Centralized Document Scores When the user submits a new query, the centralized document scores of the documents in the centralized sample database are calculated. However, in order to calculate the probabilities of relevance, we need to estimate centralized document scores for all documents across the databases instead of only the sampled documents. This goal is accomplished using: the centralized scores of the documents in the centralized sample database, and the database size statistics. We define the database scale factor for the ith database as the ratio of the estimated database size and the number of documents sampled from this database as follows: where Ndbi is the estimated database size and Ndbi _ samp is the number of documents from the ith database in the centralized sample database. The intuition behind the database scale factor is that, for a database whose scale factor is 50, if one document from this database in the centralized sample database has a centralized document score of 0.5, we may guess that there are about 50 documents in that database which have scores of about 0.5. Actually, we can apply a finer non-parametric linear interpolation method to estimate the centralized document score curve for each database. 
Formally, we rank all the sampled documents from the ith database by their centralized document scores to get the sampled centralized document score list {S_c(ds_i1), S_c(ds_i2), S_c(ds_i3), ...} for the ith database. We assume that, if we could calculate the centralized document scores for all the documents in this database and get the complete centralized document score list, the top document in the sampled list would have rank SF_dbi/2, the second document in the sampled list would have rank 3*SF_dbi/2, and so on. Therefore, the data points of the sampled documents in the complete list are: {(SF_dbi/2, S_c(ds_i1)), (3*SF_dbi/2, S_c(ds_i2)), (5*SF_dbi/2, S_c(ds_i3)), ...}. Piecewise linear interpolation is applied to estimate the centralized document score curve, as illustrated in Figure 1. The complete centralized document score list can then be estimated by reading the values of the curve at ranks 1, 2, ..., N_dbi, giving {S^_c(d_i1), S^_c(d_i2), ..., S^_c(d_iN_dbi)}.

It can be seen from Figure 1 that more sample data points produce more accurate estimates of the centralized document score curves. However, for databases with large database scale ratios, this kind of linear interpolation may be rather inaccurate, especially for the top ranked (e.g., [1, SF_dbi/2]) documents. Therefore, an alternative solution is proposed to estimate the centralized document scores of the top ranked documents for databases with large scale ratios (e.g., larger than 100). Specifically, a logistic model is built for each of these databases. The logistic model is used to estimate the centralized document score of the top document in the corresponding database from the two sampled documents of that database with the highest centralized scores:

S^_c(d_i1) = exp(α_i0 + α_i1 S_c(ds_i1) + α_i2 S_c(ds_i2)) / (1 + exp(α_i0 + α_i1 S_c(ds_i1) + α_i2 S_c(ds_i2)))

where α_i0, α_i1 and α_i2 are the parameters of the logistic model. For each training query, the top retrieved document of each database is downloaded and the corresponding centralized document score is calculated. Together with the scores of the top two sampled documents, these parameters can be estimated. After the centralized score of the top document is estimated, an exponential function is fitted for the top part ([1, SF_dbi/2]) of the centralized document score curve:

S^_c(d_ij) = β_i0 * exp(β_i1 * j),  for j in [1, SF_dbi/2]

The two parameters β_i0 and β_i1 are fitted to make sure the exponential function passes through the two points (1, S^_c(d_i1)) and (SF_dbi/2, S_c(ds_i1)). The exponential function is only used to adjust the top part of the centralized document score curve; the lower part of the curve is still fitted with the linear interpolation method described above. The adjustment by fitting an exponential function for the top ranked documents has been shown empirically to produce more accurate results.

Figure 1. Linear interpolation construction of the complete centralized document score list (database scale factor is 50).

From the centralized document score curves, we can estimate the complete centralized document score lists accordingly for all the available databases. After the estimated centralized document scores are normalized, the complete lists of probabilities of relevance can be constructed from the complete centralized document score lists by Equation 1. Formally, for the ith database, the complete list of probabilities of relevance is {R^(d_i1), R^(d_i2), ..., R^(d_iN_dbi)}.

3.2 The Unified Utility Maximization Model
In this section, we formally define the new unified utility maximization model, which optimizes the resource selection problems for the two goals of high-recall (database recommendation) and high-precision (distributed document retrieval) in the same framework.
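Before moving to the selection model, here is a minimal sketch of the rank-curve estimation of Section 3.1.2, assuming numpy; the sampled scores come from the centralized sample database and the database size from sample-resample, and the exponential adjustment for very large scale factors is omitted for brevity.

import numpy as np

def complete_score_list(sampled_scores, est_db_size):
    # Estimate centralized scores for ranks 1..N_db by piecewise linear interpolation
    # through the points (SF/2, s1), (3*SF/2, s2), (5*SF/2, s3), ...
    sampled_scores = np.sort(np.asarray(sampled_scores, dtype=float))[::-1]
    sf = est_db_size / len(sampled_scores)                  # database scale factor
    sample_ranks = sf / 2 + sf * np.arange(len(sampled_scores))
    all_ranks = np.arange(1, int(est_db_size) + 1)
    return np.interp(all_ranks, sample_ranks, sampled_scores,
                     left=sampled_scores[0], right=sampled_scores[-1])

Feeding the resulting scores (after normalization) through the logistic model of the previous sketch yields the complete list of probabilities of relevance for the database.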
In the task of database recommendation, the system needs to decide how to rank databases. In the task of document retrieval, the system not only needs to select the databases but also needs to decide how many documents to retrieve from each selected database. We generalize the database recommendation selection process, which implicitly recommends all documents in every selected database, as a special case of the selection decision for the document retrieval task. Formally, we denote d_i as the number of documents we would like to retrieve from the ith database and d = {d_1, d_2, ..., d_|DB|} as a selection action for all the databases.

The database selection decision is made based on the complete lists of probabilities of relevance for all the databases. The complete lists of probabilities of relevance are inferred from all the available information, specifically R_s, which stands for the resource descriptions acquired by query-based sampling and the database size estimates acquired by sample-resample, and S_c, which stands for the centralized document scores of the documents in the centralized sample database. For a specific selection action d and true complete lists of probabilities of relevance θ, we associate a utility function U(θ, d) which indicates the benefit from making the d selection when the true complete lists of probabilities of relevance are θ. Therefore, the selection decision defined by the Bayesian framework is:

d* = argmax_d ∫ U(θ, d) P(θ | R_s, S_c) dθ

One common approach to simplify the computation in the Bayesian framework is to only calculate the utility function at the most probable parameter values instead of calculating the whole expectation. If the method of estimating centralized document scores and probabilities of relevance in Section 3.1 is acceptable, then the most probable complete lists of probabilities of relevance, denoted θ*, can be derived. In other words, we only need to calculate:

d* = argmax_d U(θ*, d)    (Equation 8)

3.3 Resource Selection for High-Recall
High-recall is the goal of the resource selection algorithm in federated search tasks such as database recommendation. The goal is to select a small set of resources (e.g., less than N_sdb databases) that contain as many relevant documents as possible, which can be formally defined by the utility function:

U(θ, d) = Σ_i I(d_i) Σ_j R^(d_ij)    (Equation 9)

where I(d_i) is the indicator function, which is 1 when the ith database is selected and 0 otherwise. Plugging this utility into the basic model in Equation 8 and adding the constraint on the number of selected databases, Σ_i I(d_i) = N_sdb, gives the corresponding optimization problem. The solution of this optimization problem is very simple. We can calculate the expected number of relevant documents for each database, Σ_j R^(d_ij), and the N_sdb databases with the largest expected number of relevant documents can be selected to meet the high-recall goal. We call this the UUM/HR algorithm (Unified Utility Maximization for High-Recall).

3.4 Resource Selection for High-Precision
High-precision is the goal of the resource selection algorithm in federated search tasks such as distributed document retrieval. It is measured by the precision at the top part of the final merged document list. This high-precision criterion is realized by the following utility function, which measures the precision of the retrieved documents from the selected databases:

U(θ, d) = Σ_i I(d_i) Σ_{j=1..d_i} R^(d_ij)    (Equation 12)

Note that the key difference between Equation 12 and Equation 9 is that Equation 9 sums up the probabilities of relevance of all the documents in a database, while Equation 12 only considers a much smaller part of the ranking. Specifically, we can calculate the optimal selection decision by maximizing this utility at θ*, as in Equation 8. Different kinds of constraints caused by different characteristics of the document retrieval tasks can be associated with the above optimization problem.
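A minimal sketch contrasting the two utilities just described: summing each database's estimated probabilities of relevance over the whole list gives the high-recall criterion used by UUM/HR, while summing only over the top-ranked part gives the high-precision criterion. The dictionary layout follows the earlier sketches and is an assumption.

def expected_relevant(relevance_lists, top_k=None):
    # relevance_lists: {db: [R(d_i1), R(d_i2), ...]}, each list sorted by estimated score.
    # top_k=None sums the whole list (high-recall utility, Equation 9);
    # top_k=k sums only the top k entries (high-precision utility, Equation 12).
    return {db: sum(probs if top_k is None else probs[:top_k])
            for db, probs in relevance_lists.items()}

def select_databases(relevance_lists, n_sdb, top_k=None):
    # UUM/HR when top_k is None; a fixed-length high-precision variant when top_k is set.
    values = expected_relevant(relevance_lists, top_k)
    return sorted(values, key=values.get, reverse=True)[:n_sdb]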
The most common constraint is to select a fixed number (N_sdb) of databases and retrieve a fixed number (N_rdoc) of documents from each selected database, formally defined as d_i = N_rdoc for every selected database, with Σ_i I(d_i) = N_sdb (Equation 14). This optimization problem can be solved easily by calculating the expected number of relevant documents in the top part of each database's complete list of probabilities of relevance, Σ_{j=1..N_rdoc} R^(d_ij). Then the databases can be ranked by these values and selected. We call this the UUM/HP-FL algorithm (Unified Utility Maximization for High-Precision with Fixed Length document rankings from each selected database).

A more complex situation is to vary the number of retrieved documents from each selected database. More specifically, we allow different selected databases to return different numbers of documents. For simplification, the result list lengths are required to be multiples of a baseline number, 10. (This value can also be varied, but for simplification it is set to 10 in this paper.) This restriction is set to simulate the behavior of commercial search engines on the Web. (Search engines such as Google and AltaVista return only 10 or 20 document ids for every result page.) This procedure saves the computation time of calculating the optimal database selection by allowing the step of the dynamic programming to be 10 instead of 1 (more detail is discussed later). For further simplification, we restrict the selection to at most N_sdb databases, and N_Total_rdoc denotes the total number of documents to be retrieved; this variable-length selection problem corresponds to Equation 16. A dynamic programming algorithm can be applied to calculate the optimal solution. The basic steps of this dynamic programming method are described in Figure 2. As this algorithm allows retrieving result lists of varying lengths from each selected database, it is called the UUM/HP-VL algorithm.

After the selection decisions are made, the selected databases are searched and the corresponding document ids are retrieved from each database. The final step of document retrieval is to merge the returned results into a single ranked list with the semi-supervised learning algorithm. It was pointed out before that the SSL algorithm maps the database-specific scores into the centralized document scores and builds the final ranked list accordingly, which is consistent with all our selection procedures, where documents with higher probabilities of relevance (thus higher centralized document scores) are selected.

4. EXPERIMENTAL METHODOLOGY
4.1 Testbeds
It is desirable to evaluate distributed information retrieval algorithms with testbeds that closely simulate real world applications. The TREC Web collections WT2g or WT10g [4,13] provide a way to partition documents by different Web servers. In this way, a large number (O(1000)) of databases with rather diverse contents could be created, which may make this testbed a good candidate to simulate operational environments such as the open domain hidden Web. However, two weaknesses of this testbed are: i) Each database contains only a small number of documents (259 documents on average for WT2g) [4]; and ii) The contents of WT2g or WT10g are arbitrarily crawled from the Web. It is not likely for a hidden Web database to provide personal homepages or web pages indicating that the pages are under construction and contain no useful information at all; these types of web pages are contained in the WT2g/WT10g datasets. Therefore, the noisy Web data is not similar to high-quality hidden Web database contents, which are usually organized by domain experts.
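A minimal sketch of the dynamic program summarized in Figure 2 below, under stated assumptions: result-list lengths are multiples of `step`, exactly n_sdb databases and n_total_rdoc documents are used, and utilities are the summed estimated probabilities of relevance. The recurrence is our reading of the procedure rather than the authors' exact implementation.

def uum_hp_vl_value(relevance_lists, n_sdb, n_total_rdoc, step=10):
    # best[z][y]: largest expected number of relevant documents obtainable by selecting
    # z databases and retrieving y*step documents in total (over the databases seen so far).
    slots = n_total_rdoc // step
    NEG = float("-inf")
    best = [[NEG] * (slots + 1) for _ in range(n_sdb + 1)]
    best[0][0] = 0.0
    for probs in relevance_lists.values():          # iterate database candidates (step iii of Fig. 2)
        gain = [sum(probs[:k * step]) for k in range(slots + 1)]   # utility of taking k*step docs
        new_best = [row[:] for row in best]
        for z in range(1, n_sdb + 1):
            for y in range(1, slots + 1):
                for k in range(1, y + 1):            # retrieve k*step documents from this database
                    if best[z - 1][y - k] != NEG:
                        new_best[z][y] = max(new_best[z][y], best[z - 1][y - k] + gain[k])
        best = new_best
    return best[n_sdb][slots]                        # optimal expected number of relevant documents

Recording which (database, k) choice produced each table entry would additionally recover the per-database result-list lengths, as the d_xyz decisions of Figure 2 do.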
Figure 2. The dynamic programming optimization procedure for Equation 16: i) Define d_xyz as the best selection decision under the condition that only databases from number 1 to number x are considered for selection, a total of y*10 documents will be retrieved, and only z databases are selected out of the x database candidates; Sel(x, y, z) is the corresponding utility value of choosing the best selection. ii) Initialize Sel(1, 1..N_Total_rdoc/10, 1..N_sdb) with only the estimated relevance information of the 1st database. iii) Iterate the current database candidate i from 2 to |DB|: for each entry, check whether the utility can be improved by retrieving some multiple of 10 documents from the ith database (combining the best previous solution that uses one fewer selected database and correspondingly fewer documents); otherwise we should not select this database and the previous best solution Sel(i-1, y, z) should be kept. Then set the value of d_iyz and Sel(i, y, z) accordingly. The final selection decision is d_|DB|yz and the corresponding utility value is Sel(|DB|, N_Total_rdoc/10, N_sdb).

Another choice is the TREC news/government data [1,15,17,18,21]. TREC news/government data is concentrated on relatively narrow topics. Compared with TREC Web data: i) The news/government documents are much more similar to the contents provided by a topic-oriented database than an arbitrary web page is; and ii) A database in this testbed is larger than one of TREC Web data. On average a database contains thousands of documents, which is more realistic than a database of TREC Web data with about 250 documents. As the contents and sizes of the databases in the TREC news/government testbed are more similar to those of a topic-oriented database, it is a good candidate to simulate the distributed information retrieval environments of large organizations (companies) or domain-specific hidden Web sites, such as West, which provides access to legal, financial and news text databases [3]. As most current distributed information retrieval systems are developed for the environments of large organizations (companies) or domain-specific hidden Web rather than the open domain hidden Web, the TREC news/government testbed was chosen in this work.

The trec123-100col-bysource testbed is one of the most used TREC news/government testbeds [1,15,17,21] and was chosen in this work. Three testbeds in [21] with skewed database size distributions and different types of relevant document distributions were also used to give a more thorough simulation of real environments.

Trec123-100col-bysource: 100 databases were created from TREC CDs 1, 2 and 3. They were organized by source and publication date [1]. The sizes of the databases are not skewed. Details are in Table 1.

Table 1. Testbed statistics.

Three testbeds built in [21] were based on the trec123-100col-bysource testbed. Each testbed contains many "small" databases and two large databases created by merging about 10-20 small databases together.

Trec123-2ldb-60col ("representative"): The databases in trec123-100col-bysource were sorted in alphabetical order. Two large databases were created by merging 20 small databases with the round-robin method. Thus, the two large databases have more relevant documents due to their large sizes, even though the densities of relevant documents are roughly the same as in the small databases.

Trec123-AP-WSJ-60col ("relevant"): The 24 Associated Press collections and the 16 Wall Street Journal collections in the trec123-100col-bysource testbed were collapsed into two large databases APall and WSJall. The other 60 collections were left unchanged. The APall and WSJall databases have higher densities of documents relevant to TREC queries than the small databases. Thus, the two large databases have many more relevant documents than the small databases.

Trec123-FR-DOE-81col ("nonrelevant"): The 13 Federal Register collections and the 6 Department of Energy collections in the trec123-100col-bysource testbed were collapsed into two large databases FRall and DOEall. The other 80 collections were left unchanged. The FRall and DOEall databases have lower densities of documents relevant to TREC queries than the small databases, even though they are much larger.

100 queries were created from the title fields of TREC topics 51-150. The queries 101-150 were used as training queries and the queries 51-100 were used as test queries (details in Table 2).

4.2 Search Engines
In the uncooperative distributed information retrieval environments of large organizations (companies) or domain-specific hidden Web, different databases may use different types of search engine. To simulate this environment with multiple types of search engines, three different types of search engines were used in the experiments: INQUERY [2], a unigram statistical language model with linear smoothing [12,20] and a TFIDF retrieval algorithm with "ltc" weight [12,20]. All these algorithms were implemented with the Lemur toolkit [12]. These three kinds of search engines were assigned to the databases among the four testbeds in a round-robin manner.

5. RESULTS: RESOURCE SELECTION OF DATABASE RECOMMENDATION
All four testbeds described in Section 4 were used in the experiments to evaluate the resource selection effectiveness of the database recommendation system. The resource descriptions were created using query-based sampling. About 80 queries were sent to each database to download 300 unique documents. The database size statistics were estimated by the sample-resample method [21]. Fifty queries (101-150) were used as training queries to build the relevance logistic model and to fit the exponential functions of the centralized document score curves for large ratio databases (details in Section 3.1). Another 50 queries (51-100) were used as test data.

Figure 3. Resource selection experiments on the four testbeds (recall against the number of collections selected; the panels include the relevant and nonrelevant testbeds).

Resource selection algorithms of database recommendation systems are typically compared using the recall metric R_n [1,17,18,21]. Let B denote a baseline ranking, which is often the RBR (relevance based ranking), and E a ranking provided by a resource selection algorithm. Let B_i and E_i denote the number of relevant documents in the ith ranked database of B or E. Then R_n is defined as:

R_n = (Σ_{i=1..n} E_i) / (Σ_{i=1..n} B_i)

Usually the goal is to search only a few databases, so our figures only show results for selecting up to 20 databases. The experiments summarized in Figure 3 compared the effectiveness of three resource selection algorithms, namely CORI, ReDDE and UUM/HR. The UUM/HR algorithm is described in Section 3.3. It can be seen from Figure 3 that the ReDDE and UUM/HR algorithms are more effective than (on the representative, relevant and nonrelevant testbeds) or as good as (on the Trec123-100Col testbed) the CORI resource selection algorithm. The UUM/HR algorithm is more effective than the ReDDE algorithm on the representative and relevant testbeds and is about the same as the ReDDE algorithm on the Trec123-100Col and nonrelevant testbeds. This suggests that the UUM/HR algorithm is more robust than the ReDDE algorithm.
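A minimal sketch of the R_n metric just defined, assuming E_counts and B_counts are lists of relevant-document counts for the databases in ranked order (names are illustrative):

def recall_at_n(E_counts, B_counts, n):
    # R_n = relevant documents in the top n databases of ranking E,
    #       divided by the same quantity for the relevance-based ranking B.
    return sum(E_counts[:n]) / sum(B_counts[:n])

# e.g. [recall_at_n(E_counts, B_counts, n) for n in range(1, 21)] gives one curve of Figure 3.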
It can be noted that when selecting only a few databases on the Trec123-100Col or the nonrelevant testbeds, the ReDDE algorithm has a small advantage over the UUM/HR algorithm. We attribute this to two causes: i) The ReDDE algorithm was tuned on the Trec123-100Col testbed; and ii) Although the difference is small, this may suggest that our logistic model of estimating probabilities of relevance is not accurate enough. More training data or a more sophisticated model may help to solve this minor puzzle.

Table 3. Precision on the trec123-100col-bysource testbed when 3 databases were selected. (The first baseline is CORI; the second baseline for UUM/HP methods is UUM/HR.)
Table 4. Precision on the trec123-100col-bysource testbed when 5 databases were selected. (The first baseline is CORI; the second baseline for UUM/HP methods is UUM/HR.)
Table 5. Precision on the representative testbed when 3 databases were selected. (The first baseline is CORI; the second baseline for UUM/HP methods is UUM/HR.)
Table 6. Precision on the representative testbed when 5 databases were selected. (The first baseline is CORI; the second baseline for UUM/HP methods is UUM/HR.)

6. RESULTS: DOCUMENT RETRIEVAL EFFECTIVENESS
For document retrieval, the selected databases are searched and the returned results are merged into a single final list. In all of the experiments discussed in this section the results retrieved from individual databases were combined by the semi-supervised learning results merging algorithm. This version of the SSL algorithm [22] is allowed to download a small number of returned document texts "on the fly" to create additional training data in the process of learning the linear models which map database-specific document scores into estimated centralized document scores. It has been shown to be very effective in environments where only short result lists are retrieved from each selected database [22]. This is a common scenario in operational environments and was the case for our experiments. Document retrieval effectiveness was measured by precision at the top part of the final document list.

The experiments in this section were conducted to study the document retrieval effectiveness of five selection algorithms, namely the CORI, ReDDE, UUM/HR, UUM/HP-FL and UUM/HP-VL algorithms. The last three algorithms were proposed in Section 3. The first four algorithms selected 3 or 5 databases, and 50 documents were retrieved from each selected database. The UUM/HP-VL algorithm also selected 3 or 5 databases, but it was allowed to adjust the number of documents to retrieve from each selected database; the number retrieved was constrained to be from 10 to 100, and a multiple of 10. The Trec123-100Col and representative testbeds were selected for document retrieval as they represent two extreme cases of resource selection effectiveness; in one case the CORI algorithm is as good as the other algorithms and in the other case it is quite a lot worse than the other algorithms. Tables 3 and 4 show the results on the Trec123-100Col testbed, and Tables 5 and 6 show the results on the representative testbed.

Table 7. Precision on the trec123-100col-bysource testbed when 3 databases were selected. (The first baseline is CORI; the second baseline for UUM/HP methods is UUM/HR.) (Search engines do not return document scores.)

On the Trec123-100Col testbed, the document retrieval effectiveness of the CORI selection algorithm is roughly the same as or a little bit better than the ReDDE algorithm, but both of them are worse than the other three algorithms (Tables 3 and 4).
The UUM/HR algorithm has a small advantage over the CORI and ReDDE algorithms. One main difference between the UUM/HR algorithm and the ReDDE algorithm was pointed out before: The UUM/HR uses training data and linear interpolation to estimate the centralized document score curves, while the ReDDE algorithm [21] uses a heuristic method, assumes the centralized document score curves are step functions and makes no distinction among the top part of the curves. This difference makes UUM/HR better than the ReDDE algorithm at distinguishing documents with high probabilities of relevance from low probabilities of relevance. Therefore, the UUM/HR reflects the high-precision retrieval goal better than the ReDDE algorithm and thus is more effective for document retrieval. The UUM/HR algorithm does not explicitly optimize the selection decision with respect to the high-precision goal as the UUM/HP-FL and UUM/HP-VL algorithms are designed to do. It can be seen that on this testbed, the UUM/HP-FL and UUM/HP-VL algorithms are much more effective than all the other algorithms. This indicates that their power comes from explicitly optimizing the high-precision goal of document retrieval in Equations 14 and 16. On the representative testbed, CORI is much less effective than other algorithms for distributed document retrieval (Tables 5 and 6). The document retrieval results of the ReDDE algorithm are better than that of the CORI algorithm but still worse than the results of the UUM/HR algorithm. On this testbed the three UUM algorithms are about equally effective. Detailed analysis shows that the overlap of the selected databases between the UUM/HR, UUM/HP-FL and UUM/HP-VL algorithms is much larger than the experiments on the Trec123-100Col testbed, since all of them tend to select the two large databases. This explains why they are about equally effective for document retrieval. In real operational environments, databases may return no document scores and report only ranked lists of results. As the unified utility maximization model only utilizes retrieval scores of sampled documents with a centralized retrieval algorithm to calculate the probabilities of relevance, it makes database selection decisions without referring to the document scores from individual databases and can be easily generalized to this case of rank lists without document scores. The only adjustment is that the SSL algorithm merges ranked lists without document scores by assigning the documents with pseudo-document scores normalized for their ranks (In a ranked list of 50 documents, the first one has a score of 1, the second has a score of 0.98 etc), which has been studied in [22]. The experiment results on trec123-100Col-bysource testbed with 3 selected databases are shown in Table 7. The experiment setting was the same as before except that the document scores were eliminated intentionally and the selected databases only return ranked lists of document ids. It can be seen from the results that the UUM/HP-FL and UUM/HP-VL work well with databases returning no document scores and are still more effective than other alternatives. Other experiments with databases that return no document scores are not reported but they show similar results to prove the effectiveness of UUM/HP-FL and UUM/HPVL algorithms. The above experiments suggest that it is very important to optimize the high-precision goal explicitly in document retrieval. 
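For completeness, a minimal sketch of the rank-based pseudo-scores mentioned above for databases that return only ranked lists of document ids; the fixed 0.02 decrement simply reproduces the 1, 0.98, ... pattern described in the text and is an assumption beyond the ranks given there.

def pseudo_scores(ranked_doc_ids, decrement=0.02):
    # The 1st document gets 1.0, the 2nd 0.98, the 3rd 0.96, and so on;
    # these stand in for missing database-specific scores before SSL merging.
    return [(doc_id, 1.0 - decrement * rank) for rank, doc_id in enumerate(ranked_doc_ids)]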
The new algorithms based on this principle achieve better or at least as good results as the prior state-of-the-art algorithms in several environments.

7. CONCLUSION
Distributed information retrieval solves the problem of finding information that is scattered among many text databases on local area networks and the Internet. Most previous research uses resource selection algorithms designed for database recommendation in distributed document retrieval applications. We argue that the high-recall resource selection goal of database recommendation and the high-precision goal of document retrieval are related but not identical. This kind of inconsistency has also been observed in previous work, but the prior solutions either used heuristic methods or assumed cooperation by individual databases (e.g., all the databases used the same kind of search engines), which is frequently not true in uncooperative environments. In this work we propose a unified utility maximization model to integrate the resource selection of database recommendation and document retrieval tasks into a single unified framework. In this framework, the selection decisions are obtained by optimizing different objective functions. As far as we know, this is the first work that tries to view and theoretically model the distributed information retrieval task in an integrated manner.

The new framework continues a recent research trend studying the use of query-based sampling and a centralized sample database. A single logistic model was trained on the centralized sample database to estimate the probabilities of relevance of documents from their centralized retrieval scores, while the centralized sample database serves as a bridge to connect the individual databases with the centralized logistic model. Therefore, the probabilities of relevance for all the documents across the databases can be estimated with a very small amount of human relevance judgment, which is much more efficient than previous methods that build a separate model for each database. This framework is not only more theoretically solid but also very effective. One algorithm for resource selection (UUM/HR) and two algorithms for document retrieval (UUM/HP-FL and UUM/HP-VL) are derived from this framework. Empirical studies have been conducted on testbeds that simulate the distributed search solutions of large organizations (companies) or the domain-specific hidden Web. Furthermore, the UUM/HP-FL and UUM/HP-VL algorithms are extended with a variant of the SSL results merging algorithm to address the distributed document retrieval task when selected databases do not return document scores. Experiments have shown that these algorithms achieve results that are at least as good as the prior state-of-the-art, and sometimes considerably better. Detailed analysis indicates that the advantage of these algorithms comes from explicitly optimizing the goals of the specific tasks.

The unified utility maximization framework is open for different extensions. When cost is associated with searching the online databases, the utility framework can be adjusted to automatically estimate the best number of databases to search so that a large number of relevant documents can be retrieved with relatively small costs. Another extension of the framework is to consider the retrieval effectiveness of the online databases, which is an important issue in operational environments. All of these are directions for future research.
Unified Utility Maximization Framework for Resource Selection ABSTRACT This paper presents a unified utility framework for resource selection of distributed text information retrieval. This new framework shows an efficient and effective way to infer the probabilities of relevance of all the documents across the text databases. With the estimated relevance information, resource selection can be made by explicitly optimizing the goals of different applications. Specifically, when used for database recommendation, the selection is optimized for the goal of highrecall (include as many relevant documents as possible in the selected databases); when used for distributed document retrieval, the selection targets the high-precision goal (high precision in the final merged list of documents). This new model provides a more solid framework for distributed information retrieval. Empirical studies show that it is at least as effective as other state-of-the-art algorithms. 1. INTRODUCTION Conventional search engines such as Google or AltaVista use ad-hoc information retrieval solution by assuming all the searchable documents can be copied into a single centralized database for the purpose of indexing. Distributed information retrieval, also known as federated search [1,4,7,11,14,22] is different from ad-hoc information retrieval as it addresses the cases when documents cannot be acquired and stored in a single database. For example, "Hidden Web" contents (also called "invisible" or "deep" Web contents) are information on the Web that cannot be accessed by the conventional search engines. Hidden web contents have been estimated to be 2-50 [19] times larger than the contents that can be searched by conventional search engines. Therefore, it is very important to search this type of valuable information. The architecture of distributed search solution is highly influenced by different environmental characteristics. In a small local area network such as small company environments, the information providers may cooperate to provide corpus statistics or use the same type of search engines. Early distributed information retrieval research focused on this type of cooperative environments [1,8]. On the other side, in a wide area network such as very large corporate environments or on the Web there are many types of search engines and it is difficult to assume that all the information providers can cooperate as they are required. Even if they are willing to cooperate in these environments, it may be hard to enforce a single solution for all the information providers or to detect whether information sources provide the correct information as they are required. Many applications fall into the latter type of uncooperative environments such as the Mind project [16] which integrates non-cooperating digital libraries or the QProber system [9] which supports browsing and searching of uncooperative hidden Web databases. In this paper, we focus mainly on uncooperative environments that contain multiple types of independent search engines. There are three important sub-problems in distributed information retrieval. First, information about the contents of each individual database must be acquired (resource representation) [1,8,21]. Second, given a query, a set of resources must be selected to do the search (resource selection) [5,7,21]. 
Third, the results retrieved from all the selected resources have to be merged into a single final list before it can be presented to the end user (retrieval and results merging) [1,5,20,22]. Many types of solutions exist for distributed information retrieval. Invisible-web. net1 provides guided browsing of hidden Web databases by collecting the resource descriptions of these databases and building hierarchies of classes that group them by similar topics. A database recommendation system goes a step further than a browsing system like Invisible-web. net by recommending most relevant information sources to users' queries. It is composed of the resource description and the resource selection components. This solution is useful when the users want to browse the selected databases by themselves instead of asking the system to retrieve relevant documents automatically. Distributed document retrieval is a more sophisticated task. It selects relevant information sources for users' queries as the database recommendation system does. Furthermore, users' queries are forwarded to the corresponding selected databases and the returned individual ranked lists are merged into a single list to present to the users. The goal of a database recommendation system is to select a small set of resources that contain as many relevant documents as possible, which we call a high-recall goal. On the other side, the effectiveness of distributed document retrieval is often measured by the Precision of the final merged document result list, which we call a high-precision goal. Prior research indicated that these two goals are related but not identical [4,21]. However, most previous solutions simply use effective resource selection algorithm of database recommendation system for distributed document retrieval system or solve the inconsistency with heuristic methods [1,4,21]. This paper presents a unified utility maximization framework to integrate the resource selection problem of both database recommendation and distributed document retrieval together by treating them as different optimization goals. First, a centralized sample database is built by randomly sampling a small amount of documents from each database with query-based sampling [1]; database size statistics are also estimated [21]. A logistic transformation model is learned off line with a small amount of training queries to map the centralized document scores in the centralized sample database to the corresponding probabilities of relevance. Second, after a new query is submitted, the query can be used to search the centralized sample database which produces a score for each sampled document. The probability of relevance for each document in the centralized sample database can be estimated by applying the logistic model to each document's score. Then, the probabilities of relevance of all the (mostly unseen) documents among the available databases can be estimated using the probabilities of relevance of the documents in the centralized sample database and the database size estimates. For the task of resource selection for a database recommendation system, the databases can be ranked by the expected number of relevant documents to meet the high-recall goal. For resource selection for a distributed document retrieval system, databases containing a small number of documents with large probabilities of relevance are favored over databases containing many documents with small probabilities of relevance. 
This selection criterion meets the high-precision goal of distributed document retrieval application. Furthermore, the Semi-supervised learning (SSL) [20,22] algorithm is applied to merge the returned documents into a final ranked list. The unified utility framework makes very few assumptions and works in uncooperative environments. Two key features make it a more solid model for distributed information retrieval: i) It formalizes the resource selection problems of different applications as various utility functions, and optimizes the utility functions to achieve the optimal results accordingly; and ii) It shows an effective and efficient way to estimate the probabilities of relevance of all documents across databases. Specifically, the framework builds logistic models on the centralized sample database to transform centralized retrieval scores to the corresponding probabilities of relevance and uses the centralized sample database as the bridge between individual databases and the logistic model. The human effort (relevance judgment) required to train the single centralized logistic model does not scale with the number of databases. This is a large advantage over previous research, which required the amount of human effort to be linear with the number of databases [7,15]. The unified utility framework is not only more theoretically solid but also very effective. Empirical studies show the new model to be at least as accurate as the state-of-the-art algorithms in a variety of configurations. The next section discusses related work. Section 3 describes the new unified utility maximization model. Section 4 explains our experimental methodology. Sections 5 and 6 present our experimental results for resource selection and document retrieval. Section 7 concludes. 2. PRIOR RESEARCH 3. UNIFIED UTILITY MAXIMIZATION FRAMEWORK 3.1 Estimating Probabilities of Relevance 3.1.1 Learning Probabilities of Relevance 3.1.2 Estimating Centralized Document Scores 3.2 The Unified Utility Maximization Model 3.4 Resource Selection for High-Precision 4. EXPERIMENTAL METHODOLOGY 4.1 Testbeds 5. RESULTS: RESOURCE SELECTION OF DATABASE RECOMMENDATION 4.2 Search Engines 6. RESULTS: DOCUMENT RETRIEVAL EFFECTIVENESS 7. CONCLUSION Distributed information retrieval solves the problem of finding information that is scattered among many text databases on local area networks and Internets. Most previous research use effective resource selection algorithm of database recommendation system for distributed document retrieval application. We argue that the high-recall resource selection goal of database recommendation and high-precision goal of document retrieval are related but not identical. This kind of inconsistency has also been observed in previous work, but the prior solutions either used heuristic methods or assumed cooperation by individual databases (e.g., all the databases used the same kind of search engines), which is frequently not true in the uncooperative environment. In this work we propose a unified utility maximization model to integrate the resource selection of database recommendation and document retrieval tasks into a single unified framework. In this framework, the selection decisions are obtained by optimizing different objective functions. As far as we know, this is the first work that tries to view and theoretically model the distributed information retrieval task in an integrated manner. 
The new framework continues a recent research trend studying the use of query-based sampling and a centralized sample database. A single logistic model was trained on the centralized 40 sample database to estimate the probabilities of relevance of documents by their centralized retrieval scores, while the centralized sample database serves as a bridge to connect the individual databases with the centralized logistic model. Therefore, the probabilities of relevance for all the documents across the databases can be estimated with very small amount of human relevance judgment, which is much more efficient than previous methods that build a separate model for each database. This framework is not only more theoretically solid but also very effective. One algorithm for resource selection (UUM/HR) and two algorithms for document retrieval (UUM/HP-FL and UUM/HP-VL) are derived from this framework. Empirical studies have been conducted on testbeds to simulate the distributed search solutions of large organizations (companies) or domain-specific hidden Web. Furthermore, the UUM/HP-FL and UUM/HP-VL resource selection algorithms are extended with a variant of SSL results merging algorithm to address the distributed document retrieval task when selected databases do not return document scores. Experiments have shown that these algorithms achieve results that are at least as good as the prior state-of-the-art, and sometimes considerably better. Detailed analysis indicates that the advantage of these algorithms comes from explicitly optimizing the goals of the specific tasks. The unified utility maximization framework is open for different extensions. When cost is associated with searching the online databases, the utility framework can be adjusted to automatically estimate the best number of databases to search so that a large amount of relevant documents can be retrieved with relatively small costs. Another extension of the framework is to consider the retrieval effectiveness of the online databases, which is an important issue in the operational environments. All of these are the directions of future research.
Unified Utility Maximization Framework for Resource Selection ABSTRACT This paper presents a unified utility framework for resource selection of distributed text information retrieval. This new framework shows an efficient and effective way to infer the probabilities of relevance of all the documents across the text databases. With the estimated relevance information, resource selection can be made by explicitly optimizing the goals of different applications. Specifically, when used for database recommendation, the selection is optimized for the goal of highrecall (include as many relevant documents as possible in the selected databases); when used for distributed document retrieval, the selection targets the high-precision goal (high precision in the final merged list of documents). This new model provides a more solid framework for distributed information retrieval. Empirical studies show that it is at least as effective as other state-of-the-art algorithms. 1. INTRODUCTION Conventional search engines such as Google or AltaVista use ad-hoc information retrieval solution by assuming all the searchable documents can be copied into a single centralized database for the purpose of indexing. Distributed information retrieval, also known as federated search [1,4,7,11,14,22] is different from ad-hoc information retrieval as it addresses the cases when documents cannot be acquired and stored in a single database. that cannot be accessed by the conventional search engines. Therefore, it is very important to search this type of valuable information. The architecture of distributed search solution is highly influenced by different environmental characteristics. In a small local area network such as small company environments, the information providers may cooperate to provide corpus statistics or use the same type of search engines. Early distributed information retrieval research focused on this type of cooperative environments [1,8]. In this paper, we focus mainly on uncooperative environments that contain multiple types of independent search engines. There are three important sub-problems in distributed information retrieval. First, information about the contents of each individual database must be acquired (resource representation) [1,8,21]. Second, given a query, a set of resources must be selected to do the search (resource selection) [5,7,21]. Many types of solutions exist for distributed information retrieval. Invisible-web. net1 provides guided browsing of hidden Web databases by collecting the resource descriptions of these databases and building hierarchies of classes that group them by similar topics. A database recommendation system goes a step further than a browsing system like Invisible-web. net by recommending most relevant information sources to users' queries. It is composed of the resource description and the resource selection components. This solution is useful when the users want to browse the selected databases by themselves instead of asking the system to retrieve relevant documents automatically. Distributed document retrieval is a more sophisticated task. It selects relevant information sources for users' queries as the database recommendation system does. Furthermore, users' queries are forwarded to the corresponding selected databases and the returned individual ranked lists are merged into a single list to present to the users. 
The goal of a database recommendation system is to select a small set of resources that contain as many relevant documents as possible, which we call a high-recall goal. On the other side, the effectiveness of distributed document retrieval is often measured by the Precision of the final merged document result list, which we call a high-precision goal. Prior research indicated that these two goals are related but not identical [4,21]. However, most previous solutions simply use effective resource selection algorithm of database recommendation system for distributed document retrieval system or solve the inconsistency with heuristic methods [1,4,21]. This paper presents a unified utility maximization framework to integrate the resource selection problem of both database recommendation and distributed document retrieval together by treating them as different optimization goals. First, a centralized sample database is built by randomly sampling a small amount of documents from each database with query-based sampling [1]; database size statistics are also estimated [21]. A logistic transformation model is learned off line with a small amount of training queries to map the centralized document scores in the centralized sample database to the corresponding probabilities of relevance. Second, after a new query is submitted, the query can be used to search the centralized sample database which produces a score for each sampled document. The probability of relevance for each document in the centralized sample database can be estimated by applying the logistic model to each document's score. Then, the probabilities of relevance of all the (mostly unseen) documents among the available databases can be estimated using the probabilities of relevance of the documents in the centralized sample database and the database size estimates. For the task of resource selection for a database recommendation system, the databases can be ranked by the expected number of relevant documents to meet the high-recall goal. For resource selection for a distributed document retrieval system, databases containing a small number of documents with large probabilities of relevance are favored over databases containing many documents with small probabilities of relevance. This selection criterion meets the high-precision goal of distributed document retrieval application. Furthermore, the Semi-supervised learning (SSL) [20,22] algorithm is applied to merge the returned documents into a final ranked list. The unified utility framework makes very few assumptions and works in uncooperative environments. Specifically, the framework builds logistic models on the centralized sample database to transform centralized retrieval scores to the corresponding probabilities of relevance and uses the centralized sample database as the bridge between individual databases and the logistic model. The human effort (relevance judgment) required to train the single centralized logistic model does not scale with the number of databases. This is a large advantage over previous research, which required the amount of human effort to be linear with the number of databases [7,15]. The unified utility framework is not only more theoretically solid but also very effective. Empirical studies show the new model to be at least as accurate as the state-of-the-art algorithms in a variety of configurations. The next section discusses related work. Section 3 describes the new unified utility maximization model. Section 4 explains our experimental methodology. 
Sections 5 and 6 present our experimental results for resource selection and document retrieval. Section 7 concludes. 7. CONCLUSION Distributed information retrieval solves the problem of finding information that is scattered among many text databases on local area networks and Internets. Most previous research use effective resource selection algorithm of database recommendation system for distributed document retrieval application. We argue that the high-recall resource selection goal of database recommendation and high-precision goal of document retrieval are related but not identical. In this work we propose a unified utility maximization model to integrate the resource selection of database recommendation and document retrieval tasks into a single unified framework. In this framework, the selection decisions are obtained by optimizing different objective functions. As far as we know, this is the first work that tries to view and theoretically model the distributed information retrieval task in an integrated manner. The new framework continues a recent research trend studying the use of query-based sampling and a centralized sample database. A single logistic model was trained on the centralized 40 sample database to estimate the probabilities of relevance of documents by their centralized retrieval scores, while the centralized sample database serves as a bridge to connect the individual databases with the centralized logistic model. Therefore, the probabilities of relevance for all the documents across the databases can be estimated with very small amount of human relevance judgment, which is much more efficient than previous methods that build a separate model for each database. This framework is not only more theoretically solid but also very effective. One algorithm for resource selection (UUM/HR) and two algorithms for document retrieval (UUM/HP-FL and UUM/HP-VL) are derived from this framework. Empirical studies have been conducted on testbeds to simulate the distributed search solutions of large organizations (companies) or domain-specific hidden Web. Furthermore, the UUM/HP-FL and UUM/HP-VL resource selection algorithms are extended with a variant of SSL results merging algorithm to address the distributed document retrieval task when selected databases do not return document scores. The unified utility maximization framework is open for different extensions. When cost is associated with searching the online databases, the utility framework can be adjusted to automatically estimate the best number of databases to search so that a large amount of relevant documents can be retrieved with relatively small costs. Another extension of the framework is to consider the retrieval effectiveness of the online databases, which is an important issue in the operational environments.
H-42
HITS Hits TREC: Exploring IR Evaluation Results with Network Analysis
We propose a novel method of analysing data gathered from TREC or similar information retrieval evaluation experiments. We define two normalized versions of average precision, that we use to construct a weighted bipartite graph of TREC systems and topics. We analyze the meaning of well known -- and somewhat generalized -- indicators from social network analysis on the Systems-Topics graph. We apply this method to an analysis of TREC 8 data; among the results, we find that authority measures systems performance, that hubness of topics reveals that some topics are better than others at distinguishing more or less effective systems, that with current measures a system that wants to be effective in TREC needs to be effective on easy topics, and that by using different effectiveness measures this is no longer the case.
[ "hit", "trec", "ir evalu", "network analysi", "inform retriev evalu experi", "weight bipartit graph", "social network analysi", "system-topic graph", "hit algorithm", "human assessor", "mean averag precis", "web search engin implement", "link analysi techniqu", "inlink", "pagerank", "stem", "kleinberg' hit algorithm" ]
[ "P", "P", "P", "P", "P", "P", "P", "M", "M", "U", "R", "U", "M", "U", "U", "U", "M" ]
HITS Hits TRECExploring IR Evaluation Results with Network Analysis Stefano Mizzaro Dept. of Mathematics and Computer Science University of Udine Via delle Scienze, 206 - 33100 Udine, Italy mizzaro@dimi.uniud.it Stephen Robertson Microsoft Research 7 JJ Thomson Avenue Cambridge CB3 0FB, UK ser@microsoft.com ABSTRACT We propose a novel method of analysing data gathered from TREC or similar information retrieval evaluation experiments. We define two normalized versions of average precision, that we use to construct a weighted bipartite graph of TREC systems and topics. We analyze the meaning of well known - and somewhat generalized - indicators from social network analysis on the Systems-Topics graph. We apply this method to an analysis of TREC 8 data; among the results, we find that authority measures systems performance, that hubness of topics reveals that some topics are better than others at distinguishing more or less effective systems, that with current measures a system that wants to be effective in TREC needs to be effective on easy topics, and that by using different effectiveness measures this is no longer the case. Categories and Subject Descriptors H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval General Terms Measurement, Experimentation 1. INTRODUCTION Evaluation is a primary concern in the Information Retrieval (IR) field. TREC (Text REtrieval Conference) [12, 15] is an annual benchmarking exercise that has become a de facto standard in IR evaluation: before the actual conference, TREC provides to participants a collection of documents and a set of topics (representations of information needs). Participants use their systems to retrieve, and submit to TREC, a list of documents for each topic. After the lists have been submitted and pooled, the TREC organizers employ human assessors to provide relevance judgements on the pooled set. This defines a set of relevant documents for each topic. System effectiveness is then measured by well established metrics (Mean Average Precision being the most used). Other conferences such as NTCIR, INEX, CLEF provide comparable data. Network analysis is a discipline that studies features and properties of (usually large) networks, or graphs. Of particular importance is Social Network Analysis [16], that studies networks made up by links among humans (friendship, acquaintance, co-authorship, bibliographic citation, etc.). Network analysis and IR fruitfully meet in Web Search Engine implementation, as is already described in textbooks [3,6]. Current search engines use link analysis techniques to help rank the retrieved documents. Some indicators (and the corresponding algorithms that compute them) have been found useful in this respect, and are nowadays well known: Inlinks (the number of links to a Web page), PageRank [7], and HITS (Hyperlink-Induced Topic Search) [5]. Several extensions to these algorithms have been and are being proposed. These indicators and algorithms might be quite general in nature, and can be used for applications which are very different from search result ranking. One example is using HITS for stemming, as described by Agosti et al. [1]. In this paper, we propose and demonstrate a method for constructing a network, specifically a weighted complete bidirectional directed bipartite graph, on a set of TREC topics and participating systems. Links represent effectiveness measurements on system-topic pairs. 
We then apply analysis methods originally developed for search applications to the resulting network. This reveals phenomena previously hidden in TREC data. In passing, we also provide a small generalization to Kleinberg's HITS algorithm, as well as to Inlinks and PageRank. The paper is organized as follows: Sect. 2 gives some motivations for the work. Sect. 3 presents the basic ideas of normalizing average precision and of constructing a Systems-Topics graph, whose properties are analyzed in Sect. 4; Sect. 5 presents some experiments on TREC 8 data; Sect. 6 discusses some issues and Sect. 7 closes the paper.

2. MOTIVATIONS
We are interested in the following hypotheses:
1. Some systems are more effective than others;
2. Some topics are easier than others;
3. Some systems are better than others at distinguishing easy and difficult topics;
4. Some topics are better than others at distinguishing more or less effective systems.

Table 1: AP, MAP and AAP.
(a)      t1           · · ·   tn           MAP
 s1      AP(s1,t1)    · · ·   AP(s1,tn)    MAP(s1)
 ...     ...                  ...          ...
 sm      AP(sm,t1)    · · ·   AP(sm,tn)    MAP(sm)
 AAP     AAP(t1)      · · ·   AAP(tn)
(b)      t1      t2      · · ·   MAP
 s1      0.5     0.4     · · ·   0.6
 s2      0.4     · · ·   · · ·   0.3
 ...     ...     ...             ...
 AAP     0.6     0.3     · · ·

The first of these hypotheses needs no further justification - every reported significant difference between any two systems supports it. There is now also quite a lot of evidence for the second, centered on the TREC Robust Track [14]. Our primary interest is in the third and fourth. The third might be regarded as being of purely academic interest; however, the fourth has the potential for being of major practical importance in evaluation studies. If we could identify a relatively small number of topics which were really good at distinguishing effective and ineffective systems, we could save considerable effort in evaluating systems. One possible direction from this point would be to attempt direct identification of such small sets of topics. However, in the present paper, we seek instead to explore the relationships suggested by the hypotheses, between what different topics tell us about systems and what different systems tell us about topics. We seek methods of building and analysing a matrix of system-topic normalised performances, with a view to giving insight into the issue and confirming or refuting the third and fourth hypotheses. It turns out that the obvious symmetry implied by the above formulation of the hypotheses is a property worth investigating, and the investigation does indeed give us valuable insights.

3. THE IDEA
3.1 1st step: average precision table
From TREC results, one can produce an Average Precision (AP) table (see Tab. 1a): each AP(si, tj) value measures the AP of system si on topic tj. Besides AP values, the table shows Mean Average Precision (MAP) values, i.e., the mean of the AP values for a single system over all topics, and what we call Average Average Precision (AAP) values, i.e., the average of the AP values for a single topic over all systems:

MAP(si) = (1/n) * Σ_{j=1..n} AP(si, tj),    (1)
AAP(tj) = (1/m) * Σ_{i=1..m} AP(si, tj).    (2)

MAP values are indicators of system performance: a higher MAP means a good system. AAP values are indicators of the performance on a topic: a higher AAP means an easy topic - a topic on which all or most systems have good performance.

3.2 Critique of pure AP
MAP is a standard, well known, and widely used IR effectiveness measure. Single AP values are used too (e.g., in AP histograms).
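A minimal sketch of Equations (1) and (2), assuming the AP values are held in an m-by-n numpy array with systems in rows and topics in columns (the numbers are illustrative, in the spirit of Tab. 1b):

import numpy as np

# AP matrix: rows are systems s1..sm, columns are topics t1..tn
AP = np.array([[0.5, 0.4, 0.9],
               [0.4, 0.1, 0.4]])

MAP = AP.mean(axis=1)   # Equation (1): mean over topics, one value per system
AAP = AP.mean(axis=0)   # Equation (2): mean over systems, one value per topic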
Topic difficulty is often discussed (e.g., in TREC Robust track [14]), although AAP values are not used and, to the best of our knowledge, have never been proposed (the median, not the average, of AP on a topic is used to produce TREC AP histograms [11]). However, the AP values in Tab. 1 present two limitations, which are symmetric in some respect:
• Problem 1. They are not reliable to compare the effectiveness of a system on different topics, relative to the other systems. If, for example, AP(s1, t1) > AP(s1, t2), can we infer that s1 is a good system (i.e., has a good performance) on t1 and a bad system on t2? The answer is no: t1 might be an easy topic (with high AAP) and t2 a difficult one (low AAP). See an example in Tab. 1b: s1 is outperformed (on average) by the other systems on t1, and it outperforms the other systems on t2.
• Problem 2. Conversely, if, for example, AP(s1, t1) > AP(s2, t1), can we infer that t1 is considered easier by s1 than by s2? No, we cannot: s1 might be a good system (with high MAP) and s2 a bad one (low MAP); see an example in Tab. 1b.
These two problems are a sort of breakdown of the well known high influence of topics on IR evaluation; again, our formulation makes explicit the topics / systems symmetry.

3.3 2nd step: normalizations
To avoid these two problems, we can normalize the AP table in two ways. The first normalization removes the influence of the single topic ease on system performance. Each AP(s_i, t_j) value in the table depends on both system goodness and topic ease (the value will increase if a system is good and/or the topic is easy). By subtracting from each AP(s_i, t_j) the AAP(t_j) value, we obtain normalized AP values (APA(s_i, t_j), Normalized AP according to AAP):

  APA(s_i, t_j) = AP(s_i, t_j) − AAP(t_j),    (3)

that depend on system performance only (the value will increase only if system performance is good). See Tab. 2a.
The second normalization removes the influence of the single system effectiveness on topic ease: by subtracting from each AP(s_i, t_j) the MAP(s_i) value, we obtain normalized AP values (APM(s_i, t_j), Normalized AP according to MAP):

  APM(s_i, t_j) = AP(s_i, t_j) − MAP(s_i),    (4)

that depend on topic ease only (the value will increase only if the topic is easy, i.e., all systems perform well on that topic). See Tab. 2b.

Table 2: Normalizations: normalized AP (APA) and normalized MAP (\overline{MAP}) (a); normalized AP (APM) and normalized AAP (\overline{AAP}) (b); a numeric example (c) and (d)
(a)
         t1            ...  tn            | \overline{MAP}
  s1     APA(s1,t1)    ...  APA(s1,tn)    | \overline{MAP}(s1)
  ...    ...                ...           | ...
  sm     APA(sm,t1)    ...  APA(sm,tn)    | \overline{MAP}(sm)
         0             ...  0             | 0

(b)
         t1            ...  tn            |
  s1     APM(s1,t1)    ...  APM(s1,tn)    | 0
  ...    ...                ...           | ...
  sm     APM(sm,t1)    ...  APM(sm,tn)    | 0
  \overline{AAP}   \overline{AAP}(t1)  ...  \overline{AAP}(tn)  | 0

(c)
         t1      t2     ...  | \overline{MAP}
  s1     −0.1    0.1    ...  | ...
  s2     −0.2    ...    ...  | ...
  ...    ...     ...    ...  |
         0       0      ...  |

(d)
         t1      t2      ...  |
  s1     −0.1    −0.2    ...  | 0
  s2     0.1     ...     ...  | 0
  ...    ...     ...     ...  |
  \overline{AAP}  ...    ...  |

In other words, APA avoids Problem 1: APA(s, t) values measure the performance of system s on topic t normalized according to the ease of the topic (easy topics will not have higher APA values). Now, if, for example, APA(s1, t2) > APA(s1, t1), we can infer that s1 is a good system on t2 and a bad system on t1 (see Tab. 2c). Vice versa, APM avoids Problem 2: APM(s, t) values measure the ease of topic t according to system s, normalized according to goodness of the system (good systems will not lead to higher APM values). If, for example, APM(s2, t1) > APM(s1, t1), we can infer that t1 is considered easier by s2 than by s1 (see Tab. 2d).
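A minimal continuation of the previous sketch, assuming the same toy AP matrix, implements the two normalizations of Eqs. (3) and (4); the assertions check the defining property that APA has zero column means and APM has zero row means.

import numpy as np

AP = np.array([[0.5, 0.4],
               [0.4, 0.2],
               [0.3, 0.1]])              # toy matrix: rows = systems, cols = topics

MAP = AP.mean(axis=1, keepdims=True)     # column vector of MAP(s_i)
AAP = AP.mean(axis=0, keepdims=True)     # row vector of AAP(t_j)

APA = AP - AAP    # Eq. (3): topic ease removed; reflects system goodness only
APM = AP - MAP    # Eq. (4): system goodness removed; reflects topic ease only

assert np.allclose(APA.mean(axis=0), 0)  # column means are zero (cf. Tab. 2a, bottom row)
assert np.allclose(APM.mean(axis=1), 0)  # row means are zero (cf. Tab. 2b, last column)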
On the basis of Tables 2a and 2b, we can also define two new measures of system effectiveness and topic ease, i.e., a Normalized MAP (\overline{MAP}), obtained by averaging the APA values on one row in Tab. 2a, and a Normalized AAP (\overline{AAP}), obtained by averaging the APM values on one column in Tab. 2b:

  \overline{MAP}(s_i) = (1/n) Σ_{j=1}^{n} APA(s_i, t_j),    (5)
  \overline{AAP}(t_j) = (1/m) Σ_{i=1}^{m} APM(s_i, t_j).    (6)

Thus, overall system performance can be measured, besides by means of MAP, also by means of \overline{MAP}. Moreover, \overline{MAP} is equivalent to MAP, as can be immediately proved by using Eqs. (5), (3), and (1):

  \overline{MAP}(s_i) = (1/n) Σ_{j=1}^{n} (AP(s_i, t_j) − AAP(t_j)) = MAP(s_i) − (1/n) Σ_{j=1}^{n} AAP(t_j)

(and (1/n) Σ_{j=1}^{n} AAP(t_j) is the same for all systems). And, conversely, overall topic ease can be measured, besides by means of AAP, also by means of \overline{AAP}, and this is equivalent (the proof is analogous, and relies on Eqs. (6), (4), and (2)).
The two Tables 2a and 2b are interesting per se, and can be analyzed in several different ways. In the following we propose an analysis based on network analysis techniques, mainly Kleinberg's HITS algorithm. There is a little further discussion of these normalizations in Sect. 6.

3.4 3rd step: Systems-Topics Graph
The two tables 2a and 2b can be merged into a single one with the procedure shown in Fig. 1. The obtained matrix can be interpreted as the adjacency matrix of a complete weighted bipartite graph, that we call Systems-Topics graph.

Figure 1: Construction of the adjacency matrix: the APM block and the APA^T block (the transpose of APA) are placed off-diagonal, with zero blocks on the diagonal.

Arcs and weights in the graph can be interpreted as follows:
• (weight on) arc s → t: how much the system s thinks that the topic t is easy - assuming that a system has no knowledge of the other systems (or in other words, how easy we might think the topic is, knowing only the results for this one system). This corresponds to APM values, i.e., to normalized topic ease (Fig. 2a).
• (weight on) arc s ← t: how much the topic t thinks that the system s is good - assuming that a topic has no knowledge of the other topics (or in other words, how good we might think the system is, knowing only the results for this one topic). This corresponds to APA (normalized system effectiveness, Fig. 2b).
Figs. 2c and 2d show the Systems-Topics complete weighted bipartite graph, on a toy example with 4 systems and 2 topics; the graph is split in two parts to have an understandable graphical representation: arcs in Fig. 2c are labeled with APM values; arcs in Fig. 2d are labeled with APA values.

Figure 2: The relationships between systems and topics (a) and (b); and the Systems-Topics graph for a toy example (c) and (d). Dashed arcs correspond to negative values.

Figure 3: Hub and Authority computation.

4. ANALYSIS OF THE GRAPH
4.1 Weighted Inlinks, Outlinks, PageRank
The sum of weighted outlinks, i.e., the sum of the weights on the outgoing arcs from each node, is always zero:
• The outlinks on each node corresponding to a system s (Fig. 2c) is the sum of all the corresponding APM values on one row of the matrix in Tab. 2b.
• The outlinks on each node corresponding to a topic t (Fig. 2d) is the sum of all the corresponding APA values on one row of the transpose of the matrix in Tab. 2a.
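The block construction of Fig. 1, together with the outlink property just stated (and the inlink averages discussed next), can be checked with a short sketch. The node ordering (systems first, then topics) and the variable names are our own choices, not prescribed by the figures.

import numpy as np

AP = np.array([[0.5, 0.4],
               [0.4, 0.2],
               [0.3, 0.1]])                       # toy run-by-topic AP matrix
APA = AP - AP.mean(axis=0, keepdims=True)
APM = AP - AP.mean(axis=1, keepdims=True)
m, n = AP.shape                                   # m systems, n topics

# Fig. 1: nodes are ordered s1..sm, t1..tn; arc s->t carries APM(s,t),
# arc t->s carries APA(s,t); the diagonal blocks are zero.
A = np.block([[np.zeros((m, m)), APM],
              [APA.T,            np.zeros((n, n))]])

print(np.allclose(A.sum(axis=1), 0))              # weighted outlinks: all zero

divisor = np.array([n] * m + [m] * n)             # number of incoming arcs per node
in_avg = A.sum(axis=0) / divisor                  # average weighted inlinks
print(in_avg[:m])   # normalized MAP of each system (MAP minus a constant)
print(in_avg[m:])   # normalized AAP of each topic (AAP minus a constant)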
The average of weighted inlinks (usually, the sum of the weights on the incoming arcs to each node is used in place of the average; since the graph is complete, it makes no difference) is:
• \overline{MAP} for each node corresponding to a system s; this corresponds to the average of all the corresponding APA values on one column of the APA^T part of the adjacency matrix (see Fig. 1).
• \overline{AAP} for each node corresponding to a topic t; this corresponds to the average of all the corresponding APM values on one column of the APM part of the adjacency matrix (see Fig. 1).
Therefore, weighted inlinks measure either system effectiveness or topic ease; weighted outlinks are not meaningful. We could also apply the PageRank algorithm to the network; the meaning of the PageRank of a node is not quite so obvious as Inlinks and Outlinks, but it also seems a sensible measure of either system effectiveness or topic ease: if a system is effective, it will have several incoming links with high weights (APA); if a topic is easy it will have high weights (APM) on the incoming links too. We will see experimental confirmation in the following.

4.2 Hubs and Authorities
Let us now turn to more sophisticated indicators. Kleinberg's HITS algorithm defines, for a directed graph, two indicators: hubness and authority; we reiterate here some of the basic details of the HITS algorithm in order to emphasize both the nature of our generalization and the interpretation of the HITS concepts in this context. Usually, hubness and authority are defined as

  h(x) = Σ_{x→y} a(y)  and  a(x) = Σ_{y→x} h(y),

and described intuitively as "a good hub links many good authorities; a good authority is linked from many good hubs". As it is well known, an equivalent formulation in linear algebra terms is (see also Fig. 3):

  h = A a  and  a = A^T h    (7)

(where h is the hubness vector, with the hub values for all the nodes; a is the authority vector; A is the adjacency matrix of the graph; and A^T its transpose). Usually, A contains 0s and 1s only, corresponding to presence and absence of unweighted directed arcs, but Eq. (7) can be immediately generalized to (in fact, it is already valid for) A containing any real value, i.e., to weighted graphs. Therefore we can have a generalized version (or rather a generalized interpretation, since the formulation is still the original one) of hubness and authority for all nodes in a graph. An intuitive formulation of this generalized HITS is still available, although slightly more complex: "a good hub links, by means of arcs having high weights, many good authorities; a good authority is linked, by means of arcs having high weights, from many good hubs". Since arc weights can be, in general, negative, hub and authority values can be negative, and one could speak of unhubness and unauthority; the intuitive formulation could be completed by adding that "a good hub links good unauthorities by means of links with highly negative weights; a good authority is linked by good unhubs by means of links with highly negative weights". And, also, "a good unhub links positively good unauthorities and negatively good authorities; a good unauthority is linked positively from good unhubs and negatively from good hubs".
Let us now apply generalized HITS to our Systems-Topics graph. We compute a(s), h(s), a(t), and h(t). Intuitively, we expect that a(s) is somehow similar to Inlinks, so it should be a measure of either systems effectiveness or topic ease.
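As an aside before interpreting these indicators on the Systems-Topics graph, Eq. (7) can be run unchanged on the weighted (and possibly negative) adjacency matrix by simple power iteration. The sketch below is one straightforward way to do it; the normalization step and iteration count are our own choices, and the sign of the converged vectors is arbitrary, as usual for eigenvector computations.

import numpy as np

def generalized_hits(A, iterations=200):
    # Eq. (7): h = A a, a = A^T h, iterated to a fixed point (up to scaling).
    # A may contain negative weights, so hub/authority values may be negative.
    h = np.ones(A.shape[0])
    a = np.ones(A.shape[0])
    for _ in range(iterations):
        h = A @ a
        a = A.T @ h
        h /= np.linalg.norm(h) + 1e-12   # keep the vectors bounded; only direction matters
        a /= np.linalg.norm(a) + 1e-12
    return h, a

# h, a = generalized_hits(A)   # A: Systems-Topics adjacency matrix built as in Fig. 1
# h[:m], a[:m] -> hubness/authority of systems; h[m:], a[m:] -> hubness/authority of topics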
Similarly, hubness should be more similar to Outlinks, thus less meaningful, although the interplay between hub and authority might lead to the discovery of something different. Let us start by remarking that authority of topics and hubness of systems depend only on each other; similarly hubness of topics and authority of systems depend only on each other: see Figs. 2c, 2d and 3. Thus the two graphs in Figs. 2c and 2d can be analyzed independently. In fact the entire HITS analysis could be done in one direction only, with just APM(s, t) values or alternatively with just APA(s, t). As discussed below, probably most interest resides in the hubness of topics and the authority of systems, so the latter makes sense. However, in this paper, we pursue both analyses together, because the symmetry itself is interesting.
Considering Fig. 2c we can state that:
• Authority a(t) of a topic node t increases when:
  - if h(s_i) > 0, APM(s_i, t) increases (or if APM(s_i, t) > 0, h(s_i) increases);
  - if h(s_i) < 0, APM(s_i, t) decreases (or if APM(s_i, t) < 0, h(s_i) decreases).
• Hubness h(s) of a system node s increases when:
  - if a(t_j) > 0, APM(s, t_j) increases (or, if APM(s, t_j) > 0, a(t_j) increases);
  - if a(t_j) < 0, APM(s, t_j) decreases (or, if APM(s, t_j) < 0, a(t_j) decreases).
We can summarize this as: a(t) is high if APM(s, t) is high for those systems with high h(s); h(s) is high if APM(s, t) is high for those topics with high a(t). Intuitively, authority a(t) of a topic measures topic ease; hubness h(s) of a system measures the system's capability to recognize easy topics. A system with high unhubness (negative hubness) would tend to regard easy topics as hard and hard ones as easy.
The situation for Fig. 2d, i.e., for a(s) and h(t), is analogous. Authority a(s) of a system node s measures system effectiveness: it increases with the weight on the arc (i.e., APA(s, t_j)) and the hubness of the incoming topic nodes t_j. Hubness h(t) of a topic node t measures topic capability to recognize effective systems: if h(t) > 0, it increases further if APA(s, t_j) increases; if h(t) < 0, it increases if APA(s, t_j) decreases. Intuitively, we can state that "a system has a higher authority if it is more effective on topics with high hubness" and "a topic has a higher hubness if it is easier for those systems which are more effective in general". Conversely for system hubness and topic authority: "a topic has a higher authority if it is easier on systems with high hubness" and "a system has a higher hubness if it is more effective for those topics which are easier in general".
Therefore, for each system we have two indicators: authority (a(s)), measuring system effectiveness, and hubness (h(s)), measuring system capability to estimate topic ease. And for each topic, we have two indicators: authority (a(t)), measuring topic ease, and hubness (h(t)), measuring topic capability to estimate systems effectiveness. We can define them formally as

  a(s) = Σ_t h(t) · APA(s, t),    h(t) = Σ_s a(s) · APA(s, t),
  a(t) = Σ_s h(s) · APM(s, t),    h(s) = Σ_t a(t) · APM(s, t).

We observe that the hubness of topics may be of particular interest for evaluation studies. It may be that we can evaluate the effectiveness of systems efficiently by using relatively few high-hubness topics.

5. EXPERIMENTS
We now turn to discuss if these indicators are meaningful and useful in practice, and how they correlate with standard measures used in TREC.
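Before looking at the TREC 8 data, here is a small computational sketch of the four indicators just defined, together with their correlations against MAP and AAP. Because the two directions decouple, the indicator pairs are, up to scale and sign, the dominant left/right singular vectors of APA and of APM; that reading, the simulated AP matrix (a per-system quality plus per-topic ease plus noise), and all names below are our own illustrative assumptions, not TREC data.

import numpy as np

# Simulate a run-by-topic AP matrix: 20 runs, 10 topics (illustrative only).
rng = np.random.default_rng(0)
quality = rng.uniform(0.1, 0.5, size=(20, 1))        # per-system component
ease = rng.uniform(-0.1, 0.3, size=(1, 10))           # per-topic component
AP = np.clip(quality + ease + rng.normal(0, 0.05, size=(20, 10)), 0, 1)

MAP, AAP = AP.mean(axis=1), AP.mean(axis=0)
APA, APM = AP - AAP, AP - MAP[:, None]

# The four indicators, read as dominant singular pairs of APA and APM
# (fixed points, up to scale and sign, of the definitions above).
U, _, Vt = np.linalg.svd(APA); a_sys, h_top = U[:, 0], Vt[0, :]
U, _, Vt = np.linalg.svd(APM); h_sys, a_top = U[:, 0], Vt[0, :]

def pearson(x, y):
    return np.corrcoef(x, y)[0, 1]

# Singular-vector signs are arbitrary, so report absolute correlations.
print("authority(systems) vs MAP:", abs(pearson(a_sys, MAP)))
print("hub(systems)       vs MAP:", abs(pearson(h_sys, MAP)))
print("authority(topics)  vs AAP:", abs(pearson(a_top, AAP)))
print("hub(topics)        vs AAP:", abs(pearson(h_top, AAP)))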
We have built the Systems-Topics graph for TREC 8 data (featuring 128 systems - actually, runs - on 50 topics). (TREC 8 data actually features 129 systems; due to some bug in our scripts, we did not include one system, 8manexT3D1N0, but the results should not be affected.) This section illustrates the results obtained mining these data according to the method presented in previous sections.
Fig. 4 shows the distributions of AP, APA, and APM: whereas AP is very skewed, both APA and APM are much more symmetric (as it should be, since they are constructed by subtracting the mean).

Figure 4: Distributions of AP, APA, and APM values in TREC 8 data.

Tables 3a and 3b show the Pearson's correlation values between Inlinks, PageRank, Hub, Authority and, respectively, MAP or AAP (Outlinks values are not shown since they are always zero, as seen in Sect. 4). As expected, Inlinks and PageRank have a perfect correlation with MAP and AAP. Authority has a very high correlation too with MAP and AAP; Hub assumes slightly lower values.

Table 3: Correlations between network analysis measures and MAP (a) and AAP (b)
(a)
              MAP   Inlinks  PageRank  Hub   Authority
  MAP         1     1.0      1.0       .80   .99
  Inlinks           1        1.0       .80   .99
  PageRank                   1         .80   .99
  Hub                                  1     .87

(b)
              AAP   Inlinks  PageRank  Hub   Authority
  AAP         1     1.0      1.0       .92   1.0
  Inlinks           1        1.0       .92   1.0
  PageRank                   1         .92   1.0
  Hub                                  1     .93

Let us analyze the correlations more in detail. The correlation charts in Figs. 5a and 5b demonstrate the high correlation between Authority and MAP or AAP. Hubness presents interesting phenomena: both Fig. 5c (correlation with MAP) and Fig. 5d (correlation with AAP) show that correlation is not exact, but neither is it random. This, given the meaning of hubness (capability in estimating topic ease and system effectiveness), means two things: (i) more effective systems are better at estimating topic ease; and (ii) easier topics are better at estimating system effectiveness. Whereas the first statement is fine (there is nothing against it), the second is a bit worrying. It means that system effectiveness in TREC is affected more by easy topics than by difficult topics, which is rather undesirable for quite obvious reasons: a system capable of performing well on a difficult topic, i.e., on a topic on which the other systems perform badly, would be an important result for IR effectiveness; conversely, a system capable of performing well on easy topics is just a confirmation of the state of the art.

Figure 5: Correlations: MAP (x axis) and authority (y axis) of systems (a); AAP and authority of topics (b); MAP and hub of systems (c); and AAP and hub of topics (d).

Indeed, the correlation between hubness and AAP (statement (i) above) is higher than the correlation between hubness and MAP (corresponding to statement (ii)): 0.92 vs. 0.80. However, this phenomenon is quite strong. This is also confirmed by the work being done on the TREC Robust Track [14]. In this respect, it is interesting to see what happens if we use a different measure from MAP (and AAP). The GMAP (Geometric MAP) metric is defined as the geometric mean of AP values, or equivalently as the arithmetic mean of the logarithms of AP values [8].
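A minimal sketch of this transformation (and of the logit variant discussed in Sect. 6.3), assuming AP values held in a numpy array; the ε = 0.00001 substitution for zero AP values is the one quoted just below, while clipping at 1 − ε for the logit is our own choice to keep the function finite.

import numpy as np

EPS = 1e-5                                   # zero AP values are replaced by 0.00001

def log_ap(AP):
    # Arithmetic row means of these values give log-GMAP; exponentiate to get GMAP.
    return np.log(np.maximum(AP, EPS))

def logit_ap(AP):
    # logit-transformed AP (cf. Sect. 6.3): unconstrained range on both sides.
    p = np.clip(AP, EPS, 1 - EPS)
    return np.log(p / (1 - p))

AP = np.array([[0.50, 0.40, 0.00],
               [0.40, 0.20, 0.05]])          # toy values, including a zero AP
GMAP = np.exp(log_ap(AP).mean(axis=1))       # geometric mean of AP per system
# The normalizations of Sect. 3.3 can then be re-run on log_ap(AP) or logit_ap(AP),
# with GMAP/GAAP (or mean/average logitAP) replacing MAP/AAP.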
GMAP has the property of giving more weight to the low end of the AP scale (i.e., to low AP values), and this seems reasonable, since, intuitively, a performance increase in MAP values from 0.01 to 0.02 should be more important than an increase from 0.81 to 0.82. To use GMAP in place of MAP and AAP, we only need to take the logarithms of initial AP values, i.e., those in Tab. 1a (zero values are modified into ε = 0.00001). We then repeat the same normalization process (with GMAP and GAAP - Geometric AAP - replacing MAP and AAP): whereas authority values still perfectly correlate with GMAP (0.99) and GAAP (1.00), the correlation with hubness largely disappears (values are −0.16 and −0.09 - slightly negative but not enough to concern us). This is yet another confirmation that TREC effectiveness as measured by MAP depends mainly on easy topics; GMAP appears to be a more balanced measure. Note that, perhaps surprisingly, GMAP is indeed fairly well balanced, not biased in the opposite direction - that is, it does not overemphasize the difficult topics. In Sect. 6.3 below, we discuss another transformation, replacing the log function used in GMAP with logit. This has a similar effect: the correlations of mean logitAP and average logitAP with hubness are now small positive numbers (0.23 and 0.15 respectively), still comfortably away from the high correlations with regular MAP and AAP, i.e., not presenting the problematic phenomenon (ii) above (over-dependency on easy topics).
We also observe that hub values are positive, whereas authority assumes, as predicted, both positive and negative values. An intuitive justification is that negative hubness would indicate a node which disagrees with the other nodes, e.g., a system which does better on difficult topics, or a topic on which bad systems do better; such systems and topics would be quite strange, and probably do not appear in TREC. Finally, although one might think that topics with several relevant documents are more important and difficult, this is not the case: there is no correlation between hub (or any other indicator) and the number of documents relevant to a topic.

6. DISCUSSION
6.1 Related work
There has been considerable interest in recent years in questions of statistical significance of effectiveness comparisons between systems (e.g. [2, 9]), and related questions of how many topics might be needed to establish differences (e.g. [13]). We regard some results of the present study as in some way complementary to this work, in that we make a step towards answering the question "Which topics are best for establishing differences?". The results on evaluation without relevance judgements such as [10] show that, to some extent, good systems agree on which are the good documents. We have not addressed the question of individual documents in the present analysis, but this effect is certainly analogous to our results.

6.2 Are normalizations necessary?
At this point it is also worthwhile to analyze what would happen without the MAP- and AAP-normalizations defined in Sect. 3.3. Indeed, the process of graph construction (Sect. 3.4) is still valid: both the APM and APA matrices are replaced by the AP one, and then everything goes on as above. Therefore, one might think that the normalizations are not useful in this setting. This is not the case.
From the theoretical point of view, the AP-only graph does not present the interesting properties discussed above: since the AP-only graph is symmetrical (the weight on each incoming link is equal to the weight on the corresponding outgoing link), Inlinks and Outlinks assume the same values. There is symmetry also in computing hub and authority, which assume the same value for each node since the weights on the incoming and outgoing arcs are the same. This could be stated in more precise and formal terms, but one might still wonder if on the overall graph there are some sort of counterbalancing effects. It is therefore easier to look at experimental data, which confirm that the normalizations are needed: the correlations between AP, Inlinks, Outlinks, Hub, and/or Authority are all very close to one (none of them is below 0.98).

6.3 Are these normalizations sufficient?
It might be argued that (in the case of APA, for example) the amount we have subtracted from each AP value is topic-dependent, therefore the range of the resulting APA value is also topic-dependent (e.g. the maximum is 1 − AAP(t_j) and the minimum is −AAP(t_j)). This suggests that the cross-topic comparisons of these values suggested in Sect. 3.3 may not be reliable. A similar issue arises for APM and comparisons across systems. One possible way to overcome this would be to use an unconstrained measure whose range is the full real line. Note that in applying the method to GMAP by using log AP, we avoid the problem with the lower limit but retain it for the upper limit. One way to achieve an unconstrained range would be to use the logit function rather than the log [4, 8]. We have also run this variant (as already reported in Sect. 5 above), and it appears to provide very similar results to the GMAP results already given. This is not surprising, since in practice the two functions are very similar over most of the operative range. The normalizations thus seem reliable.

6.4 On AA^T and A^T A
It is well known that the h and a vectors are the principal left eigenvectors of AA^T and A^T A, respectively (this can be easily derived from Eqs. (7)), and that, in the case of citation graphs, AA^T and A^T A represent, respectively, bibliographic coupling and co-citations. What is the meaning, if any, of AA^T and A^T A in our Systems-Topics graph? It is easy to derive that:

  AA^T[i, j]  = 0 if (i ∈ S and j ∈ T) or (i ∈ T and j ∈ S);  otherwise AA^T[i, j]  = Σ_k A[i, k] · A[j, k]
  A^T A[i, j] = 0 if (i ∈ S and j ∈ T) or (i ∈ T and j ∈ S);  otherwise A^T A[i, j] = Σ_k A[k, i] · A[k, j]

(where S is the set of indices corresponding to systems and T the set of indices corresponding to topics). Thus AA^T and A^T A are block diagonal matrices, with two blocks each, one relative to systems and one relative to topics:
(a) if i, j ∈ S, then AA^T[i, j] = Σ_{k∈T} APM(i, k) · APM(j, k) measures how much the two systems i and j agree in estimating topic ease (APM): high values mean that the two systems agree on topic ease.
(b) if i, j ∈ T, then AA^T[i, j] = Σ_{k∈S} APA(k, i) · APA(k, j) measures how much the two topics i and j agree in estimating system effectiveness (APA): high values mean that the two topics agree on system effectiveness (and that TREC results would not change by leaving out one of the two topics).
(c) if i, j ∈ S, then A^T A[i, j] = Σ_{k∈T} APA(i, k) · APA(j, k) measures how much agreement on the effectiveness of two systems i and j there is over all topics: high values mean that many topics quite agree on the two systems' effectiveness; low values single out systems that are somehow controversial, and that need several topics to have a correct effectiveness assessment.
(d) if i, j ∈ T, then A^T A[i, j] = Σ_{k∈S} APM(k, i) · APM(k, j) measures how much agreement on the ease of the two topics i and j there is over all systems: high values mean that many systems quite agree on the two topics' ease.
Therefore, these matrices are meaningful and somehow interesting. For instance, the submatrix (b) corresponds to a weighted undirected complete graph, whose nodes are the topics and whose arc weights are a measure of how much two topics agree on system effectiveness. Two topics that are very close on this graph give the same information, and therefore one of them could be discarded without changes in TREC results. It would be interesting to cluster the topics on this graph. Furthermore, the matrix/graph (a) could be useful in TREC pool formation: systems that do not agree on topic ease would probably find different relevant documents, and should therefore be complementary in pool formation. Note that no notion of single documents is involved in the above analysis. (A computational sketch of these blocks is given at the end of this section.)

6.5 Insights
As indicated, the primary contribution of this paper has been a method of analysis. However, in the course of applying this method to one set of TREC results, we have achieved some insights relating to the hypotheses formulated in Sect. 2:
• We confirm Hypothesis 2 above, that some topics are easier than others.
• Differences in the hubness of systems reveal that some systems are better than others at distinguishing easy and difficult topics; thus we have some confirmation of Hypothesis 3.
• There are some relatively idiosyncratic systems which do badly on some topics generally considered easy but well on some hard topics. However, on the whole, the more effective systems are better at distinguishing easy and difficult topics. This is to be expected: a really bad system will do badly on everything, while even a good system may have difficulty with some topics.
• Differences in the hubness of topics reveal that some topics are better than others at distinguishing more or less effective systems; thus we have some confirmation of Hypothesis 4.
• If we use MAP as the measure of effectiveness, it is also true that the easiest topics are better at distinguishing more or less effective systems. As argued in Sect. 5, this is an undesirable property. GMAP is more balanced.
Clearly these ideas need to be tested on other data sets. However, they reveal that the method of analysis proposed in this paper can provide valuable information.

6.6 Selecting topics
The confirmation of Hypothesis 4 leads, as indicated, to the idea that we could do reliable system evaluation on a much smaller set of topics, provided we could select such an appropriate set. This selection may not be straightforward, however. It is possible that simply selecting the high hubness topics will achieve this end; however, it is also possible that there are significant interactions between topics which would render such a simple rule ineffective. This investigation would therefore require serious experimentation. For this reason we have not attempted in this paper to point to the specific high hubness topics as being good for evaluation. This is left for future work.
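As anticipated in Sect. 6.4, the four agreement blocks can be computed directly from APM and APA (they are exactly the non-zero blocks of AA^T and A^T A for the matrix of Fig. 1). The sketch below also hints at the clustering idea for submatrix (b); the toy data, the variable names, and the conversion of agreement into a distance are purely illustrative choices of ours, not part of the method.

import numpy as np

AP = np.array([[0.5, 0.4, 0.3],
               [0.4, 0.2, 0.1],
               [0.3, 0.3, 0.2]])             # toy matrix: 3 systems x 3 topics
APA = AP - AP.mean(axis=0, keepdims=True)
APM = AP - AP.mean(axis=1, keepdims=True)

systems_agree_on_ease = APM @ APM.T          # block (a): systems x systems
topics_agree_on_eff   = APA.T @ APA          # block (b): topics x topics
consensus_on_systems  = APA @ APA.T          # block (c): systems x systems
consensus_on_topics   = APM.T @ APM          # block (d): topics x topics

# Submatrix (b) as a similarity between topics: two topics with high agreement carry
# nearly the same information about systems, so one could cluster topics on it, e.g.
# by turning agreement into a distance and feeding any standard clustering routine.
distance = topics_agree_on_eff.max() - topics_agree_on_eff
print(distance)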
7. CONCLUSIONS AND FUTURE DEVELOPMENTS
The contribution of this paper is threefold:
• we propose a novel way of normalizing AP values;
• we propose a novel method to analyse TREC data;
• the method applied on TREC data does indeed reveal some hidden properties.
More particularly, we propose Average Average Precision (AAP), a measure of topic ease, and a novel way of normalizing the average precision measure in TREC, on the basis of both MAP (Mean Average Precision) and AAP. The normalized measures (APM and APA) are used to build a bipartite weighted Systems-Topics graph, that is then analyzed by means of network analysis indicators widely known in the (social) network analysis field, but somewhat generalised. We note that no such approach to TREC data analysis has been proposed so far. The analysis shows that, with current measures, a system that wants to be effective in TREC needs to be effective on easy topics. Also, it is suggested that a cluster analysis on topic similarity can lead to relying on a lower number of topics.
Our method of analysis, as described in this paper, can be applied only a posteriori, i.e., once we have all the topics and all the systems available. Adding (removing) a new system / topic would mean re-computing hubness and authority indicators. Moreover, we are not explicitly proposing a change to current TREC methodology, although this could be a by-product of these - and further - analyses.
This is an initial work, and further analyses could be performed. For instance, other effectiveness metrics could be used, in place of AP. Other centrality indicators, widely used in social network analysis, could be computed, although probably with similar results to PageRank. It would be interesting to compute the higher-order eigenvectors of A^T A and AA^T. The same kind of analysis could be performed at the document level, measuring document ease. Hopefully, further analyses of the graph defined in this paper, according to the approach described, can be insightful for a better understanding of TREC or similar data.

Acknowledgments
We would like to thank Nick Craswell for insightful discussions and the anonymous referees for useful remarks. Part of this research has been carried on while the first author was visiting Microsoft Research Cambridge, whose financial support is acknowledged.

8. REFERENCES
[1] M. Agosti, M. Bacchin, N. Ferro, and M. Melucci. Improving the automatic retrieval of text documents. In Proceedings of the 3rd CLEF Workshop, volume 2785 of LNCS, pages 279-290, 2003.
[2] C. Buckley and E. Voorhees. Evaluating evaluation measure stability. In 23rd SIGIR, pages 33-40, 2000.
[3] S. Chakrabarti. Mining the Web. Morgan Kaufmann, 2003.
[4] G. V. Cormack and T. R. Lynam. Statistical precision of information retrieval evaluation. In 29th SIGIR, pages 533-540, 2006.
[5] J. Kleinberg. Authoritative sources in a hyperlinked environment. J. of the ACM, 46(5):604-632, 1999.
[6] M. Levene. An Introduction to Search Engines and Web Navigation. Addison Wesley, 2006.
[7] L. Page, S. Brin, R. Motwani, and T. Winograd. The PageRank Citation Ranking: Bringing Order to the Web, 1998. http://dbpubs.stanford.edu:8090/pub/1999-66.
[8] S. Robertson. On GMAP - and other transformations. In 13th CIKM, pages 78-83, 2006.
[9] M. Sanderson and J. Zobel. Information retrieval system evaluation: effort, sensitivity, and reliability. In 28th SIGIR, pages 162-169, 2005. http://doi.acm.org/10.1145/1076034.1076064.
[10] I. Soboroff, C. Nicholas, and P. Cahan. Ranking retrieval systems without relevance judgments. In 24th SIGIR, pages 66-73, 2001.
[11] TREC Common Evaluation Measures, 2005. http://trec.nist.gov/pubs/trec14/appendices/CE.MEASURES05.pdf (Last visit: Jan. 2007).
[12] Text REtrieval Conference (TREC). http://trec.nist.gov/ (Last visit: Jan. 2007).
[13] E. Voorhees and C. Buckley. The effect of topic set size on retrieval experiment error. In 25th SIGIR, pages 316-323, 2002.
[14] E. M. Voorhees. Overview of the TREC 2005 Robust Retrieval Track. In TREC 2005 Proceedings, 2005.
[15] E. M. Voorhees and D. K. Harman. TREC: Experiment and Evaluation in Information Retrieval. MIT Press, 2005.
[16] S. Wasserman and K. Faust. Social Network Analysis. Cambridge University Press, Cambridge, UK, 1994.
HITS Hits TREC--Exploring IR Evaluation Results with Network Analysis ABSTRACT We propose a novel method of analysing data gathered from TREC or similar information retrieval evaluation experiments. We define two normalized versions of average precision, that we use to construct a weighted bipartite graph of TREC systems and topics. We analyze the meaning of well known--and somewhat generalized--indicators from social network analysis on the Systems-Topics graph. We apply this method to an analysis of TREC 8 data; among the results, we find that authority measures systems performance, that hubness of topics reveals that some topics are better than others at distinguishing more or less effective systems, that with current measures a system that wants to be effective in TREC needs to be effective on easy topics, and that by using different effectiveness measures this is no longer the case. 1. INTRODUCTION Evaluation is a primary concern in the Information Retrieval (IR) field. TREC (Text REtrieval Conference) [12, 15] is an annual benchmarking exercise that has become a de facto standard in IR evaluation: before the actual conference, TREC provides to participants a collection of documents and a set of topics (representations of information needs). Participants use their systems to retrieve, and submit to TREC, a list of documents for each topic. After the lists have been submitted and pooled, the TREC organizers employ human assessors to provide relevance judgements on the pooled set. This defines a set of relevant documents for each topic. System effectiveness is then measured by well established metrics (Mean Average Precision being the most used). Other conferences such as NTCIR, INEX, CLEF provide comparable data. Network analysis is a discipline that studies features and properties of (usually large) networks, or graphs. Of particular importance is Social Network Analysis [16], that studies networks made up by links among humans (friendship, acquaintance, co-authorship, bibliographic citation, etc.). Network analysis and IR fruitfully meet in Web Search Engine implementation, as is already described in textbooks [3, 6]. Current search engines use link analysis techniques to help rank the retrieved documents. Some indicators (and the corresponding algorithms that compute them) have been found useful in this respect, and are nowadays well known: Inlinks (the number of links to a Web page), PageRank [7], and HITS (Hyperlink-Induced Topic Search) [5]. Several extensions to these algorithms have been and are being proposed. These indicators and algorithms might be quite general in nature, and can be used for applications which are very different from search result ranking. One example is using HITS for stemming, as described by Agosti et al. [1]. In this paper, we propose and demonstrate a method for constructing a network, specifically a weighted complete bidirectional directed bipartite graph, on a set of TREC topics and participating systems. Links represent effectiveness measurements on system-topic pairs. We then apply analysis methods originally developed for search applications to the resulting network. This reveals phenomena previously hidden in TREC data. In passing, we also provide a small generalization to Kleinberg's HITS algorithm, as well as to Inlinks and PageRank. The paper is organized as follows: Sect. 2 gives some motivations for the work. Sect. 
3 presents the basic ideas of normalizing average precision and of constructing a systemstopics graph, whose properties are analyzed in Sect. 4; Sect. 5 presents some experiments on TREC 8 data; Sect. 6 discusses some issues and Sect. 7 closes the paper. 2. MOTIVATIONS Table 1: AP, MAP and AAP 2. Some topics are easier than others; 3. Some systems are better than others at distinguishing easy and difficult topics; 4. Some topics are better than others at distinguishing more or less effective systems. The first of these hypotheses needs no further justification--every reported significant difference between any two systems supports it. There is now also quite a lot of evidence for the second, centered on the TREC Robust Track [14]. Our primary interest is in the third and fourth. The third might be regarded as being of purely academic interest; however, the fourth has the potential for being of major practical importance in evaluation studies. If we could identify a relatively small number of topics which were really good at distinguishing effective and ineffective systems, we could save considerable effort in evaluating systems. One possible direction from this point would be to attempt direct identification of such small sets of topics. However, in the present paper, we seek instead to explore the relationships suggested by the hypotheses, between what different topics tell us about systems and what different systems tell us about topics. We seek methods of building and analysing a matrix of system-topic normalised performances, with a view to giving insight into the issue and confirming or refuting the third and fourth hypotheses. It turns out that the obvious symmetry implied by the above formulation of the hypotheses is a property worth investigating, and the investigation does indeed give us valuable insights. 3. THE IDEA 3.1 1st step: average precision table From TREC results, one can produce an Average Precision (AP) table (see Tab. 1a): each AP (si, tj) value measures the AP of system si on topic tj. Besides AP values, the table shows Mean Average Precision (MAP) values i.e., the mean of the AP values for a single system over all topics, and what we call Average Average Precision (AAP) values i.e., the average of the AP values for a single topic over all systems: MAPs are indicators of systems performance: higher MAP means good system. AAP are indicators of the performance on a topic: higher AAP means easy topic--a topic on which all or most systems have good performance. 3.2 Critique of pure AP MAP is a standard, well known, and widely used IR effectiveness measure. Single AP values are used too (e.g., in AP histograms). Topic difficulty is often discussed (e.g., in TREC Robust track [14]), although AAP values are not used and, to the best of our knowledge, have never been proposed (the median, not the average, of AP on a topic is used to produce TREC AP histograms [11]). However, the AP values in Tab. 1 present two limitations, which are symmetric in some respect: • Problem 1. They are not reliable to compare the effectiveness of a system on different topics, relative to the other systems. If, for example, AP (s1, t1)> AP (s1, t2), can we infer that s1 is a good system (i.e., has a good performance) on t1 and a bad system on t2? The answer is no: t1 might be an easy topic (with high AAP) and t2 a difficult one (low AAP). See an example in Tab. 1b: s1 is outperformed (on average) by the other systems on t1, and it outperforms the other systems on t2. • Problem 2. 
Conversely, if, for example, AP (s1, t1)> AP (s2, t1), can we infer that t1 is considered easier by s1 than by s2? No, we cannot: s1 might be a good system (with high MAP) and s2 a bad one (low MAP); see an example in Tab. 1b. These two problems are a sort of breakdown of the well known high influence of topics on IR evaluation; again, our formulation makes explicit the topics / systems symmetry. 3.3 2nd step: normalizations To avoid these two problems, we can normalize the AP table in two ways. The first normalization removes the influence of the single topic ease on system performance. Each AP (si, tj) value in the table depends on both system goodness and topic ease (the value will increase if a system is good and/or the topic is easy). By subtracting from each AP (si, tj) the AAP (tj) value, we obtain "normalized" AP values (APA (si, tj), Normalized AP according to AAP): that depend on system performance only (the value will increase only if system performance is good). See Tab. 2a. The second normalization removes the influence of the single system effectiveness on topic ease: by subtracting from each AP (si, tj) the MAP (si) value, we obtain "normalized" AP values (APM (si, tj), Normalized AP according to MAP): that depend on topic ease only (the value will increase only if the topic is easy, i.e., all systems perform well on that topic). See Tab. 2b. In other words, APA avoids Problem 1: APA (s, t) values measure the performance of system s on topic t normalized Table 2: Normalizations: APA and MAP: normalized AP (APA) and MAP (MAP) (a); normalized AP (APM) and AAP (AAP) (b); a numeric example (c) and (d) according to the ease of the topic (easy topics will not have higher APA values). Now, if, for example, APA (s1, t2)> APA (s1, t1), we can infer that s1 is a good system on t2 and a bad system on t1 (see Tab. 2c). Vice versa, APM avoids Problem 2: APM (s, t) values measure the ease of topic t according to system s, normalized according to goodness of the system (good systems will not lead to higher APM values). If, for example, APM (s2, t1)> APM (s1, t1), we can infer that t1 is considered easier by s2 than by s1 (see Tab. 2d). On the basis of Tables 2a and 2b, we can also define two new measures of system effectiveness and topic ease, i.e., a Normalized MAP (MAP), obtained by averaging the APA values on one row in Tab. 2a, and a Normalized AAP (AAP), obtained by averaging the APM values on one column in Tab. 2b: Thus, overall system performance can be measured, besides by means of MAP, also by means of MAP. Moreover, MAP is equivalent to MAP, as can be immediately proved by using Eqs. (5), (3), and (1): conversely, overall topic ease can be measured, besides by Figure 1: Construction of the adjacency matrix. APAT is the transpose of APA. means of AAP, also by means of AAP, and this is equivalent (the proof is analogous, and relies on Eqs. (6), (4), and (2)). The two Tables 2a and 2b are interesting per se, and can be analyzed in several different ways. In the following we propose an analysis based on network analysis techniques, mainly Kleinberg's HITS algorithm. There is a little further discussion of these normalizations in Sect. 6. 3.4 3rd step: Systems-Topics Graph The two tables 2a and 2b can be merged into a single one with the procedure shown in Fig. 1. The obtained matrix can be interpreted as the adjacency matrix of a complete weighted bipartite graph, that we call Systems-Topics graph. 
Arcs and weights in the graph can be interpreted as follows: • (weight on) arc s → t: how much the system s "thinks" that the topic t is easy--assuming that a system has no knowledge of the other systems (or in other words, how easy we might think the topic is, knowing only the results for this one system). This corresponds to APM values, i.e., to normalized topic ease (Fig. 2a). • (weight on) arc s ← t: how much the topic t "thinks" that the system s is good--assuming that a topic has no knowledge of the other topics (or in other words, how good we might think the system is, knowing only the results for this one topic). This corresponds to APA (normalized system effectiveness, Fig. 2b). Figs. 2c and 2d show the Systems-Topics complete weighted bipartite graph, on a toy example with 4 systems and 2 topics; the graph is split in two parts to have an understandable graphical representation: arcs in Fig. 2c are labeled with APM values; arcs in Fig. 2d are labeled with APA values. 4. ANALYSIS OF THE GRAPH 4.1 Weighted Inlinks, Outlinks, PageRank The sum of weighted outlinks, i.e., the sum of the weights on the outgoing arcs from each node, is always zero: • The outlinks on each node corresponding to a system s (Fig. 2c) is the sum of all the corresponding APM values on one row of the matrix in Tab. 2b. • The outlinks on each node corresponding to a topic Figure 2: The relationships between systems and topics (a) and (b); and the Systems-Topics graph for a toy example (c) and (d). Dashed arcs correspond to negative values. Figure 3: Hub and Authority computation values on one row of the transpose of the matrix in Tab. 2a. The average1of weighted inlinks is: 9 MAP for each node corresponding to a system s; this corresponds to the average of all the corresponding APA values on one column of the APA part of the adjacency matrix (see Fig. 1). 9 AAP for each node corresponding to a topic t; this corresponds to the average of all the corresponding APM values on one column of the APM part of the adjacency matrix (see Fig. 1). Therefore, weighted inlinks measure either system effectiveness or topic ease; weighted outlinks are not meaningful. We could also apply the PageRank algorithm to the network; the meaning of the PageRank of a node is not quite so obvious as Inlinks and Outlinks, but it also seems a sensible measure of either system effectiveness or topic ease: if a system is effective, it will have several incoming links with high 1Usually, the sum of the weights on the incoming arcs to each node is used in place of the average; since the graph is complete, it makes no difference. weights (APA); if a topic is easy it will have high weights (APM) on the incoming links too. We will see experimental confirmation in the following. 4.2 Hubs and Authorities Let us now turn to more sophisticated indicators. Kleinberg's HITS algorithm defines, for a directed graph, two indicators: hubness and authority; we reiterate here some of the basic details of the HITS algorithm in order to emphasize both the nature of our generalization and the interpretation of the HITS concepts in this context. Usually, hubness and authority are defined as h (x) = Px → y a (y) and a (x) = P y → x h (y), and described intuitively as "a good hub links many good authorities; a good authority is linked from many good hubs". As it is well known, an equivalent formulation in linear algebra terms is (see also Fig. 
3): h = Aa and a = ATh (where h is the hubness vector, with the hub values for all the nodes; a is the authority vector; A is the adjacency matrix of the graph; and AT its transpose). Usually, A contains 0s and 1s only, corresponding to presence and absence of unweighted directed arcs, but Eq. (7) can be immediately generalized to (in fact, it is already valid for) A containing any real value, i.e., to weighted graphs. Therefore we can have a "generalized version" (or rather a generalized interpretation, since the formulation is still the original one) of hubness and authority for all nodes in a graph. An intuitive formulation of this generalized HITS is still available, although slightly more complex: "a good hub links, by means of arcs having high weights, many good authorities; a good authority is linked, by means of arcs having high weights, from many good hubs". Since arc weights can be, in general, negative, hub and authority values can be negative, and one could speak of unhubness and unauthority; the intuitive formulation could be completed by adding that "a good hub links good unauthorities by means of links with highly negative weights; a good authority is linked by good unhubs by means of links with highly negative weights". And, also, "a good unhub links positively good unauthorities and negatively good authorities; a good unauthority is linked positively from good unhubs and negatively from good hubs". Let us now apply generalized HITS to our Systems-Topics graph. We compute a (s), h (s), a (t), and h (t). Intuitively, we expect that a (s) is somehow similar to Inlinks, so it should be a measure of either systems effectiveness or topic ease. Similarly, hubness should be more similar to Outlinks, thus less meaningful, although the interplay between hub and authority might lead to the discovery of something different. Let us start by remarking that authority of topics and hubness of systems depend only on each other; similarly hubness of topics and authority of systems depend only on each other: see Figs. 2c, 2d and 3. Thus the two graphs in Figs. 2c and 2d can be analyzed independently. In fact the entire HITS analysis could be done in one direction only, with just APM (s, t) values or alternatively with just APA (s, t). As discussed below, probably most interest resides in the hubness of topics and the authority of systems, so the latter makes sense. However, in this paper, we pursue both analyses together, because the symmetry itself is interesting. Considering Fig. 2c we can state that: We can summarize this as: a (t) is high if APM (s, t) is high for those systems with high h (s); h (s) is high if APM (s, t) is high for those topics with high a (t). Intuitively, authority a (t) of a topic measures topic ease; hubness h (s) of a system measures system's "capability" to recognize easy topics. A system with high unhubness (negative hubness) would tend to regard easy topics as hard and hard ones as easy. The situation for Fig. 2d, i.e., for a (s) and h (t), is analogous. Authority a (s) of a system node s measures system effectiveness: it increases with the weight on the arc (i.e., APA (s, tj)) and the hubness of the incoming topic nodes tj. Hubness h (t) of a topic node t measures topic capability to recognize effective systems: if h (t)> 0, it increases further if APA (s, tj) increases; if h (t) <0, it increases if APA (s, tj) decreases. 
Intuitively, we can state that "A system has a higher authority if it is more effective on topics with high hubness"; and "A topic has a higher hubness if it is easier for those systems which are more effective in general". Conversely for system hubness and topic authority: "A topic has a higher authority if it is easier on systems with high hubness"; and "A system has a higher hubness if it is more effective for those topics which are easier in general". Therefore, for each system we have two indicators: authority (a (s)), measuring system effectiveness, and hubness (h (s)), measuring system capability to estimate topic ease. And for each topic, we have two indicators: authority (a (t)), measuring topic ease, and hubness (h (t)), measuring topic capability to estimate systems effectiveness. We can define them formally as We observe that the hubness of topics may be of particular interest for evaluation studies. It may be that we can evaluate the effectiveness of systems efficiently by using relatively few high-hubness topics. 5. EXPERIMENTS We now turn to discuss if these indicators are meaningful and useful in practice, and how they correlate with standard measures used in TREC. We have built the Systems-Topics graph for TREC 8 data (featuring 1282 systems--actually, 2Actually, TREC 8 data features 129 systems; due to some bug in our scripts, we did not include one system (8manexT3D1N0), but the results should not be affected. Figure 4: Distributions of AP, APA, and APM values in TREC 8 data Table 3: Correlations between network analysis measures and MAP (a) and AAP (b) runs--on 50 topics). This section illustrates the results obtained mining these data according to the method presented in previous sections. Fig. 4 shows the distributions of AP, APA, and APM: whereas AP is very skewed, both APA and APM are much more symmetric (as it should be, since they are constructed by subtracting the mean). Tables 3a and 3b show the Pearson's correlation values between Inlinks, PageRank, Hub, Authority and, respectively, MAP or AAP (Outlinks values are not shown since they are always zero, as seen in Sect. 4). As expected, Inlinks and PageRank have a perfect correlation with MAP and AAP. Authority has a very high correlation too with MAP and AAP; Hub assumes slightly lower values. Let us analyze the correlations more in detail. The correlations chart in Figs. 5a and 5b demonstrate the high correlation between Authority and MAP or AAP. Hubness presents interesting phenomena: both Fig. 5c (correlation with MAP) and Fig. 5d (correlation with AAP) show that correlation is not exact, but neither is it random. This, given the meaning of hubness (capability in estimating topic ease and system effectiveness), means two things: (i) more effective systems are better at estimating topic ease; and (ii) easier topics are better at estimating system effectiveness. Whereas the first statement is fine (there is nothing against it), the second is a bit worrying. 
It means that system effectiveness in TREC is affected more by easy topics than by difficult topics, which is rather undesirable for quite obvious reasons: a system capable of performing well on a difficult topic, i.e., on a topic on which the other systems perform badly, would be an important result for IR effectiveness; con Figure 5: Correlations: MAP (x axis) and authority (y axis) of systems (a); AAP and authority of topics (b); MAP and hub of systems (c) and AAP and hub of topics (d) versely, a system capable of performing well on easy topics is just a confirmation of the state of the art. Indeed, the correlation between hubness and AAP (statement (i) above) is higher than the correlation between hubness and MAP (corresponding to statement (ii)): 0.92 vs. 0.80. However, this phenomenon is quite strong. This is also confirmed by the work being done on the TREC Robust Track [14]. In this respect, it is interesting to see what happens if we use a different measure from MAP (and AAP). The GMAP (Geometric MAP) metric is defined as the geometric mean of AP values, or equivalently as the arithmetic mean of the logarithms of AP values [8]. GMAP has the property of giving more weight to the low end of the AP scale (i.e., to low AP values), and this seems reasonable, since, intuitively, a performance increase in MAP values from 0.01 to 0.02 should be more important than an increase from 0.81 to 0.82. To use GMAP in place of MAP and AAP, we only need to take the logarithms of initial AP values, i.e., those in Tab. 1a (zero values are modified into ε = 0.00001). We then repeat the same normalization process (with GMAP and GAAP--Geometric AAP--replacing MAP and AAP): whereas authority values still perfectly correlate with GMAP (0.99) and GAAP (1.00), the correlation with hubness largely disappears (values are − 0.16 and − 0.09--slightly negative but not enough to concern us). This is yet another confirmation that TREC effectiveness as measured by MAP depends mainly on easy topics; GMAP appears to be a more balanced measure. Note that, perhaps surprisingly, GMAP is indeed fairly well balanced, not biased in the opposite direction--that is, it does not overemphasize the difficult topics. In Sect. 6.3 below, we discuss another transformation, replacing the log function used in GMAP with logit. This has a similar effect: the correlation of mean logitAP and average logitAP with hubness are now small positive numbers (0.23 and 0.15 respectively), still comfortably away from the high correlations with regular MAP and AAP, i.e., not presenting the problematic phenomenon (ii) above (over-dependency on easy topics). We also observe that hub values are positive, whereas authority assumes, as predicted, both positive and negative values. An intuitive justification is that negative hubness would indicate a node which disagrees with the other nodes, e.g., a system which does better on difficult topics, or a topic on which bad systems do better; such systems and topics would be quite strange, and probably do not appear in TREC. Finally, although one might think that topics with several relevant documents are more important and difficult, this is not the case: there is no correlation between hub (or any other indicator) and the number of documents relevant to a topic. 6. DISCUSSION 6.1 Related work There has been considerable interest in recent years in questions of statistical significance of effectiveness comparisons between systems (e.g. 
[2,9]), and related questions of how many topics might be needed to establish differences (e.g. [13]). We regard some results of the present study as in some way complementary to this work, in that we make a step towards answering the question "Which topics are best for establishing differences?" . The results on evaluation without relevance judgements such as [10] show that, to some extent, good systems agree on which are the good documents. We have not addressed the question of individual documents in the present analysis, but this effect is certainly analogous to our results. 6.2 Are normalizations necessary? At this point it is also worthwhile to analyze what would happen without the MAP - and AAP-normalizations defined in Sect. 3.3. Indeed, the process of graph construction (Sect. 3.4) is still valid: both the APM and APA matrices are replaced by the AP one, and then everything goes on as above. Therefore, one might think that the normalizations are unuseful in this setting. This is not the case. From the theoretical point of view, the AP-only graph does not present the interesting properties above discussed: since the AP-only graph is symmetrical (the weight on each incoming link is equal to the weight on the corresponding outgoing link), Inlinks and Outlinks assume the same values. There is symmetry also in computing hub and authority, that assume the same value for each node since the weights on the incoming and outgoing arcs are the same. This could be stated in more precise and formal terms, but one might still wonder if on the overall graph there are some sort of counterbalancing effects. It is therefore easier to look at experimental data, which confirm that the normalizations are needed: the correlations between AP, Inlinks, Outlinks, Hub, and/or Authority are all very close to one (none of them is below 0.98). 6.3 Are these normalizations sufficient? It might be argued that (in the case of APA, for example) the amount we have subtracted from each AP value is topicdependent, therefore the range of the resulting APA value is also topic-dependent (e.g. the maximum is 1 − AAP (tj) and the minimum is − AAP (tj)). This suggests that the cross-topic comparisons of these values suggested in Sect. 3.3 may not be reliable. A similar issue arises for APM and comparisons across systems. One possible way to overcome this would be to use an unconstrained measure whose range is the full real line. Note that in applying the method to GMAP by using log AP, we avoid the problem with the lower limit but retain it for the upper limit. One way to achieve an unconstrainted range would be to use the logit function rather than the log [4,8]. We have also run this variant (as already reported in Sect. 5 above), and it appears to provide very similar results to the GMAP results already given. This is not surprising, since in practice the two functions are very similar over most of the operative range. The normalizations thus seem reliable. 6.4 On AAT and ATA It is well known that h and a vectors are the principal left eigenvectors of AAT and ATA, respectively (this can be easily derived from Eqs. (7)), and that, in the case of citation graphs, AAT and ATA represent, respectively, bibliographic coupling and co-citations. What is the meaning, if any, of AAT and ATA in our Systems-Topics graph? It is easy to derive that: (where S is the set of indices corresponding to systems and T the set of indices corresponding to topics). 
Both AAT and ATA are block diagonal matrices, with two blocks each, one relative to systems and one relative to topics: (a) if i, j ∈ S, then AAT[i, j] = Σ_{k∈T} APM(i, k) · APM(j, k) measures how much the two systems i and j agree in estimating topic ease (APM): high values mean that the two systems agree on topic ease. (b) if i, j ∈ T, then AAT[i, j] = Σ_{k∈S} APA(k, i) · APA(k, j) measures how much the two topics i and j agree in estimating system effectiveness (APA): high values mean that the two topics agree on system effectiveness (and that TREC results would not change by leaving out one of the two topics). (c) if i, j ∈ S, then ATA[i, j] = Σ_{k∈T} APA(i, k) · APA(j, k) measures how much agreement on the effectiveness of the two systems i and j there is over all topics: high values mean that many topics quite agree on the two systems' effectiveness; low values single out systems that are somehow controversial, and that need several topics to have a correct effectiveness assessment. (d) if i, j ∈ T, then ATA[i, j] = Σ_{k∈S} APM(k, i) · APM(k, j) measures how much agreement on the ease of the two topics i and j there is over all systems: high values mean that many systems quite agree on the two topics' ease. Therefore, these matrices are meaningful and rather interesting. For instance, the submatrix (b) corresponds to a weighted undirected complete graph, whose nodes are the topics and whose arc weights are a measure of how much two topics agree on system effectiveness. Two topics that are very close on this graph give the same information, and therefore one of them could be discarded without changes in TREC results. It would be interesting to cluster the topics on this graph. Furthermore, the matrix/graph (a) could be useful in TREC pool formation: systems that do not agree on topic ease would probably find different relevant documents, and should therefore be complementary in pool formation. Note that no notion of single documents is involved in the above analysis. 6.5 Insights As indicated, the primary contribution of this paper has been a method of analysis. However, in the course of applying this method to one set of TREC results, we have achieved some insights relating to the hypotheses formulated in Sect. 2: • We confirm Hypothesis 2 above, that some topics are easier than others. • Differences in the hubness of systems reveal that some systems are better than others at distinguishing easy and difficult topics; thus we have some confirmation of Hypothesis 3. • There are some relatively idiosyncratic systems which do badly on some topics generally considered easy but well on some hard topics. However, on the whole, the more effective systems are better at distinguishing easy and difficult topics. This is to be expected: a really bad system will do badly on everything, while even a good system may have difficulty with some topics. • Differences in the hubness of topics reveal that some topics are better than others at distinguishing more or less effective systems; thus we have some confirmation of Hypothesis 4. • If we use MAP as the measure of effectiveness, it is also true that the easiest topics are better at distinguishing more or less effective systems. As argued in Sect. 5, this is an undesirable property. GMAP is more balanced. Clearly these ideas need to be tested on other data sets. However, they reveal that the method of analysis proposed in this paper can provide valuable information.
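To make the quantities used above concrete, the following sketch (in Python with NumPy, which is our choice, not the authors'; all variable and function names are ours) starts from a systems-by-topics AP table and computes the MAP/AAP-based normalizations of Sect. 3.3, the log and logit transformations discussed in Sect. 5 and Sect. 6.3, and the two agreement blocks of Sect. 6.4. The ε value and the row/column orientation of the table are assumptions for illustration.

```python
import numpy as np

def ap_table_analysis(ap, eps=1e-5):
    """ap: systems-by-topics matrix of AP values (rows = systems, columns = topics)."""
    map_ = ap.mean(axis=1)                # MAP: one value per system
    aap = ap.mean(axis=0)                 # AAP: one value per topic
    apm = ap - map_[:, None]              # APM: AP normalized by the system's MAP
    apa = ap - aap[None, :]               # APA: AP normalized by the topic's AAP

    # GMAP / GAAP: replace zeros by eps, take logs, average, exponentiate back.
    log_ap = np.log(np.maximum(ap, eps))
    gmap = np.exp(log_ap.mean(axis=1))
    gaap = np.exp(log_ap.mean(axis=0))

    # Logit variant (Sect. 6.3): unconstrained on both sides of the range.
    clipped = np.clip(ap, eps, 1.0 - eps)
    logit_ap = np.log(clipped / (1.0 - clipped))

    # Agreement blocks of Sect. 6.4: (a) how much two systems agree on topic ease,
    # APM·APM^T, and (b) how much two topics agree on system effectiveness, APA^T·APA.
    system_agreement_on_ease = apm @ apm.T
    topic_agreement_on_effectiveness = apa.T @ apa

    return {"MAP": map_, "AAP": aap, "APM": apm, "APA": apa,
            "GMAP": gmap, "GAAP": gaap, "logitAP": logit_ap,
            "AAT_systems_block": system_agreement_on_ease,
            "AAT_topics_block": topic_agreement_on_effectiveness}
```

Note that if the un-normalized AP matrix were used as the weight in both directions, the resulting graph would be symmetric and hub and authority values would coincide, which is the point made in Sect. 6.2.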
6.6 Selecting topics The confirmation of Hypothesis 4 leads, as indicated, to the idea that we could do reliable system evaluation on a much smaller set of topics, provided we could select such an appropriate set. This selection may not be straightforward, however. It is possible that simply selecting the high hubness topics will achieve this end; however, it is also possible that there are significant interactions between topics which would render such a simple rule ineffective. This investigation would therefore require serious experimentation. For this reason we have not attempted in this paper to point to the specific high hubness topics as being good for evaluation. This is left for future work. 7. CONCLUSIONS AND FUTURE DEVELOPMENTS The contribution of this paper is threefold: • we propose a novel way of normalizing AP values; • we propose a novel method to analyse TREC data; • the method applied on TREC data does indeed reveal some hidden properties. More particularly, we propose Average Average Precision (AAP), a measure of topic ease, and a novel way of normalizing the average precision measure in TREC, on the basis of both MAP (Mean Average Precision) and AAP. The normalized measures (APM and APA) are used to build a bipartite weighted Systems-Topics graph, that is then analyzed by means of network analysis indicators widely known in the (social) network analysis field, but somewhat generalised. We note that no such approach to TREC data analysis has been proposed so far. The analysis shows that, with current measures, a system that wants to be effective in TREC needs to be effective on easy topics. Also, it is suggested that a cluster analysis on topic similarity can lead to relying on a lower number of topics. Our method of analysis, as described in this paper, can be applied only a posteriori, i.e., once we have all the topics and all the systems available. Adding (removing) a new system / topic would mean re-computing hubness and authority indicators. Moreover, we are not explicitly proposing a change to current TREC methodology, although this could be a by-product of these--and further--analyses. This is an initial work, and further analyses could be performed. For instance, other effectiveness metrics could be used, in place of AP. Other centrality indicators, widely used in social network analysis, could be computed, although probably with similar results to PageRank. It would be interesting to compute the higher-order eigenvectors of ATA and AAT. The same kind of analysis could be performed at the document level, measuring document ease. Hopefully, further analyses of the graph defined in this paper, according to the approach described, can be insightful for a better understanding of TREC or similar data.
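The topic-selection experiment suggested in Sect. 6.6 could be prototyped along the following lines. This is only a sketch of an experiment the paper does not carry out: it assumes that hub scores for the topics have already been computed (e.g., with the generalized HITS indicators of Sect. 4.2), selects the k topics with the highest hubness, and measures how well the system ranking on that subset agrees with the ranking on all topics using Kendall's tau; the function names and the synthetic data in the comment are ours.

```python
import numpy as np
from scipy.stats import kendalltau

def ranking_agreement(ap, topic_hub, k):
    """ap: systems-by-topics AP matrix; topic_hub: hub score of each topic.
    Returns Kendall's tau between the system ranking by MAP over all topics
    and the ranking by MAP over the k topics with the highest hubness."""
    full_map = ap.mean(axis=1)
    top_topics = np.argsort(topic_hub)[::-1][:k]   # indices of the k highest-hubness topics
    reduced_map = ap[:, top_topics].mean(axis=1)
    tau, _ = kendalltau(full_map, reduced_map)
    return tau

# Example on synthetic data of plausible size (100 runs, 50 topics):
# rng = np.random.default_rng(0)
# ap = rng.beta(1.0, 3.0, size=(100, 50))
# hub = ap.mean(axis=0)                            # placeholder hub scores
# print([round(ranking_agreement(ap, hub, k), 2) for k in (5, 10, 25, 50)])
```

As noted in Sect. 6.6, a high tau for small k would support simple hubness-based selection, while low values would point to interactions between topics that such a simple rule cannot capture.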
HITS Hits TREC--Exploring IR Evaluation Results with Network Analysis ABSTRACT We propose a novel method of analysing data gathered from TREC or similar information retrieval evaluation experiments. We define two normalized versions of average precision, that we use to construct a weighted bipartite graph of TREC systems and topics. We analyze the meaning of well known--and somewhat generalized--indicators from social network analysis on the Systems-Topics graph. We apply this method to an analysis of TREC 8 data; among the results, we find that authority measures systems performance, that hubness of topics reveals that some topics are better than others at distinguishing more or less effective systems, that with current measures a system that wants to be effective in TREC needs to be effective on easy topics, and that by using different effectiveness measures this is no longer the case. 1. INTRODUCTION Evaluation is a primary concern in the Information Retrieval (IR) field. TREC (Text REtrieval Conference) [12, 15] is an annual benchmarking exercise that has become a de facto standard in IR evaluation: before the actual conference, TREC provides to participants a collection of documents and a set of topics (representations of information needs). Participants use their systems to retrieve, and submit to TREC, a list of documents for each topic. After the lists have been submitted and pooled, the TREC organizers employ human assessors to provide relevance judgements on the pooled set. This defines a set of relevant documents for each topic. System effectiveness is then measured by well established metrics (Mean Average Precision being the most used). Other conferences such as NTCIR, INEX, CLEF provide comparable data. Network analysis is a discipline that studies features and properties of (usually large) networks, or graphs. Of particular importance is Social Network Analysis [16], that studies networks made up by links among humans (friendship, acquaintance, co-authorship, bibliographic citation, etc.). Network analysis and IR fruitfully meet in Web Search Engine implementation, as is already described in textbooks [3, 6]. Current search engines use link analysis techniques to help rank the retrieved documents. Some indicators (and the corresponding algorithms that compute them) have been found useful in this respect, and are nowadays well known: Inlinks (the number of links to a Web page), PageRank [7], and HITS (Hyperlink-Induced Topic Search) [5]. Several extensions to these algorithms have been and are being proposed. These indicators and algorithms might be quite general in nature, and can be used for applications which are very different from search result ranking. One example is using HITS for stemming, as described by Agosti et al. [1]. In this paper, we propose and demonstrate a method for constructing a network, specifically a weighted complete bidirectional directed bipartite graph, on a set of TREC topics and participating systems. Links represent effectiveness measurements on system-topic pairs. We then apply analysis methods originally developed for search applications to the resulting network. This reveals phenomena previously hidden in TREC data. In passing, we also provide a small generalization to Kleinberg's HITS algorithm, as well as to Inlinks and PageRank. The paper is organized as follows: Sect. 2 gives some motivations for the work. Sect. 
3 presents the basic ideas of normalizing average precision and of constructing a Systems-Topics graph, whose properties are analyzed in Sect. 4; Sect. 5 presents some experiments on TREC 8 data; Sect. 6 discusses some issues and Sect. 7 closes the paper. 2. MOTIVATIONS 3. THE IDEA 3.1 1st step: average precision table 3.2 Critique of pure AP 3.3 2nd step: normalizations 3.4 3rd step: Systems-Topics Graph 4. ANALYSIS OF THE GRAPH 4.1 Weighted Inlinks, Outlinks, PageRank 4.2 Hubs and Authorities 5. EXPERIMENTS 6. DISCUSSION 6.1 Related work 6.2 Are normalizations necessary? 6.3 Are these normalizations sufficient? 6.4 On AAT and ATA 6.5 Insights 6.6 Selecting topics 7. CONCLUSIONS AND FUTURE DEVELOPMENTS
HITS Hits TREC--Exploring IR Evaluation Results with Network Analysis ABSTRACT We propose a novel method of analysing data gathered from TREC or similar information retrieval evaluation experiments. We define two normalized versions of average precision, that we use to construct a weighted bipartite graph of TREC systems and topics. We analyze the meaning of well known--and somewhat generalized--indicators from social network analysis on the Systems-Topics graph. We apply this method to an analysis of TREC 8 data; among the results, we find that authority measures systems performance, that hubness of topics reveals that some topics are better than others at distinguishing more or less effective systems, that with current measures a system that wants to be effective in TREC needs to be effective on easy topics, and that by using different effectiveness measures this is no longer the case. 1. INTRODUCTION Evaluation is a primary concern in the Information Retrieval (IR) field. needs). Participants use their systems to retrieve, and submit to TREC, a list of documents for each topic. After the lists have been submitted and pooled, the TREC organizers employ human assessors to provide relevance judgements on the pooled set. This defines a set of relevant documents for each topic. System effectiveness is then measured by well established metrics (Mean Average Precision being the most used). Other conferences such as NTCIR, INEX, CLEF provide comparable data. Network analysis is a discipline that studies features and properties of (usually large) networks, or graphs. Network analysis and IR fruitfully meet in Web Search Engine implementation, as is already described in textbooks [3, 6]. Current search engines use link analysis techniques to help rank the retrieved documents. Several extensions to these algorithms have been and are being proposed. These indicators and algorithms might be quite general in nature, and can be used for applications which are very different from search result ranking. In this paper, we propose and demonstrate a method for constructing a network, specifically a weighted complete bidirectional directed bipartite graph, on a set of TREC topics and participating systems. Links represent effectiveness measurements on system-topic pairs. We then apply analysis methods originally developed for search applications to the resulting network. This reveals phenomena previously hidden in TREC data. In passing, we also provide a small generalization to Kleinberg's HITS algorithm, as well as to Inlinks and PageRank. The paper is organized as follows: Sect. Sect. 3 presents the basic ideas of normalizing average precision and of constructing a systemstopics graph, whose properties are analyzed in Sect. 4; Sect. 5 presents some experiments on TREC 8 data; Sect. 6 discusses some issues and Sect. 7 closes the paper.
H-95
Handling Locations in Search Engine Queries
This paper proposes simple techniques for handling place references in search engine queries, an important aspect of geographical information retrieval. We address not only the detection, but also the disambiguation of place references, by matching them explicitly with concepts at an ontology. Moreover, when a query does not reference any locations, we propose to use information from documents matching the query, exploiting geographic scopes previously assigned to these documents. Evaluation experiments, using topics from CLEF campaigns and logs from real search engine queries, show the effectiveness of the proposed approaches.
[ "search engin queri", "place refer", "geograph inform retriev", "geograph context", "name entiti recognit algorithm", "ner", "disambigu result", "textual string", "web search engin", "queri string", "search queri", "locationimplicit queri", "geograph ontolog", "geograph type express", "token scheme", "spell correct mechan", "geograph ir", "text mine", "queri process" ]
[ "P", "P", "P", "M", "U", "U", "M", "U", "M", "M", "R", "M", "R", "M", "U", "U", "M", "U", "M" ]
Handling Locations in Search Engine Queries Bruno Martins, Mário J. Silva, Sérgio Freitas and Ana Paula Afonso Faculdade de Ciências da Universidade de Lisboa 1749-016 Lisboa, Portugal {bmartins,mjs,sfreitas,apa}@xldb. di.fc.ul.pt ABSTRACT This paper proposes simple techniques for handling place references in search engine queries, an important aspect of geographical information retrieval. We address not only the detection, but also the disambiguation of place references, by matching them explicitly with concepts at an ontology. Moreover, when a query does not reference any locations, we propose to use information from documents matching the query, exploiting geographic scopes previously assigned to these documents. Evaluation experiments, using topics from CLEF campaigns and logs from real search engine queries, show the effectiveness of the proposed approaches. Categories and Subject Descriptors H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval 1. INTRODUCTION Search engine queries are often associated with geographical locations, either explicitly (i.e. a location reference is given as part of the query) or implicitly (i.e. the location reference is not present in the query string, but the query clearly has a local intent [17]). One of the concerns of geographical information retrieval (GIR) lies in appropriately handling such queries, bringing better targeted search results and improving user satisfaction. Nowadays, GIR is getting increasing attention. Systems that access resources on the basis of geographic context are starting to appear, both in the academic and commercial domains [4, 7]. Accurately and effectively detecting location references in search engine queries is a crucial aspect of these systems, as they are generally based on interpreting geographical terms differently from the others. Detecting locations in queries is also important for generalpropose search engines, as this information can be used to improve ranking algorithms. Queries with a local intent are best answered with localized pages, while queries without any geographical references are best answered with broad pages [5]. Text mining methods have been successfully used in GIR to detect and disambiguate geographical references in text [9], or even to infer geographic scopes for documents [1, 13]. However, this body of research has been focused on processing Web pages and full-text documents. Search engine queries are more difficult to handle, in the sense that they are very short and with implicit and subjective user intents. Moreover, the data is also noisier and more versatile in form, and we have to deal with misspellings, multilingualism and acronyms. How to automatically understand what the user intended, given a search query, without putting the burden in the user himself, remains an open text mining problem. Key challenges in handling locations over search engine queries include their detection and disambiguation, the ranking of possible candidates, the detection of false positives (i.e not all contained location names refer to geographical locations), and the detection of implied locations by the context of the query (i.e. when the query does not explicitly contain a place reference but it is nonetheless geographical). Simple named entity recognition (NER) algorithms, based on dictionary look-ups for geographical names, may introduce high false positives for queries whose location names do not constitute place references. 
For example the query Denzel Washington contains the place name Washington, but the query is not geographical. Queries can also be geographic without containing any explicit reference to locations at the dictionary. In these cases, place name extraction and disambiguation does not give any results, and we need to access other sources of information. This paper proposes simple and yet effective techniques for handling place references over queries. Each query is split into a triple < what,relation,where >, where what specifies the non-geographic aspect of the information need, where specifies the geographic areas of interest, and relation specifies a spatial relationship connecting what and where. When this is not possible, i.e. the query does not contain any place references, we try using information from documents matching the query, exploiting geographic scopes previously assigned to these documents. Disambiguating place references is one of the most important aspects. We use a search procedure that combines textual patterns with geographical names defined at an ontology, and we use heuristics to disambiguate the discovered references (e.g. more important places are preferred). Disambiguation results in having the where term, from the triple above, associated with the most likely corresponding concepts from the ontology. When we cannot detect any locations, we attempt to use geographical scopes previously inferred for the documents at the top search results. By doing this, we assume that the most frequent geographical scope in the results should correspond to the geographical context implicit in the query. Experiments with CLEF topics [4] and sample queries from a Web search engine show that the proposed methods are accurate, and may have applications in improving search results. The rest of this paper is organized as follows. We first formalize the problem and describe related work to our research. Next, we describe our approach for handling place names in queries, starting with the general approach for disambiguating place references over textual strings, then presenting the method for splitting a query into a < what,relation,where > triple, and finally discussing the technique for exploiting geographic scopes previously assigned to documents in the result set. Section 4 presents evaluation results. Finally, we give some conclusions and directions for future research. 2. CONCEPTS AND RELATED WORK Search engine performance depends on the ability to capture the most likely meaning of a query as intended by the user [16]. Previous studies showed that a significant portion of the queries submitted to search engines are geographic [8, 14]. A recent enhancement to search engine technology is the addition of geographic reasoning, combining geographic information systems and information retrieval in order to build search engines that find information associated with given locations. The ability to recognize and reason about the geographical terminology, given in the text documents and user queries, is a crucial aspect of these geographical information retrieval (GIR) systems [4, 7]. Extracting and distinguishing different types of entities in text is usually referred to as Named Entity Recognition (NER). For at least a decade, this has been an important text mining task, and a key feature of the Message Understanding Conferences (MUC) [3]. 
NER has been successfully automated with near-human performance, but the specific problem of recognizing geographical references presents additional challenges [9]. When handling named entities with a high level of detail, ambiguity problems arise more frequently. Ambiguity in geographical references is bi-directional [15]. The same name can be used for more than one location (referent ambiguity), and the same location can have more than one name (reference ambiguity). The former has another twist, since the same name can be used for locations as well as for other class of entities, such as persons or company names (referent class ambiguity). Besides the recognition of geographical expressions, GIR also requires that the recognized expressions be classified and grounded to unique identifiers [11]. Grounding the recognized expressions (e.g. associating them to coordinates or concepts at an ontology) assures that they can be used in more advanced GIR tasks. Previous works have addressed the tagging and grounding of locations in Web pages, as well as the assignment of geographic scopes to these documents [1, 7, 13]. This is a complementary aspect to the techniques described in this paper, since if we have the Web pages tagged with location information, a search engine can conveniently return pages with a geographical scope related to the scope of the query. The task of handling geographical references over documents is however considerably different from that of handling geographical references over queries. In our case, queries are usually short and often do not constitute proper sentences. Text mining techniques that make use of context information are difficult to apply for high accuracy. Previous studies have also addressed the use of text mining and automated classification techniques over search engine queries [16, 10]. However, most of these works did not consider place references or geographical categories. Again, these previously proposed methods are difficult to apply to the geographic domain. Gravano et. al. studied the classification of Web queries into two types, namely local and global [5]. They defined a query as local if its best matches on a Web search engine are likely to be local pages, such as houses for sale. A number of classification algorithms have been evaluated using search engine queries. However, their experimental results showed that only a rather low precision and recall could be achieved. The problem addressed in this paper is also slightly different, since we are trying not only to detect local queries but also to disambiguate the local of interest. Wang et. al. proposed to go further than detecting local queries, by also disambiguating the implicit local of interest [17]. The proposed approach works for both queries containing place references and queries not containing them, by looking for dominant geographic references over query logs and text from search results. In comparison, we propose simpler techniques based on matching names from a geographic ontology. Our approach looks for spatial relationships at the query string, and it also associates the place references to ontology concepts. In the case of queries not containing explicit place references, we use geographical scopes previously assigned to the documents, whereas Wang et. al. proposed to extract locations from the text of the top search results. There are nowadays many geocoding, reverse-geocoding, and mapping services on the Web that can be easily integrated with other applications. 
Geocoding is the process of locating points on the surface of the Earth from alphanumeric addressing data. Taking a string with an address, a geocoder queries a geographical information system and returns interpolated coordinate values for the given location. Instead of computing coordinates for a given place reference, the technique described in this paper aims at assigning references to the corresponding ontology concepts. However, if each concept at the ontology contains associated coordinate information, the approach described here could also be used to build a geocoding service. Most of such existing services are commercial in nature, and there are no technical publications describing them. A number of commercial search services have also started to support location-based searches. Google Local, for instance, initially required the user to specify a location qualifier separately from the search query. More recently, it added location look-up capabilities that extract locations from query strings. For example, in a search for Pizza Seattle, Google Local returns local results for pizza near Seattle, WA. However, the intrinsics of their solution are not published, and their approach also does not handle locationimplicit queries. Moreover, Google Local does not take spatial relations into account. In sum, there are already some studies on tagging geographical references, but Web queries pose additional challenges which have not been addressed. In this paper, we explain the proposed solutions for the identified problems. 3. HANDLINGQUERIESIN GIR SYSTEMS Most GIR queries can be parsed to < what,relation,where > triple, where the what term is used to specify the general nongeographical aspect of the information need, the where term is used to specify the geographical areas of interest, and the relation term is used to specify a spatial relationship connecting what and where. While the what term can assume any form, in order to reflect any information need, the relation and where terms should be part of a controlled vocabulary. In particular, the relation term should refer to a well-known geographical relation that the underlying GIR system can interpret (e.g. near or contained at), and the where term should be disambiguated into a set of unique identifiers, corresponding to concepts at the ontology. Different systems can use alternative schemes to take input queries from the users. Three general strategies can be identified, and GIR systems often support more than one of the following schemes: Figure 1: Strategies for processing queries in Geographical Information Retrieval systems. 1. Input to the system is a textual query string. This is the hardest case, since we need to separate the query into the three different components, and then we need to disambiguate the where term into a set of unique identifiers. 2. Input to the system is provided in two separate strings, one concerning the what term, and the other concerning the where. The relation term can be either fixed (e.g. always assume the near relation), specified together with the where string, or provided separately from the users from a set of possible choices. Although there is no need for separating query string into the different components, we still need to disambiguate the where term into a set of unique identifiers. 3. Input to the system is provided through a query string together with an unambiguous description of the geographical area of interest (e.g. 
a sketch in a map, spatial coordinates or a selection from a set of possible choices). No disambiguation is required, and therefore the techniques described in this paper do not have to be applied. The first two schemes depend on place name disambiguation. Figure 1 illustrates how we propose to handle geographic queries in these first two schemes. A common component is the algorithm for disambiguating place references into corresponding ontology concepts, which is described next. 3.1 From Place Names to Ontology Concepts A required task in handling GIR queries consists of associating a string containing a geographical reference with the set of corresponding concepts at the geographic ontology. We propose to do this according to the pseudo-code listed at Algorithm 1. The algorithm considers the cases where a second (or even more than one) location is given to qualify a first (e.g. Paris, France). It makes recursive calls to match each location, and relies on hierarchical part-of relations to detect if two locations share a common hierarchy path. One of the provided locations should be more general and the other more specific, in the sense that there must exist a part-of relationship among the associated concepts at the ontology (either direct or transitive). The most specific location is a sub-region of the most general, and the algorithm returns the most specific one (i.e. for Paris, France the algorithm returns the ontology concept associated with Paris, the capital city of France). We also consider the cases where a geographical type expression is used to qualify a given name (e.g. city of Lisbon or state of New York). For instance the name Lisbon can correspond to many different concepts at a geographical ontology, and type Algorithm 1 Matching a place name with ontology concepts Require: O = A geographic ontology Require: GN = A string with the geographic name to be matched 1: L = An empty list 2: INDEX = The position in GN for the first occurrence of a comma, semi-colon or bracket character 3: if INDEX is defined then 4: GN1 = The substring of GN from position 0 to INDEX 5: GN2 = The substring of GN from INDEX +1 to length(GN) 6: L1 = Algorithm1(O,GN1) 7: L2 = Algorithm1(O,GN2) 8: for each C1 in L1 do 9: for each C2 in L2 do 10: if C1 is an ancestor of C2 at O then 11: L = The list L after adding element C2 12: else if C1 is a descendant of C2 at O then 13: L = The list L after adding element C1 14: end if 15: end for 16: end for 17: else 18: GN = The string GN after removing case and diacritics 19: if GN contains a geographic type qualifier then 20: T = The substring of GN containing the type qualifier 21: GN = The substring of GN with the type qualifier removed 22: L = The list of concepts from O with name GN and type T 23: else 24: L = The list of concepts from O with name GN 25: end if 26: end if 27: return The list L qualifiers can provide useful information for disambiguation. The considered type qualifiers should also described at the ontologies (e.g. each geographic concept should be associated to a type that is also defined at the ontology, such as country, district or city). Ideally, the geographical reference provided by the user should be disambiguated into a single ontology concept. However, this is not always possible, since the user may not provide all the required information (i.e. a type expression or a second qualifying location). The output is therefore a list with the possible concepts being referred to by the user. 
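A compact implementation of Algorithm 1 could look like the following Python sketch. It is not the authors' code: the ontology interface (a concepts_by_name lookup and an is_ancestor test over the part-of relations) and the list of type qualifiers are assumptions of ours, and only the recursive comma/semicolon/bracket splitting, the ancestor/descendant check and the type-qualifier handling of the pseudo-code are reproduced; the ranking of the returned candidates is discussed next.

```python
import re
import unicodedata

TYPE_QUALIFIERS = ("city", "state", "district", "country", "civil parish")  # illustrative list

def normalize(s):
    """Lower-case and strip diacritics, as in line 18 of Algorithm 1."""
    s = unicodedata.normalize("NFKD", s.lower().strip())
    return "".join(c for c in s if not unicodedata.combining(c))

def match_place_name(ontology, name):
    """Return the list of ontology concepts possibly referred to by `name`.
    `ontology` is assumed to expose concepts_by_name(name, type=None) and
    is_ancestor(c1, c2) over the hierarchical part-of relations."""
    m = re.search(r"[,;(]", name)
    if m:
        # Qualified reference such as "Paris, France": match both parts recursively
        # and keep the more specific concept of each hierarchically compatible pair.
        left = match_place_name(ontology, name[:m.start()])
        right = match_place_name(ontology, name[m.end():].rstrip(") "))
        result = []
        for c1 in left:
            for c2 in right:
                if ontology.is_ancestor(c1, c2):
                    result.append(c2)
                elif ontology.is_ancestor(c2, c1):
                    result.append(c1)
        return result
    # Unqualified reference, possibly preceded by a type qualifier ("city of Lisbon").
    gn = normalize(name)
    for qualifier in TYPE_QUALIFIERS:
        prefix = qualifier + " of "
        if gn.startswith(prefix):
            return ontology.concepts_by_name(gn[len(prefix):], type=qualifier)
    return ontology.concepts_by_name(gn)
```

For "Paris, France", for example, the recursion would match both names and keep the concept for the French capital, provided the ontology records the corresponding part-of link.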
In a final step, we propose to sort this list, so that if a single concept is required as output, we can use the one that is ranked higher. The sorting procedure reflects the likelihood of each concept being indeed the one referred to. We propose to rank concepts according to the following heuristics: 1. The geographical type expression associated with the ontology concept. For the same name, a country is more likely to be referenced than a city, and in turn a city more likely to be referenced than a street. 2. Number of ancestors at the ontology. Top places at the ontology tend to be more general, and are therefore more likely to be referenced in search engine queries. 3. Population count. Highly populated places are better known, and therefore more likely to be referenced in queries. 4. Population counts from direct ancestors at the ontology. Subregions of highly populated places are better known, and also more likely to be referenced in search engine queries. 5. Occurrence frequency over Web documents (e.g. Google counts) for the geographical names. Places names that occur more frequently over Web documents are also more likely to be referenced in search engine queries. 6. Number of descendants at the ontology. Places that have more sub-regions tend to be more general, and are therefore more likely to be mentioned in search engine queries. 7. String size for the geographical names. Short names are more likely to be mentioned in search engine queries. Algorithm 1, plus the ranking procedure, can already handle GIR queries where the where term is given separately from the what and relation terms. However, if the query is given in a single string, we require the identification of the associated < what,relation,where > triple, before disambiguating the where term into the corresponding ontology concepts. This is described in the following Section. 3.2 Handling Single Query Strings Algorithm 2 provides the mechanism for separating a query string into a < what,relation,where > triple. It uses Algorithm 1 to find the where term, disambiguating it into a set of ontology concepts. The algorithm starts by tokenizing the query string into individual words, also taking care of removing case and diacritics. We have a simple tokenizer that uses the space character as a word delimiter, but we could also have a tokenization approach similar to the proposal of Wang et. al. which relies on Web occurrence statistics to avoid breaking collocations [17]. In the future, we plan on testing if this different tokenization scheme can improve results. Next, the algorithm tests different possible splits of the query, building the what, relation and where terms through concatenations of the individual tokens. The relation term is matched against a list of possible values (e.g. near, at, around, or south of), corresponding to the operators that are supported by the GIR system. Note that is also the responsibility of the underlying GIR system to interpret the actual meaning of the different spatial relations. Algorithm 1 is used to check whether a where term constitutes a geographical reference or not. We also check if the last word in the what term belongs to a list of exceptions, containing for instance first names of people in different languages. This ensures that a query like Denzel Washington is appropriately handled. If the algorithm succeeds in finding valid relation and where terms, then the corresponding triple is returned. 
Otherwise, we return a triple with the what term equaling the query string, and the relation and where terms set as empty. If the entire query string constitutes a geographical reference, we return a triple with the what term set to empty, the where term equaling the query string, and the relation term set the DEFINITION (i.e. these queries should be answered with information about the given place references). The algorithm also handles query strings where more than one geographical reference is provided, using and or an equivalent preposition, together with a recursive call to Algorithm 2. A query like Diamond trade in Angola and South Africa is Algorithm 2 Get < what,relation,where > from a query string Require: O = A geographical ontology Require: Q = A non-empty string with the query 1: Q = The string Q after removing case and diacritics 2: TOKENS[0. . N] = An array of strings with the individual words of Q 3: N = The size of the TOKENS array 4: for INDEX = 0 to N do 5: if INDEX = 0 then 6: WHAT = Concatenation of TOKENS[0. . INDEX −1] 7: LASTWHAT = TOKENS[INDEX −1] 8: else 9: WHAT = An empty string 10: LASTWHAT = An empty string 11: end if 12: WHERE = Concatenation of TOKENS[INDEX. . N] 13: RELATION = An empty string 14: for INDEX2 = INDEX to N −1 do 15: RELATION2 = Concatenation of TOKENS[INDEX. . INDEX2] 16: if RELATION2 is a valid geographical relation then 17: WHERE = Concatenation of S[INDEX2 +1. . N] 18: RELATION = RELATION2; 19: end if 20: end for 21: if RELATION = empty AND LASTWHAT in an exception then 22: TESTGEO = FALSE 23: else 24: TESTGEO = TRUE 25: end if 26: if TESTGEO AND Algorithm1(WHERE) <> EMPTY then 27: if WHERE ends with AND SURROUNDINGS then 28: RELATION = The string NEAR; 29: WHERE = The substring of WHERE with AND SURROUNDINGS removed 30: end if 31: if WHAT ends with AND or similar) then 32: < WHAT,RELATION,WHERE2 >= Algorithm2(WHAT) 33: WHERE = Concatenation of WHERE with WHERE2 34: end if 35: if RELATION = An empty string then 36: if WHAT = An empty string then 37: RELATION = The string DEFINITION 38: else 39: RELATION = The string CONTAINED-AT 40: end if 41: end if 42: else 43: WHAT = The string Q 44: WHERE = An empty string 45: RELATION = An empty string 46: end if 47: end for 48: return < WHAT,RELATION,WHERE > therefore appropriately handled. Finally, if the geographical reference in the query is complemented with an expression similar to and its surroundings, the spatial relation (which is assumed to be CONTAINED-AT if none is provided) is changed to NEAR. 3.3 From Search Results to Query Locality The procedures given so far are appropriate for handling queries where a place reference is explicitly mentioned. However, the fact that a query can be associated with a geographical context may not be directly observable in the query itself, but rather from the results returned. For instance, queries like recommended hotels for SIGIR 2006 or SeaFair 2006 lodging can be seen to refer to the city of Seattle. Although they do not contain an explicit place reference, we expect results to be about hotels in Seattle. In the cases where a query does not contain place references, we start by assuming that the top results from a search engine represent the most popular and correct context and usage for the query. 
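Before looking at how the top results are used for such location-implicit queries, the query-splitting procedure of Algorithm 2 (Sect. 3.2) can be sketched as follows. This is a simplified, hedged version of the pseudo-code above, not the authors' implementation: it reuses the hypothetical normalize and match_place_name helpers from the previous sketch, the relation vocabulary and exception list are illustrative placeholders, and the recursive handling of "and"-coordinated locations and of the "and surroundings" suffix is omitted.

```python
RELATIONS = {"near", "at", "around", "in", "south of", "north of"}   # supported spatial relations (illustrative)
EXCEPTIONS = {"denzel", "george"}                                    # first names etc. (illustrative)

def split_query(ontology, query):
    """Return a (what, relation, where_concepts) triple for a query string,
    or (query, "", []) when no place reference is recognized."""
    tokens = normalize(query).split()
    n = len(tokens)
    for i in range(n):
        what = " ".join(tokens[:i])
        relation, where_start = "", i
        # Try the longest supported spatial relation starting at position i.
        for j in range(i, n):
            candidate = " ".join(tokens[i:j + 1])
            if candidate in RELATIONS:
                relation, where_start = candidate, j + 1
        where = " ".join(tokens[where_start:])
        if not where:
            continue
        if not relation and i > 0 and tokens[i - 1] in EXCEPTIONS:
            continue                     # e.g. "Denzel Washington" is not geographic
        concepts = match_place_name(ontology, where)
        if concepts:
            if not relation:
                relation = "DEFINITION" if not what else "CONTAINED-AT"
            return what, relation, concepts
    return query, "", []
```

For a query such as "pizza near seattle" the sketch would return ("pizza", "near", [...]), with the last element holding the ontology concepts proposed by the matcher.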
Topic | What | Relation | Where | TGN concepts | ML concepts
Vegetable Exporters of Europe | Vegetable Exporters | CONTAINED-AT | Europe | 1 | 1
Trade Unions in Europe | Trade Unions | CONTAINED-AT | Europe | 1 | 1
Roman cities in the UK and Germany | Roman cities | CONTAINED-AT | UK and Germany | 6 | 2
Cathedrals in Europe | Cathedrals | CONTAINED-AT | Europe | 1 | 1
Car bombings near Madrid | Car bombings | NEAR | Madrid | 14 | 2
Volcanos around Quito | Volcanos | NEAR | Quito | 4 | 1
Cities within 100km of Frankfurt | Cities | NEAR | Frankfurt | 3 | 1
Russian troops in south(ern) Caucasus | Russian troops in south(ern) | CONTAINED-AT | Caucasus | 2 | 1
Cities near active volcanoes | (this topic could not be appropriately handled: the relation and where terms are returned empty)
Japanese rice imports | (this topic could not be appropriately handled: the relation and where terms are returned empty)
Table 1: Example topics from the GeoCLEF evaluation campaigns and the corresponding <what, relation, where> triples.
We then propose to use the distributional characteristics of geographical scopes previously assigned to the documents corresponding to these top results. In a previous work, we presented a text mining approach for assigning documents with corresponding geographical scopes, defined at an ontology, which worked as an offline preprocessing stage in a GIR system [13]. This pre-processing step is a fundamental stage of GIR, and it is reasonable to assume that this kind of information would be available in any system. Similarly to Wang et al., we could also attempt to process the results on-line, in order to detect place references in the documents [17]. However, a GIR system already requires the offline stage. For the top N documents given in the results, we check the geographic scopes that were assigned to them. If a significant portion of the results is assigned to the same scope, then the query can be seen to be related to the corresponding geographic concept. This assumption could even be relaxed, for instance by checking if the documents belong to scopes that are hierarchically related. 4. EVALUATION EXPERIMENTS We used three different ontologies in the evaluation experiments, namely the Getty thesaurus of geographic names (TGN) [6] and two specific resources developed at our group, here referred to as the PT and ML ontologies [2]. TGN and ML include global geographical information in multiple languages (although TGN is considerably larger), while the PT ontology covers the Portuguese territory in high detail. Place types are also different across ontologies; for instance, PT includes street names and postal addresses, whereas ML only goes to the level of cities. The reader should refer to [2, 6] for a complete description of these resources. Our initial experiments used Portuguese and English topics from the GeoCLEF 2005 and 2006 evaluation campaigns. Topics in GeoCLEF correspond to query strings that can be used as input to a GIR system [4]. ImageCLEF 2006 also included topics specifying place references, and participants were encouraged to run their GIR systems on them. Our experiments also considered this dataset. For each topic, we measured whether Algorithm 2 was able to find the corresponding <what, relation, where> triple. The ontologies used in this experiment were the TGN and ML, as topics were given in multiple languages and covered the whole globe.
Dataset Number of Correct Triples Time per Query Queries ML TGN ML TGN GeoCLEF05 EN 25 19 20 GeoCLEF05 PT 25 20 18 288.1 334.5 GeoCLEF06 EN 32 28 19 msec msec GeoCLEF06 PT 25 23 11 ImgCLEF06 EN 24 16 18 Table 2: Summary of results over CLEF topics. Table 1 illustrates some of the topics, and Table 2 summarizes the obtained results. The tables show that the proposed technique adequately handles most of these queries. A manual inspection of the ontology concepts that were returned for each case also revealed that the where term was being correctly disambiguated. Note that the TGN ontology indeed added some ambiguity, as for instance names like Madrid can correspond to many different places around the globe. It should also be noted that some of the considered topics are very hard for an automated system to handle. Some of them were ambiguous (e.g. in Japanese rice imports, the query can be said to refer either rice imports in Japan or imports of Japanese rice), and others contained no direct geographical references (e.g. cities near active volcanoes). Besides these very hard cases, we also missed some topics due to their usage of place adjectives and specific regions that are not defined at the ontologies (e.g. environmental concerns around the Scottish Trossachs). In a second experiment, we used a sample of around 100,000 real search engine queries. The objective was to see if a significant number of these queries were geographical in nature, also checking if the algorithm did not produce many mistakes by classifying a query as geographical when that was not the case. The Portuguese ontology was used in this experiment, and queries were taken from the logs of a Portuguese Web search engine available at www.tumba.pt. Table 3 summarizes the obtained results. Many queries were indeed geographical (around 3.4%, although previous studies reported values above 14% [8]). A manual inspection showed that the algorithm did not produce many false positives, and the geographical queries were indeed correctly split into correct < what,relation,where > triple. The few mistakes we encountered were related to place names that were more frequently used in other contexts (e.g. in Teófilo Braga we have the problem that Braga is a Portuguese district, and Teófilo Braga was a well known Portuguese writer and politician). The addition of more names to the exception list can provide a workaround for most of these cases. Value Num. Queries 110,916 Num. Queries without Geographical References 107,159 (96.6%) Num. Queries with Geographical References 3,757 (3.4%) Table 3: Results from an experiment with search engine logs. We also tested the procedure for detecting queries that are implicitly geographical with a small sample of queries from the logs. For instance, for the query Estádio do Dragão (e.g. home stadium for a soccer team from Porto), the correct geographical context can be discovered from the analysis of the results (more than 75% of the top 20 results are assigned with the scope Porto). For future work, we plan on using a larger collection of queries to evaluate this aspect. Besides queries from the search engine logs, we also plan on using the names of well-known buildings, monuments and other landmarks, as they have a strong geographical connotation. Finally, we also made a comparative experiment with 2 popular geocoders, Maporama and Microsoft``s Mappoint. The objective was to compare Algorithm 1 with other approaches, in terms of being able to correctly disambiguate a string with a place reference. 
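The procedure of Sect. 3.3 for location-implicit queries, whose evaluation is reported next, is essentially a majority vote over the geographic scopes previously assigned to the top results. A minimal sketch, under the assumptions that each retrieved document carries a precomputed scope identifier and that "a significant portion" is read as a configurable share (the 0.5 default and the function name are ours):

```python
from collections import Counter

def implicit_query_scope(result_scopes, n=20, min_share=0.5):
    """result_scopes: scope identifiers of the search results, in rank order
    (None for documents without an assigned scope). Returns the dominant scope
    among the top n results if it reaches min_share, otherwise None."""
    top = [s for s in result_scopes[:n] if s is not None]
    if not top:
        return None
    scope, count = Counter(top).most_common(1)[0]
    return scope if count / len(top) >= min_share else None
```

Under these assumptions, a query such as Estádio do Dragão, where most of the top results share the scope Porto, would be assigned that scope, matching the example discussed next.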
Civil Parishes from Lisbon Maporama Mappoint Ours Coded refs. (out of 53) 9 (16.9%) 30 (56,6%) 15 (28.3%) Avg. Time per ref. (msec) 506.23 1235.87 143.43 Civil Parishes from Porto Maporama Mappoint Ours Coded refs. (out of 15) 0 (0%) 2 (13,3%) 5 (33.3%) Avg. Time per ref. (msec) 514.45 991.88 132.14 Table 4: Results from a comparison with geocoding services. The Portuguese ontology was used in this experiment, taking as input the names of civil parishes from the Portuguese municipalities of Lisbon and Porto, and checking if the systems were able to disambiguate the full name (e.g. Campo Grande, Lisboa or Foz do Douro, Porto) into the correct geocode. We specifically measured whether our approach was better at unambiguously returning geocodes given the place reference (i.e. return the single correct code), and providing results rapidly. Table 4 shows the obtained results, and the accuracy of our method seems comparable to the commercial geocoders. Note that for Maporama and Mappoint, the times given at Table 4 include fetching results from the Web, but we have no direct way of accessing the geocoding algorithms (in both cases, fetching static content from the Web servers takes around 125 milliseconds). Although our approach cannot unambiguously return the correct geocode in most cases (only 20 out of a total of 68 cases), it nonetheless returns results that a human user can disambiguate (e.g. for Madalena, Lisboa we return both a street and a civil parish), as opposed to the other systems that often did not produce results. Moreover, if we consider the top geocode according to the ranking procedure described in Section 3.1, or if we use a type qualifier in the name (e.g. civil parish of Campo Grande, Lisboa), our algorithm always returns the correct geocode. 5. CONCLUSIONS This paper presented simple approaches for handling place references in search engine queries. This is a hard text mining problem, as queries are often ambiguous or underspecify information needs. However, our initial experiments indicate that for many queries, the referenced places can be determined effectively. Unlike the techniques proposed by Wang et. al. [17], we mainly focused on recognizing spatial relations and associating place names to ontology concepts. The proposed techniques were employed in the prototype system that we used for participating in GeoCLEF 2006. In queries where a geographical reference is not explicitly mentioned, we propose to use the results for the query, exploiting geographic scopes previously assigned to these documents. In the future, we plan on doing a careful evaluation of this last approach. Another idea that we would like to test involves the integration of a spelling correction mechanism [12] into Algorithm 1, so that incorrectly spelled place references can be matched to ontology concepts. The proposed techniques for handling geographic queries can have many applications in improving GIR systems or even general purpose search engines. After place references are appropriately disambiguated into ontology concepts, a GIR system can use them to retrieve relevant results, through the use of appropriate index structures (e.g. indexing the spatial coordinates associated with ontology concepts) and provided that the documents are also assigned to scopes corresponding to ontology concepts. A different GIR strategy can involve query expansion, by taking the where terms from the query and using the ontology to add names from neighboring locations. 
In a general purpose search engine, and if a local query is detected, we can forward users to a GIR system, which should be better suited for properly handling the query. The regular Google search interface already does this, by presenting a link to Google Local when it detects a geographical query. 6. REFERENCES [1] E. Amitay, N. Har``El, R. Sivan, and A. Soffer. Web-a-Where: Geotagging Web content. In Proceedings of SIGIR-04, the 27th Conference on research and development in information retrieval, 2004. [2] M. Chaves, M. J. Silva, and B. Martins. A Geographic Knowledge Base for Semantic Web Applications. In Proceedings of SBBD-05, the 20th Brazilian Symposium on Databases, 2005. [3] N. A. Chinchor. Overview of MUC-7/MET-2. In Proceedings of MUC-7, the 7th Message Understanding Conference, 1998. [4] F. Gey, R. Larson, M. Sanderson, H. Joho, and P. Clough. GeoCLEF: the CLEF 2005 cross-language geographic information retrieval track. In Working Notes for the CLEF 2005 Workshop, 2005. [5] L. Gravano, V. Hatzivassiloglou, and R. Lichtenstein. Categorizing Web queries according to geographical locality. In Proceedings of CIKM-03, the 12th Conference on Information and knowledge management, 2003. [6] P. Harpring. Proper words in proper places: The thesaurus of geographic names. MDA Information, 3, 1997. [7] C. Jones, R. Purves, A. Ruas, M. Sanderson, M. Sester, M. van Kreveld, and R. Weibel. Spatial information retrieval and geographical ontologies: An overview of the SPIRIT project. In Proceedings of SIGIR-02, the 25th Conference on Research and Development in Information Retrieval, 2002. [8] J. Kohler. Analyzing search engine queries for the use of geographic terms, 2003. (MSc Thesis). [9] A. Kornai and B. Sundheim, editors. Proceedings of the NAACL-HLT Workshop on the Analysis of Geographic References, 2003. [10] Y. Li, Z. Zheng, and H. Dai. KDD CUP-2005 report: Facing a great challenge. SIGKDD Explorations, 7, 2006. [11] D. Manov, A. Kiryakov, B. Popov, K. Bontcheva, D. Maynard, and H. Cunningham. Experiments with geographic knowledge for information extraction. In Proceedings of the NAACL-HLT Workshop on the Analysis of Geographic References, 2003. [12] B. Martins and M. J. Silva. Spelling correction for search engine queries. In Proceedings of EsTAL-04, España for Natural Language Processing, 2004. [13] B. Martins and M. J. Silva. A graph-ranking algorithm for geo-referencing documents. In Proceedings of ICDM-05, the 5th IEEE International Conference on Data Mining, 2005. [14] L. Souza, C. J. Davis, K. Borges, T. Delboni, and A. Laender. The role of gazetteers in geographic knowledge discovery on the web. In Proceedings of LA-Web-05, the 3rd Latin American Web Congress, 2005. [15] E. Tjong, K. Sang, and F. D. Meulder. Introduction to the CoNLL-2003 shared task: Language-Independent Named Entity Recognition. In Proceedings of CoNLL-2003, the 7th Conference on Natural Language Learning, 2003. [16] D. Vogel, S. Bickel, P. Haider, R. Schimpfky, P. Siemen, S. Bridges, and T. Scheffer. Classifying search engine queries using the Web as background knowledge. SIGKDD Explorations Newsletter, 7(2):117-122, 2005. [17] L. Wang, C. Wang, X. Xie, J. Forman, Y. Lu, W.-Y. Ma, and Y. Li. Detecting dominant locations from search queries. In Proceedings of SIGIR-05, the 28th Conference on Research and development in information retrieval, 2005.
Handling Locations in Search Engine Queries ABSTRACT This paper proposes simple techniques for handling place references in search engine queries, an important aspect of geographical information retrieval. We address not only the detection, but also the disambiguation of place references, by matching them explicitly with concepts at an ontology. Moreover, when a query does not reference any locations, we propose to use information from documents matching the query, exploiting geographic scopes previously assigned to these documents. Evaluation experiments, using topics from CLEF campaigns and logs from real search engine queries, show the effectiveness of the proposed approaches. 1. INTRODUCTION Search engine queries are often associated with geographical locations, either explicitly (i.e. a location reference is given as part of the query) or implicitly (i.e. the location reference is not present in the query string, but the query clearly has a local intent [17]). One of the concerns of geographical information retrieval (GIR) lies in appropriately handling such queries, bringing better targeted search results and improving user satisfaction. Nowadays, GIR is getting increasing attention. Systems that access resources on the basis of geographic context are starting to appear, both in the academic and commercial domains [4, 7]. Accurately and effectively detecting location references in search engine queries is a crucial aspect of these systems, as they are generally based on interpreting geographical terms differently from the others. Detecting locations in queries is also important for generalpropose search engines, as this information can be used to improve ranking algorithms. Queries with a local intent are best answered This research was partially supported Fundação para a Ciência e Tecnologia, under grants POSI/SRI/40193 / 2001 and SFRH/BD/10757 / 2002. with "localized" pages, while queries without any geographical references are best answered with "broad" pages [5]. Text mining methods have been successfully used in GIR to detect and disambiguate geographical references in text [9], or even to infer geographic scopes for documents [1, 13]. However, this body of research has been focused on processing Web pages and full-text documents. Search engine queries are more difficult to handle, in the sense that they are very short and with implicit and subjective user intents. Moreover, the data is also noisier and more versatile in form, and we have to deal with misspellings, multilingualism and acronyms. How to automatically understand what the user intended, given a search query, without putting the burden in the user himself, remains an open text mining problem. Key challenges in handling locations over search engine queries include their detection and disambiguation, the ranking of possible candidates, the detection of false positives (i.e not all contained location names refer to geographical locations), and the detection of implied locations by the context of the query (i.e. when the query does not explicitly contain a place reference but it is nonetheless geographical). Simple named entity recognition (NER) algorithms, based on dictionary look-ups for geographical names, may introduce high false positives for queries whose location names do not constitute place references. For example the query "Denzel Washington" contains the place name "Washington," but the query is not geographical. 
Queries can also be geographic without containing any explicit reference to locations at the dictionary. In these cases, place name extraction and disambiguation does not give any results, and we need to access other sources of information. This paper proposes simple and yet effective techniques for handling place references over queries. Each query is split into a triple <what, relation, where>, where what specifies the non-geographic aspect of the information need, where specifies the geographic areas of interest, and relation specifies a spatial relationship connecting what and where. When this is not possible, i.e. the query does not contain any place references, we try using information from documents matching the query, exploiting geographic scopes previously assigned to these documents. Disambiguating place references is one of the most important aspects. We use a search procedure that combines textual patterns with geographical names defined at an ontology, and we use heuristics to disambiguate the discovered references (e.g. more important places are preferred). Disambiguation results in having the where term, from the triple above, associated with the most likely corresponding concepts from the ontology. When we cannot detect any locations, we attempt to use geographical scopes previously inferred for the documents at the top search results. By doing this, we assume that the most frequent geographical scope in the results should correspond to the geographical context implicit in the query. Experiments with CLEF topics [4] and sample queries from a Web search engine show that the proposed methods are accurate, and may have applications in improving search results. The rest of this paper is organized as follows. We first formalize the problem and describe related work to our research. Next, we describe our approach for handling place names in queries, starting with the general approach for disambiguating place references over textual strings, then presenting the method for splitting a query into a <what, relation, where> triple, and finally discussing the technique for exploiting geographic scopes previously assigned to documents in the result set. Section 4 presents evaluation results. Finally, we give some conclusions and directions for future research. 2. CONCEPTS AND RELATED WORK Search engine performance depends on the ability to capture the most likely meaning of a query as intended by the user [16]. Previous studies showed that a significant portion of the queries submitted to search engines are geographic [8, 14]. A recent enhancement to search engine technology is the addition of geographic reasoning, combining geographic information systems and information retrieval in order to build search engines that find information associated with given locations. The ability to recognize and reason about the geographical terminology, given in the text documents and user queries, is a crucial aspect of these geographical information retrieval (GIR) systems [4, 7]. Extracting and distinguishing different types of entities in text is usually referred to as Named Entity Recognition (NER). For at least a decade, this has been an important text mining task, and a key feature of the Message Understanding Conferences (MUC) [3]. NER has been successfully automated with near-human performance, but the specific problem of recognizing geographical references presents additional challenges [9]. When handling named entities with a high level of detail, ambiguity problems arise more frequently. 
Ambiguity in geographical references is bi-directional [15]. The same name can be used for more than one location (referent ambiguity), and the same location can have more than one name (reference ambiguity). The former has another twist, since the same name can be used for locations as well as for other class of entities, such as persons or company names (referent class ambiguity). Besides the recognition of geographical expressions, GIR also requires that the recognized expressions be classified and grounded to unique identifiers [11]. Grounding the recognized expressions (e.g. associating them to coordinates or concepts at an ontology) assures that they can be used in more advanced GIR tasks. Previous works have addressed the tagging and grounding of locations in Web pages, as well as the assignment of geographic scopes to these documents [1, 7, 13]. This is a complementary aspect to the techniques described in this paper, since if we have the Web pages tagged with location information, a search engine can conveniently return pages with a geographical scope related to the scope of the query. The task of handling geographical references over documents is however considerably different from that of handling geographical references over queries. In our case, queries are usually short and often do not constitute proper sentences. Text mining techniques that make use of context information are difficult to apply for high accuracy. Previous studies have also addressed the use of text mining and automated classification techniques over search engine queries [16, 10]. However, most of these works did not consider place references or geographical categories. Again, these previously proposed methods are difficult to apply to the geographic domain. Gravano et. al. studied the classification of Web queries into two types, namely local and global [5]. They defined a query as local if its best matches on a Web search engine are likely to be local pages, such as "houses for sale." A number of classification algorithms have been evaluated using search engine queries. However, their experimental results showed that only a rather low precision and recall could be achieved. The problem addressed in this paper is also slightly different, since we are trying not only to detect local queries but also to disambiguate the local of interest. Wang et. al. proposed to go further than detecting local queries, by also disambiguating the implicit local of interest [17]. The proposed approach works for both queries containing place references and queries not containing them, by looking for dominant geographic references over query logs and text from search results. In comparison, we propose simpler techniques based on matching names from a geographic ontology. Our approach looks for spatial relationships at the query string, and it also associates the place references to ontology concepts. In the case of queries not containing explicit place references, we use geographical scopes previously assigned to the documents, whereas Wang et. al. proposed to extract locations from the text of the top search results. There are nowadays many geocoding, reverse-geocoding, and mapping services on the Web that can be easily integrated with other applications. Geocoding is the process of locating points on the surface of the Earth from alphanumeric addressing data. Taking a string with an address, a geocoder queries a geographical information system and returns interpolated coordinate values for the given location. 
Instead of computing coordinates for a given place reference, the technique described in this paper aims at assigning references to the corresponding ontology concepts. However, if each concept at the ontology contains associated coordinate information, the approach described here could also be used to build a geocoding service. Most of such existing services are commercial in nature, and there are no technical publications describing them. A number of commercial search services have also started to support location-based searches. Google Local, for instance, initially required the user to specify a location qualifier separately from the search query. More recently, it added location look-up capabilities that extract locations from query strings. For example, in a search for "Pizza Seattle", Google Local returns "local results for pizza near Seattle, WA." However, the intrinsics of their solution are not published, and their approach also does not handle locationimplicit queries. Moreover, Google Local does not take spatial relations into account. In sum, there are already some studies on tagging geographical references, but Web queries pose additional challenges which have not been addressed. In this paper, we explain the proposed solutions for the identified problems. 3. HANDLING QUERIES IN GIR SYSTEMS Most GIR queries can be parsed to <what, relation, where> triple, where the what term is used to specify the general nongeographical aspect of the information need, the where term is used to specify the geographical areas of interest, and the relation term is used to specify a spatial relationship connecting what and where. While the what term can assume any form, in order to reflect any information need, the relation and where terms should be part of a controlled vocabulary. In particular, the relation term should refer to a well-known geographical relation that the underlying GIR system can interpret (e.g. "near" or "contained at"), and the where term should be disambiguated into a set of unique identifiers, corresponding to concepts at the ontology. Different systems can use alternative schemes to take input queries from the users. Three general strategies can be identified, and GIR systems often support more than one of the following schemes: Figure 1: Strategies for processing queries in Geographical Information Retrieval systems. 1. Input to the system is a textual query string. This is the hardest case, since we need to separate the query into the three different components, and then we need to disambiguate the where term into a set of unique identifiers. 2. Input to the system is provided in two separate strings, one concerning the what term, and the other concerning the where. The relation term can be either fixed (e.g. always assume the "near" relation), specified together with the where string, or provided separately from the users from a set of possible choices. Although there is no need for separating query string into the different components, we still need to disambiguate the where term into a set of unique identifiers. 3. Input to the system is provided through a query string together with an unambiguous description of the geographical area of interest (e.g. a sketch in a map, spatial coordinates or a selection from a set of possible choices). No disambiguation is required, and therefore the techniques described in this paper do not have to be applied. The first two schemes depend on place name disambiguation. 
Figure 1 illustrates how we propose to handle geographic queries in these first two schemes. A common component is the algorithm for disambiguating place references into corresponding ontology concepts, which is described next. 3.1 From Place Names to Ontology Concepts A required task in handling GIR queries consists of associating a string containing a geographical reference with the set of corresponding concepts at the geographic ontology. We propose to do this according to the pseudo-code listed as Algorithm 1. The algorithm considers the cases where a second (or even more than one) location is given to qualify a first (e.g. "Paris, France"). It makes recursive calls to match each location, and relies on hierarchical part-of relations to detect if two locations share a common hierarchy path. One of the provided locations should be more general and the other more specific, in the sense that there must exist a part-of relationship among the associated concepts at the ontology (either direct or transitive). The most specific location is a sub-region of the most general, and the algorithm returns the most specific one (i.e. for "Paris, France" the algorithm returns the ontology concept associated with Paris, the capital city of France). We also consider the cases where a geographical type expression is used to qualify a given name (e.g. "city of Lisbon" or "state of New York"). For instance, the name "Lisbon" can correspond to many different concepts at a geographical ontology, and type qualifiers can provide useful information for disambiguation.
Algorithm 1: matching a geographical name GN against the ontology O, returning a list L of candidate concepts.
1: L = An empty list
2: INDEX = The position in GN of the first occurrence of a comma, semi-colon or bracket character
3: if INDEX is defined then
4: GN1 = The substring of GN from position 0 to INDEX
5: GN2 = The substring of GN from INDEX + 1 to length(GN)
6: L1 = Algorithm1(O, GN1)
7: L2 = Algorithm1(O, GN2)
8: for each C1 in L1 do
9: for each C2 in L2 do
10: if C1 is an ancestor of C2 at O then
11: L = The list L after adding element C2
12: else if C1 is a descendant of C2 at O then
13: L = The list L after adding element C1
14: end if
15: end for
16: end for
17: else
18: GN = The string GN after removing case and diacritics
19: if GN contains a geographic type qualifier then
20: T = The substring of GN containing the type qualifier
21: GN = The substring of GN with the type qualifier removed
22: L = The list of concepts from O with name GN and type T
23: else
24: L = The list of concepts from O with name GN
25: end if
26: end if
27: return The list L
The considered type qualifiers should also be described at the ontologies (e.g. each geographic concept should be associated with a type that is also defined at the ontology, such as country, district or city). Ideally, the geographical reference provided by the user should be disambiguated into a single ontology concept. However, this is not always possible, since the user may not provide all the required information (i.e. a type expression or a second qualifying location). The output is therefore a list with the possible concepts being referred to by the user. In a final step, we propose to sort this list, so that if a single concept is required as output, we can use the one that is ranked higher. The sorting procedure reflects the likelihood of each concept being indeed the one referred to. We propose to rank concepts according to the following heuristics: 1. The geographical type expression associated with the ontology concept. 
For the same name, a country is more likely to be referenced than a city, and in turn a city more likely to be referenced than a street. 2. Number of ancestors at the ontology. Top places at the ontology tend to be more general, and are therefore more likely to be referenced in search engine queries. 3. Population count. Highly populated places are better known, and therefore more likely to be referenced in queries. 4. Population counts from direct ancestors at the ontology. Subregions of highly populated places are better known, and also more likely to be referenced in search engine queries. 5. Occurrence frequency over Web documents (e.g. Google counts) for the geographical names. Places names that occur more frequently over Web documents are also more likely to be referenced in search engine queries. 6. Number of descendants at the ontology. Places that have more sub-regions tend to be more general, and are therefore more likely to be mentioned in search engine queries. 7. String size for the geographical names. Short names are more likely to be mentioned in search engine queries. Algorithm 1, plus the ranking procedure, can already handle GIR queries where the where term is given separately from the what and relation terms. However, if the query is given in a single string, we require the identification of the associated <what, relation, where> triple, before disambiguating the where term into the corresponding ontology concepts. This is described in the following Section. 3.2 Handling Single Query Strings Algorithm 2 provides the mechanism for separating a query string into a <what, relation, where> triple. It uses Algorithm 1 to find the where term, disambiguating it into a set of ontology concepts. The algorithm starts by tokenizing the query string into individual words, also taking care of removing case and diacritics. We have a simple tokenizer that uses the space character as a word delimiter, but we could also have a tokenization approach similar to the proposal of Wang et. al. which relies on Web occurrence statistics to avoid breaking collocations [17]. In the future, we plan on testing if this different tokenization scheme can improve results. Next, the algorithm tests different possible splits of the query, building the what, relation and where terms through concatenations of the individual tokens. The relation term is matched against a list of possible values (e.g. "near," "at," "around," or "south of"), corresponding to the operators that are supported by the GIR system. Note that is also the responsibility of the underlying GIR system to interpret the actual meaning of the different spatial relations. Algorithm 1 is used to check whether a where term constitutes a geographical reference or not. We also check if the last word in the what term belongs to a list of exceptions, containing for instance first names of people in different languages. This ensures that a query like "Denzel Washington" is appropriately handled. If the algorithm succeeds in finding valid relation and where terms, then the corresponding triple is returned. Otherwise, we return a triple with the what term equaling the query string, and the relation and where terms set as empty. If the entire query string constitutes a geographical reference, we return a triple with the what term set to empty, the where term equaling the query string, and the relation term set the "DEFINITION" (i.e. these queries should be answered with information about the given place references). 
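Before turning to queries with multiple place references, the following sketch condenses the core of Algorithms 1 and 2 in Python. It is a minimal illustration rather than the actual system: the Ontology class, the relation vocabulary and the toy example data are assumptions made for the sketch, and the ranking uses only the type-preference and ancestor-count heuristics from the list above.

# Minimal sketch of place-reference matching (Algorithm 1) and query splitting
# (Algorithm 2). The Ontology interface and the data are assumptions.

RELATIONS = {"near", "at", "around", "south of"}   # assumed relation vocabulary
TYPE_RANK = {"country": 0, "district": 1, "city": 2, "street": 3}

class Ontology:
    def __init__(self, concepts, parents):
        self.concepts = concepts            # concept id -> (name, type)
        self.parents = parents              # concept id -> set of parent ids

    def by_name(self, name, type_=None):
        return [cid for cid, (n, t) in self.concepts.items()
                if n == name and (type_ is None or t == type_)]

    def ancestors(self, cid):
        seen, stack = set(), list(self.parents.get(cid, ()))
        while stack:
            parent = stack.pop()
            if parent not in seen:
                seen.add(parent)
                stack.extend(self.parents.get(parent, ()))
        return seen

def rank(onto, concepts):
    # Heuristics 1 and 2: prefer broader place types and fewer ancestors.
    key = lambda c: (TYPE_RANK.get(onto.concepts[c][1], 9), len(onto.ancestors(c)))
    return sorted(set(concepts), key=key)

def match_place(onto, name):
    # Algorithm 1 sketch: resolve references such as "paris, france" to concepts.
    if "," in name:
        head, tail = (part.strip() for part in name.split(",", 1))
        matched = []
        for c1 in match_place(onto, head):
            for c2 in match_place(onto, tail):
                if c1 in onto.ancestors(c2):
                    matched.append(c2)      # keep the most specific concept
                elif c2 in onto.ancestors(c1):
                    matched.append(c1)
        return rank(onto, matched)
    return rank(onto, onto.by_name(name.strip()))

def split_query(onto, query):
    # Algorithm 2 sketch: split a query into a <what, relation, where> triple.
    tokens = query.lower().split()
    for i in range(len(tokens)):
        for j in range(i, len(tokens) + 1):
            relation = " ".join(tokens[i:j])
            where = " ".join(tokens[j:])
            if (not relation or relation in RELATIONS) and where and match_place(onto, where):
                what = " ".join(tokens[:i])
                return what, (relation or ("definition" if not what else "contained-at")), where
    return query, "", ""

# Example with a toy two-concept ontology ("lisboa" the district, "lisboa" the city).
ONTO = Ontology({1: ("lisboa", "district"), 2: ("lisboa", "city")}, {2: {1}})
print(split_query(ONTO, "cheap hotels near Lisboa"))  # -> ('cheap hotels', 'near', 'lisboa')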
The algorithm also handles query strings where more than one geographical reference is provided, using "and" or an equivalent preposition, together with a recursive call to Algorithm 2. A query like "Diamond trade in Angola and South Africa" is therefore appropriately handled. Finally, if the geographical reference in the query is complemented with an expression similar to "and its surroundings," the spatial relation (which is assumed to be "CONTAINED-AT" if none is provided) is changed to "NEAR".
Algorithm 2: splitting a query string Q into a <WHAT, RELATION, WHERE> triple.
1: Q = The string Q after removing case and diacritics
2: TOKENS[0..N] = An array of strings with the individual words of Q
3: N = The size of the TOKENS array
4: for INDEX = 0 to N do
5: if INDEX > 0 then
6: WHAT = Concatenation of TOKENS[0..INDEX − 1]
7: LASTWHAT = TOKENS[INDEX − 1]
8: else
9: WHAT = An empty string
10: LASTWHAT = An empty string
...
26: if TESTGEO and Algorithm1(WHERE) <> EMPTY then
27: if WHERE ends with "AND SURROUNDINGS" then
28: RELATION = The string "NEAR"
29: WHERE = The substring of WHERE with "AND SURROUNDINGS" removed
30: end if
31: if WHAT ends with "AND" (or similar) then
32: <WHAT, RELATION, WHERE2> = Algorithm2(WHAT)
33: WHERE = Concatenation of WHERE with WHERE2
34: end if
35: if RELATION = An empty string then
36: if WHAT = An empty string then
37: RELATION = The string "DEFINITION"
38: else
39: RELATION = The string "CONTAINED-AT"
40: end if
41: end if
42: else
43: WHAT = The string Q
44: WHERE = An empty string
45: RELATION = An empty string
46: end if
47: end for
48: return <WHAT, RELATION, WHERE>
3.3 From Search Results to Query Locality The procedures given so far are appropriate for handling queries where a place reference is explicitly mentioned. However, the fact that a query can be associated with a geographical context may not be directly observable in the query itself, but rather from the results returned. For instance, queries like "recommended hotels for SIGIR 2006" or "SeaFair 2006 lodging" can be seen to refer to the city of Seattle. Although they do not contain an explicit place reference, we expect results to be about hotels in Seattle. In the cases where a query does not contain place references, we start by assuming that the top results from a search engine represent the most popular and correct context and usage for the query. We then propose to use the distributional characteristics of the geographical scopes previously assigned to the documents corresponding to these top results. In a previous work, we presented a text mining approach for assigning documents with corresponding geographical scopes, defined at an ontology, that worked as an offline pre-processing stage in a GIR system [13]. This pre-processing step is a fundamental stage of GIR, and it is reasonable to assume that this kind of information would be available in any such system. Similarly to Wang et al., we could also attempt to process the results on-line, in order to detect place references in the documents [17]. However, a GIR system already requires the offline stage. For the top N documents given at the results, we check the geographic scopes that were assigned to them. If a significant portion of the results are assigned to the same scope, then the query can be seen to be related to the corresponding geographic concept. This assumption could even be relaxed, for instance by checking if the documents belong to scopes that are hierarchically related.
Table 1: Example topics from the GeoCLEF evaluation campaigns and the corresponding <what, relation, where> triples.
4. 
EVALUATION EXPERIMENTS We used three different ontologies in evaluation experiments, namely the Getty thesaurus of geographic names (TGN) [6] and two specific resources developed at our group, here referred to as the PT and ML ontologies [2]. TGN and ML include global geographical information in multiple languages (although TGN is considerably larger), while the PT ontology focuses on the Portuguese territory with a high detail. Place types are also different across ontologies, as for instance PT includes street names and postal addresses, whereas ML only goes to the level of cities. The reader should refer to [2, 6] for a complete description of these resources. Our initial experiments used Portuguese and English topics from the GeoCLEF 2005 and 2006 evaluation campaigns. Topics in GeoCLEF correspond to query strings that can be used as input to a GIR system [4]. ImageCLEF 2006 also included topics specifying place references, and participants were encouraged to run their GIR systems on them. Our experiments also considered this dataset. For each topic, we measured if Algorithm 2 was able to find the corresponding <what, relation, where> triple. The ontologies used in this experiment were the TGN and ML, as topics were given in multiple languages and covered the whole globe. Table 2: Summary of results over CLEF topics. Table 1 illustrates some of the topics, and Table 2 summarizes the obtained results. The tables show that the proposed technique adequately handles most of these queries. A manual inspection of the ontology concepts that were returned for each case also revealed that the where term was being correctly disambiguated. Note that the TGN ontology indeed added some ambiguity, as for instance names like "Madrid" can correspond to many different places around the globe. It should also be noted that some of the considered topics are very hard for an automated system to handle. Some of them were ambiguous (e.g. in "Japanese rice imports," the query can be said to refer either rice imports in Japan or imports of Japanese rice), and others contained no direct geographical references (e.g. cities near active volcanoes). Besides these very hard cases, we also missed some topics due to their usage of place adjectives and specific regions that are not defined at the ontologies (e.g. environmental concerns around the Scottish Trossachs). In a second experiment, we used a sample of around 100,000 real search engine queries. The objective was to see if a significant number of these queries were geographical in nature, also checking if the algorithm did not produce many mistakes by classifying a query as geographical when that was not the case. The Portuguese ontology was used in this experiment, and queries were taken from the logs of a Portuguese Web search engine available at www.tumba.pt. Table 3 summarizes the obtained results. Many queries were indeed geographical (around 3.4%, although previous studies reported values above 14% [8]). A manual inspection showed that the algorithm did not produce many false positives, and the geographical queries were indeed correctly split into correct <what, relation, where> triple. The few mistakes we encountered were related to place names that were more frequently used in other contexts (e.g. in "Teófilo Braga" we have the problem that "Braga" is a Portuguese district, and "Teófilo Braga" was a well known Portuguese writer and politician). The addition of more names to the exception list can provide a workaround for most of these cases. 
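For queries without any place reference, Section 3.3 proposed voting over the geographic scopes previously assigned to the top search results; the "Estádio do Dragão" experiment reported next exercises exactly this procedure. The sketch below is a minimal illustration of that scope vote, assuming each result already carries a scope identifier from the offline assignment stage; the min_share threshold stands in for the unspecified "significant portion" criterion.

# Sketch of the implicit-locality detection of Section 3.3: vote over the
# geographic scopes previously assigned (offline) to the top-N results.
from collections import Counter

def implicit_scope(result_scopes, min_share=0.5):
    # result_scopes: scope identifiers of the top-N results (None if unassigned).
    # Returns the dominant scope if it covers at least min_share of the results.
    scopes = [s for s in result_scopes if s is not None]
    if not scopes:
        return None
    scope, count = Counter(scopes).most_common(1)[0]
    return scope if count / len(result_scopes) >= min_share else None

# E.g. 16 of the top 20 results assigned to the scope "Porto":
print(implicit_scope(["Porto"] * 16 + ["Lisboa"] * 2 + [None] * 2))  # -> Porto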
Table 3: Results from an experiment with search engine logs. We also tested the procedure for detecting queries that are implicitly geographical with a small sample of queries from the logs. For instance, for the query "Estádio do Dragão" (e.g. home stadium for a soccer team from Porto), the correct geographical context can be discovered from the analysis of the results (more than 75% of the top 20 results are assigned with the scope "Porto"). For future work, we plan on using a larger collection of queries to evaluate this aspect. Besides queries from the search engine logs, we also plan on using the names of well-known buildings, monuments and other landmarks, as they have a strong geographical connotation. Finally, we also made a comparative experiment with 2 popular geocoders, Maporama and Microsoft's Mappoint. The objective was to compare Algorithm 1 with other approaches, in terms of being able to correctly disambiguate a string with a place reference. Table 4: Results from a comparison with geocoding services. The Portuguese ontology was used in this experiment, taking as input the names of civil parishes from the Portuguese municipalities of Lisbon and Porto, and checking if the systems were able to disambiguate the full name (e.g. "Campo Grande, Lisboa" or "Foz do Douro, Porto") into the correct geocode. We specifically measured whether our approach was better at unambiguously returning geocodes given the place reference (i.e. return the single correct code), and providing results rapidly. Table 4 shows the obtained results, and the accuracy of our method seems comparable to the commercial geocoders. Note that for Maporama and Mappoint, the times given at Table 4 include fetching results from the Web, but we have no direct way of accessing the geocoding algorithms (in both cases, fetching static content from the Web servers takes around 125 milliseconds). Although our approach cannot unambiguously return the correct geocode in most cases (only 20 out of a total of 68 cases), it nonetheless returns results that a human user can disambiguate (e.g. for "Madalena, Lisboa" we return both a street and a civil parish), as opposed to the other systems that often did not produce results. Moreover, if we consider the top geocode according to the ranking procedure described in Section 3.1, or if we use a type qualifier in the name (e.g. "civil parish of Campo Grande, Lisboa"), our algorithm always returns the correct geocode. 5. CONCLUSIONS This paper presented simple approaches for handling place references in search engine queries. This is a hard text mining problem, as queries are often ambiguous or underspecify information needs. However, our initial experiments indicate that for many queries, the referenced places can be determined effectively. Unlike the techniques proposed by Wang et. al. [17], we mainly focused on recognizing spatial relations and associating place names to ontology concepts. The proposed techniques were employed in the prototype system that we used for participating in GeoCLEF 2006. In queries where a geographical reference is not explicitly mentioned, we propose to use the results for the query, exploiting geographic scopes previously assigned to these documents. In the future, we plan on doing a careful evaluation of this last approach. Another idea that we would like to test involves the integration of a spelling correction mechanism [12] into Algorithm 1, so that incorrectly spelled place references can be matched to ontology concepts. 
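The spelling-correction extension mentioned above could, in its simplest form, fall back to approximate string matching against the ontology's name list when an exact look-up fails. The sketch below uses Python's standard difflib purely as a stand-in for a dedicated spelling-correction component such as the one cited in [12]; the 0.8 cutoff is an arbitrary choice for the example.

# Sketch of a spelling-tolerant name look-up for Algorithm 1 (difflib as a
# stand-in for a real spelling-correction component).
import difflib

def fuzzy_lookup(ontology_names, name, cutoff=0.8):
    # Return ontology names close enough to the (possibly misspelled) input.
    return difflib.get_close_matches(name.lower(), ontology_names, n=3, cutoff=cutoff)

print(fuzzy_lookup(["lisboa", "porto", "braga"], "Lisbva"))  # -> ['lisboa']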
The proposed techniques for handling geographic queries can have many applications in improving GIR systems or even general purpose search engines. After place references are appropriately disambiguated into ontology concepts, a GIR system can use them to retrieve relevant results, through the use of appropriate index structures (e.g. indexing the spatial coordinates associated with ontology concepts) and provided that the documents are also assigned to scopes corresponding to ontology concepts. A different GIR strategy can involve query expansion, by taking the where terms from the query and using the ontology to add names from neighboring locations. In a general purpose search engine, and if a local query is detected, we can forward users to a GIR system, which should be better suited for properly handling the query. The regular Google search interface already does this, by presenting a link to Google Local when it detects a geographical query.
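One way to realise the query-expansion strategy just mentioned is sketched below. It is a minimal illustration only: the neighbours mapping is an assumption, and a real ontology could derive it from adjacency relations or from the coordinates attached to concepts.

# Sketch of ontology-based query expansion: add the names of neighbouring
# locations to the where term of the query (neighbours mapping is assumed).
def expand_query(what, where, neighbours):
    # Build an expanded keyword query: <what> plus the place and its neighbours.
    places = [where] + list(neighbours.get(where, []))[:4]
    return what + " (" + " OR ".join(places) + ")"

print(expand_query("hotels", "Sintra", {"Sintra": ["Cascais", "Mafra"]}))
# -> 'hotels (Sintra OR Cascais OR Mafra)'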
H-81
Distance Measures for MPEG-7-based Retrieval
In visual information retrieval the careful choice of suitable proximity measures is a crucial success factor. The evaluation presented in this paper aims at showing that the distance measures suggested by the MPEG-7 group for the visual descriptors can be beaten by general-purpose measures. Eight visual MPEG-7 descriptors were selected and 38 distance measures implemented. Three media collections were created and assessed, performance indicators developed and more than 22500 tests performed. Additionally, a quantisation model was developed to be able to use predicate-based distance measures on continuous data as well. The evaluation shows that the distance measures recommended in the MPEG-7-standard are among the best but that other measures perform even better.
[ "distanc measur", "distanc measur", "mpeg-7", "visual inform retriev", "visual descriptor", "media collect", "perform indic", "mpeg-7-base retriev", "visual media", "meehl index", "human similar percept", "predic-base model", "content-base imag retriev", "content-base video retriev", "similar measur", "similar percept" ]
[ "P", "P", "P", "P", "P", "P", "P", "M", "R", "U", "U", "M", "M", "M", "M", "U" ]
Distance Measures for MPEG-7-based Retrieval Horst Eidenberger Vienna University of Technology, Institute of Software Technology and Interactive Systems Favoritenstrasse 9-11 - A-1040 Vienna, Austria Tel. + 43-1-58801-18853 eidenberger@ims.tuwien.ac.at ABSTRACT In visual information retrieval the careful choice of suitable proximity measures is a crucial success factor. The evaluation presented in this paper aims at showing that the distance measures suggested by the MPEG-7 group for the visual descriptors can be beaten by general-purpose measures. Eight visual MPEG-7 descriptors were selected and 38 distance measures implemented. Three media collections were created and assessed, performance indicators developed and more than 22500 tests performed. Additionally, a quantisation model was developed to be able to use predicate-based distance measures on continuous data as well. The evaluation shows that the distance measures recommended in the MPEG-7-standard are among the best but that other measures perform even better. Categories and Subject Descriptors H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval - Information filtering, Query formulation, Retrieval models. General Terms Algorithms, Measurement, Experimentation, Performance, Theory. 1. INTRODUCTION The MPEG-7 standard defines - among others - a set of descriptors for visual media. Each descriptor consists of a feature extraction mechanism, a description (in binary and XML format) and guidelines that define how to apply the descriptor on different kinds of media (e.g. on temporal media). The MPEG-7 descriptors have been carefully designed to meet - partially complementaryrequirements of different application domains: archival, browsing, retrieval, etc. [9]. In the following, we will exclusively deal with the visual MPEG-7 descriptors in the context of media retrieval. The visual MPEG-7 descriptors fall in five groups: colour, texture, shape, motion and others (e.g. face description) and sum up to 16 basic descriptors. For retrieval applications, a rule for each descriptor is mandatory that defines how to measure the similarity of two descriptions. Common rules are distance functions, like the Euclidean distance and the Mahalanobis distance. Unfortunately, the MPEG-7 standard does not include distance measures in the normative part, because it was not designed to be (and should not exclusively understood to be) retrieval-specific. However, the MPEG-7 authors give recommendations, which distance measure to use on a particular descriptor. These recommendations are based on accurate knowledge of the descriptors' behaviour and the description structures. In the present study a large number of successful distance measures from different areas (statistics, psychology, medicine, social and economic sciences, etc.) were implemented and applied on MPEG-7 data vectors to verify whether or not the recommended MPEG-7 distance measures are really the best for any reasonable class of media objects. From the MPEG-7 tests and the recommendations it does not become clear, how many and which distance measures have been tested on the visual descriptors and the MPEG-7 test datasets. The hypothesis is that analytically derived distance measures may be good in general but only a quantitative analysis is capable to identify the best distance measure for a specific feature extraction method. The paper is organised as follows. 
Section 2 gives a minimum of background information on the MPEG-7 descriptors and distance measurement in visual information retrieval (VIR, see [3], [16]). Section 3 gives an overview over the implemented distance measures. Section 4 describes the test setup, including the test data and the implemented evaluation methods. Finally, Section 5 presents the results per descriptor and over all descriptors. 2. BACKGROUND 2.1 MPEG-7: visual descriptors The visual part of the MPEG-7 standard defines several descriptors. Not all of them are really descriptors in the sense that they extract properties from visual media. Some of them are just structures for descriptor aggregation or localisation. The basic descriptors are Color Layout, Color Structure, Dominant Color, Scalable Color, Edge Histogram, Homogeneous Texture, Texture Browsing, Region-based Shape, Contour-based Shape, Camera Motion, Parametric Motion and Motion Activity. Other descriptors are based on low-level descriptors or semantic information: Group-of-Frames/Group-of-Pictures Color (based on Scalable Color), Shape 3D (based on 3D mesh information), Motion Trajectory (based on object segmentation) and Face Recognition (based on face extraction). Descriptors for spatiotemporal aggregation and localisation are: Spatial 2D Coordinates, Grid Layout, Region Locator (spatial), Time Series, Temporal Interpolation (temporal) and SpatioTemporal Locator (combined). Finally, other structures exist for colour spaces, colour quantisation and multiple 2D views of 3D objects. These additional structures allow combining the basic descriptors in multiple ways and on different levels. But they do not change the characteristics of the extracted information. Consequently, structures for aggregation and localisation were not considered in the work described in this paper. 2.2 Similarity measurement on visual data Generally, similarity measurement on visual information aims at imitating human visual similarity perception. Unfortunately, human perception is much more complex than any of the existing similarity models (it includes perception, recognition and subjectivity). The common approach in visual information retrieval is measuring dis-similarity as distance. Both, query object and candidate object are represented by their corresponding feature vectors. The distance between these objects is measured by computing the distance between the two vectors. Consequently, the process is independent of the employed querying paradigm (e.g. query by example). The query object may be natural (e.g. a real object) or artificial (e.g. properties of a group of objects). Goal of the measurement process is to express a relationship between the two objects by their distance. Iteration for multiple candidates allows then to define a partial order over the candidates and to address those in a (to be defined) neighbourhood being similar to the query object. At this point, it has to be mentioned that in a multi-descriptor environmentespecially in MPEG-7 - we are only half way towards a statement on similarity. If multiple descriptors are used (e.g. a descriptor scheme), a rule has to be defined how to combine all distances to a global value for each object. Still, distance measurement is the most important first step in similarity measurement. Obviously, the main task of good distance measures is to reorganise descriptor space in a way that media objects with the highest similarity are nearest to the query object. 
If distance is defined minimal, the query object is always in the origin of distance space and similar candidates should form clusters around the origin that are as large as possible. Consequently, many well known distance measures are based on geometric assumptions of descriptor space (e.g. Euclidean distance is based on the metric axioms). Unfortunately, these measures do not fit ideally with human similarity perception (e.g. due to human subjectivity). To overcome this shortage, researchers from different areas have developed alternative models that are mostly predicate-based (descriptors are assumed to contain just binary elements, e.g. Tversky's Feature Contrast Model [17]) and fit better with human perception. In the following distance measures of both groups of approaches will be considered. 3. DISTANCE MEASURES The distance measures used in this work have been collected from various areas (Subsection 3.1). Because they work on differently quantised data, Subsection 3.2 sketches a model for unification on the basis of quantitative descriptions. Finally, Subsection 3.3 introduces the distance measures as well as their origin and the idea they implement. 3.1 Sources Distance measurement is used in many research areas such as psychology, sociology (e.g. comparing test results), medicine (e.g. comparing parameters of test persons), economics (e.g. comparing balance sheet ratios), etc.. Naturally, the character of data available in these areas differs significantly. Essentially, there are two extreme cases of data vectors (and distance measures): predicatebased (all vector elements are binary, e.g. {0, 1}) and quantitative (all vector elements are continuous, e.g. [0, 1]). Predicates express the existence of properties and represent highlevel information while quantitative values can be used to measure and mostly represent low-level information. Predicates are often employed in psychology, sociology and other human-related sciences and most predicate-based distance measures were therefore developed in these areas. Descriptions in visual information retrieval are nearly ever (if they do not integrate semantic information) quantitative. Consequently, mostly quantitative distance measures are used in visual information retrieval. The goal of this work is to compare the MPEG-7 distance measures with the most powerful distance measures developed in other areas. Since MPEG-7 descriptions are purely quantitative but some of the most sophisticated distance measures are defined exclusively on predicates, a model is mandatory that allows the application of predicate-based distance measures on quantitative data. The model developed for this purpose is presented in the next section. 3.2 Quantisation model The goal of the quantisation model is to redefine the set operators that are usually used in predicate-based distance measures on continuous data. The first in visual information retrieval to follow this approach were Santini and Jain, who tried to apply Tversky's Feature Contrast Model [17] to content-based image retrieval [12], [13]. They interpreted continuous data as fuzzy predicates and used fuzzy set operators. Unfortunately, their model suffered from several shortcomings they described in [12], [13] (for example, the quantitative model worked only for one specific version of the original predicate-based measure). The main idea of the presented quantisation model is that set operators are replaced by statistical functions. 
In [5] the authors could show that this interpretation of set operators is reasonable. The model offers a solution for the descriptors considered in the evaluation. It is not specific to one distance measure, but can be applied to any predicate-based measure. Below, it will be shown that the model does not only work for predicate data but for quantitative data as well. Each measure implementing the model can be used as a substitute for the original predicate-based measure. Generally, binary properties of two objects (e.g. media objects) can exist in both objects (denoted as a), in just one (b, c) or in none of them (d). The operators needed for these relationships are UNION, MINUS and NOT. In the quantisation model they are replaced as follows (see [5] for further details):
a = X_i ∩ X_j = Σ_k s_k, with s_k = (x_ik + x_jk)/2 if M − (x_ik + x_jk)/2 ≤ ε_1, and s_k = 0 else
b = X_i − X_j = Σ_k s_k, with s_k = x_ik − x_jk if M − (x_ik − x_jk) ≤ ε_2, and s_k = 0 else
c = X_j − X_i = Σ_k s_k, with s_k = x_jk − x_ik if M − (x_jk − x_ik) ≤ ε_2, and s_k = 0 else
d = ¬X_i ∩ ¬X_j = Σ_k s_k, with s_k = M − (x_ik + x_jk)/2 if (x_ik + x_jk)/2 ≤ ε_1, and s_k = 0 else
with X_i = (x_ik), x_ik ∈ [x_min, x_max], M = x_max − x_min, p ∈ R+ \ {0}, and thresholds ε_1, ε_2 derived from p together with the mean μ and the standard deviation σ over all elements of descriptor space (ε_1 depending on μ, ε_2 on σ).
a selects properties that are present in both data vectors (X_i, X_j representing media objects), b and c select properties that are present in just one of them and d selects properties that are present in neither of the two data vectors. Every property is selected by the extent to which it is present (a and d: mean, b and c: difference) and only if the amount to which it is present exceeds a certain threshold (depending on the mean and standard deviation over all elements of descriptor space). The implementation of these operators is based on one assumption. It is assumed that vector elements measure on an interval scale. That means, each element expresses that the measured property is ``more or less'' present (``0'': not at all, ``M'': fully present). This is true for most visual descriptors and all MPEG-7 descriptors. A natural origin as it is assumed here (``0'') is not needed. Introducing p (called the discriminance-defining parameter) for the thresholds ε_1, ε_2 has the positive consequence that a, b, c, d can then be controlled through a single parameter. p is an additional criterion for the behaviour of a distance measure and determines the thresholds used in the operators. It expresses how accurately data items are present (quantisation) and consequently, how accurately they should be investigated. p can be set by the user or automatically. Interesting are the limits: 1. p → ∞ ⟹ ε_1, ε_2 → M. In this case, all elements (= properties) are assumed to be continuous (high quantisation). In consequence, all properties of a descriptor are used by the operators. Then, the distance measure is not discriminant for properties. 2. p → 0 ⟹ ε_1, ε_2 → 0. In this case, all properties are assumed to be predicates. In consequence, only binary elements (= predicates) are used by the operators (1-bit quantisation). The distance measure is then highly discriminant for properties. Between these limits, a distance measure that uses the quantisation model is - depending on p - more or less discriminant for properties. This means, it selects a subset of all available description vector elements for distance measurement. For both predicate data and quantitative data it can be shown that the quantisation model is reasonable. 
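The operators just defined are straightforward to implement. The sketch below is a minimal Python illustration, with the thresholds ε_1 and ε_2 passed in directly rather than derived from p, μ and σ as in the model.

# Sketch of the quantisation-model operators a, b, c, d defined above.
# eps1 and eps2 are passed in directly; in the model they are derived from the
# discriminance parameter p together with the mean and standard deviation of
# all descriptor elements (both approach M for large p and 0 for small p).
def quantised_abcd(xi, xj, M=1.0, eps1=1.0, eps2=1.0):
    a = b = c = d = 0.0
    for x, y in zip(xi, xj):
        m = (x + y) / 2.0
        if M - m <= eps1:          # property present in both vectors
            a += m
        if m <= eps1:              # property present in neither vector
            d += M - m
        if M - (x - y) <= eps2:    # property present only in Xi
            b += x - y
        if M - (y - x) <= eps2:    # property present only in Xj
            c += y - x
    return a, b, c, d

# With eps1 = eps2 = 0 and binary vectors the operators behave like set
# operators (cf. Table 1); with eps1 = eps2 = M, b + c equals the L1 distance.
print(quantised_abcd([1, 0, 1], [1, 1, 0], eps1=0.0, eps2=0.0))  # (1.0, 1.0, 1.0, 0.0)
print(quantised_abcd([0.2, 0.9], [0.4, 0.1]))                    # here b + c = 1.0 = L1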
If description vectors consist of binary elements only, p should be used as follows (for example, p can easily be set automatically): p → 0, e.g. p = min(μ, σ) ⟹ ε_1 = ε_2 = 0. In this case, a, b, c, d measure like the set operators they replace. For example, Table 1 shows their behaviour for two one-dimensional feature vectors Xi and Xj. As can be seen, the statistical measures work like set operators. Actually, the quantisation model works accurately on predicate data for any p ≠ ∞. To show that the model is reasonable for quantitative data the following fact is used. It is easy to show that for predicate data some quantitative distance measures degenerate to predicate-based measures. For example, the L1 metric (Manhattan metric) degenerates to the Hamming distance (from [9], without weights): L1 = Σ_k |x_ik − x_jk| ≡ b + c = Hamming distance. If it can be shown that the quantisation model is able to reconstruct the quantitative measure from the degenerated predicate-based measure, the model is obviously able to extend predicate-based measures to the quantitative domain. This is easy to illustrate. For purely quantitative feature vectors, p should be used as follows (again, p can easily be set automatically): p → ∞ ⟹ ε_1 = ε_2 = M. Then, a and d become continuous functions:
a = Σ_k s_k with s_k = (x_ik + x_jk)/2, because M − (x_ik + x_jk)/2 ≤ M ≡ true
d = Σ_k s_k with s_k = M − (x_ik + x_jk)/2, because (x_ik + x_jk)/2 ≤ M ≡ true
b and c can be made continuous for the following expressions:
b = Σ_k s_k with s_k = x_ik − x_jk if x_ik − x_jk ≥ 0, and s_k = 0 else (because M − (x_ik − x_jk) ≤ M ≡ x_ik − x_jk ≥ 0)
c = Σ_k s_k with s_k = x_jk − x_ik if x_jk − x_ik ≥ 0, and s_k = 0 else
⟹ b + c = Σ_k |x_ik − x_jk|
Table 1. Quantisation model on predicate vectors.
Xi   Xj   a  b  c  d
(1)  (1)  1  0  0  0
(1)  (0)  0  1  0  0
(0)  (1)  0  0  1  0
(0)  (0)  0  0  0  1
b − c = Σ_k s_k with s_k = x_ik − x_jk, and c − b = Σ_k s_k with s_k = x_jk − x_ik. This means, for sufficiently high p every predicate-based distance measure that is either not using b and c or using them just as b+c, b-c or c-b, can be transformed into a continuous quantitative distance measure. For example, the Hamming distance (again, without weights): b + c = Σ_k s_k with s_k = |x_ik − x_jk|, i.e. Σ_k |x_ik − x_jk| = L1. The quantisation model successfully reconstructs the L1 metric and no distance measure-specific modification has to be made to the model. This demonstrates that the model is reasonable. In the following it will be used to extend successful predicate-based distance measures to the quantitative domain. The major advantages of the quantisation model are: (1) it is application domain independent, (2) the implementation is straightforward, (3) the model is easy to use and finally, (4) the new parameter p allows to control the similarity measurement process in a new way (discriminance on property level). 3.3 Implemented measures For the evaluation described in this work, next to predicate-based (based on the quantisation model) and quantitative measures, the distance measures recommended in the MPEG-7 standard were implemented (all together 38 different distance measures). Table 2 summarises those predicate-based measures that performed best in the evaluation (in sum 20 predicate-based measures were investigated). For these measures, K is the number of predicates in the data vectors Xi and Xj. In P1, the sum is used for Tversky's f() (as Tversky himself does in [17]) and α, β are weights for elements b and c. In [5] the authors investigated Tversky's Feature Contrast Model and found α=1, β=0 to be the optimum parameters. Some of the predicate-based measures are very simple (e.g. 
3.3 Implemented measures

For the evaluation described in this work next to predicate-based (based on the quantisation model) and quantitative measures, the distance measures recommended in the MPEG-7 standard were implemented (all together 38 different distance measures). Table 2 summarises those predicate-based measures that performed best in the evaluation (in sum 20 predicate-based measures were investigated). For these measures, K is the number of predicates in the data vectors Xi and Xj. In P1, the sum is used for Tversky's f() (as Tversky himself does in [17]) and α, β are weights for elements b and c. In [5] the authors investigated Tversky's Feature Contrast Model and found α=1, β=0 to be the optimum parameters. Some of the predicate-based measures are very simple (e.g. P2, P4) but have been heavily exploited in psychological research. Pattern difference (P6) - a very powerful measure - is used in the statistics package SPSS for cluster analysis. P7 is a correlation coefficient for predicates developed by Pearson.

Table 3 shows the best quantitative distance measures that were used. Q1 and Q2 are metric-based and were implemented as representatives for the entire group of Minkowski distances. The wi are weights. In Q5, µi and σi are mean and standard deviation for the elements of descriptor Xi. In Q6, m is M/2 (= 0.5). Q3, the Canberra metric, is a normalised form of Q1. Similarly, Q4, Clark's divergence coefficient, is a normalised version of Q2. Q6 is a further-developed correlation coefficient that is invariant against sign changes. This measure is used even though its particular properties are of minor importance for this application domain. Finally, Q8 is a measure that takes the differences between adjacent vector elements into account. This makes it structurally different from all other measures.

Table 2. Predicate-based distance measures.
No.   Measure                                        Comment
P1    a − α·b − β·c                                  Feature Contrast Model, Tversky 1977 [17]
P2    a                                              No. of co-occurrences
P3    b + c                                          Hamming distance
P4    a / K                                          Russel 1940 [14]
P5    a / (b + c)                                    Kulczynski 1927 [14]
P6    b·c / K²                                       Pattern difference [14]
P7    (a·d − b·c) / √((a+b)(a+c)(b+d)(c+d))          Pearson 1926 [11]

Table 3. Quantitative distance measures.
No.   Measure                                                                   Comment
Q1    Σk wi·|xik − xjk|                                                         City block distance (L1)
Q2    √(Σk wi·(xik − xjk)²)                                                     Euclidean distance (L2)
Q3    Σk |xik − xjk| / (xik + xjk)                                              Canberra metric, Lance, Williams 1967 [8]
Q4    √((1/K)·Σk ((xik − xjk) / (xik + xjk))²)                                  Divergence coefficient, Clark 1952 [1]
Q5    Σk (xik − µi)(xjk − µj) / √(Σk (xik − µi)² · Σk (xjk − µj)²)              Correlation coefficient
Q6    (Σk xik·xjk − m·Σk xik − m·Σk xjk + K·m²) /
      √((Σk xik² − 2m·Σk xik + K·m²)·(Σk xjk² − 2m·Σk xjk + K·m²))              Cohen 1969 [2]
Q7    Σk xik·xjk / √(Σk xik² · Σk xjk²)                                         Angular distance, Gower 1967 [7]
Q8    Σk=1..K−1 ((xik − xjk) − (xi,k+1 − xj,k+1))²                              Meehl index [10]

Obviously, one important distance measure is missing. The Mahalanobis distance was not considered, because different descriptors would require different covariance matrices and for some descriptors it is simply impossible to define a covariance matrix. If the identity matrix was used in this case, the Mahalanobis distance would degenerate to a Minkowski distance. Additionally, the recommended MPEG-7 distances were implemented with the following parameters: In the distance measure of the Color Layout descriptor all weights were set to ``1'' (as in all other implemented measures). In the distance measure of the Dominant Color descriptor the following parameters were used: w1 = 0.7, w2 = 0.3, α = 1, Td = 20 (as recommended). In the Homogeneous Texture descriptor's distance all α(k) were set to ``1'' and matching was done rotation- and scale-invariant. Important! Some of the measures presented in this section are distance measures while others are similarity measures. For the tests, it is important to notice that all similarity measures were inverted to distance measures.
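The predicate-based measures of Table 2 all operate on the four quantities a, b, c, d (and the vector length K), so they are straightforward to express in code. The following Python sketch is for illustration only: the function name and the handling of degenerate denominators are my own choices, α = 1 and β = 0 follow [5], and P7 is written with the usual square-root normalisation of Pearson's coefficient. As noted above, the similarity measures among them (e.g. P1, P2, P4, P5, P7) would be inverted to distances for the tests.

```python
import math

def predicate_measures(a: float, b: float, c: float, d: float, K: int,
                       alpha: float = 1.0, beta: float = 0.0) -> dict:
    """Predicate-based measures of Table 2, computed from the (possibly
    quantised) set-operator values a, b, c, d of two vectors of length K."""
    denom = (a + b) * (a + c) * (b + d) * (c + d)
    return {
        "P1": a - alpha * b - beta * c,                       # Feature Contrast Model
        "P2": a,                                              # number of co-occurrences
        "P3": b + c,                                          # Hamming distance
        "P4": a / K,                                          # Russel
        "P5": a / (b + c) if (b + c) != 0 else float("inf"),  # Kulczynski
        "P6": b * c / (K * K),                                # pattern difference
        "P7": (a * d - b * c) / math.sqrt(denom) if denom > 0 else 0.0,  # Pearson
    }

# Counts for the predicate vectors Xi = (1,1,0,0) and Xj = (1,0,1,0):
print(predicate_measures(a=1, b=1, c=1, d=1, K=4))
```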
4. TEST SETUP

Subsection 4.1 describes the descriptors (including parameters) and the collections (including ground truth information) that were used in the evaluation. Subsection 4.2 discusses the evaluation method that was implemented and Subsection 4.3 sketches the test environment used for the evaluation process.

4.1 Test data

For the evaluation eight MPEG-7 descriptors were used. All colour descriptors: Color Layout, Color Structure, Dominant Color, Scalable Color, all texture descriptors: Edge Histogram, Homogeneous Texture, Texture Browsing and one shape descriptor: Region-based Shape. Texture Browsing was used even though the MPEG-7 standard suggests that it is not suitable for retrieval. The other basic shape descriptor, Contour-based Shape, was not used, because it produces structurally different descriptions that cannot be transformed to data vectors with elements measuring on interval-scales. The motion descriptors were not used, because they integrate the temporal dimension of visual media and would only be comparable if the basic colour, texture and shape descriptors were aggregated over time. This was not done. Finally, no high-level descriptors were used (Localisation, Face Recognition, etc., see Subsection 2.1), because - in the author's opinion - the behaviour of the basic descriptors on elementary media objects should be evaluated before conclusions on aggregated structures can be drawn.

The Texture Browsing descriptions had to be transformed from five bins to an eight bin representation in order that all elements of the descriptor measure on an interval scale. A Manhattan metric was used to measure proximity (see [6] for details). Descriptor extraction was performed using the MPEG-7 reference implementation. In the extraction process each descriptor was applied on the entire content of each media object and the following extraction parameters were used. Colour in Color Structure was quantised to 32 bins. For Dominant Color the colour space was set to YCrCb, 5-bit default quantisation was used and the default value for spatial coherency was used. Homogeneous Texture was quantised to 32 components. Scalable Color values were quantised to sizeof(int)-3 bits and 64 bins were used. Finally, Texture Browsing was used with five components.

These descriptors were applied on three media collections with image content: the Brodatz dataset (112 images, 512x512 pixel), a subset of the Corel dataset (260 images, 460x300 pixel, portrait and landscape) and a dataset with coats-of-arms images (426 images, 200x200 pixel). Figure 1 shows examples from the three collections.

Figure 1. Test datasets. Left: Brodatz dataset, middle: Corel dataset, right: coats-of-arms dataset.

Designing appropriate test sets for a visual evaluation is a highly difficult task (for example, see the TREC video 2002 report [15]). Of course, for identifying the best distance measure for a descriptor, it should be tested on an infinite number of media objects. But this is not the aim of this study. It is just evaluated if - for likely image collections - better proximity measures than those suggested by the MPEG-7 group can be found. Collections of this relatively small size were used in the evaluation, because the applied evaluation methods are above a certain minimum size invariant against collection size and for smaller collections it is easier to define a high-quality ground truth. Still, the average ratio of ground truth size to collection size is at least 1:7. Especially, no collection from the MPEG-7 dataset was used in the evaluation because the evaluations should show how well the descriptors and the recommended distance measures perform on ``unknown'' material.

When the descriptor extraction was finished, the resulting XML descriptions were transformed into a data matrix with 798 lines (media objects) and 314 columns (descriptor elements). To be usable with distance measures that do not integrate domain knowledge, the elements of this data matrix were normalised to [0, 1]. For the distance evaluation - next to the normalised data matrix - human similarity judgement is needed. In this work, the ground truth is built of twelve groups of similar images (four for each dataset). Group membership was rated by humans based on semantic criteria. Table 4 summarises the twelve groups and the underlying descriptions.

Table 4. Ground truth information.
Coll.      No    Images    Description
Brodatz    1     19        Regular, chequered patterns
Brodatz    2     38        Dark white noise
Brodatz    3     33        Moon-like surfaces
Brodatz    4     35        Water-like surfaces
Corel      5     73        Humans in nature (difficult)
Corel      6     17        Images with snow (mountains, skiing)
Corel      7     76        Animals in nature (difficult)
Corel      8     27        Large coloured flowers
Arms       9     12        Bavarian communal arms
Arms       10    10        All Bavarian arms (difficult)
Arms       11    18        Dark objects / light unsegmented shield
Arms       12    14        Major charges on blue or red shield
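The normalisation step mentioned above is simple but easy to get subtly wrong when descriptors with very different value ranges share one data matrix. The following Python sketch shows one plausible reading - min-max normalisation per column, i.e. per descriptor element - which is an assumption on my part; the paper only states that the matrix elements were normalised to [0, 1].

```python
def normalise_columns(matrix):
    """Min-max normalise each column of a row-major data matrix to [0, 1].
    Columns with a constant value are mapped to 0.0."""
    cols = list(zip(*matrix))
    normed_cols = []
    for col in cols:
        lo, hi = min(col), max(col)
        span = hi - lo
        normed_cols.append([(v - lo) / span if span else 0.0 for v in col])
    return [list(row) for row in zip(*normed_cols)]

data = [[0.0, 10.0, 5.0],
        [2.0, 30.0, 5.0],
        [1.0, 20.0, 5.0]]
print(normalise_columns(data))
# [[0.0, 0.0, 0.0], [1.0, 1.0, 0.0], [0.5, 0.5, 0.0]]
```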
It has to be noticed that some of these groups (especially 5, 7 and 10) are much harder to find with low-level descriptors than others.

4.2 Evaluation method

Usually, retrieval evaluation is performed based on a ground truth with recall and precision (see, for example, [3], [16]). In multi-descriptor environments this leads to a problem, because the resulting recall and precision values are strongly influenced by the method used to merge the distance values for one media object. Even though it is nearly impossible to say how big the influence of a single distance measure was on the resulting recall and precision values, this problem has been almost ignored so far. In Subsection 2.2 it was stated that the major task of a distance measure is to bring the relevant media objects as close to the origin (where the query object lies) as possible. Even in a multi-descriptor environment it is then simple to identify the similar objects in a large distance space. Consequently, it was decided to use indicators measuring the distribution in distance space of candidates similar to the query object for this evaluation instead of recall and precision. Identifying clusters of similar objects (based on the given ground truth) is relatively easy, because the resulting distance space for one descriptor and any distance measure is always one-dimensional. Clusters are found by searching from the origin of distance space to the first similar object, grouping all following similar objects in the cluster, breaking off the cluster with the first un-similar object and so forth. For the evaluation two indicators were defined. The first measures the average distance of all cluster means to the origin:

µd = ( Σi=1..no_clusters ( Σj=1..cluster_sizei distanceij ) / cluster_sizei ) / ( no_clusters · avg_distance )

where distanceij is the distance value of the j-th element in the i-th cluster,

avg_distance = ( Σi=1..no_clusters Σj=1..cluster_sizei distanceij ) / ( Σi=1..no_clusters cluster_sizei ),

no_clusters is the number of found clusters and cluster_sizei is the size of the i-th cluster. The resulting indicator is normalised by the distribution characteristics of the distance measure (avg_distance). Additionally, the standard deviation is used. In the evaluation process this measure turned out to produce valuable results and to be relatively robust against parameter p of the quantisation model.
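For illustration, the following Python sketch shows how the cluster structure and this first indicator could be computed for a single query. It is my own reading of the procedure (the helper names are invented), and the second indicator defined just below, cluster pollution, could be added along the same lines.

```python
from typing import List, Sequence

def clusters_from_origin(distances: Sequence[float], relevant: Sequence[bool]) -> List[List[float]]:
    """Walk through distance space from the origin outwards and group runs of
    ground-truth-relevant objects into clusters (Subsection 4.2)."""
    order = sorted(range(len(distances)), key=lambda i: distances[i])
    clusters, current = [], []
    for i in order:
        if relevant[i]:
            current.append(distances[i])
        elif current:                 # the first un-similar object breaks off the cluster
            clusters.append(current)
            current = []
    if current:
        clusters.append(current)
    return clusters

def first_indicator(clusters: List[List[float]]) -> float:
    """Average distance of the cluster means to the origin, normalised by the
    average distance of all clustered objects (smaller = better)."""
    if not clusters:
        return float("inf")
    cluster_means = [sum(c) / len(c) for c in clusters]
    avg_distance = sum(sum(c) for c in clusters) / sum(len(c) for c in clusters)
    return sum(cluster_means) / (len(clusters) * avg_distance)

dists    = [0.1, 0.15, 0.2, 0.4, 0.45, 0.9]
relevant = [True, True, False, True, True, False]
print(first_indicator(clusters_from_origin(dists, relevant)))
```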
In Subsection 3.2 we noted that p affects the discriminance of a predicate-based distance measure: the smaller p is set, the larger the resulting clusters are, because the quantisation model is then more discriminant against properties and fewer elements of the data matrix are used. This causes a side-effect that is measured by the second indicator: more and more un-similar objects come out with exactly the same distance value as similar objects (a problem that does not exist for large p's) and become indiscernible from similar objects. Consequently, they are (false) cluster members. This phenomenon (conceptually similar to the ``false negatives'' indicator) was named ``cluster pollution'' and the indicator measures the average cluster pollution over all clusters:

cp = ( Σi=1..no_clusters Σj=1..cluster_sizei no_doublesij ) / no_clusters

where no_doublesij is the number of indiscernible un-similar objects associated with the j-th element of cluster i. Remark: Even though there is a certain influence, it could be proven in [5] that no significant correlation exists between parameter p of the quantisation model and cluster pollution.

4.3 Test environment

As pointed out above, to generate the descriptors, the MPEG-7 reference implementation in version 5.6 was used (provided by TU Munich). Image processing was done with Adobe Photoshop and normalisation and all evaluations were done with Perl. The querying process was performed in the following steps: (1) random selection of a ground truth group, (2) random selection of a query object from this group, (3) distance comparison for all other objects in the dataset, (4) clustering of the resulting distance space based on the ground truth and finally, (5) evaluation. For each combination of dataset and distance measure 250 queries were issued and evaluations were aggregated over all datasets and descriptors. The next section shows the - partially surprising - results.

5. RESULTS

In the results presented below the first indicator from Subsection 4.2 was used to evaluate distance measures. In a first step parameter p had to be set in a way that all measures are equally discriminant. Distance measurement is fair if the following condition holds true for any predicate-based measure dP and any continuous measure dC:

cp(dP) ≈ cp(dC)

Then, it is guaranteed that predicate-based measures do not create larger clusters (with a higher number of similar objects) for the price of higher cluster pollution. In more than 1000 test queries the optimum value was found to be p=1. Results are organised as follows: Subsection 5.1 summarises the best distance measures per descriptor, Subsection 5.2 shows the best overall distance measures and Subsection 5.3 points out other interesting results (for example, distance measures that work particularly well on specific ground truth groups).

5.1 Best measure per descriptor

Figure 2 shows the evaluation results for the first indicator. For each descriptor the best measure and the performance of the MPEG-7 recommendation are shown. The results are aggregated over the tested datasets. On first sight, it becomes clear that the MPEG-7 recommendations are mostly relatively good but never the best. For Color Layout the difference between MP7 and the best measure, the Meehl index (Q8), is just 4% and the MPEG-7 measure has a smaller standard deviation.
The reason why the Meehl index is better may be that this descriptor generates descriptions with elements that have very similar variance. Statistical analysis confirmed that (see [6]). For Color Structure, Edge Histogram, Homogeneous Texture, Region-based Shape and Scalable Color by far the best measure is pattern difference (P6). Psychological research on human visual perception has revealed that in many situations differences between the query object and a candidate weigh much more strongly than common properties. The pattern difference measure implements this insight in the most consequent way. In the author's opinion, the reason why pattern difference performs so extremely well on many descriptors is due to this fact. Additional advantages of pattern difference are that it usually has a very low variance and - because it is a predicate-based measure - its discriminance (and cluster structure) can be tuned with parameter p. The best measure for Dominant Color turned out to be Clark's divergence coefficient (Q4). This is a similar measure to pattern difference on the continuous domain. The Texture Browsing descriptor is a special problem. In the MPEG-7 standard it is recommended to use it exclusively for browsing. After testing it for retrieval on various distance measures the author supports this opinion. It is very difficult to find a good distance measure for Texture Browsing. The proposed Manhattan metric, for example, performs very badly. The best measure is predicate-based (P7). It works on common properties (a, d) but produces clusters with very high cluster pollution. For this descriptor the second indicator is up to eight times higher than for predicate-based measures on other descriptors.

5.2 Best overall measures

Figure 3 summarises the results over all descriptors and media collections. The diagram should give an indication on the general potential of the investigated distance measures for visual information retrieval. It can be seen that the best overall measure is a predicate-based one. The top performance of pattern difference (P6) proves that the quantisation model is a reasonable method to extend predicate-based distance measures on the continuous domain. The second best group of measures are the MPEG-7 recommendations, which have a slightly higher mean but a lower standard deviation than pattern difference. The third best measure is the Meehl index (Q8), a measure developed for psychological applications but because of its characteristic properties tailor-made for certain (homogeneous) descriptors. Minkowski metrics are also among the best measures: the average mean and variance of the Manhattan metric (Q1) and the Euclidean metric (Q2) are in the range of Q8. Of course, these measures do not perform particularly well for any of the descriptors. Remarkably for a predicate-based measure, Tversky's Feature Contrast Model (P1) is also in the group of very good measures (even though it is not among the best) that ends with Q5, the correlation coefficient. The other measures either have a significantly higher mean or a very large standard deviation.

5.3 Other interesting results

Distance measures that perform on average worse than others may in certain situations (e.g. on specific content) still perform better. For Color Layout, for example, Q7 is a very good measure on colour photos. It performs as well as Q8 and has a lower standard deviation. For artificial images the pattern difference and the Hamming distance produce comparable results as well.
If colour information is available in media objects, pattern difference performs well on Dominant Color (just 20% worse than Q4) and in case of difficult ground truth (groups 5, 7 and 10) the Meehl index is as strong as P6.

Figure 2. Results per measure and descriptor. The horizontal axis shows the best measure and the performance of the MPEG-7 recommendation for each descriptor (Color Layout: Q8, Color Structure: P6, Dominant Color: Q4, Edge Histogram: P6, Homogeneous Texture: P6, Region-based Shape: P6, Scalable Color: P6, Texture Browsing: P7). The vertical axis shows the values for the first indicator (smaller value = better cluster structure). Shades have the following meaning: black = µ−σ (good cases), black + dark grey = µ (average) and black + dark grey + light grey = µ+σ (bad).

Figure 3. Overall results (ordered by the first indicator): P6, MP7, Q8, Q1, Q4, Q2, P2, P4, Q6, Q3, Q7, P1, Q5, P3, P5, P7. The vertical axis shows the values for the first indicator (smaller value = better cluster structure). Shades have the following meaning: black = µ−σ, black + dark grey = µ and black + dark grey + light grey = µ+σ.

6. CONCLUSION

The evaluation presented in this paper aims at testing the recommended distance measures and finding better ones for the basic visual MPEG-7 descriptors. Eight descriptors were selected, 38 distance measures were implemented, media collections were created and assessed, performance indicators were defined and more than 22500 tests were performed. To be able to use predicate-based distance measures next to quantitative measures a quantisation model was defined that allows the application of predicate-based measures on continuous data. In the evaluation the best overall distance measures for visual content - as extracted by the visual MPEG-7 descriptors - turned out to be the pattern difference measure and the Meehl index (for homogeneous descriptions). Since these two measures perform significantly better than the MPEG-7 recommendations they should be further tested on large collections of image and video content (e.g. from [15]). The choice of the right distance function for similarity measurement depends on the descriptor, the queried media collection and the semantic level of the user's idea of similarity. This work offers suitable distance measures for various situations. In consequence, the distance measures identified as the best will be implemented in the open MPEG-7 based visual information retrieval framework VizIR [4].

ACKNOWLEDGEMENTS

The author would like to thank Christian Breiteneder for his valuable comments and suggestions for improvement. The work presented in this paper is part of the VizIR project funded by the Austrian Scientific Research Fund FWF under grant no. P16111.

REFERENCES

[1] Clark, P.S. An extension of the coefficient of divergence for use with multiple characters. Copeia, 2 (1952), 61-64.
[2] Cohen, J. A profile similarity coefficient invariant over variable reflection. Psychological Bulletin, 71 (1969), 281-284.
[3] Del Bimbo, A. Visual information retrieval. Morgan Kaufmann Publishers, San Francisco CA, 1999.
[4] Eidenberger, H., and Breiteneder, C. A framework for visual information retrieval. In Proceedings Visual Information Systems Conference (HSinChu Taiwan, March 2002), LNCS 2314, Springer Verlag, 105-116.
[5] Eidenberger, H., and Breiteneder, C. Visual similarity measurement with the Feature Contrast Model. In Proceedings SPIE Storage and Retrieval for Media Databases Conference (Santa Clara CA, January 2003), SPIE Vol. 5021, 64-76.
[6] Eidenberger, H. How good are the visual MPEG-7 features? In Proceedings SPIE Visual Communications and Image Processing Conference (Lugano Switzerland, July 2003), SPIE Vol. 5150, 476-488.
[7] Gower, J.G. Multivariate analysis and multidimensional geometry. The Statistician, 17 (1967), 13-25.
[8] Lance, G.N., and Williams, W.T. Mixed data classificatory programs. Agglomerative systems. Australian Computer Journal, 9 (1967), 373-380.
[9] Manjunath, B.S., Ohm, J.R., Vasudevan, V.V., and Yamada, A. Color and texture descriptors. In Special Issue on MPEG-7. IEEE Transactions on Circuits and Systems for Video Technology, 11/6 (June 2001), 703-715.
[10] Meehl, P.E. The problem is epistemology, not statistics: Replace significance tests by confidence intervals and quantify accuracy of risky numerical predictions. In Harlow, L.L., Mulaik, S.A., and Steiger, J.H. (Eds.), What if there were no significance tests? Erlbaum, Mahwah NJ, 393-425.
[11] Pearson, K. On the coefficients of racial likeness. Biometrika, 18 (1926), 105-117.
[12] Santini, S., and Jain, R. Similarity is a geometer. Multimedia Tools and Applications, 5/3 (1997), 277-306.
[13] Santini, S., and Jain, R. Similarity measures. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21/9 (September 1999), 871-883.
[14] Sint, P.P. Similarity structures and similarity measures. Austrian Academy of Sciences Press, Vienna Austria, 1975 (in German).
[15] Smeaton, A.F., and Over, P. The TREC-2002 video track report. NIST Special Publication SP 500-251 (March 2003), available from: http://trec.nist.gov/pubs/trec11/papers/VIDEO.OVER.pdf (last visited: 2003-07-29).
[16] Smeulders, A.W.M., Worring, M., Santini, S., Gupta, A., and Jain, R. Content-based image retrieval at the end of the early years. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22/12 (December 2000), 1349-1380.
[17] Tversky, A. Features of similarity. Psychological Review, 84/4 (July 1977), 327-351.
Distance Measures for MPEG-7-based Retrieval ABSTRACT In visual information retrieval the careful choice of suitable proximity measures is a crucial success factor. The evaluation presented in this paper aims at showing that the distance measures suggested by the MPEG-7 group for the visual descriptors can be beaten by general-purpose measures. Eight visual MPEG-7 descriptors were selected and 38 distance measures implemented. Three media collections were created and assessed, performance indicators developed and more than 22500 tests performed. Additionally, a quantisation model was developed to be able to use predicate-based distance measures on continuous data as well. The evaluation shows that the distance measures recommended in the MPEG-7-standard are among the best but that other measures perform even better. 1. INTRODUCTION The MPEG-7 standard defines--among others--a set of descriptors for visual media. The MPEG-7 descriptors have been carefully designed to meet--partially complementary--requirements of different application domains: archival, browsing, retrieval, etc. [9]. In the following, we will exclusively deal with the visual MPEG-7 descriptors in the context of media retrieval. The visual MPEG-7 descriptors fall in five groups: colour, texture, shape, motion and others (e.g. face description) and sum up to 16 basic descriptors. For retrieval applications, a rule for each descriptor is mandatory that defines how to measure the similarity of two descriptions. Common rules are distance functions, like the Euclidean distance and the Mahalanobis distance. Unfortunately, the MPEG-7 standard does not include distance measures in the normative part, because it was not designed to be (and should not exclusively understood to be) retrieval-specific. However, the MPEG-7 authors give recommendations, which distance measure to use on a particular descriptor. These recommendations are based on accurate knowledge of the descriptors' behaviour and the description structures. From the MPEG-7 tests and the recommendations it does not become clear, how many and which distance measures have been tested on the visual descriptors and the MPEG-7 test datasets. The hypothesis is that analytically derived distance measures may be good in general but only a quantitative analysis is capable to identify the best distance measure for a specific feature extraction method. The paper is organised as follows. Section 2 gives a minimum of background information on the MPEG-7 descriptors and distance measurement in visual information retrieval (VIR, see [3], [16]). Section 3 gives an overview over the implemented distance measures. Section 4 describes the test setup, including the test data and the implemented evaluation methods. Finally, Section 5 presents the results per descriptor and over all descriptors. 2. BACKGROUND 2.1 MPEG-7: visual descriptors The visual part of the MPEG-7 standard defines several descriptors. Not all of them are really descriptors in the sense that they extract properties from visual media. Some of them are just structures for descriptor aggregation or localisation. Other descriptors are based on low-level descriptors or semantic information: Group-of-Frames/Group-of-Pictures Color (based on Finally, other structures exist for colour spaces, colour quantisation and multiple 2D views of 3D objects. These additional structures allow combining the basic descriptors in multiple ways and on different levels. But they do not change the characteristics of the extracted information. 
Consequently, structures for aggregation and localisation were not considered in the work described in this paper. 2.2 Similarity measurement on visual data Generally, similarity measurement on visual information aims at imitating human visual similarity perception. The common approach in visual information retrieval is measuring dis-similarity as distance. Both, query object and candidate object are represented by their corresponding feature vectors. The distance between these objects is measured by computing the distance between the two vectors. Consequently, the process is independent of the employed querying paradigm (e.g. query by example). The query object may be natural (e.g. a real object) or artificial (e.g. properties of a group of objects). Goal of the measurement process is to express a relationship between the two objects by their distance. If multiple descriptors are used (e.g. a descriptor scheme), a rule has to be defined how to combine all distances to a global value for each object. Still, distance measurement is the most important first step in similarity measurement. Obviously, the main task of good distance measures is to reorganise descriptor space in a way that media objects with the highest similarity are nearest to the query object. If distance is defined minimal, the query object is always in the origin of distance space and similar candidates should form clusters around the origin that are as large as possible. Consequently, many well known distance measures are based on geometric assumptions of descriptor space (e.g. Euclidean distance is based on the metric axioms). Unfortunately, these measures do not fit ideally with human similarity perception (e.g. due to human subjectivity). In the following distance measures of both groups of approaches will be considered. 6. CONCLUSION The evaluation presented in this paper aims at testing the recommended distance measures and finding better ones for the basic visual MPEG-7 descriptors. Eight descriptors were selected, 38 distance measures were implemented, media collections were created and assessed, performance indicators were defined and more than 22500 tests were performed. To be able to use predicate-based distance measures next to quantitative measures a quantisation model was defined that allows the application of predicate-based measures on continuous data. In the evaluation the best overall distance measures for visual content--as extracted by the visual MPEG-7 descriptors--turned out to be the pattern difference measure and the Meehl index (for homogeneous descriptions). Since these two measures perform significantly better than the MPEG-7 recommendations they should be further tested on large collections of image and video content (e.g. from [15]). The choice of the right distance function for similarity measurement depends on the descriptor, the queried media collection and the semantic level of the user's idea of similarity. This work offers suitable distance measures for various situations. In consequence, the distance measures identified as the best will be implemented in the open MPEG-7 based visual information retrieval framework VizIR [4].
J-53
A Price-Anticipating Resource Allocation Mechanism for Distributed Shared Clusters
In this paper we formulate the fixed budget resource allocation game to understand the performance of a distributed market-based resource allocation system. Multiple users decide how to distribute their budget (bids) among multiple machines according to their individual preferences to maximize their individual utility. We look at both the efficiency and the fairness of the allocation at the equilibrium, where fairness is evaluated through the measures of utility uniformity and envy-freeness. We show analytically and through simulations that despite being highly decentralized, such a system converges quickly to an equilibrium and unlike the social optimum that achieves high efficiency but poor fairness, the proposed allocation scheme achieves a nice balance of high degrees of efficiency and fairness at the equilibrium.
[ "resourc alloc", "distribut share cluster", "util", "effici", "fair", "simul", "algorithm", "bid mechan", "price-anticip scheme", "nash equilibrium", "parallel", "anarchi price", "price-anticip mechan" ]
[ "P", "P", "P", "P", "P", "P", "U", "R", "R", "M", "U", "U", "R" ]
A Price-Anticipating Resource Allocation Mechanism for Distributed Shared Clusters Michal Feldman∗ mfeldman@sims.berkeley.edu Kevin Lai† kevin.lai@hp.com Li Zhang† l.zhang@hp.com ABSTRACT In this paper we formulate the fixed budget resource allocation game to understand the performance of a distributed marketbased resource allocation system. Multiple users decide how to distribute their budget (bids) among multiple machines according to their individual preferences to maximize their individual utility. We look at both the efficiency and the fairness of the allocation at the equilibrium, where fairness is evaluated through the measures of utility uniformity and envy-freeness. We show analytically and through simulations that despite being highly decentralized, such a system converges quickly to an equilibrium and unlike the social optimum that achieves high efficiency but poor fairness, the proposed allocation scheme achieves a nice balance of high degrees of efficiency and fairness at the equilibrium. Categories and Subject Descriptors C.2.4 [Computer-Communication Networks]: Distributed Systems; C.4 [Performance of Systems]; F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical Algorithms and Problems; J.4 [Social and Behavioral Sciences]: Economics General Terms Algorithms, Performance, Design, Economics 1. INTRODUCTION The primary advantage of distributed shared clusters like the Grid [7] and PlanetLab [1] is their ability to pool together shared computational resources. This allows increased throughput because of statistical multiplexing and the bursty utilization pattern of typical users. Sharing nodes that are dispersed in the network allows lower delay because applications can store data close to users. Finally, sharing allows greater reliability because of redundancy in hosts and network connections. However, resource allocation in these systems remains the major challenge. The problem is how to allocate a shared resource both fairly and efficiently (where efficiency is the ratio of the achieved social welfare to the social optimal) with the presence of strategic users who act in their own interests. Several non-economic allocation algorithms have been proposed, but these typically assume that task values (i.e., their importance) are the same, or are inversely proportional to the resources required, or are set by an omniscient administrator. However, in many cases, task values vary significantly, are not correlated to resource requirements, and are difficult and time-consuming for an administrator to set. Instead, we examine a market-based resource allocation system (others are described in [2, 4, 6, 21, 26, 27]) that allows users to express their preferences for resources through a bidding mechanism. In particular, we consider a price-anticipating [12] scheme in which a user bids for a resource and receives the ratio of his bid to the sum of bids for that resource. This proportional scheme is simpler, more scalable, and more responsive [15] than auction-based schemes [6, 21, 26]. Previous work has analyzed price-anticipating schemes in the context of allocating network capacity for flows for users with unlimited budgets. In this work, we examine a price-anticipating scheme in the context of allocating computational capacity for users with private preferences and limited budgets, resulting in a qualitatively different game (as discussed in Section 6). 
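To make the bidding mechanism concrete before the formal treatment, the following minimal Python sketch computes the shares that a price-anticipating, proportional-share allocation assigns for a given bid matrix. The numbers, the function name and the zero-bid handling are our own illustrative assumptions, not code from the system studied in the paper.

import numpy as np

def proportional_shares(bids):
    # bids[i][j] is user i's bid on machine j; the "price" of machine j is the
    # total bid Y_j placed on it, and user i receives the fraction x_ij / Y_j.
    bids = np.asarray(bids, dtype=float)
    prices = bids.sum(axis=0)                  # Y_j = sum over users of x_ij
    shares = np.zeros_like(bids)
    funded = prices > 0                        # machines with no bids stay unallocated
    shares[:, funded] = bids[:, funded] / prices[funded]
    return shares

# Illustrative only: two users, each splitting a budget of 1.0 over two machines.
bids = [[0.6, 0.4],
        [0.2, 0.8]]
print(proportional_shares(bids))   # approx. [[0.75, 0.33], [0.25, 0.67]]

Each row of the result is one user's fractions of the machines; note that the share a bid buys depends on everyone else's bids, which is what makes the users strategic.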
In this paper, we formulate the fixed budget resource allocation game and study the existence and performance of the Nash equilibria of this game. For evaluating the Nash equilibria, we consider both their efficiency, measuring how close the social welfare at equilibrium is to the social optimum, and fairness, measuring how different the users' utilities are. Although rarely considered in previous game-theoretic studies, we believe fairness is a critical metric for a resource allocation scheme because the perception of unfairness will cause some users to reject a system with more efficient, but less fair resource allocation in favor of one with less efficient, more fair resource allocation. We use both utility uniformity and envy-freeness to measure fairness. Utility uniformity, which is common in Computer Science work, measures the closeness of utilities of different users. Envy-freeness, which comes more from the Economics perspective, measures the happiness of users with their own resources compared to the resources of others. Our contributions are as follows: • We analyze the existence and performance of Nash equilibria. Using analysis, we show that there is always a Nash equilibrium in the fixed budget game if the utility functions satisfy a fairly weak and natural condition of strong competitiveness. We also show the worst-case performance bounds: for m players the efficiency at equilibrium is Ω(1/√m), the utility uniformity is ≥ 1/m, and the envy-freeness is ≥ 2√2 − 2 ≈ 0.83. Although these bounds are quite low, the simulations described below indicate these bounds are overly pessimistic. • We describe algorithms that allow strategic users to optimize their utility. As part of the fixed budget game analysis, we show that strategic users with linear utility functions can calculate their bids using a best response algorithm that quickly results in an allocation with high efficiency and little computational and communication overhead. We present variations of the best response algorithm for both finite and infinite parallelism tasks. In addition, we present a local greedy adjustment algorithm that converges more slowly than best response, but allows for non-linear utility functions or utility functions that the user cannot formulate explicitly. • We show that the price-anticipating resource allocation mechanism achieves a high degree of efficiency and fairness. Using simulation, we find that although the socially optimal allocation results in perfect efficiency, it also results in very poor fairness. Likewise, allocating according to only users' preference weights results in high fairness, but mediocre efficiency. Intuition would suggest that efficiency and fairness are exclusive. Surprisingly, the Nash equilibrium, reached by each user iteratively applying the best response algorithm to adapt his bids, achieves nearly the efficiency of the social optimum and nearly the fairness of the weight-proportional allocation: the efficiency is ≥ 0.90, the utility uniformity is ≥ 0.65, and the envy-freeness is ≥ 0.97, independent of the number of users in the system. In addition, the time to converge to the equilibrium is ≤ 5 iterations when all users use the best response strategy. The local adjustment algorithm performs similarly when there is sufficient competitiveness, but takes 25 to 90 iterations to stabilize. As a result, we believe that shared distributed systems based on the fixed budget game can be highly decentralized, yet achieve a high degree of efficiency and fairness. The rest of the paper is organized as follows.
We describe the model in Section 2 and derive the performance at the Nash equilibria for the infinite parallelism model in Section 3. In Section 4, we describe algorithms for users to optimize their own utility in the fixed budget game. In Section 5, we describe our simulator and simulation results. We describe related work in Section 6. We conclude by discussing some limit of our model and future work in Section 7. 2. THE MODEL Price-Anticipating Resource Allocation. We study the problem of allocating a set of divisible resources (or machines). Suppose that there are m users and n machines. Each machine can be continuously divided for allocation to multiple users. An allocation scheme ω = (r1, ... , rm), where ri = (ri1, · · · , rin) with rij representing the share of machine j allocated to user i, satisfies that for any 1 ≤ i ≤ m and 1 ≤ j ≤ n, rij ≥ 0 and Pm i=1 rij ≤ 1. Let Ω denote the set of all the allocation schemes. We consider the price anticipating mechanism in which each user places a bid to each machine, and the price of the machine is determined by the total bids placed. Formally, suppose that user i submits a non-negative bid xij to machine j. The price of machine j is then set to Yj = Pn i=1 xij, the total bids placed on the machine j. Consequently, user i receives a fraction of rij = xij Yj of j. When Yj = 0, i.e. when there is no bid on a machine, the machine is not allocated to anyone. We call xi = (xi1, ... , xin) the bidding vector of user i. The additional consideration we have is that each user i has a budget constraint Xi. Therefore, user i``s total bids have to sum up to his budget, i.e. Pn j=1 xij = Xi. The budget constraints come from the fact that the users do not have infinite budget. Utility Functions. Each user i``s utility is represented by a function Ui of the fraction (ri1, ... , rin) the user receives from each machine. Given the problem domain we consider, we assume that each user has different and relatively independent preferences for different machines. Therefore, the basic utility function we consider is the linear utility function: Ui(ri1, · · · , rin) = wi1ri1 +· · ·+winrin, where wij ≥ 0 is user i``s private preference, also called his weight, on machine j. For example, suppose machine 1 has a faster CPU but less memory than machine 2, and user 1 runs CPU bounded applications, while user 2 runs memory bounded applications. As a result, w11 > w12 and w21 < w22. Our definition of utility functions corresponds to the user having enough jobs or enough parallelism within jobs to utilize all the machines. Consequently, the user``s goal is to grab as much of a resource as possible. We call this the infinite parallelism model. In practice, a user``s application may have an inherent limit on parallelization (e.g., some computations must be done sequentially) or there may be a system limit (e.g., the application``s data is being served from a file server with limited capacity). To model this, we also consider the more realistic finite parallelism model, where the user``s parallelism is bounded by ki, and the user``s utility Ui is the sum of the ki largest wijrij. In this model, the user only submits bids to up to ki machines. Our abstraction is to capture the essense of the problem and facilitate our analysis. In Section 7, we discuss the limit of the above definition of utility functions. Best Response. As typically, we assume the users are selfish and strategic - they all act to maximize their own utility, defined by their utility functions. 
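As a small concrete illustration of the two utility models just defined, the sketch below evaluates one user's utility for a given vector of received shares. The weights, shares and parallelism bound are invented for illustration; under the finite model the user would in addition place bids on at most k machines.

import numpy as np

def utility_infinite(weights, shares):
    # Infinite parallelism: U_i = sum over j of w_ij * r_ij over all machines.
    return float(np.dot(weights, shares))

def utility_finite(weights, shares, k):
    # Finite parallelism: only k machines can be exploited, so U_i is the sum
    # of the k largest terms w_ij * r_ij.
    terms = np.asarray(weights, dtype=float) * np.asarray(shares, dtype=float)
    return float(np.sort(terms)[-k:].sum())

w = [0.4, 0.3, 0.2, 0.1]    # one user's normalized preferences on 4 machines
r = [0.25, 0.5, 0.1, 0.6]   # fractions of each machine received
print(utility_infinite(w, r))     # approx. 0.33
print(utility_finite(w, r, k=2))  # approx. 0.25 (the two largest terms)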
From the perspective of user i, if the total bids of the other users placed on each machine j is yj, then the best response of user i to the system is the solution of the following optimization problem: maximize Ui(xi1/(xi1 + y1), ... , xin/(xin + yn)) subject to Σj xij = Xi and xij ≥ 0. The difficulty of the above optimization problem depends on the formulation of Ui. We will show later how to solve it for the infinite parallelism model and provide a heuristic for the finite parallelism model. Nash Equilibrium. By the assumption that the user is selfish, each user's bidding vector is the best response to the system. The question we are most interested in is whether there exists a collection of bidding vectors, one for each user, such that each user's bidding vector is the best response to those of the other users. Such a state is known as the Nash equilibrium, a central concept in Game Theory. Formally, the bidding vectors x1, ... , xm are a Nash equilibrium if for any 1 ≤ i ≤ m, xi is the best response to the system, or, for any other bidding vector xi′ of user i, Ui(x1, ... , xi, ... , xm) ≥ Ui(x1, ... , xi′, ... , xm). The Nash equilibrium is desirable because it is a stable state at which no one has an incentive to change his strategy. But a game may not have an equilibrium. Indeed, a Nash equilibrium may not exist in the price anticipating scheme we define above. This can be shown by a simple example of two players and two machines. For example, let U1(r1, r2) = r1 and U2(r1, r2) = r1 + r2. Then player 1 should never bid on machine 2 because it has no value to him. Now, player 2 has to put a positive bid on machine 2 to claim the machine, but there is no lower limit, resulting in the non-existence of the Nash equilibrium. We should note that even the mixed strategy equilibrium does not exist in this example. Clearly, this happens whenever there is a resource that is wanted by only one player. To rule out this case, we consider those strongly competitive games.1 Under the infinite parallelism model, a game is called strongly competitive if for any 1 ≤ j ≤ n, there exist i ≠ k such that wij, wkj > 0. Under such a condition, we have the following (see [5] for a proof). Theorem 1. There always exists a pure strategy Nash equilibrium in a strongly competitive game. Given the existence of the Nash equilibrium, the next important question is the performance at the Nash equilibrium, which is often measured by its efficiency and fairness. Efficiency (Price of Anarchy). For an allocation scheme ω ∈ Ω, denote by U(ω) = Σi Ui(ri) the social welfare under ω. Let U∗ = maxω∈Ω U(ω) denote the optimal social welfare, i.e. the maximum possible aggregate user utility. The efficiency of an allocation scheme ω is defined as π(ω) = U(ω)/U∗. Let Ω0 denote the set of allocations at Nash equilibria. When a Nash equilibrium exists, i.e. Ω0 ≠ ∅, define the efficiency of a game Q to be π(Q) = minω∈Ω0 π(ω). It is usually the case that π < 1, i.e. there is an efficiency loss at a Nash equilibrium. This is the price of anarchy [18] paid for not having central enforcement of the users' good behavior. This price is interesting because central control results in the best possible outcome, but is not possible in most cases. Fairness. While the definition of efficiency is standard, there are multiple ways to define fairness. We consider two metrics. One is by comparing the users' utilities.
The utility uniformity τ(ω) of an allocation scheme ω is defined to be mini Ui(ω) maxi Ui(ω) , the ratio of the minimum utility and the maximum utility among the users. Such definition (or utility discrepancy defined similarly as maxi Ui(ω) mini Ui(ω) ) is used extensively in Computer Science literature. Under this definition, the utility uniformity τ(Q) of a game Q is defined to be τ(Q) = minω∈Ω0 τ(ω). The other metric extensively studied in Economics is the concept of envy-freeness [25]. Unlike the utility uniformity metric, the enviness concerns how the user perceives the value of the share assigned to him, compared to the shares other users receive. Within such a framework, define the envy-freeness of an allocation scheme ω by ρ(ω) = mini,j Ui(ri) Ui(rj ) . 1Alternatives include adding a reservation price or limiting the lowest allowable bid to each machine. These alternatives, however, introduce the problem of coming up with the right price or limit. When ρ(ω) ≥ 1, the scheme is known as an envy-free allocation scheme. Likewise, the envy-freeness ρ(Q) of a game Q is defined to be ρ(Q) = minω∈Ω0 ρ(ω). 3. NASH EQUILIBRIUM In this section, we present some theoretical results regarding the performance at Nash equilibrium under the infinite parallelism model. We assume that the game is strongly competitive to guarantee the existence of equilibria. For a meaningful discussion of efficiency and fairness, we assume that the users are symmetric by requiring that Xi = 1 andPn j=1 wij = 1 for all the 1 ≤ i ≤ m. Or informally, we require all the users have the same budget, and they have the same utility when they own all the resources. This precludes the case when a user has an extremely high budget, resulting in very low efficiency or low fairness at equilibrium. We first provide a characterization of the equilibria. By definition, the bidding vectors x1, ... , xm is a Nash equilibrium if and only if each player``s strategy is the best response to the group``s bids. Since Ui is a linear function and the domain of each users bids {(xi1, ... , xin)| P j xij = Xi , and xij ≥ 0} is a convex set, the optimality condition is that there exists λi > 0 such that ∂Ui ∂xij = wij Yj − xij Y 2 j = λi if xij > 0, and < λi if xij = 0. (1) Or intuitively, at an equilibrium, each user has the same marginal value on machines where they place positive bids and has lower marginal values on those machines where they do not bid. Under the infinite parallelism model, it is easy to compute the social optimum U∗ as it is achieved when we allocate each machine wholly to the person who has the maximum weight on the machine, i.e. U∗ = Pn j=1 max1≤i≤m wij. 3.1 Two-player Games We first show that even in the simplest nontrivial case when there are two users and two machines, the game has interesting properties. We start with two special cases to provide some intuition about the game. The weight matrices are shown in figure 1(a) and (b), which correspond respectively to the equal-weight and opposite-weight games. Let x and y denote the respective bids of users 1 and 2 on machine 1. Denote by s = x + y and δ = (2 − s)/s. Equal-weight game. In Figure 1, both users have equal valuations for the two machines. By the optimality condition, for the bid vectors to be in equilibrium, they need to satisfy the following equations according to (1) α y (x + y)2 = (1 − α) 1 − y (2 − x − y)2 α x (x + y)2 = (1 − α) 1 − x (2 − x − y)2 By simplifying the above equations, we obtain that δ = 1 − 1/α and x = y = α. 
Thus, there exists a unique Nash equilibrium of the game where the two users have the same bidding vector. At the equilibrium, the utility of each user is 1/2, and the social welfare is 1. On the other hand, the social optimum is clearly 1. Thus, the equal-weight game is ideal as the efficiency, utility uniformity, and envy-freeness are all 1. Figure 1: Two special cases of two-player games: (a) the equal-weight game, in which users u1 and u2 both have weights (α, 1 − α) on machines (m1, m2); (b) the opposite-weight game, in which u1 has weights (α, 1 − α) and u2 has weights (1 − α, α). Opposite-weight game. The situation is different for the opposite game in which the two users put the exact opposite weights on the two machines. Assume that α ≥ 1/2. Similarly, for the bid vectors to be at the equilibrium, they need to satisfy α·y/(x + y)² = (1 − α)(1 − y)/(2 − x − y)² and (1 − α)·x/(x + y)² = α(1 − x)/(2 − x − y)². By simplifying the above equations, we have that each Nash equilibrium corresponds to a nonnegative root of the cubic equation f(δ) = δ³ − cδ² + cδ − 1 = 0, where c = 1/(2α(1 − α)) − 1. Clearly, δ = 1 is a root of f(δ). When δ = 1, we have that x = α, y = 1 − α, which is the symmetric equilibrium that is consistent with our intuition: each user puts a bid proportional to his preference for the machine. At this equilibrium, U = 2 − 4α(1 − α), U∗ = 2α, and U/U∗ = (2α + 1/α) − 2, which is minimized when α = √2/2 with the minimum value of 2√2 − 2 ≈ 0.828. However, when α is large enough, there exist two other roots, corresponding to less intuitive asymmetric equilibria. Intuitively, an asymmetric equilibrium arises when user 1 values machine 1 a lot, but by placing even a relatively small bid on machine 1, he can get most of the machine because user 2 values machine 1 very little, and thus places an even smaller bid. In this case, user 1 gets most of machine 1 and almost half of machine 2. The threshold is at f′(1) = 0, i.e. when 1/(2α(1 − α)) = 4. This solves to α0 = (2 + √2)/4 ≈ 0.854. Those asymmetric equilibria at δ ≠ 1 are bad as they yield lower efficiency than the symmetric equilibrium. Let δ0 be the minimum root. When α → 1, c → +∞, and δ0 = 1/c + o(1/c) → 0. Then x, y → 1. Thus, U → 3/2, U∗ → 2, and U/U∗ → 0.75. From the above simple game, we already observe that the Nash equilibrium may not be unique, which is different from many congestion games in which the Nash equilibrium is unique. For the general two-player game, we can show (with a proof in [5]) that 0.75 is actually the worst-case efficiency bound. Further, at the asymmetric equilibrium, the utility uniformity approaches 1/2 when α → 1. This is the worst possible for two-player games because, as we show in Section 3.2, a user's utility at any Nash equilibrium is at least 1/m in the m-player game. Another consequence is that the two-player game is always envy-free. Suppose that the two users' shares are r1 = (r11, ... , r1n) and r2 = (r21, ... , r2n) respectively. Then U1(r1) + U1(r2) = U1(r1 + r2) = U1(1, ... , 1) = 1 because r1j + r2j = 1 for all 1 ≤ j ≤ n. Again, since U1(r1) ≥ 1/2, we have that U1(r1) ≥ U1(r2), i.e. any equilibrium allocation is envy-free. Theorem 2. For a two-player game, π(Q) ≥ 3/4, τ(Q) ≥ 0.5, and ρ(Q) = 1. All the bounds are tight in the worst case. 3.2 Multi-player Game For large numbers of players, the loss in social welfare can unfortunately be large. The following example shows the worst-case bound. Consider a system with m = n² + n players and n machines. Of the players, there are n² who have the same weights on all the machines, i.e. 1/n on each machine.
The other n players have weight 1, each on a different machine, and 0 (or a sufficiently small ε) on all the other machines. Clearly, U∗ = n. The following allocation is an equilibrium: the first n² players evenly distribute their money among all the machines, and the other n players invest all of their money on their respective favorite machines. Hence, the total money on each machine is n + 1. At this equilibrium, each of the first n² players obtains utility (1/n) · (1/n)/(n + 1) = 1/(n²(n + 1)) from each machine, so this group obtains a total utility of n³ · 1/(n²(n + 1)) < 1. The other n players each receive a 1/(n + 1) share of their favorite machine, resulting in a total utility of n · 1/(n + 1) < 1 for that group. Therefore, the total utility of the equilibrium is < 2, while the social optimum is n = Θ(√m). This bound is the worst possible. What about the utility uniformity of the multi-player allocation game? We next show that the utility uniformity of the m-player allocation game cannot be worse than 1/m. Let (S1, ... , Sn) be the current total bids on the n machines, excluding user i. User i can ensure a utility of 1/m by distributing his budget proportionally to the current bids. That is, user i, by bidding sij = Xi·Sj/Σl Sl on machine j, obtains a resource level of rij = sij/(sij + Sj) = (Sj/Σl Sl)/(Sj/Σl Sl + Sj) = 1/(1 + Σl Sl), where Σl Sl = Σj Xj − Xi = m − 1. Therefore, rij = 1/(1 + m − 1) = 1/m. The total utility of user i is Σj rij·wij = (1/m) Σj wij = 1/m. Since each user's utility cannot exceed 1, the minimal possible uniformity is 1/m. While the utility uniformity can be small, the envy-freeness, on the other hand, is bounded by a constant of 2√2 − 2 ≈ 0.828, as shown in [29]. To summarize, we have Theorem 3. For the m-player game Q, π(Q) = Ω(1/√m), τ(Q) ≥ 1/m, and ρ(Q) ≥ 2√2 − 2. All of these bounds are tight in the worst case. 4. ALGORITHMS In the previous section, we presented the performance bounds of the game under the infinite parallelism model. However, the more interesting questions in practice are how the equilibrium can be reached and what the performance at the Nash equilibrium is for typical distributions of utility functions. In particular, we would like to know if the intuitive strategy of each player constantly re-adjusting his bids according to the best response algorithm leads to the equilibrium. To answer these questions, we resort to simulations. In this section, we present the algorithms that we use to compute or approximate the best response and the social optimum in our experiments. We consider both the infinite parallelism and finite parallelism models. 4.1 Infinite Parallelism Model As we mentioned before, it is easy to compute the social optimum under the infinite parallelism model: we simply assign each machine to the user who likes it the most. We now present the algorithm for computing the best response. Recall that for weights w1, ... , wn, total bids y1, ... , yn, and budget X, the best response is the solution of the following optimization problem: maximize U = Σj wj·xj/(xj + yj) subject to Σj xj = X and xj ≥ 0. To compute the best response, we first sort the machines by wj/yj in decreasing order. Without loss of generality, suppose that w1/y1 ≥ w2/y2 ≥ · · · ≥ wn/yn. Suppose that x∗ = (x∗1, ... , x∗n) is the optimum solution. We show that if x∗i = 0, then for any j > i, x∗j = 0 too. Suppose this were not true. Then ∂U/∂xj(x∗) = wj·yj/(x∗j + yj)² < wj·yj/yj² = wj/yj ≤ wi/yi = ∂U/∂xi(x∗), which contradicts the optimality condition (1). Suppose that k = max{i | x∗i > 0}.
Again, by the optimality condition, there exists λ such that wi·yi/(x∗i + yi)² = λ for 1 ≤ i ≤ k, and x∗i = 0 for i > k. Equivalently, we have that x∗i = √(wi·yi/λ) − yi for 1 ≤ i ≤ k, and x∗i = 0 for i > k. Substituting these into the equation Σi x∗i = X, we can solve for λ = (Σi=1..k √(wi·yi))²/(X + Σi=1..k yi)². Thus, x∗i = √(wi·yi)/(Σi=1..k √(wi·yi)) · (X + Σi=1..k yi) − yi. The remaining question is how to determine k. It is the largest value such that x∗k > 0. Thus, we obtain the following algorithm to compute the best response of a user: 1. Sort the machines according to wi/yi in decreasing order. 2. Compute the largest k such that √(wk·yk)/(Σi=1..k √(wi·yi)) · (X + Σi=1..k yi) − yk ≥ 0. 3. Set xj = 0 for j > k, and for 1 ≤ j ≤ k, set xj = √(wj·yj)/(Σi=1..k √(wi·yi)) · (X + Σi=1..k yi) − yj. The computational complexity of this algorithm is O(n log n), dominated by the sorting. In practice, the best response can be computed infrequently (e.g. once a minute), so for a typically powerful modern host, this cost is negligible. The best response algorithm must send and receive O(n) messages because each user must obtain the total bids from each host. In practice, this is more significant than the computational cost. Note that hosts only reveal to users the sum of the bids on them. As a result, hosts do not reveal the private preferences or even the individual bids of one user to another. 4.2 Finite Parallelism Model Recall that in the finite parallelism model, each user i only places bids on at most ki machines. Of course, the infinite parallelism model is just a special case of the finite parallelism model in which ki = n for all i. In the finite parallelism model, computing the social optimum is no longer trivial due to the bounded parallelism. It can instead be computed by using a maximum weight matching algorithm. Consider the weighted complete bipartite graph G = U × V, where U = {uiℓ | 1 ≤ i ≤ m and 1 ≤ ℓ ≤ ki} contains ki copies of each user i, V = {v1, v2, ... , vn} contains one node per machine, and edge weight wij is assigned to the edge (uiℓ, vj). A matching of G is a set of edges with disjoint nodes, and the weight of a matching is the total weight of the edges in the matching. As a result, the following lemma holds. Lemma 1. The social optimum is the same as the maximum weight matching of G. Thus, we can use the maximum weight matching algorithm to compute the social optimum. The maximum weight matching is a classical network problem and can be solved in polynomial time [8, 9, 14]. We choose to implement the Hungarian algorithm [14, 19] because of its simplicity. There may exist a more efficient algorithm for computing the maximum matching by exploiting the special structure of G. This remains an interesting open question. However, we do not know an efficient algorithm to compute the best response under the finite parallelism model. Instead, we provide the following local search heuristic. Suppose we again have n machines with weights w1, ... , wn and total bids y1, ... , yn. Let the user's budget be X and the parallelism bound be k. Our goal is to compute an allocation of X to up to k machines that maximizes the user's utility. For a subset of machines A, denote by x(A) the best response on A without the parallelism bound and by U(A) the utility obtained by the best response algorithm. The local search works as follows (a short code sketch of these bid-update procedures is given below): 1. Set A to be the k machines with the highest wi/yi. 2. Compute U(A) by the infinite parallelism best response algorithm (Sec 4.1) on A. 3. For each i ∈ A and each j ∉ A, repeat: 4. Let B = A − {i} + {j} and compute U(B). 5. If U(B) > U(A), let A ← B, and goto 2.
6. Output x(A). Intuitively, by the local search heuristic, we test if we can swap a machine in A for one not in A to improve the best response utility. If yes, we swap the machines and repeat the process. Otherwise, we have reached a local maxima and output that value. We suspect that the local maxima that this algorithm finds is also the global maximum (with respect to an individual user) and that this process stop after a few number of iterations, but we are unable to establish it. However, in our simulations, this algorithm quickly converges to a high (≥ .7) efficiency. 131 4.3 Local Greedy Adjustment The above best response algorithms only work for the linear utility functions described earlier. In practice, utility functions may have more a complicated form, or even worse, a user may not have a formulation of his utility function. We do assume that the user still has a way to measure his utility, which is the minimum assumption necessary for any market-based resource allocation mechanism. In these situations, users can use a more general strategy, the local greedy adjustment method, which works as follows. A user finds the two machines that provide him with the highest and lowest marginal utility. He then moves a fixed small amount of money from the machine with low marginal utility to the machine with the higher one. This strategy aims to adjust the bids so that the marginal values at each machine being bid on are the same. This condition guarantees the allocation is the optimum when the utility function is concave. The tradeoff for local greedy adjustment is that it takes longer to stabilize than best-response. 5. SIMULATION RESULTS While the analytic results provide us with worst-case analysis for the infinite parallelism model, in this section we employ simulations to study the properties of the Nash equilibria in more realistic scenarios and for the finite parallelism model. First, we determine whether the user bidding process converges, and if so, what the rate of convergence is. Second, in cases of convergence, we look at the performance at equilibrium, using the efficiency and fairness metrics defined above. Iterative Method. In our simulations, each user starts with an initial bid vector and then iteratively updates his bids until a convergence criterion (described below) is met. The initial bid is set proportional to the user``s weights on the machines. We experiment with two update methods, the best response methods, as described in Section 4.1 and 4.2, and the local greedy adjustment method, as described in Section 4.3. Convergence Criteria. Convergence time measures how quickly the system reaches equilibrium. It is particularly important in the highly dynamic environment of distributed shared clusters, in which the system``s conditions may change before reaching the equilibrium. Thus, a high convergence rate may be more significant than the efficiency at the equilibrium. There are several different criteria for convergence. The strongest criterion is to require that there is only negligible change in the bids of each user. The problem with this criterion is that it is too strict: users may see negligible change in their utilities, but according to this definition the system has not converged. The less strict utility gap criterion requires there to be only negligible change in the users'' utility. Given users'' concern for utility, this is a more natural definition. 
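Before turning to the results, the bid-update machinery of Section 4 that the simulator iterates can be summarized in a short, self-contained sketch. This is our own illustration rather than the authors' simulator: the epsilon safeguards, the synchronous round-robin update order, the random example instance and all function names are assumptions we introduce, and the finite-parallelism local search and the greedy adjustment variant are omitted for brevity.

import numpy as np

def best_response(w, y, X, eps=1e-12):
    # Closed-form best response of Section 4.1 for linear utilities.
    # w: the user's weights; y: total bids of the other users on each machine;
    # X: the user's budget.  The eps floor on y is a practical safeguard we add.
    w = np.asarray(w, dtype=float)
    y = np.maximum(np.asarray(y, dtype=float), eps)
    order = np.argsort(-(w / y))            # sort machines by w_j / y_j, decreasing
    ws, ys = w[order], y[order]
    root = np.sqrt(ws * ys)
    S, T = np.cumsum(root), np.cumsum(ys)
    # bid that machine k would receive if exactly the first k machines were used
    kth_bid = root / S * (X + T) - ys
    k = int(np.nonzero(kth_bid >= 0)[0].max()) + 1   # largest feasible k
    bids = np.maximum(root / S[k - 1] * (X + T[k - 1]) - ys, 0.0)
    bids[k:] = 0.0
    x = np.zeros_like(w)
    x[order] = bids
    return x

def iterate_best_response(W, budgets, rounds=20):
    # Each user repeatedly rebids against the current totals of the others,
    # starting from a weight-proportional split of the budget.
    W = np.asarray(W, dtype=float)
    budgets = np.asarray(budgets, dtype=float)
    bids = W / W.sum(axis=1, keepdims=True) * budgets[:, None]
    for _ in range(rounds):
        for i in range(len(W)):
            others = bids.sum(axis=0) - bids[i]
            bids[i] = best_response(W[i], others, budgets[i])
    return bids

def evaluate(W, bids):
    # Efficiency, utility uniformity and envy-freeness of the resulting allocation.
    W = np.asarray(W, dtype=float)
    shares = bids / np.maximum(bids.sum(axis=0), 1e-12)   # r_ij = x_ij / Y_j
    utilities = (W * shares).sum(axis=1)                  # U_i under infinite parallelism
    cross = W @ shares.T                                  # cross[i, j] = U_i(r_j)
    return {
        "efficiency": utilities.sum() / W.max(axis=0).sum(),   # U / U*, with U* giving each machine to its top user
        "uniformity": utilities.min() / utilities.max(),
        "envy_freeness": (np.diag(cross)[:, None] / np.maximum(cross, 1e-12)).min(),
    }

# Illustrative run: 5 users, 8 machines, random normalized weights, unit budgets.
rng = np.random.default_rng(0)
W = rng.random((5, 8)); W /= W.sum(axis=1, keepdims=True)
bids = iterate_best_response(W, np.ones(5))
print(evaluate(W, bids))

The evaluate function mirrors the efficiency, utility uniformity and envy-freeness metrics defined in Section 2; on small random instances like this one the best-response loop should settle after only a few rounds, in line with the convergence results reported below.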
Indeed, in practice, the user is probably not willing to re-allocate their bids dramatically for a small utility gain. Therefore, we use the utility gap criterion to measure convergence time for the best response update method, i.e. we consider that the system has converged if the utility gap of each user is smaller than (0.001 in our experiments). However, this criterion does not work for the local greedy adjustment method because users of that method will experience constant fluctuations in utility as they move money around. For this method, we use the marginal utility gap criterion. We compare the highest and lowest utility margins on the machines. If the difference is negligible, then we consider the system to be converged. In addition to convergence to the equilibrium, we also consider the criterion from the system provider``s view, the social welfare stabilization criterion. Under this criterion, a system has stabilized if the change in social welfare is ≤ . Individual users'' utility may not have converged. This criterion is useful to evaluate how quickly the system as a whole reaches a particular efficiency level. User preferences. We experiment with two models of user preferences, random distribution and correlated distribution. With random distribution, users'' weights on the different machines are independently and identically distributed, according the uniform distribution. In practice, users'' preferences are probably correlated based on factors like the hosts'' location and the types of applications that users run. To capture these correlations, we associate with each user and machine a resource profile vector where each dimension of the vector represents one resource (e.g., CPU, memory, and network bandwidth). For a user i with a profile pi = (pi1, ... , pi ), pik represents user i``s need for resource k. For machine j with profile qj = (qj1, ... , qj ), qjk represents machine j``s strength with respect to resource k. Then, wij is the dot product of user i``s and machine j``s resource profiles, i.e. wij = pi · qj = P k=1 pikqjk. By using these profiles, we compress the parameter space and introduce correlations between users and machines. In the following simulations, we fix the number of machines to 100 and vary the number of users from 5 to 250 (but we only report the results for the range of 5 − 150 users since the results remain similar for a larger number of users). Sections 5.1 and 5.2 present the simulation results when we apply the infinite parallelism and finite parallelism models, respectively. If the system converges, we report the number of iterations until convergence. A convergence time of 200 iterations indicates non-convergence, in which case we report the efficiency and fairness values at the point we terminate the simulation. 5.1 Infinite parallelism In this section, we apply the infinite parallelism model, which assumes that users can use an unlimited number of machines. We present the efficiency and fairness at the equilibrium, compared to two baseline allocation methods: social optimum and weight-proportional, in which users distribute their bids proportionally to their weights on the machines (which may seem a reasonable distribution method intuitively). We present results for the two user preference models. With uniform preferences, users'' weights for the different machines are independently and identically distributed according to the uniform distribution, U ∼ (0, 1) (and are normalized thereafter). 
In correlated preferences, each user's and each machine's resource profile vector has three dimensions, and their values are also taken from the uniform distribution, U ∼ (0, 1). Convergence Time. Figure 2 shows the convergence time, efficiency and fairness of the infinite parallelism model under uniform (left) and correlated (right) preferences. Plots (a) and (b) show the convergence and stabilization time of the best-response and local greedy adjustment methods. Figure 2 (panels (a)-(h); the curves compare the Nash equilibrium, weight-proportional and social optimum allocations): Efficiency, utility uniformity, envy-freeness and convergence time as a function of the number of users under the infinite parallelism model, with uniform and correlated preferences. n = 100. Figure 3: Efficiency level over time under the infinite parallelism model. Number of users = 40. n = 100. The best-response algorithm converges within a few iterations for any number of users. In contrast, the local greedy adjustment algorithm does not converge even within 500 iterations when the number of users is smaller than 60, but does converge for a larger number of users. We believe that for small numbers of users, there are dependency cycles among the users that prevent the system from converging, because one user's decisions affect another user, whose decisions affect another user, and so on. Regardless, the local greedy adjustment method stabilizes within 100 iterations. Figure 3 presents the efficiency over time for a system with 40 users. It demonstrates that while both adjustment methods reach the same social welfare, the best-response algorithm is faster. In the remainder of this paper, we will refer to the (Nash) equilibrium, independent of the adjustment method used to reach it. Efficiency. Figure 2 (c) and (d) present the efficiency as a function of the number of users. We present the efficiency at equilibrium, and use the social optimum and the weight-proportional static allocation methods for comparison. Social optimum provides an efficient allocation by definition. For both user preference models, the efficiency at the equilibrium is approximately 0.9, independent of the number of users, which is only slightly worse than the social optimum.
The efficiency at the equilibrium is a ≈ 50% improvement over the weight-proportional allocation method for uniform preferences, and a ≈ 30% improvement for correlated preferences. Fairness. Figure 2(e) and (f) present the utility uniformity as a function of the number of users, and Figure 2(g) and (h) present the envy-freeness. While the social optimum yields perfect efficiency, it has poor fairness. The weight-proportional method achieves the highest fairness among the three allocation methods, but the fairness at the equilibrium is close. The utility uniformity is slightly better at the equilibrium under uniform preferences (> 0.7) than under correlated preferences (> 0.6), since when users' preferences are more aligned, one user's happiness is more likely to come at the expense of another's. Although utility uniformity decreases with the number of users, it remains reasonable even for a large number of users, and flattens out at some point. At the social optimum, utility uniformity can be infinitely poor, as some users may be allocated no resources at all. The same is true with respect to envy-freeness. The difference between uniform and correlated preferences is best demonstrated in the social optimum results. When the number of users is small, it may be possible to satisfy all users to some extent if their preferences are not aligned, but if they are aligned, even with a very small number of users some users get no resources, thus both utility uniformity and envy-freeness go to zero. As the number of users increases, it becomes almost impossible to satisfy all users, independent of the existence of correlation. These results demonstrate the tradeoff between the different allocation methods. The efficiency at the equilibrium is lower than at the social optimum, but the equilibrium performs much better with respect to fairness. The equilibrium allocation is completely envy-free under uniform preferences and almost envy-free under correlated preferences. 5.2 Finite parallelism Figure 4: Convergence time under the finite parallelism model, for 5 and 20 machines/user. n = 100. Figure 5: Efficiency level over time under the finite parallelism model with the local search algorithm (5 machines/user with 40 users, and 20 machines/user with 10 users). n = 100. We also consider the finite parallelism model and use the local search algorithm, as described in Section 4.2, to adjust users' bids. We again experimented with both the uniform and correlated preference distributions and did not find significant differences in the results, so we present the simulation results for only the uniform distribution. In our experiments, the local search algorithm stops quickly: it usually discovers a local maximum within two iterations. As mentioned before, we cannot prove that a local maximum is the global maximum, but our experiments indicate that the local search heuristic leads to high efficiency. Convergence time. Let ∆ denote the parallelism bound that limits the maximum number of machines each user can bid on. We experiment with ∆ = 5 and ∆ = 20. In both cases, we use 100 machines and vary the number of users. Figure 4 shows that the system does not always converge, but if it does, the convergence happens quickly. The non-convergence occurs when the number of users is between 20 and 40 for ∆ = 5, and between 5 and 10 for ∆ = 20.
We believe that the non-convergence is caused by moderate competition. No competition allows the system to equilibrate quickly because users do not have to change their bids in reaction to changes in others'' bids. High competition also allows convergence because each user``s decision has only a small impact on other users, so the system is more stable and can gradually reach convergence. However, when there is moderate competition, one user``s decisions may cause dramatic changes in another``s decisions and cause large fluctuations in bids. In both cases of non-convergence, the ratio of competitors per machine, δ = m×∆/n for m users and n machines, is in the interval [1, 2]. Although the system does not converge in these bad ranges, the system nontheless achieves and maintains a high level of overall efficiency after a few iterations (as shown in Figure 5). Performance. In Figure 6, we present the efficiency, utility uniformity, and envy-freeness at the Nash equilibrium for the finite parallelism model. When the system does not converge, we measure performance by taking the minimum value we observe after running for many iterations. When ∆ = 5, there is a performance drop, in particular with respect to the fairness metrics, in the range between 20 and 40 users (where it does not converge). For a larger number of users, the system converges and achieves a lower level of utility uniformity, but a high degree of efficiency and envy-freeness, similar to those under the infinite parallelism model. As described above, this is due the competition ratio falling into the head-to-head range. When the parallelism bound is large (∆ = 20), the performance is closer to the infinite parallelism model, and we do not observe this drop in performance. 6. RELATED WORK There are two main groups of related work in resource allocation: those that incorporate an economic mechanism, and those that do not. One non-economic approach is scheduling (surveyed by Pinedo [20]). Examples of this approach are queuing in first-come, first-served (FCFS) order, queueing using the resource consumption of tasks (e.g., [28]), and scheduling using combinatorial optimization [19]. These all assume that the values and resource consumption of tasks are reported accurately, which does not apply in the presence of strategic users. We view scheduling and resource allocation as two separate functions. Resource allocation divides a resource among different users while scheduling takes a given allocation and orders a user``s jobs. Examples of the economic approach are Spawn [26]), work by Stoica, et al. [24]. , the Millennium resource allocator [4], work by Wellman, et al. [27], Bellagio [2]), and Tycoon [15]). Spawn and the work by Wellman, et al. uses a reservation abstraction similar to the way airline seats are allocated. Unfortunately, reservations have a high latency to acquire resources, unlike the price-anticipating scheme we consider. The tradeoff of the price-anticipating schemes is that users have uncertainty about exactly how much of the resources they will receive. Bellagio[3] uses the SHARE centralized allocator. SHARE allocates resources using a centralized combinatorial auction that allows users to express preferences with complementarities. Solving the NP-complete combinatorial auction problem provides an optimally efficient allocation. 
The price-anticipating scheme that we consider does not explicitly handle complementarities, thereby possibly losing some efficiency, but it also avoids the complexity and overhead of combinatorial auctions. There have been several analyses [10, 11, 12, 13, 23] of variations of price-anticipating allocation schemes in the context of allocating network capacity to flows. Their methodology follows the study of congestion (potential) games [17, 22] by relating the Nash equilibrium to the solution of a (usually convex) global optimization problem. Those techniques no longer apply to our game because we model users as having fixed budgets and private preferences for machines; for example, unlike those games, there may exist multiple Nash equilibria in our game. Milchtaich [16] studied congestion games with private preferences, but the technique in [16] is specific to congestion games.

7. CONCLUSIONS

This work studies the performance of a market-based mechanism for distributed shared clusters using both analytical and simulation methods. We show that despite the worst-case bounds, the system can reach a high performance level at the Nash equilibrium in terms of both efficiency and fairness metrics. In addition, with a few exceptions under the finite parallelism model, the system reaches equilibrium quickly by using the best response algorithm and, when the number of users is not too small, by the greedy local adjustment method. While our work indicates that the price-anticipating scheme may work well for resource allocation in shared clusters, there are many interesting directions for future work. One direction is to consider more realistic utility functions. For example, we assume that there is no parallelization cost and no performance degradation when multiple users share the same machine. In practice, neither assumption may hold: the user must copy code and data to a machine before running his application there, and there is overhead for multiplexing resources on a single machine. When the job size is large enough and the degree of multiplexing is sufficiently low, we can probably ignore those effects, but those costs should be taken into account for a more realistic model. Another assumption is that users have infinite work, so the more resources they can acquire, the better. In practice, users have finite work. One approach to address this is to model the user's utility according to the time to finish a task rather than the amount of resources he receives. Another direction is to study the dynamic properties of the system when the users' needs change over time, according to some statistical model. In addition to the usual questions concerning repeated games, it would also be important to understand how users should allocate their budgets wisely over time to accommodate future needs.

Figure 6: Efficiency, utility uniformity, and envy-freeness under the finite parallelism model, as a function of the number of users; panel (a) uses a limit of 5 machines/user and panel (b) a limit of 20 machines/user. n = 100.

8. ACKNOWLEDGEMENTS

We thank Bernardo Huberman, Lars Rasmusson, Eytan Adar and Moshe Babaioff for fruitful discussions. We also thank the anonymous reviewers for their useful comments.

9. REFERENCES

[1] http://planet-lab.org.
[2] A. AuYoung, B. N. Chun, A. C. Snoeren, and A. Vahdat.
Resource Allocation in Federated Distributed Computing Infrastructures. In Proceedings of the 1st Workshop on Operating System and Architectural Support for the On-demand IT InfraStructure, 2004.
[3] B. Chun, C. Ng, J. Albrecht, D. C. Parkes, and A. Vahdat. Computational Resource Exchanges for Distributed Resource Allocation. 2004.
[4] B. N. Chun and D. E. Culler. Market-based Proportional Resource Sharing for Clusters. Technical Report CSD-1092, University of California at Berkeley, Computer Science Division, January 2000.
[5] M. Feldman, K. Lai, and L. Zhang. A Price-Anticipating Resource Allocation Mechanism for Distributed Shared Clusters. Technical report, arXiv, 2005. http://arxiv.org/abs/cs.DC/0502019.
[6] D. Ferguson, Y. Yemini, and C. Nikolaou. Microeconomic Algorithms for Load Balancing in Distributed Computer Systems. In International Conference on Distributed Computer Systems, pages 491-499, 1988.
[7] I. Foster and C. Kesselman. Globus: A Metacomputing Infrastructure Toolkit. The International Journal of Supercomputer Applications and High Performance Computing, 11(2):115-128, Summer 1997.
[8] M. L. Fredman and R. E. Tarjan. Fibonacci Heaps and Their Uses in Improved Network Optimization Algorithms. Journal of the ACM, 34(3):596-615, 1987.
[9] H. N. Gabow. Data Structures for Weighted Matching and Nearest Common Ancestors with Linking. In Proceedings of the 1st Annual ACM-SIAM Symposium on Discrete Algorithms, pages 434-443, 1990.
[10] B. Hajek and S. Yang. Strategic Buyers in a Sum Bid Game for Flat Networks. Manuscript, http://tesla.csl.uiuc.edu/~hajek/Papers/HajekYang.pdf, 2004.
[11] R. Johari and J. N. Tsitsiklis. Efficiency Loss in a Network Resource Allocation Game. Mathematics of Operations Research, 2004.
[12] F. P. Kelly. Charging and Rate Control for Elastic Traffic. European Transactions on Telecommunications, 8:33-37, 1997.
[13] F. P. Kelly and A. K. Maulloo. Rate Control in Communication Networks: Shadow Prices, Proportional Fairness and Stability. Operational Research Society, 49:237-252, 1998.
[14] H. W. Kuhn. The Hungarian Method for the Assignment Problem. Naval Res. Logis. Quart., 2:83-97, 1955.
[15] K. Lai, L. Rasmusson, S. Sorkin, L. Zhang, and B. A. Huberman. Tycoon: an Implementation of a Distributed Market-Based Resource Allocation System. Manuscript, http://www.hpl.hp.com/research/tycoon/papers_and_presentations, 2004.
[16] I. Milchtaich. Congestion Games with Player-Specific Payoff Functions. Games and Economic Behavior, 13:111-124, 1996.
[17] D. Monderer and L. S. Shapley. Potential Games. Games and Economic Behavior, 14:124-143, 1996.
[18] C. Papadimitriou. Algorithms, Games, and the Internet. In Proceedings of the 33rd STOC, 2001.
[19] C. H. Papadimitriou and K. Steiglitz. Combinatorial Optimization. Dover Publications, Inc., 1982.
[20] M. Pinedo. Scheduling. Prentice Hall, 2002.
[21] O. Regev and N. Nisan. The Popcorn Market: Online Markets for Computational Resources. In Proceedings of the 1st International Conference on Information and Computation Economies, pages 148-157, 1998.
[22] R. W. Rosenthal. A Class of Games Possessing Pure-Strategy Nash Equilibria. International Journal of Game Theory, 2:65-67, 1973.
[23] S. Sanghavi and B. Hajek. Optimal Allocation of a Divisible Good to Strategic Buyers. Manuscript, http://tesla.csl.uiuc.edu/~hajek/Papers/OptDivisible.pdf, 2004.
[24] I. Stoica, H. Abdel-Wahab, and A. Pothen. A Microeconomic Scheduler for Parallel Computers.
In Proceedings of the Workshop on Job Scheduling Strategies for Parallel Processing, pages 122-135, April 1995.
[25] H. R. Varian. Equity, Envy, and Efficiency. Journal of Economic Theory, 9:63-91, 1974.
[26] C. A. Waldspurger, T. Hogg, B. A. Huberman, J. O. Kephart, and S. Stornetta. Spawn: A Distributed Computational Economy. IEEE Transactions on Software Engineering, 18(2):103-117, February 1992.
[27] M. P. Wellman, W. E. Walsh, P. R. Wurman, and J. K. MacKie-Mason. Auction Protocols for Decentralized Scheduling. Games and Economic Behavior, 35:271-303, 2001.
[28] A. Wierman and M. Harchol-Balter. Classifying Scheduling Policies with respect to Unfairness in an M/GI/1. In Proceedings of the ACM SIGMETRICS 2003 Conference on Measurement and Modeling of Computer Systems, 2003.
[29] L. Zhang. On the Efficiency and Fairness of a Fixed Budget Resource Allocation Game. Manuscript, 2004.
A Price-Anticipating Resource Allocation Mechanism for Distributed Shared Clusters ABSTRACT In this paper we formulate the fixed budget resource allocation game to understand the performance of a distributed marketbased resource allocation system. Multiple users decide how to distribute their budget (bids) among multiple machines according to their individual preferences to maximize their individual utility. We look at both the efficiency and the fairness of the allocation at the equilibrium, where fairness is evaluated through the measures of utility uniformity and envy-freeness. We show analytically and through simulations that despite being highly decentralized, such a system converges quickly to an equilibrium and unlike the social optimum that achieves high efficiency but poor fairness, the proposed allocation scheme achieves a nice balance of high degrees of efficiency and fairness at the equilibrium. 1. INTRODUCTION The primary advantage of distributed shared clusters like the Grid [7] and PlanetLab [1] is their ability to pool together shared computational resources. This allows increased throughput because of statistical multiplexing and the bursty utilization pattern of typical users. Sharing nodes that are dispersed in the network allows lower delay because applications can store data close to users. Finally, sharing allows greater reliability because of redundancy in hosts and network connections. However, resource allocation in these systems remains the major challenge. The problem is how to allocate a shared resource both fairly and efficiently (where efficiency is the ratio of the achieved social welfare to the social optimal) with the presence of strategic users who act in their own interests. Several non-economic allocation algorithms have been proposed, but these typically assume that task values (i.e., their importance) are the same, or are inversely proportional to the resources required, or are set by an omniscient administrator. However, in many cases, task values vary significantly, are not correlated to resource requirements, and are difficult and time-consuming for an administrator to set. Instead, we examine a market-based resource allocation system (others are described in [2, 4, 6, 21, 26, 27]) that allows users to express their preferences for resources through a bidding mechanism. In particular, we consider a price-anticipating [12] scheme in which a user bids for a resource and receives the ratio of his bid to the sum of bids for that resource. This proportional scheme is simpler, more scalable, and more responsive [15] than auction-based schemes [6, 21, 26]. Previous work has analyzed price-anticipating schemes in the context of allocating network capacity for flows for users with unlimited budgets. In this work, we examine a price-anticipating scheme in the context of allocating computational capacity for users with private preferences and limited budgets, resulting in a qualitatively different game (as discussed in Section 6). In this paper, we formulate the fixed budget resource allocation game and study the existence and performance of the Nash equilibria of this game. For evaluating the Nash equilibria, we consider both their efficiency, measuring how close the social welfare at equilibrium is to the social optimum, and fairness, measuring how different the users' utilities are. 
Although rarely considered in previous game theoretical study, we believe fairness is a critical metric for a resource allocation schemes because the perception of unfairness will cause some users to reject a system with more efficient, but less fair resource allocation in favor of one with less efficient, more fair resource allocation. We use both utility uniformity and envy-freeness to measure fairness. Utility uniformity, which is common in Computer Science work, measures the closeness of utilities of different users. Envyfreeness, which is more from the Economic perspective, measures the happiness of users with their own resources compared to the resources of others. Our contributions are as follows: • We analyze the existence and performance of Nash equilibria. Using analysis, we show that there is always a Nash equilibrium in the fixed budget game if the utility functions satisfy a fairly weak and natural condition of strong competitiveness. We also show the worst case performance bounds: for m players the efficiency at equilibrium is SZ (1 / -, / m), the utility uniformity is> 1/m, and the envyfreeness> 2 -, / 2--2 Pz 0.83. Although these bounds are quite low, the simulations described below indicate these bounds are overly pessimistic. 9 We describe algorithms that allow strategic users to optimize their utility. As part of the fixed budget game analysis, we show that strategic users with linear utility functions can calculate their bids using a best response algorithm that quickly results in an allocation with high efficiency with little computational and communication overhead. We present variations of the best response algorithm for both finite and infinite parallelism tasks. In addition, we present a local greedy adjustment algorithm that converges more slowly than best response, but allows for non-linear or unformulatable utility functions. 9 We show that the price-anticipating resource allocation mechanism achieves a high degree of efficiency and fairness. Using simulation, we find that although the socially optimal allocation results in perfect efficiency, it also results in very poor fairness. Likewise, allocating according to only users' preference weights results in a high fairness, but a mediocre efficiency. Intuition would suggest that efficiency and fairness are exclusive. Surprisingly, the Nash equilibrium, reached by each user iteratively applying the best response algorithm to adapt his bids, achieves nearly the efficiency of the social optimum and nearly the fairness of the weight-proportional allocation: the efficiency is> 0.90, the utility uniformity is> 0.65, and the envyfreeness is> 0.97, independent of the number of users in the system. In addition, the time to converge to the equilibrium is <5 iterations when all users use the best response strategy. The local adjustment algorithm performs similarly when there is sufficient competitiveness, but takes 25 to 90 iterations to stabilize. As a result, we believe that shared distributed systems based on the fixed budget game can be highly decentralized, yet achieve a high degree of efficiency and fairness. The rest of the paper is organized as follows. We describe the model in Section 2 and derive the performance at the Nash equilibria for the infinite parallelism model in Section 3. In Section 4, we describe algorithms for users to optimize their own utility in the fixed budget game. In Section 5, we describe our simulator and simulation results. We describe related work in Section 6. 
We conclude by discussing some limits of our model and future work in Section 7. 2. THE MODEL Efficiency (Price of Anarchy). 3. NASH EQUILIBRIUM 3.1 Two-player Games 3.2 Multi-player Game 4. ALGORITHMS 4.1 Infinite Parallelism Model 4.2 Finite Parallelism Model 4.3 Local Greedy Adjustment 5. SIMULATION RESULTS 5.1 Infinite parallelism 5.2 Finite parallelism 6. RELATED WORK There are two main groups of related work in resource allocation: those that incorporate an economic mechanism, and those that do not. One non-economic approach is scheduling (surveyed by Pinedo [20]). Examples of this approach are queueing in first-come, first-served (FCFS) order, queueing using the resource consumption of tasks (e.g., [28]), and scheduling using combinatorial optimization [19]. These all assume that the values and resource consumption of tasks are reported accurately, which does not apply in the presence of strategic users. We view scheduling and resource allocation as two separate functions. Resource allocation divides a resource among different users while scheduling takes a given allocation and orders a user's jobs. Examples of the economic approach are Spawn [26], work by Stoica, et al. [24], the Millennium resource allocator [4], work by Wellman, et al. [27], Bellagio [2], and Tycoon [15]. Spawn and the work by Wellman, et al. use a reservation abstraction similar to the way airline seats are allocated. Unfortunately, reservations have a high latency to acquire resources, unlike the price-anticipating scheme we consider. The tradeoff of the price-anticipating schemes is that users have uncertainty about exactly how much of the resources they will receive. Bellagio [3] uses the SHARE centralized allocator. SHARE allocates resources using a centralized combinatorial auction that allows users to express preferences with complementarities. Solving the NP-complete combinatorial auction problem provides an optimally efficient allocation. The price-anticipating scheme that we consider does not explicitly operate on complementarities, thereby possibly losing some efficiency, but it also avoids the complexity and overhead of combinatorial auctions. There have been several analyses [10, 11, 12, 13, 23] of variations of price-anticipating allocation schemes in the context of allocation of network capacity for flows. Their methodology follows the study of congestion (potential) games [17, 22] by relating the Nash equilibrium to the solution of a (usually convex) global optimization problem. But those techniques no longer apply to our game because we model users as having fixed budgets and private preferences for machines. For example, unlike those games, there may exist multiple Nash equilibria in our game. Milchtaich [16] studied congestion games with private preferences, but the technique in [16] is specific to the congestion game. 7. CONCLUSIONS This work studies the performance of a market-based mechanism for distributed shared clusters using both analytical and simulation methods. We show that despite the worst case bounds, the system can reach a high performance level at the Nash equilibrium in terms of both efficiency and fairness metrics. In addition, with a few exceptions under the finite parallelism model, the system reaches equilibrium quickly by using the best response algorithm and, when the number of users is not too small, by the greedy local adjustment method. 
While our work indicates that the price-anticipating scheme may work well for resource allocation for shared clusters, there are many interesting directions for future work. One direction is to consider more realistic utility functions. For example, we assume that there is no parallelization cost, and there is no performance degradation when multiple users share the same machine. In practice, neither assumption may hold. For example, the user must copy code and data to a machine before running his application there, and there is overhead for multiplexing resources on a single machine. When the job size is large enough and the degree of multiplexing is sufficiently low, we can probably ignore those effects, but those costs should be taken into account for a more realistic model. Another assumption is that users have infinite work, so the more resources they can acquire, the better. In practice, users have finite work. One approach to address this is to model the user's utility according to the time to finish a task rather than the amount of resources he receives. Another direction is to study the dynamic properties of the system when the users' needs change over time, according to some statistical model. In addition to the usual questions concerning repeated games, it would also be important to understand how users should allocate their budgets wisely over time to accommodate future needs. Figure 6: Efficiency, utility uniformity and envy-freeness under the finite parallelism model. n = 100.
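As a rough illustration of the three metrics used throughout this paper, the sketch below computes efficiency, utility uniformity and envy-freeness for a given allocation. The formalizations are assumptions on our part (efficiency as achieved welfare over optimal welfare, uniformity as the ratio of the lowest to the highest utility, and envy-freeness as the worst ratio between a user's utility for his own bundle and his utility for another user's bundle, capped at 1); the paper's exact definitions may differ in detail.

    def efficiency(utilities, optimal_welfare):
        # Ratio of achieved social welfare to the social optimum.
        return sum(utilities) / optimal_welfare

    def utility_uniformity(utilities):
        # Closeness of the users' utilities: worst-off over best-off.
        return min(utilities) / max(utilities)

    def envy_freeness(utility_fn, bundles):
        # utility_fn(i, bundle) is user i's utility for a bundle of machine shares.
        ratios = [utility_fn(i, own) / utility_fn(i, other)
                  for i, own in enumerate(bundles)
                  for j, other in enumerate(bundles)
                  if i != j and utility_fn(i, other) > 0]
        return min(1.0, min(ratios)) if ratios else 1.0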
A Price-Anticipating Resource Allocation Mechanism for Distributed Shared Clusters ABSTRACT In this paper we formulate the fixed budget resource allocation game to understand the performance of a distributed market-based resource allocation system. Multiple users decide how to distribute their budget (bids) among multiple machines according to their individual preferences to maximize their individual utility. We look at both the efficiency and the fairness of the allocation at the equilibrium, where fairness is evaluated through the measures of utility uniformity and envy-freeness. We show analytically and through simulations that despite being highly decentralized, such a system converges quickly to an equilibrium and unlike the social optimum that achieves high efficiency but poor fairness, the proposed allocation scheme achieves a nice balance of high degrees of efficiency and fairness at the equilibrium. 1. INTRODUCTION The primary advantage of distributed shared clusters like the Grid [7] and PlanetLab [1] is their ability to pool together shared computational resources. This allows increased throughput because of statistical multiplexing and the bursty utilization pattern of typical users. Sharing nodes that are dispersed in the network allows lower delay because applications can store data close to users. Finally, sharing allows greater reliability because of redundancy in hosts and network connections. However, resource allocation in these systems remains the major challenge. The problem is how to allocate a shared resource both fairly and efficiently (where efficiency is the ratio of the achieved social welfare to the social optimal) with the presence of strategic users who act in their own interests. However, in many cases, task values vary significantly, are not correlated to resource requirements, and are difficult and time-consuming for an administrator to set. Instead, we examine a market-based resource allocation system (others are described in [2, 4, 6, 21, 26, 27]) that allows users to express their preferences for resources through a bidding mechanism. In particular, we consider a price-anticipating [12] scheme in which a user bids for a resource and receives the ratio of his bid to the sum of bids for that resource. Previous work has analyzed price-anticipating schemes in the context of allocating network capacity for flows for users with unlimited budgets. In this work, we examine a price-anticipating scheme in the context of allocating computational capacity for users with private preferences and limited budgets, resulting in a qualitatively different game (as discussed in Section 6). In this paper, we formulate the fixed budget resource allocation game and study the existence and performance of the Nash equilibria of this game. For evaluating the Nash equilibria, we consider both their efficiency, measuring how close the social welfare at equilibrium is to the social optimum, and fairness, measuring how different the users' utilities are. We use both utility uniformity and envy-freeness to measure fairness. Utility uniformity, which is common in Computer Science work, measures the closeness of utilities of different users. Envy-freeness, which is more from the Economic perspective, measures the happiness of users with their own resources compared to the resources of others. Our contributions are as follows: • We analyze the existence and performance of Nash equilibria. 
Using analysis, we show that there is always a Nash equilibrium in the fixed budget game if the utility functions satisfy a fairly weak and natural condition of strong competitiveness. Although these bounds are quite low, the simulations described below indicate these bounds are overly pessimistic. • We describe algorithms that allow strategic users to optimize their utility. As part of the fixed budget game analysis, we show that strategic users with linear utility functions can calculate their bids using a best response algorithm that quickly results in an allocation with high efficiency with little computational and communication overhead. We present variations of the best response algorithm for both finite and infinite parallelism tasks. In addition, we present a local greedy adjustment algorithm that converges more slowly than best response, but allows for non-linear or unformulatable utility functions. • We show that the price-anticipating resource allocation mechanism achieves a high degree of efficiency and fairness. Using simulation, we find that although the socially optimal allocation results in perfect efficiency, it also results in very poor fairness. Likewise, allocating according to only users' preference weights results in a high fairness, but a mediocre efficiency. Intuition would suggest that efficiency and fairness are exclusive. In addition, the time to converge to the equilibrium is < 5 iterations when all users use the best response strategy. As a result, we believe that shared distributed systems based on the fixed budget game can be highly decentralized, yet achieve a high degree of efficiency and fairness. The rest of the paper is organized as follows. We describe the model in Section 2 and derive the performance at the Nash equilibria for the infinite parallelism model in Section 3. In Section 4, we describe algorithms for users to optimize their own utility in the fixed budget game. In Section 5, we describe our simulator and simulation results. We describe related work in Section 6. We conclude by discussing some limits of our model and future work in Section 7. 6. RELATED WORK There are two main groups of related work in resource allocation: those that incorporate an economic mechanism, and those that do not. Examples of this approach are queueing in first-come, first-served (FCFS) order, queueing using the resource consumption of tasks (e.g., [28]), and scheduling using combinatorial optimization [19]. These all assume that the values and resource consumption of tasks are reported accurately, which does not apply in the presence of strategic users. We view scheduling and resource allocation as two separate functions. Resource allocation divides a resource among different users while scheduling takes a given allocation and orders a user's jobs. Examples of the economic approach are Spawn [26], work by Stoica, et al. [24], the Millennium resource allocator [4], work by Wellman, et al. [27], Bellagio [2], and Tycoon [15]. Spawn and the work by Wellman, et al. use a reservation abstraction similar to the way airline seats are allocated. Unfortunately, reservations have a high latency to acquire resources, unlike the price-anticipating scheme we consider. The tradeoff of the price-anticipating schemes is that users have uncertainty about exactly how much of the resources they will receive. Bellagio [3] uses the SHARE centralized allocator. 
SHARE allocates resources using a centralized combinatorial auction that allows users to express preferences with complementarities. Solving the NP-complete combinatorial auction problem provides an optimally efficient allocation. There have been several analyses [10, 11, 12, 13, 23] of variations of price-anticipating allocation schemes in the context of allocation of network capacity for flows. But those techniques no longer apply to our game because we model users as having fixed budgets and private preferences for machines. For example, unlike those games, there may exist multiple Nash equilibria in our game. Milchtaich [16] studied congestion games with private preferences, but the technique in [16] is specific to the congestion game. 7. CONCLUSIONS This work studies the performance of a market-based mechanism for distributed shared clusters using both analytical and simulation methods. We show that despite the worst case bounds, the system can reach a high performance level at the Nash equilibrium in terms of both efficiency and fairness metrics. In addition, with a few exceptions under the finite parallelism model, the system reaches equilibrium quickly by using the best response algorithm and, when the number of users is not too small, by the greedy local adjustment method. While our work indicates that the price-anticipating scheme may work well for resource allocation for shared clusters, there are many interesting directions for future work. One direction is to consider more realistic utility functions. For example, we assume that there is no parallelization cost, and there is no performance degradation when multiple users share the same machine. For example, the user must copy code and data to a machine before running his application there, and there is overhead for multiplexing resources on a single machine. Another assumption is that users have infinite work, so the more resources they can acquire, the better. In practice, users have finite work. One approach to address this is to model the user's utility according to the time to finish a task rather than the amount of resources he receives. Another direction is to study the dynamic properties of the system when the users' needs change over time, according to some statistical model. In addition to the usual questions concerning repeated games, it would also be important to understand how users should allocate their budgets wisely over time to accommodate future needs. Figure 6: Efficiency, utility uniformity and envy-freeness under the finite parallelism model. n = 100.
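The best response algorithm mentioned above can be sketched for the linear-utility case. Under the bid-proportional rule, a user with preference weights w_j, budget X, and total competing bids y_j on machine j maximizes sum_j w_j * x_j / (x_j + y_j) subject to sum_j x_j = X; the first-order conditions give x_j = max(0, sqrt(w_j * y_j / lambda) - y_j), with lambda fixed by the budget. The Python sketch below is this textbook derivation, assuming y_j > 0 for every machine; it is not taken verbatim from the paper and may differ from its algorithm in details.

    import math

    def best_response(weights, others_bids, budget, iters=60):
        # Returns this user's utility-maximizing bid vector given the others' total bids.
        def spend(lam):
            return sum(max(0.0, math.sqrt(w * y / lam) - y)
                       for w, y in zip(weights, others_bids))
        lo, hi = 1e-12, 1e12              # bracket for the Lagrange multiplier
        for _ in range(iters):            # spend(lam) is decreasing in lam
            mid = math.sqrt(lo * hi)
            if spend(mid) > budget:
                lo = mid
            else:
                hi = mid
        lam = math.sqrt(lo * hi)
        return [max(0.0, math.sqrt(w * y / lam) - y)
                for w, y in zip(weights, others_bids)]

Iterating such a step over the users in turn until the bids stop changing is the best-response dynamics whose convergence the simulations above measure.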
C-77
Tracking Immediate Predecessors in Distributed Computations
A distributed computation is usually modeled as a partially ordered set of relevant events (the relevant events are a subset of the primitive events produced by the computation). An important causality-related distributed computing problem, that we call the Immediate Predecessors Tracking (IPT) problem, consists in associating with each relevant event, on the fly and without using additional control messages, the set of relevant events that are its immediate predecessors in the partial order. So, IPT is the on-the-fly computation of the transitive reduction (i.e., Hasse diagram) of the causality relation defined by a distributed computation. This paper addresses the IPT problem: it presents a family of protocols that provides each relevant event with a timestamp that exactly identifies its immediate predecessors. The family is defined by a general condition that allows application messages to piggyback control information whose size can be smaller than n (the number of processes). In that sense, this family defines message size-efficient IPT protocols. According to the way the general condition is implemented, different IPT protocols can be obtained. Two of them are exhibited.
[ "immedi predecessor", "distribut comput", "relev event", "immedi predecessor track", "transit reduct", "hass diagram", "timestamp", "piggyback", "control inform", "ipt protocol", "common global memori", "messag transfer delai", "vector clock", "track causal", "vector timestamp", "channel order properti", "checkpoint problem", "causal track", "messag-pass" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "U", "M", "U", "R", "M", "M", "M", "R", "U" ]
Tracking Immediate Predecessors in Distributed Computations Emmanuelle Anceaume Jean-Michel H´elary Michel Raynal IRISA, Campus Beaulieu 35042 Rennes Cedex, France FirstName.LastName@irisa.fr ABSTRACT A distributed computation is usually modeled as a partially ordered set of relevant events (the relevant events are a subset of the primitive events produced by the computation). An important causality-related distributed computing problem, that we call the Immediate Predecessors Tracking (IPT) problem, consists in associating with each relevant event, on the fly and without using additional control messages, the set of relevant events that are its immediate predecessors in the partial order. So, IPT is the on-the-fly computation of the transitive reduction (i.e., Hasse diagram) of the causality relation defined by a distributed computation. This paper addresses the IPT problem: it presents a family of protocols that provides each relevant event with a timestamp that exactly identifies its immediate predecessors. The family is defined by a general condition that allows application messages to piggyback control information whose size can be smaller than n (the number of processes). In that sense, this family defines message size-efficient IPT protocols. According to the way the general condition is implemented, different IPT protocols can be obtained. Two of them are exhibited. Categories and Subject Descriptors C.2.4 [Distributed Systems]: General Terms Asynchronous Distributed Computations 1. INTRODUCTION A distributed computation consists of a set of processes that cooperate to achieve a common goal. A main characteristic of these computations lies in the fact that the processes do not share a common global memory, and communicate only by exchanging messages over a communication network. Moreover, message transfer delays are finite but unpredictable. This computation model defines what is known as the asynchronous distributed system model. It is particularly important as it includes systems that span large geographic areas, and systems that are subject to unpredictable loads. Consequently, the concepts, tools and mechanisms developed for asynchronous distributed systems reveal to be both important and general. Causality is a key concept to understand and master the behavior of asynchronous distributed systems [18]. More precisely, given two events e and f of a distributed computation, a crucial problem that has to be solved in a lot of distributed applications is to know whether they are causally related, i.e., if the occurrence of one of them is a consequence of the occurrence of the other. The causal past of an event e is the set of events from which e is causally dependent. Events that are not causally dependent are said to be concurrent. Vector clocks [5, 16] have been introduced to allow processes to track causality (and concurrency) between the events they produce. The timestamp of an event produced by a process is the current value of the vector clock of the corresponding process. In that way, by associating vector timestamps with events it becomes possible to safely decide whether two events are causally related or not. Usually, according to the problem he focuses on, a designer is interested only in a subset of the events produced by a distributed execution (e.g., only the checkpoint events are meaningful when one is interested in determining consistent global checkpoints [12]). 
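To make the causality test enabled by vector timestamps concrete, the standard comparison rule is sketched below in Python (a Fidge/Mattern-style test; the clocks of this paper count only relevant events, but the shape of the test is the same).

    def causally_precedes(vc_e, vc_f):
        # e happened before f iff e's timestamp is componentwise <= f's and differs somewhere.
        return all(a <= b for a, b in zip(vc_e, vc_f)) and vc_e != vc_f

    def concurrent(vc_e, vc_f):
        return not causally_precedes(vc_e, vc_f) and not causally_precedes(vc_f, vc_e)

    # Example with three processes:
    assert causally_precedes([1, 0, 0], [1, 1, 0])
    assert concurrent([1, 0, 0], [0, 0, 1])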
It follows that detecting causal dependencies (or concurrency) on all the events of the distributed computation is not desirable in all applications [7, 15]. In other words, among all the events that may occur in a distributed computation, only a subset of them are relevant. In this paper, we are interested in the restriction of the causality relation to the subset of events defined as being the relevant events of the computation. Being a strict partial order, the causality relation is transitive. As a consequence, among all the relevant events that causally precede a given relevant event e, only a subset are its immediate predecessors: those are the events f such that there is no relevant event on any causal path from f to e. Unfortunately, given only the vector timestamp associated with an event it is not possible to determine which events of its causal past are its immediate predecessors. This comes from the fact that the vector timestamp associated with e determines, for each process, the last relevant event belonging to the causal past of e, but such an event is not necessarily an immediate predecessor of e. However, some applications [4, 6] require associating with each relevant event only the set of its immediate predecessors. Those applications are mainly related to the analysis of distributed computations. Some of those analyses require the construction of the lattice of consistent cuts produced by the computation [15, 16]. It is shown in [4] that the tracking of immediate predecessors allows an efficient on the fly construction of this lattice. More generally, these applications are interested in the very structure of the causal past. In this context, the determination of the immediate predecessors becomes a major issue [6]. Additionally, in some circumstances, this determination has to satisfy behavior constraints. If the communication pattern of the distributed computation cannot be modified, the determination has to be done without adding control messages. When the immediate predecessors are used to monitor the computation, it has to be done on the fly. We call Immediate Predecessor Tracking (IPT) the problem that consists in determining on the fly and without additional messages the immediate predecessors of relevant events. This problem actually consists in determining the transitive reduction (Hasse diagram) of the causality graph generated by the relevant events of the computation. Solving this problem requires tracking causality, hence using vector clocks. Previous works have addressed the efficient implementation of vector clocks to track causal dependence on relevant events. Their aim was to reduce the size of timestamps attached to messages. An efficient vector clock implementation suited to systems with fifo channels is proposed in [19]. Another efficient implementation that does not depend on channel ordering properties is described in [11]. The notion of causal barrier is introduced in [2, 17] to reduce the size of control information required to implement causal multicast. However, none of these papers considers the IPT problem. This problem has been addressed for the first time (to our knowledge) in [4, 6] where an IPT protocol is described, but without a correctness proof. Moreover, in this protocol, timestamps attached to messages are of size n. This raises the following question which, to our knowledge, has never been answered: Are there efficient vector clock implementation techniques that are suitable for the IPT problem? 
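Since IPT is exactly the on-the-fly computation of this transitive reduction, a brute-force offline version helps fix the definition. The sketch below is ours, for illustration only; the protocols of the paper compute the same sets on the fly, without any such global knowledge. It takes the set of relevant events and the causal order as a predicate and returns, for each event, its immediate predecessors.

    def immediate_predecessors(events, precedes):
        # precedes(e, f) is the (strict) causal order restricted to relevant events.
        ip = {f: set() for f in events}
        for f in events:
            preds = [e for e in events if precedes(e, f)]
            for e in preds:
                # e is immediate iff no relevant g lies strictly between e and f.
                if not any(precedes(e, g) and precedes(g, f) for g in preds):
                    ip[f].add(e)
        return ip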
This paper has three main contributions: (1) a positive answer to the previous open question, (2) the design of a family of efficient IPT protocols, and (3) a formal correctness proof of the associated protocols. From a methodological point of view the paper uses a top-down approach. It states abstract properties from which more concrete properties and protocols are derived. The family of IPT protocols is defined by a general condition that allows application messages to piggyback control information whose size can be smaller than the system size (i.e., smaller than the number of processes composing the system). In that sense, this family defines low cost IPT protocols when we consider the message size. In addition to efficiency, the proposed approach has an interesting design property. Namely, the family is incrementally built in three steps. The basic vector clock protocol is first enriched by adding to each process a boolean vector whose management allows the processes to track the immediate predecessor events. Then, a general condition is stated to reduce the size of the control information carried by messages. Finally, according to the way this condition is implemented, three IPT protocols are obtained. The paper is composed of seven sections. Sections 2 introduces the computation model, vector clocks and the notion of relevant events. Section 3 presents the first step of the construction that results in an IPT protocol in which each message carries a vector clock and a boolean array, both of size n (the number of processes). Section 4 improves this protocol by providing the general condition that allows a message to carry control information whose size can be smaller than n. Section 5 provides instantiations of this condition. Section 6 provides a simulation study comparing the behaviors of the proposed protocols. Finally, Section 7 concludes the paper. (Due to space limitations, proofs of lemmas and theorems are omitted. They can be found in [1].) 2. MODEL AND VECTOR CLOCK 2.1 Distributed Computation A distributed program is made up of sequential local programs which communicate and synchronize only by exchanging messages. A distributed computation describes the execution of a distributed program. The execution of a local program gives rise to a sequential process. Let {P1, P2, ... , Pn} be the finite set of sequential processes of the distributed computation. Each ordered pair of communicating processes (Pi, Pj ) is connected by a reliable channel cij through which Pi can send messages to Pj. We assume that each message is unique and a process does not send messages to itself1 . Message transmission delays are finite but unpredictable. Moreover, channels are not necessarily fifo. Process speeds are positive but arbitrary. In other words, the underlying computation model is asynchronous. The local program associated with Pi can include send, receive and internal statements. The execution of such a statement produces a corresponding send/receive/internal event. These events are called primitive events. Let ex i be the x-th event produced by process Pi. The sequence hi = e1 i e2 i ... ex i ... constitutes the history of Pi, denoted Hi. Let H = ∪n i=1Hi be the set of events produced by a distributed computation. This set is structured as a partial order by Lamport``s happened before relation [14] (denoted hb →) and defined as follows: ex i hb → ey j if and only if (i = j ∧ x + 1 = y) (local precedence) ∨ (∃m : ex i = send(m) ∧ ey j = receive(m)) (msg prec.) 
∨ (∃ ez k : ex i hb → ez k ∧ ez k hb → ey j ) (transitive closure). max(ex i, ey j) is a partial function defined only when ex i and ey j are ordered. It is defined as follows: max(ex i, ey j) = ex i if ey j hb → ex i, and max(ex i, ey j) = ey j if ex i hb → ey j. Clearly the restriction of hb → to Hi, for a given i, is a total order. Thus we will use the notation ex i < ey i iff x < y. Throughout the paper, we will use the following notation: if e ∈ Hi is not the first event produced by Pi, then pred(e) denotes the event immediately preceding e in the sequence Hi. If e is the first event produced by Pi, then pred(e) is denoted by ⊥ (meaning that there is no such event), and ∀e ∈ Hi : ⊥ < e. The partial order Ĥ = (H, hb →) constitutes a formal model of the distributed computation it is associated with. (Footnote 1: This assumption is only in order to get simple protocols.) Figure 1: Timestamped Relevant Events and Immediate Predecessors Graph (Hasse Diagram). 2.2 Relevant Events For a given observer of a distributed computation, only some events are relevant [7, 9, 15]. (Footnote 2: Those events are sometimes called observable events.) An interesting example of such an observation is the detection of predicates on consistent global states of a distributed computation [3, 6, 8, 9, 13, 15]. In that case, a relevant event corresponds to the modification of a local variable involved in the global predicate. Another example is the checkpointing problem where a relevant event is the definition of a local checkpoint [10, 12, 20]. The left part of Figure 1 depicts a distributed computation using the classical space-time diagram. In this figure, only relevant events are represented. The sequence of relevant events produced by process Pi is denoted by Ri, and R = ∪n i=1 Ri ⊆ H denotes the set of all relevant events. Let → be the relation on R defined in the following way: ∀ (e, f) ∈ R × R : (e → f) ⇔ (e hb → f). The poset (R, →) constitutes an abstraction of the distributed computation [7]. In the following we consider a distributed computation at such an abstraction level. Moreover, without loss of generality we consider that the set of relevant events is a subset of the internal events (if a communication event has to be observed, a relevant internal event can be generated just before a send and just after a receive communication event occurred). Each relevant event is identified by a pair (process id, sequence number) (see Figure 1). Definition 1. The relevant causal past of an event e ∈ H is the (partially ordered) subset of relevant events f such that f hb → e. It is denoted ↑ (e). We have ↑ (e) = {f ∈ R | f hb → e}. Note that, if e ∈ R then ↑ (e) = {f ∈ R | f → e}. In the computation described in Figure 1, we have, for the event e identified (2, 2): ↑ (e) = {(1, 1), (1, 2), (2, 1), (3, 1)}. The following properties are immediate consequences of the previous definitions. Let e ∈ H. CP1 If e is not a receive event then ↑ (e) = ∅ if pred(e) = ⊥; ↑ (pred(e)) ∪ {pred(e)} if pred(e) ∈ R; ↑ (pred(e)) if pred(e) ∉ R. CP2 If e is a receive event (of a message m) then ↑ (e) = ↑ (send(m)) if pred(e) = ⊥; ↑ (pred(e)) ∪ ↑ (send(m)) ∪ {pred(e)} if pred(e) ∈ R; ↑ (pred(e)) ∪ ↑ (send(m)) if pred(e) ∉ R. Definition 2. Let e ∈ Hi. 
For every j such that ↑ (e) ∩ Rj ≠ ∅, the last relevant event of Pj with respect to e is: lastr(e, j) = max{f | f ∈ ↑ (e) ∩ Rj}. When ↑ (e) ∩ Rj = ∅, lastr(e, j) is denoted by ⊥ (meaning that there is no such event). Let us consider the event e identified (2,2) in Figure 1. We have lastr(e, 1) = (1, 2), lastr(e, 2) = (2, 1), lastr(e, 3) = (3, 1). The following properties relate the events lastr(e, j) and lastr(f, j) for all the predecessors f of e in the relation hb →. These properties follow directly from the definitions. Let e ∈ Hi. LR0 ∀e ∈ Hi: lastr(e, i) = ⊥ if pred(e) = ⊥; pred(e) if pred(e) ∈ R; lastr(pred(e),i) if pred(e) ∉ R. LR1 If e is not a receive event: ∀j ≠ i : lastr(e, j) = lastr(pred(e),j). LR2 If e is a receive event of m: ∀j ≠ i : lastr(e, j) = max(lastr(pred(e),j), lastr(send(m),j)). 2.3 Vector Clock System Definition As a fundamental concept associated with the causality theory, vector clocks were introduced in 1988, simultaneously and independently by Fidge [5] and Mattern [16]. A vector clock system is a mechanism that associates timestamps with events in such a way that the comparison of their timestamps indicates whether the corresponding events are or are not causally related (and, if they are, which one is the first). More precisely, each process Pi has a vector of integers V Ci[1. . n] such that V Ci[j] is the number of relevant events produced by Pj that belong to the current relevant causal past of Pi. Note that V Ci[i] counts the number of relevant events produced so far by Pi. When a process Pi produces a (relevant) event e, it associates with e a vector timestamp whose value (denoted e.V C) is equal to the current value of V Ci. Vector Clock Implementation The following implementation of vector clocks [5, 16] is based on the observation that ∀i, ∀e ∈ Hi, ∀j : e.V Ci[j] = y ⇔ lastr(e, j) = ey j, where e.V Ci is the value of V Ci just after the occurrence of e (this relation results directly from the properties LR0, LR1, and LR2). Each process Pi manages its vector clock V Ci[1. . n] according to the following rules: VC0 V Ci[1. . n] is initialized to [0, ... , 0]. VC1 Each time it produces a relevant event e, Pi increments its vector clock entry V Ci[i] (V Ci[i] := V Ci[i] + 1) to indicate it has produced one more relevant event; then Pi associates with e the timestamp e.V C = V Ci. VC2 When a process Pi sends a message m, it attaches to m the current value of V Ci. Let m.V C denote this value. VC3 When Pi receives a message m, it updates its vector clock as follows: ∀k : V Ci[k] := max(V Ci[k], m.V C[k]). 3. IMMEDIATE PREDECESSORS In this section, the Immediate Predecessor Tracking (IPT) problem is stated (Section 3.1). Then, some technical properties of immediate predecessors are stated and proved (Section 3.2). These properties are used to design the basic IPT protocol and prove its correctness (Section 3.3). This IPT protocol, previously presented in [4] without proof, is built from a vector clock protocol by adding the management of a local boolean array at each process. 3.1 The IPT Problem As indicated in the introduction, some applications (e.g., analysis of distributed executions [6], detection of distributed properties [7]) require determining (on the fly and without additional messages) the transitive reduction of the relation → (i.e., we must not consider transitive causal dependency). Given two relevant events f and e, we say that f is an immediate predecessor of e if f → e and there is no relevant event g such that f → g → e. 
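A minimal Python sketch of rules VC0-VC3 above, with message transport abstracted away (send returns the clock value to piggyback and receive consumes it); the class and method names are ours.

    class VectorClockProcess:
        def __init__(self, pid, n):
            self.pid = pid
            self.vc = [0] * n                      # VC0: initialized to [0, ..., 0]

        def relevant_event(self):
            self.vc[self.pid] += 1                 # VC1: one more relevant event
            return list(self.vc)                   # timestamp e.VC

        def send(self):
            return list(self.vc)                   # VC2: piggyback the current clock

        def receive(self, m_vc):
            self.vc = [max(a, b) for a, b in zip(self.vc, m_vc)]   # VC3: componentwise max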
Definition 3. The Immediate Predecessor Tracking (IPT) problem consists in associating with each relevant event e the set of relevant events that are its immediate predecessors. Moreover, this has to be done on the fly and without additional control message (i.e., without modifying the communication pattern of the computation). As noted in the Introduction, the IPT problem is the computation of the Hasse diagram associated with the partially ordered set of the relevant events produced by a distributed computation. 3.2 Formal Properties of IPT In order to design a protocol solving the IPT problem, it is useful to consider the notion of immediate relevant predecessor of any event, whether relevant or not. First, we observe that, by definition, the immediate predecessor on Pj of an event e is necessarily the lastr(e, j) event. Second, for lastr(e, j) to be immediate predecessor of e, there must not be another lastr(e, k) event on a path between lastr(e, j) and e. These observations are formalized in the following definition: Definition 4. Let e ∈ Hi. The set of immediate relevant predecessors of e (denoted IP(e)), is the set of the relevant events lastr(e, j) (j = 1, ... , n) such that ∀k : lastr(e, j) ∈↑ (lastr(e, k)). It follows from this definition that IP(e) ⊆ {lastr(e, j)|j = 1, ... , n} ⊂↑ (e). When we consider Figure 1, The graph depicted in its right part describes the immediate predecessors of the relevant events of the computation defined in its left part, more precisely, a directed edge (e, f) means that the relevant event e is an immediate predecessor of the relevant event f (3 ). The following lemmas show how the set of immediate predecessors of an event is related to those of its predecessors in the relation hb →. They will be used to design and prove the protocols solving the IPT problem. To ease the reading of the paper, their proofs are presented in Appendix A. The intuitive meaning of the first lemma is the following: if e is not a receive event, all the causal paths arriving at e have pred(e) as next-to-last event (see CP1). So, if pred(e) is a relevant event, all the relevant events belonging to its relevant causal past are separated from e by pred(e), and pred(e) becomes the only immediate predecessor of e. In other words, the event pred(e) constitutes a reset w.r.t. the set of immediate predecessors of e. On the other hand, if pred(e) is not relevant, it does not separate its relevant causal past from e. Lemma 1. If e is not a receive event, IP(e) is equal to: ∅ if pred(e) = ⊥, {pred(e)} if pred(e) ∈ R, IP(pred(e)) if pred(e) ∈ R. The intuitive meaning of the next lemma is as follows: if e is a receive event receive(m), the causal paths arriving at e have either pred(e) or send(m) as next-to-last events. If pred(e) is relevant, as explained in the previous lemma, this event hides from e all its relevant causal past and becomes an immediate predecessor of e. Concerning the last relevant predecessors of send(m), only those that are not predecessors of pred(e) remain immediate predecessors of e. Lemma 2. Let e ∈ Hi be the receive event of a message m. If pred(e) ∈ Ri, then, ∀j, IP(e) ∩ Rj is equal to: {pred(e)} if j = i, ∅ if lastr(pred(e),j) ≥ lastr(send(m),j), IP(send(m)) ∩ Rj if lastr(pred(e),j) < lastr(send(m),j). 
The intuitive meaning of the next lemma is the following: if e is a receive event receive(m), and pred(e) is not relevant, the last relevant events in the relevant causal past of e are obtained by merging those of pred(e) and those of send(m) and by taking the latest on each process. So, the immediate predecessors of e are either those of pred(e) or those of send(m). On a process where the last relevant events of pred(e) and of send(m) are the same event f, none of the paths from f to e must contain another relevant event, and thus, f must be an immediate predecessor of both events pred(e) and send(m). Lemma 3. Let e ∈ Hi be the receive event of a message m. If pred(e) ∉ Ri, then, ∀j, IP(e) ∩ Rj is equal to: IP(pred(e)) ∩ Rj if lastr(pred(e),j) > lastr(send(m),j), IP(send(m)) ∩ Rj if lastr(pred(e),j) < lastr(send(m),j), IP(pred(e)) ∩ IP(send(m)) ∩ Rj if lastr(pred(e),j) = lastr(send(m),j). 3.3 A Basic IPT Protocol The basic protocol proposed here associates with each relevant event e an attribute encoding the set IP(e) of its immediate predecessors. (Footnote 3: Actually, this graph is the Hasse diagram of the partial order associated with the distributed computation.) From the previous lemmas, the set IP(e) of any event e depends on the sets IP of the events pred(e) and/or send(m) (when e = receive(m)). Hence the idea of introducing a data structure allowing the sets IP to be managed inductively on the poset (H, hb →). To take into account the information from pred(e), each process manages a boolean array IPi such that, ∀e ∈ Hi, the value of IPi when e occurs (denoted e.IPi) is the boolean array representation of the set IP(e). More precisely, ∀j : IPi[j] = 1 ⇔ lastr(e, j) ∈ IP(e). As recalled in Section 2.3, the knowledge of lastr(e,j) (for every e and every j) is based on the management of vectors V Ci. Thus, the set IP(e) is determined in the following way: IP(e) = {ey j | e.V Ci[j] = y ∧ e.IPi[j] = 1, j = 1, ... , n}. Each process Pi updates IPi according to Lemmas 1, 2, and 3: 1. It results from Lemma 1 that, if e is not a receive event, the current value of IPi is sufficient to determine e.IPi. It results from Lemmas 2 and 3 that, if e is a receive event (e = receive(m)), then determining e.IPi involves information related to the event send(m). More precisely, this information involves IP(send(m)) and the timestamp of send(m) (needed to compare the events lastr(send(m),j) and lastr(pred(e),j), for every j). So, both vectors send(m). V Cj and send(m). IPj (assuming send(m) is produced by Pj) are attached to message m. 2. Moreover, IPi must be updated upon the occurrence of each event. In fact, the value of IPi just after an event e is used to determine the value succ(e). IPi. In particular, as stated in the Lemmas, the determination of succ(e). IPi depends on whether e is relevant or not. Thus, the value of IPi just after the occurrence of event e must keep track of this event. The following protocol, previously presented in [4] without proof, ensures the correct management of arrays V Ci (as in Section 2.3) and IPi (according to the Lemmas of Section 3.2). The timestamp associated with a relevant event e is denoted e.TS. R0 Initialization: Both V Ci[1. . n] and IPi[1. . n] are initialized to [0, ... , 0]. R1 Each time it produces a relevant event e: - Pi associates with e the timestamp e.TS defined as follows: e.TS = {(k, V Ci[k]) | IPi[k] = 1}, - Pi increments its vector clock entry V Ci[i] (namely it executes V Ci[i] := V Ci[i] + 1), - Pi resets IPi: ∀ ℓ ≠ i : IPi[ℓ] := 0; IPi[i] := 1. 
R2 When Pi sends a message m to Pj, it attaches to m the current values of V Ci (denoted m.V C) and the boolean array IPi (denoted m.IP). R3 When it receives a message m from Pj , Pi executes the following updates: ∀k ∈ [1. . n] : case V Ci[k] < m.V C[k] thenV Ci[k] := m.V C[k]; IPi[k] := m.IP[k] V Ci[k] = m.V C[k] then IPi[k] := min(IPi[k], m.IP[k]) V Ci[k] > m.V C[k] then skip endcase The proof of the following theorem directly follows from Lemmas 1, 2 and 3. Theorem 1. The protocol described in Section 3.3 solves the IPT problem: for any relevant event e, the timestamp e.TS contains the identifiers of all its immediate predecessors and no other event identifier. 4. A GENERAL CONDITION This section addresses a previously open problem, namely, How to solve the IPT problem without requiring each application message to piggyback a whole vector clock and a whole boolean array? . First, a general condition that characterizes which entries of vectors V Ci and IPi can be omitted from the control information attached to a message sent in the computation, is defined (Section 4.1). It is then shown (Section 4.2) that this condition is both sufficient and necessary. However, this general condition cannot be locally evaluated by a process that is about to send a message. Thus, locally evaluable approximations of this general condition must be defined. To each approximation corresponds a protocol, implemented with additional local data structures. In that sense, the general condition defines a family of IPT protocols, that solve the previously open problem. This issue is addressed in Section 5. 4.1 To Transmit or Not to Transmit Control Information Let us consider the previous IPT protocol (Section 3.3). Rule R3 shows that a process Pj does not systematically update each entry V Cj[k] each time it receives a message m from a process Pi: there is no update of V Cj[k] when V Cj[k] ≥ m.V C[k]. In such a case, the value m.V C[k] is useless, and could be omitted from the control information transmitted with m by Pi to Pj. Similarly, some entries IPj[k] are not updated when a message m from Pi is received by Pj. This occurs when 0 < V Cj[k] = m.V C[k] ∧ m.IP[k] = 1, or when V Cj [k] > m.V C[k], or when m.V C[k] = 0 (in the latest case, as m.IP[k] = IPi[k] = 0 then no update of IPj[k] is necessary). Differently, some other entries are systematically reset to 0 (this occurs when 0 < V Cj [k] = m.V C[k] ∧ m.IP[k] = 0). These observations lead to the definition of the condition K(m, k) that characterizes which entries of vectors V Ci and IPi can be omitted from the control information attached to a message m sent by a process Pi to a process Pj: Definition 5. K(m, k) ≡ (send(m). V Ci[k] = 0) ∨ (send(m). V Ci[k] < pred(receive(m)). V Cj[k]) ∨ ; (send(m). V Ci[k] = pred(receive(m)). V Cj[k]) ∧(send(m). IPi[k] = 1) . 4.2 A Necessary and Sufficient Condition We show here that the condition K(m, k) is both necessary and sufficient to decide which triples of the form (k, send(m). V Ci[k], send(m). IPi[k]) can be omitted in an outgoing message m sent by Pi to Pj. A triple attached to m will also be denoted (k, m.V C[k], m.IP[k]). Due to space limitations, the proofs of Lemma 4 and Lemma 5 are given in [1]. (The proof of Theorem 2 follows directly from these lemmas.) 214 Lemma 4. (Sufficiency) If K(m, k) is true, then the triple (k, m.V C[k], m.IP[k]) is useless with respect to the correct management of IPj[k] and V Cj [k]. Lemma 5. 
(Necessity) If K(m, k) is false, then the triple (k, m.V C[k], m.IP[k]) is necessary to ensure the correct management of IPj[k] and V Cj[k]. Theorem 2. When a process Pi sends m to a process Pj, the condition K(m, k) is both necessary and sufficient not to transmit the triple (k, send(m). V Ci[k], send(m). IPi[k]). 5. A FAMILY OF IPT PROTOCOLS BASED ON EVALUABLE CONDITIONS It results from the previous theorem that, if Pi could evaluate K(m, k) when it sends m to Pj, this would allow us to improve the previous IPT protocol in the following way: in rule R2, the triple (k, V Ci[k], IPi[k]) is transmitted with m only if ¬K(m, k). Moreover, rule R3 is appropriately modified to consider only triples carried by m. However, as previously mentioned, Pi cannot locally evaluate K(m, k) when it is about to send m. More precisely, when Pi sends m to Pj, Pi knows the exact values of send(m). V Ci[k] and send(m). IPi[k] (they are the current values of V Ci[k] and IPi[k]). But, as far as the value of pred(receive(m)). V Cj[k] is concerned, two cases are possible. Case (i): If pred(receive(m)) hb → send(m), then Pi can know the value of pred(receive(m)). V Cj[k] and consequently can evaluate K(m, k). Case (ii): If pred(receive(m)) and send(m) are concurrent, Pi cannot know the value of pred(receive(m)). V Cj[k] and consequently cannot evaluate K(m, k). Moreover, when it sends m to Pj, whatever the case (i or ii) that actually occurs, Pi has no way to know which case does occur. Hence the idea of defining evaluable approximations of the general condition. Let K′(m, k) be an approximation of K(m, k) that can be evaluated by a process Pi when it sends a message m. To be correct, the condition K′ must ensure that, every time Pi should transmit a triple (k, V Ci[k], IPi[k]) according to Theorem 2 (i.e., each time ¬K(m, k)), then Pi transmits this triple when it uses condition K′. Hence, the definition of a correct evaluable approximation: Definition 6. A condition K′, locally evaluable by a process when it sends a message m to another process, is correct if ∀(m, k) : ¬K(m, k) ⇒ ¬K′(m, k) or, equivalently, ∀(m, k) : K′(m, k) ⇒ K(m, k). This definition means that a protocol evaluating K′ to decide which triples must be attached to messages does not miss triples whose transmission is required by Theorem 2. Let us consider the constant condition (denoted K1) that is always false, i.e., ∀(m, k) : K1(m, k) = false. This trivially correct approximation of K actually corresponds to the particular IPT protocol described in Section 3 (in which each message carries a whole vector clock and a whole boolean vector). The next section presents a better approximation of K (denoted K2). 5.1 A Boolean Matrix-Based Evaluable Condition Condition K2 is based on the observation that condition K is composed of sub-conditions. Some of them can be locally evaluated while the others cannot. (Figure 2: The Evaluable Condition K2.) More precisely, K ≡ a ∨ α ∨ (β ∧ b), where a ≡ send(m). V Ci[k] = 0 and b ≡ send(m). IPi[k] = 1 are locally evaluable, whereas α ≡ send(m). V Ci[k] < pred(receive(m)). V Cj[k] and β ≡ send(m). V Ci[k] = pred(receive(m)). V Cj[k] are not. But, from easy boolean calculus, a ∨ ((α ∨ β) ∧ b) ⇒ a ∨ α ∨ (β ∧ b) ≡ K. This leads to the condition a ∨ (γ ∧ b) (which implies K), where γ = α ∨ β ≡ send(m). V Ci[k] ≤ pred(receive(m)). V Cj[k], i.e., (send(m). V Ci[k] ≤ pred(receive(m)). V Cj[k] ∧ send(m). IPi[k] = 1) ∨ send(m). V Ci[k] = 0. 
So, Pi needs to approximate the predicate send(m). V Ci[k] ≤ pred(receive(m)). V Cj[k]. To be correct, this approximation has to be a locally evaluable predicate ci(j, k) such that, when Pi is about to send a message m to Pj, ci(j, k) ⇒ (send(m). V Ci[k] ≤ pred(receive(m)). V Cj[k]). Informally, that means that, when ci(j, k) holds, the local context of Pi allows to deduce that the receipt of m by Pj will not lead to V Cj[k] update (Pj knows as much as Pi about Pk). Hence, the concrete condition K2 is the following: K2 ≡ send(m). V Ci[k] = 0 ∨ (ci(j, k) ∧ send(m). IPi[k] = 1). Let us now examine the design of such a predicate (denoted ci). First, the case j = i can be ignored, since it is assumed (Section 2.1) that a process never sends a message to itself. Second, in the case j = k, the relation send(m). V Ci[j] ≤ pred(receive(m)). V Cj [j] is always true, because the receipt of m by Pj cannot update V Cj[j]. Thus, ∀j = i : ci(j, j) must be true. Now, let us consider the case where j = i and j = k (Figure 2). Suppose that there exists an event e = receive(m ) with e < send(m), m sent by Pj and piggybacking the triple (k, m . V C[k], m . IP[k]), and m . V C[k] ≥ V Ci[k] (hence m . V C[k] = receive(m ). V Ci[k]). As V Cj[k] cannot decrease this means that, as long as V Ci[k] does not increase, for every message m sent by Pi to Pj we have the following: send(m). V Ci[k] = receive(m ). V Ci[k] = send(m ). V Cj[k] ≤ receive(m). V Cj [k], i.e., ci(j, k) must remain true. In other words, once ci(j, k) is true, the only event of Pi that could reset it to false is either the receipt of a message that increases V Ci[k] or, if k = i, the occurrence of a relevant event (that increases V Ci[i]). Similarly, once ci(j, k) is false, the only event that can set it to true is the receipt of a message m from Pj, piggybacking the triple (k, m . V C[k], m . IP[k]) with m . V C[k] ≥ V Ci[k]. In order to implement the local predicates ci(j, k), each process Pi is equipped with a boolean matrix Mi (as in [11]) such that M[j, k] = 1 ⇔ ci(j, k). It follows from the previous discussion that this matrix is managed according to the following rules (note that its i-th line is not significant (case j = i), and that its diagonal is always equal to 1): M0 Initialization: ∀ (j, k) : Mi[j, k] is initialized to 1. 215 M1 Each time it produces a relevant event e: Pi resets4 the ith column of its matrix: ∀j = i : Mi[j, i] := 0. M2 When Pi sends a message: no update of Mi occurs. M3 When it receives a message m from Pj , Pi executes the following updates: ∀ k ∈ [1. . n] : case V Ci[k] < m.V C[k] then ∀ = i, j, k : Mi[ , k] := 0; Mi[j, k] := 1 V Ci[k] = m.V C[k] then Mi[j, k] := 1 V Ci[k] > m.V C[k] then skip endcase The following lemma results from rules M0-M3. The theorem that follows shows that condition K2(m, k) is correct. (Both are proved in [1].) Lemma 6. ∀i, ∀m sent by Pi to Pj, ∀k, we have: send(m). Mi[j, k] = 1 ⇒ send(m). V Ci[k] ≤ pred(receive(m)). V Cj [k]. Theorem 3. Let m be a message sent by Pi to Pj . Let K2(m, k) ≡ ((send(m). Mi[j, k] = 1) ∧ (send(m). IPi[k] = 1)∨(send(m). V Ci[k] = 0)). We have: K2(m, k) ⇒ K(m, k). 5.2 Resulting IPT Protocol The complete text of the IPT protocol based on the previous discussion follows. RM0 Initialization: - Both V Ci[1. . n] and IPi[1. . n] are set to [0, ... , 0], and ∀ (j, k) : Mi[j, k] is set to 1. 
RM1 Each time it produces a relevant event e: - Pi associates with e the timestamp e.TS defined as follows: e.TS = {(k, V Ci[k]) | IPi[k] = 1}, - Pi increments its vector clock entry V Ci[i] (namely, it executes V Ci[i] := V Ci[i] + 1), - Pi resets IPi: ∀ ℓ ≠ i : IPi[ℓ] := 0; IPi[i] := 1. - Pi resets the ith column of its boolean matrix: ∀j ≠ i : Mi[j, i] := 0. RM2 When Pi sends a message m to Pj, it attaches to m the set of triples (each made up of a process id, an integer and a boolean): {(k, V Ci[k], IPi[k]) | (Mi[j, k] = 0 ∨ IPi[k] = 0) ∧ (V Ci[k] > 0)}. RM3 When Pi receives a message m from Pj, it executes the following updates: ∀(k, m.V C[k], m.IP[k]) carried by m: case V Ci[k] < m.V C[k] then V Ci[k] := m.V C[k]; IPi[k] := m.IP[k]; ∀ ℓ ≠ i, j, k : Mi[ℓ, k] := 0; Mi[j, k] := 1 V Ci[k] = m.V C[k] then IPi[k] := min(IPi[k], m.IP[k]); Mi[j, k] := 1 V Ci[k] > m.V C[k] then skip endcase (Footnote 4: Actually, the value of this column remains constant after its first update. In fact, ∀j, Mi[j, i] can be set to 1 only upon the receipt of a message from Pj, carrying the value V Cj[i] (see R3). But, as Mj[i, i] = 1, Pj does not send V Cj[i] to Pi. So, it is possible to improve the protocol by executing this reset of the column Mi[∗, i] only when Pi produces its first relevant event.) 5.3 A Tradeoff The condition K2(m, k) shows that a triple need not be transmitted when (Mi[j, k] = 1 ∧ IPi[k] = 1) ∨ (V Ci[k] = 0). Let us first observe that the management of IPi[k] is governed by the application program. More precisely, the IPT protocol does not define which are the relevant events, it has only to guarantee a correct management of IPi[k]. In contrast, the matrix Mi does not belong to the problem specification, it is an auxiliary variable of the IPT protocol, which manages it so as to satisfy the following implication when Pi sends m to Pj: (Mi[j, k] = 1) ⇒ (pred(receive(m)). V Cj[k] ≥ send(m). V Ci[k]). The fact that the management of Mi is governed by the protocol and not by the application program leaves open the possibility to design a protocol where more entries of Mi are equal to 1. This can make the condition K2(m, k) more often satisfied (see footnote 5) and can consequently allow the protocol to transmit fewer triples. We show here that it is possible to transmit fewer triples at the price of transmitting a few additional boolean vectors. The previous IPT matrix-based protocol (Section 5.2) is modified in the following way. The rules RM2 and RM3 are replaced with the modified rules RM2' and RM3' (Mi[∗, k] denotes the kth column of Mi). RM2' When Pi sends a message m to Pj, it attaches to m the following set of 4-tuples (each made up of a process id, an integer, a boolean and a boolean vector): {(k, V Ci[k], IPi[k], Mi[∗, k]) | (Mi[j, k] = 0 ∨ IPi[k] = 0) ∧ V Ci[k] > 0}. RM3' When Pi receives a message m from Pj, it executes the following updates: ∀(k, m.V C[k], m.IP[k], m.M[1. . n, k]) carried by m: case V Ci[k] < m.V C[k] then V Ci[k] := m.V C[k]; IPi[k] := m.IP[k]; ∀ ℓ ≠ i : Mi[ℓ, k] := m.M[ℓ, k] V Ci[k] = m.V C[k] then IPi[k] := min(IPi[k], m.IP[k]); ∀ ℓ ≠ i : Mi[ℓ, k] := max(Mi[ℓ, k], m.M[ℓ, k]) V Ci[k] > m.V C[k] then skip endcase Similarly to the proofs described in [1], it is possible to prove that the previous protocol still satisfies the property proved in Lemma 6, namely, ∀i, ∀m sent by Pi to Pj, ∀k we have (send(m). Mi[j, k] = 1) ⇒ (send(m). V Ci[k] ≤ pred(receive(m)). V Cj[k]). 
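For concreteness, the matrix-based protocol of Section 5.2 (rules RM0-RM3) can be sketched in Python as follows. The class is ours, transport is abstracted (send returns the filtered set of triples, receive consumes it), and timestamps are returned as sets of (process, rank) pairs as in rule RM1.

    class IPT2Process:
        def __init__(self, pid, n):
            self.pid, self.n = pid, n
            self.vc = [0] * n                        # RM0: vector clock
            self.ip = [0] * n                        # RM0: immediate-predecessor flags
            self.m = [[1] * n for _ in range(n)]     # RM0: boolean matrix M

        def relevant_event(self):                    # RM1
            ts = {(k, self.vc[k]) for k in range(self.n) if self.ip[k] == 1}
            self.vc[self.pid] += 1
            self.ip = [0] * self.n
            self.ip[self.pid] = 1
            for j in range(self.n):
                if j != self.pid:
                    self.m[j][self.pid] = 0
            return ts

        def send(self, dest):                        # RM2: keep only the non-omittable triples
            return [(k, self.vc[k], self.ip[k]) for k in range(self.n)
                    if (self.m[dest][k] == 0 or self.ip[k] == 0) and self.vc[k] > 0]

        def receive(self, sender, triples):          # RM3
            for k, m_vc, m_ip in triples:
                if self.vc[k] < m_vc:
                    self.vc[k], self.ip[k] = m_vc, m_ip
                    for l in range(self.n):
                        if l not in (self.pid, sender, k):
                            self.m[l][k] = 0
                    self.m[sender][k] = 1
                elif self.vc[k] == m_vc:
                    self.ip[k] = min(self.ip[k], m_ip)
                    self.m[sender][k] = 1

Driving a set of such processes with a random message pattern and counting the triples actually attached, against n per message, reproduces the kind of measurement reported in Section 6.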
5 Let us consider the previously described protocol (Section 5.2) where the value of each matrix entry Mi[j, k] is always equal to 0. The reader can easily verify that this setting correctly implements the matrix. Moreover, K2(m, k) is then always false: it actually coincides with K1(k, m) (which corresponds to the case where whole vectors have to be transmitted with each message). 216 Intuitively, the fact that some columns of matrices M are attached to application messages allows a transitive transmission of information. More precisely, the relevant history of Pk known by Pj is transmitted to a process Pi via a causal sequence of messages from Pj to Pi. In contrast, the protocol described in Section 5.2 used only a direct transmission of this information. In fact, as explained Section 5.1, the predicate c (locally implemented by the matrix M) was based on the existence of a message m sent by Pj to Pi, piggybacking the triple (k, m . V C[k], m . IP[k]), and m . V C[k] ≥ V Ci[k], i.e., on the existence of a direct transmission of information (by the message m ). The resulting IPT protocol (defined by the rules RM0, RM1, RM2'' and RM3'') uses the same condition K2(m, k) as the previous one. It shows an interesting tradeoff between the number of triples (k, V Ci[k], IPi[k]) whose transmission is saved and the number of boolean vectors that have to be additionally piggybacked. It is interesting to notice that the size of this additional information is bounded while each triple includes a non-bounded integer (namely a vector clock value). 6. EXPERIMENTAL STUDY This section compares the behaviors of the previous protocols. This comparison is done with a simulation study. IPT1 denotes the protocol presented in Section 3.3 that uses the condition K1(m, k) (which is always equal to false). IPT2 denotes the protocol presented in Section 5.2 that uses the condition K2(m, k) where messages carry triples. Finally, IPT3 denotes the protocol presented in Section 5.3 that also uses the condition K2(m, k) but where messages carry additional boolean vectors. This section does not aim to provide an in-depth simulation study of the protocols, but rather presents a general view on the protocol behaviors. To this end, it compares IPT2 and IPT3 with regard to IPT1. More precisely, for IPT2 the aim was to evaluate the gain in terms of triples (k, V Ci[k], IPi[k]) not transmitted with respect to the systematic transmission of whole vectors as done in IPT1. For IPT3, the aim was to evaluate the tradeoff between the additional boolean vectors transmitted and the number of saved triples. The behavior of each protocol was analyzed on a set of programs. 6.1 Simulation Parameters The simulator provides different parameters enabling to tune both the communication and the processes features. These parameters allow to set the number of processes for the simulated computation, to vary the rate of communication (send/receive) events, and to alter the time duration between two consecutive relevant events. Moreover, to be independent of a particular topology of the underlying network, a fully connected network is assumed. Internal events have not been considered. Since the presence of the triples (k, V Ci[k], IPi[k]) piggybacked by a message strongly depends on the frequency at which relevant events are produced by a process, different time distributions between two consecutive relevant events have been implemented (e.g., normal, uniform, and Poisson distributions). 
6.1 Simulation Parameters
The simulator provides different parameters that make it possible to tune both the communication and the process features. These parameters allow setting the number of processes of the simulated computation, varying the rate of communication (send/receive) events, and altering the time duration between two consecutive relevant events. Moreover, to be independent of a particular topology of the underlying network, a fully connected network is assumed. Internal events have not been considered. Since the presence of the triples (k, VCi[k], IPi[k]) piggybacked by a message strongly depends on the frequency at which relevant events are produced by a process, different time distributions between two consecutive relevant events have been implemented (e.g., normal, uniform, and Poisson distributions). The senders of messages are chosen according to a random law. To exhibit particular configurations of a distributed computation, a given scenario can be provided to the simulator. Message transmission delays follow a standard normal distribution. Finally, the last parameter of the simulator is the number of send events that occur during a simulation.

6.2 Parameter Settings
To compare the behavior of the three IPT protocols, we performed a large number of simulations using different parameter settings. We set the number of processes participating in a distributed computation to 10. The number of communication events during the simulation has been set to 10,000. The parameter λ of the Poisson time distribution (λ is the average number of relevant events in a given time interval) has been set so that the relevant events are generated at the beginning of the simulation. With the uniform time distribution, a relevant event is generated (on average) every 10 communication events. The location parameter of the standard normal time distribution has been set so that the occurrence of relevant events is shifted around the third part of the simulation experiment. As noted previously, the simulator can be fed with a given scenario. This allows analyzing the worst case scenarios for IPT2 and IPT3. These scenarios correspond to the case where the relevant events are generated at the maximal frequency (i.e., each time a process sends or receives a message, it produces a relevant event). Finally, the three IPT protocols are analyzed with the same simulation parameters.

6.3 Simulation Results
The results are displayed in Figures 3.a-3.d. These figures plot the gain of the protocols in terms of the number of triples that are not transmitted (y axis) with respect to the number of communication events (x axis). From these figures, we observe that, whatever the time distribution followed by the relevant events, both IPT2 and IPT3 exhibit a better behavior than IPT1 (i.e., the total number of piggybacked triples is lower in IPT2 and IPT3 than in IPT1), even in the worst case (see Figure 3.d). Let us consider the worst scenario. In that case, the gain is obtained at the very beginning of the simulation and lasts as long as there exists a process Pj for which ∀k : VCj[k] = 0. In that case, the condition ∀k : K(m, k) is satisfied. As soon as ∃k : VCj[k] ≠ 0, both IPT2 and IPT3 behave as IPT1 (the shape of the curve becomes flat) since the condition K(m, k) is no longer satisfied. Figure 3.a shows that, during the first events of the simulation, the slopes of the IPT2 and IPT3 curves are steep. The same occurs in Figure 3.d (which depicts the worst case scenario). Then the slope of these curves decreases and remains constant until the end of the simulation. In fact, as soon as VCj[k] becomes greater than 0, the condition ¬K(m, k) reduces to (Mi[j, k] = 0 ∨ IPi[k] = 0). Figure 3.b displays an interesting feature. It considers λ = 100. As the relevant events are taken only during the very beginning of the simulation, this figure exhibits a very steep initial slope, as the other figures do. The figure shows that, as soon as no more relevant events are taken, on average 45% of the triples are not piggybacked by the messages. This shows the importance of matrix Mi. Furthermore, IPT3 benefits from transmitting additional boolean vectors to save triple transmissions. Figures 3.a-3.c show that the average gain of IPT3 with respect to IPT2 is close to 10%.
Finally, Figure 3.c underlines even more the importance of matrix Mi. When very few relevant events are taken, IPT2 and IPT3 turn out to be very efficient. Indeed, this figure shows that, very quickly, the gain in the number of triples that are saved is very high (actually, 92% of the triples are saved).

6.4 Lessons Learned from the Simulation
Of course, all simulation results are consistent with the theoretical results: IPT3 is always better than or equal to IPT2, and IPT2 is always better than IPT1. The simulation results teach us more:
• The first lesson we have learned concerns the matrix Mi. Its usefulness is significant but mainly depends on the time distribution followed by the relevant events. On the one hand, when observing Figure 3.b, where a large number of relevant events are taken in a very short time, IPT2 can save up to 45% of the triples. However, we could have expected a larger gain from IPT2, since the boolean vector IP tends to stabilize to [1, ..., 1] when no relevant events are taken. In fact, as discussed in Section 5.3, the management of matrix Mi within IPT2 does not allow a transitive transmission of information but only a direct transmission. This explains why some columns of Mi may remain equal to 0 while they could potentially be equal to 1. Differently, as IPT3 benefits from transmitting additional boolean vectors (providing a transitive transmission of information), it reaches a gain of 50%. On the other hand, when very few relevant events are taken over a large period of time (see Figure 3.c), the behavior of IPT2 and IPT3 turns out to be very efficient, since the transmission of up to 92% of the triples is saved. This comes from the fact that the boolean vector IPi very quickly tends to stabilize to [1, ..., 1] and that matrix Mi contains very few 0 entries, since very few relevant events have been taken. Thus, a direct transmission of the information is sufficient to quickly obtain matrices Mi whose rows are all equal to [1, ..., 1].
• The second lesson concerns IPT3, more precisely, the tradeoff between the additional piggybacking of boolean vectors and the number of triples whose transmission is saved. With n = 10, adding 10 booleans to a triple does not substantially increase its size. Figures 3.a-3.c exhibit the number of triples whose transmission is saved: the average gain (in number of triples) of IPT3 with respect to IPT2 is about 10%.

7. CONCLUSION
This paper has addressed an important causality-related distributed computing problem, namely, the Immediate Predecessors Tracking problem. It has presented a family of protocols that provide each relevant event with a timestamp that exactly identifies its immediate predecessors. The family is defined by a general condition that allows application messages to piggyback control information whose size can be smaller than n (the number of processes). In that sense, this family defines message size-efficient IPT protocols. According to the way the general condition is implemented, different IPT protocols can be obtained. Three of them have been described and analyzed with simulation experiments. Interestingly, it has also been shown that the efficiency of the protocols (measured in terms of the size of the control information that is not piggybacked by an application message) depends on the pattern defined by the communication events and the relevant events.
Last but not least, it is interesting to note that, if one is not interested in tracking the immediate predecessor events, the protocols presented in the paper can be simplified by suppressing the IPi boolean vectors (but keeping the boolean matrices Mi). The resulting protocols, which implement a vector clock system, are particularly efficient as far as the size of the timestamp carried by each message is concerned. Interestingly, this efficiency is not obtained at the price of additional assumptions (such as FIFO channels).

8. REFERENCES
[1] Anceaume E., Hélary J.-M. and Raynal M., Tracking Immediate Predecessors in Distributed Computations. Res. Report #1344, IRISA, Univ. Rennes (France), 2001.
[2] Baldoni R., Prakash R., Raynal M. and Singhal M., Efficient ∆-Causal Broadcasting. Journal of Computer Systems Science and Engineering, 13(5):263-270, 1998.
[3] Chandy K.M. and Lamport L., Distributed Snapshots: Determining Global States of Distributed Systems. ACM Transactions on Computer Systems, 3(1):63-75, 1985.
[4] Diehl C., Jard C. and Rampon J.-X., Reachability Analysis of Distributed Executions. Proc. TAPSOFT'93, Springer-Verlag LNCS 668, pp. 629-643, 1993.
[5] Fidge C.J., Timestamps in Message-Passing Systems that Preserve Partial Ordering. Proc. 11th Australian Computing Conference, pp. 56-66, 1988.
[6] Fromentin E., Jard C., Jourdan G.-V. and Raynal M., On-the-fly Analysis of Distributed Computations. IPL, 54:267-274, 1995.
[7] Fromentin E. and Raynal M., Shared Global States in Distributed Computations. JCSS, 55(3):522-528, 1997.
[8] Fromentin E., Raynal M., Garg V.K. and Tomlinson A., On-the-Fly Testing of Regular Patterns in Distributed Computations. Proc. ICPP'94, Vol. 2:73-76, 1994.
[9] Garg V.K., Principles of Distributed Systems. Kluwer Academic Press, 274 pages, 1996.
[10] Hélary J.-M., Mostéfaoui A., Netzer R.H.B. and Raynal M., Communication-Based Prevention of Useless Checkpoints in Distributed Computations. Distributed Computing, 13(1):29-43, 2000.
[11] Hélary J.-M., Melideo G. and Raynal M., Tracking Causality in Distributed Systems: a Suite of Efficient Protocols. Proc. SIROCCO'00, Carleton University Press, pp. 181-195, L'Aquila (Italy), June 2000.
[12] Hélary J.-M., Netzer R. and Raynal M., Consistency Issues in Distributed Checkpoints. IEEE TSE, 25(4):274-281, 1999.
[13] Hurfin M., Mizuno M., Raynal M. and Singhal M., Efficient Distributed Detection of Conjunctions of Local Predicates in Asynchronous Computations. IEEE TSE, 24(8):664-677, 1998.
[14] Lamport L., Time, Clocks and the Ordering of Events in a Distributed System. Comm. ACM, 21(7):558-565, 1978.
[15] Marzullo K. and Sabel L., Efficient Detection of a Class of Stable Properties. Distributed Computing, 8(2):81-91, 1994.
[16] Mattern F., Virtual Time and Global States of Distributed Systems. Proc. Int. Conf. Parallel and Distributed Algorithms, (Cosnard, Quinton, Raynal, Robert Eds), North-Holland, pp. 215-226, 1988.
[17] Prakash R., Raynal M. and Singhal M., An Adaptive Causal Ordering Algorithm Suited to Mobile Computing Environments. JPDC, 41:190-204, 1997.
[18] Raynal M. and Singhal M., Logical Time: Capturing Causality in Distributed Systems. IEEE Computer, 29(2):49-57, 1996.
[19] Singhal M. and Kshemkalyani A., An Efficient Implementation of Vector Clocks. IPL, 43:47-52, 1992.
[20] Wang Y.M., Consistent Global Checkpoints That Contain a Given Set of Local Checkpoints. IEEE TOC, 46(4):456-468, 1997.
[Figure 3: Experimental Results. Each panel plots the gain in number of triples (y axis) against the number of communication events (x axis) for IPT1, IPT2 and IPT3, with the occurrence of relevant events also marked. (a) The relevant events follow a uniform distribution (ratio = 1/10). (b) The relevant events follow a Poisson distribution (λ = 100). (c) The relevant events follow a normal distribution. (d) Worst case: each process Pi takes a relevant event and broadcasts to all processes.]
Tracking Immediate Predecessors in Distributed Computations ABSTRACT A distributed computation is usually modeled as a partially ordered set of relevant events (the relevant events are a subset of the primitive events produced by the computation). An important causality-related distributed computing problem, that we call the Immediate Predecessors Tracking (IPT) problem, consists in associating with each relevant event, on the fly and without using additional control messages, the set of relevant events that are its immediate predecessors in the partial order. So, IPT is the on-the-fly computation of the transitive reduction (i.e., Hasse diagram) of the causality relation defined by a distributed computation. This paper addresses the IPT problem: it presents a family of protocols that provides each relevant event with a timestamp that exactly identifies its immediate predecessors. The family is defined by a general condition that allows application messages to piggyback control information whose size can be smaller than n (the number of processes). In that sense, this family defines message size-efficient IPT protocols. According to the way the general condition is implemented, different IPT protocols can be obtained. Two of them are exhibited. 1. INTRODUCTION A distributed computation consists of a set of processes that cooperate to achieve a common goal. A main characteristic of these computations lies in the fact that the processes do not share a common global memory, and communicate only by exchanging messages over a communication network. Moreover, message transfer delays are finite but unpredictable. This computation model defines what is known as the asynchronous distributed system model. It is particularly important as it includes systems that span large geographic areas, and systems that are subject to unpredictable loads. Consequently, the concepts, tools and mechanisms developed for asynchronous distributed systems reveal to be both important and general. Causality is a key concept to understand and master the behavior of asynchronous distributed systems [18]. More precisely, given two events e and f of a distributed computation, a crucial problem that has to be solved in a lot of distributed applications is to know whether they are causally related, i.e., if the occurrence of one of them is a consequence of the occurrence of the other. The causal past of an event e is the set of events from which e is causally dependent. Events that are not causally dependent are said to be concurrent. Vector clocks [5, 16] have been introduced to allow processes to track causality (and concurrency) between the events they produce. The timestamp of an event produced by a process is the current value of the vector clock of the corresponding process. In that way, by associating vector timestamps with events it becomes possible to safely decide whether two events are causally related or not. Usually, according to the problem he focuses on, a designer is interested only in a subset of the events produced by a distributed execution (e.g., only the checkpoint events are meaningful when one is interested in determining consistent global checkpoints [12]). It follows that detecting causal dependencies (or concurrency) on all the events of the distributed computation is not desirable in all applications [7, 15]. In other words, among all the events that may occur in a distributed computation, only a subset of them are relevant. 
In this paper, we are interested in the restriction of the causality relation to the subset of events defined as being the relevant events of the computation. Being a strict partial order, the causality relation is transitive. As a consequence, among all the relevant events that causally precede a given relevant event e, only a subset are its immediate predecessors: those are the events f such that there is no relevant event on any causal path from f to e. Unfortunately, given only the vector timestamp associated with an event it is not possible to determine which events of its causal past are its immediate predecessors. This comes from the fact that the vector timestamp associated with e determines, for each process, the last relevant event belonging to the causal past of e, but such an event is not necessarily an immediate predecessor of e. However, some applications [4, 6] require associating with each relevant event only the set of its immediate predecessors. Those applications are mainly related to the analysis of distributed computations. Some of those analyses require the construction of the lattice of consistent cuts produced by the computation [15, 16]. It is shown in [4] that the tracking of immediate predecessors allows an efficient on-the-fly construction of this lattice. More generally, these applications are interested in the very structure of the causal past. In this context, the determination of the immediate predecessors becomes a major issue [6]. Additionally, in some circumstances, this determination has to satisfy behavior constraints. If the communication pattern of the distributed computation cannot be modified, the determination has to be done without adding control messages. When the immediate predecessors are used to monitor the computation, it has to be done on the fly. We call Immediate Predecessor Tracking (IPT) the problem that consists in determining on the fly and without additional messages the immediate predecessors of relevant events. This problem actually consists in determining the transitive reduction (Hasse diagram) of the causality graph generated by the relevant events of the computation. Solving this problem requires tracking causality, hence using vector clocks. Previous works have addressed the efficient implementation of vector clocks to track causal dependence on relevant events. Their aim was to reduce the size of timestamps attached to messages. An efficient vector clock implementation suited to systems with FIFO channels is proposed in [19]. Another efficient implementation that does not depend on the channel ordering property is described in [11]. The notion of causal barrier is introduced in [2, 17] to reduce the size of control information required to implement causal multicast. However, none of these papers considers the IPT problem. This problem has been addressed for the first time (to our knowledge) in [4, 6] where an IPT protocol is described, but without a correctness proof. Moreover, in this protocol, timestamps attached to messages are of size n. This raises the following question which, to our knowledge, has never been answered: "Are there efficient vector clock implementation techniques that are suitable for the IPT problem?" This paper has three main contributions: (1) a positive answer to the previous open question, (2) the design of a family of efficient IPT protocols, and (3) a formal correctness proof of the associated protocols. From a methodological point of view, the paper uses a top-down approach.
It states abstract properties from which more concrete properties and protocols are derived. The family of IPT protocols is defined by a general condition that allows application messages to piggyback control information whose size can be smaller than the system size (i.e., smaller than the number of processes composing the system). In that sense, this family defines low-cost IPT protocols when we consider the message size. In addition to efficiency, the proposed approach has an interesting design property. Namely, the family is incrementally built in three steps. The basic vector clock protocol is first enriched by adding to each process a boolean vector whose management allows the processes to track the immediate predecessor events. Then, a general condition is stated to reduce the size of the control information carried by messages. Finally, according to the way this condition is implemented, three IPT protocols are obtained. The paper is composed of seven sections. Section 2 introduces the computation model, vector clocks and the notion of relevant events. Section 3 presents the first step of the construction that results in an IPT protocol in which each message carries a vector clock and a boolean array, both of size n (the number of processes). Section 4 improves this protocol by providing the general condition that allows a message to carry control information whose size can be smaller than n. Section 5 provides instantiations of this condition. Section 6 provides a simulation study comparing the behaviors of the proposed protocols. Finally, Section 7 concludes the paper. (Due to space limitations, proofs of lemmas and theorems are omitted. They can be found in [1].) 2. MODEL AND VECTOR CLOCK 2.1 Distributed Computation 2.2 Relevant Events 2.3 Vector Clock System 3. IMMEDIATE PREDECESSORS 3.1 The IPT Problem 3.2 Formal Properties of IPT 3.3 A Basic IPT Protocol 4. A GENERAL CONDITION 4.1 To Transmit or Not to Transmit Control Information 4.2 A Necessary and Sufficient Condition 5. A FAMILY OF IPT PROTOCOLS BASED ON EVALUABLE CONDITIONS 5.1 A Boolean Matrix-Based Evaluable Condition 5.2 Resulting IPT Protocol 5.3 A Tradeoff 6. EXPERIMENTAL STUDY 6.1 Simulation Parameters 6.2 Parameter Settings 6.3 Simulation Results 6.4 Lessons Learned from the Simulation 7. CONCLUSION This paper has addressed an important causality-related distributed computing problem, namely, the Immediate Predecessors Tracking problem. It has presented a family of protocols that provide each relevant event with a timestamp that exactly identifies its immediate predecessors. The family is defined by a general condition that allows application messages to piggyback control information whose size can be smaller than n (the number of processes). In that sense, this family defines message size-efficient IPT protocols. According to the way the general condition is implemented, different IPT protocols can be obtained. Three of them have been described and analyzed with simulation experiments. Interestingly, it has also been shown that the efficiency of the protocols (measured in terms of the size of the control information that is not piggybacked by an application message) depends on the pattern defined by the communication events and the relevant events. Last but not least, it is interesting to note that if one is not interested in tracking the immediate predecessor events, the protocols presented in the paper can be simplified by suppressing the IPi boolean vectors (but keeping the boolean matrices Mi).
The resulting protocols, which implement a vector clock system, are particularly efficient as far as the size of the timestamp carried by each message is concerned. Interestingly, this efficiency is not obtained at the price of additional assumptions (such as FIFO channels).
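The construction summarized in the conclusion above (a vector clock enriched, at each process, with a boolean vector IPi) can be sketched as follows. This is a plausible reconstruction for illustration only: the exact update rules of the paper, and the matrix-based optimizations (the Mi matrices) that reduce the piggybacked information below size n, are not reproduced in this extraction.

# Sketch of a basic IPT-style protocol: a vector clock vc plus a boolean vector ip,
# both of size n, piggybacked on every application message (the size-n variant).
# Reconstruction for illustration; the paper's actual rules may differ in details.
class IPTProcess:
    def __init__(self, n, pid):
        self.n, self.pid = n, pid
        self.vc = [0] * n       # vc[k] = index of the last known relevant event of p_k
        self.ip = [False] * n   # ip[k] = True iff that event is still an
                                #         immediate-predecessor candidate here

    def relevant_event(self):
        # Immediate predecessors of the new event: the still-uncovered last events.
        preds = {(k, self.vc[k]) for k in range(self.n) if self.ip[k]}
        self.vc[self.pid] += 1              # the new event gets the next local index
        self.ip = [False] * self.n          # earlier candidates are now covered ...
        self.ip[self.pid] = True            # ... by the new event itself
        return (self.pid, self.vc[self.pid]), preds

    def on_send(self):
        return list(self.vc), list(self.ip)    # piggybacked control information

    def on_receive(self, m_vc, m_ip):
        for k in range(self.n):
            if m_vc[k] > self.vc[k]:           # sender knows a more recent event of p_k
                self.vc[k], self.ip[k] = m_vc[k], m_ip[k]
            elif m_vc[k] == self.vc[k]:        # same event: immediate only if still
                self.ip[k] = self.ip[k] and m_ip[k]   # uncovered on both sides

Under this reconstruction, the pairs returned by relevant_event (an event identifier together with its predecessor set) are exactly the edges of the Hasse diagram, built on the fly and without any additional control message.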
Tracking Immediate Predecessors in Distributed Computations ABSTRACT A distributed computation is usually modeled as a partially ordered set of relevant events (the relevant events are a subset of the primitive events produced by the computation). An important causality-related distributed computing problem, that we call the Immediate Predecessors Tracking (IPT) problem, consists in associating with each relevant event, on the fly and without using additional control messages, the set of relevant events that are its immediate predecessors in the partial order. So, IPT is the on-the-fly computation of the transitive reduction (i.e., Hasse diagram) of the causality relation defined by a distributed computation. This paper addresses the IPT problem: it presents a family of protocols that provides each relevant event with a timestamp that exactly identifies its immediate predecessors. The family is defined by a general condition that allows application messages to piggyback control information whose size can be smaller than n (the number of processes). In that sense, this family defines message size-efficient IPT protocols. According to the way the general condition is implemented, different IPT protocols can be obtained. Two of them are exhibited. 1. INTRODUCTION A distributed computation consists of a set of processes that cooperate to achieve a common goal. A main characteristic of these computations lies in the fact that the processes do not share a common global memory, and communicate only by exchanging messages over a communication network. Moreover, message transfer delays are finite but unpredictable. This computation model defines what is known as the asynchronous distributed system model. Consequently, the concepts, tools and mechanisms developed for asynchronous distributed systems reveal to be both important and general. Causality is a key concept to understand and master the behavior of asynchronous distributed systems [18]. The causal past of an event e is the set of events from which e is causally dependent. Events that are not causally dependent are said to be concurrent. Vector clocks [5, 16] have been introduced to allow processes to track causality (and concurrency) between the events they produce. The timestamp of an event produced by a process is the current value of the vector clock of the corresponding process. In that way, by associating vector timestamps with events it becomes possible to safely decide whether two events are causally related or not. It follows that detecting causal dependencies (or concurrency) on all the events of the distributed computation is not desirable in all applications [7, 15]. In other words, among all the events that may occur in a distributed computation, only a subset of them are relevant. In this paper, we are interested in the restriction of the causality relation to the subset of events defined as being the relevant events of the computation. Being a strict partial order, the causality relation is transitive. Those applications are mainly related to the analysis of distributed computations. Some of those analyses require the construction of the lattice of consistent cuts produced by the computation [15, 16]. It is shown in [4] that the tracking of immediate predecessors allows an efficient on the fly construction of this lattice. More generally, these applications are interested in the very structure of the causal past. In this context, the determination of the immediate predecessors becomes a major issue [6]. 
If the communication pattern of the distributed computation cannot be modified, the determination has to be done without adding control messages. When the immediate predecessors are used to monitor the computation, it has to be done on the fly. We call Immediate Predecessor Tracking (IPT) the problem that consists in determining on the fly and without additional messages the immediate predecessors of relevant events. This problem consists actually in determining the transitive reduction (Hasse diagram) of the causality graph generated by the relevant events of the computation. Solving this problem requires tracking causality, hence using vector clocks. Previous works have addressed the efficient implementation of vector clocks to track causal dependence on relevant events. Their aim was to reduce the size of timestamps attached to messages. An efficient vector clock implementation suited to systems with FIFO channels is proposed in [19]. Another efficient implementation that does not depend on channel ordering property is described in [11]. The notion of causal barrier is introduced in [2, 17] to reduce the size of control information required to implement causal multicast. However, none of these papers considers the IPT problem. This problem has been addressed for the first time (to our knowledge) in [4, 6] where an IPT protocol is described, but without correctness proof. Moreover, in this protocol, timestamps attached to messages are of size n. This raises the following question which, to our knowledge, has never been answered: "Are there efficient vector clock implementation techniques that are suitable for the IPT problem?" . From a methodological point of view the paper uses a top-down approach. It states abstract properties from which more concrete properties and protocols are derived. The family of IPT protocols is defined by a general condition that allows application messages to piggyback control information whose size can be smaller than the system size (i.e., smaller than the number of processes composing the system). In that sense, this family defines low cost IPT protocols when we consider the message size. In addition to efficiency, the proposed approach has an interesting design property. Namely, the family is incrementally built in three steps. The basic vector clock protocol is first enriched by adding to each process a boolean vector whose management allows the processes to track the immediate predecessor events. Then, a general condition is stated to reduce the size of the control information carried by messages. Finally, according to the way this condition is implemented, three IPT protocols are obtained. The paper is composed of seven sections. Sections 2 introduces the computation model, vector clocks and the notion of relevant events. Section 3 presents the first step of the construction that results in an IPT protocol in which each message carries a vector clock and a boolean array, both of size n (the number of processes). Section 4 improves this protocol by providing the general condition that allows a message to carry control information whose size can be smaller than n. Section 5 provides instantiations of this condition. Section 6 provides a simulation study comparing the behaviors of the proposed protocols. Finally, Section 7 concludes the paper. 7. CONCLUSION This paper has addressed an important causality-related distributed computing problem, namely, the Immediate Predecessors Tracking problem. 
It has presented a family of protocols that provide each relevant event with a timestamp that exactly identifies its immediate predecessors. The family is defined by a general condition that allows application messages to piggyback control information whose size can be smaller than n (the number of processes). In that sense, this family defines message size-efficient IPT protocols. According to the way the general condition is implemented, different IPT protocols can be obtained. Three of them have been described and analyzed with simulation experiments. Interestingly, it has also been shown that the efficiency of the protocols (measured in terms of the size of the control information that is not piggybacked by an application message) depends on the pattern defined by the communication events and the relevant events. Last but not least, it is interesting to note that if one is not interested in tracking the immediate predecessor events, the protocols presented in the paper can be simplified by suppressing the IPi boolean vectors (but keeping the boolean matrices Mi). The resulting protocols, which implement a vector clock system, are particularly efficient as far as the size of the timestamp carried by each message is concerned.
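As a point of reference for the on-the-fly protocols summarized above, the Hasse diagram they compute can also be obtained offline once a predecessor relation over the relevant events is known. The sketch below is purely illustrative (the event and edge representation is an assumption, not the paper's notation): it keeps, for each relevant event, only those causal predecessors that are not covered by another causal path.

# Offline transitive reduction (Hasse diagram) of the causality relation on relevant
# events. pred[e] is any set of causal predecessors of e whose transitive closure is
# the full causality relation (e.g., the predecessors recorded during the run).
def immediate_predecessors(pred):
    causal_past = {}                      # memoized transitive closure
    def past(e):
        if e not in causal_past:
            s = set()
            for f in pred[e]:
                s.add(f)
                s |= past(f)
            causal_past[e] = s
        return causal_past[e]

    hasse = {}
    for e in pred:
        # f is immediate iff no other predecessor g of e has f in its own causal past
        hasse[e] = {f for f in past(e)
                    if not any(f in past(g) for g in past(e) - {f})}
    return hasse

# Example: a -> b -> c and a -> c; only b is an immediate predecessor of c.
print(immediate_predecessors({"a": set(), "b": {"a"}, "c": {"a", "b"}}))
# {'a': set(), 'b': {'a'}, 'c': {'b'}}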
J-47
On the Computational Power of Iterative Auctions
We embark on a systematic analysis of the power and limitations of iterative combinatorial auctions. Most existing iterative combinatorial auctions are based on repeatedly suggesting prices for bundles of items, and querying the bidders for their demand under these prices. We prove a large number of results showing the boundaries of what can be achieved by auctions of this kind. We first focus on auctions that use a polynomial number of demand queries, and then we analyze the power of different kinds of ascending-price auctions.
[ "combinatori auction", "price", "bidder", "demand queri", "polynomi demand", "ascend-price auction", "bound", "approxim factor", "optim alloc", "prefer elicit", "ascend auction", "commun complex" ]
[ "P", "P", "P", "P", "R", "M", "U", "U", "U", "U", "M", "U" ]
On the Computational Power of Iterative Auctions∗ [Extended Abstract] Liad Blumrosen School of Engineering and Computer Science The Hebrew University of Jerusalem Jerusalem, Israel liad@cs.huji.ac.il Noam Nisan School of Engineering and Computer Science The Hebrew University of Jerusalem Jerusalem, Israel noam@cs.huji.ac.il ABSTRACT We embark on a systematic analysis of the power and limitations of iterative combinatorial auctions. Most existing iterative combinatorial auctions are based on repeatedly suggesting prices for bundles of items, and querying the bidders for their demand under these prices. We prove a large number of results showing the boundaries of what can be achieved by auctions of this kind. We first focus on auctions that use a polynomial number of demand queries, and then we analyze the power of different kinds of ascending-price auctions. Categories and Subject Descriptors F.2 [Theory of Computation]: Analysis of Algorithms and Problem Complexity; J.4 [Computer Applications]: Social and Behavioral Sciences-Economics General Terms Algorithms, Economics, Theory 1. INTRODUCTION Combinatorial auctions have recently received a lot of attention. In a combinatorial auction, a set M of m nonidentical items are sold in a single auction to n competing bidders. The bidders have preferences regarding the bundles of items that they may receive. The preferences of bidder i are specified by a valuation function vi : 2M → R+ , where vi(S) denotes the value that bidder i attaches to winning the bundle of items S. We assume free disposal, i.e., that the vi``s are monotone non-decreasing. The usual goal of the auctioneer is to optimize the social welfare P i vi(Si), where the allocation S1...Sn must be a partition of the items. Applications include many complex resource allocation problems and, in fact, combinatorial auctions may be viewed as the common abstraction of many complex resource allocation problems. Combinatorial auctions face both economic and computational difficulties and are a central problem in the recently active border of economic theory and computer science. A forthcoming book [11] addresses many of the issues involved in the design and implementation of combinatorial auctions. The design of a combinatorial auction involves many considerations. In this paper we focus on just one central issue: the communication between bidders and the allocation mechanism - preference elicitation. Transferring all information about bidders'' preferences requires an infeasible (exponential in m) amount of communication. Thus, direct revelation auctions in which bidders simply declare their preferences to the mechanism are only practical for very small auction sizes or for very limited families of bidder preferences. We have therefore seen a multitude of suggested iterative auctions in which the auction protocol repeatedly interacts with the different bidders, aiming to adaptively elicit enough information about the bidders'' preferences as to be able to find a good (optimal or close to optimal) allocation. Most of the suggested iterative auctions proceed by maintaining temporary prices for the bundles of items and repeatedly querying the bidders as to their preferences between the bundles under the current set of prices, and then updating the set of bundle prices according to the replies received (e.g., [22, 12, 17, 37, 3]). 
Effectively, such an iterative auction accesses the bidders'' preferences by repeatedly making the following type of demand query to bidders: Query to bidder i: a vector of bundle prices p = {p(S)}S⊆M ; Answer: a bundle of items S ⊆ M that maximizes vi(S) − p(S). . These types of queries are very natural in an economic setting as they capture the revealed preferences of the bidders. Some auctions, called item-price or linear-price auctions, specify a price pi for each item, and the price of any given bundle S is always linear, p(S) = P i∈S pi. Other auctions, called bundle-price auctions, allow specifying arbitrary (non-linear) prices p(S) for bundles. Another important differentiation between models of iterative auctions is 29 based on whether they use anonymous or non-anonymous prices: In some auctions the prices that are presented to the bidders are always the same (anonymous prices). In other auctions (non-anonymous), different bidders may face different (discriminatory) vectors of prices. In ascending-price auctions, forcing prices to be anonymous may be a significant restriction. In this paper, we embark on a systematic analysis of the computational power of iterative auctions that are based on demand queries. We do not aim to present auctions for practical use but rather to understand the limitations and possibilities of these kinds of auctions. In the first part of this paper, our main question is what can be done using a polynomial number of these types of queries? That is, polynomial in the main parameters of the problem: n, m and the number of bits t needed for representing a single value vi(S). Note that from an algorithmic point of view we are talking about sub-linear time algorithms: the input size here is really n(2m − 1) numbers - the descriptions of the valuation functions of all bidders. There are two aspects to computational efficiency in these settings: the first is the communication with the bidders, i.e., the number of queries made, and the second is the usual computational tractability. Our lower bounds will depend only on the number of queriesand hold independently of any computational assumptions like P = NP. Our upper bounds will always be computationally efficient both in terms of the number of queries and in terms of regular computation. As mentioned, this paper concentrates on the single aspect of preference elicitation and on its computational consequences and does not address issues of incentives. This strengthens our lower bounds, but means that the upper bounds require evaluation from this perspective also before being used in any real combinatorial auction.1 The second part of this paper studies the power of ascending -price auctions. Ascending auctions are iterative auctions where the published prices cannot decrease in time. In this work, we try to systematically analyze what do the differences between various models of ascending auctions mean. We try to answer the following questions: (i) Which models of ascending auctions can find the optimal allocation, and for which classes of valuations? (ii) In cases where the optimal allocation cannot be determined by ascending auctions, how well can such auctions approximate the social welfare? (iii) How do the different models for ascending auctions compare? Are some models computationally stronger than others? Ascending auctions have been extensively studied in the literature (see the recent survey by Parkes [35]). 
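To pin down the item-price demand-query interface defined at the beginning of this passage, here is a minimal sketch (illustrative code, not taken from the paper) of a bidder answering such a query by brute force: given item prices p1...pm it returns a bundle S maximizing v(S) minus the sum of the prices of the items in S, breaking ties by a fixed enumeration order as the model requires. The enumeration over all 2^m bundles is of course exponential; it is only meant to make the query semantics explicit.

from itertools import combinations

# Answer an item-price demand query by brute force (illustrative only; the paper is
# about what protocols can do with such answers, not how bidders compute them).
# v maps frozenset bundles to values; prices is a list indexed by item.
def demand_query(v, items, prices):
    best, best_utility = frozenset(), v.get(frozenset(), 0)
    for r in range(1, len(items) + 1):
        for bundle in combinations(sorted(items), r):    # fixed enumeration order
            s = frozenset(bundle)
            utility = v.get(s, 0) - sum(prices[j] for j in s)
            if utility > best_utility:                   # strict ">" keeps the first
                best, best_utility = s, utility          # maximizer: a fixed tie-break
    return best

# Example with two items: v({0})=3, v({1})=4, v({0,1})=5 and prices (2, 2).
v = {frozenset(): 0, frozenset({0}): 3, frozenset({1}): 4, frozenset({0, 1}): 5}
print(demand_query(v, [0, 1], [2, 2]))    # frozenset({1}): utility 2 beats 1 and 1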
Most of this work presented ``upper bounds'', i.e., proposed mechanisms with ascending prices and analyzed their properties. A result which is closer in spirit to ours, is by Gul and Stacchetti [17], who showed that no item-price ascending auction can always determine the VCG prices, even for substitutes valuations.2 Our framework is more general than the traditional line of research that concentrates on the final allocation and 1 We do observe however that some weak incentive property comes for free in demand-query auctions since myopic players will answer all demand queries truthfully. We also note that in some cases (but not always!) the incentives issues can be handled orthogonally to the preference elicitation issues, e.g., by using Vickrey-Clarke-Groves (VCG) prices (e.g., [4, 34]). 2 We further discuss this result in Section 5.3. Iterative auctions Demand auctions Item-price auctions Anonymous price auctions Ascending auctions 1 2 3 4 5 6 97 8 10 Figure 1: The diagram classifies the following auctions according to their properties: (1) The adaptation [12] for Kelso & Crawford``s [22] auction. (2) The Proxy Auction [3] by Ausubel & Milgrom. (3) iBundle(3) by Parkes & Ungar [34]. (4) iBundle(2) by Parkes & Ungar [37]. (5) Our descending adaptation for the 2-approximation for submodular valuations by [25] (see Subsection 5.4). (6) Ausubel``s [4] auction for substitutes valuations. (7) The adaptation by Nisan & Segal [32] of the O( √ m) approximation by [26]. (8) The duplicate-item auction by [5]. (9) Auction for Read-Once formulae by [43]. (10) The AkBA Auction by Wurman & Wellman [42]. payments and in particular, on reaching ``Walrasian equilibria'' or ``Competitive equilibria''. A Walrasian equilibrium3 is known to exist in the case of Substitutes valuations, and is known to be impossible for any wider class of valuations [16]. This does not rule out other allocations by ascending auctions: in this paper we view the auctions as a computational process where the outcome - both the allocation and the payments - can be determined according to all the data elicited throughout the auction; This general framework strengthens our negative results.4 We find the study of ascending auctions appealing for various reasons. First, ascending auctions are widely used in many real-life settings from the FCC spectrum auctions [15] to almost any e-commerce website (e.g., [2, 1]). Actually, this is maybe the most straightforward way to sell items: ask the bidders what would they like to buy under certain prices, and increase the prices of over-demanded goods. Ascending auctions are also considered more intuitive for many bidders, and are believed to increase the trust of the bidders in the auctioneer, as they see the result gradually emerging from the bidders'' responses. Ascending auctions also have other desired economic properties, e.g., they incur smaller information revelation (consider, for example, English auctions vs. second-price sealed bid auctions). 1.1 Extant Work Many iterative combinatorial auction mechanisms rely on demand queries (see the survey in [35]). Figure 1 summa3 A Walrasian equilibrium is vector of item prices for which all the items are sold when each bidder receives a bundle in his demand set. 4 In few recent auction designs (e.g., [4, 28]) the payments are not necessarily the final prices of the auctions. 
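The Walrasian equilibrium notion recalled above (a vector of item prices under which all items are sold and every bidder receives a bundle from his demand set) is easy to state as a check. The sketch below is illustrative only; it reuses the same brute-force representation of valuations as the demand-query sketch earlier, and both helper names are assumptions rather than anything defined in the paper.

from itertools import chain, combinations

# Check whether item prices and an allocation form a Walrasian equilibrium:
# every item is allocated, and each bidder's bundle maximizes v_i(S) - p(S).
def max_utility(v, items, prices):
    bundles = chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))
    return max(v.get(frozenset(b), 0) - sum(prices[j] for j in b) for b in bundles)

def is_walrasian(valuations, allocation, items, prices):
    all_sold = set().union(*allocation) == set(items)            # every item is sold
    in_demand_sets = all(
        v.get(frozenset(s), 0) - sum(prices[j] for j in s) == max_utility(v, items, prices)
        for v, s in zip(valuations, allocation))                  # each bundle is demanded
    return all_sold and in_demand_sets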
30 Valuation family Upper bound Reference Lower bound Reference General min(n, O( √ m)) [26], Section 4.2 min(n, m1/2− ) [32] Substitutes 1 [32] Submodular 2 [25], 1+ 1 2m , 1-1 e (*) [32],[23] Subadditive O(logm) [13] 2 [13] k-duplicates O(m1/k+1 ) [14] O(m1/k+1 ) [14] Procurement ln m [32] (log m)/2 [29, 32] Figure 2: The best approximation factors currently achievable by computationally-efficient combinatorial auctions, for several classes of valuations. All lower bounds in the table apply to all iterative auctions (except the one marked by *); all upper bounds in the table are achieved with item-price demand queries. rizes the basic classes of auctions implied by combinations of the above properties and classifies some of the auctions proposed in the literature according to this classification. For our purposes, two families of these auctions serve as the main motivating starting points: the first is the ascending item-price auctions of [22, 17] that with computational efficiency find an optimal allocation among (gross) substitutes valuations, and the second is the ascending bundleprice auctions of [37, 3] that find an optimal allocation among general valuations - but not necessarily with computational efficiency. The main lower bound in this area, due to [32], states that indeed, due to inherent communication requirements, it is not possible for any iterative auction to find the optimal allocation among general valuations with sub-exponentially many queries. A similar exponential lower bound was shown in [32] also for even approximating the optimal allocation to within a factor of m1/2− . Several lower bounds and upper bounds for approximation are known for some natural classes of valuations - these are summarized in Figure 2. In [32], the universal generality of demand queries is also shown: any non-deterministic communication protocol for finding an allocation that optimizes the social welfare can be converted into one that only uses demand queries (with bundle prices). In [41] this was generalized also to nondeterministic protocols for finding allocations that satisfy other natural types of economic requirements (e.g., approximate social efficiency, envy-freeness). However, in [33] it was demonstrated that this completeness of demand queries holds only in the nondeterministic setting, while in the usual deterministic setting, demand queries (even with bundle prices) may be exponentially weaker than general communication. Bundle-price auctions are a generalization of (the more natural and intuitive) item-price auctions. It is known that indeed item-price auctions may be exponentially weaker: a nice example is the case of valuations that are a XOR of k bundles5 , where k is small (say, polynomial). Lahaie and Parkes [24] show an economically-efficient bundle-price auction that uses a polynomial number of queries whenever k is polynomial. In contrast, [7] show that there exist valuations that are XORs of k = √ m bundles such that any item-price auction that finds an optimal allocation between them requires exponentially many queries. These results are part of a recent line of research ([7, 43, 24, 40]) that study the preference elicitation problem in combinatorial auctions and its relation to the full elicitation problem (i.e., learn5 These are valuations where bidders have values for k specific packages, and the value of each bundle is the maximal value of one of these packages that it contains. ing the exact valuations of the bidders). 
These papers adapt methods from machine-learning theory to the combinatorialauction setting. The preference elicitation problem and the full elicitation problem relate to a well studied problem in microeconomics known as the integrability problem (see, e.g., [27]). This problem studies if and when one can derive the utility function of a consumer from her demand function. Paper organization: Due to the relatively large number of results we present, we start with a survey of our new results in Section 2. After describing our formal model in Section 3, we present our results concerning the power of demand queries in Section 4. Then, we describe the power of item-price ascending auctions (Section 5) and bundle-price ascending auctions (Section 6). Readers who are mainly interested in the self-contained discussion of ascending auctions can skip Section 4. Missing proofs from Section 4 can be found in part I of the full paper ([8]). Missing proofs from Sections 5 and 6 can be found in part II of the full paper ([9]). 2. A SURVEY OF OUR RESULTS Our systematic analysis is composed of the combination of a rather large number of results characterizing the power and limitations of various classes of auctions. In this section, we will present an exposition describing our new results. We first discuss the power of demand-query iterative auctions, and then we turn our attention to ascending auctions. Figure 3 summarizes some of our main results. 2.1 Demand Queries Comparison of query types We first ask what other natural types of queries could we imagine iterative auctions using? Here is a list of such queries that are either natural, have been used in the literature, or that we found useful. 1. Value query: The auctioneer presents a bundle S, the bidder reports his value v(S) for this bundle. 2. Marginal-value query: The auctioneer presents a bundle A and an item j, the bidder reports how much he is willing to pay for j, given that he already owns A, i.e., v(j|A) = v(A ∪ {j}) − v(A). 3. Demand query (with item prices): The auctioneer presents a vector of item prices p1...pm; the bidder reports his demand under these prices, i.e., some set S that maximizes v(S) − P i∈S pi.6 6 A tie breaking rule should be specified. All of our results 31 Communication Constraint Can find an optimal allocation? Upper bound for welfare approx. Lower bound for welfare approx. Item-Price Demand Queries Yes 1 1 Poly. Communication No [32] min(n, O(m1/2 )) [26] min(n, m1/2− ) [32] Poly. Item-Price Demand Queries No [32] min(n, O(m1/2 )) min(n, m1/2− ) [32] Poly. Value Queries No [32] O( m√ log m ) [19] O( m log m ) Anonymous Item-Price AA No - min(O(n), O(m1/2− )) Non-anonymous Item-Price AA No -Anonymous Bundle-Price AA No - min(O(n), O(m1/2− )) Non-anonymous Bundle-Price AA Yes [37] 1 1 Poly Number of Item-Price AA No min(n, O(m1/2 ))Figure 3: This paper studies the economic efficiency of auctions that follow certain communication constraints. For each class of auctions, the table shows whether the optimal allocation can be achieved, or else, how well can it be approximated (both upper bounds and lower bounds). New results are highlighted. Abbreviations: Poly. (Polynomial number/size), AA (ascending auctions). - means that nothing is currently known except trivial solutions. 4. 
Indirect-utility query: The auctioneer presents a set of item prices p1...pm, and the bidder responds with his indirect-utility under these prices, that is, the highest utility he can achieve from a bundle under these prices: maxS⊆M (v(S) − P i∈S pi).7 5. Relative-demand query: the auctioneer presents a set of non-zero prices p1...pm, and the bidder reports the bundle that maximizes his value per unit of money, i.e., some set that maximizes v(S)P i∈S pi .8 Theorem: Each of these queries can be efficiently (i.e., in time polynomial in n, m, and the number of bits of precision t needed to represent a single value vi(S)) simulated by a sequence of demand queries with item prices. In particular this shows that demand queries can elicit all information about a valuation by simulating all 2m −1 value queries. We also observe that value queries and marginalvalue queries can simulate each other in polynomial time and that demand queries and indirect-utility queries can also simulate each other in polynomial time. We prove that exponentially many value queries may be needed in order to simulate a single demand query. It is interesting to note that for the restricted class of substitutes valuations, demand queries may be simulated by polynomial number of value queries [6]. Welfare approximation The next question that we ask is how well can a computationally-efficient auction that uses only demand queries approximate the optimal allocation? Two separate obstacles are known: In [32], a lower bound of min(n, m1/2− ), for any fixed > 0, was shown for the approximation factor apply for any fixed tie breaking rule. 7 This is exactly the utility achieved by the bundle which would be returned in a demand query with the same prices. This notion relates to the Indirect-utility function studied in the Microeconomic literature (see, e.g., [27]). 8 Note that when all the prices are 1, the bidder actually reports the bundle with the highest per-item price. We found this type of query useful, for example, in the design of the approximation algorithm described in Figure 5 in Section 4.2. obtained using any polynomial amount of communication. A computational bound with the same value applies even for the case of single-minded bidders, but under the assumption of NP = ZPP [39]. As noted in [32], the computationallyefficient greedy algorithm of [26] can be adapted to become a polynomial-time iterative auction that achieves a nearly matching approximation factor of min(n, O( √ m)). This iterative auction may be implemented with bundle-price demand queries but, as far as we see, not as one with item prices. Since in a single bundle-price demand query an exponential number of prices can be presented, this algorithm can have an exponential communication cost. In Section 4.2, we describe a different item-price auction that achieves the same approximation factor with a polynomial number of queries (and thus with a polynomial communication). Theorem: There exists a computationally-efficient iterative auction with item-price demand queries that finds an allocation that approximates the optimal welfare between arbitrary valuations to within a factor of min(n, O( √ m)). One may then attempt obtaining such an approximation factor using iterative auctions that use only the weaker value queries. 
However, we show that this is impossible: Theorem: Any iterative auction that uses a polynomial (in n and m) number of value queries can not achieve an approximation factor that is better than O( m log m ).9 Note however that auctions with only value queries are not completely trivial in power: the bundling auctions of Holzman et al. [19] can easily be implemented by a polynomial number of value queries and can achieve an approximation factor of O( m√ log m ) by using O(log m) equi-sized bundles. We do not know how to close the (tiny) gap between this upper bound and our lower bound. Representing bundle-prices We then deal with a critical issue with bundle-price auctions that was side-stepped by our model, as well as by all previous works that used bundle-price auctions: how are 9 This was also proven independently by Shahar Dobzinski and Michael Schapira. 32 the bundle prices represented? For item-price auctions this is not an issue since a query needs only to specify a small number, m, of prices. In bundle-price auctions that situation is more difficult since there are exponentially many bundles that require pricing. Our basic model (like all previous work that used bundle prices, e.g., [37, 34, 3]), ignores this issue, and only requires that the prices be determined, somehow, by the protocol. A finer model would fix a specific language for denoting bundle prices, force the protocol to represent the bundle-prices in this language, and require that the representations of the bundle-prices also be polynomial. What could such a language for denoting prices for all bundles look like? First note that specifying a price for each bundle is equivalent to specifying a valuation. Second, as noted in [31], most of the proposed bidding languages are really just languages for representing valuations, i.e., a syntactic representation of valuations - thus we could use any of them. This point of view opens up the general issue of which language should be used in bundle-price auctions and what are the implications of this choice. Here we initiate this line of investigation. We consider bundle-price auctions where the prices must be given as a XOR-bid, i.e., the protocol must explicitly indicate the price of every bundle whose value is different than that of all of its proper subsets. Note that all bundle-price auctions that do not explicitly specify a bidding language must implicitly use this language or a weaker one, since without a specific language one would need to list prices for all bundles, perhaps except for trivial ones (those with value 0, or more generally, those with a value that is determined by one of their proper subsets.) We show that once the representation length of bundle prices is taken into account (using the XOR-language), bundle-price auctions are no more strictly stronger than item-price auctions. Define the cost of an iterative auction as the total length of the queries and answers used throughout the auction (in the worst case). Theorem: For some class of valuations, bundle price auctions that use the XOR-language require an exponential cost for finding the optimal allocation. In contrast, item-price auctions can find the optimal allocation for this class within polynomial cost.10 This put doubts on the applicability of bundle-price auctions like [3, 37], and it may justify the use of hybrid pricing methods such as Ausubel, Cramton and Milgrom``s Clock-Proxy auction ([10]). 
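The XOR-language discussed above represents a vector of bundle prices (equivalently, a valuation) by a list of atomic bids, and the price of a bundle is the best atomic bid it contains; the representation length is the number of bundles that must be priced explicitly. A minimal sketch of this encoding (the data layout is an assumption; the numbers reuse the XOR example given later in the model section):

# XOR-language representation of bundle prices: a list of atoms (S_i, p_i);
# the price of a bundle S is max{ p_i : S_i contained in S }, and 0 if no atom fits.
def xor_price(atoms, bundle):
    bundle = frozenset(bundle)
    return max((p for s, p in atoms if frozenset(s) <= bundle), default=0)

# Three atoms, as in the k-bundle XOR valuations of the model section.
atoms = [({"a", "b", "c", "d"}, 5), ({"a", "b"}, 3), ({"c"}, 4)]
print(xor_price(atoms, {"a", "b", "d"}))    # 3: only the atom {a, b} is contained
print(xor_price(atoms, {"a", "b", "c"}))    # 4: atoms {a, b} and {c}; the maximum is 4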
Demand queries and linear programs The winner determination problem in combinatorial auctions may be formulated as an integer program. In many cases solving the linear-program relaxation of this integer program is useful: for some restricted classes of valuations it finds the optimum of the integer program (e.g., substitute valuations [22, 17]) or helps approximating the optimum (e.g., by randomized rounding [13, 14]). However, the linear program has an exponential number of variables. Nisan and Segal [32] observed the surprising fact that despite the ex10 Our proof relies on the sophisticated known lower bounds for constant depth circuits. We were not able to find an elementary proof. ponential number of variables, this linear program may be solved within polynomial communication. The basic idea is to solve the dual program using the Ellipsoid method (see, e.g., [20]). The dual program has a polynomial number of variables, but an exponential number of constraints. The Ellipsoid algorithm runs in polynomial time even on such programs, provided that a separation oracle is given for the set of constraints. Surprisingly, such a separation oracle can be implemented using a single demand query (with item prices) to each of the bidders. The treatment of [32] was somewhat ad-hoc to the problem at hand (the case of substitute valuations). Here we give a somewhat more general form of this important observation. Let us call the following class of linear programs generalized-winner-determination-relaxation (GWDR) LPs: Maximize X i∈N,S⊆M wi xi,S vi(S) s.t. X i∈N, S|j∈S xi,S ≤ qj ∀j ∈ M X S⊆M xi,S ≤ di ∀i ∈ N xi,S ≥ 0 ∀i ∈ N, S ⊆ M The case where wi = 1, di = 1, qj = 1 (for every i, j) is the usual linear relaxation of the winner determination problem. More generally, wi may be viewed as the weight given to bidder i``s welfare, qj may be viewed as the quantity of units of good j, and di may be viewed as duplicity of the number of bidders of type i. Theorem: Any GWDR linear program may be solved in polynomial time (in n, m, and the number of bits of precision t) using only demand queries with item prices.11 2.2 Ascending Auctions Ascending item-price auctions: It is well known that the item-price ascending auctions of Kelso and Crawford [22] and its variants [12, 16] find the optimal allocation as long as all players'' valuations have the substitutes property. The obvious question is whether the optimal allocation can be found for a larger class of valuations. Our main result here is a strong negative result: Theorem: There is a 2-item 2-player problem where no ascending item-price auction can find the optimal allocation. This is in contrast to both the power of bundle-price ascending auctions and to the power of general item-price demand queries (see above), both of which can always find the optimal allocation and in fact even provide full preference elicitation. The same proof proves a similar impossibility result for other types of auctions (e.g., descending auctions, non-anonymous auctions). More extension of this result: • Eliciting some classes of valuations requires an exponential number of ascending item-price trajectories. 11 The produced optimal solution will have polynomial support and thus can be listed fully. 33 • At least k − 1 ascending item-price trajectories are needed to elicit XOR formulae with k terms. 
This result is in some sense tight, since we show that any k-term XOR formula can be fully elicited by k−1 nondeterministic (i.e., when some exogenous teacher instructs the auctioneer on how to increase the prices) ascending auctions.12 We also show that item-price ascending auctions and iterative auctions that are limited to a polynomial number of queries (of any kind, not necessarily ascending) are incomparable in their power: ascending auctions, with small enough increments, can elicit the preferences in cases where any polynomial number of queries cannot. Motivated by several recent papers that studied the relation between eliciting and fully-eliciting the preferences in combinatorial auctions (e.g., [7, 24]), we explore the difference between these problems in the context of ascending auctions. We show that although a single ascending auction can determine the optimal allocation among any number of bidders with substitutes valuations, it cannot fully-elicit such a valuation even for a single bidder. While it was shown in [25] that the set of substitutes valuations has measure zero in the space of general valuations, its dimension is not known, and in particular it is still open whether a polynomial amount of information suffices to describe a substitutes valuation. While our result may be a small step in that direction (a polynomial full elicitation may still be possible with other communication protocols), we note that our impossibility result also holds for valuations in the class OXS defined by [25], valuations that we are able to show have a compact representation. We also give several results separating the power of different models for ascending combinatorial auctions that use item-prices: we prove, not surprisingly, that adaptive ascending auctions are more powerful than oblivious ascending auctions and that non-deterministic ascending auctions are more powerful than deterministic ascending auctions. We also compare different kinds of non-anonymous auctions (e.g., simultaneous or sequential), and observe that anonymous bundle-price auctions and non-anonymous item-price auctions are incomparable in their power. Finally, motivated by Dutch auctions, we consider descending auctions, and how they compare to ascending ones; we show classes of valuations that can be elicited by ascending item-price auctions but not by descending item-price auctions, and vice versa. Ascending bundle-price auctions: All known ascending bundle-price auctions that are able to find the optimal allocation between general valuations (with free disposal) use non-anonymous prices. Anonymous ascending-price auctions (e.g., [42, 21, 37]) are only known to be able to find the optimal allocation among superadditive valuations or few other simple classes ([36]). We show that this is no mistake: Theorem: No ascending auction with anonymous prices can find the optimal allocation between general valuations. 12 Non-deterministic computation is widely used in CS and also in economics (e.g, a Walrasian equilibrium or [38]). In some settings, deterministic and non-deterministic models have equal power (e.g., computation with finite automata). This bound is regardless of the running time, and it also holds for descending auctions and non-deterministic auctions. 
We strengthen this result significantly by showing that anonymous ascending auctions cannot produce a better than O( √ m) approximation - the approximation ratio that can be achieved with a polynomial number of queries ([26, 32]) and, as mentioned, with a polynomial number of item-price demand queries. The same lower bound clearly holds for anonymous item-price ascending auctions since such auctions can be simulated by anonymous bundle-price ascending auctions. We currently do not have any lower bound on the approximation achievable by non-anonymous item-price ascending auctions. Finally, we study the performance of the existing computationally-efficient ascending auctions. These protocols ([37, 3]) require exponential time in the worst case, and this is unavoidable as shown by [32]. However, we also observe that these auctions, as well as the whole class of similar ascending bundle-price auctions, require an exponential time even for simple additive valuations. This is avoidable and indeed the ascending item-price auctions of [22] can find the optimal allocation for these simple valuations with polynomial communication. 3. THE MODEL 3.1 Discrete Auctions for Continuous Values Our model aims to capture iterative auctions that operate on real-valued valuations. There is a slight technical difficulty here in bridging the gap between the discrete nature of an iterative auction, and the continuous nature of the valuations. This is exactly the same problem as in modeling a simple English auction. There are three standard formal ways to model it: 1. Model the auction as a continuous process and study its trajectory in time. For example, the so-called Japanese auction is basically a continuous model of an English model.13 2. Model the auction as discrete and the valuations as continuously valued. In this case we introduce a parameter and usually require the auction to produce results that are -close to optimal. 3. Model the valuations as discrete. In this case we will assume that all valuations are integer multiples of some small fixed quantity δ, e.g., 1 penny. All communication in this case is then naturally finite. In this paper we use the latter formulation and assume that all values are multiples of some δ. Thus, in some parts of the paper we assume without loss of generality that δ = 1, hence all valuations are integral. Almost all (if not all) of our results can be translated to the other two models with little effort. 3.2 Valuations A single auctioneer is selling m indivisible non-homogeneous items in a single auction, and let M be the set of these 13 Another similar model is the moving knives model in the cake-cutting literature. 34 items and N be the set of bidders. Each one of the n bidders in the auction has a valuation function vi : 2m → {0, δ, 2δ, ..., L}, where for every bundle of items S ⊆ M, vi(S) denotes the value of bidder i for the bundle S and is a multiple of δ in the range 0...L. We will sometimes denote the number of bits needed to represent such values in the range 0...L by t = log L. We assume free disposal, i.e., S ⊆ T implies vi(S) ≤ vi(T) and that vi(∅) = 0 for all bidders. We will mention the following classes of valuations: • A valuation is called sub-modular if for all sets of items A and B we have that v(A ∪ B) + v(A ∩ B) ≤ v(A) + v(B). • A valuation is called super-additive if for all disjoint sets of items A and B we have that v(A∪B) ≥ v(A)+ v(B). 
• A valuation is called a k-bundle XOR if it can be represented as a XOR combination of at most k atomic bids [30], i.e., if there are at most k bundles Si and prices pi such that for all S, v(S) = maxi|S⊇Si pi. Such valuations will be denoted by v = (S1 : p1) ⊕ (S2 : p2) ⊕ ... ⊕ (Sk : pk).14 3.3 Iterative Auctions The auctioneer sets up a protocol (equivalently an algorithm), where at each stage of the protocol some information q - termed the query - is sent to some bidder i, and then bidder i replies with some reply that depends on the query as well as on his own valuation. In this paper, we assume that we have complete control over the bidders'' behavior, and thus the protocol also defines a reply function ri(q, vi) that specifies bidder i``s reply to query q. The protocol may be adaptive: the query value as well as the queried bidder may depend on the replies received for past queries. At the end of the protocol, an allocation S1...Sn must be declared, where Si ∩ Sj = ∅ for i = j. We say that the auction finds an optimal allocation if it finds the allocation that maximizes the social welfareP i vi(Si). We say that it finds a c-approximation if P i vi(Si) ≥ P i vi(Ti)/c where T1...Tn is an optimal allocation. The running time of the auction on a given instance of the bidders'' valuations is the total number of queries made on this instance. The running time of a protocol is the worst case cost over all instances. Note that we impose no computational limitations on the protocol or on the players.15 This of course only strengthens our hardness results. Yet, our positive results will not use this power and will be efficient also in the usual computational sense. Our goal will be to design computationally-efficient protocols. We will deem a protocol computationally-efficient if its cost is polynomial in the relevant parameters: the number of bidders n, the number of items m, and t = log L, where L is the largest possible value of a bundle. However, when we discuss ascending-price auctions and their variants, a computationally-efficient protocol will be required to be 14 For example, v = (abcd : 5) ⊕ (ab : 3) ⊕ (c : 4) denotes the XOR valuation with the terms abcd, ab, c and prices 5, 3, 4 respectively. For this valuation, v(abcd) = 5, v(abd) = 3, v(abc) = 4. 15 The running time really measures communication costs and not computational running time. pseudo-polynomial, i.e., it should ask a number of queries which is polynomial in m, n and L δ . This is because that ascending auctions can usually not achieve such running times (consider even the English auction on a single item).16 Note that all of our results give concrete bounds, where the dependence on the parameters is given explicitly; we use the standard big-Oh notation just as a shorthand. We say than an auction elicits some class V of valuations, if it determines the optimal allocation for any profile of valuations drawn from V ; We say that an auction fully elicits some class of valuations V , if it can fully learn any single valuation v ∈ V (i.e., learn v(S) for every S). 3.4 Demand Queries and Ascending Auctions Most of the paper will be concerned with a common special case of iterative auctions that we term demand auctions. In such auctions, the queries that are sent to bidders are demand queries: the query specifies a price p(S) ∈ + for each bundle S. The reply of bidder i is simply the set most desired - demanded - under these prices. Formally, a set S that maximizes vi(S) − p(S). 
It may happen that more than one set S maximizes this value. In which case, ties are broken according to some fixed tie-breaking rule, e.g., the lexicographically first such set is returned. All of our results hold for any fixed tie-breaking rule. Ascending auctions are iterative auctions with non-decreasing prices: Definition 1. In an ascending auction, the prices in the queries to the same bidder can only increase in time. Formally, let p be a query made for bidder i, and q be a query made for bidder i at a later stage in the protocol. Then for all sets S, q(S) ≥ p(S). A similar variant, which we also study and that is also common in real life, is descending auctions, in which prices can only decrease in time. Note that the term ascending auction refers to an auction with a single ascending trajectory of prices. It may be useful to define multi-trajectory ascending auctions, in which the prices maybe reset to zero a number of times (see, e.g., [4]). We consider two main restrictions on the types of allowed demand queries: Definition 2. Item Prices: The prices in each query are given by prices pj for each item j. The price of a set S is additive: p(S) = P j∈S pj. Definition 3. Anonymous prices: The prices seen by the bidders at any stage in the auction are the same, i.e. whenever a query is made to some bidder, the same query is also made to all other bidders (with the prices unchanged). In auctions with non-anonymous (discriminatory) prices, each bidder i has personalized prices denoted by pi (S).17 In this paper, all auctions are anonymous unless otherwise specified. Note that even though in our model valuations are integral (or multiples of some δ), we allow the demand query to 16 Most of the auctions we present may be adapted to run in time polynomial in log L, using a binary-search-like procedure, losing their ascending nature. 17 Note that a non-anonymous auction can clearly be simulated by n parallel anonymous auctions. 35 use arbitrary real numbers in +. That is, we assume that the increment we use in the ascending auctions may be significantly smaller than δ. All our hardness results hold for any , even for continuous price increments. A practical issue here is how will the query be specified: in the general case, an exponential number of prices needs to be sent in a single query. Formally, this is not a problem as the model does not limit the length of queries in any way - the protocol must simply define what the prices are in terms of the replies received for previous queries. We look into this issue further in Section 4.3. 4. THE POWER OF DEMAND QUERIES In this section, we study the power of iterative auctions that use demand queries (not necessarily ascending). We start by comapring demand queries to other types of queries. Then, we discuss how well can one approximate the optimal welfare using a polynomial number of demand queries. We also initiate the study of the representation of bundle-price demand queries, and finally, we show how demand queries help solving the linear-programming relaxation of combinatorial auctions in polynomial time. 4.1 The Power of Different Types of Queries In this section we compare the power of the various types of queries defined in Section 2. We will present computationally -efficient simulations of these query types using item-price demand queries. In Section 5.1 we show that these simulations can also be done using item-price ascending auctions. Lemma 4.1. A value query can be simulated by m marginalvalue queries. 
Lemma 4.2. A value query can be simulated by mt demand queries (where t = log L is the number of bits needed to represent a single bundle value).18

As a direct corollary we get that demand auctions can always fully elicit the bidders' valuations by simulating all possible 2^m − 1 value queries and thus elicit enough information for determining the optimal allocation. Note, however, that this elicitation may be computationally inefficient. The next lemma shows that demand queries can be exponentially more powerful than value queries.

Lemma 4.3. An exponential number of value queries may be required for simulating a single demand query.

Indirect-utility queries are, however, equivalent in power to demand queries:

Lemma 4.4. An indirect-utility query can be simulated by mt + 1 demand queries. A demand query can be simulated by m + 1 indirect-utility queries.

Demand queries can also simulate relative-demand queries:19

18 Note that t bundle-price demand queries can easily simulate a value query by setting the prices of all the bundles except S (the bundle with the unknown value) to be L, and performing a binary search on the price of S.
19 Note: although in our model values are integral (or multiples of δ), we allow the query prices to be arbitrary real numbers, thus we may have bundles with arbitrarily close relative demands. In this sense the simulation above is only up to any given ε (and the number of queries is O(log L + log(1/ε))). When the relative-demand query prices are given as rational numbers, exact simulation is implied when log(1/ε) is linear in the input length.

       V     MV     D      IU      RD
V      1     2      exp    exp     exp
MV     m     1      exp    exp     exp
D      mt    poly   1      mt+1    poly
IU     1     2      m+1    1       poly
RD     -     -      -      -       1

Figure 4: Each entry in the table specifies how many queries of this row are needed to simulate a query from the relevant column. Abbreviations: V (value query), MV (marginal-value query), D (demand query), IU (indirect-utility query), RD (relative-demand query).

Lemma 4.5. Relative-demand queries can be simulated by a polynomial number of demand queries.

According to our definition of relative-demand queries, they clearly cannot simulate even value queries. Figure 4 summarizes the relations between these query types.

4.2 Approximating the Social Welfare with Value and Demand Queries
We know from [32] that iterative combinatorial auctions that only use a polynomial number of queries cannot find an optimal allocation among general valuations and in fact cannot even approximate it to within a factor better than min{n, m^{1/2−ε}}. In this section we ask how well this approximation can be done using demand queries with item prices, or using the weaker value queries. We show that, using demand queries, the lower bound can be matched, while value queries can only do much worse.

Figure 5 describes a polynomial-time algorithm that achieves a min(n, O(√m)) approximation ratio. This algorithm greedily picks the bundles that maximize the bidders' per-item value (using relative-demand queries, see Section 4.1). As a final step, it allocates all the items to a single bidder if it improves the social welfare (this can be checked using value queries). Since both value queries and relative-demand queries can be simulated by a polynomial number of demand queries with item prices (Lemmas 4.2 and 4.5), this algorithm can be implemented by a polynomial number of demand queries with item prices.20

20 In the full paper [8], we observe that this algorithm can be implemented by two descending item-price auctions (where we allow removing items along the auction).

Theorem 4.6. The auction described in Figure 5 can be implemented by a polynomial number of demand queries and achieves a min{n, 4√m}-approximation for the social welfare.

We now ask how well the optimal welfare can be approximated by a polynomial number of value queries.
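Before doing so, the following minimal sketch makes the greedy procedure behind Theorem 4.6 concrete (its pseudocode is given in Figure 5 below). For brevity, the sketch is given direct oracle access to the valuations and brute-forces the per-item-value maximization; in the auction itself these steps would be realized by relative-demand and value queries, which by Lemmas 4.2 and 4.5 reduce to item-price demand queries. The function names are ours, and the enumeration is only feasible for small m.

```python
from itertools import combinations

def subsets(items):
    """All non-empty subsets of `items` (brute-force stand-in for a relative-demand query)."""
    items = sorted(items)
    return [frozenset(c) for r in range(1, len(items) + 1)
            for c in combinations(items, r)]

def greedy_allocation(valuations, items):
    """Greedy per-item-value allocation as in Figure 5. valuations[i] maps a frozenset to a number
    (and must accept the empty set). Returns a dict bidder -> allocated bundle."""
    T, K = frozenset(items), set(range(len(valuations)))
    alloc = {i: frozenset() for i in K}
    while T and K:
        # Each remaining bidder proposes the bundle with the highest value per item.
        best = {i: max(subsets(T), key=lambda S: valuations[i](S) / len(S)) for i in K}
        winner = max(K, key=lambda i: valuations[i](best[i]) / len(best[i]))
        alloc[winner] = best[winner]
        K.discard(winner)
        T = T - best[winner]
    # Final step: give everything to a single bidder if that improves the welfare.
    M = frozenset(items)
    star = max(range(len(valuations)), key=lambda i: valuations[i](M))
    if valuations[star](M) > sum(valuations[i](alloc[i]) for i in alloc):
        alloc = {i: frozenset() for i in range(len(valuations))}
        alloc[star] = M
    return alloc
```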
First we note that value queries are not completely powerless: In [19] it is shown that if the m items are split into k fixed bundles of size m/k each, and these fixed bundles are auctioned as though each was indivisible, then the social welfare generated by such an auction is at least an m/√k-approximation of that possible in the original auction. Notice that such an auction can be implemented by 2^k − 1 value queries to each bidder - querying the value of each union of the fixed bundles. Thus, if we choose k = log m bundles we get an m/√(log m)-approximation while still using a polynomial number of queries. The following lemma shows that not much more is possible using value queries:

Lemma 4.7. Any iterative auction that uses only value queries and distinguishes between k-tuples of 0/1 valuations where the optimal allocation has value 1, and those where the optimal allocation has value k, requires at least 2^{m/k} queries.

Proof. Consider the following family of valuations: for every S such that |S| > m/2, v(S) = 1, and there exists a single set T such that for |S| ≤ m/2, v(S) = 1 iff T ⊆ S and v(S) = 0 otherwise. Now look at the behavior of the protocol when all valuations vi have T = {1...m}. Clearly in this case the value of the best allocation is 1, since no set of size m/2 or lower has non-zero value for any player. Fix the sequence of queries and answers received on this k-tuple of valuations. Now consider the k-tuple of valuations chosen at random as follows: a partition of the m items into k sets T1...Tk, each of size m/k, is chosen uniformly at random among all such partitions. Now consider the k-tuple of valuations from our family that correspond to this partition - clearly Ti can be allocated to i, for each i, getting a total value of k. Now look at the protocol when running on these valuations and compare its behavior to the original case. Note that the answer to a query S to player i differs between the case of Ti and the original case of T = {1...m} only if |S| ≤ m/2 and Ti ⊆ S. Since Ti is distributed uniformly among all sets of size exactly m/k, we have that for any fixed query S to player i, where |S| ≤ m/2,

Pr[Ti ⊆ S] ≤ (|S|/m)^{|Ti|} ≤ 2^{−m/k}.

Using the union bound, if the original sequence of queries was of length less than 2^{m/k}, then with positive probability none of the queries in the sequence would receive a different answer than for the original input tuple. This is forbidden since the protocol must distinguish between this case and the original case - which cannot happen if all queries receive the same answer. Hence there must have been at least 2^{m/k} queries for the original tuple of valuations.

We conclude that a polynomial-time protocol that uses only value queries cannot obtain a better than O(m/log m) approximation of the welfare:

Theorem 4.8. An iterative auction that uses a polynomial number of value queries cannot achieve better than an O(m/log m)-approximation for the social welfare.

Proof.
Immediate from Lemma 4.7: achieving an approximation ratio k that is asymptotically better than m/log m requires at least 2^{m/k} value queries, which is super-polynomial in m whenever k = o(m/log m).

An Approximation Algorithm:
Initialization: Let T ← M be the current items for sale, let K ← N be the currently participating bidders, and let S*_1 ← ∅, ..., S*_n ← ∅ be the provisional allocation.
Repeat until T = ∅ or K = ∅:
- Ask each bidder i in K for the bundle Si that maximizes her per-item value, i.e., Si ∈ argmax_{S⊆T} vi(S)/|S|.
- Let i be the bidder with the maximal per-item value, i.e., i ∈ argmax_{i∈K} vi(Si)/|Si|.
- Set S*_i = Si, K = K \ {i}, T = T \ Si.
Finally: Ask the bidders for their values vi(M) for the grand bundle. If allocating all the items to some bidder i improves the social welfare achieved so far (i.e., ∃i ∈ N such that vi(M) > Σ_{i∈N} vi(S*_i)), then allocate all items to this bidder i.

Figure 5: This algorithm achieves a min{n, 4√m}-approximation for the social welfare, which is asymptotically the best worst-case approximation possible with polynomial communication. This algorithm can be implemented with a polynomial number of demand queries.

4.3 The Representation of Bundle Prices
In this section we explicitly fix the language in which bundle prices are presented to the bidders in bundle-price auctions. This language requires the algorithm to explicitly list the price of every bundle with a non-trivial price. Trivial in this context is a price that is equal to that of one of its proper subsets (which was listed explicitly). This representation is equivalent to the XOR-language for expressing valuations. Formally, each query q is given by an expression: q = (S1 : p1) ⊕ (S2 : p2) ⊕ ... ⊕ (Sl : pl). In this representation, the price demanded for every set S is simply p(S) = max_{k=1...l, Sk⊆S} pk.

Definition 4. The length of the query q = (S1 : p1) ⊕ (S2 : p2) ⊕ ... ⊕ (Sl : pl) is l. The cost of an algorithm is the sum of the lengths of the queries asked during the operation of the algorithm on the worst-case input.

Note that under this definition, bundle-price auctions are not necessarily stronger than item-price auctions. An item-price query that prices each item at 1 is translated to an exponentially long bundle-price query that needs to specify the price |S| for each bundle S. But perhaps bundle-price auctions can still find optimal allocations whenever item-price auctions can, without directly simulating such queries? We show that this is not the case: indeed, when the representation length is taken into account, bundle-price auctions are sometimes seriously inferior to item-price auctions. Consider the following family of valuations: Each item is valued at 3, except that for some single set S, its value is a bit more: 3|S| + b, where b ∈ {0, 1, 2}. Note that an item-price auction can easily find the optimal allocation between any two such valuations: Set the price of each item to 3 + ε; if the demand sets of the two players are both empty, then b = 0 for both valuations, and an arbitrary allocation is fine. If one of them is empty and the other non-empty, allocate the non-empty demand set to its bidder, and the rest to the other. If both demand sets are non-empty then, unless they form an exact partition, we need to see which b is larger, which we can do by increasing the price of a single item in each demand set.
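To make the XOR price language of Definition 4 concrete, the short sketch below (our own illustration; the list-of-pairs representation is an assumption) evaluates the price p(S) = max{pk : Sk ⊆ S} implied by a query, reports the query's length, and reproduces the observation above that the item-price query "every item costs 1" blows up to 2^m − 1 terms in this language.

```python
from itertools import combinations

def xor_price(query, S):
    """Price of bundle S under an XOR price expression q = [(S1, p1), ..., (Sl, pl)]:
    p(S) = max{p_k : S_k contained in S}, and 0 if no listed bundle is contained."""
    S = frozenset(S)
    return max([p for b, p in query if frozenset(b) <= S], default=0)

def query_length(query):
    """Definition 4: the length of the query is l, the number of explicitly priced bundles."""
    return len(query)

# The item-price query "every item costs 1" over items {a, b, c}, written in the XOR
# language: every bundle S must be priced explicitly at |S|, so the expression has
# 2^m - 1 terms.
items = "abc"
q = [("".join(c), r) for r in range(1, len(items) + 1) for c in combinations(items, r)]
assert query_length(q) == 2 ** len(items) - 1
assert xor_price(q, "ac") == 2
```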
We will show that any bundle-price auction that uses only the XOR-language to describe bundle prices requires an exponential cost (which includes the sum of all description lengths of prices) to find an optimal allocation between any two such valuations.

Lemma 4.9. Every bundle-price auction that uses XOR-expressions to denote bundle prices requires 2^{Ω(√m)} cost in order to find the optimal allocation among two valuations from the above family.

The complication in the proof stems from the fact that, using XOR-expressions, the length of the price description depends on the number of bundles whose price is strictly larger than each of their subsets - this may be significantly smaller than the number of bundles that have a non-zero price. (The proof becomes easy if we require the protocol to explicitly name every bundle with non-zero price.) We do not know of any elementary proof for this lemma (although we believe that one can be found). Instead we reduce the problem to a well-known lower bound in boolean circuit complexity [18] stating that boolean circuits of depth 3 that compute the majority function on m variables require 2^{Ω(√m)} size.

4.4 Demand Queries and Linear Programming
Consider the following linear-programming relaxation for the generalized winner-determination problem in combinatorial auctions (the primal program):

Maximize Σ_{i∈N, S⊆M} wi xi,S vi(S)
s.t.
Σ_{i∈N, S: j∈S} xi,S ≤ qj    ∀j ∈ M
Σ_{S⊆M} xi,S ≤ di    ∀i ∈ N
xi,S ≥ 0    ∀i ∈ N, S ⊆ M

Note that the primal program has an exponential number of variables. Yet, we will be able to solve it in polynomial time using demand queries to the bidders. The solution will have a polynomial-size support (non-zero values for xi,S), and thus we will be able to describe it in polynomial time. Here is its dual:

Minimize Σ_{j∈M} qj pj + Σ_{i∈N} di ui
s.t.
ui + Σ_{j∈S} pj ≥ wi vi(S)    ∀i ∈ N, S ⊆ M
pj ≥ 0, ui ≥ 0    ∀j ∈ M, i ∈ N

Notice that the dual problem has exactly n + m variables but an exponential number of constraints. Thus, the dual can be solved using the Ellipsoid method in polynomial time - if a separation oracle can be implemented in polynomial time. Recall that a separation oracle, when given a possible solution, either confirms that it is a feasible solution, or responds with a constraint that is violated by the possible solution. We construct a separation oracle for solving the dual program, using a single demand query to each of the bidders. Consider a possible solution (u, p) for the dual program. We can re-write the constraints of the dual program as:

ui/wi ≥ vi(S) − Σ_{j∈S} pj/wi

Now a demand query to bidder i with prices pj/wi reveals exactly the set S that maximizes the RHS of the previous inequality. Thus, in order to check whether (u, p) is feasible it suffices to (1) query each bidder i for his demand Di under the prices pj/wi; (2) check only the n constraints ui + Σ_{j∈Di} pj ≥ wi vi(Di) (where vi(Di) can be computed using a polynomial sequence of demand queries, as shown in Lemma 4.2). If none of these is violated then we are assured that (u, p) is feasible; otherwise we get a violated constraint.

What is left to be shown is how the primal program can be solved. (Recall that the primal program has an exponential number of variables.) Since the Ellipsoid algorithm runs in polynomial time, it encounters only a polynomial number of constraints during its operation. Clearly, if all other constraints were removed from the dual program, it would still have the same solution (adding constraints can only decrease the space of feasible solutions).
Now take the reduced dual where only the constraints encountered exist, and look at its dual. It will have the same solution as the original dual and hence as the original primal. However, look at the form of this dual of the reduced dual. It is just a version of the primal program with a polynomial number of variables - those corresponding to constraints that remained in the reduced dual. Thus, it can be solved in polynomial time, and this solution clearly solves the original primal program, setting all other variables to zero.

5. ITEM-PRICE ASCENDING AUCTIONS
In this section we characterize the power of ascending item-price auctions. We first show that this power is not trivial: such auctions can in general elicit an exponential amount of information. On the other hand, we show that the optimal allocation cannot always be determined by a single ascending auction, and in some cases not even by an exponential number of ascending-price trajectories. Finally, we separate the power of different models of ascending auctions.

5.1 The Power of Item-Price Ascending Auctions
We first show that if small enough increments are allowed, a single ascending trajectory of item prices can elicit preferences that cannot be elicited with polynomial communication. As mentioned, all our hardness results hold for any increment, even infinitesimal.

Theorem 5.1. Some classes of valuations can be elicited by item-price ascending auctions, but cannot be elicited by a polynomial number of queries of any kind.

Proof. (sketch) Consider two bidders with v(S) = 1 if |S| > m/2, v(S) = 0 if |S| < m/2, and every S such that |S| = m/2 has an unknown value from {0, 1}. Due to [32], determining the optimal allocation here requires exponential communication in the worst case. Nevertheless, we show (see [9]) that an item-price ascending auction can do it, as long as it can use exponentially small increments.

We now describe another positive result for the power of item-price ascending auctions. In Section 4.1, we showed that a value query can be simulated with a (truly) polynomial number of item-price demand queries. Here, we show that every value query can be simulated by a (pseudo) polynomial number of ascending item-price demand queries. (In the next subsection, we show that we cannot always simulate even a pair of value queries using a single item-price ascending auction.) In the full paper (part II, [9]), we show that we can simulate other types of queries using item-price ascending auctions.

           v(ab)   v(a)         v(b)
Bidder 1    2      α ∈ (0, 1)   β ∈ (0, 1)
Bidder 2    2      2            2

Figure 6: No item-price ascending auction can determine the optimal allocation for this class of valuations.

Proposition 5.2. A value query can be simulated by an item-price ascending auction. This simulation requires a polynomial number of queries.

Actually, the proof of Proposition 5.2 proves a stronger useful result regarding the information elicited by iterative auctions. It says that in any iterative auction in which the changes of prices are small enough in each stage (pseudo-continuous auctions), the value of all bundles demanded during the auction can be computed. The basic idea is that when the bidder moves from demanding some bundle Ti to demanding another bundle Ti+1, there is a point at which she is indifferent between these two bundles. Thus, knowing the value of some demanded bundle (e.g., the empty set) enables computing the values of all other demanded bundles.
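The following toy sketch (ours, not the proof of Proposition 5.2) illustrates this idea: given the trace of a pseudo-continuous auction, i.e., the sequence of (item-price vector, demanded bundle) pairs in which consecutive price vectors differ by ε on a single item, it chains the approximate indifference at each demand switch to recover the values of all demanded bundles, up to an ε error per switch, anchored at the known value v(∅) = 0.

```python
def reconstruct_values(trace, known=None):
    """trace: list of (prices, bundle) pairs from a pseudo-continuous auction;
    prices maps items to numbers, bundle is a frozenset, and consecutive price
    vectors differ by eps on a single item. Returns approximate values of all
    demanded bundles, anchored at the known value of the empty set."""
    values = dict(known) if known is not None else {frozenset(): 0.0}
    cost = lambda prices, S: sum(prices[j] for j in S)
    progress = True
    while progress:                      # propagate in both directions along the trace
        progress = False
        for (p_old, T_old), (p_new, T_new) in zip(trace, trace[1:]):
            if T_old == T_new:
                continue
            # At the switch the bidder is (nearly) indifferent under the new prices:
            # v(T_new) - p_new(T_new) ~ v(T_old) - p_new(T_old), up to eps.
            diff = cost(p_new, T_new) - cost(p_new, T_old)
            if T_old in values and T_new not in values:
                values[T_new] = values[T_old] + diff
                progress = True
            elif T_new in values and T_old not in values:
                values[T_old] = values[T_new] - diff
                progress = True
    return values
```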
We say that an auction is pseudo-continuous, if it only uses demand queries, and in each step, the price of at most one item is changed by (for some ∈ (0, δ]) with respect to the previous query. Proposition 5.3. Consider any pseudo-continuous auction (not necessarily ascending), in which bidder i demands the empty set at least once along the auction. Then, the value of every bundle demanded by bidder i throughout the auction can be calculated at the end of the auction. 5.2 Limitations of Item-Price Ascending Auctions Although we observed that demand queries can solve any combinatorial auction problem, when the queries are restricted to be ascending, some classes of valuations cannot be elicited nor fully-elicited. An example for such class of valuations is given in Figure 6. Theorem 5.4. There are classes of valuations that cannot be elicited nor fully elicited by any item-price ascending auction. Proof. Let bidder 1 have the valuation described in the first row of Figure 6, where α and β are unknown values in (0, 1). First, we prove that this class cannot be fully elicited by a single ascending auction. Specifically, an ascending auction cannot reveal the values of both α and β. As long as pa and pb are both below 1, the bidder will always demand the whole bundle ab: her utility from ab is strictly greater than the utility from either a or b separately. For example, we show that u1(ab) > u1(a): u1(ab) = 2 − (pa + pb) = 1 − pa + 1 − pb > vA(a) − pa + 1 − pb > u1(a) Thus, in order to gain any information about α or β, the price of one of the items should become at least 1, w.l.o.g. pa ≥ 1. But then, the bundle a will not be demanded by bidder 1 throughout the auction, thus no information at all will be gained about α. Now, assume that bidder 2 is known to have the valuation described in the second row of Figure 6. The optimal allocation depends on whether α is greater than β (in bidder 1``s valuation), and we proved that an ascending auction cannot determine this. The proof of the theorem above shows that for an unknown value to be revealed, the price of one item should be greater than 1, and the other price should be smaller than 1. Therefore, in a price-monotonic trajectory of prices, only one of these values can be revealed. An immediate conclusion is that this impossibility result also holds for item-price descending auctions. Since no such trajectory exists, then the same conclusion even holds for non-deterministic itemprice auctions (in which exogenous data tells us how to increase the prices). Also note that since the hardness stems from the impossibility to fully-elicit a valuation of a single bidder, this result also holds for non-anonymous ascending item-price auctions. 5.3 Limitations of Multi-Trajectory Ascending Auctions According to Theorem 5.4, no ascending item-price auction can always elicit the preferences (we prove a similar result for bundle prices in section 6). But can two ascending trajectories do the job? Or a polynomial number of ascending trajectories? We give negative answers for such suggestions. We define a k-trajectory ascending auction as a demandquery iterative auction in which the demand queries can be partitioned to k sets of queries, where the prices published in each set only increase in time. Note that we use a general definition; It allows the trajectories to run in parallel or sequentially, and to use information elicited in some trajectories for determining the future queries in other trajectories. 
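For concreteness, the single-trajectory (k = 1) case is the familiar skeleton sketched below, in the spirit of the tatonnement-style auctions of [22, 12]: start from zero item prices, repeatedly query all bidders' demands, and raise by ε the price of some over-demanded item until no item is demanded by more than one bidder. Modeling bidders as demand oracles is our own simplification, and this skeleton by itself is guaranteed to reach an efficient outcome only for restricted classes such as substitutes valuations; the multi-trajectory auctions discussed next simply run several such price paths.

```python
def ascending_item_price_auction(demand_oracles, items, eps=0.01, max_rounds=10**6):
    """Anonymous single-trajectory item-price ascending auction (a sketch).
    demand_oracles[i](prices) returns bidder i's demanded bundle as a frozenset."""
    prices = {j: 0.0 for j in items}
    for _ in range(max_rounds):
        demands = [oracle(dict(prices)) for oracle in demand_oracles]
        # An item is over-demanded if it appears in the demand of more than one bidder.
        over = [j for j in items if sum(j in d for d in demands) > 1]
        if not over:
            # No conflicts: tentatively give each bidder her demanded bundle.
            return demands, prices
        prices[over[0]] += eps          # ascending: prices never decrease
    raise RuntimeError("did not converge within max_rounds")
```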
The power of multiple-trajectory auctions can be demonstrated by the negative result of Gul and Stacchetti [17] who showed that even for an auction among substitutes valuations, an anonymous ascending item-price auction cannot compute VCG prices for all players.21 Ausubel [4] overcame this impossibility result and designed auctions that do compute VCG prices by organizing the auction as a sequence of n + 1 ascending auctions. Here, we prove that one cannot elicit XOR valuations with k terms by less than k − 1 ascending trajectories. On the other hand, we show that an XOR formula can be fully elicited by k−1 non-deterministic ascending auctions (or by k−1 deterministic ascending auctions if the auctioneer knows the atomic bundles).22 21 A recent unpublished paper by Mishra and Parkes extends this result, and shows that non-anonymous prices with bundle-prices are necessary in order that an ascending auction will end up with a universal-competitive-equilibrium (that leads to VCG payments). 22 This result actually separates the power of deterministic 39 Proposition 5.5. XOR valuations with k terms cannot be elicited (or fully elicited) by any (k-2)-trajectory itemprice ascending auction, even when the atomic bundles are known to the elicitor. However, these valuations can be elicited (and fully elicited) by (k-1)-trajectory non-deterministic non-anonymous item-price ascending auctions. Moreover, an exponential number of trajectories is required for eliciting some classes of valuations: Proposition 5.6. Elicitation and full-elicitation of some classes of valuations cannot be done by any k-trajectory itemprice ascending auction, where k = o(2m ). Proof. (sketch) Consider the following class of valuations: For |S| < m 2 , v(S) = 0 and for |S| > m 2 , v(S) = 2; every bundle S of size m 2 has some unknown value in (0, 1). We show ([9]) that a single item-price ascending auction can reveal the value of at most one bundle of size n 2 , and therefore an exponential number of ascending trajectories is needed in order to elicit such valuations. We observe that the algorithm we presented in Section 4.2 can be implemented by a polynomial number of ascending auctions (each item-price demand query can be considered as a separate ascending auction), and therefore a min(n, 4 √ m)-approximation can be achieved by a polynomial number of ascending auctions. We do not currently have a better upper bound or any lower bound. 5.4 Separating the Various Models of Ascending Auctions Various models for ascending auctions have been suggested in the literature. In this section, we compare the power of the different models. As mentioned, all auctions are considered anonymous and deterministic, unless specified otherwise. Ascending vs. Descending Auctions: We begin the discussion of the relation between ascending auctions and descending auctions with an example. The algorithm by Lehmann, Lehmann and Nisan [25] can be implemented by a simple item-price descending auction (see the full paper for details [9]). This algorithm guarantees at least half of the optimal efficiency for submodular valuations. However, we are not familiar with any ascending auction that guarantees a similar fraction of the efficiency. This raises a more general question: can ascending auctions solve any combinatorialauction problem that is solvable using a descending auction (and vice versa)? We give negative answers to these questions. 
The idea behind the proofs is that the information that the auctioneer can get for free at the beginning of each type of auction is different.23 and non-deterministic iterative auctions: our proof shows that a non-deterministic iterative auction can elicit the kterm XOR valuations with a polynomial number of demand queries, and [7] show that this elicitation must take an exponential number of demand queries. 23 In ascending auctions, the auctioneer can reveal the most valuable bundle (besides M) before she starts raising the prices, thus she can use this information for adaptively choose the subsequent queries. In descending auctions, one can easily find the bundle with the highest average per-item price, keeping all other bundles with non-positive utilities, and use this information in the adaptive price change. Proposition 5.7. There are classes that cannot be elicited (fully elicited) by ascending item-price auctions, but can be elicited (resp. fully elicited) with a descending item-price auction. Proposition 5.8. There are classes that cannot be elicited (fully elicited) by item-price descending auctions, but can be elicited (resp. fully elicited) by item-price ascending auctions. Deterministic vs. Non-Deterministic Auctions: Nondeterministic ascending auctions can be viewed as auctions where some benevolent teacher that has complete information guides the auctioneer on how she should raise the prices. That is, preference elicitation can be done by a non-deterministic ascending auction, if there is some ascending trajectory that elicits enough information for determining the optimal allocation (and verifying that it is indeed optimal). We show that non-deterministic ascending auctions are more powerful than deterministic ascending auctions: Proposition 5.9. Some classes can be elicited (fully elicited) by an item-price non-deterministic ascending auction, but cannot be elicited (resp. fully elicited) by item-price deterministic ascending auctions. Anonymous vs. Non-Anonymous Auctions: As will be shown in Section 6, the power of anonymous and nonanonymous bundle-price ascending auctions differs significantly. Here, we show that a difference also exists for itemprice ascending auctions. Proposition 5.10. Some classes cannot be elicited by anonymous item-price ascending auctions, but can be elicited by a non-anonymous item-price ascending auction. Sequential vs. Simultaneous Auctions: A non-anonymous auction is called simultaneous if at each stage, the price of some item is raised by for every bidder. The auctioneer can use the information gathered until each stage, in all the personalized trajectories, to determine the next queries. A non-anonymous auction is called sequential if the auctioneer performs an auction for each bidder separately, in sequential order. The auctioneer can determine the next query based on the information gathered in the trajectories completed so far and on the history of the current trajectory. Proposition 5.11. There are classes that cannot be elicited by simultaneous non-anonymous item-price ascending auctions, but can be elicited by a sequential non-anonymous item-price ascending auction. Adaptive vs. Oblivious Auctions: If the auctioneer determines the queries regardless of the bidders'' responses (i.e., the queries are predefined) we say that the auction is oblivious. Otherwise, the auction is adaptive. We prove that an adaptive behaviour of the auctioneer may be beneficial. Proposition 5.12. 
There are classes that cannot be elicited (fully elicited) using oblivious item-price ascending auctions, but can be elicited (resp. fully elicited) by an adaptive item-price ascending auction. 40 5.5 Preference Elicitation vs. Full Elicitation Preference elicitation and full elicitation are closely related problems. If full elicitation is easy (e.g., in polynomial time) then clearly elicitation is also easy (by a nonanonymous auction, simply by learning all the valuations separately24 ). On the other hand, there are examples where preference elicitation is considered easy but learning is hard (typically, elicitation requires smaller amount of information; some examples can be found in [7]). The tatonnement algorithms by [22, 12, 16] end up with the optimal allocation for substitutes valuations.25 We prove that we cannot fully elicit substitutes valuations (or even their sub-class of OXS valuations defined in [25]), even for a single bidder, by an item-price ascending auction (although the optimal allocation can be found by an ascending auction for any number of bidders!) . Theorem 5.13. Substitute valuations cannot be fully elicited by ascending item-price auctions. Moreover, they cannot be fully elicited by any m 2 ascending trajectories (m > 3). Whether substitutes valuations have a compact representation (i.e., polynomial in the number of goods) is an important open question. As a step in this direction, we show that its sub-class of OXS valuations does have a compact representation: every OXS valuation can be represented by at most m2 values.26 Lemma 5.14. Any OXS valuation can be represented by no more than m2 values. 6. BUNDLE-PRICE ASCENDING AUCTIONS All the ascending auctions in the literature that are proved to find the optimal allocation for unrestricted valuations are non-anonymous bundle-price auctions (iBundle(3) by Parkes and Ungar [37] and the Proxy Auction by Ausubel and Milgrom [3]). Yet, several anonymous ascending auctions have been suggested (e.g., AkBA [42], [21] and iBundle(2) [37]). In this section, we prove that anonymous bundle-price ascending auctions achieve poor results in the worst-case. We also show that the family of non-anonymous bundleprice ascending auctions can run exponentially slower than simple item-price ascending auctions. 6.1 Limitations of Anonymous Bundle-Price Ascending Auctions We present a class of valuations that cannot be elicited by anonymous bundle-price ascending auctions. These valuations are described in Figure 7. The basic idea: for determining some unknown value of one bidder we must raise 24 Note that an anonymous ascending auction cannot necessarily elicit a class that can be fully elicited by an ascending auction. 25 Substitute valuations are defined, e.g., in [16]. Roughly speaking, a bidder with a substitute valuation will continue demand a certain item after the price of some other items was increased. For completeness, we present in the full paper [9] a proof for the efficiency of such auctions for substitutes valuations. 26 A unit-demand valuation is an XOR valuation in which all the atomic bundles are singletons. OXS valuations can be interpreted as an aggregation (OR) of any number of unit-demand bidders. Bid. 1 v1(ac) = 2 v1(bd) = 2 v1(cd) = α ∈ (0, 1) Bid. 2 v2(ab) = 2 v2(cd) = 2 v2(bd) = β ∈ (0, 1) Figure 7: Anonymous ascending bundle-price auctions cannot determine the optimal allocation for this class of valuations. a price of a bundle that should be demanded by the other bidder in the future. Theorem 6.1. 
Some classes of valuations cannot be elicited by anonymous bundle-price ascending auctions. Proof. Consider a pair of XOR valuations as described in Figure 7. For finding the optimal allocation we must know which value is greater between α and β.27 However, we cannot learn the value of both α and β by a single ascending trajectory: assume w.l.o.g. that bidder 1 demands cd before bidder 2 demands bd (no information will be elicited if none of these happens). In this case, the price for bd must be greater than 1 (otherwise, bidder 1 prefers bd to cd). Thus, bidder 2 will never demand the bundle bd, and no information will be elicited about β. The valuations described in the proof of Theorem 6.1 can be easily elicited by a non-anonymous item-price ascending auction. On the other hand, the valuations in Figure 6 can be easily elicited by an anonymous bundle-price ascending auction. We conclude that the power of these two families of ascending auctions is incomparable. We strengthen the impossibility result above by showing that anonymous bundle-price auctions cannot even achieve better than a min{O(n), O( √ m)}-approximation for the social welfare. This approximation ratio can be achieved with polynomial communication, and specifically with a polynomial number of item-price demand queries.28 Theorem 6.2. An anonymous bundle-price ascending auction cannot guarantee better than a min{ n 2 , √ m 2 } approximation for the optimal welfare. Proof. (Sketch) Assume we have n bidders and n2 items for sale, and that n is prime. We construct n2 distinct bundles with the following properties: for each bidder, we define a partition Si = (Si 1, ..., Si n) of the n2 items to n bundles, such that any two bundles from different partitions intersect. In the full paper, part II [9] we show an explicit construction using the properties of linear functions over finite fields. The rest of the proof is independent of the specific construction. Using these n2 bundles we construct a hard-to-elicit class. Every bidder has an atomic bid, in his XOR valuation, for each of these n2 bundles. A bidder i has a value of 2 for any bundle Si j in his partition. For all bundles in the other partitions, he has a value of either 0 or of 1 − δ, and these values are unknown to the auctioneer. Since every pair of bundles from different partitions intersect, only one bidder can receive a bundle with a value of 2. 27 If α > β, the optimal allocation will allocate cd to bidder 1 and ab to bidder 2. Otherwise, we give bd to bidder 2 and ac to bidder 1. Note that both bidders cannot gain a value of 2 in the same allocation, due to the intersections of the high-valued bundles. 28 Note that bundle-price queries may use exponential communication, thus the lower bound of [32] does not hold. 41 Non-anonymous Bundle-Price Economically-Efficient Ascending Auctions: Initialization: All prices are initialized to zero (non-anonymous bundle prices). Repeat: - Each bidder submits a bundle that maximizes his utility under his current personalized prices. - The auctioneer calculates a provisional allocation that maximizes his revenue under the current prices. - The prices of bundles that were demanded by losing bidders are increased by . Finally: Terminate when the provisional allocation assigns to each bidder the bundle he demanded. Figure 8: Auctions from this family (denoted by NBEA auctions) are known to achieve the optimal welfare. 
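A minimal runnable sketch of the scheme in Figure 8 is given below (the argument for Theorem 6.2 continues right after it). Bidders are modeled as value oracles, bundles are enumerated explicitly, and the revenue-maximizing provisional allocation is brute-forced over all winner sets, so the sketch is meant only for tiny instances; none of these representational choices is prescribed by the scheme itself.

```python
from itertools import chain, combinations

def nbea_auction(valuations, items, eps=1.0, max_rounds=10**5):
    """Non-anonymous bundle-price ascending auction in the spirit of Figure 8.
    valuations[i] maps a frozenset of items to that bidder's value."""
    n = len(valuations)
    bundles = [frozenset(c) for r in range(len(items) + 1)
               for c in combinations(sorted(items), r)]
    prices = [dict() for _ in range(n)]            # personalized bundle prices, default 0
    price = lambda i, S: prices[i].get(S, 0.0)
    for _ in range(max_rounds):
        # Each bidder demands a utility-maximizing bundle under her personal prices.
        demand = [max(bundles, key=lambda S: valuations[i](S) - price(i, S))
                  for i in range(n)]
        # Provisional allocation: a revenue-maximizing set of winners with disjoint demands.
        best_rev, winners = -1.0, set()
        for W in chain.from_iterable(combinations(range(n), r) for r in range(n + 1)):
            bs = [demand[i] for i in W]
            disjoint = all(bs[a].isdisjoint(bs[b])
                           for a in range(len(bs)) for b in range(a + 1, len(bs)))
            if disjoint:
                rev = sum(price(i, demand[i]) for i in W)
                if rev >= best_rev:                # ties resolved toward larger winner sets
                    best_rev, winners = rev, set(W)
        if winners == set(range(n)):               # every bidder gets the bundle he demanded
            return {i: demand[i] for i in range(n)}, prices
        for i in set(range(n)) - winners:          # losers' demanded bundles become pricier
            prices[i][demand[i]] = price(i, demand[i]) + eps
    raise RuntimeError("did not terminate within max_rounds")
```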
No bidder will demand a low-valued bundle, as long as the price of one of his high-valued bundles is below 1 (and thus gain him a utility greater than 1). Therefore, for eliciting any information about the low-valued bundles, the auctioneer should first arbitrarily choose a bidder (w.l.o.g bidder 1) and raise the prices of all the bundles (S1 1 , ..., S1 n) to be greater than 1. Since the prices cannot decrease, the other bidders will clearly never demand these bundles in future stages. An adversary may choose the values such that the low values of all the bidders for the bundles not in bidder 1``s partition are zero (i.e., vi(S1 j ) = 0 for every i = 1 and every j), however, allocating each bidder a different bundle from bidder 1``s partition, might achieve a welfare of n+1−(n−1)δ (bidder 1``s valuation is 2, and 1 − δ for all other bidders); If these bundles were wrongly allocated, only a welfare of 2 might be achieved (2 for bidder 1``s high-valued bundle, 0 for all other bidders). At this point, the auctioneer cannot have any information about the identity of the bundles with the non-zero values. Therefore, an adversary can choose the values of the bundles received by bidders 2, ..., n in the final allocation to be zero. We conclude that anonymous bundleprice auctions cannot guarantee a welfare greater than 2 for this class, where the optimal welfare can be arbitrarily close to n + 1. 6.2 Bundle Prices vs. Item Prices The core of the auctions in [37, 3] is the scheme described in Figure 8 (in the spirit of [35]) for auctions with nonanonymous bundle prices. Auctions from this scheme end up with the optimal allocation for any class of valuations. We denote this family of ascending auctions as NBEA auctions29 . NBEA auctions can elicit k-term XOR valuations by a polynomial (in k) number of steps , although the elicitation of such valuations may require an exponential number of item-price queries ([7]), and item-price ascending auctions cannot do it at all (Theorem 5.4). Nevertheless, we show that NBEA auctions (and in particular, iBundle(3) and the proxy auction) are sometimes inferior to simple item-price demand auctions. This may justify the use of hybrid auctions that use both linear and non-linear prices (e.g., the clock-proxy auction [10]). We show that auctions from this 29 Non-anonymous Bundle-price economically Efficient Ascending auctions. For completeness, we give in the full paper [9] a simple proof for the efficiency (up to an ) of auctions of this scheme . family may use an exponential number of queries even for determining the optimal allocation among two bidders with additive valuations30 , where such valuations can be elicited by a simple item-price ascending auction. We actually prove this property for a wider class of auctions we call conservative auctions. We also observe that in conservative auctions, allowing the bidders to submit all the bundles in their demand sets ensures that the auction runs a polynomial number of steps - if L is not too high (but with exponential communication, of course). An ascending auction is called conservative if it is nonanonymous, uses bundle prices initialized to zero and at every stage the auctioneer can only raise prices of bundles demanded by the bidders until this stage. In addition, each bidder can only receive bundles he demanded during the auction. Note that NBEA auctions are by definition conservative. Proposition 6.3. 
If every bidder demands a single bundle in each step of the auction, conservative auctions may run for an exponential number of steps even for additive valuations. If the bidders are allowed to submit all the bundles in their demand sets in each step, then conservative auctions can run in a polynomial number of steps for any profile of valuations, as long as the maximal valuation L is polynomial in m, n and 1 δ . Acknowledgments: The authors thank Moshe Babaioff, Shahar Dobzinski, Ron Lavi, Daniel Lehmann, Ahuva Mu``alem, David Parkes, Michael Schapira and Ilya Segal for helpful discussions. Supported by grants from the Israeli Academy of Sciences and the USAIsrael Binational Science Foundation. 7. REFERENCES [1] amazon. Web Page: http://www.amazon.com. [2] ebay. Web Page: http://www.ebay.com. [3] L. M. Ausubel and P. R. Milgrom. Ascending auctions with package bidding. Frontiers of Theoretical Economics, 1:1-42, 2002. [4] Lawrence Ausubel. An efficient dynamic auction for heterogeneous commodities, 2000. Working paper, University of Maryland. [5] Yair Bartal, Rica Gonen, and Noam Nisan. Incentive compatible multi unit combinatorial auctions. In TARK 03, 2003. [6] Alejandro Bertelsen. Substitutes valuations and m -concavity. M.Sc. Thesis, The Hebrew University of Jerusalem, 2005. [7] Avrim Blum, Jeffrey C. Jackson, Tuomas Sandholm, and Martin A. Zinkevich. Preference elicitation and query learning. Journal of Machine Learning Research, 5:649-667, 2004. [8] Liad Blumrosen and Noam Nisan. On the computational power of iterative auctions I: demand queries. Working paper, The Hebrew University of 30 Valuations are called additive if for any disjoint bundles A and B, v(A ∪ B) = v(A) + v(B). Additive valuations are both sub-additive and super-additive and are determined by the m values assigned for the singletons. 42 Jerusalem. Available from http://www.cs.huji.ac.il/˜noam/mkts.html. [9] Liad Blumrosen and Noam Nisan. On the computational power of iterative auctions II: ascending auctions. Working paper, The Hebrew University of Jerusalem. Available from http://www.cs.huji.ac.il/˜noam/mkts.html. [10] P. Cramton, L.M. Ausubel, and P.R. Milgrom. In P. Cramton and Y. Shoham and R. Steinberg (Editors), Combinatorial Auctions. Chapter 5. The Clock-Proxy Auction: A Practical Combinatorial Auction Design. MIT Press. Forthcoming, 2005. [11] P. Cramton, Y. Shoham, and R. Steinberg (Editors). Combinatorial Auctions. MIT Press. Forthcoming, 2005. [12] G. Demange, D. Gale, and M. Sotomayor. Multi-item auctions. Journal of Political Economy, 94:863-872, 1986. [13] Shahar Dobzinski, Noam Nisan, and Michael Schapira. Approximation algorithms for cas with complement-free bidders. In The 37th ACM symposium on theory of computing (STOC). , 2005. [14] Shahar Dobzinski and Michael Schapira. Optimal upper and lower approximation bounds for k-duplicates combinatorial auctions. Working paper, the Hebrew University. [15] Combinatorial bidding conference. Web Page: http://wireless.fcc.gov/auctions/conferences/combin2003. [16] Faruk Gul and Ennio Stacchetti. Walrasian equilibrium with gross substitutes. Journal of Economic Theory, 87:95 - 124, 1999. [17] Faruk Gul and Ennio Stacchetti. The english auction with differentiated commodities. Journal of Economic Theory, 92(3):66 - 95, 2000. [18] J. Hastad. Almost optimal lower bounds for small depth circuits. In 18th STOC, pages 6-20, 1986. [19] Ron Holzman, Noa Kfir-Dahav, Dov Monderer, and Moshe Tennenholtz. Bundling equilibrium in combinatrial auctions. 
Games and Economic Behavior, 47:104-123, 2004. [20] H. Karloff. Linear Programming. Birkh¨auser Verlag, 1991. [21] Frank Kelly and Richard Steinberg. A combinatorial auction with multiple winners for universal service. Management Science, 46:586-596, 2000. [22] A.S. Kelso and V.P. Crawford. Job matching, coalition formation, and gross substitute. Econometrica, 50:1483-1504, 1982. [23] Subhash Khot, Richard J. Lipton, Evangelos Markakis, and Aranyak Mehta. Inapproximability results for combinatorial auctions with submodular utility functions. In Working paper., 2004. [24] Sebastien Lahaie and David C. Parkes. Applying learning algorithms to preference elicitation. In EC 04. [25] Benny Lehmann, Daniel Lehmann, and Noam Nisan. Combinatorial auctions with decreasing marginal utilities. In ACM conference on electronic commerce. To appear, Games and Economic Behaviour., 2001. [26] D. Lehmann, L. O``Callaghan, and Y. Shoham. Truth revelation in approximately efficient combinatorial auctions. JACM, 49(5):577-602, Sept. 2002. [27] A. Mas-Collel, W. Whinston, and J. Green. Microeconomic Theory. Oxford university press, 1995. [28] Debasis Mishra and David Parkes. Ascending price vickrey auctions using primal-dual algorithms., 2004. Working paper, Harvard University. [29] Noam Nisan. The communication complexity of approximate set packing and covering. In ICALP 2002. [30] Noam Nisan. Bidding and allocation in combinatorial auctions. In ACM Conference on Electronic Commerce, 2000. [31] Noam Nisan. In P. Cramton and Y. Shoham and R. Steinberg (Editors), Combinatorial Auctions. Chapter 1. Bidding Languages. MIT Press. Forthcoming, 2005. [32] Noam Nisan and Ilya Segal. The communication requirements of efficient allocations and supporting prices, 2003. Working paper. Available from http://www.cs.huji.ac.il/˜noam/mkts.html Forthcoming in the Journal of Economic Theory. [33] Noam Nisan and Ilya Segal. Exponential communication inefficiency of demand queries, 2004. Working paper. Available from http://www.stanford.edu/ isegal/queries1. pdf. [34] D. C. Parkes and L. H. Ungar. An ascending-price generalized vickrey auction. Tech. Rep., Harvard University, 2002. [35] David Parkes. In P. Cramton and Y. Shoham and R. Steinberg (Editors), Combinatorial Auctions. Chapter 3. Iterative Combinatorial Auctions. MIT Press. Forthcoming, 2005. [36] David C. Parkes. Iterative combinatorial auctions: Achieving economic and computational efficiency. Ph.D.. Thesis, Department of Computer and Information Science, University of Pennsylvania., 2001. [37] David C. Parkes and Lyle H. Ungar. Iterative combinatorial auctions: Theory and practice. In AAAI/IAAI, pages 74-81, 2000. [38] Ariel Rubinstein. Why are certain properties of binary relations relatively more common in natural languages. Econometrica, 64:343-356, 1996. [39] Tuomas Sandholm. Algorithm for optimal winner determination in combinatorial auctions. In Artificial Intelligence, volume 135, pages 1-54, 2002. [40] P. Santi, V. Conitzer, and T. Sandholm. Towards a characterization of polynomial preference elicitation with value queries in combinatorial auctions. In The 17th Annual Conference on Learning Theory, 2004. [41] Ilya Segal. The communication requirements of social choice rules and supporting budget sets, 2004. Working paper. Available from http://www.stanford.edu/ isegal/rules. pdf. [42] P.R. Wurman and M.P. Wellman. Akba: A progressive, anonymous-price combinatorial auction. In Second ACM Conference on Electronic Commerce, 2000. [43] Martin A. 
Zinkevich, Avrim Blum, and Tuomas Sandholm. On polynomial-time preference elicitation with value queries. In ACM Conference on Electronic Commerce, 2003.
On the Computational Power of Iterative Auctions *

ABSTRACT
We embark on a systematic analysis of the power and limitations of iterative combinatorial auctions. Most existing iterative combinatorial auctions are based on repeatedly suggesting prices for bundles of items, and querying the bidders for their "demand" under these prices. We prove a large number of results showing the boundaries of what can be achieved by auctions of this kind. We first focus on auctions that use a polynomial number of demand queries, and then we analyze the power of different kinds of ascending-price auctions.

1. INTRODUCTION
Combinatorial auctions have recently received a lot of attention. In a combinatorial auction, a set M of m non-identical items is sold in a single auction to n competing bidders. The bidders have preferences regarding the bundles of items that they may receive. The preferences of bidder i are specified by a valuation function vi: 2^M → ℝ+, where vi(S) denotes the value that bidder i attaches to winning the bundle of items S. We assume "free disposal", i.e., that the vi's are monotone non-decreasing. The usual goal of the auctioneer is to optimize the social welfare Σ_i vi(Si), where the allocation S1...Sn must be a partition of the items. Applications include many complex resource allocation problems and, in fact, combinatorial auctions may be viewed as the common abstraction of many complex resource allocation problems. Combinatorial auctions face both economic and computational difficulties and are a central problem in the recently active border of economic theory and computer science. A forthcoming book [11] addresses many of the issues involved in the design and implementation of combinatorial auctions.

* This paper is a merger of two papers accepted to EC'05, merged at the request of the program committee. Full versions of the two papers [8, 9] can be obtained from our webpages.

The design of a combinatorial auction involves many considerations. In this paper we focus on just one central issue: the communication between bidders and the allocation mechanism, i.e., "preference elicitation". Transferring all information about bidders' preferences requires an infeasible (exponential in m) amount of communication. Thus, "direct revelation" auctions in which bidders simply declare their preferences to the mechanism are only practical for very small auction sizes or for very limited families of bidder preferences. We have therefore seen a multitude of suggested "iterative auctions" in which the auction protocol repeatedly interacts with the different bidders, aiming to adaptively elicit enough information about the bidders' preferences as to be able to find a good (optimal or close to optimal) allocation. Most of the suggested iterative auctions proceed by maintaining temporary prices for the bundles of items and repeatedly querying the bidders as to their preferences between the bundles under the current set of prices, and then updating the set of bundle prices according to the replies received (e.g., [22, 12, 17, 37, 3]). Effectively, such an iterative auction accesses the bidders' preferences by repeatedly making the following type of demand query to bidders: "Query to bidder i: a vector of bundle prices p = {p(S)}_{S⊆M}; Answer: a bundle of items S ⊆ M that maximizes vi(S) − p(S)." These types of queries are very natural in an economic setting as they capture the "revealed preferences" of the bidders.
Some auctions, called item-price or linear-price auctions, specify a price pi for each item, and the price of any given bundle S is always linear, p (S) = PicS pi. Other auctions, called bundle-price auctions, allow specifying arbitrary (non-linear) prices p (S) for bundles. Another important differentiation between models of iterative auctions is based on whether they use anonymous or non-anonymous prices: In some auctions the prices that are presented to the bidders are always the same (anonymous prices). In other auctions (non-anonymous), different bidders may face different (discriminatory) vectors of prices. In ascending-price auctions, forcing prices to be anonymous may be a significant restriction. In this paper, we embark on a systematic analysis of the computational power of iterative auctions that are based on demand queries. We do not aim to present auctions for practical use but rather to understand the limitations and possibilities of these kinds of auctions. In the first part of this paper, our main question is what can be done using a polynomial number of these types of queries? That is, polynomial in the main parameters of the problem: n, m and the number of bits t needed for representing a single value vi (S). Note that from an algorithmic point of view we are talking about sub-linear time algorithms: the input size here is really n (2m − 1) numbers--the descriptions of the valuation functions of all bidders. There are two aspects to computational efficiency in these settings: the first is the communication with the bidders, i.e., the number of queries made, and the second is the "usual" computational tractability. Our lower bounds will depend only on the number of queries--and hold independently of any computational assumptions like P = NP. Our upper bounds will always be computationally efficient both in terms of the number of queries and in terms of regular computation. As mentioned, this paper concentrates on the single aspect of preference elicitation and on its computational consequences and does not address issues of incentives. This strengthens our lower bounds, but means that the upper bounds require evaluation from this perspective also before being used in any real combinatorial auction .1 The second part of this paper studies the power of ascending - price auctions. Ascending auctions are iterative auctions where the published prices cannot decrease in time. In this work, we try to systematically analyze what do the differences between various models of ascending auctions mean. We try to answer the following questions: (i) Which models of ascending auctions can find the optimal allocation, and for which classes of valuations? (ii) In cases where the optimal allocation cannot be determined by ascending auctions, how well can such auctions approximate the social welfare? (iii) How do the different models for ascending auctions compare? Are some models computationally stronger than others? Ascending auctions have been extensively studied in the literature (see the recent survey by Parkes [35]). Most of this work presented' upper bounds', i.e., proposed mechanisms with ascending prices and analyzed their properties. 
A result which is closer in spirit to ours, is by Gul and Stacchetti [17], who showed that no item-price ascending auction can always determine the VCG prices, even for substitutes valuations .2 Our framework is more general than the traditional line of research that concentrates on the final allocation and 1We do observe however that some weak incentive property comes for free in demand-query auctions since "myopic" players will answer all demand queries truthfully. We also note that in some cases (but not always!) the incentives issues can be handled orthogonally to the preference elicitation issues, e.g., by using Vickrey-Clarke-Groves (VCG) prices (e.g., [4, 34]). 2We further discuss this result in Section 5.3. Figure 1: The diagram classifies the following auctions according to their properties: (1) The adaptation [12] for Kelso & Crawford's [22] auction. (2) The Proxy Auction [3] by Ausubel & Milgrom. (3) iBundle (3) by Parkes & Ungar [34]. (4) iBundle (2) by Parkes & Ungar [37]. (5) Our descending adaptation for the 2-approximation for submodular valuations by [25] (see Subsection 5.4). (6) Ausubel's [4] auction for substitutes valuations. (7) The adaptation by Nisan & Segal [32] of the O (/ m) approximation by [26]. (8) The duplicate-item auction by [5]. (9) Auction for Read-Once formulae by [43]. (10) The AkBA Auction by Wurman & Wellman [42]. payments and in particular, on reaching' Walrasian equilibria' or' Competitive equilibria'. A Walrasian equilibrium3 is known to exist in the case of Substitutes valuations, and is known to be impossible for any wider class of valuations [16]. This does not rule out other allocations by ascending auctions: in this paper we view the auctions as a computational process where the outcome - both the allocation and the payments - can be determined according to all the data elicited throughout the auction; This general framework strengthens our negative results .4 We find the study of ascending auctions appealing for various reasons. First, ascending auctions are widely used in many real-life settings from the FCC spectrum auctions [15] to almost any e-commerce website (e.g., [2, 1]). Actually, this is maybe the most straightforward way to sell items: ask the bidders what would they like to buy under certain prices, and increase the prices of over-demanded goods. Ascending auctions are also considered more intuitive for many bidders, and are believed to increase the "trust" of the bidders in the auctioneer, as they see the result gradually emerging from the bidders' responses. Ascending auctions also have other desired economic properties, e.g., they incur smaller information revelation (consider, for example, English auctions vs. second-price sealed bid auctions). 1.1 Extant Work Many iterative combinatorial auction mechanisms rely on demand queries (see the survey in [35]). Figure 1 summa3A Walrasian equilibrium is vector of item prices for which all the items are sold when each bidder receives a bundle in his demand set. 4In few recent auction designs (e.g., [4, 28]) the payments are not necessarily the final prices of the auctions. Figure 2: The best approximation factors currently achievable by computationally-efficient combinatorial auctions, for several classes of valuations. All lower bounds in the table apply to all iterative auctions (except the one marked by *); all upper bounds in the table are achieved with item-price demand queries. 
rizes the basic "classes" of auctions implied by combinations of the above properties and classifies some of the auctions proposed in the literature according to this classification. For our purposes, two families of these auctions serve as the main motivating starting points: the first is the ascending item-price auctions of [22, 17] that with computational efficiency find an optimal allocation among "(gross) substitutes" valuations, and the second is the ascending bundleprice auctions of [37, 3] that find an optimal allocation among general valuations--but not necessarily with computational efficiency. The main lower bound in this area, due to [32], states that indeed, due to inherent communication requirements, it is not possible for any iterative auction to find the optimal allocation among general valuations with sub-exponentially many queries. A similar exponential lower bound was shown in [32] also for even approximating the optimal allocation to within a factor of m1/2 − E. Several lower bounds and upper bounds for approximation are known for some natural classes of valuations--these are summarized in Figure 2. In [32], the universal generality of demand queries is also shown: any non-deterministic communication protocol for finding an allocation that optimizes the social welfare can be converted into one that only uses demand queries (with bundle prices). In [41] this was generalized also to nondeterministic protocols for finding allocations that satisfy other natural types of economic requirements (e.g., approximate social efficiency, envy-freeness). However, in [33] it was demonstrated that this "completeness" of demand queries holds only in the nondeterministic setting, while in the usual deterministic setting, demand queries (even with bundle prices) may be exponentially weaker than general communication. Bundle-price auctions are a generalization of (the more natural and intuitive) item-price auctions. It is known that indeed item-price auctions may be exponentially weaker: a nice example is the case of valuations that are a XOR of k bundles5, where k is small (say, polynomial). Lahaie and Parkes [24] show an economically-efficient bundle-price auction that uses a polynomial number of queries whenever k is polynomial. In contrast, [7] show that there exist valuations that are XORs of k =, / m bundles such that any item-price auction that finds an optimal allocation between them requires exponentially many queries. These results are part of a recent line of research ([7, 43, 24, 40]) that study the "preference elicitation" problem in combinatorial auctions and its relation to the "full elicitation" problem (i.e., learn5These are valuations where bidders have values for k specific packages, and the value of each bundle is the maximal value of one of these packages that it contains. ing the exact valuations of the bidders). These papers adapt methods from machine-learning theory to the combinatorialauction setting. The preference elicitation problem and the full elicitation problem relate to a well studied problem in microeconomics known as the integrability problem (see, e.g., [27]). This problem studies if and when one can derive the utility function of a consumer from her demand function. Paper organization: Due to the relatively large number of results we present, we start with a survey of our new results in Section 2. After describing our formal model in Section 3, we present our results concerning the power of demand queries in Section 4. 
Then, we describe the power of item-price ascending auctions (Section 5) and bundle-price ascending auctions (Section 6). Readers who are mainly interested in the self-contained discussion of ascending auctions can skip Section 4. Missing proofs from Section 4 can be found in part I of the full paper ([8]). Missing proofs from Sections 5 and 6 can be found in part II of the full paper ([9]). 2. A SURVEY OF OUR RESULTS Our systematic analysis is composed of the combination of a rather large number of results characterizing the power and limitations of various classes of auctions. In this section, we will present an exposition describing our new results. We first discuss the power of demand-query iterative auctions, and then we turn our attention to ascending auctions. Figure 3 summarizes some of our main results. 2.1 Demand Queries Comparison of query types We first ask what other natural types of queries could we imagine iterative auctions using? Here is a list of such queries that are either natural, have been used in the literature, or that we found useful. 1. Value query: The auctioneer presents a bundle S, the bidder reports his value v (S) for this bundle. 2. Marginal-value query: The auctioneer presents a bundle A and an item j, the bidder reports how much he is willing to pay for j, given that he already owns A, i.e., v (j | A) = v (A U {j}) − v (A). 3. Demand query (with item prices): The auctioneer presents a vector of item prices p1...pm; the bidder reports his demand under these prices, i.e., some set S that maximizes v (S) − Pi, S pi .6 Figure 3: This paper studies the economic efficiency of auctions that follow certain communication constraints. For each class of auctions, the table shows whether the optimal allocation can be achieved, or else, how well can it be approximated (both upper bounds and lower bounds). New results are highlighted. Abbreviations: "Poly." (Polynomial number/size), AA (ascending auctions). "-" means that nothing is currently known except trivial solutions. 4. Indirect-utility query: The auctioneer presents a set of item prices p1...pm, and the bidder responds with his "indirect-utility" under these prices, that is, the highest utility he can achieve from a bundle under these prices: maxScM (v (S) − PiES pi).7 5. Relative-demand query: the auctioneer presents a set of non-zero prices p1...pm, and the bidder reports the bundle that maximizes his value per unit of money, i.e., some set that maximizes v (S) P% ES p% .8 Theorem: Each of these queries can be efficiently (i.e., in time polynomial in n, m, and the number of bits of precision t needed to represent a single value vi (S)) simulated by a sequence of demand queries with item prices. In particular this shows that demand queries can elicit all information about a valuation by simulating all 2m − 1 value queries. We also observe that value queries and marginalvalue queries can simulate each other in polynomial time and that demand queries and indirect-utility queries can also simulate each other in polynomial time. We prove that exponentially many value queries may be needed in order to simulate a single demand query. It is interesting to note that for the restricted class of substitutes valuations, demand queries may be simulated by polynomial number of value queries [6]. Welfare approximation The next question that we ask is how well can a computationally-efficient auction that uses only demand queries approximate the optimal allocation? 
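Before turning to that question, the query types listed above can be made concrete with a small sketch of ours in Python (the explicit value table, the brute-force loops over all bundles, and the fixed tie-breaking order are illustrative assumptions, not the paper's constructions):

from itertools import chain, combinations

def bundles(items):
    # All subsets of `items`, in a fixed order (by size, then lexicographically);
    # this fixed order doubles as the tie-breaking rule.
    items = sorted(items)
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

class Bidder:
    # A bidder holding an explicitly stored valuation: a dict from frozenset to value,
    # assumed monotone (free disposal) with value 0 for the empty bundle.
    def __init__(self, valuation):
        self.v = dict(valuation)

    def value_query(self, S):
        return self.v[frozenset(S)]

    def marginal_value_query(self, j, A):
        # v(j | A) = v(A + j) - v(A)
        return self.value_query(set(A) | {j}) - self.value_query(A)

    def demand_query(self, prices):
        # A bundle maximizing v(S) minus the sum of the item prices in S
        # (the first maximizer in the fixed order wins ties).
        best, best_u = None, float("-inf")
        for S in bundles(prices):
            u = self.value_query(S) - sum(prices[j] for j in S)
            if u > best_u:
                best, best_u = frozenset(S), u
        return best

    def indirect_utility_query(self, prices):
        # The highest utility achievable under these prices.
        D = self.demand_query(prices)
        return self.value_query(D) - sum(prices[j] for j in D)

    def relative_demand_query(self, prices):
        # A non-empty bundle maximizing value per unit of money, v(S)/p(S),
        # for strictly positive prices.
        return max((frozenset(S) for S in bundles(prices) if S),
                   key=lambda S: self.value_query(S) / sum(prices[j] for j in S))

bidder = Bidder({frozenset(): 0, frozenset({"a"}): 4,
                 frozenset({"b"}): 3, frozenset({"a", "b"}): 6})
print(bidder.demand_query({"a": 2.0, "b": 2.0}))            # frozenset({'a'})
print(bidder.indirect_utility_query({"a": 2.0, "b": 2.0}))  # 2.0

In the small example, the bundles {a} and {a, b} both give utility 2 under prices (2, 2); the fixed iteration order decides which one is reported, which is why all of the results here are stated for an arbitrary fixed tie-breaking rule.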
Two separate obstacles are known: In [32], a lower bound of min (n, m1/2 −), for any fixed e> 0, was shown for the approximation factor apply for any fixed tie breaking rule. 7This is exactly the utility achieved by the bundle which would be returned in a demand query with the same prices. This notion relates to the Indirect-utility function studied in the Microeconomic literature (see, e.g., [27]). 8Note that when all the prices are 1, the bidder actually reports the bundle with the highest per-item price. We found this type of query useful, for example, in the design of the approximation algorithm described in Figure 5 in Section 4.2. obtained using any polynomial amount of communication. A computational bound with the same value applies even for the case of single-minded bidders, but under the assumption of NP = ZPP [39]. As noted in [32], the computationallyefficient greedy algorithm of [26] can be adapted to become a polynomial-time iterative auction that achieves a nearly matching approximation factor of min (n, O (, / m)). This iterative auction may be implemented with bundle-price demand queries but, as far as we see, not as one with item prices. Since in a single bundle-price demand query an exponential number of prices can be presented, this algorithm can have an exponential communication cost. In Section 4.2, we describe a different item-price auction that achieves the same approximation factor with a polynomial number of queries (and thus with a polynomial communication). Theorem: There exists a computationally-efficient iterative auction with item-price demand queries that finds an allocation that approximates the optimal welfare between arbitrary valuations to within a factor of min (n, O (, / m)). One may then attempt obtaining such an approximation factor using iterative auctions that use only the weaker value queries. However, we show that this is impossible: Theorem: Any iterative auction that uses a polynomial (in n and m) number of value queries cannot achieve an approximation factor that is better than O (m Note however that auctions with only value queries are not completely trivial in power: the bundling auctions of Holzman et al. [19] can easily be implemented by a polynomial number of value queries and can achieve an approximation factor of O (m logm) by using O (log m) equi-sized bundles. We do not know how to close the (tiny) gap between this upper bound and our lower bound. Representing bundle-prices We then deal with a critical issue with bundle-price auctions that was side-stepped by our model, as well as by all previous works that used bundle-price auctions: how are the bundle prices represented? For item-price auctions this is not an issue since a query needs only to specify a small number, m, of prices. In bundle-price auctions that situation is more difficult since there are exponentially many bundles that require pricing. Our basic model (like all previous work that used bundle prices, e.g., [37, 34, 3]), ignores this issue, and only requires that the prices be determined, somehow, by the protocol. A finer model would fix a specific language for denoting bundle prices, force the protocol to represent the bundle-prices in this language, and require that the representations of the bundle-prices also be polynomial. What could such a language for denoting prices for all bundles look like? First note that specifying a price for each bundle is equivalent to specifying a valuation. 
Second, as noted in [31], most of the proposed bidding languages are really just languages for representing valuations, i.e., a syntactic representation of valuations--thus we could use any of them. This point of view opens up the general issue of which language should be used in bundle-price auctions and what are the implications of this choice. Here we initiate this line of investigation. We consider bundle-price auctions where the prices must be given as a XOR-bid, i.e., the protocol must explicitly indicate the price of every bundle whose value is different than that of all of its proper subsets. Note that all bundle-price auctions that do not explicitly specify a bidding language must implicitly use this language or a weaker one, since without a specific language one would need to list prices for all bundles, perhaps except for trivial ones (those with value 0, or more generally, those with a value that is determined by one of their proper subsets.) We show that once the representation length of bundle prices is taken into account (using the XOR-language), bundle-price auctions are no more strictly stronger than item-price auctions. Define the cost of an iterative auction as the total length of the queries and answers used throughout the auction (in the worst case). Theorem: For some class of valuations, bundle price auctions that use the XOR-language require an exponential cost for finding the optimal allocation. In contrast, item-price auctions can find the optimal allocation for this class within polynomial cost .10 This put doubts on the applicability of bundle-price auctions like [3, 37], and it may justify the use of "hybrid" pricing methods such as Ausubel, Cramton and Milgrom's Clock-Proxy auction ([10]). Demand queries and linear programs The winner determination problem in combinatorial auctions may be formulated as an integer program. In many cases solving the linear-program relaxation of this integer program is useful: for some restricted classes of valuations it finds the optimum of the integer program (e.g., substitute valuations [22, 17]) or helps approximating the optimum (e.g., by randomized rounding [13, 14]). However, the linear program has an exponential number of variables. Nisan and Segal [32] observed the surprising fact that despite the ex10Our proof relies on the sophisticated known lower bounds for constant depth circuits. We were not able to find an elementary proof. ponential number of variables, this linear program may be solved within polynomial communication. The basic idea is to solve the dual program using the Ellipsoid method (see, e.g., [20]). The dual program has a polynomial number of variables, but an exponential number of constraints. The Ellipsoid algorithm runs in polynomial time even on such programs, provided that a "separation oracle" is given for the set of constraints. Surprisingly, such a separation oracle can be implemented using a single demand query (with item prices) to each of the bidders. The treatment of [32] was somewhat ad-hoc to the problem at hand (the case of substitute valuations). Here we give a somewhat more general form of this important observation. Let us call the following class of linear programs "generalized-winner-determination-relaxation (GWDR) LPs": The case where wi = 1, di = 1, qj = 1 (for every i, j) is the usual linear relaxation of the winner determination problem. 
More generally, wi may be viewed as the weight given to bidder i's welfare, qj may be viewed as the quantity of units of good j, and di may be viewed as duplicity of the number of bidders of type i. Theorem: Any GWDR linear program may be solved in polynomial time (in n, m, and the number of bits of precision t) using only demand queries with item prices .11 2.2 Ascending Auctions Ascending item-price auctions: It is well known that the item-price ascending auctions of Kelso and Crawford [22] and its variants [12, 16] find the optimal allocation as long as all players' valuations have the substitutes property. The obvious question is whether the optimal allocation can be found for a larger class of valuations. Our main result here is a strong negative result: Theorem: There is a 2-item 2-player problem where no ascending item-price auction can find the optimal allocation. This is in contrast to both the power of bundle-price ascending auctions and to the power of general item-price demand queries (see above), both of which can always find the optimal allocation and in fact even provide full preference elicitation. The same proof proves a similar impossibility result for other types of auctions (e.g., descending auctions, non-anonymous auctions). More extension of this result: 9 Eliciting some classes of valuations requires an exponential number of ascending item-price trajectories. 11The produced optimal solution will have polynomial support and thus can be listed fully. • At least k − 1 ascending item-price trajectories are needed to elicit XOR formulae with k terms. This result is in some sense tight, since we show that any k-term XOR formula can be fully elicited by k − 1 nondeterministic (i.e., when some exogenous "teacher" instructs the auctioneer on how to increase the prices) ascending auctions .12 We also show that item-price ascending auctions and iterative auctions that are limited to a polynomial number of queries (of any kind, not necessarily ascending) are incomparable in their power: ascending auctions, with small enough increments, can elicit the preferences in cases where any polynomial number of queries cannot. Motivated by several recent papers that studied the relation between eliciting and fully-eliciting the preferences in combinatorial auctions (e.g., [7, 24]), we explore the difference between these problems in the context of ascending auctions. We show that although a single ascending auction can determine the optimal allocation among any number of bidders with substitutes valuations, it cannot fully-elicit such a valuation even for a single bidder. While it was shown in [25] that the set of substitutes valuations has measure zero in the space of general valuations, its dimension is not known, and in particular it is still open whether a polynomial amount of information suffices to describe a substitutes valuation. While our result may be a small step in that direction (a polynomial full elicitation may still be possible with other communication protocols), we note that our impossibility result also holds for valuations in the class OXS defined by [25], valuations that we are able to show have a compact representation. We also give several results separating the power of different models for ascending combinatorial auctions that use item-prices: we prove, not surprisingly, that adaptive ascending auctions are more powerful than oblivious ascending auctions and that non-deterministic ascending auctions are more powerful than deterministic ascending auctions. 
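For concreteness, the kind of anonymous item-price ascending auction that these results speak about can be sketched as a simple tatonnement loop in the spirit of the auctions of [22, 12, 16]; the increment, the termination rule, and the final assignment below are our own simplifications, and for general valuations no loop of this form can guarantee an optimal allocation (that is exactly the negative result above).

def ascending_item_price_auction(items, bidders, demand, eps):
    # Anonymous item-price ascending auction, heavily simplified.
    # `demand(b, prices)` answers bidder b's demand query under the given item prices.
    # Prices start at zero and only go up; items wanted by two or more bidders
    # ("over-demanded") get their price raised by eps.
    prices = {j: 0.0 for j in items}
    while True:
        demands = {b: set(demand(b, prices)) for b in bidders}
        over = {j for j in items if sum(j in D for D in demands.values()) >= 2}
        if not over:
            break                       # reported demands are pairwise disjoint
        for j in over:
            prices[j] += eps
    # Assign each bidder the bundle he last demanded; leftover items can be added
    # to any bundle because of free disposal. The loop terminates because prices
    # are bounded by the maximal valuation, but for valuations beyond substitutes
    # the resulting allocation need not be optimal.
    return demands, prices

With a small enough increment this mimics the tatonnement processes cited above for substitutes valuations; the point of the results in this paper is that no single ascending price trajectory of this kind suffices once valuations are more general.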
We also compare different kinds of non-anonymous auctions (e.g., simultaneous or sequential), and observe that anonymous bundle-price auctions and non-anonymous item-price auctions are incomparable in their power. Finally, motivated by Dutch auctions, we consider descending auctions, and how they compare to ascending ones; we show classes of valuations that can be elicited by ascending item-price auctions but not by descending item-price auctions, and vice versa. Ascending bundle-price auctions: All known ascending bundle-price auctions that are able to find the optimal allocation between general valuations (with "free disposal") use non-anonymous prices. Anonymous ascending-price auctions (e.g., [42, 21, 37]) are only known to be able to find the optimal allocation among superadditive valuations or few other simple classes ([36]). We show that this is no mistake: Theorem: No ascending auction with anonymous prices can find the optimal allocation between general valuations. 12Non-deterministic computation is widely used in CS and also in economics (e.g, a Walrasian equilibrium or [38]). In some settings, deterministic and non-deterministic models have equal power (e.g., computation with finite automata). This bound is regardless of the running time, and it also holds for descending auctions and non-deterministic auctions. We strengthen this result significantly by showing that anonymous ascending auctions cannot produce a better than O (, / m) approximation--the approximation ratio that can be achieved with a polynomial number of queries ([26, 32]) and, as mentioned, with a polynomial number of item-price demand queries. The same lower bound clearly holds for anonymous item-price ascending auctions since such auctions can be simulated by anonymous bundle-price ascending auctions. We currently do not have any lower bound on the approximation achievable by non-anonymous item-price ascending auctions. Finally, we study the performance of the existing computationally-efficient ascending auctions. These protocols ([37, 3]) require exponential time in the worst case, and this is unavoidable as shown by [32]. However, we also observe that these auctions, as well as the whole class of similar ascending bundle-price auctions, require an exponential time even for simple additive valuations. This is avoidable and indeed the ascending item-price auctions of [22] can find the optimal allocation for these simple valuations with polynomial communication. 3. THE MODEL 3.1 Discrete Auctions for Continuous Values Our model aims to capture iterative auctions that operate on real-valued valuations. There is a slight technical difficulty here in bridging the gap between the discrete nature of an iterative auction, and the continuous nature of the valuations. This is exactly the same problem as in modeling a simple English auction. There are three standard formal ways to model it: 1. Model the auction as a continuous process and study its trajectory in time. For example, the so-called Japanese auction is basically a continuous model of an English model .13 2. Model the auction as discrete and the valuations as continuously valued. In this case we introduce a parameter a and usually require the auction to produce results that are a-close to optimal. 3. Model the valuations as discrete. In this case we will assume that all valuations are integer multiples of some small fixed quantity S, e.g., 1 penny. All communication in this case is then naturally finite. 
In this paper we use the latter formulation and assume that all values are multiples of some S. Thus, in some parts of the paper we assume without loss of generality that S = 1, hence all valuations are integral. Almost all (if not all) of our results can be translated to the other two models with little effort. 3.2 Valuations A single auctioneer is selling m indivisible non-homogeneous items in a single auction, and let M be the set of these 13Another similar model is the "moving knives" model in the cake-cutting literature. items and N be the set of bidders. Each one of the n bidders in the auction has a valuation function vi: 2m--+ {0, S, 2S,..., L}, where for every bundle of items S C _ M, vi (S) denotes the value of bidder i for the bundle S and is a multiple of S in the range 0...L. We will sometimes denote the number of bits needed to represent such values in the range 0...L by t = log L. We assume free disposal, i.e., S C _ T implies vi (S) <vi (T) and that vi (0) = 0 for all bidders. We will mention the following classes of valuations: • A valuation is called sub-modular if for all sets of items A and B we have that v (A U B) + v (A n B) <v (A) + v (B). • A valuation is called super-additive if for all disjoint sets of items A and B we have that v (AUB)> v (A) + v (B). • A valuation is called a k-bundle XOR if it can be represented as a XOR combination of at most k atomic bids [30], i.e., if there are at most k bundles Si and prices pi such that for all S, v (S) = maxi | SDSipi. Such valuations will be denoted by v = (S1: p1) ® (S2: p2) ®...® (Sk: pk).14 3.3 Iterative Auctions The auctioneer sets up a protocol (equivalently an "algorithm"), where at each stage of the protocol some information q--termed the "query"--is sent to some bidder i, and then bidder i replies with some reply that depends on the query as well as on his own valuation. In this paper, we assume that we have complete control over the bidders' behavior, and thus the protocol also defines a reply function ri (q, vi) that specifies bidder i's reply to query q. The protocol may be adaptive: the query value as well as the queried bidder may depend on the replies received for past queries. At the end of the protocol, an allocation S1...Sn must be declared, where Si n Sj = 0 for i = j. We say that the auction finds an optimal allocation if it finds the allocation that maximizes the social welfare Pi vi (Si). We say that it finds a c-approximation if Pi vi (Si)> Pi vi (Ti) / c where T1...Tn is an optimal allocation. The running time of the auction on a given instance of the bidders' valuations is the total number of queries made on this instance. The running time of a protocol is the worst case cost over all instances. Note that we impose no computational limitations on the protocol or on the players .15 This of course only strengthens our hardness results. Yet, our positive results will not use this power and will be efficient also in the usual computational sense. Our goal will be to design computationally-efficient protocols. We will deem a protocol computationally-efficient if its cost is polynomial in the relevant parameters: the number of bidders n, the number of items m, and t = log L, where L is the largest possible value of a bundle. However, when we discuss ascending-price auctions and their variants, a computationally-efficient protocol will be required to be 14For example, v = (abcd: 5) ® (ab: 3) ® (c: 4) denotes the XOR valuation with the terms abcd, ab, c and prices 5, 3, 4 respectively. 
For this valuation, v (abcd) = 5, v (abd) = 3, v (abc) = 4. 15The running time really measures communication costs and not computational running time. "pseudo-polynomial", i.e., it should ask a number of queries which is polynomial in m, n and L. This is because that ascending auctions can usually not achieve such running times (consider even the English auction on a single item).16 Note that all of our results give concrete bounds, where the dependence on the parameters is given explicitly; we use the standard big-Oh notation just as a shorthand. We say than an auction elicits some class V of valuations, if it determines the optimal allocation for any profile of valuations drawn from V; We say that an auction fully elicits some class of valuations V, if it can fully learn any single valuation v E V (i.e., learn v (S) for every S). 3.4 Demand Queries and Ascending Auctions Most of the paper will be concerned with a common special case of iterative auctions that we term "demand auctions". In such auctions, the queries that are sent to bidders are demand queries: the query specifies a price p (S) E R + for each bundle S. The reply of bidder i is simply the set most desired--"demanded"--under these prices. Formally, a set S that maximizes vi (S) − p (S). It may happen that more than one set S maximizes this value. In which case, ties are broken according to some fixed tie-breaking rule, e.g., the lexicographically first such set is returned. All of our results hold for any fixed tie-breaking rule. Ascending auctions are iterative auctions with non-decreasing prices: Note that the term "ascending auction" refers to an auction with a single ascending trajectory of prices. It may be useful to define multi-trajectory ascending auctions, in which the prices maybe reset to zero a number of times (see, e.g., [4]). We consider two main restrictions on the types of allowed demand queries: Note that even though in our model valuations are integral (or multiples of some S), we allow the demand query to 16Most of the auctions we present may be adapted to run in time polynomial in log L, using a binary-search-like procedure, losing their ascending nature. 17Note that a non-anonymous auction can clearly be simulated by n parallel anonymous auctions. use arbitrary real numbers in R +. That is, we assume that the increment a we use in the ascending auctions may be significantly smaller than S. All our hardness results hold for any e, even for continuous price increments. A practical issue here is how will the query be specified: in the general case, an exponential number of prices needs to be sent in a single query. Formally, this is not a problem as the model does not limit the length of queries in any way--the protocol must simply define what the prices are in terms of the replies received for previous queries. We look into this issue further in Section 4.3. 4. THE POWER OF DEMAND QUERIES In this section, we study the power of iterative auctions that use demand queries (not necessarily ascending). We start by comapring demand queries to other types of queries. Then, we discuss how well can one approximate the optimal welfare using a polynomial number of demand queries. We also initiate the study of the representation of bundle-price demand queries, and finally, we show how demand queries help solving the linear-programming relaxation of combinatorial auctions in polynomial time. 
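As a warm-up for the comparison of query types, here is one natural way to recover a single value v(S) using only item-price demand queries; this is a sketch of ours, not necessarily the paper's exact construction. It walks up a chain of bundles inside S and binary-searches each marginal value, pricing items already "owned" at zero and all items outside the bundle prohibitively high; the helper names and the probing at half-grid prices are our choices.

def simulate_value_query(demand, items, S, L, delta):
    # Recover v(S) from item-price demand queries only. `demand(prices)` answers
    # one bidder's demand query; values are assumed to be multiples of `delta`
    # in {0, delta, ..., L}, and the valuation is assumed monotone (free disposal).
    BLOCK = L + 1.0                     # a price no single item can ever be worth
    total, owned = 0.0, set()           # the chain of owned items grows one item at a time
    for j in S:
        # Binary-search the marginal value v(j | owned) over {0, delta, ..., L}.
        lo, hi = 0, int(round(L / delta))
        while lo < hi:
            mid = (lo + hi) // 2
            probe = (mid + 0.5) * delta         # strictly between grid points: no ties
            prices = {k: BLOCK for k in items}
            for k in owned:
                prices[k] = 0.0
            prices[j] = probe
            if j in demand(prices):             # v(j | owned) > probe
                lo = mid + 1
            else:                               # v(j | owned) < probe
                hi = mid
        total += lo * delta                     # v(j | owned)
        owned.add(j)
    return total

With delta = 1 this costs at most m times roughly log2(L + 1) demand queries, matching (up to rounding) the mt bound of Lemma 4.2 in the next subsection.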
4.1 The Power of Different Types of Queries In this section we compare the power of the various types of queries defined in Section 2. We will present computationally - efficient simulations of these query types using item-price demand queries. In Section 5.1 we show that these simulations can also be done using item-price ascending auctions. LEMMA 4.2. A value query can be simulated by mt demand queries (where t = log L is the number of bits needed to represent a single bundle value).18 As a direct corollary we get that demand auctions can always fully elicit the bidders' valuations by simulating all possible 2m − 1 queries and thus elicit enough information for determining the optimal allocation. Note, however, that this elicitation may be computationally inefficient. The next lemma shows that demand queries can be exponentially more powerful than value queries. LEMMA 4.3. An exponential number of value queries may be required for simulating a single demand query. Indirect utility queries are, however, equivalent in power to demand queries: LEMMA 4.4. An indirect-utility query can be simulated by mt + 1 demand queries. A demand query can be simulated by m + 1 indirect-utility queries. Demand queries can also simulate relative-demand queries:19 18Note that t bundle-price demand queries can easily simulate a value query by setting the prices of all the bundles except S (the bundle with the unknown value) to be L, and performing a binary search on the price of S. 19Note: although in our model values are integral (our multiples of S), we allow the query prices to be arbitrary real num Figure 4: Each entry in the table specifies how many queries of this row are needed to simulate a query from the relevant column. Abbreviations: V (value query), MV (marginal-value query), D (demand query), IU (Indirect-utility query), RD (relative demand query). LEMMA 4.5. Relative-demand queries can be simulated by a polynomial number of demand queries. According to our definition of relative-demand queries, they clearly cannot simulate even value queries. Figure 4 summarizes the relations between these query types. 4.2 Approximating the Social Welfare with Value and Demand Queries We know from [32] that iterative combinatorial auctions that only use a polynomial number of queries cannot find an optimal allocation among general valuations and in fact cannot even approximate it to within a factor better than min {n, m1/2 − E}. In this section we ask how well can this approximation be done using demand queries with item prices, or using the weaker value queries. We show that, using demand queries, the lower bound can be matched, while value queries can only do much worse. Figure 5 describes a polynomial-time algorithm that achieves a min (n, O (, / m)) approximation ratio. This algorithm greedily picks the bundles that maximize the bidders' per-item value (using "relative-demand" queries, see Section 4.1). As a final step, it allocates all the items to a single bidder if it improves the social welfare (this can be checked using value queries). Since both value queries and relative-demand queries can be simulated by a polynomial number of demand queries with item prices (Lemmas 4.2 and 4.5), this algorithm can be implemented by a polynomial number of demand queries with item prices .20 THEOREM 4.6. The auction described in Figure 5 can be implemented by a polynomial number of demand queries and achieves a min {n, 4, / m} - approximation for the social welfare. 
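The greedy auction behind Theorem 4.6 is short enough to sketch directly. The following is our reading of the description around Figure 5: repeatedly take the (bidder, bundle) pair with the highest per-item value among the remaining bidders and items, and finally check whether giving everything to a single bidder does better. The relative-demand step is implemented here by brute force; in the paper it is a relative-demand query, itself reducible to item-price demand queries.

from itertools import chain, combinations

def greedy_approximation(items, bidders, value):
    # `value(i, S)` answers bidder i's value query for bundle S (monotone valuations).
    def best_per_item(i, avail):
        # Bidder i's non-empty bundle of still-available items with the highest
        # value per item (a relative-demand query with unit prices).
        subsets = chain.from_iterable(combinations(sorted(avail), r)
                                      for r in range(1, len(avail) + 1))
        return max(subsets, key=lambda S: value(i, frozenset(S)) / len(S))

    avail, active = set(items), set(bidders)
    allocation = {i: frozenset() for i in bidders}
    while avail and active:
        # Take the (bidder, bundle) pair with the highest per-item value.
        i, S = max(((i, best_per_item(i, avail)) for i in active),
                   key=lambda p: value(p[0], frozenset(p[1])) / len(p[1]))
        allocation[i], active, avail = frozenset(S), active - {i}, avail - set(S)
    welfare = sum(value(i, allocation[i]) for i in bidders if allocation[i])
    # Final step: giving the grand bundle to a single bidder may be better.
    best_single = max(bidders, key=lambda i: value(i, frozenset(items)))
    if value(best_single, frozenset(items)) > welfare:
        allocation = {i: frozenset() for i in bidders}
        allocation[best_single] = frozenset(items)
    return allocation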
We now ask how well can the optimal welfare be approximated by a polynomial number of value queries. First we note that value queries are not completely powerless: In [19] it is shown that if the m items are split into k fixed bundles of size m/k each, and these fixed bundles are auctioned as though each was indivisible, then the social welfare bers, thus we may have bundles with arbitrarily close relative demands. In this sense the simulation above is only up to any given a (and the number of queries is O (log L + log 1)). When the relative-demand query prices are given as rational numbers, exact simulation is implied when log a is linear in the input length. 20In the full paper [8], we observe that this algorithm can be implemented by two descending item-price auctions (where we allow removing items along the auction). generated by such an auction is at least mk-approximation of that possible in the original auction. Notice that such an auction can be implemented by 2k − 1 value queries to each bidder--querying the value of each bundle of the fixed bundles. Thus, if we choose k = log m bundles we get an m log m-approximation while still using a polynomial number of queries. The following lemma shows that not much more is possible using value queries: LEMMA 4.7. Any iterative auction that uses only value queries and distinguishes between k-tuples of 0/1 valuations where the optimal allocation has value 1, and those where the optimal allocation has value k requires at least 2 mk queries. PROOF. Consider the following family of valuations: for every S, such that | S |> m/2, v (S) = 1, and there exists a single set T, such that for | S | m/2, v (S) = 1 iff T S and v (S) = 0 otherwise. Now look at the behavior of the protocol when all valuations vi have T = {1...m}. Clearly in this case the value of the best allocation is 1 since no set of size m2 or lower has non-zero value for any player. Fix the sequence of queries and answers received on this k-tuple of valuations. Now consider the k-tuple of valuations chosen at random as follows: a partition of the m items into k sets T1...Tk each of size mk each is chosen uniformly at random among all such partitions. Now consider the k-tuple of valuations from our family that correspond to this partition--clearly Ti can be allocated to i, for each i, getting a total value of k. Now look at the protocol when running on these valuations and compare its behavior to the original case. Note that the answer to a query S to player i differs between the case of Ti and the original case of T = {1...m} only if | S | m2 and Ti S. Since Ti is distributed uniformly among all sets of size exactly mk, we have that for any fixed query S to player i, where | S | m2, „ | S | "| Ti | 2 mk m Using the union-bound, if the original sequence of queries was of length less than 2mk, then with positive probability none of the queries in the sequence would receive a different answer than for the original input tuple. This is forbidden since the protocol must distinguish between this case and the original case--which cannot happen if all queries receive the same answer. Hence there must have been at least 2 mk queries for the original tuple of valuations. We conclude that a polynomial time protocol that uses only value queries cannot obtain a better than O (m log m) approximation of the welfare: O (m log m) - approximation for the social welfare. PROOF. 
Immediate from Lemma 4.7: achieving any approximation ratio k which is asymptotically greater than log m needs an exponential number of value queries. m An Approximation Algorithm: Initialization: Let T +--M be the current items for sale. Let K +--N be the currently participating bidders. Set: s * i = si, K = K \ i, M = M \ Si Finally: Ask the bidders for their values vi (M) for the grand bundle. If allocating all the items to some bidder i improves the social welfare achieved so far (i.e.,] i E N such that vi (M)> PiEN vi (S * i)), then allocate all items to this bidder i. Figure 5: This algorithm achieves a min {n, 4, / m} approximation for the social welfare, which is asymptotically the best worst-case approximation possible with polynomial communication. This algorithm can be implemented with a polynomial number of demand queries. 4.3 The Representation of Bundle Prices In this section we explicitly fix the language in which bundle prices are presented to the bidders in bundle-price auctions. This language requires the algorithm to explicitly list the price of every bundle with a non-trivial price. "Trivial" in this context is a price that is equal to that of one of its proper subsets (which was listed explicitly). This representation is equivalent to the XOR-language for expressing valuations. Formally, each query q is given by an expression: q = (S1: p1) (S2: p2)... (Sl: pl). In this representation, the price demanded for every set S is simply p (S) = max {k = 1...l | SkCS} pk. DEFINITION 4. The length of the query q = (S1: p1) (S2: p2)... (Sl: pl) is l. The cost of an algorithm is the sum of the lengths of the queries asked during the operation of the algorithm on the worst case input. Note that under this definition, bundle-price auctions are not necessarily stronger than item-price auctions. An itemprice query that prices each item for 1, is translated to an exponentially long bundle-price query that needs to specify the price | S | for each bundle S. But perhaps bundle-price auctions can still find optimal allocations whenever itemprice auction can, without directly simulating such queries? We show that this is not the case: indeed, when the representation length is taken into account, bundle price auctions are sometimes seriously inferior to item price auctions. Consider the following family of valuations: Each item is valued at 3, except that for some single set S, its value is a bit more: 3 | S | + b, where b {0, 1, 2}. Note that an item price auction can easily find the optimal allocation between any two such valuations: Set the prices of each item to 3 + e; if the demand sets of the two players are both empty, then b = 0 for both valuations, and an arbitrary allocation is fine. If one of them is empty and the other non-empty, allocate the non-empty demand set to its bidder, and the rest to the other. If both demand sets are non-empty then, unless they form an exact partition, we need to see which b is larger, which we can do by increasing the price of a single item in each demand set. We will show that any bundle-price auction that uses only the XOR-language to describe bundle prices requires an exponential cost (which includes the sum of all description lengths of prices) to find an optimal allocation between any two such valuations. LEMMA 4.9. Every bundle-price auction that uses XORexpressions to denote bundle prices requires 20 (1xm) cost in order to find the optimal allocation among two valuations from the above family. 
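To make the contrast with Lemma 4.9 concrete, here is a sketch of the item-price procedure described above for this family. We assume the natural monotone completion, namely that a bidder values a bundle T at 3|T| plus his bonus b whenever T contains his special bundle; the concrete choices of epsilon = 1/(2m) and a surcharge of 1 are our reading of "increasing the price of a single item in each demand set".

def elicit_three_plus_bonus(items, demand1, demand2):
    # Item-price elicitation for two bidders from the family above.
    # `demand1(prices)` / `demand2(prices)` answer the bidders' demand queries.
    # Returns an optimal allocation (bundle_for_1, bundle_for_2); prices only go up.
    m = len(items)
    eps = 1.0 / (2 * m)
    prices = {j: 3.0 + eps for j in items}
    D1, D2 = set(demand1(prices)), set(demand2(prices))  # each is the special bundle, or empty if b = 0

    if not D1 and not D2:                        # both bonuses are 0: any allocation is fine
        return set(items), set()
    if not D1:                                   # only bidder 2 has a positive bonus
        return set(items) - D2, D2
    if not D2:
        return D1, set(items) - D1
    if not (D1 & D2):                            # disjoint special bundles: both bonuses fit
        return D1 | (set(items) - D1 - D2), D2

    # The special bundles intersect, so at most one bonus can be collected:
    # find out whose bonus (1 or 2) is larger by surcharging one item per bundle.
    j1 = next(iter(D1))
    prices[j1] += 1.0
    b1_is_2 = D1 <= set(demand1(prices))         # still demanded iff the bonus is 2
    if j1 in D2:                                 # D2 already carries exactly one surcharge
        b2_is_2 = D2 <= set(demand2(prices))
    else:
        j2 = next(iter(D2 - {j1}))
        prices[j2] += 1.0
        b2_is_2 = D2 <= set(demand2(prices))
    if b1_is_2 or not b2_is_2:                   # bidder 1's bonus is at least bidder 2's
        return D1, set(items) - D1
    return set(items) - D2, D2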
The complication in the proof stems from the fact that using XOR-expressions, the length of the price description depends on the number of bundles whose price is strictly larger than that of each of their subsets--this may be significantly smaller than the number of bundles that have a non-zero price. (The proof becomes easy if we require the protocol to explicitly name every bundle with a non-zero price.) We do not know of any elementary proof for this lemma (although we believe that one can be found). Instead we reduce the problem to a well-known lower bound in boolean circuit complexity [18] stating that boolean circuits of depth 3 that compute the majority function on m variables require size 2^{Ω(√m)}.

4.4 Demand Queries and Linear Programming

Consider the following linear-programming relaxation for the generalized winner-determination problem in combinatorial auctions (the "primal" program):

  Maximize   Σ_{i∈N} Σ_{S⊆M} w_i · x_{i,S} · v_i(S)
  subject to Σ_{i∈N} Σ_{S : j∈S} x_{i,S} ≤ q_j      for each item j ∈ M
             Σ_{S⊆M} x_{i,S} ≤ d_i                  for each bidder i ∈ N
             x_{i,S} ≥ 0                            for each i ∈ N, S ⊆ M

Note that the primal program has an exponential number of variables. Yet, we will be able to solve it in polynomial time using demand queries to the bidders. The solution will have a polynomial-size support (non-zero values for x_{i,S}), and thus we will be able to describe it in polynomial time. Here is its dual:

  Minimize   Σ_{i∈N} d_i · u_i + Σ_{j∈M} q_j · p_j
  subject to u_i + Σ_{j∈S} p_j ≥ w_i · v_i(S)       for each i ∈ N, S ⊆ M
             p_j ≥ 0, u_i ≥ 0                       for each j ∈ M, i ∈ N

Notice that the dual problem has exactly n + m variables but an exponential number of constraints. Thus, the dual can be solved using the Ellipsoid method in polynomial time--if a "separation oracle" can be implemented in polynomial time. Recall that a separation oracle, when given a possible solution, either confirms that it is a feasible solution, or responds with a constraint that is violated by the possible solution. We construct a separation oracle for solving the dual program, using a single demand query to each of the bidders. Consider a possible solution (u, p) for the dual program. We can re-write the constraints of the dual program as:

  u_i / w_i ≥ v_i(S) − Σ_{j∈S} p_j / w_i            for each i ∈ N, S ⊆ M

Now a demand query to bidder i with prices p_j/w_i reveals exactly the set S that maximizes the RHS of the previous inequality. Thus, in order to check whether (u, p) is feasible it suffices to (1) query each bidder i for his demand D_i under the prices p_j/w_i; (2) check only the n constraints u_i + Σ_{j∈D_i} p_j ≥ w_i · v_i(D_i) (where v_i(D_i) can be simulated using a polynomial sequence of demand queries as shown in Lemma 4.2). If none of these is violated then we are assured that (u, p) is feasible; otherwise we get a violated constraint. What is left to be shown is how the primal program can be solved. (Recall that the primal program has an exponential number of variables.) Since the Ellipsoid algorithm runs in polynomial time, it encounters only a polynomial number of constraints during its operation. Clearly, if all other constraints were removed from the dual program, it would still have the same solution (adding constraints can only decrease the space of feasible solutions). Now take the "reduced dual" where only the constraints encountered exist, and look at its dual. It will have the same solution as the original dual and hence of the original primal. However, look at the form of this "dual of the reduced dual". It is just a version of the primal program with a polynomial number of variables--those corresponding to constraints that remained in the reduced dual. Thus, it can be solved in polynomial time, and this solution clearly solves the original primal program, setting all other variables to zero.
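The separation oracle just described is short enough to sketch directly; the function names, the dictionary-based interfaces, and the small numerical tolerance below are our assumptions.

def dual_separation_oracle(u, p, bidders, weights, demand, value):
    # Separation oracle for the dual program above, using one demand query per bidder.
    # u: dict bidder -> u_i, p: dict item -> p_j, weights: dict bidder -> w_i.
    # `demand(i, prices)` answers bidder i's demand query; `value(i, S)` answers a
    # value query (which, by Lemma 4.2, can itself be simulated by demand queries).
    # Returns None if (u, p) is feasible, otherwise a violated constraint (i, S).
    for i in bidders:
        w = weights[i]
        scaled = {j: p[j] / w for j in p}        # prices p_j / w_i
        D = demand(i, scaled)                    # maximizes v_i(S) - sum_{j in S} p_j / w_i
        # The most violated constraint for bidder i, if any, is the one for S = D.
        if u[i] + sum(p[j] for j in D) < w * value(i, D) - 1e-9:
            return (i, frozenset(D))             # u_i + sum_{j in D} p_j >= w_i v_i(D) fails
    return None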
We first show that this power is not trivial: such auctions can in general elicit an exponential amount of information. On the other hand, we show that the optimal allocation cannot always be determined by a single ascending auction, and in some cases, nor by an exponential number of ascending-price trajectories. Finally, we separate the power of different models of ascending auctions. 5.1 The Power of Item-Price Ascending Auctions We first show that if small enough increments are allowed, a single ascending trajectory of item-prices can elicit preferences that cannot be elicited with polynomial communication. As mentioned, all our hardness results hold for any increment, even infinitesimal. THEOREM 5.1. Some classes of valuations can be elicited by item-price ascending auctions, but cannot be elicited by a polynomial number of queries of any kind. PROOF. (sketch) Consider two bidders with v (S) = 1 if ISI> n2, v (S) = 0 if S <n2 andevery S such that S = n2 has an unknown value from {0, 11. Due to [32], determining the optimal allocation here requires exponential communication in the worst case. Nevertheless, we show (see [9]) that an item-price ascending auction can do it, as long as it can use exponentially small increments. We now describe another positive result for the power of item-price ascending auctions. In section 4.1, we showed Figure 6: No item-price ascending auctions can determine the optimal allocation for this class of valuations. that a value query can be simulated with a (truly) polynomial number of item-price demand queries. Here, we show that every value query can be simulated by a (pseudo) polynomial number of ascending item-price demand queries. (In the next subsection, we show that we cannot always simulate even a pair of value queries using a single item-price ascending auction.) In the full paper (part II, [9]), we show that we can simulate other types of queries using item-price ascending auctions. PROPOSITION 5.2. A value query can be simulated by an item-price ascending auction. This simulation requires a polynomial number of queries. Actually, the proof for Proposition 5.2 proves a stronger useful result regarding the information elicited by iterative auctions. It says that in any iterative auction in which the changes of prices are small enough in each stage ("pseudocontinuous" auctions), the value of all bundles demanded during the auction can be computed. The basic idea is that when the bidder moves from demanding some bundle Ti to demanding another bundle Ti +1, there is a point in which she is indifferent between these two bundles. Thus, knowing the value of some demanded bundle (e.g., the empty set) enables computing the values of all other demanded bundles. We say that an auction is "pseudo-continuous", if it only uses demand queries, and in each step, the price of at most one item is changed by e (for some e G (0, S]) with respect to the previous query. PROPOSITION 5.3. Consider any pseudo-continuous auction (not necessarily ascending), in which bidder i demands the empty set at least once along the auction. Then, the value of every bundle demanded by bidder i throughout the auction can be calculated at the end of the auction. 5.2 Limitations of Item-Price Ascending Auctions Although we observed that demand queries can solve any combinatorial auction problem, when the queries are restricted to be ascending, some classes of valuations cannot be elicited nor fully-elicited. An example for such class of valuations is given in Figure 6. THEOREM 5.4. 
There are classes of valuations that cannot be elicited nor fully elicited by any item-price ascending auction. PROOF. Let bidder 1 have the valuation described in the first row of Figure 6, where a and,3 are unknown values in (0, 1). First, we prove that this class cannot be fully elicited by a single ascending auction. Specifically, an ascending auction cannot reveal the values of both a and,3. As long as pa and pb are both below 1, the bidder will always demand the whole bundle ab: her utility from ab is strictly greater than the utility from either a or b separately. For example, we show that u1 (ab)> u1 (a): u1 (ab) = 2--(pa + pb) = 1--pa + 1--pb> vA (a)--pa + 1--pb> u1 (a) Thus, in order to gain any information about a or,3, the price of one of the items should become at least 1, w.l.o.g. pa> 1. But then, the bundle a will not be demanded by bidder 1 throughout the auction, thus no information at all will be gained about a. Now, assume that bidder 2 is known to have the valuation described in the second row of Figure 6. The optimal allocation depends on whether a is greater than,3 (in bidder 1's valuation), and we proved that an ascending auction cannot determine this. The proof of the theorem above shows that for an unknown value to be revealed, the price of one item should be greater than 1, and the other price should be smaller than 1. Therefore, in a price-monotonic trajectory of prices, only one of these values can be revealed. An immediate conclusion is that this impossibility result also holds for item-price descending auctions. Since no such trajectory exists, then the same conclusion even holds for non-deterministic itemprice auctions (in which exogenous data tells us how to increase the prices). Also note that since the hardness stems from the impossibility to fully-elicit a valuation of a single bidder, this result also holds for non-anonymous ascending item-price auctions. 5.3 Limitations of Multi-Trajectory Ascending Auctions According to Theorem 5.4, no ascending item-price auction can always elicit the preferences (we prove a similar result for bundle prices in section 6). But can two ascending trajectories do the job? Or a polynomial number of ascending trajectories? We give negative answers for such suggestions. We define a k-trajectory ascending auction as a demandquery iterative auction in which the demand queries can be partitioned to k sets of queries, where the prices published in each set only increase in time. Note that we use a general definition; It allows the trajectories to run in parallel or sequentially, and to use information elicited in some trajectories for determining the future queries in other trajectories. The power of multiple-trajectory auctions can be demonstrated by the negative result of Gul and Stacchetti [17] who showed that even for an auction among substitutes valuations, an anonymous ascending item-price auction cannot compute VCG prices for all players .21 Ausubel [4] overcame this impossibility result and designed auctions that do compute VCG prices by organizing the auction as a sequence of n + 1 ascending auctions. Here, we prove that one cannot elicit XOR valuations with k terms by less than k--1 ascending trajectories. 
On the other hand, we show that an XOR formula can be fully elicited by k--1 non-deterministic ascending auctions (or by k--1 deterministic ascending auctions if the auctioneer knows the atomic bundles).22 21A recent unpublished paper by Mishra and Parkes extends this result, and shows that non-anonymous prices with bundle-prices are necessary in order that an ascending auction will end up with a "universal-competitive-equilibrium" (that leads to VCG payments). 22This result actually separates the power of deterministic PROPOSITION 5.5. XOR valuations with k terms cannot be elicited (or fully elicited) by any (k-2) - trajectory itemprice ascending auction, even when the atomic bundles are known to the elicitor. However, these valuations can be elicited (and fully elicited) by (k-1) - trajectory non-deterministic non-anonymous item-price ascending auctions. Moreover, an exponential number of trajectories is required for eliciting some classes of valuations: PROPOSITION 5.6. Elicitation and full-elicitation of some classes of valuations cannot be done by any k-trajectory itemprice ascending auction, where k = o (2m). PROOF. (sketch) Consider the following class of valuations: For | S | <m2, v (S) = 0 and for | S |> m2, v (S) = 2; every bundle S of size m2 has some unknown value in (0, 1). We show ([9]) that a single item-price ascending auction can reveal the value of at most one bundle of size n2, and therefore an exponential number of ascending trajectories is needed in order to elicit such valuations. We observe that the algorithm we presented in Section 4.2 can be implemented by a polynomial number of ascending auctions (each item-price demand query can be considered as a separate ascending auction), and therefore a min (n, 4, / m) - approximation can be achieved by a polynomial number of ascending auctions. We do not currently have a better upper bound or any lower bound. 5.4 Separating the Various Models of Ascending Auctions Various models for ascending auctions have been suggested in the literature. In this section, we compare the power of the different models. As mentioned, all auctions are considered anonymous and deterministic, unless specified otherwise. Ascending vs. Descending Auctions: We begin the discussion of the relation between ascending auctions and descending auctions with an example. The algorithm by Lehmann, Lehmann and Nisan [25] can be implemented by a simple item-price descending auction (see the full paper for details [9]). This algorithm guarantees at least half of the optimal efficiency for submodular valuations. However, we are not familiar with any ascending auction that guarantees a similar fraction of the efficiency. This raises a more general question: can ascending auctions solve any combinatorialauction problem that is solvable using a descending auction (and vice versa)? We give negative answers to these questions. The idea behind the proofs is that the information that the auctioneer can get "for free" at the beginning of each type of auction is different .23 and non-deterministic iterative auctions: our proof shows that a non-deterministic iterative auction can elicit the kterm XOR valuations with a polynomial number of demand queries, and [7] show that this elicitation must take an exponential number of demand queries. 23In ascending auctions, the auctioneer can reveal the most valuable bundle (besides M) before she starts raising the prices, thus she can use this information for adaptively choose the subsequent queries. 
In descending auctions, one can easily find the bundle with the highest average per-item price, keeping all other bundles with non-positive utilities, and use this information in the adaptive price change. PROPOSITION 5.7. There are classes that cannot be elicited (fully elicited) by ascending item-price auctions, but can be elicited (resp. fully elicited) with a descending item-price auction. PROPOSITION 5.8. There are classes that cannot be elicited (fully elicited) by item-price descending auctions, but can be elicited (resp. fully elicited) by item-price ascending auctions. Deterministic vs. Non-Deterministic Auctions: Nondeterministic ascending auctions can be viewed as auctions where some benevolent teacher that has complete information guides the auctioneer on how she should raise the prices. That is, preference elicitation can be done by a non-deterministic ascending auction, if there is some ascending trajectory that elicits enough information for determining the optimal allocation (and verifying that it is indeed optimal). We show that non-deterministic ascending auctions are more powerful than deterministic ascending auctions: PROPOSITION 5.9. Some classes can be elicited (fully elicited) by an item-price non-deterministic ascending auction, but cannot be elicited (resp. fully elicited) by item-price deterministic ascending auctions. Anonymous vs. Non-Anonymous Auctions: As will be shown in Section 6, the power of anonymous and nonanonymous bundle-price ascending auctions differs significantly. Here, we show that a difference also exists for itemprice ascending auctions. PROPOSITION 5.10. Some classes cannot be elicited by anonymous item-price ascending auctions, but can be elicited by a non-anonymous item-price ascending auction. Sequential vs. Simultaneous Auctions: A non-anonymous auction is called simultaneous if at each stage, the price of some item is raised by e for every bidder. The auctioneer can use the information gathered until each stage, in all the personalized trajectories, to determine the next queries. A non-anonymous auction is called sequential if the auctioneer performs an auction for each bidder separately, in sequential order. The auctioneer can determine the next query based on the information gathered in the trajectories completed so far and on the history of the current trajectory. Adaptive vs. Oblivious Auctions: If the auctioneer determines the queries regardless of the bidders' responses (i.e., the queries are predefined) we say that the auction is oblivious. Otherwise, the auction is adaptive. We prove that an adaptive behaviour of the auctioneer may be beneficial. PROPOSITION 5.12. There are classes that cannot be elicited (fully elicited) using oblivious item-price ascending auctions, but can be elicited (resp. fully elicited) by an adaptive item-price ascending auction. 5.5 Preference Elicitation vs. Full Elicitation Preference elicitation and full elicitation are closely related problems. If full elicitation is "easy" (e.g., in polynomial time) then clearly elicitation is also easy (by a nonanonymous auction, simply by learning all the valuations separately24). On the other hand, there are examples where preference elicitation is considered "easy" but learning is hard (typically, elicitation requires smaller amount of information; some examples can be found in [7]). 
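The first, easy direction mentioned above -- full elicitation implies elicitation -- amounts to the following brute-force reduction. This is only a sketch: `fully_elicit` is a hypothetical per-bidder learning routine, and the exhaustive search over allocations illustrates the logical reduction, not a polynomial-time procedure.

from itertools import product

def elicit_via_full_elicitation(items, bidders, fully_elicit):
    # `fully_elicit(i)` returns bidder i's complete table {frozenset(S): v_i(S)},
    # including the empty bundle. Learning each valuation separately and then
    # optimizing over the learned tables yields the optimal allocation.
    items, bidders = sorted(items), list(bidders)
    tables = {i: fully_elicit(i) for i in bidders}
    best_welfare, best_alloc = -1.0, None
    # Try every way of assigning each item to one of the bidders
    # (with free disposal this loses nothing).
    for owners in product(range(len(bidders)), repeat=len(items)):
        alloc = {i: frozenset(j for j, o in zip(items, owners) if bidders[o] == i)
                 for i in bidders}
        welfare = sum(tables[i][alloc[i]] for i in bidders)
        if welfare > best_welfare:
            best_welfare, best_alloc = welfare, alloc
    return best_alloc, best_welfare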
The tatonnement algorithms by [22, 12, 16] end up with the optimal allocation for substitutes valuations .25 We prove that we cannot fully elicit substitutes valuations (or even their sub-class of OXS valuations defined in [25]), even for a single bidder, by an item-price ascending auction (although the optimal allocation can be found by an ascending auction for any number of bidders!) . THEOREM 5.13. Substitute valuations cannot be fully elicited by ascending item-price auctions. Moreover, they cannot be fully elicited by any m2 ascending trajectories (m> 3). Whether substitutes valuations have a compact representation (i.e., polynomial in the number of goods) is an important open question. As a step in this direction, we show that its sub-class of OXS valuations does have a compact representation: every OXS valuation can be represented by at most m2 values .26 LEMMA 5.14. Any OXS valuation can be represented by no more than m2 values. 6. BUNDLE-PRICE ASCENDING AUCTIONS All the ascending auctions in the literature that are proved to find the optimal allocation for unrestricted valuations are non-anonymous bundle-price auctions (iBundle (3) by Parkes and Ungar [37] and the "Proxy Auction" by Ausubel and Milgrom [3]). Yet, several anonymous ascending auctions have been suggested (e.g., AkBA [42], [21] and iBundle (2) [37]). In this section, we prove that anonymous bundle-price ascending auctions achieve poor results in the worst-case. We also show that the family of non-anonymous bundleprice ascending auctions can run exponentially slower than simple item-price ascending auctions. 6.1 Limitations of Anonymous Bundle-Price Ascending Auctions We present a class of valuations that cannot be elicited by anonymous bundle-price ascending auctions. These valuations are described in Figure 7. The basic idea: for determining some unknown value of one bidder we must raise 24Note that an anonymous ascending auction cannot necessarily elicit a class that can be fully elicited by an ascending auction. 25Substitute valuations are defined, e.g., in [16]. Roughly speaking, a bidder with a substitute valuation will continue demand a certain item after the price of some other items was increased. For completeness, we present in the full paper [9] a proof for the efficiency of such auctions for substitutes valuations. 26A unit-demand valuation is an XOR valuation in which all the atomic bundles are singletons. OXS valuations can be interpreted as an aggregation ("OR") of any number of unit-demand bidders. Figure 7: Anonymous ascending bundle-price auctions cannot determine the optimal allocation for this class of valuations. a price of a bundle that should be demanded by the other bidder in the future. THEOREM 6.1. Some classes of valuations cannot be elicited by anonymous bundle-price ascending auctions. PROOF. Consider a pair of XOR valuations as described in Figure 7. For finding the optimal allocation we must know which value is greater between a and 3.27 However, we cannot learn the value of both a and 3 by a single ascending trajectory: assume w.l.o.g. that bidder 1 demands cd before bidder 2 demands bd (no information will be elicited if none of these happens). In this case, the price for bd must be greater than 1 (otherwise, bidder 1 prefers bd to cd). Thus, bidder 2 will never demand the bundle bd, and no information will be elicited about 3. The valuations described in the proof of Theorem 6.1 can be easily elicited by a non-anonymous item-price ascending auction. 
On the other hand, the valuations in Figure 6 can be easily elicited by an anonymous bundle-price ascending auction. We conclude that the power of these two families of ascending auctions is incomparable. We strengthen the impossibility result above by showing that anonymous bundle-price auctions cannot even achieve better than a min {O (n), O (\ / m)} - approximation for the social welfare. This approximation ratio can be achieved with polynomial communication, and specifically with a polynomial number of item-price demand queries .28 THEOREM 6.2. An anonymous bundle-price ascending auc Jm tion cannot guarantee better than a min {n2, 2} approximation for the optimal welfare. PROOF. (Sketch) Assume we have n bidders and n2 items for sale, and that n is prime. We construct n2 distinct bundles with the following properties: for each bidder, we define a partition Si = (Si1,..., Sin) of the n2 items to n bundles, such that any two bundles from different partitions intersect. In the full paper, part II [9] we show an explicit construction using the properties of linear functions over finite fields. The rest of the proof is independent of the specific construction. Using these n2 bundles we construct a "hard-to-elicit" class. Every bidder has an atomic bid, in his XOR valuation, for each of these n2 bundles. A bidder i has a value of 2 for any bundle Sij in his partition. For all bundles in the other partitions, he has a value of either 0 or of 1 − S, and these values are unknown to the auctioneer. Since every pair of bundles from different partitions intersect, only one bidder can receive a bundle with a value of 2. 27If a> 3, the optimal allocation will allocate cd to bidder 1 and ab to bidder 2. Otherwise, we give bd to bidder 2 and ac to bidder 1. Note that both bidders cannot gain a value of 2 in the same allocation, due to the intersections of the high-valued bundles. 28Note that bundle-price queries may use exponential communication, thus the lower bound of [32] does not hold. Non-anonymous Bundle-Price Economically-Efficient Ascending Auctions: Initialization: All prices are initialized to zero (non-anonymous bundle prices). Repeat: - Each bidder submits a bundle that maximizes his utility under his current personalized prices. - The auctioneer calculates a provisional allocation that maximizes his revenue under the current prices. - The prices of bundles that were demanded by losing bidders are increased by E. Finally: Terminate when the provisional allocation assigns to each bidder the bundle he demanded. Figure 8: Auctions from this family (denoted by NBEA auctions) are known to achieve the optimal welfare. No bidder will demand a low-valued bundle, as long as the price of one of his high-valued bundles is below 1 (and thus gain him a utility greater than 1). Therefore, for eliciting any information about the low-valued bundles, the auctioneer should first arbitrarily choose a bidder (w.l.o.g bidder 1) and raise the prices of all the bundles (S11,..., S1n) to be greater than 1. Since the prices cannot decrease, the other bidders will clearly never demand these bundles in future stages. 
An adversary may choose the values such that the low values of all the bidders for the bundles not in bidder 1's partition are zero (i.e., v_i(S_1j) = 0 for every i ≠ 1 and every j); however, allocating each bidder a different bundle from bidder 1's partition might achieve a welfare of n + 1 − (n − 1)ε (bidder 1's valuation is 2, and 1 − ε for all other bidders). If these bundles were wrongly allocated, only a welfare of 2 might be achieved (2 for bidder 1's high-valued bundle, 0 for all other bidders). At this point, the auctioneer cannot have any information about the identity of the bundles with the non-zero values. Therefore, an adversary can choose the values of the bundles received by bidders 2, ..., n in the final allocation to be zero. We conclude that anonymous bundle-price auctions cannot guarantee a welfare greater than 2 for this class, where the optimal welfare can be arbitrarily close to n + 1.

6.2 Bundle Prices vs. Item Prices

The core of the auctions in [37, 3] is the scheme described in Figure 8 (in the spirit of [35]) for auctions with non-anonymous bundle prices. Auctions from this scheme end up with the optimal allocation for any class of valuations. We denote this family of ascending auctions as NBEA auctions. [Footnote 29: Non-anonymous Bundle-price Economically-efficient Ascending auctions. For completeness, we give in the full paper [9] a simple proof for the efficiency (up to an ε) of auctions of this scheme.] NBEA auctions can elicit k-term XOR valuations by a polynomial (in k) number of steps, although the elicitation of such valuations may require an exponential number of item-price queries ([7]), and item-price ascending auctions cannot do it at all (Theorem 5.4). Nevertheless, we show that NBEA auctions (and in particular, iBundle(3) and the "proxy" auction) are sometimes inferior to simple item-price demand auctions. This may justify the use of hybrid auctions that use both linear and non-linear prices (e.g., the clock-proxy auction [10]). We show that auctions from this family may use an exponential number of queries even for determining the optimal allocation among two bidders with additive valuations, where such valuations can be elicited by a simple item-price ascending auction. We actually prove this property for a wider class of auctions we call conservative auctions. We also observe that in conservative auctions, allowing the bidders to submit all the bundles in their demand sets ensures that the auction runs a polynomial number of steps, as long as L is not too high (but with exponential communication, of course).

An ascending auction is called conservative if it is non-anonymous, uses bundle prices initialized to zero, and at every stage the auctioneer can only raise prices of bundles demanded by the bidders until this stage. In addition, each bidder can only receive bundles he demanded during the auction. Note that NBEA auctions are by definition conservative.

PROPOSITION 6.3. If every bidder demands a single bundle in each step of the auction, conservative auctions may run for an exponential number of steps even for additive valuations. If the bidders are allowed to submit all the bundles in their demand sets in each step, then conservative auctions can run in a polynomial number of steps for any profile of valuations, as long as the maximal valuation L is polynomial in m, n and 1/ε.

Acknowledgments: The authors thank Moshe Babaioff, Shahar Dobzinski, Ron Lavi, Daniel Lehmann, Ahuva Mu'alem, David Parkes, Michael Schapira and Ilya Segal for helpful discussions.
Supported by grants from the Israeli Academy of Sciences and the USA-Israel Binational Science Foundation.
On the Computational Power of Iterative Auctions*

(*This paper is a merger of two papers accepted to EC'05, merged at the request of the program committee. Full versions of the two papers [8, 9] can be obtained from our webpages.)

ABSTRACT
We embark on a systematic analysis of the power and limitations of iterative combinatorial auctions. Most existing iterative combinatorial auctions are based on repeatedly suggesting prices for bundles of items, and querying the bidders for their "demand" under these prices. We prove a large number of results showing the boundaries of what can be achieved by auctions of this kind. We first focus on auctions that use a polynomial number of demand queries, and then we analyze the power of different kinds of ascending-price auctions.

1. INTRODUCTION
Combinatorial auctions have recently received a lot of attention. In a combinatorial auction, a set M of m non-identical items are sold in a single auction to n competing bidders. The bidders have preferences regarding the bundles of items that they may receive. The preferences of bidder i are specified by a valuation function v_i : 2^M → R_+, where v_i(S) denotes the value that bidder i attaches to winning the bundle of items S. We assume "free disposal", i.e., that the v_i's are monotone non-decreasing. The usual goal of the auctioneer is to optimize the social welfare Σ_i v_i(S_i), where the allocation S_1, ..., S_n must be a partition of the items. Applications include many complex resource allocation problems and, in fact, combinatorial auctions may be viewed as the common abstraction of many complex resource allocation problems. Combinatorial auctions face both economic and computational difficulties and are a central problem in the recently active border of economic theory and computer science. A forthcoming book [11] addresses many of the issues involved in the design and implementation of combinatorial auctions.

The design of a combinatorial auction involves many considerations. In this paper we focus on just one central issue: the communication between bidders and the allocation mechanism — "preference elicitation". Transferring all information about bidders' preferences requires an infeasible (exponential in m) amount of communication. Thus, "direct revelation" auctions in which bidders simply declare their preferences to the mechanism are only practical for very small auction sizes or for very limited families of bidder preferences. We have therefore seen a multitude of suggested "iterative auctions" in which the auction protocol repeatedly interacts with the different bidders, aiming to adaptively elicit enough information about the bidders' preferences as to be able to find a good (optimal or close to optimal) allocation. Most of the suggested iterative auctions proceed by maintaining temporary prices for the bundles of items and repeatedly querying the bidders as to their preferences between the bundles under the current set of prices, and then updating the set of bundle prices according to the replies received (e.g., [22, 12, 17, 37, 3]). Effectively, such an iterative auction accesses the bidders' preferences by repeatedly making the following type of demand query to bidders: "Query to bidder i: a vector of bundle prices p = {p(S)}_{S ⊆ M}; Answer: a bundle of items S ⊆ M that maximizes v_i(S) − p(S)." These types of queries are very natural in an economic setting as they capture the "revealed preferences" of the bidders.
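As a small illustration of the demand query just defined, the sketch below answers such a query by brute force for a hypothetical bidder with an XOR valuation; the item set, the valuation, and the prices are made-up examples, not taken from the paper.

```python
# A small, self-contained sketch of the demand query defined above:
# given a price for every bundle, the bidder reports a bundle S that
# maximizes v_i(S) - p(S).  The XOR valuation and the prices used here
# are hypothetical examples, not taken from the paper.
from itertools import chain, combinations

ITEMS = frozenset({"a", "b", "c"})

def powerset(items):
    s = list(items)
    return (frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1)))

def xor_valuation(atomic_bids):
    """XOR valuation: v(S) = max value of an atomic bundle contained in S."""
    def v(S):
        return max((val for bundle, val in atomic_bids.items() if bundle <= S),
                   default=0.0)
    return v

# Hypothetical bidder: XOR of two atomic bids.
v_i = xor_valuation({frozenset({"a", "b"}): 10.0, frozenset({"c"}): 4.0})

# Hypothetical bundle prices, here induced by item prices (a linear,
# i.e. item-price, query); arbitrary non-linear bundle prices work too.
item_prices = {"a": 3.0, "b": 3.0, "c": 2.0}
def p(S):
    return sum(item_prices[x] for x in S)

def demand_query(v, p):
    """Answer: a bundle maximizing the utility v(S) - p(S)."""
    return max(powerset(ITEMS), key=lambda S: v(S) - p(S))

if __name__ == "__main__":
    S = demand_query(v_i, p)
    print(sorted(S), "utility:", v_i(S) - p(S))
```

The brute-force enumeration over all 2^m bundles is exponential in m, which is one way to see why the number and kind of queries, rather than local computation alone, is the resource studied in what follows.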
Some auctions, called item-price or linear-price auctions, specify a price pi for each item, and the price of any given bundle S is always linear, p (S) = PicS pi. Other auctions, called bundle-price auctions, allow specifying arbitrary (non-linear) prices p (S) for bundles. Another important differentiation between models of iterative auctions is based on whether they use anonymous or non-anonymous prices: In some auctions the prices that are presented to the bidders are always the same (anonymous prices). In other auctions (non-anonymous), different bidders may face different (discriminatory) vectors of prices. In ascending-price auctions, forcing prices to be anonymous may be a significant restriction. In this paper, we embark on a systematic analysis of the computational power of iterative auctions that are based on demand queries. We do not aim to present auctions for practical use but rather to understand the limitations and possibilities of these kinds of auctions. In the first part of this paper, our main question is what can be done using a polynomial number of these types of queries? That is, polynomial in the main parameters of the problem: n, m and the number of bits t needed for representing a single value vi (S). Note that from an algorithmic point of view we are talking about sub-linear time algorithms: the input size here is really n (2m − 1) numbers--the descriptions of the valuation functions of all bidders. There are two aspects to computational efficiency in these settings: the first is the communication with the bidders, i.e., the number of queries made, and the second is the "usual" computational tractability. Our lower bounds will depend only on the number of queries--and hold independently of any computational assumptions like P = NP. Our upper bounds will always be computationally efficient both in terms of the number of queries and in terms of regular computation. As mentioned, this paper concentrates on the single aspect of preference elicitation and on its computational consequences and does not address issues of incentives. This strengthens our lower bounds, but means that the upper bounds require evaluation from this perspective also before being used in any real combinatorial auction .1 The second part of this paper studies the power of ascending - price auctions. Ascending auctions are iterative auctions where the published prices cannot decrease in time. In this work, we try to systematically analyze what do the differences between various models of ascending auctions mean. We try to answer the following questions: (i) Which models of ascending auctions can find the optimal allocation, and for which classes of valuations? (ii) In cases where the optimal allocation cannot be determined by ascending auctions, how well can such auctions approximate the social welfare? (iii) How do the different models for ascending auctions compare? Are some models computationally stronger than others? Ascending auctions have been extensively studied in the literature (see the recent survey by Parkes [35]). Most of this work presented' upper bounds', i.e., proposed mechanisms with ascending prices and analyzed their properties. 
A result which is closer in spirit to ours, is by Gul and Stacchetti [17], who showed that no item-price ascending auction can always determine the VCG prices, even for substitutes valuations .2 Our framework is more general than the traditional line of research that concentrates on the final allocation and 1We do observe however that some weak incentive property comes for free in demand-query auctions since "myopic" players will answer all demand queries truthfully. We also note that in some cases (but not always!) the incentives issues can be handled orthogonally to the preference elicitation issues, e.g., by using Vickrey-Clarke-Groves (VCG) prices (e.g., [4, 34]). 2We further discuss this result in Section 5.3. Figure 1: The diagram classifies the following auctions according to their properties: (1) The adaptation [12] for Kelso & Crawford's [22] auction. (2) The Proxy Auction [3] by Ausubel & Milgrom. (3) iBundle (3) by Parkes & Ungar [34]. (4) iBundle (2) by Parkes & Ungar [37]. (5) Our descending adaptation for the 2-approximation for submodular valuations by [25] (see Subsection 5.4). (6) Ausubel's [4] auction for substitutes valuations. (7) The adaptation by Nisan & Segal [32] of the O (/ m) approximation by [26]. (8) The duplicate-item auction by [5]. (9) Auction for Read-Once formulae by [43]. (10) The AkBA Auction by Wurman & Wellman [42]. payments and in particular, on reaching' Walrasian equilibria' or' Competitive equilibria'. A Walrasian equilibrium3 is known to exist in the case of Substitutes valuations, and is known to be impossible for any wider class of valuations [16]. This does not rule out other allocations by ascending auctions: in this paper we view the auctions as a computational process where the outcome - both the allocation and the payments - can be determined according to all the data elicited throughout the auction; This general framework strengthens our negative results .4 We find the study of ascending auctions appealing for various reasons. First, ascending auctions are widely used in many real-life settings from the FCC spectrum auctions [15] to almost any e-commerce website (e.g., [2, 1]). Actually, this is maybe the most straightforward way to sell items: ask the bidders what would they like to buy under certain prices, and increase the prices of over-demanded goods. Ascending auctions are also considered more intuitive for many bidders, and are believed to increase the "trust" of the bidders in the auctioneer, as they see the result gradually emerging from the bidders' responses. Ascending auctions also have other desired economic properties, e.g., they incur smaller information revelation (consider, for example, English auctions vs. second-price sealed bid auctions). 1.1 Extant Work Many iterative combinatorial auction mechanisms rely on demand queries (see the survey in [35]). Figure 1 summa3A Walrasian equilibrium is vector of item prices for which all the items are sold when each bidder receives a bundle in his demand set. 4In few recent auction designs (e.g., [4, 28]) the payments are not necessarily the final prices of the auctions. Figure 2: The best approximation factors currently achievable by computationally-efficient combinatorial auctions, for several classes of valuations. All lower bounds in the table apply to all iterative auctions (except the one marked by *); all upper bounds in the table are achieved with item-price demand queries. 
rizes the basic "classes" of auctions implied by combinations of the above properties and classifies some of the auctions proposed in the literature according to this classification. For our purposes, two families of these auctions serve as the main motivating starting points: the first is the ascending item-price auctions of [22, 17] that with computational efficiency find an optimal allocation among "(gross) substitutes" valuations, and the second is the ascending bundleprice auctions of [37, 3] that find an optimal allocation among general valuations--but not necessarily with computational efficiency. The main lower bound in this area, due to [32], states that indeed, due to inherent communication requirements, it is not possible for any iterative auction to find the optimal allocation among general valuations with sub-exponentially many queries. A similar exponential lower bound was shown in [32] also for even approximating the optimal allocation to within a factor of m1/2 − E. Several lower bounds and upper bounds for approximation are known for some natural classes of valuations--these are summarized in Figure 2. In [32], the universal generality of demand queries is also shown: any non-deterministic communication protocol for finding an allocation that optimizes the social welfare can be converted into one that only uses demand queries (with bundle prices). In [41] this was generalized also to nondeterministic protocols for finding allocations that satisfy other natural types of economic requirements (e.g., approximate social efficiency, envy-freeness). However, in [33] it was demonstrated that this "completeness" of demand queries holds only in the nondeterministic setting, while in the usual deterministic setting, demand queries (even with bundle prices) may be exponentially weaker than general communication. Bundle-price auctions are a generalization of (the more natural and intuitive) item-price auctions. It is known that indeed item-price auctions may be exponentially weaker: a nice example is the case of valuations that are a XOR of k bundles5, where k is small (say, polynomial). Lahaie and Parkes [24] show an economically-efficient bundle-price auction that uses a polynomial number of queries whenever k is polynomial. In contrast, [7] show that there exist valuations that are XORs of k =, / m bundles such that any item-price auction that finds an optimal allocation between them requires exponentially many queries. These results are part of a recent line of research ([7, 43, 24, 40]) that study the "preference elicitation" problem in combinatorial auctions and its relation to the "full elicitation" problem (i.e., learn5These are valuations where bidders have values for k specific packages, and the value of each bundle is the maximal value of one of these packages that it contains. ing the exact valuations of the bidders). These papers adapt methods from machine-learning theory to the combinatorialauction setting. The preference elicitation problem and the full elicitation problem relate to a well studied problem in microeconomics known as the integrability problem (see, e.g., [27]). This problem studies if and when one can derive the utility function of a consumer from her demand function. Paper organization: Due to the relatively large number of results we present, we start with a survey of our new results in Section 2. After describing our formal model in Section 3, we present our results concerning the power of demand queries in Section 4. 
Then, we describe the power of item-price ascending auctions (Section 5) and bundle-price ascending auctions (Section 6). Readers who are mainly interested in the self-contained discussion of ascending auctions can skip Section 4. Missing proofs from Section 4 can be found in part I of the full paper ([8]). Missing proofs from Sections 5 and 6 can be found in part II of the full paper ([9]).

2. A SURVEY OF OUR RESULTS
2.1 Demand Queries (comparison of query types; welfare approximation; representing bundle-prices; demand queries and linear programs)
2.2 Ascending Auctions (ascending bundle-price auctions)
3. THE MODEL
3.1 Discrete Auctions for Continuous Values
3.2 Valuations
3.3 Iterative Auctions
3.4 Demand Queries and Ascending Auctions
4. THE POWER OF DEMAND QUERIES
4.1 The Power of Different Types of Queries
4.2 Approximating the Social Welfare with Value and Demand Queries (an approximation algorithm)
4.3 The Representation of Bundle Prices
4.4 Demand Queries and Linear Programming
5. ITEM-PRICE ASCENDING AUCTIONS
5.1 The Power of Item-Price Ascending Auctions
5.2 Limitations of Item-Price Ascending Auctions
5.3 Limitations of Multi-Trajectory Ascending Auctions
5.4 Separating the Various Models of Ascending Auctions
5.5 Preference Elicitation vs. Full Elicitation
6. BUNDLE-PRICE ASCENDING AUCTIONS
6.1 Limitations of Anonymous Bundle-Price Ascending Auctions
6.2 Bundle Prices vs. Item Prices

Acknowledgments: The authors thank Moshe Babaioff, Shahar Dobzinski, Ron Lavi, Daniel Lehmann, Ahuva Mu'alem, David Parkes, Michael Schapira and Ilya Segal for helpful discussions. Supported by grants from the Israeli Academy of Sciences and the USA-Israel Binational Science Foundation.
C-62
Network Monitors and Contracting Systems: Competition and Innovation
Today's Internet industry suffers from several well-known pathologies, but none is as destructive in the long term as its resistance to evolution. Rather than introducing new services, ISPs are presently moving towards greater commoditization. It is apparent that the network's primitive system of contracts does not align incentives properly. In this study, we identify the network's lack of accountability as a fundamental obstacle to correcting this problem: Employing an economic model, we argue that optimal routes and innovation are impossible unless new monitoring capability is introduced and incorporated with the contracting system. Furthermore, we derive the minimum requirements a monitoring system must meet to support first-best routing and innovation characteristics. Our work does not constitute a new protocol; rather, we provide practical and specific guidance for the design of monitoring systems, as well as a theoretical framework to explore the factors that influence innovation.
[ "network monitor", "monitor", "contract system", "contract", "innov", "commodit", "incent", "smart market", "rout stagequ", "verifi monitor", "contract monitor", "rout polici", "clean-slate architectur design" ]
[ "P", "P", "P", "P", "P", "P", "P", "U", "M", "M", "R", "M", "M" ]
Network Monitors and Contracting Systems: Competition and Innovation Paul Laskowski John Chuang UC Berkeley {paul,chuang}@sims. berkeley.edu ABSTRACT Today``s Internet industry suffers from several well-known pathologies, but none is as destructive in the long term as its resistance to evolution. Rather than introducing new services, ISPs are presently moving towards greater commoditization. It is apparent that the network``s primitive system of contracts does not align incentives properly. In this study, we identify the network``s lack of accountability as a fundamental obstacle to correcting this problem: Employing an economic model, we argue that optimal routes and innovation are impossible unless new monitoring capability is introduced and incorporated with the contracting system. Furthermore, we derive the minimum requirements a monitoring system must meet to support first-best routing and innovation characteristics. Our work does not constitute a new protocol; rather, we provide practical and specific guidance for the design of monitoring systems, as well as a theoretical framework to explore the factors that influence innovation. Categories and Subject Descriptors C.2.4 [Computer-Communication Networks]: Distributed Systems; J.4 [Social And Behavioral Sciences]: Economics General Terms Economics, Theory, Measurement, Design, Legal Aspects. 1. INTRODUCTION Many studies before us have noted the Internet``s resistance to new services and evolution. In recent decades, numerous ideas have been developed in universities, implemented in code, and even written into the routers and end systems of the network, only to languish as network operators fail to turn them on on a large scale. The list includes Multicast, IPv6, IntServ, and DiffServ. Lacking the incentives just to activate services, there seems to be little hope of ISPs devoting adequate resources to developing new ideas. In the long term, this pathology stands out as a critical obstacle to the network``s continued success (Ratnasamy, Shenker, and McCanne provide extensive discussion in [11]). On a smaller time scale, ISPs shun new services in favor of cost cutting measures. Thus, the network has characteristics of a commodity market. Although in theory, ISPs have a plethora of routing policies at their disposal, the prevailing strategy is to route in the cheapest way possible [2]. On one hand, this leads directly to suboptimal routing. More importantly, commoditization in the short term is surely related to the lack of innovation in the long term. When the routing decisions of others ignore quality characteristics, ISPs are motivated only to lower costs. There is simply no reward for introducing new services or investing in quality improvements. In response to these pathologies and others, researchers have put forth various proposals for improving the situation. These can be divided according to three high-level strategies: The first attempts to improve the status quo by empowering end-users. Clark, et al., suggest that giving end-users control over routing would lead to greater service diversity, recognizing that some payment mechanism must also be provided [5]. Ratnasamy, Shenker, and McCanne postulate a link between network evolution and user-directed routing [11]. They propose a system of Anycast to give end-users the ability to tunnel their packets to an ISP that introduces a desirable protocol. The extra traffic to the ISP, the authors suggest, will motivate the initial investment. 
The second strategy suggests a revision of the contracting system. This is exemplified by MacKie-Mason and Varian, who propose a smart market to control access to network resources [10]. Prices are set to the market-clearing level based on bids that users associate to their traffic. In another direction, Afergan and Wroclawski suggest that prices should be explicitly encoded in the routing protocols [2]. They argue that such a move would improve stability and align incentives. The third high-level strategy calls for greater network accountability. In this vein, Argyraki, et al., propose a system of packet obituaries to provide feedback as to which ISPs drop packets [3]. They argue that such feedback would help reveal which ISPs were adequately meeting their contractual obligations. Unlike the first two strategies, we are not aware of any previous studies that have connected accountability with the pathologies of commoditization or lack of innovation. It is clear that these three strategies are closely linked to each other (for example, [2], [5], and [9] each argue that giving end-users routing control within the current contracting system is problematic). Until today, however, the relationship between them has been poorly understood. There is currently little theoretical foundation to compare the relative merits of each proposal, and a particular lack of evidence linking accountability with innovation and service differentiation. This paper will address both issues. We will begin by introducing an economic network model that relates accountability, contracts, competition, and innovation. Our model is highly stylized and may be considered preliminary: it is based on a single source sending data to a single destination. Nevertheless, the structure is rich enough to expose previously unseen features of network behavior. We will use our model for two main purposes: First, we will use our model to argue that the lack of accountability in today``s network is a fundamental obstacle to overcoming the pathologies of commoditization and lack of innovation. In other words, unless new monitoring capabilities are introduced, and integrated with the system of contracts, the network cannot achieve optimal routing and innovation characteristics. This result provides motivation for the remainder of the paper, in which we explore how accountability can be leveraged to overcome these pathologies and create a sustainable industry. We will approach this problem from a clean-slate perspective, deriving the level of accountability needed to sustain an ideal competitive structure. When we say that today``s Internet has poor accountability, we mean that it reveals little information about the behavior - or misbehavior - of ISPs. This well-known trait is largely rooted in the network``s history. In describing the design philosophy behind the Internet protocols, Clark lists accountability as the least important among seven second level goals. [4] Accordingly, accountability received little attention during the network``s formative years. Clark relates this to the network``s military context, and finds that had the network been designed for commercial development, accountability would have been a top priority. Argyraki, et al., conjecture that applying the principles of layering and transparency may have led to the network``s lack of accountability [3]. According to these principles, end hosts should be informed of network problems only to the extent that they are required to adapt. 
They notice when packet drops occur so that they can perform congestion control and retransmit packets. Details of where and why drops occur are deliberately concealed. The network``s lack of accountability is highly relevant to a discussion of innovation because it constrains the system of contracts. This is because contracts depend upon external institutions to function - the judge in the language of incomplete contract theory, or simply the legal system. Ultimately, if a judge cannot verify that some condition holds, she cannot enforce a contract based on that condition. Of course, the vast majority of contracts never end up in court. Especially when a judge``s ruling is easily predicted, the parties will typically comply with the contract terms on their own volition. This would not be possible, however, without the judge acting as a last resort. An institution to support contracts is typically complex, but we abstract it as follows: We imagine that a contract is an algorithm that outputs a payment transfer among a set of ISPs (the parties) at every time. This payment is a function of the past and present behaviors of the participants, but only those that are verifiable. Hence, we imagine that a contract only accepts proofs as inputs. We will call any process that generates these proofs a contractible monitor. Such a monitor includes metering or sensing devices on the physical network, but it is a more general concept. Constructing a proof of a particular behavior may require readings from various devices distributed among many ISPs. The contractible monitor includes whatever distributed algorithmic mechanism is used to motivate ISPs to share this private information. Figure 1 demonstrates how our model of contracts fits together. We make the assumption that all payments are mediated by contracts. This means that without contractible monitors that attest to, say, latency, payments cannot be conditioned on latency. Figure 1: Relationship between monitors and contracts With this model, we may conclude that the level of accountability in today``s Internet only permits best effort contracts. Nodes cannot condition payments on either quality or path characteristics. Is there anything wrong with best-effort contracts? The reader might wonder why the Internet needs contracts at all. After all, in non-network industries, traditional firms invest in research and differentiate their products, all in the hopes of keeping their customers and securing new ones. One might believe that such market forces apply to ISPs as well. We may adopt this as our null hypothesis: Null hypothesis: Market forces are sufficient to maintain service diversity and innovation on a network, at least to the same extent as they do in traditional markets. There is a popular intuitive argument that supports this hypothesis, and it may be summarized as follows: Intuitive argument supporting null hypothesis: 1. Access providers try to increase their quality to get more consumers. 2. Access providers are themselves customers for second hop ISPs, and the second hops will therefore try to provide highquality service in order to secure traffic from access providers. Access providers try to select high quality transit because that increases their quality. 3. The process continues through the network, giving every ISP a competitive reason to increase quality. We are careful to model our network in continuous time, in order to capture the essence of this argument. 
We can, for example, specify equilibria in which nodes switch to a new next hop in the event of a quality drop. Moreover, our model allows us to explore any theoretically possible punishments against cheaters, including those that are costly for end-users to administer. By contrast, customers in the real world rarely respond collectively, and often simply seek the best deal currently offered. These constraints limit their ability to punish cheaters. Even with these liberal assumptions, however, we find that we must reject our null hypothesis. Our model will demonstrate that identifying a cheating ISP is difficult under low accountability, limiting the threat of market driven punishment. We will define an index of commoditization and show that it increases without bound as data paths grow long. Furthermore, we will demonstrate a framework in which an ISP``s maximum research investment decreases hyperbolically with its distance from the end-user. Network Behavior Monitor Contract Proof Payments 184 To summarize, we argue that the Internet``s lack of accountability must be addressed before the pathologies of commoditization and lack of innovation can be resolved. This leads us to our next topic: How can we leverage accountability to overcome these pathologies? We approach this question from a clean-slate perspective. Instead of focusing on incremental improvements, we try to imagine how an ideal industry would behave, then derive the level of accountability needed to meet that objective. According to this approach, we first craft a new equilibrium concept appropriate for network competition. Our concept includes the following requirements: First, we require that punishing ISPs that cheat is done without rerouting the path. Rerouting is likely to prompt end-users to switch providers, punishing access providers who administer punishments correctly. Next, we require that the equilibrium cannot be threatened by a coalition of ISPs that exchanges illicit side payments. Finally, we require that the punishment mechanism that enforces contracts does not punish innocent nodes that are not in the coalition. The last requirement is somewhat unconventional from an economic perspective, but we maintain that it is crucial for any reasonable solution. Although ISPs provide complementary services when they form a data path together, they are likely to be horizontal competitors as well. If innocent nodes may be punished, an ISP may decide to deliberately cheat and draw punishment onto itself and its neighbors. By cheating, the ISP may save resources, thereby ensuring that the punishment is more damaging to the other ISPs, which probably compete with the cheater directly for some customers. In the extreme case, the cheater may force the other ISPs out of business, thereby gaining a monopoly on some routes. Applying this equilibrium concept, we derive the monitors needed to maintain innovation and optimize routes. The solution is surprisingly simple: contractible monitors must report the quality of the rest of the path, from each ISP to the destination. It turns out that this is the correct minimum accountability requirement, as opposed to either end-to-end monitors or hop-by-hop monitors, as one might initially suspect. Rest of path monitors can be implemented in various ways. They may be purely local algorithms that listen for packet echoes. Alternately, they can be distributed in nature. We describe a way to construct a rest of path monitor out of monitors for individual ISP quality and for the data path. 
This requires a mechanism to motivate ISPs to share their monitor outputs with each other. The rest of path monitor then includes the component monitors and the distributed algorithmic mechanism that ensures that information is shared as required. This example shows that other types of monitors may be useful as building blocks, but must be combined to form rest of path monitors in order to achieve ideal innovation characteristics.

Our study has several practical implications for future protocol design. We show that new monitors must be implemented and integrated with the contracting system before the pathologies of commoditization and lack of innovation can be overcome. Moreover, we derive exactly what monitors are needed to optimize routes and support innovation. In addition, our results provide useful input for clean-slate architectural design, and we use several novel techniques that we expect will be applicable to a variety of future research.

The rest of this paper is organized as follows: In section 2, we lay out our basic network model. In section 3, we present a low-accountability network, modeled after today's Internet. We demonstrate how poor monitoring causes commoditization and a lack of innovation. In section 4, we present verifiable monitors, and show that proofs, even without contracts, can improve the status quo. In section 5, we turn our attention to contractible monitors. We show that rest of path monitors can support competition games with optimal routing and innovation. We further show that rest of path monitors are required to support such competition games. We continue by discussing how such monitors may be constructed using other monitors as building blocks. In section 6, we conclude and present several directions for future research.

2. BASIC NETWORK MODEL

A source, S, wants to send data to destination, D. S and D are nodes on a directed, acyclic graph, with a finite set of intermediate nodes, V = {1, 2, ..., N}, representing ISPs. All paths lead to D, and every node not connected to D has at least two choices for next hop. We will represent quality by a finite dimensional vector space, Q, called the quality space. Each dimension represents a distinct network characteristic that end-users care about. For example, latency, loss probability, jitter, and IP version can each be assigned to a dimension. To each node, i, we associate a vector in the quality space, q_i ∈ Q. This corresponds to the quality a user would experience if i were the only ISP on the data path. Let q ∈ Q^N be the vector of all node qualities. Of course, when data passes through multiple nodes, their qualities combine in some way to yield a path quality. We represent this by an associative binary operation, * : Q × Q → Q. For path (v_1, v_2, ..., v_n), the quality is given by q_v1 * q_v2 * ... * q_vn. The * operation reflects the characteristics of each dimension of quality. For example, * can act as an addition in the case of latency, multiplication in the case of loss probability, or a minimum-argument function in the case of security (a short illustrative sketch of this composition appears below). When data flows along a complete path from S to D, the source and destination, generally regarded as a single player, enjoy utility given by a function of the path quality, u : Q → R. Each node along the path, i, experiences some cost of transmission, c_i.

2.1 Game Dynamics

Ultimately, we are most interested in policies that promote innovation on the network. In this study, we will use innovation in a fairly general sense.
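As a concrete illustration of the quality space Q, the composition operation *, and the utility function u described in the model above, here is a minimal sketch; the three dimensions chosen, their composition rules, and all numeric values are assumptions made for the example only.

```python
# An illustrative sketch of the quality space Q and the composition
# operation * described above.  The three dimensions, their composition
# rules, and all numbers are assumptions made for this example; the model
# itself only requires * to be associative.
from dataclasses import dataclass
from functools import reduce

@dataclass(frozen=True)
class Quality:
    latency_ms: float       # composes by addition
    delivery_prob: float    # 1 - loss probability; composes by multiplication
    security: float         # composes by taking the minimum

def star(q1: Quality, q2: Quality) -> Quality:
    """The associative composition q1 * q2 of two hops' qualities."""
    return Quality(
        latency_ms=q1.latency_ms + q2.latency_ms,
        delivery_prob=q1.delivery_prob * q2.delivery_prob,
        security=min(q1.security, q2.security),
    )

def path_quality(path_qualities):
    """Quality of path (v1, ..., vn): q_v1 * q_v2 * ... * q_vn."""
    return reduce(star, path_qualities)

def utility(q: Quality) -> float:
    """A hypothetical utility function u : Q -> R for the source/destination."""
    return 200.0 * q.delivery_prob - q.latency_ms + 5.0 * q.security

if __name__ == "__main__":
    # Hypothetical per-node qualities along a three-hop path.
    path = [Quality(10.0, 0.99, 3.0), Quality(25.0, 1.00, 2.0), Quality(5.0, 0.98, 3.0)]
    pq = path_quality(path)
    print(pq)
    print("u(path quality) =", round(utility(pq), 2))
```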
Innovation describes any investment by an ISP that alters its quality vector so that at least one potential data path offers higher utility. This includes researching a new routing algorithm that decreases the amount of jitter users experience. It also includes deploying a new protocol that supports quality of service. Even more broadly, buying new equipment to decrease latency may also be regarded as innovation. Innovation may be thought of as the micro-level process by which the network evolves. Our analysis is limited in one crucial respect: We focus on inventions that a single ISP can implement to improve the end-user experience. This excludes technologies that require adoption by all ISPs on the network to function. Because such technologies do not create a competitive advantage, rewarding them is difficult and may require intellectual property or some other market distortion. We defer this interesting topic to future work.

At first, it may seem unclear how a large-scale distributed process such as innovation can be influenced by mechanical details like network monitors. Our model must draw this connection in a realistic fashion. The rate of innovation depends on the profits that potential innovators expect in the future. The reward generated by an invention must exceed the total cost to develop it, or the inventor will not rationally invest. This reward, in turn, is governed by the competitive environment in which the firm operates, including the process by which firms select prices and agree upon contracts with each other. Of course, these decisions depend on how routes are established, and how contracts determine actual monetary exchanges. Any model of network innovation must therefore relate at least three distinct processes: innovation, competition, and routing. We select a game dynamics that makes the relation between these processes as explicit as possible. This is represented schematically in Figure 2.

The innovation stage occurs first, at time t = -2. In this stage, each agent decides whether or not to make research investments. If she chooses not to, her quality remains fixed. If she makes an investment, her quality may change in some way. It is not necessary for us to specify how such changes take place. The agents' choices in this stage determine the vector of qualities, q, common knowledge for the rest of the game. Next, at time t = -1, agents participate in the competition stage, in which contracts are agreed upon. In today's industry, these contracts include prices for transit access, and peering agreements. Since access is provided on a best-effort basis, a transit agreement can simply be represented by its price. Other contracting systems we will explore will require more detail. Finally, beginning at t = 0, firms participate in the routing stage. Other research has already employed repeated games to study routing, for example [1], [12]. Repetition reveals interesting effects not visible in a single-stage game, such as informal collusion to elevate prices in [12]. We use a game in continuous time in order to study such properties. For example, we will later ask whether a player will maintain higher quality than her contracts require, in the hope of keeping her customer base or attracting future customers. Our dynamics reflect the fact that ISPs make innovation decisions infrequently. Although real firms have multiple opportunities to innovate, each opportunity is followed by a substantial length of time in which qualities are fixed.
The decision to invest focuses on how the firm's new quality will improve the contracts it can enter into. Hence, our model places innovation at the earliest stage, attempting to capture a single investment decision. Contracting decisions are made on an intermediate time scale, thus appearing next in the dynamics. Routing decisions are made very frequently, mainly to maximize immediate profit flows, so they appear in the last stage. Because of this ordering, our model does not allow firms to route strategically to affect future innovation or contracting decisions. In opposition, Afergan and Wroclawski argue that contracts are formed in response to current traffic patterns, in a feedback loop [2]. Although we are sympathetic to their observation, such an addition would make our analysis intractable. Our model is most realistic when contracting decisions are infrequent.

Throughout this paper, our solution concept will be a subgame perfect equilibrium (SPE). An SPE is a strategy point that is a Nash equilibrium when restricted to each subgame. Three important subgames have been labeled in Figure 2. The innovation game includes all three stages. The competition game includes only the competition stage and the routing stage. The routing game includes only the routing stage. An SPE guarantees that players are forward-looking. This means, for example, that in the competition stage, firms must act rationally, maximizing their expected profits in the routing stage. They cannot carry out threats they made in the innovation stage if it lowers their expected payoff. Our schematic already suggests that the routing game is crucial for promoting innovation. To support innovation, the competition game must somehow reward ISPs with high quality. But that means that the routing game must tend to route to nodes with high quality. If the routing game always selects the lowest-cost routes, for example, innovation will not be supported. We will support this observation with analysis later.

2.2 The Routing Game

The routing game proceeds in continuous time, with all players discounting by a common factor, r. The outputs from previous stages, q and the set of contracts, are treated as exogenous parameters for this game. For each time t ≥ 0, each node must select a next hop to route data to. Data flows across the resultant path, causing utility flow to S and D, and a flow cost to the nodes on the path, as described above. Payment flows are also created, based on the contracts in place. Relating our game to the familiar repeated prisoners' dilemma, imagine that we are trying to impose a high quality, but costly path. As we argued loosely above, such paths must be sustainable in order to support innovation. Each ISP on the path tries to maximize her own payment, net of costs, so she may not want to cooperate with our plan. Rather, if she can find a way to save on costs, at the expense of the high quality we desire, she will be tempted to do so.

[Figure 2: Game Dynamics. The innovation stage (t = -2) determines the qualities q, the competition stage (t = -1) determines the contracts (prices), and the routing stage (t ∈ [0, ∞)) determines profits. The innovation game spans all three stages, the competition game the last two, and the routing game only the routing stage.]
Each node on the path has to pay the next node to deliver its traffic. If the next node offers high quality transit, we may expect that a lower quality node will offer a lower price. Each node on the path will be tempted to route to a cheaper next hop, increasing her immediate profits, but lowering the path quality. We will call this type of action cheating in route. Another possibility we can model, is that a node finds a way to save on its internal forwarding costs, at the expense of its own quality. We will call this cheating internally to distinguish it from cheating in route. For example, a node might drop packets beyond the rate required for congestion control, in order to throttle back TCP flows and thus save on forwarding costs [3]. Alternately, a node employing quality of service could give high priority packets a lower class of service, thus saving on resources and perhaps allowing itself to sell more high priority service. If either cheating in route or cheating internally is profitable, the specified path will not be an equilibrium. We assume that cheating can never be caught instantaneously. Rather, a cheater can always enjoy the payoff from cheating for some positive time, which we label 0t . This includes the time for other players to detect and react to the cheating. If the cheater has a contract which includes a customer lock-in period, 0t also includes the time until customers are allowed to switch to a new ISP. As we will see later, it is socially beneficial to decrease 0t , so such lock-in is detrimental to welfare. 3. PATHOLOGIES OF A LOWACCOUNTABILITY NETWORK In order to motivate an exploration of monitoring systems, we begin in this section by considering a network with a poor degree of accountability, modeled after today``s Internet. We will show how the lack of monitoring necessarily leads to poor routing and diminishes the rate of innovation. Thus, the network``s lack of accountability is a fundamental obstacle to resolving these pathologies. 3.1 Accountability in the Current Internet First, we reflect on what accountability characteristics the present Internet has. Argyraki, et al., point out that end hosts are given minimal information about packet drops [3]. Users know when drops occur, but not where they occur, nor why. Dropped packets may represent the innocent signaling of congestion, or, as we mentioned above, they may be a form of cheating internally. The problem is similar for other dimensions of quality, or in fact more acute. Finding an ISP that gives high priority packets a lower class of service, for example, is further complicated by the lack of even basic diagnostic tools. In fact, it is similarly difficult to identify an ISP that cheats in route. Huston notes that Internet traffic flows do not always correspond to routing information [8]. An ISP may hand a packet off to a neighbor regardless of what routes that neighbor has advertised. Furthermore, blocks of addresses are summarized together for distant hosts, so a destination may not even be resolvable until packets are forwarded closer. One might argue that diagnostic tools like ping and traceroute can identify cheaters. Unfortunately, Argyraki, et al., explain that these tools only reveal whether probe packets are echoed, not the fate of past packets [3]. Thus, for example, they are ineffective in detecting low-frequency packet drops. Even more fundamentally, a sophisticated cheater can always spot diagnostic packets and give them special treatment. 
As a further complication, a cheater may assume different aliases for diagnostic packets arriving over different routes. As we will see below, this gives the cheater a significant advantage in escaping punishment for bad behavior, even if the data path is otherwise observable. 3.2 Modeling Low-Accountability As the above evidence suggests, the current industry allows for very little insight into the behavior of the network. In this section, we attempt to capture this lack of accountability in our model. We begin by defining a monitor, our model of the way that players receive external information about network behavior, A monitor is any distributed algorithmic mechanism that runs on the network graph, and outputs, to specific nodes, informational statements about current or past network behavior. We assume that all external information about network behavior is mediated in this way. The accountability properties of the Internet can be represented by the following monitors: E2E (End to End): A monitor that informs S/D about what the total path quality is at any time (this is the quality they experience). ROP (Rest of Path): A monitor that informs each node along the data path what the quality is for the rest of the path to the destination. PRc (Packets Received): A monitor that tells nodes how much data they accept from each other, so that they can charge by volume. It is important to note, however, that this information is aggregated over many source-destination pairs. Hence, for the sake of realism, it cannot be used to monitor what the data path is. Players cannot measure the qualities of other, single nodes, just the rest of the path. Nodes cannot see the path past the next hop. This last assumption is stricter than needed for our results. The critical ingredient is that nodes cannot verify that the path avoids a specific hop. This holds, for example, if the path is generally visible, except nodes can use different aliases for different parents. Similar results also hold if alternate paths always converge after some integer number, m, of hops. It is important to stress that E2E and ROP are not the contractible monitors we described in the introduction - they do not generate proofs. Thus, even though a player observes certain information, she generally cannot credibly share it with another player. For example, if a node after the first hop starts cheating, the first hop will detect the sudden drop in quality for the rest of the path, but the first hop cannot make the source believe this observation - the 187 source will suspect that the first hop was the cheater, and fabricated the claim against the rest of the path. Typically, E2E and ROP are envisioned as algorithms that run on a single node, and listen for packet echoes. This is not the only way that they could be implemented, however; an alternate strategy is to aggregate quality measurements from multiple points in the network. These measurements can originate in other monitors, located at various ISPs. The monitor then includes the component monitors as well as whatever mechanisms are in place to motivate nodes to share information honestly as needed. For example, if the source has monitors that reveal the qualities of individual nodes, they could be combined with path information to create an ROP monitor. Since we know that contracts only accept proofs as input, we can infer that payments in this environment can only depend on the number of packets exchanged between players. In other words, contracts are best-effort. 
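As a way to fix intuitions, here is a small Python sketch (our own illustration; the class and function names are not from the paper, and latency stands in for the full quality vector) of the informational content of E2E, ROP, and PRc: the first two return composed qualities, while PRc reports only aggregate volumes, so none of them reveals the data path beyond the next hop.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class NetworkState:
    path: List[str]                                   # data path, first hop first
    quality: Dict[str, float]                         # per-node latency contribution (ms)
    handoffs: Dict[Tuple[str, str], int] = field(default_factory=dict)

def e2e(state: NetworkState) -> float:
    """E2E: total path quality experienced by S and D (latency adds along the path)."""
    return sum(state.quality[n] for n in state.path)

def rop(state: NetworkState, node: str) -> float:
    """ROP: quality of the rest of the path, strictly after `node`."""
    i = state.path.index(node)
    return sum(state.quality[n] for n in state.path[i + 1:])

def prc(state: NetworkState, upstream: str, downstream: str) -> int:
    """PRc: packets `downstream` accepted from `upstream` (aggregate volume only)."""
    return state.handoffs.get((upstream, downstream), 0)

s = NetworkState(path=["isp1", "isp2", "isp3"],
                 quality={"isp1": 10.0, "isp2": 25.0, "isp3": 5.0},
                 handoffs={("isp1", "isp2"): 1200})
print(e2e(s), rop(s, "isp1"), prc(s, "isp1", "isp2"))   # 40.0 30.0 1200
```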
For the remainder of this section, we will assume that contracts are also linear - there is a constant payment flow so long as a node accepts data, and all conditions of the contract are met. Other, more complicated tariffs are also possible, and are typically used to generate lock-in. We believe that our parameter $t_0$ is sufficient to describe lock-in effects, and we believe that the insights in this section apply equally to any tariffs that are bounded so that the routing game remains continuous at infinity. Restricting attention to linear contracts allows us to represent some node i's contract by its price, $p_i$. Because we further know that nodes cannot observe the path after the next hop, we can infer that contracts exist only between neighboring nodes on the graph. We will call this arrangement of contracts bilateral. When a competition game exclusively uses bilateral contracts, we will call it a bilateral contract competition game.

We first focus on the routing game and ask whether a high quality route can be maintained, even when a low quality route is cheaper. Recall that this is a requirement in order for nodes to have any incentive to innovate. If nodes tend to route to low price next hops, regardless of quality, we say that the network is commoditized. To measure this tendency, we define an index of commoditization as follows: For a node on the data path, i, define its quality premium, $d_i = p_j - p_{\min}$, where $p_j$ is the flow payment to the next hop in equilibrium, and $p_{\min}$ is the price of the lowest cost next hop.

Definition: The index of commoditization, $I_C$, is the average, over each node on the data path, i, of i's flow profit as a fraction of i's quality premium, $(p_i - c_i - p_j)/d_i$.

$I_C$ ranges from 0, when each node spends all of its potential profit on its quality premium, to infinite, when a node absorbs positive profit, but uses the lowest price next hop. A high value for $I_C$ implies that nodes are spending little of their money inflow on purchasing high quality for the rest of the path. As the next claim shows, this is exactly what happens as the path grows long:

Claim 1. If the only monitors are E2E, ROP, and PRc, $I_C \to \infty$ as $n \to \infty$, where n is the number of nodes on the data path.

To show that this is true, we first need the following lemma, which will establish the difficulty of punishing nodes in the network. First a bit of notation: Recall that a cheater can benefit from its actions for $t_0 > 0$ before other players can react. When a node cheats, it can expect a higher profit flow, at least until it is caught and other players react, perhaps by diverting traffic. Let node i's normal profit flow be $\pi_i$, and her profit flow during cheating be some greater value, $y_i$. We will call the ratio, $y_i/\pi_i$, the temptation to cheat.

Lemma 1. If the only monitors are E2E, ROP, and PRc, the discounted time, $\int_0^{t_n} e^{-rt}\,dt$, needed to punish a cheater increases at least as fast as the product of the temptations to cheat along the data path,

$$\int_0^{t_n} e^{-rt}\,dt \;\ge\; \left( \prod_{i \text{ on data path}} \frac{y_i}{\pi_i} \right) \int_0^{t_0} e^{-rt}\,dt. \qquad (1)$$

Corollary. If nodes share a minimum temptation to cheat, $y/\pi$, the discounted time needed to punish cheating increases at least exponentially in the length of the data path, n,

$$\int_0^{t_n} e^{-rt}\,dt \;\ge\; \left( \frac{y}{\pi} \right)^n \int_0^{t_0} e^{-rt}\,dt. \qquad (2)$$

Since it is the discounted time that increases exponentially, the actual time increases faster than exponentially. If n is so large that $t_n$ is undefined, the given path cannot be maintained in equilibrium.
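To illustrate the bound numerically, the short Python sketch below (our own illustration; the discount rate, detection lag, and temptation values are hypothetical) inverts the discounted-time expression $\int_0^{t_n} e^{-rt}\,dt = (1 - e^{-rt_n})/r$ to find the smallest punishment length satisfying inequality (1), returning None when even a permanent punishment is too short.

```python
import math

def min_punishment_time(temptations, r, t0):
    """Smallest t_n whose discounted length matches the Lemma 1 bound:
    (1 - exp(-r*t_n))/r = (product of temptations) * (1 - exp(-r*t0))/r.
    Returns None when the bound exceeds the total discounted time 1/r,
    i.e. when no finite punishment can sustain the path."""
    G = 1.0
    for g in temptations:
        G *= g
    rhs = G * (1.0 - math.exp(-r * t0))
    if rhs >= 1.0:
        return None
    return -math.log(1.0 - rhs) / r

# Hypothetical: each node on the path is tempted to double its profit flow.
print(min_punishment_time([2.0, 2.0, 2.0], r=0.1, t0=0.5))   # finite but already long
print(min_punishment_time([2.0] * 6, r=0.1, t0=0.5))         # None: path unsustainable
```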
Proof. The proof proceeds by induction on the number of nodes on the equilibrium data path, n. For $n = 1$, there is a single node, say i. By cheating, the node earns extra profit $(y_i - \pi_i)\int_0^{t_0} e^{-rt}\,dt$. If node i is then punished until time $t_1$, the extra profit must be cancelled out by the lost profit between time $t_0$ and $t_1$, $\pi_i \int_{t_0}^{t_1} e^{-rt}\,dt$. A little manipulation gives $\int_0^{t_1} e^{-rt}\,dt = \frac{y_i}{\pi_i} \int_0^{t_0} e^{-rt}\,dt$, as required.

For $n > 1$, assume for induction that the claim holds for $n - 1$. The source does not know whether the cheater is the first hop, or after the first hop. Because the source does not know the data path after the first hop, it is unable to punish nodes beyond it. If it chooses a new first hop, it might not affect the rest of the data path. Because of this, the source must rely on the first hop to punish cheating nodes farther along the path. The first hop needs discounted time, $\prod_{i \text{ after first hop}} \frac{y_i}{\pi_i} \int_0^{t_0} e^{-rt}\,dt$, to accomplish this by assumption. So the source must give the first hop this much discounted time in order to punish defectors further down the line (and the source will expect poor quality during this period). Next, the source must be protected against a first hop that cheats, and pretends that the problem is later in the path. The first hop can do this for the full discounted time, $\prod_{i \text{ after first hop}} \frac{y_i}{\pi_i} \int_0^{t_0} e^{-rt}\,dt$, so the source must punish the first hop long enough to remove the extra profit it can make. Following the same argument as for $n = 1$, we can show that the full discounted time is $\prod_{i \text{ on data path}} \frac{y_i}{\pi_i} \int_0^{t_0} e^{-rt}\,dt$, which completes the proof.

The above lemma and its corollary show that punishing cheaters becomes more and more difficult as the data path grows long, until doing so is impossible. To capture some intuition behind this result, imagine that you are an end user, and you notice a sudden drop in service quality. If your data only travels through your access provider, you know it is that provider's fault. You can therefore take your business elsewhere, at least for some time. This threat should motivate your provider to maintain high quality. Suppose, on the other hand, that your data traverses two providers. When you complain to your ISP, he responds, "Yes, we know your quality went down, but it's not our fault, it's the next ISP. Give us some time to punish them and then normal quality will resume." If your access provider is telling the truth, you will want to listen, since switching access providers may not even route around the actual offender. Thus, you will have to accept lower quality service for some longer time. On the other hand, you may want to punish your access provider as well, in case he is lying. This means you have to wait longer to resume normal service. As more ISPs are added to the path, the time increases in a recursive fashion.

With this lemma in hand, we can return to prove Claim 1.

Proof of Claim 1. Fix an equilibrium data path of length n. Label the path nodes 1, 2, ..., n. For each node i, let i's quality premium be $d_i = p_{i+1} - p'_{i+1}$, where $p'_{i+1}$ is the price of node i's lowest price next hop. Then we have,

$$I_C \;=\; \frac{1}{n} \sum_{i=1}^{n} \frac{p_i - c_i - p_{i+1}}{d_i} \;=\; \frac{1}{n} \sum_{i=1}^{n} \frac{p_i - c_i - p_{i+1}}{p_{i+1} - p'_{i+1}} \;=\; \frac{1}{n} \sum_{i=1}^{n} \frac{1}{g_i - 1}, \qquad (3)$$

where $g_i$ is node i's temptation to cheat by routing to the lowest price next hop. Lemma 1 tells us that $\prod_{i=1}^{n} g_i < T$, where $T = \left(1 - e^{-rt_0}\right)^{-1}$. It requires a bit of calculus to show that $I_C$ is minimized by setting each $g_i$ equal to $T^{1/n}$. However, as $n \to \infty$, we have $T^{1/n} \to 1$, which shows that $I_C \to \infty$.
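For concreteness, the Python sketch below (ours; all prices and costs are hypothetical) computes the index of commoditization for a short path directly from the definition, averaging each node's flow profit over its quality premium, consistent with the first equality in (3).

```python
def commoditization_index(prices, costs, cheapest_next):
    """Index of commoditization for a data path.

    prices[i]        : payment flow into node i (prices[i+1] is what i pays its next hop)
    costs[i]         : node i's marginal cost flow
    cheapest_next[i] : price of node i's cheapest alternative next hop
    Requires len(prices) == len(costs) + 1.
    """
    n = len(costs)
    terms = []
    for i in range(n):
        profit = prices[i] - costs[i] - prices[i + 1]     # p_i - c_i - p_{i+1}
        premium = prices[i + 1] - cheapest_next[i]        # d_i = p_{i+1} - p'_{i+1}
        terms.append(profit / premium)
    return sum(terms) / n

# Hypothetical two-node path: node 0 is paid 10 and pays 6 to node 1,
# node 1 pays 3 to its next hop; each has a cheaper alternative next hop.
print(commoditization_index(prices=[10, 6, 3], costs=[1, 1],
                            cheapest_next=[4, 2]))        # 1.75
```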
According to the claim, as the data path grows long, it increasingly resembles a lowest-price path. Since lowest-price routing does not support innovation, we may speculate that innovation degrades with the length of the data path. Though we suspect stronger claims are possible, we can demonstrate one such result by including an extra assumption:

Available Bargain Path: A competitive market exists for low-cost transit, such that every node can route to the destination for no more than flow payment, $p_l$.

Claim 2. Under the available bargain path assumption, if node i, a distance n from S, can invest to alter its quality, and the source will spend no more than $P_s$ for a route including node i's new quality, then the payment to node i, p, decreases hyperbolically with n,

$$p \;\le\; p_l + \frac{T^{1/(n-1)}}{n-1} P_s, \qquad (4)$$

where $T = \left(1 - e^{-rt_0}\right)^{-1}$ is the bound on the product of temptations from the previous claim. Thus, i will spend no more than $\frac{1}{r}\left(p_l + \frac{T^{1/(n-1)}}{n-1} P_s\right)$ on this quality improvement, which approaches the bargain path's payment, $p_l/r$, as $n \to \infty$.

The proof is given in the appendix. As a node gets farther from the source, its maximum payment approaches the bargain price, $p_l$. Hence, the reward for innovation is bounded by the same amount. Large innovations, meaning substantially more expensive than $p_l/r$, will not be pursued deep into the network.

Claim 2 can alternately be viewed as a lower bound on how much it costs to elicit innovation in a network. If the source S wants node i to innovate, it needs to get a motivating payment, p, to i during the routing stage. However, it must also pay the nodes on the way to i a premium in order to motivate them to route properly. The claim shows that this premium increases with the distance to i, until it dwarfs the original payment, p.

Our claims stand in sharp contrast to our null hypothesis from the introduction. Comparing the intuitive argument that supported our hypothesis with these claims, we can see that we implicitly used an oversimplified model of market pressure (as either present or not). As is now clear, market pressure relies on the decisions of customers, but these are limited by the lack of information. Hence, competitive forces degrade as the network deepens.

4. VERIFIABLE MONITORS

In this section, we begin to introduce more accountability into the network. Recall that in the previous section, we assumed that players couldn't convince each other of their private information. What would happen if they could? If a monitor's informational signal can be credibly conveyed to others, we will call it a verifiable monitor. The monitor's output in this case can be thought of as a statement accompanied by a proof, a string that can be processed by any player to determine that the statement is true.

A verifiable monitor is a distributed algorithmic mechanism that runs on the network graph, and outputs, to specific nodes, proofs about current or past network behavior.

Along these lines, we can imagine verifiable counterparts to E2E and ROP. We will label these E2Ev and ROPv. With these monitors, each node observes the quality of the rest of the path and can also convince other players of these observations by giving them a proof. By adding verifiability to our monitors, identifying a single cheater is straightforward. The cheater is the node that cannot produce proof that the rest of path quality decreased. This means that the negative results of the previous section no longer hold.
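A minimal sketch of that identification logic in Python (ours; node names are hypothetical), under the assumptions that the source has observed an end-to-end quality drop, that exactly one node cheats, and that every node holds a verifiable rest-of-path observation: the cheater is the first node, walking from the source, whose own rest-of-path reading shows no degradation.

```python
def find_cheater(path, rop_degraded):
    """Given that the source observed an end-to-end quality drop, return the
    cheater: the first node (source side first) that cannot produce a proof
    that the quality of the path *after* it decreased.

    rop_degraded[n] is True if n's verifiable rest-of-path reading is degraded.
    Assumes a single cheater, as in the surrounding discussion."""
    for node in path:
        if not rop_degraded[node]:
            return node
    return None   # inconsistent readings: no single node can be blamed

# Hypothetical 4-hop path where isp3 cheats internally: isp1 and isp2 see a
# degraded rest of path, while isp3 and isp4 do not.
print(find_cheater(["isp1", "isp2", "isp3", "isp4"],
                   {"isp1": True, "isp2": True, "isp3": False, "isp4": False}))
```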
For example, the following lemma stands in contrast to Lemma 1.

Lemma 2. With monitors E2Ev, ROPv, and PRc, and provided that the node before each potential cheater has an alternate next hop that isn't more expensive, it is possible to enforce any data path in SPE so long as the maximum temptation is less than what can be deterred in finite time,

$$\left. \frac{y}{\pi} \right|_{\max} \;\le\; \frac{1}{r \int_0^{t_0} e^{-rt}\,dt}. \qquad (5)$$

Proof. This lemma follows because nodes can share proofs to identify who the cheater is. Only that node must be punished in equilibrium, and the preceding node does not lose any payoff in administering the punishment.

With this lemma in mind, it is easy to construct counterexamples to Claim 1 and Claim 2 in this new environment. Unfortunately, there are at least four reasons not to be satisfied with this improved monitoring system. The first, and weakest reason is that the maximum temptation remains finite, causing some distortion in routes or payments. Each node along a route must extract some positive profit unless the next hop is also the cheapest. Of course, if $t_0$ is small, this effect is minimal.

The second, and more serious reason is that we have always given our source the ability to commit to any punishment. Real world users are less likely to act collectively, and may simply search for the best service currently offered. Since punishment phases are generally characterized by a drop in quality, real world end-users may take this opportunity to shop for a new access provider. This will make nodes less motivated to administer punishments.

The third reason is that Lemma 2 does not apply to cheating by coalitions. A coalition node may pretend to punish its successor, but instead enjoy a secret payment from the cheating node. Alternately, a node may bribe its successor to cheat, if the punishment phase is profitable, and so forth. The required discounted time for punishment may increase exponentially in the number of coalition members, just as in the previous section!

The final reason not to accept this monitoring system is that when a cheater is punished, the path will often be routed around not just the offender, but around other nodes as well. Effectively, innocent nodes will be punished along with the guilty. In our abstract model, this doesn't cause trouble since the punishment falls off the equilibrium path. The effects are not so benign in the real world. When ISPs lie in sequence along a data path, they contribute complementary services, and their relationship is vertical. From the perspective of other source-destination pairs, however, these same firms are likely to be horizontal competitors. Because of this, a node might deliberately cheat, in order to trigger punishment for itself and its neighbors. By cheating, the node will save money to some extent, so the cheater is likely to emerge from the punishment phase better off than the innocent nodes. This may give the cheater a strategic advantage against its competitors. In the extreme, the cheater may use such a strategy to drive neighbors out of business, and thereby gain a monopoly on some routes.

5. CONTRACTIBLE MONITORS

At the end of the last section, we identified several drawbacks that persist in an environment with E2Ev, ROPv, and PRc. In this section, we will show how all of these drawbacks can be overcome. To do this, we will require our third and final category of monitor:

A contractible monitor is simply a verifiable monitor that generates proofs that can serve as input to a contract.
Thus, contractible is jointly a property of the monitor and the institutions that must verify its proofs. Contractibility requires that a court, 1. Can verify the monitor``s proofs. 2. Can understand what the proofs and contracts represent to the extent required to police illegal activity. 3. Can enforce payments among contracting parties. Understanding the agreements between companies has traditionally been a matter of reading contracts on paper. This may prove to be a harder task in a future network setting. Contracts may plausibly be negotiated by machine, be numerous, even per-flow, and be further complicated by the many dimensions of quality. When a monitor (together with institutional infrastructure) meets these criteria, we will label it with a subscript c, for contractible. The reader may recall that this is how we labeled the packets received monitor, PRc, which allows ISPs to form contracts with per-packet payments. Similarly, E2Ec and ROPc are contractible versions of the monitors we are now familiar with. At the end of the previous section, we argued for some desirable properties that we``d like our solution to have. Briefly, we would like to enforce optimal data paths with an equilibrium concept that doesn``t rely on re-routing for punishment, is coalition proof, and doesn``t punish innocent nodes when a coalition cheats. We will call such an equilibrium a fixed-route coalition-proof protect-theinnocent equilibrium. As the next claim shows, ROPc allows us to create a system of linear (price, quality) contracts under just such an equilibrium. Claim 3. With ROPc, for any feasible and consistent assignment of rest of path qualities to nodes, and any corresponding payment schedule that yields non-negative payoffs, these qualities can be maintained with bilateral contracts in a fixed-route coalition-proof protect-the-innocent equilibrium. Proof: Fix any data path consistent with the given rest of path qualities. Select some monetary punishment, P, large enough to prevent any cheating for time t0 (the discounted total payment from the source will work). Let each node on the path enter into a contract with its parent, which fixes an arbitrary payment schedule so long as the rest of path quality is as prescribed. When the parent node, which has ROPc, submits a proof that the rest of path quality is less than expected, the contract awards her an instantaneous transfer, P, from the downstream node. Such proofs can be submitted every 0t for the previous interval. Suppose now that a coalition, C, decides to cheat. The source measures a decrease in quality, and according to her contract, is awarded P from the first hop. This means that there is a net outflow of P from the ISPs as a whole. Suppose that node i is not in C. In order for the parent node to claim P from i, it must submit proof that the quality of the path starting at i is not as prescribed. This means 190 that there is a cheater after i. Hence, i would also have detected a change in quality, so i can claim P from the next node on the path. Thus, innocent nodes are not punished. The sequence of payments must end by the destination, so the net outflow of P must come from the nodes in C. This establishes all necessary conditions of the equilibrium. Essentially, ROPc allows for an implementation of (price, quality) contracts. 
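The pass-through of the penalty P can be seen in a small Python sketch (ours; node names and the penalty size are hypothetical, and only the single-cheater case is shown, whereas the proof covers coalitions): each parent holding a valid rest-of-path proof claims P from its child, so the transfers telescope and only the cheater ends the period out of pocket.

```python
def settle_period(path, cheater_index, P):
    """One settlement period of the bilateral contracts used in Claim 3.

    path          : node names, first hop first (source and destination excluded)
    cheater_index : index into `path` of the single cheating node, or None
    P             : penalty owed to the parent on a valid rest-of-path proof
    Returns each player's net transfer this period (positive = receives money)."""
    net = {"source": 0.0}
    net.update({n: 0.0 for n in path})
    if cheater_index is None:
        return net
    # Rest-of-path quality is below spec exactly for the hops up to and including
    # the cheater, so each such node's parent holds a valid ROPc proof.
    parent = "source"
    for i, node in enumerate(path):
        if i <= cheater_index:
            net[parent] += P
            net[node] -= P
        parent = node
    return net

# Hypothetical 4-hop path where the third ISP cheats: only it loses P overall.
print(settle_period(["isp1", "isp2", "isp3", "isp4"], cheater_index=2, P=100.0))
```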
Building upon this result, we can construct competition games in which nodes offer various qualities to each other at specified prices, and can credibly commit to meet these performance targets, even allowing for coalitions and a desire to damage other ISPs.

Example 1. Define a Stackelberg price-quality competition game as follows: Extend the partial order of nodes induced by the graph to any complete ordering, such that downstream nodes appear before their parents. In this order, each node selects a contract to offer to its parents, consisting of a rest of path quality, and a linear price. In the routing game, each node selects a next hop at every time, consistent with its advertised rest of path quality.

The Stackelberg price-quality competition game can be implemented in our model with ROPc monitors, by using the strategy in the proof, above. It has the following useful property:

Claim 4. The Stackelberg price-quality competition game yields optimal routes in SPE.

The proof is given in the appendix. This property is favorable from an innovation perspective, since firms that invest in high quality will tend to fall on the optimal path, gaining positive payoff. In general, however, investments may be over or under rewarded. Extra conditions may be given under which innovation decisions approach perfect efficiency for large innovations. We omit the full analysis here.

Example 2. Alternately, we can imagine that players report their private information to a central authority, which then assigns all contracts. For example, contracts could be computed to implement the cost-minimizing VCG mechanism proposed by Feigenbaum, et al. in [7]. With ROPc monitors, we can adapt this mechanism to maximize welfare. For node i on the optimal path, L, the net payment must equal, essentially, its contribution to the welfare of S, D, and the other nodes. If $L'$ is an optimal path in the graph with i removed, the profit flow to i is

$$\big( u(q_L) - u(q_{L'}) \big) - \sum_{j \in L,\, j \ne i} c_j + \sum_{j \in L'} c_j, \qquad (6)$$

where $q_L$ and $q_{L'}$ are the qualities of the two paths. Here, (price, quality) contracts ensure that nodes report their qualities honestly. The incentive structure of the VCG mechanism is what motivates nodes to report their costs accurately. A nice feature of this game is that individual innovation decisions are efficient, meaning that a node will invest in an innovation whenever the investment cost is less than the increased welfare of the optimal data path. Unfortunately, the source may end up paying more than the utility of the path.

Notice that with just E2Ec, a weaker version of Claim 3 holds. Bilateral (price, quality) contracts can be maintained in an equilibrium that is fixed-route and coalition-proof, but not protect-the-innocent. This is done by writing contracts to punish everyone on the path when the end-to-end quality drops. If the path length is n, the first hop pays $nP$ to the source, the second hop pays $(n-1)P$ to the first, and so forth. This ensures that every node is punished sufficiently to make cheating unprofitable. For the reasons we gave previously, we believe that this solution concept is less than ideal, since it allows for malicious nodes to deliberately trigger punishments for potential competitors.

Up to this point, we have adopted fixed-route coalition-proof protect-the-innocent equilibrium as our desired solution concept, and shown that ROPc monitors are sufficient to create some competition games that are desirable in terms of service diversity and innovation.
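As a numeric illustration of the payment in equation (6), the Python sketch below (ours; the two-route graph, qualities, costs, and the linear utility are hypothetical, quality is collapsed to a single dimension, and path qualities are composed with min as one admissible choice of the * operation) computes the profit flow awarded to a node on the optimal path.

```python
def profit_flow_eq6(u, q, c, L, L_without_i, i):
    """Illustration of equation (6): node i's flow profit under the
    welfare-maximizing adaptation of the VCG mechanism, where L is the
    optimal path and L_without_i is the optimal path with i removed.
    u : utility as a function of path quality
    q : dict node -> quality; c : dict node -> cost flow."""
    qL  = min(q[j] for j in L)             # 'weakest link' composition of quality
    qL2 = min(q[j] for j in L_without_i)
    return (u(qL) - u(qL2)) - sum(c[j] for j in L if j != i) + sum(c[j] for j in L_without_i)

# Hypothetical network between S and D: route A = {a1, a2}, route B = {b1}.
q = {"a1": 0.9, "a2": 0.8, "b1": 0.5}
c = {"a1": 1.0, "a2": 1.0, "b1": 0.5}
u = lambda quality: 20.0 * quality
print(profit_flow_eq6(u, q, c, L=["a1", "a2"], L_without_i=["b1"], i="a1"))   # 5.5
```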
As the next claim will show, rest of path monitoring is also necessary to construct such games under our solution concept. Before we proceed, what does it mean for a game to be desirable from the perspective of service diversity and innovation? We will use a very weak assumption, essentially, that the game is not fully commoditized for any node. The claim will hold for this entire class of games. Definition: A competition game is nowhere-commoditized if for each node, i, not adjacent to D, there is some assignment of qualities and marginal costs to nodes, such that the optimal data path includes i, and i has a positive temptation to cheat. In the case of linear contracts, it is sufficient to require that ∞<CI , and that every node make positive profit under some assignment of qualities and marginal costs. Strictly speaking, ROPc monitors are not the only way to construct these desirable games. To prove the next claim, we must broaden our notion of rest of path monitoring to include the similar ROPc'' monitor, which attests to the quality starting at its own node, through the end of the path. Compare the two monitors below: ROPc: gives a node proof that the path quality from the next node to the destination is not correct. ROPc'': gives a node proof that the path quality from that node to the destination is correct. We present a simplified version of this claim, by including an assumption that only one node on the path can cheat at a time (though conspirators can still exchange side payments). We will discuss the full version after the proof. Claim 5. Assume a set of monitors, and a nowhere-commoditized bilateral contract competition game that always maintains the optimal quality in fixed-route coalition-proof protect-the-innocent equilibrium, with only one node allowed to cheat at a time. Then for each node, i, not adjacent to D, either i has an ROPc monitor, or i``s children each have an ROPc'' monitor. Proof: First, because of the fixed-route assumption, punishments must be purely monetary. Next, when cheating occurs, if the payment does not go to the source or destination, it may go to another coalition member, rendering it ineffective. Thus, the source must accept some monetary compensation, net of its normal flow payment, when cheating occurs. Since the source only contracts with the first hop, it must accept this money from the first hop. The source``s contract must therefore distinguish when the path quality is normal from when it is lowered by cheating. To do so, it can either accept proofs 191 from the source, that the quality is lower than required, or it can accept proofs from the first hop, that the quality is correct. These nodes will not rationally offer the opposing type of proof. By definition, any monitor that gives the source proof that the path quality is wrong is an ROPc monitor. Any monitor that gives the first hop proof that the quality is correct is a ROPc'' monitor. Thus, at least one of these monitors must exist. By the protect-the-innocent assumption, if cheating occurs, but the first hop is not a cheater, she must be able to claim the same size reward from the next ISP on the path, and thus pass on the punishment. The first hop``s contract with the second must then distinguish when cheating occurs after the first hop. By argument similar to that for the source, either the first hop has a ROPc monitor, or the second has a ROPc'' monitor. This argument can be iterated along the entire path to the penultimate node before D. 
Since the marginal costs and qualities can be arranged to make any path the optimal path, these statements must hold for all nodes and their children, which completes the proof. The two possibilities for monitor correspond to which node has the burden of proof. In one case, the prior node must prove the suboptimal quality to claim its reward. In the other, the subsequent node must prove that the quality was correct to avoid penalty. Because the two monitors are similar, it seems likely that they require comparable costs to implement. If submitting the proofs is costly, it seems natural that nodes would prefer to use the ROPc monitor, placing the burden of proof on the upstream node. Finally, we note that it is straightforward to derive the full version of the claim, which allows for multiple cheaters. The only complication is that cheaters can exchange side payments, which makes any money transfers between them redundant. Because of this, we have to further generalize our rest of path monitors, so they are less constrained in the case that there are cheaters on either side. 5.1 Implementing Monitors Claim 5 should not be interpreted as a statement that each node must compute the rest of path quality locally, without input from other nodes. Other monitors, besides ROPc and ROPc'' can still be used, loosely speaking, as building blocks. For instance, network tomography is concerned with measuring properties of the network interior with tools located at the edge. Using such techniques, our source might learn both individual node qualities and the data path. This is represented by the following two monitors: SHOPc i : (source-based hop quality) A monitor that gives the source proof of what the quality of node i is. SPATHc: (source-based path) A monitor that gives the source proof of what the data path is at any time, at least as far as it matches the equilibrium path. With these monitors, a punishment mechanism can be designed to fulfill the conditions of Claim 5. It involves the source sharing the proofs it generates with nodes further down the path, which use them to determine bilateral payments. Ultimately however, the proof of Claim 5 shows us that each node i``s bilateral contracts require proof of the rest of path quality. This means that node i (or possibly its children) will have to combine the proofs that they receive to generate a proof of the rest of path quality. Thus, the combined process is itself a rest of path monitor. What we have done, all in all, is constructed a rest of path monitor using SPATHc and SHOPc i as building blocks. Our new monitor includes both the component monitors and whatever distributed algorithmic mechanism exists to make sure nodes share their proofs correctly. This mechanism can potentially involve external institutions. For a concrete example, suppose that when node i suspects it is getting poor rest of path quality from its successor, it takes the downstream node to court. During the discovery process, the court subpoenas proofs of the path and of node qualities from the source (ultimately, there must be some threat to ensure the source complies). Finally, for the court to issue a judgment, one party or the other must compile a proof of what the rest of path quality was. Hence, the entire discovery process acts as a rest of path monitor, albeit a rather costly monitor in this case. Of course, mechanisms can be designed to combine these monitors at much lower cost. 
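A schematic version of that composition in Python (ours; the proof objects are simplified to dictionaries, latency stands in for the quality vector, and the mechanism that compels the source to share its proofs is left out): given a path proof and per-hop quality proofs, a node can assemble a rest-of-path statement together with the evidence backing it.

```python
def rest_of_path_from_components(node, path_proof, hop_quality_proofs, combine):
    """Sketch of building a rest-of-path statement for `node` from a
    source-based path proof (SPATHc) and per-hop quality proofs (SHOPc)."""
    path = path_proof["path"]                      # e.g. ["isp1", "isp2", "isp3"]
    idx = path.index(node)
    q = None
    for hop in path[idx + 1:]:
        q_hop = hop_quality_proofs[hop]["quality"]
        q = q_hop if q is None else combine(q, q_hop)
    return {"node": node, "rest_of_path_quality": q,
            "evidence": [path_proof] + [hop_quality_proofs[h] for h in path[idx + 1:]]}

# Hypothetical latency dimension: qualities add along the path.
path_proof = {"path": ["isp1", "isp2", "isp3"]}
hop_proofs = {h: {"quality": ms} for h, ms in [("isp1", 10), ("isp2", 25), ("isp3", 5)]}
print(rest_of_path_from_components("isp1", path_proof, hop_proofs, lambda a, b: a + b))
```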
Typically, such mechanisms would call for automatic sharing of proofs, with court intervention only as a last resort. We defer these interesting mechanisms to future work. As an aside, intuition might dictate that SHOPc i generates more information than ROPc; after all, inferring individual node qualities seems a much harder problem. Yet, without path information, SHOPc i is not sufficient for our first-best innovation result. The proof of this demonstrates a useful technique: Claim 6. With monitors E2E, ROP, SHOPc i and PRc, and a nowhere-commoditized bilateral contract competition game, the optimal quality cannot be maintained for all assignments of quality and marginal cost, in fixed-route coalition-proof protect-theinnocent equilibrium. Proof: Because nodes cannot verify the data path, they cannot form a proof of what the rest of path quality is. Hence, ROPc monitors do not exist, and therefore the requirements of Claim 5 cannot hold. 6. CONCLUSIONS AND FUTURE WORK It is our hope that this study will have a positive impact in at least three different ways. The first is practical: we believe our analysis has implications for the design of future monitoring protocols and for public policy. For protocol designers, we first provide fresh motivation to create monitoring systems. We have argued that the poor accountability of the Internet is a fundamental obstacle to alleviating the pathologies of commoditization and lack of innovation. Unless accountability improves, these pathologies are guaranteed to remain. Secondly, we suggest directions for future advances in monitoring. We have shown that adding verifiability to monitors allows for some improvements in the characteristics of competition. At the same time, this does not present a fully satisfying solution. This paper has suggested a novel standard for monitors to aspire to - one of supporting optimal routes in innovative competition games under fixed-route coalition-proof protect-the-innocent equilibrium. We have shown that under bilateral contracts, this specifically requires contractible rest of path monitors. This is not to say that other types of monitors are unimportant. We included an example in which individual hop quality monitors and a path monitor can also meet our standard for sustaining competition. However, in order for this to happen, a mechanism must be included 192 to combine proofs from these monitors to form a proof of rest of path quality. In other words, the monitors must ultimately be combined to form contractible rest of path monitors. To support service differentiation and innovation, it may be easier to design rest of path monitors directly, thereby avoiding the task of designing mechanisms for combining component monitors. As far as policy implications, our analysis points to the need for legal institutions to enforce contracts based on quality. These institutions must be equipped to verify proofs of quality, and police illegal contracting behavior. As quality-based contracts become numerous and complicated, and possibly negotiated by machine, this may become a challenging task, and new standards and regulations may have to emerge in response. This remains an interesting and unexplored area for research. The second area we hope our study will benefit is that of clean-slate architectural design. Traditionally, clean-slate design tends to focus on creating effective and elegant networks for a static set of requirements. Thus, the approach is often one of engineering, which tends to neglect competitive effects. 
We agree with Ratnasamy, Shenker, and McCanne, that designing for evolution should be a top priority [11]. We have demonstrated that the network``s monitoring ability is critical to supporting innovation, as are the institutions that support contracting. These elements should feature prominently in new designs. Our analysis specifically suggests that architectures based on bilateral contracts should include contractible rest of path monitoring. From a clean-slate perspective, these monitors can be transparently and fully integrated with the routing and contracting systems. Finally, the last contribution our study makes is methodological. We believe that the mathematical formalization we present is applicable to a variety of future research questions. While a significant literature addresses innovation in the presence of network effects, to the best of our knowledge, ours is the first model of innovation in a network industry that successfully incorporates the actual topological structure as input. This allows the discovery of new properties, such as the weakening of market forces with the number of ISPs on a data path that we observe with lowaccountability. Our method also stands in contrast to the typical approach of distributed algorithmic mechanism design. Because this field is based on a principle-agent framework, contracts are usually proposed by the source, who is allowed to make a take it or leave it offer to network nodes. Our technique allows contracts to emerge from a competitive framework, so the source is limited to selecting the most desirable contract. We believe this is a closer reflection of the industry. Based on the insights in this study, the possible directions for future research are numerous and exciting. To some degree, contracting based on quality opens a Pandora``s Box of pressing questions: Do quality-based contracts stand counter to the principle of network neutrality? Should ISPs be allowed to offer a choice of contracts at different quality levels? What anti-competitive behaviors are enabled by quality-based contracts? Can a contracting system support optimal multicast trees? In this study, we have focused on bilateral contracts. This system has seemed natural, especially since it is the prevalent system on the current network. Perhaps its most important benefit is that each contract is local in nature, so both parties share a common, familiar legal jurisdiction. There is no need to worry about who will enforce a punishment against another ISP on the opposite side of the planet, nor is there a dispute over whose legal rules to apply in interpreting a contract. Although this benefit is compelling, it is worth considering other systems. The clearest alternative is to form a contract between the source and every node on the path. We may call these source contracts. Source contracting may present surprising advantages. For instance, since ISPs do not exchange money with each other, an ISP cannot save money by selecting a cheaper next hop. Additionally, if the source only has contracts with nodes on the intended path, other nodes won``t even be willing to accept packets from this source since they won``t receive compensation for carrying them. This combination seems to eliminate all temptation for a single cheater to cheat in route. Because of this and other encouraging features, we believe source contracts are a fertile topic for further study. Another important research task is to relax our assumption that quality can be measured fully and precisely. 
One possibility is to assume that monitoring is only probabilistic or suffers from noise. Even more relevant is the possibility that quality monitors are fundamentally incomplete. A quality monitor can never anticipate every dimension of quality that future applications will care about, nor can it anticipate a new and valuable protocol that an ISP introduces. We may define a monitor space as a subspace of the quality space that a monitor can measure, $M \subset Q$, and a corresponding monitoring function that simply projects the full range of qualities onto the monitor space, $m: Q \to M$. Clearly, innovations that leave quality invariant under m are not easy to support - they are invisible to the monitoring system. In this environment, we expect that path monitoring becomes more important, since it is the only way to ensure data reaches certain innovator ISPs. Further research is needed to understand this process.

7. ACKNOWLEDGEMENTS

We would like to thank the anonymous reviewers, Jens Grossklags, Moshe Babaioff, Scott Shenker, Sylvia Ratnasamy, and Hal Varian for their comments. This work is supported in part by the National Science Foundation under ITR award ANI-0331659.

8. REFERENCES

[1] Afergan, M. Using Repeated Games to Design Incentive-Based Routing Systems. In Proceedings of IEEE INFOCOM (April 2006).
[2] Afergan, M. and Wroclawski, J. On the Benefits and Feasibility of Incentive Based Routing Infrastructure. In ACM SIGCOMM'04 Workshop on Practice and Theory of Incentives in Networked Systems (PINS) (August 2004).
[3] Argyraki, K., Maniatis, P., Cheriton, D., and Shenker, S. Providing Packet Obituaries. In Third Workshop on Hot Topics in Networks (HotNets) (November 2004).
[4] Clark, D. D. The Design Philosophy of the DARPA Internet Protocols. In Proceedings of ACM SIGCOMM (1988).
[5] Clark, D. D., Wroclawski, J., Sollins, K. R., and Braden, R. Tussle in Cyberspace: Defining Tomorrow's Internet. In Proceedings of ACM SIGCOMM (August 2002).
[6] Dang-Nguyen, G. and Pénard, T. Interconnection Agreements: Strategic Behaviour and Property Rights. In Brousseau, E. and Glachant, J. M., Eds., The Economics of Contracts: Theories and Applications, Cambridge University Press, 2002.
[7] Feigenbaum, J., Papadimitriou, C., Sami, R., and Shenker, S. A BGP-based Mechanism for Lowest-Cost Routing. Distributed Computing 18 (2005), pp. 61-72.
[8] Huston, G. Interconnection, Peering, and Settlements. Telstra, Australia.
[9] Liu, Y., Zhang, H., Gong, W., and Towsley, D. On the Interaction Between Overlay Routing and Traffic Engineering. In Proceedings of IEEE INFOCOM (2005).
[10] MacKie-Mason, J. and Varian, H. Pricing the Internet. In Kahin, B. and Keller, J., Eds., Public Access to the Internet. Englewood Cliffs, NJ: Prentice-Hall, 1995.
[11] Ratnasamy, S., Shenker, S., and McCanne, S. Towards an Evolvable Internet Architecture. In Proceedings of ACM SIGCOMM (2005).
[12] Shakkottai, S., and Srikant, R. Economics of Network Pricing with Multiple ISPs. In Proceedings of IEEE INFOCOM (2005).

9. APPENDIX

Proof of Claim 2. Node i must fall on the equilibrium data path to receive any payment. Let the prices along the data path be $P_s = p_1, p_2, \ldots, p_n$, with marginal costs $c_1, \ldots, c_n$. We may assume the prices on the path are greater than $p_l$, or the claim follows trivially. Each node along the data path can cheat in route by giving data to the bargain path at price no more than $p_l$. So node j's temptation to cheat is at least

$$\frac{p_j - c_j - p_l}{p_j - c_j - p_{j+1}} \;\ge\; \frac{p_j - p_l}{p_j - p_{j+1}}.$$

Then Lemma 1 gives,
$$T \;\ge\; \frac{p_1 - p_l}{p_1 - p_2} \cdot \frac{p_2 - p_l}{p_2 - p_3} \cdots \frac{p_{n-1} - p_l}{p_{n-1} - p_n} \;>\; \left( \frac{(p_n - p_l)(n-1)}{P_s} \right)^{n-1}. \qquad (7)$$

This can be rearranged to give $p \le p_l + \frac{T^{1/(n-1)}}{n-1} P_s$, as required. The rest of the claim simply recognizes that $p/r$ is the greatest reward node i can receive for its investment, so it will not invest sums greater than this.

Proof of Claim 4. Label the nodes 1, 2, ..., N in the order in which they select contracts. Let subgame n be the game that begins with n choosing its contract. Let $L_n$ be the set of possible paths restricted to nodes n, ..., N. That is, $L_n$ is the set of possible routes from S to reach some node that has already moved. For subgame n, define the local welfare over paths $l \in L_n$, and their possible next hops, $j < n$, as follows,

$$V(l, j) = u\!\left( q_l * q_j^{\text{path}} \right) - \sum_{i \in l} c_i - p_j, \qquad (8)$$

where $q_l$ is the quality of path l in the set {n, ..., N}, and $q_j^{\text{path}}$ and $p_j$ are the quality and price of the contract j has offered. For induction, assume that subgame n + 1 maximizes local welfare. We show that subgame n does as well. If node n selects next hop k, we can write the following relation,

$$V(l, n) = V\big( (l, n), k \big) - (p_n - c_n - p_k) = V\big( (l, n), k \big) - \pi_n, \qquad (9)$$

where $\pi_n$ is node n's profit if the path to n is chosen. This path is chosen whenever $V(l, n)$ is maximal over $L_{n+1}$ and possible next hops. If $V((l, n), k)$ is maximal over $L_n$, it is also maximal over the paths in $L_{n+1}$ that don't lead to n. This means that node n can choose some $\pi_n$ small enough so that $V(l, n)$ is maximal over $L_{n+1}$, so the route will lead to k. Conversely, if $V((l, n), k)$ is not maximal over $L_n$, either V is greater for another of n's next hops, in which case n will select that one in order to increase $\pi_n$, or V is greater for some path in $L_{n+1}$ that doesn't lead to n, in which case $V(l, n)$ cannot be maximal for any nonnegative $\pi_n$. Thus, we conclude that subgame n maximizes local welfare. For the initial case, observe that this assumption holds for the source. Finally, we deduce that subgame 1, which is the entire game, maximizes local welfare, which is equivalent to actual welfare. Hence, the Stackelberg price-quality game yields an optimal route.
Network Monitors and Contracting Systems: Competition and Innovation ABSTRACT Today's Internet industry suffers from several well-known pathologies, but none is as destructive in the long term as its resistance to evolution. Rather than introducing new services, ISPs are presently moving towards greater commoditization. It is apparent that the network's primitive system of contracts does not align incentives properly. In this study, we identify the network's lack of accountability as a fundamental obstacle to correcting this problem: Employing an economic model, we argue that optimal routes and innovation are impossible unless new monitoring capability is introduced and incorporated with the contracting system. Furthermore, we derive the minimum requirements a monitoring system must meet to support first-best routing and innovation characteristics. Our work does not constitute a new protocol; rather, we provide practical and specific guidance for the design of monitoring systems, as well as a theoretical framework to explore the factors that influence innovation. 1. INTRODUCTION Many studies before us have noted the Internet's resistance to new services and evolution. In recent decades, numerous ideas have been developed in universities, implemented in code, and even written into the routers and end systems of the network, only to languish as network operators fail to turn them on on a large scale. The list includes Multicast, IPv6, IntServ, and DiffServ. Lacking the incentives just to activate services, there seems to be little hope of ISPs devoting adequate resources to developing new ideas. In the long term, this pathology stands out as a critical obstacle to the network's continued success (Ratnasamy, Shenker, and McCanne provide extensive discussion in [11]). On a smaller time scale, ISPs shun new services in favor of cost cutting measures. Thus, the network has characteristics of a commodity market. Although in theory, ISPs have a plethora of routing policies at their disposal, the prevailing strategy is to route in the cheapest way possible [2]. On one hand, this leads directly to suboptimal routing. More importantly, commoditization in the short term is surely related to the lack of innovation in the long term. When the routing decisions of others ignore quality characteristics, ISPs are motivated only to lower costs. There is simply no reward for introducing new services or investing in quality improvements. In response to these pathologies and others, researchers have put forth various proposals for improving the situation. These can be divided according to three high-level strategies: The first attempts to improve the status quo by empowering end-users. Clark, et al., suggest that giving end-users control over routing would lead to greater service diversity, recognizing that some payment mechanism must also be provided [5]. Ratnasamy, Shenker, and McCanne postulate a link between network evolution and user-directed routing [11]. They propose a system of Anycast to give end-users the ability to tunnel their packets to an ISP that introduces a desirable protocol. The extra traffic to the ISP, the authors suggest, will motivate the initial investment. The second strategy suggests a revision of the contracting system. This is exemplified by MacKie-Mason and Varian, who propose a "smart market" to control access to network resources [10]. Prices are set to the market-clearing level based on bids that users associate to their traffic. 
In another direction, Afergan and Wroclawski suggest that prices should be explicitly encoded in the routing protocols [2]. They argue that such a move would improve stability and align incentives. The third high-level strategy calls for greater network accountability. In this vein, Argyraki, et al., propose a system of packet obituaries to provide feedback as to which ISPs drop packets [3]. They argue that such feedback would help reveal which ISPs were adequately meeting their contractual obligations. Unlike the first two strategies, we are not aware of any previous studies that have connected accountability with the pathologies of commoditization or lack of innovation. It is clear that these three strategies are closely linked to each other (for example, [2], [5], and [9] each argue that giving end-users routing control within the current contracting system is problematic). Until today, however, the relationship between them has been poorly understood. There is currently little theoretical foundation to compare the relative merits of each proposal, and a particular lack of evidence linking accountability with innovation and service differentiation. This paper will address both issues. We will begin by introducing an economic network model that relates accountability, contracts, competition, and innovation. Our model is highly stylized and may be considered preliminary: it is based on a single source sending data to a single destination. Nevertheless, the structure is rich enough to expose previously unseen features of network behavior. We will use our model for two main purposes: First, we will use our model to argue that the lack of accountability in today's network is a fundamental obstacle to overcoming the pathologies of commoditization and lack of innovation. In other words, unless new monitoring capabilities are introduced, and integrated with the system of contracts, the network cannot achieve optimal routing and innovation characteristics. This result provides motivation for the remainder of the paper, in which we explore how accountability can be leveraged to overcome these pathologies and create a sustainable industry. We will approach this problem from a clean-slate perspective, deriving the level of accountability needed to sustain an ideal competitive structure. When we say that today's Internet has poor accountability, we mean that it reveals little information about the behavior--or misbehavior--of ISPs. This well-known trait is largely rooted in the network's history. In describing the design philosophy behind the Internet protocols, Clark lists accountability as the least important among seven "second level goals." [4] Accordingly, accountability received little attention during the network's formative years. Clark relates this to the network's military context, and finds that had the network been designed for commercial development, accountability would have been a top priority. Argyraki, et al., conjecture that applying the principles of layering and transparency may have led to the network's lack of accountability [3]. According to these principles, end hosts should be informed of network problems only to the extent that they are required to adapt. They notice when packet drops occur so that they can perform congestion control and retransmit packets. Details of where and why drops occur are deliberately concealed. The network's lack of accountability is highly relevant to a discussion of innovation because it constrains the system of contracts. 
This is because contracts depend upon external institutions to function--the "judge" in the language of incomplete contract theory, or simply the legal system. Ultimately, if a judge cannot verify that some condition holds, she cannot enforce a contract based on that condition. Of course, the vast majority of contracts never end up in court. Especially when a judge's ruling is easily predicted, the parties will typically comply with the contract terms on their own volition. This would not be possible, however, without the judge acting as a last resort. An institution to support contracts is typically complex, but we abstract it as follows: We imagine that a contract is an algorithm that outputs a payment transfer among a set of ISPs (the parties) at every time. This payment is a function of the past and present behaviors of the participants, but only those that are verifiable. Hence, we imagine that a contract only accepts "proofs" as inputs. We will call any process that generates these proofs a contractible monitor. Such a monitor includes metering or sensing devices on the physical network, but it is a more general concept. Constructing a proof of a particular behavior may require readings from various devices distributed among many ISPs. The contractible monitor includes whatever distributed algorithmic mechanism is used to motivate ISPs to share this private information. Figure 1 demonstrates how our model of contracts fits together. We make the assumption that all payments are mediated by contracts. This means that without contractible monitors that attest to, say, latency, payments cannot be conditioned on latency. Network Monitor Behavior Figure 1: Relationship between monitors and contracts With this model, we may conclude that the level of accountability in today's Internet only permits best effort contracts. Nodes cannot condition payments on either quality or path characteristics. Is there anything wrong with best-effort contracts? The reader might wonder why the Internet needs contracts at all. After all, in non-network industries, traditional firms invest in research and differentiate their products, all in the hopes of keeping their customers and securing new ones. One might believe that such market forces apply to ISPs as well. We may adopt this as our null hypothesis: Null hypothesis: Market forces are sufficient to maintain service diversity and innovation on a network, at least to the same extent as they do in traditional markets. There is a popular intuitive argument that supports this hypothesis, and it may be summarized as follows: Intuitive argument supporting null hypothesis: 1. Access providers try to increase their quality to get more consumers. 2. Access providers are themselves customers for second hop ISPs, and the second hops will therefore try to provide highquality service in order to secure traffic from access providers. Access providers try to select high quality transit because that increases their quality. 3. The process continues through the network, giving every ISP a competitive reason to increase quality. We are careful to model our network in continuous time, in order to capture the essence of this argument. We can, for example, specify equilibria in which nodes switch to a new next hop in the event of a quality drop. Moreover, our model allows us to explore any theoretically possible punishments against cheaters, including those that are costly for end-users to administer. 
By contrast, customers in the real world rarely respond collectively, and often simply seek the best deal currently offered. These constraints limit their ability to punish cheaters. Even with these liberal assumptions, however, we find that we must reject our null hypothesis. Our model will demonstrate that identifying a cheating ISP is difficult under low accountability, limiting the threat of market driven punishment. We will define an index of commoditization and show that it increases without bound as data paths grow long. Furthermore, we will demonstrate a framework in which an ISP's maximum research investment decreases hyperbolically with its distance from the end-user. To summarize, we argue that the Internet's lack of accountability must be addressed before the pathologies of commoditization and lack of innovation can be resolved. This leads us to our next topic: How can we leverage accountability to overcome these pathologies? We approach this question from a clean-slate perspective. Instead of focusing on incremental improvements, we try to imagine how an ideal industry would behave, then derive the level of accountability needed to meet that objective. According to this approach, we first craft a new equilibrium concept appropriate for network competition. Our concept includes the following requirements: First, we require that punishing ISPs that cheat is done without rerouting the path. Rerouting is likely to prompt end-users to switch providers, punishing access providers who administer punishments correctly. Next, we require that the equilibrium cannot be threatened by a coalition of ISPs that exchanges illicit side payments. Finally, we require that the punishment mechanism that enforces contracts does not punish innocent nodes that are not in the coalition. The last requirement is somewhat unconventional from an economic perspective, but we maintain that it is crucial for any reasonable solution. Although ISPs provide complementary services when they form a data path together, they are likely to be horizontal competitors as well. If innocent nodes may be punished, an ISP may decide to deliberately cheat and draw punishment onto itself and its neighbors. By cheating, the ISP may save resources, thereby ensuring that the punishment is more damaging to the other ISPs, which probably compete with the cheater directly for some customers. In the extreme case, the cheater may force the other ISPs out of business, thereby gaining a monopoly on some routes. Applying this equilibrium concept, we derive the monitors needed to maintain innovation and optimize routes. The solution is surprisingly simple: contractible monitors must report the quality of the rest of the path, from each ISP to the destination. It turns out that this is the correct minimum accountability requirement, as opposed to either end-to-end monitors or hop-by-hop monitors, as one might initially suspect. Rest of path monitors can be implemented in various ways. They may be purely local algorithms that listen for packet echoes. Alternately, they can be distributed in nature. We describe a way to construct a rest of path monitor out of monitors for individual ISP quality and for the data path. This requires a mechanism to motivate ISPs to share their monitor outputs with each other. The rest of path monitor then includes the component monitors and the distributed algorithmic mechanism that ensures that information is shared as required. 
This example shows that other types of monitors may be useful as building blocks, but must be combined to form rest of path monitors in order to achieve ideal innovation characteristics. Our study has several practical implications for future protocol design. We show that new monitors must be implemented and integrated with the contracting system before the pathologies of commoditization and lack of innovation can be overcome. Moreover, we derive exactly what monitors are needed to optimize routes and support innovation. In addition, our results provide useful input for clean-slate architectural design, and we use several novel techniques that we expect will be applicable to a variety of future research. The rest of this paper is organized as follows: In section 2, we lay out our basic network model. In section 3, we present a low-accountability network, modeled after today's Internet. We demonstrate how poor monitoring causes commoditization and a lack of innovation. In section 4, we present verifiable monitors, and show that proofs, even without contracts, can improve the status quo. In section 5, we turn our attention to contractible monitors. We show that rest of path monitors can support competition games with optimal routing and innovation. We further show that rest of path monitors are required to support such competition games. We continue by discussing how such monitors may be constructed using other monitors as building blocks. In section 6, we conclude and present several directions for future research. 2. BASIC NETWORK MODEL A source, S, wants to send data to destination, D. S and D are nodes on a directed, acyclic graph, with a finite set of intermediate nodes, V = {1, 2, ..., N}, representing ISPs. All paths lead to D, and every node not connected to D has at least two choices for next hop. We will represent quality by a finite dimensional vector space, Q, called the quality space. Each dimension represents a distinct network characteristic that end-users care about. For example, latency, loss probability, jitter, and IP version can each be assigned to a dimension. To each node, i, we associate a vector in the quality space, qi ∈ Q. This corresponds to the quality a user would experience if i were the only ISP on the data path. Let q ∈ Q^N be the vector of all node qualities. Of course, when data passes through multiple nodes, their qualities combine in some way to yield a path quality. We represent this by an associative binary operation, *: Q × Q → Q. For path (v1, v2, ..., vn), the quality is given by qv1 * qv2 * ... * qvn. The * operation reflects the characteristics of each dimension of quality. For example, * can act as an addition in the case of latency, multiplication in the case of loss probability, or a minimum-argument function in the case of security. When data flows along a complete path from S to D, the source and destination, generally regarded as a single player, enjoy utility given by a function of the path quality, u: Q → R. Each node along the path, i, experiences some cost of transmission, ci. 2.1 Game Dynamics Ultimately, we are most interested in policies that promote innovation on the network. In this study, we will use innovation in a fairly general sense. Innovation describes any investment by an ISP that alters its quality vector so that at least one potential data path offers higher utility. This includes researching a new routing algorithm that decreases the amount of jitter users experience. It also includes deploying a new protocol that supports quality of service.
Even more broadly, buying new equipment to decrease latency may also be regarded as innovation. Innovation may be thought of as the micro-level process by which the network evolves. Our analysis is limited in one crucial respect: We focus on inventions that a single ISP can implement to improve the end-user experience. This excludes technologies that require adoption by all ISPs on the network to function. Because such technologies do not create a competitive advantage, rewarding them is difficult and may require intellectual property or some other market distortion. We defer this interesting topic to future work. At first, it may seem unclear how a large-scale distributed process such as innovation can be influenced by mechanical details like network monitors. Our model must draw this connection in a realistic fashion. The rate of innovation depends on the profits that potential innovators expect in the future. The reward generated by an invention must exceed the total cost to develop it, or the inventor will not rationally invest. This reward, in turn, is governed by the competitive environment in which the firm operates, including the process by which firms select prices, and agree upon contracts with each other. Of course, these decisions depend on how routes are established, and how contracts determine actual monetary exchanges. Any model of network innovation must therefore relate at least three distinct processes: innovation, competition, and routing. We select a game dynamics that makes the relation between these processes as explicit as possible. This is represented schematically in Figure 2. Figure 2: Game Dynamics The innovation stage occurs first, at time t = −2. In this stage, each agent decides whether or not to make research investments. If she chooses not to, her quality remains fixed. If she makes an investment, her quality may change in some way. It is not necessary for us to specify how such changes take place. The agents' choices in this stage determine the vector of qualities, q, which is common knowledge for the rest of the game. Next, at time t = −1, agents participate in the competition stage, in which contracts are agreed upon. In today's industry, these contracts include prices for transit access, and peering agreements. Since access is provided on a best-effort basis, a transit agreement can simply be represented by its price. Other contracting systems we will explore will require more detail. Finally, beginning at t = 0, firms participate in the routing stage. Other research has already employed repeated games to study routing, for example [1], [12]. Repetition reveals interesting effects not visible in a single stage game, such as informal collusion to elevate prices in [12]. We use a game in continuous time in order to study such properties. For example, we will later ask whether a player will maintain higher quality than her contracts require, in the hope of keeping her customer base or attracting future customers. Our dynamics reflect the fact that ISPs make innovation decisions infrequently. Although real firms have multiple opportunities to innovate, each opportunity is followed by a substantial length of time in which qualities are fixed. The decision to invest focuses on how the firm's new quality will improve the contracts it can enter into. Hence, our model places innovation at the earliest stage, attempting to capture a single investment decision. Contracting decisions are made on an intermediate time scale, thus appearing next in the dynamics.
Routing decisions are made very frequently, mainly to maximize immediate profit flows, so they appear in the last stage. Because of this ordering, our model does not allow firms to route strategically to affect future innovation or contracting decisions. In opposition, Afergan and Wroclawski argue that contracts are formed in response to current traffic patterns, in a feedback loop [2]. Although we are sympathetic to their observation, such an addition would make our analysis intractable. Our model is most realistic when contracting decisions are infrequent. Throughout this paper, our solution concept will be a subgame perfect equilibrium (SPE). An SPE is a strategy point that is a Nash equilibrium when restricted to each subgame. Three important subgames have been labeled in Figure 2. The innovation game includes all three stages. The competition game includes only the competition stage and the routing stage. The routing game includes only the routing stage. An SPE guarantees that players are "forward-looking." This means, for example, that in the competition stage, firms must act rationally, maximizing their expected profits in the routing stage. They cannot carry out threats they made in the innovation stage if it lowers their expected payoff. Our schematic already suggests that the routing game is crucial for promoting innovation. To support innovation, the competition game must somehow reward ISPs with "high" quality. But that means that the routing game must tend to route to nodes with high quality. If the routing game always selects the lowest-cost routes, for example, innovation will not be supported. We will support this observation with analysis later. 2.2 The Routing Game The routing game proceeds in continuous time, with all players discounting by a common factor, r. The outputs from previous stages, q and the set of contracts, are treated as exogenous parameters for this game. For each time t ≥ 0, each node must select a next hop to route data to. Data flows across the resultant path, causing utility flow to S and D, and a flow cost to the nodes on the path, as described above. Payment flows are also created, based on the contracts in place. Relating our game to the familiar repeated prisoners' dilemma, imagine that we are trying to impose a high quality, but costly path. As we argued loosely above, such paths must be sustainable in order to support innovation. Each ISP on the path tries to maximize her own payment, net of costs, so she may not want to cooperate with our plan. Rather, if she can find a way to save on costs, at the expense of the high quality we desire, she will be tempted to do so. Analogously to the prisoners' dilemma, we will call such a decision cheating. A little more formally: Cheating refers to any action that an ISP can take, contrary to some target strategy point that we are trying to impose, that enhances her immediate payoff, but compromises the quality of the data path. One type of cheating relates to the data path. Each node on the path has to pay the next node to deliver its traffic. If the next node offers high quality transit, we may expect that a lower quality node will offer a lower price. Each node on the path will be tempted to route to a cheaper next hop, increasing her immediate profits, but lowering the path quality. We will call this type of action cheating in route. Another possibility we can model is that a node finds a way to save on its internal forwarding costs, at the expense of its own quality.
We will call this cheating internally to distinguish it from cheating in route. For example, a node might drop packets beyond the rate required for congestion control, in order to throttle back TCP flows and thus save on forwarding costs [3]. Alternately, a node employing quality of service could give high priority packets a lower class of service, thus saving on resources and perhaps allowing itself to sell more high priority service. If either cheating in route or cheating internally is profitable, the specified path will not be an equilibrium. We assume that cheating can never be caught instantaneously. Rather, a cheater can always enjoy the payoff from cheating for some positive time, which we label t0. This includes the time for other players to detect and react to the cheating. If the cheater has a contract which includes a customer lock-in period, t0 also includes the time until customers are allowed to switch to a new ISP. As we will see later, it is socially beneficial to decrease t0, so such lock-in is detrimental to welfare. 3. PATHOLOGIES OF A LOWACCOUNTABILITY NETWORK In order to motivate an exploration of monitoring systems, we begin in this section by considering a network with a poor degree of accountability, modeled after today's Internet. We will show how the lack of monitoring necessarily leads to poor routing and diminishes the rate of innovation. Thus, the network's lack of accountability is a fundamental obstacle to resolving these pathologies. 3.1 Accountability in the Current Internet First, we reflect on what accountability characteristics the present Internet has. Argyraki, et al., point out that end hosts are given minimal information about packet drops [3]. Users know when drops occur, but not where they occur, nor why. Dropped packets may represent the innocent signaling of congestion, or, as we mentioned above, they may be a form of cheating internally. The problem is similar for other dimensions of quality, or in fact more acute. Finding an ISP that gives high priority packets a lower class of service, for example, is further complicated by the lack of even basic diagnostic tools. In fact, it is similarly difficult to identify an ISP that cheats in route. Huston notes that Internet traffic flows do not always correspond to routing information [8]. An ISP may hand a packet off to a neighbor regardless of what routes that neighbor has advertised. Furthermore, blocks of addresses are summarized together for distant hosts, so a destination may not even be resolvable until packets are forwarded closer. One might argue that diagnostic tools like ping and traceroute can identify cheaters. Unfortunately, Argyraki, et al., explain that these tools only reveal whether probe packets are echoed, not the fate of past packets [3]. Thus, for example, they are ineffective in detecting low-frequency packet drops. Even more fundamentally, a sophisticated cheater can always spot diagnostic packets and give them special treatment. As a further complication, a cheater may assume different aliases for diagnostic packets arriving over different routes. As we will see below, this gives the cheater a significant advantage in escaping punishment for bad behavior, even if the data path is otherwise observable. 3.2 Modeling Low-Accountability As the above evidence suggests, the current industry allows for very little insight into the behavior of the network. In this section, we attempt to capture this lack of accountability in our model. 
We begin by defining a monitor, our model of the way that players receive external information about network behavior, A monitor is any distributed algorithmic mechanism that runs on the network graph, and outputs, to specific nodes, informational statements about current or past network behavior. We assume that all external information about network behavior is mediated in this way. The accountability properties of the Internet can be represented by the following monitors: E2E (End to End): A monitor that informs S/D about what the total path quality is at any time (this is the quality they experience). ROP (Rest of Path): A monitor that informs each node along the data path what the quality is for the rest of the path to the destination. PRc (Packets Received): A monitor that tells nodes how much data they accept from each other, so that they can charge by volume. It is important to note, however, that this information is aggregated over many source-destination pairs. Hence, for the sake of realism, it cannot be used to monitor what the data path is. Players cannot measure the qualities of other, single nodes, just the rest of the path. Nodes cannot see the path past the next hop. This last assumption is stricter than needed for our results. The critical ingredient is that nodes cannot verify that the path avoids a specific hop. This holds, for example, if the path is generally visible, except nodes can use different aliases for different parents. Similar results also hold if alternate paths always converge after some integer number, m, of hops. It is important to stress that E2E and ROP are not the contractible monitors we described in the introduction--they do not generate proofs. Thus, even though a player observes certain information, she generally cannot credibly share it with another player. For example, if a node after the first hop starts cheating, the first hop will detect the sudden drop in quality for the rest of the path, but the first hop cannot make the source believe this observation--the source will suspect that the first hop was the cheater, and fabricated the claim against the rest of the path. Typically, E2E and ROP are envisioned as algorithms that run on a single node, and listen for packet echoes. This is not the only way that they could be implemented, however; an alternate strategy is to aggregate quality measurements from multiple points in the network. These measurements can originate in other monitors, located at various ISPs. The monitor then includes the component monitors as well as whatever mechanisms are in place to motivate nodes to share information honestly as needed. For example, if the source has monitors that reveal the qualities of individual nodes, they could be combined with path information to create an ROP monitor. Since we know that contracts only accept proofs as input, we can infer that payments in this environment can only depend on the number of packets exchanged between players. In other words, contracts are best-effort. For the remainder of this section, we will assume that contracts are also linear--there is a constant payment flow so long as a node accepts data, and all conditions of the contract are met. Other, more complicated tariffs are also possible, and are typically used to generate lock-in. We believe that our parameter t0 is sufficient to describe lock-in effects, and we believe that the insights in this section apply equally to any tariffs that are bounded so that the routing game remains continuous at infinity. 
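To make this concrete, the sketch below (Python, with invented names; a sketch only, not part of the formal model) shows the only kind of bilateral contract this environment supports: a best-effort, linear tariff whose payment rule accepts a PRc-style proof of volume and nothing else, so quality cannot enter the formula.

```python
# Sketch of a best-effort, linear bilateral contract: the only verifiable input
# is a proof of packets received (PRc), so payment can depend on nothing else.
# Names are illustrative.

from dataclasses import dataclass


@dataclass(frozen=True)
class PacketsReceivedProof:
    """Stand-in for a PRc proof covering one billing interval."""
    payer: str      # upstream node sending the traffic
    payee: str      # downstream node accepting it
    packets: int


@dataclass(frozen=True)
class BestEffortContract:
    payer: str
    payee: str
    price_per_packet: float   # the single parameter of a linear contract

    def payment(self, proof: PacketsReceivedProof) -> float:
        # The contract accepts only proofs; quality never enters the formula.
        assert (proof.payer, proof.payee) == (self.payer, self.payee)
        return self.price_per_packet * proof.packets


# Example: node 1 pays node 2 for transit during one interval.
contract = BestEffortContract(payer="node1", payee="node2", price_per_packet=1e-6)
print(contract.payment(PacketsReceivedProof("node1", "node2", packets=10_000_000)))  # 10.0
```

The analysis that follows restricts attention to exactly these linear contracts.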
Restricting attention to linear contracts allows us to represent some node i's contract by its price, pi. Because we further know that nodes cannot observe the path after the next hop, we can infer that contracts exist only between neighboring nodes on the graph. We will call this arrangement of contracts bilateral. When a competition game exclusively uses bilateral contracts, we will call it a bilateral contract competition game. We first focus on the routing game and ask whether a high quality route can be maintained, even when a low quality route is cheaper. Recall that this is a requirement in order for nodes to have any incentive to innovate. If nodes tend to route to low price next hops, regardless of quality, we say that the network is commoditized. To measure this tendency, we define an index of commoditization as follows: For a node on the data path, i, define its quality premium, di = pj − pmin, where pj is the flow payment to the next hop in equilibrium, and pmin is the price of the lowest cost next hop. Definition: The index of commoditization, IC, is the average, over each node on the data path, i, of i's flow profit as a fraction of i's quality premium, (pi − ci − pj) / di. IC ranges from 0, when each node spends all of its potential profit on its quality premium, to infinity, when a node absorbs positive profit, but uses the lowest price next hop. A high value for IC implies that nodes are spending little of their money inflow on purchasing high quality for the rest of the path. As the next claim shows, this is exactly what happens as the path grows long: Claim 1. If the only monitors are E2E, ROP, and PRc, then IC → ∞ as n → ∞, where n is the number of nodes on the data path. To show that this is true, we first need the following lemma, which will establish the difficulty of punishing nodes in the network. First a bit of notation: Recall that a cheater can benefit from its actions for t0 > 0 before other players can react. When a node cheats, it can expect a higher profit flow, at least until it is caught and other players react, perhaps by diverting traffic. Let node i's normal profit flow be πi, and her profit flow during cheating be some greater value, γi. We will call the ratio, γi / πi, the temptation to cheat. Lemma 1. If the only monitors are E2E, ROP, and PRc, the discounted time, ∫_0^{tn} e^{−rt} dt, needed to punish a cheater increases at least in proportion to the product of the temptations along the path: ∫_0^{tn} e^{−rt} dt ≥ ( ∏_{i on data path} γi / πi ) ∫_0^{t0} e^{−rt} dt. Corollary. If nodes share a minimum temptation to cheat, γ / π, the discounted time needed to punish cheating increases at least exponentially in the length of the data path, n: ∫_0^{tn} e^{−rt} dt ≥ (γ / π)^n ∫_0^{t0} e^{−rt} dt. Since it is the discounted time that increases exponentially, the actual time increases faster than exponentially. If n is so large that tn is undefined, the given path cannot be maintained in equilibrium. Proof. The proof proceeds by induction on the number of nodes on the equilibrium data path, n. For n = 1, there is a single node, say i. By cheating, the node earns extra discounted profit (γi − πi) ∫_0^{t0} e^{−rt} dt before the source can react. To deter this, the source must withhold i's profit flow, πi, for long enough afterwards that the discounted loss outweighs this gain: πi ∫_{t0}^{t1} e^{−rt} dt ≥ (γi − πi) ∫_0^{t0} e^{−rt} dt, which is equivalent to ∫_0^{t1} e^{−rt} dt ≥ (γi / πi) ∫_0^{t0} e^{−rt} dt. For n > 1, assume for induction that the claim holds for n − 1. The source does not know whether the cheater is the first hop, or after the first hop. Because the source does not know the data path after the first hop, it is unable to punish nodes beyond it. If it chooses a new first hop, it might not affect the rest of the data path. Because of this, the source must rely on the first hop to punish cheating nodes farther along the path.
The first hop needs discounted time ∫_0^{t(n−1)} e^{−rt} dt to punish cheaters among the remaining n − 1 nodes, by the induction hypothesis, so the source must give the first hop this much discounted time in order to punish defectors further down the line (and the source will expect poor quality during this period). Next, the source must be protected against a first hop that cheats, and pretends that the problem is later in the path. The first hop can cheat in this way for the full discounted time ∫_0^{t(n−1)} e^{−rt} dt by assumption. So the source must punish the first hop long enough to remove the extra profit it can make over this period. Following the same argument as for n = 1, we obtain ∫_0^{tn} e^{−rt} dt ≥ (γ1 / π1) ∫_0^{t(n−1)} e^{−rt} dt ≥ ( ∏_{i on data path} γi / πi ) ∫_0^{t0} e^{−rt} dt, which completes the proof. ❑ The above lemma and its corollary show that punishing cheaters becomes more and more difficult as the data path grows long, until doing so is impossible. To capture some intuition behind this result, imagine that you are an end user, and you notice a sudden drop in service quality. If your data only travels through your access provider, you know it is that provider's fault. You can therefore take your business elsewhere, at least for some time. This threat should motivate your provider to maintain high quality. Suppose, on the other hand, that your data traverses two providers. When you complain to your ISP, he responds, "yes, we know your quality went down, but it's not our fault, it's the next ISP. Give us some time to punish them and then normal quality will resume." If your access provider is telling the truth, you will want to listen, since switching access providers may not even route around the actual offender. Thus, you will have to accept lower quality service for some longer time. On the other hand, you may want to punish your access provider as well, in case he is lying. This means you have to wait longer to resume normal service. As more ISPs are added to the path, the time increases in a recursive fashion. With this lemma in hand, we can return to prove Claim 1. Proof of Claim 1. Fix an equilibrium data path of length n. Label the path nodes 1, 2, ..., n. For each node i, let i's quality premium be di = p(i+1) − p(i+1)', where p(i+1) is the flow payment to i's equilibrium next hop and p(i+1)' is the price of its lowest price next hop. Then we have IC = (1/n) Σ_i (pi − ci − p(i+1)) / di = (1/n) Σ_i 1 / (gi − 1), where gi is node i's temptation to cheat by routing to the lowest price next hop; indeed, gi − 1 = (γi − πi) / πi = di / (pi − ci − p(i+1)). By Lemma 1, a path that is maintained in equilibrium must satisfy ∏_i gi ≤ 1 / (1 − e^{−rt0}), since the total available discounted time, ∫_0^{∞} e^{−rt} dt = 1/r, is finite. As n grows, this bound forces all but a bounded number of the gi arbitrarily close to 1, so the corresponding terms 1 / (gi − 1) become arbitrarily large, and the average, IC, grows without bound. ❑ According to the claim, as the data path grows long, it increasingly resembles a lowest-price path. Since lowest-price routing does not support innovation, we may speculate that innovation degrades with the length of the data path. Though we suspect stronger claims are possible, we can demonstrate one such result by including an extra assumption: Available Bargain Path: A competitive market exists for low-cost transit, such that every node can route to the destination for no more than flow payment, pl. Claim 2. Under the available bargain path assumption, if node i, a distance n from S, can invest to alter its quality, and the source will spend no more than Ps for a route including node i's new quality, then the payment to node i, p, decreases hyperbolically with n: p − pl is bounded above by a constant, depending on Ps, pl, r, and t0, divided by n.
However, it must also pay the nodes on the way to i a premium in order to motivate them to route properly. The claim shows that this premium increases with the distance to i, until it dwarfs the original payment, p. Our claims stand in sharp contrast to our null hypothesis from the introduction. Comparing the intuitive argument that supported our hypothesis with these claims, we can see that we implicitly used an oversimplified model of market pressure (as either present or not). As is now clear, market pressure relies on the decisions of customers, but these are limited by the lack of information. Hence, competitive forces degrade as the network deepens. 4. VERIFIABLE MONITORS In this section, we begin to introduce more accountability into the network. Recall that in the previous section, we assumed that players couldn't convince each other of their private information. What would happen if they could? If a monitor's informational signal can be credibly conveyed to others, we will call it a verifiable monitor. The monitor's output in this case can be thought of as a statement accompanied by a proof, a string that can be processed by any player to determine that the statement is true. A verifiable monitor is a distributed algorithmic mechanism that runs on the network graph, and outputs, to specific nodes, proofs about current or past network behavior. Along these lines, we can imagine verifiable counterparts to E2E and ROP. We will label these E2Ev and ROPv. With these monitors, each node observes the quality of the rest of the path and can also convince other players of these observations by giving them a proof. By adding verifiability to our monitors, identifying a single cheater is straightforward. The cheater is the node that cannot produce proof that the rest of path quality decreased. This means that the negative results of the previous section no longer hold. For example, the following lemma stands in contrast to Lemma 1. Lemma 2. With monitors E2Ev, ROPv, and PRc, and provided that the node before each potential cheater has an alternate next hop that isn't more expensive, it is possible to enforce any data path in SPE so long as the maximum temptation is less than what can be deterred in finite time, that is, so long as max_i γi / πi < 1 / (1 − e^{−rt0}). Proof. This lemma follows because nodes can share proofs to identify who the cheater is. Only that node must be punished in equilibrium, and the preceding node does not lose any payoff in administering the punishment. ❑ With this lemma in mind, it is easy to construct counterexamples to Claim 1 and Claim 2 in this new environment. Unfortunately, there are at least four reasons not to be satisfied with this improved monitoring system. The first, and weakest reason is that the maximum temptation remains finite, causing some distortion in routes or payments. Each node along a route must extract some positive profit unless the next hop is also the cheapest. Of course, if t0 is small, this effect is minimal. The second, and more serious reason is that we have always given our source the ability to commit to any punishment. Real world users are less likely to act collectively, and may simply search for the best service currently offered. Since punishment phases are generally characterized by a drop in quality, real world end-users may take this opportunity to shop for a new access provider. This will make nodes less motivated to administer punishments. The third reason is that Lemma 2 does not apply to cheating by coalitions.
A coalition node may pretend to punish its successor, but instead enjoy a secret payment from the cheating node. Alternately, a node may bribe its successor to cheat, if the punishment phase is profitable, and so forth. The required discounted time for punishment may increase exponentially in the number of coalition members, just as in the previous section! The final reason not to accept this monitoring system is that when a cheater is punished, the path will often be routed around not just the offender, but around other nodes as well. Effectively, innocent nodes will be punished along with the guilty. In our abstract model, this doesn't cause trouble since the punishment falls off the equilibrium path. The effects are not so benign in the real world. When ISPs lie in sequence along a data path, they contribute complementary services, and their relationship is vertical. From the perspective of other source-destination pairs, however, these same firms are likely to be horizontal competitors. Because of this, a node might deliberately cheat, in order to trigger punishment for itself and its neighbors. By cheating, the node will save money to some extent, so the cheater is likely to emerge from the punishment phase better off than the innocent nodes. This may give the cheater a strategic advantage against its competitors. In the extreme, the cheater may use such a strategy to drive neighbors out of business, and thereby gain a monopoly on some routes. 5. CONTRACTIBLE MONITORS At the end of the last section, we identified several drawbacks that persist in an environment with E2Ev, ROPv, and PRc. In this section, we will show how all of these drawbacks can be overcome. To do this, we will require our third and final category of monitor: A contractible monitor is simply a verifiable monitor that generates proofs that can serve as input to a contract. Thus, contractible is jointly a property of the monitor and the institutions that must verify its proofs. Contractibility requires that a court, 1. Can verify the monitor's proofs. 2. Can understand what the proofs and contracts represent to the extent required to police illegal activity. 3. Can enforce payments among contracting parties. Understanding the agreements between companies has traditionally been a matter of reading contracts on paper. This may prove to be a harder task in a future network setting. Contracts may plausibly be negotiated by machine, be numerous, even per-flow, and be further complicated by the many dimensions of quality. When a monitor (together with institutional infrastructure) meets these criteria, we will label it with a subscript c, for contractible. The reader may recall that this is how we labeled the packets received monitor, PRc, which allows ISPs to form contracts with per-packet payments. Similarly, E2Ec and ROPc are contractible versions of the monitors we are now familiar with. At the end of the previous section, we argued for some desirable properties that we'd like our solution to have. Briefly, we would like to enforce optimal data paths with an equilibrium concept that doesn't rely on re-routing for punishment, is coalition proof, and doesn't punish innocent nodes when a coalition cheats. We will call such an equilibrium a fixed-route coalition-proof protect-theinnocent equilibrium. As the next claim shows, ROPc allows us to create a system of linear (price, quality) contracts under just such an equilibrium. Claim 3. 
With ROPc, for any feasible and consistent assignment of rest of path qualities to nodes, and any corresponding payment schedule that yields non-negative payoffs, these qualities can be maintained with bilateral contracts in a fixed-route coalition-proof protect-the-innocent equilibrium. Proof: Fix any data path consistent with the given rest of path qualities. Select some monetary punishment, P, large enough to prevent any cheating for time t0 (the discounted total payment from the source will work). Let each node on the path enter into a contract with its parent, which fixes an arbitrary payment schedule so long as the rest of path quality is as prescribed. When the parent node, which has ROPc, submits a proof that the rest of path quality is less than expected, the contract awards her an instantaneous transfer, P, from the downstream node. Such proofs can be submitted every t0 for the previous interval. Suppose now that a coalition, C, decides to cheat. The source measures a decrease in quality, and according to her contract, is awarded P from the first hop. This means that there is a net outflow of P from the ISPs as a whole. Suppose that node i is not in C. In order for the parent node to claim P from i, it must submit proof that the quality of the path starting at i is not as prescribed. This means that there is a cheater after i. Hence, i would also have detected a change in quality, so i can claim P from the next node on the path. Thus, innocent nodes are not punished. The sequence of payments must end by the destination, so the net outflow of P must come from the nodes in C. This establishes all necessary conditions of the equilibrium. ❑ Essentially, ROPc allows for an implementation of (price, quality) contracts. Building upon this result, we can construct competition games in which nodes offer various qualities to each other at specified prices, and can credibly commit to meet these performance targets, even allowing for coalitions and a desire to damage other ISPs. Example 1. Define a Stackelberg price-quality competition game as follows: Extend the partial order of nodes induced by the graph to any complete ordering, such that downstream nodes appear before their parents. In this order, each node selects a contract to offer to its parents, consisting of a rest of path quality, and a linear price. In the routing game, each node selects a next hop at every time, consistent with its advertised rest of path quality. The Stackelberg price-quality competition game can be implemented in our model with ROPc monitors, by using the strategy in the proof, above. It has the following useful property: Claim 4. The Stackelberg price-quality competition game yields optimal routes in SPE. The proof is given in the appendix. This property is favorable from an innovation perspective, since firms that invest in high quality will tend to fall on the optimal path, gaining positive payoff. In general, however, investments may be over or under rewarded. Extra conditions may be given under which innovation decisions approach perfect efficiency for large innovations. We omit the full analysis here. ❑ Example 2. Alternately, we can imagine that players report their private information to a central authority, which then assigns all contracts. For example, contracts could be computed to implement the cost-minimizing VCG mechanism proposed by Feigenbaum, et al. in [7]. With ROPc monitors, we can adapt this mechanism to maximize welfare. 
For a node, i, on the optimal path, L, the net payment must equal, essentially, its contribution to the welfare of S, D, and the other nodes. If L' is an optimal path in the graph with i removed, the profit flow to i is [u(qL) − Σ_{j ∈ L, j ≠ i} cj] − [u(qL') − Σ_{j ∈ L'} cj], where qL and qL' are the qualities of the two paths. Here, (price, quality) contracts ensure that nodes report their qualities honestly. The incentive structure of the VCG mechanism is what motivates nodes to report their costs accurately. A nice feature of this game is that individual innovation decisions are efficient, meaning that a node will invest in an innovation whenever the investment cost is less than the increased welfare of the optimal data path. Unfortunately, the source may end up paying more than the utility of the path. ❑ Notice that with just E2Ec, a weaker version of Claim 3 holds. Bilateral (price, quality) contracts can be maintained in an equilibrium that is fixed-route and coalition-proof, but not protect-the-innocent. This is done by writing contracts to punish everyone on the path when the end to end quality drops. If the path length is n, the first hop pays nP to the source, the second hop pays (n − 1) P to the first, and so forth. This ensures that every node is punished sufficiently to make cheating unprofitable. For the reasons we gave previously, we believe that this solution concept is less than ideal, since it allows for malicious nodes to deliberately trigger punishments for potential competitors. Up to this point, we have adopted fixed-route coalition-proof protect-the-innocent equilibrium as our desired solution concept, and shown that ROPc monitors are sufficient to create some competition games that are desirable in terms of service diversity and innovation. As the next claim will show, rest of path monitoring is also necessary to construct such games under our solution concept. Before we proceed, what does it mean for a game to be desirable from the perspective of service diversity and innovation? We will use a very weak assumption, essentially, that the game is not fully commoditized for any node. The claim will hold for this entire class of games. Definition: A competition game is nowhere-commoditized if for each node, i, not adjacent to D, there is some assignment of qualities and marginal costs to nodes, such that the optimal data path includes i, and i has a positive temptation to cheat. In the case of linear contracts, it is sufficient to require that IC < ∞, and that every node make positive profit under some assignment of qualities and marginal costs. Strictly speaking, ROPc monitors are not the only way to construct these desirable games. To prove the next claim, we must broaden our notion of rest of path monitoring to include the similar ROPc' monitor, which attests to the quality starting at its own node, through the end of the path. Compare the two monitors below: ROPc: gives a node proof that the path quality from the next node to the destination is not correct. ROPc': gives a node proof that the path quality from that node to the destination is correct. We present a simplified version of this claim, by including an assumption that only one node on the path can cheat at a time (though conspirators can still exchange side payments). We will discuss the full version after the proof. Claim 5. Assume a set of monitors, and a nowhere-commoditized bilateral contract competition game that always maintains the optimal quality in fixed-route coalition-proof protect-the-innocent equilibrium, with only one node allowed to cheat at a time.
Then for each node, i, not adjacent to D, either i has an ROPc monitor, or i's children each have an ROPc' monitor. Proof: First, because of the fixed-route assumption, punishments must be purely monetary. Next, when cheating occurs, if the payment does not go to the source or destination, it may go to another coalition member, rendering it ineffective. Thus, the source must accept some monetary compensation, net of its normal flow payment, when cheating occurs. Since the source only contracts with the first hop, it must accept this money from the first hop. The source's contract must therefore distinguish when the path quality is normal from when it is lowered by cheating. To do so, it can either accept proofs from the source, that the quality is lower than required, or it can accept proofs from the first hop, that the quality is correct. These nodes will not rationally offer the opposing type of proof. By definition, any monitor that gives the source proof that the path quality is wrong is an ROPc monitor. Any monitor that gives the first hop proof that the quality is correct is a ROPc' monitor. Thus, at least one of these monitors must exist. By the protect-the-innocent assumption, if cheating occurs, but the first hop is not a cheater, she must be able to claim the same size reward from the next ISP on the path, and thus "pass on" the punishment. The first hop's contract with the second must then distinguish when cheating occurs after the first hop. By argument similar to that for the source, either the first hop has a ROPc monitor, or the second has a ROPc' monitor. This argument can be iterated along the entire path to the penultimate node before D. Since the marginal costs and qualities can be arranged to make any path the optimal path, these statements must hold for all nodes and their children, which completes the proof. ❑ The two possibilities for monitor correspond to which node has the burden of proof. In one case, the prior node must prove the suboptimal quality to claim its reward. In the other, the subsequent node must prove that the quality was correct to avoid penalty. Because the two monitors are similar, it seems likely that they require comparable costs to implement. If submitting the proofs is costly, it seems natural that nodes would prefer to use the ROPc monitor, placing the burden of proof on the upstream node. Finally, we note that it is straightforward to derive the full version of the claim, which allows for multiple cheaters. The only complication is that cheaters can exchange side payments, which makes any money transfers between them redundant. Because of this, we have to further generalize our rest of path monitors, so they are less constrained in the case that there are cheaters on either side. 5.1 Implementing Monitors Claim 5 should not be interpreted as a statement that each node must compute the rest of path quality locally, without input from other nodes. Other monitors, besides ROPc and ROPc' can still be used, loosely speaking, as building blocks. For instance, network tomography is concerned with measuring properties of the network interior with tools located at the edge. Using such techniques, our source might learn both individual node qualities and the data path. This is represented by the following two monitors: SHOPci: (source-based hop quality) A monitor that gives the source proof of what the quality of node i is. 
SPATHc: (source-based path) A monitor that gives the source proof of what the data path is at any time, at least as far as it matches the equilibrium path. With these monitors, a punishment mechanism can be designed to fulfill the conditions of Claim 5. It involves the source sharing the proofs it generates with nodes further down the path, which use them to determine bilateral payments. Ultimately however, the proof of Claim 5 shows us that each node i's bilateral contracts require proof of the rest of path quality. This means that node i (or possibly its children) will have to combine the proofs that they receive to generate a proof of the rest of path quality. Thus, the combined process is itself a rest of path monitor. What we have done, all in all, is constructed a rest of path monitor using SPATHc and SHOPci as building blocks. Our new monitor includes both the component monitors and whatever distributed algorithmic mechanism exists to make sure nodes share their proofs correctly. This mechanism can potentially involve external institutions. For a concrete example, suppose that when node i suspects it is getting poor rest of path quality from its successor, it takes the downstream node to court. During the discovery process, the court subpoenas proofs of the path and of node qualities from the source (ultimately, there must be some threat to ensure the source complies). Finally, for the court to issue a judgment, one party or the other must compile a proof of what the rest of path quality was. Hence, the entire discovery process acts as a rest of path monitor, albeit a rather costly monitor in this case. Of course, mechanisms can be designed to combine these monitors at much lower cost. Typically, such mechanisms would call for automatic sharing of proofs, with court intervention only as a last resort. We defer these interesting mechanisms to future work. As an aside, intuition might dictate that SHOPci generates more information than ROPc; after all, inferring individual node qualities seems a much harder problem. Yet, without path information, SHOPci is not sufficient for our first-best innovation result. The proof of this demonstrates a useful technique: Claim 6. With monitors E2E, ROP, SHOPci and PRc, and a nowhere-commoditized bilateral contract competition game, the optimal quality cannot be maintained for all assignments of quality and marginal cost, in fixed-route coalition-proof protect-theinnocent equilibrium. Proof: Because nodes cannot verify the data path, they cannot form a proof of what the rest of path quality is. Hence, ROPc monitors do not exist, and therefore the requirements of Claim 5 cannot hold. ❑ 6. CONCLUSIONS AND FUTURE WORK It is our hope that this study will have a positive impact in at least three different ways. The first is practical: we believe our analysis has implications for the design of future monitoring protocols and for public policy. For protocol designers, we first provide fresh motivation to create monitoring systems. We have argued that the poor accountability of the Internet is a fundamental obstacle to alleviating the pathologies of commoditization and lack of innovation. Unless accountability improves, these pathologies are guaranteed to remain. Secondly, we suggest directions for future advances in monitoring. We have shown that adding verifiability to monitors allows for some improvements in the characteristics of competition. At the same time, this does not present a fully satisfying solution. 
This paper has suggested a novel standard for monitors to aspire to--one of supporting optimal routes in innovative competition games under fixed-route coalition-proof protect-the-innocent equilibrium. We have shown that under bilateral contracts, this specifically requires contractible rest of path monitors. This is not to say that other types of monitors are unimportant. We included an example in which individual hop quality monitors and a path monitor can also meet our standard for sustaining competition. However, in order for this to happen, a mechanism must be included to combine proofs from these monitors to form a proof of rest of path quality. In other words, the monitors must ultimately be combined to form contractible rest of path monitors. To support service differentiation and innovation, it may be easier to design rest of path monitors directly, thereby avoiding the task of designing mechanisms for combining component monitors. As far as policy implications, our analysis points to the need for legal institutions to enforce contracts based on quality. These institutions must be equipped to verify proofs of quality, and police illegal contracting behavior. As quality-based contracts become numerous and complicated, and possibly negotiated by machine, this may become a challenging task, and new standards and regulations may have to emerge in response. This remains an interesting and unexplored area for research. The second area we hope our study will benefit is that of clean-slate architectural design. Traditionally, clean-slate design tends to focus on creating effective and elegant networks for a static set of requirements. Thus, the approach is often one of engineering, which tends to neglect competitive effects. We agree with Ratnasamy, Shenker, and McCanne, that designing for evolution should be a top priority [11]. We have demonstrated that the network's monitoring ability is critical to supporting innovation, as are the institutions that support contracting. These elements should feature prominently in new designs. Our analysis specifically suggests that architectures based on bilateral contracts should include contractible rest of path monitoring. From a clean-slate perspective, these monitors can be transparently and fully integrated with the routing and contracting systems. Finally, the last contribution our study makes is methodological. We believe that the mathematical formalization we present is applicable to a variety of future research questions. While a significant literature addresses innovation in the presence of network effects, to the best of our knowledge, ours is the first model of innovation in a network industry that successfully incorporates the actual topological structure as input. This allows the discovery of new properties, such as the weakening of market forces with the number of ISPs on a data path that we observe with lowaccountability. Our method also stands in contrast to the typical approach of distributed algorithmic mechanism design. Because this field is based on a principle-agent framework, contracts are usually proposed by the source, who is allowed to make a take it or leave it offer to network nodes. Our technique allows contracts to emerge from a competitive framework, so the source is limited to selecting the most desirable contract. We believe this is a closer reflection of the industry. Based on the insights in this study, the possible directions for future research are numerous and exciting. 
To some degree, contracting based on quality opens a Pandora's Box of pressing questions: Do quality-based contracts stand counter to the principle of network neutrality? Should ISPs be allowed to offer a choice of contracts at different quality levels? What anti-competitive behaviors are enabled by quality-based contracts? Can a contracting system support optimal multicast trees? In this study, we have focused on bilateral contracts. This system has seemed natural, especially since it is the prevalent system on the current network. Perhaps its most important benefit is that each contract is local in nature, so both parties share a common, familiar legal jurisdiction. There is no need to worry about who will enforce a punishment against another ISP on the opposite side of the planet, nor is there a dispute over whose legal rules to apply in interpreting a contract. Although this benefit is compelling, it is worth considering other systems. The clearest alternative is to form a contract between the source and every node on the path. We may call these source contracts. Source contracting may present surprising advantages. For instance, since ISPs do not exchange money with each other, an ISP cannot save money by selecting a cheaper next hop. Additionally, if the source only has contracts with nodes on the intended path, other nodes won't even be willing to accept packets from this source since they won't receive compensation for carrying them. This combination seems to eliminate all temptation for a single cheater to cheat in route. Because of this and other encouraging features, we believe source contracts are a fertile topic for further study. Another important research task is to relax our assumption that quality can be measured fully and precisely. One possibility is to assume that monitoring is only probabilistic or suffers from noise. Even more relevant is the possibility that quality monitors are fundamentally incomplete. A quality monitor can never anticipate every dimension of quality that future applications will care about, nor can it anticipate a new and valuable protocol that an ISP introduces. We may define a monitor space as a subspace of the quality space that a monitor can measure, M ⊂ Q, and a corresponding monitoring function that simply projects the full range of qualities onto the monitor space, m: Q → M. Clearly, innovations that leave quality invariant under m are not easy to support--they are invisible to the monitoring system. In this environment, we expect that path monitoring becomes more important, since it is the only way to ensure data reaches certain innovator ISPs. Further research is needed to understand this process.
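A small sketch of this last point (Python, with invented names, treating Q as a few named dimensions): an innovation that improves only unmeasured dimensions leaves the projected quality unchanged, so no contract conditioned on monitor outputs can reward it.

```python
# Sketch: a monitor that measures only a subspace M of the quality space Q.
# An innovation that changes only unmeasured dimensions is invisible to it.

QUALITY_DIMENSIONS = ("latency", "loss", "jitter")   # stand-in for Q
MONITOR_SPACE = ("latency", "loss")                  # stand-in for M, a subspace of Q

def m(quality: dict) -> dict:
    """The monitoring function m: Q -> M, a projection onto the measured dimensions."""
    return {dim: quality[dim] for dim in MONITOR_SPACE}

before = {"latency": 30.0, "loss": 0.01, "jitter": 5.0}
after  = {"latency": 30.0, "loss": 0.01, "jitter": 1.0}   # an innovation reducing jitter

# The innovation leaves quality invariant under m, so no contract conditioned
# on monitor outputs can distinguish the two and reward the investment.
print(m(before) == m(after))   # True
```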
Network Monitors and Contracting Systems: Competition and Innovation ABSTRACT Today's Internet industry suffers from several well-known pathologies, but none is as destructive in the long term as its resistance to evolution. Rather than introducing new services, ISPs are presently moving towards greater commoditization. It is apparent that the network's primitive system of contracts does not align incentives properly. In this study, we identify the network's lack of accountability as a fundamental obstacle to correcting this problem: Employing an economic model, we argue that optimal routes and innovation are impossible unless new monitoring capability is introduced and incorporated with the contracting system. Furthermore, we derive the minimum requirements a monitoring system must meet to support first-best routing and innovation characteristics. Our work does not constitute a new protocol; rather, we provide practical and specific guidance for the design of monitoring systems, as well as a theoretical framework to explore the factors that influence innovation. 1. INTRODUCTION Many studies before us have noted the Internet's resistance to new services and evolution. In recent decades, numerous ideas have been developed in universities, implemented in code, and even written into the routers and end systems of the network, only to languish as network operators fail to turn them on on a large scale. The list includes Multicast, IPv6, IntServ, and DiffServ. Lacking the incentives just to activate services, there seems to be little hope of ISPs devoting adequate resources to developing new ideas. In the long term, this pathology stands out as a critical obstacle to the network's continued success (Ratnasamy, Shenker, and McCanne provide extensive discussion in [11]). On a smaller time scale, ISPs shun new services in favor of cost cutting measures. Thus, the network has characteristics of a commodity market. Although in theory, ISPs have a plethora of routing policies at their disposal, the prevailing strategy is to route in the cheapest way possible [2]. On one hand, this leads directly to suboptimal routing. More importantly, commoditization in the short term is surely related to the lack of innovation in the long term. When the routing decisions of others ignore quality characteristics, ISPs are motivated only to lower costs. There is simply no reward for introducing new services or investing in quality improvements. In response to these pathologies and others, researchers have put forth various proposals for improving the situation. These can be divided according to three high-level strategies: The first attempts to improve the status quo by empowering end-users. Clark, et al., suggest that giving end-users control over routing would lead to greater service diversity, recognizing that some payment mechanism must also be provided [5]. Ratnasamy, Shenker, and McCanne postulate a link between network evolution and user-directed routing [11]. They propose a system of Anycast to give end-users the ability to tunnel their packets to an ISP that introduces a desirable protocol. The extra traffic to the ISP, the authors suggest, will motivate the initial investment. The second strategy suggests a revision of the contracting system. This is exemplified by MacKie-Mason and Varian, who propose a "smart market" to control access to network resources [10]. Prices are set to the market-clearing level based on bids that users associate to their traffic. 
In another direction, Afergan and Wroclawski suggest that prices should be explicitly encoded in the routing protocols [2]. They argue that such a move would improve stability and align incentives. The third high-level strategy calls for greater network accountability. In this vein, Argyraki, et al., propose a system of packet obituaries to provide feedback as to which ISPs drop packets [3]. They argue that such feedback would help reveal which ISPs were adequately meeting their contractual obligations. Unlike the first two strategies, we are not aware of any previous studies that have connected accountability with the pathologies of commoditization or lack of innovation. It is clear that these three strategies are closely linked to each other (for example, [2], [5], and [9] each argue that giving end-users routing control within the current contracting system is problematic). Until today, however, the relationship between them has been poorly understood. There is currently little theoretical foundation to compare the relative merits of each proposal, and a particular lack of evidence linking accountability with innovation and service differentiation. This paper will address both issues. We will begin by introducing an economic network model that relates accountability, contracts, competition, and innovation. Our model is highly stylized and may be considered preliminary: it is based on a single source sending data to a single destination. Nevertheless, the structure is rich enough to expose previously unseen features of network behavior. We will use our model for two main purposes: First, we will use our model to argue that the lack of accountability in today's network is a fundamental obstacle to overcoming the pathologies of commoditization and lack of innovation. In other words, unless new monitoring capabilities are introduced, and integrated with the system of contracts, the network cannot achieve optimal routing and innovation characteristics. This result provides motivation for the remainder of the paper, in which we explore how accountability can be leveraged to overcome these pathologies and create a sustainable industry. We will approach this problem from a clean-slate perspective, deriving the level of accountability needed to sustain an ideal competitive structure. When we say that today's Internet has poor accountability, we mean that it reveals little information about the behavior--or misbehavior--of ISPs. This well-known trait is largely rooted in the network's history. In describing the design philosophy behind the Internet protocols, Clark lists accountability as the least important among seven "second level goals." [4] Accordingly, accountability received little attention during the network's formative years. Clark relates this to the network's military context, and finds that had the network been designed for commercial development, accountability would have been a top priority. Argyraki, et al., conjecture that applying the principles of layering and transparency may have led to the network's lack of accountability [3]. According to these principles, end hosts should be informed of network problems only to the extent that they are required to adapt. They notice when packet drops occur so that they can perform congestion control and retransmit packets. Details of where and why drops occur are deliberately concealed. The network's lack of accountability is highly relevant to a discussion of innovation because it constrains the system of contracts. 
This is because contracts depend upon external institutions to function--the "judge" in the language of incomplete contract theory, or simply the legal system. Ultimately, if a judge cannot verify that some condition holds, she cannot enforce a contract based on that condition. Of course, the vast majority of contracts never end up in court. Especially when a judge's ruling is easily predicted, the parties will typically comply with the contract terms on their own volition. This would not be possible, however, without the judge acting as a last resort. An institution to support contracts is typically complex, but we abstract it as follows: We imagine that a contract is an algorithm that outputs a payment transfer among a set of ISPs (the parties) at every time. This payment is a function of the past and present behaviors of the participants, but only those that are verifiable. Hence, we imagine that a contract only accepts "proofs" as inputs. We will call any process that generates these proofs a contractible monitor. Such a monitor includes metering or sensing devices on the physical network, but it is a more general concept. Constructing a proof of a particular behavior may require readings from various devices distributed among many ISPs. The contractible monitor includes whatever distributed algorithmic mechanism is used to motivate ISPs to share this private information. Figure 1 demonstrates how our model of contracts fits together. We make the assumption that all payments are mediated by contracts. This means that without contractible monitors that attest to, say, latency, payments cannot be conditioned on latency. Network Monitor Behavior Figure 1: Relationship between monitors and contracts With this model, we may conclude that the level of accountability in today's Internet only permits best effort contracts. Nodes cannot condition payments on either quality or path characteristics. Is there anything wrong with best-effort contracts? The reader might wonder why the Internet needs contracts at all. After all, in non-network industries, traditional firms invest in research and differentiate their products, all in the hopes of keeping their customers and securing new ones. One might believe that such market forces apply to ISPs as well. We may adopt this as our null hypothesis: Null hypothesis: Market forces are sufficient to maintain service diversity and innovation on a network, at least to the same extent as they do in traditional markets. There is a popular intuitive argument that supports this hypothesis, and it may be summarized as follows: Intuitive argument supporting null hypothesis: 1. Access providers try to increase their quality to get more consumers. 2. Access providers are themselves customers for second hop ISPs, and the second hops will therefore try to provide highquality service in order to secure traffic from access providers. Access providers try to select high quality transit because that increases their quality. 3. The process continues through the network, giving every ISP a competitive reason to increase quality. We are careful to model our network in continuous time, in order to capture the essence of this argument. We can, for example, specify equilibria in which nodes switch to a new next hop in the event of a quality drop. Moreover, our model allows us to explore any theoretically possible punishments against cheaters, including those that are costly for end-users to administer. 
By contrast, customers in the real world rarely respond collectively, and often simply seek the best deal currently offered. These constraints limit their ability to punish cheaters. Even with these liberal assumptions, however, we find that we must reject our null hypothesis. Our model will demonstrate that identifying a cheating ISP is difficult under low accountability, limiting the threat of market driven punishment. We will define an index of commoditization and show that it increases without bound as data paths grow long. Furthermore, we will demonstrate a framework in which an ISP's maximum research investment decreases hyperbolically with its distance from the end-user. To summarize, we argue that the Internet's lack of accountability must be addressed before the pathologies of commoditization and lack of innovation can be resolved. This leads us to our next topic: How can we leverage accountability to overcome these pathologies? We approach this question from a clean-slate perspective. Instead of focusing on incremental improvements, we try to imagine how an ideal industry would behave, then derive the level of accountability needed to meet that objective. According to this approach, we first craft a new equilibrium concept appropriate for network competition. Our concept includes the following requirements: First, we require that punishing ISPs that cheat is done without rerouting the path. Rerouting is likely to prompt end-users to switch providers, punishing access providers who administer punishments correctly. Next, we require that the equilibrium cannot be threatened by a coalition of ISPs that exchanges illicit side payments. Finally, we require that the punishment mechanism that enforces contracts does not punish innocent nodes that are not in the coalition. The last requirement is somewhat unconventional from an economic perspective, but we maintain that it is crucial for any reasonable solution. Although ISPs provide complementary services when they form a data path together, they are likely to be horizontal competitors as well. If innocent nodes may be punished, an ISP may decide to deliberately cheat and draw punishment onto itself and its neighbors. By cheating, the ISP may save resources, thereby ensuring that the punishment is more damaging to the other ISPs, which probably compete with the cheater directly for some customers. In the extreme case, the cheater may force the other ISPs out of business, thereby gaining a monopoly on some routes. Applying this equilibrium concept, we derive the monitors needed to maintain innovation and optimize routes. The solution is surprisingly simple: contractible monitors must report the quality of the rest of the path, from each ISP to the destination. It turns out that this is the correct minimum accountability requirement, as opposed to either end-to-end monitors or hop-by-hop monitors, as one might initially suspect. Rest of path monitors can be implemented in various ways. They may be purely local algorithms that listen for packet echoes. Alternately, they can be distributed in nature. We describe a way to construct a rest of path monitor out of monitors for individual ISP quality and for the data path. This requires a mechanism to motivate ISPs to share their monitor outputs with each other. The rest of path monitor then includes the component monitors and the distributed algorithmic mechanism that ensures that information is shared as required. 
This example shows that other types of monitors may be useful as building blocks, but must be combined to form rest of path monitors in order to achieve ideal innovation characteristics. Our study has several practical implications for future protocol design. We show that new monitors must be implemented and integrated with the contracting system before the pathologies of commoditization and lack of innovation can be overcome. Moreover, we derive exactly what monitors are needed to optimize routes and support innovation. In addition, our results provide useful input for clean-slate architectural design, and we use several novel techniques that we expect will be applicable to a variety of future research. The rest of this paper is organized as follows: In section 2, we lay out our basic network model. In section 3, we present a lowaccountability network, modeled after today's Internet. We demonstrate how poor monitoring causes commoditization and a lack of innovation. In section 4, we present verifiable monitors, and show that proofs, even without contracts, can improve the status quo. In section 5, we turn our attention to contractible monitors. We show that rest of path monitors can support competition games with optimal routing and innovation. We further show that rest of path monitors are required to support such competition games. We continue by discussing how such monitors may be constructed using other monitors as building blocks. In section 6, we conclude and present several directions for future research. 2. BASIC NETWORK MODEL 2.1 Game Dynamics 2.2 The Routing Game 3. PATHOLOGIES OF A LOWACCOUNTABILITY NETWORK 3.1 Accountability in the Current Internet 3.2 Modeling Low-Accountability 4. VERIFIABLE MONITORS 5. CONTRACTIBLE MONITORS 5.1 Implementing Monitors 6. CONCLUSIONS AND FUTURE WORK It is our hope that this study will have a positive impact in at least three different ways. The first is practical: we believe our analysis has implications for the design of future monitoring protocols and for public policy. For protocol designers, we first provide fresh motivation to create monitoring systems. We have argued that the poor accountability of the Internet is a fundamental obstacle to alleviating the pathologies of commoditization and lack of innovation. Unless accountability improves, these pathologies are guaranteed to remain. Secondly, we suggest directions for future advances in monitoring. We have shown that adding verifiability to monitors allows for some improvements in the characteristics of competition. At the same time, this does not present a fully satisfying solution. This paper has suggested a novel standard for monitors to aspire to--one of supporting optimal routes in innovative competition games under fixed-route coalition-proof protect-the-innocent equilibrium. We have shown that under bilateral contracts, this specifically requires contractible rest of path monitors. This is not to say that other types of monitors are unimportant. We included an example in which individual hop quality monitors and a path monitor can also meet our standard for sustaining competition. However, in order for this to happen, a mechanism must be included to combine proofs from these monitors to form a proof of rest of path quality. In other words, the monitors must ultimately be combined to form contractible rest of path monitors. 
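To make the composition requirement concrete, the following short Python sketch (our own illustration, not a construction from this paper; the proof record formats, the additive latency metric, and all names such as HopQualityProof and rest_of_path_quality are assumptions) shows how per-ISP quality proofs, once shared, could be folded into a rest-of-path quality figure for every ISP on a path:

    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass(frozen=True)
    class HopQualityProof:
        """A verifiable claim about a single ISP's own hop (assumed format)."""
        isp: str
        latency_ms: float

    @dataclass(frozen=True)
    class PathProof:
        """A verifiable claim about the actual data path, source side first (assumed format)."""
        path: List[str]

    def rest_of_path_quality(path_proof: PathProof,
                             hop_proofs: Dict[str, HopQualityProof]) -> Dict[str, float]:
        # Accumulate hop latencies from the destination backwards, so each ISP is
        # associated with the quality of the rest of the path, from itself onward.
        totals, running = {}, 0.0
        for isp in reversed(path_proof.path):
            running += hop_proofs[isp].latency_ms
            totals[isp] = running
        return totals

    path = PathProof(path=["A", "B", "C"])
    hops = {name: HopQualityProof(name, q) for name, q in [("A", 10.0), ("B", 25.0), ("C", 5.0)]}
    print(rest_of_path_quality(path, hops))   # {'C': 5.0, 'B': 30.0, 'A': 40.0}

The code only illustrates the aggregation step; the distributed mechanism that motivates each ISP to release its component proof, which the text identifies as the harder part, is not modeled here.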
To support service differentiation and innovation, it may be easier to design rest of path monitors directly, thereby avoiding the task of designing mechanisms for combining component monitors. As far as policy implications, our analysis points to the need for legal institutions to enforce contracts based on quality. These institutions must be equipped to verify proofs of quality, and police illegal contracting behavior. As quality-based contracts become numerous and complicated, and possibly negotiated by machine, this may become a challenging task, and new standards and regulations may have to emerge in response. This remains an interesting and unexplored area for research. The second area we hope our study will benefit is that of clean-slate architectural design. Traditionally, clean-slate design tends to focus on creating effective and elegant networks for a static set of requirements. Thus, the approach is often one of engineering, which tends to neglect competitive effects. We agree with Ratnasamy, Shenker, and McCanne, that designing for evolution should be a top priority [11]. We have demonstrated that the network's monitoring ability is critical to supporting innovation, as are the institutions that support contracting. These elements should feature prominently in new designs. Our analysis specifically suggests that architectures based on bilateral contracts should include contractible rest of path monitoring. From a clean-slate perspective, these monitors can be transparently and fully integrated with the routing and contracting systems. Finally, the last contribution our study makes is methodological. We believe that the mathematical formalization we present is applicable to a variety of future research questions. While a significant literature addresses innovation in the presence of network effects, to the best of our knowledge, ours is the first model of innovation in a network industry that successfully incorporates the actual topological structure as input. This allows the discovery of new properties, such as the weakening of market forces with the number of ISPs on a data path that we observe with lowaccountability. Our method also stands in contrast to the typical approach of distributed algorithmic mechanism design. Because this field is based on a principle-agent framework, contracts are usually proposed by the source, who is allowed to make a take it or leave it offer to network nodes. Our technique allows contracts to emerge from a competitive framework, so the source is limited to selecting the most desirable contract. We believe this is a closer reflection of the industry. Based on the insights in this study, the possible directions for future research are numerous and exciting. To some degree, contracting based on quality opens a Pandora's Box of pressing questions: Do quality-based contracts stand counter to the principle of network neutrality? Should ISPs be allowed to offer a choice of contracts at different quality levels? What anti-competitive behaviors are enabled by quality-based contracts? Can a contracting system support optimal multicast trees? In this study, we have focused on bilateral contracts. This system has seemed natural, especially since it is the prevalent system on the current network. Perhaps its most important benefit is that each contract is local in nature, so both parties share a common, familiar legal jurisdiction. 
There is no need to worry about who will enforce a punishment against another ISP on the opposite side of the planet, nor is there a dispute over whose legal rules to apply in interpreting a contract. Although this benefit is compelling, it is worth considering other systems. The clearest alternative is to form a contract between the source and every node on the path. We may call these source contracts. Source contracting may present surprising advantages. For instance, since ISPs do not exchange money with each other, an ISP cannot save money by selecting a cheaper next hop. Additionally, if the source only has contracts with nodes on the intended path, other nodes won't even be willing to accept packets from this source since they won't receive compensation for carrying them. This combination seems to eliminate all temptation for a single cheater to cheat in route. Because of this and other encouraging features, we believe source contracts are a fertile topic for further study. Another important research task is to relax our assumption that quality can be measured fully and precisely. One possibility is to assume that monitoring is only probabilistic or suffers from noise. Even more relevant is the possibility that quality monitors are fundamentally incomplete. A quality monitor can never anticipate every dimension of quality that future applications will care about, nor can it anticipate a new and valuable protocol that an ISP introduces. We may define a monitor space as a subspace of the quality space that a monitor can measure, M ⊂ Q, and a corresponding monitoring function that simply projects the full range of qualities onto the monitor space, m: Q → M. Clearly, innovations that leave quality invariant under m are not easy to support--they are invisible to the monitoring system. In this environment, we expect that path monitoring becomes more important, since it is the only way to ensure data reaches certain innovator ISPs. Further research is needed to understand this process.
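As a small illustration of the last point, here is a toy Python sketch (ours, with invented quality dimensions) of a monitor space M obtained by projecting a two-dimensional quality space Q onto its measurable component; an innovation that changes only the unmeasured dimension is invariant under m and therefore invisible to the monitoring system:

    from typing import NamedTuple

    class Quality(NamedTuple):
        latency_ms: float      # the dimension the deployed monitor can measure
        new_feature: float     # a hypothetical dimension introduced by an innovator ISP

    def m(q: Quality) -> float:
        """Monitoring function m: Q -> M, a projection onto the measurable subspace."""
        return q.latency_ms

    before = Quality(latency_ms=20.0, new_feature=0.0)
    after = Quality(latency_ms=20.0, new_feature=1.0)   # innovation leaves latency unchanged
    print(m(before) == m(after))   # True: the innovation is invisible to this monitor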
Network Monitors and Contracting Systems: Competition and Innovation ABSTRACT Today's Internet industry suffers from several well-known pathologies, but none is as destructive in the long term as its resistance to evolution. Rather than introducing new services, ISPs are presently moving towards greater commoditization. It is apparent that the network's primitive system of contracts does not align incentives properly. In this study, we identify the network's lack of accountability as a fundamental obstacle to correcting this problem: Employing an economic model, we argue that optimal routes and innovation are impossible unless new monitoring capability is introduced and incorporated with the contracting system. Furthermore, we derive the minimum requirements a monitoring system must meet to support first-best routing and innovation characteristics. Our work does not constitute a new protocol; rather, we provide practical and specific guidance for the design of monitoring systems, as well as a theoretical framework to explore the factors that influence innovation. 1. INTRODUCTION Many studies before us have noted the Internet's resistance to new services and evolution. The list includes Multicast, IPv6, IntServ, and DiffServ. Lacking the incentives just to activate services, there seems to be little hope of ISPs devoting adequate resources to developing new ideas. In the long term, this pathology stands out as a critical obstacle to the network's continued success (Ratnasamy, Shenker, and McCanne provide extensive discussion in [11]). On a smaller time scale, ISPs shun new services in favor of cost cutting measures. Thus, the network has characteristics of a commodity market. Although in theory, ISPs have a plethora of routing policies at their disposal, the prevailing strategy is to route in the cheapest way possible [2]. On one hand, this leads directly to suboptimal routing. More importantly, commoditization in the short term is surely related to the lack of innovation in the long term. When the routing decisions of others ignore quality characteristics, ISPs are motivated only to lower costs. There is simply no reward for introducing new services or investing in quality improvements. These can be divided according to three high-level strategies: The first attempts to improve the status quo by empowering end-users. Clark, et al., suggest that giving end-users control over routing would lead to greater service diversity, recognizing that some payment mechanism must also be provided [5]. Ratnasamy, Shenker, and McCanne postulate a link between network evolution and user-directed routing [11]. They propose a system of Anycast to give end-users the ability to tunnel their packets to an ISP that introduces a desirable protocol. The second strategy suggests a revision of the contracting system. This is exemplified by MacKie-Mason and Varian, who propose a "smart market" to control access to network resources [10]. In another direction, Afergan and Wroclawski suggest that prices should be explicitly encoded in the routing protocols [2]. They argue that such a move would improve stability and align incentives. The third high-level strategy calls for greater network accountability. In this vein, Argyraki, et al., propose a system of packet obituaries to provide feedback as to which ISPs drop packets [3]. They argue that such feedback would help reveal which ISPs were adequately meeting their contractual obligations. 
Unlike the first two strategies, we are not aware of any previous studies that have connected accountability with the pathologies of commoditization or lack of innovation. Until today, however, the relationship between them has been poorly understood. There is currently little theoretical foundation to compare the relative merits of each proposal, and a particular lack of evidence linking accountability with innovation and service differentiation. This paper will address both issues. We will begin by introducing an economic network model that relates accountability, contracts, competition, and innovation. Nevertheless, the structure is rich enough to expose previously unseen features of network behavior. We will use our model for two main purposes: First, we will use our model to argue that the lack of accountability in today's network is a fundamental obstacle to overcoming the pathologies of commoditization and lack of innovation. In other words, unless new monitoring capabilities are introduced, and integrated with the system of contracts, the network cannot achieve optimal routing and innovation characteristics. This result provides motivation for the remainder of the paper, in which we explore how accountability can be leveraged to overcome these pathologies and create a sustainable industry. We will approach this problem from a clean-slate perspective, deriving the level of accountability needed to sustain an ideal competitive structure. When we say that today's Internet has poor accountability, we mean that it reveals little information about the behavior--or misbehavior--of ISPs. This well-known trait is largely rooted in the network's history. In describing the design philosophy behind the Internet protocols, Clark lists accountability as the least important among seven "second level goals." [4] Accordingly, accountability received little attention during the network's formative years. Clark relates this to the network's military context, and finds that had the network been designed for commercial development, accountability would have been a top priority. Argyraki, et al., conjecture that applying the principles of layering and transparency may have led to the network's lack of accountability [3]. According to these principles, end hosts should be informed of network problems only to the extent that they are required to adapt. Details of where and why drops occur are deliberately concealed. The network's lack of accountability is highly relevant to a discussion of innovation because it constrains the system of contracts. This is because contracts depend upon external institutions to function--the "judge" in the language of incomplete contract theory, or simply the legal system. Ultimately, if a judge cannot verify that some condition holds, she cannot enforce a contract based on that condition. Of course, the vast majority of contracts never end up in court. Especially when a judge's ruling is easily predicted, the parties will typically comply with the contract terms on their own volition. This would not be possible, however, without the judge acting as a last resort. An institution to support contracts is typically complex, but we abstract it as follows: We imagine that a contract is an algorithm that outputs a payment transfer among a set of ISPs (the parties) at every time. Hence, we imagine that a contract only accepts "proofs" as inputs. We will call any process that generates these proofs a contractible monitor. 
Such a monitor includes metering or sensing devices on the physical network, but it is a more general concept. Constructing a proof of a particular behavior may require readings from various devices distributed among many ISPs. The contractible monitor includes whatever distributed algorithmic mechanism is used to motivate ISPs to share this private information. Figure 1 demonstrates how our model of contracts fits together. We make the assumption that all payments are mediated by contracts. This means that without contractible monitors that attest to, say, latency, payments cannot be conditioned on latency. Network Monitor Behavior Figure 1: Relationship between monitors and contracts With this model, we may conclude that the level of accountability in today's Internet only permits best effort contracts. Nodes cannot condition payments on either quality or path characteristics. Is there anything wrong with best-effort contracts? The reader might wonder why the Internet needs contracts at all. One might believe that such market forces apply to ISPs as well. We may adopt this as our null hypothesis: Null hypothesis: Market forces are sufficient to maintain service diversity and innovation on a network, at least to the same extent as they do in traditional markets. 1. Access providers try to increase their quality to get more consumers. 2. Access providers are themselves customers for second hop ISPs, and the second hops will therefore try to provide highquality service in order to secure traffic from access providers. Access providers try to select high quality transit because that increases their quality. 3. The process continues through the network, giving every ISP a competitive reason to increase quality. We are careful to model our network in continuous time, in order to capture the essence of this argument. We can, for example, specify equilibria in which nodes switch to a new next hop in the event of a quality drop. Moreover, our model allows us to explore any theoretically possible punishments against cheaters, including those that are costly for end-users to administer. These constraints limit their ability to punish cheaters. Even with these liberal assumptions, however, we find that we must reject our null hypothesis. Our model will demonstrate that identifying a cheating ISP is difficult under low accountability, limiting the threat of market driven punishment. We will define an index of commoditization and show that it increases without bound as data paths grow long. To summarize, we argue that the Internet's lack of accountability must be addressed before the pathologies of commoditization and lack of innovation can be resolved. This leads us to our next topic: How can we leverage accountability to overcome these pathologies? We approach this question from a clean-slate perspective. Instead of focusing on incremental improvements, we try to imagine how an ideal industry would behave, then derive the level of accountability needed to meet that objective. According to this approach, we first craft a new equilibrium concept appropriate for network competition. Our concept includes the following requirements: First, we require that punishing ISPs that cheat is done without rerouting the path. Rerouting is likely to prompt end-users to switch providers, punishing access providers who administer punishments correctly. Next, we require that the equilibrium cannot be threatened by a coalition of ISPs that exchanges illicit side payments. 
Finally, we require that the punishment mechanism that enforces contracts does not punish innocent nodes that are not in the coalition. Although ISPs provide complementary services when they form a data path together, they are likely to be horizontal competitors as well. In the extreme case, the cheater may force the other ISPs out of business, thereby gaining a monopoly on some routes. Applying this equilibrium concept, we derive the monitors needed to maintain innovation and optimize routes. The solution is surprisingly simple: contractible monitors must report the quality of the rest of the path, from each ISP to the destination. It turns out that this is the correct minimum accountability requirement, as opposed to either end-to-end monitors or hop-by-hop monitors, as one might initially suspect. Rest of path monitors can be implemented in various ways. We describe a way to construct a rest of path monitor out of monitors for individual ISP quality and for the data path. This requires a mechanism to motivate ISPs to share their monitor outputs with each other. The rest of path monitor then includes the component monitors and the distributed algorithmic mechanism that ensures that information is shared as required. This example shows that other types of monitors may be useful as building blocks, but must be combined to form rest of path monitors in order to achieve ideal innovation characteristics. Our study has several practical implications for future protocol design. We show that new monitors must be implemented and integrated with the contracting system before the pathologies of commoditization and lack of innovation can be overcome. Moreover, we derive exactly what monitors are needed to optimize routes and support innovation. The rest of this paper is organized as follows: In section 2, we lay out our basic network model. In section 3, we present a lowaccountability network, modeled after today's Internet. We demonstrate how poor monitoring causes commoditization and a lack of innovation. In section 4, we present verifiable monitors, and show that proofs, even without contracts, can improve the status quo. In section 5, we turn our attention to contractible monitors. We show that rest of path monitors can support competition games with optimal routing and innovation. We further show that rest of path monitors are required to support such competition games. We continue by discussing how such monitors may be constructed using other monitors as building blocks. In section 6, we conclude and present several directions for future research. 6. CONCLUSIONS AND FUTURE WORK The first is practical: we believe our analysis has implications for the design of future monitoring protocols and for public policy. For protocol designers, we first provide fresh motivation to create monitoring systems. We have argued that the poor accountability of the Internet is a fundamental obstacle to alleviating the pathologies of commoditization and lack of innovation. Unless accountability improves, these pathologies are guaranteed to remain. Secondly, we suggest directions for future advances in monitoring. We have shown that adding verifiability to monitors allows for some improvements in the characteristics of competition. At the same time, this does not present a fully satisfying solution. This paper has suggested a novel standard for monitors to aspire to--one of supporting optimal routes in innovative competition games under fixed-route coalition-proof protect-the-innocent equilibrium. 
We have shown that under bilateral contracts, this specifically requires contractible rest of path monitors. This is not to say that other types of monitors are unimportant. We included an example in which individual hop quality monitors and a path monitor can also meet our standard for sustaining competition. However, in order for this to happen, a mechanism must be included to combine proofs from these monitors to form a proof of rest of path quality. In other words, the monitors must ultimately be combined to form contractible rest of path monitors. To support service differentiation and innovation, it may be easier to design rest of path monitors directly, thereby avoiding the task of designing mechanisms for combining component monitors. As far as policy implications, our analysis points to the need for legal institutions to enforce contracts based on quality. These institutions must be equipped to verify proofs of quality, and police illegal contracting behavior. This remains an interesting and unexplored area for research. The second area we hope our study will benefit is that of clean-slate architectural design. Traditionally, clean-slate design tends to focus on creating effective and elegant networks for a static set of requirements. We have demonstrated that the network's monitoring ability is critical to supporting innovation, as are the institutions that support contracting. These elements should feature prominently in new designs. Our analysis specifically suggests that architectures based on bilateral contracts should include contractible rest of path monitoring. From a clean-slate perspective, these monitors can be transparently and fully integrated with the routing and contracting systems. Finally, the last contribution our study makes is methodological. We believe that the mathematical formalization we present is applicable to a variety of future research questions. This allows the discovery of new properties, such as the weakening of market forces with the number of ISPs on a data path that we observe with lowaccountability. Our method also stands in contrast to the typical approach of distributed algorithmic mechanism design. Because this field is based on a principle-agent framework, contracts are usually proposed by the source, who is allowed to make a take it or leave it offer to network nodes. Our technique allows contracts to emerge from a competitive framework, so the source is limited to selecting the most desirable contract. We believe this is a closer reflection of the industry. Based on the insights in this study, the possible directions for future research are numerous and exciting. To some degree, contracting based on quality opens a Pandora's Box of pressing questions: Do quality-based contracts stand counter to the principle of network neutrality? Should ISPs be allowed to offer a choice of contracts at different quality levels? What anti-competitive behaviors are enabled by quality-based contracts? Can a contracting system support optimal multicast trees? In this study, we have focused on bilateral contracts. This system has seemed natural, especially since it is the prevalent system on the current network. Perhaps its most important benefit is that each contract is local in nature, so both parties share a common, familiar legal jurisdiction. Although this benefit is compelling, it is worth considering other systems. The clearest alternative is to form a contract between the source and every node on the path. We may call these source contracts. 
Source contracting may present surprising advantages. For instance, since ISPs do not exchange money with each other, an ISP cannot save money by selecting a cheaper next hop. Additionally, if the source only has contracts with nodes on the intended path, other nodes won't even be willing to accept packets from this source since they won't receive compensation for carrying them. This combination seems to eliminate all temptation for a single cheater to cheat in route. Because of this and other encouraging features, we believe source contracts are a fertile topic for further study. Another important research task is to relax our assumption that quality can be measured fully and precisely. Even more relevant is the possibility that quality monitors are fundamentally incomplete. A quality monitor can never anticipate every dimension of quality that future applications will care about, nor can it anticipate a new and valuable protocol that an ISP introduces. Clearly, innovations that leave quality invariant under m are not easy to support--they are invisible to the monitoring system. In this environment, we expect that path monitoring becomes more important, since it is the only way to ensure data reaches certain innovator ISPs. Further research is needed to understand this process.
J-52
Hidden-Action in Multi-Hop Routing
In multi-hop networks, the actions taken by individual intermediate nodes are typically hidden from the communicating endpoints; all the endpoints can observe is whether or not the end-to-end transmission was successful. Therefore, in the absence of incentives to the contrary, rational (i.e., selfish) intermediate nodes may choose to forward packets at a low priority or simply not forward packets at all. Using a principal-agent model, we show how the hidden-action problem can be overcome through appropriate design of contracts, in both the direct (the endpoints contract with each individual router) and recursive (each router contracts with the next downstream router) cases. We further demonstrate that per-hop monitoring does not necessarily improve the utility of the principal or the social welfare in the system. In addition, we generalize existing mechanisms that deal with hidden-information to handle scenarios involving both hidden-information and hidden-action.
[ "multi-hop", "rout", "multi-hop network", "intermedi node", "endpoint", "incent", "prioriti", "contract", "mechan", "cost", "failur caus", "hidden action", "moral hazard", "princip-agent model", "hidden-action", "mechan design", "moralhazard" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "U", "U", "R", "U", "M", "U", "R", "U" ]
Hidden-Action in Multi-Hop Routing Michal Feldman1 mfeldman@sims.berkeley.edu John Chuang1 chuang@sims.berkeley.edu Ion Stoica2 istoica@cs.berkeley.edu Scott Shenker2 shenker@icir.org 1 School of Information Management and Systems U.C. Berkeley 2 Computer Science Division U.C. Berkeley ABSTRACT In multi-hop networks, the actions taken by individual intermediate nodes are typically hidden from the communicating endpoints; all the endpoints can observe is whether or not the end-to-end transmission was successful. Therefore, in the absence of incentives to the contrary, rational (i.e., selfish) intermediate nodes may choose to forward packets at a low priority or simply not forward packets at all. Using a principal-agent model, we show how the hidden-action problem can be overcome through appropriate design of contracts, in both the direct (the endpoints contract with each individual router) and recursive (each router contracts with the next downstream router) cases. We further demonstrate that per-hop monitoring does not necessarily improve the utility of the principal or the social welfare in the system. In addition, we generalize existing mechanisms that deal with hidden-information to handle scenarios involving both hidden-information and hidden-action. Categories and Subject Descriptors C.2.4 [Computer-Communication Networks]: Distributed Systems; J.4 [Social And Behavioral Sciences]: Economics General Terms Design, Economics 1. INTRODUCTION Endpoints wishing to communicate over a multi-hop network rely on intermediate nodes to forward packets from the sender to the receiver. In settings where the intermediate nodes are independent agents (such as individual nodes in ad-hoc and peer-topeer networks or autonomous systems on the Internet), this poses an incentive problem; the intermediate nodes may incur significant communication and computation costs in the forwarding of packets without deriving any direct benefit from doing so. Consequently, a rational (i.e., utility maximizing) intermediate node may choose to forward packets at a low priority or not forward the packets at all. This rational behavior may lead to suboptimal system performance. The endpoints can provide incentives, e.g., in the form of payments, to encourage the intermediate nodes to forward their packets. However, the actions of the intermediate nodes are often hidden from the endpoints. In many cases, the endpoints can only observe whether or not the packet has reached the destination, and cannot attribute failure to a specific node on the path. Even if some form of monitoring mechanism allows them to pinpoint the location of the failure, they may still be unable to attribute the cause of failure to either the deliberate action of the intermediate node, or to some external factors beyond the control of the intermediate node, such as network congestion, channel interference, or data corruption. The problem of hidden action is hardly unique to networks. Also known as moral hazard, this problem has long been of interest in the economics literature concerning information asymmetry, incentive and contract theory, and agency theory. We follow this literature by formalizing the problem as a principal-agent model, where multiple agents making sequential hidden actions [17, 27]. Our results are threefold. 
First, we show that it is possible to design contracts to induce cooperation when intermediate nodes can choose to forward or drop packets, as well as when the nodes can choose to forward packets with different levels of quality of service. If the path and transit costs are known prior to transmission, the principal achieves the first-best solution, and can implement the contracts either directly with each intermediate node or recursively through the network (each node making a contract with the following node) without any loss in utility. Second, we find that introducing per-hop monitoring has no impact on the principal's expected utility in equilibrium. For a principal who wishes to induce an equilibrium in which all intermediate nodes cooperate, its expected total payment is the same with or without monitoring. However, monitoring provides a dominant strategy equilibrium, which is a stronger solution concept than the Nash equilibrium achievable in the absence of monitoring. Third, we show that in the absence of a priori information about transit costs on the packet forwarding path, it is possible to generalize existing mechanisms to overcome scenarios that involve both hidden-information and hidden-action. In these scenarios, the principal pays a premium compared to scenarios with known transit costs. 2. BASELINE MODEL We consider a principal-agent model, where the principal is a pair of communication endpoints who wish to communicate over a multi-hop network, and the agents are the intermediate nodes capable of forwarding packets between the endpoints. The principal (who in practice can be either the sender, the receiver, or both) makes individual take-it-or-leave-it offers (contracts) to the agents. If the contracts are accepted, the agents choose their actions sequentially to maximize their expected payoffs based on the payment schedule of the contract. When necessary, agents can in turn make subsequent take-it-or-leave-it offers to their downstream agents. We assume that all participants are risk neutral and that standard assumptions about the global observability of the final outcome and the enforceability of payments by guaranteeing parties hold. For simplicity, we assume that each agent has only two possible actions; one involving significant effort and one involving little effort. We denote the action choice of agent i by a_i ∈ {0, 1}, where a_i = 0 and a_i = 1 stand for the low-effort and high-effort actions, respectively. Each action is associated with a cost (to the agent) C(a_i), and we assume C(a_i = 1) > C(a_i = 0). At this stage, we assume that all nodes have the same C(a_i) for presentation clarity, but we relax this assumption later. Without loss of generality we normalize C(a_i = 0) to be zero, and denote the high-effort cost by c, so C(a_i = 0) = 0 and C(a_i = 1) = c. The utility of agent i, denoted by u_i, is a function of the payment it receives from the principal (s_i), the action it takes (a_i), and the cost it incurs (c_i), as follows: u_i(s_i, c_i, a_i) = s_i − a_i c_i. The outcome is denoted by x ∈ {x^G, x^B}, where x^G stands for the Good outcome in which the packet reaches the destination, and x^B stands for the Bad outcome in which the packet is dropped before it reaches the destination. The outcome is a function of the vector of actions taken by the agents on the path, a = (a_1, ..., a_n) ∈ {0, 1}^n, and the loss rate on the channels, k.
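The quantities introduced so far can be summarized in a short Python sketch (our own illustration with made-up numbers; it assumes, as in the drop-versus-forward reading below, that a forwarding node relays the packet across its outgoing link with probability 1 − k while a dropping node never delivers it):

    from typing import List

    def agent_utility(s_i: float, c_i: float, a_i: int) -> float:
        # u_i(s_i, c_i, a_i) = s_i - a_i * c_i
        return s_i - a_i * c_i

    def prob_delivery(actions: List[int], k: float) -> float:
        """Probability that the packet reaches the destination: one lossy link from the
        sender to the first node, then one lossy link per intermediate node that forwards."""
        p = 1.0 - k                    # sender -> first intermediate node
        for a in actions:              # node i -> node i+1 (or the destination)
            p *= (1.0 - k) * a         # a = 1 forwards, a = 0 drops
        return p

    print(prob_delivery([1, 1, 1], k=0.1))        # (1 - k)^4 = 0.6561 for n = 3 nodes
    print(agent_utility(s_i=2.0, c_i=1.0, a_i=1)) # 1.0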
The benefit of the sender from the outcome is denoted by w(x), where w(x^G) = w^G and w(x^B) = w^B = 0. The utility of the sender is consequently: u(x, S) = w(x) − S, where S = Σ_{i=1}^{n} s_i. A sender who wishes to induce an equilibrium in which all nodes engage in the high-effort action needs to satisfy two constraints for each agent i: (IR) Individual rationality (participation constraint): the expected utility from participation should (weakly) exceed its reservation utility (which we normalize to 0). (We use the notion of ex ante individual rationality, in which the agents choose to participate before they know the state of the system.) (IC) Incentive compatibility: the expected utility from exerting high-effort should (weakly) exceed its expected utility from exerting low-effort. In some network scenarios, the topology and costs are common knowledge. That is, the sender knows in advance the path that its packet will take and the costs on that path. In other routing scenarios, the sender does not have this a priori information. We show that our model can be applied to both scenarios with known and unknown topologies and costs, and highlight the implications of each scenario in the context of contracts. We also distinguish between direct contracts, where the principal signs an individual contract with each node, and recursive contracts, where each node enters a contractual relationship with its downstream node. Figure 1: Multi-hop path from sender to destination. Figure 2: Structure of the multihop routing game under known topology and transit costs. The remainder of this paper is organized as follows. In Section 3 we consider agents who decide whether to drop or forward packets with and without monitoring when the transit costs are common knowledge. In Section 4, we extend the model to scenarios with unknown transit costs. In Section 5, we distinguish between recursive and direct contracts and discuss their relationship. In Section 6, we show that the model applies to scenarios in which agents choose between different levels of quality of service. We consider Internet routing as a case study in Section 7. In Section 8 we present related work, and Section 9 concludes the paper. 3. KNOWN TRANSIT COSTS In this section we analyze scenarios in which the principal knows in advance the nodes on the path to the destination and their costs, as shown in figure 1. We consider agents who decide whether to drop or forward packets, and distinguish between scenarios with and without monitoring. 3.1 Drop versus Forward without Monitoring In this scenario, the agents decide whether to drop (a = 0) or forward (a = 1) packets. The principal uses no monitoring to observe per-hop outcomes. Consequently, the principal makes the payment schedule to each agent contingent on the final outcome, x, as follows: s_i(x) = (s^B_i, s^G_i), where s^B_i = s_i(x = x^B) and s^G_i = s_i(x = x^G). The timeline of this scenario is shown in figure 2. Given a per-hop loss rate of k, we can express the probability that a packet is successfully delivered from node i to its successor i + 1 as: Pr(x^G_{i→i+1} | a_i) = (1 − k)a_i (1), where x^G_{i→j} denotes a successful transmission from node i to j. PROPOSITION 3.1. Under the optimal contract that induces high-effort behavior from all intermediate nodes in the Nash equilibrium, the expected payment to each node is the same as its expected cost, with the following payment schedule: s^B_i = s_i(x = x^B) = 0 (2) and s^G_i = s_i(x = x^G) = c / (1 − k)^(n−i+1) (3). PROOF.
The principal needs to satisfy the IC and IR constraints for each agent i, which can be expressed as follows: (IC) Pr(x^G | a_{j≥i} = 1) s^G_i + Pr(x^B | a_{j≥i} = 1) s^B_i − c ≥ Pr(x^G | a_i = 0, a_{j>i} = 1) s^G_i + Pr(x^B | a_i = 0, a_{j>i} = 1) s^B_i (4). This constraint says that the expected utility from forwarding is greater than or equal to its expected utility from dropping, if all subsequent nodes forward as well. (IR) Pr(x^G_{S→i} | a_{j<i} = 1)(Pr(x^G | a_{j≥i} = 1) s^G_i + Pr(x^B | a_{j≥i} = 1) s^B_i − c) + Pr(x^B_{S→i} | a_{j<i} = 1) s^B_i ≥ 0 (5). This constraint says that the expected utility from participating is greater than or equal to zero (reservation utility), if all other nodes forward. The above constraints can be expressed as follows, based on Eq. 1: (IC): (1 − k)^(n−i+1) s^G_i + (1 − (1 − k)^(n−i+1)) s^B_i − c ≥ s^B_i; (IR): (1 − k)^i ((1 − k)^(n−i+1) s^G_i + (1 − (1 − k)^(n−i+1)) s^B_i − c) + (1 − (1 − k)^i) s^B_i ≥ 0. It is a standard result that both constraints bind at the optimal contract (see [23]). Solving the two equations, we obtain the solution that is presented in Eqs. 2 and 3. We next prove that the expected payment to a node equals its expected cost in equilibrium. The expected cost of node i is its transit cost multiplied by the probability that it faces this cost (i.e., the probability that the packet reaches node i), which is: (1 − k)^i c. The expected payment that node i receives is: Pr(x^G) s^G_i + Pr(x^B) s^B_i = (1 − k)^(n+1) · c / (1 − k)^(n−i+1) = (1 − k)^i c. Note that the expected payment to a node decreases as the node gets closer to the destination due to the asymmetric distribution of risk. The closer the node is to the destination, the lower the probability that a packet will fail to reach the destination, resulting in the low payment being made to the node. The expected payment by the principal is: E[S] = (1 − k)^(n+1) Σ_{i=1}^{n} s^G_i + (1 − (1 − k)^(n+1)) Σ_{i=1}^{n} s^B_i = (1 − k)^(n+1) Σ_{i=1}^{n} c_i / (1 − k)^(n−i+1) (6). The expected payment made by the principal depends not only on the total cost, but also the number of nodes on the path. (Since transit nodes perform actions sequentially, this is really a subgame-perfect equilibrium (SPE), but we will refer to it as Nash equilibrium in the remainder of the paper.) Figure 3: Two paths of equal total costs but different lengths and individual costs. PROPOSITION 3.2. Given two paths with respective lengths of n_1 and n_2 hops, per-hop transit costs of c_1 and c_2, and per-hop loss rates of k_1 and k_2, such that: • c_1 n_1 = c_2 n_2 (equal total cost) • (1 − k_1)^(n_1+1) = (1 − k_2)^(n_2+1) (equal expected benefit) • n_1 < n_2 (path 1 is shorter than path 2), the expected total payment made by the principal is lower on the shorter path. PROOF. The expected payment in path j is E[S]_j = Σ_{i=1}^{n_j} c_j (1 − k_j)^i = c_j (1 − k_j) (1 − (1 − k_j)^(n_j)) / k_j. So, we have to show that: c_1 (1 − k_1) (1 − (1 − k_1)^(n_1)) / k_1 < c_2 (1 − k_2) (1 − (1 − k_2)^(n_2)) / k_2. Let M = c_1 n_1 = c_2 n_2 and N = (1 − k_1)^(n_1+1) = (1 − k_2)^(n_2+1). We have to show that M N^(1/(n_1+1)) (1 − N^(n_1/(n_1+1))) / (n_1 (1 − N^(1/(n_1+1)))) < M N^(1/(n_2+1)) (1 − N^(n_2/(n_2+1))) / (n_2 (1 − N^(1/(n_2+1)))) (7). Let f = N^(1/(n+1)) (1 − N^(n/(n+1))) / (n (1 − N^(1/(n+1)))). Then, it is enough to show that f is monotonically increasing in n. We have ∂f/∂n = g(N, n) / h(N, n), where g(N, n) = −((ln(N) n − (n + 1)^2)(N^(1/(n+1)) − N^((n+2)/(n+1))) − (n + 1)^2 (N + N^(2/(n+1)))) and h(N, n) = (n + 1)^2 n^2 (−1 + N^(1/(n+1)))^2. But h(N, n) > 0 for all N, n; therefore, it is enough to show that g(N, n) > 0. Because N ∈ (0, 1): (i) ln(N) < 0, and (ii) N^(1/(n+1)) > N^((n+2)/(n+1)). Therefore, g(N, n) > 0 for all N, n. This means that, ceteris paribus, shorter paths should always be preferred over longer ones.
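As a quick numerical illustration of Proposition 3.2, the Python sketch below (our own check; the parameter values are made up and are not those of Figure 3) builds two paths with equal total cost and equal expected benefit and confirms that the shorter one yields the lower expected total payment E[S] = Σ_{i=1}^{n} c (1 − k)^i:

    def expected_total_payment(n: int, c: float, k: float) -> float:
        # E[S] = sum_{i=1..n} c * (1 - k)^i, equivalently (1 - k)^(n+1) * sum_i s^G_i
        return sum(c * (1.0 - k) ** i for i in range(1, n + 1))

    # Path 1: two nodes of cost 1.5; Path 2: three nodes of cost 1.0 (equal total cost 3).
    n1, c1, k1 = 2, 1.5, 0.10
    N = (1.0 - k1) ** (n1 + 1)              # common expected benefit factor
    n2, c2 = 3, 1.0
    k2 = 1.0 - N ** (1.0 / (n2 + 1))        # chosen so that (1 - k2)^(n2 + 1) = N
    e1 = expected_total_payment(n1, c1, k1)
    e2 = expected_total_payment(n2, c2, k2)
    print(round(e1, 4), round(e2, 4), e1 < e2)   # the shorter path is cheaper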
For example, consider the two topologies presented in Figure 3. While the paths are of equal total cost, the total expected payment by the principal is different. Based on Eqs. 2 and 3, the expected total payment for the top path is: E[S] = Pr(x^G)(s^G_A + s^G_B) = (c_1 / (1 − k_1)^2 + c_1 / (1 − k_1)) (1 − k_1)^3 (8), while the expected total payment for the bottom path is: E[S] = Pr(x^G)(s^G_A + s^G_B + s^G_C) = (c_2 / (1 − k_2)^3 + c_2 / (1 − k_2)^2 + c_2 / (1 − k_2)) (1 − k_2)^4. For n_1 = 2, c_1 = 1.5, k_1 = 0.5, n_2 = 3, c_2 = 1, k_2 = 0.405, we have equal total cost and equal expected benefit, but E[S]_1 = 0.948 and E[S]_2 = 1.313. 3.2 Drop versus Forward with Monitoring Suppose the principal obtains per-hop monitoring information. (For a recent proposal of an accountability framework that provides such monitoring information, see [4].) Per-hop information broadens the set of mechanisms the principal can use. For example, the principal can make the payment schedule contingent on arrival to the next hop instead of arrival to the final destination. Can such information be of use to a principal wishing to induce an equilibrium in which all intermediate nodes forward the packet? PROPOSITION 3.3. In the drop versus forward model, the principal derives the same expected utility whether it obtains per-hop monitoring information or not. PROOF. The proof to this proposition is already implied in the findings of the previous section. We found that in the absence of per-hop information, the expected cost of each intermediate node equals its expected payment. In order to satisfy the IR constraint, it is essential to pay each intermediate node an expected amount of at least its expected cost; otherwise, the node would be better off not participating. Therefore, no other payment scheme can reduce the expected payment from the principal to the intermediate nodes. In addition, if all nodes are incentivized to forward packets, the probability that the packet reaches the destination is the same in both scenarios, thus the expected benefit of the principal is the same. Indeed, we have found that even in the absence of per-hop monitoring information, the principal achieves the first-best solution. To convince the reader that this is indeed the case, we provide an example of a mechanism that conditions payments on arrival to the next hop. This is possible only if per-hop monitoring information is provided. In the new mechanism, the principal makes the payment schedule contingent on whether the packet has reached the next hop or not. That is, the payment to node i is s^G_i if the packet has reached node i + 1, and s^B_i otherwise. We assume costless monitoring, giving us the best case scenario for the use of monitoring. As before, we consider a principal who wishes to induce an equilibrium in which all intermediate nodes forward the packet. The expected utility of the principal is the difference between its expected benefit and its expected payment. Because the expected benefit when all nodes forward is the same under both scenarios, we only need to show that the expected total payment is identical as well. Under the monitoring mechanism, the principal has to satisfy the following constraints: (IC) Pr(x^G_{i→i+1} | a_i = 1) s^G + Pr(x^B_{i→i+1} | a_i = 1) s^B − c ≥ Pr(x^G_{i→i+1} | a_i = 0) s^G + Pr(x^B_{i→i+1} | a_i = 0) s^B (9); (IR) Pr(x^G_{S→i} | a_{j<i} = 1)(Pr(x^G_{i→i+1} | a_i = 1) s^G + Pr(x^B_{i→i+1} | a_i = 1) s^B − c) ≥ 0 (10).
These constraints can be expressed as follows: (IC): (1 − k) s^G + k s^B − c ≥ s^B; (IR): (1 − k)^i ((1 − k) s^G + k s^B − c) ≥ 0. The two constraints bind at the optimal contract as before, and we get the following payment schedule: s^B = 0 and s^G = c / (1 − k). The expected total payment under this scenario is: E[S] = Σ_{i=1}^{n} k (1 − k)^i (s^B + (i − 1) s^G) + (1 − k)^(n+1) n s^G = (1 − k)^(n+1) Σ_{i=1}^{n} c_i / (1 − k)^(n−i+1), as in the scenario without monitoring (see Equation 6). While the expected total payment is the same with or without monitoring, there are some differences between the two scenarios. First, the payment structure is different. If no per-hop monitoring is used, the payment to each node depends on its location (i). In contrast, monitoring provides us with n identical contracts. Second, the solution concept used is different. If no monitoring is used, the strategy profile of a_i = 1 for all i is a Nash equilibrium, which means that no agent has an incentive to deviate unilaterally from the strategy profile. In contrast, with the use of monitoring, the action chosen by node i is independent of the other agents' forwarding behavior. Therefore, monitoring provides us with a dominant strategy equilibrium, which is a stronger solution concept than Nash equilibrium. [15], [16] discuss the appropriateness of different solution concepts in the context of online environments. 4. UNKNOWN TRANSIT COSTS In certain network settings, the transit costs of nodes along the forwarding path may not be common knowledge, i.e., there exists the problem of hidden information. In this section, we address the following questions: 1. Is it possible to design contracts that induce cooperative behavior in the presence of both hidden-action and hidden-information? 2. What is the principal's loss due to the lack of knowledge of the transit costs? In hidden-information problems, the principal employs mechanisms to induce truthful revelation of private information from the agents. In the routing game, the principal wishes to extract transit cost information from the network routers in order to determine the lowest cost path (LCP) for a given source-destination pair. The network routers act strategically and declare transit costs to maximize their profit. Mechanisms that have been proposed in the literature for the routing game [24, 13] assume that once the transit costs have been obtained, and the LCP has been determined, the nodes on the LCP obediently forward all packets, and that there is no loss in the network, i.e., k = 0. In this section, we consider both hidden information and hidden action, and generalize these mechanisms to induce both truth revelation and high-effort action in equilibrium, where nodes transmit over a lossy communication channel, i.e., k ≥ 0. 4.1 VCG Mechanism In their seminal paper [24], Nisan and Ronen present a VCG mechanism that induces truthful revelation of transit costs by edges
Under FPSS, transit nodes keep track of the amount of traffic routed through them via counters, and payments are periodically transferred from the principals to the transit nodes based on the counter values. FPSS assumes that transit nodes are obedient in packet forwarding behavior, and will not update the counters without exerting high effort in packet forwarding. In this section, we present FPSS , which generalizes FPSS to operate in an environment with lossy communication channels (i.e., k ≥ 0) and strategic behavior in terms of packet forwarding. We will show that FPSS induces an equilibrium in which all nodes truthfully reveal their transit costs and forward packets if they are on the LCP. Figure 5 presents the timeline of FPSS . In the first stage, the sender declares two payment functions, (sG i , sB i ), that will be paid upon success or failure of packet delivery. Given these payments, nodes have incentive to reveal their costs truthfully, and later to forward packets. Payments are transferred based on the final outcome. In FPSS , each node i submits a bid bi, which is its reported transit cost. Node i is said to be truthful if bi = ci. We write b for the vector (b1, ... , bn) of bids submitted by all transit nodes. Let Ii(b) be the indicator function for the LCP given the bid vector b such that Ii(b) = 1 if i is on the LCP; 0 otherwise. Following FPSS [13], the payment received by node i at equilibrium is: pi = biIi(b) + [ X r Ir(b|i ∞)br − X r Ir(b)br] = X r Ir(b|i ∞)br − X r=i Ir(b)br (11) where the expression b|i x means that (b|i x)j = cj for all j = i, and (b|i x)i = x. In FPSS , we compute sB i and sG i as a function of pi, k, and n. First, we recognize that sB i must be less than or equal to zero in order for the true LCP to be chosen. Otherwise, strategic nodes may have an incentive to report extremely small costs to mislead the principal into believing that they are on the LCP. Then, these nodes can drop any packets they receive, incur zero transit cost, collect a payment of sB i > 0, and earn positive profit. PROPOSITION 4.1. Let the payments of FPSS be: sB i = 0 sG i = pi (1 − k)n−i+1 Then, FPSS has a Nash equilibrium in which all nodes truthfully reveal their transit costs and all nodes on the LCP forward packets. PROOF. In order to prove the proposition above, we have to show that nodes have no incentive to engage in the following misbehaviors: 1. truthfully reveal cost but drop packet, 2. lie about cost and forward packet, 3. lie about cost and drop packet. If all nodes truthfully reveal their costs and forward packets, the expected utility of node i on the LCP is: E[u]i = Pr(xG S→i)(E[si] − ci) + Pr(xB S→i)sB i = (1 − k)i (1 − k)n−i+1 sG i + (1 − (1 − k)n−i+1 )sB i − ci + (1 − (1 − k)i )sB i = (1 − k)i (1 − k)n−i+1 pi (1 − k)n−i+1 − (1 − k)i ci = (1 − k)i (pi − ci) ≥ 0 (12) The last inequality is derived from the fact that FPSS is a truthful mechanism, thus pi ≥ ci. The expected utility of a node not on the LCP is 0. A node that drops a packet receives sB i = 0, which is smaller than or equal to E[u]i for i ∈ LCP and equals E[u]i for i /∈ LCP. Therefore, nodes cannot gain utility from misbehaviors (1) or (3). We next show that nodes cannot gain utility from misbehavior (2). 1. if i ∈ LCP, E[u]i > 0. (a) if it reports bi > ci: i. if bi < P r Ir(b|i ∞)br − P r=i Ir(b)br, it is still on the LCP, and since the payment is independent of bi, its utility does not change. ii. 
ii. If b_i > Σ_r I_r(b|^i ∞) b_r − Σ_{r≠i} I_r(b) b_r, it will not be on the LCP and obtains E[u]_i = 0, which is less than its expected utility if truthfully revealing its cost.
(b) If it reports b_i < c_i, it is still on the LCP, and since the payment is independent of b_i, its utility does not change.
2. If i ∉ LCP, E[u]_i = 0.
(a) If it reports b_i > c_i, it remains out of the LCP, so its utility does not change.
(b) If it reports b_i < c_i:
i. If b_i < Σ_r I_r(b|^i ∞) b_r − Σ_{r≠i} I_r(b) b_r, it joins the LCP, and gains an expected utility of

E[u]_i = (1−k)^i (p_i − c_i)

However, if i ∉ LCP, it means that

c_i > Σ_r I_r(c|^i ∞) c_r − Σ_{r≠i} I_r(c) c_r

But if all nodes truthfully reveal their costs,

p_i = Σ_r I_r(c|^i ∞) c_r − Σ_{r≠i} I_r(c) c_r < c_i

and therefore E[u]_i < 0.
ii. If b_i > Σ_r I_r(b|^i ∞) b_r − Σ_{r≠i} I_r(b) b_r, it remains out of the LCP, so its utility does not change.

Therefore, there exists an equilibrium in which all nodes truthfully reveal their transit costs and forward the received packets.

We note that in the hidden-information-only context, FPSS induces truthful revelation as a dominant strategy equilibrium. In the current setting with both hidden information and hidden action, FPSS′ achieves a Nash equilibrium in the absence of per-hop monitoring, and a dominant strategy equilibrium in the presence of per-hop monitoring, consistent with the results in section 3 where there is hidden action only. In particular, with per-hop monitoring, the principal declares the payments s^B_i and s^G_i to each node upon failure or success of delivery to the next node. Given the payments s^B_i = 0 and s^G_i = p_i/(1−k), it is a dominant strategy for the nodes to reveal costs truthfully and forward packets.

4.2 Discussion

More generally, for any mechanism M that induces a bid vector b in equilibrium by making a payment of p_i(b) to node i on the LCP, there exists a mechanism M′ that induces an equilibrium with the same bid vector and packet forwarding by making a payment of:

s^B_i = 0
s^G_i = p_i(b) / (1−k)^{n−i+1}

A sketch of the proof would be as follows:
1. I^M_i(b) = I^{M′}_i(b) ∀i, since M′ uses the same choice metric.
2. The expected utility of an LCP node is E[u]_i = (1−k)^i (p_i(b) − c_i) ≥ 0 if it forwards and 0 if it drops, and the expected utility of a non-LCP node is 0.
3. From 1 and 2, we get that if a node i can increase its expected utility by deviating from b_i under M′, it can also increase its utility by deviating from b_i in M, but this is in contradiction to b_i being an equilibrium in M.
4. Nodes have no incentive to drop packets since they derive an expected utility of 0 if they do.

In addition to the generalization of FPSS into FPSS′, we can also consider the generalization of the first-price auction (FPA) mechanism, where the principal determines the LCP and pays each node on the LCP its bid, p_i(b) = b_i. First-price auctions achieve Nash equilibrium as opposed to dominant strategy equilibrium. Therefore, we should expect the generalization of FPA to achieve Nash equilibrium with or without monitoring. We make two additional comments concerning this class of mechanisms. First, we find that the expected total payment made by the principal under the proposed mechanisms is

E[S] = Σ_{i=1}^{n} (1−k)^i p_i(b)

and the expected benefit realized by the principal is

E[w] = (1−k)^{n+1} w^G

where Σ_{i=1}^{n} p_i and w^G are the expected payment and expected benefit, respectively, when only the hidden-information problem is considered.
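The following short Python sketch (illustrative only, with hypothetical costs and payments) puts the pieces above together: it scales given per-node payments p_i into the outcome-contingent schedule of Proposition 4.1 and checks node i's expected utility (1−k)^i (p_i − c_i) and the principal's expected total payment Σ_i (1−k)^i p_i.

```python
# Sketch (hypothetical numbers): from VCG payments p_i to the FPSS'-style
# schedule s^B_i = 0, s^G_i = p_i / (1-k)^(n-i+1), with consistency checks.

k = 0.1                     # per-hop loss rate
costs = [2.0, 1.0, 3.0]     # true transit costs c_1..c_n of the LCP nodes
p = [4.0, 3.0, 5.0]         # payments p_1..p_n with p_i >= c_i, assumed given
n = len(costs)

# Outcome-contingent high payments; the low payment s^B_i is zero.
s_good = [p[i - 1] / (1 - k) ** (n - i + 1) for i in range(1, n + 1)]

for i in range(1, n + 1):
    reach_i = (1 - k) ** i                    # packet reaches node i
    success_from_i = (1 - k) ** (n - i + 1)   # and then reaches the destination
    e_utility = reach_i * (success_from_i * s_good[i - 1] - costs[i - 1])
    closed_form = (1 - k) ** i * (p[i - 1] - costs[i - 1])
    print(f"node {i}: E[u] = {e_utility:.4f}, (1-k)^i (p_i - c_i) = {closed_form:.4f}")

expected_total = sum((1 - k) ** i * p[i - 1] for i in range(1, n + 1))
print(f"principal's expected total payment: {expected_total:.4f}")
```

The printed pairs coincide, and the total matches Σ_i (1−k)^i p_i, illustrating that the only extra cost relative to the hidden-information-only setting comes from the lossy channel, not from strategic forwarding.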
When hidden action is also taken into consideration, the generalized mechanism handles strategic forwarding behavior by conditioning payments upon the final outcome, and accounts for lossy communication channels by designing payments that reflect the distribution of risk. The difference between expected payment and benefit is not due to strategic forwarding behavior, but to lossy communications. Therefore, in a lossless network, we should not see any gap between expected benefits and payments, independent of strategic or non-strategic forwarding behavior.

Second, the loss to the principal due to unknown transit costs is also known as the price of frugality, and is an active field of research [2, 12]. This price greatly depends on the network topology and on the mechanism employed. While it is simple to characterize the principal's loss in some special cases, it is not a trivial problem in general. For example, in topologies with parallel disjoint paths from source to destination, we can prove that under first-price auctions, the loss to the principal is the difference between the cost of the shortest path and the second-shortest path, and the loss is higher under the FPSS mechanism.

5. RECURSIVE CONTRACTS

In this section, we distinguish between direct and recursive contracts. In direct contracts, the principal contracts directly with each node on the path and pays it directly. In recursive payment, the principal contracts with the first node on the path, which in turn contracts with the second, and so on, such that each node contracts with its downstream node and makes the payment based on the final result, as demonstrated in figure 6. With direct payments, the principal needs to know the identity and cost of each node on the path and to have some communication channel with the node. With recursive payments, every node needs to communicate only with its downstream node. Several questions arise in this context:

• What knowledge should the principal have in order to induce cooperative behavior through recursive contracts?
• What should be the structure of recursive contracts that induce cooperative behavior?
• What is the relation between the total expected payment under direct and recursive contracts?
• Is it possible to design recursive contracts in scenarios of unknown transit costs?

Figure 6: Structure of the multihop routing game under known topology and recursive contracts.

In order to answer the questions outlined above, we look at the IR and IC constraints that the principal needs to satisfy when contracting with the first node on the path. When the principal designs a contract with the first node, he should take into account the incentives that the first node should provide to the second node, and so on all the way to the destination. For example, consider the topology given in figure 3(a). When the principal comes to design a contract with node A, he needs to consider the subsequent contract that A should sign with B, which should satisfy the following constraints:

(IR): Pr(x^G_{A→B} | a_A = 1)(E[s | a_B = 1] − c) + Pr(x^B_{A→B} | a_A = 1) s^B_{A→B} ≥ 0
(IC): E[s | a_B = 1] − c ≥ E[s | a_B = 0]

where:

E[s | a_B = 1] = Pr(x^G_{B→D} | a_B = 1) s^G_{A→B} + Pr(x^B_{B→D} | a_B = 1) s^B_{A→B}

and

E[s | a_B = 0] = Pr(x^G_{B→D} | a_B = 0) s^G_{A→B} + Pr(x^B_{B→D} | a_B = 0) s^B_{A→B}

These (binding) constraints yield the values of s^B_{A→B} and s^G_{A→B}:

s^B_{A→B} = 0
s^G_{A→B} = c / (1−k)

Based on these values, S can express the constraints it should satisfy in a contract with A.
(IR): Pr(x^G_{S→A} | a_S = 1)(E[s_{S→A} − s_{A→B} | a_i = 1 ∀i] − c) + Pr(x^B_{S→A} | a_S = 1) s^B_{S→A} ≥ 0
(IC): E[s_{S→A} − s_{A→B} | a_i = 1 ∀i] − c ≥ E[s_{S→A} − s_{A→B} | a_A = 0, a_B = 1]

where:

E[s_{S→A} − s_{A→B} | a_i = 1 ∀i] = Pr(x^G_{A→D} | a_i = 1 ∀i)(s^G_{S→A} − s^G_{A→B}) + Pr(x^B_{A→D} | a_i = 1 ∀i)(s^B_{S→A} − s^B_{A→B})

and

E[s_{S→A} − s_{A→B} | a_A = 0, a_B = 1] = Pr(x^G_{A→D} | a_A = 0, a_B = 1)(s^G_{S→A} − s^G_{A→B}) + Pr(x^B_{A→D} | a_A = 0, a_B = 1)(s^B_{S→A} − s^B_{A→B})

Solving for s^B_{S→A} and s^G_{S→A}, we get:

s^B_{S→A} = 0
s^G_{S→A} = c(2−k) / (1−k)^2

The expected total payment is

E[S] = s^G_{S→A} Pr(x^G_{S→D}) + s^B_{S→A} Pr(x^B_{S→D}) = c(2−k)(1−k)    (13)

which is equal to the expected total payment under direct contracts (see Eq. 8).

PROPOSITION 5.1. The expected total payments by the principal under direct and recursive contracts are equal.

PROOF. In order to calculate the expected total payment, we have to find the payment to the first node on the path that will induce appropriate behavior. Because s^B_i = 0 in the drop/forward model, both constraints can be reduced to:

Pr(x^G_{i→D} | a_j = 1 ∀j)(s^G_i − s^G_{i+1}) − c_i = 0 ⇔ (1−k)^{n−i+1}(s^G_i − s^G_{i+1}) − c_i = 0

which yields, for all 1 ≤ i ≤ n:

s^G_i = c_i / (1−k)^{n−i+1} + s^G_{i+1}

Thus,

s^G_n = c_n / (1−k)
s^G_{n−1} = c_{n−1} / (1−k)^2 + s^G_n = c_{n−1} / (1−k)^2 + c_n / (1−k)
...
s^G_1 = c_1 / (1−k)^n + s^G_2 = ... = Σ_{i=1}^{n} c_i / (1−k)^{n−i+1}

and the expected total payment is

E[S] = (1−k)^{n+1} s^G_1 = (1−k)^{n+1} Σ_{i=1}^{n} c_i / (1−k)^{n−i+1}

which equals the total expected payment under direct payments, as expressed in Eq. 6.

Because the payment is contingent on the final outcome, and the expected payment to a node equals its expected cost, nodes have no incentive to offer their downstream nodes a lower payment than necessary, since if they do, their downstream nodes will not forward the packet.

What information should the principal possess in order to implement recursive contracts? Like in direct payments, the expected payment is affected not solely by the total payment on the path, but also by the topology. Therefore, while the principal only needs to communicate with the first node on the forwarding path and does not have to know the identities of the other nodes, it still needs to know the number of nodes on the path and their individual transit costs.

Finally, is it possible to design recursive contracts under unknown transit costs, and, if so, what should be the structure of such contracts? Suppose the principal has implemented the distributed algorithm that calculates the payments p_i necessary for truthful revelation; would the following payment schedule to the first node induce cooperative behavior?

s^B_1 = 0
s^G_1 = Σ_{i=1}^{n} p_i / (1−k)^{n−i+1}

The answer is not clear. Unlike contracts with known transit costs, the expected payment to a node usually exceeds its expected cost. Therefore, transit nodes may not have the appropriate incentive to follow the principal's guarantee during the payment phase. For example, in FPSS′, the principal guarantees to pay each node an expected payment of p_i > c_i. We assume that payments are enforceable if made by the same entity that pledged to pay. However, in the case of recursive contracts, the entity that pledges to pay in the cost discovery stage (the principal) is not the same as the entity that defines and executes the payments in the forwarding stage (the transit nodes). Transit nodes, who design the contracts in the second stage, know that their downstream nodes will forward the packet as long as the expected payment exceeds the expected cost, even if it is less than the promised amount.
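The following Python sketch (with made-up costs and payments) illustrates both points: under known costs, the recursive schedule of Proposition 5.1 yields the same expected total payment as direct contracts; under a p_i-based schedule of the kind suggested above, the first node has slack, since its downstream node would still forward for a cheaper, cost-based schedule. The promised split used below is one natural reading of the schedule in question, not a construction from the paper.

```python
# Sketch (hypothetical numbers): recursive schedule under known costs, and the
# slack a first-hop node would have if payments were based on p_i > c_i.

k = 0.1
costs = [2.0, 1.0, 3.0]          # known transit costs c_1..c_n
n = len(costs)

# Known costs: s^G_{n+1} = 0 and s^G_i = c_i/(1-k)^(n-i+1) + s^G_{i+1}.
s_good = [0.0] * (n + 2)         # 1-based, with s_good[n+1] = 0
for i in range(n, 0, -1):
    s_good[i] = costs[i - 1] / (1 - k) ** (n - i + 1) + s_good[i + 1]

recursive_total = (1 - k) ** (n + 1) * s_good[1]
direct_total = (1 - k) ** (n + 1) * sum(
    c / (1 - k) ** (n - i + 1) for i, c in enumerate(costs, start=1))
print(f"expected total payment: recursive {recursive_total:.4f}, direct {direct_total:.4f}")

# Unknown costs: suppose the first node were asked to pass on a schedule built
# from payments p_i >= c_i (hypothetical values). Its downstream node would
# still forward under the cheaper cost-based schedule.
p = [4.0, 3.0, 5.0]
promised_to_2 = sum(p[i - 1] / (1 - k) ** (n - i + 1) for i in range(2, n + 1))
minimal_to_2 = sum(costs[i - 1] / (1 - k) ** (n - i + 1) for i in range(2, n + 1))
print(f"promised to node 2: {promised_to_2:.4f}, sufficient: {minimal_to_2:.4f}, "
      f"slack the first node could keep: {promised_to_2 - minimal_to_2:.4f}")
```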
Thus, every node has an incentive to offer lower payments than promised and keep the profit. Transit nodes, who know this is a plausible scenario, may no longer truthfully reveal their costs. Therefore, while recursive contracts under known transit costs are strategically equivalent to direct contracts, it is not clear whether this is the case under unknown transit costs.

6. HIGH-QUALITY VERSUS LOW-QUALITY FORWARDING

So far, we have considered the agents' strategy space to be limited to the drop (a = 0) and forward (a = 1) actions. In this section, we consider a variation of the model where the agents choose between providing a low-quality service (a = 0) and a high-quality service (a = 1). This may correspond to a service-differentiated model where packets are forwarded on a best-effort or a priority basis [6]. In contrast to drop versus forward, a packet may still reach the next hop (albeit with a lower probability) even if the low-effort action is taken. As a second example, consider the practice of hot-potato routing in the inter-domain routing of today's Internet. Individual autonomous systems (AS's) can either adopt hot-potato, or early exit, routing (a = 0), where a packet is handed off to the downstream AS at the first possible exit, or late exit routing (a = 1), where an AS carries the packet longer than it needs to, handing off the packet at an exit closer to the destination. In the absence of explicit incentives, it is not surprising that AS's choose hot-potato routing to minimize their costs, even though it leads to suboptimal routes [28, 29]. In both examples, in the absence of contracts, a rational node would exert low effort, resulting in lower performance. Nevertheless, this behavior can be avoided with an appropriate design of contracts. Formally, the probability that a packet successfully gets from node i to node i+1 is:

Pr(x^G_{i→i+1} | a_i) = 1 − (k − q a_i)    (14)

where q ∈ (0, 1] and k ∈ (q, 1]. In the drop versus forward model, a low-effort action by any node results in a delivery failure. In contrast, a node in the high/low scenario may exert low effort and hope to free-ride on the high-effort level exerted by the other agents.

PROPOSITION 6.1. In the high-quality versus low-quality forwarding model, where transit costs are common knowledge, the principal derives the same expected utility whether it obtains per-hop monitoring information or not.

PROOF. The IC and IR constraints are the same as specified in the proof of proposition 3.1, but their values change, based on Eq. 14, to reflect the different model:

(IC): (1−k+q)^{n−i+1} s^G_i + (1 − (1−k+q)^{n−i+1}) s^B_i − c ≥ (1−k)(1−k+q)^{n−i} s^G_i + (1 − (1−k)(1−k+q)^{n−i}) s^B_i
(IR): (1−k+q)^i ((1−k+q)^{n−i+1} s^G_i + (1 − (1−k+q)^{n−i+1}) s^B_i − c) + (1 − (1−k+q)^i) s^B_i ≥ 0

For this set of constraints, we obtain the following solution:

s^B_i = (1−k+q)^i c(k−1) / q    (15)
s^G_i = (1−k+q)^i c(k−1 + (1−k+q)^{−n}) / q    (16)

We observe that in this version, both the high and the low payments depend on i. If monitoring is used, we obtain the following constraints:

(IC): (1−k+q) s^G_i + (k−q) s^B_i − c ≥ (1−k) s^G_i + k s^B_i
(IR): (1−k+q)^i ((1−k+q) s^G_i + (k−q) s^B_i − c) ≥ 0

and we get the solution:

s^B_i = c(k−1) / q
s^G_i = ck / q

The expected payment by the principal with or without monitoring is the same, and equals:

E[S] = c(1−k+q)(1 − (1−k+q)^n) / (k−q)    (17)

and this concludes the proof.
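A small Python sketch (uniform hypothetical cost c and made-up parameters k, q, n) evaluates the contracts of Eqs. 15-16, the per-hop monitoring contract, and the common expected total payment of Eq. 17.

```python
# Sketch (illustrative only): high/low-quality forwarding contracts with and
# without monitoring, checked against the expected total payment of Eq. 17.

k, q, c, n = 0.3, 0.2, 1.0, 4    # loss parameters, per-node cost, path length
a = 1 - k + q                    # per-hop success probability under high effort

# Without monitoring (payments contingent on the end-to-end outcome), Eqs. 15-16:
s_bad = [a ** i * c * (k - 1) / q for i in range(1, n + 1)]              # negative
s_good = [a ** i * c * (k - 1 + a ** (-n)) / q for i in range(1, n + 1)]

# With per-hop monitoring: n identical contracts.
s_bad_mon, s_good_mon = c * (k - 1) / q, c * k / q

total_no_mon = sum(a ** (n + 1) * s_good[i - 1] + (1 - a ** (n + 1)) * s_bad[i - 1]
                   for i in range(1, n + 1))
total_mon = sum(a ** i * (a * s_good_mon + (1 - a) * s_bad_mon)
                for i in range(1, n + 1))
eq17 = c * a * (1 - a ** n) / (k - q)

print(f"E[S] without monitoring: {total_no_mon:.4f}")
print(f"E[S] with monitoring:    {total_mon:.4f}")
print(f"Eq. 17:                  {eq17:.4f}")
print(f"low-outcome payment for node 1 (no monitoring): {s_bad[0]:.4f} < 0")
```

All three totals coincide, and the negative low-outcome payment printed at the end is what the limited liability discussion that follows is about.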
The payment structure in the high-quality versus low-quality forwarding model is different from that in the drop versus forward model. In particular, at the optimal contract, the low-outcome payment s^B_i is now less than zero. A negative payment means that the agent must pay the principal in the event that the packet fails to reach the destination. In some settings, it may be necessary to impose a limited liability constraint, i.e., s_i ≥ 0. This prevents the first-best solution from being achieved.

PROPOSITION 6.2. In the high-quality versus low-quality forwarding model, if negative payments are disallowed, the expected payment to each node exceeds its expected cost under the optimal contract.

PROOF. The proof is a direct outcome of the following statements, which are proved above:
1. The optimal contract is the contract specified in equations 15 and 16.
2. Under the optimal contract, E[s_i] equals node i's expected cost.
3. Under the optimal contract, s^B_i = (1−k+q)^i c(k−1)/q < 0.

Therefore, under any other contract the sender will have to compensate each node with an expected payment that is higher than its expected transit cost.

There is an additional difference between the two models. In drop versus forward, a principal either signs a contract with all n nodes along the path or with none. This is because a single node dropping the packet determines a failure. In contrast, in high- versus low-quality forwarding, a success may occur under the low-effort actions as well, and payments are used to increase the probability of success. Therefore, it may be possible for the principal to maximize its utility by contracting with only m of the n nodes along the path. While the expected outcome depends on m, it is independent of which specific m nodes are induced. At the same time, the individual expected payments decrease in i (see Eq. 16). Therefore, a principal who wishes to sign a contract with only m out of the n nodes should do so with the nodes that are closest to the destination; namely, nodes (n−m+1, ..., n−1, n). Solving the high-quality versus low-quality forwarding model with unknown transit costs is left for future work.

7. CASE STUDY: INTERNET ROUTING

We can map different deployed and proposed Internet routing schemes to the various models we have considered in this work. Border Gateway Protocol (BGP), the current inter-domain routing protocol in the Internet, computes routes based on path vectors. Since the protocol reveals only the autonomous systems (AS's) along a route but not the costs associated with them, current BGP routing is best characterized by a lack of a priori information about transit costs. In this case, the principal (e.g., a multi-homed site or a tier-1 AS) can implement one of the mechanisms proposed in Section 4 by contracting with individual nodes on the path. Such contracts involve paying some premium over the real cost, and it is not clear whether recursive contracts can be implemented in this scenario. In addition, the current protocol does not have the infrastructure to support the implementation of direct contracts between endpoints and the network. Recently, several new architectures have been proposed in the context of the Internet to provide the principal not only with a set of paths from which it can choose (as BGP does) but also with the performance along those paths and the network topology. One approach to obtain such information is through end-to-end probing [1].
Another approach is to have the edge networks perform measurements and discover the network topology [32]. Yet another approach is to delegate the task of obtaining topology and performance information to a third party, as in the routing-as-a-service proposal [21]. These proposals are quite different in nature, but they share the attempt to provide more visibility and transparency into the network. If information about topology and transit costs is obtained, the scenario maps to the known transit costs model (Section 3). In this case, first-best contracts can be achieved through individual contracts with nodes along the path. However, as we have shown in Section 5, as long as each agent can choose the next hop, the principal can gain the full benefit by contracting with only the first hop (through the implementation of recursive contracts). However, the various proposals for acquiring network topology and performance information do not deal with strategic behavior by the intermediate nodes. With the realization that the information collected may be used by the principal in subsequent contractual relationships, the intermediate nodes may behave strategically, misrepresenting their true costs to the entities that collect and aggregate such information. One recent approach that can alleviate this problem is to provide packet obituaries by having each packet confirm its delivery or report its last successful AS hop [4]. Another approach is to have third parties like Keynote independently monitor the network performance.

8. RELATED WORK

The study of non-cooperative behavior in communication networks, and the design of incentives, has received significant attention in the context of wireless ad-hoc routing. [22] considers the problem of malicious behavior, where nodes respond positively to route requests but then fail to forward the actual packets. It proposes to mitigate this with detection and report mechanisms that essentially help to route around the malicious nodes. However, rather than penalizing nodes that do not forward traffic, it bypasses the misbehaving nodes, thereby relieving their burden. Therefore, such a mechanism is not effective against selfish behavior. In order to mitigate selfish behavior, some approaches [7, 8, 9] require reputation exchange between nodes, or simply first-hand observations [5]. Other approaches propose payment schemes [10, 20, 31] to encourage cooperation. [31] is the closest to our work in that it designs payment schemes in which the sender pays the intermediate nodes in order to prevent several types of selfish behavior. In their approach, nodes are supposed to send receipts to a third-party entity. We show that this type of per-hop monitoring may not be needed. In the context of Internet routing, [4] proposes an accountability framework that provides end hosts and service providers with after-the-fact audits on the fate of their packets. This proposal is part of a broader approach to provide end hosts with greater control over the path of their packets [3, 30]. If senders have transit cost information and can fully control the path of their packets, they can design contracts that yield them first-best utility. The accountability framework proposed in [4] can serve two main goals: informing nodes of network conditions to help them make informed decisions, and helping entities to establish whether individual ASs have performed their duties adequately.
While such a framework can be used for the first task, we propose a different approach to the second problem without the need for per-hop auditing information. Research in distributed algorithmic mechanism design (DAMD) has been applied to BGP routing [13, 14]. These works propose mechanisms to tackle the hidden-information problem, but ignore the problem of forwarding enforcement. Inducing desired behavior is also the objective in [26], which attempts to respond to the challenge of distributed AMD raised in [15]: if the same agents that seek to manipulate the system also run the mechanism, what prevents them from deviating from the mechanism's proposed rules to maximize their own welfare? They start with the mechanism presented in [13] and use mostly auditing mechanisms to prevent deviation from the algorithm. The focus of this work is the design of a payment scheme that provides the appropriate incentives within the context of multi-hop routing. Like other works in this field, we assume that all the accounting services are done using out-of-band mechanisms. Security issues within this context, such as node authentication or message encryption, are orthogonal to the problem presented in this paper, and are addressed, for example, in [18, 19, 25]. The problem of information asymmetry and hidden-action (also known as moral hazard) is well studied in the economics literature [11, 17, 23, 27]. [17] identifies the problem of moral hazard in production teams, and shows that it is impossible to design a sharing rule which is efficient and budget-balanced. [27] shows that this task is made possible when production takes place sequentially.

9. CONCLUSIONS AND FUTURE DIRECTIONS

In this paper we show that in a multi-hop routing setting, where the actions of the intermediate nodes are hidden from the source and/or destination, it is possible to design payment schemes to induce cooperative behavior from the intermediate nodes. We conclude that monitoring per-hop outcomes may not improve the utility of the participants or the network performance. In addition, in scenarios of unknown transit costs, it is also possible to design mechanisms that induce cooperative behavior in equilibrium, but the sender pays a premium for extracting information from the transit nodes. Our model and results suggest several natural and intriguing research avenues:

• Consider manipulative or collusive behaviors which may arise under the proposed payment schemes.
• Analyze the feasibility of recursive contracts under hidden information of transit costs.
• While the proposed payment schemes sustain cooperation in equilibrium, it is not a unique equilibrium. We plan to study under what mechanisms this strategy profile may emerge as a unique equilibrium (e.g., penalty by successor nodes).
• Consider the effect of congestion and capacity constraints on the proposed mechanisms. Our preliminary results show that when several senders compete for a single transit node's capacity, the sender with the highest demand pays a premium even if transit costs are common knowledge. The premium can be expressed as a function of the second-highest demand. In addition, if congestion affects the probability of successful delivery, a sender with a lower-cost alternate path may end up with a lower utility level than his rival with a higher-cost alternate path.
• Fully characterize the full-information Nash equilibrium in first-price auctions, and use this characterization to derive its overcharging compared to truthful mechanisms.

10.
ACKNOWLEDGEMENTS We thank Hal Varian for his useful comments. This work is supported in part by the National Science Foundation under ITR awards ANI-0085879 and ANI-0331659, and Career award ANI0133811. 11. REFERENCES [1] ANDERSEN, D. G., BALAKRISHNAN, H., KAASHOEK, M. F., AND MORRIS, R. Resilient Overlay Networks. In 18th ACM SOSP (2001). [2] ARCHER, A., AND TARDOS, E. Frugal path mechanisms. [3] ARGYRAKI, K., AND CHERITON, D. Loose Source Routing as a Mechanism for Traffic Policies. In Proceedings of SIGCOMM FDNA (August 2004). [4] ARGYRAKI, K., MANIATIS, P., CHERITON, D., AND SHENKER, S. Providing Packet Obituaries. In Third Workshop on Hot Topics in Networks (HotNets) (November 2004). [5] BANSAL, S., AND BAKER, M. Observation-based cooperation enforcement in ad-hoc networks. Technical report, Stanford university (2003). [6] BLAKE, S., BLACK, D., CARLSON, M., DAVIES, E., WANG, Z., AND WEISS, W. An Architecture for Differentiated Service. RFC 2475, 1998. [7] BUCHEGGER, S., AND BOUDEC, J.-Y. L. Performance Analysis of the CONFIDANT Protocol: Cooperation of Nodes - Fairness in Dynamic ad-hoc Networks. In IEEE/ACM Symposium on Mobile Ad Hoc Networking and Computing (MobiHOC) (2002). [8] BUCHEGGER, S., AND BOUDEC, J.-Y. L. Coping with False Accusations in Misbehavior Reputation Systems For Mobile ad-hoc Networks. In EPFL, Technical report (2003). [9] BUCHEGGER, S., AND BOUDEC, J.-Y. L. The effect of rumor spreading in reputation systems for mobile ad-hoc networks. In WiOpt``03: Modeling and Optimization in Mobile ad-hoc and Wireless Networks (2003). [10] BUTTYAN, L., AND HUBAUX, J. Stimulating Cooperation in Self-Organizing Mobile ad-hoc Networks. ACM/Kluwer Journal on Mobile Networks and Applications (MONET) (2003). [11] CAILLAUD, B., AND HERMALIN, B. Hidden Action and Incentives. Teaching Notes. U.C. Berkeley. [12] ELKIND, E., SAHAI, A., AND STEIGLITZ, K. Frugality in path auctions, 2004. [13] FEIGENBAUM, J., PAPADIMITRIOU, C., SAMI, R., AND SHENKER, S. A BGP-based Mechanism for Lowest-Cost Routing. In Proceedings of the ACM Symposium on Principles of Distributed Computing (2002). [14] FEIGENBAUM, J., SAMI, R., AND SHENKER, S. Mechanism Design for Policy Routing. In Yale University, Technical Report (2003). [15] FEIGENBAUM, J., AND SHENKER, S. Distributed Algorithmic Mechanism Design: Recent Results and Future Directions. In Proceedings of the International Workshop on Discrete Algorithms and Methods for Mobile Computing and Communications (2002). [16] FRIEDMAN, E., AND SHENKER, S. Learning and implementation on the internet. In Manuscript. New Brunswick: Rutgers University, Department of Economics (1997). [17] HOLMSTROM, B. Moral Hazard in Teams. Bell Journal of Economics 13 (1982), 324-340. [18] HU, Y., PERRIG, A., AND JOHNSON, D. Ariadne: A Secure On-Demand Routing Protocol for ad-hoc Networks. In Eighth Annual International Conference on Mobile Computing and Networking (Mobicom) (2002), pp. 12-23. [19] HU, Y., PERRIG, A., AND JOHNSON, D. SEAD: Secure Efficient Distance Vector Routing for Mobile ad-hoc Networks. In 4th IEEE Workshop on Mobile Computing Systems and Applications (WMCSA) (2002). [20] JAKOBSSON, M., HUBAUX, J.-P., AND BUTTYAN, L. A Micro-Payment Scheme Encouraging Collaboration in Multi-Hop Cellular Networks. In Financial Cryptography (2003). [21] LAKSHMINARAYANAN, K., STOICA, I., AND SHENKER, S. Routing as a service. In UCB Technical Report No. UCB/CSD-04-1327 (January 2004). [22] MARTI, S., GIULI, T. J., LAI, K., AND BAKER, M. 
Mitigating Routing Misbehavior in Mobile ad-hoc Networks. In Proceedings of MobiCom (2000), pp. 255-265. [23] MAS-COLELL, A., WHINSTON, M., AND GREEN, J. Microeconomic Theory. Oxford University Press, 1995. [24] NISAN, N., AND RONEN, A. Algorithmic Mechanism Design. In Proceedings of the 31st Symposium on Theory of Computing (1999). [25] SANZGIRI, K., DAHILL, B., LEVINE, B., SHIELDS, C., AND BELDING-ROYER, E. A Secure Routing Protocol for ad-hoc Networks. In International Conference on Network Protocols (ICNP) (2002). [26] SHNEIDMAN, J., AND PARKES, D. C. Overcoming rational manipulation in mechanism implementation, 2004. [27] STRAUSZ, R. Moral Hazard in Sequential Teams. Departmental Working Paper. Free University of Berlin (1996). [28] TEIXEIRA, R., GRIFFIN, T., SHAIKH, A., AND VOELKER, G. Network sensitivity to hot-potato disruptions. In Proceedings of ACM SIGCOMM (September 2004). [29] TEIXEIRA, R., SHAIKH, A., GRIFFIN, T., AND REXFORD, J. Dynamics of hot-potato routing in IP networks. In Proceedings of ACM SIGMETRICS (June 2004). [30] YANG, X. NIRA: A New Internet Routing Architecture. In Proceedings of SIGCOMM FDNA (August 2003). [31] ZHONG, S., CHEN, J., AND YANG, Y. R. Sprite: A Simple, Cheat-Proof, Credit-Based System for Mobile ad-hoc Networks. In 22nd Annual Joint Conference of the IEEE Computer and Communications Societies (2003). [32] ZHU, D., GRITTER, M., AND CHERITON, D. Feedback-based Routing. In Proc Hotnets-I (2002).
Hidden-Action in Multi-Hop Routing ABSTRACT In multi-hop networks, the actions taken by individual intermediate nodes are typically hidden from the communicating endpoints; all the endpoints can observe is whether or not the end-to-end transmission was successful. Therefore, in the absence of incentives to the contrary, rational (i.e., selfish) intermediate nodes may choose to forward packets at a low priority or simply not forward packets at all. Using a principal-agent model, we show how the hidden-action problem can be overcome through appropriate design of contracts, in both the direct (the endpoints contract with each individual router) and recursive (each router contracts with the next downstream router) cases. We further demonstrate that per-hop monitoring does not necessarily improve the utility of the principal or the social welfare in the system. In addition, we generalize existing mechanisms that deal with hidden-information to handle scenarios involving both hidden-information and hidden-action. 1. INTRODUCTION Endpoints wishing to communicate over a multi-hop network rely on intermediate nodes to forward packets from the sender to the receiver. In settings where the intermediate nodes are independent agents (such as individual nodes in ad hoc and peer-topeer networks or autonomous systems on the Internet), this poses an incentive problem; the intermediate nodes may incur significant communication and computation costs in the forwarding of packets without deriving any direct benefit from doing so. Consequently, a rational (i.e., utility maximizing) intermediate node may choose to 2Computer Science Division U.C. Berkeley forward packets at a low priority or not forward the packets at all. This rational behavior may lead to suboptimal system performance. The endpoints can provide incentives, e.g., in the form of payments, to encourage the intermediate nodes to forward their packets. However, the actions of the intermediate nodes are often hidden from the endpoints. In many cases, the endpoints can only observe whether or not the packet has reached the destination, and cannot attribute failure to a specific node on the path. Even if some form of monitoring mechanism allows them to pinpoint the location of the failure, they may still be unable to attribute the cause of failure to either the deliberate action of the intermediate node, or to some external factors beyond the control of the intermediate node, such as network congestion, channel interference, or data corruption. The problem of hidden action is hardly unique to networks. Also known as moral hazard, this problem has long been of interest in the economics literature concerning information asymmetry, incentive and contract theory, and agency theory. We follow this literature by formalizing the problem as a principal-agent model, where multiple agents making sequential hidden actions [17, 27]. Our results are threefold. First, we show that it is possible to design contracts to induce cooperation when intermediate nodes can choose to forward or drop packets, as well as when the nodes can choose to forward packets with different levels of quality of service. If the path and transit costs are known prior to transmission, the principal achieves first best solution, and can implement the contracts either directly with each intermediate node or recursively through the network (each node making a contract with the following node) without any loss in utility. 
Second, we find that introducing per-hop monitoring has no impact on the principal's expected utility in equilibrium. For a principal who wishes to induce an equilibrium in which all intermediate nodes cooperate, its expected total payment is the same with or without monitoring. However, monitoring provides a dominant strategy equilibrium, which is a stronger solution concept than the Nash equilibrium achievable in the absence of monitoring. Third, we show that in the absence of a priori information about transit costs on the packet forwarding path, it is possible to generalize existing mechanisms to overcome scenarios that involve both hidden-information and hidden-action. In these scenarios, the principal pays a premium compared to scenarios with known transit costs. 2. BASELINE MODEL We consider a principal-agent model, where the principal is a pair of communication endpoints who wish to communicate over a multi-hop network, and the agents are the intermediate nodes capable of forwarding packets between the endpoints. The principal (who in practice can be either the sender, the receiver, or both) makes individual take-it-or-leave-it offers (contracts) to the agents. If the contracts are accepted, the agents choose their actions sequentially to maximize their expected payoffs based on the payment schedule of the contract. When necessary, agents can in turn make subsequent take-it-or-leave-it offers to their downstream agents. We assume that all participants are risk neutral and that standard assumptions about the global observability of the final outcome and the enforceability of payments by guaranteeing parties hold. For simplicity, we assume that each agent has only two possible actions; one involving significant effort and one involving little effort. We denote the action choice of agent i by ai E {0, 1}, where ai = 0 and ai = 1 stand for the low-effort and high-effort actions, respectively. Each action is associated with a cost (to the agent) C (ai), and we assume: At this stage, we assume that all nodes have the same C (ai) for presentation clarity, but we relax this assumption later. Without loss of generality we normalize the C (ai = 0) to be zero, and denote the high-effort cost by c, so C (ai = 0) = 0 and C (ai = 1) = c. The utility of agent i, denoted by ui, is a function of the payment it receives from the principal (si), the action it takes (ai), and the cost it incurs (ci), as follows: ui (si, ci, ai) = si − aici The outcome is denoted by x E {xG, xB}, where xG stands for the "Good" outcome in which the packet reaches the destination, and xB stands for the "Bad" outcome in which the packet is dropped before it reaches the destination. The outcome is a function of the vector of actions taken by the agents on the path, a = (a1,..., an) E {0, 1} n, and the loss rate on the channels, k. The benefit of the sender from the outcome is denoted by w (x), where: w (xG) = wG; and w (xB) = wB = 0 The utility of the sender is consequently: where: S = Eni = 1 si A sender who wishes to induce an equilibrium in which all nodes engage in the high-effort action needs to satisfy two constraints for each agent i: (IR) Individual rationality (participation constraint)': the expected utility from participation should (weakly) exceed its reservation utility (which we normalize to 0). (IC) Incentive compatibility: the expected utility from exerting high-effort should (weakly) exceed its expected utility from exerting low-effort. 
In some network scenarios, the topology and costs are common knowledge. That is, the sender knows in advance the path that its packet will take and the costs on that path. In other routing scenarios, the sender does not have this a priori information. We show that our model can be applied to both scenarios with known and unknown topologies and costs, and highlight the implications of each scenario in the context of contracts. We also distinguish between direct contracts, where the principal signs an individual contract' We use the notion of ex ante individual rationality, in which the agents choose to participate before they know the state of the system. Figure 1: Multi-hop path from sender to destination. Figure 2: Structure of the multihop routing game under known topology and transit costs. with each node, and recursive contracts, where each node enters a contractual relationship with its downstream node. The remainder of this paper is organized as follows. In Section 3 we consider agents who decide whether to drop or forward packets with and without monitoring when the transit costs are common knowledge. In Section 4, we extend the model to scenarios with unknown transit costs. In Section 5, we distinguish between recursive and direct contracts and discuss their relationship. In Section 6, we show that the model applies to scenarios in which agents choose between different levels of quality of service. We consider Internet routing as a case study in Section 7. In Section 8 we present related work, and Section 9 concludes the paper. 3. KNOWN TRANSIT COSTS In this section we analyze scenarios in which the principal knows in advance the nodes on the path to the destination and their costs, as shown in figure 1. We consider agents who decide whether to drop or forward packets, and distinguish between scenarios with and without monitoring. 3.1 Drop versus Forward without Monitoring In this scenario, the agents decide whether to drop (a = 0) or forward (a = 1) packets. The principal uses no monitoring to observe per-hop outcomes. Consequently, the principal makes the payment schedule to each agent contingent on the final outcome, x, as follows: The timeline of this scenario is shown in figure 2. Given a perhop loss rate of k, we can express the probability that a packet is successfully delivered from node i to its successor i + 1 as: where xGi_j denotes a successful transmission from node i to j. PROOF. The principal needs to satisfy the IC and IR constraints for each agent i, which can be expressed as follows: This constraint says that the expected utility from forwarding is greater than or equal to its expected utility from dropping, if all subsequent nodes forward as well. This constraint says that the expected utility from participating is greater than or equal to zero (reservation utility), if all other nodes forward. The above constraints can be expressed as follows, based on Eq. 1: It is a standard result that both constraints bind at the optimal contract (see [23]). Solving the two equations, we obtain the solution that is presented in Eqs. 2 and 3. We next prove that the expected payment to a node equals its expected cost in equilibrium. The expected cost of node i is its transit cost multiplied by the probability that it faces this cost (i.e., the probability that the packet reaches node i), which is: (1--k) ic. 
The expected payment that node i receives is: Note that the expected payment to a node decreases as the node gets closer to the destination due to the asymmetric distribution of risk. The closer the node is to the destination, the lower the probability that a packet will fail to reach the destination, resulting in the low payment being made to the node. The expected payment by the principal is: The expected payment made by the principal depends not only on the total cost, but also the number of nodes on the path. PROPOSITION 3.2. Given two paths with respective lengths of n1 and n2 hops, per-hop transit costs of c1 and c2, and per-hop loss rates of k1 and k2, such that: Figure 3: Two paths of equal total costs but different lengths and individual costs. the expected total payment made by the principal is lower on the shorter path. PROOF. The expected payment in path j is So, we have to show that: Let M = c1n1 = c2n2 and N = (1--k1) n1 +1 = (1--k2) n2 +1. We have to show that n +1. Therefore, g (N, n)> 0 ` dN, n. This means that, ceteris paribus, shorter paths should always be preferred over longer ones. For example, consider the two topologies presented in Figure 3. While the paths are of equal total cost, the total expected payment by the principal is different. Based on Eqs. 2 and 3, the expected total payment for the top path is: we have equal total cost and equal expected benefit, but E [S] 1 = 0.948 and E [S] 2 = 1.313. 3.2 Drop versus Forward with Monitoring Suppose the principal obtains per-hop monitoring information .3 Per-hop information broadens the set of mechanisms the principal can use. For example, the principal can make the payment schedule contingent on arrival to the next hop instead of arrival to the final destination. Can such information be of use to a principal wishing to induce an equilibrium in which all intermediate nodes forward the packet? PROPOSITION 3.3. In the drop versus forward model, the principal derives the same expected utility whether it obtains per-hop monitoring information or not. PROOF. The proof to this proposition is already implied in the findings of the previous section. We found that in the absence of per-hop information, the expected cost of each intermediate node equals its expected payment. In order to satisfy the IR constraint, it is essential to pay each intermediate node an expected amount of at least its expected cost; otherwise, the node would be better-off not participating. Therefore, no other payment scheme can reduce the expected payment from the principal to the intermediate nodes. In addition, if all nodes are incentivized to forward packets, the probability that the packet reaches the destination is the same in both scenarios, thus the expected benefit of the principal is the same. Indeed, we have found that even in the absence of per-hop monitoring information, the principal achieves first-best solution. To convince the reader that this is indeed the case, we provide an example of a mechanism that conditions payments on arrival to the next hop. This is possible only if per-hop monitoring information is provided. In the new mechanism, the principal makes the payment schedule contingent on whether the packet has reached the next hop or not. That is, the payment to node i is sGi if the packet has reached node i + 1, and sBi otherwise. We assume costless monitoring, giving us the best case scenario for the use of monitoring. 
As before, we consider a principal who wishes to induce an equilibrium in which all intermediate nodes forward the packet. The expected utility of the principal is the difference between its expected benefit and its expected payment. Because the expected benefit when all nodes forward is the same under both scenarios, we only need to show that the expected total payment is identical as well. Under the monitoring mechanism, the principal has to satisfy the following constraints: 3For a recent proposal of an accountability framework that provides such monitoring information see [4]. These constraints can be expressed as follows: The two constraints bind at the optimal contract as before, and we get the following payment schedule: as in the scenario without monitoring (see Equation 6.) While the expected total payment is the same with or without monitoring, there are some differences between the two scenarios. First, the payment structure is different. If no per-hop monitoring is used, the payment to each node depends on its location (i). In contrast, monitoring provides us with n identical contracts. Second, the solution concept used is different. If no monitoring is used, the strategy profile of ai = 1 ` di is a Nash equilibrium, which means that no agent has an incentive to deviate unilaterally from the strategy profile. In contrast, with the use of monitoring, the action chosen by node i is independent of the other agents' forwarding behavior. Therefore, monitoring provides us with dominant strategy equilibrium, which is a stronger solution concept than Nash equilibrium. [15], [16] discuss the appropriateness of different solution concepts in the context of online environments. 4. UNKNOWN TRANSIT COSTS In certain network settings, the transit costs of nodes along the forwarding path may not be common knowledge, i.e., there exists the problem of hidden information. In this section, we address the following questions: 1. Is it possible to design contracts that induce cooperative behavior in the presence of both hidden-action and hiddeninformation? 2. What is the principal's loss due to the lack of knowledge of the transit costs? In hidden-information problems, the principal employs mechanisms to induce truthful revelation of private information from the agents. In the routing game, the principal wishes to extract transit cost information from the network routers in order to determine the lowest cost path (LCP) for a given source-destination pair. The network routers act strategically and declare transit costs to maximize their profit. Mechanisms that have been proposed in the literature for the routing game [24, 13] assume that once the transit costs have been obtained, and the LCP has been determined, the nodes on the LCP obediently forward all packets, and that there is no loss in the network, i.e., k = 0. In this section, we consider both hidden information and hidden action, and generalize these mechanisms to induce both truth revelation and high-effort action in equilibrium, where nodes transmit over a lossy communication channel, i.e., k> 0. 4.1 V CG Mechanism In their seminal paper [24], Nisan and Ronen present a VCG mechanism that induces truthful revelation of transit costs by edges Figure 4: Game structure for FPSS, where only hidden-information is considered. Figure 5: Game structure for FPSS', where both hiddeninformation and hidden-action are considered. in a biconnected network, such that lowest cost paths can be chosen. 
Like all VCG mechanisms, it is a strategyproof mechanism, meaning that it induces truthful revelation in a dominant strategy equilibrium. In [13] (FPSS), Feigenbaum et al. slightly modify the model to have the routers as the selfish agents instead of the edges, and present a distributed algorithm that computes the VCG payments. The timeline of the FPSS game is presented in figure 4. Under FPSS, transit nodes keep track of the amount of traffic routed through them via counters, and payments are periodically transferred from the principals to the transit nodes based on the counter values. FPSS assumes that transit nodes are obedient in packet forwarding behavior, and will not update the counters without exerting high effort in packet forwarding. In this section, we present FPSS', which generalizes FPSS to operate in an environment with lossy communication channels (i.e., k 0) and strategic behavior in terms of packet forwarding. We will show that FPSS' induces an equilibrium in which all nodes truthfully reveal their transit costs and forward packets if they are on the LCP. Figure 5 presents the timeline of FPSS'. In the first stage, the sender declares two payment functions, (sGi, s i), that will be paid upon success or failure of packet delivery. Given these payments, nodes have incentive to reveal their costs truthfully, and later to forward packets. Payments are transferred based on the final outcome. In FPSS', each node i submits a bid bi, which is its reported transit cost. Node i is said to be truthful if bi = ci. We write b for the vector (b1,..., bn) of bids submitted by all transit nodes. Let Ii (b) be the indicator function for the LCP given the bid vector b such that where the expression b | ix means that (b | ix) j = cj for all j = i, and (b | ix) i = x. In FPSS', we compute s i and sGi as a function of pi, k, and n. First, we recognize that s i must be less than or equal to zero in order for the true LCP to be chosen. Otherwise, strategic nodes may have an incentive to report extremely small costs to mislead the principal into believing that they are on the LCP. Then, these nodes can drop any packets they receive, incur zero transit cost, collect a payment of s i> 0, and earn positive profit. Then, FPSS' has a Nash equilibrium in which all nodes truthfully reveal their transit costs and all nodes on the LCPforward packets. PROOF. In order to prove the proposition above, we have to show that nodes have no incentive to engage in the following misbehaviors: 1. truthfully reveal cost but drop packet, 2. lie about cost and forward packet, 3. lie about cost and drop packet. If all nodes truthfully reveal their costs and forward packets, the expected utility of node i on the LCP is: The last inequality is derived from the fact that FPSS is a truthful mechanism, thus pi ci. The expected utility of a node not on the LCP is 0. A node that drops a packet receives s i = 0, which is smaller than or equal to E [u] i for i LCP and equals E [u] i for i / LCP. Therefore, nodes cannot gain utility from misbehaviors (1) or (3). We next show that nodes cannot gain utility from misbehavior (2). 1. if i LCP, E [u] i> 0. (a) if it reports bi> ci: i. if bi <Er Ir (b | i) br − Er = i Ir (b) br, it is still on the LCP, and since the payment is independent of bi, its utility does not change. ii. if bi> Er Ir (b | i) br − Er = i Ir (b) br, it will not be on the LCP and obtain E [u] i = 0, which is less than its expected utility if truthfully revealing its cost. 
(b) if it reports bi <ci, it is still on the LCP, and since the payment is independent of bi, its utility does not change. 2. if i / LCP, E [u] i = 0. (a) if it reports bi> ci, it remains out of the LCP, so its utility does not change. (b) if it reports bi <ci: i. if bi <Pr Ir (blioo) br--Pr = i Ir (b) br, it joins the LCP, and gains an expected utility of ii. if bi> Pr Ir (blioo) br--Pr = i Ir (b) br, it remains out of the LCP, so its utility does not change. Therefore, there exists an equilibrium in which all nodes truthfully reveal their transit costs and forward the received packets. We note that in the hidden information only context, FPSS induces truthful revelation as a dominant strategy equilibrium. In the current setting with both hidden information and hidden action, FPSS' achieves a Nash equilibrium in the absence of per-hop monitoring, and a dominant strategy equilibrium in the presence of per-hop monitoring, consistent with the results in section 3 where there is hidden action only. In particular, with per-hop monitoring, the principal declares the payments sBi and sGi to each node upon failure or success of delivery to the next node. Given the payments sBi = 0 and sGi = pi / (1--k), it is a dominant strategy for the nodes to reveal costs truthfully and forward packets. 4.2 Discussion More generally, for any mechanism M that induces a bid vector b in equilibrium by making a payment of pi (b) to node i on the LCP, there exists a mechanism M' that induces an equilibrium with the same bid vector and packet forwarding by making a payment of: 1. IMi (b) = IM' i (b) Vi, since M' uses the same choice metric. 2. The expected utility of a LCP node is E [u] i = (1--k) i (pi (b)--ci)> 0 if it forwards and 0 if it drops, and the expected utility of a non-LCP node is 0. 3. From 1 and 2, we get that if a node i can increase its expected utility by deviating from bi under M', it can also increase its utility by deviating from bi in M, but this is in contradiction to bi being an equilibrium in M. 4. Nodes have no incentive to drop packets since they derive an expected utility of 0 if they do. In addition to the generalization of FPSS into FPSS', we can also consider the generalization of the first-price auction (FPA) mechanism, where the principal determines the LCP and pays each node on the LCP its bid, pi (b) = bi. First-price auctions achieve Nash equilibrium as opposed to dominant strategy equilibrium. Therefore, we should expect the generalization of FPA to achieve Nash equilibrium with or without monitoring. We make two additional comments concerning this class of mechanisms. First, we find that the expected total payment made by the principal under the proposed mechanisms is and the expected benefit realized by the principal is where Pn i = 1 pi and wG are the expected payment and expected benefit, respectively, when only the hidden-information problem is considered. When hidden action is also taken into consideration, the generalized mechanism handles strategic forwarding behavior by conditioning payments upon the final outcome, and accounts for lossy communication channels by designing payments that reflect the distribution of risk. The difference between expected payment and benefit is not due to strategic forwarding behavior, but to lossy communications. Therefore, in a lossless network, we should not see any gap between expected benefits and payments, independent of strategic or non-strategic forwarding behavior. 
Second, the loss to the principal due to unknown transit costs is also known as the price offrugality, and is an active field of research [2, 12]. This price greatly depends on the network topology and on the mechanism employed. While it is simple to characterize the principal's loss in some special cases, it is not a trivial problem in general. For example, in topologies with parallel disjoint paths from source to destination, we can prove that under first-price auctions, the loss to the principal is the difference between the cost of the shortest path and the second-shortest path, and the loss is higher under the FPSS mechanism. 5. RECURSIVE CONTRACTS In this section, we distinguish between direct and recursive contracts. In direct contracts, the principal contracts directly with each node on the path and pays it directly. In recursive payment, the principal contracts with the first node on the path, which in turn contracts with the second, and so on, such that each node contracts with its downstream node and makes the payment based on the final result, as demonstrated in figure 6. With direct payments, the principal needs to know the identity and cost of each node on the path and to have some communication channel with the node. With recursive payments, every node needs to communicate only with its downstream node. Several questions arise in this context:. What knowledge should the principal have in order to induce cooperative behavior through recursive contracts? . What should be the structure of recursive contracts that induce cooperative behavior? . What is the relation between the total expected payment under direct and recursive contracts? . Is it possible to design recursive contracts in scenarios of unknown transit costs? Figure 6: Structure of the multihop routing game under known topology and recursive contracts. In order to answer the questions outlined above, we look at the IR and IC constraints that the principal needs to satisfy when contracting with the first node on the path. When the principal designs a contract with the first node, he should take into account the incentives that the first node should provide to the second node, and so on all the way to the destination. For example, consider the topology given in figure 3 (a). When the principal comes to design a contract with node A, he needs to consider the subsequent contract that A should sign with B, which should satisfy the following constrints. Solving for SBS -. A and SGS -. A, we get: SBS -. A = 0 The expected total payment is which is equal to the expected total payment under direct contracts (see Eq. 8). PROPOSITION 5.1. The expected total payments by the principal under direct and recursive contracts are equal. PROOF. In order to calculate the expected total payment, we have to find the payment to the first node on the path that will induce appropriate behavior. Because SBi = 0 in the drop / forward model, both constraints can be reduced to: and the expected total payment is where: which equals the total expected payment in direct payments, as expressed in Eq. 6. Because the payment is contingent on the final outcome, and the expected payment to a node equals its expected cost, nodes have no incentive to offer their downstream nodes lower payment than necessary, since if they do, their downstream nodes will not forward the packet. What information should the principal posess in order to implement recursive contracts? 
As with direct payments, the expected payment is not determined solely by the total payment on the path, but also by the topology. Therefore, while the principal only needs to communicate with the first node on the forwarding path and does not have to know the identities of the other nodes, it still needs to know the number of nodes on the path and their individual transit costs. Finally, is it possible to design recursive contracts under unknown transit costs, and, if so, what should be the structure of such contracts? Suppose the principal has implemented the distributed algorithm that calculates the payments p_i necessary for truthful revelation; would a payment schedule to the first node based on these payments induce cooperative behavior? The answer is not clear. Unlike contracts under known transit costs, the expected payment to a node usually exceeds its expected cost. Therefore, transit nodes may not have the appropriate incentive to follow the principal's guarantee during the payment phase. For example, in FPSS, the principal guarantees to pay each node an expected payment of p_i > c_i. We assume that payments are enforceable if made by the same entity that pledges to pay. However, in the case of recursive contracts, the entity that pledges to pay in the cost discovery stage (the principal) is not the same as the entity that defines and executes the payments in the forwarding stage (the transit nodes). Transit nodes, who design the contracts in the second stage, know that their downstream nodes will forward the packet as long as the expected payment exceeds the expected cost, even if it is less than the promised amount. Thus, every node has an incentive to offer lower payments than promised and keep the profit. Transit nodes, who know this is a plausible scenario, may no longer truthfully reveal their costs. Therefore, while recursive contracts under known transit costs are strategically equivalent to direct contracts, it is not clear whether this is the case under unknown transit costs. 6. HIGH-QUALITY VERSUS LOW-QUALITY FORWARDING So far, we have considered the agents' strategy space to be limited to the drop (a = 0) and forward (a = 1) actions. In this section, we consider a variation of the model where the agents choose between providing a low-quality service (a = 0) and a high-quality service (a = 1). This may correspond to a differentiated-services model where packets are forwarded on a best-effort or a priority basis [6]. In contrast to drop versus forward, a packet may still reach the next hop (albeit with a lower probability) even if the low-effort action is taken. As a second example, consider the practice of hot-potato routing in inter-domain routing of today's Internet. Individual autonomous systems (AS's) can adopt either hot-potato (early-exit) routing (a = 0), where a packet is handed off to the downstream AS at the first possible exit, or late-exit routing (a = 1), where an AS carries the packet longer than it needs to, handing off the packet at an exit closer to the destination. In the absence of explicit incentives, it is not surprising that AS's choose hot-potato routing to minimize their costs, even though it leads to suboptimal routes [28, 29]. In both examples, in the absence of contracts, a rational node would exert low effort, resulting in lower performance. Nevertheless, this behavior can be avoided with an appropriate design of contracts. 
Formally, the probability that a packet successfully gets from node i to node i + 1 is 1 − k + q·a_i, where q ∈ (0, 1] and k ∈ (q, 1]; that is, the per-hop success probability is 1 − k + q under high effort (a_i = 1) and 1 − k under low effort (a_i = 0). In the drop versus forward model, a low-effort action by any node results in a delivery failure. In contrast, a node in the high/low scenario may exert low effort and hope to free-ride on the high effort exerted by the other agents. PROPOSITION 6.1. In the high-quality versus low-quality forwarding model, where transit costs are common knowledge, the principal derives the same expected utility whether it obtains per-hop monitoring information or not. PROOF. The IC and IR constraints are the same as specified in the proof of Proposition 3.1, but their values change, based on Eq. 14, to reflect the different model. We observe that in this version, both the high and the low payments depend on i. If monitoring is used, the analogous per-hop constraints yield a corresponding solution. The expected payment by the principal with or without monitoring is the same, and equals each node's expected transit cost; this concludes the proof. The payment structure in the high-quality versus low-quality forwarding model is different from that in the drop versus forward model. In particular, at the optimal contract, the low-outcome payment s_Bi is now less than zero. A negative payment means that the agent must pay the principal in the event that the packet fails to reach the destination. In some settings, it may be necessary to impose a limited liability constraint, i.e., s_i ≥ 0. This prevents the first-best solution from being achieved. PROPOSITION 6.2. In the high-quality versus low-quality forwarding model, if negative payments are disallowed, the expected payment to each node exceeds its expected cost under the optimal contract. PROOF. The proof is a direct outcome of the following statements, which are proved above: 1. The optimal contract is the contract specified in equations 15 and 16. 2. Under the optimal contract, E[s_i] equals node i's expected cost. 3. Under the optimal contract, s_Bi = (1 − k + q)^i c (k − 1)/q < 0. Therefore, under any other contract the sender will have to compensate each node with an expected payment that is higher than its expected transit cost. There is an additional difference between the two models. In drop versus forward, a principal either signs a contract with all n nodes along the path or with none. This is because a single node dropping the packet determines a failure. In contrast, in high- versus low-quality forwarding, a success may occur under the low-effort actions as well, and payments are used to increase the probability of success. Therefore, it may be possible for the principal to maximize its utility by contracting with only m of the n nodes along the path. While the expected outcome depends on m, it is independent of which specific m nodes are induced. At the same time, the individual expected payments decrease in i (see Eq. 16). Therefore, a principal who wishes to sign a contract with only m out of the n nodes should do so with the nodes that are closest to the destination; namely, nodes (n − m + 1, ..., n − 1, n). Solving for the high-quality versus low-quality forwarding model with unknown transit costs is left for future work. 
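A small sketch of the optimal no-monitoring contract in this model may help. It is a derivation under the reconstruction given above (per-hop success probability 1 − k + q·a_i, identical transit cost c, delivery-contingent payments, all other nodes exerting high effort), not code taken from the paper; making the IR and IC constraints bind yields a negative low-outcome payment matching the expression in the proof of Proposition 6.2.

```python
def high_low_contract(c: float, k: float, q: float, n: int, i: int):
    """Delivery-contingent payments (s_G, s_B) for node i with binding IR and IC."""
    alpha = 1.0 - k + q           # per-hop success probability under high effort
    beta = 1.0 - k                # per-hop success probability under low effort
    reach_i = alpha ** i          # probability that the packet reaches node i at all
    # Binding IC: the drop in end-to-end success probability caused by low effort,
    # alpha**n * (alpha - beta), times (s_G - s_B) must equal the expected effort
    # cost reach_i * c.
    gap = reach_i * c / (alpha ** n * (alpha - beta))
    # Binding IR: expected payment under high effort equals expected cost.
    s_B = reach_i * c - alpha ** (n + 1) * gap
    s_G = s_B + gap
    return s_G, s_B


if __name__ == "__main__":
    c, k, q, n, i = 1.0, 0.3, 0.2, 4, 2                      # requires k in (q, 1]
    s_G, s_B = high_low_contract(c, k, q, n, i)
    alpha = 1.0 - k + q
    expected_payment = alpha ** (n + 1) * s_G + (1 - alpha ** (n + 1)) * s_B
    assert s_B < 0                                           # negative low-outcome payment
    assert abs(s_B - alpha ** i * c * (k - 1) / q) < 1e-9    # matches the proof's expression
    assert abs(expected_payment - alpha ** i * c) < 1e-9     # equals node i's expected cost
    print(f"s_G = {s_G:.4f}, s_B = {s_B:.4f}")
```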
7. CASE STUDY: INTERNET ROUTING We can map different deployed and proposed Internet routing schemes to the various models we have considered in this work. Border Gateway Protocol (BGP), the current inter-domain routing protocol in the Internet, computes routes based on path vectors. Since the protocol reveals only the autonomous systems (AS's) along a route but not the costs associated with them, the current BGP routing is best characterized by a lack of a priori information about transit costs. In this case, the principal (e.g., a multi-homed site or a tier-1 AS) can implement one of the mechanisms proposed in Section 4 by contracting with individual nodes on the path. Such contracts involve paying some premium over the real cost, and it is not clear whether recursive contracts can be implemented in this scenario. In addition, the current protocol does not have the infrastructure to support implementation of direct contracts between endpoints and the network. Recently, several new architectures have been proposed in the context of the Internet to provide the principal not only with a set of paths from which it can choose (like BGP does) but also with the performance along those paths and the network topology. One approach to obtain such information is through end-to-end probing [1]. Another approach is to have the edge networks perform measurements and discover the network topology [32]. Yet another approach is to delegate the task of obtaining topology and performance information to a third party, as in the routing-as-a-service proposal [21]. These proposals are quite different in nature, but they share the attempt to provide more visibility and transparency into the network. If information about topology and transit costs is obtained, the scenario is mapped to the "known transit costs" model (Section 3). In this case, first-best contracts can be achieved through individual contracts with nodes along the path. However, as we have shown in Section 5, as long as each agent can choose the next hop, the principal can gain the full benefit by contracting with only the first hop (through the implementation of recursive contracts). However, the various proposals for acquiring network topology and performance information do not deal with strategic behavior by the intermediate nodes. With the realization that the information collected may be used by the principal in subsequent contractual relationships, the intermediate nodes may behave strategically, misrepresenting their true costs to the entities that collect and aggregate such information. One recent approach that can alleviate this problem is to provide packet obituaries by having each packet confirm its delivery or report its last successful AS hop [4]. Another approach is to have third parties like Keynote independently monitor the network performance. 8. RELATED WORK The study of non-cooperative behavior in communication networks, and the design of incentives, has received significant attention in the context of wireless ad-hoc routing. [22] considers the problem of malicious behavior, where nodes respond positively to route requests but then fail to forward the actual packets. It proposes to mitigate this by detection and report mechanisms that essentially help to route around the malicious nodes. However, rather than penalizing nodes that do not forward traffic, it bypasses the misbehaving nodes, thereby relieving their burden. Therefore, such a mechanism is not effective against selfish behavior. In order to mitigate selfish behavior, some approaches [7, 8, 9] require reputation exchange between nodes, or simply first-hand observations [5]. Other approaches propose payment schemes [10, 20, 31] to encourage cooperation. 
[31] is the closest to our work in that it designs payment schemes in which the sender pays the intermediate nodes in order to prevent several types of selfish behavior. In their approach, nodes are supposed to send receipts to a third-party entity. We show that this type of per-hop monitoring may not be needed. In the context of Internet routing, [4] proposes an accountability framework that provides end hosts and service providers after-the-fact audits on the fate of their packets. This proposal is part of a broader approach to provide end hosts with greater control over the path of their packets [3, 30]. If senders have transit cost information and can fully control the path of their packets, they can design contracts that yield them first-best utility. The accountability framework proposed in [4] can serve two main goals: informing nodes of network conditions to help them make informed decisions, and helping entities to establish whether individual AS's have performed their duties adequately. While such a framework can be used for the first task, we propose a different approach to the second problem without the need for per-hop auditing information. Research in distributed algorithmic mechanism design (DAMD) has been applied to BGP routing [13, 14]. These works propose mechanisms to tackle the hidden-information problem, but ignore the problem of forwarding enforcement. Inducing desired behavior is also the objective in [26], which attempts to respond to the challenge of distributed AMD raised in [15]: if the same agents that seek to manipulate the system also run the mechanism, what prevents them from deviating from the mechanism's proposed rules to maximize their own welfare? They start with the proposed mechanism presented in [13] and use mostly auditing mechanisms to prevent deviation from the algorithm. The focus of this work is the design of a payment scheme that provides the appropriate incentives within the context of multi-hop routing. Like other works in this field, we assume that all the accounting services are done using out-of-band mechanisms. Security issues within this context, such as node authentication or message encryption, are orthogonal to the problem presented in this paper, and can be found, for example, in [18, 19, 25]. The problem of information asymmetry and hidden-action (also known as moral hazard) is well studied in the economics literature [11, 17, 23, 27]. [17] identifies the problem of moral hazard in production teams, and shows that it is impossible to design a sharing rule which is efficient and budget-balanced. [27] shows that this task is made possible when production takes place sequentially. 9. CONCLUSIONS AND FUTURE DIRECTIONS In this paper we show that in a multi-hop routing setting, where the actions of the intermediate nodes are hidden from the source and/or destination, it is possible to design payment schemes to induce cooperative behavior from the intermediate nodes. We conclude that monitoring per-hop outcomes may not improve the utility of the participants or the network performance. In addition, in scenarios of unknown transit costs, it is also possible to design mechanisms that induce cooperative behavior in equilibrium, but the sender pays a premium for extracting information from the transit nodes. Our model and results suggest several natural and intriguing research avenues: • Consider manipulative or collusive behaviors which may arise under the proposed payment schemes. 
• Analyze the feasibility of recursive contracts under hidden information about transit costs. • While the proposed payment schemes sustain cooperation in equilibrium, it is not a unique equilibrium. We plan to study under what mechanisms this strategy profile may emerge as a unique equilibrium (e.g., penalty by successor nodes). • Consider the effect of congestion and capacity constraints on the proposed mechanisms. Our preliminary results show that when several senders compete for a single transit node's capacity, the sender with the highest demand pays a premium even if transit costs are common knowledge. The premium can be expressed as a function of the second-highest demand. In addition, if congestion affects the probability of successful delivery, a sender with a lower cost alternate path may end up with a lower utility level than his rival with a higher cost alternate path. • Fully characterize the full-information Nash equilibrium in first-price auctions, and use this characterization to derive its overcharging compared to truthful mechanisms.
Hidden-Action in Multi-Hop Routing ABSTRACT In multi-hop networks, the actions taken by individual intermediate nodes are typically hidden from the communicating endpoints; all the endpoints can observe is whether or not the end-to-end transmission was successful. Therefore, in the absence of incentives to the contrary, rational (i.e., selfish) intermediate nodes may choose to forward packets at a low priority or simply not forward packets at all. Using a principal-agent model, we show how the hidden-action problem can be overcome through appropriate design of contracts, in both the direct (the endpoints contract with each individual router) and recursive (each router contracts with the next downstream router) cases. We further demonstrate that per-hop monitoring does not necessarily improve the utility of the principal or the social welfare in the system. In addition, we generalize existing mechanisms that deal with hidden-information to handle scenarios involving both hidden-information and hidden-action. 1. INTRODUCTION Endpoints wishing to communicate over a multi-hop network rely on intermediate nodes to forward packets from the sender to the receiver. In settings where the intermediate nodes are independent agents (such as individual nodes in ad hoc and peer-topeer networks or autonomous systems on the Internet), this poses an incentive problem; the intermediate nodes may incur significant communication and computation costs in the forwarding of packets without deriving any direct benefit from doing so. Consequently, a rational (i.e., utility maximizing) intermediate node may choose to 2Computer Science Division U.C. Berkeley 2. BASELINE MODEL 3. KNOWN TRANSIT COSTS 3.1 Drop versus Forward without Monitoring 3.2 Drop versus Forward with Monitoring 4. UNKNOWN TRANSIT COSTS 4.1 V CG Mechanism 4.2 Discussion 5. RECURSIVE CONTRACTS 6. HIGH-QUALITY VERSUS LOW-QUALITY FORWARDING 7. CASE STUDY: INTERNET ROUTING 8. RELATED WORK The study of non-cooperative behavior in communication networks, and the design of incentives, has received significant attention in the context of wireless ad-hoc routing. [22] considers the problem of malicious behavior, where nodes respond positively to route requests but then fail to forward the actual packets. It proposes to mitigate it by detection and report mechanisms that will essentially help to route around the malicious nodes. However, rather than penalizing nodes that do not forward traffic, it bypasses the misbehaving nodes, thereby relieving their burden. Therefore, such a mechanism is not effective against selfish behavior. In order to mitigate selfish behavior, some approaches [7, 8, 9] require reputation exchange between nodes, or simply first-hand observations [5]. Other approaches propose payment schemes [10, 20, 31] to encourage cooperation. [31] is the closest to our work in that it designs payment schemes in which the sender pays the intermediate nodes in order to prevent several types of selfish behavior. In their approach, nodes are supposed to send receipts to a thirdparty entity. We show that this type of per-hop monitoring may not be needed. In the context of Internet routing, [4] proposes an accountability framework that provide end hosts and service providers after-thefact audits on the fate of their packets. This proposal is part of a broader approach to provide end hosts with greater control over the path of their packets [3, 30]. 
If senders have transit cost information and can fully control the path of their packets, they can design contracts that yield them with first-best utility. The accountability framework proposed in [4] can serve two main goals: informing nodes of network conditions to help them make informed decision, and helping entities to establish whether individual ASs have performed their duties adequately. While such a framework can be used for the first task, we propose a different approach to the second problem without the need of per-hop auditing information. Research in distributed algorithmic mechanism design (DAMD) has been applied to BGP routing [13, 14]. These works propose mechanisms to tackle the hidden-information problem, but ignore the problem of forwarding enforcement. Inducing desired behavior is also the objective in [26], which attempts to respond to the challenge of distributed AMD raised in [15]: if the same agents that seek to manipulate the system also run the mechanism, what prevents them from deviating from the mechanism's proposed rules to maximize their own welfare? They start with the proposed mechanism presented in [13] and use mostly auditing mechanisms to prevent deviation from the algorithm. The focus of this work is the design of a payment scheme that provides the appropriate incentives within the context of multi-hop routing. Like other works in this field, we assume that all the accounting services are done using out-of-band mechanisms. Security issues within this context, such as node authentication or message encryption, are orthogonal to the problem presented in this paper, and can be found, for example, in [18, 19, 25]. The problem of information asymmetry and hidden-action (also known as moral hazard) is well studied in the economics literature [11, 17, 23, 27]. [17] identifies the problem of moral hazard in production teams, and shows that it is impossible to design a sharing rule which is efficient and budget-balanced. [27] shows that this task is made possible when production takes place sequentially. 9. CONCLUSIONS AND FUTURE DIRECTIONS In this paper we show that in a multi-hop routing setting, where the actions of the intermediate nodes are hidden from the source and/or destination, it is possible to design payment schemes to induce cooperative behavior from the intermediate nodes. We conclude that monitoring per-hop outcomes may not improve the utility of the participants or the network performace. In addition, in scenarios of unknown transit costs, it is also possible to design mechanisms that induce cooperative behavior in equilibrium, but the sender pays a premium for extracting information from the transit nodes. Our model and results suggest several natural and intriguing research avenues: • Consider manipulative or collusive behaviors which may arise under the proposed payment schemes. • Analyze the feasibility of recursive contracts under hiddeninformation of transit costs. • While the proposed payment schemes sustain cooperation in equilibrium, it is not a unique equilibrium. We plan to study under what mechanisms this strategy profile may emerge as a unique equilibrium (e.g., penalty by successor nodes). • Consider the effect of congestion and capacity constraints on the proposed mechanisms. Our preliminary results show that when several senders compete for a single transit node's capacity, the sender with the highest demand pays a premium even if transit costs are common knowledge. 
The premium can be expressed as a function of the second-highest demand. In addition, if congestion affects the probability of successful delivery, a sender with a lower cost alternate path may end up with a lower utility level than his rival with a higher cost alternate path. • Fully characterize the full-information Nash equilibrium in first-price auctions, and use this characterization to derive its overcharging compared to truthful mechanisms.
Hidden-Action in Multi-Hop Routing ABSTRACT In multi-hop networks, the actions taken by individual intermediate nodes are typically hidden from the communicating endpoints; all the endpoints can observe is whether or not the end-to-end transmission was successful. Therefore, in the absence of incentives to the contrary, rational (i.e., selfish) intermediate nodes may choose to forward packets at a low priority or simply not forward packets at all. Using a principal-agent model, we show how the hidden-action problem can be overcome through appropriate design of contracts, in both the direct (the endpoints contract with each individual router) and recursive (each router contracts with the next downstream router) cases. We further demonstrate that per-hop monitoring does not necessarily improve the utility of the principal or the social welfare in the system. In addition, we generalize existing mechanisms that deal with hidden-information to handle scenarios involving both hidden-information and hidden-action. 1. INTRODUCTION Endpoints wishing to communicate over a multi-hop network rely on intermediate nodes to forward packets from the sender to the receiver. Consequently, a rational (i.e., utility maximizing) intermediate node may choose to 8. RELATED WORK The study of non-cooperative behavior in communication networks, and the design of incentives, has received significant attention in the context of wireless ad-hoc routing. [22] considers the problem of malicious behavior, where nodes respond positively to route requests but then fail to forward the actual packets. It proposes to mitigate it by detection and report mechanisms that will essentially help to route around the malicious nodes. However, rather than penalizing nodes that do not forward traffic, it bypasses the misbehaving nodes, thereby relieving their burden. Therefore, such a mechanism is not effective against selfish behavior. In order to mitigate selfish behavior, some approaches [7, 8, 9] require reputation exchange between nodes, or simply first-hand observations [5]. Other approaches propose payment schemes [10, 20, 31] to encourage cooperation. [31] is the closest to our work in that it designs payment schemes in which the sender pays the intermediate nodes in order to prevent several types of selfish behavior. In their approach, nodes are supposed to send receipts to a thirdparty entity. We show that this type of per-hop monitoring may not be needed. In the context of Internet routing, [4] proposes an accountability framework that provide end hosts and service providers after-thefact audits on the fate of their packets. This proposal is part of a broader approach to provide end hosts with greater control over the path of their packets [3, 30]. If senders have transit cost information and can fully control the path of their packets, they can design contracts that yield them with first-best utility. While such a framework can be used for the first task, we propose a different approach to the second problem without the need of per-hop auditing information. Research in distributed algorithmic mechanism design (DAMD) has been applied to BGP routing [13, 14]. These works propose mechanisms to tackle the hidden-information problem, but ignore the problem of forwarding enforcement. They start with the proposed mechanism presented in [13] and use mostly auditing mechanisms to prevent deviation from the algorithm. 
The focus of this work is the design of a payment scheme that provides the appropriate incentives within the context of multi-hop routing. Like other works in this field, we assume that all the accounting services are done using out-of-band mechanisms. Security issues within this context, such as node authentication or message encryption, are orthogonal to the problem presented in this paper, and can be found, for example, in [18, 19, 25]. [17] identifies the problem of moral hazard in production teams, and shows that it is impossible to design a sharing rule which is efficient and budget-balanced. [27] shows that this task is made possible when production takes place sequentially. 9. CONCLUSIONS AND FUTURE DIRECTIONS In this paper we show that in a multi-hop routing setting, where the actions of the intermediate nodes are hidden from the source and/or destination, it is possible to design payment schemes to induce cooperative behavior from the intermediate nodes. We conclude that monitoring per-hop outcomes may not improve the utility of the participants or the network performace. In addition, in scenarios of unknown transit costs, it is also possible to design mechanisms that induce cooperative behavior in equilibrium, but the sender pays a premium for extracting information from the transit nodes. Our model and results suggest several natural and intriguing research avenues: • Consider manipulative or collusive behaviors which may arise under the proposed payment schemes. • Analyze the feasibility of recursive contracts under hiddeninformation of transit costs. • While the proposed payment schemes sustain cooperation in equilibrium, it is not a unique equilibrium. We plan to study under what mechanisms this strategy profile may emerge as a unique equilibrium (e.g., penalty by successor nodes). • Consider the effect of congestion and capacity constraints on the proposed mechanisms. Our preliminary results show that when several senders compete for a single transit node's capacity, the sender with the highest demand pays a premium even if transit costs are common knowledge.
C-76
Assured Service Quality by Improved Fault Management Service-Oriented Event Correlation
The paradigm shift from device-oriented to service-oriented management has also implications to the area of event correlation. Today's event correlation mainly addresses the correlation of events as reported from management tools. However, a correlation of user trouble reports concerning services should also be performed. This is necessary to improve the resolution time and to reduce the effort for keeping the service agreements. We refer to such a type of correlation as service-oriented event correlation. The necessity to use this kind of event correlation is motivated in the paper. To introduce service-oriented event correlation for an IT service provider, an appropriate modeling of the correlation workflow and of the information is necessary. Therefore, we examine the process management frameworks IT Infrastructure Library (ITIL) and enhanced Telecom Operations Map (eTOM) for their contribution to the workflow modeling in this area. The different kinds of dependencies that we find in our general scenario are then used to develop a workflow for the service-oriented event correlation. The MNM Service Model, which is a generic model for IT service management proposed by the Munich Network Management (MNM) Team, is used to derive an appropriate information modeling. An example scenario, the Web Hosting Service of the Leibniz Supercomputing Center (LRZ), is used to demonstrate the application of service-oriented event correlation.
[ "fault manag", "event correl", "process manag framework", "servic manag", "servic-orient event correl", "servic level agreement", "qo", "custom servic manag", "servic-orient manag", "case-base reason", "rule-base reason" ]
[ "P", "P", "P", "P", "M", "M", "U", "M", "M", "U", "U" ]
Assured Service Quality by Improved Fault Management Service-Oriented Event Correlation Andreas Hanemann Munich Network Management Team Leibniz Supercomputing Center Barer Str. 21, D-80333 Munich, Germany hanemann@lrz.de Martin Sailer Munich Network Management Team University of Munich (LMU) Oettingenstr. 67, D-80538 Munich, Germany sailer@nm.ifi.lmu.de David Schmitz Munich Network Management Team Leibniz Supercomputing Center Barer Str. 21, D-80333 Munich, Germany schmitz@lrz.de ABSTRACT The paradigm shift from device-oriented to service-oriented management has also implications to the area of event correlation. Today``s event correlation mainly addresses the correlation of events as reported from management tools. However, a correlation of user trouble reports concerning services should also be performed. This is necessary to improve the resolution time and to reduce the effort for keeping the service agreements. We refer to such a type of correlation as service-oriented event correlation. The necessity to use this kind of event correlation is motivated in the paper. To introduce service-oriented event correlation for an IT service provider, an appropriate modeling of the correlation workflow and of the information is necessary. Therefore, we examine the process management frameworks IT Infrastructure Library (ITIL) and enhanced Telecom Operations Map (eTOM) for their contribution to the workflow modeling in this area. The different kinds of dependencies that we find in our general scenario are then used to develop a workflow for the service-oriented event correlation. The MNM Service Model, which is a generic model for IT service management proposed by the Munich Network Management (MNM) Team, is used to derive an appropriate information modeling. An example scenario, the Web Hosting Service of the Leibniz Supercomputing Center (LRZ), is used to demonstrate the application of service-oriented event correlation. Categories and Subject Descriptors C.2.4 [Computer Systems Organization]: ComputerCommunication Networks-Distributed Applications General Terms Management, Performance, Reliability 1. INTRODUCTION In huge networks a single fault can cause a burst of failure events. To handle the flood of events and to find the root cause of a fault, event correlation approaches like rule-based reasoning, case-based reasoning or the codebook approach have been developed. The main idea of correlation is to condense and structure events to retrieve meaningful information. Until now, these approaches address primarily the correlation of events as reported from management tools or devices. Therefore, we call them device-oriented. In this paper we define a service as a set of functions which are offered by a provider to a customer at a customer provider interface. The definition of a service is therefore more general than the definition of a Web Service, but a Web Service is included in this service definition. As a consequence, the results are applicable for Web Services as well as for other kinds of services. A service level agreement (SLA) is defined as a contract between customer and provider about guaranteed service performance. As in today``s IT environments the offering of such services with an agreed service quality becomes more and more important, this change also affects the event correlation. It has become a necessity for providers to offer such guarantees for a differentiation from other providers. 
To avoid SLA violations it is especially important for service providers to identify the root cause of a fault in a very short time or even act proactively. The latter refers to the case of recognizing the influence of a device breakdown on the offered services. As in this scenario the knowledge about services and their SLAs is used we call it service-oriented. It can be addressed from two directions. Top-down perspective: Several customers report a problem in a certain time interval. Are these trouble reports correlated? How to identify a resource as being the problem``s root cause? 183 Bottom-up perspective: A device (e.g., router, server) breaks down. Which services, and especially which customers, are affected by this fault? The rest of the paper is organized as follows. In Section 2 we describe how event correlation is performed today and present a selection of the state-of-the-art event correlation techniques. Section 3 describes the motivation for serviceoriented event correlation and its benefits. After having motivated the need for such type of correlation we use two well-known IT service management models to gain requirements for an appropriate workflow modeling and present our proposal for it (see Section 4). In Section 5 we present our information modeling which is derived from the MNM Service Model. An application of the approach for a web hosting scenario is performed in Section 6. The last section concludes the paper and presents future work. 2. TODAY``S EVENT CORRELATION TECHNIQUES In [11] the task of event correlation is defined as a conceptual interpretation procedure in the sense that a new meaning is assigned to a set of events that happen in a certain time interval. We can distinguish between three aspects for event correlation. Functional aspect: The correlation focuses on functions which are provided by each network element. It is also regarded which other functions are used to provide a specific function. Topology aspect: The correlation takes into account how the network elements are connected to each other and how they interact. Time aspect: When explicitly regarding time constraints, a start and end time has to be defined for each event. The correlation can use time relationships between the events to perform the correlation. This aspect is only mentioned in some papers [11], but it has to be treated in an event correlation system. In the event correlation it is also important to distinguish between the knowledge acquisition/representation and the correlation algorithm. Examples of approaches to knowledge acquisition/representation are Gruschke``s dependency graphs [6] and Ensel``s dependency detection by neural networks [3]. It is also possible to find the dependencies by analyzing interactions [7]. In addition, there is an approach to manage service dependencies with XML and to define a resource description framework [4]. To get an overview about device-oriented event correlation a selection of several event correlation techniques being used for this kind of correlation is presented. Model-based reasoning: Model-based reasoning (MBR) [15, 10, 20] represents a system by modeling each of its components. A model can either represent a physical entity or a logical entity (e.g., LAN, WAN, domain, service, business process). The models for physical entities are called functional model, while the models for all logical entities are called logical model. 
A description of each model contains three categories of information: attributes, relations to other models, and behavior. The event correlation is a result of the collaboration among models. As all components of a network are represented with their behavior in the model, it is possible to perform simulations to predict how the whole network will behave. A comparison in [20] showed that a large MBR system is not in all cases easy to maintain. It can be difficult to appropriately model the behavior for all components and their interactions correctly and completely. An example system for MBR is NetExpert[16] from OSI which is a hybrid MBR/RBR system (in 2000 OSI was acquired by Agilent Technologies). Rule-based reasoning: Rule-based reasoning (RBR) [15, 10] uses a set of rules for event correlation. The rules have the form conclusion if condition. The condition uses received events and information about the system, while the conclusion contains actions which can either lead to system changes or use system parameters to choose the next rule. An advantage of the approach is that the rules are more or less human-readable and therefore their effect is intuitive. The correlation has proved to be very fast in practice by using the RETE algorithm. In the literature [20, 1] it is claimed that RBR systems are classified as relatively inflexible. Frequent changes in the modeled IT environment would lead to many rule updates. These changes would have to be performed by experts as no automation has currently been established. In some systems information about the network topology which is needed for the event correlation is not used explicitly, but is encoded into the rules. This intransparent usage would make rule updates for topology changes quite difficult. The system brittleness would also be a problem for RBR systems. It means that the system fails if an unknown case occurs, because the case cannot be mapped onto similar cases. The output of RBR systems would also be difficult to predict, because of unforeseen rule interactions in a large rule set. According to [15] an RBR system is only appropriate if the domain for which it is used is small, nonchanging, and well understood. The GTE IMPACT system [11] is an example of a rulebased system. It also uses MBR (GTE has merged with Bell Atlantic in 1998 and is now called Verizon [19]). Codebook approach: The codebook approach [12, 21] has similarities to RBR, but takes a further step and encodes the rules into a correlation matrix. The approach starts using a dependency graph with two kinds of nodes for the modeling. The first kind of nodes are the faults (denoted as problems in the cited papers) which have to be detected, while the second kind of nodes are observable events (symptoms in the papers) which are caused by the faults or other events. The dependencies between the nodes are denoted as directed edges. It is possible to choose weights for the edges, e.g., a weight for the probability that 184 fault/event A causes event B. Another possible weighting could indicate time dependencies. There are several possibilities to reduce the initial graph. If, e.g., a cyclic dependency of events exists and there are no probabilities for the cycles'' edges, all events can be treated as one event. After a final input graph is chosen, the graph is transformed into a correlation matrix where the columns contain the faults and the rows contain the events. 
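A compact sketch of the construction just described may be useful. It is an illustration, not code from any of the cited systems; it uses the unweighted case discussed next (matrix cells are 1 for a dependency and 0 otherwise) together with minimum Hamming distance decoding, and the faults and events are made-up examples.

```python
# Codebook-style correlation sketch: build the fault/event correlation matrix
# from a dependency graph and decode observed events to the closest codeword.

causes = {                        # hypothetical dependency graph: fault -> events it causes
    "switch_down": {"link_A_down", "link_B_down", "server_unreachable"},
    "server_down": {"server_unreachable", "service_timeout"},
    "fiber_cut":   {"link_A_down"},
}
faults = sorted(causes)                                          # matrix columns
events = sorted({e for evs in causes.values() for e in evs})     # matrix rows

# Matrix cell is 1 if the fault causes the event, 0 otherwise (unweighted case).
matrix = {e: {f: int(e in causes[f]) for f in faults} for e in events}


def correlate(observed):
    """Return the fault whose column is closest in Hamming distance to the observation."""
    def hamming(fault):
        return sum(matrix[e][fault] != int(e in observed) for e in events)
    return min(faults, key=hamming)


if __name__ == "__main__":
    # link_A_down was not reported (observation noise), yet decoding still
    # identifies switch_down as the most plausible root cause.
    print(correlate({"link_B_down", "server_unreachable"}))
```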
If there is a dependency in the graph, the weight of the corresponding edge is put into the according matrix cell. In case no weights are used, the matrix cells get the values 1 for dependency and 0 otherwise. Afterwards, a simplification can be done, where events which do not help to discriminate faults are deleted. There is a trade-off between the minimization of the matrix and the robustness of the results. If the matrix is minimized as much as possible, some faults can only be distinguished by a single event. If this event cannot be reliably detected, the event correlation system cannot discriminate between the two faults. A measure how many event observation errors can be compensated by the system is the Hamming distance. The number of rows (events) that can be deleted from the matrix can differ very much depending on the relationships [15]. The codebook approach has the advantage that it uses long-term experience with graphs and coding. This experience is used to minimize the dependency graph and to select an optimal group of events with respect to processing time and robustness against noise. A disadvantage of the approach could be that similar to RBR frequent changes in the environment make it necessary to frequently edit the input graph. SMARTS InCharge [12, 17] is an example of such a correlation system. Case-based reasoning: In contrast to other approaches case-based reasoning (CBR) [14, 15] systems do not use any knowledge about the system structure. The knowledge base saves cases with their values for system parameters and successful recovery actions for these cases. The recovery actions are not performed by the CBR system in the first place, but in most cases by a human operator. If a new case appears, the CBR system compares the current system parameters with the system parameters in prior cases and tries to find a similar one. To identify such a match it has to be defined for which parameters the cases can differ or have to be the same. If a match is found, a learned action can be performed automatically or the operator can be informed with a recovery proposal. An advantage of this approach is that the ability to learn is an integral part of it which is important for rapid changing environments. There are also difficulties when applying the approach [15]. The fields which are used to find a similar case and their importance have to be defined appropriately. If there is a match with a similar case, an adaptation of the previous solution to the current one has to be found. An example system for CBR is SpectroRx from Cabletron Systems. The part of Cabletron that developed SpectroRx became an independent software company in 2002 and is now called Aprisma Management Technologies [2]. In this section four event correlation approaches were presented which have evolved into commercial event correlation systems. The correlation approaches have different focuses. MBR mainly deals with the knowledge acquisition and representation, while RBR and the codebook approach propose a correlation algorithm. The focus of CBR is its ability to learn from prior cases. 3. MOTIVATION OF SERVICE-ORIENTED EVENT CORRELATION Fig. 1 shows a general service scenario upon which we will discuss the importance of a service-oriented correlation. Several services like SSH, a web hosting service, or a video conference service are offered by a provider to its customers at the customer provider interface. A customer can allow several users to use a subscribed service. 
The quality and cost issues of the subscribed services between a customer and a provider are agreed in SLAs. On the provider side the services use subservices for their provisioning. In case of the services mentioned above such subservices are DNS, proxy service, and IP service. Both services and subservices depend on resources upon which they are provisioned. As displayed in the figure a service can depend on more than one resource and a resource can be used by one or more services. Figure 1: Scenario To get a common understanding, we distinguish between different types of events: Resource event: We use the term resource event for network events and system events. A network event refers to events like node up/down or link up/down whereas system events refer to events like server down or authentication failure. Service event: A service event indicates that a service does not work properly. A trouble ticket which is generated from a customer report is a kind of such an event. Other service events can be generated by the provider of a service, if the provider himself detects a service malfunction. 
Effort reduction (top-down perspective): If several user trouble reports are symptoms of the same fault, fault processing should be performed only once and not several times. If the fault has been repaired, the affected customers should be informed about this automatically. Impact analysis (bottom-up perspective): In case of a fault in a resource, its influence on the associated services and affected customers can be determined. This analysis can be performed for short term (when there is currently a resource failure) or long term (e.g., network optimization) considerations. 4. WORKFLOW MODELING In the following we examine the established IT process management frameworks IT Infrastructure Library (ITIL) and enhanced Telecom Operations Map (eTOM). The aim is find out where event correlation can be found in the process structure and how detailed the frameworks currently are. After that we present our solution for a workflow modeling for the service-oriented event correlation. 4.1 IT Infrastructure Library (ITIL) The British Office of Government Commerce (OGC) and the IT Service Management Forum (itSMF) [9] provide a collection of best practices for IT processes in the area of IT service management which is called ITIL. The service management is described by 11 modules which are grouped into Service Support Set (provider internal processes) and Service Delivery Set (processes at the customer-provider interface). Each module describes processes, functions, roles, and responsibilities as well as necessary databases and interfaces. In general, ITIL describes contents, processes, and aims at a high abstraction level and contains no information about management architectures and tools. The fault management is divided into Incident Management process and Problem Management process. Incident Management: The Incident Management contains the service desk as interface to customers (e.g., receives reports about service problems). In case of severe errors structured queries are transferred to the Problem Management. Problem Management: The Problem Management``s tasks are to solve problems, take care of keeping priorities, minimize the reoccurrence of problems, and to provide management information. After receiving requests from the Incident Management, the problem has to be identified and information about necessary countermeasures is transferred to the Change Management. The ITIL processes describe only what has to be done, but contain no information how this can be actually performed. As a consequence, event correlation is not part of the modeling. The ITIL incidents could be regarded as input for the service-oriented event correlation, while the output could be used as a query to the ITIL Problem Management. 4.2 Enhanced Telecom Operations Map (eTOM) The TeleManagement Forum (TMF) [18] is an international non-profit organization from service providers and suppliers in the area of telecommunications services. Similar to ITIL a process-oriented framework has been developed at first, but the framework was designed for a narrower focus, i.e., the market of information and communications service providers. A horizontal grouping into processes for customer care, service development & operations, network & systems management, and partner/supplier is performed. The vertical grouping (fulfillment, assurance, billing) reflects the service life cycle. In the area of fault management three processes have been defined along the horizontal process grouping. 
Problem Handling: The purpose of this process is to receive trouble reports from customers and to solve them by using the Service Problem Management. The aim is also to keep the customer informed about the current status of the trouble report processing as well as about the general network status (e.g., planned maintenance). It is also a task of this process to inform the QoS/SLA management about the impact of current errors on the SLAs. Service Problem Management: In this process reports about customer-affecting service failures are received and transformed. Their root causes are identified and a problem solution or a temporary workaround is established. The task of the Diagnose Problem subprocess is to find the root cause of the problem by performing appropriate tests. Nothing is said about how this can be done (e.g., no event correlation is mentioned). Resource Trouble Management: A subprocess of the Resource Trouble Management is responsible for resource failure event analysis, alarm correlation & filtering, and failure event detection & reporting. Another subprocess is used to execute different tests to find a resource failure. There is also another subprocess which keeps track of the status of the trouble report processing. This subprocess is similar to the functionality of a trouble ticket system. The process description in eTOM is not very detailed. It is useful to have a checklist of which aspects of these processes have to be taken into account, but there is no detailed modeling of the relationships and no methodology for applying the framework. Event correlation is only mentioned in the resource management, but it is not used at the service level. 4.3 Workflow Modeling for the Service-Oriented Event Correlation Fig. 2 shows a general service scenario which we will use as a basis for the workflow modeling for the service-oriented event correlation. We assume that the dependencies are already known (e.g., by using the approaches mentioned in Section 2). The provider offers different services which depend on other services called subservices (service dependency). Another kind of dependency exists between services/subservices and resources. These dependencies are called resource dependencies. These two kinds of dependencies are in most cases not used for the event correlation performed today. This resource-oriented event correlation deals only with relationships on the resource level (e.g., network topology). Figure 2: Different kinds of dependencies for the service-oriented event correlation The dependencies depicted in Figure 2 reflect a situation with no redundancy in the service provisioning. The relationships can be seen as AND relationships. In case of redundancy, if, e.g., a provider has 3 independent web servers, another modeling (see Figure 3) should be used (OR relationship). In such a case different relationships are possible. The service could be seen as working properly if one of the servers is working or a certain percentage of them is working. Figure 3: Modeling of no redundancy (a) and of redundancy (b) As both ITIL and eTOM contain no description of how event correlation and especially service-oriented event correlation should actually be performed, we propose the following design for such a workflow (see Fig. 4). The additional components which are not part of a device-oriented event correlation are depicted with a gray background. 
The workflow is divided into the phases fault detection, fault diagnosis, and fault recovery. In the fault detection phase resource events and service events can be generated from different sources. The resource events are issued during the use of a resource, e.g., via SNMP traps. The service events originate from customer trouble reports, which are reported via the Customer Service Management (see below) access point. In addition to these two passive ways to get the events, a provider can also perform active tests. These tests can either deal with the resources (resource active probing) or can assume the role of a virtual customer and test a service or one of its subservices by performing interactions at the service access points (service active probing). The fault diagnosis phase is composed of three event correlation steps. The first one is performed by the resource event correlator which can be regarded as the event correlator in today's commercial systems. Therefore, it deals only with resource events. The service event correlator does a correlation of the service events, while the aggregate event correlator finally performs a correlation of both resource and service events. If the correlation result of one of the correlation steps is to be improved, it is possible to go back to the fault detection phase and start the active probing to get additional events. These events can be helpful to confirm a correlation result or to reduce the list of possible root causes. After the event correlation an ordered list of possible root causes is checked by the resource management. When the root cause is found, the failure repair starts. This last step is performed in the fault recovery phase. Figure 4: Event correlation workflow The next subsections present different elements of the event correlation process. 4.4 Customer Service Management and Intelligent Assistant The Customer Service Management (CSM) access point was proposed by [13] as a single interface between customer and provider. Its functionality is to provide information to the customer about his subscribed services, e.g., reports about the fulfillment of agreed SLAs. It can also be used to subscribe services or to allow the customer to manage his services in a restricted way. Reports about problems with a service can be sent to the customer via CSM. The CSM is also contained in the MNM Service Model (see Section 5). To reduce the effort for the provider's first-level support, an Intelligent Assistant can be added to the CSM. The Intelligent Assistant structures the customer's information about a service problem. The information which is needed for a preclassification of the problem is gathered from a list of questions to the customer. The list is not static as the current question depends on the answers to prior questions or on the results of specific tests. A decision tree is used to structure the questions and tests. The tests allow the customer to gain controlled access to the provider's management. At the LRZ a customer of the E-Mail Service can, e.g., use the Intelligent Assistant to perform ping requests to the mail server. More complex requests could also be possible, e.g., requests for a combination of SNMP variables. 
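A minimal sketch of such a decision tree is given below, assuming a Unix-like ping command; the questions, the mail server name, and the preclassification labels are hypothetical and only illustrate how customer answers and controlled tests can be combined.

```python
import subprocess


def ping(host):
    """Controlled test exposed to the customer: a single ping to a provider host."""
    return subprocess.run(["ping", "-c", "1", host],
                          capture_output=True).returncode == 0


def classify_email_problem(answers):
    """Walk a small decision tree; `answers` holds the customer's replies so far."""
    if not answers.get("can_reach_other_sites", True):
        return "likely local connectivity problem on the customer side"
    if not ping("mail.example.org"):            # hypothetical mail server name
        return "mail server unreachable -> generate a service event"
    if answers.get("error_message") == "authentication failed":
        return "account/authentication problem -> route to user administration"
    return "unclassified -> escalate to second-level support"


if __name__ == "__main__":
    report = {"can_reach_other_sites": True, "error_message": "authentication failed"}
    print(classify_email_problem(report))
```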
4.5 Active Probing Active probing is useful for the provider to check his offered services. The aim is to identify and react to problems before a customer notices them. The probing can be done from a customer point of view or by testing the resources which are part of the services. It can also be useful to perform tests of subservices (own subservices or subservices offered by suppliers). Different schedules are possible to perform the active probing. The provider could select to test important services and resources in regular time intervals. Other tests could be initiated by a user who traverses the decision tree of the Intelligent Assistant, which includes active tests. Another possibility for the use of active probing is a request from the event correlator, if the current correlation result needs to be improved. The results of active probing are reported via service or resource events to the event correlator (or, if the test was demanded by the Intelligent Assistant, the result is also reported to it). While the events that are received from management tools and customers denote negative events (something does not work), the events from active probing should also contain positive events for a better discrimination. 4.6 Event Correlator The event correlation should not be performed by a single event correlator, but by using different steps. The reason for this is the different characteristics of the dependencies (see Fig. 1). On the resource level there are only relationships between resources (network topology, systems configuration). An example for this could be a switch linking separate LANs. If the switch is down, events are reported stating that other network components which are located behind the switch are also not reachable. When correlating these events it can be figured out that the switch is the likely error cause. At this stage, the integration of service events does not seem to be helpful. The result of this step is a list of resources which could be the problem's root cause. The resource event correlator is used to perform this step. In the service-oriented scenario there are also service and resource dependencies. As the next step in the event correlation process the service events should be correlated with each other using the service dependencies, because the service dependencies have no direct relationship to the resource level. The result of this step, which is performed by the service event correlator, is a list of services/subservices which could contain a failure in a resource. If, e.g., there are service events from customers indicating that two services do not work and both services depend on a common subservice, it seems more likely that the resource failure can be found inside the subservice. The output of this correlation is a list of services/subservices which could be affected by a failure in an associated resource. In the last step the aggregate event correlator matches the lists from the resource event correlator and the service event correlator to find the problem's possible root cause. This is done by using the resource dependencies. The event correlation techniques presented in Section 2 could be used to perform the correlation inside the three event correlators. If the dependencies can be found precisely, an RBR or codebook approach seems to be appropriate. A case database (CBR) could be used if there are cases which could not be covered by RBR or the codebook approach. These cases could then be used to improve the modeling in a way that RBR or the codebook approach can deal with them in future correlations.
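The three correlation steps can be sketched as a small pipeline (illustrative Python; the data structures, the simple counting heuristic, and all service and resource names are assumptions made for this example, not part of the proposed workflow itself): the resource event correlator works on resource events, the service event correlator on service dependencies, and the aggregate event correlator matches the two candidate lists via the resource dependencies of the suspected services.

```python
from collections import Counter
from typing import Dict, List, Set

# Assumed dependency data: which resources realize a service/subservice,
# and on which subservices a service depends.
resource_deps: Dict[str, Set[str]] = {
    "Web Hosting": {"load balancer", "www1", "www2"},
    "E-Mail":      {"mailserver"},
    "DNS":         {"dns1", "dns2"},
}
service_deps: Dict[str, Set[str]] = {
    "Web Hosting": {"DNS", "Storage"},
    "E-Mail":      {"DNS"},
}

def correlate_resource_events(resource_events: List[str]) -> List[str]:
    """Step 1: resource events only; here simply the distinct reported resources."""
    return sorted(set(resource_events))

def correlate_service_events(service_events: List[str]) -> List[str]:
    """Step 2: services named in reports plus subservices shared by several of them."""
    suspects = Counter()
    for svc in service_events:
        suspects[svc] += 1
        for sub in service_deps.get(svc, set()):
            suspects[sub] += 1
    # a subservice referenced by many failing services is the more likely location
    return [name for name, _ in suspects.most_common()]

def aggregate(resource_candidates: List[str], service_candidates: List[str]) -> List[str]:
    """Step 3: keep resources that appear in step 1 and realize a suspected service."""
    realizing: Set[str] = set()
    for svc in service_candidates:
        realizing |= resource_deps.get(svc, set())
    return [r for r in resource_candidates if r in realizing]

# Two customers report failing services; one resource event arrives from monitoring.
service_candidates = correlate_service_events(["Web Hosting", "E-Mail"])
resource_candidates = correlate_resource_events(["dns1"])
print(aggregate(resource_candidates, service_candidates))   # ['dns1']
```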
5. INFORMATION MODELING In this section we use a generic model for IT service management to derive the information necessary for the event correlation process. 5.1 MNM Service Model The MNM Service Model [5] is a generic model for IT service management. A distinction is made between customer side and provider side. The customer side contains the basic roles customer and user, while the provider side contains the role provider. The provider makes the service available to the customer side. The service as a whole is divided into usage, which is accessed by the role user, and management, which is used by the role customer. The model consists of two main views. The Service View (see Fig. 5) shows a common perspective of the service for customer and provider. Everything that is only important for the service realization is not contained in this view. For these details another perspective, the Realization View, is defined (see Fig. 6).
Figure 5: Service View
The Service View contains the service for which the functionality is defined for usage as well as for management. There are two access points (service access point and CSM access point) where user and customer can access the usage and management functionality, respectively. Associated with each service is a list of QoS parameters which have to be met by the service at the service access point. The QoS surveillance is performed by the management.
Figure 6: Realization View
In the Realization View the service implementation and the service management implementation are described in detail. For both there are provider-internal resources and subservices. For the service implementation a service logic uses internal resources (devices, knowledge, staff) and external subservices to provide the service. Analogously, the service management implementation includes a service management logic using basic management functionalities [8] and external management subservices. The MNM Service Model can be used for a similar modeling of the used subservices, i.e., the model can be applied recursively. As the service-oriented event correlation has to use the dependencies of a service on subservices and resources, the model is used in the following to derive the needed information for service events. 5.2 Information Modeling for Service Events Today's event correlation deals mainly with events which originate from resources.
Besides a resource identifier, these events contain information about the resource status, e.g., SNMP variables. To perform a service-oriented event correlation it is necessary to define events which are related to services. These events can be generated from the provider's own service surveillance or from customer reports at the CSM interface. They contain information about the problems with the agreed QoS. In our information modeling we define an event superclass which contains common attributes (e.g., time stamp). Resource event and service event inherit from this superclass. Derived from the MNM Service Model we define the information necessary for a service event. Service: As a service event shall represent the problems of a single service, a unique identification of the affected service is contained here. Event description: This field has to contain a description of the problem. Depending on the interactions at the service access point (Service View) a classification of the problem into different categories should be defined. It should also be possible to add an informal description of the problem. QoS parameters: For each service QoS parameters (Service View) are defined between the provider and the customer. This field represents a list of these QoS parameters and agreed service levels. The list can help the provider to set the priority of a problem with respect to the service levels agreed. Resource list: This list contains the resources (Realization View) which are needed to provide the service. This list is used by the provider to check whether one of these resources causes the problem. Subservice service event identification: In the service hierarchy (Realization View) the service, for which this service event has been issued, may depend on subservices. If there is a suspicion that one of these subservices causes the problem, child service events are issued from this service event for the subservices. In such a case this field contains links to the corresponding events. Other event identifications: In the event correlation process the service event can be correlated with other service events or with resource events. This field then contains links to other events which have been correlated to this service event. This is useful to, e.g., send a common message to all affected customers when their subscribed services are available again. Issuer's identification: This field can either contain an identification of the customer who reported the problem, an identification of a service provider's employee (in case the failure has been detected by the provider's own service active probing), or a link to a parent service event. The identification is needed if there are ambiguities in the service event or if the issuer should be informed (e.g., that the service is available again). The possible issuers refer to the basic roles (customer, provider) in the Service Model. Assignee: To keep track of the processing, the name and address of the provider's employee who is solving or has solved the problem is also noted. This is a specialization of the provider role in the Service Model. Dates: This field contains key dates in the processing of the service event such as initial date, problem identification date, and resolution date. These dates are important to keep track of how quickly the problems have been solved. Status: This field represents the service event's actual status (e.g., active, suspended, solved). Priority: The priority shows which importance the service event has from the provider's perspective. The importance is derived from the service agreement, especially the agreed QoS parameters (Service View). The fields dates, status, and other event identifications are not derived directly from the Service Model, but are necessary for the event correlation process.
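A direct way to carry this information model into an implementation is an event class hierarchy. The following sketch (illustrative Python; the field names merely mirror the list above and are not a normative schema) shows the common superclass and the derived resource and service events.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List

@dataclass
class Event:
    """Superclass with the attributes common to all events."""
    timestamp: datetime
    description: str

@dataclass
class ResourceEvent(Event):
    resource_id: str
    status_variables: Dict[str, str] = field(default_factory=dict)  # e.g., SNMP variables

@dataclass
class ServiceEvent(Event):
    service_id: str
    qos_parameters: Dict[str, str] = field(default_factory=dict)    # agreed service levels
    resource_list: List[str] = field(default_factory=list)          # resources providing the service
    subservice_event_ids: List[str] = field(default_factory=list)   # child events for subservices
    correlated_event_ids: List[str] = field(default_factory=list)   # links to other correlated events
    issuer: str = ""               # customer, provider employee, or parent event
    assignee: str = ""             # provider employee working on the problem
    dates: Dict[str, datetime] = field(default_factory=dict)        # identification, resolution, ...
    status: str = "active"         # active, suspended, solved
    priority: int = 0              # derived from the agreed QoS parameters

evt = ServiceEvent(timestamp=datetime.now(),
                   description="web site content not up-to-date",
                   service_id="Virtual WWW Server",
                   resource_list=["load balancer", "www1", "content caching server"],
                   issuer="customer:4711", priority=2)
print(evt.status, evt.service_id)
```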
6. APPLICATION OF SERVICE-ORIENTED EVENT CORRELATION FOR A WEB HOSTING SCENARIO The Leibniz Supercomputing Center is the joint computing center for the Munich universities and research institutions. It also runs the Munich Scientific Network and offers related services. One of these services is the Virtual WWW Server, a web hosting offer for smaller research institutions. It currently has approximately 200 customers. A subservice of the Virtual WWW Server is the Storage Service, which stores the static and dynamic web pages and uses caching techniques for fast access. Other subservices are the DNS and IP service. When a user accesses a hosted web site via one of the LRZ's Virtual Private Networks, the VPN service is also used. The resources of the Virtual WWW Server include a load balancer and five redundant servers. The network connections are also part of the resources, as is the Apache web server application running on the servers. Figure 7 shows the dependencies of the Virtual WWW Server.
Figure 7: Dependencies of the Virtual WWW Server
6.1 Customer Service Management and Intelligent Assistant The Intelligent Assistant that is available at the Leibniz Supercomputing Center can currently be used for connectivity or performance problems or problems with the LRZ E-Mail Service. A selection of possible customer problem reports for the Virtual WWW Server is given in the following:
• The hosted web site is not reachable.
• The web site access is (too) slow.
• The web site contains outdated content.
• The transfer of new content to the LRZ does not change the provided content.
• The web site looks strange (e.g., caused by problems with the HTML version).
These customer reports have to be mapped onto failures in resources. For an unreachable web site, e.g., different root causes are possible, such as a DNS problem, a connectivity problem, or a wrong configuration of the load balancer. 6.2 Active Probing In general, active probing can be used for services or resources. For the service active probing of the Virtual WWW Server a virtual customer could be installed. This customer performs typical HTTP requests to hosted web sites and compares the answers with the known content. To check the up-to-dateness of a test web site, the content could contain a time stamp. The service active probing could also include the testing of subservices, e.g., sending requests to the DNS. The resource active probing performs tests of the resources. Examples are connectivity tests, requests to application processes, and tests of available disk space.
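A minimal sketch of such a virtual customer (illustrative Python using only the standard library; the URL, the expected marker string, the time-stamp convention, and the freshness threshold are invented for this example) could look as follows. A real probe would report its outcome as a positive or negative service event to the event correlator.

```python
import time
import urllib.request

def probe_hosted_site(url: str, expected_marker: str, max_age_seconds: int = 900) -> dict:
    """Fetch a test page, check for known content and an embedded time stamp."""
    result = {"url": url, "reachable": False, "content_ok": False, "fresh": False}
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            body = response.read().decode("utf-8", errors="replace")
    except OSError:
        return result                        # negative result: site not reachable
    result["reachable"] = True
    result["content_ok"] = expected_marker in body
    # convention assumed here: the test page embeds a token "timestamp=<unix seconds>"
    for token in body.split():
        if token.startswith("timestamp="):
            age = time.time() - float(token.split("=", 1)[1])
            result["fresh"] = age <= max_age_seconds
    return result

# Hypothetical test page of the virtual customer.
print(probe_hosted_site("http://www.example.org/probe.html", "LRZ-probe-page"))
```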
6.3 Event Correlation for the Virtual WWW Server Figure 8 shows the example processing.
Figure 8: Example processing of a customer report
At first, a customer who takes a look at his hosted web site reports that the content that he had changed is not displayed correctly. This report is transferred to the service management via the CSM interface. An Intelligent Assistant could be used to structure the customer report. The service management translates the customer report into a service event. Independently of the customer report, the service provider's own service active probing tries to change the content of a test web site. Because this is not possible, a service event is issued. Meanwhile, a resource event has been reported to the event correlator, because an access of the content caching server to one of the WWW servers failed. As there are no other events at the moment, the resource event correlation cannot correlate this event to other events. At this stage it would be possible that the event correlator asks the resource management to perform an active probing of related resources. Both service events are now transferred to the service event correlator and are correlated. From the correlation of these events it seems likely that either the WWW server itself or the link to the WWW server is the problem's root cause. A wrong web site update procedure inside the content caching server seems to be less likely, as this would only explain the customer report and not the service active probing result. At this stage a service active probing could be started, but this does not seem to be useful as this correlation only deals with the Web Hosting Service and its resources and not with other services. After the separate correlation of both resource and service events, which can be performed in parallel, the aggregate event correlator is used to correlate both types of events. The additional resource event makes it seem much more likely that the problems are caused by a broken link to the WWW server or by the WWW server itself and not by the content caching server. In this case the event correlator asks the resource management to check the link and the WWW server. The decision between these two likely error causes cannot be further automated here. Later, the resource management finds out that a broken link is the failure's root cause. It informs the event correlator about this, and it can be determined that this explains all previous events. Therefore, the event correlation can be stopped at this point. Depending on the provider's customer relationship management, the finding of the root cause and an expected repair time could be reported to the customers. After the link has been repaired, it is possible to report this event via the CSM interface. Even though many details of this event correlation process could also be performed differently, the example showed an important advantage of the service-oriented event correlation. The relationship between the service provisioning and the provider's resources is explicitly modeled. This allows a mapping of the customer report onto the provider-internal resources. 6.4 Event Correlation for Different Services If a provider like the LRZ offers several services, the service-oriented event correlation can be used to reveal relationships that are not obvious in the first place. If the LRZ E-Mail Service and its events are viewed in relationship with the events for the Virtual WWW Server, it is possible to identify failures in common subservices and resources. Both services depend on the DNS, which means that customer reports like "I cannot retrieve new e-mail" and "The web site of my research institute is not available" can have a common cause, e.g., that the DNS does not work properly.
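The cross-service part of this example reduces to a set intersection over the service dependencies; a small sketch is shown below (illustrative Python; the dependency table is invented for this example and deliberately simplified to a single shared subservice).

```python
from typing import Dict, Set

# Assumed service dependencies of two LRZ services (simplified).
service_deps: Dict[str, Set[str]] = {
    "Virtual WWW Server": {"Storage", "DNS", "IP"},
    "E-Mail Service":     {"DNS", "Mailbox Storage"},
}

def common_subservices(failing_services: Set[str]) -> Set[str]:
    """Subservices shared by all services for which customer reports arrived."""
    deps = [service_deps[s] for s in failing_services]
    return set.intersection(*deps) if deps else set()

# Reports for both services point to their only shared subservice as a candidate cause.
print(common_subservices({"Virtual WWW Server", "E-Mail Service"}))   # {'DNS'}
```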
7. CONCLUSION AND FUTURE WORK In our paper we showed the need for a service-oriented event correlation. For an IT service provider this new kind of event correlation makes it possible to automatically map problems with the current service quality onto resource failures. This helps to find the failure's root cause earlier and to reduce costs for SLA violations. In addition, customer reports can be linked together and therefore the processing effort can be reduced. To receive these benefits we presented our approach for performing the service-oriented event correlation as well as a modeling of the necessary correlation information. In the future we are going to apply our workflow and information modeling to services offered by the Leibniz Supercomputing Center, going further into detail. Several issues have not been treated in detail so far, e.g., the consequences for the service-oriented event correlation if a subservice is offered by another provider. If a service does not perform properly, it has to be determined whether this is caused by the provider himself or by the subservice. In the latter case appropriate information has to be exchanged between the providers via the CSM interface. Another issue is the use of active probing in the event correlation process, which can improve the result, but can also lead to a correlation delay. Another important point is the precise definition of dependency, which has also been left out by many other publications. To avoid having too many dependencies in a certain situation, one could try to check whether the dependencies currently exist. In the case of a download from a web site there is only a dependency on the DNS subservice at the beginning, but after the address is resolved a download failure is unlikely to have been caused by the DNS. Another possibility to reduce the dependencies is to divide a service into its possible user interactions (e.g., an e-mail service into transactions like get mail, send mail, etc.) and to define the dependencies for each user interaction. Acknowledgments The authors wish to thank the members of the Munich Network Management (MNM) Team for helpful discussions and valuable comments on previous versions of the paper. The MNM Team, directed by Prof. Dr. Heinz-Gerd Hegering, is a group of researchers of the Munich Universities and the Leibniz Supercomputing Center of the Bavarian Academy of Sciences. Its web server is located at wwwmnmteam.informatik.uni-muenchen.de. 8. REFERENCES [1] K. Appleby, G. Goldszmidt, and M. Steinder. Yemanja - A Layered Event Correlation Engine for Multi-domain Server Farms. In Proceedings of the Seventh IFIP/IEEE International Symposium on Integrated Network Management, pages 329-344. IFIP/IEEE, May 2001. [2] Spectrum, Aprisma Corporation. http://www.aprisma.com. [3] C. Ensel. New Approach for Automated Generation of Service Dependency Models. In Network Management as a Strategy for Evolution and Development; Second Latin American Network Operation and Management Symposium (LANOMS 2001). IEEE, August 2001. [4] C. Ensel and A. Keller.
An Approach for Managing Service Dependencies with XML and the Resource Description Framework. Journal of Network and Systems Management, 10(2), June 2002. [5] M. Garschhammer, R. Hauck, H.-G. Hegering, B. Kempter, M. Langer, M. Nerb, I. Radisic, H. Roelle, and H. Schmidt. Towards generic Service Management Concepts - A Service Model Based Approach. In Proceedings of the Seventh IFIP/IEEE International Symposium on Integrated Network Management, pages 719-732. IFIP/IEEE, May 2001. [6] B. Gruschke. Integrated Event Management: Event Correlation using Dependency Graphs. In Proceedings of the 9th IFIP/IEEE International Workshop on Distributed Systems: Operations & Management (DSOM 98). IEEE/IFIP, October 1998. [7] M. Gupta, A. Neogi, M. Agarwal, and G. Kar. Discovering Dynamic Dependencies in Enterprise Environments for Problem Determination. In Proceedings of the 14th IFIP/IEEE Workshop on Distributed Systems: Operations and Management. IFIP/IEEE, October 2003. [8] H.-G. Hegering, S. Abeck, and B. Neumair. Integrated Management of Networked Systems - Concepts, Architectures and their Operational Application. Morgan Kaufmann Publishers, 1999. [9] IT Infrastructure Library, Office of Government Commerce and IT Service Management Forum. http://www.itil.co.uk. [10] G. Jakobson and M. Weissman. Alarm Correlation. IEEE Network, 7(6), November 1993. [11] G. Jakobson and M. Weissman. Real-time Telecommunication Network Management: Extending Event Correlation with Temporal Constraints. In Proceedings of the Fourth IEEE/IFIP International Symposium on Integrated Network Management, pages 290-301. IEEE/IFIP, May 1995. [12] S. Kliger, S. Yemini, Y. Yemini, D. Ohsie, and S. Stolfo. A Coding Approach to Event Correlation. In Proceedings of the Fourth IFIP/IEEE International Symposium on Integrated Network Management, pages 266-277. IFIP/IEEE, May 1995. [13] M. Langer, S. Loidl, and M. Nerb. Customer Service Management: A More Transparent View To Your Subscribed Services. In Proceedings of the 9th IFIP/IEEE International Workshop on Distributed Systems: Operations & Management (DSOM 98), Newark, DE, USA, October 1998. [14] L. Lewis. A Case-based Reasoning Approach for the Resolution of Faults in Communication Networks. In Proceedings of the Third IFIP/IEEE International Symposium on Integrated Network Management. IFIP/IEEE, 1993. [15] L. Lewis. Service Level Management for Enterprise Networks. Artech House, Inc., 1999. [16] NETeXPERT, Agilent Technologies. http://www.agilent.com/comms/OSS. [17] InCharge, Smarts Corporation. http://www.smarts.com. [18] Enhanced Telecom Operations Map, TeleManagement Forum. http://www.tmforum.org. [19] Verizon Communications. http://www.verizon.com. [20] H. Wietgrefe, K.-D. Tuchs, K. Jobmann, G. Carls, P. Froelich, W. Nejdl, and S. Steinfeld. Using Neural Networks for Alarm Correlation in Cellular Phone Networks. In International Workshop on Applications of Neural Networks to Telecommunications (IWANNT), May 1997. [21] S. Yemini, S. Kliger, E. Mozes, Y. Yemini, and D. Ohsie. High Speed and Robust Event Correlation. IEEE Communications Magazine, 34(5), May 1996.
Assured Service Quality by Improved Fault Management Service-Oriented Event Correlation ABSTRACT The paradigm shift from device-oriented to service-oriented management has also implications to the area of event correlation. Today's event correlation mainly addresses the correlation of events as reported from management tools. However, a correlation of user trouble reports concerning services should also be performed. This is necessary to improve the resolution time and to reduce the effort for keeping the service agreements. We refer to such a type of correlation as service-oriented event correlation. The necessity to use this kind of event correlation is motivated in the paper. To introduce service-oriented event correlation for an IT service provider, an appropriate modeling of the correlation workflow and of the information is necessary. Therefore, we examine the process management frameworks IT Infrastructure Library (ITIL) and enhanced Telecom Operations Map (eTOM) for their contribution to the workflow modeling in this area. The different kinds of dependencies that we find in our general scenario are then used to develop a workflow for the service-oriented event correlation. The MNM Service Model, which is a generic model for IT service management proposed by the Munich Network Management (MNM) Team, is used to derive an appropriate information modeling. An example scenario, the Web Hosting Service of the Leibniz Supercomputing Center (LRZ), is used to demonstrate the application of service-oriented event correlation. 1. INTRODUCTION In huge networks a single fault can cause a burst of failure events. To handle the flood of events and to find the root cause of a fault, event correlation approaches like rule-based reasoning, case-based reasoning or the codebook approach have been developed. The main idea of correlation is to condense and structure events to retrieve meaningful information. Until now, these approaches address primarily the correlation of events as reported from management tools or devices. Therefore, we call them device-oriented. In this paper we define a service as a set of functions which are offered by a provider to a customer at a customer provider interface. The definition of a "service" is therefore more general than the definition of a "Web Service", but a "Web Service" is included in this "service" definition. As a consequence, the results are applicable for Web Services as well as for other kinds of services. A service level agreement (SLA) is defined as a contract between customer and provider about guaranteed service performance. As in today's IT environments the offering of such services with an agreed service quality becomes more and more important, this change also affects the event correlation. It has become a necessity for providers to offer such guarantees for a differentiation from other providers. To avoid SLA violations it is especially important for service providers to identify the root cause of a fault in a very short time or even act proactively. The latter refers to the case of recognizing the influence of a device breakdown on the offered services. As in this scenario the knowledge about services and their SLAs is used we call it service-oriented. It can be addressed from two directions. Top-down perspective: Several customers report a problem in a certain time interval. Are these trouble reports correlated? How to identify a resource as being the problem's root cause? Bottom-up perspective: A device (e.g., router, server) breaks down. 
Which services, and especially which customers, are affected by this fault? The rest of the paper is organized as follows. In Section 2 we describe how event correlation is performed today and present a selection of the state-of-the-art event correlation techniques. Section 3 describes the motivation for serviceoriented event correlation and its benefits. After having motivated the need for such type of correlation we use two well-known IT service management models to gain requirements for an appropriate workflow modeling and present our proposal for it (see Section 4). In Section 5 we present our information modeling which is derived from the MNM Service Model. An application of the approach for a web hosting scenario is performed in Section 6. The last section concludes the paper and presents future work. 2. TODAY'S EVENT CORRELATION TECHNIQUES In [11] the task of event correlation is defined as "a conceptual interpretation procedure in the sense that a new meaning is assigned to a set of events that happen in a certain time interval". We can distinguish between three aspects for event correlation. Functional aspect: The correlation focuses on functions which are provided by each network element. It is also regarded which other functions are used to provide a specific function. Topology aspect: The correlation takes into account how the network elements are connected to each other and how they interact. Time aspect: When explicitly regarding time constraints, a start and end time has to be defined for each event. The correlation can use time relationships between the events to perform the correlation. This aspect is only mentioned in some papers [11], but it has to be treated in an event correlation system. In the event correlation it is also important to distinguish between the knowledge acquisition/representation and the correlation algorithm. Examples of approaches to knowledge acquisition/representation are Gruschke's dependency graphs [6] and Ensel's dependency detection by neural networks [3]. It is also possible to find the dependencies by analyzing interactions [7]. In addition, there is an approach to manage service dependencies with XML and to define a resource description framework [4]. To get an overview about device-oriented event correlation a selection of several event correlation techniques being used for this kind of correlation is presented. Model-based reasoning: Model-based reasoning (MBR) [15, 10, 20] represents a system by modeling each of its components. A model can either represent a physical entity or a logical entity (e.g., LAN, WAN, domain, service, business process). The models for physical entities are called functional model, while the models for all logical entities are called logical model. A description of each model contains three categories of information: attributes, relations to other models, and behavior. The event correlation is a result of the collaboration among models. As all components of a network are represented with their behavior in the model, it is possible to perform simulations to predict how the whole network will behave. A comparison in [20] showed that a large MBR system is not in all cases easy to maintain. It can be difficult to appropriately model the behavior for all components and their interactions correctly and completely. An example system for MBR is NetExpert [16] from OSI which is a hybrid MBR/RBR system (in 2000 OSI was acquired by Agilent Technologies). 
Rule-based reasoning: Rule-based reasoning (RBR) [15, 10] uses a set of rules for event correlation. The rules have the form conclusion if condition. The condition uses received events and information about the system, while the conclusion contains actions which can either lead to system changes or use system parameters to choose the next rule. An advantage of the approach is that the rules are more or less human-readable and therefore their effect is intuitive. The correlation has proved to be very fast in practice by using the RETE algorithm. In the literature [20, 1] it is claimed that RBR systems are classified as relatively inflexible. Frequent changes in the modeled IT environment would lead to many rule updates. These changes would have to be performed by experts as no automation has currently been established. In some systems information about the network topology which is needed for the event correlation is not used explicitly, but is encoded into the rules. This intransparent usage would make rule updates for topology changes quite difficult. The system brittleness would also be a problem for RBR systems. It means that the system fails if an unknown case occurs, because the case cannot be mapped onto similar cases. The output of RBR systems would also be difficult to predict, because of unforeseen rule interactions in a large rule set. According to [15] an RBR system is only appropriate if the domain for which it is used is small, nonchanging, and well understood. The GTE IMPACT system [11] is an example of a rulebased system. It also uses MBR (GTE has merged with Bell Atlantic in 1998 and is now called Verizon [19]). Codebook approach: The codebook approach [12, 21] has similarities to RBR, but takes a further step and encodes the rules into a correlation matrix. The approach starts using a dependency graph with two kinds of nodes for the modeling. The first kind of nodes are the faults (denoted as problems in the cited papers) which have to be detected, while the second kind of nodes are observable events (symptoms in the papers) which are caused by the faults or other events. The dependencies between the nodes are denoted as directed edges. It is possible to choose weights for the edges, e.g., a weight for the probability that fault/event A causes event B. Another possible weighting could indicate time dependencies. There are several possibilities to reduce the initial graph. If, e.g., a cyclic dependency of events exists and there are no probabilities for the cycles' edges, all events can be treated as one event. After a final input graph is chosen, the graph is transformed into a correlation matrix where the columns contain the faults and the rows contain the events. If there is a dependency in the graph, the weight of the corresponding edge is put into the according matrix cell. In case no weights are used, the matrix cells get the values 1 for dependency and 0 otherwise. Afterwards, a simplification can be done, where events which do not help to discriminate faults are deleted. There is a trade-off between the minimization of the matrix and the robustness of the results. If the matrix is minimized as much as possible, some faults can only be distinguished by a single event. If this event cannot be reliably detected, the event correlation system cannot discriminate between the two faults. A measure how many event observation errors can be compensated by the system is the Hamming distance. 
The number of rows (events) that can be deleted from the matrix can differ very much depending on the relationships [15]. The codebook approach has the advantage that it uses long-term experience with graphs and coding. This experience is used to minimize the dependency graph and to select an optimal group of events with respect to processing time and robustness against noise. A disadvantage of the approach could be that similar to RBR frequent changes in the environment make it necessary to frequently edit the input graph. SMARTS InCharge [12, 17] is an example of such a correlation system. Case-based reasoning: In contrast to other approaches case-based reasoning (CBR) [14, 15] systems do not use any knowledge about the system structure. The knowledge base saves cases with their values for system parameters and successful recovery actions for these cases. The recovery actions are not performed by the CBR system in the first place, but in most cases by a human operator. If a new case appears, the CBR system compares the current system parameters with the system parameters in prior cases and tries to find a similar one. To identify such a match it has to be defined for which parameters the cases can differ or have to be the same. If a match is found, a learned action can be performed automatically or the operator can be informed with a recovery proposal. An advantage of this approach is that the ability to learn is an integral part of it which is important for rapid changing environments. There are also difficulties when applying the approach [15]. The fields which are used to find a similar case and their importance have to be defined appropriately. If there is a match with a similar case, an adaptation of the previous solution to the current one has to be found. An example system for CBR is SpectroRx from Cabletron Systems. The part of Cabletron that developed SpectroRx became an independent software company in 2002 and is now called Aprisma Management Technologies [2]. In this section four event correlation approaches were presented which have evolved into commercial event correlation systems. The correlation approaches have different focuses. MBR mainly deals with the knowledge acquisition and representation, while RBR and the codebook approach propose a correlation algorithm. The focus of CBR is its ability to learn from prior cases. 3. MOTIVATION OF SERVICE-ORIENTED EVENT CORRELATION Fig. 1 shows a general service scenario upon which we will discuss the importance of a service-oriented correlation. Several services like SSH, a web hosting service, or a video conference service are offered by a provider to its customers at the customer provider interface. A customer can allow several users to use a subscribed service. The quality and cost issues of the subscribed services between a customer and a provider are agreed in SLAs. On the provider side the services use subservices for their provisioning. In case of the services mentioned above such subservices are DNS, proxy service, and IP service. Both services and subservices depend on resources upon which they are provisioned. As displayed in the figure a service can depend on more than one resource and a resource can be used by one or more services. Figure 1: Scenario To get a common understanding, we distinguish between different types of events: Resource event: We use the term resource event for network events and system events. 
A network event refers to events like node up/down or link up/down whereas system events refer to events like server down or authentication failure. Service event: A service event indicates that a service does not work properly. A trouble ticket which is generated from a customer report is a kind of such an event. Other service events can be generated by the provider of a service, if the provider himself detects a service malfunction. In such a scenario the provider may receive service events from customers which indicate that SSH, web hosting service, and video conference service are not available. When referring to the service hierarchy, the provider can conclude in such a case that all services depend on DNS. Therefore, it seems more likely that a common resource which is necessary for this service does not work properly or is not available than to assume three independent service failures. In contrast to a resource-oriented perspective where all of the service events would have to be processed separately, the service events can be linked together. Their information can be aggregated and processed only once. If, e.g., the problem is solved, one common message to the customers that their services are available again is generated and distributed by using the list of linked service events. This is certainly a simplified example. However, it shows the general principle of identifying the common subservices and common resources of different services. It is important to note that the service-oriented perspective is needed to integrate service aspects, especially QoS aspects. An example of such an aspect is that a fault does not lead to a total failure of a service, but its QoS parameters, respectively agreed service levels, at the customer-provider interface might not be met. A degradation in service quality which is caused by high traffic load on the backbone is another example. In the resource-oriented perspective it would be possible to define events which indicate that there is a link usage higher than a threshold, but no mechanism has currently been established to find out which services are affected and whether a QoS violation occurs. To summarize, the reasons for the necessity of a serviceoriented event correlation are the following: Keeping of SLAs (top-down perspective): The time interval between the first symptom (recognized either by provider, network management tools, or customers) that a service does not perform properly and the verified fault repair needs to be minimized. This is especially needed with respect to SLAs as such agreements often contain guarantees like a mean time to repair. Effort reduction (top-down perspective): If several user trouble reports are symptoms of the same fault, fault processing should be performed only once and not several times. If the fault has been repaired, the affected customers should be informed about this automatically. Impact analysis (bottom-up perspective): In case of a fault in a resource, its influence on the associated services and affected customers can be determined. This analysis can be performed for short term (when there is currently a resource failure) or long term (e.g., network optimization) considerations. 4. WORKFLOW MODELING In the following we examine the established IT process management frameworks IT Infrastructure Library (ITIL) and enhanced Telecom Operations Map (eTOM). The aim is find out where event correlation can be found in the process structure and how detailed the frameworks currently are. 
After that we present our solution for a workflow modeling for the service-oriented event correlation. 4.1 IT Infrastructure Library (ITIL) The British Office of Government Commerce (OGC) and the IT Service Management Forum (itSMF) [9] provide a collection of best practices for IT processes in the area of IT service management which is called ITIL. The service management is described by 11 modules which are grouped into Service Support Set (provider internal processes) and Service Delivery Set (processes at the customer-provider interface). Each module describes processes, functions, roles, and responsibilities as well as necessary databases and interfaces. In general, ITIL describes contents, processes, and aims at a high abstraction level and contains no information about management architectures and tools. The fault management is divided into Incident Management process and Problem Management process. Incident Management: The Incident Management contains the service desk as interface to customers (e.g., receives reports about service problems). In case of severe errors structured queries are transferred to the Problem Management. Problem Management: The Problem Management's tasks are to solve problems, take care of keeping priorities, minimize the reoccurrence of problems, and to provide management information. After receiving requests from the Incident Management, the problem has to be identified and information about necessary countermeasures is transferred to the Change Management. The ITIL processes describe only what has to be done, but contain no information how this can be actually performed. As a consequence, event correlation is not part of the modeling. The ITIL incidents could be regarded as input for the service-oriented event correlation, while the output could be used as a query to the ITIL Problem Management. 4.2 Enhanced Telecom Operations Map (eTOM) The TeleManagement Forum (TMF) [18] is an international non-profit organization from service providers and suppliers in the area of telecommunications services. Similar to ITIL a process-oriented framework has been developed at first, but the framework was designed for a narrower focus, i.e., the market of information and communications service providers. A horizontal grouping into processes for customer care, service development & operations, network & systems management, and partner/supplier is performed. The vertical grouping (fulfillment, assurance, billing) reflects the service life cycle. In the area of fault management three processes have been defined along the horizontal process grouping. Problem Handling: The purpose of this process is to receive trouble reports from customers and to solve them by using the Service Problem Management. The aim is also to keep the customer informed about the current status of the trouble report processing as well as about the general network status (e.g., planned maintenance). It is also a task of this process to inform the QoS/SLA management about the impact of current errors on the SLAs. Service Problem Management: In this process reports about customer-affecting service failures are received and transformed. Their root causes are identified and a problem solution or a temporary workaround is established. The task of the "Diagnose Problem" subprocess is to find the root cause of the problem by performing appropriate tests. Nothing is said how this can be done (e.g., no event correlation is mentioned). 
Resource Trouble Management: A subprocess of the Resource Trouble Management is responsible for resource failure event analysis, alarm correlation & filtering, and failure event detection & reporting. Another subprocess is used to execute different tests to find a resource failure. There is also another subprocess which keeps track about the status of the trouble report processing. This subprocess is similar to the functionality of a trouble ticket system. The process description in eTOM is not very detailed. It is useful to have a check list which aspects for these processes have to be taken into account, but there is no detailed modeling of the relationships and no methodology for applying the framework. Event correlation is only mentioned in the resource management, but it is not used in the service level. 4.3 Workflow Modeling for the Service-Oriented Event Correlation Fig. 2 shows a general service scenario which we will use as basis for the workflow modeling for the service-oriented event correlation. We assume that the dependencies are already known (e.g., by using the approaches mentioned in Section 2). The provider offers different services which depend on other services called subservices (service dependency). Another kind of dependency exists between services/subservices and resources. These dependencies are called resource dependencies. These two kinds of dependencies are in most cases not used for the event correlation performed today. This resource-oriented event correlation deals only with relationships on the resource level (e.g., network topology). Figure 2: Different kinds of dependencies for the service-oriented event correlation The dependencies depicted in Figure 2 reflect a situation with no redundancy in the service provisioning. The relationships can be seen as AND relationships. In case of redundancy, if e.g., a provider has 3 independent web servers, another modeling (see Figure 3) should be used (OR relationship). In such a case different relationships are possible. The service could be seen as working properly if one of the servers is working or a certain percentage of them is working. Figure 3: Modeling of no redundancy (a) and of redundancy (b) As both ITIL and eTOM contain no description how event correlation and especially service-oriented event correlation should actually be performed, we propose the following design for such a workflow (see Fig. 4). The additional components which are not part of a device-oriented event correlation are depicted with a gray background. The workflow is divided into the phases fault detection, fault diagnosis, and fault recovery. In the fault detection phase resource events and service events can be generated from different sources. The resource events are issued during the use of a resource, e.g., via SNMP traps. The service events are originated from customer trouble reports, which are reported via the Customer Service Management (see below) access point. In addition to these two "passive" ways to get the events, a provider can also perform active tests. These tests can either deal with the resources (resource active probing) or can assume the role of a virtual customer and test a service or one of its subservices by performing interactions at the service access points (service active probing). The fault diagnosis phase is composed of three event correlation steps. The first one is performed by the resource event correlator which can be regarded as the event correlator in today's commercial systems. 
Therefore, it deals only with resource events. The service event correlator does a correlation of the service events, while the aggregate event correlator finally performs a correlation of both resource and service events. If the correlation result in one of the correlation steps shall be improved, it is possible to go back to the fault detection phase and start the active probing to get additional events. These events can be helpful to confirm a correlation result or to reduce the list of possible root causes. After the event correlation an ordered list of possible root causes is checked by the resource management. When the root cause is found, the failure repair starts. This last step is performed in the fault recovery phase. The next subsections present different elements of the event correlation process. 4.4 Customer Service Management and Intelligent Assistant The Customer Service Management (CSM) access point was proposed by [13] as a single interface between customer Figure 4: Event correlation workflow and provider. Its functionality is to provide information to the customer about his subscribed services, e.g., reports about the fulfillment of agreed SLAs. It can also be used to subscribe services or to allow the customer to manage his services in a restricted way. Reports about problems with a service can be sent to the customer via CSM. The CSM is also contained in the MNM Service Model (see Section 5). To reduce the effort for the provider's first level support, an Intelligent Assistant can be added to the CSM. The Intelligent Assistant structures the customer's information about a service problem. The information which is needed for a preclassification of the problem is gathered from a list of questions to the customer. The list is not static as the current question depends on the answers to prior questions or from the result of specific tests. A decision tree is used to structure the questions and tests. The tests allow the customer to gain a controlled access to the provider's management. At the LRZ a customer of the E-Mail Service can, e.g., use the Intelligent Assistant to perform "ping" requests to the mail server. But also more complex requests could be possible, e.g., requests of a combination of SNMP variables. 4.5 Active Probing Active probing is useful for the provider to check his offered services. The aim is to identify and react to problems before a customer notices them. The probing can be done from a customer point of view or by testing the resources which are part of the services. It can also be useful to perform tests of subservices (own subservices or subservices offered by suppliers). Different schedules are possible to perform the active probing. The provider could select to test important services and resources in regular time intervals. Other tests could be initiated by a user who traverses the decision tree of the Intelligent Assistant including active tests. Another possibility for the use of active probing is a request from the event correlator, if the current correlation result needs to be improved. The results of active probing are reported via service or resource events to the event correlator (or if the test was demanded by the Intelligent Assistant the result is reported to it, too). While the events that are received from management tools and customers denote negative events (something does not work), the events from active probing should also contain positive events for a better discrimination. 
4.6 Event Correlator The event correlation should not be performed by a single event correlator, but by using different steps. The reason for this are the different characteristics of the dependencies (see Fig. 1). On the resource level there are only relationships between resources (network topology, systems configuration). An example for this could be a switch linking separate LANs. If the switch is down, events are reported that other network components which are located behind the switch are also not reachable. When correlating these events it can be figured out that the switch is the likely error cause. At this stage, the integration of service events does not seem to be helpful. The result of this step is a list of resources which could be the problem's root cause. The resource event correlator is used to perform this step. In the service-oriented scenario there are also service and resource dependencies. As next step in the event correlation process the service events should be correlated with each other using the service dependencies, because the service dependencies have no direct relationship to the resource level. The result of this step, which is performed by the service event correlator, is a list of services/subservices which could contain a failure in a resource. If, e.g., there are service events from customers that two services do not work and both services depend on a common subservice, it seems more likely that the resource failure can be found inside the subservice. The output of this correlation is a list of services/subservices which could be affected by a failure in an associated resource. In the last step the aggregate event correlator matches the lists from resource event correlator and service event correlator to find the problem's possible root cause. This is done by using the resource dependencies. The event correlation techniques presented in Section 2 could be used to perform the correlation inside the three event correlators. If the dependencies can be found precisely, an RBR or codebook approach seems to be appropriate. A case database (CBR) could be used if there are cases which could not be covered by RBR or the codebook approach. These cases could then be used to improve the modeling in a way that RBR or the codebook approach can deal with them in future correlations. 5. INFORMATION MODELING In this section we use a generic model for IT service management to derive the information necessary for the event correlation process. 5.1 MNM Service Model The MNM Service Model [5] is a generic model for IT service management. A distinction is made between customer side and provider side. The customer side contains the basic roles customer and user, while the provider side contains the role provider. The provider makes the service available to the customer side. The service as a whole is divided into usage which is accessed by the role user and management which is used by the role customer. The model consists of two main views. The Service View (see Fig. 5) shows a common perspective of the service for customer and provider. Everything that is only important for the service realization is not contained in this view. For these details another perspective, the Realization View, is defined (see Fig. 6). Figure 5: Service View The Service View contains the service for which the functionality is defined for usage as well as for management. 
There are two access points (service access point and CSM access point) where user and customer can access the usage and management functionality, respectively. Associated to each service is a list of QoS parameters which have to be met by the service at the service access point. The QoS surveillance is performed by the management. side independent side independent Figure 6: Realization View In the Realization View the service implementation and the service management implementation are described in detail. For both there are provider-internal resources and subservices. For the service implementation a service logic uses internal resources (devices, knowledge, staff) and external subservices to provide the service. Analogous, the service management implementation includes a service management logic using basic management functionalities [8] and external management subservices. The MNM Service Model can be used for a similar modeling of the used subservices, i.e., the model can be applied recursively. As the service-oriented event correlation has to use dependencies of a service from subservices and resources, the model is used in the following to derive the needed information for service events. 5.2 Information Modeling for Service Events Today's event correlation deals mainly with events which are originated from resources. Beside a resource identifier these events contain information about the resource status, e.g., SNMP variables. To perform a service-oriented event correlation it is necessary to define events which are related to services. These events can be generated from the provider's own service surveillance or from customer reports at the CSM interface. They contain information about the problems with the agreed QoS. In our information modeling we define an event superclass which contains common attributes (e.g., time stamp). Resource event and service event inherit from this superclass. Derived from the MNM Service Model we define the information necessary for a service event. Service: As a service event shall represent the problems of a single service, a unique identification of the affected service is contained here. Event description: This field has to contain a description of the problem. Depending on the interactions at the service access point (Service View) a classification of the problem into different categories should be defined. It should also be possible to add an informal description of the problem. QoS parameters: For each service QoS parameters (Service View) are defined between the provider and the customer. This field represents a list of these QoS parameters and agreed service levels. The list can help the provider to set the priority of a problem with respect to the service levels agreed. Resource list: This list contains the resources (Realization View) which are needed to provide the service. This list is used by the provider to check if one of these resources causes the problem. Subservice service event identification: In the service hierarchy (Realization View) the service, for which this service event has been issued, may depend on subservices. If there is a suspicion that one of these subservices causes the problem, child service events are issued from this service event for the subservices. In such a case this field contains links to the corresponding events. Other event identifications: In the event correlation process the service event can be correlated with other service events or with resource events. 
This field then contains links to other events which have been correlated with this service event. This is useful, e.g., to send a common message to all affected customers when their subscribed services are available again.
Issuer's identification: This field can contain an identification of the customer who reported the problem, an identification of a service provider's employee (in case the failure has been detected by the provider's own service active probing), or a link to a parent service event. The identification is needed if there are ambiguities in the service event or if the issuer should be informed (e.g., that the service is available again). The possible issuers correspond to the basic roles (customer, provider) in the Service Model.
Assignee: To keep track of the processing, the name and address of the provider's employee who is solving or has solved the problem is also noted. This is a specialization of the provider role in the Service Model.
Dates: This field contains key dates in the processing of the service event, such as the initial date, problem identification date, and resolution date. These dates are important to keep track of how quickly problems have been solved.
Status: This field represents the service event's current status (e.g., active, suspended, solved).
Priority: The priority indicates the importance of the service event from the provider's perspective. The importance is derived from the service agreement, especially the agreed QoS parameters (Service View).
The fields dates, status, and other event identifications are not derived directly from the Service Model, but are necessary for the event correlation process.
6. APPLICATION OF SERVICE-ORIENTED EVENT CORRELATION FOR A WEB HOSTING SCENARIO
The Leibniz Supercomputing Center is the joint computing center for the Munich universities and research institutions. It also runs the Munich Scientific Network and offers related services. One of these services is the Virtual WWW Server, a web hosting offer for smaller research institutions. It currently has approximately 200 customers. A subservice of the Virtual WWW Server is the Storage Service, which stores the static and dynamic web pages and uses caching techniques for fast access. Other subservices are the DNS and IP services. When a user accesses a hosted web site via one of the LRZ's Virtual Private Networks, the VPN service is also used. The resources of the Virtual WWW Server include a load balancer and 5 redundant servers. The network connections are also part of the resources, as well as the Apache web server application running on the servers. Figure 7 shows the dependencies of the Virtual WWW Server.
Figure 7: Dependencies of the Virtual WWW Server
6.1 Customer Service Management and Intelligent Assistant
The Intelligent Assistant that is available at the Leibniz Supercomputing Center can currently be used for connectivity or performance problems or problems with the LRZ E-Mail Service. A selection of possible customer problem reports for the Virtual WWW Server is given in the following:
• The hosted web site is not reachable.
• The web site access is (too) slow.
• The web site contains outdated content.
• The transfer of new content to the LRZ does not change the provided content.
• The web site looks strange (e.g., caused by problems with the HTML version).
These customer reports have to be mapped onto failures in resources. For an unreachable web site, e.g., different root causes are possible, such as a DNS problem, a connectivity problem, or a wrong configuration of the load balancer.
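To make the information modeling of Section 5.2 and this mapping more concrete, the service event fields could be captured in a simple data model. The following Python sketch is only an illustration; the class and attribute names are our own choices and are not prescribed by the MNM Service Model, and the example values loosely refer to the Virtual WWW Server scenario above.

    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List

    @dataclass
    class Event:
        # common superclass attributes shared by resource and service events
        event_id: str
        timestamp: datetime

    @dataclass
    class ResourceEvent(Event):
        resource_id: str
        status_info: dict                    # e.g., SNMP variables describing the resource status

    @dataclass
    class ServiceEvent(Event):
        service_id: str                      # unique identification of the affected service
        description: str                     # classified and/or informal problem description
        qos_parameters: dict                 # agreed QoS parameters and service levels
        resource_list: List[str]             # resources needed to provide the service
        subservice_event_ids: List[str] = field(default_factory=list)  # child events for subservices
        correlated_event_ids: List[str] = field(default_factory=list)  # other correlated events
        issuer: str = ""                     # customer, provider employee, or parent event
        assignee: str = ""                   # provider employee handling the problem
        dates: dict = field(default_factory=dict)  # initial, identification, resolution dates
        status: str = "active"               # e.g., active, suspended, solved
        priority: int = 0                    # derived from the agreed service levels

    # Example: a service event created from the customer report "web site not reachable"
    report = ServiceEvent(
        event_id="SE-001", timestamp=datetime.now(),
        service_id="Virtual WWW Server",
        description="hosted web site not reachable",
        qos_parameters={"availability": "agreed service level"},
        resource_list=["load balancer", "www-server-1", "www-server-2"],
        issuer="customer report via CSM interface",
    )

Such a record would be created by the CSM or the Intelligent Assistant and then handed to the event correlation process.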
6.2 Active Probing
In general, active probing can be used for services or for resources. For the service active probing of the Virtual WWW Server a virtual customer could be installed. This customer issues typical HTTP requests for web sites and compares the answers with the known content. To check whether a test web site is up to date, its content could contain a time stamp. The service active probing could also include the testing of subservices, e.g., sending requests to the DNS. The resource active probing performs tests of the resources. Examples are connectivity tests, requests to application processes, and tests of the available disk space.
6.3 Event Correlation for the Virtual WWW Server
Figure 8 shows the example processing.
Figure 8: Example processing of a customer report
At first, a customer who takes a look at his hosted web site reports that the content that he had changed is not displayed correctly. This report is transferred to the service management via the CSM interface. An Intelligent Assistant could be used to structure the customer report. The service management translates the customer report into a service event. Independently of the customer report, the service provider's own service active probing tries to change the content of a test web site. Because this is not possible, a service event is issued. Meanwhile, a resource event has been reported to the event correlator, because an access of the content caching server to one of the WWW servers failed. As there are no other events at the moment, the resource event correlation cannot correlate this event with other events. At this stage it would be possible for the event correlator to ask the resource management to perform an active probing of related resources. Both service events are now transferred to the service event correlator and are correlated. From the correlation of these events it seems likely that either the WWW server itself or the link to the WWW server is the problem's root cause. A wrong web site update procedure inside the content caching server seems less likely, as this would only explain the customer report and not the service active probing result. At this stage another service active probing could be started, but this does not seem useful, as this correlation only deals with the Web Hosting Service and its resources and not with other services. After the separate correlation of both resource and service events, which can be performed in parallel, the aggregate event correlator is used to correlate both types of events. The additional resource event makes it much more likely that the problems are caused by a broken link to the WWW server or by the WWW server itself, and not by the content caching server. In this case the event correlator asks the resource management to check the link and the WWW server. The decision between these two likely error causes cannot be further automated here. Later, the resource management finds out that a broken link is the failure's root cause. It informs the event correlator, and it can be determined that this explains all previous events. Therefore, the event correlation can be stopped at this point. Depending on the provider's customer relationship management, the identified root cause and an expected repair time could be reported to the customers. After the link has been repaired, this can also be reported via the CSM interface. A sketch of how the three correlation steps could be wired together for this example is given below.
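The following Python sketch shows one possible way to combine the three correlators of Section 4.6 for an example like this. It is a minimal illustration under simplifying assumptions: the dependency dictionaries and names (resource_deps, service_subservices, service_resources) are invented for the example and would in practice be filled from the provider's configuration and service management databases.

    def correlate_resource_events(resource_events, resource_deps):
        """Resource event correlation: return resources that could be the root cause,
        using resource dependencies (e.g., a broken link explaining a failed access)."""
        candidates = set()
        for ev in resource_events:
            candidates.add(ev["resource"])
            candidates.update(resource_deps.get(ev["resource"], []))
        return candidates

    def correlate_service_events(service_events, service_subservices):
        """Service event correlation: return services/subservices suspected to contain
        the failing resource; a subservice shared by all affected services is added."""
        affected = {ev["service"] for ev in service_events}
        suspects = set(affected)
        shared = [set(service_subservices.get(s, [])) for s in affected]
        if shared:
            suspects |= set.intersection(*shared)
        return suspects

    def aggregate_correlation(resource_candidates, service_suspects, service_resources):
        """Aggregate event correlation: match both lists via the resources
        of the suspected services."""
        possible_root_causes = set()
        for svc in service_suspects:
            possible_root_causes |= set(service_resources.get(svc, [])) & resource_candidates
        return possible_root_causes

    # Example data loosely following the scenario above (illustrative names only).
    resource_deps = {"www-server-1": ["link-to-www-server-1"]}
    service_subservices = {"Virtual WWW Server": ["Storage Service", "DNS", "IP"]}
    service_resources = {"Virtual WWW Server":
                         ["load balancer", "www-server-1", "link-to-www-server-1",
                          "content caching server"]}

    resource_events = [{"resource": "www-server-1"}]       # failed access to a WWW server
    service_events = [{"service": "Virtual WWW Server"},   # customer report
                      {"service": "Virtual WWW Server"}]   # service active probing result

    r = correlate_resource_events(resource_events, resource_deps)
    s = correlate_service_events(service_events, service_subservices)
    print(aggregate_correlation(r, s, service_resources))
    # prints the WWW server and its link as possible root causes, as in the example above

The two correlation functions can run in parallel, as described in the text, before their results are matched in the aggregate step.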
Even though many details of this event correlation process could also be performed differently, the example showed an important advantage of the service-oriented event correlation: the relationship between the service provisioning and the provider's resources is explicitly modeled. This allows a mapping of the customer report onto the provider-internal resources.
6.4 Event Correlation for Different Services
If a provider like the LRZ offers several services, the service-oriented event correlation can be used to reveal relationships that are not obvious at first glance. If the LRZ E-Mail Service and its events are viewed in relation to the events for the Virtual WWW Server, it is possible to identify failures in common subservices and resources. Both services depend on the DNS, which means that customer reports like "I cannot retrieve new e-mail" and "The web site of my research institute is not available" can have a common cause, e.g., that the DNS does not work properly.
7. CONCLUSION AND FUTURE WORK
In our paper we showed the need for a service-oriented event correlation. For an IT service provider this new kind of event correlation makes it possible to automatically map problems with the current service quality onto resource failures. This helps to find the failure's root cause earlier and to reduce the costs of SLA violations. In addition, customer reports can be linked together and therefore the processing effort can be reduced. To realize these benefits we presented our approach for performing the service-oriented event correlation as well as a modeling of the necessary correlation information. In the future we are going to apply our workflow and information modeling in more detail to services offered by the Leibniz Supercomputing Center. Several issues have not been treated in detail so far, e.g., the consequences for the service-oriented event correlation if a subservice is offered by another provider. If a service does not perform properly, it has to be determined whether this is caused by the provider himself or by the subservice. In the latter case appropriate information has to be exchanged between the providers via the CSM interface. Another issue is the use of active probing in the event correlation process, which can improve the result, but can also lead to a correlation delay. Another important point is the precise definition of "dependency", which has also been left out by many other publications. To avoid having too many dependencies in a certain situation, one could try to check whether the dependencies currently exist. In the case of a download from a web site there is a dependency on the DNS subservice only at the beginning; after the address is resolved, a download failure is unlikely to have been caused by the DNS. Another possibility to reduce the dependencies is to divide a service into its possible user interactions (e.g., an e-mail service into transactions like get mail, send mail, etc.) and to define the dependencies for each user interaction, as sketched below.
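As a small illustration of this last idea, per-interaction dependencies could be kept in a simple lookup table. The following Python sketch is purely illustrative; the service, interaction, and subservice names are assumptions and are not taken from the LRZ configuration.

    # Dependencies defined per user interaction instead of per service (illustrative names).
    INTERACTION_DEPENDENCIES = {
        ("E-Mail Service", "get mail"):              ["DNS", "IP", "Mail Store"],
        ("E-Mail Service", "send mail"):             ["DNS", "IP", "Mail Relay"],
        ("Virtual WWW Server", "resolve address"):   ["DNS"],
        ("Virtual WWW Server", "download content"):  ["IP", "Storage Service"],
    }

    def candidate_subservices(service: str, interaction: str) -> list:
        """Return only the subservices on which this particular interaction depends,
        so that, e.g., a failed download after a successful address resolution does
        not put the DNS subservice under suspicion."""
        return INTERACTION_DEPENDENCIES.get((service, interaction), [])

Restricting the dependency lookup to the failing interaction keeps the candidate list for the correlators short and avoids suspecting subservices that were only needed in an earlier phase of the interaction.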
Assured Service Quality by Improved Fault Management Service-Oriented Event Correlation ABSTRACT The paradigm shift from device-oriented to service-oriented management has also implications to the area of event correlation. Today's event correlation mainly addresses the correlation of events as reported from management tools. However, a correlation of user trouble reports concerning services should also be performed. This is necessary to improve the resolution time and to reduce the effort for keeping the service agreements. We refer to such a type of correlation as service-oriented event correlation. The necessity to use this kind of event correlation is motivated in the paper. To introduce service-oriented event correlation for an IT service provider, an appropriate modeling of the correlation workflow and of the information is necessary. Therefore, we examine the process management frameworks IT Infrastructure Library (ITIL) and enhanced Telecom Operations Map (eTOM) for their contribution to the workflow modeling in this area. The different kinds of dependencies that we find in our general scenario are then used to develop a workflow for the service-oriented event correlation. The MNM Service Model, which is a generic model for IT service management proposed by the Munich Network Management (MNM) Team, is used to derive an appropriate information modeling. An example scenario, the Web Hosting Service of the Leibniz Supercomputing Center (LRZ), is used to demonstrate the application of service-oriented event correlation. 1. INTRODUCTION In huge networks a single fault can cause a burst of failure events. To handle the flood of events and to find the root cause of a fault, event correlation approaches like rule-based reasoning, case-based reasoning or the codebook approach have been developed. The main idea of correlation is to condense and structure events to retrieve meaningful information. Until now, these approaches address primarily the correlation of events as reported from management tools or devices. Therefore, we call them device-oriented. In this paper we define a service as a set of functions which are offered by a provider to a customer at a customer provider interface. The definition of a "service" is therefore more general than the definition of a "Web Service", but a "Web Service" is included in this "service" definition. As a consequence, the results are applicable for Web Services as well as for other kinds of services. A service level agreement (SLA) is defined as a contract between customer and provider about guaranteed service performance. As in today's IT environments the offering of such services with an agreed service quality becomes more and more important, this change also affects the event correlation. It has become a necessity for providers to offer such guarantees for a differentiation from other providers. To avoid SLA violations it is especially important for service providers to identify the root cause of a fault in a very short time or even act proactively. The latter refers to the case of recognizing the influence of a device breakdown on the offered services. As in this scenario the knowledge about services and their SLAs is used we call it service-oriented. It can be addressed from two directions. Top-down perspective: Several customers report a problem in a certain time interval. Are these trouble reports correlated? How to identify a resource as being the problem's root cause? Bottom-up perspective: A device (e.g., router, server) breaks down. 
Which services, and especially which customers, are affected by this fault? The rest of the paper is organized as follows. In Section 2 we describe how event correlation is performed today and present a selection of the state-of-the-art event correlation techniques. Section 3 describes the motivation for serviceoriented event correlation and its benefits. After having motivated the need for such type of correlation we use two well-known IT service management models to gain requirements for an appropriate workflow modeling and present our proposal for it (see Section 4). In Section 5 we present our information modeling which is derived from the MNM Service Model. An application of the approach for a web hosting scenario is performed in Section 6. The last section concludes the paper and presents future work. 2. TODAY'S EVENT CORRELATION TECHNIQUES 3. MOTIVATION OF SERVICE-ORIENTED EVENT CORRELATION 4. WORKFLOW MODELING 4.1 IT Infrastructure Library (ITIL) 4.2 Enhanced Telecom Operations Map (eTOM) 4.3 Workflow Modeling for the Service-Oriented Event Correlation 4.4 Customer Service Management and Intelligent Assistant 4.5 Active Probing 4.6 Event Correlator 5. INFORMATION MODELING 5.1 MNM Service Model 5.2 Information Modeling for Service Events 6. APPLICATION OF SERVICE-ORIENTED EVENT CORRELATION FOR A WEB HOSTING SCENARIO 6.1 Customer Service Management and Intelligent Assistant 6.2 Active Probing 6.3 Event Correlation for the Virtual WWW Server 6.4 Event Correlation for Different Services
Assured Service Quality by Improved Fault Management Service-Oriented Event Correlation ABSTRACT The paradigm shift from device-oriented to service-oriented management has also implications to the area of event correlation. Today's event correlation mainly addresses the correlation of events as reported from management tools. However, a correlation of user trouble reports concerning services should also be performed. This is necessary to improve the resolution time and to reduce the effort for keeping the service agreements. We refer to such a type of correlation as service-oriented event correlation. The necessity to use this kind of event correlation is motivated in the paper. To introduce service-oriented event correlation for an IT service provider, an appropriate modeling of the correlation workflow and of the information is necessary. Therefore, we examine the process management frameworks IT Infrastructure Library (ITIL) and enhanced Telecom Operations Map (eTOM) for their contribution to the workflow modeling in this area. The different kinds of dependencies that we find in our general scenario are then used to develop a workflow for the service-oriented event correlation. The MNM Service Model, which is a generic model for IT service management proposed by the Munich Network Management (MNM) Team, is used to derive an appropriate information modeling. An example scenario, the Web Hosting Service of the Leibniz Supercomputing Center (LRZ), is used to demonstrate the application of service-oriented event correlation. 1. INTRODUCTION In huge networks a single fault can cause a burst of failure events. To handle the flood of events and to find the root cause of a fault, event correlation approaches like rule-based reasoning, case-based reasoning or the codebook approach have been developed. The main idea of correlation is to condense and structure events to retrieve meaningful information. Until now, these approaches address primarily the correlation of events as reported from management tools or devices. Therefore, we call them device-oriented. In this paper we define a service as a set of functions which are offered by a provider to a customer at a customer provider interface. As a consequence, the results are applicable for Web Services as well as for other kinds of services. A service level agreement (SLA) is defined as a contract between customer and provider about guaranteed service performance. As in today's IT environments the offering of such services with an agreed service quality becomes more and more important, this change also affects the event correlation. To avoid SLA violations it is especially important for service providers to identify the root cause of a fault in a very short time or even act proactively. The latter refers to the case of recognizing the influence of a device breakdown on the offered services. As in this scenario the knowledge about services and their SLAs is used we call it service-oriented. Top-down perspective: Several customers report a problem in a certain time interval. Are these trouble reports correlated? How to identify a resource as being the problem's root cause? Which services, and especially which customers, are affected by this fault? The rest of the paper is organized as follows. In Section 2 we describe how event correlation is performed today and present a selection of the state-of-the-art event correlation techniques. Section 3 describes the motivation for serviceoriented event correlation and its benefits. 
After having motivated the need for such type of correlation we use two well-known IT service management models to gain requirements for an appropriate workflow modeling and present our proposal for it (see Section 4). In Section 5 we present our information modeling which is derived from the MNM Service Model. An application of the approach for a web hosting scenario is performed in Section 6. The last section concludes the paper and presents future work.
H-43
Combining Content and Link for Classification using Matrix Factorization
The world wide web contains rich textual contents that are interconnected via complex hyperlinks. This huge database violates the assumption held by most of conventional statistical methods that each web page is considered as an independent and identical sample. It is thus difficult to apply traditional mining or learning methods for solving web mining problems, e.g., web page classification, by exploiting both the content and the link structure. The research in this direction has recently received considerable attention but are still in an early stage. Though a few methods exploit both the link structure or the content information, some of them combine the only authority information with the content information, and the others first decompose the link structure into hub and authority features, then apply them as additional document features. Being practically attractive for its great simplicity, this paper aims to design an algorithm that exploits both the content and linkage information, by carrying out a joint factorization on both the linkage adjacency matrix and the document-term matrix, and derives a new representation for web pages in a low-dimensional factor space, without explicitly separating them as content, hub or authority factors. Further analysis can be performed based on the compact representation of web pages. In the experiments, the proposed method is compared with state-of-the-art methods and demonstrates an excellent accuracy in hypertext classification on the WebKB and Cora benchmarks.
[ "combin content and link", "classif", "matrix factor", "web mine problem", "link structur", "content inform", "author inform", "joint factor", "linkag adjac matrix", "document-term matrix", "low-dimension factor space", "webkb and cora benchmark", "relationship", "asymmetr relationship", "cocit similar", "text content", "factor analysi" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "U", "U", "U", "M", "R" ]
Combining Content and Link for Classification using Matrix Factorization Shenghuo Zhu Kai Yu Yun Chi Yihong Gong {zsh,kyu,ychi,ygong}@sv. nec-labs. com NEC Laboratories America, Inc. 10080 North Wolfe Road SW3-350 Cupertino, CA 95014, USA ABSTRACT The world wide web contains rich textual contents that are interconnected via complex hyperlinks. This huge database violates the assumption held by most of conventional statistical methods that each web page is considered as an independent and identical sample. It is thus difficult to apply traditional mining or learning methods for solving web mining problems, e.g., web page classification, by exploiting both the content and the link structure. The research in this direction has recently received considerable attention but are still in an early stage. Though a few methods exploit both the link structure or the content information, some of them combine the only authority information with the content information, and the others first decompose the link structure into hub and authority features, then apply them as additional document features. Being practically attractive for its great simplicity, this paper aims to design an algorithm that exploits both the content and linkage information, by carrying out a joint factorization on both the linkage adjacency matrix and the document-term matrix, and derives a new representation for web pages in a low-dimensional factor space, without explicitly separating them as content, hub or authority factors. Further analysis can be performed based on the compact representation of web pages. In the experiments, the proposed method is compared with state-of-the-art methods and demonstrates an excellent accuracy in hypertext classification on the WebKB and Cora benchmarks. Categories and Subject Descriptors: H.3.3 [Information Systems]: Information Search and Retrieval General Terms: Algorithms, Experimentation 1. INTRODUCTION With the advance of the World Wide Web, more and more hypertext documents become available on the Web. Some examples of such data include organizational and personal web pages (e.g, the WebKB benchmark data set, which contains university web pages), research papers (e.g., data in CiteSeer), online news articles, and customer-generated media (e.g., blogs). Comparing to data in traditional information management, in addition to content, these data on the Web also contain links: e.g., hyperlinks from a student``s homepage pointing to the homepage of her advisor, paper citations, sources of a news article, comments of one blogger on posts from another blogger, and so on. Performing information management tasks on such structured data raises many new research challenges. In the following discussion, we use the task of web page classification as an illustrating example, while the techniques we develop in later sections are applicable equally well to many other tasks in information retrieval and data mining. For the classification problem of web pages, a simple approach is to treat web pages as independent documents. The advantage of this approach is that many off-the-shelf classification tools can be directly applied to the problem. However, this approach relies only on the content of web pages and ignores the structure of links among them. Link structures provide invaluable information about properties of the documents as well as relationships among them. 
For example, in the WebKB dataset, the link structure provides additional insights about the relationship among documents (e.g., links often pointing from a student to her advisor or from a faculty member to his projects). Since some links among these documents imply the inter-dependence among the documents, the usual i.i.d. (independent and identical distributed) assumption of documents does not hold any more. From this point of view, the traditional classification methods that ignore the link structure may not be suitable. On the other hand, a few studies, for example [25], rely solely on link structures. It is however a very rare case that content information can be ignorable. For example, in the Cora dataset, the content of a research article abstract largely determines the category of the article. To improve the performance of web page classification, therefore, both link structure and content information should be taken into consideration. To achieve this goal, a simple approach is to convert one type of information to the other. For example, in spam blog classification, Kolari et al. [13] concatenate outlink features with the content features of the blog. In document classification, Kurland and Lee [14] convert content similarity among documents into weights of links. However, link and content information have different properties. For example, a link is an actual piece of evidence that represents an asymmetric relationship whereas the content similarity is usually defined conceptually for every pair of documents in a symmetric way. Therefore, directly converting one type of information to the other usually degrades the quality of information. On the other hand, there exist some studies, as we will discuss in detail in related work, that consider link information and content information separately and then combine them. We argue that such an approach ignores the inherent consistency between link and content information and therefore fails to combine the two seamlessly. Some work, such as [3], incorporates link information using cocitation similarity, but this may not fully capture the global link structure. In Figure 1, for example, web pages v6 and v7 co-cite web page v8, implying that v6 and v7 are similar to each other. In turns, v4 and v5 should be similar to each other, since v4 and v5 cite similar web pages v6 and v7, respectively. But using cocitation similarity, the similarity between v4 and v5 is zero without considering other information. v1 v2 v3 v4 v5 v6 v7 v8 Figure 1: An example of link structure In this paper, we propose a simple technique for analyzing inter-connected documents, such as web pages, using factor analysis[18]. In the proposed technique, both content information and link structures are seamlessly combined through a single set of latent factors. Our model contains two components. The first component captures the content information. This component has a form similar to that of the latent topics in the Latent Semantic Indexing (LSI) [8] in traditional information retrieval. That is, documents are decomposed into latent topics/factors, which in turn are represented as term vectors. The second component captures the information contained in the underlying link structure, such as links from homepages of students to those of faculty members. A factor can be loosely considered as a type of documents (e.g., those homepages belonging to students). It is worth noting that we do not explicitly define the semantic of a factor a priori. 
Instead, similar to LSI, the factors are learned from the data. Traditional factor analysis models the variables associated with entities through the factors. However, in analysis of link structures, we need to model the relationship of two ends of links, i.e., edges between vertex pairs. Therefore, the model should involve factors of both vertices of the edge. This is a key difference between traditional factor analysis and our model. In our model, we connect two components through a set of shared factors, that is, the latent factors in the second component (for contents) are tied to the factors in the first component (for links). By doing this, we search for a unified set of latent factors that best explains both content and link structures simultaneously and seamlessly. In the formulation, we perform factor analysis based on matrix factorization: solution to the first component is based on factorizing the term-document matrix derived from content features; solution to the second component is based on factorizing the adjacency matrix derived from links. Because the two factorizations share a common base, the discovered bases (latent factors) explain both content information and link structures, and are then used in further information management tasks such as classification. This paper is organized as follows. Section 2 reviews related work. Section 3 presents the proposed approach to analyze the web page based on the combined information of links and content. Section 4 extends the basic framework and a few variants for fine tune. Section 5 shows the experiment results. Section 6 discusses the details of this approach and Section 7 concludes. 2. RELATED WORK In the content analysis part, our approach is closely related to Latent Semantic Indexing (LSI) [8]. LSI maps documents into a lower dimensional latent space. The latent space implicitly captures a large portion of information of documents, therefore it is called the latent semantic space. The similarity between documents could be defined by the dot products of the corresponding vectors of documents in the latent space. Analysis tasks, such as classification, could be performed on the latent space. The commonly used singular value decomposition (SVD) method ensures that the data points in the latent space can optimally reconstruct the original documents. Though our approach also uses latent space to represent web pages (documents), we consider the link structure as well as the content of web pages. In the link analysis approach, the framework of hubs and authorities (HITS) [12] puts web page into two categories, hubs and authorities. Using recursive notion, a hub is a web page with many outgoing links to authorities, while an authority is a web page with many incoming links from hubs. Instead of using two categories, PageRank [17] uses a single category for the recursive notion, an authority is a web page with many incoming links from authorities. He et al. [9] propose a clustering algorithm for web document clustering. The algorithm incorporates link structure and the co-citation patterns. In the algorithm, all links are treated as undirected edge of the link graph. The content information is only used for weighing the links by the textual similarity of both ends of the links. Zhang et al. [23] uses the undirected graph regularization framework for document classification. Achlioptas et al[2] decompose the web into hub and authority attributes then combine them with content. Zhou et al. 
[25] and [24] propose a directed graph regularization framework for semi-supervised learning. The framework combines the hub and authority information of web pages. But it is difficult to combine the content information into that framework. Our approach consider the content and the directed linkage between topics of source and destination web pages in one step, which implies the topic combines the information of web page as authorities and as hubs in a single set of factors. Cohn and Hofmann [6] construct the latent space from both content and link information, using content analysis based on probabilistic LSI (PLSI) [10] and link analysis based on PHITS [5]. The major difference between the approach of [6] (PLSI+PHITS) and our approach is in the part of link analysis. In PLSI+PHITS, the link is constructed with the linkage from the topic of the source web page to the destination web page. In the model, the outgoing links of the destination web page have no effect on the source web page. In other words, the overall link structure is not utilized in PHITS. In our approach, the link is constructed with the linkage between the factor of the source web page and the factor of the destination web page, instead of the destination web page itself. The factor of the destination web page contains information of its outgoing links. In turn, such information is passed to the factor of the source web page. As the result of matrix factorization, the factor forms a factor graph, a miniature of the original graph, preserving the major structure of the original graph. Taskar et al. [19] propose relational Markov networks (RMNs) for entity classification, by describing a conditional distribution of entity classes given entity attributes and relationships. The model was applied to web page classification, where web pages are entities and hyperlinks are treated as relationships. RMNs apply conditional random fields to define a set of potential functions on cliques of random variables, where the link structure provides hints to form the cliques. However the model does not give an off-the-shelf solution, because the success highly depends on the arts of designing the potential functions. On the other hand, the inference for RMNs is intractable and requires belief propagation. The following are some work on combining documents and links, but the methods are loosely related to our approach. The experiments of [21] show that using terms from the linked document improves the classification accuracy. Chakrabarti et al.[3] use co-citation information in their classification model. Joachims et al.[11] combine text kernels and co-citation kernels for classification. Oh et al [16] use the Naive Bayesian frame to combine link information with content. 3. OUR APPROACH In this section we will first introduce a novel matrix factorization method, which is more suitable than conventional matrix factorization methods for link analysis. Then we will introduce our approach that jointly factorizes the document-term matrix and link matrix and obtains compact and highly indicative factors for representing documents or web pages. 3.1 Link Matrix Factorization Suppose we have a directed graph G = (V, E), where the vertex set V = {vi}n i=1 represents the web pages and the edge set E represents the hyperlinks between web pages. Let A = {asd} denotes the n×n adjacency matrix of G, which is also called the link matrix in this paper. 
For a pair of vertices, $v_s$ and $v_d$, let $a_{sd} = 1$ when there is an edge from $v_s$ to $v_d$, and $a_{sd} = 0$ otherwise. Note that A is an asymmetric matrix, because hyperlinks are directed. Most machine learning algorithms assume a feature-vector representation of instances. For web page classification, however, the link graph does not readily give such a vector representation for web pages. If one directly uses each row or column of A for the job, she will suffer a very high computational cost because the dimensionality equals the number of web pages. On the other hand, it produces a poor classification accuracy (see our experiments in Section 5), because A is extremely sparse. (Footnote 1: Due to the sparsity of A, links from two similar pages may not share any common target pages, which makes them appear dissimilar. However, the two pages may be indirectly linked to many common pages via their neighbors.) The idea of link matrix factorization is to derive a high-quality feature representation Z of web pages based on analyzing the link matrix A, where Z is an n × l matrix, with each row being the l-dimensional feature vector of a web page. The new representation of web pages captures the principal factors of the link structure and makes further processing more efficient. One may use a method similar to LSI, applying the well-known principal component analysis (PCA) to derive Z from A. The corresponding optimization problem is
\[
\min_{Z,U}\ \|A - ZU^{\top}\|_F^2 + \gamma \|U\|_F^2 \qquad (1)
\]
where $\gamma$ is a small positive number, U is an n × l matrix, and $\|\cdot\|_F$ is the Frobenius norm. (Footnote 2: Another equivalent form is $\min_{Z,U} \|A - ZU^{\top}\|_F^2$, s.t. $U^{\top}U = I$. The solution Z is identical subject to a scaling factor.) The optimization aims to approximate A by $ZU^{\top}$, a product of two low-rank matrices, with a regularization on U. In the end, the i-th row vector of Z can be thought of as the hub feature vector of vertex $v_i$, and the corresponding row vector of U can be thought of as its authority features. A link generation model proposed in [2] is similar to the PCA approach. Since A is a nonnegative matrix here, one can also consider putting nonnegative constraints on U and Z, which produces an algorithm similar to PLSA [10] and NMF [20]. However, despite its popularity in matrix analysis, PCA (or other similar methods like PLSA) is restrictive for link matrix factorization. The major problem is that PCA ignores the fact that the rows and columns of A are indexed by exactly the same set of objects (i.e., web pages). The approximating matrix $\tilde{A} = ZU^{\top}$ shows no evidence that the links are within the same set of objects. To see the drawback, let us consider a link transitivity situation $v_i \rightarrow v_s \rightarrow v_j$, where page i is linked to page s, which itself is linked to page j. Since $\tilde{A} = ZU^{\top}$ treats A as links from web pages $\{v_i\}$ to a different set of objects, let it be denoted by $\{o_i\}$, $\tilde{A} = ZU^{\top}$ actually splits the linked object $o_s$ from $v_s$ and breaks down the link path into two parts $v_i \rightarrow o_s$ and $v_s \rightarrow o_j$. This is obviously a misinterpretation of the original link path. To overcome this problem of PCA, in this paper we suggest using a different factorization:
\[
\min_{Z,U}\ \|A - ZUZ^{\top}\|_F^2 + \gamma \|U\|_F^2 \qquad (2)
\]
where U is an l × l full matrix. Note that U is not symmetric, thus $ZUZ^{\top}$ produces an asymmetric matrix, which is the case for A. Again, each row vector of Z corresponds to the feature vector of a web page. The new approximating form $\tilde{A} = ZUZ^{\top}$ makes explicit that the links are between the same set of objects, represented by the features Z.
The factor model actually maps each vertex $v_i$ into a vector $z_i = \{z_{i,k};\ 1 \le k \le l\}$ in the space $\mathbb{R}^l$. We call $\mathbb{R}^l$ the factor space. Then, $\{z_i\}$ encodes the information of the incoming and outgoing connectivity of the vertices $\{v_i\}$. The factor loadings, U, explain how these observed connections arise from $\{z_i\}$. Once we have the vectors $z_i$, we can use many traditional classification methods (such as SVMs) or clustering tools (such as K-Means) to perform the analysis.
Illustration Based on a Synthetic Problem
To further illustrate the advantages of the proposed link matrix factorization Eq. (2), let us consider the graph in Figure 1. Given these observations, we can summarize the graph by grouping vertices, as in the factor graph depicted in Figure 2.
Figure 2: Summarizing Figure 1 with a factor graph
Next we perform the two factorization methods, Eq. (2) and Eq. (1), on this link matrix. A good low-rank representation should reveal the structure of the factor graph. First we try the PCA-like decomposition, solving Eq. (1) and obtaining
\[
Z = \begin{bmatrix}
 1. &  1. &   0 &   0 &   0\\
  0 &   0 & -.6 & -.7 &  .1\\
  0 &   0 &  .0 &  .6 & -.0\\
  0 &   0 &  .8 & -.4 &  .3\\
  0 &   0 &  .2 & -.2 & -.9\\
 .7 &  .7 &   0 &   0 &   0\\
 .7 &  .7 &   0 &   0 &   0\\
  0 &   0 &   0 &   0 &   0
\end{bmatrix},
\qquad
U = \begin{bmatrix}
  0 &   0 &   0 &   0 &   0\\
 .5 & -.5 &   0 &   0 &   0\\
 .5 & -.5 &   0 &   0 &   0\\
  0 &   0 & -.6 & -.7 &  .1\\
  0 &   0 &  .0 &  .6 & -.0\\
  0 &   0 &  .8 & -.4 &  .3\\
  0 &   0 &  .2 & -.2 & -.9\\
 .7 &  .7 &   0 &   0 &   0
\end{bmatrix}
\]
We can see that the row vectors of v6 and v7 are the same in Z, indicating that v6 and v7 have the same hub attributes. The row vectors of v2 and v3 are the same in U, indicating that v2 and v3 have the same authority attributes. The similarity between v4 and v5 is not apparent, because their inlinks (and outlinks) are different. Then, we factorize A by $ZUZ^{\top}$ via solving Eq. (2), and obtain the results
\[
Z = \begin{bmatrix}
 -.8 & -.5 &  .3 & -.1 & -.0\\
 -.0 &  .4 &  .6 & -.1 & -.4\\
 -.0 &  .4 &  .6 & -.1 & -.4\\
  .3 & -.2 &  .3 & -.4 &  .3\\
  .3 & -.2 &  .3 & -.4 &  .3\\
 -.4 &  .5 &  .0 & -.2 &  .6\\
 -.4 &  .5 &  .0 & -.2 &  .6\\
 -.1 &  .1 & -.4 & -.8 & -.4
\end{bmatrix},
\qquad
U = \begin{bmatrix}
 -.1 & -.2 & -.4 &  .6 &  .7\\
  .2 & -.5 & -.5 & -.5 &  .0\\
  .1 &  .1 &  .4 & -.4 &  .3\\
  .1 & -.2 & -.0 &  .3 & -.1\\
 -.3 &  .3 & -.5 & -.4 & -.2
\end{bmatrix}
\]
The resulting Z is very consistent with the clustering structure of the vertices: the row vectors of v2 and v3 are the same, those of v4 and v5 are the same, and those of v6 and v7 are the same. Even more interestingly, if we add constraints to ensure that Z and U are nonnegative, we have
\[
Z = \begin{bmatrix}
 1. &  0 &  0 &  0 &  0\\
  0 & .9 &  0 &  0 &  0\\
  0 & .9 &  0 &  0 &  0\\
  0 &  0 & .7 &  0 &  0\\
  0 &  0 & .7 &  0 &  0\\
  0 &  0 &  0 & .9 &  0\\
  0 &  0 &  0 & .9 &  0\\
  0 &  0 &  0 &  0 & 1.
\end{bmatrix},
\qquad
U = \begin{bmatrix}
  0 & 1. &  0 &  0 &  0\\
  0 &  0 & .7 &  0 &  0\\
  0 &  0 &  0 & .7 &  0\\
  0 &  0 &  0 &  0 & 1.\\
  0 &  0 &  0 &  0 &  0
\end{bmatrix}
\]
which clearly shows the assignment of vertices to clusters in Z and the links of the factor graph in U. When interpretability is not critical in some tasks, for example classification, we found that the method achieves better accuracies without the nonnegative constraints. Given our above analysis, it is clear that the factorization $ZUZ^{\top}$ is more expressive than $ZU^{\top}$ in representing the link matrix A.
3.2 Content Matrix Factorization
Now let us consider the content information on the vertices. To combine the link information and the content information, we want to use the same latent space to approximate the content as the latent space for the links.
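Before defining the content factorization, the link factorization of Eq. (2) can be made concrete with a small numerical sketch. The following Python/NumPy code fits Z and U by plain gradient descent; it is only an illustration, not the authors' implementation (the paper suggests conjugate gradient or quasi-Newton methods for the joint problem). The edge list is an assumed stand-in for Figure 1, whose exact edges are not recoverable from the text, and the step size, iteration count, and the extra regularizer on Z are ad hoc choices for numerical stability.

    import numpy as np

    rng = np.random.default_rng(0)
    n, l = 8, 5
    gamma, beta_z, eta = 0.1, 0.01, 0.01   # beta_z is an extra stabilizer, not part of Eq. (2)

    # Adjacency matrix of a small directed graph in the spirit of Figure 1
    # (assumed edges; vertex v1 is index 0, ..., v8 is index 7).
    A = np.zeros((n, n))
    for s, d in [(0, 1), (0, 2), (1, 3), (2, 4), (3, 5), (4, 6), (5, 7), (6, 7)]:
        A[s, d] = 1.0

    Z = 0.1 * rng.standard_normal((n, l))
    U = 0.1 * rng.standard_normal((l, l))

    for _ in range(20000):
        R = Z @ U @ Z.T - A                               # residual of the approximation Z U Z^T
        grad_Z = R @ Z @ U.T + R.T @ Z @ U + beta_z * Z   # d/dZ of the objective (up to a factor 2)
        grad_U = Z.T @ R @ Z + gamma * U                  # d/dU, including the gamma regularizer
        Z -= eta * grad_Z
        U -= eta * grad_U

    # Each row of Z is now a low-dimensional feature vector of a vertex reflecting both
    # its incoming and outgoing link structure; it can be fed to an SVM or K-Means.
    print(np.round(Z, 2))

The joint factorization of Section 3.3 extends the same loop by adding the content term and its gradient to the objective.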
Using the bag-of-words approach, we denote the content of web pages by an n×m matrix C, each of whose rows represents a document, each column represents a keyword, where m is the number of keywords. Like the latent semantic indexing (LSI) [8], the l-dimensional latent space for words is denoted by an m × l matrix V . Therefore, we use ZV to approximate matrix C, min V,Z C − ZV 2 F + β V 2 F , (3) where β is a small positive number, β V 2 F serves as a regularization term to improve the robustness. 3.3 Joint Link-Content Matrix Factorization There are many ways to employ both the content and link information for web page classification. Our idea in this paper is not to simply combine them, but rather to fuse them into a single, consistent, and compact feature representation. To achieve this goal, we solve the following problem, min U,V,Z n J (U, V, Z) def = A − ZUZ 2 F + α C − ZV 2 F + γ U 2 F + β V 2 F o . (4) Eq. (4) is the joined matrix factorization of A and C with regularization. The new representation Z is ensured to capture both the structures of the link matrix A and the content matrix C. Once we find the optimal Z, we can apply the traditional classification or clustering methods on vectorial data Z. The relationship among these matrices can be depicted as Figure 3. A Y C U Z V Figure 3: Relationship among the matrices. Node Y is the target of classification. Eq. (4) can be solved using gradient methods, such as the conjugate gradient method and quasi-Newton methods. Then main computation of gradient methods is evaluating the object function J and its gradients against variables, ∂J ∂U = Z ZUZ Z − Z AZ + γU, ∂J ∂V =α V Z Z − C Z + βV, ∂J ∂Z = ZU Z ZU + ZUZ ZU − A ZU − AZU + α ZV V − CV . Because of the sparsity of A, the computational complexity of multiplication of A and Z is O(µAl), where µA is the number of nonzero entries in A. Similarly, the computational complexity of C Z and CV is O(µC l), where µC is the number of nonzero entries in C. The computational complexity of the rest multiplications in the gradient computation is O(nl2 ). Therefore, the total computational complexity in one iteration is O(µAl + µC l + nl2 ). The number of links and the number of words in a web page are relatively small comparing to the number of web pages, and are almost constant as the number of web pages/documents increases, i.e. µA = O(n) and µC = O(n). Therefore, theoretically the computation time is almost linear to the number of web pages/documents, n. 4. SUPERVISED MATRIX FACTORIZATION Consider a web page classification problem. We can solve Eq. (4) to obtain Z as Section 3, then use a traditional classifier to perform classification. However, this approach does not take data labels into account in the first step. Believing that using data labels improves the accuracy by obtaining a better Z for the classification, we consider to use the data labels to guide the matrix factorization, called supervised matrix factorization [22]. Because some data used in the matrix factorization have no label information, the supervised matrix factorization falls into the category of semi-supervised learning. Let C be the set of classes. For simplicity, we first consider binary class problem, i.e. C = {−1, 1}. Assume we know the labels {yi} for vertices in T ⊂ V. We want to find a hypothesis h : V → R, such that we assign vi to 1 when h(vi) ≥ 0, −1 otherwise. We assume a transform from the latent space to R is linear, i.e. h(vi) = w φ(vi) + b = w zi + b, (5) School course dept. 
faculty other project staff student total Cornell 44 1 34 581 18 21 128 827 Texas 36 1 46 561 20 2 148 814 Washington 77 1 30 907 18 10 123 1166 Wisconsin 85 0 38 894 25 12 156 1210 Table 1: Dataset of WebKB where w and b are parameters to estimate. Here, w is the norm of the decision boundary. Similar to Support Vector Machines (SVMs) [7], we can use the hinge loss to measure the loss, X i:vi∈T [1 − yih(vi)]+ , where [x]+ is x if x ≥ 0, 0 if x < 0. However, the hinge loss is not smooth at the hinge point, which makes it difficult to apply gradient methods on the problem. To overcome the difficulty, we use a smoothed version of hinge loss for each data point, g(yih(vi)), (6) where g(x) = 8 >< >: 0 when x ≥ 2, 1 − x when x ≤ 0, 1 4 (x − 2)2 when 0 < x < 2. We reduce a multiclass problem into multiple binary ones. One simple scheme of reduction is the one-against-rest coding scheme. In the one-against-rest scheme, we assign a label vector for each class label. The element of a label vector is 1 if the data point belongs the corresponding class, −1, if the data point does not belong the corresponding class, 0, if the data point is not labeled. Let Y be the label matrix, each column of which is a label vector. Therefore, Y is a matrix of n × c, where c is the number of classes, |C|. Then the values of Eq. (5) form a matrix H = ZW + 1b , (7) where 1 is a vector of size n, whose elements are all one, W is a c × l parameter matrix, and b is a parameter vector of size c. The total loss is proportional to the sum of Eq. (6) over all labeled data points and the classes, LY (W, b, Z) = λ X i:vi∈T ,j∈C g(YijHij), where λ is the parameter to scale the term. To derive a robust solution, we also use Tikhonov regularization for W, ΩW (W) = ν 2 W 2 F , where ν is the parameter to scale the term. Then the supervised matrix factorization problem becomes min U,V,Z,W,b Js(U, V, Z, W, b) (8) where Js(U, V, Z, W, b) = J (U, V, Z) + LY (W, b, Z) + ΩW (W). We can also use gradient methods to solve the problem of Eq. (8). The gradients are ∂Js ∂U = ∂J ∂U , ∂Js ∂V = ∂J ∂V , ∂Js ∂Z = ∂J ∂Z + λGW, ∂Js ∂W =λG Z + νW, ∂Js ∂b =λG 1, where G is an n×c matrix, whose ik-th element is Yikg (YikHik), and g (x) = 8 >< >: 0 when x ≥ 2, −1 when x ≤ 0, 1 2 (x − 2) when 0 < x < 2. Once we obtain w, b, and Z, we can apply h on the vertices with unknown class labels, or apply traditional classification algorithms on Z to get the classification results. 5. EXPERIMENTS 5.1 Data Description In this section, we perform classification on two datasets, to demonstrate the our approach. The two datasets are the WebKB data set[1] and the Cora data set [15]. The WebKB data set consists of about 6000 web pages from computer science departments of four schools (Cornell, Texas, Washington, and Wisconsin). The web pages are classified into seven categories. The numbers of pages in each category are shown in Table 1. The Cora data set consists of the abstracts and references of about 34,000 computer science research papers. We use part of them to categorize into one of subfields of data structure (DS), hardware and architecture (HA), machine learning (ML), and programing language (PL). We remove those articles without reference to other articles in the set. The number of papers and the number of subfields in each area are shown in Table 2. 
area                             # of papers   # of subfields
Data structure (DS)              751           9
Hardware and architecture (HA)   400           7
Machine learning (ML)            1617          7
Programing language (PL)         1575          9
Table 2: Dataset of Cora
5.2 Methods
The task of the experiments is to classify the data based on their content information and/or link structure. We use the following methods:
• SVM on content: We apply support vector machines (SVM) to the content of the documents. The features are the bag-of-words and all words are stemmed. This method ignores the link structure in the data. A linear SVM is used. The regularization parameter of the SVM is selected using cross-validation. The SVM implementation used in the experiments is libSVM [4].
• SVM on links: We treat links as the features of each document, i.e., the i-th feature is link-to-page-i. We apply SVM to these link features. This method uses link information, but not the link structure.
• SVM on link-content: We combine the features of the above two methods. We use different weights for these two sets of features. The weights are also selected using cross-validation.
• Directed graph regularization: This method is described in [25] and [24]. It is based solely on the link structure.
• PLSI+PHITS: This method is described in [6]. It combines text content information and link structure for the analysis. The PHITS algorithm is similar in spirit to Eq. (1), with an additional nonnegative constraint. It models the outgoing and incoming structures separately.
• Link-content MF: This is our matrix factorization approach described in Section 3. We use 50 latent factors for Z. After we compute Z, we train a linear SVM using Z as the feature vectors, then apply the SVM to the testing portion of Z to obtain the final result, because of the multiclass output.
• Link-content sup. MF: This is our supervised matrix factorization approach described in Section 4. We use 50 latent factors for Z. After we compute Z, we train a linear SVM on the training portion of Z, then apply the SVM to the testing portion of Z to obtain the final result, because of the multiclass output.
We randomly split the data into five folds and repeat the experiment five times; each time we use one fold for testing and the other four folds for training. During the training process, we use cross-validation to select all model parameters. We measure the results by the classification accuracy, i.e., the percentage of correctly classified documents in the entire data set. The results are reported as the average classification accuracy and its standard deviation over the five repeats.
5.3 Results
The average classification accuracies for the WebKB data set are shown in Table 3.
method                          Cornell         Texas           Washington      Wisconsin
SVM on content                  81.00 ± 0.90    77.00 ± 0.60    85.20 ± 0.90    84.30 ± 0.80
SVM on links                    70.10 ± 0.80    75.80 ± 1.20    80.50 ± 0.30    74.90 ± 1.00
SVM on link-content             80.00 ± 0.80    78.30 ± 1.00    85.20 ± 0.70    84.00 ± 0.90
Directed graph regularization   89.47 ± 1.41    91.28 ± 0.75    91.08 ± 0.51    89.26 ± 0.45
PLSI+PHITS                      80.74 ± 0.88    76.15 ± 1.29    85.12 ± 0.37    83.75 ± 0.87
link-content MF                 93.50 ± 0.80    96.20 ± 0.50    93.60 ± 0.50    92.60 ± 0.60
link-content sup. MF            93.80 ± 0.70    97.07 ± 1.11    93.70 ± 0.60    93.00 ± 0.30
Table 3: Classification accuracy (mean ± std-err %) on WebKB data set
For this task, the accuracies of SVM on links are worse than those of SVM on content.
But the directed graph regularization, which is also based on links alone, achieves a much higher accuracy. This implies that the link structure plays an important role in the classification of this dataset, but that individual links in a web page give little information. The combination of link and content using SVM achieves an accuracy similar to that of SVM on content alone, which confirms that individual links in a web page give little information. Since our approach considers the link structure as well as the content information, our two methods achieve the highest accuracies among these approaches. The difference between the results of our two methods is not significant. However, in the experiments below, we show the difference between them. The classification accuracies for the Cora data set are shown in Table 4.
method                          DS              HA              ML              PL
SVM on content                  53.70 ± 0.50    67.50 ± 1.70    68.30 ± 1.60    56.40 ± 0.70
SVM on links                    48.90 ± 1.70    65.80 ± 1.40    60.70 ± 1.10    58.20 ± 0.70
SVM on link-content             63.70 ± 1.50    70.50 ± 2.20    70.56 ± 0.80    62.35 ± 1.00
Directed graph regularization   46.07 ± 0.82    65.50 ± 2.30    59.37 ± 0.96    56.06 ± 0.84
PLSI+PHITS                      53.60 ± 1.78    67.40 ± 1.48    67.51 ± 1.13    57.45 ± 0.68
link-content MF                 61.00 ± 0.70    74.20 ± 1.20    77.50 ± 0.80    62.50 ± 0.80
link-content sup. MF            69.38 ± 1.80    74.20 ± 0.70    78.70 ± 0.90    68.76 ± 1.32
Table 4: Classification accuracy (mean ± std-err %) on Cora data set
In this experiment, the accuracies of SVM on the combination of links and content are higher than those of either SVM on content or SVM on links. This indicates that both content and links are informative for classifying the articles into subfields. The method of directed graph regularization does not perform as well as SVM on link-content, which confirms the importance of the article content in this task. While our link-content matrix factorization method performs only slightly better than the other methods, our link-content supervised matrix factorization outperforms them significantly.
5.4 The Number of Factors
As we discussed in Section 3, the computational complexity of each iteration for solving the optimization problem is quadratic in the number of factors. We perform experiments to study how the number of factors affects the accuracy of prediction. We use different numbers of factors for the Cornell data of the WebKB data set and the machine learning (ML) data of the Cora data set. The results are shown in Figures 4(a) and 4(b).
Figure 4: Accuracy vs. number of factors; (a) Cornell data, (b) ML data
The figures show that the accuracy increases as the number of factors increases. This is a different concept from choosing the optimal number of clusters in a clustering application; it is rather a question of how much information to represent in the latent variables. We have considered regularization over the factors, which avoids the overfitting problem for a large number of factors. To choose the number of factors, we need to consider the trade-off between the accuracy and the computation time, which is quadratic in the number of factors. The difference between the plain matrix factorization method and the supervised one decreases as the number of factors increases.
This indicates that the usefulness of supervised matrix factorization at lower number of factors. 6. DISCUSSIONS The loss functions LA in Eq. (2) and LC in Eq. (3) use squared loss due to computationally convenience. Actually, squared loss does not precisely describe the underlying noise model, because the weights of adjacency matrix can only take nonnegative values, in our case, zero or one only, and the components of content matrix C can only take nonnegative integers. Therefore, we can apply other types of loss, such as hinge loss or smoothed hinge loss, e.g. LA(U, Z) = µh(A, ZUZ ), where h(A, B) =P i,j [1 − AijBij]+ . In our paper, we mainly discuss the application of classification. A entry of matrix Z means the relationship of a web page and a factor. The values of the entries are the weights of linear model, instead of the probabilities of web pages belonging to latent topics. Therefore, we allow the components take any possible real values. When we come to the clustering application, we can use this model to find Z, then apply K-means to partition the web pages into clusters. Actually, we can use the idea of nonnegative matrix factorization for clustering [20] to directly cluster web pages. As the example with nonnegative constraints shown in Section 3, we represent each cluster by a latent topic, i.e. the dimensionality of the latent space is set to the number of clusters we want. Then the problem of Eq. (4) becomes min U,V,Z J (U, V, Z), s.t.Z ≥ 0. (9) Solving Eq. (9), we can obtain more interpretable results, which could be used for clustering. 7. CONCLUSIONS In this paper, we study the problem of how to combine the information of content and links for web page analysis, mainly on classification application. We propose a simple approach using factors to model the text content and link structure of web pages/documents. The directed links are generated from the linear combination of linkage of between source and destination factors. By sharing factors between text content and link structure, it is easy to combine both the content information and link structure. Our experiments show our approach is effective for classification. We also discuss an extension for clustering application. Acknowledgment We would like to thank Dr. Dengyong Zhou for sharing his code of his algorithm. Also, thanks to the reviewers for constructive comments. 8. REFERENCES [1] CMU world wide knowledge base (WebKB) project. Available at http://www.cs.cmu.edu/∼WebKB/. [2] D. Achlioptas, A. Fiat, A. R. Karlin, and F. McSherry. Web search via hub synthesis. In IEEE Symposium on Foundations of Computer Science, pages 500-509, 2001. [3] S. Chakrabarti, B. E. Dom, and P. Indyk. Enhanced hypertext categorization using hyperlinks. In L. M. Haas and A. Tiwary, editors, Proceedings of SIGMOD-98, ACM International Conference on Management of Data, pages 307-318, Seattle, US, 1998. ACM Press, New York, US. [4] C.-C. Chang and C.-J. Lin. LIBSVM: a library for support vector machines, 2001. Software available at http://www.csie.ntu.edu.tw/∼cjlin/libsvm. [5] D. Cohn and H. Chang. Learning to probabilistically identify authoritative documents. Proc. ICML 2000. pp.167-174., 2000. [6] D. Cohn and T. Hofmann. The missing link - a probabilistic model of document content and hypertext connectivity. In T. K. Leen, T. G. Dietterich, and V. Tresp, editors, Advances in Neural Information Processing Systems 13, pages 430-436. MIT Press, 2001. [7] C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20:273, 1995. [8] S. 
7. CONCLUSIONS
In this paper, we study the problem of how to combine content and link information for web page analysis, mainly for the classification application. We propose a simple approach that uses factors to model the text content and link structure of web pages/documents. The directed links are generated from a linear combination of the linkages between source and destination factors. By sharing factors between text content and link structure, it is easy to combine both sources of information. Our experiments show that our approach is effective for classification. We also discuss an extension for the clustering application.

Acknowledgment
We would like to thank Dr. Dengyong Zhou for sharing the code of his algorithm. We also thank the reviewers for their constructive comments.

8. REFERENCES
[1] CMU World Wide Knowledge Base (WebKB) project. Available at http://www.cs.cmu.edu/~WebKB/.
[2] D. Achlioptas, A. Fiat, A. R. Karlin, and F. McSherry. Web search via hub synthesis. In IEEE Symposium on Foundations of Computer Science, pages 500-509, 2001.
[3] S. Chakrabarti, B. E. Dom, and P. Indyk. Enhanced hypertext categorization using hyperlinks. In Proceedings of SIGMOD-98, pages 307-318. ACM Press, 1998.
[4] C.-C. Chang and C.-J. Lin. LIBSVM: a library for support vector machines, 2001. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[5] D. Cohn and H. Chang. Learning to probabilistically identify authoritative documents. In Proceedings of ICML 2000, pages 167-174, 2000.
[6] D. Cohn and T. Hofmann. The missing link - a probabilistic model of document content and hypertext connectivity. In Advances in Neural Information Processing Systems 13, pages 430-436. MIT Press, 2001.
[7] C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20(3):273-297, 1995.
[8] S. C. Deerwester, S. T. Dumais, T. K. Landauer, G. W. Furnas, and R. A. Harshman. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6):391-407, 1990.
[9] X. He, H. Zha, C. Ding, and H. Simon. Web document clustering using hyperlink structures. Computational Statistics and Data Analysis, 41(1):19-45, 2002.
[10] T. Hofmann. Probabilistic latent semantic indexing. In Proceedings of the 22nd Annual International ACM SIGIR Conference, 1999.
[11] T. Joachims, N. Cristianini, and J. Shawe-Taylor. Composite kernels for hypertext categorisation. In Proceedings of ICML-01, pages 250-257. Morgan Kaufmann, 2001.
[12] J. M. Kleinberg. Authoritative sources in a hyperlinked environment. Journal of the ACM, 46(5):604-632, 1999.
[13] P. Kolari, T. Finin, and A. Joshi. SVMs for the blogosphere: blog identification and splog detection. In AAAI Spring Symposium on Computational Approaches to Analyzing Weblogs, 2006.
[14] O. Kurland and L. Lee. PageRank without hyperlinks: structural re-ranking using links induced by language models. In Proceedings of SIGIR '05, pages 306-313. ACM Press, 2005.
[15] A. McCallum, K. Nigam, J. Rennie, and K. Seymore. Automating the construction of internet portals with machine learning. Information Retrieval Journal, 3:127-163, 2000.
[16] H.-J. Oh, S. H. Myaeng, and M.-H. Lee. A practical hypertext categorization method using links and incrementally available class information. In Proceedings of SIGIR '00, pages 264-271. ACM Press, 2000.
[17] L. Page, S. Brin, R. Motwani, and T. Winograd. The PageRank citation ranking: bringing order to the web. Stanford Digital Library working paper 1997-0072, 1997.
[18] C. Spearman. "General intelligence," objectively determined and measured. The American Journal of Psychology, 15(2):201-292, 1904.
[19] B. Taskar, P. Abbeel, and D. Koller. Discriminative probabilistic models for relational data. In Proceedings of the 18th Conference on Uncertainty in Artificial Intelligence (UAI), 2002.
[20] W. Xu, X. Liu, and Y. Gong. Document clustering based on non-negative matrix factorization. In Proceedings of SIGIR '03, pages 267-273. ACM Press, 2003.
[21] Y. Yang, S. Slattery, and R. Ghani. A study of approaches to hypertext categorization. Journal of Intelligent Information Systems, 18(2-3):219-241, 2002.
[22] K. Yu, S. Yu, and V. Tresp. Multi-label informed latent semantic indexing. In Proceedings of SIGIR '05, pages 258-265. ACM Press, 2005.
[23] T. Zhang, A. Popescul, and B. Dom. Linear prediction models with graph regularization for web-page categorization. In Proceedings of KDD '06, pages 821-826. ACM Press, 2006.
[24] D. Zhou, J. Huang, and B. Schölkopf. Learning from labeled and unlabeled data on a directed graph. In Proceedings of the 22nd International Conference on Machine Learning, 2005.
[25] D. Zhou, B. Schölkopf, and T. Hofmann. Semi-supervised learning on directed graphs. In Advances in Neural Information Processing Systems, 2004.
Combining Content and Link for Classification using Matrix Factorization ABSTRACT The world wide web contains rich textual contents that are interconnected via complex hyperlinks. This huge database violates the assumption held by most of conventional statistical methods that each web page is considered as an independent and identical sample. It is thus difficult to apply traditional mining or learning methods for solving web mining problems, e.g., web page classification, by exploiting both the content and the link structure. The research in this direction has recently received considerable attention but are still in an early stage. Though a few methods exploit both the link structure or the content information, some of them combine the only authority information with the content information, and the others first decompose the link structure into hub and authority features, then apply them as additional document features. Being practically attractive for its great simplicity, this paper aims to design an algorithm that exploits both the content and linkage information, by carrying out a joint factorization on both the linkage adjacency matrix and the document-term matrix, and derives a new representation for web pages in a low-dimensional factor space, without explicitly separating them as content, hub or authority factors. Further analysis can be performed based on the compact representation of web pages. In the experiments, the proposed method is compared with state-of-the-art methods and demonstrates an excellent accuracy in hypertext classification on the WebKB and Cora benchmarks. 1. INTRODUCTION Comparing to data in tra Performing information management tasks on such structured data raises many new research challenges. For the classification problem of web pages, a simple approach is to treat web pages as independent documents. The advantage of this approach is that many off-the-shelf classification tools can be directly applied to the problem. However, this approach relies only on the content of web pages and ignores the structure of links among them. Link structures provide invaluable information about properties of the documents as well as relationships among them. Since some links among these documents imply the inter-dependence among the documents, the usual i.i.d. (independent and identical distributed) assumption of documents does not hold any more. From this point of view, the traditional classification methods that ignore the link structure may not be suitable. On the other hand, a few studies, for example [25], rely solely on link structures. It is however a very rare case that content information can be ignorable. For example, in the Cora dataset, the content of a research article abstract largely determines the category of the article. To improve the performance of web page classification, therefore, both link structure and content information should be taken into consideration. To achieve this goal, a simple approach is to convert one type of information to the other. For example, in spam blog classification, Kolari et al. [13] concatenate outlink features with the content features of the blog. In document classification, Kurland and Lee [14] convert content similarity among documents into weights of links. However, link and content information have different properties. 
For example, a link is an actual piece of evidence that represents an asymmetric relationship whereas the content similarity is usually defined conceptually for every pair of documents in a symmetric way. Therefore, directly converting one type of information to the other usually degrades the quality of information. On the other hand, there exist some studies, as we will discuss in detail in related work, that consider link information and content information separately and then combine them. We argue that such an approach ignores the inherent consistency between link and con tent information and therefore fails to combine the two seamlessly. Some work, such as [3], incorporates link information using cocitation similarity, but this may not fully capture the global link structure. But using cocitation similarity, the similarity between v4 and v5 is zero without considering other information. Figure 1: An example of link structure In this paper, we propose a simple technique for analyzing inter-connected documents, such as web pages, using factor analysis [18]. In the proposed technique, both content information and link structures are seamlessly combined through a single set of latentfactors. Our model contains two components. The first component captures the content information. This component has a form similar to that of the latent topics in the Latent Semantic Indexing (LSI) [8] in traditional information retrieval. That is, documents are decomposed into latent topics/factors, which in turn are represented as term vectors. The second component captures the information contained in the underlying link structure, such as links from homepages of students to those of faculty members. A factor can be loosely considered as a type of documents (e.g., those homepages belonging to students). Instead, similar to LSI, the factors are learned from the data. Traditional factor analysis models the variables associated with entities through the factors. However, in analysis of link structures, we need to model the relationship of two ends of links, i.e., edges between vertex pairs. Therefore, the model should involve factors of both vertices of the edge. This is a key difference between traditional factor analysis and our model. In our model, we connect two components through a set of shared factors, that is, the latent factors in the second component (for contents) are tied to the factors in the first component (for links). By doing this, we search for a unified set of latent factors that best explains both content and link structures simultaneously and seamlessly. Because the two factorizations share a common base, the discovered bases (latent factors) explain both content information and link structures, and are then used in further information management tasks such as classification. This paper is organized as follows. Section 2 reviews related work. Section 3 presents the proposed approach to analyze the web page based on the combined information of links and content. Section 5 shows the experiment results. Section 6 discusses the details of this approach and Section 7 concludes. 2. RELATED WORK In the content analysis part, our approach is closely related to Latent Semantic Indexing (LSI) [8]. LSI maps documents into a lower dimensional latent space. The latent space implicitly captures a large portion of information of documents, therefore it is called the latent semantic space. 
The similarity between documents could be defined by the dot products of the corresponding vectors of documents in the latent space. Analysis tasks, such as classification, could be performed on the latent space. The commonly used singular value decomposition (SVD) method ensures that the data points in the latent space can optimally reconstruct the original documents. Though our approach also uses latent space to represent web pages (documents), we consider the link structure as well as the content of web pages. In the link analysis approach, the framework of hubs and authorities (HITS) [12] puts web page into two categories, hubs and authorities. Using recursive notion, a hub is a web page with many outgoing links to authorities, while an authority is a web page with many incoming links from hubs. Instead of using two categories, PageRank [17] uses a single category for the recursive notion, an authority is a web page with many incoming links from authorities. He et al. [9] propose a clustering algorithm for web document clustering. The algorithm incorporates link structure and the co-citation patterns. In the algorithm, all links are treated as undirected edge of the link graph. The content information is only used for weighing the links by the textual similarity of both ends of the links. Zhang et al. [23] uses the undirected graph regularization framework for document classification. Achlioptas et al [2] decompose the web into hub and authority attributes then combine them with content. The framework combines the hub and authority information of web pages. But it is difficult to combine the content information into that framework. Our approach consider the content and the directed linkage between topics of source and destination web pages in one step, which implies the topic combines the information of web page as authorities and as hubs in a single set offactors. Cohn and Hofmann [6] construct the latent space from both content and link information, using content analysis based on probabilistic LSI (PLSI) [10] and link analysis based on PHITS [5]. The major difference between the approach of [6] (PLSI+PHITS) and our approach is in the part of link analysis. In PLSI+PHITS, the link is constructed with the linkage from the topic of the source web page to the destination web page. In the model, the outgoing links of the destination web page have no effect on the source web page. In other words, the overall link structure is not utilized in PHITS. In our approach, the link is constructed with the linkage between the factor of the source web page and the factor of the destination web page, instead of the destination web page itself. The factor of the destination web page contains information of its outgoing links. In turn, such information is passed to the factor of the source web page. The model was applied to web page classification, where web pages are entities and hyperlinks are treated as relationships. RMNs apply conditional random fields to define a set of potential functions on cliques of random variables, where the link structure provides hints to form the cliques. the potential functions. The following are some work on combining documents and links, but the methods are loosely related to our approach. The experiments of [21] show that using terms from the linked document improves the classification accuracy. Chakrabarti et al. [3] use co-citation information in their classification model. Joachims et al. [11] combine text kernels and co-citation kernels for classification. 
Oh et al. [16] use the Naive Bayes framework to combine link information with content. 7. CONCLUSIONS In this paper, we study the problem of combining content and link information for web page analysis, mainly for the classification application. We propose a simple approach that uses latent factors to model both the text content and the link structure of web pages/documents. Directed links are generated from a linear combination of the interactions between source and destination factors. Because the factors are shared between the content model and the link model, the two kinds of information are combined seamlessly. Our experiments show that the approach is effective for classification. We also discuss an extension to the clustering application.
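To make the shared-factor idea concrete, here is a minimal sketch of a joint factorization in NumPy: a link adjacency matrix A and a document-term matrix C are reconstructed through one shared factor matrix Z. The specific objective below (squared reconstruction losses tied by a trade-off weight, an L2 penalty, and plain gradient descent) is an illustrative assumption, not the paper's exact formulation.

```python
import numpy as np

# Toy sketch of a shared-factor model: jointly factorize the link adjacency
# matrix A (n x n) and the document-term matrix C (n x m) through one set of
# latent page factors Z. Objective (an assumption for illustration):
#   ||A - Z M Z^T||^2 + lam * ||C - Z V^T||^2 + reg * (||Z||^2 + ||M||^2 + ||V||^2)
def joint_factorize(A, C, k=10, lam=1.0, reg=0.1, lr=1e-3, iters=500):
    n, m = C.shape
    rng = np.random.default_rng(0)
    Z = rng.normal(scale=0.1, size=(n, k))   # shared page factors
    M = rng.normal(scale=0.1, size=(k, k))   # factor-to-factor link weights
    V = rng.normal(scale=0.1, size=(m, k))   # term loadings
    for _ in range(iters):
        RA = A - Z @ M @ Z.T                 # link reconstruction residual
        RC = C - Z @ V.T                     # content reconstruction residual
        gZ = -2 * (RA @ Z @ M.T + RA.T @ Z @ M) - 2 * lam * (RC @ V) + 2 * reg * Z
        gM = -2 * (Z.T @ RA @ Z) + 2 * reg * M
        gV = -2 * lam * (RC.T @ Z) + 2 * reg * V
        Z -= lr * gZ
        M -= lr * gM
        V -= lr * gV
    return Z, M, V   # rows of Z are low-dimensional page representations

# Usage: feed the rows of Z into any off-the-shelf classifier for hypertext
# classification; both content and link structure are encoded in Z.
```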
H-69
Ranking Web Objects from Multiple Communities
Vertical search is a promising direction as it leverages domain-specific knowledge and can provide more precise information for users. In this paper, we study the Web object-ranking problem, one of the key issues in building a vertical search engine. More specifically, we focus on this problem in cases when objects lack relationships between different Web communities, and take high-quality photo search as the test bed for this investigation. We proposed two score fusion methods that can automatically integrate as many Web communities (Web forums) with rating information as possible. The proposed fusion methods leverage the hidden links discovered by a duplicate photo detection algorithm, and aims at minimizing score differences of duplicate photos in different forums. Both intermediate results and user studies show the proposed fusion methods are practical and efficient solutions to Web object ranking in cases we have described. Though the experiments were conducted on high-quality photo ranking, the proposed algorithms are also applicable to other ranking problems, such as movie ranking and music ranking.
[ "rank", "web object", "vertic search", "web object-rank", "web object-rank problem", "score fusion method", "duplic photo detect algorithm", "algorithm", "domain specif knowledg", "high-qualiti photo search", "imag search queri", "rank photo", "multipl web forum", "nonlinear fusion method", "rank function", "imag search" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "M", "M", "M", "R", "R", "M", "M", "M" ]
Ranking Web Objects from Multiple Communities Le Chen ∗ Le.Chen@idiap.ch Lei Zhang leizhang@ microsoft.com Feng Jing fengjing@ microsoft.com Ke-Feng Deng kefengdeng@hotmail.com Wei-Ying Ma wyma@microsoft.com Microsoft Research Asia 5F, Sigma Center, No. 49, Zhichun Road Haidian District, Beijing, 100080, P R China ABSTRACT Vertical search is a promising direction as it leverages domainspecific knowledge and can provide more precise information for users. In this paper, we study the Web object-ranking problem, one of the key issues in building a vertical search engine. More specifically, we focus on this problem in cases when objects lack relationships between different Web communities, and take high-quality photo search as the test bed for this investigation. We proposed two score fusion methods that can automatically integrate as many Web communities (Web forums) with rating information as possible. The proposed fusion methods leverage the hidden links discovered by a duplicate photo detection algorithm, and aims at minimizing score differences of duplicate photos in different forums. Both intermediate results and user studies show the proposed fusion methods are practical and efficient solutions to Web object ranking in cases we have described. Though the experiments were conducted on high-quality photo ranking, the proposed algorithms are also applicable to other ranking problems, such as movie ranking and music ranking. Categories and Subject Descriptors H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval; G.2.2 [Discrete Mathematics]: Graph Theory; H.3.5 [Information Storage and Retrieval]: Online Information Services - Web-based services General Terms Algorithms, Experimentation 1. INTRODUCTION Despite numerous refinements and optimizations, general purpose search engines still fail to find relevant results for many queries. As a new trend, vertical search has shown promise because it can leverage domain-specific knowledge and is more effective in connecting users with the information they want. There are many vertical search engines, including some for paper search (e.g. Libra [21], Citeseer [7] and Google Scholar [4]), product search (e.g. Froogle [5]), movie search [6], image search [1, 8], video search [6], local search [2], as well as news search [3]. We believe the vertical search engine trend will continue to grow. Essentially, building vertical search engines includes data crawling, information extraction, object identification and integration, and object-level Web information retrieval (or Web object ranking) [20], among which ranking is one of the most important factors. This is because it deals with the core problem of how to combine and rank objects coming from multiple communities. Although object-level ranking has been well studied in building vertical search engines, there are still some kinds of vertical domains in which objects cannot be effectively ranked. For example, algorithms that evolved from PageRank [22], PopRank [21] and LinkFusion [27] were proposed to rank objects coming from multiple communities, but can only work on well-defined graphs of heterogeneous data. Well-defined means that like objects (e.g. authors in paper search) can be identified in multiple communities (e.g. conferences). This allows heterogeneous objects to be well linked to form a graph through leveraging all the relationships (e.g. cited-by, authored-by and published-by) among the multiple communities. 
However, this assumption does not always stand for some domains. High-quality photo search, movie search and news search are exceptions. For example, a photograph forum 377 website usually includes three kinds of objects: photos, authors and reviewers. Yet different photo forums seem to lack any relationships, as there are no cited-by relationships. This makes it difficult to judge whether two authors cited are the same author, or two photos are indeed identical photos. Consequently, although each photo has a rating score in a forum, it is non-trivial to rank photos coming from different photo forums. Similar problems also exist in movie search and news search. Although two movie titles can be identified as the same one by title and director in different movie discussion groups, it is non-trivial to combine rating scores from different discussion groups and rank movies effectively. We call such non-trivial object relationship in which identification is difficult, incomplete relationships. Other related work includes rank aggregation for the Web [13, 14], and learning algorithm for rank, such as RankBoost [15], RankSVM [17, 19], and RankNet [12]. We will contrast differences of these methods with the proposed methods after we have described the problem and our methods. We will specifically focus on Web object-ranking problem in cases that lack object relationships or have with incomplete object relationships, and take high-quality photo search as the test bed for this investigation. In the following, we will introduce rationale for building high-quality photo search. 1.1 High-Quality Photo Search In the past ten years, the Internet has grown to become an incredible resource, allowing users to easily access a huge number of images. However, compared to the more than 1 billion images indexed by commercial search engines, actual queries submitted to image search engines are relatively minor, and occupy only 8-10 percent of total image and text queries submitted to commercial search engines [24]. This is partially because user requirements for image search are far less than those for general text search. On the other hand, current commercial search engines still cannot well meet various user requirements, because there is no effective and practical solution to understand image content. To better understand user needs in image search, we conducted a query log analysis based on a commercial search engine. The result shows that more than 20% of image search queries are related to nature and places and daily life categories. Users apparently are interested in enjoying high-quality photos or searching for beautiful images of locations or other kinds. However, such user needs are not well supported by current image search engines because of the difficulty of the quality assessment problem. Ideally, the most critical part of a search engine - the ranking function - can be simplified as consisting of two key factors: relevance and quality. For the relevance factor, search in current commercial image search engines provide most returned images that are quite relevant to queries, except for some ambiguity. However, as to quality factor, there is still no way to give an optimal rank to an image. Though content-based image quality assessment has been investigated over many years [23, 25, 26], it is still far from ready to provide a realistic quality measure in the immediate future. 
Seemingly, it really looks pessimistic to build an image search engine that can fulfill the potentially large requirement of enjoying high-quality photos. Various proliferating Web communities, however, notices us that people today have created and shared a lot of high-quality photos on the Web on virtually any topics, which provide a rich source for building a better image search engine. In general, photos from various photo forums are of higher quality than personal photos, and are also much more appealing to public users than personal photos. In addition, photos uploaded to photo forums generally require rich metadata about title, camera setting, category and description to be provide by photographers. These metadata are actually the most precise descriptions for photos and undoubtedly can be indexed to help search engines find relevant results. More important, there are volunteer users in Web communities actively providing valuable ratings for these photos. The rating information is generally of great value in solving the photo quality ranking problem. Motivated by such observations, we have been attempting to build a vertical photo search engine by extracting rich metadata and integrating information form various photo Web forums. In this paper, we specifically focus on how to rank photos from multiple Web forums. Intuitively, the rating scores from different photo forums can be empirically normalized based on the number of photos and the number of users in each forum. However, such a straightforward approach usually requires large manual effort in both tedious parameter tuning and subjective results evaluation, which makes it impractical when there are tens or hundreds of photo forums to combine. To address this problem, we seek to build relationships/links between different photo forums. That is, we first adopt an efficient algorithm to find duplicate photos which can be considered as hidden links connecting multiple forums. We then formulate the ranking challenge as an optimization problem, which eventually results in an optimal ranking function. 1.2 Main Contributions and Organization. The main contributions of this paper are: 1. We have proposed and built a vertical image search engine by leveraging rich metadata from various photo forum Web sites to meet user requirements of searching for and enjoying high-quality photos, which is impossible in traditional image search engines. 2. We have proposed two kinds of Web object-ranking algorithms for photos with incomplete relationships, which can automatically and efficiently integrate as many as possible Web communities with rating information and achieves an equal qualitative result compared with the manually tuned fusion scheme. The rest of this paper is organized as follows. In Section 2, we present in detail the proposed solutions to the ranking problem, including how to find hidden links between different forums, normalize rating scores, obtain the optimal ranking function, and contrast our methods with some other related research. In Section 3, we describe the experimental setting and experiments and user studies conducted to evaluate our algorithm. Our conclusion and a discussion of future work is in Section 4. It is worth noting that although we treat vertical photo search as the test bed in this paper, the proposed ranking algorithm can also be applied to rank other content that includes video clips, poems, short stories, drawings, sculptures, music, and so on. 378 2. 
ALGORITHM 2.1 Overview The difficulty of integrating multiple Web forums is in their different rating systems, where there are generally two kinds of freedom. The first kind of freedom is the rating interval or rating scale including the minimal and maximal ratings for each Web object. For example, some forums use a 5-point rating scale whereas other forums use 3-point or 10-point rating scales. It seems easy to fix this freedom, but detailed analysis of the data and experiments show that it is a non-trivial problem. The second kind of freedom is the varying rating criteria found in different Web forums. That is, the same score does not mean the same quality in different forums. Intuitively, if we can detect same photographers or same photographs, we can build relationships between any two photo forums and therefore can standardize the rating criterion by score normalization and transformation. Fortunately, we find that quite a number of duplicate photographs exist in various Web photo forums. This fact is reasonable when considering that photographers sometimes submit a photo to more than one forum to obtain critiques or in hopes of widespread publicity. In this work, we adopt an efficient duplicate photo detection algorithm [10] to find these photos. The proposed methods below are based on the following considerations. Faced with the need to overcome a ranking problem, a standardized rating criterion rather than a reasonable rating criterion is needed. Therefore, we can take a large scale forum as the reference forum, and align other forums by taking into account duplicate Web objects (duplicate photos in this work). Ideally, the scores of duplicate photos should be equal even though they are in different forums. Yet we can deem that scores in different forumsexcept for the reference forum - can vary in a parametric space. This can be determined by minimizing the objective function defined by the sum of squares of the score differences. By formulating the ranking problem as an optimization problem that attempts to make the scores of duplicate photos in non-reference forums as close as possible to those in the reference forum, we can effectively solve the ranking problem. For convenience, the following notations are employed. Ski and ¯Ski denote the total score and mean score of ith Web object (photo) in the kth Web site, respectively. The total score refers to the sum of the various rating scores (e.g., novelty rating and aesthetic rating), and the mean score refers to the mean of the various rating scores. Suppose there are a total of K Web sites. We further use {Skl i |i = 1, ..., Ikl; k, l = 1, ..., K; k = l} to denote the set of scores for Web objects (photos) in kth Web forums that are duplicate with the lth Web forums, where Ikl is the total number of duplicate Web objects between these two Web sites. In general, score fusion can be seen as the procedure of finding K transforms ψk(Ski) = eSki, k = 1, ..., K such that eSki can be used to rank Web objects from different Web sites. The objective function described in the above Figure 1: Web community integration. Each Web community forms a subgraph, and all communities are linked together by some hidden links (dashed lines). paragraph can then be formulated as min {ψk|k=2,...,K} KX k=2 Ik1X i=1 ¯wk i S1k i − ψk(Sk1 i ) 2 (1) where we use k = 1 as the reference forum and thus ψ1(S1i) = S1i. 
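Restated for readability (a reconstruction based on the surrounding definitions and on the linear special case given later in Eq. 5), the fusion objective in Eq. 1 reads:

$$\min_{\{\psi_k \mid k = 2, \ldots, K\}} \; \sum_{k=2}^{K} \sum_{i=1}^{I_{k1}} \bar{w}^{k}_{i} \left[ S^{1k}_{i} - \psi_k\!\left(S^{k1}_{i}\right) \right]^{2} \qquad (1)$$

with $\psi_1(S_{1i}) = S_{1i}$ fixed for the reference forum.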
¯wk i (≥ 0) is the weight coefficient that can be set heuristically according to the numbers of voters (reviewers or commenters) in both the reference forum and the non-reference forum. The more reviewers, the more popular the photo is and the larger the corresponding weight ¯wk i should be. In this work, we do not inspect the problem of how to choose ¯wk i and simply set them to one. But we believe the proper use of ¯wk i , which leverages more information, can significantly improve the results. Figure 1 illustrates the aforementioned idea. The Web Community 1 is the reference community. The dashed lines are links indicating that the two linked Web objects are actually the same. The proposed algorithm will try to find the best ψk(k = 2, ..., K), which has certain parametric forms according to certain models. So as to minimize the cost function defined in Eq. 1, the summation is taken on all the red dashed lines. We will first discuss the score normalization methods in Section 2.2, which serves as the basis for the following work. Before we describe the proposed ranking algorithms, we first introduce a manually tuned method in Section 2.3, which is laborious and even impractical when the number of communities become large. In Section 2.4, we will briefly explain how to precisely find duplicate photos between Web forums. Then we will describe the two proposed methods: Linear fusion and Non-linear fusion, and a performance measure for result evaluation in Section 2.5. Finally, in Section 2.6 we will discuss the relationship of the proposed methods with some other related work. 2.2 Score Normalization Since different Web (photo) forums on the Web usually have different rating criteria, it is necessary to normalize them before applying different kinds of fusion methods. In addition, as there are many kinds of ratings, such as ratings for novelty, ratings for aesthetics etc, it is reasonable to choose a common one - total score or average scorethat can always be extracted in any Web forum or calculated by corresponding ratings. This allows the normaliza379 tion method on the total score or average score to be viewed as an impartial rating method between different Web forums. It is straightforward to normalize average scores by linearly transforming them to a fixed interval. We call this kind of score as Scaled Mean Score. The difficulty, however, of using this normalization method is that, if there are only a few users rating an object, say a photo in a photo forum, the average score for the object is likely to be spammed or skewed. Total score can avoid such drawbacks that contain more information such as a Web object``s quality and popularity. The problem is thus how to normalize total scores in different Web forums. The simplest way may be normalization by the maximal and minimal scores. The drawback of this normalization method is it is non robust, or in other words, it is sensitive to outliers. To make the normalization insensitive to unusual data, we propose the Mode-90% Percentile normalization method. Here, the mode score represents the total score that has been assigned to more photos than any other total score. And The high percentile score (e.g.,90%) represents the total score for which the high percentile of images have a lower total score. This normalization method utilizes the mode and 90% percentile as two reference points to align two rating systems, which makes the distributions of total scores in different forums more consistent. 
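As an illustration of the Mode-90% Percentile normalization just described, the sketch below maps the mode of a forum's total-score distribution and its 90th percentile onto two fixed reference values with an affine transform. The reference values (5 and 8, as used later in the experiments), the exact affine form, and the toy Poisson data are assumptions of this sketch, not a specification of the original system.

```python
import numpy as np

# Sketch of Mode-90% Percentile normalization: the mode and the 90th percentile
# of a forum's total scores are used as two reference points and mapped to
# fixed values (ref_mode, ref_p90) by an affine transform. Integer-valued
# total scores are assumed so that the empirical mode is meaningful.
def mode_90_normalize(total_scores, ref_mode=5.0, ref_p90=8.0):
    scores = np.asarray(total_scores, dtype=float)
    values, counts = np.unique(scores, return_counts=True)
    mode = values[np.argmax(counts)]        # score value with the highest count
    p90 = np.percentile(scores, 90)
    if p90 == mode:                         # degenerate distribution; leave as is
        return scores
    slope = (ref_p90 - ref_mode) / (p90 - mode)
    return ref_mode + slope * (scores - mode)   # sends mode -> ref_mode, p90 -> ref_p90

# Example: two forums with very different raw scales become roughly comparable.
forum_a = mode_90_normalize(np.random.poisson(20, 1000))
forum_b = mode_90_normalize(np.random.poisson(60, 1000))
```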
The underlying assumption, for example in different photo forums, is that, even though the qualities of the top photos in different forums may vary greatly and be less dependent on the forum quality, the distribution of photos of middle-level quality (from the mode to the 90% percentile) should be of almost the same quality across forums, up to a freedom that reflects the rating criterion (strictness) of each Web forum. Photos of this middle level usually occupy more than 70% of the total photos in a forum. We will give a more detailed analysis of the scores in Section 3.2. 2.3 Manual Fusion The Web movie forum IMDB [16] proposed a Bayesian-ranking function to normalize rating scores within one community. Motivated by this ranking function, we propose the following manual fusion method: for the kth Web site, we use the formula
$$\tilde{S}_{ki} = \alpha_k \cdot \left( \frac{n_k \cdot \bar{S}_{ki}}{n_k + n_k^*} + \frac{n_k^* \cdot S_k^*}{n_k + n_k^*} \right) \qquad (2)$$
to rank photos, where n_k is the number of votes and n_k^*, S_k^* and α_k are three parameters. This ranking function first balances the original mean score \bar{S}_{ki} against a reference score S_k^* to obtain a weighted mean score, which may be more reliable than \bar{S}_{ki}. The weighted mean score is then scaled by α_k to obtain the final score \tilde{S}_{ki}. For n Web communities, there are then about 3n parameters in {(α_k, n_k^*, S_k^*) | k = 1, ..., n} to tune. Though this method can achieve fairly good results after careful and thorough manual tuning of these parameters, when n becomes large - say, when tens or hundreds of Web communities are crawled and indexed - it becomes increasingly laborious and eventually impractical. It is therefore desirable to find an effective fusion method whose parameters can be determined automatically. 2.4 Duplicate Photo Detection We use Dedup [10], an efficient and effective duplicate image detection algorithm, to find duplicate photos between any two photo forums. The algorithm uses a hash function to map a high-dimensional feature to a 32-bit hash code (see below for how the hash code is constructed). Its computational complexity for finding all duplicate images among n images is about O(n log n). The low-level visual feature for each photo is extracted on k × k regular grids. Based on all features extracted from the image database, a PCA model is built. The visual features are then transformed to a relatively low-dimensional, zero-mean PCA space (29 dimensions in our system). The hash code for each photo is then built as follows: each dimension is set to one if its value is greater than 0, and to 0 otherwise. Photos in the same bucket are deemed potential duplicates and are further filtered by a threshold on Euclidean similarity in the visual feature space. Figure 2 illustrates the hashing procedure, where visual features - mean gray values - are extracted on both 6 × 6 and 7 × 7 grids. The 85-dimensional features are transformed to a 32-dimensional vector, and the hash code is generated according to the signs. Figure 2: Hashing procedure for duplicate photo detection. 2.5 Score Fusion In this section, we present two score fusion solutions based on different parametric forms of ψ_k in Eq. 1. 2.5.1 Linear Fusion by Duplicate Photos Intuitively, the most straightforward way to factor out the uncertainties caused by the different criteria is to scale, relative to a given center, the total scores of each non-reference Web photo forum with respect to the reference forum.
More strictly, we assume ψ_k has the following form:
$$\psi_k(S_{ki}) = \alpha_k S_{ki} + t_k, \quad k = 2, \ldots, K \qquad (3)$$
$$\psi_1(S_{1i}) = S_{1i} \qquad (4)$$
which means that the scores of the kth (k ≠ 1) forum are scaled by α_k relative to the center t_k/(1 − α_k), as shown in Figure 3. If we substitute this ψ_k into Eq. 1, we obtain the objective function
$$\min_{\{\alpha_k, t_k \mid k=2,\ldots,K\}} \sum_{k=2}^{K} \sum_{i=1}^{I_{k1}} \bar{w}^{k}_{i} \left[ S^{1k}_{i} - \alpha_k S^{k1}_{i} - t_k \right]^2. \qquad (5)$$
By solving the set of equations
$$\frac{\partial f}{\partial \alpha_k} = 0, \quad \frac{\partial f}{\partial t_k} = 0, \quad k = 2, \ldots, K,$$
where f is the objective function defined in Eq. 5, we obtain the closed-form solution
$$\begin{pmatrix} \alpha_k \\ t_k \end{pmatrix} = A_k^{-1} L_k \qquad (6)$$
where
$$A_k = \begin{pmatrix} \sum_i \bar{w}_i \left(S^{k1}_{i}\right)^2 & \sum_i \bar{w}_i S^{k1}_{i} \\ \sum_i \bar{w}_i S^{k1}_{i} & \sum_i \bar{w}_i \end{pmatrix} \qquad (7)$$
$$L_k = \begin{pmatrix} \sum_i \bar{w}_i S^{1k}_{i} S^{k1}_{i} \\ \sum_i \bar{w}_i S^{1k}_{i} \end{pmatrix} \qquad (8)$$
and k = 2, ..., K. This is a linear fusion method. It enjoys simplicity and shows excellent performance in the experiments below. Figure 3: Linear fusion method. 2.5.2 Nonlinear Fusion by Duplicate Photos Sometimes we want a method that can adjust scores on an interval while leaving the two endpoints unchanged. As illustrated in Figure 4, such a method can tune scores between [C_0, C_1] while leaving the scores C_0 and C_1 themselves unchanged. This kind of fusion is much finer-grained than the linear one; it contains many more parameters to tune and is expected to further improve the results. Here, we propose a nonlinear fusion solution that satisfies such constraints. First, we introduce the transform
$$\eta_{c_0, c_1, \alpha}(x) = \begin{cases} \left( \dfrac{x - c_0}{c_1 - c_0} \right)^{\alpha} (c_1 - c_0) + c_0, & \text{if } x \in (c_0, c_1] \\ x, & \text{otherwise} \end{cases}$$
where α > 0. This transform satisfies, for x ∈ [c_0, c_1], that η_{c_0,c_1,α}(x) ∈ [c_0, c_1], with η_{c_0,c_1,α}(c_0) = c_0 and η_{c_0,c_1,α}(c_1) = c_1. We can then utilize this nonlinear transform to adjust the scores in a certain interval, say (M, T]:
$$\psi_k(S_{ki}) = \eta_{M, T, \alpha}(S_{ki}). \qquad (9)$$
Figure 4: Nonlinear fusion method; the intent is to finely adjust the shape of the curve in each segment. Even though there is no closed-form solution for the optimization problem
$$\min_{\{\alpha_k \mid k \in [2, K]\}} \sum_{k=2}^{K} \sum_{i=1}^{I_{k1}} \bar{w}^{k}_{i} \left[ S^{1k}_{i} - \eta_{M, T, \alpha_k}\!\left(S^{k1}_{i}\right) \right]^2,$$
it is not hard to obtain a numerical one. Under the same assumptions made in Section 2.2, we can use this method to adjust the scores of the middle level (from the mode to the 90% percentile). This more complicated nonlinear fusion method is expected to achieve better results than the linear one. However, difficulties in evaluating the ranking results prevented us from tuning these parameters extensively, and the current experiments in Section 3.5 do not reveal any advantage over the simple linear model. 2.5.3 Performance Measure of the Fusion Results Since our objective function is to make the scores of the same Web objects (e.g., duplicate photos) between a non-reference forum and the reference forum as close as possible, it is natural to investigate how close they become to each other and how the scores of the same Web objects change between two non-reference forums before and after score fusion. Taking Figure 1 as an example, the proposed algorithms minimize the score differences of the same Web objects in two Web forums: the reference forum (Web Community 1) and a non-reference forum, which corresponds to minimizing the objective function over the red dashed (hidden) links. After the optimization, what happens to the score differences of the same Web objects in two non-reference forums? In other words, do the scores of two objects linked by the green dashed (hidden) links become more consistent?
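As a concrete illustration of the closed-form linear fusion in Eqs. 6-8, here is a minimal sketch; the NumPy usage, the default unit weights, and the placeholder score arrays are assumptions of the sketch rather than part of the original system.

```python
import numpy as np

# Sketch of the linear fusion step (Eqs. 6-8): for one non-reference forum k,
# fit alpha_k and t_k so that alpha_k * s_k + t_k matches the reference-forum
# scores of the duplicate photos, in a weighted least-squares sense.
def fit_linear_fusion(s_ref, s_k, w=None):
    s_ref = np.asarray(s_ref, dtype=float)   # S^{1k}_i: duplicates' scores in the reference forum
    s_k = np.asarray(s_k, dtype=float)       # S^{k1}_i: the same photos' scores in forum k
    w = np.ones_like(s_k) if w is None else np.asarray(w, dtype=float)
    A = np.array([[np.sum(w * s_k**2), np.sum(w * s_k)],
                  [np.sum(w * s_k),    np.sum(w)]])     # Eq. 7
    L = np.array([np.sum(w * s_ref * s_k), np.sum(w * s_ref)])  # Eq. 8
    alpha_k, t_k = np.linalg.solve(A, L)     # Eq. 6: (alpha_k, t_k)^T = A_k^{-1} L_k
    return alpha_k, t_k

# Usage: the fitted (alpha_k, t_k) are applied to every score of forum k,
# not only to the duplicates used for fitting:
# alpha, t = fit_linear_fusion(dup_scores_ref, dup_scores_k)
# fused_scores_k = alpha * all_scores_k + t
```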
We therefore define the following performance measure - the δ measure - to quantify how the scores of the same Web objects in different Web forums change:
$$\delta_{kl} = \mathrm{Sim}\!\left(S^{lk}_{*}, S^{kl}_{*}\right) - \mathrm{Sim}\!\left(S^{lk}, S^{kl}\right) \qquad (10)$$
where $S^{kl} = (S^{kl}_{1}, \ldots, S^{kl}_{I_{kl}})^T$, $S^{kl}_{*} = (\tilde{S}^{kl}_{1}, \ldots, \tilde{S}^{kl}_{I_{kl}})^T$, and
$$\mathrm{Sim}(a, b) = \frac{a \cdot b}{\|a\| \, \|b\|}.$$
δ_kl > 0 means that after score fusion the scores of the same Web objects in the kth and lth Web forums become more consistent, which is what we expect. On the contrary, if δ_kl < 0, those scores become more inconsistent. Although we cannot rely on this measure to evaluate the final fusion results - ranking photos by popularity and quality is so subjective a process that every person may produce different rankings - it helps us understand the intermediate ranking results and provides insight into the final performance of the different ranking methods. 2.6 Contrasts with Other Related Work We have already mentioned in Section 1 the differences between the proposed methods and traditional methods such as the PageRank [22], PopRank [21], and LinkFusion [27] algorithms. Here, we discuss some other related work. The current problem can also be viewed as a rank aggregation problem [13, 14], as we deal with how to combine several rank lists. However, there are fundamental differences. First of all, unlike Web pages, which can be easily and accurately detected as identical, detecting the same photos in different Web forums is non-trivial and can only be implemented by delicate algorithms with limited precision and recall. Second, the numbers of duplicate photos between different Web forums are small relative to the whole photo sets (see Table 1). In other words, the top-K rank lists of different Web forums are almost disjoint for a given query. Under this condition, both the algorithms proposed in [13] and their measurements - Kendall tau distance or Spearman footrule distance - degenerate to trivial cases. Another category of rank fusion (aggregation) methods is based on machine learning algorithms, such as RankSVM [17, 19], RankBoost [15], and RankNet [12]. All of these methods require labelled datasets to train a model. In the current setting, it is difficult or even impossible to obtain datasets labelled by level of professionalism or popularity, since the photos are too vague and subjective to rank. Instead, the problem here is how to combine several ordered sub-lists to form a total order. 3. EXPERIMENTS In this section, we carry out our study on high-quality photo search. We first briefly introduce the newly proposed vertical image search engine, EnjoyPhoto, in Section 3.1. We then focus on how to rank photos from different Web forums. To do so, we first normalize the scores (ratings) of photos from the multiple Web forums in Section 3.2, and then find duplicate photos in Section 3.3. Some intermediate results are discussed using the δ measure in Section 3.4. Finally, a set of user studies is carried out to justify the proposed method in Section 3.5. 3.1 EnjoyPhoto: a High-Quality Photo Search Engine In order to meet the user requirement of enjoying high-quality photos, we propose and build a high-quality photo search engine, EnjoyPhoto, which accounts for the following three key issues: 1. how to crawl and index photos, 2. how to determine the quality of each photo, and 3. how to display the search results so as to make the search process enjoyable.
For a given text-based query, this system ranks the photos based on a certain combination of the relevance of the photo to the query (Issue 1) and the quality of the photo (Issue 2), and finally displays them in an enjoyable manner (Issue 3). As for Issue 3, we deliberately devise the interface of the system to smooth the users' process of enjoying high-quality photos. Techniques such as fisheye views and slide shows are utilized in the current system. Figure 5 shows the interface. We will not discuss this issue further as it is not an emphasis of this paper. Figure 5: EnjoyPhoto: an enjoyable high-quality photo search engine, where 26,477 records are returned for the query fall in about 0.421 seconds. As for Issue 1, we extracted from a commercial search engine a subset of photos coming from various photo forums all over the world, and explicitly parsed the Web pages containing these photos. The number of photos in the data collection is about 2.5 million. After the parsing, each photo was associated with its title, category, description, camera setting, EXIF data (when available for digital images; digital cameras save JPEG (.jpg) files with EXIF (Exchangeable Image File) data, in which camera settings and scene information are recorded - see www.digicamhelp.com/what-is-exif/), location (when available in some photo forums), and many kinds of ratings. All these metadata are generally precise descriptions or annotations of the image content, and are indexed by general text-based search technologies [9, 18, 11]. In the current system, the ranking function was specifically tuned to emphasize title, categorization, and rating information. Issue 2 is essentially dealt with in the following sections, which derive the quality of photos by analyzing the ratings provided by various Web photo forums. Here we chose six photo forums to study the ranking problem and denote them as Web-A, Web-B, Web-C, Web-D, Web-E and Web-F. 3.2 Photo Score Normalization Different score normalization methods are analyzed in detail in this section. In this analysis, the zero scores, which usually occupy about 30% of the total number of photos in some Web forums, are not currently taken into account. How to utilize these photos is left for future exploration. Figure 6: Distributions of mean scores normalized to [0, 10] (panels (a)-(f) correspond to Web-A through Web-F; x-axis: normalized score, y-axis: total number). In Figure 6, we list the distributions of the mean score, which is transformed to the fixed interval [0, 10]. The distributions of the average scores of these Web forums look quite different. The distributions in Figures 6(a), 6(b), and 6(e) look like Gaussian distributions, while those in Figures 6(d) and 6(f) are dominated by the top score. The reason for these eccentric distributions for Web-D and Web-F lies in their coarse rating systems. In fact, Web-D and Web-F use 2- or 3-point rating scales whereas the other Web forums use 7- or 14-point rating scales. Therefore, it would be problematic to use these averaged scores directly.
Furthermore, the average score is very likely to be spammed if there are only a few users rating a photo. Figure 7 shows the total score normalization method by maximal and minimal scores, which is one of our baseline systems. All the total scores of a given Web forum are normalized to [0, 100] according to the maximal and minimal scores of the corresponding Web forum. We notice that the total score distribution of Web-A in Figure 7(a) has two larger tails than all the others. To show the shape of the distributions more clearly, we only show the range [0, 25] in Figures 7(b), 7(c), 7(d), 7(e), and 7(f). Figure 7: Maxmin normalization (panels (a)-(f) correspond to Web-A through Web-F; x-axis: normalized score, y-axis: total number). Figure 8 shows the Mode-90% Percentile normalization method, where the modes of the six distributions are normalized to 5 and the 90% percentiles to 8. We can see that this normalization method makes the distributions of total scores in different forums more consistent. Figure 8: Mode-90% Percentile normalization (panels (a)-(f) correspond to Web-A through Web-F; x-axis: normalized score, y-axis: total number). The two proposed algorithms are both based on these normalization methods. 3.3 Duplicate Photo Detection Targeting computational efficiency, the Dedup algorithm may lose some recall, but it achieves high precision. We also focus on finding precise hidden links rather than all hidden links. Figure 9 shows some duplicate detection examples. The results are shown in Table 1 and verify that large numbers of duplicate photos exist between any two Web forums, even under the strict setting for Dedup in which we chose the first 29 bits as the hash code. Since there are only a few parameters to estimate in the proposed fusion methods, the numbers of duplicate photos shown in Table 1 are sufficient to determine these parameters. The last table column lists the total number of photos in the corresponding Web forums. 3.4 δ Measure The parameters of the proposed linear and nonlinear algorithms are calculated using the duplicate data shown in Table 1, where Web-C is chosen as the reference Web forum since it shares the most duplicate photos with the other forums. Tables 2 and 3 show the δ measure for the linear model and the nonlinear model. As δ_kl is symmetric and δ_kk = 0, we only show the upper triangular part. The NaN values in both tables arise because no duplicate photos were detected by the Dedup algorithm between those forums, as reported in Table 1.
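The δ values reported in Tables 2 and 3 follow Eq. 10; a minimal sketch of the computation is given below. The NumPy usage and the sign convention (similarity after fusion minus similarity before fusion, so that a positive value means more consistent) are assumptions of this sketch based on the description in Section 2.5.3.

```python
import numpy as np

# Sketch of the delta measure in Eq. 10: cosine similarity of the matched score
# vectors of duplicate photos after fusion minus the same similarity before
# fusion. Positive values mean the two forums' scores became more consistent.
def cosine(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def delta_measure(s_k_before, s_l_before, s_k_after, s_l_after):
    # All four arrays are aligned over the same duplicate photos of forums k and l.
    return cosine(s_k_after, s_l_after) - cosine(s_k_before, s_l_before)
```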
Table 1: Number of duplicate photos between each pair of Web forums.
       A       B       C      D       E      F    Scale
A      0     316   1,386    178     302      0     130k
B    316       0  14,708    909   8,023    348     675k
C  1,386  14,708       0  1,508  19,271  1,083   1,003k
D    178     909   1,508      0   1,084     21     155k
E    302   8,023  19,271  1,084       0     98     448k
F      0     348   1,083     21      98      0     122k
Figure 9: Some results of duplicate photo detection.
Table 2: δ measure on the linear model.
        Web-B   Web-C   Web-D   Web-E   Web-F
Web-A  0.0659  0.0911  0.0956  0.0928     NaN
Web-B       -  0.0672  0.0578  0.0791  0.4618
Web-C       -       -  0.0105  0.0070  0.2220
Web-D       -       -       -  0.0566  0.0232
Web-E       -       -       -       -  0.6525
The linear model guarantees that the δ measures related to the reference community should theoretically be no less than 0. This is indeed the case (see the entries involving Web-C in Table 2). But this model cannot guarantee that the δ measures between the non-reference communities are also no less than 0, as the normalization steps are based only on duplicate photos between the reference community and a non-reference community. The results show that all the numbers in the δ measure are greater than 0 (see the remaining entries in Table 2), which indicates that it is probable that this model will give optimal results. On the contrary, the nonlinear model does not guarantee that the δ measures related to the reference community are no less than 0, as not all duplicate photos between two Web forums can be used when optimizing this model. In fact, duplicate photos that lie in different intervals are not used in this model. It is these specific duplicate photos that make the δ measure negative. As a result, there are both negative and positive entries in Table 3, but overall the number of positive ones is greater than the number of negative ones (9:5), which indicates that the model may be better than the normalization-only method (see the next subsection), which has an all-zero δ measure, and worse than the linear model.
Table 3: δ measure on the nonlinear model.
        Web-B   Web-C   Web-D   Web-E   Web-F
Web-A  0.0559  0.0054 -0.0185 -0.0054     NaN
Web-B       - -0.0162 -0.0345 -0.0301  0.0466
Web-C       -       -  0.0136  0.0071  0.1264
Web-D       -       -       -  0.0032  0.0143
Web-E       -       -       -       -   0.214
3.5 User Study Because it is hard to find an objective criterion to evaluate which ranking function is better, we chose to employ user studies for subjective evaluation. Ten subjects were invited to participate in the user study. They were recruited from nearby universities. As search engines for both text search and image search are familiar to university students, there was no prerequisite criterion for choosing students. We conducted the user studies using Internet Explorer 6.0 on Windows XP with 17-inch LCD monitors set at 1,280 by 1,024 pixels in 32-bit color. Data was recorded with server logs and paper-based surveys after each task. Figure 10: User study interface. We specifically devised an interface for the user study, as shown in Figure 10. For each pair of fusion methods, participants were encouraged to try any query they wished. For those without specific ideas, two combo boxes (a category list and a query list) were provided on the bottom panel, offering the top 1,000 image search queries from a commercial search engine. After a participant submitted a query, the system randomly selected the left or right frame to display each of the two ranking results.
Participants were then required to judge which of the two ranking results was better, or whether the two were of equal quality, and to submit the judgment by choosing the corresponding radio button and clicking the Submit button. For example, in Figure 10, the query sunset is submitted to the system. Then 79,092 photos were returned and ranked by the Maxmin fusion method in the left frame and the linear fusion method in the right frame. A participant then compares the two ranking results (without knowing which ranking methods produced them) and submits his or her feedback by choosing an answer in the Your option panel.
Table 4: Results of the user study.
            Norm. Only   Manually    Linear
Linear        29:13:10   14:22:15         -
Nonlinear      29:15:9   12:27:12    6:4:45
Table 4 shows the experimental results, where Linear denotes the linear fusion method, Nonlinear denotes the nonlinear fusion method, Norm. Only denotes the Maxmin normalization-only method, and Manually denotes the manually tuned method. The three numbers in each item, say 29:13:10, mean that 29 judgments preferred the linear fusion results, 10 judgments preferred the normalization-only method, and 13 judgments considered the two methods equivalent. We conducted an ANOVA analysis and obtained the following conclusions: 1. Both the linear and nonlinear methods are significantly better than the Norm. Only method, with respective P-values of 0.00165 (< 0.05) and 0.00073 (<< 0.05). This result is consistent with the δ-measure evaluation. The Norm. Only method assumes that the top 10% of photos in different forums are of the same quality. However, this assumption does not hold in general. For example, a top-10% photo in a top-tier photo forum is generally of higher quality than a top-10% photo in a second-tier photo forum, just as the top 10% of students at a top-tier university and those at a second-tier university are generally of different quality. Both the linear and nonlinear fusion methods acknowledge the existence of such differences and aim at quantifying them. Therefore, they perform better than the Norm. Only method. 2. The linear fusion method is significantly better than the nonlinear one, with a P-value of 1.195 × 10^{-10}. This result is rather surprising, as the more complicated ranking method is expected to tune the ranking more finely than the linear one. The main reason may be that it is difficult to find the best intervals in which the nonlinear tuning should be carried out, and we simply chose the middle part used in the Mode-90% Percentile normalization. The time-consuming and subjective evaluation method - user studies - prevented us from tuning these parameters extensively. 3. The proposed linear and nonlinear methods perform almost the same as, or slightly better than, the manually tuned method. Given that the linear/nonlinear fusion methods are fully automatic, they are practical and efficient solutions when more communities (e.g. dozens of communities) need to be integrated. 4. CONCLUSIONS AND FUTURE WORK In this paper, we studied the Web object-ranking problem in cases lacking object relationships, where traditional ranking algorithms are no longer valid, and took high-quality photo search as the test bed for this investigation. We have built a vertical high-quality photo search engine and proposed score fusion methods that can automatically integrate as many data sources (Web forums) as possible.
The proposed fusion methods leverage the hidden links discovered by duplicate photo detection algorithm, and minimize score differences of duplicate photos in different forums. Both the intermediate results and the user studies show that the proposed fusion methods are a practical and efficient solution to Web object ranking in the aforesaid relationships. Though the experiments were conducted on high-quality photo ranking, the proposed algorithms are also applicable to other kinds of Web objects including video clips, poems, short stories, music, drawings, sculptures, and so on. Current system is far from being perfect. In order to make this system more effective, more delicate analysis for the vertical domain (e.g., Web photo forums) are needed. The following points, for example, may improve the searching results and will be our future work: 1. more subtle analysis and then utilization of different kinds of ratings (e.g., novelty ratings, aesthetic ratings); 2. differentiating various communities who may have different interests and preferences or even distinct culture understandings; 3. incorporating more useful information, including photographers'' and reviewers'' information, to model the photos in a heterogeneous data space instead of the current homogeneous one. We will further utilize collaborative filtering to recommend relevant high-quality photos to browsers. One open problem is whether we can find an objective and efficient criterion for evaluating the ranking results, instead of employing subjective and inefficient user studies, which blocked us from trying more ranking algorithms and tuning parameters in one algorithm. 5. ACKNOWLEDGMENTS We thank Bin Wang and Zhi Wei Li for providing Dedup codes to detect duplicate photos; Zhen Li for helping us design the interface of EnjoyPhoto; Ming Jing Li, Longbin Chen, Changhu Wang, Yuanhao Chen, and Li Zhuang etc. for useful discussions. Special thanks go to Dwight Daniels for helping us revise the language of this paper. 6. REFERENCES [1] Google image search. http://images.google.com. [2] Google local search. http://local.google.com/. [3] Google news search. http://news.google.com. [4] Google paper search. http://Scholar.google.com. [5] Google product search. http://froogle.google.com. [6] Google video search. http://video.google.com. [7] Scientific literature digital library. http://citeseer.ist.psu.edu. [8] Yahoo image search. http://images.yahoo.com. [9] R. Baeza-Yates and B. Ribeiro-Neto. Modern Information Retrieval. New York: ACM Press; Harlow, England: Addison-Wesley, 1999. [10] W. Bin, L. Zhiwei, L. Ming Jing, and M. Wei-Ying. Large-scale duplicate detection for web image search. In Proceedings of the International Conference on Multimedia and Expo, page 353, 2006. [11] S. Brin and L. Page. The anatomy of a large-scale hypertextual web search engine. In Computer Networks, volume 30, pages 107-117, 1998. [12] C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. Hullender. Learning to rank using gradient descent. In Proceedings of the 22nd international conference on Machine learning, pages 89 - 96, 2005. [13] C. Dwork, R. Kumar, M. Naor, and D. Sivakumar. Rank aggregation methods for the web. In Proceedings 10th International Conference on World Wide Web, pages 613 - 622, Hong-Kong, 2001. [14] R. Fagin, R. Kumar, and D. Sivakumar. Comparing top k lists. SIAM Journal on Discrete Mathematics, 17(1):134 - 160, 2003. [15] Y. Freund, R. Iyer, R. E. Schapire, and Y. Singer. 
An efficient boosting algorithm for combining preferences. 385 Journal of Machine Learning Research, 4(1):933-969(37), 2004. [16] IMDB. Formula for calculating the top rated 250 titles in imdb. http://www.imdb.com/chart/top. [17] T. Joachims. Optimizing search engines using clickthrough data. In Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 133 - 142, 2002. [18] J. M. Kleinberg. Authoritative sources in a hyperlinked environment. Journal of the ACM, 46(5):604-632, 1999. [19] R. Nallapati. Discriminative models for information retrieval. In Proceedings of the 25th annual international ACM SIGIR conference on Research and development in information retrieval, pages 64 - 71, 2004. [20] Z. Nie, Y. Ma, J.-R. Wen, and W.-Y. Ma. Object-level web information retrieval. In Technical Report of Microsoft Research, volume MSR-TR-2005-11, 2005. [21] Z. Nie, Y. Zhang, J.-R. Wen, and W.-Y. Ma. Object-level ranking: Bringing order to web objects. In Proceedings of the 14th international conference on World Wide Web, pages 567 - 574, Chiba, Japan, 2005. [22] L. Page, S. Brin, R. Motwani, and T. Winograd. The pagerank citation ranking: Bringing order to the web. In Technical report, Stanford Digital Libraries, 1998. [23] A. Savakis, S. Etz, and A. Loui. Evaluation of image appeal in consumer photography. In SPIE Human Vision and Electronic Imaging, pages 111-120, 2000. [24] D. Sullivan. Hitwise search engine ratings. Search Engine Watch Articles, http://searchenginewatch. com/reports/article. php/3099931, August 23, 2005. [25] S. Susstrunk and S. Winkler. Color image quality on the internet. In IS&T/SPIE Electronic Imaging 2004: Internet Imaging V, volume 5304, pages 118-131, 2004. [26] H. Tong, M. Li, Z. H.J., J. He, and Z. C.S. Classification of digital photos taken by photographers or home users. In Pacific-Rim Conference on Multimedia (PCM), pages 198-205, 2004. [27] W. Xi, B. Zhang, Z. Chen, Y. Lu, S. Yan, W.-Y. Ma, and E. A. Fox. Link fusion: a unified link analysis framework for multi-type interrelated data objects. In Proceedings of the 13th international conference on World Wide Web, pages 319 - 327, 2004. 386
Ranking Web Objects from Multiple Communities ABSTRACT Vertical search is a promising direction as it leverages domainspecific knowledge and can provide more precise information for users. In this paper, we study the Web object-ranking problem, one of the key issues in building a vertical search engine. More specifically, we focus on this problem in cases when objects lack relationships between different Web communities, and take high-quality photo search as the test bed for this investigation. We proposed two score fusion methods that can automatically integrate as many Web communities (Web forums) with rating information as possible. The proposed fusion methods leverage the hidden links discovered by a duplicate photo detection algorithm, and aims at minimizing score differences of duplicate photos in different forums. Both intermediate results and user studies show the proposed fusion methods are practical and efficient solutions to Web object ranking in cases we have described. Though the experiments were conducted on high-quality photo ranking, the proposed algorithms are also applicable to other ranking problems, such as movie ranking and music ranking. 1. INTRODUCTION Despite numerous refinements and optimizations, general purpose search engines still fail to find relevant results for many queries. As a new trend, vertical search has shown promise because it can leverage domain-specific knowledge and is more effective in connecting users with the information they want. There are many vertical search engines, including some for paper search (e.g. Libra [21], Citeseer [7] and Google Scholar [4]), product search (e.g. Froogle [5]), movie search [6], image search [1, 8], video search [6], local search [2], as well as news search [3]. We believe the vertical search engine trend will continue to grow. Essentially, building vertical search engines includes data crawling, information extraction, object identification and integration, and object-level Web information retrieval (or Web object ranking) [20], among which ranking is one of the most important factors. This is because it deals with the core problem of how to combine and rank objects coming from multiple communities. Although object-level ranking has been well studied in building vertical search engines, there are still some kinds of vertical domains in which objects cannot be effectively ranked. For example, algorithms that evolved from PageRank [22], PopRank [21] and LinkFusion [27] were proposed to rank objects coming from multiple communities, but can only work on well-defined graphs of heterogeneous data. "Well-defined" means that like objects (e.g. authors in paper search) can be identified in multiple communities (e.g. conferences). This allows heterogeneous objects to be well linked to form a graph through leveraging all the relationships (e.g. cited-by, authored-by and published-by) among the multiple communities. However, this assumption does not always stand for some domains. High-quality photo search, movie search and news search are exceptions. For example, a photograph forum website usually includes three kinds of objects: photos, authors and reviewers. Yet different photo forums seem to lack any relationships, as there are no cited-by relationships. This makes it difficult to judge whether two authors cited are the same author, or two photos are indeed identical photos. Consequently, although each photo has a rating score in a forum, it is non-trivial to rank photos coming from different photo forums. 
Similar problems also exist in movie search and news search. Although two movie titles can be identified as the same one by title and director in different movie discussion groups, it is non-trivial to combine rating scores from different discussion groups and rank movies effectively. We call such non-trivial object relationship in which identification is difficult, incomplete relationships. Other related work includes rank aggregation for the Web [13, 14], and learning algorithm for rank, such as RankBoost [15], RankSVM [17, 19], and RankNet [12]. We will contrast differences of these methods with the proposed methods after we have described the problem and our methods. We will specifically focus on Web object-ranking problem in cases that lack object relationships or have with incomplete object relationships, and take high-quality photo search as the test bed for this investigation. In the following, we will introduce rationale for building high-quality photo search. 1.1 High-Quality Photo Search In the past ten years, the Internet has grown to become an incredible resource, allowing users to easily access a huge number of images. However, compared to the more than 1 billion images indexed by commercial search engines, actual queries submitted to image search engines are relatively minor, and occupy only 8-10 percent of total image and text queries submitted to commercial search engines [24]. This is partially because user requirements for image search are far less than those for general text search. On the other hand, current commercial search engines still cannot well meet various user requirements, because there is no effective and practical solution to understand image content. To better understand user needs in image search, we conducted a query log analysis based on a commercial search engine. The result shows that more than 20% of image search queries are related to nature and places and daily life categories. Users apparently are interested in enjoying high-quality photos or searching for beautiful images of locations or other kinds. However, such user needs are not well supported by current image search engines because of the difficulty of the quality assessment problem. Ideally, the most critical part of a search engine--the ranking function--can be simplified as consisting of two key factors: relevance and quality. For the relevance factor, search in current commercial image search engines provide most returned images that are quite relevant to queries, except for some ambiguity. However, as to quality factor, there is still no way to give an optimal rank to an image. Though content-based image quality assessment has been investigated over many years [23, 25, 26], it is still far from ready to provide a realistic quality measure in the immediate future. Seemingly, it really looks pessimistic to build an image search engine that can fulfill the potentially large requirement of enjoying high-quality photos. Various proliferating Web communities, however, notices us that people today have created and shared a lot of high-quality photos on the Web on virtually any topics, which provide a rich source for building a better image search engine. In general, photos from various photo forums are of higher quality than personal photos, and are also much more appealing to public users than personal photos. In addition, photos uploaded to photo forums generally require rich metadata about title, camera setting, category and description to be provide by photographers. 
These metadata are actually the most precise descriptions for photos and undoubtedly can be indexed to help search engines find relevant results. More importantly, there are volunteer users in Web communities actively providing valuable ratings for these photos. The rating information is generally of great value in solving the photo quality ranking problem. Motivated by such observations, we have been attempting to build a vertical photo search engine by extracting rich metadata and integrating information from various photo Web forums. In this paper, we specifically focus on how to rank photos from multiple Web forums. Intuitively, the rating scores from different photo forums can be empirically normalized based on the number of photos and the number of users in each forum. However, such a straightforward approach usually requires large manual effort in both tedious parameter tuning and subjective results evaluation, which makes it impractical when there are tens or hundreds of photo forums to combine. To address this problem, we seek to build relationships/links between different photo forums. That is, we first adopt an efficient algorithm to find duplicate photos, which can be considered as hidden links connecting multiple forums. We then formulate the ranking challenge as an optimization problem, which eventually results in an optimal ranking function. 1.2 Main Contributions and Organization. The main contributions of this paper are: 1. We have proposed and built a vertical image search engine by leveraging rich metadata from various photo forum Web sites to meet user requirements of searching for and enjoying high-quality photos, which is impossible in traditional image search engines. 2. We have proposed two kinds of Web object-ranking algorithms for photos with incomplete relationships, which can automatically and efficiently integrate as many Web communities with rating information as possible, and achieve qualitative results equal to those of the manually tuned fusion scheme. The rest of this paper is organized as follows. In Section 2, we present in detail the proposed solutions to the ranking problem, including how to find hidden links between different forums, normalize rating scores, obtain the optimal ranking function, and contrast our methods with some other related research. In Section 3, we describe the experimental setting and the experiments and user studies conducted to evaluate our algorithm. Our conclusion and a discussion of future work are in Section 4. It is worth noting that although we treat vertical photo search as the test bed in this paper, the proposed ranking algorithm can also be applied to rank other content that includes video clips, poems, short stories, drawings, sculptures, music, and so on. 2. ALGORITHM 2.1 Overview The difficulty of integrating multiple Web forums lies in their different rating systems, where there are generally two kinds of freedom. The first kind of freedom is the rating interval or rating scale, including the minimal and maximal ratings for each Web object. For example, some forums use a 5-point rating scale whereas other forums use 3-point or 10-point rating scales. It seems easy to fix this freedom, but detailed analysis of the data and experiments shows that it is a non-trivial problem. The second kind of freedom is the varying rating criteria found in different Web forums. That is, the same score does not mean the same quality in different forums.
Intuitively, if we can detect the same photographers or the same photographs, we can build relationships between any two photo forums and therefore standardize the rating criterion by score normalization and transformation. Fortunately, we find that quite a number of duplicate photographs exist in various Web photo forums. This fact is reasonable when considering that photographers sometimes submit a photo to more than one forum to obtain critiques or in hopes of widespread publicity. In this work, we adopt an efficient duplicate photo detection algorithm [10] to find these photos. The proposed methods below are based on the following considerations. Faced with the need to overcome a ranking problem, a standardized rating criterion rather than a reasonable rating criterion is needed. Therefore, we can take a large-scale forum as the reference forum, and align other forums by taking into account duplicate Web objects (duplicate photos in this work). Ideally, the scores of duplicate photos should be equal even though they are in different forums. Yet we can deem that scores in different forums--except for the reference forum--can vary in a parametric space. This can be determined by minimizing the objective function defined by the sum of squares of the score differences. By formulating the ranking problem as an optimization problem that attempts to make the scores of duplicate photos in non-reference forums as close as possible to those in the reference forum, we can effectively solve the ranking problem. For convenience, the following notations are employed. Ski and S̄ki denote the total score and mean score of the ith Web object (photo) in the kth Web site, respectively. The total score refers to the sum of the various rating scores (e.g., novelty rating and aesthetic rating), and the mean score refers to the mean of the various rating scores. Suppose there are a total of K Web sites. We further use S^{kl} = {S^{kl}_i | i = 1, ..., Ikl} to denote the set of scores for Web objects (photos) in the kth Web forum that are duplicated in the lth Web forum, where Ikl is the total number of duplicate Web objects between these two Web sites. In general, score fusion can be seen as the procedure of finding K transforms ψk, k = 1, ..., K, such that the transformed scores S̃ki = ψk(Ski) can be used to rank Web objects from different Web sites. Figure 1: Web community integration. Each Web community forms a subgraph, and all communities are linked together by some hidden links (dashed lines). The objective function described in the above paragraph can then be formulated as f = Σ_{k=2}^{K} Σ_{i=1}^{Ik1} w̄ki (ψk(S^{k1}_i) − S^{1k}_i)², (1) where we use k = 1 as the reference forum and thus ψ1(S1i) = S1i. w̄ki (≥ 0) is a weight coefficient that can be set heuristically according to the numbers of voters (reviewers or commenters) in both the reference forum and the non-reference forum. The more reviewers, the more popular the photo is and the larger the corresponding weight w̄ki should be. In this work, we do not inspect the problem of how to choose w̄ki and simply set them to one. But we believe the proper use of w̄ki, which leverages more information, can significantly improve the results. Figure 1 illustrates the aforementioned idea. Web Community 1 is the reference community. The dashed lines are links indicating that the two linked Web objects are actually the same. The proposed algorithm will try to find the best ψk (k = 2, ..., K), which has a certain parametric form according to a certain model. In minimizing the cost function defined in Eq. 1, the summation is taken over all the red dashed lines.
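To make Eq. 1 concrete, the following is a minimal sketch in Python of how that cost on the hidden links could be evaluated; the data structures (a dictionary of duplicate score pairs per non-reference forum, weights, and candidate transforms) are illustrative assumptions and not part of the paper.

```python
# Minimal sketch of the fusion objective in Eq. 1 (illustrative only).
# duplicates[k] holds (S_ki, S_1i) score pairs for photos of forum k that were
# also found in the reference forum 1; weights[k] holds the corresponding
# w_ki coefficients (all set to 1.0 in the paper).

from typing import Callable, Dict, List, Tuple

def fusion_cost(duplicates: Dict[int, List[Tuple[float, float]]],
                weights: Dict[int, List[float]],
                transforms: Dict[int, Callable[[float], float]]) -> float:
    """Sum of weighted squared score differences over all hidden links."""
    cost = 0.0
    for k, pairs in duplicates.items():          # k = 2, ..., K (non-reference forums)
        psi_k = transforms[k]                    # candidate transform for forum k
        for (s_k, s_ref), w in zip(pairs, weights[k]):
            cost += w * (psi_k(s_k) - s_ref) ** 2
    return cost

# Example: two non-reference forums, with simple transforms as a starting point.
dups = {2: [(6.0, 7.5), (3.0, 4.0)], 3: [(80.0, 7.0)]}
wts = {2: [1.0, 1.0], 3: [1.0]}
psi = {2: lambda s: s, 3: lambda s: s / 10.0}
print(fusion_cost(dups, wts, psi))
```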
We will first discuss the score normalization methods in Section 2.2, which serves as the basis for the following work. Before we describe the proposed ranking algorithms, we first introduce a manually tuned method in Section 2.3, which is laborious and even impractical when the number of communities becomes large. In Section 2.4, we will briefly explain how to precisely find duplicate photos between Web forums. Then we will describe the two proposed methods, linear fusion and non-linear fusion, and a performance measure for result evaluation in Section 2.5. Finally, in Section 2.6 we will discuss the relationship of the proposed methods to some other related work. 2.2 Score Normalization Since different Web (photo) forums usually have different rating criteria, it is necessary to normalize them before applying the different kinds of fusion methods. In addition, as there are many kinds of ratings, such as ratings for novelty, ratings for aesthetics, etc., it is reasonable to choose a common one--the total score or average score--that can always be extracted in any Web forum or calculated from the corresponding ratings. This allows the normalization method on the total score or average score to be viewed as an impartial rating method across different Web forums. It is straightforward to normalize average scores by linearly transforming them to a fixed interval. We call this kind of score the Scaled Mean Score. The difficulty, however, of using this normalization method is that, if there are only a few users rating an object, say a photo in a photo forum, the average score for the object is likely to be spammed or skewed. The total score avoids such drawbacks and contains more information, such as a Web object's quality and popularity. The problem is thus how to normalize total scores in different Web forums. The simplest way may be normalization by the maximal and minimal scores. The drawback of this normalization method is that it is not robust, or in other words, it is sensitive to outliers. To make the normalization insensitive to unusual data, we propose the Mode-90% Percentile normalization method. Here, the mode score represents the total score that has been assigned to more photos than any other total score. The high percentile score (e.g., 90%) represents the total score for which that percentage of images have a lower total score. This normalization method utilizes the mode and the 90% percentile as two reference points to align two rating systems, which makes the distributions of total scores in different forums more consistent. The underlying assumption, for example in different photo forums, is that even though the qualities of top photos in different forums may vary greatly and depend less on the forum quality, the distributions of photos of middle-level quality (from the mode to the 90% percentile) should be of almost the same quality, up to a freedom that reflects the rating criterion (strictness) of each Web forum. Photos of this middle level in a Web forum usually account for more than 70% of the total photos in that forum. We will give a more detailed analysis of the scores in Section 3.2. 2.3 Manual Fusion The Web movie forum IMDB [16] proposed using a Bayesian-ranking function to normalize rating scores within one community. Motivated by this ranking function, we propose the following manual fusion method: for the kth Web site, we rank photos by Ski = αk (nk S̄ki + n*k S*k) / (nk + n*k), where nk is the number of votes and n*k, S*k and αk are three parameters.
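Under the reading of this formula spelled out in the next paragraph (a vote-weighted, Bayesian-style balance between the photo's mean score and a per-forum reference score, then scaled by αk), a minimal sketch follows; the parameter values below are placeholders for illustration, not the settings used in the paper.

```python
# Sketch of the manual fusion score for one photo in forum k (illustrative).
# n_k: number of votes on the photo; s_mean: its mean score in forum k.
# n_star, s_star, alpha are the three per-forum parameters that must be
# hand-tuned; the values used below are placeholders, not the paper's settings.

def manual_fusion_score(n_k: int, s_mean: float,
                        n_star: float, s_star: float, alpha: float) -> float:
    """Weighted (Bayesian-style) mean of s_mean and s_star, scaled by alpha."""
    weighted_mean = (n_k * s_mean + n_star * s_star) / (n_k + n_star)
    return alpha * weighted_mean

# A photo with few votes is pulled toward the forum's reference score ...
print(manual_fusion_score(n_k=3, s_mean=9.0, n_star=25, s_star=5.5, alpha=1.2))
# ... while a heavily voted photo keeps a score close to its own mean.
print(manual_fusion_score(n_k=500, s_mean=9.0, n_star=25, s_star=5.5, alpha=1.2))
```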
This ranking function first takes a balance between the original mean score S̄ki and a reference score S*k to get a weighted mean score, which may be more reliable than S̄ki. Then the weighted mean score is scaled by αk to get the final score Ski. For n Web communities, there are then about 3n parameters in {(αk, n*k, S*k) | k = 1, ..., n} to tune. Though this method can achieve pretty good results after careful and thorough manual tuning of these parameters, when n becomes increasingly large, say when there are tens or hundreds of Web communities crawled and indexed, this method becomes more and more laborious and eventually becomes impractical. It is therefore desirable to find an effective fusion method whose parameters can be automatically determined. 2.4 Duplicate Photo Detection We use Dedup [10], an efficient and effective duplicate image detection algorithm, to find duplicate photos between any two photo forums. This algorithm uses a hash function to map a high-dimensional feature to a 32-bit hash code (see below for how to construct the hash code). Its computational complexity to find all the duplicate images among n images is about O(n log n). The low-level visual feature for each photo is extracted on k x k regular grids. Based on all features extracted from the image database, a PCA model is built. The visual features are then transformed to a relatively low-dimensional, zero-mean PCA space, or 29 dimensions in our system. The hash code for each photo is then built as follows: each dimension is transformed to 1 if the value in this dimension is greater than 0, and to 0 otherwise. Photos with the same hash code fall into the same bucket; photos in the same bucket are deemed potential duplicates and are further filtered by a threshold in terms of Euclidean similarity in the visual feature space. Figure 2 illustrates the hashing procedure, where visual features--mean gray values--are extracted on both 6 x 6 and 7 x 7 grids. The 85-dimensional features are transformed to a 32-dimensional vector, and the hash code is generated according to the signs. Figure 2: Hashing procedure for duplicate photo detection. 2.5 Score Fusion In this section, we present two solutions to score fusion based on different parametric form assumptions on ψk in Eq. 1. 2.5.1 Linear Fusion by Duplicate Photos Intuitively, the most straightforward way to factor out the uncertainties caused by the different criteria is to scale, relative to a given center, the total scores of each non-reference Web photo forum with respect to the reference forum. More strictly, we assume ψk has the linear form ψk(Ski) = αk Ski + tk, which means that the scores of the kth (k ≠ 1) forum are scaled by αk relative to the center tk / (1 − αk), as shown in Figure 3. Then, if we substitute this ψk into Eq. 1, we get the objective function f = Σ_{k=2}^{K} Σ_{i=1}^{Ik1} w̄ki (αk S^{k1}_i + tk − S^{1k}_i)². By solving ∂f/∂αk = 0 and ∂f/∂tk = 0 for this objective function, we get the closed-form solution αk = Σ_i w̄ki (S^{k1}_i − μk)(S^{1k}_i − μ1k) / Σ_i w̄ki (S^{k1}_i − μk)² and tk = μ1k − αk μk, where μk and μ1k denote the w̄ki-weighted means of S^{k1}_i and S^{1k}_i respectively, and k = 2, ..., K. This is a linear fusion method. It enjoys simplicity and excellent performance in the following experiments. Figure 3: Linear Fusion method. 2.5.2 Nonlinear Fusion by Duplicate Photos Sometimes we want a method that can adjust scores on intervals while keeping the two endpoints unchanged. As illustrated in Figure 4, the method can tune scores between [c0, c1] while leaving the scores c0 and c1 unchanged. This kind of fusion method is much finer-grained than the linear one; it contains many more parameters to tune and is expected to further improve the results.
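Before that nonlinear variant is developed, here is a minimal sketch of the linear fusion of Section 2.5.1: with ψk(s) = αk·s + tk, Eq. 1 decouples into one weighted least-squares problem per non-reference forum, so each (αk, tk) can be fit independently from its duplicate pairs. The Python/NumPy layout below (arrays of duplicate scores) is an assumption for illustration, not the paper's implementation.

```python
from typing import Optional, Tuple
import numpy as np

def fit_linear_fusion(s_k: np.ndarray, s_ref: np.ndarray,
                      w: Optional[np.ndarray] = None) -> Tuple[float, float]:
    """Weighted least-squares fit of psi_k(s) = alpha_k * s + t_k so that the
    transformed forum-k scores of duplicate photos match the reference scores."""
    if w is None:
        w = np.ones_like(s_k)                     # the paper sets all weights to 1
    sw = w.sum()
    mk, mref = (w * s_k).sum() / sw, (w * s_ref).sum() / sw   # weighted means
    cov = (w * (s_k - mk) * (s_ref - mref)).sum()
    var = (w * (s_k - mk) ** 2).sum()
    alpha_k = cov / var
    t_k = mref - alpha_k * mk
    return alpha_k, t_k

# Duplicate-photo scores in a non-reference forum vs. the reference forum
# (toy numbers; real inputs would come from the Dedup hidden links).
s_k = np.array([10.0, 40.0, 55.0, 90.0])
s_ref = np.array([2.0, 5.0, 6.0, 9.0])
alpha, t = fit_linear_fusion(s_k, s_ref)
fused = alpha * s_k + t                           # forum-k scores on the reference scale
print(alpha, t, fused)
```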
Here, we propose a nonlinear fusion solution to satisfy such constraints. First, we introduce a parametric transform η_{c0,c1,α}(x) with α > 0. This transform satisfies that for x ∈ [c0, c1], η_{c0,c1,α}(x) ∈ [c0, c1], with η_{c0,c1,α}(c0) = c0 and η_{c0,c1,α}(c1) = c1. We can then utilize this nonlinear transform to adjust the scores in a certain interval, say (M, T]. Figure 4: Nonlinear Fusion method; we intend to finely adjust the shape of the curves in each segment. Even though there is no closed-form solution for the resulting optimization problem, it is not hard to obtain a numerical one. Under the same assumptions made in Section 2.2, we can use this method to adjust scores of the middle level (from the mode point to the 90% percentile). This more complicated non-linear fusion method is expected to achieve better results than the linear one. However, difficulties in evaluating the ranking results prevented us from tuning these parameters extensively. The current experiments in Section 3.5 do not reveal any advantages over the simple linear model. 2.5.3 Performance Measure of the Fusion Results Since our objective function is to make the scores of the same Web objects (e.g. duplicate photos) between a non-reference forum and the reference forum as close as possible, it is natural to investigate how close they become to each other, and how the scores of the same Web objects change between two non-reference forums before and after score fusion. Taking Figure 1 as an example, the proposed algorithms minimize the score differences of the same Web objects in two Web forums: the reference forum (Web Community 1) and a non-reference forum, which corresponds to minimizing the objective function on the red dashed (hidden) links. After the optimization, we must ask what happens to the score differences of the same Web objects in two non-reference forums. Or, in other words, do the scores of two objects linked by the green dashed (hidden) links become more consistent? We therefore define the following performance measure--the δ measure--to quantify the changes in the scores of the same Web objects in different Web forums. For each pair of forums k and l, δkl compares the discrepancy between the score vectors of their duplicate photos, S^{kl} = (S^{kl}_1, ..., S^{kl}_{Ikl}) and S^{lk} = (S^{lk}_1, ..., S^{lk}_{Ilk}), before and after score fusion. δkl > 0 means that after score fusion, the scores of the same Web objects in the kth and lth Web forums become more consistent, which is what we expect. On the contrary, if δkl < 0, those scores become more inconsistent. Although we cannot rely on this measure to evaluate our final fusion results, as ranking photos by their popularity and quality is such a subjective process that every person can have his or her own results, it can help us understand the intermediate ranking results and provide insights into the final performance of the different ranking methods. 2.6 Contrasts with Other Related Work We have already mentioned the differences between the proposed methods and traditional methods such as the PageRank [22], PopRank [21], and LinkFusion [27] algorithms in Section 1. Here, we discuss some other related work. The current problem can also be viewed as a rank aggregation problem [13, 14], as we deal with the problem of how to combine several rank lists. However, there are fundamental differences between them. First of all, unlike Web pages, which can be easily and accurately detected as the same pages, detecting the same photos in different Web forums is non-trivial work, and can only be implemented by delicate algorithms with a certain precision and recall. Second, the numbers of duplicate photos from different Web forums are small relative to the whole photo sets (see Table 1).
In other words, the top K rank lists of different Web forums are almost disjoint for a given query. Under this condition, both the algorithms proposed in [13] and their measurements--the Kendall tau distance or the Spearman footrule distance--degenerate to trivial cases. Another category of rank fusion (aggregation) methods is based on machine learning algorithms, such as RankSVM [17, 19], RankBoost [15], and RankNet [12]. All of these methods entail labelled datasets to train a model. In the current setting, it is difficult or even impossible to get these datasets labelled as to their level of professionalism or popularity, since the photos are too vague and subjective to rank. Instead, the problem here is how to combine several ordered sublists to form a totally ordered list. 3. EXPERIMENTS In this section, we carry out our research on high-quality photo search. We first briefly introduce the newly proposed vertical image search engine--EnjoyPhoto--in Section 3.1. Then we focus on how to rank photos from different Web forums. In order to do so, we first normalize the scores (ratings) for photos from multiple Web forums in Section 3.2. Then we find duplicate photos in Section 3.3. Some intermediate results are discussed using the δ measure in Section 3.4. Finally, a set of user studies is carried out carefully to justify our proposed method in Section 3.5. 3.1 EnjoyPhoto: high-quality Photo Search Engine In order to meet the user requirement of enjoying high-quality photos, we propose and build a high-quality photo search engine--EnjoyPhoto--which accounts for the following three key issues: 1. how to crawl and index photos, 2. how to determine the quality of each photo, and 3. how to display the search results in order to make the search process enjoyable. For a given text-based query, this system ranks the photos based on a certain combination of the relevance of the photo to this query (Issue 1) and the quality of the photo (Issue 2), and finally displays them in an enjoyable manner (Issue 3). As for Issue 3, we deliberately designed the interface of the system to smooth the users' process of enjoying high-quality photos. Techniques such as fisheye views and slide shows are utilized in the current system. Figure 5 shows the interface. We will not discuss this issue further, as it is not an emphasis of this paper. Figure 5: EnjoyPhoto: an enjoyable high-quality photo search engine, where 26,477 records are returned for the query "fall" in about 0.421 seconds. As for Issue 1, we extracted from a commercial search engine a subset of photos coming from various photo forums all over the world, and explicitly parsed the Web pages containing these photos. The number of photos in the data collection is about 2.5 million. After the parsing, each photo was associated with its title, category, description, camera setting, EXIF data (when available for digital images), location (when available in some photo forums), and many kinds of ratings. All these metadata are generally precise descriptions or annotations of the image content, which are then indexed by general text-based search technologies [9, 18, 11]. In the current system, the ranking function was specifically tuned to emphasize title, categorization, and rating information. Issue 2 is essentially dealt with in the following sections, which derive the quality of photos by analyzing the ratings provided by various Web photo forums.
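Those sections depend on the hidden links supplied by the Dedup step of Section 2.4. The following is a minimal sketch of that hashing idea (grid gray-level features, a PCA projection, sign bits as a hash code, then a Euclidean-distance check within each bucket); the feature extraction, bit count, and threshold below are simplified stand-ins rather than the exact implementation of [10].

```python
from collections import defaultdict
import numpy as np

def grid_gray_features(image: np.ndarray, grids=(6, 7)) -> np.ndarray:
    """Mean gray value on k x k regular grids (6x6 + 7x7 = 85 dims), as in Figure 2."""
    feats = []
    h, w = image.shape
    for k in grids:
        ys = np.linspace(0, h, k + 1, dtype=int)
        xs = np.linspace(0, w, k + 1, dtype=int)
        for i in range(k):
            for j in range(k):
                feats.append(image[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean())
    return np.asarray(feats)

def dedup_candidates(features: np.ndarray, n_bits: int = 29, dist_thresh: float = 0.5):
    """Bucket photos by sign-bit hash codes in a zero-mean PCA space, then verify
    candidates by Euclidean distance. Returns index pairs deemed duplicates.
    `features` is an (n_photos, n_dims) array; thresholds here are illustrative."""
    x = features - features.mean(axis=0)              # zero-mean, as described
    _, _, vt = np.linalg.svd(x, full_matrices=False)  # PCA via SVD
    proj = x @ vt[:n_bits].T                          # low-dimensional PCA space
    buckets = defaultdict(list)
    for idx, row in enumerate(proj):
        code = tuple(row > 0)                         # one sign bit per dimension
        buckets[code].append(idx)
    pairs = []
    for members in buckets.values():                  # same bucket => potential duplicates
        for a in range(len(members)):
            for b in range(a + 1, len(members)):
                i, j = members[a], members[b]
                if np.linalg.norm(proj[i] - proj[j]) < dist_thresh:
                    pairs.append((i, j))
    return pairs
```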
Here we chose six photo forums to study the ranking problem and denote them as Web-A, Web-B, Web-C, Web-D, Web-E and Web-F. 3.2 Photo Score Normalization Different score normalization methods are analyzed in detail in this section. In this analysis, the zero scores, which usually account for about 30% of the total number of photos in some Web forums, are not taken into account. How to utilize these photos is left for future exploration. In Figure 6, we list the distributions of the mean score, which is transformed to a fixed interval [0, 10]. Figure 6: Distributions of mean scores normalized to [0, 10]. The distributions of the average scores of these Web forums look quite different. The distributions in Figures 6(a), 6(b), and 6(e) look like Gaussian distributions, while those in Figures 6(d) and 6(f) are dominated by the top score. The reason for these eccentric distributions for Web-D and Web-F lies in their coarse rating systems. In fact, Web-D and Web-F use 2- or 3-point rating scales whereas the other Web forums use 7- or 14-point rating scales. Therefore, it would be problematic to directly use these averaged scores. Furthermore, the average score is very likely to be spammed if there are only a few users rating a photo. Figure 7 shows the total score normalization method by maximal and minimal scores, which is one of our baseline systems. All the total scores of a given Web forum are normalized to [0, 100] according to the maximal score and minimal score of the corresponding Web forum. We notice that the total score distribution of Web-A in Figure 7(a) has two larger tails than all the others. To show the shape of the distributions more clearly, we only show the distributions on [0, 25] in Figures 7(b), 7(c), 7(d), 7(e), and 7(f). Figure 8 shows the Mode-90% Percentile normalization method, where the modes of the six distributions are normalized to 5 and the 90% percentiles to 8. We can see that this normalization method makes the distributions of total scores in different forums more consistent. Figure 7: Maxmin Normalization. Figure 8: Mode-90% Percentile Normalization. The two proposed algorithms are both based on these normalization methods. 3.3 Duplicate photo detection Targeting computational efficiency, the Dedup algorithm may lose some recall, but can achieve a high precision rate. We also focus on finding precise hidden links rather than all hidden links. Figure 9 shows some duplicate detection examples. The results are shown in Table 1 and verify that large numbers of duplicate photos exist in any two Web forums, even with the strict condition for Dedup where we chose the first 29 bits as the hash code. Since there are only a few parameters to estimate in the proposed fusion methods, the numbers of duplicate photos shown in Table 1 are sufficient to determine these parameters. The last table column lists the total number of photos in the corresponding Web forums. 3.4 δ Measure The parameters of the proposed linear and nonlinear algorithms are calculated using the duplicate data shown in Table 1, where Web-C is chosen as the reference Web forum since it shares the most duplicate photos with the other forums. Tables 2 and 3 show the δ measure on the linear model and the nonlinear model. As δkl is symmetric and δkk = 0, we only show the upper triangular part. The NaN values in both tables arise because no duplicate photos were detected by the Dedup algorithm between those forums, as reported in Table 1.
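A small sketch of how a δ-style consistency check can be computed for one forum pair is given below, under one simple reading of the measure (mean squared difference of duplicate-photo scores before fusion minus the same quantity after fusion, so that positive values mean the scores became more consistent); the exact normalization behind Tables 2 and 3 may differ from this sketch.

```python
import numpy as np

def delta_measure(s_k_before, s_l_before, s_k_after, s_l_after) -> float:
    """Change in consistency of duplicate-photo scores between forums k and l.
    Positive => fusion made the scores of the same photos more consistent."""
    before = np.mean((np.asarray(s_k_before) - np.asarray(s_l_before)) ** 2)
    after = np.mean((np.asarray(s_k_after) - np.asarray(s_l_after)) ** 2)
    return float(before - after)

# Duplicate photos shared by two non-reference forums, before and after fusion.
print(delta_measure([6.0, 3.0, 8.0], [9.0, 5.0, 9.5],
                    [7.4, 4.1, 9.0], [8.1, 4.6, 8.8]))   # > 0: more consistent
```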
Table 1: Number of duplicate photos between each pair of Web forums. Figure 9: Some results of duplicate photo detection. Table 2: δ measure on the linear model. The linear model guarantees that the δ measures related to the reference community should theoretically be no less than 0. This is indeed the case (see the underlined numbers in Table 2). But this model cannot guarantee that the δ measures on the non-reference communities are also no less than 0, as the normalization steps are based on duplicate photos between the reference community and a non-reference community. The results show that all the numbers in the δ measure are greater than 0 (see all the non-underlined numbers in Table 2), which indicates that it is probable that this model will give optimal results. On the contrary, the nonlinear model does not guarantee that δ measures related to the reference community are no less than 0, as not all duplicate photos between the two Web forums can be used when optimizing this model. In fact, the duplicate photos that lie in different intervals are not used in this model. It is these specific duplicate photos that make the δ measure negative. As a result, there are both negative and positive items in Table 3, but overall the number of positive ones is greater than the number of negative ones (9:5), which indicates that the model may be better than the "normalization only" method (see the next subsection), which has an all-zero δ measure, and worse than the linear model. Table 3: δ measure on the nonlinear model. 3.5 User Study Because it is hard to find an objective criterion to evaluate which ranking function is better, we chose to employ user studies for subjective evaluations. Ten subjects were invited to participate in the user study. They were recruited from nearby universities. As both text search and image search engines are familiar to university students, there were no prerequisite criteria for choosing students. We conducted the user studies using Internet Explorer 6.0 on Windows XP with 17-inch LCD monitors set at 1,280 pixels by 1,024 pixels in 32-bit color. Data were recorded with server logs and paper-based surveys after each task. Figure 10: User study interface. We specifically devised an interface for the user study, as shown in Figure 10. For each pair of fusion methods, participants were encouraged to try any query they wished. For those without specific ideas, two combo boxes (a category list and a query list) were provided on the bottom panel, where the top 1,000 image search queries from a commercial search engine were listed. After a participant submitted a query, the system randomly selected the left or right frame to display each of the two ranking results. The participant was then required to judge which of the two ranking results was better, or whether the two ranking results were of equal quality, and to submit the judgment by choosing the corresponding radio button and clicking the "Submit" button. For example, in Figure 10, the query "sunset" is submitted to the system. Then, 79,092 photos were returned and ranked by the Minmax fusion method in the left frame and the linear fusion method in the right frame. A participant then compares the two ranking results (without knowing the ranking methods) and submits his/her feedback by choosing answers in the "Your option" panel. Table 4: Results of user study. Table 4 shows the experimental results, where "Linear" denotes the linear fusion method, "Nonlinear" denotes the nonlinear fusion method, "Norm.
Only" means Maxmin normalization method, "Manually" means the manually tuned method. The three numbers in each item, say 29:13:10, mean that 29 judgments prefer the linear fusion results, 10 judgments prefer the normalization only method, and 13 judgments consider these two methods as equivalent. We conduct the ANOVA analysis, and obtain the following conclusions: 1. Both the linear and nonlinear methods are significantly better than the "Norm. Only" method with respective P-values 0.00165 (<0.05) and 0.00073 (<<0.05). This result is consistent with the δ-measure evaluation result. The "Norm. Only" method assumes that the top 10% photos in different forums are of the same quality. However, this assumption does not stand in general. For example, a top 10% photo in a top tier photo forum is generally of higher quality than a top 10% photo in a second-tier photo forum. This is similar to that, those top 10% students in a top-tier university and those in a second-tier university are generally of different quality. Both linear and nonlinear fusion methods acknowledge the existence of such differences and aim at quantizing the differences. Therefore, they perform better than the "Norm. Only" method. 2. The linear fusion method is significantly better than the nonlinear one with P-value 1.195 × 10 − 10. This result is rather surprising as this more complicated ranking method is expected to tune the ranking more finely than the linear one. The main reason for this result may be that it is difficult to find the best intervals where the nonlinear tuning should be carried out and yet simply the middle part of the Mode-90% Percentile Normalization method was chosen. The timeconsuming and subjective evaluation methods--user studies--blocked us extensively tuning these parameters. 3. The proposed linear and nonlinear methods perform almost the same with or slightly better than the manually tuned method. Given that the linear/nonlinear fusion methods are fully automatic approaches, they are considered practical and efficient solutions when more communities (e.g. dozens of communities) need to be integrated. 4. CONCLUSIONS AND FUTURE WORK In this paper, we studied the Web object-ranking problem in the cases of lacking object relationships where traditional ranking algorithms are no longer valid, and took high-quality photo search as the test bed for this investigation. We have built a vertical high-quality photo search engine, and proposed score fusion methods which can automatically integrate as many data sources (Web forums) as possible. The proposed fusion methods leverage the hidden links discovered by duplicate photo detection algorithm, and minimize score differences of duplicate photos in different forums. Both the intermediate results and the user studies show that the proposed fusion methods are a practical and efficient solution to Web object ranking in the aforesaid relationships. Though the experiments were conducted on high-quality photo ranking, the proposed algorithms are also applicable to other kinds of Web objects including video clips, poems, short stories, music, drawings, sculptures, and so on. Current system is far from being perfect. In order to make this system more effective, more delicate analysis for the vertical domain (e.g., Web photo forums) are needed. The following points, for example, may improve the searching results and will be our future work: 1. more subtle analysis and then utilization of different kinds of ratings (e.g., novelty ratings, aesthetic ratings); 2. 
differentiating various communities who may have different interests and preferences or even distinct culture understandings; 3. incorporating more useful information, including photographers' and reviewers' information, to model the photos in a heterogeneous data space instead of the current homogeneous one. We will further utilize collaborative filtering to recommend relevant high-quality photos to browsers. One open problem is whether we can find an objective and efficient criterion for evaluating the ranking results, instead of employing subjective and inefficient user studies, which blocked us from trying more ranking algorithms and tuning parameters in one algorithm.
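As a supplement to the nonlinear fusion of Section 2.5.2, the short sketch below shows one interval-preserving transform of the required kind: a power-law curve on [c0, c1] that fixes both endpoints and bends the interior according to α > 0. This particular form is an illustrative assumption that satisfies the stated endpoint constraints, not necessarily the exact transform adopted by the method.

```python
def eta(x: float, c0: float, c1: float, alpha: float) -> float:
    """Interval-preserving transform on [c0, c1]: eta(c0) = c0, eta(c1) = c1,
    and alpha > 0 controls how scores in between are pushed up or down.
    (Illustrative power-law form; other monotone choices satisfy the same constraints.)"""
    assert c0 < c1 and alpha > 0
    u = (x - c0) / (c1 - c0)          # map to [0, 1]
    return c0 + (c1 - c0) * (u ** alpha)

# Endpoints stay fixed; alpha < 1 lifts mid-range scores, alpha > 1 lowers them.
print(eta(5.0, c0=5.0, c1=8.0, alpha=0.7), eta(8.0, 5.0, 8.0, 0.7), eta(6.5, 5.0, 8.0, 0.7))
```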
H-41
HITS on the Web: How does it Compare?
This paper describes a large-scale evaluation of the effectiveness of HITS in comparison with other link-based ranking algorithms, when used in combination with a state-of-the-art text retrieval algorithm exploiting anchor text. We quantified their effectiveness using three common performance measures: the mean reciprocal rank, the mean average precision, and the normalized discounted cumulative gain measurements. The evaluation is based on two large data sets: a breadth-first search crawl of 463 million web pages containing 17.6 billion hyperlinks and referencing 2.9 billion distinct URLs; and a set of 28,043 queries sampled from a query log, each query having on average 2,383 results, about 17 of which were labeled by judges. We found that HITS outperforms PageRank, but is about as effective as web-page in-degree. The same holds true when any of the link-based features are combined with the text retrieval algorithm. Finally, we studied the relationship between query specificity and the effectiveness of selected features, and found that link-based features perform better for general queries, whereas BM25F performs better for specific queries.
[ "hit", "rank", "rank", "mean reciproc rank", "mean averag precis", "normal discount cumul gain measur", "breadth-first search crawl", "pagerank", "queri specif", "bm25f", "featur select", "link graph", "scale and relev", "link-base featur", "hyperlink analysi", "quantit measur", "crawl web page", "mrr", "map", "ndcg" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "R", "U", "M", "M", "M", "M", "R", "U", "U", "U" ]
HITS on the Web: How does it Compare? Marc Najork Microsoft Research 1065 La Avenida Mountain View, CA, USA najork@microsoft.com Hugo Zaragoza ∗ Yahoo! Research Barcelona Ocata 1 Barcelona 08003, Spain hugoz@es.yahoo-inc.com Michael Taylor Microsoft Research 7 J J Thompson Ave Cambridge CB3 0FB, UK mitaylor@microsoft.com ABSTRACT This paper describes a large-scale evaluation of the effectiveness of HITS in comparison with other link-based ranking algorithms, when used in combination with a state-of-the-art text retrieval algorithm exploiting anchor text. We quantified their effectiveness using three common performance measures: the mean reciprocal rank, the mean average precision, and the normalized discounted cumulative gain measurements. The evaluation is based on two large data sets: a breadth-first search crawl of 463 million web pages containing 17.6 billion hyperlinks and referencing 2.9 billion distinct URLs; and a set of 28,043 queries sampled from a query log, each query having on average 2,383 results, about 17 of which were labeled by judges. We found that HITS outperforms PageRank, but is about as effective as web-page in-degree. The same holds true when any of the link-based features are combined with the text retrieval algorithm. Finally, we studied the relationship between query specificity and the effectiveness of selected features, and found that link-based features perform better for general queries, whereas BM25F performs better for specific queries. Categories and Subject Descriptors H.3.3 [Information Search and Retrieval]: Information Storage and Retrieval-search process, selection process General Terms Algorithms, Measurement, Experimentation 1. INTRODUCTION Link graph features such as in-degree and PageRank have been shown to significantly improve the performance of text retrieval algorithms on the web. The HITS algorithm is also believed to be of interest for web search; to some degree, one may expect HITS to be more informative than other link-based features because it is query-dependent: it tries to measure the interest of pages with respect to a given query. However, it remains unclear today whether there are practical benefits of HITS over other link graph measures. This is even more true when we consider that modern retrieval algorithms used on the web use a document representation which incorporates the document's anchor text, i.e. the text of incoming links. This, at least to some degree, takes the link graph into account, in a query-dependent manner. Comparing HITS to PageRank or in-degree empirically is no easy task. There are two main difficulties: scale and relevance. Scale is important because link-based features are known to improve in quality as the document graph grows. If we carry out a small experiment, our conclusions won't carry over to large graphs such as the web. However, computing HITS efficiently on a graph the size of a realistic web crawl is extraordinarily difficult. Relevance is also crucial because we cannot measure the performance of a feature in the absence of human judgments: what is crucial is ranking at the top of the ten or so documents that a user will peruse. To our knowledge, this paper is the first attempt to evaluate HITS at a large scale and compare it to other link-based features with respect to human-evaluated judgments. Our results confirm many of the intuitions we have about link-based features and their relationship to text retrieval methods exploiting anchor text.
This is reassuring: in the absence of a theoretical model capable of tying these measures with relevance, the only way to validate our intuitions is to carry out realistic experiments. However, we were quite surprised to find that HITS, a query-dependent feature, is about as effective as web page in-degree, the most simpleminded query-independent link-based feature. This continues to be true when the link-based features are combined with a text retrieval algorithm exploiting anchor text. The remainder of this paper is structured as follows: Section 2 surveys related work. Section 3 describes the data sets we used in our study. Section 4 reviews the performance measures we used. Sections 5 and 6 describe the PageRank and HITS algorithms in more detail, and sketch the computational infrastructure we employed to carry out large scale experiments. Section 7 presents the results of our evaluations, and Section 8 offers concluding remarks. 2. RELATED WORK The idea of using hyperlink analysis for ranking web search results arose around 1997, and manifested itself in the HITS [16, 17] and PageRank [5, 21] algorithms. The popularity of these two algorithms and the phenomenal success of the Google search engine, which uses PageRank, have spawned a large amount of subsequent research. There are numerous attempts at improving the effectiveness of HITS and PageRank. Query-dependent link-based ranking algorithms inspired by HITS include SALSA [19], Randomized HITS [20], and PHITS [7], to name a few. Query-independent link-based ranking algorithms inspired by PageRank include TrafficRank [22], BlockRank [14], and TrustRank [11], and many others. Another line of research is concerned with analyzing the mathematical properties of HITS and PageRank. For example, Borodin et al. [3] investigated various theoretical properties of PageRank, HITS, SALSA, and PHITS, including their similarity and stability, while Bianchini et al. [2] studied the relationship between the structure of the web graph and the distribution of PageRank scores, and Langville and Meyer examined basic properties of PageRank such as existence and uniqueness of an eigenvector and convergence of power iteration [18]. Given the attention that has been paid to improving the effectiveness of PageRank and HITS, and the thorough studies of the mathematical properties of these algorithms, it is somewhat surprising that very few evaluations of their effectiveness have been published. We are aware of two studies that have attempted to formally evaluate the effectiveness of HITS and of PageRank. Amento et al. [1] employed quantitative measures, but based their experiments on the result sets of just 5 queries and the web-graph induced by topical crawls around the result set of each query. A more recent study by Borodin et al. [4] is based on 34 queries, result sets of 200 pages per query obtained from Google, and a neighborhood graph derived by retrieving 50 in-links per result from Google. By contrast, our study is based on over 28,000 queries and a web graph covering 2.9 billion URLs. 3. OUR DATA SETS Our evaluation is based on two data sets: a large web graph and a substantial set of queries with associated results, some of which were labeled by human judges. Our web graph is based on a web crawl that was conducted in a breadth-first-search fashion, and successfully retrieved 463,685,607 HTML pages. 
These pages contain 17,672,011,890 hyperlinks (after eliminating duplicate hyperlinks embedded in the same web page), which refer to a total of 2,897,671,002 URLs. Thus, at the end of the crawl there were 2,433,985,395 URLs in the frontier set of the crawler that had been discovered, but not yet downloaded. The mean out-degree of crawled web pages is 38.11; the mean in-degree of discovered pages (whether crawled or not) is 6.10. Also, it is worth pointing out that there is a lot more variance in in-degrees than in out-degrees; some popular pages have millions of incoming links. As we will see, this property affects the computational cost of HITS. Our query set was produced by sampling 28,043 queries from the MSN Search query log, and retrieving a total of 66,846,214 result URLs for these queries (using commercial search engine technology), or about 2,838 results per query on average. It is important to point out that our 2.9 billion URL web graph does not cover all these result URLs. In fact, only 9,525,566 of the result URLs (about 14.25%) were covered by the graph. 485,656 of the results in the query set (about 0.73% of all results, or about 17.3 results per query) were rated by human judges as to their relevance to the given query, and labeled on a six-point scale (the labels being definitive, excellent, good, fair, bad and detrimental). Results were selected for judgment based on their commercial search engine placement; in other words, the subset of labeled results is not random, but biased towards documents considered relevant by pre-existing ranking algorithms. Involving a human in the evaluation process is extremely cumbersome and expensive; however, human judgments are crucial for the evaluation of search engines. This is so because no document features have been found yet that can effectively estimate the relevance of a document to a user query. Since content-match features are very unreliable (and even more so link features, as we will see) we need to ask a human to evaluate the results in order to compare the quality of features. Evaluating the retrieval results from document scores and human judgments is not trivial and has been the subject of many investigations in the IR community. A good performance measure should correlate with user satisfaction, taking into account that users will dislike having to delve deep in the results to find relevant documents. For this reason, standard correlation measures (such as the correlation coefficient between the score and the judgment of a document), or order correlation measures (such as Kendall tau between the score and judgment-induced orders) are not adequate. 4. MEASURING PERFORMANCE In this study, we quantify the effectiveness of various ranking algorithms using three measures: NDCG, MRR, and MAP. The normalized discounted cumulative gains (NDCG) measure [13] discounts the contribution of a document to the overall score as the document's rank increases (assuming that the best document has the lowest rank). Such a measure is particularly appropriate for search engines, as studies have shown that search engine users rarely consider anything beyond the first few results [12]. NDCG values are normalized to be between 0 and 1, with 1 being the NDCG of a perfect ranking scheme that completely agrees with the assessment of the human judges.
The discounted cumulative gain at a particular rank-threshold T (DCG@T) is defined to be $\sum_{j=1}^{T} \frac{1}{\log(1+j)} \left(2^{r(j)} - 1\right)$, where r(j) is the rating (0=detrimental, 1=bad, 2=fair, 3=good, 4=excellent, and 5=definitive) at rank j. The NDCG is computed by dividing the DCG of a ranking by the highest possible DCG that can be obtained for that query. Finally, the NDCGs of all queries in the query set are averaged to produce a mean NDCG. The reciprocal rank (RR) of the ranked result set of a query is defined to be the reciprocal value of the rank of the highest-ranking relevant document in the result set. The RR at rank-threshold T is defined to be 0 if none of the highest-ranking T documents is relevant. The mean reciprocal rank (MRR) of a query set is the average reciprocal rank of all queries in the query set. Given a ranked set of n results, let rel(i) be 1 if the result at rank i is relevant and 0 otherwise. The precision P(j) at rank j is defined to be $P(j) = \frac{1}{j} \sum_{i=1}^{j} rel(i)$, i.e. the fraction of the relevant results among the j highest-ranking results. The average precision (AP) at rank-threshold k is defined to be $\frac{\sum_{i=1}^{k} P(i)\, rel(i)}{\sum_{i=1}^{n} rel(i)}$. The mean average precision (MAP) of a query set is the mean of the average precisions of all queries in the query set. The above definitions of MRR and MAP rely on the notion of a relevant result. We investigated two definitions of relevance: one where all documents rated fair or better were deemed relevant, and one where all documents rated good or better were deemed relevant. For reasons of space, we only report MAP and MRR values computed using the latter definition; using the former definition does not change the qualitative nature of our findings. Similarly, we computed NDCG, MAP, and MRR values for a wide range of rank-thresholds; we report results here at rank 10; again, changing the rank-threshold never led us to different conclusions. Recall that over 99% of documents are unlabeled. We chose to treat all these documents as irrelevant to the query. For some queries, however, not all relevant documents have been judged. This introduces a bias into our evaluation: features that bring new documents to the top of the rank may be penalized. This will be more acute for features less correlated to the pre-existing commercial ranking algorithms used to select documents for judgment. On the other hand, most queries have few perfect relevant documents (i.e. home page or item searches) and they will most often be within the judged set.
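To make these definitions concrete, the following is a minimal sketch, in Python (used here purely for illustration; it is not part of the system described in this paper), of how NDCG@T, RR@T, and AP@k can be computed for one ranked result list. A result is treated as relevant if it is rated good or better, matching the definition used above; the function names and the example ratings are hypothetical.

import math

def dcg(ratings, T):
    # ratings[j] is the judged rating (0-5) of the result at rank j+1;
    # the discount at rank j is 1/log(1+j), so with 0-based index i it is 1/log(i+2).
    return sum((2 ** r - 1) / math.log(i + 2) for i, r in enumerate(ratings[:T]))

def ndcg(ratings, T=10):
    # ratings is assumed to contain the ratings of the full result set of the query;
    # the log base cancels out in the normalization.
    best = dcg(sorted(ratings, reverse=True), T)
    return dcg(ratings, T) / best if best > 0 else 0.0

def reciprocal_rank(ratings, T=10, rel_threshold=3):
    # rel_threshold=3 means "good" or better counts as relevant.
    for rank, r in enumerate(ratings[:T], start=1):
        if r >= rel_threshold:
            return 1.0 / rank
    return 0.0

def average_precision(ratings, k=10, rel_threshold=3):
    rel = [1 if r >= rel_threshold else 0 for r in ratings]
    hits, precision_sum = 0, 0.0
    for i, flag in enumerate(rel[:k], start=1):
        if flag:
            hits += 1
            precision_sum += hits / i          # P(i) at a relevant rank i
    total_rel = sum(rel)                       # relevant results in the whole result set
    return precision_sum / total_rel if total_rel > 0 else 0.0

# Example ratings for the ten highest-ranking results of one query
# (unlabeled documents are treated as 0, i.e. irrelevant).
ratings = [4, 0, 2, 3, 0, 0, 1, 0, 0, 5]
print(ndcg(ratings), reciprocal_rank(ratings), average_precision(ratings))

The per-query values would then be averaged over the query set to obtain the mean NDCG, MRR, and MAP figures reported below.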
5. COMPUTING PAGERANK ON A LARGE WEB GRAPH PageRank is a query-independent measure of the importance of web pages, based on the notion of peer-endorsement: A hyperlink from page A to page B is interpreted as an endorsement of page B's content by page A's author. The following recursive definition captures this notion of endorsement: $R(v) = \sum_{(u,v) \in E} \frac{R(u)}{Out(u)}$ where R(v) is the score (importance) of page v, (u, v) is an edge (hyperlink) from page u to page v contained in the edge set E of the web graph, and Out(u) is the out-degree (number of embedded hyperlinks) of page u. However, this definition suffers from a severe shortcoming: In the fixed point of this recursive equation, only pages that are part of a strongly-connected component receive a non-zero score. In order to overcome this deficiency, Page et al. grant each page a guaranteed minimum score, giving rise to the definition of standard PageRank: $R(v) = \frac{d}{|V|} + (1-d) \sum_{(u,v) \in E} \frac{R(u)}{Out(u)}$ where |V| is the size of the vertex set (the number of known web pages), and d is a damping factor, typically set to be between 0.1 and 0.2. Assuming that scores are normalized to sum up to 1, PageRank can be viewed as the stationary probability distribution of a random walk on the web graph, where at each step of the walk, the walker with probability 1 − d moves from its current node u to a neighboring node v, and with probability d selects a node uniformly at random from all nodes in the graph and jumps to it. In the limit, the random walker is at node v with probability R(v). One issue that has to be addressed when implementing PageRank is how to deal with sink nodes, nodes that do not have any outgoing links. One possibility would be to select another node uniformly at random and transition to it; this is equivalent to adding edges from each sink node to all other nodes in the graph. We chose the alternative approach of introducing a single phantom node. Each sink node has an edge to the phantom node, and the phantom node has an edge to itself. In practice, PageRank scores can be computed using power iteration. Since PageRank is query-independent, the computation can be performed off-line ahead of query time. This property has been key to PageRank's success, since it is a challenging engineering problem to build a system that can perform any non-trivial computation on the web graph at query time. In order to compute PageRank scores for all 2.9 billion nodes in our web graph, we implemented a distributed version of PageRank. The computation consists of two distinct phases. In the first phase, the link files produced by the web crawler, which contain page URLs and their associated link URLs in textual form, are partitioned among the machines in the cluster used to compute PageRank scores, and converted into a more compact format along the way. Specifically, URLs are partitioned across the machines in the cluster based on a hash of the URLs' host component, and each machine in the cluster maintains a table mapping the URL to a 32-bit integer. The integers are drawn from a densely packed space, so as to make suitable indices into the array that will later hold the PageRank scores. The system then translates our log of pages and their associated hyperlinks into a compact representation where both page URLs and link URLs are represented by their associated 32-bit integers. Hashing the host component of the URLs guarantees that all URLs from the same host are assigned to the same machine in our scoring cluster. Since over 80% of all hyperlinks on the web are relative (that is, are between two pages on the same host), this property greatly reduces the amount of network communication required by the second stage of the distributed scoring computation. The second phase performs the actual PageRank power iteration. Both the link data and the current PageRank vector reside on disk and are read in a streaming fashion, while the new PageRank vector is maintained in memory. We represent PageRank scores as 64-bit floating point numbers. PageRank contributions to pages assigned to remote machines are streamed to the remote machine via a TCP connection. We used a three-machine cluster, each machine equipped with 16 GB of RAM, to compute standard PageRank scores for all 2.9 billion URLs that were contained in our web graph.
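As an illustration of the computation just described (though not of its distributed, disk-streaming implementation), the following small in-memory sketch performs the power iteration with a damping factor and a single phantom node that absorbs the out-flow of sink nodes. The toy graph, function name, and parameter defaults are placeholders, not part of the production system.

def pagerank(num_nodes, edges, d=0.15, iterations=200):
    # edges: list of (u, v) pairs with 0 <= u, v < num_nodes.
    # Node num_nodes acts as the phantom node: every sink links to it,
    # and it links to itself, so its score simply absorbs the sinks' mass.
    n = num_nodes + 1
    out_deg = [0] * n
    for u, _ in edges:
        out_deg[u] += 1
    links = list(edges)
    for u in range(num_nodes):
        if out_deg[u] == 0:                 # sink node
            links.append((u, num_nodes))
            out_deg[u] = 1
    links.append((num_nodes, num_nodes))    # phantom self-loop
    out_deg[num_nodes] = 1

    scores = [1.0 / n] * n
    for _ in range(iterations):
        new = [d / n] * n                   # guaranteed minimum score d/|V|
        for u, v in links:
            new[v] += (1.0 - d) * scores[u] / out_deg[u]
        scores = new
    return scores[:num_nodes]

# Toy example: 0 -> 1, 1 -> 2, and 2 is a sink.
print(pagerank(3, [(0, 1), (1, 2)]))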
We used a damping factor of 0.15, and performed 200 power iterations. Starting at iteration 165, the L∞ norm of the change in the PageRank vector from one iteration to the next had stopped decreasing, indicating that we had reached as much of a fixed point as the limitations of 64-bit floating point arithmetic would allow.
[Figure 1: Effectiveness of authority scores computed using different parameterizations of HITS. Three panels plot NDCG@10, MAP@10, and MRR@10 against the number of back-links sampled per result, with one curve each for hits-aut-all, hits-aut-ih, and hits-aut-id.]
A post-processing phase uses the final PageRank vectors (one per machine) and the table mapping URLs to 32-bit integers (representing indices into each PageRank vector) to score the result URLs in our query log. As mentioned above, our web graph covered 9,525,566 of the 66,846,214 result URLs. These URLs were annotated with their computed PageRank score; all other URLs received a score of 0. 6. HITS HITS, unlike PageRank, is a query-dependent ranking algorithm. HITS (which stands for Hypertext Induced Topic Search) is based on the following two intuitions: First, hyperlinks can be viewed as topical endorsements: A hyperlink from a page u devoted to topic T to another page v is likely to endorse the authority of v with respect to topic T. Second, the result set of a particular query is likely to have a certain amount of topical coherence. Therefore, it makes sense to perform link analysis not on the entire web graph, but rather on just the neighborhood of pages contained in the result set, since this neighborhood is more likely to contain topically relevant links. But while the set of nodes immediately reachable from the result set is manageable (given that most pages have only a limited number of hyperlinks embedded into them), the set of pages immediately leading to the result set can be enormous. For this reason, Kleinberg suggests sampling a fixed-size random subset of the pages linking to any high-in-degree page in the result set. Moreover, Kleinberg suggests considering only links that cross host boundaries, the rationale being that links between pages on the same host (intrinsic links) are likely to be navigational or nepotistic and not topically relevant. Given a web graph (V, E) with vertex set V and edge set E ⊆ V × V, and the set of result URLs to a query (called the root set R ⊆ V) as input, HITS computes a neighborhood graph consisting of a base set B ⊆ V (the root set and some of its neighboring vertices) and some of the edges in E induced by B. In order to formalize the definition of the neighborhood graph, it is helpful to first introduce a sampling operator and the concept of a link-selection predicate. Given a set A, the notation $S_n[A]$ draws n elements uniformly at random from A; $S_n[A] = A$ if $|A| \leq n$. A link-selection predicate P takes an edge (u, v) ∈ E. In this study, we use the following three link-selection predicates: $all(u,v) \Leftrightarrow \mathrm{true}$; $ih(u,v) \Leftrightarrow host(u) \neq host(v)$; $id(u,v) \Leftrightarrow domain(u) \neq domain(v)$, where host(u) denotes the host of URL u, and domain(u) denotes the domain of URL u. So, all is true for all links, whereas ih is true only for inter-host links, and id is true only for inter-domain links.
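The three predicates are straightforward to express in code. The sketch below is illustrative only: it uses Python's standard URL parsing and a deliberately naive notion of domain (the last two labels of the host name), whereas the registrar-based definition used in our experiments, described later, would require a public-suffix list.

from urllib.parse import urlparse

def host(url):
    return (urlparse(url).hostname or "").lower()

def domain(url):
    # Naive approximation of "the name purchased from a registrar":
    # simply take the last two labels of the host name.
    return ".".join(host(url).split(".")[-2:])

def pred_all(u, v):
    return True

def pred_ih(u, v):       # true only for inter-host links
    return host(u) != host(v)

def pred_id(u, v):       # true only for inter-domain links
    return domain(u) != domain(v)

print(pred_ih("http://news.bbc.co.uk/a", "http://www.bbc.co.uk/b"))  # True: different hosts
print(pred_id("http://news.bbc.co.uk/a", "http://www.bbc.co.uk/b"))  # False with this naive domain rule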
The outlinked-set $O^P$ of the root set R w.r.t. a link-selection predicate P is defined to be: $O^P = \bigcup_{u \in R} \{v \in V : (u,v) \in E \wedge P(u,v)\}$. The inlinking-set $I_s^P$ of the root set R w.r.t. a link-selection predicate P and a sampling value s is defined to be: $I_s^P = \bigcup_{v \in R} S_s[\{u \in V : (u,v) \in E \wedge P(u,v)\}]$. The base set $B_s^P$ of the root set R w.r.t. P and s is defined to be: $B_s^P = R \cup I_s^P \cup O^P$. The neighborhood graph $(B_s^P, N_s^P)$ has the base set $B_s^P$ as its vertex set and an edge set $N_s^P$ containing those edges in E that are covered by $B_s^P$ and permitted by P: $N_s^P = \{(u,v) \in E : u \in B_s^P \wedge v \in B_s^P \wedge P(u,v)\}$. To simplify notation, we write B to denote $B_s^P$, and N to denote $N_s^P$. For each node u in the neighborhood graph, HITS computes two scores: an authority score A(u), estimating how authoritative u is on the topic induced by the query, and a hub score H(u), indicating whether u is a good reference to many authoritative pages. This is done using the following algorithm:
1. For all u ∈ B do $H(u) := \sqrt{1/|B|}$, $A(u) := \sqrt{1/|B|}$.
2. Repeat until H and A converge:
(a) For all v ∈ B: $A'(v) := \sum_{(u,v) \in N} H(u)$
(b) For all u ∈ B: $H'(u) := \sum_{(u,v) \in N} A(v)$
(c) $H := H'/\|H'\|_2$, $A := A'/\|A'\|_2$
i.e. after each iteration H and A are normalized to unit length in Euclidean space (the squares of their elements sum up to 1).
In practice, implementing a system that can compute HITS within the time constraints of a major search engine (where the peak query load is in the thousands of queries per second, and the desired query response time is well below one second) is a major engineering challenge.
[Figure 2: Effectiveness of different features. Three bar charts report NDCG@10, MAP@10, and MRR@10 for bm25f, the in-degree, PageRank, and HITS authority features, the HITS hub and out-degree features, and a random ordering.]
Among other things, the web graph cannot reasonably be stored on disk, since seek times of modern hard disks are too slow to retrieve the links within the time constraints, and the graph does not fit into the main memory of a single machine, even when using the most aggressive compression techniques. In order to experiment with HITS and other query-dependent link-based ranking algorithms that require non-regular accesses to arbitrary nodes and edges in the web graph, we implemented a system called the Scalable Hyperlink Store, or SHS for short. SHS is a special-purpose database, distributed over an arbitrary number of machines, that keeps a highly compressed version of the web graph in memory and allows very fast lookup of nodes and edges.
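Independently of how the graph is stored, the iteration itself is compact. The following sketch runs it on a neighborhood graph that is assumed to have already been constructed (in our system that construction is what the SHS infrastructure makes fast); the node identifiers, the toy edge list, and the convergence test are illustrative simplifications.

import math

def hits(base_nodes, edges, max_iter=100, tol=1e-8):
    # base_nodes: the base set B; edges: the edge set N as (u, v) pairs within B.
    h = {u: math.sqrt(1.0 / len(base_nodes)) for u in base_nodes}
    a = dict(h)
    for _ in range(max_iter):
        new_a = {v: 0.0 for v in base_nodes}
        new_h = {u: 0.0 for u in base_nodes}
        for u, v in edges:
            new_a[v] += h[u]       # a page is authoritative if many hubs point to it
        for u, v in edges:
            new_h[u] += a[v]       # a page is a good hub if it points to authorities
        for vec in (new_a, new_h): # normalize to unit length in Euclidean space
            norm = math.sqrt(sum(x * x for x in vec.values())) or 1.0
            for k in vec:
                vec[k] /= norm
        converged = all(abs(new_a[v] - a[v]) < tol for v in base_nodes)
        a, h = new_a, new_h
        if converged:
            break
    return a, h

# Toy neighborhood graph: 0 and 1 both endorse 2; 2 points on to 3.
auth, hub = hits([0, 1, 2, 3], [(0, 2), (1, 2), (2, 3)])
print(auth, hub)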
On our hardware, it takes an average of 2 microseconds to map a URL to a 64-bit integer handle called a UID, 15 microseconds to look up all incoming or outgoing link UIDs associated with a page UID, and 5 microseconds to map a UID back to a URL (the last functionality not being required by HITS). The RPC overhead is about 100 microseconds, but the SHS API allows many lookups to be batched into a single RPC request. We implemented the HITS algorithm using the SHS infrastructure. We compiled three SHS databases, one containing all 17.6 billion links in our web graph (all), one containing only links between pages that are on different hosts (ih, for inter-host), and one containing only links between pages that are on different domains (id). We consider two URLs to belong to different hosts if the host portions of the URLs differ (in other words, we make no attempt to determine whether two distinct symbolic host names refer to the same computer), and we consider a domain to be the name purchased from a registrar (for example, we consider news.bbc.co.uk and www.bbc.co.uk to be different hosts belonging to the same domain). Using each of these databases, we computed HITS authority and hub scores for various parameterizations of the sampling operator S, sampling between 1 and 100 back-links of each page in the root set. Result URLs that were not covered by our web graph automatically received authority and hub scores of 0, since they were not connected to any other nodes in the neighborhood graph and therefore did not receive any endorsements. We performed forty-five different HITS computations, each combining one of the three link selection predicates (all, ih, and id) with a sampling value. For each combination, we loaded one of the three databases into an SHS system running on six machines (each equipped with 16 GB of RAM), and computed HITS authority and hub scores, one query at a time. The longest-running combination (using the all database and sampling 100 back-links of each root set vertex) required 30,456 seconds to process the entire query set, or about 1.1 seconds per query on average. 7. EXPERIMENTAL RESULTS For a given query Q, we need to rank the set of documents satisfying Q (the result set of Q). Our hypothesis is that good features should be able to rank relevant documents in this set higher than non-relevant ones, and this should result in an increase in each performance measure over the query set. We are specifically interested in evaluating the usefulness of HITS and other link-based features. In principle, we could do this by sorting the documents in each result set by their feature value, and compare the resulting NDCGs. We call this ranking with isolated features. Let us first examine the relative performance of the different parameterizations of the HITS algorithm we examined. Recall that we computed HITS for each combination of three link section schemes - all links (all), inter-host links only (ih), and inter-domain links only (id) - with back-link sampling values ranging from 1 to 100. Figure 1 shows the impact of the number of sampled back-links on the retrieval performance of HITS authority scores. Each graph is associated with one performance measure. The horizontal axis of each graph represents the number of sampled back-links, the vertical axis represents performance under the appropriate measure, and each curve depicts a link selection scheme. The id scheme slightly outperforms ih, and both vastly outperform the all scheme - eliminating nepotistic links pays off. 
The performance of the all scheme increases as more back-links of each root set vertex are sampled, while the performance of the id and ih schemes peaks at between 10 and 25 samples and then plateaus or even declines, depending on the performance measure. Having compared different parameterizations of HITS, we will now fix the number of sampled back-links at 100 and compare the three link selection schemes against other isolated features: PageRank, in-degree and out-degree counting links of all pages, of different hosts only and of different domains only (all, ih and id datasets respectively), and a text retrieval algorithm exploiting anchor text: BM25F [24]. BM25F is a state-of-the-art ranking function solely based on the textual content of the documents and their associated anchor texts. BM25F is a descendant of BM25 that combines the different textual fields of a document, namely title, body and anchor text. This model has been shown to be one of the best-performing web search scoring functions over the last few years [8, 24]. BM25F has a number of free parameters (2 per field, 6 in our case); we used the parameter values described in [24].
[Figure 3: Effectiveness measures for linear combinations of link-based features with BM25F. Three bar charts report NDCG@10, MAP@10, and MRR@10 for BM25F combined with each link-based feature, with BM25F alone as the right-most baseline.]
Figure 2 shows the NDCG, MRR, and MAP measures of these features. Again all performance measures (and all rank-thresholds we explored) agree. As expected, BM25F outperforms all link-based features by a large margin. The link-based features are divided into two groups, with a noticeable performance drop between the groups. The better-performing group consists of the features that are based on the number and/or quality of incoming links (in-degree, PageRank, and HITS authority scores); and the worse-performing group consists of the features that are based on the number and/or quality of outgoing links (out-degree and HITS hub scores). In the group of features based on incoming links, features that ignore nepotistic links perform better than their counterparts using all links. Moreover, using only inter-domain (id) links seems to be marginally better than using inter-host (ih) links. The fact that features based on outgoing links underperform those based on incoming links matches our expectations; if anything, it is mildly surprising that outgoing links provide a useful signal for ranking at all. On the other hand, the fact that in-degree features outperform PageRank under all measures is quite surprising.
A possible explanation is that link-spammers have been targeting the published PageRank algorithm for many years, and that this has led to anomalies in the web graph that affect PageRank, but not other link-based features that explore only a distance-1 neighborhood of the result set. Likewise, it is surprising that simple query-independent features such as in-degree, which might estimate global quality but cannot capture relevance to a query, would outperform query-dependent features such as HITS authority scores. However, we cannot investigate the effect of these features in isolation, without regard to the overall ranking function, for several reasons. First, features based on the textual content of documents (as opposed to link-based features) are the best predictors of relevance. Second, link-based features can be strongly correlated with textual features for several reasons, mainly the correlation between in-degree and number of textual anchor matches. Therefore, one must consider the effect of link-based features in combination with textual features. Otherwise, we may find a link-based feature that is very good in isolation but is strongly correlated with textual features and results in no overall improvement; and vice versa, we may find a link-based feature that is weak in isolation but significantly improves overall performance. For this reason, we have studied the combination of the link-based features above with BM25F. All feature combinations were done by considering the linear combination of two features as a document score, using the formula $score(d) = \sum_{i=1}^{n} w_i T_i(F_i(d))$, where d is a document (or document-query pair, in the case of BM25F), $F_i(d)$ (for 1 ≤ i ≤ n) is a feature extracted from d, $T_i$ is a transform, and $w_i$ is a free scalar weight that needs to be tuned. We chose transform functions that we empirically determined to be well-suited. Table 1 shows the chosen transform functions.
Table 1: Near-optimal feature transform functions.
  Feature        Transform function
  bm25f          T(s) = s
  pagerank       T(s) = log(s + 3·10^-12)
  degree-in-*    T(s) = log(s + 3·10^-2)
  degree-out-*   T(s) = log(s + 3·10^3)
  hits-aut-*     T(s) = log(s + 3·10^-8)
  hits-hub-*     T(s) = log(s + 3·10^-1)
This type of linear combination is appropriate if we assume features to be independent with respect to relevance and an exponential model for link features, as discussed in [8]. We tuned the weights by selecting a random subset of 5,000 queries from the query set, used an iterative refinement process to find weights that maximized a given performance measure on that training set, and used the remaining 23,043 queries to measure the performance of the thus derived scoring functions. We explored the pairwise combination of BM25F with every link-based scoring function.
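The scoring rule and the transforms of Table 1 amount to only a few lines of code. The following sketch (with invented feature values and weights; the actual weights came from the iterative refinement described above, which is not shown) illustrates how a document's combined score would be obtained.

import math

TRANSFORMS = {
    "bm25f":        lambda s: s,
    "pagerank":     lambda s: math.log(s + 3e-12),
    "degree-in-id": lambda s: math.log(s + 3e-2),
    "hits-aut-id":  lambda s: math.log(s + 3e-8),
}

def combined_score(features, weights):
    # score(d) = sum_i w_i * T_i(F_i(d))
    return sum(weights[name] * TRANSFORMS[name](value)
               for name, value in features.items())

# Hypothetical feature values for one document-query pair, and hypothetical weights.
features = {"bm25f": 17.3, "degree-in-id": 42.0}
weights = {"bm25f": 1.0, "degree-in-id": 0.2}
print(combined_score(features, weights))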
Figure 3 shows the NDCG, MRR, and MAP measures of these feature combinations, together with a baseline BM25F score (the right-most bar in each graph), which was computed using the same subset of 23,043 queries that were used as the test set for the feature combinations. Regardless of the performance measure applied, we can make the following general observations: 1. Combining any of the link-based features with BM25F results in a substantial performance improvement over BM25F in isolation. 2. The combination of BM25F with features based on incoming links (PageRank, in-degree, and HITS authority scores) performs substantially better than the combination with features based on outgoing links (HITS hub scores and out-degree). 3. The performance differences between the various combinations of BM25F with features based on incoming links are comparatively small, and the relative ordering of feature combinations is fairly stable across the different performance measures used. However, the combination of BM25F with any in-degree variant, and in particular with id in-degree, consistently outperforms the combination of BM25F with PageRank or HITS authority scores, and can be computed much more easily and quickly. Finally, we investigated whether certain features are better for some queries than for others. In particular, we are interested in the relationship between the specificity of a query and the performance of different ranking features. The most straightforward measure of the specificity of a query Q would be the number of documents in a search engine's corpus that satisfy Q. Unfortunately, the query set available to us did not contain this information. Therefore, we chose to approximate the specificity of Q by summing up the inverse document frequencies of the individual query terms comprising Q. The inverse document frequency (IDF) of a term t with respect to a corpus C is defined to be $\log(N/doc(t))$, where doc(t) is the number of documents in C containing t and N is the total number of documents in C. By summing up the IDFs of the query terms, we make the (flawed) assumption that the individual query terms are independent of each other. However, while not perfect, this approximation is at least directionally correct. We broke down our query set into 13 buckets, each bucket associated with an interval of query IDF values, and we computed performance metrics for all ranking functions applied (in isolation) to the queries in each bucket. In order to keep the graphs readable, we will not show the performance of all the features, but rather restrict ourselves to the four most interesting ones: PageRank, id HITS authority scores, id in-degree, and BM25F.
[Figure 4: Effectiveness measures (MAP@10) for selected isolated features (bm25fnorm, pagerank, degree-in-id, hits-aut-id-100), broken down by query specificity; the upper x-axis gives the number of queries in each bucket.]
Figure 4 shows the MAP@10 for all 13 query specificity buckets. Buckets on the far left of each graph represent very general queries; buckets on the far right represent very specific queries. The figures on the upper x-axis of each graph show the number of queries in each bucket (e.g. the right-most bucket contains 1,629 queries). BM25F performs best for medium-specific queries, peaking at the buckets representing the IDF sum interval [12,14). By comparison, HITS peaks at the bucket representing the IDF sum interval [4,6), and PageRank and in-degree peak at the bucket representing the interval [6,8), i.e. more general queries.
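The specificity measure used for this breakdown is easy to reproduce. Given per-term document frequencies, a query's IDF sum and its bucket follow directly, as in the sketch below; the document frequencies, the corpus size, and the bucket width of 2 (matching intervals such as [4,6) and [12,14) quoted above) are stand-ins for the values used in our experiments.

import math

def query_idf_sum(terms, doc_freq, corpus_size):
    # IDF(t) = log(N / doc(t)); terms missing from doc_freq are simply skipped here.
    return sum(math.log(corpus_size / doc_freq[t]) for t in terms if t in doc_freq)

def specificity_bucket(idf_sum, bucket_width=2):
    # Buckets cover the intervals [0,2), [2,4), ...
    return int(idf_sum // bucket_width)

doc_freq = {"jaguar": 1_200_000, "xj220": 9_000}   # hypothetical document frequencies
idf = query_idf_sum(["jaguar", "xj220"], doc_freq, 3_000_000_000)
print(idf, specificity_bucket(idf))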
8. CONCLUSIONS AND FUTURE WORK This paper describes a large-scale evaluation of the effectiveness of HITS in comparison with other link-based ranking algorithms, in particular PageRank and in-degree, when applied in isolation or in combination with a text retrieval algorithm exploiting anchor text (BM25F). Evaluation is carried out with respect to a large number of human-evaluated queries, using three different measures of effectiveness: NDCG, MRR, and MAP. Evaluating link-based features in isolation, we found that web page in-degree outperforms PageRank, and is about as effective as HITS authority scores. HITS hub scores and web page out-degree are much less effective ranking features, but still outperform a random ordering. A linear combination of any link-based features with BM25F produces a significant improvement in performance, and there is a clear difference between combining BM25F with a feature based on incoming links (in-degree, PageRank, or HITS authority scores) and a feature based on outgoing links (HITS hub scores and out-degree), but within those two groups the precise choice of link-based feature matters relatively little. We believe that the measurements presented in this paper provide a solid evaluation of the best-known link-based ranking schemes. There are many possible variants of these schemes, and many other link-based ranking algorithms have been proposed in the literature, hence we do not claim this work to be the last word on this subject, but rather the first step on a long road. Future work includes evaluation of different parameterizations of PageRank and HITS. In particular, we would like to study the impact of changes to the PageRank damping factor on effectiveness, the impact of various schemes meant to counteract the effects of link spam, and the effect of weighing hyperlinks differently depending on whether they are nepotistic or not. Going beyond PageRank and HITS, we would like to measure the effectiveness of other link-based ranking algorithms, such as SALSA. Finally, we are planning to experiment with more complex feature combinations. 9. REFERENCES [1] B. Amento, L. Terveen, and W. Hill. Does authority mean quality? Predicting expert quality ratings of web documents. In Proc. of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 296-303, 2000. [2] M. Bianchini, M. Gori, and F. Scarselli. Inside PageRank. ACM Transactions on Internet Technology, 5(1):92-128, 2005. [3] A. Borodin, G. O. Roberts, and J. S. Rosenthal. Finding authorities and hubs from link structures on the World Wide Web. In Proc. of the 10th International World Wide Web Conference, pages 415-429, 2001. [4] A. Borodin, G. O. Roberts, J. S. Rosenthal, and P. Tsaparas. Link analysis ranking: algorithms, theory, and experiments. ACM Transactions on Internet Technology, 5(1):231-297, 2005. [5] S. Brin and L. Page. The anatomy of a large-scale hypertextual Web search engine. Computer Networks and ISDN Systems, 30(1-7):107-117, 1998. [6] C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. Hullender. Learning to rank using gradient descent. In Proc. of the 22nd International Conference on Machine Learning, pages 89-96, New York, NY, USA, 2005. ACM Press. [7] D. Cohn and H. Chang. Learning to probabilistically identify authoritative documents. In Proc. of the 17th International Conference on Machine Learning, pages 167-174, 2000. [8] N. Craswell, S. Robertson, H. Zaragoza, and M. Taylor. Relevance weighting for query independent evidence. In Proc. of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 416-423, 2005. [9] E. Garfield. Citation analysis as a tool in journal evaluation. Science, 178(4060):471-479, 1972. [10] Z. Gyöngyi and H. Garcia-Molina. Web spam taxonomy. In 1st International Workshop on Adversarial Information Retrieval on the Web, 2005. [11] Z. Gyöngyi, H. Garcia-Molina, and J. Pedersen. Combating web spam with TrustRank. In Proc. of the 30th International Conference on Very Large Databases, pages 576-587, 2004. [12] B. J. Jansen, A.
Spink, J. Bateman, and T. Saracevic. Real life information retrieval: a study of user queries on the web. ACM SIGIR Forum, 32(1):5-17, 1998. [13] K. Järvelin and J. Kekäläinen. Cumulated gain-based evaluation of IR techniques. ACM Transactions on Information Systems, 20(4):422-446, 2002. [14] S. D. Kamvar, T. H. Haveliwala, C. D. Manning, and G. H. Golub. Extrapolation methods for accelerating PageRank computations. In Proc. of the 12th International World Wide Web Conference, pages 261-270, 2003. [15] M. M. Kessler. Bibliographic coupling between scientific papers. American Documentation, 14(1):10-25, 1963. [16] J. M. Kleinberg. Authoritative sources in a hyperlinked environment. In Proc. of the 9th Annual ACM-SIAM Symposium on Discrete Algorithms, pages 668-677, 1998. [17] J. M. Kleinberg. Authoritative sources in a hyperlinked environment. Journal of the ACM, 46(5):604-632, 1999. [18] A. N. Langville and C. D. Meyer. Deeper inside PageRank. Internet Mathematics, 1(3):335-380, 2005. [19] R. Lempel and S. Moran. The stochastic approach for link-structure analysis (SALSA) and the TKC effect. Computer Networks and ISDN Systems, 33(1-6):387-401, 2000. [20] A. Y. Ng, A. X. Zheng, and M. I. Jordan. Stable algorithms for link analysis. In Proc. of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 258-266, 2001. [21] L. Page, S. Brin, R. Motwani, and T. Winograd. The PageRank citation ranking: Bringing order to the web. Technical report, Stanford Digital Library Technologies Project, 1998. [22] J. A. Tomlin. A new paradigm for ranking pages on the World Wide Web. In Proc. of the 12th International World Wide Web Conference, pages 350-355, 2003. [23] T. Upstill, N. Craswell, and D. Hawking. Predicting fame and fortune: Pagerank or indegree? In Proc. of the Australasian Document Computing Symposium, pages 31-40, 2003. [24] H. Zaragoza, N. Craswell, M. Taylor, S. Saria, and S. Robertson. Microsoft Cambridge at TREC-13: Web and HARD tracks. In Proc. of the 13th Text Retrieval Conference, 2004.
SIGIR 2007 Proceedings Session 20: Link Analysis HITS on the Web: How does it Compare? * ABSTRACT This paper describes a large-scale evaluation of the effectiveness of HITS in comparison with other link-based ranking algorithms, when used in combination with a state-ofthe-art text retrieval algorithm exploiting anchor text. We quantified their effectiveness using three common performance measures: the mean reciprocal rank, the mean average precision, and the normalized discounted cumulative gain measurements. The evaluation is based on two large data sets: a breadth-first search crawl of 463 million web pages containing 17.6 billion hyperlinks and referencing 2.9 billion distinct URLs; and a set of 28,043 queries sampled from a query log, each query having on average 2,383 results, about 17 of which were labeled by judges. We found that HITS outperforms PageRank, but is about as effective as web-page in-degree. The same holds true when any of the link-based features are combined with the text retrieval algorithm. Finally, we studied the relationship between query specificity and the effectiveness of selected features, and found that link-based features perform better for general queries, whereas BM25F performs better for specific queries. 1. INTRODUCTION Link graph features such as in-degree and PageRank have been shown to significantly improve the performance of text retrieval algorithms on the web. The HITS algorithm is also believed to be of interest for web search; to some degree, one may expect HITS to be more informative that other link-based features because it is query-dependent: it tries to measure the interest of pages with respect to a given query. However, it remains unclear today whether there are practical benefits of HITS over other link graph measures. This is even more true when we consider that modern retrieval algorithms used on the web use a document representation which incorporates the document's anchor text, i.e. the text of incoming links. This, at least to some degree, takes the link graph into account, in a query-dependent manner. Comparing HITS to PageRank or in-degree empirically is no easy task. There are two main difficulties: scale and relevance. Scale is important because link-based features are known to improve in quality as the document graph grows. If we carry out a small experiment, our conclusions won't carry over to large graphs such as the web. However, computing HITS efficiently on a graph the size of a realistic web crawl is extraordinarily difficult. Relevance is also crucial because we cannot measure the performance of a feature in the absence of human judgments: what is crucial is ranking at the top of the ten or so documents that a user will peruse. To our knowledge, this paper is the first attempt to evaluate HITS at a large scale and compare it to other link-based features with respect to human evaluated judgment. Our results confirm many of the intuitions we have about link-based features and their relationship to text retrieval methods exploiting anchor text. This is reassuring: in the absence of a theoretical model capable of tying these measures with relevance, the only way to validate our intuitions is to carry out realistic experiments. However, we were quite surprised to find that HITS, a query-dependent feature, is about as effective as web page in-degree, the most simpleminded query-independent link-based feature. 
This continues to be true when the link-based features are combined with a text retrieval algorithm exploiting anchor text. The remainder of this paper is structured as follows: Section 2 surveys related work. Section 3 describes the data sets we used in our study. Section 4 reviews the performance measures we used. Sections 5 and 6 describe the PageRank and HITS algorithms in more detail, and sketch the computational infrastructure we employed to carry out large scale experiments. Section 7 presents the results of our evaluations, and Section 8 offers concluding remarks. 2. RELATED WORK The idea of using hyperlink analysis for ranking web search results arose around 1997, and manifested itself in the HITS [16, 17] and PageRank [5, 21] algorithms. The popularity of these two algorithms and the phenomenal success of the Google search engine, which uses PageRank, have spawned a large amount of subsequent research. There are numerous attempts at improving the effectiveness of HITS and PageRank. Query-dependent link-based ranking algorithms inspired by HITS include SALSA [19], Randomized HITS [20], and PHITS [7], to name a few. Query-independent link-based ranking algorithms inspired by PageRank include TrafficRank [22], BlockRank [14], and TrustRank [11], and many others. Another line of research is concerned with analyzing the mathematical properties of HITS and PageRank. For example, Borodin et al. [3] investigated various theoretical properties of PageRank, HITS, SALSA, and PHITS, including their similarity and stability, while Bianchini et al. [2] studied the relationship between the structure of the web graph and the distribution of PageRank scores, and Langville and Meyer examined basic properties of PageRank such as existence and uniqueness of an eigenvector and convergence of power iteration [18]. Given the attention that has been paid to improving the effectiveness of PageRank and HITS, and the thorough studies of the mathematical properties of these algorithms, it is somewhat surprising that very few evaluations of their effectiveness have been published. We are aware of two studies that have attempted to formally evaluate the effectiveness of HITS and of PageRank. Amento et al. [1] employed quantitative measures, but based their experiments on the result sets of just 5 queries and the web-graph induced by topical crawls around the result set of each query. A more recent study by Borodin et al. [4] is based on 34 queries, result sets of 200 pages per query obtained from Google, and a neighborhood graph derived by retrieving 50 in-links per result from Google. By contrast, our study is based on over 28,000 queries and a web graph covering 2.9 billion URLs. 3. OUR DATA SETS Our evaluation is based on two data sets: a large web graph and a substantial set of queries with associated results, some of which were labeled by human judges. Our web graph is based on a web crawl that was conducted in a breadth-first-search fashion, and successfully retrieved 463,685,607 HTML pages. These pages contain 17,672,011,890 hyperlinks (after eliminating duplicate hyperlinks embedded in the same web page), which refer to a total of 2,897,671,002 URLs. Thus, at the end of the crawl there were 2,433,985,395 URLs in the "frontier" set of the crawler that had been discovered, but not yet downloaded. The mean out-degree of crawled web pages is 38.11; the mean in-degree of discovered pages (whether crawled or not) is 6.10. 
Also, it is worth pointing out that there is a lot more variance in in-degrees than in out-degrees; some popular pages have millions of incoming links. As we will see, this property affects the computational cost of HITS. Our query set was produced by sampling 28,043 queries from the MSN Search query log, and retrieving a total of 66,846,214 result URLs for these queries (using commercial search engine technology), or about 2,838 results per query on average. It is important to point out that our 2.9 billion URL web graph does not cover all these result URLs. In fact, only 9,525,566 of the result URLs (about 14.25%) were covered by the graph. 485,656 of the results in the query set (about 0.73% of all results, or about 17.3 results per query) were rated by human judges as to their relevance to the given query, and labeled on a six-point scale (the labels being "definitive", "excellent", "good", "fair", "bad" and "detrimental"). Results were selected for judgment based on their commercial search engine placement; in other words, the subset of labeled results is not random, but biased towards documents considered relevant by pre-existing ranking algorithms. Involving a human in the evaluation process is extremely cumbersome and expensive; however, human judgments are crucial for the evaluation of search engines. This is so because no document features have been found yet that can effectively estimate the relevance of a document to a user query. Since content-match features are very unreliable (and even more so link features, as we will see) we need to ask a human to evaluate the results in order to compare the quality of features. Evaluating the retrieval results from document scores and human judgments is not trivial and has been the subject of many investigations in the IR community. A good performance measure should correlate with user satisfaction, taking into account that users will dislike having to delve deep in the results to find relevant documents. For this reason, standard correlation measures (such as the correlation coefficient between the score and the judgment of a document), or order correlation measures (such as Kendall tau between the score and judgment induced orders) are not adequate. 4. MEASURING PERFORMANCE In this study, we quantify the effectiveness of various ranking algorithms using three measures: NDCG, MRR, and MAP. The normalized discounted cumulative gains (NDCG) measure [13] discounts the contribution of a document to the overall score as the document's rank increases (assuming that the best document has the lowest rank). Such a measure is particularly appropriate for search engines, as studies have shown that search engine users rarely consider anything beyond the first few results [12]. NDCG values are normalized to be between 0 and 1, with 1 being the NDCG of a "perfect" ranking scheme that completely agrees with the assessment of the human judges. The discounted cumulative gain at a particular rank-threshold T (DCG@T) is de ing (0 = detrimental, 1 = bad, 2 = fair, 3 = good, 4 = excellent, and 5 = definitive) at rank j. The NDCG is computed by dividing the DCG of a ranking by the highest possible DCG that can be obtained for that query. Finally, the NDGCs of all queries in the query set are averaged to produce a mean NDCG. The reciprocal rank (RR) of the ranked result set of a query is defined to be the reciprocal value of the rank of the highest-ranking relevant document in the result set. 
The RR at rank-threshold T is defined to be 0 if none of the highestranking T documents is relevant. The mean reciprocal rank (MRR) of a query set is the average reciprocal rank of all queries in the query set. Given a ranked set of n results, let rel (i) be 1 if the result at rank i is relevant and 0 otherwise. The precision P (j) of the relevant results among the j highest-ranking results. The average precision (AP) at rank-threshold k is defined to i = 1 rel (i). The mean average precision (MAP) of a query set is the mean of the average precisions of all queries in the query set. The above definitions of MRR and MAP rely on the notion of a "relevant" result. We investigated two definitions of relevance: One where all documents rated "fair" or better were deemed relevant, and one were all documents rated "good" or better were deemed relevant. For reasons of space, we only report MAP and MRR values computed using the latter definition; using the former definition does not change the qualitative nature of our findings. Similarly, we computed NDCG, MAP, and MRR values for a wide range of rank-thresholds; we report results here at rank 10; again, changing the rank-threshold never led us to different conclusions. Recall that over 99% of documents are unlabeled. We chose to treat all these documents as irrelevant to the query. For some queries, however, not all relevant documents have been judged. This introduces a bias into our evaluation: features that bring new documents to the top of the rank may be penalized. This will be more acute for features less correlated to the pre-existing commercial ranking algorithms used to select documents for judgment. On the other hand, most queries have few perfect relevant documents (i.e. home page or item searches) and they will most often be within the judged set. 5. COMPUTING PAGERANK ON A LARGE WEB GRAPH PageRank is a query-independent measure of the importance of web pages, based on the notion of peer-endorsement: A hyperlink from page A to page B is interpreted as an endorsement of page B's content by page A's author. The following recursive definition captures this notion of endorsement: where R (v) is the score (importance) of page v, (u, v) is an edge (hyperlink) from page u to page v contained in the edge set E of the web graph, and Out (u) is the out-degree (number of embedded hyperlinks) of page u. However, this definition suffers from a severe shortcoming: In the fixedpoint of this recursive equation, only edges that are part of a strongly-connected component receive a non-zero score. In order to overcome this deficiency, Page et al. grant each page a guaranteed "minimum score", giving rise to the definition of standard PageRank: where IV I is the size of the vertex set (the number of known web pages), and d is a "damping factor", typically set to be between 0.1 and 0.2. Assuming that scores are normalized to sum up to 1, PageRank can be viewed as the stationary probability distribution of a random walk on the web graph, where at each step of the walk, the walker with probability 1--d moves from its current node u to a neighboring node v, and with probability d selects a node uniformly at random from all nodes in the graph and jumps to it. In the limit, the random walker is at node v with probability R (v). One issue that has to be addressed when implementing PageRank is how to deal with "sink" nodes, nodes that do not have any outgoing links. 
One possibility would be to select another node uniformly at random and transition to it; this is equivalent to adding edges from each sink nodes to all other nodes in the graph. We chose the alternative approach of introducing a single "phantom" node. Each sink node has an edge to the phantom node, and the phantom node has an edge to itself. In practice, PageRank scores can be computed using power iteration. Since PageRank is query-independent, the computation can be performed off-line ahead of query time. This property has been key to PageRank's success, since it is a challenging engineering problem to build a system that can perform any non-trivial computation on the web graph at query time. In order to compute PageRank scores for all 2.9 billion nodes in our web graph, we implemented a distributed version of PageRank. The computation consists of two distinct phases. In the first phase, the link files produced by the web crawler, which contain page URLs and their associated link URLs in textual form, are partitioned among the machines in the cluster used to compute PageRank scores, and converted into a more compact format along the way. Specifically, URLs are partitioned across the machines in the cluster based on a hash of the URLs' host component, and each machine in the cluster maintains a table mapping the URL to a 32-bit integer. The integers are drawn from a densely packed space, so as to make suitable indices into the array that will later hold the PageRank scores. The system then translates our log of pages and their associated hyperlinks into a compact representation where both page URLs and link URLs are represented by their associated 32-bit integers. Hashing the host component of the URLs guarantees that all URLs from the same host are assigned to the same machine in our scoring cluster. Since over 80% of all hyperlinks on the web are relative (that is, are between two pages on the same host), this property greatly reduces the amount of network communication required by the second stage of the distributed scoring computation. The second phase performs the actual PageRank power iteration. Both the link data and the current PageRank vector reside on disk and are read in a streaming fashion; while the new PageRank vector is maintained in memory. We represent PageRank scores as 64-bit floating point numbers. PageRank contributions to pages assigned to remote machines are streamed to the remote machine via a TCP connection. We used a three-machine cluster, each machine equipped with 16 GB of RAM, to compute standard PageRank scores for all 2.9 billion URLs that were contained in our web graph. We used a damping factor of 0.15, and performed 200 power iterations. Starting at iteration 165, the L ∞ norm of the change in the PageRank vector from one iteration to the next had stopped decreasing, indicating that we had reached as much of a fixed point as the limitations of 64-bit floating point arithmetic would allow. Figure 1: Effectiveness of authority scores computed using different parameterizations of HITS. A post-processing phase uses the final PageRank vectors (one per machine) and the table mapping URLs to 32-bit integers (representing indices into each PageRank vector) to score the result URL in our query log. As mentioned above, our web graph covered 9,525,566 of the 66,846,214 result URLs. These URLs were annotated with their computed PageRank score; all other URLs received a score of 0. 6. HITS HITS, unlike PageRank, is a query-dependent ranking algorithm. 
6. HITS HITS, unlike PageRank, is a query-dependent ranking algorithm. HITS (which stands for "Hypertext Induced Topic Search") is based on the following two intuitions: First, hyperlinks can be viewed as topical endorsements: A hyperlink from a page u devoted to topic T to another page v is likely to endorse the authority of v with respect to topic T. Second, the result set of a particular query is likely to have a certain amount of topical coherence. Therefore, it makes sense to perform link analysis not on the entire web graph, but rather on just the neighborhood of pages contained in the result set, since this neighborhood is more likely to contain topically relevant links. But while the set of nodes immediately reachable from the result set is manageable (given that most pages have only a limited number of hyperlinks embedded into them), the set of pages immediately leading to the result set can be enormous. For this reason, Kleinberg suggests sampling a fixed-size random subset of the pages linking to any high-indegree page in the result set. Moreover, Kleinberg suggests considering only links that cross host boundaries, the rationale being that links between pages on the same host ("intrinsic links") are likely to be navigational or nepotistic and not topically relevant. Given a web graph (V, E) with vertex set V and edge set E ⊆ V × V, and the set of result URLs to a query (called the root set R ⊆ V) as input, HITS computes a neighborhood graph consisting of a base set B ⊆ V (the root set and some of its neighboring vertices) and some of the edges in E induced by B. In order to formalize the definition of the neighborhood graph, it is helpful to first introduce a sampling operator and the concept of a link-selection predicate. Given a set A, the notation Sn[A] draws n elements uniformly at random from A; Sn[A] = A if |A| ≤ n. A link-selection predicate P takes an edge (u, v) ∈ E and evaluates to true or false. In this study, we use the following three link-selection predicates: all(u, v), which is true for every edge; ih(u, v), which is true if and only if host(u) ≠ host(v); and id(u, v), which is true if and only if domain(u) ≠ domain(v), where host(u) denotes the host of URL u, and domain(u) denotes the domain of URL u. So, all is true for all links, whereas ih is true only for inter-host links, and id is true only for inter-domain links. The outlinked-set OP of the root set R w.r.t. a link-selection predicate P is defined to be the union, over all pages u in R, of the pages v such that (u, v) ∈ E and P(u, v) holds. The inlinking-set IsP of the root set R w.r.t. a link-selection predicate P and a sampling value s is defined to be the union, over all pages v in R, of Ss[{u ∈ V : (u, v) ∈ E ∧ P(u, v)}], i.e. a sample of up to s qualifying pages linking to v. The base set BPs is the union of the root set with these neighbors: BPs = R ∪ IsP ∪ OP. The neighborhood graph (BPs, NsP) has the base set BPs as its vertex set and an edge set NsP containing those edges in E that are covered by BPs and permitted by P: NsP = {(u, v) ∈ E : u ∈ BPs ∧ v ∈ BPs ∧ P(u, v)}. To simplify notation, we write B to denote BPs, and N to denote NsP. For each node u in the neighborhood graph, HITS computes two scores: an authority score A(u), estimating how authoritative u is on the topic induced by the query, and a hub score H(u), indicating whether u is a good reference to many authoritative pages. This is done using the following algorithm:
1. For all u ∈ B: initialize H(u) and A(u).
2. Repeat until H and A converge:
(a) For all v ∈ B: A~(v) := Σ(u,v)∈N H(u)
(b) For all u ∈ B: H~(u) := Σ(u,v)∈N A(v)
(c) H := H~ / ||H~||2, A := A~ / ||A~||2
where ||X||2 is the euclidean norm of the vector X, so that dividing by it normalizes X to unit length, i.e. the squares of its elements sum up to 1.
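A minimal sketch of this procedure, assuming small in-memory adjacency maps (out_links, in_links) and a caller-supplied link-selection predicate; it illustrates the neighborhood construction and the score iteration above, not the SHS-based implementation described below.

    import random
    from math import sqrt

    def neighborhood(root, out_links, in_links, predicate, s=100):
        """Build the HITS neighborhood graph: the root set, the pages it links to,
        and up to s sampled pages linking to each root page, keeping only edges
        allowed by the link-selection predicate (e.g. inter-host links only)."""
        base = set(root)
        for u in root:
            base.update(v for v in out_links.get(u, []) if predicate(u, v))
        for v in root:
            parents = [u for u in in_links.get(v, []) if predicate(u, v)]
            base.update(random.sample(parents, min(s, len(parents))))
        edges = [(u, v) for u in base for v in out_links.get(u, [])
                 if v in base and predicate(u, v)]
        return base, edges

    def hits(base, edges, iterations=50):
        """Iteratively update authority and hub scores with L2 normalization."""
        auth = {v: 1.0 for v in base}
        hub = {v: 1.0 for v in base}
        for _ in range(iterations):
            new_auth = {v: 0.0 for v in base}
            for u, v in edges:                 # a page is authoritative if good hubs point to it
                new_auth[v] += hub[u]
            new_hub = {u: 0.0 for u in base}
            for u, v in edges:                 # a page is a good hub if it points to authorities
                new_hub[u] += new_auth[v]
            a_norm = sqrt(sum(x * x for x in new_auth.values())) or 1.0
            h_norm = sqrt(sum(x * x for x in new_hub.values())) or 1.0
            auth = {v: x / a_norm for v, x in new_auth.items()}
            hub = {v: x / h_norm for v, x in new_hub.items()}
        return auth, hub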
In practice, implementing a system that can compute HITS within the time constraints of a major search engine (where the peak query load is in the thousands of queries per second, and the desired query response time is well below one second) is a major engineering challenge. Among other things, the web graph cannot reasonably be stored on disk, since seek times of modern hard disks are too slow to retrieve the links within the time constraints, and the graph does not fit into the main memory of a single machine, even when using the most aggressive compression techniques. In order to experiment with HITS and other query-dependent link-based ranking algorithms that require non-regular accesses to arbitrary nodes and edges in the web graph, we implemented a system called the Scalable Hyperlink Store, or SHS for short. SHS is a special-purpose database, distributed over an arbitrary number of machines, that keeps a highly compressed version of the web graph in memory and allows very fast lookup of nodes and edges. On our hardware, it takes an average of 2 microseconds to map a URL to a 64-bit integer handle called a UID, 15 microseconds to look up all incoming or outgoing link UIDs associated with a page UID, and 5 microseconds to map a UID back to a URL (the last functionality not being required by HITS). The RPC overhead is about 100 microseconds, but the SHS API allows many lookups to be batched into a single RPC request. We implemented the HITS algorithm using the SHS infrastructure. We compiled three SHS databases, one containing all 17.6 billion links in our web graph (all), one containing only links between pages that are on different hosts (ih, for "inter-host"), and one containing only links between pages that are on different domains (id). We consider two URLs to belong to different hosts if the host portions of the URLs differ (in other words, we make no attempt to determine whether two distinct symbolic host names refer to the same computer), and we consider a domain to be the name purchased from a registrar (for example, we consider news.bbc.co.uk and www.bbc.co.uk to be different hosts belonging to the same domain). Using each of these databases, we computed HITS authority and hub scores for various parameterizations of the sampling operator S, sampling between 1 and 100 back-links of each page in the root set. Result URLs that were not covered by our web graph automatically received authority and hub scores of 0, since they were not connected to any other nodes in the neighborhood graph and therefore did not receive any endorsements. We performed forty-five different HITS computations, each combining one of the three link selection predicates (all, ih, and id) with a sampling value. For each combination, we loaded one of the three databases into an SHS system running on six machines (each equipped with 16 GB of RAM), and computed HITS authority and hub scores, one query at a time. The longest-running combination (using the all database and sampling 100 back-links of each root set vertex) required 30,456 seconds to process the entire query set, or about 1.1 seconds per query on average. Figure 2: Effectiveness of different features. 7. EXPERIMENTAL RESULTS For a given query Q, we need to rank the set of documents satisfying Q (the "result set" of Q). Our hypothesis is that good features should be able to rank relevant documents in this set higher than non-relevant ones, and this should result in an increase in each performance measure over the query set. We are specifically interested in evaluating the usefulness of HITS and other link-based features. In principle, we could do this by sorting the documents in each result set by their feature value, and compare the resulting NDCGs. We call this ranking with isolated features.
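A small sketch of ranking with isolated features together with the rank-10 measures defined in Section 4: sort a query's result set by a single feature value and score the resulting ordering. Relevance here means rated "good" or better, as in the evaluation above; the per-rating gain values in the example are illustrative assumptions rather than the ones used in the paper.

    from math import log2

    def rank_with_isolated_feature(results, feature):
        """Sort a query's result set by one feature value, highest first."""
        return sorted(results, key=feature, reverse=True)

    def ndcg_at_10(gains):
        """NDCG@10 over per-result gain values in ranked order."""
        dcg = sum(g / log2(i + 2) for i, g in enumerate(gains[:10]))
        ideal = sorted(gains, reverse=True)[:10]
        idcg = sum(g / log2(i + 2) for i, g in enumerate(ideal))
        return dcg / idcg if idcg > 0 else 0.0

    def rr_at_10(rels):
        """Reciprocal rank; 0 if no relevant result among the top 10."""
        for i, rel in enumerate(rels[:10]):
            if rel:
                return 1.0 / (i + 1)
        return 0.0

    def ap_at_10(rels):
        """Average precision at rank 10, following the definition in Section 4."""
        hits, total = 0, 0.0
        for i, rel in enumerate(rels[:10]):
            if rel:
                hits += 1
                total += hits / (i + 1)
        return total / hits if hits else 0.0

    # Hypothetical example: ratings 0..5, rating >= 3 ("good" or better) is relevant.
    docs = [{"id": "a", "rating": 4, "in_degree": 12},
            {"id": "b", "rating": 1, "in_degree": 90},
            {"id": "c", "rating": 5, "in_degree": 3}]
    ranked = rank_with_isolated_feature(docs, lambda d: d["in_degree"])
    rels = [d["rating"] >= 3 for d in ranked]
    print(rr_at_10(rels), ap_at_10(rels), ndcg_at_10([d["rating"] for d in ranked]))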
Let us first examine the relative performance of the different parameterizations of the HITS algorithm we examined. Recall that we computed HITS for each combination of three link selection schemes (all links (all), inter-host links only (ih), and inter-domain links only (id)) with back-link sampling values ranging from 1 to 100. Figure 1 shows the impact of the number of sampled back-links on the retrieval performance of HITS authority scores. Each graph is associated with one performance measure. The horizontal axis of each graph represents the number of sampled back-links, the vertical axis represents performance under the appropriate measure, and each curve depicts a link selection scheme. The id scheme slightly outperforms ih, and both vastly outperform the all scheme: eliminating nepotistic links pays off. The performance of the all scheme increases as more back-links of each root set vertex are sampled, while the performance of the id and ih schemes peaks at between 10 and 25 samples and then plateaus or even declines, depending on the performance measure. Having compared different parameterizations of HITS, we will now fix the number of sampled back-links at 100 and compare the three link selection schemes against other isolated features: PageRank, in-degree and out-degree counting links of all pages, of different hosts only and of different domains only (all, ih and id datasets respectively), and a text retrieval algorithm exploiting anchor text: BM25F [24]. BM25F is a state-of-the-art ranking function solely based on textual content of the documents and their associated anchor texts. BM25F is a descendant of BM25 that combines the different textual fields of a document, namely title, body and anchor text. This model has been shown to be one of the best-performing web search scoring functions over the last few years [8, 24]. BM25F has a number of free parameters (2 per field, 6 in our case); we used the parameter values described in [24]. Figure 2 shows the NDCG, MRR, and MAP measures of these features. Again all performance measures (and for all rank-thresholds we explored) agree. As expected, BM25F outperforms all link-based features by a large margin. The link-based features are divided into two groups, with a noticeable performance drop between the groups. The better-performing group consists of the features that are based on the number and/or quality of incoming links (in-degree, PageRank, and HITS authority scores); and the worse-performing group consists of the features that are based on the number and/or quality of outgoing links (out-degree and HITS hub scores). In the group of features based on incoming links, features that ignore nepotistic links perform better than their counterparts using all links. Moreover, using only inter-domain (id) links seems to be marginally better than using inter-host (ih) links. The fact that features based on outgoing links underperform those based on incoming links matches our expectations; if anything, it is mildly surprising that outgoing links provide a useful signal for ranking at all. On the other hand, the fact that in-degree features outperform PageRank under all measures is quite surprising.
A possible explanation is that link-spammers have been targeting the published PageRank algorithm for many years, and that this has led to anomalies in the web graph that affect PageRank, but not other link-based features that explore only a distance-1 neighborhood of the result set. Likewise, it is surprising that simple query-independent features such as in-degree, which might estimate global quality but cannot capture relevance to a query, would outperform query-dependent features such as HITS authority scores. However, we cannot investigate the effect of these features in isolation, without regard to the overall ranking function, for several reasons. First, features based on the textual content of documents (as opposed to link-based features) are the best predictors of relevance. Second, link-based features can be strongly correlated with textual features for several reasons, mainly the correlation between in-degree and number of textual anchor matches. Therefore, one must consider the effect of link-based features in combination with textual features. Otherwise, we may find a link-based feature that is very good in isolation but is strongly correlated with textual features and results in no overall improvement; and vice versa, we may find a link-based feature that is weak in isolation but significantly improves overall performance. For this reason, we have studied the combination of the link-based features above with BM25F. All feature combinations were done by considering the linear combination of two features as a document score, using the formula score(d) = sum over i = 1 ... n of wi · Ti(Fi(d)), where d is a document (or a document-query pair, in the case of BM25F), Fi(d) (for 1 ≤ i ≤ n) is a feature extracted from d, Ti is a transform, and wi is a free scalar weight that needs to be tuned. We chose transform functions that we empirically determined to be well-suited. Table 1 shows the chosen transform functions. Table 1: Near-optimal feature transform functions. This type of linear combination is appropriate if we assume features to be independent with respect to relevance and an exponential model for link features, as discussed in [8]. We tuned the weights by selecting a random subset of 5,000 queries from the query set, used an iterative refinement process to find weights that maximized a given performance measure on that training set, and used the remaining 23,043 queries to measure the performance of the thus derived scoring functions. We explored the pairwise combination of BM25F with every link-based scoring function. Figure 3: Effectiveness measures for linear combinations of link-based features with BM25F. Figure 3 shows the NDCG, MRR, and MAP measures of these feature combinations, together with a baseline BM25F score (the right-most bar in each graph), which was computed using the same subset of 23,043 queries that was used as the test set for the feature combinations. Regardless of the performance measure applied, we can make the following general observations: 1. Combining any of the link-based features with BM25F results in a substantial performance improvement over BM25F in isolation. 2. The combination of BM25F with features based on incoming links (PageRank, in-degree, and HITS authority scores) performs substantially better than the combination with features based on outgoing links (HITS hub scores and out-degree). 3. The performance differences between the various combinations of BM25F with features based on incoming links are comparatively small, and the relative ordering of feature combinations is fairly stable across the different performance measures used.
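A minimal sketch of the two-feature linear combination described above; the log transform and the unit weight are placeholders for illustration, not the tuned transforms of Table 1 or the weights learned on the training set.

    from math import log

    def combined_score(bm25f_score, link_feature, weight=1.0,
                       transform=lambda x: log(1.0 + x)):
        """score(d) = BM25F(d) + w * T(F(d)) for one link-based feature F."""
        return bm25f_score + weight * transform(link_feature)

    def rank_by_combined_score(docs, weight=1.0):
        """docs: list of (doc_id, bm25f_score, in_degree) tuples; returns doc ids
        ranked by the combined score, highest first."""
        return [doc_id for doc_id, bm25f, indeg in
                sorted(docs, key=lambda d: combined_score(d[1], d[2], weight),
                       reverse=True)]

    # Hypothetical usage: combine BM25F with in-degree for three documents.
    print(rank_by_combined_score([("a", 10.2, 12), ("b", 9.8, 400), ("c", 11.0, 0)]))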
However, the combination of BM25F with any in-degree variant, and in particular with id in-degree, consistently outperforms the combination of BM25F with PageRank or HITS authority scores, and can be computed much more easily and quickly. Finally, we investigated whether certain features are better for some queries than for others. Particularly, we are interested in the relationship between the specificity of a query and the performance of different ranking features. The most straightforward measure of the specificity of a query Q would be the number of documents in a search engine's corpus that satisfy Q. Unfortunately, the query set available to us did not contain this information. Therefore, we chose to approximate the specificity of Q by summing up the inverse document frequencies of the individual query terms comprising Q. The inverse document frequency (IDF) of a term t with respect to a corpus C is defined to be log(N/doc(t)), where doc(t) is the number of documents in C containing t and N is the total number of documents in C. By summing up the IDFs of the query terms, we make the (flawed) assumption that the individual query terms are independent of each other. However, while not perfect, this approximation is at least directionally correct. We broke down our query set into 13 buckets, each bucket associated with an interval of query IDF values, and we computed performance metrics for all ranking functions applied (in isolation) to the queries in each bucket. In order to keep the graphs readable, we will not show the performance of all the features, but rather restrict ourselves to the four most interesting ones: PageRank, id HITS authority scores, id in-degree, and BM25F. Figure 4: Effectiveness measures for selected isolated features, broken down by query specificity. Figure 4 shows the MAP@10 for all 13 query specificity buckets. Buckets on the far left of each graph represent very general queries; buckets on the far right represent very specific queries. The figures on the upper x axis of each graph show the number of queries in each bucket (e.g. the right-most bucket contains 1,629 queries). BM25F performs best for medium-specific queries, peaking at the buckets representing the IDF sum interval [12,14). By comparison, HITS peaks at the bucket representing the IDF sum interval [4,6), and PageRank and in-degree peak at the bucket representing the interval [6,8), i.e. more general queries. 8. CONCLUSIONS AND FUTURE WORK This paper describes a large-scale evaluation of the effectiveness of HITS in comparison with other link-based ranking algorithms, in particular PageRank and in-degree, when applied in isolation or in combination with a text retrieval algorithm exploiting anchor text (BM25F). Evaluation is carried out with respect to a large number of human-evaluated queries, using three different measures of effectiveness: NDCG, MRR, and MAP. Evaluating link-based features in isolation, we found that web page in-degree outperforms PageRank, and is about as effective as HITS authority scores. HITS hub scores and web page out-degree are much less effective ranking features, but still outperform a random ordering. A linear combination of any link-based features with BM25F produces a significant improvement in performance, and there is a clear difference between combining BM25F with a feature based on incoming links (in-degree, PageRank, or HITS authority scores) and a feature based on outgoing links (HITS hub scores and out-degree), but within those two groups the precise choice of link-based feature matters relatively little.
We believe that the measurements presented in this paper provide a solid evaluation of the best-known link-based ranking schemes. There are many possible variants of these schemes, and many other link-based ranking algorithms have been proposed in the literature, hence we do not claim this work to be the last word on this subject, but rather the first step on a long road. Future work includes evaluation of different parameterizations of PageRank and HITS. In particular, we would like to study the impact of changes to the PageRank damping factor on effectiveness, the impact of various schemes meant to counteract the effects of link spam, and the effect of weighting hyperlinks differently depending on whether they are nepotistic or not. Going beyond PageRank and HITS, we would like to measure the effectiveness of other link-based ranking algorithms, such as SALSA. Finally, we are planning to experiment with more complex feature combinations.
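As a concrete illustration of the query-specificity analysis in Section 7, here is a minimal sketch of the IDF-sum approximation and the bucketing by IDF intervals. The handling of terms unseen in the corpus and the exact bucket boundaries are assumptions for illustration; the paper only names a few of the intervals (e.g. [4,6), [6,8), [12,14)).

    from math import log

    def idf_sum(query_terms, doc_counts, corpus_size):
        """Approximate query specificity as the sum of per-term IDFs,
        IDF(t) = log(N / doc(t)); terms with no document count are skipped."""
        total = 0.0
        for term in query_terms:
            doc_t = doc_counts.get(term, 0)
            if doc_t > 0:
                total += log(corpus_size / doc_t)
        return total

    def specificity_bucket(idf, edges=(2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24)):
        """Map an IDF sum to one of 13 buckets delimited by the given interval edges."""
        for i, edge in enumerate(edges):
            if idf < edge:
                return i
        return len(edges)

    # Hypothetical usage with made-up document frequencies.
    counts = {"hyperlink": 120_000, "nepotistic": 900}
    print(specificity_bucket(idf_sum(["hyperlink", "nepotistic"], counts, 463_000_000)))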
SIGIR 2007 Proceedings Session 20: Link Analysis HITS on the Web: How does it Compare? * ABSTRACT This paper describes a large-scale evaluation of the effectiveness of HITS in comparison with other link-based ranking algorithms, when used in combination with a state-ofthe-art text retrieval algorithm exploiting anchor text. We quantified their effectiveness using three common performance measures: the mean reciprocal rank, the mean average precision, and the normalized discounted cumulative gain measurements. The evaluation is based on two large data sets: a breadth-first search crawl of 463 million web pages containing 17.6 billion hyperlinks and referencing 2.9 billion distinct URLs; and a set of 28,043 queries sampled from a query log, each query having on average 2,383 results, about 17 of which were labeled by judges. We found that HITS outperforms PageRank, but is about as effective as web-page in-degree. The same holds true when any of the link-based features are combined with the text retrieval algorithm. Finally, we studied the relationship between query specificity and the effectiveness of selected features, and found that link-based features perform better for general queries, whereas BM25F performs better for specific queries. 1. INTRODUCTION Link graph features such as in-degree and PageRank have been shown to significantly improve the performance of text retrieval algorithms on the web. The HITS algorithm is also believed to be of interest for web search; to some degree, one may expect HITS to be more informative that other link-based features because it is query-dependent: it tries to measure the interest of pages with respect to a given query. However, it remains unclear today whether there are practical benefits of HITS over other link graph measures. This is even more true when we consider that modern retrieval algorithms used on the web use a document representation which incorporates the document's anchor text, i.e. the text of incoming links. This, at least to some degree, takes the link graph into account, in a query-dependent manner. Comparing HITS to PageRank or in-degree empirically is no easy task. There are two main difficulties: scale and relevance. Scale is important because link-based features are known to improve in quality as the document graph grows. If we carry out a small experiment, our conclusions won't carry over to large graphs such as the web. However, computing HITS efficiently on a graph the size of a realistic web crawl is extraordinarily difficult. Relevance is also crucial because we cannot measure the performance of a feature in the absence of human judgments: what is crucial is ranking at the top of the ten or so documents that a user will peruse. To our knowledge, this paper is the first attempt to evaluate HITS at a large scale and compare it to other link-based features with respect to human evaluated judgment. Our results confirm many of the intuitions we have about link-based features and their relationship to text retrieval methods exploiting anchor text. This is reassuring: in the absence of a theoretical model capable of tying these measures with relevance, the only way to validate our intuitions is to carry out realistic experiments. However, we were quite surprised to find that HITS, a query-dependent feature, is about as effective as web page in-degree, the most simpleminded query-independent link-based feature. 
This continues to be true when the link-based features are combined with a text retrieval algorithm exploiting anchor text. The remainder of this paper is structured as follows: Section 2 surveys related work. Section 3 describes the data sets we used in our study. Section 4 reviews the performance measures we used. Sections 5 and 6 describe the PageRank and HITS algorithms in more detail, and sketch the computational infrastructure we employed to carry out large scale experiments. Section 7 presents the results of our evaluations, and Section 8 offers concluding remarks. 2. RELATED WORK The idea of using hyperlink analysis for ranking web search results arose around 1997, and manifested itself in the HITS [16, 17] and PageRank [5, 21] algorithms. The popularity of these two algorithms and the phenomenal success of the Google search engine, which uses PageRank, have spawned a large amount of subsequent research. There are numerous attempts at improving the effectiveness of HITS and PageRank. Query-dependent link-based ranking algorithms inspired by HITS include SALSA [19], Randomized HITS [20], and PHITS [7], to name a few. Query-independent link-based ranking algorithms inspired by PageRank include TrafficRank [22], BlockRank [14], and TrustRank [11], and many others. Another line of research is concerned with analyzing the mathematical properties of HITS and PageRank. For example, Borodin et al. [3] investigated various theoretical properties of PageRank, HITS, SALSA, and PHITS, including their similarity and stability, while Bianchini et al. [2] studied the relationship between the structure of the web graph and the distribution of PageRank scores, and Langville and Meyer examined basic properties of PageRank such as existence and uniqueness of an eigenvector and convergence of power iteration [18]. Given the attention that has been paid to improving the effectiveness of PageRank and HITS, and the thorough studies of the mathematical properties of these algorithms, it is somewhat surprising that very few evaluations of their effectiveness have been published. We are aware of two studies that have attempted to formally evaluate the effectiveness of HITS and of PageRank. Amento et al. [1] employed quantitative measures, but based their experiments on the result sets of just 5 queries and the web-graph induced by topical crawls around the result set of each query. A more recent study by Borodin et al. [4] is based on 34 queries, result sets of 200 pages per query obtained from Google, and a neighborhood graph derived by retrieving 50 in-links per result from Google. By contrast, our study is based on over 28,000 queries and a web graph covering 2.9 billion URLs. 3. OUR DATA SETS 4. MEASURING PERFORMANCE 5. COMPUTING PAGERANK ON A LARGE WEB GRAPH 6. HITS 7. EXPERIMENTAL RESULTS 8. CONCLUSIONS AND FUTURE WORK This paper describes a large-scale evaluation of the effectiveness of HITS in comparison with other link-based ranking algorithms, in particular PageRank and in-degree, when applied in isolation or in combination with a text retrieval algorithm exploiting anchor text (BM25F). Evaluation is carried out with respect to a large number of human evaluated queries, using three different measures of effectiveness: NDCG, MRR, and MAP. Evaluating link-based features in isolation, we found that web page in-degree outperforms PageRank, and is about as effwective as HITS authority scores. 
HITS hub scores and web page out-degree are much less effective ranking features, but still outperform a random ordering. A linear combination of any link-based features with BM25F produces a significant improvement in performance, and there is a clear difference between combining BM25F with a feature based on incoming links (indegree, PageRank, or HITS authority scores) and a feature based on outgoing links (HITS hub scores and out-degree), but within those two groups the precise choice of link-based feature matters relatively little. We believe that the measurements presented in this paper provide a solid evaluation of the best well-known link-based ranking schemes. There are many possible variants of these schemes, and many other link-based ranking algorithms have been proposed in the literature, hence we do not claim this work to be the last word on this subject, but rather the first step on a long road. Future work includes evaluation of different parameterizations of PageRank and HITS. In particular, we would like to study the impact of changes to the PageRank damping factor on effectiveness, the impact of various schemes meant to counteract the effects of link spam, and the effect of weighing hyperlinks differently depending on whether they are nepotistic or not. Going beyond PageRank and HITS, we would like to measure the effectiveness of other link-based ranking algorithms, such as SALSA. Finally, we are planning to experiment with more complex feature combinations.
SIGIR 2007 Proceedings Session 20: Link Analysis HITS on the Web: How does it Compare? * ABSTRACT This paper describes a large-scale evaluation of the effectiveness of HITS in comparison with other link-based ranking algorithms, when used in combination with a state-ofthe-art text retrieval algorithm exploiting anchor text. We quantified their effectiveness using three common performance measures: the mean reciprocal rank, the mean average precision, and the normalized discounted cumulative gain measurements. The evaluation is based on two large data sets: a breadth-first search crawl of 463 million web pages containing 17.6 billion hyperlinks and referencing 2.9 billion distinct URLs; and a set of 28,043 queries sampled from a query log, each query having on average 2,383 results, about 17 of which were labeled by judges. We found that HITS outperforms PageRank, but is about as effective as web-page in-degree. The same holds true when any of the link-based features are combined with the text retrieval algorithm. Finally, we studied the relationship between query specificity and the effectiveness of selected features, and found that link-based features perform better for general queries, whereas BM25F performs better for specific queries. 1. INTRODUCTION Link graph features such as in-degree and PageRank have been shown to significantly improve the performance of text retrieval algorithms on the web. The HITS algorithm is also believed to be of interest for web search; to some degree, one may expect HITS to be more informative that other link-based features because it is query-dependent: it tries to measure the interest of pages with respect to a given query. However, it remains unclear today whether there are practical benefits of HITS over other link graph measures. This is even more true when we consider that modern retrieval algorithms used on the web use a document representation which incorporates the document's anchor text, i.e. the text of incoming links. This, at least to some degree, takes the link graph into account, in a query-dependent manner. Comparing HITS to PageRank or in-degree empirically is no easy task. There are two main difficulties: scale and relevance. Scale is important because link-based features are known to improve in quality as the document graph grows. If we carry out a small experiment, our conclusions won't carry over to large graphs such as the web. However, computing HITS efficiently on a graph the size of a realistic web crawl is extraordinarily difficult. To our knowledge, this paper is the first attempt to evaluate HITS at a large scale and compare it to other link-based features with respect to human evaluated judgment. Our results confirm many of the intuitions we have about link-based features and their relationship to text retrieval methods exploiting anchor text. However, we were quite surprised to find that HITS, a query-dependent feature, is about as effective as web page in-degree, the most simpleminded query-independent link-based feature. This continues to be true when the link-based features are combined with a text retrieval algorithm exploiting anchor text. The remainder of this paper is structured as follows: Section 2 surveys related work. Section 3 describes the data sets we used in our study. Section 4 reviews the performance measures we used. Sections 5 and 6 describe the PageRank and HITS algorithms in more detail, and sketch the computational infrastructure we employed to carry out large scale experiments. 
Section 7 presents the results of our evaluations, and Section 8 offers concluding remarks. 2. RELATED WORK The idea of using hyperlink analysis for ranking web search results arose around 1997, and manifested itself in the HITS [16, 17] and PageRank [5, 21] algorithms. The popularity of these two algorithms and the phenomenal success of the Google search engine, which uses PageRank, have spawned a large amount of subsequent research. There are numerous attempts at improving the effectiveness of HITS and PageRank. Query-dependent link-based ranking algorithms inspired by HITS include SALSA [19], Randomized HITS [20], and PHITS [7], to name a few. Query-independent link-based ranking algorithms inspired by PageRank include TrafficRank [22], BlockRank [14], and TrustRank [11], and many others. Another line of research is concerned with analyzing the mathematical properties of HITS and PageRank. Given the attention that has been paid to improving the effectiveness of PageRank and HITS, and the thorough studies of the mathematical properties of these algorithms, it is somewhat surprising that very few evaluations of their effectiveness have been published. We are aware of two studies that have attempted to formally evaluate the effectiveness of HITS and of PageRank. Amento et al. [1] employed quantitative measures, but based their experiments on the result sets of just 5 queries and the web-graph induced by topical crawls around the result set of each query. By contrast, our study is based on over 28,000 queries and a web graph covering 2.9 billion URLs. 8. CONCLUSIONS AND FUTURE WORK This paper describes a large-scale evaluation of the effectiveness of HITS in comparison with other link-based ranking algorithms, in particular PageRank and in-degree, when applied in isolation or in combination with a text retrieval algorithm exploiting anchor text (BM25F). Evaluation is carried out with respect to a large number of human evaluated queries, using three different measures of effectiveness: NDCG, MRR, and MAP. Evaluating link-based features in isolation, we found that web page in-degree outperforms PageRank, and is about as effwective as HITS authority scores. HITS hub scores and web page out-degree are much less effective ranking features, but still outperform a random ordering. We believe that the measurements presented in this paper provide a solid evaluation of the best well-known link-based ranking schemes. Future work includes evaluation of different parameterizations of PageRank and HITS. Going beyond PageRank and HITS, we would like to measure the effectiveness of other link-based ranking algorithms, such as SALSA. Finally, we are planning to experiment with more complex feature combinations.
H-82
Downloading Textual Hidden Web Content Through Keyword Queries
An ever-increasing amount of information on the Web today is available only through search interfaces: the users have to type in a set of keywords in a search form in order to access the pages from certain Web sites. These pages are often referred to as the Hidden Web or the Deep Web. Since there are no static links to the Hidden Web pages, search engines cannot discover and index such pages and thus do not return them in the results. However, according to recent studies, the content provided by many Hidden Web sites is often of very high quality and can be extremely valuable to many users. In this paper, we study how we can build an effective Hidden Web crawler that can autonomously discover and download pages from the Hidden Web. Since the only entry point to a Hidden Web site is a query interface, the main challenge that a Hidden Web crawler has to face is how to automatically generate meaningful queries to issue to the site. Here, we provide a theoretical framework to investigate the query generation problem for the Hidden Web and we propose effective policies for generating queries automatically. Our policies proceed iteratively, issuing a different query in every iteration. We experimentally evaluate the effectiveness of these policies on 4 real Hidden Web sites and our results are very promising. For instance, in one experiment, one of our policies downloaded more than 90% of a Hidden Web site (that contains 14 million documents) after issuing fewer than 100 queries.
[ "hidden web", "keyword queri", "deep web", "hidden web crawler", "queri-select polici", "crawl polici", "textual databas", "gener-frequenc polici", "independ estim", "adapt polici", "hide web crawl", "deep web crawler", "adapt algorithm", "queri select" ]
[ "P", "P", "P", "P", "M", "M", "M", "M", "U", "M", "M", "R", "U", "M" ]
Downloading Textual Hidden Web Content Through Keyword Queries Alexandros Ntoulas UCLA Computer Science ntoulas@cs.ucla.edu Petros Zerfos UCLA Computer Science pzerfos@cs.ucla.edu Junghoo Cho UCLA Computer Science cho@cs.ucla.edu ABSTRACT An ever-increasing amount of information on the Web today is available only through search interfaces: the users have to type in a set of keywords in a search form in order to access the pages from certain Web sites. These pages are often referred to as the Hidden Web or the Deep Web. Since there are no static links to the Hidden Web pages, search engines cannot discover and index such pages and thus do not return them in the results. However, according to recent studies, the content provided by many Hidden Web sites is often of very high quality and can be extremely valuable to many users. In this paper, we study how we can build an effective Hidden Web crawler that can autonomously discover and download pages from the Hidden Web. Since the only entry point to a Hidden Web site is a query interface, the main challenge that a Hidden Web crawler has to face is how to automatically generate meaningful queries to issue to the site. Here, we provide a theoretical framework to investigate the query generation problem for the Hidden Web and we propose effective policies for generating queries automatically. Our policies proceed iteratively, issuing a different query in every iteration. We experimentally evaluate the effectiveness of these policies on 4 real Hidden Web sites and our results are very promising. For instance, in one experiment, one of our policies downloaded more than 90% of a Hidden Web site (that contains 14 million documents) after issuing fewer than 100 queries. Categories and Subject Descriptors: H.3.7 [Information Systems]: Digital Libraries; H.3.1 [Information Systems]: Content Analysis and Indexing; H.3.3 [Information Systems]: Information Search and Retrieval. General Terms: Algorithms, Performance, Design. 1. INTRODUCTION Recent studies show that a significant fraction of Web content cannot be reached by following links [7, 12]. In particular, a large part of the Web is hidden behind search forms and is reachable only when users type in a set of keywords, or queries, to the forms. These pages are often referred to as the Hidden Web [17] or the Deep Web [7], because search engines typically cannot index the pages and do not return them in their results (thus, the pages are essentially hidden from a typical Web user). According to many studies, the size of the Hidden Web increases rapidly as more organizations put their valuable content online through an easy-to-use Web interface [7]. In [12], Chang et al. estimate that well over 100,000 Hidden-Web sites currently exist on the Web. Moreover, the content provided by many Hidden-Web sites is often of very high quality and can be extremely valuable to many users [7]. For example, PubMed hosts many high-quality papers on medical research that were selected from careful peerreview processes, while the site of the US Patent and Trademarks Office1 makes existing patent documents available, helping potential inventors examine prior art. In this paper, we study how we can build a Hidden-Web crawler2 that can automatically download pages from the Hidden Web, so that search engines can index them. Conventional crawlers rely on the hyperlinks on the Web to discover pages, so current search engines cannot index the Hidden-Web pages (due to the lack of links). 
We believe that an effective Hidden-Web crawler can have a tremendous impact on how users search information on the Web: • Tapping into unexplored information: The Hidden-Web crawler will allow an average Web user to easily explore the vast amount of information that is mostly hidden at present. Since a majority of Web users rely on search engines to discover pages, when pages are not indexed by search engines, they are unlikely to be viewed by many Web users. Unless users go directly to Hidden-Web sites and issue queries there, they cannot access the pages at the sites. • Improving user experience: Even if a user is aware of a number of Hidden-Web sites, the user still has to waste a significant amount of time and effort, visiting all of the potentially relevant sites, querying each of them and exploring the result. By making the Hidden-Web pages searchable at a central location, we can significantly reduce the user's wasted time and effort in searching the Hidden Web. • Reducing potential bias: Due to the heavy reliance of many Web users on search engines for locating information, search engines influence how the users perceive the Web [28]. Users do not necessarily perceive what actually exists on the Web, but what is indexed by search engines [28]. According to a recent article [5], several organizations have recognized the importance of bringing information of their Hidden Web sites onto the surface, and committed considerable resources towards this effort. Our Hidden-Web crawler attempts to automate this process for Hidden Web sites with textual content, thus minimizing the associated costs and effort required. (Footnotes: 1 US Patent Office: http://www.uspto.gov; 2 Crawlers are the programs that traverse the Web automatically and download pages for search engines.) Figure 1: A single-attribute search interface. Given that the only entry to Hidden Web pages is through querying a search form, there are two core challenges to implementing an effective Hidden Web crawler: (a) The crawler has to be able to understand and model a query interface, and (b) The crawler has to come up with meaningful queries to issue to the query interface. The first challenge was addressed by Raghavan and Garcia-Molina in [29], where a method for learning search interfaces was presented. Here, we present a solution to the second challenge, i.e. how a crawler can automatically generate queries so that it can discover and download the Hidden Web pages. Clearly, when the search forms list all possible values for a query (e.g., through a drop-down list), the solution is straightforward. We exhaustively issue all possible queries, one query at a time. When the query forms have a free text input, however, an infinite number of queries are possible, so we cannot exhaustively issue all possible queries. In this case, what queries should we pick? Can the crawler automatically come up with meaningful queries without understanding the semantics of the search form? In this paper, we provide a theoretical framework to investigate the Hidden-Web crawling problem and propose effective ways of generating queries automatically. We also evaluate our proposed solutions through experiments conducted on real Hidden-Web sites. In summary, this paper makes the following contributions: • We present a formal framework to study the problem of Hidden-Web crawling. (Section 2).
• We investigate a number of crawling policies for the Hidden Web, including the optimal policy that can potentially download the maximum number of pages through the minimum number of interactions. Unfortunately, we show that the optimal policy is NP-hard and cannot be implemented in practice (Section 2.2). • We propose a new adaptive policy that approximates the optimal policy. Our adaptive policy examines the pages returned from previous queries and adapts its query-selection policy automatically based on them (Section 3). • We evaluate various crawling policies through experiments on real Web sites. Our experiments will show the relative advantages of various crawling policies and demonstrate their potential. The results from our experiments are very promising. In one experiment, for example, our adaptive policy downloaded more than 90% of the pages within PubMed (that contains 14 million documents) after it issued fewer than 100 queries. 2. FRAMEWORK In this section, we present a formal framework for the study of the Hidden-Web crawling problem. In Section 2.1, we describe our assumptions on Hidden-Web sites and explain how users interact with the sites. Based on this interaction model, we present a high-level algorithm for a Hidden-Web crawler in Section 2.2. Finally in Section 2.3, we formalize the Hidden-Web crawling problem. 2.1 Hidden-Web database model There exists a variety of Hidden Web sources that provide information on a multitude of topics. Depending on the type of information, we may categorize a Hidden-Web site either as a textual database or a structured database. A textual database is a site that mainly contains plain-text documents, such as PubMed and LexisNexis (an online database of legal documents [1]). Since plain-text documents do not usually have well-defined structure, most textual databases provide a simple search interface where users type a list of keywords in a single search box (Figure 1). In contrast, a structured database often contains multi-attribute relational data (e.g., a book on the Amazon Web site may have the fields title='Harry Potter', author='J.K. Rowling' and isbn='0590353403') and supports multi-attribute search interfaces (Figure 2). Figure 2: A multi-attribute search interface. In this paper, we will mainly focus on textual databases that support single-attribute keyword queries. We discuss how we can extend our ideas for the textual databases to multi-attribute structured databases in Section 6.1. Typically, the users need to take the following steps in order to access pages in a Hidden-Web database: Step 1. First, the user issues a query, say liver, through the search interface provided by the Web site (such as the one shown in Figure 1). Step 2. Shortly after the user issues the query, she is presented with a result index page. That is, the Web site returns a list of links to potentially relevant Web pages, as shown in Figure 3(a). Step 3. From the list in the result index page, the user identifies the pages that look interesting and follows the links. Clicking on a link leads the user to the actual Web page, such as the one shown in Figure 3(b), that the user wants to look at. 2.2 A generic Hidden Web crawling algorithm Given that the only entry to the pages in a Hidden-Web site is its search form, a Hidden-Web crawler should follow the three steps described in the previous section.
That is, the crawler has to generate a query, issue it to the Web site, download the result index page, and follow the links to download the actual pages. In most cases, a crawler has limited time and network resources, so the crawler repeats these steps until it uses up its resources. In Figure 4 we show the generic algorithm for a Hidden-Web crawler. For simplicity, we assume that the Hidden-Web crawler issues single-term queries only.3 The crawler first decides which query term it is going to use (Step (2)), issues the query, and retrieves the result index page (Step (3)). Finally, based on the links found on the result index page, it downloads the Hidden Web pages from the site (Step (4)). This same process is repeated until all the available resources are used up (Step (1)). Given this algorithm, we can see that the most critical decision that a crawler has to make is what query to issue next. If the crawler can issue successful queries that will return many matching pages, the crawler can finish its crawling early on using minimum resources. In contrast, if the crawler issues completely irrelevant queries that do not return any matching pages, it may waste all of its resources simply issuing queries without ever retrieving actual pages. Therefore, how the crawler selects the next query can greatly affect its effectiveness. In the next section, we formalize this query selection problem. 3 For most Web sites that assume AND for multi-keyword queries, single-term queries return the maximum number of results. Extending our work to multi-keyword queries is straightforward. Figure 3: Pages from the PubMed Web site: (a) list of matching pages for query liver; (b) the first matching page for liver.
ALGORITHM 2.1. Crawling a Hidden Web site
Procedure
(1) while ( there are available resources ) do
    // select a term to send to the site
(2)   qi = SelectTerm()
    // send query and acquire result index page
(3)   R(qi) = QueryWebSite( qi )
    // download the pages of interest
(4)   Download( R(qi) )
(5) done
Figure 4: Algorithm for crawling a Hidden Web site.
Figure 5: A set-formalization of the optimal query selection problem. 2.3 Problem formalization Theoretically, the problem of query selection can be formalized as follows: We assume that the crawler downloads pages from a Web site that has a set of pages S (the rectangle in Figure 5). We represent each Web page in S as a point (dots in Figure 5). Every potential query qi that we may issue can be viewed as a subset of S, containing all the points (pages) that are returned when we issue qi to the site. Each subset is associated with a weight that represents the cost of issuing the query. Under this formalization, our goal is to find which subsets (queries) cover the maximum number of points (Web pages) with the minimum total weight (cost). This problem is equivalent to the set-covering problem in graph theory [16]. There are two main difficulties that we need to address in this formalization. First, in a practical situation, the crawler does not know which Web pages will be returned by which queries, so the subsets of S are not known in advance. Without knowing these subsets the crawler cannot decide which queries to pick to maximize the coverage. Second, the set-covering problem is known to be NP-Hard [16], so an efficient algorithm to solve this problem optimally in polynomial time has yet to be found. In this paper, we will present an approximation algorithm that can find a near-optimal solution at a reasonable computational cost.
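A minimal sketch of the generic crawling loop of Algorithm 2.1 / Figure 4 above. The four callables are hypothetical stand-ins supplied by the caller: select_term picks the next keyword, query_web_site returns the links listed on the result index page, download fetches one page, and has_resources reports whether time or bandwidth remains.

    def crawl_hidden_web_site(select_term, query_web_site, download, has_resources):
        """Generic Hidden-Web crawling loop: pick a term, query, download results."""
        downloaded = set()
        while has_resources():                    # step (1): stop when resources run out
            term = select_term(downloaded)        # step (2): choose the next query term
            result_links = query_web_site(term)   # step (3): fetch the result index page
            for link in result_links:             # step (4): download the actual pages
                if link not in downloaded:
                    download(link)
                    downloaded.add(link)
        return downloaded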
Our algorithm leverages the observation that although we do not know which pages will be returned by each query qi that we issue, we can predict how many pages will be returned. Based on this information our query selection algorithm can then select the best queries that cover the content of the Web site. We present our prediction method and our query selection algorithm in Section 3. 2.3.1 Performance Metric Before we present our ideas for the query selection problem, we briefly discuss some of our notation and the cost/performance metrics. Given a query qi, we use P(qi) to denote the fraction of pages that we will get back if we issue query qi to the site. For example, if a Web site has 10,000 pages in total, and if 3,000 pages are returned for the query qi = medicine, then P(qi) = 0.3. We use P(q1 ∧ q2) to represent the fraction of pages that are returned from both q1 and q2 (i.e., the intersection of P(q1) and P(q2)). Similarly, we use P(q1 ∨ q2) to represent the fraction of pages that are returned from either q1 or q2 (i.e., the union of P(q1) and P(q2)). We also use Cost(qi) to represent the cost of issuing the query qi. Depending on the scenario, the cost can be measured either in time, network bandwidth, the number of interactions with the site, or it can be a function of all of these. As we will see later, our proposed algorithms are independent of the exact cost function. In the most common case, the query cost consists of a number of factors, including the cost for submitting the query to the site, retrieving the result index page (Figure 3(a)) and downloading the actual pages (Figure 3(b)). We assume that submitting a query incurs a fixed cost of cq. The cost for retrieving the result index page is proportional to the number of documents matching the query, at a rate of cr per matching document, while the cost for downloading a matching document is a fixed cd. Then the overall cost of query qi is Cost(qi) = cq + crP(qi) + cdP(qi). (1) In certain cases, some of the documents from qi may have already been downloaded from previous queries. In this case, the crawler may skip downloading these documents and the cost of qi can be Cost(qi) = cq + crP(qi) + cdPnew(qi). (2) Here, we use Pnew(qi) to represent the fraction of the new documents from qi that have not been retrieved from previous queries. Later in Section 3.1 we will study how we can estimate P(qi) and Pnew(qi) to estimate the cost of qi. Since our algorithms are independent of the exact cost function, we will assume a generic cost function Cost(qi) in this paper. When we need a concrete cost function, however, we will use Equation 2. Given the notation, we can formalize the goal of a Hidden-Web crawler as follows: PROBLEM 1. Find the set of queries q1, ... , qn that maximizes P(q1 ∨ · · · ∨ qn) under the constraint Cost(q1) + · · · + Cost(qn) ≤ t. Here, t is the maximum download resource that the crawler has. 3. KEYWORD SELECTION How should a crawler select the queries to issue? Given that the goal is to download the maximum number of unique documents from a textual database, we may consider one of the following options: • Random: We select random keywords from, say, an English dictionary and issue them to the database. The hope is that a random query will return a reasonable number of matching documents. • Generic-frequency: We analyze a generic document corpus collected elsewhere (say, from the Web) and obtain the generic frequency distribution of each keyword.
Based on this generic distribution, we start with the most frequent keyword, issue it to the Hidden-Web database and retrieve the result. We then continue to the second-most frequent keyword and repeat this process until we exhaust all download resources. The hope is that the frequent keywords in a generic corpus will also be frequent in the Hidden-Web database, returning many matching documents. • Adaptive: We analyze the documents returned from the previous queries issued to the Hidden-Web database and estimate which keyword is most likely to return the most documents. Based on this analysis, we issue the most promising query, and repeat the process. Among these three general policies, we may consider the random policy as the base comparison point since it is expected to perform the worst. Between the generic-frequency and the adaptive policies, both policies may show similar performance if the crawled database has a generic document collection without a specialized topic. The adaptive policy, however, may perform significantly better than the generic-frequency policy if the database has a very specialized collection that is different from the generic corpus. We will experimentally compare these three policies in Section 4. While the first two policies (random and generic-frequency policies) are easy to implement, we need to understand how we can analyze the downloaded pages to identify the most promising query in order to implement the adaptive policy. We address this issue in the rest of this section. 3.1 Estimating the number of matching pages In order to identify the most promising query, we need to estimate how many new documents we will download if we issue the query qi as the next query. That is, assuming that we have issued queries q1, ... , qi−1 we need to estimate P(q1∨· · ·∨qi−1∨qi), for every potential next query qi and compare this value. In estimating this number, we note that we can rewrite P(q1 ∨ · · · ∨ qi−1 ∨ qi) as: P((q1 ∨ · · · ∨ qi−1) ∨ qi) = P(q1 ∨ · · · ∨ qi−1) + P(qi) − P((q1 ∨ · · · ∨ qi−1) ∧ qi) = P(q1 ∨ · · · ∨ qi−1) + P(qi) − P(q1 ∨ · · · ∨ qi−1)P(qi|q1 ∨ · · · ∨ qi−1) (3) In the above formula, note that we can precisely measure P(q1 ∨ · · · ∨ qi−1) and P(qi | q1 ∨ · · · ∨ qi−1) by analyzing previouslydownloaded pages: We know P(q1 ∨ · · · ∨ qi−1), the union of all pages downloaded from q1, ... , qi−1, since we have already issued q1, ... , qi−1 and downloaded the matching pages.4 We can also measure P(qi | q1 ∨ · · · ∨ qi−1), the probability that qi appears in the pages from q1, ... , qi−1, by counting how many times qi appears in the pages from q1, ... , qi−1. Therefore, we only need to estimate P(qi) to evaluate P(q1 ∨ · · · ∨ qi). We may consider a number of different ways to estimate P(qi), including the following: 1. Independence estimator: We assume that the appearance of the term qi is independent of the terms q1, ... , qi−1. That is, we assume that P(qi) = P(qi|q1 ∨ · · · ∨ qi−1). 2. Zipf estimator: In [19], Ipeirotis et al. proposed a method to estimate how many times a particular term occurs in the entire corpus based on a subset of documents from the corpus. Their method exploits the fact that the frequency of terms inside text collections follows a power law distribution [30, 25]. 
That is, if we rank all terms based on their occurrence frequency (with the most frequent term having a rank of 1, second most frequent a rank of 2 etc.), then the frequency f of a term inside the text collection is given by: f = α(r + β)−γ (4) where r is the rank of the term and α, β, and γ are constants that depend on the text collection. Their main idea is (1) to estimate the three parameters, α, β and γ, based on the subset of documents that we have downloaded from previous queries, and (2) use the estimated parameters to predict f given the ranking r of a term within the subset. For a more detailed description on how we can use this method to estimate P(qi), we refer the reader to the extended version of this paper [27]. After we estimate P(qi) and P(qi|q1 ∨ · · · ∨ qi−1) values, we can calculate P(q1 ∨ · · · ∨ qi). In Section 3.3, we explain how we can efficiently compute P(qi|q1 ∨ · · · ∨ qi−1) by maintaining a succinct summary table. In the next section, we first examine how we can use this value to decide which query we should issue next to the Hidden Web site. 3.2 Query selection algorithm The goal of the Hidden-Web crawler is to download the maximum number of unique documents from a database using its limited download resources. Given this goal, the Hidden-Web crawler has to take two factors into account. (1) the number of new documents that can be obtained from the query qi and (2) the cost of issuing the query qi. For example, if two queries, qi and qj, incur the same cost, but qi returns more new pages than qj, qi is more desirable than qj. Similarly, if qi and qj return the same number of new documents, but qi incurs less cost then qj, qi is more desirable. Based on this observation, the Hidden-Web crawler may use the following efficiency metric to quantify the desirability of the query qi: Efficiency(qi) = Pnew(qi) Cost(qi) Here, Pnew(qi) represents the amount of new documents returned for qi (the pages that have not been returned for previous queries). Cost(qi) represents the cost of issuing the query qi. Intuitively, the efficiency of qi measures how many new documents are retrieved per unit cost, and can be used as an indicator of 4 For exact estimation, we need to know the total number of pages in the site. However, in order to compare only relative values among queries, this information is not actually needed. 103 ALGORITHM 3.1. Greedy SelectTerm() Parameters: T: The list of potential query keywords Procedure (1) Foreach tk in T do (2) Estimate Efficiency(tk) = Pnew(tk) Cost(tk) (3) done (4) return tk with maximum Efficiency(tk) Figure 6: Algorithm for selecting the next query term. how well our resources are spent when issuing qi. Thus, the Hidden Web crawler can estimate the efficiency of every candidate qi, and select the one with the highest value. By using its resources more efficiently, the crawler may eventually download the maximum number of unique documents. In Figure 6, we show the query selection function that uses the concept of efficiency. In principle, this algorithm takes a greedy approach and tries to maximize the potential gain in every step. We can estimate the efficiency of every query using the estimation method described in Section 3.1. That is, the size of the new documents from the query qi, Pnew(qi), is Pnew(qi) = P(q1 ∨ · · · ∨ qi−1 ∨ qi) − P(q1 ∨ · · · ∨ qi−1) = P(qi) − P(q1 ∨ · · · ∨ qi−1)P(qi|q1 ∨ · · · ∨ qi−1) from Equation 3, where P(qi) can be estimated using one of the methods described in section 3. 
3.3 Efficient calculation of query statistics

In estimating the efficiency of queries, we found that we need to measure P(qi | q1 ∨ · · · ∨ qi−1) for every potential query qi. This calculation can be very time-consuming if we repeat it from scratch for every query qi in every iteration of our algorithm. In this section, we explain how we can compute P(qi | q1 ∨ · · · ∨ qi−1) efficiently by maintaining a small table that we call a query statistics table.

The main idea for the query statistics table is that P(qi | q1 ∨ · · · ∨ qi−1) can be measured by counting how many times the keyword qi appears within the documents downloaded from q1, ..., qi−1. We record these counts in a table, as shown in Figure 7(a). The left column of the table contains all potential query terms and the right column contains the number of previously-downloaded documents containing the respective term. For example, the table in Figure 7(a) shows that we have downloaded 50 documents so far, and the term model appears in 10 of these documents. Given this number, we can compute that P(model | q1 ∨ · · · ∨ qi−1) = 10/50 = 0.2.

We note that the query statistics table needs to be updated whenever we issue a new query qi and download more documents. This update can be done efficiently as we illustrate in the following example.

EXAMPLE 1. After examining the query statistics table of Figure 7(a), we have decided to use the term computer as our next query qi. From the new query qi = computer, we downloaded 20 more new pages. Out of these, 12 contain the keyword model and 18 the keyword disk. The table in Figure 7(b) shows the frequency of each term in the newly-downloaded pages. We can update the old table (Figure 7(a)) to include this new information by simply adding corresponding entries in Figures 7(a) and (b). The result is shown on Figure 7(c). For example, keyword model exists in 10 + 12 = 22 pages within the pages retrieved from q1, ..., qi. According to this new table, P(model | q1 ∨ · · · ∨ qi) is now 22/70 ≈ 0.3.

(a) After q1, ..., qi−1 (Total pages: 50)
    Term tk     N(tk)
    model       10
    computer    38
    digital     50

(b) New pages from qi = computer (New pages: 20)
    Term tk     N(tk)
    model       12
    computer    20
    disk        18

(c) After q1, ..., qi (Total pages: 50 + 20 = 70)
    Term tk     N(tk)
    model       10 + 12 = 22
    computer    38 + 20 = 58
    disk        0 + 18 = 18
    digital     50 + 0 = 50

Figure 7: Updating the query statistics table.
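The bookkeeping behind this table amounts to a term-to-count dictionary. The sketch below mirrors Example 1 and is only an illustrative reading of this section; the class name and the set-of-terms document representation are our assumptions.

    from collections import Counter

    class QueryStatistics:
        # Maintains, for every term t, the number of previously-downloaded pages
        # containing t, so that P(t | q1 v ... v q_{i-1}) = count[t] / total_pages.
        def __init__(self):
            self.count = Counter()
            self.total_pages = 0

        def update(self, new_docs):
            # Fold in a batch of newly downloaded pages (each given as its set of terms).
            for terms in new_docs:
                self.count.update(set(terms))   # one increment per page containing the term
                self.total_pages += 1

        def p_cond(self, term):
            return self.count[term] / self.total_pages if self.total_pages else 0.0

    # Example 1 in numbers: after 50 pages (10 containing "model"), folding in the 20 pages
    # returned for "computer" (12 containing "model") gives p_cond("model") = 22/70, about 0.3.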
3.4 Crawling sites that limit the number of results

In certain cases, when a query matches a large number of pages, the Hidden Web site returns only a portion of those pages. For example, the Open Directory Project [2] allows the users to see only up to 10,000 results after they issue a query. Obviously, this kind of limitation has an immediate effect on our Hidden Web crawler. First, since we can only retrieve up to a specific number of pages per query, our crawler will need to issue more queries (and potentially will use up more resources) in order to download all the pages. Second, the query selection method that we presented in Section 3.2 assumes that for every potential query qi we can find P(qi | q1 ∨ · · · ∨ qi−1); that is, for every query qi we can find the fraction of documents in the whole text database that contain qi together with at least one of q1, ..., qi−1. However, if the text database returned only a portion of the results for any of the q1, ..., qi−1, then the value P(qi | q1 ∨ · · · ∨ qi−1) is not accurate and may affect our decision for the next query qi, and potentially the performance of our crawler. Since we cannot retrieve more results per query than the maximum number the Web site allows, our crawler has no other choice besides submitting more queries. However, there is a way to estimate the correct value for P(qi | q1 ∨ · · · ∨ qi−1) in the case where the Web site returns only a portion of the results.

Again, assume that the Hidden Web site we are currently crawling is represented as the rectangle in Figure 8 and its pages as points in the figure. Assume that we have already issued queries q1, ..., qi−1, which returned a number of results less than the maximum number that the site allows, and therefore we have downloaded all the pages for these queries (big circle in Figure 8). That is, at this point, our estimation for P(qi | q1 ∨ · · · ∨ qi−1) is accurate. Now assume that we submit query qi to the Web site, but due to a limitation in the number of results that we get back, we retrieve the set q′i (small circle in Figure 8) instead of the full set qi (dashed circle in Figure 8).

Figure 8: A Web site that does not return all the results.

Now we need to update our query statistics table so that it has accurate information for the next step. That is, although we got the set q′i back, for every potential query qi+1 we need to find P(qi+1 | q1 ∨ · · · ∨ qi):

P(qi+1 | q1 ∨ · · · ∨ qi) = [1 / P(q1 ∨ · · · ∨ qi)] · [ P(qi+1 ∧ (q1 ∨ · · · ∨ qi−1)) + P(qi+1 ∧ qi) − P(qi+1 ∧ qi ∧ (q1 ∨ · · · ∨ qi−1)) ]    (5)

In the previous equation, we can find P(q1 ∨ · · · ∨ qi) by estimating P(qi) with the methods shown in Section 3.1. Additionally, we can calculate P(qi+1 ∧ (q1 ∨ · · · ∨ qi−1)) and P(qi+1 ∧ qi ∧ (q1 ∨ · · · ∨ qi−1)) by directly examining the documents that we have downloaded from queries q1, ..., qi−1. The term P(qi+1 ∧ qi), however, is unknown and we need to estimate it. Assuming that q′i is a random sample of qi, then:

P(qi+1 ∧ q′i) / P(qi+1 ∧ qi) = P(q′i) / P(qi)    (6)

From Equation 6 we can calculate P(qi+1 ∧ qi), and after we substitute this value into Equation 5 we can find P(qi+1 | q1 ∨ · · · ∨ qi).
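A small sketch of this correction is given below, under the paper's assumption that the returned subset q′i is a random sample of qi. All inputs are plain fractions of the site; the parameter names are ours, and in practice the "measured" quantities come from scanning downloaded pages while the "estimated" ones come from the estimators of Section 3.1.

    def corrected_p_next(p_next_and_prev, p_next_and_qi_sample, p_qi_sample, p_qi_est,
                         p_next_and_qi_and_prev, p_union_all):
        # Estimate P(q_{i+1} | q1 v ... v q_i) when only a subset q'_i of the results
        # of q_i was returned (Equations 5 and 6).
        #   p_next_and_prev        : P(q_{i+1} ^ (q1 v ... v q_{i-1})), measured
        #   p_next_and_qi_sample   : P(q_{i+1} ^ q'_i), measured on the retrieved pages
        #   p_qi_sample            : P(q'_i), the fraction actually retrieved for q_i
        #   p_qi_est               : P(q_i), estimated (independence or Zipf estimator)
        #   p_next_and_qi_and_prev : P(q_{i+1} ^ q_i ^ (q1 v ... v q_{i-1})), measured on
        #                            the pages downloaded from q1, ..., q_{i-1}
        #   p_union_all            : P(q1 v ... v q_i), estimated
        # Equation 6: scale the intersection measured on the sample q'_i up to the full q_i.
        p_next_and_qi = p_next_and_qi_sample * (p_qi_est / p_qi_sample)
        # Equation 5.
        return (p_next_and_prev + p_next_and_qi - p_next_and_qi_and_prev) / p_union_all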
4. EXPERIMENTAL EVALUATION

In this section we experimentally evaluate the performance of the various algorithms for Hidden Web crawling presented in this paper. Our goal is to validate our theoretical analysis through real-world experiments, by crawling popular Hidden Web sites of textual databases. Since the number of documents that are discovered and downloaded from a textual database depends on the selection of the words that will be issued as queries to the search interface of each site, we compare the various selection policies that were described in Section 3, namely the random, generic-frequency, and adaptive algorithms. (Throughout our experiments, once an algorithm has submitted a query to a database, we exclude that query from subsequent submissions to the same database by the same algorithm.) The adaptive algorithm learns new keywords and terms from the documents that it downloads, and its selection process is driven by a cost model as described in Section 3.2. To keep our experiment and its analysis simple at this point, we will assume that the cost for every query is constant. That is, our goal is to maximize the number of downloaded pages by issuing the least number of queries. Later, in Section 4.4, we will present a comparison of our policies based on a more elaborate cost model. In addition, we use the independence estimator (Section 3.1) to estimate P(qi) from downloaded pages. Although the independence estimator is a simple estimator, our experiments will show that it can work very well in practice. (We defer reporting results based on the Zipf estimator to future work.)

For the generic-frequency policy, we compute the frequency distribution of words that appear in a 5.5-million-Web-page corpus downloaded from 154 Web sites of various topics [26]. Keywords are selected in decreasing order of the frequency with which they appear in this document set, with the most frequent one being selected first, followed by the second-most frequent keyword, and so on. (We did not manually exclude stop words, e.g., "the," "is," "of," from the keyword list. As it turns out, all Web sites except PubMed return matching documents for stop words such as "the.") Regarding the random policy, we use the same set of words collected from the Web corpus, but in this case, instead of selecting keywords based on their relative frequency, we choose them randomly (uniform distribution). In order to further investigate how the quality of the potential query-term list affects the random-based algorithm, we construct two sets: one with the 16,000 most frequent words of the term collection used in the generic-frequency policy (hereafter, the random policy with the set of 16,000 words will be referred to as random-16K), and another set with the 1 million most frequent words of the same collection (hereafter referred to as random-1M). The former set contains frequent words that appear in a large number of documents (at least 10,000 in our collection), and therefore can be considered high-quality terms. The latter set, though, contains a much larger collection of words, among which some might be bogus and meaningless.

The experiments were conducted by employing each one of the aforementioned algorithms (adaptive, generic-frequency, random-16K, and random-1M) to crawl and download contents from three Hidden Web sites: the PubMed Medical Library (http://www.pubmed.org), Amazon (http://www.amazon.com), and the Open Directory Project [2]. According to the information on PubMed's Web site, its collection contains approximately 14 million abstracts of biomedical articles. We consider these abstracts as the documents in the site, and in each iteration of the adaptive policy, we use these abstracts as input to the algorithm. Thus our goal is to discover as many unique abstracts as possible by repeatedly querying the Web query interface provided by PubMed. The Hidden Web crawling on the PubMed Web site can be considered topic-specific, due to the fact that all abstracts within PubMed are related to the fields of medicine and biology.

In the case of the Amazon Web site, we are interested in downloading all the hidden pages that contain information on books. The querying to Amazon is performed through the Software Developer's Kit that Amazon provides for interfacing to its Web site, which returns results in XML form. The generic keyword field is used for searching, and as input to the adaptive policy we extract the product description and the text of customer reviews when present in the XML reply. Since Amazon does not provide any information on how many books it has in its catalogue, we use random sampling on the 10-digit ISBN numbers of the books to estimate the size of the collection. Out of the 10,000 random ISBN numbers queried, 46 are found in the Amazon catalogue; therefore the size of its book collection is estimated to be 46/10,000 · 10^10 = 4.6 million books.
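The query-term lists that drive the generic-frequency, random-16K, and random-1M policies can be produced with a few lines of code. The sketch below is an illustration under our own assumptions about the corpus representation (a list of tokenized documents); it is not the tooling used for the 5.5-million-page collection of the experiments.

    import random
    from collections import Counter

    def build_keyword_lists(corpus_docs, k_small=16_000, k_large=1_000_000):
        # corpus_docs: iterable of tokenized documents from a generic Web corpus.
        df = Counter()
        for terms in corpus_docs:
            df.update(set(terms))                     # document frequency of each term
        ranked = [t for t, _ in df.most_common()]     # most frequent term first
        generic_frequency = ranked                    # issued in decreasing-frequency order
        random_16k_pool = ranked[:k_small]            # frequent, high-quality terms
        random_1m_pool = ranked[:k_large]             # much larger, noisier pool
        return generic_frequency, random_16k_pool, random_1m_pool

    def next_random_query(pool, already_issued):
        # Random policy: pick an unused keyword uniformly at random.
        remaining = [t for t in pool if t not in already_issued]
        return random.choice(remaining) if remaining else None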
It is also worth noting here that Amazon poses an upper limit on the number of results (books in our case) returned by each query, which is set to 32,000.

As for the third Hidden Web site, the Open Directory Project (hereafter also referred to as dmoz), the site maintains links to 3.8 million sites together with a brief summary of each listed site. The links are searchable through a keyword-search interface. We consider each indexed link together with its brief summary as a document of the dmoz site, and we provide the short summaries to the adaptive algorithm to drive the selection of new keywords for querying. On the dmoz Web site, we perform two Hidden Web crawls: the first is on its generic collection of 3.8 million indexed sites, regardless of the category that they fall into. The other crawl is performed specifically on the Arts section of dmoz (http://dmoz.org/Arts), which comprises approximately 429,000 indexed sites that are relevant to Arts, making this crawl topic-specific, as in PubMed. Like Amazon, dmoz also enforces an upper limit on the number of returned results, which is 10,000 links with their summaries.

4.1 Comparison of policies

The first question that we seek to answer is the evolution of the coverage metric as we submit queries to the sites. That is, what fraction of the collection of documents stored in the Hidden Web site can we download as we continuously query for new words selected using the policies described above? More formally, we are interested in the value of P(q1 ∨ · · · ∨ qi−1 ∨ qi) after we submit q1, ..., qi queries, and as i increases.

In Figures 9, 10, 11, and 12 we present the coverage metric for each policy, as a function of the query number, for the Web sites of PubMed, Amazon, general dmoz and the Arts-specific dmoz, respectively. On the y-axis the fraction of the total documents downloaded from the Web site is plotted, while the x-axis represents the query number.

Figure 9: Coverage of policies for PubMed (cumulative fraction of unique documents vs. query number; adaptive, generic-frequency, random-16K, random-1M).

Figure 10: Coverage of policies for Amazon (cumulative fraction of unique documents vs. query number; adaptive, generic-frequency, random-16K, random-1M).

A first observation from these graphs is that in general, the generic-frequency and the adaptive policies perform much better than the random-based algorithms. In all of the figures, the curves for random-1M and random-16K are significantly below those of the other policies.
Figure 11: Coverage of policies for general dmoz (cumulative fraction of unique documents vs. query number; adaptive, generic-frequency, random-16K, random-1M).

Figure 12: Coverage of policies for the Arts section of dmoz (cumulative fraction of unique documents vs. query number; adaptive, generic-frequency, random-16K, random-1M).

Between the generic-frequency and the adaptive policies, we can see that the latter outperforms the former when the site is topic-specific. For example, for the PubMed site (Figure 9), the adaptive algorithm issues only 83 queries to download almost 80% of the documents stored in PubMed, while the generic-frequency algorithm requires 106 queries for the same coverage. For the dmoz/Arts crawl (Figure 12), the difference is even more substantial: the adaptive policy is able to download 99.98% of the total sites indexed in the Directory by issuing 471 queries, while the frequency-based algorithm is much less effective using the same number of queries, and discovers only 72% of the total number of indexed sites. The adaptive algorithm, by examining the contents of the pages that it downloads at each iteration, is able to identify the topic of the site as expressed by the words that appear most frequently in the result set. Consequently, it is able to select words for subsequent queries that are more relevant to the site than those preferred by the generic-frequency policy, which are drawn from a large, generic collection. Table 1 shows a sample of 10 keywords out of the 211 chosen and submitted to the PubMed Web site by the adaptive algorithm, but not by the other policies. For each keyword, we present the number of the iteration, along with the number of results that it returned. As one can see from the table, these keywords are highly relevant to the topics of medicine and biology of the PubMed Medical Library, and match against numerous articles stored in its Web site.

Iteration   Keyword      Number of Results
23          department   2,719,031
34          patients     1,934,428
53          clinical     1,198,322
67          treatment    4,034,565
69          medical      1,368,200
70          hospital     503,307
146         disease      1,520,908
172         protein      2,620,938

Table 1: Sample of keywords queried to PubMed exclusively by the adaptive policy

In both cases examined in Figures 9 and 12, the random-based policies perform much worse than the adaptive algorithm and the generic-frequency policy. It is worth noting, however, that the random-based policy with the small, carefully selected set of 16,000 quality words manages to download a considerable fraction of 42.5% from the PubMed Web site after 200 queries, while the coverage for the Arts section of dmoz reaches 22.7% after 471 queried keywords. On the other hand, the random-based approach that makes use of the vast collection of 1 million words, among which a large number are bogus keywords, fails to download even a mere 1% of the total collection after submitting the same number of query words.

For the generic collections of Amazon and the dmoz sites, shown in Figures 10 and 11 respectively, we get mixed results: the generic-frequency policy shows slightly better performance than the adaptive policy for the Amazon site (Figure 10), while the adaptive method clearly outperforms the generic-frequency policy for the general dmoz site (Figure 11).
A closer look at the log files of the two Hidden Web crawlers reveals the main reason: Amazon was functioning in a very flaky way when the adaptive crawler visited it, resulting in a large number of lost results. Thus, we suspect that the slightly poorer performance of the adaptive policy is due to this experimental variance. We are currently running another experiment to verify whether this is indeed the case. Aside from this experimental variance, the Amazon result indicates that if the collection and the words that a Hidden Web site contains are generic enough, then the generic-frequency approach may be a good candidate algorithm for effective crawling.

As in the case of topic-specific Hidden Web sites, the random-based policies also exhibit poor performance compared to the other two algorithms when crawling generic sites: for the Amazon Web site, random-16K succeeds in downloading almost 36.7% after issuing 775 queries, while for the generic collection of dmoz, the fraction of the collection of links downloaded is 13.5% after the 770th query. Finally, as expected, random-1M is even worse than random-16K, downloading only 14.5% of Amazon and 0.3% of the generic dmoz.

In summary, the adaptive algorithm performs remarkably well in all cases: it is able to discover and download most of the documents stored in Hidden Web sites by issuing the least number of queries. When the collection refers to a specific topic, it is able to identify the keywords most relevant to the topic of the site and consequently ask for the terms that are most likely to return a large number of results. On the other hand, the generic-frequency policy proves to be quite effective too, though less so than the adaptive policy: it is able to retrieve a large portion of the collection relatively fast, and when the site is not topic-specific, its effectiveness can reach that of the adaptive policy (e.g., Amazon). Finally, the random policy performs poorly in general, and should not be preferred.

4.2 Impact of the initial query

An interesting issue that deserves further examination is whether the initial choice of the keyword used as the first query issued by the adaptive algorithm affects its effectiveness in subsequent iterations. The choice of this keyword is not made by the adaptive algorithm itself and has to be set manually, since its query statistics tables have not been populated yet. Thus, the selection is generally arbitrary, so for purposes of fully automating the whole process, some additional investigation seems necessary. For this reason, we initiated three adaptive Hidden Web crawlers targeting the PubMed Web site with different seed words: the word data, which returns 1,344,999 results; the word information, which reports 308,474 documents; and the word return, which retrieves 29,707 pages, out of 14 million. These keywords represent varying degrees of term popularity in PubMed, with the first one being of high popularity, the second of medium, and the third of low. We also show results for the keyword pubmed, used in the coverage experiments of Section 4.1, which returns 695 articles.

Figure 13: Convergence of the adaptive algorithm using different initial queries for crawling the PubMed Web site (fraction of unique documents vs. query number; seed words pubmed, data, information, return).
As we can see from Figure 13, after a small number of queries, all four crawlers download roughly the same fraction of the collection, regardless of their starting point: their coverages are roughly equivalent from the 25th query onward. Eventually, all four crawlers use the same set of terms for their queries, regardless of the initial query. In this specific experiment, from the 36th query onward, all four crawlers use the same terms for their queries in each iteration, or the same terms are used off by one or two query numbers. Our result confirms the observation of [11] that the choice of the initial query has minimal effect on the final performance. We can explain this intuitively as follows: our algorithm approximates the optimal set of queries to use for a particular Web site. Once the algorithm has issued a significant number of queries, it has an accurate estimation of the content of the Web site, regardless of the initial query. Since this estimation is similar for all runs of the algorithm, the crawlers will use roughly the same queries.

4.3 Impact of the limit in the number of results

While the Amazon and dmoz sites have result-size limits of 32,000 and 10,000 respectively, these limits may be larger than those imposed by other Hidden Web sites. In order to investigate how a tighter limit on the result size affects the performance of our algorithms, we performed two additional crawls on the generic dmoz site: we ran the generic-frequency and adaptive policies but retrieved only up to the top 1,000 results for every query. In Figure 14 we plot the coverage for the two policies as a function of the number of queries.

Figure 14: Coverage of general dmoz after limiting the number of results to 1,000 (cumulative fraction of unique pages vs. query number; adaptive, generic-frequency).

As one might expect, by comparing the new result in Figure 14 to that of Figure 11, where the result limit was 10,000, we conclude that the tighter limit requires a higher number of queries to achieve the same coverage. For example, when the result limit was 10,000, the adaptive policy could download 70% of the site after issuing 630 queries, while it had to issue 2,600 queries to download 70% of the site when the limit was 1,000. On the other hand, our new result shows that even with a tight result limit, it is still possible to download most of a Hidden Web site after issuing a reasonable number of queries. The adaptive policy could download more than 85% of the site after issuing 3,500 queries when the limit was 1,000. Finally, our result shows that our adaptive policy consistently outperforms the generic-frequency policy regardless of the result limit. In both Figure 14 and Figure 11, our adaptive policy shows significantly larger coverage than the generic-frequency policy for the same number of queries.

4.4 Incorporating the document download cost

For brevity of presentation, the performance evaluation results provided so far assumed a simplified cost model where every query involved a constant cost. In this section we present results regarding the performance of the adaptive and generic-frequency algorithms using Equation 2 to drive our query selection process. As we discussed in Section 2.3.1, this query cost model includes the cost for submitting the query to the site, retrieving the result index page, and also downloading the actual pages. For these costs, we examined the size of every result in the index page and the sizes of the documents, and we chose cq = 100, cr = 100, and cd = 10,000 as the values for the parameters of Equation 2 for the particular experiment that we ran on the PubMed Web site. The values that we selected imply that the costs for issuing one query and for retrieving one result from the result index page are roughly the same, while the cost for downloading an actual page is 100 times larger. We believe that these values are reasonable for the PubMed Web site.
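As a concrete reading of this cost model, Equation 2 with the above parameter values can be written as the small helper below; treating P(qi) and Pnew(qi) as plain fractions passed in by the caller is our simplification.

    def query_cost(p_q, p_new, c_q=100, c_r=100, c_d=10_000):
        # Equation 2: Cost(qi) = cq + cr * P(qi) + cd * Pnew(qi),
        # with the parameter values chosen for the PubMed experiment.
        return c_q + c_r * p_q + c_d * p_new

    def efficiency(p_q, p_new):
        # Efficiency used by the cost-aware query selection: Pnew(qi) / Cost(qi).
        return p_new / query_cost(p_q, p_new)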
Figure 15 shows the coverage of the adaptive and generic-frequency algorithms as a function of the resource units used during the download process. The horizontal axis is the amount of resources used, and the vertical axis is the coverage. As is evident from the graph, the adaptive policy makes more efficient use of the available resources, as it is able to download more articles than the generic-frequency policy using the same amount of resource units. However, the difference in coverage is less dramatic in this case, compared to the graph of Figure 9. The smaller difference is due to the fact that under the current cost metric, the download cost of the documents constitutes a significant portion of the total cost. Therefore, when both policies have downloaded the same number of documents, the saving of the adaptive policy is not as dramatic as before. That is, the savings in the query cost and the result index download cost are only a relatively small portion of the overall cost. Still, we observe noticeable savings from the adaptive policy. At a total cost of 8,000, for example, the coverage of the adaptive policy is roughly 0.5 while the coverage of the frequency policy is only 0.3.

Figure 15: Coverage of PubMed after incorporating the document download cost (cumulative fraction of unique pages vs. total cost, with cq = 100, cr = 100, cd = 10,000; adaptive, generic-frequency).

5. RELATED WORK

In a recent study, Raghavan and Garcia-Molina [29] present an architectural model for a Hidden Web crawler. The main focus of this work is to learn Hidden-Web query interfaces, not to generate queries automatically. The potential queries are either provided manually by users or collected from the query interfaces. In contrast, our main focus is to generate queries automatically without any human intervention.

The idea of automatically issuing queries to a database and examining the results has been previously used in different contexts. For example, in [10, 11], Callan and Connell try to acquire an accurate language model by collecting a uniform random sample from the database. In [22] Lawrence and Giles issue random queries to a number of Web search engines in order to estimate the fraction of the Web that has been indexed by each of them. In a similar fashion, Bharat and Broder [8] issue random queries to a set of search engines in order to estimate the relative size and overlap of their indexes. In [6], Barbosa and Freire experimentally evaluate methods for building multi-keyword queries that can return a large fraction of a document collection. Our work differs from the previous studies in two ways. First, it provides a theoretical framework for analyzing the process of generating queries for a database and examining the results, which can help us better understand the effectiveness of the methods presented in the previous work.
Second, we apply our framework to the problem of Hidden Web crawling and demonstrate the efficiency of our algorithms. Cope et al. [15] propose a method to automatically detect whether a particular Web page contains a search form. This work is complementary to ours; once we detect search interfaces on the Web using the method in [15], we may use our proposed algorithms to download pages automatically from those Web sites. Reference [4] reports methods to estimate what fraction of a text database can be eventually acquired by issuing queries to the database. In [3] the authors study query-based techniques that can extract relational data from large text databases. Again, these works study orthogonal issues and are complementary to our work.

In order to make documents in multiple textual databases searchable at a central place, a number of harvesting approaches have been proposed (e.g., OAI [21], DP9 [24]). These approaches essentially assume cooperative document databases that willingly share some of their metadata and/or documents to help a third-party search engine index the documents. Our approach assumes uncooperative databases that do not share their data publicly and whose documents are accessible only through search interfaces.

There exists a large body of work studying how to identify the most relevant database given a user query [20, 19, 14, 23, 18]. This body of work is often referred to as meta-searching or the database selection problem over the Hidden Web. For example, [19] suggests the use of focused probing to classify databases into a topical category, so that given a query, a relevant database can be selected based on its topical category. Our vision is different from this body of work in that we intend to download and index the Hidden Web pages at a central location in advance, so that users can access all the information at their convenience from one single location.

6. CONCLUSION AND FUTURE WORK

Traditional crawlers normally follow links on the Web to discover and download pages, and therefore they cannot reach the Hidden Web pages that are only accessible through query interfaces. In this paper, we studied how we can build a Hidden Web crawler that can automatically query a Hidden Web site and download pages from it. We proposed three different query generation policies for the Hidden Web: a policy that picks queries at random from a list of keywords, a policy that picks queries based on their frequency in a generic text collection, and a policy that adaptively picks a good query based on the content of the pages downloaded from the Hidden Web site. Experimental evaluation on four real Hidden Web sites shows that our policies have great potential. In particular, in certain cases the adaptive policy can download more than 90% of a Hidden Web site after issuing approximately 100 queries. Given these results, we believe that our work provides a potential mechanism to improve the search-engine coverage of the Web and the user experience of Web search.

6.1 Future Work

We briefly discuss some future research avenues.

Multi-attribute Databases. We are currently investigating how to extend our ideas to structured multi-attribute databases. While generating queries for multi-attribute databases is clearly a more difficult problem, we may exploit the following observation to address this problem: when a site supports multi-attribute queries, the site often returns pages that contain values for each of the query attributes.
For example, when an online bookstore supports queries on title, author and isbn, the pages returned from a query typically contain the title, author and ISBN of the corresponding books. Thus, if we can analyze the returned pages and extract the values for each field (e.g., title = 'Harry Potter', author = 'J.K. Rowling', etc.), we can apply the same idea that we used for the textual database: estimate the frequency of each attribute value and pick the most promising one. The main challenge is to automatically segment the returned pages so that we can identify the sections of the pages that present the values corresponding to each attribute. Since many Web sites follow limited formatting styles in presenting multiple attributes - for example, most book titles are preceded by the label "Title:" - we believe we may learn page-segmentation rules automatically from a small set of training examples.

Other Practical Issues. In addition to the automatic query generation problem, there are many practical issues to be addressed to build a fully automatic Hidden-Web crawler. For example, in this paper we assumed that the crawler already knows all query interfaces for Hidden-Web sites. But how can the crawler discover the query interfaces? The method proposed in [15] may be a good starting point. In addition, some Hidden-Web sites return their results in batches of, say, 20 pages, so the user has to click on a "next" button in order to see more results. In this case, a fully automatic Hidden-Web crawler should know that the first result index page contains only a partial result and press the "next" button automatically. Finally, some Hidden Web sites may contain an infinite number of Hidden Web pages which do not contribute much significant content (e.g., a calendar with links for every day). In this case the Hidden-Web crawler should be able to detect that the site does not have much more new content and stop downloading pages from the site. Page similarity detection algorithms may be useful for this purpose [9, 13].

7. REFERENCES

[1] LexisNexis, http://www.lexisnexis.com.
[2] The Open Directory Project, http://www.dmoz.org.
[3] E. Agichtein and L. Gravano. Querying text databases for efficient information extraction. In ICDE, 2003.
[4] E. Agichtein, P. Ipeirotis, and L. Gravano. Modeling query-based access to text databases. In WebDB, 2003.
[5] Article in the New York Times. Old Search Engine, the Library, Tries to Fit Into a Google World. Available at: http://www.nytimes.com/2004/06/21/technology/21LIBR.html, June 2004.
[6] L. Barbosa and J. Freire. Siphoning hidden-web data through keyword-based interfaces. In SBBD, 2004.
[7] M. K. Bergman. The deep web: Surfacing hidden value. http://www.press.umich.edu/jep/07-01/bergman.html.
[8] K. Bharat and A. Broder. A technique for measuring the relative size and overlap of public web search engines. In WWW, 1998.
[9] A. Z. Broder, S. C. Glassman, M. S. Manasse, and G. Zweig. Syntactic clustering of the web. In WWW, 1997.
[10] J. Callan, M. Connell, and A. Du. Automatic discovery of language models for text databases. In SIGMOD, 1999.
[11] J. P. Callan and M. E. Connell. Query-based sampling of text databases. Information Systems, 19(2):97-130, 2001.
[12] K. C.-C. Chang, B. He, C. Li, and Z. Zhang. Structured databases on the web: Observations and implications. Technical report, UIUC.
[13] J. Cho, N. Shivakumar, and H. Garcia-Molina. Finding replicated web collections. In SIGMOD, 2000.
[14] W. Cohen and Y. Singer. Learning to query the web. In AAAI Workshop on Internet-Based Information Systems, 1996.
[15] J. Cope, N. Craswell, and D. Hawking. Automated discovery of search interfaces on the web. In 14th Australasian Conference on Database Technologies, 2003.
[16] T. H. Cormen, C. E. Leiserson, and R. L. Rivest. Introduction to Algorithms, 2nd Edition. MIT Press/McGraw Hill, 2001.
[17] D. Florescu, A. Y. Levy, and A. O. Mendelzon. Database techniques for the world-wide web: A survey. SIGMOD Record, 27(3):59-74, 1998.
[18] B. He and K. C.-C. Chang. Statistical schema matching across web query interfaces. In SIGMOD Conference, 2003.
[19] P. Ipeirotis and L. Gravano. Distributed search over the hidden web: Hierarchical database sampling and selection. In VLDB, 2002.
[20] P. G. Ipeirotis, L. Gravano, and M. Sahami. Probe, count, and classify: Categorizing hidden web databases. In SIGMOD, 2001.
[21] C. Lagoze and H. V. Sompel. The Open Archives Initiative: Building a low-barrier interoperability framework. In JCDL, 2001.
[22] S. Lawrence and C. L. Giles. Searching the World Wide Web. Science, 280(5360):98-100, 1998.
[23] V. Z. Liu, R. C. Luo, J. Cho, and W. W. Chu. DPro: A probabilistic approach for hidden web database selection using dynamic probing. In ICDE, 2004.
[24] X. Liu, K. Maly, M. Zubair, and M. L. Nelson. DP9 - An OAI gateway service for Web crawlers. In JCDL, 2002.
[25] B. B. Mandelbrot. Fractal Geometry of Nature. W. H. Freeman & Co.
[26] A. Ntoulas, J. Cho, and C. Olston. What's new on the web? The evolution of the web from a search engine perspective. In WWW, 2004.
[27] A. Ntoulas, P. Zerfos, and J. Cho. Downloading hidden web content. Technical report, UCLA, 2004.
[28] S. Olsen. Does search engine's power threaten web's independence? http://news.com.com/2009-1023-963618.html.
[29] S. Raghavan and H. Garcia-Molina. Crawling the hidden web. In VLDB, 2001.
[30] G. K. Zipf. Human Behavior and the Principle of Least-Effort. Addison-Wesley, Cambridge, MA, 1949.
For example, if a Web site has 10,000 pages in total, and if 3,000 pages are returned for the query qi = "medicine", then P (qi) = 0.3. We use P (q1 ∧ q2) to represent the fraction of pages that are returned from both q1 and q2 (i.e., the intersection of P (q1) and P (q2)). Similarly, we use P (q1 ∨ q2) to represent the fraction of pages that are returned from either q1 or q2 (i.e., the union of P (q1) and P (q2)). We also use Cost (qi) to represent the cost of issuing the query qi. Depending on the scenario, the cost can be measured either in time, network bandwidth, the number of interactions with the site, or it can be a function of all of these. As we will see later, our proposed algorithms are independent of the exact cost function. In the most common case, the query cost consists of a number of factors, including the cost for submitting the query to the site, retrieving the result index page (Figure 3 (a)) and downloading the actual pages (Figure 3 (b)). We assume that submitting a query incurs a fixed cost of cq. The cost for downloading the result index page is proportional to the number of matching documents to the query, while the cost cd for downloading a matching document is also fixed. Then the overall cost of query qi is In certain cases, some of the documents from qi may have already been downloaded from previous queries. In this case, the crawler may skip downloading these documents and the cost of qi can be Here, we use Pnew (qi) to represent the fraction of the new documents from qi that have not been retrieved from previous queries. Later in Section 3.1 we will study how we can estimate P (qi) and Pnew (qi) to estimate the cost of qi. Since our algorithms are independent of the exact cost function, we will assume a generic cost function Cost (qi) in this paper. When we need a concrete cost function, however, we will use Equation 2. Given the notation, we can formalize the goal of a Hidden-Web crawler as follows: 3. KEYWORD SELECTION How should a crawler select the queries to issue? Given that the goal is to download the maximum number of unique documents from a textual database, we may consider one of the following options: . Random: We select random keywords from, say, an English dictionary and issue them to the database. The hope is that a random query will return a reasonable number of matching documents. . Generic-frequency: We analyze a generic document corpus collected elsewhere (say, from the Web) and obtain the generic fre quency distribution of each keyword. Based on this generic distribution, we start with the most frequent keyword, issue it to the Hidden-Web database and retrieve the result. We then continue to the second-most frequent keyword and repeat this process until we exhaust all download resources. The hope is that the frequent keywords in a generic corpus will also be frequent in the Hidden-Web database, returning many matching documents. . Adaptive: We analyze the documents returned from the previous queries issued to the Hidden-Web database and estimate which keyword is most likely to return the most documents. Based on this analysis, we issue the most "promising" query, and repeat the process. Among these three general policies, we may consider the random policy as the base comparison point since it is expected to perform the worst. Between the generic-frequency and the adaptive policies, both policies may show similar performance if the crawled database has a generic document collection without a specialized topic. 
The adaptive policy, however, may perform significantly better than the generic-frequency policy if the database has a very specialized collection that is different from the generic corpus. We will experimentally compare these three policies in Section 4. While the first two policies (random and generic-frequency policies) are easy to implement, we need to understand how we can analyze the downloaded pages to identify the most "promising" query in order to implement the adaptive policy. We address this issue in the rest of this section. 3.1 Estimating the number of matching pages In order to identify the most promising query, we need to estimate how many new documents we will download if we issue the query qi as the next query. That is, assuming that we have issued queries q1,..., qi-1 we need to estimate P (q1V • • • Vqi-1Vqi), for every potential next query qi and compare this value. In estimating this number, we note that we can rewrite P (q1 V • • • V qi-1 V qi) as: In the above formula, note that we can precisely measure P (q1 V • • • V qi-1) and P (qi I q1 V • • • V qi-1) by analyzing previouslydownloaded pages: We know P (q1 V • • • V qi-1), the union of all pages downloaded from q1,..., qi-1, since we have already issued q1,..., qi-1 and downloaded the matching pages .4 We can also measure P (qi I q1 V • • • V qi-1), the probability that qi appears in the pages from q1,..., qi-1, by counting how many times qi appears in the pages from q1,..., qi-1. Therefore, we only need to estimate P (qi) to evaluate P (q1 V • • • V qi). We may consider a number of different ways to estimate P (qi), including the following: 1. Independence estimator: We assume that the appearance of the term qi is independent of the terms q1,..., qi-1. That is, we assume that P (qi) = P (qiIq1 V • • • V qi-1). 2. Zipf estimator: In [19], Ipeirotis et al. proposed a method to estimate how many times a particular term occurs in the entire corpus based on a subset of documents from the corpus. Their method exploits the fact that the frequency of terms inside text collections follows a power law distribution [30, 25]. That is, if we rank all terms based on their occurrence frequency (with the most frequent term having a rank of 1, second most frequent a rank of 2 etc.), then the frequency f of a term inside the text collection is given by: where r is the rank of the term and α,,3, and - y are constants that depend on the text collection. Their main idea is (1) to estimate the three parameters, α,,3 and - y, based on the subset of documents that we have downloaded from previous queries, and (2) use the estimated parameters to predict f given the ranking r of a term within the subset. For a more detailed description on how we can use this method to estimate P (qi), we refer the reader to the extended version of this paper [27]. After we estimate P (qi) and P (qiIq1 V • • • V qi-1) values, we can calculate P (q1 V • • • V qi). In Section 3.3, we explain how we can efficiently compute P (qiIq1 V • • • V qi-1) by maintaining a succinct summary table. In the next section, we first examine how we can use this value to decide which query we should issue next to the Hidden Web site. 3.2 Query selection algorithm The goal of the Hidden-Web crawler is to download the maximum number of unique documents from a database using its limited download resources. Given this goal, the Hidden-Web crawler has to take two factors into account. 
3.2 Query selection algorithm

The goal of the Hidden-Web crawler is to download the maximum number of unique documents from a database using its limited download resources. Given this goal, the Hidden-Web crawler has to take two factors into account: (1) the number of new documents that can be obtained from the query qi and (2) the cost of issuing the query qi. For example, if two queries, qi and qj, incur the same cost, but qi returns more new pages than qj, qi is more desirable than qj. Similarly, if qi and qj return the same number of new documents, but qi incurs less cost than qj, qi is more desirable. Based on this observation, the Hidden-Web crawler may use the following efficiency metric to quantify the desirability of the query qi:

Efficiency(qi) = Pnew(qi) / Cost(qi)    (4)

Here, Pnew(qi) represents the amount of new documents returned for qi (the pages that have not been returned for previous queries). Cost(qi) represents the cost of issuing the query qi. Intuitively, the efficiency of qi measures how many new documents are retrieved per unit cost, and can be used as an indicator of how well our resources are spent when issuing qi. Thus, the Hidden-Web crawler can estimate the efficiency of every candidate qi and select the one with the highest value. By using its resources more efficiently, the crawler may eventually download the maximum number of unique documents.

In Figure 6, we show the query selection function that uses the concept of efficiency. In principle, this algorithm takes a greedy approach and tries to maximize the "potential gain" in every step.

ALGORITHM 3.1. Greedy SelectTerm()
Parameters: T: The list of potential query keywords
Procedure
(1) Foreach tk in T do
(2)   Estimate Efficiency(tk) = Pnew(tk) / Cost(tk)
(3) done
(4) return tk with maximum Efficiency(tk)

Figure 6: Algorithm for selecting the next query term.

We can estimate the efficiency of every query using the estimation method described in Section 3.1. That is, the size of the new documents from the query qi, Pnew(qi), is

Pnew(qi) = P(q1 ∨ · · · ∨ qi-1 ∨ qi) − P(q1 ∨ · · · ∨ qi-1)
         = P(qi) − P(q1 ∨ · · · ∨ qi-1) P(qi | q1 ∨ · · · ∨ qi-1)

from Equation 3, where P(qi) can be estimated using one of the methods described in Section 3.1. We can also estimate Cost(qi) similarly. For example, if Cost(qi) is cq + cr P(qi) + cd Pnew(qi) (Equation 2), we can estimate Cost(qi) by estimating P(qi) and Pnew(qi).

3.3 Efficient calculation of query statistics

In estimating the efficiency of queries, we found that we need to measure P(qi | q1 ∨ · · · ∨ qi-1) for every potential query qi. This calculation can be very time-consuming if we repeat it from scratch for every query qi in every iteration of our algorithm. In this section, we explain how we can compute P(qi | q1 ∨ · · · ∨ qi-1) efficiently by maintaining a small table that we call a query statistics table. The main idea for the query statistics table is that P(qi | q1 ∨ · · · ∨ qi-1) can be measured by counting how many times the keyword qi appears within the documents downloaded from q1, ..., qi-1. We record these counts in a table, as shown in Figure 7(a). The left column of the table contains all potential query terms and the right column contains the number of previously-downloaded documents containing the respective term. For example, the table in Figure 7(a) shows that we have downloaded 50 documents so far, and the term model appears in 10 of these documents. Given this number, we can compute that P(model | q1 ∨ · · · ∨ qi-1) = 10/50 = 0.2. We note that the query statistics table needs to be updated whenever we issue a new query qi and download more documents. This update can be done efficiently, as we illustrate in Example 1 below.
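Before turning to the example, here is a small sketch, with names of our own choosing, of how the query statistics table and the greedy selection of Algorithm 3.1 could fit together. It assumes the independence estimator of Section 3.1 and the cost model of Equation 2; the default constants cq, cr, cd are the values used later for PubMed in Section 4.4, and the candidate list would typically be the terms seen in downloaded documents that have not yet been issued.

from collections import Counter

class QueryStatistics:
    # The query statistics table of Section 3.3: for each term, the number
    # of already-downloaded documents that contain it.
    def __init__(self):
        self.term_counts = Counter()
        self.total_docs = 0

    def update(self, new_docs):
        # new_docs: term sets of the newly downloaded (previously unseen) documents.
        for terms in new_docs:
            self.total_docs += 1
            for t in terms:
                self.term_counts[t] += 1

    def p_cond(self, term):
        # P(term | q1 v ... v qi-1)
        return self.term_counts[term] / self.total_docs if self.total_docs else 0.0

def select_term(stats, candidates, p_prev, cq=100, cr=100, cd=10000):
    # Greedy SelectTerm() of Algorithm 3.1: pick the candidate with the
    # highest Efficiency(t) = Pnew(t) / Cost(t). Pnew comes from Equation 3
    # with the independence estimator, and Cost from Equation 2.
    best_term, best_eff = None, -1.0
    for t in candidates:
        cond = stats.p_cond(t)
        p_t = cond                      # independence estimator: P(t) = P(t | prev)
        p_new = p_t - p_prev * cond     # new fraction of the site covered by t
        cost = cq + cr * p_t + cd * p_new
        eff = p_new / cost if cost > 0 else 0.0
        if eff > best_eff:
            best_term, best_eff = t, eff
    return best_term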
EXAMPLE 1. After examining the query statistics table of Figure 7(a), we have decided to use the term "computer" as our next query qi. From the new query qi = "computer," we downloaded 20 more new pages. Out of these, 12 contain the keyword "model" and 18 the keyword "disk." The table in Figure 7(b) shows the frequency of each term in the newly-downloaded pages. We can update the old table (Figure 7(a)) to include this new information by simply adding the corresponding entries in Figures 7(a) and (b). The result is shown in Figure 7(c). For example, keyword "model" exists in 10 + 12 = 22 pages within the pages retrieved from q1, ..., qi. According to this new table, P(model | q1 ∨ · · · ∨ qi) = 22/70 ≈ 0.31.

Figure 7: Updating the query statistics table.

3.4 Crawling sites that limit the number of results

In certain cases, when a query matches a large number of pages, the Hidden Web site returns only a portion of those pages. For example, the Open Directory Project [2] allows the users to see only up to 10,000 results after they issue a query. Obviously, this kind of limitation has an immediate effect on our Hidden Web crawler. First, since we can only retrieve up to a specific number of pages per query, our crawler will need to issue more queries (and potentially will use up more resources) in order to download all the pages. Second, the query selection method that we presented in Section 3.2 assumes that for every potential query qi, we can find P(qi | q1 ∨ · · · ∨ qi-1). That is, for every query qi we can find the fraction of documents in the whole text database that contain qi together with at least one of q1, ..., qi-1. However, if the text database returned only a portion of the results for any of the q1, ..., qi-1, then the value P(qi | q1 ∨ · · · ∨ qi-1) is not accurate and may affect our decision for the next query qi, and potentially the performance of our crawler. Since we cannot retrieve more results per query than the maximum number the Web site allows, our crawler has no other choice besides submitting more queries. However, there is a way to estimate the correct value for P(qi | q1 ∨ · · · ∨ qi-1) in the case where the Web site returns only a portion of the results.

Figure 8: A Web site that does not return all the results.

Again, assume that the Hidden Web site we are currently crawling is represented as the rectangle in Figure 8 and its pages as points in the figure. Assume that we have already issued queries q1, ..., qi-1, which returned a number of results less than the maximum number that the site allows, and therefore we have downloaded all the pages for these queries (big circle in Figure 8). That is, at this point, our estimation for P(qi | q1 ∨ · · · ∨ qi-1) is accurate. Now assume that we submit query qi to the Web site, but due to a limitation in the number of results that we get back, we retrieve the set q'i (small circle in Figure 8) instead of the set qi (dashed circle in Figure 8). Now we need to update our query statistics table so that it has accurate information for the next step. That is, although we got the set q'i back, for every potential query qi+1 we need to find

P(qi+1 | q1 ∨ · · · ∨ qi) = [ P(qi+1 ∧ (q1 ∨ · · · ∨ qi-1)) + P(qi+1 ∧ qi) − P(qi+1 ∧ qi ∧ (q1 ∨ · · · ∨ qi-1)) ] / P(q1 ∨ · · · ∨ qi)    (5)

In the previous equation, we can find P(q1 ∨ · · · ∨ qi) by estimating P(qi) with the method shown in Section 3.1. Additionally, we can calculate P(qi+1 ∧ (q1 ∨ · · · ∨ qi-1)) and P(qi+1 ∧ qi ∧ (q1 ∨ · · · ∨ qi-1)) by directly examining the documents that we have downloaded from queries q1, ..., qi-1. The term P(qi+1 ∧ qi), however, is unknown and we need to estimate it. Assuming that q'i is a random sample of qi, then:

P(qi+1 ∧ q'i) / P(qi+1 ∧ qi) = P(q'i) / P(qi)    (6)

From Equation 6 we can calculate P(qi+1 ∧ qi), and after we replace this value in Equation 5 we can find P(qi+1 | q1 ∨ · · · ∨ qi).
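The following sketch, with hypothetical names, shows how this correction could be computed from the downloaded documents, under the stated assumption that the truncated set q'i is a uniform random sample of qi. The full result size of qi and the size of the union q1 ∨ · · · ∨ qi are themselves estimates, for example obtained with the estimators of Section 3.1 and Equation 3.

def corrected_p_cond(term, qi_term, prev_docs, qi_sample_docs,
                     qi_full_count, union_count):
    # Sketch of Equations 5 and 6. Documents are term sets, and a page
    # matches a single-keyword query iff it contains that keyword.
    #   prev_docs      - documents downloaded from q1..qi-1 (complete results)
    #   qi_sample_docs - the truncated sample downloaded for query qi
    #   qi_full_count  - estimated number of pages matching qi in the whole site
    #   union_count    - estimated number of pages matching q1 v ... v qi
    # All quantities below are counts on the same (unknown) site-size scale,
    # which cancels in the final ratio.
    n_term_prev = sum(1 for d in prev_docs if term in d)
    n_term_qi_prev = sum(1 for d in prev_docs if term in d and qi_term in d)
    n_term_qi_sample = sum(1 for d in qi_sample_docs if term in d)

    # Equation 6: scale the sample-based count up to qi's full result set.
    if qi_sample_docs:
        n_term_qi = n_term_qi_sample * (qi_full_count / len(qi_sample_docs))
    else:
        n_term_qi = 0.0

    # Equation 5: P(term | q1 v ... v qi)
    return (n_term_prev + n_term_qi - n_term_qi_prev) / union_count if union_count else 0.0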
4. EXPERIMENTAL EVALUATION

In this section we experimentally evaluate the performance of the various algorithms for Hidden Web crawling presented in this paper. Our goal is to validate our theoretical analysis through real-world experiments, by crawling popular Hidden Web sites of textual databases. Since the number of documents that are discovered and downloaded from a textual database depends on the selection of the words that will be issued as queries5 to the search interface of each site, we compare the various selection policies that were described in Section 3, namely the random, generic-frequency, and adaptive algorithms.

The adaptive algorithm learns new keywords and terms from the documents that it downloads, and its selection process is driven by a cost model as described in Section 3.2. To keep our experiment and its analysis simple at this point, we will assume that the cost for every query is constant. That is, our goal is to maximize the number of downloaded pages by issuing the least number of queries. Later, in Section 4.4, we will present a comparison of our policies based on a more elaborate cost model. In addition, we use the independence estimator (Section 3.1) to estimate P(qi) from downloaded pages. Although the independence estimator is a simple estimator, our experiments will show that it can work very well in practice.6

5 Throughout our experiments, once an algorithm has submitted a query to a database, we exclude the query from subsequent submissions to the same database from the same algorithm.
6 We defer the reporting of results based on the Zipf estimation to future work.

For the generic-frequency policy, we compute the frequency distribution of words that appear in a 5.5-million-Web-page corpus downloaded from 154 Web sites of various topics [26]. Keywords are selected based on their decreasing frequency with which they appear in this document set, with the most frequent one being selected first, followed by the second-most frequent keyword, etc.7 Regarding the random policy, we use the same set of words collected from the Web corpus, but in this case, instead of selecting keywords based on their relative frequency, we choose them randomly (uniform distribution). In order to further investigate how the quality of the potential query-term list affects the random-based algorithm, we construct two sets: one with the 16,000 most frequent words of the term collection used in the generic-frequency policy (hereafter, the random policy with the set of 16,000 words will be referred to as random-16K), and another set with the 1 million most frequent words of the same collection as above (hereafter referred to as random-1M). The former set has frequent words that appear in a large number of documents (at least 10,000 in our collection), and therefore can be considered "high-quality" terms. The latter set, though, contains a much larger collection of words, among which some might be bogus and meaningless.

The experiments were conducted by employing each one of the aforementioned algorithms (adaptive, generic-frequency, random-16K, and random-1M) to crawl and download contents from three Hidden Web sites: the PubMed Medical Library,8 Amazon,9 and the Open Directory Project [2].
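For illustration, the keyword pools for the generic-frequency and random policies could be built along the following lines. The function names are ours, and the use of document frequency (rather than raw term frequency) for the ranking is our assumption rather than a detail stated in the paper.

from collections import Counter
import random

def build_keyword_pools(corpus_docs, k_small=16000, k_large=1000000):
    # corpus_docs: token lists from a generic Web corpus, standing in for the
    # 5.5-million-page collection used in the experiments.
    df = Counter()
    for tokens in corpus_docs:
        df.update(set(tokens))                       # document frequency of each word
    ranked = [t for t, _ in df.most_common()]        # generic-frequency order
    return ranked, ranked[:k_small], ranked[:k_large]  # full list, random-16K pool, random-1M pool

def next_random_keyword(pool, issued):
    # Random policy: uniform choice among pool keywords not issued yet.
    remaining = [t for t in pool if t not in issued]
    return random.choice(remaining) if remaining else None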
According to the information on PubMed's Web site, its collection contains approximately 14 million abstracts of biomedical articles. We consider these abstracts as the "documents" in the site, and in each iteration of the adaptive policy, we use these abstracts as input to the algorithm. Thus our goal is to "discover" as many unique abstracts as possible by repeatedly querying the Web query interface provided by PubMed. The Hidden Web crawling on the PubMed Web site can be considered topic-specific, due to the fact that all abstracts within PubMed are related to the fields of medicine and biology.

In the case of the Amazon Web site, we are interested in downloading all the hidden pages that contain information on books. The querying to Amazon is performed through the Software Developer's Kit that Amazon provides for interfacing to its Web site, and which returns results in XML form. The generic "keyword" field is used for searching, and as input to the adaptive policy we extract the product description and the text of customer reviews when present in the XML reply. Since Amazon does not provide any information on how many books it has in its catalogue, we use random sampling on the 10-digit ISBN number of the books to estimate the size of the collection. Out of the 10,000 random ISBN numbers queried, 46 are found in the Amazon catalogue; since the 10-digit ISBN space contains 10^10 possible numbers, the size of its book collection is estimated to be (46/10,000) × 10^10 = 46 million books. It is also worth noting here that Amazon poses an upper limit on the number of results (books in our case) returned by each query, which is set to 32,000.

As for the third Hidden Web site, the Open Directory Project (hereafter also referred to as dmoz), the site maintains the links to 3.8 million sites together with a brief summary of each listed site. The links are searchable through a keyword-search interface. We consider each indexed link together with its brief summary as the document of the dmoz site, and we provide the short summaries to the adaptive algorithm to drive the selection of new keywords for querying. On the dmoz Web site, we perform two Hidden Web crawls: the first is on its generic collection of 3.8 million indexed sites, regardless of the category that they fall into. The other crawl is performed specifically on the Arts section of dmoz (http://dmoz.org/Arts), which comprises approximately 429,000 indexed sites that are relevant to Arts, making this crawl topic-specific, as in PubMed. Like Amazon, dmoz also enforces an upper limit on the number of returned results, which is 10,000 links with their summaries.

4.1 Comparison of policies

The first question that we seek to answer is the evolution of the coverage metric as we submit queries to the sites. That is, what fraction of the collection of documents stored in the Hidden Web site can we download as we continuously query for new words selected using the policies described above? More formally, we are interested in the value of P(q1 ∨ · · · ∨ qi-1 ∨ qi) after we submit q1, ..., qi queries, and as i increases. In Figures 9, 10, 11, and 12 we present the coverage metric for each policy, as a function of the query number, for the Web sites of PubMed, Amazon, general dmoz and the Arts-specific dmoz, respectively.

Figure 9: Coverage of policies for PubMed
Figure 10: Coverage of policies for Amazon
Figure 11: Coverage of policies for general dmoz
Figure 12: Coverage of policies for the Arts section of dmoz
On the y-axis the fraction of the total documents downloaded from the website is plotted, while the x-axis represents the query number. A first observation from these graphs is that, in general, the generic-frequency and the adaptive policies perform much better than the random-based algorithms. In all of the figures, the graphs for random-1M and random-16K are significantly below those of the other policies.

Between the generic-frequency and the adaptive policies, we can see that the latter outperforms the former when the site is topic-specific. For example, for the PubMed site (Figure 9), the adaptive algorithm issues only 83 queries to download almost 80% of the documents stored in PubMed, but the generic-frequency algorithm requires 106 queries for the same coverage. For the dmoz/Arts crawl (Figure 12), the difference is even more substantial: the adaptive policy is able to download 99.98% of the total sites indexed in the Directory by issuing 471 queries, while the frequency-based algorithm is much less effective using the same number of queries, and discovers only 72% of the total number of indexed sites. The adaptive algorithm, by examining the contents of the pages that it downloads at each iteration, is able to identify the topic of the site as expressed by the words that appear most frequently in the result set. Consequently, it is able to select words for subsequent queries that are more relevant to the site than those preferred by the generic-frequency policy, which are drawn from a large, generic collection. Table 1 shows a sample of 10 keywords out of 211 chosen and submitted to the PubMed Web site by the adaptive algorithm, but not by the other policies. For each keyword, we present the number of the iteration, along with the number of results that it returned. As one can see from the table, these keywords are highly relevant to the topics of medicine and biology of the Public Medical Library, and match against numerous articles stored in its Web site.

Table 1: Sample of keywords queried to PubMed exclusively by the adaptive algorithm

In both cases examined in Figures 9 and 12, the random-based policies perform much worse than the adaptive algorithm and the generic-frequency policy. It is worth noting, however, that the random-based policy with the small, carefully selected set of 16,000 "quality" words manages to download a considerable fraction (42.5%) of the PubMed Web site after 200 queries, while the coverage for the Arts section of dmoz reaches 22.7% after 471 queried keywords. On the other hand, the random-based approach that makes use of the vast collection of 1 million words, among which a large number are bogus keywords, fails to download even 1% of the total collection after submitting the same number of query words.

For the generic collections of Amazon and the dmoz sites, shown in Figures 10 and 11 respectively, we get mixed results: the generic-frequency policy shows slightly better performance than the adaptive policy for the Amazon site (Figure 10), and the adaptive method clearly outperforms the generic-frequency policy for the general dmoz site (Figure 11). A closer look at the log files of the two Hidden Web crawlers reveals the main reason: Amazon was functioning unreliably when the adaptive crawler visited it, resulting in a large number of lost results. Thus, we suspect that the slightly poorer performance of the adaptive policy is due to this experimental variance. We are currently running another experiment to verify whether this is indeed the case.
Aside from this experimental variance, the Amazon result indicates that if the collection and the words that a Hidden Web site contains are generic enough, then the generic-frequency approach may be a good candidate algorithm for effective crawling. As in the case of topic-specific Hidden Web sites, the random-based policies also exhibit poor performance compared to the other two algorithms when crawling generic sites: for the Amazon Web site, random-16K succeeds in downloading almost 36.7% after issuing 775 queries, while for the generic collection of dmoz, the fraction of the collection of links downloaded is 13.5% after the 770th query. Finally, as expected, random-1M is even worse than random-16K, downloading only 14.5% of Amazon and 0.3% of the generic dmoz.

In summary, the adaptive algorithm performs remarkably well in all cases: it is able to discover and download most of the documents stored in Hidden Web sites by issuing the least number of queries. When the collection refers to a specific topic, it is able to identify the keywords most relevant to the topic of the site and consequently ask for terms that are most likely to return a large number of results. On the other hand, the generic-frequency policy proves to be quite effective too, though less so than the adaptive policy: it is able to retrieve a large portion of the collection relatively fast, and when the site is not topic-specific, its effectiveness can reach that of the adaptive policy (e.g., Amazon). Finally, the random policy performs poorly in general, and should not be preferred.

4.2 Impact of the initial query

An interesting issue that deserves further examination is whether the initial choice of the keyword used as the first query issued by the adaptive algorithm affects its effectiveness in subsequent iterations. The choice of this keyword is not made by the selection process of the adaptive algorithm itself and has to be set manually, since its query statistics tables have not been populated yet. Thus, the selection is generally arbitrary, so for purposes of fully automating the whole process, some additional investigation seems necessary. For this reason, we initiated three adaptive Hidden Web crawlers targeting the PubMed Web site with different seed-words: the word "data", which returns 1,344,999 results, the word "information", which reports 308,474 documents, and the word "return", which retrieves 29,707 pages, out of 14 million. These keywords represent varying degrees of term popularity in PubMed, with the first one being of high popularity, the second of medium, and the third of low. We also show results for the keyword "pubmed", used in the experiments for coverage of Section 4.1, which returns 695 articles.

Figure 13: Convergence of the adaptive algorithm using different initial queries for crawling the PubMed Web site

As we can see from Figure 13, after a small number of queries, all four crawlers roughly download the same fraction of the collection, regardless of their starting point: their coverages are roughly equivalent from the 25th query. Eventually, all four crawlers use the same set of terms for their queries, regardless of the initial query. In the specific experiment, from the 36th query onward, all four crawlers use the same terms for their queries in each iteration, or the same terms are used off by one or two query numbers. Our result confirms the observation of [11] that the choice of the initial query has minimal effect on the final performance.
We can explain this intuitively as follows: our algorithm approximates the optimal set of queries to use for a particular Web site. Once the algorithm has issued a significant number of queries, it has an accurate estimation of the content of the Web site, regardless of the initial query. Since this estimation is similar for all runs of the algorithm, the crawlers will use roughly the same queries.

4.3 Impact of the limit in the number of results

While the Amazon and dmoz sites have the respective limits of 32,000 and 10,000 on their result sizes, these limits may be larger than those imposed by other Hidden Web sites. In order to investigate how a "tighter" limit in the result size affects the performance of our algorithms, we performed two additional crawls of the generic dmoz site: we ran the generic-frequency and adaptive policies but retrieved only up to the top 1,000 results for every query. In Figure 14 we plot the coverage for the two policies as a function of the number of queries.

Figure 14: Coverage of general dmoz after limiting the number of results to 1,000

As one might expect, by comparing the new result in Figure 14 to that of Figure 11, where the result limit was 10,000, we conclude that the tighter limit requires a higher number of queries to achieve the same coverage. For example, when the result limit was 10,000, the adaptive policy could download 70% of the site after issuing 630 queries, while it had to issue 2,600 queries to download 70% of the site when the limit was 1,000. On the other hand, our new result shows that even with a tight result limit, it is still possible to download most of a Hidden Web site after issuing a reasonable number of queries. The adaptive policy could download more than 85% of the site after issuing 3,500 queries when the limit was 1,000. Finally, our result shows that our adaptive policy consistently outperforms the generic-frequency policy regardless of the result limit. In both Figure 14 and Figure 11, our adaptive policy shows significantly larger coverage than the generic-frequency policy for the same number of queries.

4.4 Incorporating the document download cost

For brevity of presentation, the performance evaluation results provided so far assumed a simplified cost model where every query involved a constant cost. In this section we present results regarding the performance of the adaptive and generic-frequency algorithms using Equation 2 to drive our query selection process. As we discussed in Section 2.3.1, this query cost model includes the cost for submitting the query to the site, retrieving the result index page, and also downloading the actual pages. For these costs, we examined the size of every result in the index page and the sizes of the documents, and we chose cq = 100, cr = 100, and cd = 10,000 as values for the parameters of Equation 2, for the particular experiment that we ran on the PubMed Web site. The values that we selected imply that the cost for issuing one query and retrieving one result from the result index page are roughly the same, while the cost for downloading an actual page is 100 times larger. We believe that these values are reasonable for the PubMed Web site. Figure 15 shows the coverage of the adaptive and generic-frequency algorithms as a function of the resource units used during the download process. The horizontal axis is the amount of resources used, and the vertical axis is the coverage.
As is evident from the graph, the adaptive policy makes more efficient use of the available resources, as it is able to download more articles than the generic-frequency policy using the same amount of resource units. However, the difference in coverage is less dramatic in this case, compared to the graph of Figure 9. The smaller difference is due to the fact that under the current cost metric, the download cost of documents constitutes a significant portion of the total cost. Therefore, when both policies download the same number of documents, the savings of the adaptive policy are not as dramatic as before. That is, the savings in the query cost and the result index download cost are only a relatively small portion of the overall cost. Still, we observe noticeable savings from the adaptive policy. At a total cost of 8,000, for example, the coverage of the adaptive policy is roughly 0.5 while the coverage of the frequency policy is only 0.3.

Figure 15: Coverage of PubMed after incorporating the document download cost

5. RELATED WORK

In a recent study, Raghavan and Garcia-Molina [29] present an architectural model for a Hidden Web crawler. The main focus of this work is to learn Hidden-Web query interfaces, not to generate queries automatically. The potential queries are either provided manually by users or collected from the query interfaces. In contrast, our main focus is to generate queries automatically without any human intervention.

The idea of automatically issuing queries to a database and examining the results has been previously used in different contexts. For example, in [10, 11], Callan and Connell try to acquire an accurate language model by collecting a uniform random sample from the database. In [22] Lawrence and Giles issue random queries to a number of Web Search Engines in order to estimate the fraction of the Web that has been indexed by each of them. In a similar fashion, Bharat and Broder [8] issue random queries to a set of Search Engines in order to estimate the relative size and overlap of their indexes. In [6], Barbosa and Freire experimentally evaluate methods for building multi-keyword queries that can return a large fraction of a document collection. Our work differs from the previous studies in two ways. First, it provides a theoretical framework for analyzing the process of generating queries for a database and examining the results, which can help us better understand the effectiveness of the methods presented in the previous work. Second, we apply our framework to the problem of Hidden Web crawling and demonstrate the efficiency of our algorithms.

Cope et al. [15] propose a method to automatically detect whether a particular Web page contains a search form. This work is complementary to ours; once we detect search interfaces on the Web using the method in [15], we may use our proposed algorithms to download pages automatically from those Web sites. Reference [4] reports methods to estimate what fraction of a text database can be eventually acquired by issuing queries to the database. In [3] the authors study query-based techniques that can extract relational data from large text databases. Again, these works study orthogonal issues and are complementary to our work.

In order to make documents in multiple textual databases searchable at a central place, a number of "harvesting" approaches have been proposed (e.g., OAI [21], DP9 [24]).
These approaches essentially assume cooperative document databases that willingly share some of their metadata and/or documents to help a third-party search engine index the documents. Our approach assumes uncooperative databases that do not share their data publicly and whose documents are accessible only through search interfaces.

There exists a large body of work studying how to identify the most relevant database given a user query [20, 19, 14, 23, 18]. This body of work is often referred to as the meta-searching or database selection problem over the Hidden Web. For example, [19] suggests the use of focused probing to classify databases into a topical category, so that given a query, a relevant database can be selected based on its topical category. Our vision is different from this body of work in that we intend to download and index the Hidden pages at a central location in advance, so that users can access all the information at their convenience from one single location.

6. CONCLUSION AND FUTURE WORK

Traditional crawlers normally follow links on the Web to discover and download pages. Therefore they cannot reach the Hidden Web pages, which are only accessible through query interfaces. In this paper, we studied how we can build a Hidden Web crawler that can automatically query a Hidden Web site and download pages from it. We proposed three different query generation policies for the Hidden Web: a policy that picks queries at random from a list of keywords, a policy that picks queries based on their frequency in a generic text collection, and a policy that adaptively picks a good query based on the content of the pages downloaded from the Hidden Web site. Experimental evaluation on 4 real Hidden Web sites shows that our policies have great potential. In particular, in certain cases the adaptive policy can download more than 90% of a Hidden Web site after issuing approximately 100 queries. Given these results, we believe that our work provides a potential mechanism to improve the search-engine coverage of the Web and the user experience of Web search.

6.1 Future Work

We briefly discuss some future-research avenues.

Multi-attribute Databases

We are currently investigating how to extend our ideas to structured multi-attribute databases. While generating queries for multi-attribute databases is clearly a more difficult problem, we may exploit the following observation to address this problem: when a site supports multi-attribute queries, the site often returns pages that contain values for each of the query attributes. For example, when an online bookstore supports queries on title, author and isbn, the pages returned from a query typically contain the title, author and ISBN of corresponding books. Thus, if we can analyze the returned pages and extract the values for each field (e.g., title = 'Harry Potter', author = 'J.K. Rowling', etc.), we can apply the same idea that we used for the textual database: estimate the frequency of each attribute value and pick the most promising one. The main challenge is to automatically segment the returned pages so that we can identify the sections of the pages that present the values corresponding to each attribute. Since many Web sites follow limited formatting styles in presenting multiple attributes (for example, most book titles are preceded by the label "Title:"), we believe we may learn page-segmentation rules automatically from a small set of training examples.
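To illustrate how this value-frequency idea might look in code, here is a small sketch of our own (not part of the paper), assuming the returned pages have already been segmented into records with named fields such as 'title' and 'author'.

from collections import Counter

def most_promising_value(records, attribute, issued_values):
    # records: dicts extracted from result pages, e.g.
    #   {'title': 'Harry Potter', 'author': 'J.K. Rowling', 'isbn': '0590353403'}
    # Return the most frequent value of the given attribute that has not
    # been queried yet, mirroring the term-frequency idea of Section 3.
    counts = Counter(r[attribute] for r in records if attribute in r)
    for value, _ in counts.most_common():
        if value not in issued_values:
            return value
    return None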
Other Practical Issues

In addition to the automatic query generation problem, there are many practical issues to be addressed to build a fully automatic Hidden-Web crawler. For example, in this paper we assumed that the crawler already knows all query interfaces for Hidden-Web sites. But how can the crawler discover the query interfaces? The method proposed in [15] may be a good starting point. In addition, some Hidden-Web sites return their results in batches of, say, 20 pages, so the user has to click on a "next" button in order to see more results. In this case, a fully automatic Hidden-Web crawler should know that the first result index page contains only a partial result and "press" the next button automatically. Finally, some Hidden Web sites may contain an infinite number of Hidden Web pages which do not contribute much significant content (e.g., a calendar with links for every day). In this case the Hidden-Web crawler should be able to detect that the site does not have much more new content and stop downloading pages from the site. Page similarity detection algorithms may be useful for this purpose [9, 13].
Downloading Textual Hidden Web Content Through Keyword Queries ABSTRACT An ever-increasing amount of information on the Web today is available only through search interfaces: the users have to type in a set of keywords in a search form in order to access the pages from certain Web sites. These pages are often referred to as the Hidden Web or the Deep Web. Since there are no static links to the Hidden Web pages, search engines cannot discover and index such pages and thus do not return them in the results. However, according to recent studies, the content provided by many Hidden Web sites is often of very high quality and can be extremely valuable to many users. In this paper, we study how we can build an effective Hidden Web crawler that can autonomously discover and download pages from the Hidden Web. Since the only "entry point" to a Hidden Web site is a query interface, the main challenge that a Hidden Web crawler has to face is how to automatically generate meaningful queries to issue to the site. Here, we provide a theoretical framework to investigate the query generation problem for the Hidden Web and we propose effective policies for generating queries automatically. Our policies proceed iteratively, issuing a different query in every iteration. We experimentally evaluate the effectiveness of these policies on 4 real Hidden Web sites and our results are very promising. For instance, in one experiment, one of our policies downloaded more than 90% of a Hidden Web site (that contains 14 million documents) after issuing fewer than 100 queries. 1. INTRODUCTION Recent studies show that a significant fraction of Web content cannot be reached by following links [7, 12]. In particular, a large part of the Web is "hidden" behind search forms and is reachable only when users type in a set of keywords, or queries, to the forms. These pages are often referred to as the Hidden Web [17] or the Deep Web [7], because search engines typically cannot index the pages and do not return them in their results (thus, the pages are essentially "hidden" from a typical Web user). In [12], Chang et al. estimate that well over 100,000 Hidden-Web sites currently exist on the Web. Moreover, the content provided by many Hidden-Web sites is often of very high quality and can be extremely valuable to many users [7]. In this paper, we study how we can build a Hidden-Web crawler2 that can automatically download pages from the Hidden Web, so that search engines can index them. Conventional crawlers rely on the hyperlinks on the Web to discover pages, so current search engines cannot index the Hidden-Web pages (due to the lack of links). Since a majority of Web users rely on search engines to discover pages, when pages are not indexed by search engines, they are unlikely to be viewed by many Web users. Unless users go directly to Hidden-Web sites and issue queries there, they cannot access the pages at the sites. By making the Hidden-Web pages searchable at a central location, we can significantly reduce the user's wasted time and effort in searching the Hidden Web. Users do not necessarily perceive what actually exists on the Web, but what is indexed by search engines [28]. Our Figure 1: A single-attribute search interface Hidden-Web crawler attempts to automate this process for Hid - Figure 2: A multi-attribute search interface den Web sites with textual content, thus minimizing the associated costs and effort required. 
The first challenge was addressed by Raghavan and Garcia-Molina in [29], where a method for learning search interfaces was presented. Here, we present a solution to the second challenge, i.e. how a crawler can automatically generate queries so that it can discover and download the Hidden Web pages. Clearly, when the search forms list all possible values for a query (e.g., through a drop-down list), the solution is straightforward. We exhaustively issue all possible queries, one query at a time. When the query forms have a "free text" input, however, an infinite number of queries are possible, so we cannot exhaustively issue all possible queries. In this case, what queries should we pick? Can the crawler automatically come up with meaningful queries without understanding the semantics of the search form? In this paper, we provide a theoretical framework to investigate the Hidden-Web crawling problem and propose effective ways of generating queries automatically. We also evaluate our proposed solutions through experiments conducted on real Hidden-Web sites. In summary, this paper makes the following contributions: • We present a formal framework to study the problem of HiddenWeb crawling. (Section 2). • We investigate a number of crawling policies for the Hidden Web, including the optimal policy that can potentially download the maximum number of pages through the minimum number of interactions. • We propose a new adaptive policy that approximates the optimal policy. Our adaptive policy examines the pages returned from previous queries and adapts its query-selection policy automatically based on them (Section 3). • We evaluate various crawling policies through experiments on real Web sites. Our experiments will show the relative advantages of various crawling policies and demonstrate their potential. The results from our experiments are very promising. In one experiment, for example, our adaptive policy downloaded more than 90% of the pages within PubMed (that contains 14 million documents) after it issued fewer than 100 queries. 2. FRAMEWORK In this section, we present a formal framework for the study of the Hidden-Web crawling problem. In Section 2.1, we describe our assumptions on Hidden-Web sites and explain how users interact with the sites. Based on this interaction model, we present a highlevel algorithm for a Hidden-Web crawler in Section 2.2. Finally in Section 2.3, we formalize the Hidden-Web crawling problem. 2.1 Hidden-Web database model There exists a variety of Hidden Web sources that provide information on a multitude of topics. Depending on the type of information, we may categorize a Hidden-Web site either as a textual database or a structured database. A textual database is a site that mainly contains plain-text documents, such as PubMed and LexisNexis (an online database of legal documents [1]). Since plaintext documents do not usually have well-defined structure, most textual databases provide a simple search interface where users type a list of keywords in a single search box (Figure 1). In this paper, we will mainly focus on textual databases that support single-attribute keyword queries. We discuss how we can extend our ideas for the textual databases to multi-attribute structured databases in Section 6.1. Typically, the users need to take the following steps in order to access pages in a Hidden-Web database: 1. Step 1. First, the user issues a query, say "liver," through the search interface provided by the Web site (such as the one shown in Figure 1). 2. Step 2. 
Shortly after the user issues the query, she is presented with a result index page. That is, the Web site returns a list of links to potentially relevant Web pages, as shown in Figure 3 (a). 3. Step 3. From the list in the result index page, the user identifies the pages that look "interesting" and follows the links. Clicking on a link leads the user to the actual Web page, such as the one shown in Figure 3 (b), that the user wants to look at. 2.2 A generic Hidden Web crawling algorithm Given that the only "entry" to the pages in a Hidden-Web site is its search form, a Hidden-Web crawler should follow the three steps described in the previous section. That is, the crawler has to generate a query, issue it to the Web site, download the result index page, and follow the links to download the actual pages. In most cases, a crawler has limited time and network resources, so the crawler repeats these steps until it uses up its resources. In Figure 4 we show the generic algorithm for a Hidden-Web crawler. For simplicity, we assume that the Hidden-Web crawler issues single-term queries only. (For most Web sites that assume "AND" for multi-keyword queries, single-term queries return the maximum number of results; extending our work to multi-keyword queries is straightforward.) The crawler first decides which query term it is going to use (Step (2)), issues the query, and retrieves the result index page (Step (3)). Finally, based on the links found on the result index page, it downloads the Hidden Web pages from the site (Step (4)). Given this algorithm, we can see that the most critical decision that a crawler has to make is what query to issue next. If the crawler can issue successful queries that will return many matching pages, the crawler can finish its crawling early on using minimum resources. In contrast, if the crawler issues completely irrelevant queries that do not return any matching pages, it may waste all of its resources simply issuing queries without ever retrieving actual pages. Therefore, how the crawler selects the next query can greatly affect its effectiveness. In the next section, we formalize this query selection problem. Figure 3: Pages from the PubMed Web site: (a) the list of matching pages for the query "liver"; (b) the first matching page for "liver". Figure 4: Algorithm for crawling a Hidden Web site (Algorithm 2.1). Figure 5: A set-formalization of the optimal query selection problem. 2.3 Problem formalization Theoretically, the problem of query selection can be formalized as follows: We assume that the crawler downloads pages from a Web site that has a set of pages S (the rectangle in Figure 5). We represent each Web page in S as a point (dots in Figure 5). Every potential query qi that we may issue can be viewed as a subset of S, containing all the points (pages) that are returned when we issue qi to the site. Each subset is associated with a weight that represents the cost of issuing the query. Under this formalization, our goal is to find which subsets (queries) cover the maximum number of points (Web pages) with the minimum total weight (cost). In a practical situation, however, the crawler does not know which Web pages will be returned by which queries, so the subsets of S are not known in advance. Without knowing these subsets the crawler cannot decide which queries to pick to maximize the coverage. In this paper, we will present an approximation algorithm that can find a near-optimal solution at a reasonable computational cost.
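To make the generic loop of Algorithm 2.1 concrete, the following minimal Python sketch runs the decide/issue/retrieve/download cycle against a toy in-memory "site" (a dict from candidate query terms to the pages they return), so the search step is a lookup rather than an HTTP request. The greedy select_next_term rule, which prefers the unused term expected to yield the most not-yet-downloaded pages per unit cost, is only one plausible instantiation of the query selection idea formalized in Section 2.3; all names and the toy data are assumptions for illustration.

def select_next_term(site, downloaded, issued):
    # Greedy heuristic: estimated new pages per unit cost (1 query + 1 per new page).
    def efficiency(term):
        new_pages = len(set(site[term]) - downloaded)
        return new_pages / (1 + new_pages)
    unused = [t for t in site if t not in issued]
    if not unused:
        return None
    best = max(unused, key=efficiency)
    return best if efficiency(best) > 0 else None

def crawl_hidden_web_site(site, budget):
    downloaded, issued = set(), set()
    while budget > 0:
        term = select_next_term(site, downloaded, issued)   # Step (2): choose a term
        if term is None:
            break
        issued.add(term)
        result_links = site[term]                           # Step (3): result index page
        budget -= 1                                         # charge the query submission
        for url in result_links:                            # Step (4): fetch linked pages
            if url not in downloaded and budget > 0:
                downloaded.add(url)                         # stands in for an HTTP fetch
                budget -= 1
    return downloaded

toy_site = {"liver": ["d1", "d2", "d3"], "cancer": ["d2", "d4"], "kidney": ["d3"]}
print(sorted(crawl_hidden_web_site(toy_site, budget=6)))   # ['d1', 'd2', 'd3', 'd4']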
Our algorithm leverages the observation that although we do not know which pages will be returned by each query qi that we issue, we can predict how many pages will be returned. Based on this information our query selection algorithm can then select the "best" queries that cover the content of the Web site. We present our prediction method and our query selection algorithm in Section 3. 2.3.1 Performance Metric Before we present our ideas for the query selection problem, we briefly discuss some of our notation and the cost/performance metrics. Given a query qi, we use P(qi) to denote the fraction of pages that we will get back if we issue query qi to the site. For example, if a Web site has 10,000 pages in total, and if 3,000 pages are returned for the query qi = "medicine", then P(qi) = 0.3. We also use Cost(qi) to represent the cost of issuing the query qi. As we will see later, our proposed algorithms are independent of the exact cost function. In the most common case, the query cost consists of a number of factors, including the cost for submitting the query to the site, retrieving the result index page (Figure 3 (a)) and downloading the actual pages (Figure 3 (b)). We assume that submitting a query incurs a fixed cost of cq. The cost for downloading the result index page is proportional to the number of matching documents to the query, with proportionality constant cr, while the cost cd for downloading a matching document is also fixed. Then the overall cost of query qi is Cost(qi) = cq + cr P(qi) + cd P(qi). In certain cases, some of the documents from qi may have already been downloaded from previous queries. In this case, the crawler may skip downloading these documents and the cost of qi can be Cost(qi) = cq + cr P(qi) + cd Pnew(qi). Here, we use Pnew(qi) to represent the fraction of the new documents from qi that have not been retrieved from previous queries. Later in Section 3.1 we will study how we can estimate P(qi) and Pnew(qi) to estimate the cost of qi. Since our algorithms are independent of the exact cost function, we will assume a generic cost function Cost(qi) in this paper. Given the notation, we can formalize the goal of a Hidden-Web crawler as follows: find a set of queries q1, ..., qn that maximizes the fraction of the site's pages that are downloaded, while keeping the total cost Cost(q1) + ... + Cost(qn) within the crawler's download resource budget. 5. RELATED WORK In a recent study, Raghavan and Garcia-Molina [29] present an architectural model for a Hidden Web crawler. The main focus of this work is to learn Hidden-Web query interfaces, not to generate queries automatically. The potential queries are either provided manually by users or collected from the query interfaces. In contrast, our main focus is to generate queries automatically without any human intervention. The idea of automatically issuing queries to a database and examining the results has been previously used in different contexts. In [22] Lawrence and Giles issue random queries to a number of Web Search Engines in order to estimate the fraction of the Web that has been indexed by each of them. In a similar fashion, Bharat and Broder [8] issue random queries to a set of Search Engines in order to estimate the relative size and overlap of their indexes. In [6], Barbosa and Freire experimentally evaluate methods for building multi-keyword queries that can return a large fraction of a document collection. Our work differs from the previous studies in two ways. First, it provides a theoretical framework for analyzing the process of generating queries for a database and examining the results, which can help us better understand the effectiveness of the methods presented in the previous work.
Second, we apply our framework to the problem of Hidden Web crawling and demonstrate the efficiency of our algorithms. Cope et al. [15] propose a method to automatically detect whether a particular Web page contains a search form. This work is complementary to ours; once we detect search interfaces on the Web using the method in [15], we may use our proposed algorithms to download pages automatically from those Web sites. Reference [4] reports methods to estimate what fraction of a text database can be eventually acquired by issuing queries to the database. In [3] the authors study query-based techniques that can extract relational data from large text databases. Again, these works study orthogonal issues and are complementary to our work. In order to make documents in multiple textual databases searchable at a central place, a number of "harvesting" approaches have been proposed. These approaches essentially assume cooperative document databases that willingly share some of their metadata and/or documents to help a third-party search engine to index the documents. Our approach assumes uncooperative databases that do not share their data publicly and whose documents are accessible only through search interfaces. There exists a large body of work studying how to identify the most relevant database given a user query [20, 19, 14, 23, 18]. This body of work is often referred to as the meta-searching or database selection problem over the Hidden Web. Our vision is different from this body of work in that we intend to download and index the Hidden-Web pages at a central location in advance, so that users can access all the information at their convenience from one single location. 6. CONCLUSION AND FUTURE WORK Traditional crawlers normally follow links on the Web to discover and download pages. Therefore they cannot get to the Hidden Web pages which are only accessible through query interfaces. In this paper, we studied how we can build a Hidden Web crawler that can automatically query a Hidden Web site and download pages from it. Experimental evaluation on 4 real Hidden Web sites shows that our policies have a great potential. In particular, in certain cases the adaptive policy can download more than 90% of a Hidden Web site after issuing approximately 100 queries. Given these results, we believe that our work provides a potential mechanism to improve the search-engine coverage of the Web and the user experience of Web search. 6.1 Future Work We briefly discuss some future-research avenues. Multi-attribute Databases We are currently investigating how to extend our ideas to structured multi-attribute databases. While generating queries for multi-attribute databases is clearly a more difficult problem, we may exploit the following observation to address this problem: When a site supports multi-attribute queries, the site often returns pages that contain values for each of the query attributes. For example, when an online bookstore supports queries on title, author and isbn, the pages returned from a query typically contain the title, author and ISBN of corresponding books. The main challenge is to automatically segment the returned pages so that we can identify the sections of the pages that present the values corresponding to each attribute. Other Practical Issues In addition to the automatic query generation problem, there are many practical issues to be addressed to build a fully automatic Hidden-Web crawler.
For example, in this paper we assumed that the crawler already knows all query interfaces for Hidden-Web sites. But how can the crawler discover the query interfaces? The method proposed in [15] may be a good starting point. In addition, some Hidden-Web sites return their results in batches of, say, 20 pages, so the user has to click on a "next" button in order to see more results. In this case, a fully automatic Hidden-Web crawler should know that the first result index page contains only a partial result and "press" the next button automatically. Finally, some Hidden Web sites may contain an infinite number of Hidden Web pages which do not contribute much significant content (e.g. a calendar with links for every day). In this case the Hidden-Web crawler should be able to detect that the site does not have much more new content and stop downloading pages from the site. Page similarity detection algorithms may be useful for this purpose [9, 13].
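As a rough illustration of where the page-similarity check mentioned above could sit in the crawler, the sketch below flags a newly fetched page as a near-duplicate of recently downloaded ones using a crude word-overlap (Jaccard) measure; once most new fetches are flagged, the crawler could stop issuing further queries to the site. The references cited above describe more robust similarity techniques (e.g. shingling); the threshold and the toy data here are arbitrary assumptions.

def word_set(text):
    return set(text.lower().split())

def jaccard(a, b):
    # Fraction of shared words; 1.0 means identical word sets.
    return len(a & b) / len(a | b) if a | b else 1.0

def looks_like_near_duplicate(new_page, recent_pages, threshold=0.75):
    s_new = word_set(new_page)
    return any(jaccard(s_new, word_set(p)) >= threshold for p in recent_pages)

recent = ["events for march 1 2005 no events scheduled for this day",
          "events for march 2 2005 no events scheduled for this day"]
candidate = "events for march 3 2005 no events scheduled for this day"
print(looks_like_near_duplicate(candidate, recent))   # True: likely low-value content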
H-96
A Study of Factors Affecting the Utility of Implicit Relevance Feedback
Implicit relevance feedback (IRF) is the process by which a search system unobtrusively gathers evidence on searcher interests from their interaction with the system. IRF is a new method of gathering information on user interest and, if IRF is to be used in operational IR systems, it is important to establish when it performs well and when it performs poorly. In this paper we investigate how the use and effectiveness of IRF is affected by three factors: search task complexity, the search experience of the user and the stage in the search. Our findings suggest that all three of these factors contribute to the utility of IRF.
[ "implicit relev feedback", "relev feedback", "search task complex", "brows-base result interfac", "top-rank sentenc", "queri modif term", "explicit rf system", "interact queri expans featur", "high complex whilst", "moder complex whilst", "introductori questionnair", "vari complex", "medium complex", "search precis", "proport feedback" ]
[ "P", "P", "P", "U", "U", "U", "M", "M", "M", "M", "U", "M", "M", "M", "M" ]
A Study of Factors Affecting the Utility of Implicit Relevance Feedback Ryen W. White Human-Computer Interaction Laboratory Institute for Advanced Computer Studies University of Maryland College Park, MD 20742, USA ryen@umd.edu Ian Ruthven Department of Computer and Information Sciences University of Strathclyde Glasgow, Scotland. G1 1XH. ir@cis.strath.ac.uk Joemon M. Jose Department of Computing Science University of Glasgow Glasgow, Scotland. G12 8RZ. jj@dcs.gla.ac.uk ABSTRACT Implicit relevance feedback (IRF) is the process by which a search system unobtrusively gathers evidence on searcher interests from their interaction with the system. IRF is a new method of gathering information on user interest and, if IRF is to be used in operational IR systems, it is important to establish when it performs well and when it performs poorly. In this paper we investigate how the use and effectiveness of IRF is affected by three factors: search task complexity, the search experience of the user and the stage in the search. Our findings suggest that all three of these factors contribute to the utility of IRF. Categories and Subject Descriptors H.3.3 [Information Search and Retrieval] General Terms Experimentation, Human Factors. 1. INTRODUCTION Information Retrieval (IR) systems are designed to help searchers solve problems. In the traditional interaction metaphor employed by Web search systems such as Yahoo! and MSN Search, the system generally only supports the retrieval of potentially relevant documents from the collection. However, it is also possible to offer support to searchers for different search activities, such as selecting the terms to present to the system or choosing which search strategy to adopt [3, 8]; both of which can be problematic for searchers. As the quality of the query submitted to the system directly affects the quality of search results, the issue of how to improve search queries has been studied extensively in IR research [6]. Techniques such as Relevance Feedback (RF) [11] have been proposed as a way in which the IR system can support the iterative development of a search query by suggesting alternative terms for query modification. However, in practice RF techniques have been underutilised as they place an increased cognitive burden on searchers to directly indicate relevant results [10]. Implicit Relevance Feedback (IRF) [7] has been proposed as a way in which search queries can be improved by passively observing searchers as they interact. IRF has been implemented either through the use of surrogate measures based on interaction with documents (such as reading time, scrolling or document retention) [7] or using interaction with browse-based result interfaces [5]. IRF has been shown to display mixed effectiveness because the factors that are good indicators of user interest are often erratic and the inferences drawn from user interaction are not always valid [7]. In this paper we present a study into the use and effectiveness of IRF in an online search environment. The study aims to investigate the factors that affect IRF, in particular three research questions: (i) is the use of and perceived quality of terms generated by IRF affected by the search task? (ii) is the use of and perceived quality of terms generated by IRF affected by the level of search experience of system users? (iii) is IRF equally used and does it generate terms that are equally useful at all search stages? 
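As a toy illustration of the surrogate interaction measures mentioned above (reading time, scrolling, document retention), the Python sketch below folds such observations into a single implicit interest score per document; documents above a cut-off would be treated as implicitly relevant. The weights, the cut-off and the data are arbitrary assumptions for illustration, not values used by the systems studied in this paper.

def implicit_interest(seconds_read, scrolled, retained):
    # Combine surrogate measures into one score (weights are illustrative).
    score = min(seconds_read, 60) / 60.0        # reading time, capped at one minute
    score += 0.3 if scrolled else 0.0           # scrolling suggests engagement
    score += 0.5 if retained else 0.0           # saving/bookmarking is a strong signal
    return score

interactions = {
    "doc1": dict(seconds_read=45, scrolled=True, retained=False),
    "doc2": dict(seconds_read=5, scrolled=False, retained=False),
    "doc3": dict(seconds_read=90, scrolled=True, retained=True),
}
scores = {doc: implicit_interest(**obs) for doc, obs in interactions.items()}
assumed_relevant = [doc for doc, s in sorted(scores.items(), key=lambda kv: -kv[1]) if s >= 0.8]
print(assumed_relevant)   # ['doc3', 'doc1'] would be treated as implicitly relevant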
This study aims to establish when, and under what circumstances, IRF performs well in terms of its use and the query modification terms selected as a result of its use. The main experiment from which the data are taken was designed to test techniques for selecting query modification terms and techniques for displaying retrieval results [13]. In this paper we use data derived from that experiment to study factors affecting the utility of IRF. 2. STUDY In this section we describe the user study conducted to address our research questions. 2.1 Systems Our study used two systems both of which suggested new query terms to the user. One system suggested terms based on the user``s interaction (IRF), the other used Explicit RF (ERF) asking the user to explicitly indicate relevant material. Both systems used the same term suggestion algorithm, [15], and used a common interface. 2.1.1 Interface Overview In both systems, retrieved documents are represented at the interface by their full-text and a variety of smaller, query-relevant representations, created at retrieval time. We used the Web as the test collection in this study and Google1 as the underlying search engine. Document representations include the document title and a summary of the document; a list of top-ranking sentences (TRS) extracted from the top documents retrieved, scored in relation to the query, a sentence in the document summary, and each summary sentence in the context it occurs in the document (i.e., with the preceding and following sentence). Each summary sentence and top-ranking sentence is regarded as a representation of the document. The default display contains the list of top-ranking sentences and the list of the first ten document titles. Interacting with a representation guides searchers to a different representation from the same document, e.g., moving the mouse over a document title displays a summary of the document. This presentation of progressively more information from documents to aid relevance assessments has been shown to be effective in earlier work [14, 16]. In Appendix A we show the complete interface to the IRF system with the document representations marked and in Appendix B we show a fragment from the ERF interface with the checkboxes used by searchers to indicate relevant information. Both systems provide an interactive query expansion feature by suggesting new query terms to the user. The searcher has the responsibility for choosing which, if any, of these terms to add to the query. The searcher can also add or remove terms from the query at will. 2.1.2 Explicit RF system This version of the system implements explicit RF. Next to each document representation are checkboxes that allow searchers to mark individual representations as relevant; marking a representation is an indication that its contents are relevant. Only the representations marked relevant by the user are used for suggesting new query terms. This system was used as a baseline against which the IRF system could be compared. 2.1.3 Implicit RF system This system makes inferences about searcher interests based on the information with which they interact. As described in Section 2.1.1 interacting with a representation highlights a new representation from the same document. To the searcher this is a way they can find out more information from a potentially interesting source. 
To the implicit RF system each interaction with a representation is interpreted as an implicit indication of interest in that representation; interacting with a representation is assumed to be an indication that its contents are relevant. The query modification terms are selected using the same algorithm as in the Explicit RF system. Therefore the only difference between the systems is how relevance is communicated to the system. The results of the main experiment [13] indicated that these two systems were comparable in terms of effectiveness. 2.2 Tasks Search tasks were designed to encourage realistic search behaviour by our subjects. The tasks were phrased in the form of simulated work task situations [2], i.e., short search scenarios that were designed to reflect real-life search situations and allow subjects to develop personal assessments of relevance. We devised six search topics (i.e., applying to university, allergies in the workplace, art galleries in Rome, Third Generation mobile phones, Internet music piracy and petrol prices) based on pilot testing with a small representative group of subjects. These subjects were not involved in the main experiment. For each topic, three versions of each work task situation were devised, each version differing in their predicted level of task complexity. As described in [1] task complexity is a variable that affects subject perceptions of a task and their interactive behaviour, e.g., subjects perform more filtering activities with highly complex search tasks. By developing tasks of different complexity we can assess how the nature of the task affects the subjects'' interactive behaviour and hence the evidence supplied to IRF algorithms. Task complexity was varied according to the methodology described in [1], specifically by varying the number of potential information sources and types of information required, to complete a task. In our pilot tests (and in a posteriori analysis of the main experiment results) we verified that subjects reporting of individual task complexity matched our estimation of the complexity of the task. Subjects attempted three search tasks: one high complexity, one moderate complexity and one low complexity2 . They were asked to read the task, place themselves in the situation it described and find the information they felt was required to complete the task. Figure 1 shows the task statements for three levels of task complexity for one of the six search topics. HC Task: High Complexity Whilst having dinner with an American colleague, they comment on the high price of petrol in the UK compared to other countries, despite large volumes coming from the same source. Unaware of any major differences, you decide to find out how and why petrol prices vary worldwide. MC Task: Moderate Complexity Whilst out for dinner one night, one of your friends'' guests is complaining about the price of petrol and the factors that cause it. Throughout the night they seem to be complaining about everything they can, reducing the credibility of their earlier statements so you decide to research which factors actually are important in determining the price of petrol in the UK. LC Task: Low Complexity While out for dinner one night, your friend complains about the rising price of petrol. However, as you have not been driving for long, you are unaware of any major changes in price. You decide to find out how the price of petrol has changed in the UK in recent years. Figure 1. Varying task complexity (Petrol Prices topic). 
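Returning to the feedback mechanisms of Sections 2.1.2 and 2.1.3: the two systems differ only in how the set of "relevant" representations is obtained (ticked checkboxes for ERF, viewed representations for IRF), after which the same term-selection step runs. The sketch below uses a plain frequency count of non-stopword, non-query words as a stand-in for that step; the deployed systems use the term-weighting model of [15], and the stopword list and toy data here are invented for illustration.

from collections import Counter

STOPWORDS = {"the", "a", "of", "and", "in", "to", "for", "on", "is", "as", "up"}

def suggest_terms(relevant_representations, query_terms, k=6):
    # Count candidate words across all representations treated as relevant,
    # skipping stopwords and the searcher's own query terms.
    counts = Counter()
    for text in relevant_representations:
        for word in text.lower().split():
            if word not in STOPWORDS and word not in query_terms:
                counts[word] += 1
    return [word for word, _ in counts.most_common(k)]

# In IRF these would be the representations the searcher interacted with;
# in ERF, the ones explicitly ticked as relevant.
viewed = ["petrol prices rise as crude oil costs increase",
          "crude oil price shock drives up petrol prices",
          "uk fuel duty and oil prices explained"]
print(suggest_terms(viewed, query_terms={"petrol", "prices"}))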
2.3 Subjects 156 volunteers expressed an interest in participating in our study. 48 subjects were selected from this set with the aim of populating two groups, each with 24 subjects: inexperienced (infrequent/inexperienced searchers) and experienced (frequent/experienced searchers). Subjects were not chosen and classified into their groups until they had completed an entry questionnaire that asked them about their search experience and computer use. The average age of the subjects was 22.83 years (maximum 51, minimum 18, σ = 5.23 years) and 75% had a university diploma or a higher degree. 47.91% of subjects had, or were pursuing, a qualification in a discipline related to Computer Science. The subjects were a mixture of students, researchers, academic staff and others, with different levels of computer and search experience. The subjects were divided into the two groups depending on their search experience, how often they searched and the types of searches they performed. All were familiar with Web searching, and some with searching in other domains. 2.4 Methodology The experiment had a factorial design, with 2 levels of search experience, 3 experimental systems (although we only report on the findings from the ERF and IRF systems) and 3 levels of search task complexity. Subjects attempted one task of each complexity, switched systems after each task and used each system once. (The main experiment from which these results are drawn had a third comparator system which had a different interface; each subject carried out three tasks, one on each system, but we only report on the results from the ERF and IRF systems as these are the only pertinent ones for this paper.) The order in which systems were used and search tasks attempted was randomised according to a Latin square experimental design. Questionnaires used Likert scales, semantic differentials and open-ended questions to elicit subject opinions [4]. System logging was also used to record subject interaction. A tutorial carried out prior to the experiment allowed subjects to use a non-feedback version of the system to attempt a practice task before using the first experimental system. Experiments lasted between one-and-a-half and two hours, dependent on variables such as the time spent completing questionnaires. Subjects were offered a 5 minute break after the first hour. In each experiment: i. the subject was welcomed and asked to read an introduction to the experiments and sign consent forms. This set of instructions was written to ensure that each subject received precisely the same information. ii. the subject was asked to complete an introductory questionnaire. This contained questions about the subject's education, general search experience, computer experience and Web search experience. iii. the subject was given a tutorial on the interface, followed by a training topic on a version of the interface with no RF. iv. the subject was given three task sheets and asked to choose one task from the six topics on each sheet. No guidelines were given to subjects when choosing a task other than they could not choose a task from any topic more than once. Task complexity was rotated by the experimenter so each subject attempted one high complexity task, one moderate complexity task and one low complexity task. v. the subject was asked to perform the search and was given 15 minutes to search. The subject could terminate a search early if they were unable to find any more information they felt helped them complete the task. vi.
after completion of the search, the subject was asked to complete a post-search questionnaire. vii. the remaining tasks were attempted by the subject, following steps v. and vi. viii. the subject completed a post-experiment questionnaire and participated in a post-experiment interview. Subjects were told that their interaction may be used by the IRF system to help them as they searched. They were not told which behaviours would be used or how it would be used. We now describe the findings of our analysis. 3. FINDINGS In this section we use the data derived from the experiment to answer our research questions about the effect of search task complexity, search experience and stage in search on the use and effectiveness of IRF. We present our findings per research question. Due to the ordinal nature of much of the data non-parametric statistical testing is used in this analysis and the level of significance is set to p < .05, unless otherwise stated. We use the method proposed by [12] to determine the significance of differences in multiple comparisons and that of [9] to test for interaction effects between experimental variables, the occurrence of which we report where appropriate. All Likert scales and semantic differentials were on a 5-point scale where a rating closer to 1 signifies more agreement with the attitude statement. The category labels HC, MC and LC are used to denote the high, moderate and low complexity tasks respectively. The highest, or most positive, values in each table are shown in bold. Our analysis uses data from questionnaires, post-experiment interviews and background system logging on the ERF and IRF systems. 3.1 Search Task Searchers attempted three search tasks of varying complexity, each on a different experimental system. In this section we present an analysis on the use and usefulness of IRF for search tasks of different complexities. We present our findings in terms of the RF provided by subjects and the terms recommended by the systems. 3.1.1 Feedback We use questionnaires and system logs to gather data on subject perceptions and provision of RF for different search tasks. In the postsearch questionnaire subjects were asked about how RF was conveyed using differentials to elicit their opinion on: 1. the value of the feedback technique: How you conveyed relevance to the system (i.e. ticking boxes or viewing information) was: easy / difficult, effective/ ineffective, useful''/not useful. 2. the process of providing the feedback: How you conveyed relevance to the system made you feel: comfortable/uncomfortable, in control/not in control. The average obtained differential values are shown in Table 1 for IRF and each task category. The value corresponding to the differential All represents the mean of all differentials for a particular attitude statement. This gives some overall understanding of the subjects'' feelings which can be useful as the subjects may not answer individual differentials very precisely. The values for ERF are included for reference in this table and all other tables and figures in the Findings section. Since the aim of the paper is to investigate situations in which IRF might perform well, not a direct comparison between IRF and ERF, we make only limited comparisons between these two types of feedback. Table 1. Subject perceptions of RF method (lower = better). 
Differential   Explicit RF (HC / MC / LC)   Implicit RF (HC / MC / LC)
Easy           2.78 / 2.47 / 2.12           1.86 / 1.81 / 1.93
Effective      2.94 / 2.68 / 2.44           2.04 / 2.41 / 2.66
Useful         2.76 / 2.51 / 2.16           1.91 / 2.37 / 2.56
All (1)        2.83 / 2.55 / 2.24           1.94 / 2.20 / 2.38
Comfortable    2.27 / 2.28 / 2.35           2.11 / 2.15 / 2.16
In control     2.01 / 1.97 / 1.93           2.73 / 2.68 / 2.61
All (2)        2.14 / 2.13 / 2.14           2.42 / 2.42 / 2.39
Each cell in Table 1 summarises the subject responses for 16 task-system pairs (16 subjects who ran a high complexity (HC) task on the ERF system, 16 subjects who ran a medium complexity (MC) task on the ERF system, etc.). Kruskal-Wallis Tests were applied to each differential for each type of RF (since this analysis involved many differentials, we use a Bonferroni correction to control the experiment-wise error rate and set the alpha level (α) to .0167 and .0250 for statements 1. and 2. respectively, i.e., .05 divided by the number of differentials; this correction reduces the number of Type I errors, i.e., rejecting null hypotheses that are true). Subject responses suggested that IRF was most effective and useful for more complex search tasks (effective: χ2(2) = 11.62, p = .003; useful: χ2(2) = 12.43, p = .002) and that the differences in all pair-wise comparisons between tasks were significant (Dunn's post-hoc tests, multiple comparisons using rank sums; all Z ≥ 2.88, all p ≤ .002). Subject perceptions of IRF elicited using the other differentials did not appear to be affected by the complexity of the search task (all χ2(2) ≤ 2.85, all p ≥ .24, Kruskal-Wallis Tests). To determine whether a relationship exists between the effectiveness and usefulness of the IRF process and task complexity we applied Spearman's Rank Order Correlation Coefficient to participant responses. The results of this analysis suggest that the effectiveness of IRF and usefulness of IRF are both related to task complexity; as task complexity increases subject preference for IRF also increases (effective: all r ≥ 0.644, p ≤ .002; useful: all r ≥ 0.541, p ≤ .009). On the other hand, subjects felt ERF was more effective and useful for low complexity tasks (effective: χ2(2) = 7.01, p = .03; useful: χ2(2) = 6.59, p = .037, Kruskal-Wallis Tests; all pair-wise differences significant, all Z ≥ 2.34, all p ≤ .01, Dunn's post-hoc tests). Their verbal reporting of ERF, where perceived utility and effectiveness increased as task complexity decreased, supports this finding. In tasks of lower complexity the subjects felt they were better able to provide feedback on whether or not documents were relevant to the task. We analyse interaction logs generated by both interfaces to investigate the amount of RF subjects provided. To do this we use a measure of search precision that is the proportion of all possible document representations that a searcher assessed, divided by the total number they could assess. In ERF this is the proportion of all possible representations that were marked relevant by the searcher, i.e., those representations explicitly marked relevant. In IRF this is the proportion of representations viewed by a searcher over all possible representations that could have been viewed by the searcher. This proportion measures the searcher's level of interaction with a document; we take it to measure the user's interest in the document: the more document representations viewed, the more interested we assume a user is in the content of the document. There are a maximum of 14 representations per document: 4 top-ranking sentences, 1 title, 1 summary, 4 summary sentences and 4 summary sentences in document context. Since the interface shows document representations from the top-30 documents, there are 420 representations that a searcher can assess. Table 2 shows the proportion of representations provided as RF by subjects. Table 2. Feedback and documents viewed.
Measure               Explicit RF (HC / MC / LC)   Implicit RF (HC / MC / LC)
Proportion Feedback   2.14 / 2.39 / 2.65           21.50 / 19.36 / 15.32
Documents Viewed      10.63 / 10.43 / 10.81        10.84 / 12.19 / 14.81
For IRF there is a clear pattern: as complexity increases the subjects viewed fewer documents but viewed more representations for each document. This suggests a pattern where users are investigating retrieved documents in more depth. It also means that the amount of feedback varies based on the complexity of the search task. Since IRF is based on the interaction of the searcher, the more they interact, the more feedback they provide. This has no effect on the number of RF terms chosen, but may affect the quality of the terms selected. Correlation analysis revealed a strong negative correlation between the number of documents viewed and the amount of feedback searchers provide9; as the number of documents viewed increases the proportion of feedback falls (searchers view fewer representations of each document). This may be a natural consequence of there being less time to view documents in a time-constrained task environment but, as we will show, as complexity changes the nature of the information searchers interact with also appears to change. In the next section we investigate the effect of task complexity on the terms chosen as a result of IRF. 3.1.2 Terms The same RF algorithm was used to select query modification terms in all systems [16]. We use subject opinions of terms recommended by the systems as a measure of the effectiveness of IRF with respect to the terms generated for different search tasks. To test this, subjects were asked to complete two semantic differentials that completed the statement: The words chosen by the system were: relevant/irrelevant and useful/not useful. Table 3 presents average responses grouped by search task. Table 3. Subject perceptions of system terms (lower = better).
Differential   Explicit RF (HC / MC / LC)   Implicit RF (HC / MC / LC)
Relevant       2.50 / 2.46 / 2.41           1.94 / 2.35 / 2.68
Useful         2.61 / 2.61 / 2.59           2.06 / 2.54 / 2.70
Kruskal-Wallis Tests were applied within each type of RF. The results indicate that the relevance and usefulness of the terms chosen by IRF are affected by the complexity of the search task; the terms chosen are more relevant and useful when the search task is more complex.10 Relevant was explained here as being related to their task, whereas useful was for terms that were seen as being helpful in the search task. For ERF, the results indicate that the terms generated are perceived to be more relevant and useful for less complex search tasks, although differences between tasks were not significant11. This suggests that subject perceptions of the terms chosen for query modification are affected by task complexity. Comparison between ERF and IRF shows that subject perceptions also vary for different types of RF12. As well as using data on relevance and utility of the terms chosen, we used data on term acceptance to measure the perceived value of the terms suggested.
Explicit and Implicit RF systems made recommendations about which terms could be added to the original search query. In Table 4 we show the proportion of the top six terms 9 r = −0.696, p = .001 (Pearson``s Correlation Coefficient) 10 relevant: χ2 (2) = 13.82, p = .001; useful: χ2 (2) = 11.04, p = .004; α = .025 11 all χ2 (2) ≤ 2.28, all p ≥ .32 (Kruskal-Wallis Test) 12 all T(16) ≥ 102, all p ≤ .021, (Wilcoxon Signed-Rank Test) 13 that were shown to the searcher that were added to the search query, for each type of task and each type of RF. Table 4. Term Acceptance (percentage of top six terms). Explicit RF Implicit RFProportion of terms HC MC LC HC MC LC Accepted 65.31 67.32 68.65 67.45 67.24 67.59 The average number of terms accepted from IRF is approximately the same across all search tasks and generally the same as that of ERF14 . As Table 2 shows, subjects marked fewer documents relevant for highly complex tasks . Therefore, when task complexity increases the ERF system has fewer examples of relevant documents and the expansion terms generated may be poorer. This could explain the difference in the proportion of recommended terms accepted in ERF as task complexity increases. For IRF there is little difference in how many of the recommended terms were chosen by subjects for each level of task complexity15 . Subjects may have perceived IRF terms as more useful for high complexity tasks but this was not reflected in the proportion of IRF terms accepted. Differences may reside in the nature of the terms accepted; future work will investigate this issue. 3.1.3 Summary In this section we have presented an investigation on the effect of search task complexity on the utility of IRF. From the results there appears to be a strong relation between the complexity of the task and the subject interaction: subjects preferring IRF for highly complex tasks. Task complexity did not affect the proportion of terms accepted in either RF method, despite there being a difference in how relevant and useful subjects perceived the terms to be for different complexities; complexity may affect term selection in ways other than the proportion of terms accepted. 3.2 Search Experience Experienced searchers may interact differently and give different types of evidence to RF than inexperienced searchers. As such, levels of search experience may affect searchers'' use and perceptions of IRF. In our experiment subjects were divided into two groups based on their level of search experience, the frequency with which they searched and the types of searches they performed. In this section we use their perceptions and logging to address the next research question; the relationship between the usefulness and use of IRF and the search experience of experimental subjects. The data are the same as that analysed in the previous section, but here we focus on search experience rather than the search task. 3.2.1 Feedback We analyse the results from the attitude statements described at the beginning of Section 3.1.1. (i.e., How you conveyed relevance to the system was... and How you conveyed relevance to the system made you feel...). These differentials elicited opinion from experimental subjects about the RF method used. In Table 5 we show the mean average responses for inexperienced and experienced subject groups on ERF and IRF; 24 subjects per cell. 13 This was the smallest number of query modification terms that were offered in both systems. 
14 all T(16) ≥ 80, all p ≤ .31, (Wilcoxon Signed-Rank Test) 15 ERF: χ2 (2) = 3.67, p = .16; IRF: χ2 (2) = 2.55, p = .28 (KruskalWallis Tests) Table 5. Subject perceptions of RF method (lower = better). The results demonstrate a strong preference in inexperienced subjects for IRF; they found it more easy and effective than experienced subjects. 16 The differences for all other IRF differentials were not statistically significant. For all differentials, apart from in control, inexperienced subjects generally preferred IRF over ERF17 . Inexperienced subjects also felt that IRF was more difficult to control than experienced subjects18 . As these subjects have less search experience they may be less able to understand RF processes and may be more comfortable with the system gathering feedback implicitly from their interaction. Experienced subjects tended to like ERF more than inexperienced subjects and felt more comfortable with this feedback method19 . It appears from these results that experienced subjects found ERF more useful and were more at ease with the ERF process. In a similar way to Section 3.1.1 we analysed the proportion of feedback that searchers provided to the experimental systems. Our analysis suggested that search experience does not affect the amount of feedback subjects provide20 . 3.2.2 Terms We used questionnaire responses to gauge subject opinion on the relevance and usefulness of the terms from the perspective of experienced and inexperienced subjects. Table 6 shows the average differential responses obtained from both subject groups. Table 6. Subject perceptions of system terms (lower = better). Explicit RF Implicit RF Differential Inexp. Exp. Inexp. Exp. Relevant 2.58 2.44 2.33 2.21 Useful 2.88 2.63 2.33 2.23 The differences between subject groups were significant21 . Experienced subjects generally reacted to the query modification terms chosen by the system more positively than inexperienced 16 easy: U(24) = 391, p = .016; effective: U(24) = 399, p = .011; α = .0167 (Mann-Whitney Tests) 17 all T(24) ≥ 231, all p ≤ .001 (Wilcoxon Signed-Rank Test) 18 U(24) = 390, p = .018; α = .0250 (Mann-Whitney Test) 19 T(24) = 222, p = .020 (Wilcoxon Signed-Rank Test) 20 ERF: all U(24) ≤ 319, p ≥ .26, IRF: all U(24) ≤ 313, p ≥ .30 (MannWhitney Tests) 21 ERF: all U(24) ≥ 388, p ≤ .020, IRF: all U(24) ≥ 384, p ≤ .024 Explicit RF Implicit RF Differential Inexp. Exp. Inexp. Exp. Easy 2.46 2.46 1.84 1.98 Effective 2.75 2.63 2.32 2.43 Useful 2.50 2.46 2.28 2.27 All (1) 2.57 2.52 2.14 2.23 Comfortable 2.46 2.14 2.05 2.24 In control 1.96 1.98 2.73 2.64 All (2) 2.21 2.06 2.39 2.44 subjects. This finding was supported by the proportion of query modification terms these subjects accepted. In the same way as in Section 3.1.2, we analysed the number of query modification terms recommended by the system that were used by experimental subjects. Table 7 shows the average number of accepted terms per subject group. Table 7. Term Acceptance (percentage of top six terms). Explicit RF Implicit RFProportion of terms Inexp. Exp. Inexp. Exp. Accepted 63.76 70.44 64.43 71.35 Our analysis of the data show that differences between subject groups for each type of RF are significant; experienced subjects accepted more expansion terms regardless of type of RF. However, the differences between the same groups for different types of RF are not significant; subjects chose roughly the same percentage of expansion terms offered irrespective of the type of RF22 . 
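For reference, the acceptance percentages reported in Tables 4 and 7 (and Table 8 below) can be computed from the interaction logs roughly as in this sketch; the log format and the toy entries here are invented for illustration.

def acceptance_rate(recommended_top6, added_terms):
    # Percentage of the top six recommended terms that the subject actually added.
    accepted = sum(1 for t in recommended_top6 if t in added_terms)
    return 100.0 * accepted / len(recommended_top6)

def group_average(searches):
    rates = [acceptance_rate(s["top6"], s["added"]) for s in searches]
    return sum(rates) / len(rates)

experienced_irf = [
    {"top6": ["oil", "crude", "duty", "fuel", "price", "uk"],
     "added": {"oil", "crude", "fuel", "price"}},
    {"top6": ["allergy", "asthma", "dust", "workplace", "symptoms", "hse"],
     "added": {"allergy", "asthma", "dust", "workplace", "symptoms"}},
]
print(round(group_average(experienced_irf), 2))   # 75.0 for this toy log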
(Footnote 22: IRF: U(24) = 403, p = .009; ERF: U(24) = 396, p = .013.) 3.2.3 Summary In this section we have analysed data gathered from two subject groups - inexperienced searchers and experienced searchers - on how they perceive and use IRF. The results indicate that inexperienced subjects found IRF more easy and effective than experienced subjects, who in turn found the terms chosen as a result of IRF more relevant and useful. We also showed that inexperienced subjects generally accepted fewer recommended terms than experienced subjects, perhaps because they were less comfortable with RF or generally submitted shorter search queries. Search experience appears to affect how subjects use the terms recommended as a result of the RF process. 3.3 Search Stage From our observations of experimental subjects as they searched we conjectured that RF may be used differently at different times during a search. To test this, our third research question concerned the use and usefulness of IRF during the course of a search. In this section we investigate whether the amount of RF provided by searchers or the proportion of terms accepted are affected by how far through their search they are. For the purposes of this analysis a search begins when a subject poses the first query to the system and progresses until they terminate the search or reach the maximum allowed time for a search task of 15 minutes. We do not divide tasks based on this limit as subjects often terminated their search in less than 15 minutes. In this section we use data gathered from interaction logs and subject opinions to investigate the extent to which RF was used and the extent to which it appeared to benefit our experimental subjects at different stages in their search. 3.3.1 Feedback The interaction logs for all searches on the Explicit RF and Implicit RF systems were analysed and each search was divided up into nine equal-length time slices. This number of slices gave us an equal number per stage and was a sufficient level of granularity to identify trends in the results. Slices 1 - 3 correspond to the start of the search, 4 - 6 to the middle of the search and 7 - 9 to the end. In Figure 2 we plot the measure of precision described in Section 3.1.1 (i.e., the proportion of all possible representations that were provided as RF) at each of the nine slices, per search task, averaged across all subjects; this allows us to see how the provision of RF was distributed during a search. The total amount of feedback for a single RF method/task complexity pairing across all nine slices corresponds to the value recorded in the first row of Table 2 (e.g., the sum of the RF for IRF/HC across all nine slices of Figure 2 is 21.50%). To simplify the statistical analysis and comparison we use the grouping of start, middle and end. Figure 2. Distribution of RF provision per search task (x-axis: slice 1-9; y-axis: search 'precision', i.e. % of total representations provided as RF; series: Explicit RF/HC, MC, LC and Implicit RF/HC, MC, LC). Figure 2 appears to show the existence of a relationship between the stage in the search and the amount of relevance information provided to the different types of feedback algorithm. These are essentially differences in the way users are assessing documents. In the case of ERF subjects provide explicit relevance assessments throughout most of the search, but there is generally a steep increase in the end phase towards the completion of the search23.
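The slicing analysis described above can be reproduced from the interaction logs roughly as in the sketch below: each session is cut into nine equal-length time slices, feedback events are counted per slice, and counts are expressed as a percentage of the 420 assessable representations (Section 3.1.1). The event timestamps and session length here are invented for illustration.

TOTAL_REPRESENTATIONS = 420

def feedback_per_slice(event_times, session_length, n_slices=9):
    # Bucket each feedback event (seconds from the start of the search) into
    # one of n_slices equal-length slices, then convert counts to percentages.
    counts = [0] * n_slices
    for t in event_times:
        slice_index = min(int(n_slices * t / session_length), n_slices - 1)
        counts[slice_index] += 1
    return [100.0 * c / TOTAL_REPRESENTATIONS for c in counts]

# e.g. representation views at these seconds of a 900-second (15 minute) search
views = [40, 95, 180, 210, 250, 300, 310, 340, 400, 410, 430, 500, 520, 610, 700]
per_slice = feedback_per_slice(views, session_length=900)
start, middle, end = sum(per_slice[0:3]), sum(per_slice[3:6]), sum(per_slice[6:9])
print(start, middle, end)   # here most of the implicit feedback falls in the middle slices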
When using the IRF system, the data indicates that at the start of the search subjects are providing little relevance information24 , which corresponds to interacting with few document representations. At this stage the subjects are perhaps concentrating more on reading the retrieved results. Implicit relevance information is generally offered extensively in the middle of the search as they interact with results and it then tails off towards the end of the search. This would appear to correspond to stages of initial exploration, detailed analysis of document representations and storage and presentation of findings. Figure 2 also shows the proportion of feedback for tasks of different complexity. The results appear to show a difference25 in how IRF is used that relates to the complexity of the search task. More specifically, as complexity increases it appears as though subjects take longer to reach their most interactive point. This suggests that task complexity affects how IRF is distributed during the search and that they may be spending more time initially interpreting search results for more complex tasks. 23 IRF: all Z ≥ 1.87, p ≤ .031, ERF: start vs. end Z = 2.58, p = .005 (Dunn``s post-hoc tests). 24 Although increasing toward the end of the start stage. 25 Although not statistically significant; χ2 (2) = 3.54, p = .17 (Friedman Rank Sum Test) 3.3.2 Terms The terms recommended by the system are chosen based on the frequency of their occurrence in the relevant items. That is, nonstopword, non-query terms occurring frequently in search results regarded as relevant are likely to be recommended to the searcher for query modification. Since there is a direct association between the RF and the terms selected we use the number of terms accepted by searchers at different points in the search as an indication of how effective the RF has been up until the current point in the search. In this section we analysed the average number of terms from the top six terms recommended by Explicit RF and Implicit RF over the course of a search. The average proportion of the top six recommended terms that were accepted at each stage are shown in Table 8; each cell contains data from all 48 subjects. Table 8. Term Acceptance (proportion of top six terms). Explicit RF Implicit RFProportion of terms start middle end start middle end Accepted 66.87 66.98 67.34 61.85 68.54 73.22 The results show an apparent association between the stage in the search and the number of feedback terms subjects accept. Search stage affects term acceptance in IRF but not in ERF26 . The further into a search a searcher progresses, the more likely they are to accept terms recommended via IRF (significantly more than ERF27 ). A correlation analysis between the proportion of terms accepted at each search stage and cumulative RF (i.e., the sum of all precision at each slice in Figure 2 up to and including the end of the search stage) suggests that in both types of RF the quality of system terms improves as more RF is provided28 . 3.3.3 Summary The results from this section indicate that the location in a search affects the amount of feedback given by the user to the system, and hence the amount of information that the RF mechanism has to decide which terms to offer the user. Further, trends in the data suggest that the complexity of the task affects how subjects provide IRF and the proportion of system terms accepted. 4. 
DISCUSSION AND IMPLICATIONS In this section we discuss the implications of the findings presented in the previous section for each research question. 4.1 Search Task The results of our study showed that ERF was preferred for less complex tasks and IRF for more complex tasks. From observations and subject comments we perceived that when using ERF systems subjects generally forgot to provide the feedback but also employed different criteria during the ERF process (i.e., they were assessing relevance rather than expressing an interest). When the search was more complex subjects rarely found results they regarded as completely relevant. Therefore they struggled to find relevant 26 ERF: χ2 (2) = 2.22, p = .33; IRF: χ2 (2) = 7.73, p = .021 (Friedman Rank Sum Tests); IRF: all pair-wise comparisons significant at Z ≥ 1.77, all p ≤ .038 (Dunn``s post-hoc tests) 27 all T(48) ≥ 786, all p ≤ .002, (Wilcoxon Signed-Rank Test) 28 IRF: r = .712, p < .001, ERF: r = .695, p = .001 (Pearson Correlation Coefficient) information and were unable to communicate RF to the search system. In these situations subjects appeared to prefer IRF as they do not need to make a relevance decision to obtain the benefits of RF, i.e., term suggestions, whereas in ERF they do. The association between RF method and task complexity has implications for the design of user studies of RF systems and the RF systems themselves. It implies that in the design of user studies involving ERF or IRF systems care should be taken to include tasks of varying complexities, to avoid task bias. Also, in the design of search systems it implies that since different types of RF may be appropriate for different task complexities then a system that could automatically detect complexity could use both ERF and IRF simultaneously to benefit the searcher. For example, on the IRF system we noticed that as task complexity falls search behaviour shifts from results interface to retrieved documents. Monitoring such interaction across a number of studies may lead to a set of criteria that could help IR systems automatically detect task complexity and tailor support to suit. 4.2 Search Experience We analysed the affect of search experience on the utility of IRF. Our analysis revealed a general preference across all subjects for IRF over ERF. That is, the average ratings assigned to IRF were generally more positive than those assigned to ERF. However, IRF was generally liked by both subject groups (perhaps because it removed the burden of providing relevance information) and ERF was generally preferred by experienced subjects more than inexperienced subjects (perhaps because it allowed them to specify which results were used by the system when generating term recommendations). All subjects felt more in control with ERF than IRF, but for inexperienced subjects this did not appear to affect their overall preferences29 . These subjects may understand the RF process less, but may be more willing to sacrifice control over feedback in favour of IRF, a process that they perceive more positively. 4.3 Search Stage We also analysed the effects of search stage on the use and usefulness of IRF. Through analysis of this nature we can build a more complete picture of how searchers used RF and how this varies based on the RF method. The results suggest that IRF is used more in the middle of the search than at the beginning or end, whereas ERF is used more towards the end. 
The results also show the effects of task complexity on the IRF process and how rapidly subjects reach their most interactive point. Without an analysis of this type it would not have been possible to establish the existence of such patterns of behaviour. The findings suggest that searchers interact differently for IRF and ERF. Since ERF is not traditionally used until toward the end of the search, it may be possible to incorporate both IRF and ERF into the same IR system, with IRF being used to gather evidence until subjects decide to use ERF. The development of such a system represents part of our ongoing work in this area. 5. CONCLUSIONS In this paper we have presented an investigation of Implicit Relevance Feedback (IRF). We aimed to answer three research questions about factors that may affect the provision and usefulness of IRF. These factors were search task complexity, the subjects' search experience and the stage in the search. Our overall conclusion was that all factors appear to have some effect on the use and effectiveness of IRF, although the interaction effects between factors are not statistically significant. 29 This may also be true for experienced subjects, but the data we have is insufficient to draw this conclusion. Our conclusions for each research question are: (i) IRF is generally more useful for complex search tasks, where searchers want to focus on the search task and get new ideas for their search from the system, (ii) IRF is preferred to ERF overall and generally preferred by inexperienced subjects wanting to reduce the burden of providing RF, and (iii) within a single search session IRF is affected by temporal location in a search (i.e., it is used in the middle, not the beginning or end) and task complexity. Studies of this nature are important to establish the circumstances where a promising technique such as IRF is useful and those where it is not. It is only after such studies have been run and analysed in this way that we can develop an understanding of IRF that allows it to be successfully implemented in operational IR systems. 6. REFERENCES [1] Bell, D.J. and Ruthven, I. (2004). Searchers' assessments of task complexity for web searching. Proceedings of the 26th European Conference on Information Retrieval, 57-71. [2] Borlund, P. (2000). Experimental components for the evaluation of interactive information retrieval systems. Journal of Documentation, 56(1): 71-90. [3] Brajnik, G., Mizzaro, S., Tasso, C. and Venuti, F. (2002). Strategic help for user interfaces for information retrieval. Journal of the American Society for Information Science and Technology, 53(5): 343-358. [4] Busha, C.H. and Harter, S.P. (1980). Research methods in librarianship: Techniques and interpretation. Library and information science series. New York: Academic Press. [5] Campbell, I. and Van Rijsbergen, C.J. (1996). The ostensive model of developing information needs. Proceedings of the 3rd International Conference on Conceptions of Library and Information Science, 251-268. [6] Harman, D. (1992). Relevance feedback and other query modification techniques. In Information retrieval: Data structures and algorithms. New York: Prentice-Hall. [7] Kelly, D. and Teevan, J. (2003). Implicit feedback for inferring user preference. SIGIR Forum, 37(2): 18-28. [8] Koenemann, J. and Belkin, N.J. (1996). A case for interaction: A study of interactive information retrieval behavior and effectiveness. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, 205-212.
[9] Meddis, R. (1984). Statistics using ranks: A unified approach. Oxford: Basil Blackwell, 303-308. [10] Morita, M. and Shinoda, Y. (1994). Information filtering based on user behavior analysis and best match text retrieval. Proceedings of the 17th Annual ACM SIGIR Conference on Research and Development in Information Retrieval, 272-281. [11] Salton, G. and Buckley, C. (1990). Improving retrieval performance by relevance feedback. Journal of the American Society for Information Science, 41(4): 288-297. [12] Siegel, S. and Castellan, N.J. (1988). Nonparametric statistics for the behavioural sciences. 2nd ed. Singapore: McGraw-Hill. [13] White, R.W. (2004). Implicit feedback for interactive information retrieval. Unpublished Doctoral Dissertation, University of Glasgow, Glasgow, United Kingdom. [14] White, R.W., Jose, J.M. and Ruthven, I. (2005). An implicit feedback approach for interactive information retrieval. Information Processing and Management, in press. [15] White, R.W., Jose, J.M., Ruthven, I. and Van Rijsbergen, C.J. (2004). A simulated study of implicit feedback models. Proceedings of the 26th European Conference on Information Retrieval, 311-326. [16] Zellweger, P.T., Regli, S.H., Mackinlay, J.D. and Chang, B.-W. (2000). The impact of fluid documents on reading and browsing: An observational study. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, 249-256. Appendix A. Interface to Implicit RF system: 1. Top-Ranking Sentence, 2. Title, 3. Summary, 4. Summary Sentence, 5. Sentence in Context. Appendix B. Checkboxes to mark relevant document titles in the Explicit RF system.
A Study of Factors Affecting the Utility of Implicit Relevance Feedback ABSTRACT Implicit relevance feedback (IRF) is the process by which a search system unobtrusively gathers evidence on searcher interests from their interaction with the system. IRF is a new method of gathering information on user interest and, if IRF is to be used in operational IR systems, it is important to establish when it performs well and when it performs poorly. In this paper we investigate how the use and effectiveness of IRF is affected by three factors: search task complexity, the search experience of the user and the stage in the search. Our findings suggest that all three of these factors contribute to the utility of IRF. 1. INTRODUCTION Information Retrieval (IR) systems are designed to help searchers solve problems. In the traditional interaction metaphor employed by Web search systems such as Yahoo! and MSN Search, the system generally only supports the retrieval of potentially relevant documents from the collection. However, it is also possible to offer support to searchers for different search activities, such as selecting the terms to present to the system or choosing which search strategy to adopt [3, 8]; both of which can be problematic for searchers. As the quality of the query submitted to the system directly affects the quality of search results, the issue of how to improve search queries has been studied extensively in IR research [6]. Techniques such as Relevance Feedback (RF) [11] have been proposed as a way in which the IR system can support the iterative development of a search query by suggesting alternative terms for query modification. However, in practice RF techniques have been underutilised as they place an increased cognitive burden on searchers to directly indicate relevant results [10]. Implicit Relevance Feedback (IRF) [7] has been proposed as a way in which search queries can be improved by passively observing searchers as they interact. IRF has been implemented either through the use of surrogate measures based on interaction with documents (such as reading time, scrolling or document retention) [7] or using interaction with browse-based result interfaces [5]. IRF has been shown to display mixed effectiveness because the factors that are good indicators of user interest are often erratic and the inferences drawn from user interaction are not always valid [7]. In this paper we present a study into the use and effectiveness of IRF in an online search environment. The study aims to investigate the factors that affect IRF, in particular three research questions: (i) is the use of and perceived quality of terms generated by IRF affected by the search task? (ii) is the use of and perceived quality of terms generated by IRF affected by the level of search experience of system users? (iii) is IRF equally used and does it generate terms that are equally useful at all search stages? This study aims to establish when, and under what circumstances, IRF performs well in terms of its use and the query modification terms selected as a result of its use. The main experiment from which the data are taken was designed to test techniques for selecting query modification terms and techniques for displaying retrieval results [13]. In this paper we use data derived from that experiment to study factors affecting the utility of IRF. 2. STUDY In this section we describe the user study conducted to address our research questions. 
2.1 Systems Our study used two systems both of which suggested new query terms to the user. One system suggested terms based on the user's interaction (IRF), the other used Explicit RF (ERF) asking the user to explicitly indicate relevant material. Both systems used the same term suggestion algorithm, [15], and used a common interface. 2.1.1 Interface Overview In both systems, retrieved documents are represented at the interface by their full-text and a variety of smaller, query-relevant representations, created at retrieval time. We used the Web as the test collection in this study and Google1 as the underlying search engine. Document representations include the document title and a summary of the document; a list of top-ranking sentences (TRS) extracted from the top documents retrieved, scored in relation to the query, a sentence in the document summary, and each summary sentence in the context it occurs in the document (i.e., with the preceding and following sentence). Each summary sentence and top-ranking sentence is regarded as a representation of the document. The default display contains the list of top-ranking sentences and the list of the first ten document titles. Interacting with a representation guides searchers to a different representation from the same document, e.g., moving the mouse over a document title displays a summary of the document. This presentation of progressively more information from documents to aid relevance assessments has been shown to be effective in earlier work [14, 16]. In Appendix A we show the complete interface to the IRF system with the document representations marked and in Appendix B we show a fragment from the ERF interface with the checkboxes used by searchers to indicate relevant information. Both systems provide an interactive query expansion feature by suggesting new query terms to the user. The searcher has the responsibility for choosing which, if any, of these terms to add to the query. The searcher can also add or remove terms from the query at will. 2.1.2 Explicit RF system This version of the system implements explicit RF. Next to each document representation are checkboxes that allow searchers to mark individual representations as relevant; marking a representation is an indication that its contents are relevant. Only the representations marked relevant by the user are used for suggesting new query terms. This system was used as a baseline against which the IRF system could be compared. 2.1.3 Implicit RF system This system makes inferences about searcher interests based on the information with which they interact. As described in Section 2.1.1 interacting with a representation highlights a new representation from the same document. To the searcher this is a way they can find out more information from a potentially interesting source. To the implicit RF system each interaction with a representation is interpreted as an implicit indication of interest in that representation; interacting with a representation is assumed to be an indication that its contents are relevant. The query modification terms are selected using the same algorithm as in the Explicit RF system. Therefore the only difference between the systems is how relevance is communicated to the system. The results of the main experiment [13] indicated that these two systems were comparable in terms of effectiveness. 2.2 Tasks Search tasks were designed to encourage realistic search behaviour by our subjects. 
The tasks were phrased in the form of simulated work task situations [2], i.e., short search scenarios that were designed to reflect real-life search situations and allow subjects to develop personal assessments of relevance. We devised six search topics (i.e., applying to university, allergies in the workplace, art galleries in Rome, "Third Generation" mobile phones, Internet music piracy and petrol prices) based on pilot testing with a small representative group of subjects. These subjects were not involved in the main experiment. For each topic, three versions of each work task situation were devised, each version differing in their predicted level of task complexity. As described in [1] task complexity is a variable that affects subject perceptions of a task and their interactive behaviour, e.g., subjects perform more filtering activities with highly complex search tasks. By developing tasks of different complexity we can assess how the nature of the task affects the subjects' interactive behaviour and hence the evidence supplied to IRF algorithms. Task complexity was varied according to the methodology described in [1], specifically by varying the number of potential information sources and types of information required, to complete a task. In our pilot tests (and in a posteriori analysis of the main experiment results) we verified that subjects reporting of individual task complexity matched our estimation of the complexity of the task. Subjects attempted three search tasks: one high complexity, one moderate complexity and one low complexity2. They were asked to read the task, place themselves in the situation it described and find the information they felt was required to complete the task. Figure 1 shows the task statements for three levels of task complexity for one of the six search topics. HC Task: High Complexity Whilst having dinner with an American colleague, they comment on the high price of petrol in the UK compared to other countries, despite large volumes coming from the same source. Unaware of any major differences, you decide to find out how and why petrol prices vary worldwide. MC Task: Moderate Complexity Whilst out for dinner one night, one of your friends' guests is complaining about the price of petrol and the factors that cause it. Throughout the night they seem to be complaining about everything they can, reducing the credibility of their earlier statements so you decide to research which factors actually are important in determining the price of petrol in the UK. LC Task: Low Complexity While out for dinner one night, your friend complains about the rising price of petrol. However, as you have not been driving for long, you are unaware of any major changes in price. You decide to find out how the price of petrol has changed in the UK in recent years. Figure 1. Varying task complexity ("Petrol Prices" topic). 2.3 Subjects 156 volunteers expressed an interest in participating in our study. 48 subjects were selected from this set with the aim of populating two groups, each with 24 subjects: inexperienced (infrequent / inexperienced searchers) and experienced (frequent / experienced searchers). Subjects were not chosen and classified into their groups until they had completed an entry questionnaire that asked them about their search experience and computer use. The average age of the subjects was 22.83 years (maximum 51, minimum 18, σ = 5.23 years) and 75% had a university diploma or a higher degree. 
47.91% of subjects had, or were pursuing, a qualification in a discipline related to Computer Science. The subjects were a mixture of students, researchers, academic staff and others, with different levels of computer and search experience. The subjects were divided into the two groups depending on their search experience, how often they searched and the types of searches they performed. All were familiar with Web searching, and some with searching in other domains. 2.4 Methodology The experiment had a factorial design, with 2 levels of search experience, 3 experimental systems (although we only report on the findings from the ERF and IRF systems) and 3 levels of search task complexity. Subjects attempted one task of each complexity, switched systems after each task and used each system once. 2 The main experiment from which these results are drawn had a third comparator system which had a different interface. Each subject carried out three tasks, one on each system. We only report on the results from the ERF and IRF systems as these are the only pertinent ones for this paper. The order in which systems were used and search tasks attempted was randomised according to a Latin square experimental design. Questionnaires used Likert scales, semantic differentials and open-ended questions to elicit subject opinions [4]. System logging was also used to record subject interaction. A tutorial carried out prior to the experiment allowed subjects to use a non-feedback version of the system to attempt a practice task before using the first experimental system. Experiments lasted between one-and-a-half and two hours, dependent on variables such as the time spent completing questionnaires. Subjects were offered a 5 minute break after the first hour. In each experiment: i. the subject was welcomed and asked to read an introduction to the experiments and sign consent forms. This set of instructions was written to ensure that each subject received precisely the same information. ii. the subject was asked to complete an introductory questionnaire. This contained questions about the subject's education, general search experience, computer experience and Web search experience. iii. the subject was given a tutorial on the interface, followed by a training topic on a version of the interface with no RF. iv. the subject was given three task sheets and asked to choose one task from the six topics on each sheet. No guidelines were given to subjects when choosing a task other than they could not choose a task from any topic more than once. Task complexity was rotated by the experimenter so each subject attempted one high complexity task, one moderate complexity task and one low complexity task. v. the subject was asked to perform the search and was given 15 minutes to search. The subject could terminate a search early if they were unable to find any more information they felt helped them complete the task. vi. after completion of the search, the subject was asked to complete a post-search questionnaire. vii. the remaining tasks were attempted by the subject, following steps v. and vi. viii. the subject completed a post-experiment questionnaire and participated in a post-experiment interview. Subjects were told that their interaction may be used by the IRF system to help them as they searched. They were not told which behaviours would be used or how it would be used. We now describe the findings of our analysis.
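Before turning to the findings, the Latin square rotation just described can be illustrated with a small sketch. This is our own illustrative code, not the assignment procedure used in the study; the label "Baseline" for the third comparator system is an assumption, and the real study also randomised the order in which complexities were attempted.

```python
# Illustrative sketch of Latin-square counterbalancing (our labels, not the study's):
# each subject uses every system and attempts every complexity exactly once, and
# across a block of three subjects each system/complexity pairing occurs once.
SYSTEMS = ["ERF", "IRF", "Baseline"]       # "Baseline" = assumed name for the third system
COMPLEXITIES = ["HC", "MC", "LC"]

def latin_square_order(subject_index):
    """Return the (system, complexity) pairs for one subject, in task order."""
    offset = subject_index % 3
    return [(SYSTEMS[(i + offset) % 3], COMPLEXITIES[i]) for i in range(3)]

if __name__ == "__main__":
    for s in range(3):
        print(f"subject {s}: {latin_square_order(s)}")
```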
3. FINDINGS In this section we use the data derived from the experiment to answer our research questions about the effect of search task complexity, search experience and stage in search on the use and effectiveness of IRF. We present our findings per research question. Due to the ordinal nature of much of the data non-parametric statistical testing is used in this analysis and the level of significance is set to p < .05, unless otherwise stated. We use the method proposed by [12] to determine the significance of differences in multiple comparisons and that of [9] to test for interaction effects between experimental variables, the occurrence of which we report where appropriate. All Likert scales and semantic differentials were on a 5-point scale where a rating closer to 1 signifies more agreement with the attitude statement. The category labels HC, MC and LC are used to denote the high, moderate and low complexity tasks respectively. The highest, or most positive, values in each table are shown in bold. Our analysis uses data from questionnaires, post-experiment interviews and background system logging on the ERF and IRF systems. 3.1 Search Task Searchers attempted three search tasks of varying complexity, each on a different experimental system. In this section we present an analysis of the use and usefulness of IRF for search tasks of different complexities. We present our findings in terms of the RF provided by subjects and the terms recommended by the systems. 3.1.1 Feedback We use questionnaires and system logs to gather data on subject perceptions and provision of RF for different search tasks. In the post-search questionnaire subjects were asked about how RF was conveyed using differentials to elicit their opinion on: 1. the value of the feedback technique: How you conveyed relevance to the system (i.e. ticking boxes or viewing information) was: "easy" / "difficult", "effective" / "ineffective", "useful" / "not useful". 2. the process of providing the feedback: How you conveyed relevance to the system made you feel: "comfortable" / "uncomfortable", "in control" / "not in control". The average obtained differential values are shown in Table 1 for IRF and each task category. The value corresponding to the differential "All" represents the mean of all differentials for a particular attitude statement. This gives some overall understanding of the subjects' feelings which can be useful as the subjects may not answer individual differentials very precisely. The values for ERF are included for reference in this table and all other tables and figures in the "Findings" section. Since the aim of the paper is to investigate situations in which IRF might perform well, not a direct comparison between IRF and ERF, we make only limited comparisons between these two types of feedback. Table 1. Subject perceptions of RF method (lower = better). Each cell in Table 1 summarises the subject responses for 16 task-system pairs (16 subjects who ran a high complexity (HC) task on the ERF system, 16 subjects who ran a medium complexity (MC) task on the ERF system, etc). Kruskal-Wallis Tests were applied to each differential for each type of RF3. 3 Since this analysis involved many differentials, we use a Bonferroni correction to control the experiment-wise error rate and set the alpha level (α) to .0167 and .0250 for statements 1. and 2. respectively, i.e., .05 divided by the number of differentials. This correction reduces the number of Type I errors, i.e., rejecting null hypotheses that are true.
Subject responses suggested that IRF was most "effective" and "useful" for more complex search tasks4 and that the differences in all pair-wise comparisons between tasks were significant5. Subject perceptions of IRF elicited using the other differentials did not appear to be affected by the complexity of the search task6. To determine whether a relationship exists between the effectiveness and usefulness of the IRF process and task complexity we applied Spearman's Rank Order Correlation Coefficient to participant responses. The results of this analysis suggest that the effectiveness of IRF and usefulness of IRF are both related to task complexity; as task complexity increases subject preference for IRF also increases7. On the other hand, subjects felt ERF was more "effective" and "useful" for low complexity tasks8. Their verbal reporting of ERF, where perceived utility and effectiveness increased as task complexity decreased, supports this finding. In tasks of lower complexity the subjects felt they were better able to provide feedback on whether or not documents were relevant to the task. We analyse interaction logs generated by both interfaces to investigate the amount of RF subjects provided. To do this we use a measure of search "precision", that is, the proportion of all possible document representations that a searcher assessed, divided by the total number they could assess. In ERF this is the proportion of all possible representations that were marked relevant by the searcher, i.e., those representations explicitly marked relevant. In IRF this is the proportion of representations viewed by a searcher over all possible representations that could have been viewed by the searcher. This proportion measures the searcher's level of interaction with a document; we take it to measure the user's interest in the document: the more document representations viewed, the more interested we assume a user is in the content of the document. There are a maximum of 14 representations per document: 4 top-ranking sentences, 1 title, 1 summary, 4 summary sentences and 4 summary sentences in document context. Since the interface shows document representations from the top-30 documents, there are 420 representations that a searcher can assess. Table 2 shows the proportion of representations provided as RF by subjects. Table 2. Feedback and documents viewed. For IRF there is a clear pattern: as complexity increases the subjects viewed fewer documents but viewed more representations for each document. This suggests a pattern where users are investigating retrieved documents in more depth. It also means that the amount of feedback varies based on the complexity of the search task. Since IRF is based on the interaction of the searcher, the more they interact, the more feedback they provide. This has no effect on the number of RF terms chosen, but may affect the quality of the terms selected. 4 effective: χ2 (2) = 11.62, p = .003; useful: χ2 (2) = 12.43, p = .002 5 Dunn's post-hoc tests (multiple comparison using rank sums); all Z ≥ 2.88, all p ≤ .002 6 all χ2 (2) ≤ 2.85, all p ≥ .24 (Kruskal-Wallis Tests) 7 effective: all r ≥ 0.644, p ≤ .002; useful: all r ≥ 0.541, p ≤ .009 8 effective: χ2 (2) = 7.01, p = .03; useful: χ2 (2) = 6.59, p = .037 (Kruskal-Wallis Test); all pair-wise differences significant, all Z ≥ 2.34, all p ≤ .01 (Dunn's post-hoc tests)
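As a concrete sketch of the "precision" measure defined above, the proportion can be computed directly from counts of marked (ERF) or viewed (IRF) representations. This is our own illustrative code and naming, not the logging software used in the study.

```python
# Sketch of the search "precision" measure described above (our naming, not the
# authors' code): the proportion of assessable representations actually assessed.
REPRESENTATIONS_PER_DOC = 14   # 4 top-ranking sentences + title + summary + 4 + 4
TOP_DOCS = 30
TOTAL_REPRESENTATIONS = REPRESENTATIONS_PER_DOC * TOP_DOCS   # = 420

def erf_precision(num_marked):
    """ERF: number of representations explicitly ticked as relevant, as a percentage."""
    return 100.0 * num_marked / TOTAL_REPRESENTATIONS

def irf_precision(num_viewed):
    """IRF: number of representations the searcher viewed/interacted with, as a percentage."""
    return 100.0 * num_viewed / TOTAL_REPRESENTATIONS

# Example: a searcher who viewed 90 of the 420 possible representations has
# irf_precision(90) ≈ 21.4, i.e. about 21.4% of the feedback they could have given.
```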
Correlation analysis revealed a strong negative correlation between the number of documents viewed and the amount of feedback searchers provide9; as the number of documents viewed increases the proportion of feedback falls (searchers view fewer representations of each document). This may be a natural consequence of there being less time to view documents in a time-constrained task environment but, as we will show, as complexity changes the nature of the information searchers interact with also appears to change. In the next section we investigate the effect of task complexity on the terms chosen as a result of IRF. 3.1.2 Terms The same RF algorithm was used to select query modification terms in all systems [16]. We use subject opinions of terms recommended by the systems as a measure of the effectiveness of IRF with respect to the terms generated for different search tasks. To test this, subjects were asked to complete two semantic differentials that completed the statement: The words chosen by the system were: "relevant" / "irrelevant" and "useful" / "not useful". Table 3 presents average responses grouped by search task. Table 3. Subject perceptions of system terms (lower = better). Kruskal-Wallis Tests were applied within each type of RF. The results indicate that the relevance and usefulness of the terms chosen by IRF is affected by the complexity of the search task; the terms chosen are more "relevant" and "useful" when the search task is more complex10. "Relevant" here was explained as being related to their task, whereas "useful" was for terms that were seen as being helpful in the search task. For ERF, the results indicate that the terms generated are perceived to be more "relevant" and "useful" for less complex search tasks, although differences between tasks were not significant11. This suggests that subject perceptions of the terms chosen for query modification are affected by task complexity. Comparison between ERF and IRF shows that subject perceptions also vary for different types of RF12. As well as using data on relevance and utility of the terms chosen, we used data on term acceptance to measure the perceived value of the terms suggested. Explicit and Implicit RF systems made recommendations about which terms could be added to the original search query. In Table 4 we show the proportion of the top six terms13 that were shown to the searcher that were added to the search query, for each type of task and each type of RF. Table 4. Term Acceptance (percentage of top six terms).
From the results there appears to be a strong relation between the complexity of the task and the subject interaction: subjects preferring IRF for highly complex tasks. Task complexity did not affect the proportion of terms accepted in either RF method, despite there being a difference in how "relevant" and "useful" subjects perceived the terms to be for different complexities; complexity may affect term selection in ways other than the proportion of terms accepted. 3.2 Search Experience Experienced searchers may interact differently and give different types of evidence to RF than inexperienced searchers. As such, levels of search experience may affect searchers' use and perceptions of IRF. In our experiment subjects were divided into two groups based on their level of search experience, the frequency with which they searched and the types of searches they performed. In this section we use their perceptions and logging to address the next research question; the relationship between the usefulness and use of IRF and the search experience of experimental subjects. The data are the same as that analysed in the previous section, but here we focus on search experience rather than the search task. 3.2.1 Feedback We analyse the results from the attitude statements described at the beginning of Section 3.1.1. (i.e., How you conveyed relevance to the system was...and How you conveyed relevance to the system made you feel ...). These differentials elicited opinion from experimental subjects about the RF method used. In Table 5 we show the mean average responses for inexperienced and experienced subject groups on ERF and IRF; 24 subjects per cell. 14 all T (16) ≥ 80, all p ≤ .31, (Wilcoxon Signed-Rank Test) 15 ERF: χ2 (2) = 3.67, p = .16; IRF: χ2 (2) = 2.55, p = .28 (KruskalWallis Tests) Table 5. Subject perceptions of RF method (lower = better). The results demonstrate a strong preference in inexperienced subjects for IRF; they found it more "easy" and "effective" than experienced subjects. 16 The differences for all other IRF differentials were not statistically significant. For all differentials, apart from "in control", inexperienced subjects generally preferred IRF over ERF17. Inexperienced subjects also felt that IRF was more difficult to control than experienced subjects18. As these subjects have less search experience they may be less able to understand RF processes and may be more comfortable with the system gathering feedback implicitly from their interaction. Experienced subjects tended to like ERF more than inexperienced subjects and felt more "comfortable" with this feedback method19. It appears from these results that experienced subjects found ERF more useful and were more at ease with the ERF process. In a similar way to Section 3.1.1 we analysed the proportion of feedback that searchers provided to the experimental systems. Our analysis suggested that search experience does not affect the amount of feedback subjects provide20. 3.2.2 Terms We used questionnaire responses to gauge subject opinion on the relevance and usefulness of the terms from the perspective of experienced and inexperienced subjects. Table 6 shows the average differential responses obtained from both subject groups. Table 6. Subject perceptions of system terms (lower = better). This finding was supported by the proportion of query modification terms these subjects accepted. 
In the same way as in Section 3.1.2, we analysed the number of query modification terms recommended by the system that were used by experimental subjects. Table 7 shows the average number of accepted terms per subject group. Table 7. Term Acceptance (percentage of top six terms). Our analysis of the data shows that differences between subject groups for each type of RF are significant; experienced subjects accepted more expansion terms regardless of type of RF. However, the differences between the same groups for different types of RF are not significant; subjects chose roughly the same percentage of expansion terms offered irrespective of the type of RF22. 22 IRF: U (24) = 403, p = .009, ERF: U (24) = 396, p = .013 3.2.3 Summary In this section we have analysed data gathered from two subject groups--inexperienced searchers and experienced searchers--on how they perceive and use IRF. The results indicate that inexperienced subjects found IRF more "easy" and "effective" than experienced subjects, who in turn found the terms chosen as a result of IRF more "relevant" and "useful". We also showed that inexperienced subjects generally accepted fewer recommended terms than experienced subjects, perhaps because they were less comfortable with RF or generally submitted shorter search queries. Search experience appears to affect how subjects use the terms recommended as a result of the RF process. 3.3 Search Stage From our observations of experimental subjects as they searched we conjectured that RF may be used differently at different times during a search. To test this, our third research question concerned the use and usefulness of IRF during the course of a search. In this section we investigate whether the amount of RF provided by searchers or the proportion of terms accepted are affected by how far through their search they are. For the purposes of this analysis a search begins when a subject poses the first query to the system and progresses until they terminate the search or reach the maximum allowed time for a search task of 15 minutes. We do not divide tasks based on this limit as subjects often terminated their search in less than 15 minutes. In this section we use data gathered from interaction logs and subject opinions to investigate the extent to which RF was used and the extent to which it appeared to benefit our experimental subjects at different stages in their search. 3.3.1 Feedback The interaction logs for all searches on the Explicit RF and Implicit RF systems were analysed and each search is divided up into nine equal length time slices. This number of slices gave us an equal number per stage and was a sufficient level of granularity to identify trends in the results. Slices 1--3 correspond to the "start" of the search, 4--6 to the "middle" of the search and 7--9 to the "end". In Figure 2 we plot the measure of "precision" described in Section 3.1.1 (i.e., the proportion of all possible representations that were provided as RF) at each of the nine slices, per search task, averaged across all subjects; this allows us to see how the provision of RF was distributed during a search. The total amount of feedback for a single RF method/task complexity pairing across all nine slices corresponds to the value recorded in the first row of Table 2 (e.g., the sum of the RF for IRF/HC across all nine slices of Figure 2 is 21.50%). To simplify the statistical analysis and comparison we use the grouping of "start", "middle" and "end". Figure 2. Distribution of RF provision per search task.
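The time slicing used for Figure 2 can be sketched as follows. This is an illustrative fragment under our own naming, not the study's analysis code, assuming timestamps measured in seconds.

```python
# Sketch of the time slicing used for Figure 2 (our illustration, not the study's
# analysis code): each search is cut into nine equal-length slices; slices 1-3
# form the "start", 4-6 the "middle" and 7-9 the "end" of the search.
def slice_of(event_time, search_start, search_end, n_slices=9):
    """Map an interaction timestamp (seconds) to a slice number 1..n_slices."""
    position = (event_time - search_start) / (search_end - search_start)  # 0.0-1.0
    return min(int(position * n_slices) + 1, n_slices)

def stage_of(slice_number):
    """Group slices into the three search stages used in the analysis."""
    if slice_number <= 3:
        return "start"
    if slice_number <= 6:
        return "middle"
    return "end"

# Example: an event 7 minutes into a 12-minute search falls in slice 6, i.e. the
# "middle" stage: stage_of(slice_of(420, 0, 720)) == "middle".
```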
Figure 2 appears to show the existence of a relationship between the stage in the search and the amount of relevance information provided to the different types of feedback algorithm. These are essentially differences in the way users are assessing documents. In the case of ERF subjects provide explicit relevance assessments throughout most of the search, but there is generally a steep increase in the "end" phase towards the completion of the search23. When using the IRF system, the data indicates that at the start of the search subjects are providing little relevance information24, which corresponds to interacting with few document representations. At this stage the subjects are perhaps concentrating more on reading the retrieved results. Implicit relevance information is generally offered extensively in the middle of the search as they interact with results and it then tails off towards the end of the search. This would appear to correspond to stages of initial exploration, detailed analysis of document representations and storage and presentation of findings. Figure 2 also shows the proportion of feedback for tasks of different complexity. The results appear to show a difference25 in how IRF is used that relates to the complexity of the search task. More specifically, as complexity increases it appears as though subjects take longer to reach their most interactive point. This suggests that task complexity affects how IRF is distributed during the search and that they may be spending more time initially interpreting search results for more complex tasks. 23 IRF: all Z ≥ 1.87, p ≤ .031, ERF: "start" vs. "end" Z = 2.58, p = .005 (Dunn's post-hoc tests). 24 Although increasing toward the end of the "start" stage. 25 Although not statistically significant; χ2 (2) = 3.54, p = .17 (Friedman Rank Sum Test) 3.3.2 Terms The terms recommended by the system are chosen based on the frequency of their occurrence in the relevant items. That is, non-stopword, non-query terms occurring frequently in search results regarded as relevant are likely to be recommended to the searcher for query modification. Since there is a direct association between the RF and the terms selected we use the number of terms accepted by searchers at different points in the search as an indication of how effective the RF has been up until the current point in the search. In this section we analysed the average number of terms from the top six terms recommended by Explicit RF and Implicit RF over the course of a search. The average proportion of the top six recommended terms that were accepted at each stage are shown in Table 8; each cell contains data from all 48 subjects. Table 8. Term Acceptance (proportion of top six terms). The results show an apparent association between the stage in the search and the number of feedback terms subjects accept. Search stage affects term acceptance in IRF but not in ERF26. The further into a search a searcher progresses, the more likely they are to accept terms recommended via IRF (significantly more than ERF27). A correlation analysis between the proportion of terms accepted at each search stage and cumulative RF (i.e., the sum of all "precision" at each slice in Figure 2 up to and including the end of the search stage) suggests that in both types of RF the quality of system terms improves as more RF is provided28.
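The frequency-based term selection just described can be sketched as below. This is a simplified illustration of the general idea (counting non-stopword, non-query terms in the items treated as relevant and offering the most frequent six), not the exact weighting scheme of [15]; the stopword list and function names are ours.

```python
# Simplified sketch of frequency-based term recommendation (the general idea
# described above, not the exact algorithm of [15]): count non-stopword,
# non-query terms across the representations treated as relevant and offer
# the six most frequent ones for query modification.
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "of", "in", "and", "to", "is", "for"}  # illustrative list

def recommend_terms(relevant_texts, query_terms, k=6):
    query = {t.lower() for t in query_terms}
    counts = Counter()
    for text in relevant_texts:
        for term in re.findall(r"[a-z]+", text.lower()):
            if term not in STOPWORDS and term not in query:
                counts[term] += 1
    return [term for term, _ in counts.most_common(k)]

# Example:
# recommend_terms(["crude oil price rises", "oil tax and duty rises"], ["petrol", "price"])
# -> ['oil', 'rises', 'crude', 'tax', 'duty']
```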
3.3.3 Summary The results from this section indicate that the location in a search affects the amount of feedback given by the user to the system, and hence the amount of information that the RF mechanism has to decide which terms to offer the user. Further, trends in the data suggest that the complexity of the task affects how subjects provide IRF and the proportion of system terms accepted. 5. CONCLUSIONS In this paper we have presented an investigation of Implicit Relevance Feedback (IRF). We aimed to answer three research questions about factors that may affect the provision and usefulness of IRF. These factors were search task complexity, the subjects' search experience and the stage in the search. Our overall conclusion was that all factors appear to have some effect on the use and effectiveness of IRF, although the interaction effects between factors are not statistically significant. 29 This may also be true for experienced subjects, but the data we have is insufficient to draw this conclusion. Our conclusions for each research question are: (i) IRF is generally more useful for complex search tasks, where searchers want to focus on the search task and get new ideas for their search from the system, (ii) IRF is preferred to ERF overall and generally preferred by inexperienced subjects wanting to reduce the burden of providing RF, and (iii) within a single search session IRF is affected by temporal location in a search (i.e., it is used in the middle, not the beginning or end) and task complexity. Studies of this nature are important to establish the circumstances where a promising technique such as IRF is useful and those where it is not. It is only after such studies have been run and analysed in this way that we can develop an understanding of IRF that allows it to be successfully implemented in operational IR systems.
A Study of Factors Affecting the Utility of Implicit Relevance Feedback ABSTRACT Implicit relevance feedback (IRF) is the process by which a search system unobtrusively gathers evidence on searcher interests from their interaction with the system. IRF is a new method of gathering information on user interest and, if IRF is to be used in operational IR systems, it is important to establish when it performs well and when it performs poorly. In this paper we investigate how the use and effectiveness of IRF is affected by three factors: search task complexity, the search experience of the user and the stage in the search. Our findings suggest that all three of these factors contribute to the utility of IRF. 1. INTRODUCTION Information Retrieval (IR) systems are designed to help searchers solve problems. In the traditional interaction metaphor employed by Web search systems such as Yahoo! and MSN Search, the system generally only supports the retrieval of potentially relevant documents from the collection. However, it is also possible to offer support to searchers for different search activities, such as selecting the terms to present to the system or choosing which search strategy to adopt [3, 8]; both of which can be problematic for searchers. As the quality of the query submitted to the system directly affects the quality of search results, the issue of how to improve search queries has been studied extensively in IR research [6]. Techniques such as Relevance Feedback (RF) [11] have been proposed as a way in which the IR system can support the iterative development of a search query by suggesting alternative terms for query modification. However, in practice RF techniques have been underutilised as they place an increased cognitive burden on searchers to directly indicate relevant results [10]. Implicit Relevance Feedback (IRF) [7] has been proposed as a way in which search queries can be improved by passively observing searchers as they interact. IRF has been implemented either through the use of surrogate measures based on interaction with documents (such as reading time, scrolling or document retention) [7] or using interaction with browse-based result interfaces [5]. IRF has been shown to display mixed effectiveness because the factors that are good indicators of user interest are often erratic and the inferences drawn from user interaction are not always valid [7]. In this paper we present a study into the use and effectiveness of IRF in an online search environment. The study aims to investigate the factors that affect IRF, in particular three research questions: (i) is the use of and perceived quality of terms generated by IRF affected by the search task? (ii) is the use of and perceived quality of terms generated by IRF affected by the level of search experience of system users? (iii) is IRF equally used and does it generate terms that are equally useful at all search stages? This study aims to establish when, and under what circumstances, IRF performs well in terms of its use and the query modification terms selected as a result of its use. The main experiment from which the data are taken was designed to test techniques for selecting query modification terms and techniques for displaying retrieval results [13]. In this paper we use data derived from that experiment to study factors affecting the utility of IRF. 2. 
STUDY 2.1 Systems 2.1.1 Interface Overview 2.1.2 Explicit RF system 2.1.3 Implicit RF system 2.2 Tasks HC Task: High Complexity MC Task: Moderate Complexity LC Task: Low Complexity 2.3 Subjects 2.4 Methodology 3. FINDINGS 3.1 Search Task 3.1.1 Feedback 3.1.2 Terms 3.1.3 Summary 3.2 Search Experience 3.2.1 Feedback 3.2.2 Terms 3.2.3 Summary 3.3 Search Stage 3.3.1 Feedback 3.3.2 Terms 3.3.3 Summary 5. CONCLUSIONS In this paper we have presented an investigation of Implicit Relevance Feedback (IRF). We aimed to answer three research questions about factors that may affect the provision and usefulness of IRF. These factors were search task complexity, the subjects' search experience and the stage in the search. Our overall conclusion was that all factors appear to have some effect on the use and effectiveness of IRF, although the interaction effects between factors are not statistically significant. 29 This may also be true for experienced subjects, but the data we have is insufficient to draw this conclusion. Our conclusions for each research question are: (i) IRF is generally more useful for complex search tasks, where searchers want to focus on the search task and get new ideas for their search from the system, (ii) IRF is preferred to ERF overall and generally preferred by inexperienced subjects wanting to reduce the burden of providing RF, and (iii) within a single search session IRF is affected by temporal location in a search (i.e., it is used in the middle, not the beginning or end) and task complexity. Studies of this nature are important to establish the circumstances where a promising technique such as IRF is useful and those where it is not. It is only after such studies have been run and analysed in this way that we can develop an understanding of IRF that allows it to be successfully implemented in operational IR systems.
A Study of Factors Affecting the Utility of Implicit Relevance Feedback ABSTRACT Implicit relevance feedback (IRF) is the process by which a search system unobtrusively gathers evidence on searcher interests from their interaction with the system. IRF is a new method of gathering information on user interest and, if IRF is to be used in operational IR systems, it is important to establish when it performs well and when it performs poorly. In this paper we investigate how the use and effectiveness of IRF is affected by three factors: search task complexity, the search experience of the user and the stage in the search. Our findings suggest that all three of these factors contribute to the utility of IRF. 1. INTRODUCTION Information Retrieval (IR) systems are designed to help searchers solve problems. In the traditional interaction metaphor employed by Web search systems such as Yahoo! and MSN Search, the system generally only supports the retrieval of potentially relevant documents from the collection. However, it is also possible to offer support to searchers for different search activities, such as selecting the terms to present to the system or choosing which search strategy to adopt [3, 8]; both of which can be problematic for searchers. As the quality of the query submitted to the system directly affects the quality of search results, the issue of how to improve search queries has been studied extensively in IR research [6]. Techniques such as Relevance Feedback (RF) [11] have been proposed as a way in which the IR system can support the iterative development of a search query by suggesting alternative terms for query modification. Implicit Relevance Feedback (IRF) [7] has been proposed as a way in which search queries can be improved by passively observing searchers as they interact. In this paper we present a study into the use and effectiveness of IRF in an online search environment. The study aims to investigate the factors that affect IRF, in particular three research questions: (i) is the use of and perceived quality of terms generated by IRF affected by the search task? (ii) is the use of and perceived quality of terms generated by IRF affected by the level of search experience of system users? (iii) is IRF equally used and does it generate terms that are equally useful at all search stages? This study aims to establish when, and under what circumstances, IRF performs well in terms of its use and the query modification terms selected as a result of its use. The main experiment from which the data are taken was designed to test techniques for selecting query modification terms and techniques for displaying retrieval results [13]. In this paper we use data derived from that experiment to study factors affecting the utility of IRF. 5. CONCLUSIONS In this paper we have presented an investigation of Implicit Relevance Feedback (IRF). We aimed to answer three research questions about factors that may affect the provision and usefulness of IRF. These factors were search task complexity, the subjects' search experience and the stage in the search. Our overall conclusion was that all factors appear to have some effect on the use and effectiveness of IRF, although the interaction effects between factors are not statistically significant. 29 This may also be true for experienced subjects, but the data we have is insufficient to draw this conclusion. 
Studies of this nature are important to establish the circumstances where a promising technique such as IRF is useful and those where it is not. It is only after such studies have been run and analysed in this way that we can develop an understanding of IRF that allows it to be successfully implemented in operational IR systems.
C-48
Multi-dimensional Range Queries in Sensor Networks
In many sensor networks, data or events are named by attributes. Many of these attributes have scalar values, so one natural way to query events of interest is to use a multidimensional range query. An example is: List all events whose temperature lies between 50◦ and 60◦, and whose light levels lie between 10 and 15. Such queries are useful for correlating events occurring within the network. In this paper, we describe the design of a distributed index that scalably supports multi-dimensional range queries. Our distributed index for multi-dimensional data (or DIM) uses a novel geographic embedding of a classical index data structure, and is built upon the GPSR geographic routing algorithm. Our analysis reveals that, under reasonable assumptions about query distributions, DIMs scale quite well with network size (both insertion and query costs scale as O(√N)). In detailed simulations, we show that in practice, the insertion and query costs of other alternatives are sometimes an order of magnitude more than the costs of DIMs, even for moderately sized networks. Finally, experiments on a small-scale testbed validate the feasibility of DIMs.
[ "multi-dimension rang queri", "dim", "sensor network", "multidimension rang queri", "distribut index", "geograph rout", "queri cost", "central index", "distribut data structur", "datacentr storag system", "event insert", "index techniqu", "queri flood", "effici correl", "local-preserv geograph hash", "asymptot behavior", "normal event distribut" ]
[ "P", "P", "P", "P", "P", "P", "P", "M", "R", "U", "R", "M", "M", "M", "M", "U", "M" ]
Multi-dimensional Range Queries in Sensor Networks∗ Xin Li † Young Jin Kim † Ramesh Govindan † Wei Hong ‡ ABSTRACT In many sensor networks, data or events are named by attributes. Many of these attributes have scalar values, so one natural way to query events of interest is to use a multidimensional range query. An example is: List all events whose temperature lies between 50◦ and 60◦ , and whose light levels lie between 10 and 15. Such queries are useful for correlating events occurring within the network. In this paper, we describe the design of a distributed index that scalably supports multi-dimensional range queries. Our distributed index for multi-dimensional data (or DIM) uses a novel geographic embedding of a classical index data structure, and is built upon the GPSR geographic routing algorithm. Our analysis reveals that, under reasonable assumptions about query distributions, DIMs scale quite well with network size (both insertion and query costs scale as O( √ N)). In detailed simulations, we show that in practice, the insertion and query costs of other alternatives are sometimes an order of magnitude more than the costs of DIMs, even for moderately sized network. Finally, experiments on a small scale testbed validate the feasibility of DIMs. Categories and Subject Descriptors C.2.4 [Computer Communication Networks]: Distributed Systems; C.3 [Special-Purpose and Application-Based Systems]: Embedded Systems General Terms Embedded Systems, Sensor Networks, Storage 1. INTRODUCTION In wireless sensor networks, data or events will be named by attributes [15] or represented as virtual relations in a distributed database [18, 3]. Many of these attributes will have scalar values: e.g., temperature and light levels, soil moisture conditions, etc.. In these systems, we argue, one natural way to query for events of interest will be to use multi-dimensional range queries on these attributes. For example, scientists analyzing the growth of marine microorganisms might be interested in events that occurred within certain temperature and light conditions: List all events that have temperatures between 50◦ F and 60◦ F, and light levels between 10 and 20. Such range queries can be used in two distinct ways. They can help users efficiently drill-down their search for events of interest. The query described above illustrates this, where the scientist is presumably interested in discovering, and perhaps mapping the combined effect of temperature and light on the growth of marine micro-organisms. More importantly, they can be used by application software running within a sensor network for correlating events and triggering actions. For example, if in a habitat monitoring application, a bird alighting on its nest is indicated by a certain range of thermopile sensor readings, and a certain range of microphone readings, a multi-dimensional range query on those attributes enables higher confidence detection of the arrival of a flock of birds, and can trigger a system of cameras. In traditional database systems, such range queries are supported using pre-computed indices. Indices trade-off some initial pre-computation cost to achieve a significantly more efficient querying capability. For sensor networks, we assert that a centralized index for multi-dimensional range queries may not be feasible for energy-efficiency reasons (as well as the fact that the access bandwidth to this central index will be limited, particularly for queries emanating from within the network). 
Rather, we believe, there will be situations when it is more appropriate to build an in-network distributed data structure for efficiently answering multi-dimensional range queries. In this paper, we present just such a data structure, which we call a DIM1. 1 Distributed Index for Multi-dimensional data. DIMs are inspired by classical database indices, and are essentially embeddings of such indices within the sensor network. DIMs leverage two key ideas: in-network data-centric storage, and a novel locality-preserving geographic hash (Section 3). DIMs trace their lineage to data-centric storage systems [23]. The underlying mechanism in these systems allows nodes to consistently hash an event to some location within the network, which allows efficient retrieval of events. Building upon this, DIMs use a technique whereby events whose attribute values are close are likely to be stored at the same or nearby nodes. DIMs then use an underlying geographic routing algorithm (GPSR [16]) to route events and queries to their corresponding nodes in an entirely distributed fashion. We discuss the design of a DIM, presenting algorithms for event insertion and querying, for maintaining a DIM in the event of node failure, and for making DIMs robust to data or packet loss (Section 3). We then extensively evaluate DIMs using analysis (Section 4), simulation (Section 5), and actual implementation (Section 6). Our analysis reveals that, under reasonable assumptions about query distributions, DIMs scale quite well with network size (both insertion and query costs scale as O(√N)). In detailed simulations, we show that in practice, the event insertion and querying costs of other alternatives are sometimes an order of magnitude more than the costs of DIMs, even for moderately sized networks. Experiments on a small-scale testbed validate the feasibility of DIMs (Section 6). Much work remains, including efficient support for skewed data distributions, existential queries, and node heterogeneity. We believe that DIMs will be an essential, but perhaps not necessarily the only, distributed data structure supporting efficient queries in sensor networks. DIMs will be part of a suite of such systems that enable feature extraction [7], simple range querying [10], exact-match queries [23], or continuous queries [15, 18]. All such systems will likely be integrated into a sensor network database system such as TinyDB [17]. Application designers could then choose the appropriate method of information access. For instance, a fire tracking application would use DIM to detect the hotspots, and would then use mechanisms that enable continuous queries [15, 18] to track the spatio-temporal progress of the hotspots. Finally, we note that DIMs are applicable not just to sensor networks, but to other deeply distributed systems (embedded networks for home and factory automation) as well. 2. RELATED WORK The basic problem that this paper addresses - multidimensional range queries - is typically solved in database systems using indexing techniques. The database community has focused mostly on centralized indices, but distributed indexing has received some attention in the literature. Indexing techniques essentially trade-off some data insertion cost to enable efficient querying. Indexing has, for long, been a classical research problem in the database community [5, 2]. Our work draws its inspiration from the class of multi-key constant branching index structures, exemplified by k-d trees [2], where k represents the dimensionality of the data space.
Our approach essentially represents a geographic embedding of such structures in a sensor field. There is one important difference. The classical indexing structures are data-dependent (as are some indexing schemes that use locality preserving hashes, and developed in the theory literature [14, 8, 13]). The index structure is decided not only by the data, but also by the order in which data is inserted. Our current design is not data dependent. Finally, tangentially related to our work is the class of spatial indexing systems [21, 6, 11]. While there has been some work on distributed indexing, the problem has not been extensively explored. There exist distributed indices of a restricted kind-those that allow exact match or partial prefix match queries. Examples of such systems, of course, are the Internet Domain Name System, and the class of distributed hash table (DHT) systems exemplified by Freenet[4], Chord[24], and CAN[19]. Our work is superficially similar to CAN in that both construct a zone-based overlay atop of the underlying physical network. The underlying details make the two systems very different: CAN``s overlay is purely logical while our overlay is consistent with the underlying physical topology. More recent work in the Internet context has addressed support for range queries in DHT systems [1, 12], but it is unclear if these directly translate to the sensor network context. Several research efforts have expressed the vision of a database interface to sensor networks [9, 3, 18], and there are examples of systems that contribute to this vision [18, 3, 17]. Our work is similar in spirit to this body of literature. In fact, DIMs could become an important component of a sensor network database system such as TinyDB [17]. Our work departs from prior work in this area in two significant respects. Unlike these approaches, in our work the data generated at a node are hashed (in general) to different locations. This hashing is the key to scaling multi-dimensional range searches. In all the other systems described above, queries are flooded throughout the network, and can dominate the total cost of the system. Our work avoids query flooding by an appropriate choice of hashing. Madden et al. [17] also describe a distributed index, called Semantic Routing Trees (SRT). This index is used to direct queries to nodes that have detected relevant data. Our work differs from SRT in three key aspects. First, SRT is built on single attributes while DIM supports mulitple attributes. Second, SRT constructs a routing tree based on historical sensor readings, and therefore works well only for slowlychanging sensor values. Finally, in SRT queries are issued from a fixed node while in DIM queries can be issued from any node. A similar differentiation applies with respect to work on data-centric routing in sensor networks [15, 25], where data generated at a node is assumed to be stored at the node, and queries are either flooded throughout the network [15], or each source sets up a network-wide overlay announcing its presence so that mobile sinks can rendezvous with sources at the nearest node on the overlay [25]. These approaches work well for relatively long-lived queries. Finally, our work is most close related to data-centric storage [23] systems, which include geographic hash-tables (GHTs) [20], DIMENSIONS [7], and DIFS [10]. In a GHT, data is hashed by name to a location within the network, enabling highly efficient rendezvous. 
GHTs are built upon the GPSR [16] protocol and leverage some interesting properties of that protocol, such as the ability to route to a node nearest to a given location. We also leverage properties in GPSR (as we describe later), but we use a locality-preserving hash to store data, enabling efficient multi-dimensional range queries. DIMENSIONS and DIFS can be thought of as using the same set of primitives as GHT (storage using consistent hashing), but for different ends: DIMENSIONS allows drill64 down search for features within a sensor network, while DIFS allows range queries on a single key in addition to other operations. 3. THE DESIGN OF DIMS Most sensor networks are deployed to collect data from the environment. In these networks, nodes (either individually or collaboratively) will generate events. An event can generally be described as a tuple of attribute values, A1, A2, · · · , Ak , where each attribute Ai represents a sensor reading, or some value corresponding to a detection (e.g., a confidence level). The focus of this paper is the design of systems to efficiently answer multi-dimensional range queries of the form: x1 − y1, x2 − y2, · · · , xk − yk . Such a query returns all events whose attribute values fall into the corresponding ranges. Notice that point queries, i.e., queries that ask for events with specified values for each attribute, are a special case of range queries. As we have discussed in Section 1, range queries can enable efficient correlation and triggering within the network. It is possible to implement range queries by flooding a query within the network. However, as we show in later sections, this alternative can be inefficient, particularly as the system scales, and if nodes within the network issue such queries relatively frequently. The other alternative, sending all events to an external storage node results in the access link being a bottleneck, especially if nodes within the network issue queries. Shenker et al. [23] also make similar arguments with respect to data-centric storage schemes in general; DIMs are an instance of such schemes. The system we present in this paper, the DIM, relies upon two foundations: a locality-preserving geographic hash, and an underlying geographic routing scheme. The key to resolving range queries efficiently is data locality: i.e., events with comparable attribute values are stored nearby. The basic insight underlying DIM is that data locality can be obtained by a locality-preserving geographic hash function. Our geographic hash function finds a localitypreserving mapping from the multi-dimensional space (described by the set of attributes) to a 2-d geographic space; this mapping is inspired by k-d trees [2] and is described later. Moreover, each node in the network self-organizes to claim part of the attribute space for itself (we say that each node owns a zone), so events falling into that space are routed to and stored at that node. Having established the mapping, and the zone structure, DIMs use a geographic routing algorithm previously developed in the literature to route events to their corresponding nodes, or to resolve queries. This algorithm, GPSR [16], essentially enables the delivery of a packet to a node at a specified location. The routing mechanism is simple: when a node receives a packet destined to a node at location X, it forwards the packet to the neighbor closest to X. In GPSR, this is called greedy-mode forwarding. 
When no such neighbor exists (as when there exists a void in the network), the node starts the packet on a perimeter mode traversal, using the well known right-hand rule to circumnavigate voids. GPSR includes efficient techniques for perimeter traversal that are based on graph planarization algorithms amenable to distributed implementation. For all of this to work, DIMs make two assumptions that are consistent with the literature [23]. First, all nodes know the approximate geographic boundaries of the network. These boundaries may either be configured in nodes at the time of deployment, or may be discovered using a simple protocol. Second, each node knows its geographic location. Node locations can be automatically determined by a localization system or by other means. Although the basic idea of DIMs may seem straightforward, it is challenging to design a completely distributed data structure that must be robust to packet losses and node failures, yet must support efficient query distribution and deal with communication voids and obstacles. We now describe the complete design of DIMs. 3.1 Zones The key idea behind DIMs, as we have discussed, is a geographic locality-preserving hash that maps a multi-attribute event to a geographic zone. Intuitively, a zone is a subdivision of the geographic extent of a sensor field. A zone is defined by the following constructive procedure. Consider a rectangle R on the x-y plane. Intuitively, R is the bounding rectangle that contains all sensors withing the network. We call a sub-rectangle Z of R a zone, if Z is obtained by dividing R k times, k ≥ 0, using a procedure that satisfies the following property: After the i-th division, 0 ≤ i ≤ k, R is partitioned into 2i equal sized rectangles. If i is an odd (even) number, the i-th division is parallel to the y-axis (x-axis). That is, the bounding rectangle R is first sub-divided into two zones at level 0 by a vertical line that splits R into two equal pieces, each of these sub-zones can be split into two zones at level 1 by a horizontal line, and so on. We call the non-negative integer k the level of zone Z, i.e. level(Z) = k. A zone can be identified either by a zone code code(Z) or by an address addr(Z). The code code(Z) is a 0-1 bit string of length level(Z), and is defined as follows. If Z lies in the left half of R, the first (from the left) bit of code(Z) is 0, else 1. If Z lies in the bottom half of R, the second bit of code(Z) is 0, else 1. The remaining bits of code(Z) are then recursively defined on each of the four quadrants of R. This definition of the zone code matches the definition of zones given above, encoding divisions of the sensor field geography by bit strings. Thus, in Figure 2, the zone in the top-right corner of the rectangle R has a zone code of 1111. Note that the zone codes collectively define a zone tree such that individual zones are at the leaves of this tree. The address of a zone Z, addr(Z), is defined to be the centroid of the rectangle defined by Z. The two representations of a zone (its code and its address) can each be computed from the other, assuming the level of the zone is known. Two zones are called sibling zones if their zone codes are the same except for the last bit. For example, if code(Z1) = 01101 and code(Z2) = 01100, then Z1 and Z2 are sibling zones. The sibling subtree of a zone is the subtree rooted at the left or right sibling of the zone in the zone tree. 
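To make these constructions concrete, the following minimal Python sketch (ours, not part of the DIM implementation) computes a zone's rectangle and address from its code, the code of the zone containing a given point, and the sibling test. It assumes the bounding rectangle R is the unit square and represents codes as 0/1 strings; all names are illustrative.

```python
# A sketch of the zone-code machinery of Section 3.1 (our illustration; the
# bounding rectangle R is assumed to be the unit square).

def zone_rect(code, R=(0.0, 1.0, 0.0, 1.0)):
    """Rectangle (x_lo, x_hi, y_lo, y_hi) of the zone with this 0/1 code."""
    x_lo, x_hi, y_lo, y_hi = R
    for i, bit in enumerate(code):
        if i % 2 == 0:                      # divisions 1, 3, 5, ... are parallel to the y-axis (left/right)
            mid = (x_lo + x_hi) / 2.0
            x_lo, x_hi = (x_lo, mid) if bit == '0' else (mid, x_hi)
        else:                               # divisions 2, 4, 6, ... are parallel to the x-axis (bottom/top)
            mid = (y_lo + y_hi) / 2.0
            y_lo, y_hi = (y_lo, mid) if bit == '0' else (mid, y_hi)
    return (x_lo, x_hi, y_lo, y_hi)

def zone_addr(code, R=(0.0, 1.0, 0.0, 1.0)):
    """addr(Z): the centroid of the zone's rectangle."""
    x_lo, x_hi, y_lo, y_hi = zone_rect(code, R)
    return ((x_lo + x_hi) / 2.0, (y_lo + y_hi) / 2.0)

def point_code(x, y, level, R=(0.0, 1.0, 0.0, 1.0)):
    """The level-bit code of the zone (at that level) containing point (x, y)."""
    x_lo, x_hi, y_lo, y_hi = R
    bits = []
    for i in range(level):
        if i % 2 == 0:
            mid = (x_lo + x_hi) / 2.0
            bits.append('0' if x < mid else '1')
            x_lo, x_hi = (x_lo, mid) if x < mid else (mid, x_hi)
        else:
            mid = (y_lo + y_hi) / 2.0
            bits.append('0' if y < mid else '1')
            y_lo, y_hi = (y_lo, mid) if y < mid else (mid, y_hi)
    return ''.join(bits)

def are_siblings(c1, c2):
    """Sibling zones: same code except for the last bit."""
    return len(c1) == len(c2) and c1 != '' and c1[:-1] == c2[:-1] and c1 != c2

# For example, the top-right zone of Figure 2 has code '1111':
# zone_addr('1111') == (0.875, 0.875), and '1111' and '1110' are siblings.
```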
We uniquely define a backup zone for each zone as follows: if the sibling subtree of the zone is on the left, the backup zone is the right-most zone in the sibling subtree; otherwise, the backup zone is the left-most zone in the sibling subtree. For a zone Z, let p be the first level(Z) − 1 digits of code(Z). Let backup(Z) be the backup zone of zone Z. If code(Z) = p1, code(backup(Z)) = p01∗ with the most number of trailing 1``s (∗ means 0 or 1 occurrences). If 65 code(Z) = p0, code(backup(Z)) = p10∗ with the most number of trailing 0``s. 3.2 Associating Zones with Nodes Our definition of a zone is independent of the actual distribution of nodes in the sensor field, and only depends upon the geographic extent (the bounding rectangle) of the sensor field. Now we describe how zones are mapped to nodes. Conceptually, the sensor field is logically divided into zones and each zone is assigned to a single node. If the sensor network were deployed in a grid-like (i.e., very regular) fashion, then it is easy to see that there exists a k such that each node maps into a distinct level-k zone. In general, however, the node placements within a sensor field are likely to be less regular than the grid. For some k, some zones may be empty and other zones might have more than one node situated within them. One alternative would have been to choose a fixed k for the overall system, and then associate nodes with the zones they are in (and if a zone is empty, associate the nearest node with it, for some definition of nearest). Because it makes our overall query routing system simpler, we allow nodes in a DIM to map to different-sized zones. To precisely understand the associations between zones and nodes, we define the notion of zone ownership. For any given placement of network nodes, consider a node A. Let ZA to be the largest zone that includes only node A and no other node. Then, we say that A owns ZA. Notice that this definition of ownership may leave some sections of the sensor field un-associated with a node. For example, in Figure 2, the zone 110 does not contain any nodes and would not have an owner. To remedy this, for any empty zone Z, we define the owner to be the owner of backup(Z). In our example, that empty zone``s owner would also be the node that owns 1110, its backup zone. Having defined the association between nodes and zones, the next problem we tackle is: given a node placement, does there exist a distributed algorithm that enables each node to determine which zones it owns, knowing only the overall boundary of the sensor network? In principle, this should be relatively straightforward, since each node can simply determine the location of its neighbors, and apply simple geometric methods to determine the largest zone around it such that no other node resides in that zone. In practice, however, communication voids and obstacles make the algorithm much more challenging. In particular, resolving the ownership of zones that do not contain any nodes is complicated. Equally complicated is the case where the zone of a node is larger than its communication radius and the node cannot determine the boundaries of its zone by local communication alone. Our distributed zone building algorithm defers the resolution of such zones until when either a query is initiated, or when an event is inserted. 
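The backup-zone rule above can be stated very compactly: flip the last bit of code(Z) to obtain the root of the sibling subtree, then keep appending copies of the original last bit until an existing zone is reached. The sketch below is ours and, purely for illustration, assumes the caller knows the full set of leaf zone codes; in the real system no node has this global view, and the rule is exercised implicitly by the routing mechanisms described later.

```python
def backup_zone(code, leaf_codes):
    """
    backup(Z) from Section 3.1: if code(Z) = p1, the backup zone's code is
    p0 followed by as many 1s as needed to reach an existing zone (p01*);
    if code(Z) = p0, it is p1 followed by 0s (p10*).
    `leaf_codes` is the set of codes of all zones in the zone tree (a global
    view assumed here only for illustration).
    """
    p, last = code[:-1], code[-1]
    candidate = p + ('0' if last == '1' else '1')   # root of the sibling subtree
    max_len = max(len(c) for c in leaf_codes)
    while candidate not in leaf_codes and len(candidate) < max_len:
        candidate += last   # descend toward the right-most (1s) or left-most (0s) leaf
    return candidate

# Example from Figure 2 (zones 00, 010, 011, 100, 101, 110, 1110, 1111):
# backup_zone('110', {'00','010','011','100','101','110','1110','1111'}) == '1110',
# so the empty zone 110 is owned by the node that owns its backup zone 1110.
```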
The basic idea behind our distributed zone-building algorithm is that each node tentatively builds up an idea of the zone it resides in just by communicating with its neighbors (remembering which boundaries of the zone are undecided because there is no radio neighbor that can help resolve that boundary). These undecided boundaries are later resolved by a GPSR perimeter traversal when data messages are actually routed. We now describe the algorithm, and illustrate it using examples.

In our algorithm, each node uses an array bound[0..3] to maintain the four boundaries of the zone it owns (remember that in this algorithm, the node only tries to determine the zone it resides in, not the other zones it might own because those zones are devoid of nodes). When a node starts up, it initializes this array to be the network boundary, i.e., initially each node assumes its zone contains the whole network. The zone boundary algorithm now relies upon GPSR's beacon messages to learn the locations of neighbors within radio range. Upon hearing of such a neighbor, the node calls the algorithm in Figure 4 to update its zone boundaries and its code accordingly. In this algorithm, we assume that A is the node at which the algorithm is executed, ZA is its zone, and a is a newly discovered neighbor of A. (Procedure Contain(ZA, a) is used to decide if node a is located within the current zone boundaries of node A.)

Figure 1: A network, where circles represent sensor nodes and dashed lines mark the network boundary.
Figure 2: The zone codes and boundaries for this network (zones 00, 010, 011, 100, 101, 110, 1110, and 1111).
Figure 3: The corresponding zone tree.

Build-Zone(a)
  while Contain(ZA, a)
    do if length(code(ZA)) mod 2 = 0
         then new_bound ← (bound[0] + bound[1])/2
              if A.x < new_bound
                then bound[1] ← new_bound
                else bound[0] ← new_bound
         else new_bound ← (bound[2] + bound[3])/2
              if A.y < new_bound
                then bound[3] ← new_bound
                else bound[2] ← new_bound
  Update zone code code(ZA)

Figure 4: Zone boundary determination, where A.x and A.y represent the geographic coordinates of node A.

Insert-Event(e)
  c ← Encode(e)
  if Contain(ZA, c) = true and is_Internal() = true
    then Store e and exit
  Send-Message(c, e)

Send-Message(c, m)
  if ∃ neighbor Y such that Closer(Y, owner(m), m) = true
    then addr(m) ← addr(Y)
    else if length(c) > length(code(m))
           then update code(m) and addr(m)
  source(m) ← caller
  if is_Owner(m) = true
    then owner(m) ← caller's code
  Send(m)

Figure 5: Inserting an event in a DIM. Procedure Closer(A, B, m) returns true if code(A) is closer to code(m) than code(B). source(m) is used to set the source address of message m.

Using this algorithm, then, each node can independently and asynchronously decide its own tentative zone based on the location of its neighbors. Figure 2 illustrates the results of applying this algorithm to the network in Figure 1, and Figure 3 describes the corresponding zone tree. Each zone resides at a leaf node, and the code of a zone is the path from the root to the zone if we represent the branch to the left child by 0 and the branch to the right child by 1. This binary tree forms the index that we will use in the following event and query processing procedures. We see that the zone sizes are different and depend on the local densities, and so are the lengths of the zone codes for different nodes. Notice that in Figure 2, there is an empty zone whose code should be 110.
In this case, if the node in zone 1111 can only hear the node in zone 1110, it sets its boundary with the empty zone to undecided, because it did not hear from any neighboring nodes from that direction. As we have mentioned before, the undecided boundaries are resolved using GPSR``s perimeter mode when an event is inserted, or a query sent. We describe event insertion in the next step. Finally, this description does not describe how a node``s zone codes are adjusted when neighboring nodes fail, or new nodes come up. We return to this in Section 3.5. 3.3 Inserting an Event In this section, we describe how events are inserted into a DIM. There are two algorithms of interest: a consistent hashing technique for mapping an event to a zone, and a routing algorithm for storing the event at the appropriate zone. As we shall see, these two algorithms are inter-related. 3.3.1 Hashing an Event to a Zone In Section 3.1, we described a recursive tessellation of the geographic extent of a sensor field. We now describe a consistent hashing scheme for a DIM that supports range queries on m distinct attributes2 Let us denote these attributes A1 ... Am. For simplicity, assume for now that the depth of every zone in the network is k, k is a multiple of m, and that this value of k is known to every node. We will relax this assumption shortly. Furthermore, for ease of discussion, we assume that all attribute values have been normalized to be between 0 and 1. Our hashing scheme assigns a k bit zone code to an event as follows. For i between 1 and m, if Ai < 0.5, the i-th bit of the zone code is assigned 0, else 1. For i between m + 1 and 2m, if Ai−m < 0.25 or Ai−m ∈ [0.5, 0.75), the i-th bit of the zone is assigned 0, else 1, because the next level divisions are at 0.25 and 0.75 which divide the ranges to [0, 0.25), [0.25, 0.5), [0.5, 0.75), and [0.75, 1). We repeat this procedure until all k bits have been assigned. As an example, consider event E = 0.3, 0.8 . For this event, the 5-bit zone code is code(ZA) = 01110. Essentially, our hashing scheme uses the values of the attributes in round-robin fashion on the zone tree (such as the one in Figure 3), in order to map an m-attribute event to a zone code. This is reminiscent of k-d trees [2], but is quite different from that data structure: zone trees are spatial embeddings and do not incorporate the re-balancing algorithms in k-d trees. In our design of DIMs, we do not require nodes to have zone codes of the same length, nor do we expect a node to know the zone codes of other nodes. Rather, suppose the encoding node is A and its own zone code is of length kA. Then, given an event E, node A only hashes E to a zone code of length kA. We denote the zone code assigned to an event E by code(E). As we describe below, as the event is routed, code(E) is refined by intermediate nodes. This lazy evaluation of zone codes allows different nodes to use different length zone codes without any explicit coordination. 3.3.2 Routing an Event to its Owner The aim of hashing an event to a zone code is to store the event at the node within the network node that owns that zone. We call this node the owner of the event. Consider an event E that has just been generated at a node A. After encoding event E, node A compares code(E) with code(A). If the two are identical, node A store event E locally; otherwise, node A will attempt to route the event to its owner. 
To do this, note that code(E) corresponds to some zone Z', which is A's current guess for the zone at which event E should be stored. A now invokes GPSR to send a message to addr(Z') (the centroid of Z', Section 3.1). The message contains the event E, code(E), and the target geographic location for storing the event. In the message, A also marks itself as the owner of event E. As we will see later, the guessed zone Z', the address addr(Z'), and the owner of E, all of which are contained in the message, will be refined by intermediate forwarding nodes. GPSR now delivers this message to the next hop towards addr(Z') from A. This next hop node (call it B) does not immediately forward the message. Rather, it attempts to compute a new zone code for E to get a new code code_new(E). B will update the code contained in the message (and also the geographic destination of the message) if code_new(E) is longer than the event code in the message. In this manner, as the event wends its way to its owner, its zone code gets refined. Now, B compares its own code code(B) against the owner code owner(E) contained in the incoming message. If code(B) has a longer match with code(E) than the current owner owner(E), then B sets itself to be the current owner of E, meaning that if no more eligible node is found, then B will store the event (we shall see how this happens next). If B's zone code does not exactly match code(E), B will invoke GPSR to deliver E to the next hop. (As an aside, DIM does not assume that all nodes are homogeneous in terms of the sensors they have. In an m-dimensional DIM, a node that does not possess all m sensors can use NULL values for the corresponding readings; DIM treats NULL as an extreme value for range comparisons. A network may also have many DIM instances running concurrently.)

3.3.3 Resolving undecided zone boundaries during insertion

Suppose that some node, say C, finds itself to be the destination (or eventual owner) of an event E. It does so by noticing that its code code(C) equals code(E) after locally recomputing a code for E. In that case, C stores E locally, but only if all four of C's zone boundaries are decided. When this condition holds, C knows for sure that no other nodes have zones that overlap with its own. In this case, we call C an internal node. Recall, though, that because the zone discovery algorithm (Section 3.2) only uses information from immediate neighbors, one or more of C's boundaries may be undecided. If so, C assumes that some other nodes have a zone that overlaps with its own, and sets out to resolve this overlap. To do this, C now sets itself to be the owner of E and continues forwarding the message. Here we rely on GPSR's perimeter-mode routing to probe around the void that causes the undecided boundary. Since the message starts from C and is destined for a geographic location near C, GPSR guarantees that the message will be delivered back to C if no other node updates the information in the message. If the message comes back to C with C still marked as the owner, C infers that it must be the true owner of the zone and stores E locally. If this does not happen, there are two possibilities. The first is that as the event traverses the perimeter, some intermediate node, say B, whose zone overlaps with C's, marks itself to be the owner of the event, but otherwise does not change the event's zone code. This node also recognizes that its own zone overlaps with C's and initiates a message exchange which causes each of them to appropriately shrink its zone.
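The hashing scheme of Section 3.3.1 and the lazy code refinement just described fit in a few lines of Python. This sketch is ours (illustrative names, attribute values assumed normalized to [0, 1)); it reproduces the 5-bit code 01110 for the example event <0.3, 0.8>.

```python
def encode_event(attrs, k):
    """Hash an event (m normalized attribute values) to a k-bit zone code,
    consuming attributes in round-robin order as in Section 3.3.1."""
    m = len(attrs)
    lo, hi = [0.0] * m, [1.0] * m          # current interval for each attribute
    bits = []
    for i in range(k):
        j = i % m                          # round-robin over attributes
        mid = (lo[j] + hi[j]) / 2.0
        if attrs[j] < mid:
            bits.append('0')
            hi[j] = mid
        else:
            bits.append('1')
            lo[j] = mid
    return ''.join(bits)

def refine_code(attrs, msg_code, own_code_len):
    """Lazy refinement at a forwarding node: re-hash the event to the length of
    our own zone code and keep whichever code is longer (the longer code is an
    extension of the shorter one, so no explicit coordination is needed)."""
    new_code = encode_event(attrs, own_code_len)
    return new_code if len(new_code) > len(msg_code) else msg_code

# encode_event([0.3, 0.8], 5) == '01110', matching the example in the text.
```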
Figures 6 through 8 show an example of the data-driven zone shrinking just described. Initially, both node A and node B have claimed the same zone 0 because they are out of radio range of each other. Suppose that A inserts an event E = <0.4, 0.8, 0.9>. A encodes E to 0 and claims itself to be the owner of E. Since A is not an internal node, it sends out E, looking for other owner candidates of E. Once E gets to node B, B will see in the message's owner field A's code, which is the same as its own. B then shrinks its zone from 0 to 01 according to A's location, which is also recorded in the message, and sends a shrink request to A. Upon receiving this request, A also shrinks its zone from 0 to 00.

Figure 6: Nodes A and B have claimed the same zone.
Figure 7: An event/query message (filled arrows) triggers zone shrinking (hollow arrows).
Figure 8: The zone layout after shrinking. Now nodes A and B have been mapped to different zones.

A second possibility is that some intermediate node changes the destination code of E to a more specific value (i.e., a longer zone code). Let us label this node D. D now tries to initiate delivery to the centroid of the new zone. This might result in a new perimeter walk that returns to D (if, for example, D happens to be geographically closest to the centroid of the zone). However, D would not be the owner of the event, which would still be C. In routing to the centroid of this zone, the message may traverse the perimeter and return to D. Now D notices that C was the original owner, so it encapsulates the event and directs it to C. If there indeed is another node, say X, that owns a zone overlapping with C's, X will notice this fact by finding in the message the same prefix as the code of one of its zones, but with a geographic location different from its own. X will shrink its zone to resolve the overlap. If X's zone is smaller than or equal to C's zone, X will also send a shrink request to C. Once C receives a shrink request, it will reduce its zone appropriately and fix its undecided boundary. In this manner, the zone formation process is resolved on demand in a data-driven way.

There are several interesting effects with respect to perimeter walking that arise in our algorithm. The first is that there are some cases where an event insertion might cause the entire outer perimeter of the network to be traversed. Figure 6 also works as an example where the outer perimeter is traversed. Event E inserted by A will eventually be stored at node B. Before node B stores event E, if B's nominal radio range does not intersect the network boundary, it needs to send out E again as A did, because B in this case is not an internal node. But if B's nominal radio range intersects the network boundary, it then has two choices. It can assume that there will not be any nodes outside the network boundary and so B is an internal node; this is an aggressive approach. On the other hand, B can also make a conservative decision, assuming that there might be some other nodes it has not heard from yet; B will then force the message to walk another perimeter before storing it. In some situations, especially for large zones where the node that owns a zone is far away from the centroid of the owned zone, there might exist a small perimeter around the destination that does not include the owner of the zone. The event will then end up being stored at a different node than the real owner.
In order to deal with this problem, we add an extra operation in event forwarding, called efficient neighbor discovery. Before invoking GPSR, a node needs to check if there exists a neighbor who is eligible to be the real owner of the event. To do this, a node C, say, needs to know the zone codes of its neighboring nodes. We deploy GPSR``s beaconing message to piggyback the zone codes for nodes. So by simply comparing the event``s code and neighbor``s code, a node can decide whether there exists a neighbor Y which is more likely to be the owner of event E. C delivers E to Y , which simply follows the decision making procedure discussed above. 3.3.4 Summary and Pseudo-code In summary, our event insertion procedure is designed to nicely interact with the zone discovery mechanism, and the event hashing mechanism. The latter two mechanisms are kept simple, while the event insertion mechanism uses lazy evaluation at each hop to refine the event``s zone code, and it leverages GPSR``s perimeter walking mechanism to fix undecided zone boundaries. In Section 3.5, we address robustness of event insertion to packet loss or to node failures. Figure 5 shows the pseudo-code for inserting and forwarding an event e. In this pseudo code, we have omitted a description of the zone shrinking procedure. In the pseudo code, procedure is Internal() is used to determine if the caller is an internal node and procedure is Owner() is used to determine if the caller is more eligible to be the owner of the event than is currently claimed owner as recorded in the message. Procedure Send-Message is used to send either an event message or a query message. If the message destination address has been changed, the packet source address needs also to be changed in order to avoid being dropped by GPSR, since GPSR does not allow a node to see the same packet in greedy mode twice. 3 This happens less frequently than for GHTs, where inserting an event to a location outside the actual (but inside the nominal) boundary of the network will always invoke an external perimeter walk. 3.4 Resolving and Routing Queries DIMs support both point queries4 and range queries. Routing a point query is identical to routing an event. Thus, the rest of this section details how range queries are routed. The key challenge in routing zone queries is brought out by the following strawman design. If the entire network was divided evenly into zones of depth k (for some pre-defined constant k), then the querier (the node issuing the query) could subdivide a given range query into the relevant subzones and route individual requests to each of the zones. This can be inefficient for large range queries and also hard to implement in our design where zone sizes are not predefined. Accordingly, we use a slightly different technique where a range query is initially routed to a zone corresponding to the entire range, and is then progressively split into smaller subqueries. We describe this algorithm here. The first step of the algorithm is to map a range query to a zone code prefix. Conceptually, this is easy; in a zone tree (Figure 3), there exists some node which contains the entire range query in its sub-tree, and none of its children in the tree do. The initial zone code we choose for the query is the zone code corresponding to that tree node, and is a prefix of the zone codes of all zones (note that these zones may not be geographically contiguous) in the subtree. 
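A small sketch (ours) of this prefix computation: descend the zone tree round-robin, exactly as in event hashing, and stop at the first split that the query range straddles. Attribute ranges are assumed normalized to [0, 1], and the boundary convention at split points is our choice.

```python
def query_code_prefix(ranges, max_level=32):
    """Code of the smallest zone-tree node whose subtree covers the whole
    range query; `ranges` is a list of (lo, hi) pairs, one per attribute."""
    m = len(ranges)
    lo, hi = [0.0] * m, [1.0] * m
    bits = []
    for i in range(max_level):             # bound the depth for point-like queries
        j = i % m
        mid = (lo[j] + hi[j]) / 2.0
        q_lo, q_hi = ranges[j]
        if q_hi <= mid:                    # query lies entirely in the lower half
            bits.append('0')
            hi[j] = mid
        elif q_lo >= mid:                  # query lies entirely in the upper half
            bits.append('1')
            lo[j] = mid
        else:                              # query straddles this split: stop here
            break
    return ''.join(bits)

# query_code_prefix([(0.3, 0.8), (0.6, 0.9)]) == ''      (covered only by the root)
# query_code_prefix([(0.3, 0.45), (0.6, 0.9)]) == '011'  (a non-trivial covering prefix)
```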
The querier computes the zone code of Q, denoted by code(Q) and then starts routing a query to addr(code(Q)). Upon receiving a range query Q, a node A (where A is any node on the query propagation path) divides it into multiple smaller sized subqueries if there is an overlap between the zone of A, zone(A) and the zone code associated with Q, code(Q). Our approach to split a query Q into subqueries is as follows. If the range of Q``s first attribute contains the value 0.5, A divides Q into two sub-queries one of whose first attribute ranges from 0 to 0.5, and the other from 0.5 to 1. Then A decides the half that overlaps with its own zone. Let``s call it QA. If QA does not exist, then A stops splitting; otherwise, it continues splitting (using the second attribute range) and recomputing QA until QA is small enough so that it completely falls into zone(A) and hence A can now resolve it. For example, suppose that node A, whose code is 0110, is to split a range query Q = 0.3 − 0.8, 0.6 − 0.9 . The splitting steps is shown in Figure 2. After splitting, we obtain three smaller queries q0 = 0.3 − 0.5, 0.6 − 0.75 , q1 = 0.3 − 0.5, 0.75 − 0.9 , and q2 = 0.5 − 0.8, 0.6 − 0.9 . This splitting procedure is illustrated in Figure 9 which also shows the codes of each subquery after splitting. A then replies to subquery q0 with data stored locally and sends subqueries q1 and q2 using the procedure outlined above. More generally, if node A finds itself to be inside the zone subtree that maximally covers Q, it will send the subqueries that resulted from the split. Otherwise, if there is no overlap between A and Q, then A forwards Q as is (in this case Q is either the original query, or a product of an earlier split). Figure 10 describes the pseudo-code for the zone splitting algorithm. As shown in the above algorithm, once a subquery has been recognized as belonging to the caller``s zone, procedure Resolve is invoked to resolve the subquery and send a reply to the querier. Every query message contains 4 By point queries, we mean the equality condition on all indexed keys. DIM index attributes are not necessarily primary keys. 69 the geographic location of its initiator, so the corresponding reply message can be delivered directly back to the initiator. Finally, in the process of query resolution, zones might shrink similar to shrinkage during inserting. We omit this in the pseudo code. 3.5 Robustness Until now, we have not discussed the impact of node failures and packet losses, or node arrivals and departures on our algorithms. Packet losses can affect query and event insertion, and node failures can result in lost data, while node arrivals and departures can impact the zone structure. We now discuss how DIMs can be made robust to these kinds of dynamics. 3.5.1 Maintaining Zones In previous sections, we described how the zone discovery algorithm could leave zone boundaries undecided. These undecided boundaries are resolved during insertion or querying, using the zone shrinking procedure describe above. When a new node joins the network, the zone discovery mechanism (Section 3.2) will cause neighboring zones to appropriately adjust their zone boundaries. At this time, those zones can also transfer to the new node those events they store but which should belong to the new node. Before a node turns itself off (if this is indeed possible), it knows that its backup node (Section 3.1) will take over its zone, and will simply send all its events to its backup node. Node deletion may also cause zone expansion. 
In order to keep the mapping between the binary zone tree``s leaf nodes and zones, we allow zone expansion to only occur among sibling zones (Section 3.1). The rule is: if zone(A)``s sibling zone becomes empty, then A can expand its own zone to include its sibling zone. Now, we turn our attention to node failures. Node failures are just like node deletions except that a failed node does not have a chance to move its events to another node. But how does a node decide if its sibling has failed? If the sibling is within radio range, the absence of GPSR beaconing messages can detect this. Once it detects this, the node can expand its zone. A different approach is needed for detecting siblings who are not within radio range. These are the cases where two nodes own their zones after exchanging a shrink message; they do not periodically exchange messages thereafter to maintain this zone relationship. In this case, we detect the failure in a data-driven fashion, with obvious efficiency benefits compared to periodic keepalives. Once a node B has failed, an event or query message that previously should have been owned by the failed node will now be delivered to the node A that owns the empty zone left by node B. A can see this message because A stands right around the empty area left by B and is guaranteed to be visited in a GPSR perimeter traversal. A will set itself to be the owner of the message, and any node which would have dropped this message due to a perimeter loop will redirect the message to A instead. If A``s zone happens to be the sibling of B``s zone, A can safely expand its own zone and notify its expanded zone to its neighbors via GPSR beaconing messages. 3.5.2 Preventing Data Loss from Node Failure The algorithms described above are robust in terms of zone formation, but node failure can erase data. To avoid this, DIMs can employ two kinds of replication: local replication to be resilient to random node failures, and mirror replication for resilience to concurrent failure of geographically contiguous nodes. Mirror replication is conceptually easy. Suppose an event E has a zone code code(E). Then, the node that inserts E would store two copies of E; one at the zone denoted by code(E), and the other at the zone corresponding to the one``s complement of code(E). This technique essentially creates a mirror DIM. A querier would need, in parallel, to query both the original DIM and its mirror since there is no way of knowing if a collection of nodes has failed. Clearly, the trade-off here is an approximate doubling of both insertion and query costs. There exists a far cheaper technique to ensure resilience to random node failures. Our local replication technique rests on the observation that, for each node A, there exists a unique node which will take over its zone when A fails. This node is defined as the node responsible for A``s zone``s backup zone (see Section 3.1). The basic idea is that A replicates each data item it has in this node. We call this node A``s local replica. Let A``s local replica be B. Often B will be a radio neighbor of A and can be detected from GPSR beacons. Sometimes, however, this is not the case, and B will have to be explicitly discovered. We use an explicit message for discovering the local replica. Discovering the local replica is data-driven, and uses a mechanism similar to that of event insertion. Node A sends a message whose geographic destination is a random nearby location chosen by A. 
The location is close enough to A such that GPSR will guarantee that the message will be delivered back to A. In addition, the message has three fields: one for the zone code of A, code(A), one for the owner owner(A) of zone(A), which is initially set to be empty, and one for the geographic location of owner(A). The packet is then delivered in GPSR perimeter mode. Each node that receives this message compares its zone code with code(A) in the message, and if it is more eligible to be the owner of zone(A) than the current owner(A) recorded in the message, it updates the field owner(A) and the corresponding geographic location. Once the packet comes back to A, A knows the location of its local replica and can start to send replicas. In a dense sensor network, the local replica of a node is usually very near to the node, either a direct neighbor or 1-2 hops away, so the cost of sending replicas to the local replica will not dominate the network traffic. However, a node's local replica may itself fail. There are two ways to deal with this situation: periodic refreshes, or repeated data-driven discovery of local replicas. The former has higher overhead, but more quickly discovers failed replicas.

3.5.3 Robustness to Packet Loss

Finally, the mechanisms for querying and event insertion can easily be made resilient to packet loss. For event insertion, a simple ACK scheme suffices. Of course, queries and responses can be lost as well. In this case, there exists an efficient approach for error recovery. This rests on the observation that the querier knows which zones fall within its query and should have responded (we assume that a node that has no data matching a query, but whose zone falls within the query, responds with a negative acknowledgment). After a conservative timeout, the querier can re-issue the queries selectively to these zones. If DIM cannot get any answers (positive or negative) from certain zones after repeated timeouts, it can at least return the partial query results to the application, together with information about the zones from which data is missing.

Figure 9: An example of range query splitting: <0.3-0.8, 0.6-0.9> is split into <0.3-0.5, 0.6-0.9> and <0.5-0.8, 0.6-0.9>, and the former is further split into <0.3-0.5, 0.6-0.75> and <0.3-0.5, 0.75-0.9>.

Resolve-Range-Query(Q)
 1  Qsub ← nil
 2  q0, Qsub ← Split-Query(Q)
 3  if q0 = nil
 4    then c ← Encode(Q)
 5         if Contain(c, code(A)) = true
 6           then go to step 12
 7           else Send-Message(c, q0)
 8    else Resolve(q0)
 9         if is_Internal() = true
10           then Absorb(q0)
11           else Append q0 to Qsub
12  if Qsub ≠ nil
13    then for each subquery q ∈ Qsub
14           do c ← Encode(q)
15              Send-Message(c, q)

Figure 10: Query resolving algorithm.

4. DIMS: AN ANALYSIS

In this section, we present a simple analytic performance evaluation of DIMs, and compare their performance against other possible approaches for implementing multi-dimensional range queries in sensor networks. In the next section, we validate these analyses using detailed packet-level simulations.

Our primary metrics for the performance of a DIM are:
Average Insertion Cost measures the average number of messages required to insert an event into the network.
Average Query Delivery Cost measures the average number of messages required to route a query message to all the relevant nodes in the network. It does not measure the number of messages required to transmit responses to the querier; this latter number depends upon the precise data distribution and is the same for many of the schemes we compare DIMs against.
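As a concrete companion to Figures 9 and 10, the sketch below (ours, not the pseudocode above) spells out the splitting step of Section 3.4 at a single node: it peels off the halves of the query that lie away from the node's zone while descending the node's own code path. The interval conventions are our assumption. It reproduces the worked example: at node 0110, the query <0.3-0.8, 0.6-0.9> yields one locally resolvable part and two sub-queries.

```python
def split_query(node_code, ranges):
    """Split a range query at a node (Section 3.4).  Returns (q_local, subqueries):
    q_local is the part that falls entirely within the node's zone (None if the
    query does not overlap the zone); subqueries are the peeled-off parts."""
    m = len(ranges)
    lo, hi = [0.0] * m, [1.0] * m          # the zone-tree cell along the node's code path
    q = [list(r) for r in ranges]          # the part of the query still being narrowed
    subqueries = []
    for i, bit in enumerate(node_code):
        j = i % m
        mid = (lo[j] + hi[j]) / 2.0
        if q[j][0] < mid < q[j][1]:        # query straddles this split
            far = [list(r) for r in q]     # the half away from the node's zone
            if bit == '1':
                far[j][1] = mid
                q[j][0] = mid
            else:
                far[j][0] = mid
                q[j][1] = mid
            subqueries.append([tuple(r) for r in far])
        elif (bit == '1' and q[j][1] <= mid) or (bit == '0' and q[j][0] >= mid):
            # the remaining query lies entirely outside the node's zone: forward it whole
            return None, subqueries + [[tuple(r) for r in q]]
        if bit == '1':                     # descend into the node's half of the cell
            lo[j] = mid
        else:
            hi[j] = mid
    return [tuple(r) for r in q], subqueries

# At node 0110 with Q = <0.3-0.8, 0.6-0.9>:
#   q_local    == [(0.3, 0.5), (0.6, 0.75)]                               (q0, resolved locally)
#   subqueries == [[(0.5, 0.8), (0.6, 0.9)], [(0.3, 0.5), (0.75, 0.9)]]   (q2 and q1)
```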
In DIMs, event insertion essentially uses geographic routing. In a dense N-node network where the likelihood of traversing perimeters is small, the average event insertion cost is proportional to √N [23]. On the other hand, the query delivery cost depends upon the size of the ranges specified in the query. Recall that our query delivery mechanism is careful about splitting a query into sub-queries, doing so only when the query nears the zone that covers the query range. Thus, when the querier is far from the queried zone, there are two components to the query delivery cost. The first, which is proportional to √N, is the cost to deliver the query near the covering zone. If there are M nodes within this covering zone, the message delivery cost of splitting the query is proportional to M.

The average cost of query delivery depends upon the distribution of query range sizes. Now, suppose that query sizes follow some density function f(x); then the average cost of resolving a query can be approximated by ∫_1^N x f(x) dx. To give some intuition for the performance of DIMs, we consider four different forms for f(x): the uniform distribution, where a query range encompassing the entire network is as likely as a point query; a bounded uniform distribution, where all sizes up to a bound B are equally likely; an algebraic distribution, in which most queries are small, but large queries are somewhat likely; and an exponential distribution, where most queries are small and large queries are unlikely. In all our analyses, we make the simplifying assumption that the size of a query is proportional to the number of nodes that can answer that query.

For the uniform distribution, f(x) ∝ c for some constant c. If each query size from 1 ... N is equally likely, the average query delivery cost of uniformly distributed queries is O(N). Thus, for uniformly distributed queries, the performance of DIMs is comparable to that of flooding. However, for the applications we envision, where nodes within the network are trying to correlate events, the uniform distribution is highly unrealistic.

Somewhat more realistic is a situation where all query sizes are bounded by a constant B. In this case, the average cost for resolving such a query is approximately ∫_1^B x f(x) dx = O(B). Recall now that all queries have to pay an approximate cost of O(√N) to deliver the query near the covering zone. Thus, if DIM limited queries to a size proportional to √N, the average query cost would be O(√N).

The algebraic distribution, where f(x) ∝ x^(-k) for some constant k between 1 and 2, has an average query resolution cost given by ∫_1^N x f(x) dx = O(N^(2-k)). In this case, if k > 1.5, the average cost of query delivery is dominated by the cost to deliver the query near the covering zone, which is O(√N).

Finally, for the exponential distribution, f(x) = c e^(-cx) for some constant c, and the average cost is just the mean of the corresponding distribution, i.e., O(1) for large N. Asymptotically, then, the cost of the query for the exponential distribution is dominated by the cost to deliver the query near the covering zone (O(√N)).

Thus, we see that if queries follow either the bounded uniform distribution, the algebraic distribution, or the exponential distribution, the query cost scales as the insertion cost (for an appropriate choice of constants for the bounded uniform and the algebraic distributions). How well does the performance of DIMs compare against alternative choices for implementing multi-dimensional queries?
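For concreteness, a worked instance (ours) of the average-cost integral above for the exponential size distribution f(x) = c e^(-cx):

```latex
\int_1^N x\, c e^{-cx}\, dx
  = \Bigl[ -\bigl(x + \tfrac{1}{c}\bigr) e^{-cx} \Bigr]_1^N
  = \bigl(1 + \tfrac{1}{c}\bigr) e^{-c} - \bigl(N + \tfrac{1}{c}\bigr) e^{-cN}
  \;\xrightarrow{\;N \to \infty\;}\; \bigl(1 + \tfrac{1}{c}\bigr) e^{-c} \;=\; O(1).
```

The truncated mean is a constant independent of N, so the O(√N) cost of delivering the query to the covering zone dominates, as stated above.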
A simple alternative is called external storage [23], where all events are stored centrally in a node outside the sensor network. This scheme incurs an insertion cost of O( √ N), and a zero query cost. However, as [23] points out, such systems may be impractical in sensor networks since the access link to the external node becomes a hotspot. A second alternative implementation would store events at the node where they are generated. Queries are flooded 71 throughout the network, and nodes that have matching data respond. Examples of systems that can be used for this (although, to our knowledge, these systems do not implement multi-dimensional range queries) are Directed Diffusion [15] and TinyDB [17]. The flooding scheme incurs a zero insertion cost, but an O(N) query cost. It is easy to show that DIMs outperform flooding as long as the ratio of the number of insertions to the number of queries is less than √ N. A final alternative would be to use a geographic hash table (GHT [20]). In this approach, attribute values are assumed to be integers (this is actually quite a reasonable assumption since attribute values are often quantized), and events are hashed on some (say, the first) attribute. A range query is sub-divided into several sub-queries, one for each integer in the range of the first attribute. Each sub-query is then hashed to the appropriate location. The nodes that receive a sub-query only return events that match all other attribute ranges. In this approach, which we call GHT-R (GHT``s for range queries) the insertion cost is O( √ N). Suppose that the range of the first attribute contains r discrete values. Then the cost to deliver queries is O(r √ N). Thus, asymptotically, GHT-R``s perform similarly to DIMs. In practice, however, the proportionality constants are significantly different, and DIMs outperform GHT-Rs, as we shall show using detailed simulations. 5. DIMS: SIMULATION RESULTS Our analysis gives us some insight into the asymptotic behavior of various approaches for multi-dimensional range queries. In this section, we use simulation to compare DIMs against flooding and GHT-R; this comparison gives us a more detailed understanding of these approaches for moderate size networks, and gives us a nuanced view of the mechanistic differences between some of these approaches. 5.1 Simulation Methodology We use ns-2 for our simulations. Since DIMs are implemented on top of GPSR, we first ported an earlier GPSR implementation to the latest version of ns-2. We modified the GPSR module to call our DIM implementation when it receives any data message in transit or when it is about to drop a message because that message traversed the entire perimeter. This allows a DIM to modify message zone codes in flight (Section 3), and determine the actual owner of an event or query. In addition, to this, we implemented in ns-2 most of the DIM mechanisms described in Section 3. Of those mechanisms, the only one we did not implement is mirror replication. We have implemented selective query retransmission for resiliency to packet loss, but have left the evaluation of this mechanism to future work. Our DIM implementation in ns-2 is 2800 lines of code. Finally, we implemented GHT-R, our GHT-based multidimensional range query mechanism in ns-2. This implementation was relatively straightforward, given that we had ported GPSR, and modified GPSR to detect the completion of perimeter mode traversals. 
Using this implementation, we conducted a fairly extensive evaluation of DIM and two alternatives (flooding, and our GHT-R). For all our experiments, we use uniformly placed sensor nodes with network sizes ranging from 50 nodes to 300 nodes. Each node has a radio range of 40m. For the results presented here, each node has on average 20 nodes within its nominal radio range. We have conducted experiments at other node densities; they are in agreement with the results presented here.

In all our experiments, each node first generates 3 events on average (more precisely, for a topology of size N, we have 3N events, and each node is equally likely to generate an event). (Our metrics are chosen so that the exact number of events and queries is unimportant for our discussion. Of course, the overall performance of the system will depend on the relative frequency of events and queries, as we discuss in Section 4; since we do not have realistic ratios for these, we focus on the microscopic costs rather than on the overall system costs.) We have conducted experiments for three different event value distributions. Our uniform event distribution generates 2-dimensional events and, for each dimension, every attribute value is equally likely. Our normal event distribution generates 2-dimensional events and, for each dimension, the attribute value is normally distributed with a mean corresponding to the mid-point of the attribute value range. The normal event distribution represents a skewed data set. Finally, our trace event distribution is a collection of 4-dimensional events obtained from a habitat monitoring network. As we shall see, this represents a fairly skewed data set.

Having generated events, for each simulation we generate queries such that, on average, each node generates 2 queries. The query sizes are determined using the four size distributions we discussed in Section 4: uniform, bounded uniform, algebraic, and exponential. Once a query size has been determined, the location of the query (i.e., the actual boundaries of the zone) is uniformly distributed. For our GHT-R experiments, the dynamic range of the attributes had 100 discrete values, but we restricted the query range for any one attribute to 50 discrete values to allow those simulations to complete in reasonable time. Finally, using one set of simulations we evaluate the efficacy of local replication by turning off random fractions of nodes and measuring the fidelity of the returned results. The primary metrics for our simulations are the average query and insertion costs, as defined in Section 4.

5.2 Results

Although we have examined almost all the combinations of factors described above, we discuss only the most salient ones here, for lack of space.

Figure 11 plots the average insertion costs for DIM and GHT-R (for flooding, of course, the insertion costs are zero). DIM incurs less per-event overhead in inserting events (regardless of the actual event distribution; Figure 11 shows the cost for uniformly distributed events). The reason for this is interesting. In GHT-R, storing almost every event incurs a perimeter traversal, and storing some events requires traversing the outer perimeter of the network [20]. By contrast, in DIM, storing an event incurs a perimeter traversal only when a node's boundaries are undecided. Furthermore, an insertion or a query in a DIM can traverse the outer perimeter (Section 3.3), but less frequently than in GHTs.

Figure 11: Average insertion cost for DIM and GHT-R (average cost per insertion versus network size).

Figure 13 plots the average query cost for a bounded uniform query size distribution. For this graph (and the next) we use a uniform event distribution, since the event distribution does not affect the query delivery cost. For this simulation, our bound was 1/4th the size of the largest possible query (e.g., a query of the form <0-0.5, 0-0.5>). Even for this generous query size, DIMs perform quite well (almost a third the cost of flooding). Notice, however, that GHT-Rs incur a high query cost, since almost any query requires as many subqueries as the width of the first attribute's range.

Figure 13: Average query cost with a bounded uniform query distribution (average cost per query versus network size, for DIM, flooding, and GHT-R).

Figure 14 plots the average query cost for the exponential distribution (the average query size for this distribution was set to be 1/16th the largest possible query). The superior scaling of DIMs is evident in these graphs. Clearly, this is the regime in which one might expect DIMs to perform best: when most of the queries are small and large queries are relatively rare. This is also the regime in which one would expect to use multi-dimensional range queries: to perform relatively tight correlations. As with the bounded uniform distribution, GHT-R query cost is dominated by the cost of sending sub-queries; for DIMs, the query splitting strategy works quite well in keeping overall query delivery costs low.

Figure 14: Average query cost with an exponential query distribution (average cost per query versus network size, for DIM, flooding, and GHT-R).

Figure 12 describes the efficacy of local replication. To obtain this figure, we conducted the following experiment. On a 100-node network, we inserted a number of events uniformly distributed throughout the network, then issued a query covering the entire network and recorded the answers. Knowing the expected answers for this query, we then successively removed a fraction f of nodes randomly, and re-issued the same query. The figure plots the fraction of expected responses actually received, with and without replication. As the graph shows, local replication performs well for random failures, returning almost 90% of the responses when up to 30% of the nodes have failed simultaneously. (In practice, the performance of local replication is likely to be much better than this. Assuming a node and its replica do not often fail simultaneously, a node will almost always detect a replica failure and re-replicate, leading to near 100% response rates.) In the absence of local replication, of course, when 30% of the nodes fail, the response rate is only 70%, as one would expect.

Figure 12: Local replication performance (fraction of replies compared with the non-failure case versus fraction of failed nodes, with and without local replication).

We note that DIMs (as currently designed) are not perfect. When the data is highly skewed (as it was for our trace data set from the habitat monitoring application, where most of the event values fell within 10% of the attribute's range), a few DIM nodes will clearly become the bottleneck. This is depicted in Figure 15, which shows that for DIMs and GHT-Rs, the maximum number of transmissions at any network node (the hotspots) is rather high. (For less skewed data distributions, and reasonable query size distributions, the hotspot curves for all three schemes are comparable.) This is a standard problem that database indices have dealt with by tree re-balancing. In our case, simpler solutions might be possible (and we discuss this in Section 7). However, our use of the trace data demonstrates that DIMs work for events which have more than two dimensions. Increasing the number of dimensions does not noticeably degrade DIMs' query cost (omitted for lack of space).

Figure 15: Hotspot usage (maximum hotspot on the trace data set versus network size, for DIM, flooding, and GHT-R).

Also omitted are experiments examining the impact of several other factors, as they do not affect our conclusions in any way. As we expected, DIMs are comparable in performance to flooding when all sizes of queries are equally likely. For an algebraic distribution of query sizes, the relative performance is close to that for the exponential distribution. For normally distributed events, the insertion costs are comparable to those for the uniform distribution.

Finally, we note that in all our evaluations we have only used list queries (those that request all events matching the specified range). We expect that for summary queries (those that expect an aggregate over matching events), the overall cost of DIMs could be lower, because the matching data are likely to be found in one or a small number of zones. We leave an understanding of this to future work. Also left to future work is a detailed understanding of the impact of location error on DIM's mechanisms. Recent work [22] has examined the impact of imprecise location information on other data-centric storage mechanisms such as GHTs, and found that there exist relatively simple fixes to GPSR that ameliorate the effects of location error.

6. IMPLEMENTATION

We have implemented DIMs on a Linux platform suitable for experimentation on PDAs and PC-104 class machines. To implement DIMs, we had to develop and test an independent implementation of GPSR. Our GPSR implementation is full-featured, while our DIM implementation has most of the algorithms discussed in Section 3; some of the robustness extensions have only simpler variants implemented. The software architecture of the DIM/GPSR system is shown in Figure 16. The entire system (about 5000 lines of code) is event-driven and multi-threaded. The DIM subsystem consists of six logical components: zone management, event maintenance, event routing, query routing, query processing, and GPSR interactions. The GPSR system is implemented as a user-level daemon process. Applications are executed as clients.

Figure 16: Software architecture of DIM over GPSR (the DIM subsystem comprises a zone manager, query router, query processor, event manager, event router, and a GPSR interface; the GPSR daemon provides greedy forwarding, perimeter forwarding, beaconing, and a neighbor list manager, over a MoteNIC (Mica radio) or an IP socket (802.11b/Ethernet)).
For the DIM subsystem, the GPSR module provides several extensions: it exports information about neighbors, and provides callbacks during packet forwarding and perimeter-mode termination.

[Figure 17: Number of events received for different query sizes (average number of received responses per query vs. query size: 0.25x0.25, 0.50x0.50, 0.75x0.75, 1.0x1.0).]
[Figure 18: Query distribution cost (total number of messages used only for sending the query vs. query size).]

We tested our implementation on a testbed consisting of 8 PC-104 class machines. Each of these boxes runs Linux and uses a Mica mote (attached through a serial cable) for communication. These boxes are laid out in an office building with a total spatial separation of over a hundred feet. We manually measured the locations of these nodes relative to some coordinate system and configured the nodes with their location. The network topology is approximately a chain. On this testbed, we inserted queries and events from a single designated node. Our events have two attributes which span all combinations of the four values [0, 0.25, 0.75, 1] (sixteen events in all). Our queries span four sizes, returning 1, 4, 9 and 16 events respectively. Figure 17 plots the number of events received for different sized queries. It might appear that we received fewer events than expected, but this graph doesn't count the events that were already stored at the querier. With that adjustment, the number of responses matches our expectation. Finally, Figure 18 shows the total number of messages required for different query sizes on our testbed. While these experiments do not reveal as much about the performance range of DIMs as our simulations do, they nevertheless serve as proof-of-concept for DIMs. Our next step in the implementation is to port DIMs to the Mica motes, and integrate them into the TinyDB [17] sensor database engine on motes.
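The adjustment mentioned above (responses exclude events already stored at the querier) is easy to account for when checking such testbed numbers. Below is a hypothetical helper, with made-up inputs, that computes the expected number of responses for a rectangular query given the set of inserted events and the events the querier itself stores; it is not the code used on the testbed.

```python
def in_range(event, query):
    """True if every attribute of the event falls inside the query's range."""
    return all(lo <= v <= hi for v, (lo, hi) in zip(event, query))

def expected_responses(events, query, stored_at_querier):
    """Events matching the query, minus those the querier already holds locally
    (the testbed graphs do not count locally stored matches as responses)."""
    matches = [e for e in events if in_range(e, query)]
    return len([e for e in matches if e not in stored_at_querier])

# Toy example (not the actual testbed assignment of events to nodes):
events = [(x, y) for x in (0.0, 0.25, 0.75, 1.0) for y in (0.0, 0.25, 0.75, 1.0)]
query = ((0.0, 0.5), (0.0, 0.5))            # a 0.5 x 0.5 query
print(expected_responses(events, query, stored_at_querier=[(0.0, 0.0)]))  # -> 3
```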
Communicaions of the ACM, 18(9):475-484, 1975. [3] P. Bonnet, J. E. Gerhke, and P. Seshadri. Towards Sensor Database Systems. In Proceedings of the Second International Conference on Mobile Data Management, Hong Kong, January 2001. [4] I. Clarke, O. Sandberg, B. Wiley, and T. W. Hong. Freenet: A Distributed Anonymous Information Storage and Retrieval System. In Designing Privacy Enhancing Technologies: International Workshop on Design Issues in Anonymity and Unobservability. Springer, New York, 2001. [5] D. Comer. The Ubiquitous B-tree. ACM Computing Surveys, 11(2):121-137, 1979. [6] R. A. Finkel and J. L. Bentley. Quad Trees: A Data Structure for Retrieval on Composite Keys. Acta Informatica, 4:1-9, 1974. [7] D. Ganesan, D. Estrin, and J. Heidemann. DIMENSIONS: Why do we need a new Data Handling architecture for Sensor Networks? In Proceedings of the First Workshop on Hot Topics In Networks (HotNets-I), Princeton, NJ, October 2002. [8] A. Gionis, P. Indyk, and R. Motwani. Similarity Search in High Dimensions via Hashing. In Proceedings of the 25th VLDB conference, Edinburgh, Scotland, September 1999. [9] R. Govindan, J. Hellerstein, W. Hong, S. Madden, M. Franklin, and S. Shenker. The Sensor Network as a Database. Technical Report 02-771, Computer Science Department, University of Southern California, September 2002. [10] B. Greenstein, D. Estrin, R. Govindan, S. Ratnasamy, and S. Shenker. DIFS: A Distributed Index for Features in Sensor Networks. In Proceedings of 1st IEEE International Workshop on Sensor Network Protocols and Applications, Anchorage, AK, May 2003. [11] A. Guttman. R-trees: A Dynamic Index Structure for Spatial Searching. In Proceedings of the ACM SIGMOD, Boston, MA, June 1984. [12] M. Harren, J. M. Hellerstein, R. Huebsch, B. T. Loo, S. Shenker, and I. Stoica. Complex Queries in DHT-based Peer-to-Peer Networks. In P. Druschel, F. Kaashoek, and A. Rowstron, editors, Proceedings of 1st International Workshop on Peer-to-Peer Systems (IPTPS``02), volume 2429 of LNCS, page 242, Cambridge, MA, March 2002. Springer-Verlag. [13] P. Indyk and R. Motwani. Approximate Nearest Neighbors: Towards Removing the Curse of Dimensionality. In Proceedings of the 30th Annual ACM Symposium on Theory of Computing, Dallas, Texas, May 1998. [14] P. Indyk, R. Motwani, P. Raghavan, and S. Vempala. Locality-preserving Hashing in Multidimensional Spaces. In Proceedings of the 29th Annual ACM symposium on Theory of Computing, pages 618 - 625, El Paso, Texas, May 1997. ACM Press. [15] C. Intanagonwiwat, R. Govindan, and D. Estrin. Directed Diffusion: A Scalable and Robust Communication Paradigm for Sensor Networks. In Proceedings of the Sixth Annual ACM/IEEE International Conference on Mobile Computing and Networking (Mobicom 2000), Boston, MA, August 2000. [16] B. Karp and H. T. Kung. GPSR: Greedy Perimeter Stateless Routing for Wireless Networks. In Proceedings of the Sixth Annual ACM/IEEE International Conference on Mobile Computing and Networking (Mobicom 2000), Boston, MA, August 2000. [17] S. Madden, M. Franklin, J. Hellerstein, and W. Hong. The Design of an Acquisitional Query Processor for Sensor Networks. In Proceedings of ACM SIGCMOD, San Diego, CA, June 2003. [18] S. Madden, M. J. Franklin, J. M. Hellerstein, and W. Hong. TAG: a Tiny AGregation Service for ad-hoc Sensor Networks. In Proceedings of 5th Annual Symposium on Operating Systems Design and Implementation (OSDI), Boston, MA, December 2002. [19] S. Ratnasamy, P. Francis, M. Handley, R. Karp, and S. Shenker. 
A Scalable Content-Addressable Network. In Proceedings of the ACM SIGCOMM, San Diego, CA, August 2001. [20] S. Ratnasamy, B. Karp, L. Yin, F. Yu, D. Estrin, R. Govindan, and S. Shenker. GHT: A Geographic Hash Table for Data-Centric Storage. In Proceedings of the First ACM International Workshop on Wireless Sensor Networks and Applications, Atlanta, GA, September 2002. [21] H. Samet. Spatial Data Structures. In W. Kim, editor, Modern Database Systems: The Object Model, Interoperability and Beyond, pages 361-385. Addison Wesley/ACM, 1995. [22] K. Sead, A. Helmy, and R. Govindan. On the Effect of Localization Errors on Geographic Face Routing in Sensor Networks. In Under submission, 2003. [23] S. Shenker, S. Ratnasamy, B. Karp, R. Govindan, and D. Estrin. Data-Centric Storage in Sensornets. In Proc. ACM SIGCOMM Workshop on Hot Topics In Networks, Princeton, NJ, 2002. [24] I. Stoica, R. Morris, D. Karger, M. F. Kaashoek, and H. Balakrishnan. Chord: A Scalable Peer-To-Peer Lookup Service for Internet Applications. In Proceedings of the ACM SIGCOMM, San Diego, CA, August 2001. [25] F. Ye, H. Luo, J. Cheng, S. Lu, and L. Zhang. A Two-Tier Data Dissemination Model for Large-scale Wireless Sensor Networks. In Proceedings of the Eighth Annual ACM/IEEE International Conference on Mobile Computing and Networking (Mobicom``02), Atlanta, GA, September 2002. 75
Multi-dimensional Range Queries in Sensor Networks * ABSTRACT In many sensor networks, data or events are named by attributes. Many of these attributes have scalar values, so one natural way to query events of interest is to use a multidimensional range query. An example is: "List all events whose temperature lies between 50 ◦ and 60 ◦, and whose light levels lie between 10 and 15." Such queries are useful for correlating events occurring within the network. In this paper, we describe the design of a distributed index that scalably supports multi-dimensional range queries. Our distributed index for multi-dimensional data (or DIM) uses a novel geographic embedding of a classical index data structure, and is built upon the GPSR geographic routing algorithm. Our analysis reveals that, under reasonable assumptions about query distributions, DIMs scale quite well with network size (both insertion and query costs scale as O (√ N)). In detailed simulations, we show that in practice, the insertion and query costs of other alternatives are sometimes an order of magnitude more than the costs of DIMs, even for moderately sized network. Finally, experiments on a small scale testbed validate the feasibility of DIMs. 1. INTRODUCTION In wireless sensor networks, data or events will be named by attributes [15] or represented as virtual relations in a distributed database [18, 3]. Many of these attributes will have scalar values: e.g., temperature and light levels, soil moisture conditions, etc. . In these systems, we argue, one natural way to query for events of interest will be to use multi-dimensional range queries on these attributes. For example, scientists analyzing the growth of marine microorganisms might be interested in events that occurred within certain temperature and light conditions: "List all events that have temperatures between 50 ◦ F and 60 ◦ F, and light levels between 10 and 20". Such range queries can be used in two distinct ways. They can help users efficiently drill-down their search for events of interest. The query described above illustrates this, where the scientist is presumably interested in discovering, and perhaps mapping the combined effect of temperature and light on the growth of marine micro-organisms. More importantly, they can be used by application software running within a sensor network for correlating events and triggering actions. For example, if in a habitat monitoring application, a bird alighting on its nest is indicated by a certain range of thermopile sensor readings, and a certain range of microphone readings, a multi-dimensional range query on those attributes enables higher confidence detection of the arrival of a flock of birds, and can trigger a system of cameras. In traditional database systems, such range queries are supported using pre-computed indices. Indices trade-off some initial pre-computation cost to achieve a significantly more efficient querying capability. For sensor networks, we assert that a centralized index for multi-dimensional range queries may not be feasible for energy-efficiency reasons (as well as the fact that the access bandwidth to this central index will be limited, particularly for queries emanating from within the network). Rather, we believe, there will be situations when it is more appropriate to build an innetwork distributed data structure for efficiently answering multi-dimensional range queries. In this paper, we present just such a data structure, that we call a DIM'. 
DIMs are inspired by classical database indices, and are essentially embeddings of such indices within the sensor network. DIMs leverage two key ideas: in-network data centric storage, and a novel locality-preserving geographic hash (Section 3). DIMs trace their lineage to datacentric storage systems [23]. The underlying mechanism in these systems allows nodes to consistently hash an event to some location within the network, which allows efficient retrieval of events. Building upon this, DIMs use a technique whereby events whose attribute values are "close" are likely to be stored at the same or nearby nodes. DIMs then use an underlying geographic routing algorithm (GPSR [16]) to route events and queries to their corresponding nodes in an entirely distributed fashion. We discuss the design of a DIM, presenting algorithms for event insertion and querying, for maintaining a DIM in the event of node failure, and for making DIMs robust to data or packet loss (Section 3). We then extensively evaluate DIMs using analysis (Section 4), simulation (Section 5), and actual implementation (Section 6). Our analysis reveals that, under reasonable assumptions about query distributions, DIMs scale quite well with network size (both insertion and query √ costs scale as O (N)). In detailed simulations, we show that in practice, the event insertion and querying costs of other alternatives are sometimes an order of magnitude the costs of DIMs, even for moderately sized network. Experiments on a small scale testbed validate the feasibility of DIMs (Section 6). Much work remains, including efficient support for skewed data distributions, existential queries, and node heterogeneity. We believe that DIMs will be an essential, but perhaps not necessarily the only, distributed data structure supporting efficient queries in sensor networks. DIMs will be part of a suite of such systems that enable feature extraction [7], simple range querying [10], exact-match queries [23], or continuous queries [15, 18]. All such systems will likely be integrated to a sensor network database system such as TinyDB [17]. Application designers could then choose the appropriate method of information access. For instance, a fire tracking application would use DIM to detect the hotspots, and would then use mechanisms that enable continuous queries [15, 18] to track the spatio-temporal progress of the hotspots. Finally, we note that DIMs are applicable not just to sensor networks, but to other deeply distributed systems (embedded networks for home and factory automation) as well. 2. RELATED WORK The basic problem that this paper addresses--multidimensional range queries--is typically solved in database systems using indexing techniques. The database community has focused mostly on centralized indices, but distributed indexing has received some attention in the literature. Indexing techniques essentially trade-off some data insertion cost to enable efficient querying. Indexing has, for long, been a classical research problem in the database community [5, 2]. Our work draws its inspiration from the class of multi-key constant branching index structures, exemplified by k-d trees [2], where k represents the dimensionality of the data space. Our approach essentially represents a geographic embedding of such structures in a sensor field. There is one important difference. The classical indexing structures are data-dependent (as are some indexing schemes that use locality preserving hashes, and developed in the theory literature [14, 8, 13]). 
The index structure is decided not only by the data, but also by the order in which data is inserted. Our current design is not data dependent. Finally, tangentially related to our work is the class of spatial indexing systems [21, 6, 11]. While there has been some work on distributed indexing, the problem has not been extensively explored. There exist distributed indices of a restricted kind--those that allow exact match or partial prefix match queries. Examples of such systems, of course, are the Internet Domain Name System, and the class of distributed hash table (DHT) systems exemplified by Freenet [4], Chord [24], and CAN [19]. Our work is superficially similar to CAN in that both construct a zone-based overlay atop of the underlying physical network. The underlying details make the two systems very different: CAN's overlay is purely logical while our overlay is consistent with the underlying physical topology. More recent work in the Internet context has addressed support for range queries in DHT systems [1, 12], but it is unclear if these directly translate to the sensor network context. Several research efforts have expressed the vision of a database interface to sensor networks [9, 3, 18], and there are examples of systems that contribute to this vision [18, 3, 17]. Our work is similar in spirit to this body of literature. In fact, DIMs could become an important component of a sensor network database system such as TinyDB [17]. Our work departs from prior work in this area in two significant respects. Unlike these approaches, in our work the data generated at a node are hashed (in general) to different locations. This hashing is the key to scaling multi-dimensional range searches. In all the other systems described above, queries are flooded throughout the network, and can dominate the total cost of the system. Our work avoids query flooding by an appropriate choice of hashing. Madden et al. [17] also describe a distributed index, called Semantic Routing Trees (SRT). This index is used to direct queries to nodes that have detected relevant data. Our work differs from SRT in three key aspects. First, SRT is built on single attributes while DIM supports mulitple attributes. Second, SRT constructs a routing tree based on historical sensor readings, and therefore works well only for slowlychanging sensor values. Finally, in SRT queries are issued from a fixed node while in DIM queries can be issued from any node. A similar differentiation applies with respect to work on data-centric routing in sensor networks [15, 25], where data generated at a node is assumed to be stored at the node, and queries are either flooded throughout the network [15], or each source sets up a network-wide overlay announcing its presence so that mobile sinks can rendezvous with sources at the nearest node on the overlay [25]. These approaches work well for relatively long-lived queries. Finally, our work is most close related to data-centric storage [23] systems, which include geographic hash-tables (GHTs) [20], DIMENSIONS [7], and DIFS [10]. In a GHT, data is hashed by name to a location within the network, enabling highly efficient rendezvous. GHTs are built upon the GPSR [16] protocol and leverage some interesting properties of that protocol, such as the ability to route to a node nearest to a given location. We also leverage properties in GPSR (as we describe later), but we use a locality-preserving hash to store data, enabling efficient multi-dimensional range queries. 
DIMENSIONS and DIFS can be thought of as using the same set of primitives as GHT (storage using consistent hashing), but for different ends: DIMENSIONS allows drill down search for features within a sensor network, while DIFS allows range queries on a single key in addition to other operations. 3. THE DESIGN OF DIMS Most sensor networks are deployed to collect data from the environment. In these networks, nodes (either individually or collaboratively) will generate events. An event can generally be described as a tuple of attribute values, (A1, A2, • • •, Ak), where each attribute Ai represents a sensor reading, or some value corresponding to a detection (e.g., a confidence level). The focus of this paper is the design of systems to efficiently answer multi-dimensional range queries of the form: (x1--y1, x2--y2, • • •, xk--yk). Such a query returns all events whose attribute values fall into the corresponding ranges. Notice that point queries, i.e., queries that ask for events with specified values for each attribute, are a special case of range queries. As we have discussed in Section 1, range queries can enable efficient correlation and triggering within the network. It is possible to implement range queries by flooding a query within the network. However, as we show in later sections, this alternative can be inefficient, particularly as the system scales, and if nodes within the network issue such queries relatively frequently. The other alternative, sending all events to an external storage node results in the access link being a bottleneck, especially if nodes within the network issue queries. Shenker et al. [23] also make similar arguments with respect to data-centric storage schemes in general; DIMs are an instance of such schemes. The system we present in this paper, the DIM, relies upon two foundations: a locality-preserving geographic hash, and an underlying geographic routing scheme. The key to resolving range queries efficiently is data locality: i.e., events with comparable attribute values are stored nearby. The basic insight underlying DIM is that data locality can be obtained by a locality-preserving geographic hash function. Our geographic hash function finds a localitypreserving mapping from the multi-dimensional space (described by the set of attributes) to a 2-d geographic space; this mapping is inspired by k-d trees [2] and is described later. Moreover, each node in the network self-organizes to claim part of the attribute space for itself (we say that each node owns a zone), so events falling into that space are routed to and stored at that node. Having established the mapping, and the zone structure, DIMs use a geographic routing algorithm previously developed in the literature to route events to their corresponding nodes, or to resolve queries. This algorithm, GPSR [16], essentially enables the delivery of a packet to a node at a specified location. The routing mechanism is simple: when a node receives a packet destined to a node at location X, it forwards the packet to the neighbor closest to X. In GPSR, this is called greedy-mode forwarding. When no such neighbor exists (as when there exists a void in the network), the node starts the packet on a perimeter mode traversal, using the well known right-hand rule to circumnavigate voids. GPSR includes efficient techniques for perimeter traversal that are based on graph planarization algorithms amenable to distributed implementation. 
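To make the GPSR substrate concrete, the sketch below shows the greedy-mode forwarding decision just described: a node forwards a packet to whichever radio neighbor is geographically closest to the destination, and falls back to perimeter mode when no neighbor is closer than itself. This is a minimal illustration under our own naming, not the GPSR implementation used in this paper.

```python
import math

def dist(p, q):
    """Euclidean distance between two (x, y) positions."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def greedy_next_hop(self_pos, neighbor_positions, dest):
    """GPSR greedy mode: pick the neighbor strictly closer to dest than we are.

    neighbor_positions: dict mapping neighbor id -> (x, y).
    Returns a neighbor id, or None to signal that the packet must enter
    perimeter mode (right-hand-rule traversal around the void).
    """
    best_id, best_d = None, dist(self_pos, dest)
    for nid, pos in neighbor_positions.items():
        d = dist(pos, dest)
        if d < best_d:
            best_id, best_d = nid, d
    return best_id  # None => no neighbor is closer: switch to perimeter mode

# Example: a node at (5, 5) forwarding toward the centroid of a zone at (9, 2).
neighbors = {"a": (6.0, 4.0), "b": (4.0, 6.5), "c": (5.5, 7.0)}
print(greedy_next_hop((5.0, 5.0), neighbors, (9.0, 2.0)))  # -> "a"
```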
For all of this to work, DIMs make two assumptions that are consistent with the literature [23]. First, all nodes know the approximate geographic boundaries of the network. These boundaries may either be configured in nodes at the time of deployment, or may be discovered using a simple protocol. Second, each node knows its geographic location. Node locations can be automatically determined by a localization system or by other means. Although the basic idea of DIMs may seem straightforward, it is challenging to design a completely distributed data structure that must be robust to packet losses and node failures, yet must support efficient query distribution and deal with communication voids and obstacles. We now describe the complete design of DIMs. 3.1 Zones The key idea behind DIMs, as we have discussed, is a geographic locality-preserving hash that maps a multi-attribute event to a geographic zone. Intuitively, a zone is a subdivision of the geographic extent of a sensor field. A zone is defined by the following constructive procedure. Consider a rectangle R on the x-y plane. Intuitively, R is the bounding rectangle that contains all sensors withing the network. We call a sub-rectangle Z of R a zone, if Z is obtained by dividing R k times, k> 0, using a procedure that satisfies the following property: After the i-th division, 0 <i <k, R is partitioned into 2i equal sized rectangles. If i is an odd (even) number, the i-th division is parallel to the y-axis (x-axis). That is, the bounding rectangle R is first sub-divided into two zones at level 0 by a vertical line that splits R into two equal pieces, each of these sub-zones can be split into two zones at level 1 by a horizontal line, and so on. We call the non-negative integer k the level of zone Z, i.e. level (Z) = k. A zone can be identified either by a zone code code (Z) or by an address addr (Z). The code code (Z) is a 0-1 bit string of length level (Z), and is defined as follows. If Z lies in the left half of R, the first (from the left) bit of code (Z) is 0, else 1. If Z lies in the bottom half of R, the second bit of code (Z) is 0, else 1. The remaining bits of code (Z) are then recursively defined on each of the four quadrants of R. This definition of the zone code matches the definition of zones given above, encoding divisions of the sensor field geography by bit strings. Thus, in Figure 2, the zone in the top-right corner of the rectangle R has a zone code of 1111. Note that the zone codes collectively define a zone tree such that individual zones are at the leaves of this tree. The address of a zone Z, addr (Z), is defined to be the centroid of the rectangle defined by Z. The two representations of a zone (its code and its address) can each be computed from the other, assuming the level of the zone is known. Two zones are called sibling zones if their zone codes are the same except for the last bit. For example, if code (Z1) = 01101 and code (Z2) = 01100, then Z1 and Z2 are sibling zones. The sibling subtree of a zone is the subtree rooted at the left or right sibling of the zone in the zone tree. We uniquely define a backup zone for each zone as follows: if the sibling subtree of the zone is on the left, the backup zone is the right-most zone in the sibling subtree; otherwise, the backup zone is the left-most zone in the sibling subtree. For a zone Z, let p be the first level (Z)--1 digits of code (Z). Let backup (Z) be the backup zone of zone Z. 
If code (Z) = p1, code (backup (Z)) = p01 * with the most number of trailing 1's (* means 0 or 1 occurrences). If code (Z) = p0, code (backup (Z)) = p10 ∗ with the most number of trailing 0's. 3.2 Associating Zones with Nodes Our definition of a zone is independent of the actual distribution of nodes in the sensor field, and only depends upon the geographic extent (the bounding rectangle) of the sensor field. Now we describe how zones are mapped to nodes. Conceptually, the sensor field is logically divided into zones and each zone is assigned to a single node. If the sensor network were deployed in a grid-like (i.e., very regular) fashion, then it is easy to see that there exists a k such that each node maps into a distinct level-k zone. In general, however, the node placements within a sensor field are likely to be less regular than the grid. For some k, some zones may be empty and other zones might have more than one node situated within them. One alternative would have been to choose a fixed k for the overall system, and then associate nodes with the zones they are in (and if a zone is empty, associate the "nearest" node with it, for some definition of "nearest"). Because it makes our overall query routing system simpler, we allow nodes in a DIM to map to different-sized zones. To precisely understand the associations between zones and nodes, we define the notion of zone ownership. For any given placement of network nodes, consider a node A. Let ZA to be the largest zone that includes only node A and no other node. Then, we say that A owns ZA. Notice that this definition of ownership may leave some sections of the sensor field un-associated with a node. For example, in Figure 2, the zone 110 does not contain any nodes and would not have an owner. To remedy this, for any empty zone Z, we define the owner to be the owner of backup (Z). In our example, that empty zone's owner would also be the node that owns 1110, its backup zone. Having defined the association between nodes and zones, the next problem we tackle is: given a node placement, does there exist a distributed algorithm that enables each node to determine which zones it owns, knowing only the overall boundary of the sensor network? In principle, this should be relatively straightforward, since each node can simply determine the location of its neighbors, and apply simple geometric methods to determine the largest zone around it such that no other node resides in that zone. In practice, however, communication voids and obstacles make the algorithm much more challenging. In particular, resolving the ownership of zones that do not contain any nodes is complicated. Equally complicated is the case where the zone of a node is larger than its communication radius and the node cannot determine the boundaries of its zone by local communication alone. Our distributed zone building algorithm defers the resolution of such zones until when either a query is initiated, or when an event is inserted. The basic idea behind our algorithm is that each node tentatively builds up an idea of the zone it resides in just by communicating with its neighbors (remembering which boundaries of the zone are "undecided" because there is no radio neighbor that can help resolve that boundary). These undecided boundaries are later resolved by a GPSR perimeter traversal when data messages are actually routed. We now describe the algorithm, and illustrate it using examples. In our algorithm, each node uses an array bound [0. 
.3] to maintain the four boundaries of the zone it owns (remem Figure 1: A network, where circles represent sensor nodes and dashed lines mark the network boundary. Figure 2: The zone code and boundaries. Figure 3: The Corresponding Zone Tree ber that in this algorithm, the node only tries to determine the zone it resides in, not the other zones it might own because those zones are devoid of nodes). When a node starts up, each node initializes this array to be the network boundary, i.e., initially each node assumes its zone contains the whole network. The zone boundary algorithm now relies upon GPSR's beacon messages to learn the locations of neighbors within radio range. Upon hearing of such a neighbor, the node calls the algorithm in Figure 4 to update its zone boundaries and its code accordingly. In this algorithm, we assume that A is the node at which the algorithm is executed, ZA is its zone, and a is a newly discovered neighbor of A. (Procedure CONTAiN (ZA, a) is used to decide if node a is located within the current zone boundaries of node A). Using this algorithm, then, each node can independently and asynchronously decide its own tentative zone based on the location of its neighbors. Figure 2 illustrates the results of applying this algorithm for the network in Figure 1. Figure 3 describes the corresponding zone tree. Each zone resides at a leaf node and the code of a zone is the path from the root to the zone if we represent the branch to the left Figure 4: Zone Boundary Determination, where A.x and A.y represent the geographic coordinate of node Figure 5: Inserting an event in a DIM. Procedure CLOSER (A, B, m) returns true if code (A) is closer to code (m) than code (B). source (m) is used to set the source address of message m. child by 0 and the branch to the right child by 1. This binary tree forms the index that we will use in the following event and query processing procedures. We see that the zone sizes are different and depend on the local densities and so are the lengths of zone codes for different nodes. Notice that in Figure 2, there is an empty zone whose code should be 110. In this case, if the node in zone 1111 can only hear the node in zone 1110, it sets its boundary with the empty zone to undecided, because it did not hear from any neighboring nodes from that direction. As we have mentioned before, the undecided boundaries are resolved using GPSR's perimeter mode when an event is inserted, or a query sent. We describe event insertion in the next step. Finally, this description does not describe how a node's zone codes are adjusted when neighboring nodes fail, or new nodes come up. We return to this in Section 3.5. 3.3 Inserting an Event In this section, we describe how events are inserted into a DIM. There are two algorithms of interest: a consistent hashing technique for mapping an event to a zone, and a routing algorithm for storing the event at the appropriate zone. As we shall see, these two algorithms are inter-related. 3.3.1 Hashing an Event to a Zone In Section 3.1, we described a recursive tessellation of the geographic extent of a sensor field. We now describe a consistent hashing scheme for a DIM that supports range queries on m distinct attributes2 Let us denote these attributes A1...Am. For simplicity, assume for now that the depth of every zone in the network is k, k is a multiple of m, and that this value of k is known to every node. We will relax this assumption shortly. 
Furthermore, for ease of discussion, we assume that all attribute values have been normalized to be between 0 and 1. (DIM does not assume that all nodes are homogeneous in terms of the sensors they have. Thus, in an m-dimensional DIM, a node that does not possess all m sensors can use NULL values for the corresponding readings; DIM treats NULL as an extreme value for range comparisons. As an aside, a network may have many DIM instances running concurrently.) Our hashing scheme assigns a k-bit zone code to an event as follows. For i between 1 and m, if Ai < 0.5, the i-th bit of the zone code is assigned 0, else 1. For i between m + 1 and 2m, if Ai−m < 0.25 or Ai−m ∈ [0.5, 0.75), the i-th bit of the zone code is assigned 0, else 1, because the next-level divisions are at 0.25 and 0.75, which divide the ranges into [0, 0.25), [0.25, 0.5), [0.5, 0.75), and [0.75, 1). We repeat this procedure until all k bits have been assigned. As an example, consider event E = (0.3, 0.8). For this event, the 5-bit zone code is code(E) = 01110. Essentially, our hashing scheme uses the values of the attributes in round-robin fashion on the zone tree (such as the one in Figure 3), in order to map an m-attribute event to a zone code. This is reminiscent of k-d trees [2], but is quite different from that data structure: zone trees are spatial embeddings and do not incorporate the re-balancing algorithms in k-d trees. In our design of DIMs, we do not require nodes to have zone codes of the same length, nor do we expect a node to know the zone codes of other nodes. Rather, suppose the encoding node is A and its own zone code is of length kA. Then, given an event E, node A only hashes E to a zone code of length kA. We denote the zone code assigned to an event E by code(E). As we describe below, as the event is routed, code(E) is refined by intermediate nodes. This lazy evaluation of zone codes allows different nodes to use different length zone codes without any explicit coordination.

3.3.2 Routing an Event to its Owner

The aim of hashing an event to a zone code is to store the event at the node within the network that owns that zone. We call this node the owner of the event. Consider an event E that has just been generated at a node A. After encoding event E, node A compares code(E) with code(A). If the two are identical, node A stores event E locally; otherwise, node A will attempt to route the event to its owner. To do this, note that code(E) corresponds to some zone Z', which is A's current guess for the zone at which event E should be stored. A now invokes GPSR to send a message to addr(Z') (the centroid of Z', Section 3.1). The message contains the event E, code(E), and the target geographic location for storing the event. In the message, A also marks itself as the owner of event E. As we will see later, the guessed zone Z', the address addr(Z'), and the owner of E, all of them contained in the message, will be refined by intermediate forwarding nodes. GPSR now delivers this message to the next hop towards addr(Z') from A. This next hop node (call it B) does not immediately forward the message. Rather, it attempts to compute a new zone code for E to get a new code, codenew(E). B will update the code contained in the message (and also the geographic destination of the message) if codenew(E) is longer than the event code in the message. In this manner, as the event wends its way to its owner, its zone code gets refined.
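The following sketch illustrates this hashing step, which both the generating node and each forwarding node perform: attribute values (normalized to [0, 1)) are consumed round-robin, and each bit records which half of the attribute's current sub-interval the value falls in. The function name and structure are ours, not the paper's; the sketch ignores NULL attributes and assumes the encoding node wants a code of length k.

```python
def event_zone_code(event, k):
    """Hash an event (tuple of attribute values in [0, 1)) to a k-bit zone code.

    Bit i refines attribute (i mod m), halving that attribute's current
    interval each time the attribute is revisited, exactly as in the zone tree.
    """
    m = len(event)
    intervals = [(0.0, 1.0)] * m   # current sub-interval per attribute
    bits = []
    for i in range(k):
        a = i % m                  # round-robin over attributes
        lo, hi = intervals[a]
        mid = (lo + hi) / 2.0
        if event[a] < mid:
            bits.append("0")
            intervals[a] = (lo, mid)
        else:
            bits.append("1")
            intervals[a] = (mid, hi)
    return "".join(bits)

# Reproduces the example from the text: E = (0.3, 0.8) with a 5-bit code.
assert event_zone_code((0.3, 0.8), 5) == "01110"
```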
Now, B compares its own code code (B) against the owner code owner (E) contained in the incoming message. If code (B) has a longer match with code (E) than the current owner owner (E), then B sets itself to be the current owner of E, meaning that if nobody is eligible to store E, then B will store the event (we shall see how this happens next). If B's zone code does not exactly match code (E), B will invoke GPSR to deliver E to the next hop. Figure 6: Nodes A and B have claimed the same zone. Suppose that some node, say C, finds itself to be the destination (or eventual owner) of an event E. It does so by noticing that code code (C) equals code (E) after locally recomputing a code for E. In that case, C stores E locally, but only if all four of C's zone boundaries are decided. When this condition holds, C knows for sure that no other nodes have overlapped zones with it. In this case, we call C an internal node. Recall, though, that because the zone discovery algorithm Section 3.2 only uses information from immediate neighbors, one or more of C's boundaries may be undecided. If so, C assumes that some other nodes have a zone that overlaps with its own, and sets out to resolve this overlap. To do this, C now sets itself to be the owner of E and continues forwarding the message. Here we rely on GPSR's perimeter mode routing to probe around the void that causes the undecided boundary. Since the message starts from C and is destined for a geographic location near C, GPSR guarantees that the message will be delivered back to C if no other nodes will update the information in the message. If the message comes back to C with itself to be the owner, C infers that it must be the true owner of the zone and stores E locally. If this does not happen, there are two possibilities. The first is that as the event traverses the perimeter, some intermediate node, say B whose zone overlaps with C's marks itself to be the owner of the event, but otherwise does not change the event's zone code. This node also recognizes that its own zone overlaps with C's and initiates a message exchange which causes each of them to appropriately shrink their zone. Figures 6 through 8 show an example of this data-driven zone shrinking. Initially, both node A and node B have claimed the same zone 0 because they are out of radio range of each other. Suppose that A inserts an event E = (0.4, 0.8, 0.9). A encodes E to 0 and claims itself to be the owner of E. Since A is not an internal node, it sends out E, looking for other owner candidates of E. Once E gets to node B, B will see in the message's owner field A's code that is the same as its own. B then shrinks its zone from 0 to 01 according to A's location which is also recorded in the message and send a shrink request to A. Upon receiving this request, A also shrinks its zone from 0 to 00. A second possibility is if some intermediate node changes the destination code of E to a more specific value (i.e., longer zone code). Let us label this node D. D now tries to initiate delivery to the centroid of the new zone. This Figure 7: An event/query message (filled arrows) triggers zone shrinking (hollow arrows). Figure 8: The zone layout after shrinking. Now node A and B have been mapped to different zones. might result in a new perimeter walk that returns to D (if, for example, D happens to be geographically closest to the centroid of the zone). However, D would not be the owner of the event, which would still be C. 
In routing to the centroid of this zone, the message may traverse the perimeter and return to D. Now D notices that C was the original owner, so it encapsulates the event and directs it to C. In case that there indeed is another node, say X, that owns an overlapped zone with C, X will notice this fact by finding in the message the same prefix of the code of one of its zones, but with a different geographic location from its own. X will shrink its zone to resolve the overlap. If X's zone is smaller than or equal to C's zone, X will also send a" shrink" request to C. Once C receives a shrink request, it will reduce its zone appropriately and fix its "undecided" boundary. In this manner, the zone formation process is resolved on demand in a data-driven way. There are several interesting effects with respect to perimeter walking that arise in our algorithm. The first is that there are some cases where an event insertion might cause the entire outer perimeter of the network to be traversed3. Figure 6 also works as an example where the outer perimeter is traversed. Event E inserted by A will eventually be stored in node B. Before node B stores event E, if B's nominal radio range does not intersect the network boundary, it needs to send out E again as A did, because B in this case is not an internal node. But if B's nominal radio range intersects the network boundary, it then has two choices. It can assume that there will not be any nodes outside the network boundary and so B is an internal node. This is an aggressive approach. On the other hand, B can also make a conservative decision assuming that there might be some other nodes it have not heard of yet. B will then force the message walking another perimeter before storing it. In some situations, especially for large zones where the node that owns a zone is far away from the centroid of the owned zone, there might exist a small perimeter around the destination that does not include the owner of the zone. The event will end up being stored at a different node than the real owner. In order to deal with this problem, we add an extra operation in event forwarding, called efficient neighbor discovery. Before invoking GPSR, a node needs to check if there exists a neighbor who is eligible to be the real owner of the event. To do this, a node C, say, needs to know the zone codes of its neighboring nodes. We deploy GPSR's beaconing message to piggyback the zone codes for nodes. So by simply comparing the event's code and neighbor's code, a node can decide whether there exists a neighbor Y which is more likely to be the owner of event E. C delivers E to Y, which simply follows the decision making procedure discussed above. 3.3.4 Summary and Pseudo-code In summary, our event insertion procedure is designed to nicely interact with the zone discovery mechanism, and the event hashing mechanism. The latter two mechanisms are kept simple, while the event insertion mechanism uses lazy evaluation at each hop to refine the event's zone code, and it leverages GPSR's perimeter walking mechanism to fix undecided zone boundaries. In Section 3.5, we address robustness of event insertion to packet loss or to node failures. Figure 5 shows the pseudo-code for inserting and forwarding an event e. In this pseudo code, we have omitted a description of the zone shrinking procedure. 
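The ownership and efficient-neighbor-discovery checks above both boil down to comparing zone-code prefixes: a node is a better owner candidate for an event the longer the common prefix between its zone code and the event's code. Here is a small sketch of that comparison, under our own naming (the paper's procedures are only described in prose here):

```python
def common_prefix_len(a, b):
    """Length of the common prefix of two 0/1 zone-code strings."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def better_owner_candidate(event_code, my_code, neighbor_codes):
    """Return the id of a neighbor whose zone code matches the event code on a
    longer prefix than ours, or None if no neighbor is more eligible.

    neighbor_codes: dict mapping neighbor id -> zone code (learned from the
    zone codes piggybacked on GPSR beacons).
    """
    best_id, best_len = None, common_prefix_len(event_code, my_code)
    for nid, code in neighbor_codes.items():
        l = common_prefix_len(event_code, code)
        if l > best_len:
            best_id, best_len = nid, l
    return best_id

# A node with code 0110 holding an event coded 01111 would hand it to a
# neighbor whose code is 0111, since that neighbor matches one more bit.
print(better_owner_candidate("01111", "0110", {"n1": "0111", "n2": "00"}))  # -> "n1"
```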
In the pseudo code, procedure IS INTERNAL () is used to determine if the caller is an internal node and procedure IS OWNER () is used to determine if the caller is more eligible to be the owner of the event than is currently claimed owner as recorded in the message. Procedure SEND-MESSAGE is used to send either an event message or a query message. If the message destination address has been changed, the packet source address needs also to be changed in order to avoid being dropped by GPSR, since GPSR does not allow a node to see the same packet in greedy mode twice. 3This happens less frequently than for GHTs, where inserting an event to a location outside the actual (but inside the nominal) boundary of the network will always invoke an external perimeter walk. 3.4 Resolving and Routing Queries DIMs support both point queries4 and range queries. Routing a point query is identical to routing an event. Thus, the rest of this section details how range queries are routed. The key challenge in routing zone queries is brought out by the following strawman design. If the entire network was divided evenly into zones of depth k (for some pre-defined constant k), then the querier (the node issuing the query) could subdivide a given range query into the relevant subzones and route individual requests to each of the zones. This can be inefficient for large range queries and also hard to implement in our design where zone sizes are not predefined. Accordingly, we use a slightly different technique where a range query is initially routed to a zone corresponding to the entire range, and is then progressively split into smaller subqueries. We describe this algorithm here. The first step of the algorithm is to map a range query to a zone code prefix. Conceptually, this is easy; in a zone tree (Figure 3), there exists some node which contains the entire range query in its sub-tree, and none of its children in the tree do. The initial zone code we choose for the query is the zone code corresponding to that tree node, and is a prefix of the zone codes of all zones (note that these zones may not be geographically contiguous) in the subtree. The querier computes the zone code of Q, denoted by code (Q) and then starts routing a query to addr (code (Q)). Upon receiving a range query Q, a node A (where A is any node on the query propagation path) divides it into multiple smaller sized subqueries if there is an overlap between the zone of A, zone (A) and the zone code associated with Q, code (Q). Our approach to split a query Q into subqueries is as follows. If the range of Q's first attribute contains the value 0.5, A divides Q into two sub-queries one of whose first attribute ranges from 0 to 0.5, and the other from 0.5 to 1. Then A decides the half that overlaps with its own zone. Let's call it QA. If QA does not exist, then A stops splitting; otherwise, it continues splitting (using the second attribute range) and recomputing QA until QA is small enough so that it completely falls into zone (A) and hence A can now resolve it. For example, suppose that node A, whose code is 0110, is to split a range query Q = (0.3 - 0.8, 0.6 - 0.9). The splitting steps is shown in Figure 2. After splitting, we obtain three smaller queries q0 = (0.3 - 0.5, 0.6 - 0.75), q1 = (0.3 - 0.5, 0.75 - 0.9), and q2 = (0.5 - 0.8, 0.6 - 0.9). This splitting procedure is illustrated in Figure 9 which also shows the codes of each subquery after splitting. 
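A compact way to view the splitting rule is to walk the receiving node's own zone code bit by bit, splitting the query whenever its range for the corresponding attribute straddles that bit's dividing line, and keeping the half on the node's side. The sketch below is our own simplification (it leaves out routing, lazy splitting along the path, and undecided boundaries); it reproduces the example just given for node A with code 0110 and Q = (0.3-0.8, 0.6-0.9).

```python
def split_query(zone_code, query):
    """Split a range query at a node that owns the given zone code.

    query: list of (lo, hi) ranges, one per attribute.
    Returns (local_part, to_forward): the sub-query this node can resolve
    itself (None if the query does not overlap its zone at all), plus the
    list of sub-queries that must be forwarded onward.
    """
    m = len(query)
    zone = [(0.0, 1.0)] * m            # the node's interval per attribute
    local = [list(r) for r in query]   # shrinks toward the node's zone
    to_forward = []
    for i, bit in enumerate(zone_code):
        a = i % m                      # attributes are refined round-robin
        zlo, zhi = zone[a]
        mid = (zlo + zhi) / 2.0
        qlo, qhi = local[a]
        zone[a] = (zlo, mid) if bit == "0" else (mid, zhi)
        mine  = (qlo, mid) if bit == "0" else (mid, qhi)   # half on our side
        other = (mid, qhi) if bit == "0" else (qlo, mid)   # half on the far side
        if other[0] < other[1]:                 # query crosses this divider
            if mine[0] < mine[1]:               # straddles: peel off the far half
                piece = [tuple(r) for r in local]
                piece[a] = other
                to_forward.append(piece)
                local[a] = list(mine)
            else:                               # nothing left on our side
                return None, to_forward + [[tuple(r) for r in local]]
    return [tuple(r) for r in local], to_forward

local, forwarded = split_query("0110", [(0.3, 0.8), (0.6, 0.9)])
print(local)      # [(0.3, 0.5), (0.6, 0.75)]                            -> q0, answered locally
print(forwarded)  # [[(0.5, 0.8), (0.6, 0.9)], [(0.3, 0.5), (0.75, 0.9)]] -> q2 and q1
```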
A then replies to subquery q0 with data stored locally and sends subqueries q1 and q2 using the procedure outlined above. More generally, if node A finds itself to be inside the zone subtree that maximally covers Q, it will send the subqueries that resulted from the split. Otherwise, if there is no overlap between A and Q, then A forwards Q as is (in this case Q is either the original query, or a product of an earlier split). Figure 10 describes the pseudo-code for the zone splitting algorithm. As shown in the above algorithm, once a subquery has been recognized as belonging to the caller's zone, procedure RESOLVE is invoked to resolve the subquery and send a reply to the querier. Every query message contains the geographic location of its initiator, so the corresponding reply message can be delivered directly back to the initiator. Finally, in the process of query resolution, zones might shrink similar to shrinkage during inserting. We omit this in the pseudo code. 3.5 Robustness Until now, we have not discussed the impact of node failures and packet losses, or node arrivals and departures on our algorithms. Packet losses can affect query and event insertion, and node failures can result in lost data, while node arrivals and departures can impact the zone structure. We now discuss how DIMs can be made robust to these kinds of dynamics. 3.5.1 Maintaining Zones In previous sections, we described how the zone discovery algorithm could leave zone boundaries undecided. These undecided boundaries are resolved during insertion or querying, using the zone shrinking procedure describe above. When a new node joins the network, the zone discovery mechanism (Section 3.2) will cause neighboring zones to appropriately adjust their zone boundaries. At this time, those zones can also transfer to the new node those events they store but which should belong to the new node. Before a node turns itself off (if this is indeed possible), it knows that its backup node (Section 3.1) will take over its zone, and will simply send all its events to its backup node. Node deletion may also cause zone expansion. In order to keep the mapping between the binary zone tree's leaf nodes and zones, we allow zone expansion to only occur among sibling zones (Section 3.1). The rule is: if zone (A)'s sibling zone becomes empty, then A can expand its own zone to include its sibling zone. Now, we turn our attention to node failures. Node failures are just like node deletions except that a failed node does not have a chance to move its events to another node. But how does a node decide if its sibling has failed? If the sibling is within radio range, the absence of GPSR beaconing messages can detect this. Once it detects this, the node can expand its zone. A different approach is needed for detecting siblings who are not within radio range. These are the cases where two nodes own their zones after exchanging a shrink message; they do not periodically exchange messages thereafter to maintain this zone relationship. In this case, we detect the failure in a data-driven fashion, with obvious efficiency benefits compared to periodic keepalives. Once a node B has failed, an event or query message that previously should have been owned by the failed node will now be delivered to the node A that owns the empty zone left by node B. A can see this message because A stands right around the empty area left by B and is guaranteed to be visited in a GPSR perimeter traversal. 
A will set itself to be the owner of the message, and any node which would have dropped this message due to a perimeter loop will redirect the message to A instead. If A's zone happens to be the sibling of B's zone, A can safely expand its own zone and notify its expanded zone to its neighbors via GPSR beaconing messages. 3.5.2 Preventing Data Loss from Node Failure The algorithms described above are robust in terms of zone formation, but node failure can erase data. To avoid this, DIMs can employ two kinds of replication: local replication to be resilient to random node failures, and mirror replication for resilience to concurrent failure of geographically contiguous nodes. Mirror replication is conceptually easy. Suppose an event E has a zone code code (E). Then, the node that inserts E would store two copies of E; one at the zone denoted by code (E), and the other at the zone corresponding to the one's complement of code (E). This technique essentially creates a mirror DIM. A querier would need, in parallel, to query both the original DIM and its mirror since there is no way of knowing if a collection of nodes has failed. Clearly, the trade-off here is an approximate doubling of both insertion and query costs. There exists a far cheaper technique to ensure resilience to random node failures. Our local replication technique rests on the observation that, for each node A, there exists a unique node which will take over its zone when A fails. This node is defined as the node responsible for A's zone's backup zone (see Section 3.1). The basic idea is that A replicates each data item it has in this node. We call this node A's local replica. Let A's local replica be B. Often B will be a radio neighbor of A and can be detected from GPSR beacons. Sometimes, however, this is not the case, and B will have to be explicitly discovered. We use an explicit message for discovering the local replica. Discovering the local replica is data-driven, and uses a mechanism similar to that of event insertion. Node A sends a message whose geographic destination is a random nearby location chosen by A. The location is close enough to A such that GPSR will guarantee that the message will delivered back to A. In addition, the message has three fields, one for the zone code of A, code (A), one for the owner owner (A) of zone (A) which is set to be empty, and one for the geographic location of owner (A). Then the packet will be delivered in GPSR perimeter mode. Each node that receives this message will compare its zone code and code (A) in the message, and if it is more eligible to be the owner of zone (A) than the current owner (A) recorded in the message, it will update the field owner (A) and the corresponding geographic location. Once the packet comes back to A, it will know the location of its local replica and can start to send replicas. In a dense sensor network, the local replica of a node is usually very near to the node, either its direct neighbor or 1--2 hops away, so the cost of sending replicas to local replication will not dominate the network traffic. However, a node's local replica itself may fail. There are two ways to deal with this situation; periodic refreshes, or repeated datadriven discovery of local replicas. The former has higher overhead, but more quickly discovers failed replicas. 3.5.3 Robustness to Packet Loss Finally, the mechanisms for querying and event insertion can be easily made resilient to packet loss. For event insertion, a simple ACK scheme suffices. 
Of course, queries and responses can be lost as well. In this case, there exists an efficient approach for error recovery. This rests on the observation that the querier knows which zones fall within its query and should have responded (we assume that a node that has no data matching a query, but whose zone falls within the query, responds with a negative acknowledgment). After a conservative timeout, the querier can re-issue the queries selectively to these zones. If DIM cannot get any answers (positive or negative) from certain zones after repeated timeouts, it can at least return the partial query results to the application together with the information about the zones from which data is missing.

[Figure 9: An example of range query splitting.]
[Figure 10: Query resolving algorithm (pseudo-code).]

4. DIMS: AN ANALYSIS

In this section, we present a simple analytic performance evaluation of DIMs, and compare their performance against other possible approaches for implementing multi-dimensional range queries in sensor networks. In the next section, we validate these analyses using detailed packet-level simulations. Our primary metrics for the performance of a DIM are:

Average Insertion Cost measures the average number of messages required to insert an event into the network.

Average Query Delivery Cost measures the average number of messages required to route a query message to all the relevant nodes in the network. It does not measure the number of messages required to transmit responses to the querier; this latter number depends upon the precise data distribution and is the same for many of the schemes we compare DIMs against.

In DIMs, event insertion essentially uses geographic routing. In a dense N-node network where the likelihood of traversing perimeters is small, the average event insertion cost is proportional to √N [23]. On the other hand, the query delivery cost depends upon the size of ranges specified in the query. Recall that our query delivery mechanism is careful about splitting a query into sub-queries, doing so only when the query nears the zone that covers the query range. Thus, when the querier is far from the queried zone, there are two components to the query delivery cost. The first, which is proportional to √N, is the cost to deliver the query near the covering zone. The second arises within the covering zone: if that zone contains M nodes, the message delivery cost of splitting the query is proportional to M. The average cost of query delivery therefore depends upon the distribution of query range sizes. Now, suppose that query sizes follow some density function f(x); then the average cost of resolving a query can be approximated by ∫_1^N x f(x) dx. To give some intuition for the performance of DIMs, we consider four different forms for f(x): the uniform distribution, where a query range encompassing the entire network is as likely as a point query; a bounded uniform distribution, where all sizes up to a bound B are equally likely; an algebraic distribution, in which most queries are small but large queries are somewhat likely; and an exponential distribution, where most queries are small and large queries are unlikely. In all our analyses, we make the simplifying assumption that the size of a query is proportional to the number of nodes that can answer that query. For the uniform distribution, f(x) = c for some constant c. If each query size from 1 to N is equally likely, the average query delivery cost of uniformly distributed queries is O(N).
Thus, for uniformly distributed queries, the performance of DIMs is comparable to that of flooding. However, for the applications we envision, where nodes within the network are trying to correlate events, the uniform distribution is highly unrealistic. Somewhat more realistic is a situation where all query sizes are bounded by a constant B. In this case, the average cost for resolving such a query is approximately $\int_1^B x f(x)\,dx = O(B)$. Recall now that all queries have to pay an approximate cost of O(√N) to deliver the query near the covering zone. Thus, if DIM limited queries to a size proportional to √N, the average query cost would be O(√N). The algebraic distribution, where f(x) ∝ x^(−k) for some constant k between 1 and 2, has an average query resolution cost given by $\int_1^N x f(x)\,dx = O(N^{2-k})$. In this case, if k > 1.5, the average cost of query delivery is dominated by the cost to deliver the query to near the covering zone, given by O(√N). Finally, for the exponential distribution, f(x) = ce^(−cx) for some constant c, and the average cost is just the mean of the corresponding distribution, i.e., O(1) for large N. Asymptotically, then, the cost of the query for the exponential distribution is dominated by the cost to deliver the query near the covering zone (O(√N)). Thus, we see that if queries follow either the bounded uniform distribution, the algebraic distribution, or the exponential distribution, the query cost scales as the insertion cost (for appropriate choice of constants for the bounded uniform and the algebraic distributions). How well does the performance of DIMs compare against alternative choices for implementing multi-dimensional queries? A simple alternative is called external storage [23], where all events are stored centrally in a node outside the sensor network. This scheme incurs an insertion cost of O(√N), and a zero query cost. However, as [23] points out, such systems may be impractical in sensor networks since the access link to the external node becomes a hotspot. A second alternative implementation would store events at the node where they are generated. Queries are flooded throughout the network, and nodes that have matching data respond. Examples of systems that can be used for this (although, to our knowledge, these systems do not implement multi-dimensional range queries) are Directed Diffusion [15] and TinyDB [17]. The flooding scheme incurs a zero insertion cost, but an O(N) query cost. It is easy to show that DIMs outperform flooding as long as the ratio of the number of insertions to the number of queries is less than √N. A final alternative would be to use a geographic hash table (GHT [20]). In this approach, attribute values are assumed to be integers (this is actually quite a reasonable assumption since attribute values are often quantized), and events are hashed on some (say, the first) attribute. A range query is sub-divided into several sub-queries, one for each integer in the range of the first attribute. Each sub-query is then hashed to the appropriate location. The nodes that receive a sub-query only return events that match all other attribute ranges. In this approach, which we call GHT-R (GHTs for range queries), the insertion cost is O(√N). Suppose that the range of the first attribute contains r discrete values. Then the cost to deliver queries is O(r√N). Thus, asymptotically, GHT-Rs perform similarly to DIMs.
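To summarize the comparison, a back-of-the-envelope cost model such as the one below can contrast the total message counts of DIM, flooding, and GHT-R for a given workload. The asymptotic per-operation costs come from the analysis above, while the workload numbers and function names are purely illustrative; constants and the response traffic are deliberately ignored, so this is not a calibrated predictor.

```python
import math

def total_cost(scheme, N, inserts, queries, avg_query_size=0, r=1):
    """Rough message-cost model for the alternatives discussed above:
    insertion cost times the number of insertions, plus query delivery
    cost times the number of queries."""
    sqrtN = math.sqrt(N)
    if scheme == 'dim':        # insert O(sqrt(N)); query O(sqrt(N) + M)
        return inserts * sqrtN + queries * (sqrtN + avg_query_size)
    if scheme == 'flooding':   # insert locally; every query floods O(N)
        return inserts * 1 + queries * N
    if scheme == 'ght_r':      # insert O(sqrt(N)); query O(r * sqrt(N)) sub-queries
        return inserts * sqrtN + queries * (r * sqrtN)
    raise ValueError(scheme)

N, I, Q = 10_000, 30_000, 20_000
print('DIM     ', total_cost('dim', N, I, Q, avg_query_size=16))
print('Flooding', total_cost('flooding', N, I, Q))
print('GHT-R   ', total_cost('ght_r', N, I, Q, r=50))
```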
In practice, however, the proportionality constants are significantly different, and DIMs outperform GHT-Rs, as we shall show using detailed simulations. 5. DIMS: SIMULATION RESULTS Our analysis gives us some insight into the asymptotic behavior of various approaches for multi-dimensional range queries. In this section, we use simulation to compare DIMs against flooding and GHT-R; this comparison gives us a more detailed understanding of these approaches for moderate size networks, and gives us a nuanced view of the mechanistic differences between some of these approaches. 5.1 Simulation Methodology We use ns-2 for our simulations. Since DIMs are implemented on top of GPSR, we first ported an earlier GPSR implementation to the latest version of ns-2. We modified the GPSR module to call our DIM implementation when it receives any data message in transit or when it is about to drop a message because that message traversed the entire perimeter. This allows a DIM to modify message zone codes in flight (Section 3), and determine the actual owner of an event or query. In addition to this, we implemented in ns-2 most of the DIM mechanisms described in Section 3. Of those mechanisms, the only one we did not implement is mirror replication. We have implemented selective query retransmission for resiliency to packet loss, but have left the evaluation of this mechanism to future work. Our DIM implementation in ns-2 is 2800 lines of code. Finally, we implemented GHT-R, our GHT-based multidimensional range query mechanism, in ns-2. This implementation was relatively straightforward, given that we had ported GPSR, and modified GPSR to detect the completion of perimeter mode traversals. Using this implementation, we conducted a fairly extensive evaluation of DIM and two alternatives (flooding, and our GHT-R). For all our experiments, we use uniformly placed sensor nodes with network sizes ranging from 50 nodes to 300 nodes. Each node has a radio range of 40m. For the results presented here, each node has on average 20 nodes within its nominal radio range. We have conducted experiments at other node densities; they are in agreement with the results presented here. In all our experiments, each node first generates 3 events on average (more precisely, for a topology of size N, we have 3N events, and each node is equally likely to generate an event). We have conducted experiments for three different event value distributions. Our uniform event distribution generates 2-dimensional events and, for each dimension, every attribute value is equally likely. Our normal event distribution generates 2-dimensional events and, for each dimension, the attribute value is normally distributed with a mean corresponding to the mid-point of the attribute value range. The normal event distribution represents a skewed data set. Finally, our trace event distribution is a collection of 4-dimensional events obtained from a habitat monitoring network. As we shall see, this represents a fairly skewed data set. Having generated events, for each simulation we generate queries such that, on average, each node generates 2 queries. The query sizes are determined using the four size distributions we discussed in Section 4: uniform, bounded-uniform, algebraic, and exponential. Once a query size has been determined, the location of the query (i.e., the actual boundaries of the zone) is uniformly distributed.
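For reference, the uniform and normal ("skewed") event workloads just described can be mimicked with a few lines of code; the sketch below is our own illustration of that setup, and the standard deviation for the normal case is an arbitrary choice rather than a parameter reported in the paper.

```python
import random

def make_events(n_nodes, events_per_node=3, dist='uniform', dims=2):
    """Generate synthetic events with attribute values in [0, 1]:
    3 events per node on average, either uniformly distributed or
    normally distributed around the mid-point of the attribute range."""
    events = []
    for _ in range(n_nodes * events_per_node):
        if dist == 'uniform':
            ev = [random.random() for _ in range(dims)]
        elif dist == 'normal':   # mean at the mid-point of the range
            ev = [min(1.0, max(0.0, random.gauss(0.5, 0.15))) for _ in range(dims)]
        else:
            raise ValueError(dist)
        events.append(ev)
    return events

print(len(make_events(100)))   # 300 events for a 100-node topology
```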
For our GHT-R experiments, the dynamic range of the attributes had 100 discrete values, but we restricted the query range for any one attribute to 50 discrete values to allow those simulations to complete in reasonable time. Finally, using one set of simulations we evaluate the efficacy of local replication by turning off random fractions of nodes and measuring the fidelity of the returned results. The primary metrics for our simulations are the average query and insertion costs, as defined in Section 4. (Our metrics are chosen so that the exact number of events and queries is unimportant for our discussion. Of course, the overall performance of the system will depend on the relative frequency of events and queries, as we discuss in Section 4. Since we don't have realistic ratios for these, we focus on the microscopic costs, rather than on the overall system costs.) 5.2 Results Although we have examined almost all the combinations of factors described above, we discuss only the most salient ones here, for lack of space. Figure 11: Average insertion cost for DIM and GHT. Figure 12: Local replication performance. Figure 11 plots the average insertion costs for DIM and GHT-R (for flooding, of course, the insertion costs are zero). DIM incurs less per-event overhead in inserting events (regardless of the actual event distribution; Figure 11 shows the cost for uniformly distributed events). The reason for this is interesting. In GHT-R, storing almost every event incurs a perimeter traversal, and storing some events requires traversing the outer perimeter of the network [20]. By contrast, in DIM, storing an event incurs a perimeter traversal only when a node's boundaries are undecided. Furthermore, an insertion or a query in a DIM can traverse the outer perimeter (Section 3.3), but less frequently than in GHTs. Figure 13 plots the average query cost for a bounded uniform query size distribution. For this graph (and the next) we use a uniform event distribution, since the event distribution does not affect the query delivery cost. For this simulation, our bound was 1/4th the size of the largest possible query (e.g., a query of the form (0 − 0.5, 0 − 0.5)). Even for this generous query size, DIMs perform quite well (almost a third the cost of flooding). Notice, however, that GHT-Rs incur high query cost since almost any query requires as many subqueries as the width of the first attribute's range. Figure 14 plots the average query cost for the exponential distribution (the average query size for this distribution was set to be 1/16th the largest possible query). The superior scaling of DIMs is evident in these graphs. Clearly, this is the regime in which one might expect DIMs to perform best, when most of the queries are small and large queries are relatively rare. This is also the regime in which one would expect to use multi-dimensional range queries: to perform relatively tight correlations. As with the bounded uniform distribution, GHT query cost is dominated by the cost of sending sub-queries; for DIMs, the query splitting strategy works quite well in keeping overall query delivery costs low. Figure 12 describes the efficacy of local replication. To obtain this figure, we conducted the following experiment. On a 100-node network, we inserted a number of events uniformly distributed throughout the network, then issued a query covering the entire network and recorded the answers.
Knowing the expected answers for this query, we then successively removed a fraction f of nodes randomly, and re-issued the same query. The figure plots the fraction of expected responses actually received, with and without replication. As the graph shows, local replication performs well for random failures, returning almost 90% of the responses when up to 30% of the nodes have failed simultaneously. (The response rate could be much better than this: assuming a node and its replica don't often fail simultaneously, a node will almost always detect a replica failure and re-replicate, leading to near 100% response rates.) In the absence of local replication, of course, when 30% of the nodes fail, the response rate is only 70%, as one would expect. Figure 13: Average query cost with a bounded uniform query distribution. Figure 14: Average query cost with an exponential query distribution. We note that DIMs (as currently designed) are not perfect. When the data is highly skewed--as it was for our trace data set from the habitat monitoring application, where most of the event values fell within 10% of the attribute's range--a few DIM nodes will clearly become the bottleneck. This is depicted in Figure 15, which shows that for DIMs and GHT-Rs, the maximum number of transmissions at any network node (the hotspots) is rather high. (For less skewed data distributions, and reasonable query size distributions, the hotspot curves for all three schemes are comparable.) This is a standard problem that database indices have dealt with by tree re-balancing. In our case, simpler solutions might be possible (and we discuss this in Section 7). However, our use of the trace data demonstrates that DIMs work for events which have more than two dimensions. Increasing the number of dimensions does not noticeably degrade DIMs' query cost (omitted for lack of space). Also omitted are experiments examining the impact of several other factors, as they do not affect our conclusions in any way. As we expected, DIMs are comparable in performance to flooding when all sizes of queries are equally likely. For an algebraic distribution of query sizes, the relative performance is close to that for the exponential distribution. For normally distributed events, the insertion costs are comparable to those for the uniform distribution. Figure 15: Hotspot usage. Figure 16: Software architecture of DIM over GPSR. Figure 17: Number of events received for different query sizes. Figure 18: Query distribution cost. Finally, we note that in all our evaluations we have only used list queries (those that request all events matching the specified range). We expect that for summary queries (those that expect an aggregate over matching events), the overall cost of DIMs could be lower because the matching data are likely to be found in one or a small number of zones. We leave an understanding of this to future work. Also left to future work is a detailed understanding of the impact of location error on DIM's mechanisms. Recent work [22] has examined the impact of imprecise location information on other data-centric storage mechanisms such as GHTs, and found that there exist relatively simple fixes to GPSR that ameliorate the effects of location error.
Our GPSR implementation is full-featured, while our DIM implementation has most of the algorithms discussed in Section 3; some of the robustness extensions have only simpler variants implemented. The software architecture of DIM/GPSR system is shown in Figure 16. The entire system (about 5000 lines of code) is event-driven and multi-threaded. The DIM subsystem consists of six logical components: zone management, event maintenance, event routing, query routing, query processing, and GPSR interactions. The GPSR system is implemented as user-level daemon process. Applications are executed as clients. For the DIM subsystem, the GPSR module provides several extensions: it exports information about neighbors, and provides callbacks during packet forwarding and perimeter-mode termination. We tested our implementation on a testbed consisting of 8 PC-104 class machines. Each of these boxes runs Linux and uses a Mica mote (attached through a serial cable) for communication. These boxes are laid out in an office building with a total spatial separation of over a hundred feet. We manually measured the locations of these nodes relative to some coordinate system and configured the nodes with their location. The network topology is approximately a chain. On this testbed, we inserted queries and events from a single designated node. Our events have two attributes which span all combinations of the four values [0, 0.25, 0.75, 1] (sixteen events in all). Our queries span four sizes, returning 1, 4, 9 and 16 events respectively. Figure 17 plots the number of events received for different sized queries. It might appear that we received fewer events than expected, but this graph doesn't count the events that were already stored at the querier. With that adjustment, the number of responses matches our expectation. Finally, Figure 18 shows the total number of messages required for different query sizes on our testbed. While these experiments do not reveal as much about the performance range of DIMs as our simulations do, they nevertheless serve as proof-of-concept for DIMs. Our next step in the implementation is to port DIMs to the Mica motes, and integrate them into the TinyDB [17] sensor database engine on motes. 7. CONCLUSIONS In this paper, we have discussed the design and evaluation of a distributed data structure called DIM for efficiently resolving multi-dimensional range queries in sensor networks. Our design of DIMs relies upon a novel locality-preserving hash inspired by early work in database indexing, and is built upon GPSR. We have a working prototype, both of GPSR and DIM, and plan to conduct larger scale experiments in the future. There are several interesting future directions that we intend to pursue. One is adaptation to skewed data distributions, since these can cause storage and transmission hotspots. Unlike traditional database indices that re-balance trees upon data insertion, in sensor networks it might be feasible to re-structure the zones on a much larger timescale after obtaining a rough global estimate of the data distribution. Another direction is support for node heterogeneity in the zone construction process; nodes with larger storage space assert larger-sized zones for themselves. A third is support for efficient resolution of existential queries--whether there exists an event matching a multi-dimensional range.
Multi-dimensional Range Queries in Sensor Networks * ABSTRACT In many sensor networks, data or events are named by attributes. Many of these attributes have scalar values, so one natural way to query events of interest is to use a multidimensional range query. An example is: "List all events whose temperature lies between 50 ◦ and 60 ◦, and whose light levels lie between 10 and 15." Such queries are useful for correlating events occurring within the network. In this paper, we describe the design of a distributed index that scalably supports multi-dimensional range queries. Our distributed index for multi-dimensional data (or DIM) uses a novel geographic embedding of a classical index data structure, and is built upon the GPSR geographic routing algorithm. Our analysis reveals that, under reasonable assumptions about query distributions, DIMs scale quite well with network size (both insertion and query costs scale as O (√ N)). In detailed simulations, we show that in practice, the insertion and query costs of other alternatives are sometimes an order of magnitude more than the costs of DIMs, even for moderately sized network. Finally, experiments on a small scale testbed validate the feasibility of DIMs. 1. INTRODUCTION In wireless sensor networks, data or events will be named by attributes [15] or represented as virtual relations in a distributed database [18, 3]. Many of these attributes will have scalar values: e.g., temperature and light levels, soil moisture conditions, etc. . In these systems, we argue, one natural way to query for events of interest will be to use multi-dimensional range queries on these attributes. For example, scientists analyzing the growth of marine microorganisms might be interested in events that occurred within certain temperature and light conditions: "List all events that have temperatures between 50 ◦ F and 60 ◦ F, and light levels between 10 and 20". Such range queries can be used in two distinct ways. They can help users efficiently drill-down their search for events of interest. The query described above illustrates this, where the scientist is presumably interested in discovering, and perhaps mapping the combined effect of temperature and light on the growth of marine micro-organisms. More importantly, they can be used by application software running within a sensor network for correlating events and triggering actions. For example, if in a habitat monitoring application, a bird alighting on its nest is indicated by a certain range of thermopile sensor readings, and a certain range of microphone readings, a multi-dimensional range query on those attributes enables higher confidence detection of the arrival of a flock of birds, and can trigger a system of cameras. In traditional database systems, such range queries are supported using pre-computed indices. Indices trade-off some initial pre-computation cost to achieve a significantly more efficient querying capability. For sensor networks, we assert that a centralized index for multi-dimensional range queries may not be feasible for energy-efficiency reasons (as well as the fact that the access bandwidth to this central index will be limited, particularly for queries emanating from within the network). Rather, we believe, there will be situations when it is more appropriate to build an innetwork distributed data structure for efficiently answering multi-dimensional range queries. In this paper, we present just such a data structure, that we call a DIM'. 
DIMs are inspired by classical database indices, and are essentially embeddings of such indices within the sensor network. DIMs leverage two key ideas: in-network data centric storage, and a novel locality-preserving geographic hash (Section 3). DIMs trace their lineage to data-centric storage systems [23]. The underlying mechanism in these systems allows nodes to consistently hash an event to some location within the network, which allows efficient retrieval of events. Building upon this, DIMs use a technique whereby events whose attribute values are "close" are likely to be stored at the same or nearby nodes. DIMs then use an underlying geographic routing algorithm (GPSR [16]) to route events and queries to their corresponding nodes in an entirely distributed fashion. We discuss the design of a DIM, presenting algorithms for event insertion and querying, for maintaining a DIM in the event of node failure, and for making DIMs robust to data or packet loss (Section 3). We then extensively evaluate DIMs using analysis (Section 4), simulation (Section 5), and actual implementation (Section 6). Our analysis reveals that, under reasonable assumptions about query distributions, DIMs scale quite well with network size (both insertion and query costs scale as O (√ N)). In detailed simulations, we show that in practice, the event insertion and querying costs of other alternatives are sometimes an order of magnitude more than the costs of DIMs, even for moderately sized networks. Experiments on a small scale testbed validate the feasibility of DIMs (Section 6). Much work remains, including efficient support for skewed data distributions, existential queries, and node heterogeneity. We believe that DIMs will be an essential, but perhaps not necessarily the only, distributed data structure supporting efficient queries in sensor networks. DIMs will be part of a suite of such systems that enable feature extraction [7], simple range querying [10], exact-match queries [23], or continuous queries [15, 18]. All such systems will likely be integrated into a sensor network database system such as TinyDB [17]. Application designers could then choose the appropriate method of information access. For instance, a fire tracking application would use DIM to detect the hotspots, and would then use mechanisms that enable continuous queries [15, 18] to track the spatio-temporal progress of the hotspots. Finally, we note that DIMs are applicable not just to sensor networks, but to other deeply distributed systems (embedded networks for home and factory automation) as well. 2. RELATED WORK The basic problem that this paper addresses--multidimensional range queries--is typically solved in database systems using indexing techniques. The database community has focused mostly on centralized indices, but distributed indexing has received some attention in the literature. Indexing techniques essentially trade off some data insertion cost to enable efficient querying. Indexing has, for long, been a classical research problem in the database community [5, 2]. Our work draws its inspiration from the class of multi-key constant branching index structures, exemplified by k-d trees [2], where k represents the dimensionality of the data space. Our approach essentially represents a geographic embedding of such structures in a sensor field. There is one important difference. The classical indexing structures are data-dependent (as are some indexing schemes that use locality preserving hashes, and developed in the theory literature [14, 8, 13]).
The index structure is decided not only by the data, but also by the order in which data is inserted. Our current design is not data dependent. Finally, tangentially related to our work is the class of spatial indexing systems [21, 6, 11]. While there has been some work on distributed indexing, the problem has not been extensively explored. There exist distributed indices of a restricted kind--those that allow exact match or partial prefix match queries. Examples of such systems, of course, are the Internet Domain Name System, and the class of distributed hash table (DHT) systems exemplified by Freenet [4], Chord [24], and CAN [19]. Our work is superficially similar to CAN in that both construct a zone-based overlay atop of the underlying physical network. The underlying details make the two systems very different: CAN's overlay is purely logical while our overlay is consistent with the underlying physical topology. More recent work in the Internet context has addressed support for range queries in DHT systems [1, 12], but it is unclear if these directly translate to the sensor network context. Several research efforts have expressed the vision of a database interface to sensor networks [9, 3, 18], and there are examples of systems that contribute to this vision [18, 3, 17]. Our work is similar in spirit to this body of literature. In fact, DIMs could become an important component of a sensor network database system such as TinyDB [17]. Our work departs from prior work in this area in two significant respects. Unlike these approaches, in our work the data generated at a node are hashed (in general) to different locations. This hashing is the key to scaling multi-dimensional range searches. In all the other systems described above, queries are flooded throughout the network, and can dominate the total cost of the system. Our work avoids query flooding by an appropriate choice of hashing. Madden et al. [17] also describe a distributed index, called Semantic Routing Trees (SRT). This index is used to direct queries to nodes that have detected relevant data. Our work differs from SRT in three key aspects. First, SRT is built on single attributes while DIM supports multiple attributes. Second, SRT constructs a routing tree based on historical sensor readings, and therefore works well only for slowly-changing sensor values. Finally, in SRT queries are issued from a fixed node while in DIM queries can be issued from any node. A similar differentiation applies with respect to work on data-centric routing in sensor networks [15, 25], where data generated at a node is assumed to be stored at the node, and queries are either flooded throughout the network [15], or each source sets up a network-wide overlay announcing its presence so that mobile sinks can rendezvous with sources at the nearest node on the overlay [25]. These approaches work well for relatively long-lived queries. Finally, our work is most closely related to data-centric storage [23] systems, which include geographic hash-tables (GHTs) [20], DIMENSIONS [7], and DIFS [10]. In a GHT, data is hashed by name to a location within the network, enabling highly efficient rendezvous. GHTs are built upon the GPSR [16] protocol and leverage some interesting properties of that protocol, such as the ability to route to a node nearest to a given location. We also leverage properties in GPSR (as we describe later), but we use a locality-preserving hash to store data, enabling efficient multi-dimensional range queries.
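To make the notion of a locality-preserving hash concrete, the sketch below shows one plausible encoding in that spirit: attribute ranges are halved in round-robin order so that events with nearby attribute values share long zone-code prefixes. This is an illustration of the general idea only (the function name, bit depth, and normalization are our assumptions), not the exact encoding DIM uses.

```python
def zone_code(event, depth):
    """Hash an event (attributes normalized to [0, 1)) to a binary zone
    code by repeatedly halving the attribute ranges, one dimension per
    bit in round-robin order.  Nearby events share long code prefixes,
    so they land in the same or neighboring zones."""
    lo = [0.0] * len(event)
    hi = [1.0] * len(event)
    bits = []
    for b in range(depth):
        d = b % len(event)                 # cycle through the dimensions
        mid = (lo[d] + hi[d]) / 2.0
        if event[d] < mid:
            bits.append('0'); hi[d] = mid
        else:
            bits.append('1'); lo[d] = mid
    return ''.join(bits)

# Two "close" 2-D events map to the same 6-bit code.
print(zone_code((0.30, 0.80), 6))   # '011100'
print(zone_code((0.32, 0.78), 6))   # '011100'
```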
DIMENSIONS and DIFS can be thought of as using the same set of primitives as GHT (storage using consistent hashing), but for different ends: DIMENSIONS allows drill down search for features within a sensor network, while DIFS allows range queries on a single key in addition to other operations. 3. THE DESIGN OF DIMS 3.1 Zones 3.2 Associating Zones with Nodes 3.3 Inserting an Event 3.3.1 Hashing an Event to a Zone 3.3.2 Routing an Event to its Owner 3.3.4 Summary and Pseudo-code 3.4 Resolving and Routing Queries 3.5 Robustness 3.5.1 Maintaining Zones 3.5.2 Preventing Data Loss from Node Failure 3.5.3 Robustness to Packet Loss 4. DIMS: AN ANALYSIS 5. DIMS: SIMULATION RESULTS 5.1 Simulation Methodology 5.2 Results 6. IMPLEMENTATION 7. CONCLUSIONS In this paper, we have discussed the design and evaluation of a distributed data structure called DIM for efficiently resolving multi-dimensional range queries in sensor networks. Our design of DIMs relies upon a novel locality-preserving hash inspired by early work in database indexing, and is built upon GPSR. We have a working prototype, both of GPSR and DIM, and plan to conduct larger scale experiments in the future. There are several interesting future directions that we intend to pursue. One is adaptation to skewed data distributions, since these can cause storage and transmission hotspots. Unlike traditional database indices that re-balance trees upon data insertion, in sensor networks it might be feasible to re-structure the zones on a much larger timescale after obtaining a rough global estimate of the data distribution. Another direction is support for node heterogeneity in the zone construction process; nodes with larger storage space assert larger-sized zones for themselves. A third is support for efficient resolution of existential queries--whether there exists an event matching a multi-dimensional range.
Multi-dimensional Range Queries in Sensor Networks * ABSTRACT In many sensor networks, data or events are named by attributes. Many of these attributes have scalar values, so one natural way to query events of interest is to use a multidimensional range query. An example is: "List all events whose temperature lies between 50 ◦ and 60 ◦, and whose light levels lie between 10 and 15." Such queries are useful for correlating events occurring within the network. In this paper, we describe the design of a distributed index that scalably supports multi-dimensional range queries. Our distributed index for multi-dimensional data (or DIM) uses a novel geographic embedding of a classical index data structure, and is built upon the GPSR geographic routing algorithm. Our analysis reveals that, under reasonable assumptions about query distributions, DIMs scale quite well with network size (both insertion and query costs scale as O (√ N)). In detailed simulations, we show that in practice, the insertion and query costs of other alternatives are sometimes an order of magnitude more than the costs of DIMs, even for moderately sized network. Finally, experiments on a small scale testbed validate the feasibility of DIMs. 1. INTRODUCTION In wireless sensor networks, data or events will be named by attributes [15] or represented as virtual relations in a distributed database [18, 3]. In these systems, we argue, one natural way to query for events of interest will be to use multi-dimensional range queries on these attributes. Such range queries can be used in two distinct ways. They can help users efficiently drill-down their search for events of interest. The query described above illustrates this, where the scientist is presumably interested in discovering, and perhaps mapping the combined effect of temperature and light on the growth of marine micro-organisms. More importantly, they can be used by application software running within a sensor network for correlating events and triggering actions. In traditional database systems, such range queries are supported using pre-computed indices. Indices trade-off some initial pre-computation cost to achieve a significantly more efficient querying capability. For sensor networks, we assert that a centralized index for multi-dimensional range queries may not be feasible for energy-efficiency reasons (as well as the fact that the access bandwidth to this central index will be limited, particularly for queries emanating from within the network). Rather, we believe, there will be situations when it is more appropriate to build an innetwork distributed data structure for efficiently answering multi-dimensional range queries. In this paper, we present just such a data structure, that we call a DIM'. DIMs are inspired by classical database indices, and are essentially embeddings of such indices within the sensor network. DIMs leverage two key ideas: in-network data centric storage, and a novel locality-preserving geographic hash (Section 3). DIMs trace their lineage to datacentric storage systems [23]. The underlying mechanism in these systems allows nodes to consistently hash an event to some location within the network, which allows efficient retrieval of events. Building upon this, DIMs use a technique whereby events whose attribute values are "close" are likely to be stored at the same or nearby nodes. DIMs then use an underlying geographic routing algorithm (GPSR [16]) to route events and queries to their corresponding nodes in an entirely distributed fashion. 
We discuss the design of a DIM, presenting algorithms for event insertion and querying, for maintaining a DIM in the event of node failure, and for making DIMs robust to data or packet loss (Section 3). We then extensively evaluate DIMs using analysis (Section 4), simulation (Section 5), and actual implementation (Section 6). Our analysis reveals that, under reasonable assumptions about query distributions, DIMs scale quite well with network size (both insertion and query √ costs scale as O (N)). In detailed simulations, we show that in practice, the event insertion and querying costs of other alternatives are sometimes an order of magnitude the costs of DIMs, even for moderately sized network. Experiments on a small scale testbed validate the feasibility of DIMs (Section 6). Much work remains, including efficient support for skewed data distributions, existential queries, and node heterogeneity. We believe that DIMs will be an essential, but perhaps not necessarily the only, distributed data structure supporting efficient queries in sensor networks. DIMs will be part of a suite of such systems that enable feature extraction [7], simple range querying [10], exact-match queries [23], or continuous queries [15, 18]. All such systems will likely be integrated to a sensor network database system such as TinyDB [17]. Finally, we note that DIMs are applicable not just to sensor networks, but to other deeply distributed systems (embedded networks for home and factory automation) as well. 2. RELATED WORK The basic problem that this paper addresses--multidimensional range queries--is typically solved in database systems using indexing techniques. The database community has focused mostly on centralized indices, but distributed indexing has received some attention in the literature. Indexing techniques essentially trade-off some data insertion cost to enable efficient querying. Indexing has, for long, been a classical research problem in the database community [5, 2]. Our work draws its inspiration from the class of multi-key constant branching index structures, exemplified by k-d trees [2], where k represents the dimensionality of the data space. Our approach essentially represents a geographic embedding of such structures in a sensor field. There is one important difference. The index structure is decided not only by the data, but also by the order in which data is inserted. Our current design is not data dependent. Finally, tangentially related to our work is the class of spatial indexing systems [21, 6, 11]. While there has been some work on distributed indexing, the problem has not been extensively explored. There exist distributed indices of a restricted kind--those that allow exact match or partial prefix match queries. Our work is superficially similar to CAN in that both construct a zone-based overlay atop of the underlying physical network. More recent work in the Internet context has addressed support for range queries in DHT systems [1, 12], but it is unclear if these directly translate to the sensor network context. Several research efforts have expressed the vision of a database interface to sensor networks [9, 3, 18], and there are examples of systems that contribute to this vision [18, 3, 17]. Our work is similar in spirit to this body of literature. In fact, DIMs could become an important component of a sensor network database system such as TinyDB [17]. Our work departs from prior work in this area in two significant respects. 
Unlike these approaches, in our work the data generated at a node are hashed (in general) to different locations. This hashing is the key to scaling multi-dimensional range searches. In all the other systems described above, queries are flooded throughout the network, and can dominate the total cost of the system. Our work avoids query flooding by an appropriate choice of hashing. Madden et al. [17] also describe a distributed index, called Semantic Routing Trees (SRT). This index is used to direct queries to nodes that have detected relevant data. Our work differs from SRT in three key aspects. First, SRT is built on single attributes while DIM supports mulitple attributes. Second, SRT constructs a routing tree based on historical sensor readings, and therefore works well only for slowlychanging sensor values. Finally, in SRT queries are issued from a fixed node while in DIM queries can be issued from any node. These approaches work well for relatively long-lived queries. Finally, our work is most close related to data-centric storage [23] systems, which include geographic hash-tables (GHTs) [20], DIMENSIONS [7], and DIFS [10]. In a GHT, data is hashed by name to a location within the network, enabling highly efficient rendezvous. We also leverage properties in GPSR (as we describe later), but we use a locality-preserving hash to store data, enabling efficient multi-dimensional range queries. down search for features within a sensor network, while DIFS allows range queries on a single key in addition to other operations. 7. CONCLUSIONS In this paper, we have discussed the design and evaluation of a distributed data structure called DIM for efficiently resolving multi-dimensional range queries in sensor networks. Our design of DIMs relies upon a novel locality-preserving hash inspired by early work in database indexing, and is built upon GPSR. We have a working prototype, both of GPSR and DIM, and plan to conduct larger scale experiments in the future. One is adaptation to skewed data distributions, since these can cause storage and transmission hotspots. Unlike traditional database indices that re-balance trees upon data insertion, in sensor networks it might be feasible to re-structure the zones on a much larger timescale after obtaining a rough global estimate of the data distribution. Another direction is support for node heterogeneity in the zone construction process; nodes with larger storage space assert larger-sized zones for themselves. A third is support for efficient resolution of existential queries--whether there exists an event matching a multi-dimensional range.
J-44
Scouts, Promoters, and Connectors: The Roles of Ratings in Nearest Neighbor Collaborative Filtering
Recommender systems aggregate individual user ratings into predictions of products or services that might interest visitors. The quality of this aggregation process crucially affects the user experience and hence the effectiveness of recommenders in e-commerce. We present a novel study that disaggregates global recommender performance metrics into contributions made by each individual rating, allowing us to characterize the many roles played by ratings in nearest-neighbor collaborative filtering. In particular, we formulate three roles -- scouts, promoters, and connectors -- that capture how users receive recommendations, how items get recommended, and how ratings of these two types are themselves connected (resp.). These roles find direct uses in improving recommendations for users, in better targeting of items and, most importantly, in helping monitor the health of the system as a whole. For instance, they can be used to track the evolution of neighborhoods, to identify rating subspaces that do not contribute (or contribute negatively) to system performance, to enumerate users who are in danger of leaving, and to assess the susceptibility of the system to attacks such as shilling. We argue that the three rating roles presented here provide broad primitives to manage a recommender system and its community.
[ "scout", "promot", "connector", "rate", "nearest neighbor", "collabor filter", "recommend", "recommend system", "aggreg process", "neighborhood", "collabor filter algorithm", "purchas", "opinion", "list rank accuraci", "user-base and item-base algorithm" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "M", "U", "U", "U", "M" ]
Scouts, Promoters, and Connectors: The Roles of Ratings in Nearest Neighbor Collaborative Filtering Bharath Kumar Mohan Dept. of CSA Indian Institute of Science Bangalore 560 012, India mbk@csa.iisc.ernet.in Benjamin J. Keller Dept. of Computer Science Eastern Michigan University Ypsilanti, MI 48917, USA bkeller@emich.edu Naren Ramakrishnan Dept. of Computer Science Virginia Tech, Blacksburg VA 24061, USA naren@cs.vt.edu ABSTRACT Recommender systems aggregate individual user ratings into predictions of products or services that might interest visitors. The quality of this aggregation process crucially affects the user experience and hence the effectiveness of recommenders in e-commerce. We present a novel study that disaggregates global recommender performance metrics into contributions made by each individual rating, allowing us to characterize the many roles played by ratings in nearestneighbor collaborative filtering. In particular, we formulate three roles-scouts, promoters, and connectors-that capture how users receive recommendations, how items get recommended, and how ratings of these two types are themselves connected (resp.) . These roles find direct uses in improving recommendations for users, in better targeting of items and, most importantly, in helping monitor the health of the system as a whole. For instance, they can be used to track the evolution of neighborhoods, to identify rating subspaces that do not contribute (or contribute negatively) to system performance, to enumerate users who are in danger of leaving, and to assess the susceptibility of the system to attacks such as shilling. We argue that the three rating roles presented here provide broad primitives to manage a recommender system and its community. Categories and Subject Descriptors H.4.2 [Information Systems Applications]: Types of Systems-Decision support; J.4 [Computer Applications]: Social and Behavioral Sciences General Terms Algorithms, Human Factors 1. INTRODUCTION Recommender systems have become integral to e-commerce, providing technology that suggests products to a visitor based on previous purchases or rating history. Collaborative filtering, a common form of recommendation, predicts a user``s rating for an item by combining (other) ratings of that user with other users'' ratings. Significant research has been conducted in implementing fast and accurate collaborative filtering algorithms [2, 7], designing interfaces for presenting recommendations to users [1], and studying the robustness of these algorithms [8]. However, with the exception of a few studies on the influence of users [10], little attention has been paid to unraveling the inner workings of a recommender in terms of the individual ratings and the roles they play in making (good) recommendations. Such an understanding will give an important handle to monitoring and managing a recommender system, to engineer mechanisms to sustain the recommender, and thereby ensure its continued success. Our motivation here is to disaggregate global recommender performance metrics into contributions made by each individual rating, allowing us to characterize the many roles played by ratings in nearest-neighbor collaborative filtering. We identify three possible roles: (scouts) to connect the user into the system to receive recommendations, (promoters) to connect an item into the system to be recommended, and (connectors) to connect ratings of these two kinds. 
Viewing ratings in this way, we can define the contribution of a rating in each role, both in terms of allowing recommendations to occur, and in terms of influence on the quality of recommendations. In turn, this capability helps support scenarios such as: 1. Situating users in better neighborhoods: A user's ratings may inadvertently connect the user to a neighborhood for which the user's tastes may not be a perfect match. Identifying ratings responsible for such bad recommendations and suggesting new items to rate can help situate the user in a better neighborhood. 2. Targeting items: Recommender systems suffer from lack of user participation, especially in cold-start scenarios [13] involving newly arrived items. Identifying users who can be encouraged to rate specific items helps ensure coverage of the recommender system. 3. Monitoring the evolution of the recommender system and its stakeholders: A recommender system is constantly under change: growing with new users and items, shrinking with users leaving the system, items becoming irrelevant, and parts of the system under attack. Tracking the roles of a rating and its evolution over time provides many insights into the health of the system, and how it could be managed and improved. These include being able to identify rating subspaces that do not contribute (or contribute negatively) to system performance, and could be removed; to enumerate users who are in danger of leaving, or have left the system; and to assess the susceptibility of the system to attacks such as shilling [5]. As we show, the characterization of rating roles presented here provides broad primitives to manage a recommender system and its community. The rest of the paper is organized as follows. Background on nearest-neighbor collaborative filtering and algorithm evaluation is discussed in Section 2. Section 3 defines and discusses the roles of a rating, and Section 4 defines measures of the contribution of a rating in each of these roles. In Section 5, we illustrate the use of these roles to address the goals outlined above. 2. BACKGROUND 2.1 Algorithms Nearest-neighbor collaborative filtering algorithms either use neighborhoods of users or neighborhoods of items to compute a prediction. An algorithm of the first kind is called user-based, and one of the second kind is called item-based [12]. In both families of algorithms, neighborhoods are formed by first computing the similarity between all pairs of users (for user-based) or items (for item-based). Predictions are then computed by aggregating ratings, which in a user-based algorithm involves aggregating the ratings of the target item by the user's neighbors and, in an item-based algorithm, involves aggregating the user's ratings of items that are neighbors of the target item. Algorithms within these families differ in the definition of similarity, formation of neighborhoods, and the computation of predictions. We consider a user-based algorithm based on that defined for GroupLens [11] with variations from Herlocker et al. [2], and an item-based algorithm similar to that of Sarwar et al. [12]. The algorithm used by Resnick et al. [11] defines the similarity of two users u and v as the Pearson correlation of their common ratings: $sim(u, v) = \frac{\sum_{i \in I_u \cap I_v} (r_{u,i} - \bar{r}_u)(r_{v,i} - \bar{r}_v)}{\sqrt{\sum_{i \in I_u} (r_{u,i} - \bar{r}_u)^2}\,\sqrt{\sum_{i \in I_v} (r_{v,i} - \bar{r}_v)^2}}$, where $I_u$ is the set of items rated by user u, $r_{u,i}$ is user u's rating for item i, and $\bar{r}_u$ is the average rating of user u (similarly for v).
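A minimal sketch of this user-based similarity, assuming ratings are held in per-user dictionaries, follows; the function and variable names are ours and this is an illustration of the formula above, not the GroupLens implementation itself.

```python
from math import sqrt

def pearson_sim(ratings_u, ratings_v):
    """Pearson-style similarity of two users over their co-rated items,
    following the definition above: the numerator runs over common
    items, the denominators over each user's full set of ratings."""
    common = set(ratings_u) & set(ratings_v)
    if not common:
        return 0.0
    mean_u = sum(ratings_u.values()) / len(ratings_u)
    mean_v = sum(ratings_v.values()) / len(ratings_v)
    num   = sum((ratings_u[i] - mean_u) * (ratings_v[i] - mean_v) for i in common)
    den_u = sqrt(sum((ratings_u[i] - mean_u) ** 2 for i in ratings_u))
    den_v = sqrt(sum((ratings_v[i] - mean_v) ** 2 for i in ratings_v))
    return num / (den_u * den_v) if den_u and den_v else 0.0

# Example: two users with partially overlapping rating histories.
print(pearson_sim({'a': 5, 'b': 3, 'c': 4}, {'a': 4, 'b': 2, 'd': 5}))
```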
Similarity computed in this manner is typically scaled by a factor proportional to the number of common ratings, to reduce the chance of making a recommendation based on weak connections: $sim'(u, v) = \frac{\max(|I_u \cap I_v|, \gamma)}{\gamma} \cdot sim(u, v)$, where γ ≈ 5 is a constant used as a lower limit in scaling [2]. These new similarities are then used to define a static neighborhood $N_u$ for each user u consisting of the top K users most similar to user u. A prediction for user u and item i is computed by a weighted average of the ratings by the neighbors $p_{u,i} = \bar{r}_u + \frac{\sum_{v \in V} sim'(u, v)(r_{v,i} - \bar{r}_v)}{\sum_{v \in V} sim'(u, v)}$ (1) where $V = N_u \cap U_i$ is the set of users most similar to u who have rated i. The item-based algorithm we use is the one defined by Sarwar et al. [12]. In this algorithm, similarity is defined as the adjusted cosine measure $sim(i, j) = \frac{\sum_{u \in U_i \cap U_j} (r_{u,i} - \bar{r}_u)(r_{u,j} - \bar{r}_u)}{\sqrt{\sum_{u \in U_i} (r_{u,i} - \bar{r}_u)^2}\,\sqrt{\sum_{u \in U_j} (r_{u,j} - \bar{r}_u)^2}}$ (2) where $U_i$ is the set of users who have rated item i. As for the user-based algorithm, the similarity weights are adjusted proportionally to the number of users that have rated the items in common: $sim'(i, j) = \frac{\max(|U_i \cap U_j|, \gamma)}{\gamma} \cdot sim(i, j)$. (3) Given the similarities, the neighborhood $N_i$ of an item i is defined as the top K most similar items for that item. A prediction for user u and item i is computed as the weighted average $p_{u,i} = \bar{r}_i + \frac{\sum_{j \in J} sim'(i, j)(r_{u,j} - \bar{r}_j)}{\sum_{j \in J} sim'(i, j)}$ (4) where $J = N_i \cap I_u$ is the set of items rated by u that are most similar to i. 2.2 Evaluation Recommender algorithms have typically been evaluated using measures of predictive accuracy and coverage [3]. Studies on recommender algorithms, notably Herlocker et al. [2] and Sarwar et al. [12], typically compute predictive accuracy by dividing a set of ratings into training and test sets, and compute the prediction for an item in the test set using the ratings in the training set. A standard measure of predictive accuracy is mean absolute error (MAE), which for a test set T = {(u, i)} is defined as $MAE = \frac{\sum_{(u,i) \in T} |p_{u,i} - r_{u,i}|}{|T|}$. (5) Coverage has a number of definitions, but generally refers to the proportion of items that can be predicted by the algorithm [3]. A practical issue with predictive accuracy is that users typically are presented with recommendation lists, and not individual numeric predictions. Recommendation lists are lists of items in decreasing order of prediction (sometimes stated in terms of star-ratings), and so predictive accuracy may not be reflective of the accuracy of the list. So, instead we can measure recommendation or rank accuracy, which indicates the extent to which the list is in the correct order. Herlocker et al. [3] discuss a number of rank accuracy measures, which range from Kendall's Tau to measures that consider the fact that users tend to only look at a prefix of the list [5]. Figure 1: Ratings in simple movie recommender. Kendall's Tau measures the number of inversions when comparing ordered pairs in the true user ordering of items and the recommended order, and is defined as $\tau = \frac{C - D}{\sqrt{(C + D + TR)(C + D + TP)}}$ (6) where C is the number of pairs that the system predicts in the correct order, D the number of pairs the system predicts in the wrong order, TR the number of pairs in the true ordering that have the same ratings, and TP is the number of pairs in the predicted ordering that have the same ratings [3].
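A compact sketch of the item-based side (Eqs. 2-4) is given below, again with function names of our own and a plain dictionary-of-dictionaries ratings structure; truncation of each item's neighborhood to the top K items is assumed to have been done by the caller, and the co-rater scaling follows Eq. (3) as written.

```python
from math import sqrt

def mean_rating(ratings, u):
    """Average rating of user u over all items u has rated."""
    return sum(ratings[u].values()) / len(ratings[u])

def adjusted_cosine(ratings, i, j, gamma=5):
    """Adjusted cosine similarity of items i and j (Eq. 2), scaled by
    the number of co-raters as in Eq. (3).  `ratings` maps
    user -> {item: rating}."""
    co = [u for u in ratings if i in ratings[u] and j in ratings[u]]
    if not co:
        return 0.0
    dev = lambda u, item: ratings[u][item] - mean_rating(ratings, u)
    num = sum(dev(u, i) * dev(u, j) for u in co)
    den_i = sqrt(sum(dev(u, i) ** 2 for u in ratings if i in ratings[u]))
    den_j = sqrt(sum(dev(u, j) ** 2 for u in ratings if j in ratings[u]))
    sim = num / (den_i * den_j) if den_i and den_j else 0.0
    return sim * max(len(co), gamma) / gamma

def predict_item_based(ratings, item_means, u, i, neighbors_of_i, gamma=5):
    """Weighted-deviation prediction of Eq. (4) over the user's rated
    items among i's nearest neighbors; falls back to i's mean rating
    when no weighted neighbors are available."""
    J = [j for j in neighbors_of_i if j in ratings[u]]
    sims = {j: adjusted_cosine(ratings, i, j, gamma) for j in J}
    norm = sum(sims.values())
    if norm == 0:
        return item_means[i]
    return item_means[i] + sum(
        sims[j] * (ratings[u][j] - item_means[j]) for j in J) / norm
```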
A shortcoming of the Tau metric is that it is oblivious to the position in the ordered list where the inversion occurs [3]. For instance, an inversion toward the end of the list is given the same weight as one in the beginning. One solution is to consider inversions only in the top few items in the recommended list or to weight inversions based on their position in the list. 3. ROLES OF A RATING Our basic observation is that each rating plays a different role in each prediction in which it is used. Consider a simplified movie recommender system with three users Jim, Jeff, and Tom and their ratings for a few movies, as shown in Fig. 1. (For this initial discussion we will not consider the rating values involved.) The recommender predicts whether Tom will like The Mask using the other already available ratings. How this is done depends on the algorithm: 1. An item-based collaborative filtering algorithm constructs a neighborhood of movies around The Mask by using the ratings of users who rated The Mask and other movies similarly (e.g., Jim's ratings of The Matrix and The Mask; and Jeff's ratings of Star Wars and The Mask). Tom's ratings of those movies are then used to make a prediction for The Mask. 2. A user-based collaborative filtering algorithm would construct a neighborhood around Tom by tracking other users whose rating behaviors are similar to Tom's (e.g., Tom and Jeff have rated Star Wars; Tom and Jim have rated The Matrix). The prediction of Tom's rating for The Mask is then based on the ratings of Jeff and Jim. Although the nearest-neighbor algorithms aggregate the ratings to form neighborhoods used to compute predictions, we can disaggregate the similarities to view the computation of a prediction as simultaneously following parallel paths of ratings. So, irrespective of the collaborative filtering algorithm used, we can visualize the prediction of Tom's rating of The Mask as walking through a sequence of ratings. Figure 2: Ratings used to predict The Mask for Tom. Figure 3: Prediction of The Mask for Tom in which a rating is used more than once. In this example, two paths were used for this prediction as depicted in Fig. 2: (p1, p2, p3) and (q1, q2, q3). Note that these paths are undirected, and are all of length 3. Only the order in which the ratings are traversed is different between the item-based algorithm (e.g., (p3, p2, p1), (q3, q2, q1)) and the user-based algorithm (e.g., (p1, p2, p3), (q1, q2, q3)). A rating can be part of many paths for a single prediction as shown in Fig. 3, where three paths are used for a prediction, two of which follow p1: (p1, p2, p3) and (p1, r2, r3). Predictions in a collaborative filtering algorithm may involve thousands of such walks in parallel, each playing a part in influencing the predicted value. Each prediction path consists of three ratings, playing roles that we call scouts, promoters, and connectors. To illustrate these roles, consider the path (p1, p2, p3) in Fig. 2 used to make a prediction of The Mask for Tom: 1. The rating p1 (Tom → Star Wars) makes a connection from Tom to other ratings that can be used to predict Tom's rating for The Mask. This rating serves as a scout in the bipartite graph of ratings to find a path that leads to The Mask. 2. The rating p2 (Jeff → Star Wars) helps the system recommend The Mask to Tom by connecting the scout to the promoter. 3.
The rating p3 (Jeff → The Mask) allows connections to The Mask, and, therefore, promotes this movie to Tom. Formally, given a prediction pu,a of a target item a for user u, a scout for pu,a is a rating ru,i such that there exists a user v with ratings rv,a and rv,i for some item i; a promoter for pu,a is a rating rv,a for some user v, such that there exist ratings rv,i and ru,i for an item i; and a connector for pu,a is a rating rv,i by some user v of some item i, such that there exist ratings ru,i and rv,a. Figure 4: Scouts, promoters, and connectors. The scouts, connectors, and promoters for the prediction of Tom's rating of The Mask are p1 and q1, p2 and q2, and p3 and q3 (respectively). Each of these roles has a value in the recommender to the user, the user's neighborhood, and the system in terms of allowing recommendations to be made. 3.1 Roles in Detail Ratings that act as scouts tend to help the recommender system suggest more movies to the user, though the extent to which this is true depends on the rating behavior of other users. For example, in Fig. 4 the rating Tom → Star Wars helps the system recommend only The Mask to him, while Tom → The Matrix helps recommend The Mask, Jurassic Park, and My Cousin Vinny. Tom makes a connection to Jim, who is a prolific user of the system, by rating The Matrix. However, this does not make The Matrix the best movie to rate for everyone. For example, Jim is benefited equally by both The Mask and The Matrix, which allow the system to recommend Star Wars to him. Jeff's rating of The Mask is the best scout for him, and Jerry's only scout is his rating of Star Wars. This suggests that good scouts allow a user to build similarity with prolific users, and thereby ensure they get more from the system. While scouts represent beneficial ratings from the perspective of a user, promoters are their duals, and are of benefit to items. In Fig. 4, My Cousin Vinny benefits from Jim's rating, since it allows recommendations to Jeff and Tom. The Mask is not so dependent on just one rating, since the ratings by Jim and Jeff help it. On the other hand, Jerry's rating of Star Wars does not help promote it to any other user. We conclude that a good promoter connects an item to a broader neighborhood of other items, and thereby ensures that it is recommended to more users. Connectors serve a crucial role in a recommender system that is not as obvious. The movies My Cousin Vinny and Jurassic Park have the highest recommendation potential since they can be recommended to Jeff, Jerry and Tom based on the linkage structure illustrated in Fig. 4. Besides the fact that Jim rated these movies, these recommendations are possible only because of the ratings Jim → The Matrix and Jim → The Mask, which are the best connectors. A connector improves the system's ability to make recommendations with no explicit gain for the user. Note that every rating can be of varied benefit in each of these roles. The rating Jim → My Cousin Vinny is a poor scout and connector, but is a very good promoter. The rating Jim → The Mask is a reasonably good scout, a very good connector, and a good promoter. Finally, the rating Jerry → Star Wars is a very good scout, but is of no value as a connector or promoter. As illustrated here, a rating can have different value in each of the three roles in terms of whether a recommendation can be made or not.
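The role definitions above can be made operational by enumerating the three-rating paths behind a prediction. The sketch below does exactly that over a simple bipartite ratings structure and counts how often each rating serves as a scout, connector, or promoter; it deliberately ignores similarity weights and neighborhood truncation, and all names are ours rather than the paper's.

```python
from collections import defaultdict

def prediction_paths(ratings, u, a):
    """Enumerate the paths [r_{u,i}, r_{v,i}, r_{v,a}] that could support
    a prediction of item `a` for user `u`, and count the role each
    rating plays.  `ratings` maps user -> {item: rating}."""
    roles = defaultdict(lambda: {'scout': 0, 'connector': 0, 'promoter': 0})
    for v in ratings:
        if v == u or a not in ratings[v]:
            continue
        for i in ratings[u]:
            if i == a or i not in ratings[v]:
                continue
            roles[(u, i)]['scout'] += 1       # r_{u,i} connects u into the system
            roles[(v, i)]['connector'] += 1   # r_{v,i} links scout to promoter
            roles[(v, a)]['promoter'] += 1    # r_{v,a} lets item a be recommended
    return roles

# Tiny example in the spirit of Figure 2: Tom's prediction for The Mask.
ratings = {'Tom':  {'Star Wars': 5, 'The Matrix': 4},
           'Jeff': {'Star Wars': 4, 'The Mask': 5},
           'Jim':  {'The Matrix': 5, 'The Mask': 4}}
for rating, counts in prediction_paths(ratings, 'Tom', 'The Mask').items():
    print(rating, counts)
```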
We could measure this value by simply counting the number of times a rating is used in each role, which alone would be helpful in characterizing the behavior of a system. But we can also measure the contribution of each rating to the quality of recommendations or the health of the system. Since every prediction is a combined effort of several recommendation paths, we are interested in discerning the influence of each rating (and, hence, each path) on the system's overall error. We can understand the dynamics of the system at a finer granularity by tracking the influence of a rating according to the role played. The next section describes the approach to measuring the values of a rating in each role.

4. CONTRIBUTIONS OF RATINGS

As we've seen, a rating may play different roles in different predictions and, in each prediction, contribute to the quality of a prediction in different ways. Our approach can use any numeric measure of a property of system health, and assigns credit (or blame) to each rating proportional to its influence in the prediction. By tracking the role of each rating in a prediction, we can accumulate the credit for a rating in each of the three roles, and also track the evolution of the roles of a rating over time in the system. This section defines the methodology for computing the contribution of ratings by first defining the influence of a rating, and then instantiating the approach for predictive accuracy and for rank accuracy. We also demonstrate how these contributions can be aggregated to study the neighborhood of ratings involved in computing a user's recommendations. Note that although our general formulation for rating influence is algorithm independent, due to space considerations we present the approach for only item-based collaborative filtering. The definition for user-based algorithms is similar and will be presented in an expanded version of this paper.

4.1 Influence of Ratings

Recall that an item-based approach to collaborative filtering relies on building item neighborhoods using the similarity of ratings by the same user. As described earlier, similarity is defined by the adjusted cosine measure (Equations (2) and (3)). A set of the top K neighbors is maintained for all items for space and computational efficiency. A prediction of item i for a user u is computed as the weighted deviation from the item's mean rating, as shown in Equation (4). The list of recommendations for a user is then the list of items sorted in descending order of their predicted values.

We first define impact(a, i, j), the impact a user a has in determining the similarity between two items i and j. This is the change in the similarity between i and j when a's rating is removed, and is defined as

impact(a, i, j) = \frac{|sim'(i, j) - sim'_{\bar{a}}(i, j)|}{\sum_{w \in C_{ij}} |sim'(i, j) - sim'_{\bar{w}}(i, j)|}

where C_{ij} = \{u \in U \mid \exists\, r_{u,i}, r_{u,j} \in R(u)\} is the set of co-raters of items i and j (users who rate both i and j), R(u) is the set of ratings provided by user u, and sim_{\bar{a}}(i, j) is the similarity of i and j when the ratings of user a are removed,

sim_{\bar{a}}(i, j) = \frac{\sum_{u \in U \setminus \{a\}} (r_{u,i} - \bar{r}_u)(r_{u,j} - \bar{r}_u)}{\sqrt{\sum_{u \in U \setminus \{a\}} (r_{u,i} - \bar{r}_u)^2}\,\sqrt{\sum_{u \in U \setminus \{a\}} (r_{u,j} - \bar{r}_u)^2}}

adjusted for the number of raters as

sim'_{\bar{a}}(i, j) = \frac{\max(|U_i \cap U_j| - 1, \gamma)}{\gamma} \cdot sim_{\bar{a}}(i, j).

If all co-raters of i and j rate them identically, we define the impact as impact(a, i, j) = 1/|C_{ij}|, since \sum_{w \in C_{ij}} |sim'(i, j) - sim'_{\bar{w}}(i, j)| = 0.
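A minimal Python sketch of this leave-one-out computation is shown below. It is illustrative only: ratings are assumed to live in a nested dictionary, the constant GAMMA and the use of each user's overall mean rating are expository choices, and the significance scaling follows our reading of Equation (3).

import math

GAMMA = 5  # illustrative lower limit for the significance scaling of Eq. (3)

def adjusted_cosine(ratings, i, j, exclude=None):
    """Scaled adjusted-cosine similarity of items i and j; optionally leave out one user."""
    users = [u for u, r in ratings.items() if u != exclude and i in r and j in r]
    if not users:
        return 0.0
    num = den_i = den_j = 0.0
    for u in users:
        mean_u = sum(ratings[u].values()) / len(ratings[u])
        num   += (ratings[u][i] - mean_u) * (ratings[u][j] - mean_u)
        den_i += (ratings[u][i] - mean_u) ** 2
        den_j += (ratings[u][j] - mean_u) ** 2
    if den_i == 0.0 or den_j == 0.0:
        return 0.0
    sim = num / (math.sqrt(den_i) * math.sqrt(den_j))
    return max(len(users), GAMMA) / GAMMA * sim

def impact(ratings, a, i, j):
    """Share of the change in sim'(i, j) that is attributable to user a."""
    coraters = [u for u, r in ratings.items() if i in r and j in r]
    base = adjusted_cosine(ratings, i, j)
    total = sum(abs(base - adjusted_cosine(ratings, i, j, exclude=w)) for w in coraters)
    if total == 0.0:                      # all co-raters rate i and j identically
        return 1.0 / len(coraters)
    return abs(base - adjusted_cosine(ratings, i, j, exclude=a)) / total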
The influence of each path (u, j, v, i) = [ru,j, rv,j, rv,i] in the prediction of pu,i is given by

influence(u, j, v, i) = \frac{sim'(i, j)}{\sum_{l \in N_i \cap I_u} sim'(i, l)} \cdot impact(v, i, j).

It follows that the sum of influences over all such paths, for a given set of endpoints, is 1.

4.2 Role Values for Predictive Accuracy

The value of a rating in each role is computed from the influence, depending on the evaluation measure employed. Here we illustrate the approach using predictive accuracy as the evaluation metric. In general, the goodness of a prediction decides whether the ratings involved must be credited or discredited for their role. For predictive accuracy, the error in prediction e = |pu,i − ru,i| is mapped to a comfort level using a mapping function M(e). Anecdotal evidence suggests that users are unable to discern errors less than 1.0 (for a rating scale of 1 to 5) [4], and so an error less than 1.0 is considered acceptable, but anything larger is not. We hence define M(e) as (1 − e) binned to an appropriate value in [−1, −0.5, 0.5, 1]. For each prediction pu,i, M(e) is attributed to all the paths that assisted the computation of pu,i, proportional to their influences. This tribute, M(e) · influence(u, j, v, i), is in turn inherited by each of the ratings in the path [ru,j, rv,j, rv,i], with the credit/blame accumulating to the respective roles of ru,j as a scout, rv,j as a connector, and rv,i as a promoter. In other words, the scout value SF(ru,j), the connector value CF(rv,j), and the promoter value PF(rv,i) are all incremented by the tribute amount. Over a large number of predictions, scouts that have repeatedly resulted in big error rates have a big negative scout value, and vice versa (similarly with the other roles). Every rating is thus summarized by its triple [SF, CF, PF].

4.3 Role Values for Rank Accuracy

We now define the computation of the contribution of ratings to observed rank accuracy. For this computation, we must know the user's preference order for a set of items for which predictions can be computed. We assume that we have a test set of the user's ratings of the items presented in the recommendation list. For every pair of items rated by a user in the test data, we check whether the predicted order is concordant with his preference. We say a pair (i, j) is concordant (with error tolerance ε) whenever one of the following holds:

• if (ru,i < ru,j) then (pu,i − pu,j < ε);
• if (ru,i > ru,j) then (pu,i − pu,j > ε); or
• if (ru,i = ru,j) then (|pu,i − pu,j| ≤ ε).

Similarly, a pair (i, j) is discordant (with error tolerance ε) if it is not concordant. Our experiments described below use an error tolerance of ε = 0.1. All paths involved in the prediction of the two items in a concordant pair are credited, and the paths involved in a discordant pair are discredited. The credit assigned to a pair of items (i, j) in the recommendation list for user u is computed as

c(i, j) = \begin{cases} \frac{t}{T} \cdot \frac{1}{C+D} & \text{if } (i, j) \text{ are concordant} \\ -\frac{t}{T} \cdot \frac{1}{C+D} & \text{if } (i, j) \text{ are discordant} \end{cases}    (7)

where t is the number of items in the user's test set whose ratings could be predicted, T is the number of items rated by user u in the test set, C is the number of concordances, and D is the number of discordances. The credit c is then divided among all paths responsible for predicting pu,i and pu,j, proportional to their influences. We again add the role values obtained from all the experiments to form a triple [SF, CF, PF] for each rating.
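The bookkeeping can be sketched as follows. This is an illustration only: it assumes prediction paths are available as (scout, connector, promoter, influence) tuples and that role values are accumulated in dictionaries keyed by (user, item) ratings; the exact way the credit is shared across the two items' paths is one reasonable reading of Equation (7).

from collections import defaultdict

EPSILON = 0.1                      # error tolerance used in the concordance test

# Accumulated role values, keyed by a rating identifier such as (user, item).
SF = defaultdict(float)            # scout values
CF = defaultdict(float)            # connector values
PF = defaultdict(float)            # promoter values

def concordant(r_i, r_j, p_i, p_j, eps=EPSILON):
    """True if the predicted order of items i and j agrees with the user's preference."""
    if r_i < r_j:
        return p_i - p_j < eps
    if r_i > r_j:
        return p_i - p_j > eps
    return abs(p_i - p_j) <= eps

def credit_pair(i, j, true_r, pred, paths, t, T, C, D):
    """Distribute the credit of Eq. (7) for the pair (i, j) over the responsible paths."""
    c = (t / T) * (1.0 / (C + D))
    if not concordant(true_r[i], true_r[j], pred[i], pred[j]):
        c = -c
    for item in (i, j):
        for scout, connector, promoter, infl in paths[item]:
            SF[scout]     += c * infl
            CF[connector] += c * infl
            PF[promoter]  += c * infl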
4.4 Aggregating rating roles

After calculating the role values for individual ratings, we can also use these values to study neighborhoods and the system. Here we consider how we can use the role values to characterize the health of a neighborhood. Consider the list of top recommendations presented to a user at a specific point in time. The collaborative filtering algorithm traversed many paths in his neighborhood, through his scouts and other connectors and promoters, to make these recommendations. We call these ratings the recommender neighborhood of the user. The user implicitly chooses this neighborhood of ratings through the items he rates. Apart from the collaborative filtering algorithm, the health of this neighborhood completely influences a user's satisfaction with the system.

We can characterize a user's recommender neighborhood by aggregating the individual role values of the ratings involved, weighted by the influence of individual ratings in determining his recommended list. Different sections of the user's neighborhood wield varied influence on his recommendation list. For instance, ratings reachable through highly rated items have a bigger say in the recommended items. Our aim is to study the system and classify users with respect to their positioning in a healthy or unhealthy neighborhood. A user can have a good set of scouts, but may be exposed to a neighborhood with bad connectors and promoters. He can have a good neighborhood, but his bad scouts may ensure the neighborhood's potential is rendered useless. We expect that users with good scouts and good neighborhoods will be most satisfied with the system in the future.

A user's neighborhood is characterized by a triple that represents the weighted sum of the role values of individual ratings involved in making recommendations. Consider a user u and his ordered list of recommendations L. An item i in the list is weighted inversely, as K(i), depending on its position in the list. In our studies we use K(i) = \sqrt{position(i)}. Several paths of ratings [ru,j, rv,j, rv,i] are involved in predicting pu,i, which ultimately decides its position in L, each with influence(u, j, v, i). The recommender neighborhood of a user u is characterized by the triple [SFN(u), CFN(u), PFN(u)], where

SFN(u) = \sum_{i \in L} \left( \frac{\sum_{[r_{u,j}, r_{v,j}, r_{v,i}]} SF(r_{u,j}) \cdot influence(u, j, v, i)}{K(i)} \right)

and CFN(u) and PFN(u) are defined similarly. This triple estimates the quality of u's recommendations based on the past track record of the ratings involved in their respective roles.
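A minimal sketch of this aggregation is given below, for illustration only; it assumes the per-rating role values and per-item path tuples of the earlier sketches, and uses the position discount K(i) = sqrt(position(i)) defined above.

import math

def neighborhood_triple(rec_list, paths, SF, CF, PF):
    """Aggregate [SFN, CFN, PFN] over the ratings behind a user's recommendation list."""
    sfn = cfn = pfn = 0.0
    for pos, item in enumerate(rec_list, start=1):
        k = math.sqrt(pos)                                  # K(i): discount by list position
        for scout, connector, promoter, infl in paths[item]:
            sfn += SF[scout]     * infl / k
            cfn += CF[connector] * infl / k
            pfn += PF[promoter]  * infl / k
    return sfn, cfn, pfn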
5. EXPERIMENTATION

As we have seen, we can assign role values to each rating when evaluating a collaborative filtering system. In this section, we demonstrate the use of this approach toward our overall goal of monitoring and managing the health of a recommender system, through experiments done on the MovieLens million rating dataset. In particular, we discuss results relating to identifying good scouts, promoters, and connectors; the evolution of rating roles; and the characterization of user neighborhoods.

5.1 Methodology

Our experiments use the MovieLens million rating dataset, which consists of ratings by 6040 users of 3952 movies. The ratings are in the range 1 to 5, and are labeled with the time the rating was given. As discussed before, we consider only the item-based algorithm here (with item neighborhoods of size 30) and, due to space considerations, only present role value results for rank accuracy.

Since we are interested in the evolution of the rating role values over time, the model of the recommender system is built by processing ratings in their arrival order. The timestamping provided by MovieLens is hence crucial for the analyses presented here. We make assessments of rating roles at intervals of 10,000 ratings and processed the first 200,000 ratings in the dataset (giving rise to 20 snapshots). We incrementally update the role values as the time-ordered ratings are merged into the model.

To keep the experiment computationally manageable, we define a test dataset for each user. As the time-ordered ratings are merged into the model, we label a small randomly selected percentage (20%) as test data. At discrete epochs, i.e., after processing every 10,000 ratings, we compute the predictions for the ratings in the test data, and then compute the role values for the ratings used in the predictions. One potential criticism of this methodology is that the ratings in the test set are never evaluated for their roles. We overcome this concern by repeating the experiment using different random seeds. The probability that every rating is considered for evaluation is then considerably high: 1 − 0.2^n, where n is the number of times the experiment is repeated with different random seeds. The results here are based on n = 4 repetitions.

The item-based collaborative filtering algorithm's performance was ordinary with respect to rank accuracy. Fig. 5 shows a plot of the precision and recall as ratings were merged in time order into the model. The recall was always high, but the average precision was just about 53%.

Figure 5: Precision and recall for the item-based collaborative filtering algorithm.

5.2 Inducing good scouts

The ratings of a user that serve as scouts are those that allow the user to receive recommendations. We claim that users with ratings that have respectable scout values will be happier with the system than those whose ratings have low scout values. Note that the item-based algorithm discussed here produces recommendation lists with nearly half of the pairs in the list discordant from the user's preference. Whether all of these discordant pairs are observable by the user is unclear; however, this certainly suggests a need to be able to direct users to items whose ratings would improve the lists.

The distribution of the scout values for most users' ratings is Gaussian with mean zero. Fig. 6 shows the frequency distribution of scout values for a sample user at a given snapshot. We observe that a large number of ratings never serve as scouts for their users. A relatable scenario is when Amazon's recommender makes suggestions of books or items based on other items that were purchased as gifts. With simple relevance feedback from the user, such ratings can be isolated as bad scouts and discounted from future predictions. Removing bad scouts was found to be worthwhile for individual users, but the overall performance improvement was only marginal.

An obvious question is whether good scouts can be formed by merely rating popular movies, as suggested by Rashid et al. [9]. They show that a mix of popularity and rating entropy identifies the best items to suggest to new users when evaluated using MAE. Following their intuition, we would expect to see a higher correlation between popularity-entropy and good scouts.
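This expectation is straightforward to check by script. The sketch below is illustrative only; it assumes per-rating scout values SF keyed by (user, movie), as in the earlier sketches, and a popularity dictionary counting how often each movie has been rated.

import math
from collections import defaultdict

def pearson(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs)) * math.sqrt(sum((y - my) ** 2 for y in ys))
    return num / den if den else 0.0

def scout_popularity_correlation(SF, popularity):
    """Correlate a movie's aggregated scout value with its popularity (number of ratings)."""
    agg = defaultdict(float)
    for (user, movie), value in SF.items():
        agg[movie] += value
    movies = [m for m in agg if m in popularity]
    return pearson([agg[m] for m in movies], [popularity[m] for m in movies])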
We measured the Pearson correlation coefficient between aggregated scout values for a movie with the popularity of a movie (number of times it is rated), and with its popularity*variance measure, at different snapshots of the system. Note that the scout values were initially anti-correlated with popularity (Fig. 7), but became moderately correlated as the system evolved. Both popularity and popularity*variance performed similarly. A possible explanation is that there has been insufficient time for the popular movies to accumulate ratings.

Figure 6: Distribution of scout values for a sample user.

Figure 7: Correlation between aggregated scout value and item popularity (computed at different intervals).

Figure 8: Correlation between aggregated promoter value and user prolificity (computed at different intervals).

Table 1: Movies forming the best scouts.

Best Scouts | Conf. | Pop.
Being John Malkovich (1999) | 1.00 | 445
Star Wars: Episode IV - A New Hope (1977) | 0.92 | 623
Princess Bride, The (1987) | 0.85 | 477
Sixth Sense, The (1999) | 0.85 | 617
Matrix, The (1999) | 0.77 | 522
Ghostbusters (1984) | 0.77 | 441
Casablanca (1942) | 0.77 | 384
Insider, The (1999) | 0.77 | 235
American Beauty (1999) | 0.69 | 624
Terminator 2: Judgment Day (1991) | 0.69 | 503
Fight Club (1999) | 0.69 | 235
Shawshank Redemption, The (1994) | 0.69 | 445
Run Lola Run (Lola rennt) (1998) | 0.69 | 220
Terminator, The (1984) | 0.62 | 450
Usual Suspects, The (1995) | 0.62 | 326
Aliens (1986) | 0.62 | 385
North by Northwest (1959) | 0.62 | 245
Fugitive, The (1993) | 0.62 | 402
End of Days (1999) | 0.62 | 132
Raiders of the Lost Ark (1981) | 0.54 | 540
Schindler's List (1993) | 0.54 | 453
Back to the Future (1985) | 0.54 | 543
Toy Story (1995) | 0.54 | 419
Alien (1979) | 0.54 | 415
Abyss, The (1989) | 0.54 | 345
2001: A Space Odyssey (1968) | 0.54 | 358
Dogma (1999) | 0.54 | 228
Little Mermaid, The (1989) | 0.54 | 203

Table 2: Movies forming the worst scouts.

Worst scouts | Conf. | Pop.
Harold and Maude (1971) | 0.46 | 141
Grifters, The (1990) | 0.46 | 180
Sting, The (1973) | 0.38 | 244
Godfather: Part III, The (1990) | 0.38 | 154
Lawrence of Arabia (1962) | 0.38 | 167
High Noon (1952) | 0.38 | 84
Women on the Verge of a... (1988) | 0.38 | 113
Grapes of Wrath, The (1940) | 0.38 | 115
Duck Soup (1933) | 0.38 | 131
Arsenic and Old Lace (1944) | 0.38 | 138
Midnight Cowboy (1969) | 0.38 | 137
To Kill a Mockingbird (1962) | 0.31 | 195
Four Weddings and a Funeral (1994) | 0.31 | 271
Good, The Bad and The Ugly, The (1966) | 0.31 | 156
It's a Wonderful Life (1946) | 0.31 | 146
Player, The (1992) | 0.31 | 220
Jackie Brown (1997) | 0.31 | 118
Boat, The (Das Boot) (1981) | 0.31 | 210
Manhattan (1979) | 0.31 | 158
Truth About Cats & Dogs, The (1996) | 0.31 | 143
Ghost (1990) | 0.31 | 227
Lone Star (1996) | 0.31 | 125
Big Chill, The (1983) | 0.31 | 184

By studying the evolution of scout values, we can identify movies that consistently feature in good scouts over time. We claim these movies will make viable scouts for other users. We found the aggregated scout values for all movies in intervals of 10,000 ratings each. A movie is said to induce a good scout if the movie was in the top 100 of the sorted list, and to induce a bad scout if it was in the bottom 100 of the same list. Movies appearing consistently high over time are expected to remain up there in the future.
The effective confidence in a movie m is

C_m = \frac{T_m - B_m}{N}    (8)

where T_m is the number of times it appeared in the top 100, B_m the number of times it appeared in the bottom 100, and N is the number of intervals considered. Using this measure, the top few movies expected to induce the best scouts are shown in Table 1. Movies that would be bad scout choices are shown in Table 2, with their associated confidences. The popularities of the movies are also displayed. Although more popular movies appear in the list of good scouts, these tables show that a blind choice of scout based on popularity alone can be potentially dangerous.

Interestingly, the best scout, 'Being John Malkovich', is about a puppeteer who discovers a portal into a movie star, a movie that has been described variously on amazon.com as 'makes you feel giddy,' 'seriously weird,' 'comedy with depth,' 'silly,' 'strange,' and 'inventive.' Indicating whether someone likes this movie or not goes a long way toward situating the user in a suitable neighborhood with similar preferences. On the other hand, several factors may have made a movie a bad scout, like the sharp variance in user preferences in the neighborhood of a movie. Two users may have the same opinion about Lawrence of Arabia, but they may differ sharply about how they felt about the other movies they saw. Bad scouts ensue when there is deviation in behavior around a common synchronization point.

5.3 Inducing good promoters

Ratings that serve to promote items in a collaborative filtering system are critical to allowing a new item to be recommended to users. So, inducing good promoters is important for cold-start recommendation. We note that the frequency distribution of promoter values for a sample movie's ratings is also Gaussian (similar to Fig. 6). This indicates that the promotion of a movie is benefited most by the ratings of a few users, and is unaffected by the ratings of most users. We find a strong correlation between a user's number of ratings and his aggregated promoter value. Fig. 8 depicts the evolution of the Pearson correlation coefficient between the prolificity of a user (number of ratings) and his aggregated promoter value. We expect that conspicuous shills, by recommending wrong movies to users, will be discredited with negative aggregate promoter values and should be identifiable easily. Given this observation, the obvious rule to follow when introducing a new movie is to have it rated directly by prolific users who possess high aggregated promoter values. A new movie is thus cast into the neighborhood of many other movies, improving its visibility. Note, though, that a user may have long stopped using the system. Tracking promoter values consistently allows only the most active recent users to be considered.

5.4 Inducing good connectors

Given the way scouts, connectors, and promoters are characterized, it follows that the movies that are part of the best scouts are also part of the best connectors. Similarly, the users that constitute the best promoters are also part of the best connectors. Good connectors are induced by ensuring that a user with a high promoter value rates a movie with a high scout value. In our experiments, we find that a rating's longest standing role is often as a connector. A rating with a poor connector value is often seen due to its user being a bad promoter, or its movie being a bad scout. Such ratings can be removed from the prediction process to bring marginal improvements to recommendations.
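One way to realize this pruning is sketched below, for illustration only; it assumes the per-rating connector values CF and the (scout, connector, promoter, influence) path tuples of the earlier sketches, and the cutoff value is purely illustrative.

def prune_bad_connectors(paths, CF, cutoff=-0.005):
    """Drop prediction paths whose connector rating has a poor accumulated connector value."""
    kept = [p for p in paths if CF.get(p[1], 0.0) >= cutoff]   # p[1] is the connector rating
    total = sum(infl for (_, _, _, infl) in kept)
    if total == 0.0:
        return kept
    # Re-normalize the surviving influences so they again sum to 1 for this prediction.
    return [(s, c, pr, infl / total) for (s, c, pr, infl) in kept]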
In some selected experiments, we observed that removing a set of badly behaving connectors helped improve the system's overall performance by 1.5%. The effect was even higher on a few select users, who observed an improvement of above 10% in precision without much loss in recall.

5.5 Monitoring the evolution of rating roles

One of the more significant contributions of our work is the ability to model the evolution of recommender systems, by studying the changing roles of ratings over time. The role and value of a rating can change depending on many factors like user behavior, redundancy, shilling effects, or properties of the collaborative filtering algorithm used. Studying the dynamics of rating roles in terms of transitions between good, bad, and negligible values can provide insights into the functioning of the recommender system. We believe that a continuous visualization of these transitions will improve the ability to manage a recommender system.

We classify different rating states as good, bad, or negligible. Consider a user who has rated 100 movies in a particular interval, of which 20 are part of the test set. If a scout has a value greater than 0.005, it indicates that it is uniquely involved in at least 2 concordant predictions, which we will say is good. Thus, a threshold of 0.005 is chosen to bin a rating as good, bad, or negligible in terms of its scout, connector, and promoter value. For instance, a rating r, at time t, with role value triple [0.1, 0.001, −0.01] is classified as [scout +, connector 0, promoter −], where + indicates good, 0 indicates negligible, and − indicates bad.

The positive credit held by a rating is a measure of its contribution to the betterment of the system, and the discredit is a measure of its contribution to the detriment of the system. Even though the positive roles (and the negative roles) make up a very small percentage of all ratings, their contribution supersedes their size. For example, even though only 1.7% of all ratings were classified as good scouts, they hold 79% of all positive credit in the system! Similarly, the bad scouts were just 1.4% of all ratings but hold 82% of all discredit. Note that good and bad scouts, together, comprise only 1.4% + 1.7% = 3.1% of the ratings, implying that the majority of the ratings are negligible role players as scouts (more on this later). Likewise, good connectors were 1.2% of the system, and hold 30% of all positive credit. The bad connectors (0.8% of the system) hold 36% of all discredit. Good promoters (3% of the system) hold 46% of all credit, while bad promoters (2%) hold 50% of all discredit. This reiterates that a few ratings influence most of the system's performance. Hence it is important to track transitions between them regardless of their small numbers.

Across different snapshots, a rating can remain in the same state or change. A good scout can become a bad scout, a good promoter can become a good connector, good and bad scouts can become vestigial, and so on. It is not practical to expect a recommender system to have no ratings in bad roles. However, it suffices to see ratings in bad roles either convert to good or vestigial roles. Similarly, observing a large number of good roles become bad ones is a sign of imminent failure of the system. We employ the principle of non-overlapping episodes [6] to count such transitions.
A sequence such as

[+, 0, 0] → [+, 0, 0] → [0, +, 0] → [0, 0, 0]

is interpreted as the transitions

[+, 0, 0] ⇝ [0, +, 0] : 1
[+, 0, 0] ⇝ [0, 0, 0] : 1
[0, +, 0] ⇝ [0, 0, 0] : 1

instead of

[+, 0, 0] ⇝ [0, +, 0] : 2
[+, 0, 0] ⇝ [0, 0, 0] : 2
[0, +, 0] ⇝ [0, 0, 0] : 1.

See [6] for further details about this counting procedure.
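A small sketch of this counting, restricted to two-state episodes over one rating's state sequence, is shown below. It is a simplified rendering of the non-overlapping counting idea of [6], not the exact procedure described there, and it reproduces the example above.

from collections import Counter
from itertools import product

def count_transitions(state_sequence):
    """Count non-overlapping occurrences of s followed later by t, for distinct states s, t."""
    counts = Counter()
    for s, t in product(set(state_sequence), repeat=2):
        if s == t:
            continue
        pending = 0                       # occurrences of s not yet matched to a later t
        for state in state_sequence:
            if state == s:
                pending += 1
            elif state == t and pending > 0:
                counts[(s, t)] += 1       # one non-overlapping episode s -> t
                pending -= 1
    return counts

seq = ["[+,0,0]", "[+,0,0]", "[0,+,0]", "[0,0,0]"]
print(count_transitions(seq))             # each of the three transitions above is counted once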
Thus, a rating can be in one of 27 possible states, and there are about 27² possible transitions. We make a further simplification and utilize only 9 states, indicating whether the rating is a scout, promoter, or connector, and whether it has a positive, negative, or negligible role. Ratings that serve multiple purposes are counted using multiple episode instantiations, but the states themselves are not duplicated beyond the 9 restricted states. In this model, a transition such as [+, 0, +] ⇝ [0, +, 0] : 1 is counted as

[scout +] ⇝ [scout 0] : 1
[scout +] ⇝ [connector +] : 1
[scout +] ⇝ [promoter 0] : 1
[connector 0] ⇝ [scout 0] : 1
[connector 0] ⇝ [connector +] : 1
[connector 0] ⇝ [promoter 0] : 1
[promoter +] ⇝ [scout 0] : 1
[promoter +] ⇝ [connector +] : 1
[promoter +] ⇝ [promoter 0] : 1

Of these, transitions like [p X] ⇝ [q 0], where p ≠ q and X ∈ {+, 0, −}, are considered uninteresting, and only the rest are counted.

Fig. 9 depicts the major transitions counted while processing the first 200,000 ratings from the MovieLens dataset. Only transitions with frequency greater than or equal to 3% are shown. The percentages for each state indicate the number of ratings that were found to be in those states. We consider transitions from any state to a good state as healthy, from any state to a bad state as unhealthy, and from any state to a vestigial state as decaying.

Figure 9: Transitions among rating roles. (State occupancies: scout +/−/0: 2%/1.5%/96.5%; connector +/−/0: 1.2%/0.8%/98%; promoter +/−/0: 3%/2%/95%.)

From Fig. 9, we can observe:

• The bulk of the ratings have negligible values, irrespective of their role. The majority of the transitions involve both good and bad ratings becoming negligible.
• The number of good ratings is comparable to the bad ratings, and ratings are seen to switch states often, except in the case of scouts (see below).
• The negative and positive scout states are not reachable through any transition, indicating that these ratings must begin as such, and cannot be coerced into these roles.
• Good promoters and good connectors have a much longer survival period than scouts. Transitions that recur to these states have frequencies of 10% and 15%, when compared to just 4% for scouts. Good connectors are the slowest to decay, whereas (good) scouts decay the fastest.
• Healthy percentages are seen on transitions between promoters and connectors. As indicated earlier, there are hardly any transitions from promoters/connectors to scouts. This indicates that, over the long run, a user's rating is more useful to others (movies or other users) than to the user himself.
• The percentages of healthy transitions outweigh the unhealthy ones; this hints that the system is healthy, albeit only marginally.

Note that these results are conditioned by the static nature of the dataset, which is a set of ratings over a fixed window of time. However, a diagram such as Fig. 9 is clearly useful for monitoring the health of a recommender system. For instance, acceptable limits can be imposed on different types of transitions and, if a transition fails to meet the threshold, the recommender system or a part of it can be brought under closer scrutiny. Furthermore, the role state transition diagram would also be the ideal place to study the effects of shilling, a topic we will consider in future research.

5.6 Characterizing neighborhoods

Earlier we saw that we can characterize the neighborhood of ratings involved in creating a recommendation list L for a user. In our experiment, we consider lists of length 30, and sample the lists of about 5% of users through the evolution of the model (at intervals of 10,000 ratings each) and compute their neighborhood characteristics. To simplify our presentation, we consider the percentage of the sample that falls into one of the following categories:

1. Inactive user: (SFN(u) = 0)
2. Good scouts, Good neighborhood: (SFN(u) > 0) ∧ (CFN(u) > 0 ∧ PFN(u) > 0)
3. Good scouts, Bad neighborhood: (SFN(u) > 0) ∧ (CFN(u) < 0 ∨ PFN(u) < 0)
4. Bad scouts, Good neighborhood: (SFN(u) < 0) ∧ (CFN(u) > 0 ∧ PFN(u) > 0)
5. Bad scouts, Bad neighborhood: (SFN(u) < 0) ∧ (CFN(u) < 0 ∨ PFN(u) < 0)

From our sample set of 561 users, we found that 476 users were inactive. Of the remaining 85 users, we found 26 users had good scouts and a good neighborhood, 6 had bad scouts and a good neighborhood, 29 had good scouts and a bad neighborhood, and 24 had bad scouts and a bad neighborhood. Thus, we conjecture that 59 users (29 + 24 + 6) are in danger of leaving the system. As a remedy, users with bad scouts and a good neighborhood can be asked to reconsider their ratings of some movies, hoping to improve the system's recommendations. The system can be expected to deliver more if they engineer some good scouts. Users with good scouts and a bad neighborhood are harder to address; this situation might entail selectively removing some connector-promoter pairs that are causing the damage. Handling users with bad scouts and bad neighborhoods is a more difficult challenge. Such a classification allows the use of different strategies to better a user's experience with the system, depending on his context. In future work, we intend to conduct field studies and study the improvement in performance of different strategies for different contexts.

6. CONCLUSIONS

To further recommender system acceptance and deployment, we require new tools and methodologies to manage an installed recommender and develop insights into the roles played by ratings. A fine-grained characterization in terms of rating roles such as scouts, promoters, and connectors, as done here, helps such an endeavor. Although we have presented results on only the item-based algorithm with list rank accuracy as the metric, the same approach outlined here applies to user-based algorithms and other metrics. In future research, we plan to systematically study the many algorithmic parameters, tolerances, and cutoff thresholds employed here and reason about their effects on the downstream conclusions. We also aim to extend our formulation to other collaborative filtering algorithms, study the effect of shilling in altering rating roles, conduct field studies, and evaluate improvements in user experience by tweaking ratings based on their role values. Finally, we plan to develop the idea of mining the evolution of rating role patterns into a reporting and tracking system for all aspects of recommender system health.

7. REFERENCES
[1] Cosley, D., Lam, S., Albert, I., Konstan, J., and Riedl, J. Is Seeing Believing? How Recommender System Interfaces Affect Users' Opinions. In Proc. CHI (2001), pp. 585-592.
[2] Herlocker, J. L., Konstan, J. A., Borchers, A., and Riedl, J. An Algorithmic Framework for Performing Collaborative Filtering. In Proc. SIGIR (1999), pp. 230-237.
[3] Herlocker, J. L., Konstan, J. A., Terveen, L. G., and Riedl, J. T. Evaluating Collaborative Filtering Recommender Systems. ACM Transactions on Information Systems Vol. 22, 1 (2004), pp. 5-53.
[4] Konstan, J. A. Personal communication. 2003.
[5] Lam, S. K., and Riedl, J. Shilling Recommender Systems for Fun and Profit. In Proceedings of the 13th International World Wide Web Conference (2004), ACM Press, pp. 393-402.
[6] Laxman, S., Sastry, P. S., and Unnikrishnan, K. P. Discovering Frequent Episodes and Learning Hidden Markov Models: A Formal Connection. IEEE Transactions on Knowledge and Data Engineering Vol. 17, 11 (2005), pp. 1505-1517.
[7] McLaughlin, M. R., and Herlocker, J. L. A Collaborative Filtering Algorithm and Evaluation Metric that Accurately Model the User Experience. In Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (2004), pp. 329-336.
[8] O'Mahony, M., Hurley, N. J., Kushmerick, N., and Silvestre, G. Collaborative Recommendation: A Robustness Analysis. ACM Transactions on Internet Technology Vol. 4, 4 (Nov 2004), pp. 344-377.
[9] Rashid, A. M., Albert, I., Cosley, D., Lam, S., McNee, S., Konstan, J. A., and Riedl, J. Getting to Know You: Learning New User Preferences in Recommender Systems. In Proceedings of the 2002 Conference on Intelligent User Interfaces (IUI 2002) (2002), pp. 127-134.
[10] Rashid, A. M., Karypis, G., and Riedl, J. Influence in Ratings-Based Recommender Systems: An Algorithm-Independent Approach. In Proc. of the SIAM International Conference on Data Mining (2005).
[11] Resnick, P., Iacovou, N., Suchak, M., Bergstrom, P., and Riedl, J. GroupLens: An Open Architecture for Collaborative Filtering of Netnews. In Proceedings of the Conference on Computer Supported Collaborative Work (CSCW'94) (1994), ACM Press, pp. 175-186.
[12] Sarwar, B., Karypis, G., Konstan, J., and Riedl, J. Item-Based Collaborative Filtering Recommendation Algorithms. In Proceedings of the Tenth International World Wide Web Conference (WWW10) (2001), pp. 285-295.
[13] Schein, A., Popescu, A., Ungar, L., and Pennock, D. Methods and Metrics for Cold-Start Recommendation. In Proc. SIGIR (2002), pp. 253-260.
Scouts, Promoters, and Connectors: The Roles of Ratings in Nearest Neighbor Collaborative Filtering ABSTRACT Recommender systems aggregate individual user ratings into predictions of products or services that might interest visitors. The quality of this aggregation process crucially affects the user experience and hence the effectiveness of recommenders in e-commerce. We present a novel study that disaggregates global recommender performance metrics into contributions made by each individual rating, allowing us to characterize the many roles played by ratings in nearestneighbor collaborative filtering. In particular, we formulate three roles--scouts, promoters, and connectors--that capture how users receive recommendations, how items get recommended, and how ratings of these two types are themselves connected (resp.) . These roles find direct uses in improving recommendations for users, in better targeting of items and, most importantly, in helping monitor the health of the system as a whole. For instance, they can be used to track the evolution of neighborhoods, to identify rating subspaces that do not contribute (or contribute negatively) to system performance, to enumerate users who are in danger of leaving, and to assess the susceptibility of the system to attacks such as shilling. We argue that the three rating roles presented here provide broad primitives to manage a recommender system and its community. 1. INTRODUCTION Recommender systems have become integral to e-commerce, providing technology that suggests products to a visitor based on previous purchases or rating history. Collaborative filtering, a common form of recommendation, predicts a user's rating for an item by combining (other) ratings of that user with other users' ratings. Significant research has been conducted in implementing fast and accurate collaborative filtering algorithms [2, 7], designing interfaces for presenting recommendations to users [1], and studying the robustness of these algorithms [8]. However, with the exception of a few studies on the influence of users [10], little attention has been paid to unraveling the inner workings of a recommender in terms of the individual ratings and the roles they play in making (good) recommendations. Such an understanding will give an important handle to monitoring and managing a recommender system, to engineer mechanisms to sustain the recommender, and thereby ensure its continued success. Our motivation here is to disaggregate global recommender performance metrics into contributions made by each individual rating, allowing us to characterize the many roles played by ratings in nearest-neighbor collaborative filtering. We identify three possible roles: (scouts) to connect the user into the system to receive recommendations, (promoters) to connect an item into the system to be recommended, and (connectors) to connect ratings of these two kinds. Viewing ratings in this way, we can define the contribution of a rating in each role, both in terms of allowing recommendations to occur, and in terms of influence on the quality of recommendations. In turn, this capability helps support scenarios such as: 1. Situating users in better neighborhoods: A user's ratings may inadvertently connect the user to a neighborhood for which the user's tastes may not be a perfect match. Identifying ratings responsible for such bad recommendations and suggesting new items to rate can help situate the user in a better neighborhood. 2. 
Targeting items: Recommender systems suffer from lack of user participation, especially in cold-start scenarios [13] involving newly arrived items. Identifying users who can be encouraged to rate specific items helps ensure coverage of the recommender system. 3. Monitoring the evolution of the recommender system and its stakeholders: A recommender system is constantly under change: growing with new users and items, shrinking with users leaving the system, items becoming irrelevant, and parts of the system under attack. Tracking the roles of a rating and its evolution over time provides many insights into the health of the system, and how it could be managed and improved. These include being able to identify rating subspaces that do not contribute (or contribute negatively) to system performance, and could be removed; to enumerate users who are in danger of leaving, or have left the system; and to assess the susceptibility of the system to attacks such as shilling [5]. As we show, the characterization of rating roles presented here provides broad primitives to manage a recommender system and its community. The rest of the paper is organized as follows. Background on nearest-neighbor collaborative filtering and algorithm evaluation is discussed in Section 2. Section 3 defines and discusses the roles of a rating, and Section 4 defines measures of the contribution of a rating in each of these roles. In Section 5, we illustrate the use of these roles to address the goals outlined above. 2. BACKGROUND 2.1 Algorithms Nearest-neighbor collaborative filtering algorithms either use neighborhoods of users or neighborhoods of items to compute a prediction. An algorithm of the first kind is called user-based, and one of the second kind is called itembased [12]. In both families of algorithms, neighborhoods are formed by first computing the similarity between all pairs of users (for user-based) or items (for item-based). Predictions are then computed by aggregating ratings, which in a user-based algorithm involves aggregating the ratings of the target item by the user's neighbors and, in an item-based algorithm, involves aggregating the user's ratings of items that are neighbors of the target item. Algorithms within these families differ in the definition of similarity, formation of neighborhoods, and the computation of predictions. We consider a user-based algorithm based on that defined for GroupLens [11] with variations from Herlocker et al. [2], and an item-based algorithm similar to that of Sarwar et al. [12]. The algorithm used by Resnick et al. [11] defines the similarity of two users u and v as the Pearson correlation of their common ratings: where Iu is the set of items rated by user u, ru, i is user u's rating for item i, and ¯ ru is the average rating of user u (similarly for v). Similarity computed in this manner is typically scaled by a factor proportional to the number of common ratings, to reduce the chance of making a recommendation made on weak connections: sim' (u, v) = max (| Iu ∩ Iv |, γ) · sim (u, v), γ where γ ≈ 5 is a constant used as a lower limit in scaling [2]. These new similarities are then used to define a static neighborhood Nu for each user u consisting of the top K users most similar to user u. A prediction for user u and item i is computed by a weighted average of the ratings by the neighbors where V = Nu ∩ Ui is the set of users most similar to u who have rated i. The item-based algorithm we use is the one defined by Sarwar et al. [12]. 
In this algorithm, similarity is defined as the adjusted cosine measure where Ui is the set of users who have rated item i. As for the user-based algorithm, the similarity weights are adjusted proportionally to the number of users that have rated the items in common Given the similarities, the neighborhood Ni of an item i is defined as the top K most similar items for that item. A prediction for user u and item i is computed as the weighted average where J = Ni ∩ Iu is the set of items rated by u that are most similar to i. 2.2 Evaluation Recommender algorithms have typically been evaluated using measures of predictive accuracy and coverage [3]. Studies on recommender algorithms, notably Herlocker et al. [2] and Sarwar et al. [12], typically compute predictive accuracy by dividing a set of ratings into training and test sets, and compute the prediction for an item in the test set using the ratings in the training set. A standard measure of predictive accuracy is mean absolute error (MAE), which for a test set Coverage has a number of definitions, but generally refers to the proportion of items that can be predicted by the algorithm [3]. A practical issue with predictive accuracy is that users typically are presented with recommendation lists, and not individual numeric predictions. Recommendation lists are lists of items in decreasing order of prediction (sometimes stated in terms of star-ratings), and so predictive accuracy may not be reflective of the accuracy of the list. So, instead we can measure recommendation or rank accuracy, which indicates the extent to which the list is in the correct order. Herlocker et al. [3] discuss a number of rank accuracy measures, which range from Kendall's Tau to measures that consider the fact that users tend to only look at a prefix of the list [5]. Kendall's Tau measures the number of inversions when comparing ordered pairs in the true user ordering of Figure 1: Ratings in simple movie recommender. items and the recommended order, and is defined as where C is the number of pairs that the system predicts in the correct order, D the number of pairs the system predicts in the wrong order, TR the number of pairs in the true ordering that have the same ratings, and TP is the number of pairs in the predicted ordering that have the same ratings [3]. A shortcoming of the Tau metric is that it is oblivious to the position in the ordered list where the inversion occurs [3]. For instance, an inversion toward the end of the list is given the same weight as one in the beginning. One solution is to consider inversions only in the top few items in the recommended list or to weight inversions based on their position in the list. 3. ROLES OF A RATING Our basic observation is that each rating plays a different role in each prediction in which it is used. Consider a simplified movie recommender system with three users Jim, Jeff, and Tom and their ratings for a few movies, as shown in Fig. 1. (For this initial discussion we will not consider the rating values involved.) The recommender predicts whether Tom will like The Mask using the other already available ratings. How this is done depends on the algorithm: 1. An item-based collaborative filtering algorithm constructs a neighborhood of movies around The Mask by using the ratings of users who rated The Mask and other movies similarly (e.g., Jim's ratings of The Matrix and The Mask; and Jeff's ratings of Star Wars and The Mask). Tom's ratings of those movies are then used to make a prediction for The Mask. 2. 
A user-based collaborative filtering algorithm would construct a neighborhood around Tom by tracking other users whose rating behaviors are similar to Tom's (e.g., Tom and Jeff have rated Star Wars; Tom and Jim have rated The Matrix). The prediction of Tom's rating for The Mask is then based on the ratings of Jeff and Tim. Although the nearest-neighbor algorithms aggregate the ratings to form neighborhoods used to compute predictions, we can disaggregate the similarities to view the computation of a prediction as simultaneously following parallel paths of ratings. So, irrespective of the collaborative filtering algorithm used, we can visualize the prediction of Tom's rating of The Mask as walking through a sequence of ratings. In Figure 2: Ratings used to predict The Mask for Tom. Figure 3: Prediction of The Mask for Tom in which a rating is used more than once. this example, two paths were used for this prediction as depicted in Fig. 2: (p1, p2, p3) and (q1, q2, q3). Note that these paths are undirected, and are all of length 3. Only the order in which the ratings are traversed is different between the item-based algorithm (e.g., (p3, p2, p1), (q3, q2, q1)) and the user-based algorithm (e.g., (p1, p2, p3), (q1, q2, q3).) A rating can be part of many paths for a single prediction as shown in Fig. 3, where three paths are used for a prediction, two of which follow p1: (p1, p2, p3) and (p1, r2, r3). Predictions in a collaborative filtering algorithms may involve thousands of such walks in parallel, each playing a part in influencing the predicted value. Each prediction path consists of three ratings, playing roles that we call scouts, promoters, and connectors. To illustrate these roles, consider the path (p1, p2, p3) in Fig. 2 used to make a prediction of The Mask for Tom: 1. The rating p1 (Tom--> Star Wars) makes a connection from Tom to other ratings that can be used to predict Tom's rating for The Mask. This rating serves as a scout in the bipartite graph of ratings to find a path that leads to The Mask. 2. The rating p2 (Jeff--> Star Wars) helps the system recommend The Mask to Tom by connecting the scout to the promoter. 3. The rating p3 (Jeff--> The Mask) allows connections to The Mask, and, therefore, promotes this movie to Tom. Formally, given a prediction pu, a of a target item a for user u, a scout for pu, a is a rating ru, i such that there exists a user v with ratings rv, a and rv, i for some item i; a promoter for pu, a is a rating rv, a for some user v, such that there exist ratings rv, i and ru, i for an item i, and; a connector for pu, a Figure 4: Scouts, promoters, and connectors. is a rating rv, i by some user v and rating i, such that there exists ratings ru, i and rv, a. The scouts, connectors, and promoters for the prediction of Tom's rating of The Mask are p1 and q1, p2 and q2, and p3 and q3 (respectively). Each of these roles has a value in the recommender to the user, the user's neighborhood, and the system in terms of allowing recommendations to be made. 3.1 Roles in Detail Ratings that act as scouts tend to help the recommender system suggest more movies to the user, though the extent to which this is true depends on the rating behavior of other users. For example, in Fig. 4 the rating Tom → Star Wars helps the system recommend only The Mask to him, while Tom → The Matrix helps recommend The Mask, Jurassic Park, and My Cousin Vinny. Tom makes a connection to Jim who is a prolific user of the system, by rating The Matrix. 
However, this does not make The Matrix the best movie to rate for everyone. For example, Jim is benefited equally by both The Mask and The Matrix, which allow the system to recommend Star Wars to him. His rating of The Mask is the best scout for Jeff, and Jerry's only scout is his rating of Star Wars. This suggests that good scouts allow a user to build similarity with prolific users, and thereby ensure they get more from the system. While scouts represent beneficial ratings from the perspective of a user, promoters are their duals, and are of benefit to items. In Fig. 4, My Cousin Vinny benefits from Jim's rating, since it allows recommendations to Jeff and Tom. The Mask is not so dependent on just one rating, since the ratings by Jim and Jeff help it. On the other hand, Jerry's rating of Star Wars does not help promote it to any other user. We conclude that a good promoter connects an item to a broader neighborhood of other items, and thereby ensures that it is recommended to more users. Connectors serve a crucial role in a recommender system that is not as obvious. The movies My Cousin Vinny and Jurassic Park have the highest recommendation potential since they can be recommended to Jeff, Jerry and Tom based on the linkage structure illustrated in Fig. 4. Beside the fact that Jim rated these movies, these recommendations are possible only because of the ratings Jim → The Matrix and Jim → The Mask, which are the best connectors. A connector improves the system's ability to make recommendations with no explicit gain for the user. Note that every rating can be of varied benefit in each of these roles. The rating Jim → My Cousin Vinny is a poor scout and connector, but is a very good promoter. The rating Jim → The Mask is a reasonably good scout, a very good connector, and a good promoter. Finally, the rating Jerry → Star Wars is a very good scout, but is of no value as a connector or promoter. As illustrated here, a rating can have different value in each of the three roles in terms of whether a recommendation can be made or not. We could measure this value by simply counting the number of times a rating is used in each role, which alone would be helpful in characterizing the behavior of a system. But we can also measure the contribution of each rating to the quality of recommendations or health of the system. Since every prediction is a combined effort of several recommendation paths, we are interested in discerning the influence of each rating (and, hence, each path) in the system towards the system's overall error. We can understand the dynamics of the system at a finer granularity by tracking the influence of a rating according to the role played. The next section describes the approach to measuring the values of a rating in each role. 4. CONTRIBUTIONS OF RATINGS As we've seen, a rating may play different roles in different predictions and, in each prediction, contribute to the quality of a prediction in different ways. Our approach can use any numeric measure of a property of system health, and assigns credit (or blame) to each rating proportional to its influence in the prediction. By tracking the role of each rating in a prediction, we can accumulate the credit for a rating in each of the three roles, and also track the evolution of the roles of rating over time in the system. This section defines the methodology for computing the contribution of ratings by first defining the influence of a rating, and then instantiating the approach for predictive accuracy, and then rank accuracy. 
We also demonstrate how these contributions can be aggregated to study the neighborhood of ratings involved in computing a user's recommendations. Note that although our general formulation for rating influence is algorithm independent, due to space considerations, we present the approach for only item-based collaborative filtering. The definition for user-based algorithms is similar and will be presented in an expanded version of this paper. 4.1 Influence of Ratings Recall that an item-based approach to collaborative filtering relies on building item neighborhoods using the similarity of ratings by the same user. As described earlier, similarity is defined by the adjusted cosine measure (Equations (2) and (3)). A set of the top K neighbors is maintained for all items for space and computational efficiency. A prediction of item i for a user u is computed as the weighted deviation from the item's mean rating as shown in Equation (4). The list of recommendations for a user is then the list of items sorted in descending order of their predicted values. We first define impact (a, i, j), the impact a user a has in determining the similarity between two items i and j. This is the change in the similarity between i and j when a's rating is removed, and is defined as of items i and j (users who rate both i and j), R (u) is the set of ratings provided by user u, and sim' ¯ a (i, j) is the similarity of i and j when the ratings of user a are removed and adjusted for the number of raters The influence of each path (u, j, v, i) = [ru, j, rv, j, rv, i] in the prediction of pu, i is given by It follows that the sum of influences over all such paths, for a given set of endpoints, is 1. 4.2 Role Values for Predictive Accuracy The value of a rating in each role is computed from the influence depending on the evaluation measure employed. Here we illustrate the approach using predictive accuracy as the evaluation metric. In general, the goodness of a prediction decides whether the ratings involved must be credited or discredited for their role. For predictive accuracy, the error in prediction e = | pu, i − ru, i | is mapped to a comfort level using a mapping function M (e). Anecdotal evidence suggests that users are unable to discern errors less than 1.0 (for a rating scale of 1 to 5) [4], and so an error less than 1.0 is considered acceptable, but anything larger is not. We hence define M (e) as (1 − e) binned to an appropriate value in [− 1, − 0.5, 0.5, 1]. For each prediction pu, i, M (e) is attributed to all the paths that assisted the computation of pu, i, proportional to their influences. This tribute, M (e) ∗ influence (u, j, v, i), is in turn inherited by each of the ratings in the path [ru, j, rv, j, rv, i], with the credit/blame accumulating to the respective roles of ru, j as a scout, rv, j as a connector, and rv, i as a promoter. In other words, the scout value SF (ru, j), the connector value CF (rv, j) and the promoter value PF (rv, i) are all incremented by the tribute amount. Over a large number of predictions, scouts that have repeatedly resulted in big error rates have a big negative scout value, and vice versa (similarly with the other roles). Every rating is thus summarized by its triple [SF, CF, PF]. 4.3 Role Values for Rank Accuracy We now define the computation of the contribution of ratings to observed rank accuracy. For this computation, we must know the user's preference order for a set of items for which predictions can be computed. 
We assume that we have a test set of the users' ratings of the items presented in the recommendation list. For every pair of items rated by a user in the test data, we check whether the predicted order is concordant with his preference. We say a pair (i, j) is concordant (with error e) whenever one of the following holds: • if (ru, i <ru, j) then (pu, i − pu, j <e); • if (ru, i> ru, j) then (pu, i − pu, j> e); or • if (ru, i = ru, j) then (| pu, i − pu, j | ≤ E). Similarly, a pair (i, j) is discordant (with error e) if it is not concordant. Our experiments described below use an error tolerance of e = 0.1. All paths involved in the prediction of the two items in a concordant pair are credited, and the paths involved in a discordant pair are discredited. The credit assigned to a pair of items (i, j) in the recommendation list for user u is computed as where t is the number of items in the user's test set whose ratings could be predicted, T is the number of items rated by user u in the test set, C is the number of concordances and D is the number of discordances. The credit c is then divided among all paths responsible for predicting pu, i and pu, j proportional to their influences. We again add the role values obtained from all the experiments to form a triple [SF, CF, PF] for each rating. 4.4 Aggregating rating roles After calculating the role values for individual ratings, we can also use these values to study neighborhoods and the system. Here we consider how we can use the role values to characterize the health of a neighborhood. Consider the list of top recommendations presented to a user at a specific point in time. The collaborative filtering algorithm traversed many paths in his neighborhood through his scouts and other connectors and promoters to make these recommendations. We call these ratings the recommender neighborhood of the user. The user implicitly chooses this neighborhood of ratings through the items he rates. Apart from the collaborative filtering algorithm, the health of this neighborhood completely influences a user's satisfaction with the system. We can characterize a user's recommender neighborhood by aggregating the individual role values of the ratings involved, weighted by the influence of individual ratings in determining his recommended list. Different sections of the user's neighborhood wield varied influence on his recommendation list. For instance, ratings reachable through highly rated items have a bigger say in the recommended items. Our aim is to study the system and classify users with respect to their positioning in a healthy or unhealthy neighborhood. A user can have a good set of scouts, but may be exposed to a neighborhood with bad connectors and promoters. He can have a good neighborhood, but his bad scouts may ensure the neighborhood's potential is rendered useless. We expect that users with good scouts and good neighborhoods will be most satisfied with the system in the future. A user's neighborhood is characterized by a triple that represents the weighted sum of the role values of individual ratings involved in making recommendations. Consider a user u and his ordered list of recommendations L. An item i in the list is weighted inversely, as K (i), depending on its popsition in the list. In our studies we use K (i) = position (i). Several paths of ratings [ru, j, rv, j, rv, i] are involved in predicting pu, i which ultimately decides its position in L, each with inf luence (u, j, v, i). 
The recommender neighborhood of a user u is characterized by the triple [SFN (u), CFN (u), PFN (u)], where SFN (u) aggregates the scout values of the ratings involved, weighted by their influences and (inversely) by the list position of the recommended items they support, and where CFN (u) and PFN (u) are defined similarly. This triple estimates the quality of u's recommendations based on the past track record of the ratings involved in their respective roles. 5. EXPERIMENTATION As we have seen, we can assign role values to each rating when evaluating a collaborative filtering system. In this section, we demonstrate the use of this approach toward our overall goal of monitoring and managing the health of a recommender system, through experiments done on the MovieLens million rating dataset. In particular, we discuss results relating to identifying good scouts, promoters, and connectors; the evolution of rating roles; and the characterization of user neighborhoods. 5.1 Methodology Our experiments use the MovieLens million rating dataset, which consists of ratings by 6040 users of 3952 movies. The ratings are in the range 1 to 5, and are labeled with the time the rating was given. As discussed before, we consider only the item-based algorithm here (with item neighborhoods of size 30) and, due to space considerations, only present role value results for rank accuracy. Since we are interested in the evolution of the rating role values over time, the model of the recommender system is built by processing ratings in their arrival order. The timestamping provided by MovieLens is hence crucial for the analyses presented here. We make assessments of rating roles at intervals of 10,000 ratings and processed the first 200,000 ratings in the dataset (giving rise to 20 snapshots). We incrementally update the role values as the time ordered ratings are merged into the model. To keep the experiment computationally manageable, we define a test dataset for each user. As the time ordered ratings are merged into the model, we label a small randomly selected percentage (20%) as test data. At discrete epochs, i.e., after processing every 10,000 ratings, we compute the predictions for the ratings in the test data, and then compute the role values for the ratings used in the predictions. One potential criticism of this methodology is that the ratings in the test set are never evaluated for their roles. We overcome this concern by repeating the experiment using different random seeds. The probability that every rating is considered for evaluation is then considerably high: 1 − 0.2^n, where n is the number of times the experiment is repeated with different random seeds. The results here are based on n = 4 repetitions. The item-based collaborative filtering algorithm's performance was ordinary with respect to rank accuracy. Fig. 5 shows a plot of the precision and recall as ratings were merged in time order into the model. The recall was always high, but the average precision was just about 53%. Figure 5: Precision and recall for the item-based collaborative filtering algorithm. 5.2 Inducing good scouts The ratings of a user that serve as scouts are those that allow the user to receive recommendations. We claim that users with ratings that have respectable scout values will be happier with the system than those with ratings with low scout values. Note that the item-based algorithm discussed here produces recommendation lists with nearly half of the pairs in the list discordant from the user's preference.
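For reference, the snapshot-based evaluation loop of Section 5.1 can be sketched as follows; Rating, Model, and RoleEvaluator are placeholder types standing in for the actual implementation, and whether held-out ratings also enter the model is an assumption.

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

/** A minimal sketch of the methodology: merge ratings in timestamp order,
 *  hold out roughly 20% as test data, and recompute role values every 10,000 ratings. */
final class SnapshotEvaluation {

    static final int SNAPSHOT_INTERVAL = 10_000;
    static final double TEST_FRACTION = 0.2;

    static void run(List<Rating> ratingsInTimeOrder, Model model,
                    RoleEvaluator evaluator, long randomSeed) {
        Random rng = new Random(randomSeed);     // a different seed for each repetition
        List<Rating> testSet = new ArrayList<>();
        int processed = 0;

        for (Rating r : ratingsInTimeOrder) {
            if (rng.nextDouble() < TEST_FRACTION) {
                testSet.add(r);                  // held out: predicted, never credited for roles
            } else {
                model.merge(r);                  // becomes part of the item neighborhoods
            }
            processed++;
            if (processed % SNAPSHOT_INTERVAL == 0) {
                evaluator.updateRoleValues(model, testSet);  // credit/blame the ratings used
            }
        }
    }

    interface Rating {}
    interface Model { void merge(Rating r); }
    interface RoleEvaluator { void updateRoleValues(Model m, List<Rating> testSet); }
}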
Whether all of these discordant pairs are observable by the user is unclear; however, this certainly suggests that there is a need to be able to direct users to items whose ratings would improve the lists. The distribution of the scout values for most users' ratings is Gaussian with mean zero. Fig. 6 shows the frequency distribution of scout values for a sample user at a given snapshot. We observe that a large number of ratings never serve as scouts for their users. A relatable scenario is when Amazon's recommender makes suggestions of books or items based on other items that were purchased as gifts. With simple relevance feedback from the user, such ratings can be isolated as bad scouts and discounted from future predictions. Removing bad scouts was found to be worthwhile for individual users but the overall performance improvement was only marginal. An obvious question is whether good scouts can be formed by merely rating popular movies, as suggested by Rashid et al. [9]. They show that a mix of popularity and rating entropy identifies the best items to suggest to new users when evaluated using MAE. Following their intuition, we would expect to see a higher correlation between popularity-entropy and good scouts. We measured the Pearson correlation coefficient between aggregated scout values for a movie with the popularity of a movie (number of times it is rated), and with its popularity * variance measure, at different snapshots of the system. Note that the scout values were initially anti-correlated with popularity (Fig. 7), but became moderately correlated as the system evolved. Both popularity and popularity * variance performed similarly. A possible explanation is that there has been insufficient time for the popular movies to accumulate ratings. Figure 6: Distribution of scout values for a sample user. Figure 7: Correlation between aggregated scout value and item popularity (computed at different intervals). Figure 8: Correlation between aggregated promoter value and user prolificity (computed at different intervals). Table 1: Movies forming the best scouts. Table 2: Movies forming the worst scouts. By studying the evolution of scout values, we can identify movies that consistently feature in good scouts over time. We claim these movies will make viable scouts for other users. We found the aggregated scout values for all movies in intervals of 10,000 ratings each. A movie is said to induce a good scout if the movie was in the top 100 of the sorted list, and to induce a bad scout if it was in the bottom 100 of the same list. Movies appearing consistently high over time are expected to remain up there in the future. The effective confidence in a movie m is computed from Tm, the number of times it appeared in the top 100, Bm, the number of times it appeared in the bottom 100, and N, the number of intervals considered. Using this measure, the top few movies expected to induce the best scouts are shown in Table 1. Movies that would be bad scout choices are shown in Table 2 with their associated confidences. The popularities of the movies are also displayed. Although more popular movies appear in the list of good scouts, these tables show that a blind choice of scout based on popularity alone can be potentially dangerous.
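The popularity correlations of Fig. 7 reduce to a standard Pearson computation over per-movie aggregates; the array layout below (one entry per movie, aligned across arrays) is an assumption about the bookkeeping, not part of the paper.

/** A minimal sketch of the snapshot-level correlation between a movie's
 *  aggregated scout value and its popularity (number of ratings). */
final class ScoutPopularityCorrelation {

    static double pearson(double[] x, double[] y) {
        int n = x.length;
        double meanX = 0, meanY = 0;
        for (int i = 0; i < n; i++) { meanX += x[i]; meanY += y[i]; }
        meanX /= n; meanY /= n;

        double cov = 0, varX = 0, varY = 0;
        for (int i = 0; i < n; i++) {
            double dx = x[i] - meanX, dy = y[i] - meanY;
            cov += dx * dy; varX += dx * dx; varY += dy * dy;
        }
        return cov / Math.sqrt(varX * varY);
    }

    /** Correlation of aggregated scout values with raw popularity at one snapshot. */
    static double scoutVsPopularity(double[] aggregatedScoutValue, int[] ratingCount) {
        double[] popularity = new double[ratingCount.length];
        for (int i = 0; i < ratingCount.length; i++) popularity[i] = ratingCount[i];
        return pearson(aggregatedScoutValue, popularity);
    }
}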
Interestingly, the best scout--` Being John Malkovich'--is about a puppeteer who discovers a portal into a movie star, a movie that has been described variously on amazon.com as ` makes you feel giddy,' ` seriously weird,' ` comedy with depth,' ` silly,' ` strange,' and ` inventive . ' Indicating whether someone likes this movie or not goes a long way toward situating the user in a suitable neighborhood, with similar preferences. On the other hand, several factors may have made a movie a bad scout, like the sharp variance in user preferences in the neighborhood of a movie. Two users may have the same opinion about Lawrence of Arabia, but they may differ sharply about how they felt about the other movies they saw. Bad scouts ensue when there is deviation in behavior around a common synchronization point. 5.3 Inducing good promoters Ratings that serve to promote items in a collaborative filtering system are critical to allowing a new item be recommended to users. So, inducing good promoters is important for cold-start recommendation. We note that the frequency distribution of promoter values for a sample movie's ratings is also Gaussian (similar to Fig. 6). This indicates that the promotion of a movie is benefited most by the ratings of a few users, and are unaffected by the ratings of most users. We find a strong correlation between a user's number of ratings and his aggregated promoter value. Fig. 8 depicts the evolution of the Pearson correlation co-efficient between the prolificity of a user (number of ratings) versus his aggregated promoter value. We expect that conspicuous shills, by recommending wrong movies to users, will be discredited with negative aggregate promoter values and should be identifiable easily. Given this observation, the obvious rule to follow when introducing a new movie is to have it rated directly by prolific users who posses high aggregated promoter values. A new movie is thus cast into the neighborhood of many other movies improving its visibility. Note, though, that a user may have long stopped using the system. Tracking promoter values consistently allows only the most active recent users to be considered. 5.4 Inducing good connectors Given the way scouts, connectors, and promoters are characterized, it follows that the movies that are part of the best scouts are also part of the best connectors. Similarly, the users that constitute the best promoters are also part of the best connectors. Good connectors are induced by ensuring a user with a high promoter value rates a movie with a high scout value. In our experiments, we find that a rating's longest standing role is often as a connector. A rating with a poor connector value is often seen due to its user being a bad promoter, or its movie being a bad scout. Such ratings can be removed from the prediction process to bring marginal improvements to recommendations. In some selected experiments, we observed that removing a set of badly behaving connectors helped improve the system's overall performance by 1.5%. The effect was even higher on a few select users who observed an improvement of above 10% in precision without much loss in recall. 5.5 Monitoring the evolution of rating roles One of the more significant contributions of our work is the ability to model the evolution of recommender systems, by studying the changing roles of ratings over time. 
The role and value of a rating can change depending on many factors like user behavior, redundancy, shilling effects or properties of the collaborative filtering algorithm used. Studying the dynamics of rating roles in terms of transitions between good, bad, and negligible values can provide insights into the functioning of the recommender system. We believe that a continuous visualization of these transitions will improve the ability to manage a recommender system. We classify different rating states as good, bad, or negligible. Consider a user who has rated 100 movies in a particular interval, of which 20 are part of the test set. If a scout has a value greater than 0.005, it indicates that it is uniquely involved in at least 2 concordant predictions, which we will say is good. Thus, a threshold of 0.005 is chosen to bin a rating as good, bad or negligible in terms of its scout, connector and promoter value. For instance, a rating r, at time t with role value triple [0.1, 0.001, − 0.01] is classified as [scout +, connector 0, promoter −], where + indicates good, 0 indicates negligible, and − indicates bad. The positive credit held by a rating is a measure of its contribution to the betterment of the system, and the discredit is a measure of its contribution to the detriment of the system. Even though the positive roles (and the negative roles) make up a very small percentage of all ratings, their contribution supersedes their size. For example, even though only 1.7% of all ratings were classified as good scouts, they hold 79% of all positive credit in the system! Similarly, the bad scouts were just 1.4% of all ratings but hold 82% of all discredit. Note that good and bad scouts, together, comprise only 1.4% + 1.7% = 3.1% of the ratings, implying that the majority of the ratings are negligible role players as scouts (more on this later). Likewise, good connectors were 1.2% of the system, and hold 30% of all positive credit. The bad connectors (0.8% of the system) hold 36% of all discredit. Good promoters (3% of the system) hold 46% of all credit, while bad promoters (2%) hold 50% of all discredit. This reiterates that a few ratings influence most of the system's performance. Hence it is important to track transitions between them regardless of their small numbers. Across different snapshots, a rating can remain in the same state or change. A good scout can become a bad scout, a good promoter can become a good connector, good and bad scouts can become vestigial, and so on. It is not practical to expect a recommender system to have no ratings in bad roles. However, it suffices to see ratings in bad roles either convert to good or vestigial roles. Similarly, observing a large number of good roles become bad ones is a sign of imminent failure of the system. We employ the principle of non-overlapping episodes [6] to count such transitions. A sequence such as [+, 0, 0] → [+, 0, 0] → [0, +, 0] → [0, 0, 0] is interpreted as the transitions [+, 0, 0] → [0, +, 0]: 1, [+, 0, 0] → [0, 0, 0]: 1, and [0, +, 0] → [0, 0, 0]: 1, instead of [+, 0, 0] → [0, +, 0]: 2, [+, 0, 0] → [0, 0, 0]: 2, and [0, +, 0] → [0, 0, 0]: 1. See [6] for further details about this counting procedure. Thus, a rating can be in one of 27 possible states, and there are about 27^2 possible transitions. We make a further simplification and utilize only 9 states, indicating whether the rating is a scout, promoter, or connector, and whether it has a positive, negative, or negligible role.
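A sketch of this state classification follows; applying the 0.005 threshold symmetrically on the negative side is an assumption that is consistent with the [0.1, 0.001, −0.01] example above.

/** A minimal sketch of binning each role value in the [SF, CF, PF] triple
 *  as good (+), negligible (0) or bad (-) using the 0.005 threshold. */
final class RoleState {

    static final double THRESHOLD = 0.005;

    enum Sign { GOOD, NEGLIGIBLE, BAD }

    static Sign classify(double roleValue) {
        if (roleValue > THRESHOLD) return Sign.GOOD;
        if (roleValue < -THRESHOLD) return Sign.BAD;
        return Sign.NEGLIGIBLE;
    }

    /** Classify a rating's triple at one snapshot, e.g. [0.1, 0.001, -0.01]
     *  becomes [GOOD, NEGLIGIBLE, BAD]. */
    static Sign[] classifyTriple(double scout, double connector, double promoter) {
        return new Sign[] { classify(scout), classify(connector), classify(promoter) };
    }
}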
Ratings that serve multiple purposes are counted using multiple episode instantiations but the states themselves are not duplicated beyond the 9 restricted states. In this model, a transition such as [+, 0, +] → [0, +, 0]: 1 is counted as separate per-role transitions (here, [scout +] → [scout 0], [connector 0] → [connector +], and [promoter +] → [promoter 0]). Of these, transitions like [pX] → [q0] where p ≠ q, X ∈ {+, 0, −} are considered uninteresting, and only the rest are counted. Fig. 9 depicts the major transitions counted while processing the first 200,000 ratings from the MovieLens dataset. Only transitions with frequency greater than or equal to 3% are shown. The percentages for each state indicate the number of ratings that were found to be in those states. We consider transitions from any state to a good state as healthy, from any state to a bad state as unhealthy, and from any state to a vestigial state as decaying. From Fig. 9, we can observe: • The bulk of the ratings have negligible values, irrespective of their role. The majority of the transitions involve both good and bad ratings becoming negligible. Figure 9: Transitions among rating roles. • The number of good ratings is comparable to the bad ratings, and ratings are seen to switch states often, except in the case of scouts (see below). • The negative and positive scout states are not reachable through any transition, indicating that these ratings must begin as such, and cannot be coerced into these roles. • Good promoters and good connectors have a much longer survival period than scouts. Transitions that recur to these states have frequencies of 10% and 15% when compared to just 4% for scouts. Good connectors are the slowest to decay whereas (good) scouts decay the fastest. • Healthy percentages are seen on transitions between promoters and connectors. As indicated earlier, there are hardly any transitions from promoters/connectors to scouts. This indicates that, over the long run, a user's rating is more useful to others (movies or other users) than to the user himself. • The percentages of healthy transitions outweigh the unhealthy ones; this hints that the system is healthy, albeit only marginally. Note that these results are conditioned by the static nature of the dataset, which is a set of ratings over a fixed window of time. However, a diagram such as Fig. 9 is clearly useful for monitoring the health of a recommender system. For instance, acceptable limits can be imposed on different types of transitions and, if a transition fails to meet the threshold, the recommender system or a part of it can be brought under closer scrutiny. Furthermore, the role state transition diagram would also be the ideal place to study the effects of shilling, a topic we will consider in future research. 5.6 Characterizing neighborhoods Earlier we saw that we can characterize the neighborhood of ratings involved in creating a recommendation list L for a user. In our experiment, we consider lists of length 30, and sample the lists of about 5% of users through the evolution of the model (at intervals of 10,000 ratings each) and compute their neighborhood characteristics. To simplify our presentation, we consider the percentage of the sample that falls into one of the following categories: 1. Inactive user (SFN (u) = 0); 2. Good scouts, good neighborhood; 3. Bad scouts, good neighborhood; 4. Good scouts, bad neighborhood; 5. Bad scouts, bad neighborhood. From our sample set of 561 users, we found that 476 users were inactive. Of the remaining 85 users, we found 26 users had good scouts and a good neighborhood, 6 had bad scouts and a good neighborhood, 29 had good scouts and a bad neighborhood, and 24 had bad scouts and a bad neighborhood.
Thus, we conjecture that 59 users (29 +24 +6) are in danger of leaving the system. As a remedy, users with bad scouts and a good neighborhood can be asked to reconsider rating of some movies hoping to improve the system's recommendations. The system can be expected to deliver more if they engineer some good scouts. Users with good scouts and a bad neighborhood are harder to address; this situation might entail selectively removing some connector-promoter pairs that are causing the damage. Handling users with bad scouts and bad neighborhoods is a more difficult challenge. Such a classification allows the use of different strategies to better a user's experience with the system depending on his context. In future work, we intend to conduct field studies and study the improvement in performance of different strategies for different contexts. 6. CONCLUSIONS To further recommender system acceptance and deployment, we require new tools and methodologies to manage an installed recommender and develop insights into the roles played by ratings. A fine-grained characterization in terms of rating roles such as scouts, promoters, and connectors, as done here, helps such an endeavor. Although we have presented results on only the item-based algorithm with list rank accuracy as the metric, the same approach outlined here applies to user-based algorithms and other metrics. In future research, we plan to systematically study the many algorithmic parameters, tolerances, and cutoff thresholds employed here and reason about their effects on the downstream conclusions. We also aim to extend our formulation to other collaborative filtering algorithms, study the effect of shilling in altering rating roles, conduct field studies, and evaluate improvements in user experience by tweaking ratings based on their role values. Finally, we plan to develop the idea of mining the evolution of rating role patterns into a reporting and tracking system for all aspects of recommender system health.
Scouts, Promoters, and Connectors: The Roles of Ratings in Nearest Neighbor Collaborative Filtering ABSTRACT Recommender systems aggregate individual user ratings into predictions of products or services that might interest visitors. The quality of this aggregation process crucially affects the user experience and hence the effectiveness of recommenders in e-commerce. We present a novel study that disaggregates global recommender performance metrics into contributions made by each individual rating, allowing us to characterize the many roles played by ratings in nearestneighbor collaborative filtering. In particular, we formulate three roles--scouts, promoters, and connectors--that capture how users receive recommendations, how items get recommended, and how ratings of these two types are themselves connected (resp.) . These roles find direct uses in improving recommendations for users, in better targeting of items and, most importantly, in helping monitor the health of the system as a whole. For instance, they can be used to track the evolution of neighborhoods, to identify rating subspaces that do not contribute (or contribute negatively) to system performance, to enumerate users who are in danger of leaving, and to assess the susceptibility of the system to attacks such as shilling. We argue that the three rating roles presented here provide broad primitives to manage a recommender system and its community. 1. INTRODUCTION Recommender systems have become integral to e-commerce, providing technology that suggests products to a visitor based on previous purchases or rating history. Collaborative filtering, a common form of recommendation, predicts a user's rating for an item by combining (other) ratings of that user with other users' ratings. Significant research has been conducted in implementing fast and accurate collaborative filtering algorithms [2, 7], designing interfaces for presenting recommendations to users [1], and studying the robustness of these algorithms [8]. However, with the exception of a few studies on the influence of users [10], little attention has been paid to unraveling the inner workings of a recommender in terms of the individual ratings and the roles they play in making (good) recommendations. Such an understanding will give an important handle to monitoring and managing a recommender system, to engineer mechanisms to sustain the recommender, and thereby ensure its continued success. Our motivation here is to disaggregate global recommender performance metrics into contributions made by each individual rating, allowing us to characterize the many roles played by ratings in nearest-neighbor collaborative filtering. We identify three possible roles: (scouts) to connect the user into the system to receive recommendations, (promoters) to connect an item into the system to be recommended, and (connectors) to connect ratings of these two kinds. Viewing ratings in this way, we can define the contribution of a rating in each role, both in terms of allowing recommendations to occur, and in terms of influence on the quality of recommendations. In turn, this capability helps support scenarios such as: 1. Situating users in better neighborhoods: A user's ratings may inadvertently connect the user to a neighborhood for which the user's tastes may not be a perfect match. Identifying ratings responsible for such bad recommendations and suggesting new items to rate can help situate the user in a better neighborhood. 2. 
Targeting items: Recommender systems suffer from lack of user participation, especially in cold-start scenarios [13] involving newly arrived items. Identifying users who can be encouraged to rate specific items helps ensure coverage of the recommender system. 3. Monitoring the evolution of the recommender system and its stakeholders: A recommender system is constantly under change: growing with new users and items, shrinking with users leaving the system, items becoming irrelevant, and parts of the system under attack. Tracking the roles of a rating and its evolution over time provides many insights into the health of the system, and how it could be managed and improved. These include being able to identify rating subspaces that do not contribute (or contribute negatively) to system performance, and could be removed; to enumerate users who are in danger of leaving, or have left the system; and to assess the susceptibility of the system to attacks such as shilling [5]. As we show, the characterization of rating roles presented here provides broad primitives to manage a recommender system and its community. The rest of the paper is organized as follows. Background on nearest-neighbor collaborative filtering and algorithm evaluation is discussed in Section 2. Section 3 defines and discusses the roles of a rating, and Section 4 defines measures of the contribution of a rating in each of these roles. In Section 5, we illustrate the use of these roles to address the goals outlined above. 2. BACKGROUND 2.1 Algorithms Nearest-neighbor collaborative filtering algorithms either use neighborhoods of users or neighborhoods of items to compute a prediction. An algorithm of the first kind is called user-based, and one of the second kind is called itembased [12]. In both families of algorithms, neighborhoods are formed by first computing the similarity between all pairs of users (for user-based) or items (for item-based). Predictions are then computed by aggregating ratings, which in a user-based algorithm involves aggregating the ratings of the target item by the user's neighbors and, in an item-based algorithm, involves aggregating the user's ratings of items that are neighbors of the target item. Algorithms within these families differ in the definition of similarity, formation of neighborhoods, and the computation of predictions. We consider a user-based algorithm based on that defined for GroupLens [11] with variations from Herlocker et al. [2], and an item-based algorithm similar to that of Sarwar et al. [12]. The algorithm used by Resnick et al. [11] defines the similarity of two users u and v as the Pearson correlation of their common ratings: where Iu is the set of items rated by user u, ru, i is user u's rating for item i, and ¯ ru is the average rating of user u (similarly for v). Similarity computed in this manner is typically scaled by a factor proportional to the number of common ratings, to reduce the chance of making a recommendation made on weak connections: sim' (u, v) = max (| Iu ∩ Iv |, γ) · sim (u, v), γ where γ ≈ 5 is a constant used as a lower limit in scaling [2]. These new similarities are then used to define a static neighborhood Nu for each user u consisting of the top K users most similar to user u. A prediction for user u and item i is computed by a weighted average of the ratings by the neighbors where V = Nu ∩ Ui is the set of users most similar to u who have rated i. The item-based algorithm we use is the one defined by Sarwar et al. [12]. 
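Before turning to the item-based variant, the user-based similarity just described can be sketched as follows. The exact form of the scaling fraction is an assumption inferred from the description of γ as a lower limit, and passing in each user's overall mean rating is also an assumption.

import java.util.Map;

/** A minimal sketch of the user-based similarity: Pearson correlation over
 *  co-rated items, scaled by a significance factor based on the overlap size. */
final class UserSimilarity {

    static final int GAMMA = 5;

    /** ratingsU and ratingsV map item ids to ratings; meanU and meanV are the users' mean ratings. */
    static double similarity(Map<Integer, Double> ratingsU, double meanU,
                             Map<Integer, Double> ratingsV, double meanV) {
        double num = 0, denU = 0, denV = 0;
        int common = 0;
        for (Map.Entry<Integer, Double> e : ratingsU.entrySet()) {
            Double rv = ratingsV.get(e.getKey());
            if (rv == null) continue;                 // only items rated by both users
            double du = e.getValue() - meanU;
            double dv = rv - meanV;
            num += du * dv;
            denU += du * du;
            denV += dv * dv;
            common++;
        }
        if (common == 0 || denU == 0 || denV == 0) return 0;
        double pearson = num / Math.sqrt(denU * denV);

        // Significance weighting (assumed form): gamma acts as a lower limit on the
        // overlap count, so users with many co-rated items are weighted more heavily.
        double scale = Math.max(common, GAMMA) / (double) GAMMA;
        return scale * pearson;
    }
}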
In this algorithm, similarity is defined as the adjusted cosine measure where Ui is the set of users who have rated item i. As for the user-based algorithm, the similarity weights are adjusted proportionally to the number of users that have rated the items in common Given the similarities, the neighborhood Ni of an item i is defined as the top K most similar items for that item. A prediction for user u and item i is computed as the weighted average where J = Ni ∩ Iu is the set of items rated by u that are most similar to i. 2.2 Evaluation Recommender algorithms have typically been evaluated using measures of predictive accuracy and coverage [3]. Studies on recommender algorithms, notably Herlocker et al. [2] and Sarwar et al. [12], typically compute predictive accuracy by dividing a set of ratings into training and test sets, and compute the prediction for an item in the test set using the ratings in the training set. A standard measure of predictive accuracy is mean absolute error (MAE), which for a test set Coverage has a number of definitions, but generally refers to the proportion of items that can be predicted by the algorithm [3]. A practical issue with predictive accuracy is that users typically are presented with recommendation lists, and not individual numeric predictions. Recommendation lists are lists of items in decreasing order of prediction (sometimes stated in terms of star-ratings), and so predictive accuracy may not be reflective of the accuracy of the list. So, instead we can measure recommendation or rank accuracy, which indicates the extent to which the list is in the correct order. Herlocker et al. [3] discuss a number of rank accuracy measures, which range from Kendall's Tau to measures that consider the fact that users tend to only look at a prefix of the list [5]. Kendall's Tau measures the number of inversions when comparing ordered pairs in the true user ordering of Figure 1: Ratings in simple movie recommender. items and the recommended order, and is defined as where C is the number of pairs that the system predicts in the correct order, D the number of pairs the system predicts in the wrong order, TR the number of pairs in the true ordering that have the same ratings, and TP is the number of pairs in the predicted ordering that have the same ratings [3]. A shortcoming of the Tau metric is that it is oblivious to the position in the ordered list where the inversion occurs [3]. For instance, an inversion toward the end of the list is given the same weight as one in the beginning. One solution is to consider inversions only in the top few items in the recommended list or to weight inversions based on their position in the list. 3. ROLES OF A RATING 3.1 Roles in Detail 4. CONTRIBUTIONS OF RATINGS 4.1 Influence of Ratings 4.2 Role Values for Predictive Accuracy 4.3 Role Values for Rank Accuracy 4.4 Aggregating rating roles 5. EXPERIMENTATION 5.1 Methodology 5.2 Inducing good scouts 5.3 Inducing good promoters 5.4 Inducing good connectors 5.5 Monitoring the evolution of rating roles 5.6 Characterizing neighborhoods 6. CONCLUSIONS To further recommender system acceptance and deployment, we require new tools and methodologies to manage an installed recommender and develop insights into the roles played by ratings. A fine-grained characterization in terms of rating roles such as scouts, promoters, and connectors, as done here, helps such an endeavor. 
Although we have presented results on only the item-based algorithm with list rank accuracy as the metric, the same approach outlined here applies to user-based algorithms and other metrics. In future research, we plan to systematically study the many algorithmic parameters, tolerances, and cutoff thresholds employed here and reason about their effects on the downstream conclusions. We also aim to extend our formulation to other collaborative filtering algorithms, study the effect of shilling in altering rating roles, conduct field studies, and evaluate improvements in user experience by tweaking ratings based on their role values. Finally, we plan to develop the idea of mining the evolution of rating role patterns into a reporting and tracking system for all aspects of recommender system health.
Scouts, Promoters, and Connectors: The Roles of Ratings in Nearest Neighbor Collaborative Filtering ABSTRACT Recommender systems aggregate individual user ratings into predictions of products or services that might interest visitors. The quality of this aggregation process crucially affects the user experience and hence the effectiveness of recommenders in e-commerce. We present a novel study that disaggregates global recommender performance metrics into contributions made by each individual rating, allowing us to characterize the many roles played by ratings in nearestneighbor collaborative filtering. In particular, we formulate three roles--scouts, promoters, and connectors--that capture how users receive recommendations, how items get recommended, and how ratings of these two types are themselves connected (resp.) . These roles find direct uses in improving recommendations for users, in better targeting of items and, most importantly, in helping monitor the health of the system as a whole. For instance, they can be used to track the evolution of neighborhoods, to identify rating subspaces that do not contribute (or contribute negatively) to system performance, to enumerate users who are in danger of leaving, and to assess the susceptibility of the system to attacks such as shilling. We argue that the three rating roles presented here provide broad primitives to manage a recommender system and its community. 1. INTRODUCTION Recommender systems have become integral to e-commerce, providing technology that suggests products to a visitor based on previous purchases or rating history. Collaborative filtering, a common form of recommendation, predicts a user's rating for an item by combining (other) ratings of that user with other users' ratings. Significant research has been conducted in implementing fast and accurate collaborative filtering algorithms [2, 7], designing interfaces for presenting recommendations to users [1], and studying the robustness of these algorithms [8]. However, with the exception of a few studies on the influence of users [10], little attention has been paid to unraveling the inner workings of a recommender in terms of the individual ratings and the roles they play in making (good) recommendations. Our motivation here is to disaggregate global recommender performance metrics into contributions made by each individual rating, allowing us to characterize the many roles played by ratings in nearest-neighbor collaborative filtering. We identify three possible roles: (scouts) to connect the user into the system to receive recommendations, (promoters) to connect an item into the system to be recommended, and (connectors) to connect ratings of these two kinds. Viewing ratings in this way, we can define the contribution of a rating in each role, both in terms of allowing recommendations to occur, and in terms of influence on the quality of recommendations. 1. Situating users in better neighborhoods: A user's ratings may inadvertently connect the user to a neighborhood for which the user's tastes may not be a perfect match. Identifying ratings responsible for such bad recommendations and suggesting new items to rate can help situate the user in a better neighborhood. 2. Targeting items: Recommender systems suffer from lack of user participation, especially in cold-start scenarios [13] involving newly arrived items. Identifying users who can be encouraged to rate specific items helps ensure coverage of the recommender system. 3. 
Monitoring the evolution of the recommender system and its stakeholders: A recommender system is constantly under change: growing with new users and items, shrinking with users leaving the system, items becoming irrelevant, and parts of the system under attack. Tracking the roles of a rating and its evolution over time provides many insights into the health of the system, and how it could be managed and improved. As we show, the characterization of rating roles presented here provides broad primitives to manage a recommender system and its community. Background on nearest-neighbor collaborative filtering and algorithm evaluation is discussed in Section 2. Section 3 defines and discusses the roles of a rating, and Section 4 defines measures of the contribution of a rating in each of these roles. In Section 5, we illustrate the use of these roles to address the goals outlined above. 2. BACKGROUND 2.1 Algorithms Nearest-neighbor collaborative filtering algorithms either use neighborhoods of users or neighborhoods of items to compute a prediction. An algorithm of the first kind is called user-based, and one of the second kind is called itembased [12]. In both families of algorithms, neighborhoods are formed by first computing the similarity between all pairs of users (for user-based) or items (for item-based). Predictions are then computed by aggregating ratings, which in a user-based algorithm involves aggregating the ratings of the target item by the user's neighbors and, in an item-based algorithm, involves aggregating the user's ratings of items that are neighbors of the target item. Algorithms within these families differ in the definition of similarity, formation of neighborhoods, and the computation of predictions. We consider a user-based algorithm based on that defined for GroupLens [11] with variations from Herlocker et al. [2], and an item-based algorithm similar to that of Sarwar et al. [12]. The algorithm used by Resnick et al. [11] defines the similarity of two users u and v as the Pearson correlation of their common ratings: where Iu is the set of items rated by user u, ru, i is user u's rating for item i, and ¯ ru is the average rating of user u (similarly for v). These new similarities are then used to define a static neighborhood Nu for each user u consisting of the top K users most similar to user u. A prediction for user u and item i is computed by a weighted average of the ratings by the neighbors where V = Nu ∩ Ui is the set of users most similar to u who have rated i. The item-based algorithm we use is the one defined by Sarwar et al. [12]. In this algorithm, similarity is defined as the adjusted cosine measure where Ui is the set of users who have rated item i. As for the user-based algorithm, the similarity weights are adjusted proportionally to the number of users that have rated the items in common Given the similarities, the neighborhood Ni of an item i is defined as the top K most similar items for that item. A prediction for user u and item i is computed as the weighted average where J = Ni ∩ Iu is the set of items rated by u that are most similar to i. 2.2 Evaluation Recommender algorithms have typically been evaluated using measures of predictive accuracy and coverage [3]. Studies on recommender algorithms, notably Herlocker et al. [2] and Sarwar et al. [12], typically compute predictive accuracy by dividing a set of ratings into training and test sets, and compute the prediction for an item in the test set using the ratings in the training set. 
A standard measure of predictive accuracy is mean absolute error (MAE), which for a test set Coverage has a number of definitions, but generally refers to the proportion of items that can be predicted by the algorithm [3]. A practical issue with predictive accuracy is that users typically are presented with recommendation lists, and not individual numeric predictions. Recommendation lists are lists of items in decreasing order of prediction (sometimes stated in terms of star-ratings), and so predictive accuracy may not be reflective of the accuracy of the list. So, instead we can measure recommendation or rank accuracy, which indicates the extent to which the list is in the correct order. Kendall's Tau measures the number of inversions when comparing ordered pairs in the true user ordering of Figure 1: Ratings in simple movie recommender. items and the recommended order, and is defined as One solution is to consider inversions only in the top few items in the recommended list or to weight inversions based on their position in the list. 6. CONCLUSIONS To further recommender system acceptance and deployment, we require new tools and methodologies to manage an installed recommender and develop insights into the roles played by ratings. A fine-grained characterization in terms of rating roles such as scouts, promoters, and connectors, as done here, helps such an endeavor. Although we have presented results on only the item-based algorithm with list rank accuracy as the metric, the same approach outlined here applies to user-based algorithms and other metrics. We also aim to extend our formulation to other collaborative filtering algorithms, study the effect of shilling in altering rating roles, conduct field studies, and evaluate improvements in user experience by tweaking ratings based on their role values. Finally, we plan to develop the idea of mining the evolution of rating role patterns into a reporting and tracking system for all aspects of recommender system health.
C-74
Adapting Asynchronous Messaging Middleware to ad-hoc Networking
The characteristics of mobile environments, with the possibility of frequent disconnections and fluctuating bandwidth, have forced a rethink of traditional middleware. In particular, the synchronous communication paradigms often employed in standard middleware do not appear to be particularly suited to ad-hoc environments, in which not even the intermittent availability of a backbone network can be assumed. Instead, asynchronous communication seems to be a generally more suitable paradigm for such environments. Message oriented middleware for traditional systems has been developed and used to provide an asynchronous paradigm of communication for distributed systems, and, recently, also for some specific mobile computing systems. In this paper, we present our experience in designing, implementing and evaluating EMMA (Epidemic Messaging Middleware for ad-hoc networks), an adaptation of Java Message Service (JMS) for mobile ad-hoc environments. We discuss in detail the design challenges and some possible solutions, showing a concrete example of the feasibility and suitability of the application of the asynchronous paradigm in this setting and outlining a research roadmap for the coming years.
[ "asynchron messag middlewar", "asynchron commun", "messag orient middlewar", "epidem messag middlewar", "java messag servic", "mobil ad-hoc environ", "messag-orient middlewar", "epidem protocol", "cross-layer", "applic level rout", "group commun", "middlewar for mobil comput", "mobil ad-hoc network", "context awar" ]
[ "P", "P", "P", "P", "P", "P", "M", "M", "U", "M", "M", "R", "R", "U" ]
Adapting Asynchronous Messaging Middleware to ad-hoc Networking Mirco Musolesi Dept. of Computer Science, University College London Gower Street, London WC1E 6BT, United Kingdom m.musolesi@cs.ucl.ac.uk Cecilia Mascolo Dept. of Computer Science, University College London Gower Street, London WC1E 6BT, United Kingdom c.mascolo@cs.ucl.ac.uk Stephen Hailes Dept. of Computer Science, University College London Gower Street, London WC1E 6BT, United Kingdom s.hailes@cs.ucl.ac.uk ABSTRACT The characteristics of mobile environments, with the possibility of frequent disconnections and fluctuating bandwidth, have forced a rethink of traditional middleware. In particular, the synchronous communication paradigms often employed in standard middleware do not appear to be particularly suited to ad-hoc environments, in which not even the intermittent availability of a backbone network can be assumed. Instead, asynchronous communication seems to be a generally more suitable paradigm for such environments. Message oriented middleware for traditional systems has been developed and used to provide an asynchronous paradigm of communication for distributed systems, and, recently, also for some specific mobile computing systems. In this paper, we present our experience in designing, implementing and evaluating EMMA (Epidemic Messaging Middleware for ad-hoc networks), an adaptation of Java Message Service (JMS) for mobile ad-hoc environments. We discuss in detail the design challenges and some possible solutions, showing a concrete example of the feasibility and suitability of the application of the asynchronous paradigm in this setting and outlining a research roadmap for the coming years. Categories and Subject Descriptors C.2.4 [Computer-Communication Networks]: Distributed Systems-Distributed Applications; C.2.1 [Network Architecture and Design]: Wireless Communication General Terms DESIGN, ALGORITHMS 1. INTRODUCTION With the increasing popularity of mobile devices and their widespread adoption, there is a clear need to allow the development of a broad spectrum of applications that operate effectively over such an environment. Unfortunately, this is far from simple: mobile devices are increasingly heterogeneous in terms of processing capabilities, memory size, battery capacity, and network interfaces. Each such configuration has substantially different characteristics that are both statically different - for example, there is a major difference in capability between a Berkeley mote and an 802.11g-equipped laptop - and that vary dynamically, as in situations of fluctuating bandwidth and intermittent connectivity. Mobile ad hoc environments have an additional element of complexity in that they are entirely decentralised. In order to craft applications for such complex environments, an appropriate form of middleware is essential if cost effective development is to be achieved. In this paper, we examine one of the foundational aspects of middleware for mobile ad-hoc environments: that of the communication primitives. Traditionally, the most frequently used middleware primitives for communication assume the simultaneous presence of both end points on a network, since the stability and pervasiveness of the networking infrastructure is not an unreasonable assumption for most wired environments. In other words, most communication paradigms are synchronous: object oriented middleware such as CORBA and Java RMI are typical examples of middleware based on synchronous communication. 
In recent years, there has been growing interest in platforms based on asynchronous communication paradigms, such as publish-subscribe systems [6]: these have been exploited very successfully where there is application level asynchronicity. From a Gartner Market Report [7]: "Given message-oriented middleware's (MOM) popularity, scalability, flexibility, and affinity with mobile and wireless architectures, by 2004, MOM will emerge as the dominant form of communication middleware for linking mobile and enterprise applications (0.7 probability)..." Moreover, in mobile ad-hoc systems, the likelihood of network fragmentation means that synchronous communication may in any case be impracticable, giving situations in which delay tolerant asynchronous traffic is the only form of traffic that could be supported. Middleware for mobile ad-hoc environments must therefore support semi-synchronous or completely asynchronous communication primitives if it is to avoid substantial limitations to its utility. Aside from the intellectual challenge in supporting this model, this work is also interesting because there are a number of practical application domains, such as allowing inter-community communication in undeveloped areas of the globe. Thus, for example, projects have been carried out to help populations that live in remote places of the globe, such as Lapland [3], or in poor areas that lack fixed connectivity infrastructure [9]. There have been attempts to provide mobile middleware with these properties, including STEAM, LIME, XMIDDLE, Bayou (see [11] for a more complete review of mobile middleware). These models differ quite considerably from the existing traditional middleware in terms of primitives provided. Furthermore, some of them fail in providing a solution for the true ad-hoc scenarios. If the projected success of MOM becomes anything like a reality, there will be many programmers with experience of it. The ideal solution to the problem of middleware for ad-hoc systems is, then, to allow programmers to utilise the same paradigms and models presented by common forms of MOM and to ensure that these paradigms are supportable within the mobile environment. This approach has clear advantages in allowing applications developed on standard middleware platforms to be easily deployed on mobile devices. Indeed, some research has already led to the adaptation of traditional middleware platforms to mobile settings, mainly to provide integration between mobile devices and existing fixed networks in a nomadic (i.e., mixed) environment [4]. With respect to message oriented middleware, the current implementations, however, either assume the existence of a backbone network to which the mobile hosts connect from time to time while roaming [10], or assume that nodes are always somehow reachable through a path [18]. No adaptation to heterogeneous or completely ad-hoc scenarios, with frequent disconnection and periodically isolated clouds of hosts, has been attempted. In the remainder of this paper we describe an initial attempt to adapt message oriented middleware to suit mobile and, more specifically, mobile ad-hoc networks. In our case, we elected to examine JMS, as one of the most widely known MOM systems. In the latter part of this paper, we explore the limitations of our results and describe the plans we have to take the work further.
2. MESSAGE ORIENTED MIDDLEWARE AND JAVA MESSAGE SERVICE (JMS) Message-oriented middleware systems support communication between distributed components via message-passing: the sender sends a message to identified queues, which usually reside on a server. A receiver retrieves the message from the queue at a different time and may acknowledge the reply using the same asynchronous mechanism. Message-oriented middleware thus supports asynchronous communication in a very natural way, achieving de-coupling of senders and receivers. A sender is able to continue processing as soon as the middleware has accepted the message; eventually, the receiver will send an acknowledgment message and the sender will be able to collect it at a convenient time. However, given the way they are implemented, these middleware systems usually require resource-rich devices, especially in terms of memory and disk space, where persistent queues of messages that have been received but not yet processed are stored. Sun Java Message Service [5], IBM WebSphere MQ [6], and Microsoft MSMQ [12] are examples of very successful message-oriented middleware for traditional distributed systems. The Java Messaging Service (JMS) is a collection of interfaces for asynchronous communication between distributed components. It provides a common way for Java programs to create, send and receive messages. JMS users are usually referred to as clients. The JMS specification further defines providers as the components in charge of implementing the messaging system and providing the administrative and control functionality (i.e., persistence and reliability) required by the system. Clients can send and receive messages, asynchronously, through the JMS provider, which is in charge of the delivery and, possibly, of the persistence of the messages. There are two types of communication supported: point to point and publish-subscribe models. In the point to point model, hosts send messages to queues. Receivers can be registered with some specific queues, and can asynchronously retrieve the messages and then acknowledge them. The publish-subscribe model is based on the use of topics that can be subscribed to by clients. Messages are sent to topics by other clients and are then received in an asynchronous mode by all the subscribed clients. Clients learn about the available topics and queues through the Java Naming and Directory Interface (JNDI) [14]. Queues and topics are created by an administrator on the provider and are registered with the JNDI interface for look-up. In the next section, we introduce the challenges of mobile networks, and show how JMS can be adapted to cope with these requirements. 3. JMS FOR MOBILE COMPUTING Mobile networks vary very widely in their characteristics, from nomadic networks in which nodes relocate whilst offline through to ad-hoc networks in which nodes move freely and in which there is no infrastructure. Mobile ad-hoc networks are most generally applicable in situations where survivability and instant deployability are key: most notably in military applications and disaster relief. In between these two types of "mobile" networks, there are, however, a number of possible heterogeneous combinations, where nomadic and ad-hoc paradigms are used to interconnect totally unwired areas to more structured networks (such as a LAN or the Internet). Whilst the JMS specification has been extensively implemented and used in traditional distributed systems, adaptations for mobile environments have been proposed only recently.
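As a point of reference for the adaptation that follows, a minimal point-to-point sender using the standard JMS 1.1 interfaces looks like this. The JNDI names ("ConnectionFactory", "queue/Orders") and the message text are provider-specific examples, not names mandated by the specification.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

/** A minimal sketch of the JMS point-to-point style described above. */
public class QueueSender {
    public static void main(String[] args) throws Exception {
        InitialContext jndi = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) jndi.lookup("ConnectionFactory");
        Queue queue = (Queue) jndi.lookup("queue/Orders");   // administered object

        Connection connection = factory.createConnection();
        try {
            // Non-transacted session with automatic acknowledgement.
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);

            TextMessage message = session.createTextMessage("order #42 received");
            producer.send(message);   // the provider stores and later delivers it asynchronously
        } finally {
            connection.close();
        }
    }
}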
The challenges of porting JMS to mobile settings are considerable; however, in view of its widespread acceptance and use, there are clear advantages in allowing the adaptation of existing applications to mobile environments and in allowing the interoperation of applications in the wired and wireless regions of a network. In [10], JMS was adapted to a nomadic mobile setting, where mobile hosts can be JMS clients and communicate through the JMS provider that, however, sits on a backbone network, providing reliability and persistence. The client prototype presented in [10] is very lightweight, due to the delegation of all the heavyweight functionality to the provider on the wired network. However, this approach is somewhat limited in terms of widespread applicability and scalability as a consequence of the concentration of functionality in the wired portion of the network. If JMS is to be adapted to completely ad-hoc environments, where no fixed infrastructure is available, and where nodes change location and status very dynamically, more issues must be taken into consideration. Firstly, discovery needs to use a resilient but distributed model: in this extremely dynamic environment, static solutions are unacceptable. As discussed in Section 2, a JMS administrator defines queues and topics on the provider. Clients can then learn about them using the Java Naming and Directory Interface (JNDI). However, due to the way JNDI is designed, a JNDI node (or more than one) needs to be in reach in order to obtain a binding of a name to an address (i.e., knowing where a specific queue/topic is). In mobile ad-hoc environments, the discovery process cannot assume the existence of a fixed set of discovery servers that are always reachable, as this would not match the dynamicity of ad-hoc networks. Secondly, a JMS Provider, as suggested by the JMS specification, also needs to be reachable by each node in the network, in order to communicate. This assumes a very centralised architecture, which again does not match the requirements of a mobile ad-hoc setting, in which nodes may be moving and sparse: a more distributed and dynamic solution is needed. Persistence is, however, essential functionality in asynchronous communication environments as hosts are, by definition, connected at different times. In the following section, we will discuss our experience in designing and implementing JMS for mobile ad-hoc networks. 4. JMS FOR MOBILE ad-hoc NETWORKS 4.1 Adaptation of JMS for Mobile ad-hoc Networks Developing applications for mobile networks is yet more challenging: in addition to the same considerations as for infrastructured wireless environments, such as the limited device capabilities and power constraints, there are issues of rate of change of network connectivity, and the lack of a static routing infrastructure. Consequently, we now describe an initial attempt to adapt the JMS specification to target the particular requirements related to ad-hoc scenarios. As discussed in Section 3, a JMS application can use either the point to point or the publish-subscribe style of messaging. Point to Point Model The point to point model is based on the concept of queues, which are used to enable asynchronous communication between the producer of a message and possibly many different consumers. In our solution, the location of queues is determined by a negotiation process that is application dependent.
For example, let us suppose that it is possible to know a priori, or it is possible to determine dynamically, that a certain host is the receiver of most of the messages sent to a particular queue. In this case, the optimum location of the queue may well be on this particular host. In general, it is worth noting that, according to the JMS specification and suggested design patterns, it is common and preferable for a client to have all of its messages delivered to a single queue. Queues are advertised periodically to the hosts that are within transmission range or that are reachable by means of the underlying synchronous communication protocol, if provided. It is important to note that, at the middleware level, it is logically irrelevant whether or not the network layer implements some form of ad-hoc routing (though considerably more efficient if it does); the middleware only considers information about which nodes are actively reachable at any point in time. The hosts that receive advertisement messages add entries to their JNDI registry. Each entry is characterized by a lease (a mechanism similar to that present in Jini [15]). A lease represents the time of validity of a particular entry. If a lease is not refreshed (i.e., its life is not extended), it can expire and, consequently, the entry is deleted from the registry. In other words, the host assumes that the queue will be unreachable from that point in time. This may be caused, for example, if a host storing the queue becomes unreachable. A host that initiates a discovery process will find the topics and the queues present in its connected portion of the network in a straightforward manner. In order to deliver a message to a host that is not currently in reach, we use an asynchronous epidemic routing protocol that will be discussed in detail in Section 4.2. (In theory, it is not possible to send a message to a peer that has never been reachable in the past, since there can be no entry present in the registry. However, to overcome this possible limitation, we provide a primitive through which information can be added to the registry without using the normal channels.) If two hosts are in the same cloud (i.e., a connected path exists between them), but no synchronous protocol is available, the messages are sent using the epidemic protocol. In this case, the delivery latency will be low as a result of the rapidity of propagation of the infection in the connected cloud (see also the simulation results in Section 5). Given the existence of an epidemic protocol, the discovery mechanism consists of advertising the queues to the hosts that are currently unreachable using analogous mechanisms. Publish-Subscribe Model In the publish-subscribe model, some of the hosts are similarly designated to hold topics and store subscriptions, as before. Topics are advertised through the registry in the same way as are queues, and a client wishing to subscribe to a topic must register with the client holding the topic. When a client wishes to send a message to the topic list, it sends it to the topic holder (in the same way as it would send a message to a queue). The topic holder then forwards the message to all subscribers, using the synchronous protocol if possible, the epidemic protocol otherwise. It is worth noting that we use a single message with multiple recipients, instead of multiple messages with multiple recipients. When a message is delivered to one of the subscribers, this recipient is deleted from the list. In order to delete the other possible replicas, we employ acknowledgment messages (discussed in Section 4.4), returned in the same way as a normal message. We have also adapted the concepts of durable and non-durable subscriptions for ad-hoc settings.
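Before turning to durable subscriptions in detail, a minimal sketch of the lease-based registry described above follows; the class and method names are illustrative, not EMMA's actual API.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** A minimal sketch of the lease mechanism: advertised queues and topics are
 *  kept in a local registry and silently expire if their lease is not refreshed
 *  by a new advertisement. */
final class LeasedRegistry {

    /** Destination name -> lease expiry time (milliseconds since epoch). */
    private final Map<String, Long> leases = new ConcurrentHashMap<>();
    private final long leaseDurationMillis;

    LeasedRegistry(long leaseDurationMillis) {
        this.leaseDurationMillis = leaseDurationMillis;
    }

    /** Called when an advertisement for a queue/topic is received: create or refresh the lease. */
    void advertise(String destination) {
        leases.put(destination, System.currentTimeMillis() + leaseDurationMillis);
    }

    /** A destination is considered reachable only while its lease is valid. */
    boolean isKnown(String destination) {
        Long expiry = leases.get(destination);
        if (expiry == null) return false;
        if (expiry < System.currentTimeMillis()) {
            leases.remove(destination);   // lease expired: assume the holder is unreachable
            return false;
        }
        return true;
    }
}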
In fixed platforms, durable subscriptions are maintained during the disconnections of the clients, whether these are intentional or are the result of failures. In traditional systems, while a durable subscriber is disconnected from the server, the server is responsible for storing messages on its behalf; when the durable subscriber reconnects, the server sends it all the unexpired messages. The problem is that, in our scenario, disconnections are the norm rather than the exception. In other words, we cannot consider disconnections as failures. For these reasons, we adopt a slightly different semantics. With respect to durable subscriptions, if a subscriber becomes disconnected, notifications are not stored but are sent using the epidemic protocol rather than the synchronous protocol. (In theory, it is not possible to send a message to a peer that has never been reachable in the past, since no entry for it can be present in the registry; to overcome this limitation, we provide a primitive through which information can be added to the registry without using the normal channels.) In other words, durable notifications remain valid during the possible disconnections of the subscriber. On the other hand, if a non-durable subscriber becomes disconnected, its subscription is deleted; in other words, during disconnections, notifications are not sent using the epidemic protocol but exploit only the synchronous protocol. If the topic becomes accessible to this host again, it must make another subscription in order to receive the notifications. Unsubscription messages are delivered in the same way as are subscription messages. It is important to note that durable subscribers have to unsubscribe explicitly from a topic in order to stop the notification process; however, all durable subscriptions have a predefined expiration time in order to cope with subscribers that are never encountered again because of their movements or failures. This feature is clearly provided to limit the number of unnecessary messages sent around the network. 4.2 Message Delivery using Epidemic Routing In this section, we examine one possible mechanism that will allow the delivery of messages in a partially connected network. The mechanism we discuss is intended for the purposes of demonstrating feasibility; more efficient communication mechanisms for this environment are themselves complex, and are the subject of another paper [13]. The asynchronous message delivery described above is based on a typical pure epidemic-style routing protocol [16]. A message that needs to be sent is replicated on each host in reach. In this way, copies of the messages are quickly spread through connected networks, like an infection. If a host becomes connected to another cloud of mobile nodes during its movement, the message spreads through this collection of hosts. Epidemic-style replication of data and messages has been exploited in the past in many fields, starting with the distributed database systems area [2]. Within epidemic routing, each host maintains a buffer containing the messages that it has created and the replicas of the messages generated by the other hosts. To improve the performance, a hash table indexes the content of the buffer. When two hosts connect, the host with the smaller identifier initiates a so-called anti-entropy session, sending a list containing the unique identifiers of the messages that it currently stores.
The other host evaluates this list and sends back a list containing the identifiers it is storing that are not present in the other host, together with the messages that the other does not have. The host that has started the session receives the list and, in the same way, sends the messages that are not present in the other host. Should buffer overflow occur, messages are dropped. The reliability offered by this protocol is typically best effort, since there is no guarantee that a message will eventually be delivered to its recipient. Clearly, the delivery ratio of the protocol increases proportionally to the maximum allowed delay time and the buffer size in each host (interesting simulation results may be found in [16]). 4.3 Adaptation of the JMS Message Model In this section, we will analyse the aspects of our adaptation of the specification related to the so-called JMS Message Model [5]. According to this, JMS messages are characterised by some properties defined using the header field, which contains values that are used by both clients and providers for their delivery. The aspects discussed in the remainder of this section are valid for both models (point to point and publish-subscribe). A JMS message can be persistent or non-persistent. According to the JMS specification, persistent messages must be delivered with a higher degree of reliability than the nonpersistent ones. However, it is worth noting that it is not possible to ensure once-and-only-once reliability for persistent messages as defined in the specification, since, as we discussed in the previous subsection, the underlying epidemic protocol can guarantee only best-effort delivery. However, clients maintain a list of the identifiers of the recently received messages to avoid the delivery of message duplicates. In other words, we provide the applications with at-mostonce reliability for both types of messages. In order to implement different levels of reliability, EMMA treats persistent and non-persistent messages differently, during the execution of the anti-entropy epidemic protocol. Since the message buffer space is limited, persistent messages are preferentially replicated using the available free space. If this is insufficient and non-persistent messages are present in the buffer, these are replaced. Only the successful deliveries of the persistent messages are notified to the senders. According to the JMS specification, it is possible to assign a priority to each message. The messages with higher priorities are delivered in a preferential way. As discussed above, persistent messages are prioritised above the non-persistent ones. Further selection is based on their priorities. Messages with higher priorities are treated in a preferential way. In fact, if there is not enough space to replicate all the persistent messages, a mechanism based on priorities is used to delete and replicate non-persistent messages (and, if necessary, persistent messages). Messages are deleted from the buffers using the expiration time value that can be set by senders. This is a way to free space in the buffers (one preferentially deletes older messages in cases of conflict); to eliminate stale replicas in the system; and to limit the time for which destinations must hold message identifiers to dispose of duplicates. 4.4 Reliability and Acknowledgment Mechanisms As already discussed, at-most-once message delivery is the best that can be achieved in terms of delivery semantics in partially connected ad-hoc settings. 
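The persistence- and priority-aware buffer management described in Sections 4.2 and 4.3 can be illustrated with the following sketch. It is our own illustration of the stated policy (persistent before non-persistent, then higher priority, with expired replicas purged), not the actual EMMA code, and all names are hypothetical.

import java.util.Comparator;
import java.util.PriorityQueue;

/** Illustrative sketch of the epidemic buffer policy of Sections 4.2-4.3 (not EMMA source). */
public class EpidemicBuffer {

    public static class StoredMessage {
        final String id;
        final boolean persistent;
        final int priority;        // JMS priorities: 0 (lowest) .. 9 (highest)
        final long expiryMillis;   // derived from the sender-set time-to-live

        public StoredMessage(String id, boolean persistent, int priority, long expiryMillis) {
            this.id = id; this.persistent = persistent;
            this.priority = priority; this.expiryMillis = expiryMillis;
        }
    }

    // Order so that the *least* valuable message sits at the head: non-persistent before
    // persistent, then lower priority, then earlier expiry.
    private final PriorityQueue<StoredMessage> buffer = new PriorityQueue<>(
            Comparator.<StoredMessage, Boolean>comparing(m -> m.persistent)
                      .thenComparingInt(m -> m.priority)
                      .thenComparingLong(m -> m.expiryMillis));

    private final int capacity;

    public EpidemicBuffer(int capacity) { this.capacity = capacity; }

    /** Try to store a replica; evict the least valuable message if the buffer is full. */
    public boolean offer(StoredMessage m) {
        purgeExpired();
        if (buffer.size() < capacity) { buffer.add(m); return true; }
        StoredMessage weakest = buffer.peek();
        // Replace only if the incoming message is "stronger" than the weakest stored one.
        if (weakest != null && buffer.comparator().compare(weakest, m) < 0) {
            buffer.poll();
            buffer.add(m);
            return true;
        }
        return false; // dropped, mirroring the best-effort nature of the protocol
    }

    private void purgeExpired() {
        long now = System.currentTimeMillis();
        buffer.removeIf(m -> m.expiryMillis < now);
    }
}

On a real device the capacity would have to be dimensioned against the maximum acceptable delay which, as the evaluation in Section 5 indicates, is the parameter that dominates the achievable delivery ratio.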
However, it is possible to improve the reliability of the system with efficient acknowledgment mechanisms. EMMA provides a mechanism for failure notification to applications if the acknowledgment is not received within a given timeout (which can be configured by application developers). This mechanism is the one that distinguishes the delivery of persistent and non-persistent messages in our JMS implementation: the deliveries of the former are notified to the senders, whereas the latter are not. We use acknowledgment messages not only to inform senders about the successful delivery of messages but also to delete the replicas of the delivered messages that are still present in the network. Each host maintains a list of the messages successfully delivered, which is updated as part of the normal process of information exchange between the hosts. The lists are exchanged during the first steps of the anti-entropy epidemic protocol with a certain predefined frequency. In the case of messages with multiple recipients, a list of the actual recipients is also stored. When a host receives the list, it checks its message buffer and updates it according to the following rules: (1) if a message has a single recipient and it has been delivered, it is deleted from the buffer; (2) if a message has multiple recipients, the identifiers of the delivered hosts are deleted from the associated list of recipients. If the resulting length of the list of recipients is zero, the message is deleted from the buffer. These lists clearly have finite dimensions and are implemented as circular queues. This simple mechanism, together with the use of expiration timestamps, guarantees that old acknowledgment notifications are deleted from the system after a limited period of time. In order to improve the reliability of EMMA further, a mechanism for intelligent replication of queues and topics based on context information could be designed; however, this is not yet part of the current architecture of EMMA. 5. IMPLEMENTATION AND PRELIMINARY EVALUATION We implemented a prototype of our platform using the J2ME Personal Profile. The size of the executable is about 250KB including the JMS 1.1 jar file; this is a perfectly acceptable figure given the available memory of the current mobile devices on the market. We tested our prototype on HP IPaq PDAs running Linux, interconnected with WaveLan, and on a number of laptops with the same network interface. We also evaluated the middleware platform using the OMNET++ discrete event simulator [17] in order to explore a range of mobile scenarios that incorporated a more realistic number of hosts than was achievable experimentally. More specifically, we assessed the performance of the system in terms of delivery ratio and average delay, varying the density of population and the buffer size, and using persistent and non-persistent messages with different priorities. The simulation results show that EMMA's performance, in terms of delivery ratio and delay of persistent messages with higher priorities, is good. In general, it is evident that the delivery ratio is strongly related to the correct dimensioning of the buffers to the maximum acceptable delay. Moreover, the epidemic algorithms are able to guarantee a high delivery ratio if one evaluates performance over a time interval sufficient for the dissemination of the replicas of messages (i.e., the infection spreading) in a large portion of the ad-hoc network.
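The acknowledgment-handling rules (1) and (2) of Section 4.4 can be summarised in code as follows; this is an illustrative sketch with hypothetical names, in which a single-recipient message is simply the special case of a one-element recipient set.

import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Set;

/** Illustrative sketch of the acknowledgment rules (1) and (2) of Section 4.4. */
public class AcknowledgmentProcessor {

    /** A buffered replica together with the recipients still to be served. */
    public static class Replica {
        final String messageId;
        final Set<String> pendingRecipients;
        public Replica(String messageId, Set<String> pendingRecipients) {
            this.messageId = messageId;
            this.pendingRecipients = pendingRecipients;
        }
    }

    /**
     * Apply a received acknowledgment list to the local buffer.
     * @param buffer    replicas currently stored on this host
     * @param delivered map from message id to the recipients known to have received it
     */
    public void apply(List<Replica> buffer, Map<String, Set<String>> delivered) {
        Iterator<Replica> it = buffer.iterator();
        while (it.hasNext()) {
            Replica r = it.next();
            Set<String> done = delivered.get(r.messageId);
            if (done == null) continue;
            // Rule (2): drop the recipients that have already been served.
            r.pendingRecipients.removeAll(done);
            // Rule (1), and the empty-recipient-list case of rule (2):
            // once nobody is left to serve, the replica can be discarded.
            if (r.pendingRecipients.isEmpty()) {
                it.remove();
            }
        }
    }
}

Note that these delivery lists are themselves buffered state (the circular queues mentioned above), so they are subject to the same dimensioning concerns as the message buffers.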
One consequence of the dimensioning problem is that scalability may be seriously impacted in peer-to-peer middleware for mobile computing due to the resource poverty of the devices (limited memory to store temporarily messages) and the number of possible interconnections in ad-hoc settings. What is worse is that common forms of commercial and social organisation (six degrees of separation) mean that even modest TTL values on messages will lead to widespread flooding of epidemic messages. This problem arises because of the lack of intelligence in the epidemic protocol, and can be addressed by selecting carrier nodes for messages with greater care. The details of this process are, however, outside the scope of this paper (but may be found in [13]) and do not affect the foundation on which the EMMA middleware is based: the ability to deliver messages asynchronously. 6. CRITICAL VIEW OF THE STATE OF THE ART The design of middleware platforms for mobile computing requires researchers to answer new and fundamentally different questions; simply assuming the presence of wired portions of the network on which centralised functionality can reside is not generalisable. Thus, it is necessary to investigate novel design principles and to devise architectural patterns that differ from those traditionally exploited in the design of middleware for fixed systems. As an example, consider the recent cross-layering trend in ad-hoc networking [1]. This is a way of re-thinking software systems design, explicitly abandoning the classical forms of layering, since, although this separation of concerns afford portability, it does so at the expense of potential efficiency gains. We believe that it is possible to view our approach as an instance of cross-layering. In fact, we have added the epidemic network protocol at middleware level and, at the same time, we have used the existing synchronous network protocol if present both in delivering messages (traditional layering) and in informing the middleware about when messages may be delivered by revealing details of the forwarding tables (layer violation). For this reason, we prefer to consider them jointly as the communication layer of our platform together providing more efficient message delivery. Another interesting aspect is the exploitation of context and system information to improve the performance of mobile middleware platforms. Again, as a result of adopting a cross-layering methodology, we are able to build systems that gather information from the underlying operating system and communication components in order to allow for adaptation of behaviour. We can summarise this conceptual design approach by saying that middleware platforms must be not only context-aware (i.e., they should be able to extract and analyse information from the surrounding context) but also system-aware (i.e., they should be able to gather information from the software and hardware components of the mobile system). A number of middleware systems have been developed to support ad-hoc networking with the use of asynchronous communication (such as LIME, XMIDDLE, STEAM [11]). In particular, the STEAM platform is an interesting example of event-based middleware for ad-hoc networks, providing location-aware message delivery and an effective solution for event filtering. A discussion of JMS, and its mobile realisation, has already been conducted in Sections 4 and 2. The Swiss company Softwired has developed the first JMS middleware for mobile computing, called iBus Mobile [10]. 
The main components of this typically infrastructure-based architecture are the JMS provider, the so-called mobile JMS gateway, which is deployed on a fixed host, and a lightweight JMS client library. The gateway is used for the communication between the application server and the mobile hosts, and is seen by the JMS provider as a normal JMS client. The JMS provider can be any JMS-enabled application server, such as BEA WebLogic. Pronto [19] is an example of a middleware system based on messaging that is specifically designed for mobile environments. The platform is composed of three classes of components: mobile clients implementing the JMS specification; gateways that control traffic, guaranteeing efficiency and allowing user customization through different plug-ins; and JMS servers. Different configurations of these components are possible; with respect to mobile ad-hoc network applications, the most interesting is Serverless JMS. The aim of this configuration is to adapt JMS to a decentralized model. The publish-subscribe model exploits the efficiency and the scalability of the underlying IP multicast protocol. Unreliable and reliable message delivery services are provided: reliability is achieved through a negative acknowledgment-based protocol. Pronto represents a good solution for infrastructure-based mobile networks, but it does not adequately target ad-hoc settings, since mobile nodes rely on fixed servers for the exchange of messages. Other MOM implementations for mobile environments exist; however, they are usually straightforward extensions of existing middleware [8]. The only implementation of MOM specifically designed for mobile ad-hoc networks was developed at the University of Newcastle [18]. This work is again a JMS adaptation; the focus of that implementation is on group communication and the use of application-level routing algorithms for topic delivery of messages. However, there are a number of differences in the focus of our work. The importance that we attribute to disconnections makes persistence a vital requirement for any middleware that is to be used in mobile ad-hoc networks. The authors of [18] signal persistence as possible future work, not considering the fact that routing a message to a disconnected host will result in delivery failure. This is a significant limitation in mobile settings, where unpredictable disconnections are the norm rather than the exception. 7. ROADMAP AND CONCLUSIONS Asynchronous communication is a useful communication paradigm for mobile ad-hoc networks, as hosts are allowed to come, go and pick up messages when convenient, also taking account of their resource availability (e.g., power, connectivity levels). In this paper we have described the state of the art in terms of MOM for mobile systems. We have also shown a proof-of-concept adaptation of JMS to the extreme scenario of partially connected mobile ad-hoc networks. We have described and discussed the characteristics and differences of our solution with respect to traditional JMS implementations and the existing adaptations for mobile settings. However, trade-offs between application-level routing and resource usage should also be investigated, as mobile devices are commonly power- and resource-scarce. A key limitation of this work is the poorly performing epidemic algorithm; an important advance in its practicability requires an algorithm that better balances efficiency against message delivery probability.
We are currently working on algorithms and protocols that, exploiting probabilistic and statistical techniques on the basis of small amounts of exchanged information, are able to improve considerably the efficiency in terms of resources (memory, bandwidth, etc) and the reliability of our middleware platform [13]. One futuristic research development, which may take these ideas of adaptation of messaging middleware for mobile environments further is the introduction of more mobility oriented communication extensions, for instance the support of geocast (i.e., the ability to send messages to specific geographical areas). 8. REFERENCES [1] M. Conti, G. Maselli, G. Turi, and S. Giordano. Cross-layering in Mobile ad-hoc Network Design. IEEE Computer, 37(2):48-51, February 2004. [2] A. Demers, D. Greene, C. Hauser, W. Irish, J. Larson, S. Shenker, H. Sturgis, D. Swinehart, and D. Terry. Epidemic Algorithms for Replicated Database Maintenance. In Sixth Symposium on Principles of Distributed Computing, pages 1-12, August 1987. [3] A. Doria, M. Uden, and D. P. Pandey. Providing connectivity to the Saami nomadic community. In Proceedings of the Second International Conference on Open Collaborative Design for Sustainable Innovation, December 2002. [4] M. Haahr, R. Cunningham, and V. Cahill. Supporting CORBA applications in a Mobile Environment. In 5th International Conference on Mobile Computing and Networking (MOBICOM99), pages 36-47. ACM, August 1999. [5] M. Hapner, R. Burridge, R. Sharma, J. Fialli, and K. Stout. Java Message Service Specification Version 1.1. Sun Microsystems, Inc., April 2002. http://java.sun.com/products/jms/. [6] J. Hart. WebSphere MQ: Connecting your applications without complex programming. IBM WebSphere Software White Papers, 2003. [7] S. Hayward and M. Pezzini. Marrying Middleware and Mobile Computing. Gartner Group Research Report, September 2001. [8] IBM. WebSphere MQ EveryPlace Version 2.0, November 2002. http://www-3.ibm.com/software/integration/wmqe/. [9] ITU. Connecting remote communities. Documents of the World Summit on Information Society, 2003. http://www.itu.int/osg/spu/wsis-themes. [10] S. Maffeis. Introducing Wireless JMS. Softwired AG, www.sofwired-inc.com, 2002. [11] C. Mascolo, L. Capra, and W. Emmerich. Middleware for Mobile Computing. In E. Gregori, G. Anastasi, and S. Basagni, editors, Advanced Lectures on Networking, volume 2497 of Lecture Notes in Computer Science, pages 20-58. Springer Verlag, 2002. [12] Microsoft. Microsoft Message Queuing (MSMQ) Version 2.0 Documentation. [13] M. Musolesi, S. Hailes, and C. Mascolo. Adaptive routing for intermittently connected mobile ad-hoc networks. Technical report, UCL-CS Research Note, July 2004. Submitted for Publication. [14] Sun Microsystems. Java Naming and Directory Interface (JNDI) Documentation Version 1.2. 2003. http://java.sun.com/products/jndi/. [15] Sun Microsystems. Jini Specification Version 2.0, 2003. http://java.sun.com/products/jini/. [16] A. Vahdat and D. Becker. Epidemic routing for Partially Connected ad-hoc Networks. Technical Report CS-2000-06, Department of Computer Science, Duke University, 2000. [17] A. Vargas. The OMNeT++ discrete event simulation system. In Proceedings of the European Simulation Multiconference (ESM``2001), Prague, June 2001. [18] E. Vollset, D. Ingham, and P. Ezhilchelvan. JMS on Mobile ad-hoc Networks. In Personal Wireless Communications (PWC), pages 40-52, Venice, September 2003. [19] E. Yoneki and J. Bacon. 
Pronto: MobileGateway with publish-subscribe paradigm over wireless network. Technical Report 559, University of Cambridge, Computer Laboratory, February 2003.
Adapting Asynchronous Messaging Middleware to Ad Hoc Networking ABSTRACT The characteristics of mobile environments, with the possibility of frequent disconnections and fluctuating bandwidth, have forced a rethink of traditional middleware. In particular, the synchronous communication paradigms often employed in standard middleware do not appear to be particularly suited to ad hoc environments, in which not even the intermittent availability of a backbone network can be assumed. Instead, asynchronous communication seems to be a generally more suitable paradigm for such environments. Message oriented middleware for traditional systems has been developed and used to provide an asynchronous paradigm of communication for distributed systems, and, recently, also for some specific mobile computing systems. In this paper, we present our experience in designing, implementing and evaluating EMMA (Epidemic Messaging Middleware for Ad hoc networks), an adaptation of Java Message Service (JMS) for mobile ad hoc environments. We discuss in detail the design challenges and some possible solutions, showing a concrete example of the feasibility and suitability of the application of the asynchronous paradigm in this setting and outlining a research roadmap for the coming years. 2. MESSAGE ORIENTED MIDDLEWARE AND JAVA MESSAGE SERVICE (JMS) Message-oriented middleware systems support communication between distributed components via message-passing: the sender sends a message to identified provider on the wired network. However, this approach is somewhat limited in terms of widespread applicability and scalability as a consequence of the concentration of functionality in the wired portion of the network. If JMS is to be adapted to completely ad hoc environments, where no fixed infrastructure is available, and where nodes change location and status very dynamically, more issues must be taken into consideration. Firstly, discovery needs to use a resilient but distributed model: in this extremely dynamic environment, static solutions are unacceptable. As discussed in Section 2, a JMS administrator defines queues and topics on the provider. Clients can then learn about them using the Java Naming and Directory Interface (JNDI). However, due to the way JNDI is designed, a JNDI node (or more than one) needs to be in reach in order to obtain a binding of a name to an address (i.e., knowing where a specific queue/topic is). In mobile ad hoc environments, the discovery process cannot assume the existence of a fixed set of discovery servers that are always reachable, as this would not match the dynamicity of ad hoc networks. Secondly, a JMS Provider, as suggested by the JMS specification, also needs to be reachable by each node in the network, in order to communicate. This assumes a very centralised architecture, which again does not match the requirements of a mobile ad hoc setting, in which nodes may be moving and sparse: a more distributed and dynamic solution is needed. Persistence is, however, essential functionality in asynchronous communication environments as hosts are, by definition, connected at different times. In the following section, we will discuss our experience in designing and implementing JMS for mobile ad hoc networks. 4. 
JMS FOR MOBILE AD HOC NETWORKS 4.1 Adaptation of JMS for Mobile Ad Hoc Networks Developing applications for mobile networks is yet more challenging: in addition to the same considerations as for infrastructured wireless environments, such as the limited device capabilities and power constraints, there are issues of rate of change of network connectivity, and the lack of a static routing infrastructure. Consequently, we now describe an initial attempt to adapt the JMS specification to target the particular requirements related to ad hoc scenarios. As discussed in Section 3, a JMS application can use either the rather than the exception. In other words, we cannot consider disconnections as failures. For these reasons, we adopt a slightly different semantics. With respect to durable subscriptions, if a subscriber becomes disconnected, notifications are not stored but are sent using the epidemic protocol rather than the synchronous protocol. In other words, durable notifications remain valid during the possible disconnections of the subscriber. On the other hand, if a non-durable subscriber becomes disconnected, its subscription is deleted; in other words, during disconnections, notifications are not sent using the epidemic protocol but exploit only the synchronous protocol. If the topic becomes accessible to this host again, it must make another subscription in order to receive the notifications. Unsubscription messages are delivered in the same way as are subscription messages. It is important to note that durable subscribers have explicitly to unsubscribe from a topic in order to stop the notification process; however, all durable subscriptions have a predefined expiration time in order to cope with the cases of subscribers that do not meet again because of their movements or failures. This feature is clearly provided to limit the number of the unnecessary messages sent around the network. 4.2 Message Delivery using Epidemic Routing In this section, we examine one possible mechanism that will allow the delivery of messages in a partially connected network. The mechanism we discuss is intended for the purposes of demonstrating feasibility; more efficient communication mechanisms for this environment are themselves complex, and are the subject of another paper [13]. The asynchronous message delivery described above is based on a typical pure epidemic-style routing protocol [16]. A message that needs to be sent is replicated on each host in reach. In this way, copies of the messages are quickly spread through connected networks, like an infection. If a host becomes connected to another cloud of mobile nodes, during its movement, the message spreads through this collection of hosts. Epidemic-style replication of data and messages has been exploited in the past in many fields starting with the distributed database systems area [2]. Within epidemic routing, each host maintains a buffer containing the messages that it has created and the replicas of the messages generated by the other hosts. To improve the performance, a hash-table indexes the content of the buffer. When two hosts connect, the host with the smaller identifier initiates a so-called successfully delivered that is updated as part of the normal process of information exchange between the hosts. The lists are exchanged during the first steps of the anti-entropic epidemic protocol with a certain predefined frequency. In the case of messages with multiple recipients, a list of the actual recipients is also stored. 
When a host receives the list, it checks its message buffer and updates it according to the following rules: (1) if a message has a single recipient and it has been delivered, it is deleted from the buffer; (2) if a message has multiple recipients, the identifiers of the delivered hosts are deleted from the associated list of recipients. If the resulting length of the list of recipients is zero, the message is deleted from the buffer. These lists have, clearly, finite dimensions and are implemented as circular queues. This simple mechanism, together with the use of expiration timestamps, guarantees that the old acknowledgment notifications are deleted from the system after a limited period of time. In order to improve the reliability of EMMA, a design mechanism for intelligent replication of queues and topics based on the context information could be developed. However this is not yet part of the current architecture of EMMA. 5. IMPLEMENTATION AND PRELIMINARY EVALUATION We implemented a prototype of our platform using the J2ME Personal Profile. The size of the executable is about 250KB including the JMS 1.1 jar file; this is a perfectly acceptable figure given the available memory of the current mobile devices on the market. We tested our prototype on HP IPaq PDAs running Linux, interconnected with WaveLan, and on a number of laptops with the same network interface. We also evaluated the middleware platform using the OMNET + + discrete event simulator [17] in order to explore a range of mobile scenarios that incorporated a more realistic number of hosts than was achievable experimentally. More specifically, we assessed the performance of the system in terms of delivery ratio and average delay, varying the density of population and the buffer size, and using persistent and non-persistent messages with different priorities. The simulation results show that the EMMA's performance, in terms of delivery ratio and delay of persistent messages with higher priorities, is good. In general, it is evident that the delivery ratio is strongly related to the correct dimensioning of the buffers to the maximum acceptable delay. Moreover, the epidemic algorithms are able to guarantee a high delivery ratio if one evaluates performance over a time interval sufficient for the dissemination of the replicas of messages (i.e., the dleware system based on messaging that is specifically designed for mobile environments. The platform is composed of three classes of components: mobile clients implementing the JMS specification, gateways that control traffic, guaranteeing efficiency and possible user customizations using different plug-ins and JMS servers. Different configurations of these components are possible; with respect to mobile ad hoc networks applications, the most interesting is
Adapting Asynchronous Messaging Middleware to Ad Hoc Networking ABSTRACT The characteristics of mobile environments, with the possibility of frequent disconnections and fluctuating bandwidth, have forced a rethink of traditional middleware. In particular, the synchronous communication paradigms often employed in standard middleware do not appear to be particularly suited to ad hoc environments, in which not even the intermittent availability of a backbone network can be assumed. Instead, asynchronous communication seems to be a generally more suitable paradigm for such environments. Message oriented middleware for traditional systems has been developed and used to provide an asynchronous paradigm of communication for distributed systems, and, recently, also for some specific mobile computing systems. In this paper, we present our experience in designing, implementing and evaluating EMMA (Epidemic Messaging Middleware for Ad hoc networks), an adaptation of Java Message Service (JMS) for mobile ad hoc environments. We discuss in detail the design challenges and some possible solutions, showing a concrete example of the feasibility and suitability of the application of the asynchronous paradigm in this setting and outlining a research roadmap for the coming years. 2. MESSAGE ORIENTED MIDDLEWARE AND JAVA MESSAGE SERVICE (JMS) 4. JMS FOR MOBILE AD HOC NETWORKS 4.1 Adaptation of JMS for Mobile Ad Hoc Networks 4.2 Message Delivery using Epidemic Routing 5. IMPLEMENTATION AND PRELIMINARY EVALUATION
Adapting Asynchronous Messaging Middleware to Ad Hoc Networking ABSTRACT The characteristics of mobile environments, with the possibility of frequent disconnections and fluctuating bandwidth, have forced a rethink of traditional middleware. In particular, the synchronous communication paradigms often employed in standard middleware do not appear to be particularly suited to ad hoc environments, in which not even the intermittent availability of a backbone network can be assumed. Instead, asynchronous communication seems to be a generally more suitable paradigm for such environments. Message oriented middleware for traditional systems has been developed and used to provide an asynchronous paradigm of communication for distributed systems, and, recently, also for some specific mobile computing systems. In this paper, we present our experience in designing, implementing and evaluating EMMA (Epidemic Messaging Middleware for Ad hoc networks), an adaptation of Java Message Service (JMS) for mobile ad hoc environments. We discuss in detail the design challenges and some possible solutions, showing a concrete example of the feasibility and suitability of the application of the asynchronous paradigm in this setting and outlining a research roadmap for the coming years.
J-50
Communication Complexity of Common Voting Rules
We determine the communication complexity of the common voting rules. The rules (sorted by their communication complexity from low to high) are plurality, plurality with runoff, single transferable vote (STV), Condorcet, approval, Bucklin, cup, maximin, Borda, Copeland, and ranked pairs. For each rule, we first give a deterministic communication protocol and an upper bound on the number of bits communicated in it; then, we give a lower bound on (even the nondeterministic) communication requirements of the voting rule. The bounds match for all voting rules except STV and maximin.
[ "commun", "commun complex", "complex", "vote rule", "vote", "stv", "maximin", "protocol", "prefer", "prefer aggreg", "resourc alloc", "elicit problem" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "U", "U", "U", "U" ]
Communication Complexity of Common Voting Rules∗ Vincent Conitzer Carnegie Mellon University 5000 Forbes Avenue Pittsburgh, PA 15213, USA conitzer@cs.cmu.edu Tuomas Sandholm Carnegie Mellon University 5000 Forbes Avenue Pittsburgh, PA 15213, USA sandholm@cs.cmu.edu ABSTRACT We determine the communication complexity of the common voting rules. The rules (sorted by their communication complexity from low to high) are plurality, plurality with runoff, single transferable vote (STV), Condorcet, approval, Bucklin, cup, maximin, Borda, Copeland, and ranked pairs. For each rule, we first give a deterministic communication protocol and an upper bound on the number of bits communicated in it; then, we give a lower bound on (even the nondeterministic) communication requirements of the voting rule. The bounds match for all voting rules except STV and maximin. Categories and Subject Descriptors F.2 [Theory of Computation]: Analysis of Algorithms and Problem Complexity; J.4 [Computer Applications]: Social and Behavioral Sciences-Economics General Terms Algorithms, Economics, Theory 1. INTRODUCTION One key factor in the practicality of any preference aggregation rule is its communication burden. To successfully aggregate the agents'' preferences, it is usually not necessary for all the agents to report all of their preference information. Clever protocols that elicit the agents'' preferences partially and sequentially have the potential to dramatically reduce the required communication. This has at least the following advantages: • It can make preference aggregation feasible in settings where the total amount of preference information is too large to communicate. • Even when communicating all the preference information is feasible, reducing the communication requirements lessens the burden placed on the agents. This is especially true when the agents, rather than knowing all their preferences in advance, need to invest effort (such as computation or information gathering) to determine their preferences [16]. • It preserves (some of) the agents'' privacy. Most of the work on reducing the communication burden in preference aggregation has focused on resource allocation settings such as combinatorial auctions, in which an auctioneer auctions off a number of (possibly distinct) items in a single event. Because in a combinatorial auction, bidders can have separate valuations for each of an exponential number of possible bundles of items, this is a setting in which reducing the communication burden is especially crucial. This can be accomplished by supplementing the auctioneer with an elicitor that incrementally elicits parts of the bidders'' preferences on an as-needed basis, based on what the bidders have revealed about their preferences so far, as suggested by Conen and Sandholm [5]. For example, the elicitor can ask for a bidder``s value for a specific bundle (value queries), which of two bundles the bidder prefers (order queries), which bundle he ranks kth or what the rank of a given bundle is (rank queries), which bundle he would purchase given a particular vector of prices (demand queries), etc.-until (at least) the final allocation can be determined. Experimentally, this yields drastic savings in preference revelation [11]. Furthermore, if the agents'' valuation functions are drawn from certain natural subclasses, the elicitation problem can be solved using only polynomially many queries even in the worst case [23, 4, 13, 18, 14]. 
For a review of preference elicitation in combinatorial auctions, see [17]. Ascending combinatorial auctions are a well-known special form of preference elicitation, where the elicitor asks demand queries with increasing prices [15, 21, 1, 9]. Finally, resource 78 allocation problems have also been studied from a communication complexity viewpoint, thereby deriving lower bounds on the required communication. For example, Nisan and Segal show that exponential communication is required even to obtain a surplus greater than that obtained by auctioning off all objects as a single bundle [14]. Segal also studies social choice rules in general, and shows that for a large class of social choice rules, supporting budget sets must be revealed such that if every agent prefers the same outcome in her budget set, this proves the optimality of that outcome. Segal then uses this characterization to prove bounds on the communication required in resource allocation as well as matching settings [20]. In this paper, we will focus on the communication requirements of a generally applicable subclass of social choice rules, commonly known as voting rules. In a voting setting, there is a set of candidate outcomes over which the voters express their preferences by submitting a vote (typically, a ranking of the candidates), and the winner (that is, the chosen outcome) is determined based on these votes. The communication required by voting rules can be large either because the number of voters is large (such as, for example, in national elections), or because the number of candidates is large (for example, the agents can vote over allocations of a number of resources), or both. Prior work [8] has studied elicitation in voting, studying how computationally hard it is to decide whether a winner can be determined with the information elicited so far, as well as how hard it is to find the optimal sequence of queries given perfect suspicions about the voters'' preferences. In addition, that paper discusses strategic (game-theoretic) issues introduced by elicitation. In contrast, in this paper, we are concerned with the worst-case number of bits that must be communicated to execute a given voting rule, when nothing is known in advance about the voters'' preferences. We determine the communication complexity of the common voting rules. For each rule, we first give an upper bound on the (deterministic) communication complexity by providing a communication protocol for it and analyzing how many bits need to be transmitted in this protocol. (Segal``s results [20] do not apply to most voting rules because most voting rules are not intersectionmonotonic (or even monotonic).1 ) For many of the voting rules under study, it turns out that one cannot do better than simply letting each voter immediately communicate all her (potentially relevant) information. However, for some rules (such as plurality with runoff, STV and cup) there is a straightforward multistage communication protocol that, with some analysis, can be shown to significantly outperform the immediate communication of all (potentially relevant) information. Finally, for some rules (such as the Condorcet and Bucklin rules), we need to introduce a more complex communication protocol to achieve the best possible upper 1 For two of the rules that we study that are intersectionmonotonic, namely the approval and Condorcet rules, Segal``s results can in fact be used to give alternative proofs of our lower bounds. 
We only give direct proofs for these rules here because 1) these direct proofs are among the easier ones in this paper, 2) the alternative proofs are nontrivial even given Segal``s results, and 3) a space constraint applies. However, we hope to also include the alternative proofs in a later version. bound. After obtaining the upper bounds, we show that they are tight by giving matching lower bounds on (even the nondeterministic) communication complexity of each voting rule. There are two exceptions: STV, for which our upper and lower bounds are apart by a factor log m; and maximin, for which our best deterministic upper bound is also a factor log m above the (nondeterministic) lower bound, although we give a nondeterministic upper bound that matches the lower bound. 2. REVIEW OF VOTING RULES In this section, we review the common voting rules that we study in this paper. A voting rule2 is a function mapping a vector of the n voters'' votes (i.e. preferences over candidates) to one of the m candidates (the winner) in the candidate set C. In some cases (such as the Condorcet rule), the rule may also declare that no winner exists. We do not concern ourselves with what happens in case of a tie between candidates (our lower bounds hold regardless of how ties are broken, and the communication protocols used for our upper bounds do not attempt to break the ties). All of the rules that we study are rank-based rules, which means that a vote is defined as an ordering of the candidates (with the exception of the plurality rule, for which a vote is a single candidate, and the approval rule, for which a vote is a subset of the candidates). We will consider the following voting rules. (For rules that define a score, the candidate with the highest score wins.) • scoring rules. Let α = α1, ... , αm be a vector of integers such that α1 ≥ α2 ... ≥ αm. For each voter, a candidate receives α1 points if it is ranked first by the voter, α2 if it is ranked second etc.. The score sα of a candidate is the total number of points the candidate receives. The Borda rule is the scoring rule with α = m−1, m−2, ... , 0 . The plurality rule is the scoring rule with α = 1, 0, ... , 0 . • single transferable vote (STV). The rule proceeds through a series of m − 1 rounds. In each round, the candidate with the lowest plurality score (that is, the least number of voters ranking it first among the remaining candidates) is eliminated (and each of the votes for that candidate transfer to the next remaining candidate in the order given in that vote). The winner is the last remaining candidate. • plurality with run-off. In this rule, a first round eliminates all candidates except the two with the highest plurality scores. Votes are transferred to these as in the STV rule, and a second round determines the winner from these two. • approval. Each voter labels each candidate as either approved or disapproved. The candidate approved by the greatest number of voters wins. • Condorcet. For any two candidates i and j, let N(i, j) be the number of voters who prefer i to j. If there is a candidate i that is preferred to any other candidate by a majority of the voters (that is, N(i, j) > N(j, i) for all j = i-that is, i wins every pairwise election), then candidate i wins. 2 The term voting protocol is often used to describe the same concept, but we seek to draw a sharp distinction between the rule mapping preferences to outcomes, and the communication/elicitation protocol used to implement this rule. 79 • maximin (aka. Simpson). 
The maximin score of i is s(i) = min_{j≠i} N(i, j), that is, i's worst performance in a pairwise election. The candidate with the highest maximin score wins. • Copeland. For any two distinct candidates i and j, let C(i, j) = 1 if N(i, j) > N(j, i), C(i, j) = 1/2 if N(i, j) = N(j, i), and C(i, j) = 0 if N(i, j) < N(j, i). The Copeland score of candidate i is s(i) = Σ_{j≠i} C(i, j). • cup (sequential binary comparisons). The cup rule is defined by a balanced binary tree T with one leaf per candidate, and an assignment of candidates to leaves (each leaf gets one candidate); balanced means that the difference in depth between two leaves can be at most one. Each non-leaf node is assigned the winner of the pairwise election of the node's children; the candidate assigned to the root wins. • Bucklin. For any candidate i and integer l, let B(i, l) be the number of voters that rank candidate i among the top l candidates. The winner is arg min_i (min{l : B(i, l) > n/2}). That is, if we say that a voter approves her top l candidates, then we repeatedly increase l by 1 until some candidate is approved by more than half the voters, and this candidate is the winner. • ranked pairs. This rule determines an order on all the candidates, and the winner is the candidate at the top of this order. Sort all ordered pairs of candidates (i, j) by N(i, j), the number of voters who prefer i to j. Starting with the pair (i, j) with the highest N(i, j), we lock in the result of their pairwise election (i ranked above j). Then, we move to the next pair, and we lock in the result of their pairwise election. We continue to lock in every pairwise result that does not contradict the ordering established so far. We emphasize that these definitions of voting rules do not concern themselves with how the votes are elicited from the voters; all the voting rules, including those that are suggestively defined in terms of rounds, are in actuality just functions mapping the vector of all the voters' votes to a winner. Nevertheless, there are always many different ways of eliciting the votes (or the relevant parts thereof) from the voters. For example, in the plurality with runoff rule, one way of eliciting the votes is to ask every voter to declare her entire ordering of the candidates up front. Alternatively, we can first ask every voter to declare only her most preferred candidate; then, we will know the two candidates in the runoff, and we can ask every voter which of these two candidates she prefers. Thus, we distinguish between the voting rule (the mapping from vectors of votes to outcomes) and the communication protocol (which determines how the relevant parts of the votes are actually elicited from the voters). The goal of this paper is to give efficient communication protocols for the voting rules just defined, and to prove that there do not exist any more efficient communication protocols. It is interesting to note that the choice of the communication protocol may affect the strategic behavior of the voters. Multistage communication protocols may reveal to the voters some information about how the other voters are voting (for example, in the two-stage communication protocol just given for plurality with runoff, in the second stage voters will know which two candidates have the highest plurality scores). In general, when the voters receive such information, it may give them incentives to vote differently than they would have in a single-stage communication protocol in which all voters declare their entire votes simultaneously.
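To make the rule definitions above concrete, the following sketch (our illustration, not part of the original paper) computes the plurality, Borda, and maximin scores from a profile of complete rankings, where each vote is an array listing candidate indices from most to least preferred.

import java.util.Arrays;

/** Illustrative implementations of a few of the voting rules defined above. */
public class VotingRules {

    /** votes[v][r] = candidate that voter v ranks in position r (0 = most preferred). */
    public static int[] pluralityScores(int[][] votes, int m) {
        int[] s = new int[m];
        for (int[] vote : votes) s[vote[0]]++;
        return s;
    }

    /** Borda: a candidate ranked r-th (0-based) by a voter gets m-1-r points. */
    public static int[] bordaScores(int[][] votes, int m) {
        int[] s = new int[m];
        for (int[] vote : votes)
            for (int r = 0; r < m; r++)
                s[vote[r]] += m - 1 - r;
        return s;
    }

    /** N(i,j): number of voters preferring i to j, for all ordered pairs. */
    public static int[][] pairwise(int[][] votes, int m) {
        int[][] n = new int[m][m];
        for (int[] vote : votes)
            for (int a = 0; a < m; a++)
                for (int b = a + 1; b < m; b++)
                    n[vote[a]][vote[b]]++;   // vote[a] is ranked above vote[b]
        return n;
    }

    /** Maximin: s(i) = min over j != i of N(i,j); the highest score wins. */
    public static int[] maximinScores(int[][] votes, int m) {
        int[][] n = pairwise(votes, m);
        int[] s = new int[m];
        for (int i = 0; i < m; i++) {
            s[i] = Integer.MAX_VALUE;
            for (int j = 0; j < m; j++)
                if (j != i) s[i] = Math.min(s[i], n[i][j]);
        }
        return s;
    }

    public static void main(String[] args) {
        // Three voters, three candidates (0, 1, 2), each vote a full ranking.
        int[][] votes = { {0, 1, 2}, {1, 0, 2}, {2, 0, 1} };
        System.out.println("plurality " + Arrays.toString(pluralityScores(votes, 3)));
        System.out.println("borda     " + Arrays.toString(bordaScores(votes, 3)));
        System.out.println("maximin   " + Arrays.toString(maximinScores(votes, 3)));
    }
}

For the three example votes in main, plurality ties all candidates at (1, 1, 1), while Borda gives (4, 3, 2) and maximin gives (2, 1, 1), so candidate 0 wins under both of the latter rules.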
Of course, even the single-stage communication protocol is not strategy-proof (a strategy-proof protocol is one in which it is in the players' best interest to report their preferences truthfully) for any reasonable voting rule, by the Gibbard-Satterthwaite theorem [10, 19]. However, this does not mean that we should not be concerned about adding even more opportunities for strategic voting. In fact, many of the communication protocols introduced in this paper do introduce additional opportunities for strategic voting, but we do not have the space to discuss this here. (In prior work [8], we do give an example where an elicitation protocol for the approval voting rule introduces strategic voting, and give principles for designing elicitation protocols that do not introduce strategic problems.) Now that we have reviewed voting rules, we move on to a brief review of communication complexity theory. 3. REVIEW OF SOME COMMUNICATION COMPLEXITY THEORY In this section, we review the basic model of a communication problem and the lower-bounding technique of constructing a fooling set. (The basic model of a communication problem is due to Yao [22]; for an overview see Kushilevitz and Nisan [12].) Each player 1 ≤ i ≤ n knows (only) input x_i. Together, they seek to compute f(x_1, x_2, ..., x_n). In a deterministic protocol for computing f, in each stage, one of the players announces (to all other players) a bit of information based on her own input and the bits announced so far. Eventually, the communication terminates and all players know f(x_1, x_2, ..., x_n). The goal is to minimize the worst-case (over all input vectors) number of bits sent. The deterministic communication complexity of a problem is the worst-case number of bits sent in the best (correct) deterministic protocol for it. In a nondeterministic protocol, the next bit to be sent can be chosen nondeterministically. For the purposes of this paper, we will consider a nondeterministic protocol correct if for every input vector, there is some sequence of nondeterministic choices the players can make so that the players know the value of f when the protocol terminates. The nondeterministic communication complexity of a problem is the worst-case number of bits sent in the best (correct) nondeterministic protocol for it. We are now ready to give the definition of a fooling set. Definition 1. A fooling set is a set of input vectors {(x^1_1, x^1_2, ..., x^1_n), (x^2_1, x^2_2, ..., x^2_n), ..., (x^k_1, x^k_2, ..., x^k_n)} such that for any i, f(x^i_1, x^i_2, ..., x^i_n) = f_0 for some constant f_0, but for any i ≠ j, there exists some vector (r_1, r_2, ..., r_n) ∈ {i, j}^n such that f(x^{r_1}_1, x^{r_2}_2, ..., x^{r_n}_n) ≠ f_0. (That is, we can mix the inputs from the two input vectors to obtain a vector with a different function value.) It is known that if a fooling set of size k exists, then log k is a lower bound on the communication complexity (even the nondeterministic communication complexity) [12]. For the purposes of this paper, f is the voting rule that maps the votes to the winning candidate, and x_i is voter i's vote (the information that the voting rule would require from the voter if there were no possibility of multistage communication, i.e., the most preferred candidate (plurality), the approved candidates (approval), or the ranking of all the candidates (all other protocols)). However, when we derive our lower bounds, f will only signify whether a distinguished candidate a wins. (That is, f is 1 if a wins, and 0 otherwise.)
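As a small illustration of Definition 1 (our own sketch, not from the paper), the following brute-force check verifies the fooling-set condition for a given set of input vectors and an arbitrary function f, by trying every way of mixing two of the vectors; it is only practical for small numbers of players.

import java.util.List;
import java.util.function.Function;

/** Brute-force check of the fooling-set condition in Definition 1 (illustrative only). */
public class FoolingSetCheck {

    /**
     * @param vectors candidate fooling set; vectors.get(i)[p] is player p's input in vector i
     * @param f       the function being computed (e.g., "does candidate a win?")
     * @param f0      the common value all vectors must map to
     */
    public static <X, Y> boolean isFoolingSet(List<X[]> vectors, Function<X[], Y> f, Y f0) {
        int n = vectors.get(0).length;
        for (X[] v : vectors)
            if (!f.apply(v).equals(f0)) return false;          // every vector must map to f0
        for (int i = 0; i < vectors.size(); i++)
            for (int j = i + 1; j < vectors.size(); j++)
                if (!someMixDiffers(vectors.get(i), vectors.get(j), f, f0, n)) return false;
        return true;
    }

    /** Try all 2^n ways of taking each player's input from vector a or vector b. */
    private static <X, Y> boolean someMixDiffers(X[] a, X[] b, Function<X[], Y> f, Y f0, int n) {
        for (long mask = 0; mask < (1L << n); mask++) {
            X[] mixed = a.clone();
            for (int p = 0; p < n; p++)
                if (((mask >> p) & 1) == 1) mixed[p] = b[p];
            if (!f.apply(mixed).equals(f0)) return true;       // found a mix with a different value
        }
        return false;
    }
}

Instantiating f as "does candidate a win under the rule in question?" and the inputs as votes gives exactly the kind of fooling set used in the lower-bound proofs that follow.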
This will strengthen our lower bound results (because it implies that even finding out whether one given candidate wins is hard).5 Thus, a fooling set in our context is a set of vectors of votes so that a wins (does not win) with each of them; but for any two different vote vectors in the set, there is a way of taking some voters'' votes from the first vector and the others'' votes from the second vector, so that a does not win (wins). To simplify the proofs of our lower bounds, we make assumptions such as the number of voters n is odd in many of these proofs. Therefore, technically, we do not prove the lower bound for (number of candidates, number of voters) pairs (m, n) that do not satisfy these assumptions (for example, if we make the above assumption, then we technically do not prove the lower bound for any pair (m, n) in which n is even). Nevertheless, we always prove the lower bound for a representative set of (m, n) pairs. For example, for every one of our lower bounds it is the case that for infinitely many values of m, there are infinitely many values of n such that the lower bound is proved for the pair (m, n). 4. RESULTS We are now ready to present our results. For each voting rule, we first give a deterministic communication protocol for determining the winner to establish an upper bound. Then, we give a lower bound on the nondeterministic communication complexity (even on the complexity of deciding whether a given candidate wins, which is an easier question). The lower bounds match the upper bounds in all but two cases: the STV rule (upper bound O(n(log m)2 ); lower bound Ω(n log m)) and the maximin rule (upper bound O(nm log m), although we do give a nondeterministic protocol that is O(nm); lower bound Ω(nm)). When we discuss a voting rule in which the voters rank the candidates, we will represent a ranking in which candidate c1 is ranked first, c2 is ranked second, etc. as c1 c2 ... cm. 5 One possible concern is that in the case where ties are possible, it may require much communication to verify whether a specific candidate a is among the winners, but little communication to produce one of the winners. However, all the fooling sets we use in the proofs have the property that if a wins, then a is the unique winner. Therefore, in these fooling sets, if one knows any one of the winners, then one knows whether a is a winner. Thus, computing one of the winners requires at least as much communication as verifying whether a is among the winners. In general, when a communication problem allows multiple correct answers for a given vector of inputs, this is known as computing a relation rather than a function [12]. However, as per the above, we can restrict our attention to a subset of the domain where the voting rule truly is a (single-valued) function, and hence lower bounding techniques for functions rather than relations will suffice. Sometimes for the purposes of a proof the internal ranking of a subset of the candidates does not matter, and in this case we will not specify it. For example, if S = {c2, c3}, then c1 S c4 indicates that either the ranking c1 c2 c3 c4 or the ranking c1 c3 c2 c4 can be used for the proof. We first give a universal upper bound. Theorem 1. The deterministic communication complexity of any rank-based voting rule is O(nm log m). Proof. 
This bound is achieved by simply having everyone communicate their entire ordering of the candidates (indicating the rank of an individual candidate requires only O(log m) bits, so each of the n voters can simply indicate the rank of each of the m candidates). The next lemma will be useful in a few of our proofs. Lemma 1. If m divides n, then log(n!) − m log((n/m)!) ≥ n(log m − 1)/2. Proof. If n/m = 1 (that is, n = m), then this expression simplifies to log(n!). We have log(n!) = Σ_{i=1}^{n} log i ≥ ∫_{1}^{n} log x dx, which, using integration by parts, is equal to n log n − (n − 1) > n(log n − 1) = n(log m − 1) > n(log m − 1)/2. So, we can assume that n/m ≥ 2. We observe that log(n!) = Σ_{i=1}^{n} log i = Σ_{i=0}^{n/m−1} Σ_{j=1}^{m} log(im + j) ≥ Σ_{i=1}^{n/m−1} Σ_{j=1}^{m} log(im) = m Σ_{i=1}^{n/m−1} log(im), and that m log((n/m)!) = m Σ_{i=1}^{n/m} log i. Therefore, log(n!) − m log((n/m)!) ≥ m Σ_{i=1}^{n/m−1} log(im) − m Σ_{i=1}^{n/m} log i = m((Σ_{i=1}^{n/m−1} log(im/i)) − log(n/m)) = m((n/m − 1) log m − log n + log m) = n log m − m log n. Now, using the fact that n/m ≥ 2, we have m log n = n(m/n) log(m(n/m)) = n(m/n)(log m + log(n/m)) ≤ n(1/2)(log m + log 2). Thus, log(n!) − m log((n/m)!) ≥ n log m − m log n ≥ n log m − n(1/2)(log m + log 2) = n(log m − 1)/2. Theorem 2. The deterministic communication complexity of the plurality rule is O(n log m). Proof. Indicating one of the candidates requires only O(log m) bits, so each voter can simply indicate her most preferred candidate. Theorem 3. The nondeterministic communication complexity of the plurality rule is Ω(n log m) (even to decide whether a given candidate a wins). Proof. We will exhibit a fooling set of size n'!/((n'/m)!)^m, where n' = (n − 1)/2. Taking the logarithm of this gives log(n'!) − m log((n'/m)!), so the result follows from Lemma 1. The fooling set will consist of all vectors of votes satisfying the following constraints: • For any 1 ≤ i ≤ n', voters 2i − 1 and 2i vote the same.
Theorem 5. The nondeterministic communication complexity of the plurality with runoff rule is Ω(n log m) (even to decide whether a given candidate a wins).

Proof. We will exhibit a fooling set of size n'!/((n'/m')!)^(m') where m' = m/2 and n' = (n − 2)/4. Taking the logarithm of this gives log(n'!) − m' log((n'/m')!), so the result follows from Lemma 1. Divide the candidates into m' pairs: (c1, d1), (c2, d2), ..., (cm', dm') where c1 = a and d1 = b. The fooling set will consist of all vectors of votes satisfying the following constraints:
• For any 1 ≤ i ≤ n', voters 4i − 3 and 4i − 2 rank the candidates ck(i) ≻ a ≻ C − {a, ck(i)}, for some candidate ck(i). (If ck(i) = a then the vote is simply a ≻ C − {a}.)
• For any 1 ≤ i ≤ n', voters 4i − 1 and 4i rank the candidates dk(i) ≻ a ≻ C − {a, dk(i)} (that is, their most preferred candidate is the candidate that is paired with the candidate that the previous two voters vote for).
• Every candidate is ranked at the top of equally many of the first 4n' = n − 2 votes.
• Voter 4n' + 1 = n − 1 ranks the candidates a ≻ C − {a}.
• Voter 4n' + 2 = n ranks the candidates b ≻ C − {b}.
Candidate a wins with each one of these vote vectors: because of the last two votes, candidates a and b are one vote ahead of all the other candidates and continue to the runoff, and at this point all the votes that had another candidate ranked at the top transfer to a, so that a wins the runoff. Given that m' divides n', let us see how many vote vectors there are in the fooling set. We need to distribute n' groups of four voters evenly over the m' pairs of candidates, and (as in the proof of Theorem 3) there are n'!/((n'/m')!)^(m') ways of doing this. All that remains to show is that for any two distinct vectors of votes in the fooling set, we can let each of the voters vote according to one of these two vectors in such a way that a loses. Let i be a number such that ck(i) is not the same in both of these two vote vectors, that is, c¹k(i) (ck(i) in the first vote vector) is not equal to c²k(i) (ck(i) in the second vote vector). Without loss of generality, suppose c¹k(i) ≠ a. Now, construct a new vote vector by taking votes 4i − 3, 4i − 2, 4i − 1, 4i from the first vote vector, and the remaining votes from the second vote vector. In this newly constructed vote vector, c¹k(i) and d¹k(i) each receive 4n'/m + 2 votes in the first round, whereas a receives at most 4n'/m + 1 votes. So, a does not continue to the runoff in the newly constructed vote vector, and hence we have a correct fooling set.
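The fooling-set constructions of Theorems 3 and 5 can be sanity-checked by brute force on tiny instances. The sketch below is our own illustration (feasible only for very small n; all names are ours): it verifies exactly the property used in the lower bounds, namely that a wins with every vote vector in the set, yet any two distinct vectors can be mixed voter-by-voter so that a loses. It is instantiated with the Theorem 3 construction for m = 2 candidates and n = 5 voters.

```python
from itertools import combinations, product

def plurality_winner_is_a(votes, a):
    """True iff candidate a is the unique plurality winner of `votes`
    (a list with one top-choice candidate per voter)."""
    counts = {}
    for v in votes:
        counts[v] = counts.get(v, 0) + 1
    best = max(counts.values())
    return counts.get(a, 0) == best and list(counts.values()).count(best) == 1

def is_fooling_set(vote_vectors, a, wins=plurality_winner_is_a):
    """Brute-force check of the fooling-set property used in the lower bounds:
    a wins with every vector, and for every pair of vectors some mix of the
    two (each voter taking her vote from one of them) makes a lose."""
    n = len(vote_vectors[0])
    if not all(wins(v, a) for v in vote_vectors):
        return False
    for v1, v2 in combinations(vote_vectors, 2):
        mixes = ([v1[i] if pick else v2[i] for i in range(n)]
                 for pick in product([True, False], repeat=n))
        if not any(not wins(mix, a) for mix in mixes):
            return False
    return True

# Theorem 3's construction for candidates {a, b} and n = 5 voters:
# voter pairs (1,2) and (3,4) are split evenly over the candidates, voter 5 votes for a.
a = "a"
fooling_set = [["a", "a", "b", "b", "a"],
               ["b", "b", "a", "a", "a"]]
print(is_fooling_set(fooling_set, a))  # True
```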
Theorem 6. The nondeterministic communication complexity of the Borda rule is Ω(nm log m) (even to decide whether a given candidate a wins).

Proof. We will exhibit a fooling set of size (m'!)^(n') where m' = m − 2 and n' = (n − 2)/4. This will prove the theorem because log(m'!) is Ω(m log m), so that log((m'!)^(n')) = n' log(m'!) is Ω(nm log m). For every vector (π1, π2, ..., πn') consisting of n' orderings of all candidates other than a and another fixed candidate b (technically, the orderings take the form of a one-to-one function πi : {1, 2, ..., m'} → C − {a, b} with πi(j) = c indicating that candidate c is the jth in the order represented by πi), let the following vector of votes be an element of the fooling set:
• For 1 ≤ i ≤ n', let voters 4i − 3 and 4i − 2 rank the candidates a ≻ b ≻ πi(1) ≻ πi(2) ≻ ... ≻ πi(m').
• For 1 ≤ i ≤ n', let voters 4i − 1 and 4i rank the candidates πi(m') ≻ πi(m' − 1) ≻ ... ≻ πi(1) ≻ b ≻ a.
• Let voter 4n' + 1 = n − 1 rank the candidates a ≻ b ≻ π0(1) ≻ π0(2) ≻ ... ≻ π0(m') (where π0 is an arbitrary order of the candidates other than a and b which is the same for every element of the fooling set).
• Let voter 4n' + 2 = n rank the candidates π0(m') ≻ π0(m' − 1) ≻ ... ≻ π0(1) ≻ a ≻ b.
We observe that this fooling set has size (m'!)^(n'), and that candidate a wins in each vector of votes in the fooling set (to see why, we observe that for any 1 ≤ i ≤ n', votes 4i − 3 and 4i − 2 rank the candidates in the exact opposite way from votes 4i − 1 and 4i, which under the Borda rule means they cancel out; and the last two votes give one more point to a than to any other candidate, besides b who gets two fewer points than a). All that remains to show is that for any two distinct vectors of votes in the fooling set, we can let each of the voters vote according to one of these two vectors in such a way that a loses. Let the first vote vector correspond to the vector (π¹1, π¹2, ..., π¹n'), and let the second vote vector correspond to the vector (π²1, π²2, ..., π²n'). For some i, we must have π¹i ≠ π²i, so that for some candidate c ∉ {a, b}, (π¹i)⁻¹(c) < (π²i)⁻¹(c) (that is, c is ranked higher in π¹i than in π²i). Now, construct a new vote vector by taking votes 4i − 3 and 4i − 2 from the first vote vector, and the remaining votes from the second vote vector. a's Borda score remains unchanged. However, because c is ranked higher in π¹i than in π²i, c receives at least 2 more points from votes 4i − 3 and 4i − 2 in the newly constructed vote vector than it did in the second vote vector. It follows that c has a higher Borda score than a in the newly constructed vote vector. So, a is not the winner in the newly constructed vote vector, and hence we have a correct fooling set.

Theorem 7. The nondeterministic communication complexity of the Copeland rule is Ω(nm log m) (even to decide whether a given candidate a wins).

Proof. We will exhibit a fooling set of size (m'!)^(n') where m' = (m − 2)/2 and n' = (n − 2)/2. This will prove the theorem because log(m'!) is Ω(m log m), so that log((m'!)^(n')) = n' log(m'!) is Ω(nm log m). We write the set of candidates as the following disjoint union: C = {a, b} ∪ L ∪ R where L = {l1, l2, ..., lm'} and R = {r1, r2, ..., rm'}. For every vector (π1, π2, ..., πn') consisting of n' permutations of the integers 1 through m' (πi : {1, 2, ..., m'} → {1, 2, ..., m'}), let the following vector of votes be an element of the fooling set:
• For 1 ≤ i ≤ n', let voter 2i − 1 rank the candidates a ≻ b ≻ lπi(1) ≻ rπi(1) ≻ lπi(2) ≻ rπi(2) ≻ ... ≻ lπi(m') ≻ rπi(m').
• For 1 ≤ i ≤ n', let voter 2i rank the candidates rπi(m') ≻ lπi(m') ≻ rπi(m'−1) ≻ lπi(m'−1) ≻ ... ≻ rπi(1) ≻ lπi(1) ≻ b ≻ a.
• Let voter n − 1 = 2n' + 1 rank the candidates a ≻ b ≻ l1 ≻ r1 ≻ l2 ≻ r2 ≻ ... ≻ lm' ≻ rm'.
• Let voter n = 2n' + 2 rank the candidates rm' ≻ lm' ≻ rm'−1 ≻ lm'−1 ≻ ... ≻ r1 ≻ l1 ≻ a ≻ b.
We observe that this fooling set has size (m'!)^(n'), and that candidate a wins in each vector of votes in the fooling set (every pair of candidates is tied in their pairwise election, with the exception that a defeats b, so that a wins the election by half a point). All that remains to show is that for any two distinct vectors of votes in the fooling set, we can let each of the voters vote according to one of these two vectors in such a way that a loses. Let the first vote vector correspond to the vector (π¹1, π¹2, ..., π¹n'), and let the second vote vector correspond to the vector (π²1, π²2, ..., π²n'). For some i, we must have π¹i ≠ π²i, so that for some j ∈ {1, 2, ..., m'}, we have (π¹i)⁻¹(j) < (π²i)⁻¹(j). Now, construct a new vote vector by taking vote 2i − 1 from the first vote vector, and the remaining votes from the second vote vector. a's Copeland score remains unchanged. Let us consider the score of lj. We first observe that the rank of lj in vote 2i − 1 in the newly constructed vote vector is at least 2 higher than it was in the second vote vector, because (π¹i)⁻¹(j) < (π²i)⁻¹(j). Let D¹(lj) be the set of candidates in L ∪ R that voter 2i − 1 ranked lower than lj in the first vote vector, and let D²(lj) be the set of candidates in L ∪ R that voter 2i − 1 ranked lower than lj in the second vote vector. Then, it follows that in the newly constructed vote vector, lj defeats all the candidates in D¹(lj) − D²(lj) in their pairwise elections (because lj receives an extra vote in each one of these pairwise elections relative to the second vote vector), and loses to all the candidates in D²(lj) − D¹(lj) (because lj loses a vote in each one of these pairwise elections relative to the second vote vector), and ties with everyone else. But |D¹(lj)| − |D²(lj)| ≥ 2, and hence |D¹(lj) − D²(lj)| − |D²(lj) − D¹(lj)| ≥ 2. Hence, in the newly constructed vote vector, lj has at least two more pairwise wins than pairwise losses, and therefore has at least 1 more point than if lj had tied all its pairwise elections. Thus, lj has a higher Copeland score than a in the newly constructed vote vector. So, a is not the winner in the newly constructed vote vector, and hence we have a correct fooling set.
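The key cancellation step in the proofs of Theorems 6 and 7 is that a vote and its exact reverse contribute equally to every candidate's Borda score and cancel out in every pairwise election. This can be checked directly; the snippet below is an illustration of ours (the helper names are not from the paper).

```python
from itertools import combinations

def borda_points(ranking):
    """Borda points from a single vote: m-1 for the top candidate, ..., 0 for the last."""
    m = len(ranking)
    return {c: m - 1 - pos for pos, c in enumerate(ranking)}

def pairwise_margins(rankings):
    """N(x, y) - N(y, x) for every ordered pair, where N(x, y) counts voters preferring x to y."""
    candidates = rankings[0]
    margins = {(x, y): 0 for x in candidates for y in candidates if x != y}
    for r in rankings:
        pos = {c: i for i, c in enumerate(r)}
        for x, y in combinations(candidates, 2):
            sign = 1 if pos[x] < pos[y] else -1
            margins[(x, y)] += sign
            margins[(y, x)] -= sign
    return margins

ranking = ["a", "b", "c", "d"]
reverse = list(reversed(ranking))

# Borda: each candidate gets m-1 = 3 points in total from the two opposite votes.
totals = {c: borda_points(ranking)[c] + borda_points(reverse)[c] for c in ranking}
print(totals)                                               # {'a': 3, 'b': 3, 'c': 3, 'd': 3}

# Pairwise: every pairwise election between the two opposite votes is tied.
print(set(pairwise_margins([ranking, reverse]).values()))   # {0}
```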
Theorem 8. The nondeterministic communication complexity of the maximin rule is O(nm).

Proof. The nondeterministic protocol will guess which candidate w is the winner, and, for each other candidate c, which candidate o(c) is the candidate against whom c receives its lowest score in a pairwise election. Then, let every voter communicate the following:
• for each candidate c ≠ w, whether she prefers c to w;
• for each candidate c ≠ w, whether she prefers c to o(c).
We observe that this requires the communication of 2n(m − 1) bits. If the guesses were correct, then, letting N(d, e) be the number of voters preferring candidate d to candidate e, we should have N(c, o(c)) < N(w, c') for any c ≠ w and c' ≠ w, which will prove that w wins the election.
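The verification step of Theorem 8's nondeterministic protocol can be sketched as follows: given a guessed winner w and a guessed map o(·), the 2n(m − 1) communicated bits determine every N(c, o(c)) and every N(w, c'), and the guess is accepted iff each of the former is strictly below all of the latter. For illustration only, the code below recomputes these counts from full rankings rather than from the communicated bits; the function names and the example map are ours, not the paper's.

```python
def prefers(ranking, x, y):
    """True iff this voter ranks x above y."""
    return ranking.index(x) < ranking.index(y)

def verify_maximin_certificate(rankings, w, o):
    """Check the certificate of Theorem 8: for every c != w and c' != w,
    N(c, o(c)) < N(w, c').  If this holds, every other candidate's maximin
    score is below w's maximin score, so w wins."""
    candidates = rankings[0]
    others = [c for c in candidates if c != w]
    N = lambda x, y: sum(prefers(r, x, y) for r in rankings)
    upper = {c: N(c, o[c]) for c in others}   # an upper bound on c's maximin score
    lower = min(N(w, c) for c in others)      # w's maximin score
    return all(upper[c] < lower for c in others)

# Tiny example with 3 voters and 3 candidates; o is a guessed "worst opponent" map.
profile = [["w", "x", "y"], ["w", "y", "x"], ["x", "w", "y"]]
print(verify_maximin_certificate(profile, "w", {"x": "w", "y": "w"}))  # True
```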
Theorem 9. The nondeterministic communication complexity of the maximin rule is Ω(nm) (even to decide whether a given candidate a wins).

Proof. We will exhibit a fooling set of size 2^(n'm') where m' = m − 2 and n' = (n − 1)/4. Let b be a candidate other than a. For every vector (S1, S2, ..., Sn') consisting of n' subsets Si ⊆ C − {a, b}, let the following vector of votes be an element of the fooling set:
• For 1 ≤ i ≤ n', let voters 4i − 3 and 4i − 2 rank the candidates Si ≻ a ≻ C − (Si ∪ {a, b}) ≻ b.
• For 1 ≤ i ≤ n', let voters 4i − 1 and 4i rank the candidates b ≻ C − (Si ∪ {a, b}) ≻ a ≻ Si.
• Let voter 4n' + 1 = n rank the candidates a ≻ b ≻ C − {a, b}.
We observe that this fooling set has size (2^(m'))^(n') = 2^(n'm'), and that candidate a wins in each vector of votes in the fooling set (in every one of a's pairwise elections, a is ranked higher than its opponent by 2n' + 1 = (n + 1)/2 > n/2 votes). All that remains to show is that for any two distinct vectors of votes in the fooling set, we can let each of the voters vote according to one of these two vectors in such a way that a loses. Let the first vote vector correspond to the vector (S¹1, S¹2, ..., S¹n'), and let the second vote vector correspond to the vector (S²1, S²2, ..., S²n'). For some i, we must have S¹i ≠ S²i, so that either S¹i ⊈ S²i or S²i ⊈ S¹i. Without loss of generality, suppose S¹i ⊈ S²i, and let c be some candidate in S¹i − S²i. Now, construct a new vote vector by taking votes 4i − 3 and 4i − 2 from the first vote vector, and the remaining votes from the second vote vector. In this newly constructed vote vector, a is ranked higher than c by only 2n' − 1 voters, for the following reason. Whereas voters 4i − 3 and 4i − 2 do not rank c higher than a in the second vote vector (because c ∉ S²i), voters 4i − 3 and 4i − 2 do rank c higher than a in the first vote vector (because c ∈ S¹i). Moreover, in every one of b's pairwise elections, b is ranked higher than its opponent by at least 2n' voters. So, a has a lower maximin score than b, therefore a is not the winner in the newly constructed vote vector, and hence we have a correct fooling set.

Theorem 10. The deterministic communication complexity of the STV rule is O(n(log m)^2).

Proof. Consider the following communication protocol. Let each voter first announce her most preferred candidate (O(n log m) communication). In the remaining rounds, we will keep track of each voter's most preferred candidate among the remaining candidates, which will be enough to implement the rule. When candidate c is eliminated, let each of the voters whose most preferred candidate among the remaining candidates was c announce their most preferred candidate among the candidates remaining after c's elimination. If candidate c was the ith candidate to be eliminated (that is, there were m − i + 1 candidates remaining before c's elimination), it follows that at most n/(m − i + 1) voters had candidate c as their most preferred candidate among the remaining candidates, and thus the number of bits to be communicated after the elimination of the ith candidate is O((n/(m − i + 1)) log m). (Footnote 7: Actually, O((n/(m − i + 1)) log(m − i + 1)) is also correct, but it will not improve the bound.) Thus, the total communication in this communication protocol is O(n log m + Σ_{i=1}^{m−1} (n/(m − i + 1)) log m). Of course, Σ_{i=1}^{m−1} 1/(m − i + 1) = Σ_{i=2}^{m} 1/i, which is O(log m). Substituting into the previous expression, we find that the communication complexity is O(n(log m)^2).

Theorem 11. The nondeterministic communication complexity of the STV rule is Ω(n log m) (even to decide whether a given candidate a wins).

Proof. We omit this proof because of space constraint.
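The multistage protocol in the proof of Theorem 10 can be sketched as follows. This is an illustration of ours with arbitrary tie-breaking; the rankings stand in for the voters' private preferences, which the protocol only queries incrementally: after the initial round, only the voters whose current favorite has just been eliminated speak again.

```python
import math

def stv_elicitation(rankings):
    """Multistage protocol in the spirit of Theorem 10 (illustrative sketch;
    ties among lowest-scoring candidates are broken arbitrarily)."""
    n = len(rankings)
    remaining = set(rankings[0])
    bits_per_name = math.ceil(math.log2(len(remaining)))

    # Stage 0: every voter announces her most preferred candidate.
    top = [r[0] for r in rankings]
    bits_sent = n * bits_per_name

    while len(remaining) > 1:
        scores = {c: top.count(c) for c in remaining}
        eliminated = min(remaining, key=scores.get)   # lowest plurality score
        remaining.remove(eliminated)
        # Only the voters whose current favorite was eliminated speak again.
        for v in range(n):
            if top[v] == eliminated:
                top[v] = next(c for c in rankings[v] if c in remaining)
                bits_sent += bits_per_name
    return remaining.pop(), bits_sent

profile = [["a", "b", "c"], ["b", "a", "c"], ["b", "c", "a"],
           ["c", "a", "b"], ["c", "b", "a"]]
print(stv_elicitation(profile))
```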
Theorem 12. The deterministic communication complexity of the approval rule is O(nm).

Proof. Approving or disapproving of a candidate requires only one bit of information, so every voter can simply approve or disapprove of every candidate for a total communication of nm bits.

Theorem 13. The nondeterministic communication complexity of the approval rule is Ω(nm) (even to decide whether a given candidate a wins).

Proof. We will exhibit a fooling set of size 2^(n'm') where m' = m − 1 and n' = (n − 1)/4. For every vector (S1, S2, ..., Sn') consisting of n' subsets Si ⊆ C − {a}, let the following vector of votes be an element of the fooling set:
• For 1 ≤ i ≤ n', let voters 4i − 3 and 4i − 2 approve Si ∪ {a}.
• For 1 ≤ i ≤ n', let voters 4i − 1 and 4i approve C − (Si ∪ {a}).
• Let voter 4n' + 1 = n approve {a}.
We observe that this fooling set has size (2^(m'))^(n') = 2^(n'm'), and that candidate a wins in each vector of votes in the fooling set (a is approved by 2n' + 1 voters, whereas each other candidate is approved by only 2n' voters). All that remains to show is that for any two distinct vectors of votes in the fooling set, we can let each of the voters vote according to one of these two vectors in such a way that a loses. Let the first vote vector correspond to the vector (S¹1, S¹2, ..., S¹n'), and let the second vote vector correspond to the vector (S²1, S²2, ..., S²n'). For some i, we must have S¹i ≠ S²i, so that either S¹i ⊈ S²i or S²i ⊈ S¹i. Without loss of generality, suppose S¹i ⊈ S²i, and let b be some candidate in S¹i − S²i. Now, construct a new vote vector by taking votes 4i − 3 and 4i − 2 from the first vote vector, and the remaining votes from the second vote vector. In this newly constructed vote vector, a is still approved by 2n' + 1 votes. However, b is approved by 2n' + 2 votes, for the following reason. Whereas voters 4i − 3 and 4i − 2 do not approve b in the second vote vector (because b ∉ S²i), voters 4i − 3 and 4i − 2 do approve b in the first vote vector (because b ∈ S¹i). It follows that b's score in the newly constructed vote vector is b's score in the second vote vector (2n'), plus two. So, a is not the winner in the newly constructed vote vector, and hence we have a correct fooling set.

Interestingly, an Ω(m) lower bound can be obtained even for the problem of finding a candidate that is approved by more than one voter [20].

Theorem 14. The deterministic communication complexity of the Condorcet rule is O(nm).

Proof. We maintain a set of active candidates S which is initialized to C. At each stage, we choose two of the active candidates (say, the two candidates with the lowest indices), and we let each voter communicate which of the two candidates she prefers. (Such a stage requires the communication of n bits, one per voter.) The candidate preferred by fewer voters (the loser of the pairwise election) is removed from S. (If the pairwise election is tied, both candidates are removed.) After at most m − 1 iterations, only one candidate is left (or zero candidates are left, in which case there is no Condorcet winner). Let a be the remaining candidate. To find out whether candidate a is the Condorcet winner, let each voter communicate, for every candidate c ≠ a, whether she prefers a to c. (This requires the communication of at most n(m − 1) bits.) This is enough to establish whether a won each of its pairwise elections (and thus, whether a is the Condorcet winner).
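A sketch of the sequential-elimination protocol from the proof of Theorem 14 follows (our illustration; the function names are not from the paper). Candidates are eliminated pairwise with one bit per voter per comparison, and the single survivor is then verified against every other candidate, for O(nm) bits in total.

```python
def condorcet_protocol(rankings):
    """Sequential-elimination protocol in the spirit of Theorem 14 (sketch).
    Voters are modeled by full rankings, but only the bits described in the
    proof are counted: one bit per voter per pairwise comparison."""
    n = len(rankings)
    prefers = lambda r, x, y: r.index(x) < r.index(y)

    active = list(rankings[0])
    bits_sent = 0
    while len(active) > 1:
        x, y = active[0], active[1]               # two lowest-indexed active candidates
        votes_for_x = sum(prefers(r, x, y) for r in rankings)
        bits_sent += n                            # one bit per voter
        if votes_for_x > n - votes_for_x:
            active.remove(y)
        elif votes_for_x < n - votes_for_x:
            active.remove(x)
        else:                                     # tie: neither can be a Condorcet winner
            active.remove(x)
            active.remove(y)
    if not active:
        return None, bits_sent
    a = active[0]
    # Verification round: every voter reports, for each c != a, whether she prefers a to c.
    for c in rankings[0]:
        if c != a:
            bits_sent += n
            if sum(prefers(r, a, c) for r in rankings) <= n // 2:
                return None, bits_sent            # a loses or ties some pairwise election
    return a, bits_sent                           # total communication is O(nm)

profile = [["a", "b", "c"], ["a", "c", "b"], ["b", "c", "a"]]
print(condorcet_protocol(profile))                # ('a', ...): a beats b and c two votes to one
```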
Theorem 15. The nondeterministic communication complexity of the Condorcet rule is Ω(nm) (even to decide whether a given candidate a wins).

Proof. We will exhibit a fooling set of size 2^(n'm') where m' = m − 1 and n' = (n − 1)/2. For every vector (S1, S2, ..., Sn') consisting of n' subsets Si ⊆ C − {a}, let the following vector of votes be an element of the fooling set:
• For 1 ≤ i ≤ n', let voter 2i − 1 rank the candidates Si ≻ a ≻ C − (Si ∪ {a}).
• For 1 ≤ i ≤ n', let voter 2i rank the candidates C − (Si ∪ {a}) ≻ a ≻ Si.
• Let voter 2n' + 1 = n rank the candidates a ≻ C − {a}.
We observe that this fooling set has size (2^(m'))^(n') = 2^(n'm'), and that candidate a wins in each vector of votes in the fooling set (a wins each of its pairwise elections by a single vote). All that remains to show is that for any two distinct vectors of votes in the fooling set, we can let each of the voters vote according to one of these two vectors in such a way that a loses. Let the first vote vector correspond to the vector (S¹1, S¹2, ..., S¹n'), and let the second vote vector correspond to the vector (S²1, S²2, ..., S²n'). For some i, we must have S¹i ≠ S²i, so that either S¹i ⊈ S²i or S²i ⊈ S¹i. Without loss of generality, suppose S¹i ⊈ S²i, and let b be some candidate in S¹i − S²i. Now, construct a new vote vector by taking vote 2i − 1 from the first vote vector, and the remaining votes from the second vote vector. In this newly constructed vote vector, b wins its pairwise election against a by one vote (vote 2i − 1 ranks b above a in the newly constructed vote vector because b ∈ S¹i, whereas in the second vote vector vote 2i − 1 ranked a above b because b ∉ S²i). So, a is not the Condorcet winner in the newly constructed vote vector, and hence we have a correct fooling set.

Theorem 16. The deterministic communication complexity of the cup rule is O(nm).

Proof. Consider the following simple communication protocol. First, let all the voters communicate, for every one of the matchups in the first round, which of its two candidates they prefer. After this, the matchups for the second round are known, so let all the voters communicate which candidate they prefer in each matchup in the second round, and so on. Because communicating which of two candidates is preferred requires only one bit per voter, and because there are only m − 1 matchups in total, this communication protocol requires O(nm) communication.

Theorem 17. The nondeterministic communication complexity of the cup rule is Ω(nm) (even to decide whether a given candidate a wins).

Proof. We will exhibit a fooling set of size 2^(n'm') where m' = (m − 1)/2 and n' = (n − 7)/2. Given that m' + 1 is a power of 2, so that one candidate gets a bye (that is, does not face an opponent) in the first round, let a be the candidate with the bye. Of the m' first-round matchups, let lj denote the one (left) candidate in the jth matchup, and let rj be the other (right) candidate. Let L = {lj : 1 ≤ j ≤ m'} and R = {rj : 1 ≤ j ≤ m'}, so that C = L ∪ R ∪ {a}.

[Figure 1: The schedule for the cup rule used in the proof of Theorem 17. The first-round matchups are (l1, r1), (l2, r2), ..., (lm', rm'), and a receives the bye.]

For every vector (S1, S2, ..., Sn') consisting of n' subsets Si ⊆ R, let the following vector of votes be an element of the fooling set:
• For 1 ≤ i ≤ n', let voter 2i − 1 rank the candidates Si ≻ L ≻ a ≻ R − Si.
• For 1 ≤ i ≤ n', let voter 2i rank the candidates R − Si ≻ L ≻ a ≻ Si.
• Let voters 2n' + 1 = n − 6, 2n' + 2 = n − 5, and 2n' + 3 = n − 4 rank the candidates L ≻ a ≻ R.
• Let voters 2n' + 4 = n − 3 and 2n' + 5 = n − 2 rank the candidates a ≻ r1 ≻ l1 ≻ r2 ≻ l2 ≻ ... ≻ rm' ≻ lm'.
• Let voters 2n' + 6 = n − 1 and 2n' + 7 = n rank the candidates rm' ≻ lm' ≻ rm'−1 ≻ lm'−1 ≻ ... ≻ r1 ≻ l1 ≻ a.
We observe that this fooling set has size (2^(m'))^(n') = 2^(n'm'). Also, candidate a wins in each vector of votes in the fooling set, for the following reasons. Each candidate rj defeats its opponent lj in the first round. (For any 1 ≤ i ≤ n', the net effect of votes 2i − 1 and 2i on the pairwise election between rj and lj is zero; votes n − 6, n − 5, n − 4 prefer lj to rj, but votes n − 3, n − 2, n − 1, n all prefer rj to lj.)
Moreover, a defeats every rj in their pairwise election. (For any 1 ≤ i ≤ n', the net effect of votes 2i − 1 and 2i on the pairwise election between a and rj is zero; votes n − 1, n prefer rj to a, but votes n − 6, n − 5, n − 4, n − 3, n − 2 all prefer a to rj.) It follows that a will defeat all the candidates that it faces. All that remains to show is that for any two distinct vectors of votes in the fooling set, we can let each of the voters vote according to one of these two vectors in such a way that a loses. Let the first vote vector correspond to the vector (S¹1, S¹2, ..., S¹n'), and let the second vote vector correspond to the vector (S²1, S²2, ..., S²n'). For some i, we must have S¹i ≠ S²i, so that either S¹i ⊈ S²i or S²i ⊈ S¹i. Without loss of generality, suppose S¹i ⊈ S²i, and let rj be some candidate in S¹i − S²i. Now, construct a new vote vector by taking vote 2i from the first vote vector, and the remaining votes from the second vote vector. We note that, whereas in the second vote vector vote 2i preferred rj to lj (because rj ∈ R − S²i), in the newly constructed vote vector this is no longer the case (because rj ∈ S¹i). It follows that, whereas in the second vote vector, rj defeated lj in the first round by one vote, in the newly constructed vote vector, lj defeats rj in the first round. Thus, at least one lj advances to the second round after defeating its opponent rj. Now, we observe that in the newly constructed vote vector, any lk wins its pairwise election against any rq with q ≠ k. This is because among the first 2n' votes, at least n' − 1 prefer lk to rq; votes n − 6, n − 5, n − 4 prefer lk to rq; and, because q ≠ k, either votes n − 3, n − 2 prefer lk to rq (if k < q), or votes n − 1, n prefer lk to rq (if k > q). Thus, at least n' + 4 = (n + 1)/2 > n/2 votes prefer lk to rq. Moreover, any lk wins its pairwise election against a. This is because only votes n − 3 and n − 2 prefer a to lk. It follows that, after the first round, any surviving candidate lk can only lose a matchup against another surviving lk', so that one of the lk must win the election. So, a is not the winner in the newly constructed vote vector, and hence we have a correct fooling set.

Theorem 18. The deterministic communication complexity of the Bucklin rule is O(nm).

Proof. Let l be the minimum integer for which there is a candidate who is ranked among the top l candidates by more than half the votes. We will do a binary search for l. At each point, we will have a lower bound lL which is smaller than l (initialized to 0), and an upper bound lH which is at least l (initialized to m). While lH − lL > 1, we continue by finding out whether the midpoint (lL + lH)/2 is smaller than l, after which we can update the bounds. To find out whether a number k is smaller than l, we determine every voter's k most preferred candidates. Every voter can communicate which candidates are among her k most preferred candidates using m bits (for each candidate, indicate whether the candidate is among the top k or not), but because the binary search requires log m iterations, this gives us an upper bound of O((log m)nm), which is not strong enough.
However, if lL < k < lH, and we already know a voter's lL most preferred candidates, as well as her lH most preferred candidates, then the voter no longer needs to communicate whether the lL most preferred candidates are among her k most preferred candidates (because they must be), and she no longer needs to communicate whether the m − lH least preferred candidates are among her k most preferred candidates (because they cannot be). Thus the voter needs to communicate only m − lL − (m − lH) = lH − lL bits in any given stage. Because lH − lL is (roughly) halved in each stage, each voter in total communicates only (roughly) m + m/2 + m/4 + ... ≤ 2m bits.
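The binary-search protocol of Theorem 18 can be sketched as follows (an illustration of ours; the names are not from the paper, and ties among winners are not handled, as in the paper). The bit counter charges each voter lH − lL bits per stage, exactly as in the proof, so the total comes to roughly 2m bits per voter.

```python
def bucklin_protocol(rankings):
    """Binary-search protocol in the spirit of Theorem 18 (illustrative sketch).
    We search for the smallest depth l at which some candidate appears in more
    than half of the voters' top-l sets, re-using previously revealed
    information so that each voter sends only about 2m bits in total."""
    n, m = len(rankings), len(rankings[0])
    candidates = rankings[0]
    top = lambda r, k: set(r[:k])

    lo, hi = 0, m            # the invariant lo < l <= hi is maintained throughout
    bits_sent = 0
    while hi - lo > 1:
        k = (lo + hi) // 2
        # Each voter only needs to describe her top-k set relative to what is
        # already known, i.e. hi - lo bits per voter (see the proof).
        bits_sent += n * (hi - lo)
        some_majority = any(sum(c in top(r, k) for r in rankings) > n / 2
                            for c in candidates)
        if some_majority:
            hi = k           # k >= l
        else:
            lo = k           # k < l
    l = hi
    # Winner: a candidate ranked among the top l by more than half the voters
    # (if several reach a majority at depth l, one is picked arbitrarily).
    winner = next(c for c in candidates
                  if sum(c in top(r, l) for r in rankings) > n / 2)
    return winner, l, bits_sent

profile = [["a", "b", "c", "d"], ["b", "a", "d", "c"], ["c", "a", "b", "d"]]
print(bucklin_protocol(profile))   # a is in everyone's top 2, so l = 2
```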
Theorem 19. The nondeterministic communication complexity of the Bucklin rule is Ω(nm) (even to decide whether a given candidate a wins).

Proof. We will exhibit a fooling set of size 2^(n'm') where m' = (m − 1)/2 and n' = n/2. We write the set of candidates as the following disjoint union: C = {a} ∪ L ∪ R where L = {l1, l2, ..., lm'} and R = {r1, r2, ..., rm'}. For any subset S ⊆ {1, 2, ..., m'}, let L(S) = {li : i ∈ S} and let R(S) = {ri : i ∈ S}. For every vector (S1, S2, ..., Sn') consisting of n' sets Si ⊆ {1, 2, ..., m'}, let the following vector of votes be an element of the fooling set:
• For 1 ≤ i ≤ n', let voter 2i − 1 rank the candidates L(Si) ≻ R − R(Si) ≻ a ≻ L − L(Si) ≻ R(Si).
• For 1 ≤ i ≤ n', let voter 2i rank the candidates L − L(Si) ≻ R(Si) ≻ a ≻ L(Si) ≻ R − R(Si).
We observe that this fooling set has size (2^(m'))^(n') = 2^(n'm'), and that candidate a wins in each vector of votes in the fooling set, for the following reason. Each candidate in C − {a} is ranked among the top m' candidates by exactly half the voters (which is not enough to win). Thus, we need to look at the voters' top m' + 1 candidates, and a is ranked (m' + 1)th by all voters. All that remains to show is that for any two distinct vectors of votes in the fooling set, we can let each of the voters vote according to one of these two vectors in such a way that a loses. Let the first vote vector correspond to the vector (S¹1, S¹2, ..., S¹n'), and let the second vote vector correspond to the vector (S²1, S²2, ..., S²n'). For some i, we must have S¹i ≠ S²i, so that either S¹i ⊈ S²i or S²i ⊈ S¹i. Without loss of generality, suppose S¹i ⊈ S²i, and let j be some integer in S¹i − S²i. Now, construct a new vote vector by taking vote 2i − 1 from the first vote vector, and the remaining votes from the second vote vector. In this newly constructed vote vector, a is still ranked (m' + 1)th by all votes. However, lj is ranked among the top m' candidates by n' + 1 = n/2 + 1 votes. This is because whereas vote 2i − 1 does not rank lj among the top m' candidates in the second vote vector (because j ∉ S²i, we have lj ∉ L(S²i)), vote 2i − 1 does rank lj among the top m' candidates in the first vote vector (because j ∈ S¹i, we have lj ∈ L(S¹i)). So, a is not the winner in the newly constructed vote vector, and hence we have a correct fooling set.

Theorem 20. The nondeterministic communication complexity of the ranked pairs rule is Ω(nm log m) (even to decide whether a given candidate a wins).

Proof. We omit this proof because of space constraint.

5. DISCUSSION

One key obstacle to using voting for preference aggregation is the communication burden that an election places on the voters. By lowering this burden, it may become feasible to conduct more elections over more issues. In the limit, this could lead to a shift from representational government to a system in which most issues are decided by referenda, a veritable e-democracy. In this paper, we analyzed the communication complexity of the common voting rules. Knowing which voting rules require little communication is especially important when the issue to be voted on is of low enough importance that the following is true: the parties involved are willing to accept a rule that tends to produce outcomes that are slightly less representative of the voters' preferences, if this rule reduces the communication burden on the voters significantly. The following table summarizes the results we obtained.

Rule                  Lower bound      Upper bound
plurality             Ω(n log m)       O(n log m)
plurality w/ runoff   Ω(n log m)       O(n log m)
STV                   Ω(n log m)       O(n(log m)^2)
Condorcet             Ω(nm)            O(nm)
approval              Ω(nm)            O(nm)
Bucklin               Ω(nm)            O(nm)
cup                   Ω(nm)            O(nm)
maximin               Ω(nm)            O(nm)
Borda                 Ω(nm log m)      O(nm log m)
Copeland              Ω(nm log m)      O(nm log m)
ranked pairs          Ω(nm log m)      O(nm log m)

Communication complexity of voting rules, sorted from low to high. All of the upper bounds are deterministic (with the exception of maximin, for which the best deterministic upper bound we proved is O(nm log m)). All of the lower bounds hold even for nondeterministic communication and even just for determining whether a given candidate a is the winner.

One area of future research is to study what happens when we restrict our attention to communication protocols that do not reveal any strategically useful information. This restriction may invalidate some of the upper bounds that we derived using multistage communication protocols. Also, all of our bounds are worst-case bounds. It may be possible to outperform these bounds when the distribution of votes has additional structure.

When deciding which voting rule to use for an election, there are many considerations to take into account. The voting rules that we studied in this paper are the most common ones that have survived the test of time. One way to select among these rules is to consider recent results on complexity. The table above shows that from a communication complexity perspective, plurality, plurality with runoff, and STV are preferable. However, plurality has the undesirable property that it is computationally easy to manipulate by voting strategically [3, 7]. Plurality with runoff is NP-hard to manipulate by a coalition of weighted voters, or by an individual that faces correlated uncertainty about the others' votes [7, 6]. STV is NP-hard to manipulate in those settings as well [7], but also by an individual with perfect knowledge of the others' votes (when the number of candidates is unbounded) [2]. Therefore, STV is more robust, although it may require slightly more worst-case communication as per the table above. Yet other selection criteria are the computational complexity of determining whether enough information has been elicited to declare a winner, and that of determining the optimal sequence of queries [8].

6. REFERENCES

[1] Lawrence Ausubel and Paul Milgrom. Ascending auctions with package bidding. Frontiers of Theoretical Economics, 1, 2002. No. 1, Article 1.
[2] John Bartholdi, III and James Orlin. Single transferable vote resists strategic voting. Social Choice and Welfare, 8(4):341-354, 1991.
[3] John Bartholdi, III, Craig Tovey, and Michael Trick. The computational difficulty of manipulating an election. Social Choice and Welfare, 6(3):227-241, 1989.
[4] Avrim Blum, Jeffrey Jackson, Tuomas Sandholm, and Martin Zinkevich. Preference elicitation and query learning. Journal of Machine Learning Research, 5:649-667, 2004.
[5] Wolfram Conen and Tuomas Sandholm. Preference elicitation in combinatorial auctions: Extended abstract. In Proceedings of the ACM Conference on Electronic Commerce (ACM-EC), pages 256-259, 2001.
[6] Vincent Conitzer, Jerome Lang, and Tuomas Sandholm. How many candidates are needed to make elections hard to manipulate? In Theoretical Aspects of Rationality and Knowledge (TARK), pages 201-214, 2003.
[7] Vincent Conitzer and Tuomas Sandholm. Complexity of manipulating elections with few candidates. In Proceedings of the National Conference on Artificial Intelligence (AAAI), pages 314-319, 2002.
[8] Vincent Conitzer and Tuomas Sandholm. Vote elicitation: Complexity and strategy-proofness. In Proceedings of the National Conference on Artificial Intelligence (AAAI), pages 392-397, 2002.
[9] Sven de Vries, James Schummer, and Rakesh V. Vohra. On ascending auctions for heterogeneous objects, 2003. Draft.
[10] Allan Gibbard. Manipulation of voting schemes. Econometrica, 41:587-602, 1973.
[11] Benoit Hudson and Tuomas Sandholm. Effectiveness of query types and policies for preference elicitation in combinatorial auctions. In International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), pages 386-393, 2004.
[12] Eyal Kushilevitz and Noam Nisan. Communication Complexity. Cambridge University Press, 1997.
[13] Sébastien Lahaie and David Parkes. Applying learning algorithms to preference elicitation. In Proceedings of the ACM Conference on Electronic Commerce, 2004.
[14] Noam Nisan and Ilya Segal. The communication requirements of efficient allocations and supporting prices. Journal of Economic Theory, 2005. Forthcoming.
[15] David Parkes. iBundle: An efficient ascending price bundle auction. In Proceedings of the ACM Conference on Electronic Commerce (ACM-EC), pages 148-157, 1999.
[16] Tuomas Sandholm. An implementation of the contract net protocol based on marginal cost calculations. In Proceedings of the National Conference on Artificial Intelligence (AAAI), pages 256-262, 1993.
[17] Tuomas Sandholm and Craig Boutilier. Preference elicitation in combinatorial auctions. In Peter Cramton, Yoav Shoham, and Richard Steinberg, editors, Combinatorial Auctions, chapter 10. MIT Press, 2005.
[18] Paolo Santi, Vincent Conitzer, and Tuomas Sandholm. Towards a characterization of polynomial preference elicitation with value queries in combinatorial auctions. In Conference on Learning Theory (COLT), pages 1-16, 2004.
[19] Mark Satterthwaite. Strategy-proofness and Arrow's conditions: existence and correspondence theorems for voting procedures and social welfare functions. Journal of Economic Theory, 10:187-217, 1975.
[20] Ilya Segal. The communication requirements of social choice rules and supporting budget sets, 2004. Draft. Presented at the DIMACS Workshop on Computational Issues in Auction Design, Rutgers University, New Jersey, USA.
[21] Peter Wurman and Michael Wellman. AkBA: A progressive, anonymous-price combinatorial auction. In Proceedings of the ACM Conference on Electronic Commerce (ACM-EC), pages 21-29, 2000.
[22] A. C. Yao. Some complexity questions related to distributed computing. In Proceedings of the 11th ACM Symposium on Theory of Computing (STOC), pages 209-213, 1979.
[23] Martin Zinkevich, Avrim Blum, and Tuomas Sandholm. On polynomial-time preference elicitation with value queries. In Proceedings of the ACM Conference on Electronic Commerce (ACM-EC), pages 176-185, 2003.
This will strengthen our lower bound results (because it implies that even finding out whether one given candidate wins is "hard").5 Thus, a fooling set in our context is a set of vectors of votes so that a wins (does not win) with each of them; but for any two different vote vectors in the set, there is a way of taking some voters' votes from the first vector and the others' votes from the second vector, so that a does not win (wins). To simplify the proofs of our lower bounds, we make assumptions such as "the number of voters n is odd" in many of these proofs. Therefore, technically, we do not prove the lower bound for (number of candidates, number of voters) pairs (m, n) that do not satisfy these assumptions (for example, if we make the above assumption, then we technically do not prove the lower bound for any pair (m, n) in which n is even). Nevertheless, we always prove the lower bound for a representative set of (m, n) pairs. For example, for every one of our lower bounds it is the case that for infinitely many values of m, there are infinitely many values of n such that the lower bound is proved for the pair (m, n). 4. RESULTS We are now ready to present our results. For each voting rule, we first give a deterministic communication protocol for determining the winner to establish an upper bound. Then, we give a lower bound on the nondeterministic communication complexity (even on the complexity of deciding whether a given candidate wins, which is an easier question). The lower bounds match the upper bounds in all but two cases: the STV rule (upper bound O (n (log m) 2); lower bound Ω (n log m)) and the maximin rule (upper bound O (nm log m), although we do give a nondeterministic protocol that is O (nm); lower bound Ω (nm)). When we discuss a voting rule in which the voters rank the candidates, we will represent a ranking in which candidate c1 is ranked first, c2 is ranked second, etc. as c1> - c2> -...cm. 5One possible concern is that in the case where ties are possible, it may require much communication to verify whether a specific candidate a is among the winners, but little communication to produce one of the winners. However, all the fooling sets we use in the proofs have the property that if a wins, then a is the unique winner. Therefore, in these fooling sets, if one knows any one of the winners, then one knows whether a is a winner. Thus, computing one of the winners requires at least as much communication as verifying whether a is among the winners. In general, when a communication problem allows multiple correct answers for a given vector of inputs, this is known as computing a relation rather than a function [12]. However, as per the above, we can restrict our attention to a subset of the domain where the voting rule truly is a (single-valued) function, and hence lower bounding techniques for functions rather than relations will suffice. Sometimes for the purposes of a proof the internal ranking of a subset of the candidates does not matter, and in this case we will not specify it. For example, if S = {c2, c31, then c1> - S> - c4 indicates that either the ranking c1> c2> - c3> - c4 or the ranking c1> - c3> - c2> - c4 can be used for the proof. We first give a universal upper bound. THEOREM 1. The deterministic communication complexity of any rank-based voting rule is O (nm log m). PROOF. 
This bound is achieved by simply having everyone communicate their entire ordering of the candidates (indicating the rank of an individual candidate requires only O (log m) bits, so each of the n voters can simply indicate the rank of each of the m candidates). The next lemma will be useful in a few of our proofs. LEMMA 1. If m divides n, then log (n!) - m log ((n/m)!) > n (log m - 1) / 2. PROOF. If n/m = 1 (that is, n = m), then this expression simplifies to log (n!) . We have log (n!) = log (i) dx, which, using integration by parts, is equal to n log n - (n - 1)> n (log n - 1) = n (log m - 1)> n (log m 1) / 2. So, we can assume that n/m> 2. We observe that THEOREM 2. The deterministic communication complexity of the plurality rule is O (n log m). PROOF. Indicating one of the candidates requires only O (log m) bits, so each voter can simply indicate her most preferred candidate. THEOREM 3. The nondeterministic communication complexity of the plurality rule is Ω (n log m) (even to decide whether a given candidate a wins). PROOF. We will exhibit a fooling set of sizen'! ((n0m)!) m where n' = (n-1) / 2. Taking the logarithm of this gives log (n'!) m log ((n' / m)!) , so the result follows from Lemma 1. The fooling set will consist of all vectors of votes satisfying the following constraints: • Every candidate receives equally many votes from the first 2n' = n − 1 voters. • The last voter (voter n) votes for a. Candidate a wins with each one of these vote vectors because of the extra vote for a from the last voter. Given that m divides n', let us see how many vote vectors there are in the fooling set. We need to distribute n' voter pairs evenly over m candidates, for a total of n' / m voter pairs per candidate; and there are preciselyn,! ((n, m)!) m ways of doing this .6 All that remains to show is that for any two distinct vectors of votes in the fooling set, we can let each of the voters vote according to one of these two vectors in such a way that a loses. Let i be a number such that the two vote vectors disagree on the candidate for which voters 2i − 1 and 2i vote. Without loss of generality, suppose that in the first vote vector, these voters do not vote for a (but for some other candidate, b, instead). Now, construct a new vote vector by taking votes 2i − 1 and 2i from the first vote vector, and the remaining votes from the second vote vector. Then, b receives 2n' / m + 2 votes in this newly constructed vote vector, whereas a receives at most 2n' / m +1 votes. So, a is not the winner in the newly constructed vote vector, and hence we have a correct fooling set. THEOREM 4. The deterministic communication complexity of the plurality with runoff rule is O (n log m). PROOF. First, let every voter indicate her most preferred candidate using log m bits. After this, the two candidates in the runoff are known, and each voter can indicate which one she prefers using a single additional bit. THEOREM 5. The nondeterministic communication complexity of the plurality with runoff rule is 0 (n log m) (even to decide whether a given candidate a wins). PROOF. We will exhibit a fooling set of size n,! ((n, m,)!) m, where m' = m/2 and n' = (n − 2) / 4. Taking the logarithm of this gives log (n'!) − m' log ((n' / m')!) , so the result follows from Lemma 1. Divide the candidates into m' pairs: (c1, d1), (c2, d2),..., (cm,, dm,) where c1 = a and d1 = b. 
THEOREM 5. The nondeterministic communication complexity of the plurality with runoff rule is Ω(n log m) (even to decide whether a given candidate a wins).

PROOF. We will exhibit a fooling set of size n'!/((n'/m')!)^m', where m' = m/2 and n' = (n − 2)/4. Taking the logarithm of this gives log(n'!) − m' log((n'/m')!), so the result follows from Lemma 1. Divide the candidates into m' pairs: (c1, d1), (c2, d2), ..., (cm', dm'), where c1 = a and d1 = b. The fooling set will consist of all vectors of votes satisfying the following constraints:
• For any 1 ≤ i ≤ n', voters 4i − 3 and 4i − 2 rank the candidates ck(i) ≻ a ≻ C − {a, ck(i)}, for some candidate ck(i). (If ck(i) = a then the vote is simply a ≻ C − {a}.)
• For any 1 ≤ i ≤ n', voters 4i − 1 and 4i rank the candidates dk(i) ≻ a ≻ C − {a, dk(i)} (that is, their most preferred candidate is the candidate that is paired with the candidate that the previous two voters vote for).
• Every candidate is ranked at the top of equally many of the first 4n' = n − 2 votes.
• Voter 4n' + 1 = n − 1 ranks the candidates a ≻ C − {a}.
• Voter 4n' + 2 = n ranks the candidates b ≻ C − {b}.
Candidate a wins with each one of these vote vectors: because of the last two votes, candidates a and b are one vote ahead of all the other candidates and continue to the runoff, and at this point all the votes that had another candidate ranked at the top transfer to a, so that a wins the runoff. Given that m' divides n', let us see how many vote vectors there are in the fooling set. We need to distribute n' groups of four voters evenly over the m' pairs of candidates, and (as in the proof of Theorem 3) there are n'!/((n'/m')!)^m' ways of doing this. All that remains to show is that for any two distinct vectors of votes in the fooling set, we can let each of the voters vote according to one of these two vectors in such a way that a loses. Let i be a number such that ck(i) is not the same in both of these two vote vectors, that is, c^1_k(i) (ck(i) in the first vote vector) is not equal to c^2_k(i) (ck(i) in the second vote vector). Without loss of generality, suppose c^1_k(i) ≠ a. Now, construct a new vote vector by taking votes 4i − 3, 4i − 2, 4i − 1, 4i from the first vote vector, and the remaining votes from the second vote vector. In this newly constructed vote vector, c^1_k(i) and d^1_k(i) each receive 4n'/m + 2 votes in the first round, whereas a receives at most 4n'/m + 1 votes. So, a does not continue to the runoff in the newly constructed vote vector, and hence we have a correct fooling set.

THEOREM 6. The nondeterministic communication complexity of the Borda rule is Ω(nm log m) (even to decide whether a given candidate a wins).

PROOF. We will exhibit a fooling set of size (m'!)^n', where m' = m − 2 and n' = (n − 2)/4. This will prove the theorem because log(m'!) is Ω(m log m), so that log((m'!)^n') = n' log(m'!) is Ω(nm log m). For every vector (π1, π2, ..., πn') consisting of n' orderings of all candidates other than a and another fixed candidate b (technically, the orderings take the form of a one-to-one function πi : {1, 2, ..., m'} → C − {a, b}, with πi(j) = c indicating that candidate c is the jth in the order represented by πi), let the following vector of votes be an element of the fooling set:
• For 1 ≤ i ≤ n', let voters 4i − 3 and 4i − 2 rank the candidates a ≻ b ≻ πi(1) ≻ πi(2) ≻ ... ≻ πi(m').
• For 1 ≤ i ≤ n', let voters 4i − 1 and 4i rank the candidates πi(m') ≻ πi(m' − 1) ≻ ... ≻ πi(1) ≻ b ≻ a.
• Let voter 4n' + 1 = n − 1 rank the candidates a ≻ b ≻ π0(1) ≻ π0(2) ≻ ... ≻ π0(m') (where π0 is an arbitrary order of the candidates other than a and b which is the same for every element of the fooling set).
• Let voter 4n' + 2 = n rank the candidates π0(m') ≻ π0(m' − 1) ≻ ... ≻ π0(1) ≻ a ≻ b.
We observe that this fooling set has size (m'!)^n', and that candidate a wins in each vector of votes in the fooling set (to see why, we observe that for any 1 ≤ i ≤ n', votes 4i − 3 and 4i − 2 rank the candidates in the exact opposite way from votes 4i − 1 and 4i, which under the Borda rule means they cancel out; and the last two votes give one more point to a than to any other candidate, besides b, who gets two fewer points than a). All that remains to show is that for any two distinct vectors of votes in the fooling set, we can let each of the voters vote according to one of these two vectors in such a way that a loses. Let the first vote vector correspond to the vector (π^1_1, π^1_2, ..., π^1_n'), and let the second vote vector correspond to the vector (π^2_1, π^2_2, ..., π^2_n'). For some i, we must have π^1_i ≠ π^2_i, so that for some candidate c ∉ {a, b}, (π^1_i)^{-1}(c) < (π^2_i)^{-1}(c) (that is, c is ranked higher in π^1_i than in π^2_i). Now, construct a new vote vector by taking votes 4i − 3 and 4i − 2 from the first vote vector, and the remaining votes from the second vote vector. a's Borda score remains unchanged. However, because c is ranked higher in π^1_i than in π^2_i, c receives at least 2 more points from votes 4i − 3 and 4i − 2 in the newly constructed vote vector than it did in the second vote vector. It follows that c has a higher Borda score than a in the newly constructed vote vector. So, a is not the winner in the newly constructed vote vector, and hence we have a correct fooling set.

THEOREM 7. The nondeterministic communication complexity of the Copeland rule is Ω(nm log m) (even to decide whether a given candidate a wins).

PROOF. We will exhibit a fooling set of size (m'!)^n', where m' = (m − 2)/2 and n' = (n − 2)/2. This will prove the theorem because log(m'!) is Ω(m log m), so that log((m'!)^n') = n' log(m'!) is Ω(nm log m). We write the set of candidates as the following disjoint union: C = {a, b} ∪ L ∪ R, where L = {l1, l2, ..., lm'} and R = {r1, r2, ..., rm'}. For every vector (π1, π2, ..., πn') consisting of n' permutations of the integers 1 through m' (πi : {1, 2, ..., m'} → {1, 2, ..., m'}), let the following vector of votes be an element of the fooling set:
• For 1 ≤ i ≤ n', let voter 2i − 1 rank the candidates a ≻ b ≻ lπi(1) ≻ rπi(1) ≻ lπi(2) ≻ rπi(2) ≻ ... ≻ lπi(m') ≻ rπi(m').
• For 1 ≤ i ≤ n', let voter 2i rank the candidates rπi(m') ≻ lπi(m') ≻ rπi(m'−1) ≻ lπi(m'−1) ≻ ... ≻ rπi(1) ≻ lπi(1) ≻ b ≻ a.
• Let voter n − 1 = 2n' + 1 rank the candidates a ≻ b ≻ l1 ≻ r1 ≻ l2 ≻ r2 ≻ ... ≻ lm' ≻ rm'.
• Let voter n = 2n' + 2 rank the candidates rm' ≻ lm' ≻ rm'−1 ≻ lm'−1 ≻ ... ≻ r1 ≻ l1 ≻ a ≻ b.
We observe that this fooling set has size (m'!)^n', and that candidate a wins in each vector of votes in the fooling set (every pair of candidates is tied in their pairwise election, with the exception that a defeats b, so that a wins the election by half a point). All that remains to show is that for any two distinct vectors of votes in the fooling set, we can let each of the voters vote according to one of these two vectors in such a way that a loses.
Let the first vote vector correspond to the vector (π^1_1, π^1_2, ..., π^1_n'), and let the second vote vector correspond to the vector (π^2_1, π^2_2, ..., π^2_n'). For some i, we must have π^1_i ≠ π^2_i, so that for some j ∈ {1, 2, ..., m'}, we have (π^1_i)^{-1}(j) < (π^2_i)^{-1}(j). Now, construct a new vote vector by taking vote 2i − 1 from the first vote vector, and the remaining votes from the second vote vector. a's Copeland score remains unchanged. Let us consider the score of lj. We first observe that the rank of lj in vote 2i − 1 in the newly constructed vote vector is at least 2 higher than it was in the second vote vector, because (π^1_i)^{-1}(j) < (π^2_i)^{-1}(j). Let D1(lj) be the set of candidates in L ∪ R that voter 2i − 1 ranked lower than lj in the first vote vector (D1(lj) = {c ∈ L ∪ R : lj ≻^1_{2i−1} c}), and let D2(lj) be the set of candidates in L ∪ R that voter 2i − 1 ranked lower than lj in the second vote vector (D2(lj) = {c ∈ L ∪ R : lj ≻^2_{2i−1} c}). Then, it follows that in the newly constructed vote vector, lj defeats all the candidates in D1(lj) − D2(lj) in their pairwise elections (because lj receives an "extra" vote in each one of these pairwise elections relative to the second vote vector), loses to all the candidates in D2(lj) − D1(lj) (because lj loses a vote in each one of these pairwise elections relative to the second vote vector), and ties with everyone else. But |D1(lj)| − |D2(lj)| ≥ 2, and hence |D1(lj) − D2(lj)| − |D2(lj) − D1(lj)| ≥ 2. Hence, in the newly constructed vote vector, lj has at least two more pairwise wins than pairwise losses, and therefore has at least 1 more point than if lj had tied all its pairwise elections. Thus, lj has a higher Copeland score than a in the newly constructed vote vector. So, a is not the winner in the newly constructed vote vector, and hence we have a correct fooling set.

THEOREM 8. The nondeterministic communication complexity of the maximin rule is O(nm).

PROOF. The nondeterministic protocol will guess which candidate w is the winner, and, for each other candidate c, which candidate o(c) is the candidate against whom c receives its lowest score in a pairwise election. Then, let every voter communicate the following:
• for each candidate c ≠ w, whether she prefers c to w;
• for each candidate c ≠ w, whether she prefers c to o(c).
We observe that this requires the communication of 2n(m − 1) bits. If the guesses were correct, then, letting N(d, e) be the number of voters preferring candidate d to candidate e, we should have N(c, o(c)) < N(w, c') for any c ≠ w, c' ≠ w, which will prove that w wins the election.
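The certificate structure in the proof of Theorem 8 can be made concrete with the following sketch, which checks a guessed winner w and opponent map o against the condition N(c, o(c)) < N(w, c'). The input format (full rankings as tuples of candidate indices) and the helper names are assumptions made purely for illustration.

def verify_maximin_certificate(rankings, m, w, o):
    # o maps each candidate c != w to the guessed opponent o(c) of its worst pairwise election.
    n = len(rankings)
    prefers = lambda r, x, y: r.index(x) < r.index(y)
    N = lambda x, y: sum(1 for r in rankings if prefers(r, x, y))
    bits = 2 * n * (m - 1)              # bits the voters reveal, as counted in the proof
    others = [c for c in range(m) if c != w]
    # Every rival's witnessed score must be beaten by every one of w's pairwise scores.
    ok = all(N(c, o[c]) < N(w, c2) for c in others for c2 in others)
    return ok, bits

rankings = [(0, 1, 2), (0, 2, 1), (1, 2, 0), (2, 1, 0), (0, 1, 2)]
print(verify_maximin_certificate(rankings, 3, w=0, o={1: 0, 2: 0}))   # (True, 20)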
THEOREM 9. The nondeterministic communication complexity of the maximin rule is Ω(nm) (even to decide whether a given candidate a wins).

PROOF. We will exhibit a fooling set of size 2^{n'm'}, where m' = m − 2 and n' = (n − 1)/4. Let b be a candidate other than a. For every vector (S1, S2, ..., Sn') consisting of n' subsets Si ⊆ C − {a, b}, let the following vector of votes be an element of the fooling set:
• For 1 ≤ i ≤ n', let voters 4i − 3 and 4i − 2 rank the candidates Si ≻ a ≻ C − (Si ∪ {a, b}) ≻ b.
• For 1 ≤ i ≤ n', let voters 4i − 1 and 4i rank the candidates b ≻ C − (Si ∪ {a, b}) ≻ a ≻ Si.
• Let voter 4n' + 1 = n rank the candidates a ≻ b ≻ C − {a, b}.
We observe that this fooling set has size (2^{m'})^{n'} = 2^{n'm'}, and that candidate a wins in each vector of votes in the fooling set (in every one of a's pairwise elections, a is ranked higher than its opponent by 2n' + 1 = (n + 1)/2 > n/2 votes). All that remains to show is that for any two distinct vectors of votes in the fooling set, we can let each of the voters vote according to one of these two vectors in such a way that a loses. Let the first vote vector correspond to the vector (S^1_1, S^1_2, ..., S^1_n'), and let the second vote vector correspond to the vector (S^2_1, S^2_2, ..., S^2_n'). For some i, we must have S^1_i ≠ S^2_i, so that either S^1_i ⊈ S^2_i or S^2_i ⊈ S^1_i. Without loss of generality, suppose S^1_i ⊈ S^2_i, and let c be some candidate in S^1_i − S^2_i. Now, construct a new vote vector by taking votes 4i − 3 and 4i − 2 from the first vote vector, and the remaining votes from the second vote vector. In this newly constructed vote vector, a is ranked higher than c by only 2n' − 1 voters, for the following reason. Whereas voters 4i − 3 and 4i − 2 do not rank c higher than a in the second vote vector (because c ∉ S^2_i), voters 4i − 3 and 4i − 2 do rank c higher than a in the first vote vector (because c ∈ S^1_i). Moreover, in every one of b's pairwise elections, b is ranked higher than its opponent by at least 2n' voters. So, a has a lower maximin score than b, therefore a is not the winner in the newly constructed vote vector, and hence we have a correct fooling set.

THEOREM 10. The deterministic communication complexity of the STV rule is O(n (log m)^2).

PROOF. Consider the following communication protocol. Let each voter first announce her most preferred candidate (O(n log m) communication). In the remaining rounds, we will keep track of each voter's most preferred candidate among the remaining candidates, which will be enough to implement the rule. When candidate c is eliminated, let each of the voters whose most preferred candidate among the remaining candidates was c announce their most preferred candidate among the candidates remaining after c's elimination. If candidate c was the ith candidate to be eliminated (that is, there were m − i + 1 candidates remaining before c's elimination), it follows that at most n/(m − i + 1) voters had candidate c as their most preferred candidate among the remaining candidates, and thus the number of bits to be communicated after the elimination of the ith candidate is O((n/(m − i + 1)) log m) (footnote 7: actually, O((n/(m − i + 1)) log(m − i + 1)) is also correct, but it will not improve the bound). Thus, the total communication in this communication protocol is O(n log m + Σ_{i=1}^{m−1} (n/(m − i + 1)) log m). Of course, Σ_{i=1}^{m−1} 1/(m − i + 1) is O(log m). Substituting into the previous expression, we find that the communication complexity is O(n (log m)^2).
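A minimal sketch of the multistage protocol of Theorem 10 is given below: only voters whose current favourite is eliminated speak again, which is what keeps the total communication down. The elimination tie-breaking by candidate index and the per-announcement cost of ceil(log2 m) bits are illustrative assumptions.

from math import ceil, log2

def stv_protocol(rankings, m):
    n, bits = len(rankings), 0
    remaining = set(range(m))
    def top(r):                                # a voter's favourite among the remaining candidates
        return next(c for c in r if c in remaining)
    current = [r[0] for r in rankings]         # first round of announcements
    bits += n * ceil(log2(m))
    while len(remaining) > 1:
        tallies = {c: 0 for c in remaining}
        for c in current:
            tallies[c] += 1
        loser = min(remaining, key=lambda c: (tallies[c], c))
        remaining.discard(loser)
        for i, r in enumerate(rankings):
            if current[i] == loser:            # only these voters announce a new favourite
                current[i] = top(r)
                bits += ceil(log2(m))
    return remaining.pop(), bits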
THEOREM 11. The nondeterministic communication complexity of the STV rule is Ω(n log m) (even to decide whether a given candidate a wins).

PROOF. We omit this proof because of space constraint.

THEOREM 12. The deterministic communication complexity of the approval rule is O(nm).

PROOF. Approving or disapproving of a candidate requires only one bit of information, so every voter can simply approve or disapprove of every candidate, for a total communication of nm bits.

THEOREM 13. The nondeterministic communication complexity of the approval rule is Ω(nm) (even to decide whether a given candidate a wins).

PROOF. We will exhibit a fooling set of size 2^{n'm'}, where m' = m − 1 and n' = (n − 1)/4. For every vector (S1, S2, ..., Sn') consisting of n' subsets Si ⊆ C − {a}, let the following vector of votes be an element of the fooling set:
• For 1 ≤ i ≤ n', let voters 4i − 3 and 4i − 2 approve Si ∪ {a}.
• For 1 ≤ i ≤ n', let voters 4i − 1 and 4i approve C − (Si ∪ {a}).
• Let voter 4n' + 1 = n approve {a}.
We observe that this fooling set has size (2^{m'})^{n'} = 2^{n'm'}, and that candidate a wins in each vector of votes in the fooling set (a is approved by 2n' + 1 voters, whereas each other candidate is approved by only 2n' voters). All that remains to show is that for any two distinct vectors of votes in the fooling set, we can let each of the voters vote according to one of these two vectors in such a way that a loses. Let the first vote vector correspond to the vector (S^1_1, S^1_2, ..., S^1_n'), and let the second vote vector correspond to the vector (S^2_1, S^2_2, ..., S^2_n'). For some i, we must have S^1_i ≠ S^2_i, so that either S^1_i ⊈ S^2_i or S^2_i ⊈ S^1_i. Without loss of generality, suppose S^1_i ⊈ S^2_i, and let b be some candidate in S^1_i − S^2_i. Now, construct a new vote vector by taking votes 4i − 3 and 4i − 2 from the first vote vector, and the remaining votes from the second vote vector. In this newly constructed vote vector, a is still approved by 2n' + 1 votes. However, b is approved by 2n' + 2 votes, for the following reason. Whereas voters 4i − 3 and 4i − 2 do not approve b in the second vote vector (because b ∉ S^2_i), voters 4i − 3 and 4i − 2 do approve b in the first vote vector (because b ∈ S^1_i). It follows that b's score in the newly constructed vote vector is b's score in the second vote vector (2n'), plus two. So, a is not the winner in the newly constructed vote vector, and hence we have a correct fooling set. Interestingly, an Ω(m) lower bound can be obtained even for the problem of finding a candidate that is approved by more than one voter [20].

THEOREM 14. The deterministic communication complexity of the Condorcet rule is O(nm).

PROOF. We maintain a set of active candidates S which is initialized to C. At each stage, we choose two of the active candidates (say, the two candidates with the lowest indices), and we let each voter communicate which of the two candidates she prefers. (Such a stage requires the communication of n bits, one per voter.) The candidate preferred by fewer voters (the loser of the pairwise election) is removed from S. (If the pairwise election is tied, both candidates are removed.) After at most m − 1 iterations, only one candidate is left (or zero candidates are left, in which case there is no Condorcet winner). Let a be the remaining candidate. To find out whether candidate a is the Condorcet winner, let each voter communicate, for every candidate c ≠ a, whether she prefers a to c. (This requires the communication of at most n(m − 1) bits.) This is enough to establish whether a won each of its pairwise elections (and thus, whether a is the Condorcet winner).
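The following sketch mirrors the two phases of the Condorcet protocol of Theorem 14 (sequential pairwise elimination, then verification of the survivor) and tracks the bits used. The input representation and function names are assumptions for illustration only.

def condorcet_protocol(rankings, m):
    n, bits = len(rankings), 0
    prefers = lambda r, x, y: r.index(x) < r.index(y)
    active = list(range(m))
    while len(active) > 1:
        x, y = active[0], active[1]                      # the two lowest-indexed active candidates
        votes_x = sum(1 for r in rankings if prefers(r, x, y))
        bits += n                                        # one bit per voter for this matchup
        if votes_x * 2 > n:
            active.remove(y)
        elif votes_x * 2 < n:
            active.remove(x)
        else:                                            # tied pairwise election: drop both
            active.remove(x)
            active.remove(y)
    if not active:
        return None, bits                                # no Condorcet winner exists
    a = active[0]
    bits += n * (m - 1)                                  # verify the survivor against everyone
    for c in range(m):
        if c != a and sum(1 for r in rankings if prefers(r, a, c)) * 2 <= n:
            return None, bits
    return a, bits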
THEOREM 15. The nondeterministic communication complexity of the Condorcet rule is Ω(nm) (even to decide whether a given candidate a wins).

PROOF. We will exhibit a fooling set of size 2^{n'm'}, where m' = m − 1 and n' = (n − 1)/2. For every vector (S1, S2, ..., Sn') consisting of n' subsets Si ⊆ C − {a}, let the following vector of votes be an element of the fooling set:
• For 1 ≤ i ≤ n', let voter 2i − 1 rank the candidates Si ≻ a ≻ C − Si.
• For 1 ≤ i ≤ n', let voter 2i rank the candidates C − Si ≻ a ≻ Si.
• Let voter 2n' + 1 = n rank the candidates a ≻ C − {a}.
We observe that this fooling set has size (2^{m'})^{n'} = 2^{n'm'}, and that candidate a wins in each vector of votes in the fooling set (a wins each of its pairwise elections by a single vote). All that remains to show is that for any two distinct vectors of votes in the fooling set, we can let each of the voters vote according to one of these two vectors in such a way that a loses. Let the first vote vector correspond to the vector (S^1_1, S^1_2, ..., S^1_n'), and let the second vote vector correspond to the vector (S^2_1, S^2_2, ..., S^2_n'). For some i, we must have S^1_i ≠ S^2_i, so that either S^1_i ⊈ S^2_i or S^2_i ⊈ S^1_i. Without loss of generality, suppose S^1_i ⊈ S^2_i, and let b be some candidate in S^1_i − S^2_i. Now, construct a new vote vector by taking vote 2i − 1 from the first vote vector, and the remaining votes from the second vote vector. In this newly constructed vote vector, b wins its pairwise election against a by one vote (vote 2i − 1 ranks b above a in the newly constructed vote vector because b ∈ S^1_i, whereas in the second vote vector vote 2i − 1 ranked a above b because b ∉ S^2_i). So, a is not the Condorcet winner in the newly constructed vote vector, and hence we have a correct fooling set.

THEOREM 16. The deterministic communication complexity of the cup rule is O(nm).

PROOF. Consider the following simple communication protocol. First, let all the voters communicate, for every one of the matchups in the first round, which of its two candidates they prefer. After this, the matchups for the second round are known, so let all the voters communicate which candidate they prefer in each matchup in the second round, and so on. Because communicating which of two candidates is preferred requires only one bit per voter, and because there are only m − 1 matchups in total, this communication protocol requires O(nm) communication.
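A round-by-round simulation of the cup protocol of Theorem 16 might look as follows. It assumes the number of bracket entrants is a power of two with a fixed schedule, and breaks tied matchups toward the left candidate; none of these choices is specified by the paper.

def cup_protocol(rankings, bracket):
    # bracket lists the first-round candidates in schedule order (its length is a power of two here).
    n, bits = len(rankings), 0
    prefers = lambda r, x, y: r.index(x) < r.index(y)
    alive = list(bracket)
    while len(alive) > 1:
        nxt = []
        for x, y in zip(alive[::2], alive[1::2]):        # this round's matchups
            votes_x = sum(1 for r in rankings if prefers(r, x, y))
            bits += n                                    # one bit per voter per matchup
            nxt.append(x if votes_x * 2 >= n else y)     # ties broken toward x, purely for concreteness
        alive = nxt
    return alive[0], bits                                # at most n(m - 1) bits in total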
THEOREM 17. The nondeterministic communication complexity of the cup rule is Ω(nm) (even to decide whether a given candidate a wins).

PROOF. We will exhibit a fooling set of size 2^{n'm'}, where m' = (m − 1)/2 and n' = (n − 7)/2. Given that m + 1 is a power of 2, so that one candidate gets a bye (that is, does not face an opponent) in the first round, let a be the candidate with the bye. Of the m' first-round matchups, let lj denote the one ("left") candidate in the jth matchup, and let rj be the other ("right") candidate. Let L = {lj : 1 ≤ j ≤ m'} and R = {rj : 1 ≤ j ≤ m'}, so that C = L ∪ R ∪ {a}.
Figure 1: The schedule for the cup rule used in the proof of Theorem 17.
For every vector (S1, S2, ..., Sn') consisting of n' subsets Si ⊆ R, let the following vector of votes be an element of the fooling set:
• For 1 ≤ i ≤ n', let voter 2i − 1 rank the candidates Si ≻ L ≻ a ≻ R − Si.
• For 1 ≤ i ≤ n', let voter 2i rank the candidates R − Si ≻ L ≻ a ≻ Si.
• Let voters 2n' + 1 = n − 6, 2n' + 2 = n − 5, 2n' + 3 = n − 4 rank the candidates L ≻ a ≻ R.
• Let voters 2n' + 4 = n − 3, 2n' + 5 = n − 2 rank the candidates a ≻ r1 ≻ l1 ≻ r2 ≻ l2 ≻ ... ≻ rm' ≻ lm'.
• Let voters 2n' + 6 = n − 1, 2n' + 7 = n rank the candidates rm' ≻ lm' ≻ rm'−1 ≻ lm'−1 ≻ ... ≻ r1 ≻ l1 ≻ a.
We observe that this fooling set has size (2^{m'})^{n'} = 2^{n'm'}. Also, candidate a wins in each vector of votes in the fooling set, for the following reasons. Each candidate rj defeats its opponent lj in the first round. (For any 1 ≤ i ≤ n', the net effect of votes 2i − 1 and 2i on the pairwise election between rj and lj is zero; votes n − 6, n − 5, n − 4 prefer lj to rj, but votes n − 3, n − 2, n − 1, n all prefer rj to lj.) Moreover, a defeats every rj in their pairwise election. (For any 1 ≤ i ≤ n', the net effect of votes 2i − 1 and 2i on the pairwise election between a and rj is zero; votes n − 1, n prefer rj to a, but votes n − 6, n − 5, n − 4, n − 3, n − 2 all prefer a to rj.) It follows that a will defeat all the candidates that it faces. All that remains to show is that for any two distinct vectors of votes in the fooling set, we can let each of the voters vote according to one of these two vectors in such a way that a loses. Let the first vote vector correspond to the vector (S^1_1, S^1_2, ..., S^1_n'), and let the second vote vector correspond to the vector (S^2_1, S^2_2, ..., S^2_n'). For some i, we must have S^1_i ≠ S^2_i, so that either S^1_i ⊈ S^2_i or S^2_i ⊈ S^1_i. Without loss of generality, suppose S^1_i ⊈ S^2_i, and let rj be some candidate in S^1_i − S^2_i. Now, construct a new vote vector by taking vote 2i from the first vote vector, and the remaining votes from the second vote vector. We note that, whereas in the second vote vector vote 2i preferred rj to lj (because rj ∈ R − S^2_i), in the newly constructed vote vector this is no longer the case (because rj ∈ S^1_i). It follows that, whereas in the second vote vector, rj defeated lj in the first round by one vote, in the newly constructed vote vector, lj defeats rj in the first round. Thus, at least one lj advances to the second round after defeating its opponent rj. Now, we observe that in the newly constructed vote vector, any lk wins its pairwise election against any rq with q ≠ k. This is because among the first 2n' votes, at least n' − 1 prefer lk to rq; votes n − 6, n − 5, n − 4 prefer lk to rq; and, because q ≠ k, either votes n − 3, n − 2 prefer lk to rq (if k < q), or votes n − 1, n prefer lk to rq (if k > q). Thus, at least n' + 4 = (n + 1)/2 > n/2 votes prefer lk to rq. Moreover, any lk wins its pairwise election against a. This is because only votes n − 3 and n − 2 prefer a to lk. It follows that, after the first round, any surviving candidate lk can only lose a matchup against another surviving lk', so that one of the lk must win the election. So, a is not the winner in the newly constructed vote vector, and hence we have a correct fooling set.

THEOREM 18. The deterministic communication complexity of the Bucklin rule is O(nm).

PROOF. Let l be the minimum integer for which there is a candidate who is ranked among the top l candidates by more than half the votes. We will do a binary search for l. At each point, we will have a lower bound lL which is smaller than l (initialized to 0), and an upper bound lH which is at least l (initialized to m). While lH − lL > 1, we continue by finding out whether ⌊(lL + lH)/2⌋ is smaller than l, after which we can update the bounds. To find out whether a number k is smaller than l, we determine every voter's k most preferred candidates. Every voter can communicate which candidates are among her k most preferred candidates using m bits (for each candidate, indicate whether the candidate is among the top k or not), but because the binary search requires log m iterations, this gives us an upper bound of O((log m) · nm), which is not strong enough. However, if lL < k < lH, and we already know a voter's lL most preferred candidates, as well as her lH most preferred candidates, then the voter no longer needs to communicate whether the lL most preferred candidates are among her k most preferred candidates (because they must be), and she no longer needs to communicate whether the m − lH least preferred candidates are among her k most preferred candidates (because they cannot be).
Thus the voter needs to communicate only m − lL − (m − lH) = lH − lL bits in any given stage. Because in each stage lH − lL is (roughly) halved, each voter in total communicates only (roughly) m + m/2 + m/4 + ... < 2m bits.
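The binary search of Theorem 18 can be sketched as follows. The bit accounting follows the argument above (each voter reports only the lH − lL still-undetermined candidates per stage), while the input format and the final tie-breaking toward the lower index are assumptions of ours.

def bucklin_protocol(rankings, m):
    n, bits = len(rankings), 0
    lo, hi = 0, m                                     # invariant: lo < l <= hi
    while hi - lo > 1:
        k = (lo + hi) // 2
        bits += n * (hi - lo)                         # undetermined candidates reported per voter
        counts = [0] * m
        for r in rankings:
            for c in r[:k]:                           # the voter's k most preferred candidates
                counts[c] += 1
        if max(counts) * 2 > n:                       # some candidate clears a majority at depth k
            hi = k
        else:
            lo = k
    depth = hi                                        # this is l, the minimal winning depth
    counts = [sum(1 for r in rankings if c in r[:depth]) for c in range(m)]
    winner = max(range(m), key=lambda c: (counts[c], -c))
    return winner, bits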
THEOREM 19. The nondeterministic communication complexity of the Bucklin rule is Ω(nm) (even to decide whether a given candidate a wins).

PROOF. We will exhibit a fooling set of size 2^{n'm'}, where m' = (m − 1)/2 and n' = n/2. We write the set of candidates as the following disjoint union: C = {a} ∪ L ∪ R, where L = {l1, l2, ..., lm'} and R = {r1, r2, ..., rm'}. For any subset S ⊆ {1, 2, ..., m'}, let L(S) = {li : i ∈ S} and let R(S) = {ri : i ∈ S}. For every vector (S1, S2, ..., Sn') consisting of n' sets Si ⊆ {1, 2, ..., m'}, let the following vector of votes be an element of the fooling set:
• For 1 ≤ i ≤ n', let voter 2i − 1 rank the candidates L(Si) ≻ R − R(Si) ≻ a ≻ L − L(Si) ≻ R(Si).
• For 1 ≤ i ≤ n', let voter 2i rank the candidates L − L(Si) ≻ R(Si) ≻ a ≻ L(Si) ≻ R − R(Si).
We observe that this fooling set has size (2^{m'})^{n'} = 2^{n'm'}, and that candidate a wins in each vector of votes in the fooling set, for the following reason. Each candidate in C − {a} is ranked among the top m' candidates by exactly half the voters (which is not enough to win). Thus, we need to look at the voters' top m' + 1 candidates, and a is ranked (m' + 1)th by all voters. All that remains to show is that for any two distinct vectors of votes in the fooling set, we can let each of the voters vote according to one of these two vectors in such a way that a loses. Let the first vote vector correspond to the vector (S^1_1, S^1_2, ..., S^1_n'), and let the second vote vector correspond to the vector (S^2_1, S^2_2, ..., S^2_n'). For some i, we must have S^1_i ≠ S^2_i, so that either S^1_i ⊈ S^2_i or S^2_i ⊈ S^1_i. Without loss of generality, suppose S^1_i ⊈ S^2_i, and let j be some integer in S^1_i − S^2_i. Now, construct a new vote vector by taking vote 2i − 1 from the first vote vector, and the remaining votes from the second vote vector. In this newly constructed vote vector, a is still ranked (m' + 1)th by all votes. However, lj is ranked among the top m' candidates by n' + 1 = n/2 + 1 votes. This is because whereas vote 2i − 1 does not rank lj among the top m' candidates in the second vote vector (because j ∉ S^2_i, we have lj ∉ L(S^2_i)), vote 2i − 1 does rank lj among the top m' candidates in the first vote vector (because j ∈ S^1_i, we have lj ∈ L(S^1_i)). So, a is not the winner in the newly constructed vote vector, and hence we have a correct fooling set.

THEOREM 20. The nondeterministic communication complexity of the ranked pairs rule is Ω(nm log m) (even to decide whether a given candidate a wins).

PROOF. We omit this proof because of space constraint.

5. DISCUSSION

One key obstacle to using voting for preference aggregation is the communication burden that an election places on the voters. By lowering this burden, it may become feasible to conduct more elections over more issues. In the limit, this could lead to a shift from representational government to a system in which most issues are decided by referenda, a veritable e-democracy. In this paper, we analyzed the communication complexity of the common voting rules. Knowing which voting rules require little communication is especially important when the issue to be voted on is of low enough importance that the following is true: the parties involved are willing to accept a rule that tends to produce outcomes that are slightly less representative of the voters' preferences, if this rule reduces the communication burden on the voters significantly. The following table summarizes the results we obtained.

Rule                    Upper bound                              Lower bound
plurality               O(n log m)                               Ω(n log m)
plurality with runoff   O(n log m)                               Ω(n log m)
STV                     O(n (log m)^2)                           Ω(n log m)
Condorcet               O(nm)                                    Ω(nm)
approval                O(nm)                                    Ω(nm)
Bucklin                 O(nm)                                    Ω(nm)
cup                     O(nm)                                    Ω(nm)
maximin                 O(nm log m) (nondeterministic: O(nm))    Ω(nm)
Borda                   O(nm log m)                              Ω(nm log m)
Copeland                O(nm log m)                              Ω(nm log m)
ranked pairs            O(nm log m)                              Ω(nm log m)

Communication complexity of voting rules, sorted from low to high. All of the upper bounds are deterministic (with the exception of maximin, for which the best deterministic upper bound we proved is O(nm log m)). All of the lower bounds hold even for nondeterministic communication and even just for determining whether a given candidate a is the winner.

One area of future research is to study what happens when we restrict our attention to communication protocols that do not reveal any strategically useful information. This restriction may invalidate some of the upper bounds that we derived using multistage communication protocols. Also, all of our bounds are worst-case bounds. It may be possible to outperform these bounds when the distribution of votes has additional structure. When deciding which voting rule to use for an election, there are many considerations to take into account. The voting rules that we studied in this paper are the most common ones that have survived the test of time. One way to select among these rules is to consider recent results on complexity. The table above shows that from a communication complexity perspective, plurality, plurality with runoff, and STV are preferable. However, plurality has the undesirable property that it is computationally easy to manipulate by voting strategically [3, 7]. Plurality with runoff is NP-hard to manipulate by a coalition of weighted voters, or by an individual that faces correlated uncertainty about the others' votes [7, 6]. STV is NP-hard to manipulate in those settings as well [7], but also by an individual with perfect knowledge of the others' votes (when the number of candidates is unbounded) [2]. Therefore, STV is more robust, although it may require slightly more worst-case communication as per the table above. Yet other selection criteria are the computational complexity of determining whether enough information has been elicited to declare a winner, and that of determining the optimal sequence of queries [8].
Communication Complexity of Common Voting Rules * ABSTRACT We determine the communication complexity of the common voting rules. The rules (sorted by their communication complexity from low to high) are plurality, plurality with runoff, single transferable vote (STV), Condorcet, approval, Bucklin, cup, maximin, Borda, Copeland, and ranked pairs. For each rule, we first give a deterministic communication protocol and an upper bound on the number of bits communicated in it; then, we give a lower bound on (even the nondeterministic) communication requirements of the voting rule. The bounds match for all voting rules except STV and maximin. 1. INTRODUCTION One key factor in the practicality of any preference aggregation rule is its communication burden. To successfully aggregate the agents' preferences, it is usually not necessary * This material is based upon work supported by the National Science Foundation under ITR grants IIS-0121678 and IIS-0427858, and a Sloan Fellowship. The authors thank Ilya Segal for helpful comments. for all the agents to report all of their preference information. Clever protocols that elicit the agents' preferences partially and sequentially have the potential to dramatically reduce the required communication. This has at least the following advantages: 9 It can make preference aggregation feasible in settings where the total amount of preference information is too large to communicate. 9 Even when communicating all the preference information is feasible, reducing the communication requirements lessens the burden placed on the agents. This is especially true when the agents, rather than knowing all their preferences in advance, need to invest effort (such as computation or information gathering) to determine their preferences [16]. 9 It preserves (some of) the agents' privacy. Most of the work on reducing the communication burden in preference aggregation has focused on resource allocation settings such as combinatorial auctions, in which an auctioneer auctions off a number of (possibly distinct) items in a single event. Because in a combinatorial auction, bidders can have separate valuations for each of an exponential number of possible bundles of items, this is a setting in which reducing the communication burden is especially crucial. This can be accomplished by supplementing the auctioneer with an elicitor that incrementally elicits parts of the bidders' preferences on an as-needed basis, based on what the bidders have revealed about their preferences so far, as suggested by Conen and Sandholm [5]. For example, the elicitor can ask for a bidder's value for a specific bundle (value queries), which of two bundles the bidder prefers (order queries), which bundle he ranks kth or what the rank of a given bundle is (rank queries), which bundle he would purchase given a particular vector of prices (demand queries), etc.--until (at least) the final allocation can be determined. Experimentally, this yields drastic savings in preference revelation [11]. Furthermore, if the agents' valuation functions are drawn from certain natural subclasses, the elicitation problem can be solved using only polynomially many queries even in the worst case [23, 4, 13, 18, 14]. For a review of preference elicitation in combinatorial auctions, see [17]. Ascending combinatorial auctions are a well-known special form of preference elicitation, where the elicitor asks demand queries with increasing prices [15, 21, 1, 9]. 
Finally, resource allocation problems have also been studied from a communication complexity viewpoint, thereby deriving lower bounds on the required communication. For example, Nisan and Segal show that exponential communication is required even to obtain a surplus greater than that obtained by auctioning off all objects as a single bundle [14]. Segal also studies social choice rules in general, and shows that for a large class of social choice rules, supporting budget sets must be revealed such that if every agent prefers the same outcome in her budget set, this proves the optimality of that outcome. Segal then uses this characterization to prove bounds on the communication required in resource allocation as well as matching settings [20]. In this paper, we will focus on the communication requirements of a generally applicable subclass of social choice rules, commonly known as voting rules. In a voting setting, there is a set of candidate outcomes over which the voters express their preferences by submitting a vote (typically, a ranking of the candidates), and the winner (that is, the chosen outcome) is determined based on these votes. The communication required by voting rules can be large either because the number of voters is large (such as, for example, in national elections), or because the number of candidates is large (for example, the agents can vote over allocations of a number of resources), or both. Prior work [8] has studied elicitation in voting, studying how computationally hard it is to decide whether a winner can be determined with the information elicited so far, as well as how hard it is to find the optimal sequence of queries given perfect suspicions about the voters' preferences. In addition, that paper discusses strategic (game-theoretic) issues introduced by elicitation. In contrast, in this paper, we are concerned with the worst-case number of bits that must be communicated to execute a given voting rule, when nothing is known in advance about the voters' preferences. We determine the communication complexity of the common voting rules. For each rule, we first give an upper bound on the (deterministic) communication complexity by providing a communication protocol for it and analyzing how many bits need to be transmitted in this protocol. (Segal's results [20] do not apply to most voting rules because most voting rules are not intersectionmonotonic (or even monotonic).1) For many of the voting rules under study, it turns out that one cannot do better than simply letting each voter immediately communicate all her (potentially relevant) information. However, for some rules (such as plurality with runoff, STV and cup) there is a straightforward multistage communication protocol that, with some analysis, can be shown to significantly outperform the immediate communication of all (potentially relevant) information. Finally, for some rules (such as the Condorcet and Bucklin rules), we need to introduce a more complex communication protocol to achieve the best possible upper 1For two of the rules that we study that are intersectionmonotonic, namely the approval and Condorcet rules, Segal's results can in fact be used to give alternative proofs of our lower bounds. We only give direct proofs for these rules here because 1) these direct proofs are among the easier ones in this paper, 2) the alternative proofs are nontrivial even given Segal's results, and 3) a space constraint applies. However, we hope to also include the alternative proofs in a later version. bound. 
After obtaining the upper bounds, we show that they are tight by giving matching lower bounds on (even the nondeterministic) communication complexity of each voting rule. There are two exceptions: STV, for which our upper and lower bounds are apart by a factor log m; and maximin, for which our best deterministic upper bound is also a factor log m above the (nondeterministic) lower bound, although we give a nondeterministic upper bound that matches the lower bound. 2. REVIEW OF VOTING RULES 3. REVIEW OF SOME COMMUNICATION COMPLEXITY THEORY 4. RESULTS 5. DISCUSSION One key obstacle to using voting for preference aggregation is the communication burden that an election places on the voters. By lowering this burden, it may become feasible to conduct more elections over more issues. In the limit, this could lead to a shift from representational government to a system in which most issues are decided by referenda--a veritable e-democracy. In this paper, we analyzed the communication complexity of the common voting rules. Knowing which voting rules require little communication is especially important when the issue to be voted on is of low enough importance that the following is true: the parties involved are willing to accept a rule that tends to produce outcomes that are slightly less representative of the voters' preferences, if this rule reduces the communication burden on the voters significantly. The following table summarizes the results we obtained. Communication complexity of voting rules, sorted from low to high. All of the upper bounds are deterministic (with the exception of maximin, for which the best deterministic upper bound we proved is O (nm log m)). All of the lower bounds hold even for nondeterministic communication and even just for determining whether a given candidate a is the winner. One area of future research is to study what happens when we restrict our attention to communication protocols that do not reveal any strategically useful information. This restriction may invalidate some of the upper bounds that we derived using multistage communication protocols. Also, all of our bounds are worst-case bounds. It may be possible to outperform these bounds when the distribution of votes has additional structure. When deciding which voting rule to use for an election, there are many considerations to take into account. The voting rules that we studied in this paper are the most common ones that have survived the test of time. One way to select among these rules is to consider recent results on complexity. The table above shows that from a communication complexity perspective, plurality, plurality with runoff, and STV are preferable. However, plurality has the undesirable property that it is computationally easy to manipulate by voting strategically [3, 7]. Plurality with runoff is NP-hard to manipulate by a coalition of weighted voters, or by an individual that faces correlated uncertainty about the others' votes [7, 6]. STV is NP-hard to manipulate in those settings as well [7], but also by an individual with perfect knowledge of the others' votes (when the number of candidates is unbounded) [2]. Therefore, STV is more robust, although it may require slightly more worst-case communication as per the table above. Yet other selection criteria are the computational complexity of determining whether enough information has been elicited to declare a winner, and that of determining the optimal sequence of queries [8].
Communication Complexity of Common Voting Rules * ABSTRACT We determine the communication complexity of the common voting rules. The rules (sorted by their communication complexity from low to high) are plurality, plurality with runoff, single transferable vote (STV), Condorcet, approval, Bucklin, cup, maximin, Borda, Copeland, and ranked pairs. For each rule, we first give a deterministic communication protocol and an upper bound on the number of bits communicated in it; then, we give a lower bound on (even the nondeterministic) communication requirements of the voting rule. The bounds match for all voting rules except STV and maximin. 1. INTRODUCTION One key factor in the practicality of any preference aggregation rule is its communication burden. The authors thank Ilya Segal for helpful comments. for all the agents to report all of their preference information. Clever protocols that elicit the agents' preferences partially and sequentially have the potential to dramatically reduce the required communication. This has at least the following advantages: 9 It can make preference aggregation feasible in settings where the total amount of preference information is too large to communicate. 9 Even when communicating all the preference information is feasible, reducing the communication requirements lessens the burden placed on the agents. This is especially true when the agents, rather than knowing all their preferences in advance, need to invest effort (such as computation or information gathering) to determine their preferences [16]. 9 It preserves (some of) the agents' privacy. Most of the work on reducing the communication burden in preference aggregation has focused on resource allocation settings such as combinatorial auctions, in which an auctioneer auctions off a number of (possibly distinct) items in a single event. Because in a combinatorial auction, bidders can have separate valuations for each of an exponential number of possible bundles of items, this is a setting in which reducing the communication burden is especially crucial. Experimentally, this yields drastic savings in preference revelation [11]. For a review of preference elicitation in combinatorial auctions, see [17]. Ascending combinatorial auctions are a well-known special form of preference elicitation, where the elicitor asks demand queries with increasing prices [15, 21, 1, 9]. Finally, resource allocation problems have also been studied from a communication complexity viewpoint, thereby deriving lower bounds on the required communication. For example, Nisan and Segal show that exponential communication is required even to obtain a surplus greater than that obtained by auctioning off all objects as a single bundle [14]. Segal then uses this characterization to prove bounds on the communication required in resource allocation as well as matching settings [20]. In this paper, we will focus on the communication requirements of a generally applicable subclass of social choice rules, commonly known as voting rules. In a voting setting, there is a set of candidate outcomes over which the voters express their preferences by submitting a vote (typically, a ranking of the candidates), and the winner (that is, the chosen outcome) is determined based on these votes. 
The communication required by voting rules can be large either because the number of voters is large (such as, for example, in national elections), or because the number of candidates is large (for example, the agents can vote over allocations of a number of resources), or both. Prior work [8] has studied elicitation in voting, studying how computationally hard it is to decide whether a winner can be determined with the information elicited so far, as well as how hard it is to find the optimal sequence of queries given perfect suspicions about the voters' preferences. In addition, that paper discusses strategic (game-theoretic) issues introduced by elicitation. In contrast, in this paper, we are concerned with the worst-case number of bits that must be communicated to execute a given voting rule, when nothing is known in advance about the voters' preferences. We determine the communication complexity of the common voting rules. For each rule, we first give an upper bound on the (deterministic) communication complexity by providing a communication protocol for it and analyzing how many bits need to be transmitted in this protocol. However, for some rules (such as plurality with runoff, STV and cup) there is a straightforward multistage communication protocol that, with some analysis, can be shown to significantly outperform the immediate communication of all (potentially relevant) information. We only give direct proofs for these rules here because 1) these direct proofs are among the easier ones in this paper, 2) the alternative proofs are nontrivial even given Segal's results, and 3) a space constraint applies. However, we hope to also include the alternative proofs in a later version. bound. After obtaining the upper bounds, we show that they are tight by giving matching lower bounds on (even the nondeterministic) communication complexity of each voting rule. 5. DISCUSSION One key obstacle to using voting for preference aggregation is the communication burden that an election places on the voters. By lowering this burden, it may become feasible to conduct more elections over more issues. In this paper, we analyzed the communication complexity of the common voting rules. Knowing which voting rules require little communication is especially important when the issue to be voted on is of low enough importance that the following is true: the parties involved are willing to accept a rule that tends to produce outcomes that are slightly less representative of the voters' preferences, if this rule reduces the communication burden on the voters significantly. The following table summarizes the results we obtained. Communication complexity of voting rules, sorted from low to high. All of the upper bounds are deterministic (with the exception of maximin, for which the best deterministic upper bound we proved is O (nm log m)). All of the lower bounds hold even for nondeterministic communication and even just for determining whether a given candidate a is the winner. One area of future research is to study what happens when we restrict our attention to communication protocols that do not reveal any strategically useful information. This restriction may invalidate some of the upper bounds that we derived using multistage communication protocols. Also, all of our bounds are worst-case bounds. It may be possible to outperform these bounds when the distribution of votes has additional structure. When deciding which voting rule to use for an election, there are many considerations to take into account. 
The voting rules that we studied in this paper are the most common ones that have survived the test of time. One way to select among these rules is to consider recent results on complexity. The table above shows that from a communication complexity perspective, plurality, plurality with runoff, and STV are preferable. However, plurality has the undesirable property that it is computationally easy to manipulate by voting strategically [3, 7]. Plurality with runoff is NP-hard to manipulate by a coalition of weighted voters, or by an individual that faces correlated uncertainty about the others' votes [7, 6]. STV is NP-hard to manipulate in those settings as well [7], but also by an individual with perfect knowledge of the others' votes (when the number of candidates is unbounded) [2]. Therefore, STV is more robust, although it may require slightly more worst-case communication as per the table above. Yet other selection criteria are the computational complexity of determining whether enough information has been elicited to declare a winner, and that of determining the optimal sequence of queries [8].
C-75
Composition of a DIDS by Integrating Heterogeneous IDSs on Grids
This paper considers the composition of a DIDS (Distributed Intrusion Detection System) by integrating heterogeneous IDSs (Intrusion Detection Systems). A Grid middleware is used for this integration. In addition, an architecture for this integration is proposed and validated through simulation.
[ "grid", "distribut intrus detect system", "intrus detect system", "grid middlewar", "heterogen intrus detect system", "open grid servic architectur", "comput grid", "intrus detect servic", "grid intrus detect architectur", "id integr", "gridsim grid simul", "grid servic for intrus detect", "system integr" ]
[ "P", "P", "P", "P", "R", "M", "M", "M", "R", "M", "M", "M", "R" ]
Composition of a DIDS by Integrating Heterogeneous IDSs on Grids Paulo F. Silva and Carlos B. Westphall and Carla M. Westphall Network and Management Laboratory Department of Computer Science and Statistics Federal University of Santa Catarina, Florianópolis, Brazil Marcos D. Assunção Grid Computing and Distributed Systems Laboratory and NICTA Victoria Laboratory Department of Computer Science and Software Engineering The University of Melbourne, Victoria, 3053, Australia {paulo,westphal,assuncao,carla}@lrg. ufsc.br ABSTRACT This paper considers the composition of a DIDS (Distributed Intrusion Detection System) by integrating heterogeneous IDSs (Intrusion Detection Systems). A Grid middleware is used for this integration. In addition, an architecture for this integration is proposed and validated through simulation. Categories and Subject Descriptors C.2.4 [Distributed Systes]: Client/Server, Distributed Applications. 1. INTRODUCTION Solutions for integrating heterogeneous IDSs (Intrusion Detection Systems) have been proposed by several groups [6],[7],[11],[2]. Some reasons for integrating IDSs are described by the IDWG (Intrusion Detection Working Group) from the IETF (Internet Engineering Task Force) [12] as follows: • Many IDSs available in the market have strong and weak points, which generally make necessary the deployment of more than one IDS to provided an adequate solution. • Attacks and intrusions generally originate from multiple networks spanning several administrative domains; these domains usually utilize different IDSs. The integration of IDSs is then needed to correlate information from multiple networks to allow the identification of distributed attacks and or intrusions. • The interoperability/integration of different IDS components would benefit the research on intrusion detection and speed up the deployment of IDSs as commercial products. DIDSs (Distributed Intrusion Detection Systems) therefore started to emerge in early 90s [9] to allow the correlation of intrusion information from multiple hosts, networks or domains to detect distributed attacks. Research on DIDSs has then received much interest, mainly because centralised IDSs are not able to provide the information needed to prevent such attacks [13]. However, the realization of a DIDS requires a high degree of coordination. Computational Grids are appealing as they enable the development of distributed application and coordination in a distributed environment. Grid computing aims to enable coordinate resource sharing in dynamic groups of individuals and/or organizations. Moreover, Grid middleware provides means for secure access, management and allocation of remote resources; resource information services; and protocols and mechanisms for transfer of data [4]. According to Foster et al. [4], Grids can be viewed as a set of aggregate services defined by the resources that they share. OGSA (Open Grid Service Architecture) provides the foundation for this service orientation in computational Grids. The services in OGSA are specified through well-defined, open, extensible and platformindependent interfaces, which enable the development of interoperable applications. This article proposes a model for integration of IDSs by using computational Grids. The proposed model enables heterogeneous IDSs to work in a cooperative way; this integration is termed DIDSoG (Distributed Intrusion Detection System on Grid). Each of the integrated IDSs is viewed by others as a resource accessed through the services that it exposes. 
A Grid middleware provides several features for the realization of a DIDSoG, including [3]: decentralized coordination of resources; use of standard protocols and interfaces; and the delivery of optimized QoS (Quality of Service). The service oriented architecture followed by Grids (OGSA) allows the definition of interfaces that are adaptable to different platforms. Different implementations can be encapsulated by a service interface; this virtualisation allows the consistent access to resources in heterogeneous environments [3]. The virtualisation of the environment through service interfaces allows the use of services without the knowledge of how they are actually implemented. This characteristic is important for the integration of IDSs as the same service interfaces can be exposed by different IDSs. Grid middleware can thus be used to implement a great variety of services. Some functions provided by Grid middleware are [3]: (i) data management services, including access services, replication, and localisation; (ii) workflow services that implement coordinate execution of multiple applications on multiple resources; (iii) auditing services that perform the detection of frauds or intrusions; (iv) monitoring services which implement the discovery of sensors in a distributed environment and generate alerts under determined conditions; (v) services for identification of problems in a distributed environment, which implement the correlation of information from disparate and distributed logs. These services are important for the implementation of a DIDSoG. A DIDS needs services for the location of and access to distributed data from different IDSs. Auditing and monitoring services take care of the proper needs of the DIDSs such as: secure storage, data analysis to detect intrusions, discovery of distributed sensors, and sending of alerts. The correlation of distributed logs is also relevant because the detection of distributed attacks depends on the correlation of the alert information generated by the different IDSs that compose the DIDSoG. The next sections of this article are organized as follows. Section 2 presents related work. The proposed model is presented in Section 3. Section 4 describes the development and a case study. Results and discussion are presented in Section 5. Conclusions and future work are discussed in Section 6. 2. RELATED WORK DIDMA [5] is a flexible, scalable, reliable, and platformindependent DIDS. DIDMA architecture allows distributed analysis of events and can be easily extended by developing new agents. However, the integration with existing IDSs and the development of security components are presented as future work [5]. The extensibility of DIDS DIDMA and the integration with other IDSs are goals pursued by DIDSoG. The flexibility, scalability, platform independence, reliability and security components discussed in [5] are achieved in DIDSoG by using a Grid platform. More efficient techniques for analysis of great amounts of data in wide scale networks based on clustering and applicable to DIDSs are presented in [13]. The integration of heterogeneous IDSs to increase the variety of intrusion detection techniques in the environment is mentioned as future work [13] DIDSoG thus aims at integrating heterogeneous IDSs [13]. Ref. [10] presents a hierarchical architecture for a DIDS; information is collected, aggregated, correlated and analysed as it is sent up in the hierarchy. 
The architecture comprises of several components for: monitoring, correlation, intrusion detection by statistics, detection by signatures and answers. Components in the same level of the hierarchy cooperate with one another. The integration proposed by DIDSoG also follows a hierarchical architecture. Each IDS integrated to the DIDSoG offers functionalities at a given level of the hierarchy and requests functionalities from IDSs from another level. The hierarchy presented in [10] integrates homogeneous IDSs whereas the hierarchical architecture of DIDSoG integrates heterogeneous IDSs. There are proposals on integrating computational Grids and IDSs [6],[7],[11],[2]. Ref. [6] and [7] propose the use of Globus Toolkit for intrusion detection, especially for DoS (Denial of Service) and DDoS (Distributed Denial of Service) attacks; Globus is used due to the need to process great amounts of data to detect these kinds of attack. A two-phase processing architecture is presented. The first phase aims at the detection of momentary attacks, while the second phase is concerned with chronic or perennial attacks. Traditional IDSs or DIDSs are generally coordinated by a central point; a characteristic that leaves them prone to attacks. Leu et al. [6] point out that IDSs developed upon Grids platforms are less vulnerable to attacks because of the distribution provided for such platforms. Leu et al. [6],[7] have used tools to generate several types of attacks - including TCP, ICMP and UDP flooding - and have demonstrated through experimental results the advantages of applying computational Grids to IDSs. This work proposes the development of a DIDS upon a Grid platform. However, the resulting DIDS integrates heterogeneous IDSs whereas the DIDSs upon Grids presented by Leu et al. [6][7] do not consider the integration of heterogeneous IDSs. The processing in phases [6][7] is also contemplated by DIDSoG, which is enabled by the specification of several levels of processing allowed by the integration of heterogeneous IDSs. The DIDS GIDA (Grid Intrusion Detection Architecture) targets at the detection of intrusions in a Grid environment [11]. GridSim Grid simulator was used for the validation of DIDS GIDA. Homogeneous resources were used to simplify the development [11]. However, the possibility of applying heterogeneous detection systems is left for future work Another DIDS for Grids is presented by Choon and Samsudim [2]. Scenarios demonstrating how a DIDS can execute on a Grid environment are presented. DIDSoG does not aim at detecting intrusions in a Grid environment. In contrast, DIDSoG uses the Grid to compose a DIDS by integrating specific IDSs; the resulting DIDS could however be used to identify attacks in a Grid environment. Local and distributed attacks can be detected through the integration of traditional IDSs while attacks particular to Grids can be detected through the integration of Grid IDSs. 3. THE PROPOSED MODEL DIDSoG presents a hierarchy of intrusion detection services; this hierarchy is organized through a two-dimensional vector defined by Scope:Complexity. The IDSs composing DIDSoG can be organized in different levels of scope or complexity, depending on its functionalities, the topology of the target environment and expected results. Figure 1 presents a DIDSoG composed by different intrusion detection services (i.e. data gathering, data aggregation, data correlation, analysis, intrusion response and management) provided by different IDSs. 
The information flow and the relationship between the levels of scope and complexity are presented in this figure. Information about the environment (host, network or application) is collected by Sensors located on both user 1's and user 2's computers in domain 1. The information is sent both to simple Analysers that act on the information from a single host (level 1:1), and to aggregation and correlation services that act on information from multiple hosts of the same domain (level 2:1). Simple Analysers in the first scope send the information to more complex Analysers in the next levels of complexity (level 1:N). When an Analyser detects an intrusion, it communicates with the Countermeasure and Monitoring services registered to its scope. An Analyser can invoke a Countermeasure service that responds to a detected attack, or inform a Monitoring service about the ongoing attack, so the administrator can act accordingly.

Aggregation and correlation resources in the second scope receive information from Sensors on different users' computers (user 1's and user 2's) in domain 1. These resources process the received information and send it to the analysis resources registered to the first level of complexity in the second scope (level 2:1). The information is also sent to the aggregation and correlation resources registered at the first level of complexity in the next scope (level 3:1).

Fig. 1. How DIDSoG works.

The analysis resources in the second scope act like the analysis resources in the first scope, directing the information to a more complex analysis resource and activating the Countermeasure and Monitoring resources when attacks are detected. Aggregation and correlation resources in the third scope receive information from domains 1 and 2. These resources carry out the aggregation and correlation of the information from the different domains and send it to the analysis resources in the first level of complexity of the third scope (level 3:1). The information could also be sent to the aggregation service in the next scope, if any resources are registered at that level. The analysis resources in the third scope act similarly to those in the first and second scopes, except that they act on information from multiple domains. The functionalities of the registered resources at each scope and complexity level can vary from one environment to another. The model allows the definition of N levels of scope and complexity.

Figure 2 presents the architecture of a resource participating in the DIDSoG. Initially, the resource registers itself with the GIS (Grid Information Service) so that other participating resources can query the services it provides. After registering itself, the resource requests information about the other intrusion detection resources registered with the GIS. A given resource of DIDSoG then interacts with other resources by receiving data from its Origin Resources, processing it, and sending the results to its Destination Resources, therefore forming a grid of intrusion detection resources.
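To make the participation protocol just described concrete, the following minimal Java sketch shows the life cycle of a resource: it registers with the GIS, discovers its destination resources, and then receives, processes and forwards data. This is only an illustration under assumed names; ParticipatingResourceSketch, InformationService, NativeIds and their methods are not part of the paper's implementation or of GridSim.

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative sketch of how a DIDSoG resource participates in the grid,
    // following the behaviour described for Figure 2. All names are assumptions.
    public class ParticipatingResourceSketch {

        interface InformationService {                     // stands in for the GIS
            void register(String name, String level);      // announce this resource
            List<String> queryDestinations(String level);  // who should receive our results
        }

        interface NativeIds {                              // the wrapped IDS
            String process(String data);                   // analyse/aggregate/correlate
        }

        private final String name;
        private final String level;                        // e.g. "2:1" = scope 2, complexity 1
        private final InformationService gis;
        private final NativeIds nativeIds;
        private List<String> destinations = new ArrayList<>();

        ParticipatingResourceSketch(String name, String level,
                                    InformationService gis, NativeIds nativeIds) {
            this.name = name;
            this.level = level;
            this.gis = gis;
            this.nativeIds = nativeIds;
        }

        void join() {
            gis.register(name, level);                     // step 1: register with the GIS
            destinations = gis.queryDestinations(level);   // step 2: discover destinations
        }

        // Steps 3-5: receive from an origin resource, process, forward the result.
        void onDataReceived(String dataFromOrigin) {
            String result = nativeIds.process(dataFromOrigin);
            for (String destination : destinations) {
                System.out.println(name + " -> " + destination + ": " + result);
            }
        }

        public static void main(String[] args) {
            InformationService gis = new InformationService() {
                public void register(String name, String level) { }
                public List<String> queryDestinations(String level) {
                    return List.of("Analyser_1 (1:1)", "Aggreg_Corr_1 (2:1)");
                }
            };
            ParticipatingResourceSketch sensor =
                    new ParticipatingResourceSketch("Sensor_1", "1:1", gis, data -> "tcpdump:" + data);
            sensor.join();
            sensor.onDataReceived("192.168.0.7 > 10.0.0.5.23: S");
        }
    }

In the actual DIDSoG these responsibilities are split between the Base, Connector and Native IDS components described next.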
Fig. 2. Architecture of a resource participating in the DIDSoG.

A resource is made up of four components: Base, Connector, Descriptor and Native IDS. The Native IDS corresponds to the IDS being integrated into the DIDSoG. This component processes the data received from the Origin Resources and generates new data to be sent to the Destination Resources. A Native IDS component can be any tool that processes information related to intrusion detection, including analysis, data gathering, data aggregation, data correlation, intrusion response or management.

The Descriptor is responsible for the information that identifies a resource and its respective Destination Resources in the DIDSoG. Figure 3 presents the class diagram of the information stored by the Descriptor. The ResourceDescriptor class has members of the Feature, Level, DataType and TargetResources types. The Feature class represents the functionalities that a resource has; its type, name and version attributes refer to the function offered by the Native IDS component, its name and its version, respectively. The Level class identifies the level of scope and complexity at which the resource acts. The DataType class represents the data format that the resource accepts as input; it is specialized by the classes Text, XML and Binary, and the XML class contains a DTDFile attribute that specifies the DTD file used to validate the received XML.

Fig. 3. Class diagram of the Descriptor component.

The TargetResources class represents the features of the Destination Resources of a given resource. This class aggregates Resource. The Resource class identifies the characteristics of a Destination Resource; this identification is made through the featureType attribute and the Level and DataType classes. A given resource analyses the information from the Descriptors of other resources and compares it with the information specified in its TargetResources to decide to which resources it should send the results of its processing.

The Base component is responsible for the communication of a resource with the other resources of the DIDSoG and with the Grid Information Service. It is this component that registers the resource with the GIS and queries the GIS for other resources. The Connector component is the link between the Base and the Native IDS. The information that the Base receives from the Origin Resources is passed to the Connector component. The Connector performs the changes needed for the data to be understood by the Native IDS and sends the data to the Native IDS for processing. The Connector is also responsible for collecting the information processed by the Native IDS and making the changes needed for this information to pass through the DIDSoG again. After these changes, the Connector sends the information to the Base, which in turn sends it to the Destination Resources in accordance with the specifications of the Descriptor component.
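As a rough illustration of the Descriptor information of Figure 3 and of the destination-matching rule just described, the sketch below models the descriptor as plain Java records and lets a resource test whether a peer matches one of its TargetResources. Field and method names are assumptions, and the Text/XML/Binary specialization (with its DTDFile attribute) is simplified to an enum.

    import java.util.List;

    // Sketch of the Descriptor information of Figure 3 as plain Java classes.
    // The isDestination() logic is our reading of how a resource could compare a
    // peer's descriptor against its own TargetResources, not code from the paper.
    public class DescriptorModelSketch {

        enum DataType { TEXT, XML, BINARY }

        record Level(int scope, int complexity) {}

        record Feature(String featureType, String name, String version) {}

        // Characteristics that a destination resource must have.
        record TargetResource(String featureType, Level level, DataType dataType) {}

        record ResourceDescriptor(String ident, String version, String description,
                                  Feature feature, Level level, DataType dataType,
                                  List<TargetResource> targetResources) {

            // True if the peer described by 'other' is one of our destinations.
            boolean isDestination(ResourceDescriptor other) {
                return targetResources.stream().anyMatch(t ->
                        t.featureType().equals(other.feature().featureType())
                        && t.level().equals(other.level())
                        && t.dataType() == other.dataType());
            }
        }

        public static void main(String[] args) {
            ResourceDescriptor analyser = new ResourceDescriptor(
                    "analyser-1", "1.0", "simple per-host analyser",
                    new Feature("analysis", "Analyser_1", "1.0"),
                    new Level(1, 1), DataType.TEXT, List.of());

            ResourceDescriptor sensor = new ResourceDescriptor(
                    "sensor-1", "1.0", "host sensor",
                    new Feature("data-gathering", "Sensor_1", "1.0"),
                    new Level(1, 1), DataType.TEXT,
                    List.of(new TargetResource("analysis", new Level(1, 1), DataType.TEXT)));

            System.out.println(sensor.isDestination(analyser)); // prints: true
        }
    }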
4. IMPLEMENTATION

We have used the GridSim Toolkit 3 [1] for the development and evaluation of the proposed model. GridSim features were used and extended to model and simulate the resources and components of DIDSoG. Figure 4 presents the class diagram of the simulated DIDSoG. The Simulation_DIDSoG class starts the simulation components. The Simulation_User class represents a user of DIDSoG; its function is to initiate the processing of a Sensor resource, from which the gathered information is sent to other resources. DIDSoG_GIS keeps a registry of the DIDSoG resources. The DIDSoG_BaseResource class implements the Base component (see Figure 2). DIDSoG_BaseResource interacts with the DIDSoG_Descriptor class, which represents the Descriptor component. The DIDSoG_Descriptor class is created from an XML file that specifies a resource descriptor (see Figure 3).

Fig. 4. Class diagram of the simulated DIDSoG.

A Connector component must be developed for each Native IDS integrated into DIDSoG. The Connector is implemented by creating a class derived from DIDSoG_BaseResource; the new class implements the functionalities required by the corresponding Native IDS. In the simulation environment, resources for data collection, analysis, aggregation/correlation and response generation were integrated. Classes were developed to simulate the processing of the Native IDS component associated with each resource. For each simulated Native IDS, a class derived from DIDSoG_BaseResource was developed; this class corresponds to the Connector component of the Native IDS and integrates the IDS into DIDSoG. An XML file describing each of the integrated resources is loaded through its Connector component. The resulting relationships between the resources integrated into the DIDSoG, in accordance with the specification of their respective descriptors, are presented in Figure 5. The Sensor_1 and Sensor_2 resources generate simulated data in the TCPDump [8] format. The generated data is directed to the Analyser_1 and Aggreg_Corr_1 resources in the case of Sensor_1, and to Aggreg_Corr_1 in the case of Sensor_2, according to the specification of their descriptors.

Fig. 5. Flow of the execution of the simulation.
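A hedged sketch of what such a Connector might look like: a class derived from the Base (here replaced by a small stand-in, since the real DIDSoG_BaseResource interface is not published in the paper) that adapts TCPDump-style input for its native analyser and wraps the verdict in a simplified IDMEF-like alert. The detection rule mirrors the behaviour attributed to Analyser_1 in the walkthrough that follows; parsing and tag names are illustrative only.

    // Assumed minimal stand-in for the paper's DIDSoG_BaseResource: it delivers
    // incoming data to the Connector and forwards whatever the Connector returns.
    abstract class BaseResourceStub {
        protected abstract String onData(String data);     // called for each message
        protected void forward(String result) {            // delivery to destinations
            System.out.println("forwarding: " + result);
        }
        public final void receive(String data) {
            String result = onData(data);
            if (result != null) forward(result);
        }
    }

    // Connector for a native analyser that flags any connection attempt to port 23.
    // It adapts the TCPDump-style line and wraps the verdict in a simplified
    // IDMEF-like alert; field layout and tag names are illustrative.
    public class Analyser1Connector extends BaseResourceStub {

        @Override
        protected String onData(String tcpdumpLine) {
            // Extremely simplified parsing: "src > dst.port: flags ..."
            if (!tcpdumpLine.contains(".23:")) {
                return null;                                // nothing suspicious
            }
            String source = tcpdumpLine.split(" ")[0];
            return "<Alert analyzerid=\"Analyser_1\">"
                 + "<Classification text=\"telnet connection attempt\"/>"
                 + "<Source>" + source + "</Source>"
                 + "</Alert>";
        }

        public static void main(String[] args) {
            Analyser1Connector connector = new Analyser1Connector();
            connector.receive("192.168.0.7.40112 > 10.0.0.5.23: S 0:0(0)");
            connector.receive("192.168.0.7.40113 > 10.0.0.5.80: S 0:0(0)");
        }
    }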
The Native IDS of Analyser_1 generates alerts for any attempt to connect to port 23. The data received by Analyser_1 exhibited such attempts, so an IDMEF (Intrusion Detection Message Exchange Format) alert [14] was generated. The alert was sent to the Countermeasure_1 resource, where a warning was dispatched to the administrator informing them of the alert received. The Aggreg_Corr_1 resource received the information generated by sensors 1 and 2; its processing consists of correlating the received data by source IP address. The information resulting from Aggreg_Corr_1's processing was directed to the Analyser_2 resource. The Native IDS component of Analyser_2 generates alerts when a source tries to connect to the same port on multiple destinations. This situation was identified by Analyser_2 in the data received from Aggreg_Corr_1, and an alert in IDMEF format was then sent to the Countermeasure_2 resource. In addition to generating alerts in IDMEF format, Analyser_2 also directs the received data to Analyser_3, at complexity level 2.

The Native IDS component of Analyser_3 generates alerts when the transmission of ICMP messages from a given source to multiple destinations is detected. This situation was detected in the data received from Analyser_2, and an IDMEF alert was then sent to the Countermeasure_2 resource. The Countermeasure_2 resource receives the alerts generated by analysers 2 and 3, in accordance with the implementation of its Native IDS component; warnings about the received alerts are dispatched to the administrator.

The simulation demonstrates how DIDSoG works. Simulated data was generated as input for a grid of intrusion detection systems composed of several distinct resources. The resources carry out tasks such as data collection, aggregation and analysis, and the generation of alerts and warnings in an integrated manner.

5. EXPERIMENT RESULTS

The hierarchical organization of scope and complexity provides a high degree of flexibility to the model. The DIDSoG can be modelled in accordance with the needs of each environment, and the descriptors define the data flow desired for the resulting DIDS. Each Native IDS is integrated into the DIDSoG through a Connector component. The Connector is also flexible: adaptations, data type conversions and auxiliary processes that Native IDSs need are provided by the Connector, and filters and the generation of specific logs for each Native IDS or environment can also be incorporated into it. To integrate a new IDS into an already configured environment, it is enough to develop the Connector for the desired IDS and to specify its resource Descriptor; once both are specified, the new IDS is integrated into the DIDSoG.

Through the definition of scopes, resources can act on data from different source groups. For example, scope 1 can be related to a given set of hosts, scope 2 to another set of hosts, and scope 3 to the hosts of scopes 1 and 2. Scopes can be defined according to the needs of each environment. The complexity levels allow the processing to be distributed among several resources within the same scope. In an analysis task, for example, the search for simple attacks can be performed by resources of complexity 1, whereas the search for more complex attacks, which demands more time, can be performed by resources of complexity 2; in this way the analysis of the data is shared between two resources. The distinction between complexity levels can also be used to integrate different intrusion detection techniques: complexity level 1 could be assigned to signature-based analyses, which are simpler; complexity level 2 to behaviour-based techniques, which require greater computational power; and complexity level 3 to application-level intrusion detection, where the techniques are more specific and depend on more data.

The division into scopes and complexity levels causes the data to be processed in phases. No resource has full knowledge of the complete data processing flow; each resource knows only the results of its own processing and the destination to which it sends them. Resources of higher complexity must be linked to resources of lower complexity. The hierarchical structure of the DIDSoG is therefore maintained, facilitating its extension and integration with other domains of intrusion detection.
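One possible, purely illustrative configuration of the technique-per-complexity and host-group-per-scope assignments discussed above; these choices are examples only, since DIDSoG leaves them to the descriptors of the deployed resources.

    import java.util.Map;

    // Example assignment of detection techniques to complexity levels and of
    // host groups to scopes, as discussed in the text. Not part of DIDSoG itself.
    public class LevelPolicyExample {

        static final Map<Integer, String> TECHNIQUE_BY_COMPLEXITY = Map.of(
                1, "signature-based analysis (simpler, runs first)",
                2, "behaviour-based analysis (greater computational power)",
                3, "application-level analysis (more specific, more data)");

        static final Map<Integer, String> HOSTS_BY_SCOPE = Map.of(
                1, "hosts of one user group",
                2, "hosts of another user group",
                3, "all hosts of scopes 1 and 2 (cross-domain view)");

        public static void main(String[] args) {
            TECHNIQUE_BY_COMPLEXITY.forEach((level, technique) ->
                    System.out.println("complexity " + level + ": " + technique));
            HOSTS_BY_SCOPE.forEach((scope, hosts) ->
                    System.out.println("scope " + scope + ": " + hosts));
        }
    }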
Because the analysers chosen for an environment are related hierarchically, the sensor resource is not overloaded with the task of sending its data to all the analysers. There is an initial analyser (complexity level 1) to which the sensor sends its data, and this analyser then directs the data to the next step of the processing flow. Another feature of the hierarchical organization is the ease of extension and of integration with other domains. If it is necessary to add a new host (sensor) to the DIDSoG, it is enough to plug it into the first level of the resource hierarchy. If it is necessary to add a new analyser, even one whose scope covers several domains, it is enough to relate it to another resource of the same scope.

The DIDSoG allows different levels to be managed by different entities. For example, the first scope can be managed by the local user of a host; the second scope, comprising several hosts of a domain, can be managed by the administrator of the domain; and a third entity can be responsible for managing the security of several domains jointly, acting in scope 3 independently of the others.

With the proposed model for the integration of IDSs on Grids, the different IDSs of an environment act in a cooperative manner, improving the intrusion detection services mainly in two aspects. First, the information from multiple sources is analysed in an integrated way to search for distributed attacks; this integration can be performed at several scopes. Second, a great diversity of data aggregation, data correlation, analysis and intrusion response techniques can be applied to the same environment; these techniques can be organized into several levels of complexity.

6. CONCLUSION

The integration of heterogeneous IDSs is important, but the incompatibility and diversity of IDS solutions make such integration extremely difficult. This work therefore proposed a model for the composition of a DIDS by integrating existing IDSs on a computational Grid platform (DIDSoG). IDSs in DIDSoG are encapsulated as Grid services for intrusion detection. A computational Grid platform is used for the integration by providing the basic requirements for communication, localization, resource sharing and security mechanisms.

The components of the DIDSoG architecture were developed and evaluated using the GridSim Grid simulator. The communication and localization services provided by GridSim were used to integrate the components of the different resources, and, based on the architecture components, several resources were modelled, forming a grid of intrusion detection. The simulation demonstrated the usefulness of the proposed model: data from the sensor resources was read and used to feed the other resources of DIDSoG, and the integration of distinct IDSs, providing different intrusion detection services (e.g. analysis, correlation, aggregation and alerting), could be observed in the simulated environment.
During the simulation, the different IDSs cooperated with one another in a distributed yet coordinated way, with an integrated view of the events and thus with the capability to detect distributed attacks. This capability demonstrates that the integrated IDSs have resulted in a DIDS.

Related work presents cooperation between the components of a specific DIDS, and some works focus on either the development of DIDSs on computational Grids or the application of IDSs to computational Grids; however, none deals with the integration of heterogeneous IDSs. In contrast, the model developed and simulated in this work can shed some light on the question of integrating heterogeneous IDSs. DIDSoG presents new research opportunities that we would like to pursue, including: deployment of the model in a more realistic environment such as a Grid; incorporation of new security services; and parallel analysis of data by Native IDSs on multiple hosts. In addition to the integration of IDSs enabled by a Grid middleware, the cooperation of heterogeneous IDSs can be viewed as an economic problem: IDSs from different organizations or administrative domains need incentives for joining a grid of intrusion detection services and for collaborating with other IDSs. The development of distributed strategy-proof mechanisms for the integration of IDSs is a challenge that we would like to tackle.

7. REFERENCES

[1] Sulistio, A.; Poduvaly, G.; Buyya, R.; Tham, C. K. Constructing a Grid Simulation with Differentiated Network Service Using GridSim. Proc. of the 6th International Conference on Internet Computing (ICOMP'05), Las Vegas, USA, June 2005.
[2] Choon, O. T.; Samsudim, A. Grid-based Intrusion Detection System. Proc. of the 9th IEEE Asia-Pacific Conference on Communications, September 2003.
[3] Foster, I.; Kesselman, C.; Tuecke, S. The Physiology of the Grid: An Open Grid Service Architecture for Distributed System Integration. Draft, June 2002. Available at http://www.globus.org/research/papers/ogsa.pdf. Accessed February 2006.
[4] Foster, I.; Kesselman, C.; Tuecke, S. The Anatomy of the Grid: Enabling Scalable Virtual Organizations. International Journal of Supercomputer Applications, 2001.
[5] Kannadiga, P.; Zulkernine, M. DIDMA: A Distributed Intrusion Detection System Using Mobile Agents. Proc. of the IEEE Sixth International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing, May 2005.
[6] Leu, F.-Y., et al. Integrating Grid with Intrusion Detection. Proc. of the 19th IEEE AINA'05, March 2005.
[7] Leu, F.-Y., et al. A Performance-Based Grid Intrusion Detection System. Proc. of the 29th IEEE COMPSAC'05, July 2005.
[8] McCanne, S.; Leres, C.; Jacobson, V. TCPdump/Libpcap. http://www.tcpdump.org/, 1994.
[9] Snapp, S. R., et al. DIDS (Distributed Intrusion Detection System) - Motivation, Architecture and an Early Prototype. Proc. of the Fifteenth IEEE National Computer Security Conference, Baltimore, MD, October 1992.
[10] Sterne, D., et al. A General Cooperative Intrusion Detection Architecture for MANETs. Proc. of the Third IEEE IWIA'05, March 2005.
[11] Tolba, M. F., et al. GIDA: Toward Enabling Grid Intrusion Detection Systems. Proc. of the 5th IEEE International Symposium on Cluster Computing and the Grid, May 2005.
[12] Wood, M. Intrusion Detection Message Exchange Requirements. Draft-ietf-idwg-requirements-10, October 2002. Available at http://www.ietf.org/internet-drafts/draft-ietf-idwg-requirements-10.txt. Accessed March 2006.
[13] Zhang, Y.-F.; Xiong, Z.; Wang, X. Distributed Intrusion Detection Based on Clustering. Proc. of the IEEE International Conference on Machine Learning and Cybernetics, August 2005.
[14] Curry, D.; Debar, H. Intrusion Detection Message Exchange Format Data Model and Extensible Markup Language (XML) Document Type Definition. Draft-ietf-idwg-idmef-xml-10, March 2006. Available at http://www.ietf.org/internet-drafts/draft-ietf-idwg-idmef-xml-16.txt.
Composition of a DIDS by Integrating Heterogeneous IDSs on Grids ABSTRACT This paper considers the composition of a DIDS (Distributed Intrusion Detection System) by integrating heterogeneous IDSs (Intrusion Detection Systems). A Grid middleware is used for this integration. In addition, an architecture for this integration is proposed and validated through simulation. 1. INTRODUCTION Solutions for integrating heterogeneous IDSs (Intrusion Detection Systems) have been proposed by several groups [6], [7], [11], [2]. Some reasons for integrating IDSs are described by the IDWG (Intrusion Detection Working Group) from the IETF (Internet Engineering Task Force) [12] as follows: • Many IDSs available in the market have strong and weak points, which generally make necessary the deployment of more than one IDS to provided an adequate solution. • Attacks and intrusions generally originate from multiple networks spanning several administrative domains; these domains usually utilize different IDSs. The integration of IDSs is then needed to correlate information from multiple networks to allow the identification of distributed attacks and or intrusions. • The interoperability/integration of different IDS components would benefit the research on intrusion detection and speed up the deployment of IDSs as commercial products. DIDSs (Distributed Intrusion Detection Systems) therefore started to emerge in early 90s [9] to allow the correlation of intrusion information from multiple hosts, networks or domains to detect distributed attacks. Research on DIDSs has then received much interest, mainly because centralised IDSs are not able to provide the information needed to prevent such attacks [13]. However, the realization of a DIDS requires a high degree of coordination. Computational Grids are appealing as they enable the development of distributed application and coordination in a distributed environment. Grid computing aims to enable coordinate resource sharing in dynamic groups of individuals and/or organizations. Moreover, Grid middleware provides means for secure access, management and allocation of remote resources; resource information services; and protocols and mechanisms for transfer of data [4]. According to Foster et al. [4], Grids can be viewed as a set of aggregate services defined by the resources that they share. OGSA (Open Grid Service Architecture) provides the foundation for this service orientation in computational Grids. The services in OGSA are specified through well-defined, open, extensible and platformindependent interfaces, which enable the development of interoperable applications. This article proposes a model for integration of IDSs by using computational Grids. The proposed model enables heterogeneous IDSs to work in a cooperative way; this integration is termed DIDSoG (Distributed Intrusion Detection System on Grid). Each of the integrated IDSs is viewed by others as a resource accessed through the services that it exposes. A Grid middleware provides several features for the realization of a DIDSoG, including [3]: decentralized coordination of resources; use of standard protocols and interfaces; and the delivery of optimized QoS (Quality of Service). The service oriented architecture followed by Grids (OGSA) allows the definition of interfaces that are adaptable to different platforms. Different implementations can be encapsulated by a service interface; this virtualisation allows the consistent access to resources in heterogeneous environments [3]. 
The virtualisation of the environment through service interfaces allows the use of services without the knowledge of how they are actually implemented. This characteristic is important for the integration of IDSs as the same service interfaces can be exposed by different IDSs. Grid middleware can thus be used to implement a great variety of services. Some functions provided by Grid middleware are [3]: (i) data management services, including access services, replication, and localisation; (ii) workflow services that implement coordinate execution of multiple applications on multiple resources; (iii) auditing services that perform the detection of frauds or intrusions; (iv) monitoring services which implement the discovery of sensors in a distributed environment and generate alerts under determined conditions; (v) services for identification of problems in a distributed environment, which implement the correlation of information from disparate and distributed logs. These services are important for the implementation of a DIDSoG. A DIDS needs services for the location of and access to distributed data from different IDSs. Auditing and monitoring services take care of the proper needs of the DIDSs such as: secure storage, data analysis to detect intrusions, discovery of distributed sensors, and sending of alerts. The correlation of distributed logs is also relevant because the detection of distributed attacks depends on the correlation of the alert information generated by the different IDSs that compose the DIDSoG. The next sections of this article are organized as follows. Section 2 presents related work. The proposed model is presented in Section 3. Section 4 describes the development and a case study. Results and discussion are presented in Section 5. Conclusions and future work are discussed in Section 6. 2. RELATED WORK DIDMA [5] is a flexible, scalable, reliable, and platformindependent DIDS. DIDMA architecture allows distributed analysis of events and can be easily extended by developing new agents. However, the integration with existing IDSs and the development of security components are presented as future work [5]. The extensibility of DIDS DIDMA and the integration with other IDSs are goals pursued by DIDSoG. The flexibility, scalability, platform independence, reliability and security components discussed in [5] are achieved in DIDSoG by using a Grid platform. More efficient techniques for analysis of great amounts of data in wide scale networks based on clustering and applicable to DIDSs are presented in [13]. The integration of heterogeneous IDSs to increase the variety of intrusion detection techniques in the environment is mentioned as future work [13] DIDSoG thus aims at integrating heterogeneous IDSs [13]. Ref. [10] presents a hierarchical architecture for a DIDS; information is collected, aggregated, correlated and analysed as it is sent up in the hierarchy. The architecture comprises of several components for: monitoring, correlation, intrusion detection by statistics, detection by signatures and answers. Components in the same level of the hierarchy cooperate with one another. The integration proposed by DIDSoG also follows a hierarchical architecture. Each IDS integrated to the DIDSoG offers functionalities at a given level of the hierarchy and requests functionalities from IDSs from another level. The hierarchy presented in [10] integrates homogeneous IDSs whereas the hierarchical architecture of DIDSoG integrates heterogeneous IDSs. 
There are proposals on integrating computational Grids and IDSs [6], [7], [11], [2]. Ref. [6] and [7] propose the use of Globus Toolkit for intrusion detection, especially for DoS (Denial of Service) and DDoS (Distributed Denial of Service) attacks; Globus is used due to the need to process great amounts of data to detect these kinds of attack. A two-phase processing architecture is presented. The first phase aims at the detection of momentary attacks, while the second phase is concerned with chronic or perennial attacks. Traditional IDSs or DIDSs are generally coordinated by a central point; a characteristic that leaves them prone to attacks. Leu et al. [6] point out that IDSs developed upon Grids platforms are less vulnerable to attacks because of the distribution provided for such platforms. Leu et al. [6], [7] have used tools to generate several types of attacks - including TCP, ICMP and UDP flooding - and have demonstrated through experimental results the advantages of applying computational Grids to IDSs. This work proposes the development of a DIDS upon a Grid platform. However, the resulting DIDS integrates heterogeneous IDSs whereas the DIDSs upon Grids presented by Leu et al. [6] [7] do not consider the integration of heterogeneous IDSs. The processing in phases [6] [7] is also contemplated by DIDSoG, which is enabled by the specification of several levels of processing allowed by the integration of heterogeneous IDSs. The DIDS GIDA (Grid Intrusion Detection Architecture) targets at the detection of intrusions in a Grid environment [11]. GridSim Grid simulator was used for the validation of DIDS GIDA. Homogeneous resources were used to simplify the development [11]. However, the possibility of applying heterogeneous detection systems is left for future work Another DIDS for Grids is presented by Choon and Samsudim [2]. Scenarios demonstrating how a DIDS can execute on a Grid environment are presented. DIDSoG does not aim at detecting intrusions in a Grid environment. In contrast, DIDSoG uses the Grid to compose a DIDS by integrating specific IDSs; the resulting DIDS could however be used to identify attacks in a Grid environment. Local and distributed attacks can be detected through the integration of traditional IDSs while attacks particular to Grids can be detected through the integration of Grid IDSs. 3. THE PROPOSED MODEL DIDSoG presents a hierarchy of intrusion detection services; this hierarchy is organized through a two-dimensional vector defined by "Scope: Complexity". The IDSs composing DIDSoG can be organized in different levels of scope or complexity, depending on its functionalities, the topology of the target environment and expected results. Figure 1 presents a DIDSoG composed by different intrusion detection services (i.e. data gathering, data aggregation, data correlation, analysis, intrusion response and management) provided by different IDSs. The information flow and the relationship between the levels of scope and complexity are presented in this figure. Information about the environment (host, network or application) is collected by Sensors located both in user 1's and user 2's computers in domain 1. The information is sent to both simple Analysers that act on the information from a single host (level 1:1), and to aggregation and correlation services that act on information from multiple hosts from the same domain (level 2:1). Simple Analysers in the first scope level send the information to more complex Analysers in the next levels of complexity (level 1: N). 
When an Analyser detects an intrusion, it communicates with Countermeasure and Monitoring services registered to its scope. An Analyser can invoke a Countermeasure service that replies to a detected attack, or informs a Monitoring service about the ongoing attack, so the administrator can act accordingly. Aggregation and correlation resources in the second scope receive information from Sensors from different users' computers (user 1's and user 2's) in the domain 1. These resources process the received information and send it to the analysis resources registered to the first level of complexity in the second scope (level 2:1). The information is also sent to the aggregation and correlation resources registered in the first level of complexity in the next scope (level 3:1). Fig. 1. How DIDSoG works. The analysis resources in the second scope act like the analysis resources in the first scope, directing the information to a more complex analysis resource and putting the Countermeasure and Monitoring resources in action in case of detected attacks. Aggregation and correlation resources in the third scope receive information from domains 1 and 2. These resources then carry out the aggregation and correlation of the information from different domains and send it to the analysis resources in the first level of complexity in the third scope (level 3:1). The information could also be sent to the aggregate service in the next scope in case of any resources registered to such level. The analysis resources in the third scope act similar to the analysis resources in the first and second scopes, except that the analysis resources in the third scope act on information from multiple domains. The functionalities of the registered resources in each of the scopes and complexity level can vary from one environment to another. The model allows the development of "N" levels of scope and complexity. Figure 2 presents the architecture of a resource participating in the DIDSoG. Initially, the resource registers itself to GIS (Grid Information Service) so other participating resources can query the services provided. After registering itself, the resource requests information about other intrusion detection resources registered to the GIS. A given resource of DIDSoG interacts with other resources, by receiving data from the Source Resources, processing it, and sending the results to the Destination Resources, therefore forming a grid of intrusion detection resources. Grid Origin Resources Grid Resource Grid Information Service Native IDS Grid Destination Resources Fig. 2. Architecture of a resource participating of the DIDSoG. A resource is made up of four components: Base, Connector, Descriptor and Native IDS. Native IDS corresponds to the IDS being integrated to the DIDSoG. This component process the data received from the Origin Resources and generates new data to be sent to the Destination Resources. A Native IDS component can be any tool processes information related to intrusion detection, including analysis, data gathering, data aggregation, data correlation, intrusion response or management. The Descriptor is responsible for the information that identifies a resource and its respective Destination Resources in the DIDSoG. Figure 3 presents the class diagram of the stored information by the Descriptor. The ResourceDescriptor class has Feature, Level, DataType and Target Resources type members. Feature class represents the functionalities that a resource has. 
Type, name and version attributes refer to the functions offered by the Native IDS component, its name and version, respectively. Level class identifies the level of target and complexity in which the resource acts. DataType class represents the data format that the resource accepts to receive. DataType class is specialized by classes Text, XML and Binary. Class XML contains the DTDFile attribute to specify the DTD file that validates the received XML. Fig. 3. Class Diagram of the Descriptor component. TargetResources class represents the features of the Destination Resources of a determined resource. This class aggregates Resource. The Resource class identifies the characteristics of a Destination Resource. This identification is made through the featureType attribute and the Level and DataType classes. A given resource analyses the information from Descriptors from other resources, and compares this information with the information specified in TargetResources to know to which resources to send the results of its processing. The Base component is responsible for the communication of a resource with other resources of the DIDSoG and with the Grid Information Service. It is this component that registers the resource and the queries other resources in the GIS. The Connector component is the link between Base and Native IDS. The information that Base receives from Origin Resources is passed to Connector component. The Connector component performs the necessary changes in the data so that it is understood by Native IDS, and sends this data to Native IDS for processing. The Connector component also has the responsibility of collecting the information processed by Native IDS, and making the necessary changes so the information can pass through the DIDSoG again. After these changes, Connector sends the information to the Base, which in turn sends it to the Destination Resources in accordance with the specifications of the Descriptor component. 4. IMPLEMENTATION We have used GridSim Toolkit 3 [1] for development and evaluation of the proposed model. We have used and extended GridSim features to model and simulate the resources and components of DIDSoG. Figure 4 presents the Class diagram of the simulated DIDSoG. The Simulation_DIDSoG class starts the simulation components. The Simulation_User class represents a user of DIDSoG. This class' function is to initiate the processing of a resource Sensor, from where the gathered information will be sent to other resources. DIDSoG_GIS keeps a registry of the DIDSoG resources.The DIDSoG_BaseResource class implements the Base component (see Figure 2). DIDSoG_BaseResource interacts with DIDSoG_Descriptor class, which represents the Descriptor component. The DIDSoG_Descriptor class is created from an XML file that specifies a resource descriptor (see Figure 3). Fig. 4. Class Diagram of the simulatated DIDSoG. A Connector component must be developed for each Native IDS integrated to DIDSoG. The Connector component is implemented by creating a class derived from DIDSoG_BaseResource. The new class will implement new functionalities in accordance with the needs of the corresponding Native IDS. In the simulation environment, data collection resources, analysis, aggregation/correlation and generation of answers were integrated. Classes were developed to simulate the processing of each Native IDS components associated to the resources. For each simulated Native IDS a class derived from DIDSoG_BaseResource was developed. 
This class corresponds to the Connector component of the Native IDS and aims at the integrating the IDS to DIDSoG. A XML file describing each of the integrated resources is chosen by using the Connector component. The resulting relationship between the resources integrated to the DIDSoG, in accordance with the specification of its respective descriptors, is presented in Figure 5. The Sensor_1 and Sensor_2 resources generate simulated data in the TCPDump [8] format. The generated data is directed to Analyser_1 and Aggreg_Corr_1 resources, in the case of Sensor_1, and to Aggreg_Corr_1 in the case of Sensor_2, according to the specification of their descriptors. Fig. 5. Flow of the execution of the simulation. Level 2 The Native IDS of Analyser_1 generates alerts for any attempt of connection to port 23. The data received from Analyser_1 had presented such features, generating an IDMEF (Intrusion Detection Message Exchange Format) alert [14]. The generated alert was sent to Countermeasure_1 resource, where a warning was dispatched to the administrator informing him of the alert received. The Aggreg_Corr_1 resource received the information generated by sensors 1 and 2. Its processing activities consist in correlating the source IP addresses with the received data. The resultant information of the processing of Aggreg_Corr_1 was directed to the Analyser_2 resource. The Native IDS component of the Analyser_2 generates alerts when a source tries to connect to the same port number of multiple destinations. This situation is identified by the Analyser_2 in the data received from Aggreg_Corr_1 and an alert in IDMEF format is then sent to the Countermeasures_2 resource. In addition to generating alerts in IDMEF format, Analyser_2 also directs the received data to the Analyser_3, in the level of complexity 2. The Native IDS component of Analyser_3 generates alerts when the transmission of ICMP messages from a given source to multiple destinations is detected. This situation is detected in the data received from Analyser_2, and an IDMEF alert is then sent to the Countermeasure_2 resource. The Countermeasure_2 resource receives the alerts generated by analysers 3 and 2, in accordance with the implementation of its Native IDS component. Warnings on alerts received are dispatched to the administrator. The simulation carried out demonstrates how DIDSoG works. Simulated data was generated to be the input for a grid of intrusion detection systems composed by several distinct resources. The resources carry out tasks such as data collection, aggregation and analysis, and generation of alerts and warnings in an integrated manner. 5. EXPERIMENT RESULTS The hierarchic organization of scope and complexity provides a high degree of flexibility to the model. The DIDSoG can be modelled in accordance with the needs of each environment. The descriptors define data flow desired for the resulting DIDS. Each Native IDS is integrated to the DIDSoG through a Connector component. The Connector component is also flexible in the DIDSoG. Adaptations, conversions of data types and auxiliary processes that Native IDSs need are provided by the Connector. Filters and generation of Specific logs for each Native IDS or environment can also be incorporated to the Connector. If the integration of a new IDS to an environment already configured is desired, it is enough to develop the Connector for the desired IDS and to specify the resource Descriptor. 
After the specification of the Connector and the Descriptor the new IDS is integrated to the DIDSoG. Through the definition of scopes, resources can act on data of different source groups. For example, scope 1 can be related to a given set of hosts, scope 2 to another set of hosts, while scope 3 can be related to hosts from scopes 1 and 2. Scopes can be defined according to the needs of each environment. The complexity levels allow the distribution of the processing between several resources inside the same scope. In an analysis task, for example, the search for simple attacks can be made by resources of complexity 1, whereas the search for more complex attacks, that demands more time, can be performed by resources of complexity 2. With this, the analysis of the data is made by two resources. The distinction between complexity levels can also be organized in order to integrate different techniques of intrusion detection. The complexity level 1 could be defined for analyses based on signatures, which are simpler techniques; the complexity level 2 for techniques based on behaviour, that require greater computational power; and the complexity level 3 for intrusion detection in applications, where the techniques are more specific and depend on more data. The division of scopes and the complexity levels make the processing of the data to be carried out in phases. No resource has full knowledge about the complete data processing flow. Each resource only knows the results of its processing and the destination to which it sends the results. Resources of higher complexity must be linked to resources of lower complexity. Therefore, the hierarchic structure of the DIDSoG is maintained, facilitating its extension and integration with other domains of intrusion detection. By carrying out a hierarchic relationship between the several chosen analysers for an environment, the sensor resource is not overloaded with the task to send the data to all the analysers. An initial analyser will exist (complexity level 1) to which the sensor will send its data, and this analyser will then direct the data to the next step of the processing flow. Another feature of the hierarchical organization is the easy extension and integration with other domains. If it is necessary to add a new host (sensor) to the DIDSoG, it is enough to plug it to the first hierarchy of resources. If it is necessary to add a new analyser, it will be in the scope of several domains, it is enough to relate it to another resource of same scope. The DIDSoG allows different levels to be managed by different entities. For example, the first scope can be managed by the local user of a host. The second scope, comprising several hosts of a domain can be managed by the administrator of the domain. A third entity can be responsible for managing the security of several domains in a joint way. This entity can act in the scope 3 independently from others. With the proposed model for integration of IDSs in Grids, the different IDSs of an environment (or multiple IDSs integrated) act in a cooperative manner improving the intrusion detection services, mainly in two aspects. First, the information from multiple sources are analysed in an integrated way to search for distributed attacks. This integration can be made under several scopes. 
Second, there is a great diversity of data aggregation techniques, data correlation and analysis, and intrusion response that can be applied to the same environment; these techniques can be organized under several levels of complexity. 6. CONCLUSION The integration of heterogeneous IDSs is important. However, the incompatibility and diversity of IDS solutions make such integration extremely difficult. This work thus proposed a model for composition of DIDS by integrating existing IDSs on a computational Grid platform (DIDSoG). IDSs in DIDSoG are encapsulated as Grid services for intrusion detection. A computational Grid platform is used for the integration by providing the basic requirements for communication, localization, resource sharing and security mechanisms. The components of the architecture of the DIDSoG were developed and evaluated using the GridSim Grid simulator. Services for communication and localization were used to carry out the integration between components of different resources. Based on the components of the architecture, several resources were modelled forming a grid of intrusion detection. The simulation demonstrated the usefulness of the proposed model. Data from the sensor resources was read and this data was used to feed other resources of DIDSoG. The integration of distinct IDSs could be observed through the simulated environment. Resources providing different intrusion detection services were integrated (e.g. analysis, correlation, aggregation and alert). The communication and localization services provided by GridSim were used to integrate components of different resources. Various resources were modelled following the architecture components forming a grid of intrusion detection. The components of DIDSoG architecture have served as base for the integration of the resources presented in the simulation. During the simulation, the different IDSs cooperated with one another in a distributed manner; however, in a coordinated way with an integrated view of the events, having, thus, the capability to detect distributed attacks. This capability demonstrates that the IDSs integrated have resulted in a DIDS. Related work presents cooperation between components of a specific DIDS. Some work focus on either the development of DIDSs on computational Grids or the application of IDSs to computational Grids. However, none deals with the integration of heterogeneous IDSs. In contrast, the proposed model developed and simulated in this work, can shed some light into the question of integration of heterogeneous IDSs. DIDSoG presents new research opportunities that we would like to pursue, including: deployment of the model in a more realistic environment such as a Grid; incorporation of new security services; parallel analysis of data by Native IDSs in multiple hosts. In addition to the integration of IDSs enabled by a grid middleware, the cooperation of heterogeneous IDSs can be viewed as an economic problem. IDSs from different organizations or administrative domains need incentives for joining a grid of intrusion detection services and for collaborating with other IDSs. The development of distributed strategy proof mechanisms for integration of IDSs is a challenge that we would like to tackle.
Composition of a DIDS by Integrating Heterogeneous IDSs on Grids ABSTRACT This paper considers the composition of a DIDS (Distributed Intrusion Detection System) by integrating heterogeneous IDSs (Intrusion Detection Systems). A Grid middleware is used for this integration. In addition, an architecture for this integration is proposed and validated through simulation. 1. INTRODUCTION Solutions for integrating heterogeneous IDSs (Intrusion Detection Systems) have been proposed by several groups [6], [7], [11], [2]. Some reasons for integrating IDSs are described by the IDWG (Intrusion Detection Working Group) from the IETF (Internet Engineering Task Force) [12] as follows: • Many IDSs available in the market have strong and weak points, which generally make necessary the deployment of more than one IDS to provided an adequate solution. • Attacks and intrusions generally originate from multiple networks spanning several administrative domains; these domains usually utilize different IDSs. The integration of IDSs is then needed to correlate information from multiple networks to allow the identification of distributed attacks and or intrusions. • The interoperability/integration of different IDS components would benefit the research on intrusion detection and speed up the deployment of IDSs as commercial products. DIDSs (Distributed Intrusion Detection Systems) therefore started to emerge in early 90s [9] to allow the correlation of intrusion information from multiple hosts, networks or domains to detect distributed attacks. Research on DIDSs has then received much interest, mainly because centralised IDSs are not able to provide the information needed to prevent such attacks [13]. However, the realization of a DIDS requires a high degree of coordination. Computational Grids are appealing as they enable the development of distributed application and coordination in a distributed environment. Grid computing aims to enable coordinate resource sharing in dynamic groups of individuals and/or organizations. Moreover, Grid middleware provides means for secure access, management and allocation of remote resources; resource information services; and protocols and mechanisms for transfer of data [4]. According to Foster et al. [4], Grids can be viewed as a set of aggregate services defined by the resources that they share. OGSA (Open Grid Service Architecture) provides the foundation for this service orientation in computational Grids. The services in OGSA are specified through well-defined, open, extensible and platformindependent interfaces, which enable the development of interoperable applications. This article proposes a model for integration of IDSs by using computational Grids. The proposed model enables heterogeneous IDSs to work in a cooperative way; this integration is termed DIDSoG (Distributed Intrusion Detection System on Grid). Each of the integrated IDSs is viewed by others as a resource accessed through the services that it exposes. A Grid middleware provides several features for the realization of a DIDSoG, including [3]: decentralized coordination of resources; use of standard protocols and interfaces; and the delivery of optimized QoS (Quality of Service). The service oriented architecture followed by Grids (OGSA) allows the definition of interfaces that are adaptable to different platforms. Different implementations can be encapsulated by a service interface; this virtualisation allows the consistent access to resources in heterogeneous environments [3]. 
The virtualisation of the environment through service interfaces allows the use of services without the knowledge of how they are actually implemented. This characteristic is important for the integration of IDSs as the same service interfaces can be exposed by different IDSs. Grid middleware can thus be used to implement a great variety of services. Some functions provided by Grid middleware are [3]: (i) data management services, including access services, replication, and localisation; (ii) workflow services that implement coordinate execution of multiple applications on multiple resources; (iii) auditing services that perform the detection of frauds or intrusions; (iv) monitoring services which implement the discovery of sensors in a distributed environment and generate alerts under determined conditions; (v) services for identification of problems in a distributed environment, which implement the correlation of information from disparate and distributed logs. These services are important for the implementation of a DIDSoG. A DIDS needs services for the location of and access to distributed data from different IDSs. Auditing and monitoring services take care of the proper needs of the DIDSs such as: secure storage, data analysis to detect intrusions, discovery of distributed sensors, and sending of alerts. The correlation of distributed logs is also relevant because the detection of distributed attacks depends on the correlation of the alert information generated by the different IDSs that compose the DIDSoG. The next sections of this article are organized as follows. Section 2 presents related work. The proposed model is presented in Section 3. Section 4 describes the development and a case study. Results and discussion are presented in Section 5. Conclusions and future work are discussed in Section 6. 2. RELATED WORK DIDMA [5] is a flexible, scalable, reliable, and platformindependent DIDS. DIDMA architecture allows distributed analysis of events and can be easily extended by developing new agents. However, the integration with existing IDSs and the development of security components are presented as future work [5]. The extensibility of DIDS DIDMA and the integration with other IDSs are goals pursued by DIDSoG. The flexibility, scalability, platform independence, reliability and security components discussed in [5] are achieved in DIDSoG by using a Grid platform. More efficient techniques for analysis of great amounts of data in wide scale networks based on clustering and applicable to DIDSs are presented in [13]. The integration of heterogeneous IDSs to increase the variety of intrusion detection techniques in the environment is mentioned as future work [13] DIDSoG thus aims at integrating heterogeneous IDSs [13]. Ref. [10] presents a hierarchical architecture for a DIDS; information is collected, aggregated, correlated and analysed as it is sent up in the hierarchy. The architecture comprises of several components for: monitoring, correlation, intrusion detection by statistics, detection by signatures and answers. Components in the same level of the hierarchy cooperate with one another. The integration proposed by DIDSoG also follows a hierarchical architecture. Each IDS integrated to the DIDSoG offers functionalities at a given level of the hierarchy and requests functionalities from IDSs from another level. The hierarchy presented in [10] integrates homogeneous IDSs whereas the hierarchical architecture of DIDSoG integrates heterogeneous IDSs. 
There are proposals on integrating computational Grids and IDSs [6], [7], [11], [2]. Ref. [6] and [7] propose the use of Globus Toolkit for intrusion detection, especially for DoS (Denial of Service) and DDoS (Distributed Denial of Service) attacks; Globus is used due to the need to process great amounts of data to detect these kinds of attack. A two-phase processing architecture is presented. The first phase aims at the detection of momentary attacks, while the second phase is concerned with chronic or perennial attacks. Traditional IDSs or DIDSs are generally coordinated by a central point; a characteristic that leaves them prone to attacks. Leu et al. [6] point out that IDSs developed upon Grids platforms are less vulnerable to attacks because of the distribution provided for such platforms. Leu et al. [6], [7] have used tools to generate several types of attacks - including TCP, ICMP and UDP flooding - and have demonstrated through experimental results the advantages of applying computational Grids to IDSs. This work proposes the development of a DIDS upon a Grid platform. However, the resulting DIDS integrates heterogeneous IDSs whereas the DIDSs upon Grids presented by Leu et al. [6] [7] do not consider the integration of heterogeneous IDSs. The processing in phases [6] [7] is also contemplated by DIDSoG, which is enabled by the specification of several levels of processing allowed by the integration of heterogeneous IDSs. The DIDS GIDA (Grid Intrusion Detection Architecture) targets at the detection of intrusions in a Grid environment [11]. GridSim Grid simulator was used for the validation of DIDS GIDA. Homogeneous resources were used to simplify the development [11]. However, the possibility of applying heterogeneous detection systems is left for future work Another DIDS for Grids is presented by Choon and Samsudim [2]. Scenarios demonstrating how a DIDS can execute on a Grid environment are presented. DIDSoG does not aim at detecting intrusions in a Grid environment. In contrast, DIDSoG uses the Grid to compose a DIDS by integrating specific IDSs; the resulting DIDS could however be used to identify attacks in a Grid environment. Local and distributed attacks can be detected through the integration of traditional IDSs while attacks particular to Grids can be detected through the integration of Grid IDSs. 3. THE PROPOSED MODEL Grid Origin Resources Grid Resource Grid Information Service Grid Destination Resources 4. IMPLEMENTATION 5. EXPERIMENT RESULTS 6. CONCLUSION The integration of heterogeneous IDSs is important. However, the incompatibility and diversity of IDS solutions make such integration extremely difficult. This work thus proposed a model for composition of DIDS by integrating existing IDSs on a computational Grid platform (DIDSoG). IDSs in DIDSoG are encapsulated as Grid services for intrusion detection. A computational Grid platform is used for the integration by providing the basic requirements for communication, localization, resource sharing and security mechanisms. The components of the architecture of the DIDSoG were developed and evaluated using the GridSim Grid simulator. Services for communication and localization were used to carry out the integration between components of different resources. Based on the components of the architecture, several resources were modelled forming a grid of intrusion detection. The simulation demonstrated the usefulness of the proposed model. 
Data from the sensor resources was read and used to feed the other resources of DIDSoG. The integration of distinct IDSs could be observed through the simulated environment. Resources providing different intrusion detection services were integrated (e.g. analysis, correlation, aggregation and alert). The communication and localization services provided by GridSim were used to integrate components of different resources. Various resources were modelled following the architecture components, forming a grid of intrusion detection. The components of the DIDSoG architecture served as the basis for the integration of the resources presented in the simulation. During the simulation, the different IDSs cooperated with one another in a distributed yet coordinated way, with an integrated view of the events, and thus with the capability to detect distributed attacks. This capability demonstrates that the integrated IDSs have resulted in a DIDS. Related work presents cooperation between components of a specific DIDS. Some works focus on either the development of DIDSs on computational Grids or the application of IDSs to computational Grids; however, none deals with the integration of heterogeneous IDSs. In contrast, the proposed model, developed and simulated in this work, can shed some light on the question of integrating heterogeneous IDSs. DIDSoG presents new research opportunities that we would like to pursue, including: deployment of the model in a more realistic environment such as a Grid; incorporation of new security services; and parallel analysis of data by Native IDSs on multiple hosts. In addition to the integration of IDSs enabled by a Grid middleware, the cooperation of heterogeneous IDSs can be viewed as an economic problem. IDSs from different organizations or administrative domains need incentives for joining a grid of intrusion detection services and for collaborating with other IDSs. The development of distributed strategy-proof mechanisms for the integration of IDSs is a challenge that we would like to tackle.
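As a rough, self-contained analogue of the simulated flow just described, where sensor data is read and fed to analysis, correlation/aggregation and alert resources, the sketch below chains these roles as plain Python functions. It is not GridSim code (GridSim is a Java toolkit), and the event format, the suspicious patterns and the correlation threshold are hypothetical choices made only to illustrate the pipeline.

from typing import Dict, List

def sensor(log_lines: List[str]) -> List[Dict]:
    """Sensor resource: turn raw log lines into event records."""
    return [{"host": line.split()[0], "event": line} for line in log_lines]

def analyser(events: List[Dict]) -> List[Dict]:
    """Analysis resource: flag events that look suspicious."""
    suspicious = ("failed login", "port scan")
    return [e for e in events if any(s in e["event"] for s in suspicious)]

def correlator(alerts: List[Dict], threshold: int = 2) -> List[str]:
    """Correlation/aggregation resource: the same kind of suspicious event
    reported by several different hosts suggests a distributed attack."""
    hosts_by_kind = {}
    for a in alerts:
        for kind in ("failed login", "port scan"):
            if kind in a["event"]:
                hosts_by_kind.setdefault(kind, set()).add(a["host"])
    return [f"possible distributed '{kind}' attack across hosts {sorted(hosts)}"
            for kind, hosts in hosts_by_kind.items() if len(hosts) >= threshold]

def alerter(messages: List[str]) -> None:
    """Alert resource: deliver the correlated alerts."""
    for m in messages:
        print("ALERT:", m)

if __name__ == "__main__":
    logs = ["10.0.0.1 failed login for root",
            "10.0.0.2 failed login for admin",
            "10.0.0.2 port scan detected",
            "10.0.0.3 normal traffic"]
    alerter(correlator(analyser(sensor(logs))))

In DIDSoG the corresponding resources would be distributed over the Grid and discovered through the middleware's communication and localization services, rather than called directly as local functions.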
Composition of a DIDS by Integrating Heterogeneous IDSs on Grids ABSTRACT This paper considers the composition of a DIDS (Distributed Intrusion Detection System) by integrating heterogeneous IDSs (Intrusion Detection Systems). A Grid middleware is used for this integration. In addition, an architecture for this integration is proposed and validated through simulation. 1. INTRODUCTION Solutions for integrating heterogeneous IDSs (Intrusion Detection Systems) have been proposed by several groups [6], [7], [11], [2]. • Attacks and intrusions generally originate from multiple networks spanning several administrative domains; these domains usually utilize different IDSs. The integration of IDSs is thus needed to correlate information from multiple networks and allow the identification of distributed attacks and/or intrusions. • The interoperability/integration of different IDS components would benefit research on intrusion detection and speed up the deployment of IDSs as commercial products. Research on DIDSs has thus received much interest, mainly because centralised IDSs are not able to provide the information needed to prevent such attacks [13]. However, the realization of a DIDS requires a high degree of coordination. Computational Grids are appealing as they enable the development of distributed applications and coordination in a distributed environment. Grid computing aims to enable coordinated resource sharing in dynamic groups of individuals and/or organizations. According to Foster et al. [4], Grids can be viewed as a set of aggregate services defined by the resources that they share. OGSA (Open Grid Service Architecture) provides the foundation for this service orientation in computational Grids. This article proposes a model for the integration of IDSs using computational Grids. The proposed model enables heterogeneous IDSs to work in a cooperative way; this integration is termed DIDSoG (Distributed Intrusion Detection System on Grid). Each of the integrated IDSs is viewed by the others as a resource accessed through the services that it exposes. The service-oriented architecture followed by Grids (OGSA) allows the definition of interfaces that are adaptable to different platforms. Different implementations can be encapsulated by a service interface; this virtualisation allows consistent access to resources in heterogeneous environments [3]. The virtualisation of the environment through service interfaces allows the use of services without knowledge of how they are actually implemented. This characteristic is important for the integration of IDSs, as the same service interfaces can be exposed by different IDSs. Grid middleware can thus be used to implement a great variety of services. These services are important for the implementation of a DIDSoG. A DIDS needs services for the location of, and access to, distributed data from different IDSs. The correlation of distributed logs is also relevant because the detection of distributed attacks depends on the correlation of the alert information generated by the different IDSs that compose the DIDSoG. The next sections of this article are organized as follows. Section 2 presents related work. The proposed model is presented in Section 3. Section 4 describes the development and a case study. Results and discussion are presented in Section 5. Conclusions and future work are discussed in Section 6. 2. RELATED WORK The DIDMA architecture allows distributed analysis of events and can be easily extended by developing new agents.
However, the integration with existing IDSs and the development of security components are presented as future work [5]. The extensibility of the DIDMA DIDS and the integration with other IDSs are goals pursued by DIDSoG. The flexibility, scalability, platform independence, reliability and security components discussed in [5] are achieved in DIDSoG by using a Grid platform. The integration of heterogeneous IDSs, to increase the variety of intrusion detection techniques in the environment, is mentioned as future work in [13]; DIDSoG thus aims at integrating heterogeneous IDSs. Ref. [10] presents a hierarchical architecture for a DIDS: information is collected, aggregated, correlated and analysed as it is sent up the hierarchy. The architecture comprises several components for monitoring, correlation, intrusion detection by statistics, detection by signatures, and responses. The integration proposed by DIDSoG also follows a hierarchical architecture. Each IDS integrated into the DIDSoG offers functionalities at a given level of the hierarchy and requests functionalities from IDSs at another level. The hierarchy presented in [10] integrates homogeneous IDSs, whereas the hierarchical architecture of DIDSoG integrates heterogeneous IDSs. There are proposals on integrating computational Grids and IDSs [6], [7], [11], [2]. A two-phase processing architecture is presented in [6], [7]. Traditional IDSs or DIDSs are generally coordinated by a central point, a characteristic that leaves them prone to attacks. Leu et al. [6] point out that IDSs developed upon Grid platforms are less vulnerable to attacks because of the distribution provided by such platforms. The present work also proposes the development of a DIDS upon a Grid platform. However, the resulting DIDS integrates heterogeneous IDSs, whereas the DIDSs upon Grids presented by Leu et al. [6], [7] do not consider the integration of heterogeneous IDSs. The processing in phases of [6], [7] is also contemplated by DIDSoG, enabled by the specification of several levels of processing made possible by the integration of heterogeneous IDSs. The DIDS GIDA (Grid Intrusion Detection Architecture) targets the detection of intrusions in a Grid environment [11]. The GridSim Grid simulator was used for the validation of DIDS GIDA. Homogeneous resources were used to simplify the development [11]; however, the possibility of applying heterogeneous detection systems is left for future work. Another DIDS for Grids is presented by Choon and Samsudim [2]. Scenarios demonstrating how a DIDS can execute in a Grid environment are presented. DIDSoG does not aim at detecting intrusions in a Grid environment. In contrast, DIDSoG uses the Grid to compose a DIDS by integrating specific IDSs; the resulting DIDS could, however, be used to identify attacks in a Grid environment. Local and distributed attacks can be detected through the integration of traditional IDSs, while attacks particular to Grids can be detected through the integration of Grid IDSs. 6. CONCLUSION The integration of heterogeneous IDSs is important. However, the incompatibility and diversity of IDS solutions make such integration extremely difficult. This work thus proposed a model for the composition of a DIDS by integrating existing IDSs on a computational Grid platform (DIDSoG). IDSs in DIDSoG are encapsulated as Grid services for intrusion detection. A computational Grid platform is used for the integration by providing the basic requirements for communication, localization, resource sharing and security mechanisms.
The components of the architecture of the DIDSoG were developed and evaluated using the GridSim Grid simulator. Services for communication and localization were used to carry out the integration between components of different resources. Based on the components of the architecture, several resources were modelled, forming a grid of intrusion detection. The simulation demonstrated the usefulness of the proposed model. Data from the sensor resources was read and used to feed the other resources of DIDSoG. The integration of distinct IDSs could be observed through the simulated environment. Resources providing different intrusion detection services were integrated (e.g. analysis, correlation, aggregation and alert). The communication and localization services provided by GridSim were used to integrate components of different resources. Various resources were modelled following the architecture components, forming a grid of intrusion detection. The components of the DIDSoG architecture served as the basis for the integration of the resources presented in the simulation. During the simulation, the different IDSs cooperated with one another in a distributed yet coordinated way, with an integrated view of the events, and thus with the capability to detect distributed attacks. This capability demonstrates that the integrated IDSs have resulted in a DIDS. Related work presents cooperation between components of a specific DIDS. Some works focus on either the development of DIDSs on computational Grids or the application of IDSs to computational Grids; however, none deals with the integration of heterogeneous IDSs. In contrast, the proposed model, developed and simulated in this work, can shed some light on the question of integrating heterogeneous IDSs. In addition to the integration of IDSs enabled by a Grid middleware, the cooperation of heterogeneous IDSs can be viewed as an economic problem. IDSs from different organizations or administrative domains need incentives for joining a grid of intrusion detection services and for collaborating with other IDSs. The development of distributed strategy-proof mechanisms for the integration of IDSs is a challenge that we would like to tackle.
J-51
Complexity of (Iterated) Dominance
We study various computational aspects of solving games using dominance and iterated dominance. We first study both strict and weak dominance (not iterated), and show that checking whether a given strategy is dominated by some mixed strategy can be done in polynomial time using a single linear program solve. We then move on to iterated dominance. We show that determining whether there is some path that eliminates a given strategy is NP-complete with iterated weak dominance. This allows us to also show that determining whether there is a path that leads to a unique solution is NP-complete. Both of these results hold both with and without dominance by mixed strategies. (A weaker version of the second result (only without dominance by mixed strategies) was already known [7].) Iterated strict dominance, on the other hand, is path-independent (both with and without dominance by mixed strategies) and can therefore be done in polynomial time. We then study what happens when the dominating strategy is allowed to place positive probability on only a few pure strategies. First, we show that finding the dominating strategy with minimum support size is NP-complete (both for strict and weak dominance). Then, we show that iterated strict dominance becomes path-dependent when there is a limit on the support size of the dominating strategies, and that deciding whether a given strategy can be eliminated by iterated strict dominance under this restriction is NP-complete (even when the limit on the support size is 3). Finally, we study Bayesian games. We show that, unlike in normal form games, deciding whether a given pure strategy is dominated by another pure strategy in a Bayesian game is NP-complete (both with strict and weak dominance); however, deciding whether a strategy is dominated by some mixed strategy can still be done in polynomial time with a single linear program solve (both with strict and weak dominance). Finally, we show that iterated dominance using pure strategies can require an exponential number of iterations in a Bayesian game (both with strict and weak dominance).
[ "domin", "domin", "iter domin", "strategi", "elimin", "bayesian game", "normal form game", "multiag system", "self-interest agent", "optim action", "game theori", "nash equilibrium" ]
[ "P", "P", "P", "P", "P", "P", "P", "U", "U", "U", "M", "U" ]
Complexity of (Iterated) Dominance∗ Vincent Conitzer Carnegie Mellon University 5000 Forbes Avenue Pittsburgh, PA 15213, USA conitzer@cs.cmu.edu Tuomas Sandholm Carnegie Mellon University 5000 Forbes Avenue Pittsburgh, PA 15213, USA sandholm@cs.cmu.edu ABSTRACT We study various computational aspects of solving games using dominance and iterated dominance. We first study both strict and weak dominance (not iterated), and show that checking whether a given strategy is dominated by some mixed strategy can be done in polynomial time using a single linear program solve. We then move on to iterated dominance. We show that determining whether there is some path that eliminates a given strategy is NP-complete with iterated weak dominance. This allows us to also show that determining whether there is a path that leads to a unique solution is NP-complete. Both of these results hold both with and without dominance by mixed strategies. (A weaker version of the second result (only without dominance by mixed strategies) was already known [7].) Iterated strict dominance, on the other hand, is path-independent (both with and without dominance by mixed strategies) and can therefore be done in polynomial time. We then study what happens when the dominating strategy is allowed to place positive probability on only a few pure strategies. First, we show that finding the dominating strategy with minimum support size is NP-complete (both for strict and weak dominance). Then, we show that iterated strict dominance becomes path-dependent when there is a limit on the support size of the dominating strategies, and that deciding whether a given strategy can be eliminated by iterated strict dominance under this restriction is NP-complete (even when the limit on the support size is 3). Finally, we study Bayesian games. We show that, unlike in normal form games, deciding whether a given pure strategy is dominated by another pure strategy in a Bayesian game is NP-complete (both with strict and weak dominance); however, deciding whether a strategy is dominated by some mixed strategy can still be done in polynomial time with a single linear program solve (both with strict and weak ∗ This material is based upon work supported by the National Science Foundation under ITR grants IIS-0121678 and IIS-0427858, and a Sloan Fellowship. dominance). Finally, we show that iterated dominance using pure strategies can require an exponential number of iterations in a Bayesian game (both with strict and weak dominance). Categories and Subject Descriptors F.2 [Theory of Computation]: Analysis of Algorithms and Problem Complexity; J.4 [Computer Applications]: Social and Behavioral Sciences-Economics General Terms Algorithms, Economics, Theory 1. INTRODUCTION In multiagent systems with self-interested agents, the optimal action for one agent may depend on the actions taken by other agents. In such settings, the agents require tools from game theory to rationally decide on an action. Game theory offers various formal models of strategic settings-the best-known of which is a game in normal (or matrix) form, specifying a utility (payoff) for each agent for each combination of strategies that the agents choose-as well as solution concepts, which, given a game, specify which outcomes are reasonable (under various assumptions of rationality and common knowledge). Probably the best-known (and certainly the most-studied) solution concept is that of Nash equilibrium. 
A Nash equilibrium specifies a strategy for each player, in such a way that no player has an incentive to (unilaterally) deviate from the prescribed strategy. Recently, numerous papers have studied computing Nash equilibria in various settings [9, 4, 12, 3, 13, 14], and the complexity of constructing a Nash equilibrium in normal form games has been labeled one of the two most important open problems on the boundary of P today [20]. The problem of computing solutions according to the perhaps more elementary solution concepts of dominance and iterated dominance has received much less attention. (After an early short paper on an easy case [11], the main computational study of these concepts has actually taken place in a paper in the game theory community [7].1 ) A strategy strictly dominates another strategy if it performs strictly 1 This is not to say that computer scientists have ignored 88 better against all vectors of opponent strategies, and weakly dominates it if it performs at least as well against all vectors of opponent strategies, and strictly better against at least one. The idea is that dominated strategies can be eliminated from consideration. In iterated dominance, the elimination proceeds in rounds, and becomes easier as more strategies are eliminated: in any given round, the dominating strategy no longer needs to perform better than or as well as the dominated strategy against opponent strategies that were eliminated in earlier rounds. Computing solutions according to (iterated) dominance is important for at least the following reasons: 1) it can be computationally easier than computing (for instance) a Nash equilibrium (and therefore it can be useful as a preprocessing step in computing a Nash equilibrium), and 2) (iterated) dominance requires a weaker rationality assumption on the players than (for instance) Nash equilibrium, and therefore solutions derived according to it are more likely to occur. In this paper, we study some fundamental computational questions concerning dominance and iterated dominance, including how hard it is to check whether a given strategy can be eliminated by each of the variants of these notions. The rest of the paper is organized as follows. In Section 2, we briefly review definitions and basic properties of normal form games, strict and weak dominance, and iterated strict and weak dominance. In the remaining sections, we study computational aspects of dominance and iterated dominance. In Section 3, we study one-shot (not iterated) dominance. In Section 4, we study iterated dominance. In Section 5, we study dominance and iterated dominance when the dominating strategy can only place probability on a few pure strategies. Finally, in Section 6, we study dominance and iterated dominance in Bayesian games. 2. DEFINITIONS AND BASIC PROPERTIES In this section, we briefly review normal form games, as well as dominance and iterated dominance (both strict and weak). An n-player normal form game is defined as follows. Definition 1. A normal form game is given by a set of players {1, 2, ... , n}; and, for each player i, a (finite) set of pure strategies Σi and a utility function ui : Σ1 × Σ2 × ... × Σn → R (where ui(σ1, σ2, ... , σn) denotes player i``s utility when each player j plays action σj). The two main notions of dominance are defined as follows. Definition 2. Player i``s strategy σi is said to be strictly dominated by player i``s strategy σi if for any vector of strategies σ−i for the other players, ui(σi, σ−i) > ui(σi, σ−i). 
Player i``s strategy σi is said to be weakly dominated by player i``s strategy σi if for any vector of strategies σ−i for the other players, ui(σi, σ−i) ≥ ui(σi, σ−i), and for at least one vector of strategies σ−i for the other players, ui(σi, σ−i) > ui(σi, σ−i). In this definition, it is sometimes allowed for the dominating strategy σi to be a mixed strategy, that is, a probability distribution over pure strategies. In this case, the utilities in dominance altogether. For example, simple dominance checks are sometimes used as a subroutine in searching for Nash equilibria [21]. the definition are the expected utilities.2 There are other notions of dominance, such as very weak dominance (in which no strict inequality is required, so two strategies can dominate each other), but we will not study them here. When we are looking at the dominance relations for player i, the other players (−i) can be thought of as a single player.3 Therefore, in the rest of the paper, when we study one-shot (not iterated) dominance, we will focus without loss of generality on two-player games.4 In two-player games, we will generally refer to the players as r (row) and c (column) rather than 1 and 2. In iterated dominance, dominated strategies are removed from the game, and no longer have any effect on future dominance relations. Iterated dominance can eliminate more strategies than dominance, as follows. σr may originally not dominate σr because the latter performs better against σc; but then, once σc is removed because it is dominated by σc, σr dominates σr, and the latter can be removed. For example, in the following game, R can be removed first, after which D is dominated. L R U 1, 1 0, 0 D 0, 1 1, 0 Either strict or weak dominance can be used in the definition of iterated dominance. We note that the process of iterated dominance is never helped by removing a dominated mixed strategy, for the following reason. If σi gives player i a higher utility than σi against mixed strategy σj for player j = i (and strategies σ−{i,j} for the other players), then for at least one pure strategy σj that σj places positive probability on, σi must perform better than σi against σj (and strategies σ−{i,j} for the other players). Thus, removing the mixed strategy σj does not introduce any new dominances. More detailed discussions and examples can be found in standard texts on microeconomics or game theory [17, 5]. We are now ready to move on to the core of this paper. 3. DOMINANCE (NOT ITERATED) In this section, we study the notion of one-shot (not iterated) dominance. As a first observation, checking whether a given strategy is strictly (weakly) dominated by some pure strategy is straightforward, by checking, for every pure strategy for that player, whether the latter strategy performs strictly better against all the opponent``s strategies (at least as well against all the opponent``s strategies, and strictly 2 The dominated strategy σi is, of course, also allowed to be mixed, but this has no technical implications for the paper: when we study one-shot dominance, we ask whether a given strategy is dominated, and it does not matter whether the given strategy is pure or mixed; when we study iterated dominance, there is no use in eliminating mixed strategies, as we will see shortly. 3 This player may have a very large strategy space (one pure strategy for every vector of pure strategies for the players that are being replaced). 
Nevertheless, this will not result in an increase in our representation size, because the original representation already had to specify utilities for each of these vectors. (Footnote 4: We note that a restriction to two-player games would not be without loss of generality for iterated dominance. This is because, for iterated dominance, we need to look at the dominated strategies of each individual player, so we cannot merge any players.) better against at least one). (Footnote 5: Recall that the assumption of a single opponent, that is, the assumption of two players, is without loss of generality for one-shot dominance.) Next, we show that checking whether a given strategy is dominated by some mixed strategy can be done in polynomial time by solving a single linear program. (Similar linear programs have been given before [18]; we present the result here for completeness, and because we will build on the linear programs given below in Theorem 6.) Proposition 1. Given the row player's utilities, a subset Dr of the row player's pure strategies Σr, and a distinguished strategy σ∗r for the row player, we can check in time polynomial in the size of the game (by solving a single linear program of polynomial size) whether there exists some mixed strategy σr that places positive probability only on strategies in Dr and dominates σ∗r, both for strict and for weak dominance. Proof. Let pdr be the probability that σr places on dr ∈ Dr. We will solve a single linear program in each of our algorithms; linear programs can be solved in polynomial time [10]. For strict dominance, the question is whether the pdr can be set so that for every pure strategy for the column player σc ∈ Σc, Σ_{dr∈Dr} pdr ur(dr, σc) > ur(σ∗r, σc). Because the inequality must be strict, we cannot solve this directly by linear programming. We proceed as follows. Because the game is finite, we may assume without loss of generality that all utilities are positive (if not, simply add a constant to all utilities). Solve the following linear program: minimize Σ_{dr∈Dr} pdr subject to, for any σc ∈ Σc, Σ_{dr∈Dr} pdr ur(dr, σc) ≥ ur(σ∗r, σc). If σ∗r is strictly dominated by some mixed strategy, this linear program has a solution with objective value < 1. (The dominating strategy is a feasible solution with objective value exactly 1. Because no constraint is binding for this solution, we can reduce one of the probabilities slightly without affecting feasibility, thereby obtaining a solution with objective value < 1.) Moreover, if this linear program has a solution with objective value < 1, there is a mixed strategy strictly dominating σ∗r, which can be obtained by taking the LP solution and adding the remaining probability to any strategy (because all the utilities are positive, this will add to the left side of any inequality, so all inequalities will become strict). For weak dominance, we can solve the following linear program: maximize Σ_{σc∈Σc} ((Σ_{dr∈Dr} pdr ur(dr, σc)) − ur(σ∗r, σc)) subject to: for any σc ∈ Σc, Σ_{dr∈Dr} pdr ur(dr, σc) ≥ ur(σ∗r, σc); and Σ_{dr∈Dr} pdr = 1. If σ∗r is weakly dominated by some mixed strategy, then that mixed strategy is a feasible solution to this program with objective value > 0, because for at least one strategy σc ∈ Σc we have (Σ_{dr∈Dr} pdr ur(dr, σc)) − ur(σ∗r, σc) > 0. On the other hand, if this program has a solution with objective value > 0, then for at least one strategy σc ∈ Σc we must have (Σ_{dr∈Dr} pdr ur(dr, σc)) − ur(σ∗r, σc) > 0, and thus the linear program's solution is a weakly dominating mixed strategy. 4.
ITERATED DOMINANCE We now move on to iterated dominance. It is well-known that iterated strict dominance is path-independent [6, 19]that is, if we remove dominated strategies until no more dominated strategies remain, in the end the remaining strategies for each player will be the same, regardless of the order in which strategies are removed. Because of this, to see whether a given strategy can be eliminated by iterated strict dominance, all that needs to be done is to repeatedly remove strategies that are strictly dominated, until no more dominated strategies remain. Because we can check in polynomial time whether any given strategy is dominated (whether or not dominance by mixed strategies is allowed, as described in Section 3), this whole procedure takes only polynomial time. In the case of iterated dominance by pure strategies with two players, Knuth et al. [11] slightly improve on (speed up) the straightforward implementation of this procedure by keeping track of, for each ordered pair of strategies for a player, the number of opponent strategies that prevent the first strategy from dominating the second. Hereby the runtime for an m × n game is reduced from O((m + n)4 ) to O((m + n)3 ). (Actually, they only study very weak dominance (for which no strict inequalities are required), but the approach is easily extended.) In contrast, iterated weak dominance is known to be pathdependent.6 For example, in the following game, using iterated weak dominance we can eliminate M first, and then D, or R first, and then U. L M R U 1, 1 0, 0 1, 0 D 1, 1 1, 0 0, 0 Therefore, while the procedure of removing weakly dominated strategies until no more weakly dominated strategies remain can certainly be executed in polynomial time, which strategies survive in the end depends on the order in which we remove the dominated strategies. We will investigate two questions for iterated weak dominance: whether a given strategy is eliminated in some path, and whether there is a path to a unique solution (one pure strategy left per player). We will show that both of these problems are computationally hard. Definition 3. Given a game in normal form and a distinguished strategy σ∗ , IWD-STRATEGY-ELIMINATION asks whether there is some path of iterated weak dominance that eliminates σ∗ . Given a game in normal form, IWDUNIQUE-SOLUTION asks whether there is some path of iterated weak dominance that leads to a unique solution (one strategy left per player). The following lemma shows a special case of normal form games in which allowing for weak dominance by mixed strategies (in addition to weak dominance by pure strategies) does 6 There is, however, a restriction of weak dominance called nice weak dominance which is path-independent [15, 16]. For an overview of path-independence results, see Apt [1]. 90 not help. We will prove the hardness results in this setting, so that they will hold whether or not dominance by mixed strategies is allowed. Lemma 1. Suppose that all the utilities in a game are in {0, 1}. Then every pure strategy that is weakly dominated by a mixed strategy is also weakly dominated by a pure strategy. Proof. Suppose pure strategy σ is weakly dominated by mixed strategy σ∗ . If σ gets a utility of 1 against some opponent strategy (or vector of opponent strategies if there are more than 2 players), then all the pure strategies that σ∗ places positive probability on must also get a utility of 1 against that opponent strategy (or else the expected utility would be smaller than 1). 
Moreover, at least one of the pure strategies that σ∗ places positive probability on must get a utility of 1 against an opponent strategy that σ gets 0 against (or else the inequality would never be strict). It follows that this pure strategy weakly dominates σ. We are now ready to prove the main results of this section. Theorem 1. IWD-STRATEGY-ELIMINATION is NPcomplete, even with 2 players, and with 0 and 1 being the only utilities occurring in the matrix-whether or not dominance by mixed strategies is allowed. Proof. The problem is in NP because given a sequence of strategies to be eliminated, we can easily check whether this is a valid sequence of eliminations (even when dominance by mixed strategies is allowed, using Proposition 1). To show that the problem is NP-hard, we reduce an arbitrary satisfiability instance (given by a nonempty set of clauses C over a nonempty set of variables V , with corresponding literals L = {+v : v ∈ V } ∪ {−v : v ∈ V }) to the following IWD-STRATEGY-ELIMINATION instance. (In this instance, we will specify that certain strategies are uneliminable. A strategy σr can be made uneliminable, even when 0 and 1 are the only allowed utilities, by adding another strategy σr and another opponent strategy σc, so that: 1. σr and σr are the only strategies that give the row player a utility of 1 against σc. 2. σr and σr always give the row player the same utility. 3. σc is the only strategy that gives the column player a utility of 1 against σr, but otherwise σc always gives the column player utility 0. This makes it impossible to eliminate any of these three strategies. We will not explicitly specify the additional strategies to make the proof more legible.) In this proof, we will denote row player strategies by s, and column player strategies by t, to improve legibility. Let the row player``s pure strategy set be given as follows. For every variable v ∈ V , the row player has corresponding strategies s1 +v, s2 +v, s1 −v, s2 −v. Additionally, the row player has the following 2 strategies: s1 0 and s2 0, where s2 0 = σ∗ r (that is, it is the strategy we seek to eliminate). Finally, for every clause c ∈ C, the row player has corresponding strategies s1 c (uneliminable) and s2 c. Let the column player``s pure strategy set be given as follows. For every variable v ∈ V , the column player has a corresponding strategy tv. For every clause c ∈ C, the column player has a corresponding strategy tc, and additionally, for every literal l ∈ L that occurs in c, a strategy tc,l. For every variable v ∈ V , the column player has corresponding strategies t+v, t−v (both uneliminable). Finally, the column player has three additional strategies: t1 0 (uneliminable), t2 0, and t1. 
The utility function for the row player is given as follows: • ur(s1 +v, tv) = 0 for all v ∈ V ; • ur(s2 +v, tv) = 1 for all v ∈ V ; • ur(s1 −v, tv) = 1 for all v ∈ V ; • ur(s2 −v, tv) = 0 for all v ∈ V ; • ur(s1 +v, t1) = 1 for all v ∈ V ; • ur(s2 +v, t1) = 0 for all v ∈ V ; • ur(s1 −v, t1) = 0 for all v ∈ V ; • ur(s2 −v, t1) = 1 for all v ∈ V ; • ur(sb +v, t+v) = 1 for all v ∈ V and b ∈ {1, 2}; • ur(sb −v, t−v) = 1 for all v ∈ V and b ∈ {1, 2}; • ur(sl, t) = 0 otherwise for all l ∈ L and t ∈ S2; • ur(s1 0, tc) = 0 for all c ∈ C; • ur(s2 0, tc) = 1 for all c ∈ C; • ur(sb 0, t1 0) = 1 for all b ∈ {1, 2}; • ur(s1 0, t2 0) = 1; • ur(s2 0, t2 0) = 0; • ur(sb 0, t) = 0 otherwise for all b ∈ {1, 2} and t ∈ S2; • ur(sb c, t) = 0 otherwise for all c ∈ C and b ∈ {1, 2}; and the row player``s utility is 0 in every other case. The utility function for the column player is given as follows: • uc(s, tv) = 0 for all v ∈ V and s ∈ S1; • uc(s, t1) = 0 for all s ∈ S1; • uc(s2 l , tc) = 1 for all c ∈ C and l ∈ L where l ∈ c (literal l occurs in clause c); • uc(s2 l2 , tc,l1 ) = 1 for all c ∈ C and l1, l2 ∈ L, l1 = l2 where l2 ∈ c; • uc(s1 c, tc) = 1 for all c ∈ C; • uc(s2 c, tc) = 0 for all c ∈ C; • uc(sb c, tc,l) = 1 for all c ∈ C, l ∈ L, and b ∈ {1, 2}; • uc(s2, tc) = uc(s2, tc,l) = 0 otherwise for all c ∈ C and l ∈ L; and the column player``s utility is 0 in every other case. We now show that the two instances are equivalent. First, suppose there is a solution to the satisfiability instance: that is, a truth-value assignment to the variables in V such that all clauses are satisfied. Then, consider the following sequence of eliminations in our game: 1. For every variable v that is set to true in the assignment, eliminate tv (which gives the column player utility 0 everywhere). 2. Then, for every variable v that is set to true in the assignment, eliminate s2 +v using s1 +v (which is possible because tv has been eliminated, and because t1 has not been eliminated (yet)). 3. Now eliminate t1 (which gives the column player utility 0 everywhere). 4. Next, for every variable v that is set to false in the assignment, eliminate s2 −v using s1 −v (which is possible because t1 has been eliminated, and because tv has not been eliminated (yet)). 5. For every clause c which has the variable corresponding to one of its positive literals l = +v set to true in the assignment, eliminate tc using tc,l (which is possible because s2 l has been eliminated, and s2 c has not been eliminated (yet)). 6. For every clause c which has the variable corresponding to one of its negative literals l = −v set to false in the assignment, eliminate tc using tc,l 91 (which is possible because s2 l has been eliminated, and s2 c has not been eliminated (yet)). 7. Because the assignment satisfied the formula, all the tc have now been eliminated. Thus, we can eliminate s2 0 = σ∗ r using s1 0. It follows that there is a solution to the IWD-STRATEGY-ELIMINATION instance. Now suppose there is a solution to the IWD-STRATEGYELIMINATION instance. By Lemma 1, we can assume that all the dominances are by pure strategies. We first observe that only s1 0 can eliminate s2 0 = σ∗ r , because it is the only other strategy that gets the row player a utility of 1 against t1 0, and t1 0 is uneliminable. However, because s2 0 performs better than s1 0 against the tc strategies, it follows that all of the tc strategies must be eliminated. 
For each c ∈ C, the strategy tc can only be eliminated by one of the strategies tc,l (with the same c), because these are the only other strategies that get the column player a utility of 1 against s1 c, and s1 c is uneliminable. But, in order for some tc,l to eliminate tc, s2 l must be eliminated first. Only s1 l can eliminate s2 l , because it is the only other strategy that gets the row player a utility of 1 against tl, and tl is uneliminable. We next show that for every v ∈ V only one of s2 +v, s2 −v can be eliminated. This is because in order for s1 +v to eliminate s2 +v, tv needs to have been eliminated and t1, not (so tv must be eliminated before t1); but in order for s1 −v to eliminate s2 −v, t1 needs to have been eliminated and tv, not (so t1 must be eliminated before tv). So, set v to true if s2 +v is eliminated, and to false otherwise Because by the above, for every clause c, one of the s2 l with l ∈ c must be eliminated, it follows that this is a satisfying assignment to the satisfiability instance. Using Theorem 1, it is now (relatively) easy to show that IWD-UNIQUE-SOLUTION is also NP-complete under the same restrictions. Theorem 2. IWD-UNIQUE-SOLUTION is NP-complete, even with 2 players, and with 0 and 1 being the only utilities occurring in the matrix-whether or not dominance by mixed strategies is allowed. Proof. Again, the problem is in NP because we can nondeterministically choose the sequence of eliminations and verify whether it is correct. To show NP-hardness, we reduce an arbitrary IWD-STRATEGY-ELIMINATION instance to the following IWD-UNIQUE-SOLUTION instance. Let all the strategies for each player from the original instance remain part of the new instance, and let the utilities resulting from the players playing a pair of these strategies be the same. We add three additional strategies σ1 r , σ2 r , σ3 r for the row player, and three additional strategies σ1 c , σ2 c , σ3 c for the column player. Let the additional utilities be as follows: • ur(σr, σj c) = 1 for all σr /∈ {σ1 r , σ2 r , σ3 r } and j ∈ {2, 3}; • ur(σi r, σc) = 1 for all i ∈ {1, 2, 3} and σc /∈ {σ2 c , σ3 c }; • ur(σi r, σ2 c ) = 1 for all i ∈ {2, 3}; • ur(σ1 r , σ3 c ) = 1; • and the row player``s utility is 0 in all other cases involving a new strategy. • uc(σ3 r , σc) = 1 for all σc /∈ {σ1 c , σ2 c , σ3 c }; • uc(σ∗ r , σj c) = 1 for all j ∈ {2, 3} (σ∗ r is the strategy to be eliminated in the original instance); • uc(σi r, σ1 c ) = 1 for all i ∈ {1, 2}; • ur(σ1 r , σ2 c ) = 1; • ur(σ2 r , σ3 c ) = 1; • and the column player``s utility is 0 in all other cases involving a new strategy. We proceed to show that the two instances are equivalent. First suppose there exists a solution to the original IWDSTRATEGY-ELIMINATION instance. Then, perform the same sequence of eliminations to eliminate σ∗ r in the new IWD-UNIQUE-SOLUTION instance. (This is possible because at any stage, any weak dominance for the row player in the original instance is still a weak dominance in the new instance, because the two strategies'' utilities for the row player are the same when the column player plays one of the new strategies; and the same is true for the column player.) Once σ∗ r is eliminated, let σ1 c eliminate σ2 c . (It performs better against σ2 r .) Then, let σ1 r eliminate all the other remaining strategies for the row player. (It always performs better against either σ1 c or σ3 c .) 
Finally, σ1 c is the unique best response against σ1 r among the column player``s remaining strategies, so let it eliminate all the other remaining strategies for the column player. Thus, there exists a solution to the IWD-UNIQUE-SOLUTION instance. Now suppose there exists a solution to the IWD-UNIQUESOLUTION instance. By Lemma 1, we can assume that all the dominances are by pure strategies. We will show that none of the new strategies (σ1 r , σ2 r , σ3 r , σ1 c , σ2 c , σ3 c ) can either eliminate another strategy, or be eliminated before σ∗ r is eliminated. Thus, there must be a sequence of eliminations ending in the elimination of σ∗ r , which does not involve any of the new strategies, and is therefore a valid sequence of eliminations in the original game (because all original strategies perform the same against each new strategy). We now show that this is true by exhausting all possibilities for the first elimination before σ∗ r is eliminated that involves a new strategy. None of the σi r can be eliminated by a σr /∈ {σ1 r , σ2 r , σ3 r }, because the σi r perform better against σ1 c . σ1 r cannot eliminate any other strategy, because it always performs poorer against σ2 c . σ2 r and σ3 r are equivalent from the row player``s perspective (and thus cannot eliminate each other), and cannot eliminate any other strategy because they always perform poorer against σ3 c . None of the σj c can be eliminated by a σc /∈ {σ1 c , σ2 c , σ3 c }, because the σj c always perform better against either σ1 r or σ2 r . σ1 c cannot eliminate any other strategy, because it always performs poorer against either σ∗ r or σ3 r . σ2 c cannot eliminate any other strategy, because it always performs poorer against σ2 r or σ3 r . σ3 c cannot eliminate any other strategy, because it always performs poorer against σ1 r or σ3 r . Thus, there exists a solution to the IWDSTRATEGY-ELIMINATION instance. A slightly weaker version of the part of Theorem 2 concerning dominance by pure strategies only is the main result of Gilboa et al. [7]. (Besides not proving the result for dominance by mixed strategies, the original result was weaker because it required utilities {0, 1, 2, 3, 4, 5, 6, 7, 8} rather than just {0, 1} (and because of this, our Lemma 1 cannot be applied to it to get the result for mixed strategies).) 5. (ITERATED) DOMINANCE USING MIXED STRATEGIES WITH SMALL SUPPORTS When showing that a strategy is dominated by a mixed strategy, there are several reasons to prefer exhibiting a 92 dominating strategy that places positive probability on as few pure strategies as possible. First, this will reduce the number of bits required to specify the dominating strategy (and thus the proof of dominance can be communicated quicker): if the dominating mixed strategy places positive probability on only k strategies, then it can be specified using k real numbers for the probabilities, plus k log m (where m is the number of strategies for the player under consideration) bits to indicate which strategies are used. Second, the proof of dominance will be cleaner: for a dominating mixed strategy, it is typically (always in the case of strict dominance) possible to spread some of the probability onto any unused pure strategy and still have a dominating strategy, but this obscures which pure strategies are the ones that are key in making the mixed strategy dominating. Third, because (by the previous) the argument for eliminating the dominated strategy is simpler and easier to understand, it is more likely to be accepted. 
Fourth, the level of risk neutrality required for the argument to work is reduced, at least in the extreme case where dominance by a single pure strategy can be exhibited (no risk neutrality is required here). This motivates the following problem. Definition 4 (MINIMUM-DOMINATING-SET). We are given the row player``s utilities of a game in normal form, a distinguished strategy σ∗ for the row player, a specification of whether the dominance should be strict or weak, and a number k. We are asked whether there exists a mixed strategy σ for the row player that places positive probability on at most k pure strategies, and dominates σ∗ in the required sense. Unfortunately, this problem is NP-complete. Theorem 3. MINIMUM-DOMINATING-SET is NPcomplete, both for strict and for weak dominance. Proof. The problem is in NP because we can nondeterministically choose a set of at most k strategies to give positive probability, and decide whether we can dominate σ∗ with these k strategies as described in Proposition 1. To show NP-hardness, we reduce an arbitrary SET-COVER instance (given a set S, subsets S1, S2, ... , Sr, and a number t, can all of S be covered by at most t of the subsets?) to the following MINIMUM-DOMINATING-SET instance. For every element s ∈ S, there is a pure strategy σs for the column player. For every subset Si, there is a pure strategy σSi for the row player. Finally, there is the distinguished pure strategy σ∗ for the row player. The row player``s utilities are as follows: ur(σSi , σs) = t + 1 if s ∈ Si; ur(σSi , σs) = 0 if s /∈ Si; ur(σ∗ , σs) = 1 for all s ∈ S. Finally, we let k = t. We now proceed to show that the two instances are equivalent. First suppose there exists a solution to the SET-COVER instance. Without loss of generality, we can assume that there are exactly k subsets in the cover. Then, for every Si that is in the cover, let the dominating strategy σ place exactly 1 k probability on the corresponding pure strategy σSi . Now, if we let n(s) be the number of subsets in the cover containing s (we observe that that n(s) ≥ 1), then for every strategy σs for the column player, the row player``s expected utility for playing σ when the column player is playing σs is u(σ, σs) = n(s) k (k + 1) ≥ k+1 k > 1 = u(σ∗ , σs). So σ strictly (and thus also weakly) dominates σ∗ , and there exists a solution to the MINIMUM-DOMINATING-SET instance. Now suppose there exists a solution to the MINIMUMDOMINATING-SET instance. Consider the (at most k) pure strategies of the form σSi on which the dominating mixed strategy σ places positive probability, and let T be the collection of the corresponding subsets Si. We claim that T is a cover. For suppose there is some s ∈ S that is not in any of the subsets in T . Then, if the column player plays σs, the row player (when playing σ) will always receive utility 0-as opposed to the utility of 1 the row player would receive for playing σ∗ , contradicting the fact that σ dominates σ∗ (whether this dominance is weak or strict). It follows that there exists a solution to the SET-COVER instance. On the other hand, if we require that the dominating strategy only places positive probability on a very small number of pure strategies, then it once again becomes easy to check whether a strategy is dominated. 
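The subroutine reused in the enumeration described next is essentially the Proposition 1 linear program restricted to a candidate support Dr. The following is a rough Python sketch of its strict-dominance variant using scipy's linprog; the function name, the numerical tolerance and the small example game are illustrative choices of this writeup, not the authors' implementation.

import numpy as np
from scipy.optimize import linprog

def strictly_dominated(U, star, support):
    """Check whether row strategy `star` is strictly dominated by a mixed
    strategy that places positive probability only on the rows in `support`.

    U       : 2-D array, U[r, c] = row player's utility for (row r, column c).
    star    : index of the row strategy to test.
    support : list of row indices the dominating mixed strategy may use.

    Following Proposition 1: shift utilities to be positive, then solve
    min sum_d p_d  s.t.  sum_d p_d * U[d, c] >= U[star, c] for every column c,
    p >= 0.  The strategy is strictly dominated iff the optimum is < 1.
    """
    U = np.asarray(U, dtype=float)
    U = U - U.min() + 1.0                 # make all utilities positive
    D = U[support, :]                     # |support| x (number of columns)
    n = len(support)
    res = linprog(c=np.ones(n),
                  A_ub=-D.T,              # -sum_d p_d U[d, c] <= -U[star, c]
                  b_ub=-U[star, :],
                  bounds=[(0, None)] * n,
                  method="highs")
    return bool(res.success and res.fun < 1 - 1e-9)

if __name__ == "__main__":
    # Toy game (row player's utilities only): row 1 is strictly dominated by
    # the 50/50 mixture of rows 0 and 2, but not by row 0 alone.
    U = [[3, 0],
         [1, 1],
         [0, 3]]
    print(strictly_dominated(U, star=1, support=[0, 2]))   # True
    print(strictly_dominated(U, star=1, support=[0]))      # False

Enumerating every size-k subset of a player's pure strategies and running such a check for each subset is the procedure described next, which requires only O(|Σi|^k) such checks.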
Specifically, to find out whether player i``s strategy σ∗ is dominated by a strategy that places positive probability on only k pure strategies, we can simply check, for every subset of k of player i``s pure strategies, whether there is a strategy that places positive probability only on these k strategies and dominates σ∗ , using Proposition 1. This requires only O(|Σi|k ) such checks. Thus, if k is a constant, this constitutes a polynomial-time algorithm. A natural question to ask next is whether iterated strict dominance remains computationally easy when dominating strategies are required to place positive probability on at most k pure strategies, where k is a small constant. (We have already shown in Section 4 that iterated weak dominance is hard even when k = 1, that is, only dominance by pure strategies is allowed.) Of course, if iterated strict dominance were path-independent under this restriction, computational easiness would follow as it did in Section 4. However, it turns out that this is not the case. Observation 1. If we restrict the dominating strategies to place positive probability on at most two pure strategies, iterated strict dominance becomes path-dependent. Proof. Consider the following game: 7, 1 0, 0 0, 0 0, 0 7, 1 0, 0 3, 0 3, 0 0, 0 0, 0 0, 0 3, 1 1, 0 1, 0 1, 0 Let (i, j) denote the outcome in which the row player plays the ith row and the column player plays the jth column. Because (1, 1), (2, 2), and (4, 3) are all Nash equilibria, none of the column player``s pure strategies will ever be eliminated, and neither will rows 1, 2, and 4. We now observe that randomizing uniformly over rows 1 and 2 dominates row 3, and randomizing uniformly over rows 3 and 4 dominates row 5. However, if we eliminate row 3 first, it becomes impossible to dominate row 5 without randomizing over at least 3 pure strategies. Indeed, iterated strict dominance turns out to be hard even when k = 3. Theorem 4. If we restrict the dominating strategies to place positive probability on at most three pure strategies, it becomes NP-complete to decide whether a given strategy can be eliminated using iterated strict dominance. 93 Proof. The problem is in NP because given a sequence of strategies to be eliminated, we can check in polynomial time whether this is a valid sequence of eliminations (for any strategy to be eliminated, we can check, for every subset of three other strategies, whether there is a strategy placing positive probability on only these three strategies that dominates the strategy to be eliminated, using Proposition 1). To show that the problem is NP-hard, we reduce an arbitrary satisfiability instance (given by a nonempty set of clauses C over a nonempty set of variables V , with corresponding literals L = {+v : v ∈ V } ∪ {−v : v ∈ V }) to the following two-player game. For every variable v ∈ V , the row player has strategies s+v, s−v, s1 v, s2 v, s3 v, s4 v, and the column player has strategies t1 v, t2 v, t3 v, t4 v. For every clause c ∈ C, the row player has a strategy sc, and the column player has a strategy tc, as well as, for every literal l occurring in c, an additional strategy tl c. The row player has two additional strategies s1 and s2. (s2 is the strategy that we are seeking to eliminate.) Finally, the column player has one additional strategy t1. 
The utility function for the row player is given as follows (where is some sufficiently small number): • ur(s+v, tj v) = 4 if j ∈ {1, 2}, for all v ∈ V ; • ur(s+v, tj v) = 1 if j ∈ {3, 4}, for all v ∈ V ; • ur(s−v, tj v) = 1 if j ∈ {1, 2}, for all v ∈ V ; • ur(s−v, tj v) = 4 if j ∈ {3, 4}, for all v ∈ V ; • ur(s+v, t) = ur(s−v, t) = 0 for all v ∈ V and t /∈ {t1 v, t2 v, t3 v, t4 v}; • ur(si v, ti v) = 13 for all v ∈ V and i ∈ {1, 2, 3, 4}; • ur(si v, t) = for all v ∈ V , i ∈ {1, 2, 3, 4}, and t = ti v; • ur(sc, tc) = 2 for all c ∈ C; • ur(sc, t) = 0 for all c ∈ C and t = tc; • ur(s1, t1) = 1 + ; • ur(s1, t) = for all t = t1; • ur(s2, t1) = 1; • ur(s2, tc) = 1 for all c ∈ C; • ur(s2, t) = 0 for all t /∈ {t1} ∪ {tc : c ∈ C}. The utility function for the column player is given as follows: • uc(si v, ti v) = 1 for all v ∈ V and i ∈ {1, 2, 3, 4}; • uc(s, ti v) = 0 for all v ∈ V , i ∈ {1, 2, 3, 4}, and s = si v; • uc(sc, tc) = 1 for all c ∈ C; • uc(sl, tc) = 1 for all c ∈ C and l ∈ L occurring in c; • uc(s, tc) = 0 for all c ∈ C and s /∈ {sc} ∪ {sl : l ∈ c}; • uc(sc, tl c) = 1 + for all c ∈ C; • uc(sl , tl c) = 1 + for all c ∈ C and l = l occurring in c; • uc(s, tl c) = for all c ∈ C and s /∈ {sc} ∪ {sl : l ∈ c, l = l }; • uc(s2, t1) = 1; • uc(s, t1) = 0 for all s = s2. We now show that the two instances are equivalent. First, suppose that there is a solution to the satisfiability instance. Then, consider the following sequence of eliminations in our game: 1. For every variable v that is set to true in the satisfying assignment, eliminate s+v with the mixed strategy σr that places probability 1/3 on s−v, probability 1/3 on s1 v, and probability 1/3 on s2 v. (The expected utility of playing σr against t1 v or t2 v is 14/3 > 4; against t3 v or t4 v, it is 4/3 > 1; and against anything else it is 2 /3 > 0. Hence the dominance is valid.) 2. Similarly, for every variable v that is set to false in the satisfying assignment, eliminate s−v with the mixed strategy σr that places probability 1/3 on s+v, probability 1/3 on s3 v, and probability 1/3 on s4 v. (The expected utility of playing σr against t1 v or t2 v is 4/3 > 1; against t3 v or t4 v, it is 14/3 > 4; and against anything else it is 2 /3 > 0. Hence the dominance is valid.) 3. For every c ∈ C, eliminate tc with any tl c for which l was set to true in the satisfying assignment. (This is a valid dominance because tl c performs better than tc against any strategy other than sl, and we eliminated sl in step 1 or in step 2.) 4. Finally, eliminate s2 with s1. (This is a valid dominance because s1 performs better than s2 against any strategy other than those in {tc : c ∈ C}, which we eliminated in step 3.) Hence, there is an elimination path that eliminates s2. Now, suppose that there is an elimination path that eliminates s2. The strategy that eventually dominates s2 must place most of its probability on s1, because s1 is the only other strategy that performs well against t1, which cannot be eliminated before s2. But, s1 performs significantly worse than s2 against any strategy tc with c ∈ C, so it follows that all these strategies must be eliminated first. Each strategy tc can only be eliminated by a strategy that places most of its weight on the corresponding strategies tl c with l ∈ c, because they are the only other strategies that perform well against sc, which cannot be eliminated before tc. 
But, each strategy tl c performs significantly worse than tc against sl, so it follows that for every clause c, for one of the literals l occurring in it, sl must be eliminated first. Now, strategies of the form tj v will never be eliminated because they are the unique best responses to the corresponding strategies sj v (which are, in turn, the best responses to the corresponding tj v). As a result, if strategy s+v (respectively, s−v) is eliminated, then its opposite strategy s−v (respectively, s+v) can no longer be eliminated, for the following reason. There is no other pure strategy remaining that gets a significant utility against more than one of the strategies t1 v, t2 v, t3 v, t4 v, but s−v (respectively, s+v) gets significant utility against all 4, and therefore cannot be dominated by a mixed strategy placing positive probability on at most 3 strategies. It follows that for each v ∈ V , at most one of the strategies s+v, s−v is eliminated, in such a way that for every clause c, for one of the literals l occurring in it, sl must be eliminated. But then setting all the literals l such that sl is eliminated to true constitutes a solution to the satisfiability instance. In the next section, we return to the setting where there is no restriction on the number of pure strategies on which a dominating mixed strategy can place positive probability. 6. (ITERATED) DOMINANCE IN BAYESIAN GAMES So far, we have focused on normal form games that are flatly represented (that is, every matrix entry is given ex94 plicitly). However, for many games, the flat representation is too large to write down explicitly, and instead, some representation that exploits the structure of the game needs to be used. Bayesian games, besides being of interest in their own right, can be thought of as a useful structured representation of normal form games, and we will study them in this section. In a Bayesian game, each player first receives privately held preference information (the player``s type) from a distribution, which determines the utility that that player receives for every outcome of (that is, vector of actions played in) the game. After receiving this type, the player plays an action based on it.7 Definition 5. A Bayesian game is given by a set of players {1, 2, ... , n}; and, for each player i, a (finite) set of actions Ai, a (finite) type space Θi with a probability distribution πi over it, and a utility function ui : Θi × A1 × A2 × ... × An → R (where ui(θi, a1, a2, ... , an) denotes player i``s utility when i``s type is θi and each player j plays action aj). A pure strategy in a Bayesian game is a mapping from types to actions, σi : Θi → Ai, where σi(θi) denotes the action that player i plays for type θi. Any vector of pure strategies in a Bayesian game defines an (expected) utility for each player, and therefore we can translate a Bayesian game into a normal form game. In this normal form game, the notions of dominance and iterated dominance are defined as before. However, the normal form representation of the game is exponentially larger than the Bayesian representation, because each player i has |Ai||Θi| distinct pure strategies. Thus, any algorithm for Bayesian games that relies on expanding the game to its normal form will require exponential time. Specifically, our easiness results for normal form games do not directly transfer to this setting. In fact, it turns out that checking whether a strategy is dominated by a pure strategy is hard in Bayesian games. Theorem 5. 
In fact, it turns out that checking whether a strategy is dominated by a pure strategy is hard in Bayesian games.
Theorem 5. In a Bayesian game, it is NP-complete to decide whether a given pure strategy σ_r : Θ_r → A_r is dominated by some other pure strategy (both for strict and weak dominance), even when the row player's distribution over types is uniform.
Proof. The problem is in NP because it is easy to verify whether a candidate dominating strategy is indeed a dominating strategy. To show that the problem is NP-hard, we reduce an arbitrary satisfiability instance (given by a set of clauses C using variables from V ) to the following Bayesian game. Let the row player's action set be A_r = {t, f, 0} and let the column player's action set be A_c = {a_c : c ∈ C}. Let the row player's type set be Θ_r = {θ_v : v ∈ V }, with a distribution π_r that is uniform. Let the row player's utility function be as follows:
• u_r(θ_v, 0, a_c) = 0 for all v ∈ V and c ∈ C;
• u_r(θ_v, b, a_c) = |V | for all v ∈ V , c ∈ C, and b ∈ {t, f} such that setting v to b satisfies c;
• u_r(θ_v, b, a_c) = −1 for all v ∈ V , c ∈ C, and b ∈ {t, f} such that setting v to b does not satisfy c.
Let the pure strategy to be dominated be the one that plays 0 for every type. We show that the strategy is dominated by a pure strategy if and only if there is a solution to the satisfiability instance. First, suppose there is a solution to the satisfiability instance. Then, let σ^d_r be given by: σ^d_r(θ_v) = t if v is set to true in the solution to the satisfiability instance, and σ^d_r(θ_v) = f otherwise. Then, against any action a_c by the column player, there is at least one type θ_v such that either +v ∈ c and σ^d_r(θ_v) = t, or −v ∈ c and σ^d_r(θ_v) = f. Thus, the row player's expected utility against action a_c is at least |V |/|V | − (|V | − 1)/|V | = 1/|V | > 0. So, σ^d_r is a dominating strategy. Now, suppose there is a dominating pure strategy σ^d_r. This dominating strategy must play t or f for at least one type. Thus, against any a_c by the column player, there must at least be some type θ_v for which u_r(θ_v, σ^d_r(θ_v), a_c) > 0. That is, there must be at least one variable v such that setting v to σ^d_r(θ_v) satisfies c. But then, setting each v to σ^d_r(θ_v) must satisfy all the clauses. So a satisfying assignment exists.
However, it turns out that we can modify the linear programs from Proposition 1 to obtain a polynomial time algorithm for checking whether a strategy is dominated by a mixed strategy in Bayesian games.
Theorem 6. In a Bayesian game, it can be decided in polynomial time whether a given (possibly mixed) strategy σ_r is dominated by some other mixed strategy, using linear programming (both for strict and weak dominance).
Proof. We can modify the linear programs presented in Proposition 1 as follows. For strict dominance, again assuming without loss of generality that all the utilities in the game are positive, use the following linear program (in which p^{σ_r}_r(θ_r, a_r) is the probability that σ_r, the strategy to be dominated, places on a_r for type θ_r):
minimize Σ_{θ_r ∈ Θ_r} Σ_{a_r ∈ A_r} p_r(θ_r, a_r)
such that
for any a_c ∈ A_c, Σ_{θ_r ∈ Θ_r} Σ_{a_r ∈ A_r} π(θ_r) u_r(θ_r, a_r, a_c) p_r(θ_r, a_r) ≥ Σ_{θ_r ∈ Θ_r} Σ_{a_r ∈ A_r} π(θ_r) u_r(θ_r, a_r, a_c) p^{σ_r}_r(θ_r, a_r);
for any θ_r ∈ Θ_r, Σ_{a_r ∈ A_r} p_r(θ_r, a_r) ≤ 1.
Assuming that π(θ_r) > 0 for all θ_r ∈ Θ_r, this program will return an objective value smaller than |Θ_r| if and only if σ_r is strictly dominated, by reasoning similar to that done in Proposition 1.
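As a rough illustration of how this strict-dominance program can be solved in practice, here is a short Python/SciPy sketch (our own, not code from the paper; the function name, tolerance, and dictionary-based encoding are ours) for the two-player case in which only the row player has types. It assumes, as in the proof, that all utilities are positive and that every type has positive probability; the weak-dominance program given next can be handled analogously.

```python
import numpy as np
from scipy.optimize import linprog

def strictly_dominated(types, A_r, A_c, pi, u_r, sigma_r):
    """Check whether sigma_r (a dict mapping (type, action) to probability)
    is strictly dominated by some mixed strategy, via the LP above.
    Assumes all utilities u_r(theta, a_r, a_c) > 0 and pi[theta] > 0."""
    pairs = [(th, ar) for th in types for ar in A_r]
    idx = {pair: k for k, pair in enumerate(pairs)}
    n = len(pairs)
    c = np.ones(n)                       # minimize sum of p_r(theta_r, a_r)
    A_ub, b_ub = [], []
    for ac in A_c:                       # one dominance constraint per column action,
        row = np.zeros(n)                # written as -(LHS) <= -(RHS) for linprog
        rhs = 0.0
        for th, ar in pairs:
            coeff = pi[th] * u_r(th, ar, ac)
            row[idx[(th, ar)]] = -coeff
            rhs -= coeff * sigma_r.get((th, ar), 0.0)
        A_ub.append(row)
        b_ub.append(rhs)
    for th in types:                     # per-type probability mass at most 1
        row = np.zeros(n)
        for ar in A_r:
            row[idx[(th, ar)]] = 1.0
        A_ub.append(row)
        b_ub.append(1.0)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=(0, None))
    # Strictly dominated iff the optimal objective is smaller than |Theta_r|.
    return res.status == 0 and res.fun < len(types) - 1e-9
```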
For weak dominance, use the following linear program:
maximize Σ_{a_c ∈ A_c} ( Σ_{θ_r ∈ Θ_r} Σ_{a_r ∈ A_r} π(θ_r) u_r(θ_r, a_r, a_c) p_r(θ_r, a_r) − Σ_{θ_r ∈ Θ_r} Σ_{a_r ∈ A_r} π(θ_r) u_r(θ_r, a_r, a_c) p^{σ_r}_r(θ_r, a_r) )
such that
for any a_c ∈ A_c, Σ_{θ_r ∈ Θ_r} Σ_{a_r ∈ A_r} π(θ_r) u_r(θ_r, a_r, a_c) p_r(θ_r, a_r) ≥ Σ_{θ_r ∈ Θ_r} Σ_{a_r ∈ A_r} π(θ_r) u_r(θ_r, a_r, a_c) p^{σ_r}_r(θ_r, a_r);
for any θ_r ∈ Θ_r, Σ_{a_r ∈ A_r} p_r(θ_r, a_r) = 1.
This program will return an objective value greater than 0 if and only if σ_r is weakly dominated, by reasoning similar to that done in Proposition 1.
We now turn to iterated dominance in Bayesian games. Naïvely, one might argue that iterated dominance in Bayesian games always requires an exponential number of steps when a significant fraction of the game's pure strategies can be eliminated, because there are exponentially many pure strategies. However, this is not a very strong argument because oftentimes we can eliminate exponentially many pure strategies in one step. For example, if for some type θ_r ∈ Θ_r we have, for all a_c ∈ A_c, that u(θ_r, a^1_r, a_c) > u(θ_r, a^2_r, a_c), then any pure strategy for the row player which plays action a^2_r for type θ_r is dominated (by the strategy that plays action a^1_r for type θ_r instead), and there are exponentially many (|A_r|^{|Θ_r|−1}) such strategies. It is therefore conceivable that we need only polynomially many eliminations of collections of a player's strategies. However, the following theorem shows that this is not the case, by giving an example where an exponential number of iterations (that is, alternations between the players in eliminating strategies) is required. (We emphasize that this is not a result about computational complexity.)
Theorem 7. Even in symmetric 3-player Bayesian games, iterated dominance by pure strategies can require an exponential number of iterations (both for strict and weak dominance), even with only three actions per player.
Proof. Let each player i ∈ {1, 2, 3} have n + 1 types θ^1_i, θ^2_i, ..., θ^{n+1}_i. Let each player i have 3 actions a_i, b_i, c_i, and let the utility function of each player be defined as follows. (In the below, i + 1 and i + 2 are shorthand for i + 1 (mod 3) and i + 2 (mod 3) when used as player indices. Also, −∞ can be replaced by a sufficiently negative number. Finally, δ and ε should be chosen to be very small (even compared to 2^{−(n+1)}), and ε should be more than twice as large as δ.)
• u_i(θ^1_i; a_i, c_{i+1}, c_{i+2}) = −1;
• u_i(θ^1_i; a_i, s_{i+1}, s_{i+2}) = 0 for s_{i+1} ≠ c_{i+1} or s_{i+2} ≠ c_{i+2};
• u_i(θ^1_i; b_i, s_{i+1}, s_{i+2}) = −ε for s_{i+1} ≠ a_{i+1} and s_{i+2} ≠ a_{i+2};
• u_i(θ^1_i; b_i, s_{i+1}, s_{i+2}) = −∞ for s_{i+1} = a_{i+1} or s_{i+2} = a_{i+2};
• u_i(θ^1_i; c_i, s_{i+1}, s_{i+2}) = −∞ for all s_{i+1}, s_{i+2};
• u_i(θ^j_i; a_i, s_{i+1}, s_{i+2}) = −∞ for all s_{i+1}, s_{i+2} when j > 1;
• u_i(θ^j_i; b_i, s_{i+1}, s_{i+2}) = −ε for all s_{i+1}, s_{i+2} when j > 1;
• u_i(θ^j_i; c_i, s_{i+1}, c_{i+2}) = δ − ε − 1/2 for all s_{i+1} when j > 1;
• u_i(θ^j_i; c_i, s_{i+1}, s_{i+2}) = δ − ε for all s_{i+1} and s_{i+2} ≠ c_{i+2} when j > 1.
Let the distribution over each player's types be given by p(θ^j_i) = 2^{−j} (with the exception that p(θ^2_i) = 2^{−2} + 2^{−(n+1)}). We will be interested in eliminating strategies of the following form: play b_i for type θ^1_i, and play one of b_i or c_i otherwise.
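(As a quick sanity check, ours rather than the paper's, the type probabilities just defined do form a distribution: the extra 2^{−(n+1)} assigned to θ^2_i exactly compensates for the missing tail of the geometric series.)

```latex
\sum_{j=1}^{n+1} p(\theta^j_i)
  = 2^{-1} + \bigl(2^{-2} + 2^{-(n+1)}\bigr) + \sum_{j=3}^{n+1} 2^{-j}
  = \Bigl(\sum_{j=1}^{n+1} 2^{-j}\Bigr) + 2^{-(n+1)}
  = \bigl(1 - 2^{-(n+1)}\bigr) + 2^{-(n+1)} = 1.
```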
Because the utility function is the same for any type θ^j_i with j > 1, these strategies are effectively defined by the total probability that they place on c_i, which is t^2_i (2^{−2} + 2^{−(n+1)}) + Σ_{j=3}^{n+1} t^j_i 2^{−j}, where t^j_i = 1 if player i plays c_i for type θ^j_i, and 0 otherwise. (Note that the strategies are still pure strategies; the probability placed on an action by a strategy here is simply the sum of the probabilities of the types for which the strategy chooses that action.) This probability is different for any two different strategies of the given form, and we have exponentially many different strategies of the given form. For any probability q which can be expressed as t_2 (2^{−2} + 2^{−(n+1)}) + Σ_{j=3}^{n+1} t_j 2^{−j} (with all t_j ∈ {0, 1}), let σ_i(q) denote the (unique) strategy of the given form for player i which places a total probability of q on c_i. Any strategy that plays c_i for type θ^1_i or a_i for some type θ^j_i with j > 1 can immediately be eliminated. We will show that, after that, we must eliminate the strategies σ_i(q) with high q first, slowly working down to those with lower q.
Claim 1: If σ_{i+1}(q') and σ_{i+2}(q') have not yet been eliminated, and q < q', then σ_i(q) cannot yet be eliminated.
Proof: First, we show that no strategy σ_i(q'') can eliminate σ_i(q). Against σ_{i+1}(q'''), σ_{i+2}(q'''), the utility of playing σ_i(p) is −ε + p · δ − p · q'''/2. Thus, when q''' = 0, it is best to set p as high as possible (and we note that σ_{i+1}(0) and σ_{i+2}(0) have not been eliminated), but when q''' > 0, it is best to set p as low as possible because δ < q'''/2. Thus, whether q'' > q or q'' < q, σ_i(q) will always do strictly better than σ_i(q'') against some remaining opponent strategies. Hence, no strategy σ_i(q'') can eliminate σ_i(q). The only other pure strategies that could dominate σ_i(q) are strategies that play a_i for type θ^1_i, and b_i or c_i for all other types. Let us take such a strategy and suppose that it plays c_i with probability p. Against σ_{i+1}(q'), σ_{i+2}(q') (which have not yet been eliminated), the utility of playing this strategy is −(q')^2/2 − ε/2 + p · δ − p · q'/2. On the other hand, playing σ_i(q) gives −ε + q · δ − q · q'/2. Because q' > q, we have −(q')^2/2 < −q · q'/2, and because δ and ε are small, it follows that σ_i(q) receives a higher utility. Therefore, no strategy dominates σ_i(q), proving the claim.
Claim 2: If for all q' > q, σ_{i+1}(q') and σ_{i+2}(q') have been eliminated, then σ_i(q) can be eliminated.
Proof: Consider the strategy for player i that plays a_i for type θ^1_i, and b_i for all other types (call this strategy σ'_i); we claim σ'_i dominates σ_i(q). First, if either of the other players k plays a_k for θ^1_k, then σ'_i performs better than σ_i(q) (which receives −∞ in some cases). Because the strategies for player k that play c_k for type θ^1_k, or a_k for some type θ^j_k with j > 1, have already been eliminated, all that remains to check is that σ'_i performs better than σ_i(q) whenever both of the other two players play strategies of the following form: play b_k for type θ^1_k, and play one of b_k or c_k otherwise. We note that among these strategies, there are none left that place probability greater than q on c_k. Letting q_k denote the probability with which player k plays c_k, the expected utility of playing σ'_i is −q_{i+1} · q_{i+2}/2 − ε/2. On the other hand, the utility of playing σ_i(q) is −ε + q · δ − q · q_{i+2}/2. Because q_{i+1} ≤ q, the difference between these two expressions is at least ε/2 − δ, which is positive. It follows that σ'_i dominates σ_i(q).
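The counting step underlying this argument, namely that the 2^n strategies of the form σ_i(q) all have pairwise distinct values of q, is easy to verify numerically for small n; the following short Python check is our own illustration rather than anything from the paper.

```python
from itertools import product
from fractions import Fraction

def q_values(n):
    # Total probabilities q = t_2*(2^-2 + 2^-(n+1)) + sum_{j=3}^{n+1} t_j*2^-j
    # over all choices of t_2, ..., t_{n+1} in {0, 1}, computed exactly.
    weights = [Fraction(1, 4) + Fraction(1, 2 ** (n + 1))] + \
              [Fraction(1, 2 ** j) for j in range(3, n + 2)]
    return {sum(t * w for t, w in zip(ts, weights)) for ts in product((0, 1), repeat=n)}

for n in range(1, 11):
    assert len(q_values(n)) == 2 ** n   # all 2^n values of q are distinct
print("distinct q values confirmed for n = 1..10")
```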
From Claim 2, it follows that all strategies of the form σ_i(q) will eventually be eliminated. However, Claim 1 shows that we cannot go ahead and eliminate multiple such strategies for one player, unless at least one other player simultaneously keeps up in the eliminated strategies: every time a σ_i(q) is eliminated such that σ_{i+1}(q) and σ_{i+2}(q) have not yet been eliminated, we need to eliminate one of the latter two strategies before any σ_i(q') with q' < q can be eliminated; that is, we need to alternate between players. Because there are exponentially many strategies of the form σ_i(q), it follows that iterated elimination will require exponentially many iterations to complete.
It follows that an efficient algorithm for iterated dominance (strict or weak) by pure strategies in Bayesian games, if it exists, must somehow be able to perform (at least part of) many iterations in a single step of the algorithm (because if each step only performed a single iteration, we would need exponentially many steps). Interestingly, Knuth et al. [11] argue that iterated dominance appears to be an inherently sequential problem (in light of their result that iterated very weak dominance is P-complete, that is, apparently not efficiently parallelizable), suggesting that aggregating many iterations may be difficult.
7. CONCLUSIONS
While the Nash equilibrium solution concept is studied more and more intensely in our community, the perhaps more elementary concept of (iterated) dominance has received much less attention. In this paper we studied various computational aspects of this concept. We first studied both strict and weak dominance (not iterated), and showed that checking whether a given strategy is dominated by some mixed strategy can be done in polynomial time using a single linear program solve. We then moved on to iterated dominance. We showed that determining whether there is some path that eliminates a given strategy is NP-complete with iterated weak dominance. This allowed us to also show that determining whether there is a path that leads to a unique solution is NP-complete. Both of these results hold both with and without dominance by mixed strategies. (A weaker version of the second result (only without dominance by mixed strategies) was already known [7].) Iterated strict dominance, on the other hand, is path-independent (both with and without dominance by mixed strategies) and can therefore be done in polynomial time. We then studied what happens when the dominating strategy is allowed to place positive probability on only a few pure strategies. First, we showed that finding the dominating strategy with minimum support size is NP-complete (both for strict and weak dominance). Then, we showed that iterated strict dominance becomes path-dependent when there is a limit on the support size of the dominating strategies, and that deciding whether a given strategy can be eliminated by iterated strict dominance under this restriction is NP-complete (even when the limit on the support size is 3). Finally, we studied dominance and iterated dominance in Bayesian games, as an example of a concise representation language for normal form games that is interesting in its own right.
We showed that, unlike in normal form games, deciding whether a given pure strategy is dominated by another pure strategy in a Bayesian game is NP-complete (both with strict and weak dominance); however, deciding whether a strategy is dominated by some mixed strategy can still be done in polynomial time with a single linear program solve (both with strict and weak dominance). Finally, we showed that iterated dominance using pure strategies can require an exponential number of iterations in a Bayesian game (both with strict and weak dominance).
There are various avenues for future research. First, there is the open question of whether it is possible to complete iterated dominance in Bayesian games in polynomial time (even though we showed that an exponential number of alternations between the players in eliminating strategies is sometimes required). Second, we can study computational aspects of (iterated) dominance in concise representations of normal form games other than Bayesian games, for example, in graphical games [9] or local-effect/action graph games [12, 2]. (How to efficiently perform iterated very weak dominance has already been studied for partially observable stochastic games [8].) Finally, we can ask whether some of the algorithms we described (such as the one for iterated strict dominance with mixed strategies) can be made faster.
8. REFERENCES
[1] Krzysztof R. Apt. Uniform proofs of order independence for various strategy elimination procedures. Contributions to Theoretical Economics, 4(1), 2004.
[2] Nivan A. R. Bhat and Kevin Leyton-Brown. Computing Nash equilibria of action-graph games. In UAI, 2004.
[3] Ben Blum, Christian R. Shelton, and Daphne Koller. A continuation method for Nash equilibria in structured games. In IJCAI, 2003.
[4] Vincent Conitzer and Tuomas Sandholm. Complexity results about Nash equilibria. In IJCAI, pages 765-771, 2003.
[5] Drew Fudenberg and Jean Tirole. Game Theory. MIT Press, 1991.
[6] Itzhak Gilboa, Ehud Kalai, and Eitan Zemel. On the order of eliminating dominated strategies. Operations Research Letters, 9:85-89, 1990.
[7] Itzhak Gilboa, Ehud Kalai, and Eitan Zemel. The complexity of eliminating dominated strategies. Mathematics of Operations Research, 18:553-565, 1993.
[8] Eric A. Hansen, Daniel S. Bernstein, and Shlomo Zilberstein. Dynamic programming for partially observable stochastic games. In AAAI, pages 709-715, 2004.
[9] Michael Kearns, Michael Littman, and Satinder Singh. Graphical models for game theory. In UAI, 2001.
[10] Leonid Khachiyan. A polynomial algorithm in linear programming. Soviet Math. Doklady, 20:191-194, 1979.
[11] Donald E. Knuth, Christos H. Papadimitriou, and John N. Tsitsiklis. A note on strategy elimination in bimatrix games. Operations Research Letters, 7(3):103-107, 1988.
[12] Kevin Leyton-Brown and Moshe Tennenholtz. Local-effect games. In IJCAI, 2003.
[13] Richard Lipton, Evangelos Markakis, and Aranyak Mehta. Playing large games using simple strategies. In ACM-EC, pages 36-41, 2003.
[14] Michael Littman and Peter Stone. A polynomial-time Nash equilibrium algorithm for repeated games. In ACM-EC, pages 48-54, 2003.
[15] Leslie M. Marx and Jeroen M. Swinkels. Order independence for iterated weak dominance. Games and Economic Behavior, 18:219-245, 1997.
[16] Leslie M. Marx and Jeroen M. Swinkels. Corrigendum, order independence for iterated weak dominance. Games and Economic Behavior, 31:324-329, 2000.
[17] Andreu Mas-Colell, Michael Whinston, and Jerry R. Green. Microeconomic Theory. Oxford University Press, 1995.
[18] Roger Myerson. Game Theory: Analysis of Conflict. Harvard University Press, Cambridge, 1991.
[19] Martin J. Osborne and Ariel Rubinstein. A Course in Game Theory. MIT Press, 1994.
[20] Christos Papadimitriou. Algorithms, games and the Internet. In STOC, pages 749-753, 2001.
[21] Ryan Porter, Eugene Nudelman, and Yoav Shoham. Simple search methods for finding a Nash equilibrium. In AAAI, pages 664-669, 2004.
The utility function for the row player is given as follows (where ~ is some sufficiently small number): • ur (s + v, tjv) = 4 if j ∈ {1, 2}, for all v ∈ V; • ur (s + v, tjv) = 1 if j ∈ {3, 4}, for all v ∈ V; • ur (s_v, tjv) = 1 if j ∈ {1, 2}, for all v ∈ V; • ur (s_v, tjv) = 4 if j ∈ {3, 4}, for all v ∈ V; • ur (s + v, t) = ur (s_v, t) = 0 for all v ∈ V and t ∈ / {t1 v, t2 v, t3 v, t4 v}; • ur (siv, tiv) = 13 for all v ∈ V and i ∈ {1, 2, 3, 4}; • ur (siv, t) = ~ for all v ∈ V, i ∈ {1, 2, 3, 4}, and t = 6 tiv; • ur (sc, tc) = 2 for all c ∈ C; • ur (sc, t) = 0 for all c ∈ C and t = 6 tc; • ur (s1, t1) = 1 + ~; • ur (s1, t) = ~ for all t = 6 t1; • ur (s2, t1) = 1; • ur (s2, tc) = 1 for all c ∈ C; • ur (s2, t) = 0 for all t ∈ / {t1} ∪ {tc: c ∈ C}. The utility function for the column player is given as follows: • uc (siv, tiv) = 1 for all v ∈ V and i ∈ {1, 2, 3, 4}; • uc (s, tiv) = 0 for all v ∈ V, i ∈ {1, 2, 3, 4}, and s = 6 siv; • uc (sc, tc) = 1 for all c ∈ C; • uc (sl, tc) = 1 for all c ∈ C and l ∈ L occurring in c; • uc (s, tc) = 0 for all c ∈ C and s ∈ / {sc} ∪ {sl: l ∈ c}; • uc (sc, tlc) = 1 + ~ for all c ∈ C; • uc (sl, tlc) = 1 + ~ for all c ∈ C and l' = 6 l occurring in c; • uc (s, tlc) = ~ for all c ∈ C and s ∈ / {sc} ∪ {sl: l' ∈ c, l = 6 l'}; • uc (s2, t1) = 1; • uc (s, t1) = 0 for all s = 6 s2. We now show that the two instances are equivalent. First, suppose that there is a solution to the satisfiability instance. Then, consider the following sequence of eliminations in our game: 1. For every variable v that is set to true in the satisfying assignment, eliminate s + v with the mixed strategy σr that places probability 1/3 on s_v, probability 1/3 on s1v, and probability 1/3 on s2v. (The expected utility of playing σr against t1v or t2v is 14/3> 4; against t3 v or t4 v, it is 4/3> 1; and against anything else it is 2 ~ / 3> 0. Hence the dominance is valid.) 2. Similarly, for every variable v that is set to false in the satisfying assignment, eliminate s_v with the mixed strategy σr that places probability 1/3 on s + v, probability 1/3 on s3v, and probability 1/3 on s4v. (The expected utility of playing σr against t1 v or t2 v is 4/3> 1; against t3 v or t4 v, it is 14/3> 4; and against anything else it is 2 ~ / 3> 0. Hence the dominance is valid.) 3. For every c ∈ C, eliminate tc with any tlc for which l was set to true in the satisfying assignment. (This is a valid dominance because tlc performs better than tc against any strategy other than sl, and we eliminated sl in step 1 or in step 2.) 4. Finally, eliminate s2 with s1. (This is a valid dominance because s1 performs better than s2 against any strategy other than those in {tc: c ∈ C}, which we eliminated in step 3.) Hence, there is an elimination path that eliminates s2. Now, suppose that there is an elimination path that eliminates s2. The strategy that eventually dominates s2 must place most of its probability on s1, because s1 is the only other strategy that performs well against t1, which cannot be eliminated before s2. But, s1 performs significantly worse than s2 against any strategy tc with c ∈ C, so it follows that all these strategies must be eliminated first. Each strategy tc can only be eliminated by a strategy that places most of its weight on the corresponding strategies tlc with l ∈ c, because they are the only other strategies that perform well against sc, which cannot be eliminated before tc. 
But, each strategy tlc performs significantly worse than tc against sl, so it follows that for every clause c, for one of the literals l occurring in it, sl must be eliminated first. Now, strategies of the form tjv will never be eliminated because they are the unique best responses to the corresponding strategies sjv (which are, in turn, the best responses to the corresponding tjv). As a result, if strategy s + v (respectively, s_v) is eliminated, then its opposite strategy s_v (respectively, s + v) can no longer be eliminated, for the following reason. There is no other pure strategy remaining that gets a significant utility against more than one of the strategies t1v, t2v, t3v, t4v, but s_v (respectively, s + v) gets significant utility against all 4, and therefore cannot be dominated by a mixed strategy placing positive probability on at most 3 strategies. It follows that for each v ∈ V, at most one of the strategies s + v, s_v is eliminated, in such a way that for every clause c, for one of the literals l occurring in it, sl must be eliminated. But then setting all the literals l such that sl is eliminated to true constitutes a solution to the satisfiability instance. In the next section, we return to the setting where there is no restriction on the number of pure strategies on which a dominating mixed strategy can place positive probability. 6. (ITERATED) DOMINANCE IN BAYESIAN GAMES So far, we have focused on normal form games that are flatly represented (that is, every matrix entry is given ex plicitly). However, for many games, the flat representation is too large to write down explicitly, and instead, some representation that exploits the structure of the game needs to be used. Bayesian games, besides being of interest in their own right, can be thought of as a useful structured representation of normal form games, and we will study them in this section. In a Bayesian game, each player first receives privately held preference information (the player's type) from a distribution, which determines the utility that that player receives for every outcome of (that is, vector of actions played in) the game. After receiving this type, the player plays an action based on it .7 i's utility when i's type is θi and each player j plays action aj). A pure strategy in a Bayesian game is a mapping from types to actions, σi: Θi--+ Ai, where σi (θi) denotes the action that player i plays for type θi. Any vector of pure strategies in a Bayesian game defines an (expected) utility for each player, and therefore we can translate a Bayesian game into a normal form game. In this normal form game, the notions of dominance and iterated dominance are defined as before. However, the normal form representation of the game is exponentially larger than the Bayesian representation, because each player i has | Ai | IΘiI distinct pure strategies. Thus, any algorithm for Bayesian games that relies on expanding the game to its normal form will require exponential time. Specifically, our easiness results for normal form games do not directly transfer to this setting. In fact, it turns out that checking whether a strategy is dominated by a pure strategy is hard in Bayesian games. THEOREM 5. In a Bayesian game, it is NP-complete to decide whether a given pure strategy σr: Θr--+ Ar is dominated by some other pure strategy (both for strict and weak dominance), even when the row player's distribution over types is uniform. PROOF. 
The problem is in NP because it is easy to verify whether a candidate dominating strategy is indeed a dominating strategy. To show that the problem is NP-hard, we reduce an arbitrary satisfiability instance (given by a set of clauses C using variables from V) to the following Bayesian game. Let the row player's action set be A_r = {t, f, 0} and let the column player's action set be A_c = {a_c : c ∈ C}. Let the row player's type set be Θ_r = {θ_v : v ∈ V}, with a distribution π_r that is uniform. Let the row player's utility function be as follows:
• u_r(θ_v, 0, a_c) = 0 for all v ∈ V and c ∈ C;
• u_r(θ_v, b, a_c) = |V| for all v ∈ V, c ∈ C, and b ∈ {t, f} such that setting v to b satisfies c;
• u_r(θ_v, b, a_c) = −1 for all v ∈ V, c ∈ C, and b ∈ {t, f} such that setting v to b does not satisfy c.
Let the pure strategy to be dominated be the one that plays 0 for every type. We show that this strategy is dominated by a pure strategy if and only if there is a solution to the satisfiability instance. First, suppose there is a solution to the satisfiability instance. Then, let σ^d_r be given by: σ^d_r(θ_v) = t if v is set to true in the solution to the satisfiability instance, and σ^d_r(θ_v) = f otherwise. Then, against any action a_c by the column player, there is at least one type θ_v such that either +v ∈ c and σ^d_r(θ_v) = t, or −v ∈ c and σ^d_r(θ_v) = f. Thus, the row player's expected utility against action a_c is at least (1/|V|)(|V| − (|V| − 1)) = 1/|V| > 0, whereas the all-0 strategy receives 0 against every a_c; hence σ^d_r dominates it. Conversely, suppose the all-0 strategy is dominated by some pure strategy σ^d_r. Against a_c, a type for which σ^d_r plays 0 contributes 0, a type whose action satisfies c contributes |V|, and a type that plays t or f without satisfying c contributes −1. If for some clause c no type's action under σ^d_r satisfied c, then σ^d_r's expected utility against a_c would be at most 0, and strictly negative unless σ^d_r plays 0 for every type (in which case it is the dominated strategy itself). Hence, for every clause c there is some type θ_v whose action under σ^d_r satisfies c, and setting each variable v to true if σ^d_r(θ_v) = t and to false otherwise yields a satisfying assignment.
However, it turns out that we can modify the linear programs from Proposition 1 to obtain a polynomial-time algorithm for checking whether a strategy is dominated by a mixed strategy in Bayesian games.

THEOREM 6. In a Bayesian game, it can be decided in polynomial time whether a given (possibly mixed) strategy σ_r is dominated by some other mixed strategy, using linear programming (both for strict and weak dominance).

PROOF. We can modify the linear programs presented in Proposition 1 as follows. For strict dominance, again assuming without loss of generality that all the utilities in the game are positive, use the following linear program (in which p_r(θ_r, a_r) is the probability that the candidate dominating strategy places on action a_r for type θ_r, and σ_r(θ_r, a_r) is the corresponding probability under σ_r):
minimize Σ_{θ_r ∈ Θ_r} Σ_{a_r ∈ A_r} p_r(θ_r, a_r)
such that
for any a_c ∈ A_c, Σ_{θ_r ∈ Θ_r} Σ_{a_r ∈ A_r} π(θ_r) u_r(θ_r, a_r, a_c) p_r(θ_r, a_r) ≥ Σ_{θ_r ∈ Θ_r} Σ_{a_r ∈ A_r} π(θ_r) u_r(θ_r, a_r, a_c) σ_r(θ_r, a_r);
for any θ_r ∈ Θ_r, Σ_{a_r ∈ A_r} p_r(θ_r, a_r) ≤ 1.
Assuming that π(θ_r) > 0 for all θ_r ∈ Θ_r, this program will return an objective value smaller than |Θ_r| if and only if σ_r is strictly dominated, by reasoning similar to that done in Proposition 1. For weak dominance, use the following linear program:
maximize Σ_{a_c ∈ A_c} [ Σ_{θ_r ∈ Θ_r} Σ_{a_r ∈ A_r} π(θ_r) u_r(θ_r, a_r, a_c) (p_r(θ_r, a_r) − σ_r(θ_r, a_r)) ]
such that
for any a_c ∈ A_c, Σ_{θ_r ∈ Θ_r} Σ_{a_r ∈ A_r} π(θ_r) u_r(θ_r, a_r, a_c) p_r(θ_r, a_r) ≥ Σ_{θ_r ∈ Θ_r} Σ_{a_r ∈ A_r} π(θ_r) u_r(θ_r, a_r, a_c) σ_r(θ_r, a_r);
for any θ_r ∈ Θ_r, Σ_{a_r ∈ A_r} p_r(θ_r, a_r) = 1.
This program will return an objective value greater than 0 if and only if σ_r is weakly dominated, by reasoning similar to that done in Proposition 1.
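As an illustration of the strict-dominance linear program reconstructed above, here is a minimal sketch in Python (assuming NumPy and SciPy; the function name and array layout are our own, not the paper's). It minimizes the total mass of a per-type sub-distribution subject to the weak-improvement constraints and reports dominance when the optimum falls below |Θ_r|:

```python
import numpy as np
from scipy.optimize import linprog

def strictly_dominated_bayesian(u, pi, sigma, tol=1e-9):
    """Sketch of the Theorem 6 LP for strict dominance by a mixed strategy.
    u[t, a, c]   : row player's utility for type t, own action a, column action c
    pi[t]        : probability of type t (assumed strictly positive)
    sigma[t, a]  : probability that the tested strategy places on a for type t
    Utilities are shifted to be positive, which does not affect dominance."""
    T, A, C = u.shape
    u = u - u.min() + 1.0

    cost = np.ones(T * A)                    # minimize sum_{t,a} p[t,a]
    A_ub, b_ub = [], []
    for c in range(C):
        w = (pi[:, None] * u[:, :, c]).ravel()
        target = np.sum(pi[:, None] * u[:, :, c] * sigma)
        A_ub.append(-w)                      # -w @ p <= -target, i.e. w @ p >= target
        b_ub.append(-target)
    for t in range(T):                       # per-type mass at most 1
        row = np.zeros(T * A)
        row[t * A:(t + 1) * A] = 1.0
        A_ub.append(row)
        b_ub.append(1.0)

    res = linprog(cost, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=(0, None), method="highs")
    # strictly dominated iff the optimal total mass is strictly below |Theta_r|
    return res.status == 0 and res.fun < T - tol
```

The weak-dominance variant would change the objective to the total advantage over σ_r and force each per-type mass to equal 1, as in the second program above.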
We now turn to iterated dominance in Bayesian games. Naïvely, one might argue that iterated dominance in Bayesian games always requires an exponential number of steps when a significant fraction of the game's pure strategies can be eliminated, because there are exponentially many pure strategies. However, this is not a very strong argument, because oftentimes we can eliminate exponentially many pure strategies in one step. For example, if for some type θ_r ∈ Θ_r we have, for all a_c ∈ A_c, that u(θ_r, a^1_r, a_c) > u(θ_r, a^2_r, a_c), then any pure strategy for the row player which plays action a^2_r for type θ_r is dominated (by the strategy that plays action a^1_r for type θ_r instead), and there are exponentially many (|A_r|^{|Θ_r|−1}) such strategies. It is therefore conceivable that we need only polynomially many eliminations of collections of a player's strategies. However, the following theorem shows that this is not the case, by giving an example where an exponential number of iterations (that is, alternations between the players in eliminating strategies) is required. (We emphasize that this is not a result about computational complexity.)

THEOREM 7. Even in symmetric 3-player Bayesian games, iterated dominance by pure strategies can require an exponential number of iterations (both for strict and weak dominance), even with only three actions per player.

PROOF. Let each player i ∈ {1, 2, 3} have n + 1 types θ^1_i, θ^2_i, ..., θ^{n+1}_i. Let each player i have 3 actions a_i, b_i, c_i, and let the utility function of each player be defined as follows. (In the below, i + 1 and i + 2 are shorthand for i + 1 (mod 3) and i + 2 (mod 3) when used as player indices. Also, −∞ can be replaced by a sufficiently negative number. Finally, δ and ε should be chosen to be very small (even compared to 2^{−(n+1)}), and ε should be more than twice as large as δ.)
• u_i(θ^1_i; a_i, c_{i+1}, c_{i+2}) = −1;
• u_i(θ^1_i; a_i, s_{i+1}, s_{i+2}) = 0 for s_{i+1} ≠ c_{i+1} or s_{i+2} ≠ c_{i+2};
• u_i(θ^1_i; b_i, s_{i+1}, s_{i+2}) = −ε for s_{i+1} ≠ a_{i+1} and s_{i+2} ≠ a_{i+2};
• u_i(θ^1_i; b_i, s_{i+1}, s_{i+2}) = −∞ for s_{i+1} = a_{i+1} or s_{i+2} = a_{i+2};
• u_i(θ^1_i; c_i, s_{i+1}, s_{i+2}) = −∞ for all s_{i+1}, s_{i+2};
• u_i(θ^j_i; a_i, s_{i+1}, s_{i+2}) = −∞ for all s_{i+1}, s_{i+2} when j > 1;
• u_i(θ^j_i; b_i, s_{i+1}, s_{i+2}) = −ε for all s_{i+1}, s_{i+2} when j > 1;
• u_i(θ^j_i; c_i, s_{i+1}, c_{i+2}) = δ − ε − 1/2 for all s_{i+1} when j > 1;
• u_i(θ^j_i; c_i, s_{i+1}, s_{i+2}) = δ − ε for all s_{i+1} and s_{i+2} ≠ c_{i+2} when j > 1.
Let the distribution over each player's types be given by p(θ^j_i) = 2^{−j} (with the exception that p(θ^2_i) = 2^{−2} + 2^{−(n+1)}). We will be interested in eliminating strategies of the following form: play b_i for type θ^1_i, and play one of b_i or c_i otherwise. Because the utility function is the same for any type θ^j_i with j > 1, these strategies are effectively defined by the total probability that they place on c_i, which is t^2_i(2^{−2} + 2^{−(n+1)}) + Σ_{j=3}^{n+1} t^j_i 2^{−j}, where t^j_i = 1 if player i plays c_i for type θ^j_i, and 0 otherwise. (Note that the strategies are still pure strategies; the probability placed on an action by a strategy here is simply the sum of the probabilities of the types for which the strategy chooses that action.) This probability is different for any two different strategies of the given form, and we have exponentially many different strategies of the given form. For any probability q that can be expressed in this form, let σ_i(q) denote the (unique) strategy of the given form for player i which places a total probability of q on c_i. Any strategy that plays c_i for type θ^1_i, or a_i for some type θ^j_i with j > 1, can immediately be eliminated. We will show that, after that, we must eliminate the strategies σ_i(q) with high q first, slowly working down to those with lower q.
Claim 1: If σ_{i+1}(q') and σ_{i+2}(q') have not yet been eliminated, and q < q', then σ_i(q) cannot yet be eliminated. Proof: First, we show that no strategy σ_i(q'') can eliminate σ_i(q). Against σ_{i+1}(q'''), σ_{i+2}(q'''), the utility of playing σ_i(p) is −ε + p·δ − p·q'''/2. Thus, when q''' = 0, it is best to set p as high as possible (and we note that σ_{i+1}(0) and σ_{i+2}(0) have not been eliminated), but when q''' > 0, it is best to set p as low as possible because δ < q'''/2.
Thus, whether q > q'' or q < q'', σ_i(q) will always do strictly better than σ_i(q'') against some remaining opponent strategies. Hence, no strategy σ_i(q'') can eliminate σ_i(q). The only other pure strategies that could dominate σ_i(q) are strategies that play a_i for type θ^1_i, and b_i or c_i for all other types. Let us take such a strategy and suppose that it plays c_i with probability p. Against σ_{i+1}(q'), σ_{i+2}(q') (which have not yet been eliminated), the utility of playing this strategy is −(q')²/2 − ε/2 + p·δ − p·q'/2. On the other hand, playing σ_i(q) gives −ε + q·δ − q·q'/2. Because q' > q, we have −(q')²/2 < −q·q'/2, and because δ and ε are small, it follows that σ_i(q) receives a higher utility. Therefore, no strategy dominates σ_i(q), proving the claim.
Claim 2: If for all q' > q, σ_{i+1}(q') and σ_{i+2}(q') have been eliminated, then σ_i(q) can be eliminated. Proof: Consider the strategy for player i that plays a_i for type θ^1_i, and b_i for all other types (call this strategy σ'_i); we claim σ'_i dominates σ_i(q). First, if either of the other players k plays a_k for θ^1_k, then σ'_i performs better than σ_i(q) (which receives −∞ in some cases). Because the strategies for player k that play c_k for type θ^1_k, or a_k for some type θ^j_k with j > 1, have already been eliminated, all that remains to check is that σ'_i performs better than σ_i(q) whenever both of the other two players play strategies of the following form: play b_k for type θ^1_k, and play one of b_k or c_k otherwise. We note that among these strategies, there are none left that place probability greater than q on c_k. Letting q_k denote the probability with which player k plays c_k, the expected utility of playing σ'_i is −q_{i+1}·q_{i+2}/2 − ε/2. On the other hand, the utility of playing σ_i(q) is −ε + q·δ − q·q_{i+2}/2. Because q_{i+1} ≤ q, the difference between these two expressions is at least ε/2 − δ, which is positive. It follows that σ'_i dominates σ_i(q).
From Claim 2, it follows that all strategies of the form σ_i(q) will eventually be eliminated. However, Claim 1 shows that we cannot go ahead and eliminate multiple such strategies for one player, unless at least one other player simultaneously "keeps up" in the eliminated strategies: every time a σ_i(q) is eliminated such that σ_{i+1}(q) and σ_{i+2}(q) have not yet been eliminated, we need to eliminate one of the latter two strategies before any σ_i(q') with q' > q can be eliminated--that is, we need to alternate between players. Because there are exponentially many strategies of the form σ_i(q), it follows that iterated elimination will require exponentially many iterations to complete.
It follows that an efficient algorithm for iterated dominance (strict or weak) by pure strategies in Bayesian games, if it exists, must somehow be able to perform (at least part of) many iterations in a single step of the algorithm (because if each step only performed a single iteration, we would need exponentially many steps). Interestingly, Knuth et al. [11] argue that iterated dominance appears to be an inherently sequential problem (in light of their result that iterated very weak dominance is P-complete, that is, apparently not efficiently parallelizable), suggesting that aggregating many iterations may be difficult.

7. CONCLUSIONS
While the Nash equilibrium solution concept is studied more and more intensely in our community, the perhaps more elementary concept of (iterated) dominance has received much less attention.
In this paper we studied various computational aspects of this concept. We first studied both strict and weak dominance (not iterated), and showed that checking whether a given strategy is dominated by some mixed strategy can be done in polynomial time using a single linear program solve. We then moved on to iterated dominance. We showed that determining whether there is some path that eliminates a given strategy is NP-complete with iterated weak dominance. This allowed us to also show that determining whether there is a path that leads to a unique solution is NP-complete. Both of these results hold both with and without dominance by mixed strategies. (A weaker version of the second result (only without dominance by mixed strategies) was already known [7].) Iterated strict dominance, on the other hand, is path-independent (both with and without dominance by mixed strategies) and can therefore be done in polynomial time. We then studied what happens when the dominating strategy is allowed to place positive probability on only a few pure strategies. First, we showed that finding the dominating strategy with minimum support size is NP-complete (both for strict and weak dominance). Then, we showed that iterated strict dominance becomes path-dependent when there is a limit on the support size of the dominating strategies, and that deciding whether a given strategy can be eliminated by iterated strict dominance under this restriction is NP-complete (even when the limit on the support size is 3). Finally, we studied dominance and iterated dominance in Bayesian games, as an example of a concise representation language for normal form games that is interesting in its own right. We showed that, unlike in normal form games, deciding whether a given pure strategy is dominated by another pure strategy in a Bayesian game is NP-complete (both with strict and weak dominance); however, deciding whether a strategy is dominated by some mixed strategy can still be done in polynomial time with a single linear program solve (both with strict and weak dominance). Finally, we showed that iterated dominance using pure strategies can require an exponential number of iterations in a Bayesian game (both with strict and weak dominance). There are various avenues for future research. First, there is the open question of whether it is possible to complete iterated dominance in Bayesian games in polynomial time (even though we showed that an exponential number of alternations between the players in eliminating strategies is sometimes required). Second, we can study computational aspects of (iterated) dominance in concise representations of normal form games other than Bayesian games--for example, in graphical games [9] or local-effect/action graph games [12, 2]. (How to efficiently perform iterated very weak dominance has already been studied for partially observable stochastic games [8].) Finally, we can ask whether some of the algorithms we described (such as the one for iterated strict dominance with mixed strategies) can be made faster.
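As a companion to the elimination procedures analyzed in the paper, the following is a small illustrative sketch (ours, not the authors') of iterated elimination of pure strategies that are strictly dominated by other pure strategies in a two-player normal form game, assuming NumPy:

```python
import numpy as np

def iterated_strict_dominance(U1, U2):
    """Iteratively remove pure strategies strictly dominated by another *pure*
    strategy in a two-player game.  U1, U2 are payoff matrices (rows = player 1
    strategies, columns = player 2 strategies).  Returns surviving indices."""
    rows = list(range(U1.shape[0]))
    cols = list(range(U1.shape[1]))
    changed = True
    while changed:
        changed = False
        for r in rows[:]:   # player 1: r is dominated if some r2 beats it on every surviving column
            if any(all(U1[r2, c] > U1[r, c] for c in cols) for r2 in rows if r2 != r):
                rows.remove(r)
                changed = True
        for c in cols[:]:   # player 2: symmetric check on columns
            if any(all(U2[r, c2] > U2[r, c] for r in rows) for c2 in cols if c2 != c):
                cols.remove(c)
                changed = True
    return rows, cols

# Example: in the Prisoner's Dilemma, cooperation (index 0) is eliminated for both players.
U1 = np.array([[3, 0], [5, 1]])
U2 = U1.T
print(iterated_strict_dominance(U1, U2))   # ([1], [1])
```

Dominance by mixed strategies, as in the paper's linear-programming results, would replace the inner pure-strategy check with an LP solve.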
Complexity of (Iterated) Dominance * ABSTRACT We study various computational aspects of solving games using dominance and iterated dominance. We first study both strict and weak dominance (not iterated), and show that checking whether a given strategy is dominated by some mixed strategy can be done in polynomial time using a single linear program solve. We then move on to iterated dominance. We show that determining whether there is some path that eliminates a given strategy is NP-complete with iterated weak dominance. This allows us to also show that determining whether there is a path that leads to a unique solution is NP-complete. Both of these results hold both with and without dominance by mixed strategies. (A weaker version of the second result (only without dominance by mixed strategies) was already known [7].) Iterated strict dominance, on the other hand, is path-independent (both with and without dominance by mixed strategies) and can therefore be done in polynomial time. We then study what happens when the dominating strategy is allowed to place positive probability on only a few pure strategies. First, we show that finding the dominating strategy with minimum support size is NP-complete (both for strict and weak dominance). Then, we show that iterated strict dominance becomes path-dependent when there is a limit on the support size of the dominating strategies, and that deciding whether a given strategy can be eliminated by iterated strict dominance under this restriction is NP-complete (even when the limit on the support size is 3). Finally, we study Bayesian games. We show that, unlike in normal form games, deciding whether a given pure strategy is dominated by another pure strategy in a Bayesian game is NP-complete (both with strict and weak dominance); however, deciding whether a strategy is dominated by some mixed strategy can still be done in polynomial time with a single linear program solve (both with strict and weak dominance). Finally, we show that iterated dominance using pure strategies can require an exponential number of iterations in a Bayesian game (both with strict and weak dominance). * This material is based upon work supported by the National Science Foundation under ITR grants IIS-0121678 and IIS-0427858, and a Sloan Fellowship. 1. INTRODUCTION In multiagent systems with self-interested agents, the optimal action for one agent may depend on the actions taken by other agents. In such settings, the agents require tools from game theory to rationally decide on an action. Game theory offers various formal models of strategic settings--the best-known of which is a game in normal (or matrix) form, specifying a utility (payoff) for each agent for each combination of strategies that the agents choose--as well as solution concepts, which, given a game, specify which outcomes are reasonable (under various assumptions of rationality and common knowledge). Probably the best-known (and certainly the most-studied) solution concept is that of Nash equilibrium. A Nash equilibrium specifies a strategy for each player, in such a way that no player has an incentive to (unilaterally) deviate from the prescribed strategy. Recently, numerous papers have studied computing Nash equilibria in various settings [9, 4, 12, 3, 13, 14], and the complexity of constructing a Nash equilibrium in normal form games has been labeled one of the two most important open problems on the boundary of P today [20].
The problem of computing solutions according to the perhaps more elementary solution concepts of dominance and iterated dominance has received much less attention. (After an early short paper on an easy case [11], the main computational study of these concepts has actually taken place in a paper in the game theory community [7]. This is not to say that computer scientists have ignored these concepts entirely.) A strategy strictly dominates another strategy if it performs strictly better against all vectors of opponent strategies, and weakly dominates it if it performs at least as well against all vectors of opponent strategies, and strictly better against at least one. The idea is that dominated strategies can be eliminated from consideration. In iterated dominance, the elimination proceeds in rounds, and becomes easier as more strategies are eliminated: in any given round, the dominating strategy no longer needs to perform better than or as well as the dominated strategy against opponent strategies that were eliminated in earlier rounds. Computing solutions according to (iterated) dominance is important for at least the following reasons: 1) it can be computationally easier than computing (for instance) a Nash equilibrium (and therefore it can be useful as a preprocessing step in computing a Nash equilibrium), and 2) (iterated) dominance requires a weaker rationality assumption on the players than (for instance) Nash equilibrium, and therefore solutions derived according to it are more likely to occur. In this paper, we study some fundamental computational questions concerning dominance and iterated dominance, including how hard it is to check whether a given strategy can be eliminated by each of the variants of these notions. The rest of the paper is organized as follows. In Section 2, we briefly review definitions and basic properties of normal form games, strict and weak dominance, and iterated strict and weak dominance. In the remaining sections, we study computational aspects of dominance and iterated dominance. In Section 3, we study one-shot (not iterated) dominance. In Section 4, we study iterated dominance. In Section 5, we study dominance and iterated dominance when the dominating strategy can only place probability on a few pure strategies. Finally, in Section 6, we study dominance and iterated dominance in Bayesian games. 2. DEFINITIONS AND BASIC PROPERTIES 3. DOMINANCE (NOT ITERATED) 4. ITERATED DOMINANCE 5. (ITERATED) DOMINANCE USING MIXED STRATEGIES WITH SMALL SUPPORTS 6. (ITERATED) DOMINANCE IN BAYESIAN GAMES 7. CONCLUSIONS While the Nash equilibrium solution concept is studied more and more intensely in our community, the perhaps more elementary concept of (iterated) dominance has received much less attention. In this paper we studied various computational aspects of this concept. We first studied both strict and weak dominance (not iterated), and showed that checking whether a given strategy is dominated by some mixed strategy can be done in polynomial time using a single linear program solve. We then moved on to iterated dominance. We showed that determining whether there is some path that eliminates a given strategy is NP-complete with iterated weak dominance. This allowed us to also show that determining whether there is a path that leads to a unique solution is NP-complete. Both of these results hold both with and without dominance by mixed strategies. (A weaker version of the second result (only without dominance by mixed strategies) was already known [7].)
Iterated strict dominance, on the other hand, is path-independent (both with and without dominance by mixed strategies) and can therefore be done in polynomial time. We then studied what happens when the dominating strategy is allowed to place positive probability on only a few pure strategies. First, we showed that finding the dominating strategy with minimum support size is NP-complete (both for strict and weak dominance). Then, we showed that iterated strict dominance becomes path-dependent when there is a limit on the support size of the dominating strategies, and that deciding whether a given strategy can be eliminated by iterated strict dominance under this restriction is NP-complete (even when the limit on the support size is 3). Finally, we studied dominance and iterated dominance in Bayesian games, as an example of a concise representation language for normal form games that is interesting in its own right. We showed that, unlike in normal form games, deciding whether a given pure strategy is dominated by another pure strategy in a Bayesian game is NP-complete (both with strict and weak dominance); however, deciding whether a strategy is dominated by some mixed strategy can still be done in polynomial time with a single linear program solve (both with strict and weak dominance). Finally, we showed that iterated dominance using pure strategies can require an exponential number of iterations in a Bayesian game (both with strict and weak dominance). There are various avenues for future research. First, there is the open question of whether it is possible to complete iterated dominance in Bayesian games in polynomial time (even though we showed that an exponential number of alternations between the players in eliminating strategies is sometimes required). Second, we can study computational aspects of (iterated) dominance in concise representations of normal form games other than Bayesian games--for example, in graphical games [9] or local-effect/action graph games [12, 2]. (How to efficiently perform iterated very weak dominance has already been studied for partially observable stochastic games [8].) Finally, we can ask whether some of the algorithms we described (such as the one for iterated strict dominance with mixed strategies) can be made faster.
Complexity of (Iterated) Dominance * ABSTRACT We study various computational aspects of solving games using dominance and iterated dominance. We first study both strict and weak dominance (not iterated), and show that checking whether a given strategy is dominated by some mixed strategy can be done in polynomial time using a single linear program solve. We then move on to iterated dominance. We show that determining whether there is some path that eliminates a given strategy is NP-complete with iterated weak dominance. This allows us to also show that determining whether there is a path that leads to a unique solution is NP-complete. Both of these results hold both with and without dominance by mixed strategies. (A weaker version of the second result (only without dominance by mixed strategies) was already known [7].) Iterated strict dominance, on the other hand, is path-independent (both with and without dominance by mixed strategies) and can therefore be done in polynomial time. We then study what happens when the dominating strategy is allowed to place positive probability on only a few pure strategies. First, we show that finding the dominating strategy with minimum support size is NP-complete (both for strict and weak dominance). Then, we show that iterated strict dominance becomes path-dependent when there is a limit on the support size of the dominating strategies, and that deciding whether a given strategy can be eliminated by iterated strict dominance under this restriction is NP-complete (even when the limit on the support size is 3). Finally, we study Bayesian games. We show that, unlike in normal form games, deciding whether a given pure strategy is dominated by another pure strategy in a Bayesian game is NP-complete (both with strict and weak dominance); however, deciding whether a strategy is dominated by some mixed strategy can still be done in polynomial time with a single linear program solve (both with strict and weak dominance). Finally, we show that iterated dominance using pure strategies can require an exponential number of iterations in a Bayesian game (both with strict and weak dominance). * This material is based upon work supported by the National Science Foundation under ITR grants IIS-0121678 and IIS-0427858, and a Sloan Fellowship. 1. INTRODUCTION In such settings, the agents require tools from game theory to rationally decide on an action. Probably the best-known (and certainly the most-studied) solution concept is that of Nash equilibrium. The problem of computing solutions according to the perhaps more elementary solution concepts of dominance and iterated dominance has received much less attention. (This is not to say that computer scientists have ignored these concepts entirely.) A strategy strictly dominates another strategy if it performs strictly better against all vectors of opponent strategies, and weakly dominates it if it performs at least as well against all vectors of opponent strategies, and strictly better against at least one. The idea is that dominated strategies can be eliminated from consideration. In iterated dominance, the elimination proceeds in rounds, and becomes easier as more strategies are eliminated: in any given round, the dominating strategy no longer needs to perform better than or as well as the dominated strategy against opponent strategies that were eliminated in earlier rounds.
In this paper, we study some fundamental computational questions concerning dominance and iterated dominance, including how hard it is to check whether a given strategy can be eliminated by each of the variants of these notions. The rest of the paper is organized as follows. In Section 2, we briefly review definitions and basic properties of normal form games, strict and weak dominance, and iterated strict and weak dominance. In the remaining sections, we study computational aspects of dominance and iterated dominance. In Section 3, we study one-shot (not iterated) dominance. In Section 4, we study iterated dominance. In Section 5, we study dominance and iterated dominance when the dominating strategy can only place probability on a few pure strategies. Finally, in Section 6, we study dominance and iterated dominance in Bayesian games. 7. CONCLUSIONS While the Nash equilibrium solution concept is studied more and more intensely in our community, the perhaps more elementary concept of (iterated) dominance has received much less attention. In this paper we studied various computational aspects of this concept. We first studied both strict and weak dominance (not iterated), and showed that checking whether a given strategy is dominated by some mixed strategy can be done in polynomial time using a single linear program solve. We then moved on to iterated dominance. We showed that determining whether there is some path that eliminates a given strategy is NP-complete with iterated weak dominance. This allowed us to also show that determining whether there is a path that leads to a unique solution is NP-complete. Both of these results hold both with and without dominance by mixed strategies. (A weaker version of the second result (only without dominance by mixed strategies) was already known [7].) Iterated strict dominance, on the other hand, is path-independent (both with and without dominance by mixed strategies) and can therefore be done in polynomial time. We then studied what happens when the dominating strategy is allowed to place positive probability on only a few pure strategies. First, we showed that finding the dominating strategy with minimum support size is NP-complete (both for strict and weak dominance). Then, we showed that iterated strict dominance becomes path-dependent when there is a limit on the support size of the dominating strategies, and that deciding whether a given strategy can be eliminated by iterated strict dominance under this restriction is NP-complete (even when the limit on the support size is 3). Finally, we studied dominance and iterated dominance in Bayesian games, as an example of a concise representation language for normal form games that is interesting in its own right. Finally, we showed that iterated dominance using pure strategies can require an exponential number of iterations in a Bayesian game (both with strict and weak dominance). There are various avenues for future research. First, there is the open question of whether it is possible to complete iterated dominance in Bayesian games in polynomial time (even though we showed that an exponential number of alternations between the players in eliminating strategies is sometimes required). Second, we can study computational aspects of (iterated) dominance in concise representations of normal form games other than Bayesian games--for example, in graphical games [9] or local-effect/action graph games [12, 2]. 
(How to efficiently perform iterated very weak dominance has already been studied for partially observable stochastic games [8].) Finally, we can ask whether some of the algorithms we described (such as the one for iterated strict dominance with mixed strategies) can be made faster.
J-45
Empirical Mechanism Design: Methods, with Application to a Supply-Chain Scenario
Our proposed methods employ learning and search techniques to estimate outcome features of interest as a function of mechanism parameter settings. We illustrate our approach with a design task from a supply-chain trading competition. Designers adopted several rule changes in order to deter particular procurement behavior, but the measures proved insufficient. Our empirical mechanism analysis models the relation between a key design parameter and outcomes, confirming the observed behavior and indicating that no reasonable parameter settings would have been likely to achieve the desired effect. More generally, we show that under certain conditions, the estimator of optimal mechanism parameter setting based on empirical data is consistent.
[ "empir mechan", "empir mechan design", "paramet set", "analysi", "observ behavior", "interest outcom featur", "suppli-chain trade", "two-stage game", "player", "particip", "gametheoret model", "nash equilibrium", "game theori" ]
[ "P", "P", "P", "P", "P", "R", "M", "U", "U", "U", "M", "U", "U" ]
Empirical Mechanism Design: Methods, with Application to a Supply-Chain Scenario Yevgeniy Vorobeychik, Christopher Kiekintveld, and Michael P. Wellman University of Michigan Computer Science & Engineering Ann Arbor, MI 48109-2121 USA { yvorobey, ckiekint, wellman }@umich. edu ABSTRACT Our proposed methods employ learning and search techniques to estimate outcome features of interest as a function of mechanism parameter settings. We illustrate our approach with a design task from a supply-chain trading competition. Designers adopted several rule changes in order to deter particular procurement behavior, but the measures proved insufficient. Our empirical mechanism analysis models the relation between a key design parameter and outcomes, confirming the observed behavior and indicating that no reasonable parameter settings would have been likely to achieve the desired effect. More generally, we show that under certain conditions, the estimator of optimal mechanism parameter setting based on empirical data is consistent. Categories and Subject Descriptors I.6 [Computing Methodologies]: Simulation and Modeling; J.4 [Computer Applications]: Social and Behavioral Sciences-Economics General Terms Algorithms, Economics, Design 1. MOTIVATION We illustrate our problem with an anecdote from a supply chain research exercise: the 2003 and 2004 Trading Agent Competition (TAC) Supply Chain Management (SCM) game. TAC/SCM [1] defines a scenario where agents compete to maximize their profits as manufacturers in a supply chain. The agents procure components from the various suppliers and assemble finished goods for sale to customers, repeatedly over a simulated year. As it happened, the specified negotiation behavior of suppliers provided a great incentive for agents to procure large quantities of components on day 0: the very beginning of the simulation. During the early rounds of the 2003 SCM competition, several agent developers discovered this, and the apparent success led to most agents performing the majority of their purchasing on day 0. Although jockeying for day-0 procurement turned out to be an interesting strategic issue in itself [19], the phenomenon detracted from other interesting problems, such as adapting production levels to varying demand (since component costs were already sunk), and dynamic management of production, sales, and inventory. Several participants noted that the predominance of day-0 procurement overshadowed other key research issues, such as factory scheduling [2] and optimizing bids for customer orders [13]. After the 2003 tournament, there was a general consensus in the TAC community that the rules should be changed to deter large day-0 procurement. The task facing game organizers can be viewed as a problem in mechanism design. The designers have certain game features under their control, and a set of objectives regarding game outcomes. Unlike most academic treatments of mechanism design, the objective is a behavioral feature (moderate day-0 procurement) rather than an allocation feature like economic efficiency, and the allowed mechanisms are restricted to those judged to require only an incremental modification of the current game. Replacing the supplychain negotiation procedures with a one-shot direct mechanism, for example, was not an option. We believe that such operational restrictions and idiosyncratic objectives are actually quite typical of practical mechanism design settings, where they are perhaps more commonly characterized as incentive engineering problems. 
In response to the problem, the TAC/SCM designers adopted several rule changes intended to penalize large day-0 orders. These included modifications to supplier pricing policies and introduction of storage costs assessed on inventories of components and finished goods. Despite the changes, day-0 procurement was very high in the early rounds of the 2004 competition. In a drastic measure, the GameMaster imposed a fivefold increase of storage costs midway through the tournament. Even this did not stem the tide, and day-0 procurement in the final rounds actually increased (by some measures) from 2003 [9]. The apparent difficulty in identifying rule modifications that effect moderation in day-0 procurement is quite striking. Although the designs were widely discussed, predictions for the effects of various proposals were supported primarily by intuitive arguments or at best by back-of-the-envelope calculations. Much of the difficulty, of course, is anticipating the agents' (and their developers') responses without essentially running a gaming exercise for this purpose. The episode caused us to consider whether new approaches or tools could enable more systematic analysis of design options. Standard game-theoretic and mechanism design methods are clearly relevant, although the lack of an analytic description of the game seems to be an impediment. Under the assumption that the simulator itself is the only reliable source of outcome computation, we refer to our task as empirical mechanism design. In the sequel, we develop some general methods for empirical mechanism design and apply them to the TAC/SCM redesign problem. Our analysis focuses on the setting of storage costs (taking other game modifications as fixed), since this is the most direct deterrent to early procurement among the measures adopted. Our results confirm the basic intuition that incentives for day-0 purchasing decrease as storage costs rise. We also confirm that the high day-0 procurement observed in the 2004 tournament is a rational response to the setting of storage costs used. Finally, we conclude from our data that it is very unlikely that any reasonable setting of storage costs would result in acceptable levels of day-0 procurement, so a different design approach would have been required to eliminate this problem. Overall, we contribute a formal framework and a set of methods for tackling indirect mechanism design problems in settings where only a black-box description of players' utilities is available. Our methods incorporate estimation of sets of Nash equilibria and sample Nash equilibria, used in conjunction to support general claims about the structure of the mechanism designer's utility, as well as a restricted probabilistic analysis to assess the likelihood of conclusions. We believe that most realistic problems are too complex to be amenable to exact analysis. Consequently, we advocate the approach of gathering evidence to provide indirect support of specific hypotheses.

2. PRELIMINARIES
A normal form game is denoted by [I, {R_i}, {u_i(r)}], where I refers to the set of players and m = |I| is the number of players. R_i is the set of strategies available to player i ∈ I, with R = R_1 × ... × R_m representing the set of joint strategies of all players. We designate the set of pure strategies available to player i by A_i, and denote the joint set of pure strategies of all players by A = A_1 × ... × A_m. It is often convenient to refer to a strategy of player i separately from that of the remaining players.
To accommodate this, we use a_{−i} to denote the joint strategy of all players other than player i. Let S_i be the set of all probability distributions (mixtures) over A_i and, similarly, S be the set of all distributions over A. An s ∈ S is called a mixed strategy profile. When the game is finite (i.e., A and I are both finite), the probability that a ∈ A is played under s is written s(a) = s(a_i, a_{−i}). When the distribution s is not correlated, we can simply say s_i(a_i) when referring to the probability player i plays a_i under s. Next, we define the payoff (utility) function of each player i by u_i : A_1 × · · · × A_m → R, where u_i(a_i, a_{−i}) indicates the payoff to player i for playing pure strategy a_i when the remaining players play a_{−i}. We can extend this definition to mixed strategies by assuming that the u_i are von Neumann-Morgenstern (vNM) utilities as follows: u_i(s) = E_s[u_i], where E_s is the expectation taken with respect to the probability distribution of play induced by the players' mixed strategy s. (By employing the normal form, we model agents as playing a single action, with decisions taken simultaneously. This is appropriate for our current study, which treats strategies, i.e. agent programs, as atomic actions. We could capture finer-grained decisions about action over time in the extensive form. Although any extensive game can be recast in normal form, doing so may sacrifice compactness and blur relevant distinctions, e.g., subgame perfection.) Occasionally, we write u_i(x, y) to mean that x ∈ A_i or S_i and y ∈ A_{−i} or S_{−i} depending on context. We also express the set of utility functions of all players as u(·) = {u_1(·), ..., u_m(·)}.
We define a function ε : R → R, interpreted as the maximum benefit any player can obtain by deviating from its strategy in the specified profile:
ε(r) = max_{i∈I} max_{a_i∈A_i} [u_i(a_i, r_{−i}) − u_i(r)],   (1)
where r belongs to some strategy set, R, of either pure or mixed strategies. Faced with a game, an agent would ideally play its best strategy given those played by the other agents. A configuration where all agents play strategies that are best responses to the others constitutes a Nash equilibrium.

DEFINITION 1. A strategy profile r = (r_1, ..., r_m) constitutes a Nash equilibrium of game [I, {R_i}, {u_i(r)}] if for every i ∈ I and every r'_i ∈ R_i, u_i(r_i, r_{−i}) ≥ u_i(r'_i, r_{−i}).

When r ∈ A, the above defines a pure strategy Nash equilibrium; otherwise the definition describes a mixed strategy Nash equilibrium. We often appeal to the concept of an approximate, or ε-Nash, equilibrium, where ε is the maximum benefit to any agent for deviating from the prescribed strategy. Thus, ε(r) as defined above in (1) is such that profile r is an ε-Nash equilibrium iff ε(r) ≤ ε. In this study we devote particular attention to games that exhibit symmetry with respect to payoffs, rendering agents strategically identical.

DEFINITION 2. A game [I, {R_i}, {u_i(r)}] is symmetric if for all i, j ∈ I, (a) R_i = R_j and (b) u_i(r_i, r_{−i}) = u_j(r_j, r_{−j}) whenever r_i = r_j and r_{−i} = r_{−j}.
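As an illustration of Equation (1) and Definition 1, here is a minimal sketch (ours, assuming NumPy) that computes ε(a) for a pure profile of a finite game and enumerates the pure-strategy Nash equilibria as the profiles with zero regret:

```python
import numpy as np
from itertools import product

def regret(payoffs, profile):
    """epsilon(a) from Equation (1) for a pure profile.  payoffs[i] is an
    m-dimensional array of player i's payoffs over joint pure strategies;
    profile is a tuple of strategy indices.  The profile is a pure Nash
    equilibrium exactly when the returned value is 0."""
    eps = 0.0
    for i, u_i in enumerate(payoffs):
        current = u_i[profile]
        for dev in range(u_i.shape[i]):
            alt = list(profile)
            alt[i] = dev
            eps = max(eps, u_i[tuple(alt)] - current)
    return eps

def pure_nash_equilibria(payoffs):
    shape = payoffs[0].shape
    return [p for p in product(*map(range, shape)) if regret(payoffs, p) == 0.0]

# Matching pennies has no pure equilibrium; every profile has positive regret.
U1 = np.array([[1, -1], [-1, 1]])
print(pure_nash_equilibria([U1, -U1]))   # []
```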
3. THE MODEL
We model the strategic interactions between the designer of the mechanism and its participants as a two-stage game. The designer moves first by selecting a value, θ, from a set of allowable mechanism settings, Θ. All the participant agents observe the mechanism parameter θ and move simultaneously thereafter. For example, the designer could be deciding between a first-price and a second-price sealed-bid auction mechanism, with the presumption that after the choice has been made, the bidders will participate with full awareness of the auction rules. Since the participants play with full knowledge of the mechanism parameter, we define a game between them in the second stage as Γ_θ = [I, {R_i}, {u_i(r, θ)}]. We refer to Γ_θ as the game induced by θ. Let N(θ) be the set of strategy profiles considered solutions of the game Γ_θ. (We generally adopt Nash equilibrium as the solution concept, and thus take N(θ) to be the set of equilibria. However, much of the methodology developed here could be employed with alternative criteria for deriving agent behavior from a game definition.) Suppose that the goal of the designer is to optimize the value of some welfare function, W(r, θ), dependent on the mechanism parameter and the resulting play, r. We define a pessimistic measure, W(R̂, θ) = inf{W(r, θ) : r ∈ R̂}, representing the worst-case welfare of the game induced by θ, assuming that agents play some joint strategy in R̂. Typically we care about W(N(θ), θ), the worst-case outcome of playing some solution. (Again, alternatives are available. For example, if one has a probability distribution over the solution set N(θ), it would be natural to take the expectation of W(r, θ) instead.) On some problems we can gain considerable advantage by using an aggregation function to map the welfare outcome of a game specified in terms of agent strategies to an equivalent welfare outcome specified in terms of a lower-dimensional summary.

DEFINITION 3. A function φ : R_1 × · · · × R_m → R^q is an aggregation function if m ≥ q and W(r, θ) = V(φ(r), θ) for some function V.

We overload the function symbol to apply to sets of strategy profiles: φ(R̂) = {φ(r) : r ∈ R̂}. For convenience of exposition, we write φ*(θ) to mean φ(N(θ)). Using an aggregation function yields a more compact representation of strategy profiles. For example, suppose, as in our application below, that an agent's strategy is defined by a numeric parameter. If all we care about is the total value played, we may take φ(a) = Σ_{i=1}^m a_i. If we have chosen our aggregator carefully, we may also capture structure not obvious otherwise. For example, φ*(θ) could be decreasing in θ, whereas N(θ) might have a more complex structure. Given a description of the solution correspondence N(θ) (equivalently, φ*(θ)), the designer faces a standard optimization problem. Alternatively, given a simulator that could produce an unbiased sample from the distribution of W(N(θ), θ) for any θ, the designer would be faced with another much appreciated problem in the literature: simulation optimization [12]. However, even for a game Γ_θ with known payoffs it may be computationally intractable to solve for Nash equilibria, particularly if the game has large or infinite strategy sets. Additionally, we wish to study games where the payoffs are not explicitly given, but must be determined from simulation or other experience with the game. (This is often the case for real games of interest, where natural language or algorithmic descriptions may substitute for a formal specification of strategy and payoff functions.) Accordingly, we assume that we are given a (possibly noisy) data set of payoff realizations: D_o = {(θ^1, a^1, U^1), ..., (θ^k, a^k, U^k)}, where for every data point θ^i is the observed mechanism parameter setting, a^i is the observed pure strategy profile of the participants, and U^i is the corresponding realization of agent payoffs. We may also have additional data generated by a (possibly noisy) simulator: D_s = {(θ^{k+1}, a^{k+1}, U^{k+1}), ..., (θ^{k+l}, a^{k+l}, U^{k+l})}.
Let D = {D_o, D_s} be the combined data set. (Either D_o or D_s may be null for a particular problem.) In the remainder of this paper, we apply our modeling approach, together with several empirical game-theoretic methods, in order to answer questions regarding the design of the TAC/SCM scenario.

4. EMPIRICAL DESIGN ANALYSIS
Since our data comes in the form of payoff experience and not as the value of an objective function for given settings of the control variable, we can no longer rely on the methods for optimizing functions using simulations. Indeed, a fundamental aspect of our design problem involves estimating the Nash equilibrium correspondence. Furthermore, we cannot rely directly on the convergence results that abound in the simulation optimization literature, and must establish probabilistic analysis methods tailored for our problem setting.

4.1 TAC/SCM Design Problem
We describe our empirical design analysis methods by presenting a detailed application to the TAC/SCM scenario introduced above. Recall that during the 2004 tournament, the designers of the supply-chain game chose to dramatically increase storage costs as a measure aimed at curbing day-0 procurement, to little avail. Here we systematically explore the relationship between storage costs and the aggregate quantity of components procured on day 0 in equilibrium. In doing so, we consider several questions raised during and after the tournament. First, does increasing storage costs actually reduce day-0 procurement? Second, was the excessive day-0 procurement that was observed during the 2004 tournament rational? And third, could increasing storage costs sufficiently have reduced day-0 procurement to an acceptable level, and if so, what should the setting of storage costs have been? It is this third question that defines the mechanism design aspect of our analysis. (We do not address whether and how other measures, e.g. constraining procurement directly, could have achieved design objectives. Our approach takes as given some set of design options, in this case defined by the storage cost parameter. In principle our methods could be applied to a different or larger design space, though with corresponding complexity growth.) To apply our methods, we must specify the agent strategy sets, the designer's welfare function, the mechanism parameter space, and the source of data. We restrict the agent strategies to be a multiplier on the quantity of the day-0 requests by one of the finalists, Deep Maize, in the 2004 TAC/SCM tournament. We further restrict it to the set [0, 1.5], since any strategy below 0 is illegal and strategies above 1.5 are extremely aggressive (thus unlikely to provide refuting deviations beyond those available from included strategies, and certainly not part of any desirable equilibrium). All other behavior is based on the behavior of Deep Maize and is identical for all agents. This choice can provide only an estimate of the actual tournament behavior of a typical agent. However, we believe that the general form of the results should be robust to changes in the full agent behavior. We model the designer's welfare function as a threshold on the sum of day-0 purchases. Let φ(a) = Σ_{i=1}^6 a_i be the aggregation function representing the sum of day-0 procurement of the six agents participating in a particular supply-chain game (for mixed strategy profiles s, we take the expectation of φ with respect to the mixture). The designer's welfare function W(N(θ), θ) is then given by I{sup{φ*(θ)} ≤ α}, where α is the maximum acceptable level of day-0 procurement and I is the indicator function.
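A small sketch of the aggregation and threshold welfare functions just defined (the function names are ours, and the equilibrium estimates in the example are hypothetical):

```python
def phi(a):
    """Aggregation used in the TAC/SCM analysis: total day-0 procurement
    across the six agents, for a pure profile a of day-0 multipliers."""
    return sum(a)

def designer_welfare(phi_star, alpha):
    """W(N(theta), theta) = I{ sup phi*(theta) <= alpha }: the designer is
    satisfied only if every predicted equilibrium outcome keeps aggregate
    day-0 procurement at or below alpha.  phi_star is an iterable of
    aggregated equilibrium outcomes estimated for one setting of theta."""
    return 1 if max(phi_star) <= alpha else 0

# Example with hypothetical equilibrium estimates for one storage-cost setting:
print(designer_welfare([3.1, 4.5], alpha=2))   # 0 -- this setting fails the design goal
```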
The designer selects a value θ of storage costs, expressed as an annual percentage of the baseline value of components in the inventory (charged daily), from the set Θ = R⁺. Since the designer's decision depends only on φ*(θ), we present all of our results in terms of the value of the aggregation function.

4.2 Estimating Nash Equilibria
The objective of TAC/SCM agents is to maximize profits realized over a game instance. Thus, if we fix a strategy for each agent at the beginning of the simulation and record the corresponding profits at the end, we will have obtained a data point in the form (a, U(a)). If we also have fixed the parameter θ of the simulator, the resulting data point becomes part of our data set D. This data set, then, contains data only in the form of pure strategies of players and their corresponding payoffs, and, consequently, in order to formulate the designer's problem as optimization, we must first determine or approximate the set of Nash equilibria of each game Γ_θ. Thus, we need methods for approximating Nash equilibria of infinite games. Below, we describe the two methods we used in our study. The first has been explored empirically before, whereas the second is introduced here as a method specifically designed to approximate a set of Nash equilibria.

4.2.1 Payoff Function Approximation
The first method for estimating Nash equilibria based on data uses supervised learning to approximate the payoff functions of mechanism participants from a data set of game experience [17]. (We do not address whether and how other measures could have achieved design objectives; see the note in Section 4.1 on the scope of the design space.) Once approximate payoff functions are available for all players, the Nash equilibria may be either found analytically or approximated using numerical techniques, depending on the learning model. In what follows, we estimate only a sample Nash equilibrium using this technique, although this restriction can be removed at the expense of additional computation time. One advantage of this method is that it can be applied to any data set and does not require the use of a simulator. Thus, we can apply it when D_s = ∅. If a simulator is available, we can generate additional data to build confidence in our initial estimates. (For example, we can use active learning techniques [5] to improve the quality of payoff function approximation. In this work, we instead concentrate on search in strategy profile space.) We tried the following methods for approximating payoff functions: quadratic regression (QR), locally weighted average (LWA), and locally weighted linear regression (LWLR). We also used control variates to reduce the variance of payoff estimates, as in our previous empirical game-theoretic analysis of TAC/SCM-03 [19]. The quadratic regression model makes it possible to compute equilibria of the learned game analytically. For the other methods we applied replicator dynamics [7] to a discrete approximation of the learned game. The expected total day-0 procurement in equilibrium was taken as the estimate of an outcome.
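As an illustration of the second step of this method, here is a minimal sketch of discrete-time replicator dynamics applied to a symmetric two-player discretization of a learned payoff function (a simplification of the six-player TAC/SCM setting; NumPy is assumed and the payoff matrix in the example is hypothetical, standing in for a regression fit to game data):

```python
import numpy as np

def replicator_dynamics(M, iters=2000, tol=1e-10):
    """Discrete-time replicator dynamics on a symmetric game with (learned)
    payoff matrix M[i, j] = payoff for playing i against j.  Returns a mixed
    profile approximating a symmetric equilibrium of the discretized game.
    Payoffs are shifted positive so the multiplicative update is well defined."""
    M = M - M.min() + 1.0
    x = np.full(M.shape[0], 1.0 / M.shape[0])
    for _ in range(iters):
        fitness = M @ x
        new_x = x * fitness / (x @ fitness)
        if np.max(np.abs(new_x - x)) < tol:
            return new_x
        x = new_x
    return x

# Hypothetical learned payoffs over the discretized day-0 multipliers {0, 0.3, ..., 1.5}.
grid = np.arange(0.0, 1.6, 0.3)
M = -np.abs(grid[:, None] - 1.0) - 0.5 * grid[None, :]
mix = replicator_dynamics(M)
print("estimated aggregate day-0 procurement:", 6 * grid @ mix)
```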
4.2.2 Search in Strategy Profile Space
When we have access to a simulator, we can also use directed search through profile space to estimate the set of Nash equilibria, which we describe here after presenting some additional notation.

DEFINITION 4. A strategic neighbor of a pure strategy profile a is a profile that is identical to a in all but one strategy. We define Snb(a, D) as the set of all strategic neighbors of a available in the data set D. Similarly, we define Snb(a, D̃) to be the set of all strategic neighbors of a not in D. Finally, for any a' ∈ Snb(a, D) we define the deviating agent as i(a, a').

DEFINITION 5. The ε-bound, ε̂, of a pure strategy profile a is defined as ε̂(a) = max_{a'∈Snb(a,D)} max{u_{i(a,a')}(a') − u_{i(a,a')}(a), 0}. We say that a is a candidate δ-equilibrium for δ ≥ ε̂. When Snb(a, D̃) = ∅ (i.e., all strategic neighbors are represented in the data), a is confirmed as an ε̂-Nash equilibrium.

Our search method operates by exploring deviations from candidate equilibria. We refer to it as BestFirstSearch, as it selects with probability one an unexplored strategic neighbor a' ∈ Snb(a, D̃) of a profile a that has the smallest ε̂ in D. Finally, we define an estimator for a set of Nash equilibria.

DEFINITION 6. For a set K, define Co(K) to be the convex hull of K. Let B_δ be the set of candidates at level δ. We define φ̂*(θ) = Co({φ(a) : a ∈ B_δ}) for a fixed δ to be an estimator of φ*(θ).

In words, the estimate of a set of equilibrium outcomes is the convex hull of all aggregated strategy profiles with ε-bound below some fixed δ. This definition allows us to exploit structure arising from the aggregation function. If two profiles are close in terms of aggregation values, they may be likely to have similar ε-bounds. In particular, if one is an equilibrium, the other may be as well. We present some theoretical support for this method of estimating the set of Nash equilibria below. Since the game we are interested in is infinite, it is necessary to terminate BestFirstSearch before exploring the entire space of strategy profiles. We currently determine termination time in a somewhat ad hoc manner, based on observations about the current set of candidate equilibria. (Generally, search is terminated once the set of candidate equilibria is small enough to draw useful conclusions about the likely range of equilibrium strategies in the game.)

4.3 Data Generation
Our data was collected by simulating TAC/SCM games on a local version of the 2004 TAC/SCM server, which has a configuration setting for the storage cost. Agent strategies in simulated games were selected from the set {0, 0.3, 0.6, ..., 1.5} in order to have positive probability of generating strategic neighbors. (Of course, we do not restrict our Nash equilibrium estimates to stay in this discrete subset of [0, 1.5].) A baseline data set D_o was generated by sampling 10 randomly generated strategy profiles for each θ ∈ {0, 50, 100, 150, 200}. Between 5 and 10 games were run for each profile after discarding games that had various flaws. (For example, if we detected that any agent failed during the game, with failures including crashes, network connectivity problems, and other obvious anomalies, the game would be thrown out.) We used search to generate a simulated data set D_s, performing between 12 and 32 iterations of BestFirstSearch for each of the above settings of θ. Since simulation cost is extremely high (a game takes nearly 1 hour to run), we were able to run a total of 2670 games over the span of more than six months. For comparison, to get the entire description of an empirical game defined by the restricted finite joint strategy space for each value of θ ∈ {0, 50, 100, 150, 200} would have required at least 23100 games (sampling each profile 10 times).
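The ε-bound of Definition 5 and the interval estimator of Definition 6 can be computed directly from an empirical data set of sample-mean payoffs; a minimal sketch follows (ours; the data structures are assumptions, not the paper's code):

```python
def epsilon_bounds(mean_payoffs):
    """Epsilon-bound (Definition 5) for every profile in an empirical data set.
    mean_payoffs maps a pure profile (tuple of strategies) to a sequence of
    sample-mean payoffs, one per player.  Only deviations present in the data
    are considered, so the bound is a lower bound on the true regret."""
    bounds = {}
    for a, u_a in mean_payoffs.items():
        eps = 0.0
        for b, u_b in mean_payoffs.items():
            diffs = [i for i in range(len(a)) if a[i] != b[i]]
            if len(diffs) == 1:                    # b is a strategic neighbor of a
                i = diffs[0]                       # i is the deviating agent
                eps = max(eps, u_b[i] - u_a[i])
        bounds[a] = eps
    return bounds

def estimate_phi_star(mean_payoffs, phi, delta):
    """Definition 6 in one dimension: the interval of phi over all candidate
    delta-equilibria found in the data."""
    cands = [phi(a) for a, eps in epsilon_bounds(mean_payoffs).items() if eps <= delta]
    return (min(cands), max(cands)) if cands else None
```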
4.4 Results

4.4.1 Analysis of the Baseline Data Set
We applied the three learning methods described above to the baseline data set D_o. Additionally, we generated an estimate of the Nash equilibrium correspondence, φ̂*(θ), by applying Definition 6 with δ = 2.5E6. The results are shown in Figure 1. As we can see, the correspondence φ̂*(θ) has little predictive power based on D_o, and reveals no interesting structure about the game. In contrast, all three learning methods suggest that total day-0 procurement is a decreasing function of storage costs.

[Figure 1: Aggregate day-0 procurement estimates based on D_o, plotting total day-0 procurement against storage cost for LWA, LWLR, and QR. The correspondence φ̂*(θ) is the interval between BaselineMin and BaselineMax.]

4.4.2 Analysis of Search Data
To corroborate the initial evidence from the learning methods, we estimated φ̂*(θ) (again, using δ = 2.5E6) on the data set D = {D_o, D_s}, where D_s is data generated through the application of BestFirstSearch. The results of this estimate are plotted against the results of the learning methods trained on D_o in Figure 2. (It is unclear how meaningful the results of learning would be if D_s were added to the training data set. Indeed, the additional data may actually increase the learning variance.) First, we note that the addition of the search data narrows the range of potential equilibria substantially. Furthermore, the actual point predictions of the learning methods and those based on ε-bounds after search are reasonably close. Combining the evidence gathered from these two very different approaches to estimating the outcome correspondence yields a much more compelling picture of the relationship between storage costs and day-0 procurement than either method used in isolation.

[Figure 2: Aggregate day-0 procurement estimates based on search in strategy profile space compared to function approximation techniques trained on D_o. The correspondence φ̂*(θ) for D = {D_o, D_s} is the interval between SearchMin and SearchMax.]

This evidence supports the initial intuition that day-0 procurement should be decreasing with storage costs. It also confirms that high levels of day-0 procurement are a rational response to the 2004 tournament setting of average storage cost, which corresponds to θ = 100. The minimum prediction for aggregate procurement at this level of storage costs given by any of the experimental methods is approximately 3. This is quite high, as it corresponds to an expected commitment of 1/3 of the total supplier capacity for the entire game. The maximum prediction is considerably higher at 4.5. In the actual 2004 competition, aggregate day-0 procurement was equivalent to 5.71 on the scale used here [9]. Our predictions underestimate this outcome to some degree, but show that any rational outcome was likely to have high day-0 procurement.

4.4.3 Extrapolating the Solution Correspondence
We have reasonably strong evidence that the outcome correspondence is decreasing. However, the ultimate goal is to be able to either set the storage cost parameter to a value that would curb day-0 procurement in equilibrium or conclude that this is not possible. To answer this question directly, suppose that we set a conservative threshold α = 2 on aggregate day-0 procurement. (Recall that the designer's objective is to incentivize aggregate day-0 procurement that is below the threshold α. Our threshold here still represents a commitment of over 20% of the suppliers' capacity for the entire game on average, so in practice we would probably want the threshold to be even lower.)
4.4.3 Extrapolating the Solution Correspondence
We have reasonably strong evidence that the outcome correspondence is decreasing. However, the ultimate goal is to be able to either set the storage cost parameter to a value that would curb day-0 procurement in equilibrium or conclude that this is not possible. To answer this question directly, suppose that we set a conservative threshold α = 2 on aggregate day-0 procurement.12 Linear extrapolation of the maximum of the outcome correspondence estimated from D yields θ = 320. The data for θ = 320 were collected in the same way as for other storage cost settings, with 10 randomly generated profiles followed by 33 iterations of BestFirstSearch. Figure 3 shows the detailed ε-bounds for all profiles in terms of their corresponding values of φ.
12 Recall that the designer's objective is to incentivize aggregate day-0 procurement that is below the threshold α. Our threshold here still represents a commitment of over 20% of the suppliers' capacity for the entire game on average, so in practice we would probably want the threshold to be even lower.
[Figure 3: Values of ˆε for profiles explored using search when θ = 320. Strategy profiles explored are presented in terms of the corresponding values of φ(a); the horizontal axis is total day-0 procurement (roughly 2.1 to 7.2) and the vertical axis is the ε-bound (0 to 5E7). The gray region corresponds to ˆφ∗(320) with δ = 2.5E6.]
The estimated set of aggregate day-0 outcomes is very close to that for θ = 200, indicating that there is little additional benefit to raising storage costs above 200. Observe that even the lower bound of our estimated set of Nash equilibria is well above the target day-0 procurement of 2. Furthermore, payoffs to agents are almost always negative at θ = 320. Consequently, increasing the costs further would be undesirable even if day-0 procurement could eventually be curbed. Since we are reasonably confident that φ∗(θ) is decreasing in θ, we also do not expect that setting θ somewhere between 200 and 320 will achieve the desired result. We conclude that it is unlikely that day-0 procurement could ever be reduced to a desirable level using any reasonable setting of the storage cost parameter. That our predictions tend to underestimate tournament outcomes reinforces this conclusion. To achieve the desired reduction in day-0 procurement requires redesigning other aspects of the mechanism.
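The extrapolation step above can be reproduced mechanically: fit a line to the upper envelope of the estimated outcome correspondence at the sampled storage-cost settings and solve for the θ at which it would reach the target α. The sketch below is purely illustrative; the sup{φ∗(θ)} values are hypothetical placeholders rather than the estimates obtained from D.

```python
# Illustrative linear extrapolation of the upper envelope of the outcome correspondence.
import numpy as np

theta = np.array([0.0, 50.0, 100.0, 150.0, 200.0])    # sampled storage-cost settings
sup_phi = np.array([6.2, 5.4, 4.5, 4.0, 3.4])          # hypothetical sup{phi*(theta)} estimates

slope, intercept = np.polyfit(theta, sup_phi, 1)        # least-squares line through the envelope
alpha = 2.0
theta_star = (alpha - intercept) / slope
print(f"extrapolated crossing of alpha={alpha} near theta = {theta_star:.0f}")
```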
4.5 Probabilistic Analysis
Our empirical analysis has produced evidence in support of the conclusion that no reasonable setting of storage cost was likely to sufficiently curb excessive day-0 procurement in TAC/SCM '04. All of this evidence has been in the form of simple interpolation and extrapolation of estimates of the Nash equilibrium correspondence. These estimates are based on simulating game instances, and are subject to sampling noise contributed by the various stochastic elements of the game. In this section, we develop and apply methods for evaluating the sensitivity of our ε-bound calculations to such stochastic effects.
Suppose that all agents have finite (and small) pure strategy sets, A. Thus, it is feasible to sample the entire payoff matrix of the game. Additionally, suppose that noise is additive with zero mean and finite variance, that is, Ui(a) = ui(a) + ˜ξi(a), where Ui(a) is the observed payoff to i when a was played, ui(a) is the actual corresponding payoff, and ˜ξi(a) is a mean-zero normal random variable. We designate the known variance of ˜ξi(a) by σ²i(a). Thus, we assume that ˜ξi(a) is normal with distribution N(0, σ²i(a)). We take ūi(a) to be the sample mean over all Ui(a) in D, and follow Chang and Huang [3] to assume that we have an improper prior over the actual payoffs ui(a) and that sampling was independent for all i and a. We also rely on their result that ui(a) | ūi(a) = ūi(a) − Zi(a)·[σi(a)/√ni(a)] are independent with posterior distributions N(ūi(a), σ²i(a)/ni(a)), where ni(a) is the number of samples taken of payoffs to i for pure profile a, and Zi(a) ∼ N(0, 1).
We now derive a generic probabilistic bound that a profile a ∈ A is an ε-Nash equilibrium. If ui(·) | ūi(·) are independent for all i ∈ I and a ∈ A, we have the following result (from this point on we omit conditioning on ūi(·) for brevity):
PROPOSITION 1.
Pr( max_{i∈I} max_{b∈Ai} [ui(b, a−i) − ui(a)] ≤ ε ) = ∏_{i∈I} ∫_R ∏_{b∈Ai\ai} Pr(ui(b, a−i) ≤ u + ε) f_{ui(a)}(u) du,   (2)
where f_{ui(a)}(u) is the pdf of N(ūi(a), σi(a)). The proofs of this and all subsequent results are in the Appendix.
The posterior distribution of the optimum mean of n samples, derived by Chang and Huang [3], is
Pr(ui(a) ≤ c) = 1 − Φ( √ni(a) (ūi(a) − c) / σi(a) ),   (3)
where a ∈ A and Φ(·) is the N(0, 1) distribution function. Combining the results (2) and (3), we obtain a probabilistic confidence bound that ε(a) ≤ γ for a given γ.
Now, we consider cases of incomplete data and use the results we have just obtained to construct an upper bound (restricted to profiles represented in data) on the distribution of sup{φ∗(θ)} and inf{φ∗(θ)} (assuming that both are attainable):
Pr{sup{φ∗(θ)} ≤ x} ≤D Pr{∃a ∈ D : φ(a) ≤ x ∧ a ∈ N(θ)} ≤ Σ_{a∈D: φ(a)≤x} Pr{a ∈ N(θ)} = Σ_{a∈D: φ(a)≤x} Pr{ε(a) = 0},
where x is a real number and ≤D indicates that the upper bound accounts only for strategies that appear in the data set D. Since the events {∃a ∈ D : φ(a) ≤ x ∧ a ∈ N(θ)} and {inf{φ∗(θ)} ≤ x} are equivalent, this also defines an upper bound on the probability of {inf{φ∗(θ)} ≤ x}. The values thus derived comprise Tables 1 and 2.
  φ∗(θ)    θ = 0       θ = 50    θ = 100
  < 2.7    0.000098    0         0.146
  < 3      0.158       0.0511    0.146
  < 3.9    0.536       0.163     1
  < 4.5    1           1         1
Table 1: Upper bounds on the distribution of inf{φ∗(θ)} restricted to D for θ ∈ {0, 50, 100} when N(θ) is a set of Nash equilibria.
  φ∗(θ)    θ = 150    θ = 200    θ = 320
  < 2.7    0          0          0.00132
  < 3      0.0363     0.141      1
  < 3.9    1          1          1
  < 4.5    1          1          1
Table 2: Upper bounds on the distribution of inf{φ∗(θ)} restricted to D for θ ∈ {150, 200, 320} when N(θ) is a set of Nash equilibria.
Tables 1 and 2 suggest that the existence of any equilibrium with φ(a) < 2.7 is unlikely for any θ that we have data for, although this judgment, as we mentioned, is only with respect to the profiles we have actually sampled. We can then accept this as another piece of evidence that the designer could not find a suitable setting of θ to achieve his objectives; indeed, the designer seems unlikely to achieve his objective even if he could persuade participants to play a desirable equilibrium! Table 1 also provides additional evidence that the agents in the 2004 TAC/SCM tournament were indeed rational in procuring large numbers of components at the beginning of the game. If we look at the third column of this table, which corresponds to θ = 100, we can gather that no profile a in our data with φ(a) < 3 is very likely to be played in equilibrium.
The bounds above provide some general evidence, but ultimately we are interested in a concrete probabilistic assessment of our conclusion with respect to the data we have sampled. Particularly, we would like to say something about what happens for the settings of θ for which we have no data.
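These per-profile confidence values can be computed numerically. One simple alternative to evaluating the integral in (2) directly is to draw payoffs from the normal posteriors and count how often no deviation is profitable by more than γ, giving a Monte Carlo estimate of Pr{ε(a) ≤ γ} (and of Pr{ε(a) = 0} at γ = 0). The sketch below is a hypothetical illustration: the posterior means, standard deviations, and the small two-agent example are made up, and the Monte Carlo approach is an assumption rather than the computation we actually performed.

```python
# Monte Carlo estimate of Pr{eps(a) <= gamma} under independent normal posteriors (illustrative).
import numpy as np

rng = np.random.default_rng(0)

def prob_eps_below(gamma, mean_a, dev_means, sigma_a, dev_sigmas, n_samples=20_000):
    """mean_a[i]: posterior mean of agent i's payoff at profile a.
    dev_means[i]: posterior means of agent i's payoffs at its deviations from a.
    sigma_a, dev_sigmas: corresponding posterior standard deviations (sigma_i(a)/sqrt(n_i(a)))."""
    below = 0
    for _ in range(n_samples):
        eps = 0.0
        for i in range(len(mean_a)):
            u_a = rng.normal(mean_a[i], sigma_a[i])
            u_dev = rng.normal(dev_means[i], dev_sigmas[i])   # vector of deviation payoffs
            eps = max(eps, float(np.max(u_dev) - u_a))
        below += eps <= gamma
    return below / n_samples

# Hypothetical posterior summaries: two agents, two candidate deviations each.
print(prob_eps_below(
    gamma=0.0,
    mean_a=[10.0, 9.5], dev_means=[[9.0, 9.8], [9.0, 9.4]],
    sigma_a=[0.5, 0.5], dev_sigmas=[[0.5, 0.5], [0.5, 0.5]],
))
```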
To derive an approximate probabilistic bound on the probability that no θ ∈ Θ could have achieved the designer's objective, let Θ = ∪_{j=1}^{J} Θj be a partition of Θ, and assume that the function sup{φ∗(θ)} satisfies the Lipschitz condition with Lipschitz constant Aj on each subset Θj.13 Since we have determined that raising the storage cost above 320 is undesirable due to secondary considerations, we restrict attention to Θ = [0, 320]. We now define each subset j to be the interval between two points for which we have produced data. Thus,
Θ = [0, 50] ∪ (50, 100] ∪ (100, 150] ∪ (150, 200] ∪ (200, 320],
with j running between 1 and 5, corresponding to the subintervals above. We will further denote each Θj by (aj, bj].14 Then, the following Proposition gives us an approximate upper bound15 on the probability that sup{φ∗(θ)} ≤ α for some θ ∈ Θ.
PROPOSITION 2.
Pr{ ∨_{θ∈Θ} sup{φ∗(θ)} ≤ α } ≤D Σ_{j=1}^{5} Σ_{y,z∈D: y+z≤cj} ( Σ_{a: φ(a)=z} Pr{ε(a) = 0} ) × ( Σ_{a: φ(a)=y} Pr{ε(a) = 0} ),
where cj = 2α + Aj(bj − aj) and ≤D indicates that the upper bound only accounts for strategies that appear in the data set D.
13 A function that satisfies the Lipschitz condition is called Lipschitz continuous.
14 The treatment for the interval [0, 50] is identical.
15 It is approximate in the sense that we only take into account strategies that are present in the data.
Due to the fact that our bounds are approximate, we cannot use them as a conclusive probabilistic assessment. Instead, we take this as another piece of evidence to complement our findings. Even if we can assume that a function that we approximate from data is Lipschitz continuous, we rarely actually know the Lipschitz constant for any subset of Θ. Thus, we are faced with the task of estimating it from data. Here, we tried three methods of doing this. The first one simply takes the highest slope that the function attains within the available data and uses this constant value for every subinterval. This produces the most conservative bound, and in many situations it is unlikely to be informative. An alternative method is to take an upper bound on slope obtained within each subinterval using the available data. This produces a much less conservative upper bound on probabilities. However, since the actual upper bound is generally greater for each subinterval, the resulting probabilistic bound may be deceiving. A final method that we tried is a compromise between the two above. Instead of taking the conservative upper bound based on data over the entire function domain Θ, we take the average of upper bounds obtained at each Θj. The bound at an interval is then taken to be the maximum of the upper bound for this interval and the average upper bound for all intervals.
The results of evaluating the expression for Pr{∨_{θ∈Θ} sup{φ∗(θ)} ≤ α} when α = 2 are presented in Table 3.
  maxj Aj    Aj         max{Aj, ave(Aj)}
  1          0.00772    0.00791
Table 3: Approximate upper bound on probability that some setting of θ ∈ [0, 320] will satisfy the designer objective with target α = 2. Different methods of approximating the upper bound on slope in each subinterval j are used.
In terms of our claims in this work, the expression gives an upper bound on the probability that some setting of θ (i.e., storage cost) in the interval [0, 320] will result in total day-0 procurement that is no greater in any equilibrium than the target specified by α and taken here to be 2.
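Given per-profile confidence values Pr{ε(a) = 0} at the sampled endpoints of each subinterval and an estimated Lipschitz constant Aj for that subinterval, the bound in Proposition 2 is a straightforward aggregation. The sketch below spells out that arithmetic; the function name, the endpoint probabilities, and the Lipschitz constants are hypothetical, and clipping the total at 1 is only a convenience for reading the output.

```python
# Aggregating Proposition 2's bound from per-endpoint confidence values (illustrative sketch).
from collections import defaultdict

def prob_interval_bound(endpoint_probs, intervals, alpha):
    """endpoint_probs[theta]: list of (phi_value, Pr{eps(a)=0}) pairs for profiles sampled at theta.
    intervals: list of (a_j, b_j, A_j) with A_j an estimated Lipschitz constant on (a_j, b_j]."""
    def by_phi(theta):
        # Collapse profiles with equal aggregate value, as in the inner sums of Proposition 2.
        acc = defaultdict(float)
        for phi, p in endpoint_probs[theta]:
            acc[phi] += p
        return acc

    total = 0.0
    for a_j, b_j, A_j in intervals:
        c_j = 2 * alpha + A_j * (b_j - a_j)
        left, right = by_phi(a_j), by_phi(b_j)
        total += sum(p_y * p_z
                     for y, p_y in right.items()
                     for z, p_z in left.items()
                     if y + z <= c_j)
    return min(total, 1.0)

# Hypothetical inputs: a few sampled profiles per endpoint with their Pr{eps(a)=0} estimates.
endpoint_probs = {
    0:   [(6.5, 0.02), (5.8, 0.10)],
    50:  [(5.5, 0.05), (5.0, 0.12)],
    100: [(4.6, 0.08), (4.2, 0.15)],
}
intervals = [(0, 50, 0.01), (50, 100, 0.01)]
print(prob_interval_bound(endpoint_probs, intervals, alpha=2.0))
```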
As we had suspected, the most conservative approach to estimating the upper bound on slope, presented in the first column of the table, provides us little information here. However, the other two estimation approaches, found in columns two and three of Table 3, suggest that we are indeed quite confident that no reasonable setting of θ ∈ [0, 320] would have done the job. Given the tremendous difficulty of the problem, this result is very strong.16 Still, we must be very cautious in drawing too heroic a conclusion based on this evidence. Certainly, we have not checked all the profiles but only a small proportion of them (infinitesimal, if we consider the entire continuous domain of θ and strategy sets). Nor can we expect ever to obtain enough evidence to make completely objective conclusions. Instead, the approach we advocate here is to collect as much evidence as is feasible given resource constraints, and make the most compelling judgment based on this evidence, if at all possible.
16 Since we did not have all the possible deviations for any profile available in the data, the true upper bounds may be even lower.
5. CONVERGENCE RESULTS
At this point, we explore abstractly whether a design parameter choice based on payoff data can be asymptotically reliable. As a matter of convenience, we will use notation un,i(a) to refer to a payoff function of player i based on an average over n i.i.d. samples from the distribution of payoffs. We also assume that un,i(a) are independent for all a ∈ A and i ∈ I. We will use the notation Γn to refer to the game [I, R, {un,i(·)}], whereas Γ will denote the underlying game, [I, R, {ui(·)}]. Similarly, we define εn(r) to be ε(r) with respect to the game Γn.
In this section, we show that εn(s) → ε(s) a.s. uniformly on the mixed strategy space for any finite game, and, furthermore, that all mixed strategy Nash equilibria in empirical games eventually become arbitrarily close to some Nash equilibrium strategies in the underlying game. We use these results to show that under certain conditions, the optimal choice of the design parameter based on empirical data converges almost surely to the actual optimum.
THEOREM 3. Suppose that |I| < ∞, |A| < ∞. Then εn(s) → ε(s) a.s. uniformly on S.
Recall that N is a set of all Nash equilibria of Γ. If we define Nn,γ = {s ∈ S : εn(s) ≤ γ}, we have the following corollary to Theorem 3:
COROLLARY 4. For every γ > 0, there is M such that ∀n ≥ M, N ⊂ Nn,γ a.s.
PROOF. Since ε(s) = 0 for every s ∈ N, we can find M large enough such that Pr{sup_{n≥M} sup_{s∈N} εn(s) < γ} = 1.
By the Corollary, for any game with a finite set of pure strategies and for any ε > 0, all Nash equilibria lie in the set of empirical ε-Nash equilibria if enough samples have been taken. As we now show, this provides some justification for our use of a set of profiles with a non-zero ε-bound as an estimate of the set of Nash equilibria. First, suppose we conclude that for a particular setting of θ, sup{ˆφ∗(θ)} ≤ α. Then, since for any fixed ε > 0, N(θ) ⊂ Nn,ε(θ) when n is large enough,
sup{φ∗(θ)} = sup_{s∈N(θ)} φ(s) ≤ sup_{s∈Nn,ε(θ)} φ(s) = sup{ˆφ∗(θ)} ≤ α
for any such n. Thus, since we defined the welfare function of the designer to be I{sup{φ∗(θ)} ≤ α} in our domain of interest, the empirical choice of θ satisfies the designer's objective, thereby maximizing his welfare function.
Alternatively, suppose we conclude that inf{ˆφ∗(θ)} > α for every θ in the domain. Then,
α < inf{ˆφ∗(θ)} = inf_{s∈Nn,ε(θ)} φ(s) ≤ inf_{s∈N(θ)} φ(s) ≤ sup_{s∈N(θ)} φ(s) = sup{φ∗(θ)},
for every θ, and we can conclude that no setting of θ will satisfy the designer's objective.
Now, we will show that when the number of samples is large enough, every Nash equilibrium of Γn is close to some Nash equilibrium of the underlying game. This result will lead us to consider convergence of optimizers based on empirical data to actual optimal mechanism parameter settings. We first note that the function ε(s) is continuous in a finite game.
LEMMA 5. Let S be a mixed strategy set defined on a finite game. Then ε : S → R is continuous.
For the exposition that follows, we need a bit of additional notation. First, let (Z, d) be a metric space, and X, Y ⊂ Z, and define the directed Hausdorff distance from X to Y to be h(X, Y) = sup_{x∈X} inf_{y∈Y} d(x, y). Observe that U ⊂ X ⇒ h(U, Y) ≤ h(X, Y). Further, define BS(x, δ) to be an open ball in S ⊂ Z with center x ∈ S and radius δ. Now, let Nn denote all Nash equilibria of the game Γn and let Nδ = ∪_{x∈N} BS(x, δ), that is, the union of open balls of radius δ with centers at Nash equilibria of Γ. Note that h(Nδ, N) = δ. We can then prove the following general result.
THEOREM 6. Suppose |I| < ∞ and |A| < ∞. Then almost surely h(Nn, N) converges to 0.
We will now show that in the special case when Θ and A are finite and each Γθ has a unique Nash equilibrium, the estimates ˆθ of the optimal designer parameter converge to an actual optimizer almost surely. Let ˆθ = arg max_{θ∈Θ} W(Nn(θ), θ), where n is the number of times each pure profile was sampled in Γθ for every θ, and let θ∗ = arg max_{θ∈Θ} W(N(θ), θ).
THEOREM 7. Suppose |N(θ)| = 1 for all θ ∈ Θ and suppose that Θ and A are finite. Let W(s, θ) be continuous at the unique s∗(θ) ∈ N(θ) for each θ ∈ Θ. Then ˆθ is a consistent estimator of θ∗ if W(N(θ), θ) is defined as a supremum, infimum, or expectation over the set of Nash equilibria. In fact, ˆθ → θ∗ a.s. in each of these cases.
The shortcoming of the above result is that, within our framework, the designer has no way of knowing or ensuring that Γθ do, indeed, have unique equilibria. However, it does lend some theoretical justification for pursuing design in this manner, and, perhaps, will serve as a guide for more general results in the future.
6. RELATED WORK
The mechanism design literature in Economics has typically explored existence of a mechanism that implements a social choice function in equilibrium [10]. Additionally, there is an extensive literature on optimal auction design [10], of which the work by Roger Myerson [11] is, perhaps, the most relevant. In much of this work, analytical results are presented with respect to specific utility functions and accounting for constraints such as incentive compatibility and individual rationality. Several related approaches to search for the best mechanism exist in the Computer Science literature. Conitzer and Sandholm [6] developed a search algorithm when all the relevant game parameters are common knowledge. When payoff functions of players are unknown, a search using simulations has been explored as an alternative. One approach in that direction, taken in [4] and [15], is to co-evolve the mechanism parameter and agent strategies, using some notion of social utility and agent payoffs as fitness criteria. An alternative to co-evolution explored in [16] was to optimize a well-defined welfare function of the designer using genetic programming.
In this work the authors used a common learning strategy for all agents and defined an outcome of a game induced by a mechanism parameter as the outcome of joint agent learning. Most recently, Phelps et al. [14] compared two mechanisms based on expected social utility with expectation taken over an empirical distribution of equilibria in games defined by heuristic strategies, as in [18]. 7. CONCLUSION In this work we spent considerable effort developing general tactics for empirical mechanism design. We defined a formal gametheoretic model of interaction between the designer and the participants of the mechanism as a two-stage game. We also described in some generality the methods for estimating a sample Nash equilibrium function when the data is extremely scarce, or a Nash equilibrium correspondence when more data is available. Our techniques are designed specifically to deal with problems in which both the mechanism parameter space and the agent strategy sets are infinite and only a relatively small data set can be acquired. A difficult design issue in the TAC/SCM game which the TAC community has been eager to address provides us with a setting to test our methods. In applying empirical game analysis to the problem at hand, we are fully aware that our results are inherently inexact. Thus, we concentrate on collecting evidence about the structure of the Nash equilibrium correspondence. In the end, we can try to provide enough evidence to either prescribe a parameter setting, or suggest that no setting is possible that will satisfy the designer. In the case of TAC/SCM, our evidence suggests quite strongly that storage cost could not have been effectively adjusted in the 2004 tournament to curb excessive day-0 procurement without detrimental effects on overall profitability. The success of our analysis in this extremely complex environment with high simulation costs makes us optimistic that our methods can provide guidance in making mechanism design decisions in other challenging domains. The theoretical results confirm some intuitions behind the empirical mechanism design methods we have introduced, and increases our confidence that our framework can be effective in estimating the best mechanism parameter choice in relatively general settings. Acknowledgments We thank Terence Kelly, Matthew Rudary, and Satinder Singh for helpful comments on earlier drafts of this work. This work was supported in part by NSF grant IIS-0205435 and the DARPA REAL strategic reasoning program. 8. REFERENCES [1] R. Arunachalam and N. M. Sadeh. The supply chain trading agent competition. Electronic Commerce Research and Applications, 4:63-81, 2005. [2] M. Benisch, A. Greenwald, V. Naroditskiy, and M. Tschantz. A stochastic programming approach to scheduling in TAC SCM. In Fifth ACM Conference on Electronic Commerce, pages 152-159, New York, 2004. [3] Y.-P. Chang and W.-T. Huang. Generalized confidence intervals for the largest value of some functions of parameters under normality. Statistica Sinica, 10:1369-1383, 2000. [4] D. Cliff. Evolution of market mechanism through a continuous space of auction-types. In Congress on Evolutionary Computation, 2002. [5] D. A. Cohn, Z. Ghahramani, and M. I. Jordan. Active learning with statistical models. Journal of Artificial Intelligence Research, 4:129-145, 1996. [6] V. Conitzer and T. Sandholm. An algorithm for automatically designing deterministic mechanisms without payments. 
In 313 Third International Joint Conference on Autonomous Agents and Multi-Agent Systems, pages 128-135, 2004. [7] D. Friedman. Evolutionary games in economics. Econometrica, 59(3):637-666, May 1991. [8] R. Keener. Statistical Theory: A Medley of Core Topics. University of Michigan Department of Statistics, 2004. [9] C. Kiekintveld, Y. Vorobeychik, and M. P. Wellman. An analysis of the 2004 supply chain management trading agent competition. In IJCAI-05 Workshop on Trading Agent Design and Analysis, Edinburgh, 2005. [10] A. Mas-Colell, M. Whinston, and J. Green. Microeconomic Theory. Oxford University Press, 1995. [11] R. B. Myerson. Optimal auction design. Mathematics of Operations Research, 6(1):58-73, February 1981. [12] S. Olafsson and J. Kim. Simulation optimization. In E. Yucesan, C.-H. Chen, J. Snowdon, and J. Charnes, editors, 2002 Winter Simulation Conference, 2002. [13] D. Pardoe and P. Stone. TacTex-03: A supply chain management agent. SIGecom Exchanges, 4(3):19-28, 2004. [14] S. Phelps, S. Parsons, and P. McBurney. Automated agents versus virtual humans: an evolutionary game theoretic comparison of two double-auction market designs. In Workshop on Agent Mediated Electronic Commerce VI, 2004. [15] S. Phelps, S. Parsons, P. McBurney, and E. Sklar. Co-evolution of auction mechanisms and trading strategies: towards a novel approach to microeconomic design. In ECOMAS 2002 Workshop, 2002. [16] S. Phelps, S. Parsons, E. Sklar, and P. McBurney. Using genetic programming to optimise pricing rules for a double-auction market. In Workshop on Agents for Electronic Commerce, 2003. [17] Y. Vorobeychik, M. P. Wellman, and S. Singh. Learning payoff functions in infinite games. In Nineteenth International Joint Conference on Artificial Intelligence, pages 977-982, 2005. [18] W. E. Walsh, R. Das, G. Tesauro, and J. O. Kephart. Analyzing complex strategic interactions in multi-agent systems. In AAAI-02 Workshop on Game Theoretic and Decision Theoretic Agents, 2002. [19] M. P. Wellman, J. J. Estelle, S. Singh, Y. Vorobeychik, C. Kiekintveld, and V. Soni. Strategic interactions in a supply chain game. Computational Intelligence, 21(1):1-26, February 2005. APPENDIX A. PROOFS A.1 Proof of Proposition 1 Pr „ max i∈I max b∈Ai\ai ui(b, a−i) − ui(a) ≤ `` = = Y i∈I Eui(a) '' Pr( max b∈Ai\ai ui(b, a−i) − ui(a) ≤ |ui(a)) = = Y i∈I Z R Y b∈Ai\ai Pr(ui(b, a−i) ≤ u + )fui(a)(u)du. A.2 Proof of Proposition 2 First, let us suppose that some function, f(x) defined on [ai, bi], satisfy Lipschitz condition on (ai, bi] with Lipschitz constant Ai. Then the following claim holds: Claim: infx∈(ai,bi] f(x) ≥ 0.5(f(ai) + f(bi) − Ai(bi − ai). To prove this claim, note that the intersection of lines at f(ai) and f(bi) with slopes −Ai and Ai respectively will determine the lower bound on the minimum of f(x) on [ai, bi] (which is a lower bound on infimum of f(x) on (ai, bj ]). The line at f(ai) is determined by f(ai) = −Aiai + cL and the line at f(bi) is determined by f(bi) = Aibi +cR. Thus, the intercepts are cL = f(ai)+Aiai and cR = f(bi) + Aibi respectively. Let x∗ be the point at which these lines intersect. Then, x∗ = − f(x∗ ) − cR A = f(x∗ ) − cL A . By substituting the expressions for cR and cL, we get the desired result. Now, subadditivity gives us Pr{ _ θ∈Θ sup{φ∗ (θ)} ≤ α} ≤ 5X j=1 Pr{ _ θ∈Θj sup{φ∗ (θ)} ≤ α}, and, by the claim, Pr{ _ θ∈Θj sup{φ∗ (θ)} ≤ α} = 1 − Pr{ inf θ∈Θj sup{φ∗ (θ)} > α} ≤ Pr{sup{φ∗ (aj)} + sup{φ∗ (bj)} ≤ 2α + Aj(bj − aj )}. 
Since we have a finite number of points in the data set for each θ, we can obtain the following expression: Pr{sup{φ∗ (aj)} + sup{φ∗ (bj)} ≤ cj } =D X y,z∈D:y+z≤cj Pr{sup{φ∗ (bj )} = y} Pr{sup{φ∗ (aj)} = z}. We can now restrict attention to deriving an upper bound on Pr{sup{φ∗ (θ)} = y} for a fixed θ. To do this, observe that Pr{sup{φ∗ (θ)} = y} ≤D Pr{ _ a∈D:φ(a)=y (a) = 0} ≤ X a∈D:φ(a)=y Pr{ (a) = 0} by subadditivity and the fact that a profile a is a Nash equilibrium if and only if (a) = 0. Putting everything together yields the desired result. A.3 Proof of Theorem 3 First, we will need the following fact: Claim: Given a function fi(x) and a set X, | maxx∈X f1(x) − maxx∈X f2(x)| ≤ maxx∈X |f1(x) − f2(x)|. To prove this claim, observe that | max x∈X f1(x) − max x∈X f2(x)| = maxx f1(x) − maxx f2(x) if maxx f1(x) ≥ maxx f2(x) maxx f2(x) − maxx f1(x) if maxx f2(x) ≥ maxx f1(x) In the first case, max x∈X f1(x) − max x∈X f2(x) ≤ max x∈X (f1(x) − f2(x)) ≤ ≤ max x∈X |f1(x) − f2(x)|. 314 Similarly, in the second case, max x∈X f2(x) − max x∈X f1(x) ≤ max x∈X (f2(x) − f1(x)) ≤ ≤ max x∈X |f2(x) − f1(x)| = max x∈X |f1(x) − f2(x)|. Thus, the claim holds. By the Strong Law of Large Numbers, un,i(a) → ui(a) a.s. for all i ∈ I, a ∈ A. That is, Pr{ lim n→∞ un,i(a) = ui(a)} = 1, or, equivalently [8], for any α > 0 and δ > 0, there is M(i, a) > 0 such that Pr{ sup n≥M(i,a) |un,i(a) − ui(a)| < δ 2|A| } ≥ 1 − α. By taking M = maxi∈I maxa∈A M(i, a), we have Pr{max i∈I max a∈A sup n≥M |un,i(a) − ui(a)| < δ 2|A| } ≥ 1 − α. Thus, by the claim, for any n ≥ M, sup n≥M | n(s) − (s)| ≤ max i∈I max ai∈Ai sup n≥M |un,i(ai, s−i) − ui(ai, s−i)|+ + sup n≥M max i∈I |un,i(s) − ui(s)| ≤ max i∈I max ai∈Ai X b∈A−i sup n≥M |un,i(ai, b) − ui(ai, b)|s−i(b)+ + max i∈I X b∈A sup n≥M |un,i(b) − ui(b)|s(b) ≤ max i∈I max ai∈Ai X b∈A−i sup n≥M |un,i(ai, b) − ui(ai, b)|+ + max i∈I X b∈A sup n≥M |un,i(b) − ui(b)| < max i∈I max ai∈Ai X b∈A−i ( δ 2|A| ) + max i∈I X b∈A ( δ 2|A| ) ≤ δ with probability at least 1 − α. Note that since s−i(a) and s(a) are bounded between 0 and 1, we were able to drop them from the expressions above to obtain a bound that will be valid independent of the particular choice of s. Furthermore, since the above result can be obtained for an arbitrary α > 0 and δ > 0, we have Pr{limn→∞ n(s) = (s)} = 1 uniformly on S. A.4 Proof of Lemma 5 We prove the result using uniform continuity of ui(s) and preservation of continuity under maximum. Claim: A function f : Rk → R defined by f(t) = Pk i=1 ziti, where zi are constants in R, is uniformly continuous in t. The claim follows because |f(t)−f(t )| = | Pk i=1 zi(ti−ti)| ≤ Pk i=1 |zi||ti − ti|. An immediate result of this for our purposes is that ui(s) is uniformly continuous in s and ui(ai, s−i) is uniformly continuous in s−i. Claim: Let f(a, b) be uniformly continuous in b ∈ B for every a ∈ A, with |A| < ∞. Then V (b) = maxa∈A f(a, b) is uniformly continuous in b. To show this, take γ > 0 and let b, b ∈ B such that b − b < δ(a) ⇒ |f(a, b) − f(a, b )| < γ. Now take δ = mina∈A δ(a). Then, whenever b − b < δ, |V (b) − V (b )| = | max a∈A f(a, b) − max a∈A f(a, b )| ≤ max a∈A |f(a, b) − f(a, b )| < γ. Now, recall that (s) = maxi[maxai∈Ai ui(ai, s−i) − ui(s)]. By the claims above, maxai∈Ai ui(ai, s−i) is uniformly continuous in s−i and ui(s) is uniformly continuous in s. Since the difference of two uniformly continuous functions is uniformly continuous, and since this continuity is preserved under maximum by our second claim, we have the desired result. 
A.5 Proof of Theorem 6 Choose δ > 0. First, we need to ascertain that the following claim holds: Claim: ¯ = mins∈S\Nδ (s) exists and ¯ > 0. Since Nδ is an open subset of compact S, it follows that S \ Nδ is compact. As we had also proved in Lemma 5 that (s) is continuous, existence follows from the Weierstrass theorem. That ¯ > 0 is clear since (s) = 0 if and only if s is a Nash equilibrium of Γ. Now, by Theorem 3, for any α > 0 there is M such that Pr{ sup n≥M sup s∈S | n(s) − (s)| < ¯} ≥ 1 − α. Consequently, for any δ > 0, Pr{ sup n≥M h(Nn, Nδ) < δ} ≥ Pr{∀n ≥ M Nn ⊂ Nδ} ≥ Pr{ sup n≥M sup s∈N (s) < ¯} ≥ Pr{ sup n≥M sup s∈S | n(s) − (s)| < ¯} ≥ 1 − α. Since this holds for an arbitrary α > 0 and δ > 0, the desired result follows. A.6 Proof of Theorem 7 Fix θ and choose δ > 0. Since W (s, θ) is continuous at s∗ (θ), given > 0 there is δ > 0 such that for every s that is within δ of s∗ (θ), |W (s , θ) − W (s∗ (θ), θ)| < . By Theorem 6, we can find M(θ) large enough such that all s ∈ Nn are within δ of s∗ (θ) for all n ≥ M(θ) with probability 1. Consequently, for any > 0 we can find M(θ) large enough such that with probability 1 we have supn≥M(θ) sups ∈Nn |W (s , θ) − W (s∗ (θ), θ)| < . Let us assume without loss of generality that there is a unique optimal choice of θ. Now, since the set Θ is finite, there is also the second-best choice of θ (if there is only one θ ∈ Θ this discussion is moot anyway): θ∗∗ = arg max Θ\θ∗ W (s∗ (θ), θ). Suppose w.l.o.g. that θ∗∗ is also unique and let ∆ = W (s∗ (θ∗ ), θ∗ ) − W (s∗ (θ∗∗ ), θ∗∗ ). Then if we let < ∆/2 and let M = maxθ∈Θ M(θ), where each M(θ) is large enough such that supn≥M(θ) sups ∈Nn |W (s , θ)− W (s∗ (θ), θ)| < a.s., the optimal choice of θ based on any empirical equilibrium will be θ∗ with probability 1. Thus, in particular, given any probability distribution over empirical equilibria, the best choice of θ will be θ∗ with probability 1 (similarly, if we take supremum or infimum of W (Nn(θ), θ) over the set of empirical equilibria in constructing the objective function). 315
Empirical Mechanism Design: Methods, with Application to a Supply-Chain Scenario ABSTRACT Our proposed methods employ learning and search techniques to estimate outcome features of interest as a function of mechanism parameter settings. We illustrate our approach with a design task from a supply-chain trading competition. Designers adopted several rule changes in order to deter particular procurement behavior, but the measures proved insufficient. Our empirical mechanism analysis models the relation between a key design parameter and outcomes, confirming the observed behavior and indicating that no reasonable parameter settings would have been likely to achieve the desired effect. More generally, we show that under certain conditions, the estimator of optimal mechanism parameter setting based on empirical data is consistent. 1. MOTIVATION We illustrate our problem with an anecdote from a supply chain research exercise: the 2003 and 2004 Trading Agent Competition (TAC) Supply Chain Management (SCM) game. TAC/SCM [1] defines a scenario where agents compete to maximize their profits as manufacturers in a supply chain. The agents procure components from the various suppliers and assemble finished goods for sale to customers, repeatedly over a simulated year .1 1Information about TAC and the SCM game, including specifications, rules, and competition results, can be found at http://www.sics.se/tac. As it happened, the specified negotiation behavior of suppliers provided a great incentive for agents to procure large quantities of components on day 0: the very beginning of the simulation. During the early rounds of the 2003 SCM competition, several agent developers discovered this, and the apparent success led to most agents performing the majority of their purchasing on day 0. Although jockeying for day-0 procurement turned out to be an interesting strategic issue in itself [19], the phenomenon detracted from other interesting problems, such as adapting production levels to varying demand (since component costs were already sunk), and dynamic management of production, sales, and inventory. Several participants noted that the predominance of day-0 procurement overshadowed other key research issues, such as factory scheduling [2] and optimizing bids for customer orders [13]. After the 2003 tournament, there was a general consensus in the TAC community that the rules should be changed to deter large day-0 procurement. The task facing game organizers can be viewed as a problem in mechanism design. The designers have certain game features under their control, and a set of objectives regarding game outcomes. Unlike most academic treatments of mechanism design, the objective is a behavioral feature (moderate day-0 procurement) rather than an allocation feature like economic efficiency, and the allowed mechanisms are restricted to those judged to require only an incremental modification of the current game. Replacing the supplychain negotiation procedures with a one-shot direct mechanism, for example, was not an option. We believe that such operational restrictions and idiosyncratic objectives are actually quite typical of practical mechanism design settings, where they are perhaps more commonly characterized as incentive engineering problems. In response to the problem, the TAC/SCM designers adopted several rule changes intended to penalize large day-0 orders. 
These included modifications to supplier pricing policies and introduction of storage costs assessed on inventories of components and finished goods. Despite the changes, day-0 procurement was very high in the early rounds of the 2004 competition. In a drastic measure, the GameMaster imposed a fivefold increase of storage costs midway through the tournament. Even this did not stem the tide, and day-0 procurement in the final rounds actually increased (by some measures) from 2003 [9]. The apparent difficulty in identifying rule modifications that effect moderation in day-0 procurement is quite striking. Although the designs were widely discussed, predictions for the effects of various proposals were supported primarily by intuitive arguments or at best by back-of-the-envelope calculations. Much of the difficulty, of course, is anticipating the agents' (and their developers') responses without essentially running a gaming exercise for this purpose. The episode caused us to consider whether new ap proaches or tools could enable more systematic analysis of design options. Standard game-theoretic and mechanism design methods are clearly relevant, although the lack of an analytic description of the game seems to be an impediment. Under the assumption that the simulator itself is the only reliable source of outcome computation, we refer to our task as empirical mechanism design. In the sequel, we develop some general methods for empirical mechanism design and apply them to the TAC/SCM redesign problem. Our analysis focuses on the setting of storage costs (taking other game modifications as fixed), since this is the most direct deterrent to early procurement adopted. Our results confirm the basic intuition that incentives for day-0 purchasing decrease as storage costs rise. We also confirm that the high day-0 procurement observed in the 2004 tournament is a rational response to the setting of storage costs used. Finally, we conclude from our data that it is very unlikely that any reasonable setting of storage costs would result in acceptable levels of day-0 procurement, so a different design approach would have been required to eliminate this problem. Overall, we contribute a formal framework and a set of methods for tackling indirect mechanism design problems in settings where only a black-box description of players' utilities is available. Our methods incorporate estimation of sets of Nash equilibria and sample Nash equilibria, used in conjuction to support general claims about the structure of the mechanism designer's utility, as well as a restricted probabilistic analysis to assess the likelihood of conclusions. We believe that most realistic problems are too complex to be amenable to exact analysis. Consequently, we advocate the approach of gathering evidence to provide indirect support of specific hypotheses. 2. PRELIMINARIES A normalform game2 is denoted by [I, {Ri}, {ui (r)}], where I refers to the set of players and m = | I | is the number of players. Ri is the set of strategies available to player i ∈ I, with R = R1 ×...× Rm representing the set ofjoint strategies of all players. We designate the set of pure strategies available to player i by Ai, and denote the joint set of pure strategies of all players by A = A1 ×...× Am. It is often convenient to refer to a strategy of player i separately from that of the remaining players. To accommodate this, we use a_i to denote the joint strategy of all players other than player i. 
Let Si be the set of all probability distributions (mixtures) over Ai and, similarly, S be the set of all distributions over A. An s ∈ S is called a mixed strategy profile. When the game is finite (i.e., A and I are both finite), the probability that a ∈ A is played under s is written s (a) = s (ai, a_i). When the distribution s is not correlated, we can simply say si (ai) when referring to the probability player i plays ai under s. Next, we define the payoff (utility) function of each player i by ui: A1 × · · · × Am--+ R, where ui (ai, a_i) indicates the payoff to player i to playing pure strategy ai when the remaining players play a_i. We can extend this definition to mixed strategies by assuming that ui are von Neumann-Morgenstern (vNM) utilities as follows: ui (s) = Es [ui], where Es is the expectation taken with respect to the probability distribution of play induced by the players' mixed strategy s. 2By employing the normal form, we model agents as playing a single action, with decisions taken simultaneously. This is appropriate for our current study, which treats strategies (agent programs) as atomic actions. We could capture finer-grained decisions about action over time in the extensive form. Although any extensive game can be recast in normal form, doing so may sacrifice compactness and blur relevant distinctions (e.g., subgame perfection). Occasionally, we write ui (x, y) to mean that x ∈ Ai or Si and y ∈ A_i or S_i depending on context. We also express the set of utility functions of all players as u (·) = {u1 (·),..., um (·)}. We define a function, e: R--+ R, interpreted as the maximum benefit any player can obtain by deviating from its strategy in the specified profile. where r belongs to some strategy set, R, of either pure or mixed strategies. Faced with a game, an agent would ideally play its best strategy given those played by the other agents. A configuration where all agents play strategies that are best responses to the others constitutes a Nash equilibrium. When r ∈ A, the above defines a pure strategy Nash equilibrium; otherwise the definition describes a mixed strategy Nash equilibrium. We often appeal to the concept of an approximate, or e-Nash equilibrium, where e is the maximum benefit to any agent for deviating from the prescribed strategy. Thus, e (r) as defined above (1) is such that profile r is an e-Nash equilibrium iff e (r) ≤ e. In this study we devote particular attention to games that exhibit symmetry with respect to payoffs, rendering agents strategically identical. 3. THE MODEL We model the strategic interactions between the designer of the mechanism and its participants as a two-stage game. The designer moves first by selecting a value, 0, from a set of allowable mechanism settings, Θ. All the participant agents observe the mechanism parameter 0 and move simultaneously thereafter. For example, the designer could be deciding between a first-price and second-price sealed-bid auction mechanisms, with the presumption that after the choice has been made, the bidders will participate with full awareness of the auction rules. Since the participants play with full knowledge of the mechanism parameter, we define a game between them in the second stage as Γθ = [I, {Ri}, {ui (r, 0)}]. We refer to Γθ as a game induced by 0. Let N (0) be the set of strategy profiles considered solutions of the game Γθ .3 Suppose that the goal of the designer is to optimize the value of some welfare function, W (r, 0), dependent on the mechanism parameter and resulting play, r. 
We define a pessimistic measure, W (ˆR, 0) = inf {W (r, 0): r ∈ ˆR}, representing the worst-case welfare of the game induced by 0, assuming that agents play some joint strategy in ˆR. Typically we care about W (N (0), 0), the worst-case outcome of playing some solution .4 On some problems we can gain considerable advantage by using an aggregation function to map the welfare outcome of a game 3We generally adopt Nash equilibrium as the solution concept, and thus take N (0) to be the set of equilibria. However, much of the methodology developed here could be employed with alternative criteria for deriving agent behavior from a game definition. 4Again, alternatives are available. For example, if one has a probability distribution over the solution set N (0), it would be natural to take the expectation of W (r, 0) instead. specified in terms of agent strategies to an equivalent welfare outcome specified in terms of a lower-dimensional summary. We overload the function symbol to apply to sets of strategy profiles: φ (ˆR) = {φ (r): r ∈ ˆR}. For convenience of exposition, we write φ ∗ (θ) to mean φ (N (θ)). Using an aggregation function yields a more compact representation of strategy profiles. For example, suppose--as in our application below--that an agent's strategy is defined by a numeric parameter. If all we care about is the total value played, we may take φ (a) = Pmi = 1 ai. If we have chosen our aggregator carefully, we may also capture structure not obvious otherwise. For example, φ ∗ (θ) could be decreasing in θ, whereas N (θ) might have a more complex structure. Given a description of the solution correspondence N (θ) (equivalently, φ ∗ (θ)), the designer faces a standard optimization problem. Alternatively, given a simulator that could produce an unbiased sample from the distribution of W (N (θ), θ) for any θ, the designer would be faced with another much appreciated problem in the literature: simulation optimization [12]. However, even for a game Γθ with known payoffs it may be computationally intractable to solve for Nash equilibria, particularly if the game has large or infinite strategy sets. Additionally, we wish to study games where the payoffs are not explicitly given, but must be determined from simulation or other experience with the game .5 Accordingly, we assume that we are given a (possibly noisy) data set of payoff realizations: Do = {(θ1, a1, U1),..., (θk, ak, Uk)}, where for every data point θi is the observed mechanism parameter setting, ai is the observed pure strategy profile of the participants, and Ui is the corresponding realization of agent payoffs. We may also have additional data generated by a (possibly noisy) simulator: Ds = {(θk +1, ak +1, Uk +1),..., (θk + l, ak + l, Uk + l)}. Let D = {Do, Ds} be the combined data set. (Either Do or Ds may be null for a particular problem.) In the remainder of this paper, we apply our modeling approach, together with several empirical game-theoretic methods, in order to answer questions regarding the design of the TAC/SCM scenario. 4. EMPIRICAL DESIGN ANALYSIS Since our data comes in the form of payoff experience and not as the value of an objective function for given settings of the control variable, we can no longer rely on the methods for optimizing functions using simulations. Indeed, a fundamental aspect of our design problem involves estimating the Nash equilibrium correspondence. 
Furthermore, we cannot rely directly on the convergence results that abound in the simulation optimization literature, and must establish probabilistic analysis methods tailored for our problem setting. 4.1 TAC/SCM Design Problem We describe our empirical design analysis methods by presenting a detailed application to the TAC/SCM scenario introduced above. Recall that during the 2004 tournament, the designers of the supplychain game chose to dramatically increase storage costs as a measure aimed at curbing day-0 procurement, to little avail. Here we systematically explore the relationship between storage costs and 5This is often the case for real games of interest, where natural language or algorithmic descriptions may substitute for a formal specification of strategy and payoff functions. the aggregate quantity of components procured on day 0 in equilibrium. In doing so, we consider several questions raised during and after the tournament. First, does increasing storage costs actually reduce day-0 procurement? Second, was the excessive day-0 procurement that was observed during the 2004 tournament rational? And third, could increasing storage costs sufficiently have reduced day-0 procurement to an "acceptable" level, and if so, what should the setting of storage costs have been? It is this third question that defines the mechanism design aspect of our analysis .6 To apply our methods, we must specify the agent strategy sets, the designer's welfare function, the mechanism parameter space, and the source of data. We restrict the agent strategies to be a multiplier on the quantity of the day-0 requests by one of the finalists, Deep Maize, in the 2004 TAC/SCM tournament. We further restrict it to the set [0,1.5], since any strategy below 0 is illegal and strategies above 1.5 are extremely aggressive (thus unlikely to provide refuting deviations beyond those available from included strategies, and certainly not part of any desirable equilibrium). All other behavior is based on the behavior of Deep Maize and is identical for all agents. This choice can provide only an estimate of the actual tournament behavior of a "typical" agent. However, we believe that the general form of the results should be robust to changes in the full agent behavior. sum of day-0 purchases. Let φ (a) = P6 We model the designer's welfare function as a threshold on the i = 1 ai be the aggregation function representing the sum of day-0 procurement of the six agents participating in a particular supply-chain game (for mixed strategy profiles s, we take expectation of φ with respect to the mixture). The designer's welfare function W (N (θ), θ) is then given by I {sup {φ ∗ (θ)} ≤ α}, where α is the maximum acceptable level of day-0 procurement and I is the indicator function. The designer selects a value θ of storage costs, expressed as an annual percentage of the baseline value of components in the inventory (charged daily), from the set Θ = R +. Since the designer's decision depends only on φ ∗ (θ), we present all of our results in terms of the value of the aggregation function. 4.2 Estimating Nash Equilibria The objective of TAC/SCM agents is to maximize profits realized over a game instance. Thus, if we fix a strategy for each agent at the beginning of the simulation and record the corresponding profits at the end, we will have obtained a data point in the form (a, U (a)). If we also have fixed the parameter θ of the simulator, the resulting data point becomes part of our data set D. 
This data set, then, contains data only in the form of pure strategies of players and their corresponding payoffs, and, consequently, in order to formulate the designer's problem as optimization, we must first determine or approximate the set of Nash equilibria of each game Γθ. Thus, we need methods for approximating Nash equilibria for infinite games. Below, we describe the two methods we used in our study. The first has been explored empirically before, whereas the second is introduced here as the method specifically designed to approximate a set of Nash equilibria. 4.2.1 PayoffFunction Approximation The first method for estimating Nash equilibria based on data uses supervised learning to approximate payoff functions of mech6We do not address whether and how other measures (e.g., constraining procurement directly) could have achieved design objectives. Our approach takes as given some set of design options, in this case defined by the storage cost parameter. In principle our methods could be applied to a different or larger design space, though with corresponding complexity growth. anism participants from a data set of game experience [17]. Once approximate payoff functions are available for all players, the Nash equilibria may be either found analytically or approximated using numerical techniques, depending on the learning model. In what follows, we estimate only a sample Nash equilibrium using this technique, although this restriction can be removed at the expense of additional computation time. One advantage of this method is that it can be applied to any data set and does not require the use of a simulator. Thus, we can apply it when Ds = ∅. If a simulator is available, we can generate additional data to build confidence in our initial estimates .7 We tried the following methods for approximating payoff functions: quadratic regression (QR), locally weighted average (LWA), and locally weighted linear regression (LWLR). We also used control variates to reduce the variance of payoff estimates, as in our previous empirical game-theoretic analysis of TAC/SCM -03 [19]. The quadratic regression model makes it possible to compute equilibria of the learned game analytically. For the other methods we applied replicator dynamics [7] to a discrete approximation of the learned game. The expected total day-0 procurement in equilibrium was taken as the estimate of an outcome. 4.2.2 Search in Strategy Profile Space When we have access to a simulator, we can also use directed search through profile space to estimate the set of Nash equilibria, which we describe here after presenting some additional notation. defined as maxa' ∈ Snb (a, D) max {ui (a, a,) (a) − ui (a, al) (a), 0}. We say that a is a candidate δ-equilibrium for δ ≥ ˆ ~. When Snb (a, ˜D) = ∅ (i.e., all strategic neighbors are represented in the data), a is confirmed as an ˆ ~ - Nash equilibrium. Our search method operates by exploring deviations from candidate equilibria. We refer to it as "BestFirstSearch", as it selects with probability one a strategy profile a' ∈ Snb (a, ˜D) that has the smallest ~ ˆ in D. Finally we define an estimator for a set of Nash equilibria. DEFINITION 6. For a set K, define Co (K) to be the convex hull of K. Let Bδ be the set of candidates at level δ. We define ˆφ ∗ (θ) = Co ({φ (a): a ∈ Bδ}) for a fixed δ to be an estimator of φ ∗ (θ). In words, the estimate of a set of equilibrium outcomes is the convex hull of all aggregated strategy profiles with ~ - bound below some fixed δ. 
This definition allows us to exploit structure arising from the aggregation function. If two profiles are close in terms of aggregation values, they may be likely to have similar ~ - bounds. In particular, if one is an equilibrium, the other may be as well. We present some theoretical support for this method of estimating the set of Nash equilibria below. Since the game we are interested in is infinite, it is necessary to terminate BestFirstSearch before exploring the entire space of strat7For example, we can use active learning techniques [5] to improve the quality of payoff function approximation. In this work, we instead concentrate on search in strategy profile space. egy profiles. We currently determine termination time in a somewhat ad hoc manner, based on observations about the current set of candidate equilibria .8 4.3 Data Generation Our data was collected by simulating TAC/SCM games on a local version of the 2004 TAC/SCM server, which has a configuration setting for the storage cost. Agent strategies in simulated games were selected from the set {0, 0.3, 0.6,..., 1.5} in order to have positive probability of generating strategic neighbors .9 A baseline data set Do was generated by sampling 10 randomly generated strategy profiles for each θ ∈ {0, 50, 100, 150, 200}. Between 5 and 10 games were run for each profile after discarding games that had various flaws .10 We used search to generate a simulated data set Ds, performing between 12 and 32 iterations of BestFirstSearch for each of the above settings of θ. Since simulation cost is extremely high (a game takes nearly 1 hour to run), we were able to run a total of 2670 games over the span of more than six months. For comparison, to get the entire description of an empirical game defined by the restricted finite joint strategy space for each value of θ ∈ {0, 50, 100, 150, 200} would have required at least 23100 games (sampling each profile 10 times). 4.4 Results 4.4.1 Analysis of the Baseline Data Set We applied the three learning methods described above to the baseline data set Do. Additionally, we generated an estimate of the Nash equilibrium correspondence, ˆφ ∗ (θ), by applying Definition 6 with δ = 2.5 E6. The results are shown in Figure 1. As we can see, the correspondence ˆφ ∗ (θ) has little predictive power based on Do, and reveals no interesting structure about the game. In contrast, all three learning methods suggest that total day-0 procurement is a decreasing function of storage costs. Figure 1: Aggregate day-0 procurement estimates based on Do. The correspondence ˆφ ∗ (θ) is the interval between "BaselineMin" and "BaselineMax". 8Generally, search is terminated once the set of candidate equilibria is small enough to draw useful conclusions about the likely range of equilibrium strategies in the game. 9Of course, we do not restrict our Nash equilibrium estimates to stay in this discrete subset of [0,1.5]. 10For example, if we detected that any agent failed during the game (failures included crashes, network connectivity problems, and other obvious anomalies), the game would be thrown out. 4.4.2 Analysis of Search Data To corroborate the initial evidence from the learning methods, we estimated ˆφ ∗ (0) (again, using S = 2.5 E6) on the data set D = {Do, Ds}, where Ds is data generated through the application of BestFirstSearch. The results of this estimate are plotted against the results of the learning methods trained on Do11 in Figure 2. 
First, we note that the addition of the search data narrows the range of potential equilibria substantially. Furthermore, the actual point predictions of the learning methods and those based on e-bounds after search are reasonably close. Combining the evidence gathered from these two very different approaches to estimating the outcome correspondence yields a much more compelling picture of the relationship between storage costs and day-0 procurement than either method used in isolation. Figure 2: Aggregate day-0 procurement estimates based on search in strategy profile space compared to function approx ˆ imation techniques trained on Do. The correspondence φ ∗ (0) for D = {Do, Ds} is the interval between "SearchMin" and "SearchMax". This evidence supports the initial intuition that day-0 procurement should be decreasing with storage costs. It also confirms that high levels of day-0 procurement are a rational response to the 2004 tournament setting of average storage cost, which corresponds to 0 = 100. The minimum prediction for aggregate procurement at this level of storage costs given by any experimental methods is approximately 3. This is quite high, as it corresponds to an expected commitment of 1/3 of the total supplier capacity for the entire game. The maximum prediction is considerably higher at 4.5. In the actual 2004 competition, aggregate day-0 procurement was equivalent to 5.71 on the scale used here [9]. Our predictions underestimate this outcome to some degree, but show that any rational outcome was likely to have high day-0 procurement. 4.4.3 Extrapolating the Solution Correspondence We have reasonably strong evidence that the outcome correspondence is decreasing. However, the ultimate goal is to be able to either set the storage cost parameter to a value that would curb day-0 procurement in equilibrium or conclude that this is not possible. To answer this question directly, suppose that we set a conservative threshold α = 2 on aggregate day-0 procurement .12 Linear 11It is unclear how meaningful the results of learning would be if Ds were added to the training data set. Indeed, the additional data may actually increase the learning variance. 12Recall that designer's objective is to incentivize aggergate day-0 procurement that is below the threshold α. Our threshold here still represents a commitment of over 20% of the suppliers' capacity for extrapolation of the maximum of the outcome correspondence estimated from D yields 0 = 320. The data for 0 = 320 were collected in the same way as for other storage cost settings, with 10 randomly generated profiles followed by 33 iterations of BestFirstSearch. Figure 3 shows the detailed e-bounds for all profiles in terms of their corresponding values of Figure 3: Values of eˆ for profiles explored using search when 0 = 320. Strategy profiles explored are presented in terms of the corresponding values of φ (a). The gray region corresponds to ˆφ ∗ (320) with S = 2.5 M. The estimated set of aggregate day-0 outcomes is very close to that for 0 = 200, indicating that there is little additional benefit to raising storage costs above 200. Observe, that even the lower bound of our estimated set of Nash equilibria is well above the target day-0 procurement of 2. Furthermore, payoffs to agents are almost always negative at 0 = 320. Consequently, increasing the costs further would be undesirable even if day-0 procurement could eventually be curbed. 
Since we are reasonably confident that φ∗(θ) is decreasing in θ, we also do not expect that setting θ somewhere between 200 and 320 will achieve the desired result. We conclude that it is unlikely that day-0 procurement could ever be reduced to a desirable level using any reasonable setting of the storage cost parameter. That our predictions tend to underestimate tournament outcomes reinforces this conclusion. To achieve the desired reduction in day-0 procurement requires redesigning other aspects of the mechanism.

4.5 Probabilistic Analysis
Our empirical analysis has produced evidence in support of the conclusion that no reasonable setting of storage cost was likely to sufficiently curb excessive day-0 procurement in TAC/SCM'04. All of this evidence has been in the form of simple interpolation and extrapolation of estimates of the Nash equilibrium correspondence. These estimates are based on simulating game instances, and are subject to sampling noise contributed by the various stochastic elements of the game. In this section, we develop and apply methods for evaluating the sensitivity of our ε-bound calculations to such stochastic effects.

Suppose that all agents have finite (and small) pure strategy sets, A. Thus, it is feasible to sample the entire payoff matrix of the game. Additionally, suppose that noise is additive with zero mean and finite variance, that is, Ui(a) = ui(a) + ξ̃i(a), where Ui(a) is the observed payoff to i when a was played, ui(a) is the actual corresponding payoff, and ξ̃i(a) is a mean-zero normal random variable. We designate the known variance of ξ̃i(a) by σi²(a). Thus, we assume that ξ̃i(a) is normal with distribution N(0, σi²(a)). We take ūi(a) to be the sample mean over all Ui(a) in D, and follow Chang and Huang [3] to assume that we have an improper prior over the actual payoffs ui(a) and that sampling was independent for all i and a. We also rely on their result that ui(a) | ūi(a) = ūi(a) − Zi(a)[σi(a)/√ni(a)] are independent with posterior distributions N(ūi(a), σi²(a)/ni(a)), where ni(a) is the number of samples taken of payoffs to i for pure profile a, and Zi(a) ∼ N(0, 1).

We now derive a generic probabilistic bound that a profile a ∈ A is an ε-Nash equilibrium. If ui(·) | ūi(·) are independent for all i ∈ I and a ∈ A, we have the following result (from this point on we omit conditioning on ūi(·) for brevity), where fui(a)(u) is the pdf of N(ūi(a), σi(a)). The proofs of this and all subsequent results are in the Appendix. The posterior distribution of the optimum mean of n samples, derived by Chang and Huang [3], is expressed in terms of a ∈ A and Φ(·), the N(0, 1) distribution function. Combining the results (2) and (3), we obtain a probabilistic confidence bound that ε(a) ≤ γ for a given γ.

Now, we consider cases of incomplete data and use the results we have just obtained to construct an upper bound (restricted to profiles represented in data) on the distribution of sup{φ∗(θ)} and inf{φ∗(θ)} (assuming that both are attainable), where x is a real number and ≤D indicates that the upper bound accounts only for strategies that appear in the data set D. Since the events {∃ a ∈ D: φ(a) ≤ x ∧ a ∈ N(θ)} and {inf{φ∗(θ)} ≤ x} are equivalent, this also defines an upper bound on the probability of {inf{φ∗(θ)} ≤ x}. The values thus derived comprise Tables 1 and 2.

Table 1: Upper bounds on the distribution of inf{φ∗(θ)} restricted to D for θ ∈ {0, 50, 100} when N(θ) is a set of Nash equilibria.

Table 2: Upper bounds on the distribution of inf{φ∗(θ)} restricted to D for θ ∈ {150, 200, 320} when N(θ) is a set of Nash equilibria.

Tables 1 and 2 suggest that the existence of any equilibrium with φ(a) < 2.7 is unlikely for any θ that we have data for, although this judgment, as we mentioned, is only with respect to the profiles we have actually sampled.
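The confidence bound Pr{ε(a) ≤ γ} is obtained analytically from the normal posteriors, but a Monte Carlo stand-in is easy to state and useful as a sanity check. The sketch below uses illustrative names (it is not the authors' implementation): it draws payoffs from N(ūi(a), σi²(a)/ni(a)) for the profile and each sampled unilateral deviation, and reports the fraction of draws whose recomputed regret is at most γ.

```python
import numpy as np

def prob_regret_below(a, mean, sigma, nsamp, strategies, gamma, draws=10000, seed=0):
    """Monte Carlo estimate of Pr{epsilon(a) <= gamma} under the posterior
    N(mean[p][i], sigma[p][i]**2 / nsamp[p][i]) for each sampled profile p and player i."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(draws):
        def draw(p, i):
            return rng.normal(mean[p][i], sigma[p][i] / np.sqrt(nsamp[p][i]))
        regret = 0.0
        for i in range(len(a)):
            base = draw(a, i)
            for alt in strategies:
                b = a[:i] + (alt,) + a[i + 1:]
                if b != a and b in mean:        # only deviations present in the data
                    regret = max(regret, draw(b, i) - base)
        hits += regret <= gamma
    return hits / draws
```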
We can then accept this as another piece of evidence that the designer could not find a suitable setting of θ to achieve his objectives--indeed, the designer seems unlikely to achieve his objective even if he could persuade participants to play a desirable equilibrium! Table 1 also provides additional evidence that the agents in the 2004 TAC/SCM tournament were indeed rational in procuring large numbers of components at the beginning of the game. If we look at the third column of this table, which corresponds to θ = 100, we can gather that no profile a in our data with φ(a) < 3 is very likely to be played in equilibrium.

The bounds above provide some general evidence, but ultimately we are interested in a concrete probabilistic assessment of our conclusion with respect to the data we have sampled. Particularly, we would like to say something about what happens for the settings of θ for which we have no data. To derive an approximate probabilistic bound on the probability that no θ ∈ Θ could have achieved the designer's objective, let ∪_{j=1}^{J} Θj be a partition of Θ, and assume that the function sup{φ∗(θ)} satisfies the Lipschitz condition with Lipschitz constant Aj on each subset Θj.13 Since we have determined that raising the storage cost above 320 is undesirable due to secondary considerations, we restrict attention to Θ = [0, 320]. We now define each subset Θj to be the interval between two points for which we have produced data; thus j runs between 1 and 5, corresponding to the subintervals above. We will further denote each Θj by (aj, bj].14 Then, the following Proposition gives us an approximate upper bound15 on the probability that sup{φ∗(θ)} ≤ α, where cj = 2α + Aj(bj − aj) and ≤D indicates that the upper bound only accounts for strategies that appear in the data set D. Due to the fact that our bounds are approximate, we cannot use them as a conclusive probabilistic assessment. Instead, we take this as another piece of evidence to complement our findings.

Even if we can assume that a function that we approximate from data is Lipschitz continuous, we rarely actually know the Lipschitz constant for any subset of Θ. Thus, we are faced with the task of estimating it from data. Here, we tried three methods of doing this. The first one simply takes the highest slope that the function attains within the available data and uses this constant value for every subinterval. This produces the most conservative bound, and in many situations it is unlikely to be informative. An alternative method is to take an upper bound on slope obtained within each subinterval using the available data. This produces a much less conservative upper bound on probabilities. However, since the actual upper bound is generally greater for each subinterval, the resulting probabilistic bound may be deceiving. A final method that we tried is a compromise between the two above. Instead of taking the conservative upper bound based on data over the entire function domain Θ, we take the average of upper bounds obtained at each Θj. The bound at an interval is then taken to be the maximum of the upper bound for this interval and the average upper bound for all intervals.
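The three ways of estimating the per-subinterval Lipschitz constants Aj can be written down directly from the sampled envelope points. The helper below is a sketch with illustrative names, returning the global, per-interval, and averaged (compromise) estimates just described.

```python
def lipschitz_estimates(thetas, sup_phi):
    """Slopes of sup phi_star(theta) between consecutive sampled points, turned into the
    three A_j estimates: (1) the single most conservative global slope used everywhere,
    (2) the per-subinterval slope, and (3) the per-subinterval slope floored at the
    average slope across all subintervals."""
    pts = sorted(zip(thetas, sup_phi))
    local = [abs(y2 - y1) / (t2 - t1) for (t1, y1), (t2, y2) in zip(pts, pts[1:])]
    global_a = [max(local)] * len(local)
    average = sum(local) / len(local)
    compromise = [max(s, average) for s in local]
    return global_a, local, compromise
```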
The results of evaluating this expression are shown in Table 3.

Table 3: Approximate upper bound on probability that some setting of θ ∈ [0, 320] will satisfy the designer objective with target α = 2. Different methods of approximating the upper bound on slope in each subinterval j are used.

In the context of this work, the expression gives an upper bound on the probability that some setting of θ (i.e., storage cost) in the interval [0, 320] will result in total day-0 procurement that is no greater in any equilibrium than the target specified by α and taken here to be 2. As we had suspected, the most conservative approach to estimating the upper bound on slope, presented in the first column of the table, provides us little information here. However, the other two estimation approaches, found in columns two and three of Table 3, suggest that we are indeed quite confident that no reasonable setting of θ ∈ [0, 320] would have done the job. Given the tremendous difficulty of the problem, this result is very strong.16 Still, we must be very cautious in drawing too heroic a conclusion based on this evidence. Certainly, we have not "checked" all the profiles but only a small proportion of them (infinitesimal, if we consider the entire continuous domain of θ and strategy sets). Nor can we expect ever to obtain enough evidence to make completely objective conclusions. Instead, the approach we advocate here is to collect as much evidence as is feasible given resource constraints, and make the most compelling judgment based on this evidence, if at all possible.

16 Since we did not have all the possible deviations for any profile available in the data, the true upper bounds may be even lower.

5. CONVERGENCE RESULTS
At this point, we explore abstractly whether a design parameter choice based on payoff data can be asymptotically reliable. As a matter of convenience, we will use the notation un,i(a) to refer to a payoff function of player i based on an average over n i.i.d. samples from the distribution of payoffs. We also assume that un,i(a) are independent for all a ∈ A and i ∈ I. We will use the notation Γn to refer to the game [I, R, {un,i(·)}], whereas Γ will denote the "underlying" game, [I, R, {ui(·)}]. Similarly, we define εn(r) to be ε(r) with respect to the game Γn. In this section, we show that εn(s) → ε(s) a.s. uniformly on the mixed strategy space for any finite game, and, furthermore, that all mixed strategy Nash equilibria in empirical games eventually become arbitrarily close to some Nash equilibrium strategies in the underlying game. We use these results to show that under certain conditions, the optimal choice of the design parameter based on empirical data converges almost surely to the actual optimum.

PROOF. Since ε(s) = 0 for every s ∈ N, we can find M large enough such that Pr{sup_{n≥M} sup_{s∈N} εn(s) < γ} = 1. By the Corollary, for any game with a finite set of pure strategies and for any ε > 0, all Nash equilibria lie in the set of empirical ε-Nash equilibria if enough samples have been taken. As we now show, this provides some justification for our use of a set of profiles with a non-zero ε-bound as an estimate of the set of Nash equilibria.
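The quantity εn(s) whose convergence is studied here is just the regret of a mixed profile evaluated on empirical mean payoffs. A small sketch for a finite game follows (illustrative, not the authors' code); `payoff` holds the empirical means ūi(a) for the full payoff matrix and `mix[i]` is player i's mixture over their pure strategies.

```python
import numpy as np
from itertools import product

def mixed_regret(payoff, mix):
    """epsilon_n(s) for a finite game: for each player, the value of their best pure
    deviation against s_{-i} minus their expected payoff under s, maximized over players.
    payoff[a][i] is player i's empirical mean payoff for pure profile a (a tuple of
    strategy indices); mix[i][k] is the probability player i plays strategy k.
    Assumes every pure profile appears in `payoff`."""
    m = len(mix)
    supports = [range(len(p)) for p in mix]
    expected = np.zeros(m)
    deviation = [np.zeros(len(p)) for p in mix]
    for a in product(*supports):
        prob = float(np.prod([mix[i][a[i]] for i in range(m)]))
        if prob == 0.0:
            continue
        for i in range(m):
            expected[i] += prob * payoff[a][i]
            for alt in supports[i]:
                b = a[:i] + (alt,) + a[i + 1:]
                deviation[i][alt] += prob * payoff[b][i]   # accumulates u_i(alt, s_{-i})
    return max(float(deviation[i].max() - expected[i]) for i in range(m))
```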
First, suppose we conclude that for a particular setting of θ, sup{φ̂∗(θ)} ≤ α. Then, since for any fixed ε > 0 we have N(θ) ⊂ Nn,ε(θ) when n is large enough, it follows that sup{φ∗(θ)} ≤ α as well for any such n. Thus, since we defined the welfare function of the designer to be I{sup{φ∗(θ)} ≤ α} in our domain of interest, the empirical choice of θ satisfies the designer's objective, thereby maximizing his welfare function. Alternatively, suppose we conclude that inf{φ̂∗(θ)} > α for every θ. Then, by the same containment, inf{φ∗(θ)} > α for every θ, and we can conclude that no setting of θ will satisfy the designer's objective.

Now, we will show that when the number of samples is large enough, every Nash equilibrium of Γn is close to some Nash equilibrium of the underlying game. This result will lead us to consider convergence of optimizers based on empirical data to actual optimal mechanism parameter settings. We first note that the function ε(s) is continuous in a finite game.

LEMMA 5. Let S be a mixed strategy set defined on a finite game. Then ε: S → R is continuous.

For the exposition that follows, we need a bit of additional notation. First, let (Z, d) be a metric space, and X, Y ⊂ Z, and define the directed Hausdorff distance from X to Y to be h(X, Y) = sup_{x∈X} inf_{y∈Y} d(x, y). Observe that U ⊂ X ⇒ h(U, Y) ≤ h(X, Y). Further, define BS(x, δ) to be an open ball in S ⊂ Z with center x ∈ S and radius δ. Now, let Nn denote all Nash equilibria of the game Γn and let Nδ = ∪_{s∈N} BS(s, δ), that is, the union of open balls of radius δ with centers at Nash equilibria of Γ. Note that h(Nδ, N) = δ. We can then prove the following general result.
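For finite sets of points the directed Hausdorff distance reduces to a max/min computation, which the following lines illustrate (a generic sketch, not tied to the game model):

```python
def directed_hausdorff(X, Y, d=lambda a, b: abs(a - b)):
    """h(X, Y) = sup over x in X of inf over y in Y of d(x, y); for finite X and Y the
    sup and inf become max and min. The default metric is for points on the real line."""
    return max(min(d(x, y) for y in Y) for x in X)

# For example, with X a set of empirical equilibria and Y the true equilibria,
# h(X, Y) <= delta says every empirical equilibrium lies within delta of some true one.
```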
We will now show that in the special case when Θ and A are finite and each Γθ has a unique Nash equilibrium, the estimates θ̂ of the optimal designer parameter converge to an actual optimizer almost surely. Let θ̂ = arg max_{θ∈Θ} W(Nn(θ), θ), where n is the number of times each pure profile was sampled in Γθ for every θ, and let θ∗ = arg max_{θ∈Θ} W(N(θ), θ). The shortcoming of the above result is that, within our framework, the designer has no way of knowing or ensuring that the Γθ do, indeed, have unique equilibria. However, it does lend some theoretical justification for pursuing design in this manner, and, perhaps, will serve as a guide for more general results in the future.

6. RELATED WORK
The mechanism design literature in Economics has typically explored existence of a mechanism that implements a social choice function in equilibrium [10]. Additionally, there is an extensive literature on optimal auction design [10], of which the work by Roger Myerson [11] is, perhaps, the most relevant. In much of this work, analytical results are presented with respect to specific utility functions and accounting for constraints such as incentive compatibility and individual rationality. Several related approaches to search for the best mechanism exist in the Computer Science literature. Conitzer and Sandholm [6] developed a search algorithm when all the relevant game parameters are common knowledge. When payoff functions of players are unknown, a search using simulations has been explored as an alternative. One approach in that direction, taken in [4] and [15], is to co-evolve the mechanism parameter and agent strategies, using some notion of social utility and agent payoffs as fitness criteria. An alternative to co-evolution explored in [16] was to optimize a well-defined welfare function of the designer using genetic programming. In this work the authors used a common learning strategy for all agents and defined an outcome of a game induced by a mechanism parameter as the outcome of joint agent learning. Most recently, Phelps et al. [14] compared two mechanisms based on expected social utility with expectation taken over an empirical distribution of equilibria in games defined by heuristic strategies, as in [18].

7. CONCLUSION
In this work we spent considerable effort developing general tactics for empirical mechanism design. We defined a formal game-theoretic model of interaction between the designer and the participants of the mechanism as a two-stage game. We also described in some generality the methods for estimating a sample Nash equilibrium function when the data is extremely scarce, or a Nash equilibrium correspondence when more data is available. Our techniques are designed specifically to deal with problems in which both the mechanism parameter space and the agent strategy sets are infinite and only a relatively small data set can be acquired. A difficult design issue in the TAC/SCM game which the TAC community has been eager to address provides us with a setting to test our methods. In applying empirical game analysis to the problem at hand, we are fully aware that our results are inherently inexact. Thus, we concentrate on collecting evidence about the structure of the Nash equilibrium correspondence. In the end, we can try to provide enough evidence to either prescribe a parameter setting, or suggest that no setting is possible that will satisfy the designer. In the case of TAC/SCM, our evidence suggests quite strongly that storage cost could not have been effectively adjusted in the 2004 tournament to curb excessive day-0 procurement without detrimental effects on overall profitability. The success of our analysis in this extremely complex environment with high simulation costs makes us optimistic that our methods can provide guidance in making mechanism design decisions in other challenging domains. The theoretical results confirm some intuitions behind the empirical mechanism design methods we have introduced, and increase our confidence that our framework can be effective in estimating the best mechanism parameter choice in relatively general settings.

Acknowledgments
We thank Terence Kelly, Matthew Rudary, and Satinder Singh for helpful comments on earlier drafts of this work. This work was supported in part by NSF grant IIS-0205435 and the DARPA REAL strategic reasoning program.

8. REFERENCES

APPENDIX
A. PROOFS
A.1 Proof of Proposition 1
A.2 Proof of Proposition 2
First, let us suppose that some function f(x) defined on [ai, bi] satisfies the Lipschitz condition on (ai, bi] with Lipschitz constant Ai. Then the following claim holds: Claim: inf_{x∈(ai,bi]} f(x) ≥ 0.5(f(ai) + f(bi) − Ai(bi − ai)). To prove this claim, note that the intersection of the lines through f(ai) and f(bi) with slopes −Ai and Ai respectively determines a lower bound on the minimum of f(x) on [ai, bi] (which is a lower bound on the infimum of f(x) on (ai, bi]). The line through f(ai) is determined by f(ai) = −Ai·ai + cL and the line through f(bi) is determined by f(bi) = Ai·bi + cR. Thus, the intercepts are cL = f(ai) + Ai·ai and cR = f(bi) − Ai·bi respectively. Let x∗ be the point at which these lines intersect; evaluating either line at x∗ gives the claimed bound. Since we have a finite number of points in the data set for each θ, we can obtain the desired expression by subadditivity and the fact that a profile a is a Nash equilibrium if and only if ε(a) = 0. Putting everything together yields the desired result.
A.3 Proof of Theorem 3
First, we will need the following fact. Claim: Given functions f1(x), f2(x) and a set X, |max_{x∈X} f1(x) − max_{x∈X} f2(x)| ≤ max_{x∈X} |f1(x) − f2(x)|. To prove this claim, observe that |max_{x∈X} f1(x) − max_{x∈X} f2(x)| equals max_x f1(x) − max_x f2(x) if max_x f1(x) > max_x f2(x), and max_x f2(x) − max_x f1(x) if max_x f2(x) > max_x f1(x). In the first case the difference is bounded by the maximum pointwise difference, and the second case is symmetric. Thus, the claim holds. By the Strong Law of Large Numbers, un,i(a) → ui(a) a.s. for every i ∈ I and a ∈ A, or, equivalently [8], for any α > 0 and δ > 0 there is M(i, a) > 0 such that Pr{sup_{n≥M(i,a)} |un,i(a) − ui(a)| > δ} < α. Thus, by the claim, for any n ≥ M the analogous bound holds when the payoffs are replaced by the maxima above. Now, recall that ε(s) = max_i [max_{ai∈Ai} ui(ai, s−i) − ui(s)]. By the claims above, max_{ai∈Ai} ui(ai, s−i) is uniformly continuous in s−i and ui(s) is uniformly continuous in s. Since the difference of two uniformly continuous functions is uniformly continuous, and since this continuity is preserved under maximum by our second claim, we have the desired result.

A.5 Proof of Theorem 6
Choose δ > 0. First, we need to ascertain that the following claim holds: Claim: ε̄ = min_{s∈S\Nδ} ε(s) exists and ε̄ > 0. Since Nδ is an open subset of compact S, it follows that S \ Nδ is compact. As we had also proved in Lemma 5 that ε(s) is continuous, existence follows from the Weierstrass theorem. That ε̄ > 0 is clear since ε(s) = 0 if and only if s is a Nash equilibrium of Γ. Now, by Theorem 3, for any α > 0 there is M such that the required bound holds with probability at least 1 − α. Note that since s−i(a) and s(a) are bounded between 0 and 1, we were able to drop them from the expressions above to obtain a bound that will be valid independent of the particular choice of s. Furthermore, since the above result can be obtained for an arbitrary α > 0 and δ > 0, we have Pr{lim_{n→∞} εn(s) = ε(s)} = 1 uniformly on S.

A.4 Proof of Lemma 5
We prove the result using uniform continuity of ui(s) and preservation of continuity under maximum. Claim: A function f: R^k → R defined by f(t) = Σ_{i=1}^k zi·ti, where the zi are constants in R, is uniformly continuous in t. The claim follows because |f(t) − f(t′)| = |Σ_{i=1}^k zi(ti − t′i)| ≤ Σ_{i=1}^k |zi| |ti − t′i|. An immediate result of this for our purposes is that ui(s) is uniformly continuous in s and ui(ai, s−i) is uniformly continuous in s−i. Claim: Let f(a, b) be uniformly continuous in b ∈ B for every a ∈ A, with |A| < ∞. Then V(b) = max_{a∈A} f(a, b) is uniformly continuous in b. To show this, take γ > 0 and let b, b′ ∈ B be such that ‖b − b′‖ < δ(a) ⇒ |f(a, b) − f(a, b′)| < γ. Now take δ = min_{a∈A} δ(a). Since this holds for an arbitrary γ > 0, the desired result follows.

A.6 Proof of Theorem 7
Fix θ and choose δ > 0. Since W(s, θ) is continuous at s∗(θ), given ε > 0 there is δ > 0 such that for every s′ that is within δ of s∗(θ), |W(s′, θ) − W(s∗(θ), θ)| < ε. By Theorem 6, we can find M(θ) large enough such that all s′ ∈ Nn are within δ of s∗(θ) for all n ≥ M(θ) with probability 1. Consequently, for any ε > 0 we can find M(θ) large enough such that with probability 1 we have sup_{n≥M(θ)} sup_{s′∈Nn} |W(s′, θ) − W(s∗(θ), θ)| < ε. Let us assume without loss of generality that there is a unique optimal choice of θ. Now, since the set Θ is finite, there is also the "second-best" choice of θ (if there is only one θ ∈ Θ this discussion is moot anyway); let ∆ denote the gap in the designer's welfare between the best and second-best choices. Then if we let ε < ∆/2 and let M = max_{θ∈Θ} M(θ), where each M(θ) is large enough such that sup_{n≥M(θ)} sup_{s′∈Nn} |W(s′, θ) − W(s∗(θ), θ)| < ε a.s., the optimal choice of θ based on any empirical equilibrium will be θ∗ with probability 1.
Thus, in particular, given any probability distribution over empirical equilibria, the best choice of θ will be θ∗ with probability 1 (similarly, if we take the supremum or infimum of W(Nn(θ), θ) over the set of empirical equilibria in constructing the objective function).
Empirical Mechanism Design: Methods, with Application to a Supply-Chain Scenario ABSTRACT Our proposed methods employ learning and search techniques to estimate outcome features of interest as a function of mechanism parameter settings. We illustrate our approach with a design task from a supply-chain trading competition. Designers adopted several rule changes in order to deter particular procurement behavior, but the measures proved insufficient. Our empirical mechanism analysis models the relation between a key design parameter and outcomes, confirming the observed behavior and indicating that no reasonable parameter settings would have been likely to achieve the desired effect. More generally, we show that under certain conditions, the estimator of optimal mechanism parameter setting based on empirical data is consistent. 1. MOTIVATION We illustrate our problem with an anecdote from a supply chain research exercise: the 2003 and 2004 Trading Agent Competition (TAC) Supply Chain Management (SCM) game. TAC/SCM [1] defines a scenario where agents compete to maximize their profits as manufacturers in a supply chain. The agents procure components from the various suppliers and assemble finished goods for sale to customers, repeatedly over a simulated year .1 1Information about TAC and the SCM game, including specifications, rules, and competition results, can be found at http://www.sics.se/tac. As it happened, the specified negotiation behavior of suppliers provided a great incentive for agents to procure large quantities of components on day 0: the very beginning of the simulation. During the early rounds of the 2003 SCM competition, several agent developers discovered this, and the apparent success led to most agents performing the majority of their purchasing on day 0. Although jockeying for day-0 procurement turned out to be an interesting strategic issue in itself [19], the phenomenon detracted from other interesting problems, such as adapting production levels to varying demand (since component costs were already sunk), and dynamic management of production, sales, and inventory. Several participants noted that the predominance of day-0 procurement overshadowed other key research issues, such as factory scheduling [2] and optimizing bids for customer orders [13]. After the 2003 tournament, there was a general consensus in the TAC community that the rules should be changed to deter large day-0 procurement. The task facing game organizers can be viewed as a problem in mechanism design. The designers have certain game features under their control, and a set of objectives regarding game outcomes. Unlike most academic treatments of mechanism design, the objective is a behavioral feature (moderate day-0 procurement) rather than an allocation feature like economic efficiency, and the allowed mechanisms are restricted to those judged to require only an incremental modification of the current game. Replacing the supplychain negotiation procedures with a one-shot direct mechanism, for example, was not an option. We believe that such operational restrictions and idiosyncratic objectives are actually quite typical of practical mechanism design settings, where they are perhaps more commonly characterized as incentive engineering problems. In response to the problem, the TAC/SCM designers adopted several rule changes intended to penalize large day-0 orders. 
These included modifications to supplier pricing policies and introduction of storage costs assessed on inventories of components and finished goods. Despite the changes, day-0 procurement was very high in the early rounds of the 2004 competition. In a drastic measure, the GameMaster imposed a fivefold increase of storage costs midway through the tournament. Even this did not stem the tide, and day-0 procurement in the final rounds actually increased (by some measures) from 2003 [9]. The apparent difficulty in identifying rule modifications that effect moderation in day-0 procurement is quite striking. Although the designs were widely discussed, predictions for the effects of various proposals were supported primarily by intuitive arguments or at best by back-of-the-envelope calculations. Much of the difficulty, of course, is anticipating the agents' (and their developers') responses without essentially running a gaming exercise for this purpose. The episode caused us to consider whether new ap proaches or tools could enable more systematic analysis of design options. Standard game-theoretic and mechanism design methods are clearly relevant, although the lack of an analytic description of the game seems to be an impediment. Under the assumption that the simulator itself is the only reliable source of outcome computation, we refer to our task as empirical mechanism design. In the sequel, we develop some general methods for empirical mechanism design and apply them to the TAC/SCM redesign problem. Our analysis focuses on the setting of storage costs (taking other game modifications as fixed), since this is the most direct deterrent to early procurement adopted. Our results confirm the basic intuition that incentives for day-0 purchasing decrease as storage costs rise. We also confirm that the high day-0 procurement observed in the 2004 tournament is a rational response to the setting of storage costs used. Finally, we conclude from our data that it is very unlikely that any reasonable setting of storage costs would result in acceptable levels of day-0 procurement, so a different design approach would have been required to eliminate this problem. Overall, we contribute a formal framework and a set of methods for tackling indirect mechanism design problems in settings where only a black-box description of players' utilities is available. Our methods incorporate estimation of sets of Nash equilibria and sample Nash equilibria, used in conjuction to support general claims about the structure of the mechanism designer's utility, as well as a restricted probabilistic analysis to assess the likelihood of conclusions. We believe that most realistic problems are too complex to be amenable to exact analysis. Consequently, we advocate the approach of gathering evidence to provide indirect support of specific hypotheses. 2. PRELIMINARIES A normalform game2 is denoted by [I, {Ri}, {ui (r)}], where I refers to the set of players and m = | I | is the number of players. Ri is the set of strategies available to player i ∈ I, with R = R1 ×...× Rm representing the set ofjoint strategies of all players. We designate the set of pure strategies available to player i by Ai, and denote the joint set of pure strategies of all players by A = A1 ×...× Am. It is often convenient to refer to a strategy of player i separately from that of the remaining players. To accommodate this, we use a_i to denote the joint strategy of all players other than player i. 
Let Si be the set of all probability distributions (mixtures) over Ai and, similarly, S be the set of all distributions over A. An s ∈ S is called a mixed strategy profile. When the game is finite (i.e., A and I are both finite), the probability that a ∈ A is played under s is written s (a) = s (ai, a_i). When the distribution s is not correlated, we can simply say si (ai) when referring to the probability player i plays ai under s. Next, we define the payoff (utility) function of each player i by ui: A1 × · · · × Am--+ R, where ui (ai, a_i) indicates the payoff to player i to playing pure strategy ai when the remaining players play a_i. We can extend this definition to mixed strategies by assuming that ui are von Neumann-Morgenstern (vNM) utilities as follows: ui (s) = Es [ui], where Es is the expectation taken with respect to the probability distribution of play induced by the players' mixed strategy s. 2By employing the normal form, we model agents as playing a single action, with decisions taken simultaneously. This is appropriate for our current study, which treats strategies (agent programs) as atomic actions. We could capture finer-grained decisions about action over time in the extensive form. Although any extensive game can be recast in normal form, doing so may sacrifice compactness and blur relevant distinctions (e.g., subgame perfection). Occasionally, we write ui (x, y) to mean that x ∈ Ai or Si and y ∈ A_i or S_i depending on context. We also express the set of utility functions of all players as u (·) = {u1 (·),..., um (·)}. We define a function, e: R--+ R, interpreted as the maximum benefit any player can obtain by deviating from its strategy in the specified profile. where r belongs to some strategy set, R, of either pure or mixed strategies. Faced with a game, an agent would ideally play its best strategy given those played by the other agents. A configuration where all agents play strategies that are best responses to the others constitutes a Nash equilibrium. When r ∈ A, the above defines a pure strategy Nash equilibrium; otherwise the definition describes a mixed strategy Nash equilibrium. We often appeal to the concept of an approximate, or e-Nash equilibrium, where e is the maximum benefit to any agent for deviating from the prescribed strategy. Thus, e (r) as defined above (1) is such that profile r is an e-Nash equilibrium iff e (r) ≤ e. In this study we devote particular attention to games that exhibit symmetry with respect to payoffs, rendering agents strategically identical. 3. THE MODEL 4. EMPIRICAL DESIGN ANALYSIS 4.1 TAC/SCM Design Problem 4.2 Estimating Nash Equilibria 4.2.1 PayoffFunction Approximation 4.2.2 Search in Strategy Profile Space 4.3 Data Generation 4.4 Results 4.4.1 Analysis of the Baseline Data Set 4.4.2 Analysis of Search Data 4.4.3 Extrapolating the Solution Correspondence 4.5 Probabilistic Analysis 5. CONVERGENCE RESULTS 6. RELATED WORK The mechanism design literature in Economics has typically explored existence of a mechanism that implements a social choice function in equilibrium [10]. Additionally, there is an extensive literature on optimal auction design [10], of which the work by Roger Myerson [11] is, perhaps, the most relevant. In much of this work, analytical results are presented with respect to specific utility functions and accounting for constraints such as incentive compatibility and individual rationality. Several related approaches to search for the best mechanism exist in the Computer Science literature. 
Conitzer and Sandholm [6] developed a search algorithm when all the relevant game parameters are common knowledge. When payoff functions of players are unknown, a search using simulations has been explored as an alternative. One approach in that direction, taken in [4] and [15], is to co-evolve the mechanism parameter and agent strategies, using some notion of social utility and agent payoffs as fitness criteria. An alternative to co-evolution explored in [16] was to optimize a well-defined welfare function of the designer using genetic programming. In this work the authors used a common learning strategy for all agents and defined an outcome of a game induced by a mechanism parameter as the outcome of joint agent learning. Most recently, Phelps et al. [14] compared two mechanisms based on expected social utility with expectation taken over an empirical distribution of equilibria in games defined by heuristic strategies, as in [18]. 7. CONCLUSION Acknowledgments 8. REFERENCES APPENDIX A. PROOFS A. 1 Proof of Proposition 1 A. 2 Proof of Proposition 2 A. 3 Proof of Theorem 3 A. 5 Proof of Theorem 6 A. 4 Proof of Lemma 5 A. 6 Proof of Theorem 7
Empirical Mechanism Design: Methods, with Application to a Supply-Chain Scenario ABSTRACT Our proposed methods employ learning and search techniques to estimate outcome features of interest as a function of mechanism parameter settings. We illustrate our approach with a design task from a supply-chain trading competition. Designers adopted several rule changes in order to deter particular procurement behavior, but the measures proved insufficient. Our empirical mechanism analysis models the relation between a key design parameter and outcomes, confirming the observed behavior and indicating that no reasonable parameter settings would have been likely to achieve the desired effect. More generally, we show that under certain conditions, the estimator of optimal mechanism parameter setting based on empirical data is consistent. 1. MOTIVATION We illustrate our problem with an anecdote from a supply chain research exercise: the 2003 and 2004 Trading Agent Competition (TAC) Supply Chain Management (SCM) game. TAC/SCM [1] defines a scenario where agents compete to maximize their profits as manufacturers in a supply chain. As it happened, the specified negotiation behavior of suppliers provided a great incentive for agents to procure large quantities of components on day 0: the very beginning of the simulation. During the early rounds of the 2003 SCM competition, several agent developers discovered this, and the apparent success led to most agents performing the majority of their purchasing on day 0. Several participants noted that the predominance of day-0 procurement overshadowed other key research issues, such as factory scheduling [2] and optimizing bids for customer orders [13]. After the 2003 tournament, there was a general consensus in the TAC community that the rules should be changed to deter large day-0 procurement. The task facing game organizers can be viewed as a problem in mechanism design. The designers have certain game features under their control, and a set of objectives regarding game outcomes. Unlike most academic treatments of mechanism design, the objective is a behavioral feature (moderate day-0 procurement) rather than an allocation feature like economic efficiency, and the allowed mechanisms are restricted to those judged to require only an incremental modification of the current game. Replacing the supplychain negotiation procedures with a one-shot direct mechanism, for example, was not an option. We believe that such operational restrictions and idiosyncratic objectives are actually quite typical of practical mechanism design settings, where they are perhaps more commonly characterized as incentive engineering problems. In response to the problem, the TAC/SCM designers adopted several rule changes intended to penalize large day-0 orders. These included modifications to supplier pricing policies and introduction of storage costs assessed on inventories of components and finished goods. Despite the changes, day-0 procurement was very high in the early rounds of the 2004 competition. In a drastic measure, the GameMaster imposed a fivefold increase of storage costs midway through the tournament. Even this did not stem the tide, and day-0 procurement in the final rounds actually increased (by some measures) from 2003 [9]. The apparent difficulty in identifying rule modifications that effect moderation in day-0 procurement is quite striking. 
Although the designs were widely discussed, predictions for the effects of various proposals were supported primarily by intuitive arguments or at best by back-of-the-envelope calculations. proaches or tools could enable more systematic analysis of design options. Standard game-theoretic and mechanism design methods are clearly relevant, although the lack of an analytic description of the game seems to be an impediment. Under the assumption that the simulator itself is the only reliable source of outcome computation, we refer to our task as empirical mechanism design. In the sequel, we develop some general methods for empirical mechanism design and apply them to the TAC/SCM redesign problem. Our analysis focuses on the setting of storage costs (taking other game modifications as fixed), since this is the most direct deterrent to early procurement adopted. Our results confirm the basic intuition that incentives for day-0 purchasing decrease as storage costs rise. We also confirm that the high day-0 procurement observed in the 2004 tournament is a rational response to the setting of storage costs used. Finally, we conclude from our data that it is very unlikely that any reasonable setting of storage costs would result in acceptable levels of day-0 procurement, so a different design approach would have been required to eliminate this problem. Overall, we contribute a formal framework and a set of methods for tackling indirect mechanism design problems in settings where only a black-box description of players' utilities is available. We believe that most realistic problems are too complex to be amenable to exact analysis. Consequently, we advocate the approach of gathering evidence to provide indirect support of specific hypotheses. 2. PRELIMINARIES Ri is the set of strategies available to player i ∈ I, with R = R1 ×...× Rm representing the set ofjoint strategies of all players. We designate the set of pure strategies available to player i by Ai, and denote the joint set of pure strategies of all players by A = A1 ×...× Am. It is often convenient to refer to a strategy of player i separately from that of the remaining players. To accommodate this, we use a_i to denote the joint strategy of all players other than player i. Let Si be the set of all probability distributions (mixtures) over Ai and, similarly, S be the set of all distributions over A. An s ∈ S is called a mixed strategy profile. When the game is finite (i.e., A and I are both finite), the probability that a ∈ A is played under s is written s (a) = s (ai, a_i). This is appropriate for our current study, which treats strategies (agent programs) as atomic actions. We could capture finer-grained decisions about action over time in the extensive form. Although any extensive game can be recast in normal form, doing so may sacrifice compactness and blur relevant distinctions (e.g., subgame perfection). We also express the set of utility functions of all players as u (·) = {u1 (·),..., um (·)}. We define a function, e: R--+ R, interpreted as the maximum benefit any player can obtain by deviating from its strategy in the specified profile. where r belongs to some strategy set, R, of either pure or mixed strategies. Faced with a game, an agent would ideally play its best strategy given those played by the other agents. A configuration where all agents play strategies that are best responses to the others constitutes a Nash equilibrium. 
When r ∈ A, the above defines a pure strategy Nash equilibrium; otherwise the definition describes a mixed strategy Nash equilibrium. We often appeal to the concept of an approximate, or e-Nash equilibrium, where e is the maximum benefit to any agent for deviating from the prescribed strategy. In this study we devote particular attention to games that exhibit symmetry with respect to payoffs, rendering agents strategically identical. 6. RELATED WORK The mechanism design literature in Economics has typically explored existence of a mechanism that implements a social choice function in equilibrium [10]. Additionally, there is an extensive literature on optimal auction design [10], of which the work by Roger Myerson [11] is, perhaps, the most relevant. In much of this work, analytical results are presented with respect to specific utility functions and accounting for constraints such as incentive compatibility and individual rationality. Several related approaches to search for the best mechanism exist in the Computer Science literature. Conitzer and Sandholm [6] developed a search algorithm when all the relevant game parameters are common knowledge. When payoff functions of players are unknown, a search using simulations has been explored as an alternative. One approach in that direction, taken in [4] and [15], is to co-evolve the mechanism parameter and agent strategies, using some notion of social utility and agent payoffs as fitness criteria. An alternative to co-evolution explored in [16] was to optimize a well-defined welfare function of the designer using genetic programming. In this work the authors used a common learning strategy for all agents and defined an outcome of a game induced by a mechanism parameter as the outcome of joint agent learning. Most recently, Phelps et al. [14] compared two mechanisms based on expected social utility with expectation taken over an empirical distribution of equilibria in games defined by heuristic strategies, as in [18].
C-61
Authority Assignment in Distributed Multi-Player Proxy-based Games
We present a proxy-based gaming architecture and authority assignment within this architecture that can lead to better game playing experience in Massively Multi-player Online games. The proposed game architecture consists of distributed game clients that connect to game proxies (referred to as communication proxies) which forward game related messages from the clients to one or more game servers. Unlike proxy-based architectures that have been proposed in the literature where the proxies replicate all of the game state, the communication proxies in the proposed architecture support clients that are in proximity to it in the physical network and maintain information about selected portions of the game space that are relevant only to the clients that they support. Using this architecture, we propose an authority assignment mechanism that divides the authority for deciding the outcome of different actions/events that occur within the game between client and servers on a per action/event basis. We show that such division of authority leads to a smoother game playing experience by implementing this mechanism in a massively multi-player online game called RPGQuest. In addition, we argue that cheat detection techniques can be easily implemented at the communication proxies if they are made aware of the game-play mechanics.
[ "author assign", "author", "commun proxi", "multi-player onlin game", "latenc compens", "artifici latenc", "proxi-base game architectur", "central-server architectur", "first person shooter", "role plai game", "client authorit approach", "cheat-proof mechan", "mmog", "distribut multi-player game" ]
[ "P", "P", "P", "M", "U", "U", "M", "M", "U", "M", "M", "M", "U", "M" ]
Authority Assignment in Distributed Multi-Player Proxy-based Games Sudhir Aggarwal Justin Christofoli Department of Computer Science Florida State University, Tallahassee, FL {sudhir, christof}@cs. fsu.edu Sarit Mukherjee Sampath Rangarajan Center for Networking Research Bell Laboratories, Holmdel, NJ {sarit, sampath}@bell-labs.com ABSTRACT We present a proxy-based gaming architecture and authority assignment within this architecture that can lead to better game playing experience in Massively Multi-player Online games. The proposed game architecture consists of distributed game clients that connect to game proxies (referred to as communication proxies) which forward game related messages from the clients to one or more game servers. Unlike proxy-based architectures that have been proposed in the literature where the proxies replicate all of the game state, the communication proxies in the proposed architecture support clients that are in proximity to it in the physical network and maintain information about selected portions of the game space that are relevant only to the clients that they support. Using this architecture, we propose an authority assignment mechanism that divides the authority for deciding the outcome of different actions/events that occur within the game between client and servers on a per action/event basis. We show that such division of authority leads to a smoother game playing experience by implementing this mechanism in a massively multi-player online game called RPGQuest. In addition, we argue that cheat detection techniques can be easily implemented at the communication proxies if they are made aware of the game-play mechanics. Categories and Subject Descriptors C.2.4 [Computer-Communication networks]: Distributed Systems-Distributed Applications General Terms Games, Performance 1. INTRODUCTION In Massively Multi-player On-line Games (MMOG), game clients who are positioned across the Internet connect to a game server to interact with other clients in order to be part of the game. In current architectures, these interactions are direct in that the game clients and the servers exchange game messages with each other. In addition, current MMOGs delegate all authority to the game server to make decisions about the results pertaining to the actions that game clients take and also to decide upon the result of other game related events. Such centralized authority has been implemented with the claim that this improves the security and consistency required in a gaming environment. A number of works have shown the effect of network latency on distributed multi-player games [1, 2, 3, 4]. It has been shown that network latency has real impact on practical game playing experience [3, 5]. Some types of games can function quite well even in the presence of large delays. For example, [4] shows that in a modern RPG called Everquest 2, the breakpoint of the game when adding artificial latency was 1250ms. This is accounted to the fact that the combat system used in Everquest 2 is queueing based and has very low interaction. For example, a player queues up 4 or 5 spells they wish to cast, each of these spells take 1-2 seconds to actually perform, giving the server plenty of time to validate these actions. But there are other games such as FPS games that break even in the presence of moderate network latencies [3, 5]. 
Latency compensation techniques have been proposed to alleviate the effect of latency [1, 6, 7] but it is obvious that if MMOGs are to increase in interactivity and speed, more architectures will have to be developed that address responsiveness, accuracy and consistency of the gamestate. In this paper, we propose two important features that would make game playing within MMOGs more responsive for movement and scalable. First, we propose that centralized server-based architectures be made hierarchical through the introduction of communication proxies so that game updates made by clients that are time sensitive, such as movement, can be more efficiently distributed to other players within their game-space. Second, we propose that assignment of authority in terms of who makes the decision on client actions such as object pickups and hits, and collisions between players, be distributed between the clients and the servers in order to distribute the computing load away from the central server. In order to move towards more complex real-time networked games, we believe that definitions of authority must be refined. Most currently implemented MMOGs have game servers that have almost absolute authority. We argue that there is no single consistent view of the virtual game space that can be maintained on any one component within a network that has significant latency, such as the one that many MMOG players would experience. We believe that in most cases, the client with the most accurate view of an entity is the best suited to make decisions for that entity when the causality of that action will not immediately affect any other players. In this paper we define what it means to have authority within the context of events and objects in a virtual game space. We then show the benefits of delegating authority for different actions and game events between the clients and server. In our model, the game space consists of game clients (representing the players) and objects that they control. We divide the client actions and game events (we will collectively refer to these as events) such as collisions, hits etc. into three different categories, a) events for which the game client has absolute authority, b) events for which the game server has absolute authority, and c) events for which the authority changes dynamically from client to the server and vice-versa. Depending on who has the authority, that entity will make decisions on the events that happen within a game space. We propose that authority for all decisions that pertain to a single player or object in the game that neither affects the other players or objects, nor are affected by the actions of other players be delegated to that player``s game client. These type of decisions would include collision detection with static objects within the virtual game space and hit detection with linear path bullets (whose trajectory is fixed and does not change with time) fired by other players. Authority for decisions that could be affected by two or more players should be delegated to the impartial central server, in some cases, to ensure that no conflicts occur and in other cases can be delegated to the clients responsible for those players. For example, collision detection of two players that collide with each other and hit detection of non-linear bullets (that changes trajectory with time) should be delegated to the server. 
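The three-way split of authority sketched above can be captured in a small dispatch table. The code below is only an illustration of the idea (the event names, enum, and contention test are our own placeholders, not part of RPGQuest or any engine API); dynamic events fall back to the server unless the client can verify that no other player is close enough to contend.

```python
import math
from enum import Enum, auto

class Authority(Enum):
    CLIENT = auto()    # resolved locally by the owning game client
    SERVER = auto()    # resolved by the impartial game server
    DYNAMIC = auto()   # authority migrates between client and server at run time

EVENT_AUTHORITY = {
    "collision_with_static_object": Authority.CLIENT,
    "linear_bullet_hit":            Authority.CLIENT,
    "player_player_collision":      Authority.SERVER,
    "nonlinear_bullet_hit":         Authority.SERVER,
    "item_pickup":                  Authority.DYNAMIC,
}

def no_contention(item_pos, others, radius):
    """True if no other player is within `radius` of the item (2-D positions)."""
    return all(math.hypot(p[0] - item_pos[0], p[1] - item_pos[1]) > radius
               for p in others)

def client_may_decide(event, item_pos=None, others=(), radius=0.0):
    """Decide on the client if the event class allows it; DYNAMIC events are decided
    locally only when the contention check passes, and the server can always verify
    the client's claim afterwards."""
    authority = EVENT_AUTHORITY.get(event, Authority.SERVER)
    if authority is Authority.CLIENT:
        return True
    if authority is Authority.DYNAMIC:
        return no_contention(item_pos, others, radius)
    return False
```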
Decision on events such as item pickup (for example, picking up items in a game to accumulate points) should be delegated to a server if there are multiple players within close proximity of an item and any one of the players could succeed in picking the item; for item pick-up contention where the client realizes that no other player, except its own player, is within a certain range of the item, the client could be delegated the responsibility to claim the item. The client``s decision can always be accurately verified by the server. In summary, we argue that while current authority models that only delegate responsibility to the server to make authoritative decisions on events is more secure than allowing the clients to make the decisions, these types of models add undesirable delays to events that could very well be decided by the clients without any inconsistency being introduced into the game. As networked games become more complex, our architecture will become more applicable. This architecture is applicable for massively multiplayer games where the speed and accuracy of game-play are a major concern while consistency between player game-states is still desired. We propose that a mixed authority assignment mechanism such as the one outlined above be implemented in high interaction MMOGs. Our paper has the following contributions. First we propose an architecture that uses communication proxies to enable clients to connect to the game server. A communication proxy in the proposed architecture maintains information only about portions of the game space that are relevant to clients connected to it and is able to process the movement information of objects and players within these portions. In addition, it is capable of multicasting this information only to a relevant subset of other communication proxies. These functionalities of a communication proxy leads to a decrease in latency of event update and subsequently, better game playing experience. Second, we propose a mixed authority assignment mechanism as described above that improves game playing experience. Third, we implement the proposed mixed authority assignment mechanism within a MMOG called RPGQuest [8] to validate its viability within MMOGs. In Section 2, we describe the proxy-based game architecture in more detail and illustrate its advantages. In Section 3, we provide a generic description of the mixed authority assignment mechanism and discuss how it improves game playing experience. In Section 4, we show the feasibility of implementing the proposed mixed authority assignment mechanism within existing MMOGs by describing a proof-of-concept implementation within an existing MMOG called RPGQuest. Section 5 discusses related work. In Section 6, we present our conclusions and discuss future work. 2. PROXY-BASED GAME ARCHITECTURE Massively Multi-player Online Games (MMOGs) usually consist of a large game space in which the players and different game objects reside and move around and interact with each-other. State information about the whole game space could be kept in a single central server which we would refer to as a Central-Server Architecture. But to alleviate the heavy demand on the processing for handling the large player population and the objects in the game in real-time, a MMOG is normally implemented using a distributed server architecture where the game space is further sub-divided into regions so that each region has relatively smaller number of players and objects that can be handled by a single server. 
In other words, the different game regions are hosted by different servers in a distributed fashion. When a player moves out of one game region to another adjacent one, the player must communicate with a different server (than it was currently communicating with) hosting the new region. The servers communicate with one another to hand off a player or an object from one region to another. In this model, the player on the client machine has to establish multiple gaming sessions with different servers so that it can roam in the entire game space. We propose a communication proxy based architecture where a player connects to a (geographically) nearby proxy instead of connecting to a central server in the case of a central-server architecture or to one of the servers in case of a distributed server architecture. In the proposed architecture, players who are close by geographically join a particular proxy. The proxy then connects to one or more game servers, as needed by the set of players that connect to it, and maintains persistent transport sessions with these servers. This alleviates the problem of each player having to connect directly to multiple game servers, which can add extra connection setup delay. Introduction of communication proxies also mitigates the overhead of a large number of transport sessions that must be managed and reduces required network bandwidth [9] and processing at the game servers both with central server and distributed server architectures. With central server architectures, communication proxies reduce the overhead at the server by not requiring the server to terminate persistent transport sessions from every one of the clients. With distributed-server architectures, additionally, communication proxies eliminate the need for the clients to maintain persistent transport sessions to every one of the servers. Figure 1 shows the proposed architecture. Figure 1: Architecture of the gaming environment. Note that the communication proxies need not be cognizant of the game. They host a number of players and inform the servers which players are hosted by the proxy in question. Also note that the players hosted by a proxy may not be in the same game space. That is, a proxy hosts players that are geographically close to it, but the players themselves can reside in different parts of the game space. The proxy communicates with the servers responsible for maintaining the game spaces subscribed to by the different players. The proxies communicate with one another in a peer-to-peer fashion. The responsiveness of the game can be improved for updates that do not need to wait on processing at a central authority. In this way, information about players can be disseminated faster, before even the game server gets to know about it. This definitely improves the responsiveness of the game. However, it ignores consistency that is critical in MMORPGs. The notion that an architecture such as this one can still maintain temporal consistency will be discussed in detail in Section 3. Figure 2 shows an example of the working principle of the proposed architecture. Assume that the game space is divided into 9 regions and there are three servers responsible for managing the regions. Server S1 owns regions 1 and 2, S2 manages 4, 5, 7, and 8, and S3 is responsible for 3, 6 and 9. Figure 2: An example. There are four communication proxies placed in geographically distant locations.
Players a, b, c join proxy P1, proxy P2 hosts players d, e, f, players g, h are with proxy P3, and players i, j, k, l are with proxy P4. Underneath each player, the figure shows the game region in which the player is currently located. For example, players a, b, c are in regions 1, 2, 6, respectively. Therefore, proxy P1 must communicate with servers S1 and S3. The reader can verify the rest of the links between the proxies and the servers.

Players can move within a region and between regions. Player movement within a region is tracked by the proxy hosting the player, and this movement information (for example, the player's new coordinates) is multicast directly to the subset of other relevant communication proxies. At the same time, this information is sent to the server responsible for that region with an indication that the movement has already been communicated to all the other relevant communication proxies (so that the server does not have to relay it to the proxies). For example, if player a moves within region 1, this information will be communicated by proxy P1 to server S1 and multicast to proxies P3 and P4. Note that proxies that do not keep state information about this region at this point in time (because they do not have any clients within that region), such as P2, do not have to receive this movement information.

If a player is at the boundary of a region and moves into a new region, there are two possibilities. The first possibility is that the proxy hosting the player can identify the region into which the player is moving (based on the trajectory information) because it is also maintaining state information about the new region at that point in time. In this case, the proxy can update movement information directly at the other relevant communication proxies and also send information to the appropriate server informing it of the movement (this may require a handoff between servers, as we will describe). Consider the scenario where player a is at the boundary of region 1 and proxy P1 can identify that the player is moving into region 2. Because proxy P1 is currently keeping state information about region 2, it can inform all the other relevant communication proxies about this movement (in this example, no other proxy maintains information about region 2 at this point, so no update needs to be sent to any of the other proxies) and then inform the server independently. In this particular case, server S1 is responsible for region 2 as well, so no handoff between servers is needed. Now consider another scenario where player j moves from region 9 to region 8 and proxy P4 is able to identify this movement. Again, because proxy P4 maintains state information about region 8, it can inform any other relevant communication proxies (again, none in this example) about this movement. But now regions 9 and 8 are managed by different servers (S3 and S2, respectively) and thus a handoff between these servers is needed. We propose that in this particular scenario the handoff be managed by proxy P4 itself. When the proxy sends the movement update to server S3 (informing the server that the player is moving out of its region), it also sends a message to server S2 informing that server of the presence and location of the player in one of its regions.
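To make this bookkeeping concrete, the following short Python sketch encodes the Figure 2 assignment. The data structures and names are ours, introduced only for illustration; the paper prescribes no particular implementation, and the per-proxy region subscriptions are a partial reconstruction consistent only with the scenarios just described. The sketch shows that a move by player a within region 1 is multicast by P1 to P3 and P4 and reported once to S1, while P2 is skipped.

REGION_TO_SERVER = {1: "S1", 2: "S1", 3: "S3", 4: "S2", 5: "S2",
                    6: "S3", 7: "S2", 8: "S2", 9: "S3"}

# Regions for which each proxy currently holds state. This is a partial,
# illustrative reconstruction: only the membership facts used in the text's
# scenarios are encoded (P3 and P4 also track region 1, P2 does not, only P1
# tracks region 2, only P4 tracks regions 8 and 9, P2 tracks region 4).
PROXY_REGIONS = {"P1": {1, 2, 6}, "P2": {4}, "P3": {1}, "P4": {1, 8, 9}}

def fast_path_targets(moving_proxy, region):
    # Who must hear about an intra-region move handled on the fast path:
    # every other proxy holding state for the region, plus the owning server.
    peers = [p for p, regs in PROXY_REGIONS.items()
             if p != moving_proxy and region in regs]
    return peers, REGION_TO_SERVER[region]

print(fast_path_targets("P1", 1))   # (['P3', 'P4'], 'S1'): P2 is skipped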
In the intra-region and inter-region scenarios described above, the proxy is able to manage movement-related information, update only the relevant communication proxies about the movement, update the servers, and enable the handoff of a player between servers if needed. In this way, the proxy performs movement updates without involving the servers in this time-critical function, thereby speeding up the game and improving the game playing experience for the players. We consider this the fast path for movement updates. We envision the proxies to be just communication proxies in that they do not know about the workings of specific games. They merely process movement information of players and objects and communicate this information to the other proxies and the servers. If the proxies are made more intelligent, in that they understand more of the game logic, it is possible for them to quickly check claims made by the clients and mitigate cheating. The servers could perform the same functionality, but with more delay. Even without being aware of game logic, the proxies can provide additional functionalities such as timestamping messages to make the game playing experience more accurate [10] and fair [11].

The second possibility that should be considered is when a player moves from one region to another but the proxy hosting the player cannot handle the move itself, either because a) the proxy does not maintain state information about all of the regions into which the player could potentially move, or b) the proxy is not able to determine which region the player may move into (even if it maintains state information about all of these regions). In this case, we propose that the proxy not be responsible for making the movement decision, but instead communicate the movement indication to the server responsible for the region within which the player is currently located. The server will then make the movement decision and a) inform all the proxies, including the proxy hosting the player, and b) initiate a handoff with another server if the player moves into a region managed by another server. We consider this the slow path for movement updates, in that the servers need to be involved in determining the new position of the player. In the example, assume that player a moves from region 1 to region 4. Proxy P1 does not maintain state information about region 4 and thus passes the movement information to server S1. The server will identify that the player has moved into region 4 and inform proxy P1 as well as proxy P2 (which is the only other proxy that maintains information about region 4 at this point in time). Server S1 will also initiate a handoff of player a with server S2. Proxy P1 will now start maintaining state information about region 4 because one of its hosted players, player a, has moved into this region. It does so by requesting and receiving the current state information about region 4 from server S2, which is responsible for this region. Thus, a proxy architecture allows us to use the faster fast path through a proxy if and when possible, as opposed to conventional server-based architectures that always have to use the slow path through the server for movement updates.
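The fast-path/slow-path choice itself can be summarized in a similar hypothetical sketch, reusing the maps and the fast_path_targets helper from the previous listing (message transmission is stubbed out with a print). The proxy handles the move, and any server handoff, whenever it already tracks the destination region; otherwise it defers the decision to the server that owns the player's current region.

def handle_move(proxy, player, cur_region, new_region):
    # new_region is None when the proxy cannot determine the destination region.
    def send(dest, msg):
        print(f"{proxy} -> {dest}: {msg}")      # stand-in for the real transport

    if new_region is not None and new_region in PROXY_REGIONS[proxy]:
        # Fast path: the proxy decides, updates peer proxies, and informs the server(s).
        peers, new_server = fast_path_targets(proxy, new_region)
        for peer in peers:
            send(peer, (player, new_region))
        old_server = REGION_TO_SERVER[cur_region]
        if old_server == new_server:
            send(new_server, (player, new_region, "already multicast"))
        else:
            # Proxy-managed handoff between servers.
            send(old_server, (player, "leaving", cur_region))
            send(new_server, (player, "entering", new_region))
    else:
        # Slow path: the server owning the current region makes the decision,
        # informs the relevant proxies, and initiates any server-to-server handoff.
        send(REGION_TO_SERVER[cur_region], (player, new_region, "decide and inform"))

handle_move("P4", "j", 9, 8)   # fast path, handoff managed by P4 (S3 -> S2)
handle_move("P1", "a", 1, 4)   # slow path through S1 (P1 does not track region 4)

Running the two example calls reproduces the scenarios above: player j's move from region 9 to 8 stays on the fast path with a P4-managed handoff from S3 to S2, while player a's move from region 1 to 4 falls back to the slow path through S1.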
By selectively maintaining relevant regional game state information at the proxies, we are able to achieve this capability in our architecture without the need to maintain the complete game state at every proxy.

3. ASSIGNMENT OF AUTHORITY

As an MMOG is played, the players and the game objects that are part of the game continually change their state. For example, consider a player who owns a tank in a battlefield game. Based on the actions of the player, the tank changes its position in the game space, the amount of ammunition the tank contains changes as it fires at other tanks, the tank collects bonus firing power based on successful hits, and so on. Similarly, objects in the battlefield, such as flags and buildings, change their state when a flag is picked up by a player (i.e., a tank) or a building is destroyed by firing at it. That is, some decision has to be made on the state of each player and object as the game progresses. Note that the state of a player and/or object can contain several parameters (e.g., position, amount of ammunition, fuel storage, points collected, etc.), and if any of these parameters changes, the state of the player/object changes.

In a client-server based game, the server controls all the players and the objects. When a player at a client machine makes a move, the move is transmitted to the server over the network. The server then analyzes the move and, if the move is a valid one, changes the state of the player at the server and informs the client of the change. The client subsequently updates the state of the player and renders the player at the new location. In this case the authority to change the state of the player resides entirely with the server, and the client simply follows what the server instructs it to do. Most of the current first person shooter (FPS) games and role playing games (RPG) fall under this category. In current FPS games, much like in RPG games, the client is not trusted. All moves and actions that it makes are validated. If a client detects that it has hit another player with a bullet, it proceeds assuming that it is a hit. Meanwhile, an update is sent to the server, and the server will send back a message either affirming or denying that the player was hit. If the remote player was not hit, then the client will know that it did not actually make the shot. If it did make the hit, an update will also be sent from the server to the other clients informing them that the other player was hit. A difference that occurs in some RPGs is that they use very dumb client programs. Some RPGs do not maintain state information at the client and therefore cannot predict anything, such as hits, at the client. State information is not maintained because the client is not trusted with it. In RPGs, a cheating player with a hacked game client could use state information stored at the client to gain an advantage and find things such as hidden treasure or monsters lurking around the corner. This is a reason why most MMORPGs do not send a lot of state information to the client, and it causes these games to be less responsive and have lower-interaction game-play than FPS games.

In a peer-to-peer game, each peer controls the player and objects that it owns. When a player makes a move, the peer machine analyzes the move and, if it is a valid one, changes the state of the player and places the player in the new position.
Afterwards, the owner peer informs all other peers about the new state of the player, and the rest of the peers update that player's state. In this scenario, the authority to change the state of the player is given to the owning peer, and all other peers simply follow the owner. For example, Battle Zone Flag (BzFlag) [12] is a multiplayer client-server game where the client has all authority for making decisions. It was built primarily with LAN play in mind and cheating as an afterthought. Clients in BzFlag are completely authoritative: when they detect that they were hit by a bullet, they send an update to the server, which simply forwards the message along to all other players. The server does no validation at all.

Each of the above two traditional approaches has its own set of advantages and disadvantages. The first approach, which we will refer to as server authoritative henceforth, uses a centralized method to assign authority. While a centralized approach can keep the state of the game (i.e., the state of all the players and objects) consistent across any number of client machines, it suffers from delayed response in game-play, as any move that a player at the client machine makes must incur one round-trip delay to the server before it can take effect on the client's screen. In addition to the round-trip delay, there is also queuing delay in processing the state change request at the server. This can result in additional processing delay, and can also bring severe scalability problems if a large number of clients are playing the game. One definite advantage of the server authoritative approach is that it can easily detect if a client is cheating and can take appropriate action to prevent cheating. The peer-to-peer approach, henceforth referred to as client authoritative, can make games very responsive. However, it can make the game state inconsistent for some players, and a tie break (or roll back) has to be performed to bring the game back to a consistent state. Neither tie breaks nor roll backs are desirable features of online gaming. For example, assume that the goal of each player in a game is to collect as many flags as possible from the game space (e.g., BzFlag). When two players in proximity try to collect the same flag at the same time, then depending on the algorithm used at the client side, both clients may determine that they are the winner, although in reality only one player can pick the flag up. Both players will see on their screens that they won the flag. This makes the state of the game inconsistent. Ways to recover from this inconsistency are to give the flag to only one player (using some tie break rule) or to roll the game back so that the players can try again. Neither of these two approaches is a pleasing experience for online gaming. Another problem with the client authoritative approach is cheating by clients, as there is no cross-checking of the state changes authorized by the owner client.

We propose to use a hybrid approach that assigns authority dynamically between the client and the server. That is, we assign authority to the client to make the game responsive, and use the server's authority only when the client's individual authoritative decisions could make the game state inconsistent. By moving the authority for time-critical updates to the client, we avoid the added delay caused by requiring the server to validate these updates.
For example, in the flag pickup game, a client will be given the authority to pick up flags only when no other players are within a range from which they could imminently pick up the flag. Only when two or more players are close enough that more than one of them may claim to have picked up a flag does the authority for movement and flag pickup go to the central server, so that the game state does not become inconsistent. We believe that in a large game space where a player is often in a very wide open and sparsely populated area, such as those often seen in the game Second Life [13], this hybrid architecture would be very beneficial because of the long periods during which the client would have authority to send movement updates for itself. This has two advantages over the central-authority approach: it distributes the processing load down to the clients for the majority of events, and it allows for a more responsive game that does not need to wait on a server for validation.

We believe that our notion of authority can be used to develop a globally consistent state model of the evolution of a game. Fundamentally, the consistent state of the system is the one that is defined by the server. However, if local authority is delegated to the client, the client's state is superimposed on the server's state to determine the correct global state. For example, if the client is authoritative with respect to the movement of a player, then that trajectory is the true trajectory and must replace the server's view of the player's trajectory. Note that this could be problematic and lead to temporal inconsistency only if, for example, two or more entities are moving in the same region and can interact with each other. In this situation, the client's authority must revert to the server, and the server would then make the decisions. Thus, the client is only authoritative in situations where there is no potential to imminently interact with other players. We believe that in complex MMOGs, even when allowing more rapid movement, it will still be the case that local authority is possible for significant spans of game time. Note that it might also be possible to minimize occurrences of the Dead Man Shooting problem described in [14]. This could be done by allowing the client to be authoritative for more actions, such as its own player's death, and disallowing other players from making preemptive decisions based on a remote player.

One reason why the client-server based architecture has gained popularity is the belief that the fastest route to the other clients is through the server. While this may be true, we aim to create a new architecture where decisions do not always have to be made at the game server and the fastest route to a client is actually through a communication proxy located close to the client. That is, the shortest path in our architecture is not through the game server but through the communication proxy. After a client performs an action such as movement, it simultaneously distributes it directly to the other clients and the game server by way of the communication proxy. We note, however, that our architecture is not practical for games where players set up their own servers in an ad-hoc fashion and do not have access to proxies at the various ISPs.
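Returning to the flag pickup example, the per-event authority rule can be written down in a few lines. The Python sketch below is illustrative only: the function names and the contention range are our assumptions, not values taken from the paper. A client claims a flag on its own authority only when it is the sole player within the contention range; otherwise it defers the decision to the server.

import math

CONTENTION_RANGE = 5.0        # assumed game-space units, not a value from the paper

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def pickup_authority(flag_pos, own_pos, other_player_positions):
    # The client keeps authority only when no other player could imminently
    # contend for the same flag; otherwise the server must decide.
    others_in_range = any(dist(p, flag_pos) <= CONTENTION_RANGE
                          for p in other_player_positions)
    if not others_in_range and dist(own_pos, flag_pos) <= CONTENTION_RANGE:
        return "client"       # claim locally; the server can still verify later
    return "server"           # possible contention: only the server may award it

print(pickup_authority((0, 0), (1, 1), [(20, 20)]))   # client
print(pickup_authority((0, 0), (1, 1), [(2, 2)]))     # server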
This proxy and distributed authority architecture can be used to its full potential only when the proxies can be placed at strategic locations within the main ISPs and are evenly distributed geographically. Our game architecture does not assume that the client cannot be trusted. We are designing our architecture on the assumption that there will be sufficient cheat-deterring and detection mechanisms present so that it will be both undesirable and very difficult to cheat [15]. In our proposed approach, we can make games cheat resilient by using the proxy-based architecture when client authoritative decisions take place. To achieve this, the proxies have to be game cognizant, so that decisions made by a client can be cross-checked by the proxy that the client connects to. For example, assume that in a game a plane controlled by a client moves through the game space and that it is not possible for the plane to go through a building unharmed. In a client authoritative mode, it is possible for the client to cheat by maneuvering the plane through a building and claiming the plane to be unharmed. However, when such a move is published by the client, the proxy, being aware of the game space that the plane is in, can quickly check that the client has misused its authority and can block the move. This allows us to distribute the authority to make decisions about the clients. In the following section we use a multiplayer game called RPGQuest to implement different authority schemes and discuss our experience with the implementation. Our implementation shows the viability of our proposed solution.

4. IMPLEMENTATION EXPERIENCE

We have experimented with the authority assignment mechanism described in the last section by implementing the mechanisms in a game called RPGQuest. A screen shot from this game is shown in Figure 3. The purpose of the implementation is to test its feasibility in a real game. RPGQuest is a basic first person game where the player can move around a three-dimensional environment. Objects are placed within the game world, and players gain points for each object that is collected. The game clients connect to a game server which allows many players to coexist in the same game world. The basic functionality of this game is representative of current online first person shooter and role playing games. The game uses the DirectX 8 graphics API and the DirectPlay networking API. In this section we discuss the three different versions of the game that we experimented with.

Figure 3: The RPGQuest Game.

The first version of the game, which is the original implementation of RPGQuest, was created with a completely authoritative server and a non-authoritative client. The authority given to the server includes deciding when a player collides with static objects or other players and when a player picks up an object. This version of the game performs well up to 100ms of round-trip latency between the client and the server. There is little lag between the time the player hits a wall and the time the server corrects the player's position. However, as more latency is induced between the client and server, the game becomes increasingly difficult to play. With the increased latency, the messages coming from the server correcting the player when it runs into a wall are not received fast enough. This causes the player to pass through the wall for the period during which it is waiting for the server to resolve the collision.
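A rough back-of-the-envelope calculation shows why this pass-through is so visible (the frame rate and server processing time below are assumed values for illustration, not measurements from RPGQuest): the correction arrives only after a round trip plus server processing, during which the locally rendered player keeps moving into the wall.

def frames_before_correction(rtt_ms, server_processing_ms=10, frame_rate_hz=60):
    # Frames rendered locally before the server's correction can take effect.
    return (rtt_ms + server_processing_ms) * frame_rate_hz / 1000.0

for rtt in (100, 200, 300):
    print(rtt, round(frames_before_correction(rtt), 1))
# roughly 6.6 frames at 100 ms RTT, 12.6 at 200 ms, 18.6 at 300 ms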
When studying the source code of the original version of the RPGQuest game, we found a substantial delay that is unavoidable each time an action must be validated by the server. Whenever a movement update is sent to the server, the client must wait the round-trip delay, plus some processing time at the server, in order to receive its validated or corrected position. This is obviously unacceptable in any game where movement or any other rapidly changing state information must be validated and disseminated to the other clients rapidly. To get around this problem, we developed a second version of the game, which gives all authority to the client. The client was delegated the authority to validate its own movement and the authority to pick up objects without validation from the server. In this version of the game, when a player moves around the game space, the client validates that the player's new position does not intersect with any walls or static objects. A position update is then sent to the server, which immediately forwards the update to the other clients within the region. The update does not have to go through any extra processing or validation. This game model of complete authority given to the client is beneficial with respect to movement.
We believe that as the interactivity of games increases, our architecture of mixed authority that does not rely on server validation will be necessary. To test the benefits and show the feasibility of our architecture of mixed authority, we developed a third version of the RPGQuest game that distributed authority for different actions between the client and server. In this version, in the interest of consistency, the server remained authoritative for deciding who picked up an object. The client was given full authority to send positional updates to other clients and verify its own position without the need to verify its updates with the server. When the player tries to move their avatar, the client verifies that the move will not cause it to move through a wall. A positional update is then sent to the server which then simply forwards it to the other clients within the region. This eliminates any extra processing delay that would occur at the server and is also a more accurate means of verification since the client has a more accurate view of its own state than the server. This version of the RPGQuest game where authority is distributed between the client and server is an improvement from the server authoritative version. The client has no delay in waiting for an update for its own position and other clients do not have to wait on the server to verify the update. The inconsistencies where two clients can pick up the same object in the client authoritative architecture are not present in this version of the client. However, the benefits of mixed authority will not truly be seen until an implementation of our communication proxy is integrated into the game. With the addition of the communication proxy, after the client verifies its own positional updates it will be able to send the update to all clients within its region through a low latency link instead of having to first go through the game server which could possibly be in a very remote location. The coding of the different versions of the game was very simple. The complexity of the client increased very slightly in the client authoritative and hybrid models. The original dumb clients of RPGQuest know the position of other players; it is not just sent a screen snapshot from the server. The server updates each client with the position of all nearby clients. The dumb clients use client side prediction to fill in the gaps between the updates they receive. The only extra processing the client has to do in the hybrid architecture is to compare its current position to the positions of all objects (walls, boxes, etc.) in its area. This obviously means that each client will have to already have downloaded the locations of all static objects within its current region. 5. RELATED WORK It has been noted that in addition to latency, bandwidth requirements also dictate the type of gaming architecture to be used. In [16], different types of architectures are studied with respect to bandwidth efficiencies and latency. It is pointed out that Central Server architectures are not scalable because of bandwidth requirements at the server but the overhead for consistency checks are limited as they are performed at the server. A Peer-to-Peer architecture, on the other hand, is scalable but there is a significant overhead for consistency checks as this is required at every player. 
The paper proposes a hybrid architecture which is Peer-toPeer in terms of message exchange (and thereby is scalable) where a Central Server is used for off-line consistency checks (thereby mitigating consistency check overhead). The paper provides an implementation example of BZFlag which is a peer-to-peer game which is modified to transfer all authority to a central server. In essence, this paper advocates an authority architecture which is server based even for peerto-peer games, but does not consider division of authority between a client and a server to minimize latency which could affect game playing experience even with the type of latency found in server based games (where all authority is with the server). There is also previous work that has suggested that proxy based architectures be used to alleviate the latency problem and in addition use proxies to provide congestion control and cheat-proof mechanisms in distributed multi-player games [17]. In [18], a proxy server-network architecture is presented that is aimed at improving scalability of multiplayer games and lowering latency in server-client data transmission. The main goal of this work is to improve scalability of First-Person Shooter (FPS) and RPG games. The further objective is to improve the responsiveness MMOGs by providing low latency communications between the client and The 5th Workshop on Network & System Support for Games 2006 - NETGAMES 2006 7 server. The architecture uses interconnected proxy servers that each have a full view of the global game state. Proxy servers are located at various different ISPs. It is mentioned in this work that dividing the game space among multiple games servers such as the federated model presented in [19] is inefficient for a relatively fast game flow and that the proposed architecture alleviates this problem because users do not have to connect to a different server whenever they cross the server boundary. This architecture still requires all proxies to be aware of the overall game state over the whole game space unlike our work where we require the proxies to maintain only partial state information about the game space. Fidelity based agent architectures have been proposed in [20, 21]. These works propose a distributed client-server architecture for distributed interactive simulations where different servers are responsible for different portions of the game space. When an object moves from one portion to another, there is a handoff from one server to another. Although these works propose an architecture where different portions of the simulation space are managed by different servers, they do not address the issue of decreasing the bandwidth required through the use of communication proxies. Our work differs from the above discussed previous works by proposing a) a distributed proxy-based architecture to decrease bandwidth requirements at the clients and the servers without requiring the proxies to keep state information about the whole game space, b) a dynamic authority assignment technique to reduce latency (by performing consistency checks locally at the client whenever possible) by splitting the authority between the clients and servers on a per object basis, and c) proposing that cheat detection can be built into the proxies if they are provided more information about the specific game instead of using them purely as communication proxies (although this idea has not been implemented yet and is part of our future work). 6. 
6. CONCLUSIONS AND FUTURE WORK

In this paper, we first proposed a proxy-based architecture for MMOGs that enables MMOGs to scale to a large number of users by mitigating the need to maintain a large number of transport sessions and by decreasing both bandwidth overhead and the latency of event updates. Second, we proposed a mixed authority assignment mechanism that divides the authority for making decisions on actions and events within the game between the clients and the server, and argued how such an authority assignment leads to a better game playing experience without sacrificing the consistency of the game. Third, to validate the viability of the mixed authority assignment mechanism, we implemented it within an MMOG called RPGQuest and described our implementation experience.

In future work, we propose to implement the communication proxy architecture described in this paper and integrate the mixed authority mechanism within this architecture. We propose to evaluate the benefits of the proxy-based architecture in terms of scalability, accuracy and responsiveness. We also plan to implement a version of the RPGQuest game with dynamic assignment of authority that allows players the authority to pick up objects when no other players are near. As discussed earlier, this will allow for a more efficient and responsive game in certain situations and alleviate some of the processing load from the server. Also, since so much trust is put into the clients of our architecture, it will be necessary to integrate into the architecture many of the cheat detection schemes that have been proposed in the literature. Software such as PunkBuster [22] and a reputation system like those proposed in [23] and [15] would be integral to the operation of an architecture such as ours, which places a lot of trust on the client. We further propose to make the proxies in our architecture more game cognizant so that cheat detection mechanisms can be built into the proxies themselves.

7. REFERENCES

[1] Y. W. Bernier. Latency Compensation Methods in Client/Server In-game Protocol Design and Optimization. In Proc. of the Game Developers Conference '01, 2001.
[2] Lothar Pantel and Lars C. Wolf. On the impact of delay on real-time multiplayer games. In NOSSDAV '02: Proceedings of the 12th International Workshop on Network and Operating Systems Support for Digital Audio and Video, pages 23-29, New York, NY, USA, 2002. ACM Press.
[3] G. Armitage. Sensitivity of Quake3 Players to Network Latency. In Proc. of IMW2001, Workshop Poster Session, November 2001. http://www.geocities.com/gj_armitage/q3/quake-results.html.
[4] Tobias Fritsch, Hartmut Ritter, and Jochen Schiller. The effect of latency and network limitations on MMORPGs: a field study of EverQuest 2. In NetGames '05: Proceedings of the 4th ACM SIGCOMM Workshop on Network and System Support for Games, pages 1-9, New York, NY, USA, 2005. ACM Press.
[5] Tom Beigbeder, Rory Coughlan, Corey Lusher, John Plunkett, Emmanuel Agu, and Mark Claypool. The effects of loss and latency on user performance in Unreal Tournament 2003. In NetGames '04: Proceedings of the 3rd ACM SIGCOMM Workshop on Network and System Support for Games, pages 144-151, New York, NY, USA, 2004. ACM Press.
[6] Y. Lin, K. Guo, and S. Paul. Sync-MS: Synchronized Messaging Service for Real-Time Multi-Player Distributed Games. In Proc. of the 10th IEEE International Conference on Network Protocols (ICNP), Nov 2002.
[7] Katherine Guo, Sarit Mukherjee, Sampath Rangarajan, and Sanjoy Paul. A fair message exchange framework for distributed multi-player games. In NetGames '03: Proceedings of the 2nd Workshop on Network and System Support for Games, pages 29-41, New York, NY, USA, 2003. ACM Press.
[8] T. Barron. Multiplayer Game Programming, chapters 16-17, pages 672-731. Prima Tech's Game Development Series. Prima Publishing, 2001.
[9] Carsten Griwodz and Pål Halvorsen. The fun of using TCP for an MMORPG. In NOSSDAV '06: Proceedings of the International Workshop on Network and Operating Systems Support for Digital Audio and Video, New York, NY, USA, 2006. ACM Press.
[10] Sudhir Aggarwal, Hemant Banavar, Amit Khandelwal, Sarit Mukherjee, and Sampath Rangarajan. Accuracy in dead-reckoning based distributed multi-player games. In NetGames '04: Proceedings of the 3rd ACM SIGCOMM Workshop on Network and System Support for Games, pages 161-165, New York, NY, USA, 2004. ACM Press.
[11] Sudhir Aggarwal, Hemant Banavar, Sarit Mukherjee, and Sampath Rangarajan. Fairness in dead-reckoning based distributed multi-player games. In NetGames '05: Proceedings of the 4th ACM SIGCOMM Workshop on Network and System Support for Games, pages 1-10, New York, NY, USA, 2005. ACM Press.
[12] T. Riker et al. BZFlag. http://www.bzflag.org, 2000-2006.
[13] Linden Lab. Second Life. http://secondlife.com, 2003.
[14] Martin Mauve. How to keep a dead man from shooting. In IDMS '00: Proceedings of the 7th International Workshop on Interactive Distributed Multimedia Systems and Telecommunication Services, pages 199-204, London, UK, 2000. Springer-Verlag.
[15] Max Skibinsky. Massively Multiplayer Game Development 2, chapter The Quest for Holy Scale - Part 2: P2P Continuum, pages 355-373. Charles River Media, 2005.
[16] Joseph D. Pellegrino and Constantinos Dovrolis. Bandwidth requirement and state consistency in three multiplayer game architectures. In NetGames '03: Proceedings of the 2nd Workshop on Network and System Support for Games, pages 52-59, New York, NY, USA, 2003. ACM Press.
[17] M. Mauve, J. Widmer, and S. Fischer. A Generic Proxy System for Networked Computer Games. In Proc. of the Workshop on Network Games, NetGames 2002, April 2002.
[18] S. Gorlatch, J. Muller, S. Fischer, and M. Mauve. A Proxy Server Network Architecture for Real-Time Computer Games. In Euro-Par 2004 Parallel Processing: 10th International Euro-Par Conference, August-September 2004.
[19] H. Hazeyama, T. Iimura, and Y. Kadobayashi. Zoned Federation of Game Servers: A Peer-to-Peer Approach to Scalable Multiplayer On-line Games. In Proc. of the ACM Workshop on Network Games, NetGames 2004, August-September 2004.
[20] B. Kelly and S. Aggarwal. A Framework for a Fidelity Based Agent Architecture for Distributed Interactive Simulation. In Proc. of the 14th Workshop on Standards for Distributed Interactive Simulation, pages 541-546, March 1996.
[21] S. Aggarwal and B. Kelly. Hierarchical Structuring for Distributed Interactive Simulation. In Proc. of the 13th Workshop on Standards for Distributed Interactive Simulation, pages 125-132, Sept 1995.
[22] Even Balance, Inc. PunkBuster. http://www.evenbalance.com/, 2001-2006.
[23] Y. Wang and J. Vassileva. Trust and Reputation Model in Peer-to-Peer Networks. In Third International Conference on Peer-to-Peer Computing, 2003.
First, we propose that centralized server-based architectures be made hierarchical through the introduction of communication proxies so that game updates made by clients that are time sensitive, such as movement, can be more efficiently distributed to other players within their game-space. Second, we propose that assignment of authority in terms of who makes the decision on client actions such as object pickups and hits, and collisions between players, be distributed between the clients and the servers in order to distribute the computing load away from the central server. In order to move towards more complex real-time networked games, we believe that definitions of authority must be refined. Most currently implemented MMOGs have game servers that have almost absolute authority. We argue that there is no single consistent view of the virtual game space that can be maintained on any one component within a network that has significant latency, such as the one that many MMOG players would experience. We believe that in most cases, the client with the most accurate view of an entity is the best suited to make decisions for that entity when the causality of that action will not immediately affect any other players. In this paper we define what it means to have authority within the context of events and objects in a virtual game space. We then show the benefits of delegating authority for different actions and game events between the clients and server. In our model, the game space consists of game clients (representing the players) and objects that they control. We divide the client actions and game events (we will collectively refer to these as "events") such as collisions, hits etc. into three different categories, a) events for which the game client has absolute authority, b) events for which the game server has absolute authority, and c) events for which the authority changes dynamically from client to the server and vice-versa. Depending on who has the authority, that entity will make decisions on the events that happen within a game space. We propose that authority for all decisions that pertain to a single player or object in the game that neither affects the other players or objects, nor are affected by the actions of other players be delegated to that player's game client. These type of decisions would include collision detection with static objects within the virtual game space and hit detection with linear path bullets (whose trajectory is fixed and does not change with time) fired by other players. Authority for decisions that could be affected by two or more players should be delegated to the impartial central server, in some cases, to ensure that no conflicts occur and in other cases can be delegated to the clients responsible for those players. For example, collision detection of two players that collide with each other and hit detection of non-linear bullets (that changes trajectory with time) should be delegated to the server. Decision on events such as item pickup (for example, picking up items in a game to accumulate points) should be delegated to a server if there are multiple players within close proximity of an item and any one of the players could succeed in picking the item; for item pick-up contention where the client realizes that no other player, except its own player, is within a certain range of the item, the client could be delegated the responsibility to claim the item. The client's decision can always be accurately verified by the server. 
In summary, we argue that while current authority models that only delegate responsibility to the server to make authoritative decisions on events is more secure than allowing the clients to make the decisions, these types of models add undesirable delays to events that could very well be decided by the clients without any inconsistency being introduced into the game. As networked games become more complex, our architecture will become more applicable. This architecture is applicable for massively multiplayer games where the speed and accuracy of game-play are a major concern while consistency between player game-states is still desired. We propose that a mixed authority assignment mechanism such as the one outlined above be implemented in high interaction MMOGs. Our paper has the following contributions. First we propose an architecture that uses communication proxies to enable clients to connect to the game server. A communication proxy in the proposed architecture maintains information only about portions of the game space that are relevant to clients connected to it and is able to process the movement information of objects and players within these portions. In addition, it is capable of multicasting this information only to a relevant subset of other communication proxies. These functionalities of a communication proxy leads to a decrease in latency of event update and subsequently, better game playing experience. Second, we propose a mixed authority assignment mechanism as described above that improves game playing experience. Third, we implement the proposed mixed authority assignment mechanism within a MMOG called RPGQuest [8] to validate its viability within MMOGs. In Section 2, we describe the proxy-based game architecture in more detail and illustrate its advantages. In Section 3, we provide a generic description of the mixed authority assignment mechanism and discuss how it improves game playing experience. In Section 4, we show the feasibility of implementing the proposed mixed authority assignment mechanism within existing MMOGs by describing a proof-of-concept implementation within an existing MMOG called RPGQuest. Section 5 discusses related work. In Section 6, we present our conclusions and discuss future work. 2. PROXY-BASED GAME ARCHITECTURE Massively Multi-player Online Games (MMOGs) usually consist of a large game space in which the players and different game objects reside and move around and interact with each-other. State information about the whole game space could be kept in a single central server which we would refer to as a Central-Server Architecture. But to alleviate the heavy demand on the processing for handling the large player population and the objects in the game in real-time, a MMOG is normally implemented using a distributed server architecture where the game space is further sub-divided into regions so that each region has relatively smaller number of players and objects that can be handled by a single server. In other words, the different game regions are hosted by different servers in a distributed fashion. When a player moves out of one game region to another adjacent one, the player must communicate with a different server (than it was currently communicating with) hosting the new region. The servers communicate with one another to hand off a player or an object from one region to another. In this model, the player on the client machine has to establish multiple gaming sessions with different servers so that it can roam in the entire game space. 
We propose a communication proxy based architecture where a player connects to a (geographically) nearby proxy instead of connecting to a central server in the case of a centralserver architecture or to one of the servers in case of dis tributed server architecture. In the proposed architecture, players who are close by geographically join a particular proxy. The proxy then connects to one or more game servers, as needed by the set of players that connect to it and maintains persistent transport sessions with these server. This alleviates the problem of each player having to connect directly to multiple game servers, which can add extra connection setup delay. Introduction of communication proxies also mitigates the overhead of a large number of transport sessions that must be managed and reduces required network bandwidth [9] and processing at the game servers both with central server and distributed server architectures. With central server architectures, communication proxies reduce the overhead at the server by not requiring the server to terminate persistent transport sessions from every one of the clients. With distributed-server architectures, additionally, communication proxies eliminate the need for the clients to maintain persistent transport sessions to every one of the servers. Figure 1 shows the proposed architecture. Figure 1: Architecture of the gaming environment. Note that the communication proxies need not be cognizant of the game. They host a number of players and inform the servers which players are hosted by the proxy in question. Also note that the players hosted by a proxy may not be in the same game space. That is, a proxy hosts players that are geographically close to it, but the players themselves can reside in different parts of the game space. The proxy communicates with the servers responsible for maintaining the game spaces subscribed by the different players. The proxies communicate with one another in a peer-to-peer to fashion. The responsiveness of the game can be improved for updates that do not need to wait on processing at a central authority. In this way, information about players can be disseminated faster before even the game server gets to know about it. This definitely improves the responsiveness of the game. However, it ignores consistency that is critical in MMORPGs. The notion that an architecture such as this one can still maintain temporal consistency will be discussed in detail in Section 3. Figure 2 shows and example of the working principle of the proposed architecture. Assume that the game space is divided into 9 regions and there are three servers responsible for managing the regions. Server S1 owns regions 1 and 2, S2 manages 4, 5, 7, and 8, and S3 is responsible for 3, 6 and 9. Figure 2: An example. There are four communication proxies placed in geographically distant locations. Players a, b, c join proxy P1, proxy P2 hosts players d, e, f, players g, h are with proxy P3, whereas players i, j, k, l are with proxy P4. Underneath each player, the figure shows which game region the player is located currently. For example, players a, b, c are in regions 1, 2, 6, respectively. Therefore, proxy P1 must communicate with servers S1 and S3. The reader can verify the rest of the links between the proxies and the servers. Players can move within the region and between regions. 
Player movement within a region will be tracked by the proxy hosting the player and this movement information (for example, the player's new coordinates) will be multicast to a subset of other relevant communication proxies directly. At the same time, this information will be sent to the server responsible for that region with the indication that this movement has already been communicated to all the other relevant communication proxies (so that the server does not have to relay this information to all the proxies). For example, if player a moves within region 1, this information will be communicated by proxy P1 to server S1 and multicast to proxies P3 and P4. Note that proxies that do not keep state information about this region at this point in time (because they do not have any clients within that region) such as P2 do not have to receive this movement information. If a player is at the boundary of a region and moves into a new region, there are two possibilities. The first possibility is that the proxy hosting the player can identify the region into which the player is moving (based on the trajectory information) because it is also maintaining state information about the new region at that point in time. In this case, the proxy can update movement information directly at the other relevant communication proxies and also send information to the appropriate server informing of the movement (this may require handoff between servers as we will describe). Consider the scenario where player a is at the boundary of region 1 and proxy P1 can identify that the player is moving into region 2. Because proxy P1 is currently keeping state information about region 2, it can inform all The 5th Workshop on Network & System Supportfor Games 2006--NETGAMES 2006 3 the other relevant communication proxies (in this example, no other proxy maintains information about region 2 at this point and so no update needs to be sent to any of the other proxies) about this movement and then inform the server independently. In this particular case, server S1 is responsible for region 2 as well and so no handoff between servers would be needed. Now consider another scenario where player j moves from region 9 to region 8 and that proxy P4 is able to identify this movement. Again, because proxy P4 maintains state information about region 8, it can inform any other relevant communication proxies (again, none in this example) about this movement. But now, regions 9 and 8 are managed by different servers (servers S3 and S2 respectively) and thus a hand-off between these servers is needed. We propose that in this particular scenario, the handoff be managed by the proxy P4 itself. When the proxy sends movement update to server S3 (informing the server that the player is moving out of its region), it would also send a message to server S2 informing the server of the presence and location of the player in one of its region. In the intra-region and inter-region scenarios described above, the proxy is able to manage movement related information, update only the relevant communication proxies about the movement, update the servers with the movement and enable handoff of a player between the servers if needed. In this way, the proxy performs movement updates without involving the servers in any way in this time-critical function thereby speeding up the game and improving game playing experience for the players. We consider this the "fast path" for movement update. 
We envision the proxies to be just communication proxies in that they do not know about the workings of specific games. They merely process movement information of players and objects and communicate this information to the other proxies and the servers. If the proxies are made more intelligent in that they understand more of the game logic, it is possible for them to quickly check on claims made by the clients and mitigate cheating. The servers could perform the same functionality but with more delay. Even without being aware of game logic, the proxies can provide additional functionalities such as timestamping messages to make the game playing experience more accurate [10] and fair [11]. The second possibility that should be considered is when players move between regions. It is possible that a player moves from one region to another but the proxy that is hosting the player is not able to determine the region into which the player is moving, a) the proxy does not maintain state information about all the regions into which the player could potentially move, or b) the proxy is not able to determine which region the player may move into (even if maintains state information about all these regions). In this case, we propose that the proxy be not responsible for making the movement decision, but instead communicate the movement indication to the server responsible for the region within which the player is currently located. The server will then make the movement decision and then a) inform all the proxies including the proxy hosting the player, and b) initiate handoff with another server if the player moves into a region managed by another server. We consider this the "slow path" for movement update in that the servers need to be involved in determining the new position of the player. In the example, assume that player a moves from region 1 to region 4. Proxy P1 does not maintain state information about region 4 and thus would pass the movement information to server S1. The server will identify that the player has moved into region 4 and would inform proxy P1 as well as proxy P2 (which is the only other proxy that maintains information about region 4 at this point in time). Server S1 will also initiate a handoff of player a with server S2. Proxy P1 will now start maintaining state information about region 4 because one of its hosted players, player a has moved into this region. It will do so by requesting and receiving the current state information about region 4 from server S2 which is responsible for this region. Thus, a proxy architecture allows us to make use of faster movement updates through the fast path through a proxy if and when possible as opposed to conventional server-based architectures that always have to use the slow path through the server for movement updates. By selectively maintaining relevant regional game state information at the proxies, we are able to achieve this capability in our architecture without the need for maintaining the complete game state at every proxy. 3. ASSIGNMENT OF AUTHORITY As a MMOG is played, the players and the game objects that are part of the game, continually change their state. For example, consider a player who owns a tank in a battlefield game. Based on action of the player, the tank changes its position in the game space, the amount of ammunition the tank contains changes as it fires at other tanks, the tank collects bonus firing power based on successful hits, etc. . Similarly objects in the battlefield, such as flags, buildings etc. 
change their state when a flag is picked up by a player (i.e. tank) or a building is destroyed by firing at it. That is, some decision has to be made on the state of each player and object as the game progresses. Note that the state of a player and/or object can contain several parameters (e.g., position, amount of ammunition, fuel storage, points collected, etc.), and if any of the parameters changes, the state of the player/object changes. In a client-server based game, the server controls all the players and the objects. When a player at a client machine makes a move, the move is transmitted to the server over the network. The server then analyzes the move, and if the move is a valid one, changes the state of the player at the server and informs the client of the change. The client subsequently updates the state of the player and renders the player at the new location. In this case, the authority to change the state of the player resides with the server entirely and the client simply follows what the server instructs it to do. Most of the current first person shooter (FPS) games and role playing games (RPG) fall under this category. In current FPS games, much like in RPG games, the client is not trusted. All moves and actions that it makes are validated. If a client detects that it has hit another player with a bullet, it proceeds assuming that it is a hit. Meanwhile, an update is sent to the server and the server will send back a message either affirming or denying that the player was hit. If the remote player was not hit, then the client will know that it did not actually make the shot. If it did make the hit, an update will also be sent from the server to the other clients informing them that the other player was hit. A difference that occurs in some RPGs is that they use very dumb client programs. Some RPGs do not maintain state information at the client and therefore cannot predict anything such as hits at the client. State information is not maintained because the client is not trusted with it. In RPGs, a cheating player with a hacked game client can use state information stored at the client to gain an advantage and find things such as hidden treasure or monsters lurking around the corner. This is a reason why most MMORPGs do not send a lot of state information to the client, which causes the game to be less responsive and have lower interaction game-play than FPS games. In a peer-to-peer game, each peer controls the player and object that it "owns". When a player makes a move, the peer machine analyzes the move and if it is a valid one, changes the state of the player and places the player in a new position. Afterwards, the owner peer informs all other peers about the new state of the player and the rest of the peers update the state of the player. In this scenario, the authority to change the state of the player is given to the owning peer and all other peers simply follow the owner. For example, Battle Zone Flag (BzFlag) [12] is a multiplayer client-server game where the client has all authority for making decisions. It was built primarily with LAN play in mind and cheating as an afterthought. Clients in BzFlag are completely authoritative and when they detect that they were hit by a bullet, they send an update to the server which simply forwards the message along to all other players. The server does no sort of validation.
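The server-authoritative validation flow described above for current FPS and RPG games can be summarized by the following Python sketch. The distance-based range check stands in for whatever validation rules a real game applies, and all class names, player identifiers and message formats are illustrative assumptions.

import math

class AuthoritativeServer:
    """Every hit claim is validated by the server; the client proceeds optimistically,
    but the server's verdict is final and is relayed to the other clients on a hit."""
    def __init__(self, positions, weapon_range=50.0):
        self.positions = positions            # player id -> (x, y) as known by the server
        self.weapon_range = weapon_range      # stand-in for the game's real validation rules

    def on_hit_claim(self, shooter, target):
        hit = math.dist(self.positions[shooter], self.positions[target]) <= self.weapon_range
        replies = [("to_client", shooter, "hit_confirmed" if hit else "hit_denied", target)]
        if hit:
            # Only on a confirmed hit are the remaining clients informed.
            replies += [("to_client", other, "player_hit", target)
                        for other in self.positions if other not in (shooter, target)]
        return replies

server = AuthoritativeServer({"a": (0, 0), "b": (10, 0), "c": (400, 0)})
server.on_hit_claim("a", "b")   # confirmed; client c is informed as well
server.on_hit_claim("a", "c")   # denied: a learns it did not actually make the shot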
Each of the above two traditional approaches has its own set of advantages and disadvantages. The first approach, which we will refer to as "server authoritative" henceforth, uses a centralized method to assign authority. While a centralized approach can keep the state of the game (i.e., state of all the players and objects) consistent across any number of client machines, it suffers from delayed response in game-play as any move that a player at the client machine makes must go through one round-trip delay to the server before it can take effect on the client's screen. In addition to the round-trip delay, there is also queuing delay in processing the state change request at the server. This can result in additional processing delay, and can also bring in severe scalability problems if there are large number of clients playing the game. One definite advantage of the server authoritative approach is that it can easily detect if a client is cheating and can take appropriate action to prevent cheating. The peer-to-peer approach, henceforth referred to as "client authoritative", can make games very responsive. However, it can make the game state inconsistent for a few players and tie break (or roll back) has to be performed to bring the game back to a consistent state. Neither tie break nor roll back is a desirable feature of online gaming. For example, assume that for a game, the goal of each player is to collect as many flags as possible from the game space (e.g. BzFlag). When two players in proximity try to collect the same flag at the same time, depending on the algorithm used at the client-side, both clients may determine that it is the winner, although in reality only one player can pick the flag up. Both players will see on their screen that it is the winner. This makes the state of the game inconsistent. Ways to recover from this inconsistency are to give the flag to only one player (using some tie break rule) or roll the game back so that the players can try again. Neither of these two approaches is a pleasing experience for online gaming. Another problem with client authoritative approach is that of cheating by clients as there is no cross checking of the validation of the state changes authorized by the owner client. We propose to use a hybrid approach to assign the authority dynamically between the client and the server. That is, we assign the authority to the client to make the game responsive, and use the server's authority only when the client's individual authoritative decisions can make the game state inconsistent. By moving the authority of time critical updates to the client, we avoid the added delay caused by requiring the server to validate these updates. For example, in the flag pickup game, the clients will be given the authority to pickup flags only when other players are not within a range that they could imminently pickup a flag. Only when two or more players are close by so that more than one player may claim to have picked up a flag, the authority for movement and flag pickup would go to the central server so that the game state does not become inconsistent. We believe that in a large game-space where a player is often in a very wide open and sparsely populated area such as those often seen in the game Second Life [13], this hybrid architecture would be very beneficial because of the long periods that the client would have authority to send movement updates for itself. 
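A minimal Python sketch of the dynamic authority rule for item pickup described above is given below. The contention radius, player coordinates and return values are illustrative assumptions rather than values taken from an actual game; the point is only that authority moves to the server when contention is possible.

import math

def pickup_authority(requesting_player, players, flag_pos, contention_radius=5.0):
    """Hybrid rule: the client keeps authority for a flag pickup only when no other
    player is close enough to imminently pick up the same flag; otherwise the
    decision is deferred to the impartial server."""
    for pid, pos in players.items():
        if pid == requesting_player:
            continue
        if math.dist(pos, flag_pos) <= contention_radius:
            return "server"      # possible contention: let the server decide
    return "client"              # sparse area: client decides, server can verify later

players = {"a": (1.0, 1.0), "b": (100.0, 40.0)}
pickup_authority("a", players, flag_pos=(1.5, 1.0))   # "client": b is far away
players["b"] = (2.0, 1.0)
pickup_authority("a", players, flag_pos=(1.5, 1.0))   # "server": b could also reach the flag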
This has two advantages over the central-authority approach: it distributes the processing load down to the clients for the majority of events and it allows for a more responsive game that does not need to wait on a server for validation. We believe that our notion of authority can be used to develop a globally consistent state model of the evolution of a game. Fundamentally, the consistent state of the system is the one that is defined by the server. However, if local authority is delegated to the client, the client's state is superimposed on the server's state to determine the correct global state. For example, if the client is authoritative with respect to movement of a player, then the trajectory of the player is the "true" trajectory and must replace the server's view of the player's trajectory. Note that this could be problematic and lead to temporal inconsistency only if, for example, two or more entities are moving in the same region and can interact with each other. In this situation, the client authority must revert to the server and the server would then make the decisions. Thus, the client is only authoritative in situations where there is no potential to imminently interact with other players. We believe that in complex MMOGs, when allowing more rapid movement, it will still be the case that local authority is possible for significant spans of game time. Note that it might also be possible to minimize the occurrences of the "Dead Man Shooting" problem described in [14]. This could be done by allowing the client to be authoritative for more actions such as its player's own death and disallowing other players from making preemptive decisions based on a remote player. One reason why the client-server based architecture has gained popularity is the belief that the fastest route to the other clients is through the server. While this may be true, we aim to create a new architecture where decisions do not always have to be made at the game server and the fastest route to a client is actually through a communication proxy located close to the client. That is, the shortest distance in our architecture is not through the game server but through the communication proxy. After a client makes an action such as movement, it will simultaneously distribute the update directly to the other clients and the game server by way of the communication proxy. We note, however, that our architecture is not practical for a game where game players set up their own servers in an ad-hoc fashion and do not have access to proxies at the various ISPs. This proxy and distributed authority architecture can be used to its full potential only when the proxies can be placed at strategic places within the main ISPs and evenly distributed geographically. Our game architecture does not assume that the client is not to be trusted. We are designing our architecture on the assumption that there will be sufficient cheat-deterring and detection mechanisms present so that it will be both undesirable and very difficult to cheat [15]. In our proposed approach, we can make the games cheat resilient by using the proxy-based architecture when client authoritative decisions take place. In order to achieve this, the proxies have to be game cognizant so that decisions made by a client can be cross checked by a proxy that the client connects to. For example, assume that in a game a plane controlled by a client moves in the game space.
It is not possible for the plane to go through a building unharmed. In a client authoritative mode, it is possible for the client to cheat by maneuvering the plane through a building and claiming the plane to be unharmed. However, when such move is published by the client, the proxy, being aware of the game space that the plane is in, can quickly check that the client has misused the authority and then can block such move. This allows us to distribute authority to make decisions about the clients. In the following section we use a multiplayer game called RPGQuest to implement different authoritative schemes and discuss our experience with the implementation. Our implementation shows the viability of our proposed solution. 4. IMPLEMENTATION EXPERIENCE We have experimented with the authority assignment mechanism described in the last section by implementing the mechanisms in a game called RPGQuest. A screen shot from this game is shown in Figure 3. The purpose of the implementation is to test its feasibility in a real game. RPGQuest is a basic first person game where the player can move around a three dimensional environment. Objects are placed within the game world and players gain points for each object that is collected. The game clients connect to a game server which allows many players to coexist in the same game world. The basic functionality of this game is representative of current online first person shooter and role playing games. The game uses the DirectX 8 graphics API and DirectPlay networking API. In this section we will discuss the three different versions of the game that we experimented with. Figure 3: The RPGQuest Game. The first version of the game, which is the original implementation of RPGQuest, was created with a completely authoritative server and a non-authoritative client. Authority given to the server includes decisions of when a player collides with static objects and other players and when a player picks up an object. This version of the game performs well up to 100ms round-trip latency between the client and the server. There is little lag between the time player hits a wall and the time the server corrects the player's position. However, as more latency is induced between the client and server, the game becomes increasingly difficult to play. With the increased latency, the messages coming from the server correcting the player when it runs into a wall are not received fast enough. This causes the player to pass through the wall for the period that it is waiting for the server to resolve the collision. When studying the source code of the original version of the RPGQuest game, there is a substantial delay that is unavoidable each time an action must be validated by the server. Whenever a movement update is sent to the server, the client must then wait whatever the round trip delay is, plus some processing time at the server in order to receive its validated or corrected position. This is obviously unacceptable in any game where movement or any other rapidly changing state information must be validated and disseminated to the other clients rapidly. In order to get around this problem, we developed a second version of the game, which gives all authority to the client. The client was delegated the authority to validate its own movement and the authority to pick up objects without validation from the server. 
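The client-side movement check that this second version delegates to the client can be sketched as follows, assuming, purely for illustration, that the static geometry is available to the client as axis-aligned bounding boxes; the actual collision code in RPGQuest is not shown in this paper, so the function and data layout here are assumptions.

def client_validate_move(new_pos, static_objects):
    """Client-authoritative movement check: accept the move only if the new position
    does not intersect any wall or other static object known to the client."""
    x, y = new_pos
    for (xmin, ymin, xmax, ymax) in static_objects:   # assumed axis-aligned boxes
        if xmin <= x <= xmax and ymin <= y <= ymax:
            return False       # collision resolved locally, no round trip to the server
    return True

walls = [(0.0, 0.0, 10.0, 1.0), (0.0, 0.0, 1.0, 10.0)]
client_validate_move((5.0, 0.5), walls)   # False: inside the first wall
client_validate_move((5.0, 5.0), walls)   # True: free space, the update is sent out directly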
In this version of the game when a player moves around the game space, the client validates that the player's new position does not intersect with any walls or static objects. A position update is then sent to the server which then immediately forwards the update to the other clients within the region. The update does not have to go through any extra processing or validation. This game model of complete authority given to the client is beneficial with respect to movement. When latencies of 100ms and up are induced into the link between the client and server, the game is still playable since time critical aspects of the game like movement do not have to wait on a reply from the server. When a player hits a wall, the collision is processed locally and does not have to wait on the server to resolve the collision. Although game playing experience with respect to responsiveness is improved when the authority for movement is given to the client, there are still aspects of games that do not benefit from this approach. The most important of these is consistency. Although actions such as movement are time critical, other actions are not as time critical, but instead require consistency among the player states. An example of a game aspect that requires consistency is picking up objects that should only be possessed by a single player. In our client authoritative version of RPGQuest clients send their own updates to all other players whenever they pick up an object. From our tests we have realized this is a problem because when there is a realistic amount of latency between the client and server, it is possible for two players to pick up the same object at the same time. When two players attempt to pick up an object at physical times which are close to each other, the update sent by the player who picked up the object first will not reach the second player in time for it to see that the object has already been claimed. The two players will now both think that they own the object. This is why a server is still needed to be authoritative in this situation and maintain consistency throughout the players. These two versions of the RPGQuest game has showed us why it is necessary to mix the two absolute models of authority. It is better to place authority on the client for quickly changing actions such as movement. It is not desirable to have to wait for server validation on a movement that could change before the reply is even received. It is also sometimes necessary to place consistency over efficiency in aspects of the game that cannot tolerate any inconsistencies such as object ownership. We believe that as the interactivity of games increases, our architecture of mixed authority that does not rely on server validation will be necessary. To test the benefits and show the feasibility of our architecture of mixed authority, we developed a third version of the RPGQuest game that distributed authority for different actions between the client and server. In this version, in the interest of consistency, the server remained authoritative for deciding who picked up an object. The client was given full authority to send positional updates to other clients and verify its own position without the need to verify its updates with the server. When the player tries to move their avatar, the client verifies that the move will not cause it to move through a wall. A positional update is then sent to the server which then simply forwards it to the other clients within the region. 
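A minimal Python sketch of this mixed-authority server behavior is shown below: positional updates are relayed without validation, while object pickups are serialized so that only one claim can succeed. The class, method and message names are illustrative assumptions, not the actual RPGQuest code.

class HybridServer:
    """Mixed authority as in the third version of the game: positional updates from
    clients are forwarded without validation, while object ownership is still decided
    here so that two players can never both claim the same object."""
    def __init__(self):
        self.owners = {}                          # object id -> owning player id

    def on_position_update(self, player, pos, clients_in_region):
        # The client already validated its own move; just relay it.
        return [("to_client", c, ("position", player, pos))
                for c in clients_in_region if c != player]

    def on_pickup_request(self, player, obj_id):
        if obj_id in self.owners:                 # the first valid claim wins
            return [("to_client", player, ("pickup_denied", obj_id))]
        self.owners[obj_id] = player
        return [("to_client", player, ("pickup_granted", obj_id))]

srv = HybridServer()
srv.on_position_update("a", (3.0, 4.0), ["a", "b", "c"])
srv.on_pickup_request("a", "flag_1")   # granted
srv.on_pickup_request("b", "flag_1")   # denied: a already owns it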
Forwarding the update in this way eliminates any extra processing delay that would occur at the server and is also a more accurate means of verification since the client has a more accurate view of its own state than the server. This version of the RPGQuest game, where authority is distributed between the client and server, is an improvement over the server-authoritative version. The client has no delay in waiting for an update for its own position and other clients do not have to wait on the server to verify the update. The inconsistencies where two clients can pick up the same object in the client authoritative architecture are not present in this version of the client. However, the benefits of mixed authority will not truly be seen until an implementation of our communication proxy is integrated into the game. With the addition of the communication proxy, after the client verifies its own positional updates it will be able to send the update to all clients within its region through a low latency link instead of having to first go through the game server, which could possibly be in a very remote location. The coding of the different versions of the game was very simple. The complexity of the client increased very slightly in the client authoritative and hybrid models. The original "dumb" clients of RPGQuest know the position of other players; they are not just sent a screen snapshot from the server. The server updates each client with the position of all nearby clients. The "dumb" clients use client-side prediction to fill in the gaps between the updates they receive. The only extra processing the client has to do in the hybrid architecture is to compare its current position to the positions of all objects (walls, boxes, etc.) in its area. This obviously means that each client will have to already have downloaded the locations of all static objects within its current region. 5. RELATED WORK It has been noted that in addition to latency, bandwidth requirements also dictate the type of gaming architecture to be used. In [16], different types of architectures are studied with respect to bandwidth efficiencies and latency. It is pointed out that Central Server architectures are not scalable because of bandwidth requirements at the server but the overhead for consistency checks is limited as the checks are performed at the server. A Peer-to-Peer architecture, on the other hand, is scalable but there is a significant overhead for consistency checks as this is required at every player. The paper proposes a hybrid architecture which is Peer-to-Peer in terms of message exchange (and thereby is scalable) where a Central Server is used for off-line consistency checks (thereby mitigating consistency check overhead). The paper provides an implementation example of BZFlag which is a peer-to-peer game that is modified to transfer all authority to a central server. In essence, this paper advocates an authority architecture which is server based even for peer-to-peer games, but does not consider division of authority between a client and a server to minimize latency which could affect game playing experience even with the type of latency found in server based games (where all authority is with the server). There is also previous work that has suggested that proxy based architectures be used to alleviate the latency problem and in addition use proxies to provide congestion control and cheat-proof mechanisms in distributed multi-player games [17].
In [18], a proxy server-network architecture is presented that is aimed at improving scalability of multiplayer games and lowering latency in server-client data transmission. The main goal of this work is to improve scalability of First-Person Shooter (FPS) and RPG games. A further objective is to improve the responsiveness of MMOGs by providing low latency communications between the client and server. The architecture uses interconnected proxy servers that each have a full view of the global game state. Proxy servers are located at various different ISPs. It is mentioned in this work that dividing the game space among multiple game servers such as the federated model presented in [19] is inefficient for a relatively fast game flow and that the proposed architecture alleviates this problem because users do not have to connect to a different server whenever they cross the server boundary. This architecture still requires all proxies to be aware of the overall game state over the whole game space, unlike our work where we require the proxies to maintain only partial state information about the game space. Fidelity-based agent architectures have been proposed in [20, 21]. These works propose a distributed client-server architecture for distributed interactive simulations where different servers are responsible for different portions of the game space. When an object moves from one portion to another, there is a handoff from one server to another. Although these works propose an architecture where different portions of the simulation space are managed by different servers, they do not address the issue of decreasing the bandwidth required through the use of communication proxies. Our work differs from the previous works discussed above by proposing a) a distributed proxy-based architecture to decrease bandwidth requirements at the clients and the servers without requiring the proxies to keep state information about the whole game space, b) a dynamic authority assignment technique to reduce latency (by performing consistency checks locally at the client whenever possible) by splitting the authority between the clients and servers on a per object basis, and c) the observation that cheat detection can be built into the proxies if they are provided more information about the specific game instead of using them purely as communication proxies (although this idea has not been implemented yet and is part of our future work). 6. CONCLUSIONS AND FUTURE WORK In this paper, we first proposed a proxy-based architecture for MMOGs that enables MMOGs to scale to a large number of users by mitigating the need for a large number of transport sessions to be maintained and decreasing both bandwidth overhead and latency of event update. Second, we proposed a mixed authority assignment mechanism that divides authority for making decisions on actions and events within the game between the clients and server and argued how such an authority assignment leads to better game playing experience without sacrificing the consistency of the game. Third, to validate the viability of the mixed authority assignment mechanism, we implemented it within a MMOG called RPGQuest and described our implementation experience. In future work, we propose to implement the communications proxy architecture described in this paper and integrate the mixed authority mechanism within this architecture.
We propose to evaluate the benefits of the proxy-based architecture in terms of scalability, accuracy and responsiveness. We also plan to implement a version of the RPGQuest game with dynamic assignment of authority to allow players the authority to pickup objects when no other players are near. As discussed earlier, this will allow for a more efficient and responsive game in certain situations and alleviate some of the processing load from the server. Also, since so much trust is put into the clients of our architecture, it will be necessary to integrate into the architecture many of the cheat detection schemes that have been proposed in the literature. Software such as Punkbuster [22] and a reputation system like those proposed by [23] and [15] would be integral to the operation of an architecture such as ours which has a lot of trust placed on the client. We further propose to make the proxies in our architecture more game cognizant so that cheat detection mechanisms can be built into the proxies themselves.
Authority Assignment in Distributed Multi-Player Proxy-based Games ABSTRACT We present a proxy-based gaming architecture and authority assignment within this architecture that can lead to better game playing experience in Massively Multi-player Online games. The proposed game architecture consists of distributed game clients that connect to game proxies (referred to as "communication proxies") which forward game related messages from the clients to one or more game servers. Unlike proxy-based architectures that have been proposed in the literature where the proxies replicate all of the game state, the communication proxies in the proposed architecture support clients that are in proximity to it in the physical network and maintain information about selected portions of the game space that are relevant only to the clients that they support. Using this architecture, we propose an authority assignment mechanism that divides the authority for deciding the outcome of different actions/events that occur within the game between client and servers on a per action/event basis. We show that such division of authority leads to a smoother game playing experience by implementing this mechanism in a massively multi-player online game called RPGQuest. In addition, we argue that cheat detection techniques can be easily implemented at the communication proxies if they are made aware of the game-play mechanics. 1. INTRODUCTION In Massively Multi-player On-line Games (MMOG), game clients who are positioned across the Internet connect to a game server to interact with other clients in order to be part of the game. In current architectures, these interactions are direct in that the game clients and the servers exchange game messages with each other. In addition, current MMOGs delegate all authority to the game server to make decisions about the results pertaining to the actions that game clients take and also to decide upon the result of other game related events. Such centralized authority has been implemented with the claim that this improves the security and consistency required in a gaming environment. A number of works have shown the effect of network latency on distributed multi-player games [1, 2, 3, 4]. It has been shown that network latency has real impact on practical game playing experience [3, 5]. Some types of games can function quite well even in the presence of large delays. For example, [4] shows that in a modern RPG called Everquest 2, the "breakpoint" of the game when adding artificial latency was 1250ms. This is accounted to the fact that the combat system used in Everquest 2 is queueing based and has very low interaction. For example, a player queues up 4 or 5 spells they wish to cast, each of these spells take 1-2 seconds to actually perform, giving the server plenty of time to validate these actions. But there are other games such as FPS games that break even in the presence of moderate network latencies [3, 5]. Latency compensation techniques have been proposed to alleviate the effect of latency [1, 6, 7] but it is obvious that if MMOGs are to increase in interactivity and speed, more architectures will have to be developed that address responsiveness, accuracy and consistency of the gamestate. In this paper, we propose two important features that would make game playing within MMOGs more responsive for movement and scalable. 
First, we propose that centralized server-based architectures be made hierarchical through the introduction of communication proxies so that game updates made by clients that are time sensitive, such as movement, can be more efficiently distributed to other players within their game-space. Second, we propose that assignment of authority in terms of who makes the decision on client actions such as object pickups and hits, and collisions between players, be distributed between the clients and the servers in order to distribute the computing load away from the central server. In order to move towards more complex real-time networked games, we believe that definitions of authority must be refined. Most currently implemented MMOGs have game servers that have almost absolute authority. We argue that there is no single consistent view of the virtual game space that can be maintained on any one component within a network that has significant latency, such as the one that many MMOG players would experience. We believe that in most cases, the client with the most accurate view of an entity is the best suited to make decisions for that entity when the causality of that action will not immediately affect any other players. In this paper we define what it means to have authority within the context of events and objects in a virtual game space. We then show the benefits of delegating authority for different actions and game events between the clients and server. In our model, the game space consists of game clients (representing the players) and objects that they control. We divide the client actions and game events (we will collectively refer to these as "events") such as collisions, hits etc. into three different categories, a) events for which the game client has absolute authority, b) events for which the game server has absolute authority, and c) events for which the authority changes dynamically from client to the server and vice-versa. Depending on who has the authority, that entity will make decisions on the events that happen within a game space. We propose that authority for all decisions that pertain to a single player or object in the game that neither affects the other players or objects, nor are affected by the actions of other players be delegated to that player's game client. These type of decisions would include collision detection with static objects within the virtual game space and hit detection with linear path bullets (whose trajectory is fixed and does not change with time) fired by other players. Authority for decisions that could be affected by two or more players should be delegated to the impartial central server, in some cases, to ensure that no conflicts occur and in other cases can be delegated to the clients responsible for those players. For example, collision detection of two players that collide with each other and hit detection of non-linear bullets (that changes trajectory with time) should be delegated to the server. Decision on events such as item pickup (for example, picking up items in a game to accumulate points) should be delegated to a server if there are multiple players within close proximity of an item and any one of the players could succeed in picking the item; for item pick-up contention where the client realizes that no other player, except its own player, is within a certain range of the item, the client could be delegated the responsibility to claim the item. The client's decision can always be accurately verified by the server. 
In summary, we argue that while current authority models that only delegate responsibility to the server to make authoritative decisions on events are more secure than allowing the clients to make the decisions, these types of models add undesirable delays to events that could very well be decided by the clients without any inconsistency being introduced into the game. As networked games become more complex, our architecture will become more applicable. This architecture is applicable for massively multiplayer games where the speed and accuracy of game-play are a major concern while consistency between player game-states is still desired. We propose that a mixed authority assignment mechanism such as the one outlined above be implemented in high interaction MMOGs. Our paper has the following contributions. First, we propose an architecture that uses communication proxies to enable clients to connect to the game server. A communication proxy in the proposed architecture maintains information only about portions of the game space that are relevant to clients connected to it and is able to process the movement information of objects and players within these portions. In addition, it is capable of multicasting this information only to a relevant subset of other communication proxies. These functionalities of a communication proxy lead to a decrease in latency of event update and subsequently, better game playing experience. Second, we propose a mixed authority assignment mechanism as described above that improves game playing experience. Third, we implement the proposed mixed authority assignment mechanism within a MMOG called RPGQuest [8] to validate its viability within MMOGs. In Section 2, we describe the proxy-based game architecture in more detail and illustrate its advantages. In Section 3, we provide a generic description of the mixed authority assignment mechanism and discuss how it improves game playing experience. In Section 4, we show the feasibility of implementing the proposed mixed authority assignment mechanism within existing MMOGs by describing a proof-of-concept implementation within an existing MMOG called RPGQuest. Section 5 discusses related work. In Section 6, we present our conclusions and discuss future work. 2. PROXY-BASED GAME ARCHITECTURE 3. ASSIGNMENT OF AUTHORITY 4. IMPLEMENTATION EXPERIENCE 5. RELATED WORK It has been noted that in addition to latency, bandwidth requirements also dictate the type of gaming architecture to be used. In [16], different types of architectures are studied with respect to bandwidth efficiencies and latency. It is pointed out that Central Server architectures are not scalable because of bandwidth requirements at the server but the overhead for consistency checks is limited as the checks are performed at the server. A Peer-to-Peer architecture, on the other hand, is scalable but there is a significant overhead for consistency checks as this is required at every player. The paper proposes a hybrid architecture which is Peer-to-Peer in terms of message exchange (and thereby is scalable) where a Central Server is used for off-line consistency checks (thereby mitigating consistency check overhead). The paper provides an implementation example of BZFlag which is a peer-to-peer game that is modified to transfer all authority to a central server.
In essence, this paper advocates an authority architecture which is server based even for peer-to-peer games, but does not consider division of authority between a client and a server to minimize latency which could affect game playing experience even with the type of latency found in server based games (where all authority is with the server). There is also previous work that has suggested that proxy based architectures be used to alleviate the latency problem and in addition use proxies to provide congestion control and cheat-proof mechanisms in distributed multi-player games [17]. In [18], a proxy server-network architecture is presented that is aimed at improving scalability of multiplayer games and lowering latency in server-client data transmission. The main goal of this work is to improve scalability of First-Person Shooter (FPS) and RPG games. A further objective is to improve the responsiveness of MMOGs by providing low latency communications between the client and server. The architecture uses interconnected proxy servers that each have a full view of the global game state. Proxy servers are located at various different ISPs. It is mentioned in this work that dividing the game space among multiple game servers such as the federated model presented in [19] is inefficient for a relatively fast game flow and that the proposed architecture alleviates this problem because users do not have to connect to a different server whenever they cross the server boundary. This architecture still requires all proxies to be aware of the overall game state over the whole game space, unlike our work where we require the proxies to maintain only partial state information about the game space. Fidelity-based agent architectures have been proposed in [20, 21]. These works propose a distributed client-server architecture for distributed interactive simulations where different servers are responsible for different portions of the game space. When an object moves from one portion to another, there is a handoff from one server to another. Although these works propose an architecture where different portions of the simulation space are managed by different servers, they do not address the issue of decreasing the bandwidth required through the use of communication proxies. Our work differs from the previous works discussed above by proposing a) a distributed proxy-based architecture to decrease bandwidth requirements at the clients and the servers without requiring the proxies to keep state information about the whole game space, b) a dynamic authority assignment technique to reduce latency (by performing consistency checks locally at the client whenever possible) by splitting the authority between the clients and servers on a per object basis, and c) the observation that cheat detection can be built into the proxies if they are provided more information about the specific game instead of using them purely as communication proxies (although this idea has not been implemented yet and is part of our future work). 6. CONCLUSIONS AND FUTURE WORK In this paper, we first proposed a proxy-based architecture for MMOGs that enables MMOGs to scale to a large number of users by mitigating the need for a large number of transport sessions to be maintained and decreasing both bandwidth overhead and latency of event update.
Second, we proposed a mixed authority assignment mechanism that divides authority for making decisions on actions and events within the game between the clients and server and argued how such an authority assignment leads to better game playing experience without sacrificing the consistency of the game. Third, to validate the viability of the mixed authority assignment mechanism, we implemented it within a MMOG called RPGQuest and described our implementation experience. In future work, we propose to implement the communications proxy architecture described in this paper and integrate the mixed authority mechanism within this architecture. We propose to evaluate the benefits of the proxy-based architecture in terms of scalability, accuracy and responsiveness. We also plan to implement a version of the RPGQuest game with dynamic assignment of authority to allow players the authority to pickup objects when no other players are near. As discussed earlier, this will allow for a more efficient and responsive game in certain situations and alleviate some of the processing load from the server. Also, since so much trust is put into the clients of our architecture, it will be necessary to integrate into the architecture many of the cheat detection schemes that have been proposed in the literature. Software such as Punkbuster [22] and a reputation system like those proposed by [23] and [15] would be integral to the operation of an architecture such as ours which has a lot of trust placed on the client. We further propose to make the proxies in our architecture more game cognizant so that cheat detection mechanisms can be built into the proxies themselves.
Authority Assignment in Distributed Multi-Player Proxy-based Games ABSTRACT We present a proxy-based gaming architecture and authority assignment within this architecture that can lead to better game playing experience in Massively Multi-player Online games. The proposed game architecture consists of distributed game clients that connect to game proxies (referred to as "communication proxies") which forward game related messages from the clients to one or more game servers. Unlike proxy-based architectures that have been proposed in the literature where the proxies replicate all of the game state, the communication proxies in the proposed architecture support clients that are in proximity to it in the physical network and maintain information about selected portions of the game space that are relevant only to the clients that they support. Using this architecture, we propose an authority assignment mechanism that divides the authority for deciding the outcome of different actions/events that occur within the game between client and servers on a per action/event basis. We show that such division of authority leads to a smoother game playing experience by implementing this mechanism in a massively multi-player online game called RPGQuest. In addition, we argue that cheat detection techniques can be easily implemented at the communication proxies if they are made aware of the game-play mechanics. 1. INTRODUCTION In Massively Multi-player On-line Games (MMOG), game clients who are positioned across the Internet connect to a game server to interact with other clients in order to be part of the game. In current architectures, these interactions are direct in that the game clients and the servers exchange game messages with each other. In addition, current MMOGs delegate all authority to the game server to make decisions about the results pertaining to the actions that game clients take and also to decide upon the result of other game related events. Such centralized authority has been implemented with the claim that this improves the security and consistency required in a gaming environment. A number of works have shown the effect of network latency on distributed multi-player games [1, 2, 3, 4]. It has been shown that network latency has real impact on practical game playing experience [3, 5]. Some types of games can function quite well even in the presence of large delays. For example, [4] shows that in a modern RPG called Everquest 2, the "breakpoint" of the game when adding artificial latency was 1250ms. But there are other games such as FPS games that break even in the presence of moderate network latencies [3, 5]. In this paper, we propose two important features that would make game playing within MMOGs more responsive for movement and scalable. First, we propose that centralized server-based architectures be made hierarchical through the introduction of communication proxies so that game updates made by clients that are time sensitive, such as movement, can be more efficiently distributed to other players within their game-space. Second, we propose that assignment of authority in terms of who makes the decision on client actions such as object pickups and hits, and collisions between players, be distributed between the clients and the servers in order to distribute the computing load away from the central server. In order to move towards more complex real-time networked games, we believe that definitions of authority must be refined. 
Most currently implemented MMOGs have game servers that have almost absolute authority. We argue that there is no single consistent view of the virtual game space that can be maintained on any one component within a network that has significant latency, such as the one that many MMOG players would experience. In this paper we define what it means to have authority within the context of events and objects in a virtual game space. We then show the benefits of delegating authority for different actions and game events between the clients and server. In our model, the game space consists of game clients (representing the players) and objects that they control. Depending on who has the authority, that entity will make decisions on the events that happen within a game space. We propose that authority for all decisions that pertain to a single player or object in the game that neither affects the other players or objects, nor are affected by the actions of other players be delegated to that player's game client. These type of decisions would include collision detection with static objects within the virtual game space and hit detection with linear path bullets (whose trajectory is fixed and does not change with time) fired by other players. Authority for decisions that could be affected by two or more players should be delegated to the impartial central server, in some cases, to ensure that no conflicts occur and in other cases can be delegated to the clients responsible for those players. The client's decision can always be accurately verified by the server. As networked games become more complex, our architecture will become more applicable. This architecture is applicable for massively multiplayer games where the speed and accuracy of game-play are a major concern while consistency between player game-states is still desired. We propose that a mixed authority assignment mechanism such as the one outlined above be implemented in high interaction MMOGs. Our paper has the following contributions. First we propose an architecture that uses communication proxies to enable clients to connect to the game server. A communication proxy in the proposed architecture maintains information only about portions of the game space that are relevant to clients connected to it and is able to process the movement information of objects and players within these portions. In addition, it is capable of multicasting this information only to a relevant subset of other communication proxies. These functionalities of a communication proxy leads to a decrease in latency of event update and subsequently, better game playing experience. Second, we propose a mixed authority assignment mechanism as described above that improves game playing experience. Third, we implement the proposed mixed authority assignment mechanism within a MMOG called RPGQuest [8] to validate its viability within MMOGs. In Section 2, we describe the proxy-based game architecture in more detail and illustrate its advantages. In Section 3, we provide a generic description of the mixed authority assignment mechanism and discuss how it improves game playing experience. In Section 4, we show the feasibility of implementing the proposed mixed authority assignment mechanism within existing MMOGs by describing a proof-of-concept implementation within an existing MMOG called RPGQuest. Section 5 discusses related work. In Section 6, we present our conclusions and discuss future work. 5. 
RELATED WORK It has been noted that in addition to latency, bandwidth requirements also dictate the type of gaming architecture to be used. In [16], different types of architectures are studied with respect to bandwidth efficiencies and latency. It is pointed out that Central Server architectures are not scalable because of bandwidth requirements at the server but the overhead for consistency checks is limited as the checks are performed at the server. A Peer-to-Peer architecture, on the other hand, is scalable but there is a significant overhead for consistency checks as this is required at every player. The paper proposes a hybrid architecture which is Peer-to-Peer in terms of message exchange (and thereby is scalable) where a Central Server is used for off-line consistency checks (thereby mitigating consistency check overhead). The paper provides an implementation example of BZFlag which is a peer-to-peer game that is modified to transfer all authority to a central server. In essence, this paper advocates an authority architecture which is server based even for peer-to-peer games, but does not consider division of authority between a client and a server to minimize latency which could affect game playing experience even with the type of latency found in server based games (where all authority is with the server). There is also previous work that has suggested that proxy based architectures be used to alleviate the latency problem and in addition use proxies to provide congestion control and cheat-proof mechanisms in distributed multi-player games [17]. In [18], a proxy server-network architecture is presented that is aimed at improving scalability of multiplayer games and lowering latency in server-client data transmission. The main goal of this work is to improve scalability of First-Person Shooter (FPS) and RPG games. A further objective is to improve the responsiveness of MMOGs by providing low latency communications between the client and server. The architecture uses interconnected proxy servers that each have a full view of the global game state. Proxy servers are located at various different ISPs. This architecture still requires all proxies to be aware of the overall game state over the whole game space, unlike our work where we require the proxies to maintain only partial state information about the game space. Fidelity-based agent architectures have been proposed in [20, 21]. These works propose a distributed client-server architecture for distributed interactive simulations where different servers are responsible for different portions of the game space. When an object moves from one portion to another, there is a handoff from one server to another. Although these works propose an architecture where different portions of the simulation space are managed by different servers, they do not address the issue of decreasing the bandwidth required through the use of communication proxies. 6. CONCLUSIONS AND FUTURE WORK Second, we proposed a mixed authority assignment mechanism that divides authority for making decisions on actions and events within the game between the clients and server and argued how such an authority assignment leads to better game playing experience without sacrificing the consistency of the game. Third, to validate the viability of the mixed authority assignment mechanism, we implemented it within a MMOG called RPGQuest and described our implementation experience.
In future work, we propose to implement the communications proxy architecture described in this paper and integrate the mixed authority mechanism within this architecture. We propose to evaluate the benefits of the proxy-based architecture in terms of scalability, accuracy and responsiveness. We also plan to implement a version of the RPGQuest game with dynamic assignment of authority to allow players the authority to pickup objects when no other players are near. As discussed earlier, this will allow for a more efficient and responsive game in certain situations and alleviate some of the processing load from the server. Also, since so much trust is put into the clients of our architecture, it will be necessary to integrate into the architecture many of the cheat detection schemes that have been proposed in the literature. We further propose to make the proxies in our architecture more game cognizant so that cheat detection mechanisms can be built into the proxies themselves.
C-49
Evaluating Opportunistic Routing Protocols with Large Realistic Contact Traces
Traditional mobile ad-hoc network (MANET) routing protocols assume that contemporaneous end-to-end communication paths exist between data senders and receivers. In some mobile ad-hoc networks with a sparse node population, an end-to-end communication path may break frequently or may not exist at any time. Many routing protocols have been proposed in the literature to address the problem, but few were evaluated in a realistic opportunistic network setting. We use simulation and contact traces (derived from logs in a production network) to evaluate and compare five existing protocols: direct-delivery, epidemic, random, PRoPHET, and Link-State, as well as our own proposed routing protocol. We show that the direct delivery and epidemic routing protocols suffer either low delivery ratio or high resource usage, and other protocols make tradeoffs between delivery ratio and resource usage.
[ "rout protocol", "rout", "contact trace", "opportunist network", "simul", "prophet", "delai-toler network", "mobil opportunist network", "frequent link break", "end-to-end path", "random mobil model", "realist mobil trace", "unicast", "transfer probabl", "direct-deliveri protocol", "epidem protocol", "replic strategi", "past encount and transit histori" ]
[ "P", "P", "P", "P", "P", "P", "M", "R", "M", "R", "M", "R", "U", "U", "R", "R", "U", "M" ]
Evaluating Opportunistic Routing Protocols with Large Realistic Contact Traces Libo Song and David F. Kotz Institute for Security Technology Studies (ISTS) Department of Computer Science, Dartmouth College, Hanover, NH, USA 03755 ABSTRACT Traditional mobile ad-hoc network (MANET) routing protocols assume that contemporaneous end-to-end communication paths exist between data senders and receivers. In some mobile ad-hoc networks with a sparse node population, an end-to-end communication path may break frequently or may not exist at any time. Many routing protocols have been proposed in the literature to address the problem, but few were evaluated in a realistic opportunistic network setting. We use simulation and contact traces (derived from logs in a production network) to evaluate and compare five existing protocols: direct-delivery, epidemic, random, PRoPHET, and Link-State, as well as our own proposed routing protocol. We show that the direct delivery and epidemic routing protocols suffer either low delivery ratio or high resource usage, and other protocols make tradeoffs between delivery ratio and resource usage. Categories and Subject Descriptors C.2.4 [Computer Systems Organization]: Computer Communication Networks-Distributed Systems General Terms Performance, Design 1. INTRODUCTION Mobile opportunistic networks are one kind of delay-tolerant network (DTN) [6]. Delay-tolerant networks provide service despite long link delays or frequent link breaks. Long link delays happen in networks with communication between nodes at a great distance, such as interplanetary networks [2]. Link breaks are caused by nodes moving out of range, environmental changes, interference from other moving objects, radio power-offs, or failed nodes. For us, mobile opportunistic networks are those DTNs with a sparse node population and frequent link breaks caused by power-offs and the mobility of the nodes. Mobile opportunistic networks have received increasing interest from researchers. In the literature, these networks include mobile sensor networks [25], wild-animal tracking networks [11], pocket-switched networks [8], and transportation networks [1, 14]. We expect to see more opportunistic networks when the one-laptop-per-child (OLPC) project [18] starts rolling out inexpensive laptops with wireless networking capability for children in developing countries, where often no infrastructure exists. Opportunistic networking is one promising approach for those children to exchange information. One fundamental problem in opportunistic networks is how to route messages from their source to their destination. Mobile opportunistic networks differ from the Internet in that disconnections are the norm instead of the exception. In mobile opportunistic networks, communication devices can be carried by people [4], vehicles [1] or animals [11]. Some devices can form a small mobile ad-hoc network when the nodes move close to each other. But a node may frequently be isolated from other nodes. Note that traditional Internet routing protocols and ad-hoc routing protocols, such as AODV [20] or DSDV [19], assume that a contemporaneous end-to-end path exists, and thus fail in mobile opportunistic networks. Indeed, there may never exist an end-to-end path between two given devices. In this paper, we study protocols for routing messages between wireless networking devices carried by people.
We assume that people send messages to other people occasionally, using their devices; when no direct link exists between the source and the destination of the message, other nodes may relay the message to the destination. Each device represents a unique person (the case in which a device may be carried by multiple people is out of the scope of this paper). Each message is destined for a specific person and thus for a specific node carried by that person. Although one person may carry multiple devices, we assume that the sender knows which device is the best one to receive the message. We do not consider multicast or geocast in this paper.

Many routing protocols have been proposed in the literature. Few of them were evaluated in realistic network settings, or even in realistic simulations, due to the lack of any realistic people mobility model. Random-walk or random way-point mobility models are often used to evaluate the performance of those routing protocols. Although these synthetic mobility models have received extensive interest from mobile ad-hoc network researchers [3], they do not reflect people's mobility patterns [9]. Realizing the limitations of using random mobility models in simulations, a few researchers have studied routing protocols in mobile opportunistic networks with realistic mobility traces. Chaintreau et al. [5] theoretically analyzed the impact of routing algorithms over a model derived from a realistic mobility data set. Su et al. [22] simulated a set of routing protocols in a small experimental network. Those studies help researchers better understand the theoretical limits of opportunistic networks, and the routing-protocol performance in a small network (20-30 nodes).

Because deploying and experimenting with large-scale mobile opportunistic networks is difficult, we too resort to simulation. Instead of using a complex mobility model to mimic people's mobility patterns, we used mobility traces collected in a production wireless network at Dartmouth College to drive our simulation. Our message-generation model, however, was synthetic. To the best of our knowledge, we are the first to simulate the effect of routing protocols in a large-scale mobile opportunistic network, using realistic contact traces derived from real traces of a production network with more than 5,000 users. Using realistic contact traces, we evaluate the performance of three naive routing protocols (direct-delivery, epidemic, and random) and two prediction-based routing protocols, PRoPHET [16] and Link-State [22]. We also propose a new prediction-based routing protocol, and compare it to the above in our evaluation.

2. ROUTING PROTOCOL
A routing protocol is designed for forwarding messages from one node (source) to another node (destination). Any node may generate messages for any other node, and may carry messages destined for other nodes. In this paper, we consider only messages that are unicast (single destination). DTN routing protocols can be described in part by their transfer probability and replication probability; that is, when one node meets another node, what is the probability that a message should be transferred, and if so, whether the sender should retain its copy. Two extremes are the direct-delivery protocol and the epidemic protocol. The former transfers with probability 1 when the node meets the destination, 0 for others, and does no replication. The latter uses transfer probability 1 for all nodes and unlimited replication. Both these protocols have their advantages and disadvantages.
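To make the transfer/replication framing concrete, the sketch below (in Python) expresses the two extreme protocols in that vocabulary. This is only an illustration of the abstraction described above, not the authors' simulator; the class, method, and field names are ours.

# A minimal sketch of the transfer/replication abstraction, assuming a simple
# Message record; DirectDelivery and Epidemic are the two extremes named above.
from dataclasses import dataclass

@dataclass
class Message:
    source: str
    destination: str
    payload: bytes = b""

class RoutingProtocol:
    def transfer_probability(self, receiver_id: str, message: Message) -> float:
        """Probability of handing `message` to `receiver_id` during a contact."""
        raise NotImplementedError

    def sender_keeps_copy(self) -> bool:
        """Whether the sender retains its copy after a transfer (replication)."""
        raise NotImplementedError

class DirectDelivery(RoutingProtocol):
    # Transfer only when the contacted node is the destination; no replication.
    def transfer_probability(self, receiver_id, message):
        return 1.0 if receiver_id == message.destination else 0.0
    def sender_keeps_copy(self):
        return False

class Epidemic(RoutingProtocol):
    # Transfer to every contacted node; unlimited replication.
    def transfer_probability(self, receiver_id, message):
        return 1.0
    def sender_keeps_copy(self):
        return True

The remaining protocols in this section can be read as different ways of filling in these two probabilities.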
All other protocols are between the two extremes.

First, we define the notion of contact between two nodes. Then we describe five existing protocols before presenting our own proposal. A contact is defined as a period of time during which two nodes have the opportunity to communicate. Although we are aware that wireless technologies differ, we assume that a node can reliably detect the beginning and end time of a contact with nearby nodes. A node may be in contact with several other nodes at the same time. The contact history of a node is a sequence of contacts with other nodes. Node i has a contact history H_i(j), for each other node j, which denotes the historical contacts between node i and node j. We record the start and end time for each contact; however, the last contacts in the node's contact history may not have ended.

2.1 Direct Delivery Protocol
In this simple protocol, a message is transmitted only when the source node can directly communicate with the destination node of the message. In mobile opportunistic networks, however, the probability for the sender to meet the destination may be low, or even zero.

2.2 Epidemic Routing Protocol
The epidemic routing protocol [23] floods messages into the network. The source node sends a copy of the message to every node that it meets. The nodes that receive a copy of the message also send a copy of the message to every node that they meet. Eventually, a copy of the message arrives at the destination of the message. This protocol is simple, but may use significant resources; excessive communication may drain each node's battery quickly. Moreover, since each node keeps a copy of each message, storage is not used efficiently, and the capacity of the network is limited. At a minimum, each node must expire messages after some amount of time or stop forwarding them after a certain number of hops. After a message expires, the message will not be transmitted and will be deleted from the storage of any node that holds it. An optimization to reduce the communication cost is to transfer index messages before transferring any data message. The index messages contain the IDs of messages that a node currently holds. Thus, by examining the index messages, a node transfers only messages that are not yet held by the other node.

2.3 Random Routing
An obvious approach between the above two extremes is to select a transfer probability between 0 and 1 to forward messages at each contact. We use a simple replication strategy that allows only the source node to make replicas, and limits the replication to a specific number of copies. The message has some chance of being transferred to a highly mobile node, and thus may have a better chance to reach its destination before the message expires.

2.4 PRoPHET Protocol
PRoPHET [16] is a Probabilistic Routing Protocol using History of past Encounters and Transitivity to estimate each node's delivery probability for each other node. When node i meets node j, the delivery probability of node i for j is updated by

p_ij = (1 − p_ij) p_0 + p_ij,    (1)

where p_0 is an initial probability, a design parameter for a given network. Lindgren et al. [16] chose 0.75, as did we in our evaluation. When node i does not meet j for some time, the delivery probability decreases by

p_ij = α^k p_ij,    (2)

where α is the aging factor (α < 1), and k is the number of time units since the last update. The PRoPHET protocol exchanges index messages as well as delivery probabilities.
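The bookkeeping behind Equations (1) and (2) can be sketched as follows; the transitive update of Equation (3), described next, is omitted here. The parameter values follow the text, while the class and method names, and the length of a "time unit", are our own assumptions rather than the PRoPHET implementation.

# Sketch of PRoPHET's per-peer delivery-probability table (Equations (1)-(2)).
P0 = 0.75           # initial probability, as in Lindgren et al.
ALPHA = 0.98        # aging factor, alpha < 1
TIME_UNIT = 3600.0  # seconds per aging time unit (assumed)

class ProphetTable:
    def __init__(self):
        self.p = {}            # delivery probability p_ij, keyed by peer id
        self.last_update = {}  # time of the last aging step, keyed by peer id

    def age(self, j, now):
        """Equation (2): decay p_ij by alpha^k for k elapsed time units."""
        k = int((now - self.last_update.get(j, now)) / TIME_UNIT)
        if k > 0:
            self.p[j] = (ALPHA ** k) * self.p.get(j, 0.0)
        self.last_update[j] = now

    def on_encounter(self, j, now):
        """Equation (1): reinforce p_ij when node j is met."""
        self.age(j, now)
        old = self.p.get(j, 0.0)
        self.p[j] = (1.0 - old) * P0 + old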
When node i receives node j's delivery probabilities, node i may compute the transitive delivery probability through j to z with

p_iz = p_iz + (1 − p_iz) p_ij p_jz β,    (3)

where β is a design parameter for the impact of transitivity; we used β = 0.25, as did Lindgren [16].

2.5 Link-State Protocol
Su et al. [22] use a link-state approach to estimate the weight of each path from the source of a message to the destination. They use the median inter-contact duration or the exponentially aged inter-contact duration as the weight on links. The exponentially aged inter-contact duration of nodes i and j is computed by

w_ij = α w_ij + (1 − α) I,    (4)

where I is the new inter-contact duration and α is the aging factor. Nodes share their link-state weights when they can communicate with each other, and messages are forwarded to the node that has the path with the lowest link-state weight.

3. TIMELY-CONTACT PROBABILITY
We also use historical contact information to estimate the probability of meeting other nodes in the future. But our method differs in that we estimate the contact probability within a period of time. For example, what is the contact probability in the next hour? Neither PRoPHET nor Link-State considers time in this way.

One way to estimate the timely-contact probability is to use the ratio of the total contact duration to the total time. However, this approach does not capture the frequency of contacts. For example, one node may have a long contact with another node, followed by a long non-contact period. A third node may have a short contact with the first node, followed by a short non-contact period. Using the above estimation approach, both examples would have similar contact probability. In the second example, however, the two nodes have more frequent contacts.

We design a method to capture the contact frequency of mobile nodes. For this purpose, we assume that even short contacts are sufficient to exchange messages. (In our simulation, however, we accurately model the communication costs, and some short contacts will not succeed in transferring all messages.)

The probability for node i to meet node j is computed by the following procedure. We divide the contact history H_i(j) into a sequence of n periods of ΔT, starting from the start time (t_0) of the first contact in history H_i(j) to the current time. We number each of the n periods from 0 to n − 1, then check each period. If node i had any contact with node j during a given period m, which is [t_0 + mΔT, t_0 + (m + 1)ΔT), we set the contact status I_m to be 1; otherwise, the contact status I_m is 0. The probability p^(0)_ij that node i meets node j in the next ΔT can be estimated as the average of the contact status in prior intervals:

p^(0)_ij = (1/n) Σ_{m=0}^{n−1} I_m.    (5)

To adapt to changes in contact patterns, and to reduce the storage space for contact histories, a node may discard old contacts from its history; in this situation, the estimate would be based on only the retained history.

The above probability is the direct contact probability of two nodes. We are also interested in the probability that we may be able to pass a message through a sequence of k nodes. We define the k-order probability inductively,

p^(k)_ij = p^(0)_ij + Σ_α p^(0)_iα p^(k−1)_αj,    (6)

where α is any node other than i or j.
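A small sketch of these two estimates appears below, assuming contacts are available as (start, end) timestamp pairs; the function names and the recursive evaluation of Equation (6) are our own illustration, not code from the paper.

# Sketch of the timely-contact estimates: Equation (5) averages per-interval
# contact indicators, and Equation (6) composes direct probabilities into a
# k-order probability through intermediate nodes.
import math

def timely_contact_prob(contacts, t0, now, delta_t):
    """Equation (5). `contacts` is a list of (start, end) times for one peer."""
    n = max(1, math.ceil((now - t0) / delta_t))
    status = [0] * n
    for start, end in contacts:
        first = max(0, int((start - t0) // delta_t))
        last = min(n - 1, int((end - t0) // delta_t))
        for m in range(first, last + 1):
            status[m] = 1            # node i met node j in period m
    return sum(status) / n

def k_order_prob(p0, i, j, k):
    """Equation (6). `p0[i][j]` holds the direct probabilities from Equation (5)."""
    direct = p0.get(i, {}).get(j, 0.0)
    if k == 0:
        return direct
    total = direct
    for a in p0:                      # any node alpha other than i or j
        if a not in (i, j):
            total += p0.get(i, {}).get(a, 0.0) * k_order_prob(p0, a, j, k - 1)
    return total

For example, with ΔT set to one hour, timely_contact_prob returns the fraction of past hours in which the two nodes met at least once.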
3.1 Our Routing Protocol
We first consider the case of a two-hop path, that is, with only one relay node. We consider two approaches: either the receiving neighbor decides whether to act as a relay, or the source decides which neighbors to use as relays.

3.1.1 Receiver Decision
Whenever a node meets other nodes, they exchange all their messages (or, as above, index messages). If the destination of a message is the receiver itself, the message is delivered. Otherwise, if the probability of delivering the message to its destination through this receiver node within ΔT is greater than or equal to a certain threshold, the message is stored in the receiver's storage to forward to the destination. If the probability is less than the threshold, the receiver discards the message. Notice that our protocol replicates the message whenever a promising relay comes along.

3.1.2 Sender Decision
To make decisions, a sender must have information about its neighbors' contact probability with a message's destination. Therefore, meta-data exchange is necessary. When two nodes meet, they exchange a meta-message containing an unordered list of node IDs for which the sender of the meta-message has a contact probability greater than the threshold. After receiving a meta-message, a node checks whether it has any message destined for its neighbor, or for a node in the node list of the neighbor's meta-message. If it has, it sends a copy of the message. When a node receives a message, if the destination of the message is the receiver itself, the message is delivered. Otherwise, the message is stored in the receiver's storage for forwarding to the destination.

3.1.3 Multi-node Relay
When we use more than two hops to relay a message, each node needs to know the contact probabilities along all possible paths to the message destination. Every node keeps a contact probability matrix, in which each cell p_ij is the contact probability between nodes i and j. Each node i computes its own contact probabilities (row i) with other nodes using Equation (5) whenever the node ends a contact with other nodes. Each row of the contact probability matrix has a version number; the version number for row i is increased only when node i updates the matrix entries in row i. Other matrix entries are updated through exchange with other nodes when they meet.

When two nodes i and j meet, they first exchange their contact probability matrices. Node i compares its own contact matrix with node j's matrix. If node j's matrix has a row l with a higher version number, then node i replaces its own row l with node j's row l. Likewise, node j updates its matrix. After the exchange, the two nodes will have identical contact probability matrices.

Next, if a node has a message to forward, the node estimates its neighboring node's order-k contact probability to contact the destination of the message using Equation (6). If p^(k)_ij is above a threshold, or if j is the destination of the message, node i will send a copy of the message to node j.

All the above effort serves to determine the transfer probability when two nodes meet. The replication decision is orthogonal to the transfer decision. In our implementation, we always replicate. Although PRoPHET [16] and Link-State [22] do no replication, as described, we added replication to those protocols for better comparison to our protocol.
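The versioned-matrix exchange of Section 3.1.3 can be sketched as below; the data layout and method names are assumptions made for illustration, not the paper's code.

# Sketch of the versioned contact-probability matrix: each node owns its own
# row, and on contact two nodes adopt whichever copy of each row carries the
# higher version number.

class ContactMatrix:
    def __init__(self, my_id):
        self.my_id = my_id
        self.rows = {my_id: {}}       # rows[i][j] = p_ij
        self.versions = {my_id: 0}    # version number per row

    def update_own_row(self, probabilities):
        """Called when a contact ends; values come from Equation (5)."""
        self.rows[self.my_id] = dict(probabilities)
        self.versions[self.my_id] += 1

    def merge(self, peer_rows, peer_versions):
        """Adopt any row the peer holds in a newer version than ours."""
        for i, version in peer_versions.items():
            if version > self.versions.get(i, -1):
                self.rows[i] = dict(peer_rows[i])
                self.versions[i] = version

# On contact, nodes a and b would call
#   a.merge(b.rows, b.versions); b.merge(a.rows, a.versions)
# after which both hold identical matrices, as the text describes.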
4. EVALUATION RESULTS
We evaluate and compare the results of the direct-delivery, epidemic, random, PRoPHET, Link-State, and timely-contact routing protocols.

4.1 Mobility traces
We use real mobility data collected at Dartmouth College. Dartmouth College has collected association and disassociation messages from the devices of wireless users on its network since spring 2001 [13]. Each message records the wireless card MAC address, the time of association/disassociation, and the name of the access point. We treat each unique MAC address as a node. For more information about Dartmouth's network and the data collection, see previous studies [7, 12].

Our data are not contacts in a mobile ad-hoc network. We can approximate contact traces by assuming that two users can communicate with each other whenever they are associated with the same access point. Chaintreau et al. [5] used Dartmouth data traces and made the same assumption to theoretically analyze the impact of human mobility on opportunistic forwarding algorithms. This assumption may not be accurate (two nodes may not have been able to communicate directly while they were at far sides of the same access point, or may have been able to communicate directly while associated with two adjacent access points), but it is a good first approximation. In our simulation, we imagine the same clients and the same mobility in a network with no access points. Since our campus has full WiFi coverage, we assume that the location of access points had little impact on users' mobility.

We simulated one full month of trace data (November 2003) taken from CRAWDAD [13], with 5,142 users. Although prediction-based protocols require prior contact history to estimate each node's delivery probability, our preliminary results show that the performance improvement from warming up over one month of trace was marginal. Therefore, for simplicity, we show the results of all protocols without warming up.

4.2 Simulator
We developed a custom simulator. (We tried to use a general network simulator, ns-2, but it was extremely slow when simulating a large number of mobile nodes, more than 5,000 in our case, and it provided unnecessary detail in modeling lower-level network protocols.) Since we used contact traces derived from real mobility data, we did not need a mobility model, and we omitted physical and link-layer details for node discovery. We were aware that the time for neighbor discovery in different wireless technologies varies from less than one second to several seconds. Furthermore, connection establishment also takes time, such as for DHCP. In our simulation, we assumed the nodes could discover and connect to each other instantly when they were associated with the same AP. To accurately model communication costs, however, we simulated some MAC-layer behaviors, such as collision.

The default settings of the network in our simulator are listed in Table 1, using the values recommended by other papers [22, 16]. The message probability was the probability of generating messages, as described in Section 4.3. The default transmission bandwidth was 11 Mb/s. When one node tried to transmit a message, it first checked whether any nearby node was transmitting. If one was, the node backed off a random number of slots. Each slot was 1 millisecond, and the maximum number of backoff slots was 30. The size of messages was uniformly distributed between 80 bytes and 1024 bytes. The hop count limit (HCL) was the maximum number of hops before a message should stop forwarding. The time to live (TTL) was the maximum duration that a message may exist before expiring. The storage capacity was the maximum space that a node can use for storing messages. For our routing method, we used a default prediction window ΔT of 10 hours and a probability threshold of 0.01. The replication factor r was not limited by default, so the source of a message transferred the message to any other node that had a contact probability with the message destination higher than the probability threshold.
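The contact approximation of Section 4.1 amounts to intersecting association intervals at each access point; a sketch is below. The record format and the function name are assumptions for illustration, not the CRAWDAD tools or the authors' preprocessing code.

# Sketch of deriving contact periods from AP association records: two devices
# are considered "in contact" while they are associated with the same access
# point at overlapping times.
from collections import defaultdict

def derive_contacts(associations):
    """`associations` is a list of (mac, ap, start, end) tuples."""
    by_ap = defaultdict(list)
    for mac, ap, start, end in associations:
        by_ap[ap].append((mac, start, end))

    contacts = []   # (mac_a, mac_b, start, end) of overlapping associations
    for intervals in by_ap.values():
        for idx, (a, sa, ea) in enumerate(intervals):
            for b, sb, eb in intervals[idx + 1:]:
                start, end = max(sa, sb), min(ea, eb)
                if a != b and start < end:
                    contacts.append((a, b, start, end))
    return contacts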
Table 1: Default Settings of the Simulation
Parameter | Default value
message probability | 0.001
bandwidth | 11 Mb/s
transmission slot | 1 millisecond
max backoff slots | 30
message size | 80-1024 bytes
hop count limit (HCL) | unlimited
time to live (TTL) | unlimited
storage capacity | unlimited
prediction window ΔT | 10 hours
probability threshold | 0.01
contact history length | 20
replication | always
aging factor α | 0.9 (0.98 PRoPHET)
initial probability p_0 | 0.75 (PRoPHET)
transitivity impact β | 0.25 (PRoPHET)

4.3 Message generation
After each contact event in the contact trace, we generated a message with a given probability; we chose a source node and a destination node randomly, using a uniform distribution across nodes seen in the contact trace up to the current time. When there were more contacts during a certain period, there was a higher likelihood that a new message was generated in that period. This correlation is not unreasonable, since there were more movements during the day than during the night, and so there were more contacts. Figure 1 shows the statistics of the numbers of movements and the numbers of contacts during each hour of the day, summed across all users and all days. The plot shows a clear diurnal activity pattern. The activities reached their lowest around 5am and peaked between 4pm and 5pm. We assume that in some applications network traffic exhibits similar patterns, that is, people send more messages during the day, too. Messages expire after a TTL. We did not use proactive methods to notify nodes of the delivery of messages, which would have allowed delivered messages to be removed from storage.

[Figure 1: Movements and contacts during each hour.]

4.4 Metrics
We define a set of metrics that we use in evaluating routing protocols in opportunistic networks (a sketch of how they can be accumulated follows this list):
• delivery ratio, the ratio of the number of messages delivered to the number of total messages generated.
• message transmissions, the total number of messages transmitted during the simulation across all nodes.
• meta-data transmissions, the total number of meta-data units transmitted during the simulation across all nodes.
• message duplications, the number of times a message copy occurred, due to replication.
• delay, the duration between a message's generation time and the message's delivery time.
• storage usage, the max and mean of the maximum storage (bytes) used across all nodes.
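One plausible way to accumulate these metrics from simulation events is sketched below; the paper does not show its logging code, so the counter names and event hooks here are assumptions.

# Sketch of metric bookkeeping for a trace-driven simulation run.
class Metrics:
    def __init__(self):
        self.generated = 0
        self.delivered = 0
        self.transmissions = 0        # data messages sent
        self.meta_transmissions = 0   # meta-data units sent
        self.duplications = 0         # replica copies created
        self.delays = []              # one entry per delivered message
        self.max_storage = {}         # node id -> max bytes observed

    def on_generate(self):
        self.generated += 1

    def on_transmit(self, is_meta=False, is_replica=False):
        if is_meta:
            self.meta_transmissions += 1
        else:
            self.transmissions += 1
            if is_replica:
                self.duplications += 1

    def on_deliver(self, generated_at, delivered_at):
        self.delivered += 1
        self.delays.append(delivered_at - generated_at)

    def on_storage_sample(self, node, used_bytes):
        self.max_storage[node] = max(self.max_storage.get(node, 0), used_bytes)

    def delivery_ratio(self):
        return self.delivered / self.generated if self.generated else 0.0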
4.5 Results
Here we compare simulation results of the six routing protocols. Figure 2 shows the delivery ratio of all the protocols, with different TTLs. (In all the plots in the paper, prediction stands for our method, state stands for the Link-State protocol, and prophet represents PRoPHET.) Although we had 5,142 users in the network, the direct-delivery and random protocols had low delivery ratios (note the log scale). Even for messages with an unlimited lifetime, only 59 out of 2077 messages were delivered during this one-month simulation. The delivery ratio of epidemic routing was the best. The three prediction-based approaches had low delivery ratios compared to epidemic routing. Although our method was slightly better than the other two, the advantage was marginal.

[Figure 2: Delivery ratio (log scale). The direct and random protocols for one-hour TTL had delivery ratios that were too low to be visible in the plot.]

The high delivery ratio of epidemic routing came with a price: excessive transmissions. Figure 3 shows the number of message data transmissions. The number of message transmissions of epidemic routing was more than 10 times higher than for the prediction-based routing protocols. Obviously, the direct-delivery protocol had the lowest number of message transmissions: exactly the number of messages delivered. Among the three prediction-based methods, PRoPHET transmitted fewer messages, but had a comparable delivery ratio, as seen in Figure 2.

[Figure 3: Message transmissions (log scale).]

Figure 4 shows that epidemic and all prediction-based methods had substantial meta-data transmissions, though epidemic routing had relatively more with shorter TTLs. Because the epidemic protocol transmitted messages at every contact, more nodes held messages and thus required meta-data transmission during contacts. The direct-delivery and random protocols had no meta-data transmissions.

[Figure 4: Meta-data transmissions (log scale). Direct and random protocols had no meta-data transmissions.]

In addition to its message transmissions and meta-data transmissions, the epidemic routing protocol also had excessive message duplications, spreading replicas of messages over the network. Figure 5 shows that epidemic routing had one or two orders of magnitude more duplication than the prediction-based protocols. Recall that the direct-delivery and random protocols did not replicate, and thus had no data duplications.

[Figure 5: Message duplications (log scale). Direct and random protocols had no message duplications.]

Figure 6 shows both the median and mean delivery delays. All protocols show similar delivery delays in both mean and median measures for medium TTLs, but differ for long and short TTLs. With a 100-hour TTL, or unlimited TTL, epidemic routing had the shortest delays. Direct delivery had the longest delay for unlimited TTL, but it had the shortest delay for the one-hour TTL. These results seem contrary to our intuition: the epidemic routing protocol should be the fastest routing protocol since it spreads messages all over the network. The figures, however, show only the delay of delivered messages. For direct delivery, random, and the probability-based routing protocols, relatively few messages were delivered for short TTLs, so many messages expired before they could reach their destination; those messages had infinite delivery delay and were not included in the median or mean measurements. For longer TTLs, more messages were delivered even for the direct-delivery protocol. The statistics for longer TTLs are therefore more meaningful for comparison than those for short TTLs.

[Figure 6: Median and mean delays (log scale).]

Since our message generation rate was low, the storage usage was also low in our simulation.
Figure 7 shows the maximum and the average of the maximum volume (in KBytes) of messages stored in each node. The epidemic routing had the most storage usage. The message time-to-live parameter was the biggest factor affecting the storage usage for the epidemic and prediction-based routing protocols.

[Figure 7: Max and mean of maximum storage usage across all nodes (log scale).]

We studied the impact of different parameters of our prediction-based routing protocol. Our prediction-based protocol was sensitive to several parameters, such as the probability threshold and the prediction window ΔT. Figure 8 shows the delivery ratios when we used different probability thresholds. (The leftmost value, 0.01, is the value used for the other plots.) A higher probability threshold limited the transfer probability, so fewer messages were delivered. It also required fewer transmissions, as shown in Figure 9. With a larger prediction window, we got a higher contact probability. Thus, for the same probability threshold, we had a slightly higher delivery ratio, as shown in Figure 10, and a few more transmissions, as shown in Figure 11.

[Figure 8: Probability threshold impact on delivery ratio of timely-contact routing.]
[Figure 9: Probability threshold impact on message transmission of timely-contact routing.]
[Figure 10: Prediction window impact on delivery ratio of timely-contact routing (semi-log scale).]
[Figure 11: Prediction window impact on message transmission of timely-contact routing (semi-log scale).]

5. RELATED WORK
In addition to the protocols that we evaluated in our simulation, several other opportunistic network routing protocols have been proposed in the literature. We did not implement and evaluate these routing protocols because they either require domain-specific information (location information) [14, 15], assume certain mobility patterns [17], or present approaches orthogonal to other routing protocols [10, 24].

LeBrun et al. [14] propose a location-based delay-tolerant network routing protocol. Their algorithm assumes that every node knows its own position, and that the destination is stationary at a known location. A node forwards data to a neighbor only if the neighbor is closer to the destination than its own position. Our protocol does not require knowledge of the nodes' locations, and instead learns their contact patterns.

Leguay et al. [15] use a high-dimensional space to represent a mobility pattern, then route messages to nodes that are closer to the destination node in the mobility-pattern space. Location information of nodes is required to construct mobility patterns.

Musolesi et al. [17] propose an adaptive routing protocol for intermittently connected mobile ad-hoc networks. They use a Kalman filter to compute the probability that a node delivers messages. This protocol assumes group mobility and cloud connectivity, that is, nodes move as a group, and among this group of nodes a contemporaneous end-to-end connection exists for every pair of nodes. When two nodes are in the same connected cloud, DSDV [19] routing is used.
Network coding also draws much interest from DTN research. Erasure coding [10, 24] explores coding algorithms to reduce message replicas. The source node replicates a message m times, then uses a coding scheme to encode the replicas into one large message. After the replicas are encoded, the source divides the large message into k blocks of the same size, and transmits a block to each of the first k encountered nodes. If m of the blocks are received at the destination, the message can be restored, where m < k. In a uniformly distributed mobility scenario, the delivery probability increases because the probability that the destination node meets m relays is greater than the probability that it meets k relays, given m < k.

6. SUMMARY
We propose a prediction-based routing protocol for opportunistic networks. We evaluate the performance of our protocol using realistic contact traces, and compare it to five existing routing protocols.

Our simulation results show that direct delivery had the lowest delivery ratio, the fewest data transmissions, and no meta-data transmission or data duplication. Direct delivery is suitable for devices that require extremely low power consumption. The random protocol increased the chance of delivery for messages otherwise stuck at some low-mobility nodes. Epidemic routing delivered the most messages. Its excessive transmissions and data duplication, however, consume more resources than portable devices may be able to provide. None of these protocols (direct-delivery, random, and epidemic routing) is practical for real deployment of opportunistic networks, because they had either an extremely low delivery ratio or an extremely high resource consumption.

The prediction-based routing protocols had a delivery ratio more than 10 times better than that of direct-delivery and random routing, and fewer transmissions and less storage usage than epidemic routing. They also had fewer data duplications than epidemic routing. All the prediction-based routing protocols that we have evaluated had similar performance. Our method had a slightly higher delivery ratio, but more transmissions and higher storage usage. There are many parameters for prediction-based routing protocols, however, and different parameters may produce different results. Indeed, there is an opportunity for some adaptation; for example, high-priority messages may be given higher transfer and replication probabilities to increase the chance of delivery and reduce the delay, or a node with infrequent contacts may choose to raise its transfer probability.

We studied only the impact of predicting peer-to-peer contact probability for routing unicast messages. In some applications, context information (such as location) may be available for the peers. One may also consider other messaging models, for example, where messages are sent to a location, such that every node at that location will receive a copy of the message.
Location prediction [21] may be used to predict nodes' mobility, and to choose as relays those nodes moving toward the destined location.

Research on routing in opportunistic networks is still in its early stage. Many other issues of opportunistic networks, such as security and privacy, are mainly left open. We anticipate studying these issues in future work.

7. ACKNOWLEDGEMENT
This research is a project of the Center for Mobile Computing and the Institute for Security Technology Studies at Dartmouth College. It was supported by DoCoMo Labs USA, the CRAWDAD archive at Dartmouth College (funded by NSF CRI Award 0454062), NSF Infrastructure Award EIA-9802068, and by Grant number 2005-DD-BX-1091 awarded by the Bureau of Justice Assistance. Points of view or opinions in this document are those of the authors and do not represent the official position or policies of any sponsor.

8. REFERENCES
[1] John Burgess, Brian Gallagher, David Jensen, and Brian Neil Levine. MaxProp: routing for vehicle-based disruption-tolerant networks. In Proceedings of the 25th IEEE International Conference on Computer Communications (INFOCOM), April 2006.
[2] Scott Burleigh, Adrian Hooke, Leigh Torgerson, Kevin Fall, Vint Cerf, Bob Durst, Keith Scott, and Howard Weiss. Delay-tolerant networking: An approach to interplanetary Internet. IEEE Communications Magazine, 41(6):128-136, June 2003.
[3] Tracy Camp, Jeff Boleng, and Vanessa Davies. A survey of mobility models for ad-hoc network research. Wireless Communication & Mobile Computing (WCMC): Special issue on Mobile Ad-hoc Networking: Research, Trends and Applications, 2(5):483-502, 2002.
[4] Andrew Campbell, Shane Eisenman, Nicholas Lane, Emiliano Miluzzo, and Ronald Peterson. People-centric urban sensing. In IEEE Wireless Internet Conference, August 2006.
[5] Augustin Chaintreau, Pan Hui, Jon Crowcroft, Christophe Diot, Richard Gass, and James Scott. Impact of human mobility on the design of opportunistic forwarding algorithms. In Proceedings of the 25th IEEE International Conference on Computer Communications (INFOCOM), April 2006.
[6] Kevin Fall. A delay-tolerant network architecture for challenged internets. In Proceedings of the 2003 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications (SIGCOMM), August 2003.
[7] Tristan Henderson, David Kotz, and Ilya Abyzov. The changing usage of a mature campus-wide wireless network. In Proceedings of the 10th Annual International Conference on Mobile Computing and Networking (MobiCom), pages 187-201, September 2004.
[8] Pan Hui, Augustin Chaintreau, James Scott, Richard Gass, Jon Crowcroft, and Christophe Diot. Pocket switched networks and human mobility in conference environments. In ACM SIGCOMM Workshop on Delay Tolerant Networking, pages 244-251, August 2005.
[9] Ravi Jain, Dan Lelescu, and Mahadevan Balakrishnan. Model T: an empirical model for user registration patterns in a campus wireless LAN. In Proceedings of the 11th Annual International Conference on Mobile Computing and Networking (MobiCom), pages 170-184, 2005.
[10] Sushant Jain, Mike Demmer, Rabin Patra, and Kevin Fall. Using redundancy to cope with failures in a delay tolerant network. In Proceedings of the 2005 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications (SIGCOMM), pages 109-120, August 2005.
[11] Philo Juang, Hidekazu Oki, Yong Wang, Margaret Martonosi, Li-Shiuan Peh, and Daniel Rubenstein. Energy-efficient computing for wildlife tracking: Design tradeoffs and early experiences with ZebraNet. In the Tenth International Conference on Architectural Support for Programming Languages and Operating Systems, October 2002.
[12] David Kotz and Kobby Essien. Analysis of a campus-wide wireless network. Wireless Networks, 11:115-133, 2005.
[13] David Kotz, Tristan Henderson, and Ilya Abyzov. CRAWDAD data set dartmouth/campus. http://crawdad.cs.dartmouth.edu/dartmouth/campus, December 2004.
[14] Jason LeBrun, Chen-Nee Chuah, Dipak Ghosal, and Michael Zhang. Knowledge-based opportunistic forwarding in vehicular wireless ad-hoc networks. In IEEE Vehicular Technology Conference, pages 2289-2293, May 2005.
[15] Jeremie Leguay, Timur Friedman, and Vania Conan. Evaluating mobility pattern space routing for DTNs. In Proceedings of the 25th IEEE International Conference on Computer Communications (INFOCOM), April 2006.
[16] Anders Lindgren, Avri Doria, and Olov Schelen. Probabilistic routing in intermittently connected networks. In Workshop on Service Assurance with Partial and Intermittent Resources (SAPIR), pages 239-254, 2004.
[17] Mirco Musolesi, Stephen Hailes, and Cecilia Mascolo. Adaptive routing for intermittently connected mobile ad-hoc networks. In IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks, pages 183-189, June 2005. Extended version.
[18] OLPC. One laptop per child project. http://laptop.org.
[19] C. E. Perkins and P. Bhagwat. Highly dynamic destination-sequenced distance-vector routing (DSDV) for mobile computers. Computer Communication Review, pages 234-244, October 1994.
[20] C. E. Perkins and E. M. Royer. Ad-hoc on-demand distance vector routing. In IEEE Workshop on Mobile Computing Systems and Applications, pages 90-100, February 1999.
[21] Libo Song, David Kotz, Ravi Jain, and Xiaoning He. Evaluating next-cell predictors with extensive Wi-Fi mobility data. IEEE Transactions on Mobile Computing, 5(12):1633-1649, December 2006.
[22] Jing Su, Ashvin Goel, and Eyal de Lara. An empirical evaluation of the student-net delay tolerant network. In International Conference on Mobile and Ubiquitous Systems (MobiQuitous), July 2006.
[23] Amin Vahdat and David Becker. Epidemic routing for partially-connected ad-hoc networks. Technical Report CS-2000-06, Duke University, July 2000.
[24] Yong Wang, Sushant Jain, Margaret Martonosi, and Kevin Fall. Erasure-coding based routing for opportunistic networks. In ACM SIGCOMM Workshop on Delay Tolerant Networking, pages 229-236, August 2005.
[25] Yu Wang and Hongyi Wu. DFT-MSN: the delay fault tolerant mobile sensor network for pervasive information gathering. In Proceedings of the 25th IEEE International Conference on Computer Communications (INFOCOM), April 2006.
Evaluating Opportunistic Routing Protocols with Large Realistic Contact Traces ABSTRACT Traditional mobile ad hoc network (MANET) routing protocols assume that contemporaneous end-to-end communication paths exist between data senders and receivers. In some mobile ad hoc networks with a sparse node population, an end-to-end communication path may break frequently or may not exist at any time. Many routing protocols have been proposed in the literature to address the problem, but few were evaluated in a realistic "opportunistic" network setting. We use simulation and contact traces (derived from logs in a production network) to evaluate and compare five existing protocols: direct-delivery, epidemic, random, PRoPHET, and Link-State, as well as our own proposed routing protocol. We show that the direct delivery and epidemic routing protocols suffer either low delivery ratio or high resource usage, and other protocols make tradeoffs between delivery ratio and resource usage. 1. INTRODUCTION Mobile opportunistic networks are one kind of delay-tolerant network (DTN) [6]. Delay-tolerant networks provide service despite long link delays or frequent link breaks. Long link delays happen in networks with communication between nodes at a great distance, such as interplanetary networks [2]. Link breaks are caused by nodes moving out of range, environmental changes, interference from other moving objects, radio power-offs, or failed nodes. For us, mobile opportunistic networks are those DTNs with sparse node population and frequent link breaks caused by power-offs and the mobility of the nodes. Mobile opportunistic networks have received increasing interest from researchers. In the literature, these networks include mobile sensor networks [25], wild-animal tracking networks [11], "pocketswitched" networks [8], and transportation networks [1, 14]. We expect to see more opportunistic networks when the one-laptopper-child (OLPC) project [18] starts rolling out inexpensive laptops with wireless networking capability for children in developing countries, where often no infrastructure exits. Opportunistic networking is one promising approach for those children to exchange information. One fundamental problem in opportunistic networks is how to route messages from their source to their destination. Mobile opportunistic networks differ from the Internet in that disconnections are the norm instead of the exception. In mobile opportunistic networks, communication devices can be carried by people [4], vehicles [1] or animals [11]. Some devices can form a small mobile ad hoc network when the nodes move close to each other. But a node may frequently be isolated from other nodes. Note that traditional Internet routing protocols and ad hoc routing protocols, such as AODV [20] or DSDV [19], assume that a contemporaneous endto-end path exists, and thus fail in mobile opportunistic networks. Indeed, there may never exist an end-to-end path between two given devices. In this paper, we study protocols for routing messages between wireless networking devices carried by people. We assume that people send messages to other people occasionally, using their devices; when no direct link exists between the source and the destination of the message, other nodes may relay the message to the destination. Each device represents a unique person (it is out of the scope of this paper when a device maybe carried by multiple people). Each message is destined for a specific person and thus for a specific node carried by that person. 
Although one person may carry multiple devices, we assume that the sender knows which device is the best to receive the message. We do not consider multicast or geocast in this paper. Many routing protocols have been proposed in the literature. Few of them were evaluated in realistic network settings, or even in realistic simulations, due to the lack of any realistic people mobility model. Random walk or random way-point mobility models are often used to evaluate the performance of those routing protocols. Although these synthetic mobility models have received extensive interest by mobile ad hoc network researchers [3], they do not reflect people's mobility patterns [9]. Realising the limitations of using random mobility models in simulations, a few researchers have studied routing protocols in mobile opportunistic networks with realistic mobility traces. Chaintreau et al. [5] theoretically analyzed the impact of routing algorithms over a model derived from a realistic mobility data set. Su et al. [22] simulated a set of routing protocols in a small experimental network. Those studies help researchers better understand the theoretical limits of opportunistic networks, and the routing protocol performance in a small network (20--30 nodes). Deploying and experimenting large-scale mobile opportunistic networks is difficult, we too resort to simulation. Instead of using a complex mobility model to mimic people's mobility patterns, we used mobility traces collected in a production wireless network at Dartmouth College to drive our simulation. Our messagegeneration model, however, was synthetic. To the best of our knowledge, we are the first to simulate the effect of routing protocols in a large-scale mobile opportunistic network, using realistic contact traces derived from real traces of a production network with more than 5, 000 users. Using realistic contact traces, we evaluate the performance of three "naive" routing protocols (direct-delivery, epidemic, and random) and two prediction-based routing protocols, PRoPHET [16] and Link-State [22]. We also propose a new prediction-based routing protocol, and compare it to the above in our evaluation. 2. ROUTING PROTOCOL A routing protocol is designed for forwarding messages from one node (source) to another node (destination). Any node may generate messages for any other node, and may carry messages destined for other nodes. In this paper, we consider only messages that are unicast (single destination). DTN routing protocols could be described in part by their transfer probability and replication probability; that is, when one node meets another node, what is the probability that a message should be transfered and if so, whether the sender should retain its copy. Two extremes are the direct-delivery protocol and the epidemic protocol. The former transfers with probability 1 when the node meets the destination, 0 for others, and no replication. The latter uses transfer probability 1 for all nodes and unlimited replication. Both these protocols have their advantages and disadvantages. All other protocols are between the two extremes. First, we define the notion of contact between two nodes. Then we describe five existing protocols before presenting our own proposal. A contact is defined as a period of time during which two nodes have the opportunity to communicate. Although we are aware that wireless technologies differ, we assume that a node can reliably detect the beginning and end time of a contact with nearby nodes. 
A node may be in contact with several other nodes at the same time. The contact history of a node is a sequence of contacts with other nodes. Node i has a contact history Hi (j), for each other node j, which denotes the historical contacts between node i and node j. We record the start and end time for each contact; however, the last contacts in the node's contact history may not have ended. 2.1 Direct Delivery Protocol In this simple protocol, a message is transmitted only when the source node can directly communicate with the destination node of the message. In mobile opportunistic networks, however, the probability for the sender to meet the destination may be low, or even zero. 2.2 Epidemic Routing Protocol The epidemic routing protocol [23] floods messages into the network. The source node sends a copy of the message to every node that it meets. The nodes that receive a copy of the message also send a copy of the message to every node that they meet. Eventually, a copy of the message arrives at the destination of the message. This protocol is simple, but may use significant resources; excessive communication may drain each node's battery quickly. Moreover, since each node keeps a copy of each message, storage is not used efficiently, and the capacity of the network is limited. At a minimum, each node must expire messages after some amount of time or stop forwarding them after a certain number of hops. After a message expires, the message will not be transmitted and will be deleted from the storage of any node that holds the message. An optimization to reduce the communication cost is to transfer index messages before transferring any data message. The index messages contain IDs of messages that a node currently holds. Thus, by examining the index messages, a node only transfers messages that are not yet contained on the other nodes. 2.3 Random Routing An obvious approach between the above two extremes is to select a transfer probability between 0 and 1 to forward messages at each contact. We use a simple replication strategy that allows only the source node to make replicas, and limits the replication to a specific number of copies. The message has some chance of being transferred to a highly mobile node, and thus may have a better chance to reach its destination before the message expires. 2.4 PRoPHET Protocol PRoPHET [16] is a Probabilistic Routing Protocol using History ofpast Encounters and Transitivity to estimate each node's delivery probability for each other node. When node i meets node j, the delivery probability of node i for j is updated by where p0 is an initial probability, a design parameter for a given network. Lindgren et al. [16] chose 0.75, as did we in our evaluation. When node i does not meet j for some time, the delivery probability decreases by where α is the aging factor (α <1), and k is the number of time units since the last update. The PRoPHET protocol exchanges index messages as well as delivery probabilities. When node i receives node j's delivery probabilities, node i may compute the transitive delivery probability through j to z with where β is a design parameter for the impact of transitivity; we used β = 0.25 as did Lindgren [16]. 2.5 Link-State Protocol Su et al. [22] use a link-state approach to estimate the weight of each path from the source of a message to the destination. They use the median inter-contact duration or exponentially aged intercontact duration as the weight on links. 
The exponentially aged inter-contact duration of node i and j is computed by where I is the new inter-contact duration and α is the aging factor. Nodes share their link-state weights when they can communicate with each other, and messages are forwarded to the node that have the path with the lowest link-state weight. 3. TIMELY-CONTACT PROBABILITY We also use historical contact information to estimate the probability of meeting other nodes in the future. But our method differs in that we estimate the contact probability within a period of time. For example, what is the contact probability in the next hour? Neither PRoPHET nor Link-State considers time in this way. One way to estimate the "timely-contact probability" is to use the ratio of the total contact duration to the total time. However, this approach does not capture the frequency of contacts. For example, one node may have a long contact with another node, followed by a long non-contact period. A third node may have a short contact with the first node, followed by a short non-contact period. Using the above estimation approach, both examples would have similar contact probability. In the second example, however, the two nodes have more frequent contacts. We design a method to capture the contact frequency of mobile nodes. For this purpose, we assume that even short contacts are sufficient to exchange messages .1 The probability for node i to meet node j is computed by the following procedure. We divide the contact history Hi (j) into a sequence of n periods of ΔT starting from the start time (t0) of the first contact in history Hi (j) to the current time. We number each of the n periods from 0 to n − 1, then check each period. If node i had any contact with node j during a given period m, which is [t0 + mΔT, t0 + (m + 1) ΔT), we set the contact status Im to be 1; otherwise, the contact status Im is 0. The probability p (0) ij that node i meets node j in the next ΔT can be estimated as the average of the contact status in prior intervals: To adapt to the change of contact patterns, and reduce the storage space for contact histories, a node may discard old history contacts; in this situation, the estimate would be based on only the retained history. The above probability is the direct contact probability of two nodes. We are also interested in the probability that we may be able to pass a message through a sequence of k nodes. We define the k-order probability inductively, where α is any node other than i or j. 3.1 Our Routing Protocol We first consider the case of a two-hop path, that is, with only one relay node. We consider two approaches: either the receiving neighbor decides whether to act as a relay, or the source decides which neighbors to use as relay. 3.1.1 Receiver Decision Whenever a node meets other nodes, they exchange all their messages (or as above, index messages). If the destination of a message is the receiver itself, the message is delivered. Otherwise, if the probability of delivering the message to its destination through this receiver node within ΔT is greater than or equal to a certain threshold, the message is stored in the receiver's storage to forward 1In our simulation, however, we accurately model the communication costs and some short contacts will not succeed in transfer of all messages. to the destination. If the probability is less than the threshold, the receiver discards the message. Notice that our protocol replicates the message whenever a good-looking relay comes along. 
3.1.2 Sender Decision To make decisions, a sender must have the information about its neighbors' contact probability with a message's destination. Therefore, meta-data exchange is necessary. When two nodes meet, they exchange a meta-message, containing an unordered list of node IDs for which the sender of the metamessage has a contact probability greater than the threshold. After receiving a meta-message, a node checks whether it has any message that destined to its neighbor, or to a node in the node list of the neighbor's meta-message. If it has, it sends a copy of the message. When a node receives a message, if the destination of the message is the receiver itself, the message is delivered. Otherwise, the message is stored in the receiver's storage for forwarding to the destination. 3.1.3 Multi-node Relay When we use more than two hops to relay a message, each node needs to know the contact probabilities along all possible paths to the message destination. Every node keeps a contact probability matrix, in which each cell pij is a contact probability between to nodes i and j. Each node i computes its own contact probabilities (row i) with other nodes using Equation (5) whenever the node ends a contact with other nodes. Each row of the contact probability matrix has a version number; the version number for row i is only increased when node i updates the matrix entries in row i. Other matrix entries are updated through exchange with other nodes when they meet. When two nodes i and j meet, they first exchange their contact probability matrices. Node i compares its own contact matrix with node j's matrix. If node j's matrix has a row l with a higher version number, then node i replaces its own row l with node j's row l. Likewise node j updates its matrix. After the exchange, the two nodes will have identical contact probability matrices. Next, if a node has a message to forward, the node estimates its neighboring node's order-k contact probability to contact the destination of the message using Equation (6). If p (k) ij is above a threshold, or if j is the destination of the message, node i will send a copy of the message to node j. All the above effort serves to determine the transfer probability when two nodes meet. The replication decision is orthogonal to the transfer decision. In our implementation, we always replicate. Although PRoPHET [16] and Link-State [22] do no replication, as described, we added replication to those protocols for better comparison to our protocol. 4. EVALUATION RESULTS We evaluate and compare the results of direct delivery, epidemic, random, PRoPHET, Link-State, and timely-contact routing protocols. 4.1 Mobility traces We use real mobility data collected at Dartmouth College. Dartmouth College has collected association and disassociation messages from devices on its wireless network wireless users since spring 2001 [13]. Each message records the wireless card MAC address, the time of association/disassociation, and the name of the access point. We treat each unique MAC address as a node. For more information about Dartmouth's network and the data collection, see previous studies [7, 12]. Our data are not contacts in a mobile ad hoc network. We can approximate contact traces by assuming that two users can communicate with each other whenever they are associated with the same access point. Chaintreau et al. [5] used Dartmouth data traces and made the same assumption to theoretically analyze the impact of human mobility on opportunistic forwarding algorithms. 
This assumption may not be accurate,2 but it is a good first approximation. In our simulation, we imagine the same clients and same mobility in a network with no access points. Since our campus has full WiFi coverage, we assume that the location of access points had little impact on users' mobility. We simulated one full month of trace data (November 2003) taken from CRAWDAD [13], with 5, 142 users. Although predictionbased protocols require prior contact history to estimate each node's delivery probability, our preliminary results show that the performance improvement of warming-up over one month of trace was marginal. Therefore, for simplicity, we show the results of all protocols without warming-up. 4.2 Simulator We developed a custom simulator .3 Since we used contact traces derived from real mobility data, we did not need a mobility model and omitted physical and link-layer details for node discovery. We were aware that the time for neighbor discovery in different wireless technologies vary from less than one seconds to several seconds. Furthermore, connection establishment also takes time, such as DHCP. In our simulation, we assumed the nodes could discover and connect each other instantly when they were associated with a same AP. To accurately model communication costs, however, we simulated some MAC-layer behaviors, such as collision. The default settings of the network of our simulator are listed in Table 1, using the values recommended by other papers [22, 16]. The message probability was the probability of generating messages, as described in Section 4.3. The default transmission bandwidth was 11 Mb/s. When one node tried to transmit a message, it first checked whether any nearby node was transmitting. If it was, the node backed off a random number of slots. Each slot was 1 millisecond, and the maximum number of backoff slots was 30. The size of messages was uniformly distributed between 80 bytes and 1024 bytes. The hop count limit (HCL) was the maximum number of hops before a message should stop forwarding. The time to live (TTL) was the maximum duration that a message may exist before expiring. The storage capacity was the maximum space that a node can use for storing messages. For our routing method, we used a default prediction window ΔT of 10 hours and a probability threshold of 0.01. The replication factor r was not limited by default, so the source of a message transferred the messages to any other node that had a contact probability with the message destination higher than the probability threshold. 4.3 Message generation After each contact event in the contact trace, we generated a message with a given probability; we choose a source node and a des2Two nodes may not have been able to directly communicate while they were at two far sides of an access point, or two nodes may have been able to directly communicate if they were between two adjacent access points. Table 1: Default Settings of the Simulation Figure 1: Movements and contacts duration each hour tination node randomly using a uniform distribution across nodes seen in the contact trace up to the current time. When there were more contacts during a certain period, there was a higher likelihood that a new message was generated in that period. This correlation is not unreasonable, since there were more movements during the day than during the night, and so the number of contacts. 
Figure 1 shows the statistics of the numbers of movements and the numbers of contacts during each hour of the day, summed across all users and all days. The plot shows a clear diurnal activity pattern. Activity reached its lowest point around 5 a.m. and peaked between 4 p.m. and 5 p.m. We assume that in some applications network traffic exhibits similar patterns, that is, people send more messages during the day, too. Messages expire after a TTL. We did not use proactive methods to notify nodes of the delivery of messages, which would have allowed delivered messages to be removed from storage.

4.4 Metrics

We define a set of metrics that we use in evaluating routing protocols in opportunistic networks:

• delivery ratio, the ratio of the number of messages delivered to the number of total messages generated.
• message transmissions, the total number of messages transmitted during the simulation across all nodes.
• meta-data transmissions, the total number of meta-data units transmitted during the simulation across all nodes.
• message duplications, the number of times a message copy occurred, due to replication.
• delay, the duration between a message's generation time and the message's delivery time.
• storage usage, the max and mean of maximum storage (bytes) used across all nodes.

4.5 Results

Here we compare simulation results of the six routing protocols.

Figure 2: Delivery ratio (log scale). The direct and random protocols for one-hour TTL had delivery ratios that were too low to be visible in the plot.

Figure 2 shows the delivery ratio of all the protocols, with different TTLs. (In all the plots in the paper, "prediction" stands for our method, "state" stands for the Link-State protocol, and "prophet" represents PRoPHET.) Although we had 5,142 users in the network, the direct-delivery and random protocols had low delivery ratios (note the log scale). Even for messages with an unlimited lifetime, only 59 out of 2077 messages were delivered during this one-month simulation. The delivery ratio of epidemic routing was the best. The three prediction-based approaches had low delivery ratios compared to epidemic routing. Although our method was slightly better than the other two, the advantage was marginal.

The high delivery ratio of epidemic routing came with a price: excessive transmissions.

Figure 3: Message transmissions (log scale)

Figure 3 shows the number of message data transmissions. The number of message transmissions of epidemic routing was more than 10 times higher than for the prediction-based routing protocols. Obviously, the direct-delivery protocol had the lowest number of message transmissions, equal to the number of messages delivered. Among the three prediction-based methods, PRoPHET transmitted fewer messages, but had a comparable delivery ratio, as seen in Figure 2.

Figure 4: Meta-data transmissions (log scale). Direct and random protocols had no meta-data transmissions.

Figure 4 shows that epidemic and all prediction-based methods had substantial meta-data transmissions, though epidemic routing had relatively more with shorter TTLs. Because the epidemic protocol transmitted messages at every contact, more nodes held messages, which in turn required meta-data transmission during contact. The direct-delivery and random protocols had no meta-data transmissions. In addition to its message transmissions and meta-data transmissions, the epidemic routing protocol also had excessive message duplications, spreading replicas of messages over the network.
Figure 5: Message duplications (log scale). Direct and random protocols had no message duplications.

Figure 5 shows that epidemic routing had one to two orders of magnitude more duplication than the prediction-based protocols. Recall that the direct-delivery and random protocols did not replicate, and thus had no data duplications.

Figure 6: Median and mean delays (log scale).

Figure 6 shows both the median and mean delivery delays. All protocols show similar delivery delays in both mean and median measures for medium TTLs, but differ for long and short TTLs. With a 100-hour TTL, or unlimited TTL, epidemic routing had the shortest delays. Direct delivery had the longest delay for unlimited TTL, but it had the shortest delay for the one-hour TTL. The results seem contrary to our intuition: the epidemic routing protocol should be the fastest routing protocol since it spreads messages all over the network. The figures, however, show only the delay of delivered messages. For direct delivery, random, and the probability-based routing protocols, relatively few messages were delivered for short TTLs, so many messages expired before they could reach their destination; those messages had infinite delivery delay and were not included in the median or mean measurements. For longer TTLs, more messages were delivered even for the direct-delivery protocol. The statistics for longer TTLs are therefore more meaningful for comparison than those for short TTLs.

Since our message generation rate was low, the storage usage was also low in our simulation.

Figure 7: Max and mean of maximum storage usage across all nodes (log scale).

Figure 7 shows the maximum and average of the maximum volume (in KBytes) of messages stored in each node. Epidemic routing had the highest storage usage. The message time-to-live parameter was the dominant factor affecting the storage usage for epidemic and prediction-based routing protocols.

We studied the impact of different parameters of our prediction-based routing protocol. Our prediction-based protocol was sensitive to several parameters, such as the probability threshold and the prediction window ΔT.

Figure 8: Probability threshold impact on delivery ratio of timely-contact routing.

Figure 9: Probability threshold impact on message transmission of timely-contact routing.

Figure 8 shows the delivery ratios when we used different probability thresholds. (The leftmost value 0.01 is the value used for the other plots.) A higher probability threshold limited the transfer probability, so fewer messages were delivered. It also required fewer transmissions, as shown in Figure 9.

Figure 10: Prediction window impact on delivery ratio of timely-contact routing (semi-log scale).

Figure 11: Prediction window impact on message transmission of timely-contact routing (semi-log scale).

With a larger prediction window, we got a higher contact probability. Thus, for the same probability threshold, we had a slightly higher delivery ratio, as shown in Figure 10, and a few more transmissions, as shown in Figure 11.

5. RELATED WORK

In addition to the protocols that we evaluated in our simulation, several other opportunistic network routing protocols have been proposed in the literature. We did not implement and evaluate these routing protocols because they either require domain-specific information (location information) [14, 15], assume certain mobility patterns [17], or present approaches [10, 24] that are orthogonal to other routing protocols. LeBrun et al. [14] propose a location-based delay-tolerant network routing protocol. Their algorithm assumes that every node knows its own position, and that the destination is stationary at a known location. A node forwards data to a neighbor only if the neighbor is closer to the destination than its own position.
Our protocol does not require knowledge of the nodes' locations, and learns their contact patterns. Leguay et al. [15] use a high-dimensional space to represent a mobility pattern, and then route messages to nodes that are closer to the destination node in the mobility pattern space. Location information of nodes is required to construct mobility patterns. Musolesi et al. [17] propose an adaptive routing protocol for intermittently connected mobile ad hoc networks. They use a Kalman filter to compute the probability that a node delivers messages. This protocol assumes group mobility and cloud connectivity, that is, nodes move as a group, and among this group of nodes a contemporaneous end-to-end connection exists for every pair of nodes. When two nodes are in the same connected cloud, DSDV [19] routing is used. Network coding also draws much interest from DTN research. Erasure coding [10, 24] explores coding algorithms to reduce message replicas. The source node replicates a message m times, then uses a coding scheme to encode them into one big message. After the replicas are encoded, the source divides the big message into k blocks of the same size, and transmits a block to each of the first k encountered nodes. If m of the blocks are received at the destination, the message can be restored, where m < k. In a uniformly distributed mobility scenario, the delivery probability increases because the probability that the destination node meets m relays is greater than the probability that it meets k relays, given m < k.

6. SUMMARY

We propose a prediction-based routing protocol for opportunistic networks. We evaluate the performance of our protocol using realistic contact traces, and compare it to five existing routing protocols. Our simulation results show that direct delivery had the lowest delivery ratio, the fewest data transmissions, and no meta-data transmission or data duplication. Direct delivery is suitable for devices that require an extremely low power consumption. The random protocol increased the chance of delivery for messages otherwise stuck at some low-mobility nodes. Epidemic routing delivered the most messages. Its excessive transmissions and data duplication, however, consume more resources than portable devices may be able to provide. None of these protocols (direct-delivery, random, and epidemic routing) are practical for real deployment of opportunistic networks, because they either had an extremely low delivery ratio or an extremely high resource consumption. The prediction-based routing protocols had a delivery ratio more than 10 times better than that for direct-delivery and random routing, and fewer transmissions and less storage usage than epidemic routing. They also had fewer data duplications than epidemic routing. All the prediction-based routing protocols that we have evaluated had similar performance. Our method had a slightly higher delivery ratio, but more transmissions and higher storage usage. There are many parameters for prediction-based routing protocols, however, and different parameters may produce different results.
Indeed, there is an opportunity for some adaptation; for example, high priority messages may be given higher transfer and replication probabilities to increase the chance of delivery and reduce the delay, or a node with infrequent contact may choose to raise its transfer probability. We only studied the impact of predicting peer-to-peer contact probability for routing in unicast messages. In some applications, context information (such as location) may be available for the peers. One may also consider other messaging models, for example, where messages are sent to a location, such that every node at that location will receive a copy of the message. Location prediction [21] may be used to predict nodes' mobility, and to choose as relays those nodes moving toward the destined location. Research on routing in opportunistic networks is still in its early stage. Many other issues of opportunistic networks, such as security and privacy, are mainly left open. We anticipate studying these issues in future work.
Evaluating Opportunistic Routing Protocols with Large Realistic Contact Traces ABSTRACT Traditional mobile ad hoc network (MANET) routing protocols assume that contemporaneous end-to-end communication paths exist between data senders and receivers. In some mobile ad hoc networks with a sparse node population, an end-to-end communication path may break frequently or may not exist at any time. Many routing protocols have been proposed in the literature to address the problem, but few were evaluated in a realistic "opportunistic" network setting. We use simulation and contact traces (derived from logs in a production network) to evaluate and compare five existing protocols: direct-delivery, epidemic, random, PRoPHET, and Link-State, as well as our own proposed routing protocol. We show that the direct delivery and epidemic routing protocols suffer either low delivery ratio or high resource usage, and other protocols make tradeoffs between delivery ratio and resource usage. 1. INTRODUCTION Mobile opportunistic networks are one kind of delay-tolerant network (DTN) [6]. Delay-tolerant networks provide service despite long link delays or frequent link breaks. Long link delays happen in networks with communication between nodes at a great distance, such as interplanetary networks [2]. Link breaks are caused by nodes moving out of range, environmental changes, interference from other moving objects, radio power-offs, or failed nodes. For us, mobile opportunistic networks are those DTNs with sparse node population and frequent link breaks caused by power-offs and the mobility of the nodes. Mobile opportunistic networks have received increasing interest from researchers. In the literature, these networks include mobile sensor networks [25], wild-animal tracking networks [11], "pocketswitched" networks [8], and transportation networks [1, 14]. We expect to see more opportunistic networks when the one-laptopper-child (OLPC) project [18] starts rolling out inexpensive laptops with wireless networking capability for children in developing countries, where often no infrastructure exits. Opportunistic networking is one promising approach for those children to exchange information. One fundamental problem in opportunistic networks is how to route messages from their source to their destination. Mobile opportunistic networks differ from the Internet in that disconnections are the norm instead of the exception. In mobile opportunistic networks, communication devices can be carried by people [4], vehicles [1] or animals [11]. Some devices can form a small mobile ad hoc network when the nodes move close to each other. But a node may frequently be isolated from other nodes. Note that traditional Internet routing protocols and ad hoc routing protocols, such as AODV [20] or DSDV [19], assume that a contemporaneous endto-end path exists, and thus fail in mobile opportunistic networks. Indeed, there may never exist an end-to-end path between two given devices. In this paper, we study protocols for routing messages between wireless networking devices carried by people. We assume that people send messages to other people occasionally, using their devices; when no direct link exists between the source and the destination of the message, other nodes may relay the message to the destination. Each device represents a unique person (it is out of the scope of this paper when a device maybe carried by multiple people). Each message is destined for a specific person and thus for a specific node carried by that person. 
Although one person may carry multiple devices, we assume that the sender knows which device is the best one to receive the message. We do not consider multicast or geocast in this paper. Many routing protocols have been proposed in the literature. Few of them were evaluated in realistic network settings, or even in realistic simulations, due to the lack of any realistic people mobility model. Random walk or random way-point mobility models are often used to evaluate the performance of those routing protocols. Although these synthetic mobility models have received extensive interest from mobile ad hoc network researchers [3], they do not reflect people's mobility patterns [9]. Realising the limitations of using random mobility models in simulations, a few researchers have studied routing protocols in mobile opportunistic networks with realistic mobility traces. Chaintreau et al. [5] theoretically analyzed the impact of routing algorithms over a model derived from a realistic mobility data set. Su et al. [22] simulated a set of routing protocols in a small experimental network. Those studies help researchers better understand the theoretical limits of opportunistic networks, and the routing protocol performance in a small network (20--30 nodes). Because deploying and experimenting with large-scale mobile opportunistic networks is difficult, we too resort to simulation. Instead of using a complex mobility model to mimic people's mobility patterns, we used mobility traces collected in a production wireless network at Dartmouth College to drive our simulation. Our message-generation model, however, was synthetic. To the best of our knowledge, we are the first to simulate the effect of routing protocols in a large-scale mobile opportunistic network, using realistic contact traces derived from real traces of a production network with more than 5,000 users. Using realistic contact traces, we evaluate the performance of three "naive" routing protocols (direct-delivery, epidemic, and random) and two prediction-based routing protocols, PRoPHET [16] and Link-State [22]. We also propose a new prediction-based routing protocol, and compare it to the above in our evaluation.

2. ROUTING PROTOCOL
2.1 Direct Delivery Protocol
2.2 Epidemic Routing Protocol
2.3 Random Routing
2.4 PRoPHET Protocol
2.5 Link-State Protocol

3. TIMELY-CONTACT PROBABILITY
3.1 Our Routing Protocol
3.1.1 Receiver Decision
3.1.2 Sender Decision
3.1.3 Multi-node Relay

4. EVALUATION RESULTS
4.1 Mobility traces
4.2 Simulator
4.3 Message generation
4.4 Metrics
4.5 Results

5. RELATED WORK

In addition to the protocols that we evaluated in our simulation, several other opportunistic network routing protocols have been proposed in the literature. We did not implement and evaluate these routing protocols because they either require domain-specific information (location information) [14, 15], assume certain mobility patterns [17], or present approaches [10, 24] that are orthogonal to other routing protocols.

Figure 7: Max and mean of maximum storage usage across all nodes (log scale).
Figure 8: Probability threshold impact on delivery ratio of timely-contact routing.

LeBrun et al. [14] propose a location-based delay-tolerant network routing protocol. Their algorithm assumes that every node knows its own position, and that the destination is stationary at a known location. A node forwards data to a neighbor only if the neighbor is closer to the destination than its own position. Our protocol does not require knowledge of the nodes' locations, and learns their contact patterns.
Leguay et al. [15] use a high-dimensional space to represent a mobility pattern, and then route messages to nodes that are closer to the destination node in the mobility pattern space. Location information of nodes is required to construct mobility patterns. Musolesi et al. [17] propose an adaptive routing protocol for intermittently connected mobile ad hoc networks. They use a Kalman filter to compute the probability that a node delivers messages. This protocol assumes group mobility and cloud connectivity, that is, nodes move as a group, and among this group of nodes a contemporaneous end-to-end connection exists for every pair of nodes. When two nodes are in the same connected cloud, DSDV [19] routing is used. Network coding also draws much interest from DTN research. Erasure coding [10, 24] explores coding algorithms to reduce message replicas. The source node replicates a message m times, then uses a coding scheme to encode them into one big message. After the replicas are encoded, the source divides the big message into k blocks of the same size, and transmits a block to each of the first k encountered nodes. If m of the blocks are received at the destination, the message can be restored, where m < k. In a uniformly distributed mobility scenario, the delivery probability increases because the probability that the destination node meets m relays is greater than the probability that it meets k relays, given m < k.

Figure 9: Probability threshold impact on message transmission of timely-contact routing.
Figure 10: Prediction window impact on delivery ratio of timely-contact routing (semi-log scale).
Figure 11: Prediction window impact on message transmission of timely-contact routing (semi-log scale).

6. SUMMARY

We propose a prediction-based routing protocol for opportunistic networks. We evaluate the performance of our protocol using realistic contact traces, and compare it to five existing routing protocols. Our simulation results show that direct delivery had the lowest delivery ratio, the fewest data transmissions, and no meta-data transmission or data duplication. Direct delivery is suitable for devices that require an extremely low power consumption. The random protocol increased the chance of delivery for messages otherwise stuck at some low-mobility nodes. Epidemic routing delivered the most messages. The excessive transmissions and data duplication, however, consume more resources than portable devices may be able to provide. None of these protocols (direct-delivery, random, and epidemic routing) are practical for real deployment of opportunistic networks, because they either had an extremely low delivery ratio or an extremely high resource consumption. The prediction-based routing protocols had a delivery ratio more than 10 times better than that for direct-delivery and random routing, and fewer transmissions and less storage usage than epidemic routing. They also had fewer data duplications than epidemic routing. All the prediction-based routing protocols that we have evaluated had similar performance. Our method had a slightly higher delivery ratio, but more transmissions and higher storage usage. There are many parameters for prediction-based routing protocols, however, and different parameters may produce different results.
Indeed, there is an opportunity for some adaptation; for example, high priority messages may be given higher transfer and replication probabilities to increase the chance of delivery and reduce the delay, or a node with infrequent contact may choose to raise its transfer probability. We only studied the impact of predicting peer-to-peer contact probability for routing in unicast messages. In some applications, context information (such as location) may be available for the peers. One may also consider other messaging models, for example, where messages are sent to a location, such that every node at that location will receive a copy of the message. Location prediction [21] may be used to predict nodes' mobility, and to choose as relays those nodes moving toward the destined location. Research on routing in opportunistic networks is still in its early stage. Many other issues of opportunistic networks, such as security and privacy, are mainly left open. We anticipate studying these issues in future work.
Evaluating Opportunistic Routing Protocols with Large Realistic Contact Traces ABSTRACT Traditional mobile ad hoc network (MANET) routing protocols assume that contemporaneous end-to-end communication paths exist between data senders and receivers. In some mobile ad hoc networks with a sparse node population, an end-to-end communication path may break frequently or may not exist at any time. Many routing protocols have been proposed in the literature to address the problem, but few were evaluated in a realistic "opportunistic" network setting. We use simulation and contact traces (derived from logs in a production network) to evaluate and compare five existing protocols: direct-delivery, epidemic, random, PRoPHET, and Link-State, as well as our own proposed routing protocol. We show that the direct delivery and epidemic routing protocols suffer either low delivery ratio or high resource usage, and other protocols make tradeoffs between delivery ratio and resource usage. 1. INTRODUCTION Mobile opportunistic networks are one kind of delay-tolerant network (DTN) [6]. Delay-tolerant networks provide service despite long link delays or frequent link breaks. Long link delays happen in networks with communication between nodes at a great distance, such as interplanetary networks [2]. For us, mobile opportunistic networks are those DTNs with sparse node population and frequent link breaks caused by power-offs and the mobility of the nodes. Mobile opportunistic networks have received increasing interest from researchers. Opportunistic networking is one promising approach for those children to exchange information. One fundamental problem in opportunistic networks is how to route messages from their source to their destination. Mobile opportunistic networks differ from the Internet in that disconnections are the norm instead of the exception. In mobile opportunistic networks, communication devices can be carried by people [4], vehicles [1] or animals [11]. Some devices can form a small mobile ad hoc network when the nodes move close to each other. But a node may frequently be isolated from other nodes. Note that traditional Internet routing protocols and ad hoc routing protocols, such as AODV [20] or DSDV [19], assume that a contemporaneous endto-end path exists, and thus fail in mobile opportunistic networks. Indeed, there may never exist an end-to-end path between two given devices. In this paper, we study protocols for routing messages between wireless networking devices carried by people. We assume that people send messages to other people occasionally, using their devices; when no direct link exists between the source and the destination of the message, other nodes may relay the message to the destination. Each message is destined for a specific person and thus for a specific node carried by that person. Although one person may carry multiple devices, we assume that the sender knows which device is the best to receive the message. Many routing protocols have been proposed in the literature. Few of them were evaluated in realistic network settings, or even in realistic simulations, due to the lack of any realistic people mobility model. Random walk or random way-point mobility models are often used to evaluate the performance of those routing protocols. Although these synthetic mobility models have received extensive interest by mobile ad hoc network researchers [3], they do not reflect people's mobility patterns [9]. 
Realising the limitations of using random mobility models in simulations, a few researchers have studied routing protocols in mobile opportunistic networks with realistic mobility traces. Chaintreau et al. [5] theoretically analyzed the impact of routing algorithms over a model derived from a realistic mobility data set. Su et al. [22] simulated a set of routing protocols in a small experimental network. Those studies help researchers better understand the theoretical limits of opportunistic networks, and the routing protocol performance in a small network (20--30 nodes). Because deploying and experimenting with large-scale mobile opportunistic networks is difficult, we too resort to simulation. Our message-generation model, however, was synthetic. To the best of our knowledge, we are the first to simulate the effect of routing protocols in a large-scale mobile opportunistic network, using realistic contact traces derived from real traces of a production network with more than 5,000 users. Using realistic contact traces, we evaluate the performance of three "naive" routing protocols (direct-delivery, epidemic, and random) and two prediction-based routing protocols, PRoPHET [16] and Link-State [22]. We also propose a new prediction-based routing protocol, and compare it to the above in our evaluation.

5. RELATED WORK

In addition to the protocols that we evaluated in our simulation, several other opportunistic network routing protocols have been proposed in the literature.

Figure 7: Max and mean of maximum storage usage across all nodes (log scale).
Figure 8: Probability threshold impact on delivery ratio of timely-contact routing.

LeBrun et al. [14] propose a location-based delay-tolerant network routing protocol. Their algorithm assumes that every node knows its own position, and that the destination is stationary at a known location. A node forwards data to a neighbor only if the neighbor is closer to the destination than its own position. Our protocol does not require knowledge of the nodes' locations, and learns their contact patterns. Leguay et al. [15] use a high-dimensional space to represent a mobility pattern, and then route messages to nodes that are closer to the destination node in the mobility pattern space. Location information of nodes is required to construct mobility patterns. Musolesi et al. [17] propose an adaptive routing protocol for intermittently connected mobile ad hoc networks. They use a Kalman filter to compute the probability that a node delivers messages. This protocol assumes group mobility and cloud connectivity, that is, nodes move as a group, and among this group of nodes a contemporaneous end-to-end connection exists for every pair of nodes. When two nodes are in the same connected cloud, DSDV [19] routing is used. Network coding also draws much interest from DTN research. Erasure coding [10, 24] explores coding algorithms to reduce message replicas. The source node replicates a message m times, then uses a coding scheme to encode them into one big message. After the replicas are encoded, the source divides the big message into k blocks of the same size, and transmits a block to each of the first k encountered nodes. If m of the blocks are received at the destination, the message can be restored, where m < k.

Figure 9: Probability threshold impact on message transmission of timely-contact routing.
Figure 10: Prediction window impact on delivery ratio of timely-contact routing (semi-log scale).

6. SUMMARY

We propose a prediction-based routing protocol for opportunistic networks.
We evaluate the performance of our protocol using realistic contact traces, and compare it to five existing routing protocols. Direct delivery is suitable for devices that require an extremely low power consumption. The random protocol increased the chance of delivery for messages otherwise stuck at some low-mobility nodes. Epidemic routing delivered the most messages. None of these protocols (direct-delivery, random, and epidemic routing) are practical for real deployment of opportunistic networks.

Figure 11: Prediction window impact on message transmission of timely-contact routing (semi-log scale).

The prediction-based routing protocols had a delivery ratio more than 10 times better than that for direct-delivery and random routing, and fewer transmissions and less storage usage than epidemic routing. They also had fewer data duplications than epidemic routing. All the prediction-based routing protocols that we have evaluated had similar performance. Our method had a slightly higher delivery ratio, but more transmissions and higher storage usage. There are many parameters for prediction-based routing protocols, however, and different parameters may produce different results. We only studied the impact of predicting peer-to-peer contact probability for routing in unicast messages. One may also consider other messaging models, for example, where messages are sent to a location, such that every node at that location will receive a copy of the message. Location prediction [21] may be used to predict nodes' mobility, and to choose as relays those nodes moving toward the destined location. Research on routing in opportunistic networks is still in its early stage. Many other issues of opportunistic networks, such as security and privacy, are mainly left open. We anticipate studying these issues in future work.
H-97
Feature Representation for Effective Action-Item Detection
E-mail users face an ever-growing challenge in managing their inboxes due to the growing centrality of email in the workplace for task assignment, action requests, and other roles beyond information dissemination. Whereas Information Retrieval and Machine Learning techniques are gaining initial acceptance in spam filtering and automated folder assignment, this paper reports on a new task: automated action-item detection, in order to flag emails that require responses, and to highlight the specific passage(s) indicating the request(s) for action. Unlike standard topic-driven text classification, action-item detection requires inferring the sender's intent, and as such responds less well to pure bag-of-words classification. However, using enriched feature sets, such as n-grams (up to n=4) with chi-squared feature selection, and contextual cues for action-item location improve performance by up to 10% over unigrams, using in both cases state of the art classifiers such as SVMs with automated model selection via embedded cross-validation.
[ "action-item detect", "e-mail", "inform retriev", "topic-driven text classif", "text classif", "n-gram", "chi-squar featur select", "featur select", "svm", "autom model select", "embed cross-valid", "text categor", "genr-classif", "e-mail prioriti rank", "speech act identif", "simpl factual question", "document detect", "document rank", "sentenc detect", "sentenc-level classifi", "speech act" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "M", "U", "M", "U", "U", "M", "U", "M", "M", "U" ]
Feature Representation for Effective Action-Item Detection Paul N. Bennett Computer Science Department Carnegie Mellon University Pittsburgh, PA 15213 pbennett+@cs.cmu.edu Jaime Carbonell Language Technologies Institute. Carnegie Mellon University Pittsburgh, PA 15213 jgc+@cs.cmu.edu ABSTRACT E-mail users face an ever-growing challenge in managing their inboxes due to the growing centrality of email in the workplace for task assignment, action requests, and other roles beyond information dissemination. Whereas Information Retrieval and Machine Learning techniques are gaining initial acceptance in spam filtering and automated folder assignment, this paper reports on a new task: automated action-item detection, in order to flag emails that require responses, and to highlight the specific passage(s) indicating the request(s) for action. Unlike standard topic-driven text classification, action-item detection requires inferring the sender``s intent, and as such responds less well to pure bag-of-words classification. However, using enriched feature sets, such as n-grams (up to n=4) with chi-squared feature selection, and contextual cues for action-item location improve performance by up to 10% over unigrams, using in both cases state of the art classifiers such as SVMs with automated model selection via embedded cross-validation. Categories and Subject Descriptors H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval; I.2.6 [Artificial Intelligence]: Learning; I.5.4 [Pattern Recognition]: Applications General Terms Experimentation 1. INTRODUCTION E-mail users are facing an increasingly difficult task of managing their inboxes in the face of mounting challenges that result from rising e-mail usage. This includes prioritizing e-mails over a range of sources from business partners to family members, filtering and reducing junk e-mail, and quickly managing requests that demand From: Henry Hutchins <hhutchins@innovative.company.com> To: Sara Smith; Joe Johnson; William Woolings Subject: meeting with prospective customers Sent: Fri 12/10/2005 8:08 AM Hi All, I``d like to remind all of you that the group from GRTY will be visiting us next Friday at 4:30 p.m.. The current schedule looks like this: + 9:30 a.m. Informal Breakfast and Discussion in Cafeteria + 10:30 a.m. Company Overview + 11:00 a.m. Individual Meetings (Continue Over Lunch) + 2:00 p.m. Tour of Facilities + 3:00 p.m. Sales Pitch In order to have this go off smoothly, I would like to practice the presentation well in advance. As a result, I will need each of your parts by Wednesday. Keep up the good work! -Henry Figure 1: An E-mail with emphasized Action-Item, an explicit request that requires the recipient``s attention or action. the receiver``s attention or action. Automated action-item detection targets the third of these problems by attempting to detect which e-mails require an action or response with information, and within those e-mails, attempting to highlight the sentence (or other passage length) that directly indicates the action request. Such a detection system can be used as one part of an e-mail agent which would assist a user in processing important e-mails quicker than would have been possible without the agent. We view action-item detection as one necessary component of a successful e-mail agent which would perform spam detection, action-item detection, topic classification and priority ranking, among other functions. 
The utility of such a detector can manifest as a method of prioritizing e-mails according to task-oriented criteria other than the standard ones of topic and sender or as a means of ensuring that the email user hasn``t dropped the proverbial ball by forgetting to address an action request. Action-item detection differs from standard text classification in two important ways. First, the user is interested both in detecting whether an email contains action items and in locating exactly where these action item requests are contained within the email body. In contrast, standard text categorization merely assigns a topic label to each text, whether that label corresponds to an e-mail folder or a controlled indexing vocabulary [12, 15, 22]. Second, action-item detection attempts to recover the email sender``s intent - whether she means to elicit response or action on the part of the receiver; note that for this task, classifiers using only unigrams as features do not perform optimally, as evidenced in our results below. Instead we find that we need more information-laden features such as higher-order n-grams. Text categorization by topic, on the other hand, works very well using just individual words as features [2, 9, 13, 17]. In fact, genre-classification, which one would think may require more than a bag-of-words approach, also works quite well using just unigram features [14]. Topic detection and tracking (TDT), also works well with unigram feature sets [1, 20]. We believe that action-item detection is one of the first clear instances of an IR-related task where we must move beyond bag-of-words to achieve high performance, albeit not too far, as bag-of-n-grams seem to suffice. We first review related work for similar text classification problems such as e-mail priority ranking and speech act identification. Then we more formally define the action-item detection problem, discuss the aspects that distinguish it from more common problems like topic classification, and highlight the challenges in constructing systems that can perform well at the sentence and document level. From there, we move to a discussion of feature representation and selection techniques appropriate for this problem and how standard text classification approaches can be adapted to smoothly move from the sentence-level detection problem to the documentlevel classification problem. We then conduct an empirical analysis that helps us determine the effectiveness of our feature extraction procedures as well as establish baselines for a number of classification algorithms on this task. Finally, we summarize this paper``s contributions and consider interesting directions for future work. 2. RELATED WORK Several other researchers have considered very similar text classification tasks. Cohen et al. [5] describe an ontology of speech acts, such as Propose a Meeting, and attempt to predict when an e-mail contains one of these speech acts. We consider action-items to be an important specific type of speech act that falls within their more general classification. While they provide results for several classification methods, their methods only make use of human judgments at the document-level. In contrast, we consider whether accuracy can be increased by using finer-grained human judgments that mark the specific sentences and phrases of interest. Corston-Oliver et al. [6] consider detecting items in e-mail to Put on a To-Do List. 
This classification task is very similar to ours except they do not consider simple factual questions to belong to this category. We include questions, but note that not all questions are action-items - some are rhetorical or simply social convention, How are you? . From a learning perspective, while they make use of judgments at the sentence-level, they do not explicitly compare what if any benefits finer-grained judgments offer. Additionally, they do not study alternative choices or approaches to the classification task. Instead, they simply apply a standard SVM at the sentence-level and focus primarily on a linguistic analysis of how the sentence can be logically reformulated before adding it to the task list. In this study, we examine several alternative classification methods, compare document-level and sentence-level approaches and analyze the machine learning issues implicit in these problems. Interest in a variety of learning tasks related to e-mail has been rapidly growing in the recent literature. For example, in a forum dedicated to e-mail learning tasks, Culotta et al. [7] presented methods for learning social networks from e-mail. In this work, we do not focus on peer relationships; however, such methods could complement those here since peer relationships often influence word choice when requesting an action. 3. PROBLEM DEFINITION & APPROACH In contrast to previous work, we explicitly focus on the benefits that finer-grained, more costly, sentence-level human judgments offer over coarse-grained document-level judgments. Additionally, we consider multiple standard text classification approaches and analyze both the quantitative and qualitative differences that arise from taking a document-level vs. a sentence-level approach to classification. Finally, we focus on the representation necessary to achieve the most competitive performance. 3.1 Problem Definition In order to provide the most benefit to the user, a system would not only detect the document, but it would also indicate the specific sentences in the e-mail which contain the action-items. Therefore, there are three basic problems: 1. Document detection: Classify a document as to whether or not it contains an action-item. 2. Document ranking: Rank the documents such that all documents containing action-items occur as high as possible in the ranking. 3. Sentence detection: Classify each sentence in a document as to whether or not it is an action-item. As in most Information Retrieval tasks, the weight the evaluation metric should give to precision and recall depends on the nature of the application. In situations where a user will eventually read all received messages, ranking (e.g., via precision at recall of 1) may be most important since this will help encourage shorter delays in communications between users. In contrast, high-precision detection at low recall will be of increasing importance when the user is under severe time-pressure and therefore will likely not read all mail. This can be the case for crisis managers during disaster management. Finally, sentence detection plays a role in both timepressure situations and simply to alleviate the user``s required time to gist the message. 3.2 Approach As mentioned above, the labeled data can come in one of two forms: a document-labeling provides a yes/no label for each document as to whether it contains an action-item; a phrase-labeling provides only a yes label for the specific items of interest. 
We term the human judgments a phrase-labeling since the user's view of the action-item may not correspond with actual sentence boundaries or predicted sentence boundaries. Obviously, it is straightforward to generate a document-labeling consistent with a phrase-labeling by labeling a document yes if and only if it contains at least one phrase labeled yes. To train classifiers for this task, we can take several viewpoints related to both the basic problems we have enumerated and the form of the labeled data. The document-level view treats each e-mail as a learning instance with an associated class label. Then, the document can be converted to a feature-value vector and learning progresses as usual. Applying a document-level classifier to document detection and ranking is straightforward. In order to apply it to sentence detection, one must take additional steps. For example, if the classifier predicts that a document contains an action-item, then the system can indicate areas of the document that contain a high concentration of words the model weights heavily in favor of action-items. The obvious benefit of the document-level approach is that training set collection costs are lower, since the user only has to specify whether or not an e-mail contains an action-item and not the specific sentences. In the sentence-level view, each e-mail is automatically segmented into sentences, and each sentence is treated as a learning instance with an associated class label. Since the phrase-labeling provided by the user may not coincide with the automatic segmentation, we must determine what label to assign a partially overlapping sentence when converting it to a learning instance. Once trained, applying the resulting classifiers to sentence detection is now straightforward, but in order to apply the classifiers to document detection and document ranking, the individual predictions over each sentence must be aggregated to make a document-level prediction. This approach has the potential to benefit from more specific labels that enable the learner to focus attention on the key sentences, instead of having to learn from data in which the majority of the words in the e-mail provide little or no information about class membership.

3.2.1 Features

Consider some of the phrases that might constitute part of an action item: would like to know, let me know, as soon as possible, have you. Each of these phrases consists of common words that occur in many e-mails. However, when they occur in the same sentence, they are far more indicative of an action-item. Additionally, order can be important: consider have you versus you have. Because of this, we posit that n-grams play a larger role in this problem than is typical of problems like topic classification. Therefore, we consider all n-grams up to size 4. When using n-grams, if we find an n-gram of size 4 in a segment of text, we can represent the text as just one occurrence of the n-gram or as one occurrence of the n-gram and an occurrence of each smaller n-gram contained by it. We choose the second of these alternatives since this will allow the algorithm itself to smoothly back off in terms of recall. Methods such as naïve Bayes may be hurt by such a representation because of double-counting. Since sentence-ending punctuation can provide information, we retain the terminating punctuation token when it is identifiable. Additionally, we add a beginning-of-sentence and end-of-sentence token in order to capture patterns that are often indicators at the beginning or end of a sentence. Assuming proper punctuation, these extra tokens are unnecessary, but often e-mail lacks proper punctuation. Finally, for the sentence-level classifiers that use n-grams, we also code for each sentence a binary encoding of the position of the sentence relative to the document. This encoding has eight associated features that represent which octile (the first eighth, second eighth, etc.) contains the sentence.
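To make the representation concrete, the sketch below builds the bag-of-n-grams (n up to 4) with boundary tokens and the octile position indicator for a single sentence; the helper and feature names are hypothetical, and tokenization and punctuation handling are simplified.

```python
def sentence_features(tokens, sentence_index, num_sentences, max_n=4):
    """Bag-of-n-grams features for one (already tokenized) sentence.
    Counting every n-gram up to max_n means a 4-gram also contributes its
    contained trigrams, bigrams, and unigrams, as described above."""
    padded = ["<s>"] + list(tokens) + ["</s>"]      # boundary tokens
    feats = {}
    for n in range(1, max_n + 1):
        for i in range(len(padded) - n + 1):
            gram = " ".join(padded[i:i + n])
            feats[gram] = feats.get(gram, 0) + 1
    # binary position encoding: which octile of the document holds this sentence
    octile = min(7, (8 * sentence_index) // max(1, num_sentences))
    feats["POS_OCTILE_" + str(octile)] = 1
    return feats
```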
Additionally, we add a beginning-of-sentence and end-of-sentence token in order to capture patterns that are often indicators at the beginning or end of a sentence. Assuming proper punctuation, these extra tokens are unnecessary, but often e-mail lacks proper punctuation. In addition, for the sentence-level classifiers that use ngrams, we additionally code for each sentence a binary encoding of the position of the sentence relative to the document. This encoding has eight associated features that represent which octile (the first eighth, second eighth, etc.) contains the sentence. 3.2.2 Implementation Details In order to compare the document-level to the sentence-level approach, we compare predictions at the document-level. We do not address how to use a document-level classifier to make predictions at the sentence-level. In order to automatically segment the text of the e-mail, we use the RASP statistical parser [4]. Since the automatically segmented sentences may not correspond directly with the phrase-level boundaries, we treat any sentence that contains at least 30% of a marked action-item segment as an action-item. When evaluating sentencedetection for the sentence-level system, we use these class labels as ground truth. Since we are not evaluating multiple segmentation approaches, this does not bias any of the methods. If multiple segmentation systems were under evaluation, one would need to use a metric that matched predicted positive sentences to phrases labeled positive. The metric would need to punish overly long true predictions as well as too short predictions. Our criteria for converting to labeled instances implicitly includes both criteria. Since the segmentation is fixed, an overly long prediction would be predicting yes for many no instances since presumably the extra length corresponds to additional segmented sentences all of which do not contain 30% of action-item. Likewise, a too short prediction must correspond to a small sentence included in the action-item but not constituting all of the action-item. Therefore, in order to consider the prediction to be too short, there will be an additional preceding/following sentence that is an action-item where we incorrectly predicted no. Once a sentence-level classifier has made a prediction for each sentence, we must combine these predictions to make both a document-level prediction and a document-level score. We use the simple policy of predicting positive when any of the sentences is predicted positive. In order to produce a document score for ranking, the confidence that the document contains an action-item is: ψ(d) = 1 n(d) s∈d|π(s)=1 ψ(s) if for any s ∈ d, π(s) = 1 1 n(d) maxs∈d ψ(s) o.w. where s is a sentence in document d, π is the classifier``s 1/0 prediction, ψ is the score the classifier assigns as its confidence that π(s) = 1, and n(d) is the greater of 1 and the number of (unigram) tokens in the document. In other words, when any sentence is predicted positive, the document score is the length normalized sum of the sentence scores above threshold. When no sentence is predicted positive, the document score is the maximum sentence score normalized by length. As in other text problems, we are more likely to emit false positives for documents with more words or sentences. Thus we include a length normalization factor. 4. 
4. EXPERIMENTAL ANALYSIS

4.1 The Data

Our corpus consists of e-mails obtained from volunteers at an educational institution and covers subjects such as organizing a research workshop, arranging for job-candidate interviews, publishing proceedings, and talk announcements. The messages were anonymized by replacing the names of each individual and institution with a pseudonym. (We have an even more highly anonymized version of the corpus that can be made available for some outside experimentation; please contact the authors for more information on obtaining this data.) After attempting to identify and eliminate duplicate e-mails, the corpus contains 744 e-mail messages.

After identity anonymization, the corpus has three basic versions. Quoted material refers to the text of a previous e-mail that an author often leaves in an e-mail message when responding to it. Quoted material can act as noise when learning since it may include action-items from previous messages that are no longer relevant. To isolate the effects of quoted material, we have three versions of the corpus. The raw form contains the basic messages. The auto-stripped version contains the messages after quoted material has been automatically removed. The hand-stripped version contains the messages after quoted material has been removed by a human. Additionally, the hand-stripped version has had any xml content and e-mail signatures removed - leaving only the essential content of the message. The studies reported here are performed with the hand-stripped version. This allows us to balance the cognitive load, in terms of the number of tokens that must be read, in the user studies we report - including quoted material would complicate the user studies since some users might skip the material while others read it. Additionally, ensuring all quoted material is removed prevents tainting the cross-validation, since otherwise a test item could occur as quoted material in a training document.

4.1.1 Data Labeling

Two human annotators labeled each message as to whether or not it contained an action-item. In addition, they identified each segment of the e-mail which contained an action-item. A segment is a contiguous section of text selected by the human annotators and may span several sentences or a complete phrase contained in a sentence. They were instructed that an action item is an explicit request for information that requires the recipient's attention or a required action, and they were told to highlight the phrases or sentences that make up the request.

                        Annotator 1
                        No     Yes
  Annotator 2   No      391    26
                Yes     29     298

Table 1: Agreement of Human Annotators at Document Level

Annotator One labeled 324 messages as containing action items. Annotator Two labeled 327 messages as containing action items. The agreement of the human annotators is shown in Tables 1 and 2. The annotators are said to agree at the document level when both marked the same document as containing no action-items or both marked at least one action-item, regardless of whether the text segments were the same. At the document level, the annotators agreed 93% of the time. The kappa statistic [3, 5] is often used to evaluate inter-annotator agreement:

$\kappa = \frac{A - R}{1 - R}$

A is the empirical estimate of the probability of agreement. R is the empirical estimate of the probability of random agreement given the empirical class priors. A value close to −1 implies the annotators agree far less often than would be expected randomly, while a value close to 1 means they agree more often than randomly expected. At the document level, the kappa statistic for inter-annotator agreement is 0.85. This value is both strong enough to expect the problem to be learnable and comparable with results for similar tasks [5, 6].

In order to determine the sentence-level agreement, we use each judgment to create a sentence corpus with labels as described in Section 3.2.2, then consider the agreement over these sentences. This allows us to compare agreement over the no judgments as well. We perform this comparison over the hand-stripped corpus since that eliminates spurious no judgments that would come from including quoted material, etc. Both annotators were free to label the subject as an action-item, but since neither did, we omit the subject line of the message as well. This only reduces the number of no agreements. This leaves 6301 automatically segmented sentences. At the sentence level, the annotators agreed 98% of the time, and the kappa statistic for inter-annotator agreement is 0.82.

In order to produce one single set of judgments, the human annotators went through each annotation where there was disagreement and came to a consensus opinion. The annotators did not collect statistics during this process but anecdotally reported that the majority of disagreements were either cases of clear annotator oversight or different interpretations of conditional statements. For example, "If you would like to keep your job, come to tomorrow's meeting" implies a required action where "If you would like to join
At the document-level, the kappa statistic for inter-annotator agreement is 0.85. This value is both strong enough to expect the problem to be learnable and is comparable with results for similar tasks [5, 6]. In order to determine the sentence-level agreement, we use each judgment to create a sentence-corpus with labels as described in Section 3.2.2, then consider the agreement over these sentences. This allows us to compare agreement over no judgments. We perform this comparison over the hand-stripped corpus since that eliminates spurious no judgments that would come from including quoted material, etc.. Both annotators were free to label the subject as an action-item, but since neither did, we omit the subject line of the message as well. This only reduces the number of no agreements. This leaves 6301 automatically segmented sentences. At the sentence-level, the annotators agreed 98% of the time, and the kappa statistic for inter-annotator agreement is 0.82. In order to produce one single set of judgments, the human annotators went through each annotation where there was disagreement and came to a consensus opinion. The annotators did not collect statistics during this process but anecdotally reported that the majority of disagreements were either cases of clear annotator oversight or different interpretations of conditional statements. For example, If you would like to keep your job, come to tomorrow``s meeting implies a required action where If you would like to join Annotator 1 No Yes Annotator 2 No 5810 65 Yes 74 352 Table 2: Agreement of Human Annotators at Sentence Level the football betting pool, come to tomorrow``s meeting does not. The first would be an action-item in most contexts while the second would not. Of course, many conditional statements are not so clearly interpretable. After reconciling the judgments there are 416 e-mails with no action-items and 328 e-mails containing actionitems. Of the 328 e-mails containing action-items, 259 messages have one action-item segment; 55 messages have two action-item segments; 11 messages have three action-item segments. Two messages have four action-item segments, and one message has six action-item segments. Computing the sentence-level agreement using the reconciled gold standard judgments with each of the annotators'' individual judgments gives a kappa of 0.89 for Annotator One and a kappa of 0.92 for Annotator Two. In terms of message characteristics, there were on average 132 content tokens in the body after stripping. For action-item messages, there were 115. However, by examining Figure 2 we see the length distributions are nearly identical. As would be expected for e-mail, it is a long-tailed distribution with about half the messages having more than 60 tokens in the body (this paragraph has 65 tokens). 4.2 Classifiers For this experiment, we have selected a variety of standard text classification algorithms. In selecting algorithms, we have chosen algorithms that are not only known to work well but which differ along such lines as discriminative vs. generative and lazy vs. eager. We have done this in order to provide both a competitive and thorough sampling of learning methods for the task at hand. This is important since it is easy to improve a strawman classifier by introducing a new representation. By thoroughly sampling alternative classifier choices we demonstrate that representation improvements over bag-of-words are not due to using the information in the bag-of-words poorly. 
4.2.1 kNN

We employ a standard variant of the k-nearest neighbor algorithm used in text classification, kNN with s-cut score thresholding [19]. We use a tfidf-weighting of the terms with a distance-weighted vote of the neighbors to compute the score before thresholding it. In order to choose the value of s for thresholding, we perform leave-one-out cross-validation over the training set. The value of k is set to be 2(⌊log2 N⌋ + 1), where N is the number of training points. This rule for choosing k is theoretically motivated by results which show that such a rule converges to the optimal classifier as the number of training points increases [8]. In practice, we have also found it to be a computational convenience that frequently leads to results comparable with numerically optimizing k via a cross-validation procedure.

4.2.2 Naïve Bayes

We use a standard multinomial naïve Bayes classifier [16]. In using this classifier, we smoothed word and class probabilities using a Bayesian estimate (with the word prior) and a Laplace m-estimate, respectively.

[Figure 2: The Histogram (left) and Distribution (right) of Message Length. A bin size of 20 words was used. Only tokens in the body after hand-stripping were counted. After stripping, the majority of words left are usually actual message content. Both panels plot the number of tokens against the number (left) or percentage (right) of messages, for all messages and for action-item messages.]

Table 3: Average Document-Detection Performance during Cross-Validation for Each Method and the Sample Standard Deviation (Sn−1). The best performance for each classifier is shown in bold.

             Classifier          Document Unigram   Document Ngram    Sentence Unigram   Sentence Ngram
  F1         kNN                 0.6670 ± 0.0288    0.7108 ± 0.0699   0.7615 ± 0.0504    0.7790 ± 0.0460
             naïve Bayes         0.6572 ± 0.0749    0.6484 ± 0.0513   0.7715 ± 0.0597    0.7777 ± 0.0426
             SVM                 0.6904 ± 0.0347    0.7428 ± 0.0422   0.7282 ± 0.0698    0.7682 ± 0.0451
             Voted Perceptron    0.6288 ± 0.0395    0.6774 ± 0.0422   0.6511 ± 0.0506    0.6798 ± 0.0913
  Accuracy   kNN                 0.7029 ± 0.0659    0.7486 ± 0.0505   0.7972 ± 0.0435    0.8092 ± 0.0352
             naïve Bayes         0.6074 ± 0.0651    0.5816 ± 0.1075   0.7863 ± 0.0553    0.8145 ± 0.0268
             SVM                 0.7595 ± 0.0309    0.7904 ± 0.0349   0.7958 ± 0.0551    0.8173 ± 0.0258
             Voted Perceptron    0.6531 ± 0.0390    0.7164 ± 0.0376   0.6413 ± 0.0833    0.7082 ± 0.1032

4.2.3 SVM

We have used a linear SVM with a tfidf feature representation and L2-norm, as implemented in the SVMlight package v6.01 [11]. All default settings were used.

4.2.4 Voted Perceptron

Like the SVM, the Voted Perceptron is a kernel-based learning method. We use the same feature representation and kernel as we have for the SVM: a linear kernel with tfidf-weighting and an L2-norm. The voted perceptron is an online-learning method that keeps a history of past perceptrons used, as well as a weight signifying how often that perceptron was correct. With each new training example, a correct classification increases the weight on the current perceptron, and an incorrect classification updates the perceptron. The output of the classifier uses the weights on the perceptrons to make a final voted classification (a sketch of this update appears below). When used in an offline manner, multiple passes can be made through the training data. Both the voted perceptron and the SVM give a solution from the same hypothesis space - in this case, a linear classifier.
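The update just described can be sketched as follows. This is an illustrative implementation (dense NumPy vectors, labels in {-1, +1}), not the code used in the experiments, where the inputs would be the tfidf-weighted, L2-normalized term vectors.

import numpy as np

def train_voted_perceptron(X, y, epochs=1):
    """Voted perceptron: store every intermediate perceptron together with a count
    of how many consecutive examples it classified correctly; the counts act as
    vote weights at prediction time. X: (n, d) array; y: labels in {-1, +1}."""
    w = np.zeros(X.shape[1])
    history = []                             # list of (weight_vector, vote_count)
    count = 1
    for _ in range(epochs):                  # offline use allows multiple passes
        for x, label in zip(X, y):
            if label * np.dot(w, x) > 0:     # correct: strengthen the current perceptron's vote
                count += 1
            else:                            # mistake: retire the current perceptron, then update
                history.append((w.copy(), count))
                w = w + label * x
                count = 1
    history.append((w.copy(), count))
    return history

def predict_voted(history, x):
    """Weighted vote over all stored perceptrons."""
    vote = sum(c * np.sign(np.dot(w, x)) for w, c in history)
    return 1 if vote >= 0 else -1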
Furthermore, it is well known that the Voted Perceptron increases the margin of the solution after each pass through the training data [10]. Since Cohen et al. [5] obtain worse results using an SVM than a Voted Perceptron with one training iteration, they conclude that the best solution for detecting speech acts may not lie in an area with a large margin. Because their tasks are highly similar to ours, we employ both classifiers to ensure we are not overlooking a competitive alternative classifier to the SVM for the basic bag-of-words representation.

4.3 Performance Measures

To compare the performance of the classification methods, we look at two standard performance measures, F1 and accuracy. The F1 measure [18, 21] is the harmonic mean of precision and recall, where Precision = Correct Positives / Predicted Positives and Recall = Correct Positives / Actual Positives.

4.4 Experimental Methodology

We perform standard 10-fold cross-validation on the set of documents. For the sentence-level approach, all sentences in a document are either entirely in the training set or entirely in the test set for each fold. For significance tests, we use a two-tailed t-test [21] to compare the values obtained during each cross-validation fold, with a p-value of 0.05.

Feature selection was performed using the chi-squared statistic. Different levels of feature selection were considered for each classifier; each of the following numbers of features was tried: 10, 25, 50, 100, 250, 750, 1000, 2000, 4000. There are approximately 4700 unigram tokens without feature selection. In order to choose the number of features to use for each classifier, we perform nested cross-validation and choose the settings that yield the optimal document-level F1 for that classifier. For this study, only the body of each e-mail message was used. Feature selection is always applied to all candidate features; that is, for the n-gram representation, the n-grams and position features are also subject to removal by the feature selection method.

4.5 Results

The results for document-level classification are given in Table 3. The primary hypothesis we are concerned with is that n-grams are critical for this task; if this is true, we expect to see a significant gap in performance between the document-level classifiers that use n-grams (denoted Document Ngram) and those using only unigram features (denoted Document Unigram).

Examining Table 3, we observe that this gap does indeed appear for every classifier except naïve Bayes. The difference in performance produced by the n-gram representation is statistically significant for each classifier except for naïve Bayes and the accuracy metric for kNN (see Table 4). Naïve Bayes's poor performance with the n-gram representation is not surprising, since the bag-of-n-grams causes excessive double-counting, as mentioned in Section 3.2.1; however, naïve Bayes is not hurt at the sentence-level because the sparse examples provide few chances for agglomerative effects of double counting. In either case, when a language-modeling approach is desired, modeling the n-grams directly would be preferable to naïve Bayes. More importantly for the n-gram hypothesis, the n-grams lead to the best document-level classifier performance as well.

As would be expected, the difference between the sentence-level n-gram representation and unigram representation is small. This is because the window of text is so small that the unigram representation, when used at the sentence-level, implicitly picks up on the power of the n-grams. Further improvement would signify that the order of the words matters even when only considering a small sentence-size window. Therefore, the finer-grained sentence-level judgments allow a unigram representation to succeed, but only when performed in a small window - behaving as an n-gram representation for all practical purposes.

Table 4: Significance results for n-grams versus unigrams for document detection using document-level and sentence-level classifiers. When the F1 result is statistically significant, it is shown in bold. When the accuracy result is significant, it is shown with a †.

                       Document Winner   Sentence Winner
  kNN                  Ngram             Ngram
  naïve Bayes          Unigram           Ngram
  SVM                  Ngram†            Ngram
  Voted Perceptron     Ngram†            Ngram

Table 5: Significance results for sentence-level classifiers vs. document-level classifiers for the document detection problem. When the result is statistically significant, it is shown in bold.

                       F1 Winner   Accuracy Winner
  kNN                  Sentence    Sentence
  naïve Bayes          Sentence    Sentence
  SVM                  Sentence    Sentence
  Voted Perceptron     Sentence    Document

Further highlighting the improvement from finer-grained judgments and n-grams, Figure 3 graphically depicts the edge the SVM sentence-level classifier has over the standard bag-of-words approach with a precision-recall curve. In the high-precision area of the graph, the consistent edge of the sentence-level classifier is rather impressive, continuing at precision 1 out to 0.1 recall. This would mean that a tenth of the user's action-items would be placed at the top of their action-item-sorted inbox. Additionally, the large separation at the top right of the curves corresponds to the area where the optimal F1 occurs for each classifier, agreeing with the large improvement from 0.6904 to 0.7682 in F1 score. Considering the relatively unexplored nature of classification at the sentence-level, this gives great hope for further increases in performance.

Table 6: Performance of the Sentence-Level Classifiers at Sentence Detection

                           Accuracy               F1
                       Unigram    Ngram     Unigram    Ngram
  kNN                  0.9519     0.9536    0.6540     0.6686
  naïve Bayes          0.9419     0.9550    0.6176     0.6676
  SVM                  0.9559     0.9579    0.6271     0.6672
  Voted Perceptron     0.8895     0.9247    0.3744     0.5164

Although Cohen et al. [5] observed that the Voted Perceptron with a single training iteration outperformed SVM in a set of similar tasks, we see no such behavior here. This further strengthens the evidence that an alternate classifier with the bag-of-words representation could not reach the same level of performance. The Voted Perceptron classifier does improve when the number of training iterations is increased, but it still trails the SVM classifier.

Sentence detection results are presented in Table 6. With regard to the sentence detection problem, we note that the F1 measure gives a better feel for the remaining room for improvement in this difficult problem. That is, unlike document detection, where action-item documents are fairly common, action-item sentences are very rare. Thus, as in other text problems, the accuracy numbers are deceptively high simply because of the default accuracy attainable by always predicting "no". Although the results here are significantly above random, it is unclear what level of performance is necessary for sentence detection to be useful in and of itself and not simply as a means to document ranking and classification.

Figure 4: Users find action-items quicker when assisted by a classification system.

Finally, when considering a new type of classification task, one of the most basic questions is whether an accurate classifier built for the task can have an impact on the end-user. In order to demonstrate the impact this task can have on e-mail users, we conducted a user study using an earlier, less accurate version of the sentence classifier, in which a three-sentence windowed approach was used instead of a single sentence. There were three distinct sets of e-mail in which users had to find action-items. These sets were either presented in a random order (Unordered), ordered by the classifier (Ordered), or ordered by the classifier and with the center sentence in the highest-confidence window highlighted (Order+help).

[Figure 3: Both n-grams and a small prediction window lead to consistent improvements over the standard approach. Precision vs. recall for action-item detection SVM performance (post model selection), comparing the Document Unigram and Sentence Ngram representations.]

In order to perform fair comparisons between conditions, the overall number of tokens in each message set should be approximately equal; that is, the cognitive reading load should be approximately the same before the classifier's reordering. Additionally, users typically show practice effects, improving at the overall task and thus performing better on later message sets. This is typically handled by varying the ordering of the sets across users so that the means are comparable. While omitting further detail, we note the sets were balanced for the total number of tokens and a Latin square design was used to balance practice effects.

Figure 4 shows that at intervals of 5, 10, and 15 minutes, users consistently found significantly more action-items when assisted by the classifier, but were most critically aided in the first five minutes. Although the classifier consistently aids the users, we did not gain an additional end-user impact from highlighting. As mentioned above, this might be a result of the large room for improvement that still exists for sentence detection, but anecdotal evidence suggests it might also be a result of how the information is presented to the user rather than the accuracy of sentence detection. For example, highlighting the wrong sentence near an actual action-item hurts the user's trust, but if a vague indicator (e.g., an arrow) points to the approximate area, the user is not aware of the near-miss. Since the user studies used a three-sentence window, we believe this played a role as well as sentence-detection accuracy.

4.6 Discussion

In contrast to problems where n-grams have yielded little difference, we believe their power here stems from the fact that many of the meaningful n-grams for action-items consist of common words, e.g., "let me know". Therefore, the document-level unigram approach cannot gain much leverage, even when modeling their joint probability correctly, since these words will often co-occur in the document but not necessarily in a phrase. Additionally, action-item detection is distinct from many text classification tasks in that a single sentence can change the class label of the document. As a result, good classifiers cannot rely on aggregating evidence from a large number of weak indicators across the entire document.

Even though we discarded the header information, examining the top-ranked features at the document-level reveals that many of the features are names or parts of e-mail addresses that occurred in the body and are highly associated with e-mails that tend to contain many or no action-items. A few examples are terms such as "org", "bob", and "gov". We note that these features will be sensitive to the particular distribution (senders/receivers), and thus the document-level approach may produce classifiers that transfer less readily to alternate contexts and users at different institutions. This points out that part of the problem of going beyond bag-of-words may be the methodology: investigating such properties as learning curves and how well a model transfers may highlight differences between models that appear to have similar performance when tested on the distributions they were trained on. We are currently investigating whether the sentence-level classifiers do perform better over different test corpora without retraining.

5. FUTURE WORK

While applying text classifiers at the document-level is fairly well understood, there exists the potential for significantly increasing the performance of the sentence-level classifiers. Such methods include alternate ways of combining the predictions over each sentence; weightings other than tfidf, which may not be appropriate since sentences are small; better sentence segmentation; and other types of phrasal analysis. Additionally, named entity tagging, time expressions, and similar annotations seem likely candidates for features that can further improve this task. We are currently pursuing some of these avenues to see what additional gains they offer.

Finally, it would be interesting to investigate the best methods for combining the document-level and sentence-level classifiers. Since the simple bag-of-words representation at the document-level leads to a learned model that behaves somewhat like a context-specific prior, dependent on the sender/receiver and general topic, a first choice would be to treat it as such when combining probability estimates with the sentence-level classifier. Such a model might serve as a general example for other problems where bag-of-words can establish a baseline model but richer approaches are needed to achieve performance beyond that baseline.

6. SUMMARY AND CONCLUSIONS

The effectiveness of sentence-level detection argues that labeling at the sentence-level provides significant value. Further experiments are needed to see how this interacts with the amount of training data available. Sentence detection that is then agglomerated into document-level detection works surprisingly well, better than the low recall of the sentence-level predictions would lead one to expect. This, in turn, indicates that improved sentence segmentation methods could yield further improvements in classification.

In this work, we examined how action-items can be effectively detected in e-mails. Our empirical analysis has demonstrated that n-grams are of key importance to making the most of document-level judgments. When finer-grained judgments are available, a standard bag-of-words approach using a small (sentence) window size and automatic segmentation techniques can produce results almost as good as the n-gram based approaches.

Acknowledgments

This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. NBCHD030010.
Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Defense Advanced Research Projects Agency (DARPA) or the Department of Interior National Business Center (DOI-NBC). We would like to extend our sincerest thanks to Jill Lehman, whose efforts in data collection were essential in constructing the corpus, and to both Jill and Aaron Steinfeld for their direction of the HCI experiments. We would also like to thank Django Wexler for constructing and supporting the corpus labeling tools and Curtis Huttenhower for his support of the text preprocessing package. Finally, we gratefully acknowledge Scott Fahlman for his encouragement and useful discussions on this topic.

7. REFERENCES

[1] J. Allan, J. Carbonell, G. Doddington, J. Yamron, and Y. Yang. Topic detection and tracking pilot study: Final report. In Proceedings of the DARPA Broadcast News Transcription and Understanding Workshop, Washington, D.C., 1998.
[2] C. Apte, F. Damerau, and S. M. Weiss. Automated learning of decision rules for text categorization. ACM Transactions on Information Systems, 12(3):233-251, July 1994.
[3] J. Carletta. Assessing agreement on classification tasks: The kappa statistic. Computational Linguistics, 22(2):249-254, 1996.
[4] J. Carroll. High precision extraction of grammatical relations. In Proceedings of the 19th International Conference on Computational Linguistics (COLING), pages 134-140, 2002.
[5] W. W. Cohen, V. R. Carvalho, and T. M. Mitchell. Learning to classify email into speech acts. In EMNLP-2004 (Conference on Empirical Methods in Natural Language Processing), pages 309-316, 2004.
[6] S. Corston-Oliver, E. Ringger, M. Gamon, and R. Campbell. Task-focused summarization of email. In Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, pages 43-50, 2004.
[7] A. Culotta, R. Bekkerman, and A. McCallum. Extracting social networks and contact information from email and the web. In CEAS-2004 (Conference on Email and Anti-Spam), Mountain View, CA, July 2004.
[8] L. Devroye, L. Györfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition. Springer-Verlag, New York, NY, 1996.
[9] S. T. Dumais, J. Platt, D. Heckerman, and M. Sahami. Inductive learning algorithms and representations for text categorization. In CIKM '98, Proceedings of the 7th ACM Conference on Information and Knowledge Management, pages 148-155, 1998.
[10] Y. Freund and R. Schapire. Large margin classification using the perceptron algorithm. Machine Learning, 37(3):277-296, 1999.
[11] T. Joachims. Making large-scale SVM learning practical. In B. Schölkopf, C. J. Burges, and A. J. Smola, editors, Advances in Kernel Methods - Support Vector Learning, pages 41-56. MIT Press, 1999.
[12] L. S. Larkey. A patent search and classification system. In Proceedings of the Fourth ACM Conference on Digital Libraries, pages 179-187, 1999.
[13] D. D. Lewis. An evaluation of phrasal and clustered representations on a text categorization task. In SIGIR '92, Proceedings of the 15th Annual International ACM Conference on Research and Development in Information Retrieval, pages 37-50, 1992.
[14] Y. Liu, J. Carbonell, and R. Jin. A pairwise ensemble approach for accurate genre classification. In Proceedings of the European Conference on Machine Learning (ECML), 2003.
[15] Y. Liu, R. Yan, R. Jin, and J. Carbonell. A comparison study of kernels for multi-label text classification using category association. In The Twenty-first International Conference on Machine Learning (ICML), 2004.
[16] A. McCallum and K. Nigam. A comparison of event models for naive Bayes text classification. In Working Notes of AAAI '98 (The 15th National Conference on Artificial Intelligence), Workshop on Learning for Text Categorization, pages 41-48, 1998. TR WS-98-05.
[17] F. Sebastiani. Machine learning in automated text categorization. ACM Computing Surveys, 34(1):1-47, March 2002.
[18] C. J. van Rijsbergen. Information Retrieval. Butterworths, London, 1979.
[19] Y. Yang. An evaluation of statistical approaches to text categorization. Information Retrieval, 1(1/2):67-88, 1999.
[20] Y. Yang, J. Carbonell, R. Brown, T. Pierce, B. T. Archibald, and X. Liu. Learning approaches to topic detection and tracking. IEEE EXPERT, Special Issue on Applications of Intelligent Information Retrieval, 1999.
[21] Y. Yang and X. Liu. A re-examination of text categorization methods. In SIGIR '99, Proceedings of the 22nd Annual International ACM Conference on Research and Development in Information Retrieval, pages 42-49, 1999.
[22] Y. Yang, J. Zhang, J. Carbonell, and C. Jin. Topic-conditioned novelty detection. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, July 2002.
When evaluating sentencedetection for the sentence-level system, we use these class labels as ground truth. Since we are not evaluating multiple segmentation approaches, this does not bias any of the methods. If multiple segmentation systems were under evaluation, one would need to use a metric that matched predicted positive sentences to phrases labeled positive. The metric would need to punish overly long true predictions as well as too short predictions. Our criteria for converting to labeled instances implicitly includes both criteria. Since the segmentation is fixed, an overly long prediction would be predicting "yes" for many "no" instances since presumably the extra length corresponds to additional segmented sentences all of which do not contain 30% of action-item. Likewise, a too short prediction must correspond to a small sentence included in the action-item but not constituting all of the action-item. Therefore, in order to consider the prediction to be too short, there will be an additional preceding/following sentence that is an action-item where we incorrectly predicted "no". Once a sentence-level classifier has made a prediction for each sentence, we must combine these predictions to make both a document-level prediction and a document-level score. We use the simple policy of predicting positive when any of the sentences is predicted positive. In order to produce a document score for ranking, the confidence that the document contains an action-item is: where s is a sentence in document d, π is the classifier's 1/0 prediction, ψ is the score the classifier assigns as its confidence that π (s) = 1, and n (d) is the greater of 1 and the number of (unigram) tokens in the document. In other words, when any sentence is predicted positive, the document score is the length normalized sum of the sentence scores above threshold. When no sentence is predicted positive, the document score is the maximum sentence score normalized by length. As in other text problems, we are more likely to emit false positives for documents with more words or sentences. Thus we include a length normalization factor. 4. EXPERIMENTAL ANALYSIS 4.1 The Data Our corpus consists of e-mails obtained from volunteers at an educational institution and cover subjects such as: organizing a research workshop, arranging for job-candidate interviews, publishing proceedings, and talk announcements. The messages were anonymized by replacing the names of each individual and institution with a pseudonym .1 After attempting to identify and eliminate duplicate e-mails, the corpus contains 744 e-mail messages. After identity anonymization, the corpora has three basic versions. Quoted material refers to the text of a previous e-mail that an author often leaves in an e-mail message when responding to the e-mail. Quoted material can act as noise when learning since it may include action-items from previous messages that are no longer relevant. To isolate the effects of quoted material, we have three versions of the corpora. The raw form contains the basic messages. The auto-stripped version contains the messages after quoted material has been automatically removed. The hand-stripped version contains the messages after quoted material has been removed by a human. Additionally, the hand-stripped version has had any xml content and e-mail signatures removed--leaving only the essential content of the message. The studies reported here are performed with the hand-stripped version. 
This allows us to balance the cognitive load in terms of number of tokens that must be read in the user-studies we report--including quoted material would complicate the user studies since some users might skip the material while others read it. Additionally, ensuring all quoted material is removed 1We have an even more highly anonymized version of the corpus that can be made available for some outside experimentation. Please contact the authors for more information on obtaining this data. prevents tainting the cross-validation since otherwise a test item could occur as quoted material in a training document. 4.1.1 Data Labeling Two human annotators labeled each message as to whether or not it contained an action-item. In addition, they identified each segment of the e-mail which contained an action-item. A segment is a contiguous section of text selected by the human annotators and may span several sentences or a complete phrase contained in a sentence. They were instructed that an action item is "an explicit request for information that requires the recipient's attention or a required action" and told to "highlight the phrases or sentences that make up the request". Table 1: Agreement of Human Annotators at Document Level Annotator One labeled 324 messages as containing action items. Annotator Two labeled 327 messages as containing action items. The agreement of the human annotators is shown in Tables 1 and 2. The annotators are said to agree at the document-level when both marked the same document as containing no action-items or both marked at least one action-item regardless of whether the text segments were the same. At the document-level, the annotators agreed 93% of the time. The kappa statistic [3, 5] is often used to evaluate inter-annotator agreement: A is the empirical estimate of the probability of agreement. R is the empirical estimate of the probability of random agreement given the empirical class priors. A value close to − 1 implies the annotators agree far less often than would be expected randomly, while a value close to 1 means they agree more often than randomly expected. At the document-level, the kappa statistic for inter-annotator agreement is 0.85. This value is both strong enough to expect the problem to be learnable and is comparable with results for similar tasks [5, 6]. In order to determine the sentence-level agreement, we use each judgment to create a sentence-corpus with labels as described in Section 3.2.2, then consider the agreement over these sentences. This allows us to compare agreement over "no judgments". We perform this comparison over the hand-stripped corpus since that eliminates spurious "no" judgments that would come from including quoted material, etc. . Both annotators were free to label the subject as an action-item, but since neither did, we omit the subject line of the message as well. This only reduces the number of "no" agreements. This leaves 6301 automatically segmented sentences. At the sentence-level, the annotators agreed 98% of the time, and the kappa statistic for inter-annotator agreement is 0.82. In order to produce one single set of judgments, the human annotators went through each annotation where there was disagreement and came to a consensus opinion. The annotators did not collect statistics during this process but anecdotally reported that the majority of disagreements were either cases of clear annotator oversight or different interpretations of conditional statements. 
For example, "Ifyou would like to keep your job, come to tomorrow's meeting" implies a required action where "Ifyou would like to join Table 2: Agreement of Human Annotators at Sentence Level the football betting pool, come to tomorrow's meeting" does not. The first would be an action-item in most contexts while the second would not. Of course, many conditional statements are not so clearly interpretable. After reconciling the judgments there are 416 e-mails with no action-items and 328 e-mails containing actionitems. Of the 328 e-mails containing action-items, 259 messages have one action-item segment; 55 messages have two action-item segments; 11 messages have three action-item segments. Two messages have four action-item segments, and one message has six action-item segments. Computing the sentence-level agreement using the reconciled "gold standard" judgments with each of the annotators' individual judgments gives a kappa of 0.89 for Annotator One and a kappa of 0.92 for Annotator Two. In terms of message characteristics, there were on average 132 content tokens in the body after stripping. For action-item messages, there were 115. However, by examining Figure 2 we see the length distributions are nearly identical. As would be expected for e-mail, it is a long-tailed distribution with about half the messages having more than 60 tokens in the body (this paragraph has 65 tokens). 4.2 Classifiers For this experiment, we have selected a variety of standard text classification algorithms. In selecting algorithms, we have chosen algorithms that are not only known to work well but which differ along such lines as discriminative vs. generative and lazy vs. eager. We have done this in order to provide both a competitive and thorough sampling of learning methods for the task at hand. This is important since it is easy to improve a strawman classifier by introducing a new representation. By thoroughly sampling alternative classifier choices we demonstrate that representation improvements over bag-of-words are not due to using the information in the bag-of-words poorly. 4.2.1 kNN We employ a standard variant of the k-nearest neighbor algorithm used in text classification, kNN with s-cut score thresholding [19]. We use a tfidf-weighting of the terms with a distanceweighted vote of the neighbors to compute the score before thresholding it. In order to choose the value of s for thresholding, we perform leave-one-out cross-validation over the training set. The value of k is set to be 2 ([log2 N] + 1) where N is the number of training points. This rule for choosing k is theoretically motivated by results which show such a rule converges to the optimal classifier as the number of training points increases [8]. In practice, we have also found it to be a computational convenience that frequently leads to comparable results with numerically optimizing k via a cross-validation procedure. 4.2.2 Na ¨ ıve Bayes We use a standard multinomial na ¨ ıve Bayes classifier [16]. In using this classifier, we smoothed word and class probabilities using a Bayesian estimate (with the word prior) and a Laplace m-estimate, respectively. Figure 2: The Histogram (left) and Distribution (right) of Message Length. A bin size of 20 words was used. Only tokens in the body after hand-stripping were counted. After stripping, the majority of words left are usually actual message content. Table 3: Average Document-Detection Performance during Cross-Validation for Each Method and the Sample Standard Deviation (Sn − 1) in italics. 
The best performance for each classifier is shown in bold. 4.2.3 SVM We have used a linear SVM with a tfidf feature representation and L2-norm as implemented in the SVMlight package v6 .01 [11]. All default settings were used. 4.2.4 Voted Perceptron Like the SVM, the Voted Perceptron is a kernel-based learning method. We use the same feature representation and kernel as we have for the SVM, a linear kernel with tfidf-weighting and an L2-norm. The voted perceptron is an online-learning method that keeps a history of past perceptrons used, as well as a weight signifying how often that perceptron was correct. With each new training example, a correct classification increases the weight on the current perceptron and an incorrect classification updates the perceptron. The output of the classifier uses the weights on the perceptra to make a final "voted" classification. When used in an offline-manner, multiple passes can be made through the training data. Both the voted perceptron and the SVM give a solution from the same hypothesis space--in this case, a linear classifier. Furthermore, it is well-known that the Voted Perceptron increases the margin of the solution after each pass through the training data [10]. Since Cohen et al. [5] obtain worse results using an SVM than a Voted Perceptron with one training iteration, they conclude that the best solution for detecting speech acts may not lie in an area with a large margin. Because their tasks are highly similar to ours, we employ both classifiers to ensure we are not overlooking a competitive alternative classifier to the SVM for the basic bag-of-words representation. 4.3 Performance Measures To compare the performance of the classification methods, we look at two standard performance measures, F1 and accuracy. The F1 measure [18, 21] is the harmonic mean of precision and recall where Precision = Correct Positives Predicted Positives and Recall = Correct Positives Actual Positives. 4.4 Experimental Methodology We perform standard 10-fold cross-validation on the set of documents. For the sentence-level approach, all sentences in a document are either entirely in the training set or entirely in the test set for each fold. For significance tests, we use a two-tailed t-test [21] to compare the values obtained during each cross-validation fold with a p-value of 0.05. Feature selection was performed using the chi-squared statistic. Different levels of feature selection were considered for each classifier. Each of the following number of features was tried: 10, 25, 50, 100, 250, 750, 1000, 2000, 4000. There are approximately 4700 unigram tokens without feature selection. In order to choose the number of features to use for each classifier, we perform nested cross-validation and choose the settings that yield the optimal document-level F1 for that classifier. For this study, only the body of each e-mail message was used. Feature selection is always applied to all candidate features. That is, for the n-gram representation, the n-grams and position features are also subject to removal by the feature selection method. 4.5 Results The results for document-level classification are given in Table 3. The primary hypothesis we are concerned with is that n-grams are critical for this task; if this is true, we expect to see a significant gap in performance between the document-level classifiers that use n-grams (denoted Document Ngram) and those using only unigram features (denoted Document Unigram). 
Examining Table 3, we observe that this is indeed the case for every classifier except na ¨ ıve Bayes. This difference in performance produced by the n-gram representation is statistically significant for each classifier except for na ¨ ıve Bayes and the accuracy metric for kNN (see Table 4). Na ¨ ıve Bayes poor performance with the n-gram representation is not surprising since the bag-of-n-grams causes excessive doublecounting as mentioned in Section 3.2.1; however, na ¨ ıve Bayes is not hurt at the sentence-level because the sparse examples provide few chances for agglomerative effects of double counting. In either case, when a language-modeling approach is desired, modeling the n-grams directly would be preferable to na ¨ ıve Bayes. More importantly for the n-gram hypothesis, the n-grams lead to the best document-level classifier performance as well. As would be expected, the difference between the sentence-level n-gram representation and unigram representation is small. This is because the window of text is so small that the unigram representation, when done at the sentence-level, implicitly picks up on the power of the n-grams. Further improvement would signify that the order of the words matter even when only considering a small sentence-size window. Therefore, the finer-grained sentence-level judgments allows a unigram representation to succeed but only when performed in a small window--behaving as an n-gram representation for all practical purposes. Table 4: Significance results for n-grams versus unigrams for document detection using document-level and sentence-level classifiers. When the F1 result is statistically significant, it is shown in bold. When the accuracy result is significant, it is shown with a †. Table 5: Significance results for sentence-level classifiers vs. document-level classifiers for the document detection problem. When the result is statistically significant, it is shown in bold. Further highlighting the improvement from finer-grained judgments and n-grams, Figure 3 graphically depicts the edge the SVM sentence-level classifier has over the standard bag-of-words approach with a precision-recall curve. In the high precision area of the graph, the consistent edge of the sentence-level classifier is rather impressive--continuing at precision 1 out to 0.1 recall. This would mean that a tenth of the user's action-items would be placed at the top of their action-item sorted inbox. Additionally, the large separation at the top right of the curves corresponds to the area where the optimal F1 occurs for each classifier, agreeing with the large improvement from 0.6904 to 0.7682 in F1 score. Considering the relative unexplored nature of classification at the sentence-level, this gives great hope for further increases in performance. Table 6: Performance of the Sentence-Level Classifiers at Sentence Detection Although Cohen et al. [5] observed that the Voted Perceptron with a single training iteration outperformed SVM in a set of similar tasks, we see no such behavior here. This further strengthens the evidence that an alternate classifier with the bag-of-words representation could not reach the same level of performance. The Voted Perceptron classifier does improve when the number of training iterations are increased, but it is still lower than the SVM classifier. Sentence detection results are presented in Table 6. 
With regard to the sentence detection problem, we note that the F1 measure gives a better feel for the remaining room for improvement in this difficult problem. That is, unlike document detection where actionitem documents are fairly common, action-item sentences are very rare. Thus, as in other text problems, the accuracy numbers are deceptively high sheerly because of the default accuracy attainable by always predicting "no". Although, the results here are significantly above-random, it is unclear what level of performance is necessary for sentence detection to be useful in and of itself and not simply as a means to document ranking and classification. Figure 4: Users find action-items quicker when assisted by a classification system. Finally, when considering a new type of classification task, one of the most basic questions is whether an accurate classifier built for the task can have an impact on the end-user. In order to demonstrate the impact this task can have on e-mail users, we conducted a user study using an earlier less-accurate version of the sentence classifier--where instead of using just a single sentence, a threesentence windowed-approach was used. There were three distinct sets of e-mail in which users had to find action-items. These sets were either presented in a random order (Unordered), ordered by the classifier (Ordered), or ordered by the classifier and with the Figure 3: Both n-grams and a small prediction window lead to consistent improvements over the standard approach. center sentence in the highest confidence window highlighted (Order + help). In order to perform fair comparisons between conditions, the overall number of tokens in each message set should be approximately equal; that is, the cognitive reading load should be approximately the same before the classifier's reordering. Additionally, users typically show "practice effects" by improving at the overall task and thus performing better at later message sets. This is typically handled by varying the ordering of the sets across users so that the means are comparable. While omitting further detail, we note the sets were balanced for the total number of tokens and a latin square design was used to balance practice effects. Figure 4 shows that at intervals of 5, 10, and 15 minutes, users consistently found significantly more action-items when assisted by the classifier, but were most critically aided in the first five minutes. Although, the classifier consistently aids the users, we did not gain an additional end-user impact by highlighting. As mentioned above, this might be a result of the large room for improvement that still exists for sentence detection, but anecdotal evidence suggests this might also be a result of how the information is presented to the user rather than the accuracy of sentence detection. For example, highlighting the wrong sentence near an actual action-item hurts the user's trust, but if a vague indicator (e.g., an arrow) points to the approximate area the user is not aware of the near-miss. Since the user studies used a three sentence window, we believe this played a role as well as sentence detection accuracy. 4.6 Discussion In contrast to problems where n-grams have yielded little difference, we believe their power here stems from the fact that many of the meaningful n-grams for action-items consist of common words, e.g., "let me know". 
Therefore, the document-level unigram approach cannot gain much leverage, even when modeling their joint probability correctly, since these words will often co-occur in the document but not necessarily in a phrase. Additionally, action-item detection is distinct from many text classification tasks in that a single sentence can change the class label of the document. As a result, good classifiers cannot rely on aggregating evidence from a large number of weak indicators across the entire document. Even though we discarded the header information, examining the top-ranked features at the document-level reveals that many of the features are names or parts of e-mail addresses that occurred in the body and are highly associated with e-mails that tend to contain many or no action-items. A few examples are terms such as "org", "bob", and "gov". We note that these features will be sensitive to the particular distribution (senders/receivers) and thus the document-level approach may produce classifiers that transfer less readily to alternate contexts and users at different institutions. This points out that part of the problem of going beyond bag-of-words may be the methodology, and investigating such properties as learning curves and how well a model transfers may highlight differences in models which appear to have similar performance when tested on the distributions they were trained on. We are currently investigating whether the sentence-level classifiers do perform better over different test corpora without retraining. 6. SUMMARY AND CONCLUSIONS The effectiveness of sentence-level detection argues that labeling at the sentence-level provides significant value. Further experiments are needed to see how this interacts with the amount of training data available. Sentence detection that is then agglomerated to document-level detection works surprisingly better given low recall than would be expected with sentence-level items. This, in turn, indicates that improved sentence segmentation methods could yield further improvements in classification. In this work, we examined how action-items can be effectively detected in e-mails. Our empirical analysis has demonstrated that n-grams are of key importance to making the most of documentlevel judgments. When finer-grained judgments are available, then a standard bag-of-words approach using a small (sentence) window size and automatic segmentation techniques can produce results almost as good as the n-gram based approaches.
Feature Representation for Effective Action-Item Detection ABSTRACT E-mail users face an ever-growing challenge in managing their inboxes due to the growing centrality of email in the workplace for task assignment, action requests, and other roles beyond information dissemination. Whereas Information Retrieval and Machine Learning techniques are gaining initial acceptance in spam filtering and automated folder assignment, this paper reports on a new task: automated action-item detection, in order to flag emails that require responses, and to highlight the specific passage (s) indicating the request (s) for action. Unlike standard topic-driven text classification, action-item detection requires inferring the sender's intent, and as such responds less well to pure bag-of-words classification. However, using enriched feature sets, such as n-grams (up to n = 4) with chi-squared feature selection, and contextual cues for action-item location improve performance by up to 10% over unigrams, using in both cases state of the art classifiers such as SVMs with automated model selection via embedded cross-validation. 1. INTRODUCTION E-mail users are facing an increasingly difficult task of managing their inboxes in the face of mounting challenges that result from rising e-mail usage. This includes prioritizing e-mails over a range of sources from business partners to family members, filtering and reducing junk e-mail, and quickly managing requests that demand Figure 1: An E-mail with emphasized Action-Item, an explicit request that requires the recipient's attention or action. the receiver's attention or action. Automated action-item detection targets the third of these problems by attempting to detect which e-mails require an action or response with information, and within those e-mails, attempting to highlight the sentence (or other passage length) that directly indicates the action request. Such a detection system can be used as one part of an e-mail agent which would assist a user in processing important e-mails quicker than would have been possible without the agent. We view action-item detection as one necessary component of a successful e-mail agent which would perform spam detection, action-item detection, topic classification and priority ranking, among other functions. The utility of such a detector can manifest as a method of prioritizing e-mails according to task-oriented criteria other than the standard ones of topic and sender or as a means of ensuring that the email user hasn't dropped the proverbial ball by forgetting to address an action request. Action-item detection differs from standard text classification in two important ways. First, the user is interested both in detecting whether an email contains action items and in locating exactly where these action item requests are contained within the email body. In contrast, standard text categorization merely assigns a topic label to each text, whether that label corresponds to an e-mail folder or a controlled indexing vocabulary [12, 15, 22]. Second, action-item detection attempts to recover the email sender's intent--whether she means to elicit response or action on the part of the receiver; note that for this task, classifiers using only unigrams as features do not perform optimally, as evidenced in our results below. Instead we find that we need more information-laden features such as higher-order n-grams. Text categorization by topic, on the other hand, works very well using just individual words as features [2, 9, 13, 17]. 
In fact, genre-classification, which one would think may require more than a bag-of-words approach, also works quite well using just unigram features [14]. Topic detection and tracking (TDT), also works well with unigram feature sets [1, 20]. We believe that action-item detection is one of the first clear instances of an IR-related task where we must move beyond bag-of-words to achieve high performance, albeit not too far, as bag-of-n-grams seem to suffice. We first review related work for similar text classification problems such as e-mail priority ranking and speech act identification. Then we more formally define the action-item detection problem, discuss the aspects that distinguish it from more common problems like topic classification, and highlight the challenges in constructing systems that can perform well at the sentence and document level. From there, we move to a discussion of feature representation and selection techniques appropriate for this problem and how standard text classification approaches can be adapted to smoothly move from the sentence-level detection problem to the documentlevel classification problem. We then conduct an empirical analysis that helps us determine the effectiveness of our feature extraction procedures as well as establish baselines for a number of classification algorithms on this task. Finally, we summarize this paper's contributions and consider interesting directions for future work. 2. RELATED WORK Several other researchers have considered very similar text classification tasks. Cohen et al. [5] describe an ontology of "speech acts", such as "Propose a Meeting", and attempt to predict when an e-mail contains one of these speech acts. We consider action-items to be an important specific type of speech act that falls within their more general classification. While they provide results for several classification methods, their methods only make use of human judgments at the document-level. In contrast, we consider whether accuracy can be increased by using finer-grained human judgments that mark the specific sentences and phrases of interest. Corston-Oliver et al. [6] consider detecting items in e-mail to "Put on a To-Do List". This classification task is very similar to ours except they do not consider "simple factual questions" to belong to this category. We include questions, but note that not all questions are action-items--some are rhetorical or simply social convention, "How are you?" . From a learning perspective, while they make use of judgments at the sentence-level, they do not explicitly compare what if any benefits finer-grained judgments offer. Additionally, they do not study alternative choices or approaches to the classification task. Instead, they simply apply a standard SVM at the sentence-level and focus primarily on a linguistic analysis of how the sentence can be logically reformulated before adding it to the task list. In this study, we examine several alternative classification methods, compare document-level and sentence-level approaches and analyze the machine learning issues implicit in these problems. Interest in a variety of learning tasks related to e-mail has been rapidly growing in the recent literature. For example, in a forum dedicated to e-mail learning tasks, Culotta et al. [7] presented methods for learning social networks from e-mail. 
In this work, we do not focus on peer relationships; however, such methods could complement those here since peer relationships often influence word choice when requesting an action. 3. PROBLEM DEFINITION & APPROACH 3.1 Problem Definition 3.2 Approach 3.2.1 Features 3.2.2 Implementation Details 4. EXPERIMENTAL ANALYSIS 4.1 The Data 4.1.1 Data Labeling 4.2 Classifiers 4.2.1 kNN 4.2.2 Na ¨ ıve Bayes 4.2.3 SVM 4.2.4 Voted Perceptron 4.3 Performance Measures 4.4 Experimental Methodology 4.5 Results 4.6 Discussion 6. SUMMARY AND CONCLUSIONS The effectiveness of sentence-level detection argues that labeling at the sentence-level provides significant value. Further experiments are needed to see how this interacts with the amount of training data available. Sentence detection that is then agglomerated to document-level detection works surprisingly better given low recall than would be expected with sentence-level items. This, in turn, indicates that improved sentence segmentation methods could yield further improvements in classification. In this work, we examined how action-items can be effectively detected in e-mails. Our empirical analysis has demonstrated that n-grams are of key importance to making the most of documentlevel judgments. When finer-grained judgments are available, then a standard bag-of-words approach using a small (sentence) window size and automatic segmentation techniques can produce results almost as good as the n-gram based approaches.
Feature Representation for Effective Action-Item Detection ABSTRACT E-mail users face an ever-growing challenge in managing their inboxes due to the growing centrality of email in the workplace for task assignment, action requests, and other roles beyond information dissemination. Whereas Information Retrieval and Machine Learning techniques are gaining initial acceptance in spam filtering and automated folder assignment, this paper reports on a new task: automated action-item detection, in order to flag emails that require responses, and to highlight the specific passage (s) indicating the request (s) for action. Unlike standard topic-driven text classification, action-item detection requires inferring the sender's intent, and as such responds less well to pure bag-of-words classification. However, using enriched feature sets, such as n-grams (up to n = 4) with chi-squared feature selection, and contextual cues for action-item location improve performance by up to 10% over unigrams, using in both cases state of the art classifiers such as SVMs with automated model selection via embedded cross-validation. 1. INTRODUCTION E-mail users are facing an increasingly difficult task of managing their inboxes in the face of mounting challenges that result from rising e-mail usage. Figure 1: An E-mail with emphasized Action-Item, an explicit request that requires the recipient's attention or action. the receiver's attention or action. Automated action-item detection targets the third of these problems by attempting to detect which e-mails require an action or response with information, and within those e-mails, attempting to highlight the sentence (or other passage length) that directly indicates the action request. Such a detection system can be used as one part of an e-mail agent which would assist a user in processing important e-mails quicker than would have been possible without the agent. We view action-item detection as one necessary component of a successful e-mail agent which would perform spam detection, action-item detection, topic classification and priority ranking, among other functions. Action-item detection differs from standard text classification in two important ways. First, the user is interested both in detecting whether an email contains action items and in locating exactly where these action item requests are contained within the email body. In contrast, standard text categorization merely assigns a topic label to each text, whether that label corresponds to an e-mail folder or a controlled indexing vocabulary [12, 15, 22]. Second, action-item detection attempts to recover the email sender's intent--whether she means to elicit response or action on the part of the receiver; note that for this task, classifiers using only unigrams as features do not perform optimally, as evidenced in our results below. Instead we find that we need more information-laden features such as higher-order n-grams. Text categorization by topic, on the other hand, works very well using just individual words as features [2, 9, 13, 17]. In fact, genre-classification, which one would think may require more than a bag-of-words approach, also works quite well using just unigram features [14]. Topic detection and tracking (TDT), also works well with unigram feature sets [1, 20]. We first review related work for similar text classification problems such as e-mail priority ranking and speech act identification. 
Then we more formally define the action-item detection problem, discuss the aspects that distinguish it from more common problems like topic classification, and highlight the challenges in constructing systems that can perform well at the sentence and document level. From there, we move to a discussion of feature representation and selection techniques appropriate for this problem and how standard text classification approaches can be adapted to smoothly move from the sentence-level detection problem to the documentlevel classification problem. We then conduct an empirical analysis that helps us determine the effectiveness of our feature extraction procedures as well as establish baselines for a number of classification algorithms on this task. Finally, we summarize this paper's contributions and consider interesting directions for future work. 2. RELATED WORK Several other researchers have considered very similar text classification tasks. We consider action-items to be an important specific type of speech act that falls within their more general classification. While they provide results for several classification methods, their methods only make use of human judgments at the document-level. In contrast, we consider whether accuracy can be increased by using finer-grained human judgments that mark the specific sentences and phrases of interest. Corston-Oliver et al. [6] consider detecting items in e-mail to "Put on a To-Do List". This classification task is very similar to ours except they do not consider "simple factual questions" to belong to this category. . Additionally, they do not study alternative choices or approaches to the classification task. Instead, they simply apply a standard SVM at the sentence-level and focus primarily on a linguistic analysis of how the sentence can be logically reformulated before adding it to the task list. In this study, we examine several alternative classification methods, compare document-level and sentence-level approaches and analyze the machine learning issues implicit in these problems. Interest in a variety of learning tasks related to e-mail has been rapidly growing in the recent literature. For example, in a forum dedicated to e-mail learning tasks, Culotta et al. [7] presented methods for learning social networks from e-mail. In this work, we do not focus on peer relationships; however, such methods could complement those here since peer relationships often influence word choice when requesting an action. 6. SUMMARY AND CONCLUSIONS The effectiveness of sentence-level detection argues that labeling at the sentence-level provides significant value. Sentence detection that is then agglomerated to document-level detection works surprisingly better given low recall than would be expected with sentence-level items. This, in turn, indicates that improved sentence segmentation methods could yield further improvements in classification. In this work, we examined how action-items can be effectively detected in e-mails. Our empirical analysis has demonstrated that n-grams are of key importance to making the most of documentlevel judgments. When finer-grained judgments are available, then a standard bag-of-words approach using a small (sentence) window size and automatic segmentation techniques can produce results almost as good as the n-gram based approaches.
H-83
Estimating the Global PageRank of Web Communities
Localized search engines are small-scale systems that index a particular community on the web. They offer several benefits over their large-scale counterparts in that they are relatively inexpensive to build, and can provide more precise and complete search capability over their relevant domains. One disadvantage such systems have over large-scale search engines is the lack of global PageRank values. Such information is needed to assess the value of pages in the localized search domain within the context of the web as a whole. In this paper, we present well-motivated algorithms to estimate the global PageRank values of a local domain. The algorithms are all highly scalable in that, given a local domain of size n, they use O(n) resources that include computation time, bandwidth, and storage. We test our methods across a variety of localized domains, including site-specific domains and topic-specific domains. We demonstrate that by crawling as few as n or 2n additional pages, our methods can give excellent global PageRank estimates.
[ "global pagerank", "web commun", "local search engin", "algorithm", "local domain", "topic-specif domain", "larg-scale search engin", "link-base rank", "subgraph", "global graph", "crawl problem", "experiment" ]
[ "P", "P", "P", "P", "P", "P", "M", "U", "U", "M", "M", "U" ]
Estimating the Global PageRank of Web Communities Jason V. Davis Dept. of Computer Sciences University of Texas at Austin Austin, TX 78712 jdavis@cs.utexas.edu Inderjit S. Dhillon Dept. of Computer Sciences University of Texas at Austin Austin, TX 78712 inderjit@cs.utexas.edu ABSTRACT Localized search engines are small-scale systems that index a particular community on the web. They offer several benefits over their large-scale counterparts in that they are relatively inexpensive to build, and can provide more precise and complete search capability over their relevant domains. One disadvantage such systems have over large-scale search engines is the lack of global PageRank values. Such information is needed to assess the value of pages in the localized search domain within the context of the web as a whole. In this paper, we present well-motivated algorithms to estimate the global PageRank values of a local domain. The algorithms are all highly scalable in that, given a local domain of size n, they use O(n) resources that include computation time, bandwidth, and storage. We test our methods across a variety of localized domains, including site-specific domains and topic-specific domains. We demonstrate that by crawling as few as n or 2n additional pages, our methods can give excellent global PageRank estimates. Categories and Subject Descriptors H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval; G.1.3 [Numerical Analysis]: Numerical Linear Algebra; G.3 [Probability and Statistics]: Markov Processes General Terms PageRank, Markov Chain, Stochastic Complementation 1. INTRODUCTION Localized search engines are small-scale search engines that index only a single community of the web. Such communities can be site-specific domains, such as pages within the cs.utexas.edu domain, or topic-related communitiesfor example, political websites. Compared to the web graph crawled and indexed by large-scale search engines, the size of such local communities is typically orders of magnitude smaller. Consequently, the computational resources needed to build such a search engine are also similarly lighter. By restricting themselves to smaller, more manageable sections of the web, localized search engines can also provide more precise and complete search capabilities over their respective domains. One drawback of localized indexes is the lack of global information needed to compute link-based rankings. The PageRank algorithm [3], has proven to be an effective such measure. In general, the PageRank of a given page is dependent on pages throughout the entire web graph. In the context of a localized search engine, if the PageRanks are computed using only the local subgraph, then we would expect the resulting PageRanks to reflect the perceived popularity within the local community and not of the web as a whole. For example, consider a localized search engine that indexes political pages with conservative views. A person wishing to research the opinions on global warming within the conservative political community may encounter numerous such opinions across various websites. If only local PageRank values are available, then the search results will reflect only strongly held beliefs within the community. However, if global PageRanks are also available, then the results can additionally reflect outsiders'' views of the conservative community (those documents that liberals most often access within the conservative community). 
Thus, for many localized search engines, incorporating global PageRanks can improve the quality of search results. However, the number of pages a local search engine indexes is typically orders of magnitude smaller than the number of pages indexed by their large-scale counterparts. Localized search engines do not have the bandwidth, storage capacity, or computational power to crawl, download, and compute the global PageRanks of the entire web. In this work, we present a method of approximating the global PageRanks of a local domain while only using resources of the same order as those needed to compute the PageRanks of the local subgraph. Our proposed method looks for a supergraph of our local subgraph such that the local PageRanks within this supergraph are close to the true global PageRanks. We construct this supergraph by iteratively crawling global pages on the current web frontier-i.e., global pages with inlinks from pages that have already been crawled. In order to provide 116 Research Track Paper a good approximation to the global PageRanks, care must be taken when choosing which pages to crawl next; in this paper, we present a well-motivated page selection algorithm that also performs well empirically. This algorithm is derived from a well-defined problem objective and has a running time linear in the number of local nodes. We experiment across several types of local subgraphs, including four topic related communities and several sitespecific domains. To evaluate performance, we measure the difference between the current global PageRank estimate and the global PageRank, as a function of the number of pages crawled. We compare our algorithm against several heuristics and also against a baseline algorithm that chooses pages at random, and we show that our method outperforms these other methods. Finally, we empirically demonstrate that, given a local domain of size n, we can provide good approximations to the global PageRank values by crawling at most n or 2n additional pages. The paper is organized as follows. Section 2 gives an overview of localized search engines and outlines their advantages over global search. Section 3 provides background on the PageRank algorithm. Section 4 formally defines our problem, and section 5 presents our page selection criteria and derives our algorithms. Section 6 provides experimental results, section 7 gives an overview of related work, and, finally, conclusions are given in section 8. 2. LOCALIZED SEARCH ENGINES Localized search engines index a single community of the web, typically either a site-specific community, or a topicspecific community. Localized search engines enjoy three major advantages over their large-scale counterparts: they are relatively inexpensive to build, they can offer more precise search capability over their local domain, and they can provide a more complete index. The resources needed to build a global search engine are enormous. A 2003 study by Lyman et al. [13] found that the `surface web'' (publicly available static sites) consists of 8.9 billion pages, and that the average size of these pages is approximately 18.7 kilobytes. To download a crawl of this size, approximately 167 terabytes of space is needed. For a researcher who wishes to build a search engine with access to a couple of workstations or a small server, storage of this magnitude is simply not available. However, building a localized search engine over a web community of a hundred thousand pages would only require a few gigabytes of storage. 
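A quick back-of-the-envelope check of the storage figures quoted above (a sketch assuming decimal units; the page counts are the ones cited in the text):

pages_surface_web = 8.9e9      # surface-web pages in the 2003 estimate
avg_page_kb = 18.7             # average page size in kilobytes
print(pages_surface_web * avg_page_kb * 1e3 / 1e12)   # ~166 TB, i.e. roughly 167 terabytes

local_pages = 1e5              # a web community of a hundred thousand pages
print(local_pages * avg_page_kb * 1e3 / 1e9)          # ~1.9 GB, i.e. a few gigabytes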
The computational burden required to support search queries over a database this size is more manageable as well. We note that, for topic-specific search engines, the relevant community can be efficiently identified and downloaded by using a focused crawler [21, 4]. For site-specific domains, the local domain is readily available on their own web server. This obviates the need for crawling or spidering, and a complete and up-to-date index of the domain can thus be guaranteed. This is in contrast to their large-scale counterparts, which suffer from several shortcomings. First, crawling dynamically generated pages-pages in the `hidden web''-has been the subject of research [20] and is a non-trivial task for an external crawler. Second, site-specific domains can enable the robots exclusion policy. This prohibits external search engines'' crawlers from downloading content from the domain, and an external search engine must instead rely on outside links and anchor text to index these restricted pages. By restricting itself to only a specific domain of the internet, a localized search engine can provide more precise search results. Consider the canonical ambiguous search query, `jaguar'', which can refer to either the car manufacturer or the animal. A scientist trying to research the habitat and evolutionary history of a jaguar may have better success using a finely tuned zoology-specific search engine than querying Google with multiple keyword searches and wading through irrelevant results. A method to learn better ranking functions for retrieval was recently proposed by Radlinski and Joachims [19] and has been applied to various local domains, including Cornell University``s website [8]. 3. PAGERANK OVERVIEW The PageRank algorithm defines the importance of web pages by analyzing the underlying hyperlink structure of a web graph. The algorithm works by building a Markov chain from the link structure of the web graph and computing its stationary distribution. One way to compute the stationary distribution of a Markov chain is to find the limiting distribution of a random walk over the chain. Thus, the PageRank algorithm uses what is sometimes referred to as the `random surfer'' model. In each step of the random walk, the `surfer'' either follows an outlink from the current page (i.e. the current node in the chain), or jumps to a random page on the web. We now precisely define the PageRank problem. Let U be an m × m adjacency matrix for a given web graph such that Uji = 1 if page i links to page j and Uji = 0 otherwise. We define the PageRank matrix PU to be: PU = αUD−1 U + (1 − α)veT , (1) where DU is the (unique) diagonal matrix such that UD−1 U is column stochastic, α is a given scalar such that 0 ≤ α ≤ 1, e is the vector of all ones, and v is a non-negative, L1normalized vector, sometimes called the `random surfer'' vector. Note that the matrix D−1 U is well-defined only if each column of U has at least one non-zero entry-i.e., each page in the webgraph has at least one outlink. In the presence of such `dangling nodes'' that have no outlinks, one commonly used solution, proposed by Brin et al. [3], is to replace each zero column of U by a non-negative, L1-normalized vector. The PageRank vector r is the dominant eigenvector of the PageRank matrix, r = PU r. We will assume, without loss of generality, that r has an L1-norm of one. Computationally, r can be computed using the power method. 
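As a concrete illustration, the following small, dense Python sketch (an illustration only, not the paper's implementation) builds the matrix of equation (1), substitutes the random surfer vector for zero columns to handle dangling nodes as described above, and runs the power method to obtain r. The toy graph and all names are placeholders.

import numpy as np

def pagerank_matrix(U, alpha=0.85, v=None):
    # Equation (1): P_U = alpha * U * D_U^{-1} + (1 - alpha) * v * e^T,
    # with U[j, i] = 1 iff page i links to page j.  Dangling nodes (zero
    # columns) are handled by substituting the L1-normalized vector v.
    m = U.shape[0]
    v = np.full(m, 1.0 / m) if v is None else np.asarray(v, float) / np.sum(v)
    P = U.astype(float)
    for i in range(m):
        out = P[:, i].sum()
        P[:, i] = v if out == 0 else P[:, i] / out   # column-stochastic U * D_U^{-1}
    return alpha * P + (1 - alpha) * np.outer(v, np.ones(m))

def pagerank(U, alpha=0.85, delta=1e-8, max_iter=1000):
    # Power method: repeatedly apply P_U until the L1 change falls below delta.
    # (A real implementation would exploit sparsity instead of forming P_U densely.)
    P = pagerank_matrix(U, alpha)
    r = np.full(U.shape[0], 1.0 / U.shape[0])
    for _ in range(max_iter):
        r_new = P @ r
        if np.abs(r_new - r).sum() < delta:
            break
        r = r_new
    return r_new

# Toy 4-page graph; page 3 has no outlinks (a dangling node).
U = np.array([[0, 0, 1, 0],
              [1, 0, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 0]])
r = pagerank(U)
print(r, r.sum())   # a valid distribution: entries sum to 1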
The power method first chooses a random starting vector r(0), and iteratively multiplies the current vector by the PageRank matrix PU; see Algorithm 1. In general, each iteration of the power method can take O(m^2) operations when PU is a dense matrix. However, in practice, the number of links in a web graph will be of the order of the number of pages. By exploiting the sparsity of the PageRank matrix, the work per iteration can be reduced to O(km), where k is the average number of links per web page. It has also been shown that the total number of iterations needed for convergence is proportional to α and does not depend on the size of the web graph [11, 7]. Finally, the total space needed is also O(km), mainly to store the matrix U.

Algorithm 1: A linear time (per iteration) algorithm for computing PageRank.
ComputePR(U)
Input: U: Adjacency matrix.
Output: r: PageRank vector.
  Choose (randomly) an initial non-negative vector r(0) such that ||r(0)||_1 = 1.
  i ← 0
  repeat
    i ← i + 1
    ν ← α U D_U^{-1} r(i−1)      {α is the random surfing probability}
    r(i) ← ν + (1 − α)v          {v is the random surfer vector.}
  until ||r(i) − r(i−1)|| < δ    {δ is the convergence threshold.}
  r ← r(i)

4. PROBLEM DEFINITION Given a local domain L, let G be an N × N adjacency matrix for the entire connected component of the web that contains L, such that Gji = 1 if page i links to page j and Gji = 0 otherwise. Without loss of generality, we will partition G as:

G = [ L     Gout
      Lout  Gwithin ],   (2)

where L is the n × n local subgraph corresponding to links inside the local domain, Lout is the subgraph that corresponds to links from the local domain pointing out to the global domain, Gout is the subgraph containing links from the global domain into the local domain, and Gwithin contains links within the global domain. We assume that when building a localized search engine, only pages inside the local domain are crawled, and the links between these pages are represented by the subgraph L. The links in Lout are also known, as these point from crawled pages in the local domain to uncrawled pages in the global domain. As defined in equation (1), PG is the PageRank matrix formed from the global graph G, and we define the global PageRank vector of this graph to be g. Let the n-length vector p∗ be the L1-normalized vector corresponding to the global PageRank of the pages in the local domain L: p∗ = EL g / ||EL g||_1, where EL = [ I | 0 ] is the restriction matrix that selects the components from g corresponding to nodes in L. Let p denote the PageRank vector constructed from the local domain subgraph L. In practice, the observed local PageRank p and the global PageRank p∗ will be quite different. One would expect that as the size of local matrix L approaches the size of global matrix G, the global PageRank and the observed local PageRank will become more similar. Thus, one approach to estimating the global PageRank is to crawl the entire global domain, compute its PageRank, and extract the PageRanks of the local domain. Typically, however, n ≪ N, i.e., the number of global pages is much larger than the number of local pages. Therefore, crawling all global pages will quickly exhaust all local resources (computational, storage, and bandwidth) available to create the local search engine. We instead seek a supergraph ˆF of our local subgraph L with size O(n). Our goal
Algorithm 2: The FindGlobalPR algorithm.
FindGlobalPR(L, Lout, T, k) Input: L: zero-one adjacency matrix for the local domain, Lout: zero-one outlink matrix from L to global subgraph as in (2), T: number of iterations, k: number of pages to crawl per iteration. Output: ˆp: an improved estimate of the global PageRank of L. F ← L Fout ← Lout f ← ComputePR(F ) for (i = 1 to T) {Determine which pages to crawl next} pages ← SelectNodes(F , Fout, f, k) Crawl pages, augment F and modify Fout {Update PageRanks for new local domain} f ← ComputePR(F ) end {Extract PageRanks of original local domain & normalize} ˆp ← ELf ELf 1 is to find such a supergraph ˆF with PageRank ˆf, so that ˆf when restricted to L is close to p∗ . Formally, we seek to minimize GlobalDiff( ˆf) = EL ˆf EL ˆf 1 − p∗ 1 . (3) We choose the L1 norm for measuring the error as it does not place excessive weight on outliers (as the L2 norm does, for example), and also because it is the most commonly used distance measure in the literature for comparing PageRank vectors, as well as for detecting convergence of the algorithm [3]. We propose a greedy framework, given in Algorithm 2, for constructing ˆF . Initially, F is set to the local subgraph L, and the PageRank f of this graph is computed. The algorithm then proceeds as follows. First, the SelectNodes algorithm (which we discuss in the next section) is called and it returns a set of k nodes to crawl next from the set of nodes in the current crawl frontier, Fout. These selected nodes are then crawled to expand the local subgraph, F , and the PageRanks of this expanded graph are then recomputed. These steps are repeated for each of T iterations. Finally, the PageRank vector ˆp, which is restricted to pages within the original local domain, is returned. Given our computation, bandwidth, and memory restrictions, we will assume that the algorithm will crawl at most O(n) pages. Since the PageRanks are computed in each iteration of the algorithm, which is an O(n) operation, we will also assume that the number of iterations T is a constant. Of course, the main challenge here is in selecting which set of k nodes to crawl next. In the next section, we formally define the problem and give efficient algorithms. 5. NODE SELECTION In this section, we present node selection algorithms that operate within the greedy framework presented in the previous section. We first give a well-defined criteria for the page selection problem and provide experimental evidence that this criteria can effectively identify pages that optimize our problem objective (3). We then present our main al118 Research Track Paper gorithmic contribution of the paper, a method with linear running time that is derived from this page selection criteria. Finally, we give an intuitive analysis of our algorithm in terms of `leaks'' and `flows''. We show that if only the `flow'' is considered, then the resulting method is very similar to a widely used page selection heuristic [6]. 5.1 Formulation For a given page j in the global domain, we define the expanded local graph Fj: Fj = F s uT j 0 , (4) where uj is the zero-one vector containing the outlinks from F into page j, and s contains the inlinks from page j into the local domain. Note that we do not allow self-links in this framework. In practice, self-links are often removed, as they only serve to inflate a given page``s PageRank. Observe that the inlinks into F from node j are not known until after node j is crawled. 
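Stepping back for a moment, the greedy framework of Algorithm 2 and the objective (3) it is evaluated against can be summarized in the sketch below. This is an illustration rather than the authors' implementation: the PageRank solver, the SelectNodes rule, and the crawler are left as caller-supplied functions, and the first n rows and columns of F are assumed to stay aligned with the original local domain L.

import numpy as np

def global_diff(f_hat, p_star, n):
    # Objective (3): L1 distance between the estimate restricted to the
    # original n local pages (renormalized) and the true global PageRank p*.
    restricted = np.asarray(f_hat)[:n]
    restricted = restricted / restricted.sum()
    return np.abs(restricted - np.asarray(p_star)).sum()

def find_global_pr(F, F_out, T, k, compute_pr, select_nodes, crawl):
    # Skeleton of Algorithm 2 (FindGlobalPR).  Caller-supplied pieces:
    #   compute_pr(F)                -> PageRank vector of subgraph F
    #   select_nodes(F, F_out, f, k) -> k frontier pages to crawl next
    #   crawl(F, F_out, pages)       -> expanded (F, F_out) after crawling
    n = F.shape[0]                       # size of the original local domain
    f = compute_pr(F)
    for _ in range(T):
        pages = select_nodes(F, F_out, f, k)
        F, F_out = crawl(F, F_out, pages)
        f = compute_pr(F)
    p_hat = f[:n]
    return p_hat / p_hat.sum()           # estimate of the global PageRank of L

The power-method sketch given after equation (1) could serve as compute_pr here.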
Because these inlinks are not known until node j is crawled, we estimate this inlink vector as the expectation over inlink counts among the set of already crawled pages,

s = F^T e / ||F^T e||_1.   (5)

In practice, for any given page, this estimate may not reflect the true inlinks from that page. Furthermore, this expectation is sampled from the set of links within the crawled domain, whereas a better estimate would also use links from the global domain. However, the latter distribution is not known to a localized search engine, and we contend that the above estimate will, on average, be a better estimate than the uniform distribution, for example. Let the PageRank of F be f. We express the PageRank f+_j of the expanded local graph Fj as

f+_j = [ (1 − xj) fj ; xj ],   (6)

where xj is the PageRank of the candidate global node j, and fj is the L1-normalized PageRank vector restricted to the pages in F. Since directly optimizing our problem goal requires knowing the global PageRank p∗, we instead propose to crawl those nodes that will have the greatest influence on the PageRanks of pages in the original local domain L:

influence(j) = Σ_{k∈L} |fj[k] − f[k]| = ||EL (fj − f)||_1.   (7)

Experimentally, the influence score is a very good predictor of our problem objective (3). For each candidate global node j, figure 1(a) shows the objective function value GlobalDiff(fj) as a function of the influence of page j. The local domain used here is a crawl of conservative political pages (we will provide more details about this dataset in section 6); we observed similar results in other domains. The correlation is quite strong, implying that the influence criteria can effectively identify pages that improve the global PageRank estimate. As a baseline, figure 1(b) compares our objective with an alternative criteria, outlink count. The outlink count is defined as the number of outlinks from the local domain to page j. The correlation here is much weaker.

Figure 1: (a) The correlation between our influence page selection criteria (7) and the actual objective function (3) value is quite strong. (b) This is in contrast to other criteria, such as outlink count, which exhibit a much weaker correlation.

5.2 Computation As described, for each candidate global page j, the influence score (7) must be computed. If fj is computed exactly for each global page j, then the PageRank algorithm would need to be run for each of the O(n) such global pages j we consider, resulting in an O(n^2) computational cost for the node selection method. Thus, computing the exact value of fj will lead to a quadratic algorithm, and we must instead turn to methods of approximating this vector. The algorithm we present works by performing one power method iteration used by the PageRank algorithm (Algorithm 1). The convergence rate for the PageRank algorithm has been shown to equal the random surfer probability α [7, 11]. Given a starting vector x(0), if k PageRank iterations are performed, the current PageRank solution x(k) satisfies:

||x(k) − x∗||_1 = O(α^k ||x(0) − x∗||_1),   (8)

where x∗ is the desired PageRank vector. Therefore, if only one iteration is performed, choosing a good starting vector is necessary to achieve an accurate approximation.
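For intuition, the influence criterion (7) can be rendered in brute force as in the sketch below (an illustration only): each candidate j is attached to F with its known outlink vector uj and the estimated inlink vector s, the PageRank of the expanded graph is recomputed exactly, and the L1 change over the original local pages is measured. Doing this for every candidate is exactly the O(n^2) computation that the one-iteration approximation developed next avoids. Any PageRank solver, such as the earlier power-method sketch, can be passed in; n is the size of the original local domain L.

import numpy as np

def expanded_graph(F, u_j, s):
    # Form F_j as in (4): the current subgraph F plus candidate page j, whose
    # links into the local domain are given (or estimated) by s and whose
    # inlinks from the local domain are the zero-one vector u_j.  No self-link.
    l = F.shape[0]
    Fj = np.zeros((l + 1, l + 1))
    Fj[:l, :l] = F
    Fj[:l, l] = s        # column of j: links from j into local pages
    Fj[l, :l] = u_j      # row of j: links from local pages into j
    return Fj

def influence_exact(F, u_j, s, n, pagerank):
    # Brute-force influence score (7) for one candidate page j:
    # ||E_L (f_j - f)||_1, with f_j obtained from the expanded graph by
    # dropping the entry for j and renormalizing, as in (6).
    f = pagerank(F)
    f_plus = pagerank(expanded_graph(F, u_j, s))
    f_j = f_plus[:-1] / f_plus[:-1].sum()
    return np.abs(f_j[:n] - f[:n]).sum()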
We partition the PageRank matrix PFj , corresponding to the × subgraph Fj as: PFj = ˜F ˜s ˜uT j w , (9) where ˜F = αF (DF + diag(uj))−1 + (1 − α) e + 1 eT , ˜s = αs + (1 − α) e + 1 , ˜uj = α(DF + diag(uj))−1 uj + (1 − α) e + 1 , w = 1 − α + 1 , and diag(uj) is the diagonal matrix with the (i, i)th entry equal to one if the ith element of uj equals one, and is zero otherwise. We have assumed here that the random surfer vector is the uniform vector, and that L has no `dangling links''. These assumptions are not necessary and serve only to simplify discussion and analysis. A simple approach for estimating fj is the following. First, estimate the PageRank f+ j of Fj by computing one PageRank iteration over the matrix PFj , using the starting vector ν = f 0 . Then, estimate fj by removing the last 119 Research Track Paper component from our estimate of f+ j (i.e., the component corresponding to the added node j), and renormalizing. The problem with this approach is in the starting vector. Recall from (6) that xj is the PageRank of the added node j. The difference between the actual PageRank f+ j of PFj and the starting vector ν is ν − f+ j 1 = xj + f − (1 − xj)fj 1 ≥ xj + | f 1 − (1 − xj) fj 1| = xj + |xj| = 2xj. Thus, by (8), after one PageRank iteration, we expect our estimate of f+ j to still have an error of about 2αxj. In particular, for candidate nodes j with relatively high PageRank xj, this method will yield more inaccurate results. We will next present a method that eliminates this bias and runs in O(n) time. 5.2.1 Stochastic Complementation Since f+ j , as given in (6) is the PageRank of the matrix PFj , we have: fj(1 − xj) xj = ˜F ˜s ˜uT j w fj(1 − xj) xj = ˜F fj(1 − xj) + ˜sxj ˜uT j fj(1 − xj) + wxj . Solving the above system for fj can be shown to yield fj = ( ˜F + (1 − w)−1 ˜s˜uT j )fj. (10) The matrix S = ˜F +(1−w)−1 ˜s˜uT j is known as the stochastic complement of the column stochastic matrix PFj with respect to the sub matrix ˜F . The theory of stochastic complementation is well studied, and it can be shown the stochastic complement of an irreducible matrix (such as the PageRank matrix) is unique. Furthermore, the stochastic complement is also irreducible and therefore has a unique stationary distribution as well. For an extensive study, see [15]. It can be easily shown that the sub-dominant eigenvalue of S is at most +1 α, where is the size of F . For sufficiently large , this value will be very close to α. This is important, as other properties of the PageRank algorithm, notably the algorithm``s sensitivity, are dependent on this value [11]. In this method, we estimate the length vector fj by computing one PageRank iteration over the × stochastic complement S, starting at the vector f: fj ≈ Sf. (11) This is in contrast to the simple method outlined in the previous section, which first iterates over the ( + 1) × ( + 1) matrix PFj to estimate f+ j , and then removes the last component from the estimate and renormalizes to approximate fj. The problem with the latter method is in the choice of the ( + 1) length starting vector, ν. Consequently, the PageRank estimate given by the simple method differs from the true PageRank by at least 2αxj, where xj is the PageRank of page j. By using the stochastic complement, we can establish a tight lower bound of zero for this difference. To see this, consider the case in which a node k is added to F to form the augmented local subgraph Fk, and that the PageRank of this new graph is (1 − xk)f xk . 
Specifically, the addition of page k does not change the PageRanks of the pages in F , and thus fk = f. By construction of the stochastic complement, fk = Sfk, so the approximation given in equation (11) will yield the exact solution. Next, we present the computational details needed to efficiently compute the quantity fj −f 1 over all known global pages j. We begin by expanding the difference fj −f, where the vector fj is estimated as in (11), fj − f ≈ Sf − f = αF (DF + diag(uj))−1 f + (1 − α) e + 1 eT f +(1 − w)−1 (˜uT j f)˜s − f. (12) Note that the matrix (DF +diag(uj))−1 is diagonal. Letting o[k] be the outlink count for page k in F , we can express the kth diagonal element as: (DF + diag(uj))−1 [k, k] = 1 o[k]+1 if uj[k] = 1 1 o[k] if uj[k] = 0 Noting that (o[k] + 1)−1 = o[k]−1 − (o[k](o[k] + 1))−1 and rewriting this in matrix form yields (DF +diag(uj))−1 = D−1 F −D−1 F (DF +diag(uj))−1 diag(uj). (13) We use the same identity to express e + 1 = e − e ( + 1) . (14) Recall that, by definition, we have PF = αF D−1 F +(1−α)e . Substituting (13) and (14) in (12) yields fj − f ≈ (PF f − f) −αF D−1 F (DF + diag(uj))−1 diag(uj)f −(1 − α) e ( + 1) + (1 − w)−1 (˜uT j f)˜s = x + y + (˜uT j f)z, (15) noting that by definition, f = PF f, and defining the vectors x, y, and z to be x = −αF D−1 F (DF + diag(uj))−1 diag(uj)f (16) y = −(1 − α) e ( + 1) (17) z = (1 − w)−1 ˜s. (18) The first term x is a sparse vector, and takes non-zero values only for local pages k that are siblings of the global page j. We define (i, j) ∈ F if and only if F [j, i] = 1 (equivalently, page i links to page j) and express the value of the component x[k ] as: x[k ] = −α k:(k,k )∈F ,uj [k]=1 f[k] o[k](o[k] + 1) , (19) where o[k], as before, is the number of outlinks from page k in the local domain. Note that the last two terms, y and z are not dependent on the current global node j. Given the function hj(f) = y + (˜uT j f)z 1, the quantity fj − f 1 120 Research Track Paper can be expressed as fj − f 1 = k x[k] + y[k] + (˜uT j f)z[k] = k:x[k]=0 y[k] + (˜uT j f)z[k] + k:x[k]=0 x[k] + y[k] + (˜uT j f)z[k] = hj(f) − k:x[k]=0 y[k] + (˜uT j f)z[k] + k:x[k]=0 x[k] + y[k] + (˜uT j f)z[k] . (20) If we can compute the function hj in linear time, then we can compute each value of fj − f 1 using an additional amount of time that is proportional to the number of nonzero components in x. These optimizations are carried out in Algorithm 3. Note that (20) computes the difference between all components of f and fj, whereas our node selection criteria, given in (7), is restricted to the components corresponding to nodes in the original local domain L. Let us examine Algorithm 3 in more detail. First, the algorithm computes the outlink counts for each page in the local domain. The algorithm then computes the quantity ˜uT j f for each known global page j. This inner product can be written as (1 − α) 1 + 1 + α k:(k,j)∈Fout f[k] o[k] + 1 , where the second term sums over the set of local pages that link to page j. Since the total number of edges in Fout was assumed to have size O( ) (recall that is the number of pages in F ), the running time of this step is also O( ). The algorithm then computes the vectors y and z, as given in (17) and (18), respectively. The L1NormDiff method is called on the components of these vectors which correspond to the pages in L, and it estimates the value of EL(y + (˜uT j f)z) 1 for each page j. The estimation works as follows. First, the values of ˜uT j f are discretized uniformly into c values {a1, ..., ac}. 
The quantity EL(y + aiz) 1 is then computed for each discretized value of ai and stored in a table. To evaluate EL (y + az) 1 for some a ∈ [a1, ac], the closest discretized value ai is determined, and the corresponding entry in the table is used. The total running time for this method is linear in and the discretization parameter c (which we take to be a constant). We note that if exact values are desired, we have also developed an algorithm that runs in O( log ) time that is not described here. In the main loop, we compute the vector x, as defined in equation (16). The nested loops iterate over the set of pages in F that are siblings of page j. Typically, the size of this set is bounded by a constant. Finally, for each page j, the scores vector is updated over the set of non-zero components k of the vector x with k ∈ L. This set has size equal to the number of local siblings of page j, and is a subset of the total number of siblings of page j. Thus, each iteration of the main loop takes constant time, and the total running time of the main loop is O( ). Since we have assumed that the size of F will not grow larger than O(n), the total running time for the algorithm is O(n). Algorithm 3: Node Selection via Stochastic Complementation. SC-Select(F , Fout, f, k) Input: F : zero-one adjacency matrix of size corresponding to the current local subgraph, Fout: zero-one outlink matrix from F to global subgraph, f: PageRank of F , k: number of pages to return Output: pages: set of k pages to crawl next {Compute outlink sums for local subgraph} foreach (page j ∈ F ) o[j] ← k:(j,k)∈F F[j, k] end {Compute scalar ˜uT j f for each global node j } foreach (page j ∈ Fout) g[j] ← (1 − α) 1 +1 foreach (page k : (k, j) ∈ Fout) g[j] ← g[j] + α f[k] o[k]+1 end end {Compute vectors y and z as in (17) and (18) } y ← −(1 − α) e ( +1) z ← (1 − w)−1 ˜s {Approximate y + g[j] ∗ z 1 for all values g[j]} norm diffs ←L1NormDiffs(g, ELy, ELz) foreach (page j ∈ Fout) {Compute sparse vector x as in (19)} x ← 0 foreach (page k : (k, j) ∈ Fout) foreach (page k : (k, k ) ∈ F )) x[k ] ← x[k ] − f[k] o[k](o[k]+1) end end x ← αx scores[j] ← norm diffs[j] foreach (k : x[k] > 0 and page k ∈ L) scores[j] ← scores[j] − |y[k] + g[j] ∗ z[k]| +|x[k]+y[k]+g[j]∗z[k])| end end Return k pages with highest scores 5.2.2 PageRank Flows We now present an intuitive analysis of the stochastic complementation method by decomposing the change in PageRank in terms of `leaks'' and `flows''. This analysis is motivated by the decomposition given in (15). PageRank `flow'' is the increase in the local PageRanks originating from global page j. The flows are represented by the non-negative vector (˜uT j f)z (equations (15) and (18)). The scalar ˜uT j f can be thought of as the total amount of PageRank flow that page j has available to distribute. The vector z dictates how the flow is allocated to the local domain; the flow that local page k receives is proportional to (within a constant factor due to the random surfer vector) the expected number of its inlinks. The PageRank `leaks'' represent the decrease in PageRank resulting from the addition of page j. The leakage can be quantified in terms of the non-positive vectors x and y (equations (16) and (17)). For vector x, we can see from equation (19) that the amount of PageRank leaked by a local page is proportional to the weighted sum of the Page121 Research Track Paper Ranks of its siblings. Thus, pages that have siblings with higher PageRanks (and low outlink counts) will experience more leakage. 
The leakage caused by y is an artifact of the random surfer vector. We will next show that if only the `flow'' term, (˜uT j f)z, is considered, then the resulting method is very similar to a heuristic proposed by Cho et al. [6] that has been widely used for the Crawling Through URL Ordering problem. This heuristic is computationally cheaper, but as we will see later, not as effective as the Stochastic Complementation method. Our node selection strategy chooses global nodes that have the largest influence (equation (7)). If this influence is approximated using only `flows'', the optimal node j∗ is: j∗ = argmaxj EL ˜uT j fz 1 = argmaxj ˜uT j f EL z 1 = argmaxj ˜uT j f = argmaxj α(DF + diag(uj))−1 uj + (1 − α) e + 1 , f = argmaxjfT (DF + diag(uj))−1 uj. The resulting page selection score can be expressed as a sum of the PageRanks of each local page k that links to j, where each PageRank value is normalized by o[k]+1. Interestingly, the normalization that arises in our method differs from the heuristic given in [6], which normalizes by o[k]. The algorithm PF-Select, which is omitted due to lack of space, first computes the quantity fT (DF +diag(uj))−1 uj for each global page j, and then returns the pages with the k largest scores. To see that the running time for this algorithm is O(n), note that the computation involved in this method is a subset of that needed for the SC-Select method (Algorithm 3), which was shown to have a running time of O(n). 6. EXPERIMENTS In this section, we provide experimental evidence to verify the effectiveness of our algorithms. We first outline our experimental methodology and then provide results across a variety of local domains. 6.1 Methodology Given the limited resources available at an academic institution, crawling a section of the web that is of the same magnitude as that indexed by Google or Yahoo! is clearly infeasible. Thus, for a given local domain, we approximate the global graph by crawling a local neighborhood around the domain that is several orders of magnitude larger than the local subgraph. Even though such a graph is still orders of magnitude smaller than the `true'' global graph, we contend that, even if there exist some highly influential pages that are very far away from our local domain, it is unrealistic for any local node selection algorithm to find them. Such pages also tend to be highly unrelated to pages within the local domain. When explaining our node selection strategies in section 5, we made the simplifying assumption that our local graph contained no dangling nodes. This assumption was only made to ease our analysis. Our implementation efficiently handles dangling links by replacing each zero column of our adjacency matrix with the uniform vector. We evaluate the algorithm using the two node selection strategies given in Section 5.2, and also against the following baseline methods: • Random: Nodes are chosen uniformly at random among the known global nodes. • OutlinkCount: Global nodes with the highest number of outlinks from the local domain are chosen. At each iteration of the FindGlobalPR algorithm, we evaluate performance by computing the difference between the current PageRank estimate of the local domain, ELf ELf 1 , and the global PageRank of the local domain ELg ELg 1 . All PageRank calculations were performed using the uniform random surfer vector. Across all experiments, we set the random surfer parameter α, to be .85, and used a convergence threshold of 10−6 . 
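Returning briefly to the flow-only heuristic of Section 5.2.2, the PF-Select scoring can be sketched as follows (toy data, not the paper's code): each frontier page j is scored by the sum, over local pages k that link to it, of f[k] / (o[k] + 1), and the highest-scoring pages are returned.

import numpy as np

def pf_select(frontier_links, f, outdeg, k):
    # frontier_links : iterable of (local_page_index, frontier_page_id) pairs
    # f              : PageRank vector of the current local subgraph F
    # outdeg         : outlink counts o[.] of local pages within F
    # Score each frontier page j by sum over k linking to j of f[k]/(o[k]+1).
    scores = {}
    for local_k, page_j in frontier_links:
        scores[page_j] = scores.get(page_j, 0.0) + f[local_k] / (outdeg[local_k] + 1)
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Toy usage: four crawled pages, three frontier pages 'a', 'b', 'c'.
f = np.array([0.4, 0.3, 0.2, 0.1])
outdeg = np.array([2, 1, 3, 2])
links = [(0, "a"), (1, "a"), (2, "b"), (3, "c")]
print(pf_select(links, f, outdeg, k=2))   # ['a', 'b']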
We evaluate the difference between the local and global PageRank vectors using three different metrics: the L1 and L∞ norms, and Kendall``s tau. The L1 norm measures the sum of the absolute value of the differences between the two vectors, and the L∞ norm measures the absolute value of the largest difference. Kendall``s tau metric is a popular rank correlation measure used to compare PageRanks [2, 11]. This metric can be computed by counting the number of pairs of pairs that agree in ranking, and subtracting from that the number of pairs of pairs that disagree in ranking. The final value is then normalized by the total number of n 2 such pairs, resulting in a [−1, 1] range, where a negative score signifies anti-correlation among rankings, and values near one correspond to strong rank correlation. 6.2 Results Our experiments are based on two large web crawls and were downloaded using the web crawler that is part of the Nutch open source search engine project [18]. All crawls were restricted to only `http'' pages, and to limit the number of dynamically generated pages that we crawl, we ignored all pages with urls containing any of the characters `?'' , `*'', `@'', or `=''. The first crawl, which we will refer to as the `edu'' dataset, was seeded by homepages of the top 100 graduate computer science departments in the USA, as rated by the US News and World Report [16], and also by the home pages of their respective institutions. A crawl of depth 5 was performed, restricted to pages within the ‘.edu'' domain, resulting in a graph with approximately 4.7 million pages and 22.9 million links. The second crawl was seeded by the set of pages under the `politics'' hierarchy in the dmoz open directory project[17]. We crawled all pages up to four links away, which yielded a graph with 4.4 million pages and 17.3 million links. Within the `edu'' crawl, we identified the five site-specific domains corresponding to the websites of the top five graduate computer science departments, as ranked by the US News and World Report. This yielded local domains of various sizes, from 10,626 (UIUC) to 59,895 (Berkeley). For each of these site-specific domains with size n, we performed 50 iterations of the FindGlobalPR algorithm to crawl a total of 2n additional nodes. Figure 2(a) gives the (L1) difference from the PageRank estimate at each iteration to the global PageRank, for the Berkeley local domain. The performance of this dataset was representative of the typical performance across the five computer science sitespecific local domains. Initially, the L1 difference between the global and local PageRanks ranged from .0469 (Stanford) to .149 (MIT). 
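For reference, the three comparison measures used here (the L1 and L∞ norms and Kendall's tau) can be computed as in the following sketch. SciPy's kendalltau is used as a stand-in for the pair-counting definition given above and treats ties slightly differently, so it is an approximation rather than the exact procedure.

import numpy as np
from scipy.stats import kendalltau

def compare_to_global(p_local, p_global):
    a = np.asarray(p_local, dtype=float)
    b = np.asarray(p_global, dtype=float)
    tau, _ = kendalltau(a, b)
    return {"L1": np.abs(a - b).sum(),      # sum of absolute differences
            "Linf": np.abs(a - b).max(),    # largest absolute difference
            "KendallTau": tau}              # rank correlation in [-1, 1]

# Toy example with two normalized four-page PageRank vectors.
print(compare_to_global([0.4, 0.3, 0.2, 0.1], [0.35, 0.32, 0.18, 0.15]))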
Figure 2: L1 difference between the estimated and true global PageRanks for (a) Berkeley``s computer science website, (b) the site-specific domain, www.enterstageright.com, and (c) the `politics'' topic-specific domain. The stochastic complement method outperforms all other methods across various domains.

For the first several iterations, the three link-based methods all outperform the random selection heuristic. After these initial iterations, the random heuristic tended to be more competitive with (or even outperform, as in the Berkeley local domain) the outlink count and PageRank flow heuristics. In all tests, the stochastic complementation method either outperformed, or was competitive with, the other methods. Table 1 gives the average difference between the final estimated global PageRanks and the true global PageRanks for various distance measures.

Algorithm       L1      L∞       Kendall
Stoch. Comp.    .0384   .00154   .9257
PR Flow         .0470   .00272   .8946
Outlink         .0419   .00196   .9053
Random          .0407   .00204   .9086

Table 1: Average final performance of various node selection strategies for the five site-specific computer science local domains. Note that Kendall``s Tau measures similarity, while the other metrics are dissimilarity measures. Stochastic Complementation clearly outperforms the other methods in all metrics.

Within the `politics'' dataset, we also performed two site-specific tests for the largest websites in the crawl: www.adamsmith.org, the website for the London based Adam Smith Institute, and www.enterstageright.com, an online conservative journal. As with the `edu'' local domains, we ran our algorithm for 50 iterations, crawling a total of 2n nodes. Figure 2 (b) plots the results for the www.enterstageright.com domain. In contrast to the `edu'' local domains, the Random and OutlinkCount methods were not competitive with either the SC-Select or the PF-Select methods. Among all datasets and all node selection methods, the stochastic complementation method was most impressive in this dataset, realizing a final estimate that differed only .0279 from the global PageRank, a ten-fold improvement over the initial local PageRank difference of .299. For the Adam Smith local domain, the initial difference between the local and global PageRanks was .148, and the final estimates given by the SC-Select, PF-Select, OutlinkCount, and Random methods were .0208, .0193, .0222, and .0356, respectively. Within the `politics'' dataset, we constructed four topic-specific local domains. The first domain consisted of all pages in the dmoz politics category, and also all pages within each of these sites up to two links away. This yielded a local domain of 90,811 pages, and the results are given in figure 2 (c). Because of the larger size of the topic-specific domains, we ran our algorithm for only 25 iterations to crawl a total of n nodes.
We also created topic-specific domains from three political sub-topics: liberalism, conservatism, and socialism. The pages in these domains were identified by their corresponding dmoz categories. For each sub-topic, we set the local domain to be all pages within three links of the corresponding dmoz category pages. Table 2 summarizes the performance on these three topic-specific domains, and also on the larger political domain.

Table 2: Final performance among node selection strategies for the four political topic-specific crawls. Note that Kendall's Tau measures similarity, while the other metrics are dissimilarity measures.

All Politics:
Algorithm      L1      L∞        Kendall
Stoch. Comp.   .1253   .000700   .8671
PR Flow        .1446   .000710   .8518
Outlink        .1470   .00225    .8642
Random         .2055   .00203    .8271

Conservatism:
Algorithm      L1      L∞        Kendall
Stoch. Comp.   .0496   .000990   .9158
PR Flow        .0554   .000939   .9028
Outlink        .0602   .00527    .9144
Random         .1197   .00102    .8843

Liberalism:
Algorithm      L1      L∞        Kendall
Stoch. Comp.   .0622   .001360   .8848
PR Flow        .0799   .001378   .8669
Outlink        .0763   .001379   .8844
Random         .1127   .001899   .8372

Socialism:
Algorithm      L1      L∞        Kendall
Stoch. Comp.   .04318  .00439    .9604
PR Flow        .0450   .004251   .9559
Outlink        .04282  .00344    .9591
Random         .0631   .005123   .9350

To quantify a global page j's effect on the global PageRank values of pages in the local domain, we define page j's impact to be its PageRank value, g[j], scaled by the fraction of its outlinks pointing to the local domain: impact(j) = (oL[j] / o[j]) · g[j], where oL[j] is the number of outlinks from page j to pages in the local domain L, and o[j] is the total number of j's outlinks. In terms of the random surfer model, the impact of page j is the probability that the random surfer (1) is currently at global page j in her random walk and (2) takes an outlink to a local page, given that she has already decided not to jump to a random page (a short computational sketch of this score is given below).

For the politics local domain, we found that many of the pages with high impact were in fact political pages that should have been included in the dmoz politics topic, but were not. For example, the two most influential global pages were the political search engine www.askhenry.com and the home page of the online political magazine www.policyreview.com. Among non-political pages, the home page of the journal Education Next was most influential. The journal is freely available online and contains articles regarding various aspects of K-12 education in America. To provide some anecdotal evidence for the effectiveness of our page selection methods, we note that the SC-Select method chose 11 pages within the www.educationnext.org domain, the PF-Select method discovered 7 such pages, while the OutlinkCount and Random methods found only 6 pages each.

For the conservative political local domain, the socialist website www.ornery.org had a very high impact score. This was largely due to a link from the front page of this site to an article regarding global warming published by the National Center for Public Policy Research, a conservative research group in Washington, DC. Not surprisingly, the global PageRank of this article (which happens to be on the home page of the NCCPR, www.nationalresearch.com) was approximately .002, whereas the local PageRank of this page was only .00158. The SC-Select method yielded a global PageRank estimate of approximately .00182, the PF-Select method estimated a value of .00167, and the Random and OutlinkCount methods yielded values of .01522 and .00171, respectively.
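As a small illustration of the impact score defined above, the following sketch computes impact(j) = (oL[j] / o[j]) · g[j] for a single global page; the page identifiers and numbers are hypothetical and are not taken from the crawls.

def impact(j, global_pr, outlinks, local_domain):
    # g[j] scaled by the fraction of j's outlinks that point into the local domain.
    links = outlinks[j]
    if not links:
        return 0.0
    frac_local = sum(1 for t in links if t in local_domain) / len(links)
    return frac_local * global_pr[j]

# Hypothetical example: page 'g1' has 4 outlinks, one of which enters the local domain.
global_pr = {'g1': 0.002}
outlinks = {'g1': {'l1', 'g2', 'g3', 'g4'}}
local_domain = {'l1', 'l2'}
print(impact('g1', global_pr, outlinks, local_domain))  # 0.25 * 0.002 = 0.0005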
7. RELATED WORK

The node selection framework we have proposed is similar to the URL ordering for crawling problem proposed by Cho et al. in [6]. Whereas our framework seeks to minimize the difference between the global and local PageRank, the objective used in [6] is to crawl the most highly (globally) ranked pages first. They propose several node selection algorithms, including the outlink count heuristic, as well as a variant of our PF-Select algorithm which they refer to as the 'PageRank ordering metric'. They found this method to be most effective in optimizing their objective, as did a recent survey of these methods by Baeza-Yates et al. [1]. Boldi et al. also experiment within a similar crawling framework in [2], but quantify their results by comparing Kendall's rank correlation between the PageRanks of the current set of crawled pages and those of the entire global graph. They found that node selection strategies that crawled pages with the highest global PageRank first actually performed worse (with respect to Kendall's Tau correlation between the local and global PageRanks) than basic depth-first or breadth-first strategies. However, their experiments differ from our work in that our node selection algorithms do not use (or have access to) global PageRank values.

Many algorithmic improvements for computing exact PageRank values have been proposed [9, 10, 14]. If such algorithms are used to compute the global PageRanks of our local domain, they would all require O(N) computation, storage, and bandwidth, where N is the size of the global domain. This is in contrast to our method, which approximates the global PageRank and scales linearly with the size of the local domain. Wang and DeWitt [22] propose a system where the set of web servers that comprise the global domain communicate with each other to compute their respective global PageRanks. For a given web server hosting n pages, the computational, bandwidth, and storage requirements are also linear in n. One drawback of this system is that the number of distinct web servers that comprise the global domain can be very large. For example, our 'edu' dataset contains websites from over 3,200 different universities; coordinating such a system among a large number of sites can be very difficult. Gan, Chen, and Suel propose a method for estimating the PageRank of a single page [5] which uses only constant bandwidth, computation, and space. Their approach relies on the availability of a remote connectivity server that can supply the set of inlinks to a given page, an assumption not used in our framework. They experimentally show that a reasonable estimate of the node's PageRank can be obtained by visiting at most a few hundred nodes. Using their algorithm for our problem would require that either the entire global domain first be downloaded or a connectivity server be used, both of which would lead to very large web graphs.

8. CONCLUSIONS AND FUTURE WORK

The internet is growing exponentially, and in order to navigate such a large repository as the web, global search engines have established themselves as a necessity. Along with the ubiquity of these large-scale search engines comes an increase in search users' expectations. By providing complete and isolated coverage of a particular web domain, localized search engines are an effective outlet to quickly locate content that could otherwise be difficult to find. In this work, we contend that the use of global PageRank in a localized search engine can improve performance.
To estimate the global PageRank, we have proposed an iterative node selection framework where we select which pages from the global frontier to crawl next. Our primary contribution is our stochastic complementation page selection algorithm. This method crawls nodes that will most significantly impact the local domain and has running time linear in the number of nodes in the local domain. Experimentally, we validate these methods across a diverse set of local domains, including seven site-specific domains and four topic-specific domains. We conclude that by crawling an additional n or 2n pages, our methods find an estimate of the global PageRanks that is up to ten times better than just using the local PageRanks. Furthermore, we demonstrate that our algorithm consistently outperforms other existing heuristics.

Oftentimes, topic-specific domains are discovered using a focused web crawler, which considers a page's content in conjunction with link anchor text to decide which pages to crawl next [4]. Although such crawlers have proven to be quite effective in discovering topic-related content, many irrelevant pages are also crawled in the process. Typically, these pages are deleted and not indexed by the localized search engine. These pages can of course provide valuable information regarding the global PageRank of the local domain. One way to integrate these pages into our framework is to start the FindGlobalPR algorithm with the current subgraph F equal to the set of pages that were crawled by the focused crawler.

The global PageRank estimation framework, along with the node selection algorithms presented, requires O(n) computation per iteration and bandwidth proportional to the number of pages crawled, Tk. If the number of iterations T is relatively small compared to the number of pages crawled per iteration, k, then the bottleneck of the algorithm will be the crawling phase. However, as the number of iterations increases (relative to k), the bottleneck will reside in the node selection computation. In this case, our algorithms would benefit from constant-factor optimizations. Recall that the FindGlobalPR algorithm (Algorithm 2) requires that the PageRanks of the current expanded local domain be recomputed in each iteration. Recent work by Langville and Meyer [12] gives an algorithm to quickly recompute the PageRanks of a given webgraph if a small number of nodes are added. This algorithm was shown to give a speedup of five to ten times on some datasets. We plan to investigate this and other such optimizations as future work.

In this paper, we have objectively evaluated our methods by measuring how close our global PageRank estimates are to the actual global PageRanks. To determine the benefit of using global PageRanks in a localized search engine, we suggest a user study in which users are asked to rate the quality of search results for various search queries. For some queries, only the local PageRanks are used in ranking, and for the remaining queries, local PageRanks and the approximate global PageRanks, as computed by our algorithms, are used. The results of such a study can then be analyzed to determine the added benefit of using the global PageRanks computed by our methods, over just using the local PageRanks.

Acknowledgements. This research was supported by NSF grant CCF-0431257, NSF Career Award ACI-0093404, and a grant from Sabre, Inc.

9. REFERENCES
[1] R. Baeza-Yates, M. Marin, C. Castillo, and A. Rodriguez. Crawling a country: better strategies than breadth-first for web page ordering. World-Wide Web Conference, 2005.
[2] P. Boldi, M. Santini, and S. Vigna. Do your worst to make the best: paradoxical effects in PageRank incremental computations. Workshop on Web Graphs, 3243:168-180, 2004.
[3] S. Brin and L. Page. The anatomy of a large-scale hypertextual web search engine. Computer Networks and ISDN Systems, 33(1-7):107-117, 1998.
[4] S. Chakrabarti, M. van den Berg, and B. Dom. Focused crawling: a new approach to topic-specific web resource discovery. World-Wide Web Conference, 1999.
[5] Y. Chen, Q. Gan, and T. Suel. Local methods for estimating PageRank values. Conference on Information and Knowledge Management, 2004.
[6] J. Cho, H. Garcia-Molina, and L. Page. Efficient crawling through URL ordering. World-Wide Web Conference, 1998.
[7] T. H. Haveliwala and S. D. Kamvar. The second eigenvalue of the Google matrix. Technical report, Stanford University, 2003.
[8] T. Joachims, F. Radlinski, L. Granka, A. Cheng, C. Tillekeratne, and A. Patel. Learning retrieval functions from implicit feedback. http://www.cs.cornell.edu/People/tj/career.
[9] S. D. Kamvar, T. H. Haveliwala, C. D. Manning, and G. H. Golub. Exploiting the block structure of the web for computing PageRank. World-Wide Web Conference, 2003.
[10] S. D. Kamvar, T. H. Haveliwala, C. D. Manning, and G. H. Golub. Extrapolation methods for accelerating PageRank computation. World-Wide Web Conference, 2003.
[11] A. N. Langville and C. D. Meyer. Deeper inside PageRank. Internet Mathematics, 2004.
[12] A. N. Langville and C. D. Meyer. Updating the stationary vector of an irreducible Markov chain with an eye on Google's PageRank. SIAM Journal on Matrix Analysis, 2005.
[13] P. Lyman, H. R. Varian, K. Swearingen, P. Charles, N. Good, L. L. Jordan, and J. Pal. How much information 2003? School of Information Management and Systems, University of California at Berkeley, 2003.
[14] F. McSherry. A uniform approach to accelerated PageRank computation. World-Wide Web Conference, 2005.
[15] C. D. Meyer. Stochastic complementation, uncoupling Markov chains, and the theory of nearly reducible systems. SIAM Review, 31:240-272, 1989.
[16] US News and World Report. http://www.usnews.com.
[17] Dmoz open directory project. http://www.dmoz.org.
[18] Nutch open source search engine. http://www.nutch.org.
[19] F. Radlinski and T. Joachims. Query chains: learning to rank from implicit feedback. ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2005.
[20] S. Raghavan and H. Garcia-Molina. Crawling the hidden web. In Proceedings of the Twenty-seventh International Conference on Very Large Databases, 2001.
[21] T. Tin Tang, D. Hawking, N. Craswell, and K. Griffiths. Focused crawling for both topical relevance and quality of medical information. Conference on Information and Knowledge Management, 2005.
[22] Y. Wang and D. J. DeWitt. Computing PageRank in a distributed internet search system. Proceedings of the 30th VLDB Conference, 2004.
In this work, we present a method of approximating the global PageRanks of a local domain while only using resources of the same order as those needed to compute the PageRanks of the local subgraph. Our proposed method looks for a supergraph of our local subgraph such that the local PageRanks within this supergraph are close to the true global PageRanks. We construct this supergraph by iteratively crawling global pages on the current web frontier--i.e., global pages with inlinks from pages that have already been crawled. In order to provide a good approximation to the global PageRanks, care must be taken when choosing which pages to crawl next; in this paper, we present a well-motivated page selection algorithm that also performs well empirically. This algorithm is derived from a well-defined problem objective and has a running time linear in the number of local nodes. We experiment across several types of local subgraphs, including four topic related communities and several sitespecific domains. To evaluate performance, we measure the difference between the current global PageRank estimate and the global PageRank, as a function of the number of pages crawled. We compare our algorithm against several heuristics and also against a baseline algorithm that chooses pages at random, and we show that our method outperforms these other methods. Finally, we empirically demonstrate that, given a local domain of size n, we can provide good approximations to the global PageRank values by crawling at most n or 2n additional pages. The paper is organized as follows. Section 2 gives an overview of localized search engines and outlines their advantages over global search. Section 3 provides background on the PageRank algorithm. Section 4 formally defines our problem, and section 5 presents our page selection criteria and derives our algorithms. Section 6 provides experimental results, section 7 gives an overview of related work, and, finally, conclusions are given in section 8. 2. LOCALIZED SEARCH ENGINES Research Track Paper 3. PAGERANK OVERVIEW Research Track Paper 4. PROBLEM DEFINITION ALGORITHM 2: The FINDGLOBALPR algorithm. 5. NODE SELECTION 5.1 Formulation Research Track Paper 5.2 Computation Research Track Paper 5.2.1 Stochastic Complementation 5.2.2 PageRank Flows Research Track Paper 6. EXPERIMENTS 6.1 Methodology 6.2 Results 7. RELATED WORK The node selection framework we have proposed is similar to the url ordering for crawling problem proposed by Cho et al. in [6]. Whereas our framework seeks to minimize the difference between the global and local PageRank, the objective used in [6] is to crawl the most highly (globally) ranked pages first. They propose several node selection algorithms, including the outlink count heuristic, as well as a variant of our PF-SELECT algorithm which they refer to as the ` PageRank ordering metric'. They found this method to be most effective in optimizing their objective, as did a recent survey of these methods by Baeza-Yates et al. [1]. Boldi et al. also experiment within a similar crawling framework in [2], but quantify their results by comparing Kendall's rank correlation between the PageRanks of the current set of crawled pages and those of the entire global graph. They found that node selection strategies that crawled pages with the highest global PageRank first actually performed worse (with respect to Kendall's Tau correlation between the local and global PageRanks) than basic depth first or breadth first strategies. 
However, their experiments differ from our work in that our node selection algorithms do not use (or have access to) global PageRank values. Many algorithmic improvements for computing exact PageRank values have been proposed [9, 10, 14]. If such algorithms are used to compute the global PageRanks of our local domain, they would all require O (N) computation, storage, and bandwidth, where N is the size of the global domain. This is in contrast to our method, which approximates the global PageRank and scales linearly with the size of the local domain. Wang and Dewitt [22] propose a system where the set of web servers that comprise the global domain communicate with each other to compute their respective global PageRanks. For a given web server hosting n pages, the computational, bandwidth, and storage requirements are also linear in n. One drawback of this system is that the number of distinct web servers that comprise the global domain can be very large. For example, our ` edu' dataset contains websites from over 3,200 different universities; coordinating such a system among a large number of sites can be very difficult. Gan, Chen, and Suel propose a method for estimating the PageRank of a single page [5] which uses only constant bandwidth, computation, and space. Their approach relies on the availability of a remote connectivity server that can supply the set of inlinks to a given page, an assumption not used in our framework. They experimentally show that a reasonable estimate of the node's PageRank can be obtained by visiting at most a few hundred nodes. Using their algorithm for our problem would require that either the entire global domain first be downloaded or a connectivity server be used, both of which would lead to very large web graphs. 8. CONCLUSIONS AND FUTURE WORK The internet is growing exponentially, and in order to navigate such a large repository as the web, global search engines have established themselves as a necessity. Along with the ubiquity of these large-scale search engines comes an increase in search users' expectations. By providing complete and isolated coverage of a particular web domain, localized search engines are an effective outlet to quickly locate content that could otherwise be difficult to find. In this work, we contend that the use of global PageRank in a localized search engine can improve performance. To estimate the global PageRank, we have proposed an iterative node selection framework where we select which pages from the global frontier to crawl next. Our primary contribution is our stochastic complementation page selection algorithm. This method crawls nodes that will most significantly impact the local domain and has running time linear in the number of nodes in the local domain. Experimentally, we validate these methods across a diverse set of local domains, including seven site-specific domains and four topic-specific domains. We conclude that by crawling an additional n or 2n pages, our methods find an estimate of the global PageRanks that is up to ten times better than just using the local PageRanks. Furthermore, we demonstrate that our algorithm consistently outperforms other existing heuristics. Research Track Paper Often times, topic-specific domains are discovered using a focused web crawler which considers a page's content in conjunction with link anchor text to decide which pages to crawl next [4]. 
Although such crawlers have proven to be quite effective in discovering topic-related content, many irrelevant pages are also crawled in the process. Typically, these pages are deleted and not indexed by the localized search engine. These pages can of course provide valuable information regarding the global PageRank of the local domain. One way to integrate these pages into our framework is to start the FINDGLOBALPR algorithm with the current subgraph F equal to the set of pages that were crawled by the focused crawler. The global PageRank estimation framework, along with the node selection algorithms presented, all require O (n) computation per iteration and bandwidth proportional to the number of pages crawled, Tk. If the number of iterations T is relatively small compared to the number of pages crawled per iteration, k, then the bottleneck of the algorithm will be the crawling phase. However, as the number of iterations increases (relative to k), the bottleneck will reside in the node selection computation. In this case, our algorithms would benefit from constant factor optimizations. Recall that the FINDGLOBALPR algorithm (Algorithm 2) requires that the PageRanks of the current expanded local domain be recomputed in each iteration. Recent work by Langville and Meyer [12] gives an algorithm to quickly recompute PageRanks of a given webgraph if a small number of nodes are added. This algorithm was shown to give speedup of five to ten times on some datasets. We plan to investigate this and other such optimizations as future work. In this paper, we have objectively evaluated our methods by measuring how close our global PageRank estimates are to the actual global PageRanks. To determine the benefit of using global PageRanks in a localized search engine, we suggest a user study in which users are asked to rate the quality of search results for various search queries. For some queries, only the local PageRanks are used in ranking, and for the remaining queries, local PageRanks and the approximate global PageRanks, as computed by our algorithms, are used. The results of such a study can then be analyzed to determine the added benefit of using the global PageRanks computed by our methods, over just using the local PageRanks. Acknowledgements. This research was supported by NSF grant CCF-0431257, NSF Career Award ACI-0093404, and a grant from Sabre, Inc. .
Estimating the Global PageRank of Web Communities ABSTRACT Localized search engines are small-scale systems that index a particular community on the web. They offer several benefits over their large-scale counterparts in that they are relatively inexpensive to build, and can provide more precise and complete search capability over their relevant domains. One disadvantage such systems have over large-scale search engines is the lack of global PageRank values. Such information is needed to assess the value of pages in the localized search domain within the context of the web as a whole. In this paper, we present well-motivated algorithms to estimate the global PageRank values of a local domain. The algorithms are all highly scalable in that, given a local domain of size n, they use O (n) resources that include computation time, bandwidth, and storage. We test our methods across a variety of localized domains, including site-specific domains and topic-specific domains. We demonstrate that by crawling as few as n or 2n additional pages, our methods can give excellent global PageRank estimates. 1. INTRODUCTION Localized search engines are small-scale search engines that index only a single community of the web. Such communities can be site-specific domains, such as pages within the cs.utexas.edu domain, or topic-related communities--for example, political websites. Compared to the web graph crawled and indexed by large-scale search engines, the size of such local communities is typically orders of magnitude smaller. Consequently, the computational resources needed to build such a search engine are also similarly lighter. By restricting themselves to smaller, more manageable sections of the web, localized search engines can also provide more precise and complete search capabilities over their respective domains. One drawback of localized indexes is the lack of global information needed to compute link-based rankings. The PageRank algorithm [3], has proven to be an effective such measure. In general, the PageRank of a given page is dependent on pages throughout the entire web graph. For example, consider a localized search engine that indexes political pages with conservative views. A person wishing to research the opinions on global warming within the conservative political community may encounter numerous such opinions across various websites. If only local PageRank values are available, then the search results will reflect only strongly held beliefs within the community. Thus, for many localized search engines, incorporating global PageRanks can improve the quality of search results. However, the number of pages a local search engine indexes is typically orders of magnitude smaller than the number of pages indexed by their large-scale counterparts. Localized search engines do not have the bandwidth, storage capacity, or computational power to crawl, download, and compute the global PageRanks of the entire web. In this work, we present a method of approximating the global PageRanks of a local domain while only using resources of the same order as those needed to compute the PageRanks of the local subgraph. Our proposed method looks for a supergraph of our local subgraph such that the local PageRanks within this supergraph are close to the true global PageRanks. We construct this supergraph by iteratively crawling global pages on the current web frontier--i.e., global pages with inlinks from pages that have already been crawled. 
In order to provide a good approximation to the global PageRanks, care must be taken when choosing which pages to crawl next; in this paper, we present a well-motivated page selection algorithm that also performs well empirically. This algorithm is derived from a well-defined problem objective and has a running time linear in the number of local nodes. We experiment across several types of local subgraphs, including four topic related communities and several sitespecific domains. To evaluate performance, we measure the difference between the current global PageRank estimate and the global PageRank, as a function of the number of pages crawled. We compare our algorithm against several heuristics and also against a baseline algorithm that chooses pages at random, and we show that our method outperforms these other methods. Finally, we empirically demonstrate that, given a local domain of size n, we can provide good approximations to the global PageRank values by crawling at most n or 2n additional pages. The paper is organized as follows. Section 2 gives an overview of localized search engines and outlines their advantages over global search. Section 3 provides background on the PageRank algorithm. Section 4 formally defines our problem, and section 5 presents our page selection criteria and derives our algorithms. 7. RELATED WORK The node selection framework we have proposed is similar to the url ordering for crawling problem proposed by Cho et al. in [6]. Whereas our framework seeks to minimize the difference between the global and local PageRank, the objective used in [6] is to crawl the most highly (globally) ranked pages first. They found that node selection strategies that crawled pages with the highest global PageRank first actually performed worse (with respect to Kendall's Tau correlation between the local and global PageRanks) than basic depth first or breadth first strategies. However, their experiments differ from our work in that our node selection algorithms do not use (or have access to) global PageRank values. Many algorithmic improvements for computing exact PageRank values have been proposed [9, 10, 14]. If such algorithms are used to compute the global PageRanks of our local domain, they would all require O (N) computation, storage, and bandwidth, where N is the size of the global domain. This is in contrast to our method, which approximates the global PageRank and scales linearly with the size of the local domain. Wang and Dewitt [22] propose a system where the set of web servers that comprise the global domain communicate with each other to compute their respective global PageRanks. For a given web server hosting n pages, the computational, bandwidth, and storage requirements are also linear in n. One drawback of this system is that the number of distinct web servers that comprise the global domain can be very large. Gan, Chen, and Suel propose a method for estimating the PageRank of a single page [5] which uses only constant bandwidth, computation, and space. Using their algorithm for our problem would require that either the entire global domain first be downloaded or a connectivity server be used, both of which would lead to very large web graphs. 8. CONCLUSIONS AND FUTURE WORK The internet is growing exponentially, and in order to navigate such a large repository as the web, global search engines have established themselves as a necessity. Along with the ubiquity of these large-scale search engines comes an increase in search users' expectations. 
By providing complete and isolated coverage of a particular web domain, localized search engines are an effective outlet to quickly locate content that could otherwise be difficult to find. In this work, we contend that the use of global PageRank in a localized search engine can improve performance. To estimate the global PageRank, we have proposed an iterative node selection framework where we select which pages from the global frontier to crawl next. Our primary contribution is our stochastic complementation page selection algorithm. This method crawls nodes that will most significantly impact the local domain and has running time linear in the number of nodes in the local domain. Experimentally, we validate these methods across a diverse set of local domains, including seven site-specific domains and four topic-specific domains. We conclude that by crawling an additional n or 2n pages, our methods find an estimate of the global PageRanks that is up to ten times better than just using the local PageRanks. Furthermore, we demonstrate that our algorithm consistently outperforms other existing heuristics. Research Track Paper Although such crawlers have proven to be quite effective in discovering topic-related content, many irrelevant pages are also crawled in the process. Typically, these pages are deleted and not indexed by the localized search engine. These pages can of course provide valuable information regarding the global PageRank of the local domain. One way to integrate these pages into our framework is to start the FINDGLOBALPR algorithm with the current subgraph F equal to the set of pages that were crawled by the focused crawler. The global PageRank estimation framework, along with the node selection algorithms presented, all require O (n) computation per iteration and bandwidth proportional to the number of pages crawled, Tk. If the number of iterations T is relatively small compared to the number of pages crawled per iteration, k, then the bottleneck of the algorithm will be the crawling phase. In this case, our algorithms would benefit from constant factor optimizations. Recall that the FINDGLOBALPR algorithm (Algorithm 2) requires that the PageRanks of the current expanded local domain be recomputed in each iteration. Recent work by Langville and Meyer [12] gives an algorithm to quickly recompute PageRanks of a given webgraph if a small number of nodes are added. This algorithm was shown to give speedup of five to ten times on some datasets. We plan to investigate this and other such optimizations as future work. In this paper, we have objectively evaluated our methods by measuring how close our global PageRank estimates are to the actual global PageRanks. To determine the benefit of using global PageRanks in a localized search engine, we suggest a user study in which users are asked to rate the quality of search results for various search queries. For some queries, only the local PageRanks are used in ranking, and for the remaining queries, local PageRanks and the approximate global PageRanks, as computed by our algorithms, are used. The results of such a study can then be analyzed to determine the added benefit of using the global PageRanks computed by our methods, over just using the local PageRanks. Acknowledgements.
H-54
Knowledge-intensive Conceptual Retrieval and Passage Extraction of Biomedical Literature
This paper presents a study of incorporating domain-specific knowledge (i.e., information about concepts and relationships between concepts in a certain domain) in an information retrieval (IR) system to improve its effectiveness in retrieving biomedical literature. The effects of different types of domain-specific knowledge in performance contribution are examined. Based on the TREC platform, we show that appropriate use of domain-specific knowledge in a proposed conceptual retrieval model yields about 23% improvement over the best reported result in passage retrieval in the Genomics Track of TREC 2006.
[ "passag extract", "domain-specif knowledg", "retriev model", "keyword search", "document collect", "passag-level inform retriev", "queri concept", "conceptu ir model", "passag map", "aspect map", "document map", "document retriev", "biomed document" ]
[ "P", "P", "P", "U", "U", "M", "M", "R", "M", "U", "U", "M", "M" ]
Knowledge-intensive Conceptual Retrieval and Passage Extraction of Biomedical Literature Wei Zhou, Clement Yu Department of Computer Science University of Illinois at Chicago wzhou8@uic.edu, yu@cs.uic.edu Neil Smalheiser, Vetle Torvik Department of Psychiatry and Psychiatric Institute (MC912) University of Illinois at Chicago {neils, vtorvik}@uic. edu Jie Hong Division of Epidemiology and Biostatistics, School of Public health University of Illinois at Chicago jhong20@uic.edu ABSTRACT This paper presents a study of incorporating domain-specific knowledge (i.e., information about concepts and relationships between concepts in a certain domain) in an information retrieval (IR) system to improve its effectiveness in retrieving biomedical literature. The effects of different types of domain-specific knowledge in performance contribution are examined. Based on the TREC platform, we show that appropriate use of domainspecific knowledge in a proposed conceptual retrieval model yields about 23% improvement over the best reported result in passage retrieval in the Genomics Track of TREC 2006. Categories and Subject Descriptors H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval - retrieval models, query formulation, information filtering. H.3.1 [Information Storage and Retrieval]: Content Analysis and Indexing - thesauruses. General Terms Algorithms, Performance, Experimentation. 1. INTRODUCTION Biologists search for literature on a daily basis. For most biologists, PubMed, an online service of U.S. National Library of Medicine (NLM), is the most commonly used tool for searching the biomedical literature. PubMed allows for keyword search by using Boolean operators. For example, if one desires documents on the use of the drug propanolol in the disease hypertension, a typical PubMed query might be propanolol AND hypertension, which will return all the documents having the two keywords. Keyword search in PubMed is effective if the query is well-crafted by the users using their expertise. However, information needs of biologists, in some cases, are expressed as complex questions [8][9], which PubMed is not designed to handle. While NLM does maintain an experimental tool for free-text queries [6], it is still based on PubMed keyword search. The Genomics track of the 2006 Text REtrieval Conference (TREC) provides a common platform to assess the methods and techniques proposed by various groups for biomedical information retrieval. The queries were collected from real biologists and they are expressed as complex questions, such as How do mutations in the Huntingtin gene affect Huntington``s disease? . The document collection contains 162,259 Highwire full-text documents in HTML format. Systems from participating groups are expected to find relevant passages within the full-text documents. A passage is defined as any span of text that does not include the HTML paragraph tag (i.e., <P> or </P>). We approached the problem by utilizing domain-specific knowledge in a conceptual retrieval model. Domain-specific knowledge, in this paper, refers to information about concepts and relationships between concepts in a certain domain. We assume that appropriate use of domain-specific knowledge might improve the effectiveness of retrieval. For example, given a query What is the role of gene PRNP in the Mad Cow Disease? , expanding the gene symbol PRNP with its synonyms Prp, PrPSc, and prion protein, more relevant documents might be retrieved. 
PubMed and many other biomedical systems [8][9][10][13] also make use of domain-specific knowledge to improve retrieval effectiveness. Intuitively, retrieval on the level of concepts should outperform bag-of-words approaches, since the semantic relationships among words in a concept are utilized. In some recent studies [13][15], positive results have been reported for this hypothesis. In this paper, concepts are entry terms of the ontology Medical Subject Headings (MeSH), a controlled vocabulary maintained by NLM for indexing biomedical literature, or gene symbols in the Entrez gene database also from NLM. A concept could be a word, such as the gene symbol PRNP, or a phrase, such as Mad cow diseases. In the conceptual retrieval model presented in this paper, the similarity between a query and a document is measured on both concept and word levels. This paper makes two contributions: 1. We propose a conceptual approach to utilize domain-specific knowledge in an IR system to improve its effectiveness in retrieving biomedical literature. Based on this approach, our system achieved significant improvement (23%) over the best reported result in passage retrieval in the Genomics track of TREC 2006. 2. We examine the effects of utilizing concepts and of different types of domain-specific knowledge in performance contribution. This paper is organized as follows: problem statement is given in the next section. The techniques are introduced in section 3. In section 4, we present the experimental results. Related works are given in section 5 and finally, we conclude the paper in section 6. 2. PROBLEM STATEMENT We describe the queries, document collection and the system output in this section. The query set used in the Genomics track of TREC 2006 consists of 28 questions collected from real biologists. As described in [8], these questions all have the following general format: Biological object (1. . m) Relationship ←⎯⎯⎯⎯→ Biological process (1. . n) (1) where a biological object might be a gene, protein, or gene mutation and a biological process can be a physiological process or disease. A question might involve multiple biological objects (m) and multiple biological processes (n). These questions were derived from four templates (Table 2). Table 2 Query templates and examples in the Genomics track of TREC 2006 Template Example What is the role of gene in disease? What is the role of DRD4 in alcoholism? What effect does gene have on biological process? What effect does the insulin receptor gene have on tumorigenesis? How do genes interact in organ function? How do HMG and HMGB1 interact in hepatitis? How does a mutation in gene influence biological process? How does a mutation in Ret influence thyroid function? Features of the queries: 1) They are different from the typical Web queries and the PubMed queries, both of which usually consist of 1 to 3 keywords; 2) They are generated from structural templates which can be used by a system to identify the query components, the biological object or process. The document collection contains 162,259 Highwire full-text documents in HTML format. The output of the system is a list of passages ranked according to their similarities with the query. A passage is defined as any span of text that does not include the HTML paragraph tag (i.e., <P> or </P>). A passage could be a part of a sentence, a sentence, a set of consecutive sentences or a paragraph (i.e., the whole span of text that are inside of <P> and </P> HTML tags). 
This is a passage-level information retrieval problem with the attempt to put biologists in contexts where relevant information is provided. 3. TECHNIQUES AND METHODS We approached the problem by first retrieving the top-k most relevant paragraphs, then extracting passages from these paragraphs, and finally ranking the passages. In this process, we employed several techniques and methods, which will be introduced in this section. First, we give two definitions: Definition 3.1 A concept is 1) a entry term in the MeSH ontology, or 2) a gene symbol in the Entrez gene database. This definition of concept can be generalized to include other biomedical dictionary terms. Definition 3.2 A semantic type is a category defined in the Semantic Network of the Unified Medical Language System (UMLS) [14]. The current release of the UMLS Semantic Network contains 135 semantic types such as Disease or Syndrome. Each entry term in the MeSH ontology is assigned one or more semantic types. Each gene symbol in the Entrez gene database maps to the semantic type Gene or Genome. In addition, these semantic types are linked by 54 relationships. For example, Antibiotic prevents Disease or Syndrome. These relationships among semantic types represent general biomedical knowledge. We utilized these semantic types and their relationships to identify related concepts. The rest of this section is organized as follows: in section 3.1, we explain how the concepts are identified within a query. In section 3.2, we specify five different types of domain-specific knowledge and introduce how they are compiled. In section 3.3, we present our conceptual IR model. Finally, our strategy for passage extraction is described in section 3.4. 3.1 Identifying concepts within a query A concept, defined in Definition 3.1, is a gene symbol or a MeSH term. We make use of the query templates to identify gene symbols. For example, the query How do HMG and HMGB1 interact in hepatitis? is derived from the template How do genes interact in organ function? . In this case, HMG and HMGB1 will be identified as gene symbols. In cases where the query templates are not provided, programs for recognition of gene symbols within texts are needed. We use the query translation functionality of PubMed to extract MeSH terms in a query. This is done by submitting the whole query to PubMed, which will then return a file in which the MeSH terms in the query are labeled. In Table 3.1, three MeSH terms within the query What is the role of gene PRNP in the Mad cow disease? are found in the PubMed translation: ``encephalopathy, bovine spongiform'' for Mad cow disease, genes for gene, and role for role. Table 3.1 The PubMed translation of the query ``What is the role of gene PRNP in the Mad cow disease?'' . Term PubMed translation Mad cow disease ``bovine spongiform encephalopathy''[Text Word] OR ``encephalopathy, bovine spongiform''[MeSH Terms] OR Mad cow disease[Text Word] gene (``genes''[TIAB] NOT Medline[SB]) OR ``genes''[MeSH Terms] OR gene[Text Word] role ``role''[MeSH Terms] OR role[Text Word] 3.2 Compiling domain-specific knowledge In this paper, domain-specific knowledge refers to information about concepts and their relationships in a certain domain. We used five types of domain-specific knowledge in the domain of genomics: Type 1. Synonyms (terms listed in the thesauruses that refer to the same meaning) Type 2. Hypernyms (more generic terms, one level only) Type 3. Hyponyms (more specific terms, one level only) Type 4. 
Lexical variants (different forms of the same concept, such as abbreviations. They are commonly used in the literature, but might not be listed in the thesauruses) Type 5. Implicitly related concepts (terms that are semantically related and also co-occur more frequently than being independent in the biomedical texts) Knowledge of type 1-3 is retrieved from the following two thesauruses: 1) MeSH, a controlled vocabulary maintained by NLM for indexing biomedical literature. The 2007 version of MeSH contains information about 190,000 concepts. These concepts are organized in a tree hierarchy; 2) Entrez Gene, one of the most widely used searchable databases of genes. The current version of Entrez Gene contains information about 1.7 million genes. It does not have a hierarchy. Only synonyms are retrieved from Entrez Gene. The compiling of type 4-5 knowledge is introduced in section 3.2.1 and 3.2.2, respectively. 3.2.1 Lexical variants Lexical variants of gene symbols New gene symbols and their lexical variants are regularly introduced into the biomedical literature [7]. However, many reference databases, such as UMLS and Entrez Gene, may not be able to keep track of all this kind of variants. For example, for the gene symbol ``NF-kappa B'', at least 5 different lexical variants can be found in the biomedical literature: ``NF-kappaB'', ``NFkappaB'', ``NFkappa B'', ``NF-kB'', and ``NFkB'', three of which are not in the current UMLS and two not in the Entrez Gene. [3][21] have shown that expanding gene symbols with their lexical variants improved the retrieval effectiveness of their biomedical IR systems. In our system, we employed the following two strategies to retrieve lexical variants of gene symbols. Strategy I: This strategy is to automatically generate lexical variants according to a set of manually crafted heuristics [3][21]. For example, given a gene symbol PLA2, a variant PLAII is generated according to the heuristic that Roman numerals and Arabic numerals are convertible when naming gene symbols. Another variant, PLA 2, is also generated since a hyphen or a space could be inserted at the transition between alphabetic and numerical characters in a gene symbol. Strategy II: This strategy is to retrieve lexical variants from an abbreviation database. ADAM [22] is an abbreviation database which covers frequently used abbreviations and their definitions (or long-forms) within MEDLINE, the authoritative repository of citations from the biomedical literature maintained by the NLM. Given a query How does nucleoside diphosphate kinase (NM23) contribute to tumor progression? , we first identify the abbreviation NM23 and its long-form nucleoside diphosphate kinase using the abbreviation identification program from [4]. Searching the long-form nucleoside diphosphate kinase in ADAM, other abbreviations, such as NDPK or NDK, are retrieved. These abbreviations are considered as the lexical variants of NM23. Lexical variants of MeSH concepts ADAM is used to obtain the lexical variants of MeSH concepts as well. All the abbreviations of a MeSH concept in ADAM are considered as lexical variants to each other. In addition, those long-forms that share the same abbreviation with the MeSH concept and are different by an edit distance of 1 or 2 are also considered as its lexical variants. As an example, ``human papilloma viruses'' and ``human papillomaviruses'' have the same abbreviation HPV in ADAM and their edit distance is 1. Thus they are considered as lexical variants to each other. 
The edit distance between two strings is measured by the minimum number of insertions, deletions, and substitutions of a single character required to transform one string into the other [12]. 3.2.2 Implicitly related concepts Motivation: In some cases, there are few documents in the literature that directly answer a given query. In this situation, those documents that implicitly answer their questions or provide supporting information would be very helpful. For example, there are few documents in PubMed that directly answer the query ``What is the role of the genes HNF4 and COUP-tf I in the suppression in the function of the liver?'' . However, there exist some documents about the role of ``HNF4'' and ``COUP-tf I'' in regulating ``hepatitis B virus'' transcription. It is very likely that the biologists would be interested in these documents because ``hepatitis B virus'' is known as a virus that could cause serious damage to the function of liver. In the given example, ``hepatitis B virus'' is not a synonym, hypernym, hyponym, nor a lexical variant of any of the query concepts, but it is semantically related to the query concepts according to the UMLS Semantic Network. We call this type of concepts implicitly related concepts of the query. This notion is similar to the B-term used in [19] for relating two disjoint literatures for biomedical hypothesis generation. The difference is that we utilize the semantic relationships among query concepts to exclusively focus on concepts of certain semantic types. A query q in format (1) of section 2 can be represented by q = (A, C) where A is the set of biological objects and C is the set of biological processes. Those concepts that are semantically related to both A and C according to the UMLS Semantic Network are considered as the implicitly related concepts of the query. In the above example, A = {HNF4, COUP-tf I}, C = {function of liver}, and ``hepatitis B virus'' is one of the implicitly related concepts. We make use of the MEDLINE database to extract the implicitly related concepts. The 2006 version of MEDLINE database contains citations (i.e., abstracts, titles, and etc.) of over 15 million biomedical articles. Each document in MEDLINE is manually indexed by a list of MeSH terms to describe the topics covered by that document. Implicitly related concepts are extracted and ranked in the following steps: Step 1. Let list_A be the set of MeSH terms that are 1) used for indexing those MEDLINE citations having A, and 2) semantically related to A according to the UMLS Semantic Network. Similarly, list_C is created for C. Concepts in B = list_A ∩ list_C are considered as implicitly related concepts of the query. Step 2. For each concept b∈B, compute the association between b and A using the mutual information measure [5]: P( , ) ( , ) log P( )P( ) b A I b A b A = where P(x) = n/N, n is the number of MEDLINE citations having x and N is the size of MEDLINE. A large value for I(b, A) means that b and A co-occur much more often than being independent. I(b, C) is computed similarly. Step 3. Let r(b) = (I(b, A), I(b, C)), for b∈ B. Given b1, b2 ∈ B, we say r(b1) ≤ r(b2) if I(b1, A) ≤ I(b2, A) and I(b1, C) ≤ I(b2, C). Then the association between b and the query q is measured by: { : and ( ) ( )} ( , ) { : and ( ) ( )} x x B r x r b score b q x x B r b r x ∈ ≤ = ∈ ≤ (2) The numerator in Formula 2 is the number of the concepts in B that are associated with both A and C equally with or less than b. 
The denominator is the number of the concepts in B that are associated with both A and C equally with or more than b. Figure 3.2.2 shows the top 4 implicitly related concepts for the sample query. Figure 3.2.2 Top 4 implicitly related concepts for the query ``How do interactions between HNF4 and COUP-TF1 suppress liver function?'' . In Figure 3.2.2, the top 4 implicitly related concepts are all highly associated with liver: Hepatocytes are liver cells; Hepatoblastoma is a malignant liver neoplasm occurring in young children; the vast majority of Gluconeogenesis takes place in the liver; and Hepatitis B virus is a virus that could cause serious damage to the function of liver. The top-k ranked concepts in B are used for query expansion: if I(b, A) ≥ I(b, C), then b is considered as an implicit related concept of A. A document having b but not A will receive a partial weight of A. The expansion is similar for C when I(b, A) < I(b, C). 3.3 Conceptual IR model We now discuss our conceptual IR model. We first give the basic conceptual IR model in section 3.3.1. Then we explain how the domain-specific knowledge is incorporated in the model using query expansion in section 3.3.2. A pseudo-feedback strategy is introduced in section 3.3.3. In section 3.3.4, we give a strategy to improve the ranking by avoiding incorrect match of abbreviations. 3.3.1 Basic model Given a query q and a document d, our model measures two similarities, concept similarity and word similarity: ( , ) ( , )( , ) ( , ) concept word sim q d sim q d sim q d= Concept similarity Two vectors are derived from a query q, 1 2 1 11 12 1 2 21 22 2 ( , ) ( , ,..., ) ( , ,..., ) m n q v v v c c c v c c c = = = where v1 is a vector of concepts describing the biological object(s) and v2 is a vector of concepts describing the biological process(es). Given a vector of concepts v, let s(v) be the set of concepts in v. The weight of vi is then measured by: ( ) max{log : ( ) ( ) and 0}i i v v N w v s v s v n n = ⊆ > where v is a vector that contains a subset of concepts in vi and nv is the number of documents having all the concepts in v. The concept similarity between q and d is then computed by 2 1 ( )( , ) i i concept i w vsim q d α = = ×∑ where αi is a parameter to indicate the completeness of vi that document d has covered. αi is measured by: and i i i c c d c v c c v idf idf α ∈ ∈ ∈ = ∑ ∑ (3) where idfc is the inverse document frequency of concept c. An example: suppose we have a query How does Nurr-77 delete T cells before they migrate to the spleen or lymph nodes and how does this impact autoimmunity? . After identifying the concepts in the query, we have: 1 2 (`Nurr-77') ('T cells', `spleen', `autoimmunity', `lymph nodes') v v = = Suppose that some document frequencies of different combinations of concepts are as follows: 25 df(`Nurr-77') 0 df('T cells', `spleen', `autoimmunity', `lymph nodes') 326 df('T cells', `spleen', `autoimmunity') 82 df(`spleen', `autoimmunity', `lymph nodes') 147 df('T cells', `autoimmunity', `lymph nodes') 2332 df('T cells', `spleen', `lymph nodes') The weight of vi is then computed by (note that there does not exist a document having all the concepts in v2): 1 2 ( ) log( / 25) ( ) log( /82) w v N w v N = = . 
Now suppose a document d contains concepts `Nurr-77'', 'T cells', `spleen', and `lymph nodes', but not `autoimmunity'', then the value of parameter αi is computed as follows: 1 2 1 ('T cells')+ (`spleen')+ (`lymph nodes') ('T cells')+ (`spleen')+ (`lymph nodes')+ (`autoimmunity') idf idf idf idf idf idf idf α α = = Word similarity The similarity between q and d on the word level is computed using Okapi [17]: 10.5 ( 1) log( )( , ) 0.5word w q N n k tf sim q d n K tf∈ − + + = + + ∑ (4) where N is the size of the document collection; n is the number of documents containing w; K=k1 × ((1-b)+b × dl/avdl) and k1=1.2, C Function of Liver Implicitly related concepts (B) Hepatocytes Hepatoblastoma Gluconeogenesis Hepatitis B virus HNF4 and COUP-tf I A b=0.75 are constants. dl is the document length of d and avdl is the average document length; tf is the term frequency of w within d. The model Given two documents d1 and d2, we say 1 2( , ) ( , )sim q d sim q d> or d1 will be ranked higher than d2, with respect to the same query q, if either 1) 1 2( , ) ( , ) concept concept sim q d sim q d> or 2) 1 2 1 2and( , ) ( , ) ( , ) ( , ) concept concept word word sim q d sim q d sim q d sim q d= > This conceptual IR model emphasizes the similarity on the concept level. A similar model but applied to non-biomedical domain has been given in [15]. 3.3.2 Incorporating domain-specific knowledge Given a concept c, a vector u is derived by incorporating its domain-specific knowledge: 1 2 3( , , , )u c u u u= where u1 is a vector of its synonyms, hyponyms, and lexical variants; u2 is a vector of its hypernyms; and u3 is a vector of its implicitly related concepts. An occurrence of any term in u1 will be counted as an occurrence of c. idfc in Formula 3 is updated as: 1, logc c u N D idf = 1,c uD is the set of documents having c or any term in 1u . The weight that a document d receives from u is given by: max{ : and }tw t u t d∈ ∈ where wt = β . cidf× The weighting factor β is an empirical tuning parameter determined as: 1. β = 1 if t is the original concept, its synonym, its hyponym, or its lexical variant; 2. β = 0.95 if t is a hypernym; 3. β = 0.90× (k-i+1)/k if t is an implicitly related concept. k is the number of selected top ranked implicitly related concepts (see section 3.2.2); i is the position of t in the ranking of implicitly related concepts. 3.3.3 Pseudo-feedback Pseudo-feedback is a technique commonly used to improve retrieval performance by adding new terms into the original query. We used a modified pseudo-feedback strategy described in [2]. Step 1. Let C be the set of concepts in the top 15 ranked documents. For each concept c in C, compute the similarity between c and the query q, the computation of sim(q,c) can be found in [2]. Step 2. The top-k ranked concepts by sim(q,c) are selected. Step 3. Associate each selected concept c' with the concept cq in q that 1) has the same semantic type as c', and 2) is most related to c' among all the concepts in q. The association between c' and cq is computed by: P( ', ) ( ', ) log P( ')P( ) q q q c c I c c c c = where P(x) = n/N, n is the number of documents having x and N is the size of the document collection. A document having c' but not cq receives a weight given by: (0.5× (k-i+1)/k) ,qcidf× where i is the position of c' in the ranking of step 2. 3.3.4 Avoid incorrect match of abbreviations Some gene symbols are very short and thus ambiguous. 
For example, the gene symbol APC could be the abbreviation for many non-gene long-forms, such as air pollution control, aerobic plate count, or argon plasma coagulation. This step is to avoid incorrect match of abbreviations in the top ranked documents. Given an abbreviation X with the long-form L in the query, we scan the top-k ranked (k=1000) documents and when a document is found with X, we compare L with all the long-forms of X in that document. If none of these long-forms is equal or close to L (i.e., the edit distance between L and the long-form of X in that document is 1 or 2), then the concept similarity of X is subtracted. 3.4 Passage extraction The goal of passage extraction is to highlight the most relevant fragments of text in paragraphs. A passage is defined as any span of text that does not include the HTML paragraph tag (i.e., <P> or </P>). A passage could be a part of a sentence, a sentence, a set of consecutive sentences or a paragraph (i.e., the whole span of text that are inside of <P> and </P> HTML tags). It is also possible to have more than one relevant passage in a single paragraph. Our strategy for passage extraction assumes that the optimal passage(s) in a paragraph should have all the query concepts that the whole paragraph has. Also they should have higher density of query concepts than other fragments of text in the paragraph. Suppose we have a query q and a paragraph p represented by a sequence of sentences 1 2... . np s s s= Let C be the set of concepts in q that occur in p and S = Φ. Step 1. For each sequence of consecutive sentences 1... ,i i js s s+ 1 ≤ i ≤ j ≤ n, let S = S 1{ ... }i i js s s+∪ if 1...i i js s s+ satisfies that: 1) Every query concept in C occurs in 1...i i js s s+ and 2) There does not exist k, such that i < k < j and every query concept in C occurs in 1...i i ks s s+ or 1 2... . k k js s s+ + Condition 1 requires 1...i i js s s+ having all the query concepts in p and condition 2 requires 1...i i js s s+ be the minimal. Step 2. Let 1min{ 1: ... }i i jL j i s s s S+= − + ∈ . For every 1...i i js s s+ in S, let 1{ ... }i i jS S s s s+= − if (j - i + 1) > L. This step is to remove those sequences of sentences in S that have lower density of query concepts. Step 3. For every two sequences of consecutive sentences 1 1 1 2 2 21 1... , and ...i i j i i js s s S s s s S+ +∈ ∈ , if 1 2 1 2 2 1 , and 1 i i j j i j ≤ ≤ ≤ + (5) then do Repeat this step until for every two sequences of consecutive sentences in S, condition (5) does not apply. This step is to merge those sequences of sentences in S that are adjacent or overlapped. Finally the remaining sequences of sentences in S are returned as the optimal passages in the paragraph p with respect to the query. 1 1 2 1 1 2 2 2 1 1 1 1 { ... } { ... } { ... } i i j i i j i i j S S s s s S S s s s S S s s s + + + = ∪ = − = − 4. EXPERIMENTAL RESULTS The evaluation of our techniques and the experimental results are given in this section. We first describe the datasets and evaluation metrics used in our experiments and then present the results. 4.1 Data sets and evaluation metrics Our experiments were performed on the platform of the Genomics track of TREC 2006. The document collection contains 162,259 full-text documents from 49 Highwire biomedical journals. The set of queries consists of 28 queries collected from real biologists. The performance is measured on three different levels (passage, aspect, and document) to provide better insight on how the question is answered from different perspectives. 
Passage MAP: As described in [8], this is a character-based precision calculated as follows: At each relevant retrieved passage, precision will be computed as the fraction of characters overlapping with the gold standard passages divided by the total number of characters included in all nominated passages from this system for the topic up until that point. Similar to regular MAP, relevant passages that were not retrieved will be added into the calculation as well, with precision set to 0 for relevant passages not retrieved. Then the mean of these average precisions over all topics will be calculated to compute the mean average passage precision. Aspect MAP: A question could be addressed from different aspects. For example, the question what is the role of gene PRNP in the Mad cow disease? could be answered from aspects like Diagnosis, Neurologic manifestations, or Prions/Genetics. This measure indicates how comprehensive the question is answered. Document MAP: This is the standard IR measure. The precision is measured at every point where a relevant document is obtained and then averaged over all relevant documents to obtain the average precision for a given query. For a set of queries, the mean of the average precision for all queries is the MAP of that IR system. The output of the system is a list of passages ranked according to their similarities with the query. The performances on the three levels are then calculated based on the ranking of the passages. 4.2 Results The Wilcoxon signed-rank test was employed to determine the statistical significance of the results. In the tables of the following sections, statistically significant improvements (at the 5% level) are marked with an asterisk. 4.2.1 Conceptual IR model vs. term-based model The initial baseline was established using word similarity only computed by the Okapi (Formula 4). Another run based on our basic conceptual IR model was performed without using query expansion, pseudo-feedback, or abbreviation correction. The experimental result is shown in Table 4.2.1. Our basic conceptual IR model significantly outperforms the Okapi on all three levels, which suggests that, although it requires additional efforts to identify concepts, retrieval on the concept level can achieve substantial improvements over purely term-based retrieval model. 4.2.2 Contribution of different types of knowledge A series of experiments were performed to examine how each type of domain-specific knowledge contributes to the retrieval performance. A new baseline was established using the basic conceptual IR model without incorporating any type of domainspecific knowledge. Then five runs were conducted by adding each individual type of domain-specific knowledge. We also conducted a run by adding all types of domain-specific knowledge. Results of these experiments are shown in Table 4.2.2. We found that any available type of domain-specific knowledge improved the performance in passage retrieval. The biggest improvement comes from the lexical variants, which is consistent with the result reported in [3]. This result also indicates that biologists are likely to use different variants of the same concept according to their own writing preferences and these variants might not be collected in the existing biomedical thesauruses. It also suggests that the biomedical IR systems can benefit from the domain-specific knowledge extracted from the literature by text mining systems. Synonyms provided the second biggest improvement. 
4.2 Results
The Wilcoxon signed-rank test was employed to determine the statistical significance of the results. In the tables of the following sections, statistically significant improvements (at the 5% level) are marked with an asterisk.

4.2.1 Conceptual IR model vs. term-based model
The initial baseline was established using word similarity only, computed by the Okapi (Formula 4). Another run based on our basic conceptual IR model was performed without using query expansion, pseudo-feedback, or abbreviation correction. The experimental result is shown in Table 4.2.1. Our basic conceptual IR model significantly outperforms the Okapi on all three levels, which suggests that, although it requires additional effort to identify concepts, retrieval on the concept level can achieve substantial improvements over a purely term-based retrieval model.

4.2.2 Contribution of different types of knowledge
A series of experiments were performed to examine how each type of domain-specific knowledge contributes to the retrieval performance. A new baseline was established using the basic conceptual IR model without incorporating any type of domain-specific knowledge. Then five runs were conducted by adding each individual type of domain-specific knowledge. We also conducted a run by adding all types of domain-specific knowledge. Results of these experiments are shown in Table 4.2.2. We found that any available type of domain-specific knowledge improved the performance in passage retrieval. The biggest improvement comes from the lexical variants, which is consistent with the result reported in [3]. This result also indicates that biologists are likely to use different variants of the same concept according to their own writing preferences and these variants might not be collected in the existing biomedical thesauruses. It also suggests that biomedical IR systems can benefit from the domain-specific knowledge extracted from the literature by text mining systems. Synonyms provided the second biggest improvement. Hypernyms, hyponyms, and implicitly related concepts provided similar degrees of improvement. The overall performance is an accumulative result of adding different types of domain-specific knowledge and it is better than any individual addition. It is clearly shown that the performance is significantly improved (107% on the passage level, 63.1% on the aspect level, and 49.6% on the document level) when the domain-specific knowledge is appropriately incorporated. Although it is not explicitly shown in Table 4.2.3, different types of domain-specific knowledge affect different subsets of queries. More specifically, each of these types (with the exception of the lexical variants, which affect a large number of queries) affects only a few queries. But for those affected queries, the improvement is significant. As a consequence, the accumulative improvement is very significant.

4.2.3 Pseudo-feedback and abbreviation correction
Using the Baseline+All run in Table 4.2.2 as a new baseline, the contribution of abbreviation correction and pseudo-feedback is given in Table 4.2.3. There is little improvement from avoiding incorrect matching of abbreviations. The pseudo-feedback contributed about 4.6% improvement in passage retrieval.

4.2.4 Performance compared with best-reported results
We compared our result with the results reported in the Genomics track of TREC 2006 [8] on the conditions that 1) systems are automatic systems and 2) passages are extracted from paragraphs. The performance of our system relative to the best reported results is shown in Table 4.2.4 (in TREC 2006, some systems returned whole paragraphs as passages; as a consequence, excellent retrieval results were obtained on the document and aspect levels at the expense of performance on the passage level. We do not include the results of such systems here).

Table 4.2.4 Performance compared with best-reported results
Run | Passage MAP | Aspect MAP | Document MAP
Best reported results | 0.1486 | 0.3492 | 0.5320
Our results | 0.1823 | 0.3811 | 0.5391
Improvement | +22.68% | +9.14% | +1.33%

The best reported results in the first row of Table 4.2.4 on the three levels (passage, aspect, and document) are from different systems. Our result is from a single run on passage retrieval: it is better than the best reported result by 22.68% in passage retrieval and, at the same time, 9.14% better in aspect retrieval and 1.33% better in document retrieval. (Since the average precision of each individual query was not reported, we cannot apply the Wilcoxon signed-rank test to calculate the significance of the difference between our performance and the best reported result.)

Table 4.2.1 Basic conceptual IR model vs. term-based model
Run | Passage MAP | Passage imprvd qs # (%) | Aspect MAP | Aspect imprvd qs # (%) | Document MAP | Document imprvd qs # (%)
Okapi | 0.064 | N/A | 0.175 | N/A | 0.285 | N/A
Basic conceptual IR model | 0.084* (+31.3%) | 17 (65.4%) | 0.233* (+33.1%) | 12 (46.2%) | 0.359* (+26.0%) | 15 (57.7%)

Table 4.2.2 Contribution of different types of domain-specific knowledge
Run | Passage MAP | Passage imprvd qs # (%) | Aspect MAP | Aspect imprvd qs # (%) | Document MAP | Document imprvd qs # (%)
Baseline = Basic conceptual IR model | 0.084 | N/A | 0.233 | N/A | 0.359 | N/A
Baseline+Synonyms | 0.105 (+25%) | 11 (42.3%) | 0.246 (+5.6%) | 9 (34.6%) | 0.420 (+17%) | 13 (50%)
Baseline+Hypernyms | 0.088 (+4.8%) | 11 (42.3%) | 0.225 (-3.4%) | 9 (34.6%) | 0.390 (+8.6%) | 16 (61.5%)
Baseline+Hyponyms | 0.087 (+3.6%) | 10 (38.5%) | 0.217 (-6.9%) | 7 (26.9%) | 0.389 (+8.4%) | 10 (38.5%)
Baseline+Variants | 0.150* (+78.6%) | 16 (61.5%) | 0.348* (+49.4%) | 13 (50%) | 0.495* (+37.9%) | 10 (38.5%)
Baseline+Related | 0.086 (+2.4%) | 9 (34.6%) | 0.220 (-5.6%) | 9 (34.6%) | 0.387 (+7.8%) | 13 (50%)
Baseline+All | 0.174* (+107%) | 25 (96.2%) | 0.380* (+63.1%) | 19 (73.1%) | 0.537* (+49.6%) | 14 (53.8%)

Table 4.2.3 Contribution of abbreviation correction and pseudo-feedback
Run | Passage MAP | Passage imprvd qs # (%) | Aspect MAP | Aspect imprvd qs # (%) | Document MAP | Document imprvd qs # (%)
Baseline+All | 0.174 | N/A | 0.380 | N/A | 0.537 | N/A
Baseline+All+Abbr | 0.175 (+0.6%) | 5 (19.2%) | 0.375 (-1.3%) | 4 (15.4%) | 0.535 (-0.4%) | 4 (15.4%)
Baseline+All+Abbr+PF | 0.182 (+4.6%) | 10 (38.5%) | 0.381 (+0.3%) | 6 (23.1%) | 0.539 (+0.4%) | 9 (34.6%)

A separate experiment has been done using a second testbed, the ad-hoc task of TREC Genomics 2005, to evaluate our knowledge-intensive conceptual IR model for document retrieval of biomedical literature. The overall performance in terms of MAP is 35.50%, which is about 22.92% above the best reported result [9]. Notice that the performance was only measured on the document level for the ad-hoc task of TREC Genomics 2005.

5. RELATED WORKS
Many studies used manually-crafted thesauruses or knowledge databases created by text mining systems to improve retrieval effectiveness, based on either word-statistical retrieval systems or conceptual retrieval systems. [11][1] assessed query expansion using the UMLS Metathesaurus. Based on a word-statistical retrieval system, [11] used definitions and different types of thesaurus relationships for query expansion and a deteriorated performance was reported. [1] expanded queries with phrases and UMLS concepts determined by MetaMap, a program which maps biomedical text to UMLS concepts, and no significant improvement was shown. We used MeSH, Entrez Gene, and other non-thesaurus knowledge resources such as an abbreviation database for query expansion. A critical difference between our work and those in [11][1] is that our retrieval model is based on concepts, not on individual words. The Genomics track in TREC provides a common platform to evaluate methods and techniques proposed by various groups for biomedical information retrieval. As summarized in [8][9][10], many groups utilized domain-specific knowledge to improve retrieval effectiveness. Among these groups, [3] assessed both thesaurus-based knowledge, such as gene information, and non-thesaurus-based knowledge, such as lexical variants of gene symbols, for query expansion. They have shown that query expansion with acronyms and lexical variants of gene symbols produced the biggest improvement, whereas the query expansion with gene information from gene databases deteriorated the performance. [21] used a similar approach for generating lexical variants of gene symbols and reported significant improvements.
Our system utilized more types of domain-specific knowledge, including hyponyms, hypernyms and implicitly related concepts. In addition, under the conceptual retrieval framework, we examined more comprehensively the effects of different types of domain-specific knowledge in performance contribution. [20][15] utilized WordNet, a database of English words and their lexical relationships developed by Princeton University, for query expansion in the non-biomedical domain. In their studies, queries were expanded using lexical semantic relations such as synonyms, hypernyms, or hyponyms. Little benefit has been shown in [20]. This has been due to the ambiguity of query terms, which have different meanings in different contexts. When synonyms having multiple meanings are added to the query, a substantial number of irrelevant documents are retrieved. In the biomedical domain, this kind of ambiguity of query terms is relatively less frequent because, although abbreviations are highly ambiguous, general biomedical concepts usually have only one meaning in a thesaurus such as UMLS, whereas a term in WordNet usually has multiple meanings (represented as synsets in WordNet). Besides, we have implemented a post-ranking step to reduce the number of incorrect matches of abbreviations, which will hopefully decrease the negative impact caused by the abbreviation ambiguity. The retrieval model in [15] emphasized the similarity between a query and a document on the phrase level, assuming that phrases are more important than individual words when retrieving documents. Although the assumption is similar, our conceptual model is based on biomedical concepts, not phrases. [13] presented a good study of the role of knowledge in the document retrieval of clinical medicine. They have shown that appropriate use of semantic knowledge in a conceptual retrieval framework can yield substantial improvements. Although the retrieval model is similar, we made a study in the domain of genomics, in which the problem structure and task knowledge are not as well-defined as in the domain of clinical medicine [18]. Also, our similarity function is very different from that in [13]. In summary, our approach differs from previous works in four important ways: First, we present a case study of conceptual retrieval in the domain of genomics, where many knowledge resources can be used to improve the performance of biomedical IR systems. Second, we have studied more types of domain-specific knowledge than previous researchers and carried out more comprehensive experiments to look into the effects of different types of domain-specific knowledge in performance contribution. Third, although some of the techniques seem similar to previously published ones, they are actually quite different in details. For example, in our pseudo-feedback process, we require that the unit of feedback is a concept and the concept has to be of the same semantic type as a query concept. This is to ensure that our conceptual model of retrieval can be applied. As another example, the way in which implicitly related concepts are extracted in this paper is significantly different from that given in [19].
Finally, our conceptual IR model is actually based on complex concepts because some biomedical meanings, such as biological processes, are represented by multiple simple concepts.

6. CONCLUSION
This paper proposed a conceptual approach to utilizing domain-specific knowledge in an IR system to improve its effectiveness in retrieving biomedical literature. We specified five different types of domain-specific knowledge (i.e., synonyms, hyponyms, hypernyms, lexical variants, and implicitly related concepts) and examined their effects in performance contribution. We also evaluated two other techniques, pseudo-feedback and abbreviation correction. Experimental results have shown that appropriate use of domain-specific knowledge in a conceptual IR model yields significant improvements (23%) in passage retrieval over the best known results. In our future work, we will explore the use of other existing knowledge resources, such as UMLS and Wikipedia, and evaluate techniques such as disambiguation of gene symbols for improving retrieval effectiveness. The application of our conceptual IR model in other domains such as clinical medicine will be investigated.

7. ACKNOWLEDGMENTS

8. REFERENCES
[1] Aronson A.R., Rindflesch T.C. Query expansion using the UMLS Metathesaurus. Proc AMIA Annu Fall Symp. 1997:485-9.
[2] Baeza-Yates R., Ribeiro-Neto B. Modern Information Retrieval. Addison-Wesley, 1999, 129-131.
[3] Buttcher S., Clarke C.L.A., Cormack G.V. Domain-specific synonym expansion and validation for biomedical information retrieval (MultiText experiments for TREC 2004). TREC'04.
[4] Chang J.T., Schutze H., Altman R.B. Creating an online dictionary of abbreviations from MEDLINE. Journal of the American Medical Informatics Association. 2002;9(6).
[5] Church K.W., Hanks P. Word association norms, mutual information and lexicography. Computational Linguistics. 1990;16(1):22-29.
[6] Fontelo P., Liu F., Ackerman M. askMEDLINE: a free-text, natural language query tool for MEDLINE/PubMed. BMC Med Inform Decis Mak. 2005 Mar 10;5(1):5.
[7] Fukuda K., Tamura A., Tsunoda T., Takagi T. Toward information extraction: identifying protein names from biological papers. Pac Symp Biocomput. 1998:707-18.
[8] Hersh W.R., et al. TREC 2006 Genomics Track Overview. TREC'06.
[9] Hersh W.R., et al. TREC 2005 Genomics Track Overview. TREC'05.
[10] Hersh W.R., et al. TREC 2004 Genomics Track Overview. TREC'04.
[11] Hersh W.R., Price S., Donohoe L. Assessing thesaurus-based query expansion using the UMLS Metathesaurus. Proc AMIA Symp. 2000:344-8.
[12] Levenshtein V. Binary codes capable of correcting deletions, insertions, and reversals. Soviet Physics - Doklady 10, 10 (1966), 707-710.
[13] Lin J., Demner-Fushman D. The Role of Knowledge in Conceptual Retrieval: A Study in the Domain of Clinical Medicine. SIGIR'06. 99-106.
[14] Lindberg D., Humphreys B., and McCray A. The Unified Medical Language System. Methods of Information in Medicine. 32(4):281-291, 1993.
[15] Liu S., Liu F., Yu C., and Meng W.Y. An Effective Approach to Document Retrieval via Utilizing WordNet and Recognizing Phrases. SIGIR'04. 266-272.
[16] Proux D., Rechenmann F., Julliard L., Pillet V.V., Jacq B. Detecting Gene Symbols and Names in Biological Texts: A First Step toward Pertinent Information Extraction. Genome Inform Ser Workshop Genome Inform. 1998;9:72-80.
[17] Robertson S.E., Walker S. Okapi/Keenbow at TREC-8. NIST Special Publication 500-246: TREC 8.
[18] Sackett D.L., et al.
Evidence-Based Medicine: How to Practice and Teach EBM. Churchill Livingstone. Second edition, 2000.
[19] Swanson D.R., Smalheiser N.R. An interactive system for finding complementary literatures: a stimulus to scientific discovery. Artificial Intelligence. 1997;91:183-203.
[20] Voorhees E. Query expansion using lexical-semantic relations. SIGIR 1994. 61-69.
[21] Zhong M., Huang X.J. Concept-based biomedical text retrieval. SIGIR'06. 723-4.
[22] Zhou W., Torvik V.I., Smalheiser N.R. ADAM: Another Database of Abbreviations in MEDLINE. Bioinformatics. 2006;22(22):2813-2818.
Knowledge-intensive Conceptual Retrieval and Passage Extraction of Biomedical Literature ABSTRACT This paper presents a study of incorporating domain-specific knowledge (i.e., information about concepts and relationships between concepts in a certain domain) in an information retrieval (IR) system to improve its effectiveness in retrieving biomedical literature. The effects of different types of domain-specific knowledge in performance contribution are examined. Based on the TREC platform, we show that appropriate use of domainspecific knowledge in a proposed conceptual retrieval model yields about 23% improvement over the best reported result in passage retrieval in the Genomics Track of TREC 2006. 1. INTRODUCTION Biologists search for literature on a daily basis. For most biologists, PubMed, an online service of U.S. National Library of Medicine (NLM), is the most commonly used tool for searching the biomedical literature. PubMed allows for keyword search by using Boolean operators. For example, if one desires documents on the use of the drug propanolol in the disease hypertension, a typical PubMed query might be "propanolol AND hypertension", which will return all the documents having the two keywords. Keyword search in PubMed is effective if the query is well-crafted by the users using their expertise. However, information needs of biologists, in some cases, are expressed as complex questions [8] [9], which PubMed is not designed to handle. While NLM does maintain an experimental tool for free-text queries [6], it is still based on PubMed keyword search. The Genomics track of the 2006 Text REtrieval Conference (TREC) provides a common platform to assess the methods and techniques proposed by various groups for biomedical information retrieval. The queries were collected from real biologists and they are expressed as complex questions, such as "How do mutations in the Huntingtin gene affect Huntington's disease?" . The document collection contains 162,259 Highwire full-text documents in HTML format. Systems from participating groups are expected to find relevant passages within the full-text documents. A passage is defined as any span of text that does not include the HTML paragraph tag (i.e., <P> or </P>). We approached the problem by utilizing domain-specific knowledge in a conceptual retrieval model. Domain-specific knowledge, in this paper, refers to information about concepts and relationships between concepts in a certain domain. We assume that appropriate use of domain-specific knowledge might improve the effectiveness of retrieval. For example, given a query "What is the role of gene PRNP in the Mad Cow Disease?" , expanding the gene symbol "PRNP" with its synonyms "Prp", "PrPSc", and "prion protein", more relevant documents might be retrieved. PubMed and many other biomedical systems [8] [9] [10] [13] also make use of domain-specific knowledge to improve retrieval effectiveness. Intuitively, retrieval on the level of concepts should outperform "bag-of-words" approaches, since the semantic relationships among words in a concept are utilized. In some recent studies [13] [15], positive results have been reported for this hypothesis. In this paper, concepts are entry terms of the ontology Medical Subject Headings (MeSH), a controlled vocabulary maintained by NLM for indexing biomedical literature, or gene symbols in the Entrez gene database also from NLM. A concept could be a word, such as the gene symbol "PRNP", or a phrase, such as "Mad cow diseases". 
In the conceptual retrieval model presented in this paper, the similarity between a query and a document is measured on both the concept and the word level. This paper makes two contributions:
1. We propose a conceptual retrieval model that utilizes domain-specific knowledge for retrieving biomedical literature. Based on this approach, our system achieved significant improvement (23%) over the best reported result in passage retrieval in the Genomics track of TREC 2006.
2. We examine the effects of utilizing concepts and of different types of domain-specific knowledge in performance contribution.
This paper is organized as follows: the problem statement is given in the next section. The techniques are introduced in section 3. In section 4, we present the experimental results. Related works are given in section 5 and finally, we conclude the paper in section 6.

2. PROBLEM STATEMENT
We describe the queries, document collection and the system output in this section. The query set used in the Genomics track of TREC 2006 consists of 28 questions collected from real biologists. As described in [8], these questions all follow a general format in which a biological object might be a gene, protein, or gene mutation and a biological process can be a physiological process or disease. A question might involve multiple biological objects (m) and multiple biological processes (n). These questions were derived from four templates (Table 2).

Table 2 Query templates and examples in the Genomics track of TREC 2006
Template | Example
What is the role of gene in disease? | What is the role of DRD4 in alcoholism?
What effect does gene have on biological process? | What effect does the insulin receptor gene have on tumorigenesis?
How do genes interact in organ function? | How do HMG and HMGB1 interact in hepatitis?
How does a mutation in gene influence biological process? | How does a mutation in Ret influence thyroid function?

Features of the queries: 1) They are different from typical Web queries and PubMed queries, both of which usually consist of 1 to 3 keywords; 2) They are generated from structural templates which can be used by a system to identify the query components, the biological object or process.
The document collection contains 162,259 Highwire full-text documents in HTML format. The output of the system is a list of passages ranked according to their similarities with the query. A passage is defined as any span of text that does not include the HTML paragraph tag (i.e., <P> or </P>). A passage could be a part of a sentence, a sentence, a set of consecutive sentences or a paragraph (i.e., the whole span of text that is inside the <P> and </P> HTML tags). This is a passage-level information retrieval problem that attempts to put biologists in contexts where relevant information is provided.

3. TECHNIQUES AND METHODS
We approached the problem by first retrieving the top-k most relevant paragraphs, then extracting passages from these paragraphs, and finally ranking the passages. In this process, we employed several techniques and methods, which will be introduced in this section. First, we give two definitions:
Definition 3.1 A concept is 1) an entry term in the MeSH ontology, or 2) a gene symbol in the Entrez gene database. This definition of concept can be generalized to include other biomedical dictionary terms.
Definition 3.2 A semantic type is a category defined in the Semantic Network of the Unified Medical Language System (UMLS) [14]. The current release of the UMLS Semantic Network contains 135 semantic types such as "Disease or Syndrome".
Each entry term in the MeSH ontology is assigned one or more semantic types. Each gene symbol in the Entrez gene database maps to the semantic type "Gene or Genome". In addition, these semantic types are linked by 54 relationships. For example, "Antibiotic" prevents "Disease or Syndrome". These relationships among semantic types represent general biomedical knowledge. We utilized these semantic types and their relationships to identify related concepts. The rest of this section is organized as follows: in section 3.1, we explain how the concepts are identified within a query. In section 3.2, we specify five different types of domain-specific knowledge and introduce how they are compiled. In section 3.3, we present our conceptual IR model. Finally, our strategy for passage extraction is described in section 3.4. 3.1 Identifying concepts within a query A concept, defined in Definition 3.1, is a gene symbol or a MeSH term. We make use of the query templates to identify gene symbols. For example, the query "How do HMG and HMGB1 interact in hepatitis?" is derived from the template "How do genes interact in organ function?" . In this case, "HMG" and "HMGB1" will be identified as gene symbols. In cases where the query templates are not provided, programs for recognition of gene symbols within texts are needed. We use the query translation functionality of PubMed to extract MeSH terms in a query. This is done by submitting the whole query to PubMed, which will then return a file in which the MeSH terms in the query are labeled. In Table 3.1, three MeSH terms within the query "What is the role of gene PRNP in the Mad cow disease?" are found in the PubMed translation: "encephalopathy, bovine spongiform" for "Mad cow disease", "genes" for "gene", and "role" for "role". Table 3.1 The PubMed translation of the query "What is the role of gene PRNP in the Mad cow disease?" . 3.2 Compiling domain-specific knowledge In this paper, domain-specific knowledge refers to information about concepts and their relationships in a certain domain. We used five types of domain-specific knowledge in the domain of genomics: Type 1. Synonyms (terms listed in the thesauruses that refer to the same meaning) Type 2. Hypernyms (more generic terms, one level only) Type 3. Hyponyms (more specific terms, one level only) Type 4. Lexical variants (different forms of the same concept, such as abbreviations. They are commonly used in the literature, but might not be listed in the thesauruses) Type 5. Implicitly related concepts (terms that are semantically related and also co-occur more frequently than being independent in the biomedical texts) Knowledge of type 1-3 is retrieved from the following two thesauruses: 1) MeSH, a controlled vocabulary maintained by NLM for indexing biomedical literature. The 2007 version of MeSH contains information about 190,000 concepts. These concepts are organized in a tree hierarchy; 2) Entrez Gene, one of the most widely used searchable databases of genes. The current version of Entrez Gene contains information about 1.7 million genes. It does not have a hierarchy. Only synonyms are retrieved from Entrez Gene. The compiling of type 4-5 knowledge is introduced in section 3.2.1 and 3.2.2, respectively. 3.2.1 Lexical variants Lexical variants of gene symbols New gene symbols and their lexical variants are regularly introduced into the biomedical literature [7]. However, many reference databases, such as UMLS and Entrez Gene, may not be able to keep track of all this kind of variants. 
For example, for the gene symbol "NF-kappa B", at least 5 different lexical variants can be found in the biomedical literature: "NF-kappaB", "NFkappaB", "NFkappa B", "NF-kB", and "NFkB", three of which are not in the current UMLS and two not in Entrez Gene. [3][21] have shown that expanding gene symbols with their lexical variants improved the retrieval effectiveness of their biomedical IR systems. In our system, we employed the following two strategies to retrieve lexical variants of gene symbols.
Strategy I: This strategy is to automatically generate lexical variants according to a set of manually crafted heuristics [3][21]. For example, given a gene symbol "PLA2", a variant "PLAII" is generated according to the heuristic that Roman numerals and Arabic numerals are convertible when naming gene symbols. Another variant, "PLA 2", is also generated since a hyphen or a space could be inserted at the transition between alphabetic and numerical characters in a gene symbol.
Strategy II: This strategy is to retrieve lexical variants from an abbreviation database. ADAM [22] is an abbreviation database which covers frequently used abbreviations and their definitions (or long-forms) within MEDLINE, the authoritative repository of citations from the biomedical literature maintained by the NLM. Given a query "How does nucleoside diphosphate kinase (NM23) contribute to tumor progression?", we first identify the abbreviation "NM23" and its long-form "nucleoside diphosphate kinase" using the abbreviation identification program from [4]. Searching the long-form "nucleoside diphosphate kinase" in ADAM, other abbreviations, such as "NDPK" or "NDK", are retrieved. These abbreviations are considered as lexical variants of "NM23".
Lexical variants of MeSH concepts
ADAM is used to obtain the lexical variants of MeSH concepts as well. All the abbreviations of a MeSH concept in ADAM are considered as lexical variants of each other. In addition, those long-forms that share the same abbreviation with the MeSH concept and differ from it by an edit distance of 1 or 2 are also considered as its lexical variants. As an example, "human papilloma viruses" and "human papillomaviruses" have the same abbreviation "HPV" in ADAM and their edit distance is 1. Thus they are considered as lexical variants of each other. The edit distance between two strings is measured by the minimum number of insertions, deletions, and substitutions of a single character required to transform one string into the other [12].
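As an illustration of Strategy I, the following is a minimal sketch of heuristic variant generation for gene symbols. Only the two heuristics mentioned above (Roman/Arabic numeral conversion and inserting or removing a hyphen or space at alphabetic-numeric transitions) are shown; the function name and the restriction to Roman numerals up to X are our own simplifying assumptions.

import re

ROMAN = {1: "I", 2: "II", 3: "III", 4: "IV", 5: "V",
         6: "VI", 7: "VII", 8: "VIII", 9: "IX", 10: "X"}
ARABIC = {v: str(k) for k, v in ROMAN.items()}

def lexical_variants(symbol):
    # Generate candidate lexical variants of a gene symbol (a sketch).
    variants = set()
    # Heuristic 1: Roman and Arabic numerals are interchangeable, e.g. PLA2 -> PLAII.
    m = re.match(r"^(.*?)(\d+)$", symbol)
    if m and int(m.group(2)) in ROMAN:
        variants.add(m.group(1) + ROMAN[int(m.group(2))])
    m = re.match(r"^(.*?)(X|IX|VIII|VII|VI|V|IV|III|II|I)$", symbol)
    if m:
        variants.add(m.group(1) + ARABIC[m.group(2)])
    # Heuristic 2: a hyphen or a space may be inserted (or removed) at the
    # transition between alphabetic and numerical characters, e.g. PLA2 -> "PLA 2", "PLA-2".
    for sep in (" ", "-"):
        variants.add(re.sub(r"(?<=[A-Za-z])(?=\d)", sep, symbol))
    variants.add(symbol.replace("-", "").replace(" ", ""))
    variants.discard(symbol)
    return variants

print(lexical_variants("PLA2"))   # e.g. {"PLAII", "PLA 2", "PLA-2"}
print(lexical_variants("NF-kB"))  # e.g. {"NFkB"}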
3.2.2 Implicitly related concepts
Motivation: In some cases, there are few documents in the literature that directly answer a given query. In this situation, those documents that implicitly answer the question or provide supporting information would be very helpful. For example, there are few documents in PubMed that directly answer the query "What is the role of the genes HNF4 and COUP-tf I in the suppression of the function of the liver?". However, there exist some documents about the role of "HNF4" and "COUP-tf I" in regulating "hepatitis B virus" transcription. It is very likely that biologists would be interested in these documents because "hepatitis B virus" is known as a virus that could cause serious damage to the function of the liver. In the given example, "hepatitis B virus" is not a synonym, hypernym, hyponym, nor a lexical variant of any of the query concepts, but it is semantically related to the query concepts according to the UMLS Semantic Network. We call this type of concepts "implicitly related concepts" of the query. This notion is similar to the "B-term" used in [19] for relating two disjoint literatures for biomedical hypothesis generation. The difference is that we utilize the semantic relationships among query concepts to exclusively focus on concepts of certain semantic types.

A query q in format (1) of section 2 can be represented by q = (A, C), where A is the set of biological objects and C is the set of biological processes. Those concepts that are semantically related to both A and C according to the UMLS Semantic Network are considered as the implicitly related concepts of the query. In the above example, A = {"HNF4", "COUP-tf I"}, C = {"function of liver"}, and "hepatitis B virus" is one of the implicitly related concepts. We make use of the MEDLINE database to extract the implicitly related concepts. The 2006 version of the MEDLINE database contains citations (i.e., abstracts, titles, etc.) of over 15 million biomedical articles. Each document in MEDLINE is manually indexed by a list of MeSH terms to describe the topics covered by that document. Implicitly related concepts are extracted and ranked in the following steps:

Step 1. Let list_A be the set of MeSH terms that are 1) used for indexing those MEDLINE citations having A, and 2) semantically related to A according to the UMLS Semantic Network. Similarly, list_C is created for C. Concepts in B = list_A ∩ list_C are considered as implicitly related concepts of the query.

Step 2. For each concept b ∈ B, compute the association between b and A using the mutual information measure [5]:

I(b, A) = log [ P(b, A) / (P(b) P(A)) ]    (1)

where P(x) = n/N, n is the number of MEDLINE citations having x and N is the size of MEDLINE. A large value of I(b, A) means that b and A co-occur much more often than they would if they were independent. I(b, C) is computed similarly.

Step 3. Let r(b) = (I(b, A), I(b, C)) for b ∈ B. Given b1, b2 ∈ B, we say r(b1) < r(b2) if I(b1, A) < I(b2, A) and I(b1, C) < I(b2, C). Then the association between b and the query q is measured by

I(b, q) = |{b' ∈ B : I(b', A) ≤ I(b, A) and I(b', C) ≤ I(b, C)}| / |{b' ∈ B : I(b', A) ≥ I(b, A) and I(b', C) ≥ I(b, C)}|    (2)

The numerator in Formula 2 is the number of concepts in B that are associated with both A and C equally with or less than b. The denominator is the number of concepts in B that are associated with both A and C equally with or more than b.

Figure 3.2.2 shows the top 4 implicitly related concepts for the sample query.
Figure 3.2.2 Top 4 implicitly related concepts for the query "How do interactions between HNF4 and COUP-TF1 suppress liver function?"
In Figure 3.2.2, the top 4 implicitly related concepts are all highly associated with "liver": "Hepatocytes" are liver cells; "Hepatoblastoma" is a malignant liver neoplasm occurring in young children; the vast majority of "Gluconeogenesis" takes place in the liver; and "Hepatitis B virus" is a virus that could cause serious damage to the function of the liver.

The top-k ranked concepts in B are used for query expansion: if I(b, A) ≥ I(b, C), then b is considered as an implicitly related concept of A. A document having b but not A will receive a partial weight of A. The expansion is similar for C when I(b, A) < I(b, C).
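The extraction-and-ranking procedure above can be sketched as follows. The precomputed citation and co-citation counts (counts, n_A, n_C, n_total) are assumed to come from a MEDLINE index built offline, and the function and variable names are illustrative only.

import math

def pmi(n_joint, n_x, n_y, n_total):
    # Pointwise mutual information between two sets of citations (Formula 1).
    if n_joint == 0:
        return float("-inf")
    p_joint = n_joint / n_total
    return math.log(p_joint / ((n_x / n_total) * (n_y / n_total)))

def rank_implicit_concepts(candidates, counts, n_A, n_C, n_total):
    # candidates: candidate concept names (B = list_A intersect list_C).
    # counts: dict mapping a concept b to (n_b, n_bA, n_bC), i.e. its citation
    # count and its co-citation counts with A and with C.
    scores = {}
    for b in candidates:
        n_b, n_bA, n_bC = counts[b]
        scores[b] = (pmi(n_bA, n_b, n_A, n_total), pmi(n_bC, n_b, n_C, n_total))
    ranked = []
    for b in candidates:
        ia, ic = scores[b]
        # Formula 2: candidates dominated by b versus candidates dominating b.
        num = sum(1 for b2 in candidates if scores[b2][0] <= ia and scores[b2][1] <= ic)
        den = sum(1 for b2 in candidates if scores[b2][0] >= ia and scores[b2][1] >= ic)
        ranked.append((num / den, b))   # den >= 1 because b dominates itself
    ranked.sort(reverse=True)
    return [b for _, b in ranked]

The top-k concepts of this ranking would then be the ones used for query expansion as described above.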
3.3 Conceptual IR model
We now discuss our conceptual IR model. We first give the basic conceptual IR model in section 3.3.1. Then we explain how the domain-specific knowledge is incorporated in the model using query expansion in section 3.3.2. A pseudo-feedback strategy is introduced in section 3.3.3. In section 3.3.4, we give a strategy to improve the ranking by avoiding incorrect matches of abbreviations.

3.3.1 Basic model
Given a query q and a document d, our model measures two similarities, a concept similarity sim_concept(q, d) and a word similarity sim_word(q, d).

Concept similarity: Two vectors are derived from a query q, q = (v1, v2), where v1 is a vector of concepts describing the biological object(s) and v2 is a vector of concepts describing the biological process(es). Given a vector of concepts v, let s(v) be the set of concepts in v. The weight of v_i is measured from the document frequencies of the different combinations of its concepts, where for a vector v containing a subset of the concepts in v_i, n_v denotes the number of documents having all the concepts in v. The concept similarity between q and d is then computed from these weights together with a parameter α_i that indicates the completeness of v_i that document d has covered. α_i is measured by

α_i = ( Σ_{c ∈ s(v_i), c ∈ d} idf_c ) / ( Σ_{c ∈ s(v_i)} idf_c )    (3)

where idf_c is the inverse document frequency of concept c.

An example: suppose we have the query "How does Nurr-77 delete T cells before they migrate to the spleen or lymph nodes and how does this impact autoimmunity?". After identifying the concepts in the query, we obtain the concept vectors v1 and v2. Suppose that the document frequencies of the different combinations of these concepts are known; the weights of v1 and v2 are then computed from these document frequencies (note that there does not exist a document having all the concepts in v2). Now suppose a document d contains the concepts 'Nurr-77', 'T cells', 'spleen', and 'lymph nodes', but not 'autoimmunity'; then the value of the corresponding parameter α_i is computed as follows:

α_i = [ idf('T cells') + idf('spleen') + idf('lymph nodes') ] / [ idf('T cells') + idf('spleen') + idf('lymph nodes') + idf('autoimmunity') ]

Word similarity: The similarity between q and d on the word level is computed using Okapi [17]:

sim_word(q, d) = Σ_{w ∈ q} log( (N - n + 0.5) / (n + 0.5) ) × ( (k1 + 1) × tf ) / ( K + tf )    (4)

where N is the size of the document collection; n is the number of documents containing w; K = k1 × ((1 - b) + b × dl/avdl), with constants k1 = 1.2 and b = 0.75; dl is the document length of d and avdl is the average document length; and tf is the term frequency of w within d.

The model: Given two documents d1 and d2, we say sim(q, d1) > sim(q, d2), i.e., d1 will be ranked higher than d2 with respect to the same query q, if either sim_concept(q, d1) > sim_concept(q, d2), or sim_concept(q, d1) = sim_concept(q, d2) and sim_word(q, d1) > sim_word(q, d2). This conceptual IR model emphasizes the similarity on the concept level. A similar model, applied to a non-biomedical domain, has been given in [15].

3.3.2 Incorporating domain-specific knowledge
Given a concept c, a vector u = (u1, u2, u3) is derived by incorporating its domain-specific knowledge, where u1 is a vector of its synonyms, hyponyms, and lexical variants; u2 is a vector of its hypernyms; and u3 is a vector of its implicitly related concepts. An occurrence of any term in u1 will be counted as an occurrence of c. For an expansion term t, idf_c in Formula 3 is updated to a weight w_t = λ × idf_c, where the weighting factor λ is an empirical tuning parameter determined as follows:
1. λ = 1 if t is the original concept, one of its synonyms, one of its hyponyms, or one of its lexical variants;
2. λ = 0.95 if t is a hypernym;
3. λ = 0.90 × (k - i + 1) / k if t is an implicitly related concept, where k is the number of selected top-ranked implicitly related concepts (see section 3.2.2) and i is the position of t in the ranking of implicitly related concepts.
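The two-level ranking of section 3.3.1 can be sketched as follows. This is a simplified illustration rather than the exact scoring of the system: the concept similarity is approximated here by the coverage terms of Formula 3 alone, the document representation (a dict with "concepts" and "terms" fields) is our own assumption, and the word similarity follows the Okapi weighting of Formula 4 with the constants given above.

import math

K1, B = 1.2, 0.75

def okapi_word_sim(query_terms, doc_terms, doc_len, avg_doc_len, df, n_docs):
    # Okapi word-level similarity (Formula 4) with k1 = 1.2 and b = 0.75.
    tf = {}
    for w in doc_terms:
        tf[w] = tf.get(w, 0) + 1
    K = K1 * ((1 - B) + B * doc_len / avg_doc_len)
    score = 0.0
    for w in set(query_terms):
        if w not in tf or w not in df:
            continue
        n = df[w]
        idf = math.log((n_docs - n + 0.5) / (n + 0.5))
        score += idf * ((K1 + 1) * tf[w]) / (K + tf[w])
    return score

def concept_sim(query_vectors, doc_concepts, idf):
    # Simplified concept-level similarity: for each query vector v_i, the
    # coverage term of Formula 3 (idf mass of matched concepts over the idf
    # mass of all concepts in v_i), summed over the query vectors.
    score = 0.0
    for v in query_vectors:
        total = sum(idf.get(c, 0.0) for c in v)
        matched = sum(idf.get(c, 0.0) for c in v if c in doc_concepts)
        if total > 0:
            score += matched / total
    return score

def rank_key(query_vectors, query_terms, doc, idf, df, n_docs, avg_doc_len):
    # Documents are compared on the concept level first; the word level is
    # used only to break ties, mirroring the two-level ranking described above.
    c = concept_sim(query_vectors, doc["concepts"], idf)
    w = okapi_word_sim(query_terms, doc["terms"], len(doc["terms"]), avg_doc_len, df, n_docs)
    return (c, w)

# Candidate documents would then be sorted by this key in descending order:
# sorted(docs, key=lambda d: rank_key(query_vectors, query_terms, d, idf, df, n_docs, avg_doc_len), reverse=True)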
3.3.3 Pseudo-feedback
Pseudo-feedback is a technique commonly used to improve retrieval performance by adding new terms into the original query. We used a modified pseudo-feedback strategy described in [2].
Step 1. Let C be the set of concepts in the top 15 ranked documents. For each concept c in C, compute the similarity between c and the query q; the computation of sim(q, c) can be found in [2].
Step 2. The top-k concepts ranked by sim(q, c) are selected.
Step 3. Associate each selected concept c' with the concept c_q in q that 1) has the same semantic type as c', and 2) is most related to c' among all the concepts in q. The association between c' and c_q is computed by the mutual information measure, as in Formula 1, where P(x) = n/N, n is the number of documents having x and N is the size of the document collection. A document having c' but not c_q receives a weight given by (0.5 × (k - i + 1) / k) × idf_{c_q}, where i is the position of c' in the ranking of step 2.

3.3.4 Avoid incorrect match of abbreviations
Some gene symbols are very short and thus ambiguous. For example, the gene symbol "APC" could be the abbreviation of many non-gene long-forms, such as "air pollution control", "aerobic plate count", or "argon plasma coagulation". This step is to avoid incorrect matches of abbreviations in the top ranked documents. Given an abbreviation X with the long-form L in the query, we scan the top-k ranked (k = 1000) documents, and when a document is found with X, we compare L with all the long-forms of X in that document. If none of these long-forms is equal or close to L (i.e., the edit distance between L and the long-form of X in that document is 1 or 2), then the concept similarity of X is subtracted.

3.4 Passage extraction
The goal of passage extraction is to highlight the most relevant fragments of text in paragraphs. A passage is defined as any span of text that does not include the HTML paragraph tag (i.e., <P> or </P>). A passage could be a part of a sentence, a sentence, a set of consecutive sentences or a paragraph (i.e., the whole span of text that is inside the <P> and </P> HTML tags). It is also possible to have more than one relevant passage in a single paragraph. Our strategy for passage extraction assumes that the optimal passage(s) in a paragraph should have all the query concepts that the whole paragraph has. They should also have a higher density of query concepts than other fragments of text in the paragraph. Suppose we have a query q and a paragraph p represented by a sequence of sentences p = s_1 s_2 … s_n. Let C be the set of concepts in q that occur in p and let S = ∅.
Step 1. For each sequence of consecutive sentences s_i s_{i+1} … s_j, 1 ≤ i ≤ j ≤ n, let S = S ∪ {s_i s_{i+1} … s_j} if s_i s_{i+1} … s_j satisfies the following: 1) every query concept in C occurs in s_i s_{i+1} … s_j, and 2) there does not exist k, i ≤ k < j, such that every query concept in C occurs in s_i s_{i+1} … s_k or in s_{k+1} s_{k+2} … s_j. Condition 1 requires s_i s_{i+1} … s_j to have all the query concepts in p, and condition 2 requires s_i s_{i+1} … s_j to be minimal.
Step 2. Let L = min {j - i + 1 : s_i s_{i+1} … s_j ∈ S}. For every s_i s_{i+1} … s_j in S, let S = S - {s_i s_{i+1} … s_j} if (j - i + 1) > L. This step removes those sequences of sentences in S that have a lower density of query concepts.
Step 3. For every two sequences of consecutive sentences s_i … s_j and s_k … s_l in S with i ≤ k ≤ j + 1, replace them in S with the single sequence s_i … s_l (5). Repeat this step until, for every two sequences of consecutive sentences in S, condition (5) does not apply. This step merges those sequences of sentences in S that are adjacent or overlapped.
Finally, the remaining sequences of sentences in S are returned as the optimal passages in the paragraph p with respect to the query.
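The following is a minimal sketch of this passage-extraction procedure. It assumes that concept matching has already been done, so that each sentence of the paragraph comes with the set of query concepts it contains; the function and argument names are illustrative only.

def extract_passages(sentences, sent_concepts, query_concepts):
    # sentences: list of sentence strings of the paragraph.
    # sent_concepts: list of sets, the query concepts found in each sentence.
    # query_concepts: set C of query concepts occurring in the paragraph.
    n = len(sentences)
    if not query_concepts:
        return []
    def covered(i, j):
        # True if sentences i..j (inclusive) contain every concept in C.
        found = set()
        for k in range(i, j + 1):
            found |= sent_concepts[k]
        return query_concepts <= found
    # Step 1: minimal windows of consecutive sentences containing all concepts in C
    spans = []
    for i in range(n):
        for j in range(i, n):
            if covered(i, j):
                # minimality: no split point k with either half already covering C
                if not any(covered(i, k) or covered(k + 1, j) for k in range(i, j)):
                    spans.append((i, j))
                break  # longer windows starting at i are not minimal
    if not spans:
        return []
    # Step 2: keep only the shortest windows (highest density of query concepts)
    L = min(j - i + 1 for i, j in spans)
    spans = [(i, j) for i, j in spans if j - i + 1 == L]
    # Step 3: merge adjacent or overlapping windows
    spans.sort()
    merged = []
    for i, j in spans:
        if merged and i <= merged[-1][1] + 1:
            merged[-1] = (merged[-1][0], max(merged[-1][1], j))
        else:
            merged.append((i, j))
    return [" ".join(sentences[i:j + 1]) for i, j in merged]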
4. EXPERIMENTAL RESULTS
The evaluation of our techniques and the experimental results are given in this section. We first describe the datasets and evaluation metrics used in our experiments and then present the results.
4.1 Data sets and evaluation metrics
Our experiments were performed on the platform of the Genomics track of TREC 2006. The document collection contains 162,259 full-text documents from 49 Highwire biomedical journals. The set of queries consists of 28 queries collected from real biologists. The performance is measured on three different levels (passage, aspect, and document) to provide better insight on how the question is answered from different perspectives.
Passage MAP: As described in [8], this is a character-based precision calculated as follows: "At each relevant retrieved passage, precision will be computed as the fraction of characters overlapping with the gold standard passages divided by the total number of characters included in all nominated passages from this system for the topic up until that point. Similar to regular MAP, relevant passages that were not retrieved will be added into the calculation as well, with precision set to 0 for relevant passages not retrieved. Then the mean of these average precisions over all topics will be calculated to compute the mean average passage precision."
Aspect MAP: A question could be addressed from different aspects. For example, the question "what is the role of gene PRNP in the Mad cow disease?" could be answered from aspects like "Diagnosis", "Neurologic manifestations", or "Prions/Genetics". This measure indicates how comprehensively the question is answered.
Document MAP: This is the standard IR measure. The precision is measured at every point where a relevant document is obtained and then averaged over all relevant documents to obtain the average precision for a given query. For a set of queries, the mean of the average precision for all queries is the MAP of that IR system.
The output of the system is a list of passages ranked according to their similarities with the query. The performances on the three levels are then calculated based on the ranking of the passages.
4.2 Results
The Wilcoxon signed-rank test was employed to determine the statistical significance of the results. In the tables of the following sections, statistically significant improvements (at the 5% level) are marked with an asterisk.
4.2.1 Conceptual IR model vs. term-based model
The initial baseline was established using word similarity only, computed by the Okapi (Formula 4). Another run based on our basic conceptual IR model was performed without using query expansion, pseudo-feedback, or abbreviation correction. The experimental result is shown in Table 4.2.1. Our basic conceptual IR model significantly outperforms the Okapi on all three levels, which suggests that, although it requires additional effort to identify concepts, retrieval on the concept level can achieve substantial improvements over a purely term-based retrieval model.
4.2.2 Contribution of different types of knowledge
A series of experiments were performed to examine how each type of domain-specific knowledge contributes to the retrieval performance. A new baseline was established using the basic conceptual IR model without incorporating any type of domain-specific knowledge. Then five runs were conducted by adding each individual type of domain-specific knowledge. We also conducted a run by adding all types of domain-specific knowledge. Results of these experiments are shown in Table 4.2.2.
We found that any available type of domain-specific knowledge improved the performance in passage retrieval. The biggest improvement comes from the lexical variants, which is consistent with the result reported in [3]. This result also indicates that biologists are likely to use different variants of the same concept according to their own writing preferences and these variants might not be collected in the existing biomedical thesauruses. It also suggests that the biomedical IR systems can benefit from the domain-specific knowledge extracted from the literature by text mining systems. Synonyms provided the second biggest improvement. Hypernyms, hyponyms, and implicitly related concepts provided similar degrees of improvement. The overall performance is an accumulative result of adding different types of domain-specific knowledge and it is better than any individual addition. It is clearly shown that the performance is significantly improved (107% on passage level, 63.1% on aspect level, and 49.6% on document level) when the domain-specific knowledge is appropriately incorporated. Although it is not explicitly shown in Table 4.2.3, different types of domain-specific knowledge affect different subsets of queries. More specifically, each of these types (with the exception of "the lexical variants" which affects a large number of queries) affects only a few queries. But for those affected queries, their improvement is significant. As a consequence, the accumulative improvement is very significant. 4.2.3 Pseudo-feedback and abbreviation correction Using the "Baseline + All" in Table 4.2.2 as a new baseline, the contribution of abbreviation correction and pseudo-feedback is given in Table 4.2.3. There is little improvement by avoiding incorrect matching of abbreviations. The pseudo-feedback contributed about 4.6% improvement in passage retrieval. 4.2.4 Performance compared with best-reported results We compared our result with the results reported in the Genomics track of TREC 2006 [8] on the conditions that 1) systems are automatic systems and 2) passages are extracted from paragraphs. The performance of our system relative to the best reported results is shown in Table 4.2.4 (in TREC 2006, some systems returned the whole paragraphs as passages. As a consequence, excellent retrieval results were obtained on document and aspect levels at the expense of performance on the passage level. We do not include the results of such systems here). Table 4.2.4 Performance compared with best-reported results. The best reported results in the first row of Table 4.2.4 on three levels (passage, aspect, and document) are from different systems. Our result is from a single run on passage retrieval in which it is better than the best reported result by 22.68% in passage retrieval and at the same time, 9.14% better in aspect retrieval, and 1.33% better in document retrieval (Since the average precision of each individual query was not reported, we cannot apply the Wilcoxon signed-rank test to calculate the significance of difference between our performance and the best reported result.) . Table 4.2.1 Basic conceptual IR model vs. term-based model Table 4.2.2 Contribution of different types of domain-specific knowledge Table 4.2.3 Contribution of abbreviation correction and pseudo-feedback A separate experiment has been done using a second testbed, the Ad Hoc Task of TREC Genomics 2005, to evaluate our knowledge-intensive conceptual IR model for document retrieval of biomedical literature. 
The overall performance in terms of MAP is 35.50%, which is about 22.92% above the best reported result [9]. Notice that the performance was only measured on the document level for the Ad Hoc Task of TREC Genomics 2005.
5. RELATED WORKS
Many studies used manually-crafted thesauruses or knowledge databases created by text mining systems to improve retrieval effectiveness, based on either word-statistical retrieval systems or conceptual retrieval systems. [11][1] assessed query expansion using the UMLS Metathesaurus. Based on a word-statistical retrieval system, [11] used definitions and different types of thesaurus relationships for query expansion and a deteriorated performance was reported. [1] expanded queries with phrases and UMLS concepts determined by MetaMap, a program which maps biomedical text to UMLS concepts, and no significant improvement was shown. We used MeSH, Entrez Gene, and other non-thesaurus knowledge resources such as an abbreviation database for query expansion. A critical difference between our work and those in [11][1] is that our retrieval model is based on concepts, not on individual words. The Genomics track in TREC provides a common platform to evaluate methods and techniques proposed by various groups for biomedical information retrieval. As summarized in [8][9][10], many groups utilized domain-specific knowledge to improve retrieval effectiveness. Among these groups, [3] assessed both thesaurus-based knowledge, such as gene information, and non-thesaurus-based knowledge, such as lexical variants of gene symbols, for query expansion. They have shown that query expansion with acronyms and lexical variants of gene symbols produced the biggest improvement, whereas the query expansion with gene information from gene databases deteriorated the performance. [21] used a similar approach for generating lexical variants of gene symbols and reported significant improvements. Our system utilized more types of domain-specific knowledge, including hyponyms, hypernyms and implicitly related concepts. In addition, under the conceptual retrieval framework, we examined more comprehensively the effects of different types of domain-specific knowledge in performance contribution. [20][15] utilized WordNet, a database of English words and their lexical relationships developed by Princeton University, for query expansion in the non-biomedical domain. In their studies, queries were expanded using lexical semantic relations such as synonyms, hypernyms, or hyponyms. Little benefit has been shown in [20]. This has been due to the ambiguity of query terms, which have different meanings in different contexts. When synonyms having multiple meanings are added to the query, a substantial number of irrelevant documents are retrieved. In the biomedical domain, this kind of ambiguity of query terms is relatively less frequent because, although abbreviations are highly ambiguous, general biomedical concepts usually have only one meaning in a thesaurus such as UMLS, whereas a term in WordNet usually has multiple meanings (represented as synsets in WordNet). Besides, we have implemented a post-ranking step to reduce the number of incorrect matches of abbreviations, which will hopefully decrease the negative impact caused by the abbreviation ambiguity.
The retrieval model in [15] emphasized the similarity between a query and a document on the phrase level assuming that phrases are more important than individual words when retrieving documents. Although the assumption is similar, our conceptual model is based on the biomedical concepts, not phrases. [13] presented a good study of the role of knowledge in the document retrieval of clinical medicine. They have shown that appropriate use of semantic knowledge in a conceptual retrieval framework can yield substantial improvements. Although the retrieval model is similar, we made a study in the domain of genomics, in which the problem structure and task knowledge is not as well-defined as in the domain of clinical medicine [18]. Also, our similarity function is very different from that in [13]. In summary, our approach differs from previous works in four important ways: First, we present a case study of conceptual retrieval in the domain of genomics, where many knowledge resources can be used to improve the performance of biomedical IR systems. Second, we have studied more types of domainspecific knowledge than previous researchers and carried out more comprehensive experiments to look into the effects of different types of domain-specific knowledge in performance contribution. Third, although some of the techniques seem similar to previously published ones, they are actually quite different in details. For example, in our pseudo-feedback process, we require that the unit of feedback is a concept and the concept has to be of the same semantic type as a query concept. This is to ensure that our conceptual model of retrieval can be applied. As another example, the way in which implicitly related concepts are extracted in this paper is significantly different from that given in [19]. Finally, our conceptual IR model is actually based on complex concepts because some biomedical meanings, such as biological processes, are represented by multiple simple concepts. 6. CONCLUSION This paper proposed a conceptual approach to utilize domainspecific knowledge in an IR system to improve its effectiveness in retrieving biomedical literature. We specified five different types of domain-specific knowledge (i.e., synonyms, hyponyms, hypernyms, lexical variants, and implicitly related concepts) and examined their effects in performance contribution. We also evaluated other two techniques, pseudo-feedback and abbreviation correction. Experimental results have shown that appropriate use of domain-specific knowledge in a conceptual IR model yields significant improvements (23%) in passage retrieval over the best known results. In our future work, we will explore the use of other existing knowledge resources, such as UMLS and the Wikipedia, and evaluate techniques such as disambiguation of gene symbols for improving retrieval effectiveness. The application of our conceptual IR model in other domains such as clinical medicine will be investigated.
Knowledge-intensive Conceptual Retrieval and Passage Extraction of Biomedical Literature ABSTRACT This paper presents a study of incorporating domain-specific knowledge (i.e., information about concepts and relationships between concepts in a certain domain) in an information retrieval (IR) system to improve its effectiveness in retrieving biomedical literature. The effects of different types of domain-specific knowledge in performance contribution are examined. Based on the TREC platform, we show that appropriate use of domainspecific knowledge in a proposed conceptual retrieval model yields about 23% improvement over the best reported result in passage retrieval in the Genomics Track of TREC 2006. 1. INTRODUCTION Biologists search for literature on a daily basis. For most biologists, PubMed, an online service of U.S. National Library of Medicine (NLM), is the most commonly used tool for searching the biomedical literature. PubMed allows for keyword search by using Boolean operators. For example, if one desires documents on the use of the drug propanolol in the disease hypertension, a typical PubMed query might be "propanolol AND hypertension", which will return all the documents having the two keywords. Keyword search in PubMed is effective if the query is well-crafted by the users using their expertise. However, information needs of biologists, in some cases, are expressed as complex questions [8] [9], which PubMed is not designed to handle. While NLM does maintain an experimental tool for free-text queries [6], it is still based on PubMed keyword search. The Genomics track of the 2006 Text REtrieval Conference (TREC) provides a common platform to assess the methods and techniques proposed by various groups for biomedical information retrieval. The queries were collected from real biologists and they are expressed as complex questions, such as "How do mutations in the Huntingtin gene affect Huntington's disease?" . The document collection contains 162,259 Highwire full-text documents in HTML format. Systems from participating groups are expected to find relevant passages within the full-text documents. A passage is defined as any span of text that does not include the HTML paragraph tag (i.e., <P> or </P>). We approached the problem by utilizing domain-specific knowledge in a conceptual retrieval model. Domain-specific knowledge, in this paper, refers to information about concepts and relationships between concepts in a certain domain. We assume that appropriate use of domain-specific knowledge might improve the effectiveness of retrieval. For example, given a query "What is the role of gene PRNP in the Mad Cow Disease?" , expanding the gene symbol "PRNP" with its synonyms "Prp", "PrPSc", and "prion protein", more relevant documents might be retrieved. PubMed and many other biomedical systems [8] [9] [10] [13] also make use of domain-specific knowledge to improve retrieval effectiveness. Intuitively, retrieval on the level of concepts should outperform "bag-of-words" approaches, since the semantic relationships among words in a concept are utilized. In some recent studies [13] [15], positive results have been reported for this hypothesis. In this paper, concepts are entry terms of the ontology Medical Subject Headings (MeSH), a controlled vocabulary maintained by NLM for indexing biomedical literature, or gene symbols in the Entrez gene database also from NLM. A concept could be a word, such as the gene symbol "PRNP", or a phrase, such as "Mad cow diseases". 
In the conceptual retrieval model presented in this paper, the similarity between a query and a document is measured on both concept and word levels. This paper makes two contributions: retrieving biomedical literature. Based on this approach, our system achieved significant improvement (23%) over the best reported result in passage retrieval in the Genomics track of TREC 2006. 2. We examine the effects of utilizing concepts and of different types of domain-specific knowledge in performance contribution. This paper is organized as follows: problem statement is given in the next section. The techniques are introduced in section 3. In section 4, we present the experimental results. Related works are given in section 5 and finally, we conclude the paper in section 6. 2. PROBLEM STATEMENT 3. TECHNIQUES AND METHODS 3.1 Identifying concepts within a query 3.2 Compiling domain-specific knowledge 3.2.1 Lexical variants Lexical variants of gene symbols Lexical variants of MeSH concepts 3.2.2 Implicitly related concepts 3.3 Conceptual IR model 3.3.1 Basic model Concept similarity Word similarity The model 3.3.2 Incorporating domain-specific knowledge 3.3.3 Pseudo-feedback 3.3.4 Avoid incorrect match of abbreviations 3.4 Passage extraction 4. EXPERIMENTAL RESULTS 4.1 Data sets and evaluation metrics 4.2 Results 4.2.1 Conceptual IR model vs. term-based model 4.2.2 Contribution of different types of knowledge 4.2.3 Pseudo-feedback and abbreviation correction 4.2.4 Performance compared with best-reported results 5. RELATED WORKS Many studies used manually-crafted thesauruses or knowledge databases created by text mining systems to improve retrieval effectiveness based on either word-statistical retrieval systems or conceptual retrieval systems. [11] [1] assessed query expansion using the UMLS Metathesaurus. Based on a word-statistical retrieval system, [11] used definitions and different types of thesaurus relationships for query expansion and a deteriorated performance was reported. [1] expanded queries with phrases and UMLS concepts determined by the MetaMap, a program which maps biomedical text to UMLS concepts, and no significant improvement was shown. We used MeSH, Entrez gene, and other non-thesaurus knowledge resources such as an abbreviation database for query expansion. A critical difference between our work and those in [11] [1] is that our retrieval model is based on concepts, not on individual words. The Genomics track in TREC provides a common platform to evaluate methods and techniques proposed by various groups for biomedical information retrieval. As summarized in [8] [9] [10], many groups utilized domain-specific knowledge to improve retrieval effectiveness. Among these groups, [3] assessed both thesaurus-based knowledge, such as gene information, and non thesaurus-based knowledge, such as lexical variants of gene symbols, for query expansion. They have shown that query expansion with acronyms and lexical variants of gene symbols produced the biggest improvement, whereas, the query expansion with gene information from gene databases deteriorated the performance. [21] used a similar approach for generating lexical variants of gene symbols and reported significant improvements. Our system utilized more types of domain-specific knowledge, including hyponyms, hypernyms and implicitly related concepts. In addition, under the conceptual retrieval framework, we examined more comprehensively the effects of different types of domain-specific knowledge in performance contribution. 
[20] [15] utilized WordNet, a database of English words and their lexical relationships developed by Princeton University, for query expansion in the non-biomedical domain. In their studies, queries were expanded using the lexical semantic relations such as synonyms, hypernyms, or hyponyms. Little benefit has been shown in [20]. This has been due to ambiguity of the query terms which have different meanings in different contexts. When these synonyms having multiple meanings are added to the query, substantial irrelevant documents are retrieved. In the biomedical domain, this kind of ambiguity of query terms is relatively less frequent, because, although the abbreviations are highly ambiguous, general biomedical concepts usually have only one meaning in the thesaurus, such as UMLS, whereas a term in WordNet usually have multiple meanings (represented as synsets in WordNet). Besides, we have implemented a post-ranking step to reduce the number of incorrect matches of abbreviations, which will hopefully decrease the negative impact caused by the abbreviation ambiguity. Besides, we have implemented a postranking step to reduce the number of incorrect matches of abbreviations, which will hopefully decrease the negative impact caused by the abbreviation ambiguity. The retrieval model in [15] emphasized the similarity between a query and a document on the phrase level assuming that phrases are more important than individual words when retrieving documents. Although the assumption is similar, our conceptual model is based on the biomedical concepts, not phrases. [13] presented a good study of the role of knowledge in the document retrieval of clinical medicine. They have shown that appropriate use of semantic knowledge in a conceptual retrieval framework can yield substantial improvements. Although the retrieval model is similar, we made a study in the domain of genomics, in which the problem structure and task knowledge is not as well-defined as in the domain of clinical medicine [18]. Also, our similarity function is very different from that in [13]. In summary, our approach differs from previous works in four important ways: First, we present a case study of conceptual retrieval in the domain of genomics, where many knowledge resources can be used to improve the performance of biomedical IR systems. Second, we have studied more types of domainspecific knowledge than previous researchers and carried out more comprehensive experiments to look into the effects of different types of domain-specific knowledge in performance contribution. Third, although some of the techniques seem similar to previously published ones, they are actually quite different in details. For example, in our pseudo-feedback process, we require that the unit of feedback is a concept and the concept has to be of the same semantic type as a query concept. This is to ensure that our conceptual model of retrieval can be applied. As another example, the way in which implicitly related concepts are extracted in this paper is significantly different from that given in [19]. Finally, our conceptual IR model is actually based on complex concepts because some biomedical meanings, such as biological processes, are represented by multiple simple concepts. 6. CONCLUSION This paper proposed a conceptual approach to utilize domainspecific knowledge in an IR system to improve its effectiveness in retrieving biomedical literature. 
We specified five different types of domain-specific knowledge (i.e., synonyms, hyponyms, hypernyms, lexical variants, and implicitly related concepts) and examined their effects in performance contribution. We also evaluated two other techniques, pseudo-feedback and abbreviation correction. Experimental results have shown that appropriate use of domain-specific knowledge in a conceptual IR model yields significant improvements (23%) in passage retrieval over the best known results. In our future work, we will explore the use of other existing knowledge resources, such as UMLS and Wikipedia, and evaluate techniques such as disambiguation of gene symbols for improving retrieval effectiveness. The application of our conceptual IR model in other domains such as clinical medicine will be investigated.
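The text above describes the conceptual retrieval model only at a high level: query-document similarity is measured on both the concept level and the word level. The exact scoring function is not reproduced in this excerpt, so the following is a minimal illustrative sketch only, assuming a simple weighted combination of concept-level and word-level overlap; the function names, the overlap measure, and the 0.8/0.2 weighting are assumptions, not the paper's formula.

```python
# Illustrative sketch only: a two-level (concept + word) query-document score
# built as a weighted combination of overlaps.  The weighting scheme and all
# names below are assumptions; the paper's actual model is more elaborate.

def overlap(query_units, doc_units):
    """Fraction of query units (concepts or words) that appear in the document."""
    if not query_units:
        return 0.0
    doc_set = set(doc_units)
    return sum(1 for u in query_units if u in doc_set) / len(query_units)

def conceptual_score(query_concepts, query_words, doc_concepts, doc_words,
                     concept_weight=0.8):
    """Concept-level matches dominate; word-level matches refine the score."""
    return (concept_weight * overlap(query_concepts, doc_concepts)
            + (1.0 - concept_weight) * overlap(query_words, doc_words))

# Example with the PRNP / Mad Cow Disease query mentioned in the paper.
q_concepts = ["prnp", "mad cow disease"]
q_words = ["role", "gene", "prnp", "mad", "cow", "disease"]
d_concepts = ["prnp", "mad cow disease", "bovine spongiform encephalopathy"]
d_words = ["prnp", "prion", "mad", "cow", "disease", "cattle"]
print(conceptual_score(q_concepts, q_words, d_concepts, d_words))
```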
Knowledge-intensive Conceptual Retrieval and Passage Extraction of Biomedical Literature ABSTRACT This paper presents a study of incorporating domain-specific knowledge (i.e., information about concepts and relationships between concepts in a certain domain) in an information retrieval (IR) system to improve its effectiveness in retrieving biomedical literature. The effects of different types of domain-specific knowledge in performance contribution are examined. Based on the TREC platform, we show that appropriate use of domainspecific knowledge in a proposed conceptual retrieval model yields about 23% improvement over the best reported result in passage retrieval in the Genomics Track of TREC 2006. 1. INTRODUCTION Biologists search for literature on a daily basis. For most biologists, PubMed, an online service of U.S. National Library of Medicine (NLM), is the most commonly used tool for searching the biomedical literature. PubMed allows for keyword search by using Boolean operators. Keyword search in PubMed is effective if the query is well-crafted by the users using their expertise. While NLM does maintain an experimental tool for free-text queries [6], it is still based on PubMed keyword search. The Genomics track of the 2006 Text REtrieval Conference (TREC) provides a common platform to assess the methods and techniques proposed by various groups for biomedical information retrieval. The queries were collected from real biologists and they are expressed as complex questions, such as "How do mutations in the Huntingtin gene affect Huntington's disease?" . The document collection contains 162,259 Highwire full-text documents in HTML format. Systems from participating groups are expected to find relevant passages within the full-text documents. We approached the problem by utilizing domain-specific knowledge in a conceptual retrieval model. Domain-specific knowledge, in this paper, refers to information about concepts and relationships between concepts in a certain domain. We assume that appropriate use of domain-specific knowledge might improve the effectiveness of retrieval. For example, given a query "What is the role of gene PRNP in the Mad Cow Disease?" PubMed and many other biomedical systems [8] [9] [10] [13] also make use of domain-specific knowledge to improve retrieval effectiveness. Intuitively, retrieval on the level of concepts should outperform "bag-of-words" approaches, since the semantic relationships among words in a concept are utilized. A concept could be a word, such as the gene symbol "PRNP", or a phrase, such as "Mad cow diseases". In the conceptual retrieval model presented in this paper, the similarity between a query and a document is measured on both concept and word levels. This paper makes two contributions: retrieving biomedical literature. Based on this approach, our system achieved significant improvement (23%) over the best reported result in passage retrieval in the Genomics track of TREC 2006. 2. We examine the effects of utilizing concepts and of different types of domain-specific knowledge in performance contribution. This paper is organized as follows: problem statement is given in the next section. The techniques are introduced in section 3. In section 4, we present the experimental results. Related works are given in section 5 and finally, we conclude the paper in section 6. 5. 
RELATED WORKS Many studies used manually-crafted thesauruses or knowledge databases created by text mining systems to improve retrieval effectiveness based on either word-statistical retrieval systems or conceptual retrieval systems. [11] [1] assessed query expansion using the UMLS Metathesaurus. Based on a word-statistical retrieval system, [11] used definitions and different types of thesaurus relationships for query expansion and a deteriorated performance was reported. [1] expanded queries with phrases and UMLS concepts determined by the MetaMap, a program which maps biomedical text to UMLS concepts, and no significant improvement was shown. We used MeSH, Entrez gene, and other non-thesaurus knowledge resources such as an abbreviation database for query expansion. A critical difference between our work and those in [11] [1] is that our retrieval model is based on concepts, not on individual words. The Genomics track in TREC provides a common platform to evaluate methods and techniques proposed by various groups for biomedical information retrieval. As summarized in [8] [9] [10], many groups utilized domain-specific knowledge to improve retrieval effectiveness. Among these groups, [3] assessed both thesaurus-based knowledge, such as gene information, and non thesaurus-based knowledge, such as lexical variants of gene symbols, for query expansion. They have shown that query expansion with acronyms and lexical variants of gene symbols produced the biggest improvement, whereas, the query expansion with gene information from gene databases deteriorated the performance. [21] used a similar approach for generating lexical variants of gene symbols and reported significant improvements. Our system utilized more types of domain-specific knowledge, including hyponyms, hypernyms and implicitly related concepts. In addition, under the conceptual retrieval framework, we examined more comprehensively the effects of different types of domain-specific knowledge in performance contribution. [20] [15] utilized WordNet, a database of English words and their lexical relationships developed by Princeton University, for query expansion in the non-biomedical domain. In their studies, queries were expanded using the lexical semantic relations such as synonyms, hypernyms, or hyponyms. This has been due to ambiguity of the query terms which have different meanings in different contexts. When these synonyms having multiple meanings are added to the query, substantial irrelevant documents are retrieved. The retrieval model in [15] emphasized the similarity between a query and a document on the phrase level assuming that phrases are more important than individual words when retrieving documents. Although the assumption is similar, our conceptual model is based on the biomedical concepts, not phrases. [13] presented a good study of the role of knowledge in the document retrieval of clinical medicine. They have shown that appropriate use of semantic knowledge in a conceptual retrieval framework can yield substantial improvements. Although the retrieval model is similar, we made a study in the domain of genomics, in which the problem structure and task knowledge is not as well-defined as in the domain of clinical medicine [18]. Also, our similarity function is very different from that in [13]. 
In summary, our approach differs from previous works in four important ways: First, we present a case study of conceptual retrieval in the domain of genomics, where many knowledge resources can be used to improve the performance of biomedical IR systems. Second, we have studied more types of domainspecific knowledge than previous researchers and carried out more comprehensive experiments to look into the effects of different types of domain-specific knowledge in performance contribution. For example, in our pseudo-feedback process, we require that the unit of feedback is a concept and the concept has to be of the same semantic type as a query concept. This is to ensure that our conceptual model of retrieval can be applied. As another example, the way in which implicitly related concepts are extracted in this paper is significantly different from that given in [19]. Finally, our conceptual IR model is actually based on complex concepts because some biomedical meanings, such as biological processes, are represented by multiple simple concepts. 6. CONCLUSION This paper proposed a conceptual approach to utilize domainspecific knowledge in an IR system to improve its effectiveness in retrieving biomedical literature. We specified five different types of domain-specific knowledge (i.e., synonyms, hyponyms, hypernyms, lexical variants, and implicitly related concepts) and examined their effects in performance contribution. We also evaluated other two techniques, pseudo-feedback and abbreviation correction. Experimental results have shown that appropriate use of domain-specific knowledge in a conceptual IR model yields significant improvements (23%) in passage retrieval over the best known results. In our future work, we will explore the use of other existing knowledge resources, such as UMLS and the Wikipedia, and evaluate techniques such as disambiguation of gene symbols for improving retrieval effectiveness. The application of our conceptual IR model in other domains such as clinical medicine will be investigated.
H-40
Cross-Lingual Query Suggestion Using Query Logs of Different Languages
Query suggestion aims to suggest relevant queries for a given query, which help users better specify their information needs. Previously, the suggested terms are mostly in the same language of the input query. In this paper, we extend it to cross-lingual query suggestion (CLQS): for a query in one language, we suggest similar or relevant queries in other languages. This is very important to scenarios of cross-language information retrieval (CLIR) and cross-lingual keyword bidding for search engine advertisement. Instead of relying on existing query translation technologies for CLQS, we present an effective means to map the input query of one language to queries of the other language in the query log. Important monolingual and cross-lingual information such as word translation relations and word co-occurrence statistics, etc. are used to estimate the cross-lingual query similarity with a discriminative model. Benchmarks show that the resulting CLQS system significantly outperforms a baseline system based on dictionary-based query translation. Besides, the resulting CLQS is tested with French to English CLIR tasks on TREC collections. The results demonstrate higher effectiveness than the traditional query translation methods.
[ "queri suggest", "queri log", "cross-languag inform retriev", "keyword bid", "search engin advertis", "search engin", "queri translat", "map", "benchmark", "queri expans", "bid term", "monolingu queri suggest", "target languag queri log" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "M", "R", "R", "M" ]
Cross-Lingual Query Suggestion Using Query Logs of Different Languages Wei Gao1* , Cheng Niu2 , Jian-Yun Nie3 , Ming Zhou2 , Jian Hu2 , Kam-Fai Wong1 , Hsiao-Wuen Hon2 1 The Chinese University of Hong Kong, Hong Kong, China {wgao, kfwong}@se. cuhk.edu.hk 2 Microsoft Research Asia, Beijing, China {chengniu, mingzhou, jianh, hon}@microsoft. com 3 Université de Montréal, Montréal, QC, Canada nie@iro.umontreal.ca ABSTRACT Query suggestion aims to suggest relevant queries for a given query, which help users better specify their information needs. Previously, the suggested terms are mostly in the same language of the input query. In this paper, we extend it to cross-lingual query suggestion (CLQS): for a query in one language, we suggest similar or relevant queries in other languages. This is very important to scenarios of cross-language information retrieval (CLIR) and cross-lingual keyword bidding for search engine advertisement. Instead of relying on existing query translation technologies for CLQS, we present an effective means to map the input query of one language to queries of the other language in the query log. Important monolingual and cross-lingual information such as word translation relations and word co-occurrence statistics, etc. are used to estimate the cross-lingual query similarity with a discriminative model. Benchmarks show that the resulting CLQS system significantly outperforms a baseline system based on dictionary-based query translation. Besides, the resulting CLQS is tested with French to English CLIR tasks on TREC collections. The results demonstrate higher effectiveness than the traditional query translation methods. Categories and Subject Descriptors H.3.3 [Information storage and retrieval]: Information Search and Retrieval - Query formulation General Terms Algorithms, Performance, Experimentation, Theory. 1. INTRODUCTION Query suggestion is a functionality to help users of a search engine to better specify their information need, by narrowing down or expanding the scope of the search with synonymous queries and relevant queries, or by suggesting related queries that have been frequently used by other users. Search engines, such as Google, Yahoo!, MSN, Ask Jeeves, all have implemented query suggestion functionality as a valuable addition to their core search method. In addition, the same technology has been leveraged to recommend bidding terms to online advertiser in the pay-forperformance search market [12]. Query suggestion is closely related to query expansion which extends the original query with new search terms to narrow the scope of the search. But different from query expansion, query suggestion aims to suggest full queries that have been formulated by users so that the query integrity and coherence are preserved in the suggested queries. Typical methods for query suggestion exploit query logs and document collections, by assuming that in the same period of time, many users share the same or similar interests, which can be expressed in different manners [12, 14, 26]. By suggesting the related and frequently used formulations, it is hoped that the new query can cover more relevant documents. However, all of the existing studies dealt with monolingual query suggestion and to our knowledge, there is no published study on cross-lingual query suggestion (CLQS). CLQS aims to suggest related queries but in a different language. It has wide applications on World Wide Web: for cross-language search or for suggesting relevant bidding terms in a different language. 
1 CLQS can be approached as a query translation problem, i.e., to suggest the queries that are translations of the original query. Dictionaries, large size of parallel corpora and existing commercial machine translation systems can be used for translation. However, these kinds of approaches usually rely on static knowledge and data. It cannot effectively reflect the quickly shifting interests of Web users. Moreover, there are some problems with translated queries in target language. For instance, the translated terms can be reasonable translations, but they are not popularly used in the target language. For example, the French query aliment biologique is translated into biologic food by Google translation tool2 , yet the correct formulation nowadays should be organic food. Therefore, there exist many mismatch cases between the translated terms and the really used terms in target language. This mismatch makes the suggested terms in the target language ineffective. A natural thinking of solving this mismatch is to map the queries in the source language and the queries in the target language, by using the query log of a search engine. We exploit the fact that the users of search engines in the same period of time have similar interests, and they submit queries on similar topics in different languages. As a result, a query written in a source language likely has an equivalent in a query log in the target language. In particular, if the user intends to perform CLIR, then original query is even more likely to have its correspondent included in the target language query log. Therefore, if a candidate for CLQS appears often in the query log, then it is more likely the appropriate one to be suggested. In this paper, we propose a method of calculating the similarity between source language query and the target language query by exploiting, in addition to the translation information, a wide spectrum of bilingual and monolingual information, such as term co-occurrences, query logs with click-through data, etc.. A discriminative model is used to learn the cross-lingual query similarity based on a set of manually translated queries. The model is trained by optimizing the cross-lingual similarity to best fit the monolingual similarity between one query and the other query``s translation. Besides being benchmarked as an independent module, the resulting CLQS system is tested as a new means of query translation in CLIR task on TREC collections. The results show that this new translation method is more effective than the traditional query translation method. The remainder of this paper is organized as follows: Section 2 introduces the related work; Section 3 describes in detail the discriminative model for estimating cross-lingual query similarity; Section 4 presents a new CLIR approach using cross-lingual query suggestion as a bridge across language boundaries. Section 5 discusses the experiments and benchmarks; finally, the paper is concluded in Section 6. 2. RELATED WORK Most approaches to CLIR perform a query translation followed by a monolingual IR. Typically, queries are translated either using a bilingual dictionary [22], a machine translation software [9] or a parallel corpus [20]. Despite the various types of resources used, out-of-vocabulary (OOV) words and translation disambiguation are the two major bottlenecks for CLIR [20]. In [7, 27], OOV term translations are mined from the Web using a search engine. In [17], bilingual knowledge is acquired based on anchor text analysis. 
In addition, word co-occurrence statistics in the target language have been leveraged for translation disambiguation [3, 10, 11, 19]. (Footnote 2: http://www.google.com/language_tools) Nevertheless, it is arguable that accurate query translation may not be necessary for CLIR. Indeed, in many cases, it is helpful to introduce words even if they are not direct translations of any query word, but are closely related to the meaning of the query. This observation has led to the development of cross-lingual query expansion (CLQE) techniques [2, 16, 18]. [2] reports the enhancement of CLIR by post-translation expansion. [16] develops a cross-lingual relevancy model by leveraging cross-lingual co-occurrence statistics in parallel texts. [18] compares multiple CLQE techniques, including pre-translation expansion and post-translation expansion. However, there is a lack of a unified framework to combine the wide spectrum of resources and recent advances in mining techniques for CLQE. CLQS is different from CLQE in that it aims to suggest full queries that have been formulated by users in another language. As CLQS exploits up-to-date query logs, it is expected that for most user queries, we can find common formulations on these topics in the query log in the target language. Therefore, CLQS also plays the role of adapting the original query formulation to the common formulations of similar topics in the target language. Query logs have been successfully used for monolingual IR [8, 12, 15, 26], especially for monolingual query suggestion [12] and for relating semantically relevant terms for query expansion [8, 15]. In [1], the target language query log has been exploited to help query translation in CLIR. 3. ESTIMATING CROSS-LINGUAL QUERY SIMILARITY A search engine has a query log containing user queries in different languages within a certain period of time. In addition to query terms, click-through information is also recorded. Therefore, we know which documents have been selected by users for each query. Given a query in the source language, our CLQS task is to determine one or several similar queries in the target language from the query log. The key problem with cross-lingual query suggestion is how to learn a similarity measure between two queries in different languages. Although various statistical similarity measures have been studied for monolingual terms [8, 26], most of them are based on term co-occurrence statistics, and can hardly be applied directly in cross-lingual settings. In order to define a similarity measure across languages, one has to use at least one translation tool or resource, so the measure is based on both translation relations and monolingual similarity. In this paper, as our purpose is to provide an up-to-date query similarity measure, it may not be sufficient to use only a static translation resource. Therefore, we also integrate a method to mine possible translations on the Web. This method is particularly useful for dealing with OOV terms. Given a set of resources of different natures, the next question is how to integrate them in a principled manner. In this paper, we propose a discriminative model to learn the appropriate similarity measure. The principle is as follows: we assume that we have a reasonable monolingual query similarity measure. For any training query example for which a translation exists, its similarity measure (with any other query) is transposed to its translation. Therefore, we have the desired cross-language similarity value for this example. 
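To make this training-example construction concrete, the following is a small sketch. It assumes two helpers that are not defined here: sim_ml, a monolingual query similarity function (such as the click-through-based measure of Section 3.2), and suggest_monolingual, a monolingual query suggestion system over the target-language query log.

```python
# Sketch of building training examples for cross-lingual query similarity:
# for each source query with a known translation, the monolingual similarity
# between that translation and a related target-language query is used as the
# desired cross-lingual similarity.  `suggest_monolingual` and `sim_ml` are
# assumed helpers (a monolingual query suggestion system and a monolingual
# similarity measure over the target-language query log).

def build_training_examples(translated_pairs, suggest_monolingual, sim_ml):
    """translated_pairs: list of (source_query, target_translation) pairs."""
    examples = []
    for q_f, t_qf in translated_pairs:
        for q_e in suggest_monolingual(t_qf):   # related target-language queries
            target = sim_ml(t_qf, q_e)          # desired value of sim_CL(q_f, q_e)
            examples.append((q_f, q_e, target))
    return examples
```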
Then we use a discriminative model to learn the cross-language similarity function which best fits these examples. In the following sections, let us first describe the details of the discriminative model for cross-lingual query similarity estimation. Then we introduce all the features (monolingual and cross-lingual information) that we will use in the discriminative model. 3.1 Discriminative Model for Estimating Cross-Lingual Query Similarity In this section, we propose a discriminative model to learn cross-lingual query similarities in a principled manner. The principle is as follows: for a reasonable monolingual query similarity between two queries, a cross-lingual correspondent can be deduced between one query and another query's translation. In other words, for a pair of queries in different languages, their cross-lingual similarity should fit the monolingual similarity between one query and the other query's translation. For example, the similarity between the French query "pages jaunes" (i.e., "yellow page" in English) and the English query "telephone directory" should be equal to the monolingual similarity between the translation of the French query, "yellow page", and "telephone directory". There are many ways to obtain a monolingual similarity measure between terms, e.g., term co-occurrence based mutual information and $\chi^2$. Any of them can be used as the target for the cross-lingual similarity function to fit. In this way, cross-lingual query similarity estimation is formulated as a regression task as follows: given a source language query $q_f$, a target language query $q_e$, and a monolingual query similarity $sim_{ML}$, the corresponding cross-lingual query similarity $sim_{CL}$ is defined as follows:

$sim_{CL}(q_f, q_e) = sim_{ML}(T_{q_f}, q_e)$   (1)

where $T_{q_f}$ is the translation of $q_f$ in the target language. Based on Equation (1), it would be relatively easy to create a training corpus. All it requires is a list of query translations. Then an existing monolingual query suggestion system can be used to automatically produce queries similar to each translation, and thereby create the training corpus for cross-lingual similarity estimation. Another advantage is that it is fairly easy to make use of arbitrary information sources within a discriminative modeling framework to achieve optimal performance. In this paper, a support vector machine (SVM) regression algorithm [25] is used to learn the cross-lingual term similarity function. Given a vector of feature functions $f$ between $q_f$ and $q_e$, $sim_{CL}(q_f, q_e)$ is represented as an inner product between a weight vector and the feature vector in a kernel space as follows:

$sim_{CL}(q_f, q_e) = w \cdot \phi(f(q_f, q_e))$   (2)

where $\phi$ is the mapping from the input feature space onto the kernel space, and $w$ is the weight vector in the kernel space, which will be learned by the SVM regression training. Once the weight vector is learned, Equation (2) can be used to estimate the similarity between queries of different languages. We want to point out that, instead of regression, one could simplify the task to a binary or ordinal classification, in which case CLQS candidates would be categorized according to discontinuous class labels, e.g., relevant and irrelevant, or a series of relevance levels, e.g., strongly relevant, weakly relevant, and irrelevant. In either case, one can resort to discriminative classification approaches, such as an SVM or a maximum entropy model, in a straightforward way. 
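As a concrete illustration of the regression formulation in Equations (1) and (2), the sketch below trains a support vector regressor on the examples built above. It uses scikit-learn's SVR, which wraps the LibSVM library that the paper later names for regression training; extract_features is an assumed helper returning the feature vector f(q_f, q_e) of Section 3.3, and the hyper-parameter values are placeholders rather than the paper's settings.

```python
# Sketch of learning the cross-lingual similarity function of Equation (2)
# with SVM regression.  `extract_features(q_f, q_e)` is an assumed helper
# returning the feature vector of Section 3.3; hyper-parameters are placeholders.

import numpy as np
from sklearn.svm import SVR

def train_clqs_regressor(examples, extract_features):
    """examples: list of (q_f, q_e, target_similarity) triples."""
    X = np.array([extract_features(q_f, q_e) for q_f, q_e, _ in examples])
    y = np.array([target for _, _, target in examples])
    model = SVR(kernel="rbf", C=1.0, epsilon=0.1)
    model.fit(X, y)
    return model

def cross_lingual_similarity(model, extract_features, q_f, q_e):
    """Predicted sim_CL(q_f, q_e) for a new query pair."""
    return float(model.predict([extract_features(q_f, q_e)])[0])
```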
Compared with a classification formulation, however, the regression formalism enables us to fully rank the suggested queries based on the similarity score given by Equation (1). Equations (1) and (2) define a regression model for cross-lingual query similarity estimation. In the following sections, the monolingual query similarity measure (see Section 3.2) and the feature functions used for SVM regression (see Section 3.3) will be presented. 3.2 Monolingual Query Similarity Measure Based on Click-through Information Any monolingual term similarity measure can be used as the regression target. In this paper, we select the monolingual query similarity measure presented in [26], which reports good performance by using search users' click-through information in query logs. The reason for choosing this monolingual similarity is that it is defined in a similar context to ours, i.e., according to a user log that reflects users' intention and behavior. Therefore, we can expect that the cross-language term similarity learned from it can also reflect users' intention and expectation. Following [26], our monolingual query similarity is defined by combining both query content-based similarity and click-through commonality in the query log. First, the content similarity between two queries $p$ and $q$ is defined as follows:

$similarity_{content}(p, q) = \frac{KN(p, q)}{Max(kn(p), kn(q))}$   (3)

where $kn(x)$ is the number of keywords in a query $x$, and $KN(p, q)$ is the number of common keywords in the two queries. Secondly, the click-through-based similarity is defined as follows:

$similarity_{click-through}(p, q) = \frac{RD(p, q)}{Max(rd(p), rd(q))}$   (4)

where $rd(x)$ is the number of clicked URLs for a query $x$, and $RD(p, q)$ is the number of common URLs clicked for the two queries. Finally, the similarity between two queries is a linear combination of the content-based and click-through-based similarities:

$similarity(p, q) = \alpha \cdot similarity_{content}(p, q) + \beta \cdot similarity_{click-through}(p, q)$   (5)

where $\alpha$ and $\beta$ reflect the relative importance of the two similarity measures. In this paper, we set $\alpha = 0.4$ and $\beta = 0.6$, following the practice in [26]. Queries whose similarity with another query is higher than a threshold are regarded as relevant monolingual query suggestions (MLQS) for the latter. In this paper, the threshold is set to 0.9 empirically. 3.3 Features Used for Learning Cross-Lingual Query Similarity Measure This section presents the extraction of candidate relevant queries from the log with the assistance of various monolingual and bilingual resources. Meanwhile, feature functions over the source query and the cross-lingual relevant candidates are defined. Some of the resources used here, such as the bilingual lexicon and parallel corpora, were used for query translation in previous work. But note that we employ them here as an auxiliary means of finding relevant candidates in the log rather than for acquiring accurate translations. 3.3.1 Bilingual Dictionary In this subsection, an in-house bilingual dictionary containing 120,000 unique entries is used to retrieve candidate queries. Since multiple translations may be associated with each source word, co-occurrence-based translation disambiguation is performed [3, 10]. The process is as follows: given an input query $q_f = \{w_{f1}, w_{f2}, \ldots, w_{fn}\}$ in the source language, for each query term $w_{fi}$, a set of unique translations is provided by the bilingual dictionary $D$: $D(w_{fi}) = \{t_{i1}, t_{i2}, \ldots, t_{im_i}\}$. 
Then the cohesion between the translations of two query terms is measured using mutual information, which is computed as follows:

$MI(t_{ij}, t_{kl}) = P(t_{ij}, t_{kl}) \log \frac{P(t_{ij}, t_{kl})}{P(t_{ij})\, P(t_{kl})}$   (6)

where $P(t_{ij}, t_{kl}) = \frac{C(t_{ij}, t_{kl})}{N}$ and $P(t_{ij}) = \frac{C(t_{ij})}{N}$. Here $C(x, y)$ is the number of queries in the log containing both $x$ and $y$, $C(x)$ is the number of queries containing term $x$, and $N$ is the total number of queries in the log. Based on the term-term cohesion defined in Equation (6), all the possible query translations are ranked using the summation of the term-term cohesion, $S_{dict}(T_{q_f}) = \sum_{i,k,\, i \neq k} MI(t_{ij}, t_{kl})$. The set of top-4 query translations is denoted as $TS(q_f)$. For each possible query translation $T \in TS(q_f)$, we retrieve all the queries containing the same keywords as $T$ from the target language log. The retrieved queries are candidate target queries, and are assigned $S_{dict}(T)$ as the value of the feature Dictionary-based Translation Score. 3.3.2 Parallel Corpora Parallel corpora are precious resources for bilingual knowledge acquisition. Different from the bilingual dictionary, the bilingual knowledge learned from parallel corpora assigns a probability to each translation candidate, which is useful for acquiring dominant query translations. In this paper, the Europarl corpus (a set of parallel French and English texts from the proceedings of the European Parliament) is used. The corpus is first sentence-aligned. Then word alignments are derived by training an IBM translation model 1 [4] using GIZA++ [21]. The learned bilingual knowledge is used to extract candidate queries from the query log. The process is as follows: given a pair of queries, $q_f$ in the source language and $q_e$ in the target language, the Bi-Directional Translation Score is defined as follows:

$S_{IBM1}(q_f, q_e) = p_{IBM1}(q_e \mid q_f)\, p_{IBM1}(q_f \mid q_e)$   (7)

where $p_{IBM1}(y \mid x)$ is the word sequence translation probability given by IBM model 1, which has the following form:

$p_{IBM1}(y \mid x) = \frac{1}{(|x| + 1)^{|y|}} \prod_{j=1}^{|y|} \sum_{i=0}^{|x|} p(y_j \mid x_i)$   (8)

where $p(y_j \mid x_i)$ is the word-to-word translation probability derived from the word-aligned corpora. The reason for using the bidirectional translation probability is to deal with the fact that common words can be considered as possible translations of many words. By using bidirectional translation, we test whether the translation words can be translated back to the source words. This helps to focus the translation probability on the most specific translation candidates. Now, given an input query $q_f$, the top 10 queries $\{q_e\}$ with the highest bidirectional translation scores with $q_f$ are retrieved from the query log, and $S_{IBM1}(q_f, q_e)$ in Equation (7) is assigned as the value of the feature Bi-Directional Translation Score. 3.3.3 Online Mining for Related Queries OOV word translation is a major knowledge bottleneck for query translation and CLIR. To overcome this knowledge bottleneck, web mining has been exploited in [7, 27] to acquire English-Chinese term translations, based on the observation that Chinese terms may co-occur with their English translations in the same web page. In this section, this web mining approach is adapted to acquire not only translations but also semantically related queries in the target language. It is assumed that if a query in the target language co-occurs with the source query in many web pages, they are probably semantically related. 
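Before turning to the web-mining feature itself, the dictionary-based disambiguation of Section 3.3.1 can be made concrete with the short sketch below, which scores candidate query translations by the summed pairwise mutual information of Equation (6). The helpers log_count and log_cocount (query-log counts for a single term and for a term pair) are assumptions, and the analogous bi-directional IBM-1 score of Equations (7) and (8) is not repeated here.

```python
# Sketch of dictionary-based translation disambiguation (Section 3.3.1):
# candidate translations of a query are ranked by the summed pairwise mutual
# information of Equation (6), estimated from the target-language query log.
# `log_count(t)` and `log_cocount(t1, t2)` are assumed helpers returning
# query-log counts; N is the total number of queries in the log.

import math
from itertools import product

def mutual_information(t1, t2, log_count, log_cocount, N):
    p12 = log_cocount(t1, t2) / N
    if p12 == 0.0:
        return 0.0
    p1, p2 = log_count(t1) / N, log_count(t2) / N
    return p12 * math.log(p12 / (p1 * p2))

def best_query_translations(per_term_candidates, log_count, log_cocount, N, top_k=4):
    """per_term_candidates: one list of dictionary translations per query term."""
    scored = []
    for combo in product(*per_term_candidates):     # one translation per term
        s_dict = sum(mutual_information(combo[i], combo[k], log_count, log_cocount, N)
                     for i in range(len(combo)) for k in range(len(combo)) if i != k)
        scored.append((s_dict, combo))
    scored.sort(key=lambda x: x[0], reverse=True)
    return scored[:top_k]     # top-4 translations with their S_dict scores
```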
Based on this co-occurrence assumption, a simple method is to send the source query to a search engine (Google in our case) to retrieve Web pages in the target language, in order to find related queries in that language. For instance, by sending the French query "pages jaunes" to search for English pages, English snippets containing the keywords "yellow pages" or "telephone directory" will be returned. However, this simple approach may introduce a significant amount of noise due to non-relevant returns from the search engine. In order to improve the relevancy of the bilingual snippets, we extend the simple approach with the following query modification: the original query is searched together with the dictionary-based translations of its keywords, which are unified by the $\wedge$ (AND) and $\vee$ (OR) operators into a single Boolean query. For example, for a given query $q = abc$, where the sets of translation entries in the dictionary for $a$, $b$ and $c$ are $\{a_1, a_2, a_3\}$, $\{b_1, b_2\}$ and $\{c_1\}$ respectively, we issue $q \wedge (a_1 \vee a_2 \vee a_3) \wedge (b_1 \vee b_2) \wedge c_1$ as one web query. From the returned top 700 snippets, the most frequent 10 target queries are identified, and are associated with the feature Frequency in the Snippets. Furthermore, we use the Co-Occurrence Double-Check (CODC) measure to weight the association between the source and target queries. The CODC measure was proposed in [6] as an association measure based on snippet analysis, named the Web Search with Double Checking (WSDC) model. In the WSDC model, two objects $a$ and $b$ are considered to have an association if $b$ can be found by using $a$ as query (forward process), and $a$ can be found by using $b$ as query (backward process) by web search. The forward process counts the frequency of $b$ in the top N snippets of query $a$, denoted as $freq(b@a)$. Similarly, the backward process counts the frequency of $a$ in the top N snippets of query $b$, denoted as $freq(a@b)$. Then the CODC association score is defined as follows:

$S_{CODC}(q_f, q_e) = \begin{cases} 0, & \text{if } freq(q_e@q_f) \times freq(q_f@q_e) = 0 \\ e^{\left[\log\left(\frac{freq(q_e@q_f)}{freq(q_f)} \times \frac{freq(q_f@q_e)}{freq(q_e)}\right)\right]^{\alpha}}, & \text{otherwise} \end{cases}$   (9)

CODC measures the association of two terms in the range between 0 and 1: at the two extremes, $q_e$ and $q_f$ have no association when $freq(q_e@q_f) = 0$ or $freq(q_f@q_e) = 0$, and have the strongest association when $freq(q_e@q_f) = freq(q_f)$ and $freq(q_f@q_e) = freq(q_e)$. In our experiment, $\alpha$ is set to 0.15, following the practice in [6]. Any query $q_e$ mined from the Web will be associated with a feature CODC Measure, with $S_{CODC}(q_f, q_e)$ as its value. 3.3.4 Monolingual Query Suggestion For all the candidate queries $Q_0$ retrieved using the dictionary (see Section 3.3.1), parallel data (see Section 3.3.2) and web mining (see Section 3.3.3), the monolingual query suggestion system (described in Section 3.2) is called to produce more related queries in the target language. For each target query $q_e$, its monolingual source query $SQ_{ML}(q_e)$ is defined as the query in $Q_0$ with the highest monolingual similarity with $q_e$, i.e.,

$SQ_{ML}(q_e) = \arg\max_{q' \in Q_0} sim_{ML}(q_e, q')$   (10)

Then the monolingual similarity between $q_e$ and $SQ_{ML}(q_e)$ is used as the value of $q_e$'s Monolingual Query Suggestion feature. For any target query $q \in Q_0$, its Monolingual Query Suggestion feature is set to 1. For any query $q_e \notin Q_0$, its values of Dictionary-based Translation Score, Bi-Directional Translation Score, Frequency in the Snippets, and CODC Measure are set to be equal to the feature values of $SQ_{ML}(q_e)$. 
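The CODC feature of Equation (9) above can be sketched as follows. Here freq_at(a, b) stands for freq(a@b), the number of occurrences of query a in the top-N snippets returned when b is the web query, and freq(q) is the frequency of q itself; both are assumed helpers backed by a search engine. Equation (9) raises the logarithm of the double-check ratio to the power alpha; since that logarithm is never positive, the sketch applies alpha to its magnitude so that the score stays within (0, 1] and reaches 1 only at the strongest association, matching the behaviour described in the text. This is one practical reading of the formula, not a verbatim transcription.

```python
# Sketch of the CODC association measure of Equation (9), with alpha = 0.15
# as in the paper.  `freq` and `freq_at` are assumed search-engine-backed
# helpers; the exponent alpha is applied to the magnitude of the log-ratio so
# the score stays in (0, 1], which is one reading of the published formula.

import math

def codc_score(q_f, q_e, freq, freq_at, alpha=0.15):
    fwd = freq_at(q_e, q_f)            # freq(q_e @ q_f), forward check
    bwd = freq_at(q_f, q_e)            # freq(q_f @ q_e), backward check
    if fwd == 0 or bwd == 0 or freq(q_f) == 0 or freq(q_e) == 0:
        return 0.0                     # no association
    ratio = min((fwd / freq(q_f)) * (bwd / freq(q_e)), 1.0)   # clamp to (0, 1]
    return math.exp(-((-math.log(ratio)) ** alpha))
```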
3.4 Estimating Cross-lingual Query Similarity In summary, four categories of features are used to learn the cross-lingual query similarity. The SVM regression algorithm [25] is used to learn the weights in Equation (2). In this paper, the LibSVM toolkit [5] is used for the regression training. In the prediction stage, the candidate queries are ranked using the cross-lingual query similarity score computed with Equation (2), i.e., $sim_{CL}(q_f, q_e) = w \cdot \phi(f(q_f, q_e))$, and the queries with a similarity score lower than a threshold are regarded as non-relevant. The threshold is learned using a development data set by fitting the output of MLQS. 4. CLIR BASED ON CROSS-LINGUAL QUERY SUGGESTION In Section 3, we presented a discriminative model for cross-lingual query suggestion. However, objectively benchmarking a query suggestion system is not a trivial task. In this paper, we propose to use CLQS as an alternative to query translation, and test its effectiveness in CLIR tasks. Good CLIR performance then reflects the high quality of the suggested queries. Given a source query $q_f$, a set of relevant queries $\{q_e\}$ in the target language is recommended using the cross-lingual query suggestion system. Then a monolingual IR system based on the BM25 model [23] is called using each $q \in \{q_e\}$ as a query to retrieve documents. The retrieved documents are then re-ranked based on the sum of the BM25 scores associated with each monolingual retrieval. 5. PERFORMANCE EVALUATION In this section, we benchmark the cross-lingual query suggestion system, comparing its performance with monolingual query suggestion, studying the contribution of various information sources, and testing its effectiveness when used in CLIR tasks. 5.1 Data Resources In our experiments, French and English are selected as the source and target languages, respectively. This selection is due to the fact that large-scale query logs are readily available for these two languages. A one-month English query log (containing 7 million unique English queries with occurrence frequency greater than 5) of the MSN search engine is used as the target language log, and a monolingual query suggestion system is built based on it. In addition, 5,000 French queries are selected randomly from a French query log (containing around 3 million queries), and are manually translated into English by professional French-English translators. Among the 5,000 French queries, 4,171 queries have their translations in the English query log, and are used for CLQS training and testing. Furthermore, among the 4,171 French queries, 70% are used for cross-lingual query similarity training, 10% are used as the development data to determine the relevancy threshold, and 20% are used for testing. To retrieve the cross-lingual related queries, an in-house French-English bilingual lexicon (containing 120,000 unique entries) and the Europarl corpus are used. Besides benchmarking CLQS as an independent system, CLQS is also tested as a query translation system for CLIR tasks. Based on the observation that CLIR performance heavily relies on the quality of the suggested queries, this benchmark measures the quality of CLQS in terms of its effectiveness in helping CLIR. To perform this benchmark, we use the documents of the TREC 6 CLIR data (AP88-90 newswire, 750MB) with the officially provided 25 short French-English query pairs (CL1-CL25). 
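Before moving to the evaluation on these data sets, the prediction stage of Section 3.4 can be summarized as a short sketch: every candidate gathered by the features of Section 3.3 is scored with the trained regressor, candidates below the learned relevancy threshold are dropped, and the rest are ranked by score. The model and extract_features objects are the assumed ones from the earlier sketches.

```python
# Sketch of the prediction stage of Section 3.4: candidates gathered by the
# features of Section 3.3 are scored with the trained regressor, filtered by
# the relevancy threshold tuned on the development set, and ranked.

def suggest_cross_lingual(q_f, candidates, model, extract_features, threshold):
    """candidates: target-language queries retrieved for q_f in Section 3.3."""
    scored = [(q_e, float(model.predict([extract_features(q_f, q_e)])[0]))
              for q_e in candidates]
    relevant = [(q_e, s) for q_e, s in scored if s >= threshold]
    return sorted(relevant, key=lambda x: x[1], reverse=True)
```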
The TREC 6 data set was selected because the average query length is 3.3 words, which matches the web query logs we use to train CLQS. 5.2 Performance of Cross-lingual Query Suggestion Mean-square error (MSE) is used to measure the regression error, and it is defined as follows:

$MSE = \frac{1}{l} \sum_i \left( sim_{CL}(q_{fi}, q_{ei}) - sim_{ML}(T_{q_{fi}}, q_{ei}) \right)^2$

where $l$ is the total number of cross-lingual query pairs in the testing data. As described in Section 3.4, a relevancy threshold is learned using the development data, and only suggestions with a similarity value above the threshold are regarded as truly relevant to the input query. In this way, CLQS can also be benchmarked as a classification task using precision (P) and recall (R), which are defined as follows:

$P = \frac{|S_{CLQS} \cap S_{MLQS}|}{|S_{CLQS}|}, \qquad R = \frac{|S_{CLQS} \cap S_{MLQS}|}{|S_{MLQS}|}$

where $S_{CLQS}$ is the set of relevant queries suggested by CLQS, and $S_{MLQS}$ is the set of relevant queries suggested by MLQS (see Section 3.2). The benchmarking results with various feature configurations are shown in Table 1.

Table 1. CLQS performance with different feature settings (DD: dictionary only; DD+PC: dictionary and parallel corpora; DD+PC+Web: dictionary, parallel corpora, and web mining; DD+PC+Web+MLQS: dictionary, parallel corpora, web mining and monolingual query suggestion)

Features        | Regression (MSE) | Classification (P) | Classification (R)
DD              | 0.274            | 0.723              | 0.098
DD+PC           | 0.224            | 0.713              | 0.125
DD+PC+Web       | 0.115            | 0.808              | 0.192
DD+PC+Web+MLQS  | 0.174            | 0.796              | 0.421

Table 1 reports the performance comparison with various feature settings. The baseline system (DD) uses a conventional query translation approach, i.e., a bilingual dictionary with co-occurrence-based translation disambiguation. The baseline system covers less than 10% of the suggestions made by MLQS. Using additional features obviously enables CLQS to generate more relevant queries. The most significant improvement in recall is achieved by exploiting MLQS. The final CLQS system is able to generate 42% of the queries suggested by MLQS. Among all the feature combinations, there is no significant change in precision. This indicates that our methods can improve recall by effectively leveraging various information sources without losing the accuracy of the suggestions. Besides benchmarking CLQS by comparing its output with the MLQS output, 200 French queries are randomly selected from the French query log. These queries are double-checked to make sure that they are not in the CLQS training corpus. Then the CLQS system is used to suggest relevant English queries for them. On average, 8.7 relevant English queries are suggested for each French query. The resulting 1,740 suggested English queries are then manually checked by two professional English/French translators with cross-validation. Among the 1,740 suggested queries, 1,407 queries are recognized as relevant to the original ones, hence the accuracy is 80.9%. Figure 1 shows an example of CLQS for the French query "terrorisme international" ("international terrorism" in English). 5.3 CLIR Performance In this section, CLQS is tested on French-to-English CLIR tasks. We conduct CLIR experiments using the TREC 6 CLIR dataset described in Section 5.1. The CLIR is performed using a query translation system followed by a BM25-based [23] monolingual IR module. 
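The CLQS-based retrieval procedure of Section 4 reduces to a simple score aggregation, sketched below; bm25_search(query, k) is an assumed helper returning the BM25 scores of the top-k documents for one monolingual query.

```python
# Sketch of CLQS-based CLIR (Section 4): run a monolingual BM25 retrieval for
# each suggested English query and re-rank documents by the sum of their BM25
# scores across these retrievals.  `bm25_search(query, k)` is an assumed
# helper returning a {doc_id: bm25_score} mapping.

from collections import defaultdict

def clqs_clir(suggested_queries, bm25_search, k=1000):
    combined = defaultdict(float)
    for q_e in suggested_queries:
        for doc_id, score in bm25_search(q_e, k).items():
            combined[doc_id] += score          # sum of BM25 scores
    return sorted(combined.items(), key=lambda x: x[1], reverse=True)
```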
The following three systems have been used to perform query translation: (1) CLQS: our CLQS system; (2) MT: the Google French-to-English machine translation system; (3) DT: a dictionary-based query translation system using co-occurrence statistics for translation disambiguation (the disambiguation algorithm is presented in Section 3.3.1). In addition, the monolingual IR performance is reported as a reference. The average precision of the four IR systems is reported in Table 2, and the 11-point precision-recall curves are shown in Figure 2.

Table 2. Average precision of CLIR on the TREC 6 dataset (Monolingual: monolingual IR system; MT: CLIR based on machine translation; DT: CLIR based on dictionary translation; CLQS: CLQS-based CLIR)

IR System    | Average Precision | % of Monolingual IR
Monolingual  | 0.266             | 100%
MT           | 0.217             | 81.6%
DT           | 0.186             | 69.9%
CLQS         | 0.233             | 87.6%

Figure 1. An example of CLQS for the French query "terrorisme international": international terrorism (0.991); what is terrorism (0.943); counter terrorism (0.920); terrorist (0.911); terrorist attacks (0.898); international terrorist (0.853); world terrorism (0.845); global terrorism (0.833); transnational terrorism (0.821); human rights (0.811); terrorist groups (0.777); patterns of global terrorism (0.762); september 11 (0.734)

[Figure 2: 11-point precision-recall curves on the TREC 6 CLIR data set for the Monolingual, MT, DT, and CLQS systems; plot data omitted.]

The benchmark shows that using CLQS as a query translation tool outperforms CLIR based on machine translation by 7.4%, outperforms CLIR based on dictionary translation by 25.2%, and achieves 87.6% of the monolingual IR performance. The effectiveness of CLQS lies in its ability to suggest closely related queries in addition to accurate translations. For example, for query CL14, "terrorisme international" (international terrorism), although the machine translation tool translates the query correctly, the CLQS system still achieves a higher score by recommending many additional related terms such as "global terrorism", "world terrorism", etc. (as shown in Figure 1). Another example is the CL6 query "La pollution causée par l'automobile" (air pollution due to automobiles). The MT tool provides the translation "the pollution caused by the car", while the CLQS system enumerates the possible synonyms of "car" and suggests the queries "car pollution", "auto pollution", and "automobile pollution". Besides, other related queries such as "global warming" are also suggested. For query CL12, "La culture écologique" (organic farming), the MT tool fails to generate the correct translation. Although the correct translation is not in our French-English dictionary either, the CLQS system generates "organic farm" as a relevant query thanks to successful web mining. The above experiments demonstrate the effectiveness of using CLQS to suggest relevant queries for CLIR enhancement. A related line of research is to perform query expansion to enhance CLIR [2, 18], so it is interesting to compare the CLQS approach with conventional query expansion approaches. Following [18], post-translation expansion is performed based on pseudo-relevance feedback (PRF) techniques. We first perform CLIR in the same way as before. Then we use the traditional PRF algorithm described in [24] to select expansion terms. In our experiments, the top 10 terms are selected to expand the original query, and the new query is used to search the collection a second time. The new CLIR performance in terms of average precision is shown in Table 3. 
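For reference, the post-translation expansion step can be sketched as a generic pseudo-relevance-feedback routine: collect terms from the top-ranked documents of the first pass and append the ten most frequent new ones to the query. This simplified stand-in is not the exact term-selection formula of [24].

```python
# Generic pseudo-relevance-feedback sketch for post-translation expansion:
# the 10 most frequent terms from the top-ranked documents (excluding terms
# already in the query) are appended to the query.  This is a simplified
# stand-in, not the relevance-weighting formula of [24].

from collections import Counter

def expand_query(query_terms, top_docs, num_terms=10):
    """top_docs: list of token lists for the top-ranked documents of pass one."""
    counts = Counter(tok for doc in top_docs for tok in doc)
    for t in query_terms:
        counts.pop(t, None)                    # keep only new expansion terms
    expansion = [t for t, _ in counts.most_common(num_terms)]
    return list(query_terms) + expansion
```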
The 11-point P-R curves are drawn in Figure 3. Even when enhanced by pseudo-relevance feedback, CLIR using either machine translation or dictionary-based query translation still does not perform as well as the CLQS-based approach. A statistical t-test [13] is conducted to determine whether CLQS-based CLIR performs significantly better. Pair-wise p-values are shown in Table 4. Clearly, CLQS significantly outperforms MT and DT without PRF as well as DT+PRF, but its superiority over MT+PRF is not significant. However, when combined with PRF, CLQS significantly outperforms all the other methods. This indicates the higher effectiveness of CLQS in related-term identification by leveraging a wide spectrum of resources. Furthermore, post-translation expansion is capable of improving CLQS-based CLIR. This is due to the fact that CLQS and pseudo-relevance feedback leverage different categories of resources, so the two approaches are complementary.

Table 3. Comparison of average precision (AP) on TREC 6 without and with post-translation expansion. Percentages are relative to the monolingual IR performance.

IR System    | AP without PRF | AP with PRF
Monolingual  | 0.266 (100%)   | 0.288 (100%)
MT           | 0.217 (81.6%)  | 0.222 (77.1%)
DT           | 0.186 (69.9%)  | 0.220 (76.4%)
CLQS         | 0.233 (87.6%)  | 0.259 (89.9%)

[Figure 3: 11-point precision-recall curves on the TREC 6 CLIR data set with pseudo-relevance feedback, for the Monolingual, MT, DT, and CLQS systems; plot data omitted.]

Table 4. Results of the pair-wise significance t-test (a p-value < 0.05 is considered statistically significant).

           | MT     | DT       | MT+PRF | DT+PRF
CLQS       | 0.0298 | 3.84e-05 | 0.1472 | 0.0282
CLQS+PRF   | 0.0026 | 2.63e-05 | 0.0094 | 0.0016

6. CONCLUSIONS In this paper, we proposed a new approach to cross-lingual query suggestion by mining relevant queries in different languages from query logs. The key to this problem is to learn a cross-lingual query similarity measure with a discriminative model exploiting multiple monolingual and bilingual resources. The model is trained based on the principle that the cross-lingual similarity should best fit the monolingual similarity between one query and the other query's translation. The baseline CLQS system applies a typical query translation approach, using a bilingual dictionary with co-occurrence-based translation disambiguation. This approach covers only 10% of the relevant queries suggested by an MLQS system (when the exact translation of the original query is given). By leveraging additional resources such as parallel corpora, web mining and log-based monolingual query expansion, the final system is able to cover 42% of the relevant queries suggested by an MLQS system, with precision as high as 79.6%. To further test the quality of the suggested queries, the CLQS system is used as a query translation system in CLIR tasks. Benchmarked on the TREC 6 French-to-English CLIR task, CLQS demonstrates higher effectiveness than the traditional query translation methods using either a bilingual dictionary or a commercial machine translation tool. The improvement on the TREC French-to-English CLIR task obtained by using CLQS demonstrates the high quality of the suggested queries. This also shows the strong correspondence between the input French queries and the English queries in the log. 
In the future, we will build CLQS system between languages which may be more loosely correlated, e.g., English and Chinese, and study the CLQS performance change due to the less strong correspondence among queries in such languages. 7. REFERENCES [1] Ambati, V. and Rohini., U. Using Monolingual Clickthrough Data to Build Cross-lingual Search Systems. In Proceedings of New Directions in Multilingual Information Access Workshop of SIGIR 2006. [2] Ballestors, L. A. and Croft, W. B. Phrasal Translation and Query Expansion Techniques for Cross-Language Information Retrieval. In Proc. SIGIR 1997, pp. 84-91. [3] Ballestors, L. A. and Croft, W. B. Resolving Ambiguity for Cross-Language Retrieval. In Proc. SIGIR 1998, pp. 64-71. [4] Brown, P. F., Pietra, D. S. A., Pietra, D. V. J., and Mercer, R. L. The Mathematics of Statistical Machine Translation: Parameter Estimation. Computational Linguistics, 19(2):263311, 1993. [5] Chang, C. C. and Lin, C. LIBSVM: a Library for Support Vector Machines (Version 2.3). 2001. http://citeseer.ist.psu.edu/chang01libsvm.html [6] Chen, H.-H., Lin, M.-S., and Wei, Y.-C. Novel Association Measures Using Web Search with Double Checking. In Proc. COLING/ACL 2006, pp. 1009-1016. [7] Cheng, P.-J., Teng, J.-W., Chen, R.-C., Wang, J.-H., Lu, W.H., and Chien, L.-F. Translating Unknown Queries with Web Corpora for Cross-Language Information Retrieval. In Proc. SIGIR 2004, pp. 146-153. [8] Cui, H., Wen, J. R., Nie, J.-Y., and Ma, W. Y. Query Expansion by Mining User Logs. IEEE Trans. on Knowledge and Data Engineering, 15(4):829-839, 2003. [9] Fujii A. and Ishikawa, T. Applying Machine Translation to Two-Stage Cross-Language Information Retrieval. In Proceedings of 4th Conference of the Association for Machine Translation in the Americas, pp. 13-24, 2000. [10] Gao, J. F., Nie, J.-Y., Xun, E., Zhang, J., Zhou, M., and Huang, C. Improving query translation for CLIR using statistical Models. In Proc. SIGIR 2001, pp. 96-104. [11] Gao, J. F., Nie, J.-Y., He, H., Chen, W., and Zhou, M. Resolving Query Translation Ambiguity using a Decaying Co-occurrence Model and Syntactic Dependence Relations. In Proc. SIGIR 2002, pp. 183-190. [12] Gleich, D., and Zhukov, L. SVD Subspace Projections for Term Suggestion Ranking and Clustering. In Technical Report, Yahoo! Research Labs, 2004. [13] Hull, D. Using Statistical Testing in the Evaluation of Retrieval Experiments. In Proc. SIGIR 1993, pp. 329-338. [14] Jeon, J., Croft, W. B., and Lee, J. Finding Similar Questions in Large Question and Answer Archives. In Proc. CIKM 2005, pp. 84-90. [15] Joachims, T. Optimizing Search Engines Using Clickthrough Data. In Proc. SIGKDD 2002, pp. 133-142. [16] Lavrenko, V., Choquette, M., and Croft, W. B. Cross-Lingual Relevance Models. In Proc. SIGIR 2002, pp. 175-182. [17] Lu, W.-H., Chien, L.-F., and Lee, H.-J. Anchor Text Mining for Translation Extraction of Query Terms. In Proc. SIGIR 2001, pp. 388-389. [18] McNamee, P. and Mayfield, J. Comparing Cross-Language Query Expansion Techniques by Degrading Translation Resources. In Proc. SIGIR 2002, pp. 159-166. [19] Monz, C. and Dorr, B. J. Iterative Translation Disambiguation for Cross-Language Information Retrieval. In Proc. SIGIR 2005, pp. 520-527. [20] Nie, J.-Y., Simard, M., Isabelle, P., and Durand, R. CrossLanguage Information Retrieval based on Parallel Text and Automatic Mining of Parallel Text from the Web. In Proc. SIGIR 1999, pp. 74-81. [21] Och, F. J. and Ney, H. A Systematic Comparison of Various Statistical Alignment Models. 
Computational Linguistics, 29(1):19-51, 2003. [22] Pirkola, A., Hedlund, T., Keskustalo, H., and Järvelin, K. Dictionary-Based Cross-Language Information Retrieval: Problems, Methods, and Research Findings. Information Retrieval, 4(3/4):209-230, 2001. [23] Robertson, S. E., Walker, S., Hancock-Beaulieu, M. M., and Gatford, M. OKAPI at TREC-3. In Proc. TREC-3, pp. 200-225, 1995. [24] Robertson, S. E. and Jones, K. S. Relevance Weighting of Search Terms. Journal of the American Society for Information Science, 27(3):129-146, 1976. [25] Smola, A. J. and Schölkopf, B. A Tutorial on Support Vector Regression. Statistics and Computing, 14(3):199-222, 2004. [26] Wen, J. R., Nie, J.-Y., and Zhang, H. J. Query Clustering Using User Logs. ACM Trans. Information Systems, 20(1):59-81, 2002. [27] Zhang, Y. and Vines, P. Using the Web for Automated Translation Extraction in Cross-Language Information Retrieval. In Proc. SIGIR 2004, pp. 162-169.
Cross-Lingual Query Suggestion Using Query Logs of Different Languages ABSTRACT Query suggestion aims to suggest relevant queries for a given query, which help users better specify their information needs. Previously, the suggested terms are mostly in the same language of the input query. In this paper, we extend it to cross-lingual query suggestion (CLQS): for a query in one language, we suggest similar or relevant queries in other languages. This is very important to scenarios of cross-language information retrieval (CLIR) and cross-lingual keyword bidding for search engine advertisement. Instead of relying on existing query translation technologies for CLQS, we present an effective means to map the input query of one language to queries of the other language in the query log. Important monolingual and cross-lingual information such as word translation relations and word co-occurrence statistics, etc. are used to estimate the cross-lingual query similarity with a discriminative model. Benchmarks show that the resulting CLQS system significantly outperforms a baseline system based on dictionary-based query translation. Besides, the resulting CLQS is tested with French to English CLIR tasks on TREC collections. The results demonstrate higher effectiveness than the traditional query translation methods. 1. INTRODUCTION Query suggestion is a functionality to help users of a search engine to better specify their information need, by narrowing down or expanding the scope of the search with synonymous queries and relevant queries, or by suggesting related queries that have been frequently used by other users. Search engines, such as Google, Yahoo!, MSN, Ask Jeeves, all have implemented query suggestion functionality as a valuable addition to their core search method. In addition, the same technology has been leveraged to recommend bidding terms to online advertiser in the pay-forperformance search market [12]. Query suggestion is closely related to query expansion which extends the original query with new search terms to narrow the scope of the search. But different from query expansion, query suggestion aims to suggest full queries that have been formulated by users so that the query integrity and coherence are preserved in the suggested queries. Typical methods for query suggestion exploit query logs and document collections, by assuming that in the same period of time, many users share the same or similar interests, which can be expressed in different manners [12, 14, 26]. By suggesting the related and frequently used formulations, it is hoped that the new query can cover more relevant documents. However, all of the existing studies dealt with monolingual query suggestion and to our knowledge, there is no published study on cross-lingual query suggestion (CLQS). CLQS aims to suggest related queries but in a different language. It has wide applications on World Wide Web: for cross-language search or for suggesting relevant bidding terms in a different language. CLQS can be approached as a query translation problem, i.e., to suggest the queries that are translations of the original query. Dictionaries, large size of parallel corpora and existing commercial machine translation systems can be used for translation. However, these kinds of approaches usually rely on static knowledge and data. It cannot effectively reflect the quickly shifting interests of Web users. Moreover, there are some problems with translated queries in target language. 
For instance, the translated terms can be reasonable translations, but they are not the terms popularly used in the target language. For example, the French query "aliment biologique" is translated into "biologic food" by the Google translation tool, yet the correct formulation nowadays should be "organic food". Therefore, there exist many mismatch cases between the translated terms and the terms really used in the target language. This mismatch makes the suggested terms in the target language ineffective. A natural way to solve this mismatch is to map the queries in the source language to the queries in the target language by using the query log of a search engine. We exploit the fact that the users of search engines in the same period of time have similar interests, and they submit queries on similar topics in different languages. As a result, a query written in a source language likely has an equivalent in a query log in the target language. In particular, if the user intends to perform CLIR, then the original query is even more likely to have its correspondent included in the target language query log. Therefore, if a candidate for CLQS appears often in the query log, then it is more likely the appropriate one to be suggested. In this paper, we propose a method of calculating the similarity between a source language query and a target language query by exploiting, in addition to the translation information, a wide spectrum of bilingual and monolingual information, such as term co-occurrences, query logs with click-through data, etc. A discriminative model is used to learn the cross-lingual query similarity based on a set of manually translated queries. The model is trained by optimizing the cross-lingual similarity to best fit the monolingual similarity between one query and the other query's translation. Besides being benchmarked as an independent module, the resulting CLQS system is tested as a new means of query "translation" in a CLIR task on TREC collections. The results show that this new "translation" method is more effective than the traditional query translation method. The remainder of this paper is organized as follows: Section 2 introduces the related work; Section 3 describes in detail the discriminative model for estimating cross-lingual query similarity; Section 4 presents a new CLIR approach using cross-lingual query suggestion as a bridge across language boundaries; Section 5 discusses the experiments and benchmarks; finally, the paper is concluded in Section 6. 2. RELATED WORK Most approaches to CLIR perform a query translation followed by a monolingual IR. Typically, queries are translated either using a bilingual dictionary [22], machine translation software [9] or a parallel corpus [20]. Despite the various types of resources used, out-of-vocabulary (OOV) words and translation disambiguation are the two major bottlenecks for CLIR [20]. In [7, 27], OOV term translations are mined from the Web using a search engine. In [17], bilingual knowledge is acquired based on anchor text analysis. In addition, word co-occurrence statistics in the target language have been leveraged for translation disambiguation [3, 10, 11, 19]. Nevertheless, it is arguable that accurate query translation may not be necessary for CLIR. Indeed, in many cases, it is helpful to introduce words even if they are not direct translations of any query word, but are closely related to the meaning of the query.
This observation has led to the development of cross-lingual query expansion (CLQE) techniques [2, 16, 18]. [2] reports the enhancement of CLIR by post-translation expansion. [16] develops a cross-lingual relevancy model by leveraging the cross-lingual co-occurrence statistics in parallel texts. [18] compares the performance of multiple CLQE techniques, including pre-translation expansion and post-translation expansion. However, there is a lack of a unified framework to combine the wide spectrum of resources and recent advances in mining techniques for CLQE. CLQS is different from CLQE in that it aims to suggest full queries that have been formulated by users in another language. As CLQS exploits up-to-date query logs, it is expected that for most user queries, we can find common formulations on these topics in the query log in the target language. Therefore, CLQS also plays a role of adapting the original query formulation to the common formulations of similar topics in the target language. Query logs have been successfully used for monolingual IR [8, 12, 15, 26], especially in monolingual query suggestion [12] and in relating semantically relevant terms for query expansion [8, 15]. In [1], the target language query log has been exploited to help query translation in CLIR. 3. ESTIMATING CROSS-LINGUAL QUERY SIMILARITY A search engine has a query log containing user queries in different languages within a certain period of time. In addition to query terms, click-through information is also recorded. Therefore, we know which documents have been selected by users for each query. Given a query in the source language, our CLQS task is to determine one or several similar queries in the target language from the query log. The key problem with cross-lingual query suggestion is how to learn a similarity measure between two queries in different languages. Although various statistical similarity measures have been studied for monolingual terms [8, 26], most of them are based on term co-occurrence statistics, and can hardly be applied directly in cross-lingual settings. In order to define a similarity measure across languages, one has to use at least one translation tool or resource. So the measure is based on both a translation relation and monolingual similarity. In this paper, as our purpose is to provide an up-to-date query similarity measure, it may not be sufficient to use only a static translation resource. Therefore, we also integrate a method to mine possible translations on the Web. This method is particularly useful for dealing with OOV terms. Given a set of resources of different natures, the next question is how to integrate them in a principled manner. In this paper, we propose a discriminative model to learn the appropriate similarity measure. The principle is as follows: we assume that we have a reasonable monolingual query similarity measure. For any training query example for which a translation exists, its similarity measure (with any other query) is transposed to its translation. Therefore, we have the desired cross-language similarity value for this example. Then we use a discriminative model to learn the cross-language similarity function which best fits these examples. In the following sections, we first describe the details of the discriminative model for cross-lingual query similarity estimation. Then we introduce all the features (monolingual and cross-lingual information) that we will use in the discriminative model.
3.1 Discriminative Model for Estimating Cross-Lingual Query Similarity In this section, we propose a discriminative model to learn cross-lingual query similarities in a principled manner. The principle is as follows: for a reasonable monolingual query similarity between two queries, a cross-lingual correspondent can be deduced between one query and another query's translation. In other words, for a pair of queries in different languages, their cross-lingual similarity should fit the monolingual similarity between one query and the other query's translation. For example, the similarity between the French query "pages jaunes" (i.e., "yellow page" in English) and the English query "telephone directory" should be equal to the monolingual similarity between the translation of the French query, "yellow page", and "telephone directory". There are many ways to obtain a monolingual similarity measure between terms, e.g., term co-occurrence based mutual information and χ². Any of them can be used as the target for the cross-lingual similarity function to fit. In this way, cross-lingual query similarity estimation is formulated as a regression task as follows: given a source language query qf, a target language query qe, and a monolingual query similarity simML, the corresponding cross-lingual query similarity simCL is defined as simCL(qf, qe) = simML(Tqf, qe) (1), where Tqf is the translation of qf in the target language. Based on Equation (1), it would be relatively easy to create a training corpus. All it requires is a list of query translations. Then an existing monolingual query suggestion system can be used to automatically produce similar queries to each translation, and create the training corpus for cross-lingual similarity estimation. Another advantage is that it is fairly easy to make use of arbitrary information sources within a discriminative modeling framework to achieve optimal performance. In this paper, a support vector machine (SVM) regression algorithm [25] is used to learn the cross-lingual term similarity function. Given a vector of feature functions f between qf and qe, simCL(qf, qe) is represented as an inner product between a weight vector and the feature vector in a kernel space as follows: simCL(qf, qe) = w · φ(f(qf, qe)) (2), where φ is the mapping from the input feature space onto the kernel space, and w is the weight vector in the kernel space which will be learned by the SVM regression training. Once the weight vector is learned, Equation (2) can be used to estimate the similarity between queries of different languages. We want to point out that instead of regression, one could simplify the task to a binary or ordinal classification, in which case CLQS would be categorized according to discontinuous class labels, e.g., relevant and irrelevant, or a series of levels of relevancy, e.g., strongly relevant, weakly relevant, and irrelevant. In either case, one can resort to discriminative classification approaches, such as an SVM or a maximum entropy model, in a straightforward way. However, the regression formalism enables us to fully rank the suggested queries based on the similarity score given by Equation (1). Equations (1) and (2) construct a regression model for cross-lingual query similarity estimation. In the following sections, the monolingual query similarity measure (see Section 3.2) and the feature functions used for SVM regression (see Section 3.3) will be presented.
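To make the regression setup concrete, the following is a minimal sketch of the training and ranking steps, assuming the feature vectors f(qf, qe) have already been extracted. It uses scikit-learn's SVR (an epsilon-SVR built on LIBSVM) as a stand-in for the LibSVM toolkit mentioned in Section 3.4; the hyper-parameters and the 0.4 relevancy threshold are illustrative assumptions, not values from the paper.

import numpy as np
from sklearn.svm import SVR

# Each training example is a (source query, candidate target query) pair.
# X[i] = feature vector f(qf, qe): dictionary score, bi-directional
#        translation score, snippet frequency, CODC measure, MLQS feature, ...
# y[i] = simML(Tqf, qe), the monolingual similarity between the manual
#        translation of qf and the candidate qe (the target of Equation (1)).
def train_clqs_regressor(X, y):
    model = SVR(kernel="rbf", C=1.0, epsilon=0.1)  # hypothetical hyper-parameters
    model.fit(np.asarray(X), np.asarray(y))
    return model

def rank_candidates(model, candidates, features, threshold=0.4):
    # Score candidate target-language queries with Equation (2) and keep
    # those above a relevancy threshold (tuned on development data).
    scores = model.predict(np.asarray([features[c] for c in candidates]))
    ranked = sorted(zip(candidates, scores), key=lambda x: -x[1])
    return [(c, s) for c, s in ranked if s >= threshold]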
3.2 Monolingual Query Similarity Measure Based on Click-through Information Any monolingual term similarity measure can be used as the regression target. In this paper, we select the monolingual query similarity measure presented in [26], which reports good performance by using search users' click-through information in query logs. The reason to choose this monolingual similarity is that it is defined in a similar context as ours - according to a user log that reflects users' intention and behavior. Therefore, we can expect that the cross-language term similarity learned from it can also reflect users' intention and expectation. Following [26], our monolingual query similarity is defined by combining both query content-based similarity and click-through commonality in the query log. First, the content similarity between two queries p and q is defined as similarity_content(p, q) = KN(p, q) / max(kn(p), kn(q)) (3), where kn(x) is the number of keywords in a query x and KN(p, q) is the number of common keywords in the two queries. Second, the click-through based similarity is defined as similarity_click-through(p, q) = RD(p, q) / max(rd(p), rd(q)) (4), where rd(x) is the number of clicked URLs for a query x, and RD(p, q) is the number of common URLs clicked for the two queries. Finally, the similarity between two queries is a linear combination of the content-based and click-through-based similarities, and is presented as similarity(p, q) = α · similarity_content(p, q) + β · similarity_click-through(p, q) (5), where α and β are the relative importance of the two similarity measures. In this paper, we set α = 0.4 and β = 0.6 following the practice in [26]. Queries with a similarity measure higher than a threshold with another query will be regarded as relevant monolingual query suggestions (MLQS) for the latter. In this paper, the threshold is set to 0.9 empirically. 3.3 Features Used for Learning Cross-Lingual Query Similarity Measure This section presents the extraction of candidate relevant queries from the log with the assistance of various monolingual and bilingual resources. Meanwhile, feature functions over the source query and the cross-lingual relevant candidates are defined. Some of the resources used here, such as the bilingual lexicon and parallel corpora, were used for query translation in previous work. But note that we employ them here as an assistant means for finding relevant candidates in the log rather than for acquiring accurate translations. 3.3.1 Bilingual Dictionary In this subsection, an in-house bilingual dictionary containing 120,000 unique entries is used to retrieve candidate queries. Since multiple translations may be associated with each source word, co-occurrence based translation disambiguation is performed [3, 10]. The process is presented as follows: given an input query qf = {wf1, wf2, ..., wfn} in the source language, for each query term wfi, a set of unique translations is provided by the bilingual dictionary D: D(wfi) = {ti1, ti2, ..., tim}. Then the cohesion between the translations of two query terms is measured using mutual information, which is computed as MI(x, y) = P(x, y) · log(P(x, y) / (P(x) · P(y))) (6), with P(x, y) = C(x, y)/N and P(x) = C(x)/N. Here C(x, y) is the number of queries in the log containing both x and y, C(x) is the number of queries containing term x, and N is the total number of queries in the log. Based on the term-term cohesion defined in Equation (6), all the possible query translations are ranked using the summation of the term-term cohesion, Sdict(Tqf) = Σ over translation pairs in Tqf of MI(ti, tj), and the set of the top-4 query translations is denoted as S(Tqf). For each possible query translation T ∈ S(Tqf), we retrieve all the queries containing the same keywords as T from the target language log.
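As an illustration of the Section 3.2 measure, here is a minimal sketch of Equations (3)-(5), computing the combined content-based and click-through-based similarity for two queries given their keyword sets and clicked-URL sets; the example queries and URLs at the end are made up.

def query_similarity(p_terms, q_terms, p_urls, q_urls, alpha=0.4, beta=0.6):
    # Monolingual query similarity combining keyword overlap (Equation (3))
    # and clicked-URL overlap (Equation (4)), linearly combined as in (5).
    kn_common = len(set(p_terms) & set(q_terms))
    sim_content = kn_common / max(len(set(p_terms)), len(set(q_terms)))
    rd_common = len(set(p_urls) & set(q_urls))
    denom = max(len(set(p_urls)), len(set(q_urls)))
    sim_click = rd_common / denom if denom > 0 else 0.0
    return alpha * sim_content + beta * sim_click

# usage: two queries are treated as relevant suggestions if similarity >= 0.9
sim = query_similarity(["yellow", "pages"], ["yellow", "page", "directory"],
                       ["yellowpages.example.com"],
                       ["yellowpages.example.com", "whitepages.example.com"])
print(sim, sim >= 0.9)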
The retrieved queries are candidate target queries, and are assigned Sdict(T) as the value of the feature Dictionary-based Translation Score. 3.3.2 Parallel Corpora Parallel corpora are precious resources for bilingual knowledge acquisition. Different from the bilingual dictionary, the bilingual knowledge learned from parallel corpora assigns a probability to each translation candidate, which is useful in acquiring dominant query translations. In this paper, the Europarl corpus (a set of parallel French and English texts from the proceedings of the European Parliament) is used. The corpus is first sentence-aligned. Then word alignments are derived by training an IBM translation model 1 [4] using GIZA++ [21]. The learned bilingual knowledge is used to extract candidate queries from the query log. The process is presented as follows: given a pair of queries, qf in the source language and qe in the target language, the Bi-Directional Translation Score is defined as SIBM1(qf, qe) = pIBM1(qe | qf) · pIBM1(qf | qe) (7), where pIBM1(y | x) is the word sequence translation probability given by IBM model 1, which has the following form: pIBM1(y | x) = (1 / (|x| + 1)^|y|) · Π_{j=1..|y|} Σ_{i=0..|x|} p(yj | xi) (8), where p(yj | xi) is the word-to-word translation probability derived from the word-aligned corpora. The reason to use the bidirectional translation probability is to deal with the fact that common words can be considered as possible translations of many words. By using bidirectional translation, we test whether the translation words can be translated back to the source words. This helps focus the translation probability onto the most specific translation candidates. Now, given an input query qf, the top 10 queries {qe} with the highest bidirectional translation scores with qf are retrieved from the query log, and SIBM1(qf, qe) in Equation (7) is assigned as the value for the feature Bi-Directional Translation Score. 3.3.3 Online Mining for Related Queries OOV word translation is a major knowledge bottleneck for query translation and CLIR. To overcome this knowledge bottleneck, web mining has been exploited in [7, 27] to acquire English-Chinese term translations based on the observation that Chinese terms may co-occur with their English translations in the same web page. In this section, this web mining approach is adapted to acquire not only translations but semantically related queries in the target language. It is assumed that if a query in the target language co-occurs with the source query in many web pages, they are probably semantically related. Therefore, a simple method is to send the source query to a search engine (Google in our case) for Web pages in the target language in order to find related queries in the target language. For instance, by sending a French query "pages jaunes" to search for English pages, the English snippets containing the keywords "yellow pages" or "telephone directory" will be returned. However, this simple approach may induce a significant amount of noise due to the non-relevant returns from the search engine. In order to improve the relevancy of the bilingual snippets, we extend the simple approach by the following query modification: the original query is combined with the dictionary-based translations of its keywords, which are unified by the ∧ (AND) and ∨ (OR) operators, into a single Boolean query. For example, for a given query q = abc, the query and the dictionary translations of its keywords are combined into one web query. From the returned top 700 snippets, the most frequent 10 target queries are identified, and are associated with the feature Frequency in the Snippets.
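The following is a minimal sketch of the Bi-Directional Translation Score of Equations (7) and (8), assuming a word-to-word translation table keyed by (target word, source word) pairs, for instance one estimated from the word-aligned Europarl corpus with GIZA++; the smoothing constant for unseen word pairs is an assumption.

def ibm1_prob(target_words, source_words, t_table):
    # IBM Model 1 word-sequence translation probability (Equation (8) sketch):
    # for each target word, sum the word-to-word probabilities over all source
    # words plus the empty (NULL) word, then normalize by (|x| + 1)^|y|.
    src = ["NULL"] + list(source_words)
    score = 1.0 / (len(src) ** len(target_words))
    for y in target_words:
        score *= sum(t_table.get((y, x), 1e-9) for x in src)  # 1e-9: assumed smoothing
    return score

def bidirectional_score(q_f, q_e, t_fe, t_ef):
    # Equation (7) sketch: translate qf -> qe and qe -> qf and multiply the
    # two probabilities, penalizing overly common words in both directions.
    return ibm1_prob(q_e, q_f, t_fe) * ibm1_prob(q_f, q_e, t_ef)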
Furthermore, we use the Co-Occurrence Double-Check (CODC) measure to weight the association between the source and target queries. The CODC measure is proposed in [6] as an association measure based on snippet analysis, named the Web Search with Double Checking (WSDC) model. In the WSDC model, two objects a and b are considered to have an association if b can be found by using a as a query (forward process), and a can be found by using b as a query (backward process) by web search. The forward process counts the frequency of b in the top N snippets of query a, denoted as freq(b@a). Similarly, the backward process counts the frequency of a in the top N snippets of query b, denoted as freq(a@b). Then the CODC association score is defined as SCODC(qf, qe) = 0 if freq(qe@qf) · freq(qf@qe) = 0, and SCODC(qf, qe) = e^{log[(freq(qe@qf)/freq(qf)) · (freq(qf@qe)/freq(qe))]^α} otherwise (9). CODC measures the association of two terms in the range between 0 and 1, where under the two extreme cases, qe and qf are of no association when freq(qe@qf) = 0 or freq(qf@qe) = 0, and are of the strongest association when freq(qe@qf) = freq(qf) and freq(qf@qe) = freq(qe). In our experiment, α is set at 0.15 following the practice in [6]. Any query qe mined from the Web will be associated with a feature CODC Measure with SCODC(qf, qe) as its value. 3.3.4 Monolingual Query Suggestion For all the candidate queries Q0 retrieved using the dictionary (see Section 3.3.1), parallel data (see Section 3.3.2) and web mining (see Section 3.3.3), the monolingual query suggestion system (described in Section 3.2) is called to produce more related queries in the target language. For each target query qe, its monolingual source query SQML(qe) is defined as the query in Q0 with the highest monolingual similarity with qe, i.e., SQML(qe) = argmax over q ∈ Q0 of simML(q, qe). Then the monolingual similarity between qe and SQML(qe) is used as the value of qe's Monolingual Query Suggestion Feature. For any target query q ∈ Q0, its Monolingual Query Suggestion Feature is set to 1. For any query qe ∉ Q0, its values of Dictionary-based Translation Score, Bi-Directional Translation Score, Frequency in the Snippets, and CODC Measure are set to be equal to the feature values of SQML(qe). 3.4 Estimating Cross-lingual Query Similarity In summary, four categories of features are used to learn the cross-lingual query similarity. The SVM regression algorithm [25] is used to learn the weights in Equation (2). In this paper, the LibSVM toolkit [5] is used for the regression training. In the prediction stage, the candidate queries are ranked using the cross-lingual query similarity score computed as simCL(qf, qe) = w · φ(f(qf, qe)), and the queries with a similarity score lower than a threshold are regarded as non-relevant. The threshold is learned using a development data set by fitting MLQS's output. 4. CLIR BASED ON CROSS-LINGUAL QUERY SUGGESTION In Section 3, we presented a discriminative model for cross-lingual query suggestion. However, objectively benchmarking a query suggestion system is not a trivial task. In this paper, we propose to use CLQS as an alternative to query translation, and test its effectiveness in CLIR tasks. Good performance of the resulting CLIR corresponds to high quality of the suggested queries. Given a source query qf, a set of relevant queries {qe} in the target language is recommended using the cross-lingual query suggestion system. Then a monolingual IR system based on the BM25 model [23] is called using each q ∈ {qe} as a query to retrieve documents. Then the retrieved documents are re-ranked based on the sum of the BM25 scores associated with each monolingual retrieval.
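To illustrate the retrieval scheme of Section 4, here is a minimal sketch of CLQS-based CLIR. The suggest and bm25_search functions are hypothetical interfaces standing in for the CLQS system and a BM25-based monolingual retrieval engine; the top_k cut-off is an assumption.

from collections import defaultdict

def clqs_based_clir(source_query, suggest, bm25_search, top_k=1000):
    # Retrieve with every suggested target-language query and re-rank
    # documents by the sum of their BM25 scores across the retrievals.
    merged = defaultdict(float)
    for q_e, _similarity in suggest(source_query):      # suggested queries {qe}
        for doc_id, score in bm25_search(q_e).items():   # one monolingual retrieval
            merged[doc_id] += score                      # sum of BM25 scores
    ranked = sorted(merged.items(), key=lambda x: -x[1])
    return ranked[:top_k]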
5. PERFORMANCE EVALUATION In this section, we benchmark the cross-lingual query suggestion system, comparing its performance with monolingual query suggestion, studying the contribution of various information sources, and testing its effectiveness when used in CLIR tasks. 5.1 Data Resources In our experiments, French and English are selected as the source and target language respectively. This selection is due to the fact that large-scale query logs are readily available for these two languages. A one-month English query log (containing 7 million unique English queries with occurrence frequency more than 5) of the MSN search engine is used as the target language log, and a monolingual query suggestion system is built based on it. In addition, 5,000 French queries are selected randomly from a French query log (containing around 3 million queries), and are manually translated into English by professional French-English translators. Among the 5,000 French queries, 4,171 queries have their translations in the English query log, and are used for CLQS training and testing. Furthermore, among the 4,171 French queries, 70% are used for cross-lingual query similarity training, 10% are used as the development data to determine the relevancy threshold, and 20% are used for testing. To retrieve the cross-lingual related queries, an in-house French-English bilingual lexicon (containing 120,000 unique entries) and the Europarl corpus are used. Besides benchmarking CLQS as an independent system, CLQS is also tested as a query "translation" system for CLIR tasks. Based on the observation that CLIR performance heavily relies on the quality of the suggested queries, this benchmark measures the quality of CLQS in terms of its effectiveness in helping CLIR. To perform such a benchmark, we use the documents of the TREC-6 CLIR data (AP88-90 newswire, 750MB) with the officially provided 25 short French-English query pairs (CL1-CL25). The selection of this data set is due to the fact that the average length of the queries is 3.3 words, which matches the web query logs we use to train CLQS. 5.2 Performance of Cross-lingual Query Suggestion Mean square error (MSE) is used to measure the regression error, and it is defined as MSE = (1/l) Σ_{i=1..l} (simCL(qfi, qei) − simML(Tqfi, qei))², where l is the total number of cross-lingual query pairs in the testing data. As described in Section 3.4, a relevancy threshold is learned using the development data, and only CLQS with a similarity value above the threshold is regarded as truly relevant to the input query. In this way, CLQS can also be benchmarked as a classification task using precision (P) and recall (R), which are defined as P = |S_CLQS ∩ S_MLQS| / |S_CLQS| and R = |S_CLQS ∩ S_MLQS| / |S_MLQS|, where S_CLQS is the set of relevant queries suggested by CLQS and S_MLQS is the set of relevant queries suggested by MLQS (see Section 3.2). The benchmarking results with various feature configurations are shown in Table 1. Table 1. CLQS performance with different feature settings (DD: dictionary only; DD+PC: dictionary and parallel corpora; DD+PC+Web: dictionary, parallel corpora, and web mining; DD+PC+Web+MLQS: dictionary, parallel corpora, web mining and monolingual query suggestion) Table 1 reports the performance comparison with various feature settings. The baseline system (DD) uses a conventional query translation approach, i.e., a bilingual dictionary with co-occurrence-based translation disambiguation. The baseline system only covers less than 10% of the suggestions made by MLQS. Using additional features obviously enables CLQS to generate more relevant queries.
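Before turning to the detailed results, here is a minimal sketch of how the MSE, precision, and recall defined above can be computed from the regression predictions and the CLQS/MLQS suggestion sets; the function signature is an illustrative assumption.

def evaluate_clqs(predicted_sims, target_sims, clqs_set, mlqs_set):
    # predicted_sims / target_sims: parallel lists of simCL predictions and
    # simML(Tqf, qe) targets; clqs_set / mlqs_set: suggested query sets.
    l = len(predicted_sims)
    mse = sum((p - t) ** 2 for p, t in zip(predicted_sims, target_sims)) / l
    overlap = len(set(clqs_set) & set(mlqs_set))
    precision = overlap / len(clqs_set) if clqs_set else 0.0
    recall = overlap / len(mlqs_set) if mlqs_set else 0.0
    return mse, precision, recall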
The most significant improvement in recall is achieved by exploiting MLQS. The final CLQS system is able to generate 42% of the queries suggested by MLQS. Among all the feature combinations, there is no significant change in precision. This indicates that our methods can improve the recall by effectively leveraging various information sources without losing the accuracy of the suggestions. Besides benchmarking CLQS by comparing its output with the MLQS output, 200 French queries are randomly selected from the French query log. These queries are double-checked to make sure that they are not in the CLQS training corpus. Then the CLQS system is used to suggest relevant English queries for them. On average, for each French query, 8.7 relevant English queries are suggested. Then the total 1,740 suggested English queries are manually checked by two professional English/French translators with cross-validation. Among the 1,740 suggested queries, 1,407 queries are recognized as relevant to the original ones; hence the accuracy is 80.9%. Figure 1 shows an example of CLQS of the French query "terrorisme international" ("international terrorism" in English). international terrorism (0.991); what is terrorism (0.943); counter terrorism (0.920); terrorist (0.911); terrorist attacks (0.898); international terrorist (0.853); world terrorism (0.845); global terrorism (0.833); transnational terrorism (0.821); human rights (0.811); terrorist groups (0.777); patterns of global terrorism (0.762); september 11 (0.734) Figure 1. An example of CLQS of the French query "terrorisme international" 5.3 CLIR Performance In this section, CLQS is tested with French to English CLIR tasks. We conduct CLIR experiments using the TREC-6 CLIR dataset described in Section 5.1. The CLIR is performed using a query translation system followed by a BM25-based [23] monolingual IR module. The following three different systems have been used to perform query translation: (1) CLQS: our CLQS system; (2) MT: the Google French to English machine translation system; (3) DT: a dictionary-based query translation system using co-occurrence statistics for translation disambiguation. The translation disambiguation algorithm is presented in Section 3.3.1. Besides, the monolingual IR performance is also reported as a reference. The average precision of the four IR systems is reported in Table 2, and the 11-point precision-recall curves are shown in Figure 2. Table 2. Average precision of CLIR on the TREC-6 dataset (Monolingual: monolingual IR system; MT: CLIR based on machine translation; DT: CLIR based on dictionary translation; CLQS: CLQS-based CLIR) Figure 2. 11-point precision-recall on the TREC-6 CLIR data set The benchmark shows that using CLQS as a query translation tool outperforms CLIR based on machine translation by 7.4%, outperforms CLIR based on dictionary translation by 25.2%, and achieves 87.6% of the monolingual IR performance. The effectiveness of CLQS lies in its ability to suggest closely related queries besides accurate translations. For example, for the query CL14 "terrorisme international" ("international terrorism"), although the machine translation tool translates the query correctly, the CLQS system still achieves a higher score by recommending many additional related terms such as "global terrorism", "world terrorism", etc. (as shown in Figure 1). Another example is the query "La pollution causée par l'automobile" ("air pollution due to automobile") of CL6.
The MT tool provides the translation "the pollution caused by the car", while the CLQS system enumerates all the possible synonyms of "car" and suggests the following queries: "car pollution", "auto pollution", "automobile pollution". Besides, other related queries such as "global warming" are also suggested. For the query CL12 "La culture écologique" ("organic farming"), the MT tool fails to generate the correct translation. Although the correct translation is not in our French-English dictionary either, the CLQS system generates "organic farm" as a relevant query thanks to successful web mining. The above experiment demonstrates the effectiveness of using CLQS to suggest relevant queries for CLIR enhancement. A related line of research is to perform query expansion to enhance CLIR [2, 18]. So it is very interesting to compare the CLQS approach with the conventional query expansion approaches. Following [18], post-translation expansion is performed based on pseudo-relevance feedback (PRF) techniques. We first perform CLIR in the same way as before. Then we use the traditional PRF algorithm described in [24] to select expansion terms. In our experiments, the top 10 terms are selected to expand the original query, and the new query is used to search the collection for the second time. The new CLIR performance in terms of average precision is shown in Table 3. The 11-point P-R curves are drawn in Figure 3. Although being enhanced by pseudo-relevance feedback, the CLIR using either machine translation or dictionary-based query translation still does not perform as well as the CLQS-based approach. A statistical t-test [13] is conducted to indicate whether the CLQS-based CLIR performs significantly better. Pair-wise p-values are shown in Table 4. Clearly, CLQS significantly outperforms MT and DT without PRF as well as DT+PRF, but its superiority over MT+PRF is not significant. However, when combined with PRF, CLQS significantly outperforms all the other methods. This indicates the higher effectiveness of CLQS in related term identification by leveraging a wide spectrum of resources. Furthermore, post-translation expansion is capable of improving CLQS-based CLIR. This is due to the fact that CLQS and pseudo-relevance feedback leverage different categories of resources, and the two approaches are complementary. Table 3. Comparison of average precision (AP) on TREC-6 without and with post-translation expansion. (%) are the relative percentages over the monolingual IR performance Figure 3. 11-point precision-recall on the TREC-6 CLIR dataset with pseudo-relevance feedback Table 4. Results of the pair-wise significance t-test. Here a p-value < 0.05 is considered statistically significant 6. CONCLUSIONS In this paper, we proposed a new approach to cross-lingual query suggestion by mining relevant queries in different languages from query logs. The key solution to this problem is to learn a cross-lingual query similarity measure by a discriminative model exploiting multiple monolingual and bilingual resources. The model is trained based on the principle that the cross-lingual similarity should best fit the monolingual similarity between one query and the other query's translation.
H-44
A Time Machine for Text Search
Text search over temporally versioned document collections such as web archives has received little attention as a research problem. As a consequence, there is no scalable and principled solution to search such a collection as of a specified time t. In this work, we address this shortcoming and propose an efficient solution for time-travel text search by extending the inverted file index to make it ready for temporal search. We introduce approximate temporal coalescing as a tunable method to reduce the index size without significantly affecting the quality of results. In order to further improve the performance of time-travel queries, we introduce two principled techniques to trade off index size for its performance. These techniques can be formulated as optimization problems that can be solved to near-optimality. Finally, our approach is evaluated in a comprehensive series of experiments on two large-scale real-world datasets. Results unequivocally show that our methods make it possible to build an efficient time machine scalable to large versioned text collections.
[ "time machin", "text search", "version document collect", "web archiv", "time-travel text search", "invert file index", "tempor search", "approxim tempor coalesc", "collabor author environ", "timestamp inform feed", "document-content overlap", "index rang-base valu", "open sourc search-engin nutch", "static indexprun techniqu", "valid time-interv", "sublist materi", "tempor text index" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "U", "U", "U", "M", "U", "M", "U", "U", "R" ]
A Time Machine for Text Search Klaus Berberich Srikanta Bedathur Thomas Neumann Gerhard Weikum Max-Planck Institute for Informatics Saarbrücken, Germany {kberberi, bedathur, neumann, weikum}@mpi-inf.mpg.de ABSTRACT Text search over temporally versioned document collections such as web archives has received little attention as a research problem. As a consequence, there is no scalable and principled solution to search such a collection as of a specified time t. In this work, we address this shortcoming and propose an efficient solution for time-travel text search by extending the inverted file index to make it ready for temporal search. We introduce approximate temporal coalescing as a tunable method to reduce the index size without significantly affecting the quality of results. In order to further improve the performance of time-travel queries, we introduce two principled techniques to trade off index size for its performance. These techniques can be formulated as optimization problems that can be solved to near-optimality. Finally, our approach is evaluated in a comprehensive series of experiments on two large-scale real-world datasets. Results unequivocally show that our methods make it possible to build an efficient time machine scalable to large versioned text collections. Categories and Subject Descriptors H.3.1 [Content Analysis and Indexing]: Indexing methods; H.3.3 [Information Search and Retrieval]: Retrieval models, Search process General Terms Algorithms, Experimentation, Performance 1. INTRODUCTION In this work we address time-travel text search over temporally versioned document collections. Given a keyword query q and a time t, our goal is to identify and rank relevant documents as if the collection was in its state as of time t. An increasing number of such versioned document collections is available today, including web archives, collaborative authoring environments like Wikis, and timestamped information feeds. Text search on these collections, however, is mostly time-ignorant: while the searched collection changes over time, often only the most recent version of a document is indexed, or versions are indexed independently and treated as separate documents. Even worse, for some collections, in particular web archives like the Internet Archive [18], a comprehensive text-search functionality is often completely missing. Time-travel text search, as we develop it in this paper, is a crucial tool to explore these collections and to unfold their full potential, as the following example demonstrates. For a documentary about a past political scandal, a journalist needs to research early opinions and statements made by the involved politicians. Sending an appropriate query to a major web search-engine, the majority of returned results contains only recent coverage, since many of the early web pages have disappeared and are only preserved in web archives. If the query could be enriched with a time point, say August 20th 2003 as the day after the scandal got revealed, and be issued against a web archive, only pages that existed specifically at that time could be retrieved, thus better satisfying the journalist's information need. Document collections like the Web or Wikipedia [32], as we target them here, are already large if only a single snapshot is considered. Looking at their evolutionary history, we are faced with even larger data volumes. As a consequence, naïve approaches to time-travel text search fail, and viable approaches must scale up well to such large data volumes.
This paper presents an efficient solution to time-travel text search by making the following key contributions: 1. The popular, well-studied inverted file index [35] is transparently extended to enable time-travel text search. 2. Temporal coalescing is introduced to avoid an index-size explosion while keeping results highly accurate. 3. We develop two sublist materialization techniques to improve index performance that allow trading off space vs. performance. 4. In a comprehensive experimental evaluation, our approach is evaluated on the English Wikipedia and parts of the Internet Archive as two large-scale real-world datasets with versioned documents. The remainder of this paper is organized as follows. The presented work is put in context with related work in Section 2. We delineate our model of a temporally versioned document collection in Section 3. We present our time-travel inverted index in Section 4. Building on it, temporal coalescing is described in Section 5. In Section 6 we describe principled techniques to improve index performance, before presenting the results of our experimental evaluation in Section 7. 2. RELATED WORK We can classify the related work mainly into the following two categories: (i) methods that deal explicitly with collections of versioned documents or temporal databases, and (ii) methods for reducing the index size by either exploiting document-content overlap or pruning portions of the index. We briefly review work under these categories here. To the best of our knowledge, there is very little prior work dealing with historical search over temporally versioned documents. Anick and Flynn [3], while pioneering this research, describe a help-desk system that supports historical queries. Access costs are optimized for accesses to the most recent versions and increase as one moves farther into the past. Burrows and Hisgen [10], in a patent description, delineate a method for indexing range-based values and mention its potential use for searching based on dates associated with documents. Recent work by Nørvåg and Nybø [25] and their earlier proposals concentrate on the relatively simpler problem of supporting text-containment queries only and neglect the relevance scoring of results. Stack [29] reports practical experiences made when adapting the open-source search engine Nutch to search web archives. This adaptation, however, does not provide the intended time-travel text search functionality. In contrast, research in temporal databases has produced several index structures tailored for time-evolving databases; a comprehensive overview of the state of the art is available in [28]. Unlike the inverted file index, their applicability to text search is not well understood. Moving on to the second category of related work, Broder et al. [8] describe a technique that exploits large content overlaps between documents to achieve a reduction in index size. Their technique makes strong assumptions about the structure of document overlaps, rendering it inapplicable to our context. More recent approaches by Hersovici et al. [17] and Zhang and Suel [34] exploit arbitrary content overlaps between documents to reduce index size. None of the approaches, however, considers time explicitly or provides the desired time-travel text search functionality. Static index-pruning techniques [11, 12] aim to reduce the effective index size by removing portions of the index that are expected to have low impact on the query result.
They also do not consider temporal aspects of documents, and thus are technically quite different from our proposal despite having a shared goal of index-size reduction. It should be noted that index-pruning techniques can be adapted to work along with the temporal text index we propose here. 3. MODEL In the present work, we deal with a temporally versioned document collection D that is modeled as described in the following. Each document d ∈ D is a sequence of its versions d = dt1, dt2, ... Each version dti has an associated timestamp ti reflecting when the version was created. Each version is a vector of searchable terms or features. Any modification to a document version results in the insertion of a new version with a corresponding timestamp. We employ a discrete definition of time, so that timestamps are non-negative integers. The deletion of a document at time ti, i.e., its disappearance from the current state of the collection, is modeled as the insertion of a special tombstone version ⊥. The validity time-interval val(dti) of a version dti is [ti, ti+1), if a newer version with associated timestamp ti+1 exists, and [ti, now) otherwise, where now points to the greatest possible value of a timestamp (i.e., ∀t : t < now). Putting all this together, we define the state Dt of the collection at time t (i.e., the set of versions valid at t that are not deletions) as Dt = ∪_{d ∈ D} { dti ∈ d | t ∈ val(dti) ∧ dti ≠ ⊥ }. As mentioned earlier, we want to enrich a keyword query q with a timestamp t, so that q is evaluated over Dt, i.e., the state of the collection at time t. The enriched time-travel query is written as q^t for brevity. As a retrieval model in this work we adopt Okapi BM25 [27], but note that the proposed techniques are not dependent on this choice and are applicable to other retrieval models like tf-idf [4] or language models [26] as well. For our considered setting, we slightly adapt Okapi BM25 as w(q^t, dti) = Σ_{v ∈ q} wtf(v, dti) · widf(v, t). In the above formula, the relevance w(q^t, dti) of a document version dti to the time-travel query q^t is defined. We reiterate that q^t is evaluated over Dt, so that only the version dti valid at time t is considered. The first factor wtf(v, dti) in the summation, further referred to as the tf-score, is defined as wtf(v, dti) = ((k1 + 1) · tf(v, dti)) / (k1 · ((1 − b) + b · dl(dti)/avdl(ti)) + tf(v, dti)). It considers the plain term frequency tf(v, dti) of term v in version dti, normalizing it by taking into account both the length dl(dti) of the version and the average document length avdl(ti) in the collection at time ti. The tf-saturation parameter k1 and the length-normalization parameter b are inherited from the original Okapi BM25 and are commonly set to values 1.2 and 0.75, respectively. The second factor widf(v, t), which we refer to as the idf-score in the remainder, conveys the inverse document frequency of term v in the collection at time t and is defined as widf(v, t) = log( (N(t) − df(v, t) + 0.5) / (df(v, t) + 0.5) ), where N(t) = |Dt| is the collection size at time t and df(v, t) gives the number of documents in the collection that contain the term v at time t. While the idf-score depends on the whole corpus as of the query time t, the tf-score is specific to each version. 4. TIME-TRAVEL INVERTED FILE INDEX The inverted file index is a standard technique for text indexing, deployed in many systems.
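A minimal sketch of the adapted Okapi BM25 scoring defined above; the stats_at_t interface that supplies N(t) and df(v, t) per term is an assumed abstraction over the extended vocabulary structure introduced in the next section.

import math

def tf_score(tf, dl, avdl, k1=1.2, b=0.75):
    # Okapi BM25 tf-score of a term in a specific document version.
    return ((k1 + 1) * tf) / (k1 * ((1 - b) + b * dl / avdl) + tf)

def idf_score(N_t, df_t):
    # idf-score of a term as of query time t, from collection size N(t)
    # and document frequency df(v, t).
    return math.log((N_t - df_t + 0.5) / (df_t + 0.5))

def bm25_time_travel(query_terms, version_tf, dl, avdl_ti, stats_at_t):
    # Relevance w(q^t, d^{ti}) of the version valid at time t: sum over query
    # terms of the version-specific tf-score times the idf-score as of t.
    # stats_at_t maps term -> (N(t), df(v, t)); hypothetical interface.
    score = 0.0
    for v in query_terms:
        tf = version_tf.get(v, 0)
        if tf == 0 or v not in stats_at_t:
            continue
        N_t, df_t = stats_at_t[v]
        score += tf_score(tf, dl, avdl_ti) * idf_score(N_t, df_t)
    return score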
4. TIME-TRAVEL INVERTED FILE INDEX

The inverted file index is a standard technique for text indexing, deployed in many systems. In this section, we briefly review this technique and present our extensions to the inverted file index that make it ready for time-travel text search.

4.1 Inverted File Index

An inverted file index consists of a vocabulary, commonly organized as a B+-Tree, that maps each term to its idf-score and inverted list. The index list Lv belonging to term v contains postings of the form (d, p) where d is a document identifier and p is the so-called payload. The payload p contains information about the term frequency of v in d, but may also include positional information about where the term appears in the document. The sort order of index lists depends on which queries are to be supported efficiently. For Boolean queries it is favorable to sort index lists in document order. Frequency-order and impact-order sorted index lists are beneficial for ranked queries and enable optimized query processing that stops early after having identified the k most relevant documents [1, 2, 9, 15, 31]. A variety of compression techniques, such as encoding document identifiers more compactly, have been proposed [33, 35] to reduce the size of index lists. For an excellent recent survey about inverted file indexes we refer to [35].

4.2 Time-Travel Inverted File Index

In order to prepare an inverted file index for time travel we extend both inverted lists and the vocabulary structure by explicitly incorporating temporal information. The main idea for inverted lists is that we include a validity time-interval [tb, te) in postings to denote when the payload information was valid. The postings in our time-travel inverted file index are thus of the form (d, p, [tb, te)) where d and p are defined as in the standard inverted file index above and [tb, te) is the validity time-interval. As a concrete example, in our implementation, for a version d^ti having the Okapi BM25 tf-score w_tf(v, d^ti) for term v, the index list Lv contains the posting (d, w_tf(v, d^ti), [ti, ti+1)). Similarly, the extended vocabulary structure maintains for each term a time-series of idf-scores organized as a B+-Tree. Unlike the tf-score, the idf-score of every term could vary with every change in the corpus. Therefore, we take a simplified approach to idf-score maintenance, by computing idf-scores for all terms in the corpus at specific (possibly periodic) times.

4.3 Query Processing

During processing of a time-travel query q^t, for each query term the corresponding idf-score valid at time t is retrieved from the extended vocabulary. Then, index lists are sequentially read from disk, thereby accumulating the information contained in the postings. We transparently extend the sequential reading, which is, to the best of our knowledge, common to all query processing techniques on inverted file indexes, thus making them suitable for time-travel query processing. To this end, sequential reading is extended by skipping all postings whose validity time-interval does not contain t (i.e., t ∉ [tb, te)). Whether a posting can be skipped can only be decided after the posting has been transferred from disk into memory and therefore still incurs significant I/O cost. As a remedy, we propose index organization techniques in Section 6 that aim to reduce the I/O overhead significantly. We note that our proposed extension of the inverted file index makes no assumptions about the sort order of index lists. As a consequence, existing query-processing techniques and most optimizations (e.g., compression techniques) remain equally applicable.
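The sketch below illustrates this extended sequential reading in Python, using in-memory data structures rather than the on-disk index of the prototype; the names (process_query, idf_at, and the posting tuple layout) are illustrative assumptions. Postings carry a validity time-interval, and any posting whose interval does not contain the query time t is skipped during score accumulation.

    from collections import defaultdict

    def process_query(query_terms, t, index, idf_at):
        # index: term -> list of postings (doc_id, tf_score, t_begin, t_end), read sequentially
        # idf_at: lookup of the idf-score of a term valid at time t (extended vocabulary)
        scores = defaultdict(float)
        for term in query_terms:
            w_idf = idf_at(term, t)
            for doc_id, w_tf, t_begin, t_end in index.get(term, []):
                if not (t_begin <= t < t_end):
                    continue   # skip postings whose validity interval does not contain t
                scores[doc_id] += w_tf * w_idf
        return sorted(scores.items(), key=lambda e: e[1], reverse=True)

Note that in this in-memory sketch skipping is free, whereas in the disk-based setting the skipped postings still have to be read, which is exactly the overhead addressed in Section 6.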
5. TEMPORAL COALESCING

If we employ the time-travel inverted index, as described in the previous section, to a versioned document collection, we obtain one posting per term per document version. For frequent terms and large highly-dynamic collections, this leads to extremely long index lists with very poor query-processing performance. The approximate temporal coalescing technique that we propose in this section counters this blowup in index-list size. It builds on the observation that most changes in a versioned document collection are minor, leaving large parts of the document untouched. As a consequence, the payload of many postings belonging to temporally adjacent versions will differ only slightly or not at all. Approximate temporal coalescing reduces the number of postings in an index list by merging such a sequence of postings that have almost equal payloads, while keeping the maximal error bounded. This idea is illustrated in Figure 1, which plots non-coalesced and coalesced scores of postings belonging to a single document. Approximate temporal coalescing is greatly effective given such fluctuating payloads and reduces the number of postings from 9 to 3 in the example.

Figure 1: Approximate Temporal Coalescing (score over time; non-coalesced vs. coalesced postings of a single document)

The notion of temporal coalescing was originally introduced in temporal database research by Böhlen et al. [6], where the simpler problem of coalescing only equal information was considered. We next formally state the problem dealt with in approximate temporal coalescing, and discuss the computation of optimal and approximate solutions. Note that the technique is applied to each index list separately, so that the following explanations assume a fixed term v and index list Lv. As an input we are given a sequence of temporally adjacent postings

I = ⟨ (d, pi, [ti, ti+1)), ... , (d, pn−1, [tn−1, tn)) ⟩ .

Each sequence represents a contiguous time period during which the term was present in a single document d. If a term disappears from d but reappears later, we obtain multiple input sequences that are dealt with separately. We seek to generate the minimal length output sequence of postings

O = ⟨ (d, pj, [tj, tj+1)), ... , (d, pm−1, [tm−1, tm)) ⟩

that adheres to the following constraints: First, O and I must cover the same time-range, i.e., ti = tj and tn = tm. Second, when coalescing a subsequence of postings of the input into a single posting of the output, we want the approximation error to be below a threshold ε. In other words, if (d, pi, [ti, ti+1)) and (d, pj, [tj, tj+1)) are postings of I and O respectively, then the following must hold for a chosen error function and a threshold ε:

tj ≤ ti ∧ ti+1 ≤ tj+1 ⇒ error(pi, pj) ≤ ε .

In this paper, as an error function we employ the relative error between payloads (i.e., tf-scores) of a document in I and O, defined as: err_rel(pi, pj) = |pi − pj| / |pi| . Finding an optimal output sequence of postings can be cast into finding a piecewise-constant representation for the points (ti, pi) that uses a minimal number of segments while retaining the above approximation guarantee. Similar problems occur in time-series segmentation [21, 30] and histogram construction [19, 20]. Typically dynamic programming is applied to obtain an optimal solution in O(n^2 m*) [20, 30] time with m* being the number of segments in an optimal sequence. In our setting, as a key difference, only a guarantee on the local error is retained, in contrast to a guarantee on the global error in the aforementioned settings.
Exploiting this fact, an optimal solution is computable by means of induction [24] in O(n^2) time. Details of the optimal algorithm are omitted here but can be found in the accompanying technical report [5]. The quadratic complexity of the optimal algorithm makes it inappropriate for the large datasets encountered in this work. As an alternative, we introduce a linear-time approximate algorithm that is based on the sliding-window algorithm given in [21]. This algorithm produces nearly-optimal output sequences that retain the bound on the relative error, but possibly require a few additional segments more than an optimal solution.

Algorithm 1 Temporal Coalescing (Approximate)
1:  I = ⟨ (d, pi, [ti, ti+1)), ... ⟩   O = ⟨⟩
2:  pmin = pi   pmax = pi   p = pi   tb = ti   te = ti+1
3:  for (d, pj, [tj, tj+1)) ∈ I do
4:      p'min = min(pmin, pj)   p'max = max(pmax, pj)
5:      p' = optrep(p'min, p'max)
6:      if err_rel(p'min, p') ≤ ε ∧ err_rel(p'max, p') ≤ ε then
7:          pmin = p'min   pmax = p'max   p = p'   te = tj+1
8:      else
9:          O = O ∪ (d, p, [tb, te))
10:         pmin = pj   pmax = pj   p = pj   tb = tj   te = tj+1
11:     end if
12: end for
13: O = O ∪ (d, p, [tb, te))

Algorithm 1 makes one pass over the input sequence I. While doing so, it coalesces sequences of postings having maximal length. The optimal representative for a sequence of postings depends only on their minimal and maximal payload (pmin and pmax) and can be looked up using optrep in O(1) (see [16] for details). When reading the next posting, the algorithm tries to add it to the current sequence of postings. It computes the hypothetical new representative p' and checks whether it would retain the approximation guarantee. If this test fails, a coalesced posting bearing the old representative is added to the output sequence O and, following that, the bookkeeping is reinitialized. The time complexity of the algorithm is in O(n). Note that, since we make no assumptions about the sort order of index lists, temporal-coalescing algorithms have an additional preprocessing cost in O(|Lv| log |Lv|) for sorting the index list and chopping it up into subsequences for each document.
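A compact Python transcription of Algorithm 1 is given below as a sketch, not the paper's implementation. For strictly positive tf-scores, one valid choice of optrep under the relative-error function is 2·pmin·pmax / (pmin + pmax), which minimizes the maximum relative error over [pmin, pmax] (cf. [16]); this choice and the function names are assumptions made for illustration.

    def optrep(p_min, p_max):
        # Representative minimizing the maximum relative error over [p_min, p_max]
        # for strictly positive payloads.
        return 2.0 * p_min * p_max / (p_min + p_max)

    def err_rel(p, rep):
        return abs(p - rep) / abs(p)

    def coalesce(postings, eps):
        # postings: temporally adjacent postings (doc, payload, t_begin, t_end) of one
        # document, sorted by time; returns the coalesced output sequence.
        doc, p, tb, te = postings[0]
        p_min = p_max = rep = p
        out = []
        for _, pj, tj, tj1 in postings[1:]:
            cand_min, cand_max = min(p_min, pj), max(p_max, pj)
            cand_rep = optrep(cand_min, cand_max)
            if err_rel(cand_min, cand_rep) <= eps and err_rel(cand_max, cand_rep) <= eps:
                p_min, p_max, rep, te = cand_min, cand_max, cand_rep, tj1
            else:
                out.append((doc, rep, tb, te))          # emit coalesced posting
                p_min = p_max = rep = pj                # reinitialize bookkeeping
                tb, te = tj, tj1
        out.append((doc, rep, tb, te))
        return out

Every original payload covered by an output posting then deviates from its representative by at most a relative error of eps, mirroring the guarantee of Algorithm 1.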
6. SUBLIST MATERIALIZATION

Efficiency of processing a query q^t on our time-travel inverted index is influenced adversely by the wasted I/O due to read but skipped postings. Temporal coalescing implicitly addresses this problem by reducing the overall index list size, but still a significant overhead remains. In this section, we tackle this problem by proposing the idea of materializing sublists, each of which corresponds to a contiguous subinterval of time spanned by the full index. Each of these sublists contains all coalesced postings that overlap with the corresponding time interval of the sublist. Note that all those postings whose validity time-interval spans across the temporal boundaries of several sublists are replicated in each of the spanned sublists. Thus, in order to process the query q^t it is sufficient to scan any materialized sublist whose time-interval contains t.

Figure 2: Sublist Materialization (postings 1-10 of documents d1, d2, d3 over time boundaries t1, ..., t10)

We illustrate the idea of sublist materialization using an example shown in Figure 2. The index list Lv visualized in the figure contains a total of 10 postings from three documents d1, d2, and d3. For ease of description, we have numbered boundaries of validity time-intervals, in increasing time-order, as t1, ..., t10 and numbered the postings themselves as 1, ..., 10. Now, consider the processing of a query q^t with t ∈ [t1, t2) using this inverted list. Although only three postings (postings 1, 5 and 8) are valid at time t, the whole inverted list has to be read in the worst case. Suppose that we split the time axis of the list at time t2, forming two sublists with postings {1, 5, 8} and {2, 3, 4, 5, 6, 7, 8, 9, 10} respectively. Then, we can process the above query with optimal cost by reading only those postings that existed at this t.

At a first glance, it may seem counterintuitive to reduce index size in the first step (using temporal coalescing), and then to increase it again using the sublist materialization techniques presented in this section. However, we reiterate that our main objective is to improve the efficiency of processing queries, not to reduce the index size alone. The use of temporal coalescing improves the performance by reducing the index size, while the sublist materialization improves performance by judiciously replicating entries. Further, the two techniques can be applied separately and are independent. If applied in conjunction, though, there is a synergetic effect: sublists that are materialized from a temporally coalesced index are generally smaller.

We employ the notation Lv : [ti, tj) to refer to the materialized sublist for the time interval [ti, tj), which is formally defined as

Lv : [ti, tj) = { (d, p, [tb, te)) ∈ Lv | tb < tj ∧ te > ti } .

To aid the presentation in the rest of the paper, we first provide some definitions. Let T = ⟨ t1, ..., tn ⟩ be the sorted sequence of all unique time-interval boundaries of an inverted list Lv. Then we define

E = { [ti, ti+1) | 1 ≤ i < n }

to be the set of elementary time intervals. We refer to the set of time intervals for which sublists are materialized as

M ⊆ { [ti, tj) | 1 ≤ i < j ≤ n } ,

and demand

∀ t ∈ [t1, tn) ∃ m ∈ M : t ∈ m ,

i.e., the time intervals in M must completely cover the time interval [t1, tn), so that time-travel queries q^t for all t ∈ [t1, tn) can be processed. We also assume that intervals in M are disjoint. We can make this assumption without ruling out any optimal solution with regard to space or performance defined below. The space required for the materialization of sublists in a set M is defined as

S(M) = Σ_{m ∈ M} |Lv : m| ,

i.e., the total length of all lists in M. Given a set M, we let π([ti, ti+1)) = [tj, tk) ∈ M : [ti, ti+1) ⊆ [tj, tk) denote the time interval that is used to process queries q^t with t ∈ [ti, ti+1). The performance of processing queries q^t for t ∈ [ti, ti+1) inversely depends on its processing cost

PC([ti, ti+1)) = |Lv : π([ti, ti+1))| ,

which is assumed to be proportional to the length of the list Lv : π([ti, ti+1)). Thus, in order to optimize the performance of processing queries we minimize their processing costs.
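The following small Python helpers mirror these definitions; they operate on in-memory postings of the form (doc, payload, t_begin, t_end) and are meant only to make the notation concrete, not to reflect the prototype's data layout.

    def sublist(postings, ta, tb):
        # Materialized sublist Lv:[ta,tb): all postings whose validity interval overlaps [ta, tb)
        return [p for p in postings if p[2] < tb and p[3] > ta]

    def space(postings, M):
        # S(M): total number of postings kept over all materialized sublists
        return sum(len(sublist(postings, ta, tb)) for ta, tb in M)

    def processing_cost(postings, M, t):
        # PC for query time t: length of the materialized sublist whose interval contains t
        ta, tb = next(m for m in M if m[0] <= t < m[1])
        return len(sublist(postings, ta, tb))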
6.1 Performance/Space-Optimal Approaches

One strategy to eliminate the problem of skipped postings is to eagerly materialize sublists for all elementary time intervals, i.e., to choose M = E. In doing so, for every query q^t only postings valid at time t are read and thus the best possible performance is achieved. Therefore, we will refer to this approach as Popt in the remainder. The initial approach described above that keeps only the full list Lv and thus picks M = { [t1, tn) } is referred to as Sopt in the remainder. This approach requires minimal space, since it keeps each posting exactly once. Popt and Sopt are extremes: the former provides the best possible performance but is not space-efficient, the latter requires minimal space but does not provide good performance. The two approaches presented in the rest of this section allow mutually trading off space and performance and can thus be thought of as means to explore the configuration spectrum between the Popt and the Sopt approach.

6.2 Performance-Guarantee Approach

The Popt approach clearly wastes a lot of space materializing many nearly-identical sublists. In the example illustrated in Figure 2, materialized sublists for [t1, t2) and [t2, t3) differ only by one posting. If the sublist for [t1, t3) was materialized instead, one could save significant space while incurring only an overhead of one skipped posting for all t ∈ [t1, t3). The technique presented next is driven by the idea that significant space savings over Popt are achievable if an upper-bounded loss on the performance can be tolerated, or to put it differently, if a performance guarantee relative to the optimum is to be retained. In detail, the technique, which we refer to as PG (Performance Guarantee) in the remainder, finds a set M that has minimal required space, but guarantees for any elementary time interval [ti, ti+1) (and thus for any query q^t with t ∈ [ti, ti+1)) that performance is worse than optimal by at most a factor of γ ≥ 1. Formally, this problem can be stated as

argmin_M S(M)   s.t.   ∀ [ti, ti+1) ∈ E : PC([ti, ti+1)) ≤ γ · |Lv : [ti, ti+1)| .

An optimal solution to the problem can be computed by means of induction using the recurrence

C([t1, tk+1)) = min { C([t1, tj)) + |Lv : [tj, tk+1)| | 1 ≤ j ≤ k ∧ condition } ,

where C([t1, tj)) is the optimal cost (i.e., the space required) for the prefix subproblem { [ti, ti+1) ∈ E | [ti, ti+1) ⊆ [t1, tj) } and condition stands for

∀ [ti, ti+1) ∈ E : [ti, ti+1) ⊆ [tj, tk+1) ⇒ |Lv : [tj, tk+1)| ≤ γ · |Lv : [ti, ti+1)| .

Intuitively, the recurrence states that an optimal solution for [t1, tk+1) is combined from an optimal solution to a prefix subproblem C([t1, tj)) and a time interval [tj, tk+1) that can be materialized without violating the performance guarantee. Pseudocode of the algorithm is omitted for space reasons, but can be found in the accompanying technical report [5]. The time complexity of the algorithm is in O(n^2): for each prefix subproblem the above recurrence must be evaluated, which is possible in linear time if list sizes |Lv : [ti, tj)| are precomputed. The space complexity is in O(n^2), the cost of keeping the precomputed sublist lengths and memoizing optimal solutions to prefix subproblems.
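A minimal Python sketch of this dynamic program is shown below. For brevity it recomputes sublist sizes on the fly instead of precomputing them, so it does not meet the O(n^2) bound of the actual algorithm; boundaries is the sorted sequence t1, ..., tn, postings are tuples (doc, payload, t_begin, t_end), and the function names are illustrative assumptions.

    def interval_size(postings, ta, tb):
        # |Lv:[ta,tb)|: number of postings overlapping [ta, tb)
        return sum(1 for _, _, s, e in postings if s < tb and e > ta)

    def pg_materialize(boundaries, postings, gamma):
        # Minimal-space set M such that every elementary interval is served by a sublist
        # at most gamma times larger than its own elementary sublist.
        n = len(boundaries)
        elem = [interval_size(postings, boundaries[i], boundaries[i + 1]) for i in range(n - 1)]
        INF = float("inf")
        cost = [INF] * n          # cost[k]: minimal space to cover [t1, t_{k+1})
        back = [None] * n
        cost[0] = 0
        for k in range(1, n):
            for j in range(k):    # candidate last interval [t_{j+1}, t_{k+1})
                size = interval_size(postings, boundaries[j], boundaries[k])
                ok = all(size <= gamma * elem[i] for i in range(j, k))
                if ok and cost[j] + size < cost[k]:
                    cost[k], back[k] = cost[j] + size, j
        M, k = [], n - 1          # reconstruct the chosen intervals
        while k > 0:
            M.append((boundaries[back[k]], boundaries[k]))
            k = back[k]
        return list(reversed(M)), cost[n - 1]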
6.3 Space-Bound Approach

So far we considered the problem of materializing sublists that give a guarantee on performance while requiring minimal space. In many situations, though, the storage space is at a premium and the aim would be to materialize a set of sublists that optimizes expected performance while not exceeding a given space limit. The technique presented next, which is named SB, tackles this very problem. The space restriction is modeled by means of a user-specified parameter κ ≥ 1 that limits the maximum allowed blowup in index size from the space-optimal solution provided by Sopt. The SB technique seeks to find a set M that adheres to this space limit but minimizes the expected processing cost (and thus optimizes the expected performance). In the definition of the expected processing cost, P([ti, ti+1)) denotes the probability of a query time-point being in [ti, ti+1). Formally, this space-bound sublist-materialization problem can be stated as

argmin_M Σ_{[ti, ti+1) ∈ E} P([ti, ti+1)) · PC([ti, ti+1))   s.t.   Σ_{m ∈ M} |Lv : m| ≤ κ · |Lv| .

The problem can be solved by using dynamic programming over an increasing number of time intervals: At each time interval in E the algorithm decides whether to start a new materialization time-interval, using the known best materialization decision from the previous time intervals, and keeping track of the required space consumption for materialization. A detailed description of the algorithm is omitted here, but can be found in the accompanying technical report [5]. Unfortunately, the algorithm has time complexity in O(n^3 |Lv|) and its space complexity is in O(n^2 |Lv|), which is not practical for large data sets. We obtain an approximate solution to the problem using simulated annealing [22, 23]. Simulated annealing takes a fixed number R of rounds to explore the solution space. In each round a random successor of the current solution is looked at. If the successor does not adhere to the space limit, it is always rejected (i.e., the current solution is kept). A successor adhering to the space limit is always accepted if it achieves lower expected processing cost than the current solution. If it achieves higher expected processing cost, it is randomly accepted with probability e^{−Δ/r} where Δ is the increase in expected processing cost and R ≥ r ≥ 1 denotes the number of remaining rounds. In addition, throughout all rounds, the method keeps track of the best solution seen so far. The solution space for the problem at hand can be efficiently explored. As we argued above, we solely have to look at sets M that completely cover the time interval [t1, tn) and do not contain overlapping time intervals. We represent such a set M as an array of n boolean variables b1 ... bn that convey the boundaries of time intervals in the set. Note that b1 and bn are always set to true. Initially, all n − 2 intermediate variables assume false, which corresponds to the set M = { [t1, tn) }. A random successor can now be easily generated by switching the value of one of the n − 2 intermediate variables. The time complexity of the method is in O(n^2), since the expected processing cost must be computed in each round. Its space complexity is in O(n), for keeping the n boolean variables. As a side remark note that for κ = 1.0 the SB method does not necessarily produce the solution that is obtained from Sopt, but may produce a solution that requires the same amount of space while achieving better expected performance.
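The simulated-annealing search can be sketched in Python as follows. The helper functions, the posting tuple layout, and the absence of any caching or incremental cost updates are simplifying assumptions made for illustration; a production implementation would avoid recomputing sizes from scratch in every round.

    import math
    import random

    def sb_materialize(boundaries, postings, kappa, P, rounds=50000, seed=0):
        # P[i]: probability of the query time falling into the i-th elementary interval.
        n = len(boundaries)
        limit = kappa * len(postings)          # space limit kappa * |Lv|
        rng = random.Random(seed)

        def size(ta, tb):
            return sum(1 for _, _, s, e in postings if s < tb and e > ta)

        def intervals(bits):
            idx = [i for i, b in enumerate(bits) if b]
            return [(boundaries[a], boundaries[b]) for a, b in zip(idx, idx[1:])]

        def space(M):
            return sum(size(ta, tb) for ta, tb in M)

        def expected_cost(M):
            cost = 0.0
            for i in range(n - 1):             # i-th elementary interval
                ta, tb = next(m for m in M if m[0] <= boundaries[i] < m[1])
                cost += P[i] * size(ta, tb)
            return cost

        bits = [True] + [False] * (n - 2) + [True]     # start from M = {[t1, tn)} (Sopt)
        cur_cost = best_cost = expected_cost(intervals(bits))
        cur_bits = best_bits = list(bits)
        for r in range(rounds, 0, -1):                 # r = number of remaining rounds
            if n <= 2:
                break                                  # a single elementary interval: nothing to vary
            cand = list(cur_bits)
            cand[rng.randrange(1, n - 1)] ^= True      # flip one intermediate boundary variable
            M = intervals(cand)
            if space(M) > limit:
                continue                               # violates the space limit: reject
            c = expected_cost(M)
            if c <= cur_cost or rng.random() < math.exp(-(c - cur_cost) / r):
                cur_cost, cur_bits = c, cand
                if c < best_cost:
                    best_cost, best_bits = c, cand
        return intervals(best_bits), best_cost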
7. EXPERIMENTAL EVALUATION

We conducted a comprehensive series of experiments on two real-world datasets to evaluate the techniques proposed in this paper.

7.1 Setup and Datasets

The techniques described in this paper were implemented in a prototype system using Java JDK 1.5. All experiments described below were run on a single SUN V40z machine having four AMD Opteron CPUs, 16GB RAM, a large network-attached RAID-5 disk array, and running Microsoft Windows Server 2003. All data and indexes are kept in an Oracle 10g database that runs on the same machine. For our experiments we used two different datasets. The English Wikipedia revision history (referred to as WIKI in the remainder) is available for free download as a single XML file. This large dataset, totaling 0.7 TBytes, contains the full editing history of the English Wikipedia from January 2001 to December 2005 (the time of our download). We indexed all encyclopedia articles excluding versions that were marked as the result of a minor edit (e.g., the correction of spelling errors etc.). This yielded a total of 892,255 documents with 13,976,915 versions, having a mean (µ) of 15.67 versions per document at a standard deviation (σ) of 59.18. We built a time-travel query workload using the query log temporarily made available recently by AOL Research as follows: we first extracted the 300 most frequent keyword queries that yielded a result click on a Wikipedia article (e.g., french revolution, hurricane season 2005, da vinci code etc.). The thus extracted queries contained a total of 422 distinct terms. For each extracted query, we randomly picked a time point for each month covered by the dataset. This resulted in a total of 18,000 (= 300 × 60) time-travel queries.

The second dataset used in our experiments was based on a subset of the European Archive [13], containing weekly crawls of 11 .gov.uk websites throughout the years 2004 and 2005 amounting close to 2 TBytes of raw data. We filtered out documents not belonging to MIME-types text/plain and text/html, to obtain a dataset that totals 0.4 TBytes and which we refer to as UKGOV in the rest of the paper. This included a total of 502,617 documents with 8,687,108 versions (µ = 17.28 and σ = 13.79). We built a corresponding query workload as mentioned before, this time choosing keyword queries that led to a site in the .gov.uk domain (e.g., minimum wage, inheritance tax, citizenship ceremony dates etc.), and randomly sampling a time point for every month within the two year period spanned by the dataset. Thus, we obtained a total of 7,200 (= 300 × 24) time-travel queries for the UKGOV dataset. In total 522 terms appear in the extracted queries. The collection statistics (i.e., N and avdl) and term statistics (i.e., df) were computed at monthly granularity for both datasets.

7.2 Impact of Temporal Coalescing

Our first set of experiments is aimed at evaluating the approximate temporal coalescing technique, described in Section 5, in terms of index-size reduction and its effect on the result quality. For both the WIKI and UKGOV datasets, we compare temporally coalesced indexes for different values of the error threshold ε, computed using Algorithm 1, with the non-coalesced index as a baseline.

         WIKI                          UKGOV
ε        # Postings       Ratio       # Postings       Ratio
-        8,647,996,223    100.00%     7,888,560,482    100.00%
0.00     7,769,776,831     89.84%     2,926,731,708     37.10%
0.01     1,616,014,825     18.69%       744,438,831      9.44%
0.05       556,204,068      6.43%       259,947,199      3.30%
0.10       379,962,802      4.39%       187,387,342      2.38%
0.25       252,581,230      2.92%       158,107,198      2.00%
0.50       203,269,464      2.35%       155,434,617      1.97%

Table 1: Index sizes for non-coalesced index (-) and coalesced indexes for different values of ε

Table 1 summarizes the index sizes measured as the total number of postings. As these results demonstrate, approximate temporal coalescing is highly effective in reducing index size. Even a small threshold value, e.g. ε = 0.01, has a considerable effect by reducing the index size almost by an order of magnitude. Note that on the UKGOV dataset, even accurate coalescing (ε = 0) manages to reduce the index size to less than 38% of the original size. Index size continues to reduce on both datasets, as we increase the value of ε. How does the reduction in index size affect the query results?
In order to evaluate this aspect, we compared the top-k results computed using a coalesced index against the ground-truth result obtained from the original index, for different cutoff levels k. Let Gk and Ck be the top-k documents from the ground-truth result and from the coalesced index respectively. We used the following two measures for comparison: (i) Relative Recall at cutoff level k (RR@k), which measures the overlap between Gk and Ck, ranges in [0, 1] and is defined as RR@k = |Gk ∩ Ck| / k. (ii) Kendall's τ (see [7, 14] for a detailed definition) at cutoff level k (KT@k), measuring the agreement between two results in the relative order of items in Gk ∩ Ck, with value 1 (or -1) indicating total agreement (or disagreement).

Figure 3 plots, for cutoff levels 10 and 100, the mean of RR@k and KT@k along with 5% and 95% percentiles, for different values of the threshold ε starting from 0.01.

Figure 3: Relative recall and Kendall's τ observed on coalesced indexes for different values of ε: (a) cutoff level 10, (b) cutoff level 100 (WIKI and UKGOV)

Note that for ε = 0, results coincide with those obtained by the original index, and hence are omitted from the graph. It is reassuring to see from these results that approximate temporal coalescing induces minimal disruption to the query results, since RR@k and KT@k are within reasonable limits. For ε = 0.01, the smallest value of ε in our experiments, RR@100 for WIKI is 0.98, indicating that the results are almost indistinguishable from those obtained through the original index. Even the relative order of these common results is quite high, as the mean KT@100 is close to 0.95. For the extreme value of ε = 0.5, which results in an index size of just 2.35% of the original, the RR@100 and KT@100 are about 0.8 and 0.6 respectively. On the relatively less dynamic UKGOV dataset (as can be seen from the σ values above), results were even better, with high values of RR and KT seen throughout the spectrum of ε values for both cutoff values.
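For reference, the two measures can be computed as in the short Python sketch below. The Kendall's τ shown here is a simple variant restricted to the documents common to both top-k lists; [7, 14] discuss more refined definitions for comparing top-k lists, so this is an illustrative approximation rather than the exact measure used in the evaluation.

    from itertools import combinations

    def relative_recall(G, C, k):
        # RR@k: fraction of the ground-truth top-k that also appears in the coalesced top-k
        return len(set(G[:k]) & set(C[:k])) / k

    def kendall_tau(G, C, k):
        # Kendall's tau over the common documents: +1 total agreement, -1 total disagreement
        common = set(G[:k]) & set(C[:k])
        rg = {d: r for r, d in enumerate(G[:k])}
        rc = {d: r for r, d in enumerate(C[:k])}
        pairs = list(combinations(common, 2))
        if not pairs:
            return 1.0
        concordant = sum(1 for a, b in pairs if (rg[a] - rg[b]) * (rc[a] - rc[b]) > 0)
        return (2.0 * concordant - len(pairs)) / len(pairs)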
7.3 Sublist Materialization

We now turn our attention towards evaluating the sublist materialization techniques introduced in Section 6. For both datasets, we started with the coalesced index produced by a moderate threshold setting of ε = 0.10. In order to reduce the computational effort, boundaries of elementary time intervals were rounded to day granularity before computing the sublist materializations. However, note that the postings in the materialized sublists still retain their original timestamps. For a comparative evaluation of the four approaches (Popt, Sopt, PG, and SB) we measure space and performance as follows. The required space S(M), as defined earlier, is equal to the total number of postings in the materialized sublists. To assess performance we compute the expected processing cost (EPC) for all terms in the respective query workload assuming a uniform probability distribution among query time-points. We report the mean EPC, as well as the 5%- and 95%-percentile. In other words, the mean EPC reflects the expected length of the index list (in terms of index postings) that needs to be scanned for a random time point and a random term from the query workload. The Sopt and Popt approaches are, by their definition, parameter-free. For the PG approach, we varied its parameter γ, which limits the maximal performance degradation, between 1.0 and 3.0. Analogously, for the SB approach the parameter κ, as an upper-bound on the allowed space blowup, was varied between 1.0 and 3.0. Solutions for the SB approach were obtained running simulated annealing for R = 50,000 rounds. Table 2 lists the obtained space and performance figures. Note that EPC values are smaller on WIKI than on UKGOV, since terms in the query workload employed for WIKI are relatively rarer in the corpus. Based on the depicted results, we make the following key observations.

(i) As expected, Popt achieves optimal performance at the cost of an enormous space consumption. Sopt, to the contrary, while consuming an optimal amount of space, provides only poor expected processing cost. The PG and SB methods, for different values of their respective parameter, produce solutions whose space and performance lie in between the extremes that Popt and Sopt represent.
(ii) For the PG method we see that for an acceptable performance degradation of only 10% (i.e., γ = 1.10) the required space drops by more than one order of magnitude in comparison to Popt on both datasets.
(iii) The SB approach achieves close-to-optimal performance on both datasets, if allowed to consume at most three times the optimal amount of space (i.e., κ = 3.0), which on our datasets still corresponds to a space reduction over Popt by more than one order of magnitude.

We also measured wall-clock times on a sample of the queries with results indicating improvements in execution time by up to a factor of 12.

             WIKI                                                 UKGOV
             S(M)             EPC 5%   EPC Mean   EPC 95%         S(M)             EPC 5%   EPC Mean   EPC 95%
Popt         54,821,634,137    11.22    3,132.29   15,658.42      21,372,607,052    39.93   15,593.60   66,938.86
Sopt            379,962,802   114.05   30,186.52  149,820.1          187,387,342    63.15   22,852.67  102,923.85
PG γ=1.10     3,814,444,654    11.30    3,306.71   16,512.88       1,155,833,516    40.66   16,105.61   71,134.99
PG γ=1.25     1,827,163,576    12.37    3,629.05   18,120.86         649,884,260    43.62   17,059.47   75,749.00
PG γ=1.50     1,121,661,751    13.96    4,128.03   20,558.60         436,578,665    46.68   18,379.69   78,115.89
PG γ=1.75       878,959,582    15.48    4,560.99   22,476.77         345,422,898    51.26   19,150.06   82,028.48
PG γ=2.00       744,381,287    16.79    4,992.53   24,637.62         306,944,062    51.48   19,499.78   87,136.31
PG γ=2.50       614,258,576    18.28    5,801.66   28,849.02         269,178,107    53.36   20,279.62   87,897.95
PG γ=3.00       552,796,130    21.04    6,485.44   32,361.93         247,666,812    55.95   20,800.35   89,591.94
SB κ=1.10       412,383,387    38.97   12,723.68   60,350.60         194,287,671    63.09   22,574.54  102,208.58
SB κ=1.25       467,537,173    26.87    9,011.81   45,119.08         204,454,800    57.42   22,036.39   95,337.33
SB κ=1.50       557,341,140    19.84    6,699.36   32,810.85         246,323,383    53.24   20,566.68   91,691.38
SB κ=1.75       647,187,522    16.59    5,769.40   28,272.89         296,345,976    49.56   19,065.99   84,377.44
SB κ=2.00       737,819,354    15.86    5,358.99   27,112.01         336,445,773    47.58   18,569.08   81,386.02
SB κ=2.50       916,308,766    13.99    4,639.77   23,037.59         427,122,038    44.89   17,153.94   74,449.28
SB κ=3.00     1,094,973,140    13.01    4,343.72   22,708.37         511,470,192    42.15   16,772.65   72,307.43

Table 2: Required space and expected processing cost (in # postings) observed on coalesced indexes (ε = 0.10)

8. CONCLUSIONS

In this work we have developed an efficient solution for time-travel text search over temporally versioned document collections. Experiments on two real-world datasets showed that a combination of the proposed techniques can reduce index size by up to an order of magnitude while achieving nearly optimal performance and highly accurate results. The present work opens up many interesting questions for future research, e.g.: How can we even further improve performance by applying (and possibly extending) encoding, compression, and skipping techniques [35]? How can we extend the approach for queries q^{[tb, te]} specifying a time interval instead of a time point? How can the described time-travel text search functionality enable or speed up text mining along the time axis (e.g., tracking sentiment changes in customer opinions)?

9. ACKNOWLEDGMENTS

We are grateful to the anonymous reviewers for their valuable comments, in particular to the reviewer who pointed out the opportunity for algorithmic improvements in Section 5 and Section 6.2.

10. REFERENCES

[1] V. N. Anh and A. Moffat. Pruned Query Evaluation Using Pre-Computed Impacts. In SIGIR, 2006.
[2] V. N. Anh and A. Moffat. Pruning Strategies for Mixed-Mode Querying. In CIKM, 2006.
[3] P. G. Anick and R. A. Flynn. Versioning a Full-Text Information Retrieval System. In SIGIR, 1992.
[4] R. A. Baeza-Yates and B. Ribeiro-Neto. Modern Information Retrieval. Addison-Wesley, 1999.
[5] K. Berberich, S. Bedathur, T. Neumann, and G. Weikum. A Time Machine for Text Search. Technical Report MPI-I-2007-5-002, Max-Planck Institute for Informatics, 2007.
[6] M. H. Böhlen, R. T. Snodgrass, and M. D. Soo. Coalescing in Temporal Databases. In VLDB, 1996.
[7] P. Boldi, M. Santini, and S. Vigna. Do Your Worst to Make the Best: Paradoxical Effects in PageRank Incremental Computations. In WAW, 2004.
[8] A. Z. Broder, N. Eiron, M. Fontoura, M. Herscovici, R. Lempel, J. McPherson, R. Qi, and E. J. Shekita. Indexing Shared Content in Information Retrieval Systems. In EDBT, 2006.
[9] C. Buckley and A. F. Lewit. Optimization of Inverted Vector Searches. In SIGIR, 1985.
[10] M. Burrows and A. L. Hisgen. Method and Apparatus for Generating and Searching Range-Based Index of Word Locations. U.S. Patent 5,915,251, 1999.
[11] S. Büttcher and C. L. A. Clarke. A Document-Centric Approach to Static Index Pruning in Text Retrieval Systems. In CIKM, 2006.
[12] D. Carmel, D. Cohen, R. Fagin, E. Farchi, M. Herscovici, Y. S. Maarek, and A. Soffer. Static Index Pruning for Information Retrieval Systems. In SIGIR, 2001.
[13] http://www.europarchive.org.
[14] R. Fagin, R. Kumar, and D. Sivakumar. Comparing Top k Lists. SIAM J. Discrete Math., 17(1):134-160, 2003.
[15] R. Fagin, A. Lotem, and M. Naor. Optimal Aggregation Algorithms for Middleware. J. Comput. Syst. Sci., 66(4):614-656, 2003.
[16] S. Guha, K. Shim, and J. Woo. REHIST: Relative Error Histogram Construction Algorithms. In VLDB, 2004.
[17] M. Hersovici, R. Lempel, and S. Yogev. Efficient Indexing of Versioned Document Sequences. In ECIR, 2007.
[18] http://www.archive.org.
[19] Y. E. Ioannidis and V. Poosala. Balancing Histogram Optimality and Practicality for Query Result Size Estimation. In SIGMOD, 1995.
[20] H. V. Jagadish, N. Koudas, S. Muthukrishnan, V. Poosala, K. C. Sevcik, and T. Suel. Optimal Histograms with Quality Guarantees. In VLDB, 1998.
[21] E. J. Keogh, S. Chu, D. Hart, and M. J. Pazzani. An Online Algorithm for Segmenting Time Series. In ICDM, 2001.
[22] S. Kirkpatrick, C. D. Gelatt Jr., and M. P. Vecchi. Optimization by Simulated Annealing. Science, 220(4598):671-680, 1983.
[23] J. Kleinberg and E. Tardos. Algorithm Design. Addison-Wesley, 2005.
[24] U. Manber. Introduction to Algorithms: A Creative Approach. Addison-Wesley, 1989.
[25] K. Nørvåg and A. O. N. Nybø. DyST: Dynamic and Scalable Temporal Text Indexing. In TIME, 2006.
[26] J. M. Ponte and W. B. Croft. A Language Modeling Approach to Information Retrieval. In SIGIR, 1998.
[27] S. E. Robertson and S. Walker. Okapi/Keenbow at TREC-8. In TREC, 1999.
[28] B. Salzberg and V. J. Tsotras. Comparison of Access Methods for Time-Evolving Data. ACM Comput. Surv., 31(2):158-221, 1999.
[29] M. Stack. Full Text Search of Web Archive Collections. In IWAW, 2006.
[30] E. Terzi and P. Tsaparas. Efficient Algorithms for Sequence Segmentation. In SIAM-DM, 2006.
[31] M. Theobald, G. Weikum, and R. Schenkel. Top-k Query Evaluation with Probabilistic Guarantees. In VLDB, 2004.
[32] http://www.wikipedia.org.
[33] I. H. Witten, A. Moffat, and T. C. Bell. Managing Gigabytes: Compressing and Indexing Documents and Images. Morgan Kaufmann Publishers Inc., 1999.
[34] J. Zhang and T. Suel. Efficient Search in Large Textual Collections with Redundancy. In WWW, 2007.
[35] J. Zobel and A. Moffat. Inverted Files for Text Search Engines. ACM Comput. Surv., 38(2):6, 2006.
A Time Machine for Text Search ABSTRACT Text search over temporally versioned document collections such as web archives has received little attention as a research problem. As a consequence, there is no scalable and principled solution to search such a collection as of a specified time t. In this work, we address this shortcoming and propose an efficient solution for time-travel text search by extending the inverted file index to make it ready for temporal search. We introduce approximate temporal coalescing as a tunable method to reduce the index size without significantly affecting the quality of results. In order to further improve the performance of time-travel queries, we introduce two principled techniques to trade off index size for its performance. These techniques can be formulated as optimization problems that can be solved to near-optimality. Finally, our approach is evaluated in a comprehensive series of experiments on two large-scale real-world datasets. Results unequivocally show that our methods make it possible to build an efficient "time machine" scalable to large versioned text collections. 1. INTRODUCTION In this work we address time-travel text search over temporally versioned document collections. Given a keyword query q and a time t our goal is to identify and rank relevant documents as if the collection was in its state as of time t. An increasing number of such versioned document collections is available today including web archives, collabo rative authoring environments like Wikis, or timestamped information feeds. Text search on these collections, however, is mostly time-ignorant: while the searched collection changes over time, often only the most recent version of a documents is indexed, or, versions are indexed independently and treated as separate documents. Even worse, for some collections, in particular web archives like the Internet Archive [18], a comprehensive text-search functionality is often completely missing. Time-travel text search, as we develop it in this paper, is a crucial tool to explore these collections and to unfold their full potential as the following example demonstrates. For a documentary about a past political scandal, a journalist needs to research early opinions and statements made by the involved politicians. Sending an appropriate query to a major web search-engine, the majority of returned results contains only recent coverage, since many of the early web pages have disappeared and are only preserved in web archives. If the query could be enriched with a time point, say August 20th 2003 as the day after the scandal got revealed, and be issued against a web archive, only pages that existed specifically at that time could be retrieved thus better satisfying the journalist's information need. Document collections like the Web or Wikipedia [32], as we target them here, are already large if only a single snapshot is considered. Looking at their evolutionary history, we are faced with even larger data volumes. As a consequence, na ¨ ıve approaches to time-travel text search fail, and viable approaches must scale-up well to such large data volumes. This paper presents an efficient solution to time-travel text search by making the following key contributions: 1. The popular well-studied inverted file index [35] is transparently extended to enable time-travel text search. 2. Temporal coalescing is introduced to avoid an indexsize explosion while keeping results highly accurate. 3. 
We develop two sublist materialization techniques to improve index performance that allow trading off space vs. performance. 4. In a comprehensive experimental evaluation our approach is evaluated on the English Wikipedia and parts of the Internet Archive as two large-scale real-world datasets with versioned documents. The remainder of this paper is organized as follows. The presented work is put in context with related work in Section 2. We delineate our model of a temporally versioned document collection in Section 3. We present our time-travel inverted index in Section 4. Building on it, temporal coalescing is described in Section 5. In Section 6 we describe principled techniques to improve index performance, before presenting the results of our experimental evaluation in Section 7. 2. RELATED WORK We can classify the related work mainly into the following two categories: (i) methods that deal explicitly with collections of versioned documents or temporal databases, and (ii) methods for reducing the index size by exploiting either the document-content overlap or by pruning portions of the index. We briefly review work under these categories here. To the best of our knowledge, there is very little prior work dealing with historical search over temporally versioned documents. Anick and Flynn [3], while pioneering this research, describe a help-desk system that supports historical queries. Access costs are optimized for accesses to the most recent versions and increase as one moves farther into the past. Burrows and Hisgen [10], in a patent description, delineate a method for indexing range-based values and mention its potential use for searching based on dates associated with documents. Recent work by Nørv˚ag and Nybø [25] and their earlier proposals concentrate on the relatively simpler problem of supporting text-containment queries only and neglect the relevance scoring of results. Stack [29] reports practical experiences made when adapting the open source search-engine Nutch to search web archives. This adaptation, however, does not provide the intended time-travel text search functionality. In contrast, research in temporal databases has produced several index structures tailored for time-evolving databases; a comprehensive overview of the state-of-art is available in [28]. Unlike the inverted file index, their applicability to text search is not well understood. Moving on to the second category of related work, Broder et al. [8] describe a technique that exploits large content overlaps between documents to achieve a reduction in index size. Their technique makes strong assumptions about the structure of document overlaps rendering it inapplicable to our context. More recent approaches by Hersovici et al. [17] and Zhang and Suel [34] exploit arbitrary content overlaps between documents to reduce index size. None of the approaches, however, considers time explicitly or provides the desired time-travel text search functionality. Static indexpruning techniques [11, 12] aim to reduce the effective index size, by removing portions of the index that are expected to have low impact on the query result. They also do not consider temporal aspects of documents, and thus are technically quite different from our proposal despite having a shared goal of index-size reduction. It should be noted that index-pruning techniques can be adapted to work along with the temporal text index we propose here. 3. 
MODEL In the present work, we deal with a temporally versioned document collection D that is modeled as described in the following. Each document d E D is a sequence of its versions d = (dt1, dt2, ...). Each version d ti has an associated timestamp ti reflecting when the version was created. Each version is a vector of searchable terms or features. Any modification to a document version results in the insertion of a new version with corresponding timestamp. We employ a discrete definition of time, so that timestamps are non-negative integers. The deletion of a document at time ti, i.e., its disappearance from the current state of the collection, is modeled as the insertion of a special "tombstone" version 1. The validity time-interval val (d ti) of a version d ti is [ti, ti +1), if a newer version with associated timestamp ti +1 exists, and [ti, now) otherwise where now points to the greatest possible value of a timestamp (i.e., ` dt: t <now). Putting all this together, we define the state D t of the collection at time t (i.e., the set of versions valid at t that are not deletions) as As mentioned earlier, we want to enrich a keyword query q with a timestamp t, so that q be evaluated over D t, i.e., the state of the collection at time t. The enriched time-travel query is written as q t for brevity. As a retrieval model in this work we adopt Okapi BM25 [27], but note that the proposed techniques are not dependent on this choice and are applicable to other retrieval models like tf-idf [4] or language models [26] as well. For our considered setting, we slightly adapt Okapi BM25 as w (q t, d ti) = X wtf (v, d ti). widf (v, t). v ∈ q In the above formula, the relevance w (q t, d ti) of a document version dti to the time-travel query q t is defined. We reiterate that q t is evaluated over D t so that only the version d ti valid at time t is considered. The first factor wtf (v, d ti) in the summation, further referred to as the tfscore is defined as It considers the plain term frequency tf (v, d ti) of term v in version d ti normalizing it, taking into account both the length dl (d ti) of the version and the average document length avdl (ti) in the collection at time ti. The length-normalization parameter b and the tf-saturation parameter k1 are inherited from the original Okapi BM25 and are commonly set to values 1.2 and 0.75 respectively. The second factor widf (v, t), which we refer to as the idf-score in the remainder, conveys the inverse document frequency of term v in the collection at time t and is defined as where N (t) = ID tI is the collection size at time t and df (v, t) gives the number of documents in the collection that contain the term v at time t. While the idf-score depends on the whole corpus as of the query time t, the tf-score is specific to each version. 4. TIME-TRAVEL INVERTED FILE INDEX The inverted file index is a standard technique for text indexing, deployed in many systems. In this section, we briefly review this technique and present our extensions to the inverted file index that make it ready for time-travel text search. 4.1 Inverted File Index An inverted file index consists of a vocabulary, commonly organized as a B + - Tree, that maps each term to its idfscore and inverted list. The index list Lv belonging to term v contains postings of the form where d is a document-identifier and p is the so-called payload. 
The payload p contains information about the term frequency of v in d, but may also include positional information about where the term appears in the document. The sort-order of index lists depends on which queries are to be supported efficiently. For Boolean queries it is favorable to sort index lists in document-order. Frequencyorder and impact-order sorted index lists are beneficial for ranked queries and enable optimized query processing that stops early after having identified the k most relevant documents [1, 2, 9, 15, 31]. A variety of compression techniques, such as encoding document identifiers more compactly, have been proposed [33, 35] to reduce the size of index lists. For an excellent recent survey about inverted file indexes we refer to [35]. 4.2 Time-Travel Inverted File Index In order to prepare an inverted file index for time travel we extend both inverted lists and the vocabulary structure by explicitly incorporating temporal information. The main idea for inverted lists is that we include a validity timeinterval [tb, te) in postings to denote when the payload information was valid. The postings in our time-travel inverted file index are thus of the form (d, p, [tb, te)) where d and p are defined as in the standard inverted file index above and [tb, te) is the validity time-interval. As a concrete example, in our implementation, for a version d ti having the Okapi BM25 tf-score wtf (v, d ti) for term v, the index list Lv contains the posting (d, wtf (v, d ti), [ti, ti +1)). Similarly, the extended vocabulary structure maintains for each term a time-series of idf-scores organized as a B+T ree. Unlike the tf-score, the idf-score of every term could vary with every change in the corpus. Therefore, we take a simplified approach to idf-score maintenance, by computing idf-scores for all terms in the corpus at specific (possibly periodic) times. 4.3 Query Processing During processing of a time-travel query q t, for each query term the corresponding idf-score valid at time t is retrieved from the extended vocabulary. Then, index lists are sequentially read from disk, thereby accumulating the information contained in the postings. We transparently extend the sequential reading, which is--to the best of our knowledge--common to all query processing techniques on inverted file indexes, thus making them suitable for time-travel queryprocessing. To this end, sequential reading is extended by skipping all postings whose validity time-interval does not contain t (i.e., t ∈ 6 [tb, te)). Whether a posting can be skipped can only be decided after the posting has been transferred from disk into memory and therefore still incurs significant I/O cost. As a remedy, we propose index organization techniques in Section 6 that aim to reduce the I/O overhead significantly. We note that our proposed extension of the inverted file index makes no assumptions about the sort-order of index lists. As a consequence, existing query-processing techniques and most optimizations (e.g., compression techniques) remain equally applicable. 5. TEMPORAL COALESCING If we employ the time-travel inverted index, as described in the previous section, to a versioned document collection, we obtain one posting per term per document version. For frequent terms and large highly-dynamic collections, this Figure 1: Approximate Temporal Coalescing leads to extremely long index lists with very poor queryprocessing performance. 
The approximate temporal coalescing technique that we propose in this section counters this blowup in index-list size. It builds on the observation that most changes in a versioned document collection are minor, leaving large parts of the document untouched. As a consequence, the payload of many postings belonging to temporally adjacent versions will differ only slightly or not at all. Approximate temporal coalescing reduces the number of postings in an index list by merging such a sequence of postings that have almost equal payloads, while keeping the maximal error bounded. This idea is illustrated in Figure 1, which plots non-coalesced and coalesced scores of postings belonging to a single document. Approximate temporal coalescing is greatly effective given such fluctuating payloads and reduces the number of postings from 9 to 3 in the example. The notion of temporal coalescing was originally introduced in temporal database research by B ¨ ohlen et al. [6], where the simpler problem of coalescing only equal information was considered. We next formally state the problem dealt with in approximate temporal coalescing, and discuss the computation of optimal and approximate solutions. Note that the technique is applied to each index list separately, so that the following explanations assume a fixed term v and index list Lv. As an input we are given a sequence of temporally adjacent postings Each sequence represents a contiguous time period during which the term was present in a single document d. If a term disappears from d but reappears later, we obtain multiple input sequences that are dealt with separately. We seek to generate the minimal length output sequence of postings that adheres to the following constraints: First, O and I must cover the same time-range, i.e., ti = tj and tn = tm. Second, when coalescing a subsequence of postings of the input into a single posting of the output, we want the approximation error to be below a threshold ~. In other words, if (d, pi, [ti, ti +1)) and (d, pj, [tj, tj +1)) are postings of I and O respectively, then the following must hold for a chosen error function and a threshold ~: tj <ti n ti +1 <tj +1 =:>. error (pi, pj) <~. In this paper, as an error function we employ the relative error between payloads (i.e., tf-scores) of a document in I and O, defined as: errrel (pi, pj) = Ipi--pjI / IpiI. Finding an optimal output sequence of postings can be cast into finding a piecewise-constant representation for the points (ti, pi) that uses a minimal number of segments while retaining the above approximation guarantee. Similar problems occur in time-series segmentation [21, 30] and histogram construction [19, 20]. Typically dynamic programming is score applied to obtain an optimal solution in O (n2 m *) [20, 30] time with m * being the number of segments in an optimal sequence. In our setting, as a key difference, only a guarantee on the local error is retained--in contrast to a guarantee on the global error in the aforementioned settings. Exploiting this fact, an optimal solution is computable by means of induction [24] in O (n2) time. Details of the optimal algorithm are omitted here but can be found in the accompanying technical report [5]. The quadratic complexity of the optimal algorithm makes it inappropriate for the large datasets encountered in this work. As an alternative, we introduce a linear-time approximate algorithm that is based on the sliding-window algorithm given in [21]. 
This algorithm produces nearly-optimal output sequences that retain the bound on the relative error, but possibly require a few additional segments more than an optimal solution. Algorithm 1 Temporal Coalescing (Approximate) 1: I = ((d, pi, [ti, ti +1)), ...) O = () 2: pmin = pi pmax = pi p = pi tb = ti te = ti +1 3: for (d, pj, [tj, tj +1)) ∈ I do 4: pnamin = min (pmin, pj) pmmax = max (pmax, pj) 5: p0 = optrep (p' min, pm, max) 6: if errrel (pmin, pI) <e ∧ errrel (pmax, p) <e then 7: pmin = pmin pmax = pmax p = p te = tj +1 8: else 9: O = O U (d, p, [tb, te)) 10: pmin = pj pmax = pj p = pj tb = tj te = tj +1 11: end if 12: end for 13: O = O U (d, p, [tb, te)) Algorithm 1 makes one pass over the input sequence I. While doing so, it coalesces sequences of postings having maximal length. The optimal representative for a sequence of postings depends only on their minimal and maximal payload (pmin and pmax) and can be looked up using optrep in O (1) (see [16] for details). When reading the next posting, the algorithm tries to add it to the current sequence of postings. It computes the hypothetical new representative p and checks whether it would retain the approximation guarantee. If this test fails, a coalesced posting bearing the old representative is added to the output sequence O and, following that, the bookkeeping is reinitialized. The time complexity of the algorithm is in O (n). Note that, since we make no assumptions about the sort order of index lists, temporal-coalescing algorithms have an additional preprocessing cost in O (| Lv | log | Lv |) for sorting the index list and chopping it up into subsequences for each document. 6. SUBLIST MATERIALIZATION Efficiency of processing a query q t on our time-travel inverted index is influenced adversely by the wasted I/O due to read but skipped postings. Temporal coalescing implicitly addresses this problem by reducing the overall index list size, but still a significant overhead remains. In this section, we tackle this problem by proposing the idea of materializing sublists each of which corresponds to a contiguous subinterval of time spanned by the full index. Each of these sublists contains all coalesced postings that overlap with the corresponding time interval of the sublist. Note that all those postings whose validity time-interval spans across the temporal boundaries of several sublists are replicated in each of the spanned sublists. Thus, in order to process the query q t and demand i.e., the time intervals in M must completely cover the time interval [t1, tn), so that time-travel queries q t for all t ∈ [t1, tn) can be processed. We also assume that intervals in M are disjoint. We can make this assumption without ruling out any optimal solution with regard to space or performance defined below. The space required for the materialization of sublists in a set M is defined as Figure 2: Sublist Materialization it is sufficient to scan any materialized sublist whose timeinterval contains t. We illustrate the idea of sublist materialization using an example shown in Figure 2. The index list Lv visualized in the figure contains a total of 10 postings from three documents d1, d2, and d3. For ease of description, we have numbered boundaries of validity time-intervals, in increasing time-order, as t1,..., t10 and numbered the postings themselves as 1,..., 10. Now, consider the processing of a query q t with t ∈ [t1, t2) using this inverted list. 
Although only three postings (postings 1, 5 and 8) are valid at time t, the whole inverted list has to be read in the worst case. Suppose that we split the time axis of the list at time t2, forming two sublists with postings {1, 5, 8} and {2, 3, 4, 5, 6, 7, 8, 9, 10} respectively. Then, we can process the above query with optimal cost by reading only those postings that existed at this t. At a first glance, it may seem counterintuitive to reduce index size in the first step (using temporal coalescing), and then to increase it again using the sublist materialization techniques presented in this section. However, we reiterate that our main objective is to improve the efficiency of processing queries, not to reduce the index size alone. The use of temporal coalescing improves the performance by reducing the index size, while the sublist materialization improves performance by judiciously replicating entries. Further, the two techniques, can be applied separately and are independent. If applied in conjunction, though, there is a synergetic effect--sublists that are materialized from a temporally coalesced index are generally smaller. We employ the notation Lv: [ti, tj) to refer to the materialized sublist for the time interval [ti, tj), that is formally defined as, To aid the presentation in the rest of the paper, we first provide some definitions. Let T = (t1...tn) be the sorted sequence of all unique time-interval boundaries of an inverted list Lv. Then we define to be the set of elementary time intervals. We refer to the set of time intervals for which sublists are materialized as i.e., the total length of all lists in M. Given a set M, we let π ([ti, ti +1)) = [tj, tk) ∈ M: [ti, ti +1) ⊆ [tj, tk) denote the time interval that is used to process queries q t with t ∈ [ti, ti +1). The performance of processing queries q t for t ∈ [ti, ti +1) inversely depends on its processing cost which is assumed to be proportional to the length of the list Lv: π ([ti, ti +1)). Thus, in order to optimize the performance of processing queries we minimize their processing costs. 6.1 Performance/Space-Optimal Approaches One strategy to eliminate the problem of skipped postings is to eagerly materialize sublists for all elementary time intervals, i.e., to choose M = E. In doing so, for every query q t only postings valid at time t are read and thus the best possible performance is achieved. Therefore, we will refer to this approach as Popt in the remainder. The initial approach described above that keeps only the full list Lv and thus picks M = {[t1, tn)} is referred to as Sopt in the remainder. This approach requires minimal space, since it keeps each posting exactly once. Popt and Sopt are extremes: the former provides the best possible performance but is not space-efficient, the latter requires minimal space but does not provide good performance. The two approaches presented in the rest of this section allow mutually trading off space and performance and can thus be thought of as means to explore the configuration spectrum between the Popt and the Sopt approach. 6.2 Performance-Guarantee Approach The Popt approach clearly wastes a lot of space materializing many nearly-identical sublists. In the example illustrated in Figure 2 materialized sublists for [t1, t2) and [t2, t3) differ only by one posting. If the sublist for [t1, t3) was materialized instead, one could save significant space while incurring only an overhead of one skipped posting for all t ∈ [t1, t3). 
6.2 Performance-Guarantee Approach
The Popt approach clearly wastes a lot of space materializing many nearly-identical sublists. In the example illustrated in Figure 2, the materialized sublists for [t1, t2) and [t2, t3) differ only by one posting. If the sublist for [t1, t3) were materialized instead, one could save significant space while incurring only an overhead of one skipped posting for all t ∈ [t1, t3). The technique presented next is driven by the idea that significant space savings over Popt are achievable if an upper-bounded loss in performance can be tolerated, or, to put it differently, if a performance guarantee relative to the optimum is to be retained. In detail, the technique, which we refer to as PG (Performance Guarantee) in the remainder, finds a set M that has minimal required space, but guarantees for any elementary time interval [ti, ti+1) (and thus for any query q t with t ∈ [ti, ti+1)) that performance is worse than optimal by at most a factor of γ ≥ 1. Formally, the problem is to choose M so as to minimize S(M) subject to |Lv:π([ti, ti+1))| ≤ γ · |Lv:[ti, ti+1)| for all [ti, ti+1) ∈ E. An optimal solution to the problem can be computed by means of induction using the recurrence C([t1, tk+1)) = min over 1 ≤ j ≤ k of ( C([t1, tj)) + |Lv:[tj, tk+1)| ), where the minimum is taken over those j for which the interval [tj, tk+1) can be materialized without violating the performance guarantee, and where C([t1, tj)) is the optimal cost (i.e., the space required) for the prefix subproblem restricted to [t1, tj). Intuitively, the recurrence states that an optimal solution for [t1, tk+1) is combined from an optimal solution to a prefix subproblem C([t1, tj)) and a time interval [tj, tk+1) that can be materialized without violating the performance guarantee. Pseudocode of the algorithm is omitted for space reasons, but can be found in the accompanying technical report [5]. The time complexity of the algorithm is in O(n^2): for each prefix subproblem the above recurrence must be evaluated, which is possible in linear time if the list sizes |Lv:[ti, tj)| are precomputed. The space complexity is in O(n^2), which accounts for keeping the precomputed sublist lengths and memoizing optimal solutions to prefix subproblems.
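The Python sketch below spells out one straightforward reading of this recurrence. It recomputes sublist lengths on the fly instead of precomputing them, so it is slower than the O(n^2) algorithm of the technical report, and since the problem statement and recurrence above are themselves reconstructions, the details here should be treated as assumptions rather than the paper's exact algorithm.

def sublist_length(index_list, interval):
    # same helper as in the earlier sketch
    ti, tj = interval
    return sum(1 for _, _, (tb, te) in index_list if tb < tj and te > ti)

def pg_materialize(index_list, boundaries, gamma):
    # boundaries: sorted unique time points t1, ..., tn
    n = len(boundaries)
    elem = [(boundaries[i], boundaries[i + 1]) for i in range(n - 1)]
    opt = [sublist_length(index_list, iv) for iv in elem]  # optimal per-interval costs

    INF = float("inf")
    cost = [INF] * n      # cost[k]: minimal space to cover the prefix [t1, t_{k+1})
    back = [None] * n
    cost[0] = 0
    for k in range(1, n):
        for j in range(k):
            candidate = (boundaries[j], boundaries[k])
            size = sublist_length(index_list, candidate)
            # admissible only if every covered, non-empty elementary interval
            # keeps the gamma-guarantee
            if all(size <= gamma * opt[i] for i in range(j, k) if opt[i] > 0):
                if cost[j] + size < cost[k]:
                    cost[k], back[k] = cost[j] + size, j
    # follow back-pointers from tn to reconstruct the materialized intervals M
    M, k = [], n - 1
    while k > 0:
        j = back[k]
        M.append((boundaries[j], boundaries[k]))
        k = j
    M.reverse()
    return M, cost[n - 1]

Setting gamma = 1 makes only optimal intervals admissible (Popt-like behavior), while large gamma lets the dynamic program collapse many elementary intervals into a single cheap-to-store sublist.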
6.3 Space-Bound Approach
So far we considered the problem of materializing sublists that give a guarantee on performance while requiring minimal space. In many situations, though, storage space is at a premium and the aim would be to materialize a set of sublists that optimizes expected performance while not exceeding a given space limit. The technique presented next, which is named SB, tackles this very problem. The space restriction is modeled by means of a user-specified parameter κ ≥ 1 that limits the maximum allowed blowup in index size over the space-optimal solution provided by Sopt. The SB technique seeks to find a set M that adheres to this space limit but minimizes the expected processing cost (and thus optimizes the expected performance). In the definition of the expected processing cost, P([ti, ti+1)) denotes the probability of a query time-point being in [ti, ti+1). Formally, this space-bound sublist-materialization problem is to choose M so as to minimize Σ_{[ti, ti+1) ∈ E} P([ti, ti+1)) · |Lv:π([ti, ti+1))| subject to S(M) ≤ κ · |Lv|. The problem can be solved by using dynamic programming over an increasing number of time intervals: at each time interval in E the algorithm decides whether to start a new materialization time-interval, using the known best materialization decision from the previous time intervals, and keeping track of the required space consumption for materialization. A detailed description of the algorithm is omitted here, but can be found in the accompanying technical report [5]. Unfortunately, the algorithm has time complexity in O(n^3 |Lv|) and its space complexity is in O(n^2 |Lv|), which is not practical for large data sets. We obtain an approximate solution to the problem using simulated annealing [22, 23]. Simulated annealing takes a fixed number R of rounds to explore the solution space. In each round a random successor of the current solution is looked at. If the successor does not adhere to the space limit, it is always rejected (i.e., the current solution is kept). A successor adhering to the space limit is always accepted if it achieves lower expected processing cost than the current solution. If it achieves higher expected processing cost, it is randomly accepted with probability e^(-Δ/r), where Δ is the increase in expected processing cost and R ≥ r ≥ 1 denotes the number of remaining rounds. In addition, throughout all rounds, the method keeps track of the best solution seen so far. The solution space for the problem at hand can be explored efficiently. As we argued above, we solely have to look at sets M that completely cover the time interval [t1, tn) and do not contain overlapping time intervals. We represent such a set M as an array of n boolean variables b1 ... bn that convey the boundaries of the time intervals in the set. Note that b1 and bn are always set to "true". Initially, all n − 2 intermediate variables assume "false", which corresponds to the set M = {[t1, tn)}. A random successor can now be easily generated by switching the value of one of the n − 2 intermediate variables. The time complexity of the method is in O(n^2), since the expected processing cost must be computed in each round. Its space complexity is in O(n), for keeping the n boolean variables. As a side remark, note that for κ = 1.0 the SB method does not necessarily produce the solution that is obtained from Sopt, but may produce a solution that requires the same amount of space while achieving better expected performance.
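Below is a minimal Python sketch of this simulated-annealing search, following the boolean-boundary encoding and acceptance rule described above. The helpers, the per-elementary-interval query probabilities P, and the acceptance-schedule details are assumptions of this sketch rather than the exact implementation from the technical report.

import math, random

def sublist_length(index_list, interval):
    ti, tj = interval
    return sum(1 for _, _, (tb, te) in index_list if tb < tj and te > ti)

def decode(bits, boundaries):
    # boolean boundary markers -> list of materialized intervals
    idx = [i for i, b in enumerate(bits) if b]
    return [(boundaries[a], boundaries[b]) for a, b in zip(idx, idx[1:])]

def expected_cost(index_list, M, boundaries, P):
    # sum of P([ti,ti+1)) * |Lv:pi([ti,ti+1))| over all elementary intervals
    total = 0.0
    for i in range(len(boundaries) - 1):
        ti, tj = boundaries[i], boundaries[i + 1]
        covering = next((a, b) for (a, b) in M if a <= ti and tj <= b)
        total += P[i] * sublist_length(index_list, covering)
    return total

def sb_materialize(index_list, boundaries, P, kappa, rounds=50000):
    n = len(boundaries)
    limit = kappa * len(index_list)                  # S(M) <= kappa * |Lv|
    bits = [True] + [False] * (n - 2) + [True]       # start from M = {[t1, tn)}
    cur = expected_cost(index_list, decode(bits, boundaries), boundaries, P)
    best_bits, best = list(bits), cur
    for r in range(rounds, 0, -1):
        cand = list(bits)
        cand[random.randrange(1, n - 1)] ^= True     # flip one intermediate boundary
        M = decode(cand, boundaries)
        if sum(sublist_length(index_list, iv) for iv in M) > limit:
            continue                                 # violates the space limit: reject
        cost = expected_cost(index_list, M, boundaries, P)
        if cost < best:
            best_bits, best = list(cand), cost       # remember best feasible solution
        delta = cost - cur
        if delta <= 0 or random.random() < math.exp(-delta / r):
            bits, cur = cand, cost                   # accept the successor
    return decode(best_bits, boundaries), best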
7. EXPERIMENTAL EVALUATION
We conducted a comprehensive series of experiments on two real-world datasets to evaluate the techniques proposed in this paper.

7.1 Setup and Datasets
The techniques described in this paper were implemented in a prototype system using Java JDK 1.5. All experiments described below were run on a single SUN V40z machine having four AMD Opteron CPUs, 16GB RAM, a large network-attached RAID-5 disk array, and running Microsoft Windows Server 2003. All data and indexes are kept in an Oracle 10g database that runs on the same machine. For our experiments we used two different datasets. The English Wikipedia revision history (referred to as WIKI in the remainder) is available for free download as a single XML file. This large dataset, totaling 0.7 TBytes, contains the full editing history of the English Wikipedia from January 2001 to December 2005 (the time of our download). We indexed all encyclopedia articles, excluding versions that were marked as the result of a minor edit (e.g., the correction of spelling errors). This yielded a total of 892,255 documents with 13,976,915 versions, having a mean (µ) of 15.67 versions per document at a standard deviation (σ) of 59.18. We built a time-travel query workload using the query log temporarily made available recently by AOL Research as follows: we first extracted the 300 most frequent keyword queries that yielded a result click on a Wikipedia article (e.g., "french revolution", "hurricane season 2005", "da vinci code"). The extracted queries contained a total of 422 distinct terms. For each extracted query, we randomly picked a time point for each month covered by the dataset. This resulted in a total of 18,000 (= 300 x 60) time-travel queries. The second dataset used in our experiments was based on a subset of the European Archive [13], containing weekly crawls of 11 .gov.uk websites throughout the years 2004 and 2005 and amounting to close to 2 TBytes of raw data. We filtered out documents not belonging to the MIME-types text/plain and text/html, to obtain a dataset that totals 0.4 TBytes and which we refer to as UKGOV in the rest of the paper. This included a total of 502,617 documents with 8,687,108 versions (µ = 17.28 and σ = 13.79). We built a corresponding query workload as mentioned before, this time choosing keyword queries that led to a site in the .gov.uk domain (e.g., "minimum wage", "inheritance tax", "citizenship ceremony dates"), and randomly sampling a time point for every month within the two-year period spanned by the dataset. Thus, we obtained a total of 7,200 (= 300 x 24) time-travel queries for the UKGOV dataset. In total, 522 terms appear in the extracted queries. The collection statistics (i.e., N and avdl) and term statistics (i.e., DF) were computed at monthly granularity for both datasets.

7.2 Impact of Temporal Coalescing
Our first set of experiments is aimed at evaluating the approximate temporal coalescing technique, described in Section 5, in terms of index-size reduction and its effect on result quality. For both the WIKI and UKGOV datasets, we compare temporally coalesced indexes for different values of the error threshold ε, computed using Algorithm 1, with the non-coalesced index as a baseline.

Table 1: Index sizes for the non-coalesced index (-) and coalesced indexes for different values of ε

Table 1 summarizes the index sizes measured as the total number of postings. As these results demonstrate, approximate temporal coalescing is highly effective in reducing index size. Even a small threshold value, e.g., ε = 0.01, has a considerable effect by reducing the index size almost by an order of magnitude. Note that on the UKGOV dataset, even accurate coalescing (ε = 0) manages to reduce the index size to less than 38% of the original size. Index size continues to decrease on both datasets as we increase the value of ε. How does the reduction in index size affect the query results? In order to evaluate this aspect, we compared the top-k results computed using a coalesced index against the ground-truth result obtained from the original index, for different cutoff levels k. Let Gk and Ck be the top-k documents from the ground-truth result and from the coalesced index, respectively. We used the following two measures for comparison: (i) Relative Recall at cutoff level k (RR@k), which measures the overlap between Gk and Ck, ranges in [0, 1], and is defined as RR@k = |Gk ∩ Ck| / k; (ii) Kendall's τ (see [7, 14] for a detailed definition) at cutoff level k (KT@k), measuring the agreement between the two results in the relative order of the items in Gk ∩ Ck, with value 1 (or -1) indicating total agreement (or disagreement).
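To make the two measures concrete, here is a small Python sketch. RR@k follows the definition given above; KT@k is computed as the plain Kendall's τ restricted to the documents both top-k lists share (the exact definition is deferred to references [7, 14], so this variant is an assumption of the sketch). Rankings are represented as lists of document ids, best first.

def rr_at_k(ground_truth, coalesced, k):
    # overlap between the two top-k document sets, normalized to [0, 1]
    G, C = set(ground_truth[:k]), set(coalesced[:k])
    return len(G & C) / k

def kt_at_k(ground_truth, coalesced, k):
    # Kendall's tau over the documents present in both top-k lists
    c_rank = {d: i for i, d in enumerate(coalesced[:k])}
    common = [d for d in ground_truth[:k] if d in c_rank]
    concordant = discordant = 0
    for i in range(len(common)):
        for j in range(i + 1, len(common)):
            # common[i] is ranked before common[j] in the ground truth
            if c_rank[common[i]] < c_rank[common[j]]:
                concordant += 1
            else:
                discordant += 1
    pairs = concordant + discordant
    return 1.0 if pairs == 0 else (concordant - discordant) / pairs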
Figure 3: Relative recall and Kendall's τ observed on coalesced indexes for different values of ε

Figure 3 plots, for cutoff levels 10 and 100, the mean of RR@k and KT@k along with the 5% and 95% percentiles, for different values of the threshold ε starting from 0.01. Note that for ε = 0, results coincide with those obtained by the original index, and hence are omitted from the graph. It is reassuring to see from these results that approximate temporal coalescing induces minimal disruption to the query results, since RR@k and KT@k remain within reasonable limits. For ε = 0.01, the smallest value of ε in our experiments, RR@100 for WIKI is 0.98, indicating that the results are almost indistinguishable from those obtained through the original index. Even the agreement on the relative order of these common results is quite high, as the mean KT@100 is close to 0.95. For the extreme value of ε = 0.5, which results in an index size of just 2.35% of the original, RR@100 and KT@100 are about 0.8 and 0.6, respectively. On the relatively less dynamic UKGOV dataset (as can be seen from the σ values above), results were even better, with high values of RR and KT seen throughout the spectrum of ε values for both cutoff levels.

7.3 Sublist Materialization
We now turn our attention towards evaluating the sublist materialization techniques introduced in Section 6. For both datasets, we started with the coalesced index produced by a moderate threshold setting of ε = 0.10. In order to reduce the computational effort, boundaries of elementary time intervals were rounded to day granularity before computing the sublist materializations. However, note that the postings in the materialized sublists still retain their original timestamps. For a comparative evaluation of the four approaches (Popt, Sopt, PG, and SB) we measure space and performance as follows. The required space S(M), as defined earlier, is equal to the total number of postings in the materialized sublists. To assess performance we compute the expected processing cost (EPC) for all terms in the respective query workload, assuming a uniform probability distribution among query time-points. We report the mean EPC, as well as the 5%- and 95%-percentiles. In other words, the mean EPC reflects the expected length of the index list (in terms of index postings) that needs to be scanned for a random time point and a random term from the query workload. The Sopt and Popt approaches are, by their definition, parameter-free. For the PG approach, we varied its parameter γ, which limits the maximal performance degradation, between 1.0 and 3.0. Analogously, for the SB approach the parameter κ, an upper bound on the allowed space blowup, was varied between 1.0 and 3.0. Solutions for the SB approach were obtained by running simulated annealing for R = 50,000 rounds. Table 2 lists the obtained space and performance figures. Note that EPC values are smaller on WIKI than on UKGOV, since terms in the query workload employed for WIKI are relatively rarer in the corpus. Based on the depicted results, we make the following key observations. i) As expected, Popt achieves optimal performance at the cost of an enormous space consumption. Sopt, to the contrary, while consuming an optimal amount of space, provides only poor expected processing cost. The PG and SB methods, for different values of their respective parameters, produce solutions whose space and performance lie in between the extremes that Popt and Sopt represent. ii) For the PG method we see that for an acceptable performance degradation of only 10% (i.e., γ = 1.10) the required space drops by more than one order of magnitude in comparison to Popt on both datasets. iii) The SB approach achieves close-to-optimal performance on both datasets if allowed to consume at most three times the optimal amount of space (i.e., κ = 3.0), which on our datasets still corresponds to a space reduction over Popt by more than one order of magnitude. We also measured wall-clock times on a sample of the queries, with results indicating improvements in execution time by up to a factor of 12.

8. CONCLUSIONS
In this work we have developed an efficient solution for time-travel text search over temporally versioned document collections.
Experiments on two real-world datasets showed that a combination of the proposed techniques can reduce index size by up to an order of magnitude while achieving nearly optimal performance and highly accurate results. The present work opens up many interesting questions for future research, e.g.: How can we further improve performance by applying (and possibly extending) encoding, compression, and skipping techniques [35]? How can we extend the approach to queries q [tb, te] specifying a time interval instead of a time point? How can the described time-travel text search functionality enable or speed up text mining along the time axis (e.g., tracking sentiment changes in customer opinions)?
A Time Machine for Text Search ABSTRACT Text search over temporally versioned document collections such as web archives has received little attention as a research problem. As a consequence, there is no scalable and principled solution to search such a collection as of a specified time t. In this work, we address this shortcoming and propose an efficient solution for time-travel text search by extending the inverted file index to make it ready for temporal search. We introduce approximate temporal coalescing as a tunable method to reduce the index size without significantly affecting the quality of results. In order to further improve the performance of time-travel queries, we introduce two principled techniques to trade off index size for its performance. These techniques can be formulated as optimization problems that can be solved to near-optimality. Finally, our approach is evaluated in a comprehensive series of experiments on two large-scale real-world datasets. Results unequivocally show that our methods make it possible to build an efficient "time machine" scalable to large versioned text collections. 1. INTRODUCTION In this work we address time-travel text search over temporally versioned document collections. Given a keyword query q and a time t our goal is to identify and rank relevant documents as if the collection was in its state as of time t. An increasing number of such versioned document collections is available today including web archives, collabo rative authoring environments like Wikis, or timestamped information feeds. Text search on these collections, however, is mostly time-ignorant: while the searched collection changes over time, often only the most recent version of a documents is indexed, or, versions are indexed independently and treated as separate documents. Even worse, for some collections, in particular web archives like the Internet Archive [18], a comprehensive text-search functionality is often completely missing. Time-travel text search, as we develop it in this paper, is a crucial tool to explore these collections and to unfold their full potential as the following example demonstrates. For a documentary about a past political scandal, a journalist needs to research early opinions and statements made by the involved politicians. Sending an appropriate query to a major web search-engine, the majority of returned results contains only recent coverage, since many of the early web pages have disappeared and are only preserved in web archives. If the query could be enriched with a time point, say August 20th 2003 as the day after the scandal got revealed, and be issued against a web archive, only pages that existed specifically at that time could be retrieved thus better satisfying the journalist's information need. Document collections like the Web or Wikipedia [32], as we target them here, are already large if only a single snapshot is considered. Looking at their evolutionary history, we are faced with even larger data volumes. As a consequence, na ¨ ıve approaches to time-travel text search fail, and viable approaches must scale-up well to such large data volumes. This paper presents an efficient solution to time-travel text search by making the following key contributions: 1. The popular well-studied inverted file index [35] is transparently extended to enable time-travel text search. 2. Temporal coalescing is introduced to avoid an indexsize explosion while keeping results highly accurate. 3. 
We develop two sublist materialization techniques to improve index performance that allow trading off space vs. performance. 4. In a comprehensive experimental evaluation our approach is evaluated on the English Wikipedia and parts of the Internet Archive as two large-scale real-world datasets with versioned documents. The remainder of this paper is organized as follows. The presented work is put in context with related work in Section 2. We delineate our model of a temporally versioned document collection in Section 3. We present our time-travel inverted index in Section 4. Building on it, temporal coalescing is described in Section 5. In Section 6 we describe principled techniques to improve index performance, before presenting the results of our experimental evaluation in Section 7. 2. RELATED WORK We can classify the related work mainly into the following two categories: (i) methods that deal explicitly with collections of versioned documents or temporal databases, and (ii) methods for reducing the index size by exploiting either the document-content overlap or by pruning portions of the index. We briefly review work under these categories here. To the best of our knowledge, there is very little prior work dealing with historical search over temporally versioned documents. Anick and Flynn [3], while pioneering this research, describe a help-desk system that supports historical queries. Access costs are optimized for accesses to the most recent versions and increase as one moves farther into the past. Burrows and Hisgen [10], in a patent description, delineate a method for indexing range-based values and mention its potential use for searching based on dates associated with documents. Recent work by Nørv˚ag and Nybø [25] and their earlier proposals concentrate on the relatively simpler problem of supporting text-containment queries only and neglect the relevance scoring of results. Stack [29] reports practical experiences made when adapting the open source search-engine Nutch to search web archives. This adaptation, however, does not provide the intended time-travel text search functionality. In contrast, research in temporal databases has produced several index structures tailored for time-evolving databases; a comprehensive overview of the state-of-art is available in [28]. Unlike the inverted file index, their applicability to text search is not well understood. Moving on to the second category of related work, Broder et al. [8] describe a technique that exploits large content overlaps between documents to achieve a reduction in index size. Their technique makes strong assumptions about the structure of document overlaps rendering it inapplicable to our context. More recent approaches by Hersovici et al. [17] and Zhang and Suel [34] exploit arbitrary content overlaps between documents to reduce index size. None of the approaches, however, considers time explicitly or provides the desired time-travel text search functionality. Static indexpruning techniques [11, 12] aim to reduce the effective index size, by removing portions of the index that are expected to have low impact on the query result. They also do not consider temporal aspects of documents, and thus are technically quite different from our proposal despite having a shared goal of index-size reduction. It should be noted that index-pruning techniques can be adapted to work along with the temporal text index we propose here. 3. MODEL 4. 
TIME-TRAVEL INVERTED FILE INDEX 4.1 Inverted File Index 4.2 Time-Travel Inverted File Index 4.3 Query Processing 5. TEMPORAL COALESCING 6. SUBLIST MATERIALIZATION 6.1 Performance/Space-Optimal Approaches 6.2 Performance-Guarantee Approach 6.3 Space-Bound Approach 7. EXPERIMENTAL EVALUATION 7.1 Setup and Datasets 7.2 Impact of Temporal Coalescing 7.3 Sublist Materialization 8. CONCLUSIONS In this work we have developed an efficient solution for time-travel text search over temporally versioned document collections. Experiments on two real-world datasets showed that a combination of the proposed techniques can reduce index size by up to an order of magnitude while achieving nearly optimal performance and highly accurate results. The present work opens up many interesting questions for future research, e.g.: How can we even further improve performance by applying (and possibly extending) encoding, compression, and skipping techniques [35]? . How can we extend the approach for queries q [tb, te] specifying a time interval instead of a time point? How can the described time-travel text search functionality enable or speed up text mining along the time axis (e.g., tracking sentiment changes in customer opinions)?
A Time Machine for Text Search ABSTRACT Text search over temporally versioned document collections such as web archives has received little attention as a research problem. As a consequence, there is no scalable and principled solution to search such a collection as of a specified time t. In this work, we address this shortcoming and propose an efficient solution for time-travel text search by extending the inverted file index to make it ready for temporal search. We introduce approximate temporal coalescing as a tunable method to reduce the index size without significantly affecting the quality of results. In order to further improve the performance of time-travel queries, we introduce two principled techniques to trade off index size for its performance. These techniques can be formulated as optimization problems that can be solved to near-optimality. Finally, our approach is evaluated in a comprehensive series of experiments on two large-scale real-world datasets. Results unequivocally show that our methods make it possible to build an efficient "time machine" scalable to large versioned text collections. 1. INTRODUCTION In this work we address time-travel text search over temporally versioned document collections. Given a keyword query q and a time t our goal is to identify and rank relevant documents as if the collection was in its state as of time t. An increasing number of such versioned document collections is available today including web archives, collabo Text search on these collections, however, is mostly time-ignorant: while the searched collection changes over time, often only the most recent version of a documents is indexed, or, versions are indexed independently and treated as separate documents. Even worse, for some collections, in particular web archives like the Internet Archive [18], a comprehensive text-search functionality is often completely missing. Time-travel text search, as we develop it in this paper, is a crucial tool to explore these collections and to unfold their full potential as the following example demonstrates. Document collections like the Web or Wikipedia [32], as we target them here, are already large if only a single snapshot is considered. As a consequence, na ¨ ıve approaches to time-travel text search fail, and viable approaches must scale-up well to such large data volumes. This paper presents an efficient solution to time-travel text search by making the following key contributions: 1. The popular well-studied inverted file index [35] is transparently extended to enable time-travel text search. 2. Temporal coalescing is introduced to avoid an indexsize explosion while keeping results highly accurate. 3. We develop two sublist materialization techniques to improve index performance that allow trading off space vs. performance. 4. In a comprehensive experimental evaluation our approach is evaluated on the English Wikipedia and parts of the Internet Archive as two large-scale real-world datasets with versioned documents. The remainder of this paper is organized as follows. The presented work is put in context with related work in Section 2. We delineate our model of a temporally versioned document collection in Section 3. We present our time-travel inverted index in Section 4. Building on it, temporal coalescing is described in Section 5. In Section 6 we describe principled techniques to improve index performance, before presenting the results of our experimental evaluation in Section 7. 2. 
RELATED WORK We can classify the related work mainly into the following two categories: (i) methods that deal explicitly with collections of versioned documents or temporal databases, and (ii) methods for reducing the index size by exploiting either the document-content overlap or by pruning portions of the index. We briefly review work under these categories here. To the best of our knowledge, there is very little prior work dealing with historical search over temporally versioned documents. Anick and Flynn [3], while pioneering this research, describe a help-desk system that supports historical queries. Burrows and Hisgen [10], in a patent description, delineate a method for indexing range-based values and mention its potential use for searching based on dates associated with documents. Stack [29] reports practical experiences made when adapting the open source search-engine Nutch to search web archives. This adaptation, however, does not provide the intended time-travel text search functionality. In contrast, research in temporal databases has produced several index structures tailored for time-evolving databases; a comprehensive overview of the state-of-art is available in [28]. Unlike the inverted file index, their applicability to text search is not well understood. Moving on to the second category of related work, Broder et al. [8] describe a technique that exploits large content overlaps between documents to achieve a reduction in index size. Their technique makes strong assumptions about the structure of document overlaps rendering it inapplicable to our context. More recent approaches by Hersovici et al. [17] and Zhang and Suel [34] exploit arbitrary content overlaps between documents to reduce index size. None of the approaches, however, considers time explicitly or provides the desired time-travel text search functionality. Static indexpruning techniques [11, 12] aim to reduce the effective index size, by removing portions of the index that are expected to have low impact on the query result. They also do not consider temporal aspects of documents, and thus are technically quite different from our proposal despite having a shared goal of index-size reduction. It should be noted that index-pruning techniques can be adapted to work along with the temporal text index we propose here. 8. CONCLUSIONS In this work we have developed an efficient solution for time-travel text search over temporally versioned document collections. Experiments on two real-world datasets showed that a combination of the proposed techniques can reduce index size by up to an order of magnitude while achieving nearly optimal performance and highly accurate results. . How can we extend the approach for queries q [tb, te] specifying a time interval instead of a time point? How can the described time-travel text search functionality enable or speed up text mining along the time axis (e.g., tracking sentiment changes in customer opinions)?
H-50
An Outranking Approach for Rank Aggregation in Information Retrieval
Research in Information Retrieval usually shows performance improvement when many sources of evidence are combined to produce a ranking of documents (e.g., texts, pictures, sounds, etc.). In this paper, we focus on the rank aggregation problem, also called data fusion problem, where rankings of documents, searched into the same collection and provided by multiple methods, are combined in order to produce a new ranking. In this context, we propose a rank aggregation method within a multiple criteria framework using aggregation mechanisms based on decision rules identifying positive and negative reasons for judging whether a document should get a better rank than another. We show that the proposed method deals well with the Information Retrieval distinctive features. Experimental results are reported showing that the suggested method performs better than the well-known CombSUM and CombMNZ operators.
[ "outrank approach", "rank aggreg", "inform retriev", "data fusion problem", "data fusion", "decis rule", "multipl criterion framework", "metasearch engin", "combsum and combmnz strategi", "majoritarian method", "ir model", "multipl criterium approach", "outrank method" ]
[ "P", "P", "P", "P", "P", "P", "M", "U", "M", "M", "U", "M", "R" ]
An Outranking Approach for Rank Aggregation in Information Retrieval Mohamed Farah Lamsade, Paris Dauphine University Place du Mal de Lattre de Tassigny 75775 Paris Cedex 16, France farah@lamsade.dauphine.fr Daniel Vanderpooten Lamsade, Paris Dauphine University Place du Mal de Lattre de Tassigny 75775 Paris Cedex 16, France vdp@lamsade.dauphine.fr ABSTRACT Research in Information Retrieval usually shows performance improvement when many sources of evidence are combined to produce a ranking of documents (e.g., texts, pictures, sounds, etc.). In this paper, we focus on the rank aggregation problem, also called data fusion problem, where rankings of documents, searched into the same collection and provided by multiple methods, are combined in order to produce a new ranking. In this context, we propose a rank aggregation method within a multiple criteria framework using aggregation mechanisms based on decision rules identifying positive and negative reasons for judging whether a document should get a better rank than another. We show that the proposed method deals well with the Information Retrieval distinctive features. Experimental results are reported showing that the suggested method performs better than the well-known CombSUM and CombMNZ operators. Categories and Subject Descriptors: H.3.3 [Information Systems]: Information Search and Retrieval - Retrieval models. General Terms: Algorithms, Measurement, Experimentation, Performance, Theory. 1. INTRODUCTION A wide range of current Information Retrieval (IR) approaches are based on various search models (Boolean, Vector Space, Probabilistic, Language, etc. [2]) in order to retrieve relevant documents in response to a user request. The result lists produced by these approaches depend on the exact definition of the relevance concept. Rank aggregation approaches, also called data fusion approaches, consist in combining these result lists in order to produce a new and hopefully better ranking. Such approaches give rise to metasearch engines in the Web context. We consider, in the following, cases where only ranks are available and no other additional information is provided such as the relevance scores. This corresponds indeed to the reality, where only ordinal information is available. Data fusion is also relevant in other contexts, such as when the user writes several queries of his/her information need (e.g., a boolean query and a natural language query) [4], or when many document surrogates are available [16]. Several studies argued that rank aggregation has the potential of combining effectively all the various sources of evidence considered in various input methods. For instance, experiments carried out in [16], [30], [4] and [19] showed that documents which appear in the lists of the majority of the input methods are more likely to be relevant. Moreover, Lee [19] and Vogt and Cottrell [31] found that various retrieval approaches often return very different irrelevant documents, but many of the same relevant documents. Bartell et al. [3] also found that rank aggregation methods improve the performances w.r.t. those of the input methods, even when some of them have weak individual performances. These methods also tend to smooth out biases of the input methods according to Montague and Aslam [22]. Data fusion has recently been proved to improve performances for both the ad-hoc retrieval and categorization tasks within the TREC genomics track in 2005 [1]. 
The rank aggregation problem was addressed in various fields such as i) social choice theory, which studies voting algorithms that specify winners of elections or winners of competitions in tournaments [29], ii) statistics, when studying correlation between rankings, iii) distributed databases, when results from different databases must be combined [12], and iv) collaborative filtering [23]. Most current rank aggregation methods consider each input ranking as a permutation over the same set of items. They also give a rigid interpretation to the exact ranking of the items. Both of these assumptions are rather invalid in the IR context, as will be shown in the following sections. The remainder of the paper is organized as follows. We first review current rank aggregation methods in Section 2. Then we outline the specificities of the data fusion problem in the IR context (Section 3). In Section 4, we present a new aggregation method which is proven to best fit the IR context. Experimental results are presented in Section 5 and conclusions are provided in a final section.

2. RELATED WORK
As pointed out by Riker [25], we can distinguish two families of rank aggregation methods: positional methods, which assign scores to items to be ranked according to the ranks they receive, and majoritarian methods, which are based on pairwise comparisons of items to be ranked. These two families of methods find their roots in the pioneering works of Borda [5] and Condorcet [7], respectively, in the social choice literature.

2.1 Preliminaries
We first introduce some basic notations to present the rank aggregation methods in a uniform way. Let D = {d1, d2, ..., dnd} be a set of nd documents. A list or a ranking ≻j is an ordering defined on Dj ⊆ D (j = 1, ..., n). Thus, di ≻j di' means di 'is ranked better than' di' in ≻j. When Dj = D, ≻j is said to be a full list. Otherwise, it is a partial list. If di belongs to Dj, r_i^j denotes the rank or position of di in ≻j. We assume that the best answer (document) is assigned the position 1 and the worst one the position |Dj|. A profile is an n-tuple of rankings PR = (≻1, ≻2, ..., ≻n), where each ranking is defined on D or on a subset of D. Restricting PR to the rankings containing document di defines PRi. We also call the number of rankings which contain document di the rank hits of di [19]. The rank aggregation or data fusion problem consists of finding a ranking function or mechanism Ψ (also called a social welfare function in the social choice theory terminology) which maps any profile PR = (≻1, ≻2, ..., ≻n) to a ranking σ = Ψ(PR), where σ is called a consensus ranking.

2.2 Positional Methods
2.2.1 Borda Count
This method [5] first assigns the score Σ_{j=1..n} r_i^j to each document di. Documents are then ranked by increasing order of this score, breaking ties, if any, arbitrarily.

2.2.2 Linear Combination Methods
This family of methods basically combines scores of documents. When used for the rank aggregation problem, ranks are assumed to be scores or performances to be combined using aggregation operators such as the weighted sum or some variation of it [3, 31, 17, 28]. For instance, Callan et al. [6] used the inference networks model [30] to combine rankings. Fox and Shaw [15] proposed several combination strategies, namely CombSUM, CombMIN, CombMAX, CombANZ and CombMNZ. The first three operators correspond to the sum, min and max operators, respectively. CombANZ and CombMNZ respectively divide and multiply the CombSUM score by the rank hits. It is shown in [19] that the CombSUM and CombMNZ operators perform better than the others. Metasearch engines such as SavvySearch and MetaCrawler use the CombSUM strategy to fuse rankings.
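As a quick illustration of these positional strategies, the following Python sketch computes the Borda count and the rank-hits bookkeeping used by CombANZ/CombMNZ. Full lists are assumed for the Borda count, and the dict-based ranking representation (document id mapped to position, 1 = best) is an assumption of the sketch; how missing documents are handled is left open, since that is exactly the issue discussed in Section 3.

def borda(profile):
    # profile: list of full rankings, each a dict {document: position}
    # Borda score of d = sum of d's positions; lower is better
    docs = set().union(*profile)
    scores = {d: sum(r[d] for r in profile) for d in docs}
    return sorted(docs, key=lambda d: scores[d])   # ties broken arbitrarily

def rank_hits(profile, d):
    # number of input rankings that contain document d
    return sum(1 for r in profile if d in r)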
2.2.3 Footrule Optimal Aggregation
In this method, a consensus ranking minimizes the Spearman footrule distance from the input rankings [21]. Formally, given two full lists ≻j and ≻j', this distance is given by F(≻j, ≻j') = Σ_{i=1..nd} |r_i^j − r_i^j'|. It extends to several lists as follows. Given a profile PR and a consensus ranking σ, the Spearman footrule distance of σ to PR is given by F(σ, PR) = Σ_{j=1..n} F(σ, ≻j). Cook and Kress [8] proposed a similar method which consists in optimizing the distance D(≻j, ≻j') = (1/2) Σ_{i,i'=1..nd} |r_{i,i'}^j − r_{i,i'}^j'|, where r_{i,i'}^j = r_i^j − r_i'^j. This formulation has the advantage that it considers the intensity of preferences.

2.2.4 Probabilistic Methods
This kind of method assumes that the performance of the input methods on a number of training queries is indicative of their future performance. During the training process, probabilities of relevance are calculated. For subsequent queries, documents are ranked based on these probabilities. For instance, in [20], each input ranking ≻j is divided into a number of segments, and the conditional probability of relevance (R) of each document di, depending on the segment k it occurs in, is computed, i.e., prob(R|di, k, ≻j). For subsequent queries, the score of each document di is given by Σ_{j=1..n} prob(R|di, k, ≻j) / k. Le Calve and Savoy [18] suggest using a logistic regression approach for combining scores. Training data is needed to infer the model parameters.

2.3 Majoritarian Methods
2.3.1 Condorcet Procedure
The original Condorcet rule [7] specifies that a winner of the election is any item that beats or ties with every other item in a pairwise contest. Formally, let C(di σ di') = {≻j ∈ PR : di ≻j di'} be the coalition of rankings that are concordant with establishing di σ di', i.e., with the proposition di 'should be ranked better than' di' in the final ranking σ. Then di beats or ties with di' iff |C(di σ di')| ≥ |C(di' σ di)|. The repetitive application of the Condorcet algorithm can produce a ranking of items in a natural way: select the Condorcet winner, remove it from the lists, and repeat the previous two steps until there are no more documents to rank. Since there is not always a Condorcet winner, variations of the Condorcet procedure have been developed within multiple criteria decision aid theory, with methods such as ELECTRE [26].

2.3.2 Kemeny Optimal Aggregation
As in Section 2.2.3, a consensus ranking minimizes a geometric distance from the input rankings, where the Kendall tau distance is used instead of the Spearman footrule distance. Formally, given two full lists ≻j and ≻j', the Kendall tau distance is given by K(≻j, ≻j') = |{(di, di') : i < i', r_i^j < r_i'^j, r_i^j' > r_i'^j'}|, i.e., the number of pairwise disagreements between the two lists. It is easy to show that the consensus ranking corresponds to the geometric median of the input rankings and that the Kemeny optimal aggregation problem corresponds to the minimum feedback edge set problem.
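For reference, here is a small Python sketch of the two distances just used, over full lists represented as dicts from document id to position (this representation is an assumption of the sketch).

def footrule_distance(r1, r2):
    # Spearman footrule: sum over documents of the absolute position difference
    return sum(abs(r1[d] - r2[d]) for d in r1)

def kendall_tau_distance(r1, r2):
    # number of document pairs ranked in opposite relative order by r1 and r2
    docs = list(r1)
    disagreements = 0
    for i in range(len(docs)):
        for j in range(i + 1, len(docs)):
            a, b = docs[i], docs[j]
            if (r1[a] - r1[b]) * (r2[a] - r2[b]) < 0:
                disagreements += 1
    return disagreements

def footrule_to_profile(sigma, profile):
    # F(sigma, PR): footrule distance of a candidate consensus to the whole profile
    return sum(footrule_distance(sigma, r) for r in profile)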
2.3.3 Markov Chain Methods
Markov chains (MCs) have been used by Dwork et al. [11] as a 'natural' method to obtain a consensus ranking, where states correspond to the documents to be ranked and the transition probabilities vary depending on the interpretation of the transition event. In the same reference, the authors proposed four specific MCs, and experimental testing has shown that the following MC is the best performing one (see also [24]):
• MC4: move from the current state di to the next state di' by first choosing a document di' uniformly from D. If for the majority of the rankings we have r_i'^j ≤ r_i^j, i.e., di' is ranked at least as well as di, then move to di', else stay in di.
The consensus ranking corresponds to the stationary distribution of MC4.

3. SPECIFICITIES OF THE RANK AGGREGATION PROBLEM IN THE IR CONTEXT
3.1 Limited Significance of the Rankings
The exact positions of documents in one input ranking have limited significance and should not be overemphasized. For instance, having three relevant documents in the first three positions, any perturbation of these three items has the same value. Indeed, in the IR context, the complete order provided by an input method may hide ties. In this case, we call such rankings semi orders. This was outlined in [13] as the problem of aggregation with ties. It is therefore important to build the consensus ranking based on robust information:
• Documents with near positions in ≻j are more likely to have similar interest or relevance. Thus a slight perturbation of the initial ranking is meaningless.
• Assuming that document di is better ranked than document di' in a ranking ≻j, di is more likely to be definitively more relevant than di' in ≻j when the number of intermediate positions between di and di' increases.

3.2 Partial Lists
In real world applications, such as metasearch engines, rankings provided by the input methods are often partial lists. This was outlined in [14] as the problem of having to merge top-k results from various input lists. For instance, in the experiments carried out by Dwork et al. [11], the authors found that among the top 100 best documents of 7 input search engines, 67% of the documents were present in only one search engine, whereas less than two documents were present in all the search engines. Rank aggregation of partial lists raises four major difficulties which we state hereafter, proposing for each of them various working assumptions:
1. Partial lists can have various lengths, which can favour long lists. We thus consider the following two working hypotheses:
H1_k: We only consider the top k best documents from each input ranking.
H1_all: We consider all the documents from each input ranking.
2. Since there are different documents in the input rankings, we must decide which documents should be kept in the consensus ranking. Two working hypotheses are therefore considered:
H2_k: We only consider documents which are present in at least k input rankings (k > 1).
H2_all: We consider all the documents which are ranked in at least one input ranking.
Hereafter, we call documents which will be retained in the consensus ranking candidate documents, and documents that will be excluded from the consensus ranking excluded documents. We also call a candidate document which is missing in one or more rankings a missing document.
3. Some candidate documents are missing documents in some input rankings. The main reasons for a missing document are that it was not indexed or that it was indexed but deemed irrelevant; usually this information is not available. We consider the following two working hypotheses:
H3_yes: Each missing document in each ≻j is assigned a position.
H3_no: No assumption is made, that is, each missing document is considered neither better nor worse than any other document.
4. When assumption H2_k holds, each input ranking may contain documents which will not be considered in the consensus ranking. Regarding the positions of the candidate documents, we can consider the following working hypotheses:
H4_init: The initial positions of the candidate documents are kept in each input ranking.
H4_new: Candidate documents receive new positions in each input ranking, after discarding the excluded ones.
In the IR context, rank aggregation methods need to decide more or less explicitly which assumptions to retain w.r.t. the above-mentioned difficulties.

4. OUTRANKING APPROACH FOR RANK AGGREGATION
4.1 Presentation
Positional methods implicitly consider that the positions of the documents in the input rankings are scores, thus giving a cardinal meaning to ordinal information. This constitutes a strong assumption that is questionable, especially when the input rankings have different lengths. Moreover, for positional methods, assumptions H3 and H4, which are often arbitrary, have a strong impact on the results. For instance, let us consider an input ranking of 500 documents out of 1000 candidate documents. Whether we assign to each of the missing documents the position 1, 501, 750 or 1000 (corresponding to variations of H3_yes) will give rise to very contrasted results, especially regarding the top of the consensus ranking. Majoritarian methods do not suffer from the above-mentioned drawbacks of the positional methods since they build consensus rankings exploiting only the ordinal information contained in the input rankings. Nevertheless, they suppose that such rankings are complete orders, ignoring that they may hide ties. Therefore, majoritarian methods base consensus rankings on illusory discriminant information rather than on less discriminant but more robust information. Trying to overcome the limits of current rank aggregation methods, we found that outranking approaches, which were initially used for multiple criteria aggregation problems [26], can also be used for the rank aggregation purpose, where each ranking plays the role of a criterion. Therefore, in order to decide whether a document di should be ranked better than di' in the consensus ranking σ, the two following conditions should be met:
• a concordance condition which ensures that a majority of the input rankings are concordant with di σ di' (majority principle).
• a discordance condition which ensures that none of the discordant input rankings strongly refutes di σ di' (respect of minorities principle).
Formally, the concordance coalition with di σ di' is C_sp(di σ di') = {≻j ∈ PR : r_i^j ≤ r_i'^j − sp}, where sp is a preference threshold, i.e., the variation of document positions (whether absolute or relative to the ranking length) which draws the boundary between an indifference and a preference situation between documents. The discordance coalition with di σ di' is D_sv(di σ di') = {≻j ∈ PR : r_i^j ≥ r_i'^j + sv}, where sv is a veto threshold, i.e., the variation of document positions (whether absolute or relative to the ranking length) which draws the boundary between a weak and a strong opposition to di σ di'. Depending on the exact definitions of the preceding concordance and discordance coalitions, leading to the definition of some decision rules, several outranking relations can be defined.
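As a minimal illustration, the two coalitions can be computed as follows in Python, assuming absolute thresholds and rankings stored as dicts from document id to position (both assumptions of this sketch); only rankings containing both documents can support or oppose the assertion, which is the natural reading for partial lists.

def concordance_coalition(profile, d, d2, s_p):
    # rankings supporting "d should be ranked better than d2" by a margin of s_p
    return [r for r in profile
            if d in r and d2 in r and r[d] <= r[d2] - s_p]

def discordance_coalition(profile, d, d2, s_v):
    # rankings strongly opposing "d should be ranked better than d2"
    return [r for r in profile
            if d in r and d2 in r and r[d] >= r[d2] + s_v]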
These relations can be more or less demanding depending on i) the values of the thresholds sp and sv, ii) the importance or minimal size cmin required of the concordance coalition, and iii) the importance or maximum size dmax allowed for the discordance coalition. A generic outranking relation can thus be defined as follows:
di S(sp, sv, cmin, dmax) di' ⇔ |C_sp(di σ di')| ≥ cmin AND |D_sv(di σ di')| ≤ dmax
This expression defines a family of nested outranking relations, since S(sp', sv', cmin', dmax') ⊆ S(sp, sv, cmin, dmax) when cmin' ≥ cmin and/or dmax' ≤ dmax and/or sp' ≥ sp and/or sv' ≤ sv. The expression also generalizes the majority rule, which corresponds to the particular relation S(0, ∞, n/2, n). It also satisfies important properties of rank aggregation methods, called neutrality, Pareto-optimality, the Condorcet property and the Extended Condorcet property in the social choice literature [29]. Outranking relations are not necessarily transitive and do not necessarily correspond to rankings since directed cycles may exist. Therefore, we need specific procedures in order to derive a consensus ranking. We propose the following procedure, which finds its roots in [27]. It consists in partitioning the set of documents into r ranked classes. Each class Ch contains documents with the same relevance and results from the application of all relations (if possible) to the set of documents remaining after the previous classes have been computed. Documents within the same equivalence class are ranked arbitrarily. Formally, let
• R be the set of candidate documents for a query,
• S^1, S^2, ... be a family of nested outranking relations,
• F_k(di, E) = |{di' ∈ E : di S^k di'}| be the number of documents in E (E ⊆ R) that could be considered 'worse' than di according to relation S^k,
• f_k(di, E) = |{di' ∈ E : di' S^k di}| be the number of documents in E that could be considered 'better' than di according to S^k,
• s_k(di, E) = F_k(di, E) − f_k(di, E) be the qualification of di in E according to S^k.
Each class Ch results from a distillation process. It corresponds to the last distillate of a series of sets E0 ⊇ E1 ⊇ ... where E0 = R \ (C1 ∪ ... ∪ Ch−1) and Ek is a reduced subset of Ek−1 resulting from the application of the following procedure:
1. compute for each di ∈ Ek−1 its qualification according to S^k, i.e., s_k(di, Ek−1),
2. define smax = max_{di ∈ Ek−1} {s_k(di, Ek−1)}, then
3. set Ek = {di ∈ Ek−1 : s_k(di, Ek−1) = smax}.
When one outranking relation is used, the distillation process stops after the first application of the previous procedure, i.e., Ch corresponds to the distillate E1. When different outranking relations are used, the distillation process stops when all the pre-defined outranking relations have been used or when |Ek| = 1.
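The following Python sketch puts the pieces together: an outranking test with absolute thresholds (the experiments in Section 5 use thresholds expressed relative to ranking lengths and to |PRi ∩ PRi'|, so absolute values are an assumption here) and the distillation procedure over a family of nested relations given as (sp, sv, cmin, dmax) tuples. Rankings are again dicts from document id to position.

def outranks(profile, d, d2, sp, sv, cmin, dmax):
    # d S_(sp,sv,cmin,dmax) d2: enough concordant rankings, few strong opponents
    both = [r for r in profile if d in r and d2 in r]
    c = sum(1 for r in both if r[d] <= r[d2] - sp)
    disc = sum(1 for r in both if r[d] >= r[d2] + sv)
    return c >= cmin and disc <= dmax

def distillation(profile, candidates, relations):
    # relations: nested outranking relations as (sp, sv, cmin, dmax) tuples
    remaining, classes = list(candidates), []
    while remaining:
        E = list(remaining)
        for (sp, sv, cmin, dmax) in relations:
            if len(E) == 1:
                break
            # qualification s_k(d, E) = F_k(d, E) - f_k(d, E)
            qual = {}
            for d in E:
                F = sum(outranks(profile, d, d2, sp, sv, cmin, dmax)
                        for d2 in E if d2 != d)
                f = sum(outranks(profile, d2, d, sp, sv, cmin, dmax)
                        for d2 in E if d2 != d)
                qual[d] = F - f
            smax = max(qual.values())
            E = [d for d in E if qual[d] == smax]
        classes.append(E)                 # next ranked class C_h
        remaining = [d for d in remaining if d not in E]
    return classes

Applied to the profile of Table 1 below with the single relation (sp, sv, cmin, dmax) = (1, 4, 2, 1), this sketch yields the classes {d1, d2, d3}, then {d4}, then {d5}, matching the consensus ranking derived in the illustrative example.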
4.2 Illustrative Example
This section illustrates the concepts and procedures of Section 4.1. Let us consider a set of candidate documents R = {d1, d2, d3, d4, d5}. The following table gives a profile PR = (≻1, ≻2, ≻3, ≻4) of different rankings of the documents of R.

Table 1: Rankings of documents
r_i^j   ≻1  ≻2  ≻3  ≻4
d1       1   3   1   5
d2       2   1   3   3
d3       3   2   2   1
d4       4   4   5   2
d5       5   5   4   4

Let us suppose that the preference and veto thresholds are set to the values 1 and 4, respectively, and that the concordance and discordance thresholds are set to the values 2 and 1, respectively. The following tables give the concordance, discordance and outranking matrices. Each entry c_sp(di, di') (resp. d_sv(di, di')) in the concordance (resp. discordance) matrix gives the number of rankings that are concordant (resp. discordant) with di σ di', i.e., c_sp(di, di') = |C_sp(di σ di')| and d_sv(di, di') = |D_sv(di σ di')|.

Table 2: Computation of the outranking relation

Concordance Matrix
      d1  d2  d3  d4  d5
d1     -   2   2   3   3
d2     2   -   2   3   4
d3     2   2   -   4   4
d4     1   1   0   -   3
d5     1   0   0   1   -

Discordance Matrix
      d1  d2  d3  d4  d5
d1     -   0   1   0   0
d2     0   -   0   0   0
d3     0   0   -   0   0
d4     1   0   0   -   0
d5     1   1   0   0   -

Outranking Matrix (S^1)
      d1  d2  d3  d4  d5
d1     -   1   1   1   1
d2     1   -   1   1   1
d3     1   1   -   1   1
d4     0   0   0   -   1
d5     0   0   0   0   -

For instance, the concordance coalition for the assertion d1 σ d4 is C_1(d1 σ d4) = {≻1, ≻2, ≻3} and the discordance coalition for the same assertion is D_4(d1 σ d4) = ∅. Therefore, c_1(d1, d4) = 3, d_4(d1, d4) = 0 and d1 S^1 d4 holds. Notice that F_k(di, R) (resp. f_k(di, R)) is given by summing the values of the ith row (resp. column) of the outranking matrix. The consensus ranking is obtained as follows: to get the first class C1, we compute the qualifications of all the documents of E0 = R with respect to S^1. They are respectively 2, 2, 2, -2 and -4. Therefore smax equals 2 and C1 = E1 = {d1, d2, d3}. Observe that, if we had used a second outranking relation S^2 (⊇ S^1), these three documents could possibly have been discriminated. At this stage, we remove the documents of C1 from the outranking matrix and compute the next class C2: we compute the new qualifications of the documents of E0 = R \ C1 = {d4, d5}. They are respectively 1 and -1, so C2 = E1 = {d4}. The last document d5 is the only document of the last class C3. Thus, the consensus ranking is {d1, d2, d3} → {d4} → {d5}.

5. EXPERIMENTS AND RESULTS
5.1 Test Setting
To facilitate empirical investigation of the proposed methodology, we developed a prototype metasearch engine that implements a version of our outranking approach for rank aggregation. In this paper, we apply our approach to the Topic Distillation (TD) task of the TREC-2004 Web track [10]. In this task, there are 75 topics, where only a short description of each is given. For each query, we retained the rankings of the 10 best runs of the TD task which are provided by the TREC-2004 participating teams. The performances of these runs are reported in Table 3.

Table 3: Performances of the 10 best runs of the TD task of TREC-2004
Run Id          MAP    P@10   S@1    S@5    S@10
uogWebCAU150    17.9%  24.9%  50.7%  77.3%  89.3%
MSRAmixed1      17.8%  25.1%  38.7%  72.0%  88.0%
MSRC04C12       16.5%  23.1%  38.7%  74.7%  80.0%
humW04rdpl      16.3%  23.1%  37.3%  78.7%  90.7%
THUIRmix042     14.7%  20.5%  21.3%  58.7%  74.7%
UAmsT04MWScb    14.6%  20.9%  36.0%  66.7%  76.0%
ICT04CIIS1AT    14.1%  20.8%  33.3%  64.0%  78.7%
SJTUINCMIX5     12.9%  18.9%  29.3%  57.3%  72.0%
MU04web1        11.5%  19.9%  33.3%  64.0%  76.0%
MeijiHILw3      11.5%  15.3%  30.7%  54.7%  64.0%
Average         14.7%  21.2%  34.9%  66.8%  78.94%

For each query, each run provides a ranking of about 1000 documents. The number of documents retrieved by all these runs ranges from 543 to 5769. Their average (median) number is 3340 (3386). It is worth noting that we found similar distributions of the documents among the rankings as in [11]. For evaluation, we used the trec_eval standard tool, which is used by the TREC community to calculate the standard measures of system effectiveness, namely Mean Average Precision (MAP) and Success@n (S@n) for n = 1, 5 and 10. The effectiveness of our approach is compared against some high performing official results from TREC-2004 as well as against some standard rank aggregation algorithms. In the experiments, significance testing is mainly based on the t-student statistic which is computed on the basis of the MAP values of the compared runs.
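Since all following tables report MAP and Success@n, here is a compact Python sketch of the two measures for a single query, assuming binary relevance judgments (trec_eval averages the per-query average precision over all queries to obtain MAP); the list/set representation is an assumption of the sketch.

def success_at_n(ranking, relevant, n):
    # 1 if at least one relevant document appears in the top n results, else 0
    return int(any(d in relevant for d in ranking[:n]))

def average_precision(ranking, relevant):
    # mean of the precision values measured at each relevant document retrieved
    hits, precisions = 0, []
    for rank, d in enumerate(ranking, start=1):
        if d in relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant) if relevant else 0.0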
In the tables of the following section, statistically significant differences are marked with an asterisk. Values between brackets of the first column of each table, indicate the parameter value of the corresponding run. 5.2 Results We carried out several series of runs in order to i) study performance variations of the outranking approach when tuning the parameters and working assumptions, ii) compare performances of the outranking approach vs standard rank aggregation strategies , and iii) check whether rank aggregation performs better than the best input rankings. We set our basic run mcm with the following parameters. We considered that each input ranking is a complete order (sp = 0) and that an input ranking strongly refutes diσdi when the difference of both document positions is large enough (sv = 75%). Preference and veto thresholds are computed proportionally to the number of documents retained in each input ranking. They consequently may vary from one ranking to another. In addition, to accept the assertion diσdi , we supposed that the majority of the rankings must be concordant (cmin = 50%) and that every input ranking can impose its veto (dmax = 0). Concordance and discordance thresholds are computed for each tuple (di, di ) as the percentage of the input rankings of PRi ∩PRi . Thus, our choice of parameters leads to the definition of the outranking relation S(0,75%,50%,0). To test the run mcm, we had chosen the following assumptions. We retained the top 100 best documents from each input ranking (H1 100), only considered documents which are present in at least half of the input rankings (H2 5 ) and assumed H3 no and H4 new. In these conditions, the number of successful documents was about 100 on average, and the computation time per query was less than one second. Obviously, modifying the working assumptions should have deeper impact on the performances than tuning our model parameters. This was validated by preliminary experiments. Thus, we hereafter begin by studying performance variation when different sets of assumptions are considered. Afterwards, we study the impact of tuning parameters. Finally, we compare our model performances w.r.t. the input rankings as well as some standard data fusion algorithms. 5.2.1 Impact of the Working Assumptions Table 4 summarizes the performance variation of the outranking approach under different working hypotheses. In Table 4: Impact of the working assumptions Run Id MAP S@1 S@5 S@10 mcm 18.47% 41.33% 81.33% 86.67% mcm22 (H3 yes) 17.72% (-4.06%) 34.67% 81.33% 86.67% mcm23 (H4 init) 18.26% (-1.14%) 41.33% 81.33% 86.67% mcm24 (H1 all) 20.67% (+11.91%*) 38.66% 80.00% 86.66% mcm25 (H2 all) 21.68% (+17.38%*) 40.00% 78.66% 89.33% this table, we first show that run mcm22, in which missing documents are all put in the same last position of each input ranking, leads to performance drop w.r.t. run mcm. Moreover, S@1 moves from 41.33% to 34.67% (-16.11%). This shows that several relevant documents which were initially put at the first position of the consensus ranking in mcm, lose this first position but remain ranked in the top 5 documents since S@5 did not change. We also conclude that documents which have rather good positions in some input rankings are more likely to be relevant, even though they are missing in some other rankings. Consequently, when they are missing in some rankings, assigning worse ranks to these documents is harmful for performance. Also, from Table 4, we found that the performances of runs mcm and mcm23 are similar. 
The similar performance of mcm and mcm23 also indicates that the outranking approach is not sensitive to keeping the initial positions of candidate documents or recomputing them after discarding excluded ones.

From the same Table 4, the performance of the outranking approach increases significantly for runs mcm24 and mcm25. Therefore, considering either all the documents which are present in at least half of the rankings (mcm24) or all the documents which are ranked in the first 100 positions of one or more rankings (mcm25) increases performance. This result was predictable, since in both cases we have more detailed information on the relative importance of documents. Tables 5 and 6 confirm this evidence.

Table 5, where the values between brackets in the first column give the number of documents retained from each input ranking, shows that selecting more documents from each input ranking leads to a performance increase. It is worth mentioning that selecting more than 600 documents from each input ranking does not improve performance.

Table 5: Impact of the number of retained documents

    Run Id           MAP                 S@1      S@5      S@10
    mcm (100)        18.47%              41.33%   81.33%   86.67%
    mcm24-1 (200)    19.32% (+4.60%)     42.67%   78.67%   88.00%
    mcm24-2 (400)    19.88% (+7.63%*)    37.33%   80.00%   88.00%
    mcm24-3 (600)    20.80% (+12.62%*)   40.00%   80.00%   88.00%
    mcm24-4 (800)    20.66% (+11.86%*)   40.00%   78.67%   86.67%
    mcm24 (1000)     20.67% (+11.91%*)   38.66%   80.00%   86.66%

Table 6 reports runs corresponding to variations of H2_k; the values between brackets are rank hits. For instance, in the run mcm32, only documents which are present in 3 or more input rankings were considered successful. This table shows that performance is significantly better when rare documents are considered, whereas it decreases significantly when these documents are discarded. Therefore, we conclude that many of the relevant documents are retrieved by a rather small set of IR models.

Table 6: Performance considering different rank hits

    Run Id        MAP                 S@1      S@5      S@10
    mcm25 (1)     21.68% (+17.38%*)   40.00%   78.67%   89.33%
    mcm32 (3)     18.98% (+2.76%)     38.67%   80.00%   85.33%
    mcm (5)       18.47%              41.33%   81.33%   86.67%
    mcm33 (7)     15.83% (-14.29%*)   37.33%   78.67%   85.33%
    mcm34 (9)     10.96% (-40.66%*)   36.11%   66.67%   70.83%
    mcm35 (10)    7.42% (-59.83%*)    39.22%   62.75%   64.70%

For both runs mcm24 and mcm25, the number of successful documents was about 1000, and therefore the computation time per query increased and became around 5 seconds.

5.2.2 Impact of the Variation of the Parameters

Table 7 shows the performance variation of the outranking approach when different preference thresholds are considered. We found performance improvement up to threshold values of about 5%, then a decrease in performance which becomes significant for threshold values greater than 10%. Moreover, S@1 improves from 41.33% to 46.67% when the preference threshold changes from 0 to 5%. We can thus conclude that the input rankings are semi orders rather than complete orders.

Table 7: Impact of the variation of the preference threshold from 0 to 12.5%

    Run Id          MAP                S@1      S@5      S@10
    mcm (0%)        18.47%             41.33%   81.33%   86.67%
    mcm1 (1%)       18.57% (+0.54%)    41.33%   81.33%   86.67%
    mcm2 (2.5%)     18.63% (+0.87%)    42.67%   78.67%   86.67%
    mcm3 (5%)       18.69% (+1.19%)    46.67%   81.33%   86.67%
    mcm4 (7.5%)     18.24% (-1.25%)    46.67%   81.33%   86.67%
    mcm5 (10%)      17.93% (-2.92%)    40.00%   82.67%   86.67%
    mcm5b (12.5%)   17.51% (-5.20%*)   41.33%   80.00%   86.67%

Table 8 shows the evolution of the performance measures w.r.t. the concordance threshold. We can conclude that, in order to put document di before di′ in the consensus ranking, at least half of the input rankings of PRi ∩ PRi′ should be concordant. Performance drops significantly for very low and very high values of the concordance threshold. In fact, for such values, the concordance condition is either almost always fulfilled, by too many document pairs, or never fulfilled, respectively; therefore, the outranking relation becomes either too weak or too strong.

Table 8: Impact of the variation of cmin

    Run Id         MAP                 S@1      S@5      S@10
    mcm11 (20%)    17.63% (-4.55%*)    41.33%   76.00%   85.33%
    mcm12 (40%)    18.37% (-0.54%)     42.67%   76.00%   86.67%
    mcm (50%)      18.47%              41.33%   81.33%   86.67%
    mcm13 (60%)    18.42% (-0.27%)     40.00%   78.67%   86.67%
    mcm14 (80%)    17.43% (-5.63%*)    40.00%   78.67%   86.67%
    mcm15 (100%)   16.12% (-12.72%*)   41.33%   70.67%   85.33%

In the experiments, varying the veto threshold as well as the discordance threshold within reasonable intervals does not have a significant impact on the performance measures. In fact, runs with different veto thresholds (sv ∈ [50%; 100%]) had similar performances, even though there is a slight advantage for runs with high threshold values, which means that it is better not to allow the input rankings to impose their veto easily. Also, tuning the discordance threshold was carried out for values of 50% and 75% of the veto threshold. For these runs we did not get any noticeable performance variation, although for low discordance thresholds (dmax < 20%) performance slightly decreased.

5.2.3 Impact of the Variation of the Number of Input Rankings

To study performance evolution when different sets of input rankings are considered, we carried out three more runs where the 2, 4, and 6 best performing input rankings are considered. The results reported in Table 9 are seemingly counter-intuitive and also do not support previous findings regarding rank aggregation research [3]. Nevertheless, this result shows that low performing rankings bring more noise than information to the establishment of the consensus ranking. Therefore, when they are considered, performance decreases.

Table 9: Performance considering different best performing sets of input rankings

    Run Id      MAP               S@1      S@5      S@10
    mcm (10)    18.47%            41.33%   81.33%   86.67%
    mcm27 (6)   18.60% (+0.70%)   41.33%   80.00%   85.33%
    mcm28 (4)   19.02% (+2.98%)   40.00%   86.67%   88.00%
    mcm29 (2)   18.33% (-0.76%)   44.00%   76.00%   88.00%

5.2.4 Comparison of the Performance of Different Rank Aggregation Methods

In this set of runs, we compare the outranking approach with some standard rank aggregation methods which were proven to have acceptable performance in previous studies: we considered two positional methods, the CombSUM and CombMNZ strategies, and we also examined the performance of one majoritarian method, the Markov chain method (MC4). For the comparisons, we considered a specific outranking relation S* = S(5%, 50%, 50%, 30%), which results in good overall performances when tuning all the parameters. The first row of Table 10 gives performances of the rank aggregation methods w.r.t.
a basic assumption set A1 = (H1_100, H2_5, H4_new): we only consider the 100 first documents from each ranking, then retain documents present in 5 or more rankings and update the ranks of the successful documents. For the positional methods, we place missing documents at the queue of the ranking (H3_yes), whereas for our method as well as for MC4 we retained hypothesis H3_no.

The three following rows of Table 10 report performances when changing one element of the basic assumption set: the second row corresponds to the assumption set A2 = (H1_1000, H2_5, H4_new), i.e. changing the number of retained documents from 100 to 1000. The third row corresponds to the assumption set A3 = (H1_100, H2_all, H4_new), i.e. considering the documents present in at least one ranking. The fourth row corresponds to the assumption set A4 = (H1_100, H2_5, H4_init), i.e. keeping the original ranks of the successful documents.

The fifth row of Table 10, labeled A5, gives performance when all the 225 queries of the Web track of TREC-2004 are considered. Obviously, this performance level cannot be compared with the previous rows, since the additional queries are different from the TD queries and correspond to other tasks (Home Page and Named Page tasks [10]) of the TREC-2004 Web track. This set of runs aims to show whether the relative performance of the various methods is task-dependent. The last row of Table 10, labeled A6, reports the performance of the various methods considering the TD task of TREC-2002 instead of TREC-2004: we fused the input rankings of the 10 best official runs for each of the 50 TD queries [9], considering the set of assumptions A1 of the first row. This aims to show whether the relative performance of the various methods changes from year to year. Values between brackets in Table 10 are variations of the performance of each rank aggregation method w.r.t. the performance of the outranking approach.

Table 10: Performance (MAP) of different rank aggregation methods under 3 different test collections

          mcm      combSUM             combMNZ              markov
    A1    18.79%   17.54% (-6.65%*)    17.08% (-9.10%*)     18.63% (-0.85%)
    A2    21.36%   19.18% (-10.21%*)   18.61% (-12.87%*)    21.33% (-0.14%)
    A3    21.92%   21.38% (-2.46%)     20.88% (-4.74%)      19.35% (-11.72%*)
    A4    18.64%   17.58% (-5.69%*)    17.18% (-7.83%*)     18.63% (-0.05%)
    A5    55.39%   52.16% (-5.83%*)    49.70% (-10.27%*)    53.30% (-3.77%)
    A6    16.95%   15.65% (-7.67%*)    14.57% (-14.04%*)    16.39% (-3.30%)

From the analysis of Table 10, the following can be established:

• for all the runs, considering all the documents in each input ranking (A2) significantly improves performance (MAP increases by 11.62% on average). This is predictable, since some initially unreported relevant documents would receive better positions in the consensus ranking.

• for all the runs, considering documents even when they are present in only one input ranking (A3) significantly improves performance. For mcm, combSUM and combMNZ, the performance improvement is more important (MAP increases by 20.27% on average) than for the markov run (MAP increases by 3.86%).

• preserving the initial positions of documents (A4) or recomputing them (A1) does not have a noticeable influence on performance for either positional or majoritarian methods.

• considering all the queries of the Web track of TREC-2004 (A5) as well as the TD queries of the Web track of TREC-2002 (A6) does not alter the relative performance of the different data fusion methods.
• considering the TD queries of the Web track of TREC-2002, the performances of all the data fusion methods are lower than that of the best performing input ranking, for which the MAP value equals 18.58%. This is because most of the fused input rankings have very low performances compared to the best one, which brings more noise to the consensus ranking.

• the performances of the data fusion methods mcm and markov are significantly better than that of the best input ranking, uogWebCAU150. This remains true for runs combSUM and combMNZ only under assumptions H1_all or H2_all. This shows that majoritarian methods are less sensitive to the assumptions than positional methods.

• the outranking approach always performs significantly better than the positional methods combSUM and combMNZ. It also performs better than the Markov chain method, especially under assumption H2_all, where the difference of performances becomes significant.

6. CONCLUSIONS

In this paper, we address the rank aggregation problem where different, but not disjoint, lists of documents are to be fused. We noticed that the input rankings can hide ties, so they should not be considered as complete orders: only robust information should be used from each input ranking. Current rank aggregation methods, and especially positional methods (e.g. combSUM [15]), are not initially designed to work with such rankings; they should be adapted by considering specific working assumptions.

We propose a new outranking method for rank aggregation which is well adapted to the IR context. Indeed, it ranks two documents w.r.t. the intensity of their position difference in each input ranking, and also considering the number of input rankings that are concordant and discordant in favor of a specific document. There is also no need to make specific assumptions on the positions of the missing documents. This is an important feature, since the absence of a document from a ranking should not necessarily be interpreted negatively.

Experimental results show that the outranking method significantly outperforms popular classical positional data fusion methods like the combSUM and combMNZ strategies. It also outperforms a well performing majoritarian method, the Markov chain method. These results are tested against different test collections and queries. From the experiments, we can also conclude that in order to improve performance, we should fuse the result lists of well performing IR models, and that majoritarian data fusion methods perform better than positional methods.

The proposed method can have a real impact on Web metasearch performance, since only ranks are available from most primary search engines, whereas most current approaches need scores to merge result lists into one single list. Further work involves investigating whether the outranking approach performs well in various other contexts, e.g. using the document scores or some combination of document ranks and scores.

Acknowledgments

The authors would like to thank Jacques Savoy for his valuable comments on a preliminary version of this paper.

7. REFERENCES

[1] A. Aronson, D. Demner-Fushman, S. Humphrey, J. Lin, H. Liu, P. Ruch, M. Ruiz, L. Smith, L. Tanabe, and W. Wilbur. Fusion of knowledge-intensive and statistical approaches for retrieving and annotating textual genomics documents. In Proceedings of TREC'2005. NIST Publication, 2005.
[2] R. A. Baeza-Yates and B. A. Ribeiro-Neto. Modern Information Retrieval. ACM Press, 1999.
[3] B. T. Bartell, G. W. Cottrell, and R. K. Belew. Automatic combination of multiple ranked retrieval systems. In Proceedings of ACM-SIGIR'94, pages 173-181. Springer-Verlag, 1994.
[4] N. J. Belkin, P. Kantor, E. A. Fox, and J. A. Shaw. Combining evidence of multiple query representations for information retrieval. IPM, 31(3):431-448, 1995.
[5] J. Borda. Mémoire sur les élections au scrutin. Histoire de l'Académie des Sciences, 1781.
[6] J. P. Callan, Z. Lu, and W. B. Croft. Searching distributed collections with inference networks. In Proceedings of ACM-SIGIR'95, pages 21-28, 1995.
[7] M. Condorcet. Essai sur l'application de l'analyse à la probabilité des décisions rendues à la pluralité des voix. Imprimerie Royale, Paris, 1785.
[8] W. D. Cook and M. Kress. Ordinal ranking with intensity of preference. Management Science, 31(1):26-32, 1985.
[9] N. Craswell and D. Hawking. Overview of the TREC-2002 Web Track. In Proceedings of TREC'2002. NIST Publication, 2002.
[10] N. Craswell and D. Hawking. Overview of the TREC-2004 Web Track. In Proceedings of TREC'2004. NIST Publication, 2004.
[11] C. Dwork, S. R. Kumar, M. Naor, and D. Sivakumar. Rank aggregation methods for the Web. In Proceedings of WWW'2001, pages 613-622, 2001.
[12] R. Fagin. Combining fuzzy information from multiple systems. JCSS, 58(1):83-99, 1999.
[13] R. Fagin, R. Kumar, M. Mahdian, D. Sivakumar, and E. Vee. Comparing and aggregating rankings with ties. In PODS, pages 47-58, 2004.
[14] R. Fagin, R. Kumar, and D. Sivakumar. Comparing top k lists. SIAM J. on Discrete Mathematics, 17(1):134-160, 2003.
[15] E. A. Fox and J. A. Shaw. Combination of multiple searches. In Proceedings of TREC'3. NIST Publication, 1994.
[16] J. Katzer, M. McGill, J. Tessier, W. Frakes, and P. DasGupta. A study of the overlap among document representations. Information Technology: Research and Development, 1(4):261-274, 1982.
[17] L. S. Larkey, M. E. Connell, and J. Callan. Collection selection and results merging with topically organized U.S. patents and TREC data. In Proceedings of ACM-CIKM'2000, pages 282-289. ACM Press, 2000.
[18] A. Le Calvé and J. Savoy. Database merging strategy based on logistic regression. IPM, 36(3):341-359, 2000.
[19] J. H. Lee. Analyses of multiple evidence combination. In Proceedings of ACM-SIGIR'97, pages 267-276, 1997.
[20] D. Lillis, F. Toolan, R. Collier, and J. Dunnion. ProbFuse: a probabilistic approach to data fusion. In Proceedings of ACM-SIGIR'2006, pages 139-146. ACM Press, 2006.
[21] J. I. Marden. Analyzing and Modeling Rank Data. Number 64 in Monographs on Statistics and Applied Probability. Chapman & Hall, 1995.
[22] M. Montague and J. A. Aslam. Metasearch consistency. In Proceedings of ACM-SIGIR'2001, pages 386-387. ACM Press, 2001.
[23] D. M. Pennock and E. Horvitz. Analysis of the axiomatic foundations of collaborative filtering. In Workshop on AI for Electronic Commerce at the 16th National Conference on Artificial Intelligence, 1999.
[24] M. E. Renda and U. Straccia. Web metasearch: rank vs. score based rank aggregation methods. In Proceedings of ACM-SAC'2003, pages 841-846. ACM Press, 2003.
[25] W. H. Riker. Liberalism against populism. Waveland Press, 1982.
[26] B. Roy. The outranking approach and the foundations of ELECTRE methods. Theory and Decision, 31:49-73, 1991.
[27] B. Roy and J. Hugonnard. Ranking of suburban line extension projects on the Paris metro system by a multicriteria method. Transportation Research, 16A(4):301-312, 1982.
[28] L. Si and J. Callan. Using sampled data and regression to merge search engine results. In Proceedings of ACM-SIGIR'2002, pages 19-26. ACM Press, 2002.
[29] M. Truchon. An extension of the Condorcet criterion and Kemeny orders. Cahier 9813, Centre de Recherche en Economie et Finance Appliquées, Oct. 1998.
[30] H. Turtle and W. B. Croft. Inference networks for document retrieval. In Proceedings of ACM-SIGIR'90, pages 1-24. ACM Press, 1990.
[31] C. C. Vogt and G. W. Cottrell. Fusion via a linear combination of scores. Information Retrieval, 1(3):151-173, 1999.
An Outranking Approach for Rank Aggregation in Information Retrieval ABSTRACT Research in Information Retrieval usually shows performance improvement when many sources of evidence are combined to produce a ranking of documents (e.g., texts, pictures, sounds, etc.). In this paper, we focus on the rank aggregation problem, also called data fusion problem, where rankings of documents, searched into the same collection and provided by multiple methods, are combined in order to produce a new ranking. In this context, we propose a rank aggregation method within a multiple criteria framework using aggregation mechanisms based on decision rules identifying positive and negative reasons for judging whether a document should get a better rank than another. We show that the proposed method deals well with the Information Retrieval distinctive features. Experimental results are reported showing that the suggested method performs better than the well-known CombSUM and CombMNZ operators. 1. INTRODUCTION A wide range of current Information Retrieval (IR) approaches are based on various search models (Boolean, Vector Space, Probabilistic, Language, etc. [2]) in order to retrieve relevant documents in response to a user request. The result lists produced by these approaches depend on the exact definition of the relevance concept. Rank aggregation approaches, also called data fusion approaches, consist in combining these result lists in order to produce a new and hopefully better ranking. Such approaches give rise to metasearch engines in the Web context. We consider, in the following, cases where only ranks are available and no other additional information is provided such as the relevance scores. This corresponds indeed to the reality, where only ordinal information is available. Data fusion is also relevant in other contexts, such as when the user writes several queries of his/her information need (e.g., a boolean query and a natural language query) [4], or when many document surrogates are available [16]. Several studies argued that rank aggregation has the potential of combining effectively all the various sources of evidence considered in various input methods. For instance, experiments carried out in [16], [30], [4] and [19] showed that documents which appear in the lists of the majority of the input methods are more likely to be relevant. Moreover, Lee [19] and Vogt and Cottrell [31] found that various retrieval approaches often return very different irrelevant documents, but many of the same relevant documents. Bartell et al. [3] also found that rank aggregation methods improve the performances w.r.t. those of the input methods, even when some of them have weak individual performances. These methods also tend to smooth out biases of the input methods according to Montague and Aslam [22]. Data fusion has recently been proved to improve performances for both the ad hoc retrieval and categorization tasks within the TREC genomics track in 2005 [1]. The rank aggregation problem was addressed in various fields such as i) in social choice theory which studies voting algorithms which specify winners of elections or winners of competitions in tournaments [29], ii) in statistics when studying correlation between rankings, iii) in distributed databases when results from different databases must be combined [12], and iv) in collaborative filtering [23]. Most current rank aggregation methods consider each input ranking as a permutation over the same set of items. 
They also give a rigid interpretation to the exact ranking of the items. Both of these assumptions are generally not valid in the IR context, as will be shown in the following sections. The remainder of the paper is organized as follows. We first review current rank aggregation methods in Section 2. Then we outline the specificities of the data fusion problem in the IR context (Section 3). In Section 4, we present a new aggregation method which is shown to fit the IR context well. Experimental results are presented in Section 5 and conclusions are provided in a final section. 2. RELATED WORK As pointed out by Riker [25], we can distinguish two families of rank aggregation methods: positional methods, which assign scores to the items to be ranked according to the ranks they receive, and majoritarian methods, which are based on pairwise comparisons of the items to be ranked. These two families of methods find their roots in the pioneering works of Borda [5] and Condorcet [7], respectively, in the social choice literature. 2.1 Preliminaries We first introduce some basic notations to present the rank aggregation methods in a uniform way. Let D = {d1, d2, ..., d_nd} be a set of nd documents. A list or a ranking ≻j is an ordering defined on Dj ⊆ D (j = 1, ..., n). Thus, di ≻j di' means di `is ranked better than' di' in ≻j. When Dj = D, ≻j is said to be a full list; otherwise, it is a partial list. If di belongs to Dj, r_i^j denotes the rank or position of di in ≻j. We assume that the best answer (document) is assigned position 1 and the worst one is assigned position |Dj|. Let ≻_D be the set of all permutations on D or on subsets of D. A profile is an n-tuple of rankings PR = (≻1, ≻2, ..., ≻n). Restricting PR to the rankings containing document di defines PRi. We also call the number of rankings which contain document di the rank hits of di [19]. The rank aggregation or data fusion problem consists of finding a ranking function or mechanism Ψ (also called a social welfare function in the social choice theory terminology) defined by Ψ : PR = (≻1, ≻2, ..., ≻n) ↦ σ, where σ is called a consensus ranking. 2.2 Positional Methods 2.2.1 Borda Count This method [5] first assigns the score Σ_{j=1..n} r_i^j to each document di. Documents are then ranked by increasing order of this score, breaking ties, if any, arbitrarily. 2.2.2 Linear Combination Methods This family of methods basically combines scores of documents. When used for the rank aggregation problem, ranks are assumed to be scores or performances to be combined using aggregation operators such as the weighted sum or some variation of it [3, 31, 17, 28]. For instance, Callan et al. [6] used the inference networks model [30] to combine rankings. Fox and Shaw [15] proposed several combination strategies, namely CombSUM, CombMIN, CombMAX, CombANZ and CombMNZ. The first three operators correspond to the sum, min and max operators, respectively. CombANZ and CombMNZ respectively divide and multiply the CombSUM score by the rank hits. It is shown in [19] that the CombSUM and CombMNZ operators perform better than the others. Metasearch engines such as SavvySearch and MetaCrawler use the CombSUM strategy to fuse rankings. 2.2.3 Footrule Optimal Aggregation In this method, a consensus ranking minimizes the Spearman footrule distance from the input rankings [21]. Formally, given two full lists ≻j and ≻j', this distance is given by F(≻j, ≻j') = Σ_{i=1..nd} |r_i^j − r_i^{j'}|.
Given a profile PR and a consensus ranking σ, the Spearman footrule distance of σ to PR is given by F(σ, PR) = Σ_{j=1..n} Σ_{i=1..nd} |r_i^σ − r_i^j|, i.e. the sum over the input rankings of the absolute position differences. This formulation has the advantage that it considers the intensity of preferences. 2.2.4 Probabilistic Methods This kind of method assumes that the performance of the input methods on a number of training queries is indicative of their future performance. During the training process, probabilities of relevance are calculated. For subsequent queries, documents are ranked based on these probabilities. For instance, in [20], each input ranking ≻j is divided into a number of segments, and the conditional probability of relevance (R) of each document di, depending on the segment k it occurs in, is computed, i.e. prob(R | di, k, ≻j). For subsequent queries, the score of each document di is given by Σ_{j=1..n} prob(R | di, k, ≻j). Le Calve and Savoy [18] suggest using a logistic regression approach for combining scores. Training data is needed to infer the model parameters. 2.3 Majoritarian Methods 2.3.1 Condorcet Procedure The original Condorcet rule [7] specifies that a winner of the election is any item that beats or ties with every other item in a pairwise contest. Formally, let C(di σ di') = {≻j ∈ PR : di ≻j di'} be the coalition of rankings that are concordant with establishing di σ di', i.e. with the proposition di `should be ranked better than' di' in the final ranking σ. Then di beats or ties with di' iff |C(di σ di')| ≥ |C(di' σ di)|. The repetitive application of the Condorcet algorithm can produce a ranking of items in a natural way: select the Condorcet winner, remove it from the lists, and repeat the previous two steps until there are no more documents to rank. Since a Condorcet winner does not always exist, variations of the Condorcet procedure have been developed within multiple criteria decision aid theory, with methods such as ELECTRE [26]. 2.3.2 Kemeny Optimal Aggregation As in Section 2.2.3, a consensus ranking minimizes a geometric distance from the input rankings, where the Kendall tau distance is used instead of the Spearman footrule distance. Formally, given two full lists ≻j and ≻j', the Kendall tau distance is given by K(≻j, ≻j') = |{(di, di') : r_i^j < r_{i'}^j and r_i^{j'} > r_{i'}^{j'}}|, i.e. the number of pairwise disagreements between the two lists. It is easy to show that the consensus ranking corresponds to the geometric median of the input rankings and that the Kemeny optimal aggregation problem corresponds to the minimum feedback edge set problem. 2.3.3 Markov Chain Methods Markov chains (MCs) have been used by Dwork et al. [11] as a `natural' method to obtain a consensus ranking, where states correspond to the documents to be ranked and the transition probabilities vary depending on the interpretation of the transition event. In the same reference, the authors proposed four specific MCs, and experimental testing has shown that the following MC is the best performing one (see also [24]): • MC4: move from the current state di to the next state di' by first choosing a document di' uniformly from D; if, for the majority of the rankings, we have r_{i'}^j ≤ r_i^j, then move to di', else stay in di. The consensus ranking corresponds to the stationary distribution of MC4. 3. SPECIFICITIES OF THE RANK AGGREGATION PROBLEM IN THE IR CONTEXT 3.1 Limited Significance of the Rankings 3.2 Partial Lists 4. OUTRANKING APPROACH FOR RANK AGGREGATION 4.1 Presentation 4.2 Illustrative Example 5.
EXPERIMENTS AND RESULTS 5.1 Test Setting 5.2 Results 5.2.1 Impact of the Working Assumptions 5.2.2 Impact of the Variation of the Parameters 5.2.3 Impact of the Variation of the Number of Input Rankings 5.2.4 Comparison of the Performance of Different Rank Aggregation Methods 6. CONCLUSIONS In this paper, we address the rank aggregation problem where different, but not disjoint, lists of documents are to be fused. We noticed that the input rankings can hide ties, so they should not be considered as complete orders. Only robust information should be used from each input ranking. Current rank aggregation methods, and especially positional methods (e.g. combSUM [15]), are not initially designed to work with such rankings. They should be adapted by considering specific working assumptions. We propose a new outranking method for rank aggregation which is well adapted to the IR context. Indeed, it ranks two documents w.r.t. the intensity of their positions difference in each input ranking and also considering the number of the input rankings that are concordant and discordant in favor of a specific document. There is also no need to make specific assumptions on the positions of the missing documents. This is an important feature since the absence of a document from a ranking should not be necessarily interpreted negatively. Experimental results show that the outranking method significantly out-performs popular classical positional data fusion methods like combSUM and combMNZ strategies. It also out-performs a good performing majoritarian methods which is the Markov chain method. These results are tested against different test collections and queries. From the experiments, we can also conclude that in order to improve the performances, we should fuse result lists of well performing IR models, and that majoritarian data fusion methods perform better than positional methods. The proposed method can have a real impact on Web metasearch performances since only ranks are available from most primary search engines, whereas most of the current approaches need scores to merge result lists into one single list. Further work involves investigating whether the outranking approach performs well in various other contexts, e.g. using the document scores or some combination of document ranks and scores.
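For comparison, the positional baselines that the conclusions refer to (Section 2.2) can be sketched as follows in a ranks-only setting. Since only positions are available, a rank-to-score conversion is needed; the simple linear conversion and the handling of partial lists below are illustrative assumptions, not necessarily the choices used in the reported experiments.

```python
def positional_fuse(rankings, strategy="combSUM"):
    """Positional data fusion on rank-only inputs (rankings: doc -> position,
    1 = best).  The rank-to-score conversion is a simple illustrative choice."""
    docs = set().union(*rankings)
    fused = {}
    for d in docs:
        hits = [r for r in rankings if d in r]           # rank hits of d
        scores = [len(r) - r[d] + 1 for r in hits]       # better rank -> higher score
        total = sum(scores)
        if strategy == "combSUM":
            fused[d] = total
        elif strategy == "combMNZ":
            fused[d] = total * len(hits)                 # CombSUM times rank hits
        elif strategy == "borda":
            fused[d] = -sum(r[d] for r in hits)          # lower rank sum is better
        else:
            raise ValueError(strategy)
    return sorted(docs, key=lambda d: -fused[d])

runs = [{"d1": 1, "d2": 2, "d3": 3},
        {"d2": 1, "d1": 2, "d4": 3}]
print(positional_fuse(runs, "combMNZ"))   # documents ordered by fused score
```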
An Outranking Approach for Rank Aggregation in Information Retrieval ABSTRACT Research in Information Retrieval usually shows performance improvement when many sources of evidence are combined to produce a ranking of documents (e.g., texts, pictures, sounds, etc.). In this paper, we focus on the rank aggregation problem, also called data fusion problem, where rankings of documents, searched into the same collection and provided by multiple methods, are combined in order to produce a new ranking. In this context, we propose a rank aggregation method within a multiple criteria framework using aggregation mechanisms based on decision rules identifying positive and negative reasons for judging whether a document should get a better rank than another. We show that the proposed method deals well with the Information Retrieval distinctive features. Experimental results are reported showing that the suggested method performs better than the well-known CombSUM and CombMNZ operators. 1. INTRODUCTION The result lists produced by these approaches depend on the exact definition of the relevance concept. Rank aggregation approaches, also called data fusion approaches, consist in combining these result lists in order to produce a new and hopefully better ranking. Such approaches give rise to metasearch engines in the Web context. We consider, in the following, cases where only ranks are available and no other additional information is provided such as the relevance scores. This corresponds indeed to the reality, where only ordinal information is available. Several studies argued that rank aggregation has the potential of combining effectively all the various sources of evidence considered in various input methods. For instance, experiments carried out in [16], [30], [4] and [19] showed that documents which appear in the lists of the majority of the input methods are more likely to be relevant. Bartell et al. [3] also found that rank aggregation methods improve the performances w.r.t. those of the input methods, even when some of them have weak individual performances. These methods also tend to smooth out biases of the input methods according to Montague and Aslam [22]. Most current rank aggregation methods consider each input ranking as a permutation over the same set of items. They also give rigid interpretation to the exact ranking of the items. Both of these assumptions are rather not valid in the IR context, as will be shown in the following sections. The remaining of the paper is organized as follows. We first review current rank aggregation methods in Section 2. Then we outline the specificities of the data fusion problem in the IR context (Section 3). In Section 4, we present a new aggregation method which is proven to best fit the IR context. Experimental results are presented in Section 5 and conclusions are provided in a final section. 2. RELATED WORK As pointed out by Riker [25], we can distinguish two families of rank aggregation methods: positional methods which assign scores to items to be ranked according to the ranks they receive and majoritarian methods which are based on pairwise comparisons of items to be ranked. These two families of methods find their roots in the pioneering works of Borda [5] and Condorcet [7], respectively, in the social choice literature. 2.1 Preliminaries We first introduce some basic notations to present the rank aggregation methods in a uniform way. Let D = {d1, d2,..., dnd} be a set of nd documents. 
A list or a ranking ~ j is an ordering defined on Dj ⊆ D (j = 1,..., n). Thus, di ~ j di, means di ` is ranked better than' di, in ~ j. When Dj = D, ~ j is said to be a full list. Otherwise, it is a partial list. If di belongs to Dj, rji denotes the rank or position of di in ~ j. We assume that the best answer (document) is assigned the position 1 and the worst one is assigned the position | Dj |. A profile is a n-tuple of rankings PR = (~ 1, ~ 2,..., ~ n). Restricting PR to the rankings containing document di defines PRi. We also call the number of rankings which contain document di the rank hits of di [19]. The rank aggregation or data fusion problem consists of finding a ranking function or mechanism Ψ (also called a social welfare function in the social choice theory terminology) defined by: where σ is called a consensus ranking. 2.2 Positional Methods 2.2.1 Borda Count This method [5] first assigns a score Enj = 1 rji to each document di. Documents are then ranked by increasing order of this score, breaking ties, if any, arbitrarily. 2.2.2 Linear Combination Methods This family of methods basically combine scores of documents. When used for the rank aggregation problem, ranks are assumed to be scores or performances to be combined using aggregation operators such as the weighted sum or some variation of it [3, 31, 17, 28]. For instance, Callan et al. [6] used the inference networks model [30] to combine rankings. The first three operators correspond to the sum, min and max operators, respectively. CombANZ and CombMNZ respectively divides and multiplies the CombSUM score by the rank hits. It is shown in [19] that the CombSUM and CombMNZ operators perform better than the others. Metasearch engines such as SavvySearch and MetaCrawler use the CombSUM strategy to fuse rankings. 2.2.3 Footrule Optimal Aggregation In this method, a consensus ranking minimizes the Spearman footrule distance from the input rankings [21]. Formally, given two full lists ~ j and ~ j,, this distance is given by F (~ j, ~ j,) = nd as follows. Given a profile PR and a consensus ranking σ, the Spearman footrule distance of σ to PR is given by 2.2.4 Probabilistic Methods This kind of methods assume that the performance of the input methods on a number of training queries is indicative of their future performance. During the training process, probabilities of relevance are calculated. For subsequent queries, documents are ranked based on these probabilities. For subsequent queries, the score of each document di is given by En prob (R | di, k, Yj). Le Calve and Savoy [18] suggest using j = 1 k a logistic regression approach for combining scores. Training data is needed to infer the model parameters. 2.3 Majoritarian Methods 2.3.1 Condorcet Procedure The repetitive application of the Condorcet algorithm can produce a ranking of items in a natural way: select the Condorcet winner, remove it from the lists, and repeat the previous two steps until there are no more documents to rank. 2.3.2 Kemeny Optimal Aggregation As in section 2.2.3, a consensus ranking minimizes a geometric distance from the input rankings, where the Kendall tau distance is used instead of the Spearman footrule distance. i,} |, i.e. the number of pairwise disagreements between the two lists. It is easy to show that the consensus ranking corresponds to the geometric median of the input rankings and that the Kemeny optimal aggregation problem corresponds to the minimum feedback edge set problem. 
2.3.3 Markov Chain Methods Markov chains (MCs) have been used by Dwork et al. [11] as a ` natural' method to obtain a consensus ranking where states correspond to the documents to be ranked and the transition probabilities vary depending on the interpretation of the transition event. In the same reference, the authors proposed four specific MCs and experimental testing had shown that the following MC is the best performing one (see also [24]): • MC4: move from the current state di to the next state di, by first choosing a document di, uniformly from D. If for the majority of the rankings, we have rji, ≤ rji, then move to di,, else stay in di. The consensus ranking corresponds to the stationary distribution of MC4. 6. CONCLUSIONS In this paper, we address the rank aggregation problem where different, but not disjoint, lists of documents are to be fused. We noticed that the input rankings can hide ties, so they should not be considered as complete orders. Only robust information should be used from each input ranking. Current rank aggregation methods, and especially positional methods (e.g. combSUM [15]), are not initially designed to work with such rankings. They should be adapted by considering specific working assumptions. We propose a new outranking method for rank aggregation which is well adapted to the IR context. Indeed, it ranks two documents w.r.t. the intensity of their positions difference in each input ranking and also considering the number of the input rankings that are concordant and discordant in favor of a specific document. There is also no need to make specific assumptions on the positions of the missing documents. This is an important feature since the absence of a document from a ranking should not be necessarily interpreted negatively. Experimental results show that the outranking method significantly out-performs popular classical positional data fusion methods like combSUM and combMNZ strategies. It also out-performs a good performing majoritarian methods which is the Markov chain method. These results are tested against different test collections and queries. From the experiments, we can also conclude that in order to improve the performances, we should fuse result lists of well performing IR models, and that majoritarian data fusion methods perform better than positional methods. The proposed method can have a real impact on Web metasearch performances since only ranks are available from most primary search engines, whereas most of the current approaches need scores to merge result lists into one single list. Further work involves investigating whether the outranking approach performs well in various other contexts, e.g. using the document scores or some combination of document ranks and scores.
H-87
Robustness of Adaptive Filtering Methods In a Cross-benchmark Evaluation
This paper reports a cross-benchmark evaluation of regularized logistic regression (LR) and incremental Rocchio for adaptive filtering. Using four corpora from the Topic Detection and Tracking (TDT) forum and the Text Retrieval Conferences (TREC) we evaluated these methods with non-stationary topics at various granularity levels, and measured performance with different utility settings. We found that LR performs strongly and robustly in optimizing T11SU (a TREC utility function) while Rocchio is better for optimizing Ctrk (the TDT tracking cost), a high-recall oriented objective function. Using systematic cross-corpus parameter optimization with both methods, we obtained the best results ever reported on TDT5, TREC10 and TREC11. Relevance feedback on a small portion (0.05~0.2%) of the TDT5 test documents yielded significant performance improvements, measuring up to a 54% reduction in Ctrk and a 20.9% increase in T11SU (with β=0.1), compared to the results of the top-performing system in TDT2004 without relevance feedback information.
[ "robust", "adapt filter", "adapt filter", "cross-benchmark evalu", "regular", "logist regress", "lr", "rocchio", "topic detect", "util function", "cross-corpu paramet optim", "relev feedback", "inform retriev", "topic track", "statist learn", "systemat method for paramet tune across multipl corpora", "extern corpu", "valid set", "granular differ", "cost function", "penalti ratio", "optim criterion", "probabilist threshold calibr", "rocchio-style method", "posterior probabl", "sigmoid function", "bia", "gaussian" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "R", "R", "U", "M", "U", "M", "R", "R", "U", "M", "U", "M", "U", "M", "U", "U" ]
Robustness of Adaptive Filtering Methods In a Cross-benchmark Evaluation Yiming Yang, Shinjae Yoo, Jian Zhang, Bryan Kisiel School of Computer Science, Carnegie Mellon University 5000 Forbes Avenue, Pittsburgh, PA 15213, USA ABSTRACT This paper reports a cross-benchmark evaluation of regularized logistic regression (LR) and incremental Rocchio for adaptive filtering. Using four corpora from the Topic Detection and Tracking (TDT) forum and the Text Retrieval Conferences (TREC) we evaluated these methods with non-stationary topics at various granularity levels, and measured performance with different utility settings. We found that LR performs strongly and robustly in optimizing T11SU (a TREC utility function) while Rocchio is better for optimizing Ctrk (the TDT tracking cost), a high-recall oriented objective function. Using systematic cross-corpus parameter optimization with both methods, we obtained the best results ever reported on TDT5, TREC10 and TREC11. Relevance feedback on a small portion (0.05~0.2%) of the TDT5 test documents yielded significant performance improvements, measuring up to a 54% reduction in Ctrk and a 20.9% increase in T11SU (with β=0.1), compared to the results of the top-performing system in TDT2004 without relevance feedback information. Categories and Subject Descriptors H.3.3 [Information Search and Retrieval]: Information filtering, Relevance feedback, Retrieval models, Selection process; I.5.2 [Design Methodology]: Classifier design and evaluation General Terms Algorithms, Measurement, Performance, Experimentation 1. INTRODUCTION Adaptive filtering (AF) has been a challenging research topic in information retrieval. The task is for the system to make an online topic membership decision (yes or no) for every document, as soon as it arrives, with respect to each pre-defined topic of interest. Starting from 1997 in the Topic Detection and Tracking (TDT) area and 1998 in the Text Retrieval Conferences (TREC), benchmark evaluations have been conducted by NIST under the following conditions[6][7][8][3][4]: • A very small number (1 to 4) of positive training examples was provided for each topic at the starting point. • Relevance feedback was available but only for the systemaccepted documents (with a yes decision) in the TREC evaluations for AF. • Relevance feedback (RF) was not allowed in the TDT evaluations for AF (or topic tracking in the TDT terminology) until 2004. • TDT2004 was the first time that TREC and TDT metrics were jointly used in evaluating AF methods on the same benchmark (the TDT5 corpus) where non-stationary topics dominate. The above conditions attempt to mimic realistic situations where an AF system would be used. That is, the user would be willing to provide a few positive examples for each topic of interest at the start, and might or might not be able to provide additional labeling on a small portion of incoming documents through relevance feedback. Furthermore, topics of interest might change over time, with new topics appearing and growing, and old topics shrinking and diminishing. 
These conditions make adaptive filtering a difficult task in statistical learning (online classification), for the following reasons: 1) it is difficult to learn accurate models for prediction based on extremely sparse training data; 2) it is not obvious how to correct the sampling bias (i.e., relevance feedback on system-accepted documents only) during the adaptation process; 3) it is not well understood how to effectively tune parameters in AF methods using cross-corpus validation where the validation and evaluation topics do not overlap, and the documents may be from different sources or different epochs. None of these problems is addressed in the literature of statistical learning for batch classification where all the training data are given at once. The first two problems have been studied in the adaptive filtering literature, including topic profile adaptation using incremental Rocchio, Gaussian-Exponential density models, logistic regression in a Bayesian framework, etc., and threshold optimization strategies using probabilistic calibration or local fitting techniques [1][2][9][10][11][12][13]. Although these works provide valuable insights for understanding the problems and possible solutions, it is difficult to draw conclusions regarding the effectiveness and robustness of current methods because the third problem has not been thoroughly investigated. Addressing the third issue is the main focus in this paper. We argue that robustness is an important measure for evaluating and comparing AF methods. By robust we mean consistent and strong performance across benchmark corpora with a systematic method for parameter tuning across multiple corpora. Most AF methods have pre-specified parameters that may influence the performance significantly and that must be determined before the test process starts. Available training examples, on the other hand, are often insufficient for tuning the parameters. In TDT5, for example, there is only one labeled training example per topic at the start; parameter optimization on such training data is doomed to be ineffective. This leaves only one option (assuming tuning on the test set is not an alternative), that is, choosing an external corpus as the validation set. Notice that the validation-set topics often do not overlap with the test-set topics, thus the parameter optimization is performed under the tough condition that the validation data and the test data may be quite different from each other. Now the important question is: which methods (if any) are robust under the condition of using cross-corpus validation to tune parameters? Current literature does not offer an answer because no thorough investigation on the robustness of AF methods has been reported. In this paper we address the above question by conducting a cross-benchmark evaluation with two effective approaches in AF: incremental Rocchio and regularized logistic regression (LR). Rocchio-style classifiers have been popular in AF, with good performance in benchmark evaluations (TREC and TDT) if appropriate parameters were used and if combined with an effective threshold calibration strategy [2][4][7][8][9][11][13]. Logistic regression is a classical method in statistical learning, and one of the best in batch-mode text categorization [15][14]. It was recently evaluated in adaptive filtering and was found to have relatively strong performance (Section 5.1). 
Furthermore, a recent paper [13] reported that the joint use of Rocchio and LR in a Bayesian framework outperformed the results of using each method alone on the TREC11 corpus. Stimulated by those findings, we decided to include Rocchio and LR in our crossbenchmark evaluation for robustness testing. Specifically, we focus on how much the performance of these methods depends on parameter tuning, what the most influential parameters are in these methods, how difficult (or how easy) to optimize these influential parameters using cross-corpus validation, how strong these methods perform on multiple benchmarks with the systematic tuning of parameters on other corpora, and how efficient these methods are in running AF on large benchmark corpora. The organization of the paper is as follows: Section 2 introduces the four benchmark corpora (TREC10 and TREC11, TDT3 and TDT5) used in this study. Section 3 analyzes the differences among the TREC and TDT metrics (utilities and tracking cost) and the potential implications of those differences. Section 4 outlines the Rocchio and LR approaches to AF, respectively. Section 5 reports the experiments and results. Section 6 concludes the main findings in this study. 2. BENCHMARK CORPORA We used four benchmark corpora in our study. Table 1 shows the statistics about these data sets. TREC10 was the evaluation benchmark for adaptive filtering in TREC 2001, consisting of roughly 806,791 Reuters news stories from August 1996 to August 1997 with 84 topic labels (subject categories)[7]. The first two weeks (August 20th to 31st , 1996) of documents is the training set, and the remaining 11 & 1/2 months (from September 1st , 1996 to August 19th , 1997) is the test set. TREC11 was the evaluation benchmark for adaptive filtering in TREC 2002, consisting of the same set of documents as those in TREC10 but with a slightly different splitting point for the training and test sets. The TREC11 topics (50) are quite different from those in TREC10; they are queries for retrieval with relevance judgments by NIST assessors [8]. TDT3 was the evaluation benchmark in the TDT2001 dry run1 . The tracking part of the corpus consists of 71,388 news stories from multiple sources in English and Mandarin (AP, NYT, CNN, ABC, NBC, MSNBC, Xinhua, Zaobao, Voice of America and PRI the World) in the period of October to December 1998. Machine-translated versions of the non-English stories (Xinhua, Zaobao and VOA Mandarin) are provided as well. The splitting point for training-test sets is different for each topic in TDT. TDT5 was the evaluation benchmark in TDT2004 [4]. The tracking part of the corpus consists of 407,459 news stories in the period of April to September, 2003 from 15 news agents or broadcast sources in English, Arabic and Mandarin, with machine-translated versions of the non-English stories. We only used the English versions of those documents in our experiments for this paper. The TDT topics differ from TREC topics both conceptually and statistically. Instead of generic, ever-lasting subject categories (as those in TREC), TDT topics are defined at a finer level of granularity, for events that happen at certain times and locations, and that are born and die, typically associated with a bursty distribution over chronologically ordered news stories. The average size of TDT topics (events) is two orders of magnitude smaller than that of the TREC10 topics. 
Figure 1 compares the document densities of a TREC topic (Civil Wars) and two TDT topics (Gunshot and APEC Summit Meeting, respectively) over a 3-month time period, where the area under each curve is normalized to one. The granularity differences among topics and the corresponding non-stationary distributions make the cross-benchmark evaluation interesting. For example, algorithms favoring large and stable topics may not work well for short-lasting and non-stationary topics, and vice versa. Cross-benchmark evaluations allow us to test this hypothesis and possibly identify the weaknesses of current approaches to adaptive filtering in tracking the drifting trends of topics. 1 http://www.ldc.upenn.edu/Projects/TDT2001/topics.html [Figure 1: The temporal nature of topics. P(topic|week) per week for the topics Gunshot (TDT5), APEC Summit Meeting (TDT3) and Civil War (TREC10).] 3. METRICS To make our results comparable to the literature, we decided to use both TREC-conventional and TDT-conventional metrics in our evaluation. 3.1 TREC11 metrics Let A, B, C and D be, respectively, the numbers of true positives, false alarms, misses and true negatives for a specific topic, and let N = A + B + C + D be the total number of test documents. The TREC-conventional metrics are defined as: Precision = A/(A + B), Recall = A/(A + C), F_β = (1 + β²)·A / (A + B + β²·(A + C)), and T11SU(β,η) = (max((A − β·B)/(A + C), η) − η) / (1 − η), where parameters β and η were set to 0.5 and -0.5 respectively in TREC10 (2001) and TREC11 (2002). For evaluating the performance of a system, the performance scores are computed for individual topics first and then averaged over topics (macro-averaging). 3.2 TDT metrics The TDT-conventional metric for topic tracking is defined as: Ctrk(T) = w1·P(T)·Pmiss + w2·(1 − P(T))·Pfa, where P(T) is the percentage of documents on topic T, Pmiss is the miss rate by the system on that topic, Pfa is the false alarm rate, and w1 and w2 are the costs (pre-specified constants) for a miss and a false alarm, respectively. The TDT benchmark evaluations (since 1997) have used the settings of w1 = 1, w2 = 0.1 and P(T) = 0.02 for all topics. For evaluating the performance of a system, Ctrk is computed for each topic first and then the resulting scores are averaged for a single measure (the topic-weighted Ctrk). To make the intuition behind this measure transparent, we substitute the terms in the definition of Ctrk as follows: P(T) = (A + C)/N, 1 − P(T) = (B + D)/N, Pmiss = C/(A + C), Pfa = B/(B + D), which gives Ctrk(T) = w1·((A + C)/N)·(C/(A + C)) + w2·((B + D)/N)·(B/(B + D)) = (1/N)·(w1·C + w2·B). Clearly, Ctrk is the average cost per error on topic T, with w1 and w2 controlling the penalty ratio for misses vs. false alarms. In addition to Ctrk, TDT2004 also employed T11SU with β = 0.1 as a utility metric. To distinguish this from the β = 0.5 version of T11SU used in TREC11, we call the former TDT5SU in the rest of this paper.
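To make the metric definitions above concrete, the sketch below computes them directly from the per-topic contingency counts A, B, C and D. The function name and the edge-case handling are illustrative; this is not the official TREC or TDT scoring code.

```python
def trec_tdt_metrics(A, B, C, D, beta=0.5, eta=-0.5, w1=1.0, w2=0.1):
    """Per-topic metrics from contingency counts:
    A = true positives, B = false alarms, C = misses, D = true negatives."""
    N = A + B + C + D
    precision = A / (A + B) if A + B else 0.0
    recall = A / (A + C) if A + C else 0.0
    f_beta = (1 + beta**2) * A / (A + B + beta**2 * (A + C))
    # Scaled linear utility: normalised utility clipped at eta, rescaled to [0, 1].
    t11nu = (A - beta * B) / (A + C)
    t11su = (max(t11nu, eta) - eta) / (1 - eta)
    # Tracking cost with the actual on-topic rate P(T) = (A+C)/N,
    # which simplifies to (w1*C + w2*B)/N.
    ctrk = (w1 * C + w2 * B) / N
    return {"P": precision, "R": recall, "F": f_beta,
            "T11SU": t11su, "Ctrk": ctrk}

# Example: 80 hits, 20 false alarms, 40 misses on a 10,000-document stream.
print(trec_tdt_metrics(80, 20, 40, 9860))
```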
Table 1: Statistics of benchmark corpora for adaptive filtering evaluations. N(tr) is the number of the initial training documents; N(ts) is the number of the test documents; n+ is the number of positive examples of a predefined topic; * is an average over all the topics.
Corpus | #Topics | N(tr) | N(ts) | Avg n+(tr) | Avg n+(ts) | Max n+(ts) | Min n+(ts) | #Topics per doc (ts)
TREC10 | 84 | 20,307 | 783,484 | 2 | 9795.3 | 39,448 | 38 | 1.57
TREC11 | 50 | 80,664 | 726,419 | 3 | 378.0 | 597 | 198 | 1.12
TDT3 | 53 | 18,738* | 37,770* | 4 | 79.3 | 520 | 1 | 1.06
TDT5 | 111 | 199,419* | 207,991* | 1 | 71.3 | 710 | 1 | 1.01
3.3 The correlations and the differences From an optimization point of view, TDT5SU and T11SU are both utility functions while Ctrk is a cost function. Our objective is to maximize the former or to minimize the latter on test documents. The differences and correlations among these objective functions can be analyzed through the shared counts of A, B, C and D in their definitions. For example, both TDT5SU and T11SU are positively correlated to the values of A and D, and negatively correlated to the values of B and C; the only difference between them is in their penalty ratios for misses vs. false alarms, i.e., 10:1 in TDT5SU and 2:1 in T11SU. The Ctrk function, on the other hand, is positively correlated to the values of C and B, and negatively correlated to the values of A and D; hence, it is negatively correlated to T11SU and TDT5SU. More importantly, there is a subtle and major difference between Ctrk and the utility functions T11SU and TDT5SU. That is, Ctrk has a very different penalty ratio for misses vs. false alarms: it favors recall-oriented systems to an extreme. At first glance, one would think that the penalty ratio in Ctrk is 10:1 since w1 = 1 and w2 = 0.1. However, this is not true if P(T) = 0.02 is an inaccurate estimate of the average rate of on-topic documents in the test corpus. Using TDT3 as an example, the true percentage is P(T) = n+/N = 79.3/37,770 ≈ 0.002, where N is the average size of the test sets in TDT3, and n+ is the average number of positive examples per topic in the test sets. Using P̂(T) = 0.02 as an (inaccurate) estimate of 0.002 enlarges the intended penalty ratio of 10:1 to 100:1, roughly speaking. To wit: Ctrk(T) = w1 × P̂(T) × Pmiss + w2 × (1 − P̂(T)) × Pfa = 1 × 0.02 × C/(A + C) + 0.1 × (1 − 0.02) × B/(B + D) = 1 × ρ × (79.3/37,770) × C/79.3 + 0.1 × (1 − 0.02) × B/(37,770 − 79.3) ≈ 10 × (C/N) + 0.1 × (B/N) = (1/N) × (10 × C + 0.1 × B), where ρ = P̂(T)/P(T) = 0.02/0.002 = 10 is the factor of enlargement in the estimation of P(T) compared to the truth. Comparing the above result to the substituted form of Ctrk in Section 3.2, i.e., (1/N) × (w1 × C + w2 × B), we can see that the actual penalty ratio for misses vs. false alarms was 100:1 in the evaluations on TDT3 using Ctrk. Similarly, we can compute the enlargement factor for TDT5 using the statistics in Table 1 as follows: ρ = P̂(T)/P(T) = 0.02/(71.3/207,991) ≈ 58.3, which means the actual penalty ratio for misses vs. false alarms in the evaluation on TDT5 using Ctrk was approximately 583:1 (both factors are recomputed in the short sketch following this subsection). The implications of the above analysis are rather significant: • Ctrk defined by the same formula does not necessarily mean the same objective function in evaluation; instead, the optimization criterion depends on the test corpus. • Systems optimized for Ctrk would not optimize TDT5SU (and T11SU) because the former favors high recall to an extreme while the latter does not. • Parameters tuned on one corpus (e.g., TDT3) might not work for an evaluation on another corpus (say, TDT5) unless we account for the previously-unknown subtle dependency of Ctrk on the data. • Results in Ctrk in the past years of TDT evaluations may not be directly comparable to each other because the evaluation collections changed most years and hence the penalty ratio in Ctrk varied.
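The enlargement factors derived above are easy to verify numerically from the Table 1 statistics; the helper below is a hypothetical convenience, not part of the evaluation tooling.

```python
def effective_penalty_ratio(avg_pos, test_size, p_hat=0.02, w1=1.0, w2=0.1):
    """Effective miss/false-alarm penalty ratio of Ctrk when the preset
    prior p_hat overstates the true on-topic rate avg_pos/test_size."""
    p_true = avg_pos / test_size
    rho = p_hat / p_true                      # enlargement factor
    return rho, (w1 * rho) / w2               # intended 10:1 becomes roughly 10*rho : 1

print(effective_penalty_ratio(79.3, 37_770))    # TDT3: rho ~ 9.5, i.e. roughly 100:1
print(effective_penalty_ratio(71.3, 207_991))   # TDT5: rho ~ 58.3, i.e. ~583:1
```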
Although these problems with Ctrk were not originally anticipated, they offered an opportunity to examine the ability of systems to trade off precision for extreme recall. This was a challenging part of the TDT2004 evaluation for AF. Comparing the metrics in TDT and TREC from a utility or cost optimization point of view is important for understanding the evaluation results of adaptive filtering methods. This is the first time this issue is explicitly analyzed, to our knowledge. 4. METHODS 4.1 Incremental Rocchio for AF We employed a common version of Rocchio-style classifiers which computes a prototype vector per topic (T) as follows: p(T) = α·q(T) + β · (1/|D+(T)|) · Σ_{d ∈ D+(T)} d − γ · (1/|D−(T)|) · Σ_{d' ∈ D−(T)} d'. The first term on the RHS is the weighted vector representation of the topic description, whose elements are term weights. The second term is the weighted centroid of the set D+(T) of positive training examples, each of which is a vector of within-document term weights. The third term is the weighted centroid of the set D−(T) of negative training examples, which are the nearest neighbors of the positive centroid. The three terms are given pre-specified weights of α, β and γ, controlling the relative influence of these components in the prototype. The prototype of a topic is updated each time the system makes a yes decision on a new document for that topic. If relevance feedback is available (as is the case in TREC adaptive filtering), the new document is added to the pool of either D+(T) or D−(T), and the prototype is recomputed accordingly; if relevance feedback is not available (as is the case in TDT event tracking), the system's prediction (yes) is treated as the truth, and the new document is added to D+(T) for updating the prototype. Both cases are part of our experiments in this paper (and part of the TDT 2004 evaluations for AF). To distinguish the two, we call the first case simply Rocchio and the second case PRF Rocchio, where PRF stands for pseudo-relevance feedback. The predictions on a new document are made by computing the cosine similarity between each topic prototype and the document vector, and then comparing the resulting score against a threshold: sign(cos(p(T), d_new) − θ) = +1 (yes) or −1 (no). Threshold calibration in incremental Rocchio is a challenging research topic. Multiple approaches have been developed. The simplest is to use a universal threshold for all topics, tuned on a validation set and fixed during the testing phase. More elaborate methods include probabilistic threshold calibration, which converts the non-probabilistic similarity scores to probabilities (i.e., P(T|d)) for utility optimization [9][13], and margin-based local regression for risk reduction [11]. It is beyond the scope of this paper to compare all the different ways to adapt Rocchio-style methods for AF. Instead, our focus here is to investigate the robustness of Rocchio-style methods in terms of how much their performance depends on elaborate system tuning, and how difficult (or how easy) it is to get good performance through cross-corpus parameter optimization. Hence, we decided to use a relatively simple version of Rocchio as the baseline, i.e., with a universal threshold tuned on a validation corpus and fixed for all topics in the testing phase. This simple version of Rocchio has been commonly used in the past TDT benchmark evaluations for topic tracking, and had strong performance in the TDT2004 evaluations for adaptive filtering with and without relevance feedback (Section 5.1).
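The prototype update and thresholded cosine decision described in Section 4.1 can be sketched as follows. The dense vectors, the class structure and the default weights are simplifications of my own for readability; they are not the implementation evaluated in the paper.

```python
import numpy as np

class RocchioFilter:
    """Simplified incremental Rocchio for one topic (dense vectors for clarity)."""
    def __init__(self, query_vec, alpha=1.0, beta=1.0, gamma=0.5, theta=0.1):
        self.q = query_vec
        self.pos, self.neg = [], []
        self.alpha, self.beta, self.gamma, self.theta = alpha, beta, gamma, theta

    def prototype(self):
        p = self.alpha * self.q
        if self.pos:
            p = p + self.beta * np.mean(self.pos, axis=0)
        if self.neg:
            p = p - self.gamma * np.mean(self.neg, axis=0)
        return p

    def decide(self, doc_vec):
        p = self.prototype()
        cos = p @ doc_vec / (np.linalg.norm(p) * np.linalg.norm(doc_vec) + 1e-12)
        return cos >= self.theta                 # "yes" iff above the threshold

    def feedback(self, doc_vec, relevant):
        # TREC-style: true feedback on accepted documents.  PRF-style would
        # instead call this with relevant=True for every accepted document.
        (self.pos if relevant else self.neg).append(doc_vec)

# Toy usage with 4-dimensional term vectors.
f = RocchioFilter(query_vec=np.array([1.0, 1.0, 0.0, 0.0]))
doc = np.array([0.9, 0.8, 0.1, 0.0])
if f.decide(doc):
    f.feedback(doc, relevant=True)   # prototype is recomputed on the next call
```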
Results of more complex variants of Rocchio are also discussed when relevant. 4.2 Logistic Regression for AF Logistic regression (LR) estimates the posterior probability of a topic given a document using a sigmoid function: P(y = 1 | x, w) = 1 / (1 + exp(−w·x)), where x is the document vector whose elements are term weights, w is the vector of regression coefficients, and y ∈ {+1, −1} is the output variable corresponding to yes or no with respect to a particular topic. Given a training set of labeled documents D = {(x1, y1), ..., (xn, yn)}, the standard regression problem is to find the maximum likelihood estimates of the regression coefficients (the model parameters): w_ml = argmax_w P(D | w) = argmax_w log P(D | w) = argmin_w Σ_{i=1..n} log(1 + exp(−yi·w·xi)). This is a convex optimization problem which can be solved using a standard conjugate gradient algorithm in O(I·N·F) time for training per topic, where I is the average number of iterations needed for convergence, and N and F are the number of training documents and the number of features, respectively [14]. Once the regression coefficients are optimized on the training data, the filtering prediction on each incoming document is made as: sign(P(y = 1 | x_new, w) − θ_opt) = +1 (yes) or −1 (no). Note that w is constantly updated whenever a new relevance judgment is available in the testing phase of AF, while the optimal threshold θ_opt is constant, depending only on the predefined utility (or cost) function for evaluation. If T11SU is the metric, for example, with the penalty ratio of 2:1 for misses and false alarms (Section 3.1), the optimal threshold for LR is 1/(2+1) = 0.33 for all topics. We modified the standard (above) version of LR to allow more flexible optimization criteria as follows: w_map = argmin_w { Σ_{i=1..n} s(yi)·log(1 + exp(−yi·w·xi)) + λ·‖w − μ‖² }, where s(yi) is taken to be α, β and γ for query, positive and negative documents respectively, which are similar to those in Rocchio, giving different weights to the three kinds of training examples: topic descriptions (queries), on-topic documents and off-topic documents. The second term in the objective function is for regularization, equivalent to adding a Gaussian prior to the regression coefficients with mean μ and covariance matrix (1/(2λ))·I, where I is the identity matrix. Tuning λ (≥ 0) is theoretically justified for reducing model complexity (the effective degrees of freedom) and avoiding over-fitting on the training data [5]. How to find an effective μ is an open issue for research, depending on the user's belief about the parameter space and the optimal range. The solution of the modified objective function is called the Maximum A Posteriori (MAP) estimate, which reduces to the maximum likelihood solution for standard LR if λ = 0. 5. EVALUATIONS We report our empirical findings in the following parts: the TDT2004 official evaluation results, the cross-corpus parameter optimization results, and the results corresponding to the amounts of relevance feedback. 5.1 TDT2004 benchmark results The TDT2004 evaluations for adaptive filtering were conducted by NIST in November 2004. Multiple research teams participated and multiple runs from each team were allowed. Ctrk and TDT5SU were used as the metrics. Figure 2 and Figure 3 show the results; the best run from each team was selected with respect to Ctrk or TDT5SU, respectively.
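As a concrete illustration of the MAP objective of Section 4.2, the sketch below minimizes it with plain full-batch gradient descent (the paper states that a conjugate gradient solver is used; the optimizer, learning rate, iteration count and toy data here are my own illustrative choices).

```python
import numpy as np

def lr_map_train(X, y, s, lam=0.01, mu=None, lr=0.1, iters=500):
    """MAP estimate for regularized logistic regression.
    X: (n, F) document vectors; y: labels in {+1, -1};
    s: per-example weights (e.g. alpha/beta/gamma for query/pos/neg docs)."""
    n, F = X.shape
    mu = np.zeros(F) if mu is None else mu
    w = mu.copy()
    for _ in range(iters):
        margins = y * (X @ w)
        # Gradient of sum_i s_i*log(1 + exp(-y_i w.x_i)) plus 2*lam*(w - mu).
        grad = -(s * y * (1.0 / (1.0 + np.exp(margins)))) @ X + 2 * lam * (w - mu)
        w -= lr * grad
    return w

def lr_decide(x, w, theta):
    p = 1.0 / (1.0 + np.exp(-(w @ x)))   # P(y = 1 | x, w)
    return p >= theta                    # "yes" iff above the utility-optimal threshold

# Toy example: one query vector, one positive and two negative documents.
X = np.array([[1.0, 1.0, 0.0], [0.9, 0.8, 0.1], [0.0, 0.2, 1.0], [0.1, 0.0, 0.9]])
y = np.array([1, 1, -1, -1])
s = np.array([1.0, 1.0, 0.5, 0.5])       # query / positive / negatives
w = lr_map_train(X, y, s)
print(lr_decide(np.array([0.8, 0.9, 0.0]), w, theta=0.33))
```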
Our Rocchio (with adaptive profiles but a fixed universal threshold for all topics) had the best result in Ctrk, and our logistic regression had the best result in TDT5SU. All the parameters of our runs were tuned on the TDT3 corpus. Results for other sites are also listed anonymously for comparison. [Figure 2: TDT2004 results in Ctrk (the lower the better) of systems using true relevance feedback; Ours is the Rocchio method. Ours 0.0324, Site2 0.0467, Site3 0.1366, Site4 0.2438. We also show the 1st and 3rd quartiles as sticks for each site.2] [Figure 3: TDT2004 results in TDT5SU (the higher the better) of systems using true relevance feedback; Ours is LR with μ = 0 and λ = 0.005. Ours 0.7328, Site3 0.7281, Site2 0.6672, Site4 0.382.] [Figure 4: Primary topic tracking results in TDT2004, in Ctrk, for systems without using true relevance feedback; Ours is PRF Rocchio. Ours 0.0707, Site2 0.1545, Site5 0.5669, Site4 0.6507, Site6 0.8973.] Adaptive filtering without using true relevance feedback was also a part of the evaluations. In this case, systems had only one labeled training example per topic during the entire training and testing processes, although unlabeled test documents could be used as soon as predictions on them were made. Such a setting had been conventional for the Topic Tracking task in TDT until 2004. Figure 4 shows the summarized official submissions from each team. Our PRF Rocchio (with a fixed threshold for all the topics) had the best performance. 2 We use quartiles rather than standard deviations since the former are more resistant to outliers. 5.2 Cross-corpus parameter optimization How much the strong performance of our systems depends on parameter tuning is an important question. Both Rocchio and LR have parameters that must be pre-specified before the AF process. The shared parameters include the sample weights α, β and γ, the sample size of the negative training documents (i.e., |D−(T)|), the term-weighting scheme, and the maximal number of non-zero elements in each document vector. The method-specific parameters include the decision threshold in Rocchio, and μ, λ and MI (the maximum number of iterations in training) in LR. Given that we only have one labeled example per topic in the TDT5 training sets, it is impossible to effectively optimize these parameters on the training data, and we had to choose an external corpus for validation. Among the choices of TREC10, TREC11 and TDT3, we chose TDT3 (cf. Section 2) because it is most similar to TDT5 in terms of the nature of its topics. We optimized the parameters of our systems on TDT3, and fixed those parameters in the runs on TDT5 for our submissions to TDT2004. We also tested our methods on TREC10 and TREC11 for further analysis. Since exhaustive testing of all possible parameter settings is computationally intractable, we followed a step-wise forward-chaining procedure instead: we pre-specified an order of the parameters in a method (Rocchio or LR), and then tuned one parameter at a time while fixing the settings of the remaining parameters. We repeated this procedure for several passes as time allowed.
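The step-wise forward-chaining tuning procedure just described amounts to a coordinate-wise search over an ordered list of parameters; a minimal sketch is given below. The parameter grids, the evaluation callback and the number of passes are placeholders, not the settings actually used.

```python
def stepwise_tune(param_grids, evaluate, passes=2):
    """Coordinate-wise parameter search on a validation corpus.
    param_grids: ordered mapping {name: [candidate values]};
    evaluate(settings) -> validation score (higher is better)."""
    settings = {name: grid[0] for name, grid in param_grids.items()}
    for _ in range(passes):                       # repeat for several passes
        for name, grid in param_grids.items():    # tune one parameter at a time
            best_val, best_score = settings[name], float("-inf")
            for value in grid:
                candidate = dict(settings, **{name: value})
                score = evaluate(candidate)
                if score > best_score:
                    best_val, best_score = value, score
            settings[name] = best_val             # fix it before moving on
    return settings

# Hypothetical usage: tune Rocchio-style weights and a decision threshold
# against a validation-corpus scorer `run_on_tdt3` (not defined here).
# grids = {"beta": [0.5, 1, 2], "gamma": [0, 0.5, 1], "theta": [0.02, 0.04, 0.08]}
# best = stepwise_tune(grids, run_on_tdt3)
```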
[Figure 5: Performance curves of adaptive Rocchio. TDT5SU as a function of the decision threshold (0.02 to 0.3) on TDT3, TDT5, TREC10 and TREC11.] Figure 5 compares the performance curves in TDT5SU for Rocchio on TDT3, TDT5, TREC10 and TREC11 when the decision threshold is varied. These curves peak at different locations: the TDT3-optimal is closest to the TDT5-optimal, while the TREC10-optimal and TREC11-optimal are quite far away from the TDT5-optimal. If we were using TREC10 or TREC11 instead of TDT3 as the validation corpus for TDT5, or if the TDT3 corpus were not available, we would have difficulty in obtaining strong performance for Rocchio in TDT2004. The difficulty comes from the ad-hoc (non-probabilistic) scores generated by the Rocchio method: the distribution of the scores depends on the corpus, making cross-corpus threshold optimization a tricky problem. Logistic regression has less difficulty with respect to threshold tuning because it produces probabilistic scores of Pr(y = 1 | x), from which the optimal threshold can be directly computed if the probability estimates are accurate. Given the penalty ratios for misses vs. false alarms of 2:1 in T11SU, 10:1 in TDT5SU and 583:1 in Ctrk (Section 3.3), the corresponding optimal thresholds (t) are 0.33, 0.091 and 0.0017, respectively. Although the theoretical threshold could be inaccurate, it still suggests the range of near-optimal settings. With these threshold settings in our experiments for LR, we focused on the cross-corpus validation of the Bayesian prior parameters, that is, μ and λ. Table 2 summarizes the results.3 We measured the performance of the runs on TREC10 and TREC11 using T11SU, and the performance of the runs on TDT3 and TDT5 using TDT5SU. For comparison we also include the best results of Rocchio-based methods on these corpora, which are our own results of Rocchio on TDT3 and TDT5, and the best results reported by NIST for TREC10 and TREC11. From this set of results, we see that LR significantly outperformed Rocchio on all the corpora, even in the runs of standard LR without any tuning, i.e. λ = 0. This empirical finding is consistent with a previous report [13] for LR on TREC11, although our results of LR (0.585~0.608 in T11SU) are stronger than the results (0.49 for standard LR and 0.54 for LR using a Rocchio prototype as the prior) in that report. More importantly, our cross-benchmark evaluation gives strong evidence for the robustness of LR. The robustness, we believe, comes from the probabilistic nature of the system-generated scores. That is, compared to the ad-hoc scores in Rocchio, the normalized posterior probabilities make threshold optimization in LR a much easier problem. Moreover, logistic regression is known to converge towards the Bayes classifier asymptotically, while Rocchio classifiers' parameters do not. Another interesting observation in these results is that the performance of LR did not improve when using a Rocchio prototype as the mean in the prior; instead, the performance decreased in some cases. This observation does not support the previous report by [13], but we are not surprised, because we are not convinced that Rocchio prototypes are more accurate than LR models for topics in the early stage of the AF process, and we believe that using a Rocchio prototype as the mean in the Gaussian prior would introduce undesirable bias to LR.
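The utility-optimal probability thresholds quoted above follow directly from the miss/false-alarm penalty ratio; the one-liner below (an illustrative helper, not from the paper) reproduces the stated values.

```python
def optimal_threshold(penalty_ratio):
    """Accept when expected credit p*w_miss exceeds expected penalty (1-p)*w_fa;
    with ratio = w_miss / w_fa this gives p > 1 / (1 + ratio)."""
    return 1.0 / (1.0 + penalty_ratio)

print([round(optimal_threshold(r), 4) for r in (2, 10, 583)])   # [0.3333, 0.0909, 0.0017]
```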
We also believe that variance reduction (in the testing phase) should be controlled by the choice of λ (but not μ), for which we conducted the experiments shown in Figure 6.
Table 2: Results of LR with different Bayesian priors
Corpus | TDT3 | TDT5 | TREC10 | TREC11
LR(μ=0, λ=0) | 0.7562 | 0.7737 | 0.585 | 0.5715
LR(μ=0, λ=0.01) | 0.8384 | 0.7812 | 0.6077 | 0.5747
LR(μ=roc*, λ=0.01) | 0.8138 | 0.7811 | 0.5803 | 0.5698
Best Rocchio | 0.6628 | 0.6917 | 0.4964 | 0.475
*: μ was set to the Rocchio prototype. 3 The LR results (0.77~0.78) on TDT5 in this table are better than our TDT2004 official result (0.73) because parameter optimization has been improved afterwards. 4 The TREC10-best result (0.496 by Oracle) is only available in T10U, which is not directly comparable to the scores in T11SU, just indicative. [Figure 6: LR with varying lambda. Ctrk on TDT3, TDT5SU on TDT3, TDT5SU on TDT5 and T11SU on TREC11, for λ in {0.000, 0.001, 0.005, 0.050, 0.500}.] The performance of LR is summarized with respect to λ tuning on the corpora of TREC10, TREC11 and TDT3. The performance on each corpus was measured using the corresponding metrics, that is, T11SU for the runs on TREC10 and TREC11, and TDT5SU and Ctrk for the runs on TDT3. In the case of maximizing the utilities, the safe interval for λ is between 0 and 0.01, meaning that the performance of regularized LR is stable, the same as or slightly improved over the performance of standard LR. In the case of minimizing Ctrk, the safe range for λ is between 0 and 0.1, and setting λ between 0.005 and 0.05 yielded relatively large improvements over the performance of standard LR, because training a model for extremely high recall is statistically more tricky, and hence more regularization is needed. In either case, tuning λ is relatively safe, and easy to do successfully by cross-corpus tuning. Another influential choice in our experiment settings is term weighting: we examined the choices of binary, TF and TF-IDF (the ltc version) schemes. We found TF-IDF most effective for both Rocchio and LR, and used this setting in all our experiments. 5.3 Percentages of labeled data How much relevance feedback (RF) would be needed during the AF process is a meaningful question in real-world applications. To answer it, we evaluated Rocchio and LR on TDT with the following settings: • Basic Rocchio, no adaptation at all; • PRF Rocchio, updating topic profiles without using true relevance feedback; • Adaptive Rocchio, updating topic profiles using relevance feedback on system-accepted documents plus 10 documents randomly sampled from the pool of system-rejected documents; • LR with μ = 0, λ = 0.01 and threshold = 0.004; • All the parameters in Rocchio tuned on TDT3. Table 3 summarizes the results in Ctrk: Adaptive Rocchio with relevance feedback on 0.6% of the test documents reduced the tracking cost by 54% over the result of PRF Rocchio, the best system in the TDT2004 evaluation for topic tracking without relevance feedback information. Incremental LR, on the other hand, was weaker but still impressive. Recall that Ctrk is an extremely high-recall oriented metric, causing frequent updating of profiles and hence an efficiency problem in LR. For this reason we set a higher threshold (0.004) instead of the theoretically optimal threshold (0.0017) in LR to avoid an intolerable computation cost. The computation time in machine-hours was 0.33 for the run of adaptive Rocchio and 14 for the run of LR on TDT5 when optimizing Ctrk.
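The ltc weighting mentioned above is conventionally logarithmic term frequency times inverse document frequency with cosine normalization; the sketch below assumes that conventional form, since the exact SMART variant used in the experiments is not spelled out here.

```python
import math
from collections import Counter

def ltc_vector(term_counts, doc_freq, n_docs):
    """SMART ltc weighting: (1 + log tf) * log(N / df), cosine-normalized."""
    weights = {}
    for term, tf in term_counts.items():
        df = doc_freq.get(term, 1)
        weights[term] = (1.0 + math.log(tf)) * math.log(n_docs / df)
    norm = math.sqrt(sum(w * w for w in weights.values())) or 1.0
    return {t: w / norm for t, w in weights.items()}

# Toy usage with made-up collection statistics.
doc = Counter({"election": 3, "tracking": 1, "news": 5})
print(ltc_vector(doc, doc_freq={"election": 120, "tracking": 40, "news": 9000}, n_docs=10000))
```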
Table 4 summarizes the results in TDT5SU; adaptive LR was the winner in this case, with relevance feedback on 0.05% of the test documents improving the utility by 20.9% over the results of PRF Rocchio.
Table 3: AF methods on TDT5 (performance in Ctrk)
 | Base Roc | PRF Roc | Adp Roc | LR
% of RF | 0% | 0% | 0.6% | 0.2%
Ctrk | 0.076 | 0.0707 | 0.0324 | 0.0382
±% | +7% | (baseline) | -54% | -46%
Table 4: AF methods on TDT5 (performance in TDT5SU)
 | Base Roc | PRF Roc | Adp Roc | LR(λ=.01)
% of RF | 0% | 0% | 0.04% | 0.05%
TDT5SU | 0.57 | 0.6452 | 0.69 | 0.78
±% | -11.7% | (baseline) | +6.9% | +20.9%
Evidently, both Rocchio and LR are highly effective in adaptive filtering, in terms of using a small amount of labeled data to significantly improve the model accuracy in statistical learning, which is the main goal of AF. 5.4 Summary of Adaptation Process After deciding the parameter settings using validation, we perform adaptive filtering in the following steps for each topic: 1) Train the LR/Rocchio model using the provided positive training examples and 30 randomly sampled negative examples; 2) For each document in the test corpus, first make a prediction about relevance, and then obtain relevance feedback for the (predicted) positive documents; 3) Incrementally update the model and the IDF statistics whenever true relevance feedback is obtained. 6. CONCLUDING REMARKS We presented a cross-benchmark evaluation of incremental Rocchio and incremental LR in adaptive filtering, focusing on their robustness in terms of performance consistency with respect to cross-corpus parameter optimization. Our main conclusions from this study are the following: • Parameter optimization in AF is an open challenge but has not been thoroughly studied in the past. • Robustness in cross-corpus parameter tuning is important for evaluation and method comparison. • We found LR more robust than Rocchio; it had the best results (in T11SU) ever reported on TDT5, TREC10 and TREC11 without extensive tuning. • We found that Rocchio performs strongly when a good validation corpus is available, and that it is a preferred choice when the objective is optimizing Ctrk, which favors recall over precision to an extreme. For future research we want to study explicit modeling of the temporal trends in topic distributions and content drifting. Acknowledgments This material is based upon work supported in part by the National Science Foundation (NSF) under grant IIS-0434035, by the DoD under award 114008-N66001992891808 and by the Defense Advanced Research Project Agency (DARPA) under Contract No. NBCHD030010. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the sponsors. 7. REFERENCES [1] J. Allan. Incremental relevance feedback for information filtering. In SIGIR-96, 1996. [2] J. Callan. Learning while filtering documents. In SIGIR-98, 224-231, 1998. [3] J. Fiscus and G. Duddington. Topic detection and tracking overview. In Topic detection and tracking: event-based information organization, 17-31, 2002. [4] J. Fiscus and B. Wheatley. Overview of the TDT 2004 Evaluation and Results. In TDT-04, 2004. [5] T. Hastie, R. Tibshirani and J. Friedman. Elements of Statistical Learning. Springer, 2001. [6] S. Robertson and D. Hull. The TREC-9 filtering track final report. In TREC-9, 2000. [7] S. Robertson and I. Soboroff. The TREC-10 filtering track final report. In TREC-10, 2001. [8] S. Robertson and I. Soboroff. The TREC 2002 filtering track report. In TREC-11, 2002. [9] S. Robertson and S. Walker.
Microsoft Cambridge at TREC-9. In TREC-9, 2000.
[10] R. Schapire, Y. Singer and A. Singhal. Boosting and Rocchio applied to text filtering. In SIGIR-98, 215-223, 1998.
[11] Y. Yang and B. Kisiel. Margin-based local regression for adaptive filtering. In CIKM-03, 2003.
[12] Y. Zhang and J. Callan. Maximum likelihood estimation for filtering thresholds. In SIGIR-01, 2001.
[13] Y. Zhang. Using Bayesian priors to combine classifiers for adaptive filtering. In SIGIR-04, 2004.
[14] J. Zhang and Y. Yang. Robustness of regularized linear classification methods in text categorization. In SIGIR-03, 190-197, 2003.
[15] T. Zhang and F. J. Oles. Text categorization based on regularized linear classification methods. Inf. Retr. 4(1): 5-31, 2001.
Robustness of Adaptive Filtering Methods In a Cross-benchmark Evaluation ABSTRACT This paper reports a cross-benchmark evaluation of regularized logistic regression (LR) and incremental Rocchio for adaptive filtering. Using four corpora from the Topic Detection and Tracking (TDT) forum and the Text Retrieval Conferences (TREC) we evaluated these methods with non-stationary topics at various granularity levels, and measured performance with different utility settings. We found that LR performs strongly and robustly in optimizing T11SU (a TREC utility function) while Rocchio is better for optimizing Ctrk (the TDT tracking cost), a high-recall oriented objective function. Using systematic cross-corpus parameter optimization with both methods, we obtained the best results ever reported on TDT5, TREC10 and TREC11. Relevance feedback on a small portion (0.05 ~ 0.2%) of the TDT5 test documents yielded significant performance improvements, measuring up to a 54% reduction in Ctrk and a 20.9% increase in T11SU (with β = 0.1), compared to the results of the top-performing system in TDT2004 without relevance feedback information. 1. INTRODUCTION Adaptive filtering (AF) has been a challenging research topic in information retrieval. The task is for the system to make an online topic membership decision (yes or no) for every document, as soon as it arrives, with respect to each pre-defined topic of interest. Starting from 1997 in the Topic Detection and Tracking (TDT) area and 1998 in the Text Retrieval Conferences (TREC), benchmark evaluations have been conducted by NIST under the following conditions [6] [7] [8] [3] [4]: • A very small number (1 to 4) of positive training examples was provided for each topic at the starting point. • Relevance feedback was available but only for the systemaccepted documents (with a "yes" decision) in the TREC evaluations for AF. • Relevance feedback (RF) was not allowed in the TDT evaluations for AF (or topic tracking in the TDT terminology) until 2004. • TDT2004 was the first time that TREC and TDT metrics were jointly used in evaluating AF methods on the same benchmark (the TDT5 corpus) where non-stationary topics dominate. The above conditions attempt to mimic realistic situations where an AF system would be used. That is, the user would be willing to provide a few positive examples for each topic of interest at the start, and might or might not be able to provide additional labeling on a small portion of incoming documents through relevance feedback. Furthermore, topics of interest might change over time, with new topics appearing and growing, and old topics shrinking and diminishing. These conditions make adaptive filtering a difficult task in statistical learning (online classification), for the following reasons: 1) it is difficult to learn accurate models for prediction based on extremely sparse training data; 2) it is not obvious how to correct the sampling bias (i.e., relevance feedback on system-accepted documents only) during the adaptation process; 3) it is not well understood how to effectively tune parameters in AF methods using cross-corpus validation where the validation and evaluation topics do not overlap, and the documents may be from different sources or different epochs. None of these problems is addressed in the literature of statistical learning for batch classification where all the training data are given at once. 
The first two problems have been studied in the adaptive filtering literature, including topic profile adaptation using incremental Rocchio, Gaussian-Exponential density models, logistic regression in a Bayesian framework, etc., and threshold optimization strategies using probabilistic calibration or local fitting techniques [1] [2] [9] [10] [11] [12] [13]. Although these works provide valuable insights for understanding the problems and possible solutions, it is difficult to draw conclusions regarding the effectiveness and robustness of current methods because the third problem has not been thoroughly investigated. Addressing the third issue is the main focus in this paper. We argue that robustness is an important measure for evaluating and comparing AF methods. By "robust" we mean consistent and strong performance across benchmark corpora with a systematic method for parameter tuning across multiple corpora. Most AF methods have pre-specified parameters that may influence the performance significantly and that must be determined before the test process starts. Available training examples, on the other hand, are often insufficient for tuning the parameters. In TDT5, for example, there is only one labeled training example per topic at the start; parameter optimization on such training data is doomed to be ineffective. This leaves only one option (assuming tuning on the test set is not an alternative), that is, choosing an external corpus as the validation set. Notice that the validation-set topics often do not overlap with the test-set topics, thus the parameter optimization is performed under the tough condition that the validation data and the test data may be quite different from each other. Now the important question is: which methods (if any) are robust under the condition of using cross-corpus validation to tune parameters? Current literature does not offer an answer because no thorough investigation on the robustness of AF methods has been reported. In this paper we address the above question by conducting a cross-benchmark evaluation with two effective approaches in AF: incremental Rocchio and regularized logistic regression (LR). Rocchio-style classifiers have been popular in AF, with good performance in benchmark evaluations (TREC and TDT) if appropriate parameters were used and if combined with an effective threshold calibration strategy [2] [4] [7] [8] [9] [11] [13]. Logistic regression is a classical method in statistical learning, and one of the best in batch-mode text categorization [15] [14]. It was recently evaluated in adaptive filtering and was found to have relatively strong performance (Section 5.1). Furthermore, a recent paper [13] reported that the joint use of Rocchio and LR in a Bayesian framework outperformed the results of using each method alone on the TREC11 corpus. Stimulated by those findings, we decided to include Rocchio and LR in our crossbenchmark evaluation for robustness testing. Specifically, we focus on how much the performance of these methods depends on parameter tuning, what the most influential parameters are in these methods, how difficult (or how easy) to optimize these influential parameters using cross-corpus validation, how strong these methods perform on multiple benchmarks with the systematic tuning of parameters on other corpora, and how efficient these methods are in running AF on large benchmark corpora. 
The organization of the paper is as follows: Section 2 introduces the four benchmark corpora (TREC10 and TREC11, TDT3 and TDT5) used in this study. Section 3 analyzes the differences among the TREC and TDT metrics (utilities and tracking cost) and the potential implications of those differences. Section 4 outlines the Rocchio and LR approaches to AF, respectively. Section 5 reports the experiments and results. Section 6 concludes the main findings in this study. 2. BENCHMARK CORPORA We used four benchmark corpora in our study. Table 1 shows the statistics about these data sets. TREC10 was the evaluation benchmark for adaptive filtering in TREC 2001, consisting of roughly 806,791 Reuters news stories from August 1996 to August 1997 with 84 topic labels (subject categories) [7]. The first two weeks (August 20th to 31st, 1996) of documents is the training set, and the remaining 11 & 1/2 months (from September 1st, 1996 to August 19th, 1997) is the test set. TREC11 was the evaluation benchmark for adaptive filtering in TREC 2002, consisting of the same set of documents as those in TREC10 but with a slightly different splitting point for the training and test sets. The TREC11 topics (50) are quite different from those in TREC10; they are queries for retrieval with relevance judgments by NIST assessors [8]. TDT3 was the evaluation benchmark in the TDT2001 dry run1. The tracking part of the corpus consists of 71,388 news stories from multiple sources in English and Mandarin (AP, NYT, CNN, ABC, NBC, MSNBC, Xinhua, Zaobao, Voice of America and PRI the World) in the period of October to December 1998. Machine-translated versions of the non-English stories (Xinhua, Zaobao and VOA Mandarin) are provided as well. The splitting point for training-test sets is different for each topic in TDT. TDT5 was the evaluation benchmark in TDT2004 [4]. The tracking part of the corpus consists of 407,459 news stories in the period of April to September, 2003 from 15 news agents or broadcast sources in English, Arabic and Mandarin, with machine-translated versions of the non-English stories. We only used the English versions of those documents in our experiments for this paper. The TDT "topics" differ from TREC topics both conceptually and statistically. Instead of generic, ever-lasting subject categories (as those in TREC), TDT topics are defined at a finer level of granularity, for events that happen at certain times and locations, and that are "born" and "die", typically associated with a bursty distribution over chronologically ordered news stories. The average size of TDT topics (events) is two orders of magnitude smaller than that of the TREC10 topics. Figure 1 compares the document densities of a TREC topic ("Civil Wars") and two TDT topics ("Gunshot" and "APEC Summit Meeting", respectively) over a 3-month time period, where the area under each curve is normalized to one. The granularity differences among topics and the corresponding non-stationary distributions make the cross-benchmark evaluation interesting. For example, algorithms favoring large and stable topics may not work well for short-lasting and nonstationary topics, and vice versa. Cross-benchmark evaluations allow us to test this hypothesis and possibly identify the weaknesses in current approaches to adaptive filtering in tracking the drifting trends of topics. Table 1: Statistics of benchmark corpora for adaptive filtering evaluations Figure 1: The temporal nature of topics 3. 
METRICS
To make our results comparable to the literature, we decided to use both TREC-conventional and TDT-conventional metrics in our evaluation.
3.1 TREC11 metrics
Let A, B, C and D be, respectively, the numbers of true positives, false alarms, misses and true negatives for a specific topic, and N = A + B + C + D be the total number of test documents. The TREC-conventional metrics are a linear utility and its scaled version T11SU, defined in terms of these counts and two parameters β and η, which were set to 0.5 and -0.5 respectively in TREC10 (2001) and TREC11 (2002). For evaluating the performance of a system, the performance scores are computed for individual topics first and then averaged over topics (macro-averaging).
3.2 TDT metrics
The TDT-conventional metric for topic tracking, Ctrk, is a weighted combination of the system's miss rate and false alarm rate on a topic, where P(T) is the percentage of documents on topic T, Pmiss is the miss rate by the system on that topic, Pfa is the false alarm rate, and w1 and w2 are the costs (pre-specified constants) for a miss and a false alarm, respectively. The TDT benchmark evaluations (since 1997) have used the settings of w1 = 1, w2 = 0.1 and P(T) = 0.02 for all topics. For evaluating the performance of a system, Ctrk is computed for each topic first and then the resulting scores are averaged for a single measure (the topic-weighted Ctrk). To make the intuition behind this measure transparent, we substitute the terms in the definition of Ctrk: clearly, Ctrk is the average cost per error on topic T, with w1 and w2 controlling the penalty ratio for misses vs. false alarms. In addition to Ctrk, TDT2004 also employed T11SU with β = 0.1 as a utility metric. To distinguish this from the T11SU with β = 0.5 in TREC11, we call the former TDT5SU in the rest of this paper.
3.3 The correlations and the differences
From an optimization point of view, TDT5SU and T11SU are both utility functions while Ctrk is a cost function. Our objective is to maximize the former or to minimize the latter on test documents. The differences and correlations among these objective functions can be analyzed through the shared counts of A, B, C and D in their definitions. For example, both TDT5SU and T11SU are positively correlated to the values of A and D, and negatively correlated to the values of B and C; the only difference between them is in their penalty ratios for misses vs. false alarms, i.e., 10:1 in TDT5SU and 2:1 in T11SU. The Ctrk function, on the other hand, is positively correlated to the values of C and B, and negatively correlated to the values of A and D; hence, it is negatively correlated to T11SU and TDT5SU. More importantly, there is a subtle and major difference between Ctrk and the utility functions T11SU and TDT5SU. That is, Ctrk has a very different penalty ratio for misses vs. false alarms: it favors recall-oriented systems to an extreme. At first glance, one would think that the penalty ratio in Ctrk is 10:1 since w1 = 1 and w2 = 0.1. However, this is not true if P(T) = 0.02 is an inaccurate estimate of the average percentage of on-topic documents in the test corpus. Using TDT3 as an example, the true percentage is n+/N ≈ 0.002, where N is the average size of the test sets in TDT3 and n+ is the average number of positive examples per topic in the test sets. Using P̂(T) = 0.02 as an (inaccurate) estimate of 0.002 enlarges the intended penalty ratio of 10:1 to 100:1, roughly speaking, because the per-miss and per-false-alarm penalties are scaled by the estimate of P(T) compared to the truth. Comparing the above result to formula (2), we can see the actual penalty ratio for misses vs. false alarms was 100:1 in the evaluations on TDT3 using Ctrk.
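The equations for these metrics did not survive the text extraction above, so the following reconstruction is offered only as an assumption, based on the verbal definitions here and on the standard TREC/TDT conventions; the notation in the original paper may differ. A consistent reading of the scaled utility with parameters β and η is

  U_β = (A - β·B) / (A + C),   T11SU_β = (max(U_β, η) - η) / (1 - η),   with η = -0.5,

so the penalty ratio for misses vs. false alarms is (1/β):1, i.e., 2:1 for β = 0.5 (T11SU) and 10:1 for β = 0.1 (TDT5SU). The tracking cost is the standard TDT detection cost

  Ctrk = w1 · Pmiss · P̂(T) + w2 · Pfa · (1 - P̂(T)),   with w1 = 1, w2 = 0.1, P̂(T) = 0.02.

Substituting Pmiss = C/(A + C) and Pfa = B/(B + D) gives the per-error form

  Ctrk = [w1 · P̂(T)/(A + C)] · C + [w2 · (1 - P̂(T))/(B + D)] · B,

whose effective penalty for one miss relative to one false alarm is

  (w1/w2) · (P̂(T)/(1 - P̂(T))) · ((B + D)/(A + C)) ≈ 10 × (0.02/0.98) × (0.998/0.002) ≈ 100

on TDT3, where the true on-topic rate is (A + C)/N ≈ 0.002, matching the 100:1 figure above.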
Similarly, we can compute the enlargement factor for TDT5 using the statistics in Table 1 as follows: which means the actual penalty ratio for misses vs. false alarms in the evaluation on TDT5 using Ctrk was approximately 583:1. The implications of the above analysis are rather significant: • Ctrk defined in the same formula does not necessarily mean the same objective function in evaluation; instead, the optimization criterion depends on the test corpus. • Systems optimized for Ctrk would not optimize TDT5SU (and T11SU) because the former favors high-recall oriented to an extreme while the latter does not. • Parameters tuned on one corpus (e.g., TDT3) might not work for an evaluation on another corpus (say, TDT5) unless we account for the previously-unknown subtle dependency of Ctrk on data. • Results in Ctrk in the past years of TDT evaluations may not be directly comparable to each other because the evaluation collections changed most years and hence the penalty ratio in Ctrk varied. Although these problems with Ctrk were not originally anticipated, it offered an opportunity to examine the ability of systems in trading off precision for extreme recall. This was a challenging part of the TDT2004 evaluation for AF. Comparing the metrics in TDT and TREC from a utility or cost optimization point of view is important for understanding the evaluation results of adaptive filtering methods. This is the first time this issue is explicitly analyzed, to our knowledge. 4. METHODS 4.1 Incremental Rocchio for AF We employed a common version of Rocchio-style classifiers which computes a prototype vector per topic (T) as follows: The first term on the RHS is the weighted vector representation of topic description whose elements are terms weights. The second term is the weighted centroid of the set D + (T) of positive training examples, each of which is a vector of withindocument term weights. The third term is the weighted centroid of the set D − (T) of negative training examples which are the nearest neighbors of the positive centroid. The three terms are given pre-specified weights of α, β and γ, controlling the relative influence of these components in the prototype. The prototype of a topic is updated each time the system makes a "yes" decision on a new document for that topic. If relevance feedback is available (as is the case in TREC adaptive filtering), the new document is added to the pool of either D + (T) or D − (T), and the prototype is recomputed accordingly; if relevance feedback is not available (as is the case in TDT event tracking), the system's prediction ("yes") is treated as the truth, and the new document is added to D + (T) for updating the prototype. Both cases are part of our experiments in this paper (and part of the TDT 2004 evaluations for AF). To distinguish the two, we call the first case simply "Rocchio" and the second case "PRF Rocchio" where PRF stands for pseudorelevance feedback. The predictions on a new document are made by computing the cosine similarity between each topic prototype and the document vector, and then comparing the resulting scores against a threshold: Threshold calibration in incremental Rocchio is a challenging research topic. Multiple approaches have been developed. The simplest is to use a universal threshold for all topics, tuned on a validation set and fixed during the testing phase. 
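As a concrete illustration of the Rocchio prototype and decision rule just described, here is a minimal sketch assuming TF-IDF term-weight vectors and the simple universal-threshold rule mentioned above; it is not the authors' implementation, and all identifiers are hypothetical. Under relevance feedback, a newly accepted document would be added to D+(T) or D-(T) according to its true label (or assumed positive under PRF) and the prototype recomputed.

import numpy as np

def rocchio_prototype(query_vec, pos_docs, neg_docs, alpha, beta, gamma):
    # Weighted topic description, plus weighted centroid of positive examples D+(T),
    # minus weighted centroid of negative examples D-(T) (nearest neighbors of the
    # positive centroid). All vectors are TF-IDF term-weight vectors (numpy arrays).
    proto = alpha * query_vec
    if len(pos_docs) > 0:
        proto = proto + beta * np.mean(pos_docs, axis=0)
    if len(neg_docs) > 0:
        proto = proto - gamma * np.mean(neg_docs, axis=0)
    return proto

def rocchio_decision(proto, doc_vec, threshold):
    # "Yes" when the cosine similarity between prototype and document exceeds a
    # universal threshold tuned on a validation corpus and kept fixed at test time.
    cos = proto @ doc_vec / (np.linalg.norm(proto) * np.linalg.norm(doc_vec) + 1e-12)
    return cos >= threshold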
More elaborate methods include probabilistic threshold calibration which converts the non-probabilistic similarity scores to probabilities r (i.e., P (T | d)) for utility optimization [9] [13], and margin-based local regression for risk reduction [11]. It is beyond the scope of this paper to compare all the different ways to adapt Rocchio-style methods for AF. Instead, our focus here is to investigate the robustness of Rocchio-style methods in terms of how much their performance depends on elaborate system tuning, and how difficult (or how easy) it is to get good performance through cross-corpus parameter optimization. Hence, we decided to use a relatively simple version of Rocchio as the baseline, i.e., with a universal threshold tuned on a validation corpus and fixed for all topics in the testing phase. This simple version of Rocchio has been commonly used in the past TDT benchmark evaluations for topic tracking, and had strong performance in the TDT2004 evaluations for adaptive filtering with and without relevance feedback (Section 5.1). Results of more complex variants of Rocchio are also discussed when relevant. 4.2 Logistic Regression for AF Logistic regression (LR) estimates the posterior probability of a topic given a document using a sigmoid function where xr is the document vector whose elements are term r weights, w is the vector of regression coefficients, and y ∈ {+1, − 1} is the output variable corresponding to "yes" or "no" with respect to a particular topic. Given a training set of labeled documents D {(x1, y1),, (xrn, yn)} r = L, the standard regression problem is defined as to find the maximum likelihood estimates of the regression coefficients ("the model parameters"): This is a convex optimization problem which can be solved using a standard conjugate gradient algorithm in O (INF) time for training per topic, where I is the average number of iterations needed for convergence, and N and F are the number of training documents and number of features respectively [14]. Once the regression coefficients are optimized on the training data, the filtering prediction on each incoming document is made as: Note that wr is constantly updated whenever a new relevance judgment is available in the testing phase of AF, while the optimal threshold θopt is constant, depending only on the predefined utility (or cost) function for evaluation. If T11SU is the metric, for example, with the penalty ratio of 2:1 for misses and false alarms (Section 3.1), the optimal threshold for LR is 1 / (2 + 1) = 0. 3 3 for all topics. We modified the standard (above) version of LR to allow more flexible optimization criteria as follows: where s (yi) is taken to be α, β and γ for query, positive and negative documents respectively, which are similar to those in Rocchio, giving different weights to the three kinds of training examples: topic descriptions ("queries"), on-topic documents and off-topic documents. The second term in the objective function is for regularization, equivalent to adding a Gaussian prior to the regression coefficients with mean µ r and covariance variance matrix 1 / 2λ ⋅ Ι, where Ι is the identity matrix. Tuning λ (≥ 0) is theoretically justified for reducing model complexity ("the effective degree of freedom") and avoiding over-fitting on training data [5]. How to find an effective µr is an open issue for research, depending on the user's belief about the parameter space and the optimal range. 
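A minimal sketch of this modified objective follows, assuming dense term-weight vectors and plain gradient ascent in place of the conjugate gradient solver used in the paper; the per-example weights s(y_i), the prior mean μ and the strength λ correspond to the quantities defined above, while the function names are hypothetical. The helper at the end applies the utility-optimal decision threshold discussed above (e.g., 1/(2 + 1) = 0.33 for the 2:1 penalty ratio of T11SU).

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_map_lr(X, y, s, lam=0.01, mu=None, lr=0.1, max_iter=200):
    # X: (n, d) term-weight matrix; y: labels in {+1, -1}; s: per-example weights
    # (alpha/beta/gamma for query, positive and negative examples); lam: Gaussian
    # prior strength lambda; mu: prior mean (zero vector or a Rocchio prototype).
    n, d = X.shape
    mu = np.zeros(d) if mu is None else mu
    w = mu.copy()
    for _ in range(max_iter):
        margins = y * (X @ w)
        # gradient of  sum_i s_i * log sigmoid(y_i w.x_i)  -  lam * ||w - mu||^2
        grad = X.T @ (s * y * (1.0 - sigmoid(margins))) - 2.0 * lam * (w - mu)
        w = w + lr * grad                      # gradient ascent on the MAP objective
    return w

def filter_decision(w, x, penalty_ratio):
    # Bayes-optimal rule: accept when P(y=1|x) exceeds 1/(1 + penalty_ratio),
    # e.g. 1/3 for the 2:1 ratio of T11SU, about 0.091 for the 10:1 ratio of TDT5SU.
    return sigmoid(w @ x) >= 1.0 / (1.0 + penalty_ratio)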
The solution of the modified objective function is called the Maximum A Posteriori (MAP) estimate, which reduces to the maximum likelihood solution for standard LR if λ = 0. 5. EVALUATIONS We report our empirical findings in four parts: the TDT2004 official evaluation results, the cross-corpus parameter optimization results, and the results corresponding to the amounts of relevance feedback. 5.1 TDT2004 benchmark results The TDT2004 evaluations for adaptive filtering were conducted by NIST in November 2004. Multiple research teams participated and multiple runs from each team were allowed. Ctrk and TDT5SU were used as the metrics. Figure 2 and Figure 3 show the results; the best run from each team was selected with respect to Ctrk or TDT5SU, respectively. Our Rocchio (with adaptive profiles but fixed universal threshold for all topics) had the best result in Ctrk, and our logistic regression had the best result in TDT5SU. All the parameters of our runs were tuned on the TDT3 corpus. Results for other sites are also listed anonymously for comparison. Figure 2: TDT2004 results in Ctrk of systems using true relevance feedback. ("Ours" is the Rocchio method.) We also put the 1st and 3rd quartiles as sticks for each site .2 Figure 3: TDT2004 results in TDT5SU of systems using true relevance feedback. ("Ours" is LR with µr = 0 and λ = 0.005). Figure 4: TDT2004 results in Ctrk of systems without using true relevance feedback. ("Ours" is PRF Rocchio.) Adaptive filtering without using true relevance feedback was also a part of the evaluations. In this case, systems had only one labeled training example per topic during the entire training and testing processes, although unlabeled test documents could be used as soon as predictions on them were made. Such a setting has been conventional for the Topic Tracking task in TDT until 2004. Figure 4 shows the summarized official submissions from each team. Our PRF Rocchio (with a fixed threshold for all the topics) had the best performance. 2 We use quartiles rather than standard deviations since the former is more resistant to outliers. 5.2 Cross-corpus parameter optimization How much the strong performance of our systems depends on parameter tuning is an important question. Both Rocchio and LR have parameters that must be prespecified before the AF process. The shared parameters include the sample weights α, β and γ, the sample size of the negative training documents (i.e., D − (T)), the term-weighting scheme, and the maximal number of non-zero elements in each document vector. The method-specific parameters include the decision threshold in Rocchio, and µr, λ and MI (the maximum number of iterations in training) in LR. Given that we only have one labeled example per topic in the TDT5 training sets, it is impossible to effectively optimize these parameters on the training data, and we had to choose an external corpus for validation. Among the choices of TREC10, TREC11 and TDT3, we chose TDT3 (c.f. Section 2) because it is most similar to TDT5 in terms of the nature of the topics (Section 2). We optimized the parameters of our systems on TDT3, and fixed those parameters in the runs on TDT5 for our submissions to TDT2004. We also tested our methods on TREC10 and TREC11 for further analysis. 
Since exhaustive testing of all possible parameter settings is computationally intractable, we followed a step-wise forward chaining procedure instead: we pre-specified an order of the parameters in a method (Rocchio or LR), and then tuned one parameter at the time while fixing the settings of the remaining parameters. We repeated this procedure for several passes as time allowed. Figure 5: Performance curves of adaptive Rocchio Figure 5 compares the performance curves in TDT5SU for Rocchio on TDT3, TDT5, TREC10 and TREC11 when the decision threshold varied. These curves peak at different locations: the TDT3-optimal is closest to the TDT5-optimal while the TREC10-optimal and TREC1-optimal are quite far away from the TDT5-optimal. If we were using TREC10 or TREC11 instead of TDT3 as the validation corpus for TDT5, or if the TDT3 corpus were not available, we would have difficulty in obtaining strong performance for Rocchio in TDT2004. The difficulty comes from the ad hoc (non-probabilistic) scores generated by the Rocchio method: the distribution of the scores depends on the corpus, making cross-corpus threshold optimization a tricky problem. Logistic regression has less difficulty with respect to threshold tuning because it produces probabilistic scores of Pr (y = 1 | x) upon which the optimal threshold can be directly computed if probability estimation is accurate. Given the penalty ratio for misses vs. false alarms as 2:1 in T11SU, 10:1 in TDT5SU and 583:1 in Ctrk (Section 3.3), the corresponding optimal thresholds (t) are 0.33, 0.091 and 0.0017 respectively. Although the theoretical threshold could be inaccurate, it still suggests the range of near-optimal settings. With these threshold settings in our experiments for LR, we focused on the cross-corpus validation of the Bayesian prior parameters, that is, μr and X. Table 2 summarizes the results3. We measured the performance of the runs on TREC10 and TREC11 using T11SU, and the performance of the runs on TDT3 and TDT5 using TDT5SU. For comparison we also include the best results of Rocchio-based methods on these corpora, which are our own results of Rocchio on TDT3 and TDT5, and the best results reported by NIST for TREC10 and TREC11. From this set of results, we see that LR significantly outperformed Rocchio on all the corpora, even in the runs of standard LR without any tuning, i.e. X = 0. This empirical finding is consistent with a previous report [13] for LR on TREC11 although our results of LR (0.585 ~ 0.608 in T11SU) are stronger than the results (0.49 for standard LR and 0.54 for LR using Rocchio prototype as the prior) in that report. More importantly, our cross-benchmark evaluation gives strong evidence for the robustness of LR. The robustness, we believe, comes from the probabilistic nature of the system-generated scores. That is, compared to the ad hoc scores in Rocchio, the normalized posterior probabilities make the threshold optimization in LR a much easier problem. Moreover, logistic regression is known to converge towards the Bayes classifier asymptotically while Rocchio classifiers' parameters do not. Another interesting observation in these results is that the performance of LR did not improve when using a Rocchio prototype as the mean in the prior; instead, the performance decreased in some cases. 
This observation does not support the previous report by [13], but we are not surprised because we are not convinced that Rocchio prototypes are more accurate than LR models for topics in the early stage of the AF process, and we believe that using a Rocchio prototype as the mean in the Gaussian prior would introduce undesirable bias to LR. We also believe that variance reduction (in the testing phase) should be controlled by the choice of X (but not μr), for which we conducted the experiments as shown in Figure 6. Table 2: Results of LR with different Bayesian priors 3 The LR results (0.77 ~ 0.78) on TDT5 in this table are better than our TDT2004 official result (0.73) because parameter optimization has been improved afterwards. 4 The TREC10-best result (0.496 by Oracle) is only available in T10U which is not directly comparable to the scores in T11SU, just indicative. *: μr was set to the Rocchio prototype Figure 6: LR with varying lambda. The performance of LR is summarized with respect to X tuning on the corpora of TREC10, TREC11 and TDT3. The performance on each corpus was measured using the corresponding metrics, that is, T11SU for the runs on TREC10 and TREC11, and TDT5SU and Ctrk for the runs on TDT3,. In the case of maximizing the utilities, the "safe" interval for X is between 0 and 0.01, meaning that the performance of regularized LR is stable, the same as or improved slightly over the performance of standard LR. In the case of minimizing Ctrk, the safe range for X is between 0 and 0.1, and setting λ between 0.005 and 0.05 yielded relatively large improvements over the performance of standard LR because training a model for extremely high recall is statistically more tricky, and hence more regularization is needed. In either case, tuning X is relatively safe, and easy to do successfully by cross-corpus tuning. Another influential choice in our experiment settings is term weighting: we examined the choices of binary, TF and TF-IDF (the "ltc" version) schemes. We found TF-IDF most effective for both Rocchio and LR, and used this setting in all our experiments. 5.3 Percentages of labeled data How much relevance feedback (RF) would be needed during the AF process is a meaningful question in real-world applications. To answer it, we evaluated Rocchio and LR on TDT with the following settings: • Basic Rocchio, no adaptation at all • PRF Rocchio, updating topic profiles without using true relevance feedback; • Adaptive Rocchio, updating topic profiles using relevance feedback on system-accepted documents plus 10 • documents randomly sampled from the pool of systemrejected documents; • All the parameters in Rocchio tuned on TDT3. Table 3 summarizes the results in Ctrk: Adaptive Rocchio with relevance feedback on 0.6% of the test documents reduced the tracking cost by 54% over the result of the PRF Rocchio, the best system in the TDT2004 evaluation for topic tracking without relevance feedback information. Incremental LR, on the other hand, was weaker but still impressive. Recall that Ctrk is an extremely high-recall oriented metric, causing frequent updating of profiles and hence an efficiency problem in LR. For this reason we set a higher threshold (0.004) instead of the theoretically optimal threshold (0.0017) in LR to avoid an untolerable computation cost. The computation time in machinehours was 0.33 for the run of adaptive Rocchio and 14 for the run of LR on TDT5 when optimizing Ctrk. 
Table 4 summarizes the results in TDT5SU; adaptive LR was the winner in this case, with relevance feedback on 0.05% of the test documents improving the utility by 20.9% over the results of PRF Rocchio. Table 3: AF methods on TDT5 (Performance in Ctrk) Table 4: AF methods on TDT5 (Performance in TDT5SU) Evidently, both Rocchio and LR are highly effective in adaptive filtering, in terms of using of a small amount of labeled data to significantly improve the model accuracy in statistical learning, which is the main goal of AF. 5.4 Summary of Adaptation Process After we decided the parameter settings using validation, we perform the adaptive filtering in the following steps for each topic: 1) Train the LR/Rocchio model using the provided positive training examples and 30 randomly sampled negative examples; 2) For each document in the test corpus: we first make a prediction about relevance, and then get relevance feedback for those (predicted) positive documents. 3) Model and IDF statistics will be incrementally updated if we obtain its true relevance feedback. 6. CONCLUDING REMARKS We presented a cross-benchmark evaluation of incremental Rocchio and incremental LR in adaptive filtering, focusing on their robustness in terms of performance consistency with respect to cross-corpus parameter optimization. Our main conclusions from this study are the following: • Parameter optimization in AF is an open challenge but has not been thoroughly studied in the past. • Robustness in cross-corpus parameter tuning is important for evaluation and method comparison. • We found LR more robust than Rocchio; it had the best results (in T11SU) ever reported on TDT5, TREC10 and TREC11 without extensive tuning. • We found Rocchio performs strongly when a good validation corpus is available, and a preferred choice when optimizing Ctrk is the objective, favoring recall over precision to an extreme. For future research we want to study explicit modeling of the temporal trends in topic distributions and content drifting.
Robustness of Adaptive Filtering Methods In a Cross-benchmark Evaluation ABSTRACT This paper reports a cross-benchmark evaluation of regularized logistic regression (LR) and incremental Rocchio for adaptive filtering. Using four corpora from the Topic Detection and Tracking (TDT) forum and the Text Retrieval Conferences (TREC) we evaluated these methods with non-stationary topics at various granularity levels, and measured performance with different utility settings. We found that LR performs strongly and robustly in optimizing T11SU (a TREC utility function) while Rocchio is better for optimizing Ctrk (the TDT tracking cost), a high-recall oriented objective function. Using systematic cross-corpus parameter optimization with both methods, we obtained the best results ever reported on TDT5, TREC10 and TREC11. Relevance feedback on a small portion (0.05 ~ 0.2%) of the TDT5 test documents yielded significant performance improvements, measuring up to a 54% reduction in Ctrk and a 20.9% increase in T11SU (with β = 0.1), compared to the results of the top-performing system in TDT2004 without relevance feedback information. 1. INTRODUCTION Adaptive filtering (AF) has been a challenging research topic in information retrieval. The task is for the system to make an online topic membership decision (yes or no) for every document, as soon as it arrives, with respect to each pre-defined topic of interest. Starting from 1997 in the Topic Detection and Tracking (TDT) area and 1998 in the Text Retrieval Conferences (TREC), benchmark evaluations have been conducted by NIST under the following conditions [6] [7] [8] [3] [4]: • A very small number (1 to 4) of positive training examples was provided for each topic at the starting point. • Relevance feedback was available but only for the systemaccepted documents (with a "yes" decision) in the TREC evaluations for AF. • Relevance feedback (RF) was not allowed in the TDT evaluations for AF (or topic tracking in the TDT terminology) until 2004. • TDT2004 was the first time that TREC and TDT metrics were jointly used in evaluating AF methods on the same benchmark (the TDT5 corpus) where non-stationary topics dominate. The above conditions attempt to mimic realistic situations where an AF system would be used. That is, the user would be willing to provide a few positive examples for each topic of interest at the start, and might or might not be able to provide additional labeling on a small portion of incoming documents through relevance feedback. Furthermore, topics of interest might change over time, with new topics appearing and growing, and old topics shrinking and diminishing. These conditions make adaptive filtering a difficult task in statistical learning (online classification), for the following reasons: 1) it is difficult to learn accurate models for prediction based on extremely sparse training data; 2) it is not obvious how to correct the sampling bias (i.e., relevance feedback on system-accepted documents only) during the adaptation process; 3) it is not well understood how to effectively tune parameters in AF methods using cross-corpus validation where the validation and evaluation topics do not overlap, and the documents may be from different sources or different epochs. None of these problems is addressed in the literature of statistical learning for batch classification where all the training data are given at once. 
The first two problems have been studied in the adaptive filtering literature, including topic profile adaptation using incremental Rocchio, Gaussian-Exponential density models, logistic regression in a Bayesian framework, etc., and threshold optimization strategies using probabilistic calibration or local fitting techniques [1] [2] [9] [10] [11] [12] [13]. Although these works provide valuable insights for understanding the problems and possible solutions, it is difficult to draw conclusions regarding the effectiveness and robustness of current methods because the third problem has not been thoroughly investigated. Addressing the third issue is the main focus in this paper. We argue that robustness is an important measure for evaluating and comparing AF methods. By "robust" we mean consistent and strong performance across benchmark corpora with a systematic method for parameter tuning across multiple corpora. Most AF methods have pre-specified parameters that may influence the performance significantly and that must be determined before the test process starts. Available training examples, on the other hand, are often insufficient for tuning the parameters. In TDT5, for example, there is only one labeled training example per topic at the start; parameter optimization on such training data is doomed to be ineffective. This leaves only one option (assuming tuning on the test set is not an alternative), that is, choosing an external corpus as the validation set. Notice that the validation-set topics often do not overlap with the test-set topics, thus the parameter optimization is performed under the tough condition that the validation data and the test data may be quite different from each other. Now the important question is: which methods (if any) are robust under the condition of using cross-corpus validation to tune parameters? Current literature does not offer an answer because no thorough investigation on the robustness of AF methods has been reported. In this paper we address the above question by conducting a cross-benchmark evaluation with two effective approaches in AF: incremental Rocchio and regularized logistic regression (LR). Rocchio-style classifiers have been popular in AF, with good performance in benchmark evaluations (TREC and TDT) if appropriate parameters were used and if combined with an effective threshold calibration strategy [2] [4] [7] [8] [9] [11] [13]. Logistic regression is a classical method in statistical learning, and one of the best in batch-mode text categorization [15] [14]. It was recently evaluated in adaptive filtering and was found to have relatively strong performance (Section 5.1). Furthermore, a recent paper [13] reported that the joint use of Rocchio and LR in a Bayesian framework outperformed the results of using each method alone on the TREC11 corpus. Stimulated by those findings, we decided to include Rocchio and LR in our crossbenchmark evaluation for robustness testing. Specifically, we focus on how much the performance of these methods depends on parameter tuning, what the most influential parameters are in these methods, how difficult (or how easy) to optimize these influential parameters using cross-corpus validation, how strong these methods perform on multiple benchmarks with the systematic tuning of parameters on other corpora, and how efficient these methods are in running AF on large benchmark corpora. 
The organization of the paper is as follows: Section 2 introduces the four benchmark corpora (TREC10 and TREC11, TDT3 and TDT5) used in this study. Section 3 analyzes the differences among the TREC and TDT metrics (utilities and tracking cost) and the potential implications of those differences. Section 4 outlines the Rocchio and LR approaches to AF, respectively. Section 5 reports the experiments and results. Section 6 concludes the main findings in this study. 2. BENCHMARK CORPORA 3. METRICS 3.1 TREC11 metrics 3.3 The correlations and the differences 3.2 TDT metrics 4. METHODS 4.1 Incremental Rocchio for AF 4.2 Logistic Regression for AF 5. EVALUATIONS 5.1 TDT2004 benchmark results 5.2 Cross-corpus parameter optimization 5.3 Percentages of labeled data 5.4 Summary of Adaptation Process 6. CONCLUDING REMARKS We presented a cross-benchmark evaluation of incremental Rocchio and incremental LR in adaptive filtering, focusing on their robustness in terms of performance consistency with respect to cross-corpus parameter optimization. Our main conclusions from this study are the following: • Parameter optimization in AF is an open challenge but has not been thoroughly studied in the past. • Robustness in cross-corpus parameter tuning is important for evaluation and method comparison. • We found LR more robust than Rocchio; it had the best results (in T11SU) ever reported on TDT5, TREC10 and TREC11 without extensive tuning. • We found Rocchio performs strongly when a good validation corpus is available, and a preferred choice when optimizing Ctrk is the objective, favoring recall over precision to an extreme. For future research we want to study explicit modeling of the temporal trends in topic distributions and content drifting.
Robustness of Adaptive Filtering Methods In a Cross-benchmark Evaluation ABSTRACT This paper reports a cross-benchmark evaluation of regularized logistic regression (LR) and incremental Rocchio for adaptive filtering. Using four corpora from the Topic Detection and Tracking (TDT) forum and the Text Retrieval Conferences (TREC) we evaluated these methods with non-stationary topics at various granularity levels, and measured performance with different utility settings. We found that LR performs strongly and robustly in optimizing T11SU (a TREC utility function) while Rocchio is better for optimizing Ctrk (the TDT tracking cost), a high-recall oriented objective function. Using systematic cross-corpus parameter optimization with both methods, we obtained the best results ever reported on TDT5, TREC10 and TREC11. Relevance feedback on a small portion (0.05 ~ 0.2%) of the TDT5 test documents yielded significant performance improvements, measuring up to a 54% reduction in Ctrk and a 20.9% increase in T11SU (with β = 0.1), compared to the results of the top-performing system in TDT2004 without relevance feedback information. 1. INTRODUCTION Adaptive filtering (AF) has been a challenging research topic in information retrieval. The task is for the system to make an online topic membership decision (yes or no) for every document, as soon as it arrives, with respect to each pre-defined topic of interest. • Relevance feedback was available but only for the systemaccepted documents (with a "yes" decision) in the TREC evaluations for AF. • Relevance feedback (RF) was not allowed in the TDT evaluations for AF (or topic tracking in the TDT terminology) until 2004. • TDT2004 was the first time that TREC and TDT metrics were jointly used in evaluating AF methods on the same benchmark (the TDT5 corpus) where non-stationary topics dominate. The above conditions attempt to mimic realistic situations where an AF system would be used. Furthermore, topics of interest might change over time, with new topics appearing and growing, and old topics shrinking and diminishing. These conditions make adaptive filtering a difficult task in statistical learning (online classification), for the following reasons: None of these problems is addressed in the literature of statistical learning for batch classification where all the training data are given at once. The first two problems have been studied in the adaptive filtering literature, including topic profile adaptation using incremental Rocchio, Gaussian-Exponential density models, logistic regression in a Bayesian framework, etc., and threshold optimization strategies using probabilistic Addressing the third issue is the main focus in this paper. We argue that robustness is an important measure for evaluating and comparing AF methods. By "robust" we mean consistent and strong performance across benchmark corpora with a systematic method for parameter tuning across multiple corpora. Most AF methods have pre-specified parameters that may influence the performance significantly and that must be determined before the test process starts. Available training examples, on the other hand, are often insufficient for tuning the parameters. In TDT5, for example, there is only one labeled training example per topic at the start; parameter optimization on such training data is doomed to be ineffective. 
Notice that the validation-set topics often do not overlap with the test-set topics, thus the parameter optimization is performed under the tough condition that the validation data and the test data may be quite different from each other. Now the important question is: which methods (if any) are robust under the condition of using cross-corpus validation to tune parameters? Current literature does not offer an answer because no thorough investigation on the robustness of AF methods has been reported. In this paper we address the above question by conducting a cross-benchmark evaluation with two effective approaches in AF: incremental Rocchio and regularized logistic regression (LR). Rocchio-style classifiers have been popular in AF, with good performance in benchmark evaluations (TREC and TDT) if appropriate parameters were used and if combined with an effective threshold calibration strategy [2] [4] [7] [8] [9] [11] [13]. Logistic regression is a classical method in statistical learning, and one of the best in batch-mode text categorization [15] [14]. It was recently evaluated in adaptive filtering and was found to have relatively strong performance (Section 5.1). Furthermore, a recent paper [13] reported that the joint use of Rocchio and LR in a Bayesian framework outperformed the results of using each method alone on the TREC11 corpus. Stimulated by those findings, we decided to include Rocchio and LR in our crossbenchmark evaluation for robustness testing. The organization of the paper is as follows: Section 2 introduces the four benchmark corpora (TREC10 and TREC11, TDT3 and TDT5) used in this study. Section 3 analyzes the differences among the TREC and TDT metrics (utilities and tracking cost) and the potential implications of those differences. Section 4 outlines the Rocchio and LR approaches to AF, respectively. Section 5 reports the experiments and results. Section 6 concludes the main findings in this study. 6. CONCLUDING REMARKS We presented a cross-benchmark evaluation of incremental Rocchio and incremental LR in adaptive filtering, focusing on their robustness in terms of performance consistency with respect to cross-corpus parameter optimization. Our main conclusions from this study are the following: • Parameter optimization in AF is an open challenge but has not been thoroughly studied in the past. • Robustness in cross-corpus parameter tuning is important for evaluation and method comparison. • We found LR more robust than Rocchio; it had the best results (in T11SU) ever reported on TDT5, TREC10 and TREC11 without extensive tuning. • We found Rocchio performs strongly when a good validation corpus is available, and a preferred choice when optimizing Ctrk is the objective, favoring recall over precision to an extreme. For future research we want to study explicit modeling of the temporal trends in topic distributions and content drifting.
C-65
Shooter Localization and Weapon Classification with Soldier-Wearable Networked Sensors
The paper presents a wireless sensor network-based mobile countersniper system. A sensor node consists of a helmetmounted microphone array, a COTS MICAz mote for internode communication and a custom sensorboard that implements the acoustic detection and Time of Arrival (ToA) estimation algorithms on an FPGA. A 3-axis compass provides self orientation and Bluetooth is used for communication with the soldier's PDA running the data fusion and the user interface. The heterogeneous sensor fusion algorithm can work with data from a single sensor or it can fuse ToA or Angle of Arrival (AoA) observations of muzzle blasts and ballistic shockwaves from multiple sensors. The system estimates the trajectory, the range, the caliber and the weapon type. The paper presents the system design and the results from an independent evaluation at the US Army Aberdeen Test Center. The system performance is characterized by 1-degree trajectory precision and over 95% caliber estimation accuracy for all shots, and close to 100% weapon estimation accuracy for 4 out of 6 guns tested.
[ "weapon classif", "internod commun", "sensorboard", "self orient", "data fusion", "trajectori", "rang", "calib", "weapon type", "calib estim accuraci", "calib estim", "wireless sensor network-base mobil countersnip system", "helmetmount microphon arrai", "1degre trajectori precis", "sensor network", "acoust sourc local" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "M", "M", "M", "R", "M" ]
Shooter Localization and Weapon Classification with Soldier-Wearable Networked Sensors Peter Volgyesi, Gyorgy Balogh, Andras Nadas, Christopher B. Nash, Akos Ledeczi Institute for Software Integrated Systems, Vanderbilt University, Nashville, TN, USA akos.ledeczi@vanderbilt.edu
ABSTRACT
The paper presents a wireless sensor network-based mobile countersniper system. A sensor node consists of a helmet-mounted microphone array, a COTS MICAz mote for internode communication and a custom sensorboard that implements the acoustic detection and Time of Arrival (ToA) estimation algorithms on an FPGA. A 3-axis compass provides self orientation and Bluetooth is used for communication with the soldier's PDA running the data fusion and the user interface. The heterogeneous sensor fusion algorithm can work with data from a single sensor or it can fuse ToA or Angle of Arrival (AoA) observations of muzzle blasts and ballistic shockwaves from multiple sensors. The system estimates the trajectory, the range, the caliber and the weapon type. The paper presents the system design and the results from an independent evaluation at the US Army Aberdeen Test Center. The system performance is characterized by 1-degree trajectory precision and over 95% caliber estimation accuracy for all shots, and close to 100% weapon estimation accuracy for 4 out of 6 guns tested.
Categories and Subject Descriptors: C.2.4 [Computer-Communications Networks]: Distributed Systems; J.7 [Computers in Other Systems]: Military
General Terms: Design, Measurement, Performance
1. INTRODUCTION
The importance of countersniper systems is underscored by the constant stream of news reports coming from the Middle East. In October 2006 CNN reported on a new tactic employed by insurgents. A mobile sniper team moves around busy city streets in a car, positions itself at a good standoff distance from dismounted US military personnel, takes a single well-aimed shot and immediately melts into the city traffic. By the time the soldiers can react, they are gone. A countersniper system that provides almost immediate shooter location to every soldier in the vicinity would provide clear benefits to the warfighters. Our team introduced PinPtr, the first sensor network-based countersniper system [17, 8], in 2003. The system is based on potentially hundreds of inexpensive sensor nodes deployed in the area of interest forming an ad-hoc multihop network. The acoustic sensors measure the Time of Arrival (ToA) of muzzle blasts and ballistic shockwaves (pressure waves induced by the supersonic projectile) and send the data to a base station where a sensor fusion algorithm determines the origin of the shot. PinPtr is characterized by high precision: 1 m average 3D accuracy for shots originating within or near the sensor network, and 1-degree bearing precision for both azimuth and elevation with 10% accuracy in range estimation for longer range shots. The truly unique characteristic of the system is that it works in such reverberant environments as cluttered urban terrain and that it can resolve multiple simultaneous shots. This capability is due to the widely distributed sensing and the unique sensor fusion approach [8]. The system has been tested several times in US Army MOUT (Military Operations in Urban Terrain) facilities. The obvious disadvantage of such a system is its static nature. Once the sensors are distributed, they cover a certain area.
Depending on the operation, the deployment may be needed for an hour or a month, but eventually the area loses its importance. It is not practical to gather and reuse the sensors, especially under combat conditions. Even if the sensors are cheap, it is still a waste and a logistical problem to provide a continuous stream of sensors as the operations move from place to place. As it is primarily the soldiers that the system protects, a natural extension is to mount the sensors on the soldiers themselves. While there are vehicle-mounted countersniper systems [1] available commercially, we are not aware of a deployed system that protects dismounted soldiers. A helmet-mounted system was developed in the mid-90s by BBN [3], but it was not continued beyond the DARPA program that funded it. To move from a static sensor network-based solution to a highly mobile one presents significant challenges. The sensor positions and orientation need to be constantly monitored. As soldiers may work in groups of as little as four people, the number of sensors measuring the acoustic phenomena may be an order of magnitude smaller than before. Moreover, the system should be useful to even a single soldier. Finally, additional requirements called for caliber estimation and weapon classification in addition to source localization. The paper presents the design and evaluation of our soldier-wearable mobile countersniper system. It describes the hardware and software architecture including the custom sensor board equipped with a small microphone array and connected to a COTS MICAz mote [12]. Special emphasis is paid to the sensor fusion technique that estimates the trajectory, range, caliber and weapon type simultaneously. The results and analysis of an independent evaluation of the system at the US Army Aberdeen Test Center are also presented.
2. APPROACH
The firing of a typical military rifle, such as the AK47 or M16, produces two distinct acoustic phenomena. The muzzle blast is generated at the muzzle of the gun and travels at the speed of sound. The supersonic projectile generates an acoustic shockwave, a kind of sonic boom. The wavefront has a conical shape, the angle of which depends on the Mach number, the speed of the bullet relative to the speed of sound. The shockwave has a characteristic shape resembling a capital N. The rise time at both the start and end of the signal is very fast, under 1 μsec. The length is determined by the caliber and the miss distance, the distance between the trajectory and the sensor. It is typically a few hundred μsec. Once a trajectory estimate is available, the shockwave length can be used for caliber estimation. Our system is based on four microphones connected to a sensorboard. The board detects shockwaves and muzzle blasts and measures their ToA. If at least three acoustic channels detect the same event, its AoA is also computed. If both the shockwave and muzzle blast AoA are available, a simple analytical solution gives the shooter location as shown in Section 6. As the microphones are close to each other, typically 2-4, we cannot expect very high precision. Also, this method does not estimate a trajectory. In fact, an infinite number of trajectory-bullet speed pairs satisfy the observations. However, the sensorboards are also connected to COTS MICAz motes and they share their AoA and ToA measurements, as well as their own location and orientation, with each other using a multihop routing service [9].
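The per-channel ToA measurements described above are what make the AoA estimate possible when at least three channels detect the same event. The following is a generic far-field sketch of that computation, not the FPGA implementation used in the system: it assumes a plane wavefront, known microphone coordinates in the sensor frame, a shared time base across channels, and a nominal speed of sound; all function and variable names are hypothetical.

import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at roughly 20 C; a real system would compensate for temperature

def estimate_aoa(mic_positions, toas, c=SPEED_OF_SOUND):
    # mic_positions: (k, 3) numpy array of microphone coordinates in the sensor frame,
    # toas: (k,) arrival times in seconds for the same acoustic event (k >= 3).
    # Returns azimuth and elevation (radians) of the direction back toward the source
    # in the sensor frame; the digital compass rotates this into world coordinates.
    p0, t0 = mic_positions[0], toas[0]
    A = mic_positions[1:] - p0                    # baselines relative to the reference mic
    b = c * (np.asarray(toas[1:]) - t0)           # path-length differences along the wavefront
    d, *_ = np.linalg.lstsq(A, b, rcond=None)     # least-squares propagation direction
    d = d / (np.linalg.norm(d) + 1e-12)
    # Note: with only two baselines (three microphones) or coplanar microphones the
    # 3-D direction is ambiguous; four non-coplanar microphones resolve it.
    src = -d                                      # unit vector pointing back toward the source
    azimuth = np.arctan2(src[1], src[0])
    elevation = np.arcsin(np.clip(src[2], -1.0, 1.0))
    return azimuth, elevation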
A hybrid sensor fusion algorithm then estimates the trajectory, the range, the caliber and the weapon type based on all available observations. The sensorboard is also Bluetooth capable for communication with the soldier``s PDA or laptop computer. A wired USB connection is also available. The sensorfusion algorithm and the user interface get their data through one of these channels. The orientation of the microphone array at the time of detection is provided by a 3-axis digital compass. Currently the system assumes that the soldier``s PDA is GPS-capable and it does not provide self localization service itself. However, the accuracy of GPS is a few meters degrading the Figure 1: Acoustic sensorboard/mote assembly . overall accuracy of the system. Refer to Section 7 for an analysis. The latest generation sensorboard features a Texas Instruments CC-1000 radio enabling the high-precision radio interferometric self localization approach we have developed separately [7]. However, we leave the integration of the two technologies for future work. 3. HARDWARE Since the first static version of our system in 2003, the sensor nodes have been built upon the UC Berkeley/Crossbow MICA product line [11]. Although rudimentary acoustic signal processing can be done on these microcontroller-based boards, they do not provide the required computational performance for shockwave detection and angle of arrival measurements, where multiple signals from different microphones need to be processed in parallel at a high sampling rate. Our 3rd generation sensorboard is designed to be used with MICAz motes-in fact it has almost the same size as the mote itself (see Figure 1). The board utilizes a powerful Xilinx XC3S1000 FPGA chip with various standard peripheral IP cores, multiple soft processor cores and custom logic for the acoustic detectors (Figure 2). The onboard Flash (4MB) and PSRAM (8MB) modules allow storing raw samples of several acoustic events, which can be used to build libraries of various acoustic signatures and for refining the detection cores off-line. Also, the external memory blocks can store program code and data used by the soft processor cores on the FPGA. The board supports four independent analog channels sampled at up to 1 MS/s (million samples per seconds). These channels, featuring an electret microphone (Panasonic WM64PNT), amplifiers with controllable gain (30-60 dB) and a 12-bit serial ADC (Analog Devices AD7476), reside on separate tiny boards which are connected to the main sensorboard with ribbon cables. This partitioning enables the use of truly different audio channels (eg.: slower sampling frequency, different gain or dynamic range) and also results in less noisy measurements by avoiding long analog signal paths. The sensor platform offers a rich set of interfaces and can be integrated with existing systems in diverse ways. An RS232 port and a Bluetooth (BlueGiga WT12) wireless link with virtual UART emulation are directly available on the board and provide simple means to connect the sensor to PCs and PDAs. The mote interface consists of an I2 C bus along with an interrupt and GPIO line (the latter one is used 114 Figure 2: Block diagram of the sensorboard. for precise time synchronization between the board and the mote). The motes are equipped with IEEE 802.15.4 compliant radio transceivers and support ad-hoc wireless networking among the nodes and to/from the base station. 
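The sampling rate and the microphone geometry together bound the achievable angular precision. As a rough back-of-envelope illustration (not a figure taken from the evaluation), the Java sketch below assumes a nominal 343 m/s speed of sound, the roughly 20 cm microphone separation of the helmet-mounted prototypes described below, and the 1 microsecond ToA resolution implied by 1 MS/s sampling.

public class ArrayResolution {
    public static void main(String[] args) {
        double c = 343.0;      // assumed speed of sound (m/s) at moderate temperature
        double d = 0.20;       // assumed microphone separation (m), helmet-mounted case
        double dtRes = 1e-6;   // ToA resolution implied by 1 MS/s sampling (s)

        // Largest possible ToA difference across the pair (sound arriving along the baseline).
        double dtMax = d / c;                               // roughly 583 microseconds
        // Near broadside, a one-sample timing step maps to roughly c*dt/d radians of angle.
        double stepDeg = Math.toDegrees(c * dtRes / d);     // roughly 0.1 degree
        System.out.printf("max ToA difference: %.0f us%n", dtMax * 1e6);
        System.out.printf("angular step near broadside: %.2f deg%n", stepDeg);
    }
}

The figure degrades toward endfire incidence, and the 5 cm x 10 cm box-mounted arrays are proportionally coarser, which is broadly consistent with the roughly one degree bearing accuracy reported later in the evaluation.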
The sensorboard also supports full-speed USB transfers (with custom USB dongles) for uploading recorded audio samples to the PC. The on-board JTAG chain-directly accessible through a dedicated connector-contains the FPGA part and configuration memory and provides in-system programming and debugging facilities. The integrated Honeywell HMR3300 digital compass module provides heading, pitch and roll information with 1◦ accuracy, which is essential for calculating and combining directional estimates of the detected events. Due to the complex voltage requirements of the FPGA, the power supply circuitry is implemented on the sensorboard and provides power both locally and to the mote. We used a quad pack of rechargeable AA batteries as the power source (although any other configuration is viable that meets the voltage requirements). The FPGA core (1.2 V) and I/O (3.3 V) voltages are generated by a highly efficient buck switching regulator. The FPGA configuration (2.5 V) and a separate 3.3 V power net are fed by low current LDOs, the latter one is used to provide independent power to the mote and to the Bluetooth radio. The regulators-except the last one-can be turned on/off from the mote or through the Bluetooth radio (via GPIO lines) to save power. The first prototype of our system employed 10 sensor nodes. Some of these nodes were mounted on military kevlar helmets with the microphones directly attached to the surface at about 20 cm separation as shown in Figure 3(a). The rest of the nodes were mounted in plastic enclosures (Figure 3(b)) with the microphones placed near the corners of the boxes to form approximately 5 cm×10 cm rectangles. 4. SOFTWARE ARCHITECTURE The sensor application relies on three subsystems exploiting three different computing paradigms as they are shown in Figure 4. Although each of these execution models suit their domain specific tasks extremely well, this diversity (a) (b) Figure 3: Sensor prototypes mounted on a kevlar helmet (a) and in a plastic box on a tripod (b). presents a challenge for software development and system integration. The sensor fusion and user interface subsystem is running on PDAs and were implemented in Java. The sensing and signal processing tasks are executed by an FPGA, which also acts as a bridge between various wired and wireless communication channels. The ad-hoc internode communication, time synchronization and data sharing are the responsibilities of a microcontroller based radio module. Similarly, the application employs a wide variety of communication protocols such as Bluetooth and IEEE 802.14.5 wireless links, as well as optional UARTs, I2 C and/or USB buses. Soldier Operated Device (PDA/Laptop) FPGA Sensor Board Mica Radio Module 2.4 GHz Wireless Link Radio Control Message Routing Acoustic Event Encoder Sensor Time Synch. Network Time Synch.Remote Control Time stamping Interrupts Virtual Register Interface C O O R D I N A T O R A n a l o g c h a n n e l s Compass PicoBlaze Comm. Interface PicoBlaze WT12 Bluetooth Radio MOTE IF:I2C,Interrupts USB PSRAM U A R T U A R T MB det SW det REC Bluetooth Link User Interface Sensor Fusion Location Engine GPS Message (Dis-)AssemblerSensor Control Figure 4: Software architecture diagram. The sensor fusion module receives and unpacks raw measurements (time stamps and feature vectors) from the sensorboard through the Bluetooth link. Also, it fine tunes the execution of the signal processing cores by setting parameters through the same link. 
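The exact message layout exchanged between the sensorboard, the mote network and the PDA is not reproduced in this paper; the hypothetical Java record below only illustrates the kind of per-node payload the sensor fusion module consumes. All field names and units are assumptions made for illustration.

public class DetectionObservation {
    public int nodeId;
    public double[] nodePosition = new double[3]; // GPS-derived node position (m), shared frame
    public double heading, pitch, roll;           // digital compass orientation (degrees)

    // Muzzle blast: per-channel ToAs on a common time base and, when at least
    // three channels detected the event, one or two candidate AoA unit vectors.
    public double[] muzzleToa;
    public double[][] muzzleAoa;

    // Shockwave: per-channel ToAs, candidate AoA unit vectors and the measured
    // N-wave lengths (microseconds) that later feed the caliber estimation.
    public double[] shockToa;
    public double[][] shockAoa;
    public double[] shockLengthUs;
}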
Note that measurements from other nodes along with their location and orientation information also arrive from the sensorboard which acts as a gateway between the PDA and the sensor network. The handheld device obtains its own GPS location data and di115 rectly receives orientation information through the sensorboard. The results of the sensor fusion are displayed on the PDA screen with low latency. Since, the application is implemented in pure Java, it is portable across different PDA platforms. The border between software and hardware is considerably blurred on the sensor board. The IP cores-implemented in hardware description languages (HDL) on the reconfigurable FPGA fabric-closely resemble hardware building blocks. However, some of them-most notably the soft processor cores-execute true software programs. The primary tasks of the sensor board software are 1) acquiring data samples from the analog channels, 2) processing acoustic data (detection), and 3) providing access to the results and run-time parameters through different interfaces. As it is shown in Figure 4, a centralized virtual register file contains the address decoding logic, the registers for storing parameter values and results and the point to point data buses to and from the peripherals. Thus, it effectively integrates the building blocks within the sensorboard and decouples the various communication interfaces. This architecture enabled us to deploy the same set of sensors in a centralized scenario, where the ad-hoc mote network (using the I2 C interface) collected and forwarded the results to a base station or to build a decentralized system where the local PDAs execute the sensor fusion on the data obtained through the Bluetooth interface (and optionally from other sensors through the mote interface). The same set of registers are also accessible through a UART link with a terminal emulation program. Also, because the low-level interfaces are hidden by the register file, one can easily add/replace these with new ones (eg.: the first generation of motes supported a standard μP interface bus on the sensor connector, which was dropped in later designs). The most important results are the time stamps of the detected events. These time stamps and all other timing information (parameters, acoustic event features) are based on a 1 MHz clock and an internal timer on the FPGA. The time conversion and synchronization between the sensor network and the board is done by the mote by periodically requesting the capture of the current timer value through a dedicated GPIO line and reading the captured value from the register file through the I2 C interface. Based on the the current and previous readings and the corresponding mote local time stamps, the mote can calculate and maintain the scaling factor and offset between the two time domains. The mote interface is implemented by the I2 C slave IP core and a thin adaptation layer which provides a data and address bus abstraction on top of it. The maximum effective bandwidth is 100 Kbps through this interface. The FPGA contains several UART cores as well: for communicating with the on-board Bluetooth module, for controlling the digital compass and for providing a wired RS232 link through a dedicated connector. The control, status and data registers of the UART modules are available through the register file. The higher level protocols on these lines are implemented by Xilinx PicoBlaze microcontroller cores [13] and corresponding software programs. 
One of them provides a command line interface for test and debug purposes, while the other is responsible for parsing compass readings. By default, they are connected to the RS232 port and to the on-board digital compass line respectively, however, they can be rewired to any communication interface by changing the register file base address in the programs (e.g. the command line interface can be provided through the Bluetooth channel). Two of the external interfaces are not accessible through the register file: a high speed USB link and the SRAM interface are tied to the recorder block. The USB module implements a simple FIFO with parallel data lines connected to an external FT245R USB device controller. The RAM driver implements data read/write cycles with correct timing and is connected to the on-board pseudo SRAM. These interfaces provide 1 MB/s effective bandwidth for downloading recorded audio samples, for example. The data acquisition and signal processing paths exhibit clear symmetry: the same set of IP cores are instantiated four times (i.e. the number of acoustic channels) and run independently. The signal paths meet only just before the register file. Each of the analog channels is driven by a serial A/D core for providing a 20 MHz serial clock and shifting in 8-bit data samples at 1 MS/s and a digital potentiometer driver for setting the required gain. Each channel has its own shockwave and muzzle blast detector, which are described in Section 5. The detectors fetch run-time parameter values from the register file and store their results there as well. The coordinator core constantly monitors the detection results and generates a mote interrupt promptly upon full detection or after a reasonable timeout after partial detection. The recorder component is not used in the final deployment, however, it is essential for development purposes for refining parameter values for new types of weapons or for other acoustic sources. This component receives the samples from all channels and stores them in circular buffers in the PSRAM device. If the signal amplitude on one of the channels crosses a predefined threshold, the recorder component suspends the sample collection with a predefined delay and dumps the contents of the buffers through the USB link. The length of these buffers and delays, the sampling rate, the threshold level and the set of recorded channels can be (re)configured run-time through the register file. Note that the core operates independently from the other signal processing modules, therefore, it can be used to validate the detection results off-line. The FPGA cores are implemented in VHDL, the PicoBlaze programs are written in assembly. The complete configuration occupies 40% of the resources (slices) of the FPGA and the maximum clock speed is 30 MHz, which is safely higher than the speed used with the actual device (20MHz). The MICAz motes are responsible for distributing measurement data across the network, which drastically improves the localization and classification results at each node. Besides a robust radio (MAC) layer, the motes require two essential middleware services to achieve this goal. The messages need to be propagated in the ad-hoc multihop network using a routing service. We successfully integrated the Directed Flood-Routing Framework (DFRF) [9] in our application. Apart from automatic message aggregation and efficient buffer management, the most unique feature of DFRF is its plug-in architecture, which accepts custom routing policies. 
Routing policies are state machines that govern how received messages are stored, resent or discarded. Example policies include spanning tree routing, broadcast, geographic routing, etc.. Different policies can be used for different messages concurrently, and the application is able to 116 change the underlying policies at run-time (eg.: because of the changing RF environment or power budget). In fact, we switched several times between a simple but lavish broadcast policy and a more efficient gradient routing on the field. Correlating ToA measurements requires a common time base and precise time synchronization in the sensor network. The Routing Integrated Time Synchronization (RITS) [15] protocol relies on very accurate MAC-layer time-stamping to embed the cumulative delay that a data message accrued since the time of the detection in the message itself. That is, at every node it measures the time the message spent there and adds this to the number in the time delay slot of the message, right before it leaves the current node. Every receiving node can subtract the delay from its current time to obtain the detection time in its local time reference. The service provides very accurate time conversion (few μs per hop error), which is more than adequate for this application. Note, that the motes also need to convert the sensorboard time stamps to mote time as it is described earlier. The mote application is implemented in nesC [5] and is running on top of TinyOS [6]. With its 3 KB RAM and 28 KB program space (ROM) requirement, it easily fits on the MICAz motes. 5. DETECTION ALGORITHM There are several characteristics of acoustic shockwaves and muzzle blasts which distinguish their detection and signal processing algorithms from regular audio applications. Both events are transient by their nature and present very intense stimuli to the microphones. This is increasingly problematic with low cost electret microphones-designed for picking up regular speech or music. Although mechanical damping of the microphone membranes can mitigate the problem, this approach is not without side effects. The detection algorithms have to be robust enough to handle severe nonlinear distortion and transitory oscillations. Since the muzzle blast signature closely follows the shockwave signal and because of potential automatic weapon bursts, it is extremely important to settle the audio channels and the detection logic as soon as possible after an event. Also, precise angle of arrival estimation necessitates high sampling frequency (in the MHz range) and accurate event detection. Moreover, the detection logic needs to process multiple channels in parallel (4 channels on our existing hardware). These requirements dictated simple and robust algorithms both for muzzle blast and shockwave detections. Instead of using mundane energy detectors-which might not be able to distinguish the two different events-the applied detectors strive to find the most important characteristics of the two signals in the time-domain using simple state machine logic. The detectors are implemented as independent IP cores within the FPGA-one pair for each channel. The cores are run-time configurable and provide detection event signals with high precision time stamps and event specific feature vectors. 
Although the cores are running independently and in parallel, a crude local fusion module integrates them by shutting down those cores which missed their events after a reasonable timeout and by generating a single detection message towards the mote. At this point, the mote can read and forward the detection times and features and is responsible for restarting the cores afterwards. The most conspicuous characteristics of an acoustic shockwave (see Figure 5(a)) are the steep rising edges at the beginning and end of the signal.

Figure 5: Shockwave signal generated by a 5.56 × 45 mm NATO projectile (a) and the state machine of the detection algorithm (b). The machine advances on the steepness measure s[t] - s[t-D] relative to the threshold E through the states IDLE, FIRST EDGE (recording tstart := t), FIRST EDGE DONE, SECOND EDGE and FOUND (reporting len := t - tstart), requires t - tstart > Lmin before accepting the second edge and returns to IDLE when t - tstart ≥ Lmax.

Also, the length of the N-wave is fairly predictable-as it is described in Section 6.5-and is relatively short (200-300 μs). The shockwave detection core is continuously looking for two rising edges within a given interval. The state machine of the algorithm is shown in Figure 5(b). The input parameters are the minimum steepness of the edges (D, E), and the bounds on the length of the wave (Lmin, Lmax). The only feature calculated by the core is the length of the observed shockwave signal. In contrast to shockwaves, the muzzle blast signatures are characterized by a long initial period (1-5 ms) where the first half period is significantly shorter than the second half [4]. Due to the physical limitations of the analog circuitry described at the beginning of this section, irregular oscillations and glitches might show up within this longer time window, as can be clearly seen in Figure 6(a). Therefore, the real challenge for the matching detection core is to identify the first and second half periods properly. The state machine (Figure 6(b)) does not work on the raw samples directly but is fed by a zero crossing (ZC) encoder. After the initial triggering, the detector attempts to collect those ZC segments which belong to the first period (positive amplitude) while discarding too short (in our terminology: garbage) segments-effectively implementing a rudimentary low-pass filter in the ZC domain. After it encounters a sufficiently long negative segment, it runs the same collection logic for the second half period. If too much garbage is discarded in the collection phases, the core resets itself to prevent the (false) detection of halves from completely different periods separated by rapid oscillation or noise. Finally, if the constraints on the total length and on the length ratio hold, the core generates a detection event along with the actual length, amplitude and energy of the period calculated concurrently. The initial triggering mechanism is based on two amplitude thresholds: one static (but configurable) amplitude level and a dynamically computed one. The latter is essential to adapt the sensor to different ambient noise environments and to temporarily suspend the muzzle blast detector after a shockwave event (oscillations in the analog section or reverberations in the sensor enclosure might otherwise trigger false muzzle blast detections). The dynamic noise level is estimated by a single pole recursive low-pass filter (cutoff at 0.5 kHz) on the FPGA.
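The shockwave detector of Figure 5(b) is small enough to restate in software. The Java sketch below mirrors the published state machine; the deployed version is an FPGA IP core running at the full sampling rate, and the parameter values passed in here are placeholders rather than the calibrated settings of the fielded system.

public class ShockwaveDetector {
    enum State { IDLE, FIRST_EDGE, FIRST_EDGE_DONE, SECOND_EDGE }

    /** Returns the detected N-wave length in samples, or -1 if no shockwave is found. */
    public static int detect(short[] s, int D, int E, int lMin, int lMax) {
        State state = State.IDLE;
        int tStart = 0;
        for (int t = D; t < s.length; t++) {
            int slope = s[t] - s[t - D];                  // crude steepness measure
            if (state != State.IDLE && t - tStart >= lMax) {
                state = State.IDLE;                        // signal too long: give up and rearm
            }
            switch (state) {
                case IDLE:                                 // wait for the first steep rising edge
                    if (slope > E) { tStart = t; state = State.FIRST_EDGE; }
                    break;
                case FIRST_EDGE:                           // ride out the first edge
                    if (slope < E) state = State.FIRST_EDGE_DONE;
                    break;
                case FIRST_EDGE_DONE:                      // wait for the trailing steep edge
                    if (slope > E && t - tStart > lMin) state = State.SECOND_EDGE;
                    break;
                case SECOND_EDGE:                          // second edge over: report the length
                    if (slope < E) return t - tStart;
                    break;
            }
        }
        return -1;
    }
}

At a 1 MS/s sampling rate the returned length is directly in microseconds, which is the feature later used for caliber estimation.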
Figure 6: Muzzle blast signature (a) produced by an M16 assault rifle and the corresponding detection logic (b). The zero-crossing based state machine (IDLE, FIRST ZC, SECOND ZC, PENDING ZC, FOUND) is triggered by the amplitude threshold, collects the long positive and then the long negative zero-crossing segments while discarding short garbage segments, and accepts a valid full period of length len1 + len2.

The detection cores were originally implemented in Java and evaluated on pre-recorded signals because of much faster test runs and more convenient debugging facilities. Later on, they were ported to VHDL and synthesized using the Xilinx ISE tool suite. The functional equivalence between the two implementations was tested by VHDL test benches and Python scripts which provided an automated way to exercise the detection cores on the same set of pre-recorded signals and to compare the results.

6. SENSOR FUSION
The sensor fusion algorithm receives detection messages from the sensor network and estimates the bullet trajectory, the shooter position, the caliber of the projectile and the type of the weapon. The algorithm consists of well separated computational tasks outlined below:
1. Compute muzzle blast and shockwave directions of arrival for each individual sensor (see 6.1).
2. Compute range estimates. This algorithm can analytically fuse a pair of shockwave and muzzle blast AoA estimates (see 6.2).
3. Compute a single trajectory from all shockwave measurements (see 6.3).
4. If a trajectory is available, compute the range (see 6.4); else compute the shooter position first and then the trajectory based on it (see 6.4).
5. If a trajectory is available, compute the caliber (see 6.5).
6. If the caliber is available, compute the weapon type (see 6.6).
We describe each step in detail in the following sections.

6.1 Direction of arrival
The first step of the sensor fusion is to calculate the muzzle blast and shockwave AoA-s for each sensorboard. Each sensorboard has four microphones that measure the ToA-s. Since the microphone spacing is orders of magnitude smaller than the distance to the sound source, we can approximate the approaching sound wave front with a plane (far field assumption). Let us formalize the problem for 3 microphones first. Let P1, P2 and P3 be the positions of the microphones ordered by time of arrival t1 < t2 < t3. First we apply a simple geometry validation step. The measured time difference between two microphones cannot be larger than the sound propagation time between the two microphones:

|ti − tj| <= |Pi − Pj|/c + ε

where c is the speed of sound and ε is the maximum measurement error. If this condition does not hold, the corresponding detections are discarded. Let v(x, y, z) be the normal vector of the unknown direction of arrival. We also use r1(x1, y1, z1), the vector from P1 to P2, and r2(x2, y2, z2), the vector from P1 to P3. Let's consider the projection of the direction of the motion of the wave front (v) onto r1 divided by the speed of sound (c). This gives us how long it takes the wave front to propagate from P1 to P2:

v · r1 = c(t2 − t1)

The same relationship holds for r2 and v:

v · r2 = c(t3 − t1)

We also know that v is a normal vector:

v · v = 1

Moving from vectors to coordinates using the dot product definition leads to a quadratic system:

x·x1 + y·y1 + z·z1 = c(t2 − t1)
x·x2 + y·y2 + z·z2 = c(t3 − t1)
x² + y² + z² = 1

We omit the solution steps here, as they are straightforward, but long.
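The omitted algebra amounts to intersecting the line defined by the two linear constraints with the unit sphere. The Java sketch below is an illustration rather than the fielded implementation: it takes r1 = P2 − P1, r2 = P3 − P1 and the right-hand sides b1 = c(t2 − t1), b2 = c(t3 − t1), and returns the candidate unit vectors.

public class AoaSolver {
    static double dot(double[] a, double[] b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
    static double[] cross(double[] a, double[] b) {
        return new double[] { a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0] };
    }

    /** Returns 0 or 2 candidate direction-of-motion unit vectors (assumes r1, r2 not collinear). */
    public static double[][] solve(double[] r1, double[] r2, double b1, double b2) {
        // Particular solution v0 = alpha*r1 + beta*r2 of the two dot-product equations.
        double a11 = dot(r1, r1), a12 = dot(r1, r2), a22 = dot(r2, r2);
        double det = a11 * a22 - a12 * a12;
        double alpha = (b1 * a22 - b2 * a12) / det;
        double beta  = (b2 * a11 - b1 * a12) / det;
        double[] v0 = { alpha*r1[0]+beta*r2[0], alpha*r1[1]+beta*r2[1], alpha*r1[2]+beta*r2[2] };

        // The homogeneous direction n is perpendicular to both baselines; v0 · n = 0.
        double[] n = cross(r1, r2);
        double s2 = (1.0 - dot(v0, v0)) / dot(n, n);   // solve |v0 + s*n| = 1 for s
        if (s2 < 0) return new double[0][];            // inconsistent ToAs: no real direction
        double s = Math.sqrt(s2);
        return new double[][] {
            { v0[0]+s*n[0], v0[1]+s*n[1], v0[2]+s*n[2] },
            { v0[0]-s*n[0], v0[1]-s*n[1], v0[2]-s*n[2] }
        };
    }
}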
There are two solutions (if the source is on the P1P2P3 plane the two solutions coincide). We use the fourth microphone's measurement-if there is one-to eliminate one of them. Otherwise, both solutions are considered for further processing.

6.2 Muzzle-shock fusion

Figure 7: Section plane of a shot (at P) and two sensors (at P1 and at P2). One sensor detects the muzzle blast's, the other the shockwave's time and direction of arrival. The figure labels the muzzle blast AoA u at (P1, t1), the shockwave AoA v at (P2, t2), the shot at (P, t), the auxiliary point P2′ and the bullet trajectory.

Consider the situation in Figure 7. A shot was fired from P at time t. Both P and t are unknown. We have one muzzle blast detection and one shockwave detection by two different sensors, with AoA and hence ToA information available. The muzzle blast detection is at position P1 with time t1 and AoA u. The shockwave detection is at P2 with time t2 and AoA v. u and v are normal vectors. It is shown below that these measurements are sufficient to compute the position of the shooter (P). Let P2′ be the point on the extended shockwave cone surface where PP2′ is perpendicular to the surface. Note that PP2′ is parallel with v. Since P2′ is on the cone surface which hits P2, a sensor at P2′ would detect the same shockwave time of arrival (t2). The cone surface travels at the speed of sound (c), so we can express P using P2′:

P = P2′ + cv(t2 − t).

P can also be expressed from P1:

P = P1 + cu(t1 − t)

yielding

P1 + cu(t1 − t) = P2′ + cv(t2 − t).

P2P2′ is perpendicular to v:

(P2′ − P2) · v = 0

yielding

(P1 + cu(t1 − t) − cv(t2 − t) − P2) · v = 0,

which contains only one unknown, t. One obtains:

t = ((P1 − P2) · v / c + (u · v)t1 − t2) / (u · v − 1).

From here we can calculate the shooter position P. Let's consider the special single sensor case where P1 = P2 (one sensor detects both shockwave and muzzle blast AoA). In this case:

t = ((u · v)t1 − t2) / (u · v − 1).

Since u and v are not used separately, only their dot product u · v, the absolute orientation of the sensor can be arbitrary; we still get t, which gives us the range. Here we assumed that the shockwave is a cone, which is only true for constant projectile speeds. In reality, the angle of the cone slowly grows; the surface resembles one half of an American football. The decelerating bullet results in a smaller time difference between the shockwave and the muzzle blast detections because the shockwave generation slows down with the bullet. A smaller time difference results in a smaller range, so the above formula underestimates the true range. However, it can still be used with a proper deceleration correction function. We leave this for future work.

6.3 Trajectory estimation
Danicki showed that the bullet trajectory and speed can be computed analytically from two independent shockwave measurements where both ToA and AoA are measured [2]. The method gets more sensitive to measurement errors as the two shockwave directions get closer to each other. In the special case when both directions are the same, the trajectory cannot be computed. In a real world application, the sensors are typically deployed approximately on a plane. In this case, all sensors located on one side of the trajectory measure almost the same shockwave AoA. To avoid this error sensitivity problem, we consider shockwave measurement pairs only if the direction of arrival difference is larger than a certain threshold. We have multiple sensors and one sensor can report two different directions (when only three microphones detect the shockwave). Hence, we typically have several trajectory candidates, i.e. one for each AoA pair over the threshold.
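The closed-form expressions of Section 6.2 are compact enough to state directly in code. The Java sketch below (vectors as double[3]; class and method names are illustrative) computes the shot time and shooter position from one muzzle blast and one shockwave detection, which is also the computation a single sensor can perform standalone, as evaluated later in Section 7.1.

public class MuzzleShockFusion {
    static double dot(double[] a, double[] b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

    /** Shooter position from muzzle blast (P1, t1, AoA u) and shockwave (P2, t2, AoA v).
     *  u and v are unit vectors in the paper's sign convention, c is the speed of sound.
     *  No deceleration correction is applied, so long-range results are underestimates. */
    public static double[] shooterPosition(double[] p1, double t1, double[] u,
                                           double[] p2, double t2, double[] v, double c) {
        double uv = dot(u, v);
        double[] d = { p1[0]-p2[0], p1[1]-p2[1], p1[2]-p2[2] };
        // Shot time: t = ((P1-P2)·v/c + (u·v)t1 - t2) / (u·v - 1).
        // With P1 == P2 (single sensor) the first term vanishes, giving the special case above.
        double t = (dot(d, v) / c + uv * t1 - t2) / (uv - 1.0);
        // Shooter position: P = P1 + c*u*(t1 - t).
        double r = c * (t1 - t);
        return new double[] { p1[0] + r*u[0], p1[1] + r*u[1], p1[2] + r*u[2] };
    }
}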
We applied an outlier filtering and averaging method to fuse together the shockwave direction and time information and come up with a single trajectory. Assume that we have N individual shockwave AoA measurements. Let``s take all possible unordered pairs where the direction difference is above the mentioned threshold and compute the trajectory for each. This gives us at most N(N−1) 2 trajectories. A trajectory is represented by one point pi and the normal vector vi (where i is the trajectory index). We define the distance of two trajectories as the dot product of their normal vectors: D(i, j) = vivj For each trajectory a neighbor set is defined: N(i) := {j|D(i, j) < R} where R is a radius parameter. The largest neighbor set is considered to be the core set C, all other trajectories are outliers. The core set can be found in O(N2 ) time. The trajectories in the core set are then averaged to get the final trajectory. It can happen that we cannot form any sensor pairs because of the direction difference threshold. It means all sensors are on the same side of the trajectory. In this case, we first compute the shooter position (described in the next section) that fixes p making v the only unknown. To find v in this case, we use a simple high resolution grid search and minimize an error function based on the shockwave directions. We have made experiments to utilize the measured shockwave length in the trajectory estimation. There are some promising results, but it needs further research. 6.4 Shooter position estimation The shooter position estimation algorithm aggregates the following heterogenous information generated by earlier computational steps: 1. trajectory, 2. muzzle blast ToA at a sensor, 3. muzzle blast AoA at a sensor, which is effectively a bearing estimate to the shooter, and 4. range estimate at a sensor (when both shockwave and muzzle blast AoA are available). Some sensors report only ToA, some has bearing estimate(s) also and some has range estimate(s) as well, depending on the number of successful muzzle blast and shockwave detections by the sensor. For an example, refer to Figure 8. Note that a sensor may have two different bearing and range estimates. 3 detections gives two possible AoA-s for muzzle blast (i.e. bearing) and/or shockwave. Furthermore, the combination of two different muzzle blast and shockwave AoA-s may result in two different ranges. 119 11111 ,,,, rrvvt ′′ 22 ,vt 333 ,, vvt ′ 4t 5t 6t bullet trajectory shooter position Figure 8: Example of heterogenous input data for the shooter position estimation algorithm. All sensors have ToA measurements (t1, t2, t3, t4, t5), one sensor has a single bearing estimate (v2), one sensor has two possible bearings (v3, v3) and one sensor has two bearing and two range estimates (v1, v1,r1, r1) In a multipath environment, these detections will not only contain gaussian noise, but also possibly large errors due to echoes. It has been showed in our earlier work that a similar problem can be solved efficiently with an interval arithmetic based bisection search algorithm [8]. The basic idea is to define a discrete consistency function over the area of interest and subdivide the space into 3D boxes. For any given 3D box, this function gives the number of measurements supporting the hypothesis that the shooter was within that box. The search starts with a box large enough to contain the whole area of interest, then zooms in by dividing and evaluating boxes. 
The box with the maximum consistency is divided until the desired precision is reached. Backtracking is possible to avoid getting stuck in a local maximum. This approach has been shown to be fast enough for online processing. Note, however, that when the trajectory has already been calculated in previous steps, the search needs to be done only along the trajectory, making it orders of magnitude faster. Next let us describe in detail how the consistency function is calculated. Consider B, a three-dimensional box whose consistency value we would like to compute. First we consider only the ToA information. If one sensor has multiple ToA detections, we use the average of those times, so one sensor supplies at most one ToA estimate. For each ToA, we can calculate the corresponding time of the shot, since the origin is assumed to be in box B. Since it is a box and not a single point, this gives us an interval for the shot time. The maximum number of overlapping time intervals gives us the value of the consistency function for B. For a detailed description of the consistency function and search algorithm, refer to [8]. Here we extend the approach in the following way. We modify the consistency function based on the bearing and range data from individual sensors. A bearing estimate supports B if the line segment starting from the sensor in the measured direction intersects the box B. A range supports B if the sphere with the radius of the range and origin at the sensor intersects B. Instead of simply checking whether the position specified by the corresponding bearing-range pair falls within B, this eliminates the sensor's possible orientation error. The value of the consistency function is incremented by one for each bearing and range estimate that is consistent with B.

6.5 Caliber estimation
The shockwave signal characteristics have been studied before by Whitham [20]. He showed that the shockwave period T is related to the projectile diameter d, the length l, the perpendicular miss distance b from the bullet trajectory to the sensor, the Mach number M and the speed of sound c:

T = 1.82·M·b^(1/4) / (c·(M² − 1)^(3/8)) · d/l^(1/4) ≈ (1.82·d/c)·(M·b/l)^(1/4)

Figure 9: Shockwave length and miss distance relationship (shockwave length in microseconds vs. miss distance in meters, for .50 cal, 7.62 mm and 5.56 mm projectiles). Each data point represents one sensorboard after an aggregation of the individual measurements of the four acoustic channels. Three different caliber projectiles have been tested (196 shots, 10 sensors).

To illustrate the relationship between miss distance and shockwave length, here we use all 196 shots with three different caliber projectiles fired during the evaluation. (During the evaluation we used data obtained previously using a few practice shots per weapon.) 10 sensors (4 microphones per sensor) measured the shockwave length. For each sensor, we considered the shockwave length estimation valid if at least three out of four microphones agreed on a value with at most 5 microsecond variance. This filtering leads to an 86% report rate per sensor and gets rid of large measurement errors. The experimental data is shown in Figure 9. Whitham's formula suggests that the shockwave length for a given caliber can be approximated with a power function of the miss distance (with a 1/4 exponent).
Best fit functions on our data are:

.50 cal: T = 237.75·b^0.2059
7.62 mm: T = 178.11·b^0.1996
5.56 mm: T = 144.39·b^0.1757

To evaluate a shot, we take the caliber whose approximation function results in the smallest RMS error of the filtered sensor readings. This method has less than 1% caliber estimation error when an accurate trajectory estimate is available. In other words, caliber estimation only works if enough shockwave detections are made by the system to compute a trajectory.

6.6 Weapon estimation
We analyzed all measured signal characteristics to find weapon specific information. Unfortunately, we concluded that the observed muzzle blast signature is not characteristic enough of the weapon for classification purposes. The reflections of the high energy muzzle blast from the environment have a much higher impact on the muzzle blast signal shape than the weapon itself. Shooting the same weapon from different places caused larger differences in the recorded signal than shooting different weapons from the same place.

Figure 10: AK47 and M240 bullet deceleration measurements (bullet speed in m/s vs. range in m). Both weapons have the same caliber. Data is approximated using simple linear regression.

Figure 11: M16, M249 and M4 bullet deceleration measurements (bullet speed in m/s vs. range in m). All weapons have the same caliber. Data is approximated using simple linear regression.

However, the measured speed of the projectile and its caliber showed good correlation with the weapon type. This is because for a given weapon type and ammunition pair, the muzzle velocity is nearly constant. In Figures 10 and 11 we can see the relationship between the range and the measured bullet speed for different calibers and weapons. In the supersonic speed range, the bullet deceleration can be approximated with a linear function. In the case of the 7.62 mm caliber, the two tested weapons (AK47, M240) can be clearly separated (Figure 10). Unfortunately, this is not necessarily true for the 5.56 mm caliber. The M16 with its higher muzzle speed can still be well classified, but the M4 and M249 weapons seem practically indistinguishable (Figure 11). However, this may be partially due to the limited number of practice shots we were able to take before the actual testing began. More training data may reveal better separation between the two weapons since their published muzzle velocities do differ somewhat. The system carries out weapon classification in the following manner. Once the trajectory is known, the speed can be calculated for each sensor based on the shockwave geometry. To evaluate a shot, we choose the weapon type whose deceleration function results in the smallest RMS error of the estimated range-speed pairs for the estimated caliber class.

7. RESULTS
An independent evaluation of the system was carried out by a team from NIST at the US Army Aberdeen Test Center in April 2006 [19]. The experiment was set up on a shooting range with mock-up wooden buildings and walls for supporting elevated shooter positions and generating multipath effects. Figure 12 shows the user interface with an aerial photograph of the site. 10 sensor nodes were deployed on surveyed points in an approximately 30×30 m area. There were five fixed targets behind the sensor network. Several firing positions were located at each of the firing lines at 50, 100, 200 and 300 meters.
These positions were known to the evaluators, but not to the operators of the system. Six different weapons were utilized: AK47 and M240 firing 7.62 mm projectiles, M16, M4 and M249 with 5.56 mm ammunition, and the .50 caliber M107. Note that the sensors remained static during the test. The primary reason for this is that nobody is allowed downrange during live fire tests. Utilizing some kind of remote control platform would have been too involved for the limited time the range was available for the test. The experiment, therefore, did not test the mobility aspect of the system. During the one-day test, there were 196 shots fired. The results are summarized in Table 1. The system detected all shots successfully. Since a ballistic shockwave is a unique acoustic phenomenon, it makes the detection very robust. There were no false positives for shockwaves, but there were a handful of false muzzle blast detections due to parallel tests of artillery at a nearby range.

Shooter Range (m) | Localization Rate | Caliber Accuracy | Trajectory Azimuth Error (deg) | Trajectory Distance Error (m) | Distance Error (m) | No. of Shots
50  | 93%  | 100%  | 0.86 | 0.91 | 2.2  | 54
100 | 100% | 100%  | 0.66 | 1.34 | 8.7  | 54
200 | 96%  | 100%  | 0.74 | 2.71 | 32.8 | 54
300 | 97%  | 97%   | 1.49 | 6.29 | 70.6 | 34
All | 96%  | 99.5% | 0.88 | 2.47 | 23.0 | 196

Table 1: Summary of results fusing all available sensor observations. All shots were successfully detected, so the detection rate is omitted. Localization rate means the percentage of shots that the sensor fusion was able to estimate the trajectory of. The caliber accuracy rate is relative to the shots localized and not all the shots, because caliber estimation requires the trajectory. The trajectory error is broken down into azimuth in degrees and the actual distance of the shooter from the trajectory. The distance error shows the distance between the real shooter position and the estimated shooter position. As such, it includes the error caused by both the trajectory and that of the range estimation. Note that the traditional bearing and range measures are not good ones for a distributed system such as ours because of the lack of a single reference point.

Figure 12: The user interface of the system showing the experimental setup. The 10 sensor nodes are labeled by their ID and marked by dark circles. The targets are black squares marked T-1 through T-5. The long white arrows point to the shooter position estimated by each sensor. Where an arrow is missing, the corresponding sensor did not have enough detections to measure the AoA of either the muzzle blast, the shockwave or both. The thick black line and large circle indicate the estimated trajectory and the shooter position as estimated by fusing all available detections from the network. This shot from the 100-meter line at target T-3 was localized almost perfectly by the sensor network. The caliber and weapon were also identified correctly. 6 out of 10 nodes were able to estimate the location alone. Their bearing accuracy is within a degree, while the range is off by less than 10% in the worst case.

The localization rate characterizes the system's ability to successfully estimate the trajectory of shots. Since caliber estimation and weapon classification rely on the trajectory, non-localized shots are not classified either. There were 7 shots out of 196 that were not localized. The reason for missed shots is the trajectory ambiguity problem that occurs when the projectile passes on one side of all the sensors.
In this case, two significantly different trajectories can generate the same set of observations (see [8] and also Section 6.3). Instead of estimating which one is more likely or displaying both possibilities, we decided not to provide a trajectory at all. It is better not to give an answer other than a shot alarm than to mislead the soldier. Localization accuracy is broken down into trajectory accuracy and range estimation precision. The angle of the estimated trajectory was better than 1 degree except for the 300 m range. Since the range should not affect trajectory estimation as long as the projectile passes over the network, we suspect that the slightly worse angle precision for 300 m is due to the hurried shots we witnessed the soldiers take near the end of the day. This is also indicated by another datapoint: the estimated trajectory distance from the actual targets has an average error of 1.3 m for 300 m shots, 0.75 m for 200 m shots and 0.6 m for all but 300 m shots. As the distance between the targets and the sensor network was fixed, this number should not show a 2× improvement just because the shooter is closer. Since the angle of the trajectory itself does not characterize the overall error-there can be a translation also-Table 1 also gives the distance of the shooter from the estimated trajectory. These indicate an error which is about 1-2% of the range. To put this into perspective, a trajectory estimate for a 100 m shot will very likely go through or very near the window the shooter is located at. Again, we believe that the disproportionally larger errors at 300 m are due to human errors in aiming. As the ground truth was obtained by knowing the precise location of the shooter and the target, any inaccuracy in the actual trajectory directly adds to the perceived error of the system. We call the estimation of the shooter's position on the calculated trajectory range estimation, due to the lack of a better term. The range estimates are better than 5% accurate from 50 m and 10% for 100 m. However, this goes to 20% or worse for longer distances. We did not have a facility to test the system before the evaluation for ranges beyond 100 m. During the evaluation, we ran into the problem of mistaking shockwave echoes for muzzle blasts. These echoes reached the sensors before the real muzzle blast for long range shots only, since the projectile travels 2-3× faster than the speed of sound, so the time between the shockwave (and its possible echo from nearby objects) and the muzzle blast increases with increasing ranges. This resulted in underestimating the range, since the system measured shorter times than the real ones. Since the evaluation, we have fine-tuned the muzzle blast detection algorithm to avoid this problem.

Distance | M16  | AK47 | M240 | M107 | M4  | M249 | M4-M249
50m      | 100% | 100% | 100% | 100% | 11% | 25%  | 94%
100m     | 100% | 100% | 100% | 100% | 22% | 33%  | 100%
200m     | 100% | 100% | 100% | 100% | 50% | 22%  | 100%
300m     | 67%  | 100% | 83%  | 100% | 33% | 0%   | 57%
All      | 96%  | 100% | 97%  | 100% | 23% | 23%  | 93%

Table 2: Weapon classification results. The percentages are relative to the number of shots localized and not all shots, as the classification algorithm needs to know the trajectory and the range. Note that the difference is small; there were 189 shots localized out of the total 196.

The caliber and weapon estimation accuracy rates are based on the 189 shots that were successfully localized. Note that there was a single shot that was falsely classified by the caliber estimator. The 73% overall weapon classification accuracy does not seem impressive.
But if we break it down to the six different weapons tested, the picture changes dramatically, as shown in Table 2. For four of the weapons (AK47, M16, M240 and M107), the classification rate is almost 100%. There were only two shots out of approximately 140 that were missed. The M4 and M249 proved to be too similar and they were mistaken for each other most of the time. One possible explanation is that we had only a limited number of test shots taken with these weapons right before the evaluation and used the wrong deceleration approximation function. Either this or a similar mistake was made, since if we simply used the opposite of the system's answer where one of these weapons was indicated, the accuracy would have improved 3×. If we consider these two weapons a single weapon class, then the classification accuracy for it becomes 93%. Note that the AK47 and M240 have the same caliber (7.62 mm), just as the M16, M4 and M249 do (5.56 mm). That is, the system is able to differentiate between weapons of the same caliber. We are not aware of any system that classifies weapons this accurately.

7.1 Single sensor performance
As was shown previously, a single sensor alone is able to localize the shooter if it can determine both the muzzle blast and the shockwave AoA, that is, it needs to measure the ToA of both on at least three acoustic channels. While shockwave detection is independent of the range-unless the projectile becomes subsonic-the likelihood of muzzle blast detection beyond 150 meters is not enough for consistently getting at least three per sensor node for AoA estimation. Hence, we only evaluate the single sensor performance for the 104 shots that were taken from 50 and 100 m. Note that we use the same test data as in the previous section, but we evaluate individually for each sensor. Table 3 summarizes the results broken down by the ten sensors utilized. Since this is now not a distributed system, the results are given relative to the position of the given sensor, that is, a bearing and range estimate is provided. Note that many of the common error sources of the networked system do not play a role here. Time synchronization is not applicable. The sensor's absolute location is irrelevant (just as the relative location of multiple sensors). The sensor's orientation is still important though. There are several disadvantages of the single sensor case compared to the networked system: there is no redundancy to compensate for other errors and to perform outlier rejection, the localization rate is markedly lower, and a single sensor alone is not able to estimate the caliber or classify the weapon.

Sensor id     | 1    | 2    | 3    | 5    | 7    | 8    | 9    | 10   | 11   | 12
Loc. rate     | 44%  | 37%  | 53%  | 52%  | 19%  | 63%  | 51%  | 31%  | 23%  | 44%
Bearing (deg) | 0.80 | 1.25 | 0.60 | 0.85 | 1.02 | 0.92 | 0.73 | 0.71 | 1.28 | 1.44
Range (m)     | 3.2  | 6.1  | 4.4  | 4.7  | 4.6  | 4.6  | 4.1  | 5.2  | 4.8  | 8.2

Table 3: Single sensor accuracy for 108 shots fired from 50 and 100 meters. Localization rate refers to the percentage of shots the given sensor alone was able to localize. The bearing and range values are average errors. They characterize the accuracy of localization from the given sensor's perspective.

The data indicates that the performance of the sensors varied significantly, especially considering the localization rate. One factor has to be the location of the given sensor, including how far it was from the firing lines and how obstructed its view was. Also, the sensors were hand-built prototypes utilizing nowhere near production quality packaging/mounting.
In light of these factors, the overall average bearing error of 0.9 degrees and range error of 5 m with a microphone spacing of less than 10 cm are excellent. We believe that professional manufacturing and better microphones could easily achieve better performance than the best sensor in our experiment (>60% localization rate and 3 m range error). Interestingly, the largest error in range was a huge 90 m clearly due to some erroneous detection, yet the largest bearing error was less than 12 degrees which is still a good indication for the soldier where to look. The overall localization rate over all single sensors was 42%, while for 50 m shots only, this jumped to 61%. Note that the firing range was prepared to simulate an urban area to some extent: there were a few single- and two-storey wooden structures built both in and around the sensor deployment area and the firing lines. Hence, not all sensors had line-of-sight to all shooting positions. We estimate that 10% of the sensors had obstructed view to the shooter on average. Hence, we can claim that a given sensor had about 50% chance of localizing a shot within 130 m. (Since the sensor deployment area was 30 m deep, 100 m shots correspond to actual distances between 100 and 130 m.) Again, we emphasize that localization needs at least three muzzle blast and three shockwave detections out of a possible four for each per sensor. The detection rate for single sensors-corresponding to at least one shockwave detection per sensor-was practically 100%. 0% 10% 20% 30% 40% 50% 60% 70% 80% 90% 100% 0 1 2 3 4 5 6 7 8 9 10 number of sensors percentageofshots Figure 13: Histogram showing what fraction of the 104 shots taken from 50 and 100 meters were localized by at most how many individual sensors alone. 13% of the shots were missed by every single sensor, i.e., none of them had both muzzle blast and shockwave AoA detections. Note that almost all of these shots were still accurately localized by the networked system, i.e. the sensor fusion using all available observations in the sensor network. It would be misleading to interpret these results as the system missing half the shots. As soldiers never work alone and the sensor node is relatively cheap to afford having every soldier equipped with one, we also need to look at the overall detection rates for every shot. Figure 13 shows the histogram of the percentage of shots vs. the number of individual sensors that localized it. 13% of shots were not localized by any sensor alone, but 87% was localized by at least one sensor out of ten. 7.2 Error sources In this section, we analyze the most significant sources of error that affect the performance of the networked shooter localization and weapon classification system. In order to correlate the distributed observations of the acoustic events, the nodes need to have a common time and space reference. Hence, errors in the time synchronization, node localization and node orientation all degrade the overall accuracy of the system. 123 Our time synchronization approach yields errors significantly less than 100 microseconds. As the sound travels about 3 cm in that time, time synchronization errors have a negligible effect on the system. On the other hand, node location and orientation can have a direct effect on the overall system performance. Notice that to analyze this, we do not have to resort to simulation, instead we can utilize the real test data gathered at Aberdeen. 
But instead of using the real sensor locations known very accurately and the measured and calibrated almost perfect node orientations, we can add error terms to them and run the sensor fusion. This exactly replicates how the system would have performed during the test using the imprecisely known locations and orientations. Another aspect of the system performance that can be evaluated this way is the effect of the number of available sensors. Instead of using all ten sensors in the data fusion, we can pick any subset of the nodes to see how the accuracy degrades as we decrease the number of nodes. The following experiment was carried out. The number of sensors were varied from 2 to 10 in increments of 2. Each run picked the sensors randomly using a uniform distribution. At each run each node was randomly moved to a new location within a circle around its true position with a radius determined by a zero-mean Gaussian distribution. Finally, the node orientations were perturbed using a zeromean Gaussian distribution. Each combination of parameters were generated 100 times and utilized for all 196 shots. The results are summarized in Figure 14. There is one 3D barchart for each of the experiment sets with the given fixed number of sensors. The x-axis shows the node location error, that is, the standard deviation of the corresponding Gaussian distribution that was varied between 0 and 6 meters. The y-axis shows the standard deviation of the node orientation error that was varied between 0 and 6 degrees. The z-axis is the resulting trajectory azimuth error. Note that the elevation angles showed somewhat larger errors than the azimuth. Since all the sensors were in approximately a horizontal plane and only a few shooter positions were out of the same plane and only by 2 m or so, the test was not sufficient to evaluate this aspect of the system. There are many interesting observation one can make by analyzing these charts. Node location errors in this range have a small effect on accuracy. Node orientation errors, on the other hand, noticeably degrade the performance. Still the largest errors in this experiment of 3.5 degrees for 6 sensors and 5 degrees for 2 sensors are still very good. Note that as the location and orientation errors increase and the number of sensors decrease, the most significantly affected performance metric is the localization rate. See Table 4 for a summary. Successful localization goes down from almost 100% to 50% when we go from 10 sensors to 2 even without additional errors. This is primarily caused by geometry: for a successful localization, the bullet needs to pass over the sensor network, that is, at least one sensor should be on the side of the trajectory other than the rest of the nodes. (This is a simplification for illustrative purposes. If all the sensors and the trajectory are not coplanar, localization may be successful even if the projectile passes on one side of the network. See Section 6.3.) As the numbers of sensors decreased in the experiment by randomly selecting a subset, the probability of trajectories abiding by this rule decreased. 
This also means that even if there are many sensors (i.e. soldiers), but all of them are right next to each other, the localization rate will suffer. However, when the sensor fusion does provide a result, it is still accurate even with few available sensors and relatively large individual errors. A very few consistent observations lead to good accuracy, as the inconsistent ones are discarded by the algorithm. This is also supported by the observation that for the cases with the higher number of sensors (8 or 10), the localization rate is hardly affected by even large errors.

Figure 14: The effect of node localization and orientation errors on azimuth accuracy with 2, 4, 6 and 8 nodes (azimuth error in degrees over position errors of 0-6 m and orientation errors of 0-6 degrees). Note that the chart for 10 nodes is almost identical to the 8-node case, hence, it is omitted.

Errors/Sensors | 2   | 4   | 6   | 8   | 10
0 m, 0 deg     | 54% | 87% | 94% | 95% | 96%
2 m, 2 deg     | 53% | 80% | 91% | 96% | 96%
6 m, 0 deg     | 43% | 79% | 88% | 94% | 94%
0 m, 6 deg     | 44% | 78% | 90% | 93% | 94%
6 m, 6 deg     | 41% | 73% | 85% | 89% | 92%

Table 4: Localization rate as a function of the number of sensors used and the sensor node location and orientation errors.

One of the most significant observations on Figure 14 and Table 4 is that there is hardly any difference in the data for 6, 8 and 10 sensors. This means that there is little advantage in adding more nodes beyond 6 sensors as far as the accuracy is concerned. The speed of sound depends on the ambient temperature. The current prototype considers it a constant that is typically set before a test. It would be straightforward to employ a temperature sensor to update the value of the speed of sound periodically during operation. Note also that wind may adversely affect the accuracy of the system. The sensor fusion, however, could incorporate wind speed into its calculations. It would be more complicated than temperature compensation, but could be done. Other practical issues also need to be looked at before a real world deployment. Silencers reduce the muzzle blast energy and hence the effective range at which the system can detect it. However, silencers do not affect the shockwave, and the system would still detect the trajectory and caliber accurately. The range and weapon type could not be estimated without muzzle blast detections. Subsonic weapons do not produce a shockwave. However, this is not of great significance, since they have shorter range, lower accuracy and much less lethality. Hence, their use is not widespread and they pose less danger in any case. Another issue is the type of ammunition used. Irregular armies may use substandard, even hand-manufactured bullets. This affects the muzzle velocity of the weapon. For weapon classification to work accurately, the system would need to be calibrated with the typical ammunition used by the given adversary.

8. RELATED WORK
Acoustic detection and recognition has been under research since the early fifties. The area has a close relevance to the topic of supersonic flow mechanics [20]. Fansler analyzed the complex near-field pressure waves that occur within a foot of the muzzle blast.
Fansler's work gives a good idea of the ideal muzzle blast pressure wave without contamination from echoes or propagation effects [4]. Experiments at greater distances from the muzzle were conducted by Stoughton [18]: measurements of the ballistic shockwaves of 5.56 mm and 7.62 mm projectiles were made using calibrated pressure transducers at known locations, with measured bullet speeds and miss distances of 355 meters. Results indicate that ground interaction becomes a problem for miss distances of 30 meters or larger.
Another area of research is the signal processing of gunfire acoustics. The focus is on the robust detection and length estimation of small-caliber acoustic shockwaves and muzzle blasts. Possible techniques for classifying signals as either shockwaves or muzzle blasts include the short-time Fourier Transform (STFT), the Smoothed Pseudo Wigner-Ville Distribution (SPWVD), and the Discrete Wavelet Transform (DWT). Joint time-frequency (JTF) spectrograms are used to analyze the typical separation of the shockwave and muzzle blast transients in both time and frequency. Mays concludes that the DWT is the best method for classifying signals as either shockwaves or muzzle blasts because it works well and is less expensive to compute than the SPWVD [10]. The edges of the shockwave are typically well defined and the shockwave length is directly related to the bullet characteristics. A paper by Sadler [14] compares two shockwave edge detection methods: a simple gradient-based detector and a multi-scale wavelet detector. It also demonstrates how the length of the shockwave, as determined by the edge detectors, can be used along with Whitham's equations [20] to estimate the caliber of a projectile. Note that the available computational performance on the sensor nodes, the limited wireless bandwidth and the real-time requirements render these approaches infeasible on our platform.
A related topic is the research and development of experimental and prototype shooter location systems. Researchers at BBN have developed the Bullet Ears system [3], which can be installed in a fixed position or worn by soldiers. The fixed system has tetrahedron-shaped microphone arrays with 1.5 meter spacing. The overall system consists of two to three of these arrays spaced 20 to 100 meters from each other. The soldier-worn system has 12 microphones as well as a GPS antenna and orientation sensors mounted on a helmet. There is a low-speed RF connection from the helmet to the processing body. An extensive test has been conducted to measure the performance of both types of systems. The fixed system's performance was one order of magnitude better in the angle calculations, while the range performance of the two was matched. The angle accuracy of the fixed system was predominantly less than one degree, while it was around five degrees for the helmet-mounted one. The range accuracy was around 5 percent for both systems. The problem with this and similar centralized systems is that the one microphone array, or handful of them, needs to be in line-of-sight of the shooter. A sensor network-based solution has the advantage of widely distributed sensing for better coverage, multipath effect compensation and multiple simultaneous shot resolution [8]. This is especially important for operation in acoustically reverberant urban areas. Note that BBN's current vehicle-mounted system called BOOMERANG, a modified version of Bullet Ears, is currently used in Iraq [1].
The company ShotSpotter specializes in law enforcement systems that report the location of gunfire to police within seconds. The goal of these systems is significantly different from that of military systems. ShotSpotter reports 25 m typical accuracy, which is more than enough for police to respond. They are also manufacturing experimental soldier-wearable and UAV-mounted systems for military use [16], but no specifications or evaluation results are publicly available.
9. CONCLUSIONS
The main contribution of this work is twofold. First, the performance of the overall distributed networked system is excellent. Most noteworthy are the trajectory accuracy of one degree, the correct caliber estimation rate of well over 90% and the close to 100% weapon classification rate for 4 of the 6 weapons tested. The system proved to be very robust when increasing the node location and orientation errors and decreasing the number of available sensors all the way down to a couple. The key factor behind this is the sensor fusion algorithm's ability to reject erroneous measurements. It is also worth mentioning that the results presented here correspond to the first and only test of the system beyond 100 m and with six different weapons. We believe that with the lessons learned in the test, a consecutive field experiment could have shown significantly improved results, especially in range estimation beyond 100 m and in weapon classification for the remaining two weapons that were mistaken for each other most of the time during the test.
Second, the performance of the system when used in standalone mode, that is, when single sensors alone provided localization, was also very good. While the overall localization rate of 42% per sensor for shots up to 130 m could be improved, the bearing accuracy of less than a degree and the average 5% range error are remarkable for the handmade prototypes of the low-cost nodes. Note that 87% of the shots were successfully localized by at least one of the ten sensors utilized in standalone mode.
We believe that the technology is mature enough that the next revision of the system could be a commercial one. However, important aspects of the system would still need to be worked on. We have not addressed power management yet. A current node runs on 4 AA batteries for about 12 hours of continuous operation. A deployable version of the sensor node would need to be asleep during normal operation and only wake up when an interesting event occurs. An analog trigger circuit could solve this problem; however, the system would miss the first shot. Instead, the acoustic channels would need to be sampled and stored in a circular buffer while the rest of the board is turned off. When a trigger wakes up the board, the acoustic data would be immediately available. Experiments with a previous generation sensor board indicated that this could provide a 10x increase in battery life. Other outstanding issues include weatherproof packaging and ruggedization, as well as integration with current military infrastructure.
10. REFERENCES
[1] BBN Technologies website. http://www.bbn.com.
[2] E. Danicki. Acoustic sniper localization. Archives of Acoustics, 30(2):233-245, 2005.
[3] G. L. Duckworth et al. Fixed and wearable acoustic counter-sniper systems for law enforcement. In E. M. Carapezza and D. B. Law, editors, Proc. SPIE Vol. 3577, pages 210-230, Jan. 1999.
[4] K. Fansler. Description of muzzle blast by modified scaling models. Shock and Vibration, 5(1):1-12, 1998.
[5] D. Gay, P. Levis, R. von Behren, M. Welsh, E. Brewer, and D. Culler. The nesC language: a holistic approach to networked embedded systems. Proceedings of Programming Language Design and Implementation (PLDI), June 2003.
[6] J. Hill, R. Szewczyk, A. Woo, S. Hollar, D. Culler, and K. Pister. System architecture directions for networked sensors. In Proc. of ASPLOS 2000, Nov. 2000.
[7] B. Kusý, G. Balogh, P. Völgyesi, J. Sallai, A. Nádas, A. Lédeczi, M. Maróti, and L. Meertens. Node-density independent localization. Information Processing in Sensor Networks (IPSN 06) SPOTS Track, Apr. 2006.
[8] A. Lédeczi, A. Nádas, P. Völgyesi, G. Balogh, B. Kusý, J. Sallai, G. Pap, S. Dóra, K. Molnár, M. Maróti, and G. Simon. Countersniper system for urban warfare. ACM Transactions on Sensor Networks, 1(1):153-177, Nov. 2005.
[9] M. Maróti. Directed flood-routing framework for wireless sensor networks. In Proceedings of the 5th ACM/IFIP/USENIX International Conference on Middleware, pages 99-114, New York, NY, USA, 2004. Springer-Verlag New York, Inc.
[10] B. Mays. Shockwave and muzzle blast classification via joint time frequency and wavelet analysis. Technical report, Army Research Lab, Adelphi, MD 20783-1197, Sept. 2001.
[11] TinyOS Hardware Platforms. http://tinyos.net/scoop/special/hardware.
[12] Crossbow MICAz (MPR2400) Radio Module. http://www.xbow.com/Products/productsdetails.aspx?sid=101.
[13] PicoBlaze User Resources. http://www.xilinx.com/ipcenter/processor_central/picoblaze/picoblaze_user_resources.htm.
[14] B. M. Sadler, T. Pham, and L. C. Sadler. Optimal and wavelet-based shock wave detection and estimation. Acoustical Society of America Journal, 104:955-963, Aug. 1998.
[15] J. Sallai, B. Kusý, A. Lédeczi, and P. Dutta. On the scalability of routing-integrated time synchronization. 3rd European Workshop on Wireless Sensor Networks (EWSN 2006), Feb. 2006.
[16] ShotSpotter website. http://www.shotspotter.com/products/military.html.
[17] G. Simon, M. Maróti, A. Lédeczi, G. Balogh, B. Kusý, A. Nádas, G. Pap, J. Sallai, and K. Frampton. Sensor network-based countersniper system. In SenSys '04: Proceedings of the 2nd international conference on Embedded networked sensor systems, pages 1-12, New York, NY, USA, 2004. ACM Press.
[18] R. Stoughton. Measurements of small-caliber ballistic shock waves in air. Acoustical Society of America Journal, 102:781-787, Aug. 1997.
[19] B. A. Weiss, C. Schlenoff, M. Shneier, and A. Virts. Technology evaluations and performance metrics for soldier-worn sensors for ASSIST. In Performance Metrics for Intelligent Systems Workshop, Aug. 2006.
[20] G. Whitham. Flow pattern of a supersonic projectile. Communications on Pure and Applied Mathematics, 5(3):301, 1952.
Shooter Localization and Weapon Classification with Soldier-Wearable Networked Sensors
ABSTRACT
The paper presents a wireless sensor network-based mobile countersniper system. A sensor node consists of a helmet-mounted microphone array, a COTS MICAz mote for internode communication and a custom sensorboard that implements the acoustic detection and Time of Arrival (ToA) estimation algorithms on an FPGA. A 3-axis compass provides self orientation and Bluetooth is used for communication with the soldier's PDA running the data fusion and the user interface. The heterogeneous sensor fusion algorithm can work with data from a single sensor or it can fuse ToA or Angle of Arrival (AoA) observations of muzzle blasts and ballistic shockwaves from multiple sensors. The system estimates the trajectory, the range, the caliber and the weapon type. The paper presents the system design and the results from an independent evaluation at the US Army Aberdeen Test Center. The system performance is characterized by 1-degree trajectory precision and over 95% caliber estimation accuracy for all shots, and close to 100% weapon estimation accuracy for 4 out of 6 guns tested.
1. INTRODUCTION
The importance of countersniper systems is underscored by the constant stream of news reports coming from the Middle East. In October 2006 CNN reported on a new tactic employed by insurgents. A mobile sniper team moves around busy city streets in a car, positions itself at a good standoff distance from dismounted US military personnel, takes a single well-aimed shot and immediately melts into the city traffic. By the time the soldiers can react, they are gone. A countersniper system that provides almost immediate shooter location to every soldier in the vicinity would provide clear benefits to the warfighters.
Our team introduced PinPtr, the first sensor network-based countersniper system [17, 8], in 2003. The system is based on potentially hundreds of inexpensive sensor nodes deployed in the area of interest forming an ad hoc multihop network. The acoustic sensors measure the Time of Arrival (ToA) of muzzle blasts and ballistic shockwaves, pressure waves induced by the supersonic projectile, and send the data to a base station, where a sensor fusion algorithm determines the origin of the shot. PinPtr is characterized by high precision: 1 m average 3D accuracy for shots originating within or near the sensor network, 1 degree bearing precision for both azimuth and elevation, and 10% accuracy in range estimation for longer range shots. The truly unique characteristic of the system is that it works in such reverberant environments as cluttered urban terrain and that it can resolve multiple simultaneous shots at the same time. This capability is due to the widely distributed sensing and the unique sensor fusion approach [8]. The system has been tested several times in US Army MOUT (Military Operations in Urban Terrain) facilities.
The obvious disadvantage of such a system is its static nature. Once the sensors are distributed, they cover a certain area. Depending on the operation, the deployment may be needed for an hour or a month, but eventually the area loses its importance. It is not practical to gather and reuse the sensors, especially under combat conditions. Even if the sensors are cheap, it is still a waste and a logistical problem to provide a continuous stream of sensors as the operations move from place to place.
As it is primarily the soldiers that the system protects, a natural extension is to mount the sensors on the soldiers themselves. While there are vehiclemounted countersniper systems [1] available commercially, we are not aware of a deployed system that protects dismounted soldiers. A helmet-mounted system was developed in the mid 90s by BBN [3], but it was not continued beyond the Darpa program that funded it. To move from a static sensor network-based solution to a highly mobile one presents significant challenges. The sensor positions and orientation need to be constantly monitored. As soldiers may work in groups of as little as four people, the number of sensors measuring the acoustic phenomena may be an order of magnitude smaller than before. Moreover, the system should be useful to even a single soldier. Finally, additional requirements called for caliber estimation and weapon classification in addition to source localization. The paper presents the design and evaluation of our soldierwearable mobile countersniper system. It describes the hardware and software architecture including the custom sensor board equipped with a small microphone array and connected to a COTS MICAz mote [12]. Special emphasis is paid to the sensor fusion technique that estimates the trajectory, range, caliber and weapon type simultaneously. The results and analysis of an independent evaluation of the system at the US Army Aberdeen Test Center are also presented. 2. APPROACH The firing of a typical military rifle, such as the AK47 or M16, produces two distinct acoustic phenomena. The muzzle blast is generated at the muzzle of the gun and travels at the speed sound. The supersonic projectile generates an acoustic shockwave, a kind of sonic boom. The wavefront has a conical shape, the angle of which depends on the Mach number, the speed of the bullet relative to the speed of sound. The shockwave has a characteristic shape resembling a capital N. The rise time at both the start and end of the signal is very fast, under 1 µsec. The length is determined by the caliber and the miss distance, the distance between the trajectory and the sensor. It is typically a few hundred µsec. Once a trajectory estimate is available, the shockwave length can be used for caliber estimation. Our system is based on four microphones connected to a sensorboard. The board detects shockwaves and muzzle blasts and measures their ToA. If at least three acoustic channels detect the same event, its AoA is also computed. If both the shockwave and muzzle blast AoA are available, a simple analytical solution gives the shooter location as shown in Section 6. As the microphones are close to each other, typically 2-4", we cannot expect very high precision. Also, this method does not estimate a trajectory. In fact, an infinite number of trajectory-bullet speed pairs satisfy the observations. However, the sensorboards are also connected to COTS MICAz motes and they share their AoA and ToA measurements, as well as their own location and orientation, with each other using a multihop routing service [9]. A hybrid sensor fusion algorithm then estimates the trajectory, the range, the caliber and the weapon type based on all available observations. The sensorboard is also Bluetooth capable for communication with the soldier's PDA or laptop computer. A wired USB connection is also available. The sensorfusion algorithm and the user interface get their data through one of these channels. 
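To make the shared data concrete, the information each node contributes to the fusion can be thought of as a small per-event record: times of arrival in a common time base, an AoA unit vector when at least three channels detected the event, the event features, and the node's own position and orientation. The record below is purely illustrative; the field names and layout are assumptions, not the actual message format used by the motes.

    /** Illustrative per-node observation record (not the actual packet layout). */
    public class AcousticObservation {
        public enum EventType { MUZZLE_BLAST, SHOCKWAVE }

        public int nodeId;
        public EventType type;

        /** Time of arrival, already converted to the network-wide time base (microseconds). */
        public long toaMicros;

        /** AoA unit vector in the node's local frame, or null when fewer than three of
         *  the four channels detected the event. */
        public double[] aoaUnitVector;

        /** Event-specific feature, e.g. the shockwave length in microseconds that the
         *  fusion later uses for caliber estimation. */
        public double shockwaveLengthMicros;

        /** The reporting node's own position (GPS) and orientation (digital compass),
         *  needed to rotate the local AoA into a common global frame. */
        public double[] positionMeters;        // {x, y, z}
        public double azimuthDeg, pitchDeg, rollDeg;
    }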
The orientation of the microphone array at the time of detection is provided by a 3-axis digital compass. Currently the system assumes that the soldier's PDA is GPS-capable and it does not provide self localization service itself. However, the accuracy of GPS is a few meters degrading the Figure 1: Acoustic sensorboard/mote assembly. overall accuracy of the system. Refer to Section 7 for an analysis. The latest generation sensorboard features a Texas Instruments CC-1000 radio enabling the high-precision radio interferometric self localization approach we have developed separately [7]. However, we leave the integration of the two technologies for future work. 3. HARDWARE Since the first static version of our system in 2003, the sensor nodes have been built upon the UC Berkeley/Crossbow MICA product line [11]. Although rudimentary acoustic signal processing can be done on these microcontroller-based boards, they do not provide the required computational performance for shockwave detection and angle of arrival measurements, where multiple signals from different microphones need to be processed in parallel at a high sampling rate. Our 3rd generation sensorboard is designed to be used with MICAz motes--in fact it has almost the same size as the mote itself (see Figure 1). The board utilizes a powerful Xilinx XC3S1000 FPGA chip with various standard peripheral IP cores, multiple soft processor cores and custom logic for the acoustic detectors (Figure 2). The onboard Flash (4MB) and PSRAM (8MB) modules allow storing raw samples of several acoustic events, which can be used to build libraries of various acoustic signatures and for refining the detection cores off-line. Also, the external memory blocks can store program code and data used by the soft processor cores on the FPGA. The board supports four independent analog channels sampled at up to 1 MS/s (million samples per seconds). These channels, featuring an electret microphone (Panasonic WM64PNT), amplifiers with controllable gain (30-60 dB) and a 12-bit serial ADC (Analog Devices AD7476), reside on separate tiny boards which are connected to the main sensorboard with ribbon cables. This partitioning enables the use of truly different audio channels (eg.: slower sampling frequency, different gain or dynamic range) and also results in less noisy measurements by avoiding long analog signal paths. The sensor platform offers a rich set of interfaces and can be integrated with existing systems in diverse ways. An RS232 port and a Bluetooth (BlueGiga WT12) wireless link with virtual UART emulation are directly available on the board and provide simple means to connect the sensor to PCs and PDAs. The mote interface consists of an I2C bus along with an interrupt and GPIO line (the latter one is used Figure 2: Block diagram of the sensorboard. for precise time synchronization between the board and the mote). The motes are equipped with IEEE 802.15.4 compliant radio transceivers and support ad-hoc wireless networking among the nodes and to/from the base station. The sensorboard also supports full-speed USB transfers (with custom USB dongles) for uploading recorded audio samples to the PC. The on-board JTAG chain--directly accessible through a dedicated connector--contains the FPGA part and configuration memory and provides in-system programming and debugging facilities. 
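A quick back-of-the-envelope calculation shows why the raw samples have to be reduced to time stamps and feature vectors on the board instead of being streamed to the mote. Assuming the nominal 250 kbit/s rate of an IEEE 802.15.4 radio (an assumption; the text only states compliance with the standard), the four channels produce roughly two hundred times more data than the radio's nominal bit rate:

    4 \;\text{channels} \times 1\,\mathrm{MS/s} \times 12\,\mathrm{bit/sample} = 48\,\mathrm{Mbit/s} \gg 250\,\mathrm{kbit/s}.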
The integrated Honeywell HMR3300 digital compass module provides heading, pitch and roll information with 1 ◦ accuracy, which is essential for calculating and combining directional estimates of the detected events. Due to the complex voltage requirements of the FPGA, the power supply circuitry is implemented on the sensorboard and provides power both locally and to the mote. We used a quad pack of rechargeable AA batteries as the power source (although any other configuration is viable that meets the voltage requirements). The FPGA core (1.2 V) and I/O (3.3 V) voltages are generated by a highly efficient buck switching regulator. The FPGA configuration (2.5 V) and a separate 3.3 V power net are fed by low current LDOs, the latter one is used to provide independent power to the mote and to the Bluetooth radio. The regulators--except the last one--can be turned on/off from the mote or through the Bluetooth radio (via GPIO lines) to save power. The first prototype of our system employed 10 sensor nodes. Some of these nodes were mounted on military kevlar helmets with the microphones directly attached to the surface at about 20 cm separation as shown in Figure 3 (a). The rest of the nodes were mounted in plastic enclosures (Figure 3 (b)) with the microphones placed near the corners of the boxes to form approximately 5 cm × 10 cm rectangles. 4. SOFTWARE ARCHITECTURE The sensor application relies on three subsystems exploiting three different computing paradigms as they are shown in Figure 4. Although each of these execution models suit their domain specific tasks extremely well, this diversity Figure 3: Sensor prototypes mounted on a kevlar helmet (a) and in a plastic box on a tripod (b). presents a challenge for software development and system integration. The sensor fusion and user interface subsystem is running on PDAs and were implemented in Java. The sensing and signal processing tasks are executed by an FPGA, which also acts as a bridge between various wired and wireless communication channels. The ad-hoc internode communication, time synchronization and data sharing are the responsibilities of a microcontroller based radio module. Similarly, the application employs a wide variety of communication protocols such as Bluetooth ® and IEEE 802.14.5 wireless links, as well as optional UARTs, I2C and/or USB buses. Figure 4: Software architecture diagram. The sensor fusion module receives and unpacks raw measurements (time stamps and feature vectors) from the sensorboard through the Bluetooth ® link. Also, it fine tunes the execution of the signal processing cores by setting parameters through the same link. Note that measurements from other nodes along with their location and orientation information also arrive from the sensorboard which acts as a gateway between the PDA and the sensor network. The handheld device obtains its own GPS location data and di rectly receives orientation information through the sensorboard. The results of the sensor fusion are displayed on the PDA screen with low latency. Since, the application is implemented in pure Java, it is portable across different PDA platforms. The border between software and hardware is considerably blurred on the sensor board. The IP cores--implemented in hardware description languages (HDL) on the reconfigurable FPGA fabric--closely resemble hardware building blocks. However, some of them--most notably the soft processor cores--execute true software programs. 
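The division of labor described above can be pictured from the PDA side as a simple receive loop: framed messages arrive on the Bluetooth serial stream (both the local node's detections and those forwarded by other nodes, since the sensorboard acts as a gateway), get unpacked and are handed to the fusion. The sketch reuses the illustrative AcousticObservation record from above and invents a trivial wire format; neither corresponds to the actual implementation.

    import java.io.DataInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    /** Illustrative PDA-side receive loop (not the actual Java implementation). */
    public class FusionFrontEnd {
        public interface Fusion { void addObservation(AcousticObservation obs); }
        public interface Display { void show(String status); }

        public static void run(InputStream bluetoothIn, Fusion fusion, Display display)
                throws IOException {
            DataInputStream in = new DataInputStream(bluetoothIn);
            while (true) {
                AcousticObservation obs = unpack(in);   // blocks until one message is read
                fusion.addObservation(obs);
                // The real system runs the fusion once a shot's observations are in
                // (or a timeout expires); here we only refresh a status line.
                display.show("last event: node " + obs.nodeId + ", " + obs.type);
            }
        }

        /** Hypothetical wire format: nodeId, event type, ToA; other fields omitted. */
        private static AcousticObservation unpack(DataInputStream in) throws IOException {
            AcousticObservation obs = new AcousticObservation();
            obs.nodeId = in.readUnsignedByte();
            obs.type = (in.readUnsignedByte() == 0)
                    ? AcousticObservation.EventType.MUZZLE_BLAST
                    : AcousticObservation.EventType.SHOCKWAVE;
            obs.toaMicros = in.readLong();
            return obs;
        }
    }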
The primary tasks of the sensor board software are 1) acquiring data samples from the analog channels, 2) processing acoustic data (detection), and 3) providing access to the results and run-time parameters through different interfaces. As it is shown in Figure 4, a centralized virtual register file contains the address decoding logic, the registers for storing parameter values and results and the point to point data buses to and from the peripherals. Thus, it effectively integrates the building blocks within the sensorboard and decouples the various communication interfaces. This architecture enabled us to deploy the same set of sensors in a centralized scenario, where the ad-hoc mote network (using the I2C interface) collected and forwarded the results to a base station or to build a decentralized system where the local PDAs execute the sensor fusion on the data obtained through the Bluetooth ® interface (and optionally from other sensors through the mote interface). The same set of registers are also accessible through a UART link with a terminal emulation program. Also, because the low-level interfaces are hidden by the register file, one can easily add/replace these with new ones (eg.: the first generation of motes supported a standard µP interface bus on the sensor connector, which was dropped in later designs). The most important results are the time stamps of the detected events. These time stamps and all other timing information (parameters, acoustic event features) are based on a 1 MHz clock and an internal timer on the FPGA. The time conversion and synchronization between the sensor network and the board is done by the mote by periodically requesting the capture of the current timer value through a dedicated GPIO line and reading the captured value from the register file through the I2C interface. Based on the the current and previous readings and the corresponding mote local time stamps, the mote can calculate and maintain the scaling factor and offset between the two time domains. The mote interface is implemented by the I2C slave IP core and a thin adaptation layer which provides a data and address bus abstraction on top of it. The maximum effective bandwidth is 100 Kbps through this interface. The FPGA contains several UART cores as well: for communicating with the on-board Bluetooth ® module, for controlling the digital compass and for providing a wired RS232 link through a dedicated connector. The control, status and data registers of the UART modules are available through the register file. The higher level protocols on these lines are implemented by Xilinx PicoBlaze microcontroller cores [13] and corresponding software programs. One of them provides a command line interface for test and debug purposes, while the other is responsible for parsing compass readings. By default, they are connected to the RS232 port and to the on-board digital compass line respectively, however, they can be rewired to any communication interface by changing the register file base address in the programs (e.g. the command line interface can be provided through the Bluetooth ® channel). Two of the external interfaces are not accessible through the register file: a high speed USB link and the SRAM interface are tied to the recorder block. The USB module implements a simple FIFO with parallel data lines connected to an external FT245R USB device controller. The RAM driver implements data read/write cycles with correct timing and is connected to the on-board pseudo SRAM. 
These interfaces provide 1 MB/s effective bandwidth for downloading recorded audio samples, for example. The data acquisition and signal processing paths exhibit clear symmetry: the same set of IP cores are instantiated four times (i.e. the number of acoustic channels) and run independently. The signal paths" meet" only just before the register file. Each of the analog channels is driven by a serial A/D core for providing a 20 MHz serial clock and shifting in 8-bit data samples at 1 MS/s and a digital potentiometer driver for setting the required gain. Each channel has its own shockwave and muzzle blast detector, which are described in Section 5. The detectors fetch run-time parameter values from the register file and store their results there as well. The coordinator core constantly monitors the detection results and generates a mote interrupt promptly upon" full detection" or after a reasonable timeout after" partial detection". The recorder component is not used in the final deployment, however, it is essential for development purposes for refining parameter values for new types of weapons or for other acoustic sources. This component receives the samples from all channels and stores them in circular buffers in the PSRAM device. If the signal amplitude on one of the channels crosses a predefined threshold, the recorder component suspends the sample collection with a predefined delay and dumps the contents of the buffers through the USB link. The length of these buffers and delays, the sampling rate, the threshold level and the set of recorded channels can be (re) configured run-time through the register file. Note that the core operates independently from the other signal processing modules, therefore, it can be used to validate the detection results off-line. The FPGA cores are implemented in VHDL, the PicoBlaze programs are written in assembly. The complete configuration occupies 40% of the resources (slices) of the FPGA and the maximum clock speed is 30 MHz, which is safely higher than the speed used with the actual device (20MHz). The MICAz motes are responsible for distributing measurement data across the network, which drastically improves the localization and classification results at each node. Besides a robust radio (MAC) layer, the motes require two essential middleware services to achieve this goal. The messages need to be propagated in the ad-hoc multihop network using a routing service. We successfully integrated the Directed Flood-Routing Framework (DFRF) [9] in our application. Apart from automatic message aggregation and efficient buffer management, the most unique feature of DFRF is its plug-in architecture, which accepts custom routing policies. Routing policies are state machines that govern how received messages are stored, resent or discarded. Example policies include spanning tree routing, broadcast, geographic routing, etc. . Different policies can be used for different messages concurrently, and the application is able to change the underlying policies at run-time (eg.: because of the changing RF environment or power budget). In fact, we switched several times between a simple but lavish broadcast policy and a more efficient gradient routing on the field. Correlating ToA measurements requires a common time base and precise time synchronization in the sensor network. 
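Before this network-wide synchronization, each mote must already have mapped the sensorboard's 1 MHz timer onto its own clock using the periodically captured timer values described in the previous section. A minimal sketch of that linear mapping is shown below; the class name and the simple two-point skew estimate are assumptions, and the deployed code may filter the capture pairs differently.

    /**
     * Illustrative linear clock mapping between the sensorboard's 1 MHz timer and
     * the mote's local clock, refreshed from periodic GPIO-triggered capture pairs.
     */
    public class ClockMapping {
        private long prevBoard, prevMote;   // previous capture pair
        private double skew = 1.0;          // mote ticks per board tick
        private double offset = 0.0;        // moteTime - skew * boardTime
        private boolean initialized = false;

        /** Call whenever the mote captures the board timer and reads it over I2C. */
        public void update(long boardTime, long moteTime) {
            if (initialized && boardTime != prevBoard) {
                skew = (double) (moteTime - prevMote) / (double) (boardTime - prevBoard);
            }
            offset = moteTime - skew * boardTime;
            prevBoard = boardTime;
            prevMote = moteTime;
            initialized = true;
        }

        /** Convert a board-side event time stamp (e.g. a ToA) to mote time. */
        public long boardToMote(long boardTime) {
            return Math.round(skew * boardTime + offset);
        }
    }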
The Routing Integrated Time Synchronization (RITS) [15] protocol relies on very accurate MAC-layer time-stamping to embed in the message itself the cumulative delay that a data message has accrued since the time of the detection. That is, every node measures the time the message spent there and adds this to the number in the time delay slot of the message, right before it leaves the current node. Every receiving node can subtract the delay from its current time to obtain the detection time in its local time reference. The service provides very accurate time conversion (a few µs per hop error), which is more than adequate for this application. Note that the motes also need to convert the sensorboard time stamps to mote time, as described earlier. The mote application is implemented in nesC [5] and is running on top of TinyOS [6]. With its 3 KB RAM and 28 KB program space (ROM) requirement, it easily fits on the MICAz motes.
5. DETECTION ALGORITHM
There are several characteristics of acoustic shockwaves and muzzle blasts which distinguish their detection and signal processing algorithms from regular audio applications. Both events are transient by their nature and present very intense stimuli to the microphones. This is increasingly problematic with low-cost electret microphones designed for picking up regular speech or music. Although mechanical damping of the microphone membranes can mitigate the problem, this approach is not without side effects. The detection algorithms have to be robust enough to handle severe nonlinear distortion and transitory oscillations. Since the muzzle blast signature closely follows the shockwave signal and because of potential automatic weapon bursts, it is extremely important to settle the audio channels and the detection logic as soon as possible after an event. Also, precise angle of arrival estimation necessitates a high sampling frequency (in the MHz range) and accurate event detection. Moreover, the detection logic needs to process multiple channels in parallel (4 channels on our existing hardware).
These requirements dictated simple and robust algorithms both for muzzle blast and shockwave detections. Instead of using mundane energy detectors, which might not be able to distinguish the two different events, the applied detectors strive to find the most important characteristics of the two signals in the time domain using simple state machine logic. The detectors are implemented as independent IP cores within the FPGA, one pair for each channel. The cores are run-time configurable and provide detection event signals with high precision time stamps and event specific feature vectors. Although the cores are running independently and in parallel, a crude local fusion module integrates them by shutting down those cores which missed their events after a reasonable timeout and by generating a single detection message towards the mote. At this point, the mote can read and forward the detection times and features and is responsible for restarting the cores afterwards.
The most conspicuous characteristics of an acoustic shockwave (see Figure 5 (a)) are the steep rising edges at the beginning and end of the signal. Also, the length of the N-wave is fairly predictable, as described in Section 6.5, and is relatively short (200-300 µs).
Figure 5: Shockwave signal generated by a 5.56 × 45 mm NATO projectile (a) and the state machine of the detection algorithm (b).
The shockwave detection core is continuously looking for two rising edges within a given interval.
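The two-rising-edge rule can be modeled in software as a small state machine. The deployed detector is a VHDL core; the Java model below only mirrors its logic under an assumed interpretation of the parameters discussed next (a rise of at least D counts within E samples qualifies as a steep edge, and the two edges must be between Lmin and Lmax samples apart).

    /** Software model of the shockwave detector (the deployed core is VHDL on the FPGA).
     *  The edge rule and parameter semantics here are assumptions for illustration. */
    public class ShockwaveDetector {
        private final int d, e, lMin, lMax;
        private final short[] ring;               // last E samples
        private int idx = 0, count = 0;
        private int samplesSinceFirstEdge = -1;   // -1: no first edge armed

        public ShockwaveDetector(int d, int e, int lMin, int lMax) {
            this.d = d; this.e = e; this.lMin = lMin; this.lMax = lMax;
            this.ring = new short[e];
        }

        /** Feed one sample; returns the N-wave length in samples when a complete
         *  shockwave (two steep rising edges, Lmin..Lmax apart) is seen, else -1. */
        public int addSample(short s) {
            short oldest = ring[idx];             // the sample taken E samples ago
            ring[idx] = s;
            idx = (idx + 1) % ring.length;
            count++;
            boolean risingEdge = count > e && (s - oldest) >= d;

            if (samplesSinceFirstEdge >= 0) samplesSinceFirstEdge++;
            int result = -1;
            if (risingEdge) {
                if (samplesSinceFirstEdge >= lMin && samplesSinceFirstEdge <= lMax) {
                    result = samplesSinceFirstEdge;   // second edge closes the N-wave
                    samplesSinceFirstEdge = -1;
                } else {
                    samplesSinceFirstEdge = 0;        // treat as a (new) first edge
                }
            } else if (samplesSinceFirstEdge > lMax) {
                samplesSinceFirstEdge = -1;           // timed out waiting for the second edge
            }
            return result;
        }
    }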
The state machine of the algorithm is shown in Figure 5 (b). The input parameters are the minimum steepness of the edges (D, E), and the bounds on the length of the wave (Lmin, Lmax). The only feature calculated by the core is the length of the observed shockwave signal. In contrast to shockwaves, the muzzle blast signatures are characterized by a long initial period (1-5 ms) where the first half period is significantly shorter than the second half [4]. Due to the physical limitations of the analog circuitry described at the beginning of this section, irregular oscillations and glitches might show up within this longer time window as they can be clearly seen in Figure 6 (a). Therefore, the real challenge for the matching detection core is to identify the first and second half periods properly. The state machine (Figure 6 (b)) does not work on the raw samples directly but is fed by a zero crossing (ZC) encoder. After the initial triggering, the detector attempts to collect those ZC segments which belong to the first period (positive amplitude) while discarding too short (in our terminology: garbage) segments--effectively implementing a rudimentary low-pass filter in the ZC domain. After it encounters a sufficiently long negative segment, it runs the same collection logic for the second half period. If too much garbage is discarded in the collection phases, the core resets itself to prevent the (false) detection of the halves from completely different periods separated by rapid oscillation or noise. Finally, if the constraints on the total length and on the length ratio hold, the core generates a detection event along with the actual length, amplitude and energy of the period calculated concurrently. The initial triggering mechanism is based on two amplitude thresholds: one static (but configurable) amplitude level and a dynamically computed one. The latter one is essential to adapt the sensor to different ambient noise environments and to temporarily suspend the muzzle blast detector after a shock wave event (oscillations in the analog section or reverberations in the sensor enclosure might otherwise trigger false muzzle blast detections). The dynamic noise level is estimated by a single pole recursive low-pass filter (cutoff @ 0.5 kHz) on the FPGA. Figure 6: Muzzle blast signature (a) produced by an M16 assault rifle and the corresponding detection logic (b). The detection cores were originally implemented in Java and evaluated on pre-recorded signals because of much faster test runs and more convenient debugging facilities. Later on, they were ported to VHDL and synthesized using the Xilinx ISE tool suite. The functional equivalence between the two implementations were tested by VHDL test benches and Python scripts which provided an automated way to exercise the detection cores on the same set of pre-recorded signals and to compare the results. 6. SENSOR FUSION The sensor fusion algorithm receives detection messages from the sensor network and estimates the bullet trajectory, the shooter position, the caliber of the projectile and the type of the weapon. The algorithm consists of well separated computational tasks outlined below: 1. Compute muzzle blast and shockwave directions of arrivals for each individual sensor (see 6.1). 2. Compute range estimates. This algorithm can analytically fuse a pair of shockwave and muzzle blast AoA estimates. (see 6.2). 3. Compute a single trajectory from all shockwave measurements (see 6.3). 4. If trajectory available compute range (see 6.4). 
else compute shooter position first and then trajectory based on it. (see 6.4) 5. If trajectory available compute caliber (see 6.5). 6. If caliber available compute weapon type (see 6.6). We describe each step in the following sections in detail. 6.1 Direction of arrival The first step of the sensor fusion is to calculate the muzzle blast and shockwave AoA-s for each sensorboard. Each sensorboard has four microphones that measure the ToA-s. Since the microphone spacing is orders of magnitude smaller than the distance to the sound source, we can approximate the approaching sound wave front with a plane (far field assumption). Let us formalize the problem for 3 microphones first. Let P1, P2 and P3 be the position of the microphones ordered by time of arrival t1 <t2 <t3. First we apply a simple geometry validation step. The measured time difference between two microphones cannot be larger than the sound propagation time between the two microphones: Where c is the speed of sound and ε is the maximum measurement error. If this condition does not hold, the corresponding detections are discarded. Let v (x, y, z) be the normal vector of the unknown direction of arrival. We also use r1 (x1, y1, z1), the vector from P1 to P2 and r2 (x2, y2, z2), the vector from P1 to P3. Let's consider the projection of the direction of the motion of the wave front (v) to r1 divided by the speed of sound (c). This gives us how long it takes the wave front to propagate form P1 to P2: Moving from vectors to coordinates using the dot product definition leads to a quadratic system: We omit the solution steps here, as they are straightforward, but long. There are two solutions (if the source is on the P1P2P3 plane the two solutions coincide). We use the fourth microphone's measurement--if there is one--to eliminate one of them. Otherwise, both solutions are considered for further processing. 6.2 Muzzle-shock fusion Figure 7: Section plane of a shot (at P) and two sensors (at P1 and at P2). One sensor detects the muzzle blast's, the other the shockwave's time and direction of arrivals. Consider the situation in Figure 7. A shot was fired from P at time t. Both P and t are unknown. We have one muzzle blast and one shockwave detections by two different sensors with AoA and hence, ToA information available. The muzzle blast detection is at position P1 with time t1 and AoA u. The shockwave detection is at P2 with time t2 and AoA v. u and v are normal vectors. It is shown below that these measurements are sufficient to compute the position of the shooter (P). Let P2 ~ be the point on the extended shockwave cone surface where PP2 ~ is perpendicular to the surface. Note that PP2 ~ is parallel with v. Since P2 ~ is on the cone surface which hits P2, a sensor at P2 ~ would detect the same shockwave time of arrival (t2). The cone surface travels at the speed of sound (c), so we can express P using P ~ 2: containing only one unknown t. One obtains: From here we can calculate the shoter position P. Let's consider the special single sensor case where P1 = P2 (one sensor detects both shockwave and muzzle blast AoA). In this case: Since u and v are not used separately only uv, the absolute orientation of the sensor can be arbitrary, we still get t which gives us the range. Here we assumed that the shockwave is a cone which is only true for constant projectile speeds. In reality, the angle of the cone slowly grows; the surface resembles one half of an American football. 
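For the single-sensor special case, the elided algebra reduces to a closed form. With u the unit vector from the sensor toward the muzzle blast source, v the propagation direction of the shock front, and t1, t2 the muzzle blast and shockwave times of arrival, the shot time is t = (t2 + t1 (u . v)) / (1 + u . v) and the range is r = c (t1 - t) = c (t1 - t2) / (1 + u . v). This is a reconstruction consistent with the remark that only the dot product matters; the sign conventions are an assumption. A small sanity check under the constant-speed cone model:

    /**
     * Reconstruction of the single-sensor muzzle blast + shockwave range estimate.
     * Assumed sign conventions: u points from the sensor toward the source, v is the
     * propagation direction of the shock front; t1, t2 are the muzzle blast and
     * shockwave times of arrival in seconds; c is the speed of sound in m/s.
     */
    public final class MuzzleShockRange {
        public static double range(double[] u, double[] v, double t1, double t2, double c) {
            double uv = u[0] * v[0] + u[1] * v[1] + u[2] * v[2];
            // Shot time: t = (t2 + t1 * uv) / (1 + uv); range = c * (t1 - t).
            return c * (t1 - t2) / (1.0 + uv);
        }

        // Sanity check: a constant-speed Mach-2 projectile fired along +x from the
        // origin, sensor 200 m downrange with a tiny miss distance. Then u ~ (-1,0,0),
        // v = (sin th, cos th, 0) with sin th = 1/M, so u.v = -1/M and r should be ~200 m.
        public static void main(String[] args) {
            double c = 340.0, mach = 2.0, d = 200.0;
            double t1 = d / c;                        // muzzle blast ToA (shot at t = 0)
            double t2 = d / (mach * c);               // shockwave ToA for a tiny miss distance
            double[] u = { -1, 0, 0 };
            double sinT = 1.0 / mach, cosT = Math.sqrt(1 - sinT * sinT);
            double[] v = { sinT, cosT, 0 };
            System.out.println(range(u, v, t1, t2, c));   // prints ~200.0
        }
    }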
The decelerating bullet results in a smaller time difference between the shockwave and the muzzle blast detections because the shockwave generation slows down with the bullet. A smaller time difference results in a smaller range, so the above formula underestimates the true range. However, it can still be used with a proper deceleration correction function. We leave this for future work. 6.3 Trajectory estimation Danicki showed that the bullet trajectory and speed can be computed analytically from two independent shockwave measurements where both ToA and AoA are measured [2]. The method gets more sensitive to measurement errors as the two shockwave directions get closer to each other. In the special case when both directions are the same, the trajectory cannot be computed. In a real world application, the sensors are typically deployed on a plane approximately. In this case, all sensors located on one" side" of the trajectory measure almost the same shockwave AoA. To avoid this error sensitivity problem, we consider shockwave measurement pairs only if the direction of arrival difference is larger than a certain threshold. We have multiple sensors and one sensor can report two different directions (when only three microphones detect the shockwave). Hence, we typically have several trajectory candidates, i.e. one for each AoA pair over the threshold. We applied an outlier filtering and averaging method to fuse together the shockwave direction and time information and come up with a single trajectory. Assume that we have N individual shockwave AoA measurements. Let's take all possible unordered pairs where the direction difference is above the mentioned threshold and compute the trajectory for each. This gives us at most N (N-1) 2 trajectories. A tra jectory is represented by one point pi and the normal vector vi (where i is the trajectory index). We define the distance of two trajectories as the dot product of their normal vectors: where R is a radius parameter. The largest neighbor set is considered to be the core set C, all other trajectories are outliers. The core set can be found in O (N2) time. The trajectories in the core set are then averaged to get the final trajectory. It can happen that we cannot form any sensor pairs because of the direction difference threshold. It means all sensors are on the same side of the trajectory. In this case, we first compute the shooter position (described in the next section) that fixes p making v the only unknown. To find v in this case, we use a simple high resolution grid search and minimize an error function based on the shockwave directions. We have made experiments to utilize the measured shockwave length in the trajectory estimation. There are some promising results, but it needs further research. 6.4 Shooter position estimation The shooter position estimation algorithm aggregates the following heterogenous information generated by earlier computational steps: 1. trajectory, 2. muzzle blast ToA at a sensor, 3. muzzle blast AoA at a sensor, which is effectively a bearing estimate to the shooter, and 4. range estimate at a sensor (when both shockwave and muzzle blast AoA are available). Some sensors report only ToA, some has bearing estimate (s) also and some has range estimate (s) as well, depend ing on the number of successful muzzle blast and shockwave detections by the sensor. For an example, refer to Figure 8. Note that a sensor may have two different bearing and range estimates. 3 detections gives two possible AoA-s for muzzle blast (i.e. 
bearing) and/or shockwave. Furthermore, the combination of two different muzzle blast and shockwave AoA-s may result in two different ranges. Figure 8: Example of heterogenous input data for the shooter position estimation algorithm. All sen sors have ToA measurements (t1, t2, t3, t4, t5), one sensor has a single bearing estimate (v2), one sensor has two possible bearings (v3, v ~ 3) and one sensor has two bearing and two range estimates (v1, v ~ 1, r1, r ~ 1) In a multipath environment, these detections will not only contain gaussian noise, but also possibly large errors due to echoes. It has been showed in our earlier work that a similar problem can be solved efficiently with an interval arithmetic based bisection search algorithm [8]. The basic idea is to define a discrete consistency function over the area of interest and subdivide the space into 3D boxes. For any given 3D box, this function gives the number of measurements supporting the hypothesis that the shooter was within that box. The search starts with a box large enough to contain the whole area of interest, then zooms in by dividing and evaluating boxes. The box with the maximum consistency is divided until the desired precision is reached. Backtracking is possible to avoid getting stuck in a local maximum. This approach has been shown to be fast enough for online processing. Note, however, that when the trajectory has already been calculated in previous steps, the search needs to be done only on the trajectory making it orders of magnitude faster. Next let us describe how the consistency function is calculated in detail. Consider B, a three dimensional box, we would like to compute the consistency value of. First we consider only the ToA information. If one sensor has multiple ToA detections, we use the average of those times, so one sensor supplies at most one ToA estimate. For each ToA, we can calculate the corresponding time of the shot, since the origin is assumed to be in box B. Since it is a box and not a single point, this gives us an interval for the shot time. The maximum number of overlapping time intervals gives us the value of the consistency function for B. For a detailed description of the consistency function and search algorithm, refer to [8]. Here we extend the approach the following way. We modify the consistency function based on the bearing and range data from individual sensors. A bearing estimate supports B if the line segment starting from the sensor with the measured direction intersects the B box. A range supports B, if the sphere with the radius of the range and origin of the sensor intersects B. Instead of simply checking whether the position specified by the corresponding bearing-range pairs falls within B, this eliminates the sensor's possible orientation error. The value of the consistency function is incremented by one for each bearing and range estimate that is consistent with B. 6.5 Caliber estimation The shockwave signal characteristics has been studied before by Whitham [20]. He showed that the shockwave period T is related to the projectile diameter d, the length l, the perpendicular miss distance b from the bullet trajectory to the sensor, the Mach number M and the speed of sound c. Figure 9: Shockwave length and miss distance relationship. Each data point represents one sensorboard after an aggregation of the individual measurements of the four acoustic channels. Three different caliber projectiles have been tested (196 shots, 10 sensors). 
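Returning to the shooter position search described above, the sketch below shows a stripped-down version of the consistency-driven box subdivision: each sensor's averaged ToA is turned into an interval of feasible shot times for a given box, the consistency value is the maximum number of overlapping intervals, and the best octant is refined until the box is small enough. Bearing and range support, the interval arithmetic details and backtracking are omitted; the names and the greedy subdivision policy are illustrative, not the actual implementation from [8].

    import java.util.*;

    /** Simplified sketch of the box-subdivision search over the ToA consistency
     *  function (bearing/range support and backtracking omitted for brevity). */
    public class ConsistencySearch {
        public static class Box {
            double[] lo, hi;
            Box(double[] lo, double[] hi) { this.lo = lo.clone(); this.hi = hi.clone(); }
            double size() {
                double m = 0;
                for (int i = 0; i < 3; i++) m = Math.max(m, hi[i] - lo[i]);
                return m;
            }
        }

        /** Max number of ToA-derived shot-time intervals that overlap, assuming the
         *  shooter is somewhere inside box b; eps is the allowed timing error. */
        static int consistency(Box b, double[][] sensorPos, double[] toa, double c, double eps) {
            int n = toa.length;
            double[][] events = new double[2 * n][2];
            for (int i = 0; i < n; i++) {
                double dMin = 0, dMax = 0;
                for (int k = 0; k < 3; k++) {
                    double nearest = Math.max(b.lo[k], Math.min(sensorPos[i][k], b.hi[k]));
                    double farthest = Math.max(sensorPos[i][k] - b.lo[k], b.hi[k] - sensorPos[i][k]);
                    dMin += (sensorPos[i][k] - nearest) * (sensorPos[i][k] - nearest);
                    dMax += farthest * farthest;
                }
                dMin = Math.sqrt(dMin); dMax = Math.sqrt(dMax);
                events[2 * i]     = new double[] { toa[i] - dMax / c - eps, +1 };  // earliest shot time
                events[2 * i + 1] = new double[] { toa[i] - dMin / c + eps, -1 };  // latest shot time
            }
            // classic sweep to find the maximum number of overlapping intervals
            Arrays.sort(events, (a, b2) ->
                    a[0] == b2[0] ? Double.compare(b2[1], a[1]) : Double.compare(a[0], b2[0]));
            int best = 0, open = 0;
            for (double[] ev : events) { open += (int) ev[1]; best = Math.max(best, open); }
            return best;
        }

        /** Greedy refinement: split the best box into octants until it is small enough. */
        static Box search(Box start, double[][] sensorPos, double[] toa,
                          double c, double eps, double resolution) {
            Box box = start;
            while (box.size() > resolution) {
                Box bestChild = null; int bestScore = -1;
                for (int oct = 0; oct < 8; oct++) {
                    double[] lo = new double[3], hi = new double[3];
                    for (int k = 0; k < 3; k++) {
                        double mid = 0.5 * (box.lo[k] + box.hi[k]);
                        boolean upper = ((oct >> k) & 1) == 1;
                        lo[k] = upper ? mid : box.lo[k];
                        hi[k] = upper ? box.hi[k] : mid;
                    }
                    Box child = new Box(lo, hi);
                    int score = consistency(child, sensorPos, toa, c, eps);
                    if (score > bestScore) { bestScore = score; bestChild = child; }
                }
                box = bestChild;
            }
            return box;
        }
    }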
To illustrate the relationship between miss distance and shockwave length, here we use all 196 shots with three different caliber projectiles fired during the evaluation. (During the evaluation we used data obtained previously using a few practice shots per weapon.) 10 sensors (4 microphones by sensor) measured the shockwave length. For each sensor, we considered the shockwave length estimation valid if at least three out of four microphones agreed on a value with at most 5 microsecond variance. This filtering leads to a 86% report rate per sensor and gets rid of large measurement errors. The experimental data is shown in Figure 9. Whitham's formula suggests that the shockwave length for a given caliber can be approximated with a power function of the miss distance (with a 1/4 exponent). Best fit functions on our data are: To evaluate a shot, we take the caliber whose approximation function results in the smallest RMS error of the filtered sensor readings. This method has less than 1% caliber estimation error when an accurate trajectory estimate is available. In other words, caliber estimation only works if enough shockwave detections are made by the system to compute a trajectory. 6.6 Weapon estimation We analyzed all measured signal characteristics to find weapon specific information. Unfortunately, we concluded that the observed muzzle blast signature is not characteristic enough of the weapon for classification purposes. The reflections of the high energy muzzle blast from the environment have much higher impact on the muzzle blast signal shape than the weapon itself. Shooting the same weapon from different places caused larger differences on the recorded signal than shooting different weapons from the same place. Figure 10: AK47 and M240 bullet deceleration measurements. Both weapons have the same caliber. Data is approximated using simple linear regression. Figure 11: M16, M249 and M4 bullet deceleration measurements. All weapons have the same caliber. Data is approximated using simple linear regression. However, the measured speed of the projectile and its caliber showed good correlation with the weapon type. This is because for a given weapon type and ammunition pair, the muzzle velocity is nearly constant. In Figures 10 and 11 we can see the relationship between the range and the measured bullet speed for different calibers and weapons. In the supersonic speed range, the bullet deceleration can be approximated with a linear function. In case of the 7.62 mm caliber, the two tested weapons (AK47, M240) can be clearly separated (Figure 10). Unfortunately, this is not necessarily true for the 5.56 mm caliber. The M16 with its higher muzzle speed can still be well classified, but the M4 and M249 weapons seem practically undistinguishable (Figure 11). However, this may be partially due to the limited number of practice shots we were able to take before the actual testing began. More training data may reveal better separation between the two weapons since their published muzzle velocities do differ somewhat. The system carries out weapon classification in the following manner. Once the trajectory is known, the speed can be calculated for each sensor based on the shockwave geometry. To evaluate a shot, we choose the weapon type whose deceleration function results in the smallest RMS error of the estimated range-speed pairs for the estimated caliber class. 7. 
RESULTS An independent evaluation of the system was carried out by a team from NIST at the US Army Aberdeen Test Center in April 2006 [19]. The experiment was setup on a shooting range with mock-up wooden buildings and walls for supporting elevated shooter positions and generating multipath effects. Figure 12 shows the user interface with an aerial photograph of the site. 10 sensor nodes were deployed on surveyed points in an approximately 30 × 30 m area. There were five fixed targets behind the sensor network. Several firing positions were located at each of the firing lines at 50, 100, 200 and 300 meters. These positions were known to the evaluators, but not to the operators of the system. Six different weapons were utilized: AK47 and M240 firing 7.62 mm projectiles, M16, M4 and M249 with 5.56 mm ammunition and the .50 caliber M107. Note that the sensors remained static during the test. The primary reason for this is that nobody is allowed downrange during live fire tests. Utilizing some kind of remote control platform would have been too involved for the limited time the range was available for the test. The experiment, therefore, did not test the mobility aspect of the system. During the one day test, there were 196 shots fired. The results are summarized in Table 1. The system detected all shots successfully. Since a ballistic shockwave is a unique acoustic phenomenon, it makes the detection very robust. There were no false positives for shockwaves, but there were a handful of false muzzle blast detections due to parallel tests of artillery at a nearby range. Table 1: Summary of results fusing all available sen sor observations. All shots were successfully detected, so the detection rate is omitted. Localization rate means the percentage of shots that the sensor fusion was able to estimate the trajectory of. The caliber accuracy rate is relative to the shots localized and not all the shots because caliber estimation requires the trajectory. The trajectory error is broken down to azimuth in degrees and the actual distance of the shooter from the trajectory. The distance error shows the distance between the real shooter position and the estimated shooter position. As such, it includes the error caused by both the trajectory and that of the range estimation. Note that the traditional bearing and range measures are not good ones for a distributed system such as ours because of the lack of a single reference point. Figure 12: The user interface of the system show ing the experimental setup. The 10 sensor nodes are labeled by their ID and marked by dark circles. The targets are black squares marked T-1 through T-5. The long white arrows point to the shooter position estimated by each sensor. Where it is missing, the corresponding sensor did not have enough detections to measure the AoA of either the muzzle blast, the shockwave or both. The thick black line and large circle indicate the estimated trajectory and the shooter position as estimated by fusing all available detections from the network. This shot from the 100-meter line at target T-3 was localized almost perfectly by the sensor network. The caliber and weapon were also identified correctly. 6 out of 10 nodes were able to estimate the location alone. Their bearing accuracy is within a degree, while the range is off by less than 10% in the worst case. The localization rate characterizes the system's ability to successfully estimate the trajectory of shots. 
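The trajectory-related error figures above come down to simple geometry: the reported distance of the shooter (or of a target) from the estimated trajectory is, presumably, the standard point-to-line distance

    d(\mathbf{p}) = \frac{\lVert (\mathbf{p} - \mathbf{p}_0) \times \mathbf{v} \rVert}{\lVert \mathbf{v} \rVert},

where p0 is a point on the estimated trajectory, v its direction vector and p the surveyed shooter (or target) position. This is stated here only to make the error metric explicit; the exact formula is not spelled out in the text.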
Since caliber estimation and weapon classification relies on the trajectory, non-localized shots are not classified either. There were 7 shots out of 196 that were not localized. The reason for missed shots is the trajectory ambiguity problem that occurs when the projectile passes on one side of all the sensors. In this case, two significantly different trajectories can generate the same set of observations (see [8] and also Section 6.3). Instead of estimating which one is more likely or displaying both possibilities, we decided not to provide a trajectory at all. It is better not to give an answer other than a shot alarm than misleading the soldier. Localization accuracy is broken down to trajectory accuracy and range estimation precision. The angle of the estimated trajectory was better than 1 degree except for the 300 m range. Since the range should not affect trajectory estimation as long as the projectile passes over the network, we suspect that the slightly worse angle precision for 300 m is due to the hurried shots we witnessed the soldiers took near the end of the day. This is also indicated by another datapoint: the estimated trajectory distance from the actual targets has an average error of 1.3 m for 300 m shots, 0.75 m for 200 m shots and 0.6 m for all but 300 m shots. As the distance between the targets and the sensor network was fixed, this number should not show a 2 × improvement just because the shooter is closer. Since the angle of the trajectory itself does not characterize the overall error--there can be a translation also--Table 1 also gives the distance of the shooter from the estimated trajectory. These indicate an error which is about 1-2% of the range. To put this into perspective, a trajectory estimate for a 100 m shot will very likely go through or very near the window the shooter is located at. Again, we believe that the disproportionally larger errors at 300 m are due to human errors in aiming. As the ground truth was obtained by knowing the precise location of the shooter and the target, any inaccuracy in the actual trajectory directly adds to the perceived error of the system. We call the estimation of the shooter's position on the calculated trajectory range estimation due to the lack of a better term. The range estimates are better than 5% accurate from 50 m and 10% for 100 m. However, this goes to 20% or worse for longer distances. We did not have a facility to test system before the evaluation for ranges beyond 100 m. During the evaluation, we ran into the problem of mistaking shockwave echoes for muzzle blasts. These echoes reached the sensors before the real muzzle blast for long range shots only, since the projectile travels 2-3 × faster than the speed of sound, so the time between the shockwave (and its possible echo from nearby objects) and the muzzle blast increases with increasing ranges. This resulted in underestimating the range, since the system measured shorter times than the real ones. Since the evaluation we finetuned the muzzle blast detection algorithm to avoid this problem. Table 2: Weapon classification results. The percent ages are relative to the number of shots localized and not all shots, as the classification algorithm needs to know the trajectory and the range. Note that the difference is small; there were 189 shots localized out of the total 196. The caliber and weapon estimation accuracy rates are based on the 189 shots that were successfully localized. 
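To recap the procedure behind these weapon classification numbers (Section 6.6): each weapon in the estimated caliber class is represented by a linear speed-versus-range model fitted to training shots, and the class whose line best explains the shot's estimated range-speed pairs (smallest RMS error) is reported. The sketch below illustrates only this selection step; the coefficients are placeholders, not the values fitted during the test.

    import java.util.*;

    /** Sketch of weapon classification by deceleration model fit (Section 6.6).
     *  Speed model: v(r) = v0 - k * r, valid in the supersonic regime. */
    public class WeaponClassifier {
        public static class WeaponModel {
            final String name; final double v0, k;   // muzzle speed (m/s), decay (m/s per m)
            WeaponModel(String name, double v0, double k) { this.name = name; this.v0 = v0; this.k = k; }
            double speedAt(double range) { return v0 - k * range; }
        }

        /** Pick the model with the smallest RMS error over the (range, speed) pairs
         *  estimated from the shockwave geometry at the individual sensors. */
        public static String classify(List<WeaponModel> candidates, double[][] rangeSpeedPairs) {
            String best = null; double bestRms = Double.MAX_VALUE;
            for (WeaponModel m : candidates) {
                double sum = 0;
                for (double[] rs : rangeSpeedPairs) {
                    double err = rs[1] - m.speedAt(rs[0]);
                    sum += err * err;
                }
                double rms = Math.sqrt(sum / rangeSpeedPairs.length);
                if (rms < bestRms) { bestRms = rms; best = m.name; }
            }
            return best;
        }

        public static void main(String[] args) {
            // Placeholder models for the 7.62 mm caliber class (illustrative numbers only).
            List<WeaponModel> models762 = Arrays.asList(
                    new WeaponModel("AK47", 710, 0.60),
                    new WeaponModel("M240", 850, 0.55));
            double[][] pairs = { { 80, 665 }, { 95, 655 }, { 110, 645 } };
            System.out.println(classify(models762, pairs));   // prints "AK47" for these numbers
        }
    }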
Note that there was a single shot that was falsely classified by the caliber estimator. The 73% overall weapon classification accuracy does not seem impressive. But if we break it down to the six different weapons tested, the picture changes dramatically as shown in Table 2. For four of the weapons (AK14, M16, M240 and M107), the classification rate is almost 100%. There were only two shots out of approximately 140 that were missed. The M4 and M249 proved to be too similar and they were mistaken for each other most of the time. One possible explanation is that we had only a limited number of test shots taken with these weapons right before the evaluation and used the wrong deceleration approximation function. Either this or a similar mistake was made since if we simply used the opposite of the system's answer where one of these weapons were indicated, the accuracy would have improved 3x. If we consider these two weapons a single weapon class, then the classification accuracy for it becomes 93%. Note that the AK47 and M240 have the same caliber (7.62 mm), just as the M16, M4 and M249 do (5.56 mm). That is, the system is able to differentiate between weapons of the same caliber. We are not aware of any system that classifies weapons this accurately. 7.1 Single sensor performance As was shown previously, a single sensor alone is able to localize the shooter if it can determine both the muzzle blast and the shockwave AoA, that is, it needs to measure the ToA of both on at least three acoustic channels. While shockwave detection is independent of the range--unless the projectile becomes subsonic--, the likelihood of muzzle blast detection beyond 150 meters is not enough for consistently getting at least three per sensor node for AoA estimation. Hence, we only evaluate the single sensor performance for the 104 shots that were taken from 50 and 100 m. Note that we use the same test data as in the previous section, but we evaluate individually for each sensor. Table 3 summarizes the results broken down by the ten sensors utilized. Since this is now not a distributed system, the results are given relative to the position of the given sensor, that is, a bearing and range estimate is provided. Note that many of the common error sources of the networked system do not play a role here. Time synchronization is not applicable. The sensor's absolute location is irrelevant (just as the relative location of multiple sensors). The sensor's orientation is still important though. There are several disadvantages of the single sensor case compared to the networked system: there is no redundancy to compensate for other errors and to perform outlier rejection, the localization rate is markedly lower, and a single sensor alone is not able to estimate the caliber or classify the weapon. Table 3: Single sensor accuracy for 108 shots fired from 50 and 100 meters. Localization rate refers to the percentage of shots the given sensor alone was able to localize. The bearing and range values are average errors. They characterize the accuracy of localization from the given sensor's perspective. The data indicates that the performance of the sensors varied significantly especially considering the localization rate. One factor has to be the location of the given sensor including how far it was from the firing lines and how obstructed its view was. Also, the sensors were hand-built prototypes utilizing nowhere near production quality packaging/mounting. 
In light of these factors, the overall average bearing error of 0.9 degrees and range error of 5 m with a microphone spacing of less than 10 cm are excellent. We believe that professional manufacturing and better microphones could easily achieve better performance than the best sensor in our experiment (> 60% localization rate and 3 m range error). Interestingly, the largest error in range was a huge 90 m clearly due to some erroneous detection, yet the largest bearing error was less than 12 degrees which is still a good indication for the soldier where to look. The overall localization rate over all single sensors was 42%, while for 50 m shots only, this jumped to 61%. Note that the firing range was prepared to simulate an urban area to some extent: there were a few single - and two-storey wooden structures built both in and around the sensor deployment area and the firing lines. Hence, not all sensors had line-of-sight to all shooting positions. We estimate that 10% of the sensors had obstructed view to the shooter on average. Hence, we can claim that a given sensor had about 50% chance of localizing a shot within 130 m. (Since the sensor deployment area was 30 m deep, 100 m shots correspond to actual distances between 100 and 130 m.) Again, we emphasize that localization needs at least three muzzle blast and three shockwave detections out of a possible four for each per sensor. The detection rate for single sensors--corresponding to at least one shockwave detection per sensor--was practically 100%. Figure 13: Histogram showing what fraction of the 104 shots taken from 50 and 100 meters were localized by at most how many individual sensors alone. 13% of the shots were missed by every single sensor, i.e., none of them had both muzzle blast and shockwave AoA detections. Note that almost all of these shots were still accurately localized by the networked system, i.e. the sensor fusion using all available observations in the sensor network. It would be misleading to interpret these results as the system missing half the shots. As soldiers never work alone and the sensor node is relatively cheap to afford having every soldier equipped with one, we also need to look at the overall detection rates for every shot. Figure 13 shows the histogram of the percentage of shots vs. the number of individual sensors that localized it. 13% of shots were not localized by any sensor alone, but 87% was localized by at least one sensor out of ten. 7.2 Error sources In this section, we analyze the most significant sources of error that affect the performance of the networked shooter localization and weapon classification system. In order to correlate the distributed observations of the acoustic events, the nodes need to have a common time and space reference. Hence, errors in the time synchronization, node localization and node orientation all degrade the overall accuracy of the system. percentage of shots Our time synchronization approach yields errors significantly less than 100 microseconds. As the sound travels about 3 cm in that time, time synchronization errors have a negligible effect on the system. On the other hand, node location and orientation can have a direct effect on the overall system performance. Notice that to analyze this, we do not have to resort to simulation, instead we can utilize the real test data gathered at Aberdeen. 
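As a back-of-the-envelope check on the coverage numbers above (roughly a 50% chance that a given sensor can localize a shot within 130 m on its own, and 87% of shots localized by at least one of the ten sensors), the sketch below computes what independent sensors would predict. The gap between that prediction and the measured 87% is consistent with misses being correlated across nodes (shared trajectory-side geometry and obstructed line-of-sight); this is our own interpretation, not a claim from the evaluation.

```python
def p_at_least_one(p_single: float, n: int) -> float:
    """Probability that at least one of n sensors localizes a shot, assuming independent sensors."""
    return 1.0 - (1.0 - p_single) ** n

# Naive independence prediction for the per-sensor rate of ~0.5 reported above:
print(round(p_at_least_one(0.5, 10), 3))  # 0.999 for the ten nodes used in the test
print(round(p_at_least_one(0.5, 4), 3))   # 0.938 for a hypothetical four-soldier team
# The measured 87% for ten sensors is noticeably lower than 99.9%, which is what makes the
# correlation between individual sensor misses visible.
```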
But instead of using the real sensor locations known very accurately and the measured and calibrated, almost perfect node orientations, we can add error terms to them and run the sensor fusion. This exactly replicates how the system would have performed during the test using the imprecisely known locations and orientations. Another aspect of the system performance that can be evaluated this way is the effect of the number of available sensors. Instead of using all ten sensors in the data fusion, we can pick any subset of the nodes to see how the accuracy degrades as we decrease the number of nodes. The following experiment was carried out. The number of sensors was varied from 2 to 10 in increments of 2. Each run picked the sensors randomly using a uniform distribution. At each run each node was randomly "moved" to a new location within a circle around its true position with a radius determined by a zero-mean Gaussian distribution. Finally, the node orientations were perturbed using a zero-mean Gaussian distribution. Each combination of parameters was generated 100 times and utilized for all 196 shots. The results are summarized in Figure 14. Figure 14: The effect of node localization and orientation errors on azimuth accuracy with 2, 4, 6 and 8 nodes. Note that the chart for 10 nodes is almost identical to the 8-node case, hence, it is omitted. There is one 3D bar chart for each of the experiment sets with the given fixed number of sensors. The x-axis shows the node location error, that is, the standard deviation of the corresponding Gaussian distribution that was varied between 0 and 6 meters. The y-axis shows the standard deviation of the node orientation error that was varied between 0 and 6 degrees. The z-axis is the resulting trajectory azimuth error. Note that the elevation angles showed somewhat larger errors than the azimuth. Since all the sensors were in approximately a horizontal plane and only a few shooter positions were out of the same plane and only by 2 m or so, the test was not sufficient to evaluate this aspect of the system. There are many interesting observations one can make by analyzing these charts. Node location errors in this range have a small effect on accuracy. Node orientation errors, on the other hand, noticeably degrade the performance. Even so, the largest errors in this experiment, 3.5 degrees for 6 sensors and 5 degrees for 2 sensors, are still very good. Note that as the location and orientation errors increase and the number of sensors decreases, the most significantly affected performance metric is the localization rate. See Table 4 for a summary. Successful localization goes down from almost 100% to 50% when we go from 10 sensors to 2 even without additional errors. This is primarily caused by geometry: for a successful localization, the bullet needs to pass over the sensor network, that is, at least one sensor should be on the side of the trajectory other than the rest of the nodes. (This is a simplification for illustrative purposes. If all the sensors and the trajectory are not coplanar, localization may be successful even if the projectile passes on one side of the network. See Section 6.3.) As the number of sensors decreased in the experiment by randomly selecting a subset, the probability of trajectories abiding by this rule decreased. This also means that even if there are many sensors (i.e. soldiers) but all of them are right next to each other, the localization rate will suffer.
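A minimal sketch of the perturbation study just described, assuming the recorded shot data and a sensor-fusion routine are available as Python objects. The `fuse` callable, the pose representation, and the data layout are placeholders for illustration; they are not the actual system interfaces.

```python
import random

def perturb_pose(pose, loc_sigma_m, ori_sigma_deg):
    """Add zero-mean Gaussian noise to a node's surveyed location and calibrated orientation."""
    x, y, heading = pose
    return (x + random.gauss(0.0, loc_sigma_m),
            y + random.gauss(0.0, loc_sigma_m),
            heading + random.gauss(0.0, ori_sigma_deg))

def sensitivity_study(shots, node_poses, fuse, runs=100):
    """Re-run the sensor fusion on recorded detections with perturbed poses and node subsets.

    shots: list of (detections_by_node, true_azimuth_deg) pairs recorded at the test.
    node_poses: dict node_id -> (x, y, heading) ground-truth pose.
    fuse: fusion routine returning an estimated azimuth in degrees, or None if not localized.
    """
    results = {}
    for n_sensors in (2, 4, 6, 8, 10):
        for loc_sigma in (0, 2, 4, 6):        # location error std. dev. in meters
            for ori_sigma in (0, 2, 4, 6):    # orientation error std. dev. in degrees
                errors, localized, attempts = [], 0, 0
                for _ in range(runs):
                    subset = random.sample(sorted(node_poses), n_sensors)
                    poses = {i: perturb_pose(node_poses[i], loc_sigma, ori_sigma) for i in subset}
                    for detections, true_az in shots:
                        attempts += 1
                        est_az = fuse({i: detections[i] for i in subset if i in detections}, poses)
                        if est_az is not None:
                            localized += 1
                            errors.append(abs((est_az - true_az + 180.0) % 360.0 - 180.0))
                results[(n_sensors, loc_sigma, ori_sigma)] = (
                    sum(errors) / len(errors) if errors else None,  # mean azimuth error
                    localized / attempts)                           # localization rate
    return results
```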
However, when the sensor fusion does provide a result, it is still accurate even with few available sensors and relatively large individual errors. A very few consistent observation lead to good accuracy as the inconsistent ones are discarded by the algorithm. This is also supported by the observation that for the cases with the higher number of sensors (8 or 10), the localization rate is hardly affected by even large errors. Table 4: Localization rate as a function of the number of sensors used, the sensor node location and orientation errors. One of the most significant observations on Figure 14 and Table 4 is that there is hardly any difference in the data for 6, 8 and 10 sensors. This means that there is little advantage of adding more nodes beyond 6 sensors as far as the accuracy is concerned. The speed of sound depends on the ambient temperature. The current prototype considers it constant that is typically set before a test. It would be straightforward to employ a temperature sensor to update the value of the speed of sound periodically during operation. Note also that wind may adversely affect the accuracy of the system. The sensor fusion, however, could incorporate wind speed into its calculations. It would be more complicated than temperature compensation, but could be done. Other practical issues also need to be looked at before a real world deployment. Silencers reduce the muzzle blast energy and hence, the effective range the system can detect it at. However, silencers do not effect the shockwave and the system would still detect the trajectory and caliber accurately. The range and weapon type could not be estimated without muzzle blast detections. Subsonic weapons do not produce a shockwave. However, this is not of great significance, since they have shorter range, lower accuracy and much less lethality. Hence, their use is not widespread and they pose less danger in any case. Another issue is the type of ammunition used. Irregular armies may use substandard, even hand manufactured bullets. This effects the muzzle velocity of the weapon. For weapon classification to work accurately, the system would need to be calibrated with the typical ammunition used by the given adversary. 8. RELATED WORK Acoustic detection and recognition has been under research since the early fifties. The area has a close relevance to the topic of supersonic flow mechanics [20]. Fansler analyzed the complex near-field pressure waves that occur within a foot of the muzzle blast. Fansler's work gives a good idea of the ideal muzzle blast pressure wave without contamination from echoes or propagation effects [4]. Experiments with greater distances from the muzzle were conducted by Stoughton [18]. The measurements of the ballistic shockwaves using calibrated pressure transducers at known locations, measured bullet speeds, and miss distances of 3 55 meters for 5.56 mm and 7.62 mm projectiles were made. Results indicate that ground interaction becomes a problem for miss distances of 30 meters or larger. Another area of research is the signal processing of gunfire acoustics. The focus is on the robust detection and length estimation of small caliber acoustic shockwaves and muzzle blasts. Possible techniques for classifying signals as either shockwaves or muzzle blasts includes short-time Fourier Transform (STFT), the Smoothed Pseudo Wigner-Ville distribution (SPWVD), and a discrete wavelet transformation (DWT). 
Joint time-frequency (JTF) spectrograms are used to analyze the typical separation of the shockwave and muzzle blast transients in both time and frequency. Mays concludes that the DWT is the best method for classifying signals as either shockwaves or muzzle blasts because it works well and is less expensive to compute than the SPWVD [10]. The edges of the shockwave are typically well defined and the shockwave length is directly related to the bullet characteristics. A paper by Sadler [14] compares two shockwave edge detection methods: a simple gradient-based detector, and a multi-scale wavelet detector. It also demonstrates how the length of the shockwave, as determined by the edge detectors, can be used along with Whithams equations [20] to estimate the caliber of a projectile. Note that the available computational performance on the sensor nodes, the limited wireless bandwidth and real-time requirements render these approaches infeasible on our platform. A related topic is the research and development of experimental and prototype shooter location systems. Researchers at BBN have developed the Bullet Ears system [3] which has the capability to be installed in a fixed position or worn by soldiers. The fixed system has tetrahedron shaped microphone arrays with 1.5 meter spacing. The overall system consists of two to three of these arrays spaced 20 to 100 meters from each other. The soldier-worn system has 12 microphones as well as a GPS antenna and orientation sensors mounted on a helmet. There is a low speed RF connection from the helmet to the processing body. An extensive test has been conducted to measure the performance of both type of systems. The fixed systems performance was one order of magnitude better in the angle calculations while their range performance where matched. The angle accuracy of the fixed system was dominantly less than one degree while it was around five degrees for the helmet mounted one. The range accuracy was around 5 percent for both of the systems. The problem with this and similar centralized systems is the need of the one or handful of microphone arrays to be in line-of-sight of the shooter. A sensor networked based solution has the advantage of widely distributed sensing for better coverage, multipath effect compensation and multiple simultaneous shot resolution [8]. This is especially important for operation in acoustically reverberant urban areas. Note that BBN's current vehicle-mounted system called BOOMERANG, a modified version of Bullet Ears, is currently used in Iraq [1]. The company ShotSpotter specializes in law enforcement systems that report the location of gunfire to police within seconds. The goal of the system is significantly different than that of military systems. Shotspotter reports 25 m typical accuracy which is more than enough for police to respond. They are also manufacturing experimental soldier wearable and UAV mounted systems for military use [16], but no specifications or evaluation results are publicly available. 9. CONCLUSIONS The main contribution of this work is twofold. First, the performance of the overall distributed networked system is excellent. Most noteworthy are the trajectory accuracy of one degree, the correct caliber estimation rate of well over 90% and the close to 100% weapon classification rate for 4 of the 6 weapons tested. The system proved to be very robust when increasing the node location and orientation errors and decreasing the number of available sensors all the way down to a couple. 
The key factor behind this is the sensor fusion algorithm's ability to reject erroneous measurements. It is also worth mentioning that the results presented here correspond to the first and only test of the system beyond 100 m and with six different weapons. We believe that with the lessons learned in the test, a consecutive field experiment could have showed significantly improved results especially in range estimation beyond 100 m and weapon classification for the remaining two weapons that were mistaken for each other the majority of the times during the test. Second, the performance of the system when used in standalone mode, that is, when single sensors alone provided localization, was also very good. While the overall localization rate of 42% per sensor for shots up to 130 m could be improved, the bearing accuracy of less than a degree and the average 5% range error are remarkable using the handmade prototypes of the low-cost nodes. Note that 87% of the shots were successfully localized by at least one of the ten sensors utilized in standalone mode. We believe that the technology is mature enough that a next revision of the system could be a commercial one. However, important aspects of the system would still need to be worked on. We have not addresses power management yet. A current node runs on 4 AA batteries for about 12 hours of continuous operation. A deployable version of the sensor node would need to be asleep during normal operation and only wake up when an interesting event occurs. An analog trigger circuit could solve this problem, however, the system would miss the first shot. Instead, the acoustic channels would need to be sampled and stored in a circular buffer. The rest of the board could be turned off. When a trigger wakes up the board, the acoustic data would be immediately available. Experiments with a previous generation sensor board indicated that this could provide a 10x increase in battery life. Other outstanding issues include weatherproof packaging and ruggedization, as well as integration with current military infrastructure.
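The pre-trigger buffering idea mentioned above (keep sampling the acoustic channels into a circular buffer while the rest of the board sleeps, so the shot that causes the wake-up is not lost) can be sketched as follows. This is a conceptual illustration in Python, not the actual firmware; the capacity and the trigger hook are assumptions.

```python
class PreTriggerBuffer:
    """Circular buffer holding the most recent acoustic samples of one channel so that the
    samples immediately preceding a wake-up trigger are still available for detection."""

    def __init__(self, capacity_samples: int):
        self.buf = [0] * capacity_samples
        self.idx = 0
        self.full = False

    def push(self, sample: int) -> None:
        """Overwrite the oldest sample; called for every new ADC sample."""
        self.buf[self.idx] = sample
        self.idx = (self.idx + 1) % len(self.buf)
        if self.idx == 0:
            self.full = True

    def snapshot(self) -> list:
        """Return the buffered samples in chronological order when the trigger fires."""
        if not self.full:
            return self.buf[:self.idx]
        return self.buf[self.idx:] + self.buf[:self.idx]

# For a hypothetical sampling rate fs, a buffer of fs samples per channel preserves one second
# of pre-trigger context; an analog trigger circuit would wake the board and the detector
# would then call snapshot() instead of missing the first shot.
```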
Shooter Localization and Weapon Classification with Soldier-Wearable Networked Sensors ABSTRACT The paper presents a wireless sensor network-based mobile countersniper system. A sensor node consists of a helmetmounted microphone array, a COTS MICAz mote for internode communication and a custom sensorboard that implements the acoustic detection and Time of Arrival (ToA) estimation algorithms on an FPGA. A 3-axis compass provides self orientation and Bluetooth is used for communication with the soldier's PDA running the data fusion and the user interface. The heterogeneous sensor fusion algorithm can work with data from a single sensor or it can fuse ToA or Angle of Arrival (AoA) observations of muzzle blasts and ballistic shockwaves from multiple sensors. The system estimates the trajectory, the range, the caliber and the weapon type. The paper presents the system design and the results from an independent evaluation at the US Army Aberdeen Test Center. The system performance is characterized by 1degree trajectory precision and over 95% caliber estimation accuracy for all shots, and close to 100% weapon estimation accuracy for 4 out of 6 guns tested. 1. INTRODUCTION The importance of countersniper systems is underscored by the constant stream of news reports coming from the Middle East. In October 2006 CNN reported on a new tactic employed by insurgents. A mobile sniper team moves around busy city streets in a car, positions itself at a good standoff distance from dismounted US military personnel, takes a single well-aimed shot and immediately melts in the city traffic. By the time the soldiers can react, they are gone. A countersniper system that provides almost immediate shooter location to every soldier in the vicinity would provide clear benefits to the warfigthers. Our team introduced PinPtr, the first sensor networkbased countersniper system [17, 8] in 2003. The system is based on potentially hundreds of inexpensive sensor nodes deployed in the area of interest forming an ad hoc multihop network. The acoustic sensors measure the Time of Arrival (ToA) of muzzle blasts and ballistic shockwaves, pressure waves induced by the supersonic projectile, send the data to a base station where a sensor fusion algorithm determines the origin of the shot. PinPtr is characterized by high precision: 1m average 3D accuracy for shots originating within or near the sensor network and 1 degree bearing precision for both azimuth and elevation and 10% accuracy in range estimation for longer range shots. The truly unique characteristic of the system is that it works in such reverberant environments as cluttered urban terrain and that it can resolve multiple simultaneous shots at the same time. This capability is due to the widely distributed sensing and the unique sensor fusion approach [8]. The system has been tested several times in US Army MOUT (Military Operations in Urban Terrain) facilities. The obvious disadvantage of such a system is its static nature. Once the sensors are distributed, they cover a certain area. Depending on the operation, the deployment may be needed for an hour or a month, but eventually the area looses its importance. It is not practical to gather and reuse the sensors, especially under combat conditions. Even if the sensors are cheap, it is still a waste and a logistical problem to provide a continuous stream of sensors as the operations move from place to place. 
As it is primarily the soldiers that the system protects, a natural extension is to mount the sensors on the soldiers themselves. While there are vehiclemounted countersniper systems [1] available commercially, we are not aware of a deployed system that protects dismounted soldiers. A helmet-mounted system was developed in the mid 90s by BBN [3], but it was not continued beyond the Darpa program that funded it. To move from a static sensor network-based solution to a highly mobile one presents significant challenges. The sensor positions and orientation need to be constantly monitored. As soldiers may work in groups of as little as four people, the number of sensors measuring the acoustic phenomena may be an order of magnitude smaller than before. Moreover, the system should be useful to even a single soldier. Finally, additional requirements called for caliber estimation and weapon classification in addition to source localization. The paper presents the design and evaluation of our soldierwearable mobile countersniper system. It describes the hardware and software architecture including the custom sensor board equipped with a small microphone array and connected to a COTS MICAz mote [12]. Special emphasis is paid to the sensor fusion technique that estimates the trajectory, range, caliber and weapon type simultaneously. The results and analysis of an independent evaluation of the system at the US Army Aberdeen Test Center are also presented. 2. APPROACH 3. HARDWARE 4. SOFTWARE ARCHITECTURE 5. DETECTION ALGORITHM 6. SENSOR FUSION 6.1 Direction of arrival 6.2 Muzzle-shock fusion 6.3 Trajectory estimation 6.4 Shooter position estimation 6.5 Caliber estimation 6.6 Weapon estimation 7. RESULTS 7.1 Single sensor performance 7.2 Error sources 8. RELATED WORK Acoustic detection and recognition has been under research since the early fifties. The area has a close relevance to the topic of supersonic flow mechanics [20]. Fansler analyzed the complex near-field pressure waves that occur within a foot of the muzzle blast. Fansler's work gives a good idea of the ideal muzzle blast pressure wave without contamination from echoes or propagation effects [4]. Experiments with greater distances from the muzzle were conducted by Stoughton [18]. The measurements of the ballistic shockwaves using calibrated pressure transducers at known locations, measured bullet speeds, and miss distances of 3 55 meters for 5.56 mm and 7.62 mm projectiles were made. Results indicate that ground interaction becomes a problem for miss distances of 30 meters or larger. Another area of research is the signal processing of gunfire acoustics. The focus is on the robust detection and length estimation of small caliber acoustic shockwaves and muzzle blasts. Possible techniques for classifying signals as either shockwaves or muzzle blasts includes short-time Fourier Transform (STFT), the Smoothed Pseudo Wigner-Ville distribution (SPWVD), and a discrete wavelet transformation (DWT). Joint time-frequency (JTF) spectrograms are used to analyze the typical separation of the shockwave and muzzle blast transients in both time and frequency. Mays concludes that the DWT is the best method for classifying signals as either shockwaves or muzzle blasts because it works well and is less expensive to compute than the SPWVD [10]. The edges of the shockwave are typically well defined and the shockwave length is directly related to the bullet characteristics. 
A paper by Sadler [14] compares two shockwave edge detection methods: a simple gradient-based detector, and a multi-scale wavelet detector. It also demonstrates how the length of the shockwave, as determined by the edge detectors, can be used along with Whithams equations [20] to estimate the caliber of a projectile. Note that the available computational performance on the sensor nodes, the limited wireless bandwidth and real-time requirements render these approaches infeasible on our platform. A related topic is the research and development of experimental and prototype shooter location systems. Researchers at BBN have developed the Bullet Ears system [3] which has the capability to be installed in a fixed position or worn by soldiers. The fixed system has tetrahedron shaped microphone arrays with 1.5 meter spacing. The overall system consists of two to three of these arrays spaced 20 to 100 meters from each other. The soldier-worn system has 12 microphones as well as a GPS antenna and orientation sensors mounted on a helmet. There is a low speed RF connection from the helmet to the processing body. An extensive test has been conducted to measure the performance of both type of systems. The fixed systems performance was one order of magnitude better in the angle calculations while their range performance where matched. The angle accuracy of the fixed system was dominantly less than one degree while it was around five degrees for the helmet mounted one. The range accuracy was around 5 percent for both of the systems. The problem with this and similar centralized systems is the need of the one or handful of microphone arrays to be in line-of-sight of the shooter. A sensor networked based solution has the advantage of widely distributed sensing for better coverage, multipath effect compensation and multiple simultaneous shot resolution [8]. This is especially important for operation in acoustically reverberant urban areas. Note that BBN's current vehicle-mounted system called BOOMERANG, a modified version of Bullet Ears, is currently used in Iraq [1]. The company ShotSpotter specializes in law enforcement systems that report the location of gunfire to police within seconds. The goal of the system is significantly different than that of military systems. Shotspotter reports 25 m typical accuracy which is more than enough for police to respond. They are also manufacturing experimental soldier wearable and UAV mounted systems for military use [16], but no specifications or evaluation results are publicly available. 9. CONCLUSIONS The main contribution of this work is twofold. First, the performance of the overall distributed networked system is excellent. Most noteworthy are the trajectory accuracy of one degree, the correct caliber estimation rate of well over 90% and the close to 100% weapon classification rate for 4 of the 6 weapons tested. The system proved to be very robust when increasing the node location and orientation errors and decreasing the number of available sensors all the way down to a couple. The key factor behind this is the sensor fusion algorithm's ability to reject erroneous measurements. It is also worth mentioning that the results presented here correspond to the first and only test of the system beyond 100 m and with six different weapons. 
We believe that with the lessons learned in the test, a consecutive field experiment could have showed significantly improved results especially in range estimation beyond 100 m and weapon classification for the remaining two weapons that were mistaken for each other the majority of the times during the test. Second, the performance of the system when used in standalone mode, that is, when single sensors alone provided localization, was also very good. While the overall localization rate of 42% per sensor for shots up to 130 m could be improved, the bearing accuracy of less than a degree and the average 5% range error are remarkable using the handmade prototypes of the low-cost nodes. Note that 87% of the shots were successfully localized by at least one of the ten sensors utilized in standalone mode. We believe that the technology is mature enough that a next revision of the system could be a commercial one. However, important aspects of the system would still need to be worked on. We have not addresses power management yet. A current node runs on 4 AA batteries for about 12 hours of continuous operation. A deployable version of the sensor node would need to be asleep during normal operation and only wake up when an interesting event occurs. An analog trigger circuit could solve this problem, however, the system would miss the first shot. Instead, the acoustic channels would need to be sampled and stored in a circular buffer. The rest of the board could be turned off. When a trigger wakes up the board, the acoustic data would be immediately available. Experiments with a previous generation sensor board indicated that this could provide a 10x increase in battery life. Other outstanding issues include weatherproof packaging and ruggedization, as well as integration with current military infrastructure.
Shooter Localization and Weapon Classification with Soldier-Wearable Networked Sensors ABSTRACT The paper presents a wireless sensor network-based mobile countersniper system. A sensor node consists of a helmetmounted microphone array, a COTS MICAz mote for internode communication and a custom sensorboard that implements the acoustic detection and Time of Arrival (ToA) estimation algorithms on an FPGA. A 3-axis compass provides self orientation and Bluetooth is used for communication with the soldier's PDA running the data fusion and the user interface. The heterogeneous sensor fusion algorithm can work with data from a single sensor or it can fuse ToA or Angle of Arrival (AoA) observations of muzzle blasts and ballistic shockwaves from multiple sensors. The system estimates the trajectory, the range, the caliber and the weapon type. The paper presents the system design and the results from an independent evaluation at the US Army Aberdeen Test Center. The system performance is characterized by 1degree trajectory precision and over 95% caliber estimation accuracy for all shots, and close to 100% weapon estimation accuracy for 4 out of 6 guns tested. 1. INTRODUCTION The importance of countersniper systems is underscored by the constant stream of news reports coming from the Middle East. By the time the soldiers can react, they are gone. A countersniper system that provides almost immediate shooter location to every soldier in the vicinity would provide clear benefits to the warfigthers. Our team introduced PinPtr, the first sensor networkbased countersniper system [17, 8] in 2003. The system is based on potentially hundreds of inexpensive sensor nodes deployed in the area of interest forming an ad hoc multihop network. The acoustic sensors measure the Time of Arrival (ToA) of muzzle blasts and ballistic shockwaves, pressure waves induced by the supersonic projectile, send the data to a base station where a sensor fusion algorithm determines the origin of the shot. The truly unique characteristic of the system is that it works in such reverberant environments as cluttered urban terrain and that it can resolve multiple simultaneous shots at the same time. This capability is due to the widely distributed sensing and the unique sensor fusion approach [8]. The system has been tested several times in US Army MOUT (Military Operations in Urban Terrain) facilities. The obvious disadvantage of such a system is its static nature. Once the sensors are distributed, they cover a certain area. It is not practical to gather and reuse the sensors, especially under combat conditions. Even if the sensors are cheap, it is still a waste and a logistical problem to provide a continuous stream of sensors as the operations move from place to place. As it is primarily the soldiers that the system protects, a natural extension is to mount the sensors on the soldiers themselves. While there are vehiclemounted countersniper systems [1] available commercially, we are not aware of a deployed system that protects dismounted soldiers. A helmet-mounted system was developed in the mid 90s by BBN [3], but it was not continued beyond the Darpa program that funded it. To move from a static sensor network-based solution to a highly mobile one presents significant challenges. The sensor positions and orientation need to be constantly monitored. As soldiers may work in groups of as little as four people, the number of sensors measuring the acoustic phenomena may be an order of magnitude smaller than before. 
Moreover, the system should be useful to even a single soldier. Finally, additional requirements called for caliber estimation and weapon classification in addition to source localization. The paper presents the design and evaluation of our soldierwearable mobile countersniper system. It describes the hardware and software architecture including the custom sensor board equipped with a small microphone array and connected to a COTS MICAz mote [12]. Special emphasis is paid to the sensor fusion technique that estimates the trajectory, range, caliber and weapon type simultaneously. The results and analysis of an independent evaluation of the system at the US Army Aberdeen Test Center are also presented. 8. RELATED WORK Acoustic detection and recognition has been under research since the early fifties. Fansler analyzed the complex near-field pressure waves that occur within a foot of the muzzle blast. Fansler's work gives a good idea of the ideal muzzle blast pressure wave without contamination from echoes or propagation effects [4]. Experiments with greater distances from the muzzle were conducted by Stoughton [18]. Results indicate that ground interaction becomes a problem for miss distances of 30 meters or larger. Another area of research is the signal processing of gunfire acoustics. The focus is on the robust detection and length estimation of small caliber acoustic shockwaves and muzzle blasts. Joint time-frequency (JTF) spectrograms are used to analyze the typical separation of the shockwave and muzzle blast transients in both time and frequency. Mays concludes that the DWT is the best method for classifying signals as either shockwaves or muzzle blasts because it works well and is less expensive to compute than the SPWVD [10]. The edges of the shockwave are typically well defined and the shockwave length is directly related to the bullet characteristics. Note that the available computational performance on the sensor nodes, the limited wireless bandwidth and real-time requirements render these approaches infeasible on our platform. A related topic is the research and development of experimental and prototype shooter location systems. Researchers at BBN have developed the Bullet Ears system [3] which has the capability to be installed in a fixed position or worn by soldiers. The fixed system has tetrahedron shaped microphone arrays with 1.5 meter spacing. The overall system consists of two to three of these arrays spaced 20 to 100 meters from each other. The soldier-worn system has 12 microphones as well as a GPS antenna and orientation sensors mounted on a helmet. An extensive test has been conducted to measure the performance of both type of systems. The fixed systems performance was one order of magnitude better in the angle calculations while their range performance where matched. The angle accuracy of the fixed system was dominantly less than one degree while it was around five degrees for the helmet mounted one. The range accuracy was around 5 percent for both of the systems. The problem with this and similar centralized systems is the need of the one or handful of microphone arrays to be in line-of-sight of the shooter. A sensor networked based solution has the advantage of widely distributed sensing for better coverage, multipath effect compensation and multiple simultaneous shot resolution [8]. This is especially important for operation in acoustically reverberant urban areas. 
Note that BBN's current vehicle-mounted system called BOOMERANG, a modified version of Bullet Ears, is currently used in Iraq [1]. The company ShotSpotter specializes in law enforcement systems that report the location of gunfire to police within seconds. The goal of the system is significantly different than that of military systems. Shotspotter reports 25 m typical accuracy which is more than enough for police to respond. They are also manufacturing experimental soldier wearable and UAV mounted systems for military use [16], but no specifications or evaluation results are publicly available. 9. CONCLUSIONS The main contribution of this work is twofold. First, the performance of the overall distributed networked system is excellent. Most noteworthy are the trajectory accuracy of one degree, the correct caliber estimation rate of well over 90% and the close to 100% weapon classification rate for 4 of the 6 weapons tested. The system proved to be very robust when increasing the node location and orientation errors and decreasing the number of available sensors all the way down to a couple. The key factor behind this is the sensor fusion algorithm's ability to reject erroneous measurements. It is also worth mentioning that the results presented here correspond to the first and only test of the system beyond 100 m and with six different weapons. Second, the performance of the system when used in standalone mode, that is, when single sensors alone provided localization, was also very good. While the overall localization rate of 42% per sensor for shots up to 130 m could be improved, the bearing accuracy of less than a degree and the average 5% range error are remarkable using the handmade prototypes of the low-cost nodes. Note that 87% of the shots were successfully localized by at least one of the ten sensors utilized in standalone mode. We believe that the technology is mature enough that a next revision of the system could be a commercial one. However, important aspects of the system would still need to be worked on. A current node runs on 4 AA batteries for about 12 hours of continuous operation. A deployable version of the sensor node would need to be asleep during normal operation and only wake up when an interesting event occurs. An analog trigger circuit could solve this problem, however, the system would miss the first shot. Instead, the acoustic channels would need to be sampled and stored in a circular buffer. The rest of the board could be turned off. When a trigger wakes up the board, the acoustic data would be immediately available. Experiments with a previous generation sensor board indicated that this could provide a 10x increase in battery life.
J-41
An Analysis of Alternative Slot Auction Designs for Sponsored Search
Billions of dollars are spent each year on sponsored search, a form of advertising where merchants pay for placement alongside web search results. Slots for ad listings are allocated via an auction-style mechanism where the higher a merchant bids, the more likely his ad is to appear above other ads on the page. In this paper we analyze the incentive, efficiency, and revenue properties of two slot auction designs: rank by bid (RBB) and rank by revenue (RBR), which correspond to stylized versions of the mechanisms currently used by Yahoo! and Google, respectively. We also consider first- and second-price payment rules together with each of these allocation rules, as both have been used historically. We consider both the short-run incomplete information setting and the long-run complete information setting. With incomplete information, neither RBB nor RBR are truthful with either first or second pricing. We find that the informational requirements of RBB are much weaker than those of RBR, but that RBR is efficient whereas RBB is not. We also show that no revenue ranking of RBB and RBR is possible given an arbitrary distribution over bidder values and relevance. With complete information, we find that no equilibrium exists with first pricing using either RBB or RBR. We show that there typically exists a multitude of equilibria with second pricing, and we bound the divergence of (economic) value in such equilibria from the value obtained assuming all merchants bid truthfully.
[ "altern slot auction design", "sponsor search", "sponsor search", "ad list", "rank by bid", "rank by revenu", "incomplet inform", "second price", "auction-style mechan", "second-price payment rule", "equilibrium multitud", "diverg of econom valu", "diverg of valu", "combin market capit", "resurg onlin advertis industri", "web search engin", "pai per click", "search engin", "slot alloc", "auction theori" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "M", "M", "R", "R", "R", "U", "M", "M", "U", "M", "R", "M" ]
An Analysis of Alternative Slot Auction Designs for Sponsored Search S´ebastien Lahaie ∗ Division of Engineering and Applied Sciences Harvard University, Cambridge, MA 02138 slahaie@eecs.harvard.edu ABSTRACT Billions of dollars are spent each year on sponsored search, a form of advertising where merchants pay for placement alongside web search results. Slots for ad listings are allocated via an auction-style mechanism where the higher a merchant bids, the more likely his ad is to appear above other ads on the page. In this paper we analyze the incentive, efficiency, and revenue properties of two slot auction designs: rank by bid (RBB) and rank by revenue (RBR), which correspond to stylized versions of the mechanisms currently used by Yahoo! and Google, respectively. We also consider first- and second-price payment rules together with each of these allocation rules, as both have been used historically. We consider both the short-run incomplete information setting and the long-run complete information setting. With incomplete information, neither RBB nor RBR are truthful with either first or second pricing. We find that the informational requirements of RBB are much weaker than those of RBR, but that RBR is efficient whereas RBB is not. We also show that no revenue ranking of RBB and RBR is possible given an arbitrary distribution over bidder values and relevance. With complete information, we find that no equilibrium exists with first pricing using either RBB or RBR. We show that there typically exists a multitude of equilibria with second pricing, and we bound the divergence of (economic) value in such equilibria from the value obtained assuming all merchants bid truthfully. Categories and Subject Descriptors J.4 [Computer Applications]: Social and Behavioral Sciences-Economics General Terms Economics, Theory 1. INTRODUCTION Today, Internet giants Google and Yahoo! boast a combined market capitalization of over $150 billion, largely on the strength of sponsored search, the fastest growing component of a resurgent online advertising industry. PricewaterhouseCoopers estimates that 2004 industry-wide sponsored search revenues were $3.9 billion, or 40% of total Internet advertising revenues.1 Industry watchers expect 2005 revenues to reach or exceed $7 billion.2 Roughly 80% of Google``s estimated $4 billion in 2005 revenue and roughly 45% of Yahoo!``s estimated $3.7 billion in 2005 revenue will likely be attributable to sponsored search.3 A number of other companies-including LookSmart, FindWhat, InterActiveCorp (Ask Jeeves), and eBay (Shopping.com)-earn hundreds of millions of dollars of sponsored search revenue annually. Sponsored search is a form of advertising where merchants pay to appear alongside web search results. For example, when a user searches for used honda accord san diego in a web search engine, a variety of commercial entities (San Diego car dealers, Honda Corp, automobile information portals, classified ad aggregators, eBay, etc.. .) may bid to to have their listings featured alongside the standard algorithmic search listings. Advertisers bid for placement on the page in an auction-style format where the higher they bid the more likely their listing will appear above other ads on the page. By convention, sponsored search advertisers generally pay per click, meaning that they pay only when a user clicks on their ad, and do not pay if their ad is displayed but not clicked. 
Though many people claim to systematically ignore sponsored search ads, Majestic Research reports that 1 www.iab.net/resources/adrevenue/pdf/IAB PwC 2004full.pdf 2 battellemedia.com/archives/002032.php 3 These are rough back of the envelope estimates. Google and Yahoo! 2005 revenue estimates were obtained from Yahoo! Finance. We assumed $7 billion in 2005 industry-wide sponsored search revenues. We used Nielsen/NetRatings estimates of search engine market share in the US, the most monetized market: wired-vig. wired.com/news/technology/0,1282,69291,00.html Using comScore``s international search engine market share estimates would yield different estimates: www.comscore.com/press/release.asp?press=622 218 as many as 17% of Google searches result in a paid click, and that Google earns roughly nine cents on average for every search query they process.4 Usually, sponsored search results appear in a separate section of the page designated as sponsored above or to the right of the algorithmic results. Sponsored search results are displayed in a format similar to algorithmic results: as a list of items each containing a title, a text description, and a hyperlink to a corresponding web page. We call each position in the list a slot. Generally, advertisements that appear in a higher ranked slot (higher on the page) garner more attention and more clicks from users. Thus, all else being equal, merchants generally prefer higher ranked slots to lower ranked slots. Merchants bid for placement next to particular search queries; for example, Orbitz and Travelocity may bid for las vegas hotel while Dell and HP bid for laptop computer. As mentioned, bids are expressed as a maximum willingness to pay per click. For example, a forty-cent bid by HostRocket for web hosting means HostRocket is willing to pay up to forty cents every time a user clicks on their ad.5 The auctioneer (the search engine6 ) evaluates the bids and allocates slots to advertisers. In principle, the allocation decision can be altered with each new incoming search query, so in effect new auctions clear continuously over time as search queries arrive. Many allocation rules are plausible. In this paper, we investigate two allocation rules, roughly corresponding to the two allocation rules used by Yahoo! and Google. The rank by bid (RBB) allocation assigns slots in order of bids, with higher ranked slots going to higher bidders. The rank by revenue (RBR) allocation assigns slots in order of the product of bid times expected relevance, where relevance is the proportion of users that click on the merchant``s ad after viewing it. In our model, we assume that an ad``s expected relevance is known to the auctioneer and the advertiser (but not necessarily to other advertisers), and that clickthrough rate decays monotonically with lower ranked slots. In practice, the expected clickthrough rate depends on a number of factors, including the position on the page, the ad text (which in turn depends on the identity of the bidder), the nature and intent of the user, and the context of other ads and algorithmic results on the page, and must be learned over time by both the auctioneer and the bidder [13]. As of this writing, to a rough first-order approximation, Yahoo! 
employs a RBB allocation and Google employs a RBR allocation, though numerous caveats apply in both cases when it comes to the vagaries of real-world implementations.7 Even when examining a one-shot version of a slot auction, the mechanism differs from a standard multi-item auction in subtle ways. [Footnote 4: battellemedia.com/archives/001102.php] [Footnote 5: Usually advertisers also set daily or monthly budget caps; in this paper we do not model budget constraints.] [Footnote 6: In the sponsored search industry, the auctioneer and search engine are not always the same entity. For example Google runs the sponsored search ads for AOL web search, with revenue being shared. Similarly, Yahoo! currently runs the sponsored search ads for MSN web search, though Microsoft will begin independent operations soon.] [Footnote 7: Here are two among many exceptions to the Yahoo! = RBB and Google = RBR assertion: (1) Yahoo! excludes ads deemed insufficiently relevant either by a human editor or due to poor historical click rate; (2) Google sets differing reserve prices depending on Google's estimate of ad quality.] First, a single bid per merchant is used to allocate multiple non-identical slots. Second, the bid is communicated not as a direct preference over slots, but as a preference for clicks that depend stochastically on slot allocation. We investigate a number of economic properties of RBB and RBR slot auctions. We consider the short-run incomplete information case in Section 3, adapting and extending standard analyses of single-item auctions. In Section 4 we turn to the long-run complete information case; our characterization results here draw on techniques from linear programming. Throughout, important observations are highlighted as claims supported by examples. Our contributions are as follows: • We show that with multiple slots, bidders do not reveal their true values with either RBB or RBR, and with either first- or second-pricing. • With incomplete information, we find that the informational requirements of playing the equilibrium bid are much weaker for RBB than for RBR, because bidders need not know any information about each others' relevance (or even their own) with RBB. • With incomplete information, we prove that RBR is efficient but that RBB is not. • We show via a simple example that no general revenue ranking of RBB and RBR is possible. • We prove that in a complete-information setting, first-price slot auctions have no pure strategy Nash equilibrium, but that there always exists a pure-strategy equilibrium with second pricing. • We provide a constant-factor bound on the deviation from efficiency that can occur in the equilibrium of a second-price slot auction. In Section 2 we specify our model of bidders and the various slot auction formats. In Section 3.1 we study the incentive properties of each format, asking in which cases agents would bid truthfully. There is possible confusion here because the second-price design for slot auctions is reminiscent of the Vickrey auction for a single item; we note that for slot auctions the Vickrey mechanism is in fact very different from the second-price mechanism, and so they have different incentive properties.8 In Section 3.2 we derive the Bayes-Nash equilibrium bids for the various auction formats. This is useful for the efficiency and revenue results in later sections. It should become clear in this section that slot auctions in our model are a straightforward generalization of single-item auctions.
Sections 3.3 and 3.4 address questions of efficiency and revenue under incomplete information, respectively. In Section 4.1 we determine whether pure-strategy equilibria exist for the various auction formats, under complete information. In Section 4.2 we derive bounds on the deviation from efficiency in the pure-strategy equilibria of secondprice slot auctions. Our approach is positive rather than normative. We aim to clarify the incentive, efficiency, and revenue properties of two slot auction designs currently in use, under settings of 8 Other authors have also made this observation [5, 6]. 219 incomplete and complete information. We do not attempt to derive the optimal mechanism for a slot auction. Related work. Feng et al. [7] compare the revenue performance of various ranking mechanisms for slot auctions in a model with incomplete information, much as we do in Section 3.4, but they obtain their results via simulations whereas we perform an equilibrium analysis. Liu and Chen [12] study properties of slot auctions under incomplete information. Their setting is essentially the same as ours, except they restrict their attention to a model with a single slot and a binary type for bidder relevance (high or low). They find that RBR is efficient, but that no general revenue ranking of RBB and RBR is possible, which agrees with our results. They also take a design approach and show how the auctioneer should assign relevance scores to optimize its revenue. Edelman et al. [6] model the slot auction problem both as a static game of complete information and a dynamic game of incomplete information. They study the locally envyfree equilibria of the static game of complete information; this is a solution concept motivated by certain bidding behaviors that arise due to the presence of budget constraints. They do not view slot auctions as static games of incomplete information as we do, but do study them as dynamic games of incomplete information and derive results on the uniqueness and revenue properties of the resulting equilibria. They also provide a nice description of the evolution of the market for sponsored search. Varian [18] also studies slot auctions under a setting of complete information. He focuses on symmetric equilibria, which are a refinement of Nash equilibria appropriate for slot auctions. He provides bounds on the revenue obtained in equilibrium. He also gives bounds that can be used to infer bidder values given their bids, and performs some empirical analysis using these results. In contrast, we focus instead on efficiency and provide bounds on the deviation from efficiency in complete-information equilibria. 2. PRELIMINARIES We focus on a slot auction for a single keyword. In a setting of incomplete information, a bidder knows only distributions over others'' private information (value per click and relevance). With complete information, a bidder knows others'' private information, and so does not need to rely on distributions to strategize. We first describe the model for the case with incomplete information, and drop the distributional information from the model when we come to the complete-information case in Section 4. 2.1 The Model There is a fixed number K of slots to be allocated among N bidders. We assume without loss of generality that K ≤ N, since superfluous slots can remain blank. 
Bidder i assigns a value of Xi to each click received on its advertisement, regardless of this advertisement's rank.9 [Footnote 9: Indeed Kitts et al. [10] find that in their sample of actual click data, the correlation between rank and conversion rate is not statistically significant. However, for the purposes of our model it is also important that bidders believe that conversion rate does not vary with rank.] The probability that i's advertisement will be clicked if viewed is Ai ∈ [0, 1]. We refer to Ai as bidder i's relevance. We refer to Ri = AiXi as bidder i's revenue. The Xi, Ai, and Ri are random variables and we denote their realizations by xi, αi, and ri respectively. The probability that an advertisement will be viewed if placed in slot j is γj ∈ [0, 1]. We assume γ1 > γ2 > ... > γK. Hence bidder i's advertisement will have a clickthrough rate of γjαi if placed in slot j. Of course, an advertisement does not receive any clicks if it is not allocated a slot. Each bidder's value and relevance pair (Xi, Ai) is independently and identically distributed on [0, x̄] × [0, 1] according to a continuous density function f that has full support on its domain. The density f and slot probabilities γ1, ..., γK are common knowledge. Only bidder i knows the realization xi of its value per click Xi. Both bidder i and the seller know the realization αi of Ai, but this realization remains unobservable to the other bidders. We assume that bidders have quasi-linear utility functions. That is, the expected utility to bidder i of obtaining the slot of rank j at a price of b per click is ui(j, b) = γjαi(xi − b). If the advertising firms bidding in the slot auction are risk-neutral and have ample liquidity, quasi-linearity is a reasonable assumption. The assumptions of independence, symmetry, and risk-neutrality made above are all quite standard in single-item auction theory [11, 19]. The assumption that clickthrough rate decays monotonically with lower slots, by the same factors for each agent, is unique to the slot auction problem. We view it as a main contribution of our work to show that this assumption allows for tractable analysis of the slot auction problem using standard tools from single-item auction theory. It also allows for interesting results in the complete information case. A common model of decaying clickthrough rate is the exponential decay model, where γk = 1/δ^(k−1) with decay δ > 1. Feng et al. [7] state that their actual clickthrough data is fitted extremely well by an exponential decay model with δ = 1.428. Our model lacks budget constraints, which are an important feature of real slot auctions. With budget constraints keyword auctions cannot be considered independently of one another, because the budget must be allocated across multiple keywords; a single advertiser typically bids on multiple keywords relevant to his business. Introducing this element into the model is an important next step for future work.10 2.2 Auction Formats In a slot auction a bidder provides to the seller a declared value per click ˜xi(xi, αi) which depends on his true value and relevance. We often denote this declared value (bid) by ˜xi for short. Since a bidder's relevance αi is observable to the seller, the bidder cannot misrepresent it. We denote the kth highest of the N declared values by ˜x(k), and the kth highest of the N declared revenues by ˜r(k), where the declared revenue of bidder i is ˜ri = αi ˜xi.
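A quick numeric illustration of the clickthrough and utility model just defined, using the exponential decay fit δ = 1.428 reported by Feng et al. [7]. The bidder numbers are made up for illustration only.

```python
DELTA = 1.428  # exponential decay parameter fitted by Feng et al. [7]

def gamma(k: int) -> float:
    """View probability of slot k (1-indexed) under exponential decay: gamma_k = 1 / delta^(k-1)."""
    return 1.0 / DELTA ** (k - 1)

def expected_utility(slot: int, price: float, value: float, relevance: float) -> float:
    """u_i(j, b) = gamma_j * alpha_i * (x_i - b): expected clicks times surplus per click."""
    return gamma(slot) * relevance * (value - price)

# A bidder with value x_i = 0.40 per click and relevance alpha_i = 0.1 compares
# slot 1 at a price of 0.30 per click with slot 3 at a price of 0.10 per click.
print(round(expected_utility(1, 0.30, 0.40, 0.1), 4))  # 1.000 * 0.1 * 0.10 = 0.0100
print(round(expected_utility(3, 0.10, 0.40, 0.1), 4))  # 0.490 * 0.1 * 0.30 ≈ 0.0147, so the lower slot is preferred
```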
2.2 Auction Formats

In a slot auction a bidder provides to the seller a declared value per click x̃i(xi, αi) which depends on his true value and relevance. We often denote this declared value (bid) by x̃i for short. Since a bidder's relevance αi is observable to the seller, the bidder cannot misrepresent it. We denote the kth highest of the N declared values by x̃(k), and the kth highest of the N declared revenues by r̃(k), where the declared revenue of bidder i is r̃i = αi x̃i.

We consider two types of allocation rules, rank by bid (RBB) and rank by revenue (RBR):

RBB. Slot k goes to bidder i if and only if x̃i = x̃(k).

RBR. Slot k goes to bidder i if and only if r̃i = r̃(k).

We will commonly represent an allocation by a one-to-one function σ : [K] → [N], where [n] is the set of integers {1, 2, ..., n}. Hence slot k goes to bidder σ(k). We also consider two different types of payment rules. Note that no matter what the payment rule, a bidder that is not allocated a slot will pay 0 since his listing cannot receive any clicks.

First-price. The bidder allocated slot k, namely σ(k), pays x̃σ(k) per click under both the RBB and RBR allocation rules.

Second-price. If k < N, bidder σ(k) pays x̃σ(k+1) per click under the RBB rule, and pays r̃σ(k+1)/ασ(k) per click under the RBR rule. If k = N, bidder σ(k) pays 0 per click.11

11 We are effectively assuming a reserve price of zero, but in practice search engines charge a non-zero reserve price per click.

Intuitively, a second-price payment rule sets a bidder's payment to the lowest bid it could have declared while maintaining the same ranking, given the allocation rule used. Overture introduced the first slot auction design in 1997, using a first-price RBB scheme. Google then followed in 2000 with a second-price RBR scheme. In 2002, Overture (at this point acquired by Yahoo!) then switched to second pricing but still allocates using RBB. One possible reason for the switch is given in Section 4.

We assume that ties are broken as follows in the event that two agents make the exact same bid or declare the same revenue. There is a permutation of the agents κ : [N] → [N] that is fixed beforehand. If the bids of agents i and j are tied, then agent i obtains a higher slot if and only if κ(i) < κ(j). This is consistent with the practice in real slot auctions where ties are broken by the bidders' order of arrival.

3. INCOMPLETE INFORMATION

3.1 Incentives

It should be clear that with a first-price payment rule, truthful bidding is neither a dominant strategy nor an ex post Nash equilibrium using either RBB or RBR, because this guarantees a payoff of 0. There is always an incentive to shade true values with first pricing. The second-price payment rule is reminiscent of the second-price (Vickrey) auction used for selling a single item, and in a Vickrey auction it is a dominant strategy for a bidder to reveal his true value for the item [19]. However, using a second-price rule in a slot auction together with either allocation rule above does not yield an incentive-compatible mechanism, either in dominant strategies or ex post Nash equilibrium.12

12 Unless of course there is only a single slot available, since this is the single-item case. With a single slot both RBB and RBR with a second-price payment rule are dominant-strategy incentive-compatible.

With a second-price rule there is no incentive for a bidder to bid higher than his true value per click using either RBB or RBR: this either leads to no change
in the outcome, or a situation in which he will have to pay more than his value per click for each click received, resulting in a negative payoff.13 However, with either allocation rule there may be an incentive to shade true values with second pricing.

13 In a dynamic setting with second pricing, there may be an incentive to bid higher than one's true value in order to exhaust competitors' budgets. This phenomenon is commonly called bid jamming or antisocial bidding [4].

Claim 1. With second pricing and K ≥ 2, truthful bidding is not a dominant strategy nor an ex post Nash equilibrium for either RBB or RBR.

Example. There are two agents and two slots. The agents have relevance α1 = α2 = 1, whereas γ1 = 1 and γ2 = 1/2. Agent 1 has a value of x1 = 6 per click, and agent 2 has a value of x2 = 4 per click. Let us first consider the RBB rule. Suppose agent 2 bids truthfully. If agent 1 also bids truthfully, he wins the first slot and obtains a payoff of 2. However, if he shades his bid down below 4, he obtains the second slot at a cost of 0 per click yielding a payoff of 3. Since the agents have equal relevance, the exact same situation holds with the RBR rule. Hence truthful bidding is not a dominant strategy in either format, and neither is it an ex post Nash equilibrium.

To find payments that make RBB and RBR dominant-strategy incentive-compatible, we can apply Holmstrom's lemma [9] (see also chapter 3 in Milgrom [15]). Under the restriction that a bidder with value 0 per click does not pay anything (even if he obtains a slot, which can occur if there are as many slots as bidders), this lemma implies that there is a unique payment rule that achieves dominant-strategy incentive compatibility for either allocation rule. For RBB, the bidder allocated slot k is charged per click

Σ_{i=k+1}^{K} (γ_{i−1} − γ_i) x̃(i) + γ_K x̃(K+1)    (1)

Note that if K = N, x̃(K+1) = 0 since there is no (K+1)th bidder. For RBR, the bidder allocated slot k is charged per click

(1/α_{σ(k)}) ( Σ_{i=k+1}^{K} (γ_{i−1} − γ_i) r̃(i) + γ_K r̃(K+1) )    (2)

Using payment rule (2) and RBR, the auctioneer is aware of the true revenues of the bidders (since they reveal their values truthfully), and hence ranks them according to their true revenues. We show in Section 3.3 that this allocation is in fact efficient. Since the VCG mechanism is the unique mechanism that is efficient, truthful, and ensures bidders with value 0 pay nothing (by the Green-Laffont theorem [8]), the RBR rule and payment scheme (2) constitute exactly the VCG mechanism. In the VCG mechanism an agent pays the externality he imposes on others. To understand payment (2) in this sense, note that the first term is the added utility (due to an increased clickthrough rate) agents in slots k + 1 to K would receive if they were all to move up a slot; the last term is the utility that the agent with the (K+1)st highest revenue would receive by obtaining the last slot as opposed to nothing. The leading coefficient simply reduces the agent's expected payment to a payment per click.
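The telescoping structure of the truthful per-click payments (1) and (2) is easy to mechanize. The sketch below (our own, with hypothetical function names) assumes truthful declarations sorted in decreasing order, as the rules require.

```python
# Sketch of the truthful per-click payment rules (1) and (2); names are ours.

def rbb_truthful_payment(k, gammas, sorted_values):
    """Payment (1): the bidder in slot k (1-indexed) pays per click
    sum_{i=k+1}^{K} (gamma_{i-1} - gamma_i) * x(i)  +  gamma_K * x(K+1),
    where x(i) is the i-th highest declared value and x(K+1) = 0 if K = N."""
    K = len(gammas)
    x = lambda i: sorted_values[i - 1] if i <= len(sorted_values) else 0.0
    total = sum((gammas[i - 2] - gammas[i - 1]) * x(i) for i in range(k + 1, K + 1))
    return total + gammas[K - 1] * x(K + 1)

def rbr_truthful_payment(k, gammas, sorted_revenues, alpha_k):
    """Payment (2): the same telescoping sum over declared revenues, divided by the
    winner's relevance alpha_sigma(k) to convert it into a per-click price."""
    K = len(gammas)
    r = lambda i: sorted_revenues[i - 1] if i <= len(sorted_revenues) else 0.0
    total = sum((gammas[i - 2] - gammas[i - 1]) * r(i) for i in range(k + 1, K + 1))
    return (total + gammas[K - 1] * r(K + 1)) / alpha_k
```

For the two-slot example above (γ = (1, 1/2), declared values 6 and 4, unit relevance), rule (1) charges the top bidder (1 − 1/2)·4 + (1/2)·0 = 2 per click, and the second bidder pays 0.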
3.2 Equilibrium Analysis

To understand the efficiency and revenue properties of the various auction formats, we must first understand which rankings of the bidders occur in equilibrium with different allocation and payment rule combinations. The following lemma essentially follows from the Monotonic Selection Theorem by Milgrom and Shannon [16].

Lemma 1. In a RBB (RBR) auction with either a first- or second-price payment rule, the symmetric Bayes-Nash equilibrium bid is strictly increasing with value (revenue).

As a consequence of this lemma, we find that RBB and RBR auctions allocate the slots greedily by the true values and revenues of the agents, respectively (whether using first- or second-price payment rules). This will be relevant in Section 3.3 below.

For a first-price payment rule, we can explicitly derive the symmetric Bayes-Nash equilibrium bid functions for RBB and RBR auctions. The purpose of this exercise is to lend qualitative insights into the parameters that influence an agent's bidding, and to derive formulae for the expected revenue in RBB and RBR auctions in order to make a revenue ranking of these two allocation rules (in Section 3.4).

Let G(y) be the expected resulting clickthrough rate, in a symmetric equilibrium of the RBB auction (with either payment rule), to a bidder with value y and relevance α = 1. Let H(y) be the analogous quantity for a bidder with revenue y and relevance 1 in a RBR auction. By Lemma 1, a bidder with value y will obtain slot k in a RBB auction if y is the kth highest of the true realized values. The same applies in a RBR auction when y is the kth highest of the true realized revenues. Let FX(y) be the distribution function for value, and let FR(y) be the distribution function for revenue. The probability that y is the kth highest out of N values is

(N−1 choose k−1) (1 − FX(y))^(k−1) FX(y)^(N−k)

whereas the probability that y is the kth highest out of N revenues is the same formula with FR replacing FX. Hence we have

G(y) = Σ_{k=1}^{K} γ_k (N−1 choose k−1) (1 − FX(y))^(k−1) FX(y)^(N−k)

The H function is analogous to G with FR replacing FX. In the two propositions that follow, g and h are the derivatives of G and H respectively. We omit the proof of the next proposition, because it is almost identical to the derivation of the equilibrium bid in the single-item case (see Krishna [11], Proposition 2.2).

Proposition 1. The symmetric Bayes-Nash equilibrium strategies in a first-price RBB auction are given by

x̃^B(x, α) = (1/G(x)) ∫_0^x y g(y) dy

The first-price equilibrium above closely parallels the first-price equilibrium in the single-item model. With a single item g is the density of the second highest value among all N agent values, whereas in a slot auction it is a weighted combination of the densities for the second, third, etc. highest values. Note that the symmetric Bayes-Nash equilibrium bid in a first-price RBB auction does not depend on a bidder's relevance α. To see clearly why, note that a bidder chooses a bid b so as to maximize the objective α G(x̃^{−1}(b)) (x − b), and here α is just a leading constant factor. So dropping it does not change the set of optimal solutions. Hence the equilibrium bid depends only on the value x and function G, and G in turn depends only on the marginal cumulative distribution of value FX. So really only the latter needs to be common knowledge to the bidders. On the other hand, we will now see that information about relevance is needed for bidders to play the equilibrium in the first-price RBR auction. So the informational requirements for a first-price RBB auction are much weaker than for a first-price RBR auction: in the RBB auction a bidder need not know his own relevance, and need not know any distributional information over others' relevance in order to play the equilibrium.
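As a quick numerical illustration (ours, not from the paper), the Proposition 1 bid can be evaluated for a concrete case, say values uniform on [0, 1], by approximating G and using integration by parts, ∫_0^x y g(y) dy = x G(x) − ∫_0^x G(y) dy. The helper names below are hypothetical.

```python
from math import comb

def G(y, gammas, N, F=lambda v: v):
    """Expected clickthrough to a bidder with value y and relevance 1 in a symmetric
    RBB equilibrium: sum_k gamma_k * C(N-1, k-1) * (1 - F(y))**(k-1) * F(y)**(N-k)."""
    return sum(g * comb(N - 1, k) * (1 - F(y)) ** k * F(y) ** (N - 1 - k)
               for k, g in enumerate(gammas))

def first_price_rbb_bid(x, gammas, N, steps=10_000):
    """Proposition 1 bid (1/G(x)) * int_0^x y g(y) dy, rewritten by parts as
    x - (1/G(x)) * int_0^x G(y) dy and approximated with a midpoint Riemann sum."""
    if x == 0:
        return 0.0
    h = x / steps
    integral = sum(G((i + 0.5) * h, gammas, N) * h for i in range(steps))
    return x - integral / G(x, gammas, N)

# Example: 2 slots with gamma = (1, 1/2), 3 bidders, values uniform on [0, 1].
# Here G(y) reduces to y, so the equilibrium bid is x/2; the call below prints ~0.4.
print(first_price_rbb_bid(0.8, gammas=[1.0, 0.5], N=3))
```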
Again we omit the next proposition's proof since it is so similar to the one above.

Proposition 2. The symmetric Bayes-Nash equilibrium strategies in a first-price RBR auction are given by

x̃^R(x, α) = (1/(α H(αx))) ∫_0^{αx} y h(y) dy

Here it can be seen that the equilibrium bid is increasing with x, but not necessarily with α. This should not be much of a concern to the auctioneer, however, because in any case the declared revenue in equilibrium is always increasing in the true revenue. It would be interesting to obtain the equilibrium bids when using a second-price payment rule, but it appears that the resulting differential equations for this case do not have a neat analytical solution. Nonetheless, the same conclusions about the informational requirements of the RBB and RBR rules still hold, as can be seen simply by inspecting the objective function associated with an agent's bidding problem for the second-price case.

3.3 Efficiency

A slot auction is efficient if in equilibrium the sum of the bidders' revenues from their allocated slots is maximized. Using symmetry as our equilibrium selection criterion, we find that the RBB auction is not efficient with either payment rule.

Claim 2. The RBB auction is not efficient with either first or second pricing.

Example. There are two agents and one slot, with γ1 = 1. Agent 1 has a value of x1 = 6 per click and relevance α1 = 1/2. Agent 2 has a value of x2 = 4 per click and relevance α2 = 1. By Lemma 1, agents are ranked greedily by value. Hence agent 1 obtains the lone slot, for a total revenue of 3 to the agents. However, it is most efficient to allocate the slot to agent 2, for a total revenue of 4. Examples with more agents or more slots are simple to construct along the same lines.

On the other hand, under our assumptions on how clickthrough rate decreases with lower rank, the RBR auction is efficient with either payment rule.

Theorem 1. The RBR auction is efficient with either first- or second-price payment rules.

Proof. Since by Lemma 1 the agents' equilibrium bids are increasing functions of their revenues in the RBR auction, slots are allocated greedily according to true revenues. Let σ be a non-greedy allocation. Then there are slots s, t with s < t and rσ(s) < rσ(t). We can switch the agents in slots s and t to obtain a new allocation, and the difference between the total revenue in this new allocation and the original allocation's total revenue is

(γ_t r_{σ(s)} + γ_s r_{σ(t)}) − (γ_s r_{σ(s)} + γ_t r_{σ(t)}) = (γ_s − γ_t)(r_{σ(t)} − r_{σ(s)})

Both parenthesized factors on the right-hand side are positive. Hence the switch has increased the total revenue to the bidders. If we continue to perform such switches, we will eventually reach a greedy allocation of greater revenue than the initial allocation. Since the initial allocation was arbitrary, it follows that a greedy allocation is always efficient, and hence the RBR auction's allocation is efficient.

Note that the assumption that clickthrough rate decays monotonically by the same factors γ1, ..., γK for all agents is crucial to this result. A greedy allocation scheme does not necessarily find an efficient solution if the clickthrough rates are monotonically decreasing in an independent fashion for each agent.
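A tiny sketch (our own illustration) of the Claim 2 example makes the contrast explicit: ranking by bid gives the slot to agent 1 for a total revenue of 3, while ranking by revenue gives it to agent 2 for a total revenue of 4.

```python
# Claim 2 example: one slot (gamma_1 = 1), two agents described by (value, relevance).
agents = {"agent 1": (6.0, 0.5), "agent 2": (4.0, 1.0)}
revenue = {name: x * a for name, (x, a) in agents.items()}   # r_i = alpha_i * x_i

rbb_winner = max(agents, key=lambda n: agents[n][0])   # rank by bid (value)
rbr_winner = max(agents, key=lambda n: revenue[n])     # rank by revenue

print(rbb_winner, revenue[rbb_winner])   # agent 1, total revenue 3.0
print(rbr_winner, revenue[rbr_winner])   # agent 2, total revenue 4.0
```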
3.4 Revenue

To obtain possible revenue rankings for the different auction formats, we first note that when the allocation rule is fixed to RBB, then using either a first-price, second-price, or truthful payment rule leads to the same expected revenue in a symmetric, increasing Bayes-Nash equilibrium. Because a RBB auction ranks agents by their true values in equilibrium for any of these payment rules (by Lemma 1), it follows that expected revenue is the same for all these payment rules, following arguments that are virtually identical to those used to establish revenue equivalence in the single-item case (see e.g. Proposition 3.1 in Krishna [11]). The same holds for RBR auctions; however, the revenue ranking of the RBB and RBR allocation rules is still unclear. Because of this revenue equivalence principle, we can choose whichever payment rule is most convenient for the purpose of making revenue comparisons.

Using Propositions 1 and 2, it is a simple matter to derive formulae for the expected revenue under both allocation rules. The expected payment of an agent in a first-price RBB auction is

m^B(x, α) = α G(x) x̃^B(x, α)

The expected revenue is then N · E[m^B(X, A)], where the expectation is taken with respect to the joint density of value and relevance. The expected revenue formula for RBR auctions is entirely analogous using x̃^R(x, α) and the H function. With these in hand we can obtain revenue rankings for specific numbers of bidders and slots, and specific distributions over values and relevance.

Claim 3. For fixed K, N, and fixed γ1, ..., γK, no revenue ranking of RBB and RBR is possible for an arbitrary density f.

Example. Assume there are 2 bidders, 2 slots, and that γ1 = 1, γ2 = 1/2. Assume that value-relevance pairs are uniformly distributed over [0, 1] × [0, 1]. For such a distribution with a closed-form formula, it is most convenient to use the revenue formulae just derived. RBB dominates RBR in terms of revenue for these parameters. The formula for the expected revenue in a RBB auction yields 1/12, whereas for RBR auctions we have 7/108.

Assume instead that with probability 1/2 an agent's value-relevance pair is (1, 1/2), and that with probability 1/2 it is (1/2, 1). In this scenario it is more convenient to appeal to formulae (1) and (2). In a truthful auction the second agent will always pay 0. According to (1), in a truthful RBB auction the first agent makes an expected payment of

E[(γ1 − γ2) Aσ(1) Xσ(2)] = (1/2) E[Aσ(1)] E[Xσ(2)]

where we have used the fact that value and relevance are independently distributed for different agents. The expected relevance of the agent with the highest value is E[Aσ(1)] = 5/8. The expected second highest value is also E[Xσ(2)] = 5/8. The expected revenue for a RBB auction here is then 25/128. According to (2), in a truthful RBR auction the first agent makes an expected payment of

E[(γ1 − γ2) Rσ(2)] = (1/2) E[Rσ(2)]

In expectation the second highest revenue is E[Rσ(2)] = 1/2, so the expected revenue for a RBR auction is 1/4. Hence in this case the RBR auction yields higher expected revenue.14 15

14 To be entirely rigorous and consistent with our initial assumptions, we should have constructed a continuous probability density with full support over an appropriate domain. Taking the domain to be e.g. [0, 1] × [0, 1] and a continuous density with full support that is sufficiently concentrated around (1, 1/2) and (1/2, 1), with roughly equal mass around both, would yield the same conclusion.

15 Claim 3 should serve as a word of caution, because Feng et al. [7] find through their simulations that with a bivariate normal distribution over value-relevance pairs, and with 5 slots, 15 bidders, and δ = 2, RBR dominates RBB in terms of revenue for any level of correlation between value and relevance. However, they assume that bidding behavior in a second-price slot auction can be well approximated by truthful bidding.

This example suggests the following conjecture: when value and relevance are either uncorrelated or positively correlated, RBB dominates RBR in terms of revenue. When value and relevance are negatively correlated, RBR dominates.
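As a numerical sanity check (our own, not part of the paper), the uniform case of the example above can be verified by Monte Carlo, using revenue equivalence: expected revenue equals the expected total payment under the truthful rules (1) and (2). The function name and structure below are assumptions made for illustration.

```python
import random

def truthful_revenue_mc(trials=200_000, seed=0):
    """Monte Carlo check of the Claim 3 example: 2 bidders, 2 slots, gamma = (1, 1/2),
    value-relevance pairs uniform on [0,1]^2. Only the slot-1 winner pays, since
    x(3) = r(3) = 0 when N = 2."""
    rng = random.Random(seed)
    rbb_total = rbr_total = 0.0
    for _ in range(trials):
        bidders = [(rng.random(), rng.random()) for _ in range(2)]   # (value x_i, relevance alpha_i)
        # RBB: slot 1 goes to the higher-value bidder; by (1) he pays (gamma_1 - gamma_2) * x^(2)
        # per click and receives clickthrough gamma_1 * alpha = alpha.
        (x1, a1), (x2, a2) = sorted(bidders, key=lambda b: b[0], reverse=True)
        rbb_total += a1 * 0.5 * x2
        # RBR: slot 1 goes to the higher-revenue bidder; by (2) his per-click price is
        # (gamma_1 - gamma_2) * r^(2) / alpha, so his expected payment is r^(2) / 2.
        r = sorted((x * a for x, a in bidders), reverse=True)
        rbr_total += 0.5 * r[1]
    return rbb_total / trials, rbr_total / trials

print(truthful_revenue_mc())   # roughly (0.083, 0.065), i.e. about 1/12 and 7/108
```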
4. COMPLETE INFORMATION

In typical slot auctions such as those run by Yahoo! and Google, bidders can adjust their bids up or down at any time. As Börgers et al. [2] and Edelman et al. [6] have noted, this can be viewed as a continuous-time process in which bidders learn each other's bids. If the process stabilizes the result can then be modeled as a Nash equilibrium in pure strategies of the static one-shot game of complete information, since each bidder will be playing a best-response to the others' bids.16 This argument seems especially appropriate for Yahoo!'s slot auction design where all bids are made public. Google keeps bids private, but experimentation can allow one to discover other bids, especially since second pricing automatically reveals to an agent the bid of the agent ranked directly below him.

16 We do not claim that bidders will actually learn each others' private information (value and relevance), just that for a stable set of bids there is a corresponding equilibrium of the complete information game.

4.1 Equilibrium Analysis

In this section we ask whether a pure-strategy Nash equilibrium exists in a RBB or RBR slot auction, with either first or second pricing. Before dealing with the first-price case there is a technical issue involving ties. In our model we allow bids to be nonnegative real numbers for mathematical convenience, but this can become problematic because there is then no bid that is just higher than another. We brush over such issues by assuming that an agent can bid infinitesimally higher than another. This is imprecise but allows us to focus on the intuition behind the result that follows. See Reny [17] for a full treatment of such issues. For the remainder of the paper, we assume that there are as many slots as bidders. The following result shows that there can be no pure-strategy Nash equilibrium with first pricing.17 Note that the argument holds for both RBB and RBR allocation rules. For RBB, bids should be interpreted as declared values, and for RBR as declared revenues.

17 Börgers et al. [2] have proven this result in a model with three bidders and three slots, and we generalize their argument. Edelman et al. [6] also point out this non-existence phenomenon. They only illustrate the fact with an example because the result is quite immediate.

Theorem 2. There exists no complete information Nash equilibrium in pure strategies in the first-price slot auction, for any possible values of the agents, whether using a RBB or RBR allocation rule.

Proof. Let σ : [K] → [N] be the allocation of slots to the agents resulting from their bids. Let ri and bi be the revenue and bid of the agent ranked ith, respectively. Note that we cannot have bi > bi+1, or else the agent in slot i can make a profitable deviation by instead bidding bi − ε > bi+1 for small enough ε > 0. This does not change its allocation, but increases its profit. Hence we must have bi = bi+1 (i.e. with one bidder bidding infinitesimally higher than the other). Since this holds for any two consecutive bidders, it follows that in a Nash equilibrium all bidders must be bidding 0 (since the bidder ranked last matches the bid directly below him, which is 0 by default because there is no such bid).
But this is impossible: consider the bidder ranked last. The identity of this bidder is always clear given the deterministic tie-breaking rule. This bidder can obtain the top spot and increase his revenue by (γ1 − γK) rK > 0 by bidding some ε > 0, and for small enough ε this is necessarily a profitable deviation. Hence there is no Nash equilibrium in pure strategies.

On the other hand, we find that in a second-price slot auction there can be a multitude of pure strategy Nash equilibria. The next two lemmas give conditions that characterize the allocations that can occur as a result of an equilibrium profile of bids, given fixed agent values and revenues. Then if we can exhibit an allocation that satisfies these conditions, there must exist at least one equilibrium. We first consider the RBR case.

Lemma 2. Given an allocation σ, there exists a Nash equilibrium profile of bids b leading to σ in a second-price RBR slot auction if and only if

(1 − γ_i/γ_{j+1}) r_{σ(i)} ≤ r_{σ(j)}

for 1 ≤ j ≤ N − 2 and i ≥ j + 2.

Proof. There exists a desired vector b which constitutes a Nash equilibrium if and only if the following set of inequalities can be satisfied (the variables are the πi and bj):

πi ≥ γj (r_{σ(i)} − bj)   for all i and all j < i    (3)
πi ≥ γj (r_{σ(i)} − bj+1)   for all i and all j > i    (4)
πi = γi (r_{σ(i)} − bi+1)   for all i    (5)
bi ≥ bi+1   for 1 ≤ i ≤ N − 1    (6)
πi ≥ 0, bi ≥ 0   for all i

Here r_{σ(i)} is the revenue of the agent allocated slot i, and πi and bi may be interpreted as this agent's surplus and declared revenue, respectively. We first argue that constraints (6) can be removed, because the inequalities above can be satisfied if and only if the inequalities without (6) can be satisfied. The necessary direction is immediate. Assume we have a vector (π, b) which satisfies all inequalities above except (6). Then there is some i for which bi < bi+1. Construct a new vector (π, b′) identical to the original except with b′_{i+1} = bi. We now have b′_i = b′_{i+1}. An agent in slot k < i sees the price of slot i decrease from bi+1 to b′_{i+1} = bi, but this does not make i more preferred than k to this agent because we have πk ≥ γ_{i−1}(r_{σ(k)} − bi) ≥ γi(r_{σ(k)} − bi) = γi(r_{σ(k)} − b′_{i+1}) (i.e. because the agent in slot k did not originally prefer slot i − 1 at price bi, he will not prefer slot i at price bi). A similar argument applies for agents in slots k > i + 1. The agent in slot i sees the price of this slot go down, which only makes it more preferred. Finally, the agent in slot i + 1 sees no change in the price of any slot, so his slot remains most preferred. Hence inequalities (3)-(5) remain valid at (π, b′). We first make this change to the bi+1 where bi < bi+1 and index i is smallest. We then recursively apply the change until we eventually obtain a vector that satisfies all inequalities. We safely ignore inequalities (6) from now on.

By the Farkas lemma, the remaining inequalities can be satisfied if and only if there is no vector z such that

Σ_{i,j} (γj r_{σ(i)}) z_{σ(i)j} > 0
Σ_{i>j} γj z_{σ(i)j} + Σ_{i<j} γ_{j−1} z_{σ(i)j−1} ≤ 0   for all j    (7)
Σ_j z_{σ(i)j} ≤ 0   for all i    (8)
z_{σ(i)j} ≥ 0   for all i and all j ≠ i
z_{σ(i)i} free   for all i

Note that a variable of the form z_{σ(i)i} appears at most once in a constraint of type (8), so such a variable can never be positive.
Also, z_{σ(i)1} = 0 for all i ≠ 1 by constraint (7), since such variables never appear with another of the form z_{σ(i)i}. Now if we wish to raise z_{σ(i)j} above 0 by one unit for j ≠ i, we must lower z_{σ(i)i} by one unit because of the constraint of type (8). Because γj r_{σ(i)} ≤ γi r_{σ(i)} for i < j, raising z_{σ(i)j} with i < j while adjusting other variables to maintain feasibility cannot make the objective Σ_{i,j} (γj r_{σ(i)}) z_{σ(i)j} positive. If this objective is positive, then this is due to some component z_{σ(i)j} with i > j being positive. Now for the constraints of type (7), if i > j then z_{σ(i)j} appears with z_{σ(j−1)j−1} (for 1 < j < N). So to raise the former variable by γ_j^{−1} units and maintain feasibility, we must (I) lower z_{σ(i)i} by γ_j^{−1} units, and (II) lower z_{σ(j−1)j−1} by γ_{j−1}^{−1} units. Hence if the following inequalities hold:

r_{σ(i)} ≤ (γ_i/γ_j) r_{σ(i)} + r_{σ(j−1)}    (9)

for 2 ≤ j ≤ N − 1 and i > j, raising some z_{σ(i)j} with i > j cannot make the objective positive, and there is no z that satisfies all inequalities above. Conversely, if some inequality (9) does not hold, the objective can be made positive by raising the corresponding z_{σ(i)j} and adjusting other variables so that feasibility is just maintained. By a slight reindexing, inequalities (9) yield the statement of the theorem.

The RBB case is entirely analogous.

Lemma 3. Given an allocation σ, there exists a Nash equilibrium profile of bids b leading to σ in a second-price RBB slot auction if and only if

(1 − γ_i/γ_{j+1}) x_{σ(i)} ≤ x_{σ(j)}

for 1 ≤ j ≤ N − 2 and i ≥ j + 2.

Proof Sketch. The proof technique is the same as in the previous lemma. The desired Nash equilibrium exists if and only if a related set of inequalities can be satisfied; by the Farkas lemma, this occurs if and only if an alternate set of inequalities cannot be satisfied. The conditions that determine whether the latter holds are given in the statement of the lemma.

The two lemmas above immediately lead to the following result.

Theorem 3. There always exists a complete information Nash equilibrium in pure strategies in the second-price RBB slot auction. There always exists an efficient complete information Nash equilibrium in pure strategies in the second-price RBR slot auction.

Proof. First consider RBB. Suppose agents are ranked according to their true values. Since x_{σ(i)} ≤ x_{σ(j)} for i > j, the system of inequalities in Lemma 3 is satisfied, and the allocation is the result of some Nash equilibrium bid profile. By the same type of argument but appealing to Lemma 2 for RBR, there exists a Nash equilibrium bid profile such that bidders are ranked according to their true revenues. By Theorem 1, this latter allocation is efficient.

This theorem establishes existence but not uniqueness. Indeed we expect that in many cases there will be multiple allocations (and hence equilibria) which satisfy the conditions of Lemmas 2 and 3. In particular, not all equilibria of a second-price RBR auction will be efficient. For instance, according to Lemma 2, with two agents and two slots any allocation can arise in a RBR equilibrium because no constraints apply. Theorems 2 and 3 taken together provide a possible explanation for Yahoo!'s switch from first to second pricing. We saw in Section 3.1 that this does not induce truthfulness from bidders. With first pricing, there will always be some bidder that feels compelled to adjust his bid. Second pricing is more convenient because an equilibrium can be reached, and this reduces the cost of bid management.
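The characterization in Lemmas 2 and 3 is easy to check mechanically for small instances. The sketch below (our own; the function name is hypothetical) tests the condition (1 − γ_i/γ_{j+1}) r_{σ(i)} ≤ r_{σ(j)} for a proposed ranking, and enumerates which orderings of three revenues can arise in a second-price RBR equilibrium.

```python
from itertools import permutations

def is_equilibrium_allocation(ranked_revenues, gammas):
    """Check the Lemma 2 (RBR) / Lemma 3 (RBB) condition:
    (1 - gamma_i / gamma_{j+1}) * r_sigma(i) <= r_sigma(j)
    for 1 <= j <= N-2 and i >= j+2, where ranked_revenues[k-1] is the revenue
    (or value, for RBB) of the agent placed in slot k."""
    N = len(ranked_revenues)
    for j in range(1, N - 1):            # j = 1, ..., N-2 (1-indexed slots)
        for i in range(j + 2, N + 1):    # i = j+2, ..., N
            lhs = (1 - gammas[i - 1] / gammas[j]) * ranked_revenues[i - 1]
            if lhs > ranked_revenues[j - 1]:
                return False
    return True

# Which orderings of the revenues (4, 3, 1) can arise with gammas (1, 1/2, 1/4)?
gammas = [1.0, 0.5, 0.25]
for order in permutations([4.0, 3.0, 1.0]):
    print(order, is_equilibrium_allocation(list(order), gammas))
```

Consistent with Theorem 3, the ranking by true revenues always passes the check, while some (but not all) non-standard rankings fail it.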
4.2 Efficiency

For a given allocation rule, we call the allocation that would result if the bidders reported their values truthfully the standard allocation. Hence in the standard RBB allocation bidders are ranked by true values, and in the standard RBR allocation they are ranked by true revenues. According to Lemmas 2 and 3, a ranking that results from a Nash equilibrium profile can only deviate from the standard allocation by having agents with relatively similar values or revenues switch places. That is, if ri > rj then with RBR agent j can be ranked higher than i only if the ratio rj/ri is sufficiently large; similarly for RBB. This suggests that the value of an equilibrium allocation cannot differ too much from the value obtained in the standard allocation, and the following theorems confirm this.

For an allocation σ of slots to agents, we denote its total value by f(σ) = Σ_{i=1}^{N} γ_i r_{σ(i)}. We denote by g(σ) = Σ_{i=1}^{N} γ_i x_{σ(i)} allocation σ's value when assuming all agents have identical relevance, normalized to 1. Let

L = min_{i=1,...,N−1} min{ γ_{i+1}/γ_i , 1 − γ_{i+2}/γ_{i+1} }

(where by default γ_{N+1} = 0). Let ηx and ηr be the standard allocations when using RBB and RBR, respectively.

Theorem 4. For an allocation σ that results from a pure-strategy Nash equilibrium of a second-price RBR slot auction, we have f(σ) ≥ L f(ηr).

Proof. We number the agents so that agent i has the ith highest revenue, so r1 ≥ r2 ≥ ... ≥ rN. Hence the standard allocation has value f(ηr) = Σ_{i=1}^{N} γ_i r_i. To prove the theorem, we will make repeated use of the fact that (Σ_k a_k)/(Σ_k b_k) ≥ min_k (a_k/b_k) when the a_k and b_k are positive. Note that according to Lemma 2, if agent i lies at least two slots below slot j, then r_{σ(j)} ≥ r_i (1 − γ_{j+2}/γ_{j+1}).

It may be the case that for some slot i, we have σ(i) > i and for slots k > i + 1 we have σ(k) > i. We then say that slot i is inverted. Let S be the set of agents with indices at least i + 1; there are N − i of these. If slot i is inverted, it is occupied by some agent from S. Also all slots strictly lower than i + 1 must be occupied by the remaining agents from S, since σ(k) > i for k ≥ i + 2. The agent in slot i + 1 must then have an index σ(i + 1) ≤ i (note this means slot i + 1 cannot be inverted). Now there are two cases. In the first case we have σ(i) = i + 1. Then

(γ_i r_{σ(i)} + γ_{i+1} r_{σ(i+1)}) / (γ_i r_i + γ_{i+1} r_{i+1}) ≥ (γ_{i+1} r_i + γ_i r_{i+1}) / (γ_i r_i + γ_{i+1} r_{i+1}) ≥ min{ γ_{i+1}/γ_i , γ_i/γ_{i+1} } = γ_{i+1}/γ_i

In the second case we have σ(i) > i + 1. Then since all agents in S except the one in slot i lie strictly below slot i + 1, and the agent in slot i is not agent i + 1, it must be that agent i + 1 is in a slot strictly below slot i + 1. This means that it is at least two slots below the agent that actually occupies slot i, and by Lemma 2 we then have r_{σ(i)} ≥ r_{i+1} (1 − γ_{i+2}/γ_{i+1}). Thus,

(γ_i r_{σ(i)} + γ_{i+1} r_{σ(i+1)}) / (γ_i r_i + γ_{i+1} r_{i+1}) ≥ (γ_{i+1} r_i + γ_i r_{σ(i)}) / (γ_i r_i + γ_{i+1} r_{i+1}) ≥ min{ γ_{i+1}/γ_i , 1 − γ_{i+2}/γ_{i+1} }

If slot i is not inverted, then on one hand we may have σ(i) ≤ i, in which case r_{σ(i)}/r_i ≥ 1. On the other hand we may have σ(i) > i but there is some agent with index j ≤ i that lies at least two slots below slot i. Then by Lemma 2, r_{σ(i)} ≥ r_j (1 − γ_{i+2}/γ_{i+1}) ≥ r_i (1 − γ_{i+2}/γ_{i+1}). We write i ∈ I if slot i is inverted, and i ∈ Ī if neither i nor i − 1 is inverted.
By our arguments above two consecutive slots cannot be inverted, so we can write

f(σ)/f(ηr) = [ Σ_{i∈I} (γ_i r_{σ(i)} + γ_{i+1} r_{σ(i+1)}) + Σ_{i∈Ī} γ_i r_{σ(i)} ] / [ Σ_{i∈I} (γ_i r_i + γ_{i+1} r_{i+1}) + Σ_{i∈Ī} γ_i r_i ]
≥ min{ min_{i∈I} (γ_i r_{σ(i)} + γ_{i+1} r_{σ(i+1)}) / (γ_i r_i + γ_{i+1} r_{i+1}) , min_{i∈Ī} γ_i r_{σ(i)} / (γ_i r_i) } ≥ L

and this completes the proof.

Note that for RBR, the standard value is also the efficient value by Theorem 1. Also note that for an exponential decay model, L = min{ 1/δ , 1 − 1/δ }. With δ = 1.428 (see Section 2.1), the factor is L ≈ 1/3.34, so the total value in a pure-strategy Nash equilibrium of a second-price RBR slot auction is always within a factor of 3.34 of the efficient value with such a discount. Again for RBB we have an analogous result.

Theorem 5. For an allocation σ that results from a pure-strategy Nash equilibrium of a second-price RBB slot auction, we have g(σ) ≥ L g(ηx).

Proof Sketch. Simply substitute bidder values for bidder revenues in the proof of Theorem 4, and appeal to Lemma 3.
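The constant L is straightforward to compute for any given sequence of slot probabilities. The short sketch below (ours; the function name is hypothetical) reproduces the L ≈ 1/3.34 figure for the exponential decay model with δ = 1.428.

```python
def efficiency_factor(gammas):
    """The constant L from Section 4.2:
    L = min over i = 1..N-1 of min(gamma_{i+1}/gamma_i, 1 - gamma_{i+2}/gamma_{i+1}),
    with gamma_{N+1} taken to be 0."""
    g = list(gammas) + [0.0]   # append gamma_{N+1} = 0
    return min(min(g[i + 1] / g[i], 1 - g[i + 2] / g[i + 1]) for i in range(len(gammas) - 1))

# Exponential decay with delta = 1.428: L = min(1/delta, 1 - 1/delta) which is about
# 0.30, i.e. roughly 1/3.34, matching the bound discussed above.
delta = 1.428
print(efficiency_factor([1 / delta ** k for k in range(5)]))
```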
5. CONCLUSIONS

This paper analyzed stylized versions of the slot auction designs currently used by Yahoo! and Google, namely rank by bid (RBB) and rank by revenue (RBR), respectively. We also considered first and second pricing rules together with each of these allocation rules, since both have been used historically. We first studied the short-run setting with incomplete information, corresponding to the case where agents have just approached the mechanism. Our equilibrium analysis revealed that RBB has much weaker informational requirements than RBR, because bidders need not know any information about relevance (even their own) to play the Bayes-Nash equilibrium. However, RBR leads to an efficient allocation in equilibrium, whereas RBB does not. We showed that for an arbitrary distribution over value and relevance, no revenue ranking of RBB and RBR is possible. We hope that the tools we used to establish these results (revenue equivalence, the form of first-price equilibria, the truthful payment rules) will help others wanting to pursue further analyses of slot auctions. We also studied the long-run case where agents have experimented with their bids and each settled on one they find optimal. We argued that a stable set of bids in this setting can be modeled as a pure-strategy Nash equilibrium of the static game of complete information. We showed that no pure-strategy equilibrium exists with either RBB or RBR using first pricing, but that with second pricing there always exists such an equilibrium (in the case of RBR, an efficient equilibrium). In general second pricing allows for multiple pure-strategy equilibria, but we showed that the value of such equilibria diverges by only a constant factor from the value obtained if all agents bid truthfully (which in the case of RBR is the efficient value).

6. FUTURE WORK

Introducing budget constraints into the model is a natural next step for future work. The complication here lies in the fact that budgets are often set for entire campaigns rather than single keywords. Assuming that the optimal choice of budget can be made independent of the choice of bid for a specific keyword, it can be shown that it is a dominant strategy to report this optimal budget with one's bid. The problem is then to ascertain that bids and budgets can indeed be optimized separately, or to find a plausible model where deriving equilibrium bids and budgets together is tractable.

Identifying a condition on the distribution over value and relevance that actually does yield a revenue ranking of RBB and RBR (such as correlation between value and relevance, perhaps) would yield a more satisfactory characterization of their relative revenue properties. Placing bounds on the revenue obtained in a complete information equilibrium is also a relevant question. Because the incomplete information case is such a close generalization of the most basic single-item auction model, it would be interesting to see which standard results from single-item auction theory (e.g. results with risk-averse bidders, an endogenous number of bidders, asymmetries, etc.) automatically generalize and which do not, to fully understand the structural differences between single-item and slot auctions.

Acknowledgements

David Pennock provided valuable guidance throughout this project. I would also like to thank David Parkes for helpful comments.

7. REFERENCES

[1] Z. Abrams. Revenue maximization when bidders have budgets. In Proc. the ACM-SIAM Symposium on Discrete Algorithms, 2006.
[2] T. Börgers, I. Cox, and M. Pesendorfer. Personal communication.
[3] C. Borgs, J. Chayes, N. Immorlica, M. Mahdian, and A. Saberi. Multi-unit auctions with budget-constrained bidders. In Proc. the Sixth ACM Conference on Electronic Commerce, Vancouver, BC, 2005.
[4] F. Brandt and G. Weiß. Antisocial agents and Vickrey auctions. In J.-J. C. Meyer and M. Tambe, editors, Intelligent Agents VIII, volume 2333 of Lecture Notes in Artificial Intelligence. Springer Verlag, 2001.
[5] B. Edelman and M. Ostrovsky. Strategic bidder behavior in sponsored search auctions. In Workshop on Sponsored Search Auctions, ACM Electronic Commerce, 2005.
[6] B. Edelman, M. Ostrovsky, and M. Schwarz. Internet advertising and the generalized second price auction: Selling billions of dollars worth of keywords. NBER working paper 11765, November 2005.
[7] J. Feng, H. K. Bhargava, and D. M. Pennock. Implementing sponsored search in web search engines: Computational evaluation of alternative mechanisms. INFORMS Journal on Computing, 2005. Forthcoming.
[8] J. Green and J.-J. Laffont. Characterization of satisfactory mechanisms for the revelation of preferences for public goods. Econometrica, 45:427-438, 1977.
[9] B. Holmstrom. Groves schemes on restricted domains. Econometrica, 47(5):1137-1144, 1979.
[10] B. Kitts, P. Laxminarayan, B. LeBlanc, and R. Meech. A formal analysis of search auctions including predictions on click fraud and bidding tactics. In Workshop on Sponsored Search Auctions, ACM Electronic Commerce, 2005.
[11] V. Krishna. Auction Theory. Academic Press, 2002.
[12] D. Liu and J. Chen. Designing online auctions with past performance information. Decision Support Systems, 2005. Forthcoming.
[13] C. Meek, D. M. Chickering, and D. B. Wilson. Stochastic and contingent payment auctions. In Workshop on Sponsored Search Auctions, ACM Electronic Commerce, 2005.
[14] A. Mehta, A. Saberi, U. Vazirani, and V. Vazirani. Adwords and generalized on-line matching. In Proc. 46th IEEE Symposium on Foundations of Computer Science, 2005.
[15] P. Milgrom. Putting Auction Theory to Work. Cambridge University Press, 2004.
[16] P. Milgrom and C. Shannon. Monotone comparative statics. Econometrica, 62(1):157-180, 1994.
[17] P. J. Reny. On the existence of pure and mixed strategy Nash equilibria in discontinuous games. Econometrica, 67(5):1029-1056, 1999.
[18] H. R. Varian. Position auctions. Working paper, February 2006.
[19] W. Vickrey. Counterspeculation, auctions and competitive sealed tenders. Journal of Finance, 16:8-37, 1961.
First, a single bid per merchant is used to allocate multiple non-identical slots. Second, the bid is communicated not as a direct preference over slots, but as a preference for clicks that depend stochastically on slot allocation. We investigate a number of economic properties of RBB and RBR slot auctions. We consider the "short-run" incomplete information case in Section 3, adapting and extending standard analyses of single-item auctions. In Section 4 we turn to the "long-run" complete information case; our characterization results here draw on techniques from linear programming. Throughout, important observations are highlighted as claims supported by examples. Our contributions are as follows: • We show that with multiple slots, bidders do not reveal their true values with either RBB or RBR, and with either first - or second-pricing. • With incomplete information, we find that the informational requirements of playing the equilibrium bid are much weaker for RBB than for RBR, because bidders need not know any information about each others' relevance (or even their own) with RBB. • With incomplete information, we prove that RBR is efficient but that RBB is not. • We show via a simple example that no general revenue ranking of RBB and RBR is possible. • We prove that in a complete-information setting, firstprice slot auctions have no pure strategy Nash equilibrium, but that there always exists a pure-strategy equilibrium with second pricing. • We provide a constant-factor bound on the deviation from efficiency that can occur in the equilibrium of a second-price slot auction. In Section 2 we specify our model of bidders and the various slot auction formats. In Section 3.1 we study the incentive properties of each format, asking in which cases agents would bid truthfully. There is possible confusion here because the "second-price" design for slot auctions is reminiscent of the Vickrey auction for a single item; we note that for slot auctions the Vickrey mechanism is in fact very different from the second-price mechanism, and so they have different incentive properties .8 In Section 3.2 we derive the Bayes-Nash equilibrium bids for the various auction formats. This is useful for the efficiency and revenue results in later sections. It should become clear in this section that slot auctions in our model are a straightforward generalization of single-item auctions. Sections 3.3 and 3.4 address questions of efficiency and revenue under incomplete information, respectively. In Section 4.1 we determine whether pure-strategy equilibria exist for the various auction formats, under complete information. In Section 4.2 we derive bounds on the deviation from efficiency in the pure-strategy equilibria of secondprice slot auctions. Our approach is positive rather than normative. We aim to clarify the incentive, efficiency, and revenue properties of two slot auction designs currently in use, under settings of incomplete and complete information. We do not attempt to derive the "optimal" mechanism for a slot auction. Related work. Feng et al. [7] compare the revenue performance of various ranking mechanisms for slot auctions in a model with incomplete information, much as we do in Section 3.4, but they obtain their results via simulations whereas we perform an equilibrium analysis. Liu and Chen [12] study properties of slot auctions under incomplete information. 
Their setting is essentially the same as ours, except they restrict their attention to a model with a single slot and a binary type for bidder relevance (high or low). They find that RBR is efficient, but that no general revenue ranking of RBB and RBR is possible, which agrees with our results. They also take a design approach and show how the auctioneer should assign relevance scores to optimize its revenue. Edelman et al. [6] model the slot auction problem both as a static game of complete information and a dynamic game of incomplete information. They study the "locally envyfree equilibria" of the static game of complete information; this is a solution concept motivated by certain bidding behaviors that arise due to the presence of budget constraints. They do not view slot auctions as static games of incomplete information as we do, but do study them as dynamic games of incomplete information and derive results on the uniqueness and revenue properties of the resulting equilibria. They also provide a nice description of the evolution of the market for sponsored search. Varian [18] also studies slot auctions under a setting of complete information. He focuses on "symmetric" equilibria, which are a refinement of Nash equilibria appropriate for slot auctions. He provides bounds on the revenue obtained in equilibrium. He also gives bounds that can be used to infer bidder values given their bids, and performs some empirical analysis using these results. In contrast, we focus instead on efficiency and provide bounds on the deviation from efficiency in complete-information equilibria. 2. PRELIMINARIES We focus on a slot auction for a single keyword. In a setting of incomplete information, a bidder knows only distributions over others' private information (value per click and relevance). With complete information, a bidder knows others' private information, and so does not need to rely on distributions to strategize. We first describe the model for the case with incomplete information, and drop the distributional information from the model when we come to the complete-information case in Section 4. 2.1 The Model There is a fixed number K of slots to be allocated among N bidders. We assume without loss of generality that K ≤ N, since superfluous slots can remain blank. Bidder i assigns a value of Xi to each click received on its advertisement, regardless of this advertisement's rank .9 The probability that i's advertisement will be clicked if viewed is Ai ∈ [0, 1]. We refer to Ai as bidder i's relevance. We refer to Ri = AiXi as bidder i's revenue. The Xi, Ai, and Ri are random 9Indeed Kitts et al. [10] find that in their sample of actual click data, the correlation between rank and conversion rate is not statistically significant. However, for the purposes of our model it is also important that bidders believe that conversion rate does not vary with rank. variables and we denote their realizations by xi, αi, and ri respectively. The probability that an advertisement will be viewed if placed in slot j is ryj ∈ [0, 1]. We assume ry1> rye>...> ryK. Hence bidder i's advertisement will have a clickthrough rate of ryjαi if placed in slot j. Of course, an advertisement does not receive any clicks if it is not allocated a slot. Each bidder's value and relevance pair (Xi, Ai) is independently and identically distributed on [0, ¯ x] × [0, 1] according to a continuous density function f that has full support on its domain. The density f and slot probabilities ry1,..., ryK are common knowledge. 
Only bidder i knows the realization xi of its value per click Xi. Both bidder i and the seller know the realization αi of Ai, but this realization remains unobservable to the other bidders. We assume that bidders have quasi-linear utility functions. That is, the expected utility to bidder i of obtaining the slot of rank j at a price of b per click is ui (j, b) = ryj αi (xi − b) If the advertising firms bidding in the slot auction are riskneutral and have ample liquidity, quasi-linearity is a reasonable assumption. The assumptions of independence, symmetry, and riskneutrality made above are all quite standard in single-item auction theory [11, 19]. The assumption that clickthrough rate decays monotonically with lower slots--by the same factors for each agent--is unique to the slot auction problem. We view it as a main contribution of our work to show that this assumption allows for tractable analysis of the slot auction problem using standard tools from singleitem auction theory. It also allows for interesting results in the complete information case. A common model of decaying clickthrough rate is the exponential decay model, where ryk = δk − 1 with decay δ> 1. Feng et al. [7] state that their actual clickthrough data is fitted extremely well by an exponential decay model with δ = 1.428. Our model lacks budget constraints, which are an important feature of real slot auctions. With budget constraints keyword auctions cannot be considered independently of one another, because the budget must be allocated across multiple keywords--a single advertiser typically bids on multiple keywords relevant to his business. Introducing this element into the model is an important next step for future work .10 2.2 Auction Formats In a slot auction a bidder provides to the seller a declared value per click ˜xi (xi, αi) which depends on his true value and relevance. We often denote this declared value (bid) by ˜xi for short. Since a bidder's relevance αi is observable to the seller, the bidder cannot misrepresent it. We denote the kth highest of the N declared values by ˜x (k), and the kth highest of the N declared revenues by ˜r (k), where the declared revenue of bidder i is ˜ri = αi˜xi. We consider two types of allocation rules, "rank by bid" (RBB) and "rank by revenue" (RBR): 10Models with budget constraints have begun to appear in this research area. Abrams [1] and Borgs et al. [3] design multi-unit auctions for budget-constrained bidders, which can be interpreted as slot auctions, with a focus on revenue optimization and truthfulness. Mehta et al. [14] address the problem of matching user queries to budget-constrained advertisers so as to maximize revenue. RBB. Slot k goes to bidder i if and only if ˜xi = ˜x (k). RBR. Slot k goes to bidder i if and only if ˜ri = ˜r (k). We will commonly represent an allocation by a one-to-one function σ: [K]--+ [N], where [n] is the set of integers {1, 2,..., n}. Hence slot k goes to bidder σ (k). We also consider two different types of payment rules. Note that no matter what the payment rule, a bidder that is not allocated a slot will pay 0 since his listing cannot receive any clicks. First-price. The bidder allocated slot k, namely σ (k), pays ˜xσ (k) per click under both the RBB and RBR allocation rules. Second-price. If k <N, bidder σ (k) pays ˜xσ (k +1) per click under the RBB rule, and pays ˜rσ (k +1) / ασ (k) per click under the RBR rule. 
If k = N, bidder σ (k) pays 0 per click .11 Intuitively, a second-price payment rule sets a bidder's payment to the lowest bid it could have declared while maintaining the same ranking, given the allocation rule used. Overture introduced the first slot auction design in 1997, using a first-price RBB scheme. Google then followed in 2000 with a second-price RBR scheme. In 2002, Overture (at this point acquired by Yahoo!) then switched to second pricing but still allocates using RBB. One possible reason for the switch is given in Section 4. We assume that ties are broken as follows in the event that two agents make the exact same bid or declare the same revenue. There is a permutation of the agents κ: [N]--+ [N] that is fixed beforehand. If the bids of agents i and j are tied, then agent i obtains a higher slot if and only if κ (i) <κ (j). This is consistent with the practice in real slot auctions where ties are broken by the bidders' order of arrival. 3. INCOMPLETE INFORMATION 3.1 Incentives It should be clear that with a first-price payment rule, truthful bidding is neither a dominant strategy nor an ex post Nash equilibrium using either RBB or RBR, because this guarantees a payoff of 0. There is always an incentive to shade true values with first pricing. The second-price payment rule is reminiscent of the secondprice (Vickrey) auction used for selling a single item, and in a Vickrey auction it is a dominant strategy for a bidder to reveal his true value for the item [19]. However, using a second-price rule in a slot auction together with either allocation rule above does not yield an incentive-compatible mechanism, either in dominant strategies or ex post Nash equilibrium .12 With a second-price rule there is no incentive for a bidder to bid higher than his true value per click using either RBB or RBR: this either leads to no change 11We are effectively assuming a reserve price of zero, but in practice search engines charge a non-zero reserve price per click. 12Unless of course there is only a single slot available, since this is the single-item case. With a single slot both RBB and RBR with a second-price payment rule are dominantstrategy incentive-compatible. in the outcome, or a situation in which he will have to pay more than his value per click for each click received, resulting in a negative payoff .13 However, with either allocation rule there may be an incentive to shade true values with second pricing. EXAMPLE. There are two agents and two slots. The agents have relevance α1 = α2 = 1, whereas γ1 = 1 and γ2 = 1/2. Agent 1 has a value of x1 = 6 per click, and agent 2 has a value of x2 = 4 per click. Let us first consider the RBB rule. Suppose agent 2 bids truthfully. If agent 1 also bids truthfully, he wins the first slot and obtains a payoff of 2. However, if he shades his bid down below 4, he obtains the second slot at a cost of 0 per click yielding a payoff of 3. Since the agents have equal relevance, the exact same situation holds with the RBR rule. Hence truthful bidding is not a dominant strategy in either format, and neither is it an ex post Nash equilibrium. To find payments that make RBB and RBR dominantstrategy incentive-compatible, we can apply Holmstrom's lemma [9] (see also chapter 3 in Milgrom [15]). 
Under the restriction that a bidder with value 0 per click does not pay anything (even if he obtains a slot, which can occur if there are as many slots as bidders), this lemma implies that there is a unique payment rule that achieves dominant-strategy incentive compatibility for either allocation rule. For RBB, the bidder allocated slot k is charged per click

(1/γk) [ Σ_{j=k+1}^{K} (γ_{j−1} − γj) ˜x^(j) + γK ˜x^(K+1) ]    (1)

Note that if K = N, ˜x^(K+1) = 0 since there is no (K+1)th bidder. For RBR, the bidder allocated slot k is charged per click

(1/(γk ασ(k))) [ Σ_{j=k+1}^{K} (γ_{j−1} − γj) ˜r^(j) + γK ˜r^(K+1) ]    (2)

Using payment rule (2) and RBR, the auctioneer is aware of the true revenues of the bidders (since they reveal their values truthfully), and hence ranks them according to their true revenues. We show in Section 3.3 that this allocation is in fact efficient. Since the VCG mechanism is the unique mechanism that is efficient, truthful, and ensures bidders with value 0 pay nothing (by the Green-Laffont theorem [8]), the RBR rule and payment scheme (2) constitute exactly the VCG mechanism. In the VCG mechanism an agent pays the externality he imposes on others. To understand payment (2) in this sense, note that the first term is the added utility (due to an increased clickthrough rate) agents in slots k + 1 to K would receive if they were all to move up a slot; the last term is the utility that the agent with the (K+1)st revenue would receive by obtaining the last slot as opposed to nothing. The leading coefficient simply reduces the agent's expected payment to a payment per click.

13 In a dynamic setting with second pricing, there may be an incentive to bid higher than one's true value in order to exhaust competitors' budgets. This phenomenon is commonly called "bid jamming" or "antisocial bidding" [4].

3.2 Equilibrium Analysis

To understand the efficiency and revenue properties of the various auction formats, we must first understand which rankings of the bidders occur in equilibrium with different allocation and payment rule combinations. The following lemma essentially follows from the Monotonic Selection Theorem by Milgrom and Shannon [16]. As a consequence of this lemma, we find that RBB and RBR auctions allocate the slots greedily by the true values and revenues of the agents, respectively (whether using first- or second-price payment rules). This will be relevant in Section 3.3 below.

For a first-price payment rule, we can explicitly derive the symmetric Bayes-Nash equilibrium bid functions for RBB and RBR auctions. The purpose of this exercise is to lend qualitative insights into the parameters that influence an agent's bidding, and to derive formulae for the expected revenue in RBB and RBR auctions in order to make a revenue ranking of these two allocation rules (in Section 3.4). Let G(y) be the expected resulting clickthrough rate, in a symmetric equilibrium of the RBB auction (with either payment rule), to a bidder with value y and relevance α = 1. Let H(y) be the analogous quantity for a bidder with revenue y and relevance 1 in a RBR auction. By Lemma 1, a bidder with value y will obtain slot k in a RBB auction if y is the kth highest of the true realized values. The same applies in a RBR auction when y is the kth highest of the true realized revenues. Let FX(y) be the distribution function for value, and let FR(y) be the distribution function for revenue. The probability that y is the kth highest out of N values is

C(N−1, k−1) (1 − FX(y))^(k−1) FX(y)^(N−k)

whereas the probability that y is the kth highest out of N revenues is the same formula with FR replacing FX. Hence we have

G(y) = Σ_{k=1}^{K} γk C(N−1, k−1) (1 − FX(y))^(k−1) FX(y)^(N−k)

The H function is analogous to G with FR replacing FX. In the two propositions that follow, g and h are the derivatives of G and H respectively. We omit the proof of the next proposition, because it is almost identical to the derivation of the equilibrium bid in the single-item case (see Krishna [11], Proposition 2.2).

PROPOSITION 1. The symmetric Bayes-Nash equilibrium strategies in a first-price RBB auction are given by

˜x^V(x, α) = x − (1/G(x)) ∫_0^x G(y) dy

The first-price equilibrium above closely parallels the first-price equilibrium in the single-item model. With a single item g is the density of the second highest value among all N agent values, whereas in a slot auction it is a weighted combination of the densities for the second, third, etc. highest values. Note that the symmetric Bayes-Nash equilibrium bid in a first-price RBB auction does not depend on a bidder's relevance α. To see clearly why, note that a bidder with value x and relevance α chooses a bid b so as to maximize the objective

α (x − b) G((˜x^V)^{−1}(b))

and here α is just a leading constant factor. So dropping it does not change the set of optimal solutions. Hence the equilibrium bid depends only on the value x and function G, and G in turn depends only on the marginal cumulative distribution of value FX. So really only the latter needs to be common knowledge to the bidders. On the other hand, we will now see that information about relevance is needed for bidders to play the equilibrium in the first-price RBR auction. So the informational requirements for a first-price RBB auction are much weaker than for a first-price RBR auction: in the RBB auction a bidder need not know his own relevance, and need not know any distributional information over others' relevance in order to play the equilibrium. Again we omit the next proposition's proof since it is so similar to the one above.

PROPOSITION 2. The symmetric Bayes-Nash equilibrium strategies in a first-price RBR auction are given by

˜x^R(x, α) = x − (1/(α H(αx))) ∫_0^{αx} H(y) dy

Here it can be seen that the equilibrium bid is increasing with x, but not necessarily with α. This should not be much of a concern to the auctioneer, however, because in any case the declared revenue in equilibrium is always increasing in the true revenue. It would be interesting to obtain the equilibrium bids when using a second-price payment rule, but it appears that the resulting differential equations for this case do not have a neat analytical solution. Nonetheless, the same conclusions about the informational requirements of the RBB and RBR rules still hold, as can be seen simply by inspecting the objective function associated with an agent's bidding problem for the second-price case.

3.3 Efficiency

A slot auction is efficient if in equilibrium the sum of the bidders' revenues from their allocated slots is maximized. Using symmetry as our equilibrium selection criterion, we find that the RBB auction is not efficient with either payment rule.

CLAIM 2. The RBB auction is not efficient with either first or second pricing.

EXAMPLE. There are two agents and one slot, with γ1 = 1. Agent 1 has a value of x1 = 6 per click and relevance α1 = 1/2. Agent 2 has a value of x2 = 4 per click and relevance α2 = 1. By Lemma 1, agents are ranked greedily by value. Hence agent 1 obtains the lone slot, for a total revenue of 3 to the agents. However, it is most efficient to allocate the slot to agent 2, for a total revenue of 4. Examples with more agents or more slots are simple to construct along the same lines.
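The Example above can be checked mechanically. The following short sketch (our own code, with helper names of our own choosing) ranks the two agents by value and by revenue and confirms by brute force that the revenue ranking maximizes total bidder revenue, as the efficiency result for RBR proved next requires.

```python
# A small sketch (our own code, not from the paper) contrasting the greedy-by-value
# ranking used by RBB with the greedy-by-revenue ranking used by RBR on the example
# of Claim 2, and checking by brute force which ranking maximizes total bidder revenue.
from itertools import permutations

def total_value(order, alphas, values, gammas):
    """Sum of gamma_k * alpha_i * x_i over allocated slots for a given ranking `order`."""
    return sum(g * alphas[i] * values[i] for g, i in zip(gammas, order))

def best_allocation(alphas, values, gammas):
    """Brute-force efficient allocation (feasible here because the example is tiny)."""
    return max(permutations(range(len(values))),
               key=lambda o: total_value(o, alphas, values, gammas))

alphas, values, gammas = [0.5, 1.0], [6.0, 4.0], [1.0]   # Claim 2: two agents, one slot
by_value   = sorted(range(2), key=lambda i: -values[i])             # RBB ranking
by_revenue = sorted(range(2), key=lambda i: -alphas[i] * values[i]) # RBR ranking
print(total_value(by_value, alphas, values, gammas))    # 3.0: agent 1 (index 0) gets the slot
print(total_value(by_revenue, alphas, values, gammas))  # 4.0: agent 2 (index 1) gets the slot
print(best_allocation(alphas, values, gammas))          # ranks agent 2 (index 1) first, as RBR does
```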
On the other hand, under our assumptions on how clickthrough rate decreases with lower rank, the RBR auction is efficient with either payment rule.

THEOREM 1. The RBR auction is efficient with either first or second pricing.

PROOF. Since by Lemma 1 the agents' equilibrium bids are increasing functions of their revenues in the RBR auction, slots are allocated greedily according to true revenues. Let σ be a non-greedy allocation. Then there are slots s, t with s < t and rσ(s) < rσ(t). We can switch the agents in slots s and t to obtain a new allocation, and the difference between the total revenue in this new allocation and the original allocation's total revenue is

(γs − γt)(rσ(t) − rσ(s))

Both parenthesized terms above are positive. Hence the switch has increased the total revenue to the bidders. If we continue to perform such switches, we will eventually reach a greedy allocation of greater revenue than the initial allocation. Since the initial allocation was arbitrary, it follows that a greedy allocation is always efficient, and hence the RBR auction's allocation is efficient.

Note that the assumption that clickthrough rate decays monotonically by the same factors γ1, ..., γK for all agents is crucial to this result. A greedy allocation scheme does not necessarily find an efficient solution if the clickthrough rates are monotonically decreasing in an independent fashion for each agent.

3.4 Revenue

To obtain possible revenue rankings for the different auction formats, we first note that when the allocation rule is fixed to RBB, using either a first-price, second-price, or truthful payment rule leads to the same expected revenue in a symmetric, increasing Bayes-Nash equilibrium. Because a RBB auction ranks agents by their true values in equilibrium for any of these payment rules (by Lemma 1), it follows that expected revenue is the same for all these payment rules, following arguments that are virtually identical to those used to establish revenue equivalence in the single-item case (see e.g. Proposition 3.1 in Krishna [11]). The same holds for RBR auctions; however, the revenue ranking of the RBB and RBR allocation rules is still unclear. Because of this revenue equivalence principle, we can choose whichever payment rule is most convenient for the purpose of making revenue comparisons.

Using Propositions 1 and 2, it is a simple matter to derive formulae for the expected revenue under both allocation rules. The expected payment of an agent with value x and relevance α in a first-price RBB auction is

m^V(x, α) = α G(x) ˜x^V(x, α)

The expected revenue is then N · E[m^V(X, A)], where the expectation is taken with respect to the joint density of value and relevance. The expected revenue formula for RBR auctions is entirely analogous, using ˜x^R(x, α) and the H function. With these in hand we can obtain revenue rankings for specific numbers of bidders and slots, and specific distributions over values and relevance.

EXAMPLE. Assume there are 2 bidders, 2 slots, and that γ1 = 1, γ2 = 1/2. Assume that value-relevance pairs are uniformly distributed over [0, 1] × [0, 1]. For such a distribution with a closed-form formula, it is most convenient to use the revenue formulae just derived. RBB dominates RBR in terms of revenue for these parameters: the formula for the expected revenue in a RBB auction yields 1/12, whereas for RBR auctions we have 7/108. Assume instead that with probability 1/2 an agent's value-relevance pair is (1, 1/2), and that with probability 1/2 it is (1/2, 1). In this scenario it is more convenient to appeal to formulae (1) and (2). In a truthful auction the second agent will always pay 0.
According to (1), in a truthful RBB auction the first agent makes an expected payment of

(γ1 − γ2) E[Aσ(1)] E[Xσ(2)]

where we have used the fact that value and relevance are independently distributed for different agents. The expected relevance of the agent with the highest value is E[Aσ(1)] = 5/8. The expected second highest value is also E[Xσ(2)] = 5/8. The expected revenue for a RBB auction here is then 25/128. According to (2), in a truthful RBR auction the first agent makes an expected payment of

(γ1 − γ2) E[Rσ(2)]

In expectation the second highest revenue is E[Rσ(2)] = 1/2, so the expected revenue for a RBR auction is 1/4. Hence in this case the RBR auction yields higher expected revenue.14,15

14 To be entirely rigorous and consistent with our initial assumptions, we should have constructed a continuous probability density with full support over an appropriate domain. Taking the domain to be e.g. [0, 1] × [0, 1] and a continuous density with full support that is sufficiently concentrated around (1, 1/2) and (1/2, 1), with roughly equal mass around both, would yield the same conclusion.

15 Claim 3 should serve as a word of caution, because Feng et al. [7] find through their simulations that with a bivariate normal distribution over value-relevance pairs, and with 5 slots, 15 bidders, and δ = 2, RBR dominates RBB in terms of revenue for any level of correlation between value and relevance. However, they assume that bidding behavior in a second-price slot auction can be well approximated by truthful bidding.

This example suggests the following conjecture.

CLAIM 3. When value and relevance are either uncorrelated or positively correlated, RBB dominates RBR in terms of revenue. When value and relevance are negatively correlated, RBR dominates.

4. COMPLETE INFORMATION

In typical slot auctions such as those run by Yahoo! and Google, bidders can adjust their bids up or down at any time. As Börgers et al. [2] and Edelman et al. [6] have noted, this can be viewed as a continuous-time process in which bidders learn each other's bids. If the process stabilizes, the result can then be modeled as a Nash equilibrium in pure strategies of the static one-shot game of complete information, since each bidder will be playing a best-response to the others' bids.16 This argument seems especially appropriate for Yahoo!'s slot auction design, where all bids are made public. Google keeps bids private, but experimentation can allow one to discover other bids, especially since second pricing automatically reveals to an agent the bid of the agent ranked directly below him.

16 We do not claim that bidders will actually learn each others' private information (value and relevance), just that for a stable set of bids there is a corresponding equilibrium of the complete information game.

4.1 Equilibrium Analysis

In this section we ask whether a pure-strategy Nash equilibrium exists in a RBB or RBR slot auction, with either first or second pricing. Before dealing with the first-price case there is a technical issue involving ties. In our model we allow bids to be nonnegative real numbers for mathematical convenience, but this can become problematic because there is then no bid that is "just higher" than another. We brush over such issues by assuming that an agent can bid "infinitesimally higher" than another. This is imprecise but allows us to focus on the intuition behind the result that follows. See Reny [17] for a full treatment of such issues. For the remainder of the paper, we assume that there are as many slots as bidders.
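Before turning to the equilibrium analysis, here is a quick Monte Carlo check (our own code) of the first scenario in the Example of Section 3.4. It simulates the truthful payment rules (1) and (2) as reconstructed above and relies on the revenue equivalence argument of Section 3.4; the estimates should land near 1/12 for RBB and 7/108 for RBR. All names and the choice of sample size are ours.

```python
# A Monte Carlo sanity check (our own code) of the numbers in the Example of Section 3.4:
# with 2 bidders, 2 slots, gamma = (1, 1/2), and (value, relevance) uniform on [0,1]^2,
# expected revenue is 1/12 under RBB and 7/108 under RBR. By the revenue equivalence
# argument in the text we may simulate the truthful payment rules (1) and (2).
import random

def truthful_revenue(values, alphas, gammas, rule):
    """Seller's revenue for one realization under the truthful rule (1) or (2)."""
    n, K = len(values), len(gammas)
    score = [a * x if rule == "RBR" else x for x, a in zip(values, alphas)]
    order = sorted(range(n), key=lambda i: -score[i])      # slot k -> bidder order[k]
    g = list(gammas) + [0.0]                                # gamma_{K+1} = 0 by convention
    s = [score[i] for i in order] + [0.0] * (K + 1)         # kth highest scores, padded with 0
    total = 0.0
    for k in range(K):
        i = order[k]
        # per-click charge for slot k: (1/gamma_k) * sum_j (gamma_{j-1} - gamma_j) * score^{(j)},
        # additionally divided by alpha_i under RBR (rule (2) as reconstructed in Section 3.1)
        charge = sum((g[j - 1] - g[j]) * s[j] for j in range(k + 1, K + 1)) / gammas[k]
        if rule == "RBR":
            charge /= alphas[i]
        total += gammas[k] * alphas[i] * charge             # expected clicks * price per click
    return total

def estimate(rule, trials=100_000, seed=0):
    rng = random.Random(seed)
    gammas = [1.0, 0.5]
    acc = 0.0
    for _ in range(trials):
        values = [rng.random(), rng.random()]
        alphas = [rng.random(), rng.random()]
        acc += truthful_revenue(values, alphas, gammas, rule)
    return acc / trials

print(estimate("RBB"))   # should be close to 1/12  ~ 0.0833
print(estimate("RBR"))   # should be close to 7/108 ~ 0.0648
```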
The following result shows that there can be no pure-strategy Nash equilibrium with first pricing.17 Note that the argument holds for both RBB and RBR allocation rules. For RBB, bids should be interpreted as declared values, and for RBR as declared revenues.

17 Börgers et al. [2] have proven this result in a model with three bidders and three slots, and we generalize their argument. Edelman et al. [6] also point out this non-existence phenomenon. They only illustrate the fact with an example because the result is quite immediate.

THEOREM 2. With either the RBB or the RBR allocation rule, a first-price slot auction has no Nash equilibrium in pure strategies.

PROOF. Let σ: [K] → [N] be the allocation of slots to the agents resulting from their bids. Let ri and bi be the revenue and bid of the agent ranked ith, respectively. Note that we cannot have bi > bi+1, or else the agent in slot i can make a profitable deviation by instead bidding bi − ε > bi+1 for small enough ε > 0. This does not change its allocation, but increases its profit. Hence we must have bi = bi+1 (i.e. with one bidder bidding infinitesimally higher than the other). Since this holds for any two consecutive bidders, it follows that in a Nash equilibrium all bidders must be bidding 0 (since the bidder ranked last matches the bid directly below him, which is 0 by default because there is no such bid). But this is impossible: consider the bidder ranked last. The identity of this bidder is always clear given the deterministic tie-breaking rule. This bidder can obtain the top spot and increase his revenue by (γ1 − γK) rK > 0 by bidding some ε > 0, and for small enough ε this is necessarily a profitable deviation. Hence there is no Nash equilibrium in pure strategies.

On the other hand, we find that in a second-price slot auction there can be a multitude of pure strategy Nash equilibria. The next two lemmas give conditions that characterize the allocations that can occur as a result of an equilibrium profile of bids, given fixed agent values and revenues. Then if we can exhibit an allocation that satisfies these conditions, there must exist at least one equilibrium. We first consider the RBR case.

PROOF. There exists a desired vector b which constitutes a Nash equilibrium if and only if the following set of inequalities can be satisfied (the variables are the πi and bj): Here rσ(i) is the revenue of the agent allocated slot i, and πi and bi may be interpreted as this agent's surplus and declared revenue, respectively. We first argue that constraints (6) can be removed, because the inequalities above can be satisfied if and only if the inequalities without (6) can be satisfied. The necessary direction is immediate. Assume we have a vector (π, b) which satisfies all inequalities above except (6). Then there is some i for which bi < bi+1. Construct a new vector (π, b') identical to the original except with b'i+1 = bi. We now have b'i = b'i+1. An agent in slot k < i sees the price of slot i decrease from bi+1 to b'i+1 = bi, but this does not make i more preferred than k to this agent because we have πk ≥ γi−1(rσ(k) − bi) ≥ γi(rσ(k) − bi) = γi(rσ(k) − b'i+1) (i.e. because the agent in slot k did not originally prefer slot i − 1 at price bi, he will not prefer slot i at price bi). A similar argument applies for agents in slots k > i + 1. The agent in slot i sees the price of this slot go down, which only makes it more preferred. Finally, the agent in slot i + 1 sees no change in the price of any slot, so his slot remains most preferred. Hence inequalities (3)-(5) remain valid at (π, b').
We first make this change to the bi +1 where bi <bi +1 and index i is smallest. We then recursively apply the change until we eventually obtain a vector that satisfies all inequalities. We safely ignore inequalities (6) from now on. By the Farkas lemma, the remaining inequalities can be satisfied if and only if there is no vector z such that Note that a variable of the form zσ (i) i appears at most once in a constraint of type (8), so such a variable can never be positive. Also, zσ (i) 1 = 0 for all i = 1 by constraint (7), since such variables never appear with another of the form zσ (i) i. Now if we wish to raise zσ (i) j above 0 by one unit for j = i, we must lower zσ (i) i by one unit because of the constraint of type (8). Because γjrσ (i) ≤ γirσ (i) for i <j, raising zσ (i) j with i <j while adjusting other variables to maintain feasibility cannot make the objective Pi, j (7jrσ (i)) zσ (i) j positive. If this objective is positive, then this is due to some component zσ (i) j with i> j being positive. Now for the constraints of type (7), if i> j then zσ (i) j appears with zσ (j − 1) j − 1 (for 1 <j <N). So to raise the former variable 7 − 1 j units and maintain feasibility, we must (I) lower zσ (i) i by 7 − 1 j units, and (II) lower zσ (j − 1) j − 1 by for 2 ≤ j ≤ N − 1 and i> j, raising some zσ (i) j with i> j cannot make the objective positive, and there is no z that satisfies all inequalities above. Conversely, if some inequality (9) does not hold, the objective can be made positive by raising the corresponding zσ (i) j and adjusting other variables so that feasibility is just maintained. By a slight reindexing, inequalities (9) yield the statement of the theorem. The RBB case is entirely analogous. PROOF SKETCH. The proof technique is the same as in the previous lemma. The desired Nash equilibrium exists if and only if a related set of inequalities can be satisfied; by the Farkas lemma, this occurs if and only if an alternate set of inequalities cannot be satisfied. The conditions that determine whether the latter holds are given in the statement of the lemma. The two lemmas above immediately lead to the following result. PROOF. First consider RBB. Suppose agents are ranked according to their true values. Since xσ (i) ≤ xσ (j) for i> j, the system of inequalities in Lemma 3 is satisfied, and the allocation is the result of some Nash equilibrium bid profile. By the same type of argument but appealing to Lemma 2 for RBR, there exists a Nash equilibrium bid profile such that bidders are ranked according to their true revenues. By Theorem 1, this latter allocation is efficient. This theorem establishes existence but not uniqueness. Indeed we expect that in many cases there will be multiple allocations (and hence equilibria) which satisfy the conditions of Lemmas 2 and 3. In particular, not all equilibria of a second-price RBR auction will be efficient. For instance, according to Lemma 2, with two agents and two slots any allocation can arise in a RBR equilibrium because no constraints apply. Theorems 2 and 3 taken together provide a possible explanation for Yahoo!'s switch from first to second pricing. We saw in Section 3.1 that this does not induce truthfulness from bidders. With first pricing, there will always be some bidder that feels compelled to adjust his bid. Second pricing is more convenient because an equilibrium can be reached, and this reduces the cost of bid management. 
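To see the multiplicity of second-price equilibria concretely, the following brute-force sketch (our own code; the bid grid, the instance, and all names are hypothetical choices of ours) searches a discretized bid space for pure-strategy Nash equilibria of the second-price RBR auction with two agents and two slots. The grid is only an approximation of the continuous-bid game analyzed above, but it already exhibits equilibria inducing both possible rankings, in line with the remark that with two agents and two slots any allocation can arise in a RBR equilibrium.

```python
# A brute-force sketch (our own illustration) of the multiplicity of pure-strategy
# Nash equilibria in a second-price RBR slot auction, on a small discretized bid grid.
from itertools import product

GAMMAS = [1.0, 0.5]
ALPHAS = [1.0, 1.0]
VALUES = [4.0, 3.0]          # with alpha = 1, revenues equal values
GRID   = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0]

def payoffs(bids):
    """Second-price RBR payoffs for a profile of declared values; ties broken by index."""
    rev = [a * b for a, b in zip(ALPHAS, bids)]
    order = tuple(sorted(range(len(bids)), key=lambda i: (-rev[i], i)))
    pay = [0.0] * len(bids)
    for k, i in enumerate(order):
        if k < len(GAMMAS):
            price = rev[order[k + 1]] / ALPHAS[i] if k + 1 < len(order) else 0.0
            pay[i] = GAMMAS[k] * ALPHAS[i] * (VALUES[i] - price)
    return pay, order

def is_nash(bids):
    base, _ = payoffs(bids)
    for i in range(len(bids)):
        for dev in GRID:
            trial = list(bids)
            trial[i] = dev
            if payoffs(trial)[0][i] > base[i] + 1e-9:   # strictly profitable deviation
                return False
    return True

equilibria = [b for b in product(GRID, repeat=2) if is_nash(b)]
rankings = {payoffs(b)[1] for b in equilibria}
print(len(equilibria), "pure-strategy equilibria on this grid")
print("rankings that arise:", rankings)   # both (0, 1) and (1, 0) appear
```

For instance, the profile (1.0, 2.0) is an equilibrium in which the lower-revenue agent is ranked first, while (3.0, 0.0) is an equilibrium with the efficient ranking, so not every equilibrium allocation is efficient.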
4.2 Efficiency

For a given allocation rule, we call the allocation that would result if the bidders reported their values truthfully the standard allocation. Hence in the standard RBB allocation bidders are ranked by true values, and in the standard RBR allocation they are ranked by true revenues. According to Lemmas 2 and 3, a ranking that results from a Nash equilibrium profile can only deviate from the standard allocation by having agents with relatively similar values or revenues switch places. That is, if ri > rj then with RBR agent j can be ranked higher than i only if the ratio rj/ri is sufficiently large; similarly for RBB. This suggests that the value of an equilibrium allocation cannot differ too much from the value obtained in the standard allocation, and the following theorems confirm this.

For an allocation σ of slots to agents, we denote its total value by f(σ) = Σ_{i=1}^{N} γi rσ(i). We denote by g(σ) = Σ_{i=1}^{N} γi xσ(i) allocation σ's value when assuming all agents have identical relevance, normalized to 1. Let

L = min_i min{ γ_{i+1}/γi, 1 − γ_{i+2}/γ_{i+1} }

(where by default γ_{N+1} = 0). Let ηx and ηr be the standard allocations when using RBB and RBR, respectively.

THEOREM 4. For an allocation σ that results from a pure-strategy Nash equilibrium of a second-price RBR slot auction, we have f(σ) ≥ L f(ηr).

PROOF. We number the agents so that agent i has the ith highest revenue, so r1 ≥ r2 ≥ ... ≥ rN. Hence the standard allocation has value f(ηr) = Σ_{i=1}^{N} γi ri. To prove the theorem, we will make repeated use of the fact that Σ_k ak / Σ_k bk ≥ min_k ak/bk when the ak and bk are positive. Note that according to Lemma 2, if agent i lies at least two slots below slot j, then rσ(j) ≥ ri (1 − γ_{j+2}/γ_{j+1}).

It may be the case that for some slot i, we have σ(i) > i and for slots k > i + 1 we have σ(k) > i. We then say that slot i is inverted. Let S be the set of agents with indices at least i + 1; there are N − i of these. If slot i is inverted, it is occupied by some agent from S. Also all slots strictly lower than i + 1 must be occupied by the remaining agents from S, since σ(k) > i for k ≥ i + 2. The agent in slot i + 1 must then have an index σ(i + 1) ≤ i (note this means slot i + 1 cannot be inverted). Now there are two cases. In the first case we have σ(i) = i + 1. Then rσ(i) = r_{i+1} and rσ(i+1) ≥ ri, so

γi rσ(i) + γ_{i+1} rσ(i+1) ≥ γi r_{i+1} + γ_{i+1} ri ≥ (γ_{i+1}/γi)(γi ri + γ_{i+1} r_{i+1})

In the second case we have σ(i) > i + 1. Then since all agents in S except the one in slot i lie strictly below slot i + 1, and the agent in slot i is not agent i + 1, it must be that agent i + 1 is in a slot strictly below slot i + 1. This means that it is at least two slots below the agent that actually occupies slot i, and by Lemma 2 we then have rσ(i) ≥ r_{i+1}(1 − γ_{i+2}/γ_{i+1}).

If slot i is not inverted, then on one hand we may have σ(i) < i, in which case rσ(i)/ri > 1. On the other hand we may have σ(i) > i but there is some agent with index j < i that lies at least two slots below slot i. Then by Lemma 2, rσ(i) ≥ rj(1 − γ_{i+2}/γ_{i+1}) ≥ ri(1 − γ_{i+2}/γ_{i+1}).

We write i ∈ I if slot i is inverted, and i ∈ Ī if neither i nor i − 1 is inverted. Grouping the terms of f(σ) into the pairs {i, i + 1} with i ∈ I and the singletons i ∈ Ī, the bounds above show that each group's contribution to f(σ) is at least L times its contribution to f(ηr); applying the fact stated at the outset gives f(σ)/f(ηr) ≥ L, and this completes the proof.

Note that for RBR, the standard value is also the efficient value by Theorem 1. Also note that for an exponential decay model, L = min{1/δ, 1 − 1/δ}. With δ = 1.428 (see Section 2.1), the factor is L ≈ 1/3.34, so the total value in a pure-strategy Nash equilibrium of a second-price RBR slot auction is always within a factor of 3.34 of the efficient value with such a discount. Again for RBB we have an analogous result.

THEOREM 5. For an allocation σ that results from a pure-strategy Nash equilibrium of a second-price RBB slot auction, we have g(σ) ≥ L g(ηx).
PROOF SKETCH. Simply substitute bidder values for bidder revenues in the proof of Theorem 4, and appeal to Lemma 3. 5. CONCLUSIONS This paper analyzed stylized versions of the slot auction designs currently used by Yahoo! and Google, namely "rank by bid" (RBB) and "rank by revenue" (RBR), respectively. We also considered first and second pricing rules together with each of these allocation rules, since both have been used historically. We first studied the "short-run" setting with incomplete information, corresponding to the case where agents have just approached the mechanism. Our equilibrium analysis revealed that RBB has much weaker informational requirements than RBR, because bidders need not know any information about relevance (even their own) to play the Bayes-Nash equilibrium. However, RBR leads to an efficient allocation in equilibrium, whereas RBB does not. We showed that for an arbitrary distribution over value and relevance, no revenue ranking of RBB and RBR is possible. We hope that the tools we used to establish these results (revenue equivalence, the form of first-price equilibria, the truthful payments rules) will help others wanting to pursue further analyses of slot auctions. We also studied the "long-run" case where agents have experimented with their bids and each settled on one they find optimal. We argued that a stable set of bids in this setting can be modeled as a pure-strategy Nash equilibrium of the static game of complete information. We showed that no pure-strategy equilibrium exists with either RBB or RBR using first pricing, but that with second pricing there always exists such an equilibrium (in the case of RBR, an efficient equilibrium). In general second pricing allows for multiple pure-strategy equilibria, but we showed that the value of such equilibria diverges by only a constant factor from the value obtained if all agents bid truthfully (which in the case of RBR is the efficient value). 6. FUTURE WORK Introducing budget constraints into the model is a natural next step for future work. The complication here lies in the fact that budgets are often set for entire campaigns rather than single keywords. Assuming that the optimal choice of budget can be made independent of the choice of bid for a specific keyword, it can be shown that it is a dominant-strategy to report this optimal budget with one's bid. The problem is then to ascertain that bids and budgets can indeed be optimized separately, or to find a plausible model where deriving equilibrium bids and budgets together is tractable. Identifying a condition on the distribution over value and relevance that actually does yield a revenue ranking of RBB and RBR (such as correlation between value and relevance, perhaps) would yield a more satisfactory characterization of their relative revenue properties. Placing bounds on the revenue obtained in a complete information equilibrium is also a relevant question. Because the incomplete information case is such a close generalization of the most basic single-item auction model, it would be interesting to see which standard results from single-item auction theory (e.g. results with risk-averse bidders, an endogenous number of bidders, asymmetries, etc. . .) automatically generalize and which do not, to fully understand the structural differences between single-item and slot auctions.
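As a closing illustration, the following small sketch (our own code) computes the constant factor L of Section 4.2 for the exponential decay clickthrough model of Section 2.1. The expression used for L follows the definition as reconstructed in Section 4.2, so it should be read as an assumption rather than a quotation of the original formula.

```python
# A small sketch (our own) of the constant factor L discussed with Theorems 4 and 5,
# using the exponential decay clickthrough model gamma_k = delta**(1 - k) from Section 2.1.
# The expression for L below follows the definition as reconstructed in Section 4.2.
def decay_factors(delta, K):
    return [delta ** (1 - k) for k in range(1, K + 1)]

def L_factor(gammas):
    g = list(gammas) + [0.0]                      # gamma_{N+1} = 0 by convention
    ratios = []
    for i in range(len(gammas) - 1):
        ratios.append(g[i + 1] / g[i])            # gamma_{i+1} / gamma_i
        ratios.append(1.0 - g[i + 2] / g[i + 1])  # 1 - gamma_{i+2} / gamma_{i+1}
    return min(ratios)

gammas = decay_factors(1.428, K=5)
L = L_factor(gammas)
print(L, 1 / L)   # ~0.2997 and ~3.34: equilibrium value within a factor 3.34 of efficient
```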
An Analysis of Alternative Slot Auction Designs for Sponsored Search ABSTRACT Billions of dollars are spent each year on sponsored search, a form of advertising where merchants pay for placement alongside web search results. Slots for ad listings are allocated via an auction-style mechanism where the higher a merchant bids, the more likely his ad is to appear above other ads on the page. In this paper we analyze the incentive, efficiency, and revenue properties of two slot auction designs: "rank by bid" (RBB) and "rank by revenue" (RBR), which correspond to stylized versions of the mechanisms currently used by Yahoo! and Google, respectively. We also consider first - and second-price payment rules together with each of these allocation rules, as both have been used historically. We consider both the "short-run" incomplete information setting and the "long-run" complete information setting. With incomplete information, neither RBB nor RBR are truthful with either first or second pricing. We find that the informational requirements of RBB are much weaker than those of RBR, but that RBR is efficient whereas RBB is not. We also show that no revenue ranking of RBB and RBR is possible given an arbitrary distribution over bidder values and relevance. With complete information, we find that no equilibrium exists with first pricing using either RBB or RBR. We show that there typically exists a multitude of equilibria with second pricing, and we bound the divergence of (economic) value in such equilibria from the value obtained assuming all merchants bid truthfully. 1. INTRODUCTION PricewaterhouseCoopers estimates that 2004 industry-wide sponsored search revenues were $3.9 billion, or 40% of total Internet advertising revenues . ' Sponsored search is a form of advertising where merchants pay to appear alongside web search results. .) may bid to to have their listings featured alongside the standard "algorithmic" search listings. By convention, sponsored search advertisers generally pay per click, meaning that they pay only when a user clicks on their ad, and do not pay if their ad is displayed but not clicked. Though many people claim to systematically ignore sponsored search ads, Majestic Research reports that We call each position in the list a slot. Generally, advertisements that appear in a higher ranked slot (higher on the page) garner more attention and more clicks from users. Thus, all else being equal, merchants generally prefer higher ranked slots to lower ranked slots. As mentioned, bids are expressed as a maximum willingness to pay per click. For example, a forty-cent bid by HostRocket for "web hosting" means HostRocket is willing to pay up to forty cents every time a user clicks on their ad .5 The auctioneer (the search engine6) evaluates the bids and allocates slots to advertisers. In principle, the allocation decision can be altered with each new incoming search query, so in effect new auctions clear continuously over time as search queries arrive. Many allocation rules are plausible. In this paper, we investigate two allocation rules, roughly corresponding to the two allocation rules used by Yahoo! and Google. The "rank by bid" (RBB) allocation assigns slots in order of bids, with higher ranked slots going to higher bidders. The "rank by revenue" (RBR) allocation assigns slots in order of the product of bid times expected relevance, where relevance is the proportion of users that click on the merchant's ad after viewing it. 
In our model, we assume that an ad's expected relevance is known to the auctioneer and the advertiser (but not necessarily to other advertisers), and that clickthrough rate decays monotonically with lower ranked slots. engine are not always the same entity. For example Google runs the sponsored search ads for AOL web search, with revenue being shared. Similarly, Yahoo! currently runs the sponsored search ads for MSN web search, though Microsoft will begin independent operations soon. tion in subtle ways. First, a single bid per merchant is used to allocate multiple non-identical slots. Second, the bid is communicated not as a direct preference over slots, but as a preference for clicks that depend stochastically on slot allocation. We investigate a number of economic properties of RBB and RBR slot auctions. We consider the "short-run" incomplete information case in Section 3, adapting and extending standard analyses of single-item auctions. In Section 4 we turn to the "long-run" complete information case; our characterization results here draw on techniques from linear programming. Throughout, important observations are highlighted as claims supported by examples. Our contributions are as follows: • We show that with multiple slots, bidders do not reveal their true values with either RBB or RBR, and with either first - or second-pricing. • With incomplete information, we find that the informational requirements of playing the equilibrium bid are much weaker for RBB than for RBR, because bidders need not know any information about each others' relevance (or even their own) with RBB. • With incomplete information, we prove that RBR is efficient but that RBB is not. • We show via a simple example that no general revenue ranking of RBB and RBR is possible. • We prove that in a complete-information setting, firstprice slot auctions have no pure strategy Nash equilibrium, but that there always exists a pure-strategy equilibrium with second pricing. • We provide a constant-factor bound on the deviation from efficiency that can occur in the equilibrium of a second-price slot auction. In Section 2 we specify our model of bidders and the various slot auction formats. In Section 3.1 we study the incentive properties of each format, asking in which cases agents would bid truthfully. This is useful for the efficiency and revenue results in later sections. It should become clear in this section that slot auctions in our model are a straightforward generalization of single-item auctions. Sections 3.3 and 3.4 address questions of efficiency and revenue under incomplete information, respectively. In Section 4.1 we determine whether pure-strategy equilibria exist for the various auction formats, under complete information. In Section 4.2 we derive bounds on the deviation from efficiency in the pure-strategy equilibria of secondprice slot auctions. Our approach is positive rather than normative. We aim to clarify the incentive, efficiency, and revenue properties of two slot auction designs currently in use, under settings of incomplete and complete information. We do not attempt to derive the "optimal" mechanism for a slot auction. Related work. Feng et al. [7] compare the revenue performance of various ranking mechanisms for slot auctions in a model with incomplete information, much as we do in Section 3.4, but they obtain their results via simulations whereas we perform an equilibrium analysis. Liu and Chen [12] study properties of slot auctions under incomplete information. 
Their setting is essentially the same as ours, except they restrict their attention to a model with a single slot and a binary type for bidder relevance (high or low). They find that RBR is efficient, but that no general revenue ranking of RBB and RBR is possible, which agrees with our results. They also take a design approach and show how the auctioneer should assign relevance scores to optimize its revenue. Edelman et al. [6] model the slot auction problem both as a static game of complete information and a dynamic game of incomplete information. They do not view slot auctions as static games of incomplete information as we do, but do study them as dynamic games of incomplete information and derive results on the uniqueness and revenue properties of the resulting equilibria. They also provide a nice description of the evolution of the market for sponsored search. Varian [18] also studies slot auctions under a setting of complete information. He focuses on "symmetric" equilibria, which are a refinement of Nash equilibria appropriate for slot auctions. He provides bounds on the revenue obtained in equilibrium. He also gives bounds that can be used to infer bidder values given their bids, and performs some empirical analysis using these results. In contrast, we focus instead on efficiency and provide bounds on the deviation from efficiency in complete-information equilibria. 2. PRELIMINARIES We focus on a slot auction for a single keyword. In a setting of incomplete information, a bidder knows only distributions over others' private information (value per click and relevance). With complete information, a bidder knows others' private information, and so does not need to rely on distributions to strategize. We first describe the model for the case with incomplete information, and drop the distributional information from the model when we come to the complete-information case in Section 4. 2.1 The Model There is a fixed number K of slots to be allocated among N bidders. We assume without loss of generality that K ≤ N, since superfluous slots can remain blank. Bidder i assigns a value of Xi to each click received on its advertisement, regardless of this advertisement's rank .9 The probability that i's advertisement will be clicked if viewed is Ai ∈ [0, 1]. We refer to Ai as bidder i's relevance. We refer to Ri = AiXi as bidder i's revenue. However, for the purposes of our model it is also important that bidders believe that conversion rate does not vary with rank. The probability that an advertisement will be viewed if placed in slot j is ryj ∈ [0, 1]. We assume ry1> rye>...> ryK. Hence bidder i's advertisement will have a clickthrough rate of ryjαi if placed in slot j. Of course, an advertisement does not receive any clicks if it is not allocated a slot. The density f and slot probabilities ry1,..., ryK are common knowledge. Only bidder i knows the realization xi of its value per click Xi. Both bidder i and the seller know the realization αi of Ai, but this realization remains unobservable to the other bidders. We assume that bidders have quasi-linear utility functions. That is, the expected utility to bidder i of obtaining the slot of rank j at a price of b per click is ui (j, b) = ryj αi (xi − b) If the advertising firms bidding in the slot auction are riskneutral and have ample liquidity, quasi-linearity is a reasonable assumption. The assumptions of independence, symmetry, and riskneutrality made above are all quite standard in single-item auction theory [11, 19]. 
The assumption that clickthrough rate decays monotonically with lower slots (by the same factors for each agent) is unique to the slot auction problem. We view it as a main contribution of our work to show that this assumption allows for tractable analysis of the slot auction problem using standard tools from single-item auction theory. It also allows for interesting results in the complete information case. A common model of decaying clickthrough rate is the exponential decay model, where γk = δ^(1−k) with decay δ > 1. Actual clickthrough data reported in the literature is fitted extremely well by an exponential decay model with δ = 1.428. Our model lacks budget constraints, which are an important feature of real slot auctions. With budget constraints keyword auctions cannot be considered independently of one another, because the budget must be allocated across multiple keywords; a single advertiser typically bids on multiple keywords relevant to his business. Introducing this element into the model is an important next step for future work. (Models with budget constraints have begun to appear in this research area. Abrams [1] and Borgs et al. [3] design multi-unit auctions for budget-constrained bidders, which can be interpreted as slot auctions, with a focus on revenue optimization and truthfulness. Mehta et al. [14] address the problem of matching user queries to budget-constrained advertisers so as to maximize revenue.) 2.2 Auction Formats In a slot auction a bidder provides to the seller a declared value per click x̃i(xi, αi) which depends on his true value and relevance. We often denote this declared value (bid) by x̃i for short. Since a bidder's relevance αi is observable to the seller, the bidder cannot misrepresent it. We denote the kth highest of the N declared values by x̃(k), and the kth highest of the N declared revenues by r̃(k), where the declared revenue of bidder i is r̃i = αix̃i. We consider two types of allocation rules, "rank by bid" (RBB) and "rank by revenue" (RBR): RBB. Slot k goes to bidder i if and only if x̃i = x̃(k). RBR. Slot k goes to bidder i if and only if r̃i = r̃(k). Hence slot k goes to bidder σ(k), where σ denotes the resulting assignment of bidders to slots. We also consider two different types of payment rules. Note that no matter what the payment rule, a bidder that is not allocated a slot will pay 0 since his listing cannot receive any clicks. First-price. The bidder allocated slot k, namely σ(k), pays x̃σ(k) per click under both the RBB and RBR allocation rules. Second-price. If k < N, bidder σ(k) pays x̃σ(k+1) per click under the RBB rule, and pays r̃σ(k+1)/ασ(k) per click under the RBR rule. If k = N, bidder σ(k) pays 0 per click. Intuitively, a second-price payment rule sets a bidder's payment to the lowest bid it could have declared while maintaining the same ranking, given the allocation rule used. Overture introduced the first slot auction design in 1997, using a first-price RBB scheme. Google then followed in 2000 with a second-price RBR scheme. In 2002, Overture (at this point acquired by Yahoo!) then switched to second pricing but still allocates using RBB. One possible reason for the switch is given in Section 4. We assume that ties are broken as follows in the event that two agents make the exact same bid or declare the same revenue. If the bids of agents i and j are tied, then agent i obtains a higher slot if and only if κ(i) < κ(j), where κ gives the bidders' order of arrival. This is consistent with the practice in real slot auctions where ties are broken by the bidders' order of arrival.
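The allocation and payment rules above are easy to state operationally. The following minimal sketch (not from the paper) implements RBB and RBR allocation with second pricing as just defined; the bid values and relevances are hypothetical.

```python
# Minimal sketch (not from the paper) of the RBB and RBR allocation rules with
# second pricing as defined above; the bids and relevances are hypothetical.

def allocate(bids, relevances, K, rule="RBR"):
    """Return (bidder, price per click) for slots 1..K under the chosen rule."""
    n = len(bids)
    score = (lambda i: bids[i]) if rule == "RBB" else (lambda i: relevances[i] * bids[i])
    order = sorted(range(n), key=score, reverse=True)    # order[k-1] is sigma(k)
    slots = []
    for k in range(min(K, n)):
        winner = order[k]
        if k + 1 < n:                                    # second-price payment
            nxt = order[k + 1]
            price = bids[nxt] if rule == "RBB" else relevances[nxt] * bids[nxt] / relevances[winner]
        else:
            price = 0.0                                  # bidder sigma(N) pays 0
        slots.append((winner, price))
    return slots

bids   = [2.0, 1.5, 1.0]    # declared values per click
alphas = [0.2, 0.5, 0.3]    # observable relevances
print("RBB:", allocate(bids, alphas, K=2, rule="RBB"))
print("RBR:", allocate(bids, alphas, K=2, rule="RBR"))
```

Note how the two rules can rank the same bids differently: RBB orders bidders purely by declared value, while RBR promotes a lower bid with high relevance.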
6. FUTURE WORK Introducing budget constraints into the model is a natural next step for future work. The complication here lies in the fact that budgets are often set for entire campaigns rather than single keywords. The problem is then to ascertain that bids and budgets can indeed be optimized separately, or to find a plausible model where deriving equilibrium bids and budgets together is tractable. Placing bounds on the revenue obtained in a complete information equilibrium is also a relevant question. Because the incomplete information case is such a close generalization of the most basic single-item auction model, it would be interesting to see which standard results from single-item auction theory (e.g., results with risk-averse bidders, an endogenous number of bidders, asymmetries, etc.) automatically generalize and which do not, to fully understand the structural differences between single-item and slot auctions.
J-55
From Optimal Limited To Unlimited Supply Auctions
We investigate the class of single-round, sealed-bid auctions for a set of identical items to bidders who each desire one unit. We adopt the worst-case competitive framework defined by [9, 5] that compares the profit of an auction to that of an optimal single-price sale of at least two items. In this paper, we first derive an optimal auction for three items, answering an open question from [8]. Second, we show that the form of this auction is independent of the competitive framework used. Third, we propose a schema for converting a given limited-supply auction into an unlimited supply auction. Applying this technique to our optimal auction for three items, we achieve an auction with a competitive ratio of 3.25, which improves upon the previously best-known competitive ratio of 3.39 from [7]. Finally, we generalize a result from [8] and extend our understanding of the nature of the optimal competitive auction by showing that the optimal competitive auction occasionally offers prices that are higher than all bid values.
[ "unlimit suppli", "auction", "ratio", "mechan design", "competit analysi", "benchmark", "aggreg auction", "bound", "prefer", "distribut" ]
[ "P", "P", "P", "U", "M", "U", "M", "U", "U", "U" ]
From Optimal Limited To Unlimited Supply Auctions Jason D. Hartline Microsoft Research 1065 La Avenida) Mountain View, CA 94043 hartline@microsoft.com Robert McGrew ∗ Computer Science Department Stanford University Stanford, CA 94305 bmcgrew@stanford.edu ABSTRACT We investigate the class of single-round, sealed-bid auctions for a set of identical items to bidders who each desire one unit. We adopt the worst-case competitive framework defined by [9, 5] that compares the profit of an auction to that of an optimal single-price sale of least two items. In this paper, we first derive an optimal auction for three items, answering an open question from [8]. Second, we show that the form of this auction is independent of the competitive framework used. Third, we propose a schema for converting a given limited-supply auction into an unlimited supply auction. Applying this technique to our optimal auction for three items, we achieve an auction with a competitive ratio of 3.25, which improves upon the previously best-known competitive ratio of 3.39 from [7]. Finally, we generalize a result from [8] and extend our understanding of the nature of the optimal competitive auction by showing that the optimal competitive auction occasionally offers prices that are higher than all bid values. Categories and Subject Descriptors F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical Algorithms and Problems; J.4 [Social and Behavioral Sciences]: Economics General Terms Algorithms, Economics, Theory 1. INTRODUCTION The research area of optimal mechanism design looks at designing a mechanism to produce the most desirable outcome for the entity running the mechanism. This problem is well studied for the auction design problem where the optimal mechanism is the one that brings the seller the most profit. Here, the classical approach is to design such a mechanism given the prior distribution from which the bidders'' preferences are drawn (See e.g., [12, 4]). Recently Goldberg et al. [9] introduced the use of worst-case competitive analysis (See e.g., [3]) to analyze the performance of auctions that have no knowledge of the prior distribution. The goal of such work is to design an auction that achieves a large constant fraction of the profit attainable by an optimal mechanism that knows the prior distribution in advance. Positive results in this direction are fueled by the observation that in auctions for a number of identical units, much of the distribution from which the bidders are drawn can be deduced on the fly by the auction as it is being run [9, 14, 2]. The performance of an auction in such a worst-case competitive analysis is measured by its competitive ratio, the ratio between a benchmark performance and the auction``s performance on the input distribution that maximizes this ratio. The holy grail of the worstcase competitive analysis of auctions is the auction that achieves the optimal competitive ratio (as small as possible). Since [9] this search has led to improved understanding of the nature of the optimal auction, the techniques for on-the-fly pricing in these scenarios, and the competitive ratio of the optimal auction [5, 7, 8]. In this paper we continue this line of research by improving in all of these directions. Furthermore, we give evidence corroborating the conjecture that the form of the optimal auction is independent of the benchmark used in the auction``s competitive analysis. This result further validates the use of competitive analysis in gauging auction performance. 
We consider the single item, multi-unit, unit-demand auction problem. In such an auction there are many units of a single item available for sale to bidders who each desire only one unit. Each bidder has a valuation representing how much the item is worth to him. The auction is performed by soliciting a sealed bid from each of the bidders and deciding on the allocation of units to bidders and the prices to be paid by the bidders. The bidders are assumed to bid so as to maximize their personal utility, the difference between their valuation and the price they pay. To handle the problem of designing and analyzing auctions where bidders may falsely declare their valuations to get a better deal, we will adopt the solution concept of truthful mechanism design (see, e.g., [9, 15, 13]). In a truthful auction, revealing one``s true valuation as one``s bid is an optimal strategy for each bidder regardless of the bids of the other bidders. In this paper, we will restrict our attention to truthful (a.k.a., incentive compatible or strategyproof) auctions. A particularly interesting special case of the auction problem is the unlimited supply case. In this case the number of units for sale is at least the number of bidders in the auction. This is natural for the sale of digital goods where there is negligible cost for duplicating 175 and distributing the good. Pay-per-view television and downloadable audio files are examples of such goods. The competitive framework introduced in [9] and further refined in [5] uses the profit of the optimal omniscient single priced mechanism that sells at least two units as the benchmark for competitive analysis. The assumption that two or more units are sold is necessary because in the worst case it is impossible to obtain a constant fraction of the profit of the optimal mechanism when it sells only one unit [9]. In this framework for competitive analysis, an auction is said to be β-competitive if it achieves a profit that is within a factor of β ≥ 1 of the benchmark profit on every input. The optimal auction is the one which is β-competitive with the minimum value of β. Previous to this work, the best known auction for the unlimited supply case had a competitive ratio of 3.39 [7] and the best lower bound known was 2.42 [8]. For the limited supply case, auctions can achieve substantially better competitive ratios. When there are only two units for sale, the optimal auction gives a competitive ratio of 2, which matches the lower bound for two units. When there are three units for sale, the best previously known auction had a competitive ratio of 2.3, compared with a lower bound of 13/6 ≈ 2.17 [8]. The results of this paper are as follows: • We give the auction for three units that is optimally competitive against the profit of the omniscient single priced mechanism that sells at least two units. This auction achieves a competitive ratio of 13/6, matching the lower bound from [8] (Section 3). • We show that the form of the optimal auction is independent of the benchmark used in competitive analysis. In doing so, we give an optimal three bidder auction for generalized benchmarks (Section 4). • We give a general technique for converting a limited supply auction into an unlimited supply auction where it is possible to use the competitive ratio of the limited supply auction to obtain a bound on the competitive ratio of the unlimited supply auction. We refer to auctions derived from this framework as aggregation auctions (Section 5). 
• We improve on the best known competitive ratio by proving that the aggregation auction constructed from our optimal three-unit auction is 3.25-competitive (Section 5.1). • Assuming the conjecture that the optimal ℓ-unit auction has a competitive ratio that matches the lower bound proved in [8], we show that this optimal auction for ℓ ≥ 3 on some inputs will occasionally offer prices that are higher than any bid in that input (Section 6). For the three-unit case where we have shown that the lower bound of [8] is tight, this observation led to our construction of the optimal three-unit auction. 2. DEFINITIONS AND BACKGROUND We consider single-round, sealed-bid auctions for a set of ℓ identical units of an item to bidders who each desire one unit. As mentioned in the introduction, we adopt the game-theoretic solution concept of truthful mechanism design. A useful simplification of the problem of designing truthful auctions is obtained through the following algorithmic characterization [9]. Related formulations to this one have appeared in numerous places in recent literature (e.g., [1, 14, 5, 10]). DEFINITION 1. Given a bid vector of n bids, b = (b1, ..., bn), let b-i denote the vector b with bi replaced with a '?', i.e., b-i = (b1, ..., bi−1, ?, bi+1, ..., bn). DEFINITION 2. Let f be a function from bid vectors (with a '?') to prices (non-negative real numbers). The deterministic bid-independent auction defined by f, BIf, works as follows. For each bidder i: 1. Set ti = f(b-i). 2. If ti < bi, bidder i wins at price ti. 3. If ti > bi, bidder i loses. 4. Otherwise (ti = bi), the auction can either accept the bid at price ti or reject it. A randomized bid-independent auction is a distribution over deterministic bid-independent auctions. The proof of the following theorem can be found, for example, in [5]. THEOREM 1. An auction is truthful if and only if it is equivalent to a bid-independent auction. Given this equivalence, we will use the terminology bid-independent and truthful interchangeably. For a randomized bid-independent auction, f(b-i) is a random variable. We denote the probability density of f(b-i) at z by ρb-i(z). We denote the profit of a truthful auction A on input b as A(b). The expected profit of the auction, E[A(b)], is the sum of the expected payments made by each bidder, which we denote by pi(b) for bidder i. Clearly, the expected payment of each bid satisfies pi(b) = ∫_0^{bi} x ρb-i(x) dx. 2.1 Competitive Framework We now review the competitive framework from [5]. In order to evaluate the performance of auctions with respect to the goal of profit maximization, we introduce the optimal single price omniscient auction F and the related omniscient auction F(2). DEFINITION 3. Given a vector b = (b1, ..., bn), let b(i) represent the i-th largest value in b. The optimal single price omniscient auction, F, is defined as follows. Auction F on input b determines the value k such that kb(k) is maximized. All bidders with bi ≥ b(k) win at price b(k); all remaining bidders lose. The profit of F on input b is thus F(b) = max_{1≤k≤n} k b(k). In the competitive framework of [5] and subsequent papers, the performance of a truthful auction is gauged in comparison to F(2), the optimal single-priced auction that sells at least two units. The profit of F(2) is max_{2≤k≤n} k b(k). There are a number of reasons to choose this benchmark for comparison; interested readers should see [5] or [6] for a more detailed discussion.
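As a concrete illustration of Definition 3, the following minimal sketch (not from the paper) computes F and F(2) for a hypothetical bid vector.

```python
# Minimal sketch (not from the paper) of the benchmarks in Definition 3,
# on a hypothetical bid vector.

def F(bids, min_winners=1):
    """max over k >= min_winners of k * b_(k), where b_(k) is the k-th largest bid."""
    b = sorted(bids, reverse=True)
    return max(k * b[k - 1] for k in range(min_winners, len(bids) + 1))

def F2(bids):
    """Optimal single-price profit when at least two units are sold."""
    return F(bids, min_winners=2)

bids = [10.0, 4.0, 3.0, 1.0]
print(F(bids))    # 10.0: sell one unit at price 10
print(F2(bids))   # 9.0: sell three units at price 3
```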
Let A be a truthful auction. We say that A is β-competitive against F(2) (or just β-competitive) if for all bid vectors b, the expected profit of A on b satisfies E[A(b)] ≥ F(2)(b)/β. In Section 4 we generalize this framework to other profit benchmarks. 2.2 Scale Invariant and Symmetric Auctions A symmetric auction is one where the auction outcome is unchanged when the input bids arrive in a different permutation. Goldberg et al. [8] show that a symmetric auction achieves the optimal competitive ratio. This is natural as the profit benchmark we consider is symmetric, and it allows us to consider only symmetric auctions when looking for the one with the optimal competitive ratio. An auction defined by bid-independent function f is scale invariant if, for all i and all z, Pr[f(b-i) ≥ z] = Pr[f(cb-i) ≥ cz]. It is conjectured that the assumption of scale invariance is without loss of generality. Thus, we are motivated to consider symmetric scale-invariant auctions. When specifying a symmetric scale-invariant auction we can assume that f is only a function of the relative magnitudes of the n − 1 bids in b-i and that one of the bids, bj = 1. It will be convenient to specify such auctions via the density function of f(b-i), ρb-i(z). It is enough to specify such a density function of the form ρ1,z1,...,zn−1(z) with 1 ≤ zi ≤ zi+1. 2.3 Limited Supply Versus Unlimited Supply Following [8], throughout the remainder of this paper we will be making the assumption that n = ℓ, i.e., the number of bidders is equal to the number of units for sale. This is without loss of generality as (a) any lower bound that applies to the n = ℓ case also extends to the case where n ≥ ℓ [8], and (b) there is a reduction from the unlimited supply auction problem to the limited supply auction problem that takes an unlimited supply auction that is β-competitive with F(2) and constructs a limited supply auction parameterized by ℓ that is β-competitive with F(2,ℓ), the optimal omniscient auction that sells between 2 and ℓ units [6]. Henceforth, we will assume that we are in the unlimited supply case, and we will examine lower bounds for limited supply problems by placing a restriction on the number of bidders in the auction. 2.4 Lower Bounds and Optimal Auctions Frequently in this paper, we will refer to the best known lower bound on the competitive ratio of truthful auctions: THEOREM 2. [8] The competitive ratio of any auction on n bidders is at least 1 − Σ_{i=2}^{n} (−1/n)^{i−1} (i/(i−1)) (n−1 choose i−1). DEFINITION 4. Let Υn denote the n-bidder auction that achieves the optimal competitive ratio. This bound is derived by analyzing the performance of any auction on the following distribution B. In each random bid vector B, each bid Bi is drawn i.i.d. from the distribution such that Pr[Bi ≥ s] ≤ 1/s for all s ∈ S. In the two-bidder case, this lower bound is 2. This is achieved by Υ2, which is the 1-unit Vickrey auction (the 1-unit Vickrey auction sells to the highest bidder at the second highest bid value). In the three-bidder case, this lower bound is 13/6. In the next section, we define the auction Υ3 which matches this lower bound. In the four-bidder case, this lower bound is 215/96. In the limit as the number of bidders grows, this lower bound approaches a number which is approximately 2.42. It is conjectured that this lower bound is tight for any number of bidders and that the optimal auction, Υn, matches it.
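To make the closed form in Theorem 2 concrete, here is a small sketch (not from the paper, and based on our reconstruction of the garbled formula) that evaluates the bound for several n; it reproduces the values 2, 13/6, and 215/96 quoted above and comes out near 2.42 for large n.

```python
# Sketch (not from the paper): evaluate the reconstructed bound of Theorem 2,
#   1 - sum_{i=2}^{n} (-1/n)^(i-1) * (i/(i-1)) * C(n-1, i-1).
from fractions import Fraction
from math import comb

def lower_bound(n):
    s = sum(Fraction(-1, n) ** (i - 1) * Fraction(i, i - 1) * comb(n - 1, i - 1)
            for i in range(2, n + 1))
    return 1 - s

for n in (2, 3, 4, 100):
    b = lower_bound(n)
    print(n, b if n <= 4 else round(float(b), 3))
# Prints 2, 13/6, 215/96, and roughly 2.42 for n = 100.
```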
2.5 Profit Extraction In this section we review the truthful profit extraction mechanism ProfitExtractR. This mechanism is a special case of a general cost-sharing schema due to Moulin and Shenker [11]. The goal of profit extraction is, given bids b, to extract a target value R of profit from some subset of the bidders. ProfitExtractR: Given bids b, find the largest k such that the highest k bidders can equally share the cost R. Charge each of these bidders R/k. If no subset of bidders can cover the cost, the mechanism has no winners. Important properties of this auction are as follows: • ProfitExtractR is truthful. • If R ≤ F(b), ProfitExtractR(b) = R; otherwise it has no winners and no revenue. We will use this profit extraction mechanism in Section 5 with the following intuition. Such a profit extractor makes it possible to treat a subset S of the bidders as a single bid with value F(S). Note that given a single bid, b, a truthful mechanism might offer it price t and if t ≤ b then the bidder wins and pays t; otherwise the bidder pays nothing (and loses). Likewise, a mechanism can offer the set of bidders S a target revenue R. If R ≤ F(2)(S), then ProfitExtractR raises R from S; otherwise, it raises no revenue from S. 3. AN OPTIMAL AUCTION FOR THREE BIDDERS In this section we define the optimal auction for three bidders, Υ3, and prove that it indeed matches the known lower bound of 13/6. We follow the definition and proof with a discussion of how this auction was derived. DEFINITION 5. Υ3 is scale-invariant and symmetric and given by the bid-independent function with density function ρ1,x(z) defined as follows, where g(x) = (2/13)/(x − 1)^3. For x ≤ 3/2: the price is 1 with probability 9/13, and z with probability density g(z) for z > 3/2. For x > 3/2: the price is 1 with probability 9/13 − ∫_{3/2}^{x} z g(z) dz, x with probability ∫_{3/2}^{x} (z + 1) g(z) dz, and z with probability density g(z) for z > x. THEOREM 3. The Υ3 auction has a competitive ratio of 13/6 ≈ 2.17, which is optimal. Furthermore, the auction raises exactly (6/13) F(2) on every input with non-identical bids. PROOF. Consider the bids 1, x, y, with 1 < x < y. There are three cases. CASE 1 (x < y ≤ 3/2): F(2) = 3. The auction must raise expected revenue of at least 18/13 on these bids. The bidder with valuation x will pay 1 with probability 9/13, and the bidder with valuation y will pay 1 with probability 9/13. Therefore Υ3 raises 18/13 on these bids. CASE 2 (x ≤ 3/2 < y): F(2) = 3. The auction must raise expected revenue of at least 18/13 on these bids. The bidder with valuation x will pay 9/13 − ∫_{3/2}^{y} z g(z) dz in expectation. The bidder with valuation y will pay 9/13 + ∫_{3/2}^{y} z g(z) dz in expectation. Therefore Υ3 raises 18/13 on these bids. CASE 3 (3/2 < x ≤ y): F(2) = 2x. The auction must raise expected revenue of at least 12x/13 on these bids. Consider the revenue raised from all three bidders: E[Υ3(b)] = p(1, x, y) + p(x, 1, y) + p(y, 1, x) = 0 + (9/13 − ∫_{3/2}^{y} z g(z) dz) + (9/13 − ∫_{3/2}^{x} z g(z) dz + x ∫_{3/2}^{x} (z + 1) g(z) dz + ∫_{x}^{y} z g(z) dz) = 18/13 + (x − 2) ∫_{3/2}^{x} z g(z) dz + x ∫_{3/2}^{x} g(z) dz = 12x/13. The final equation comes from substituting in g(x) = (2/13)/(x − 1)^3 and expanding the integrals. Note that the fraction of F(2) raised on every input is identical. If any of the inequalities 1 ≤ x ≤ y are not strict, the same proof applies giving a lower bound on the auction's profit; however, this bound may no longer be tight. Motivation for Υ3 In this section, we will conjecture that a particular input distribution is worst-case, and show, as a consequence, that all inputs are worst-case in the optimal auction. By applying this consequence, we will derive an optimal auction for three bidders.
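The density in Definition 5 is easiest to digest operationally. Below is a Monte Carlo sketch (not from the paper; the helper names and the numerical setup are ours) that samples a sale price from ρ1,x and estimates the revenue of Υ3 on the bid vector (1, 2, 3); by Theorem 3 the estimate should be close to (6/13)·F(2) = 24/13 ≈ 1.846.

```python
# Monte Carlo sketch (not from the paper): sample sale prices from the density
# rho_{1,x} of Definition 5 and estimate the revenue of Upsilon_3 on bids (1, 2, 3).
import random

def int_g(a, b):      # integral of g(z) = (2/13)/(z-1)^3 over [a, b]
    return (1/13) * (1/(a - 1)**2 - 1/(b - 1)**2)

def int_zg(a, b):     # integral of z*g(z) over [a, b]; uses z/(z-1)^3 = (z-1)^-2 + (z-1)^-3
    return (2/13) * ((1/(a - 1) - 1/(b - 1)) + 0.5 * (1/(a - 1)**2 - 1/(b - 1)**2))

def sample_rho(x):
    """Draw a price from rho_{1,x} (x >= 1), following Definition 5."""
    mass_1 = 9/13 if x <= 1.5 else 9/13 - int_zg(1.5, x)
    mass_x = 0.0 if x <= 1.5 else int_zg(1.5, x) + int_g(1.5, x)
    u = random.random()
    if u < mass_1:
        return 1.0
    if u < mass_1 + mass_x:
        return x
    # Continuous tail: density g(z) for z > c with c = max(x, 3/2); conditional on
    # the tail, Pr[Z > z] = ((c-1)/(z-1))^2, which is inverted below.
    c = max(x, 1.5)
    return 1 + (c - 1) / (1 - random.random()) ** 0.5

def revenue(bids, trials=200_000):
    total = 0.0
    for _ in range(trials):
        for i in range(len(bids)):
            lo, hi = sorted(bids[:i] + bids[i + 1:])
            price = lo * sample_rho(hi / lo)   # scale invariance of the auction
            if price <= bids[i]:
                total += price
    return total / trials

bids = [1.0, 2.0, 3.0]
f2 = max(2 * 2.0, 3 * 1.0)                     # F^(2)(1, 2, 3) = 4
print(revenue(bids), 6/13 * f2)                # both should be about 1.846
```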
A truthful, randomized auction on n bidders can be represented by a randomized function f : R^{n−1} → R that maps masked bid vectors to prices in R. By normalization, we can assume that the lowest possible bid is 1. Recall that ρb-i(z) = Pr[f(b-i) = z]. The optimal auction for the finite auction problem can be found by the following optimization problem in which the variables are the ρb-i(z): maximize r subject to Σ_{i=1}^{n} ∫_{z=1}^{bi} z ρb-i(z) dz ≥ r F(2)(b) for all bid vectors b, ∫_{z=1}^{∞} ρb-i(z) dz = 1, and ρb-i(z) ≥ 0. This set of integral inequalities is difficult to maximize over. However, by guessing which constraints are tight and which are slack at the optimum, we will be able to derive a set of differential equations for which any feasible solution is an optimal auction. As we discuss in Section 2.4, in [8], the authors define a distribution and use it to find a lower bound on the competitive ratio of the optimal auction. For two bidders, this bid distribution is the worst-case input distribution. We guess (and later verify) that this distribution is the worst-case input distribution for three bidders as well. Since this distribution has full support over the set of all bid vectors and a worst-case distribution puts positive probability only on worst-case inputs, we can therefore assume that all but a measure zero set of inputs is worst-case for the optimal auction. In the optimal two-bidder auction, all inputs with non-identical bids are worst-case, so we will assume the same for three bidders. The guess that these constraints are tight allows us to transform the optimization problem into a feasibility problem constrained by differential equations. If the solution to these equations has value matching the lower bound obtained from the worst-case distribution, then this solution is the optimal auction and our conjectured choice of worst-case distribution is correct. In Section 6 we show that the optimal auction must sometimes place probability mass on sale prices above the highest bid. This motivates considering symmetric scale-invariant auctions for three bidders with probability density function, ρ1,x(z), of the following form: the price is 1 with discrete probability a(x), x with discrete probability b(x), and z with probability density g(z) for z > x. In this auction, the sale price for the first bidder is either one of the latter two bids, or higher than either bid with a probability density which is independent of the input. The feasibility problem which arises from the linear optimization problem by assuming the constraints are tight is as follows: a(y) + a(x) + x b(x) + ∫_{x}^{y} z g(z) dz = r max(3, 2x) for all x < y; a(x) + b(x) + ∫_{x}^{∞} g(z) dz = 1; a(x) ≥ 0, b(x) ≥ 0, g(z) ≥ 0. Solving this feasibility problem gives the auction Υ3 proposed above. The proof of its optimality validates its proposed form. Finding a simple restriction on the form of n-bidder auctions for n > 3 under which the optimal auction can be found analytically as above remains an open problem. 4. GENERALIZED PROFIT BENCHMARKS In this section, we widen our focus beyond auctions that compete with F(2) to consider other benchmarks for an auction's profit. We will show that, for three bidders, the form of the optimal auction is essentially independent of the benchmark profit used. This result strongly corroborates the worst-case competitive analysis of auctions by showing that our techniques allow us to derive auctions which are competitive against a broad variety of reasonable benchmarks rather than simply against F(2).
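Looking ahead to the benchmark Gs(b) = max_{2≤k≤n} s_k b(k) defined formally just below, with F(2) and the k-Vickrey benchmark V* as the special cases s_k = k and s_k = k − 1, the following tiny sketch (not from the paper) evaluates Gs for hypothetical weights and bids.

```python
# Sketch (not from the paper) of the generalized benchmark defined just below:
# G_s(b) = max_{2 <= k <= n} s_k * b_(k) for a weight vector s.

def G(bids, s):
    """s[k] is the weight applied when k units are sold (k >= 2)."""
    b = sorted(bids, reverse=True)
    return max(s[k] * b[k - 1] for k in range(2, len(bids) + 1))

bids = [10.0, 4.0, 3.0, 1.0]
n = len(bids)
f2     = G(bids, {k: k for k in range(2, n + 1)})       # s_k = k recovers F^(2)
v_star = G(bids, {k: k - 1 for k in range(2, n + 1)})   # s_k = k - 1 recovers V*
print(f2, v_star)                                        # 9.0 and 6.0 here
```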
Previous work in competitive analysis of auctions has focused on the question of designing the auction with the best competitive ratio against F(2) , the profit of the optimal omniscient single-priced mechanism that sells at least two items. However, it is reasonable to consider other benchmarks. For instance, one might wish to compete against V∗ , the profit of the k-Vickrey auction with optimal-inhindsight choice of k.2 Alternatively, if an auction is being used as a subroutine in a larger mechanism, one might wish to choose the auction which is optimally competitive with a benchmark specific to that purpose. Recall that F(2) (b) = max2≥k≥n kb(k). We can generalize this definition to Gs, parameterized by s = (s2, ... , sn) and defined as: Gs(b) = max 2≤k≤n skb(k). When considering Gs we assume without loss of generality that si < si+1 as otherwise the constraint imposed by si+1 is irrelevant. Note that F(2) is the special case of Gs with si = i, and that V∗ = Gs with si = i − 1. 2 Recall that the k-Vickrey auction sells a unit to each of the highest k bidders at a price equal to the k + 1st highest bid, b(k+1), achieving a profit of kb(k+1). 178 Competing with Gs We will now design a three-bidder auction Υs,t 3 that achieves the optimal competitive ratio against Gs,t. As before, we will first find a lower bound on the competitive ratio and then design an auction to meet that bound. We can lower bound the competitive ratio of Υs,t 3 using the same worst-case distribution from [8] that we used against F(2) . Evaluating the performance of any auction competing against Gs,t on this distribution will yield the following theorem. We denote the optimal auction for three bidders against Gs,t as Υs,t 3 . THEOREM 4. The optimal three-bidder auction, Υs,t 3 , competing against Gs,t(b) = max(sb(2), tb(3)) has a competitive ratio of at least s2 +t2 2t . The proof can be found in the appendix. Similarly, we can find the optimal auction against Gs,t using the same technique we used to solve for the three bidder auction with the best competitive ratio against F(2) . DEFINITION 6. Υs,t 3 is scale-invariant and symmetric and given by the bid-independent function with density function ρ1,x(z) = ⎧ ⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎨ ⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎩ For x ≤ t s 1 with probability t2 s2+t2 z with probability density g(z) for z > t s For x > t s⎧ ⎪⎪⎨ ⎪⎪⎩ 1 with probability t2 s2+t2 − x t s zg(z)dz x with probability x t s (z + 1)g(z)dz z with probability density g(z) for z > x where g(x) = 2(t−s)2 /(s2 +t2 ) (x−1)3 . THEOREM 5. Υs,t 3 is s2 +t2 2t -competitive with Gs,t. This auction, like Υ3, can be derived by reducing the optimization problem to a feasibility problem, guessing that the optimal solution has the same form as Υs,t 3 , and solving. The auction is optimal because it matches the lower bound found above. Note that the form of Υs,t 3 is essentially the same as for Υ3, but that the probability of each price is scaled depending on the values of s and t. That our auction for three bidders matches the lower bound computed by the input distribution used in [8] is strong evidence that this input distribution is the worst-case input distribution for any number of bidders and any generalized profit benchmark. Furthermore, we strongly suspect that for any number of bidders, the form of the optimal auction will be independent of the benchmark used. 5. AGGREGATION AUCTIONS We have seen that optimal auctions for small cases of the limitedsupply model can be found analytically. 
In this section, we will construct a schema for turning limited supply auctions into unlimited supply auctions with a good competitive ratio. As discussed in Section 2.5, the existence of a profit extractor, ProfitExtractR, allows an auction to treat a set of bids S as a single bid with value F(S). Given n bidders and an auction, Am, for m < n bidders, we can convert the m-bidder auction into an n-bidder auction by randomly partitioning the bidders into m subsets and then treating each subset as a single bidder (via ProfitExtractR) and running the m-bidder auction. DEFINITION 7. Given a truthful m-bidder auction, Am, the m-aggregation auction for Am, AggAm , works as follows: 1. Cast each bid uniformly at random into one of m bins, resulting in bid vectors b(1) , ... , b(m) . 2. For each bin j, compute the aggregate bid Bj = F(b(j) ). Let B be the vector of aggregate bids, and B−j be the aggregate bids for all bins other than j. 3. Compute the aggregate price Tj = f(B−j), where f is the bid-independent function for Am. 4. For each bin j, run ProfitExtractTj on b(j) . Since Am and ProfitExtractR are truthful, Tj is computed independently of any bid in bin j and thus the price offered any bidder in b(j) is independent of his bid; therefore, THEOREM 6. If Am is truthful, the m-aggregation auction for Am, AggAm , is truthful. Note that this schema yields a new way of understanding the Random Sampling Profit Extraction (RSPE) auction [5] as the simplest case of an aggregation auction. It is the 2-aggregation auction for Υ2, the 1-unit Vickrey auction. To analyze AggAm , consider throwing k balls into m labeled bins. Let k represent a configuration of balls in bins, so that ki is equal to the number of balls in bin i, and k(i) is equal to the number of balls in the ith largest bin. Let Km,k represent the set of all possible configurations of k balls in m bins. We write the multinomial coefficient of k as k k . The probability that a particular configuration k arises by throwing balls into bins uniformly at random is k k m−k . THEOREM 7. Let Am be an auction with competitive ratio β. Then the m-aggregation auction for Am, AggAm , raises the following fraction of the optimal revenue F(2) (b): E AggAm (b) F(2) ≥ min k≥2 k∈Km,k F(2) (k) k k βkmk PROOF. By definition, F(2) sells to k ≥ 2 bidders at a single price p. Let kj be the number of such bidders in b(j) . Clearly, F(b(j) ) ≥ pkj. Therefore, F(2) (F(b(1) ), ... , F(b(n) )) F(2)(b) ≥ F(2) (pk1, ... , pkn) pk = F(2) (k1, ... , kn) k The inequality follows from the monotonicity of F(2) , and the equality from the homogeneity of F(2) . ProfitExtractTj will raise Tj if Tj ≤ Bj , and no profit otherwise. Thus, E AggAm (b) ≥ E F(2) (B)/β . The theorem follows by rewriting this expectation as a sum over all k in Km,k. 5.1 A 3.25 Competitive Auction We apply the aggregation auction schema to Υ3, our optimal auction for three bidders, to achieve an auction with competitive ratio 3.25. This improves on the previously best known auction which is 3.39-competitive [7]. THEOREM 8. The aggregation auction for Υ3 has competitive ratio 3.25. 179 PROOF. By theorem 7, E AggΥ3 (b) F(2)(b) ≥ min k≥2 k i=1 k−i j=1 F(2) (i, j, k − i − j) k i,j,k−i−j βk3k For k = 2 and k = 3, E AggΥ3 (b) = 2 3 k/β. As k increases, E AggΥ3 (b) /F(2) increases as well. Since we do not expect to find a closed-form formula for the revenue, we lower bound F(2) (b) by 3b(3). 
Using large deviation bounds, one can show that this lower bound is greater than 2 3 k/β for large-enough k, and the remainder can be shown by explicit calculation. Plugging in β = 13/6, the competitive ratio is 13/4. As k increases, the competitive ratio approaches 13/6. Note that the above bound on the competitive ratio of AggΥ3 is tight. To see this, consider the bid vector with two very large and non-identical bids of h and h + with the remaining bids 1. Given that the competitive ratio of Υ3 is tight on this example, the expected revenue of this auction on this input will be exactly 13/4. 5.2 A Gs,t-based Aggregation Auction In this section we show that Υ3 is not the optimal auction to use in an aggregation auction. One can do better by choosing the auction that is optimally competitive against a specially tailored benchmark. To see why this might be the case, notice (Table 1) that the fraction of F(2) (b) raised for when there are k = 2 and k = 3 winning bidders in F(2) (b) is substantially smaller than the fraction of F(2) (b) raised when there are more winners. This occurs because the expected ratio between F(2) (B) and F(2) (b) is lower in this case while the competitive ratio of Υ3 is constant. If we chose a three bidder auction that performed better when F(2) has smaller numbers of winners, our aggregation auction would perform better in the worst case. One approach is to compete against a different benchmark that puts more weight than F(2) on solutions with a small number of winners. Recall that F(2) is the instance of Gs,t with s = 2 and t = 3. By using the auction that competes optimally against Gs,t with s > 2, while holding t = 3, we will raise a higher fraction of revenue on smaller numbers of winning bidders and a lower fraction of revenue on large numbers of winning bidders. We can numerically optimize the values of s and t in Gs,t(b) in order to achieve the best competitive ratio for the aggregation auction. In fact, this will allow us to improve our competitive ratio slightly. THEOREM 9. For an optimal choice of s and t, the aggregation auction for Υs,t 3 is 3.243-competitive. The proof follows the outline of Theorem 7 and 8 with the optimal choice of s = 2.162 (while t is held constant at 3). 5.3 Further Reducing the Competitive Ratio There are a number of ways we might attempt to use this aggregation auction schema to continue to push the competitive ratio down. In this section, we give a brief discussion of several attempts. 5.3.1 AggΥm for m > 3 If the aggregation auction for Υ2 has a competitive ratio of 4 and the aggregation auction for Υ3 has a competitive ratio of 3.25, can we improve the competitive ratio by aggregating Υ4 or Υm for larger m? We conjecture in the negative: for m > 3, the aggregation auction for Υm has a larger competitive ratio than the aggregation auction for Υ3. 
The primary difficulty in proving this k m = 2 m = 3 m = 4 m = 5 m = 6 m = 7 2 0.25 0.3077 0.3349 0.3508 0.3612 0.3686 3 0.25 0.3077 0.3349 0.3508 0.3612 0.3686 4 0.3125 0.3248 0.3349 0.3438 0.3512 0.3573 5 0.3125 0.3191 0.3244 0.3311 0.3378 0.3439 6 0.3438 0.321 0.3057 0.3056 0.311 0.318 7 0.3438 0.333 0.3081 0.3009 0.3025 0.3074 8 0.3633 0.3229 0.3109 0.3022 0.3002 0.3024 9 0.3633 0.3233 0.3057 0.2977 0.2927 0.292 10 0.377 0.3328 0.308 0.2952 0.2866 0.2837 11 0.377 0.3319 0.3128 0.298 0.2865 0.2813 12 0.3872 0.3358 0.3105 0.3001 0.2894 0.2827 13 0.3872 0.3395 0.3092 0.2976 0.2905 0.2841 14 0.3953 0.3391 0.312 0.2961 0.2888 0.2835 15 0.3953 0.3427 0.3135 0.2973 0.2882 0.2825 16 0.4018 0.3433 0.3128 0.298 0.2884 0.2823 17 0.4018 0.3428 0.3129 0.2967 0.2878 0.282 18 0.4073 0.3461 0.3133 0.2959 0.2859 0.2808 19 0.4073 0.3477 0.3137 0.2962 0.2844 0.2789 20 0.4119 0.3486 0.3148 0.2973 0.2843 0.2777 21 0.4119 0.3506 0.3171 0.298 0.2851 0.2775 22 0.4159 0.3519 0.3189 0.2986 0.2863 0.2781 23 0.4159 0.3531 0.3202 0.2995 0.2872 0.2791 24 0.4194 0.3539 0.3209 0.3003 0.2878 0.2797 25 0.4194 0.3548 0.3218 0.3012 0.2886 0.2801 Table 1: E A(b)/F(2) (b) for AggΥm as a function of k, the optimal number of winners in F(2) (b). The lowest value for each column is printed in bold. conjecture lies in the difficulty of finding a closed-form solution for the formula of Theorem 7. We can, however, evaluate this formula numerically for different values of m and k, assuming that the competitive ratio for Υm matches the lower bound for m given by Theorem 2. Table 1 shows, for each value of m and k, the fraction of F(2) raised by the aggregation auction for AggΥm when there are k winning bidders, assuming the lower bound of Theorem 2 is tight. 5.3.2 Convex combinations of AggΥm As can be seen in Table 1, when m > 3, the worst-case value of k is no longer 2 and 3, but instead an increasing function of m. An aggregation auction for Υm outperforms the aggregation auction for Υ3 when there are two or three winning bidders, while the aggregation auction for Υ3 outperforms the other when there are at least six winning bidders. Thus, for instance, an auction which randomizes between aggregation auctions for Υ3 and Υ4 will have a worst-case which is better than that of either auction alone. Larger combinations of auctions will allow more room to optimize the worst-case. However, we suspect that no convex combination of aggregation auctions will have a competitive ratio lower than 3. Furthermore, note that we cannot yet claim the existence of a good auction via this technique as the optimal auction Υn for n > 3 is not known and it is only conjectured that the bound given by Theorem 2 and represented in Table 1 is correct for Υn. 6. A LOWER BOUND FOR CONSERVATIVE AUCTIONS In this section, we define a class of auctions that never offer a sale price which is higher than any bid in the input and prove a lower bound on the competitive ratio of these auctions. As this 180 lower bound is stronger than the lower bound of Theorem 2 for n ≥ 3, it shows that the optimal auction must occasionally charge a sales price higher than any bid in the input. Specifically, this result partially explains the form of the optimal three bidder auction. DEFINITION 8. We say an auction BIf is conservative if its bidindependent function f satisfies f(b-i) ≤ max(b-i). We can now state our lower bound for conservative auctions. THEOREM 10. Let A be a conservative auction for n bidders. Then the competitive ratio of A is at least 3n−2 n . COROLLARY 1. 
The competitive ratio of any conservative auction for an arbitrary number of bidders is at least three. For a two-bidder auction, this restriction does not prevent optimality. Υ2, the 1-unit Vickrey auction, is conservative. For larger numbers of bidders, however, the restriction to conservative auctions does affect the competitive ratio. For the three-bidder case, Υ3 has competitive ratio 2.17, while the best conservative auction is no better than 2.33-competitive. The k-Vickrey auction and the Random Sampling Optimal Price auction [9] are conservative auctions. The Random Sampling Profit Extraction auction [5] and the CORE auction [7], on the other hand, use the ProfitExtractR mechanism as a subroutine and thus sometimes offer a sale price which is higher than the highest input bid value. In [8], the authors define a restricted auction as one on which, for any input, the sale prices are drawn from the set of input bid values. The class of conservative auctions can be viewed as a generalization of the class of restricted auctions and therefore our result below gives lower bounds on the performance of a broader class of auctions. We will prove Theorem 10 with the aid of the following lemma: LEMMA 1. Let A be a conservative auction with competitive ratio 1/r for n bidders. Let h n. Let h0 = 1 and hk = kh otherwise. Then, for all k and H ≥ kh, Pr[f(1, 1, ... , 1, H) ≤ hk] ≥ nr−1 n−1 + k(3nr−2r−n n−1 ). PROOF. The lemma is proved by strong induction on k. First some notation that will be convenient. For any particular k and H we will be considering the bid vector b = (1, ... , 1, hk, H) and placing bounds on ρb-i (z). Since we can assume without loss of generality that the auction is symmetric, we will notate b-1 as b with any one of the 1-valued bids masked. Similarly we notate b-hk (resp. b-H ) as b with the hk-valued bid (resp. H-valued bid) masked. We will also let p1(b), phk (b), and pH (b) represent the expected payment of a 1-valued, hk-valued, and H-valued bidder in A on b, respectively (note by symmetry the expected payment for all 1-valued bidders is the same). Base case (k = 0, hk = 1): A must raise revenue of at least rn on b = (1, ... , 1, 1, H): rn ≤ pH (b) + (n − 1)p1(b) = 1 + (n − 1) 1 0 xρb-1 (x)dx ≤ 1 + (n − 1) 1 0 ρb-1 (x)dx The second inequality follows from the conservatism of the underlying auction. The base case follows trivially from the final inequality. Inductive case (k > 0, hk = kh): Let b = (1, ... , 1, hk, H). First, we will find an upper bound on pH(b) pH (b) = 1 0 xρb-H (x)dx + k i=1 hi hi−1 xρb-H (x)dx (1) ≤ 1 + k i=1 hi hi hi−1 ρb-H (x)dx ≤ 1 + 3nr − 2r − n n − 1 k−1 i=1 ih + kh 1 − nr − 1 n − 1 − (k − 1) 3nr − 2r − n n − 1 (2) = kh n(1 − r) n − 1 + (k − 1) 2 3nr − 2r − n n − 1 + 1. (3) Equation (1) follows from the conservatism of A and (2) is from invoking the strong inductive hypothesis with H = kh and the observation that the maximum possible revenue will be found by placing exactly enough probability at each multiple of h to satisfy the constraints of the inductive hypothesis and placing the remaining probability at kh. Next, we will find a lower bound on phk (b) by considering the revenue raised by the bids b. Recall that A must obtain a profit of at least rF(2) (b) = 2rkh. Given upper-bounds on the profit from the H-valued, equation bid (3), and the 1-valued bids, the profit from the hk-valued bid must be at least: phk (b) ≥ 2rkh − (n − 2)p1(b) − pH(b) ≥ kh 2r − n(1 − r) n − 1 + (k − 1) 2 3nr − 2r − n n − 1 − O(n). 
(4) In order to lower bound Pr[f(b-hk ) ≤ kh], consider the auction that minimizes it and is consistent with the lower bounds obtained by the strong inductive hypothesis on Pr[f(b-hk ) ≤ ih]. To minimize the constraints implied by the strong inductive hypothesis, we place the minimal amount of probability mass required each price level. This gives ρhk (b) with nr−1 n−1 probability at 1 and exactly 3nr−2r−n n−1 at each hi for 1 ≤ i < k. Thus, the profit from offering prices at most hk−1 is nr−1 n−1 −kh(k−1)3nr−2r−n n−1 . In order to satisfy our lower bound, (4), on phk (b), it must put at least 3nr−2r−n n−1 on hk. Therefore, the probability that the sale price will be no more than kh on masked bid vector on bid vector b = (1, ... , 1, kh, H) must be at least nr−1 n−1 + k(3nr−2r−n n−1 ). Given Lemma 1, Theorem 10 is simple to prove. PROOF. Let A be a conservative auction. Suppose 3nr−2r−n n−1 = > 0. Let k = 1/ , H ≥ kh, and h n. By Lemma 1, Pr[f(1, ... , 1, kh, H) ≤ hk] ≥ nr−1 n−1 + k > 1. But this is a contradiction, so 3nr−2r−n n−1 ≤ 0. Thus, r ≤ n 3n−2 . The theorem follows. 7. CONCLUSIONS AND FUTURE WORK We have found the optimal auction for the three-unit limitedsupply case, and shown that its structure is essentially independent of the benchmark used in its competitive analysis. We have then used this auction to derive the best known auction for the unlimited supply case. Our work leaves many interesting open questions. We found that the lower bound of [8] is matched by an auction for three bidders, 181 even when competing against generalized benchmarks. The most interesting open question from our work is whether the lower bound from Theorem 2 can be matched by an auction for more than three bidders. We conjecture that it can. Second, we consider whether our techniques can be extended to find optimal auctions for greater numbers of bidders. The use of our analytic solution method requires knowledge of a restricted class of auctions which is large enough to contain an optimal auction but small enough that the optimal auction in this class can be found explicitly through analytic methods. No class of auctions which meets these criteria is known even for the four bidder case. Also, when the number of bidders is greater than three, it might be the case that the optimal auction is not expressible in terms of elementary functions. Another interesting set of open questions concerns aggregation auctions. As we show, the aggregation auction for Υ3 outperforms the aggregation auction for Υ2 and it appears that the aggregation auction for Υ3 is better than Υm for m > 3. We leave verification of this conjecture for future work. We also show that Υ3 is not the best three-bidder auction for use in an aggregation auction, but the auction that beats it is able to reduce the competitive ratio of the overall auction only a little bit. It would be interesting to know whether for any m there is an m-aggregation auction that substantially improves on the competitive ratio of AggΥm . Finally, we remark that very little is known about the structure of the optimal competitive auction. In our auction Υ3, the sales price for a given bidder is restricted either to be one of the other bid values or to be higher than all other bid values. The optimal auction for two bidders, the 1-unit Vickrey auction, also falls within this class of auctions, as its sales prices are restricted to bid values. We conjecture that an optimal auction for any number of bidders lies within this class. 
Our paper provides partial evidence for this conjecture: the lower bound of Section 6 on conservative auctions shows that the optimal auction must offer sales prices higher than any bid value if the lower bound of Theorem 2 is tight, as is conjectured. It remains to show that optimal auctions otherwise only offer sales prices at bid values. 8. ACKNOWLEDGEMENTS The authors wish to thank Yoav Shoham and Noga Alon for helpful discussions. 9. REFERENCES [1] A. Archer and E. Tardos. Truthful mechanisms for one-parameter agents. In Proc. of the 42nd IEEE Symposium on Foundations of Computer Science, 2001. [2] S. Baliga and R. Vohra. Market research and market design. Advances in Theoretical Economics, 3, 2003. [3] A. Borodin and R. El-Yaniv. Online Computation and Competitive Analysis. Cambridge University Press, 1998. [4] J. Bulow and J. Roberts. The Simple Economics of Optimal Auctions. The Journal of Political Economy, 97:1060-90, 1989. [5] A. Fiat, A. V. Goldberg, J. D. Hartline, and A. R. Karlin. Competitive generalized auctions. In Proc. 34th ACM Symposium on the Theory of Computing, pages 72-81. ACM, 2002. [6] A. Goldberg, J. Hartline, A. Karlin, M. Saks, and A. Wright. Competitive auctions and digital goods. Games and Economic Behavior, 2002. Submitted for publication. An earlier version available as InterTrust Technical Report at URL http://www.star-lab.com/tr/tr-99-01.html. [7] A. V. Goldberg and J. D. Hartline. Competitiveness via consensus. In Proc. 14th Symposium on Discrete Algorithms, pages 215-222. ACM/SIAM, 2003. [8] A. V. Goldberg, J. D. Hartline, A. R. Karlin, and M. E. Saks. A lower bound on the competitive ratio of truthful auctions. In Proc. 21st Symposium on Theoretical Aspects of Computer Science, pages 644-655. Springer, 2004. [9] A. V. Goldberg, J. D. Hartline, and A. Wright. Competitive auctions and digital goods. In Proc. 12th Symposium on Discrete Algorithms, pages 735-744. ACM/SIAM, 2001. [10] D. Lehmann, L. I. O``Callaghan, and Y. Shoham. Truth Revelation in Approximately Efficient Combinatorial Auctions. In Proc. of 1st ACM Conf. on E-Commerce, pages 96-102. ACM Press, New York, 1999. [11] H. Moulin and S. Shenker. Strategyproof Sharing of Submodular Costs: Budget Balance Versus Efficiency. Economic Theory, 18:511-533, 2001. [12] R. Myerson. Optimal Auction Design. Mathematics of Operations Research, 6:58-73, 1981. [13] N. Nisan and A. Ronen. Algorithmic Mechanism Design. In Proc. of 31st Symp. on Theory of Computing, pages 129-140. ACM Press, New York, 1999. [14] I. Segal. Optimal pricing mechanisms with unknown demand. American Economic Review, 16:50929, 2003. [15] W. Vickrey. Counterspeculation, Auctions, and Competitive Sealed Tenders. J. of Finance, 16:8-37, 1961. APPENDIX A. PROOF OF THEOREM 4 We wish to prove that Υs,t 3 , the optimal auction for three bidders against Gs,t, has competitive ratio at least s2 +t2 2t . Our proof follows the outline of the proof of Lemma 5 and Theorem 1 from [8]; however, our case is simpler because we only looking for a bound when n = 3. Define the random bid vector B = (B1, B2, B3) with Pr[Bi > z] = 1/z. We compute EB[Gs,t(B)] by integrating Pr[Gs,t(B) > z]. Then we use the fact that no auction can have expected profit greater than 3 on B to find a lower bound on the competitive ratio against Gs,t for any auction. For the input distribution B defined above, let B(i) be the ith largest bid. Define the disjoint events H2 = B(2) ≥ z/s ∧ B(3) < z/t, and H3 = B(3) ≥ z/t. 
Intuitively, H3 corresponds to the event that all three bidders win in Gs,t, while H2 corresponds to the event that only the top two bidders win. Gs,t(B) will be greater than z if either event occurs: Pr[Gs,t(B) > z] = Pr[H2] + Pr[H3] (5) = 3 (s/z)^2 (1 − t/z) + (t/z)^3 (6). Using the identity for non-negative continuous random variables that E[X] = ∫_0^∞ Pr[X > x] dx, we have E_B[Gs,t(B)] = t + ∫_t^∞ [ 3 (s/z)^2 (1 − t/z) + (t/z)^3 ] dz (7) = 3 (s^2 + t^2)/(2t) (8). Given that, for any auction A, E_B[E_A[A(B)]] ≤ 3 [8], it is clear that E_B[Gs,t(B)] / E_B[E_A[A(B)]] ≥ (s^2 + t^2)/(2t). Therefore, there exists some input b for each auction A such that Gs,t(b)/E_A[A(b)] ≥ (s^2 + t^2)/(2t).
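A quick numerical sanity check of (8), not from the paper: drawing bids from the distribution Pr[Bi > z] = 1/z used above and averaging Gs,t. The heavy tail of this distribution makes convergence slow, so a few percent of noise is expected.

```python
# Sanity-check sketch (not from the paper) of equation (8):
# E[G_{s,t}(B)] = 3 (s^2 + t^2) / (2t) when each bid has Pr[B_i > z] = 1/z.
import random

def sample_bid():
    return 1.0 / (1.0 - random.random())        # inverse-CDF sample, support [1, inf)

def G(bids, s, t):
    b = sorted(bids, reverse=True)              # b[1]: second-highest, b[2]: third-highest
    return max(s * b[1], t * b[2])

s, t, trials = 2.0, 3.0, 500_000
avg = sum(G([sample_bid() for _ in range(3)], s, t) for _ in range(trials)) / trials
print(avg, 3 * (s * s + t * t) / (2 * t))       # both near 6.5; the heavy tail adds noise
```

With s = 2 and t = 3 this benchmark coincides with F(2) for three bidders, so the estimate also matches the 13/6 · 3 = 6.5 figure implicit in Section 2.4.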
From Optimal Limited To Unlimited Supply Auctions ABSTRACT We investigate the class of single-round, sealed-bid auctions for a set of identical items to bidders who each desire one unit. We adopt the worst-case competitive framework defined by [9, 5] that compares the profit of an auction to that of an optimal single-price sale of least two items. In this paper, we first derive an optimal auction for three items, answering an open question from [8]. Second, we show that the form of this auction is independent of the competitive framework used. Third, we propose a schema for converting a given limited-supply auction into an unlimited supply auction. Applying this technique to our optimal auction for three items, we achieve an auction with a competitive ratio of 3.25, which improves upon the previously best-known competitive ratio of 3.39 from [7]. Finally, we generalize a result from [8] and extend our understanding of the nature of the optimal competitive auction by showing that the optimal competitive auction occasionally offers prices that are higher than all bid values. 1. INTRODUCTION The research area of optimal mechanism design looks at designing a mechanism to produce the most desirable outcome for the entity running the mechanism. This problem is well studied for the auction design problem where the optimal mechanism is the one * This work was supported in part by an NSF fellowship and NSF ITR 0205633. that brings the seller the most profit. Here, the classical approach is to design such a mechanism given the prior distribution from which the bidders' preferences are drawn (See e.g., [12, 4]). Recently Goldberg et al. [9] introduced the use of worst-case competitive analysis (See e.g., [3]) to analyze the performance of auctions that have no knowledge of the prior distribution. The goal of such work is to design an auction that achieves a large constant fraction of the profit attainable by an optimal mechanism that knows the prior distribution in advance. Positive results in this direction are fueled by the observation that in auctions for a number of identical units, much of the distribution from which the bidders are drawn can be deduced on the fly by the auction as it is being run [9, 14, 2]. The performance of an auction in such a worst-case competitive analysis is measured by its competitive ratio, the ratio between a benchmark performance and the auction's performance on the input distribution that maximizes this ratio. The holy grail of the worstcase competitive analysis of auctions is the auction that achieves the optimal competitive ratio (as small as possible). Since [9] this search has led to improved understanding of the nature of the optimal auction, the techniques for on-the-fly pricing in these scenarios, and the competitive ratio of the optimal auction [5, 7, 8]. In this paper we continue this line of research by improving in all of these directions. Furthermore, we give evidence corroborating the conjecture that the form of the optimal auction is independent of the benchmark used in the auction's competitive analysis. This result further validates the use of competitive analysis in gauging auction performance. We consider the single item, multi-unit, unit-demand auction problem. In such an auction there are many units of a single item available for sale to bidders who each desire only one unit. Each bidder has a valuation representing how much the item is worth to him. 
The auction is performed by soliciting a sealed bid from each of the bidders and deciding on the allocation of units to bidders and the prices to be paid by the bidders. The bidders are assumed to bid so as to maximize their personal utility, the difference between their valuation and the price they pay. To handle the problem of designing and analyzing auctions where bidders may falsely declare their valuations to get a better deal, we will adopt the solution concept of truthful mechanism design (see, e.g., [9, 15, 13]). In a truthful auction, revealing one's true valuation as one's bid is an optimal strategy for each bidder regardless of the bids of the other bidders. In this paper, we will restrict our attention to truthful (a.k.a., incentive compatible or strategyproof) auctions. A particularly interesting special case of the auction problem is the unlimited supply case. In this case the number of units for sale is at least the number of bidders in the auction. This is natural for the sale of digital goods where there is negligible cost for duplicating and distributing the good. Pay-per-view television and downloadable audio files are examples of such goods. The competitive framework introduced in [9] and further refined in [5] uses the profit of the optimal omniscient single priced mechanism that sells at least two units as the benchmark for competitive analysis. The assumption that two or more units are sold is necessary because in the worst case it is impossible to obtain a constant fraction of the profit of the optimal mechanism when it sells only one unit [9]. In this framework for competitive analysis, an auction is said to be β-competitive if it achieves a profit that is within a factor of β ≥ 1 of the benchmark profit on every input. The optimal auction is the one which is β-competitive with the minimum value of β. Previous to this work, the best known auction for the unlimited supply case had a competitive ratio of 3.39 [7] and the best lower bound known was 2.42 [8]. For the limited supply case, auctions can achieve substantially better competitive ratios. When there are only two units for sale, the optimal auction gives a competitive ratio of 2, which matches the lower bound for two units. When there are three units for sale, the best previously known auction had a competitive ratio of 2.3, compared with a lower bound of 13/6 ≈ 2.17 [8]. The results of this paper are as follows: • We give the auction for three units that is optimally competitive against the profit of the omniscient single priced mechanism that sells at least two units. This auction achieves a competitive ratio of 13/6, matching the lower bound from [8] (Section 3). • We show that the form of the optimal auction is independent of the benchmark used in competitive analysis. In doing so, we give an optimal three bidder auction for generalized benchmarks (Section 4). • We give a general technique for converting a limited supply auction into an unlimited supply auction where it is possible to use the competitive ratio of the limited supply auction to obtain a bound on the competitive ratio of the unlimited supply auction. We refer to auctions derived from this framework as aggregation auctions (Section 5). • We improve on the best known competitive ratio by proving that the aggregation auction constructed from our optimal three-unit auction is 3.25-competitive (Section 5.1). 
• Assuming the conjecture that the optimal ℓ-unit auction has a competitive ratio that matches the lower bound proved in [8], we show that this optimal auction for ℓ ≥ 3 will, on some inputs, offer prices that are higher than any bid in that input (Section 6). For the three-unit case, where we have shown that the lower bound of [8] is tight, this observation led to our construction of the optimal three-unit auction. 2. DEFINITIONS AND BACKGROUND We consider single-round, sealed-bid auctions for a set of ℓ identical units of an item to bidders who each desire one unit. As mentioned in the introduction, we adopt the game-theoretic solution concept of truthful mechanism design. A useful simplification of the problem of designing truthful auctions is obtained through the following algorithmic characterization [9]. Related formulations to this one have appeared in numerous places in recent literature (e.g., [1, 14, 5, 10]). DEFINITION 1. Given a bid vector of n bids, b = (b1, ..., bn), let b-i denote the vector b with bi replaced with a '?', i.e., b-i = (b1, ..., bi-1, ?, bi+1, ..., bn). The deterministic bid-independent auction defined by a function f on masked bid vectors computes, for each bidder i: 1. Set ti = f(b-i). 2. If ti < bi, bidder i wins at price ti. 3. If ti > bi, bidder i loses. 4. Otherwise (ti = bi), the auction can either accept the bid at price ti or reject it. A randomized bid-independent auction is a distribution over deterministic bid-independent auctions. The proof of the following theorem can be found, for example, in [5]. THEOREM 1. An auction is truthful if and only if it is equivalent to a bid-independent auction. Given this equivalence, we will use the terminology bid-independent and truthful interchangeably. For a randomized bid-independent auction, f(b-i) is a random variable. We denote the probability density of f(b-i) at z by ρb-i(z). We denote the profit of a truthful auction A on input b as A(b). The expected profit of the auction, E[A(b)], is the sum of the expected payments made by each bidder, which we denote by pi(b) for bidder i. Clearly, the expected payment of each bid satisfies pi(b) = ∫ from 0 to bi of z·ρb-i(z) dz. 2.1 Competitive Framework We now review the competitive framework from [5]. In order to evaluate the performance of auctions with respect to the goal of profit maximization, we introduce the optimal single price omniscient auction F and the related omniscient auction F(2). DEFINITION 3. Given a vector b = (b1, ..., bn), let b(i) represent the i-th largest value in b. The optimal single price omniscient auction, F, is defined as follows. Auction F on input b determines the value k such that k·b(k) is maximized. All bidders with bi ≥ b(k) win at price b(k); all remaining bidders lose. The profit of F on input b is thus F(b) = max over 1 ≤ k ≤ n of k·b(k). In the competitive framework of [5] and subsequent papers, the performance of a truthful auction is gauged in comparison to F(2), the optimal single-priced auction that sells at least two units. The profit of F(2) is F(2)(b) = max over 2 ≤ k ≤ n of k·b(k). There are a number of reasons to choose this benchmark for comparison; interested readers should see [5] or [6] for a more detailed discussion. Let A be a truthful auction. We say that A is β-competitive against F(2) (or just β-competitive) if for all bid vectors b, the expected profit of A on b satisfies E[A(b)] ≥ F(2)(b)/β. In Section 4 we generalize this framework to other profit benchmarks. 2.2 Scale Invariant and Symmetric Auctions A symmetric auction is one where the auction outcome is unchanged when the input bids arrive in a different permutation. Goldberg et al. [8] show that a symmetric auction achieves the optimal competitive ratio. This is natural as the profit benchmark we consider is symmetric, and it allows us to consider only symmetric auctions when looking for the one with the optimal competitive ratio. An auction defined by bid-independent function f is scale invariant if, for all i, all z, and all c > 0, Pr[f(b-i) > z] = Pr[f(c·b-i) > cz]. It is conjectured that the assumption of scale invariance is without loss of generality. Thus, we are motivated to consider symmetric scale-invariant auctions. When specifying a symmetric scale-invariant auction we can assume that f is only a function of the relative magnitudes of the n − 1 bids in b-i and that one of the bids is bj = 1. It will be convenient to specify such auctions via the density function of f(b-i), ρb-i(z). It is enough to specify such a density function of the form ρ1,z1,...,zn−1(z) with 1 < zi < zi+1. 
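To make these definitions concrete, the following is a small, self-contained Python sketch (our own illustration, not code from the paper) of the benchmarks F and F(2) and of the generic bid-independent auction interface. The placeholder pricing rule random_other_bid is a hypothetical choice used only to exercise the interface; it is not one of the auctions studied in this paper.

import random

def opt_single_price_profit(bids, min_winners=1):
    """F(b) with min_winners=1, F(2)(b) with min_winners=2:
    max over k >= min_winners of k * (k-th largest bid)."""
    b = sorted(bids, reverse=True)
    return max(k * b[k - 1] for k in range(min_winners, len(b) + 1))

def run_bid_independent_auction(bids, f, rng):
    """Offer each bidder i the price f(b_-i); i wins iff the price is below b_i.
    Returns the total profit of one run (f may be randomized)."""
    profit = 0.0
    for i, bi in enumerate(bids):
        masked = bids[:i] + bids[i + 1:]
        t = f(masked, rng)
        if t < bi:          # win at price t (ties treated as a loss in this sketch)
            profit += t
    return profit

# Placeholder pricing rule: offer a uniformly random bid from the masked vector.
def random_other_bid(masked, rng):
    return rng.choice(masked)

if __name__ == "__main__":
    bids = [1.0, 2.0, 4.0, 4.0, 10.0]
    F2 = opt_single_price_profit(bids, min_winners=2)        # 12.0 on this input
    avg = sum(run_bid_independent_auction(bids, random_other_bid, random.Random(s))
              for s in range(2000)) / 2000
    print("F(2)(b) =", F2, " average profit =", round(avg, 2),
          " empirical ratio =", round(F2 / avg, 2))

The point of the sketch is only the interface: any truthful auction in this paper is specified by its bid-independent function f, and its performance is judged by comparing its profit to F(2)(b).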
2.3 Limited Supply Versus Unlimited Supply Following [8], throughout the remainder of this paper we will be making the assumption that n = ℓ, i.e., the number of bidders is equal to the number of units for sale. This is without loss of generality as (a) any lower bound that applies to the n = ℓ case also extends to the case where n > ℓ [8], and (b) there is a reduction from the unlimited supply auction problem to the limited supply auction problem that takes an unlimited supply auction that is β-competitive with F(2) and constructs a limited supply auction parameterized by ℓ that is β-competitive with F(2,ℓ), the optimal omniscient auction that sells between 2 and ℓ units [6]. Henceforth, we will assume that we are in the unlimited supply case, and we will examine lower bounds for limited supply problems by placing a restriction on the number of bidders in the auction. 2.4 Lower Bounds and Optimal Auctions Frequently in this paper, we will refer to the best known lower bound on the competitive ratio of truthful auctions: THEOREM 2. [8] The competitive ratio of any auction on n bidders is at least the bound derived from the distribution B described below. DEFINITION 4. Let Υn denote the n-bidder auction that achieves the optimal competitive ratio. This bound is derived by analyzing the performance of any auction on the following distribution B: in each random bid vector B, each bid Bi is drawn i.i.d. from the distribution with Pr[Bi > s] = 1/s for all s ≥ 1. In the two-bidder case, this lower bound is 2. This is achieved by Υ2, which is the 1-unit Vickrey auction (the 1-unit Vickrey auction sells to the highest bidder at the second highest bid value). In the three-bidder case, this lower bound is 13/6. In the next section, we define the auction Υ3 which matches this lower bound. In the four-bidder case, this lower bound is 215/96 ≈ 2.24. In the limit as the number of bidders grows, this lower bound approaches a number which is approximately 2.42. It is conjectured that this lower bound is tight for any number of bidders and that the optimal auction, Υn, matches it. 2.5 Profit Extraction In this section we review the truthful profit extraction mechanism ProfitExtractR. This mechanism is a special case of a general cost-sharing schema due to Moulin and Shenker [11]. The goal of profit extraction is, given bids b, to extract a target value R of profit from some subset of the bidders. ProfitExtractR: Given bids b, find the largest k such that the highest k bidders can equally share the cost R. Charge each of these bidders R/k. If no subset of bidders can cover the cost, the mechanism has no winners. Important properties of this auction are as follows: • ProfitExtractR is truthful. • If R ≤ F(b), ProfitExtractR(b) = R; otherwise it has no winners and no revenue. We will use this profit extraction mechanism in Section 5 with the following intuition. Such a profit extractor makes it possible to treat a subset S of the bidders as a single bid with value F(S). Note that given a single bid, b, a truthful mechanism might offer it price t; if t < b then the bidder wins and pays t, and otherwise the bidder pays nothing (and loses). Likewise, a mechanism can offer the set of bidders S a target revenue R. If R < F(2)(S), then ProfitExtractR raises R from S; otherwise, it raises no revenue from S. 3. AN OPTIMAL AUCTION FOR THREE BIDDERS In this section we define the optimal auction for three bidders, Υ3, and prove that it indeed matches the known lower bound of 13/6. We follow the definition and proof with a discussion of how this auction was derived. DEFINITION 5. Υ3 is scale-invariant and symmetric and given by the bid-independent function with density function ρ1,x(z) of the form motivated at the end of this section (sale price 1 or x with discrete probabilities a(x) and b(x), or a sale price z > x with density g(z)), where g(x) = 2/(13(x − 1)^3). THEOREM 3. The competitive ratio of Υ3 is 13/6. PROOF. Consider the bids 1, x, y, with 1 < x < y. There are three cases. CASE 1 (x < y < 3/2): F(2) = 3. The auction must raise expected revenue of at least 18/13 on these bids. The bidder with valuation x will pay 1 with probability 9/13, and the bidder with valuation y will pay 1 with probability 9/13. Therefore Υ3 raises 18/13 on these bids. CASE 2 (x < 3/2 < y): F(2) = 3. The auction must raise expected revenue of at least 18/13 on these bids. The bidder with valuation x will pay 9/13 − ∫ from 3/2 to y of z·g(z) dz in expectation. The bidder with valuation y will pay 9/13 + ∫ from 3/2 to y of z·g(z) dz in expectation. Therefore Υ3 raises 18/13 on these bids. CASE 3 (3/2 < x < y): F(2) = 2x. The auction must raise expected revenue of at least 12x/13 on these bids. Consider the revenue raised from all three bidders: substituting in g(x) = 2/(13(x − 1)^3) and expanding the integrals shows that the expected payments sum to 12x/13. Note that the fraction of F(2) raised on every input is identical. If any of the inequalities 1 < x < y are not strict, the same proof applies, giving a lower bound on the auction's profit; however, this bound may no longer be tight. Motivation for Υ3 In this section, we will conjecture that a particular input distribution is worst-case, and show, as a consequence, that all inputs are worst-case in the optimal auction. By applying this consequence, we will derive an optimal auction for three bidders. A truthful, randomized auction on n bidders can be represented by a randomized function f : R^(n−1) → R that maps masked bid vectors to prices in R. By normalization, we can assume that the lowest possible bid is 1. Recall that ρb-i(z) denotes the probability density of f(b-i) at z. The optimal auction for the finite auction problem can be found by an optimization problem in which the variables are the price densities ρb-i(·) and the constraints require that the auction raise the target fraction of F(2)(b) on every bid vector b. This set of integral inequalities is difficult to maximize over. However, by guessing which constraints are tight and which are slack at the optimum, we will be able to derive a set of differential equations for which any feasible solution is an optimal auction. As we discuss in Section 2.4, in [8], the authors define a distribution and use it to find a lower bound on the competitive ratio of the optimal auction. For two bidders, this bid distribution is the worst-case input distribution. We guess (and later verify) that this distribution is the worst-case input distribution for three bidders as well. 
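Returning briefly to Section 2.5, the profit extractor used later in Section 5 is simple to state in code. The sketch below is our own illustration of the verbal description given there; the function name and return convention are our choices.

def profit_extract(bids, R):
    """ProfitExtract_R: find the largest k such that the k highest bidders can
    equally share the target R, and charge each of them R/k.
    Returns (winning indices, price per winner); empty if no subset can cover R."""
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    for k in range(len(bids), 0, -1):          # try the largest k first
        share = R / k
        if all(bids[order[j]] >= share for j in range(k)):
            return order[:k], share
    return [], 0.0

if __name__ == "__main__":
    bids = [9.0, 5.0, 2.0, 1.0]
    winners, price = profit_extract(bids, R=8.0)
    # F(bids) = max(9, 2*5, 3*2, 4*1) = 10 >= 8, so R = 8 is extracted,
    # here from the top two bidders at 4 each.
    print(winners, price, "revenue =", price * len(winners))

Any target above F(bids) would yield no winners and no revenue, which is exactly the behavior the aggregation auctions of Section 5 rely on.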
Since this distribution has full support over the set of all bid vectors and a worst-case distribution puts positive probability only on worst-case inputs, we can therefore assume that all but a measure zero set of inputs is worst-case for the optimal auction. In the optimal two-bidder auction, all inputs with non-identical bids are worst-case, so we will assume the same for three bidders. The guess that these constraints are tight allows us to transform the optimization problem into a feasibility problem constrained by differential equations. If the solution to these equations has value matching the lower bound obtained from the worst-case distribution, then this solution is the optimal auction and that our conjectured choice of worst-case distribution is correct. In Section 6 we show that the optimal auction must sometimes place probability mass on sale prices above the highest bid. This motivates considering symmetric scale-invariant auctions for three bidders with probability density function, P1, x (z), of the following form: 1 with discrete probability a (x) x with discrete probability b (x) z with probability density g (z) for z> x In this auction, the sale price for the first bidder is either one of the latter two bids, or higher than either bid with a probability density which is independent of the input. The feasibility problem which arises from the linear optimization problem by assuming the constraints are tight is as follows: Solving this feasibility problem gives the auction Υ3 proposed above. The proof of its optimality validates its proposed form. Finding a simple restriction on the form of n-bidder auctions for n> 3 under which the optimal auction can be found analytically as above remains an open problem. 4. GENERALIZED PROFIT BENCHMARKS In this section, we widen our focus beyond auctions that compete with F (2) to consider other benchmarks for an auction's profit. We will show that, for three bidders, the form of the optimal auction is essentially independent of the benchmark profit used. This results strongly corroborates the worst-case competitive analysis of auctions by showing that our techniques allow us to derive auctions which are competitive against a broad variety of reasonable benchmarks rather than simply against F (2). Previous work in competitive analysis of auctions has focused on the question of designing the auction with the best competitive ratio against F (2), the profit of the optimal omniscient single-priced mechanism that sells at least two items. However, it is reasonable to consider other benchmarks. For instance, one might wish to compete against V ∗, the profit of the k-Vickrey auction with optimal-inhindsight choice of k. 2 Alternatively, if an auction is being used as a subroutine in a larger mechanism, one might wish to choose the auction which is optimally competitive with a benchmark specific to that purpose. Recall that F (2) (b) = max2 ≥ k ≥ n kb (k). We can generalize this definition to 9 $, parameterized by s = (s2,..., sn) and defined as: When considering 9 $we assume without loss of generality that si <si +1 as otherwise the constraint imposed by si +1 is irrelevant. Note that F (2) is the special case of 9 $with si = i, and that V ∗ = 9 $with si = i − 1. 2Recall that the k-Vickrey auction sells a unit to each of the highest k bidders at a price equal to the k + 1st highest bid, b (k +1), achieving a profit of kb (k +1). 
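The displayed definition of the generalized benchmark does not survive in the text above; the sketch below therefore assumes Gs(b) = max over 2 ≤ k ≤ n of sk · b(k), which is consistent with the two special cases just noted (sk = k recovers F(2), and sk = k − 1 recovers V*). The function names are ours.

def G(bids, s):
    """Generalized benchmark under the stated assumption.
    s is a dict mapping k (2 <= k <= n) to the multiplier for the k-th largest bid."""
    b = sorted(bids, reverse=True)
    n = len(b)
    return max(s[k] * b[k - 1] for k in range(2, n + 1))

def F2(bids):                      # F(2): s_k = k
    n = len(bids)
    return G(bids, {k: k for k in range(2, n + 1)})

def V_star(bids):                  # optimal-in-hindsight k-Vickrey profit: s_k = k - 1
    n = len(bids)
    return G(bids, {k: k - 1 for k in range(2, n + 1)})

if __name__ == "__main__":
    bids = [10.0, 4.0, 4.0, 2.0, 1.0]
    print(F2(bids), V_star(bids))   # 12.0 and 8.0 on this input

Written this way, switching the benchmark only changes the weight vector s, which is the sense in which the rest of this section argues that the form of the optimal auction is benchmark-independent.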
Competing with Gs We will now design a three-bidder auction Υs, t 3 that achieves the optimal competitive ratio against Gs, t. As before, we will first find a lower bound on the competitive ratio and then design an auction to meet that bound. We can lower bound the competitive ratio of Υs, t 3 using the same worst-case distribution from [8] that we used against F (2). Evaluating the performance of any auction competing against Gs, t on this distribution will yield the following theorem. We denote the optimal auction for three bidders against Gs, t as Υs, t The proof can be found in the appendix. Similarly, we can find the optimal auction against Gs, t using the same technique we used to solve for the three bidder auction with the best competitive ratio against F (2). DEFINITION 6. Υs, t 3 is scale-invariant and symmetric and given by the bid-independent function with density function where g (x) = 2 (t − s) 2 / (s2 + t2) This auction, like Υ3, can be derived by reducing the optimization problem to a feasibility problem, guessing that the optimal solution has the same form as Υs, t 3, and solving. The auction is optimal because it matches the lower bound found above. Note that the form of Υs, t 3 is essentially the same as for Υ3, but that the probability of each price is scaled depending on the values of s and t. That our auction for three bidders matches the lower bound computed by the input distribution used in [8] is strong evidence that this input distribution is the worst-case input distribution for any number of bidders and any generalized profit benchmark. Furthermore, we strongly suspect that for any number of bidders, the form of the optimal auction will be independent of the benchmark used. 5. AGGREGATION AUCTIONS We have seen that optimal auctions for small cases of the limitedsupply model can be found analytically. In this section, we will construct a schema for turning limited supply auctions into unlimited supply auctions with a good competitive ratio. As discussed in Section 2.5, the existence of a profit extractor, ProfitExtractR, allows an auction to treat a set of bids S as a single bid with value F (S). Given n bidders and an auction, Am, for m <n bidders, we can convert the m-bidder auction into an n-bidder auction by randomly partitioning the bidders into m subsets and then treating each subset as a single bidder (via ProfitExtractR) and running the m-bidder auction. DEFINITION 7. Given a truthful m-bidder auction, Am, the m-aggregation auction for Am, AggAm, works as follows: 1. Cast each bid uniformly at random into one of m bins, resulting in bid vectors b (1),..., b (m). 2. For each bin j, compute the aggregate bid Bj = F (b (j)). Let B be the vector of aggregate bids, and B − j be the aggregate bids for all bins other than j. 3. Compute the aggregate price Tj = f (B − j), where f is the bid-independent function for Am. 4. For each bin j, run ProfitExtractTj on b (j). Since Am and ProfitExtractR are truthful, Tj is computed independently of any bid in bin j and thus the price offered any bidder in b (j) is independent of his bid; therefore, THEOREM 6. If Am is truthful, the m-aggregation auction for Am, AggAm, is truthful. Note that this schema yields a new way of understanding the Random Sampling Profit Extraction (RSPE) auction [5] as the simplest case of an aggregation auction. It is the 2-aggregation auction for Υ2, the 1-unit Vickrey auction. To analyze AggAm, consider throwing k balls into m labeled bins. 
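Before turning to the analysis, here is a sketch (ours, not the paper's code) of the schema in Definition 7. It reuses the profit_extract helper sketched earlier, and the supplied m-bidder auction is represented by its bid-independent function f.

import random

def F(bids):
    """Optimal single-price profit F(b); 0 for an empty bin."""
    return max(k * b for k, b in enumerate(sorted(bids, reverse=True), start=1)) if bids else 0.0

def aggregation_auction(bids, m, f, rng=None):
    # assumes profit_extract(bids, R) from the earlier sketch is in scope
    rng = rng or random.Random(0)
    bins = [[] for _ in range(m)]
    for b in bids:                               # step 1: cast each bid into a random bin
        bins[rng.randrange(m)].append(b)
    B = [F(bin_) for bin_ in bins]               # step 2: aggregate bids B_j = F(b^(j))
    revenue = 0.0
    for j in range(m):
        masked = B[:j] + B[j + 1:]
        T = f(masked, rng)                       # step 3: aggregate price T_j = f(B_-j)
        winners, price = profit_extract(bins[j], T)   # step 4: extract T_j from bin j
        revenue += price * len(winners)
    return revenue

# With m = 2 and f returning the other bin's aggregate bid (the 1-unit Vickrey rule
# on a single masked bid), this reduces to the RSPE auction mentioned above.
def vickrey_on_one(masked, rng):
    return masked[0]

Truthfulness is inherited exactly as in Theorem 6: the target T_j offered to bin j never depends on any bid inside bin j.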
Let k represent a configuration of balls in bins, so that ki is equal to the number of balls in bin i, and k (i) is equal to the number of balls in the ith largest bin. Let Km, k represent the set of all possible configurations of k balls in m bins. We write the multinomial coefficient of k as (k). The probability that a particular configu Then the m-aggregation auction for Am, AggAm, raises the following fraction of the optimal revenue F (2) (b): The inequality follows from the monotonicity of F (2), and the equality from the homogeneity of F (2). ProfitExtractTj will raise Tj if Tj ≤ Bj, and no profit other] wise. Thus, E [AggAm (b)] ≥ E IF (2) (B) / β. The theorem follows by rewriting this expectation as a sum over all k in Km, k. 5.1 A 3.25 Competitive Auction We apply the aggregation auction schema to Υ3, our optimal auction for three bidders, to achieve an auction with competitive ratio 3.25. This improves on the previously best known auction which is 3.39-competitive [7]. THEOREM 8. The aggregation auction for Υ3 has competitive ratio 3.25. PROOF. By theorem 7, For k = 2 and k = 3, E [AggΥ3 (b)] = 23 k /,3. As k increases, E [AggΥ3 (b)] / F (2) increases as well. Since we do not expect to find a closed-form formula for the revenue, we lower bound F (2) (b) by 3b (3). Using large deviation bounds, one can show that this lower bound is greater than 23 k /,3 for large-enough k, and the remainder can be shown by explicit calculation. Plugging in,3 = 13/6, the competitive ratio is 13/4. As k increases, the competitive ratio approaches 13/6. Note that the above bound on the competitive ratio of AggΥ3 is tight. To see this, consider the bid vector with two very large and non-identical bids of h and h + e with the remaining bids 1. Given that the competitive ratio of Υ3 is tight on this example, the expected revenue of this auction on this input will be exactly 13/4. 5.2 A Gs,t-based Aggregation Auction In this section we show that Υ3 is not the optimal auction to use in an aggregation auction. One can do better by choosing the auction that is optimally competitive against a specially tailored benchmark. To see why this might be the case, notice (Table 1) that the fraction of F (2) (b) raised for when there are k = 2 and k = 3 winning bidders in F (2) (b) is substantially smaller than the fraction of F (2) (b) raised when there are more winners. This occurs because the expected ratio between F (2) (B) and F (2) (b) is lower in this case while the competitive ratio of Υ3 is constant. If we chose a three bidder auction that performed better when F (2) has smaller numbers of winners, our aggregation auction would perform better in the worst case. One approach is to compete against a different benchmark that puts more weight than F (2) on solutions with a small number of winners. Recall that F (2) is the instance of Gs, t with s = 2 and t = 3. By using the auction that competes optimally against Gs, t with s> 2, while holding t = 3, we will raise a higher fraction of revenue on smaller numbers of winning bidders and a lower fraction of revenue on large numbers of winning bidders. We can numerically optimize the values of s and t in Gs, t (b) in order to achieve the best competitive ratio for the aggregation auction. In fact, this will allow us to improve our competitive ratio slightly. THEOREM 9. For an optimal choice of s and t, the aggregation auction for Υs, t 3 is 3.243-competitive. 
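The expectation behind Theorem 7 can be evaluated numerically. The sketch below (our own illustration) enumerates the ways k equal winning bids can be thrown into m bins, takes each bin's aggregate bid to be the number of winning bids it receives, and reports E[F(2)(B)] / (k · βm); the βm values plugged in (2 for m = 2 and 13/6 for m = 3) are the ratios discussed above. For m = 3 and k = 2 or 3 this reproduces the fraction 2/(3 · 13/6) = 4/13, i.e., the competitive ratio 13/4 = 3.25.

from itertools import product

def exact_fraction(m, k, beta_m):
    """Exact E[F(2)(B)] / (k * beta_m) when k equal winning bids are cast
    uniformly at random into m bins and each aggregate bid is the bin's count."""
    total = 0.0
    for assignment in product(range(m), repeat=k):      # each winning bid picks a bin
        counts = sorted((assignment.count(j) for j in range(m)), reverse=True)
        f2_of_aggregates = max(j * counts[j - 1] for j in range(2, m + 1))
        total += f2_of_aggregates / m ** k               # each assignment has prob 1/m^k
    return total / (k * beta_m)

if __name__ == "__main__":
    for m, beta in [(2, 2.0), (3, 13.0 / 6.0)]:          # assumed ratios for Upsilon_m
        row = [round(exact_fraction(m, k, beta), 3) for k in range(2, 9)]
        print("m =", m, ":", row)

For m = 3 the minimum of the row sits at k = 2 and k = 3, matching the tight two-large-bid example discussed above, and the fraction grows as k increases.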
The proof follows the outline of Theorem 7 and 8 with the optimal choice of s = 2.162 (while t is held constant at 3). 5.3 Further Reducing the Competitive Ratio There are a number of ways we might attempt to use this aggregation auction schema to continue to push the competitive ratio down. In this section, we give a brief discussion of several attempts. 5.3.1 AggΥm for m> 3 If the aggregation auction for Υ2 has a competitive ratio of 4 and the aggregation auction for Υ3 has a competitive ratio of 3.25, can we improve the competitive ratio by aggregating Υ4 or Υm for larger m? We conjecture in the negative: for m> 3, the aggregation auction for Υm has a larger competitive ratio than the aggregation auction for Υ3. The primary difficulty in proving this Table 1: E A (b) / F (2) (b) for AggΥm as a function of k, the optimal number of winners in F (2) (b). The lowest value for each column is printed in bold. conjecture lies in the difficulty of finding a closed-form solution for the formula of Theorem 7. We can, however, evaluate this formula numerically for different values of m and k, assuming that the competitive ratio for Υm matches the lower bound for m given by Theorem 2. Table 1 shows, for each value of m and k, the fraction of F (2) raised by the aggregation auction for AggΥm when there are k winning bidders, assuming the lower bound of Theorem 2 is tight. 5.3.2 Convex combinations of AggΥm As can be seen in Table 1, when m> 3, the worst-case value of k is no longer 2 and 3, but instead an increasing function of m. An aggregation auction for Υm outperforms the aggregation auction for Υ3 when there are two or three winning bidders, while the aggregation auction for Υ3 outperforms the other when there are at least six winning bidders. Thus, for instance, an auction which randomizes between aggregation auctions for Υ3 and Υ4 will have a worst-case which is better than that of either auction alone. Larger combinations of auctions will allow more room to optimize the worst-case. However, we suspect that no convex combination of aggregation auctions will have a competitive ratio lower than 3. Furthermore, note that we cannot yet claim the existence of a good auction via this technique as the optimal auction Υn for n> 3 is not known and it is only conjectured that the bound given by Theorem 2 and represented in Table 1 is correct for Υn. 6. A LOWER BOUND FOR CONSERVATIVE AUCTIONS In this section, we define a class of auctions that never offer a sale price which is higher than any bid in the input and prove a lower bound on the competitive ratio of these auctions. As this lower bound is stronger than the lower bound of Theorem 2 for n ≥ 3, it shows that the optimal auction must occasionally charge a sales price higher than any bid in the input. Specifically, this result partially explains the form of the optimal three bidder auction. DEFINITION 8. We say an auction BIf is conservative if its bidindependent function f satisfies f (b-i) ≤ max (b-i). We can now state our lower bound for conservative auctions. THEOREM 10. Let A be a conservative auction for n bidders. Then the competitive ratio of A is at least 3n-2n. COROLLARY 1. The competitive ratio ofany conservative auction for an arbitrary number of bidders is at least three. For a two-bidder auction, this restriction does not prevent optimality. Υ2, the 1-unit Vickrey auction, is conservative. For larger numbers of bidders, however, the restriction to conservative auctions does affect the competitive ratio. 
For the three-bidder case, Υ3 has competitive ratio 2.17, while the best conservative auction is no better than 2.33-competitive. The k-Vickrey auction and the Random Sampling Optimal Price auction [9] are conservative auctions. The Random Sampling Profit Extraction auction [5] and the CORE auction [7], on the other hand, use the ProfitExtractR mechanism as a subroutine and thus sometimes offer a sale price which is higher than the highest input bid value. In [8], the authors define a restricted auction as one on which, for any input, the sale prices are drawn from the set of input bid values. The class of conservative auctions can be viewed as a generalization of the class of restricted auctions and therefore our result below gives lower bounds on the performance of a broader class of auctions. We will prove Theorem 10 with the aid of the following lemma: PROOF. The lemma is proved by strong induction on k. First some notation that will be convenient. For any particular k and H we will be considering the bid vector b = (1,..., 1, hk, H) and placing bounds on ρb-i (z). Since we can assume without loss of generality that the auction is symmetric, we will notate b-1 as b with any one of the 1-valued bids masked. Similarly we notate b-hk (resp. b-H) as b with the hk-valued bid (resp. H-valued bid) masked. We will also let p1 (b), phk (b), and pH (b) represent the expected payment of a 1-valued, hk-valued, and H-valued bidder in A on b, respectively (note by symmetry the expected payment for all 1-valued bidders is the same). Base case (k = 0, hk = 1): A must raise revenue of at least rn on b = (1,..., 1, 1, H): The second inequality follows from the conservatism of the underlying auction. The base case follows trivially from the final inequality. Inductive case (k> 0, hk = kh): Let b = (1,..., 1, hk, H). First, we will find an upper bound on pH (b) Equation (1) follows from the conservatism of A and (2) is from invoking the strong inductive hypothesis with H = kh and the observation that the maximum possible revenue will be found by placing exactly enough probability at each multiple of h to satisfy the constraints of the inductive hypothesis and placing the remaining probability at kh. Next, we will find a lower bound on phk (b) by considering the revenue raised by the bids b. Recall that A must obtain a profit of at least rF (2) (b) = 2rkh. Given upper-bounds on the profit from the H-valued, equation bid (3), and the 1-valued bids, the profit from the hk-valued bid must be at least: In order to lower bound Pr [f (b-hk) ≤ kh], consider the auction that minimizes it and is consistent with the lower bounds obtained by the strong inductive hypothesis on Pr [f (b-hk) ≤ ih]. To minimize the constraints implied by the strong inductive hypothesis, we place the minimal amount of probability mass required each price level. This gives ρhk (b) with nr-1 n-1 probability at 1 and exactly 3nr-2r-n n-1 at each hi for 1 ≤ i <k. Thus, the profit from offering prices at most hk-1 is nr-1 on hk. Therefore, the probability that the sale price will be no more than kh on masked bid vector on bid vector b = (1,..., 1, kh, H) must be at least nr-1 n-1 + k (3nr-2r-n n-1). Given Lemma 1, Theorem 10 is simple to prove. PROOF. Let A be a conservative auction. Suppose 3nr-2r-n n-1 = n-1 ≤ 0. Thus, r ≤ n 3n-2. The theorem follows. 7. 
CONCLUSIONS AND FUTURE WORK We have found the optimal auction for the three-unit limitedsupply case, and shown that its structure is essentially independent of the benchmark used in its competitive analysis. We have then used this auction to derive the best known auction for the unlimited supply case. Our work leaves many interesting open questions. We found that the lower bound of [8] is matched by an auction for three bidders, even when competing against generalized benchmarks. The most interesting open question from our work is whether the lower bound from Theorem 2 can be matched by an auction for more than three bidders. We conjecture that it can. Second, we consider whether our techniques can be extended to find optimal auctions for greater numbers of bidders. The use of our analytic solution method requires knowledge of a restricted class of auctions which is large enough to contain an optimal auction but small enough that the optimal auction in this class can be found explicitly through analytic methods. No class of auctions which meets these criteria is known even for the four bidder case. Also, when the number of bidders is greater than three, it might be the case that the optimal auction is not expressible in terms of elementary functions. Another interesting set of open questions concerns aggregation auctions. As we show, the aggregation auction for Υ3 outperforms the aggregation auction for Υ2 and it appears that the aggregation auction for Υ3 is better than Υm for m> 3. We leave verification of this conjecture for future work. We also show that Υ3 is not the best three-bidder auction for use in an aggregation auction, but the auction that beats it is able to reduce the competitive ratio of the overall auction only a little bit. It would be interesting to know whether for any m there is an m-aggregation auction that substantially improves on the competitive ratio of AggΥm. Finally, we remark that very little is known about the structure of the optimal competitive auction. In our auction Υ3, the sales price for a given bidder is restricted either to be one of the other bid values or to be higher than all other bid values. The optimal auction for two bidders, the 1-unit Vickrey auction, also falls within this class of auctions, as its sales prices are restricted to bid values. We conjecture that an optimal auction for any number of bidders lies within this class. Our paper provides partial evidence for this conjecture: the lower bound of Section 6 on conservative auctions shows that the optimal auction must offer sales prices higher than any bid value if the lower bound of Theorem 2 is tight, as is conjectured. It remains to show that optimal auctions otherwise only offer sales prices at bid values.
C-71
A Point-Distribution Index and Its Application to Sensor-Grouping in Wireless Sensor Networks
We propose ι, a novel index for evaluation of point-distribution. ι is the minimum distance between each pair of points normalized by the average distance between each pair of points. We find that a set of points that achieve a maximum value of ι result in a honeycomb structure. We propose that ι can serve as a good index to evaluate the distribution of the points, which can be employed in coverage-related problems in wireless sensor networks (WSNs). To validate this idea, we formulate a general sensor-grouping problem for WSNs and provide a general sensing model. We show that locally maximizing ι at sensor nodes is a good approach to solve this problem with an algorithm called Maximizing-ι Node-Deduction (MIND). Simulation results verify that MIND outperforms a greedy algorithm that exploits sensor-redundancy we design. This demonstrates a good application of employing ι in coverage-related problems for WSNs.
[ "point-distribut index", "sensor-group", "wireless sensor network", "honeycomb structur", "surveil", "redund", "fault toler", "node-deduct process", "increment coverag qualiti algorithm", "sleep configur protocol", "sensor coverag", "sensor group" ]
[ "P", "P", "P", "P", "U", "U", "U", "M", "M", "U", "M", "M" ]
A Point-Distribution Index and Its Application to Sensor-Grouping in Wireless Sensor Networks Yangfan Zhou Haixuan Yang Michael R. Lyu Edith C.-H. Ngai Department of Computer Science and Engineering The Chinese University of Hong Kong Hong Kong, China {yfzhou, hxyang, lyu, chngai}@cse. cuhk.edu.hk ABSTRACT We propose ι, a novel index for evaluation of point-distribution. ι is the minimum distance between each pair of points normalized by the average distance between each pair of points. We find that a set of points that achieve a maximum value of ι result in a honeycomb structure. We propose that ι can serve as a good index to evaluate the distribution of the points, which can be employed in coverage-related problems in wireless sensor networks (WSNs). To validate this idea, we formulate a general sensorgrouping problem for WSNs and provide a general sensing model. We show that locally maximizing ι at sensor nodes is a good approach to solve this problem with an algorithm called Maximizingι Node-Deduction (MIND). Simulation results verify that MIND outperforms a greedy algorithm that exploits sensor-redundancy we design. This demonstrates a good application of employing ι in coverage-related problems for WSNs. Categories and Subject Descriptors C.2.4 [Computer - Communication Networks]: Network Architecture and Design; C.3 [Special-Purpose and Application-Based Systems]: Realtime and Embedded Systems General Terms Theory, Algorithms, Design, Verification, Performance 1. INTRODUCTION A wireless sensor network (WSN) consists of a large number of in-situ battery-powered sensor nodes. A WSN can collect the data about physical phenomena of interest [1]. There are many potential applications of WSNs, including environmental monitoring and surveillance, etc. [1][11]. In many application scenarios, WSNs are employed to conduct surveillance tasks in adverse, or even worse, in hostile working environments. One major problem caused is that sensor nodes are subjected to failures. Therefore, fault tolerance of a WSN is critical. One way to achieve fault tolerance is that a WSN should contain a large number of redundant nodes in order to tolerate node failures. It is vital to provide a mechanism that redundant nodes can be working in sleeping mode (i.e., major power-consuming units such as the transceiver of a redundant sensor node can be shut off) to save energy, and thus to prolong the network lifetime. Redundancy should be exploited as much as possible for the set of sensors that are currently taking charge in the surveillance work of the network area [6]. We find that the minimum distance between each pair of points normalized by the average distance between each pair of points serves as a good index to evaluate the distribution of the points. We call this index, denoted by ι, the normalized minimum distance. If points are moveable, we find that maximizing ι results in a honeycomb structure. The honeycomb structure poses that the coverage efficiency is the best if each point represents a sensor node that is providing surveillance work. Employing ι in coverage-related problems is thus deemed promising. This enlightens us that maximizing ι is a good approach to select a set of sensors that are currently taking charge in the surveillance work of the network area. To explore the effectiveness of employing ι in coverage-related problems, we formulate a sensorgrouping problem for high-redundancy WSNs. 
An algorithm called Maximizing-ι Node-Deduction (MIND) is proposed in which redundant sensor nodes are removed to obtain a large ι for each set of sensors that are currently taking charge in the surveillance work of the network area. We also introduce another greedy solution called Incremental Coverage Quality Algorithm (ICQA) for this problem, which serves as a benchmark to evaluate MIND. The main contribution of this paper is twofold. First, we introduce a novel index ι for evaluation of point-distribution. We show that maximizing ι of a WSN results in low redundancy of the network. Second, we formulate a general sensor-grouping problem for WSNs and provide a general sensing model. With the MIND algorithm we show that locally maximizing ι among each sensor node and its neighbors is a good approach to solve this problem. This demonstrates a good application of employing ι in coverage-related problems. The rest of the paper is organized as follows. In Section 2, we introduce our point-distribution index ι. We survey related work and formulate a sensor-grouping problem together with a general sensing model in Section 3. Section 4 investigates the application of ι in this grouping problem. We propose MIND for this problem and introduce ICQA as a benchmark. In Section 5, we present our simulation results in which MIND and ICQA are compared. Section 6 provides concluding remarks. 2. THE NORMALIZED MINIMUM DISTANCE ι: A POINT-DISTRIBUTION INDEX Suppose there are n points in a Euclidean space Ω. The coordinates of these points are denoted by xi (i = 1, ..., n). It may be necessary to evaluate how these points are distributed. There are many metrics to achieve this goal. For example, the Mean Square Error from these points to their mean value can be employed to calculate how these points deviate from their mean (i.e., their center). In resource-sharing evaluation, the Global Fairness Index (GFI) is often employed to measure how evenly the resource is distributed among these points [8], when xi represents the amount of resource that belongs to point i. In WSNs, GFI is usually used to calculate how even the remaining energy of sensor nodes is. When n is larger than 2 and the points do not all overlap (points all overlapping means xi = xj, ∀ i, j = 1, 2, ..., n), we propose a novel index called the normalized minimum distance, namely ι, to evaluate the distribution of the points. ι is the minimum distance between each pair of points normalized by the average distance between each pair of points. It is calculated by: ι = min(||xi − xj||) / µ (∀ i, j = 1, 2, ..., n; and i ≠ j) (1) where ||xi − xj|| denotes the Euclidean distance between point i and point j in Ω, the min(·) function calculates the minimum distance between each pair of points, and µ is the average distance between each pair of points, which is: µ = (Σ from i=1 to n Σ from j=1, j≠i to n of ||xi − xj||) / (n(n − 1)) (2) ι measures how well the points separate from one another. Obviously, ι is in the interval [0, 1]. ι is equal to 1 if and only if n is equal to 3 and these three points form an equilateral triangle. ι is equal to zero if any two points overlap. ι is a very interesting property of a set of points. If we consider each xi (i = 1, ..., n) to be a variable in Ω, what would these n points look like if ι is maximized? An algorithm is implemented to generate the topology in which ι is locally maximized (the algorithm can be found in [19]). We consider a 2-dimensional space. We select n = 10, 20, 30, ..., 100 and perform this algorithm. 
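The index itself is straightforward to compute. The following short Python sketch (ours, not from the paper) implements Equations (1) and (2) directly; for instance, an equilateral triangle attains ι = 1.

from math import dist   # Python 3.8+: math.dist is the Euclidean distance

def iota(points):
    """Normalized minimum distance of a list of coordinate tuples.
    Requires n > 2 and not all points identical."""
    n = len(points)
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
    dists = [dist(points[i], points[j]) for i, j in pairs]
    mu = sum(dists) / (n * (n - 1))          # Eq. (2): average over ordered pairs
    return min(dists) / mu                   # Eq. (1)

if __name__ == "__main__":
    tri = [(0.0, 0.0), (1.0, 0.0), (0.5, 3 ** 0.5 / 2)]   # equilateral triangle
    print(round(iota(tri), 6))                             # prints 1.0

A local-search routine that repeatedly perturbs point positions and keeps moves that increase this value is one simple way to reproduce the honeycomb-like topologies described next.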
To avoid the algorithm converging to a local optimum, we generate the initial points with 1000 different random seeds and keep the result with the largest ι when the algorithm converges. Figure 1 demonstrates what the resulting topology looks like when n = 20, as an example.

Figure 1: Node Number = 20, ι = 0.435376

Suppose each point represents a sensor node, the sensor coverage model is the Boolean coverage model [15][17][18][14], and the coverage radius of each node is the same. It is exciting to see that this topology results in the lowest redundancy, because the Voronoi diagram [2] formed by these nodes (a Voronoi diagram formed by a set of nodes partitions a space into a set of convex polygons such that points inside a polygon are closest to only one particular node) is a honeycomb-like structure (this is how the base stations of a wireless cellular network are deployed and why such a network is called a cellular one). This enlightens us that ι may be employed to solve problems related to the sensor-coverage of an area. In WSNs, it is desirable that the active sensor nodes performing the surveillance task separate from one another. Under the constraint that the sensing area should be covered, the more each node separates from the others, the less the redundancy of the coverage is. ι indicates the quality of such separation. It should be useful for approaches to sensor-coverage-related problems. In the following discussions, we will show the effectiveness of employing ι in the sensor-grouping problem. 3. THE SENSOR-GROUPING PROBLEM In many application scenarios, to achieve fault tolerance, a WSN contains a large number of redundant nodes in order to tolerate node failures. A node sleeping-working schedule scheme is therefore highly desired to exploit the redundancy of working sensors and let as many nodes as possible sleep. Much work in the literature addresses this issue [6]. Yan et al. introduced a differentiated service in which a sensor node finds out its responsible working duration in cooperation with its neighbors to ensure the coverage of sampled points [17]. Ye et al. developed PEAS, in which sensor nodes wake up randomly over time, probe their neighboring nodes, and decide whether they should begin to take charge of surveillance work [18]. Xing et al. exploited a probabilistic distributed detection model with a protocol called Coordinating Grid (Co-Grid) [16]. Wang et al. designed an approach called the Coverage Configuration Protocol (CCP), which introduced the notion that the coverage degree of the intersection-points of the neighboring nodes' sensing-perimeters indicates the coverage of a convex region [15]. In our recent work [7], we also provided a sleeping configuration protocol, namely SSCP, in which the sleeping eligibility of a sensor node is determined by a local Voronoi diagram. SSCP can provide different levels of redundancy to maintain different requirements of fault tolerance. The major feature of the aforementioned protocols is that they employ online distributed and localized algorithms in which a sensor node determines its sleeping eligibility and/or sleeping time based on the coverage requirement of its sensing area with some information provided by its neighbors. Another major approach to the sensor-node sleeping-working scheduling issue is to group sensor nodes. Sensor nodes in a network are divided into several disjoint sets.
Each set of sensor nodes is able to maintain the required area surveillance work. The sensor nodes are scheduled according to which set they belong to. These sets work successively, and only one set of sensor nodes works at any time. We call this issue the sensor-grouping problem. The major advantage of this approach is that it avoids the overhead caused by the processes in which sensor nodes coordinate to decide whether a sensor node is a candidate to sleep or work and how long it should sleep or work. Such processes have to be performed from time to time during the lifetime of a network in many online distributed and localized algorithms. The large overhead caused by such processes is the main drawback of the online distributed and localized algorithms. On the contrary, roughly speaking, this approach groups sensor nodes once and schedules when each set of sensor nodes should be on duty. It does not require frequent decision-making on working/sleeping eligibility (note that if some nodes die, a re-grouping process might also be performed to exploit the remaining nodes in a set of sensor nodes; how to provide this mechanism is beyond the scope of this paper and yet to be explored). In [13] by Slijepcevic et al., the sensing area is divided into regions. Sensor nodes are grouped with the most-constrained least-constraining algorithm. It is a greedy algorithm in which the priority of selecting a given sensor is determined by how many uncovered regions this sensor covers and the redundancy caused by this sensor. In [5] by Cardei et al., disjoint sensor sets are modeled as disjoint dominating sets. Although maximum dominating-set computation is NP-complete, the authors proposed a graph-coloring-based algorithm. Cardei et al. also studied a similar problem in the domain of covering target points in [4]. The NP-completeness of the problem is proved and a heuristic that computes the sets is proposed. These algorithms are centralized solutions of the sensor-grouping problem. However, global information (e.g., the location of each in-network sensor node) of a large-scale WSN is very expensive to obtain online. Also, it is usually infeasible to obtain such information before sensor nodes are deployed. For example, sensor nodes are usually deployed in a random manner, and the location of each in-network sensor node is determined only after a node is deployed. A solution to the sensor-grouping problem should therefore be based only on the locally obtainable information of a sensor node. That is to say, nodes should determine which group they should join in a fully distributed way. Here, locally obtainable information refers to a node's local information and the information that can be directly obtained from its adjacent nodes, i.e., nodes within its communication range. In Subsection 3.1, we provide a general problem formulation of the sensor-grouping problem. The distributed-solution requirement is formulated in this problem. It is followed by a discussion in Subsection 3.2 of a general sensing model, which serves as a given condition of the sensor-grouping problem formulation. To facilitate our discussions, the notations used in the following are described as follows. • n: The number of in-network sensor nodes. • S(j) (j = 1, 2, ..., m): The jth set of sensor nodes, where m is the number of sets. • L(i) (i = 1, 2, ..., n): The physical location of node i. • φ: The area monitored by the network, i.e., the sensing area of the network. • R: The sensing radius of a sensor node. We assume that a sensor node can only be responsible for monitoring a circular area centered at the node with a radius equal to R. This is a usual assumption in work that addresses sensor-coverage-related problems.
We call this circular area the sensing area of a node. 3.1 Problem Formulation We assume that each sensor node knows its approximate physical location. The approximate location information is obtainable if each sensor node carries a GPS receiver or if some localization algorithms are employed (e.g., [3]). Problem 1. Given: • The set of each sensor node i's sensing neighbors N(i) and the location of each member in N(i); • A sensing model which quantitatively describes how a point P in area φ is covered by the sensor nodes that are responsible for monitoring this point; we call this quantity the coverage quality of P. • The coverage quality requirement in φ, denoted by s. When the coverage quality of a point is larger than this threshold, we say this point is covered. For each sensor node i, make a decision on which group S(j) it should join so that: • Area φ can be covered by the sensor nodes in each set S(j); • m, the number of sets S(j), is maximized. In this formulation, we call the sensor nodes within a circular area centered at a sensor node i with a radius equal to 2 · R the sensing neighbors of node i. This is because the sensor nodes in this area, together with node i, may cooperate to ensure the coverage of a point inside node i's sensing area. We assume that the communication range of a sensor node is larger than 2 · R, which is also a general assumption in work that addresses sensor-coverage-related problems. That is to say, the first given condition in Problem 1 is information that can be obtained directly from a node's adjacent nodes. It is therefore locally obtainable information. The last two given conditions in this problem formulation can be programmed into a node before it is deployed or by a node-programming protocol (e.g., [9]) during network runtime. Therefore, the given conditions can all be easily obtained by a sensor-grouping scheme with a fully distributed implementation. We reify this problem with a realistic sensing model in the next subsection. 3.2 A General Sensing Model As WSNs are usually employed to monitor possible events in a given area, it is a design requirement that an event occurring in the network area can be successfully detected by sensors. This issue is usually formulated as how to ensure that an event signal emitted at an arbitrary point in the network area can be detected by sensor nodes. Obviously, a sensing model is required to address this problem, so that how a point in the network area is covered can be modeled and quantified, and thus the coverage quality of a WSN can be evaluated. Different applications of WSNs employ different types of sensors, which surely have widely different theoretical and physical characteristics. Therefore, to fulfill different application requirements, different sensing models should be constructed based on the characteristics of the sensors employed. A simple theoretical sensing model is the Boolean sensing model [15][18][17][14], which assumes that a sensor node can always detect an event occurring in its responsible sensing area. But most sensors detect events according to the sensed signal strength. Event signals usually fade in relation to the physical distance between an event and the sensor.
The larger the distance, the weaker the event signals that can be sensed by the sensor, which results in a reduction of the probability that the event can be successfully detected by the sensor. As event signals in WSNs are usually electromagnetic, acoustic, or photic signals, they fade rapidly with increasing transmit distance. Specifically, the signal strength E(d) of an event that is received by a sensor node satisfies:

E(d) = \frac{\alpha}{d^{\beta}}  (3)

where d is the physical distance from the event to the sensor node; α is related to the signal strength emitted by the event; and β is the signal fading factor, which is typically a positive number larger than or equal to 2. Usually, α and β are considered constants. Based on this notion, to be more reasonable, researchers have proposed a collaborative sensing model to capture application requirements: area coverage can be maintained by a set of collaborative sensor nodes. For a point with physical location L, the point is considered covered by the collaboration of i sensors (denoted by k1, ..., ki) if and only if the following two conditions hold [7][10][12]:

\forall j = 1, ..., i: \quad \|L(k_j) - L\| < R,  (4)

C(L) = \sum_{j=1}^{i} E(\|L(k_j) - L\|) > s.  (5)

C(L) is regarded as the coverage quality of location L in the network area [7][10][12]. However, we notice that defining the sensibility as the sum of the signal strength sensed by each collaborative sensor implies a very special application: the application must employ the sum of the signal strengths for decision-making. To capture generally realistic application requirements, we modify the definition described in Equation (5). The model we adopt in this paper is described in detail as follows. We consider that the probability P(L, kj) that an event located at L can be detected by sensor kj is related to the signal strength sensed by kj. Formally,

P(L, k_j) = \gamma E(d) = \frac{\delta}{(\|L(k_j) - L\| / \epsilon + 1)^{\beta}},  (6)

where γ is a constant and δ = γα is a constant too; ε normalizes the distance to a proper scale, and the "+1" term avoids an infinite value of P(L, kj). The probability that an event located at L can be detected by any of the collaborative sensors that satisfy Equation (4) is:

P(L) = 1 - \prod_{j=1}^{i} (1 - P(L, k_j)).  (7)

As the detection probability P(L) reasonably determines how an event occurring at location L can be detected by the network, it is a good measure of the coverage quality of location L in a WSN. Specifically, Equation (5) is modified to:

C(L) = P(L) = 1 - \prod_{j=1}^{i} \left[1 - \frac{\delta}{(\|L(k_j) - L\| / \epsilon + 1)^{\beta}}\right] > s.  (8)

To sum up, we consider a point at location L covered if Equations (4) and (8) hold. 4. MAXIMIZING-ι NODE-DEDUCTION ALGORITHM FOR SENSOR-GROUPING PROBLEM Before we proceed to introduce algorithms for the sensor-grouping problem, let us define the margin (denoted by θ) of an area φ monitored by the network as the band-like marginal area of φ in which every point on the outer perimeter of θ is a distance ρ away from the points on the inner perimeter of θ. ρ is called the margin length. In a practical network, sensor nodes are usually evenly deployed in the network area. Obviously, the number of sensor nodes that can sense an event occurring in the margin of the network is smaller than the number of sensor nodes that can sense an event occurring in other areas of the network. Based on this consideration, in our algorithm design, we ensure the coverage quality of the network area except the margin. The information on φ and ρ is network-wide.
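Both algorithms introduced in this section repeatedly need to test whether a sample point is covered in the sense of Equations (4) and (8). A minimal Python sketch of this test is given below; the function names are ours, and the default parameter values mirror the simulation settings of Section 5 (δ = γα = 1.0, β = 2.0, ε = 100.0, R = 80, s = 0.6).

import math

def detection_prob(sensor_xy, point_xy, delta=1.0, beta=2.0, eps=100.0):
    # P(L, k_j) of Equation (6): single-sensor detection probability.
    d = math.dist(sensor_xy, point_xy)
    return delta / (d / eps + 1.0) ** beta

def is_covered(point_xy, sensor_locations, R=80.0, s=0.6,
               delta=1.0, beta=2.0, eps=100.0):
    # Equations (4) and (8): collaborative coverage quality at a point.
    collaborators = [loc for loc in sensor_locations
                     if math.dist(loc, point_xy) < R]    # condition (4)
    if not collaborators:
        return False
    miss = 1.0
    for loc in collaborators:
        miss *= 1.0 - detection_prob(loc, point_xy, delta, beta, eps)
    return 1.0 - miss > s                                # C(L) > s, Equation (8)

In MIND and ICQA such a check would be evaluated only at a small set of sample points inside a node's sensing area, as described in the simulation study.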
Each in-network sensor node can be pre-programmed or informed online about φ and ρ, and can thus calculate whether a point in its sensing area is in the margin or not. 4.1 Maximizing-ι Node-Deduction Algorithm The node-deduction process of our Maximizing-ι Node-Deduction Algorithm (MIND) is simple. A node i greedily maximizes the ι of the sub-network composed of itself, its ungrouped sensing neighbors, and the neighbors that are in the same group as itself. Under the constraint that the coverage quality of its sensing area should be ensured, node i deletes nodes in this sub-network one by one. A candidate to be pruned satisfies the following: • It is an ungrouped node. • The deletion of the node will not result in uncovered points inside the sensing area of i. A candidate is deleted if its deletion results in the largest ι of the sub-network compared to the deletion of any other candidate. This node-deduction process continues until no candidate can be found. Then all the ungrouped sensing neighbors that are not deleted are grouped into the same group as node i. We call the sensing neighbors that are in the same group as node i the group sensing neighbors of node i. We then call node i a finished node, meaning that it has finished the above procedure and the sensing area of the node is covered. Those nodes that have not yet finished this procedure are called unfinished nodes. The above procedure initiates at a randomly selected node that is not in the margin. The node is grouped into the first group. It calculates its resulting group sensing neighbors based on the above procedure. It informs these group sensing neighbors that they are selected into the group. Then it hands over the above procedure to the unfinished group sensing neighbor that is farthest from itself. This group sensing neighbor continues the procedure until no unfinished neighbor can be found. Then the first group is formed (an algorithmic description of this procedure can be found in [19]). After a group is formed, another randomly selected ungrouped node begins to group itself into the second group and initiates the above procedure. In this way, groups are formed one by one. When a node involved in this algorithm finds that the coverage quality of its sensing area, except the part that overlaps the network margin, cannot be ensured even if all its ungrouped sensing neighbors are grouped into the same group as itself, the algorithm stops. MIND is based on the locally obtainable information of sensor nodes. It is a distributed algorithm that serves as an approximate solution of Problem 1. 4.2 Incremental Coverage Quality Algorithm: A Benchmark for MIND To evaluate the effectiveness of introducing ι into the sensor-grouping problem, another algorithm for the sensor-grouping problem, called the Incremental Coverage Quality Algorithm (ICQA), is designed. Our aim is to evaluate how an approach such as MIND, based on locally maximizing ι, performs. In ICQA, the node-selecting process is as follows. A node i greedily selects ungrouped sensing neighbors into the same group as itself one by one, and informs each selected neighbor that it is selected into the group. The criteria are: • The selected neighbor is responsible for providing surveillance work for some uncovered parts of node i's sensing area (i.e., the coverage quality requirement of these parts is not fulfilled if this neighbor is not selected). • The selected neighbor results in the highest improvement of the coverage quality of the neighbor's sensing area.
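A rough Python sketch of this greedy selection loop may help make the two criteria concrete. The helper callables (area_covered, helps_uncovered_area, coverage_improvement) are our own placeholders for the local checks a node would run against the sensing model above; they are not functions defined in the paper.

def icqa_select_group(node, ungrouped_neighbors, group_members,
                      helps_uncovered_area, coverage_improvement, area_covered):
    # Greedy node-selecting loop of ICQA at `node` (a rough sketch).
    # helps_uncovered_area(cand, group): True if cand covers some part of
    #     node's sensing area whose coverage requirement is not yet met.
    # coverage_improvement(cand, group): improvement of coverage quality over
    #     cand's own sensing area (summed over region centres, Section 5).
    # area_covered(group): True once node's sensing area is entirely covered.
    group = list(group_members) + [node]
    candidates = set(ungrouped_neighbors)
    while not area_covered(group):
        useful = [c for c in candidates if helps_uncovered_area(c, group)]
        if not useful:
            break                      # coverage cannot be ensured: stop
        best = max(useful, key=lambda c: coverage_improvement(c, group))
        group.append(best)             # inform `best` that it joins the group
        candidates.remove(best)
    return group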
Regarding the second criterion, the improvement of the coverage quality should, mathematically, be the integral of the improvements over all points inside the neighbor's sensing area. A numerical approximation is employed to calculate this improvement; details are presented in our simulation study. This node-selecting process continues until the sensing area of node i is entirely covered. In this way, node i's group sensing neighbors are found. The above procedure is handed over in the same way as in MIND, and new groups are thus formed one by one. The condition under which ICQA stops is the same as for MIND. ICQA is also based on the locally obtainable information of sensor nodes, and it is also a distributed algorithm that serves as an approximate solution of Problem 1. 5. SIMULATION RESULTS To evaluate the effectiveness of employing ι in the sensor-grouping problem, we build simulation surveillance networks. We employ MIND and ICQA to group the in-network sensor nodes. We compare the grouping results with respect to how many groups each algorithm finds and how the resulting groups perform. Detailed settings of the simulation networks are shown in Table 1. In the simulation networks, sensor nodes are randomly deployed in a uniform manner in the network area.

Table 1: The settings of the simulation networks
Area of sensor field: 400m * 400m
ρ: 20m
R: 80m
α, β, γ and ε: 1.0, 2.0, 1.0 and 100.0
s: 0.6

For evaluating the coverage quality of the sensing area of a node, we divide the sensing area of a node into several regions and regard the coverage quality of the central point in each region as a representative of the coverage quality of the region. This is a numerical approximation; a larger number of such regions results in a better approximation. As sensor nodes have low computational capacity, there is a tradeoff between the number of such regions and the precision of the resulting coverage quality of the sensing area of a node. In our simulation study, we set this number to 12. For evaluating the improvement of coverage quality in ICQA, we sum up the improvements at each region-center as the total improvement. 5.1 Number of Groups Formed by MIND and ICQA We set the total in-network node number to different values and let the networks perform MIND and ICQA. For each n, simulations run with several random seeds to generate different networks, and the results are averaged. Figure 2 shows the group numbers found in networks with different n's.

Figure 2: The number of groups found by MIND and ICQA

We can see that MIND always outperforms ICQA in terms of the number of groups formed. Obviously, the larger the number of groups that can be formed, the more the redundancy of each group is exploited. This output shows that an approach like MIND, which aims to maximize ι of the resulting topology, can exploit redundancy well. As an example, in the case that n = 1500, the results of five networks are listed in Table 2.

Table 2: The grouping results of five networks with n = 1500
Net  MIND Group Number  ICQA Group Number  MIND Average ι  ICQA Average ι
1    34                 31                 0.145514        0.031702
2    33                 30                 0.145036        0.036649
3    33                 31                 0.156483        0.033578
4    32                 31                 0.152671        0.029030
5    33                 32                 0.146560        0.033109

The difference between the average ι of the groups in each network shows that the groups formed by MIND result in topologies with larger ι's. It demonstrates that ι is a good indicator of redundancy in different networks.
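The "Average ι" columns in Table 2 appear to be Equation (1) evaluated over the member locations of each formed group and then averaged across groups. Assuming that reading, and reusing the iota sketch from Section 2, the value could be computed as follows (function and argument names are ours):

def average_group_iota(groups, locations):
    # groups: list of lists of node ids; locations: dict id -> (x, y).
    values = [iota([locations[i] for i in g]) for g in groups if len(g) >= 2]
    return sum(values) / len(values) if values else 0.0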
5.2 The Performance of the Resulting Groups Although MIND forms more groups than ICQA does, which implies a longer lifetime of the networks, another important consideration is how the groups formed by MIND and ICQA perform. We let 10000 events randomly occur in the network area except the margin. We compare how many events happen at locations where the coverage quality is less than the requirement s = 0.6 when each resulting group is conducting surveillance work (we call the number of such events the failure number of a group). Figure 3 shows the average failure numbers of the resulting groups when different node numbers are set.

Figure 3: The failure numbers of MIND and ICQA

We can see that the groups formed by MIND outperform those formed by ICQA because the groups formed by MIND result in lower failure numbers. This further demonstrates that MIND is a good approach for the sensor-grouping problem. 6. CONCLUSION This paper proposes ι, a novel index for the evaluation of point-distribution. ι is the minimum distance between each pair of points normalized by the average distance between each pair of points. We find that a set of points that achieves a maximum value of ι results in a honeycomb structure. We propose that ι can serve as a good index to evaluate the distribution of the points, which can be employed in coverage-related problems in wireless sensor networks (WSNs). We set out to validate this idea by applying ι to a sensor-grouping problem. We formulate a general sensor-grouping problem for WSNs and provide a general sensing model. With an algorithm called Maximizing-ι Node-Deduction (MIND), we show that maximizing ι at sensor nodes is a good approach to solving this problem. Simulation results verify that MIND outperforms a greedy algorithm that we design to exploit sensor redundancy, in terms of both the number and the performance of the groups formed. This demonstrates a good application of employing ι in coverage-related problems. 7. ACKNOWLEDGEMENT The work described in this paper was substantially supported by two grants, RGC Project No. CUHK4205/04E and UGC Project No. AoE/E-01/99, of the Hong Kong Special Administrative Region, China. 8. REFERENCES [1] I. Akyildiz, W. Su, Y. Sankarasubramaniam, and E. Cayirci. A survey on wireless sensor networks. IEEE Communications Magazine, 40(8):102-114, 2002. [2] F. Aurenhammer. Voronoi diagrams - a survey of a fundamental geometric data structure. ACM Computing Surveys, 23(2):345-405, September 1991. [3] N. Bulusu, J. Heidemann, and D. Estrin. GPS-less low-cost outdoor localization for very small devices. IEEE Personal Communications, October 2000. [4] M. Cardei and D.-Z. Du. Improving wireless sensor network lifetime through power aware organization. ACM Wireless Networks, 11(3), May 2005. [5] M. Cardei, D. MacCallum, X. Cheng, M. Min, X. Jia, D. Li, and D.-Z. Du. Wireless sensor networks with energy efficient organization. Journal of Interconnection Networks, 3(3-4), December 2002. [6] M. Cardei and J. Wu. Coverage in wireless sensor networks. In Handbook of Sensor Networks (eds. M. Ilyas and I. Magboub), CRC Press, 2004. [7] X. Chen and M. R. Lyu. A sensibility-based sleeping configuration protocol for dependable wireless sensor networks. CSE Technical Report, The Chinese University of Hong Kong, 2005. [8] R. Jain, W. Hawe, and D. Chiu. A quantitative measure of fairness and discrimination for resource allocation in shared computer systems.
Technical Report DEC-TR-301, September 1984. [9] S. S. Kulkarni and L. Wang. MNP: Multihop network reprogramming service for sensor networks. In Proc. of the 25th International Conference on Distributed Computing Systems (ICDCS), June 2005. [10] B. Liu and D. Towsley. A study on the coverage of large-scale sensor networks. In Proc. of the 1st IEEE International Conference on Mobile Ad-hoc and Sensor Systems, Fort Lauderdale, FL, October 2004. [11] A. Mainwaring, J. Polastre, R. Szewczyk, D. Culler, and J. Anderson. Wireless sensor networks for habitat monitoring. In Proc. of the ACM International Workshop on Wireless Sensor Networks and Applications, 2002. [12] S. Megerian, F. Koushanfar, G. Qu, G. Veltri, and M. Potkonjak. Exposure in wireless sensor networks: theory and practical solutions. Wireless Networks, 8, 2002. [13] S. Slijepcevic and M. Potkonjak. Power efficient organization of wireless sensor networks. In Proc. of the IEEE International Conference on Communications (ICC), volume 2, Helsinki, Finland, June 2001. [14] D. Tian and N. D. Georganas. A node scheduling scheme for energy conservation in large wireless sensor networks. Wireless Communications and Mobile Computing, 3:272-290, May 2003. [15] X. Wang, G. Xing, Y. Zhang, C. Lu, R. Pless, and C. Gill. Integrated coverage and connectivity configuration in wireless sensor networks. In Proc. of the 1st ACM International Conference on Embedded Networked Sensor Systems (SenSys), Los Angeles, CA, November 2003. [16] G. Xing, C. Lu, R. Pless, and J. A. O'Sullivan. Co-Grid: an efficient coverage maintenance protocol for distributed sensor networks. In Proc. of the 3rd International Symposium on Information Processing in Sensor Networks (IPSN), Berkeley, CA, April 2004. [17] T. Yan, T. He, and J. A. Stankovic. Differentiated surveillance for sensor networks. In Proc. of the 1st ACM International Conference on Embedded Networked Sensor Systems (SenSys), Los Angeles, CA, November 2003. [18] F. Ye, G. Zhong, J. Cheng, S. Lu, and L. Zhang. PEAS: A robust energy conserving protocol for long-lived sensor networks. In Proc. of the 23rd International Conference on Distributed Computing Systems (ICDCS), Providence, Rhode Island, May 2003. [19] Y. Zhou, H. Yang, and M. R. Lyu. A point-distribution index and its application in coverage-related problems. CSE Technical Report, The Chinese University of Hong Kong, 2006.
A Point-Distribution Index and Its Application to Sensor-Grouping in Wireless Sensor Networks ABSTRACT We propose t, a novel index for evaluation of point-distribution. t is the minimum distance between each pair of points normalized by the average distance between each pair of points. We find that a set of points that achieve a maximum value of t result in a honeycomb structure. We propose that t can serve as a good index to evaluate the distribution of the points, which can be employed in coverage-related problems in wireless sensor networks (WSNs). To validate this idea, we formulate a general sensorgrouping problem for WSNs and provide a general sensing model. We show that locally maximizing t at sensor nodes is a good approach to solve this problem with an algorithm called Maximizingt Node-Deduction (MIND). Simulation results verify that MIND outperforms a greedy algorithm that exploits sensor-redundancy we design. This demonstrates a good application of employing t in coverage-related problems for WSNs. 1. INTRODUCTION A wireless sensor network (WSN) consists of a large number of in-situ battery-powered sensor nodes. A WSN can collect the data about physical phenomena of interest [1]. There are many potential applications of WSNs, including environmental monitoring and surveillance, etc. [1] [11]. In many application scenarios, WSNs are employed to conduct surveillance tasks in adverse, or even worse, in hostile working environments. One major problem caused is that sensor nodes are subjected to failures. Therefore, fault tolerance of a WSN is critical. One way to achieve fault tolerance is that a WSN should contain a large number of redundant nodes in order to tolerate node failures. It is vital to provide a mechanism that redundant nodes can be working in sleeping mode (i.e., major power-consuming units such as the transceiver of a redundant sensor node can be shut off) to save energy, and thus to prolong the network lifetime. Redundancy should be exploited as much as possible for the set of sensors that are currently taking charge in the surveillance work of the network area [6]. We find that the minimum distance between each pair of points normalized by the average distance between each pair of points serves as a good index to evaluate the distribution of the points. We call this index, denoted by t, the normalized minimum distance. If points are moveable, we find that maximizing t results in a honeycomb structure. The honeycomb structure poses that the coverage efficiency is the best if each point represents a sensor node that is providing surveillance work. Employing t in coverage-related problems is thus deemed promising. This enlightens us that maximizing t is a good approach to select a set of sensors that are currently taking charge in the surveillance work of the network area. To explore the effectiveness of employing t in coverage-related problems, we formulate a sensorgrouping problem for high-redundancy WSNs. An algorithm called Maximizing-t Node-Deduction (MIND) is proposed in which redundant sensor nodes are removed to obtain a large t for each set of sensors that are currently taking charge in the surveillance work of the network area. We also introduce another greedy solution called Incremental Coverage Quality Algorithm (ICQA) for this problem, which serves as a benchmark to evaluate MIND. The main contribution of this paper is twofold. First, we introduce a novel index t for evaluation of point-distribution. 
We show that maximizing t of a WSN results in low redundancy of the network. Second, we formulate a general sensor-grouping problem for WSNs and provide a general sensing model. With the MIND algorithm we show that locally maximizing t among each sensor node and its neighbors is a good approach to solve this problem. This demonstrates a good application of employing t in coveragerelated problems. The rest of the paper is organized as follows. In Section 2, we introduce our point-distribution index t. We survey related work and formulate a sensor-grouping problem together with a general sensing model in Section 3. Section 4 investigates the application of t in this grouping problem. We propose MIND for this problem and introduce ICQA as a benchmark. In Section 5, we present our simulation results in which MIND and ICQA are compared. Section 6 provides conclusion remarks. 2. THE NORMALIZED MINIMUM DISTANCE ι: A POINT-DISTRIBUTION INDEX Suppose there are n points in a Euclidean space Ω. The coordinates of these points are denoted by xi (i = 1,..., n). It may be necessary to evaluate how the distribution of these points is. There are many metrics to achieve this goal. For example, the Mean Square Error from these points to their mean value can be employed to calculate how these points deviate from their mean (i.e., their central). In resource-sharing evaluation, the Global Fairness Index (GFI) is often employed to measure how even the resource distributes among these points [8], when xi represents the amount of resource that belong to point i. In WSNs, GFI is usually used to calculate how even the remaining energy of sensor nodes is. When n is larger than 2 and the points do not all overlap (That points all overlap means xi = xj, ∀ i, j = 1, 2,..., n). We propose a novel index called the normalized minimum distance, namely ι, to evaluate the distribution of the points. ι is the minimum distance where | | xi − xj | | denotes the Euclidean distance between point i and point j in Ω, the min (·) function calculates the minimum distance between each pair of points, and µ is the average distance between each pair of points, which is: ι measures how well the points separate from one another. Obviously, ι is in interval [0, 1]. ι is equal to 1 if and only if n is equal to 3 and these three points forms an equilateral triangle. ι is equal to zero if any two points overlap. ι is a very interesting value of a set of points. If we consider each xi (∀ i = 1,..., n) is a variable in Ω, how these n points would look like if ι is maximized? An algorithm is implemented to generate the topology in which ι is locally maximized (The algorithm can be found in [19]). We consider a 2-dimensional space. We select n = 10, 20, 30,..., 100 and perform this algorithm. In order to avoid that the algorithm converge to local optimum, we select different random seeds to generate the initial points for 1000 time and obtain the best one that results in the largest ι when the algorithm converges. Figure 1 demonstrates what the resulting topology looks like when n = 20 as an example. Suppose each point represents a sensor node. If the sensor coverage model is the Boolean coverage model [15] [17] [18] [14] and the coverage radius of each node is the same. 
It is exciting to see that this topology results in lowest redundancy because the Vonoroi diagram [2] formed by these nodes (A Vonoroi diagram formed by a set of nodes partitions a space into a set of convex polygons such that points inside a polygon are closest to only one particular node) is a honeycomb-like structure1. This enlightens us that ι may be employed to solve problems related to sensor-coverage of an area. In WSNs, it is desirable Figure 1: Node Number = 20, ι = 0.435376 that the active sensor nodes that are performing surveillance task should separate from one another. Under the constraint that the sensing area should be covered, the more each node separates from the others, the less the redundancy of the coverage is. ι indicates the quality of such separation. It should be useful for approaches on sensor-coverage related problems. In our following discussions, we will show the effectiveness of employing ι in sensor-grouping problem. 3. THE SENSOR-GROUPING PROBLEM In many application scenarios, to achieve fault tolerance, a WSN contains a large number of redundant nodes in order to tolerate node failures. A node sleeping-working schedule scheme is therefore highly desired to exploit the redundancy of working sensors and let as many nodes as possible sleep. Much work in the literature is on this issue [6]. Yan et al introduced a differentiated service in which a sensor node finds out its responsible working duration with cooperation of its neighbors to ensure the coverage of sampled points [17]. Ye et al developed PEAS in which sensor nodes wake up randomly over time, probe their neighboring nodes, and decide whether they should begin to take charge of surveillance work [18]. Xing et al exploited a probabilistic distributed detection model with a protocol called Coordinating Grid (Co-Grid) [16]. Wang et al designed an approach called Coverage Configuration Protocol (CCP) which introduced the notion that the coverage degree of intersection-points of the neighboring nodes' sensing-perimeters indicates the coverage of a convex region [15]. In our recent work [7], we also provided a sleeping configuration protocol, namely SSCP, in which sleeping eligibility of a sensor node is determined by a local Voronoi diagram. SSCP can provide different levels of redundancy to maintain different requirements of fault tolerance. The major feature of the aforementioned protocols is that they employ online distributed and localized algorithms in which a sensor node determines its sleeping eligibility and/or sleeping time based on the coverage requirement of its sensing area with some information provided by its neighbors. Another major approach for sensor node sleeping-working scheduling issue is to group sensor nodes. Sensor nodes in a network are divided into several disjoint sets. Each set of sensor nodes are able to maintain the required area surveillance work. The sensor nodes are scheduled according to which set they belong to. These sets work successively. Only one set of sensor nodes work at any time. We call the issue sensor-grouping problem. The major advantage of this approach is that it avoids the overhead caused by the processes of coordination of sensor nodes to make decision on whether a sensor node is a candidate to sleep or work and how long it should sleep or work. Such processes should be performed from time to time during the lifetime of a network in many online distributed and localized algorithms. 
The large overhead caused by such processes is the main drawback of the online distributed and localized algorithms. On the contrary, roughly speaking, this approach groups sensor nodes in one time and schedules when each set of sensor nodes should be on duty. It does not require frequent decision-making on working/sleeping eligibility 2. In [13] by Slijepcevic et al, the sensing area is divided into regions. Sensor nodes are grouped with the most-constrained leastconstraining algorithm. It is a greedy algorithm in which the priority of selecting a given sensor is determined by how many uncovered regions this sensor covers and the redundancy caused by this sensor. In [5] by Cardei et al, disjoint sensor sets are modeled as disjoint dominating sets. Although maximum dominating sets computation is NP-complete, the authors proposed a graphcoloring based algorithm. Cardei et al also studied similar problem in the domain of covering target points in [4]. The NP-completeness of the problem is proved and a heuristic that computes the sets are proposed. These algorithms are centralized solutions of sensorgrouping problem. However, global information (e.g., the location of each in-network sensor node) of a large scale WSN is also very expensive to obtained online. Also it is usually infeasible to obtain such information before sensor nodes are deployed. For example, sensor nodes are usually deployed in a random manner and the location of each in-network sensor node is determined only after a node is deployed. The solution of sensor-grouping problem should only base on locally obtainable information of a sensor node. That is to say, nodes should determine which group they should join in a fully distributed way. Here locally obtainable information refers to a node's local information and the information that can be directly obtained from its adjacent nodes, i.e., nodes within its communication range. In Subsection 3.1, we provide a general problem formulation of the sensor-grouping problem. Distributed-solution requirement is formulated in this problem. It is followed by discussion in Subsection 3.2 on a general sensing model, which serves as a given condition of the sensor-grouping problem formulation. To facilitate our discussions, the notations in our following discussions are described as follows. • n: The number in-network sensor nodes. • S (j) (j = 1, 2,..., m): The jth set of sensor nodes where m is the number of sets. • L (i) (i = 1, 2,..., n): The physical location of node i. • 0: The area monitored by the network: i.e., the sensing area of the network. • R: The sensing radius of a sensor node. We assume that a sensor node can only be responsible to monitor a circular area centered at the node with a radius equal to R. This is a usual assumption in work that addresses sensor-coverage related problems. We call this circular area the sensing area of a node. 3.1 Problem Formulation We assume that each sensor node can know its approximate physical location. The approximate location information is obtainable if each sensor node carries a GPS receiver or if some localization algorithms are employed (e.g., [3]). 2Note that if some nodes die, a re-grouping process might also be performed to exploit the remaining nodes in a set of sensor nodes. How to provide this mechanism is beyond the scope of this paper and yet to be explored. Problem 1. 
Given: • The set of each sensor node i's sensing neighbors N (i) and the location of each member in N (i); • A sensing model which quantitatively describes how a point P in area 0 is covered by sensor nodes that are responsible to monitor this point. We call this quantity the coverage quality of P. • The coverage quality requirement in 0, denoted by s. When the coverage of a point is larger than this threshold, we say this point is covered. For each sensor node i, make a decision on which group S (j) it shouldjoin so that: • Area 0 can be covered by sensor nodes in each set S (j) • m, the number of sets S (j) is maximized. ■ In this formulation, we call sensor nodes within a circular area centered at a sensor node i with a radius equal to 2 · R the sensing neighbors of node i. This is because sensors nodes in this area, together with node i, may be cooperative to ensure the coverage of a point inside node i's sensing area. We assume that the communication range of a sensor node is larger than 2 · R, which is also a general assumption in work that addresses sensor-coverage related problems. That is to say, the first given condition in Problem 1 is the information that can be obtained directly from a node's adjacent nodes. It is therefore locally obtainable information. The last two given conditions in this problem formulation can be programmed into a node before it is deployed or by a node-programming protocol (e.g., [9]) during network runtime. Therefore, the given conditions can all be easily obtained by a sensor-grouping scheme with fully distributed implementation. We reify this problem with a realistic sensing model in next subsection. 3.2 A General Sensing Model As WSNs are usually employed to monitor possible events in a given area, it is therefore a design requirement that an event occurring in the network area must/may be successfully detected by sensors. This issue is usually formulated as how to ensure that an event signal omitted in an arbitrary point in the network area can be detected by sensor nodes. Obviously, a sensing model is required to address this problem so that how a point in the network area is covered can be modeled and quantified. Thus the coverage quality of a WSN can be evaluated. Different applications of WSNs employ different types of sensors, which surely have widely different theoretical and physical characteristics. Therefore, to fulfill different application requirements, different sensing models should be constructed based on the characteristics of the sensors employed. A simple theoretical sensing model is the Boolean sensing model [15] [18] [17] [14]. Boolean sensing model assumes that a sensor node can always detect an event occurring in its responsible sensing area. But most sensors detect events according to the signal strength sensed. Event signals usually fade in relation to the physical distance between an event and the sensor. The larger the distance, the weaker the event signals that can be sensed by the sensor, which results in a reduction of the probability that the event can be successfully detected by the sensor. As in WSNs, event signals are usually electromagnetic, acoustic, or photic signals, they fade exponentially with the increasing of their transmit distance. 
Specifically, the signal strength E (d) of an event that is received by a sensor node satisfies: where d is the physical distance from the event to the sensor node; α is related to the signal strength omitted by the event; and β is signal fading factor which is typically a positive number larger than or equal to 2. Usually, α and β are considered as constants. Based on this notion, to be more reasonable, researchers propose collaborative sensing model to capture application requirements: Area coverage can be maintained by a set of collaborative sensor nodes: For a point with physical location L, the point is considered covered by the collaboration of i sensors (denoted by k1,..., ki) if and only if the following two equations holds [7] [10] [12]. C (L) is regarded as the coverage quality of location L in the network area [7] [10] [12]. However, we notice that defining the sensibility as the sum of the sensed signal strength by each collaborative sensor implies a very special application: Applications must employ the sum of the signal strength to achieve decision-making. To capture generally realistic application requirement, we modify the definition described in Equation (5). The model we adopt in this paper is described in details as follows. We consider the probability P (L, kj) that an event located at L can be detected by sensor kj is related to the signal strength sensed by kj. Formally, where γ is a constant and δ = γα is a constant too. ~ normalizes the distance to a proper scale and the "+1" item is to avoid infinite value of P (L, kj). The probability that an event located at L can be detected by any collaborative sensors that satisfied Equation (4) is: As the detection probability P (L) reasonably determines how an event occurring at location L can be detected by the networks, it is a good measure of the coverage quality of location L in a WSN. Specifically, Equation (5) is modified to: To sum it up, we consider a point at location L is covered if Equation (4) and (8) hold. 4. MAXIMIZING-ι NODE-DEDUCTION ALGORITHM FOR SENSOR-GROUPING PROBLEM Before we process to introduce algorithms to solve the sensor grouping problem, let us define the margin (denoted by θ) of an area φ monitored by the network as the band-like marginal area of φ and all the points on the outer perimeter of θ is ρ distance away from all the points on the inner perimeter of θ. ρ is called the margin length. In a practical network, sensor nodes are usually evenly deployed in the network area. Obviously, the number of sensor nodes that can sense an event occurring in the margin of the network is smaller than the number of sensor nodes that can sense an event occurring in other area of the network. Based on this consideration, in our algorithm design, we ensure the coverage quality of the network area except the margin. The information on φ and ρ is networkbased. Each in-network sensor node can be pre-programmed or on-line informed about φ and ρ, and thus calculate whether a point in its sensing area is in the margin or not. 4.1 Maximizing-ι Node-Deduction Algorithm The node-deduction process of our Maximizing-ι Node-Deduction Algorithm (MIND) is simple. A node i greedily maximizes ι of the sub-network composed by itself, its ungrouped sensing neighbors, and the neighbors that are in the same group of itself. Under the constraint that the coverage quality of its sensing area should be ensured, node i deletes nodes in this sub-network one by one. The candidate to be pruned satisfies that: • It is an ungrouped node. 
• The deletion of the node will not result in uncovered-points inside the sensing area of i. A candidate is deleted if the deletion of the candidate results in largest ι of the sub-network compared to the deletion of other candidates. This node-deduction process continues until no candidate can be found. Then all the ungrouped sensing neighbors that are not deleted are grouped into the same group of node i. We call the sensing neighbors that are in the same group of node i the group sensing neighbors of node i. We then call node i a finished node, meaning that it has finished the above procedure and the sensing area of the node is covered. Those nodes that have not yet finished this procedure are called unfinished nodes. The above procedure initiates at a random-selected node that is not in the margin. The node is grouped to the first group. It calculates the resulting group sensing neighbors of it based on the above procedure. It informs these group sensing neighbors that they are selected in the group. Then it hands over the above procedure to an unfinished group sensing neighbors that is the farthest from itself. This group sensing neighbor continues this procedure until no unfinished neighbor can be found. Then the first group is formed (Algorithmic description of this procedure can be found at [19]). After a group is formed, another random-selected ungrouped node begins to group itself to the second group and initiates the above procedure. In this way, groups are formed one by one. When a node that involves in this algorithm found out that the coverage quality if its sensing area, except what overlaps the network margin, cannot be ensured even if all its ungrouped sensing neighbors are grouped into the same group as itself, the algorithm stops. MIND is based on locally obtainable information of sensor nodes. It is a distributed algorithm that serves as an approximate solution of Problem 1. 4.2 Incremental Coverage Quality Algorithm: A Benchmark for MIND To evaluate the effectiveness of introducing ι in the sensor-group problem, another algorithm for sensor-group problem called Incremental Coverage Quality Algorithm (ICQA) is designed. Our aim is to evaluate how an idea, i.e., MIND, based on locally maximize t performs. In ICQA, a node-selecting process is as follows. A node i greedily selects an ungrouped sensing neighbor in the same group as itself one by one, and informs the neighbor it is selected in the group. The criterion is: • The selected neighbor is responsible to provide surveillance work for some uncovered parts of node i's sensing area. (i.e., the coverage quality requirement of the parts is not fulfilled if this neighbor is not selected.) • The selected neighbor results in highest improvement of the coverage quality of the neighbor's sensing area. The improvement of the coverage quality, mathematically, should be the integral of the the improvements of all points inside the neighbor's sensing area. A numerical approximation is employed to calculate this improvement. Details are presented in our simulation study. This node-selecting process continues until the sensing area of node i is entirely covered. In this way, node i's group sensing neighbors are found. The above procedure is handed over as what MIND does and new groups are thus formed one by one. And the condition that ICQA stops is the same as MIND. ICQA is also based on locally obtainable information of sensor nodes. ICQA is also a distributed algorithm that serves as an approximate solution of Problem 1. 5. 
SIMULATION RESULTS To evaluate the effectiveness of employing t in sensor-grouping problem, we build simulation surveillance networks. We employ MIND and ICQA to group the in-network sensor nodes. We compare the grouping results with respect to how many groups both algorithms find and how the performance of the resulting groups are. Detailed settings of the simulation networks are shown in Table 1. In simulation networks, sensor nodes are randomly deployed in a uniform manner in the network area. Table 1: The settings of the simulation networks For evaluating the coverage quality of the sensing area of a node, we divide the sensing area of a node into several regions and regard the coverage quality of the central point in each region as a representative of the coverage quality of the region. This is a numerical approximation. Larger number of such regions results in better approximation. As sensor nodes are with low computational capacity, there is a tradeoff between the number of such regions and the precision of the resulting coverage quality of the sensing area of a node. In our simulation study, we set this number 12. For evaluating the improvement of coverage quality in ICQA, we sum up all the improvements at each region-center as the total improvement. 5.1 Number of Groups Formed by MIND and ICQA We set the total in-network node number to different values and let the networks perform MIND and ICQA. For each n, simulations run with several random seeds to generate different networks. Results are averaged. Figure 2 shows the group numbers found in networks with different n's. Figure 2: The number of groups found by MIND and ICQA We can see that MIND always outperforms ICQA in terms of the number of groups formed. Obviously, the larger the number of groups can be formed, the more the redundancy of each group is exploited. This output shows that an approach like MIND that aim to maximize t of the resulting topology can exploits redundancy well. As an example, in case that n = 1500, the results of five networks are listed in Table 2. Table 2: The grouping results of five networks with n = 1500 The difference between the average t of the groups in each network shows that groups formed by MIND result in topologies with larger t's. It demonstrates that t is good indicator of redundancy in different networks. 5.2 The Performance of the Resulting Groups Although MIND forms more groups than ICQA does, which implies longer lifetime of the networks, another importance consideration is how these groups formed by MIND and ICQA perform. We let 10000 events randomly occur in the network area except the margin. We compare how many events happen at the locations where the quality is less than the requirement s = 0.6 when each resulting group is conducting surveillance work (We call the number of such events the failure number of group). Figure 3 shows the average failure numbers of the resulting groups when different node numbers are set. We can see that the groups formed by MIND outperform those formed by ICQA because the groups formed by MIND result in lower failure numbers. This further demonstrates that MIND is a good approach for sensor-grouping problem. Figure 3: The failure numbers of MIND and ICQA 6. CONCLUSION This paper proposes t, a novel index for evaluation of pointdistribution. t is the minimum distance between each pair of points normalized by the average distance between each pair of points. We find that a set of points that achieve a maximum value of t result in a honeycomb structure. 
We propose that t can serve as a good index to evaluate the distribution of the points, which can be employed in coverage-related problems in wireless sensor networks (WSNs). We set out to validate this idea by employing t to a sensorgrouping problem. We formulate a general sensor-grouping problem for WSNs and provide a general sensing model. With an algorithm called Maximizing-t Node-Deduction (MIND), we show that maximizing t at sensor nodes is a good approach to solve this problem. Simulation results verify that MIND outperforms a greedy algorithm that exploits sensor-redundancy we design in terms of the number and the performance of the groups formed. This demonstrates a good application of employing t in coverage-related problems.
A Point-Distribution Index and Its Application to Sensor-Grouping in Wireless Sensor Networks ABSTRACT We propose t, a novel index for evaluation of point-distribution. t is the minimum distance between each pair of points normalized by the average distance between each pair of points. We find that a set of points that achieve a maximum value of t result in a honeycomb structure. We propose that t can serve as a good index to evaluate the distribution of the points, which can be employed in coverage-related problems in wireless sensor networks (WSNs). To validate this idea, we formulate a general sensorgrouping problem for WSNs and provide a general sensing model. We show that locally maximizing t at sensor nodes is a good approach to solve this problem with an algorithm called Maximizingt Node-Deduction (MIND). Simulation results verify that MIND outperforms a greedy algorithm that exploits sensor-redundancy we design. This demonstrates a good application of employing t in coverage-related problems for WSNs. 1. INTRODUCTION A wireless sensor network (WSN) consists of a large number of in-situ battery-powered sensor nodes. A WSN can collect the data about physical phenomena of interest [1]. There are many potential applications of WSNs, including environmental monitoring and surveillance, etc. [1] [11]. In many application scenarios, WSNs are employed to conduct surveillance tasks in adverse, or even worse, in hostile working environments. One major problem caused is that sensor nodes are subjected to failures. Therefore, fault tolerance of a WSN is critical. One way to achieve fault tolerance is that a WSN should contain a large number of redundant nodes in order to tolerate node failures. It is vital to provide a mechanism that redundant nodes can be working in sleeping mode (i.e., major power-consuming units such as the transceiver of a redundant sensor node can be shut off) to save energy, and thus to prolong the network lifetime. Redundancy should be exploited as much as possible for the set of sensors that are currently taking charge in the surveillance work of the network area [6]. We find that the minimum distance between each pair of points normalized by the average distance between each pair of points serves as a good index to evaluate the distribution of the points. We call this index, denoted by t, the normalized minimum distance. If points are moveable, we find that maximizing t results in a honeycomb structure. The honeycomb structure poses that the coverage efficiency is the best if each point represents a sensor node that is providing surveillance work. Employing t in coverage-related problems is thus deemed promising. This enlightens us that maximizing t is a good approach to select a set of sensors that are currently taking charge in the surveillance work of the network area. To explore the effectiveness of employing t in coverage-related problems, we formulate a sensorgrouping problem for high-redundancy WSNs. An algorithm called Maximizing-t Node-Deduction (MIND) is proposed in which redundant sensor nodes are removed to obtain a large t for each set of sensors that are currently taking charge in the surveillance work of the network area. We also introduce another greedy solution called Incremental Coverage Quality Algorithm (ICQA) for this problem, which serves as a benchmark to evaluate MIND. The main contribution of this paper is twofold. First, we introduce a novel index t for evaluation of point-distribution. 
We show that maximizing t of a WSN results in low redundancy of the network. Second, we formulate a general sensor-grouping problem for WSNs and provide a general sensing model. With the MIND algorithm, we show that locally maximizing t among each sensor node and its neighbors is a good approach to solving this problem. This demonstrates a useful application of t in coverage-related problems. The rest of the paper is organized as follows. In Section 2, we introduce our point-distribution index t. We survey related work and formulate a sensor-grouping problem together with a general sensing model in Section 3. Section 4 investigates the application of t in this grouping problem. We propose MIND for this problem and introduce ICQA as a benchmark. In Section 5, we present our simulation results in which MIND and ICQA are compared. Section 6 provides concluding remarks. 2. THE NORMALIZED MINIMUM DISTANCE ι: A POINT-DISTRIBUTION INDEX 3. THE SENSOR-GROUPING PROBLEM 3.1 Problem Formulation 3.2 A General Sensing Model 4. MAXIMIZING-ι NODE-DEDUCTION ALGORITHM FOR SENSOR-GROUPING PROBLEM 4.1 Maximizing-ι Node-Deduction Algorithm 4.2 Incremental Coverage Quality Algorithm: A Benchmark for MIND 5. SIMULATION RESULTS 5.1 Number of Groups Formed by MIND and ICQA 5.2 The Performance of the Resulting Groups 6. CONCLUSION This paper proposes t, a novel index for evaluating point distributions. t is the minimum distance between any pair of points, normalized by the average distance over all pairs of points. We find that a set of points that achieves the maximum value of t forms a honeycomb structure. We propose that t can serve as a good index for evaluating the distribution of points, and that it can be employed in coverage-related problems in wireless sensor networks (WSNs). We validate this idea by applying t to a sensor-grouping problem. We formulate a general sensor-grouping problem for WSNs and provide a general sensing model. With an algorithm called Maximizing-t Node-Deduction (MIND), we show that maximizing t at sensor nodes is a good approach to solving this problem. Simulation results verify that MIND outperforms a greedy algorithm we design that exploits sensor redundancy, both in the number and in the performance of the groups formed. This demonstrates a useful application of t in coverage-related problems.
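The MIND algorithm itself is only outlined at this level of the text, so the following is a rough, hypothetical Python sketch of the node-deduction idea: repeatedly drop the node whose removal yields the largest index value for the remaining set, as long as a caller-supplied coverage predicate still holds. The function names, the stopping rule, and the is_still_covered predicate are assumptions, not the paper's definitions; index can be any point-distribution measure, for example the normalized minimum distance sketched earlier.

def greedy_node_deduction(nodes, is_still_covered, index):
    # Illustrative sketch only, not the paper's MIND algorithm.
    # nodes: list of (x, y) sensor positions; is_still_covered: predicate on a
    # candidate group; index: point-distribution measure to maximize (e.g. t).
    group = list(nodes)
    while len(group) > 2:
        best_value, best_drop = index(group), None
        for i in range(len(group)):
            candidate = group[:i] + group[i + 1:]
            if is_still_covered(candidate) and index(candidate) > best_value:
                best_value, best_drop = index(candidate), i
        if best_drop is None:
            break  # no single removal improves the index while keeping coverage
        del group[best_drop]
    return group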
A Point-Distribution Index and Its Application to Sensor-Grouping in Wireless Sensor Networks ABSTRACT We propose t, a novel index for evaluating point distributions. t is the minimum distance between any pair of points, normalized by the average distance over all pairs of points. We find that a set of points that achieves the maximum value of t forms a honeycomb structure. We propose that t can serve as a good index for evaluating the distribution of points, and that it can be employed in coverage-related problems in wireless sensor networks (WSNs). To validate this idea, we formulate a general sensor-grouping problem for WSNs and provide a general sensing model. We show that locally maximizing t at sensor nodes, with an algorithm called Maximizing-t Node-Deduction (MIND), is a good approach to solving this problem. Simulation results verify that MIND outperforms a greedy algorithm we design that exploits sensor redundancy. This demonstrates a useful application of t in coverage-related problems for WSNs. 1. INTRODUCTION A wireless sensor network (WSN) consists of a large number of in-situ, battery-powered sensor nodes. In many application scenarios, WSNs are employed to conduct surveillance tasks in adverse, or even hostile, working environments. One major consequence is that sensor nodes are subject to failures. Redundancy should be exploited as much as possible for the set of sensors currently in charge of the surveillance work of the network area [6]. We find that the minimum distance between any pair of points, normalized by the average distance over all pairs of points, serves as a good index for evaluating the distribution of the points. We call this index, denoted by t, the normalized minimum distance. If the points are movable, we find that maximizing t results in a honeycomb structure. The honeycomb structure implies that coverage efficiency is best when each point represents a sensor node that is performing surveillance work. Employing t in coverage-related problems is thus deemed promising. This suggests that maximizing t is a good approach for selecting the set of sensors currently in charge of the surveillance work of the network area. To explore the effectiveness of employing t in coverage-related problems, we formulate a sensor-grouping problem for high-redundancy WSNs. We also introduce another greedy solution for this problem, called the Incremental Coverage Quality Algorithm (ICQA), which serves as a benchmark for evaluating MIND. The main contribution of this paper is twofold. First, we introduce a novel index t for evaluating point distributions. We show that maximizing t of a WSN results in low redundancy of the network. Second, we formulate a general sensor-grouping problem for WSNs and provide a general sensing model. With the MIND algorithm, we show that locally maximizing t among each sensor node and its neighbors is a good approach to solving this problem. This demonstrates a useful application of t in coverage-related problems. The rest of the paper is organized as follows. In Section 2, we introduce our point-distribution index t. We survey related work and formulate a sensor-grouping problem together with a general sensing model in Section 3. Section 4 investigates the application of t in this grouping problem. We propose MIND for this problem and introduce ICQA as a benchmark. In Section 5, we present our simulation results in which MIND and ICQA are compared. Section 6 provides concluding remarks. 6. 
CONCLUSION This paper proposes t, a novel index for evaluating point distributions. t is the minimum distance between any pair of points, normalized by the average distance over all pairs of points. We find that a set of points that achieves the maximum value of t forms a honeycomb structure. We propose that t can serve as a good index for evaluating the distribution of points, and that it can be employed in coverage-related problems in wireless sensor networks (WSNs). We validate this idea by applying t to a sensor-grouping problem. We formulate a general sensor-grouping problem for WSNs and provide a general sensing model. With an algorithm called Maximizing-t Node-Deduction (MIND), we show that maximizing t at sensor nodes is a good approach to solving this problem. This demonstrates a useful application of t in coverage-related problems.
J-69
Robust Incentive Techniques for Peer-to-Peer Networks
Lack of cooperation (free riding) is one of the key problems that confront today's P2P systems. What makes this problem particularly difficult is the unique set of challenges that P2P systems pose: large populations, high turnover, asymmetry of interest, collusion, zero-cost identities, and traitors. To tackle these challenges we model the P2P system using the Generalized Prisoner's Dilemma (GPD), and propose the Reciprocative decision function as the basis of a family of incentive techniques. These techniques are fully distributed and include: discriminating server selection, maxflow-based subjective reputation, and adaptive stranger policies. Through simulation, we show that these techniques can drive a system of strategic users to nearly optimal levels of cooperation.
[ "incent", "p2p system", "collus", "reciproc decis function", "reput", "adapt stranger polici", "selfinterest user", "incent for cooper", "game-theoret approach", "maxflow-base algorithm", "reciproc peer", "mutual cooper", "asymmetr payoff", "generos", "paramet nomin valu", "whitewash", "stranger adapt", "stranger defect", "peer-to-peer", "free-ride", "cheap pseudonym", "prison dilemma" ]
[ "P", "P", "P", "P", "P", "P", "M", "R", "U", "U", "M", "M", "U", "U", "U", "U", "R", "M", "U", "U", "U", "R" ]
Robust Incentive Techniques for Peer-to-Peer Networks Michal Feldman1 mfeldman@sims.berkeley.edu Kevin Lai2 klai@hp.com Ion Stoica3 istoica@cs.berkeley.edu John Chuang1 chuang@sims.berkeley.edu 1 School of Information Management and Systems U.C. Berkeley 2 HP Labs 3 Computer Science Division U.C. Berkeley ABSTRACT Lack of cooperation (free riding) is one of the key problems that confronts today``s P2P systems. What makes this problem particularly difficult is the unique set of challenges that P2P systems pose: large populations, high turnover, asymmetry of interest, collusion, zero-cost identities, and traitors. To tackle these challenges we model the P2P system using the Generalized Prisoner``s Dilemma (GPD), and propose the Reciprocative decision function as the basis of a family of incentives techniques. These techniques are fully distributed and include: discriminating server selection, maxflowbased subjective reputation, and adaptive stranger policies. Through simulation, we show that these techniques can drive a system of strategic users to nearly optimal levels of cooperation. Categories and Subject Descriptors C.2.4 [Computer-Communication Networks]: Distributed Systems; J.4 [Social And Behavioral Sciences]: Economics General Terms Design, Economics 1. INTRODUCTION Many peer-to-peer (P2P) systems rely on cooperation among selfinterested users. For example, in a file-sharing system, overall download latency and failure rate increase when users do not share their resources [3]. In a wireless ad-hoc network, overall packet latency and loss rate increase when nodes refuse to forward packets on behalf of others [26]. Further examples are file preservation [25], discussion boards [17], online auctions [16], and overlay routing [6]. In many of these systems, users have natural disincentives to cooperate because cooperation consumes their own resources and may degrade their own performance. As a result, each user``s attempt to maximize her own utility effectively lowers the overall A BC Figure 1: Example of asymmetry of interest. A wants service from B, B wants service form C, and C wants service from A. utility of the system. Avoiding this tragedy of the commons [18] requires incentives for cooperation. We adopt a game-theoretic approach in addressing this problem. In particular, we use a prisoners'' dilemma model to capture the essential tension between individual and social utility, asymmetric payoff matrices to allow asymmetric transactions between peers, and a learning-based [14] population dynamic model to specify the behavior of individual peers, which can be changed continuously. While social dilemmas have been studied extensively, P2P applications impose a unique set of challenges, including: • Large populations and high turnover: A file sharing system such as Gnutella and KaZaa can exceed 100, 000 simultaneous users, and nodes can have an average life-time of the order of minutes [33]. • Asymmetry of interest: Asymmetric transactions of P2P systems create the possibility for asymmetry of interest. In the example in Figure 1, A wants service from B, B wants service from C, and C wants service from A. • Zero-cost identity: Many P2P systems allow peers to continuously switch identities (i.e., whitewash). Strategies that work well in traditional prisoners'' dilemma games such as Tit-for-Tat [4] will not fare well in the P2P context. 
Therefore, we propose a family of scalable and robust incentive techniques, based upon a novel Reciprocative decision function, to address these challenges and provide different tradeoffs: • Discriminating Server Selection: Cooperation requires familiarity between entities either directly or indirectly. However, the large populations and high turnover of P2P systems makes it less likely that repeat interactions will occur with a familiar entity. We show that by having each peer keep a 102 private history of the actions of other peers toward her, and using discriminating server selection, the Reciprocative decision function can scale to large populations and moderate levels of turnover. • Shared History: Scaling to higher turnover and mitigating asymmetry of interest requires shared history. Consider the example in Figure 1. If everyone provides service, then the system operates optimally. However, if everyone keeps only private history, no one will provide service because B does not know that A has served C, etc.. We show that with shared history, B knows that A served C and consequently will serve A. This results in a higher level of cooperation than with private history. The cost of shared history is a distributed infrastructure (e.g., distributed hash table-based storage) to store the history. • Maxflow-based Subjective Reputation: Shared history creates the possibility for collusion. In the example in Figure 1, C can falsely claim that A served him, thus deceiving B into providing service. We show that a maxflow-based algorithm that computes reputation subjectively promotes cooperation despite collusion among 1/3 of the population. The basic idea is that B would only believe C if C had already provided service to B. The cost of the maxflow algorithm is its O(V 3 ) running time, where V is the number of nodes in the system. To eliminate this cost, we have developed a constant mean running time variation, which trades effectiveness for complexity of computation. We show that the maxflow-based algorithm scales better than private history in the presence of colluders without the centralized trust required in previous work [9] [20]. • Adaptive Stranger Policy: Zero-cost identities allows noncooperating peers to escape the consequences of not cooperating and eventually destroy cooperation in the system if not stopped. We show that if Reciprocative peers treat strangers (peers with no history) using a policy that adapts to the behavior of previous strangers, peers have little incentive to whitewash and whitewashing can be nearly eliminated from the system. The adaptive stranger policy does this without requiring centralized allocation of identities, an entry fee for newcomers, or rate-limiting [13] [9] [25]. • Short-term History: History also creates the possibility that a previously well-behaved peer with a good reputation will turn traitor and use his good reputation to exploit other peers. The peer could be making a strategic decision or someone may have hijacked her identity (e.g., by compromising her host). Long-term history exacerbates this problem by allowing peers with many previous transactions to exploit that history for many new transactions. We show that short-term history prevents traitors from disrupting cooperation. The rest of the paper is organized as follows. We describe the model in Section 2 and the reciprocative decision function in Section 3. We then proceed to the incentive techniques in Section 4. 
In Section 4.1, we describe the challenges of large populations and high turnover and show the effectiveness of discriminating server selection and shared history. In Section 4.2, we describe collusion and demonstrate how subjective reputation mitigates it. In Section 4.3, we present the problem of zero-cost identities and show how an adaptive stranger policy promotes persistent identities. In Section 4.4, we show how traitors disrupt cooperation and how short-term history deals with them. We discuss related work in Section 5 and conclude in Section 6. 2. MODEL AND ASSUMPTIONS In this section, we present our assumptions about P2P systems and their users, and introduce a model that aims to capture the behavior of users in a P2P system. 2.1 Assumptions We assume a P2P system in which users are strategic, i.e., they act rationally to maximize their benefit. However, to capture some of the real-life unpredictability in the behavior of users, we allow users to randomly change their behavior with a low probability (see Section 2.4). For simplicity, we assume a homogeneous system in which all peers issue and satisfy requests at the same rate. A peer can satisfy any request, and, unless otherwise specified, peers request service uniformly at random from the population.1 . Finally, we assume that all transactions incur the same cost to all servers and provide the same benefit to all clients. We assume that users can pollute shared history with false recommendations (Section 4.2), switch identities at zero-cost (Section 4.3), and spoof other users (Section 4.4). We do not assume any centralized trust or centralized infrastructure. 2.2 Model To aid the development and study of the incentive schemes, in this section we present a model of the users'' behaviors. In particular, we model the benefits and costs of P2P interactions (the game) and population dynamics caused by mutation, learning, and turnover. Our model is designed to have the following properties that characterize a large set of P2P systems: • Social Dilemma: Universal cooperation should result in optimal overall utility, but individuals who exploit the cooperation of others while not cooperating themselves (i.e., defecting) should benefit more than users who do cooperate. • Asymmetric Transactions: A peer may want service from another peer while not currently being able to provide the service that the second peer wants. Transactions should be able to have asymmetric payoffs. • Untraceable Defections: A peer should not be able to determine the identity of peers who have defected on her. This models the difficulty or expense of determining that a peer could have provided a service, but didn``t. For example, in the Gnutella file sharing system [21], a peer may simply ignore queries despite possessing the desired file, thus preventing the querying peer from identifying the defecting peer. • Dynamic Population: Peers should be able to change their behavior and enter or leave the system independently and continuously. 1The exception is discussed in Section 4.1.1 103 Cooperate Defect Cooperate DefectClient Server sc RR / sc ST / sc PP / sc TS / Figure 2: Payoff matrix for the Generalized Prisoner``s Dilemma. T, R, P, and S stand for temptation, reward, punishment and sucker, respectively. 2.3 Generalized Prisoner``s Dilemma The Prisoner``s Dilemma, developed by Flood, Dresher, and Tucker in 1950 [22] is a non-cooperative repeated game satisfying the social dilemma requirement. Each game consists of two players who can defect or cooperate. 
Depending how each acts, the players receive a payoff. The players use a strategy to decide how to act. Unfortunately, existing work either uses a specific asymmetric payoff matrix or only gives the general form for a symmetric one [4]. Instead, we use the Generalized Prisoner``s Dilemma (GPD), which specifies the general form for an asymmetric payoff matrix that preserves the social dilemma. In the GPD, one player is the client and one player is the server in each game, and it is only the decision of the server that is meaningful for determining the outome of the transaction. A player can be a client in one game and a server in another. The client and server receive the payoff from a generalized payoff matrix (Figure 2). Rc, Sc, Tc, and Pc are the client``s payoff and Rs, Ss, Ts, and Ps are the server``s payoff. A GPD payoff matrix must have the following properties to create a social dilemma: 1. Mutual cooperation leads to higher payoffs than mutual defection (Rs + Rc > Ps + Pc). 2. Mutual cooperation leads to higher payoffs than one player suckering the other (Rs + Rc > Sc + Ts and Rs + Rc > Ss + Tc). 3. Defection dominates cooperation (at least weakly) at the individual level for the entity who decides whether to cooperate or defect: (Ts ≥ Rs and Ps ≥ Ss and (Ts > Rs or Ps > Ss)) The last set of inequalities assume that clients do not incur a cost regardless of whether they cooperate or defect, and therefore clients always cooperate. These properties correspond to similar properties of the classic Prisoner``s Dilemma and allow any form of asymmetric transaction while still creating a social dilemma. Furthermore, one or more of the four possible actions (client cooperate and defect, and server cooperate and defect) can be untraceable. If one player makes an untraceable action, the other player does not know the identity of the first player. For example, to model a P2P application like file sharing or overlay routing, we use the specific payoff matrix values shown in Figure 3. This satisfies the inequalities specified above, where only the server can choose between cooperating and defecting. In addition, for this particular payoff matrix, clients are unable to trace server defections. This is the payoff matrix that we use in our simulation results. Request Service Don't Request 7 / -1 0 / 0 0 / 0 0 / 0 Provide Service Ignore Request Client Server Figure 3: The payoff matrix for an application like P2P file sharing or overlay routing. 2.4 Population Dynamics A characteristic of P2P systems is that peers change their behavior and enter or leave the system independently and continuously. Several studies [4] [28] of repeated Prisoner``s Dilemma games use an evolutionary model [19] [34] of population dynamics. An evolutionary model is not suitable for P2P systems because it only specifies the global behavior and all changes occur at discrete times. For example, it may specify that a population of 5 100% Cooperate players and 5 100% Defect players evolves into a population with 3 and 7 players, respectively. It does not specify which specific players switched. Furthermore, all the switching occurs at the end of a generation instead of continuously, like in a real P2P system. As a result, evolutionary population dynamics do not accurately model turnover, traitors, and strangers. In our model, entities take independent and continuous actions that change the composition of the population. Time consists of rounds. In each round, every player plays one game as a client and one game as a server. 
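The three social-dilemma properties of the GPD payoff matrix listed earlier in this section can be checked mechanically. Below is a small, hypothetical Python check (the function and argument names are not from the paper) that encodes those inequalities and verifies them for the file-sharing matrix of Figure 3, where a served request pays the client 7, costs the server 1, and every other outcome pays both players 0.

def is_social_dilemma(Rc, Rs, Sc, Ss, Tc, Ts, Pc, Ps):
    # R = reward, S = sucker, T = temptation, P = punishment;
    # subscript c is the client's payoff, s is the server's.
    mutual_coop_beats_mutual_defect = Rs + Rc > Ps + Pc
    mutual_coop_beats_suckering = (Rs + Rc > Sc + Ts) and (Rs + Rc > Ss + Tc)
    server_defection_dominates = Ts >= Rs and Ps >= Ss and (Ts > Rs or Ps > Ss)
    return (mutual_coop_beats_mutual_defect
            and mutual_coop_beats_suckering
            and server_defection_dominates)

# File-sharing payoff matrix of Figure 3.
print(is_social_dilemma(Rc=7, Rs=-1, Sc=0, Ss=0, Tc=0, Ts=0, Pc=0, Ps=0))  # True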
At the end of a round, a player may: 1) mutate 2) learn, 3) turnover, or 4) stay the same. If a player mutates, she switches to a randomly picked strategy. If she learns, she switches to a strategy that she believes will produce a higher score (described in more detail below). If she maintains her identity after switching strategies, then she is referred to as a traitor. If a player suffers turnover, she leaves the system and is replaced with a newcomer who uses the same strategy as the exiting player. To learn, a player collects local information about the performance of different strategies. This information consists of both her personal observations of strategy performance and the observations of those players she interacts with. This models users communicating out-of-band about how strategies perform. Let s be the running average of the performance of a player``s current strategy per round and age be the number of rounds she has been using the strategy. A strategy``s rating is RunningAverage(s ∗ age) RunningAverage(age) . We use the age and compute the running average before the ratio to prevent young samples (which are more likely to be outliers) from skewing the rating. At the end of a round, a player switches to highest rated strategy with a probability proportional to the difference in score between her current strategy and the highest rated strategy. 104 3. RECIPROCATIVE DECISION FUNCTION In this section, we present the new decision function, Reciprocative, that is the basis for our incentive techniques. A decision function maps from a history of a player``s actions to a decision whether to cooperate with or defect on that player. A strategy consists of a decision function, private or shared history, a server selection mechanism, and a stranger policy. Our approach to incentives is to design strategies which maximize both individual and social benefit. Strategic users will choose to use such strategies and thereby drive the system to high levels of cooperation. Two examples of simple decision functions are 100% Cooperate and 100% Defect. 100% Cooperate models a naive user who does not yet realize that she is being exploited. 100% Defect models a greedy user who is intent on exploiting the system. In the absence of incentive techniques, 100% Defect users will quickly dominate the 100% Cooperate users and destroy cooperation in the system. Our requirements for a decision function are that (1) it can use shared and subjective history, (2) it can deal with untraceable defections, and (3) it is robust against different patterns of defection. Previous decision functions such as Tit-for-Tat[4] and Image[28] (see Section 5) do not satisfy these criteria. For example, Tit-for-Tat and Image base their decisions on both cooperations and defections, therefore cannot deal with untraceable defections . In this section and the remaining sections we demonstrate how the Reciprocativebased strategies satisfy all of the requirements stated above. The probability that a Reciprocative player cooperates with a peer is a function of its normalized generosity. Generosity measures the benefit an entity has provided relative to the benefit it has consumed. This is important because entities which consume more services than they provide, even if they provide many services, will cause cooperation to collapse. For some entity i, let pi and ci be the services i has provided and consumed, respectively. 
Entity i``s generosity is simply the ratio of the service it provides to the service it consumes: g(i) = pi/ci. (1) One possibility is to cooperate with a probability equal to the generosity. Although this is effective in some cases, in other cases, a Reciprocative player may consume more than she provides (e.g., when initially using the Stranger Defect policy in 4.3). This will cause Reciprocative players to defect on each other. To prevent this situation, a Reciprocative player uses its own generosity as a measuring stick to judge its peer``s generosity. Normalized generosity measures entity i``s generosity relative to entity j``s generosity. More concretely, entity i``s normalized generosity as perceived by entity j is gj(i) = g(i)/g(j). (2) In the remainder of this section, we describe our simulation framework, and use it to demonstrate the benefits of the baseline Reciprocative decision function. Parameter Nominal value Section Population Size 100 2.4 Run Time 1000 rounds 2.4 Payoff Matrix File Sharing 2.3 Ratio using 100% Cooperate 1/3 3 Ratio using 100% Defect 1/3 3 Ratio using Reciprocative 1/3 3 Mutation Probability 0.0 2.4 Learning Probability 0.05 2.4 Turnover Probability 0.0001 2.4 Hit Rate 1.0 4.1.1 Table 1: Default simulation parameters. 3.1 Simulation Framework Our simulator implements the model described in Section 2. We use the asymmetric file sharing payoff matrix (Figure 3) with untraceable defections because it models transactions in many P2P systems like file-sharing and packet forwarding in ad-hoc and overlay networks. Our simulation study is composed of different scenarios reflecting the challenges of various non-cooperative behaviors. Table 1 presents the nominal parameter values used in our simulation. The Ratio using rows refer to the initial ratio of the total population using a particular strategy. In each scenario we vary the value range of a specific parameter to reflect a particular situation or attack. We then vary the exact properties of the Reciprocative strategy to defend against that situation or attack. 3.2 Baseline Results 0 20 40 60 80 100 120 0 200 400 600 800 1000 Population Time (a) Total Population: 60 0 20 40 60 80 100 120 0 200 400 600 800 1000 Time (b) Total Population: 120 Defector Cooperator Recip. Private Figure 4: The evolution of strategy populations over time. Time the number of elapsed rounds. Population is the number of players using a strategy. In this section, we present the dynamics of the game for the basic scenario presented in Table 1 to familiarize the reader and set a baseline for more complicated scenarios. Figures 4(a) (60 players) and (b) (120 players) show players switching to higher scoring strategies over time in two separate runs of the simulator. Each point in the graph represents the number of players using a particular strategy at one point in time. Figures 5(a) and (b) show the corresponding mean overall score per round. This measures the degree of cooperation in the system: 6 is the maximum possible (achieved when everybody cooperates) and 0 is the minimum (achieved when everybody defects). From the file sharing payoff matrix, a net of 6 means everyone is able to download a file and a 0 means that no one 105 0 1 2 3 4 5 6 0 200 400 600 800 1000 MeanOverallScore/Round Time (a) Total Population: 60 0 1 2 3 4 5 6 0 200 400 600 800 1000 Time (b) Total Population: 120 Figure 5: The mean overall per round score over time. is able to do so. We use this metric in all later results to evaluate our incentive techniques. 
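As a small illustration of the generosity measures in equations (1) and (2), the following hypothetical Python sketch computes an entity's generosity, its generosity as perceived by a peer, and a resulting cooperation probability for a Reciprocative player. The names, and the clamping of the probability to [0, 1], are assumptions; the text only says the probability is a function of normalized generosity.

def generosity(provided, consumed):
    # g(i) = p_i / c_i; assumes the entity has consumed at least one service.
    return provided / consumed

def normalized_generosity(p_i, c_i, p_j, c_j):
    # g_j(i) = g(i) / g(j): entity i's generosity as perceived by entity j.
    return generosity(p_i, c_i) / generosity(p_j, c_j)

def cooperation_probability(p_i, c_i, p_j, c_j):
    # One plausible choice: normalized generosity clamped to [0, 1].
    return max(0.0, min(1.0, normalized_generosity(p_i, c_i, p_j, c_j)))

# Peer i has provided 4 and consumed 8 services; peer j has provided 6 and
# consumed 6, so j cooperates with i with probability 0.5.
print(cooperation_probability(4, 8, 6, 6))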
Figure 5(a) shows that the Reciprocative strategy using private history causes a system of 60 players to converge to a cooperation level of 3.7, but drops to 0.5 for 120 players. One would expect the 60 player system to reach the optimal level of cooperation (6) because all the defectors are eliminated from the system. It does not because of asymmetry of interest. For example, suppose player B is using Reciprocative with private history. Player A may happen to ask for service from player B twice in succession without providing service to player B in the interim. Player B does not know of the service player A has provided to others, so player B will reject service to player A, even though player A is cooperative. We discuss solutions to asymmetry of interest and the failure of Reciprocative in the 120 player system in Section 4.1. 4. RECIPROCATIVE-BASED INCENTIVE TECHNIQUES In this section we present our incentives techniques and evaluate their behavior by simulation. To make the exposition clear we group our techniques by the challenges they address: large populations and high turnover (Section 4.1), collusions (Section 4.2), zero-cost identities (Section 4.3), and traitors (Section 4.4). 4.1 Large Populations and High Turnover The large populations and high turnover of P2P systems makes it less likely that repeat interactions will occur with a familiar entity. Under these conditions, basing decisions only on private history (records about interactions the peer has been directly involved in) is not effective. In addition, private history does not deal well with asymmetry of interest. For example, if player B has cooperated with others but not with player A himself in the past, player A has no indication of player B``s generosity, thus may unduly defect on him. We propose two mechanisms to alleviate the problem of few repeat transactions: server-selection and shared history. 4.1.1 Server Selection A natural way to increase the probability of interacting with familiar peers is by discriminating server selection. However, the asymmetry of transactions challenges selection mechanisms. Unlike in the prisoner``s dilemma payoff matrix, where players can benefit one another within a single transaction, transactions in GPD are asymmetric. As a result, a player who selects her donor for the second time without contributing to her in the interim may face a defection. In addition, due to untraceability of defections, it is impossible to maintain blacklists to avoid interactions with known defectors. In order to deal with asymmetric transactions, every player holds (fixed size) lists of both past donors and past recipients, and selects a server from one of these lists at random with equal probabilities. This way, users approach their past recipients and give them a chance to reciprocate. In scenarios with selective users we omit the complete availability assumption to prevent players from being clustered into a lot of very small groups; thus, we assume that every player can perform the requested service with probability p (for the results presented in this section, p = .3). In addition, in order to avoid bias in favor of the selective players, all players (including the non-discriminative ones) select servers for games. Figure 6 demonstrates the effectiveness of the proposed selection mechanism in scenarios with large population sizes. 
We fix the initial ratio of Reciprocative in the population (33%) while varying the population size (between 24 to 1000) (Notice that while in Figures 4(a) and (b), the data points demonstrates the evolution of the system over time, each data point in this figure is the result of an entire simulation for a specific scenario). The figure shows that the Reciprocative decision function using private history in conjunction with selective behavior can scale to large populations. In Figure 7 we fix the population size and vary the turnover rate. It demonstrates that while selective behavior is effective for low turnover rates, as turnover gets higher, selective behavior does not scale. This occurs because selection is only effective as long as players from the past stay alive for long enough such that they can be selected for future games. 4.1.2 Shared history In order to mitigate asymmetry of interest and scale to higher turnover rate, there is a need in shared history. Shared history means that every peer keeps records about all of the interactions that occur in the system, regardless of whether he was directly involved in them or not. It allows players to leverage off of the experiences of others in cases of few repeat transactions. It only requires that someone has interacted with a particular player for the entire population to observe it, thus scales better to large populations and high turnovers, and also tolerates asymmetry of interest. Some examples of shared history schemes are [20] [23] [28]. Figure 7 shows the effectiveness of shared history under high turnover rates. In this figure, we fix the population size and vary the turnover rate. While selective players with private history can only tolerate a moderate turnover, shared history scales to turnovers of up to approximately 0.1. This means that 10% of the players leave the system at the end of each round. In Figure 6 we fix the turnover and vary the population size. It shows that shared history causes the system to converge to optimal cooperation and performance, regardless of the size of the population. These results show that shared history addresses all three challenges of large populations, high turnover, and asymmetry of transactions. Nevertheless, shared history has two disadvantages. First, 106 0 1 2 3 4 5 6 0 50 100 150 200 250 300 350 400 MeanOverallScore/Round NumPlayers Shared Non-Sel Private Non-Sel Private Selective Figure 6: Private vs. Shared History as a function of population size. 0 1 2 3 4 5 6 0.0001 0.001 0.01 0.1 MeanOverallScore/Round Turnover Shared Non-Sel Private Non-Sel Private Selective Figure 7: Performance of selection mechanism under turnover. The x-axis is the turnover rate. The y-axis is the mean overall per round score. while a decentralized implementation of private history is straightforward, implementation of shared-history requires communication overhead or centralization. A decentralized shared history can be implemented, for example, on top of a DHT, using a peer-to-peer storage system [36] or by disseminating information to other entities in a similar way to routing protocols. Second, and more fundamental, shared history is vulnerable to collusion. In the next section we propose a mechanism that addresses this problem. 4.2 Collusion and Other Shared History Attacks 4.2.1 Collusion While shared history is scalable, it is vulnerable to collusion. Collusion can be either positive (e.g. defecting entities claim that other defecting entities cooperated with them) or negative (e.g. 
entities claim that other cooperative entities defected on them). Collusion subverts any strategy in which everyone in the system agrees on the reputation of a player (objective reputation). An example of objective reputation is to use the Reciprocative decision function with shared history to count the total number of cooperations a player has given to and received from all entities in the system; another example is the Image strategy [28]. The effect of collusion is magnified in systems with zero-cost identities, where users can create fake identities that report false statements. Instead, to deal with collusion, entities can compute reputation subjectively, where player A weighs player B``s opinions based on how much player A trusts player B. Our subjective algorithm is based on maxflow [24] [32]. Maxflow is a graph theoretic problem, which given a directed graph with weighted edges asks what is the greatest rate at which material can be shipped from the source to the target without violating any capacity constraints. For example, in figure 8 each edge is labeled with the amount of traffic that can travel on it. The maxflow algorithm computes the maximum amount of traffic that can go from the source (s) to the target (t) without violating the constraints. In this example, even though there is a loop of high capacity edges, the maxflow between the source and the target is only 2 (the numbers in brackets represent the actual flow on each edge in the solution). 100(0) 1(1) 5(1) s t 10(1) 100(1) 1(1) 100(1) 20(0) Figure 8: Each edge in the graph is labeled with its capacity and the actual flow it carries in brackets. The maxflow between the source and the target in the graph is 2. C C CCCC 100100100100 100 00 0 0 20 20 0 0 A B Figure 9: This graph illustrates the robustness of maxflow in the presence of colluders who report bogus high reputation values. We apply the maxflow algorithm by constructing a graph whose vertices are entities and the edges are the services that entities have received from each other. This information can be stored using the same methods as the shared history. A maxflow is the greatest level of reputation the source can give to the sink without violating reputation capacity constraints. As a result, nodes who dishonestly report high reputation values will not be able to subvert the reputation system. Figure 9 illustrates a scenario in which all the colluders (labeled with C) report high reputation values for each other. When node A computes the subjective reputation of B using the maxflow algorithm, it will not be affected by the local false reputation values, rather the maxflow in this case will be 0. This is because no service has been received from any of the colluders. 107 In our algorithm, the benefit that entity i has received (indirectly) from entity j is the maxflow from j to i. Conversely, the benefit that entity i has provided indirectly to j is the maxflow from i to j. The subjective reputation of entity j as perceived by i is: min maxflow(j to i) maxflow(i to j) , 1 (3) 0 1 2 3 4 5 6 0 100 200 300 400 500 600 700 800 900 1000 MeanOverallScore/Round Population Shared Private Subjective Figure 10: Subjective shared history compared to objective shared history and private history in the presence of colluders. Algorithm 1 CONSTANTTIMEMAXFLOW Bound the mean running time of Maxflow to a constant. 
method CTMaxflow(self, src, dst) 1: self.surplus ← self.surplus + self.increment {Use the running mean as a prediction.} 2: if random() > (0.5∗self.surplus/self.mean iterations) then 3: return None {Not enough surplus to run.} 4: end if {Get the flow and number of iterations used from the maxflow alg.} 5: flow, iterations ← Maxflow(self.G, src, dst) 6: self.surplus ← self.surplus − iterations {Keep a running mean of the number of iterations used.} 7: self.mean iterations ← self.α ∗ self.mean iterations + (1 − self.α) ∗ iterations 8: return flow The cost of maxflow is its long running time. The standard preflowpush maxflow algorithm has a worst case running time of O(V 3 ). Instead, we use Algorithm 1 which has a constant mean running time, but sometimes returns no flow even though one exists. The essential idea is to bound the mean number of nodes examined during the maxflow computation. This bounds the overhead, but also bounds the effectiveness. Despite this, the results below show that a maxflow-based Reciprocative decision function scales to higher populations than one using private history. Figure 10 compares the effectiveness of subjective reputation to objective reputation in the presence of colluders. In these scenarios, defectors collude by claiming that other colluders that they encounter gave them 100 cooperations for that encounter. Also, the parameters for Algorithm 1 are set as follows: increment = 100, α = 0.9. As in previous sections, Reciprocative with private history results in cooperation up to a point, beyond which it fails. The difference here is that objective shared history fails for all population sizes. This is because the Reciprocative players cooperate with the colluders because of their high reputations. However, subjective history can reach high levels of cooperation regardless of colluders. This is because there are no high weight paths in the cooperation graph from colluders to any non-colluders, so the maxflow from a colluder to any non-colluder is 0. Therefore, a subjective Reciprocative player will conclude that that colluder has not provided any service to her and will reject service to the colluder. Thus, the maxflow algorithm enables Reciprocative to maintain the scalability of shared history without being vulnerable to collusion or requiring centralized trust (e.g., trusted peers). Since we bound the running time of the maxflow algorithm, cooperation decreases as the population size increases, but the key point is that the subjective Reciprocative decision function scales to higher populations than one using private history. This advantage only increases over time as CPU power increases and more cycles can be devoted to running the maxflow algorithm (by increasing the increment parameter). Despite the robustness of the maxflow algorithm to the simple form of collusion described previously, it still has vulnerabilities to more sophisticated attacks. One is for an entity (the mole) to provide service and then lie positively about other colluders. The other colluders can then exploit their reputation to receive service. However, the effectiveness of this attack relies on the amount of service that the mole provides. Since the mole is paying all of the cost of providing service and receiving none of the benefit, she has a strong incentive to stop colluding and try another strategy. This forces the colluders to use mechanisms to maintain cooperation within their group, which may drive the cost of collusion to exceed the benefit. 
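Equation (3) and Algorithm 1 above are compact, so the following self-contained Python sketch may help: it computes maxflow over the service graph with a textbook Edmonds-Karp routine (not the authors' preflow-push implementation, and without Algorithm 1's constant-mean-running-time bound) and then evaluates the subjective reputation ratio. The dictionary-based graph encoding and the handling of a zero denominator are assumptions.

from collections import defaultdict, deque

def max_flow(capacity, source, sink):
    # Edmonds-Karp maximum flow; capacity maps (u, v) -> service u gave v.
    if source == sink:
        return 0
    residual, adj = defaultdict(int), defaultdict(set)
    for (u, v), c in capacity.items():
        residual[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)  # reverse edges for the residual graph
    flow = 0
    while True:
        parent, queue = {source: None}, deque([source])
        while queue and sink not in parent:  # BFS for an augmenting path
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and residual[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow
        path, v = [], sink
        while parent[v] is not None:  # walk the path back to the source
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[e] for e in path)
        for u, v in path:
            residual[(u, v)] -= bottleneck
            residual[(v, u)] += bottleneck
        flow += bottleneck

def subjective_reputation(capacity, i, j):
    # Equation (3): min(maxflow(j to i) / maxflow(i to j), 1). Treating a zero
    # denominator as reputation 1 (or 0 if no flow at all) is an assumption.
    received = max_flow(capacity, j, i)
    provided = max_flow(capacity, i, j)
    if provided == 0:
        return 1.0 if received > 0 else 0.0
    return min(received / provided, 1.0)

# A has served B, and colluders B and C report huge mutual service, but no
# service flows from C back to A, so C's reputation as seen by A is 0.
history = {("A", "B"): 3, ("B", "C"): 100, ("C", "B"): 100}
print(subjective_reputation(history, "A", "C"))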
4.2.2 False reports Another attack is for a defector to lie about receiving or providing service to another entity. There are four possibile actions that can be lied about: providing service, not providing service, receiving service, and not receiving service. Falsely claiming to receive service is the simple collusion attack described above. Falsely claiming not to have provided service provides no benefit to the attacker. Falsely claiming to have provided service or not to have received it allows an attacker to boost her own reputation and/or lower the reputation of another entity. An entity may want to lower another entity``s reputation in order to discourage others from selecting it and exclusively use its service. These false claims are clearly identifiable in the shared history as inconsistencies where one entity claims a transaction occurred and another claims it did not. To limit this attack, we modify the maxflow algorithm so that an entity always believes the entity that is closer to him in the flow graph. If both entities are equally distant, then the disputed edge in the flow is not critical to the evaluation and is ignored. This modification prevents those cases where the attacker is making false claims about an entity that is closer than her to the evaluating entity, which prevents her from boosting her own reputation. The remaining possibilities are for the attacker to falsely claim to have provided service to or not to have received it from a victim entity that is farther from the evalulator than her. In these cases, an attacker can only lower the reputation of the victim. The effectiveness of doing this is limited by the number of services provided and received by the attacker, which makes executing this attack expensive. 108 4.3 Zero-Cost Identities History assumes that entities maintain persistent identities. However, in most P2P systems, identities are zero-cost. This is desirable for network growth as it encourages newcomers to join the system. However, this also allows misbehaving users to escape the consequences of their actions by switching to new identities (i.e., whitewashing). Whitewashers can cause the system to collapse if they are not punished appropriately. Unfortunately, a player cannot tell if a stranger is a whitewasher or a legitimate newcomer. Always cooperating with strangers encourages newcomers to join, but at the same time encourages whitewashing behavior. Always defecting on strangers prevents whitewashing, but discourages newcomers from joining and may also initiate unfavorable cycles of defection. This tension suggests that any stranger policy that has a fixed probability of cooperating with strangers will fail by either being too stingy when most strangers are newcomers or too generous when most strangers are whitewashers. Our solution is the Stranger Adaptive stranger policy. The idea is to be generous to strangers when they are being generous and stingy when they are stingy. Let ps and cs be the number of services that strangers have provided and consumed, respectively. The probability that a player using Stranger Adaptive helps a stranger is ps/cs. However, we do not wish to keep these counts permanently (for reasons described in Section 4.4). Also, players may not know when strangers defect because defections are untraceable (as described in Section 2). Consequently, instead of keeping ps and cs, we assume that k = ps + cs, where k is a constant and we keep the running ratio r = ps/cs. 
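The closer-entity rule for disputed claims described above is only sketched in the text; one hypothetical way to realize it is to pre-filter the service graph before the subjective maxflow computation, keeping a disputed edge only when the peer asserting it is strictly closer (in hops) to the evaluator than the peer denying it. The (edge, claimant, denier) dispute encoding and the use of undirected hop distance are assumptions.

from collections import deque

def hop_distance(adjacency, start, target):
    # Breadth-first hop count in an undirected view of the service graph.
    seen, queue = {start: 0}, deque([start])
    while queue:
        u = queue.popleft()
        if u == target:
            return seen[u]
        for v in adjacency.get(u, ()):
            if v not in seen:
                seen[v] = seen[u] + 1
                queue.append(v)
    return float("inf")

def resolve_disputes(capacity, disputes, evaluator):
    # Drop each disputed edge unless its claimant is strictly closer to the
    # evaluator than the denier; ties are treated as non-critical and dropped.
    adjacency = {}
    for u, v in capacity:
        adjacency.setdefault(u, set()).add(v)
        adjacency.setdefault(v, set()).add(u)
    kept = dict(capacity)
    for edge, claimant, denier in disputes:
        d_claim = hop_distance(adjacency, evaluator, claimant)
        d_deny = hop_distance(adjacency, evaluator, denier)
        if not d_claim < d_deny:
            kept.pop(edge, None)
    return kept

# B claims it served D and D denies it; B is one hop from evaluator A while D
# is two hops away, so A believes B and the disputed edge is kept.
history = {("A", "B"): 2, ("B", "C"): 1, ("C", "D"): 1, ("B", "D"): 1}
print(resolve_disputes(history, [(("B", "D"), "B", "D")], evaluator="A"))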
When we need to increment ps or cs, we generate the current values of ps and cs from k and r: cs = k/(1 + r) ps = cs ∗ r We then compute the new r as follows: r = (ps + 1)/cs , if the stranger provided service r = ps/(cs + 1) , if the stranger consumed service This method allows us to keep a running ratio that reflects the recent generosity of strangers without knowing when strangers have defected. 0 1 2 3 4 5 6 0.0001 0.001 0.01 0.1 1 MeanOverallScore/Round Turnover Stranger Cooperate Stranger Defect Stranger Adaptive Figure 11: Different stranger policies for Reciprocative with shared history. The x-axis is the turnover rate on a log scale. The y-axis is the mean overall per round score. Figures 11 and 12 compare the effectiveness of the Reciprocative strategy using different policies toward strangers. Figure 11 0 1 2 3 4 5 6 0.0001 0.001 0.01 0.1 1 MeanOverallScore/Round Turnover Stranger Cooperate Stranger Defect Stranger Adaptive Figure 12: Different stranger policies for Reciprocative with private history. The x-axis is the turnover rate on a log scale. The y-axis is the mean overall per round score. compares different stranger policies for Reciprocative with shared history, while Figure 12 is with private history. In both figures, the players using the 100% Defect strategy change their identity (whitewash) after every transaction and are indistinguishable from legitimate newcomers. The Reciprocative players using the Stranger Cooperate policy completely fail to achieve cooperation. This stranger policy allows whitewashers to maximize their payoff and consequently provides a high incentive for users to switch to whitewashing. In contrast, Figure 11 shows that the Stranger Defect policy is effective with shared history. This is because whitewashers always appear to be strangers and therefore the Reciprocative players will always defect on them. This is consistent with previous work [13] showing that punishing strangers deals with whitewashers. However, Figure 12 shows that Stranger Defect is not effective with private history. This is because Reciprocative requires some initial cooperation to bootstrap. In the shared history case, a Reciprocative player can observe that another player has already cooperated with others. With private history, the Reciprocative player only knows about the other players'' actions toward her. Therefore, the initial defection dictated by the Stranger Defect policy will lead to later defections, which will prevent Reciprocative players from ever cooperating with each other. In other simulations not shown here, the Stranger Defect stranger policy fails even with shared history when there are no initial 100% Cooperate players. Figure 11 shows that with shared history, the Stranger Adaptive policy performs as well as Stranger Defect policy until the turnover rate is very high (10% of the population turning over after every transaction). In these scenarios, Stranger Adaptive is using k = 10 and each player keeps a private r. More importantly, it is significantly better than Stranger Defect policy with private history because it can bootstrap cooperation. Although the Stranger Defect policy is marginally more effective than Stranger Adaptive at very high rates of turnover, P2P systems are unlikely to operate there because other services (e.g., routing) also cannot tolerate very high turnover. We conclude that of the stranger policies that we have explored, Stranger Adaptive is the most effective. 
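The Stranger Adaptive bookkeeping described above can be written down compactly. Below is a minimal, hypothetical Python sketch of the running-ratio update (the class and method names are invented, and the initial ratio and the cap of the cooperation probability at 1 are assumptions the text does not specify); k = 10 matches the value used in the simulations.

import random

class StrangerAdaptive:
    # Cooperate with a stranger with probability r = p_s / c_s, where p_s and
    # c_s are reconstructed from the fixed pseudo-count k = p_s + c_s.
    def __init__(self, k=10, initial_ratio=1.0):
        self.k = k
        self.r = initial_ratio

    def cooperate_with_stranger(self):
        return random.random() < min(self.r, 1.0)

    def stranger_provided_service(self):
        cs = self.k / (1.0 + self.r)
        ps = cs * self.r
        self.r = (ps + 1.0) / cs

    def stranger_consumed_service(self):
        cs = self.k / (1.0 + self.r)
        ps = cs * self.r
        self.r = ps / (cs + 1.0)

# Generous strangers push the ratio (and cooperation) up; a run of
# whitewashers consuming service pushes it back down.
policy = StrangerAdaptive()
for _ in range(5):
    policy.stranger_provided_service()
print(round(policy.r, 2))
for _ in range(10):
    policy.stranger_consumed_service()
print(round(policy.r, 2))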
By using Stranger Adaptive, P2P systems with zero-cost identities and a sufficiently low turnover can sustain cooperation without a centralized allocation of identities. 109 4.4 Traitors Traitors are players who acquire high reputation scores by cooperating for a while, and then traitorously turn into defectors before leaving the system. They model both users who turn deliberately to gain a higher score and cooperators whose identities have been stolen and exploited by defectors. A strategy that maintains longterm history without discriminating between old and recent actions becomes highly vulnerable to exploitation by these traitors. The top two graphs in Figure 13 demonstrate the effect of traitors on cooperation in a system where players keep long-term history (never clear history). In these simulations, we run for 2000 rounds and allow cooperative players to keep their identities when switching to the 100% Defector strategy. We use the default values for the other parameters. Without traitors, the cooperative strategies thrive. With traitors, the cooperative strategies thrive until a cooperator turns traitor after 600 rounds. As this cooperator exploits her reputation to achieve a high score, other cooperative players notice this and follow suit via learning. Cooperation eventually collapses. On the other hand, if we maintain short-term history and/or discounting ancient history vis-a-vis recent history, traitors can be quickly detected, and the overall cooperation level stays high, as shown in the bottom two graphs in Figure 13. 0 20 40 60 80 100 1K 2K Long-TermHistory No Traitors Population 0 20 40 60 80 100 1K 2K Traitors Defector Cooperator Recip. Shared 0 20 40 60 80 100 1K 2K Short-TermHistory Time Population 0 20 40 60 80 100 1K 2K Time Figure 13: Keeping long-term vs. short-term history both with and without traitors. 5. RELATED WORK Previous work has examined the incentive problem as applied to societies in general and more recently to Internet applications and peer-to-peer systems in particular. A well-known phenomenon in this context is the tragedy of the commons [18] where resources are under-provisioned due to selfish users who free-ride on the system``s resources, and is especially common in large networks [29] [3]. The problem has been extensively studied adopting a game theoretic approach. The prisoners'' dilemma model provides a natural framework to study the effectiveness of different strategies in establishing cooperation among players. In a simulation environment with many repeated games, persistent identities, and no collusion, Axelrod [4] shows that the Tit-for-Tat strategy dominates. Our model assumes growth follows local learning rather than evolutionary dynamics [14], and also allows for more kinds of attacks. Nowak and Sigmund [28] introduce the Image strategy and demonstrate its ability to establish cooperation among players despite few repeat transactions by the employment of shared history. Players using Image cooperate with players whose global count of cooperations minus defections exceeds some threshold. As a result, an Image player is either vulnerable to partial defectors (if the threshold is set too low) or does not cooperate with other Image players (if the threshold is set too high). In recent years, researchers have used economic mechanism design theory to tackle the cooperation problem in Internet applications. Mechanism design is the inverse of game theory. 
It asks how to design a game in which the behavior of strategic players results in the socially desired outcome. Distributed Algorithmic Mechanism Design seeks solutions within this framework that are both fully distributed and computationally tractable [12]. [10] and [11] are examples of applying DAMD to BGP routing and multicast cost sharing. More recently, DAMD has been also studied in dynamic environments [38]. In this context, demonstrating the superiority of a cooperative strategy (as in the case of our work) is consistent with the objective of incentivizing the desired behavior among selfish players. The unique challenges imposed by peer-to-peer systems inspired additional body of work [5] [37], mainly in the context of packet forwarding in wireless ad-hoc routing [8] [27] [30] [35], and file sharing [15] [31]. Friedman and Resnick [13] consider the problem of zero-cost identities in online environments and find that in such systems punishing all newcomers is inevitable. Using a theoretical model, they demonstrate that such a system can converge to cooperation only for sufficiently low turnover rates, which our results confirm. [6] and [9] show that whitewashing and collusion can have dire consequences for peer-to-peer systems and are difficult to prevent in a fully decentralized system. Some commercial file sharing clients [1] [2] provide incentive mechanisms which are enforced by making it difficult for the user to modify the source code. These mechanisms can be circumvented by a skilled user or by a competing company releasing a compatible client without the incentive restrictions. Also, these mechanisms are still vulnerable to zero-cost identities and collusion. BitTorrent [7] uses Tit-for-Tat as a method for resource allocation, where a user``s upload rate dictates his download rate. 6. CONCLUSIONS In this paper we take a game theoretic approach to the problem of cooperation in peer-to-peer networks. Addressing the challenges imposed by P2P systems, including large populations, high turnover, asymmetry of interest and zero-cost identities, we propose a family of scalable and robust incentive techniques, based upon the Reciprocative decision function, to support cooperative behavior and improve overall system performance. We find that the adoption of shared history and discriminating server selection techniques can mitigate the challenge of few repeat transactions that arises due to large population size, high turnover and asymmetry of interest. Furthermore, cooperation can be established even in the presence of zero-cost identities through the use of an adaptive policy towards strangers. Finally, colluders and traitors can be kept in check via subjective reputations and short-term history, respectively. 110 7. ACKNOWLEDGMENTS We thank Mary Baker, T.J. Giuli, Petros Maniatis, the anonymous reviewer, and our shepherd, Margo Seltzer, for their useful comments that helped improve the paper. This work is supported in part by the National Science Foundation under ITR awards ANI-0085879 and ANI-0331659, and Career award ANI-0133811. Views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of NSF, or the U.S. government. 8. REFERENCES [1] Kazaa. http://www.kazaa.com. [2] Limewire. http://www.limewire.com. [3] ADAR, E., AND HUBERMAN, B. A. Free Riding on Gnutella. First Monday 5, 10 (October 2000). [4] AXELROD, R. The Evolution of Cooperation. Basic Books, 1984. 
[5] BURAGOHAIN, C., AGRAWAL, D., AND SURI, S. A Game-Theoretic Framework for Incentives in P2P Systems. In International Conference on Peer-to-Peer Computing (Sep 2003).
[6] CASTRO, M., DRUSCHEL, P., GANESH, A., ROWSTRON, A., AND WALLACH, D. S. Security for Structured Peer-to-Peer Overlay Networks. In Proceedings of Multimedia Computing and Networking 2002 (MMCN '02) (2002).
[7] COHEN, B. Incentives Build Robustness in BitTorrent. In 1st Workshop on Economics of Peer-to-Peer Systems (2003).
[8] CROWCROFT, J., GIBBENS, R., KELLY, F., AND ÖSTRING, S. Modeling Incentives for Collaboration in Mobile ad-hoc Networks. In Modeling and Optimization in Mobile, ad-hoc and Wireless Networks (2003).
[9] DOUCEUR, J. R. The Sybil Attack. In Electronic Proceedings of the International Workshop on Peer-to-Peer Systems (2002).
[10] FEIGENBAUM, J., PAPADIMITRIOU, C., SAMI, R., AND SHENKER, S. A BGP-based Mechanism for Lowest-Cost Routing. In Proceedings of the ACM Symposium on Principles of Distributed Computing (2002).
[11] FEIGENBAUM, J., PAPADIMITRIOU, C., AND SHENKER, S. Sharing the Cost of Multicast Transmissions. In Journal of Computer and System Sciences (2001), vol. 63, pp. 21-41.
[12] FEIGENBAUM, J., AND SHENKER, S. Distributed Algorithmic Mechanism Design: Recent Results and Future Directions. In Proceedings of the International Workshop on Discrete Algorithms and Methods for Mobile Computing and Communications (2002).
[13] FRIEDMAN, E., AND RESNICK, P. The Social Cost of Cheap Pseudonyms. Journal of Economics and Management Strategy 10, 2 (1998), 173-199.
[14] FUDENBERG, D., AND LEVINE, D. K. The Theory of Learning in Games. The MIT Press, 1999.
[15] GOLLE, P., LEYTON-BROWN, K., MIRONOV, I., AND LILLIBRIDGE, M. Incentives For Sharing in Peer-to-Peer Networks. In Proceedings of the 3rd ACM Conference on Electronic Commerce (October 2001).
[16] GROSS, B., AND ACQUISTI, A. Balances of Power on EBay: Peers or Unequals? In Workshop on Economics of Peer-to-Peer Networks (2003).
[17] GU, B., AND JARVENPAA, S. Are Contributions to P2P Technical Forums Private or Public Goods? - An Empirical Investigation. In 1st Workshop on Economics of Peer-to-Peer Systems (2003).
[18] HARDIN, G. The Tragedy of the Commons. Science 162 (1968), 1243-1248.
[19] HOFBAUER, J., AND SIGMUND, K. Evolutionary Games and Population Dynamics. Cambridge University Press, 1998.
[20] KAMVAR, S. D., SCHLOSSER, M. T., AND GARCIA-MOLINA, H. The EigenTrust Algorithm for Reputation Management in P2P Networks. In Proceedings of the Twelfth International World Wide Web Conference (May 2003).
[21] KAN, G. Peer-to-Peer: Harnessing the Power of Disruptive Technologies, 1st ed. O'Reilly & Associates, Inc., March 2001, ch. Gnutella, pp. 94-122.
[22] KUHN, S. Prisoner's Dilemma. In The Stanford Encyclopedia of Philosophy, Edward N. Zalta, Ed., Summer ed. 2003.
[23] LEE, S., SHERWOOD, R., AND BHATTACHARJEE, B. Cooperative Peer Groups in NICE. In Proceedings of the IEEE INFOCOM (2003).
[24] LEVIEN, R., AND AIKEN, A. Attack-Resistant Trust Metrics for Public Key Certification. In Proceedings of the USENIX Security Symposium (1998), pp. 229-242.
[25] MANIATIS, P., ROUSSOPOULOS, M., GIULI, T. J., ROSENTHAL, D. S. H., BAKER, M., AND MULIADI, Y. Preserving Peer Replicas by Rate-Limited Sampled Voting. In ACM Symposium on Operating Systems Principles (2003).
[26] MARTI, S., GIULI, T. J., LAI, K., AND BAKER, M. Mitigating Routing Misbehavior in Mobile ad-hoc Networks. In Proceedings of MobiCom (2000), pp. 255-265.
[27] MICHIARDI, P., AND MOLVA, R. A Game Theoretical Approach to Evaluate Cooperation Enforcement Mechanisms in Mobile ad-hoc Networks. In Modeling and Optimization in Mobile, ad-hoc and Wireless Networks (2003).
[28] NOWAK, M. A., AND SIGMUND, K. Evolution of Indirect Reciprocity by Image Scoring. Nature 393 (1998), 573-577.
[29] OLSON, M. The Logic of Collective Action: Public Goods and the Theory of Groups. Harvard University Press, 1971.
[30] RAGHAVAN, B., AND SNOEREN, A. Priority Forwarding in ad-hoc Networks with Self-Interested Parties. In Workshop on Economics of Peer-to-Peer Systems (June 2003).
[31] RANGANATHAN, K., RIPEANU, M., SARIN, A., AND FOSTER, I. To Share or Not to Share: An Analysis of Incentives to Contribute in Collaborative File Sharing Environments. In Workshop on Economics of Peer-to-Peer Systems (June 2003).
[32] REITER, M. K., AND STUBBLEBINE, S. G. Authentication Metric Analysis and Design. ACM Transactions on Information and System Security 2, 2 (1999), 138-158.
[33] SAROIU, S., GUMMADI, P. K., AND GRIBBLE, S. D. A Measurement Study of Peer-to-Peer File Sharing Systems. In Proceedings of Multimedia Computing and Networking 2002 (MMCN '02) (2002).
[34] SMITH, J. M. Evolution and the Theory of Games. Cambridge University Press, 1982.
[35] URPI, A., BONUCCELLI, M., AND GIORDANO, S. Modeling Cooperation in Mobile ad-hoc Networks: a Formal Description of Selfishness. In Modeling and Optimization in Mobile, ad-hoc and Wireless Networks (2003).
[36] VISHNUMURTHY, V., CHANDRAKUMAR, S., AND SIRER, E. G. KARMA: A Secure Economic Framework for P2P Resource Sharing. In Workshop on Economics of Peer-to-Peer Networks (2003).
[37] WANG, W., AND LI, B. To Play or to Control: A Game-based Control-Theoretic Approach to Peer-to-Peer Incentive Engineering. In International Workshop on Quality of Service (June 2003).
[38] WOODARD, C. J., AND PARKES, D. C. Strategyproof Mechanisms for ad-hoc Network Formation. In Workshop on Economics of Peer-to-Peer Systems (June 2003).
Robust Incentive Techniques for Peer-to-Peer Networks
Lack of cooperation (free riding) is one of the key problems that confront today's P2P systems. What makes this problem particularly difficult is the unique set of challenges that P2P systems pose: large populations, high turnover, asymmetry of interest, collusion, zero-cost identities, and traitors. To tackle these challenges we model the P2P system using the Generalized Prisoner's Dilemma (GPD), and propose the Reciprocative decision function as the basis of a family of incentive techniques. These techniques are fully distributed and include: discriminating server selection, maxflow-based subjective reputation, and adaptive stranger policies. Through simulation, we show that these techniques can drive a system of strategic users to nearly optimal levels of cooperation.

1. INTRODUCTION
Many peer-to-peer (P2P) systems rely on cooperation among self-interested users. For example, in a file-sharing system, overall download latency and failure rate increase when users do not share their resources [3]. In a wireless ad-hoc network, overall packet latency and loss rate increase when nodes refuse to forward packets on behalf of others [26]. Further examples are file preservation [25], discussion boards [17], online auctions [16], and overlay routing [6]. In many of these systems, users have natural disincentives to cooperate because cooperation consumes their own resources and may degrade their own performance. As a result, each user's attempt to maximize her own utility effectively lowers the overall utility of the system.

Figure 1: Example of asymmetry of interest. A wants service from B, B wants service from C, and C wants service from A.

Avoiding this "tragedy of the commons" [18] requires incentives for cooperation. We adopt a game-theoretic approach in addressing this problem. In particular, we use a prisoners' dilemma model to capture the essential tension between individual and social utility, asymmetric payoff matrices to allow asymmetric transactions between peers, and a learning-based [14] population dynamic model to specify the behavior of individual peers, which can be changed continuously. While social dilemmas have been studied extensively, P2P applications impose a unique set of challenges, including:
• Large populations and high turnover: A file sharing system such as Gnutella or KaZaa can exceed 100,000 simultaneous users, and nodes can have an average life-time of the order of minutes [33].
• Asymmetry of interest: Asymmetric transactions of P2P systems create the possibility for asymmetry of interest. In the example in Figure 1, A wants service from B, B wants service from C, and C wants service from A.
• Zero-cost identity: Many P2P systems allow peers to continuously switch identities (i.e., whitewash).
Strategies that work well in traditional prisoners' dilemma games such as Tit-for-Tat [4] will not fare well in the P2P context. Therefore, we propose a family of scalable and robust incentive techniques, based upon a novel Reciprocative decision function, to address these challenges and provide different tradeoffs:
• Discriminating Server Selection: Cooperation requires familiarity between entities either directly or indirectly. However, the large populations and high turnover of P2P systems make it less likely that repeat interactions will occur with a familiar entity.
We show that by having each peer keep a private history of the actions of other peers toward her, and using discriminating server selection, the Reciprocative decision function can scale to large populations and moderate levels of turnover.
• Shared History: Scaling to higher turnover and mitigating asymmetry of interest requires shared history. Consider the example in Figure 1. If everyone provides service, then the system operates optimally. However, if everyone keeps only private history, no one will provide service because B does not know that A has served C, etc. We show that with shared history, B knows that A served C and consequently will serve A. This results in a higher level of cooperation than with private history. The cost of shared history is a distributed infrastructure (e.g., distributed hash table-based storage) to store the history.
• Maxflow-based Subjective Reputation: Shared history creates the possibility for collusion. In the example in Figure 1, C can falsely claim that A served him, thus deceiving B into providing service. We show that a maxflow-based algorithm that computes reputation subjectively promotes cooperation despite collusion among 1/3 of the population. The basic idea is that B would only believe C if C had already provided service to B. The cost of the maxflow algorithm is its O(V^3) running time, where V is the number of nodes in the system. To eliminate this cost, we have developed a constant mean running time variation, which trades effectiveness for complexity of computation. We show that the maxflow-based algorithm scales better than private history in the presence of colluders without the centralized trust required in previous work [9] [20].
• Adaptive Stranger Policy: Zero-cost identities allow non-cooperating peers to escape the consequences of not cooperating and eventually destroy cooperation in the system if not stopped. We show that if Reciprocative peers treat strangers (peers with no history) using a policy that adapts to the behavior of previous strangers, peers have little incentive to whitewash and whitewashing can be nearly eliminated from the system. The adaptive stranger policy does this without requiring centralized allocation of identities, an entry fee for newcomers, or rate-limiting [13] [9] [25].
• Short-term History: History also creates the possibility that a previously well-behaved peer with a good reputation will turn traitor and use his good reputation to exploit other peers. The peer could be making a strategic decision or someone may have hijacked her identity (e.g., by compromising her host). Long-term history exacerbates this problem by allowing peers with many previous transactions to exploit that history for many new transactions. We show that short-term history prevents traitors from disrupting cooperation.
The rest of the paper is organized as follows. We describe the model in Section 2 and the Reciprocative decision function in Section 3. We then proceed to the incentive techniques in Section 4. In Section 4.1, we describe the challenges of large populations and high turnover and show the effectiveness of discriminating server selection and shared history. In Section 4.2, we describe collusion and demonstrate how subjective reputation mitigates it. In Section 4.3, we present the problem of zero-cost identities and show how an adaptive stranger policy promotes persistent identities. In Section 4.4, we show how traitors disrupt cooperation and how short-term history deals with them.
We discuss related work in Section 5 and conclude in Section 6.

2. MODEL AND ASSUMPTIONS
In this section, we present our assumptions about P2P systems and their users, and introduce a model that aims to capture the behavior of users in a P2P system.

2.1 Assumptions
We assume a P2P system in which users are strategic, i.e., they act rationally to maximize their benefit. However, to capture some of the real-life unpredictability in the behavior of users, we allow users to randomly change their behavior with a low probability (see Section 2.4). For simplicity, we assume a homogeneous system in which all peers issue and satisfy requests at the same rate. A peer can satisfy any request, and, unless otherwise specified, peers request service uniformly at random from the population. Finally, we assume that all transactions incur the same cost to all servers and provide the same benefit to all clients. We assume that users can pollute shared history with false recommendations (Section 4.2), switch identities at zero cost (Section 4.3), and spoof other users (Section 4.4). We do not assume any centralized trust or centralized infrastructure.

2.2 Model
To aid the development and study of the incentive schemes, in this section we present a model of the users' behaviors. In particular, we model the benefits and costs of P2P interactions (the game) and the population dynamics caused by mutation, learning, and turnover. Our model is designed to have the following properties, which characterize a large set of P2P systems:
• Social Dilemma: Universal cooperation should result in optimal overall utility, but individuals who exploit the cooperation of others while not cooperating themselves (i.e., defecting) should benefit more than users who do cooperate.
• Asymmetric Transactions: A peer may want service from another peer while not currently being able to provide the service that the second peer wants. Transactions should be able to have asymmetric payoffs.
• Untraceable Defections: A peer should not be able to determine the identity of peers who have defected on her. This models the difficulty or expense of determining that a peer could have provided a service, but didn't. For example, in the Gnutella file sharing system [21], a peer may simply ignore queries despite possessing the desired file, thus preventing the querying peer from identifying the defecting peer.
• Dynamic Population: Peers should be able to change their behavior and enter or leave the system independently and continuously.

Figure 2: Payoff matrix for the Generalized Prisoner's Dilemma. T, R, P, and S stand for temptation, reward, punishment and sucker, respectively.

2.3 Generalized Prisoner's Dilemma
The Prisoner's Dilemma, developed by Flood, Dresher, and Tucker in 1950 [22], is a non-cooperative repeated game satisfying the social dilemma requirement. Each game consists of two players who can defect or cooperate. Depending on how each acts, the players receive a payoff. The players use a strategy to decide how to act. Unfortunately, existing work either uses a specific asymmetric payoff matrix or only gives the general form for a symmetric one [4]. Instead, we use the Generalized Prisoner's Dilemma (GPD), which specifies the general form for an asymmetric payoff matrix that preserves the social dilemma. In the GPD, one player is the client and one player is the server in each game, and it is only the decision of the server that is meaningful for determining the outcome of the transaction.
A player can be a client in one game and a server in another. The client and server receive the payoff from a generalized payoff matrix (Figure 2), where Rc, Sc, Tc, and Pc are the client's payoffs and Rs, Ss, Ts, and Ps are the server's payoffs. A GPD payoff matrix must have the following properties to create a social dilemma:
1. Mutual cooperation leads to higher payoffs than mutual defection (Rc + Rs > Pc + Ps).
2. Mutual cooperation leads to higher payoffs than one player suckering the other (Rc + Rs > Sc + Ts and Rc + Rs > Tc + Ss).
3. Defection dominates cooperation (at least weakly) at the individual level for the entity who decides whether to cooperate or defect (Ts >= Rs and Ps >= Ss and (Ts > Rs or Ps > Ss)).
The last set of inequalities assumes that clients do not incur a cost regardless of whether they cooperate or defect, and therefore clients always cooperate. These properties correspond to similar properties of the classic Prisoner's Dilemma and allow any form of asymmetric transaction while still creating a social dilemma. Furthermore, one or more of the four possible actions (client cooperate and defect, and server cooperate and defect) can be untraceable. If one player makes an untraceable action, the other player does not know the identity of the first player. For example, to model a P2P application like file sharing or overlay routing, we use the specific payoff matrix values shown in Figure 3. This satisfies the inequalities specified above, where only the server can choose between cooperating and defecting. In addition, for this particular payoff matrix, clients are unable to trace server defections. This is the payoff matrix that we use in our simulation results.

Figure 3: The payoff matrix for an application like P2P file sharing or overlay routing.
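As a concrete illustration of these conditions, the short Python sketch below checks whether a candidate asymmetric payoff matrix forms a GPD-style social dilemma. The numeric payoff values at the bottom are arbitrary illustrative assumptions, not the values of Figure 3, which are given only graphically.

# Sketch: verify the GPD social-dilemma conditions for an asymmetric payoff matrix.
# Subscript c denotes the client's payoff, subscript s the server's payoff.

def is_social_dilemma(Rc, Rs, Sc, Ss, Tc, Ts, Pc, Ps):
    """Return True if the payoffs satisfy the three GPD properties."""
    mutual_coop_beats_defection = (Rc + Rs) > (Pc + Ps)            # property 1
    mutual_coop_beats_suckering = ((Rc + Rs) > (Sc + Ts) and
                                   (Rc + Rs) > (Tc + Ss))          # property 2
    server_defection_dominates = (Ts >= Rs and Ps >= Ss and
                                  (Ts > Rs or Ps > Ss))            # property 3
    return (mutual_coop_beats_defection and
            mutual_coop_beats_suckering and
            server_defection_dominates)

# Arbitrary example values (assumed for illustration only):
print(is_social_dilemma(Rc=5, Rs=2, Sc=-1, Ss=-1, Tc=6, Ts=4, Pc=0, Ps=0))  # True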
2.4 Population Dynamics
A characteristic of P2P systems is that peers change their behavior and enter or leave the system independently and continuously. Several studies [4] [28] of repeated Prisoner's Dilemma games use an evolutionary model [19] [34] of population dynamics. An evolutionary model is not suitable for P2P systems because it only specifies the global behavior and all changes occur at discrete times. For example, it may specify that a population of 5 "100% Cooperate" players and 5 "100% Defect" players evolves into a population with 3 and 7 players, respectively. It does not specify which specific players switched. Furthermore, all the switching occurs at the end of a generation instead of continuously, as in a real P2P system. As a result, evolutionary population dynamics do not accurately model turnover, traitors, and strangers. In our model, entities take independent and continuous actions that change the composition of the population. Time consists of rounds. In each round, every player plays one game as a client and one game as a server. At the end of a round, a player may: 1) mutate, 2) learn, 3) turn over, or 4) stay the same. If a player mutates, she switches to a randomly picked strategy. If she learns, she switches to a strategy that she believes will produce a higher score (described in more detail below). If she maintains her identity after switching strategies, then she is referred to as a traitor. If a player suffers turnover, she leaves the system and is replaced with a newcomer who uses the same strategy as the exiting player. To learn, a player collects local information about the performance of different strategies. This information consists of both her personal observations of strategy performance and the observations of those players she interacts with. This models users communicating out-of-band about how strategies perform. Let s be the running average of the performance of a player's current strategy per round and age be the number of rounds she has been using the strategy. A strategy's rating is computed from s and age; we use the age and compute the running average before taking the ratio to prevent young samples (which are more likely to be outliers) from skewing the rating. At the end of a round, a player switches to the highest rated strategy with a probability proportional to the difference in score between her current strategy and the highest rated strategy.

3. RECIPROCATIVE DECISION FUNCTION
In this section, we present the new decision function, Reciprocative, that is the basis for our incentive techniques. A decision function maps from a history of a player's actions to a decision whether to cooperate with or defect on that player. A strategy consists of a decision function, private or shared history, a server selection mechanism, and a stranger policy. Our approach to incentives is to design strategies which maximize both individual and social benefit. Strategic users will choose to use such strategies and thereby drive the system to high levels of cooperation. Two examples of simple decision functions are "100% Cooperate" and "100% Defect". "100% Cooperate" models a naive user who does not yet realize that she is being exploited. "100% Defect" models a greedy user who is intent on exploiting the system. In the absence of incentive techniques, "100% Defect" users will quickly dominate the "100% Cooperate" users and destroy cooperation in the system. Our requirements for a decision function are that (1) it can use shared and subjective history, (2) it can deal with untraceable defections, and (3) it is robust against different patterns of defection. Previous decision functions such as Tit-for-Tat [4] and Image [28] (see Section 5) do not satisfy these criteria. For example, Tit-for-Tat and Image base their decisions on both cooperations and defections, and therefore cannot deal with untraceable defections. In this section and the remaining sections we demonstrate how the Reciprocative-based strategies satisfy all of the requirements stated above. The probability that a Reciprocative player cooperates with a peer is a function of its normalized generosity. Generosity measures the benefit an entity has provided relative to the benefit it has consumed. This is important because entities which consume more services than they provide, even if they provide many services, will cause cooperation to collapse. For some entity i, let pi and ci be the services i has provided and consumed, respectively. Entity i's generosity is simply the ratio of the service it provides to the service it consumes: g(i) = pi / ci. One possibility is to cooperate with a probability equal to the generosity. Although this is effective in some cases, in other cases a Reciprocative player may consume more than she provides (e.g., when initially using the "Stranger Defect" policy in Section 4.3). This will cause Reciprocative players to defect on each other. To prevent this situation, a Reciprocative player uses its own generosity as a measuring stick to judge its peer's generosity. Normalized generosity measures entity i's generosity relative to entity j's generosity.
More concretely, entity i's normalized generosity as perceived by entity j is g(i) / g(j). In the remainder of this section, we describe our simulation framework, and use it to demonstrate the benefits of the baseline Reciprocative decision function.

Table 1: Default simulation parameters.

3.1 Simulation Framework
Our simulator implements the model described in Section 2. We use the asymmetric file sharing payoff matrix (Figure 3) with untraceable defections because it models transactions in many P2P systems like file sharing and packet forwarding in ad hoc and overlay networks. Our simulation study is composed of different scenarios reflecting the challenges of various non-cooperative behaviors. Table 1 presents the nominal parameter values used in our simulation. The "Ratio using" rows refer to the initial ratio of the total population using a particular strategy. In each scenario we vary the value range of a specific parameter to reflect a particular situation or attack. We then vary the exact properties of the Reciprocative strategy to defend against that situation or attack.

3.2 Baseline Results
Figure 4: The evolution of strategy populations over time. "Time" is the number of elapsed rounds. "Population" is the number of players using a strategy.
In this section, we present the dynamics of the game for the basic scenario presented in Table 1 to familiarize the reader and set a baseline for more complicated scenarios. Figures 4 (a) (60 players) and (b) (120 players) show players switching to higher scoring strategies over time in two separate runs of the simulator. Each point in the graph represents the number of players using a particular strategy at one point in time. Figures 5 (a) and (b) show the corresponding mean overall score per round. This measures the degree of cooperation in the system: 6 is the maximum possible (achieved when everybody cooperates) and 0 is the minimum (achieved when everybody defects). From the file sharing payoff matrix, a net of 6 means everyone is able to download a file and a 0 means that no one is able to do so.
Figure 5: The mean overall per round score over time.
We use this metric in all later results to evaluate our incentive techniques. Figure 5 (a) shows that the Reciprocative strategy using private history causes a system of 60 players to converge to a cooperation level of 3.7, but this drops to 0.5 for 120 players. One would expect the 60 player system to reach the optimal level of cooperation (6) because all the defectors are eliminated from the system. It does not because of asymmetry of interest. For example, suppose player B is using Reciprocative with private history. Player A may happen to ask for service from player B twice in succession without providing service to player B in the interim. Player B does not know of the service player A has provided to others, so player B will reject service to player A, even though player A is cooperative. We discuss solutions to asymmetry of interest and the failure of Reciprocative in the 120 player system in Section 4.1.

4. RECIPROCATIVE-BASED INCENTIVE TECHNIQUES
In this section we present our incentive techniques and evaluate their behavior by simulation. To make the exposition clear we group our techniques by the challenges they address: large populations and high turnover (Section 4.1), collusion (Section 4.2), zero-cost identities (Section 4.3), and traitors (Section 4.4).
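To make the baseline decision concrete before turning to the individual techniques, the following minimal Python sketch implements the Reciprocative rule described above with private history: cooperate with a peer with probability equal to her normalized generosity, capped at 1. The class name, the bookkeeping, and the conventions for peers with no recorded consumption are illustrative assumptions, not the simulator's actual code.

import random

class ReciprocativePeer:
    """Minimal sketch of the baseline Reciprocative decision (private history).

    received[p] counts services peer p has provided to me;
    given[p] counts services I have provided to (i.e., p has consumed from) me.
    """

    def __init__(self):
        self.received = {}
        self.given = {}

    def record_service(self, peer_id, i_was_the_server):
        book = self.given if i_was_the_server else self.received
        book[peer_id] = book.get(peer_id, 0) + 1

    @staticmethod
    def generosity(provided, consumed):
        # g = provided / consumed; a peer that has consumed nothing is treated
        # as neutral or fully generous (a convention assumed for this sketch).
        if consumed == 0:
            return 1.0 if provided == 0 else float(provided)
        return provided / consumed

    def decide(self, peer_id):
        """Cooperate with probability min(1, normalized generosity of the peer)."""
        g_peer = self.generosity(self.received.get(peer_id, 0),
                                 self.given.get(peer_id, 0))
        g_self = self.generosity(sum(self.given.values()),
                                 sum(self.received.values()))
        g_self = g_self if g_self > 0 else 1.0   # guard against division by zero in this sketch
        return random.random() < min(1.0, g_peer / g_self)

With shared history, the same decision rule would simply be fed counts aggregated from the records of all peers rather than from private observations alone.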
4.1 Large Populations and High Turnover
The large populations and high turnover of P2P systems make it less likely that repeat interactions will occur with a familiar entity. Under these conditions, basing decisions only on private history (records about interactions the peer has been directly involved in) is not effective. In addition, private history does not deal well with asymmetry of interest. For example, if player B has cooperated with others but not with player A himself in the past, player A has no indication of player B's generosity, and thus may unduly defect on him. We propose two mechanisms to alleviate the problem of few repeat transactions: server selection and shared history.

4.1.1 Server Selection
A natural way to increase the probability of interacting with familiar peers is by discriminating server selection. However, the asymmetry of transactions challenges selection mechanisms. Unlike in the prisoner's dilemma payoff matrix, where players can benefit one another within a single transaction, transactions in the GPD are asymmetric. As a result, a player who selects her donor for the second time without contributing to her in the interim may face a defection. In addition, due to the untraceability of defections, it is impossible to maintain blacklists to avoid interactions with known defectors. In order to deal with asymmetric transactions, every player holds fixed-size lists of both past donors and past recipients, and selects a server from one of these lists at random with equal probabilities. This way, users approach their past recipients and give them a chance to reciprocate. In scenarios with selective users we omit the complete availability assumption to prevent players from being clustered into many very small groups; thus, we assume that every player can perform the requested service with probability P (for the results presented in this section, P = 0.3). In addition, in order to avoid bias in favor of the selective players, all players (including the non-discriminative ones) select servers for games. Figure 6 demonstrates the effectiveness of the proposed selection mechanism in scenarios with large population sizes. We fix the initial ratio of Reciprocative players in the population (33%) while varying the population size (between 24 and 1,000). (Note that while the data points in Figures 4 (a) and (b) show the evolution of the system over time, each data point in this figure is the result of an entire simulation for a specific scenario.) The figure shows that the Reciprocative decision function using private history in conjunction with selective behavior can scale to large populations. In Figure 7 we fix the population size and vary the turnover rate. It demonstrates that while selective behavior is effective for low turnover rates, as turnover gets higher, selective behavior does not scale. This occurs because selection is only effective as long as players from the past stay alive long enough that they can be selected for future games.
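A minimal Python sketch of this discriminating server selection is given below: each peer keeps fixed-size lists of past donors and past recipients and draws a server from one of the two lists with equal probability, falling back to a random peer when the chosen list is empty. The list size and the fallback rule are assumptions made for illustration.

import random
from collections import deque

class SelectivePeer:
    """Sketch of discriminating server selection with bounded donor/recipient lists."""

    def __init__(self, list_size=20):
        self.past_donors = deque(maxlen=list_size)      # peers who have served me
        self.past_recipients = deque(maxlen=list_size)  # peers I have served

    def remember_donor(self, peer_id):
        self.past_donors.append(peer_id)

    def remember_recipient(self, peer_id):
        self.past_recipients.append(peer_id)

    def choose_server(self, population):
        """Pick a familiar peer when possible, giving past recipients a chance to reciprocate."""
        candidates = self.past_donors if random.random() < 0.5 else self.past_recipients
        if not candidates:
            candidates = population   # fall back to the whole population (assumed rule)
        return random.choice(list(candidates))

A request to the chosen server would then succeed only with some availability probability (P = 0.3 in the scenarios above), reflecting the relaxed availability assumption.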
4.1.2 Shared History
In order to mitigate asymmetry of interest and scale to higher turnover rates, shared history is needed. Shared history means that every peer keeps records about all of the interactions that occur in the system, regardless of whether he was directly involved in them or not. It allows players to leverage the experiences of others in cases of few repeat transactions. It only requires that someone has interacted with a particular player for the entire population to observe it, so it scales better to large populations and high turnover, and also tolerates asymmetry of interest. Some examples of shared history schemes are [20] [23] [28]. Figure 7 shows the effectiveness of shared history under high turnover rates. In this figure, we fix the population size and vary the turnover rate. While selective players with private history can only tolerate a moderate turnover, shared history scales to turnovers of up to approximately 0.1. This means that 10% of the players leave the system at the end of each round. In Figure 6 we fix the turnover and vary the population size. It shows that shared history causes the system to converge to optimal cooperation and performance, regardless of the size of the population. These results show that shared history addresses all three challenges of large populations, high turnover, and asymmetry of transactions.

Figure 6: Private vs. shared history as a function of population size.
Figure 7: Performance of the selection mechanism under turnover. The x-axis is the turnover rate. The y-axis is the mean overall per round score.

Nevertheless, shared history has two disadvantages. First, while a decentralized implementation of private history is straightforward, an implementation of shared history requires communication overhead or centralization. A decentralized shared history can be implemented, for example, on top of a DHT, using a peer-to-peer storage system [36], or by disseminating information to other entities in a similar way to routing protocols. Second, and more fundamental, shared history is vulnerable to collusion. In the next section we propose a mechanism that addresses this problem.

4.2 Collusion and Other Shared History Attacks
4.2.1 Collusion
While shared history is scalable, it is vulnerable to collusion. Collusion can be either positive (e.g., defecting entities claim that other defecting entities cooperated with them) or negative (e.g., entities claim that other cooperative entities defected on them). Collusion subverts any strategy in which everyone in the system agrees on the reputation of a player (objective reputation). An example of objective reputation is to use the Reciprocative decision function with shared history to count the total number of cooperations a player has given to and received from all entities in the system; another example is the Image strategy [28]. The effect of collusion is magnified in systems with zero-cost identities, where users can create fake identities that report false statements. Instead, to deal with collusion, entities can compute reputation subjectively, where player A weighs player B's opinions based on how much player A trusts player B. Our subjective algorithm is based on maxflow [24] [32]. Maxflow is a graph-theoretic problem which, given a directed graph with weighted edges, asks for the greatest rate at which material can be shipped from the source to the target without violating any capacity constraints. For example, in Figure 8 each edge is labeled with the amount of traffic that can travel on it. The maxflow algorithm computes the maximum amount of traffic that can go from the source (s) to the target (t) without violating the constraints. In this example, even though there is a loop of high capacity edges, the maxflow between the source and the target is only 2 (the numbers in brackets represent the actual flow on each edge in the solution).
Figure 8: Each edge in the graph is labeled with its capacity and the actual flow it carries in brackets. The maxflow between the source and the target in the graph is 2.
Figure 9: This graph illustrates the robustness of maxflow in the presence of colluders who report bogus high reputation values.

We apply the maxflow algorithm by constructing a graph whose vertices are entities and whose edges are the services that entities have received from each other. This information can be stored using the same methods as the shared history. A maxflow is the greatest level of reputation the source can give to the sink without violating "reputation capacity" constraints. As a result, nodes that dishonestly report high reputation values will not be able to subvert the reputation system. Figure 9 illustrates a scenario in which all the colluders (labeled with C) report high reputation values for each other. When node A computes the subjective reputation of B using the maxflow algorithm, it will not be affected by the local false reputation values; rather, the maxflow in this case will be 0. This is because no service has been received from any of the colluders. In our algorithm, the benefit that entity i has received (indirectly) from entity j is the maxflow from j to i. Conversely, the benefit that entity i has provided indirectly to j is the maxflow from i to j. The subjective reputation of entity j as perceived by i is then computed from these two flows, in the same way that normalized generosity is computed from services provided and consumed.

Figure 10: Subjective shared history compared to objective shared history and private history in the presence of colluders.

The cost of maxflow is its long running time. The standard preflow-push maxflow algorithm has a worst-case running time of O(V^3). Instead, we use Algorithm 1, which has a constant mean running time but sometimes returns no flow even though one exists. The essential idea is to bound the mean number of nodes examined during the maxflow computation. This bounds the overhead, but also bounds the effectiveness. Despite this, the results below show that a maxflow-based Reciprocative decision function scales to higher populations than one using private history. Figure 10 compares the effectiveness of subjective reputation to objective reputation in the presence of colluders. In these scenarios, defectors collude by claiming that other colluders that they encounter gave them 100 cooperations for that encounter. Also, the parameters for Algorithm 1 are set as follows: increment = 100, a = 0.9. As in previous sections, Reciprocative with private history results in cooperation up to a point, beyond which it fails. The difference here is that objective shared history fails for all population sizes. This is because the Reciprocative players cooperate with the colluders because of their high reputations. However, subjective history can reach high levels of cooperation regardless of colluders. This is because there are no high-weight paths in the cooperation graph from colluders to any non-colluders, so the maxflow from a colluder to any non-colluder is 0. Therefore, a subjective Reciprocative player will conclude that the colluder has not provided any service to her and will reject service to the colluder. Thus, the maxflow algorithm enables Reciprocative to maintain the scalability of shared history without being vulnerable to collusion or requiring centralized trust (e.g., trusted peers).
Since we bound the running time of the maxflow algorithm, cooperation decreases as the population size increases, but the key point is that the subjective Reciprocative decision function scales to higher populations than one using private history. This advantage only increases over time as CPU power increases and more cycles can be devoted to running the maxflow algorithm (by increasing the increment parameter). Despite the robustness of the maxflow algorithm to the simple form of collusion described previously, it still has vulnerabilities to more sophisticated attacks. One is for an entity (the "mole") to provide service and then lie positively about other colluders. The other colluders can then exploit their reputation to receive service. However, the effectiveness of this attack relies on the amount of service that the mole provides. Since the mole is paying all of the cost of providing service and receiving none of the benefit, she has a strong incentive to stop colluding and try another strategy. This forces the colluders to use mechanisms to maintain cooperation within their group, which may drive the cost of collusion to exceed the benefit.

4.2.2 False Reports
Another attack is for a defector to lie about receiving or providing service to another entity. There are four possible actions that can be lied about: providing service, not providing service, receiving service, and not receiving service. Falsely claiming to receive service is the simple collusion attack described above. Falsely claiming not to have provided service provides no benefit to the attacker. Falsely claiming to have provided service or not to have received it allows an attacker to boost her own reputation and/or lower the reputation of another entity. An entity may want to lower another entity's reputation in order to discourage others from selecting it and exclusively use its service. These false claims are clearly identifiable in the shared history as inconsistencies where one entity claims a transaction occurred and another claims it did not. To limit this attack, we modify the maxflow algorithm so that an entity always believes the entity that is closer to him in the flow graph. If both entities are equally distant, then the disputed edge in the flow is not critical to the evaluation and is ignored. This modification prevents those cases where the attacker is making false claims about an entity that is closer than her to the evaluating entity, which prevents her from boosting her own reputation. The remaining possibilities are for the attacker to falsely claim to have provided service to, or not to have received it from, a victim entity that is farther from the evaluator than her. In these cases, an attacker can only lower the reputation of the victim. The effectiveness of doing this is limited by the number of services provided and received by the attacker, which makes executing this attack expensive.
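To illustrate how subjective reputation resists collusion, here is a small, self-contained Python sketch: reputation is derived from max-flow over the graph of services actually received, so mutually inflating colluders contribute no flow toward an honest evaluator. The capped-ratio reputation formula and the unbounded Edmonds-Karp max-flow are simplifying assumptions; the constant-mean-time Algorithm 1 used in the simulations is not reproduced here.

from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp max-flow on a dict-of-dicts capacity graph (small graphs only)."""
    residual = {u: dict(edges) for u, edges in capacity.items()}
    for u, edges in capacity.items():
        for v in edges:
            residual.setdefault(v, {}).setdefault(u, 0)  # reverse residual edges
    flow = 0
    while True:
        # BFS for an augmenting path with positive residual capacity
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual.get(u, {}).items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow
        # Find the bottleneck along the path and update residual capacities
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

def subjective_reputation(coop_graph, evaluator, peer):
    """Reputation of `peer` as perceived by `evaluator` (illustrative formula)."""
    received = max_flow(coop_graph, peer, evaluator)   # benefit evaluator got, indirectly, from peer
    provided = max_flow(coop_graph, evaluator, peer)   # benefit evaluator gave, indirectly, to peer
    if provided == 0:
        return 1.0 if received > 0 else 0.0            # assumed convention for this sketch
    return min(1.0, received / provided)

# Colluders who only "serve" each other contribute no flow toward an honest evaluator,
# so their mutually inflated claims do not raise their subjective reputation.
graph = {"C1": {"C2": 100}, "C2": {"C1": 100}, "A": {"B": 2}, "B": {"A": 1}}
print(subjective_reputation(graph, "A", "C1"))  # 0.0
print(subjective_reputation(graph, "A", "B"))   # 0.5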
4.3 Zero-Cost Identities
History assumes that entities maintain persistent identities. However, in most P2P systems, identities are zero-cost. This is desirable for network growth as it encourages newcomers to join the system. However, this also allows misbehaving users to escape the consequences of their actions by switching to new identities (i.e., whitewashing). Whitewashers can cause the system to collapse if they are not punished appropriately. Unfortunately, a player cannot tell if a stranger is a whitewasher or a legitimate newcomer. Always cooperating with strangers encourages newcomers to join, but at the same time encourages whitewashing behavior. Always defecting on strangers prevents whitewashing, but discourages newcomers from joining and may also initiate unfavorable cycles of defection. This tension suggests that any stranger policy that has a fixed probability of cooperating with strangers will fail by either being too stingy when most strangers are newcomers or too generous when most strangers are whitewashers. Our solution is the "Stranger Adaptive" stranger policy. The idea is to be generous to strangers when they are being generous and stingy when they are stingy. Let ps and cs be the number of services that strangers have provided and consumed, respectively. The probability that a player using "Stranger Adaptive" helps a stranger is ps/cs. However, we do not wish to keep these counts permanently (for reasons described in Section 4.4). Also, players may not know when strangers defect because defections are untraceable (as described in Section 2). Consequently, instead of keeping ps and cs, we assume that k = ps + cs, where k is a constant, and we keep the running ratio r = ps/cs. When we need to increment ps or cs, we first regenerate the current values from k and r: cs = k / (1 + r) and ps = k - cs. We then compute the new r as (ps + 1) / cs when a stranger has just provided a service, or as ps / (cs + 1) when a stranger has just consumed one. This method allows us to keep a running ratio that reflects the recent generosity of strangers without knowing when strangers have defected. (A short code sketch of this bookkeeping is given below.)

Figure 11: Different stranger policies for Reciprocative with shared history. The x-axis is the turnover rate on a log scale. The y-axis is the mean overall per round score.
Figure 12: Different stranger policies for Reciprocative with private history. The x-axis is the turnover rate on a log scale. The y-axis is the mean overall per round score.

Figures 11 and 12 compare the effectiveness of the Reciprocative strategy using different policies toward strangers. Figure 11 shows different stranger policies for Reciprocative with shared history, while Figure 12 shows them with private history. In both figures, the players using the "100% Defect" strategy change their identity (whitewash) after every transaction and are indistinguishable from legitimate newcomers. The Reciprocative players using the "Stranger Cooperate" policy completely fail to achieve cooperation. This stranger policy allows whitewashers to maximize their payoff and consequently provides a high incentive for users to switch to whitewashing. In contrast, Figure 11 shows that the "Stranger Defect" policy is effective with shared history. This is because whitewashers always appear to be strangers and therefore the Reciprocative players will always defect on them. This is consistent with previous work [13] showing that punishing strangers deals with whitewashers. However, Figure 12 shows that "Stranger Defect" is not effective with private history. This is because Reciprocative requires some initial cooperation to bootstrap. In the shared history case, a Reciprocative player can observe that another player has already cooperated with others. With private history, the Reciprocative player only knows about the other players' actions toward her. Therefore, the initial defection dictated by the "Stranger Defect" policy will lead to later defections, which will prevent Reciprocative players from ever cooperating with each other. In other simulations not shown here, the "Stranger Defect" stranger policy fails even with shared history when there are no initial "100% Cooperate" players.
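The sketch below illustrates the "Stranger Adaptive" bookkeeping described above: only the constant k and the running ratio r = ps/cs are stored, ps and cs are regenerated on demand, and a stranger is helped with probability min(1, r). The initial value of r and the class interface are assumptions for illustration; the regeneration algebra follows directly from k = ps + cs and r = ps/cs.

import random

class StrangerAdaptive:
    """Sketch of the "Stranger Adaptive" policy: be generous to strangers only
    to the extent that recent strangers have been generous."""

    def __init__(self, k=10, initial_ratio=1.0):
        self.k = k                   # fixed weight: k = ps + cs
        self.r = initial_ratio       # running ratio r = ps / cs (initial value assumed)

    def _ps_cs(self):
        # Regenerate ps and cs from k and r:  cs = k / (1 + r),  ps = k - cs.
        cs = self.k / (1.0 + self.r)
        return self.k - cs, cs

    def stranger_provided_service(self):
        ps, cs = self._ps_cs()
        self.r = (ps + 1.0) / cs     # increment ps, then recompute the ratio

    def stranger_consumed_service(self):
        ps, cs = self._ps_cs()
        self.r = ps / (cs + 1.0)     # increment cs, then recompute the ratio

    def cooperate_with_stranger(self):
        # Cooperate with probability ps/cs = r, capped at 1.
        return random.random() < min(1.0, self.r)

Because k is held constant, old observations are effectively forgotten as new ones arrive, which is what allows the policy to track the recent behavior of strangers.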
Figure 11 shows that with shared history, the "Stranger Adaptive" policy performs as well as the "Stranger Defect" policy until the turnover rate is very high (10% of the population turning over after every transaction). In these scenarios, "Stranger Adaptive" uses k = 10 and each player keeps a private r. More importantly, it is significantly better than the "Stranger Defect" policy with private history because it can bootstrap cooperation. Although the "Stranger Defect" policy is marginally more effective than "Stranger Adaptive" at very high rates of turnover, P2P systems are unlikely to operate there because other services (e.g., routing) also cannot tolerate very high turnover. We conclude that of the stranger policies that we have explored, "Stranger Adaptive" is the most effective. By using "Stranger Adaptive", P2P systems with zero-cost identities and a sufficiently low turnover can sustain cooperation without a centralized allocation of identities.

4.4 Traitors
Traitors are players who acquire high reputation scores by cooperating for a while, and then traitorously turn into defectors before leaving the system. They model both users who turn deliberately to gain a higher score and cooperators whose identities have been stolen and exploited by defectors. A strategy that maintains long-term history without discriminating between old and recent actions becomes highly vulnerable to exploitation by these traitors. The top two graphs in Figure 13 demonstrate the effect of traitors on cooperation in a system where players keep long-term history (never clear history). In these simulations, we run for 2000 rounds and allow cooperative players to keep their identities when switching to the 100% Defector strategy. We use the default values for the other parameters. Without traitors, the cooperative strategies thrive. With traitors, the cooperative strategies thrive until a cooperator turns traitor after 600 rounds. As this "cooperator" exploits her reputation to achieve a high score, other cooperative players notice this and follow suit via learning. Cooperation eventually collapses. On the other hand, if we maintain short-term history and/or discount ancient history vis-a-vis recent history, traitors can be quickly detected, and the overall cooperation level stays high, as shown in the bottom two graphs in Figure 13.

Figure 13: Keeping long-term vs. short-term history both with and without traitors.

5. RELATED WORK
Previous work has examined the incentive problem as applied to societies in general and more recently to Internet applications and peer-to-peer systems in particular. A well-known phenomenon in this context is the "tragedy of the commons" [18], where resources are under-provisioned due to selfish users who free-ride on the system's resources; this is especially common in large networks [29] [3]. The problem has been extensively studied using a game-theoretic approach. The prisoners' dilemma model provides a natural framework to study the effectiveness of different strategies in establishing cooperation among players. In a simulation environment with many repeated games, persistent identities, and no collusion, Axelrod [4] shows that the Tit-for-Tat strategy dominates. Our model assumes growth follows local learning rather than evolutionary dynamics [14], and also allows for more kinds of attacks. Nowak and Sigmund [28] introduce the Image strategy and demonstrate its ability to establish cooperation among players despite few repeat transactions by the employment of shared history.
Players using Image cooperate with players whose global count of cooperations minus defections exceeds some threshold. As a result, an Image player is either vulnerable to partial defectors (if the threshold is set too low) or does not cooperate with other Image players (if the threshold is set too high). In recent years, researchers have used economic "mechanism design" theory to tackle the cooperation problem in Internet applications. Mechanism design is the inverse of game theory. It asks how to design a game in which the behavior of strategic players results in the socially desired outcome. Distributed Algorithmic Mechanism Design seeks solutions within this framework that are both fully distributed and computationally tractable [12]. [10] and [11] are examples of applying DAMD to BGP routing and multicast cost sharing. More recently, DAMD has also been studied in dynamic environments [38]. In this context, demonstrating the superiority of a cooperative strategy (as in the case of our work) is consistent with the objective of incentivizing the desired behavior among selfish players. The unique challenges imposed by peer-to-peer systems have inspired an additional body of work [5] [37], mainly in the context of packet forwarding in wireless ad-hoc routing [8] [27] [30] [35] and file sharing [15] [31]. Friedman and Resnick [13] consider the problem of zero-cost identities in online environments and find that in such systems punishing all newcomers is inevitable. Using a theoretical model, they demonstrate that such a system can converge to cooperation only for sufficiently low turnover rates, which our results confirm. [6] and [9] show that whitewashing and collusion can have dire consequences for peer-to-peer systems and are difficult to prevent in a fully decentralized system. Some commercial file sharing clients [1] [2] provide incentive mechanisms which are enforced by making it difficult for the user to modify the source code. These mechanisms can be circumvented by a skilled user or by a competing company releasing a compatible client without the incentive restrictions. Also, these mechanisms are still vulnerable to zero-cost identities and collusion. BitTorrent [7] uses Tit-for-Tat as a method for resource allocation, where a user's upload rate dictates his download rate.

6. CONCLUSIONS
In this paper we take a game-theoretic approach to the problem of cooperation in peer-to-peer networks. Addressing the challenges imposed by P2P systems, including large populations, high turnover, asymmetry of interest, and zero-cost identities, we propose a family of scalable and robust incentive techniques, based upon the Reciprocative decision function, to support cooperative behavior and improve overall system performance. We find that the adoption of shared history and discriminating server selection techniques can mitigate the challenge of few repeat transactions that arises due to large population size, high turnover, and asymmetry of interest. Furthermore, cooperation can be established even in the presence of zero-cost identities through the use of an adaptive policy towards strangers. Finally, colluders and traitors can be kept in check via subjective reputations and short-term history, respectively.
Robust Incentive Techniques for Peer-to-Peer Networks Lack of cooperation (free riding) is one of the key problems that confronts today's P2P systems. What makes this problem particularly difficult is the unique set of challenges that P2P systems pose: large populations, high turnover, asymmetry of interest, collusion, zero-cost identities, and traitors. To tackle these challenges we model the P2P system using the Generalized Prisoner's Dilemma (GPD), and propose the Reciprocative decision function as the basis of a family of incentives techniques. These techniques are fully distributed and include: discriminating server selection, maxflowbased subjective reputation, and adaptive stranger policies. Through simulation, we show that these techniques can drive a system of strategic users to nearly optimal levels of cooperation. 1. INTRODUCTION Many peer-to-peer (P2P) systems rely on cooperation among selfinterested users. For example, in a file-sharing system, overall download latency and failure rate increase when users do not share their resources [3]. In a wireless ad-hoc network, overall packet latency and loss rate increase when nodes refuse to forward packets on behalf of others [26]. Further examples are file preservation [25], discussion boards [17], online auctions [16], and overlay routing [6]. In many of these systems, users have natural disincentives to cooperate because cooperation consumes their own resources and may degrade their own performance. As a result, each user's attempt to maximize her own utility effectively lowers the overall Figure 1: Example of asymmetry of interest. A wants service from B, B wants service form C, and C wants service from A. utility of the system. Avoiding this "tragedy of the commons" [18] requires incentives for cooperation. We adopt a game-theoretic approach in addressing this problem. In particular, we use a prisoners' dilemma model to capture the essential tension between individual and social utility, asymmetric payoff matrices to allow asymmetric transactions between peers, and a learning-based [14] population dynamic model to specify the behavior of individual peers, which can be changed continuously. While social dilemmas have been studied extensively, P2P applications impose a unique set of challenges, including: • Large populations and high turnover: A file sharing system such as Gnutella and KaZaa can exceed 100, 000 simultaneous users, and nodes can have an average life-time of the order of minutes [33]. • Asymmetry of interest: Asymmetric transactions of P2P systems create the possibility for asymmetry of interest. In the example in Figure 1, A wants service from B, B wants service from C, and C wants service from A. • Zero-cost identity: Many P2P systems allow peers to continuously switch identities (i.e., whitewash). Strategies that work well in traditional prisoners' dilemma games such as Tit-for-Tat [4] will not fare well in the P2P context. Therefore, we propose a family of scalable and robust incentive techniques, based upon a novel Reciprocative decision function, to address these challenges and provide different tradeoffs: • Discriminating Server Selection: Cooperation requires familiarity between entities either directly or indirectly. However, the large populations and high turnover of P2P systems makes it less likely that repeat interactions will occur with a familiar entity. 
We show that by having each peer keep a private history of the actions of other peers toward her, and using discriminating server selection, the Reciprocative decision function can scale to large populations and moderate levels of turnover. . Shared History: Scaling to higher turnover and mitigating asymmetry of interest requires shared history. Consider the example in Figure 1. If everyone provides service, then the system operates optimally. However, if everyone keeps only private history, no one will provide service because B does not know that A has served C, etc. . We show that with shared history, B knows that A served C and consequently will serve A. This results in a higher level of cooperation than with private history. The cost of shared history is a distributed infrastructure (e.g., distributed hash table-based storage) to store the history. . Maxflow-based Subjective Reputation: Shared history creates the possibility for collusion. In the example in Figure 1, C can falsely claim that A served him, thus deceiving B into providing service. We show that a maxflow-based algorithm that computes reputation subjectively promotes cooperation despite collusion among 1/3 of the population. The basic idea is that B would only believe C if C had already provided service to B. The cost of the maxflow algorithm is its O (V 3) running time, where V is the number of nodes in the system. To eliminate this cost, we have developed a constant mean running time variation, which trades effectiveness for complexity of computation. We show that the maxflow-based algorithm scales better than private history in the presence of colluders without the centralized trust required in previous work [9] [20]. . Adaptive Stranger Policy: Zero-cost identities allows noncooperating peers to escape the consequences of not cooperating and eventually destroy cooperation in the system if not stopped. We show that if Reciprocative peers treat strangers (peers with no history) using a policy that adapts to the behavior of previous strangers, peers have little incentive to whitewash and whitewashing can be nearly eliminated from the system. The adaptive stranger policy does this without requiring centralized allocation of identities, an entry fee for newcomers, or rate-limiting [13] [9] [25]. . Short-term History: History also creates the possibility that a previously well-behaved peer with a good reputation will turn traitor and use his good reputation to exploit other peers. The peer could be making a strategic decision or someone may have hijacked her identity (e.g., by compromising her host). Long-term history exacerbates this problem by allowing peers with many previous transactions to exploit that history for many new transactions. We show that short-term history prevents traitors from disrupting cooperation. The rest of the paper is organized as follows. We describe the model in Section 2 and the reciprocative decision function in Section 3. We then proceed to the incentive techniques in Section 4. In Section 4.1, we describe the challenges of large populations and high turnover and show the effectiveness of discriminating server selection and shared history. In Section 4.2, we describe collusion and demonstrate how subjective reputation mitigates it. In Section 4.3, we present the problem of zero-cost identities and show how an adaptive stranger policy promotes persistent identities. In Section 4.4, we show how traitors disrupt cooperation and how short-term history deals with them. 
2. MODEL AND ASSUMPTIONS 2.1 Assumptions 2.2 Model 2.3 Generalized Prisoner's Dilemma 2.4 Population Dynamics 3. RECIPROCATIVE DECISION FUNCTION 3.1 Simulation Framework 3.2 Baseline Results 4. RECIPROCATIVE-BASED INCENTIVE TECHNIQUES 4.1 Large Populations and High Turnover 4.1.1 Server Selection 4.1.2 Shared History 4.2 Collusion and Other Shared History Attacks 4.2.1 Collusion 4.2.2 False Reports 4.3 Zero-Cost Identities 4.4 Traitors 5. RELATED WORK Previous work has examined the incentive problem as applied to societies in general and, more recently, to Internet applications and peer-to-peer systems in particular. A well-known phenomenon in this context is the "tragedy of the commons" [18], in which resources are under-provisioned because selfish users free-ride on the system's resources; this is especially common in large networks [29] [3]. The problem has been studied extensively using a game-theoretic approach. The prisoners' dilemma model provides a natural framework to study the effectiveness of different strategies in establishing cooperation among players. In a simulation environment with many repeated games, persistent identities, and no collusion, Axelrod [4] shows that the Tit-for-Tat strategy dominates. Our model instead assumes that behavior spreads through local learning rather than evolutionary dynamics [14], and also allows for more kinds of attacks. Nowak and Sigmund [28] introduce the Image strategy and demonstrate its ability to establish cooperation among players, despite few repeat transactions, through the use of shared history. Players using Image cooperate with players whose global count of cooperations minus defections exceeds some threshold. As a result, an Image player is either vulnerable to partial defectors (if the threshold is set too low) or does not cooperate with other Image players (if the threshold is set too high); both failure modes are illustrated in the short sketch below. In recent years, researchers have used economic "mechanism design" theory to tackle the cooperation problem in Internet applications. Mechanism design is the inverse of game theory: it asks how to design a game in which the behavior of strategic players results in the socially desired outcome. Distributed Algorithmic Mechanism Design (DAMD) seeks solutions within this framework that are both fully distributed and computationally tractable [12]. [10] and [11] are examples of applying DAMD to BGP routing and multicast cost sharing. More recently, DAMD has also been studied in dynamic environments [38]. In this context, demonstrating the superiority of a cooperative strategy (as in the case of our work) is consistent with the objective of incentivizing the desired behavior among selfish players. The unique challenges imposed by peer-to-peer systems have inspired an additional body of work [5] [37], mainly in the context of packet forwarding in wireless ad-hoc routing [8] [27] [30] [35] and file sharing [15] [31]. Friedman and Resnick [13] consider the problem of zero-cost identities in online environments and find that in such systems punishing all newcomers is inevitable. Using a theoretical model, they demonstrate that such a system can converge to cooperation only for sufficiently low turnover rates, which our results confirm. [6] and [9] show that whitewashing and collusion can have dire consequences for peer-to-peer systems and are difficult to prevent in a fully decentralized system.
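As a quick illustration of the threshold behavior just described for the Image strategy, consider the following toy decision rule. It is reconstructed only from the one-sentence description above (cooperate when the partner's global cooperations minus defections exceed a threshold); the record format and the specific threshold values are hypothetical.

```python
def image_cooperates(partner_history, threshold=0):
    """Image strategy as described above: cooperate iff the partner's global
    count of cooperations minus defections exceeds the threshold."""
    score = partner_history["cooperations"] - partner_history["defections"]
    return score > threshold

# Threshold too low: a partial defector keeps a positive score and is still served.
partial_defector = {"cooperations": 8, "defections": 5}
print(image_cooperates(partial_defector, threshold=0))    # True

# Threshold too high: two fresh Image players never clear the bar for each other.
fresh_image_player = {"cooperations": 0, "defections": 0}
print(image_cooperates(fresh_image_player, threshold=5))  # False
```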
Some commercial file sharing clients [1] [2] provide incentive mechanisms which are enforced by making it difficult for the user to modify the source code. These mechanisms can be circumvented by a skilled user or by a competing company releasing a compatible client without the incentive restrictions. Also, these mechanisms are still vulnerable to zero-cost identities and collusion. BitTorrent [7] uses Tit-for-Tat as a method for resource allocation, where a user's upload rate dictates his download rate. 6. CONCLUSIONS In this paper we take a game theoretic approach to the problem of cooperation in peer-to-peer networks. Addressing the challenges imposed by P2P systems, including large populations, high turnover, asymmetry of interest and zero-cost identities, we propose a family of scalable and robust incentive techniques, based upon the Reciprocative decision function, to support cooperative behavior and improve overall system performance. We find that the adoption of shared history and discriminating server selection techniques can mitigate the challenge of few repeat transactions that arises due to large population size, high turnover and asymmetry of interest. Furthermore, cooperation can be established even in the presence of zero-cost identities through the use of an adaptive policy towards strangers. Finally, colluders and traitors can be kept in check via subjective reputations and short-term history, respectively.
I-38
Bidding Algorithms for a Distributed Combinatorial Auction
Distributed allocation and multiagent coordination problems can be solved through combinatorial auctions. However, most of the existing winner determination algorithms for combinatorial auctions are centralized. The PAUSE auction is one of a few efforts to release the auctioneer from having to do all the work (it might even be possible to get rid of the auctioneer). It is an increasing price combinatorial auction that naturally distributes the problem of winner determination amongst the bidders in such a way that they have an incentive to perform the calculation. It can be used when we wish to distribute the computational load among the bidders or when the bidders do not wish to reveal their true valuations unless necessary. PAUSE establishes the rules the bidders must obey. However, it does not tell us how the bidders should calculate their bids. We have developed a couple of bidding algorithms for the bidders in a PAUSE auction. Our algorithms always return the set of bids that maximizes the bidder's utility. Since the problem is NP-Hard, run time remains exponential on the number of items, but it is remarkably better than an exhaustive search. In this paper we present our bidding algorithms, discuss their virtues and drawbacks, and compare the solutions obtained by them to the revenue-maximizing solution found by a centralized winner determination algorithm.
[ "bid algorithm", "combinatori auction", "distribut alloc", "coordin", "paus auction", "task and resourc alloc", "revenu-maxim solut", "combinatori optim problem", "agent", "progress adapt user select environ", "branch and bound search", "search tree", "branch-on-bid tree" ]
[ "P", "P", "P", "P", "P", "M", "M", "M", "U", "U", "M", "M", "U" ]
Bidding Algorithms for a Distributed Combinatorial Auction Benito Mendoza ∗ and Jos´e M. Vidal Computer Science and Engineering University of South Carolina Columbia, SC 29208 mendoza2@engr.sc.edu, vidal@sc.edu ABSTRACT Distributed allocation and multiagent coordination problems can be solved through combinatorial auctions. However, most of the existing winner determination algorithms for combinatorial auctions are centralized. The PAUSE auction is one of a few efforts to release the auctioneer from having to do all the work (it might even be possible to get rid of the auctioneer). It is an increasing price combinatorial auction that naturally distributes the problem of winner determination amongst the bidders in such a way that they have an incentive to perform the calculation. It can be used when we wish to distribute the computational load among the bidders or when the bidders do not wish to reveal their true valuations unless necessary. PAUSE establishes the rules the bidders must obey. However, it does not tell us how the bidders should calculate their bids. We have developed a couple of bidding algorithms for the bidders in a PAUSE auction. Our algorithms always return the set of bids that maximizes the bidder``s utility. Since the problem is NP-Hard, run time remains exponential on the number of items, but it is remarkably better than an exhaustive search. In this paper we present our bidding algorithms, discuss their virtues and drawbacks, and compare the solutions obtained by them to the revenue-maximizing solution found by a centralized winner determination algorithm. Categories and Subject Descriptors I.2.11 [Computing Methodologies]: Distributed Artificial Intelligence-Intelligent Agents, Multiagent Systems. General Terms Algorithms, Performance. 1. INTRODUCTION Both the research and practice of combinatorial auctions have grown rapidly in the past ten years. In a combinatorial auction bidders can place bids on combinations of items, called packages or bidsets, rather than just individual items. Once the bidders place their bids, it is necessary to find the allocation of items to bidders that maximizes the auctioneer``s revenue. This problem, known as the winner determination problem, is a combinatorial optimization problem and is NP-Hard [10]. Nevertheless, several algorithms that have a satisfactory performance for problem sizes and structures occurring in practice have been developed. The practical applications of combinatorial auctions include: allocation of airport takeoff and landing time slots, procurement of freight transportation services, procurement of public transport services, and industrial procurement [2]. Because of their wide applicability, one cannot hope for a general-purpose winner determination algorithm that can efficiently solve every instance of the problem. Thus, several approaches and algorithms have been proposed to address the winner determination problem. However, most of the existing winner determination algorithms for combinatorial auctions are centralized, meaning that they require all agents to send their bids to a centralized auctioneer who then determines the winners. Examples of these algorithms are CASS [3], Bidtree [11] and CABOB [12]. We believe that distributed solutions to the winner determination problem should be studied as they offer a better fit for some applications as when, for example, agents do not want to reveal their valuations to the auctioneer. 
The PAUSE (Progressive Adaptive User Selection Environment) auction [4, 5] is one of a few efforts to distribute the problem of winner determination amongst the bidders. PAUSE establishes the rules the participants have to adhere to so that the work is distributed amongst them. However, it is not concerned with how the bidders determine what they should bid. In this paper we present two algorithms, pausebid and cachedpausebid, which enable agents in a PAUSE auction to find the bidset that maximizes their utility. Our algorithms implement a myopic utility maximizing strategy and are guaranteed to find the bidset that maximizes the agent``s utility given the outstanding best bids at a given time. pausebid performs a branch and bound search completely from scratch every time that it is called. cachedpausebid is a caching-based algorithm which explores fewer nodes, since it caches some solutions. 694 978-81-904262-7-5 (RPS) c 2007 IFAAMAS 2. THE PAUSE AUCTION A PAUSE auction for m items has m stages. Stage 1 consists of having simultaneous ascending price open-cry auctions and during this stage the bidders can only place bids on individual items. At the end of this state we will know what the highest bid for each individual item is and who placed that bid. Each successive stage k = 2, 3, ... , m consists of an ascending price auction where the bidders must submit bidsets that cover all items but each one of the bids must be for k items or less. The bidders are allowed to use bids that other agents have placed in previous rounds when building their bidsets, thus allowing them to find better solutions. Also, any new bidset has to have a sum of bid prices which is bigger than that of the currently winning bidset. At the end of each stage k all agents know the best bid for every subset of size k or less. Also, at any point in time after stage 1 has ended there is a standing bidset whose value increases monotonically as new bidsets are submitted. Since in the final round all agents consider all possible bidsets, we know that the final winning bidset will be one such that no agent can propose a better bidset. Note, however, that this bidset is not guaranteed to be the one that maximizes revenue since we are using an ascending price auction so the winning bid for each set will be only slightly bigger than the second highest bid for the particular set of items. That is, the final prices will not be the same as the prices in a traditional combinatorial auction where all the bidders bid their true valuation. However, there remains the open question of whether the final distribution of items to bidders found in a PAUSE auction is the same as the revenue maximizing solution. Our test results provide an answer to this question. The PAUSE auction makes the job of the auctioneer very easy. All it has to do is to make sure that each new bidset has a revenue bigger than the current winning bidset, as well as make sure that every bid in an agent``s bidset that is not his does indeed correspond to some other agents'' previous bid. The computational problem shifts from one of winner determination to one of bid generation. Each agent must search over the space of all bidsets which contain at least one of its bids. The search is made easier by the fact that the agent needs to consider only the current best bids and only wants bidsets where its own utility is higher than in the current winning bidset. 
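The auctioneer's checks described above are mechanical, which is what makes the PAUSE rules easy to verify. The sketch below shows one way those checks might look in code; the bid representation, the requirement that bids within a bidset be disjoint, and the handling of the minimum increment epsilon are assumptions made for the example rather than details taken from the PAUSE specification.

```python
from collections import namedtuple

# A bid: the frozenset of items it covers, who placed it, and its price.
Bid = namedtuple("Bid", ["items", "agent", "value"])

def valid_bidset(proposer, bidset, all_items, best_bids, current_revenue, k, epsilon):
    """Check a bidset proposed at stage k against the PAUSE rules sketched above.
    `best_bids` maps frozensets of items to the best previously placed Bid."""
    covered = [item for b in bidset for item in b.items]
    if set(covered) != set(all_items) or len(covered) != len(set(covered)):
        return False      # must allocate every item, each at most once (assumption)
    if sum(b.value for b in bidset) < current_revenue + epsilon:
        return False      # total price must beat the standing bidset
    for b in bidset:
        if len(b.items) > k:
            return False  # at stage k, no bid may cover more than k items
        if b.agent != proposer:
            prev = best_bids.get(frozenset(b.items))
            if prev is None or prev.agent != b.agent or prev.value != b.value:
                return False  # another agent's bid must be one that agent actually placed
    return True
```

Because each check is local and cheap, every participant could run the same verification, which is what makes it plausible to push the auctioneer's role onto the bidders themselves.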
Each agent also has a clear incentive for performing this computation, namely, its utility only increases with each bidset it proposes (of course, it might decrease with the bidsets that others propose). Finally, the PAUSE auction has been shown to be envy-free in that at the conclusion of the auction no bidder would prefer to exchange his allocation with that of any other bidder [2]. We can even envision completely eliminating the auctioneer and, instead, have every agent perform the task of the auctioneer. That is, all bids are broadcast and when an agent receives a bid from another agent it updates the set of best bids and determines if the new bid is indeed better than the current winning bid. The agents would have an incentive to perform their computation as it will increase their expected utility. Also, any lies about other agents'' bids are easily found out by keeping track of the bids sent out by every agent (the set of best bids). Namely, the only one that can increase an agent``s bid value is the agent itself. Anyone claiming a higher value for some other agent is lying. The only thing missing is an algorithm that calculates the utility-maximizing bidset for each agent. 3. PROBLEM FORMULATION A bid b is composed of three elements bitems (the set of items the bid is over), bagent (the agent that placed the bid), and bvalue (the value or price of the bid). The agents maintain a set B of the current best bids, one for each set of items of size ≤ k, where k is the current stage. At any point in the auction, after the first round, there will also be a set W ⊆ B of currently winning bids. This is the set of bids that covers all the items and currently maximizes the revenue, where the revenue of W is given by r(W) = b∈W bvalue . (1) Agent i``s value function is given by vi(S) ∈ where S is a set of items. Given an agent``s value function and the current winning bidset W we can calculate the agent``s utility from W as ui(W) = b∈W | bagent=i vi(bitems ) − bvalue . (2) That is, the agent``s utility for a bidset W is the value it receives for the items it wins in W minus the price it must pay for those items. If the agent is not winning any items then its utility is zero. The goal of the bidding agents in the PAUSE auction is to maximize their utility, subject to the constraint that their next set of bids must have a total revenue that is at least bigger than the current revenue, where is the smallest increment allowed in the auction. Formally, given that W is the current winning bidset, agent i must find a g∗ i such that r(g∗ i ) ≥ r(W) + and g∗ i = arg max g⊆2B ui(g), (3) where each g is a set of bids that covers all items and ∀b∈g (b ∈ B) or (bagent = i and bvalue > B(bitems ) and size(bitems ) ≤ k), and where B(items) is the value of the bid in B for the set items (if there is no bid for those items it returns zero). That is, each bid b in g must satisfy at least one of the two following conditions. 1) b is already in B, 2) b is a bid of size ≤ k in which the agent i bids higher than the price for the same items in B. 4. BIDDING ALGORITHMS According to the PAUSE auction, during the first stage we have only several English auctions, with the bidders submitting bids on individual items. In this case, an agent``s dominant strategy is to bid higher than the current winning bid until it reaches its valuation for that particular item. Our algorithms focus on the subsequent stages: k > 1. When k > 1, agents have to find g∗ i . This can be done by performing a complete search on B. 
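Equations (1) and (2) above translate almost directly into code: revenue sums the prices in a bidset, and an agent's utility sums, over its own bids in the bidset, its valuation of the items minus the price it pays. The helpers below are a minimal sketch under the same assumed bid representation as the earlier validation example, with the value function given as a mapping from frozensets of items to values.

```python
from collections import namedtuple

Bid = namedtuple("Bid", ["items", "agent", "value"])  # items: frozenset of item ids

def revenue(bidset):
    """Equation (1): r(W) is the sum of the bid values in W."""
    return sum(b.value for b in bidset)

def utility(agent, bidset, valuation):
    """Equation (2): the agent's value for the items it wins in W minus what it
    pays for them; zero if the agent wins nothing. `valuation` maps frozensets
    of items to the agent's value for them (absent sets are worth 0)."""
    return sum(valuation.get(b.items, 0.0) - b.value
               for b in bidset if b.agent == agent)

# Hypothetical numbers: agent i wins {a, b} at price 7 but values the pair at 10.
W = [Bid(frozenset({"a", "b"}), "i", 7.0), Bid(frozenset({"c"}), "j", 4.0)]
print(revenue(W))                                      # 11.0
print(utility("i", W, {frozenset({"a", "b"}): 10.0}))  # 3.0
```

A complete search over B, as just mentioned, would enumerate candidate bidsets that cover all items and keep the one that maximizes this utility subject to r(g) >= r(W) + epsilon, per Equation (3).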
However, this approach is computationally expensive since it produces a large search tree. Our algorithms represent alternative approaches to overcome this expensive search. 4.1 The PAUSEBID Algorithm In the pausebid algorithm (shown in Figure 1) we implement some heuristics to prune the search tree. Given that bidders want to maximize their utility and that at any given point there are likely only a few bids within B which The Sixth Intl.. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 695 pausebid(i, k) 1 my-bids ← ∅ 2 their-bids ← ∅ 3 for b ∈ B 4 do if bagent = i or vi(bitems ) > bvalue 5 then my-bids ← my-bids +new Bid(bitems , i, vi(bitems )) 6 else their-bids ← their-bids +b 7 for S ∈ subsets of k or fewer items such that vi(S) > 0 and ¬∃b∈Bbitems = S 8 do my-bids ← my-bids +new Bid(S, i, vi(S)) 9 bids ← my-bids + their-bids 10 g∗ ← ∅ # Global variable 11 u∗ ← ui(W)# Global variable 12 pbsearch(bids, ∅) 13 surplus ← b∈g∗ | bagent=i bvalue − B(bitems ) 14 if surplus = 0 15 then return g∗ 16 my-payment ← vi(g∗ ) − u∗ 17 for b ∈ g∗ | bagent = i 18 do if my-payment ≤ 0 19 then bvalue ← B(bitems ) 20 else bvalue ← B(bitems ) + my-payment ·bvalue −B(bitems ) surplus 21 return g∗ Figure 1: The pausebid algorithm which implements a branch and bound search. i is the agent and k is the current stage of the auction, for k ≥ 2. the agent can dominate, we start by defining my-bids to be the list of bids for which the agent``s valuation is higher than the current best bid, as given in B. We set the value of these bids to be the agent``s true valuation (but we won``t necessarily be bidding true valuation, as we explain later). Similarly, we set their-bids to be the rest of the bids from B. Finally, the agent``s search list is simply the concatenation of my-bids and their-bids. Note that the agent``s own bids are placed first on the search list as this will enable us to do more pruning (pausebid lines 3 to 9). The agent can now perform a branch and bound search on the branch-on-bids tree produced by these bids. This branch and bound search is implemented by pbsearch (Figure 2). Our algorithm not only implements the standard bound but it also implements other pruning techniques in order to further reduce the size of the search tree. The bound we use is the maximum utility that the agent can expect to receive from a given set of bids. We call it u∗ . Initially, u∗ is set to ui(W) (pausebid line 11) since that is the utility the agent currently receives and any solution he proposes should give him more utility. If pbsearch ever comes across a partial solution where the maximum utility the agent can expect to receive is less than u∗ then that subtree is pruned (pbsearch line 21). Note that we can determine the maximum utility only after the algorithm has searched over all of the agent``s own bids (which are first on the list) because after that we know that the solution will not include any more bids where the agent is the winner thus the agent``s utility will no longer increase. 
For example, pbsearch(bids, g) 1 if bids = ∅ then return 2 b ← first(bids) 3 bids ← bids −b 4 g ← g + b 5 ¯Ig ← items not in g 6 if g does not contain a bid from i 7 then return 8 if g includes all items 9 then min-payment ← max(0, r(W) + − (r(g) − ri(g)), b∈g | bagent=i B(bitems )) 10 max-utility ← vi(g) − min-payment 11 if r(g) > r(W) and max-utility ≥ u∗ 12 then g∗ ← g 13 u∗ ← max-utility 14 pbsearch(bids, g − b) # b is Out 15 else max-revenue ← r(g) + max(h(¯Ig), hi(¯Ig)) 16 if max-revenue ≤ r(W) 17 then pbsearch(bids, g − b) # b is Out 18 elseif bagent = i 19 then min-payment ← (r(W) + ) −(r(g) − ri(g)) − h(¯Ig) 20 max-utility ← vi(g) − min-payment 21 if max-utility > u∗ 22 then pbsearch({x ∈ bids | xitems ∩ bitems = ∅}, g) # b is In 23 pbsearch(bids, g − b) # b is Out 24 else 25 pbsearch({x ∈ bids | xitems ∩ bitems = ∅}, g) # b is In 26 pbsearch(bids, g − b) # b is Out 27 return Figure 2: The pbsearch recursive procedure where bids is the set of available bids and g is the current partial solution. if an agent has only one bid in my-bids then the maximum utility he can expect is equal to his value for the items in that bid minus the minimum possible payment we can make for those items and still come up with a set of bids that has revenue greater than r(W). The calculation of the minimum payment is shown in line 19 for the partial solution case and line 9 for the case where we have a complete solution in pbsearch. Note that in order to calculate the min-payment for the partial solution case we need an upper bound on the payments that we must make for each item. This upper bound is provided by h(S) = s∈S max b∈B | s∈bitems bvalue size(bitems) . (4) This function produces a bound identical to the one used by the Bidtree algorithm-it merely assigns to each individual item in S a value equal to the maximum bid in B divided by the number of items in that bid. To prune the branches that cannot lead to a solution with revenue greater than the current W, the algorithm considers both the values of the bids in B and the valuations of the 696 The Sixth Intl.. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) agent. Similarly to (4) we define hi(S, k) = s∈S max S | size(S )≤k and s∈S and vi(S )>0 vi(S ) size(S ) (5) which assigns to each individual item s in S the maximum value produced by the valuation of S divided by the size of S , where S is a set for which the agent has a valuation greater than zero, contains s, and its size is less or equal than k. The algorithm uses the heuristics h and hi (lines 15 and 19 of pbsearch), to prune the just mentioned branches in the same way an A∗ algorithm uses its heuristic. A final pruning technique implemented by the algorithm is ignoring any branches where the agent has no bids in the current answer g and no more of the agent``s bids are in the list (pbsearch lines 6 and 7). The resulting g∗ found by pbsearch is thus the set of bids that has revenue bigger than r(W) and maximizes agent i``s utility. However, agent i``s bids in g∗ are still set to his own valuation and not to the lowest possible price. Lines 17 to 20 in pausebid are responsible for setting the agent``s payments so that it can achieve its maximum utility u∗ . If the agent has only one bid in g∗ then it is simply a matter of reducing the payment of that bid by u∗ from the current maximum of the agent``s true valuation. However, if the agent has more than one bid then we face the problem of how to distribute the agent``s payments among these bids. 
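Before the payment distribution is discussed further, it is worth pinning down the two per-item bounds defined in Equations (4) and (5), since they drive the pruning in pbsearch. The functions below are a direct, unoptimized reading of those definitions; the bid and valuation representations are the same assumed ones used in the earlier sketches.

```python
def h(S, bids):
    """Equation (4): credit each item s in S with the best value-per-item of
    any bid in B that contains s (0 if no bid contains s), and sum."""
    total = 0.0
    for s in S:
        per_item = [b.value / len(b.items) for b in bids if s in b.items]
        total += max(per_item, default=0.0)
    return total

def h_i(S, valuation, k):
    """Equation (5): like h, but using the agent's own positive valuations of
    item sets of size at most k that contain s."""
    total = 0.0
    for s in S:
        per_item = [v / len(T) for T, v in valuation.items()
                    if v > 0 and s in T and len(T) <= k]
        total += max(per_item, default=0.0)
    return total
```

As stated above, pbsearch uses these bounds the way an A* search uses its heuristic: max(h, h_i) over the still-uncovered items caps how much revenue the rest of the bidset can add.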
There are many ways of distributing the payments and there does not appear to be a dominant strategy for performing this distribution. We have chosen to distribute the payments in proportion to the agent``s true valuation for each set of items. pausebid assumes that the set of best bids B and the current best winning bidset W remains constant during its execution, and it returns the agent``s myopic utility-maximizing bidset (if there is one) using a branch and bound search. However it repeats the whole search at every stage. We can minimize this problem by caching the result of previous searches. 4.2 The CACHEDPAUSEBID Algorithm The cachedpausebid algorithm (shown in Figure 3) is our second approach to solve the bidding problem in the PAUSE auction. It is based in a cache table called C-Table where we store some solutions to avoid doing a complete search every time. The problem is the same; the agent i has to find g∗ i . We note that g∗ i is a bidset that contains at least one bid of the agent i. Let S be a set of items for which the agent i has a valuation such that vi(S) ≥ B(S) > 0, let gS i be a bidset over S such that r(gS i ) ≥ r(W) + and gS i = arg max g⊆2B ui(g), (6) where each g is a set of bids that covers all items and ∀b∈g (b ∈ B) or (bagent = i and bvalue > B(bitems )) and (∃b∈gbitems = S and bagent = i). That is, gS i is i``s best bidset for all items which includes a bid from i for all S items. In the PAUSE auction we cannot bid for sets of items with size greater than k. So, if we have for each set of items S for which vi(S) > 0 and size(S) ≤ k its corresponding gS i then g∗ i is the gS i that maximizes the agent``s utility. That is g∗ i = arg max {S | vi(S)>0∧size(S)≤k} ui(gS i ). (7) Each agent i implements a hash table C-Table such that C-Table[S] = gS for all S which vi(S) ≥ B(S) > 0. We can cachedpausebid(i, k, k-changed) 1 for each S in C-Table 2 do if vi(S) < B(S) 3 then remove S from C-Table 4 else if k-changed and size(S) = k 5 then B ← B + new Bid(i, S, vi(S)) 6 g∗ ← ∅ 7 u∗ ← ui(W) 8 for each S with size(S) ≤ k in C-Table 9 do ¯S ← Items − S 10 gS ← C-Table[S] # Global variable 11 min-payment ← max(r(W) + , b∈gS B(bitems )) 12 uS ← r(gS ) − min-payment # Global variable 13 if (k-changed and size(S) = k) or (∃b∈B bitems ⊆ ¯S and bagent = i) 14 then B ← {b ∈ B |bitems ⊆ ¯S} 15 bids ← B +{b ∈ B|bitems ⊆ ¯S and b /∈ B } 16 for b ∈ bids 17 do if vi(bitems ) > bvalue 18 then bagent ← i 19 bvalue ← vi(bitems ) 20 if k-changed and size(S) = k 21 then n ← size(bids) 22 uS ← 0 23 else n ← size(B ) 24 g ← ∅ + new Bid(S, i, vi(S)) 25 cpbsearch(bids, g, n) 26 C-Table[S] ← gS 27 if uS > u∗ and r(gS ) ≥ r(W) + 28 then surplus ← b∈gS | bagent=i bvalue − B(bitems ) 29 if surplus > 0 30 then my-payment ← vi(gS ) − ui(gS ) 31 for b ∈ gS | bagent = i 32 do if my-payment ≤ 0 33 then bvalue ← B(bitems ) 34 else bvalue ← B(bitems )+ my-payment ·bvalue −B(bitems ) surplus 35 u∗ ← ui(gS ) 36 g∗ ← gS 37 else if uS ≤ 0 and vi(S) < B(S) 38 then remove S from C-Table 39 return g∗ Figure 3: The cachedpausebid algorithm that implements a caching based search to find a bidset that maximizes the utility for the agent i. k is the current stage of the auction (for k ≥ 2), and k-changed is a boolean that is true right after the auction moved to the next stage. The Sixth Intl.. Joint Conf. 
on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 697 cpbsearch(bids, g, n) 1 if bids = ∅ or n ≤ 0 then return 2 b ← first(bids) 3 bids ← bids −b 4 g ← g + b 5 ¯Ig ← items not in g 6 if g includes all items 7 then min-payment ← max(0, r(W) + − (r(g) − ri(g)), b∈g | bagent=i B(bitems )) 8 max-utility ← vi(g) − min-payment 9 if r(g) > r(W) and max-utility ≥ uS 10 then gS ← g 11 uS ← max-utility 12 cpbsearch(bids, g − b, n − 1) # b is Out 13 else max-revenue ← r(g) + max(h(¯Ig), hi(¯Ig)) 14 if max-revenue ≤ r(W) 15 then cpbsearch(bids, g − b, n − 1) # b is Out 16 elseif bagent = i 17 then min-payment ← (r(W) + ) −(r(g) − ri(g)) − h(¯Ig) 18 max-utility ← vi(g) − min-payment 19 if max-utility > uS 20 then cpbsearch({x ∈ bids | xitems ∩ bitems = ∅}, g, n + 1) # b is In 21 cpbsearch(bids, g − b, n − 1) # b is Out 22 else 23 cpbsearch({x ∈ bids | xitems ∩ bitems = ∅}, g, n + 1) # b is In 24 cpbsearch(bids, g − b, n − 1) # b is Out 25 return Figure 4: The cpbsearch recursive procedure where bids is the set of available bids, g is the current partial solution and n is a value that indicates how deep in the list bids the algorithm has to search. then find g∗ by searching for the gS , stored in C-Table[S], that maximizes the agent``s utility, considering only the set of items S with size(S) ≤ k. The problem remains in maintaining the C-Table updated and avoiding to search every gS every time. cachedpausebid deals with this and other details. Let B be the set of bids that contains the new best bids, that is, B contains the bids recently added to B and the bids that have changed price (always higher), bidder, or both and were already in B. Let ¯S = Items − S be the complement of S (the set of items not included in S). cachedpausebid takes three parameters: i the agent, k the current stage of the auction, and k-changed a boolean that is true right after the auction moved to the next stage. Initially C-Table has one row or entry for each set S for which vi(S) > 0. We start by eliminating the entries corresponding to each set S for which vi(S) < B(S) from C-Table (line 3). Then, in the case that k-changed is true, for each set S with size(S) = k, we add to B a bid for that set with value equal to vi(S) and bidder agent i (line 5); this a bid that the agent is now allowed to consider. We then search for g∗ amongst the gS stored in C-Table, for this we only need to consider the sets with size(S) ≤ k (line 8). But how do we know that the gS in C-Table[S] is still the best solution for S? There are only two cases when we are not sure about that and we need to do a search to update C-Table[S]. These cases are: i) When k-changed is true and size(S) ≤ k, since there was no gS stored in C-Table for this S. ii) When there exists at least one bid in B for the set of items ¯S or a subset of it submitted by an agent different than i, since it is probable that this new bid can produce a solution better than the one stored in C-Table[S]. We handle the two cases mentioned above in lines 13 to 26 of cachedpausebid. In both of these cases, since gS must contain a bid for S we need to find a bidset that cover the missing items, that is ¯S. Thus, our search space consists of all the bids on B for the set of items ¯S or for a subset of it. We build the list bids that contains only those bids. However, we put the bids from B at the beginning of bids (line 14) since they are the ones that have changed. 
Then, we replace the bids in bids that have a price lower than the valuation the agent i has for those same items with a bid from agent i for those items and value equal to the agent``s valuation (lines 16-19). The recursive procedure cpbsearch, called in line 25 of cachedpausebid and shown in Figure 4, is the one that finds the new gS . cpbsearch is a slightly modified version of our branch and bound search implemented in pbsearch. The first modification is that it has a third parameter n that indicates how deep on the list bids we want to search, since it stops searching when n less or equal to zero and not only when the list bids is empty (line 1). Each time that there is a recursive call of cpbsearch n is decreased by one when a bid from bids is discarded or out (lines 12, 15, 21, and 24) and n remains the same otherwise (lines 20 and 23). We set the value of n before calling cpbsearch, to be the size of the list bids (cachedpausebid line 21) in case i), since we want cpbsearch to search over all bids; and we set n to be the number of bids from B included in bids (cachedpausebid line 23) in case ii), since we know that only the those first n bids in bids changed and can affect our current gS . Another difference with pbsearch is that the bound in cpbsearch is uS which we set to be 0 (cachedpausebid line 22) when in case i) and r(gS )−min-payment (cachedpausebid line 12) when in case ii). We call cpbsearch with g already containing a bid for S. After cpbsearch is executed we are sure that we have the right gS , so we store it in the corresponding C-Table[S] (cachedpausebid line 26). When we reach line 27 in cachedpausebid, we are sure that we have the right gS . However, agent i``s bids in gS are still set to his own valuation and not to the lowest possible price. If uS is greater than the current u∗ , lines 31 to 34 in cachedpausebid are responsible for setting the agent``s payments so that it can achieve its maximum utility uS . As in pausebid, we have chosen to distribute the payments in proportion to the agent``s true valuation for each set of items. In the case that uS less than or equal to zero and the valuation that the agent i has for the set of items S is lower than the current value of the bid in B for the same set of items, we remove the corresponding C-Table[S] since we know that is not worthwhile to keep it in the cache table (cachedpausebid line 38). The cachedpausebid function is called when k > 1 and returns the agent``s myopic utility-maximizing bidset, if there is one. It assumes that W and B remains constant during its execution. 698 The Sixth Intl.. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) generatevalues(i, items) 1 for x ∈ items 2 do vi(x) = expd(.01) 3 for n ← 1 ... (num-bids − items) 4 do s1, s2 ←Two random sets of items with values. 5 vi(s1 ∪ s2) = vi(s1) + vi(s2) + expd(.01) Figure 5: Algorithm for the generation of random value functions. expd(x) returns a random number taken from an exponential distribution with mean 1/x. 0 20 40 60 80 100 2 3 4 5 6 7 8 9 10 Number of Items CachedPauseBid 3 3 3 3 3 3 3 3 3 3 PauseBid + + + + + + + + + + Figure 6: Average percentage of convergence (y-axis), which is the percentage of times that our algorithms converge to the revenue-maximizing solution, as function of the number of items in the auction. 5. 
TEST AND COMPARISON We have implemented both algorithms and performed a series of experiments in order to determine how their solution compares to the revenue-maximizing solution and how their times compare with each other. In order to do our tests we had to generate value functions for the agents1 . The algorithm we used is shown in Figure 5. The type of valuations it generates correspond to domains where a set of agents must perform a set of tasks but there are cost savings for particular agents if they can bundle together certain subsets of tasks. For example, imagine a set of robots which must pick up and deliver items to different locations. Since each robot is at a different location and has different abilities, each one will have different preferences over how to bundle. Their costs for the item bundles are subadditive, which means that their preferences are superadditive. The first experiment we performed simply ensured the proper 1 Note that we could not use CATS [6] because it generates sets of bids for an indeterminate number of agents. It is as if you were told the set of bids placed in a combinatorial auction but not who placed each bid or even how many people placed bids, and then asked to determine the value function of every participant in the auction. 0 20 40 60 80 100 2 3 4 5 6 7 8 9 10 Number of Items CachedPauseBid 3 3 3 3 3 3 3 3 3 3 PauseBid + + + + + + + + + + Figure 7: Average percentage of revenue from our algorithms relative to maximum revenue (y-axis) as function of the number of items in the auction. functioning of our algorithms. We then compared the solutions found by both of them to the revenue-maximizing solution as found by CASS when given a set of bids that corresponds to the agents'' true valuation. That is, for each agent i and each set of items S for which vi(S) > 0 we generated a bid. This set of bids was fed to CASS which implements a centralized winner determination algorithm to find the solution which maximizes revenue. Note, however, that the revenue from the PAUSE auction on all the auctions is always smaller than the revenue of the revenue-maximizing solution when the agents bid their true valuations. Since PAUSE uses English auctions the final prices (roughly) represent the second-highest valuation, plus , for that set of items. We fixed the number of agents to be 5 and we experimented with different number of items, namely from 2 to 10. We ran both algorithms 100 times for each combination. When we compared the solutions of our algorithms to the revenue-maximizing solution, we realized that they do not always find the same distribution of items as the revenue-maximizing solution (as shown in Figure 6). The cases where our algorithms failed to arrive at the distribution of the revenue-maximizing solution are those where there was a large gap between the first and second valuation for a set (or sets) of items. If the revenue-maximizing solution contains the bid (or bids) using these higher valuation then it is impossible for the PAUSE auction to find this solution because that bid (those bids) is never placed. For example, if agent i has vi(1) = 1000 and the second highest valuation for (1) is only 10 then i only needs to place a bid of 11 in order to win that item. If the revenue-maximizing solution requires that 1 be sold for 1000 then that solution will never be found because that bid will never be placed. 
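For reference, the value-generation procedure of Figure 5, described at the start of this section, is short enough to restate in runnable form: each single item gets an exponentially distributed value, and further bundles are valued by merging two already-valued random sets and adding a superadditive bonus. How the two random sets are drawn (here, uniformly and possibly overlapping) and the num_bids parameter are assumptions where Figure 5 leaves the details open.

```python
import random

def expd(x):
    """A draw from an exponential distribution with mean 1/x, as in Figure 5."""
    return random.expovariate(x)

def generate_values(items, num_bids):
    """One agent's superadditive value function, following Figure 5's outline."""
    v = {frozenset({x}): expd(0.01) for x in items}    # value each single item
    for _ in range(num_bids - len(items)):
        s1, s2 = random.sample(list(v.keys()), 2)      # two random valued sets
        v[s1 | s2] = v[s1] + v[s2] + expd(0.01)        # the bundle is worth more
    return v

values = generate_values(["a", "b", "c", "d"], num_bids=8)
```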
We also found that average percentage of times that our algorithms converges to the revenue-maximizing solution decreases as the number of items increases. For 2 items is almost 100% but decreases a little bit less than 1 percent as the items increase, so that this average percentage of convergence is around 90% for 10 items. In a few instances our algorithms find different solutions this is due to the different The Sixth Intl.. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 699 1 10 100 1000 10000 2 3 4 5 6 7 8 9 10 Number of Items CachedPauseBid 3 3 3 3 3 3 3 3 3 PauseBid + + + + + + + + + + Figure 8: Average number of expanded nodes (y-axis) as function of items in the auction. ordering of the bids in the bids list which makes them search in different order. We know that the revenue generated by the PAUSE auction is generally lower than the revenue of the revenuemaximizing solution, but how much lower? To answer this question we calculated percentage representing the proportion of the revenue given by our algorithms relative to the revenue given by CASS. We found that the percentage of revenue of our algorithms increases in average 2.7% as the number of items increases, as shown in Figure 7. However, we found that cachedpausebid generates a higher revenue than pausebid (4.3% higher in average) except for auctions with 2 items where both have about the same percentage. Again, this difference is produced by the order of the search. In the case of 2 items both algorithms produce in average a revenue proportion of 67.4%, while in the other extreme (10 items), cachedpausebid produced in average a revenue proportion of 91.5% while pausebid produced in average a revenue proportion of 87.7%. The scalability of our algorithms can be determined by counting the number of nodes expanded in the search tree. For this we count the number of times that pbsearch gets invoked for each time that pausebid is called and the number of times that fastpausebidsearch gets invoked for each time that cachedpausebid, respectively for each of our algorithms. As expected since this is an NP-Hard problem, the number of expanded nodes does grow exponentially with the number of items (as shown in Figure 8). However, we found that cachedpausebid outperforms pausebid, since it expands in average less than half the number of nodes. For example, the average number of nodes expanded when 2 items is zero for cachedpausebid while for pausebid is 2; and in the other extreme (10 items) cachedpausebid expands in average only 633 nodes while pausebid expands in average 1672 nodes, a difference of more than 1000 nodes. Although the number of nodes expanded by our algorithms increases as function of the number of items, the actual number of nodes is a much smaller than the worst-case scenario of nn where n is the number of items. For example, for 10 items we expand slightly more than 103 nodes for the case of pausebid and less than that for the case of cachedpause0.1 1 10 100 1000 2 3 4 5 6 7 8 9 10 Number of Items CachedPauseBid 3 3 3 3 3 3 3 3 3 3 PauseBid + + + + + + + + + + Figure 9: Average time in seconds that takes to finish an auction (y-axis) as function of the number of items in the auction. bid which are much smaller numbers than 1010 . Notice also that our value generation algorithm (Figure 5) generates a number of bids that is exponential on the number of items, as might be expected in many situations. 
As such, these results do not support the conclusion that time grows exponentially with the number of items when the number of bids is independent of the number of items. We expect that both algorithms will grow exponentially as a function the number of bids, but stay roughly constant as the number of items grows. We wanted to make sure that less expanded nodes does indeed correspond to faster execution, especially since our algorithms execute different operations. We thus ran the same experiment with all the agents in the same machine, an Intel Centrino 2.0 GHz laptop PC with 1 GB of RAM and a 7200 RMP 60 GB hard drive, and calculated the average time that takes to finish an auction for each algorithm. As shown in Figure 9, cachedpausebid is faster than pausebid, the difference in execution speed is even more clear as the number of items increases. 6. RELATED WORK A lot of research has been done on various aspects of combinatorial auctions. We recommend [2] for a good review. However, the study of distributed winner determination algorithms for combinatorial auctions is still relatively new. One approach is given by the algorithms for distributing the winner determination problem in combinatorial auctions presented in [7], but these algorithms assume the computational entities are the items being sold and thus end up with a different type of distribution. The VSA algorithm [3] is another way of performing distributed winner determination in combinatorial auction but it assumes the bids themselves perform the computation. This algorithm also fails to converge to a solution for most cases. In [9] the authors present a distributed mechanism for calculating VCG payments in a mechanism design problem. Their mechanism roughly amounts to having each agent calculate the payments for two other agents and give these to a secure 700 The Sixth Intl.. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) central server which then checks to make sure results from all pairs agree, otherwise a re-calculation is ordered. This general idea, which they call the redundancy principle, could also be applied to our problem but it requires the existence of a secure center agent that everyone trusts. Another interesting approach is given in [8] where the bidding agents prioritize their bids, thus reducing the set of bids that the centralized winner determination algorithm must consider, making that problem easier. Finally, in the computation procuring clock auction [1] the agents are given an everincreasing percentage of the surplus achieved by their proposed solution over the current best. As such, it assumes the agents are impartial computational entities, not the set of possible buyers as assumed by the PAUSE auction. 7. CONCLUSIONS We believe that distributed solutions to the winner determination problem should be studied as they offer a better fit for some applications as when, for example, agents do not want to reveal their valuations to the auctioneer or when we wish to distribute the computational load among the bidders. The PAUSE auction is one of a few approaches to decentralize the winner determination problem in combinatorial auctions. With this auction, we can even envision completely eliminating the auctioneer and, instead, have every agent performe the task of the auctioneer. However, while PAUSE establishes the rules the bidders must obey, it does not tell us how the bidders should calculate their bids. 
We have presented two algorithms, PAUSEBID and CACHEDPAUSEBID, that bidder agents can use to engage in a PAUSE auction. Both algorithms implement a myopic utility-maximizing strategy that is guaranteed to find the bidset that maximizes the agent's utility given the set of outstanding best bids at any given time, without considering possible future bids. Both algorithms find, most of the time, the same distribution of items as the revenue-maximizing solution. The cases where our algorithms failed to arrive at that distribution are those where there was a large gap between the first and second valuation for a set (or sets) of items. As it is an NP-Hard problem, the running time of our algorithms remains exponential, but it is significantly better than a full search. PAUSEBID performs a branch and bound search completely from scratch each time it is invoked. CACHEDPAUSEBID caches partial solutions and performs a branch and bound search only on the few portions affected by the bids that changed between consecutive invocations. CACHEDPAUSEBID has better performance since it explores fewer nodes (less than half) and it is faster. As expected, the revenue generated by a PAUSE auction is lower than the revenue of a revenue-maximizing solution found by a centralized winner determination algorithm; however, we found that CACHEDPAUSEBID generates on average 4.7% higher revenue than PAUSEBID. We also found that the revenue generated by our algorithms increases as a function of the number of items in the auction. Our algorithms have shown that it is feasible to implement the complex coordination constraints supported by combinatorial auctions without having to resort to a centralized winner determination algorithm. Moreover, because of the design of the PAUSE auction, the agents in the auction also have an incentive to perform the required computation. Our bidding algorithms can be used by any multiagent system that would use combinatorial auctions for coordination but would rather not implement a centralized auctioneer.

8. REFERENCES
[1] P. J. Brewer. Decentralized computation procurement and computational robustness in a smart market. Economic Theory, 13(1):41-92, January 1999.
[2] P. Cramton, Y. Shoham, and R. Steinberg, editors. Combinatorial Auctions. MIT Press, 2006.
[3] Y. Fujishima, K. Leyton-Brown, and Y. Shoham. Taming the computational complexity of combinatorial auctions: Optimal and approximate approaches. In Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence, pages 548-553. Morgan Kaufmann Publishers Inc., 1999.
[4] F. Kelly and R. Steinberg. A combinatorial auction with multiple winners for universal service. Management Science, 46(4):586-596, 2000.
[5] A. Land, S. Powell, and R. Steinberg. PAUSE: A computationally tractable combinatorial auction. In Cramton et al. [2], chapter 6, pages 139-157.
[6] K. Leyton-Brown, M. Pearson, and Y. Shoham. Towards a universal test suite for combinatorial auction algorithms. In Proceedings of the 2nd ACM Conference on Electronic Commerce, pages 66-76. ACM Press, 2000. http://cats.stanford.edu.
[7] M. V. Narumanchi and J. M. Vidal. Algorithms for distributed winner determination in combinatorial auctions. In LNAI volume of AMEC/TADA. Springer, 2006.
[8] S. Park and M. H. Rothkopf. Auctions with endogenously determined allowable combinations. Technical report, Rutgers Center for Operations Research, January 2001. RRR 3-2001.
[9] D. C. Parkes and J. Shneidman. Distributed implementations of Vickrey-Clarke-Groves auctions. In Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems, pages 261-268. ACM, 2004.
[10] M. H. Rothkopf, A. Pekec, and R. M. Harstad. Computationally manageable combinational auctions. Management Science, 44(8):1131-1147, 1998.
[11] T. Sandholm. An algorithm for winner determination in combinatorial auctions. Artificial Intelligence, 135(1-2):1-54, February 2002.
[12] T. Sandholm, S. Suri, A. Gilpin, and D. Levine. CABOB: A fast optimal algorithm for winner determination in combinatorial auctions. Management Science, 51(3):374-391, 2005.
Bidding Algorithms for a Distributed Combinatorial Auction

ABSTRACT
Distributed allocation and multiagent coordination problems can be solved through combinatorial auctions. However, most of the existing winner determination algorithms for combinatorial auctions are centralized. The PAUSE auction is one of a few efforts to release the auctioneer from having to do all the work (it might even be possible to get rid of the auctioneer entirely). It is an increasing-price combinatorial auction that naturally distributes the problem of winner determination amongst the bidders in such a way that they have an incentive to perform the calculation. It can be used when we wish to distribute the computational load among the bidders or when the bidders do not wish to reveal their true valuations unless necessary. PAUSE establishes the rules the bidders must obey. However, it does not tell us how the bidders should calculate their bids. We have developed two bidding algorithms for the bidders in a PAUSE auction. Our algorithms always return the set of bids that maximizes the bidder's utility. Since the problem is NP-Hard, run time remains exponential in the number of items, but it is remarkably better than an exhaustive search. In this paper we present our bidding algorithms, discuss their virtues and drawbacks, and compare the solutions obtained by them to the revenue-maximizing solution found by a centralized winner determination algorithm.

1. INTRODUCTION
Both the research and practice of combinatorial auctions have grown rapidly in the past ten years. In a combinatorial auction bidders can place bids on combinations of items, called packages or bidsets, rather than just individual items. Once the bidders place their bids, it is necessary to find the allocation of items to bidders that maximizes the auctioneer's revenue. This problem, known as the winner determination problem, is a combinatorial optimization problem and is NP-Hard [10]. Nevertheless, several algorithms have been developed that perform satisfactorily on the problem sizes and structures occurring in practice. The practical applications of combinatorial auctions include the allocation of airport takeoff and landing time slots, procurement of freight transportation services, procurement of public transport services, and industrial procurement [2]. Because of this wide range of applications, one cannot hope for a general-purpose winner determination algorithm that can efficiently solve every instance of the problem. Thus, several approaches and algorithms have been proposed to address the winner determination problem. However, most of the existing winner determination algorithms for combinatorial auctions are centralized, meaning that they require all agents to send their bids to a centralized auctioneer who then determines the winners. Examples of these algorithms are CASS [3], Bidtree [11] and CABOB [12]. We believe that distributed solutions to the winner determination problem should be studied as they offer a better fit for some applications, such as when agents do not want to reveal their valuations to the auctioneer. The PAUSE (Progressive Adaptive User Selection Environment) auction [4, 5] is one of a few efforts to distribute the problem of winner determination amongst the bidders. PAUSE establishes the rules the participants have to adhere to so that the work is distributed amongst them. However, it is not concerned with how the bidders determine what they should bid.
In this paper we present two algorithms, PAUSEBID and CACHEDPAUSEBID, which enable agents in a PAUSE auction to find the bidset that maximizes their utility. Our algorithms implement a myopic utility-maximizing strategy and are guaranteed to find the bidset that maximizes the agent's utility given the outstanding best bids at a given time. PAUSEBID performs a branch and bound search completely from scratch every time that it is called. CACHEDPAUSEBID is a caching-based algorithm which explores fewer nodes, since it caches some solutions.

2. THE PAUSE AUCTION
A PAUSE auction for m items has m stages. Stage 1 consists of simultaneous ascending-price open-cry auctions, and during this stage the bidders can only place bids on individual items. At the end of this stage we know the highest bid for each individual item and who placed it. Each successive stage k = 2, 3, ..., m consists of an ascending-price auction where the bidders must submit bidsets that cover all items, but each one of the bids must be for k items or fewer. The bidders are allowed to use bids that other agents have placed in previous rounds when building their bidsets, thus allowing them to find better solutions. Also, any new bidset has to have a sum of bid prices which is bigger than that of the currently winning bidset. At the end of each stage k all agents know the best bid for every subset of size k or less. Also, at any point in time after stage 1 has ended there is a standing bidset whose value increases monotonically as new bidsets are submitted. Since in the final round all agents consider all possible bidsets, we know that the final winning bidset will be one such that no agent can propose a better bidset. Note, however, that this bidset is not guaranteed to be the one that maximizes revenue, since we are using an ascending-price auction, so the winning bid for each set will be only slightly bigger than the second-highest bid for that particular set of items. That is, the final prices will not be the same as the prices in a traditional combinatorial auction where all the bidders bid their true valuation. However, there remains the open question of whether the final distribution of items to bidders found in a PAUSE auction is the same as in the revenue-maximizing solution. Our test results provide an answer to this question. The PAUSE auction makes the job of the auctioneer very easy. All it has to do is make sure that each new bidset has a revenue bigger than the current winning bidset, and that every bid in an agent's bidset that is not the agent's own does indeed correspond to some other agent's previous bid (a small sketch of this check appears below). The computational problem shifts from one of winner determination to one of bid generation. Each agent must search over the space of all bidsets which contain at least one of its bids. The search is made easier by the fact that the agent needs to consider only the current best bids and only wants bidsets where its own utility is higher than in the current winning bidset. Each agent also has a clear incentive for performing this computation, namely, its utility only increases with each bidset it proposes (of course, it might decrease with the bidsets that others propose). Finally, the PAUSE auction has been shown to be envy-free in that at the conclusion of the auction no bidder would prefer to exchange his allocation with that of any other bidder [2]. We can even envision completely eliminating the auctioneer and, instead, having every agent perform the task of the auctioneer.
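To make the auctioneer's (or, in the fully distributed variant, each agent's) validation duty concrete, the following Python sketch checks a proposed bidset against the current best bids B and the winning bidset W. The Bid structure and the function names are our own illustration of the rules just described, not code from the PAUSE literature.

    from dataclasses import dataclass
    from typing import Dict, FrozenSet, List

    @dataclass(frozen=True)
    class Bid:
        items: FrozenSet[str]   # the set of items the bid covers
        agent: str              # the agent that placed the bid
        value: float            # the price offered for the whole set

    def revenue(bidset: List[Bid]) -> float:
        return sum(b.value for b in bidset)

    def valid_bidset(proposer: str, g: List[Bid], all_items: FrozenSet[str],
                     B: Dict[FrozenSet[str], Bid], W: List[Bid],
                     k: int, epsilon: float) -> bool:
        """Check the PAUSE rules for a bidset g proposed at stage k."""
        covered = frozenset().union(*[b.items for b in g]) if g else frozenset()
        if covered != all_items:                  # the bidset must cover every item
            return False
        if revenue(g) < revenue(W) + epsilon:     # and beat the current winner
            return False
        for b in g:
            if b.agent == proposer:
                if len(b.items) > k:              # own bids limited to size k at stage k
                    return False
            else:                                 # others' bids must already be in B
                prev = B.get(b.items)
                if prev is None or prev.agent != b.agent or prev.value != b.value:
                    return False
        return True

In the auctioneer-free variant, every agent would run this same check on each broadcast bidset.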
That is, all bids are broadcast, and when an agent receives a bid from another agent it updates the set of best bids and determines whether the new bid is indeed better than the current winning bid. The agents would have an incentive to perform their computation as it will increase their expected utility. Also, any lies about other agents' bids are easily found out by keeping track of the bids sent out by every agent (the set of best bids). Namely, the only one that can increase an agent's bid value is the agent itself; anyone claiming a higher value for some other agent is lying. The only thing missing is an algorithm that calculates the utility-maximizing bidset for each agent.

3. PROBLEM FORMULATION
A bid b is composed of three elements: b_items (the set of items the bid is over), b_agent (the agent that placed the bid), and b_value (the value or price of the bid). The agents maintain a set B of the current best bids, one for each set of items of size ≤ k, where k is the current stage. At any point in the auction, after the first round, there will also be a set W ⊆ B of currently winning bids. This is the set of bids that covers all the items and currently maximizes the revenue, where the revenue of W is given by

    r(W) = Σ_{b ∈ W} b_value.

Agent i's value function is given by v_i(S) ∈ ℝ, where S is a set of items. Given an agent's value function and the current winning bidset W, we can calculate the agent's utility from W as

    u_i(W) = Σ_{b ∈ W : b_agent = i} ( v_i(b_items) − b_value ).

That is, the agent's utility for a bidset W is the value it receives for the items it wins in W minus the price it must pay for those items. If the agent is not winning any items then its utility is zero. The goal of the bidding agents in the PAUSE auction is to maximize their utility, subject to the constraint that their next set of bids must have a total revenue that is at least ε bigger than the current revenue, where ε is the smallest increment allowed in the auction. Formally, given that W is the current winning bidset, agent i must find a g*_i such that r(g*_i) ≥ r(W) + ε and

    g*_i = argmax_g u_i(g),

where each g is a set of bids that covers all items and, for every b ∈ g, either (b ∈ B) or (b_agent = i and b_value > B(b_items) and size(b_items) ≤ k), and where B(items) is the value of the bid in B for the set items (if there is no bid for those items it returns zero). That is, each bid b in g must satisfy at least one of the following two conditions: 1) b is already in B, or 2) b is a bid of size ≤ k in which agent i bids higher than the price for the same items in B.
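Rendered as code, these two quantities are tiny; the sketch below assumes bid objects with items, agent and value fields as in the earlier sketch, and a valuation function v(i, S) (our own notation).

    # Revenue of a bidset W: the sum of its bid prices.
    def r(W):
        return sum(b.value for b in W)

    # Agent i's utility from W: the value of the items i wins in W minus the
    # prices i pays for them; it is zero if i wins nothing in W.
    def u(i, W, v):
        return sum(v(i, b.items) - b.value for b in W if b.agent == i)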
4. BIDDING ALGORITHMS
According to the PAUSE auction, during the first stage we have only several English auctions, with the bidders submitting bids on individual items. In this case, an agent's dominant strategy is to bid ε higher than the current winning bid until it reaches its valuation for that particular item. Our algorithms focus on the subsequent stages, k > 1. When k > 1, agents have to find g*_i. This can be done by performing a complete search on B; however, this approach is computationally expensive since it produces a large search tree. Our algorithms represent alternative approaches that overcome this expensive search.

4.1 The PAUSEBID Algorithm
In the PAUSEBID algorithm (shown in Figure 1) we implement some heuristics to prune the search tree.

Figure 1: The PAUSEBID algorithm, which implements a branch and bound search. i is the agent and k is the current stage of the auction, for k ≥ 2.

Given that bidders want to maximize their utility and that at any given point there are likely only a few bids within B which the agent can dominate, we start by defining my-bids to be the list of bids for which the agent's valuation is higher than the current best bid, as given in B. We set the value of these bids to be the agent's true valuation (but we will not necessarily be bidding true valuation, as we explain later). Similarly, we set their-bids to be the rest of the bids from B. Finally, the agent's search list is simply the concatenation of my-bids and their-bids. Note that the agent's own bids are placed first on the search list, as this will enable us to do more pruning (PAUSEBID lines 3 to 9). The agent can now perform a branch and bound search on the branch-on-bids tree produced by these bids. This branch and bound search is implemented by PBSEARCH (Figure 2).

Figure 2: The PBSEARCH recursive procedure, where bids is the set of available bids and g is the current partial solution.

Our algorithm not only implements the standard bound but also other pruning techniques in order to further reduce the size of the search tree. The bound we use is the maximum utility that the agent can expect to receive from a given set of bids; we call it u*. Initially, u* is set to u_i(W) (PAUSEBID line 11), since that is the utility the agent currently receives and any solution it proposes should give it more utility. If PBSEARCH ever comes across a partial solution where the maximum utility the agent can expect to receive is less than u*, then that subtree is pruned (PBSEARCH line 21). Note that we can determine the maximum utility only after the algorithm has searched over all of the agent's own bids (which are first on the list), because after that we know that the solution will not include any more bids where the agent is the winner, so the agent's utility will no longer increase. For example, if an agent has only one bid in my-bids, then the maximum utility it can expect is equal to its value for the items in that bid minus the minimum possible payment it can make for those items while still producing a set of bids that has revenue greater than r(W). The calculation of the minimum payment is shown in line 19 for the partial-solution case and line 9 for the case where we have a complete solution in PBSEARCH. Note that in order to calculate the min-payment for the partial-solution case we need an upper bound on the payments that we must make for each item. This upper bound is provided by

    h(S) = Σ_{s ∈ S} max { b_value / size(b_items) : b ∈ B, s ∈ b_items }.

This function produces a bound identical to the one used by the Bidtree algorithm: it merely assigns to each individual item in S a value equal to the maximum bid in B containing that item divided by the number of items in that bid. To prune the branches that cannot lead to a solution with revenue greater than the current W, the algorithm considers both the values of the bids in B and the agent's own valuations, via

    h_i(S) = Σ_{s ∈ S} max { v_i(S′) / size(S′) : s ∈ S′, v_i(S′) > 0, size(S′) ≤ k },

which assigns to each individual item s in S the maximum value produced by the valuation of a set S′ divided by the size of S′, where S′ is a set for which the agent has a valuation greater than zero, contains s, and has size less than or equal to k. The algorithm uses the heuristics h and h_i (lines 15 and 19 of PBSEARCH) to prune these branches, in the same way an A* algorithm uses its heuristic. A final pruning technique implemented by the algorithm is ignoring any branches where the agent has no bids in the current answer g and no more of the agent's bids are in the list (PBSEARCH lines 6 and 7).
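A minimal sketch of these two item-wise bounds, under the same illustrative data structures as before (B maps item sets to their current best Bid, valuations maps item sets to v_i):

    # h(S): credit each item in S with the best per-item price of any bid in B
    # that contains it (the Bidtree-style bound described above).
    def h(S, B):
        total = 0.0
        for s in S:
            per_item = [b.value / len(b.items) for b in B.values() if s in b.items]
            total += max(per_item, default=0.0)
        return total

    # h_i(S): the analogous bound built from agent i's own valuations over sets
    # of size at most k that contain the item.
    def h_i(S, valuations, k):
        total = 0.0
        for s in S:
            per_item = [v / len(items) for items, v in valuations.items()
                        if s in items and v > 0 and len(items) <= k]
            total += max(per_item, default=0.0)
        return total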
The resulting g* found by PBSEARCH is thus the set of bids that has revenue bigger than r(W) and maximizes agent i's utility. However, agent i's bids in g* are still set to its own valuation and not to the lowest possible price. Lines 17 to 20 in PAUSEBID are responsible for setting the agent's payments so that it can achieve its maximum utility u*. If the agent has only one bid in g*, then it is simply a matter of reducing the payment of that bid by u* from the current maximum of the agent's true valuation. However, if the agent has more than one bid, then we face the problem of how to distribute the agent's payments among these bids. There are many ways of distributing the payments, and there does not appear to be a dominant strategy for performing this distribution. We have chosen to distribute the payments in proportion to the agent's true valuation for each set of items. PAUSEBID assumes that the set of best bids B and the current winning bidset W remain constant during its execution, and it returns the agent's myopic utility-maximizing bidset (if there is one) using a branch and bound search. However, it repeats the whole search at every stage. We can minimize this problem by caching the results of previous searches.

4.2 The CACHEDPAUSEBID Algorithm
The CACHEDPAUSEBID algorithm (shown in Figure 3) is our second approach to solving the bidding problem in the PAUSE auction. It is based on a cache table called C-Table, in which we store some solutions in order to avoid doing a complete search every time.

Figure 3: The CACHEDPAUSEBID algorithm, which implements a caching-based search to find a bidset that maximizes the utility for agent i. k is the current stage of the auction (for k ≥ 2), and k-changed is a boolean that is true right after the auction moved to the next stage.

Figure 4: The CPBSEARCH recursive procedure, where bids is the set of available bids, g is the current partial solution, and n is a value that indicates how deep in the list bids the algorithm has to search.

The problem is the same: agent i has to find g*_i. We note that g*_i is a bidset that contains at least one bid of agent i. Let S be a set of items for which agent i has a valuation such that v_i(S) ≥ B(S) > 0, and let g^S_i be a bidset such that r(g^S_i) ≥ r(W) + ε, where each such g is a set of bids that covers all items, for every b ∈ g either (b ∈ B) or (b_agent = i and b_value > B(b_items)), and there exists a b ∈ g with b_items = S and b_agent = i. That is, g^S_i is i's best bidset over all items which includes a bid from i for exactly the set S. In the PAUSE auction we cannot bid for sets of items with size greater than k. So, if we have, for each set of items S for which v_i(S) > 0 and size(S) ≤ k, its corresponding g^S_i, then g*_i is the g^S_i that maximizes the agent's utility. That is,

    g*_i = argmax_{S : v_i(S) > 0, size(S) ≤ k} u_i(g^S_i).

We then find g' by searching for the g^S, stored in C-Table[S], that maximizes the agent's utility, considering only the sets of items S with size(S) < k. The challenge lies in keeping C-Table up to date while avoiding a search over every g^S every time; CACHEDPAUSEBID deals with this and other details. Let B′ be the set of bids that contains the new best bids, that is, B′ contains the bids recently added to B and the bids that were already in B but have changed price (always higher), bidder, or both. Let S̄ = Items − S be the complement of S (the set of items not included in S). CACHEDPAUSEBID takes three parameters: i, the agent; k, the current stage of the auction; and k-changed, a boolean that is true right after the auction moved to the next stage.
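Putting the selection and payment-setting steps together, a compact sketch (reusing the illustrative helpers from the earlier sketches, with our own function names) might look like this:

    # Pick g*_i: the cached g^S with size(S) <= k that gives agent i the most
    # utility, then discount i's own bids in it, in proportion to i's true
    # valuations, so that the total discount equals the surplus u*.
    def choose_g_star(i, c_table, v, k, u):
        candidates = [(u(i, gS, v), gS) for S, gS in c_table.items() if len(S) <= k]
        if not candidates:
            return None
        best_utility, best_g = max(candidates, key=lambda c: c[0])
        return best_g if best_utility > 0 else None

    def discount_own_bids(i, g_star, v, surplus):
        mine = [b for b in g_star if b.agent == i]
        total = sum(v(i, b.items) for b in mine)
        if total <= 0:
            return {}
        return {b.items: v(i, b.items) - surplus * v(i, b.items) / total
                for b in mine}

The proportional split is just the scheme chosen in the text; any split that lowers the agent's payments by exactly the surplus would satisfy the revenue constraint.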
Initially C-Table has one row (entry) for each set S for which v_i(S) > 0. We start by eliminating from C-Table the entries corresponding to each set S for which v_i(S) < B(S) (line 3). Then, in the case that k-changed is true, for each set S with size(S) = k we add to B′ a bid for that set with value equal to v_i(S) and bidder agent i (line 5); this is a bid that the agent is now allowed to consider. We then search for g' amongst the g^S stored in C-Table; for this we only need to consider the sets with size(S) < k (line 8). But how do we know that the g^S in C-Table[S] is still the best solution for S? There are only two cases in which we cannot be sure of that and need to do a search to update C-Table[S]: i) when k-changed is true and size(S) < k, since there was no g^S stored in C-Table for this S; and ii) when there exists at least one bid in B′, over items in S̄, submitted by an agent other than i, since it is probable that this new bid can produce a solution better than the one stored in C-Table[S]. We handle these two cases in lines 13 to 26 of CACHEDPAUSEBID. In both cases, since g^S must contain a bid for S, we need to find a bidset that covers the missing items, that is, S̄. Thus, our search space consists of all the bids in B for the set of items S̄ or for subsets of it. We build the list bids, which contains only those bids; however, we put the bids from B′ at the beginning of bids (line 14) since they are the ones that have changed. Then, we replace the bids in bids that have a price lower than the valuation agent i has for those same items with a bid from agent i for those items, with value equal to the agent's valuation (lines 16-19). The recursive procedure CPBSEARCH, called in line 25 of CACHEDPAUSEBID and shown in Figure 4, is the one that finds the new g^S. CPBSEARCH is a slightly modified version of the branch and bound search implemented in PBSEARCH. The first modification is that it has a third parameter n that indicates how deep into the list bids we want to search: it stops searching when n is less than or equal to zero, not only when the list bids is empty (line 1). On each recursive call of CPBSEARCH, n is decreased by one when a bid from bids is discarded or taken out (lines 12, 15, 21, and 24), and n remains the same otherwise (lines 20 and 23). We set the value of n before calling CPBSEARCH: in case i) we set it to the size of the list bids (CACHEDPAUSEBID line 21), since we want CPBSEARCH to search over all bids; in case ii) we set it to the number of bids from B′ included in bids (CACHEDPAUSEBID line 23), since we know that only those first n bids in bids changed and can affect our current g^S. Another difference from PBSEARCH is that the bound in CPBSEARCH is u^S, which we set to 0 (CACHEDPAUSEBID line 22) in case i) and to r(g^S) − min-payment (CACHEDPAUSEBID line 12) in case ii). We call CPBSEARCH with g already containing a bid for S. After CPBSEARCH is executed we are sure that we have the right g^S, so we store it in the corresponding C-Table[S] (CACHEDPAUSEBID line 26). When we reach line 27 in CACHEDPAUSEBID we are sure that we have the right g^S. However, agent i's bids in g^S are still set to its own valuation and not to the lowest possible price. If u^S is greater than the current u', lines 31 to 34 in CACHEDPAUSEBID are responsible for setting the agent's payments so that it can achieve its maximum utility u^S. As in PAUSEBID, we have chosen to distribute the payments in proportion to the agent's true valuation for each set of items.
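The refresh rule for a single cache entry, stated as code (an illustrative summary of cases i and ii above, not the authors' pseudocode):

    # Must the cached solution for S be recomputed at this point in the auction?
    def needs_refresh(i, S, k_changed, cached_gS, B_prime, all_items):
        S_bar = all_items - S                   # the items a g^S must still cover
        if k_changed and cached_gS is None:     # case i): nothing cached yet for S
            return True
        # case ii): another agent added or improved a bid over items in S-bar,
        # which might allow a better g^S than the cached one.
        return any(b.agent != i and b.items <= S_bar for b in B_prime)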
In the case that u^S is less than or equal to zero and the valuation that agent i has for the set of items S is lower than the current value of the bid in B for the same set of items, we remove the corresponding entry C-Table[S], since we know it is not worthwhile to keep it in the cache table (CACHEDPAUSEBID line 38). The CACHEDPAUSEBID function is called when k > 1 and returns the agent's myopic utility-maximizing bidset, if there is one. It assumes that W and B remain constant during its execution.

5. TEST AND COMPARISON
We have implemented both algorithms and performed a series of experiments in order to determine how their solutions compare to the revenue-maximizing solution and how their running times compare with each other. In order to do our tests we had to generate value functions for the agents. (Note that we could not use CATS [6] because it generates sets of bids for an indeterminate number of agents. It is as if you were told the set of bids placed in a combinatorial auction, but not who placed each bid or even how many people placed bids, and then were asked to determine the value function of every participant in the auction.) The algorithm we used is shown in Figure 5.

Figure 5: Algorithm for the generation of random value functions. EXPD(x) returns a random number taken from an exponential distribution with mean 1/x.

The type of valuations it generates corresponds to domains where a set of agents must perform a set of tasks but there are cost savings for particular agents if they can bundle together certain subsets of tasks. For example, imagine a set of robots which must pick up and deliver items to different locations. Since each robot is at a different location and has different abilities, each one will have different preferences over how to bundle. Their costs for the item bundles are subadditive, which means that their preferences are superadditive.
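We do not reproduce Figure 5 here; as a rough illustration of the kind of generator it describes (exponential draws via EXPD and superadditive bundle values), one plausible sketch, under those assumptions only, is:

    import itertools
    import random

    def expd(x):
        # EXPD(x): a draw from an exponential distribution with mean 1/x,
        # as in the caption of Figure 5.
        return random.expovariate(x)

    def random_value_function(items, k_max, rate=1.0):
        # One plausible superadditive value function in the spirit described
        # in the text (not the authors' exact procedure): every singleton gets
        # a base value, and each larger bundle gets the sum of its singletons
        # plus a positive bonus, so preferences are superadditive.
        v = {frozenset([s]): expd(rate) for s in items}
        for size in range(2, k_max + 1):
            for bundle in itertools.combinations(items, size):
                bundle = frozenset(bundle)
                base = sum(v[frozenset([s])] for s in bundle)
                v[bundle] = base + expd(rate)
        return v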
The first experiment we performed simply ensured the proper functioning of our algorithms. We then compared the solutions found by both of them to the revenue-maximizing solution as found by CASS when given a set of bids that corresponds to the agents' true valuations. That is, for each agent i and each set of items S for which v_i(S) > 0 we generated a bid. This set of bids was fed to CASS, which implements a centralized winner determination algorithm, to find the solution which maximizes revenue. Note, however, that the revenue from the PAUSE auction is always smaller than the revenue of the revenue-maximizing solution when the agents bid their true valuations: since PAUSE uses English auctions, the final prices (roughly) represent the second-highest valuation, plus ε, for each set of items. We fixed the number of agents at 5 and experimented with different numbers of items, namely from 2 to 10. We ran both algorithms 100 times for each combination.

Figure 6: Average percentage of convergence (y-axis), which is the percentage of times that our algorithms converge to the revenue-maximizing solution, as a function of the number of items in the auction.

When we compared the solutions of our algorithms to the revenue-maximizing solution, we realized that they do not always find the same distribution of items as the revenue-maximizing solution (as shown in Figure 6). The cases where our algorithms failed to arrive at the distribution of the revenue-maximizing solution are those where there was a large gap between the first and second valuation for a set (or sets) of items. If the revenue-maximizing solution contains the bid (or bids) using these higher valuations, then it is impossible for the PAUSE auction to find this solution, because that bid (or those bids) is never placed. For example, if agent i has v_i(1) = 1000 and the second-highest valuation for item 1 is only 10, then i only needs to place a bid of 11 in order to win that item. If the revenue-maximizing solution requires that item 1 be sold for 1000, then that solution will never be found because that bid will never be placed. We also found that the average percentage of times that our algorithms converge to the revenue-maximizing solution decreases as the number of items increases: for 2 items it is almost 100%, and it decreases by a little less than 1 percentage point with each additional item, so that the average percentage of convergence is around 90% for 10 items. In a few instances our algorithms find different solutions; this is due to the different ordering of the bids in the bids list, which makes them search in a different order.

Figure 7: Average percentage of revenue from our algorithms relative to the maximum revenue (y-axis) as a function of the number of items in the auction.

We know that the revenue generated by the PAUSE auction is generally lower than the revenue of the revenue-maximizing solution, but how much lower? To answer this question we calculated the percentage representing the proportion of the revenue given by our algorithms relative to the revenue given by CASS. We found that the percentage of revenue of our algorithms increases on average by 2.7% with each additional item, as shown in Figure 7. However, we found that CACHEDPAUSEBID generates a higher revenue than PAUSEBID (4.3% higher on average), except for auctions with 2 items, where both have about the same percentage. Again, this difference is produced by the order of the search. In the case of 2 items both algorithms produce on average a revenue proportion of 67.4%, while at the other extreme (10 items) CACHEDPAUSEBID produced on average a revenue proportion of 91.5% and PAUSEBID produced on average a revenue proportion of 87.7%.

Figure 8: Average number of expanded nodes (y-axis) as a function of the number of items in the auction.

The scalability of our algorithms can be determined by counting the number of nodes expanded in the search tree. For this we count the number of times that PBSEARCH gets invoked for each call to PAUSEBID and the number of times that CPBSEARCH gets invoked for each call to CACHEDPAUSEBID. As expected, since this is an NP-Hard problem, the number of expanded nodes does grow exponentially with the number of items (as shown in Figure 8). However, we found that CACHEDPAUSEBID outperforms PAUSEBID, since it expands on average less than half the number of nodes. For example, the average number of nodes expanded for 2 items is zero for CACHEDPAUSEBID while for PAUSEBID it is 2; at the other extreme (10 items), CACHEDPAUSEBID expands on average only 633 nodes while PAUSEBID expands on average 1672 nodes, a difference of more than 1000 nodes.
Although the number of nodes expanded by our algorithms increases as a function of the number of items, the actual number of nodes is much smaller than the worst-case scenario of n^n, where n is the number of items. For example, for 10 items we expand slightly more than 10^3 nodes in the case of PAUSEBID, and fewer than that in the case of CACHEDPAUSEBID, which are much smaller numbers than 10^10. Notice also that our value generation algorithm (Figure 5) generates a number of bids that is exponential in the number of items, as might be expected in many situations. As such, these results do not support the conclusion that time grows exponentially with the number of items when the number of bids is independent of the number of items. We expect that both algorithms will grow exponentially as a function of the number of bids, but stay roughly constant as the number of items grows.

Figure 9: Average time in seconds that it takes to finish an auction (y-axis) as a function of the number of items in the auction.

We also wanted to make sure that expanding fewer nodes does indeed correspond to faster execution, especially since our algorithms execute different operations. We thus ran the same experiment with all the agents on the same machine, an Intel Centrino 2.0 GHz laptop PC with 1 GB of RAM and a 7200 RPM 60 GB hard drive, and calculated the average time it takes to finish an auction for each algorithm. As shown in Figure 9, CACHEDPAUSEBID is faster than PAUSEBID, and the difference in execution speed becomes even clearer as the number of items increases.

6. RELATED WORK
A lot of research has been done on various aspects of combinatorial auctions. We recommend [2] for a good review. However, the study of distributed winner determination algorithms for combinatorial auctions is still relatively new. One approach is given by the algorithms for distributing the winner determination problem in combinatorial auctions presented in [7], but these algorithms assume the computational entities are the items being sold and thus end up with a different type of distribution. The VSA algorithm [3] is another way of performing distributed winner determination in combinatorial auctions, but it assumes the bids themselves perform the computation. This algorithm also fails to converge to a solution in most cases. In [9] the authors present a distributed mechanism for calculating VCG payments in a mechanism design problem. Their mechanism roughly amounts to having each agent calculate the payments for two other agents and give these to a secure central server, which then checks to make sure the results from all pairs agree; otherwise a re-calculation is ordered. This general idea, which they call the redundancy principle, could also be applied to our problem, but it requires the existence of a secure center agent that everyone trusts. Another interesting approach is given in [8], where the bidding agents prioritize their bids, thus reducing the set of bids that the centralized winner determination algorithm must consider and making that problem easier. Finally, in the computation-procuring clock auction [1] the agents are given an ever-increasing percentage of the surplus achieved by their proposed solution over the current best. As such, it assumes the agents are impartial computational entities, not the set of possible buyers as assumed by the PAUSE auction.
7. CONCLUSIONS
We believe that distributed solutions to the winner determination problem should be studied as they offer a better fit for some applications, such as when agents do not want to reveal their valuations to the auctioneer or when we wish to distribute the computational load among the bidders. The PAUSE auction is one of a few approaches to decentralize the winner determination problem in combinatorial auctions. With this auction, we can even envision completely eliminating the auctioneer and, instead, having every agent perform the task of the auctioneer. However, while PAUSE establishes the rules the bidders must obey, it does not tell us how the bidders should calculate their bids. We have presented two algorithms, PAUSEBID and CACHEDPAUSEBID, that bidder agents can use to engage in a PAUSE auction. Both algorithms implement a myopic utility-maximizing strategy that is guaranteed to find the bidset that maximizes the agent's utility given the set of outstanding best bids at any given time, without considering possible future bids. Both algorithms find, most of the time, the same distribution of items as the revenue-maximizing solution. The cases where our algorithms failed to arrive at that distribution are those where there was a large gap between the first and second valuation for a set (or sets) of items. As it is an NP-Hard problem, the running time of our algorithms remains exponential, but it is significantly better than a full search. PAUSEBID performs a branch and bound search completely from scratch each time it is invoked. CACHEDPAUSEBID caches partial solutions and performs a branch and bound search only on the few portions affected by the bids that changed between consecutive invocations. CACHEDPAUSEBID has better performance since it explores fewer nodes (less than half) and it is faster. As expected, the revenue generated by a PAUSE auction is lower than the revenue of a revenue-maximizing solution found by a centralized winner determination algorithm; however, we found that CACHEDPAUSEBID generates on average 4.7% higher revenue than PAUSEBID. We also found that the revenue generated by our algorithms increases as a function of the number of items in the auction. Our algorithms have shown that it is feasible to implement the complex coordination constraints supported by combinatorial auctions without having to resort to a centralized winner determination algorithm. Moreover, because of the design of the PAUSE auction, the agents in the auction also have an incentive to perform the required computation. Our bidding algorithms can be used by any multiagent system that would use combinatorial auctions for coordination but would rather not implement a centralized auctioneer.